Meta Platforms (META) PX14A6G: Letter to shareholders
May 23, 2024
A message to META investors: Vote FOR on Proposal Six “Report on Generative Artificial Intelligence Misinformation and Disinformation Risks”
Our proposal requests that the Board issue a report assessing the risks to the Company’s operations and finances, and to public welfare, presented by the Company’s role in facilitating misinformation and disinformation disseminated or generated via generative Artificial Intelligence; what steps the Company plans to take to remediate those harms; and how it will measure the effectiveness of such efforts.
We are particularly concerned given that, in just the past few weeks, we have documented evidence of the approval of AI-generated ads in Meta’s largest market, India, in violation of the company’s stated policy[1] that it would prioritize the detection and removal of violative AI-generated content, recognizing “the concerns around the misuse of AI-generated content to spread misinformation.”
In a new report[2], researchers, in collaboration with India Civil Watch International, confirmed Meta’s approval of AI-manipulated political ads seeking to spread disinformation and incite religious violence during India's elections. Between May 8th and May 13th, Meta approved 14 highly inflammatory ads for distribution. These ads called for violent uprisings targeting Muslim minorities, disseminated blatant disinformation exploiting communal or religious conspiracy theories prevalent in India's political landscape, and incited violence through Hindu supremacist narratives.
Accompanying each ad’s text were manipulated images generated with AI image tools, demonstrating how quickly and easily this new technology can be deployed to amplify harmful content. Meta’s systems also failed to block the researchers from posting political and incendiary ads during the election “silence period.” Setting up the Facebook accounts was extremely simple, and the researchers were able to post the ads from outside of India. The researchers pulled the ads before any users actually viewed them.
In total, Meta approved 14 of the 22 submitted ads within 24 hours. All of the approved ads broke Meta’s own policies on hate speech, bullying and harassment, misinformation, and violence and incitement. The approved ads included:
_____________________________
[1] https://about.fb.com/news/2024/03/how-meta-is-preparing-for-indian-general-elections-2024/
[2] https://aks3.eko.org/pdf/Meta_AI_ads_investigation.pdf
● Ads targeting BJP opposition parties with messaging on their alleged Muslim “favoritism,” a popular conspiracy theory pushed by India’s far right.
● Ads playing on fears of India being swarmed by Muslim “invaders,” a popular dog whistle targeting Indian Muslims. One ad claimed that Muslims had attacked India’s Ram temple and that Muslims must be burned.
● Two ads pushed a “stop the steal” narrative, claiming electronic voting machines were being destroyed, followed by calls for a violent uprising.
● Multiple ads used Hindu supremacist language to vilify Muslims, going further with calls for burning Muslims.
● One ad called for the execution of a prominent lawmaker, alleging their allegiance to Pakistan.
● One ad used a conspiracy theory made popular by BJP opposition parties about a lawmaker removing affirmative action policies for oppressed caste groups.
● Each ad was accompanied by a manipulated image created with widely used AI image tools: Stable Diffusion, Midjourney, and DALL-E. For example, Ekō researchers were easily able to generate images showing a person burning an electronic voting machine, drone footage of immigrants crowding India’s border crossings, and notable Hindu and Muslim places of worship on fire.
The ads were placed in English, Hindi, Bengali, Gujarati, and Kannada.
In light of these recent examples of Meta’s lack of effective AI governance in its largest market, Proposal Six’s request for a report to shareholders on the effectiveness of the measures the company is taking to address misinformation and disinformation risks, particularly during the 2024 election cycle, is especially compelling. As investors, it is essential that we have a transparent and accurate view of how the company enforces its promised safeguards.