Meta Platforms (META) PX14A6G: Letter to shareholders
Filed: April 24, 2024, 9:58am
NOTICE OF EXEMPT SOLICITATION
NAME OF REGISTRANT: Meta Platforms, Inc.
NAME OF PERSONS RELYING ON EXEMPTION: Arjuna Capital
ADDRESS OF PERSON RELYING ON EXEMPTION: 13 Elm St., Manchester, MA 01944
WRITTEN MATERIALS: The attached written materials are submitted pursuant to Rule 14a-6(g)(1) (the “Rule”) promulgated under the Securities Exchange Act of 1934,* in connection with a proxy proposal to be voted on at the Registrant’s 2024 Annual Meeting. *Submission is not required of this filer under the terms of the Rule but is made voluntarily by the proponent in the interest of public disclosure and consideration of these important issues.
April 23, 2024
Dear Meta Platforms Shareholders,
We are writing to urge you to VOTE “FOR” PROPOSAL 6 on the proxy card, which asks Meta to report on the risks associated with mis- and disinformation disseminated or generated via generative Artificial Intelligence (gAI) across its platforms, and on its plans to mitigate those risks. We believe shareholders should vote FOR the Proposal for the following reasons:
1. Meta has an appalling track record of mismanaging its platforms, which has ultimately contributed to societal harm.
2. Meta's gAI tools have already created false and misleading information.
3. Meta is failing to quickly identify and moderate gAI mis- and disinformation distributed across its platforms, and its senior leaders underestimate the risks associated with these content moderation failures, even in an important election year.
4. Misinformation and disinformation disseminated through gAI create risks for Meta and investors alike.
5. This Proposal goes beyond Meta’s current reporting by requesting an accountability mechanism to ensure it is effectively identifying and mitigating mis- and disinformation risks.
Expanded Rationale FOR Proposal 6
The Proposal makes the following request:
RESOLVED: Shareholders request the Board issue a report, at reasonable cost, omitting proprietary or legally privileged information, to be published within one year of the Annual Meeting and updated annually thereafter, assessing the risks to the Company’s operations and finances, and to public welfare, presented by the Company’s role in facilitating misinformation and disinformation disseminated or generated via generative Artificial Intelligence; what steps the Company plans to take to remediate those harms; and how it will measure the effectiveness of such efforts.
We believe shareholders should vote “FOR” the Proposal for the following reasons:
1. Meta has an appalling track record of mismanaging its platforms, which has ultimately contributed to societal harm.
For years, there have been serious concerns that Meta is unable to adequately evaluate and manage its products. In its opposition statement, the Company attempts to assure shareholders that it is effectively handling the risks of mis- and disinformation with gAI. Yet the Company has proven time and time again that it cannot be trusted to effectively mitigate harmful content on its platforms. Without the proper accountability mechanisms in place, as requested by this Proposal, Meta’s gAI tools and the gAI content distributed across its platforms could extend its devastating track record of societal harm. Mismanagement of its platforms, prior to the introduction of gAI, resulted in:
a. Amplifying Hate and Extremism - A 2022 investigation by ProPublica and The Washington Post found that Facebook played “a critical role” in the spread of false narratives that fomented the violence at the U.S. Capitol on January 6, 2021. The investigation found that Facebook dissolved an election integrity task force following the 2020 elections and “rolled back other intensive enforcement measures.” The investigation also revealed a problem with the way Facebook polices its user “groups,” with former employees saying, “the company’s enforcement efforts have been weak, inconsistent and heavily reliant on the work of unpaid group administrators to do the labor-intensive work of reviewing posts and removing the ones that violate company policies.”1
b. Election Disinformation - Facebook testified before Congress that Russia-based operatives published 80,000 posts from June 2015 to August 2017, reaching about 126 million Americans, in an attempt to influence the 2016 US presidential election.2 In 2022, various ads were identified on Facebook spreading electoral disinformation in Brazil in attempts to influence its election.3
c. Facilitating Genocide - An estimated 10,000 Rohingya Muslims were killed during a military crackdown in Myanmar in 2017.4 Facebook spread hate speech and played a “determining role” in that violence, according to the lead author of a United Nations report.5 An independent report commissioned by Facebook found that “we weren’t doing enough to help prevent our platform from being used to foment division and incite offline violence. We agree that we can and should do more.”6
_____________________________
1 https://www.propublica.org/article/facebook-hosted-surge-of-misinformation-and-insurrection-threats-in-months-leading-up-to-jan-6-attack-records-show
2 https://www.businessinsider.com/how-mark-zuckerberg-reacted-to-learning-facebook-russian-election-interference-2021-7
3 https://time.com/6210985/brazil-facebook-whatsapp-election-disinformation/
4 https://www.bbc.com/news/world-asia-59558090
5 https://time.com/5197039/un-facebook-myanmar-rohingya-violence/
6 https://about.fb.com/news/2018/11/myanmar-hria/
d. Undermining Privacy - In 2018, Facebook admitted to mishandling data from about 87 million Facebook users that had been improperly obtained by political data-analytics firm Cambridge Analytica.7 As a result of that data breach, in 2019 the Federal Trade Commission imposed a record-breaking $5 billion penalty on Facebook and forced the Company to submit to a modified corporate structure to settle charges that it violated a 2012 FTC order by deceiving users about their ability to control the privacy of their personal information.
e. Violating Civil Rights - A 2018 third-party civil rights audit of Facebook - commissioned by the Company itself - expressed concern about “the vexing and heartbreaking decisions Facebook has made that represent significant setbacks for civil rights.”8 The Washington Post later disclosed that Facebook did not inform the civil rights auditors about research showing that the Company’s algorithms disproportionately harmed minorities. Laura Murphy, the lead auditor, said, “I am not asserting nefarious intent, but it is deeply concerning that metrics that showed the disproportionate impact of hate directed at Black, Jewish, Muslim, Arab and LGBTQIA users were not shared with the auditors.”9
f. Harming Young People - In 2021, ex-Facebook employee and whistleblower Frances Haugen provided documents to the Securities and Exchange Commission and testified before the U.S. Senate regarding Facebook systems that she alleged amplified online hate and extremism and failed to protect young people from harmful content. Among other allegations, Haugen said Facebook’s internal research found that the Company’s Instagram social media platform exacerbates eating disorders and suicidal thoughts in teenage girls.
g. Failing to Remediate Harms - Based on documents provided by Facebook whistleblower Haugen, The Wall Street Journal published a major investigative series and concluded: “Facebook Inc. knows, in acute detail, that its platforms are riddled with flaws that cause harm, often in ways only the company fully understands…Time and again, the documents show, Facebook’s researchers have identified the platform’s ill effects. Time and again, despite congressional hearings, its own pledges and numerous media exposés, the company didn’t fix them.”10
It is evident that Meta has not done enough in the past to mitigate devastating societal harms facilitated by its platforms. This Proposal is an attempt to ensure Meta has proactively identified all risks associated with gAI, and that it is held accountable for aggressively mitigating these risks.
_____________________________
7 https://www.businessinsider.com/cambridge-analytica-a-guide-to-the-trump-linked-data-firm-that-harvested-50-million-facebook-profiles-2018-3
8 https://about.fb.com/wp-content/uploads/2020/07/Civil-Rights-Audit-Final-Report.pdf
9 https://www.washingtonpost.com/technology/2021/11/21/facebook-algorithm-biased-race/
10 https://www.wsj.com/articles/the-facebook-files-11631713039
2. Meta's gAI tools have already created false and misleading information.
By Meta’s own admissions, “[a]ddressing potential bias in generative AI systems is a new area of research,” which is especially concerning considering that “[t]he underlying models… have the potential to generate fictional responses or exacerbate stereotypes it may learn from its training data.”11 Even AI’s strongest proponents continue to echo the need for guardrails and transparency about how these technologies work. Sam Altman, CEO of OpenAI, has said he is “particularly worried that these models could be used for large-scale disinformation.” The Information has noted that gAI drops “the cost of generating believable misinformation by several orders of magnitude.”12 And researchers at Princeton, Virginia Tech, and Stanford have found that the guardrails many companies, including Meta, are relying on to mitigate the risks “aren’t as sturdy as A.I. developers seem to believe.”13
As a developer of large language models (LLMs) and of the chatbot assistants and image-generation tools that rely on them, Meta has a responsibility to anticipate how these proprietary gAI tools will be used and to mitigate any harms that may arise from either their malfunction or their abuse by bad actors. These AI tools have already shown how susceptible they are to generating mis- and disinformation:
a. In a recent test, Meta’s Llama 2, which powers its AI assistant, passed just 4 of 13 risk-assessment categories and showed a hallucination rate of 48 percent, roughly equivalent to getting only one of every two answers right.14 Other concerns raised by the test included the model’s vulnerability to manipulation via prompt injection, leakage of personally identifiable information, overzealous redaction of information, and other random data leakages.
b. Meta’s AI assistants were rolled out on its platforms in mid-April, and many reportedly posed as humans with made-up life experiences. One AI assistant posed as a mother in a private Facebook group, while another offered to give away nonexistent items in a Facebook forum. On WhatsApp, the AI chatbot accused a user of plagiarism, providing a formal citation for the “plagiarized” blog post. Meta acknowledged the gaffes, saying “this is new technology and it may not always return the response we intend, which is the same for all generative AI systems.”15,16
c. Meta’s Imagine image generator refused to create an image of an East Asian man with a white woman, despite dozens of queries asking it to do so.17
d. Meta’s latest generative AI model, Llama 3,18 relies on synthetic data, that is, data generated by AI to extrapolate what might happen in a real-world situation without representing a real-world event itself. Such data can lead to “model degradation” over time, which can negatively affect results by introducing bias and other inaccuracies.19
_____________________________
11 https://about.fb.com/news/2023/09/building-generative-ai-features-responsibly/
12 http://www.theinformation.com/articles/what-to-do-about-misinformation-in-the-upcoming-election-cycle
13 https://www.nytimes.com/2023/10/19/technology/guardrails-artificial-intelligence-open-source.html
14 https://www.techspot.com/news/102653-meta-llama-2-llm-prone-hallucinations-other-severe.html
15 https://www.wral.com/story/metas-newest-ai-model-beats-some-peers-but-its-amped-up-ai-agents-are-confusing-facebook-users/21387175/
16 https://www.washingtonpost.com/politics/2024/04/16/chatbots-flaws-arent-stopping-tech-giants-putting-them-everywhere/
17 https://www.theverge.com/2024/4/3/24120029/instagram-meta-ai-sticker-generator-asian-people-racism
18 https://techcrunch.com/2024/04/18/meta-releases-llama-3-claims-its-among-the-best-open-models-available/
19 https://www.forbes.com/sites/forbestechcouncil/2023/11/20/the-pros-and-cons-of-using-synthetic-data-for-training-ai/?sh=3e9c030a10cd
3. Meta is failing to quickly identify and moderate gAI mis- and disinformation distributed across its platforms, and its senior leaders underestimate the risks associated with these content moderation failures, even in an important election year.
Meta must address the gAI misinformation and disinformation disseminated across its platforms by quickly identifying and removing content, including ads, that violates its policies, whether that content is AI-generated or not. Yet over the past year, gAI mis- and disinformation has spread widely on the Company’s platforms, including Facebook, WhatsApp, and Instagram. For example:
a. An AI-generated image of an explosion at the Pentagon was distributed across various social media platforms, including Facebook, and caused a brief dip in the stock market.20,21
b. Ads promoting generative AI tools that let people create nude images without the permission of the subjects of those images have run on Instagram, and were removed only after reporters asked the Company for comment.22
c. Sexualized, nonconsensual AI-generated images of Taylor Swift circulated on several platforms, including Facebook and Instagram.23 That case, along with Meta’s handling of nonconsensual “explicit AI images of female public figures” involving two other well-known women, one in India and one in the U.S., has prompted the Oversight Board to reconsider Meta’s policies and enforcement practices on explicit AI-generated imagery.24
Meanwhile, Meta’s senior leaders, including Nick Clegg, President of Global Affairs, have publicly downplayed the threats of gAI, ignoring the effects that AI-generated images and campaigns have already had on recent elections. That is despite numerous news reports about how election outcomes in Taiwan,25 Indonesia,26 Argentina,27 India,28 and other countries have been influenced by gAI images, chatbots, and ad targeting. Clegg has asserted, contrary to fact, that “it is striking how little these tools have been used on a systematic basis to really try to subvert and disrupt the elections.”29 Compounding these negative effects is Facebook’s well-documented underinvestment in moderating non-English content.30
_____________________________
20 https://www.washingtonpost.com/technology/2023/05/22/pentagon-explosion-ai-image-hoax/
21 https://www.business-humanrights.org/en/latest-news/us-image-produced-by-generative-ai-spreads-fear-over-fake-pentagon-explosion/
22 https://www.404media.co/email/d2bebba9-5808-44fc-8352-d93d1791a5ff/?ref=daily-stories-newsletter
23 https://www.telegraph.co.uk/business/2024/04/16/facebooks-investigates-deepfake-nudes-taylor-swift/
24 https://www.oversightboard.com/news/361856206851167-oversight-board-announces-two-new-cases-on-explicit-ai-images-of-female-public-figures/
25 https://foreignpolicy.com/2024/01/23/taiwan-election-china-disinformation-influence-interference/
26 https://www.codastory.com/newsletters/elections-indonesia-ai-abuse/
27 https://www.nytimes.com/2023/11/15/world/americas/argentina-election-ai-milei-massa.html
28 https://www.aljazeera.com/economy/2024/4/12/before-india-election-instagram-boosts-modi-ai-images-that-violate-rules
29 https://www.theguardian.com/technology/2024/apr/09/metas-nick-clegg-plays-down-ais-threat-to-global-democracy
30 https://dig.watch/updates/the-consequences-of-metas-multilingual-content-moderation-strategies
As bad actors become more sophisticated in manipulating gAI, its serious consequences are bound to increase without the proper guardrails in place. Moreover, tactics for manipulating gAI will evolve over time as bad actors attempt to circumvent Meta’s initiatives to tackle mis- and disinformation. Meta must stay on top of these new tactics to ensure it is effectively moderating mis- and disinformation. This Proposal calls upon the Company to annually evaluate these risks and report on its effectiveness in mitigating them.
4. Misinformation and disinformation disseminated through gAI create risks for Meta and investors alike.
a. Legal Risks: Meta faces significant legal risks if it does not properly mitigate mis- and disinformation on its platforms, risks that are likely to intensify with the addition of gAI. The Company already faced various lawsuits for its role in mis- and disinformation prior to the development of gAI. In 2021, the Company was sued by Reporters Without Borders for allowing “disinformation and hate speech to flourish on its network.”31 Meta also faced two class-action lawsuits valued in excess of $150 billion for its role in spreading misinformation and hate speech that sparked the Rohingya genocide in Myanmar.32 In 2022, Meta faced a $2 billion class-action lawsuit for its role in amplifying the spread of misinformation and hate speech that fueled political violence in Ethiopia.33 Additionally, many legal experts believe Meta may be liable for mis- and disinformation generated by its own technology, as it is unlikely to be shielded by Section 230, a provision of federal law that has protected social media platforms and web hosts from legal liability for third-party content posted to their sites. Because content produced by Meta’s gAI tools is created by the Company’s own technology rather than by third parties, Meta is vulnerable to future legal scrutiny.
b. Democracy Risks: Mis- and disinformation is dangerous for society, Meta, and investors alike, as it can manipulate public opinion, exacerbate biases, weaken institutional trust, and sway elections. Researchers have argued that “the prevalence of AI-generated content raises concerns about the spread of fake information and the erosion of trust in social media platforms and digital interactions. The dissemination of misleading or manipulated content can further diminish public trust in the authenticity of information shared on social media, undermining the credibility of these platforms and media sources overall.”34 The distortion of “truths” generated and disseminated via gAI ultimately undermines trust in our democratic processes, which underpin the stability of our society and economy. This is of increasing concern in 2024, a year with a significant number of elections around the world, including presidential elections in the US, Indonesia, and Mexico.35
_____________________________
31 https://www.theguardian.com/technology/2021/mar/23/facebook-lawsuit-deceptive-practices-disinformation
32 https://www.zdnet.com/article/meta-slapped-with-two-class-action-lawsuits-worth-in-excess-of-150b-for-its-role-in-rohingya-genocide/
33 https://www.nbcnews.com/tech/misinformation/facebook-lawsuit-africa-content-moderation-violence-rcna61530
34 https://www.techrepublic.com/article/generative-ai-impact-culture-society/
35 https://en.wikipedia.org/wiki/List_of_elections_in_2024
c. Regulatory Risks: The regulatory landscape for Meta’s AI is still developing, which is itself a risk. The first major guidelines have been set by the newly adopted EU AI Act. Meta faces serious headwinds under this new regulation: it threatens the existence of any of its AI tools that rely on social scoring or predictive policing, requires Meta to identify and label deepfakes and AI-generated content, and requires the Company to perform model evaluations, risk assessments, and mitigations, and to report any incidents in which an AI system failed. Additionally, EU citizens will be able to report to the European AI Office when AI systems have caused harm.36 Barron’s notes: “The potential penalties contained in the AI Act are considerable, including 7% of a company’s global annual turnover in the previous financial year for violations involving banned AI applications.”
In the US, the Biden Executive Order remains a voluntary framework, but we can expect “the Brussels Effect” to lead toward broader global adoption of the standards set by the EU. The final shape of US regulations may not yet be clear, but there is popular support for strong regulation. A recent Pew survey reports that “67% of those who are familiar with chatbots like ChatGPT say they are more concerned that the government will not go far enough in regulating their use than that it will go too far.”37
d. Economy-wide Risks: Diversified shareholders are also at risk, as they internalize the costs that mis- and disinformation impose on society. Because the value of diversified portfolios rises and falls with GDP, harms that companies inflict on society and the economy are ultimately borne by those portfolios.38 It is in the best interest of shareholders for Meta to mitigate mis- and disinformation, in order to protect the Company’s long-term financial health and ensure its investors do not internalize these costs. Gary Marcus, chief executive officer of the newly created Center for the Advancement of Trustworthy AI, notes, “The biggest near-term risk [of generative AI] is deliberately created misinformation using large language tools to disrupt democracies and markets.”39
_____________________________
36 https://www.technologyreview.com/2024/03/19/1089919/the-ai-act-is-done-heres-what-will-and-wont-change/amp/?ref=everythinginmoderation.co
37 https://www.pewresearch.org/short-reads/2023/11/21/what-the-data-says-about-americans-views-of-artificial-intelligence/
38 See Universal Ownership: Why Environmental Externalities Matter to Institutional Investors, Appendix IV (demonstrating the linear relationship between GDP and a diversified portfolio), available at https://www.unepfi.org/fileadmin/documents/universal_ownership_full.pdf; cf. https://www.advisorperspectives.com/dshort/updates/2020/11/05/market-cap-to-gdp-an-updated-look-at-the-buffett-valuation-indicator (total market capitalization to GDP “is probably the best single measure of where valuations stand at any given moment”) (quoting Warren Buffett).
39 https://www.techrepublic.com/article/un-ai-for-good-summit/
5. This Proposal goes beyond Meta’s current reporting by requesting an accountability mechanism to ensure that the Company is effectively identifying and mitigating mis- and disinformation risks.
In its opposition statement, Meta describes its responsible AI approach and governance in an effort to obscure the need to fulfill this Proposal’s request. Yet the requested report asks the Company to go beyond describing responsible AI principles. We are asking for a comprehensive assessment of the risks associated with gAI, so that the Company can effectively mitigate those risks, and for an evaluation of how effectively the Company tackles the risks it identifies. Because the risks of gAI are severe and broadly consequential, it is crucial that Meta not only report its beliefs and commitments regarding responsible gAI, but also transparently demonstrate to shareholders that it has fully identified the risks and is evaluating its ability to address them.
Meta’s opposition statement also describes the tools the Company is currently developing to help identify AI-generated mis- and disinformation. As these tools are still very much under development, and given the Company’s poor historical performance in content governance, this Proposal requests appropriate accountability to ensure these tools are effective.
Additionally, current reporting does not fulfill this Proposal’s request. Meta’s Community Standards Enforcement Report tracks the Company’s progress on various content moderation policies, but it does not specifically assess the moderation of mis- and disinformation. In the 4Q23 Oversight Board Quarterly Update, the Board recommended that Meta provide quarterly enforcement data on misinformation. The Company has not yet implemented this recommendation, stating that “this undertaking is intricate and demands substantial resources.” Yet, given the severe risks outlined above, it is crucial that Meta commit adequate resources to ensure it is effectively identifying and mitigating mis- and disinformation.
Conclusion
For all the reasons provided above, we strongly urge you to support the Proposal. We believe a report on misinformation and disinformation risks related to generative AI will help ensure that Meta is comprehensively mitigating those risks, and that such a report is in the long-term best interest of shareholders.
Please contact Julia Cedarholm at juliac@arjuna-capital.com for additional information.
Sincerely,
Natasha Lamb
Arjuna Capital
This is not a solicitation of authority to vote your proxy. Please DO NOT send us your proxy card. Arjuna Capital is not able to vote your proxies, nor does this communication contemplate such an event. The proponent urges shareholders to vote for Proxy Item 6 following the instructions provided in management’s proxy mailing.
The views expressed are those of the authors and Arjuna Capital as of the date referenced and are subject to change at any time based on market or other conditions. These views are not intended to be a forecast of future events or a guarantee of future results. These views may not be relied upon as investment advice. The information provided in this material should not be considered a recommendation to buy or sell any of the securities mentioned. It should not be assumed that investments in such securities have been or will be profitable. This piece is for informational purposes and should not be construed as a research report.