Whistleblower Claims Facebook is Causing Harm and Destruction

Free use image courtesy of pixabay.com

In 2019, Mark Zuckerberg wrote an op-ed for the Wall Street Journal in which he claimed, “We don’t sell people’s data.” Despite this, the company has repeatedly been fined by governments around the world for violations of privacy laws. That same year, Facebook was fined $5 billion by the Federal Trade Commission (FTC), in part over the harvesting of user data by third parties like Cambridge Analytica. In total, Facebook has been fined more than $10 billion. More recently, multiple former employees have come forward as whistleblowers, claiming that Facebook polarizes societies, fuels violence worldwide, and harms the mental health of its users. These effects can be seen, albeit on a less overt scale, within the Friends Select community as well as globally.

The latest in a long series of whistleblower complaints against Facebook could have serious implications for all social media companies. Frances Haugen, the most recent whistleblower to go public with allegations against Facebook, previously worked at several tech companies, including Google and Pinterest, and said Facebook was substantially worse than anywhere she had worked before. While Facebook has weathered similar whistleblowing incidents, Haugen’s is different because she copied roughly 10,000 pages of internal Facebook research before leaving the company. She alleges that the documents show Facebook prioritizes its profits over the safety of its users. One internal document she copied said, “We have evidence from a variety of sources that hate speech, divisive political speech and misinformation on Facebook and the family of apps are affecting societies around the world.”

Facebook, like other social media platforms, uses algorithms to decide what content people see. In 2018, the company changed its ranking algorithm to prioritize engaging content. According to Facebook’s own internal research, divisive, hateful, and misleading content is what elicits the most engagement. As a result, Facebook and other algorithm-driven social media platforms have amplified misinformation and hate speech.
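Facebook has not released its ranking code, but a rough sketch can illustrate the mechanism described above: if a feed sorts posts by predicted engagement (reactions, comments, shares), whatever content provokes the strongest responses rises to the top, whether or not it is accurate or civil. The weights, names, and numbers below are hypothetical and purely illustrative.

```python
# Hypothetical sketch of engagement-weighted feed ranking (not Facebook's actual code).
# Posts predicted to draw the most reactions, comments, and shares rise to the top,
# regardless of whether they are accurate or civil.

from dataclasses import dataclass

@dataclass
class Post:
    text: str
    predicted_reactions: float  # model's estimate of reactions the post will receive
    predicted_comments: float
    predicted_shares: float

def engagement_score(post: Post) -> float:
    # Illustrative weights: comments and shares count for more than passive reactions.
    return (1.0 * post.predicted_reactions
            + 5.0 * post.predicted_comments
            + 10.0 * post.predicted_shares)

def rank_feed(posts: list[Post]) -> list[Post]:
    # Sort posts from highest to lowest predicted engagement.
    return sorted(posts, key=engagement_score, reverse=True)
```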

The documents taken by Haugen also show that Facebook knows it removes only a small percentage of the posts that violate its guidelines. One internal document said, “For example, we estimate that we may action as little as 3-5% of hate [speech] and ~0.6% of V&V [violent and inciting content] on Facebook.”

Meanwhile, in Myanmar, Facebook was used to incite hatred against Rohingya Muslims, and the company did little to combat the thousands of posts that broke its guidelines until they were publicized in Western media. Accounts that posted incitements to hatred and calls for violence against the Rohingya people amassed a combined 12 million followers in Myanmar before they were banned. Even after massive public outcry, many of the posts remained online, and similar ones continue to be made to this day. The violence against Myanmar’s Rohingya Muslims has since been described as genocide by United Nations investigators.

Prior to the 2020 U.S. presidential election, Facebook put safety protocols in place to reduce the spread of misinformation, then removed those measures after the election. Many Facebook employees, including Frances Haugen, were outraged by this choice, especially after Facebook was found to be one of the platforms used to plan the January 6th insurrection.

The claims made by Haugen extend beyond Facebook to its subsidiaries, including Instagram. One internal study she copied before leaving the company found that 13.5% of teen girls said Instagram made their thoughts of suicide worse, and 17% said it made their eating disorders worse. Prior to Haugen’s allegations, Facebook was developing a new version of Instagram for younger children; it has since paused development of that product. Instagram will also be adding new features meant to protect teenage users, including prompting users to take a break from the platform and “nudging” users who repeatedly look at potentially harmful content.

Child safety groups have criticized these new features as likely to be ineffective. Josh Golin, executive director of Fairplay, said he doesn’t think features that allow parents to observe teens’ activity on Instagram would be useful, since many teens already use secret accounts, often called “finstas.” He was also skeptical of nudging teens to take a break or move away from harmful content. His main suggestion was that legislators regulate Facebook’s algorithms.

Closer to home, in a survey of 60 Friends Select Upper School students, 50% of respondents said their mental health had been negatively affected by their use of social media, and 48% said social media use had made them feel anxious. 82% of students said that YouTube, TikTok, and Instagram were the primary platforms they used, all of which rely on algorithms to recommend content to users.

During its many previous public relations disasters, Facebook’s response was apologetic. This time, however, its response to the accusations has been more dismissive. “Many of the claims don’t make any sense,” Mark Zuckerberg said in a blog post. “If we wanted to ignore research, why would we create an industry-leading research program to understand these important issues in the first place?” Facebook has been facing crises related to its public image since 2017, but this is the first time Congress has been so heavily involved.

Despite public outcry over previous incidents and the public’s dislike of the company, legislation targeting Facebook has so far been unsuccessful in the United States. A pending antitrust lawsuit brought against Facebook by the FTC has already been dismissed once. Within the last year, however, five bills have been introduced that, if passed, would regulate the algorithms that power social media sites like Facebook. Previous bills concentrated on content moderation solutions like removing misinformation. This approach, while helpful, did not get at the root of the problem: social media algorithms’ tendency to circulate misinformation.