Facebook missed warning signs leading to the Capitol uprising


WASHINGTON – As supporters of Donald Trump stormed the United States Capitol on January 6, battling police and forcing lawmakers into hiding, an insurgency of a different kind was unfolding inside the world’s largest social media company.

Thousands of miles away in California, Facebook engineers were racing to fine-tune internal controls to slow the spread of misinformation and inciting content. Emergency actions – some of which had been rolled back after the 2020 election – included banning Trump, freezing comments in groups with a record of hate speech, filtering out the rallying cry “Stop the Steal” and empowering content moderators to act more assertively by labeling the United States a “temporary high-risk location” for political violence.

At the same time, frustration erupted within Facebook over what some saw as the company’s hesitant and inconsistent response to the rise of extremism in the United States.

“Haven’t we had enough time to figure out how to manage discourse without enabling violence?” an employee wrote on an internal message board at the height of the January 6 turmoil. “We have been fueling this fire for a long time and we shouldn’t be surprised that it is now out of control.”

It’s a question that still hangs over the company today, as Congress and regulators investigate Facebook’s role in the January 6 riots.

New internal documents provided by former Facebook employee-turned-whistleblower Frances Haugen offer rare insight into how the company appears to have simply stumbled into the January 6 riot. It quickly became apparent that even after years of being under the microscope for insufficiently policing its platform, the social network had missed how riot participants spent weeks vowing – on Facebook itself – to stop Congress from certifying Joe Biden’s election victory.

The documents also appear to bolster Haugen’s claim that Facebook put growth and profits ahead of public safety, opening the clearest window yet into how Facebook’s conflicting impulses – to protect its business and to protect democracy – clashed in the days and weeks leading up to the January 6 insurrection.

This story is based in part on disclosures Haugen made to the Securities and Exchange Commission and provided to Congress in redacted form by her legal counsel. The redacted versions received by Congress were obtained by a consortium of news organizations, including the Associated Press.

What Facebook called “Break the Glass” emergency measures put in place on January 6 were essentially a toolkit of options designed to stem the spread of dangerous or violent content, one the social network had first used in the run-up to the bitter 2020 election. As many as 22 of those measures were rolled back at some point after the election, according to an internal spreadsheet analyzing the company’s response.

“As soon as the election was over, they either deactivated them or they brought the settings back to what they were before, to prioritize growth over safety,” Haugen said in an interview with “60 Minutes”.

An internal Facebook report after Jan. 6, previously reported by BuzzFeed, criticized the company for having a “piecemeal” approach to the rapid growth of “Stop the Steal” pages, related sources of misinformation and violent and inciting comments.

Facebook says the situation is more nuanced and that it carefully calibrates its controls to respond quickly to spikes in hateful and violent content, as it did on January 6. The company says that having stricter controls in place before that day would not have helped.

Facebook’s decisions to introduce or phase out certain security measures took into account signals from its own platform as well as information from law enforcement, spokeswoman Dani Lever said. “When those signals changed, so did the measures.”

Lever said some of the measures remained in place until February and others remain active today.

Some employees were unhappy with Facebook’s handling of problematic content even before the January 6 riot. One employee who left the company in 2020 left a long note charging that promising new tools, backed by strong research, were being constrained by Facebook out of “fears of public and policy stakeholder responses” (in other words, concerns about negative reactions from Trump allies and investors).

“Likewise (although more concerning), I have seen already built and functional protections being canceled for the same reasons,” wrote the employee, whose name is blacked out.

Research conducted by Facebook well before the 2020 campaign left little doubt that its algorithm could pose a serious danger by spreading disinformation and potentially radicalizing users.

A 2019 study, titled “Carol’s Journey to QAnon — A Test User Study of Misinfo & Polarization Risks Encountered through Recommendation Systems,” described the results of an experiment conducted with a test account set up to reflect the views of a prototypical “strong conservative” – but not extremist – 41-year-old woman from North Carolina. This test account, using the alias Carol Smith, indicated a preference for mainstream news sources like Fox News, followed comedy groups that mocked liberals, embraced Christianity and was a fan of Melania Trump.

Within a single day, the page recommendations for this account generated by Facebook itself had evolved into a “pretty disturbing and polarizing state,” the study found. On Day 2, the algorithm recommended more extremist content, including a group related to QAnon, which the fake user did not join because she was not naturally drawn to conspiracy theories.

A week later, the test subject’s news feed featured “a barrage of extreme, conspiratorial and graphic content,” including posts reviving the lie that Obama was not born in the United States and linking the Clintons to the murder of a former Arkansas state senator. Much of the content was posted by questionable groups run from overseas or by administrators known to have violated Facebook’s rules on bot activity.

These findings led the researcher, whose name was redacted by the whistleblower, to recommend safety measures ranging from removing content with known conspiracy references and disabling “top contributor” badges for misinformation commenters to lowering the threshold number of followers required before Facebook verifies a page administrator’s identity.

Among the other Facebook employees who read the research, the response was almost universally favorable.

“Hey! This is such an in-depth, well-described (and disturbing) study,” one user wrote, their name obscured by the whistleblower. “Do you know of any concrete changes that have resulted from this?”

Facebook said the study was one of many examples of its commitment to continuously study and improve its platform.

Another study given to congressional investigators, titled “Understanding the Dangers of Harmful Topic Communities,” explained how like-minded people who embrace a borderline topic or identity can form “echo chambers” for misinformation that normalize harmful attitudes, encourage radicalization and may even provide a justification for violence.

Examples of these harmful communities include QAnon and hate groups promoting theories of race war.

“The risk of violence or harm offline becomes more likely when like-minded people come together and support each other in taking action,” the study concludes.

Court documents filed by federal prosecutors against those who allegedly stormed the Capitol contain examples of like-minded people coming together.

Prosecutors say a leader of the Oath Keepers militia group used Facebook to discuss forming an “alliance” and coordinating plans with another extremist group, the Proud Boys, ahead of the riot at the Capitol.

“We have decided to work together and shut this shit down,” Kelly Meggs, described by authorities as the leader of the Florida chapter of the Oath Keepers, wrote on Facebook, according to court records.

