
Midterm elections to put misinformation policies to the test

Social media apps are arranged for a photograph on Friday, August 19, 2022. (Greg Nash)

Social media platforms’ plans to tackle election-related misinformation will be put to the test as congressional candidates ramp up online activity in the final months of midterm campaigns. 

Since the 2020 election, mainstream platforms like Twitter and Facebook have grown more willing to block, label and remove politicians’ posts and accounts, including their watershed decisions to suspend former President Trump’s accounts last year.

But more politicians are testing the boundaries of the platforms’ rules with incendiary posts, especially after an uptick in violent rhetoric following last week’s FBI search of Mar-a-Lago. Critics warn that tech companies need to do more than dust off their 2020 playbooks to follow through on their commitments to block misinformation and hate speech.

New York University researcher Laura Edelson said to get at the core of the issue, platforms need to reassess the algorithms recommending content to users. 

She compared taking content down after amplifying it to a wide online audience — the approach most platforms use, and plan to use ahead of November based on their public posts — to creating a car with no brakes and only airbags. 

“By the time those are useful, the car’s crashed,” she said. 

Meta, Twitter and TikTok released their plans for moderating midterm election-related content in the past two weeks, and the companies largely plan to deploy the same tactics they used in 2020. All three said they will label such posts and point users to their respective election centers, which offer authoritative voting information from partner organizations and local officials.

TikTok has gained more widespread appeal since the last election. And despite some warnings about national security concerns over its parent company ByteDance being based in China, which TikTok has disputed, more candidates are using the platform to reach voters, especially young ones.

TikTok does not allow paid political advertising, meaning the content it’s moderating will mostly be organic posts. The company said it will label all posts related to the midterm elections, regardless of whether the content is disputed.

Facebook, now under the parent company name Meta, took that approach in 2020, but the company has signaled a shift for this cycle. Meta President of Global Affairs Nick Clegg said in a blog post that if labels are deployed, they will be used in a “targeted and strategic way” after feedback from users that the 2020 labels were “over-used.”

According to a Meta fact sheet, the company has “hundreds of people focused on the midterms across more than 40 teams” and spent $5 billion on global safety and security last year. But reports indicate the company is cutting back on content moderation to a degree: about 60 contract workers at Accenture who provide services like content moderation for Facebook are losing their jobs, Insider reported Thursday.

A Meta spokesperson declined to comment about the report. 

Facebook was criticized over its handling of misinformation in the 2020 election, with critics saying the company didn’t do enough to stop the spread of false claims about voter fraud and narratives casting doubt on the election results.

“Facebook again hasn’t fundamentally changed anything and they have defunded their own program of whack-a-mole,” Edelson said. 

She said the lack of change is worrying given a spike in conspiracy theories about local elections and federal law enforcement.

“And now they’re being tied together as if this is some grand conspiracy,” she said. 

Twitter said in a blog post last week it would label posts with misleading content or claims about voting, including false information about the outcome of the election. 

The election-specific policies from the social media giants appear focused on claims of voter fraud or suppression, referencing the Stop the Steal movement, which cast doubt on President Biden’s win and gained steam online in 2020 despite platforms’ efforts to fight such misinformation. The false narrative, amplified by Trump and his allies, fueled the violent riot at the Capitol on Jan. 6, 2021.

Since the FBI searched Mar-a-Lago, there has been an increase in violent rhetoric on mainstream and fringe sites casting the search as a politically motivated move against Trump. The posts are raising alarms about potential real-world attacks.

Jacqueline Maralet, an assistant director at the Digital Forensic Research Lab, said the platforms’ election policies address far-right extremism “mostly insofar as, ‘This type of speech is already not allowed under our content moderation policies.’”

Platforms may take action against specific candidates who “push too far into” the edge of “clearly inciting violence,” she said, but it’s not yet clear how strictly they will enforce their guidelines.

Facebook and Twitter have been taking stricter action against politicians than they did before the 2020 election, including cutting elected officials off from their accounts. 

In addition to the suspensions of Trump, which are permanent on Twitter and will last until at least 2023 on Facebook, the companies have suspended various lawmakers over the past year, mostly for violations of their COVID-19 misinformation policies.

Twitter, for example, permanently suspended Rep. Marjorie Taylor Greene’s (R-Ga.) personal account in January for that reason, although her official congressional account remains active. 

However, Martin Rooke, a research fellow at Harvard’s Shorenstein Center, said moderating content for medical misinformation is “far simpler” than moderating posts linked to events like the FBI search of Mar-a-Lago, which are “inherently political.”

Regarding medical misinformation, “there are scientific authorities that can be consulted with,” he said. 

“But with this Mar-a-Lago event, when does content moderation really start to step on freedom of expression around being critical of law enforcement agencies, being critical of the government as well? I think that’s where the more mainstream social media platforms are really going to run up against their limit of what they are prepared to do or what they can legally do,” Rooke said. 

Moderating campaign-centered language can be just as challenging, he said.

“You could say, ‘Alright, well, we’ll try and tamp down on sort of jingoistic expressions.’ But almost every election there’s language about it being a contest, a battle, a fight for freedom, a fight for the future — stuff like that,” Rooke said. “That combative language is embedded in our very modern way of discussing politics. So unless the social media platforms are going to be hiring people to sit there and monitor and observe these networks on an almost consistent basis, it’s going to be very, very hard to pick up.”

Another challenge is the “gray area” posts being spread by some Republican officials on mainstream platforms, Maralet said.

She characterized these as part of a “call and response” dynamic in which a post from a Republican official on Twitter or Facebook may not explicitly call for violence but leads to a more direct call from users on fringe sites.

Beyond incendiary posts about the FBI search, advocacy groups are also raising concerns about how social media platforms are handling other forms of hate speech, including a rise in anti-LGBTQ posts identified after the passage of Florida’s “Don’t Say Gay or Trans” bill.

A report released by the Center for Countering Digital Hate (CCDH) and the Human Rights Campaign (HRC) earlier this month found that just 10 Twitter accounts drove 66 percent of impressions for the 500 most viewed anti-LGBTQ tweets using “groomer” as a slur between January and July. 

Among them were the accounts of Reps. Greene and Lauren Boebert (R-Colo.), who are both seeking reelection in November. 

The posts seemingly violate Twitter’s policy. A Twitter spokesperson confirmed that use of the term “groomer” is prohibited under the company’s hateful conduct policy when used as a descriptor in the context of discussions of gender identity.

“We agree that we can and must do better. Our mission and our responsibility is to proactively enforce all of our policies, and to do so as quickly as possible. We continue to invest in our automated tools and teams of specialist reviewers, to identify and address gaps in our enforcement,” the spokesperson said in a statement. 

Through Meta’s Ad Library, researchers found 59 paid ads served to Facebook and Instagram users that shared a dangerous narrative that the LGBTQ community and allies are “grooming” children, according to the report. 

“Are social media companies prepared for the impact of this dangerous rhetoric in the wake of the 2022 elections? As we approach the 2022 midterm elections, are social media companies prepared to enforce their policies and prevent some of the real-life consequences we’ve seen in association with this language being used in previous elections?” Justin Unga, director of strategic initiatives at HRC, said.

A spokesperson for Meta said, “We reviewed the ads flagged in the report and have taken action on any content that violates our policies.”

To mitigate the issues, HRC officials said the platforms need to do a better job of enforcing the policies they already have in place.

“This has real world consequences. It has political consequences. But we’re really worried about the actual danger they’re putting people in,” Jay Brown, senior vice president of programs at HRC, said.

“These platforms need to do better enforcing their own policies,” he added.