As of October 6, Twitter’s Birdwatch community moderation program has been expanded to all US users.

This is a big step for Birdwatch, which officially launched in beta in January 2021, and marks progress in the company’s efforts to reduce the spread of misinformation on the platform. But as the scheme expands, data reviewed by The Verge suggests that the most common topics under scrutiny are already covered by Twitter’s misinformation policies, raising new questions about the program’s overall impact.

At its core, Birdwatch’s promise is to “decentralize” the fact-checking of misinformation, putting the power in the hands of the user community rather than a tech company. But fact-checking covers a vast range of topics, from trivial and easily debunked rumors to complex claims that may hinge on fundamental uncertainties in the scientific process.

“This could speak to the random internet curiosities that pop up”

In public statements, Twitter executives involved in the program have focused on the easier end of this spectrum. Speaking to reporters last month, Keith Coleman, Twitter’s vice president of product, suggested that Birdwatch’s strength is in addressing statements that aren’t covered by Twitter’s misinformation policies or aren’t serious enough to be assigned internal fact-checking resources. “This could speak to random internet curiosities that pop up,” Gizmodo quoted Coleman as saying. “Is there a giant void in space? Or is this bat actually the size of a man?”


We downloaded data from Birdwatch up to 20 September. This dataset contained a total of 37,741 notes, of which 32,731 were unique.
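As a rough illustration of how a dataset like this can be loaded and deduplicated, here is a minimal Python sketch. The column names (`createdAtMillis`, `summary`) are modeled on the public Birdwatch notes TSV and should be treated as assumptions if the schema has changed.

```python
import csv
from datetime import datetime, timezone

def load_notes(path, start, end):
    """Load notes created between two datetimes, dropping duplicate note texts."""
    seen, notes = set(), []
    with open(path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f, delimiter="\t"):
            # Birdwatch timestamps are milliseconds since the Unix epoch
            created = datetime.fromtimestamp(
                int(row["createdAtMillis"]) / 1000, tz=timezone.utc
            )
            if not (start <= created <= end):
                continue
            text = row["summary"].strip()
            if text in seen:  # e.g. 37,741 notes -> 32,731 unique texts
                continue
            seen.add(text)
            notes.append(row)
    return notes
```

Filtering and deduplicating in a single pass like this keeps memory use proportional to the number of unique note texts.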

We used Python and a set of natural language processing libraries to parse the notes and extract the most common meaningful words that appear in them.

To do this, we dropped conjunctions and other stopwords like “and,” “but,” “there,” “which,” and “for,” and excluded words that are often used in constructing a fact-check, like “tweet,” “source,” “claims,” “evidence,” and “article.” We also ignored words inside URLs, which Twitter includes as part of the note text, and reduced plurals to their singular form (so “cars” would count as “car”).

The processed data gives a good overview of the topics that Birdwatch notes most frequently address or add context to.
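The steps above can be sketched with the standard library alone; the stopword list and the naive plural-stripping rule below are simplified stand-ins for the fuller lists and lemmatizer the analysis describes.

```python
import re
from collections import Counter

# Simplified stand-ins for the full stopword and "fact-check construction"
# word lists described in the methodology.
STOPWORDS = {
    "and", "but", "there", "which", "for", "the", "a", "an",
    "is", "are", "not", "this", "see",
    "tweet", "source", "claims", "evidence", "article",
}

def top_terms(notes, n=10):
    """Count the most common meaningful words across a list of note texts."""
    counts = Counter()
    for text in notes:
        text = re.sub(r"https?://\S+", " ", text)  # strip URLs before tokenizing
        for word in re.findall(r"[a-z']+", text.lower()):
            if word in STOPWORDS or len(word) < 3:
                continue
            # Naive singularization: "cars" -> "car" (a real pipeline
            # would use a proper lemmatizer)
            if word.endswith("s") and not word.endswith("ss"):
                word = word[:-1]
            counts[word] += 1
    return counts.most_common(n)

notes = [
    "This tweet is misleading: see https://usgs.gov for evidence.",
    "Vaccines are safe; cars and vaccines are not comparable.",
]
print(top_terms(notes, 3))
```

Running the pipeline over the two sample notes surfaces “vaccine” as the top term, since URL contents and stopwords are excluded from the count.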

➡️ To explore the full data yourself, you can browse our interactive database of Birdwatch notes.

But cases from the program’s beta phase show that many Birdwatch users are trying to address more serious misinformation on the platform, and that their efforts overlap significantly with Twitter’s existing policies. Birdwatch data released by Twitter shows that COVID is the most common topic addressed in Birdwatch notes. What’s more, many of the accounts that posted the annotated tweets have since been suspended, suggesting that Twitter’s internal review process is catching content violations and taking action.

As part of its broader open source efforts, Twitter maintains a regularly updated dataset of all Birdwatch notes, freely available for download from the project blog. The Verge analyzed this data, looking at a set covering the period from 22 January 2021 to 20 September 2022. Using computational tools to match and summarize the data, we can gain insights into the main themes in the Birdwatch notes that would be difficult to glean from manual review alone.

The data showed that Birdwatch users spent a great deal of time on tweets related to COVID, vaccination, and the government’s response to the pandemic. The word frequency list shows that “COVID” is the most common subject term, with the related term “vaccine” at number three on the list.

These notes show how the types of claims being fact-checked evolved over time as public understanding of the pandemic changed. Tweets from 2021 address false narratives alleging that Dr. Anthony Fauci somehow had a personal role in creating the novel coronavirus, or cast doubt on the safety and effectiveness of vaccines as they became available.

Other Birdwatch notes from this time refer to unproven or dangerous treatments for COVID, such as ivermectin and hydroxychloroquine.

While some of the more outlandish myths about COVID are easy to fact-check (such as the idea that the virus is a hoax, that it’s mostly harmless, or that it’s being spread by 5G towers), other claims about transmission, severity, and mortality can be more difficult to get definitively right.

For example, when the vaccines were distributed in January 2021, one Birdwatch user tried to add context to an argument about the effectiveness of one brand of vaccine in preventing hospitalization versus preventing any infection. New Jersey Governor Phil Murphy tweeted that trial data for the Johnson & Johnson vaccine showed “COMPLETE protection against hospitalization and death” and provoked an angry response from a statistician who linked to trial data showing only “66% efficacy” of the vaccine.

“The [tweet] author confuses the reported efficacy in preventing hospitalization and death with the reported overall efficacy in preventing infection,” a Birdwatch note helpfully added, referencing Bloomberg coverage that clearly distinguishes the two figures.

More dubiously, another Birdwatch user attempted to fact-check a claim widely reported by mainstream news outlets, citing a blog post on a brewing website as a source. Where news outlets followed the CDC’s lead in reporting that the omicron variant accounted for 73 percent of new infections as of December 2021, the blog post argued that the figure may have stemmed from an error in the CDC’s statistical modeling. The post was well argued, but without confirmation from a more reliable and verified source, it’s hard to know whether the annotation helped the situation or just muddied the waters.

Birdwatch users rated tweets like these as some of the most problematic to deal with. (By completing a survey when creating a note, users can rate tweets on four binary values qualifying how misleading, plausible, harmful, and difficult to fact-check the claims are.) Clearly, accurate and accessible communication of scientific findings is a difficult task, but public health outcomes depend on producing accurate health advice and preventing the spread of bad advice. Experts agree that platforms need strong, clear, and coordinated standards to address pandemic misinformation, and community-driven moderation seems unlikely to meet that bar.
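Tallying those self-ratings by topic is straightforward once the notes are loaded. The field name and values in this sketch (`validationDifficulty`, `CHALLENGING`, `EASY`) are modeled on the public Birdwatch schema and should be treated as assumptions.

```python
from collections import Counter

def difficulty_by_topic(notes, keyword):
    """Tally the self-reported fact-checking difficulty of notes mentioning a keyword."""
    tally = Counter()
    for note in notes:
        if keyword.lower() in note["summary"].lower():
            tally[note["validationDifficulty"]] += 1
    return tally

# Two toy notes standing in for real rows from the dataset
notes = [
    {"summary": "COVID vaccine efficacy is misread here",
     "validationDifficulty": "CHALLENGING"},
    {"summary": "The election was not rigged",
     "validationDifficulty": "EASY"},
]
print(difficulty_by_topic(notes, "covid"))
```

Comparing these tallies across keywords is one simple way to see which topics note authors themselves found hardest to check.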

While COVID is a major theme in the Birdwatch notes, it is far from the only one.

On the frequency list, the words “earthquake” and “prediction” ranked high due to a large number of identically worded notes that were attached to tweets from accounts falsely claiming to be able to predict earthquakes around the world.

There is no evidence that earthquakes can be reliably predicted, but inaccurate earthquake forecasts keep going viral online. With 48k followers at the time of writing, the @Quakeprediction Twitter account is one of the worst offenders, posting a steady stream of predictions of increased earthquake risk in California. One Birdwatch user appears to have taken it upon himself to attach a cautionary note to more than 1,300 tweets from this and other earthquake forecasting accounts, each time linking to a debunking from the US Geological Survey explaining that scientists have never predicted an earthquake.

It’s not clear why the user focused on earthquakes, but the end result is a human reviewer who, ironically, behaves like automated fact-checking software: spotting a pattern in tweets and responding with an identical action each time.

Stopping “Stop the Steal”

The data also clearly shows ongoing efforts to challenge the results of the 2020 election, a phenomenon that plagues many other online platforms.

Further down the list of most common words are the terms “Trump,” “election,” and “Biden.” Many notes that contain these terms refer to claims that Donald Trump won the 2020 election or, conversely, that Joe Biden lost. Although widespread, claims such as these are easy to fact-check due to the overwhelming amount of evidence against widespread election fraud.

“Joe Biden won the election. This is the big lie that goes on,” reads a note attached to a tweet from Arizona State Senator Wendy Rogers, a politician associated with white nationalists, who falsely claimed that fraud was committed in densely populated areas.

“It is nearly impossible to commit vote-by-mail fraud, and there is absolutely no evidence that the 2020 election results were the result of fraud,” reads another note, attached to a false tweet by Irene Armendariz-Jackson, a Republican candidate running for Beto O’Rourke’s former congressional seat in El Paso, Texas.

Another user wrote simply: “The election was not rigged. Trump lost.” For that note, as in many other cases, the original tweet can no longer be viewed: searching for the tweet ID returns a blank page and a message that the account has been suspended.

Although Birdwatch users commented on many tweets challenging the results of the 2020 election, self-reported surveys rated these tweets as less challenging to address, given the overwhelming amount of evidence supporting a Biden victory.

Given the sheer number of suspended accounts, it seems clear that either Twitter’s algorithms or its human moderation team also find it easy to flag and remove the same content.

A screenshot of a tweet from @StateofusAll.

So far, data from the Birdwatch program shows a strong community of volunteer fact-checkers trying to tackle tough issues. But the evidence also suggests a large degree of overlap between the type of tweets these volunteers address and the content already covered by Twitter’s existing misinformation policies, raising questions about whether the fact-checking notes will have a significant impact. (Twitter argues that Birdwatch should be a supplement to existing fact-checking initiatives, not a replacement for disinformation control.)

Twitter says preliminary results from the program look good: the company claims that people who see fact-checking notes attached to tweets are 20 to 40 percent less likely to agree with the content of a potentially misleading tweet than someone who sees the tweet alone. This is a promising finding, but by implication, many viewers of such tweets are still misled.

Twitter did not immediately respond to a request for comment.

Click here to browse our interactive database of Birdwatch notes.
