WASHINGTON — The spread of misinformation on social media has been front and center during the pandemic and since the 2020 election, raising concerns on both sides of the aisle.
Now a new peer-reviewed study has found that even if a post is flagged on one platform, it can still spread far and wide on others.
Researchers at New York University’s Center for Social Media and Politics looked at Twitter specifically and how it handled former President Donald Trump’s tweets about what it flagged as election misinformation.
“We were interested in understanding the impact of a single platform, in this case Twitter,” said Zeve Sanderson, Executive Director of NYU’s Center for Social Media and Politics. “We live in a networked information age and what that means is information can spread extraordinarily quickly.”
Twitter flagged hundreds of former President Trump’s tweets about the 2020 election, either attaching warning labels or blocking them altogether.
The study found that tweets with warning labels actually spread further and longer within Twitter than tweets that weren’t flagged at all.
It also found while blocked tweets were effective in stopping the spread of those messages within Twitter, those posts ended up spreading more on other social media platforms.
“Information that is posted on one platform can pop up on other platforms in the form of links, screenshots or direct quotes of those messages,” said Sanderson.
What’s unclear is the ‘why’ behind these trends. Sanderson said that’s because of limited publicly available data, but research into the causes is ongoing.
What the study does show, though, is that when strategies to stop the spread of misinformation differ from site to site, those messages often still end up spreading.
“As we think about what intervention strategies might be able to make these information environments healthier, or discourse sort of more robust, we really want to make sure that we keep in mind this network dynamic and not get solely focused on individual platforms,” said Sanderson.
In response to the study, a spokesperson for Twitter pointed to the enforcement actions it took to stop the spread of election misinformation, including labeling around 300,000 tweets as misleading in October and November of last year.
“The challenges of misinformation continue to be complex, and require a whole-of-society approach,” said a Twitter spokesperson. “We continue to research, question, and alter features that could incentivize or encourage behaviors on Twitter that negatively affect the health of the conversation online or could lead to offline harm.”
Republicans on Capitol Hill have accused social media companies of targeting conservatives on their platforms through censorship during several Congressional hearings on how the companies handle misinformation.
©2021 Cox Media Group