You probably know you need to take information about COVID-19 found on social media with the proverbial grain of salt. But you may not be aware that some posts are generated by automated software and published through fake accounts known as bots.

One problem with bots is how widely and how fast they can spread misinformation about legitimate scientific research.

And it can be hard to differentiate between posts from bots and those from real people. “Bots are not fair to the way people use social media. No one looks at social media and thinks at the other end is a machine, not a person,” John Ayers, the corresponding author of a new study on the negative impact of bots, told TheDoctor.

Ayers and his colleagues decided to see how much and what kind of misinformation about COVID-19 was being spread by bots. They found that bots appeared to have coordinated a campaign on Facebook to spread misinformation about DANMASK-19, a study published online last November in the Annals of Internal Medicine that found masks were an effective way to control the spread of SARS-CoV-2, the virus that causes COVID-19.

Bots can amplify a message far beyond the reach of any individual user, and that is what makes them such a threat. “We know that 10 percent of Twitter is bots, and it is estimated that 20 percent of Facebook is bots,” Ayers explained.

The researchers downloaded almost 300,000 posts from more than 560 Facebook groups in which a link to the DANMASK-19 study had been shared. To identify the groups most and least likely to have been targeted by bots, they calculated how often identical links to DANMASK-19 were posted to pairs of groups, then noted how much time elapsed between these posts during the five-day study period.

Groups Likely and Unlikely to be Targeted by Bots
Facebook groups in which identical links were posted five or more times, with at least two of those posts appearing within 10 seconds of each other, were defined as most likely targeted by bots. Groups where an average of four to five hours elapsed between identical posts were considered unlikely to have been targeted by bots by the researchers, who came from the University of California, San Diego, UCLA and the University of Pennsylvania.
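Those thresholds amount to a simple rule that can be sketched in code. The sketch below is an illustrative reconstruction, not the researchers' actual pipeline; the data layout (a list of (group, link, timestamp) tuples) and the function name are assumptions for the example.

```python
from collections import defaultdict

def classify_groups(posts, min_repeats=5, bot_window_s=10):
    """Flag groups as likely bot-targeted using the rule described above:
    an identical link posted five or more times, with at least two of
    those posts landing within 10 seconds of each other.

    posts: iterable of (group, link, timestamp_in_seconds) tuples.
    Returns the set of groups that received a link matching the rule.
    """
    by_link = defaultdict(list)  # link -> [(timestamp, group), ...]
    for group, link, ts in posts:
        by_link[link].append((ts, group))

    likely = set()
    for link, entries in by_link.items():
        if len(entries) < min_repeats:
            continue  # link not repeated often enough to qualify
        entries.sort()
        # Were any two consecutive posts of this link within the window?
        if any(b[0] - a[0] <= bot_window_s
               for a, b in zip(entries, entries[1:])):
            likely.update(group for _, group in entries)
    return likely
```

A real analysis would also need the second half of the study's rule (the four-to-five-hour average gap that marks a group as *unlikely* bot-targeted), but the sketch shows why near-simultaneous, duplicated link shares are a usable bot signature.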

Next, posts in both types of groups were categorized as claiming that wearing a mask is ineffective or harmful, making conspiratorial claims about the DANMASK-19 study, or containing neither type of misinformation. Conspiratorial claims included allegations of covert corporate or political control, such as, “Corporate fact-checkers are lying to you! All this is to serve their Dystopian #Agenda2030 propaganda!”

Over 710 posts with a direct link to the DANMASK-19 study were shared in more than 560 Facebook groups. About 280 of these posts appeared in groups most likely targeted by bots; 17 of them were deleted. Only about 60 posts about DANMASK-19 were made in groups unlikely to have been targeted by bots, and three of those were deleted.

Almost 20 percent of the posts made to groups likely targeted by bots claimed wearing masks was harmful. Over 50 percent of the posts in these groups made conspiratorial claims about DANMASK-19, and nearly 44 percent made neither claim.

Less than 10 percent of the posts made to groups unlikely to have been targeted by bots claimed masks were harmful. Only about 20 percent made conspiratorial claims about DANMASK-19. Seventy-three percent of the posts in these groups made neither claim.

“If we get rid of bots, we can get rid of most of the misinformation today,” said Ayers, the vice chief of innovation in the division of infectious disease and global public health at UC San Diego. The way to do that is with legislation, he believes. People should demand that social media companies and legislators make sure that the social platforms they use are free of bots. “We need to have approaches to dealing with bots that are systematic and apolitical.”

The study is published in JAMA Internal Medicine.