Many tech companies already have systems in place to try to combat misinformation, but have ramped up their efforts in light of the coronavirus. Here’s an overview of what some of the biggest players in social media are doing to stick to the facts.
Facebook

On March 18th, Facebook launched its Coronavirus Information Center, a hub that collects updates and content directly from sources such as the World Health Organization and other trusted media outlets. Users who follow the Coronavirus Information Center receive notifications about new content and important updates.

The majority of Facebook users, however, aren’t getting their coronavirus information from Facebook’s hub. Fraudulent posts pop up in every corner of the service, from the newsfeed to private groups that are totally unrelated to the virus. Handling those falls to the company’s more general misinformation guidelines. Here are the parameters laid out in Facebook’s coronavirus guidance: “Since January, we’ve applied this policy to misinformation about COVID-19 to remove posts that make false claims about cures, treatments, the availability of essential services or the location and severity of the outbreak. We regularly update the claims that we remove based on guidance from the WHO and other health authorities.”

For content like conspiracy theories, the company relies on its regular network of roughly 55 fact-checking organizations to evaluate the content’s merit. If something is deemed inaccurate or misleading, fewer users see it and it’s accompanied by an alert or pop-up noting its possible inaccuracy.

As for advertising, Facebook has put a new policy in place to prevent misleading messages: “…we are now prohibiting ads for products that refer to the coronavirus in ways intended to create a panic or imply that their products guarantee a cure or prevent people from contracting it.”

While Facebook tries to automatically intercept this kind of misleading message before it reaches many people, the approach’s efficacy has limits. Content shared within private groups can still regularly slide under the radar, which is why you might see more bad coronavirus info popping up in places like garage sale communities and other groups. You should still report those posts when you see them to flag them for the company’s attention. But, with increased volume, Facebook says it’s prioritizing content that could cause harm or directly dissuade people from getting treatment, so there may be a delay in reviewing reports. If your content gets flagged, you can still appeal its removal and Facebook will take note of your disagreement, but it likely won’t review the content again for a chance at reinstatement due to the sheer volume of reports and its available staffing.
WhatsApp

While the encrypted messaging service is part of Facebook, its approach has a few specific tweaks. Earlier this week, WhatsApp announced that it would expand its program limiting forwarded messages in order to stop bad info from going viral. The app has been labeling heavily forwarded messages with a double-arrow icon to clearly show users that the information didn’t come from a close contact. Now, users can only forward this kind of content to one chat at a time rather than spamming it out to their entire list of contacts.

WhatsApp is also asking users to forward potentially misleading or harmful information spreading on its service to flag it for review. Like Facebook proper, WhatsApp relies on a selection of fact-checkers to evaluate posts, and the high volume will likely affect response times. Some reports peg WhatsApp as particularly fertile ground for coronavirus misinformation. Even Irish Prime Minister Leo Varadkar tweeted to urge people not to share “unverified info on WhatsApp groups.”
Twitter

Twitter typically relies on automated systems to evaluate the information in tweets, and it’s ramping up those AI efforts during the coronavirus pandemic. As the service receives more reports with its workers spread out in remote offices, Twitter says it’s increasing its automated efforts to try to identify misleading content before users report it. There’s an element of mystery to these efforts, but we do know that Twitter will not permanently suspend users based purely on automated judgment calls made without human evaluation. According to Twitter, users should continue to report misleading content, but should expect a longer delay between report and response due to the increased volume. Coronavirus-related posts will still go through human review teams, whether they’re flagged by AI or by other users.
YouTube
YouTube’s biggest coronavirus issue landed this week when a speaker on a popular livestream with 65,000 viewers falsely claimed that there’s a link between 5G wireless networks and the spread of COVID-19. The speaker also failed to condemn incidents in the UK in which people had set fire to 5G towers based on that conspiracy theory. YouTube deleted the video after the stream ended, but the event exposed a particularly tricky position for the video behemoth. The 5G conspiracy reportedly falls under what YouTube considers “borderline content”: it doesn’t directly encourage people to take harmful actions or dissuade them from seeking treatment, but it’s unfounded and, in this case, simply false. Now, YouTube is directly restricting the reach of videos promoting the 5G conspiracy on top of its normal practices. Under those normal practices, searching for coronavirus topics in the US produces a notification at the top of the page guiding viewers to the CDC’s official website; in other countries, the alert points viewers to the most appropriate local source, such as the NHS in the UK.
Google

Search Google for “coronavirus” and you’ll arrive at a curated content hub filled with trusted sources and confirmed statistics, as well as direct links to health organizations. The hub also promotes local and health authorities’ accounts on other social media platforms like Twitter, and includes a section called “common questions” with immediate answers to some of the most popular queries.
Pinterest

Last year, Pinterest took a strong stance against misinformation regarding vaccines: searching for any terms related to “anti-vax” directs users to official sources of information like the WHO. The company has taken a similarly hard line against coronavirus misinformation. Searching for “coronavirus” now brings up advisory messages as well as a grid made up almost exclusively of informational graphics from reputable sources. Even searching for terms like “coronavirus masks,” which would typically be right in the service’s DIY wheelhouse, turns up the same results.