Increasingly, the struggle between science and pseudoscience in the marketplace of ideas is taking place online. All sides have reaped the benefits of online communities and social media, and it is difficult to quantify the exact effect. We are all still trying to figure out the new world order that has emerged from our technology.
But it is certainly true that it is easier to spread misinformation than ever before. Such concerns were raised every time a new medium was introduced, even the printing press. But that does not mean the concerns are not legitimate, or that we do not need to develop countermeasures. In fact, for those of us who believe the best medicine is based on solid science, SBM is our primary countermeasure. Public education is a good first approach, with no real downsides. There is, I would argue, no better fix than simply having a well-informed, scientifically literate public with good critical thinking skills.
The question remains, however, should we just accept the reality of massive demonstrable misinformation on social media? The latest episode in this saga involves antivaxxer ads on Facebook.
Recently Facebook enacted rules designed to limit the spread of clear and harmful misinformation on its platform. In case you are wondering, this is not a First Amendment issue, because Facebook is a private company, not any part of the government. The real question is whether a company like Facebook, with a dominant market share, should be choosing which speech to allow. But in reality social media platforms are generally not a completely level playing field anyway, because they use algorithms to determine what content to spread, promote, and curate for specific users. So recent efforts by social media giants to stem misinformation have focused on tweaking those algorithms to value information virtues (like accuracy, intellectual honesty, and transparency) over sensationalism.
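As a purely hypothetical sketch of what "tweaking the algorithms" could look like, consider a ranking score that blends an engagement signal with a quality signal. The signals and weights below are invented for illustration; Facebook has not disclosed its actual ranking function.

```python
# A toy sketch of what "tweaking a ranking algorithm" could mean.
# The signals and weights are invented for illustration only.

from dataclasses import dataclass

@dataclass
class Post:
    engagement: float  # normalized 0-1; sensational content scores high
    quality: float     # hypothetical accuracy/transparency signal, 0-1

def rank_score(post: Post, quality_weight: float = 0.0) -> float:
    """Higher score means wider distribution. Raising quality_weight
    shifts the feed from pure engagement toward information quality."""
    return (1 - quality_weight) * post.engagement + quality_weight * post.quality

sensational = Post(engagement=0.9, quality=0.2)
accurate = Post(engagement=0.4, quality=0.9)

# Pure engagement ranking promotes the sensational post...
print(rank_score(sensational) > rank_score(accurate))            # True
# ...while weighting quality at 0.7 flips the ordering.
print(rank_score(sensational, 0.7) > rank_score(accurate, 0.7))  # False
```

The hard part, of course, is not the arithmetic but producing a trustworthy quality signal in the first place.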
There are multiple layers to such efforts, and one layer is to have rules that determine which ads to allow. A private company is under no obligation to promote ads that violate its policies, such as ads that are essentially fraudulent. When it comes to health care information, fraudulent ads can be directly harmful, and one might argue that a company like Facebook has an obligation to police its own advertising for harmful, fraudulent misinformation.
Facebook has started by focusing on anti-vaccine misinformation, of which there is a great deal online. Here are their recently published rules:
- We will reduce the ranking of groups and Pages that spread misinformation about vaccinations in News Feed and Search. These groups and Pages will not be included in recommendations or in predictions when you type into Search.
- When we find ads that include misinformation about vaccinations, we will reject them. We also removed related targeting options, like “vaccine controversies.” For ad accounts that continue to violate our policies, we may take further action, such as disabling the ad account.
- We won’t show or recommend content that contains misinformation about vaccinations on Instagram Explore or hashtag pages.
- We are exploring ways to share educational information about vaccines when people come across misinformation on this topic.
- Update on April 26, 2019 at 10AM PT: We may also remove access to our fundraising tools for Pages that spread misinformation about vaccinations on Facebook.
These are reasonable and, I think, defensible rules, but the question always arises: how will this be determined, and by whom? The first layer of defense is an algorithm. Here is where experienced skeptics may cringe a little. Previously SBM experimented with algorithm-driven advertising to help support the costs of this page (costs which are substantial, in a good way, because we have decent traffic; we have since eliminated all ads and rely entirely on Patreon for support). Part of the problem was that the advertising algorithms had a difficult time telling the difference between a skeptical and a gullible treatment of a particular topic. They only knew our content was dealing with the topic.
In other words, if we publish an article debunking homeopathy, the algorithms will see it as an article about homeopathy, and then serve ads promoting homeopathy and other alternative medicine pseudoscience. I have run into versions of this problem many times. Just writing about pseudoscience tags you with the pseudoscience.
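To make this failure mode concrete, here is a deliberately simplified sketch of topic-based matching. It is my own illustration, not Facebook's actual system; real ad algorithms are far more sophisticated, but they share this blind spot of matching on topic rather than stance.

```python
# A toy, stance-blind topic matcher (illustration only).
TOPIC_KEYWORDS = {"homeopathy", "vaccines", "detox"}

def detect_topics(text: str) -> set:
    """Return every topic keyword found in the text, ignoring stance."""
    return TOPIC_KEYWORDS & set(text.lower().split())

debunking = "why homeopathy is pseudoscience and does not work"
promotion = "homeopathy cured my cold so buy these remedies today"

# Both register as "about homeopathy", so both can be served
# homeopathy ads; the debunker gets tagged with the pseudoscience.
print(detect_topics(debunking))  # {'homeopathy'}
print(detect_topics(promotion))  # {'homeopathy'}
```

Distinguishing the two requires stance detection, a much harder language-understanding problem than keyword matching.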
As you might have guessed by now – this is exactly what is happening with the Facebook algorithm. As The Daily Beast reports:
This month, the Idaho Department of Health and Welfare, the state’s official health department, bought 14 ads to promote a statewide program providing free pediatric vaccinations. Facebook removed all of them.
We can consider this a “false positive” problem in the algorithm. In addition, there have been some prominent “false negative” issues:
During the same time period, Children’s Health Defense, an anti-vaccine nonprofit founded and chaired by the nation’s most prominent vaccine conspiracy theorist, Robert F. Kennedy Jr., successfully placed more than 10 ads stoking unfounded fear about vaccines and other medical conspiracy theories.
There have been other cases as well. In each case a human had to get involved, to review the choices made by the algorithm and correct them. To be clear, this is not a disaster. It’s more of a hiccup. But it does illustrate part of the problem: the antivaxxers are adaptive. They figure out which words and phrases trigger the algorithm, and they avoid them. Those promoting public health, meanwhile, may be naïve to such issues.
This problem has a fix. In the short term, Facebook will have to actively review individual cases and correct them. In the long term, they will need to keep tweaking their algorithm to stay one step ahead.
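As a hypothetical illustration of why the tweaking never ends, consider a static phrase blocklist (my own toy example, not Facebook's actual policy engine). It is trivially evaded by rephrasing, and it can simultaneously snag legitimate public health content, as in the Idaho case above.

```python
# A static blocklist: easy to evade, prone to false positives.
BLOCKED_PHRASES = ["vaccine injury", "vaccines cause autism"]

def is_flagged(ad_text: str) -> bool:
    """Flag an ad only if it contains a blocked phrase verbatim."""
    lowered = ad_text.lower()
    return any(phrase in lowered for phrase in BLOCKED_PHRASES)

print(is_flagged("The hidden truth about vaccine injury"))          # True: caught
print(is_flagged("The hidden truth about vaccine 'damage'"))        # False: evaded
print(is_flagged("State vaccine injury reporting for clinicians"))  # True: false positive
```

The adaptive party keeps probing and rephrasing until their ads pass, while the naïve party gets caught using the flagged phrases in good faith.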
But the real long-term issue is much bigger. Anti-vaccine misinformation is often demonstrable nonsense, provably false; Facebook is on firm ground and can likely cite official expert panels to back up its position. But what process do they have to determine which other issues qualify for the anti-vaccine treatment? How transparent should this process be? Where should they set their threshold?
And in the end, are we all comfortable with a few giant corporations having this much power over the flow of information through our society (a society that is increasingly dependent on that flow)? I am glad that Facebook has decided to fight anti-vaccine misinformation. But what if they decided not to? What if they determined that it wasn’t their job to police the quality of information on their platform (and there are many people who think this should be the case)? What if they make a determination that I disagree with?
I don’t have the answers. These are true dilemmas. But I do think we collectively need to consider carefully the current situation and where we should go from here, and we probably should be experimenting with possible fixes. Early attempts to counteract misinformation should be considered experiments: we should carefully monitor how they go, and then make adjustments as necessary.