When supporters of former Brazilian President Jair Bolsonaro stormed the country’s Congress, comparisons were made to the 2021 US Capitol Building riot. As in the American attack, the invaders were extremists acting in the name of an ousted right-wing president, and in both countries the insurrections followed elections rife with misinformation.
Regulating misinformation without suppressing free speech has been a persistent headache for governments around the world. Australia’s updated misinformation code still doesn’t capture all messenger services, while new laws in Turkey are “micro-managing and throttling social media” in the eyes of journalists.
If there was hope that social media platforms would step in where governments couldn’t, recent developments aren’t encouraging. On the misinformation front, Facebook’s parent company Meta is slashing jobs after underwhelming earnings and may be stepping away from news altogether, the ascendant TikTok is rife with misinformation, and Twitter has rarely escaped the headlines since billionaire Elon Musk took the reins.
Daniel Angus, Professor of Digital Communication at Queensland University of Technology, says Musk’s leadership has sent Twitter’s moderation backwards.
“He has trashed the platform’s online safety division and as a result misinformation is back on the rise,” he says.
"[Musk] looks to technological fixes to solve his problems. He’s already signalled to upping use of AI for Twitter’s content moderation. But this isn’t sustainable nor scalable, and is unlikely to be the silver bullet.”
Experts point out that no single billionaire can take down misinformation; it will take a multi-faceted response from governments, businesses, civil society activists and consumers.
Cash in, cash out
Disinformation agents often do their damage by buying up advertising space to post misleading content. Publishers hosting the ads either don’t know (if the process is automated) or don’t care (high click rates make them money). Busting the cycle requires creative thinking.
The Global Disinformation Index, a not-for-profit group that compiles data to help advertisers make informed decisions about where to place their brands, has found it effective to engage with legitimate brands whose ads appear next to disinformation.
Co-founder Daniel J. Rogers, who is also an Adjunct Assistant Professor at New York University, says: “Advertisers end up with their brands appearing alongside unsuitable content, harming their reputation and costing them money.
“We seek to balance out that equation. Advertisers were missing data on where on the web disinformation was occurring. With that information they could avoid those platforms, safeguarding their brand and directing funds away from disinformation peddlers.”
The US trust ratings agency Newsguard is also fighting disinformation and has audited most media outlets in Australia and New Zealand. Its ratings are sent to advertising agencies and brands, urging them to support only outlets that provide online safety for readers and support democratic institutions.
Striking a balance
Once misinformation spreads, correcting it is harder than it sounds. Anya Schiffrin, senior lecturer at Columbia University, says it’s crucial to catch false information before it escapes into the world.
“Corrections may also aggravate the problem: due to the exposure effect, audiences seeing something twice may believe it more; or corrections may only enhance distrust in the media,” she says.
“Establishing more prevalent fact-checking also helps create and support a culture of truth and signalling, and may build relationships among journalists. The creation of global standards for truth could help advertisers make better decisions about who they support, and build trust back in the media.”
Teaching fact from digital fiction
Ensuring citizens are media literate is a strong protective mechanism to limit the damage. This could include widespread state programs that help equip news consumers with the skills to think critically about the content they’re exposed to and help discern the credible from the untrustworthy.
Tanya Notley, Associate Professor of Media at Western Sydney University, says a media-literate citizen is a strong contributor to a democracy.
“A fully media-literate citizen will be aware of the many ways they can use media to participate in society. They will know how media are created, funded, regulated, and distributed and they will understand their rights and responsibilities in relation to data and privacy.”
Attack the causes
Misinformation and disinformation would be considerably defanged if people simply didn’t believe them. Conspiracy theories gained a foothold in many countries during the COVID pandemic. In Australia, the far-right movement latched on to conspiracy theories alleging widespread vaccine deaths and implicating tech magnate Bill Gates and then-US Chief Medical Advisor Anthony Fauci.
Mario Peucker, senior research fellow at Victoria University, suggests that understanding why some people gravitate towards conspiracy theories is the key to a strategy that blunts their influence.
“Only then can nuanced strategies be developed to prevent more people falling down misinformation rabbit-holes and, possibly, to restore a space where robust public debate can replace ideologically parallel communities.”