
Section 230 and the erosion of truth

Written by
Dan Gee
Managing Director UK
November 18, 2024

A view on how we can level the playing field for content accountability.

Social media platforms, once heralded as the great levellers of information and democracy, have morphed into ungoverned digital empires. Section 230 of the U.S. Communications Decency Act has enabled this transformation by granting platforms immunity from liability for user-generated content. While the law has spurred innovation and democratised access to information, it has also had a devastating side effect: it has been a major factor in fuelling the decline of quality journalism, facilitating the spread of disinformation, and deepening societal division.

The time has come for a radical rethink. Platforms should no longer operate in the shadows, free from the accountability that publishers and broadcasters face. Instead, they must embrace radical transparency in content moderation and face regulatory mandates to deprioritise sensationalised content in their algorithms. Here’s why this is essential and how it could work.

The Defunding of Quality Journalism

Journalism is key to a healthy democracy. Investigative reporting, fact-checking, and nuanced analysis are crucial services, but they are costly. Historically, these efforts were funded by advertising revenues, a model that sustained newsrooms and gave journalists the time and wherewithal to pursue truth, to speak truth to power, and to offer a genuinely plural range of opinions.

But the rise of social media has decimated this revenue stream. Platforms like Facebook and Google inserted themselves into the audience journey and made publishers dependent on them. Publishers became subservient to the algorithms, producing content engineered to win clicks in search and social and feed the audience machine that drove ad revenue. Over time, though, the platforms choked off that audience supply, keeping content inside their own walled gardens wherever possible. This, together with the relentless rise of user-generated content, gave them the fuel to grow massive global audiences. From there the platforms monopolised online advertising, offering cheaper, more targeted solutions to advertisers. They don’t create content, but they profit from it, often at the expense of those who do. The result?

National and local papers shut down and newsrooms were disbanded. Between 2004 and 2019, over 1,800 newspapers shuttered in the US, the majority of them weekly local titles. In the UK, almost 220 local papers closed between 2005 and 2015.

Where original journalism once thrived, our lives are now filled with clickbait, low-effort regurgitated content, and outright misinformation.

This isn't just a media industry problem; it's a societal one. Without a robust press to hold power to account, misinformation spreads unchecked, and public trust in institutions erodes.

Section 230 and the Digital Disinformation Wildfire

The very architecture of social media platforms is designed to amplify engagement. Unfortunately, what engages us isn’t necessarily what informs us. Disinformation, conspiracy theories, and sensationalist headlines spread faster than verified news because they provoke stronger emotional reactions.

Consider this:

  • A 2018 MIT study found that false news stories on Twitter reached people roughly six times faster than true ones.
  • During the COVID-19 pandemic, misinformation about vaccines and treatments proliferated, undermining public health efforts.

Social media platforms, shielded by Section 230, are not held accountable for the societal harm caused by this spread. They benefit from user engagement metrics—likes, shares, and comments—that disinformation generates, yet they bear none of the consequences when real-world harm follows.

A Divided Society

If disinformation is the accelerant, algorithms are the match. Social media platforms deploy complex algorithms to maximise user engagement. These algorithms prioritise content that keeps users hooked, and unfortunately, divisive and sensationalist material fits the bill better than balanced reporting.

Echo chambers form as users are fed content that aligns with their existing beliefs, further entrenching divisions. The result?

  • Polarised political landscapes where compromise becomes impossible.
  • Erosion of shared realities, as facts become malleable and subjective.

From Brexit to climate change, public discourse has been hijacked by a relentless tide of sensationalism, misinformation, and outright lies. Misinformation in the mainstream has grown bolder over time; a prime example was the coining of the phrase “alternative facts” by Kellyanne Conway to defend the provable falsehoods of Donald Trump’s then White House Press Secretary, Sean Spicer, about the attendance at the 2017 inauguration. Collectively we all know that lies are damaging. They mislead, they enrage, they divide. Yet we have become less able to discern truth from falsehood, partly because the journalistic structures that would call out lies and hold power to account have been defunded. Meanwhile, the algorithms driving this chaos remain opaque, their mechanics hidden behind corporate walls.

Radical Transparency and Algorithmic Reform

But the platforms are not immune to public sentiment. Twitter’s audiences and revenues are sharply down. Facebook has seen a notable decline in usage among younger demographics, particularly teenagers and young adults: a 2022 Pew Research Center study found that only 32% of U.S. teens aged 13-17 reported using Facebook, down from 71% in 2014-2015. So there is every chance that today’s empires are on their way to becoming tomorrow’s ashes. If platforms are to regain public trust and fulfil their societal responsibilities, they must adopt radical transparency in content moderation. Here’s what that could look like:

  1. Public Disclosure of Moderation Policies: Platforms should publish detailed, easily accessible reports on how they moderate content. This includes the criteria for content removal, the use of automation vs. human moderators, and data on the volume and type of content flagged or removed (a sketch of what such a report could contain follows this list).
  2. Algorithmic Transparency: Users deserve to know how algorithms curate their feeds. This involves disclosing the factors that influence content prioritisation—be it engagement metrics, user preferences, or advertiser interests.
  3. Independent Audits: Regulators and third-party auditors should be allowed to inspect platforms’ content moderation and algorithmic practices. This ensures accountability and prevents platforms from gaming the system.
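
To make point 1 concrete, here is a minimal, purely illustrative sketch of what a machine-readable moderation transparency report could contain. The structure, field names, and figures are hypothetical assumptions for illustration, not a description of any platform’s actual disclosures.

    from dataclasses import dataclass, field

    @dataclass
    class ModerationReport:
        """Hypothetical quarterly transparency report a platform could publish."""
        period: str               # e.g. "2024-Q3"
        policy_criteria_url: str  # where the removal criteria are published
        items_flagged: int        # total pieces of content flagged
        items_removed: int        # total pieces of content removed
        removals_by_category: dict[str, int] = field(default_factory=dict)
        automated_decision_share: float = 0.0  # fraction of removals made without human review
        appeals_received: int = 0
        appeals_upheld: int = 0

    # Example with invented numbers, purely for illustration:
    report = ModerationReport(
        period="2024-Q3",
        policy_criteria_url="https://example.com/moderation-policy",
        items_flagged=1_200_000,
        items_removed=240_000,
        removals_by_category={"disinformation": 90_000, "spam": 110_000, "harassment": 40_000},
        automated_decision_share=0.72,
        appeals_received=15_000,
        appeals_upheld=3_200,
    )

Publishing data in a consistent, structured form like this is what would let independent auditors (point 3) compare platforms and track changes over time.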

Transparency alone isn’t enough, though. To combat the harmful effects of sensationalist content, regulators must step in and mandate changes to the algorithms themselves. Here’s what that might entail:

  • Deprioritise Sensationalism: Platforms should be required to reduce the visibility of sensationalist and divisive content. Instead of prioritising what’s most engaging, algorithms should favour what’s most informative and constructive.
  • Promote Quality Content: Regulators could incentivise platforms to highlight content from reputable news sources. This would help combat the "clickbait economy" and ensure users are exposed to accurate, balanced reporting.
  • Risk Assessments for Harmful Content: Similar to the EU’s Digital Services Act, platforms could be required to conduct regular risk assessments to identify and mitigate the spread of harmful content. Fines for non-compliance should be substantial enough to deter inaction.
  • User-Controlled Algorithms: Give users more control over their feeds. Platforms could offer algorithm-free feeds or allow users to customise their algorithmic preferences, empowering them to prioritise quality content over engagement-driven material. WeAre8 is already leading the way here; a simplified sketch of how such controls might work follows this list.
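
As a purely illustrative sketch of the last two ideas, the snippet below shows how a feed-ranking step could penalise sensationalised content and expose its weights to the user rather than hiding them. Every class, score, and number here is a hypothetical assumption, not a description of how any real platform ranks its feed.

    from dataclasses import dataclass

    @dataclass
    class FeedItem:
        item_id: str
        engagement_score: float      # predicted likes/shares/comments, scaled 0..1
        sensationalism_score: float  # output of a hypothetical classifier, 0..1
        source_quality_score: float  # e.g. trust rating of the publisher, 0..1

    @dataclass
    class UserRankingPreferences:
        """Weights the user can see and change, instead of a hidden objective."""
        engagement_weight: float = 0.3
        quality_weight: float = 0.5
        sensationalism_penalty: float = 0.6

    def rank_feed(items: list[FeedItem], prefs: UserRankingPreferences) -> list[FeedItem]:
        """Order a feed so quality is rewarded and sensationalism is penalised."""
        def score(item: FeedItem) -> float:
            return (prefs.engagement_weight * item.engagement_score
                    + prefs.quality_weight * item.source_quality_score
                    - prefs.sensationalism_penalty * item.sensationalism_score)
        return sorted(items, key=score, reverse=True)

    # A user who wants a quality-only feed could simply set:
    calm_prefs = UserRankingPreferences(engagement_weight=0.0,
                                        quality_weight=1.0,
                                        sensationalism_penalty=1.0)

The point of the sketch is not the particular weights but that they exist in one visible place, where a regulator could audit them and a user could change them.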

I strongly believe there should be proactive efforts to bring this into the open: to ensure that profits are won through innovation and excellence in enterprise, rather than extracted from the rubble of societal decay and discord. The steps I propose won't solve every problem overnight, but action is necessary if we are to trust the information in our feeds and have faith that the platforms operate responsibly.

A Balanced Approach to Free Speech

Critics will argue that increased regulation and transparency could stifle free speech. But this isn’t about silencing voices; it’s about curbing the amplification of harmful and false content. Free speech remains intact; the difference is that platforms won’t be incentivised to elevate the loudest, most divisive voices simply because they drive engagement.

Moreover, platforms already moderate content extensively—they just do so in opaque ways that often seem arbitrary. Radical transparency would bring much-needed clarity to these processes, ensuring moderation is consistent and fair.

A Global Imperative

While Section 230 is a U.S. law, its effects are felt globally. Social media platforms operate across borders, influencing societies in the UK, Europe, and beyond. Regulators are already taking steps to hold these platforms accountable: the UK through its Online Safety Act, and the EU through its Digital Services Act.

But unilateral action is not enough. The global nature of these platforms requires a coordinated international effort to set standards for transparency, algorithmic reform, and accountability.

Conclusion: The Path Forward

The unchecked power of social media platforms has reshaped our information ecosystem in ways that undermine democracy, public trust, and societal cohesion. Section 230, while foundational to the internet’s growth, has shielded these platforms from the accountability they desperately need.

It’s time for a new approach. Radical transparency in content moderation and regulatory mandates for algorithmic reform are not just desirable—they are essential. By deprioritising sensationalist content and promoting quality journalism, we can begin to reverse the damage and rebuild a more informed, less divided society.

The question isn’t whether we can afford to regulate social media; it’s whether we can afford not to.