No Silver Bullet for Fake News: Social Media Needs a New Regulatory Framework. By Sara Kachwalla of BPP University, Best in Category winner for the vLex International Writing Competition category: Influence, Law & Technology.

On 4 December 2016, Edgar Maddison Welch fired three shots inside Comet Ping Pong in Washington DC, wrongly believing that the pizzeria housed a satanic child-trafficking ring run by Hillary Clinton. The source of his conviction? A conspiracy theory spun from a series of leaked emails between Clinton and her campaign chairman, John Podesta.[1] Although the theory has been widely debunked, no factual news story has managed to stem the torrent of posts from resolute believers. Pizzagate illustrates the dangers of fake news all too clearly: a deep mistrust of traditional sources of authority, a decline in the visibility of the legitimate press and an information environment where false claims are routinely defended as absolute facts.

These outcomes are not accidental; they are by-products of a business model deliberately designed to monetise human engagement, even at the cost of polarisation, foreign interference and manipulation. With over 3.6 billion users worldwide, social media wields an unprecedented amount of influence, yet these tech companies have faced remarkably little regulatory oversight.[2] As artificial intelligence becomes more sophisticated and deepfakes more realistic, a resilient regulatory framework increasingly appears to be the only sustainable way forward.[3] At stake are democracy, truth and, ultimately, the shared reality required to tackle existential problems such as climate change, rising inequality and a global pandemic.

The Real Cost of Free

Although social media companies are often viewed as free platforms, they are more accurately described as giant advertising companies that profit from user engagement: the longer a user stays engaged, the more exposure the platform’s advertisements receive.[4] This extractive business model is highly conducive to fake news and adversely affects how information is spread and consumed.[5]
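
The incentive structure is easy to see with a back-of-the-envelope model. The sketch below uses invented figures for audience size, session length, ad load and price per impression (none are drawn from any real platform) to show why every additional minute of engagement translates directly into revenue:

```python
# Toy model of attention-based advertising revenue.
# All numbers are illustrative assumptions, not real platform figures.

def daily_ad_revenue(users: int, minutes_per_user: float,
                     ads_per_minute: float, revenue_per_ad: float) -> float:
    """Revenue scales linearly with time spent on the platform, so every
    extra minute of user attention is directly monetisable."""
    return users * minutes_per_user * ads_per_minute * revenue_per_ad

# Doubling the average session time doubles revenue, all else being equal.
print(daily_ad_revenue(1_000_000, 30, 0.5, 0.01))  # 150000.0
print(daily_ad_revenue(1_000_000, 60, 0.5, 0.01))  # 300000.0
```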

By using algorithms to maximise engagement, social media inadvertently spreads misinformation, since clickbait and scandal are more likely to attract attention than accurate reporting. Conversely, dull but verified information garners less engagement and is therefore shared less frequently. These effects are exacerbated by design features such as infinite scrolling and auto-play, which push users down rabbit holes filled with inflammatory conspiracy theories.[6] Ultimately, by personalising content, social media’s distributional framework is inherently divisive and inhibits the plurality of information that is crucial to challenging opinions and fostering balanced perspectives.[7]
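
A minimal sketch makes the mechanism concrete. The ranking rule below is a deliberate simplification rather than any platform’s actual algorithm: posts are ordered purely by predicted engagement, and because accuracy never enters the score, a sensational false story outranks a verified but unexciting one.

```python
from dataclasses import dataclass

@dataclass
class Post:
    title: str
    predicted_clicks: float  # the model's estimate of engagement
    verified: bool           # has the claim been fact-checked as accurate?

def rank_feed(posts: list[Post]) -> list[Post]:
    # Order purely by predicted engagement; note that `verified`
    # plays no part whatsoever in the ranking.
    return sorted(posts, key=lambda p: p.predicted_clicks, reverse=True)

feed = rank_feed([
    Post("Shocking secret THEY don't want you to know", 0.12, verified=False),
    Post("Council approves routine budget report", 0.01, verified=True),
])
print([p.title for p in feed])  # the unverified clickbait story comes first
```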

Evidently, social media’s business model is fundamentally misaligned with the public interest. Yet the current whack-a-mole regulatory approach, with its overreliance on existing paradigms, has struggled to provide effective solutions: the platform-publisher debate fails to recognise that neither term accurately describes social media; antitrust policy must contend with the fact that dangerous monopolies do not always manifest as high prices[8]; and a content moderation approach ignores the dangers of allowing governments[9] and unelected tech companies[10] to define misinformation. Because social media’s unique characteristics cannot be adequately grasped by existing concepts, it has eluded systematic regulation.[11]

With Great Power Comes Great Responsibility

Social media companies influence millions of people every day, but, as repeated scandals have vividly illustrated, they have not shown a commensurate level of responsibility. As long as their unregulated business models drive decision-making, fake news will continue to pervade social media. A novel regulatory framework enforced by an independent regulator will therefore be key to tackling fake news effectively and sustainably. Although multiple areas of law will be required to address the issues at hand, legislation should be underpinned by three objectives: regulating reach, protecting users and fostering transparency.

Regulating Reach

A key flaw in social media’s distributional framework is that it artificially amplifies and personalises content in order to keep users engaged. A practical solution could lie in distinguishing between freedom of speech and freedom of reach. Under this approach, regulation would focus on making controversial content less visible by changing how social media algorithms operate and by making it harder for users to organically spread fake news.[12] This would also prevent proposed regulation from becoming bogged down in concerns about free speech. Lastly, the reach of foreign actors and their ability to weaponise misinformation must be combated by defining the lines they must not cross and enforcing government-led penalties when they do.
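
As a sketch of how the speech/reach distinction might operate in practice (the damping factor, the read-before-reshare rule and the function names are all hypothetical, not drawn from any statute or platform policy), a disputed post would remain online but lose its algorithmic amplification:

```python
def distribution_score(engagement: float, disputed: bool,
                       damping: float = 0.1) -> float:
    """Speech is untouched: the post stays up and can still be found.
    Reach is regulated: disputed content receives only a fraction of the
    amplification its raw engagement would otherwise command."""
    return engagement * (damping if disputed else 1.0)

def can_reshare(disputed: bool, user_opened_post: bool) -> bool:
    # Friction against organic spread: a disputed post can be reshared
    # only after the user has actually opened and read it.
    return (not disputed) or user_opened_post

print(distribution_score(0.9, disputed=True))   # damped to a tenth of its raw score
print(can_reshare(disputed=True, user_opened_post=False))  # False
```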

Protecting Users

A business model focused on commoditising its users will inevitably affect their welfare.[13] This warrants protective regulation, particularly to preserve user autonomy and dismantle the psychological environment that allows fake news to thrive.[14] Essential to this will be recognising that certain features of social media may need to be vetted in the same way that new technologies must undergo substantial testing before they can enter the market. Regulation should also empower users by allowing them to decide how their data is used and how their news feeds are curated.[15] Tristan Harris’s research on ethical user design[16] and Jack Balkin’s work on information fiduciaries[17] provide illustrative examples of how these aims can be achieved in practice.

Fostering Transparency

Although social media companies harbour vast volumes of personal data about their users, little is known about their internal processes. This asymmetry is problematic: legislators must first understand how content moderation policies, design features and content algorithms contribute to spreading misinformation before they can draft effective regulation. Imposing statutory duties on social media companies could foster transparency by requiring them to set out their content moderation policies clearly, disclose how algorithms are used to filter content, and publish regular reports detailing the amount of misinformation circulating on their platforms.

Implementing a new legal framework will not be an instant panacea for fake news, and the rapidity of technological change will continue to pose enduring challenges. But an intelligent and impartial regulatory body, willing to learn from experience and equipped with the necessary legal and technological expertise, may just provide the best way forward.

[1] Wendling, Mike. “The Saga of ‘Pizzagate’: The Fake Story That Shows How Conspiracy Theories Spread.” BBC News, 2 Dec. 2016, www.bbc.com/news/blogs-trending-38156985. Accessed 14 Nov. 2020.
[2] Clement, Jessica. “Number of Social Media Users 2025.” Statista, 24 Nov. 2020, www.statista.com/statistics/278414/number-of-worldwide-social-network-users/. Accessed 14 Nov. 2020.
[3] Citron, Danielle, and Robert Chesney. “Deepfakes and the New Disinformation War.” Stanford Center for Internet and Society, 11 Dec. 2018, cyberlaw.stanford.edu/publications/deepfakes-and-new-disinformation-war. Accessed 19 Nov. 2020.
[4] Kim, Sang Ah. “Social Media Algorithms: Why You See What You See.” Georgetown Law Technology Review, 4 Dec. 2017, georgetownlawtechreview.org/social-media-algorithms-why-you-see-what-you-see/GLTR-12-2017/. Accessed 16 Nov. 2020.
[5] Alang, Navneet. “Algorithms Have Gotten out of Control. It’s Time to Regulate Them.” The Week, 3 Apr. 2019, theweek.com/articles/832948/algorithms-have-gotten-control-time-regulate. Accessed 16 Nov. 2020.
[6] Sample, Ian. “Study Blames YouTube for Rise in Number of Flat Earthers.” The Guardian, 17 Feb. 2019, www.theguardian.com/science/2019/feb/17/study-blames-youtube-for-rise-in-number-of-flat-earthers. Accessed 21 Nov. 2020.
[7] Spohr, Dominic. “Fake News and Ideological Polarization: Filter Bubbles and Selective Exposure on Social Media.” Business Information Review, vol. 34, no. 3, Sept. 2017, pp. 150–160. doi:10.1177/0266382117722446. See also: Berman, Ron, and Zsolt Katona. “Curation Algorithms and Filter Bubbles in Social Networks.” Marketing Science, vol. 39, no. 2, 2 Feb. 2020, pp. 296–316. doi:10.1287/mksc.2019.1208.
[8] Rosenquist, James, Fiona M. Scott Morton, and Sam N. Weinstein. “Addictive Technology and Its Implications for Antitrust Enforcement.” Yale School of Management, Sept. 2020, som.yale.edu/sites/default/files/Addictive-Technology.pdf. Accessed 23 Nov. 2020.
[9] The Editorial Board. “Legislation against Fake News Is Open to Abuses.” Financial Times, 7 Apr. 2019, www.ft.com/content/b1d78fc2-57b4-11e9-a3db-1fe89bedc16e. Accessed 4 Dec. 2020.
[10] York, Jillian. “The Global Impact of Content Moderation.” ARTICLE 19, 7 Nov. 2020, www.article19.org/resources/the-global-impact-of-content-moderation/. Accessed 3 Dec. 2020.
[11] Zuboff, Shoshana. The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power. Profile Books, 2018. See also: Warner, Mark R. “Potential Policy Proposals for Regulation of Social Media and Technology Firms.” 26 July 2019, www.ftc.gov/system/files/documents/public_comments/2018/08/ftc-2018-0048-d-0104-155263.pdf. Accessed 25 Nov. 2020.
[12] DiResta, Renee. “Free Speech Is Not the Same as Free Reach.” Wired, 30 Aug. 2018, www.wired.com/story/free-speech-is-not-the-same-as-free-reach/. Accessed 21 Nov. 2020.
[13] Lau, Annie Y. S., et al. “Social Media in Health — What Are the Safety Concerns for Health Consumers?” Health Information Management Journal, vol. 41, no. 2, 1 June 2012, pp. 30–35. doi:10.1177/183335831204100204. Accessed 26 Nov. 2020.
[14] Moravec, Patricia, Randall Minas, and Alan R. Dennis. “Fake News on Social Media: People Believe What They Want to Believe When It Makes No Sense at All.” Kelley School of Business Research Paper No. 18-87, 6 Nov. 2018. doi:10.2139/ssrn.3269541. Accessed 21 Nov. 2020. See also: Bakir, Vian, and Andrew McStay. “Fake News and the Economy of Emotions: Problems, Causes, Solutions.” Digital Journalism, vol. 6, no. 2, 20 July 2018, pp. 154–175. doi:10.1080/21670811.2017.1345645. Accessed 21 Nov. 2020.
[15] See: Seargeant, Philip, and Caroline Tagg. “Social Media and the Future of Open Debate: A User-Oriented Approach to Facebook’s Filter Bubble Conundrum.” Discourse, Context & Media, vol. 27, Mar. 2019, pp. 41–48. doi:10.1016/j.dcm.2018.03.005. Accessed 24 Nov. 2020.
[16] Harris, Tristan. “Our Brains Are No Match for Our Technology.” The New York Times, 5 Dec. 2019, www.nytimes.com/2019/12/05/opinion/digital-technology-brain.html. Accessed 1 Dec. 2020.
[17] Balkin, Jack M. “The Fiduciary Model of Privacy.” Harvard Law Review, 9 Oct. 2020, harvardlawreview.org/2020/10/the-fiduciary-model-of-privacy/. Accessed 1 Nov. 2020.

