How to make the internet safe for democracy
14 Jan 2020

In October, a confrontation erupted between one of the leading Democratic candidates for the US presidency, Senator Elizabeth Warren, and Facebook CEO Mark Zuckerberg. Warren had called for a breakup of Facebook, which Zuckerberg said in an internal speech represented an ‘existential’ threat to his company. Facebook was then criticised for running an ad by President Donald Trump’s re-election campaign that carried a manifestly false claim charging former vice president Joe Biden, another leading Democratic contender, with corruption. Warren trolled the company by placing her own deliberately false ad.

This dust-up reflects the acute problems social media pose for democracy. The internet has in many respects displaced legacy media like newspapers and television as the leading source of information about public events, and the place where they’re discussed. But social media have enormously greater power to amplify certain voices and to be weaponised by forces hostile to democracy, from Russian trolls to American conspiracy theorists. This has led, in turn, to calls for the government to regulate internet platforms in order to preserve democratic discourse itself.

But what forms of regulation are constitutional and feasible? The US Constitution’s first amendment contains very strong free-speech protections. While many conservatives have accused Facebook and Google of ‘censoring’ voices on the right, the amendment applies only to government restrictions on speech; law and precedent protect the ability of private parties like the internet platforms to moderate their own content. In addition, section 230 of the 1996 Communications Decency Act exempts them from private liability that would otherwise deter them from curating content.

The US government, by contrast, faces strong restrictions on its ability to censor content on the internet in the direct way that, say, China does. But the United States and other developed democracies have nonetheless regulated speech in less intrusive ways. This is particularly true with legacy broadcast media: governments have shaped public discourse through their ability to license broadcast channels, prohibit certain forms of speech (like terrorist incitement or hardcore pornography) and establish public broadcasters with a mandate to provide reliable and balanced information.

The original mandate of the Federal Communications Commission was not simply to regulate private broadcasters, but to support a broad ‘public interest’. This evolved into the FCC’s fairness doctrine, which enjoined TV and radio broadcasters to carry politically balanced coverage and opinion. The constitutionality of this intrusion into private speech was challenged in the 1969 case Red Lion Broadcasting Co. v FCC, in which the Supreme Court upheld the commission’s authority to compel a radio station to carry replies to a conservative commentator. The justification for the decision was based on the scarcity of broadcast spectrum and the oligopolistic control over public discourse held by the three major TV networks at the time.

The Red Lion decision didn’t become settled law, however, as conservatives continued to contest the fairness doctrine. Republican presidents repeatedly vetoed Democratic attempts to turn it into a statute, and the FCC itself rescinded the doctrine in 1987.

The rise and fall of the fairness doctrine shows how hard it would be to create an internet-age equivalent. There are many parallels between then and now. Today, Facebook, Google and Twitter host the vast majority of internet speech and are in the same oligopolistic position as the three big TV networks were in the 1960s. Yet it’s impossible to imagine today’s FCC articulating a modern equivalent of the fairness doctrine. Politics in America is far more polarised now. Reaching agreement on what constitutes unacceptable speech (for example, the various conspiracy theories offered up by radio host Alex Jones, including that the 2012 school massacre in Newtown, Connecticut, was a sham) would be impossible. A regulatory approach to content moderation is therefore a dead end, not in principle but as a matter of practice.

That’s why we need to consider a competition law, or antitrust, approach. The right of private parties to self-regulate content has been jealously protected in the US; we don’t complain that the New York Times refuses to publish Jones, because the newspaper market is decentralised and competitive. A decision by Facebook or YouTube not to carry him is much more consequential because of their monopolistic control over internet discourse. Given the power a private company like Facebook wields, it will rarely be seen as legitimate for it to make such decisions.

On the other hand, we would be much less concerned with Facebook’s content moderation decisions if it were simply one of several competitive internet platforms with differing views on what constitutes acceptable speech. This points to the need for a massive rethinking of the foundations of antitrust law.

The framework under which regulators and judges today look at antitrust was established during the 1970s and 1980s as a by-product of the rise of the Chicago School of free-market economics. As chronicled in Binyamin Appelbaum’s recent book The economists’ hour, figures like George Stigler, Aaron Director and Robert Bork launched a sustained critique of over-zealous antitrust enforcement. The major part of their case was economic: antitrust law was being used against companies that had grown large because they were innovative and efficient. They argued that the only legitimate measure of economic harm caused by large corporations was lower consumer welfare, as measured by prices or quality. And they believed that competition would ultimately discipline even the largest companies. For example, IBM’s fortunes faded not because of government antitrust action, but because of the rise of the personal computer.

The Chicago School critique made a further argument, however: the original framers of the 1890 Sherman Antitrust Act were interested only in the economic impact, and not the political effects, of monopolies. With consumer welfare the only standard for bringing a government action, it was hard to make a case against companies like Google and Facebook that gave away their main products for free.

We are in the midst of a major rethinking of that inherited body of law in light of the changes wrought by digital technology. Economists and legal scholars are beginning to recognise that consumers are hurt by things like lost privacy and forgone innovation, as Facebook and Google sell users’ data and buy start-ups that might challenge them.

But the political harms matter as well, and ought to be considered in antitrust enforcement. Social media have been weaponised to undermine democracy by deliberately accelerating the flow of bad information, conspiracy theories and slander. Only the internet platforms have the capacity to filter this garbage out of the system. But the government cannot delegate to a single private company (largely controlled by a single individual) the task of deciding what is acceptable political speech. We would worry much less about this problem if Facebook were part of a more decentralised, competitive platform ecosystem.

Remedies will be very difficult to implement: it is the nature of networks to reward scale, and it’s not clear how a company like Facebook could be broken up. But we need to recognise that while digital discourse must be curated by the private companies that host it, such power cannot be exercised safely unless it is dispersed in a competitive marketplace.