A roadmap for reining in big tech
2 Aug 2018

Anyone watching Mark Zuckerberg’s testimony to Congress in April following the Cambridge Analytica scandal would be forgiven for thinking US politicians shouldn’t be allowed anywhere near tech policy.

The questions the mostly grey-haired lawmakers put to the 33-year-old Facebook CEO ranged from the basic to the bizarre. ‘How do you sustain a business model in which users don’t pay for your service?’, asked Senator Orrin Hatch. ‘Senator, we run ads’, was Zuckerberg’s reply.

Other gems included: ‘Why am I suddenly seeing chocolate ads all over Facebook?’, ‘Is Facebook spying on the emails I send via WhatsApp?’ and ‘Do I have as many friends as I think I do?’.

Thankfully, a policy white paper released on Monday by Democratic Senator Mark Warner, himself a former tech executive, has belatedly offered a framework for politicians to use when representatives from the big tech firms return to Washington in September.

The document lists 20 proposals compiled by Warner’s staff that they hope will ‘stir the pot and spark a wider discussion’ in the wake of the Cambridge Analytica scandal and Russian interference in the 2016 US election, as well as a general unease about the growing power of the big tech firms.

The proposals span three major areas: dealing with the epidemic of disinformation, strengthening privacy and consumer protection, and ensuring competition in the marketplace.

Some of the ideas, including the introduction of a so-called ‘Blade Runner law’, which would require bots to be clearly and conspicuously labelled, represent technical tweaks that the platforms would likely find achievable and that some are already working towards implementing.

Other ideas call for fundamental changes to the business models of the big social media companies, and are unlikely to be met with much enthusiasm from Silicon Valley.

One such proposal involves introducing media-style rules on fairness and libel. Specifically, it suggests changing section 230 of the US Communications Decency Act to make it possible for people to sue tech platforms that don’t take down defamatory material posted by users.

This change would take the onus away from victims to search for, and report, material that defames, threatens or falsely accuses them. Instead, that responsibility would fall to the platforms, which are better placed and resourced to prevent the spread of such material. In effect, the rule change would treat Facebook as a media company, not just a platform—something it has long resisted.

Another major proposal entails introducing legislation similar to the EU’s General Data Protection Regulation (GDPR) that would significantly toughen privacy laws and prevent the use of personal data without the unambiguous and individualised consent of the user.

The proposals, which include ‘data portability, the right to be forgotten, 72-hour data breach notification, 1st party consent, and other major data protections’, would collectively give users greater transparency and control over their own data.

Perhaps the most radical idea in the white paper is a proposal to require platforms to calculate the value of each user’s data. Such a requirement, the paper argues, would stimulate competition by providing ‘price transparency’ to consumers. It could also educate users about the true value of their data and attract new competitors with more favourable privacy provisions built into their products.

It’s easy to see how such a change might encourage users to get behind even more radical ideas that would require big tech companies to pay them to use their services and to hand over valuable data—something advocates of ‘data as labour’ have been calling for elsewhere.

The paper argues that calculating the value of each user’s data could even help guide antitrust policy: regulators could treat actions that increase the amount or value of data extracted from users as anticompetitive, since they would be equivalent to a ‘price’ increase for users.

Another proposal would see tech platforms designated as ‘information fiduciaries’, meaning they would owe their users duties of care similar to those that bind legal and financial professionals.

The white paper also drills down into the underhanded design tricks known as ‘dark patterns’, which tech firms use to corral users into accepting terms that invade their privacy and greatly benefit the service provider.

It even looks ahead to emerging problems, such as ‘deepfake’ technology, which will make detecting and countering false information online even harder.

It’s not hard to see why the big tech companies would baulk at many of the ideas included in the white paper—given that the major policy proposals would require a complete overhaul of their business models.

Facebook lost a million users in Europe after the GDPR was introduced. Just last week, its stock plummeted, wiping more than US$120 billion off its market capitalisation. The next day, Twitter lost 15.5% of its market value.

Part of the reason for the major market correction is that the social media platforms have been harassed into actually doing something about the harmful externalities their services are creating.

Facebook is on a media blitz to show that it’s on the front foot in dealing with Russian disinformation campaigns that use its platform, and Twitter has been culling fake accounts and bots.

The companies have calculated that some short-term pain from Wall Street is worth it if it staves off any heavy-handed regulatory action from Washington.

But how long will they continue to make that calculation before they give in to the stock market’s perverse incentives of growing engagement and user numbers at any cost?

Tech firms have been happy to turn a blind eye as long as they were making profits hand over fist. The US legislature, having so far demonstrated staggering incompetence, now has a set of ideas it can use to finally move the conversation along in a more substantive way.