Is the U.K. playing Internet God?
Last week, the U.K.’s Secretary of State for Digital, Culture, Media and Sport and the Secretary of State for the Home Department released a white paper calling for significantly increased regulation of tech companies, with the goal of banning “trolling” and “disinformation”.
The centerpiece of the proposal is something Commonwealth law calls a “duty of care”. Here’s the Financial Times’ explanation:
Under the proposals, companies will have to take “reasonable and proportionate action” to tackle “online harms” — ranging from terrorist content and child sexual exploitation to problems that “may not be illegal but are nonetheless highly damaging” such as disinformation, extremist content and cyberbullying.
The proposed regulations would apply to any company that allows users to share or discover user-generated content or interact with each other online – social media sites like Facebook, public forums like Reddit, messaging services like WhatsApp, and search engines like Google.
For now, the proposal is just a white paper; the government will take public comments for the next three months before drafting legislation.
All I can say right now is that internet regulation is heading in a terrifying direction (see Articles 11 and 13 of the EU Copyright Directive for more).
Is it morally correct to regulate platforms?
Let’s start from the assumption that terrorist, abusive and child exploitation content is fundamentally wrong and we’d be better off without it.
But the ends don’t justify the means. Here’s a question: is it morally correct to make a platform liable for the communication it enables?
If your knee-jerk reaction is to say yes, pause for a minute and think about how phones work. Phone networks facilitate communication between two parties, but the companies that run them aren’t liable for the exchange of information.
Anything can happen over a phone call or a text message, yet Vodafone isn’t held responsible for the coordination of a terrorist attack, nor is Apple for bullying over text messages.
Regardless of what you (and I) think, the U.K. has already made up its mind. Here’s Theresa May:
“Online companies must start taking responsibility for their platforms, and help restore public trust in this technology.”
Well, I disagree with Theresa.
The dangers of making tech companies liable
The duty of care makes big tech companies liable for “online harms”. When you add a liability, companies are incentivized to reduce it (or eliminate it completely).
In theory, since we want instances of “online harm” to approach zero, that sounds reasonable. The main question is how to get there and, as with everything, the devil is in the details.
There are three options.
- First, tech companies can suppress communication and uploads. This is a no-go.
- Second, tech companies can create exhaustive content filters. I already covered why they are a no-go as well.
- Third, tech companies can monitor every single communication and every piece of content uploaded to their services.
Let’s focus on the third one, which is the most likely. Do you know what constant monitoring is called in my neighborhood?
Spying.
Let’s look at Australia and its new “abhorrent violent material” law. Passed in response to the terrible Christchurch mass shooting, it makes companies (including ISPs and cloud providers, not only social networks) liable for “abhorrent violent” content discovered on their services.
That leaves them no choice but to spy on all user traffic or, for small and medium-sized platforms outside Australia, to avoid the country altogether. Jim Killock, executive director of the Open Rights Group, believes this will play out in exactly the same way in the U.K.:
“The government’s proposals would create state regulation of the speech of millions of British citizens.”
The problem with not defining “online harm”
When you regulate something you have defined only broadly, particularly speech, things get messy and often dangerous. One person’s “trolling” can be someone else’s valid opinion, and drawing a line between the two is practically impossible.
Let’s look at trolling.
People don’t have the right to “not get offended.” If I could only say things that have a 0% chance of offending anyone, I wouldn’t be able to speak at all. No matter what you say, there’s always a subset of people who will be upset by it.
Here’s Casey Newton sharing my concern:
"A white paper that announces its intention to ban “trolling” and “disinformation” but makes little attempt to define either gives me the shivers."
The U.K. is overestimating its position
During the Brexit negotiations, the U.K. overestimated its position, relying on the assumption that its economy would be enough to compel the European Union to make a deal.
The U.K. thought they had all the cards, but the EU was playing chess, not poker.
Well, this is Brexit all over again. The U.K. government is banking on the belief that its market is large enough to compel tech companies to comply. Here’s how MIT Technology Review described that bet:
"The plans also shift the government view away from any idea that the technology industry is somehow stateless or ungovernable—judging instead, likely rihtly, that the U.K. market is large and wealthy enough to give the industry a powerful interest in complying even with legislation they loathed."
I beg to differ. If the rules are as far-reaching as proposed, it’s conceivable that some global internet companies would simply write off their U.K. presence rather than comply.
Tech companies can thrive without China; they can undoubtedly thrive without an increasingly isolated U.K. The U.K. market is big, but not big enough to play Internet God.