
Google, don’t do the right thing

Attempts to make the world a kinder, gentler place should worry us.

[Photo: a sign that says “Google.” Licensed under Creative Commons by tangi_bertin]

A few months ago, Google’s chairman, Eric Schmidt, published an essay in the New York Times, writing that technology can and ought to be used to make online conversations peaceful and open-minded, giving voice to the kind and silencing the hateful. He seemed, I wrote in a post on my own blog, Double Aspect, to be claiming the role of latter-day Platonic guardians for online companies such as the one he runs. Developments since then show that Mr. Schmidt’s musings were not just empty words. Facebook banned private gun sales on its network and on Instagram. Its boss, Mark Zuckerberg, has declared that the company will work to eradicate “hate speech.” Google itself will use its search-related ads to try to steer people away from extremist websites. It might all sound wonderful and very socially responsible, but these attempts to make the world a kinder, gentler place worry me.

It’s important to point out that Facebook and Google are not merely trying to comply with the law. Not all of the gun sales that Facebook is banning are illegal in the United States, for instance. (Perhaps some are also legal in other countries, though I have no idea.) By banning them, Facebook is implementing its own views about what the world should look like, formed after talks with gun control advocates. Similarly, nothing forces Google to serve up ads for anti-radicalization efforts to people searching for potentially extremist sources (some of whom might of course have perfectly legitimate reasons to be searching for them). Google's policy reflects its executives' views about the role of the internet in fostering dialogue and stopping the forces of hatred. And it is not yet clear what shape Facebook’s effort to eradicate “hate speech” will take; if it results in a single policy for all of its members, wherever in the world they are, it will go beyond the legal requirements in those countries that do not ban “hate speech” at all, like the United States, and perhaps even in those that have relatively narrow definitions of this concept, like Canada.

Why should we worry about this? Most of my readers, I suspect, will agree with Facebook's distrust of firearm commerce; and it's certainly hard to fault Google for wanting to make sure that, in the words of one of its executives, "when people [who] are feeling isolated ... go online, they find a community of hope, not a community of harm." And of course everyone hates hate speech every bit as much as Mark Zuckerberg does. But these companies' pursuit of moral or political ideals, rather than ― or even in the ultimate service of ― profit, is not altogether benign. For now, the ideals in question are relatively uncontroversial, at least in some parts of the world or among people of certain political views. (Whether Facebook will attract backlash in the United States for its anti-gun position and restrictions on speech remains to be seen.) But the same means now being deployed in the service of ideals that most of us happen to share may be employed in other ways. And the pressures to do so will grow as more people, both in government and among activists of various persuasions, realize what a powerful weapon they could get their grasping hands on.

Imagine for a moment a change to Facebook's terms of service that bans the sale not of guns, but of books ― perhaps books deemed to amount to hate speech, or to extremist literature. Or imagine Google deciding that a candidate for office is a hate-monger, and that anyone searching for information on that candidate should see ads for more moderate alternatives. Is that still the corporate-socially-responsible thing to do, or is it creepy? The definitions of “hate,” “extremism,” and other scary-sounding thought crimes that the well-meaning Platonic Guardians 2.0 will seek to prevent are notoriously vague, and what is vitriolic and fanatical in the opinion of one person may be merely strongly worded or principled in that of another. Do we really want Google, Facebook, and similar companies to be the judges of our debates?

It is troubling enough when governments undertake that function. Indeed, I believe that they should not. But in democratic polities, at least, definitions of what is prohibited are publicly debated in legislatures, and then applied by independent courts where the accused have the opportunity to make full answer and defence. What sort of process is going to apply with Google or Facebook? When it was trying to resist having to implement the European Union’s “right to be forgotten,” Google argued ― quite rightly ― that it was not well-positioned to weigh public interest against individual rights. The concerns of which it was aware then apply just as much now. Decisions will be made behind closed doors. The resulting rules will likely be murkier than the already vague laws enacted by states. And there will be neither judicial independence nor due process nor separation of powers to safeguard the interests of real or alleged transgressors.

Now, as readers of my posts here and at Double Aspect can guess, I am much too fond of property rights and freedom of contract, and much too skeptical of the state’s ability to regulate anything, and least of all anything related to freedom of expression and to technology, without making a mess of things, to call for governments to step in. Moreover, as I wrote in a previous attempt to grapple with similar issues, “we should not forget that, on the whole, the net contribution of Google and the rest of them to our ability to express ourselves and to find and access the thoughts of others has clearly been positive ― and certainly much more positive than that of governments.” I would not trust governments to protect my freedom of expression from Google or Facebook.

But, increasingly, I do not trust Google and Facebook to protect my freedom of expression either. And as they start acting more and more like governments, allowing moral considerations to guide their policies, yet without accepting the checks and balances that come from transparency, democratic procedures, and fair adjudication, their position as commercial actors entitled to plead freedom of contract in defence to charges of immoral behaviour becomes less and less sustainable. If you allow moral, rather than financial, considerations to guide you, you are also opening yourself up to moral criticism. Google famously aspires not to “be evil,” which seems like a nice corporate philosophy. But firms that don't want to be evil should realize that wanting to do the right thing imperils that aim.