There is a slightly shorter, German-language version of this text over at WIRED.
Along with several activists and journalists, I recently had the pleasure of being invited by Google to a meeting with Vint Cerf, whose name should make any German smile when spoken aloud. Mr. Cerf is an incredibly likeable, eloquent and well-dressed gentleman with a biography and a list of achievements that make you feel unworthy.
As one of the developers of TCP/IP, he is rightfully recognised as one of the “fathers of the internet”. Since 2005, Cerf has also been Vice President and Chief Internet Evangelist at Google.
In a short introduction, Mr. Cerf addressed a lot of the current challenges. Hate speech. Fake news. Privacy. Security of the IoT. More surprisingly for someone speaking on behalf of Google, he also downplayed the power of algorithms a little. There is only so much that Google is capable of, he told attendees. The filtering of hate speech on YouTube, for example, would be impossible to manage manually, and since context is so important, algorithms cannot be used successfully yet. Also, and this was one of the main points of his short talk, Vint Cerf believes that interfering with people’s content on the internet would be censorship, and we don’t want that.
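Cerf’s point about context can be illustrated with a toy example (entirely my own construction, nothing like what Google or YouTube actually run): a naive word-list filter catches abuse, but it flags counter-speech and reporting about abuse just the same, because it cannot see who is saying what about whom.

```python
# Toy sketch of why context-free filtering fails (the block list and
# the example comments are made up for illustration).

BLOCKLIST = {"idiot"}

def naive_filter(comment: str) -> bool:
    """Flag a comment if any word, stripped of punctuation, is on the list."""
    words = {w.strip('.,!?"').lower() for w in comment.split()}
    return bool(words & BLOCKLIST)

abuse   = "You are an idiot."
counter = 'He called her an "idiot" and that is unacceptable.'
report  = "The politician apologised for the word idiot."

# All three comments are flagged, although only the first is abusive:
# the filter cannot distinguish abuse from quoting or condemning it.
```

Telling those three apart requires exactly the kind of contextual understanding Cerf said algorithms don’t reliably have yet.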
Regarding hate speech and bullying online, Mr. Cerf did a little more downplaying, in my opinion. In many cases, he argued, bullying or rudeness is taken too seriously by some individuals. Simply ignoring rude comments and blocking people could resolve a lot of the cases that are interpreted as hate speech.
Mr. Cerf also said that we’re facing various challenges that can’t be solved by one company or in single steps, and I agree. In the hour or so that followed, Vint Cerf answered some questions from the attendees, and there was also time to talk to him in smaller groups, but since my questions didn’t make it into the open round, I decided to post my thoughts here.
To fight back against hate speech and fake news, Vint Cerf added, we need more computer literacy, critical thinking, use of social norms and common sense. And I agree. Computer literacy is needed everywhere, critical thinking can never be bad, social norms play an important part in civil society, and common sense always makes … well, sense.
The big questions, though, are:
Who defines the norms that we agree on as a social community?
What exactly is common sense?
And who’s to criticise these days?
It’s all fun and games and decency when you’re in a room full of well-educated people over the age of 40 who grew up in a non-digital age. In the pre-Google age. We can probably all agree on a lot of things. We can agree that it’s not very cool to yell at each other even if we disagree, and we certainly wouldn’t start a fist fight (not without very honourable rules, anyway). Our critical minds are educated by science, knowledge, reason, fact checking. And by humanism, I guess. But by which and whose standards will the next generations be educated?
When Mr. Cerf and others in the room asked for common sense, critical minds, literacy and social norms, they might have overlooked the fact that within a few years, maybe another generation, all those terms will be defined by the internet, by Google and Facebook and Twitter and whoever comes up big next. The truth according to Google’s ranking algorithms is whatever most people or other sites link to (remember the term “wisdom of crowds”, rarely used these days?). And critical thinking might be defined by what’s criticised most often, most heavily, the loudest, or by the richest. If you can afford a campaign, you might influence public opinion (mind you, that was already the case before the internet came along, but it wasn’t as easy to reach a lot of people). And the social norms of some forums, discussion groups or other parts of the internet certainly already differ a lot from those still known and lived by in parts of the offline world.
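To make that “truth is what most sites link to” idea concrete, here is a toy sketch in the spirit of PageRank (not Google’s actual, far more complex ranking; the four-page link graph is made up): a page’s score is fed by the scores of the pages linking to it, so whatever attracts the most links floats to the top, regardless of whether it is true.

```python
# Toy link-based ranking, PageRank-style. A page's score is repeatedly
# recomputed from the scores of the pages that link to it.

links = {
    "a": ["b", "c"],   # page "a" links to "b" and "c"
    "b": ["c"],
    "c": ["a"],
    "d": ["c"],        # "d" links out, but nobody links to "d"
}

def pagerank(links, damping=0.85, iterations=50):
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}   # start with equal scores
    for _ in range(iterations):
        new = {p: (1 - damping) / n for p in pages}
        for page, outgoing in links.items():
            share = rank[page] / len(outgoing)  # split score among targets
            for target in outgoing:
                new[target] += damping * share
        rank = new
    return rank

ranks = pagerank(links)
# "c" ends up on top (three pages link to it), the unlinked "d" last.
```

The algorithm never asks whether “c” is accurate, only how well it is linked: popularity stands in for truth.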
The recent case of Google’s “Has the Holocaust really happened?” search results is a perfect example. Who can guarantee that in ten years’ time, “being critical” will still mean “mistrusting Holocaust deniers”? What if “being critical” comes to mean “questioning historic facts” all over again for the next generation, a generation raised on Google and Facebook and, even more so, on YouTube (another Google company) and WhatsApp and Instagram (two Facebook firms)?
In another example, also brought up in the meeting with Mr. Cerf by Jillian C. York of the EFF, people found out that YouTube’s “restricted mode”, initially set up as a tool for parents to filter out video content unsuitable for their kids, blocks some LGBTQ content. We’re not talking porn or even nudity here. We’re talking content that was simply produced by gay or transsexual or non-binary gendered people.
I’m all for better tools for parents to protect young children from having to see gore, violence or hardcore pornography at an age where it could seriously harm them. But as you might have noticed, I didn’t mention “listening to gay people talk”, because there is absolutely no harm being done to kids by listening to people talk about gay, straight, trans or other topics. On the contrary, it might broaden their view of the world, and that’s a wonderful thing.
Also, I don’t even think that kids up to a certain age have a lot of interest in “adult” topics (I don’t know if all blocked channels even talk about those topics or, for instance, about making the best lasagna). If they are interested, however, it might well be that they’re looking for answers to questions they have.
The internet has helped many of us, old and young, to find people with similar hobbies, interests or problems as ourselves. It has also helped us to just anonymously browse and research topics that we can’t find in our offline environment or that we can’t or don’t want to discuss with our parents, spouses, teachers or friends, for various possible reasons. If you are 10, 12, 14 years old and you’re feeling insecure about your sexuality, for example, or you just know that it differs from that of your friends, you might turn to the net for advice or for people who can relate to your situation. This is even more true if you’re not living in a large city or your social circle is very conservative, and it also works for grown-ups, of course (but they probably won’t use YouTube in restricted mode).
In other words: filtering “LGBTQ content” in restricted mode on YouTube can seriously harm the flow of information that might be important to young people, even more so because, at the same time, YouTubers who bash homosexual people are not being blocked. So even if you disagree on that specific topic, you have to wonder which or whose social norms Google is answering to by blocking certain content.
Vint Cerf admitted that Google didn’t act very wisely in that case, and he agreed that measures such as this one must at least be communicated transparently. Asked why Google is blocking LGBTQ content in restricted mode at all, he mentioned something like “pressure from different groups of society” that Google has to handle. I don’t remember the exact phrase, I’m afraid, but it made me raise my eyebrows. Because it seems that Google feels the need to react to parts of its audience, and therefore its customers: in that case conservative or religious parents, I guess. It also seems that handling, filtering or categorising content isn’t as impossible as Mr. Cerf wanted to make us believe earlier in his talk. And it also shows that “censorship” is a term we must treat carefully and wisely.
All this is an even bigger topic with Facebook, of course. We know that Facebook filters content automatically as well as manually. And we know that sometimes, Facebook has a very different opinion on what to protect its users from than the users have. So Facebook makes the rules. And Google makes the rules. But the rules should be made by society and, when it comes to legal issues, by legislative authorities.
I agree with Mr. Cerf: It’s complicated. I agree with him that there are no easy answers, no single step that can be taken to solve all the challenges we’re facing. I don’t think Google should be held accountable for the world’s or society’s problems. But I do believe that any company as influential as Google or Facebook needs to be part of society and has to accept responsibility. Just offering services and acting on opaque, self-written rules won’t do anymore.
As I explained earlier, we’re living in a time where Google is shaping the world. So Google has to decide and make clear which social norm system it wants to represent and support, and it has to be one that is constantly negotiated with the public and with law makers as well, of course. Regarding hate speech, bullying and the deliberate spreading of fake news, I don’t think you can put all the weight of solving those challenges on the users. Of course it’s great if you can do your own research, treat everyone kindly and be very careful with what you share on the net. But we need the support of the platforms as well. Not to filter content before it gets posted or to let platforms decide what’s right and what’s wrong, but to make it a little easier to distinguish truth from fiction and, regarding hate speech, to block and report abuse.
I don’t want anyone to control what is posted online. I want online abuse to have consequences. Online abusers are like bike thieves: they do it because it’s easy and because it rarely has any consequences. But a stolen bike can be replaced. A badly injured psyche can’t.
As for security and privacy … I find it hard to discuss those topics with a company that still won’t let users encrypt their emails easily and by default. Don’t get me wrong: I love Google’s services, and I truly think they’re amongst the best on the web. But the day will come when the company has to find different ways to monetise those services than via advertising.
The problem here is, of course, that business is flourishing, Google and Facebook earn bazillions with ads based on user data, and it doesn’t seem that this is going to change very soon. And it’s not as easy as letting people pay for features like encryption and freedom from ads, because that would mean you’d have to be able to afford privacy. It would be unfair, anti-social.
But for users, ad systems have already broken the internet. Constant tracking has taken away our privacy and turned our web surfing and smartphone usage into a surveillance system, even if it’s “only” used for ad targeting. We need better business models, and those models probably won’t make as much money as ads do right now. But they’d contribute more and better to society.
So maybe we should finally bury that old cliché “Don’t be evil” and turn it into “Don’t be greedy”. I really think that Google could work very well as the biggest non-profit organisation the world has ever seen. You might be richer if you’re answering to stock owners all the time. But you might be even more successful if you don’t have to.
Turning Google into a non-profit org wouldn’t solve all the challenges that were mentioned, of course. But it would certainly change the game big time, because Google’s motives would change dramatically. With google.org, the company already invests in non-profit causes and supports third-party organisations and initiatives. So why not turn into one?
(By the way, it’s of course perfectly fine to comment on and discuss this text in German.)