Just before the turn of the new decade, Misti Epstein, the young wife of Google whistleblower Dr Robert Epstein, tragically passed away in hospital after a violent road accident. Although Dr Epstein assures me his nearly 40 years as a researcher have taught him to always view such situations in the most sobering light, he admits that, for a brief moment, the warning of a sitting United States state Attorney-General – that his life was in danger – gave him pause. His claim, essentially that Google is, willingly or otherwise, meddling in free elections, is both colossal and incredibly damning. But would one of the largest, most iconic and trusted multinationals in the world really want to harm him or his loved ones over it?
After your wife’s tragic passing, you tweeted out rather pointedly that though you were in mourning, you had no intention of taking your own life, and you tagged both Hillary Clinton and Google. Do you think that your life is at risk for the work you’re doing?
I have no idea. I was approached by the Attorney-General of one of our states, who suggested that my life was in danger. I know Zach Vorhies, a software engineer who left Google last year and took with him 950 pages of documents and a video. I know that he definitely felt his life was in danger, but I’m not sure. I don’t know. My fantasy is that since I’m an old friend of someone who’s very high up at the company, I’m perfectly safe.
Let’s talk a little bit about the research that you conducted in 2016 and 2018 and some of the research you’re going to conduct this year with the 2020 election coming up.
I first collected evidence in 2016, in the months leading up to the election. I set up the first-ever project to monitor real users – all registered voters. I had 95 registered voters in 24 states, a very diverse group. And with their permission, I was using custom software that my team developed to look over their shoulders as they conducted election-related searches on Google, Bing and Yahoo. We preserved more than 13,000 searches and the nearly 100,000 webpages to which the search results linked. We had independent raters rate the webpages to see whether they were pro-Hillary or pro-Trump, and then we computed a bias level for each position of search results that people saw.
What we found was substantial pro-Clinton bias on all 10 search positions of the first page of Google search results. But we did not find a pro-Clinton bias on Bing or Yahoo, which was very important for comparison purposes. And the bias was statistically significant at the 0.001 level – highly statistically significant.
In 2018, we deliberately focused on three congressional districts that were staunchly Republican. The Congressperson from each one of those districts had long been a Republican and we gathered the same kind of data. Again, we found highly significant bias toward Democrats on all 10 search positions on the first page of Google search results, but not on Bing or Yahoo.
That could easily have accounted for the fact that all three of those districts flipped to blue. In other words, they flipped Democrat.
In 2016, the level of bias we found in Google search results was enough to have shifted somewhere between 2.6 and 10.4 million votes to Hillary Clinton with no one knowing that they had been influenced and without leaving a paper trail for authorities to trace.
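[Ed: the per-position bias measure Dr Epstein describes can be sketched in a few lines. This is an illustrative reconstruction, not the study’s actual code; the +1/−1/0 labelling convention and the toy data are assumptions made for the example.]

```python
# Illustrative sketch: computing a per-position bias score from rater labels.
# Each search result a participant saw is labelled by independent raters as
# leaning toward candidate A (+1), candidate B (-1), or neutral (0). The bias
# for a search position is the mean label across all results shown there.

from statistics import mean

def position_bias(labels_by_position):
    """Map each search position (1-10) to its mean rater label.

    labels_by_position: dict of position -> list of labels in {-1, 0, +1}.
    A positive score means results at that position leaned toward candidate A.
    """
    return {pos: mean(labels) for pos, labels in labels_by_position.items()}

# Hypothetical toy data, not the study's dataset:
toy = {
    1: [1, 1, 0, 1, -1],   # mostly pro-A results seen at position 1
    2: [0, -1, 1, 0, 0],   # roughly neutral at position 2
}
scores = position_bias(toy)
print(scores[1])  # 0.4
print(scores[2])  # 0
```

In the study as described, such scores would be computed per position across all participants’ searches, then tested for statistical significance.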
You talk about several techniques that Google and other tech companies use to change people’s thinking and behaviour. Can you run us through some of those techniques?
I’ve stumbled on to about a dozen and I’m sure there are many more that I just haven’t figured out. I’m studying maybe seven or eight now in one way or another. We just launched a whole new research program to try to understand the impact that YouTube videos have on people’s opinions, on people’s thinking and behaviour and votes.
The first effect I discovered is called SEME – the Search Engine Manipulation Effect – and that involves the search results themselves. The filtering and ordering of search results has an enormous impact on people, and they’re just not aware of it. A second one I discovered is called SSE, the Search Suggestion Effect. It turns out that in our experiments, just by manipulating the search suggestions that flash at you as you start to type a search term, we can turn a 50-50 split among undecided voters into a 90-10 split. Another one is the Answer Bot Effect, ABE. It turns out that when Google adds the box above the search results that just gives you the answer (the answer!), the power of the effect increases by another 10 to 30 per cent.
Unfortunately, that effect also applies to all the new personal assistants, such as Amazon Alexa and Google Home. Those devices just give you the answer and the answer that they give you impacts thinking and behaviour.
What they all have in common is they are invisible to people and they don’t leave a paper trail. In other words, they work by using what, internally at Google, they call an ephemeral experience. Ephemeral means short-lived. So, you type in a term, they flash some suggestions at you, they show you some search results, you click on something and all that stuff disappears. It’s not stored anywhere, and you can’t go back in time and reconstruct what happened.
Donald Trump incorrectly referenced your research in a tweet along with the claim that his victory over Clinton ought to have been bigger had she not been aided by Google’s bias. How did it feel to be used by the president to bolster his election victory?
I saw his tweet, of course, and he got some numbers wrong for sure. And he used the word “manipulate”. I don’t think I ever claimed that Google deliberately manipulated the 2016 election. What I said was that I measured substantial pro-Clinton bias in their search results, and that I didn’t see that bias in Yahoo or Bing search results. That’s not the same as saying Google manipulated the election. So Trump got things slightly wrong, but I don’t think he did much harm to me, or for that matter to society, in what he said. But Clinton’s reply, that’s a different story. She replied that the research had been debunked and was based on data from 21 undecided voters. That did me tremendous harm, because within 48 hours there were dozens of mainstream news reports repeating and amplifying what she said. Until that moment, I had a spotless reputation as a scholar and scientist for almost 40 years. My reputation was destroyed in literally a matter of hours.
Why do you think Google would want to influence the outcome of an election?
We have to look at the leaks from the company. We have to look at the one-hour video of the all-hands meeting held right after the 2016 election, where Google’s leaders got up on stage one after another and said, “We’re not going to let this happen again. This is a disaster for our country.”
There’ve been a lot of leaks within the last year and a half in which people have either been fired by Google or quit and have been saying that starting after the 2016 election the whole direction of the company changed and that they have been on a path to make sure that Trump is not re-elected, number one and number two, to make sure that no one like Trump is ever elected.
You say that you don’t think Google intentionally manipulated the election results, but at the same time, you do say that there’s a bias that comes through from Google searches and from other sources. If it’s not intentional, then where does it come from?
There are at least three ways the bias could have turned up in their search results. It could have been executives at the top of the company saying, “Let’s bias our search results”. That’s possible. Another possibility, which is not at all far-fetched, is that individual coders at the company could have made this happen. And the third possibility is actually the most likely and the most disturbing: that the algorithms themselves incorporate the biases of the programmers. There’s extensive literature showing that this happens routinely. In fact, it’s impossible for it not to happen. In other words, when coders code, they unconsciously incorporate their biases into the code they write.
I find this most disturbing because that means that the outcomes of elections around the world have been determined by computer programs. One, in particular, namely, Google’s search algorithm.
I had calculated that, as of 2015, upwards of 25 per cent of the outcomes of national elections in the world were being determined by Google’s search algorithm. The proportion is so high mainly because a lot of national elections are extremely close. Given the number of people who now get their news and information about politics online, and given that so many national elections are very close, it’s very simple for Google’s search algorithm – since it shifts so many votes among the undecided – to flip an election one way or the other. It’s always going to favour one dog food over another, or one music service over another, and it’s always going to favour one candidate over another. It’s built to put things into an order where the stuff at the top is better than the stuff down below, whether anyone deliberately told it to do that or not.
What solution would you propose to solve this problem?
Australia is one of the most aggressive countries in the world in trying to protect Australian society from big tech. Aggressive, but completely wrong-headed, because Australia has set up a commission to try to exercise some oversight over the algorithms.
That’s absolute nonsense that shows no understanding whatsoever of what an algorithm is. Algorithms are opaque, and today’s algorithms used by the big tech companies are far more opaque than any created before, because they often rely on machine learning – which means the coders themselves have no idea how they work. So you cannot protect society from these tech companies by looking at their algorithms. Algorithms are millions of lines of code that no one understands. There’s only one way to eliminate the threat that Google poses to democracy worldwide: make Google’s index [a database of all the world’s publicly available webpages that Google uses to generate search results] into a public commons. That’s a very old idea in law. What it means is you give everyone access to Google’s database, and that can be put into effect by a government. [N.B. Google’s index is created by scraping publicly available information from the web. This means it’s not their intellectual property. - Ed.]
In other words, search would become competitive again as it was in the early years when there were many search engines. Google was not the first search engine. It was the twenty-first search engine. And search used to be highly competitive until Google became a dominant monopoly. You’d end up with thousands of search platforms which means search would look like media because the search platforms would all be competing for our attention and they would cater to different audiences just like media.
There’d be search platforms catering to conservatives. Catering to liberals, catering to women, catering to Lithuanians, catering to Australians. There’d be thousands of search platforms just like we have thousands of newspapers, magazines, radio stations and television stations and websites.
There are three big threats that Google poses to humankind. One is surveillance, which is accomplished primarily through their search engine, but through many other means as well, such as Gmail, Chrome and Android. The second threat is censorship, of course, because they decide what two and a half billion people see or don’t see on the internet. And the third threat is manipulation, which I talked about. But if Google’s index is made public, then all three of these threats disappear. In my proposal, all of the competing search platforms have the right to add content to Google’s index, which takes care of the censorship problem. And any time anything is removed from the index, that gets logged, and everyone has access to the log. It’s a simple proposal and, as regulation goes, it’s a very light touch. It would completely eliminate the threat that Google currently poses to democracy and to human autonomy around the world.
Google called your research a poorly constructed conspiracy theory, and it claims that it has never re-ranked sites to manipulate political sentiment.
Repeatedly, whenever I’ve published results, Google releases that kind of statement: “We don’t re-rank, blah, blah, blah”. Well, in this last big leak of material from Google, I was absolutely astonished to find that they’ve developed a manual for a re-ranking system. Normally, Google manually changes search results or YouTube content with blacklists. In this big leak that came out of Google last autumn, not only were there two blacklists – they actually called them blacklists!
According to Zach Vorhies, the senior software engineer who took all that material from the company, if they wanted to even out what search results were shown to you in an election, they’ve already developed the software to do exactly that. In other words, they have actually developed re-ranking systems. Now, it’s hard to believe that they don’t use them, given that they’ve developed them. But you can’t know for sure what’s happening without whistleblowers, warrants or court discovery.
Now, beyond that, we can only speculate. We don’t know what decisions are being made within the company, who’s making the decisions. We don’t know any of that stuff.
Why do you think that the media has taken such a negative position towards your research?
I understand full well it’s because a lot of people in so-called mainstream media lean left – just as I do. And right now, the tech companies support the left and donate primarily to Democrats. And here I am pointing out ways in which they threaten democracy. So I’m attacking companies that are embraced by people on the left and by people in mainstream media. I understand that perfectly well, but I’m still going to report what I find. Until a few months ago, I was treated very well by everyone in media, because I have a long history working in media myself. But after the Hillary Clinton tweet, that all fell apart. That’s when the attacks really started.
I love discovering things, and I never set out to discover disturbing things. But over the years I have found very disturbing things. These companies pose a serious threat to democracy and human autonomy. They’re run by very arrogant people who think they know better. That troubles me. So I’ll continue to report what I find.