The debate around artificial intelligence boils down to a single big question: will advancing AI help, hinder, or destroy us? Is it worth having the robot butler if it might turn on us? This has long been a central question of science fiction, and technology has now advanced to the point where it's worth asking in real life.
The big figures in Silicon Valley all have their own opinions, and Elon Musk has been very vocal about his fears of unregulated AI technology leading to our collective downfall. On September 4 last year, Musk tweeted “competition for AI superiority at national level most likely cause of WW3 imo.” He has called AI a greater threat than North Korea and nuclear warfare and has compared it to summoning a demon.
Speaking at VivaTech in May, Google’s Eric Schmidt waded in to declare Elon Musk “exactly wrong” in his numerous statements about the dangers of AI. “The fact of the matter is that AI and machine learning are so fundamentally good for humanity,” he told the crowd, stating that there is an “overwhelming benefit” in expanding AI technology. The very AI technologies, one supposes, that Google has been investing in heavily for some time now. Hmmm.
These two views roughly align with the business interests of the two men’s companies, of course. Schmidt, who stepped down as executive chairman of Google’s parent company Alphabet to shift into a technical advisor position in January, is invested in the success of Google AI. As for Musk, it’s difficult to believe that when he talks about the risks people face, he’s talking about all people. His anti-union sentiments, his recent refusal to pay royalties to an artist whose work Tesla copied, and…well, everything about the guy, really, suggests that he’s a man without the common touch. Perhaps his distrust of AI is motivated by business interests more than by an innate desire to protect humanity – SpaceX’s plan to colonise Mars looks more important the more potentially doomed Earth is, after all.
They’re not the only billionaires thinking about this: Facebook’s Mark Zuckerberg believes that AI will be instrumental in Facebook’s future. It’ll assist in facial recognition and combating toxicity on the platform, he says, and help to better serve users with ads that suit their interests. Zuckerberg has said that AI could help to uphold standards and prevent harassment on Facebook, something its human moderators have resolutely failed to do. We’d be remiss, however, not to question whether there’s any actual evidence of Facebook caring about its users beyond wanting to keep them on the platform for as long as possible and whether such AI would represent moral standards that we could all agree on.
Perhaps we shouldn’t trust billionaires with vested financial interests to dictate whether specific advancements in technology will ultimately be good or bad for us. But take capitalism out of the equation and there are real reasons to be wary. We might not have Skynet unleashing Terminators on us anytime soon, but a recent report entitled ‘The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation’ acknowledges that there are genuine concerns, including the possibility of AI being used for acts of political sabotage or violence. Any intelligence capable of growth of its own volition could also develop in unpredictable ways.
And above all else, the primary function of AI is to render human work obsolete: the argument that AI will increase jobs and free people up to pursue their passions is unconvincing. But hey, that’s what happens when the debate takes place between people with bottomless pockets.