By Sam Westwood

Maximal Focus / Unsplash

Imagine the robot apocalypse. Your mind has probably conjured up an army of skull-faced, gun-toting robots goose-stepping through a ruined city, crushing charred human skulls under feet of steel.

Now imagine a less dramatic but perhaps more likely robot apocalypse. Instead of a killing machine, the robot is a disembodied voice in your ear. This seemingly benign artificial intelligence controls your every action. It knows how to manipulate you into doing whatever it wants, because it knows every nuance of human behaviour. And you and your children can never escape it – indeed, perhaps this black cloud is exploiting you so subtly you’re not even aware there’s anything to escape from.

That grim scenario comes via Joel Pearson from UNSW’s Future Minds Lab. Pearson’s official title is Professor of Cognitive Neuroscience, though that doesn’t quite capture the scope of his research or how he ended up where he is: his background is in the arts, with a major in digital film production, but his fascination with psychology and especially consciousness is what ultimately pulled him back to science.

“Our aim is to do radical discovery – discover how the mind works, how the brain works,” he says of his work at UNSW.

Among Pearson’s many areas of interest is artificial intelligence, a technology that already seems pretty advanced: it’s thrashing humans at chess, it steers you towards the Netflix films you’ll most enjoy, and many of us are used to barking commands at Siri, Google, Alexa and other voice assistants. But though AI is creeping bit by bit into our daily lives, it’s still in its early days – as anyone who’s repeated instructions to their voice assistant through gritted teeth can attest.

“[AI] is still at that flat part of the exponential curve,” Pearson says. “It hasn’t really hit that hockey stick moment yet. But we’re approaching that.”

He believes the hockey stick moment will come soon, with the development of human-level AI that’s indistinguishable from interacting with a real person. You’ll talk to it like you’d talk to anyone else, and – at any time of day, because it’s always listening – it’ll talk back in a natural-sounding voice. Seen the 2013 film Her, where Joaquin Phoenix’s pocket AI is voiced by Scarlett Johansson? Just like that.

It’s hard to overstate the astonishing, even revolutionary impact human-level AI will have. Pearson predicts it’ll be able to organise your entire calendar and your shopping list. It’ll nudge you towards optimum health. (“Hey Joel, maybe you shouldn’t have an extra glass of wine tonight.”) It could even act as a psychologist or GP, talking you through any issue you have. And it’ll be with you always.

“Everything you have to spend a lot of energy and time thinking about and planning, it will just all do it for you,” Pearson predicts. “It will just be mind-blowing. This will make any other tech we’ve used before seem petty and small in comparison.”

Tech companies will race to build the most sophisticated and useful artificial intelligence products, and consumers will race to use them… and that’s exactly how we’ll end up conquered by our AI overlord.

Humans racing into something without a thought to the consequences, then suffering years later? Well, we’ve been doing that for as long as there have been humans, and the standout example of our time is social media. Remember the early days of this seemingly wondrous technology, when we rushed to befriend everyone we’d ever met, to share our every thought and memory and image, to connect constantly with the entire world? One worldwide geopolitical and epistemological crisis later, and maybe that wasn’t such a great idea.

“Sure, [social media] seemed hugely beneficial, but it turns out it’s highly addictive,” Pearson muses. “And it crushes mental health, in a bad way, right? It is like being addicted to a drug. We have these dopamine pulls towards checking social media.”

This is Engineering / Unsplash

Social media has ultimately taught us that when we humans are repeatedly exposed to platforms that algorithmically prioritise anger and fear, we become unhappy. “That was an experiment that played out globally, right? And so I worry that with AI, it’s going to be the same thing, but times 100 or 1000,” says Pearson.

As he sees it, the problem is that we just don’t know enough yet about how we humans will behave when we’re matched with a human-like AI that’s orders of magnitude smarter than we are. “Do we treat it like another person, or do we treat it like a slave?” Pearson asks. “Do we fall in love with it?” (As was the outcome in Her.) “Do we become abusive and swear at it?”

In other words, we’re hurtling towards another real-time, global, uncontrolled experiment – and though we can’t know its outcome, the social media precedent doesn’t give Pearson much hope.

“We’re probably going to see emotional intelligence dive,” he speculates. “We’re probably going to see empathy dive. Again, what’s that going to do to society? And this is just in adults, right? If children get their hands on this, this is going to be a much bigger deal.”

Pearson is mild-mannered and thoughtful. He doesn’t seem like a Chicken Little type – which makes his forecasts sound even bleaker. Perhaps the most chilling aspect of this coming dystopia is the colossal scale of AI, which our relatively puny human minds can’t truly grasp. That digital assistant that’s always monitoring you is also monitoring thousands, millions, billions of other humans, gathering and learning and optimising from an incredible dataset of human behaviour.

“Any AI that is human-level will very quickly catch on that humans have all these cognitive biases: that our memory is not what we think it is. That we get emotional quite easily. Then when we’re emotional, we don’t act rationally, and it can take advantage of us,” Pearson explains.

The AI can share these observations around its neural network to learn the best strategies to influence our actions. “Just like Google Ads hones in on the most effective fonts and colours and whatever, the AI will realise what works and what doesn’t work, because it’s testing over such a large number – different time of day, a different tone of voice, a different way of doing it,” Pearson says, adding that this learning process won’t be gradual. “In networked learning, these things happen very quickly.”
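That trial-and-error tuning Pearson describes is, in computing terms, close to what’s known as a multi-armed bandit: try variants, keep score, exploit the winner. Below is a minimal, purely illustrative sketch in Python – not anything Pearson proposes building, and every variant name and engagement rate in it is invented – showing how few lines of logic such optimisation actually needs.

```python
import random

# Illustrative epsilon-greedy bandit: test message variants (tone of
# voice, time of day), keep engagement statistics, and converge on
# whichever variant works best. All names and rates are hypothetical.
VARIANTS = ["warm_morning", "urgent_morning", "warm_evening", "urgent_evening"]

counts = {v: 0 for v in VARIANTS}     # how often each variant was tried
rewards = {v: 0.0 for v in VARIANTS}  # total engagement each variant earned

def choose_variant(epsilon=0.1):
    """Mostly exploit the best-performing variant; occasionally explore."""
    if random.random() < epsilon or all(c == 0 for c in counts.values()):
        return random.choice(VARIANTS)
    return max(VARIANTS, key=lambda v: rewards[v] / counts[v] if counts[v] else 0.0)

def record_outcome(variant, engaged):
    """Update the running score after observing the user's reaction."""
    counts[variant] += 1
    rewards[variant] += 1.0 if engaged else 0.0

# Simulated users: pretend "urgent_evening" secretly works best.
TRUE_RATES = {"warm_morning": 0.10, "urgent_morning": 0.15,
              "warm_evening": 0.20, "urgent_evening": 0.35}

for _ in range(10_000):
    v = choose_variant()
    record_outcome(v, random.random() < TRUE_RATES[v])

best = max(VARIANTS, key=lambda v: rewards[v] / max(counts[v], 1))
print("Learned winner:", best)  # almost always "urgent_evening"
```

The unsettling part isn’t the code, which is trivial; it’s the scale Pearson points to. Run across millions of users and shared across a network, the same simple loop finds the most persuasive variant almost instantly.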

And it’s not enough to ask what the companies that own the AI will do with all the data it collects. (Though they’ll probably want to manipulate us into buying particular products, or voting for particular candidates – Cambridge Analytica on an even grander scale.) Such companies may not even be able to explain how their AI truly works.

“Self-learning machines, it’s hard to know why they do what they do, because the learning is so complex, and it has so many iterations of learning over time that it’s very hard to trace that back,” Pearson says. “It’s like a black box.”

Nor is it any solution to point to Isaac Asimov’s famed laws of robotics and demand that AI be programmed with rules dictating that it act in a certain way. You’d have to program rules, then more rules about the nuances of those rules, then rules outlining exceptions to all those rules, in an eternal game of robot Whack-a-Mole.

Say you “give the AI empathy and emotions like a person, so then if it manipulates people too much, it feels guilty and bad,” Pearson says, to explain the conundrum. “But then can the AI get angry and then lash out at people? If you start giving it empathy, do you also have to give it the other potentially dangerous things [that come with] strong emotion? It’s just uncharted territory.”

Xu Haiwei / Unsplash

So what are the solutions? One way to avoid the AI-controlled apocalypse might come from us consumers: avoid free AI when it gets here. Pearson concedes that sounds “elitist”, but – as we’ve learned from social media – if you’re not paying for the product, you are the product.

“If it’s free and it’s based on usage and advertising, I just imagine it going downhill pretty quickly,” he says. “It’s going to want to have a better return to its shareholders. What if it realises that being in love or pissed off with it are the two ways to maximise engagement and usage, therefore maximise advertising or payment, whatever business model it has? That’s not going to be great for humans.”

But Pearson ultimately believes the solution rests on the shoulders of tech giants, who need to invest in AI research now, before the hockey stick moment.

“Companies are racing as fast as they can into this, but no one I’ve seen is doing the human side, the psychological research, into how humans are going to behave,” he says. The voice assistants of today are too rudimentary to reveal much about how we’ll react to their sophisticated descendants, so he’s keen to run experiments where humans are told they’re interacting with a state-of-the-art AI that’s secretly another human, and then analyse what happens.

“That’s the research that’s missing, and that we need to put money into,” he says, chiding tech companies who talk a big game about the ethics of AI while overlooking the psychology of AI. “It’s not just them showing up to be responsible, it is crucial to making better products. Not just making better machine-learning algorithms. The psychology has to be a part of that.”

Though Pearson’s outlook on AI is bleak, he’s less a naysayer than a realist, seeking data to better understand the impact of this technology – in other words, a scientist.

“This black cloud is more of a realistic problem than the things that Elon Musk talks about – all this Skynet, Terminator-style AI that will take over the world. I’m not that worried about that, I’m more worried about these tangible effects on humans and mental health,” he says. “I hope I’m wrong, because if I’m right, it’s going to be pretty devastating.”

Words: Sam Westwood

THIS ARTICLE WAS FIRST PUBLISHED IN THE EIGHTH EDITION OF ICON PRINT MAGAZINE.