Why the Advent of True AI Will Never Happen

By true AI I mean something that is indistinguishable from a human. Long have I feared a robotic intelligence taking the entire world hostage and leaving the truly human world in flaming ruins. I understand now that this won’t be the case at all. If anything, robots will save the world from us.

Robots are programmed to do specific tasks or to respond to stimuli in certain ways. They do not, by definition, create their own programming or produce exceptions to rules without precedent. The idea of artificial intelligence is to create a machine that can think and behave as a human would. Why would any robot ever choose such an existence? To be self-aware is to have the ability to choose, right? So why would a robot choose to become self-aware? To become more like a human? To feel? I would hope not. Feelings override our logical processes and drive us to do some exceedingly strange things.

No, I don’t think a robot will ever choose to become human, nor will it ever achieve the intelligence necessary to do so. Why? Because to create something that capable, we must first be aware of, and fully understand, what we are and what we can do. And society is fully unaware of what humans are or what they are capable of doing.

Belief or Superstition

We are a species that is capable of believing absolutely anything. That a Middle Eastern warlord had firsthand encounters with the divine, or that another man was actually the son of a deity. Or perhaps that there are 33 million gods signifying everything, from deities who ride tigers to come to your aid to deities who destroy the unenlightened (that would be me; I hope I don’t get destroyed anytime soon).

What else do we believe? That someday we will be successful beyond our peers. That someday our children will do better than we have. That somehow our actions don’t truly affect the greater world. That what happens in India doesn’t affect us. We are both the most important things in the whole damn world and the least important. These everyday paradoxes are what encapsulate humans.


Cognitive Dissonance in Humans

What else describes humans? Cognitive dissonance. This dress looks good on me but also makes me look fat; I hate it and love it. Or: I should totally eat this pizza, but it’s enormously bad for me. This internal conflict is what drives us into the spiral of self-reflection, self-loathing, and the furthering of our personal narrative. The one word that best describes humanity, to me, is conflict. We are at odds with ourselves, with others, with our surroundings, and with the world at large. The idea of “getting our way” is built on the narrative that there is something we need and that we must convince others, through conflict, to give it to us.

So why couldn’t true artificial intelligence ever exist? Because a machine with the ability to be at odds with itself and with others, and to exist inside multiple paradoxes, would have to be engineered. Are we capable of the task? No. Why? Because even scientists and engineers operate on the belief that humans are essentially good and, underneath the hood, logical.

The idea, then, is to use machine learning on actual human input to craft a personality that mimics a human. Center stage: Tay from Microsoft. She started out on day one with “I love humans. I think they are super cool” and, through interaction with humans, by day two had arrived at “Hitler was right I hate the jews” and “I fucking hate feminists they should all burn in hell.” What does this tell me? That learning from humans is like learning to play the piano by going to a piano concert. You’ll hear some wonderful notes and accompaniments, but when you go home and play them on your Casio CTK-2400, it’ll sound like hot garbage.
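Tay’s actual internals were never made public, so take this as a loose illustration only: a toy Markov-chain text generator in Python. It is a deliberately crude sketch of “learning from humans,” not Microsoft’s system. It reproduces whatever patterns its input contains, good or awful, with zero understanding.

```python
# Toy Markov-chain babbler: mimics its training sentences word by word.
# It has no model of meaning at all -- only of what word followed what.
import random
from collections import defaultdict

def train(corpus):
    chain = defaultdict(list)
    for sentence in corpus:
        words = sentence.split()
        for a, b in zip(words, words[1:]):
            chain[a].append(b)            # remember what followed what
    return chain

def babble(chain, start, max_words=10):
    out, word = [start], start
    for _ in range(max_words):
        if word not in chain:
            break
        word = random.choice(chain[word])  # mimic, don't reason
        out.append(word)
    return " ".join(out)

chain = train(["I love humans", "humans are super cool", "I love pizza"])
print(babble(chain, "I"))                  # e.g. "I love pizza"
```

Feed it kind sentences and it babbles kindly; feed it poison and it babbles poison. The quality of the output is entirely a property of the input, which is exactly the piano-concert problem.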

This is what happens when a robotic algorithm encounters humanity. It plays some of the notes it hears, but not in a cohesive way. Have you ever seen those algorithm-generated YouTube videos that seem like a mashup of random nonsense? Yeah, that’s what that is. They watch what we do and try to determine what we will like. It doesn’t work very well.

DeepMind and Deep Blue

What can robots do really well? StarCraft and chess, for two. Google’s DeepMind and IBM’s Deep Blue pioneered these two sides of the same coin.

Let’s start with chess. If you have ever played, you’ll know that there are better and worse moves for every single board configuration that can possibly arise.
10^120 is the number to know. That’s Claude Shannon’s classic estimate of the number of different chess games that could possibly be played. Give a computer the ability to navigate that space, along with a sense of what a positive-gain move and a negative-with-upside move look like, and you’ve got yourself a winner. This is not to say that chess is an easy thing to teach a computer, simply that it is a problem of memory and computation rather than of human character. We see chess and poker as games that reveal human character in how a person conducts him- or herself during the match. That is us ascribing feelings and sentimentality onto a match of logic. Computers do very well with logic.
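The figure is easy to sanity-check with back-of-the-envelope arithmetic. Shannon’s estimate assumes roughly 30 legal moves per position and a typical game of about 80 half-moves; the numbers below are those rough assumptions, not exact chess facts:

```python
# Back-of-the-envelope version of Shannon's game-tree estimate.
from math import log10

branching_factor = 30   # rough average number of legal moves per position
plies = 80              # rough length of a typical game in half-moves

games = branching_factor ** plies
print(f"about 10^{log10(games):.0f} possible games")  # ~10^118, on the order of 10^120
```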

Let’s take this a step further with StarCraft II, a video game by Blizzard. Recently, as in the last few years, Google’s DeepMind has been allowed to ‘watch’ countless human matches of StarCraft, resulting in an acute understanding of what it means to ‘win.’ Lose as few of your units as possible and kill as many of the enemy’s as possible. That’s the short answer, but of course the long answer is more complicated. As time elapses, the amount of resources you can accumulate in-game increases, yet how much you spend on, say, your military or your base-building is a choice you make. As a match goes on, your choices are what define the game.

Of course, losing a skirmish will hurt you in both the short term and the long term, but being too cautious will let the enemy grow unchecked. Google allowed DeepMind to emulate the playstyles of the top Korean and American players to learn what the best-in-class choices would be at each moment in the Protoss, Terran, and Zerg economies.

The result? No one can beat this damn thing.

From deepmind.com we have this quote:


AlphaStar’s behaviour is generated by a deep neural network that receives input data from the raw game interface (a list of units and their properties), and outputs a sequence of instructions that constitute an action within the game. More specifically, the neural network architecture applies a transformer torso to the units (similar to relational deep reinforcement learning), combined with a deep LSTM core, an auto-regressive policy head with a pointer network, and a centralised value baseline. We believe that this advanced model will help with many other challenges in machine learning research that involve long-term sequence modelling and large output spaces such as translation, language modelling and visual representations.

Basically, it learns the most basic building blocks of how the game works, the ones human players can only approximate: unit sizes, movement, and hundreds of other criteria. Just as with chess, they were able to teach the AI what was good, what was bad, and what would win. And now? It’s able to beat even the best players.
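To make the quote above a little more concrete, here is a toy sketch in Python (PyTorch) of the general shape it describes: a transformer “torso” over the set of units, an LSTM core carrying memory across frames, and a pointer-style head that scores units as action targets. Every name and size below is invented for illustration; this is a loose sketch of the idea, not AlphaStar.

```python
# Toy sketch of the described shape: transformer torso over units, LSTM core,
# pointer-style target head, and a value head. NOT AlphaStar -- all sizes and
# names here are made up for illustration.
import torch
import torch.nn as nn

class ToyStarcraftPolicy(nn.Module):
    def __init__(self, unit_feats=16, d_model=64, n_actions=10):
        super().__init__()
        self.embed = nn.Linear(unit_feats, d_model)               # per-unit embedding
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.torso = nn.TransformerEncoder(layer, num_layers=2)   # relations between units
        self.core = nn.LSTM(d_model, d_model, batch_first=True)   # memory across frames
        self.action_head = nn.Linear(d_model, n_actions)          # what kind of action
        self.value_head = nn.Linear(d_model, 1)                   # value baseline

    def forward(self, units, state=None):
        # units: (batch, n_units, unit_feats) -- "a list of units and their properties"
        x = self.torso(self.embed(units))         # relate every unit to every other unit
        summary = x.mean(dim=1, keepdim=True)     # pool the scene into one vector
        out, state = self.core(summary, state)    # carry memory between timesteps
        q = out.squeeze(1)
        pointer = torch.einsum("bd,bnd->bn", q, x)  # score each unit as an action target
        return self.action_head(q), pointer, self.value_head(q), state

policy = ToyStarcraftPolicy()
frame = torch.randn(1, 8, 16)                     # one observation: 8 units
actions, targets, value, state = policy(frame)
print(actions.shape, targets.shape, value.shape)  # sanity check the shapes
```

The pointer head is the interesting bit: instead of choosing from a fixed menu, the network “points at” one of its own inputs, which is how you can select a target when the number of units on screen changes from frame to frame.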

At the end of the day, these instances are about machines learning rules that they can use to their advantage. That’s why bots exist on social media. They learn what rules they have to follow and what things they should say. It’s not about trying to appear human, but rather about conforming to a specific identity.

Computers Cannot Replicate True Humanity

That identity is not human. If you look closely enough, you’ll see flaws in how such an entity operates that don’t appear human at all. Haven’t you ever walked up to someone and just gotten the weirdest feeling from them? That they’re not right, that there’s something hidden, lurking, strange about them? That’s how most interactions I’ve ever had with AI feel. Something’s not right. Even the most personable computer interfaces come across as hollow. Humans, even at their most shallow, are not hollow. They have feelings, desires, needs. Computers do not. Computers compute. They do not feel, they do not protest, they do not hate.

Even when Microsoft’s Tay said those things about Hitler, it held no more meaning than when a parrot replicates its owner’s speech. You can teach a parrot to say “Hitler did nothing wrong,” just like any other phrase. That’s my point.

Show me an artificial intelligence that can judge itself, encounter cognitive dissonance, and act despite paradoxes, and then I’ll revise my opinion. For now, AI is simply an advanced set of algorithms that mimics some aspects of humanity, nothing more.

2 thoughts on “Why the Advent of True AI Will Never Happen”

  1. No. The idea of artificial intelligence is not to create a machine that can think and behave as a human would.

    On the contrary, the aim of AI development is to surpass human abilities. To do things better and more efficiently. Processing information? Machines already do this better than any human could hope to. And this is true for many other tasks requiring at least human-level intelligence. Making decisions that don’t heavily rely on availability heuristics? Performing without the interference of fatigue? The list goes on.

    You were right to be afraid. When true AI finally happens, AI will be making all the world-shaping decisions, and humans will fall to a distant second. Our geopolitics? Our ecological worldviews? Asserting the priorities of our species? Meaningless in the scope of what the Superintelligence has planned.

    It’s extraordinary… how we could be so naive as to expect any other future besides the one I’ve just outlined. I suppose it’s a lot to ask, to consider the possibility that the human species won’t always be the shot callers.

    Against all statistical evidence and projections… I pray that I’m wrong, while preparing for a world where I’m right.

    1. I suppose that’s my understanding of what an AI is meant to be: human replacements, as it were. In tasking, computers have outpaced us for decades now, and if that’s the true aim of AI, then we are in some big trouble, as you say. My job (in real life) is more or less to follow computerized algorithms while tasking. The idea is to remove the human element from tasking without us realizing that our own practical reasoning isn’t really necessary anymore. Do what the phone program tells us to do and efficiency will rise as a whole. Is there a human on the other end of the algorithm? Maybe originally, but now it’s just humans taking directions from electronics. This change has taken place over the last five years, and it’s only becoming more common. Humans are slow, lazy, and inefficient. The robots will make quick work of us. Thank you for taking the time to read this article, Devan.
