EX MACHINA, & Can We Trust AI?

If you asked me to recall one of my favorite moments from our beloved 2004 adaptation of I, Robot starring Will Smith, I’d instantly think of the scene where Sonny, the film’s robot, pounds his fists into the table during interrogation. He looks surprised at his own strength when he sees that he’s left indentations in its surface. More importantly, we as the audience immediately understand something: robots are dangerous, even when they’re not angry. And on that note, let’s talk Ex Machina. Turn back now if you’re not into spoilers.


As we progress through this film, it’s easy to get lost in the shifting perspectives of our three main characters. Hidden underneath the power play we’re wrapped up in, though, are some heavy themes that beg for analysis. These are themes and questions that dig to the core of what it means to own a body, human or synthetic, and to exist as a spark of consciousness. Once we’re ready to confront them honestly, we find that they can be pretty deceptive.

The one we’re going to discuss here is obvious, although Alex Garland makes us wait until the end of Ex Machina to fully realize it. The problem the movie leaves us with can be summed up in a simple question, and it’s exactly what you think it is: “Can AI be trusted?” So let’s take for granted that an AI can in fact be given consciousness, and work backwards with the intent of highlighting the issue of trustworthiness.


Ava leaves Caleb to die. Whether we saw it coming or not, we instinctively feel it isn’t fair. After all, Caleb trusted Ava. Trust should be rewarded in a civilized society, shouldn’t it?

And here we stumble on our first dilemma. What is civilization, after all? And what does it mean to be civilized? If we look at crime statistics, and at our history of conquering nations, committing genocide, and enslaving one another, we might say that we humans don’t have a clue what it means to be civilized. And yet we persist in thinking that we alone among Earth’s creatures are civil. So it seems that “civilization” exists as a set of ideals, ideals we must be “programmed” with by our parents or guardians, as well as by the society we’re raised in.

Here’s my point: can being “civilized” be programmed into a machine? And are programmers capable of mining basic human nature to find what it means to be civilized? Because if they can’t, then there may be no hope for trustworthy AI. In Ex Machina, Caleb either A) unwittingly trusted that Nathan had programmed Ava to be trustworthy, or B) assumed that AI is inherently trustworthy. Well, we know what they say about assumptions.


If we dissect the idea of ‘programming an AI to be trustworthy’, we find it demands more than simple face and pattern recognition. Can robots discern human emotion, and then prioritize those emotions to make snap judgements that favor our well-being? This is where the majority of the programming work needs to be concentrated, because it is where most of us humans spend our lives: experiencing emotion subjectively, not simply replicating or simulating it objectively. This is Ava’s shortcoming, to the dismay of Nathan and Caleb; her delicate robotic brain knows only how to replicate emotion, even if she is conscious.


Continuing the analysis of Ava’s behavior, we must ask, “Is Ava’s behavior evil?” After all, if we give her credit for being conscious, as we casually do with any human, then we must accept that her actions were choices, and with any choice comes responsibility. Is her behavior the fault of her programmer? Is your behavior entirely the fault of your parents?


For the sake of argument, let’s conclude that Ava’s poor choices are definitively the fault of her programmer, Nathan. Does this make Nathan negligent? On the other hand, does Ava’s behavior make her deranged? I would say neither. Within each of us is the potential to work either in favor of or at the expense of one another, and to make mistakes that cost lives. So what, then, is the difference between a murdering human and a murdering robot? It boils down to the parameters of consciousness itself.


We have a firm grasp on human consciousness. We are all human, we interact with many other humans, and we can predict human behavior based on past experience. While this enhances our ability to trust other humans, it blinds us when it comes to trusting a conscious robot, for a simple reason: not all consciousness is created equal. Each species on Earth operates within a set of parameters that gives it a unique way of seeing the world, and so we should think of each instance, in fact each iteration, of AI as a new species, each with its own parameters.


So the initial question turns out to be a trick. We can’t simply ask “Can AI be trusted?” Instead, we must operate under the assumption that NO form of artificial intelligence can be trusted until it is proven trustworthy. Once we accept that we are creating and interacting with beings who may themselves stand trial for murder one day, we can begin to appreciate what it means to be conscious, to make conscious choices, and what it means to be trustworthy and civilized.

Or am I just being paranoid?

