Artificial Intelligence Is On Our Doorstep

But Are We Humans Ready?

The other day an advance in computer neural networks signaled that computers can now be artists in their own right, without any human pushing buttons to control the brushes, colours and so forth. This art form has been called Inceptionism, and no, it's not a Leonardo DiCaprio sequel. It's not fictional either, and it's far more difficult to grasp than the film Inception ever was. I don't know about you, but Inception took me multiple viewings before I got my head around it.

Computers can now create art like this.

Inceptionism is to computers what Impressionism was to 19th-century artists, almost. Succinctly put, an image, such as a painting or photograph, is fed into a computer, and the computer recognizes its edges and lines.

From there the computer connects these edges and recognizes larger shapes, then adjusts the image's pixels to strengthen whatever patterns it has detected.

This doesn't mean the computer generates a car that looks more like a car than usual. The result is more in the vein of the picture above.

The car will in fact become an image made up of whatever the neural network has extracted, and it can incorporate some truly surreal suggestions: a door handle could become a Viking, for example.

Think of it like this: when humans look at clouds and find images embedded in them, they are doing much the same thing computers do with Inceptionism. It's eerily similar.
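For the technically inclined, the whole trick can be compressed into a surprisingly small program. Below is a minimal sketch of the gradient-ascent idea behind Inceptionism, written in Python and assuming the PyTorch and torchvision libraries are installed; the pretrained network, the particular layer, and the step settings here are my illustrative assumptions, not the exact recipe Google used.

```python
# A minimal sketch of the Inceptionism / DeepDream idea: repeatedly nudge an
# image's pixels so that whatever a trained network already "sees" in it
# becomes more pronounced. Network, layer and settings are illustrative.
import torch
from PIL import Image
from torchvision import models, transforms

def dream(image_path: str, steps: int = 20, lr: float = 0.02) -> Image.Image:
    # A pretrained network supplies the learned features that get amplified.
    model = models.googlenet(weights="DEFAULT").eval()

    # Capture the activations of a mid-level layer with a forward hook.
    grabbed = {}
    model.inception4c.register_forward_hook(
        lambda module, inputs, output: grabbed.update(feat=output)
    )

    # Turn the image into a tensor whose pixels we can optimise directly.
    to_tensor = transforms.Compose([transforms.Resize(512), transforms.ToTensor()])
    img = to_tensor(Image.open(image_path).convert("RGB")).unsqueeze(0)
    img.requires_grad_(True)

    for _ in range(steps):
        model(img)
        # Gradient ascent on the layer's activation strength: push the pixels
        # in the direction that exaggerates the patterns the network detects
        # (edges, shapes, and at higher layers eyes, animals, buildings...).
        grabbed["feat"].norm().backward()
        with torch.no_grad():
            img += lr * img.grad / (img.grad.abs().mean() + 1e-8)
            img.grad.zero_()
            img.clamp_(0.0, 1.0)  # keep pixel values in a valid range

    return transforms.ToPILImage()(img.squeeze(0).detach())

# Usage: dream("clouds.jpg").save("clouds_dreamed.jpg")
```

Run over a photo of clouds, a loop like this gradually surfaces whatever the network half-recognizes in them, which is exactly the cloud-gazing effect described above.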

In 1984, the film The Terminator had Arnold Schwarzenegger travel back from 2029 to 1984 to kill Sarah Connor. Protecting Sarah was Kyle Reese, who brought with him knowledge of Skynet, an artificial intelligence system destined to wreak nuclear holocaust on the world. In a couple of days the fifth Terminator film will be released. A storyline that has extended over 31 years is no small feat, and not to be dismissed lightly, because its central theme of Artificial Intelligence (AI) continues to fascinate audiences.

Recently I had the misfortune of seeing Chappie, another AI film, about the South African police force bolstering its numbers with fully automated officers to drastically reduce crime. The main programmer, played by Dev Patel, rebuilds Chappie under duress with his artificial intelligence 100% perfected. Horrible acting and a terrible storyline aside, a cast that includes Hugh Jackman and Sigourney Weaver, the actress synonymous with the space age, demonstrates that top-class actors are still basing career decisions on the fact that people are hugely intrigued by AI. That Chappie was taught to paint seems relevant in terms of Inceptionism.

As the AI reality draws closer, moral questions are being raised about computers having neural networks and what that means for society.

It's easier to assess the moral debate if we stick to thinking of AI in terms of weaponry. Once weapon systems can be made fully autonomous, we'll have moved to a completely different understanding of what warfare means.

If a weapon can select a target without human intervention, what are the moral implications of that?

Of course there will be countries, leaders and insurgents who'll willingly discard such moral questions and adopt and use this weaponry.

Morals aside, if the technology exists, the reality is that it will be used.

The conversation about the morality of robots taking lives without human intervention needs to happen now.

Today even the most advanced weapons require human intervention. Predator drones, for example, use camera equipment to relay footage, then wait for a command from either a ground control center or a C-130 Hercules commander to issue an instruction to terminate before the drone takes aim at anything.

Of course it's very difficult to factor any negatives into a robot being programmed to detect and disarm a roadside bomb when the evidence of returning veterans missing limbs from IEDs (Improvised Explosive Devices) is so prevalent. Why would you send humans to do that job when a robot is well suited to such a mission?

However, that's only part of the argument; it's the moral component that has people reeling over what is right and wrong in this situation.

The counter-argument claims people would prefer to die at the hands of a person with feelings, someone capable of mercy or even compassion, in circumstances where two combatants meet on a level playing field.

Yet in war conditions it is extremely unlikely that many victims will encounter a reprieve through compassion; feelings of hatred toward the enemy are the more likely scenario. So this argument about dying at the hands of a human is fundamentally flawed.

Robots not programmed to inflict torture might be preferable captors for many victims, but should robots be programmed to torture, it becomes a very different discussion.

The 2014 joint report by Human Rights Watch and Harvard Law School's International Human Rights Clinic says it would be “morally wrong” to give machines the power to decide over life, because it is impossible for weapons to truly comprehend the value of a life, nor do they have the capacity to appreciate the significance of its loss.

Another argument against machines having the power to take life is that they lack feeling and intuition and wouldn't have the ability to abide by legal obligations.

However, this isn't as strong an argument as you might think once you take into account that humans make many mistakes under pressure: they have been known to forget or misinterpret instructions, gestures made in haste are open to varied interpretation, and soldiers in the heat of battle have consistently mistaken pedestrian items for weaponry because of the heightened atmosphere conflict brings.

Humans are bound by religious ideals and opposed to those who do not share their philosophies. Soldiers have a notoriously poor record of distinguishing between civilians and combatants, and correctly programmed computers would be far better at applying international humanitarian law than people. But that would depend entirely on the programming.

Arguments in favour of machines doing the job run like this: computers can rapidly process vast swathes of information; Google's driverless cars, for example, are better at avoiding accidents. Robots are unlikely to be tempted to kill for fun or to accrue wealth. Computers are rational; they don't get mad, scared, or act out of sentimentality, they can accurately judge distances, and they would never be taken in by a faked expression of pain.

Robots are programmed; no one can imagine R2D2 going off the grid or AWOL like a soldier might. In terms of aircraft, autopilot is a far more precise piece of equipment than a pilot, who experiences fatigue and, of late, other mental health issues that have resulted in tragedies.

When you think about it, humans came up with torture as far back as crucifixion; they built machines capable of firebombing Dresden and wiping out Hiroshima; they have ambitions of expansionism; and they definitely lacked empathy in taking prisoners, as the Second World War demonstrated in the conditions endured in Japanese POW and Nazi prison camps.

Which leaves you pondering: would the world be better off backing computers over humans in warfare? This debate will rage right up until the day, in the very near future, when it becomes a reality.
