September 7, 2018

Act 4: Me, Myself and AI

Why everything is suddenly different: Digitalisation is leading to a phase of rapid innovation and market upheaval. In our series of essays, Philipp Bouteiller looks at the trends and technologies behind it and sheds light on what they mean for people. The fourth act revolves around chess, robots that save people, and machines that teach themselves things.

“Artificial Intelligence” (AI) is probably the most ambivalent topic in this discussion. Recently there have been astonishing advances that bring “intelligent” machines within reach. From 2012 to 2015, the American military research agency DARPA ran a major robotics competition.

It focused on the use of robots in disaster areas: the robots were supposed to perform tasks that would be necessary in an emergency, such as driving a car, walking over rubble, opening a door or connecting a hose to a hydrant. There are very entertaining video clips on YouTube that impressively document the failure of most robots at these seemingly simple tasks.

“We know that any technology that can be abused will be abused in the medium term.”

At the time, many thought we were still a long way from the next generation of truly intelligent robots. But now there are dramatic advances and laboratory experiments from the fields of robotics and artificial intelligence that prove us wrong. Boston Dynamics, for example, has proclaimed the “Next Generation” of robotics, which no longer has anything in common with the teething troubles of its predecessors.

Great technical progress, certainly, but this development should also give us pause. We know that any technology that can be abused will be abused in the medium term. This leads us into an area of ethical consideration that already occupied science fiction authors like Isaac Asimov in the middle of the last century: How do we ensure that machines do not turn on humans once they eventually surpass us? Thus Asimov’s “robot laws” were born, which have since become a basis of our thinking and appear in many science fiction films:

1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.

2. A robot must obey the orders given to it by a human being, unless such orders would conflict with the first law.

3. A robot must protect its own existence as long as such protection does not conflict with the first or second law.

But any military robot will deliberately violate these laws, because it must distinguish between friend and foe. So with future robots we may never be sure that they are safe.

“Basic decisions must always be made by the human being, the machine may only act as a support.”

That is why serious voices from the centre of this development, such as Elon Musk, the visionary founder of SpaceX and Tesla, Steve Wozniak, a co-founder of Apple, but also astrophysicists like the recently deceased Stephen Hawking, warn against leaving all decisions to machines in the future. Fundamental decisions must always be made by the human being; the machine may only provide support, e.g. by analysing complex data. Just imagine a drone flying autonomously and killing people on its own, without an authorised soldier having given the order. This must not happen – and we are not far from it. This presents us with urgent normative challenges. Computers now beat the world’s best players at chess and even at the far more complex Chinese game of Go.

While Garry Kasparov’s defeat in chess against “Deep Blue” in 1997 was still the work of the computer giant IBM and its mainframe capacity, things have changed since then. This is also thanks to a handful of computer scientists from London who founded a company a few years ago that has since been bought by Google (Alphabet): DeepMind.

AI-based “deep learning” algorithms have ushered in a new phase of artificial intelligence: the machine trains itself and, starting from simple rules, finds ever smarter ways to win a game. This does not yet work for every game, but it is impressive nonetheless. Computer-based image recognition is also getting better and better. Facebook recently began using systems based on neural networks that can interpret the coloured pixels of a photo well enough to describe to a blind person, via voice output, what the photo shows.
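The idea of a machine teaching itself a game from nothing but the rules can be shown on a toy scale. The following sketch is purely illustrative and has nothing to do with DeepMind’s actual methods: a program plays the simple stick game Nim against itself, is told only the legal moves and who won, and gradually learns from the outcomes which move is best in each position.

```python
import random

MOVES = (1, 2)   # a move removes 1 or 2 sticks from the pile
START = 10       # pile size at the start of each game

# Learned values: Q[(pile, move)] = average outcome observed so far
Q = {(p, m): 0.0 for p in range(1, START + 1) for m in MOVES if m <= p}
N = dict.fromkeys(Q, 0)  # visit counts for running averages

def choose(pile, eps):
    """Mostly pick the best-known move, sometimes explore at random."""
    legal = [m for m in MOVES if m <= pile]
    if random.random() < eps:
        return random.choice(legal)
    return max(legal, key=lambda m: Q[(pile, m)])

random.seed(1)
for episode in range(20000):
    pile, history = START, []
    while pile > 0:                  # both sides share the same value table
        move = choose(pile, eps=0.2)
        history.append((pile, move))
        pile -= move
    ret = 1.0                        # whoever took the last stick has won
    for state in reversed(history):  # update values, alternating the sign
        N[state] += 1
        Q[state] += (ret - Q[state]) / N[state]
        ret = -ret                   # a win for one side is a loss for the other

def best(pile):
    """The greedy policy after self-play training."""
    return max((m for m in MOVES if m <= pile), key=lambda m: Q[(pile, m)])
```

After enough self-play games, the program discovers the well-known winning strategy of this game (always leave your opponent a multiple of three sticks) without ever having been told it: with 4 sticks it takes 1, with 5 sticks it takes 2.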

At first glance this does not seem revolutionary, since any person immediately recognises what a photo depicts. But how can a machine generate meaning from coloured dots? Researchers puzzled over this highly complex problem for decades, and here again it is the last few years that have brought the breakthroughs.

“Computers still lack empathy and emotionality, real humour and the quick-wittedness and openness we so appreciate in our friends.”

It is no coincidence that many of these approaches are emerging almost simultaneously. It is the consequence of the enormously increased computing capacity described at the beginning, and of the intensive exchange within the scientific community, that makes these remarkable advances possible. This is precisely why many researchers, philosophers and politicians point to a coming age of artificial intelligence, and why there are now serious voices warning of our extinction by systems of our own making. But we are not quite there yet. Computers are still not genuinely creative, nor can they convincingly conduct conversations that go beyond simple tasks like booking a table at a restaurant.

They lack empathy and emotionality, real humour and the quick-wittedness and openness that we so appreciate in our friends. And yet they are now able to detect our mood so precisely from a few spoken sentences that suicidal tendencies can be recognised quite reliably.

They are already better than general practitioners at diagnosing simple diseases, image-based diagnostics in radiology keep improving, and skin cancer can already be detected more reliably by machine than by a dermatologist. There are hotels whose room service is now handled entirely by robot butlers. More and more robots are being used in the care sector, currently to assist caregivers in their work. And investment decisions made by machines are already often better than those of their human counterparts.

So perhaps we are not really that far away from intelligent machines after all.

The fifth act of the essay series will be published in two weeks.

