Artificial intelligence and the purpose of life

German philosopher Richard David Precht has written a new book titled “Artificial Intelligence and the Purpose of Life”. It is a very well-written book, but the interesting thing is that he doesn’t talk much about artificial intelligence itself.

Instead, the book is about the scientists in Silicon Valley and how they envision artificial intelligence. It is less about artificial intelligence itself and more about our perspective on it.

Post-humanists and trans-humanists – what are they?

The first distinction he draws here is between post-humanists and trans-humanists. What does that mean? According to Precht, post-humanists believe that artificial intelligence will keep getting more and more intelligent until it surpasses human beings. So if the post-humanists are right, artificial intelligence is going to take over the world.

Trans-humanists, on the other hand, believe that humans and artificial intelligence are going to merge: we will become cyborgs and use artificial intelligence to enhance humankind.

So those are the two visions that Precht sees coming from Silicon Valley. What he doesn’t like is that the big topics are not being talked about, for example climate change. We are in a global climate crisis, and the people in Silicon Valley are not addressing it. According to Precht, they are doing business as usual: growth, further expansion, more destruction of the world.

So post-humanists envision a world where artificial intelligence becomes so intelligent that humankind is either pushed aside and rendered unimportant, while the intelligence reproduces itself, grows, and takes up more and more space, or the intelligence turns aggressive towards humankind and there is a war of man against machine.

Of course, Hollywood paints many pictures of the post-humanist vision, for example in “Terminator”, “Ex Machina”, or “The Matrix”. In the scientific literature, the point where machines become more intelligent than humans and start developing faster than humankind is called the singularity. Precht brings a good argument against this scenario, because what he says is that it is not clear that a hyper-intelligent being would act exactly as human beings do.

Is it really smart to grow every year? Is it really smart to destroy the planet you’re living on? If this is not smart, a being that is smarter than human beings would see this and act accordingly. So it’s not clear that an artificial intelligence, if it’s really that intelligent, would follow the same path human beings are on today.

This is a good argument.

What is AI and what are humans?

He makes a few more interesting points in his book. For example, when talking about artificial intelligence, he asks: what are human beings that artificial intelligence is not? By defining artificial intelligence, you can actually define human beings in contrast to it. This is an interesting question: what is a human being? What defines us? What are the distinctive qualities that an artificial intelligence can’t bring to the table, and which of them are usually criticized about human beings? For example, our emotions, our fears, our impulsive behavior, and so on.

So all those things are usually criticized about human beings, and we want a machine to perform better: to think more rationally, to leave feelings aside.

But Precht says this is actually what makes a human being. This is actually something positive, something we should embrace. We should embrace our feelings, and we should even embrace our sometimes irrational behavior. This is something that can give us strength.

It’s going to take a long time until a machine can feel and be like a human being. This is where the book gets really interesting, because then he writes about the topic of ethics.

AI and ethics

When it comes to ethics, Precht says the topic is more complicated than the members of those ethics councils want to tell us, because ethics and morality are always connected to personal feelings. Ethics is always part of the society you live in; ethics and morality always depend on the social situation we are in.

Ethics also sometimes depends on the personal feelings and thoughts we are having. So when you try to program ethics, this becomes a big problem, because you’re actually doing something quite inhumane. One big part of ethics is thinking about a problem, feeling how it feels. A machine would just go through step one, step two, step three and then make a decision.

Example: transportation sector

Let’s take the transportation sector as an example. When we talk about self-driving cars, there is a big discussion: what should the car do in an emergency? Suppose it is going too fast, can’t brake anymore, and can only decide between swerving right into ten people or swerving left into a small child.

What should the machine do? What should the machine decide? This is a difficult ethical question that is seriously discussed at the big car manufacturers. Precht also brings an interesting argument here, because he asks: how does it happen today? When you’re driving a car and get into this situation, you’re not answering a moral question in that moment. No, you’re just acting. You act impulsively and do something. So at the moment, this is not a morally or ethically interesting question.

It becomes a question only when the machine has to decide. So what we are actually doing is bringing morals to a topic where they haven’t existed yet. Probably everybody would agree that we can more easily pardon a person who acts impulsively in an emergency than a machine that thinks everything through in a matter of milliseconds and then acts rationally and kills a person.

So the whole subjectivity, the feelings, the impulsive acting: all of this is missing in the machine. That makes it rational, and that is exactly what makes it a bad thing.

So this is a utilitarian approach, where you try to weigh which life is more important. Is it the lives of ten people, or the life of an older or a younger person? This is dangerous thinking.
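To see why this utilitarian weighing feels so cold, here is a deliberately crude sketch of what such a decision rule would look like in code. All names here are invented for illustration; no real car manufacturer publishes its logic like this.

```python
# A deliberately crude sketch of the utilitarian decision rule discussed
# above. Every name is hypothetical, invented purely for illustration.
# The point: reduced to code, "ethics" becomes nothing but a cost comparison.

def utilitarian_choice(option_a_lives: int, option_b_lives: int) -> str:
    """Return the option that sacrifices fewer lives."""
    return "a" if option_a_lives < option_b_lives else "b"

# The emergency from the example: swerve right into ten people ("a")
# or left into one small child ("b").
print(utilitarian_choice(10, 1))  # the rule coldly picks "b"
```

The machine does exactly what Precht describes: step one, step two, step three, decision. Everything that makes the situation humanly difficult, the subjectivity and the feeling, is simply absent from the function.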

Isn’t every being worthy of living? Can ethics and morality really be programmed?

The individual counts!

Precht takes a different position. He sides with the German constitution, which puts the individual first. He talks a lot about individuality and about autonomy, because when it comes to morals, talking about the individual is something totally different from talking about, for example, the whole of humankind.

For example, when it’s about your mom, your dad, or some other relative, abstract considerations no longer matter to you. Precht says this utilitarian thinking is a dangerous path. We should always look at the individual; we should always look at individual rights.

So trying to optimize the whole of humankind is a dangerous path that could lead us to forget about individual human beings. He compares this line of thought to communism and to totalitarianism: in both, the whole was more important than the individual. That is the danger.

This is a dangerous direction we might be going when talking about ethics and artificial intelligence.

Use AI in a new situation

The book really comes into its strength at the end. The last chapter is really interesting because there Precht takes a strong position: we are destroying our planet, our environment, the place that allows us to live. And he says this is a new phenomenon.

So we are in a new situation that affects us globally, but we are trying to answer it with old answers. In fact, we are trying to answer it by expansion: expanding to Mars, for example, expanding into new fields, exploiting our personal data.

No. There has to be a global shift, a shift in consciousness, a shift towards seeing what is important: that our planet is important. So what he’s saying is this: he is not against artificial intelligence. But if we only use it to exploit our data, to destroy our individuality, to generate ever more economic growth, then this is not the way we should go!

We should also take into consideration the societies we live in, our morals, our personal feelings, and how precious the planet we live on is. So when using artificial intelligence, it should not be about optimizing our personal happiness. It should be about finding purpose in life, about a life we can live in autonomy, about a life where we can express our individuality.

What is the goal with AI?

It’s about creating a sustainable world. This is how we should use artificial intelligence; this is how we should program it. It’s not about optimizing the world or optimizing humankind. It’s not about finding an artificial intelligence that could enhance us and make us better human beings. It should be about being an individual, about finding a purpose in life, about the good life for each and every one of us.