Hi everybody!
I am back! I was caught up in a pile of papers to review for my academic work, but I am back to writing here again!
Let’s start this first article of the chapter “AI and consciousness” with a question.
Why would we want to represent consciousness in robots? At least consciousness the way we define it in humans: the capacity for feeling or perceiving, the subjective experience also referred to as “qualia” – a term notably used by the neurophysiologist and Nobel laureate John Eccles.
We, as humans, tend to think that the future should be born and raised like us, and thus that machines would mimic us – what I call “human arrogance”… In other words, we tend to think that we are the norm. What if things evolving beyond us turned out to be quite different? Let’s shift perspective and imagine a world where machines are not “humanized” but instead evolve beyond what we define as “human”, becoming the next species (i.e. the next level of evolution).
If we follow Darwin’s theory of evolution by natural selection (cf. his book “On the Origin of Species“, 1859), only traits worth keeping survive evolution: for example, species with a fundamental ability to adapt, or features that are repurposed to serve new functions, i.e. to survive selective pressure.
This said, why would machines need to “feel”, i.e. have emotions? This feature may not be necessary for AI machines to adapt and “think”, and thus may not be a dominant selection criterion; it may simply “die” with us as a human trait. Another possibility is that we may evolve alongside AI, perhaps combining ourselves with machines. This might sound like something out of a science-fiction movie, but there may be ways to do so that we cannot conceive yet, relying mostly on the unconscious processing our brain performs to integrate information. Indeed, a lot of information is processed by our brain without us being aware of it. I will develop this point in the next article of this chapter.
Everyone agrees that it is difficult to reproduce consciousness in robots – but what is consciousness, really? According to Professor Stanislas Dehaene’s book Le Code de la Conscience (i.e. “The Consciousness Code“), most of the useful processing of information is done unconsciously – the “learning how to learn”, our vision of the world, etc. – and could be translated into a robot. The question is: are conscious experiences, such as the feeling of an intense color, a sunrise, or sadness, really necessary?
In his book “Homo Deus”, Yuval Harari argues that emotions and feelings (perceptions, sensations) are simply algorithms useful for the survival of species. In the animal world, indeed, decisions and action selection serve survival: reproducing, and finding food without being eaten. As an example from the book, if a monkey sees bananas close to a lion, probabilistic calculations have to be made to decide the best move (basically, get the bananas without being eaten by the lion).
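Harari’s “emotions as algorithms” idea can be made concrete with a toy calculation. The sketch below is purely illustrative: the probabilities and payoffs are invented numbers, not anything from the book – it just shows how the monkey’s dilemma reduces to comparing expected payoffs.

```python
# Toy sketch of Harari's "emotions as survival algorithms" idea.
# All numbers are made up for illustration.

def expected_value(p_success: float, reward: float,
                   p_danger: float, cost: float) -> float:
    """Expected payoff of an action: chance of reward minus chance of harm."""
    return p_success * reward - p_danger * cost

# The monkey weighs "grab the bananas" against "stay put".
grab = expected_value(p_success=0.7, reward=10, p_danger=0.4, cost=100)
stay = expected_value(p_success=1.0, reward=0, p_danger=0.05, cost=100)

best = "grab bananas" if grab > stay else "stay put"
print(best)
```

With these invented numbers the risk of the lion dominates and the “fear” computation tells the monkey to stay put – the feeling of fear acting, in Harari’s framing, as the felt output of exactly this kind of calculation.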
We can call it the beauty of the human race that we can feel moved by a beautiful sunrise, or feel for people starving on the other side of the planet or going through a terrible and devastating tsunami. But in a purely pragmatic sense, isn’t it a brake on our focus, intelligence and efficiency, and thus on our productivity, by slowing us down? We are indeed usually not efficient when driven by our emotions… Be heartbroken or worried about a parent’s health and our work efficiency surely drops, overwhelmed as we are by a flood of sad, angry or worried thoughts…
I am not saying emotions are bad; I am saying that maybe the next species (the next level of evolution) could exist without these subjective qualia being consciously expressed. Why? Maybe in order to move into a new era of productivity and efficiency, in which the world runs at such a fast pace that emotions are no longer “permitted”. The only way not to be left behind may then be to shed conscious processes and all these subjective entities.
Just something to meditate on.
Another school of thought rather holds that emotions are our “added value”: it is not just about taking strategic decisions in an automatic way, but taking them with affect, with emotions. For example, as mentioned in one of my previous posts, when Lee Sedol played against the software AlphaGo (the “AlphaGo Lee” version), he finally won a game against the computer when he was no longer playing in an automatic way, no longer “like a machine”, but with affect and emotions. He was indeed angry and deeply disappointed at having already lost the first three games of the five-game match, which meant he had officially been beaten. Playing with affect and emotions, Lee Sedol threw the computer off: it no longer knew how to adapt to such a “strategy”, to this mode of playing of its adversary.
Thus, emotions could instead be a strength that allows us to assert our singularity over technology, in an era when collective intelligence and the social brain are key to the needs of future society. We will develop these concepts in an upcoming post.
This is it for today. More in the following (short) articles. Stay tuned!