Consciousness and AI

[Reading time < 5 min]

Dear all,

I am back. So far so good: we have not been invaded by AI systems trying to overthrow the human race, so all is fine! I was, however, invaded by academic workloads, which required quite some investment. Now that things have settled a bit, I am back, and I would like to continue my series of posts about consciousness.

Recently, consciousness has regained attention in the field of neuroscience, with a nonprofit charity foundation that had the idea to elucidate the topic with an “outlandish” competition: well-designed experiments in human participants meant to pit the leading theories of consciousness directly against one another. Indeed, around half a dozen theories have been proposed by experts in the field about how consciousness arises in our human brains.

My turn now to feed this post with some aspects of consciousness related to AI.

First of all, what is consciousness? The answer is not easy, but what is mostly agreed upon in the field comes down to two criteria (dimensions).

1. Global availability. Conscious information becomes globally available in the brain: you can report it verbally (with language) or non-verbally (e.g., in writing), you are aware of it, you can recall it and act accordingly, and so on; in short, it can activate all of your brain’s processing systems.

2. Introspection. We are aware of our body and of our location; we know that we know something (with a certain level of confidence, or not), perceive something, or sense that we have made an error, for example. It is the sense of “knowing” (knowing that we know, or that we do not know).

These two dimensions are kept separate because each can exist without the other.

Then, brain computations involving neither dimension 1 nor dimension 2 are called “unconscious”.

You may ask: how could consciousness be implemented in a machine to render it conscious, and if it could be, what would be the advantage?

  • Conscious machines

First, a small intro. It is worth recalling here that, as Alan Turing foresaw, complex information processing can be achieved by an unconscious machine. Indeed, face recognition, speech recognition and chess-type game play are performed nowadays by deep-learning algorithms, as I described in my previous posts, and thus do not require a conscious mind. Dimensions 1 and 2 are simply not required.

So, how to implement consciousness in a machine?

Most machines/AI systems have neither global availability nor introspection. Take the example of a car: when something indicates you are running low on gas, the car warns you with a light that you will soon run out, but it does not stop at the nearest available gas station, despite being equipped with a GPS device. The different devices the car is equipped with simply do not communicate with each other. Dimension 1 would make the connections between all those devices.
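
To make this concrete, here is a minimal toy sketch in Python of such a “global workspace”: the fuel sensor’s reading is broadcast to every module, so the navigation module can act on it instead of the warning light being a dead end. Everything here (class names, numbers) is invented for illustration; no real car works this way.

```python
class GlobalWorkspace:
    """A toy bus on which any module can make information globally available."""

    def __init__(self):
        self.modules = []

    def register(self, module):
        self.modules.append(module)

    def broadcast(self, event, payload):
        # Dimension 1 in miniature: every registered module sees the event.
        for module in self.modules:
            module.on_event(event, payload)


class FuelSensor:
    def __init__(self, workspace):
        self.workspace = workspace

    def read(self, liters):
        if liters < 5:
            self.workspace.broadcast("low_fuel", {"liters": liters})

    def on_event(self, event, payload):
        pass  # the sensor only publishes


class Navigator:
    def on_event(self, event, payload):
        if event == "low_fuel":
            # Because the reading was broadcast, the GPS module can act on it.
            print(f"Only {payload['liters']} L left: rerouting to a gas station.")


workspace = GlobalWorkspace()
sensor = FuelSensor(workspace)
workspace.register(sensor)
workspace.register(Navigator())
sensor.read(liters=3)  # -> the navigator reroutes, not just a warning lamp
```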

Regarding introspection, a machine is not aware of its own knowledge (the “knowing that I know”), except when machines use probability distributions in their computations; in that case, they do monitor their chances of winning (i.e., of being correct). Nor are machines aware that someone may disagree with their view of the world.

Nevertheless, we can program machines that keep track of their learning progress and develop a “sense of curiosity”, allocating their resources to whatever promises the best gain in terms of information learned.
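
As a hedged sketch of that idea in Python (the task names and error numbers are invented): the machine tracks how fast its error is dropping on each task and spends its next practice step where it is currently learning the most.

```python
# "Curiosity" as resource allocation: practice the task whose error is
# dropping fastest, i.e. where the expected information gain is highest.
error_history = {
    "task_A": [0.9, 0.7],    # slow progress
    "task_B": [0.5, 0.49],   # almost nothing left to learn
    "task_C": [0.8, 0.4],    # fast progress: the "interesting" task
}

def learning_progress(errors):
    # Progress = how much the error decreased on the last attempt.
    return errors[-2] - errors[-1]

def pick_next_task(history):
    return max(history, key=lambda task: learning_progress(history[task]))

print(pick_next_task(error_history))  # -> task_C, the biggest information gain
```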

Basically, if we combined dimension 1 and dimension 2 in a machine, the machine could be “equipped” with something close to human consciousness. According to a recent paper published in the journal Science (Dehaene et al., 2019):

“We contend that a machine endowed with C1 [dimension 1] and C2 [dimension 2] would behave as though it were conscious; for instance, it would know that it is seeing something, would express confidence in it, would report it to others, could suffer hallucinations [as in some human psychiatric diseases] when its monitoring mechanisms break down, and may even experience the same perceptual illusions as humans.”

 

Thus, the advantage of a conscious machine lies in the autonomy and the efficiency it would acquire: from an input signal, a whole chain of events would be activated and handled until the desired output is reached, for example a solution to a problem. In our previous example, the car would refill with gas at the next gas station, preventing you, if you missed the light signal, from being stuck on the highway with no gas, awaiting a tow truck to come pick you up and fix your problem.

 

This is it for now. I will let you think about a possible future with conscious AI!

Till the next post.

 

 

 

Risks of AI

Data protection

Back to our healthcare example, a growing sector in the field of AI. Combine it with what I wrote before about predictions: the more data, and the more diverse the dataset, you feed the system with, the better it becomes at making accurate predictions, with fewer false positives / false negatives. Then we can come up with the following thought: we would probably agree to share our private health data for the greater good, to improve healthcare and cure more patients (since the more data available to feed the system with, the more accurate the output we get; in this example, the best treatment outcome for patients worldwide).

However, this may not be without consequences. Imagine that the data you shared for the greater good in the healthcare sector are also used for other purposes, purposes to which you have neither access nor control. Your data might fall into hands you would not want them in, and be used for ends you may not even know about.


So far, data protection is still enforced in Europe, and let’s hope that will not change. This is at least something we can be grateful for, as it is not the case in every country, for example in the US or in China. And this will not be without consequences, as it can represent a serious threat to democracy.

Democracy proved a convenient model over dictatorship for our societies and way of living after the Second World War, as the well-known historian Yuval Harari (*) reminded us at the World Economic Forum last year. However, in the context of AI, would democracy still hold this convenient supremacy over dictatorship? The question remains open and definitely deserves that you stop a moment here and devote some thinking to it... (but sincerely, let’s hope so).

(*) Author of the book “Homo Deus”, which I recommend.


Let’s imagine that our data fall into bad hands and we have strictly no control over them. Would the system, and the people controlling our data, know us better than we will ever know ourselves? Well, this is the scary part. Because by knowing us better, they can also manipulate us better, play with our weaknesses and vulnerabilities, and with everything we consciously or even unconsciously try to hide from the public.

During his talk at the World Economic Forum, Yuval Harari gave the example of a teenage boy who, you can imagine, is trying to hide that he prefers guys over girls. If an AI system is built so that it knows anyone better than they know themselves, it might detect that this teenager prefers guys over girls before he has even noticed it himself, for example by screening how his eyes linger on guys rather than girls. Now imagine rich kids bringing such a system to a party full of teenagers from the boy’s school, as something “fun” to try. This would violate the teenagers’ intimacy, exposing a part of themselves they may not want, or be ready, to share at this critical period of their lives.

Even scarier, in my opinion, would be robots (intelligent AI computers) reaching the ability to “learn how to learn” on their own, thus evolving faster and becoming better and smarter than they currently are. They might require humans to provide (make accessible) all kinds of data, for the greater good of humanity, or rather for the greater good of their own evolution... Imagine the robots figuring out on their own what I just wrote above: that a system becomes better, learns better, and thus becomes in a way “more intelligent”, when fed with more data. If the robots become smart enough to understand this, and to feed themselves such high amounts of data, they might ingest the trillions and trillions of human data points (made accessible) worldwide to become smarter than us, overcome us, and evolve into the superior species on Earth... (to push it to the extreme case).

To reassure you, an open letter recently signed by many founders and company CEOs states that AI should not be used to compromise or “diminish the data rights or privacy of individuals, families or communities”. It states that “the ways in which data is gathered and accessed need to be reconsidered”. This, the report says, “is designed to ensure companies have fair and reasonable access to data, while citizens and consumers can also protect their privacy.” (You can also read an article entitled “5 core principles to keep AI ethical” from the World Economic Forum 2018.)

* * * * 

You have reached the end of this post. I hope you enjoyed it. Do not hesitate to leave any questions or comments below. In the next post, I plan to write about the topic I like most, one which raises a lot of interesting questions and issues: AI and consciousness, and the role of emotions.


AI and responsibilities

Our future with AI

I bet relying on AI for your business model and for the future of your company or organization appears pretty scary. But the more you know about it, the more you will be able to trust the system.

However, a tricky question arises: if a mistake happens, who should be held responsible? The AI system? The person who wrote the algorithm that enables the AI to learn and to make predictions based on what it has learned? The person at the origin of the data that made the system fail? This is not an easy problem to solve. Lawyers may therefore need to shift their way of working and specialize in AI legal rights, or in how to deal with AI in legal terms.

Indeed, politicians worldwide need to sit at the same table and discuss these legal issues in order to define, or rather create, laws covering malfunctions of an AI system and their negative consequences. Discuss and think...


... think about potential bad outcomes, pitfalls and limitations in case the system, like any new system, makes mistakes or dramatically fails, i.e. goes wrong or even terribly wrong.

Imagine the case of an AI computer making decisions about patients’ treatments or outcomes, as already exists today (see for example here and here). Would you hold a computer responsible for the death of your child if it made a mistake? Worse, what if its decision went against the will of a doctor, but you chose to trust the machine and follow its suggestion because you knew it had better success rates?

Politicians definitely need to decide how to adapt the law to our society’s needs, for example decide who would be liable in our previous example: is it the person who wrote the algorithm at the basis of the AI computer’s decision-making process, or someone else? So far, no AI system is considered a conscious entity responsible for its own decisions and actions... This is tricky, and a solution will definitely not be simple to come up with. It will require A LOT of thinking and discussion among politicians, with lawyers to help argue which laws apply to each case-by-case situation.

Philosophers, scientists, psychologists, medical doctors, historians and lawyers, together with politicians, need to sit around a big table and seriously discuss how to disentangle such dilemmas, which may arrive sooner than we think, and agree in legal terms on how to react when an AI system and its predictions fail and/or have bad or even terrible outcomes.

* * * *

You have reached the end of this post. The next post will be about the Risks of AI.

 

Machine learning

What is Machine Learning?

Machine learning encompasses both supervised and unsupervised (machine) learning. Like “AI”, the term “machine learning” was first introduced in the 1950s. It is a sub-branch of computer science that consists of training a computer program to “learn” from data, that is to say, to increase its performance on a task (based on those data) for which it has not been explicitly programmed.


To perform such a task, the computer uses data fed into its system via a program and “learns” from these data. That way, the program improves its performance, i.e. “learns”, until it becomes very good at its task, as already explained in my previous post entitled “Supervised Learning”. What the computer does falls mostly into 2 categories of learning: classification (e.g. assigning a new patient to a certain category) and prediction (e.g. predicting which treatment would fit this patient best, based on the category the computer assigned her to).
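
To make these two steps concrete, here is a minimal sketch in Python with scikit-learn. The “patients” (two numbers each, say age and a blood marker) and the treatment table are entirely invented for illustration.

```python
from sklearn.neighbors import KNeighborsClassifier

# Toy training data: each patient is [age, blood marker], with a known group.
patients = [[25, 1.1], [31, 0.9], [62, 3.2], [70, 2.8]]
groups   = ["group_1", "group_1", "group_2", "group_2"]

model = KNeighborsClassifier(n_neighbors=1).fit(patients, groups)

new_patient = [[66, 3.0]]
group = model.predict(new_patient)[0]    # classification: assign a category
treatment = {"group_1": "treatment A", "group_2": "treatment B"}[group]
print(group, "->", treatment)            # prediction: best treatment for her
```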


The best examples of this kind of classification and prediction are Netflix and Spotify. Based on the songs you have listened to on Spotify, or the movies you have watched on Netflix, these two companies can make a good guess about which movies (Netflix) or music (Spotify) you might want to watch or listen to next. Both companies use computer programs trained on databases containing your search history together with your viewing or listening history; these constitute the training set on which the algorithm is trained, so that it learns and can then predict, with pretty high accuracy, what you might want to watch or listen to next.
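
As a toy sketch of the underlying idea (the real Netflix and Spotify systems are vastly more sophisticated, and the play counts below are invented): recommend to a user the song most played by the user whose history looks most like theirs.

```python
import numpy as np

plays = np.array([
    [5, 3, 0, 0],   # user 0: our target
    [4, 2, 1, 0],   # user 1: similar taste to user 0
    [0, 0, 4, 5],   # user 2: very different taste
])

def cosine(u, v):
    # Cosine similarity: 1.0 means identical taste profiles.
    return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))

similarities = [cosine(plays[0], plays[i]) for i in (1, 2)]
neighbor = (1, 2)[int(np.argmax(similarities))]          # most similar user
unheard = np.where(plays[0] == 0)[0]                     # songs user 0 skipped
recommendation = unheard[np.argmax(plays[neighbor][unheard])]
print(f"Recommend song {recommendation} to user 0")      # -> song 2
```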

Related to this, I recently watched an interview of Etienne Bernard, a Machine Learning Lead Architect at the company Wolfram, about “neural nets” (neural networks). He gave the following example of what neural nets can achieve: if a baker asked such an AI system how many croissants he is going to sell today, the computer would be able to give him a reply pretty quickly, based on its database, i.e. taking into account all the relevant variables implemented in it (say, the weather forecast, which might have an impact on the sales, etc.).


The advantage of machine learning compared to humans? The incredible power of making predictions. A machine can look at very high-dimensional data, hundreds or even thousands of dimensions, i.e. hundreds or even thousands of variables considered within the data, determine correlations between these dimensions and detect patterns among them, and therefore make high-accuracy predictions once it has learned those patterns, predictions much better than any human being would ever be able to make, as we are only able to understand and/or learn at most 3 dimensions. One reason for this is the way we plot data to see a pattern, in a graph along the x, y and z axes. Visually, 3 dimensions are still pretty easy to look at to detect a pattern; more dimensions become very tricky, as we do not know how to visualize them without a computer running machine-learning algorithms that can analyze such high-dimensional data.
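
As a small illustration in Python (the data are synthetic and the “hidden” correlation is planted by hand): the machine finds, among 500 variables, the two that move together, something no 3-axis plot could ever show us.

```python
import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(size=(1000, 500))          # 1000 samples, 500 dimensions
# Plant a hidden link: variable 137 secretly follows variable 42.
data[:, 137] = 0.8 * data[:, 42] + rng.normal(scale=0.2, size=1000)

corr = np.corrcoef(data, rowvar=False)       # 500 x 500 correlation matrix
np.fill_diagonal(corr, 0)                    # ignore self-correlations
i, j = np.unravel_index(np.abs(corr).argmax(), corr.shape)
print(f"Strongest pattern: variables {i} and {j} (r = {corr[i, j]:.2f})")
```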


The disadvantage of machine learning compared to humans? We have to feed the machine lots and lots of data before it can learn, and so far this has to be done by humans. What happens next depends on the type of learning: either the machine is asked a question whose answer is already contained in the data it was fed, which is called, as we already defined in a previous post, supervised learning; or we feed the machine data and ask it to figure out a pattern on its own, which is called unsupervised learning. As an example: you feed the computer thousands and thousands of pictures of cats (the machine has no clue what a cat is), and it needs to work out its own criteria for defining and recognizing one, so that when a totally new picture comes along (i.e. a picture it has never been fed before), it can say with high probability (i.e. with high confidence) whether it is indeed a cat or not. The machine may end up with totally different criteria than ours for defining a cat; it would typically use deep-learning algorithms (defined in another post) and specific neural networks (as in object recognition, a subsection of computer vision). This is used a lot in the field of image recognition. In summary, we can say that machine learning is a subsection of AI which is “data-based” and which encompasses both supervised learning and unsupervised learning; a compact sketch of the two settings follows below.
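
Here it is, with scikit-learn, on six invented points: with labels the machine reproduces answers we gave it, without labels it has to invent the groups itself.

```python
from sklearn.cluster import KMeans
from sklearn.neighbors import KNeighborsClassifier

points = [[0, 0], [0, 1], [1, 0], [9, 9], [9, 8], [8, 9]]

# Supervised: we provide the answers ("cat" / "not cat") during training.
labels = ["cat", "cat", "cat", "not cat", "not cat", "not cat"]
clf = KNeighborsClassifier(n_neighbors=1).fit(points, labels)
print(clf.predict([[1, 1]]))    # -> ['cat']

# Unsupervised: no answers given; the machine finds its own two groups.
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(points)
print(km.labels_)               # e.g. [0 0 0 1 1 1]: the same split, unnamed
```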

You have reached the end of this post. Hope you enjoyed it. Do not hesitate to make any comment or give any feedback.

Reinforcement learning. Deep learning

What is reinforcement learning?

Reinforcement learning is another class of learning, one which I find to be the “real learning” (at least compared to supervised learning). Indeed, this type of learning uses rules without “supervision” and without data, i.e. without a database previously entered into the system from which the computer “learns”, as is the case in supervised learning. Here, the computer “learns” by starting to “play” on its own, following previously defined rules (for example, how the pieces are allowed to move in chess), starting from random moves, simply trying out possibilities and rating them depending on their outcome (e.g. win/lose; i.e. a trial-and-error strategy). Actually, it is a bit more complicated than that, as behind these rules there are algorithms written by human beings (computer geeks, actually! ;-p).

These algorithms program the computer to evaluate, at each move, all the possibilities for what the next move could be and, for each possibility, to evaluate its probability of leading to a win. After this evaluation, the computer plays the move with the highest probability. Some programs, such as AlphaGo, can plan (i.e., are programmed to plan) 50 to 60 moves ahead, and this from a simple computer program! The evaluation procedure just described is called a “simulation”, which we could compare to mental simulation in humans, for example when a chess player carries out the same type of evaluation as the computer, trying to plan ahead for the best next move to play; however, probably not 50 to 60 moves ahead...!!


In the case of a computer, the system learns whether a randomly chosen trial leads to a win or a loss and, through self-play, reinforces itself accordingly so as to retain only the “winning”, let’s say optimal, possibilities. To achieve such performance, a self-play reinforcement learning algorithm, on which neural networks (defined below) are trained, is necessary.
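
Here is a minimal sketch of that trial-and-error loop in Python, on a fake one-move game with invented win probabilities; no neural network, just the core reinforcement idea of rating moves by their outcomes.

```python
import random

random.seed(0)
true_win_prob = {"move_A": 0.2, "move_B": 0.7}   # unknown to the learner
value = {"move_A": 0.0, "move_B": 0.0}           # learned win-rate estimates
counts = {"move_A": 0, "move_B": 0}

for trial in range(2000):
    # Mostly play the currently best-rated move, sometimes explore at random.
    if random.random() < 0.1:
        move = random.choice(list(value))
    else:
        move = max(value, key=value.get)
    won = random.random() < true_win_prob[move]        # play: win or lose
    counts[move] += 1
    value[move] += (won - value[move]) / counts[move]  # running average

print(value)   # value of move_B converges near 0.7, so it gets played most
```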

In the case of AlphaGoZero, the self-play training algorithm, by reinforcement learning, consists of a neural network, updated as it plays, that guesses (predicts) the next moves, and of a powerful search algorithm which outputs, for each possible move, the probability of playing it. (AlphaGo is the famous program developed by the British company Google DeepMind to compete with human players at the game of Go; AlphaGoZero is its latest version, playing with no human input except the basic rules of the game of Go.)

This search algorithm is called “Monte Carlo Tree Search” (MCTS for the acronym), but I won’t go through all the details here. I think this is complicated enough already, and I hope I have made it simple enough for you to follow!
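
I will still give you its core intuition, as a deliberately simplified sketch in Python (this is not the full MCTS algorithm, which also builds a search tree; and the game, a 10-stone Nim instead of Go, is invented for the example): rate each legal move by playing many random games (“rollouts”) from the position it leads to, then pick the move with the best estimated win rate.

```python
import random

random.seed(1)

# Nim: 10 stones on the table, each player takes 1-3, taking the last one wins.

def random_rollout(stones, my_turn):
    # Finish the game with random legal moves; return True if "I" win.
    while stones > 0:
        stones -= random.randint(1, min(3, stones))
        if stones == 0:
            return my_turn          # whoever just moved took the last stone
        my_turn = not my_turn
    return not my_turn              # no stones left: the previous mover won

def best_move(stones, n_rollouts=2000):
    scores = {}
    for take in range(1, min(3, stones) + 1):
        wins = sum(random_rollout(stones - take, my_turn=False)
                   for _ in range(n_rollouts))
        scores[take] = wins / n_rollouts     # estimated win probability
    return max(scores, key=scores.get), scores

move, scores = best_move(10)
print(move, scores)   # taking 2 leaves 8 stones, the theoretically best move
```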

The overall motivation? Cumulative (more and more!) reward! For humans, it is usually money, or at least something to win or achieve at the end. For a computer, it is, for now, defined by the goal of the program/algorithm, which drives it to “win” (i.e., to achieve the goal it has been programmed for!).

What is “Deep Learning”?


Deep Learning is an ensemble of methods brought together to create automated learning. In brief, Deep Learning, a sub-field of Machine Learning, encompasses what we call “convolutional neural networks” and “recurrent neural networks”. Convolutional neural networks are used to scan an image and are thus commonly used in the branch of computer vision called object recognition; these networks are able to discriminate objects within a scene. Recurrent neural networks are used to remember the past, i.e. the history of experiences and actions, by having access to previously acquired and stored knowledge. These deep neural networks are the closest thing we have so far to the human brain, trying to reproduce our cognitive abilities in a machine, and they are grouped in the category called Deep Learning.
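
For the curious, here is a minimal sketch of both network families, written in PyTorch as one possible library; all layer sizes are invented, and these toy networks are of course untrained.

```python
import torch
import torch.nn as nn

conv_net = nn.Sequential(                # convolutional: scans images
    nn.Conv2d(3, 16, kernel_size=3, padding=1),  # slide 16 small filters
    nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(16 * 16 * 16, 10),         # score 10 possible object classes
)
image = torch.randn(1, 3, 32, 32)        # one fake 32x32 RGB image
print(conv_net(image).shape)             # -> torch.Size([1, 10])

rnn = nn.RNN(input_size=8, hidden_size=32, batch_first=True)  # recurrent
sequence = torch.randn(1, 20, 8)         # one fake sequence of 20 steps
output, hidden = rnn(sequence)           # `hidden` summarizes the whole past
print(hidden.shape)                      # -> torch.Size([1, 1, 32])
```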

Some more concrete examples?

Back to AlphaGo: its first version used only supervised learning. AlphaGoFan and AlphaGoLee (the version which beat Lee Sedol in 2016), later and better versions, used both supervised and reinforcement learning. The latest version, AlphaGoZero, as mentioned above, uses only reinforcement learning. By using one single neural network, in contrast to its predecessors, AlphaGoZero was therefore able to “learn on its own”; on that account, we can qualify the system/machine as “intelligent” (hence the term “Artificial Intelligence” (AI)). But not yet as intelligent as us humans... (see below)

Another example, or branch, of unsupervised machine-learning algorithms is called “generative adversarial networks” (GANs for the acronym). You do not have to remember the full name; I know it sounds pretty scary (complex) at first!! “GANs” is easier to remember, and I find it much cooler!! This specific type of network has been around since 2014, invented by Ian Goodfellow and colleagues. I mention it because, so far, it seems to be among the most powerful methods in machine learning. The concept of GANs is to train 2 neural networks to compete with each other in order to reinforce each other. This way, they mutually train themselves and increase their performance without human intervention.


For example, these networks have been extensively applied in the field of computer vision, for image recognition and generation. The “StackGANs” (for Stacked GANs), an improved version of GANs, generate high-resolution images with what the authors called “photo-realistic details” (I think the term speaks for itself), a technical feat in the field of computer vision. The concept behind these GANs is pretty simple (at least I find it pretty simple!! I hope you will too!! I’ll try to describe it in an easy way and then give you another example to draw the analogy with a real-life case): one neural network (NN), let’s call it NN1, generates images; a second NN, let’s call it NN2, within the same system, decides whether each image generated by NN1 is real or fake. Based on the feedback received from NN2, NN1 improves its performance, i.e. becomes able to generate better and better images. Consequently, NN2 also has to get better at deciding whether images are real or fake, since discriminating becomes harder and harder as NN1 generates higher and higher-quality images. This creates a feedback loop of continuous improvement without human intervention.
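
To make the NN1/NN2 loop concrete, here is a stripped-down sketch in PyTorch (one possible library; the network sizes, learning rates and the 1-D “images”, plain numbers drawn from a Gaussian, are all invented to keep it tiny). The real StackGAN works on actual images and stacks several stages, but the feedback loop is the same idea.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

nn1 = nn.Sequential(nn.Linear(4, 16), nn.ReLU(), nn.Linear(16, 1))  # generator
nn2 = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1))  # discriminator
opt1 = torch.optim.Adam(nn1.parameters(), lr=1e-3)
opt2 = torch.optim.Adam(nn2.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

for step in range(2000):
    real = torch.randn(64, 1) * 0.5 + 3.0   # "real" data: Gaussian around 3
    fake = nn1(torch.randn(64, 4))          # NN1 generates from random noise

    # NN2 learns to call real data real (1) and NN1's output fake (0).
    loss2 = bce(nn2(real), torch.ones(64, 1)) + \
            bce(nn2(fake.detach()), torch.zeros(64, 1))
    opt2.zero_grad(); loss2.backward(); opt2.step()

    # NN1 improves from NN2's feedback: it tries to make NN2 answer "real".
    loss1 = bce(nn2(fake), torch.ones(64, 1))
    opt1.zero_grad(); loss1.backward(); opt1.step()

print(nn1(torch.randn(1000, 4)).mean().item())  # drifts toward 3, the real mean
```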


Let me now give you a real-life example as an analogy to what I just explained. Recall that famous, fun party game you have probably played one day with your friends or family during long winter evenings, ideally close to a fireplace!! The game is called Amnesia. Basically, you have a card “gently glued” onto your forehead which indicates to everybody but you who you are in the game (i.e. which celebrity name is written on the card; it could be Cleopatra, Charlie Chaplin, Marilyn Monroe, Elvis Presley, Ben-Hur, Marlon Brando, Lady Gaga, and many more; you get the idea...!!). The game is played in teams. For your team to win, it must succeed in making you guess which celebrity you are by miming (i.e. gestures and imitations, without verbal tips).


Following the same principle as the NN2 network above, which had to guess whether images were real or fake, you have to guess who you are (i.e. which celebrity name is currently “glued” onto your forehead) based on your friend’s mimes/gestures. If you cannot guess, your friend (NN1 in the analogy) will try to get better at miming so that you (NN2 in the analogy) can better guess who she is imitating. As she improves, you guess better, and your performance improves too. This way, you both reinforce each other (i.e. you both improve your performance based on each other’s behavior). Well, that is exactly the concept at work in the previous example with the two NNs, NN1 and NN2.

Coming back now to the field of image recognition, Google was able to train machines which are now better than humans at finding and discriminating objects in a scene, with 3% errors compared to 5% for humans. This was achieved by scaling up automated learning methods (the Deep Learning defined above) and computational power. In the field of image recognition, therefore, machines have recently become better than humans. But no worries, that is still far from making them as “intelligent” as us...!!


Last but not least, and for your information, healthcare has become a growing field benefiting a lot from AI, with several projects already funded by the EU, such as the MURAB project (MURAB for “MRI and Ultrasound Robotic Assisted Biopsy”), set up with AI to better diagnose cancers and other diseases, among other projects.

Here you have reached the end of this post. Hope you enjoyed it. Do not hesitate to share any comment or ask any question! To read more about “What’s really behind AI?” you can click here.

Unsupervised Learning

What is unsupervised learning?


Unsupervised learning differs from supervised learning in that there is no known output, and therefore no mapping function. We say that the data are “unlabeled”, as opposed to the “labeled” data of supervised learning; “labeled” in the sense that we already know the output for each data point. Back to our previous example (cf. the previous post, Supervised Learning): we know that this handwriting sample is labeled an “A”, that one a “B”, etc. In unsupervised learning, we do not know the output of the data we feed the computer with. Therefore, people who use an unsupervised learning algorithm want the computer to find structure in the data, i.e. to figure out a pattern, as it is commonly put.

An example would be if you have a lot of data but are only interested in some aspects of them. Let’s say you have information for the last 3 years, day by day, across the EU, about the weather, the date, the oil price, the real-estate market, the growth of each European country, and how many products (all sorts of veggies, glasses, heart pacemakers, sleeping pills, clothes, shower gel, etc.) have been bought. You might not be interested in the correlations between all of these data variables, also called “dimensions” of the data.

You could therefore write a program to group these data into similar categories; this procedure is called “clustering”, and the resulting sub-groups of data are called “clusters”. For instance, one could group/cluster all the purchased items into “products”. You could also decide to write a program which analyzes only 2 or 3 dimensions of your data; this is called “dimensionality reduction”. In that case, one could choose, for example, to consider only the following dimensions of the data: temperature, time of year and real-estate market, and analyze the data according to these 3 variables (3 dimensions) only.
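
Here is a short sketch of both procedures in Python with scikit-learn, on synthetic data (all the numbers are invented): 200 days, each described by 10 variables.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
days = rng.normal(size=(200, 10))    # 200 days x 10 variables, synthetic

# Clustering: group the days into 3 clusters of similar days.
clusters = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(days)
print(clusters[:10])                 # cluster label of the first 10 days

# Dimensionality reduction: compress the 10 variables into 3 combined ones.
reduced = PCA(n_components=3).fit_transform(days)
print(reduced.shape)                 # (200, 3): now plottable on x, y, z axes
```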


Finally, you could also decide to analyze whether a common rule describes a large part of your data, i.e. whether the whole dataset follows the same rules. In other terms, you could ask whether a given variable X, known to be correlated with A, is also correlated with B. Being correlated means that whenever X increases, A either decreases (in which case we say that the variables A and X are negatively correlated) or increases as well (in which case we say that they are positively correlated). For example, you could look at whether a given dimension of the data, increasing or decreasing with the weather, is also correlated with another dimension, or whether the people who tend to buy one product also tend to buy another. This type of learning also belongs to what is called “machine learning”.
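
As a tiny sketch of this correlation question in Python (with made-up numbers, not real sales figures):

```python
import numpy as np

rng = np.random.default_rng(1)
temperature = rng.normal(20, 5, size=365)                    # one year of days
ice_cream = 2.0 * temperature + rng.normal(0, 3, size=365)   # rises with heat
umbrellas = -1.5 * temperature + rng.normal(0, 3, size=365)  # falls with heat

print(np.corrcoef(temperature, ice_cream)[0, 1])   # close to +1: positive
print(np.corrcoef(temperature, umbrellas)[0, 1])   # close to -1: negative
```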

Here you have reached the end of this post. Hope you liked it. Do not hesitate to share any comment if you wish or ask any question! To read more about “Machine Learning” you can click here. 

Supervised Learning

What is supervised learning?

Supervised learning is a class of learning which uses data entered by humans into a computer/machine in the form of input-output pairs. The term “supervised” stands for human supervision: you can think of it as a teacher who supervises the work and tells the computer how well its prediction of the output from the inputs matches the (already known) output. Indeed, we know in advance that this input and that input lead to the same given output.


For example, say you would like the computer to “decrypt” the handwriting of many human candidates. You give the computer thousands of scanned images of human handwriting and, after the computer has processed the images, you tell it that this is an “A”. So the computer has to learn that the handwritten images you gave it represent an “A”.

Then you give the computer another set of data, the computer has to guess, and you “tell” it that the output is now a “B”, and so on. You do that until, let’s say, the letter M, with thousands of handwriting samples for each letter, until the computer gets very good at it. From N onward, you want to test the computer, based on what it has learned from the previous letters, by asking it to guess which letter each new handwriting sample represents. If the computer is doing well enough (i.e. its responses are very accurate and it makes a very small percentage of errors), then you can show it new handwriting to decrypt.

The data on which the computer is trained are called a training dataset, or training set. The supervised training part is the learning process. To be more accurate, it is actually an algorithm written by humans, also called a code or a computer program, which learns from this training set. The algorithm is written into the computer system, so by analogy we say that the computer “learns”.

The computer “learns” via its learning algorithm. This algorithm tells it how to “learn” and from where, in this case from the training dataset where the data are stored in the form of inputs and outputs.

The goal for the computer is to infer (i.e. deduce), from the input variables entered in the training set, the correct output variable, also entered in the training set. From the input variables to the output variable there is a function, called the mapping function, and the task of the computer / of its learning algorithm is to come as close as possible to the best approximation of this function, i.e. to the best inference of the output based on the inputs. When this is achieved, i.e. when we consider that the performance of the learning algorithm is good enough, meaning that the approximation of the mapping function is good enough, the learning process ends. Then comes the testing part.

The testing part consists of testing the computer with a new input, i.e. an input which was not previously entered in the training set. Once the computer has learned, it is able to make very good guesses, called predictions, about what the output of a new input would be. Indeed, the goal of the computer, based on what it learned from the training set, is now to apply the best approximation of the mapping function, learned during the training process, to the new input and infer an output.
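
To see the whole pipeline (training set, learning, approximation of the mapping function, testing) in a few lines, here is a sketch in Python using scikit-learn and its built-in handwritten digits dataset, so digits 0-9 rather than letters, but the principle is exactly the one described above.

```python
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

digits = load_digits()                      # scanned 8x8 handwritten digits
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, test_size=0.25, random_state=0)

model = LogisticRegression(max_iter=5000)   # the "learning algorithm"
model.fit(X_train, y_train)                 # learning: approximate the mapping

print(model.score(X_test, y_test))          # testing: accuracy on new inputs
print(model.predict(X_test[:5]))            # predictions for 5 new samples
```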

The data collected in the training set consist of knowledge of the same kind gathered into a database, for instance all the legal cases since 1950. Another example: if we entered into a computer millions of chess games played over the past 100 years, with all the possible moves leading to a win or a loss, this would allow the computer to analyze, from this database, the most reliable move to make from a given position (while playing) in order to win. Yet another example comes from the healthcare sector. Imagine a database of every patient diagnosed with cancer all over the world for the past 50 years: their vital signs (e.g. blood pressure, antibody levels or the concentration of iron in their blood, ...), main symptoms ahead of the disease (e.g. headaches, tiredness, sore throat, ...), type of cancer, time until the development of the cancer, treatment, outcome of the treatment, outcome of the cancer and over what period, and all the associated pertinent information related to the disease.


Now imagine such a database entered into a single computer. The latter would then become much better than any doctor in the world at diagnosing, curing or even preventing specific types of cancer. Why? Let’s come back to what I wrote previously. The inputs, the variables of each patient diagnosed with cancer, differ depending on the patient’s condition (e.g. whether she has diabetes, her previous history of cancer, the number of cancer cells, the stage and evolution of the disease, the treatment applied, etc.) and lead to different outputs (cancer cured or not cured, in how long, etc.). The treatment for curing this cancer might have been adjusted to each patient’s particular condition and might have worked or not. The computer would then learn all these associations (patient’s condition, type of cancer, treatment, outcome, ...) from millions and millions of cases, and therefore learn how to come up with the best outcome in terms of treatment for a particular patient’s condition, for instance making the best guess as to which of chemotherapy or surgery would be more appropriate and more successful for a new patient recently diagnosed with lung cancer who also suffers, for example, from diabetes. Achieving such performance means that the computer has learned to come very close to the mapping function and to approximate the best output from an enormous number of inputs (all the patients diagnosed with cancer all over the world for the past 50 years, in our example).

This is already applied in Korea, where doctors use an AI supercomputer as a “medical doctor colleague”, loaded with a database of more than 12 million research papers and cancer medical cases, to help newly diagnosed cancer patients get the right treatment with the best outcome. Of course, the doctors discuss each new case together and, based on the output of the supercomputer, come up with a decision.

Here you have reached the end of this post. Hope you liked it. Do not hesitate to share any comment if you wish or ask any question! To read more about “Unsupervised Learning” you can click here.

AI and consciousness

Hi everybody!

I am back. I was caught up in a pile of papers to review for my academic work, but I am back to writing here again!

Let’s start this first article of the chapter “AI and consciousness” with a question.

Why would we want to reproduce consciousness in robots? At least consciousness the way we define it in humans: the capacity for feeling or perceiving, also called “qualia”, the term used by the neurophysiologist and Nobel laureate John Eccles to refer to subjective experience.

We, as humans, tend to think that the future should be born and raised like us, and thus that machines should mimic us, what I call “human arrogance”... In other words, we tend to think that we are the norm. What if the things evolving beyond us turned out to be slightly different? Let’s shift perspective and imagine a world where machines are not “humanized” but instead evolve beyond what we define as “human”, becoming the next species (i.e. the next level of evolution).

If we follow Darwin’s theory of evolution by natural selection (cf. his book “On the Origin of Species”, 1859), only the species/features/variables worth surviving survived evolution: for example, species with a fundamental ability to adapt, or features which adapted in order to serve different functions, i.e. to survive evolution.


That said, why would machines need to “feel”, i.e. have emotions? This feature may not be necessary for natural selection, for AI machines to adapt and “think”. It may not be a dominant selection criterion, and may thus “die” with us, the human species. Another possibility is that we may evolve together with AI, maybe combining ourselves with machines. This might sound like something out of a science-fiction movie, but there might be ways to do so, ways we cannot conceive of yet, mostly relying on the unconscious processing our brain performs to integrate information. Indeed, a lot of things are processed by our brain without us being aware of that processing. I will develop this point in the next article of this chapter.

Everyone agrees that it is difficult to reproduce consciousness in robots, but what actually is consciousness? According to Professor Stanislas Dehaene’s book Le Code de la Conscience (published in English as “Consciousness and the Brain”), most of the useful processing of information is done unconsciously: the “learning how to learn”, our vision of the world, etc., and this could be translated into a robot. The question is: are conscious processes, such as the feeling of an intense color, a sunrise, or sadness, really necessary?


In his book “Homo Deus”, Yuval Harari argues that emotions and feelings (perceptions, sensations) are simply algorithms useful for the survival of species. In the animal world, indeed, decisions and action selection are made for survival, based on reproducing and on finding food without being eaten. As an example from the book: if a monkey sees bananas close to a lion, probabilistic calculations have to be made in order to decide what is the best move to make (get the bananas without being eaten by the lion, basically).

We can call the fact that humans can feel a beautiful sunrise, or feel for people starving on the other side of the planet or going through a terrible and devastating tsunami, the beauty of the human race; but in a sense, isn’t it a brake on our focus, intelligence and efficiency, and thus on our productivity, by slowing us down (in a purely pragmatic view)? We are indeed usually not efficient when we are driven by our emotions... Be heart-broken, or worried about a parent’s health, and your work efficiency will surely decrease, overwhelmed as you are by a flood of sad, angry or worried thoughts...

I am not saying emotions are bad; I am saying that maybe the next species (the next level of evolution) will exist without these consciously expressed subjective qualia. Why? Maybe in order to evolve into a new era of productivity and efficiency, in which the world runs at such a fast pace that emotions are not “permitted” anymore. The only way not to be left behind may then be to get rid of conscious processes and of all these subjective entities.

Just something to meditate on.

Another school of thought holds instead that emotions are our “added value”: it is not just about taking strategic decisions in an automatic way, but taking them with affect, with emotions. For example, as mentioned in one of my previous posts, when Lee Sedol played against the software AlphaGoLee, he finally won a game against the computer when he was no longer playing in an automatic way, no longer “like a machine”, but with affect and emotion. He was indeed angry and deeply disappointed at having already lost the first three games of the five-game match, officially beaten. Lee Sedol was playing with affect and emotion, and the computer was lost; it did not know how to adapt anymore to such a “strategy”, to this mode of playing from its adversary.

Thus, emotions could instead be a strength that will allow us to assert our singularity over technology, in an era when collective intelligence and the social brain are the keys to future society’s needs. We will develop these concepts in a coming post.

This is it for today. More in the following (short) articles. Stay tuned !

AI and Consciousness (Part 1)

Hi everybody,

Sorry for not writing much earlier; I was caught up in writing papers for academia.

I want to start a new series of blogs about AI and Consciousness. My favorite theme. Here are the topics I want to cover in the posts of this series:

  1. Human consciousness.
  2. How could it be applied to a machine?
  3. Which features of human consciousness would survive evolution, if “conscious” AI is our future? Would “conscious” AI be useful at all? Why would consciousness be a feature that should survive evolution? Or is it purely human?

I will try to post a new blog post every few weeks, so stay tuned!

 

 

 

 

AI and bias

(Reading time of this post: < 5 minutes)

Hi everybody,

I’m back to writing. I’ve been pretty busy lately and thus did not have much time to write.

In particular, I’ve been working on an AI research proposal with two scientist friends (a material physicist and a mathematician). After quite some brainstorming, we decided to pick the following topic: AI and (its influence on) bias in society.


At first I felt somewhat undecided and a bit skeptical about the topic, as I think bias is a difficult feature to assess and, in a way, to remove from humankind. We are all biased, consciously and/or unconsciously: governed by past experiences and prejudices, most of the time without even being aware of them. This means they can influence our decisions and actions without us noticing that they do. We might, for instance, think that a man or a woman would best fit this job or that job, and therefore tend to hire more men or more women for a given position, based on biased standard norms we assume to be right, often unconsciously.

We tend to think that AI, free of subjective human judgement, might help avoid such biases, and therefore be the solution to the human bias problem. However, some studies have revealed that AI can actually do much worse than humans in terms of bias (i.e. produce heavily biased output from the data entered into the system).

Thus, the general idea is that AI may be used by humans as a convenient tool to increase bias. Indeed, it may be easy (though pretty nasty) to hide bias in the lines of code of the algorithms programming an AI, while pretending the system is made to decrease or even annihilate bias. As I wrote in my past articles, AI is evolving at an extremely fast (exponential) pace, so it is hard for regulation to keep up with each newly developed AI technology and with every advance, with appropriate checkpoints at every step of the AI development process. This limitation will surely profit some AI developers, while the general public unfortunately lags behind.


The main idea would be to investigate how unwanted bias strongly influences our society as long as humans keep a hand in AI, i.e. still feed AI systems with data and write the algorithms, and to address this issue by comparison with conscious AI entities which might be able to learn how to learn about the world without human intervention (and which do not exist yet).

We plan to address this question by proposing a model of consciousness suggested by human research, seeing how it may apply to a machine, and discussing whether or not bias is purely human, and thus how humans and AI could cooperate in our society if AI systems manage to become “conscious” entities.

This brings me to the next series of posts I plan to write in the coming weeks, about AI and consciousness: the series I have been waiting to write since I started this blog, by far my favorite, I have to confess. This topic indeed still raises a lot of questions and debates in brain research (neuroscience), about disentangling consciousness in the human brain, but also about the ways it may or may not be applied to machines, so that we could talk about “conscious AI”. One might also consider consciousness a unique feature of humankind, and therefore not applicable to machines.

Other interesting questions might be: what would the most efficient conscious AI, the one serving our society best, look like? Which human features would AI still need to keep in order to use its potential best and adapt best to the world? Answering these questions will give us a good hint about which features of human consciousness would still be required for machines that think, and thus which features would be kept as useful variables surviving evolution (according to Darwin’s theory of evolution).


I am currently reading John Brockman’s book “What to Think About Machines That Think”, which I recommend: it interviews leading thinkers, mostly scientists, about machines that think, and summarizes in short chapters what they think about the topic.

For the next series of posts about AI and consciousness, I will always document my comments with references, and this book will be one of them.

Looking forward to posting more on the topic.

Till the next post.