AI and bias

(Reading time of this post: < 5 minutes)

Hi everybody,

I’m back to writing. I’ve been pretty busy lately and haven’t had much time to write.

In particular, I’ve been working on an AI research proposal with two scientist friends (a materials physicist and a mathematician). After quite some brainstorming, we settled on the following topic: AI and (its influence on) bias in society.


At first I felt somewhat undecided and a bit skeptical about the topic, as I think bias is a difficult feature to evaluate and to somehow remove from humankind. We are all biased – consciously and/or unconsciously – governed by past experiences and prejudices, most of the time without even being aware of them. This means they can influence our decisions and actions without our noticing. We might, for instance, assume that a man or a woman would best fit a given job, and therefore tend to hire more men or women for that position, based on biased norms we take for granted, often unconsciously.

We tend to think that AI, free of subjective human judgement, might help us avoid such biases, and would therefore be the solution to the human bias problem. However, some studies have revealed that AI can actually do much worse than humans in terms of bias (i.e., produce heavily biased output from the data entered into the system).

Thus, the general idea is that AI may be used by humans as a convenient tool to increase bias. Indeed, it may be easy (though pretty nasty) to hide bias in the lines of code of the algorithms behind an AI system, while pretending the system is designed to decrease or even eliminate bias. As I wrote in past articles, AI is evolving at an extremely fast (exponential) pace. It is therefore hard for regulation to keep up with each newly developed AI technology, and to put appropriate checkpoints at every step of the development process. This limitation may well profit some AI developers, while the general public unfortunately lags behind.


The main idea would be to investigate how unwanted bias strongly influences our society as long as humans keep a hand on AI – i.e., still feed AI systems with data and write the algorithms – and to address this issue by comparing with conscious AI entities that might be able to learn how to learn about the world without human intervention (which do not exist yet).

We planned to address this question by proposing a model of consciousness suggested by human research, examining how it might apply to a machine, and discussing whether bias is purely human – and thus how humans and AI could cooperate in our society if AI systems managed to become “conscious” entities.

This brings me to the next series of posts I plan to write in the coming weeks, about AI and consciousness – the post I’ve been waiting to write since I started this blog, by far my favorite, I have to confess. This topic still raises a lot of questions and debates in brain research (neuroscience), both about disentangling consciousness in the human brain and about whether it may or may not apply to machines, such that we could speak of “conscious AI”. One might also consider consciousness a unique feature of humankind, and therefore not applicable to machines.

Other interesting questions might be: what would be the most efficient conscious AI, serving our society best? Which human features would AI still need to keep in order to best use its potential and best adapt to the world? Answering these questions would give us a good hint about which features of human consciousness would still be required for machines that think – and thus which features would be kept as useful variables surviving evolution (according to Darwin’s theory of evolution).


I am currently reading John Brockman’s book “What to Think About Machines That Think”, which I recommend. It gathers leading thinkers – mostly scientists – and summarizes in short chapters what they think about machines that think.

For the next series of posts about AI and consciousness, I will always document my comments with references, and this book will be one of them.

Looking forward to posting more on the topic.

Till the next post.


What’s behind Artificial Intelligence? In simple words.

What is “Data mining”?

Data mining is the process of screening significantly large datasets for “patterns”. This is what we previously defined as unsupervised machine learning: here, data mining is the act of looking for patterns in data that has no predefined structure (remember, this is what you want the computer to achieve for you – find a structure, or pattern, in your data, since none exists yet; as a reminder, this data has no labeled output, in contrast to supervised learning, see above). Usually, this requires methods from machine learning, statistics, and programming.

To give you a concrete example: in the case of money-flow detection, lawyers could spend months or even years going through large datasets – emails, SMS, chat conversations – screening for the slightest piece of information that could prove a suspicious flow of money. An AI program searching for keywords in such datasets could probably take hours, or let’s say days, to achieve the same goal. It then becomes much easier to “catch” a flow of money within days, before the money has been moved through so many locations that it becomes impossible to track down. Such an AI program therefore dramatically increases efficiency, and frees the lawyer in our example to invest his time in other, surely more interesting work!
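To make the keyword-screening idea tangible, here is a minimal sketch in Python. The keywords and messages are entirely made up for illustration – a real forensic tool would of course be far more sophisticated – but the principle of scanning many messages for suspicious terms is the same:

```python
# Minimal sketch of keyword screening over a set of messages.
# Keywords and messages are made up for illustration only.
KEYWORDS = {"transfer", "offshore", "invoice"}

def flag_messages(messages, keywords=KEYWORDS):
    """Return (index, message) pairs containing at least one keyword."""
    flagged = []
    for i, text in enumerate(messages):
        words = set(text.lower().split())
        if words & keywords:  # set intersection: is any keyword present?
            flagged.append((i, text))
    return flagged

messages = [
    "Lunch at noon?",
    "please confirm the offshore transfer today",
    "invoice 4521 attached",
]
print(flag_messages(messages))  # flags messages 1 and 2, not the lunch note
```

A program like this scans millions of messages in seconds – which is exactly the speed advantage over manual review described above.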

There is also a software program called Quil used in journalism. It can analyze information available online and choose – based on predefined criteria written into an algorithm – the most relevant details to write an article on its own, using its language-generation software to build the sentences. Another great example of data mining is Spotify. Spotify uses an outstanding amount of data about its customers (their location, their potential mood – inferred from the music they listen to – which day of the week they listen to specific types of music, the weather where they are located, …) to recommend more music of the same kind… I just hope that if Spotify concludes it is a rainy Sunday and you might be in a sad or depressive mood, it is smart enough to suggest feel-good music – like “Don’t Worry, Be Happy” !! – instead of suggesting even more sad songs about how tough life is and how we are all lonely in the end… !! Which might be true at times, but is surely not what you need on a rainy and melancholic Sunday! ;o)


What is the link with the brain?

The scientific discipline that studies the brain is called neuroscience, which is why AI and neuroscience are currently so closely linked: in order to create better AI, we need to understand how the brain works. This is not an easy task, however, especially when it comes to understanding the mechanisms underlying our cognitive abilities. For the brain to learn, it relies on what we call “neuronal networks”: groups of connected neurons, within or between brain regions, that work together to process and integrate information by reinforcing (strengthening) the connections between them (a phenomenon called “plasticity”) in order to learn – i.e., store information and recall it when needed. This information is mostly perceived from the world and our environment (bottom-up afferent inputs), but it can also be generated internally, i.e., mentally recalled (top-down intrinsic inputs).

You can think of this mental recall as previously acquired knowledge and past experiences stored somewhere in your brain, which you can retrieve almost immediately whenever you want to think about something in particular – for example, when you close your eyes and think about your last holidays at the beach, when you recall an important piece of information during an exam, or when you draw on a past experience to make a decision.

Here is, basically, how it works (I will only sketch the big picture, without going into detail): we perceive a lot of information from the world and our environment via our senses (what we see with our eyes, hear with our ears, smell with our nose, …). This perceived information is sent to our brain almost instantaneously. Our brain then processes and integrates it, and, if it is perceived as relevant*, brings it to our state of consciousness so that it can be stored somewhere in the brain as a lasting memory and later retrieved (recalled) when needed. (*How the brain detects whether information is relevant is still debated; one hypothesis is via attention, i.e., when you pay attention to something.) Machines/AI implement the same concept (“neuronal networks”) via artificial neural networks (ANN).

What are Artificial Neural Networks (ANN)?


ANN are composed of an input layer, an output layer, and possibly other layers in between called “hidden layers”, which do the work of getting from the input(s) (what is received by the system) to an output (what comes out of the system; think of it as a response / action / decision).

To better understand what an input and an output are, let me give you an example from a real-life situation. Imagine you are walking calmly in a forest when suddenly a tree is about to fall right in front of you. Based on what you have learned (a tree could kill you if it falls on you) and on the information coming to your eyes (the falling tree, i.e., the input), which is sent to your brain and reaches your consciousness (making you aware of the situation), you react quickly (i.e., make a fast action/decision): you move out of the way of the falling tree. The output, in this case, is a motor output – a message sent by the motor system somewhere in your brain to your leg muscles to produce a movement, in order to keep the tree from falling on you, based on the previous knowledge or past experience that it might kill you.


The hidden layers, in our example, can be thought of as all the networks of neurons in our brain whose processing we are not aware of (unconscious processing of information), which take the input and turn it into a quick output. Another example: if you wish to hold a piece of information in mind, you must pay attention to it (the input) so that it reaches your consciousness; then, through the many mechanisms that occur once information reaches consciousness (hidden layers), it is kept in mind (the output). These two examples concerned the human brain.
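To see the input → hidden layer → output idea in code, here is a minimal sketch of a tiny ANN doing a single forward pass. The weights are made-up numbers chosen only to show the mechanics; a real network would have many more neurons and would learn its weights from data:

```python
import math

def sigmoid(x):
    # squashing activation: maps any real number into (0, 1)
    return 1.0 / (1.0 + math.exp(-x))

def forward(inputs, w_hidden, w_output):
    """One input layer -> one hidden layer -> one output neuron."""
    # each hidden neuron sums its weighted inputs, then squashes the result
    hidden = [sigmoid(sum(w * x for w, x in zip(ws, inputs)))
              for ws in w_hidden]
    # the output neuron does the same over the hidden activations
    return sigmoid(sum(w * h for w, h in zip(w_output, hidden)))

# Made-up weights, just to illustrate the structure.
w_hidden = [[0.5, -0.2], [0.3, 0.8]]   # two hidden neurons, two inputs each
w_output = [1.0, -1.0]                  # one output neuron
print(forward([1.0, 0.0], w_hidden, w_output))  # a value between 0 and 1
```

The output value can be read as the network’s “decision” – for instance, the confidence that a quick move is needed in the falling-tree scenario.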

Now, in the case of a machine: back to our example of AlphaGoZero, an output would be a winning move from the starting position (the input). To reinforce learning, the weights of the connections in the ANN need to be strengthened, in the same way human brain connections between neurons are strengthened/reinforced via plasticity mechanisms during learning. The single goal: optimizing the reward, i.e., achieving the goal (as in the human brain examples above: avoiding a falling tree, or keeping a piece of information in mind).
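This “strengthening weights” idea can be sketched with the simplest possible learner: a single artificial neuron trained with a perceptron-style update rule. The toy data (the logical AND function) and the learning rate are my own choices for illustration – this is not how AlphaGoZero learns, just the weight-adjustment principle in miniature:

```python
def train_neuron(samples, epochs=20, lr=0.1):
    """Learn weights for a single linear neuron with a step activation.

    samples: list of (inputs, target) pairs, target in {0, 1}.
    Each time the neuron answers wrongly, its connection weights are
    nudged toward the correct answer (a crude analogue of plasticity).
    """
    n = len(samples[0][0])
    w = [0.0] * n
    b = 0.0
    for _ in range(epochs):
        for inputs, target in samples:
            out = 1 if sum(wi * xi for wi, xi in zip(w, inputs)) + b > 0 else 0
            err = target - out
            # strengthen (or weaken) each weight in proportion to the error
            w = [wi + lr * err * xi for wi, xi in zip(w, inputs)]
            b += lr * err
    return w, b

# Learn the logical AND function (linearly separable, so it converges).
data = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]
w, b = train_neuron(data)
predict = lambda x: 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
print([predict(x) for x, _ in data])  # matches the targets: [0, 0, 0, 1]
```

After a handful of passes over the data, the repeated small weight adjustments have “strengthened” exactly the connections needed to get every answer right – the reward-optimization loop in its most stripped-down form.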

What’s really behind AI?

So far, there is always a human being behind AI, writing the algorithms and “feeding” the computer with data, i.e., creating a database of input/output pairs, as already described above. For example, feeding the system the following database: a bunch of different medical symptoms from millions of patients all over the world, all leading to the same pathology – to take a similar example as in previous posts.

Behind these AI algorithms written by humans there are statistics and simulations, so that the system can be trained to learn as quickly and as efficiently as possible (cf. AlphaGo). So learning algorithms to train artificial neural networks, statistics, and simulations are key.

An important point I would like to emphasize here is that being “intelligent” is not the same as what is commonly implied when we say that “a computer beats a human champion” at chess or at the game of Go – as when AlphaGoLee beat the world-famous Korean player Lee Sedol in 2016, or when its improved version AlphaGoMaster beat the famous world champion Ke Jie in 2017.


A computer, even the most basic one running simple programs, as long as it has high computational power (meaning it can do complex calculations very quickly), would be able to beat any human at mathematical tasks like calculation or at memory-based feats – for example, processing millions of chess games in a rather short time – as we simply cannot compete with such outstanding computing performance. (!)

Back to our Go example: over the course of its training, the later program AlphaGoZero self-played about 5 million games of Go!! On top of that, it evaluated 1,600 simulations for each next-move search computed by the self-play algorithm (i.e., for each move, it evaluated 1,600 possibilities for where the next move could be, assigned a probability to each of them, and chose to play the move with the highest probability)!! After learning, this program was able to plan 50 to 60 moves ahead (out of about 150 moves in a whole game of Go), which is far more than any human could ever dream of. How could even the best human champion in the discipline compete with such performance, unless the computer makes a mistake?! The computer can make a mistake when it is confronted with a situation it has never seen before and/or cannot deal with based on what it has previously learned or been trained for.
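The “simulate many candidate moves, estimate each one’s win probability, play the best” loop can be sketched very simply. This is emphatically not DeepMind’s algorithm (the real system combines Monte Carlo tree search with neural-network evaluations); the candidate moves and win rates below are invented, and the round-robin allocation of simulations is a deliberate simplification:

```python
import random

def choose_move(candidates, evaluate, n_simulations=1600):
    """Spread simulations across candidate moves and pick the best one.

    candidates: list of possible next moves.
    evaluate: function returning 1 (simulated win) or 0 (loss) for a move.
    Returns the move with the highest estimated win rate.
    """
    wins = {m: 0 for m in candidates}
    plays = {m: 0 for m in candidates}
    for i in range(n_simulations):
        move = candidates[i % len(candidates)]  # round-robin, for simplicity
        wins[move] += evaluate(move)
        plays[move] += 1
    return max(candidates, key=lambda m: wins[m] / plays[m])

# Toy evaluator: move "B" wins most often in our made-up simulations.
random.seed(0)
win_rates = {"A": 0.3, "B": 0.7, "C": 0.5}
best = choose_move(list(win_rates),
                   lambda m: int(random.random() < win_rates[m]))
print(best)  # with ~533 simulations per move, "B" comes out on top
```

With 1,600 simulations spread over the candidates, sampling noise is small enough that the genuinely strongest move is picked reliably – which is the point of running so many simulations per move.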

And this is how Lee Sedol (is thought to have) managed to win one of the 5 games played against the AlphaGoLee version of AlphaGo. In that game, Lee Sedol played with emotion, no longer with strategy alone, so the computer was lost and did not know how to adapt. Such inability of an AI computer to cope with a novel scenario is what keeps humans unique and still “more intelligent” than any computer so far – not thanks to computational power (humans have far less of it!) but thanks to this fantastic ability to adapt to their environment. Adaptation here means that when we are put in a completely new situation we have never dealt with before, we take time to think it through, examine possibilities, evaluate options and outcomes, retrieve previous knowledge and past experiences, and create new alternatives by finding appropriate strategies, in order to select the best action as our decision for dealing with the new scenario.


Let’s take an example. You are in the desert and getting very thirsty. All you have is a beer (in a glass bottle) and nothing to open it with. What would you do? I’m sure you would figure something out – using a stone, your shoe, anything – to open that beer and hydrate yourself. The same applies in any situation where you need a specific tool to complete a task but that tool is missing from your surroundings, with no possibility of buying one or asking someone nearby: you will try to use any object or strategy in your immediate environment as a tool to complete the task at hand.


Hence we humans are surely not as fast as AI computers at processing math, but we know how to adapt – we have the basis to know how to learn – and this is a level of evolution that even the best AI supercomputer has not reached yet.

Thus, in that sense, we are still “superior”, i.e., more intelligent, than the best AI entity out there. This is good news! However, we do not know for how long… As soon as we know how to build an AI supercomputer that can “learn how to learn” on its own – without a human feeding its system with data or programming its learning rules, but instead knowing on its own how to access data and learning rules and how to adapt to its environment and to new situations – then an AI machine would reach our kind of intelligence. But the technology is not there yet, at least as far as I can tell.


Keep in mind, though, that thanks to the humans devoted to this new technological revolution, the field of AI is growing at a very quick (exponential) pace – the kind of pace humans understand the least (it is almost impossible for us to picture exponential growth)! This means that AI develops and improves extremely quickly! So who knows when it will reach human intelligence… Something to meditate on, maybe, until we discuss the topic later on. I indeed plan to address it in an upcoming post of this blog, so stay tuned!

What does “a computer learns” mean?

Let’s first start to define:

What is learning?


I think one could define learning as the acquisition, consciously or unconsciously, of new information – or let’s say knowledge – about the world, our environment, our routines, the people around us, things we need to know how to do, and things we need to remember, in order to reach our goals and adapt to our environment. Learning usually entails an improvement (i.e., an increase) in performance.