Predictions and how to rethink the future

One of the goals of AI is to improve predictions

Hi everybody!

Welcome back!

This time, after reading articles and, mostly, listening to talks about AI, I want to write about:

(1.) the main role of AI: making better predictions, since AI is the best predictor we have ever built;

(2.) the risk that comes with it, since improving predictions means feeding the AI system with a lot of data (our data, eventually), which leads directly to the fair question (and issue) of AI’s limitations:

(3.) (our) data protection.

I will first write about the advantages of AI in business and how it will improve business by improving the accuracy of predictions, then talk about its risks and limitations.

Predictions and business 


AI is starting to make better predictions because it relies on more data: the output can be predicted more accurately because the model can learn from the many past cases fed into its system. Remember the previous section on supervised machine learning, where I introduced the idea of a training set and a testing set, inputs versus known outputs. Once the computer is fully trained, it can predict an output from new inputs, i.e. from new data you feed it.
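To make this concrete, here is a minimal sketch of the training set / testing set idea, assuming scikit-learn is available; the dataset is synthetic and purely illustrative, not the kind of data any particular business would use.

```python
# A minimal train/test sketch with scikit-learn; the data is synthetic.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

# Toy input/output pairs: X holds the inputs, y the known outputs.
X, y = make_classification(n_samples=1000, n_features=10, random_state=0)

# Split into a training set (to learn from) and a testing set (to evaluate on).
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0
)

model = LogisticRegression(max_iter=1000)
model.fit(X_train, y_train)          # "training" the computer on known cases
predictions = model.predict(X_test)  # predicting outputs for new, unseen inputs
print("accuracy:", accuracy_score(y_test, predictions))
```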


Let’s say you want to know what the weather will be in a week because a big event is approaching (and you think the weather might affect the turnout). By definition the weather is unpredictable, like the stock market, because it is a non-linear system, but models try to approximate it as well as they can.

With AI, predictions become faster, more accurate, easier, and therefore cheaper. The only “problem” is to integrate such predictions into business models, so that people can really base their decisions, and consequently the future of their company, on AI, i.e. on predictions made by AI. New companies, such as “house of bots“, try to teach how to do so.

Google, for example, is developing a new AI supercomputer based on quantum bits, which have intermediate states between the basic binary states 0 and 1, and which could improve predictions even further (i.e. increase prediction accuracy). You can think of the basic states as an ON/OFF switch (think of a light, for example) and of quantum bits as intermediate states between ON and OFF, like “50 shades of grey”. Formally, these quantum bits are a superposition (a linear combination) of the basic states 0 and 1.
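Here is a small sketch of that “linear combination” idea in plain numpy; the amplitudes below are chosen arbitrarily, just to show an equal mix of the two basic states.

```python
import numpy as np

# A classical bit is either 0 or 1; a qubit is a linear combination
# (superposition) of the two basic states |0> and |1>.
ket0 = np.array([1.0, 0.0])  # basic state |0>  ("OFF")
ket1 = np.array([0.0, 1.0])  # basic state |1>  ("ON")

# Amplitudes alpha and beta, chosen so that |alpha|^2 + |beta|^2 = 1.
alpha, beta = 1 / np.sqrt(2), 1 / np.sqrt(2)
qubit = alpha * ket0 + beta * ket1

# When measured, the qubit collapses to 0 or 1 with these probabilities:
p0, p1 = abs(alpha) ** 2, abs(beta) ** 2
print(p0, p1)  # 0.5 0.5 -- an equal "shade of grey" between OFF and ON
```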


So an easy way to get started with AI in your business is to ask: which decisions would you like to rest on less uncertainty and be predicted more accurately? Answering this question will let you identify what you need AI for. From there, you might wonder how you will use and trust AI, i.e. its predictions, to shape the decisions, and therefore the actions, of your business or organization. It is fair to ask how much you could rely on it, and how you would value the impact of AI’s outcomes. And, last but not least, and maybe rather the place to start: what type of data would you need to improve the AI system you have, i.e. its prediction accuracy, so that your business or organization performs even better?

Indeed, inaccurate predictions, a lot of false positives for example, may cost you a lot of money. Conversely, predictions that become more and more accurate, in other words predictions that tend toward 100% correct without ever reaching that level, might greatly benefit your business model. For this, you just need to choose the AI system that best corresponds to your needs. This is key. Then you are good to go and try it out, keeping in mind that the AI system can always improve, i.e. perform better.
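To see why accuracy translates directly into money, here is a back-of-the-envelope sketch; every number in it is invented for illustration (the per-error cost, the volume of predictions, and the crude assumption that every error is a false positive).

```python
# Back-of-the-envelope: how prediction accuracy can translate into money.
# All numbers are invented for illustration.
n_predictions = 10_000
cost_per_false_positive = 50.0  # e.g. needlessly dispatching security staff

for accuracy in (0.90, 0.95, 0.99):
    # Crude worst case: assume every error is a false positive.
    false_positives = n_predictions * (1 - accuracy)
    wasted = false_positives * cost_per_false_positive
    print(f"accuracy {accuracy:.0%}: up to {false_positives:,.0f} false positives"
          f" -> up to ${wasted:,.0f} wasted")
```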

For example, the more data (input/output cases) and the more diverse the dataset you feed your AI system with, the more accurate and appropriate its predictions will become, because more, and more diverse, data allows the system to learn more, faster and better.


For instance, if you buy an alarm system for your company or organization, it may be improved so much that you could even predict someone’s intention to commit a crime (a break-in) at the door or in the vicinity of your company before she even thinks of it, or even knows she will.

This example was inspired by a great movie with Tom Cruise called “Minority Report”, which shows that in the future (around the year 2054) it would be possible to predict who is going to commit a crime, and thus prevent it from happening. In theory, it sounds great. But then comes the worst.

The series “Person of Interest”, available on Netflix, tells the story of millions of people being screened in public spaces by cameras equipped with face-recognition software. This is scarier. Besides, it is no longer science fiction. A similar practice is currently applied in China, where citizens are given good and bad points based on their behavior. That behavior is screened by millions of cameras connected to a big database and to the Internet, and is therefore easily accessible to their employers, to the bank they ask for a mortgage, and so on. It sounds unbelievable, but it is already happening…

Imagine all the data about you one could collect: whether you pay your bills on time, how many fines you received in the past six months and why, etc., all translated into points (good or bad) that would have an impact on your life (access to credit, etc.). As if someone were always watching you and handing out rewards or punishments accordingly… It doesn’t sound very appealing, does it? More importantly, would you still feel free?



What’s behind Artificial Intelligence? In simple words.

What is “Data mining”?

Data mining is the process of screening significantly large datasets in search of “patterns”. This is what we previously defined as unsupervised machine learning: data mining looks for patterns in data deprived of any structure (remember, this is what you want the computer to achieve for you, to find a structure, or pattern, in your data, since no structure exists yet; as a reminder, these data come with no outputs, unlike in supervised learning, see above). It usually relies on methods from machine learning, statistics and programming.
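As a minimal sketch of what “finding a pattern in unlabeled data” can look like in practice, here is a k-means clustering example, again assuming scikit-learn; the data is synthetic, and no outputs are given to the algorithm.

```python
# Unsupervised pattern-finding with k-means clustering (scikit-learn).
from sklearn.datasets import make_blobs
from sklearn.cluster import KMeans

# Synthetic, unlabeled data: the true labels are discarded on purpose.
X, _ = make_blobs(n_samples=300, centers=3, random_state=0)

kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)
print(kmeans.labels_[:10])      # the pattern (cluster) assigned to each point
print(kmeans.cluster_centers_)  # the structure the algorithm discovered itself
```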

To give you a concrete example: to detect suspicious flows of money, lawyers may spend months or even years screening large datasets, like emails, SMS and chat conversations, for the slightest piece of information that could prove an illicit transfer. An AI program searching such datasets for keywords could achieve the same goal in hours, or let’s say days, instead of years of human work. It then becomes much easier to “catch” a flow of money within days, before the money has been moved through so many locations that it becomes impossible to track down. Such a program dramatically increases efficiency and allows the lawyer in our example to invest his time in so many other tasks, less time-consuming and surely more interesting!
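A toy sketch of that keyword search, in plain Python; the folder layout and the keyword list are hypothetical, and a real investigation tool would of course be far more sophisticated.

```python
# Toy keyword scan over a folder of documents; keywords are hypothetical.
from pathlib import Path

KEYWORDS = {"transfer", "offshore", "wire", "shell company"}

def flag_documents(folder: str) -> list[str]:
    """Return the paths of text documents containing at least one keyword."""
    flagged = []
    for path in Path(folder).glob("**/*.txt"):
        text = path.read_text(errors="ignore").lower()
        if any(keyword in text for keyword in KEYWORDS):
            flagged.append(str(path))
    return flagged

# flagged = flag_documents("evidence/")  # hours or days, instead of years by hand
```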

There is also a software program called Quill, used in journalism. This software can analyze information available online and choose, based on predefined criteria written into an algorithm, the most relevant details to write an article on its own. To achieve this, it uses language-generation software to build sentences. The best example of data mining is Spotify. Spotify uses an outstanding amount of data about its customers (their location, their potential mood, inferred from the music they listen to, the day of the week on which they listen to specific types of music, the weather where they are located, …) to give you more music of that kind… I just hope that if Spotify concludes that it is a rainy Sunday and you might be in a sad or depressive mood, it is smart enough to suggest feel-good music, like “Don’t Worry, Be Happy”, instead of suggesting even more sad songs saying that life is tough and that we are all lonely in the end… which might be true at times, but is surely not what you need on a rainy and melancholic Sunday! ;o)
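For fun, here is a completely made-up sketch of the kind of context-aware rule this paragraph wishes for; none of it reflects how Spotify actually works.

```python
# A made-up, context-aware recommendation rule; not Spotify's actual logic.
def recommend(weather: str, day: str, mood: str) -> str:
    # The "feel-good override" wished for above: cheer up sad listeners.
    if mood == "sad" or (weather == "rainy" and day == "Sunday"):
        return "Don't Worry, Be Happy"
    if mood == "energetic":
        return "upbeat playlist"
    return "personalized mix"

print(recommend("rainy", "Sunday", "sad"))  # -> Don't Worry, Be Happy
```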


What is the link with the brain?

The scientific discipline that studies the brain is called neuroscience, and this is why we currently link AI and neuroscience: to create better AI, we need to understand how the brain works. This is not an easy task, however, especially when it comes to understanding the mechanisms underlying our cognitive abilities. For the brain to learn, it relies on what we call “neuronal networks”. These are networks of neurons, i.e. groups of connected neurons within or between brain regions, that work together to process and integrate information by reinforcing (strengthening) the connections between them (a phenomenon called “plasticity”) in order to learn, i.e. to remember information and recall it when needed. This information is mostly perceived from the world and our environment (bottom-up afferent inputs) but can also be thought about, i.e. mentally recalled (top-down intrinsic inputs).

You can think of this mental recall as previously acquired knowledge and past experiences stored somewhere in your brain, which you can retrieve almost immediately whenever you want to think about something in particular: for example when you close your eyes and think about your last holidays at the beach, when you need to remember an important piece of information to complete an exam, or when you recall a previous experience in order to make a decision.

Here is basically how it works (I will only sketch the big lines so you get the big picture, without going into details): we perceive a lot of information from the world and our environment via our senses (i.e. what we see with our eyes, hear with our ears, smell with our nose, …). This perceived information is sent to our brain almost instantaneously. Our brain then processes and integrates it and, if it is perceived as relevant*, brings it to our conscious awareness so that it can be stored somewhere in our brain as a lasting memory and later retrieved (recalled) when needed. (*How does the brain detect whether a piece of information is relevant? One hypothesis is via attention, i.e. when you pay attention to something.) Machines/AI follow the same concept (“neuronal networks”) and are computed via artificial neural networks (ANN).

What are Artificial Neural Networks (ANN)?


ANN are composed of an input layer, an output layer and, optionally, layers in between called “hidden layers”, which do the work of getting from the input(s) (i.e. what is received by the system) to an output (i.e. what comes out of the system; think of it as a response/action/decision).
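To make the layer idea tangible, here is a minimal feedforward network in plain numpy; the sizes and (random) weights are arbitrary, and the example only shows how data flows from input to output through a hidden layer.

```python
import numpy as np

# Minimal feedforward network: input layer -> hidden layer -> output layer.
rng = np.random.default_rng(0)

W1 = rng.normal(size=(4, 3))  # weights: 4 inputs -> 3 hidden neurons
W2 = rng.normal(size=(3, 1))  # weights: 3 hidden neurons -> 1 output

def forward(x):
    hidden = np.tanh(x @ W1)                   # hidden layer: the work in between
    output = 1 / (1 + np.exp(-(hidden @ W2)))  # output in [0, 1]: a decision
    return output

x = np.array([0.5, -1.2, 3.0, 0.1])  # an input: what arrives at the system
print(forward(x))                    # an output: what comes out of the system
```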

To better understand what an input and an output are, let me give you an example from a real-life situation: imagine you are walking calmly in a forest when, suddenly, a tree is about to fall right in front of you. Based on what you have learned (a tree could kill you if it falls on you) and on the information reaching your eyes (the falling tree, i.e. the input), which is sent to your brain and reaches your consciousness (making you aware of the situation), you produce a quick reaction (i.e. a quick action/decision): you make a fast move to get out of the way of the falling tree. The output, in this case, is a motor output, i.e. an output from the motor system somewhere in your brain sending a message to your leg muscles to produce a movement, so that the tree does not fall on you, based on the previous knowledge or past experience that it might kill you.


The hidden layers, in our example, can be thought of as all the networks of neurons in our brain whose processing we are not aware of (unconscious processing of information), and which lead from the processed input to a quick output. Another example: if you wish to hold a piece of information in mind, you must pay attention to it (the input) so that it reaches your consciousness; then, through the many mechanisms that occur when information reaches consciousness (the hidden layers), it is kept in mind (the output). These two examples were about the human brain.

Now for a machine: back to our example of AlphaGoZero, an output would be a winning move from the starting position (the input). To reinforce learning, the weights of the connections between the units of the ANN are strengthened, in the same way as connections between neurons in the human brain are strengthened/reinforced via plasticity mechanisms during learning. The single goal: optimizing the reward, i.e. achieving the goal (as in the human examples above: avoiding a falling tree or keeping a piece of information in mind).
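Here is a toy sketch of that reward-driven strengthening of weights, using a REINFORCE-style update on a two-move choice; this is not AlphaGoZero’s actual training loop, just the bare idea that connections leading to reward get reinforced.

```python
import numpy as np

# Toy reward-driven learning: the weight of whichever move wins is strengthened.
rng = np.random.default_rng(1)
weights = np.array([0.5, 0.5])  # initial "strength" of move 0 and move 1
learning_rate = 0.1
true_win_prob = [0.2, 0.8]      # invented: move 1 wins more often

for _ in range(1000):
    probs = np.exp(weights) / np.exp(weights).sum()  # preferences -> probabilities
    move = rng.choice(2, p=probs)
    reward = 1.0 if rng.random() < true_win_prob[move] else 0.0
    # Strengthen the chosen connection in proportion to the reward received:
    grad = -probs
    grad[move] += 1.0
    weights += learning_rate * reward * grad  # REINFORCE-style update

print(weights)  # the weight of the winning move has been strengthened
```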

What’s really behind AI?

So far, there is always a human being behind AI, writing algorithms and “feeding” the computer with data, i.e. creating a database of input/output pairs, as already described above. For example, feeding the system the following database: a set of different medical symptoms from millions of patients all over the world, all leading to the same pathology, to take an example similar to previous posts.
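In code, such a database of input/output pairs could look like the following toy records (all invented), where each symptom profile is an input and the diagnosed pathology is the known output:

```python
# Toy input/output database: symptom profiles (inputs) paired with a
# diagnosed pathology (known output). All records are invented.
records = [
    ({"fever": True, "cough": True, "fatigue": True}, "pathology_A"),
    ({"fever": False, "cough": True, "fatigue": False}, "pathology_B"),
    # ... millions more patient records would go here ...
]

X = [symptoms for symptoms, label in records]  # inputs
y = [label for symptoms, label in records]     # known outputs used for training
```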

Behind these AI algorithms written by humans lie statistics and simulations, which allow the system to be trained to learn as quickly and as efficiently as possible (cf. AlphaGo). So learning algorithms to train artificial neural networks, statistics and simulations are key.

An important point I would like to emphasize here: being “intelligent” is not what is commonly understood when we say that “a computer beats a human champion” at chess or at the game of Go, as when AlphaGoLee beat the world-famous Korean player Lee Sedol in 2016, or when its improved version AlphaGoMaster beat the world champion Ke Jie in 2017.


A computer, even the most basic one running simple programs, as long as it has high computational power (which means it can do complex calculations very quickly), would be able to beat any human at mathematical tasks like calculation, or at memory-based tasks, for example processing millions of chess games in a rather short time, as we cannot compete with such outstanding computing performance. (!)

Back to our example of beating the world champion Lee Sedol: over the course of its training, the computer program AlphaGoZero played about 5 million games of Go against itself!! On top of that, it ran 1600 simulations for each move search computed by the self-play algorithm (i.e. for each move, it evaluated where the next move could lead, assigned a probability to the candidate moves, and played the most promising one)!! After learning, this program was able to plan 50 to 60 moves ahead (out of about 150 moves in a whole game of Go), which is far more than any human could ever dream of. How can even the best human champion in the discipline compete with such performance, unless the computer makes a mistake?! The computer can make a mistake when it is confronted with a situation it has never seen before and/or cannot deal with based on what it has previously learned or been trained for.
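Here is a deliberately simplified sketch of that “simulate, score, pick the best move” loop; the position evaluation below is a random stub standing in for the neural network plus Monte Carlo tree search that the real system uses.

```python
import random

# Simplified "simulate many continuations, play the best-scoring move" loop.
N_SIMULATIONS = 1600  # simulations per move, as in the text

def evaluate(position, move):
    """Stub: pretend to simulate a game; 1 for a win, 0 for a loss."""
    return int(random.random() < 0.5)

def choose_move(position, legal_moves):
    wins = {m: 0 for m in legal_moves}
    tries = {m: 0 for m in legal_moves}
    for _ in range(N_SIMULATIONS):
        m = random.choice(legal_moves)
        tries[m] += 1
        wins[m] += evaluate(position, m)
    # Play the move with the highest estimated win rate:
    return max(legal_moves, key=lambda m: wins[m] / max(tries[m], 1))

print(choose_move("empty board", ["move_A", "move_B", "move_C"]))
```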

And this is how Lee Sedol (is thought to have) managed to win the fourth of the five games he played against the AlphaGoLee version of AlphaGo. In that game, Lee Sedol played with emotion, no longer with strategy alone, so the computer was lost and did not know how to adapt. This inability of an AI computer to cope with a novel scenario is what makes humans unique and still “more intelligent” than any computer so far. Not because of computational power (humans have far less of it!) but because of this fantastic ability to adapt to their environment. Adaptation here means that humans, when put in a completely new situation we have never dealt with before, take time to think it through, examine possibilities, evaluate options and outcomes, retrieve previous knowledge and past experiences, and create new alternatives by finding new, appropriate strategies, in order to select the best action as our decision for dealing with the new scenario.


Let’s take an example. You are in the desert and start to become very thirsty. You only have a beer (in a glass bottle) with you, but nothing to open it. What would you do? I’m sure you would figure something out, using a stone, your shoe, anything, to open this beer and hydrate yourself. The same applies to any situation in which you need a special tool to complete a task but that specific tool is missing from your surroundings, with no possibility to go buy one or ask someone around you: you will try to use any object or strategy in your nearby environment as a tool to complete the task at hand.


Hence we, as humans, are surely not as fast as AI computers at processing maths, but we know how to adapt, we know (we have the basis for) how to learn, and this is a level of evolution that even the best AI supercomputer has not reached yet.

Thus, in that sense, we are still “superior”, i.e. more intelligent, than the best AI entity out there. This is good news! However, we do not know for how long… As soon as we know how to build an AI supercomputer that can “learn how to learn” on its own, without needing a human to feed its system with data or program it with learning rules, but instead able to access data and learning rules by itself and to adapt to its environment and to new situations, i.e. to “learn how to learn”, such an AI machine will have reached our ability of being intelligent. But the technology is not there yet, at least as far as I can tell.


Keep in mind, though, that the field of AI, thanks to the humans devoted to this new technological revolution, is growing at a very quick (exponential) pace, which is the pace humans understand the least (it is almost impossible for us to picture or grasp exponential growth)! This means that AI develops and improves extremely quickly! So who knows when it will reach human intelligence… Something to meditate on, maybe, until we discuss the topic later on. I indeed plan to address it in an upcoming post on this blog, so stay tuned!

What does “a computer learns” mean?

Let’s first start by defining:

What is learning?


I think one could define learning as the acquisition, conscious or unconscious, of new information, or let’s say knowledge, about the world, our environment, our routine, the people around us, the things we need to know how to do, the things we need to remember, in order to reach our goals and adapt to our environment. Learning usually implies an improvement (i.e. an increase) in performance.

Introduction

In this blog, I will talk about Artificial Intelligence (AI). AI has become a hot topic over the past few years, and it keeps growing, at a rather fast pace now. You have probably heard about it, like many people, and might now like to put simple words behind these “mysterious”, still foggy AI terms. Talking to many people who are not experts in science or in AI, I realized they are interested in knowing more about it, but somehow feel lost or left behind by so much information, or information that is too complex.

I therefore decided to create this blog and explain AI and its related topics in simple words, as much as possible. (At least that’s my intention; I hope it will be the case!)

I also think it is crucial to get educated (well informed) on this topic, as a lot is at stake. A new era is indeed coming, right at us. The recently signed Declaration of Cooperation on Artificial Intelligence by 25 European countries last Friday (Friday, April 13th) confirms it (you can find the text here).

This blog is my contribution, as a Neuroscientist, to this new era.

I would therefore like to start this blog by guiding you, first, through definitions that I think are necessary to understand AI, before moving toward the more fundamental questions of what is at stake and how it will impact our societies, our future, our world and our vision of it. I would like everyone to understand the basics of AI in order to think it through and be aware of what is currently happening and how it could affect your job, business, organization, and the world we are living in. Indeed, people are scared it will take over their jobs, dominate the world, or become the next technological revolution or the evolutionary step after the human race.

All these considerations might be true, but first it is important to be aware of what AI really means and what is behind it. Only then can you get inspired and imagine how to apply it efficiently in your life, work or business, as well as question its consequences and limitations, read more about it and form your own opinion.

Via this blog, I want to give you the tools to do so.

The term AI was actually first introduced in 1955 by John McCarthy, an American computer scientist and cognitive scientist who was among the founding fathers of AI attending the “Dartmouth Summer Research Project on AI” in 1956. Another name to keep in mind when you think of AI is Alan Turing, a mathematician from the UK, famous for having invented the well-known “Turing test”. The Turing test measures how well a person can discriminate a computer from a human by the actions they both perform (in the initial test, via text conversation). These scientists of the 50s thus paved the way for today’s AI. So you might wonder: why is it picking up now, compared to before? There are a few reasons for this.

First of all, data!! As Prof. Manuela Veloso, from Carnegie Mellon University, said: we have become “collectors of data”. Lots and lots of data!! This means we need tools to analyze and interpret these data, and the best way to do so is AI! Second, subsystems of AI, which have been around since the 80s, have only become feasible in the last five years, because graphics cards are now cheap and accessible. Finally, AI is used to describe “intelligent” behavior by machines or computers, and a big part of this “intelligence” comes from the fact that these computers can now “learn”. Hence, to explain what is behind all of this, let’s first define what learning actually means for humans, and then compare this to how machines “learn”, or could learn.

Let’s therefore start by defining:

What is “Learning”?
