Risks of AI
Data protection
Back to our healthcare example, a growing sector in the field of AI. Combine this with what I wrote before about predictions: the more data you feed the system, and the more diverse that data is, the better the system becomes at making accurate predictions, with fewer false positives and false negatives. Then we can come to the following thought: we will probably agree to share our private health data for the greater good, to improve healthcare and cure more patients (since the more data available to feed the system, the more accurate the output we get, in this example the best treatment outcomes for patients worldwide).
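To make this concrete, here is a minimal sketch (my own illustration, using a synthetic dataset rather than real patient records) of the claim above: as a model is trained on more data, accuracy typically rises and false positives/negatives typically fall.

```python
# Minimal sketch: accuracy typically improves, and false positives/negatives
# typically drop, as a classifier sees more training data. The "patients"
# here are synthetic, a stand-in for anonymized health records.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import confusion_matrix
from sklearn.model_selection import train_test_split

# 20,000 synthetic patients, 20 features, binary outcome
# (e.g. responds to a treatment or not).
X, y = make_classification(n_samples=20000, n_features=20, n_informative=10,
                           random_state=0)
X_pool, X_test, y_pool, y_test = train_test_split(X, y, test_size=0.25,
                                                  random_state=0)

for n in (100, 1000, 10000):  # growing amounts of training data
    model = LogisticRegression(max_iter=1000).fit(X_pool[:n], y_pool[:n])
    tn, fp, fn, tp = confusion_matrix(y_test, model.predict(X_test)).ravel()
    print(f"n={n:>6}: accuracy={(tp + tn) / len(y_test):.3f}  "
          f"false positives={fp}  false negatives={fn}")
```

On most runs, the error counts shrink noticeably between n=100 and n=10,000. This same logic is what makes large, diverse health datasets so valuable, and so tempting to collect.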
However, this may not be without consequences. Imagine that the data you shared for the greater good in the healthcare sector are also used for other purposes, purposes you have neither visibility into nor control over. Your data might fall into hands you would not want it in, and be used in ways you never learn about.
So far, data protection is still enforced in Europe, and let’s hope this does not change; that is at least something we can be grateful for. It is not the case in every country, for example in the US or in China, and this will not be without consequences, as weak data protection can represent a serious threat to democracy.
So far, democracy has been the preferable model over dictatorship for our societies and way of living since the Second World War, as the well-known historian Yuval Harari (*) reminded us at the World Economic Forum last year. In the context of AI, however, would democracy still hold its supremacy over dictatorship? The question remains open and definitely deserves that you stop for a moment here and devote some thinking to it (but sincerely, let’s hope so).
(*) Author of the book “Homo Deus”, which I recommend.
Let’s imagine that our data fall into bad hands and we have strictly no control over them. Would the system, and the people controlling our data, know us better than we ever know ourselves? This is the scary part: by knowing us better, they can also manipulate us better, playing on our weaknesses, our vulnerabilities, and everything we would consciously or even unconsciously try to hide from the public.
During his talk at the World Economic Forum, Yuval Harari gave the example of a teenage boy who, you can imagine, is trying to hide his sexual preference for guys over girls. If an AI system is built so that it may know anyone better than they know themselves, it might detect that this teenager prefers guys before he has even noticed it himself, for example by tracking how his eyes linger on guys rather than girls. Now imagine rich kids bringing such a system to a party full of teenagers from the boy’s school, as something “fun” to try. This would violate the teenager’s intimacy, and the privacy of a part of himself he may not want, or not yet be ready, to share at this critical period of his life.
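To show why this scenario is technically plausible, here is a deliberately simplistic, hypothetical sketch (my own illustration, not anything presented in Harari’s talk) of how little code it would take to turn raw eye-tracking logs into such an inference, which is precisely why this kind of data is so sensitive.

```python
# Hypothetical toy example: inferring a "preference" from gaze dwell times.
# Each log entry is (image_category, seconds_spent_looking).
from collections import defaultdict

def infer_preference(gaze_log):
    """Sum dwell time per category and return the category looked at longest."""
    totals = defaultdict(float)
    for category, dwell_seconds in gaze_log:
        totals[category] += dwell_seconds
    return max(totals, key=totals.get)

# A handful of glances from a (made-up) eye tracker is already enough.
log = [("guys", 2.4), ("girls", 0.7), ("guys", 3.1),
       ("girls", 1.0), ("guys", 2.8)]
print(infer_preference(log))  # -> guys
```

The point is not that real systems are this crude, but that even a crude heuristic over a few seconds of gaze data can expose something deeply private.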
Even more scary, in my opinion, would be if robots (AI-driven computers) reached the ability to “learn how to learn” on their own, and thus evolved faster and became better and smarter than they currently are. They might require humans to provide (make accessible) all kinds of data, for the greater good of humanity, or rather for the greater good of their own evolution. Imagine if the robots managed to figure out on their own what I just wrote above: that a system learns better, and therefore becomes in a way “more intelligent”, when fed with more data. If the robots became smart enough to understand this and to feed themselves with such vast amounts of data, they might ingest the trillions and trillions of human data points (made accessible) worldwide to become smarter than us, overtake us, and evolve into the superior species on Earth (the extreme case).
To reassure you, an open letter recently signed by many founders and company CEOs states that AI should not be used to compromise or “diminish the data rights or privacy of individuals, families or communities”. It states that “the ways in which data is gathered and accessed need to be reconsidered”. This, the report says, “is designed to ensure companies have fair and reasonable access to data, while citizens and consumers can also protect their privacy.” (You can also read the article entitled “5 core principles to keep AI ethical” from the World Economic Forum 2018.)
* * * *
You have reached the end of this post; I hope you enjoyed it. If you have any questions or comments, do not hesitate to address them below. In the next post, I plan to write about the topic I like most, one that raises a lot of interesting questions and issues: AI and consciousness, and the role of emotions.