AI and responsibilities

Our future with AI

I bet relying on AI for your business model and the future of your company or organization seems pretty scary. But the more you know about it, the more you'll be able to trust the system.

However, a tricky question arises: if a mistake happens, who should be held responsible? The AI system itself? The person who wrote the algorithm that allows the AI to learn and make predictions based on what it has learned? The person at the origin of the data that could have made the system fail? This is not an easy problem to solve. Lawyers, perhaps more than any other profession, might need to shift their way of working and become specialized in AI legal rights, or in how to deal with AI in legal terms.

Indeed, politicians worldwide need to sit at the same table and discuss the legal issues, in order to define, or rather create, laws covering malfunctions of AI systems and their negative consequences. Discuss and think…


… think about potential bad outcomes, pitfalls, and limitations in case the system, like any new system, makes mistakes or dramatically fails, i.e. goes wrong or even terribly wrong.

Imagine the case of an AI computer making decisions about patients' treatments or outcomes, something that already exists (see for example here and here). Would you hold a computer responsible for the death of your child if it made a mistake? Worse, what if the decision went against a doctor's advice, but you decided to trust the machine and follow its suggestion because you knew it had better success rates?

Politicians definitely need to decide how to adapt the law to our society's needs. For example, who would be in charge in the previous scenario: is the person who wrote the algorithm underlying the AI computer's decision-making process to blame, or someone else? So far, AI systems are not considered conscious entities responsible for their own decisions and actions… This is tricky, and there is definitely no simple solution. It will require A LOT of thinking and discussion among politicians, with lawyers helping to argue which laws apply to each case-by-case situation.

Philosophers, scientists, psychologists, medical doctors, historians, and lawyers, together with politicians, need to sit around a big table and seriously discuss how to disentangle such a dilemma, which may arrive sooner than we think, and agree in legal terms on how to act in case an AI system and its predictions fail and/or have bad or even terrible outcomes.

* * * *

You have reached the end of this post. The next post will be about the Risks of AI.

