In the famous sci-fi book (and movie), the “precrime” police are able to stop murders before they happen, thanks to humans who predict the future. The concept is fascinating and an interesting example of a possible use of artificial intelligence. But it would mean leaving decision and control fully to the machines, which we must not let happen in the real world.
People have the right to own their future, and no machine should decide on their behalf. When I read articles such as the recent one in La Stampa by Lorenzo Longhitano, reporting that a computer can predict the chances of curing a sick person and whether it is reasonable to admit him or her to hospital, I think we need to draw a clear line.
With the development of deep learning, we do not want a situation where innocent people are convicted or our health prospects are predicted only because an algorithm weighs some odds. Even though the value of computers is undeniable, and our society has grown dependent on their precision in many fields, we do not want computers to decide independently, without any human input or control, or to pre-shape our future.
There must always be a human element that adds evaluation and control to pure data calculation, and that makes the final decision. Leaving decisions to machines would also mean an almost Orwellian monitoring of our lives and would make us lose control over our future - a control which could become a new constitutional right for the digital age, similar to the way the GDPR ensures the protection of people’s personal data.
But we must still continue to move forward, because this is the future. According to a recent Accenture report analysing 16 industries, artificial intelligence can boost profitability by 38 percentage points by 2035. Deep learning can make hospital diagnoses more precise, which means saving lives. It can reduce air and water pollution, stimulate innovation, smooth traffic flows and make industrial production more efficient. But we need to make sure that people remain in control and understand when algorithms are running the show.
Artificial intelligence can solve many societal challenges, but it is also becoming a societal challenge itself. It raises many ethical and philosophical questions, such as who will be liable for mistakes made by algorithms, and whether these systems will be transparent and safe. That is why I think European values and principles have to be embedded in every application of this technology.
It is crucial to keep these aspects in mind when discussing possible policy actions. We are currently in the middle of such a debate, and the Commission will soon present a strategy. People will be at the heart of any EU action on deep learning and artificial intelligence. Our objective is to ensure that Europeans make the most of new technologies, not the other way around.