As a machine learning scientist and CEO of a leading AI company, I have devoted my working life to building systems in which artificial intelligence can improve human lives. From enabling personalised, evidence-based healthcare for patients and better foster care for children, to complex medical research, where the ability to process large quantities of data goes far beyond the abilities of any human expert, AI has the potential to transform our society for the better.
Worldwide discussions are taking off on how ethical and policy guidelines could ensure a more constructive and legitimate use of AI. In recent months, I have been fortunate enough to take part in such discussions as a member of the European Union High-Level Expert Group on Artificial Intelligence. During this time, it has become clear to me that most discussions on AI ethics are driven by moral and social resistance to the impact of this technology. I believe this resistance has great potential. We must embrace it as a wake-up call, treating it not as a direct cause for alarm but as a valuable societal warning sign.
Historians of technology have argued that resistance is a force for shaping technology, and a key factor in steering its advancement in the right direction: towards long-term, sustainable processes. It is true that AI creates incredible opportunities and has the potential to advance human life. However, there is also a shift in control as people surrender their choices to AI systems. Behind the scenes, the designers of these systems will exert substantial influence on society.
In a series of blog posts, I will share my perspectives on the different resisting groups, aiming to open up further discussion and better understand: Who exactly is resisting AI? What are these groups objecting to, and why? And what are the main characteristics of, and constraints on, an effective use of AI?