By Dr. Loubna Bouarfa, Member of the High-Level Expert Group on AI
As a machine learning scientist and CEO of a leading AI company, I have devoted my working life to building systems in which artificial intelligence can improve human lives. From enabling personalised, evidence-based healthcare for patients and better foster care for children, to powering complex medical research where the ability to process large quantities of data goes far beyond that of any human expert, AI has the potential to transform our society for the better.
Worldwide discussions are taking flight on how ethical and policy guidelines could ensure a more constructive and legitimate use of AI. In recent months, I have been fortunate enough to take part in such discussions as a member of the European Union High-Level Expert Group on Artificial Intelligence. During this time, it has become clear to me that most discussions on AI ethics are driven by moral and social resistance to the impact of this technology. I find that this resistance has great potential. We must embrace it as a wake-up call, treating it not as a cause for alarm but as a valuable societal warning sign.
Technological historians have argued that resistance is a force for shaping technology, and a key factor in driving advancement in the right direction: towards long-term, sustainable processes. It is true that AI creates incredible opportunities and has the potential to advance human life. However, there is also a shift in control as people surrender their choices to AI systems, and behind the scenes, the designers of those systems will have a substantial influence on society.
In a series of blog posts, I will share my perspectives on the different resisting groups, aiming to open up further discussion and understand more about:
Who exactly is resisting AI?
What are these groups objecting to, and why?
What are the main characteristics and constraints for an effective use of AI?
I want to start with the group resisting AI for cultural reasons:
Cultural resistance to AI
Certain groups resist AI (and other new technologies) almost as a cultural mission, attempting to influence public opinion via existing platforms. One such group is known as the “reflective elite”: celebrities, academics and other individuals with great symbolic influence. This group includes Stephen Hawking, who warned of the perils of AI, and Elon Musk, who argued that “AI is far more dangerous than nukes”.
As a second group within this category, we can identify activist social movements, such as environmentalist, feminist, religious, or consumer groups. Such groups typically do not resist all aspects of AI, but rather focus on how it directly affects their cause. Certain religious groups may, for example, resist the application of AI in biotechnology, while ethnic minority groups may resist specific AI algorithms that contain discriminatory biases in sentencing and recruitment.
Finally, we must account for a growing anti-movement, which (driven by media influence or general resistance to change) resists AI without a clearly defined alternative. These groups often form in response to isolated news and events, such as the Cambridge Analytica scandal, which revealed that AI had influenced the Brexit referendum and the election of Donald Trump in the US. At the American SXSW festival, individuals held up signs such as “Stop the robots” and “Humans are the future”, chanting “A-I, say goodbye!” and “I say robot, you say no-bot!”. This group forms a popular anti-tech and anti-AI voice.
So what does this mean for AI ethics and policy development?
These groups highlight real and important shortcomings. However, technology is a mirror of its developers: it can be developed for better or worse applications, with more or less attention to bias. We must make use of policies at different levels to evaluate AI models, treating them as human products rather than superhuman machines.
Policy developers, including standards and regulatory bodies, should develop governance frameworks to ensure AI development does not infringe upon human rights, freedoms, dignity, and privacy. There is no need to replicate existing legal frameworks such as the GDPR that already protect human values; however, more work is required to translate those frameworks into technical requirements.
AI experts need to speak up about the authentic use of AI systems in practice, in line with transparent and explainable AI frameworks. Individuals such as Musk and Hawking fail to distinguish between artificial intelligence as algorithmic models and the vision of sci-fi robots. We need to increase trust in AI systems: whereas AI-powered robots may seem threatening, machine learning models are merely mathematical formulas that generate certain outcomes faster (and often more accurately) than humans could. If we communicate this more clearly, we may encourage the anti-movement to make less sweeping claims, opening opportunities for a constructive and non-threatening dialogue on the use of AI.
In my next blog, I will elaborate on the second resisting group, the corporates, and what their resistance can teach us about shaping the future of AI.
References
Mokyr, J. (1990). The Lever of Riches: Technological Creativity and Economic Progress. Oxford: Oxford University Press.
AI Today, AI Tomorrow: Awareness, Acceptance and Anticipation of AI. A Global Consumer Perspective.
Bauer, M. (Ed.) (1995). Resistance to New Technology: Nuclear Power, Information Technology and Biotechnology. Cambridge: Cambridge University Press. doi:10.1017/CBO9780511563706