Artificial intelligence (AI) is one of the most transformative forces of our time. Over the last decade, major progress has been made thanks to the ever-growing availability of data, powerful computing architectures and advances in techniques such as machine learning. AI developments in an increasingly large number of domains – from healthcare to mobility or cybersecurity – are improving the quality of our lives.
The development of AI is not an end in itself. The goal is to increase human well-being, which necessitates a holistic approach, ensuring that we maximise the benefits of AI while minimising its risks. As with all technologies, AI raises a number of concerns that need to be tackled. If we are not vigilant, the use of AI may lead to undesirable outcomes – intended or not. In fact, AI is most often used precisely with a view to achieving a fairer or more objective result. But a lack of awareness of ethical issues (e.g. arising from bias inherent in the data or algorithm) can lead to unintended harm. To truly reap its benefits, we need to ensure that AI is developed and used responsibly.
With this goal in mind - maximising the benefits and minimising the risks of AI - the European Commission set up the High-Level Expert Group on AI (AI HLEG), and tasked us with the preparation of two deliverables. The first, a set of draft Ethics Guidelines for AI, is meant to establish an ethical framework that can guide the wider AI community to develop and use AI responsibly. The second, policy and investment recommendations, is for the Commission itself. This is where we will set out our recommendations on how to ensure that we maximise the technology’s benefits by boosting AI innovation and uptake, and at the same time minimise the risks by having the right kind of policy and regulatory elements in place.
Ethics and competitiveness go hand in hand. Businesses cannot be run sustainably without trust, and there can be no trust without ethics. And when there is no trust, there is no buy-in of the technology, or enjoyment of the benefits that it can bring. Europe needs to lead the way in promoting responsible competitiveness, distinguishing itself from others by building a trademark of trustworthiness. Only by doing so can we expect to lead by example in this rapidly evolving environment.
As of today, we cannot claim to be the leader in investment in or use of AI compared to other parts of the world. There are three very broad areas where AI has an important role to play: business to consumers; business to business; and public to citizens. While the first is heavily dominated by the US and China, Europe can really make an impact in the second and third. Each of these markets, and the sectors within them, has different sensitivities, both in terms of boosting AI uptake and safeguarding against ethical risks. This means that while some guidelines and recommendations will be applicable across the board, other aspects will require a case-by-case approach.
52 experts and beyond
We started our task in June 2018, when the composition of the expert group was announced: 52 experts (nearly half of them women) from 28 countries, covering a range of backgrounds. The group includes representatives from academia, industry and civil society – a crucial mix when it comes to such a multifaceted technology. The Commission also selected representatives from three non-EU companies (two US and one Canadian), out of the 23 industry representatives within the group, to bring insight into the development of AI more globally. We need to understand what is going on in the rest of the world to better fix our own position and set out our actions.
While the diversity of backgrounds at times has made our debates very intense, it has also proved to be a strength, as it helps us to capture the breadth of issues at stake. Our debates have not been limited to the expert group, however. At the same time, the Commission launched the European AI Alliance, a broad multi-stakeholder online platform, to exchange information and views with all those interested in AI. Today, the Alliance counts almost 2500 members, and we frequently ask them to contribute to our work. These discussions will continue to shape not only the work of our group, but also AI policy-making in Europe more generally.
Besides online discussions, a number of events were organised across Europe with a broad range of participants to gather further input. In addition, our expert group also interacts with other relevant groups, such as the European Group on Ethics in Science and New Technologies (EGE) and the Member States group on AI, which shares feedback from their own countries.
On 18 December, we delivered a first draft of the guidelines. They are based on the concept of trustworthy AI and provide guidance to those developing, deploying or using AI. Trustworthy AI has two components: it should respect fundamental rights, applicable regulation and core principles and values, ensuring an “ethical purpose”; and its implementation should be technically robust and reliable since, even with good intentions, a lack of technological mastery can cause unintentional harm.
The first chapter sets out the fundamental rights, principles and values AI should comply with. It asks developers and users to pay particular attention to situations involving more vulnerable groups such as children, persons with disabilities or minorities, or to situations with asymmetries of power or information. The second chapter lists the requirements for trustworthy AI and offers an overview of technical and non-technical methods that can be used for its implementation, such as ethics-by-design, regulation or standardisation. The third chapter subsequently operationalises the requirements by providing a concrete but non-exhaustive assessment list for trustworthy AI.
A living document
Our aim is that the guidelines are open for voluntary endorsement by stakeholders who commit to the achievement of trustworthy AI. Crucially, they are not a substitute for regulation. This is an important point to make, since many existing regulations already cover key elements of trustworthy AI, such as the General Data Protection Regulation. Our guidelines are not meant to prejudice or pre-empt any new regulation.
The aim is to finalise the guidelines as quickly as possible so that they can start to inform the debate within the Commission and Member States, and serve as a basis for international discussions. AI is a fast-moving sector: many processes run in parallel at both national and international level, necessitating a common, European approach. Moreover, many companies developing AI – especially those without the means to hire a team of consultants to help them – are looking for guidance on how to ensure that their AI is ethical, robust and trustworthy. There is, however, an inevitable tension between speed and process, and we are trying to find the right balance.
We have asked for comments on our guidelines from the wider AI community, and once the consultation process concludes on 1 February we will try to take on board as many of the comments (and those from the European AI Alliance) as possible in order to create the final guidelines. Our aim is to share them with the Commission in March and present them at a public event in early April.
But this won’t be the end of the process. As we explain in the guidelines, they should be seen as a living document that needs to be regularly updated to ensure continuous relevance as the technology and our understanding of it evolve. We will also have to continue to monitor the respect of fundamental rights, applicable regulation, core principles and values within AI, as well as the technical robustness and reliability of the technology. And we need to keep talking, exchanging views, questioning the technology – and ourselves.
This is essential if we are to achieve our goal of creating a culture of cutting-edge, innovative, and trustworthy AI ‘made in Europe’.