Towards a Trustworthy AI "Made in Europe"

Lucilla SIOLI
28 January 2019

The commitment to European values and respect for fundamental rights, together with the mobilisation of adequate policies and resources, are the two forces that drive our Strategy to place Europe at the forefront of the international AI landscape. Two groups established by the Commission last year to address these objectives presented the first results of their work in December 2018.

In a previous blog post, I introduced the Coordinated Plan on Artificial Intelligence agreed between EU Member States and the European Commission as a means of ensuring synergies and avoiding wasteful duplication of effort in facilitating AI uptake. This time, I would like to talk about the draft AI Ethics Guidelines presented by the High-Level Expert Group on AI (AI HLEG) to provide a framework for Trustworthy AI.

Supporting the development of human-centric AI that increases human well-being and serves the common good, the AI HLEG introduced the concept of trustworthiness as a combination of ethical purpose and technical robustness.

Based on fundamental rights, the Guidelines list a set of principles and values that give AI applications an ethical purpose. Ethical purpose consists of ensuring compliance with those rights, principles and values, as well as with applicable regulation. However, ethical purpose – or, put simply, good intentions – is not enough. The development and use of AI can also cause unintentional harm, for instance where technological mastery is lacking. Technical robustness is therefore equally key to ensuring the trustworthiness of our AI systems.

With these two elements in mind, the draft Guidelines then provide a list of 10 requirements, ranging from Accountability to Transparency, that an AI system must meet in order to be trustworthy. To ensure the implementation of those requirements, a series of technical and non-technical methods is suggested for the development, deployment and use of AI that is not just ethical but also technically robust and reliable. Finally, the document offers an assessment list designed to help stakeholders implement the guidelines in their daily business. This is intended to operationalise the framework for Trustworthy AI and, in this regard, goes beyond other existing frameworks for ethical AI, which remain at a more abstract level.

Trustworthiness can become a distinct European quality mark that sets us apart from AI developed and used in other world regions.

While the draft Guidelines are still subject to a consultation process aimed at collecting feedback from the wider AI community, last week the AI HLEG met with representatives of the relevant European ministries to discuss how the guidelines are perceived at Member State level. The meeting, which took place on 22 January in Brussels, was designed to engage both sides in an active discussion on the purpose, implementation and future governance of the Ethics Guidelines for Trustworthy AI.

Member States welcomed the draft as a guide for both National Strategies (some of which are already in place, with several others in preparation) and AI developers towards the adoption of a European, human-centric approach to AI. To this end, the practical aspects of the document were appreciated, while further guidance was requested on specific aspects of the guidelines' application, such as their non-binding nature and their relationship to existing regulation in the field.

The document's leading role was also discussed, with regard to its position at both European and international level. As the field of AI is still evolving, participants noted that regulatory processes should rely on self-regulation and be developed in cooperation with industry. While all parties agreed that the aim of the document is to provide guidance without creating barriers to innovation, its fast and effective adoption by both the public and private sectors could strengthen European competitiveness and set an international example for a more ethical use of AI.

The Commission regards the draft Ethics Guidelines for Trustworthy AI as a building block that, together with the Coordinated Plan, lays the basis not only for an “AI Made in Europe”, but for one that is also “Trustworthy”. Commissioner Gabriel met the Chair of the Group, Pekka Ala-Pietilä, earlier that day and thanked the AI HLEG for all the efforts made to shape a European approach to AI based on high ethical standards. I happily follow her lead and likewise take the opportunity to thank the group for all the work it has already done, and for the work that is still ahead.

Taking into account the feedback received through the consultation, the AI HLEG will now finalise the Guidelines and present them publicly in early April 2019 at the first AI Alliance Assembly. More information on this event will follow soon – stay tuned!