Artificial intelligence (AI) has become an area of strategic importance and a key driver of economic development. It can bring solutions to many societal challenges from treating diseases to minimising the environmental impact of farming. However, socio-economic, legal and ethical impacts have to be carefully addressed.
It is essential to join forces in the European Union to stay at the forefront of this technological revolution, to ensure competitiveness and to shape the conditions for AI's development and use, while ensuring respect for European values.
In its Communication, the European Commission puts forward a European approach to AI based on three pillars:
Being ahead of technological developments and encouraging uptake by the public and private sectors
The Commission is increasing its annual investment in AI by 70% under the research and innovation programme Horizon 2020, bringing the total to EUR 1.5 billion for the period 2018-2020. It will:
- connect and strengthen AI research centres across Europe;
- support the development of an "AI-on-demand platform" that will provide access to relevant AI resources in the EU for all users;
- support the development of AI applications in key sectors.
However, this represents only a small part of the overall investment from the Member States and the private sector. EU funding acts as the glue linking individual efforts into a solid combined investment, with an expected impact much greater than the sum of its parts.
Given the strategic importance of the topic and the support shown by the European countries that signed the declaration of cooperation during Digital Day 2018, Member States and the private sector can be expected to make similar efforts.
The High-Level Expert Group on Artificial Intelligence (AI HLEG) presented their Policy and Investment Recommendations for Trustworthy AI during the first European AI Alliance Assembly in June 2019.
By joining forces at European level, the goal is to reach a combined total of more than EUR 20 billion of investment per year over the next decade.
Prepare for socio-economic changes brought about by AI
To support the efforts of the Member States which are responsible for labour and education policies, the Commission will:
- support business-education partnerships to attract and keep more AI talent in Europe;
- set up dedicated training and retraining schemes for professionals;
- anticipate changes in the labour market and skills mismatches;
- support digital skills and competences in science, technology, engineering, mathematics (STEM), entrepreneurship and creativity;
- encourage Member States to modernise their education and training systems.
Ensure an appropriate ethical and legal framework
On 19 February 2020, the European Commission published a White Paper aiming to foster a European ecosystem of excellence and trust in AI and a Report on the safety and liability aspects of AI. The White Paper proposes:
- Measures that will streamline research, foster collaboration between Member States and increase investment into AI development and deployment;
- Policy options for a future EU regulatory framework that would determine the types of legal requirements that would apply to relevant actors, with a particular focus on high-risk applications.
Public consultation on AI White Paper
After its publication, the White Paper underwent an open, public consultation process. All European citizens, Member States and relevant stakeholders (including civil society, industry and academia) were invited to participate in the consultation by answering an online survey and submitting position papers on the matter.
The online survey remained open from 19 February to 14 June 2020. All contributions submitted through the online survey are available here. A report summarising the quantitative results of the survey is also available here.
The European AI strategy and the coordinated plan put forward trust as a prerequisite to ensure a human-centric approach to AI. The AI HLEG presented a first draft of the Guidelines in December 2018. Following further deliberations by the group in light of discussions on the European AI Alliance, a stakeholder consultation and meetings with representatives from Member States, the Guidelines were revised and published in April 2019. In parallel, the AI HLEG also prepared a revised document which elaborates on a definition of Artificial Intelligence used for the purpose of its deliverables.
The Commission launched the Communication on Building Trust in Human-Centric Artificial Intelligence on 8 April 2019 (COM(2019)168 final). Ensuring that European values are at the heart of creating the right environment of trust for the successful development and use of AI, the Commission highlights in the Communication the key requirements for trustworthy AI:
- Human agency and oversight
- Technical robustness and safety
- Privacy and data governance
- Transparency
- Diversity, non-discrimination and fairness
- Societal and environmental well-being
- Accountability
Aiming to operationalise these requirements, the Guidelines present an assessment list that offers guidance on the practical implementation of each requirement. At the first AI Alliance Assembly on 26 June 2019, the Commission officially launched the piloting phase of the assessment list, involving stakeholders on the widest possible scale, in order to reach consensus on the key requirements and to ensure that the guidance can be tested and implemented in practice.
This process led to the elaboration of the final Assessment List for Trustworthy AI (ALTAI) that translates AI principles into an accessible and dynamic checklist for developers and deployers to self-assess their AI systems. ALTAI is available in a document version and as a prototype of a web-based tool.
In December 2018, Member States joined forces with the European Commission in a Coordinated Plan on AI for increased cooperation that will boost AI in Europe.
Coordination in AI is essential as:
- Only when all European countries work together can we make the most of the opportunities offered by AI and become a world leader in this technology, which is crucial for the future of our societies.
- Europe wants to lead the way in AI based on ethics and shared European values, so that citizens and businesses can fully trust the technologies they are using.
- Cooperation between Member States and the Commission is essential to address new challenges brought by AI.
AI needs the trust of citizens to develop. To earn this trust, AI will have to respect ethical standards reflecting our values. Decision-making must be understandable and human-centric.
This calls for a wide, open and inclusive discussion on how to develop and use AI both successfully and in an ethically sound manner.
Read the full Commission communication on the coordinated action plan.
Other relevant Initiatives
Some AI applications may raise new ethical and legal questions, related to liability or fairness of decision-making. The General Data Protection Regulation (GDPR) is a major step for building trust and the Commission wants to move a step forward on ensuring legal clarity in AI-based applications.
In the same regard, the European Commission has welcomed initiatives such as:
- The final Ethics Guidelines for Trustworthy Artificial Intelligence prepared by the High-Level Group on Artificial Intelligence published on 8 April 2019
- The Report on liability for Artificial Intelligence and other emerging technologies prepared by the Expert Group on Liability and New Technologies – New Technologies Formation and published on 21 November 2019
- The Declaration of cooperation on Artificial Intelligence, signed by 25 European countries on 10 April 2018. The declaration builds further on the achievements and investments of the European research and business community in AI and sets out the basis for the Coordinated Plan on AI.
Artificial intelligence (AI) endows systems with the capability to analyse their environment and take decisions with some degree of autonomy to achieve goals.
Machine learning denotes the ability of software to learn from its environment or from a very large set of representative data, enabling systems to adapt their behaviour to changing circumstances and to perform tasks for which they have not been explicitly programmed.
High-quality data is a key factor in building robust models at the core of AI-based systems and in improving their performance. The Commission has adopted legislation to improve data sharing and open up more data for re-use, covering public sector data as well as research and health data.
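The definitions above can be made concrete with a minimal sketch: a program that is never given explicit classification rules, but instead infers a decision from labelled examples. The data, labels and function names below are invented purely for illustration; this is a toy nearest-neighbour learner, not any system referenced in the strategy.

```python
# Illustrative sketch of machine learning: the behaviour is not
# explicitly programmed; it is derived from representative, labelled data.

def nearest_neighbour(train, point):
    """Predict the label of `point` as that of the closest training example."""
    def sq_dist(a, b):
        # Squared Euclidean distance between two feature vectors.
        return sum((x - y) ** 2 for x, y in zip(a, b))
    # Pick the training example whose features are closest to `point`.
    return min(train, key=lambda example: sq_dist(example[0], point))[1]

# Representative data: (features, label) pairs the system learns from.
train = [((1.0, 1.0), "low"), ((1.2, 0.8), "low"),
         ((8.0, 9.0), "high"), ((9.1, 8.5), "high")]

print(nearest_neighbour(train, (1.1, 0.9)))  # a point near the "low" cluster
print(nearest_neighbour(train, (8.5, 9.2)))  # a point near the "high" cluster
```

The quality and representativeness of `train` directly determine the quality of the predictions, which is the point the strategy makes about high-quality data being central to robust AI systems.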
- White Paper on Artificial Intelligence - A European Approach
- Brochure on Trustworthy AI: Joining efforts for strategic leadership and societal prosperity
- Communication "Artificial Intelligence for Europe"
- Declaration of cooperation on Artificial intelligence
- Press release on Artificial intelligence: Commission outlines a European approach to boost investment and set ethical guidelines
- Press release on creating European AI alliance
- Staff Working Document: Liability for emerging digital technologies
- Infographic on Artificial Intelligence
- Questions and Answers on Artificial Intelligence, the way forward
- Press release on Coordinated action plan
- Full commission communication on a coordinated action plan