The European Commission puts forward a European approach to artificial intelligence and robotics. It deals with technological, ethical, legal and socio-economic aspects to boost the EU's research and industrial capacity and to put AI at the service of European citizens and the economy.

Artificial intelligence (AI) has become an area of strategic importance and a key driver of economic development. It can bring solutions to many societal challenges, from treating diseases to minimising the environmental impact of farming. However, its socio-economic, legal and ethical impacts have to be carefully addressed.

It is essential to join forces in the European Union to stay at the forefront of this technological revolution, to ensure competitiveness and to shape the conditions for AI's development and use, while ensuring respect for European values.

A European approach to Artificial Intelligence

In its communication, the European Commission puts forward a European approach to Artificial Intelligence based on three pillars:

Being ahead of technological developments and encouraging uptake by the public and private sectors

The Commission is increasing its annual investment in AI by 70% under the research and innovation programme Horizon 2020, reaching EUR 1.5 billion for the period 2018-2020. It will:

  • connect and strengthen AI research centres across Europe;
  • support the development of an "AI-on-demand platform" that will provide access to relevant AI resources in the EU for all users;
  • support the development of AI applications in key sectors.

However, this represents only a small part of the overall investment by the Member States and the private sector. It is intended to act as the glue linking individual efforts into a solid combined investment, with an expected impact much greater than the sum of its parts.

Given the strategic importance of the topic and the support shown by the European countries that signed the declaration of cooperation at Digital Day, Member States and the private sector can be expected to make similar efforts.

The High-Level Expert Group on Artificial Intelligence (AI HLEG) presented its policy and investment recommendations for trustworthy Artificial Intelligence at the first AI Alliance Assembly on 26 June 2019.

By joining forces at European level, the goal is to reach a combined total of more than EUR 20 billion per year over the next decade.

Prepare for socio-economic changes brought about by AI

To support the efforts of the Member States which are responsible for labour and education policies, the Commission will:

  • support business-education partnerships to attract and keep more AI talent in Europe;
  • set up dedicated training and retraining schemes for professionals;
  • anticipate changes in the labour market and skills mismatches;
  • support digital skills and competences in science, technology, engineering, mathematics (STEM), entrepreneurship and creativity;
  • encourage Member States to modernise their education and training systems.

Ensure an appropriate ethical and legal framework

Some AI applications may raise new ethical and legal questions, related to liability or fairness of decision-making. The General Data Protection Regulation (GDPR) is a major step for building trust and the Commission wants to move a step forward on ensuring legal clarity in AI-based applications.

The European Commission welcomed the final Ethics Guidelines for Trustworthy Artificial Intelligence, prepared by the High-Level Expert Group on Artificial Intelligence and published on 8 April 2019.

The Commission will also develop and make available guidance on the interpretation of the Product Liability Directive.

You can also consult the Staff Working Document on emerging digital technologies.

Building Trust in Human-Centric Artificial Intelligence

The European AI strategy and the coordinated plan put forward trust as a prerequisite to ensure a human-centric approach to AI. The AI HLEG presented a first draft of the Guidelines in December 2018. Following further deliberations by the group in light of discussions on the European AI Alliance, a stakeholder consultation and meetings with representatives from Member States, the Guidelines were revised and published in April 2019. In parallel, the AI HLEG also prepared a revised document which elaborates on a definition of Artificial Intelligence used for the purpose of its deliverables.

The Commission launched the Communication on Building Trust in Human-Centric Artificial Intelligence on 8 April 2019 (COM(2019) 168 final). To ensure that European values are at the heart of an environment of trust for the successful development and use of AI, the communication highlights seven key requirements for trustworthy AI:

  1. Human agency and oversight
  2. Technical robustness and safety
  3. Privacy and data governance
  4. Transparency
  5. Diversity, non-discrimination and fairness
  6. Societal and environmental well-being
  7. Accountability

Aiming to operationalise these requirements, the Guidelines present an assessment list that offers guidance on each requirement's practical implementation. At the first AI Alliance Assembly on 26 June 2019, the Commission officially launched the piloting phase of the assessment list, involving stakeholders on the widest scale in order to reach consensus on the key requirements and to ensure that the guidance can be tested and implemented in practice.

Coordinated EU Plan on Artificial Intelligence

The European Commission and the Member States published a coordinated action plan on 7 December 2018 to promote the development of artificial intelligence (AI) in the EU.

The Coordinated EU Plan is needed because:

  • Only when all European countries work together can we make the most of the opportunities offered by AI and become a world leader in this technology, which is crucial for the future of our societies.
  • Europe wants to lead the way in AI based on ethics and shared European values so citizens and businesses can fully trust the technologies they are using.
  • Cooperation between Member States and the Commission is essential to address new challenges brought by AI.

AI needs the trust of citizens to develop. To earn this trust, AI will have to respect ethical standards reflecting our values. Decision-making must be understandable and human-centric.

This calls for a wide, open and inclusive discussion on how to use and develop AI in a way that is both successful and ethically sound.

Read the full Commission communication on the coordinated action plan.

Declaration of cooperation on Artificial Intelligence

On 10 April 2018, 25 European countries signed a Declaration of cooperation on Artificial Intelligence. It builds further on the achievements and investments of the European research and business community in AI. The Commission will now work with Member States on a coordinated plan on AI to be delivered by the end of the year.

Background information

Artificial intelligence (AI) endows systems with the capability to analyse their environment and take decisions with some degree of autonomy to achieve goals.

Machine learning denotes the ability of a software system to learn from its environment or from a very large set of representative data, enabling it to adapt its behaviour to changing circumstances or to perform tasks for which it has not been explicitly programmed.
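The idea of performing a task without being explicitly programmed for it can be illustrated with a deliberately tiny sketch (not drawn from the communication): the program below is never told the rule that generated its data, yet it recovers that rule from examples alone using ordinary least squares.

```python
# Toy illustration of "learning from data": the hidden rule y = 2x + 1
# is never written into the program; it is estimated from examples.

def fit_line(xs, ys):
    """Learn slope and intercept from (x, y) example pairs via least squares."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    # Covariance of x and y, and variance of x (unnormalised).
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    slope = cov / var
    intercept = mean_y - slope * mean_x
    return slope, intercept

# Training examples produced by the hidden rule y = 2x + 1.
xs = [0, 1, 2, 3, 4]
ys = [1, 3, 5, 7, 9]

slope, intercept = fit_line(xs, ys)
print(slope, intercept)  # the model recovers the rule: slope 2.0, intercept 1.0
```

With more data, noisier data, or richer model families, the same principle scales up to the AI systems the communication discusses: behaviour is shaped by representative data rather than hand-written rules.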

To build robust models at the core of AI-based systems, high-quality data is a key factor in improving performance. The Commission adopted legislation to improve data sharing and open up more data for re-use, covering public sector data as well as research and health data.
