Pilot the Assessment List of the Ethics Guidelines for Trustworthy AI

Organisations working on artificial intelligence can now test the assessment list of the Ethics Guidelines for Trustworthy AI.

Feedback on the assessment list can be given in three ways:

  • By completing a survey before 1 December 2019. You will need to register first and will then receive an email with the link to the survey. Organisations that registered before June 2019 have already received an email with the link;
  • By sharing best practices on how to achieve trustworthy AI through the European AI Alliance;
  • By participating in in-depth interviews (organisations could mark their interest in participating in the interviews until 31 August 2019).


The assessment list of the ethics guidelines operationalises the key requirements for ethical AI and offers guidance to implement them in practice.

This feedback will help to better understand how the assessment list can be implemented within an organisation. It will also indicate where specific tailoring of the assessment list is still needed, given the context-specific nature of AI.

The piloting phase runs from 26 June until 1 December 2019.

The High-Level Expert Group on AI will propose a revised version of the assessment list to the Commission in early 2020, based on the feedback received.

Over 450 stakeholders have already registered for the piloting process. If you also support the key requirements for Trustworthy AI, you can still register your interest in participating in the process.

All feedback received will be evaluated by the end of 2019 and serve as input for a revised assessment list to be finalised in 2020.

It should be noted that the Guidelines do not give any guidance on the first component of Trustworthy AI (lawful AI). They explicitly state that nothing in the document should be interpreted as providing legal advice or guidance concerning how compliance with any applicable existing legal norms and requirements can be achieved. Moreover, nothing in the guidelines or the assessment list shall create legal rights or impose legal obligations towards third parties. The AI HLEG, however, recalls that it is the duty of any natural or legal person to comply with laws – whether applicable today or adopted in the future. The Guidelines proceed on the assumption that all legal rights and obligations that apply to the processes and activities involved in developing, deploying and using AI systems remain mandatory and must be duly observed.

