Pilot the Assessment List of the Ethics Guidelines for Trustworthy AI
The Assessment List for Trustworthy AI is the operational tool of the Ethics Guidelines for Trustworthy AI, aiming to ensure that users benefit from Artificial Intelligence (AI) without being exposed to unnecessary risks. This list, first presented with the Ethics Guidelines in June 2019, was revised following a piloting process that involved more than 350 stakeholders.
Feedback on the assessment list was given in three ways:
- An online survey filled in by participants registered to the process;
- The sharing of best practices on how to achieve trustworthy AI through the European AI Alliance;
- A series of in-depth interviews with selected organisations that expressed a relevant interest.
The assessment list of the ethics guidelines operationalises the key requirements for ethical AI and offers guidance to implement them in practice.
This feedback helped to better understand how the assessment list can be implemented within an organisation. It also indicated where specific tailoring of the assessment list was still needed, given the context-specific nature of AI.
The piloting phase took place from 26 June until 1 December 2019.
Based on the feedback received, the High-Level Expert Group on AI will propose a revised version of the assessment list to the Commission in July 2020.
It should be noted that the Guidelines do not give any guidance on the first component of Trustworthy AI (lawful AI). They explicitly state that nothing in the document should be interpreted as providing legal advice or guidance concerning how compliance with any applicable existing legal norms and requirements can be achieved. Moreover, nothing in the guidelines or the assessment list shall create legal rights or impose legal obligations towards third parties. The AI HLEG, however, recalls that it is the duty of any natural or legal person to comply with laws – whether applicable today or adopted in the future. The Guidelines proceed on the assumption that all legal rights and obligations that apply to the processes and activities involved in developing, deploying and using AI systems remain mandatory and must be duly observed.