Auditing AI – the way to trustworthy AI

Cindy Degreef
8 November 2019

ISACA, the professional association of systems auditors and cybersecurity experts, welcomes the opportunity to comment on the AI High-Level Expert Group's ethics guidelines and the subsequent Policy and Investment Recommendations. As the body representing systems auditors, we have already undertaken work to identify the main challenges and opportunities that the increasing use of AI creates for auditors.

We were very pleased that the ethics guidelines include a suggestion to facilitate the traceability and auditability of AI systems, which forms part of the wider principles of explainability, transparency and accountability. As the HLEG correctly observes, it is not always possible to determine how an algorithm has reached a decision (so-called black-box algorithms); indeed, in some cases it may not even be feasible to examine the inner workings of an algorithm or the business model and IT behind it. In such cases it is necessary to implement other explicability measures, such as traceability, auditability and transparent communication on system capabilities. The group also observes that independent auditing can enhance the trustworthiness of the technology.
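
To make the idea of traceability concrete, here is a minimal sketch, assuming a Python deployment and a model treated purely as input-to-output; the class and field names are illustrative, not a prescribed scheme. It records an audit trail for every decision of a black-box model without inspecting its inner workings:

```python
import hashlib
import json
import time

class AuditedModel:
    """Wraps a black-box model so every decision leaves a traceable record.

    The wrapped model is treated purely as input -> output; traceability
    comes from the log, not from inspecting the model's internals.
    """

    def __init__(self, model, model_version, log_path="audit_log.jsonl"):
        self.model = model
        self.model_version = model_version
        self.log_path = log_path

    def predict(self, features: dict):
        output = self.model(features)
        record = {
            "timestamp": time.time(),
            "model_version": self.model_version,
            # Hash rather than store raw inputs, so the audit trail
            # itself does not leak personal data.
            "input_hash": hashlib.sha256(
                json.dumps(features, sort_keys=True).encode()
            ).hexdigest(),
            "output": output,
        }
        with open(self.log_path, "a") as f:
            f.write(json.dumps(record) + "\n")
        return output

# Example: an opaque scoring function audited without opening the box.
scored = AuditedModel(lambda x: {"score": 0.87}, model_version="v1.3")
print(scored.predict({"age": 42, "income": 55000}))
```

The point of such a trail is that it, rather than the model's internals, is what an auditor later examines.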

ISACA fully supports these points, and we consider it very positive that the incoming European Commission leadership has pledged to deliver legislation on AI within its first 100 days, broadly based on the AI HLEG's output. The rules are expected to be horizontal and to include requirements on the transparency and explainability of AI and algorithmic decision-making. As the HLEG ethics guidelines clearly suggest, auditability is a prerequisite for building trustworthy AI.

In the paper accompanying this contribution, we lay out the main challenges and solutions that AI poses for auditors. We also explain what changes AI brings and what precedents auditors can build upon to inspect these complex systems and ensure their integrity, safety and, ultimately, trustworthiness. Simply put, the main challenges are twofold. First, AI functions as a collection of different systems; for example, a machine learning component is only one part of the wider AI system. Second, AI does not operate on a pre-determined set of rules but can evolve as the algorithm adapts to changing circumstances. ISACA's proposal is that AI systems should be audited along the lines of a cloud computing or cybersecurity audit: auditors should focus on the controls and governance structures in place and determine whether they are operating effectively, rather than having to examine all the relevant protocols in depth. A sketch of what such a controls-focused check could look like follows below.
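
The following is a hedged sketch of a controls-focused audit step; the control names and the record schema are our assumptions for illustration, not an ISACA checklist. It tests that governance artefacts exist and are in force for a deployed model, rather than examining how the model decides:

```python
from dataclasses import dataclass

# Illustrative governance record for a deployed AI system; the fields
# are assumptions, not a prescribed schema.
@dataclass
class ModelGovernanceRecord:
    data_lineage_documented: bool
    approval_signed_off: bool
    drift_monitoring_enabled: bool
    retraining_change_logged: bool
    incident_response_plan: bool

def audit_controls(record: ModelGovernanceRecord) -> list[str]:
    """Return the list of failed controls.

    Mirrors a cloud/cybersecurity-style audit: we test that governance
    controls are in place and operating, not how the algorithm decides.
    """
    checks = {
        "Training data lineage is documented": record.data_lineage_documented,
        "Deployment was formally approved": record.approval_signed_off,
        "Model drift is monitored in production": record.drift_monitoring_enabled,
        "Retraining/adaptation is change-logged": record.retraining_change_logged,
        "An incident response plan exists": record.incident_response_plan,
    }
    return [name for name, passed in checks.items() if not passed]

failures = audit_controls(ModelGovernanceRecord(
    data_lineage_documented=True,
    approval_signed_off=True,
    drift_monitoring_enabled=False,   # e.g. no drift dashboard configured
    retraining_change_logged=True,
    incident_response_plan=True,
))
print("Failed controls:", failures or "none")
```

Because an adaptive system can change after deployment, a point-in-time inspection of its internals quickly goes stale; testing that controls such as drift monitoring and change-logging operate effectively gives ongoing assurance instead.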

Auditability of AI can deliver the safe, trustworthy AI that both the AI HLEG and the European Commission have set as their goal. We therefore strongly urge lawmakers to include provisions on the auditability of AI systems in the upcoming AI legislation.