On Monday the 8th of April, the High-Level Expert Group on AI (AI HLEG) and the European Commission released the Ethics Guidelines for Trustworthy Artificial Intelligence. As with any guiding document, our challenge now is to follow, improve and deliver - both on great AI technology, and the positive human and market outcomes that follow.
In these posts, as a member of the AI HLEG and a contributor to the ethics guidelines, we at OKRA Technologies will share our best practices and applications of the guidelines in our everyday work. We are an Artificial Intelligence company for healthcare, applying AI to combine multiple data sources in one simple interface, allowing healthcare professionals to search all their data in real time. We predict answers to crucial health and market outcome questions, such as which patients a drug will help and where there is a patient population in need, creating a more proactive, predictive and precise healthcare industry. This is particularly crucial at a time when Europe is facing an ageing population, growing lifestyle diseases, and a shrinking healthcare workforce. AI will be a vital tool for supporting the future success of European healthcare and for making the shift from a volume-based to a value-based healthcare system. In this process, not least in the healthcare industry, ethics will be a crucial consideration.
The AI HLEG ethics guidelines contain seven key requirements for the development of Trustworthy AI, tied to four ethical principles. To begin, we will cover one of the most contested and widely discussed requirements: Transparency.
Why transparency is key: no insight, no trust, no action
The Ethics Guidelines state that transparency is split into three components: traceability, explainability and communication, where traceability relates to the data and processes behind an AI-recommended course of action, explainability to the reasons behind those suggestions, and communication to clear signalling of the existence, limitations and applications of AI systems as they interact with humans.
As the Guidelines state, transparency is important because “without such information, a decision cannot be duly contested.” As an AI company working directly with clients in the healthcare and life sciences industry, we would add that transparency is important not only for formal reviews and processes, but also for creating the basic trust and understanding needed for a user to act on any prediction or suggestion that wasn’t originally their own. Whether suggestions come from humans or AI, we cannot confidently action them unless we understand what they are, where they come from, and why they are important.
How are these three areas of transparency - traceability, explainability and communication - applied at OKRA?
For the requirement of traceability, OKRA practices clear documentation of which data and methodologies we use, and how these different factors impact predictions and suggestions in our AI system. When we receive data from clients, it does not arrive all at once but in pieces. Each time we receive new data, we integrate it, extract new variables (“features”) that will impact predictions and suggestions, and retrain and validate our AI models. This means that every time there are small changes in the data, we must reprocess it. To make these changes fully traceable, we practice “version control” - meaning we save every version of the data we use, along with the intermediate stages and predictions in the processing. This is a mechanism to trace how changes in the data lead to changes in results: to trace what data has what effect on the predictions and suggestions, and how we arrived there. If our AI performance is impacted in any way by these refreshes, we can trace it back to where it came from.
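As an illustration, the core of this kind of data version control can be sketched in a few lines of Python. This is a minimal sketch under our own assumptions - the function name, registry file and hashing scheme are illustrative, not OKRA's actual tooling - but it shows the principle: every dataset refresh is fingerprinted, so any prediction can later be traced to the exact data version behind it.

```python
import hashlib
import json
from pathlib import Path

def snapshot_dataset(data_path: str, registry_path: str = "data_versions.json") -> str:
    """Record a content hash of a dataset file in a version registry,
    so later predictions can be traced back to the exact data they used."""
    # Fingerprint the file contents: any change in the data changes the hash.
    digest = hashlib.sha256(Path(data_path).read_bytes()).hexdigest()

    # Append the new version to a simple JSON registry.
    registry_file = Path(registry_path)
    registry = json.loads(registry_file.read_text()) if registry_file.exists() else []
    registry.append({"path": data_path, "sha256": digest})
    registry_file.write_text(json.dumps(registry, indent=2))
    return digest
```

If a model's performance shifts after a refresh, the registry makes it possible to identify exactly which data version introduced the change.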
Furthermore, when we receive data from a client, we investigate and document what this data contains; where it is collected, what each feature means, and so forth. Our data scientists also log the algorithms that helped them arrive at certain variables, so every time we run a new model, we have a step-by-step record of performance and methodologies over time. This ensures that our models are traceable across teams, time, projects and stakeholders.
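The step-by-step record described above can be pictured as an append-only run log. The sketch below is a hypothetical simplification - the field names and JSON format are our assumptions for illustration - but it captures the idea of logging, for every model run, which methodology and features were used and how the model performed.

```python
import datetime
import json
from pathlib import Path

def log_model_run(log_path: str, model_name: str, features: list, metrics: dict) -> dict:
    """Append a timestamped record of a training run: which model,
    which features, and how it performed."""
    entry = {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "model": model_name,
        "features": features,
        "metrics": metrics,
    }
    log_file = Path(log_path)
    runs = json.loads(log_file.read_text()) if log_file.exists() else []
    runs.append(entry)
    log_file.write_text(json.dumps(runs, indent=2))
    return entry
```

Over time, such a log lets any team member see how performance and methodology evolved across projects, without having to reconstruct past experiments from memory.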
The most important factor is perhaps less technical: explainability.
At OKRA, we have developed unique methods for generating the “reasons” behind all our predictions and suggestions, which we can display to our users for each of our predictions and representations of data. We view explainability not as technical documentation - which would give users full transparency of the methods and algorithms used, and which one would normally need a PhD in AI to understand. Instead, we believe explainability is best achieved by providing users with the reasons and “supporting evidence” for the predictions that our AI platform provides. If we imagine a patient diagnosis, OKRA’s platform could recommend a certain treatment based on the response of similar patients, individualised variables such as age and region, genetics, and the success of previous treatments, together with our degree of confidence in these analyses - and it gives you this information in plain English. The OKRA platform gives you predictions, reasons and suggestions supported with accuracy scores and traceable data sources, but does not tell you exactly how it processes the data; this would most often not be useful for a human decision-maker. In essence, when we look to explainability, OKRA believes that non-technical explainability is useful and needed for the user, and we have built our system to provide it.
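To make this concrete, a plain-English “reason” of the kind described above could be assembled from per-feature contribution scores. The sketch below is purely illustrative - the function, the contribution scores and the wording are our assumptions, not OKRA's proprietary method - but it shows how evidence can be ranked and phrased for a non-technical user.

```python
def explain_prediction(prediction: str, confidence: float, contributions: dict) -> str:
    """Turn per-feature contribution scores into a plain-English rationale,
    listing the strongest supporting evidence first."""
    # Rank evidence by the magnitude of its contribution to the prediction.
    ranked = sorted(contributions.items(), key=lambda kv: abs(kv[1]), reverse=True)
    reasons = ", ".join(name for name, _ in ranked[:3])
    return (f"Suggested: {prediction} (confidence {confidence:.0%}). "
            f"Main supporting evidence: {reasons}.")
```

For a hypothetical treatment recommendation, `explain_prediction("Treatment A", 0.87, {"response of similar patients": 0.9, "genetics": 0.5, "age": 0.4, "region": 0.1})` yields a one-line rationale naming the three strongest pieces of evidence alongside the confidence score - the kind of plain-English output a clinician can weigh without needing to know the underlying algorithm.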
Nevertheless, whenever in-depth technical explainability is required, we support our clients by creating transparency on our methods, results and validations up to scientific standards. We can describe the data and the algorithms used to achieve a result; this form of explainability is more necessary in scientific domains. At OKRA, we provide scientific validation by publishing our methods and results jointly with our clients to support clinical studies with our AI technology.
To fulfil the requirement of communication, we clearly present the aforementioned “reasons” behind our predictions and suggestions. We have coupled this with a lean feedback mechanism: the platform has built-in, real-time feedback, enabling end users of the AI system to respond instantly, complemented by qualitative weekly and monthly feedback meetings with client representatives. This allows us to learn from the actual use of our platform, and gives us the opportunity to continue a supportive and educational dialogue about the possibilities, limitations and use cases of the OKRA platform. In this sense, we support deeper understanding of the data in use, and set the conditions for client-centric development of the software. All OKRA software is intended to be used with minimal instruction or training, but it is nevertheless accompanied by user documentation and support from OKRA team members who coach the client in maximising the value of the platform.
Traceability, explainability, communication - how do they relate?
We have no doubt that Transparency is one of the most crucial recommendations in the AI HLEG Ethics Guidelines, if not the most crucial. Transparency and explainability lay the basis for trust, and trust is the basis for adoption. With our outline of OKRA’s best practices, we hope to show that explainability should take a less technical role for users, merging with the requirement of Communication to ensure that each AI prediction and suggestion is understood wherever possible. What matters most is which information was crucial to a decision, not the neural pathways that made the connections between those pieces of information. Explainability of AI-driven predictions and suggestions - including accuracy and data origins, coupled with the clear and rigorous internal processes and standards set out above - is the way forward.