Trustworthy AI
Introduction
AI models are increasingly integrated into our daily lives. They recommend movies to watch, suggest products to buy, help us navigate to our destinations and let our faces unlock our phones. These models have become indispensable, yet they remain largely invisible.
How many people consciously think they are using an AI model when watching Netflix or unlocking their iPhones? We use many AI models each day without actively deciding to; the recommender systems, facial recognition and route-planning models are simply embedded in our applications and devices.
As more companies and governments adopt AI models for high-impact decision-making such as loan approvals, job applications, college admissions and medical diagnoses, the stakes rise: these decisions can have life-changing consequences for the people involved.
If the public or customers do not have confidence in the outcomes or decisions presented to them, they will trust neither the AI models themselves nor the institutions deploying them.
What is Trust?
Trust is a belief in the truthfulness of, or confidence in, a person, institution, product or system. Every day we trust things around us without giving them much thought.
When we board a train, we implicitly trust that we will reach our destination safely and on schedule. In doing so we rely on the quality of the design and construction of the train itself (a product); on the infrastructure of stations, tracks, overhead wires and signals (a system); and on the train operator (an institution), who brings these elements together by maintaining the infrastructure, purchasing and operating the rolling stock, training the driver (if there is one) and preserving the integrity of the whole.
So when you tag on with your travel card and run to catch the train, you are thinking about your day or what you will do when you reach your destination, not about how the system works. You trust the people, the institution, the product and the system implicitly.
What is Trustworthy AI?
Trustworthy AI refers to the development and operation of AI systems that adhere to a set of ethical principles. While the taxonomy of what constitutes a Trustworthy AI system varies between sources, the list generally includes explainability, fairness, respect for privacy, security, accountability, reproducibility and traceability.
- Explainability - Explainable AI (or XAI) refers to explaining the output of a model to humans in a way they can understand. This can involve specific AI techniques, such as weighting feature importance in tabular data or saliency maps that highlight important pixels in an image, and it also draws on broader Human-Computer Interaction (HCI) tools, including design, human factors and psychology. (A sketch of one such technique, permutation feature importance, follows this list.)
- Fairness - An AI system should not be biased against any individual or group. This means considering fairness both in the training data and in the deployment of the system. (A basic fairness check is shown in the second sketch after this list.)
- Privacy - Personal data should be respected, and good privacy practices should be followed for personally identifiable information (PII). This includes complying with any legally prescribed rules, e.g. the EU’s General Data Protection Regulation (GDPR).
- Security - The system must prevent unauthorised access and be resilient against attacks that could result in system corruption or data breaches.
- Accountability - The developers and organisations operating the AI model should be held accountable for their actions.
- Reproducibility - The model performs as expected and consistently under various conditions, and this can be verified by a third party. (A third sketch after this list shows one basic reproducibility practice.)
- Traceability - The maintenance of complete and precise documentation of the data, processes, artefacts and actors involved in the entire lifecycle of an AI model, including its design, implementation and operation. (The final sketch after this list shows a simple lifecycle record.)
Why is Trust in AI Important?
If a chatbot consistently provides incorrect information, it will lose the trust of its users. Similarly, if an autonomous vehicle behaves inconsistently and endangers people’s lives, it will not gain public trust. If the decisions made by an AI are not transparent, or cannot be explained by the person operating it, the AI will not be trusted. Trust is crucial for AI adoption: people will not use systems they do not trust, however beneficial those systems may be.
If users are forced to use an AI system, for example when a government agency uses it to make decisions about welfare or housing, then a lack of transparency will erode trust in the institution itself. It is important for these institutions to be open and clear about how they use the technology in order to earn the trust of the people affected.
Conclusion
We are already using AI models, whether we are conscious of it or not. Models are increasingly being embedded in all sorts of applications, including those making high-impact decisions.
There will not be trust in these models, or in the institutions deploying them, unless they follow the principles of trustworthy AI. Developing systems that are explainable, fair, privacy-respecting, secure, accountable, reproducible and traceable will be key to adoption. Over time, these principles will be translated into best practice and regulatory frameworks on an industry-by-industry basis.
There are, however, challenges. Hundreds, if not thousands, of research papers advancing AI algorithms and techniques are published every day. New foundation models are being trained, and AI companies formed, nearly as fast. We are firmly on the steep upward slope of the hype cycle for AI.
While the hype concerns what these models can do and how they will affect industries and society, the less headline-grabbing work of building and operating these models goes on.
The challenges include keeping up with the rapidly evolving legal and regulatory frameworks on AI. In the last few months, the European Union has agreed the EU AI Act and the US has issued the “Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence”. How these will be translated into law and how they will be enforced is still a work in progress.
Alongside keeping up with regulation and best practice, tools need to be developed, explainability methods improved, and independent AI testing and safety laboratories established to support the building of trustworthy AI systems.
There are many challenges, and many opportunities, ahead in making AI trustworthy.