Building Explainability into Public Sector Artificial Intelligence: Helping Stakeholders Understand the Thinking Behind AI Decision-Making

By Ian Ryan, Global Head, SAP Institute for Digital Government

As Social Services organisations increasingly adopt artificial intelligence (AI), a crucial consideration in building AI systems is human decision-makers’ ability to understand and explain how these systems reach their decisions. This is often called AI explainability. The SAP Institute for Digital Government collaborated with the University of Queensland to examine the challenge of explainability and how to mitigate its impact on AI implementations.

Transparency and Explainability

We studied nine AI projects and interviewed and surveyed data scientists, system developers, domain experts, and managers. Through this inquiry we identified a vital management responsibility: ensuring that AI and machine-learning algorithms are traceable and explainable to stakeholders, and that they adhere to high standards of data privacy, security and fair use. This transparency is essential to building trust that government is applying AI for the right purposes and using data appropriately for analyses and recommendations.

Managing AI Explainability

At the heart of any AI technology is an AI model: an algorithm trained on data to mimic human decision-making. Models are abstract representations of some portion of reality and are designed to predict domain-specific outcomes. With them come three inevitable gaps in understanding and performance (see Figure 1), all of which need to be addressed and managed.

Gap 1 – Inconsistency between the user’s understanding and the AI model:

  • Users do not fully comprehend the AI model’s logic
  • The gap inhibits user trust and stifles acceptance of AI

Gap 2 – Inconsistency between the AI model and reality:

  • The AI model does not represent or reflect reality in an accurate or meaningful manner
  • The gap causes poor model performance or produces bias against specific cohorts

Gap 3 – Inconsistency between the user’s understanding and reality:

  • Users exhibit bias or a lack of understanding of how things work in reality
  • The gap reduces domain experts’ potential for beneficial real-world impact

Guidelines for Managing AI Explainability

Our research identified practices that can facilitate AI implementation in Social Services and help organisations deliver impactful AI projects. Our insights include:

  • Build explainability into complex AI models by examining and incorporating alternative traceable models. While many advanced AI models are inscrutable black boxes, their behaviour can be approximated by simpler, traceable models that correlate closely with them (see the first sketch after this list).
  • Move beyond technical traceability to explanations that engage and involve stakeholders in AI-model development. Our study highlights the importance of considering AI models’ users, their knowledge, values, and perspectives when building AI.
  • Integrate AI into work by means of user-friendly explanatory interfaces. Regardless of how advanced the code behind the interface is, domain experts need simple tools built with specific user requirements in mind (see the second sketch after this list).
  • Educate and empower frontline staff to exploit AI but also to override its decisions; humans must remain in the decision loop after the AI system is deployed (see the third sketch after this list).
  • Plan for an iterative process. AI technologies are still nascent and emerging, which demands awareness and flexibility: an iterative process in which AI models are continually scrutinised. Explanations are crucial for detecting issues that necessitate revisions to the organisation’s AI systems.
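
To make the first guideline concrete, here is a minimal Python sketch of the traceable-surrogate idea: a shallow decision tree is trained to mimic a black-box model’s predictions and then checked for fidelity (agreement with the black box). The dataset, model choices, and feature names are illustrative assumptions, not artefacts of our study.

    # A shallow, human-readable tree approximates an opaque model's behaviour.
    from sklearn.datasets import make_classification
    from sklearn.ensemble import GradientBoostingClassifier
    from sklearn.tree import DecisionTreeClassifier, export_text
    from sklearn.metrics import accuracy_score
    from sklearn.model_selection import train_test_split

    # Synthetic stand-in for a social-services dataset (assumption for illustration).
    X, y = make_classification(n_samples=2000, n_features=8, random_state=0)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    # The opaque "black box" model that actually produces the recommendations.
    black_box = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)

    # Train the surrogate on the black box's outputs, not the ground-truth labels,
    # so the tree approximates the model's logic rather than reality.
    surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
    surrogate.fit(X_train, black_box.predict(X_train))

    # Fidelity: how often the surrogate agrees with the black box on unseen cases.
    fidelity = accuracy_score(black_box.predict(X_test), surrogate.predict(X_test))
    print(f"Surrogate fidelity vs. black box: {fidelity:.2%}")

    # The surrogate's rules can be shown to stakeholders as a traceable explanation.
    print(export_text(surrogate, feature_names=[f"feature_{i}" for i in range(8)]))

A high fidelity score suggests the traceable model is a reasonable proxy for explaining the black box; a low score signals that its explanations should not be trusted.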
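
For the guideline on explanatory interfaces, the second sketch shows one way to translate a model’s feature contributions into plain-language sentences a caseworker can read. The function name, feature names, and contribution scores are hypothetical; in practice the scores might come from a surrogate model or an attribution method.

    # Render the strongest drivers of a recommendation as short sentences.
    def explain_in_plain_language(outcome, contributions, top_n=3):
        ranked = sorted(contributions.items(), key=lambda kv: abs(kv[1]), reverse=True)
        lines = [f"Recommended outcome: {outcome}"]
        for feature, weight in ranked[:top_n]:
            direction = "increased" if weight > 0 else "decreased"
            lines.append(f"- '{feature}' {direction} the likelihood of this outcome.")
        return "\n".join(lines)

    # Hypothetical contribution scores for a single case.
    contributions = {
        "months_unemployed": 0.41,
        "prior_applications": -0.18,
        "household_size": 0.07,
    }
    print(explain_in_plain_language("eligible_for_support", contributions))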
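
Finally, for the guideline on keeping humans in the decision loop, the third sketch shows an illustrative pattern (not the implementation from our study): confident AI recommendations are applied automatically, low-confidence cases are escalated to a caseworker, and frontline staff can override any decision with a recorded rationale. The threshold and field names are assumptions.

    from dataclasses import dataclass
    from typing import Optional

    CONFIDENCE_THRESHOLD = 0.85  # assumed policy value, for illustration only

    @dataclass
    class Decision:
        outcome: str
        confidence: float
        decided_by: str
        rationale: Optional[str] = None

    def decide(ai_outcome: str, ai_confidence: float, review_queue: list) -> Decision:
        """Auto-apply confident AI recommendations; escalate the rest."""
        if ai_confidence >= CONFIDENCE_THRESHOLD:
            return Decision(ai_outcome, ai_confidence, decided_by="ai")
        # Below the threshold, the case is queued for a human decision-maker.
        review_queue.append(ai_outcome)
        return Decision("pending_review", ai_confidence, decided_by="human_queue")

    def human_override(decision: Decision, new_outcome: str, rationale: str) -> Decision:
        """Frontline staff can override any decision, with a recorded rationale."""
        return Decision(new_outcome, decision.confidence, decided_by="human",
                        rationale=rationale)

    queue: list = []
    d = decide("approve_benefit", 0.62, queue)  # low confidence, so escalated
    d = human_override(d, "approve_benefit", "Verified income documents manually.")
    print(d)
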
Further research

This stage of our research identified the need for explainability when using AI in business processes to ensure a high level of stakeholder trust in the outcomes. Further research (to be published in future editions of the ESN newsletter) will explore the specific challenges of building AI capability.

You can download our research paper here.