Opinion: Building explainability into AI projects

Accelerating medical research, increasing public safety, building smart cities and continually improving the services used by citizens every day are just a few examples of the benefits that artificial intelligence (AI) can deliver in the public sector, writes Ian Ryan.

Yet compared with many private sector industries, it’s fair to say that public sector adoption of AI technology has been more measured. Governments and other public sector organisations face a number of significant challenges, from the availability of skills and investment funding, to demonstrating value and ensuring transparency about how decisions are made.

These challenges are reflected in the SAP Institute for Digital Government’s latest report – Building Explainability into Public Sector Artificial Intelligence – developed in partnership with the University of Queensland. While 80 per cent of public sector organisations are actively working towards data-driven transformation, fewer than 15 per cent have progressed beyond prototypes.

To drive greater uptake, the public sector needs to develop best-practice frameworks and solutions for developing and using AI systems that are not only accurate, robust, and scalable, but also reliable, fair, and transparent.

When building AI systems to meet these high expectations, it’s vital that public sector workers are able to understand how these systems generate decisions and explain how those decisions affect outcomes. This is known as AI explainability.

Read more of the article on Government News here