
Government has yet to fully capitalise on AI. Here are 4 ways to change that.

Feature

New research examines the public sector’s use of AI, revealing the biggest challenges in applying potentially revolutionary AI solutions and how agencies can overcome them.

Embracing technology: the public sector of the future

To better serve its citizens, the public sector faces a pressing need to become more agile, more mobile and more efficient. Some of the most hotly anticipated solutions are those enabled by artificial intelligence (AI). Spanning predictive analytics, machine learning and intelligent robotic process automation, AI offers one of the surest paths to extracting insight and value from growing volumes of data.

This has fuelled aspirations for everything from advanced smart cities to new approaches in population health management. Often these solutions involve predictive analytics that could help agencies make better decisions, respond faster during crises and even pre-empt problems altogether. Some agencies are already putting AI to work: Queensland’s Office of State Revenue, for example, used machine learning to predict tax non-compliance, netting the state an extra $27 million in revenue.
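The details of the Queensland model are not public, but the general pattern (training a classifier on behavioural signals, then ranking accounts by predicted risk so auditors review the riskiest cases first) can be sketched in a few lines. In the illustrative sketch below, every feature name, data point and modelling choice is hypothetical:

```python
# A minimal, hypothetical sketch of a non-compliance classifier.
# Feature names and synthetic data are illustrative only, not the
# actual Queensland model.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 5_000

# Hypothetical behavioural features for each taxpayer account.
late_lodgements = rng.integers(0, 10, n)  # lodgements filed late, last 3 years
avg_days_to_pay = rng.exponential(30, n)  # average days taken to pay
benchmark_gap = rng.normal(0, 1, n)       # declared income vs industry benchmark
X = np.column_stack([late_lodgements, avg_days_to_pay, benchmark_gap])

# Synthetic labels: non-compliance loosely correlated with the features.
logits = 0.3 * late_lodgements + 0.02 * avg_days_to_pay - 0.8 * benchmark_gap - 2.0
y = (rng.random(n) < 1 / (1 + np.exp(-logits))).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0, stratify=y)

model = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)

# Rank accounts by predicted risk so audit effort goes to the riskiest first.
risk = model.predict_proba(X_test)[:, 1]
print("AUC:", round(roc_auc_score(y_test, risk), 3))
print("Highest-risk accounts:", np.argsort(risk)[::-1][:5])
```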

Government also has a unique role to play when it comes to AI. Since every Australian is affected in some way by government services, governments must take the lead in their use of AI, whether in operations or in service delivery.

Yet broader adoption remains low. A 2018 investigation by the SAP Institute for Digital Government (The SIDG) found that, while 80 per cent of public sector organisations were working toward data transformation, less than 15 per cent had progressed beyond the prototype stage.

The SIDG teamed up with University of Queensland researchers to assess where the sector stands in 2020. The resulting white paper, Delivering AI Programs in the Public Sector: Guidelines for Government Leaders, identifies the biggest AI challenges in the public sector – and how leaders can overcome them to finally harness the true potential of AI solutions.

The resource challenge: building AI capability and securing human talent

AI relies on large datasets, high-quality data, the right platforms and – importantly – data science talent.

This is resource-intensive – an acute challenge in the public sector, where data is often purposefully siloed and fractured across complex, ageing legacy systems. These overlapping issues create a chicken-and-egg dilemma: leaders may struggle to secure funding and executive buy-in without proven value, but proving value depends on funding and executive buy-in.

The research did uncover examples of success, though. One agency overcame data-sharing barriers by outsourcing its AI model development; the model was then trained on citizens’ payment data instead of sensitive personal data. Another agency chose a commercial off-the-shelf AI development platform to reduce its maintenance burden.

Misunderstandings about AI and inflated hopes also demand project-level governance to manage expectations and sustain ongoing commitment from executives.

The process challenge: pre-empting machine fallacies by keeping humans in the loop

Despite myths of robot overlords and job losses, algorithms outperform humans only in their ability to process huge datasets. They still lack our context-specific reasoning capabilities, which means AI solutions can’t simply be plugged into existing workflows. Agencies will need to rethink processes to combine the strengths of machines and people.

This is complicated by the barriers that often separate data scientists from subject matter experts, and it can demand the redesign of entire workflows. The researchers found that the agencies that reconciled these issues were those that embedded data scientists in everyday operations and encouraged collaboration with subject matter experts.

Successful approaches include co-location and collaborative workshops but, interestingly, interview data also highlighted the importance of attracting data scientists with strong soft skills and good communication.

Organisations were keenly aware of the need for human oversight and the risks of deferring to automation. Many were already redesigning workflows so that AI did the heavy lifting and data-crunching while human workers oversaw the AI and made the final decisions.
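One common way to encode that division of labour is confidence-based routing: the model auto-resolves only clear-cut cases and defers everything else to a human officer. The sketch below illustrates the idea; the thresholds, case structure and queue names are assumptions, not details from the white paper:

```python
# A minimal human-in-the-loop routing sketch. Thresholds, case fields
# and queue names are hypothetical.
from dataclasses import dataclass

@dataclass
class Case:
    case_id: str
    risk_score: float  # model output in [0, 1]

AUTO_CLEAR_BELOW = 0.10  # confidently low risk: close automatically
AUTO_FLAG_ABOVE = 0.90   # confidently high risk: queue for audit

def route(case: Case) -> str:
    """Let the model handle clear-cut cases; a person decides the rest."""
    if case.risk_score < AUTO_CLEAR_BELOW:
        return "auto_clear"
    if case.risk_score > AUTO_FLAG_ABOVE:
        return "audit_queue"
    return "human_review"  # the model defers to a human officer

for case in [Case("A-101", 0.04), Case("A-102", 0.57), Case("A-103", 0.96)]:
    print(case.case_id, "->", route(case))
# A-101 -> auto_clear, A-102 -> human_review, A-103 -> audit_queue
```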

The explainability challenge: minimising bias and enabling transparency

Advanced AI models have an “explainability problem” – that is, the complexity of their logic and the sheer volume of data can make decision-making inscrutable to us.

This is a massive hurdle in the public sector, where public trust often depends on transparent rationale and straightforward accountability. It’s an even bigger challenge once we consider that algorithms have already demonstrated a serious risk of bias and error.

The researchers found that some agencies have been establishing strict oversight and procedural systems with these specific risks in mind. For instance, one agency excluded demographic data in favour of behavioural data to minimise bias in the model’s predictions.

Another created a more extensive end-user interface that visualised a customer journey and highlighted risky payment behaviours. This provided visibility into the factors affecting the overall risk estimate.
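Both ideas can be illustrated together. In the sketch below, where all column names and data are hypothetical, demographic fields are dropped before training and a logistic model’s additive structure is then used to show how much each behavioural factor contributes to a given risk estimate:

```python
# Hypothetical sketch: exclude demographic features, then surface
# per-feature contributions so case workers can see why a risk
# estimate is high. Column names and data are illustrative only.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
df = pd.DataFrame({
    "age": rng.integers(18, 80, 1000),           # demographic: excluded
    "postcode": rng.integers(4000, 4999, 1000),  # demographic: excluded
    "missed_payments": rng.integers(0, 6, 1000),
    "avg_days_late": rng.exponential(10.0, 1000),
})
y = ((df["missed_payments"] + 0.1 * df["avg_days_late"]
      + rng.normal(0, 1, 1000)) > 3).astype(int)

behavioural = ["missed_payments", "avg_days_late"]  # demographic fields dropped
X = df[behavioural].to_numpy(dtype=float)
model = LogisticRegression().fit(X, y)

# In a logistic model, each feature adds coefficient * value to the
# log-odds, so the factors behind a risk estimate can be shown directly.
case = X[0]
for name, coef, value in zip(behavioural, model.coef_[0], case):
    print(f"{name}: {coef * value:+.2f} to the log-odds")
print("overall risk estimate:", round(model.predict_proba([case])[0, 1], 3))
```

More complex models would need a dedicated explanation technique, such as SHAP values, to produce a comparable factor-by-factor breakdown.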

The culture challenge: reducing distrust among employees and citizens

Despite research indicating that AI adoption rarely stems from a desire to reduce headcount, job-security fears abound. Additionally, the researchers found that some workers continue to distrust AI’s decisions.

One solution is educating employees about the potential of AI-enabled tools. This becomes an easier sell once employees see low-value tasks and administrative burdens eliminated, freeing them to focus on more strategic and interesting work.

The public sector faces public resistance, too. Some agencies have the added challenge of a power imbalance, as citizens who rely on their services may not be able to switch providers as they could in the private sector.

While wider societal perceptions may evolve in a way that reduces distrust, there’s no simple solution to these challenges. Trust will depend on proven value and the effective management of unintended consequences – which will in turn depend on many of the solutions mentioned above.

The public sector faces unique challenges with AI solutions but also stands to gain some of the biggest rewards. And, promisingly, some agencies are already demonstrating how to address these issues.

Taking an even deeper look into the public sector’s relationship with AI, Delivering AI Programs in the Public Sector: Guidelines for Government Leaders provides a practical framework for building the foundations necessary for effective AI development in government.

However, it’s an area that requires deeper exploration, which is why The SIDG will continue partnering with the University of Queensland to understand ongoing challenges.

To read more about SAP Australia’s public sector offer, visit the public sector homepage now.