
Fairness, Transparency, and Human Involvement: The Ethical Side of Artificial Intelligence

Feature

Relevant, reliable, and responsible – those are SAP’s guidelines for the artificial intelligence embedded in its solutions and products. The “responsible” part is being monitored by the software company’s AI Ethics department, led by Dr. Sebastian Wieczorek.

“At SAP, ethics has been part of our research and development of artificial intelligence from the very beginning,” Wieczorek, head of AI Ethics at SAP, says. “Every development in the area of AI is deeply aligned with SAP’s values.”

The Beginnings of Artificial Intelligence at SAP

Wieczorek was part of SAP’s first AI unit, founded in 2014. “In addition to the technical and product tasks, we have always considered the ethical aspect of our work from the beginning,” he says.

SAP was the first European company to define guidelines for dealing with AI and set up a corresponding advisory panel. Wieczorek’s work has always had a technical focus, but he has also been a member of the SAP AI Global Ethics steering committee, a member of the Enquiry Commission on AI in the German Bundestag, and has reported on the uses of artificial intelligence at the EU Parliament.

SAP focuses on embedding AI that is relevant, reliable, and responsible by design

SAP initiated internal processes early on to formulate an approach to ethically unobjectionable AI, which ultimately led to the SAP AI Global Ethics policy.

To address ethical questions, experts must possess deep knowledge about AI technology and also be willing and able to engage in philosophical and moral questions as well as the legal side of technology.

“Our work in AI ethics is somewhat similar to that of a translator,” Wieczorek says. “Technological realities and possibilities have to be ‘translated’ into the language of philosophy, sociology, and law. The results must then be ‘translated’ back into technological requirements, so that a constant exchange between the two areas is achieved.”

The Role of Humans

Currently, AI systems cannot develop a motivation of their own or a concept of themselves or the world – never mind considering, on their own initiative, how to best optimize this world.

“Their purpose and tasks are determined by humans,” Wieczorek emphasizes. “What humans no longer do is define the exact implementation of the task.”

As with any delegation of tasks – whether to machines or to other people – it must be ensured that certain rules regarding fairness, transparency, and human participation rights are adhered to in their execution.

“We know less about how decisions are ultimately made when it comes to AI than we do when it comes to conventional software,” Wieczorek says. “Therefore, we must keep the possibility open to intervene if this automation does not work in certain cases as we want it to.”

What such an intervention looks like in individual cases can vary widely, since the software must be considered as a whole, not just its individual components.

The best-known example of discrimination by intelligent software is the exclusion of historically underrepresented groups in job application processes. The historical data used to train the AI may reflect the biased selection criteria of the past, so the AI can adopt and reproduce these biases in its own selection process.

“Side effects of this kind can occur relatively quickly on a large scale due to the high automation potential of AI software,” Wieczorek says. “Therefore, we must set high standards for the type and manner of automation and have the ability to limit side effects and efficiently reverse them.”

Guidelines for training data sets are neither the only leverage nor a guarantee of maximum fairness.

“There is a chain of things to consider,” Wieczorek says. “The system as a whole must be able to provide guarantees that the evaluation is fair – its behavior must be impeccable in the overall view.”

A Dedicated Ethics Review for Every AI Use Case

“In the SAP AI Global Ethics policy, it is stipulated that all of our products and solutions that use AI must be monitored from an ethical perspective – both during the development phase and later, when they are already on the market,” Wieczorek says.

Each AI use case is therefore subject to a separate review, which includes declarations from the product teams on how the use case complies with the guidelines of the SAP AI Global Ethics policy.

Once defined, every use case undergoes a classification process. Use cases that, for example, make automated decisions affecting people or process personal data are automatically considered sensitive and classified as high risk.

“Such use cases then undergo a mandatory review process, which is continuously accompanied by experts, for example from my team,” Wieczorek says. “This way, each individual case is systematically checked for risks, in order to then decide, if necessary, what measures need to be taken to implement the ethical standards prescribed by SAP.”
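The triage logic described above – sensitive characteristics trigger a high-risk classification and a mandatory review – can be sketched as a small function. This is a hypothetical illustration only; the field names and the `AIUseCase` structure are assumptions, not SAP's actual internal implementation.

```python
from dataclasses import dataclass

@dataclass
class AIUseCase:
    # Hypothetical attributes modeling the sensitivity criteria
    # mentioned in the article; not SAP's real schema.
    name: str
    makes_automated_decisions_about_people: bool
    processes_personal_data: bool

def classify(use_case: AIUseCase) -> str:
    """Return a risk class for the use case.

    Sensitive use cases (automated decisions affecting people, or
    processing of personal data) are classified high risk, which in
    turn triggers the mandatory expert review described in the text.
    """
    if (use_case.makes_automated_decisions_about_people
            or use_case.processes_personal_data):
        return "high risk"
    return "standard"

# Example: a resume-screening use case touches both criteria.
screener = AIUseCase("resume screening", True, True)
print(classify(screener))  # prints "high risk"
```

The point of the sketch is that classification is rule-based and happens before any review: the rules decide *whether* experts must look at a case, and the experts then decide *what* measures are needed.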

Does the Use of AI Limit Human Responsibility?

Routine tasks taken over by AI are usually already automated to some degree and do not have to be formulated anew every time. But as AI takes over increasingly personalized tasks, for example through chat interactions, the individual user’s responsibility grows.

Wieczorek sees a shared responsibility between the AI developer and the user. “Especially with tasks that affect people and have human consequences, we will not shift the responsibility to the products,” he emphasizes.

Those offering an AI application are obligated to provide transparency about its behavior and to clarify what it was designed for – and what it was not designed for.

This in turn allows users to take on their own responsibility: the tasks they assign must adhere to ethical principles, and the results must be verified rather than simply accepted.

It is particularly important that people can still intervene in the functioning of the systems if an undesired side effect, such as unfair system behavior, becomes apparent over time.

“It must always be ensured that humans can review, question, and possibly reverse the decisions made by the AI,” Wieczorek says. “This is the responsibility of the manufacturers of AI systems, as laid out in the SAP AI Global Ethics policy.”
