Last month, SAP Chief Sustainability Officer Daniel Schmid welcomed participants to the 2023 SAP AI Ethics Advisory Panel meeting at SAP headquarters in Walldorf, Germany.
The recent acceleration of generative artificial intelligence (AI) capabilities — as shown in a demo of SAP SuccessFactors solutions for generative AI in HR data at SAP Sapphire Orlando this year, for example — has left customers eager to embed these capabilities into their SAP applications as soon as possible.
Now, the challenge for SAP is to meet this demand with embedded generative AI capabilities that are not only efficient but also sustainable, responsible, and trustworthy. One part of this complex puzzle is collaboration with the SAP AI Ethics Advisory Panel.
SAP was the first major European tech company to create an AI ethics advisory panel five years ago, comprising independent AI ethics experts from academia and industry. The panel, sponsored by Thomas Saueressig, Executive Board member of SAP SE, SAP Product Engineering, convenes twice a year. Together with the SAP AI Global Ethics steering committee, panelists discuss current AI ethics issues and anticipate upcoming ones. Continuing to build on this tradition, panelists this year included:
- Peter Dabrock (virtual), chair of Systematic Theology (Ethics), University of Erlangen, Germany
- Susan Liautaud, lecturer in Public Policy and Law, Stanford University, U.S.
- Nicholas Wright, consultant, Intelligent Biology; affiliated scholar at Georgetown University Medical Center, U.S.; honorary research associate at University College London, UK
- Paul Twomey, global founding figure of ICANN, co-founder of STASH, Australia
- Emma Ruttkamp-Bloem, professor and head of the Department of Philosophy, Faculty of Humanities at the University of Pretoria, South Africa
The Office of the Chief Sustainability Officer chairs these meetings and, as Schmid explained, “runs the AI ethics governance process to guarantee independent oversight of AI ethics at SAP and to guarantee that the SAP commitment on global human rights is taken into account.”
Day One: Understanding the Risks
With human rights top of mind, the AI Ethics Advisory Panel got to work, focusing on use cases with embedded generative AI capabilities.
Discussions were frank and fruitful: Is the identity of the individual really protected? Are users truly informed about what the app can and can’t do? What about consent? What is the risk of harm to individuals if information is incorrect? What about legal liability? SAP has a responsibility to make its apps safe — how can it do so? Does pursuing higher revenue and efficiency with generative AI also increase the risk to customers’ data, to their reputation, and to SAP itself?
SAP regards all generative AI use cases as high risk because of current technological limitations such as hallucinations and the high effort required for training and operations. Other risks include biases in output, potential misuse of models, and the fact that the legal frameworks for intellectual property and copyright in generative AI are still uncertain.
High-risk use cases are subject to a high degree of scrutiny and must be assessed by the SAP AI Global Ethics steering committee before development can continue.
For example, aspects such as the processing of personal or sensitive data, automated decision-making, and potential negative effects on individuals or groups of individuals are examined, as well as the planned domain of deployment. Law enforcement, healthcare, democratic processes, employment, and HR are just some examples of application areas that SAP deems to be high risk in the context of generative AI.
Day Two: Unlocking the Potential with AI Ethics
On the agenda for day two was SAP’s strategy for generative AI, a discussion on human rights and AI, and the progress that SAP is making toward equipping the workforce to deliver responsible and trustworthy AI.
Part of the AI ethics framework is dedicated to workforce enablement, delivering the knowledge, skills, tools, and processes required to make informed decisions about deployment. The panel offered feedback on SAP’s proposals on how to further embed AI ethics in its corporate DNA.
The advisory panel concluded that the key to unleashing the full potential of generative AI is to understand the current potential pitfalls and dangers and that SAP’s AI governance framework is in a strong position to adapt, respond, and manage the ethical challenges of generative AI going forward.
The detailed findings of the advisory panel are shared with the SAP AI Global Ethics steering committee and the Executive Board of SAP SE.
Interview with Professor Ruttkamp-Bloem
In an interview, Ruttkamp-Bloem answered questions on AI, her experience on the panel, and why language matters.
Q: How can the potential of AI be actualized?
A: The potential of AI can only be fully actualized if it is fully adopted, and it can’t be fully adopted if it is not trusted — and it won’t be trusted if it is not ethically governed.
What is sustainable AI technology?
The only sustainable AI technology is ethical, responsibly governed AI technology. If you don’t have sustainable AI — AI that is ethical and responsibly governed — your reputation will be damaged, you’ll have court cases and, ultimately, you will lose business.
How does SAP work with the AI Ethics Advisory Panel to make AI ethics part of our corporate DNA?
The SAP team that the panel interacts with doesn’t actually need the panel! SAP has done its homework on these issues. In many cases, these are people who have had no specific training and have stepped beyond the parameters of their software engineering background, yet they know what they are talking about when it comes to terms such as structural bias, identity, prejudice, and more. SAP now has a whole AI ethics structure which, from my experience at the World Benchmarking Alliance, is lacking at many other companies. SAP is one of only 22% of companies that say anything about ethics on their website.
Just putting up guiding principles is useless without actualizing them. But SAP, unlike other companies I have consulted for, has a bottom line: if there are certain ethical concerns, it will pause the project and go back to the drawing board.
SAP does have an AI ethics structure in place, SAP does know what it’s talking about, SAP does understand that the next step is actualization, and SAP listens to the panel.
For example, at last year’s meeting, the panel discussed how SAP could start talking about the impact of AI ethics in the company and how teams would feel if their projects had to be rethought or paused because of AI ethics concerns. SAP responded with different initiatives, such as the very successful openSAP course “AI Ethics at SAP,” as well as an internal AI ethics speaker series. It’s also clear that, on the ground and in the company itself, SAP is actively supporting development teams to manage the additional requirements of developing AI ethically. SAP is going out of its way to tell teams why these steps are necessary because, ultimately, developing AI ethically is about developing a sustainable product.
Why is the intersection of human rights and AI critical?
The growing number of use cases means that the scope for harm and misuse is also growing. As previously explained, sustainable AI technology has to be ethically and responsibly governed and the reason for this lies in the nature of the technology; it’s not human-centered, but human-driven.
The math behind AI is beautiful, but without human data it’s nothing. This is not emphasized enough: AI builds on what we typically do as humans, and thus builds on our biases. Algorithms latch onto patterns and, in the absence of knowing good from bad, can amplify the bad until the whole thing explodes. There is an ethical reason to take responsibility and govern software ethically: the harm that can come from AI, and the overall threat it poses to what it means to be human.
How does the phrasing of guiding principles impact trust?
The phrasing of principles is critical to making them understandable. It is hard to adhere to principles that you were not part of framing and that are expressed in a language you do not recognize. This is where the UNESCO Recommendation on the Ethics of AI is unique: the ad hoc expert team that drafted the recommendation, which I chaired, went out of its way to include different articulations of shared values — some of which regrettably had to be changed during member state negotiations because of the issue of diplomatically recognized vocabulary.
Companies that operate globally and have ethical principles want everyone to understand them and ultimately these companies want the technology that they’re building to be useful in the society in which it will be deployed. But if you have a set of principles articulated in a vocabulary that doesn’t make sense to the society where you’re deploying your technology, it will impact the trust and the scope of adoption. Culture plays a part. It’s an interpretation tool, a lens that, if used in policy making and related discourse, can make people feel more involved. They see the vocabulary they know and, consequently, adherence to and trust in this process come more easily.