The latest version of the SAP AI Ethics Handbook is the one-stop shop for applying the SAP Global AI Ethics policy and creating ethical AI solutions that support our commitment to deliver relevant, reliable, and responsible AI.
The updated handbook now contains information about generative and other types of AI and how to apply SAP’s updated ethical AI guiding principles. Here is a brief introduction to the handbook and how you can use it to apply SAP’s AI ethics policy to your work.
SAP’s Guiding Principles on AI Ethics
Principles 1-7 apply to teams involved in creating AI systems; principles 8-10 address governance requirements.
- Proportionality and Do Not Harm
- Safety and Security
- Fairness and Non-Discrimination
- Sustainability
- Right to Privacy and Data Protection
- Human Oversight and Determination
- Transparency and Explainability
- Responsibility and Accountability
- Awareness and Literacy
- Multistakeholder and Adaptive Governance and Collaboration
Who Is the Target Audience for This Handbook?
In a nutshell – everyone developing and implementing AI.
This handbook is for everyone who wants to give users confidence in SAP's AI ethics processes and confidence that humans are at the core of those processes. In short, it's for everyone who wants to help create a human-centered AI culture.
The handbook explains how human-centered AI is achieved with tools like user research, design thinking, and user stories. These tools help create products that are closely aligned to the needs of SAP’s target groups, increasing benefits and mitigating the risk of unintended harm in SAP AI use cases.
What Is an AI Use Case at SAP?
At SAP, an AI use case is one in which the AI system is built on symbolic AI, traditional (narrow) AI, or generative AI. This handbook applies to all three types of AI use cases.
How Do You Determine an AI Use Case?
The handbook includes an ideation checklist that guides you through determining the type of use case: red line, high-risk, or standard. It also provides detailed checklists for validation, realization, productization, and operation.
What Is a Red Line Use Case?
Red line use cases are AI use cases that are prohibited because they undermine personal freedom, undermine society, and/or cause intentional damage to the environment.
What Is a High-Risk Use Case?
An AI use case is classified as high-risk if it meets at least one of the following criteria:
- Personal data is processed.
- Sensitive personal data is processed.
- It could negatively affect the well-being of individuals or groups, for example through social, safety, financial, or physical harm.
- It involves automated decision-making.
- It operates in a high-risk sector, such as HR, healthcare, law enforcement, or democratic processes.
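The criteria above amount to a simple "any one triggers review" check. As a minimal sketch only, here is how such a triage might look in code; the `UseCase` structure, its field names, and the sector list are illustrative assumptions, not part of the SAP handbook or its checklists:

```python
# Hypothetical sketch of the high-risk triage described above.
# All names here are illustrative assumptions, not SAP's actual process.
from dataclasses import dataclass

# Assumed example sectors, taken from the criteria listed in this article.
HIGH_RISK_SECTORS = {"HR", "healthcare", "law enforcement", "democratic processes"}

@dataclass
class UseCase:
    processes_personal_data: bool = False
    processes_sensitive_personal_data: bool = False
    could_harm_wellbeing: bool = False   # social, safety, financial, or physical harm
    automated_decision_making: bool = False
    sector: str = ""

def is_high_risk(uc: UseCase) -> bool:
    """A use case is high-risk if it meets at least one criterion."""
    return any([
        uc.processes_personal_data,
        uc.processes_sensitive_personal_data,
        uc.could_harm_wellbeing,
        uc.automated_decision_making,
        uc.sector in HIGH_RISK_SECTORS,
    ])
```

For example, a use case in HR would be flagged even if no other criterion applies: `is_high_risk(UseCase(sector="HR"))` returns `True`, which in the handbook's process would then trigger review by the SAP Global AI Ethics organization.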
What Happens with High-Risk Use Cases?
The use case classification is checked by the SAP Global AI Ethics organization. If the organization agrees that the high-risk classification is correct, the SAP Global AI Ethics steering committee will review the case and recommend what, if any, further action needs to be taken.
Additional Information
Information about AI ethics is available at:
Guiding Principles That Resonate
Hear what guiding principles resonate the most with some of our in-house AI ethics experts:
“The guiding principle Safety and Security resonates with me because it covers everything that we need to take care of: AI security to ensure our systems are robust and work as designed and AI safety for protecting individuals, society, and the environment from harm done by AI systems. The guiding principle Transparency and Explainability resonates with me because it describes critical prerequisites to ensure human oversight – for humans on the loop like technical experts as well as humans in the loop such as business experts. Additionally, my cognitive scientist self is intrigued by the challenge to make AI output understandable for humans.”
– Bettina Laugwitz, Director, AI Ethics & Responsible AI
“The guiding principle Fairness and Non-Discrimination resonates with me because I believe this is currently the biggest gap in the development of AI and the reason why AI has the potential to harm human rights. Many AI scandals to date have been violations of this principle, including discrimination against women in finances and HR, to name but a few. AI cannot grow without the co-creation of, for example, minorities, the Global South, and women. The guiding principle Sustainability may be my biggest concern about AI, but it is also our biggest innovation possibility. Indigenous rights, co-creation, protection, and understanding how to protect fragile ecosystems parallel to the exploration and development of AI is crucial. SAP has the potential to explore how to become ‘green’ on this topic. This principle should be a priority for designing for future generations.”
– Camila Lombana Diaz, AI Ethics Expert and Researcher
“I am convinced the guiding principle Responsibility and Accountability gets to the heart of something very important: no matter how human AI appears to us, it cannot and should not be held morally accountable for its actions. AI is built and used by humans – and therefore responsibility and liability for all decisions and actions taken by AI must be assigned to human actors in order to ensure effective protection for those affected by AI. The guiding principle Fairness and Non-Discrimination in the development of AI makes a significant contribution to protecting human rights; it is difficult, however, to standardize processes to ensure fairness and many case-by-case decisions need to be made, which can be a challenge for those developing AI. Nevertheless, compliance with this principle is non-negotiable, which is why I am committed to supporting developers building fair AI.”
– Saskia Welsch, AI Ethics and Responsible AI Team Member
Alexa MacDonald is a senior editor for SAP News.