When we speak of ethics, most people think of the moral principles that govern the behavior of individuals or organizations. Ethics can seem highly abstract, a subject more useful to ancient philosophers than to modern business and technology leaders.

Yet ethics is an important issue for today’s decision-makers – one that must be handled with extreme care. The rampant growth of artificial intelligence and other intelligent technologies in businesses makes ethics more than a philosophical question. Ethics is now a form of risk management that savvy businesses cannot ignore.

Look no further than the news to see how ethical scandals and problems can become a heavy economic burden for affected companies. Enterprises such as Enron, Siemens, and Volkswagen learned the hard way that poor ethics can negatively affect the bottom line. Beyond the fines were reputational damage, lower sales, and, in some cases, bankruptcy.

By properly managing ethical risks, companies can improve their overall risk management. Like most operational issues of this magnitude, however, this is not a task that can be left to individuals alone. Although you might want “honest” employees in your organization, that’s not enough. Ethics and integrity must be hardwired into the organization itself. Enterprises must develop organizational guidelines and standards that lay out in fine detail which behaviors are desired and encouraged and which are unacceptable – or even grounds for dismissal and potential legal action.

Balanced Perspective

With intelligent technologies such as AI embedded in business processes, the types of ethical risk grow. We know from surveys and other measures of social sentiment that people are skeptical about the use of these technologies. They wonder whether algorithms reflect inherent biases, and they worry about issues such as data privacy, security, and hacking.

The technology itself is something of a black box to most people – even those who are highly technical. Our institute has seen many cases in which the results of AI systems are not intelligible even to their programmers. And most executives have little to no knowledge of AI ethics.

To address these knowledge gaps, experts have developed a number of high-level, abstract principles for AI ethics. Although these efforts are quite valuable, it’s time for enterprises to adopt more concrete guidance. We need to share the details of how these technologies work – and affect enterprise ethics – with engineers, programmers, and businesspeople.

In the research groups at our Institute for Ethics in Artificial Intelligence, we always include representatives from the technical side and the ethics or social science side to work together on creating tangible, actionable ethical guidelines for certain AI systems – such as those that might be used in the financial sector or healthcare.

Many executives have asked what they need to do to ensure that ethics are appropriately considered in their organizations. They want to know whether they should create new departments or positions responsible for ethics. There is no single answer that fits every enterprise.

Most importantly, enterprises must focus on integrating ethics into their AI development teams. After that, it may be helpful to have new people or new competencies that can work with developers to better understand the ethical impacts of technologies like AI.

Opportunities Ahead

As intelligent systems begin to deliver more detailed information faster than ever, work will change for employees. AI promises to accelerate many processes, even while processing larger volumes of data. With these new efficiencies, workers will spend less time assessing information and more time taking action.

The technology may lead to reductions in staffing for some corporate functions. For example, companies in industries such as insurance or banking may require fewer analysts. This does not mean, however, that there will come a time when no work is left for humans to do.

Instead, there will be an increased focus on the interfaces between workers and AI systems. Our institute has assigned a research group to work on this topic because we feel it will become increasingly important. Often AI systems do not sufficiently consider the preferences of workers who are collaborating with the technology. Developers must design intelligent technologies that can adapt more flexibly to what people want and need.

Communication interfaces present another challenge for AI and employee interaction. In the healthcare sector, for example, how should the technology communicate patient results in a way that supports the organization’s ethical standards? In HR, some companies are already using AI technologies to assess the résumés of job applicants and reduce the pool of people to be interviewed.

These applications present certain ethical risks – but there are also opportunities. For example, although we know that autonomous driving applications come with dangers, it’s not often acknowledged that they can be programmed to significantly reduce the number of accidents and the damage to people and property as compared with human drivers. That’s an ethical goal we can achieve through technology.

In healthcare, the introduction of new technologies can save lives and reduce suffering. Telemedicine and robotic surgeries are two examples of how the practice of medicine can be made more ethically positive. Humans make many errors because they rely on gut feelings or indulge in irrational behavior. In some areas, AI could equip us to make better decisions and take steps that would help people – if we embed the right ethical rules.

Ethical by Design

Technology that is ethically positive by design is an evolving concept. We need to build ethics into the code, and that’s doable. But there are other aspects that should be considered. In machine learning, for example, we need to consider how training data is selected.

There have been many examples of AI that was biased because a company trained it only on the data it had already collected. We can correct this by developing more sensitivity to the issue. Some applications may need additional data; others could be trained on some form of synthetic data. This will be a huge issue for companies in the future.
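
To make the data-selection point concrete, here is a minimal, purely illustrative Python sketch of a check a development team might run: auditing how groups are represented in a training set and naively rebalancing it. The record structure, the "group" attribute, and the duplication shortcut are all assumptions made for this example; a real project would use proper fairness tooling and genuinely new or synthetic data rather than duplicates.

    import random
    from collections import Counter

    def audit_representation(records, group_key="group"):
        """Count how often each group appears in the training data."""
        return Counter(r[group_key] for r in records)

    def rebalance(records, group_key="group", seed=0):
        """Naively rebalance by duplicating records from underrepresented
        groups until every group matches the largest one. Duplication is a
        crude stand-in for what the text recommends: collecting more data
        or generating synthetic training examples."""
        rng = random.Random(seed)
        counts = audit_representation(records, group_key)
        target = max(counts.values())
        balanced = list(records)
        for group, count in counts.items():
            pool = [r for r in records if r[group_key] == group]
            balanced.extend(rng.choice(pool) for _ in range(target - count))
        return balanced

    # Hypothetical resume-screening training set skewed toward one group.
    training = [{"group": "A"}] * 90 + [{"group": "B"}] * 10
    print(audit_representation(training))             # Counter({'A': 90, 'B': 10})
    print(audit_representation(rebalance(training)))  # both groups now at 90

Even a check this simple makes the skew visible before the model is trained, which is the sensitivity the paragraph above calls for.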

In addition, the responsibility for ensuring that information is accurate, current, and well-governed is shifting. In many parts of the world, laws hold individual drivers responsible when a vehicle crashes. That will have to change. If the vehicle is autonomous, the driver is no longer responsible – the vehicle manufacturer or software vendor is.

From the worker perspective, the technology and the data also must be trustworthy, which can happen only when the right interfaces are designed between humans and technology. When people are faced with a black box that simply delivers output, it does not engender the essential trust. Think of drivers who receive directions from a GPS but ignore them because they assume they know their roads better. As a general rule, the system is better informed. But you have to trust it first to realize the anticipated value.
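
One way to build that trust is for the interface to expose the evidence behind a recommendation rather than a bare instruction. The sketch below is a hypothetical illustration in Python – the function, the confidence heuristic, and the data source are all invented for the example – of a navigation system that reports its reasoning alongside its advice.

    from dataclasses import dataclass

    @dataclass
    class ExplainedResult:
        decision: str
        confidence: float   # rough score between 0.0 and 1.0
        reasons: list       # short, human-readable factors behind the decision

    def recommend_route(eta_suggested_min, eta_current_min, data_source):
        """Return a recommendation together with the evidence behind it,
        rather than a bare instruction the driver must take on faith."""
        saving = eta_current_min - eta_suggested_min
        decision = "take suggested route" if saving > 0 else "keep current route"
        confidence = min(1.0, abs(saving) / 10.0)  # crude: 10 min saved = full confidence
        reasons = [
            "ETA on suggested route: %d min vs. %d min on current route"
            % (eta_suggested_min, eta_current_min),
            "based on: " + data_source,
        ]
        return ExplainedResult(decision, confidence, reasons)

    result = recommend_route(22, 31, "live traffic feed, updated 2 min ago")
    print("%s (confidence %.0f%%)" % (result.decision, result.confidence * 100))
    for reason in result.reasons:
        print(" -", reason)

A driver who can see the estimated time saved and the freshness of the underlying data can check the system against local knowledge instead of simply accepting or ignoring it.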

Regulatory frameworks may help us make the transition to more ethically positive and humane intelligent technologies – but these frameworks will emerge on different timelines for different applications. For autonomous driving, there are already some well-crafted proposals for regulation. In other areas, such as financial services and healthcare, more work is needed before useful regulations can be created.

But we need to be careful not to put the technology at risk with regulations that are too strict or arrive too early. Ethics can help industries develop the right rules at the appropriate time. Groups such as the International Telecommunication Union, a specialized agency of the United Nations, are already working on international standards for AI, creating a foundation for building these new technologies.

To deliver the desired results, intelligent technologies need to consider the human implications of their use. As I like to tell my students, “AI cannot fly without ethics.” As business and technology leaders recognize this truth, they can begin to address ethical issues from the start of their technology initiatives – reducing the risk of later problems.


About Horizons by SAP

Horizons by SAP is a future-focused journal where forward thinkers in the global tech ecosystem share perspectives on how technologies and business trends will impact SAP customers in the future. The 2020 issue of Horizons by SAP focuses on Context-Aware IT, with contributors from SAP, Microsoft, Verizon, Mozilla, and more. To learn more, visit www.sap.com/horizons.

Read more Horizons by SAP stories on the SAP News Center.


Christoph Lütge is director of the Institute for Ethics in Artificial Intelligence at the Technical University of Munich.