
Artificial intelligence (AI) has become all-pervasive, influencing how we live, work and play. From the moment we wake up to the moment we go to sleep, we interact with AI systems: intelligent gadgets, social media platforms, AI-enabled communication channels, digital assistants, and the list goes on.

Organizations are embarking on transformational digital initiatives and racing ahead to adopt disruptive digital technologies to interact with their customers and stakeholders. AI-powered tools and platforms have become the software of choice for small businesses and big companies alike to connect with customers. Customer relationship management tools such as chatbots, email bots, AI assistants, intelligent self-service platforms and digital service agents are now often a customer’s first point of contact, shaping the customer experience thereafter.

AI Ethics: The New Shiny Penny in Business

There is no doubt that AI and other new digital technologies can generate lasting value for companies. However, they also raise trust issues about how the technology is deployed and used, underscoring the need for ethics in AI. In a highly competitive marketplace, digital ethics has become a key differentiator, as important to achieving business objectives as delivering outstanding products and services. Adherence to digital ethics is no longer optional.

Ethics is a well-defined system of principles that influences our choices between right and wrong and defines what is morally good or bad. Ethics is a human characteristic; AI and machines cannot be expected to exhibit empathy or apply ethics, nor can they learn it over time. Ethics therefore has to be incorporated into AI-based systems at the development stage.

Dangers That Lurk in the Lack of AI Ethics

AI-based systems are intrinsically data-driven, and issues related to the accuracy, privacy, security and bias associated with the use of data have popped up from time to time. However, formulating standards to use AI ethically is not an easy task. In a world that is becoming increasingly globalized and operating online, countries and organizations are grappling with the intricacies of developing ethical guardrails for AI.

Industries across the spectrum are affected by the lack of codified standards for AI ethics. Some sectors where implementing AI ethics has become essential are discussed below.

3 Sectors for Implementing AI Ethics

For insurance and financial services

The financial services industry is one among many facing the ethical concerns raised by AI. Insurers, banks, fintechs and other financial institutions are automating services with AI. In areas such as wealth advisory, risk management, fraud detection and credit rating, AI is supporting or even stepping into the shoes of human decision-makers. Behavioral science techniques woven into AI algorithms have proved successful in changing customer attitudes towards money, nudging individuals to focus on savings, track their spending patterns and work on financial planning. Ethical concerns arise, however, when AI-powered nudges become tools to manipulate behavior.

AI speeds up claim and application processing, saves money and supports timely fraud detection. However, the risk of bias in the AI algorithms making these assessments is very high. Whether in AI software for small businesses or large organizations, there is a lack of interpretability and transparency in how an algorithm arrives at a decision, making it difficult to identify biased or discriminatory behavior. The bias may also be unintended, merely reflecting prejudice already present in the social system: the machine does not understand or try to remove biases, it just optimizes the model. And the data fed into an AI program is often not a representative sample; when a dataset contains few examples from certain minority segments, the algorithm tends to make sweeping generalizations about them.

AI-based business analytics services can make assumptions about the risk profiles, habits and lives of people. Applicants may be charged excessive premiums after being categorized as high-risk, or denied loans because of low credit scores. Such biases result in discrimination against ethnic, gender and racial minorities.
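To make the idea of disparate impact concrete, the sketch below shows one simple check a team could run over logged decisions: compare approval rates across groups and flag any group whose rate falls below four-fifths of the highest, echoing the “four-fifths rule” used in US employment-discrimination guidance. The decision data and group names are hypothetical assumptions, purely for illustration.

    # A disparate-impact check over hypothetical (group, approved) decision logs.
    # Group names, decisions and the 0.8 threshold are illustrative assumptions.
    from collections import defaultdict

    decisions = [
        ("group_a", True), ("group_a", True), ("group_a", True), ("group_a", False),
        ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
    ]

    totals, approvals = defaultdict(int), defaultdict(int)
    for group, approved in decisions:
        totals[group] += 1
        approvals[group] += int(approved)

    rates = {g: approvals[g] / totals[g] for g in totals}
    best = max(rates.values())
    for group, rate in rates.items():
        if rate < 0.8 * best:  # the "four-fifths" screening threshold
            print(f"potential disparate impact: {group} approved at {rate:.0%} "
                  f"vs. best group at {best:.0%}")

A check like this does not prove an algorithm is fair, but it is cheap to run on every model release and can surface the kind of skew described above before customers are harmed.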

For healthcare services

The use of AI in healthcare is relatively new but has already revolutionized the field. From diagnostics and imaging to symptom-assessment apps and hospital workflow management, AI is used in a wide range of clinical and operational applications. Explosive growth in healthcare AI is predicted in the years to come; with that growth come dangers and challenges in using AI ethically.

AI chatbots and health apps provide a range of services, collecting and analyzing data through wearable sensors. This raises ethical questions about user agreements. Unlike the traditional informed-consent process, a user agreement is signed without any face-to-face dialogue, and most people routinely ignore such agreements rather than take the time to understand them. Frequent software updates also make it difficult for individuals to keep track of terms they once agreed to. Information from AI chatbots or health apps is sometimes fed back into clinical decision-making without the user’s knowledge.

Another big challenge for AI in healthcare is transparency, which is essential to patient confidence and to trust between clinicians and patients. In AI programs that use genotype- and phenotype-related information, biases can result in false diagnoses and ineffective treatments that jeopardize patient safety. An AI algorithm trained mainly on data from Caucasian patients, with only limited data from African-American patients, can give inaccurate diagnoses or treatment recommendations for African-American populations. Data sharing can also happen outside the patient-doctor relationship, for example for clinical safety testing of health apps or with friends and family members, yet patients are often not clearly informed about how their data is processed or shared.
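One basic control against this failure mode is to audit a model’s accuracy separately for each patient subgroup before deployment. The minimal sketch below assumes hypothetical predictions, labels and cohort names; in practice these would come from a held-out test set annotated with demographic information.

    # A per-subgroup accuracy audit for a hypothetical diagnostic model.
    # Cohort names, predictions and labels are made-up stand-ins for a real
    # held-out test set annotated with patient demographics.
    from collections import defaultdict

    records = [  # (cohort, predicted_label, true_label)
        ("cohort_a", 1, 1), ("cohort_a", 0, 0), ("cohort_a", 1, 1), ("cohort_a", 0, 0),
        ("cohort_b", 1, 0), ("cohort_b", 0, 1), ("cohort_b", 1, 1),
    ]

    correct, total = defaultdict(int), defaultdict(int)
    for cohort, predicted, truth in records:
        total[cohort] += 1
        correct[cohort] += int(predicted == truth)

    for cohort in total:
        print(f"{cohort}: accuracy {correct[cohort] / total[cohort]:.0%} "
              f"on {total[cohort]} cases")

A large accuracy gap between cohorts, especially where one cohort’s sample is small, is a signal to collect more representative data before the model reaches clinical use.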

For military applications

Countries worldwide invest millions of dollars in the research and development of modern technology for military applications, and increasingly intelligent and autonomous AI has become a favored choice for many. AI-equipped autonomous weapons have changed the theatre of war. Strategists across the world are debating what a weapon should be allowed to do on its own and who is accountable for what it does on its own. Because of the ethical issues involved, the development and deployment of AI-enabled weapons demand greater oversight, responsibility and judgment than other systems.

An accidental strike or a small skirmish can escalate into a full-fledged conflict when autonomous weapons with pre-programmed goals make decisions on their own. Malicious manipulation of the AI systems in autonomous weapons can trigger a cascade of unintended actions and cause large-scale harm on the battlefield. The inhumanness of war is amplified when states that believe in using AI weapons unleash them against an adversary who does not use them and is ill-prepared for their impact. A limited understanding of what AI weapons are capable of, and the inability to recall a system once triggered, only compound the harm. AI systems cannot treat opponents with dignity, analyze context to distinguish combatants from non-combatants, or recognize signs of surrender. Ethical regulations can restrict the use of fully autonomous AI weapons and reduce the incidence of immoral violence.

Tech companies partnering with a country’s military are governed by codes of ethics shaped largely by the moral biases of their employees. Terrorists, geopolitical enemies, rogue nations and others can whip up patriotic fervor and impair a developer’s moral compass. There is significant concern about the dangers of AI-fueled arms races among rogue nations and the amassing of state-of-the-art AI weapons by dictators and terrorists.

Conclusion

The concerns discussed above are relevant to all industries. AI implementation in software for small businesses and large organizations alike should take legal and ethical implications into consideration. AI tools are typically developed by large private technology companies or through public-private partnerships; although these companies have the resources to build the tools, they are not incentivized to adopt ethical frameworks when designing them. Even if regulations are crafted to fight bias in AI, they will have to be enforced alongside other AI-based systems that act as watchdogs, and the role technology will play in enforcing these regulations has to be studied and monitored.

AI is ethically neutral; it is the human beings who develop AI systems who feed their individual biases and opinions into the machines. Developers should be transparent about the data used and its shortcomings. Algorithmic accountability should be the central tenet around which the development of an AI program or system revolves, with sufficient controls in place to ensure that the algorithm performs as expected.
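As one illustration of such a control, the minimal sketch below wraps a decision function so that every input and output is appended to an audit log, giving reviewers a trail against which the algorithm’s behavior can be checked after the fact. The function names, log fields and scoring rule are assumptions made up for this example, not any particular system’s API.

    # An audit-log wrapper: one possible control for algorithmic accountability.
    # The decision rule, field names and log path are assumptions for illustration.
    import json
    import time

    def audited(decide, log_path="decisions.log"):
        """Wrap a decision function so every call is recorded for later review."""
        def wrapper(features):
            decision = decide(features)
            entry = {"ts": time.time(), "features": features, "decision": decision}
            with open(log_path, "a") as log:
                log.write(json.dumps(entry) + "\n")
            return decision
        return wrapper

    @audited
    def approve_loan(features):
        # Hypothetical scoring rule standing in for a real model.
        return features.get("score", 0) >= 600

    print(approve_loan({"applicant_id": "A-17", "score": 640}))  # True, and logged

Logged decisions of this kind are what make after-the-fact audits, including the subgroup checks sketched earlier, possible at all.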

Ethical and legal concerns should be tackled early in the development lifecycle. Incorporating ethical analysis into the development program could reshape processes and create new costs; however, developers and decision-makers must recognize that this investment will yield tangible and rich benefits in the future.
