What is ethical AI and why it matters

Discover how ethics shape the future of AI, from addressing biases to ensuring transparency, and how stakeholders can foster a thoughtful, inclusive, and ethical AI landscape.

Monika Karlović

Content Marketing Specialist

Artificial intelligence (AI) is no longer just a buzzword, but a standard that businesses worldwide are looking to adopt to improve processes, communication, content generation, and more.

But with great power comes great responsibility, and the ethical considerations around using AI are certainly great. Let’s break down all you need to know about ethics in AI development and how it can help stakeholders safeguard basic human values and the reputation of businesses that use AI technology.


What is ethical AI?

Ethical AI means taking a safe, responsible approach to developing AI. The goal of all stakeholders, from engineers to governments and CEOs, should be to prioritize the respectful use of data and to create AI systems that are transparent and guided by something of a moral compass. Essentially, ethical AI aims to mitigate the risk of unethical outcomes and prevent harm to human beings.

Ethical AI definition:

Ethical AI is a set of moral principles and methods designed to guide the responsible development and use of AI technology. Businesses are now creating AI codes of ethics to mitigate the risk of any ethical issues with the technology. 

In the past, ethical considerations were mostly associated with academia, government, non-profit organizations, and healthcare. But today, businesses handle massive amounts of personal data and are creating innovative AI solutions that remove the need for human involvement. This puts businesses and corporations under the ethical microscope.

Most countries have laws around the use of AI, but because the technology is constantly changing and being developed, it often falls to businesses to set up a code of conduct or principles for developing AI. Large corporations like Microsoft, Google, and Meta have teams in place that monitor and oversee the ethics of any AI being developed.

What is an AI code of conduct?

An AI code of conduct is designed to help businesses follow moral guidelines when developing AI technology, or even when implementing it into their business. A major development came when the G7 created a voluntary code of conduct for businesses worldwide to follow. These guidelines are meant to promote safe and trustworthy AI and to help hold businesses accountable for what they develop.

Most AI codes of conduct include principles like:

  1. Fairness: removing biases and treating everyone equally 
  2. Transparency: clear communication on how systems are built and developed 
  3. Privacy and security: respecting customer and user data and privacy 
  4. Accountability: ensuring brands are liable for the AI they develop and integrate 

An AI code of conduct helps businesses retain their reputation and integrity as it demonstrates to the public that the future of AI does not have to be a scary or unclear one.

Why is ethical AI important?

Ethical AI is important because we are developing a technology that is meant to replicate, and in many situations replace, human intelligence, and that is increasingly present in day-to-day life.

When we are talking about new technology that influences decision making, day-to-day interactions, and the production of content, there always needs to be a conversation around ethics. How can we ensure artificial intelligence is not going to harm users and society? 

Brands developing and implementing AI need to hold their technology to a certain ethical standard to ensure users and their data remain safe and protected.

Ethical AI stakeholders and their influence

  1. AI researchers influence the creation of ethical AI since they design and develop AI algorithms and should integrate ethical considerations into their research. They can help by publishing guidelines, advocating for ethical and responsible uses of AI, releasing open-source code for transparency, and educating other stakeholders on the purpose and ethical uses of AI.
  2. Product engineers play an important role in creating ethical AI products. They need to take ethics into consideration when designing and developing AI products. They can implement safeguards against biases, adhere to company AI codes of conduct, and prioritize user privacy.
  3. Governments influence ethical AI by establishing regulations and policies that guide the development and deployment of AI technologies. They can create legal frameworks addressing issues like bias, privacy, and accountability. Government agencies may also support research and initiatives promoting ethical AI practices.  
  4. Private companies should create codes of conduct to address ethical concerns, be as transparent as possible about the use and development of their technology, and ensure user data remains secure and private. By integrating ethical principles into their operations, private companies contribute to the broader effort of ensuring AI technologies are developed and used ethically.

To get a better idea of how stakeholders can help guide businesses toward ethical AI, check out this round-table discussion between partners who all contribute to the development and use of generative AI in the real world.

Featuring:

  • Josh Diner, Group Product Marketing Manager at Infobip
  • James Brown, Strategic Partnership Development Manager at Infobip
  • David Fernandez Vinuales, Head of Strategic Partnerships at Google Messages RCS
  • Scott Vaughan, CMO at Vaughan GTM Advisory
  • Dmitry Gritsenko, CEO & Founder of Master of Code Global

3 examples of unethical AI 

1. Unethical AI in recruiting

Back in 2014, Amazon used an AI tool to help with recruiting new employees. The AI was trained on resumes from its software department, which was made up mostly of male employees. This taught the AI to exclude resumes that mentioned women’s schools, clubs, or other affiliations, essentially training it to remove women from the hiring pool.

The intention was innocent: to use an AI tool to speed up the hiring process for a major corporation that receives thousands of applications regularly. But AI is only as smart as humans train it to be, and human bias can seep through, even unintentionally.

Companies building these AI tools need to be conscious of such biases and develop and train the technology in ways that lower the margin of error in these situations. Companies using AI should also consider keeping a human in the loop to monitor the model’s outputs.
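What might such a human-in-the-loop check look like in practice? Below is a minimal Python sketch of a demographic-parity audit that a reviewer could run over a screening model’s outputs. The predictions, group labels, and the 0.8 rule of thumb are purely illustrative assumptions, not Amazon’s actual system or data.

```python
# Minimal fairness audit: compare selection rates across demographic
# groups (demographic parity). Data and labels are hypothetical.

from collections import defaultdict

def selection_rates(predictions, groups):
    """Fraction of candidates marked 'advance' (1) per group."""
    totals, advanced = defaultdict(int), defaultdict(int)
    for pred, group in zip(predictions, groups):
        totals[group] += 1
        advanced[group] += int(pred == 1)
    return {g: advanced[g] / totals[g] for g in totals}

def disparate_impact(rates, reference_group):
    """Ratio of each group's selection rate to the reference group's.
    A common rule of thumb flags ratios below 0.8 for human review."""
    ref = rates[reference_group]
    return {g: rate / ref for g, rate in rates.items()}

# Hypothetical screening outputs: 1 = advance to interview, 0 = reject
predictions = [1, 0, 1, 1, 0, 0, 1, 0]
groups = ["m", "m", "m", "m", "f", "f", "f", "f"]

rates = selection_rates(predictions, groups)
print(rates)                         # {'m': 0.75, 'f': 0.25}
print(disparate_impact(rates, "m"))  # {'m': 1.0, 'f': 0.333...}
```

A ratio this far below 0.8 would be a signal to pause the tool and retrain it on more balanced data rather than let it keep screening candidates.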

2. Unethical AI in academics

The launch of ChatGPT quickly caught on among students across the world. Suddenly there was a way to generate unique essays or written pieces that sped up academic projects. This sparked serious controversy over academic integrity and plagiarism. How could students effectively learn and earn their degrees without putting in any actual effort? What’s more, ChatGPT is not always accurate: it is known to hallucinate and produce false information, and most students don’t know any better.

Ironically, teachers and professors have started to combat the use of GenAI in academics with more AI. A professor from the University of Texas asked ChatGPT to tell him whether his students’ essays were AI-generated, not realizing that this is not something ChatGPT can do. The tool claimed it had generated every essay, leading the professor to fail his entire class. An investigation later found that only two students had used ChatGPT on their essays. Some students were even denied their diplomas during this time.

There is a serious ethical problem with using AI in academics. As a society, we can mostly agree that using AI to complete coursework is ethically wrong and strips the integrity from a student’s work. And using AI you don’t properly understand to determine the fate of students is just as serious an ethical concern.

Human involvement in academics is essential. After all, the entire point of academics is to further the development of human beings and society. What will the world look like for a generation of students who depend on generative AI to think for them? It becomes the responsibility of universities and schools to control, monitor, and discourage the use of AI for ethical reasons.

3. Unethical AI in policing and surveillance

AI facial recognition is being widely adopted by police and government agencies in the United States and around the world. But the accuracy of this technology is being called into question after multiple people were falsely identified and arrested for crimes they did not commit. These systems carry biases: they often fail to accurately identify Black individuals because most are trained on images of people with lighter complexions, meaning discriminatory biases risk deepening if police continue using this technology.

AI facial recognition is also in use by governments elsewhere: France will deploy AI surveillance during the 2024 Olympics, and China uses face tracking to identify protesters. Clearview AI, a facial recognition company, has been banned in many EU countries because it scrapes images from the internet and social media without individuals’ consent, and it has also been barred from selling facial data to private US companies.

The ethics of using AI in policing are a serious concern, as any faults in the technology can drastically impact the lives of innocent individuals, deepen racial biases, and make the general population feel like they are under constant surveillance. Activist groups and many members of government are fighting for bans on AI in policing because of these inaccuracies and broader privacy concerns.

3 examples of successful and ethical AI

LAQO: Ethical AI for insurance

LAQO is Croatia’s first fully digital insurance company, offering 24/7 support to its customers. Using Microsoft Azure OpenAI Service and Infobip’s chatbot-building platform Answers, LAQO launched a generative AI assistant.

The chatbot is deliberately scoped to FAQs about LAQO and insurance claims, which helps avoid misleading information and ethical concerns. With careful thought and consideration, we were able to build a successful and effective AI chatbot with a tiny margin for hallucinations.
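Infobip hasn’t published the assistant’s internals, but a common pattern for keeping hallucinations low is to ground the model exclusively in approved content and instruct it to refuse anything outside that scope. Here is a minimal sketch assuming the `openai` Python client; the FAQ text, model name, and prompt are illustrative assumptions, not LAQO’s actual implementation.

```python
# Sketch: ground a support assistant in vetted FAQ content only.
# Assumes the `openai` Python client; FAQ entries and the model
# name are illustrative, not LAQO's production setup.

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

FAQ = [
    "Claims can be filed in the app under 'My policies' within 30 days.",
    "Roadside assistance is included in every comprehensive policy.",
]

SYSTEM_PROMPT = (
    "You are an insurance support assistant. Answer ONLY from the "
    "reference text provided. If the answer is not in the reference, "
    "say you don't know and offer to connect the customer to an agent."
)

def answer(question: str) -> str:
    reference = "\n".join(FAQ)
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        temperature=0,        # low temperature reduces creative drift
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user",
             "content": f"Reference:\n{reference}\n\nQuestion: {question}"},
        ],
    )
    return response.choices[0].message.content

print(answer("How do I file a claim?"))
```

Narrow scoping like this, plus an explicit refusal path, is what keeps the margin for hallucination small.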

The results:

  • 30% of queries are handled by the AI assistant
  • 95% of queries are resolved in 3-5 messages

Megi: Ethical healthcare chatbot

Megi Health is a digital healthcare solution that wanted to optimize its patient journey. The team decided to integrate a generative AI assistant built on Answers. Any time AI and healthcare are mentioned in the same sentence, there need to be extensive discussions about the ethics of those interactions.

To mitigate any ethical risks, the teams decided to focus on four primary use cases for patients suffering from high blood pressure:

  • Record and control blood pressure
  • Track symptoms
  • Patient education
  • Connect with a doctor

Each use case was carefully crafted to improve the patient’s experience and ensure their medical needs were ethically met by balancing AI interactions with human ones.   
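The case study doesn’t detail how that balance was implemented, but one simple way to enforce it is routing logic that keeps the bot strictly inside the approved use cases and escalates everything else to a clinician. The Python sketch below is a hypothetical illustration; the intent names and the 180/120 crisis threshold are assumptions for the example, not Megi’s actual logic.

```python
# Sketch of human-in-the-loop routing: the bot handles only approved
# use cases and escalates everything else (and any dangerous reading)
# to a clinician. Intents and thresholds are illustrative.

APPROVED_INTENTS = {"record_bp", "track_symptoms",
                    "patient_education", "contact_doctor"}

def route(intent: str, systolic: int = 0, diastolic: int = 0) -> str:
    # Escalate immediately on readings in hypertensive-crisis range.
    if systolic >= 180 or diastolic >= 120:
        return "escalate_to_doctor"
    if intent == "contact_doctor" or intent not in APPROVED_INTENTS:
        return "escalate_to_doctor"  # never let the bot improvise
    return "handle_in_bot"

print(route("record_bp", systolic=130, diastolic=85))   # handle_in_bot
print(route("record_bp", systolic=185, diastolic=125))  # escalate_to_doctor
print(route("medication_advice"))                       # escalate_to_doctor
```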

The results:

  • 86% CSAT score
  • 65% reduction in time to collect data for diagnosis

Coolinarika: Ethical nutritional assistant

Coolinarika by Podravka is a massively popular culinary platform in Croatia. The team wanted to create more conversational experiences for users, positively impact the community by offering nutritional education, and inspire the creation of healthy recipes.

To ensure there were no ethical issues around suggesting recipes and nutritional information to users, Podravka and Infobip worked with leading nutritionists in Croatia to create quick tips and recommendations for simple, high-quality recipes. Additionally, the AI experts at Infobip programmed the chatbot to provide relevant nutritional recipe recommendations to users with 100% accuracy.
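One way to achieve that kind of accuracy is to never generate nutritional claims at all, and instead select from a database of nutritionist-vetted recipes. The Python sketch below illustrates that pattern; the recipes and tags are invented for the example, not Coolinarika’s actual data or implementation.

```python
# Sketch: recommend only from a nutritionist-vetted recipe list, so
# the bot can never invent nutritional claims. Recipes and tags are
# invented for illustration, not Coolinarika's dataset.

VETTED_RECIPES = [
    {"name": "Lentil stew", "tags": {"high-protein", "vegetarian", "dinner"}},
    {"name": "Oat porridge", "tags": {"high-fiber", "breakfast"}},
    {"name": "Grilled trout", "tags": {"high-protein", "low-carb", "dinner"}},
]

def recommend(requested_tags: set, limit: int = 3) -> list:
    """Return vetted recipes ranked by how many requested tags match."""
    scored = [(len(r["tags"] & requested_tags), r["name"])
              for r in VETTED_RECIPES]
    # Keep only recipes matching at least one tag; never fall back
    # to generated text when nothing matches.
    scored = [s for s in scored if s[0] > 0]
    scored.sort(reverse=True)
    return [name for _, name in scored[:limit]]

print(recommend({"high-protein", "dinner"}))
# ['Lentil stew', 'Grilled trout']
```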

Creating an ethical future for AI

There is a future where the ethical risks of AI can be controlled and mitigated, if everyone plays their part. Stakeholders and policymakers have a major role in setting guidelines, codes of conduct, and standards for ethical AI development that safeguard user data and privacy.

We at Infobip take great pride in our role in creating ethical AI solutions for customers and in contributing to a future where AI helps society and upholds basic human values.

Read on to see how you can implement generative AI use cases in an effective and ethical way.  

Deep dive into ethical uses of conversational GenAI and how to implement them

GenAI without the risk


Jan 24th, 2024
10 min read
