Ethical AI: How brands can develop responsible AI solutions 

Discover how ethics shape the future of AI, from addressing biases to ensuring transparency, and how stakeholders can foster a thoughtful, inclusive, and ethical AI landscape.

Monika Karlović

Content Marketing Specialist

Artificial intelligence (AI) is no longer just a buzzword, but the standard that businesses worldwide are looking to adopt to improve processes, communication, content generation, and more.

But with great power comes great responsibility, and the ethical considerations around using AI are certainly great. Let’s break down all you need to know about ethics in AI and how it can help stakeholders safeguard basic human values and the reputation of businesses that use AI technology.

What is ethical AI?

Ethical AI is a set of moral principles and methods designed to guide the responsible development and use of AI technology. Businesses are now creating AI codes of ethics to mitigate the risk of any ethical issues with the technology.  

Ethical AI involves taking a safe approach to developing AI. The goal of all stakeholders, from engineers to governments and CEOs, should be to prioritize the respectful use of data and to create AI systems that are transparent and guided by something of a moral compass. Essentially, ethical AI aims to mitigate the risk of unethical situations and prevent harm to human beings.

In the past, ethical considerations were mostly associated with academia, government, non-profit organizations, and healthcare. But today, businesses deal with massive amounts of personal data and are creating innovative AI solutions that remove the need for human involvement.

This puts businesses and corporations under the ethical microscope, where they need to consider fairness, explainability, interpretability, transparency, sustainability, and concerns about manipulating stakeholder autonomy with their AI solutions. And it does not end there. Ethical questions arise, like:

  • Is it infringing on privacy?
  • Will it interfere with interpersonal dialogues or social cohesion?
  • How do you measure the reliability, robustness, lawfulness, and safety of the AI model you’ve created?

In many countries there are laws around the use of AI, but because AI is constantly changing and being developed, it tends to be the responsibility of businesses to set up a code of conduct or principles for developing AI. Large corporations like Microsoft, Google, and Meta have teams in place that monitor and oversee the ethics around any AI being developed.

What is an ethical AI code of conduct? 

We know different governments have regulations, but there’s a pressing need for organizations to implement their own regulations and frameworks to minimize unethical AI solutions. Comprehensive regulations under data protection law don’t match the speed of AI development, so we need our own systems to keep us in check as we progress further into AI development.

Why should we be concerned about ethical conversational AI? If not regulated and designed ethically, it can perpetuate harmful bias, spread misinformation, violate privacy, cause security breaches, and harm the environment—impacting individuals and communities personally and professionally. 

An AI code of conduct is designed to help businesses follow moral guidelines when developing AI technology, or even implementing it into their business. A major development was when the G7 created a voluntary code of conduct for businesses to follow worldwide. These guidelines are meant to promote safe and trustworthy AI and help hold businesses accountable for what they develop.  

Most AI codes of conduct include principles like:

  • Fairness: removing biases and treating everyone equally
  • Transparency: clear communication on how systems are built and developed
  • Privacy and security: respecting customer and user data and privacy
  • Accountability: ensuring brands are liable for the AI they develop and integrate

An AI code of conduct helps businesses retain their reputation and integrity as it demonstrates to the public that the future of AI does not have to be a scary or unclear one. 

Stakeholders that influence the ethics of AI 

  1. AI researchers influence the creation of ethical AI since they design and develop AI algorithms and should be integrating ethical considerations into their research. They can help by publishing guidelines, advocating for ethical and responsible uses of AI, releasing open-source code for transparency, and educating other stakeholders on the purpose and ethical uses of AI.
  2. Product engineers play an important role in creating ethical AI products. They need to take ethics into consideration when designing and developing AI products. They can implement safeguards against biases, adhere to company AI codes of conduct, and prioritize user privacy.
  3. Governments influence ethical AI by establishing regulations and policies that guide the development and deployment of AI technologies. They can create legal frameworks addressing issues like bias, privacy, and accountability. Government agencies may also support research and initiatives promoting ethical AI practices.
  4. Private companies should be creating codes of conduct to address ethical concerns, be as transparent as possible on the use and development of technology and ensure user data remains secure and private. By integrating ethical principles into their operations, private companies contribute to the broader effort of ensuring AI technologies are developed and used ethically.

To get a better idea of how stakeholders can help guide businesses to use ethical AI, check out this roundtable discussion between partners who all contribute to the development and utilization of generative AI in the real world.

Featuring: 

  • Josh Diner, Group Product Marketing Manager at Infobip
  • James Brown, Strategic Partnership Development Manager at Infobip
  • David Fernandez Vinuales, Head of Strategic Partnerships at Google Messages RCS
  • Scott Vaughan, CMO at Vaughan GTM Advisory
  • Dmitry Gritsenko, CEO & Founder of Master of Code Global

Key steps to take when building ethical AI solutions

Ethical AI is important because we are developing a technology that is meant to replicate, and in many situations replace, human intelligence, and that is increasingly present in our everyday lives.

When we are talking about new technology that influences decision making, day-to-day interactions, and the production of content, there always needs to be a conversation around ethics. How can we ensure artificial intelligence is not going to harm users and society?  

1. Understand the industry and purpose the solution is designed for 

Each industry raises different ethical considerations around factors like:

  • Religion
  • Sex and gender
  • Education
  • Socioeconomic backgrounds
  • Geographical locations
  • Cultural backgrounds
  • Language proficiency levels
  • Physical abilities
  • Technological access

For example, an industry with a high risk of developing unethical AI is the justice system. AI solutions could be used for criminal profiling, sentencing recommendations, and legal advice and assistance, but they can develop biases from their training data, which could result in unethical outcomes that drastically affect someone’s life.

In retail, on the other hand, an AI solution can be used as a customer service chatbot or personal shopping assistant, or to help generate personalized recommendations for marketing campaigns. If the AI is poorly trained and generates an unethical response, it will most likely affect the reputation of your brand but have little to no impact on your customer’s life.

2. Address biases  

Like humans, AI tends to put people into boxes, attaching positive and negative labels learned from the data to different groups. To correct this sort of group bias, researchers can force the model to ignore attributes like race, class, age, and gender. It’s similar to how some orchestras now have musicians audition behind a curtain to maintain a race- and gender-blind selection process.
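
To make that concrete, here is a minimal sketch of attribute blinding in Python. Everything in it is illustrative: the applicants.csv file, the column names, and the model choice are assumptions, and in real projects blinding alone is not enough, since other columns can act as proxies for the dropped attributes.

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Hypothetical dataset; the file and column names are illustrative only
df = pd.read_csv("applicants.csv")

# "Blind" the model by dropping protected attributes before training,
# much like auditioning musicians behind a curtain
PROTECTED = ["race", "gender", "age"]
X = df.drop(columns=PROTECTED + ["hired"])
y = df["hired"]

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("Held-out accuracy:", model.score(X_test, y_test))

# Caveat: blinding alone is not enough if remaining columns (e.g. zip
# code) act as proxies for the attributes that were dropped
```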

3. Set fair and measurable KPIs 

To further mitigate the risk of unethical AI, brands should set measurable KPIs from the get-go. This helps measure success in terms of both the technical fairness of the AI solution and the social context it influences.

For example, say a fintech brand launches an automated loan-lending AI solution. It should measure the conversion rate to approved loans as well as how many loans are declined. At the same time, it needs to check that loans are approved at comparable rates across different groups, evaluate any facial recognition for accuracy across demographics, and verify that individuals with similar backgrounds are treated uniformly. Only then can it determine whether the solution is fair both technically and in its social context.
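
As a hedged illustration, here is what a simple per-group approval-rate KPI could look like. The toy decisions log, group labels, and the 10% tolerance below are all assumptions made for the sake of the example, not a compliance standard:

```python
import pandas as pd

# Toy decisions log; group labels and field names are assumptions
decisions = pd.DataFrame({
    "group":    ["A", "A", "A", "B", "B", "B"],
    "approved": [1,   1,   0,   1,   0,   0],
})

# KPI 1: overall conversion rate to approved loans
overall_rate = decisions["approved"].mean()

# KPI 2: approval rate per group (a simple demographic-parity check)
group_rates = decisions.groupby("group")["approved"].mean()

# Flag for review when a group's rate strays too far from the overall rate
THRESHOLD = 0.10  # assumed tolerance; agree on this with compliance teams
flagged = (group_rates - overall_rate).abs() > THRESHOLD

print(f"Overall approval rate: {overall_rate:.0%}")
print(group_rates)
print("Groups needing review:", list(group_rates[flagged].index))
```

Tracking both numbers matters: a healthy overall conversion rate can hide a large gap between groups, which is exactly the kind of social-context failure a purely technical KPI misses.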

4. Work with AI experts 

Creating an ethical AI solution is no easy task. Partnering with a solution provider that knows the ins and outs of ethical AI can make your project go smoothly. Infobip has extensive experience building custom AI solutions for brands in any industry. How do we do it?

  • CX team: Customer experience is at the heart of every solution. Our CX experts can help your brand pinpoint what your solution can do for customers and design an AI solution that will reach your goals
  • AI experts: The brains behind the creation and success of the solution, our AI experts help minimize issues like hallucinations and ensure an accurate and ethical response is generated each time
  • Partnerships: We have strong working relationships with other solution providers, so we can collaborate and draw on the expertise of our partners to help develop and launch your solution

Working with CX and AI professionals will help ensure that your AI solution remains ethical and compliant with local laws and regulations, keeping the reputation of your brand strong and reliable.

3 examples of unethical AI 

1. Unethical AI in recruiting

Back in 2014, Amazon used an AI tool to help recruit new employees. The AI was trained on the resumes of its software department, which was made up mostly of male employees. This taught the AI to exclude resumes that included women’s schools, clubs, or other affiliations – essentially training it to remove women from the hiring pool.

The intention was innocent: to use an AI tool to speed up the hiring process for a major corporation that has thousands of applications coming in regularly. But AI is only as smart as humans train it to be, and sometimes human bias can seep through, even without intention.

Companies building these AI tools need to be conscious of such biases and train the technology to lower the margin of error for these types of situations. Companies using AI should also consider always keeping a human-in-the-loop to monitor the outputs of AI.
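
A human-in-the-loop setup can be as simple as refusing to act automatically on low-confidence outputs. Here is a minimal sketch of that pattern; score_resume and notify_reviewer are hypothetical stand-ins for a real screening model and review queue, and the thresholds are assumptions:

```python
import random

CONFIDENCE_FLOOR = 0.85  # assumed threshold; tune against audit data

def score_resume(resume_text: str) -> tuple[float, float]:
    """Stand-in for a real screening model: returns (score, confidence)."""
    return random.random(), random.random()

def notify_reviewer(resume_text: str, score: float) -> None:
    """Stand-in for a real review queue (ticket, dashboard, email...)."""
    print(f"Queued for human review (model score {score:.2f})")

def screen_application(resume_text: str) -> str:
    score, confidence = score_resume(resume_text)
    if confidence < CONFIDENCE_FLOOR:
        notify_reviewer(resume_text, score)  # a human makes the final call
        return "pending_human_review"
    return "shortlist" if score >= 0.5 else "reject"

print(screen_application("10 years of experience in..."))
```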

2. Unethical AI in academics

The launch of ChatGPT quickly made it popular among students across the world. There was suddenly a way to generate unique essays or written pieces that would speed up the time it takes to complete academic projects. This sparked serious controversy over academic integrity and plagiarism. How could students effectively learn and earn their degrees without putting in any actual effort? What’s more, ChatGPT is not always accurate; it is known for hallucinating and spewing false information – and most students don’t know any better.

Ironically, teachers and professors have started to combat the use of GenAI in academics with more AI. A professor at Texas A&M University–Commerce used ChatGPT to tell him if any essays were AI-generated – not knowing this is not a feature of ChatGPT. The AI tool claimed it had generated every essay, leading the professor to fail his entire class. An investigation later found that only two students had used ChatGPT to create their essays. Some students were even denied their diplomas during this time.

There is a serious ethical problem in using AI in academics. As a society, we can mostly agree that using AI to complete work is ethically wrong and removes integrity from a student’s work. And using AI to determine the fate of students without properly understanding how it works is also a serious ethical concern.

Human involvement in academics is essential. Besides, the entire point of academics is to better the development of human beings and society. What will the world look like for a generation of students who are dependent on generative AI to think for them? It becomes the responsibility of the universities and schools to control, monitor, and discourage the use of AI for ethical reasons. 

3. Unethical AI in policing and surveillance

AI facial recognition is being widely adopted by police and government agencies in the United States and around the world. But the accuracy of this technology is being called into question after multiple people were falsely identified and arrested for crimes they did not commit. These technologies seem to have biases: they often cannot accurately identify Black individuals, as most are trained on data from people with lighter complexions, meaning discriminatory biases are at risk of growing if police continue using this technology.

AI facial recognition is also being used by EU countries like France, which will use AI surveillance during the 2024 Olympics, and by China, which uses face tracking to identify protesters. Clearview AI, a facial recognition technology, has been banned in many EU countries because it uses images scraped from the internet and social media without any consent from individuals, and it has also been barred from selling facial data to private US companies.

The ethics around using AI in policing are a serious concern, as any faults in the technology can drastically impact the lives of innocent individuals, deepen racial biases, and make the general population feel like they are under constant surveillance. Activist groups and many members of government are fighting for bans on AI in policing because of these inaccuracies and general privacy concerns.

3 examples of successful and ethical AI

LAQO: Ethical AI for insurance

LAQO is Croatia’s first fully digital insurance company, offering 24/7 support to its customers. Together with Microsoft Azure OpenAI Service and Infobip’s chatbot-building platform Answers, it launched a generative AI assistant.

The chatbot specifically helps customers with FAQs about LAQO and insurance claims to avoid any misleading information and ethical concerns. With careful thought and consideration, we were able to build a successful and effective AI chatbot with a tiny margin for hallucinations.
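
This is not LAQO’s actual implementation, but the general guardrail pattern looks something like the sketch below: constrain the model to a vetted FAQ and give it an explicit escape hatch to a human agent. It uses the OpenAI Python SDK; the model name and FAQ content are assumptions:

```python
from openai import OpenAI  # pip install openai

client = OpenAI()  # assumes an OPENAI_API_KEY environment variable

# A vetted knowledge base; the FAQ content here is illustrative only
FAQ_CONTEXT = """Q: How do I file a claim? A: Open the app and ...
Q: What does my policy cover? A: Your policy covers ..."""

SYSTEM_PROMPT = (
    "You are an insurance support assistant. Answer ONLY from the FAQ "
    "below. If the answer is not in the FAQ, say you don't know and "
    "offer to connect the customer with a human agent.\n\n" + FAQ_CONTEXT
)

def answer(question: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model; LAQO used Azure OpenAI
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": question},
        ],
        temperature=0,  # conservative, repeatable answers
    )
    return response.choices[0].message.content

print(answer("Does my policy cover windshield damage?"))
```

Scoping answers to curated content and keeping temperature at 0 are two of the simplest levers for shrinking the space in which a model can hallucinate.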

The results:

  • 30% of queries are handled by the AI assistant
  • 95% of queries are resolved in 3-5 messages

Megi: Ethical healthcare chatbot

Megi Health is a digital healthcare solution that wanted to optimize its patient journey. It decided to integrate a generative AI assistant built on Answers. Any time AI and healthcare are mentioned in the same sentence, there need to be extensive discussions about the ethics around these interactions.

To mitigate any ethical risks, the teams decided to focus on four primary use cases for patients suffering from high blood pressure:

  • Record and control blood pressure
  • Track symptoms
  • Patient education
  • Connect with a doctor

Each use case was carefully crafted to improve the patient’s experience and ensure their medical needs were ethically met by balancing AI interactions with human ones.   

The results:

  • 86% CSAT score
  • 65% reduction in time to collect data for diagnosis

Coolinarika: Ethical nutritional assistant

Coolinarika by Podravka is a massively popular culinary platform in Croatia. The brand wanted to create more conversational experiences for its users, positively impact its community by offering nutritional education, and inspire the creation of healthy recipes.

To ensure there were no ethical issues around suggesting recipes and nutritional information to users, Podravka and Infobip worked with leading nutritionists in Croatia to create quick tips and recommendations for simple, high-quality recipes. Additionally, the AI experts at Infobip programmed the chatbot to provide relevant nutritional recipe recommendations to users with 100% accuracy.

The results:

  • 18% conversion rate to engaged users
  • 40% more active users aged 25-34

Creating an ethical future for AI

So, there is a future where AI’s ethical risks can be controlled and mitigated if everyone plays their part. Stakeholders and policymakers play a major role in creating guidelines and codes of conduct and in setting the standard for ethical AI development that safeguards user data and privacy.

We at Infobip take great pride in our role in creating ethical AI solutions for customers and contributing to a future where AI helps society and upholds basic human values.

Read on to see how you can implement generative AI use cases in an effective and ethical way.  

Deep dive into ethical uses of conversational GenAI and how to implement them

GenAI without the risk