Unlike past innovations such as big data and social media, where ethics were an afterthought to scaling the technology, the ethical discussion around AI is largely taking place before it hits the mainstream. That’s good news for businesses and their customers, especially since AI is arguably the most disruptive technology to date.

To truly reap the benefits AI can unlock, now is the time to create a standard for implementing it ethically and safely. As you begin to develop your organization’s framework for ethical AI, here are five critical guidelines to follow.

Keep social impact front and center

First, do no harm. Many healthcare professionals take this oath, and the tech industry, especially those of us working with AI, should take it too: humans must be at the heart of any and all AI solutions. AI should be a tool leveraged to create a positive impact on society. The difficulty here is the subjectivity at play. Yes, AI should make people smarter, better, faster. It should automate tasks and optimize processes. But it’s on you to decide the threshold for when and how.

That being said, when the potential negative impact on people, property or businesses becomes high, humans should have the ability to override AI. Think self-driving cars, fake news and addictive app experiences.
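In practice, a human override often takes the shape of a simple gate: when the system’s estimated risk of negative impact crosses a threshold, the decision is routed to a person instead of being executed automatically. Here is a minimal sketch; the scalar risk score and the 0.7 threshold are illustrative assumptions to adapt, not a prescribed design.

```python
from dataclasses import dataclass

# Illustrative threshold; tune it to your domain's tolerance for harm.
RISK_THRESHOLD = 0.7

@dataclass
class Decision:
    action: str
    risk_score: float  # estimated potential negative impact, 0..1 (assumed)

def execute(decision: Decision) -> str:
    """Run low-risk decisions automatically; escalate high-risk ones."""
    if decision.risk_score >= RISK_THRESHOLD:
        return f"ESCALATED to human review: {decision.action}"
    return f"Executed automatically: {decision.action}"

print(execute(Decision("recommend article", risk_score=0.2)))
print(execute(Decision("initiate emergency maneuver", risk_score=0.9)))
```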

Ensure decisions guided by algorithms are equitable

AI solutions should be honest and impartial. They shouldn’t discriminate, and they should eliminate undesirable bias based on gender, race, religion and other protected attributes. Seems pretty straightforward, right?

Not quite. This is another gray area. Does fairness mean treating everyone the same, or treating people according to their specific situations? And when it comes to discrimination, how do you navigate those waters with both fairness and bias in mind?

The answers to these questions will differ based on how you’re using AI. Be sure to get input from members of your organization with different personal backgrounds and from different levels of the business.
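A concrete starting point is to measure whether outcomes differ across groups. The sketch below computes per-group approval rates from a decision log and flags a large gap. The data, the `group` and `approved` column names, and the 0.8 cutoff (echoing the “four-fifths rule” used in some fairness audits) are all illustrative assumptions, and passing this check is a tripwire for review, not proof of fairness.

```python
import pandas as pd

# Hypothetical decision log: one row per applicant, with the group
# attribute and the model's binary decision. Column names are assumed.
decisions = pd.DataFrame({
    "group":    ["A", "A", "A", "B", "B", "B", "B", "A"],
    "approved": [1,   1,   0,   1,   0,   0,   1,   1],
})

# Approval rate per group (demographic parity).
rates = decisions.groupby("group")["approved"].mean()

# Ratio of the lowest to the highest approval rate.
parity_ratio = rates.min() / rates.max()
print(rates)
print(f"Parity ratio: {parity_ratio:.2f}")
if parity_ratio < 0.8:
    print("Potential disparate impact -- escalate for human review.")
```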

Build trust through transparency

Be upfront with your customers. You should never try to hide that an AI solution is at work in a product. Your users should know when they’re interacting with an AI-enabled system and should understand the system’s purpose. 

You should also be able to explain the decisions your AI solution makes: chart the system’s reasoning and clearly describe how results are generated. Explainable AI is about trust, and it’s a topic getting more and more attention. Determine whom you need to be transparent with, and for what purpose.
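One lightweight way to start explaining results is to measure how much each input drives the model’s predictions. Here is a minimal sketch using scikit-learn’s permutation importance on a stand-in model; the synthetic dataset and the feature names are placeholders, not anyone’s real schema.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Stand-in for your own data; feature names are illustrative.
X, y = make_classification(n_samples=500, n_features=4, random_state=0)
feature_names = ["tenure", "usage", "region_code", "support_tickets"]

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Permutation importance: how much accuracy drops when a feature is
# shuffled. Larger drops mean the model leans on that feature more.
result = permutation_importance(model, X_test, y_test, n_repeats=10,
                                random_state=0)
for name, mean in sorted(zip(feature_names, result.importances_mean),
                         key=lambda pair: -pair[1]):
    print(f"{name}: {mean:.3f}")
```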

Tip: If your vendors are adding AI to their platforms, work closely with them to understand their AI systems so you can be transparent with your customers about your own products.

Safeguard data at every stage

Another guideline that can be difficult to wrap your head around is creating safe and secure AI solutions. This sounds like a no-brainer, but when it comes to collecting data and protecting people’s privacy, things can get a little hairy. There’s a delicate balance between collecting only the information needed for the solution’s purpose, and collecting as much data as possible to help improve future iterations. 

While you ponder that dilemma in terms of your own organization and AI needs, here are some straightforward considerations:

  • The data you collect should be masked, encrypted and used only for the purpose you collected it for, so get clear on why you’re collecting it. (A minimal masking sketch follows this list.)

  • Your AI system should be thoroughly tested before it’s put into wide use: monitor the machine learning process and test the most exceptional situations that can occur, before they happen with real-life consequences.

  • When AI is combined with robotics, the robot should not be able to do any physical harm.
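On the first point, even a simple masking step goes a long way. Here is a minimal sketch, assuming a record with an email and account number; the field names and the keyed-hash scheme are illustrative choices, not a compliance standard. It pseudonymizes identifiers so analysis can still join records without ever seeing raw PII.

```python
import hashlib
import hmac

# Secret key; in practice, load this from a secrets manager, never code.
SECRET_KEY = b"replace-with-a-managed-secret"

def mask(value: str) -> str:
    """Pseudonymize an identifier with a keyed hash (HMAC-SHA256).

    The same input always maps to the same token, so records can still
    be linked for analysis, but the raw value is not recoverable
    without the key.
    """
    return hmac.new(SECRET_KEY, value.encode(), hashlib.sha256).hexdigest()[:16]

record = {"email": "jane@example.com", "account": "4411-2290", "plan": "pro"}

# Mask only the identifying fields; keep the fields the solution needs.
masked = {key: mask(val) if key in {"email", "account"} else val
          for key, val in record.items()}
print(masked)
```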

Hold yourself accountable

We’re still in the early stages of AI, and although we’re getting ahead of things by having ethical discussions now, we can’t actually build ethics into AI solutions today. We can mitigate specific biases, such as dataset, association, confirmation and automation bias, but that doesn’t solve for ethics. That means you are responsible for making sure the guidelines above are taken into consideration.

So, what does that look like? Continuous testing and monitoring. You are ethically accountable for the system’s behavior. As AI-enabled systems learn from real-world interactions, they may begin to stray from their original intentions. It’s your responsibility to override high-risk decisions and redesign the system based on what you learn.
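What might that monitoring look like in practice? One common tactic is watching for drift: compare the distribution of inputs the live system sees against the data it was validated on, and alert when they diverge. Here is a minimal sketch using a two-sample Kolmogorov-Smirnov test on a single feature; the synthetic data and the 0.01 significance threshold are assumptions to adapt.

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)

# Feature values the model was validated on (reference distribution).
reference = rng.normal(loc=0.0, scale=1.0, size=5_000)

# The same feature observed in production, drifting upward over time.
live = rng.normal(loc=0.4, scale=1.0, size=5_000)

# Two-sample KS test: a small p-value means the live distribution no
# longer matches the reference.
stat, p_value = ks_2samp(reference, live)
print(f"KS statistic={stat:.3f}, p-value={p_value:.4f}")
if p_value < 0.01:
    print("Input drift detected -- trigger human review and retraining.")
```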

As you consider your organization’s own AI strategy, remember that these guidelines are just a starting point. Not all of the considerations above are black and white. Socialize them with stakeholders across your business and come to a consensus on what they mean for you and your initiatives. It’s impossible to prevent every unintended consequence of AI, but with a framework in place you have a better chance of catching, and reacting to, issues as they occur.