You and a friend go out shopping and you both buy the same thing, only they pay less for it than you do. They don’t have a specific discount or loyalty card. But based on their profile, an AI has decided that you will buy at a higher price than your friend. How does that make you feel? Is it ethical?

Using AI ethically should be a key consideration for organisations that care about their brand and the relationships they have with their customers and employees. Misusing it can cause irreparable damage to those relationships and to your reputation, and in some cases it invites increased market regulation as government bodies step in to prevent malpractice. Understanding the ethical use of AI therefore matters both for maintaining strong customer relationships and for avoiding wasted investment in AI.

Artificial Intelligence is constantly in the news at the moment, and most organisations are planning to use it if they are not doing so already. However, some are more considered in their use of it than others. To borrow Mark Zuckerberg’s phrase, some will “move fast and break things”; others will move fast and not break things.

Although the shopping example above is unlikely to happen on the high street just yet, it already happens on some websites. In the ticketing industry, the UK Government has stepped in and banned ticket touts from using bots. These were being used to bulk-buy tickets which the touts could then sell on secondary markets at a higher price, meaning fans could not buy tickets to gigs directly and had to pay over the odds to see concerts. Touts now face unlimited fines if they are found to have used bots.

So what do we mean when we talk about the ethical use of AI? For businesses, there are some key questions to ask when deciding whether to use it.

1)  Will my customers or employees be supportive of how I’m using AI?

If you are using AI to profile customers, what is the benefit to the customer? How are you using the profiling information? We would advocate openness. Testing the response to these questions before launching is a good idea.

Many people already assume that AI is being used in cases where in fact it isn’t, and some groups of people are more accepting of that than others. Be clear about the value of the AI for your customer or your employee, not just for your organisation. If you get a negative response, take time to understand what drives it so you can reconsider.

2)  Is the way I’m using AI likely to structurally alter the industry I’m in?

AI has the ability to completely disrupt certain industries by automating intelligent tasks. For example, the insurance industry has relied on pooled risk since it began; in the consumer space, this has been based largely on demographics and occupation. AI can draw on a much wider range of data to build a more complete picture of the individual and identify behavioural or character traits that indicate whether a person is likely to be a good or a bad risk.

This means that pricing can be based on a range of (in some cases more subjective) criteria that determine what risk you represent. This greater ability to predict risk could easily lower prices for some while pricing others out of the market entirely.

Regulators have already shown a willingness to step into this market to ensure that risk pooling continues to operate and that pricing is fair. The same is likely to be true across a range of other industries, from travel to retail and from energy to financial services. Admiral Insurance recently had to abandon plans to use social media profiles to price car insurance following a wave of negative publicity.

Another example is the use of AI in passive tracker funds, which track the markets automatically and adjust their portfolios algorithmically. Because these are lower cost and broadly track the performance of the market, there has been a significant shift from active funds to passive funds. Automated trading played a part in market turmoil as far back as 2007, and developments in AI since then have given the algorithms increasing autonomy. In this emerging space their behaviour is not always predictable and could easily contribute to further crashes.
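
To make the mechanism concrete, here is a deliberately simplified sketch of a tracker fund’s rebalancing step (hypothetical tickers and weights, not any fund’s actual algorithm): the fund simply holds each stock in proportion to its index weight, with no human judgement in the loop.

```python
# Deliberately simplified sketch of a passive tracker's rebalancing step:
# hold each stock in proportion to its weight in the index.
def rebalance(portfolio_value: float,
              index_weights: dict[str, float]) -> dict[str, float]:
    """Target holding (in currency) per stock to match the index weights."""
    return {ticker: portfolio_value * weight
            for ticker, weight in index_weights.items()}

index_weights = {"AAA": 0.5, "BBB": 0.3, "CCC": 0.2}   # hypothetical index
print(rebalance(1_000_000, index_weights))
# When many funds run similar rules, their trades can reinforce each other,
# which is one way automated strategies can amplify market moves.
```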

Disruption is a good thing as it is often the engine of progress. However, understanding the downsides as well as the upsides is a good way to ensure you stay on top, keep customers happy and don’t get regulated out of the market. 

3)  Am I guarding against artificial stupidity, when the bots do something they shouldn’t?

Artificial Intelligence as a concept has been around for over half a century, but every new AI that is built has the capacity to produce unexpected outcomes. That might range from refusing a loan to a loyal customer when it shouldn’t, to buying shares automatically when there is no money to pay for them. It is important, but complex, to pick up and fix this type of issue because most AI implementations are “black box”: the AI takes data in and makes a decision within the black box based on its training. The reasons for the decision are not always transparent, because the AI has been trained on statistical analysis and correlations in data, and those correlations are not always obvious. In many respects such systems can be like a recalcitrant expert member of staff who is almost always right but won’t explain how they came to a particular conclusion.
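
As a minimal sketch of the problem (synthetic data and hypothetical feature names, not a real credit model), consider a random forest making a loan decision: the prediction comes back with no rationale, and the nearest thing to an explanation is a global ranking of feature importance, which says nothing about why this particular applicant was refused.

```python
# Minimal sketch of the "black box" problem (synthetic data, hypothetical
# features): the model returns a decision, but not a reason a customer
# would understand.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
features = ["income", "years_as_customer", "late_payments", "postcode_band"]
X = rng.normal(size=(1000, len(features)))           # synthetic applicant data
y = X[:, 0] - X[:, 2] + rng.normal(size=1000) > 0    # synthetic "good risk" label

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

applicant = rng.normal(size=(1, len(features)))
decision = model.predict(applicant)[0]               # just True/False, no rationale
print("Loan approved:", decision)

# The closest thing to an explanation is a global feature-importance ranking,
# which does not explain any individual decision.
for name, importance in sorted(zip(features, model.feature_importances_),
                               key=lambda pair: -pair[1]):
    print(f"{name}: {importance:.2f}")
```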

Another area of artificial stupidity is bots on bots. As the implementation of AI becomes more widespread, bots are increasingly likely to encounter each other, which can lead to unintended consequences because they have not been designed to work together. For example, Wikipedia uses bots to correct many of the invalid or malicious edits on the site, and now has a problem where bots fight each other, continually overwriting one another’s edits.
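
A toy illustration of the dynamic (not Wikipedia’s actual bot code): two bots, each enforcing a perfectly sensible rule in isolation, can lock into an endless revert war.

```python
# Toy illustration (not Wikipedia's actual bots) of an edit war:
# each bot enforces a rule that undoes the other's work.
def bot_british(text: str) -> str:
    return text.replace("color", "colour")   # enforces British spelling

def bot_american(text: str) -> str:
    return text.replace("colour", "color")   # enforces American spelling

page = "The colour of the logo"
for _ in range(3):
    page = bot_american(page)
    print("American bot:", page)
    page = bot_british(page)
    print("British bot: ", page)
# Neither bot is wrong by its own rules, yet the page never settles:
# on every pass each bot "corrects" the other's last edit.
```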

4)  Is my bot prejudiced or biased in some way?

Microsoft received unwelcome press attention when its prototype chatbot Tay started posting racist tweets. Microsoft released Tay onto Twitter, and within 24 hours the trolls had “trained” Tay to be like them.

Facebook, which has been using AI to curate and personalise news feeds, has recently had to wrestle with the issue of bias in those bots. It has faced accusations from US politicians on both sides that the news is biased, and therefore has to continually monitor the output and tune the algorithms to ensure they are not creating bias.

Most AI used in business is trained and then deployed, so there is an opportunity to test for and identify bias before launch, as sketched below. However, this requires specific skills and expertise that organisations will need to start investing in.
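
As a minimal sketch of what such a pre-deployment test might look like (hypothetical column names, assuming you hold labelled test decisions with a protected attribute), a demographic-parity check simply compares approval rates across groups:

```python
# Minimal sketch of a pre-deployment bias check (hypothetical column names):
# compare the model's approval rate across a protected attribute before go-live.
import pandas as pd

def approval_rates(decisions: pd.DataFrame, group_col: str) -> pd.Series:
    """Approval rate per group; 'approved' is the model's output on test data."""
    return decisions.groupby(group_col)["approved"].mean()

def demographic_parity_gap(decisions: pd.DataFrame, group_col: str) -> float:
    rates = approval_rates(decisions, group_col)
    return float(rates.max() - rates.min())

test_decisions = pd.DataFrame({
    "approved": [1, 0, 1, 1, 0, 1, 0, 0],
    "gender":   ["f", "f", "f", "f", "m", "m", "m", "m"],
})
gap = demographic_parity_gap(test_decisions, "gender")
print(f"Approval-rate gap between groups: {gap:.2f}")
if gap > 0.1:   # the threshold is a policy decision, not a technical one
    print("Potential bias: investigate before deployment")
```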

5)  Is my bot secure from attack or from being gamed?

There has been much talk recently about the impact of cyber attacks on elections in the US. As AI automates more human tasks, the risks that such attacks pose increase. If a malicious attacker is able to influence AIs that are making payments, moving goods or issuing credit, there is significant scope for harm, both to individual customers and to the organisations running those AIs.

6)  Have I built safeguards in to ensure that the bot has appropriate human oversight?

Whether it is artificial stupidity, bot bias or hacking, things go wrong. Putting the right level of human oversight in place is important to ensure you are able to remedy the situation. This is most obvious in self-driving cars, where the driver has to remain at the wheel to take over in the event that the AI gets it wrong.

Building in safeguards is not as straightforward as it might sound. AI often makes decisions based on a range of criteria and, in most cases, in a black box, which can make it difficult for a human to say whether a decision was right or wrong. Take the example of you and a friend buying the same product at different prices: how can a human say whether the pricing made a difference to the sale? The answer usually lies in the data, so you need people actively looking at it and checking that the AI is behaving as expected.
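
As a minimal sketch of that kind of oversight (a hypothetical log format, not a production monitoring system), assume every pricing decision is logged with the inputs that produced it; a simple monitor can then flag near-identical customer profiles that received very different prices, so a human reviewer knows where to look.

```python
# Minimal sketch of human-in-the-loop monitoring (hypothetical log format):
# flag decisions where identical customer profiles got divergent prices,
# so a reviewer can check whether the AI is behaving as expected.
from dataclasses import dataclass

@dataclass
class PricingDecision:
    customer_id: str
    profile: tuple        # simplified profile features
    price: float

def flag_divergent_prices(log: list[PricingDecision],
                          max_spread: float = 5.0) -> list[tuple]:
    """Return pairs of decisions with identical profiles but divergent prices."""
    flagged = []
    for i, a in enumerate(log):
        for b in log[i + 1:]:
            if a.profile == b.profile and abs(a.price - b.price) > max_spread:
                flagged.append((a, b))
    return flagged

log = [
    PricingDecision("you",         ("uk", "returning"), 49.99),
    PricingDecision("your_friend", ("uk", "returning"), 39.99),
]
for a, b in flag_divergent_prices(log):
    print(f"Review needed: {a.customer_id} paid {a.price}, "
          f"{b.customer_id} paid {b.price} for the same profile")
```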

You also need to be able to switch the AI off and still keep offering the service.
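
One way to do that, sketched below with hypothetical names throughout, is to route requests through a switch that falls back to a simple, auditable rule when the model is disabled, so the service keeps running while the AI is investigated.

```python
# Minimal sketch of a "kill switch" (hypothetical design): when the AI pricing
# model is switched off or fails, requests fall through to a simple rule-based
# price so the service keeps running.
AI_PRICING_ENABLED = True   # in practice a feature flag, not a module global

def model_price(profile: dict) -> float:
    # stand-in for the AI model's personalised price
    return 49.99

def fallback_price(profile: dict) -> float:
    # simple, auditable rule: everyone pays the list price
    return 44.99

def quote_price(profile: dict) -> float:
    if AI_PRICING_ENABLED:
        try:
            return model_price(profile)
        except Exception:
            pass   # a failing model should degrade, not take the service down
    return fallback_price(profile)

AI_PRICING_ENABLED = False                     # ops flip the switch...
print(quote_price({"segment": "returning"}))   # ...and customers still get a price
```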

AI certainly has the power to transform, but wielding that power is something we should do with a clear understanding of the implications. I advocate embracing the opportunity while having an AI ethics board that reviews all AI implementations from an ethical perspective. This will help ensure that the organisation is properly protected against the potential risks posed by AI.