Innovations in artificial intelligence (AI) are starting to dramatically improve the provision of goods and services in sectors ranging from healthcare to financial services and technology. And it's just the start. As big data gets even bigger and computing power continues to grow, AI will transform virtually every aspect of life as we know it.

It's an exciting prospect that holds out the hope of tackling some of our most difficult challenges for the benefit of humanity. But, at the same time, there are many legitimate worries. As with any new and rapidly evolving technology, widespread adoption of AI will involve a steep learning curve. Mistakes and miscalculations are inevitable – leading to unexpected and sometimes harmful effects.

To maximise the benefits and minimise the harm, AI ethics is crucial to ensuring that the social and ethical implications of designing and using AI systems are considered every step of the way. Four key pillars underpin AI ethics – systems need to be:

• Fair

• Private

• Robust

• Explainable

Fair

The use of AI in credit scoring – often with the aim of reducing bias – is a good example of how unintended consequences arise, and of why fairness is a vital consideration in the design of AI systems. Credit scores are often presented as objective and neutral, but they have a long history of prejudice – on the basis of race or gender, for example. AI-based credit scoring models provide a more nuanced evaluation and can unearth hidden relationships between variables that would not seem relevant, or even be included, in a traditional credit scoring model – anything from political beliefs to who you're connected to on social media. But introducing such non-traditional information into credit scores risks making them even more biased than they already are. Fairness is key to ensuring this additional data actually benefits the individuals it's designed to help, by opening up access to credit.
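
To make fairness testable rather than aspirational, checks like this can be built into a model's evaluation pipeline. The sketch below is a minimal, hypothetical example – the function, data and decisions are ours for illustration, not any lender's actual system – showing one of the simplest fairness metrics, the demographic parity gap: the difference in approval rates between a protected group and everyone else.

```python
import numpy as np

def demographic_parity_gap(approved, protected):
    """Difference in approval rates between a protected group and everyone else.

    approved  : array of 0/1 credit decisions produced by the scoring model
    protected : boolean array marking membership of the protected group
    """
    approved = np.asarray(approved)
    protected = np.asarray(protected, dtype=bool)
    return approved[protected].mean() - approved[~protected].mean()

# Hypothetical decisions for eight applicants
approved = [1, 0, 1, 1, 0, 1, 1, 0]
protected = [True, True, True, False, False, False, False, True]
print(f"Approval-rate gap: {demographic_parity_gap(approved, protected):+.2f}")
# -0.25 here: the protected group is approved 25 points less often – worth investigating
```

Demographic parity is only one of several competing fairness definitions (equalised odds and calibration are others), and which is appropriate depends on context – but none of them can be checked if the question is never asked.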

Private

Healthcare is already demonstrating why privacy is crucial when it comes to the use of AI. Back in 2016, the news that London AI firm DeepMind was working with the NHS sparked controversy over the use of sensitive patient data. Last year's transfer of DeepMind's health division to parent company Google rang even more alarm bells. Clearly, we can't have a situation where patient data can be linked to Google accounts. But the right balance needs to be struck to protect the privacy of individuals whilst enabling society to benefit from the use of AI to assess scans, for example, or plan radiotherapy treatment. The real value lies in anonymised aggregate data – rather than individual records – so that balance should be achievable.
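
One common way to operationalise that balance is to publish only aggregate statistics and suppress any cohort small enough to identify individuals. The sketch below is a minimal illustration of the principle – the record schema, function and threshold of five are hypothetical, loosely modelled on the "small cell" suppression rules used in published health statistics.

```python
from collections import Counter

MIN_COHORT = 5  # hypothetical threshold: suppress any group smaller than this

def aggregate_diagnoses(records):
    """Count diagnoses across patients, dropping cohorts too small to stay anonymous."""
    counts = Counter(r["diagnosis"] for r in records)
    return {dx: n for dx, n in counts.items() if n >= MIN_COHORT}

patients = [{"diagnosis": "diabetes"}] * 12 + [{"diagnosis": "rare_condition"}] * 2
print(aggregate_diagnoses(patients))
# {'diabetes': 12} – the two-patient cohort is suppressed, as it could identify individuals
```

More rigorous techniques such as differential privacy add calibrated noise to such aggregates, but the principle is the same: the output should reveal the population, not the patient.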

Robust

The importance of robustness in an AI system is illustrated in the world of recruitment. AI has the potential to significantly improve the hiring process – taking a lot of the guesswork out of identifying talent, whilst removing the bias that so often clouds human judgement. But it was reported last year that even Amazon hit problems here, when its new recruiting system was found to be biased against women. Its computer models were trained to vet applicants by observing patterns in CVs submitted to the company over a 10-year period. Most of those CVs came from men – a reflection of the male dominance of the tech industry over that period – so, in effect, the system taught itself that male candidates were preferable. AI systems are not robust by default – to maximise their benefits, we need to ensure they do the "right thing" in the real world.
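
A basic safeguard here is to evaluate a model separately on each subgroup before deployment, rather than trusting a single headline accuracy figure. The sketch below is a minimal, hypothetical illustration – the predictions, labels and groups are invented – of the kind of sliced evaluation that surfaces exactly this failure mode.

```python
import numpy as np

def accuracy_by_slice(y_true, y_pred, groups):
    """Accuracy per subgroup – a single overall score can hide a failing slice.

    y_true, y_pred : arrays of 0/1 hire recommendations (ground truth vs model)
    groups         : array of subgroup labels for each candidate
    """
    y_true, y_pred, groups = map(np.asarray, (y_true, y_pred, groups))
    return {str(g): float((y_pred[groups == g] == y_true[groups == g]).mean())
            for g in np.unique(groups)}

y_true = [1, 0, 1, 1, 0, 1]           # hypothetical held-out labels
y_pred = [1, 0, 1, 0, 1, 0]           # model trained on historical (skewed) CVs
groups = ["m", "m", "m", "f", "f", "f"]
print(accuracy_by_slice(y_true, y_pred, groups))
# {'f': 0.0, 'm': 1.0} – overall accuracy is 50%, but the model fails entirely on one group
```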

Explainable

A Microsoft experiment in "conversational understanding" – a Twitter bot called Tay – highlights why the fourth pillar of AI ethics, explainability, is required. It took less than 24 hours for Twitter to corrupt the innocent AI chatbot. The more you chat with Tay, said Microsoft, the smarter it gets, learning to engage people through "casual and playful conversation". Unfortunately, the conversations didn't stay playful for long. People soon started tweeting the bot all sorts of misogynistic and racist remarks, and Tay began repeating these sentiments back to users – proving the old adage of garbage in, garbage out. As with biological intelligence, explaining how an AI system arrived at a decision is challenging – but it's necessary if we're to truly reap the benefits AI has to offer.
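
That said, practical techniques do exist for prising open a model's reasoning. The sketch below shows one of the simplest model-agnostic ones, permutation importance: shuffle one feature at a time and measure how much the model's accuracy drops. The predict function and data are hypothetical stand-ins for any trained classifier.

```python
import numpy as np

def permutation_importance(predict, X, y, n_repeats=10, seed=0):
    """Importance of each feature = accuracy lost when that feature is shuffled."""
    rng = np.random.default_rng(seed)
    baseline = (predict(X) == y).mean()
    importances = []
    for j in range(X.shape[1]):
        drops = []
        for _ in range(n_repeats):
            X_shuffled = X.copy()
            rng.shuffle(X_shuffled[:, j])   # break the link between feature j and the output
            drops.append(baseline - (predict(X_shuffled) == y).mean())
        importances.append(float(np.mean(drops)))
    return importances  # bigger drop = the model leaned on that feature more

def predict(X):  # hypothetical trained model: only ever looks at feature 0
    return (X[:, 0] > 0).astype(int)

X = np.array([[1.0, 5.0], [-2.0, 5.0], [3.0, 5.0], [-1.0, 5.0]])
y = np.array([1, 0, 1, 0])
print(permutation_importance(predict, X, y))  # feature 0 scores high, feature 1 scores 0
```

An explanation this simple won't untangle a chatbot like Tay, but the same principle – probing which inputs drive which outputs – is the starting point for auditing any learned behaviour.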

As technology advances, the four pillars underpinning AI ethics will be key to ensuring innovation flourishes within a framework of responsibility. We're already seeing steps in the right direction – with The AI Initiative from The Future Society, for example, and the Ethics and Governance of AI Initiative, a joint project of the MIT Media Lab and the Berkman Klein Center for Internet & Society at Harvard. Such collaboration needs to be the way forward. AI is too important to be left to chance.