The Ethical Application of Artificial Intelligence



Artificial intelligence used to be the exclusive purview of science fiction books and films. However, while the technology is still very much in its infancy and lacks the sophistication of the true thinking machines featured in those tales, questions are already being asked regarding the ethics of artificial intelligence and its application in business.

In I, Robot, Isaac Asimov laid out three laws governing the conduct of robots in the world he created: a robot may not harm a human, or through inaction allow a human to come to harm; a robot must obey instructions from humans, unless those instructions conflict with the first law; and a robot must protect its own existence, so long as doing so does not conflict with the first two laws.

While we are a long way from needing to worry about AI-powered machines to the extent they do in Asimov’s world, the three laws of robotics he laid out do raise interesting questions about the capacity of automated devices and systems to cause some degree of harm, even in their current form.

Ethical AI

According to McKinsey, by 2025 individuals and companies around the world will produce an estimated 463 exabytes of data each day, compared with less than three exabytes a decade ago. We must therefore ask: what harm could be caused if AI technology fails to analyze or process this data in an ethical manner?

"Few companies have systematically considered and started to address the ethical aspects of data management, which could have broad ramifications and responsibilities,” says McKinsey. "If algorithms are trained with biased data sets or data sets are breached, sold without consent, or otherwise mishandled, for instance, companies can incur significant reputational and financial costs. Board members could even be held personally liable.”

Probably the most obvious point at which ethics and AI intersect is in matters of data privacy and security. With so much data gathering and processing left to AI technology, organizations need to ensure that regulations are adhered to and that the technology being deployed is sufficiently secure that cybercriminals cannot easily break in and steal confidential data.

With many organizations farming out their data science capabilities to third-party service providers, the issue of security becomes even more important. It is not ethically sound to simply assume a provider has its data security locked down and leave it at that. Third-party vendor vetting is no longer simply a sensible business decision, but an ethical concern.

Company Culture

"Companies may believe that just by hiring a few data scientists, they’ve fulfilled their data management obligations,” continues McKinsey. "The truth is data ethics is everyone’s domain, not just the province of data scientists or of legal and compliance teams. At different times, employees across the organization – from the front line to the C-suite – will need to raise, respond to, and think through various ethical issues surrounding data.”

Developing a culture of ethical AI management within an organization means setting out, in detail, how it intends to develop and deploy these technologies, together with a fully realized set of principles and guidelines to inform and guide all stakeholders in their implementation and use.

In an increasingly stringent regulatory environment, the need for ethical AI use must permeate the entire company culture. It is no good for one department to assume another is taking responsibility for these matters – silos need to be broken down so that every part of the company is singing from the same hymn sheet, and everyone involved understands the importance of ethics in relation to AI and data.

Short-Term Thinking

In the current landscape, where a volatile economy and other disruptions are putting significant pressure on companies, the temptation to focus on short-term goals and immediate ROI is high.

However, it is short-term thinking that gives unethical practices the most fertile ground in which to take root. One example might be a company being careless about sharing confidential information because doing so boosts its profits right now. This kind of attitude is highly likely to come back and bite the organization, however, and it is not just the person who committed the unethical behavior who is likely to suffer as a result.

As one tech company president explained to McKinsey: “It’s tempting to collect as much data as possible and to use as much data as possible. Because at the end of the day, my board cares about whether I deliver growth and EBITDA […] If my chief marketing officer can’t target users to create an efficient customer acquisition channel, he will likely get fired at some point – or at least he won’t make his bonus.”

Final Thoughts

The ethical application of AI technology and data science should be an organization-wide consideration. Remove silos, ensure policy is clear and unambiguous, avoid incentivizing short-term thinking, and be extra stringent when vetting technology and vendors, and your company will be well on its way to becoming an ethical AI powerhouse.


Ethical AI is sure to be a hot topic at FIMA 2023, being held in April at the Westin Copley Place, Boston, MA.

Download the agenda today for more information and insights.