Decisions made by artificial intelligence will have an increasingly large impact on our future lives. Imagine interviewing for a new job and being screened by an AI recruiting application that is biased against candidates with your background. Envision having a loved one be denied critical medical care by AI-based tools offering lower reimbursement rates for people in your postal code.
These scenarios are not hypothetical; such outcomes occur today, both with and without the help of artificial intelligence. On that note, The Washington Post recently published an article titled "Racial bias in a medical algorithm favors white patients over sicker black patients."
The gist of the article is that a reimbursement model based on patient postal codes had the unintended consequence of perpetuating different standards of care, ultimately linked to the race and socioeconomic status of patients. Presumably, if the model had relied only on the patient's medical condition and a general standard of care, its outcomes would have been more egalitarian.
Whether we like it or not, the logic and data used by our AI tools contain human biases from our past experiences. The challenge and opportunity posed by AI is that we can either make negatively biased decisions faster and more efficiently, or we can consciously break the cycle and evaluate the equity of AI model outcomes as an input to future decisions. But how do we steer toward the latter?
A Framework for AI Ethics Guidelines
As a research and advisory firm, Gartner has a whole team covering the ethics of AI. One group of analysts looked at the AI ethics guidelines most commonly cited by technology vendors, industry groups and governmental agencies, and found consistent themes across a handful of categories.
Beyond the fairness issue mentioned above, these themes span from AI transparency and explainability to ensuring privacy and accountability for the creators and users of AI. Closer to home, these guidelines touch a large number of daily supply chain processes, including:
- Talent Management: the interview process and selection criteria for new employees
- Planning and Sourcing (Antitrust Avoidance): how product and material allocations are assigned to customers and suppliers when our company represents a significant share of the market
- Customer Service: the need for transparency regarding the use of nonhuman entities (AI-powered chatbots, for example) to record and respond to customer requests
The reality is that for many of us, digital ethics has been an afterthought. Gartner research anticipates that new legislation — and the regulations that follow — will force us to react quickly if we aren't proactively planning for it today. For instance, what would be the impact if all nonhuman entities, including the aforementioned customer service chatbots, were required to identify themselves as such? At a minimum, there would be significant software development work to clarify when a human is communicating with the customer versus a machine.
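As an illustration, the disclosure work could start as simply as prefacing every machine-generated reply with a self-identification notice. The function and message text below are hypothetical, a minimal sketch of the idea rather than any vendor's implementation.

```python
# Hypothetical sketch: a reply builder that discloses whether the message
# comes from an automated assistant or a human agent. Names illustrative.

AI_DISCLOSURE = "You are chatting with an automated assistant, not a human."

def build_reply(text: str, is_human_agent: bool) -> str:
    """Prefix machine-generated replies with a self-identification notice."""
    if is_human_agent:
        return text
    return f"{AI_DISCLOSURE}\n{text}"

print(build_reply("Your order shipped yesterday.", is_human_agent=False))
```

Even this trivial change implies auditing every customer-facing channel to know which replies are machine-generated in the first place.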
Where to Start?
In developing strategies around AI, data and analytics, supply chain leadership teams should:
- Provide ethics awareness and training for management teams and employees designing and implementing AI-related capabilities.
- Use the AI ethics guidelines framework as a reference point and have a governance body monitor whether the continued learning of the AI-enabled systems strays from these guidelines.
- Address specific ethical dilemmas by adding an AI ethicist to the team and/or by having a digital ethics advisory board.
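To make the monitoring step above concrete, a governance body might track a simple outcome-equity metric, such as the gap in approval rates between groups, and flag drift past an agreed threshold. The group labels, sample data and 0.2 threshold below are illustrative assumptions, not a Gartner method.

```python
# Hypothetical sketch: flag when an AI system's approval rates diverge
# across groups beyond a governance threshold (demographic parity gap).
from collections import defaultdict

def approval_rates(decisions):
    """decisions: iterable of (group, approved) pairs -> rate per group."""
    counts = defaultdict(lambda: [0, 0])  # group -> [approved, total]
    for group, approved in decisions:
        counts[group][0] += int(approved)
        counts[group][1] += 1
    return {g: a / t for g, (a, t) in counts.items()}

def parity_gap(decisions):
    """Largest difference in approval rate between any two groups."""
    rates = approval_rates(decisions)
    return max(rates.values()) - min(rates.values())

# Illustrative decision log: (group, was_approved)
sample = [("A", True), ("A", True), ("A", False),
          ("B", True), ("B", False), ("B", False)]

gap = parity_gap(sample)  # group A approves 2/3, group B 1/3
if gap > 0.2:             # illustrative governance threshold
    print(f"Review needed: approval-rate gap of {gap:.2f} exceeds policy")
```

A real monitoring program would of course use richer fairness metrics and production decision logs, but the design point is the same: the check runs continuously, and breaching the threshold triggers human review rather than automatic correction.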
It is still early days in our adoption of AI-based decision-making tools, so now is the perfect time to assess and improve adherence to ethical AI guidelines.
Will we channel the better angels of our nature and use this powerful new technology for good, or will we give up the chance at greater freedom, privacy and equality by simply codifying the devil we know? The power is in our hands and minds.
VP Distinguished Advisor,
Gartner Supply Chain