Once, as a joke, I asked my Amazon Echo, “Alexa, are you our robotic overlord?”
“Playing Pancake Robot by Parry Gripp,” Alexa informed me in her stilted monotone. Suffice it to say, I do not think we are heading toward a Terminator-esque dystopian future any time soon. I am sure you are all as relieved as I was.
With that being said, AI continues to improve, and organizations continue to become both more adept at preparing to use AI, through better data management and integration, and more comfortable using it to their advantage. It is a technology that will continue to embed itself into our work and daily lives.
For Risk Management, I predict this will include creating controls for one of the biggest risks a company can experience.
That is: a risky condition exists, but it is against the short-term interests of the individual to report or escalate it.
Source: Influence and Drive Secure Behaviors, Gartner, 2022
Here are some examples:
- Risk Indicator Adjustment: A company might tweak a risk indicator, changing the parameters so it moves from red to green, to avoid negative reports to the board and executives. This allows the company to continue with its business-as-usual approach, which may increase profits at the expense of dramatically increasing risk.
- Unethical Sales Practices: Pressure to sell more, combined with lax oversight and performance incentives, can lead to unethical behavior and predatory sales practices, harming customers and ultimately the company itself when customers are lost and the practice comes to light. However, the short-term benefits blind the salespeople, their managers, and the executives to the long-term risks they are taking on.
- Reporting Dilemma: Business units may create their own databases containing personally identifiable information (PII) to make reporting easier, rather than reporting from production or technology-owned databases. These in-house databases often lack essential controls such as encryption and least-privilege access, putting confidential data at risk. The data would be better protected if managed by technology partners, but that would require sacrificing convenience.
- Password Sharing: An individual shares a password with a colleague so the colleague can complete their business objectives.
In each of these cases, security is sacrificed to meet what are perceived to be business objectives, and the decisions made serve to make the individuals' lives easier. Additionally, the potential realization of the risk is far enough away for the invincibility fallacy to come into play.
AI, on the other hand, lacks these inherent biases. It does not favor one outcome or another due to personal gain or emotional attachment. It simply observes and analyzes. It can therefore identify risks that might otherwise be overlooked and escalate them to executives so they can truly understand what is going on in their enterprise.
For example, AI could:
- Risk Indicator Adjustment: Independently track the data that informs the risk indicator, along with the likelihood and impact of a negative outcome. Indicators can then be reported to the board or executives without human influence, or in some cases in spite of it.
- Unethical Sales Practices: Identify and link trends across sales practices, deal shapes, and customer outcomes.
- Reporting Dilemma: Automate discovery of all assets within an organization, categorize the data they hold, and assess how well, if at all, that data is protected.
- Password Sharing: Identify non-standard behavior of a user account based on its past history (a brief sketch follows this list).
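To make that last example concrete, here is a minimal sketch of the kind of anomaly detection involved, using scikit-learn's IsolationForest on a few made-up login features (hour of day, distinct source IPs, and login volume). The features, data, and model choice are illustrative assumptions on my part, not a blueprint; a real program would train on the organization's own authentication logs.

```python
# Minimal sketch: flagging non-standard account behavior that could indicate
# password sharing. All features and data below are hypothetical.
import numpy as np
from sklearn.ensemble import IsolationForest

# Historical login events for one account: [hour_of_day, distinct_source_ips,
# logins_in_that_hour]. These rows describe the account's normal pattern.
history = np.array([
    [9, 1, 3], [10, 1, 4], [11, 1, 2], [14, 1, 3],
    [9, 1, 2], [13, 1, 3], [15, 1, 4], [10, 1, 3],
])

# Fit an anomaly detector on past behavior only.
model = IsolationForest(contamination=0.1, random_state=0).fit(history)

# New events: one that matches the usual pattern, and one with many logins
# from several source IPs at an unusual hour -- a possible shared credential.
new_events = np.array([
    [10, 1, 3],   # looks like business as usual
    [2, 4, 12],   # unusual hour, multiple IPs, high volume
])

# predict() returns 1 for inliers and -1 for outliers.
for event, label in zip(new_events, model.predict(new_events)):
    status = "flag for review" if label == -1 else "normal"
    print(f"event {event.tolist()}: {status}")
```

The same pattern, training on historical behavior and escalating deviations to someone outside the individual's chain of incentives, applies just as well to the risk indicator and reporting examples above.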
While none of this is available off the shelf today, it is within AI's capabilities. AI can make predictions or inferences based on large amounts of historical data, and it can participate in workflows or answer questions, as Copilot, ChatGPT, and Bard have shown us.
To implement a program like this, companies would need the following four things:
- A business environment that is highly integrated, both procedurally and from a data perspective.
- A repository of loss and risk events, linked to root causes (a simple sketch of such a record follows this list).
- A highly integrated AI program that can access the majority of the business environment and is trained on the loss and risk events, as well as the data that led to the root causes.
- An ombudsman or committee whose responsibility it is to receive and escalate these types of issues.
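As a rough illustration of the second item, here is a minimal sketch of what a record in a loss and risk event repository might look like, with each event linked to its root causes. The field names and example values are hypothetical; a real repository would follow the organization's own risk taxonomy.

```python
# Minimal sketch of a loss/risk event record linked to root causes.
# Field names and example values are hypothetical.
from dataclasses import dataclass, field
from datetime import date
from typing import List

@dataclass
class RootCause:
    category: str      # e.g. "access control", "process gap"
    description: str

@dataclass
class RiskEvent:
    event_id: str
    occurred_on: date
    description: str
    loss_amount: float                          # realized or estimated loss
    root_causes: List[RootCause] = field(default_factory=list)

# An example record an AI program could be trained on.
event = RiskEvent(
    event_id="RE-0042",
    occurred_on=date(2023, 6, 1),
    description="Shared password used for an unauthorized report export",
    loss_amount=25_000.0,
    root_causes=[
        RootCause("access control", "Password shared to meet a reporting deadline"),
        RootCause("process gap", "No self-service access path for the report"),
    ],
)
print(event.event_id, [rc.category for rc in event.root_causes])
```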
Therefore, much like many things I talk about, this is a waypoint on a journey rather than an easy button. However, given that the likelihood of a company facing a dilemma like this is high, and given the risk it could expose the business to, I think it's a worthwhile journey to go on.
If you agree, the first step is a highly integrated business environment. This requires a visible and trustworthy technology estate, breaking down silos, and a roadmap for how to accomplish these things and more.