AI risks include data poisoning and model corruption

January 15, 2025

Interos, a company that provides supply chain resilience and risk management software, emailed me to say that there was a supply chain risk that everyone seemed to be ignoring – the risks of AI.

Companies use risk management software, such as the Interos solution, to monitor and analyze supplier risk events in real time. These are big data platforms that monitor various news sources and databases from governments, financial institutions, ESG NGOs and other sources to detect when a negative event has occurred or may occur.

It is well known that ChatGPT can hallucinate. Almost all supply chain software companies are talking about how they are incorporating generative AI into their solutions and how it can improve user interfaces. Most argue that when the AI is trained on the company’s own data, the risk of hallucinations is small.

But what Interos is talking about is different: not just the danger of AI generating hallucinations, but the dangers associated with all forms of AI. AI-related risks include data poisoning and model corruption. These, Interos argues, present significant challenges for organizations integrating AI into their operations. They are right; I don’t hear anyone else discussing this risk.

I interviewed Ted Krantz, Interos’ new CEO, to learn more. Mr. Krantz argues that with cloud-based architectures, componentized software and integrated analytics, significant information flows through ERP and supply chain platforms. This information comes from within the platform’s applications and, increasingly, from external sources such as Interos. These platforms need high-fidelity signals that can be trusted.

The data life cycle, Mr. Krantz continued, includes an input stage, a model stage and an output stage. All three of these checkpoints have challenges and opportunities.

Garbage in, garbage out

Garbage in, garbage out refers to the data integrity problem on the input side. Solutions, especially solutions that leverage public data such as risk management applications, are highly dependent on signal quality. Is the information from these websites true? Is it fictitious? “So there is a component of entry-level corruption that all of us have to deal with.” This is true whether it is a generative AI application or more traditional forms of artificial intelligence.

Several algorithms can help clean the data. These can be as simple as logic that checks a zip code field: a US zip code must have five digits, so a four-digit entry is an error. Or data cleaning tools might suggest that both “P&G” and “Procter & Gamble” probably refer to the same company.
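The two kinds of checks described above can be sketched in a few lines. This is a minimal illustration, not Interos’ actual cleaning logic; the alias table and function names are hypothetical.

```python
import re

def validate_zip(zip_code: str) -> bool:
    """A US zip code field must be exactly five digits (ignoring ZIP+4 extensions)."""
    return bool(re.fullmatch(r"\d{5}", zip_code))

# Hypothetical alias table mapping known variants and misspellings
# to a canonical company name.
COMPANY_ALIASES = {
    "p&g": "Procter & Gamble",
    "proctor & gamble": "Procter & Gamble",  # common misspelling
    "procter & gamble": "Procter & Gamble",
}

def normalize_company(name: str) -> str:
    """Return the canonical name for known aliases; pass unknown names through."""
    return COMPANY_ALIASES.get(name.strip().lower(), name.strip())
```

Real entity-resolution systems use fuzzy matching rather than a fixed lookup table, but the principle is the same: rule-based checks catch the simple entry-level corruption before it reaches the model.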

However, many other input errors can occur, especially around something as complex as a global supply chain. “It gets really complicated, really fast,” Mr. Krantz continues. Interos provides assessments for six different types of risk. “Each of these independent risk factors has individual, unique variables that may require manual intervention and cleanup to adjust and correct.” There may be changes on the regulatory front. For example, over 15,000 companies were added to the US restricted entity list in 2023 and 2024.

Or perhaps the effective date of a piece of legislation has been delayed, which changes the scores. “It’s a hornet’s nest of literally countless entry-level potential corruptions that you’re constantly adjusting to. So the bottom line here is that for the foreseeable future, we need a team that is constantly checking the data.” The team, the CEO explains, is constantly “poking” at the system to try to find errors, as well as interacting with customers who think some of the results may be wrong. “This is endless. It’s a beast. People are not being replaced here. They are actually moving into more strategic positions.”

Risk of AI model corruption

AI models can also be corrupted. “The complexity at the model level is orchestrating private signals, company signals, what signals are being replaced by others, and getting that calibration right.” For Interos, the AI model calculates the Interos risk score on a scale of 0 to 100. There are green, yellow and red indicators, as well as maps and monitoring capabilities attached to the scores.

At the model level, a set of issues surrounds how the output is modeled. “What are the variable weights associated with that point?” How much weight should be given to each variable that makes up the outcome? “Like at the entry level, we need a team around this that is constantly calibrating how the score should be calculated.” And as with the data level, ongoing collaboration with customers is necessary to ensure that the scoring mechanism is accurate. There is always a “man in the loop.” “If someone says they don’t have that, they’re just not being truthful.” For example, online news articles may generate event data. But creating real-time maps around this risk often requires humans to tweak the algorithm.
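Conceptually, a weighted composite score like the one described above looks something like the sketch below. The factor names, weights and traffic-light thresholds are illustrative assumptions, not Interos’ proprietary calibration.

```python
# Hypothetical variable weights; the calibration Mr. Krantz describes is the
# ongoing work of tuning values like these.
WEIGHTS = {"financial": 0.4, "cyber": 0.35, "geopolitical": 0.25}

def risk_score(signals: dict) -> float:
    """Combine per-factor signals (each 0-100) into a 0-100 composite score."""
    return sum(WEIGHTS[k] * signals[k] for k in WEIGHTS)

def indicator(score: float) -> str:
    """Map a score to a traffic-light indicator (thresholds are illustrative)."""
    if score >= 70:
        return "green"
    if score >= 40:
        return "yellow"
    return "red"
```

The calibration question in the quote above is exactly the question of what belongs in `WEIGHTS`: small changes to those values move suppliers across the green/yellow/red boundaries.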

Whatever result is generated, “customers will naturally challenge that result. We need a way to determine the integrity of the Interos score that is a strictly unbiased industry view based on the data that we see.” For example, a customer might see a cyber risk score of 70. Sophisticated customers seek to understand how that score was generated and argue that if the score were calibrated differently, their score would be higher. And that customer may be right. There should be an element of collaboration around the risk model.

One thing Interos is working on is giving customers the ability to weight the parameters themselves. For example, a supplier’s risk score may be based in part on the FICO credit score created by the Fair Isaac company. In the future, Interos’ customers may decide to give that variable either more or less weight.
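Customer-adjustable weights could work roughly as follows: overrides are blended with the vendor defaults and the result is renormalized so the weights still sum to one. This is a sketch under assumed factor names (`fico`, `cyber`, `esg`), not a description of Interos’ actual feature.

```python
# Hypothetical default weights set by the vendor.
DEFAULT_WEIGHTS = {"fico": 0.5, "cyber": 0.3, "esg": 0.2}

def reweighted_score(signals: dict, overrides: dict = None) -> float:
    """Score 0-100 signals with customer weight overrides applied on top of
    the defaults, renormalizing so the effective weights sum to 1."""
    weights = {**DEFAULT_WEIGHTS, **(overrides or {})}
    total = sum(weights.values())
    return sum((w / total) * signals[k] for k, w in weights.items())
```

A customer who trusts FICO less could pass `{"fico": 0.2}`, shifting relative weight onto the cyber and ESG signals without the vendor changing its defaults.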

Interos’ CEO points out that risk models have different levels of complexity. Some of the data, like FICO scores, is “quasi-historical.” In other cases, such as predicting storm paths and which suppliers may be affected, the model is a real-time forecast. For more complex models, Interos must move more slowly before allowing customers to change parameters.

The risk that AI results can be corrupted

Finally, AI results can be corrupted. One risk here is the potential loss of intellectual property: a malicious hacker or government might find an entry point into a company’s application and view, corrupt or block data. All enterprise software suppliers need robust cybersecurity.

Interos just released a report called 5 Supply Chain Predictions You Need to Know in 2025. Interos predicts that traditional cyber attacks – malware, ransomware, phishing, etc. – will continue into 2025, but they warn that we should be on the lookout for more disruptions to the physical infrastructure that is fundamental to our digital world. Geopolitical rivalries hold the potential for significant cyber disruptions in the hardware and software we depend on to make our world go round. Increasingly, enterprise applications run in public clouds. An attack that takes down a public cloud platform doesn’t just affect one company; it affects many companies.

In summary, the risks of AI extend far beyond the risk of AI generating hallucinations.
