In Regulated Industries, the Question Isn’t Whether to Use AI, It’s How to Use It Responsibly


In high-stakes, regulated industries, the promise of AI is huge: faster decisions, deeper insights, greater efficiency. But so are the risks. When lives, compliance, and critical operations and infrastructure are on the line, there is no room for hallucinations or black-box reasoning.

Mission-critical applications demand more than cutting-edge algorithms or powerful models. They require responsible intelligence: AI that is auditable, explainable and aligned with regulatory standards from day one. That is only possible when human expertise and AI innovation are brought together by design, not as an afterthought.

As Aden Hopkins, CEO at XpertRule puts it: “The future isn’t human versus AI. It’s human with AI. Engineered for trust and built for the real world.”

AI Misperceptions

AI adoption typically swings between two extremes: risk-averse underuse or high-risk overreliance. On one hand, stakeholders avoid risk by limiting AI to simple or low-impact tasks such as document summarisation or content generation, missing its full potential. On the other, some dive in without fully understanding the risks, exposing themselves to commercial failure, regulatory breaches and reputational damage. In a world where AI is advancing rapidly, and bold claims about its impact are everywhere, this duality creates real challenges for industry and the wider public alike. Striking the right balance is no longer optional; it is critical, especially within regulated industries.

Yet many companies continue to build their AI policies and strategies on shaky foundations, underestimating the technology’s limitations and risks. No financial institution, pharmaceutical company or manufacturer wants a reputation for unsafe practices, or the inevitable fine for a regulatory breach. Yet organisations that rely blindly on AI-generated decisions, without being able to determine how those decisions were made or which data informed them, and without flags in place that compel human intervention at key stages of the process, make such mistakes inevitable. It is just a matter of time.

Compliance Priority

Responsible decision making, underpinned by verifiable, trusted data and processes, is the foundation for mission-critical solutions, especially within regulated markets. Yet LLM-based AI agents are black-box solutions that do not offer this essential transparency and auditability. Even with the addition of explanation tools, regulators are increasingly sceptical of black-box AI models, according to Stanford HAI’s 2023 EU AI Act analysis.

With no way to validate the AI model prior to deployment, the only option is to question why a decision was made after the fact. At that point the AI will produce a plausible-sounding explanation, but not one grounded in the actual process that generated the decision. This after-the-event rationalisation is not true transparency. It is not auditable. And it is certainly not safe for mission-critical applications and industries.

The inherent ambiguity of language, the inability of LLMs to perform logical reasoning and their non-deterministic nature mean that LLM-based AI agents cannot deliver the reliability required for decisions in mission-critical applications. Who wants to bet the business on systems that are inherently unreliable and inconsistent, especially without the ability to explain how a decision was made, or to trust the veracity of the explanation? Is it any wonder that 80% of businesses do not fully trust their AI systems? Compliance requires real-time explainability, not rationalisation after decisions are made.

Human Intelligence  

The fact that LLMs and Gen AI have serious flaws is not news, certainly not to technology experts or to anyone experienced in the AI field. LLMs are designed to model language, and they do so brilliantly. They are excellent when used appropriately for certain non-critical tasks. Where LLMs fail is when this approach is used as a proxy for modelling real-world human decision making.

But that is no reason to step away from the power of AI to deliver transformational business gains. Successful AI innovation needs to model human decision making using a combination of AI technologies, not just LLMs. Solving complex, high-stakes problems requires the experience and understanding of human experts. It demands a way to maximise the value of LLMs safely, with guardrails, processes and workflows that both reflect human understanding and ensure AI defers to human expertise when necessary.
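The deferral pattern described above, where AI output is auto-approved only when it clears expert-defined checks and is otherwise escalated to a human, can be sketched in a few lines. This is a minimal illustration in Python: the names, the threshold value and the two-check policy are all assumptions chosen for clarity, not a description of any specific product.

```python
from dataclasses import dataclass

# Hypothetical policy value: in practice this threshold would be set
# and reviewed by domain experts, not hard-coded by developers.
CONFIDENCE_THRESHOLD = 0.90

@dataclass
class Decision:
    outcome: str        # the model's proposed action
    confidence: float   # model-reported confidence, 0.0 to 1.0
    rationale: str      # recorded at decision time, for the audit trail

def route_decision(decision: Decision, rules_passed: bool) -> str:
    """Auto-approve only when deterministic business rules pass AND
    model confidence clears the expert-set threshold; otherwise
    defer to a human reviewer."""
    if rules_passed and decision.confidence >= CONFIDENCE_THRESHOLD:
        return "auto-approved"
    return "escalated-to-human-review"

# A low-confidence decision is routed to a person even if rules pass.
print(route_decision(Decision("approve claim", 0.72, "matched policy X"), True))
```

The key design choice is that escalation is the default path: the system must positively earn automation on every decision, rather than a human having to catch its failures after the fact.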

Rather than a single technology, responsible AI is defined by a set of guiding principles that inform the design and development of AI solutions. Reliability, consistency, accountability, auditability, compliance, safety and adaptability are fundamental to the process, and at every stage the deployment must embed human oversight, critical thinking and ethics. A composite platform that combines a range of AI technologies alongside Gen AI and LLMs provides the foundation for the next evolution of AI, allowing organisations to innovate safely whilst also meeting regulatory compliance demands.

Future of AI in Regulated Industries

The future is exciting. AI can be transformational. But the current rash of bad-news stories is causing uncertainty, especially in regulated industries. The growing recognition among businesses that LLMs and Gen AI cannot, in isolation, deliver essential reliability, explainability and auditability must not be allowed to tarnish the entire field of AI or undermine the transformational benefits it can deliver.

Businesses cannot afford to rush headlong into unchecked AI innovation, but it is equally imperative that organisations do not panic and shy away from the enormous potential gains on offer. There are proven ways to leverage the power of LLMs and Gen AI to drive tangible business value. With the right approach, expertise and blended AI technologies, responsible AI maximises both human and artificial intelligence to deliver mission-critical decision making.

By Aden Hopkins, CEO at XpertRule