With agentic AI, the promise is about so much more than process enhancement and greater productivity. It is about redefining a function’s purpose and value, ArisGlobal’s Jason Bryant explains.
Agentic artificial intelligence (agentic AI) is a significant leap forward in AI, building on the already impressive impact of generative AI (GenAI). Agentic AI involves the coordination of specialist AI tools or “agents” to complete assigned goals – not by following prescribed rules, but by reasoning about how best to deliver results.
Its significance lies in the technology’s inherent scope to redefine how organisations operate and the value they deliver. With agentic AI, there is much greater autonomy in what AI does and how. Given a desired outcome, individual agents each harness their own intelligence, experience and reasoning to deliver their part in the most effective way possible, deciding what will be required, where to find it, and so on.
All of this is co-ordinated by an orchestrator agent. As well as optimising the end-goal delivery, the orchestrator uses the collective insights to propose new ways to add value. Put another way, this next phase of AI is about so much more than optimised processes or enhanced productivity.
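The orchestrator-and-agents pattern described above can be sketched in a few lines. This is a minimal, illustrative toy: the agent names, the plan structure and the string results are assumptions for demonstration, not any real agentic framework.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Agent:
    """A specialist agent: takes a sub-goal, returns its contribution."""
    name: str
    handle: Callable[[str], str]

class Orchestrator:
    """Co-ordinates specialist agents towards an overall goal."""
    def __init__(self, agents: dict[str, Agent]):
        self.agents = agents

    def run(self, plan: list[tuple[str, str]]) -> dict[str, str]:
        # Execute each (agent, sub-goal) step and collect the results,
        # which the orchestrator can then aggregate into the end goal.
        results = {}
        for agent_name, sub_goal in plan:
            results[sub_goal] = self.agents[agent_name].handle(sub_goal)
        return results

# Usage: two toy agents co-ordinated towards one submission goal
agents = {
    "gap_analysis": Agent("gap_analysis", lambda g: f"gaps found for: {g}"),
    "drafting": Agent("drafting", lambda g: f"draft produced for: {g}"),
}
plan = [("gap_analysis", "EU module 1"), ("drafting", "EU module 1 cover letter")]
print(Orchestrator(agents).run(plan))
```

In a real deployment the orchestrator would also plan the steps itself and reason over the collected results; here the plan is fixed to keep the sketch self-contained.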
What are the implications for functions such as reg affairs and post-market drug safety?
The ability to reason, anticipate, generate insight and knowledge, and make better decisions is well matched to late-stage pharma R&D activities, which are data-intensive, process-heavy and outcome-critical. Generative AI is already proving indispensable in functions such as regulatory affairs and drug safety/pharmacovigilance. Use cases to date include marketing authorisation application preparation, product change control/regulatory impact assessment management, adverse event case processing, and safety reporting.
Agentic AI’s ambitions are greater still, offering to transform not only the output but also the value and purpose of Safety, Regulatory and adjacent teams.
Regulatory opportunities
Regulatory opportunities for agentic AI include reinventing the global management of product regulatory compliance. Autonomous, “regulation-aware” dossier assembly and submission orchestration is within reach now. It is possible for orchestrated AI agents to continuously ingest clinical data packages, study reports, CMC documents, eTMF pointers and legacy submission artefacts. Agentic systems can also perform automated regulatory gap-analysis versus target-region requirements, draft region-specific CTD/eCTD modules (with citations and traceability to source documents), and orchestrate the technical packaging (file naming, folder structure, etc).
Where there is any ambiguity, the agentic system could generate a short “decision rationale” and a list of recommended human checks, and run a rules/validation pass (file integrity, cross-reference checks, local appendices). This could inform autonomous routing of items to subject experts (e.g., CMC, clinical, labelling) with suggested edits and severity scores – providing human reviewers with a near submission-ready dossier.
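The rules/validation pass and severity-scored routing described above can be sketched as a simple rule table. The rule names, severities and expert queues below are illustrative assumptions, not actual regulatory publishing tooling.

```python
# Each rule: (rule_id, check on the document, severity, expert queue)
RULES = [
    ("file_integrity", lambda doc: doc.get("checksum_ok", False), "high", "publishing"),
    ("cross_refs",     lambda doc: not doc.get("broken_refs", []), "medium", "regulatory"),
    ("local_appendix", lambda doc: doc.get("has_local_appendix", False), "low", "labelling"),
]

def validate_and_route(doc: dict) -> list[dict]:
    """Run each rule; failures become findings routed to a subject
    expert with a severity score, ready for human review."""
    findings = []
    for rule_id, check, severity, queue in RULES:
        if not check(doc):
            findings.append({"rule": rule_id, "severity": severity, "route_to": queue})
    return findings

# Usage: a dossier fragment with one broken cross-reference and a
# missing local appendix
doc = {"checksum_ok": True, "broken_refs": ["ref-12"], "has_local_appendix": False}
for finding in validate_and_route(doc):
    print(finding)
```

A production system would attach the "decision rationale" and suggested edits to each finding; the routing table itself is the point of the sketch.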
Strategically, shorter regulatory cycle times promise to accelerate go/no-go decisions and speed up patient access, while sponsors would be in a position to iterate protocols more swiftly. Meanwhile agents’ gap-analysis outputs could be fed upstream to clinical operations and protocol teams, enabling trials to be designed that need fewer regulatory clarifications further down the line.
Pharmacovigilance potential
In post-market drug safety, the use of AI to streamline Medical Dictionary for Regulatory Activities (MedDRA) coding of adverse events offers considerable potential to transform the value of pharmacovigilance. AI technology is already helping to transform efficiency and accuracy around the classification of adverse event data, with the potential to invoke additional reference cross-checks, or expedite next actions. Combining autonomous MedDRA coding with proactive signal triage could help to eliminate manual bottlenecks.
If designated agents detect an unusual combination of coded terms, for example, they could raise an automated “probable signal” alert; pre-populate a signal report draft (including proposed case lists, timeline and supporting evidence snippets); and recommend a triage priority for human safety reviewers. The time to first credible signal would be shortened, and experts freed to focus on ambiguous/novel cases and investigation design. Meanwhile the system could route high-risk clusters to epidemiology/medical affairs automatically and suggest immediate risk-mitigation actions (e.g., targeted communications, batch holds, enhanced monitoring), enhancing human decision-making.
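A toy version of that detection step might count co-occurring coded terms across cases and flag pairs that cross a frequency threshold as “probable signals” with a triage priority. The term names and thresholds here are illustrative assumptions; real signal detection uses statistical disproportionality methods over far larger datasets.

```python
from collections import Counter
from itertools import combinations

def triage_signals(cases: list[list[str]], threshold: int = 2) -> list[dict]:
    """Flag pairs of coded terms that co-occur in at least
    `threshold` cases, with a simple triage priority."""
    pair_counts = Counter()
    for terms in cases:
        # Count each unordered pair of distinct terms once per case
        for pair in combinations(sorted(set(terms)), 2):
            pair_counts[pair] += 1
    alerts = []
    for pair, count in pair_counts.items():
        if count >= threshold:
            alerts.append({
                "terms": pair,
                "case_count": count,
                "priority": "high" if count >= 2 * threshold else "medium",
            })
    return alerts

# Usage: three toy adverse event cases with MedDRA-style terms
cases = [
    ["Hepatic failure", "Rash"],
    ["Hepatic failure", "Rash", "Nausea"],
    ["Nausea"],
]
print(triage_signals(cases))
```

From here, each alert could pre-populate a draft signal report and be routed to a human safety reviewer, as the article describes.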
Optimised, adaptable governance
When AI systems are given new autonomy across extended workflows, the potential risks extend beyond incorrect outputs. They include unintended data movement, loss of operational control, misaligned decision-making and blurred lines of accountability. Guardrails are therefore essential to mitigate unintended behaviour.
Yet being too prescriptive and fixed about controls could hamper future potential. Although multi-agent frameworks are emerging, these do not, of themselves, provide for trust, context-sensitive decision-making or risk-aware governance. Those considerations need both to be designed in from the start and to be adaptable to evolving needs, so that risk mitigation doesn’t stifle future value.
Taking a principles-based approach, rather than one that is hard-wired around specifics, is a good way to support process stakeholders in defining scenarios and goals that agentic AI could help address. An interesting contribution here comes from the Council for International Organizations of Medical Sciences (CIOMS) Working Group XIV on AI in Pharmacovigilance[1]. It aims to create a common foundation for regulators, industry and technology providers that can keep pace with the unprecedented rate of technological advancement.
Companies can complement such principles with their own systems-thinking or service-design methods – e.g. developing journey maps to plot how agentic workflows trigger, interact and evolve – to help translate high-level principles into operational governance models, including the degrees of autonomy afforded to individual agents.
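One way such a governance model could be made operational is a per-agent autonomy policy, where actions above an agent’s permitted level trigger a human checkpoint. The level names and agent names below are hypothetical, offered only to show the shape of the idea.

```python
# Ordered autonomy levels, from least to most autonomous (assumed names)
AUTONOMY_LEVELS = ["suggest", "draft", "act_with_review", "act"]

# Illustrative policy: high-risk steps get the lowest autonomy
AGENT_POLICY = {
    "gap_analysis": "act_with_review",
    "dossier_drafting": "draft",
    "submission_dispatch": "suggest",
}

def requires_human(agent: str, requested_action_level: str) -> bool:
    """An action needs human sign-off if it exceeds the agent's
    permitted autonomy level under the policy."""
    allowed = AUTONOMY_LEVELS.index(AGENT_POLICY[agent])
    requested = AUTONOMY_LEVELS.index(requested_action_level)
    return requested > allowed

# Usage: dispatching a submission autonomously exceeds its policy...
print(requires_human("submission_dispatch", "act"))  # True
# ...whereas drafting is within the gap-analysis agent's remit
print(requires_human("gap_analysis", "draft"))  # False
```

Because the policy is data rather than hard-wired logic, the levels can be relaxed over time as confidence in the agents grows – the adaptable human involvement the article goes on to describe.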
All of this can feed into plans for adaptable provisions for human involvement (e.g. as technology continues to advance, and as users and responsible managers become more comfortable with its output). This will allow companies to move at their own pace towards accepted use of agentic AI and its wider potential.
About the author
Jason Bryant is Senior Vice President, Product Management – AI, at ArisGlobal, based in London. A data science actuary, he has built his career in fintech and health-tech, and specialises in AI-powered, data-driven, yet human-centric product innovation. He previously led an AstraZeneca digital incubator and today remains on the board of a health charity, Scleroderma & Raynaud’s UK (SRUK), which is dedicated to improving the lives of people affected by those conditions in the UK. https://www.arisglobal.com/
References
[1] Artificial intelligence in pharmacovigilance, CIOMS Working Group report, Draft, 1 May 2025: https://cioms.ch/wp-content/uploads/2022/05/CIOMS-WG-XIV_Draft-report-for-Public-Consultation_1May2025.pdf
