In 2026, setting clear AI expectations is the priority. On January 14, 2026, the EMA and FDA marked a step towards regulatory alignment by publishing joint guiding principles for good AI practice. These shared expectations establish a common standard for what ‘good’ looks like. The principles inform how AI should support evidence across the lifecycle, including clinical trials and safety monitoring.
Rather than adding new requirements, the principles act as a reference point for how AI should be applied within regulated processes. They remain technology-neutral while being operationally specific, calling for clear definition of context of use, implementation of risk-based controls, governance of data and documentation, and lifecycle management to ensure oversight remains intact.
The focus is no longer just on whether AI can be used, but whether it can be run reliably, transparently, and with accountability.
The focus shifts from what AI can do to how it is controlled
Over the past 18–24 months, many life sciences organisations have explored AI in “text-heavy” and assistive use cases: summarising long documents, classifying content, extracting data points, and drafting internal or external text. That is a logical starting point because it is fast to pilot and easy to demonstrate value.
The harder step is what comes next. As soon as AI output starts to influence regulated evidence, decisions, or actions, “it appears to work” is no longer the standard. The bar becomes repeatability with oversight: consistent use in context, clear accountability with a human in the loop, and a record that holds up to audit scrutiny over time. That is what the EMA–FDA principles bring into sharper focus.
Context of use sets the standard for AI risk and control
The most consequential phrase in the regulators’ framing is “context of use.” AI is not evaluated in the abstract. It is evaluated in the context of what it is intended to do, what it influences, and what risk it introduces.
A drafting assistant used to summarise internal notes is one thing. AI that uses the same summarisation method to prioritise safety case review, flag potential signals, or initiate steps in trial operations is another. The same underlying technique can move from low impact to high impact simply by where it sits in the process and how its output is used.
That difference matters because it changes what “good practice” requires in the real world. When the context of use is higher risk, the surrounding expectations shift with it: clearer controls, stronger documentation, explicit human oversight, and more disciplined change management. The practical implication is that AI programs will increasingly need an inventory of use cases mapped to risk, with defined boundaries and a shared understanding of accountability.
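To make that practical implication concrete, here is a minimal sketch of what one entry in such an inventory could look like. The field names, risk tiers, and example controls are illustrative assumptions, not anything prescribed by the principles; the point is that the same summarisation technique lands in different tiers, with different controls, depending on its context of use.

```python
# Illustrative sketch of an AI use-case inventory entry. Field names
# and risk tiers are assumptions for illustration, not terminology
# prescribed by the EMA-FDA principles.
from dataclasses import dataclass, field
from enum import Enum


class RiskTier(Enum):
    LOW = "low"        # e.g. summarising internal notes
    MEDIUM = "medium"  # e.g. classifying content for triage
    HIGH = "high"      # e.g. prioritising safety case review


@dataclass
class AIUseCase:
    name: str
    context_of_use: str          # what the AI is intended to do
    influences: str              # what its output affects downstream
    risk_tier: RiskTier
    accountable_owner: str       # a named role, not "the algorithm"
    human_in_the_loop: bool
    required_controls: list[str] = field(default_factory=list)


# The same summarisation technique at two different risk tiers.
drafting_assistant = AIUseCase(
    name="internal-note-summariser",
    context_of_use="Summarise internal meeting notes for drafting",
    influences="Internal drafts only; no regulated record",
    risk_tier=RiskTier.LOW,
    accountable_owner="Medical Writing Lead",
    human_in_the_loop=True,
    required_controls=["output review before reuse"],
)

signal_triage = AIUseCase(
    name="safety-case-prioritiser",
    context_of_use="Summarise and rank incoming safety cases for review",
    influences="Order and urgency of safety case review",
    risk_tier=RiskTier.HIGH,
    accountable_owner="Pharmacovigilance Lead",
    human_in_the_loop=True,
    required_controls=[
        "documented performance assessment",
        "audit trail of accepted/overridden outputs",
        "change control on model and prompt versions",
    ],
)
```

The two entries differ only in context and consequence, which is exactly what drives the difference in required controls.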
Trust in AI is determined by workflow and record
The guiding principles are technology-neutral, but they are not neutral about execution. In practice, that broadens attention beyond the model itself to the surrounding controls and documentation: data provenance, versioning, how outputs are reviewed and used, and what changed over time.
This is where a strategic design choice becomes critical: whether AI sits outside the regulated workflow as a separate tool, or whether it is embedded inside the systems where regulated work actually happens.
When AI is embedded in a regulated workflow, it can inherit what regulated environments already require: role-based permissions, standardised steps, controlled handoffs, and audit trails by design. When AI sits outside, teams often end up reconstructing context and controls after the fact, copying information across tools, reconciling versions, and rebuilding the “story of the record” during reviews and inspections. That is manageable in a pilot. It is fragile at scale.
This “embedded” direction is also where industry product strategy is heading. For example, Veeva has described a phased rollout of AI agents that starts in commercial applications and expands across R&D and quality through 2026, which reflects the broader point: embedding AI deep inside the operational systems of record is structurally better aligned with governed data, auditability, and repeatable oversight than treating AI as a separate overlay.
Lifecycle management becomes essential as AI enters regulated workflows
Lifecycle management can sound abstract until you map it to how AI behaves in production. New foundation models are released. Prompts derived from agent objectives and instructions evolve. Knowledge sources and transactional data change. Any of these changes can subtly alter AI outputs in ways that matter in a regulated context. The regulators’ emphasis on risk-based performance assessment and lifecycle management is, in practice, a reminder that “set it and forget it” is not an option once AI influences regulated work.
What does “lifecycle management” mean in operational terms?
- Version control that is audit-relevant: not only for code and models, but also for prompts, agent instructions, templates, and reference material that influence outputs (see the sketch after this list).
- Defined monitoring and drift triggers: what you monitor, how often, and what constitutes meaningful change.
- Change control proportional to risk: when a change requires documentation, review, re-qualification, or rollback.
- Clear accountability when AI output is accepted, overridden, or escalated: so responsibility does not evaporate into “the algorithm said so.”
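As a concrete illustration, the sketch below shows what audit-relevant versioning and a drift trigger could look like in practice. The record fields, the acceptance-rate metric, and the 5% tolerance are illustrative assumptions rather than a prescribed design; the point is that prompt changes and performance drift both leave a reviewable trail with named owners.

```python
# Illustrative sketch: audit-relevant versioning for a prompt, plus a
# simple drift trigger. All fields, metrics, and thresholds are
# assumptions for illustration, not a prescribed design.
import hashlib
from dataclasses import dataclass
from datetime import datetime, timezone


@dataclass(frozen=True)
class PromptVersion:
    prompt_text: str
    version: str
    changed_by: str       # named person, for accountability
    change_reason: str    # why the prompt changed
    approved_by: str      # reviewer under change control
    timestamp: str

    @property
    def content_hash(self) -> str:
        # The hash ties the audit record to the exact text that ran.
        return hashlib.sha256(self.prompt_text.encode()).hexdigest()


def drift_triggered(baseline_accept_rate: float,
                    current_accept_rate: float,
                    tolerance: float = 0.05) -> bool:
    """Flag meaningful change in how often reviewers accept AI output.

    The metric and tolerance are illustrative; what matters is that
    'meaningful change' is defined in advance, not argued after the fact.
    """
    return abs(baseline_accept_rate - current_accept_rate) > tolerance


v2 = PromptVersion(
    prompt_text="Summarise the case narrative, flagging seriousness criteria.",
    version="2.0",
    changed_by="A. Reviewer",
    change_reason="Added explicit seriousness criteria to instructions",
    approved_by="Quality Lead",
    timestamp=datetime.now(timezone.utc).isoformat(),
)

# Reviewer acceptance fell from 92% to 81% against the baseline period:
# the trigger fires, prompting documented review under change control.
if drift_triggered(0.92, 0.81):
    print(f"Drift review required for prompt v{v2.version} "
          f"(hash {v2.content_hash[:12]})")
```

In this framing, versioning answers "what exactly ran and who approved it", while the drift trigger turns "meaningful change" into a threshold agreed in advance rather than debated after the fact.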
These are operating model disciplines. They are not optional add-ons delegated to a data science team. They sit at the intersection of quality, business process ownership, and technology, which is exactly where regulated execution already lives.
Regulation advances alongside a rising operational bar
The EMA–FDA principles arrive as part of a wider regulatory pattern that reinforces the same underlying message: trustworthy technology depends on trustworthy execution.
The European Commission’s timeline for the EU AI Act makes governance expectations increasingly concrete, with the Act fully applicable from 2 August 2026, alongside earlier milestones for prohibited practices and AI literacy (from 2 February 2025) and general-purpose AI obligations (from 2 August 2025).
In the U.S., the FDA’s October 2024 guidance on electronic systems, electronic records, and electronic signatures in clinical investigations clarifies what it means for electronic records and systems to be trustworthy and reliable. It is not AI guidance, but it becomes directly relevant when AI output is part of evidence generation. If AI influences evidence and decisions, the integrity of the electronic record and the system that produced it becomes more central, not less.
None of this suggests regulators want to slow innovation. The joint principles are explicitly framed to enable responsible use. The operational bar is rising in a way that rewards organisations that make oversight practical, repeatable, and scalable.
The next steps for leaders today
The practical response is not to slow down AI adoption. It is to operationalise it, so each new use case does not trigger a fresh governance debate.
For most organisations, the near-term opportunity is to use the EMA–FDA principles as a design requirement for the AI operating environment, even while product capabilities across clinical, regulatory, and safety continue to mature across the market. That means doing the foundational work to move from isolated AI applications to a governed capability in a regulated execution environment. This involves defining contexts of use, mapping them to risk, standardising what “essential information” looks like for oversight, and putting lifecycle controls around what changes and when.
This is also the moment to push vendors for specificity. Not “does it do AI,” but: where is AI in the workflow, what record does it produce, what is versioned, what is auditable, what is monitored, and what changes under change control? The more AI is embedded in the systems that already manage roles, permissions, workflows, and audit trails, the easier it becomes to scale responsibly over time – without rebuilding evidence and oversight from scratch.
Implementing AI for GxP environments
The EMA–FDA principles shift AI from a standalone tool to a core part of the regulated foundation for evidence generation and monitoring. Success will depend less on the number of pilots and more on embedding AI into controlled workflows, where decisions are reviewable and traceable, and change is managed with clear oversight. This is what will enable organisations to turn AI investment into operational advantage.
By Crystal Allard, senior director, Government Strategy, and Pratyusha Pallavi, executive director, Regulatory AI Strategy, Veeva Systems

