Generative AI is just one strand of artificial intelligence which is progressing at enormous speed, already pushing the boundaries of deep research, with profound implications for life sciences that have yet to be pinned down. ArisGlobal’s Jason Bryant asks what that means for companies trying to embrace the changes that are coming.
Within just two and a half years, Generative AI has disrupted entire industries. For a time, its potential was constrained by the material the technology had been exposed to, and then by its ability to understand that material in context. But with escalating momentum those limitations are being overcome. This presents a challenging duality: a future that is already here, yet still largely unknown. In life sciences, where an advantage lost could mean patients missing out, companies are wondering how on earth to move forward.
The unstoppable force of AI
GenAI is the branch of artificial intelligence that uses everything already known to create something new. From early conversational capabilities, through reasoning, the technology is now delivering ‘agentic’ capabilities: goal-driven abilities to act independently and make decisions, with human intervention only where needed.
There are early signs, too, that “innovating AI” is emerging, as AI becomes capable of creating novel frameworks, generating fresh hypotheses and pioneering new approaches. This creative potential pushes AI from merely processing information to actively shaping the future of scientific discovery, applying it to problems yet to be solved.
At the core of the latest GenAI advances is the accelerated pace of large language model (LLM) development. These deep learning models, trained on extensive data sets, are capable of performing a range of natural language processing (NLP) and analysis tasks, including identifying complex data patterns, risks and anomalies. A growing movement towards open-source GenAI models, meanwhile, is making the technology more accessible and customisable (alongside proprietary models).
Reimagining scientific discovery and deep research
In life sciences, there are persuasive reasons to keep pace with and harness latest developments as they evolve. GenAI is poised to become a gamechanger in scientific discovery and new knowledge generation – at speed and at scale.
On benchmark measures of human intelligence, AI models have already reached and surpassed human expert levels[1]. Recent advances in agentic AI models have even prompted the creation of a new benchmark[2].
Advanced reasoning, a highlighted benefit of DeepSeek’s latest AI model, has enormous scope in science, enabling logical inference and advanced decision-making. Google and OpenAI both offer Deep Research agents that go off and perform their own searches, combining reasoning and agentic capabilities. As reasoning continues to improve, and as the technology becomes more context-aware, the potential to accelerate scientific discovery through the creation of new knowledge becomes real: the ability to project forward and ask “What if?” and “What next?”.
Already OpenAI’s Deep Research is optimised for intelligence gathering, data analysis and multi-step reasoning. It employs end-to-end reinforcement learning for complex search and synthesis tasks, effectively combining LLM reasoning with real-time internet browsing.
Meanwhile Google has recently introduced its AI co-scientist[3], a multi-agent AI system built with Gemini 2.0 as a “virtual scientific collaborator”. Give it a research goal, and off it will go – suggesting novel hypotheses, research directions and research plans.
Which way now?
With all of this potential, the strategic question for biopharma R&D becomes one of how to keep pace with these technology developments and build them into business as usual; how to prepare for a future that is simultaneously already here and continuously changing shape?
Up to now, most established companies have experimented with GenAI to see how it might address everyday pain points in Safety/Pharmacovigilance, Regulatory, Quality and some Clinical and Pre-Clinical processes. These activities have been largely about becoming familiar with the technology, and assessing its trustworthiness and value. Others have gone further, creating lab-like constructs for experimentation.
Yet the hastening pace of technology development, and the intangibility around what’s coming, means that the industry now needs to embed AI more intrinsically within its infrastructure and culture. This is about proactively becoming AI-ready rather than simply “receptive to” what the technology can do.
Being discerning as “experts” hover
In the past, a popular approach to a hyped new technology or business change lever has been to “pepper” associated champions across the business. In this case, some organisations are taking a venture-capital-like approach of bringing non-native AI talent into key roles – visionaries and master-crafters from other industries. But AI is moving so quickly, and its likely impact is so fundamental to life sciences, that experts need to be “neck-deep” in it to be of strategic value.
One of the biggest challenges is the duality companies are now grappling with: the simultaneous need to be ready for, and get moving with, deeper AI use today, while gearing up for a tomorrow that is likely to look very different. This has widespread “change” implications: at a mindset and method level, and from a technical and cultural perspective – both today and tomorrow.
For this reason, strategic partnerships are proving a safer route – with tech companies that are fully up to speed with the latest developments, are enmeshed in the technology and its expanding applications, and are actively building sector-specific solutions. Even so, companies will need to choose their AI advocates wisely, as “AI washing” is now commonplace among consultants and service providers, with new converts to the technology inflating their credentials in the field.
The good news is that internal IT and data teams are well versed in AI technology today, and have high ambitions for it. The challenge is bringing the technology’s potential to fruition where it could make a difference strategically. This is likely to involve sitting with an organisation’s real problem areas and understanding if and how emerging iterations of AI might offer a solution.
About the author
Jason Bryant is Vice President, Product Management for AI & Data at ArisGlobal, based in London, UK. A Data Science Actuary, he has built his career in fintech and healthtech, and specialises in AI-powered, data-driven, yet human-centric product innovation.
[1] Measuring Massive Multitask Language Understanding, Dan Hendrycks, Collin Burns, Steven Basart, Andy Zou, Mantas Mazeika, Dawn Song, and Jacob Steinhardt (ICLR 2021): https://arxiv.org/pdf/2009.03300
[2] Humanity’s Last Exam, November 2024, https://agi.safe.ai/
[3] Accelerating scientific breakthroughs with an AI co-scientist, Google Research blog, February 2025: https://research.google/blog/accelerating-scientific-breakthroughs-with-an-ai-co-scientist/