On AI Scientists with ARIA’s Antony Rowstron and Aayush Chadha
Welcome to Decoding Science: every other week our writing collective highlights notable news, from the latest scientific papers to the latest funding rounds in AI for Science, and everything in between.
We had the pleasure of speaking with Antony Rowstron and Aayush Chadha from the Advanced Research + Invention Agency (ARIA), the UK’s R&D funding agency built to unlock scientific and technological breakthroughs.
ARIA recently announced its funding programme for AI Scientist systems: autonomous systems that can generate hypotheses, design and run experiments, and analyze results with minimal human oversight. AI scientists have been discussed at length this year (some great write-ups here and here), and we were keen to understand how ARIA believes AI scientists will lead to novel breakthroughs in science.
In this interview we discuss:
ARIA’s AI Scientist call as a way to cut through noise, map what these systems can truly do today, and identify teams whose AI scientists can make real progress on difficult, interdisciplinary problems such as mitochondrial gene delivery.
How AI scientists will accelerate human research rather than replace scientists, enabling many more “shots on goal” via parallel experimentation.
How the UK will maintain scientific capability through advanced automation and democratized access to experimentation.
Antony Rowstron became ARIA’s inaugural CTO in June 2025 after 26 years at Microsoft Research. He’s led multidisciplinary teams working on a diverse set of subjects, from optical storage technologies to robotics for data centres. Aayush Chadha is a Frontier Specialist, focusing on how to accelerate research in labs across the UK. He’s worked across a diverse range of scientific disciplines, from ML research to material production and scaling.
Pablo: Could you introduce yourselves and describe your roles at ARIA?
Ant:
I’m the CTO at ARIA. I joined about six months ago after 26 years at Microsoft Research, where I led research across robotics, hardware, and new device development. I’m a computer scientist by training, not a domain scientist, but my role at ARIA spans several areas. I mentor our frontier specialists and I support program directors as they define and refine new program ideas.
Another part of my role is to help ARIA embrace AI properly. Internally that means improving the tools and workflows we use. Strategically it means understanding how AI can increase the velocity of scientific discovery across our portfolio, and how we can push AI in directions that make it more useful for science rather than just consuming whatever industry produces.
I am particularly focused on the emerging “AI scientist” space: systems that can generate hypotheses, design experiments, run them via automated labs, analyze results, and iterate. Several companies are starting to build things in this direction. A major part of my job is to understand how ARIA should interact with such systems and how our creators might collaborate with them.
Aayush:
I started out in computer science, working on early deep learning systems such as recurrent neural networks and LSTMs. I later moved into materials science, where I worked on graphene and quantum dots and dealt with a lot of material scaling challenges. After that I spent some time at Entrepreneur First trying to start a company, and then I joined ARIA.
At ARIA I work across multiple programs, mainly neurotechnology, manufacturing abundance, and our AI scientist efforts. Recently I have been visiting many labs in the UK, especially in materials science and chemistry, to understand their current level of automation and where there are opportunities to accelerate research.
Pablo: Ant, how does your previous experience, especially at Microsoft, shape your work at ARIA?
Ant:
At Microsoft I was deeply involved in building early cloud infrastructure. That gave me a strong sense of how you design very large technical systems for reliability, scale, and manageability. You think hard about uptime, standardization, abstraction layers, and how to make complex hardware behave like a dependable utility.
When I look at today’s labs, including the attempts at “cloud labs,” I rarely see that kind of systems thinking. Labs are often fragile, bespoke, and heavily dependent on people physically being there at odd hours. The instruments are usually designed for human operators and human interpretation.
Bringing cloud-style engineering into the lab world could have a huge impact. You want laboratory infrastructure that runs 24/7, 365 days a year, with high availability and minimal manual intervention. For AI scientists to be effective, the physical layer has to be that reliable. Without that foundation, even very capable AI systems will be constrained.
That experience also shapes how I think about where ARIA can add value. We are not just funding new algorithms. We are thinking about how computation, instruments, and automation come together to make a step-change in how science is done.
What is ARIA’s objective with the AI Scientist call, and what would success look like?
Ant:
The call is our first structured way to engage with AI scientist systems. We are trying to understand what exists today, what these systems can actually do, and where their limits are.
We asked applicants to show us one problem they believe their system can solve now and a stretch problem they think it cannot yet solve. That gives us a picture of the claimed frontier and where they themselves see the boundary. It also helps us distinguish between marketing and reality, which is always necessary in a hype-heavy area.
We expect many submissions to be narrow and domain specific. That is completely fine. At this stage, crawling is meaningful: a system that can reliably execute a well-defined scientific loop is already impressive. At the same time, we hope to see a few attempts at more general or more flexible systems.
Success for this call would mean several things. First, we gain a realistic map of current capabilities. Second, we learn how to evaluate and interact with these systems. Third, we identify teams and approaches that could plug into future ARIA programs as true participants in scientific work rather than just as tools on the side.
Aayush:
Some of the problems in the call are closely tied to our existing opportunity spaces and programs. A very concrete success would be a team whose AI scientist can make genuine progress on one of those problems, earlier than we expected a human-led team to do so.
The AI scientist systems that are publicly described today are mostly very narrow. ARIA’s programs tend to be interdisciplinary. Demonstrating that an AI scientist can bridge at least two substantial domains would be a strong signal that the field is maturing.
Could you give an example of the kinds of problems you included in the call?
Aayush:
One example is from our program on engineering mitochondria. The goal is to deliver nucleic acids into the mitochondrial matrix. That has been a long-standing challenge in biology. Many groups have worked on it for decades without a fully satisfactory solution.
However, there is now an inflection point. Advances in nanoparticle-based delivery and related physical chemistry suggest new routes to the problem. A useful AI scientist in this context would need to understand mitochondrial biology and the complexes on the mitochondrial membrane, but also be able to draw on knowledge from nanoparticle design and materials science.
A system that can connect those areas and explore design spaces for delivery vehicles, identify candidates that are more likely to be taken up, and design experiments to test them would be very valuable. It is a good example of the interdisciplinary capability we are interested in.
Are you mainly seeing specialized systems, like a CRISPR-GPT, or are you expecting general AI scientists?
Ant:
We’re working our way through the applications now, but they’re generally specialized. That is what the current ecosystem is geared toward. My hope is that at least a small proportion will be more general in scope.
What is already clear is that there is a lot of interest. We received hundreds of applications, far more than a typical call. That is exciting and slightly daunting, but it is a good sign of how much energy there is in this space.
I am not concerned if most of what we see is still at the “crawling” stage. This is a technically difficult area. If it were easy, it would not be appropriate for ARIA. Direction, credible progress, and clear thinking about limitations are more important than grand claims at this point.
What happens after this initial call? Will teams be able to apply these AI scientists to full ARIA programs?
Ant:
Many applicants have already asked how they can participate in our main programs. The answer is that this call is partly about learning how to make that possible.
There are at least two layers to think about. One is how AI scientists participate as “creators” in programs, alongside human-led teams. The other is the underlying lab infrastructure. If we want AI scientists to plan and run experiments at scale, labs need to look very different to how they look today.
My experience with cloud infrastructure is relevant here. In the cloud, you hide a huge amount of complexity behind stable interfaces and you build for high availability. When I look at lab automation today, including attempts at cloud labs, I do not think we are yet at that stage of maturity. There is a lot of room for ARIA to support work on more robust, modular and highly available lab systems.
Aayush:
This is also where instrument design comes in. Many instruments are optimized for human operators and for human-readable outputs. An MRI scanner is a good example. The raw data are nothing like the image a radiologist sees. There is a large processing pipeline to turn a very complex signal into something a human can interpret visually.
If you free yourself from the constraint that outputs must be directly interpretable by humans, you can design instruments that produce data in forms that are much more natural for machine learning systems. CERN is an extreme example in physics, where the raw data are far too complex for humans to look at directly. We think there is room for that kind of thinking in many more areas of science.
Do you view AI scientists as something that should be sovereign, perhaps in a national capability or national security sense?
Ant:
ARIA is not a defense agency. We are the non-defense counterpart of something like DARPA. So we do not frame our work as a national security project.
At the same time, it is obviously important that the UK remains a serious player as science becomes more automated. The UK is world leading in many areas of science. To stay that way, our researchers need access to the right facilities, including advanced automation and AI-driven infrastructure.
My focus is less on sovereignty as a security issue and more on capability. We want UK scientists to have state-of-the-art tools, including AI scientists and the labs they require, so that they can continue to do world-class work.
How do you expect the role of human scientists to change over the next five to ten years?
Ant:
I think in terms of velocity. AI will increase the speed at which humans can generate and test ideas. That is similar to what the cloud did for computing. It did not eliminate programmers, but it changed what they could do in a given amount of time.
For at least the next decade, I expect AI systems to be tools that augment human scientists rather than independent agents replacing them. The history of technology supports that view. Look at autonomous vehicles. The hype always runs ahead of reality. Autonomous driving programs started in the early 2000s and only now are we seeing limited deployments of robo-taxis in a few cities. Human drivers have not disappeared. The same pattern is likely here.
So I expect AI scientists to become very powerful partners, but not to remove humans from the loop any time soon. Humans will still be central in setting directions, judging what matters, and interpreting the broader implications of results.
Aayush:
Higher velocity also changes the structure of research. In biology and other sciences we often talk about “multiple shots on goal,” but historically the number of shots you can take is limited by experimental bandwidth. Automated experimentation and AI-driven design allow a lead scientist to pursue many hypotheses in parallel.
You can imagine a future where a principal investigator defines a research quest and an AI scientist explores dozens of branches at once, generates results, and suggests the most promising paths. That kind of parallelism is a step-change, not a five percent improvement.
Do you worry that AI scientists could constrain scientific creativity or homogenize scientific direction? How do you think about that risk?
Ant:
Technological waves will happen whether we like it or not. The question is whether you shape them or are shaped by them. I have lived through several smaller waves in computing. The pattern is always the same. You either ride and steer the wave, or you get overwhelmed by it.
ARIA’s responsibility is to participate in this wave in a way that keeps it aligned with what is good for science and society. As a funding body we can influence which approaches get scaled. We can insist on human oversight where it matters, encourage diversity of approaches and make sure the incentives are not set up in a way that narrows science prematurely.
We are also investing a lot of thought in ethics and governance inside programs. It is not just about what is technically possible. It is about what kind of scientific ecosystem we want to build as these tools mature.
What does your ideal lab look like in five to ten years?
Ant:
There are two main aspects. The first is interdisciplinary instrumentation. Today labs are siloed. Biology instruments live in biology labs, materials instruments in materials labs, and so on. In my experience, many breakthroughs happen at the interface between fields.
I would love to see labs where unusual combinations of instruments sit side by side and are easily accessible to researchers and to AI systems. Ideally we would see discoveries that the scientists involved can honestly say would not have occurred if those tools had not been colocated, or if a system had not suggested an unexpected combination.
The second is full automation. When I was at Microsoft, our team took optical experiments that used to involve physicists turning knobs and logging values by hand and turned them into fully automated rigs. We added motorized stages, sensors and control software, then streamed all the data. At one point we were sending more data into UK data centers than any other UK customer. The scientists went from standing at a bench all day to writing Python scripts, pressing run and getting graphs a few hours later. The speedup was dramatic.
I recently heard about students staying until four in the morning to record bacterial growth measurements because an experiment takes hundreds of hours. In a better lab, they would go home at five in the afternoon, and the system would keep running and logging results overnight. That is the kind of change I want to see at scale.
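The unattended overnight run Ant describes boils down to a simple control loop: poll an instrument on a schedule, timestamp each reading, and log everything for the scientist to analyze the next morning. Here is a minimal illustrative sketch in Python, assuming an instrument driver can be wrapped as a zero-argument callable; the `run_unattended` function and the simulated optical-density readings are our own invention, not ARIA’s or Microsoft’s actual software.

```python
import time
from typing import Callable, List, Tuple


def run_unattended(measure: Callable[[], float],
                   n_samples: int,
                   interval_s: float = 0.0) -> List[Tuple[float, float]]:
    """Poll an instrument n_samples times, returning (timestamp, value) pairs.

    `measure` stands in for a real instrument driver call; here it is any
    zero-argument callable that returns a float.
    """
    log: List[Tuple[float, float]] = []
    for _ in range(n_samples):
        log.append((time.time(), measure()))
        if interval_s:
            time.sleep(interval_s)  # wait between readings; 0 in this demo
    return log


# Simulated optical-density readings for a bacterial growth curve
readings = iter([0.05, 0.11, 0.23, 0.44, 0.79])
log = run_unattended(lambda: next(readings), n_samples=5)
print(len(log))  # all samples captured without anyone at the bench
```

In a real deployment the loop would run as a service with failure handling and data streaming, but the structural point stands: once measurement is a function call, the student goes home at five and the growth curve is waiting in the morning.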
Aayush:
I would also add democratization: at the start of the computing era, mainframes sat in special rooms and only a few people got access. Today almost everyone carries a powerful computer in their pocket. I think something similar could happen in science.
In a world where AI scientists work well, the ability to design simple experiments, run them and interpret the results could become accessible to many more people, including students and hobbyists. A child asking “why does this behave this way?” could run a small experiment and get a serious, structured explanation of what happened and what it implies. That kind of accessibility would not only train future scientists but also raise the general level of curiosity and understanding in society.
After these projects conclude, will ARIA publish what you have learned?
Ant:
We will not publish detailed results for each individual team, but we do intend to publish a high-level synthesis of what we have learned. That will include our sense of the current state of the art, where systems are actually delivering versus just promising, and where we see major opportunities and gaps.
This is not just useful for scientists. If AI scientists become real and have the impact we expect, they will matter to everyone. So we want to make sure that the broader public narrative is informed by reality rather than by hype alone. That means being transparent, at the right level of abstraction, about what works and what does not.
We are also thinking hard about ethics, governance and the role of humans in the loop as these systems evolve. The goal is to help create the wave we want, not to be surprised by the one we get.
Thank you both, excited to see the final report.
If you are interested in keeping up with ARIA’s progress in this space, follow ARIA here.
Did we miss anything? Would you like to contribute to Decoding Science by writing a guest post? Drop us a note here or chat with us on X.