Raia Hadsell
Google DeepMind
The Research and Applied AI Summit (RAAIS) is a community for entrepreneurs and researchers who accelerate the science and applications of AI technology. In the run-up to our 10th annual event on June 12th 2026 in London, we're running a series of speaker profiles to shed more light on what you can expect to learn on the day!
At RAAIS we focus on translating cutting-edge technology and research into production-grade products for real-world problems.
We are delighted to host Raia Hadsell as a returning speaker - she first spoke at RAAIS in 2017, when she was a Senior Research Scientist at DeepMind.
Raia is VP of Research at Google DeepMind, where she co-leads the Frontier AI unit. She joined DeepMind in 2014, when it was still a 50-person startup freshly acquired by Google, and her work has since spanned some of the field's hardest open problems: continual and transfer learning, deep reinforcement learning for robotics and navigation, and the models that power today's frontier systems.
The arc of a career
What makes Raia's research career unusual is the consistency of its through-line. She earned her PhD under Yann LeCun at NYU, where her dissertation on long-range vision for off-road robots received the Outstanding Dissertation award. That work helped shape metric learning and Siamese neural networks - architectures now so standard they underpin most modern contrastive learning. Her most highly cited papers include Dimensionality Reduction by Learning an Invariant Mapping and Learning a Similarity Metric Discriminatively, with Application to Face Verification, foundational contributions to representation learning that have collectively gathered tens of thousands of citations.
After a postdoc at CMU's Robotics Institute with Drew Bagnell and Martial Hebert, and a stint at SRI International's Vision and Robotics group, she joined DeepMind and turned her attention to a problem that had been nagging the field for decades: catastrophic forgetting.
Why continual learning matters
Neural networks are powerful learners but terrible rememberers. Train a model on task B and it forgets task A. This is catastrophic forgetting, and it has been one of the deepest obstacles to building AI systems that improve over time rather than being retrained from scratch. Raia's 2017 paper Overcoming Catastrophic Forgetting in Neural Networks proposed elastic weight consolidation, a method for protecting important learned parameters while still acquiring new knowledge. Alongside Progressive Neural Networks and Distral: Robust Multitask Reinforcement Learning, this body of work laid much of the groundwork for how the field thinks about lifelong and multitask learning today.
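The core idea of elastic weight consolidation is a quadratic penalty that anchors the parameters most important to an old task while the network trains on a new one. The numpy sketch below is purely illustrative, not the paper's implementation: the parameter values, the diagonal Fisher estimate `fisher`, and the strength `lam` are all made-up stand-ins.

```python
import numpy as np

def ewc_penalty(theta, theta_A, fisher, lam=1.0):
    """EWC-style penalty: moving a parameter away from its task-A value
    costs more the higher its (diagonal) Fisher importance is."""
    return 0.5 * lam * np.sum(fisher * (theta - theta_A) ** 2)

theta_A = np.array([1.0, -2.0, 0.5])   # weights learned on task A
fisher  = np.array([10.0, 0.1, 5.0])   # high value = important for task A
theta   = np.array([1.1, 0.0, 0.5])    # candidate weights while training task B

# Drifting on the unimportant second weight is cheap; drifting on the
# first (high-Fisher) weight would dominate the penalty.
penalty = ewc_penalty(theta, theta_A, fisher)
print(penalty)
```

In training, this penalty is simply added to the new task's loss, so gradient descent trades off new-task performance against forgetting.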
It's also the thread that connects her navigation research - including a landmark Nature paper demonstrating that artificial agents trained to navigate develop grid-like neural representations resembling those found in rodent brains - to her more recent work on generalist robotic agents like RoboCat and bipedal robot locomotion published in Science Robotics.
From research to frontier systems
Raia's selected publications tell a story about where frontier AI is actually heading. Her recent work includes contributions to Gemini 2.5, Gemma 2, and RecurrentGemma, alongside RoboCat - a self-improving foundation agent for robotic manipulation that can pick up new tasks from as few as 100 demonstrations - and research on teaching bipedal robots to play agile soccer using deep reinforcement learning.
This range is what makes her unusually well-placed to speak at RAAIS. She sits at the intersection of frontier language models, embodied intelligence, and the kind of continual adaptation that will determine whether AI systems can operate reliably outside the data centre - in factories, hospitals, homes, and the physical world.
Beyond the lab
Raia's influence extends well beyond her own research. She founded and serves as Editor-in-Chief of Transactions on Machine Learning Research (TMLR), launched in 2021 as an alternative venue for rigorous ML publication. She sits on the executive boards of CoRL (Conference on Robot Learning) and WiML (Women in Machine Learning), is a Fellow of ELLIS, and is a founding organiser of NAISys (Neuroscience for AI Systems).
In November 2025, she was appointed as an AI Ambassador for the UK government's Department for Science, Innovation and Technology, where she chairs peer review panels for national AI research initiatives - a role that puts her at the centre of UK AI policy at a pivotal moment.
She holds a PhD from NYU, and - in a detail that says something about the breadth of her thinking - an undergraduate degree from Reed College in religion and philosophy.
