AI Safety Research · Cambridge, United Kingdom

Geodesic Research

The shortest path to impact

We are building the science of alignment and control pretraining—determining whether safety can be an end-to-end strategy throughout model training.

Geodesic Research is a UK-based technical AI Safety organisation focused on compute-intensive alignment research. While there is an established science of pretraining's influence on capabilities, no such science exists for safety. We aim to change that.

Our purpose is twofold: to explore scalable pretraining approaches we have reason to believe are not yet standard practice, and to facilitate adoption of any promising techniques we develop. We will consider Geodesic successful if we determine that pretraining has no obvious impact on safety, or if we determine that pretraining interventions are viable and facilitate their adoption across the industry.

01
Conceptually Simple
We focus on conceptually simple interventions—such as data insertion and filtering—that benefit from scale and span the full training stack. Just as labs curate pretraining data for capabilities like reasoning, we believe they should also curate for alignment.
02
Uniquely Positioned
We believe we are uniquely well-positioned to pursue alignment pretraining research. Our team has experience pretraining LLMs from scratch and established relationships across the open-source and frontier AI communities. We also have ample compute access, enabling us to train larger models and establish scaling trends.
03
Frontier Impact
Our aim is for our research to be useful at scale. We develop interventions that can be slotted into existing training pipelines without harming general capabilities. Our target audience is model training teams at frontier labs, and we seek to establish scaling trends demonstrating that our interventions are production-ready for the frontier.

Can alignment and control be end-to-end strategies?

01
Scaling Alignment Pretraining
Our first paper found that AI discourse causes self-fulfilling (mis)alignment. We are now scaling our experimental setup to include reasoning models, agentic misalignment evaluations, and larger training datasets to determine whether alignment pretraining is production-ready.
02
Inoculation Midtraining
Post-training can inadvertently reinforce misaligned behaviour that generalises broadly. We will craft declarative persona descriptions during midtraining to inoculate models against emergent misalignment—shaping not just what a persona defaults to, but how it generalises under distributional shift.
03
Control Pretraining
Base models already understand concepts like alignment faking and monitoring. We will explore shaping this knowledge during pretraining, reducing the likelihood that misaligned AIs possess knowledge differentially useful for circumventing control protocols, strategic sandbagging, or dangerous technical capabilities.

Founded in Cambridge, UK

Puria Radmard
Founder & Co-Director

PhD student in theoretical neuroscience at the University of Cambridge. Led Geodesic's early work on steganography. Previously a machine learning engineer at raft.ai and a private equity quantitative strategist at Goldman Sachs.

Cameron Tice
Founder & Co-Director

Marshall Scholar at the University of Cambridge, where he completed his MPhil on automated research with LLMs for computational psychiatry. Lead author of Noise Injection for Sandbagging Detection and a former Research Manager for the ERA: AI fellowship.

Alexandra Narin
Strategic Delivery

Co-founder of the UK AI Forum. Previously a neuroscience researcher at UCL and head of grants at a biotech company, where she secured and managed over $5M in private and government funding. Alexandra is applying her experience to scaling Geodesic.

Kyle O'Brien
Founding Member of Technical Staff

Joined Geodesic through the ERA fellowship. Leads the alignment pretraining research agenda and has developed strong relationships with the UK AI Security Institute through previous research on Deep Ignorance. Previously at EleutherAI and Microsoft.

Guided by leading researchers

Tomek Korbak
OpenAI

Alex Cloud
Anthropic

Help build the future of AI safety

We don't have any open positions at the moment, but we're always interested in connecting with talented people who share our mission. If you'd like to be considered for future opportunities, please reach out to cam@geodesicresearch.org with your CV and a note about what draws you to our work.