
Aether

LLM Agent Safety Research

Aether is an independent LLM agent safety research group conducting research aimed at the responsible development and deployment of AI. We work on whatever seems most impactful to us, prioritizing projects that can positively influence AGI companies, governments, and the broader AI safety field.

Research

Chain-of-Thought Monitoring & Hidden Reasoning

Our primary research focus has been on chain-of-thought monitoring. We investigate how information access affects LLM monitors' ability to detect sabotage and other safety-critical behaviors. We've also developed a taxonomy for understanding hidden reasoning processes within LLMs, providing a structured framework for analyzing covert reasoning mechanisms.


Emerging Research Areas

We're exploring topics including shaping the generalization of LLM personas, interpretable continual learning, and pretraining data filtering. Our research agenda remains flexible to focus on the most impactful projects.


Get Involved

We are not currently accepting applications, but anyone interested in future positions can view details about our most recent hiring round here.

Our Team

Rohan Subramani

Founder & Researcher

Rauno Arike

Researcher

Shubhorup Biswas

Researcher

Advisors

Seth Herd

Astera Institute

Marius Hobbhahn

Apollo Research

Erik Jenner

Google DeepMind

Zhijing Jin

University of Toronto

Francis Rhys Ward

Independent / LawZero