
Aether

LLM Agent Safety Research

Aether is an independent LLM agent safety research group dedicated to conducting impactful research that ensures the responsible development and deployment of AI technologies. We work on whatever seems most impactful to us, focusing on critical areas that can positively influence AGI companies, governments, and the broader AI safety field.

Research

Chain-of-Thought Monitoring & Hidden Reasoning

Our primary research focus has been chain-of-thought monitoring. We investigate how information access affects LLM monitors' ability to detect sabotage and other safety-critical behaviors. We have also developed a taxonomy of hidden reasoning in LLMs, providing a structured framework for analyzing covert reasoning processes.


Emerging Research Areas

We're exploring topics including shaping the generalization of LLM personas, interpretable continual learning, and pretraining data filtering. Our research agenda remains flexible to focus on the most impactful projects.


We're Hiring!


Applications reviewed on a rolling basis • Apply early

Position Details

  • Openings: 1-2 researchers
  • Start Date: Between February and May 2026
  • Duration: Through end of 2026, with possibility of extension
  • Compensation: ~$100k USD/year (prorated based on start date)
  • Apply by: Saturday, January 17th EOD AoE (extended from original deadline of January 3rd)
  • Location: Trajectory Labs, Toronto (in-person expected, visa sponsorship available)

What you'd be working on

So far, we have focused on chain-of-thought monitoring. See our Research section for details on our work, including our paper How does information access affect LLM monitors' ability to detect sabotage? and our post Hidden Reasoning in LLMs: A Taxonomy.

We have not yet committed to a specific research agenda for the upcoming year. Topics we're currently exploring include shaping the generalization of LLM personas, interpretable continual learning, and pretraining data filtering. We plan to keep working on whatever seems most impactful to us.

We're looking for:

  • Experience working with LLMs and executing empirical ML research projects
  • Agency and general intelligence
  • Strong motivation and clear thinking about AI safety
  • Good written and verbal communication

A great hire could help us:

  • Become a more established org, like Apollo or Redwood
  • Identify and push on relevant levers to positively influence AGI companies, governments, and the AI safety field
  • Shape a research agenda and focus on more impactful projects
  • Accelerate our experiment velocity and develop a fast-paced, effective research engineering culture
  • Publish more papers in top conferences

Application Process

We prefer that candidates join us for a short-term collaboration (1–3 months, part-time) to establish mutual fit before transitioning to a long-term position. However, if you have AI safety experience equivalent to having completed the MATS extension, we are happy to interview you for a long-term position directly. The interview process involves at least two interviews: a coding interview and a conceptual interview where we'll discuss your research interests. The expected start date for long-term researchers is February–May 2026; we're happy to start short-term collaborations ASAP.

If you are only interested in short-term collaborations, you can fill out this form instead.

Our Team

Rohan Subramani

Founder & Researcher

Rauno Arike

Researcher

Shubhorup Biswas

Researcher

Advisors

Seth Herd

Astera Institute

Marius Hobbhahn

Apollo Research

Erik Jenner

Google DeepMind

Zhijing Jin

University of Toronto

Francis Rhys Ward

Independent / LawZero