Alignment Red Team Research Engineer/Scientist

Posted 4 hours 46 minutes ago by AI Security Institute

£80,000 - £100,000 Annual
Permanent
Full Time
Research Jobs
London, United Kingdom
Job Description
A leading AI safety organization in London is seeking Research Engineers/Scientists for its Alignment Red Team. Responsibilities include researching misalignment risks in frontier AI models and running evaluations to inform AI safety policies. Candidates should have strong software engineering and machine learning experience, particularly in Python, and ideally a background in AI research projects. The role also offers unique insight into, and direct influence on, global AI deployment strategies, along with a competitive salary and a range of benefits.