The team behind AISAR

Organizers

Iván Arcuschin

Independent Researcher working on Mechanistic Interpretability and Evals for AI Safety. He holds a PhD in Computer Science from the University of Buenos Aires, where he specialized in automatic test case generation for Android apps. He is currently a MATS fellow, researching unfaithful chains of thought in LLMs. Previously, he worked on InterpBench, a benchmark for evaluating interpretability methods.

Agustín Martinez Suñé

Postdoctoral Research Associate at the University of Oxford, working with the OXCAV group. He holds a PhD in Computer Science from the University of Buenos Aires, where he specialized in formal methods for the analysis of distributed systems. He is a former PIBBSS Fellow, where he began exploring the intersection of formal methods and machine learning to advance the development of verifiably safe AI systems.


Advisory Board

Sebastián Uchitel

Editor-in-Chief, IEEE Transactions on Software Engineering. Professor @ University of Buenos Aires. PhD @ Imperial College London. General Chair of the 38th International Conference on Software Engineering (ICSE).

Luciana Ferrer

Researcher @ Computer Science Institute (ICC), CONICET-UBA. PhD @ Stanford. Board Member of the International Speech Communication Association (ISCA).

Ryan Kidd

Co-Director of MATS, Co-Founder and Board Member of LISA, Manifund Regrantor, and advisor to AI Safety ANZ, Catalyze Impact, and Pivotal Research. PhD in Physics @ University of Queensland.

James Fox

Senior Science Associate @ Schmidt Sciences. Previously Research Director of the London Initiative for Safe AI (LISA). Computer Science PhD on technical AI safety @ University of Oxford.

Nora Ammann

Technical Specialist for the Safeguarded AI programme at the UK's Advanced Research and Invention Agency (ARIA). Before ARIA, she co-founded and led PIBBSS, a research initiative exploring interdisciplinary approaches to AI risk, governance, and safety.