Program information

Overview

AISAR is an in-person, part-time program designed to give students the opportunity to delve into AI Safety. Concretely, each participant is expected to work for 6 months on a research project within a specific area of AI Safety, with a time commitment of 20 hours per week.


Scholar profile

In this first edition of the program, we are looking for advanced students or recent graduates of the Master's degree in Computer Science or Data Science at Exactas-UBA.


The following characteristics will be viewed favorably:


  • Solid academic background demonstrating deep knowledge in their area of study related to the scholarship topic.
  • Previous experience or marked interest in participating in research projects related to Artificial Intelligence (AI) and/or Machine Learning.
  • Ability to understand and quickly incorporate advanced technical concepts from the state of the art in AI.
  • Prior knowledge or strong motivation to contribute to AI Safety, particularly in mitigating catastrophic risks associated with the development and deployment of advanced autonomous AI systems.
  • Technical strengths in Computer Science or Data Science that, with proper guidance, can contribute to research on or development of approaches aimed at reducing risks from AI systems.
  • Adequate English proficiency for reading academic papers and participating in seminars with international AI Safety researchers.

While the profile is primarily aimed at advanced students or recent graduates in Computer Science and Data Science, we are also open to considering related disciplines in exceptional cases. If this applies to you, please include a note explaining why you believe your profile is a good fit for this call.


Research Topics

Research project topics will be selected based on mentor availability and alignment with AI Safety research agendas, including:

  • Interpretability: Development of methods to understand and explain the internal and external behavior of AI models, whether through reverse-engineering their architecture, analyzing implicit goals or values in their behavior, evaluating the faithfulness of their reasoning processes, or studying their inductive biases.
  • Evaluations: Building systematic tests, both behavioral and mechanistic, to detect dangerous capabilities and harmful propensities and to assess the robustness of safety measures, creating evidence-based "gates" that determine whether and how a model can be scaled or deployed.
  • Scalable Oversight/Control: Designing techniques such as task decomposition, process supervision, debate, iterated amplification, and adversarial "control evaluations" that allow humans (or weaker AIs) to monitor and constrain much more capable models, even when they attempt to evade safeguards.
  • Governance: Study and design of policies, regulations, institutions, and technical and coordination mechanisms to avoid or mitigate harmful dynamics in advanced AI development (such as multipolar traps, the unilateralist's curse, the proliferation of dangerous capabilities, etc.).
  • AI Agency: Developing a deeper mathematical theory of agency, optimization, corrigibility (an agent's ability to accept external corrections without resisting or attempting to avoid them), and embedded decision-making, to support alignment strategies based on principles valid even for radically superhuman agents.
  • Safety by Design: AI systems developed to operate safely, with formal and verifiable guarantees that they will avoid harmful behavior. The goal is to ensure compliance with safety standards through rigorous verification rather than relying solely on empirical testing.

We will also take as a reference the agendas and projects defined by other AI Safety programs and funding institutions.


Frequently asked questions

Is there funding for scholars?

Yes, scholars will receive a monthly stipend of $600 USD, plus a monthly fund for compute expenses of $500 USD.

What is the duration of the program?

The program lasts 6 months.

What is the time commitment for scholars?

Scholars must dedicate at least 20 hours per week to the program.

What is the program schedule?

The tentative schedule for the first edition of the program, taking place during 2025, is:
  • May 14-28 (inclusive): Scholar applications
  • May 29 - June 26: Selection process for the 6 scholars who will participate in the program
  • June 27: Notification of acceptance to selected scholars
  • June 30: Program start
  • July 30: Research plan submission for the project
  • Last week of September: Mid-term report submission
  • First week of December: Internal workshop with scholar presentations
  • Second week of December: Final program report submission

How does the scholar selection process work?

In this first edition, we will select up to 6 scholars to be part of the program. The selection process will be based on:

  • Analysis of each candidate's CV and cover letter: degree progress, GPA (excluding failing grades and CBC courses), previous work or research experience, knowledge of Machine Learning, familiarity with AI Safety agendas, English proficiency, awards received, etc.
  • Remote Python programming exam (90 minutes)
  • Remote AI Safety papers reading exam (120 minutes)
  • Interview with program organizers (for candidates who pass the first filter)
  • Matching process between scholars and mentors (for candidates who pass the final round)

What events will take place during the program?

During the program, the following events will take place:

  • Program kickoff meeting
  • Integration and networking activities
  • Project follow-up meetings with mentors and research managers
  • Seminars with international researchers
  • Seminar on ways to fund AI Safety projects (LTFF, Open Philanthropy, etc.)
  • Internal workshop with scholar presentations
  • Final presentation of results

What are the deliverables of the program?

Systematizing the progress of research projects is considered a priority within the program; therefore, scholars will be required to:

  • Complete a tentative research plan for the project within the first month.
  • Complete the Blue Dot AI Safety Fundamentals course within the first month.
  • Submit a mid-term progress report.
  • Make a final presentation at an internal program workshop, showing the results obtained for the project.
  • Submit a final program report.

Do I need to have a project in mind to apply?

No, you don't need to have a project in mind to apply.