SaferAI is a fast-moving, mission-driven French organization advancing AI risk management to reduce risks from AI, in particular extreme risks from advanced AI systems.
Past successes and ongoing work:
1. We work on standardization at JTC21, the body in charge of writing the technical specifications of the EU AI Act; we are also part of the newly constituted US AI Safety Institute Consortium and a member of a G7 OECD taskforce. Through this work, we have made significant contributions to the risk management of large language models.
2. We are developing a rating system that assesses AI companies from a risk management perspective.
3. SaferAI was invited to a hearing at OPECST, the French Parliamentary Office for the Evaluation of Scientific and Technological Choices (a joint body of the Senate and the National Assembly), to discuss cybersecurity and national security risks associated with AI.
4. In collaboration with Kai Zenner and the Ada Lovelace Institute, we co-hosted a workshop in Brussels attended by over 50 participants. This workshop led to successful coalition-building to promote a drafting process for the EU Code of Practice for general-purpose AI (GPAI) models that includes civil society.
If you're hesitant to apply, lean towards doing so. We're still at an early stage, so any colleague can significantly shape the organization. We're excited to hear from you.
Link to the job description and application form: https://saferai.factorial.fr/job_posting/201265
Learn more about us on our website: safer-ai.org