Senior Technical Specialist - AI Risk Assessment at Future of Life Organizations

Location: Anywhere

Position Overview

We are seeking a Senior Technical Specialist to join our Comprehensive Risk Assessment team. This role combines original research in AI safety and technical governance with a strong emphasis on conceptual depth and quality assurance leadership. The ideal candidate will bring methodological rigor to our assessment methods and analyses of AI risk pathways, contributing to the development of frameworks that shape industry standards and policy. You will serve as a subject matter expert bridging technical AI safety with governance approaches, enhancing team research quality while driving forward key aspects of our groundbreaking work in risk assessment methodologies for increasingly capable AI systems, all within our established strategic research program. This position is fully remote, but occasional travel is required.

About CARMA

The Center for AI Risk Management & Alignment (CARMA) works to help society navigate the complex and potentially catastrophic risks arising from increasingly powerful AI systems. Our mission is to lower the risks to humanity and the biosphere from transformative AI.

We focus on grounding AI risk management in rigorous analysis, developing policy frameworks that squarely address AGI, advancing technical safety approaches, and fostering global perspectives on durable safety. Through these complementary approaches, CARMA aims to provide critical support to society for managing the outsized risks from advanced AI before they materialize.

CARMA is a fiscally sponsored project of Social & Environmental Entrepreneurs, Inc., a 501(c)(3) nonprofit public benefit corporation.

Key Responsibilities

• Develop techniques for discovering threat models and generating risk pathway analyses that capture societal and sociotechnical dimensions.
• Model multi-node risk transformation, amplification, and threshold effects propagating through social systems.
• Contribute to the design of robust technical governance frameworks and assessment methodologies for catastrophic risks, including loss-of-control scenarios.
• Provide strategic and tactical quality control for the team's research, ensuring conceptual soundness and technical accuracy.
• Drive or take ownership of original research projects on comprehensive risk management for advanced AI systems, aligned with the team's objectives.
• Collaborate across CARMA teams to integrate risk assessment paradigms with other workstreams.
• Contribute to technical standards and best practices for the evaluation, risk measurement, and risk thresholding of AI systems.
• Craft persuasive communications for key stakeholders on prospective AI risk management.

Required Qualifications

• 5+ years of experience in AI safety, alignment, and/or governance. We are open to candidates at different levels of seniority who can demonstrate the required depth of expertise.
• Strong understanding of multiple risk modeling approaches (causal modeling, Bayesian networks, systems dynamics, etc.).
• Experience with systemic and sociotechnical modeling of risk propagation.
• Excellent analytical thinking, with the ability to identify subtle flaws in complex arguments.
• Strong written and verbal communication skills for technical and non-technical audiences.
• Publication record or equivalent demonstrated expertise in relevant areas.
• Systems thinking approach with independent intellectual rigor.
• Track record of constructive collaboration in fast-paced, intellectually demanding environments.
• Comfort with uncertainty and rapidly evolving knowledge landscapes.

Preferred Qualifications

• Background in complex systems theory, control theory, cybernetics, multi-scale modeling, or dynamical systems.
• Work history at AI safety research organizations, technical AI labs, policy institutions, or adjacent risk domains.
• Experience with quality assurance processes for technical research.
• Ability to model threshold effects, nonlinear dynamics, and emergent properties in sociotechnical systems.
• Understanding of international dynamics and power differentials in AI development.
• Ability to balance consideration of both acute and aggregate AI risks.
• Experience with causal, Bayesian, or semi-quantitative hypergraphs for risk analysis.
• Demonstrated methodical yet creative approach to framework development.

CARMA/SEE is proud to be an Equal Opportunity Employer. We will not discriminate on the basis of race, ethnicity, sex, age, religion, gender reassignment, partnership status, maternity, or sexual orientation. We are, by policy and action, an inclusive organization and actively promote equal opportunities for everyone with the right mix of talent, knowledge, skills, attitude, and potential, so hiring is based solely on individual merit for the job.

Our organization operates through a fiscal sponsor whose infrastructure supports only persons authorized to work in the U.S. as employees. Candidates outside the U.S. would be engaged as independent contractors with project-focused responsibilities. Note that we are unable to sponsor visas at this time.

$140,000 - $220,000 a year, plus good benefits for U.S. employees.