Lead Data Scientist - Safety Alignment at Jobgether


This position is posted by Jobgether on behalf of a partner company. We are currently looking for a Lead Data Scientist - Safety Alignment in the United States.

In this role, you will lead the design, development, and deployment of safe and aligned AI systems that operate reliably in complex, high-stakes, and multi-agent environments. You will define technical safety frameworks, implement alignment strategies, and monitor agent behavior to ensure adherence to ethical and human-centric standards. The position blends cutting-edge research with applied engineering, requiring collaboration across product, governance, and operational teams. You will mentor data scientists, contribute to research initiatives, and guide deployment readiness while mitigating risks associated with autonomous AI. This is a strategic opportunity to shape the next generation of responsible AI technologies in a collaborative and innovation-driven environment.

Accountabilities:
- Design and implement safety architectures for agentic AI systems, including guardrails, reward modeling, and self-monitoring.
- Lead alignment techniques such as inverse reinforcement learning, preference learning, interpretability tools, and human-in-the-loop evaluation.
- Develop continuous monitoring and evaluation strategies for agent behavior in both simulated and real-world environments.
- Collaborate with product, legal, governance, and deployment teams to ensure responsible scaling and operational safety.
- Conduct research and contribute to publications on AI alignment, multi-agent cooperation/conflict, and value learning.
- Identify and mitigate potential failure modes, including goal misgeneralization, deceptive behavior, and unintended instrumental actions.
- Establish safety milestones for autonomous AI deployment readiness.
Requirements:
- Master’s degree with 4+ years of experience in research, ML engineering, or applied research focused on production-ready AI solutions.
- 2+ years of experience leading AI/ML system development.
- Deep expertise in AI alignment, multi-agent systems, or reinforcement learning.
- Proven track record in research-to-production initiatives or technical governance frameworks.
- Strong publication record or contributions in AI safety, interpretability, or algorithmic ethics.
- Proficiency in Python, SQL, and data analysis/data mining tools.
- Experience with ML frameworks and agent tooling such as PyTorch, JAX, ReAct, LangChain, LangGraph, or AutoGen.
- Experience with high-performance, large-scale ML systems and large-scale ETL pipelines.
- Preferred: Ph.D. in Computer Science, Data Science, Machine Learning, or a related field.
- Preferred: Contributions to open-source AI safety tools or benchmarks.
- Preferred: Knowledge of value-sensitive design, constitutional AI, or multi-agent alignment.
- Preferred: Experience in regulated domains such as healthcare, finance, or defense.

Company Location: United States.