Staff Data Engineer (Remote - US) at Jobgether


Jobgether is a Talent Matching Platform that partners with companies worldwide to efficiently connect top talent with the right opportunities through AI-driven job matching. One of our companies is currently looking for a Staff Data Engineer in the United States.

This role offers a unique opportunity to shape data infrastructure at the heart of an innovative AI-driven healthcare product. As Staff Data Engineer, you will design and manage scalable, high-performance data systems powering next-generation diagnostic technologies. You’ll be responsible for building and maintaining a robust cloud-based infrastructure to ingest, transform, and deliver large volumes of complex sensor and medical data. Working cross-functionally with scientists, machine learning engineers, and medical professionals, your work will directly enable advanced model development and contribute to healthcare breakthroughs.

Accountabilities:

- Design and maintain scalable, reliable data pipelines using AWS to support high-impact medical AI research.
- Build and automate data ingestion and transformation workflows that ensure accuracy and reproducibility.
- Oversee core data infrastructure for batch processing and model training at scale.
- Develop robust systems for managing large multichannel datasets with attention to quality, cost, and compliance.
- Create automated reporting and ML workflows to support continuous learning and model iteration.
- Collaborate with internal teams to align data solutions with scientific and operational needs.
- Ensure adherence to data privacy and security standards, especially in healthcare contexts.

Requirements:

- Bachelor’s or Master’s degree in Computer Science, Engineering, Mathematics, or a related technical field.
- 8+ years of hands-on experience as a Data Engineer, with strong exposure to data operations.
- Proven expertise in AWS services, including S3, and working with large-scale cloud-based datasets.
- Advanced skills in Python, Spark, Hadoop, or Scala; strong SQL proficiency required.
- Experience designing automated, scalable data pipelines and batch processing systems.
- Familiarity with MLOps tools and time-series or sensor data is a plus.
- Strong ownership mindset with the ability to operate autonomously in a fast-paced environment.
- Comfortable working collaboratively with multidisciplinary teams and stakeholders.

Company Location: United States