Data Engineer (Remote - US) at Jobgether


This position is posted by Jobgether on behalf of a partner company. We are currently looking for a Data Engineer in the United States.

The Data Engineer will play a pivotal role in modernizing and optimizing enterprise data pipelines, enabling the secure and efficient movement of critical data across systems. You will design, implement, and maintain high-performance ETL/ELT pipelines for both real-time and batch processing, while ensuring compliance with strict security standards. Collaborating closely with data architects, data scientists, integration teams, and product stakeholders, you will help migrate legacy systems to scalable, cloud-based architectures. This role requires hands-on technical expertise, a deep understanding of distributed data systems, and the ability to translate complex challenges into reliable, production-ready solutions. You will also mentor team members, drive continuous improvement, and contribute to mission-driven outcomes with measurable impact.

Accountabilities:
- Design, implement, and operate secure, high-performance data pipelines for real-time and batch workflows.
- Architect scalable solutions to reliably move sensitive data while meeting security and compliance standards.
- Build and optimize ETL/ELT pipelines using NiFi, Kafka, Python, and SQL.
- Collaborate with integration, product, and data science teams to migrate legacy pipelines to modern architectures.
- Maintain and optimize data lakes, ensuring smooth data exchange across multiple systems.
- Define monitoring and alerting strategies, create runbooks, and troubleshoot production issues.
- Drive continuous improvement and innovation across the data ecosystem, while mentoring and guiding team members.

Requirements:
- Bachelor's degree in Computer Science, Data Engineering, Information Systems, or a related field.
- 3+ years of experience implementing enterprise-grade data pipelines, or 7+ years of equivalent professional experience.
- Hands-on expertise with NiFi, Kafka, Python, SQL, and distributed data systems.
- Familiarity with OpenSearch/Elasticsearch, CI/CD, and DevOps practices.
- Strong understanding of data security and compliance requirements for sensitive data.
- Proven ability to design and operate robust, production-grade pipelines.
- Experience with relational databases, existing data models, and schema extensions.
- Excellent collaboration, communication, and technical documentation skills.
- Growth mindset and proactive problem-solving in complex, mission-driven environments.

Preferred / Standout Skills:
- Experience with public health data standards (HL7 v2.x, FHIR, LOINC, CVX, SNOMED, ICD-10).
- Event-driven architectures and data de-identification strategies.
- Hands-on experience with integration engines (e.g., Rhapsody, Mirth) and containerized deployments (Docker, Kubernetes).
- Familiarity with Master Patient Index (MPI) or data deduplication frameworks.

Company Location: United States.