
Data Engineer II (Remote - US) at Jobgether. This position is posted by Jobgether on behalf of a partner company. We are currently looking for a Data Engineer II in the United States.

As a Data Engineer II, you will play a central role in designing, building, and maintaining scalable data pipelines and infrastructure that enable analytics, reporting, and machine learning initiatives. You will collaborate closely with data scientists, analysts, architects, and product owners to ensure data is accurate, secure, and accessible for business and research purposes. This position offers the opportunity to work with modern cloud-based data platforms, implement best practices for data quality and governance, and contribute to continuous improvement of data workflows. You will also support production-ready data systems, troubleshoot performance issues, and help mentor junior team members in coding and operational standards. The role combines technical expertise with cross-functional collaboration to deliver high-impact, data-driven solutions.

Accountabilities
- Design, implement, and optimize ETL/ELT pipelines to transform raw data into structured, reliable datasets.
- Develop and maintain data models, including star/snowflake schemas, operational data stores, data warehouses, and data lakes.
- Collaborate with stakeholders to translate business requirements into technical data solutions.
- Manage cloud-based data platforms and tools (e.g., AWS, GCP, Azure, Redshift, BigQuery, Databricks, Snowflake).
- Monitor pipeline performance, optimize SQL queries, and ensure scalability, throughput, and cost-efficiency.
- Implement data quality monitoring, validation, and alerting to maintain accuracy and completeness.
- Document pipelines, transformations, and system architecture, adhering to best practices.
- Mentor junior data engineers on coding standards, tools, and workflows.
Requirements
- Bachelor’s degree in Computer Science, Engineering, Information Systems, or equivalent work experience; Master’s degree preferred.
- 2+ years of experience in data engineering, analytics engineering, or software engineering.
- Proven experience delivering production-ready ETL/ELT pipelines and modern data stack solutions.
- Strong programming skills in SQL, Python, Java, or Scala.
- Experience with relational and NoSQL databases (PostgreSQL, MySQL, MongoDB, DynamoDB).
- Familiarity with data processing frameworks such as Apache Spark, Apache Beam, Apache Airflow, dbt, AWS Glue, Databricks, Snowflake, or Dagster.
- Cloud platform experience (AWS, GCP, Azure), including compute, storage, and networking management.
- Knowledge of DevOps and CI/CD practices (Terraform, Docker, Kubernetes, GitHub Actions).
- Understanding of data governance, security, and compliance, including handling PII/PHI data.
- Strong collaboration and communication skills to work effectively across technical and business teams.
- Preferred: AWS or GCP certifications; experience in the life sciences or genomics industries.

Company Location: United States.