
Data Engineer (Remote - US) at Jobgether

This position is posted by Jobgether on behalf of a partner company. We are currently looking for a Data Engineer in the United States.

We are seeking an experienced Data Engineer to join a fully remote team focused on modernizing and optimizing data pipelines and analytics infrastructure. In this role, you will work closely with cross-functional teams, including Cloud Architects, analysts, and DevOps, to design and implement scalable ETL solutions in cloud environments. You will play a key role in migrating and improving existing workflows while ensuring high-quality, secure, and efficient data processing. This position offers the opportunity to take ownership of technical initiatives, influence architecture decisions, and contribute to a culture of innovation and continuous improvement. If you are passionate about cloud technologies, ETL development, and translating complex data into actionable insights, this role provides a dynamic and collaborative environment.

Accountabilities

- Design, implement, and maintain ETL pipelines using AWS Glue and PySpark (a brief illustrative sketch appears at the end of this posting).
- Migrate and re-architect existing ETL solutions from legacy platforms, including Databricks and Linux/Python scripts.
- Collaborate with Cloud Architects and DevOps to modernize data ingestion and processing patterns.
- Perform data modeling, transformation, and validation to support analytics and reporting.
- Troubleshoot and debug pipeline failures while optimizing performance and scalability.
- Develop custom reports, complex queries, and automation scripts to support stakeholders.
- Ensure compliance with SDLC processes and maintain secure, auditable data pipelines.
- Support DevSecOps initiatives, including infrastructure setup, security remediation, and CI/CD deployment.

Requirements

- U.S. citizenship or work authorization, with at least 3 of the last 5 years spent residing in the U.S.
- Bachelor's degree (or equivalent experience) with 5+ years in data engineering or related roles.
- Strong experience with cloud platforms, especially AWS, and ETL tools such as AWS Glue.
- Proficiency in Python and/or PySpark for data processing tasks.
- Experience with semi-structured data formats (XML, JSON, Parquet) and relational databases.
- Knowledge of the Snowflake cloud data platform, including performance tuning and optimization.
- Familiarity with CI/CD tools, version control (GitHub), and Agile development practices.
- Understanding of database security, auditing, and access controls.
- Strong problem-solving, analytical, and communication skills, with the ability to collaborate across technical teams.
- Ability to obtain Public Trust clearance.

Company Location: United States.
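For candidates unfamiliar with AWS Glue, the sketch below shows the general shape of a Glue ETL job written in PySpark, as referenced in the first accountability: read a cataloged source, apply a simple transformation, and write Parquet output to S3. All names (the analytics_db database, raw_events table, field names, and S3 path) are hypothetical placeholders for illustration, not details of the partner company's actual pipelines.

```python
# Minimal AWS Glue ETL job sketch in PySpark. Database, table, field, and
# S3 path names below are hypothetical placeholders.
import sys

from awsglue.context import GlueContext
from awsglue.job import Job
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext

# Standard Glue job bootstrap: resolve arguments and initialize contexts.
args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext())
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Read a semi-structured source registered in the Glue Data Catalog.
source = glue_context.create_dynamic_frame.from_catalog(
    database="analytics_db",   # hypothetical catalog database
    table_name="raw_events",   # hypothetical source table
)

# Simple cleanup: drop records missing a key field and normalize a column name.
cleaned = (
    source.toDF()
    .dropna(subset=["event_id"])
    .withColumnRenamed("ts", "event_timestamp")
)

# Persist the curated output as Parquet for downstream analytics.
cleaned.write.mode("overwrite").parquet("s3://example-bucket/curated/events/")

job.commit()
```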