
Data Engineer (Sr/Staff Software Engineer) (Remote - US) at Jobgether

This position is posted by Jobgether on behalf of a partner company. We are currently looking for a Data Engineer (Sr/Staff Software Engineer) in the United States.

This role offers the opportunity to lead the design and implementation of scalable data solutions that power real-time analytics, machine learning, and business intelligence. You will work hands-on with modern data platforms, ETL pipelines, data streaming, and lakehouse architectures to transform raw data into actionable insights. Operating in a fast-paced, collaborative environment, you will partner with data scientists, analysts, and engineers to deliver reliable, secure, and high-performance data workflows. This position emphasizes innovation, efficiency, and the adoption of best practices in data engineering, giving you a direct impact on enterprise-wide data initiatives. Ideal candidates are technically proficient, proactive problem-solvers who thrive in dynamic, cross-functional teams.

Accountabilities:

As a Data Engineer in this role, you will be responsible for:
- Designing and implementing real-time and batch data streaming solutions to efficiently process high-volume data.
- Developing, maintaining, and optimizing ETL pipelines, data workflows, and data orchestration processes.
- Building and managing large-scale data warehouses and certified datasets to support analytics, machine learning, and business intelligence needs.
- Collaborating with data scientists, analysts, and stakeholders to understand data requirements and deliver effective solutions.
- Monitoring, troubleshooting, and improving data pipelines to ensure reliability, scalability, and performance.
- Ensuring data security, compliance, and adherence to industry standards and regulations.
- Writing well-documented, efficient, and tested code for data solutions and production deployment.
- Staying current with emerging trends in data engineering, cloud technologies, and machine learning infrastructure.

The ideal candidate will have:
- Minimum 5 years of experience in software engineering, data engineering, or database management.
- Strong programming skills in Python, SQL, and Java.
- Hands-on experience with ETL processes, data pipelines, and cloud-based data platforms.
- Familiarity with modern data lakehouse technologies (Apache Iceberg, Delta Lake, Trino) and database systems (SQL/NoSQL).
- Experience with data streaming frameworks such as Apache Kafka, AWS Kinesis, Apache Flink, or Apache Spark.
- Knowledge of DevSecOps practices and tools like Jenkins and GitHub.
- Strong problem-solving, analytical, and communication skills with the ability to collaborate effectively across teams.
- Intellectual curiosity and adaptability to learn new technical concepts and technologies quickly.

Preferred Qualifications:
- Bachelor’s degree in Computer Science, Information Technology, or a related field.
- Experience in high-tech, transportation, logistics, or supply chain industries.
- Familiarity with Agile frameworks (Scrum, Kanban, SAFe).
- Proficiency with Jupyter notebooks and open-source data engineering tools.
- Knowledge of geospatial data transformation and spatial libraries.

Company Location: United States.