
Data Engineer (Remote - US) at Jobgether. This position is posted by Jobgether on behalf of a partner company. We are currently looking for a Data Engineer in the United States.

As a Data Engineer, you will play a pivotal role in designing, building, and maintaining robust, secure, and scalable data infrastructure that powers analytics, reporting, and advanced data science initiatives. You will work closely with cross-functional teams to ensure high-quality, reliable, and accessible data, enabling data-driven decisions across the organization. This role offers exposure to both batch and real-time data processing, modern cloud platforms, and cutting-edge big data technologies. You will have the opportunity to implement best practices in data governance, security, and pipeline orchestration, while continuously exploring innovative tools and frameworks. Your contributions will directly influence the organization’s ability to leverage data as a strategic asset.

Accountabilities
· Design, develop, and optimize ETL/ELT processes for structured and unstructured datasets.
· Build and maintain data pipelines integrating multiple internal and external data sources.
· Implement data quality, validation, and monitoring processes to ensure reliable enterprise data.
· Support both real-time streaming and batch data processing pipelines.
· Collaborate with data architects, analysts, and scientists to deliver scalable, high-performing solutions.
· Ensure data security, compliance, and governance standards are met.
· Evaluate and recommend new data engineering tools, techniques, and platforms for continuous improvement.

Requirements
· Proficiency in programming languages such as Python, Java, or Scala.
· Advanced SQL skills for data transformation and query optimization.
· Experience with data pipeline tools such as Airflow, dbt, Kafka, or equivalents.
· Strong knowledge of big data frameworks such as Apache Spark, Databricks, or Flink.
· Hands-on experience with cloud platforms (AWS, Azure, Google Cloud).
· Familiarity with modern data architectures, including data lakes, lakehouses, and warehouses.
· Knowledge of containerization and orchestration tools such as Docker and Kubernetes.
· Understanding of data modeling, metadata management, and data lineage.
· Experience implementing CI/CD pipelines for data workflows.
· Familiarity with modern storage and query engines (Snowflake, Redshift, BigQuery, Delta Lake).
· Strong analytical, problem-solving, and communication skills; able to explain complex concepts to non-technical stakeholders.
· Collaborative mindset with the ability to work effectively across cross-functional teams.
· Preferred: experience with infrastructure-as-code (Terraform, CloudFormation), machine learning data pipelines, data governance frameworks, and working in regulated environments (GDPR, HIPAA, FedRAMP).

Company Location: United States.