
Senior Data Engineer (Remote - Latam) at Jobgether

This position is posted by Jobgether on behalf of a partner company. We are currently looking for a Senior Data Engineer in Latin America.

This role offers the opportunity to shape and scale a high-impact data platform in a dynamic, remote-first environment. As a Senior Data Engineer, you will build and evolve the data infrastructure that supports analytics, modeling, and reporting at scale. You will collaborate with cross-functional teams to ensure data reliability, governance, and accessibility, enabling informed business decisions across the organization. The position emphasizes hands-on technical expertise, mentorship, and strategic influence over architectural decisions. You will work with modern cloud platforms, distributed processing frameworks, and diverse data types, ensuring performance, observability, and reproducibility. This role provides autonomy, career growth, and the chance to contribute to a culture of innovation and excellence.

Accountabilities

- Architect, implement, and maintain scalable data infrastructure to ingest, process, and serve large volumes of data efficiently.
- Improve and optimize existing frameworks and pipelines to enhance performance, reliability, and cost efficiency.
- Establish and enforce robust data governance practices, enabling cross-functional teams to access trusted data.
- Transform raw datasets into clean, usable formats suitable for analytics, modeling, and reporting.
- Investigate and resolve complex data issues, ensuring data accuracy and system resilience.
- Maintain high standards for code quality, testing, and documentation, with a focus on reproducibility and observability.
- Stay current with industry trends, tools, and emerging technologies to continuously improve engineering practices.

Requirements

- Bachelor's degree in Computer Science, Engineering, or a related field.
- 5+ years of experience designing, building, and operating scalable data ingestion, processing, and serving layers in production.
- Strong expertise in SQL for analytics, transformations, and performance optimization.
- Proficiency in Python for data manipulation, pipeline development, and integration (e.g., PySpark, pandas).
- Experience with data modeling for Data Warehouses/Lakehouses and building efficient ETL/ELT pipelines.
- 3+ years of experience with distributed data processing frameworks such as Apache Spark for batch or streaming workloads.
- 3+ years of experience with cloud platforms (AWS and/or GCP) for storage, compute, and data engineering workloads.
- Knowledge of and experience implementing data governance practices at scale, including policies, lineage, and access controls.
- Experience improving existing data pipelines for performance, reliability, and cost efficiency.
- Hands-on experience with automated testing, CI/CD, and observability tools for data pipelines.
- Strong understanding of data security and privacy practices (PII handling, encryption, IAM, least privilege).
- Proficiency with diverse data formats (structured, semi-structured, and unstructured; JSON, Avro, Parquet, ORC).
- Experience with modern data architectures: Data Lake, Data Warehouse, Lakehouse, Data Mesh, or Data Fabric.
- Advanced English proficiency for communication with international teams and clients.

Company Location: Colombia.