Data Engineer at Adoreal


Who We Are

We are a fast-growing vertical SaaS company that leverages innovation and disruptive technologies to improve consumer experiences, outcomes, and predictability in elective medicine. Our team thrives on challenges, embraces change, and is dedicated to transforming our industry.

Adoreal is scaling rapidly, and this role is designed to support that growth. As we automate more of our content production, scheduling, and reporting processes, we are looking for a strategic implementer who can take on high-impact work in sales and marketing automation.

Who We're Looking For

We're looking for a hybrid Data Engineer with experience (or strong interest) in analytics and data science to help build Adoreal's data infrastructure from the ground up. In this multifaceted role, you'll build scalable data infrastructure, conduct deep analysis, and support the integration of machine learning models and generative AI solutions.

You'll collaborate with engineering, product, and operations teams to build end-to-end data pipelines, design experiments, and deploy models.

Responsibilities

Data Engineering
- Design and maintain scalable ETL/ELT pipelines using tools such as AWS, Apache Airflow, dbt, or Step Functions
- Ingest, clean, and transform structured and unstructured data (e.g., text, logs, embeddings)
- Manage cloud-native data infrastructure
- Implement robust data quality checks, lineage tracking, and observability practices

Analytics & Insights
- Write advanced SQL queries and produce business dashboards (Looker, Tableau, Power BI)
- Conduct exploratory data analysis and hypothesis testing to support decision-making
- Work with product and operations teams to design A/B tests and interpret results

MLOps & AI Engineering
- Experience using AI tools to help implement solutions

Qualifications
- 2–5 years of hands-on experience in data engineering, analytics, or data science roles
- Proficiency in SQL and Python for data transformation, analysis, and automation
- Experience building and maintaining ETL/ELT pipelines using modern tools such as AWS, dbt, Airflow, or custom scripts
- Solid understanding of data modeling principles (e.g., star/snowflake schemas), schema evolution, and partitioning strategies
- Familiarity with ETL/ELT best practices, including:
  - Incremental processing vs. full refresh
  - Idempotent jobs and failure recovery
  - Data validation and quality checks (e.g., null checks, duplication handling)
  - Efficient job orchestration and dependency management
- Experience with cloud-based data ecosystems (especially AWS, e.g., S3, Redshift, Athena)
- Solid grounding in statistical methods, A/B testing, and exploratory data analysis
- Exposure to LLMs, NLP, or embedding-based search, with an interest in integrating AI into analytics workflows

Company Location: Colombia