Senior Data Engineer at ZYTLYN Technologies

Who we are

ZYTLYN Technologies empowers companies across the $11 trillion travel industry to shape the future with predictive AI solutions that augment commercial planning, sales, marketing, retailing and operations. We work with some of the largest travel brands in the world, and our vision is to answer highly detailed and granular questions about the future of travel, such as demand, supply, market fluctuations and pricing. Our core focus is on airlines, airports, travel agencies, destinations, tourism boards, hotels, car rentals, travel retailers, and luxury brands.

Who we are looking for

As a Senior Data Engineer, you'll be responsible for building and maintaining the systems that support our products and analytics. You'll have the opportunity to take ownership of key pipelines, influence the technical direction of our platform, and collaborate closely with engineers and data scientists to deliver reliable, high-quality data.

Location / Contract type

- Geneva, Switzerland (office/hybrid), or
- Full remote contractor (GMT+1 to GMT+4).

Our culture

We have a culture that focuses on empowering people, with team members working in our HQ (Geneva, Switzerland) and all across Europe (e.g. France, Spain, Italy, Poland, UK). We believe a diverse team creates better outcomes and fosters a better environment for learning and growth. We put a lot of emphasis on communication, listening, efficient processes and trusting our team. We rely on each other and work together to achieve our common goals. We believe in working smart, with strong focus and intensity, tackling every challenge as a team.

Your work

- Own the design, build, and maintenance of reliable batch pipelines using PySpark and Python;
- Influence the future direction of our data platform, with potential to design and implement streaming pipelines;
- Design and optimise data architecture on AWS;
- Ensure high-quality, observable data flows into downstream systems that power analytics, product features and decision-making;
- Champion solid engineering practices (CI/CD, testing, Git workflows);
- Ensure the quality and suitability of datasets for downstream use;
- Collaborate with product managers, engineers and data scientists to deliver trusted datasets and support model development/deployment.

Basic requirements

- 5+ years of data engineering experience building large-scale data platforms;
- Proven hands-on experience with Spark, AWS, Python/Scala, and SQL; familiarity with Kafka is a plus;
- Track record of orchestrating, monitoring, and maintaining high-volume batch pipelines across distributed systems and cloud environments;
- Proficiency with Docker and containerised deployments;
- Familiarity with orchestration frameworks (e.g. Airflow) or AWS-native solutions like Step Functions (preferred);
- Strong engineering practices: CI/CD pipelines, automated testing, GitHub/GitLab Flow;
- Resourceful self-starter, comfortable with ambiguity and shifting priorities in a startup;
- Highly organised, disciplined, and detail-oriented;
- Excellent verbal and written communication and listening skills.

Bonus points

- Familiarity with Kafka or similar event-streaming technologies, and exposure to building/maintaining streaming pipelines;
- Experience designing data warehousing solutions (Redshift, Snowflake, BigQuery);
- Experience with Infrastructure as Code (Terraform, CloudFormation);
- Exposure to MLflow or similar tools, and familiarity with model deployment workflows;
- Awareness of data quality practices and governance principles;
- Familiarity with Kubernetes for container orchestration.

Company Location: Switzerland.