Senior Data Engineer at SAVii


Location: Gurgaon - Remote

✨ Innovate with SAVii: Empowering Change Through Technology! 🌍

About SAVii

At SAVii, we’re on a mission to transform the employee wellness landscape. Since our founding in 2017 as SAVii PH, we’ve been reimagining how employee benefits work by offering 360° salary-linked wellness services in emerging markets like the Philippines and India. Our platform empowers HR leaders to support their teams’ financial wellness in innovative and tech-driven ways. We’re now expanding, and we’re looking for technical experts to join us and help drive innovation in the employee wellness space. 🚀 Are you ready to build the future of wellness with cutting-edge technology? 💡

Our Culture: Empowering Innovators to Thrive

At SAVii, we believe that technology is a key driver of our mission. We’re a remote-first company that thrives on flexibility, allowing our technical teams the freedom to work from anywhere while delivering high-impact results. Whether you’re building innovative solutions or solving complex problems, we give you the autonomy to create and innovate.

We foster a culture of continuous learning, collaboration, and problem-solving. Every technical team member has the opportunity to bring fresh ideas to the table and play a pivotal role in delivering transformative solutions. Together, we move fast, innovate even faster, and build solutions that will impact lives. 🌍

At SAVii, technical excellence and agility are our core strengths. We embrace a mindset of experimentation and iteration, where you can push boundaries, explore new technologies, and grow alongside a team of brilliant innovators. 💫

Job Overview

As a Senior Data Engineer, you will be a core contributor to the development and maintenance of scalable, secure, and high-performance data pipelines. You’ll work under the guidance of the Data Engineering Lead and collaborate closely with product, analytics, and engineering teams to ensure reliable data availability across the organization. Your responsibilities will span designing data workflows, implementing data quality frameworks, and optimizing data storage and retrieval for analytics and reporting use cases. You’ll also play a critical role in mentoring junior engineers, adhering to engineering best practices, and proactively improving our data infrastructure.

Day-to-day Activities

- Design, develop, and deploy scalable ETL/ELT pipelines to ingest, transform, and store data from diverse sources such as BigQuery, MySQL, Segment, HubSpot, and Zendesk (a short illustrative sketch follows this list).
- Collaborate with the Data Engineering Lead to implement best practices for data infrastructure, coding standards, and architectural improvements.
- Develop modular and reusable workflows for both batch and streaming data processing using modern orchestration and processing tools.
- Contribute to data modeling, schema optimization, and performance tuning of data warehouses and lakes.
- Implement data validation, monitoring, and alerting systems to ensure high data quality and reliability across environments.
- Support deployment and operationalization of pipelines using CI/CD, DevOps, and infrastructure-as-code principles.
- Participate in code reviews, design discussions, and technical grooming sessions.
- Assist junior engineers through peer mentoring and onboarding, fostering a culture of continuous learning and improvement.
- Coordinate with cross-functional teams (Product, Decision Science, Business Operations) to understand data requirements and deliver effective solutions.
- Troubleshoot pipeline and infrastructure issues, and proactively implement improvements.
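Purely as an illustration of the kind of pipeline orchestration work described above (this is not SAVii's actual codebase or stack), here is a minimal sketch of a daily ELT job using Apache Airflow, one of the tools listed below. It assumes Airflow 2.4+; the DAG id, source names, and load function are hypothetical placeholders.

from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract_and_load(source: str, **context) -> None:
    # Placeholder extract/load step: a real pipeline would call a source-specific
    # client (MySQL, Segment, HubSpot, Zendesk, ...) and write into the warehouse.
    print(f"Loading data from {source} for {context['ds']}")


with DAG(
    dag_id="daily_elt_example",   # hypothetical DAG name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",            # Airflow 2.4+ keyword; older releases use schedule_interval
    catchup=False,
) as dag:
    # One independent load task per source system.
    for source in ("mysql", "segment", "hubspot", "zendesk"):
        PythonOperator(
            task_id=f"load_{source}",
            python_callable=extract_and_load,
            op_kwargs={"source": source},
        )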
What We’re Looking For: Skills & Experience

- 4–8 years of experience in data engineering, preferably in fast-paced, cloud-native environments.
- Proven expertise in designing and building robust data pipelines using tools like Airflow, dbt, Spark, Kafka, or Beam.
- Solid understanding of data warehousing, lakehouse architecture, and streaming frameworks.
- Expertise in SQL and at least one of Python, Scala, or Java.
- Experience with cloud platforms, especially Google Cloud Platform (GCP); exposure to AWS or Azure is a plus.
- Familiarity with DevOps practices, version control (e.g., Git), and CI/CD tools (e.g., GitLab CI, Jenkins).
- Strong problem-solving and debugging skills.
- Excellent communication skills, with the ability to clearly articulate technical concepts to non-technical stakeholders.
- Experience working in Agile environments and contributing to sprint planning and reviews.
- Exposure to data security, access control, and compliance practices.
- Experience with data cataloging, governance tools, or lineage tracking (e.g., DataHub, Amundsen, Collibra) is advantageous.