Senior Data Engineer ETL at MediaRadar


Role: Senior Data Engineer
Location: US Remote

Please note: At this time, we are unable to sponsor employment visas. Only candidates who are legally authorized to work in the United States on a full-time, long-term basis without current or future visa sponsorship will be considered. Additionally, we are not considering applicants seeking short-term, contract-based, or temporary employment arrangements.

About MediaRadar

MediaRadar, now including the data and capabilities of Vivvix, powers the mission-critical marketing and sales decisions that drive competitive advantage. Our competitive advertising intelligence platform enables clients to achieve peak performance with always-on data and insights that span the media, creative, and business strategies of five million brands across 30+ media channels. By bringing the advertising past, present, and future into focus, our clients can rapidly act on the competitive moves and emerging advertising trends impacting their business.

Job Description

We are seeking an experienced Senior Data Engineer to join the team and take ownership of our data infrastructure. With millions of new data points ingested daily, your mission will be to architect, build, and scale robust data pipelines that ensure flawless data quality. You'll work with a passionate team on a modern cloud data stack (Azure, Databricks), solving complex challenges to deliver timely and reliable data that drives our business and delights our customers.

What You'll Do

- Architect & Build: Design, implement, and optimize scalable, reliable ELT/ETL pipelines using Databricks, Spark, Python, and stored procedures. You will model complex data and build solutions for both batch and streaming workloads.
- Lead & Mentor: Guide and support a team of data engineers, providing technical direction and resolving challenges. You will elevate the team's skills through mentorship and constructive code reviews to ensure we ship high-quality, efficient code.
- Ensure Data Integrity: Develop and implement comprehensive testing frameworks, data validation rules, and QA plans to guarantee the accuracy and integrity of our data assets.
- Optimize & Troubleshoot: Proactively monitor system performance, tune complex SQL queries, and troubleshoot production issues. You will perform root-cause analysis and implement lasting solutions to improve system health and reliability.
- Collaborate & Innovate: Actively participate in an Agile environment (Scrum, sprints, backlog grooming) and collaborate with cross-functional teams. You'll help evaluate and introduce new technologies and best practices to continuously improve our data platform.

What You'll Bring (Qualifications)

- A bachelor's or master's degree in Computer Science, Engineering, or a related field (or equivalent practical experience).
- 7+ years of hands-on experience in data engineering, with a strong focus on Databricks, Apache Spark, and at least one RDBMS for processing large-scale datasets.
- Expert-level proficiency in Python (including PySpark) and a strong understanding of data structures, algorithms, and OOP principles.
- Deep expertise in SQL and Spark SQL, with proven experience writing and optimizing complex analytical queries.
- Hands-on experience with the Databricks ecosystem, including Delta Lake, Unity Catalog, and DataFrame APIs.

Bonus Points (Nice to Haves)

- Experience building and maintaining CI/CD pipelines using tools like Azure DevOps.
- Previous experience migrating a legacy data system to a modern, unified data platform.
- Familiarity with containerization (Docker, Kubernetes).
- Experience working in a fast-paced, product-driven environment.

Company Location: United States.