Data Engineer II - SRC - Music at Spotify


Location: EMEA

The Rights Systems team builds and operates Spotify Rights Center (SRC), Spotify's rights management platform that enables Spotify and its partners to identify, manage, and enforce music licensing rights across all the content available on the platform. Comparable in scope to YouTube's Content ID, SRC is purpose-built for Spotify's ecosystem and underpins the company's strategy for video and emerging content types. Over the past year, the team has taken SRC from concept to a fully operational production platform, delivering automated content scanning, policy management, enforcement pipelines, appeals workflows, manual claims, and analytics capabilities. The team has grown from a single squad to three squads (~30 bandmates) and continues to scale as a strategic investment area for Spotify over the next several years.

What You'll Do

- Build and maintain the data pipelines and analytics infrastructure that power Spotify's rights management platform.
- Own batch and streaming pipelines that generate core datasets used by Spotify Rights Center, including processing content segments, joining rights metadata, and producing the match data that drives the product.
- Develop and evolve analytics models that transform pipeline and service data into reporting on system reliability, rightsholder adoption, and business ROI.
- Partner closely with product managers, backend engineers, and insights teams to define metrics, build dashboards, and generate data that informs strategic decisions around platform expansion.
- Maintain data export pipelines connecting backend services and the data warehouse to ensure downstream consumers receive timely and accurate data.
- Implement strong data quality practices, including validation tests, alerts, and monitoring, to ensure reliable pipeline outputs.
- Contribute to technical solutions that support licensing, financial engineering, and content platform stakeholders.
- Participate in product ideation with engineers, researchers, product managers, and domain experts across the team.
- Contribute to a collaborative engineering culture and support continuous learning through hack days, reading groups, and internal training.

Who You Are

- You have strong SQL skills and deep experience with data modeling and warehouse design.
- You have experience building and operating batch data pipelines at scale using tools such as Scio, Apache Beam, Spark, or similar frameworks.
- You are comfortable working with modern cloud data warehouses such as BigQuery, Snowflake, or similar technologies.
- You have experience with analytics engineering tools like dbt and understand layered data modeling approaches (staging, transformation, reporting).
- You have worked with workflow orchestration platforms such as Flyte, Airflow, or similar systems.
- You can write production-quality code in languages such as Scala, Python, or Java.
- You understand data quality practices and have built monitoring and alerting systems for pipeline reliability.
- You take ownership of solutions end to end, from understanding business questions to deploying and validating data pipelines.
- You communicate clearly with both technical and non-technical stakeholders.
- You enjoy learning new domains and tackling complex data challenges such as rights management, content matching, and multi-territory licensing.

Where You'll Be

We offer you the flexibility to work where you work best! For this role, you can be based anywhere within the EMEA region where we have a work location (excluding France due to on-call restrictions). This team collaborates within the Central European and GMT time zones.