Shape the data infrastructure behind music rights at scale
As a Data Engineer II, you'll play a key role in building and maintaining the data systems that power a centralized platform for managing music licensing rights. Your work will directly impact how rights data is processed, matched, and reported across a global content environment.
What You’ll Do
- Design, implement, and support batch and streaming data pipelines that process content segments, integrate rights metadata, and generate match outputs critical to platform functionality
- Develop and refine analytics models that convert raw system and service data into actionable insights on platform reliability, rightsholder engagement, and business performance
- Collaborate with product managers, backend engineers, and insights teams to define metrics, build reporting frameworks, and inform strategic platform growth
- Ensure data consistency and timeliness by maintaining export pipelines between backend services and the data warehouse
- Enforce data quality with validation logic, monitoring, and alerting to keep pipelines accurate and reliable
- Support stakeholders in licensing, financial engineering, and content platforms with robust, scalable data solutions
- Engage in product planning and ideation alongside engineers, researchers, and domain experts to shape future capabilities
- Help foster a culture of technical excellence through knowledge sharing, internal training, and collaborative events
What We’re Looking For
- Proficiency in SQL, data modeling, and data warehouse architecture
- Proven experience developing and managing large-scale batch data pipelines using frameworks like Scio, Apache Beam, or Spark
- Familiarity with cloud data platforms such as BigQuery or Snowflake
- Hands-on experience with analytics engineering tools such as dbt, and with layered modeling techniques (staging, transformation, reporting)
- Working knowledge of orchestration systems like Flyte or Airflow
- Ability to write production-grade code in Scala, Python, or Java
- Experience implementing data quality controls, monitoring, and alerting for pipeline stability
- End-to-end ownership of data solutions, from understanding business needs through deployment and validation
- Clear communication with both technical and non-technical partners
- Curiosity for complex domains like rights management, content matching, and multi-region licensing
Technology Environment
Scio, Apache Beam, Spark, BigQuery, Snowflake, dbt, Flyte, Airflow, Scala, Python, Java, SQL
Work Environment
This is a hybrid role based in North America, with collaboration aligned to the Eastern time zone. You’ll have flexibility in where you work while staying connected with your team.
Benefits
- Comprehensive health insurance
- Six months of paid parental leave
- 401(k) retirement plan with company matching
- Monthly meal allowance
- 23 paid days off per year
- Flexible paid holidays
- Paid sick leave
Our Commitment to Inclusion
We believe innovation thrives in an inclusive environment. We are dedicated to equal opportunity regardless of background, identity, or personal characteristics. Our culture values collaboration, continuous learning, and diverse perspectives. We actively support employee growth through hack days, reading groups, and internal knowledge exchange. Bring your unique experience—we’re building the future of listening together.
