As a Software Engineer, you will play a key role in shaping a new data streaming platform within an agile, cross-functional team. The focus is on designing and implementing scalable solutions for data integration, stream processing, and platform observability using cutting-edge technologies.
What You'll Do
- Build and maintain integration components such as client libraries to enable seamless data flow across systems
- Design and implement stream processing workflows using tools like Flink or Databricks, including operations like windowing, joins, and state management
- Enforce schema standards by implementing and maintaining a centralized schema registry
- Develop monitoring, logging, and error handling mechanisms to ensure system reliability
- Contribute to data governance practices and role-based access controls
- Build visualization layers to support data transparency and insights
- Containerize services and manage deployments on AKS (Azure Kubernetes Service), ensuring scalable and resilient operations
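To give a flavor of the windowing operations mentioned above: engines like Flink group unbounded event streams into bounded, fixed-size time windows before aggregating. The following is a minimal standalone Python sketch of a tumbling-window count (the function name and sample events are illustrative, not part of the team's actual stack; in practice Flink or Databricks provides these operators natively, along with watermarking and state management):

```python
from collections import defaultdict

def tumbling_window_counts(events, window_size_s):
    """Group (timestamp, key) events into fixed, non-overlapping
    time windows and count occurrences per key in each window."""
    windows = defaultdict(lambda: defaultdict(int))
    for ts, key in events:
        # Each event falls into exactly one window, identified by its start time
        window_start = (ts // window_size_s) * window_size_s
        windows[window_start][key] += 1
    return {w: dict(counts) for w, counts in sorted(windows.items())}

# Hypothetical click events: (epoch seconds, user id)
events = [(100, "a"), (102, "b"), (104, "a"), (161, "a")]
print(tumbling_window_counts(events, 60))
# {60: {'a': 2, 'b': 1}, 120: {'a': 1}}
```

Real stream processors extend this idea with event-time semantics, late-data handling, and fault-tolerant state, which is where the state management responsibility above comes in.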
What We're Looking For
You thrive in dynamic, international settings and can collaborate effectively with technically advanced stakeholders. You're comfortable taking initiative and solving complex data engineering challenges.
Preferred Background
- Hands-on experience with Kafka and stream processing architectures
- Familiarity with Databricks, Flink, or similar processing engines
- Knowledge of schema registry implementation and data governance principles
- Proven experience with containerization and orchestration platforms, particularly AKS
- Experience building monitoring solutions and visualization layers
Work Environment
This role operates in a hybrid model, allowing remote work within Poland or a mix of onsite and remote presence in Katowice. The team emphasizes agile practices, continuous learning, and the adoption of modern technical standards in a collaborative, international context.