As the Lead Performance Tester (SDET), you will shape and lead performance engineering for high-availability, cloud-native telecommunications platforms. You will set the strategic direction for performance validation across distributed systems, ensuring reliability and scalability under real-world conditions.
Key Responsibilities
- Develop and oversee the end-to-end performance engineering strategy for complex, multi-tenant telecom environments running on AWS.
- Lead and mentor a distributed group of SDETs embedded across feature teams, enabling those teams to integrate performance testing into their development cycles using k6 and other tooling.
- Design and maintain robust, reusable backend performance frameworks in Python, Java, or TypeScript, focused on API, database, and infrastructure layers.
- Implement production traffic replay using tools like GoReplay to simulate realistic load patterns and system contention.
- Execute comprehensive testing types, including load, stress, endurance, and chaos engineering, to validate system resilience and scalability.
- Test event-driven architectures leveraging Kafka and RabbitMQ under production-like throughput and latency demands.
- Integrate performance checks into CI/CD pipelines via Jenkins or GitHub Actions to enable early detection of regressions and automated quality gates.
- Use observability platforms such as Grafana, CloudWatch, Coralogix, and PMM to monitor system behavior, analyze bottlenecks, and support root cause analysis.
- Define performance benchmarks, SLOs, and error budgets that align with business objectives and technical constraints.
- Collaborate with architecture and data engineering teams to assess system design choices, validate database scalability (including MySQL and Tungsten), and recommend performance-driven improvements.
- Evaluate and optimize both modern microservices and legacy monolithic systems to ensure consistent performance outcomes.
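To make the SLO and error-budget responsibility concrete, here is a minimal sketch in Python (one of the languages the role works in) of how an availability SLO translates into a monthly error budget. The function names, the 30-day window, and the example SLO values are illustrative assumptions, not part of any specific platform described above.

```python
# Hypothetical sketch: turning an availability SLO into an error budget.
# All names and numbers are illustrative assumptions.

def error_budget_minutes(slo: float, window_days: int = 30) -> float:
    """Minutes of allowed downtime for a given availability SLO
    over a rolling window (e.g. 0.999 over 30 days -> ~43.2 min)."""
    total_minutes = window_days * 24 * 60
    return (1.0 - slo) * total_minutes

def budget_remaining(slo: float, downtime_minutes: float,
                     window_days: int = 30) -> float:
    """Fraction of the error budget still unspent (negative if blown)."""
    budget = error_budget_minutes(slo, window_days)
    return 1.0 - downtime_minutes / budget

if __name__ == "__main__":
    print(round(error_budget_minutes(0.999), 1))    # 43.2
    print(round(budget_remaining(0.999, 10.0), 3))  # 0.769
```

A calculation like this is typically what backs an automated quality gate: once the remaining budget falls below an agreed threshold, risky releases pause until reliability work restores headroom.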
Required Expertise
- Proven experience in backend performance engineering with hands-on coding in Python, Java, or TypeScript.
- Track record of leading performance testing initiatives in large-scale, AWS-hosted environments.
- Deep familiarity with k6, JMeter, or Gatling for designing and executing scalable performance tests.
- Experience with production traffic shadowing tools such as GoReplay.
- Solid understanding of microservices, distributed systems, and real-time communication architectures.
- Proficiency in AWS services including EC2, ECS, RDS, and API Gateway, with a focus on performance at scale.
- Hands-on use of observability stacks like Grafana, Coralogix, VictoriaMetrics, and PMM for performance diagnostics.
- Experience integrating performance testing into Agile and CI/CD workflows, delivering actionable insights to development teams.
- Strong ability to guide technical teams through framework adoption, code reviews, and cross-functional test planning.
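As a flavour of the hands-on framework coding this role expects, here is a small Python sketch of a percentile-based latency gate of the kind a reusable performance framework might expose. The function names and the 250 ms p95 budget are hypothetical examples, not values taken from the posting.

```python
# Hypothetical sketch: nearest-rank percentiles over latency samples,
# used as a simple pass/fail performance gate. Names and the budget
# value are illustrative assumptions.
import math

def percentile(samples: list[float], pct: float) -> float:
    """Nearest-rank percentile of a list of latency samples (ms)."""
    ordered = sorted(samples)
    rank = max(0, math.ceil(pct / 100 * len(ordered)) - 1)
    return ordered[rank]

def p95_gate_passes(samples: list[float], budget_ms: float = 250.0) -> bool:
    """True if the p95 latency stays within the agreed budget."""
    return percentile(samples, 95) <= budget_ms
```

Wired into a Jenkins or GitHub Actions job, a check like this lets a pipeline fail fast on latency regressions instead of relying on after-the-fact dashboard review.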


