Lead the architecture of a next-generation data center platform designed to enhance power visibility and control in high-density computing environments. As Principal Platform Architect, you'll define and drive the full system design of a distributed, embedded solution that spans from waveform sensing and firmware to edge compute, orchestration, and analytics.
What You’ll Do
- Own the end-to-end technical architecture of a scalable platform integrating high-resolution power sensing, real-time processing, and closed-loop control.
- Establish architectural standards emphasizing determinism, reliability, security, and low latency in power-sensitive production systems.
- Design and optimize ultra-low-latency software paths for waveform capture, anomaly detection, and dynamic power capping across GPU-dense server racks.
- Lead performance modeling and system-wide optimization across CPU, memory, I/O, and networking layers, with deep focus on kernel internals, scheduling, and memory management.
- Provide technical guidance across firmware, bootloaders, BSPs, hardware abstraction, and real-time execution environments.
- Collaborate closely with hardware teams on sensor integration, signal fidelity, timing accuracy, and compute placement.
- Partner with product and customer-facing teams to ensure architectural alignment with real-world data center constraints and scalability goals.
- Mentor and align principal engineers across hardware, firmware, OS, and on-device software disciplines.
- Translate business and customer requirements into scalable technical designs that balance near-term delivery with long-term evolution.
What We’re Looking For
- 10+ years in systems architecture, embedded systems, or high-performance computing platforms.
- Proven experience delivering distributed embedded systems from concept through large-scale deployment in mission-critical settings.
- Deep technical expertise in low-level OS design, firmware, and embedded software development.
- Track record of building low-latency, high-throughput, or real-time systems with strict performance requirements.
- Strong background in hardware-software co-design, especially in power-constrained or performance-sensitive environments.
- Experience with performance profiling, tracing, and optimization across compute, memory, and I/O subsystems.
- Familiarity with real-time scheduling, timing analysis, and deterministic behavior in embedded systems.
- Ability to lead cross-functional teams without formal authority, build consensus, and communicate complex technical concepts clearly.
- Willingness to travel up to 20% of the time.
Preferred Experience
- Experience with GPU-based or accelerator-heavy systems, including power-performance tradeoff analysis.
- Experience with NVIDIA SoC architectures and production-ready implementations.
- Background in data center infrastructure, power systems, or energy-aware computing.
- Knowledge of RDMA, high-speed interconnects, and zero-copy data pipelines.
- Exposure to AI/ML inference workloads in real-time or edge environments.
Technology Environment
Our platform leverages NVIDIA modules, Yocto-based OS layers, GPU telemetry, firmware, BSPs, hardware abstraction layers, real-time execution environments, RDMA, high-speed interconnects, zero-copy pipelines, and AI/ML inference workflows.
Our Culture & Benefits
We foster a diverse, inclusive, and respectful workplace where employees are empowered to solve meaningful problems. We emphasize collaboration, mentorship, and professional growth within a flexible, remote-first environment. Our benefits include competitive compensation; health, dental, and vision coverage; an employer-matched 401(k); and flexible paid time off.
We are committed to equal employment opportunity regardless of race, color, religion, sex, gender identity, sexual orientation, national origin, age, disability, genetic information, or veteran status.