
NVIDIA is hiring a Senior Software Development Engineer, TensorRT-LLM

NVIDIA is looking for a Senior Software Development Engineer to join our TensorRT-LLM team. In this role, you will craft and develop robust inferencing software that can be scaled to multiple platforms for functionality and performance. You will collaborate across the company to guide the direction of machine learning inferencing.

What You'll Do

  • Craft and develop robust inferencing software that can be scaled to multiple platforms for functionality and performance.
  • Perform benchmarking, profiling, and system-level programming for GPU applications.
  • Closely follow academic developments in artificial intelligence and deliver feature updates to TensorRT.
  • Provide code reviews, design docs, and tutorials to facilitate collaboration among the team.
  • Conduct unit tests and performance tests for different stages of the inference pipeline.
  • Collaborate across the company to guide the direction of machine learning inferencing, working with software, research and product teams.
  • Write safe, scalable, modular, and high-quality C++/Python code for our core backend software for LLM inference.
  • Improve the usability of the TensorRT-LLM library and build systems (CMake).

What We're Looking For

  • Master's or higher degree in Computer Engineering, Computer Science, Applied Mathematics, or a related computing-focused field (or equivalent experience).
  • 4+ years of relevant software development experience.
  • Excellent C/C++ programming and software design skills, including debugging, performance analysis, and test design.
  • Strong curiosity about artificial intelligence and awareness of the latest developments in deep learning, such as LLMs, generative models, and recommender models.
  • Experience working with deep learning frameworks like TensorFlow and PyTorch.
  • Self-starter who consistently takes initiative to drive projects forward.
  • Excellent written and oral communication skills in English.

Nice to Have

  • Prior experience with an LLM framework or a DL compiler in inference, deployment, algorithms, or implementation.
  • Prior experience with performance modeling, profiling, debugging, and code optimization of a DL/HPC/high-performance application.
  • Architectural knowledge of CPUs and GPUs.
  • GPU programming experience (CUDA or OpenCL).

Technical Stack

  • C++, Python
  • TensorFlow, PyTorch
  • CUDA, OpenCL
  • CMake

Benefits & Compensation

  • Compensation: $148,000 USD - $235,750 USD for Level 3, and $184,000 USD - $287,500 USD for Level 4.
  • Eligible for equity and benefits.

Work Mode

This position follows a hybrid work model.

NVIDIA is committed to fostering a diverse work environment and proud to be an equal opportunity employer. As we highly value diversity in our current and future employees, we do not discriminate (including in our hiring and promotion practices) on the basis of race, religion, color, national origin, gender, gender expression, sexual orientation, age, marital status, veteran status, disability status or any other characteristic protected by law.

Required Skills

C++, Python, TensorFlow, PyTorch, CUDA, OpenCL, CMake, LLM, Deep Learning, High-Performance Computing, GPU Programming, Distributed Systems, Model Optimization
About Company

NVIDIA builds accelerated computing platforms and AI technologies that power advancements in areas such as generative AI, data centers, robotics, and digital twins.
Job Details

Category: Data. Posted 6 months ago.