📍 San Francisco, Singapore, Amsterdam
About the Role
At Together.ai, we are building state-of-the-art infrastructure to enable efficient and scalable inference for large language models (LLMs). Our mission is to optimize inference frameworks, algorithms, and infrastructure, pushing the boundaries of performance, scalability, and cost-efficiency.
We are seeking an Inference Frameworks and Optimization Engineer to design, develop, and optimize distributed inference engines that support multimodal and language models at scale. This role will focus on low-latency, high-throughput inference, GPU/accelerator optimizations, and software-hardware co-design, ensuring efficient large-scale deployment of LLMs and vision models.
This role offers a unique opportunity to shape the future of LLM inference infrastructure, ensuring scalable, high-performance AI deployment across a diverse range of applications. If you're passionate about pushing the boundaries of AI inference, we’d love to hear from you!
Responsibilities
Inference Framework Development and Optimization
- Design and develop a fault-tolerant, high-concurrency distributed inference engine for text, image, and multimodal generation models.
- Implement and optimize distributed inference strategies, including Mixture of Experts (MoE) parallelism, tensor parallelism, and pipeline parallelism, for high-performance serving.
- Apply CUDA graph optimizations, TensorRT/TRT-LLM graph optimizations, PyTorch compilation (torch.compile), and speculative decoding to enhance efficiency and scalability.
Software-Hardware Co-Design and AI Infrastructure
- Collaborate with hardware teams on performance bottleneck analysis and co-optimize inference performance for GPUs, TPUs, or custom accelerators.
- Work closely with AI researchers and infrastructure engineers to develop efficient model execution plans and optimize end-to-end model serving pipelines.
Requirements
Must-Have:
- Experience:
    - 3+ years of experience in deep learning inference frameworks, distributed systems, or high-performance computing.
- Technical Skills:
    - Familiarity with at least one LLM inference framework (e.g., TensorRT-LLM, vLLM, SGLang, TGI (Text Generation Inference)).
    - Background knowledge and experience in at least one of the following: GPU programming (CUDA/Triton/TensorRT), compilers, model quantization, or GPU cluster scheduling.
    - Deep understanding of KV cache systems such as Mooncake, PagedAttention, or custom in-house variants.
- Programming:
    - Proficiency in Python and C++/CUDA for high-performance deep learning inference.
- Optimization Techniques:
    - Deep understanding of Transformer architectures and LLM/VLM/Diffusion model optimization.
    - Knowledge of inference optimization techniques such as workload scheduling, CUDA graphs, compilation, and efficient kernels.
- Soft Skills:
    - Strong analytical problem-solving skills with a performance-driven mindset.
    - Excellent collaboration and communication skills across teams.