Runtime Generalist Engineer
Physical Intelligence
Location
San Francisco
Employment Type
Full time
Location Type
On-site
Department
Engineering
Who We Are
Physical Intelligence is bringing general-purpose AI into the physical world. We are a group of engineers, scientists, roboticists, and company builders developing foundation models and learning algorithms to power the robots of today and the physically actuated devices of the future.
Achieving real-world performance requires extremely tight system latency, reliable sensor pipelines, and end-to-end engineering that makes perception and control loops work at real-time speeds.
As a Runtime Generalist Engineer, you’ll engineer the low-latency, high-throughput systems that underpin our physical intelligence model. You won’t be designing ML models; you’ll be the person who makes them run flawlessly in production, optimizing every layer from the OS to the camera pipeline to the network. You’ll collaborate closely with researchers, platform engineers, and robotics operators to identify bottlenecks and extract maximum performance from the entire system.
The Team
The Runtime team is responsible for building the core platform that Pi’s robots, sensors, and evaluation pipelines rely on. The team spans Linux systems engineering, camera and sensor pipelines, robot actuator controllers, networking, real-time IO, and performance tooling. They ensure our ML models and control systems operate under strict latency budgets and are robust under real-world conditions.
In This Role You Will
- Own Real-Time Pipelines: Engineer low-latency, high-reliability sensor and actuator pipelines across Linux, drivers, and middleware.
- Optimize System Performance: Profile and optimize across compute, I/O, memory, scheduling, networking, and storage to meet real-time constraints and increase throughput.
- Build OS-Level Capabilities: Extend or modify Linux components, drivers, and scheduling to achieve deterministic behavior under load.
- Streaming & Video Systems: Develop and optimize real-time video streaming systems where frame timing and packet scheduling matter.
- Reliability & Debugging: Build tooling for profiling, tracing, and debugging timing issues across distributed systems and hardware interfaces.
- Cross-Functional Collaboration: Work with researchers, hardware engineers, and operations teams to integrate optimized pipelines into production workflows.
What We Hope You’ll Bring
- Strong programming skills in C++, Rust, or Python, with experience building and optimizing production software.
- Experience with Linux systems programming (syscalls, drivers, kernel parameters, scheduling, memory/IO subsystems).
- Background in real-time or near-real-time systems, VR/AR, video pipelines, 3D engines, or streaming systems where latency budgets are strict.
- Ability to optimize across the entire stack: kernel scheduling, drivers, networking, GPU/CPU workloads, video frameworks, and distributed components.
- Experience with profiling tools (perf, tracing, eBPF, GPU profilers, network analyzers) and comfort diving into complex performance issues.
- A mindset oriented around determinism, throughput, frame budgets, jitter minimization, and real-time correctness.
- Ability to collaborate deeply with researchers and platform engineers to translate high-level model requirements into real-world system performance.
Bonus Points If You Have
- Experience with VR/AR platforms or low-latency 3D engines.
- Camera system expertise (synchronization, capture pipelines, codecs, GPU offload).
- Streaming/video conferencing stack experience (WebRTC, real-time transport optimizations).
- Background in robotics, autonomous systems, SLAM pipelines, or perception systems (implementation, not research).
- Expertise in kernel-level engineering, device drivers, or high-performance networking.
- Familiarity with distributed systems that process real-time data flows.