We're looking for a seasoned backend engineer who's passionate about building high-performance systems that process massive volumes of video data and support ML inference at scale. You'll architect and implement services that handle petabytes of data while maintaining low latency and high throughput, and you'll take ownership of critical infrastructure powering our computer vision models and data processing pipelines.
You will:
- Design and implement scalable backend services in Rust to process high volumes of video data
- Build reliable ML inference systems that can handle thousands of requests per second
- Architect distributed systems for real-time video processing and analysis
- Optimize performance at every layer, from database queries to network protocols
- Collaborate closely with ML engineers, frontend developers, and infrastructure teams
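For a flavor of the work described above: high-throughput video pipelines typically need backpressure so producers can't outrun workers. Here's a minimal, stdlib-only Rust sketch of that idea (all names are illustrative, not from our codebase) using a bounded channel fanned out across worker threads:

```rust
use std::sync::mpsc::sync_channel;
use std::sync::{Arc, Mutex};
use std::thread;

// Hypothetical frame payload; a real pipeline would carry decoded video data.
struct Frame {
    bytes: Vec<u8>,
}

/// Fan `n_frames` synthetic frames out to `n_workers` threads over a bounded
/// channel and return how many frames were processed in total.
fn process_frames(n_frames: u64, n_workers: usize) -> u64 {
    // Bounded channel: the producer blocks when workers fall behind (backpressure).
    let (tx, rx) = sync_channel::<Frame>(64);
    let rx = Arc::new(Mutex::new(rx));

    let workers: Vec<_> = (0..n_workers)
        .map(|_| {
            let rx = Arc::clone(&rx);
            thread::spawn(move || {
                let mut processed = 0u64;
                loop {
                    // Pull one frame; the lock serializes access to the receiver.
                    let frame = match rx.lock().unwrap().recv() {
                        Ok(f) => f,
                        Err(_) => break, // channel closed: producer is done
                    };
                    // Stand-in for real work (decode, run inference, etc.).
                    let _checksum: u64 = frame.bytes.iter().map(|&b| b as u64).sum();
                    processed += 1;
                }
                processed
            })
        })
        .collect();

    for _ in 0..n_frames {
        tx.send(Frame { bytes: vec![1u8; 1024] }).unwrap();
    }
    drop(tx); // close the channel so workers exit their loops

    workers.into_iter().map(|h| h.join().unwrap()).sum()
}

fn main() {
    println!("processed {} frames", process_frames(1_000, 4));
}
```

In production this shape usually shows up with async runtimes and gRPC streaming rather than raw threads, but the core tradeoff (bounded queues vs. unbounded memory growth) is the same.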
Our Stack
- Backend: Rust, gRPC, Protocol Buffers
- ML Inference: PyTorch, ONNX Runtime, TensorRT
- Infrastructure: Kubernetes, AWS/GCP, Terraform
- Data: Kafka, S3, distributed object storage, time-series databases
You'll be a great fit if...
- You have 5+ years of experience building large-scale backend systems
- You've worked with video processing pipelines or ML inference systems at scale
- You're experienced with distributed systems challenges like consistency, fault tolerance, and performance tuning
- You can navigate complex architectural decisions and make thoughtful tradeoffs
- Bonus: experience with Rust, real-time streaming, or computer vision infrastructure