Software Engineer, Agents Runtime
Glean
About the Role:
The Agents Runtime team builds the low‑latency, reliable, and secure foundation that powers Glean’s AI agents and assistant experiences at scale. You’ll design and operate core runtime services for multi‑turn orchestration, tool calling, model routing, memory, streaming, and safety. You’ll work across distributed systems, production observability, and ML infra integrations to deliver an experience that feels instant, accurate, and trustworthy — while optimizing cost and reliability.
You will:
- Own impactful runtime problems end‑to‑end — from architecture and design to production launch and ongoing reliability.
- Build and evolve core services for session lifecycle, streaming responses (e.g., gRPC/WebSockets), structured tool execution, memory/state, and policy/guardrails.
- Design for performance, correctness, and cost: reduce p50/p95 latency, improve tail behavior, and optimize token/tool budgets.
- Integrate with leading LLM providers (e.g., OpenAI, Anthropic, Google Gemini) and internal evaluation frameworks to improve quality and predictability.
- Harden the platform with fault isolation, retries, timeouts, circuit‑breaking, backpressure, and graceful degradation.
- Instrument deep observability (tracing, metrics, logs) and create playbooks/SLOs for high availability and on‑call excellence.
- Collaborate closely with product, quality, and application teams to prioritize the most impactful roadmap investments.
About you:
- 3+ years of software engineering experience building production distributed systems or cloud‑native applications.
- BS/BA in Computer Science or related field, or equivalent practical experience.
- Strong coding skills in at least one of: Python, Go, Java, or C++, with a focus on reliability, performance, and tests.
- Product‑minded: you prioritize customer impact, clear SLAs/SLOs, and pragmatic iteration.
- Ownership‑driven with a positive, proactive attitude; comfortable leading projects or learning from battle‑tested engineers.
- Experience operating services on Kubernetes and at least one major cloud (e.g., GCP, AWS, or Azure).
- Familiarity with event/streaming systems (e.g., Pub/Sub, Kafka), caching (e.g., Redis), and data stores for low‑latency paths.
- Practical understanding of LLM/agents building blocks: tool/function calling, structured outputs, streaming, and model selection/routing.
- Strong observability and debugging skills: tracing (e.g., OpenTelemetry), metrics, dashboards, and production forensics.
- Background in one or more areas is a plus: policy/guardrails, multi‑tenant isolation, rate‑limiting, concurrency control, cost optimization.
Location:
- This role is hybrid (3-4 days a week in one of our SF Bay Area offices).
Compensation & Benefits:
The standard base salary range for this position is $140,000 - $265,000 annually. Compensation offered will be determined by factors such as location, level, job-related knowledge, skills, and experience. Certain roles may be eligible for variable compensation, equity, and benefits.
We offer a comprehensive benefits package including competitive compensation, Medical, Vision, and Dental coverage, a generous time-off policy, and the opportunity to contribute to a 401k plan to support your long-term goals. When you join, you'll receive a home office improvement stipend, as well as annual education and wellness stipends to support your growth and wellbeing. We foster a vibrant company culture through regular events, and provide healthy lunches daily to keep you fueled and focused.
We are a diverse group of people, and we want to continue to attract and retain a diverse range of talent in our organization. We're committed to building an inclusive and diverse company. We do not discriminate based on gender, ethnicity, sexual orientation, religion, civil or family status, age, disability, or race.
#LI-Hybrid