Experience Orchestrator (Speculative Prototyping & Design) — Anthropic Consulting
About Anthropic
Anthropic’s mission is to build reliable, interpretable, and steerable AI systems that are genuinely beneficial to society. We are a rapidly growing group of researchers, engineers, policy practitioners, and operators working together to make frontier AI safer in practice, not just in principle.
Anthropic Consulting partners with commercial and institutional clients to identify where AI can create durable, defensible value, then makes those opportunities tangible through rapid, high-fidelity prototyping and design.
About the Team
We’re looking for an Experience Orchestrator: someone who can translate ambiguous client goals into concrete product directions, align stakeholders around a shared “future state,” and build prototypes that are credible enough to guide investment and engineering.
This role sits at the intersection of strategy, design, and applied AI. You’ll lead engagements that span information products (workflows, decision systems, copilots, internal tools) and physical products and services (consumer, commercial, security, agricultural), with an emphasis on responsible deployment and real-world constraints.
What You'll Do
- Find value creation opportunities: Work with clients to map operations, incentives, constraints, and failure modes; identify where AI can change cost curves, throughput, quality, or risk.
- Orchestrate end-to-end experiences: Define the user journey across humans, models, tools, and environments, specifying what the system does, what the human does, and how handoffs work under pressure.
- Use speculative prototyping as a decision tool: Build prototypes that make futures legible, including interactive demos, service blueprints, “day-in-the-life” simulations, synthetic data scenarios, and lightweight hardware/field mockups.
- Design for real constraints: Incorporate latency, reliability, security boundaries, offline modes, auditability, and operational realities (shift work, field conditions, adversarial settings).
- Translate into buildable plans: Produce clear artifacts such as PRDs, experience specs, evaluation plans, risk registers, and phased roadmaps that engineering teams can execute.
- Partner with technical teams: Collaborate with research, product, and engineering to select model approaches, define evaluation criteria, and prototype safely.
- Champion responsible deployment: Anticipate misuse and harm; design mitigations, monitoring, and escalation paths; help clients adopt best practices for safety and governance.
What You'll Deliver (Examples)
- A prototype “control room” experience for incident response that integrates model-assisted triage, audit trails, and escalation.
- A field-ready concept for an agricultural service that blends sensors, operator workflows, and model-driven recommendations.
- A decision-support product for procurement or logistics with evaluation metrics tied to real outcomes.
- A speculative physical product concept (e.g., ruggedized device + service) with a credible path to deployment and safety controls.
You May Be a Good Fit If You
- Have 8+ years in product design, service design, product strategy, innovation consulting, or adjacent roles where you shipped real systems (not just decks).
- Can lead ambiguous, multi-stakeholder engagements and land decisions with executives and operators.
- Have a strong practice in speculative design and prototyping: you can create artifacts that change minds, including prototypes, narratives, scenarios, and testable hypotheses.
- Are fluent in AI product mechanics: model capabilities/limits, evaluation thinking, prompt/tool orchestration, human-in-the-loop design, and failure analysis.
- Are comfortable spanning digital + physical contexts (or have a strong willingness to learn): devices, environments, logistics, and operational workflows.
- Write exceptionally well: you can produce crisp, structured documents that make tradeoffs explicit.
- Bring a grounded approach to safety, privacy, and security, especially in high-stakes domains.
Strong Candidates May Also Have
- Experience in regulated or high-consequence environments (security, defense, healthcare, finance, critical infrastructure, agriculture operations).
- Familiarity with prototyping stacks (Figma, Framer, code-based prototypes, agent/tool demos) and basic data/ML literacy.
- Background in systems thinking, operations research, or human factors.
Compensation
We offer competitive compensation, equity, and benefits. Exact details depend on location and level.
Logistics
- Education requirements: Bachelor’s degree in a related field or equivalent practical experience.
- Location-based hybrid policy: Some client engagements require in-person collaboration; we will discuss travel and on-site expectations case by case.
- Visa sponsorship: We sponsor visas when feasible for the role and candidate, and we make reasonable efforts to support immigration pathways when extending an offer.
- Relocation: We are open to relocation for this role and assess case-by-case support.
How We're Different
We value clarity, intellectual honesty, and careful reasoning. We move quickly, but we don’t hand-wave. We prototype to learn, measure to decide, and document so others can build.
We’re especially interested in generalists who can connect technical systems and human systems: people who can make things, write clearly, hold complexity without mystifying it, and help clients adopt safe, durable practices rather than brittle demos.
Come work with us!
Anthropic is a public benefit corporation headquartered in San Francisco. We offer competitive compensation and benefits, optional equity donation matching, generous vacation and parental leave, flexible working hours, and a collaborative office environment.
Guidance on Candidates' AI Usage: Learn about our policy for using AI in our application process.
We encourage you to apply even if you do not meet every listed qualification. Strong candidates come from many routes, and we value a broad range of lived, professional, and disciplinary perspectives.
Your safety matters to us. Anthropic recruiters only contact candidates from @anthropic.com addresses or clearly identified partner agencies. We will never request fees or banking information prior to employment.
As set forth in Anthropic’s Equal Employment Opportunity policy, we do not discriminate on the basis of any protected group status under applicable law.