
Home Agentic Systems Alignment Team Lead

Anthropic Home, in partnership with IKEA

San Francisco, CA; Remote (US)

About Anthropic

Anthropic’s mission is to build reliable, interpretable, and steerable AI systems that are beneficial to society. We are a rapidly growing group of researchers, engineers, and operators working together to make powerful AI systems safe in practice.

Anthropic Home is an applied systems program that brings reliable autonomy into the physical substrate of everyday life: home-scale automation, clean air and water, resilient storage, supply and spoilage management, and neighborhood-level coordination. We build infrastructure that is robust, inspectable, and maintainable over years, not demos that degrade after installation.

About the Team

The Home Agentic Systems Alignment team is responsible for keeping home and neighborhood agentic systems aligned to human intention over time.

These systems do not merely pursue instrumental goals ("keep the house at 68°F"); they operate within intention hierarchies that reflect local meaning: what a given household and neighborhood care about, how they make tradeoffs, what they consider respectful behavior, and what "good coordination" looks like during routine life as well as during exceptions (storms, outages, illness, scarcity).

This role leads the team that builds and maintains the alignment layer for Anthropic Home deployments. You will work across embedded systems, evaluation, policy, product, and field operations. The work is not purely technical configuration management. It includes ongoing stewardship: the practices, meetings, and counseling that help communities understand what their systems are doing, how to adjust them, and how to keep them aligned as values, routines, and conditions change.

What You'll Do

  • Lead a cross-functional team responsible for alignment strategy, intention modeling, and long-horizon maintenance for Anthropic Home agentic systems.
  • Design and evolve an intention and goal-configuration hierarchy that is legible to humans, safe under adversarial conditions, and practical for technicians to deploy and maintain.
  • Build evaluation and monitoring that detects drift between neighborhood intentions and system behavior, including subtle failures that look “fine” in telemetry but degrade trust, agency, or collective outcomes.
  • Partner with engineering to translate alignment requirements into concrete product constraints: UI affordances, permissions, override paths, logging, and escalation.
  • Work directly with households and neighborhoods when needed: run “Neighborhood README” sessions, facilitate intent calibration, and mediate tradeoffs between stakeholders with different values and constraints.
  • Develop field playbooks for maintenance, upgrades, incident response, and re-alignment; train and support technicians who operate at the intersection of systems and social practice.
  • Coordinate with Safety, Legal, and Policy to ensure deployments meet privacy, consent, and governance requirements in real-world contexts.
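As a purely illustrative sketch of what a "human-legible intention hierarchy" could look like in code, the example below models intentions at three scopes, with household preferences overriding neighborhood norms but never hard safety constraints. All names, scopes, and fields here are hypothetical, not an actual Anthropic Home schema.

```python
from dataclasses import dataclass

@dataclass
class Intention:
    key: str         # setting the intention governs, e.g. "thermostat.target_f"
    value: object
    scope: str       # "safety" | "household" | "neighborhood" (illustrative)
    rationale: str   # human-readable explanation, kept for legibility

# Higher number wins when two intentions target the same key.
SCOPE_PRIORITY = {"safety": 2, "household": 1, "neighborhood": 0}

def resolve(intentions):
    """Return the effective intention per key, honoring scope priority."""
    effective = {}
    for i in intentions:
        current = effective.get(i.key)
        if current is None or SCOPE_PRIORITY[i.scope] > SCOPE_PRIORITY[current.scope]:
            effective[i.key] = i
    return effective

intentions = [
    Intention("thermostat.target_f", 68, "neighborhood", "community energy norm"),
    Intention("thermostat.target_f", 71, "household", "elderly resident comfort"),
    Intention("thermostat.max_f", 78, "safety", "hard safety ceiling"),
]
effective = resolve(intentions)
```

Keeping a `rationale` on every intention is the point of the sketch: the resolved configuration can always explain itself to the people it affects.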

Sample Projects

  • Design an intention model for a neighborhood food commons: integrate household-level spoilage indicators, shared storage capacity, and consent-aware sharing protocols into an MCP surface that supports coordination without surveillance.
  • Build a maintenance and re-alignment workflow for home-scale water storage and filtration that adapts to seasonal patterns, outages, and household changes while remaining auditable and safe.
  • Prototype a “collective resilience” dashboard that aggregates at the neighborhood level without leaking sensitive household data; define the governance and access controls that make the tool legitimate.
  • Create a field-ready process for “intent drift” incidents (e.g., comfort vs. cost conflicts, misinterpreted norms, emergent automation loops) that includes detection, explanation, rollback, and a durable fix.
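One minimal sketch of the detection step in an "intent drift" process: compare a rolling window of observed behavior against a declared intent band, and flag only sustained deviation (to avoid alerting on transient excursions). The class name, thresholds, and units are illustrative assumptions, not a real Anthropic Home component.

```python
from collections import deque

class DriftDetector:
    """Flag sustained deviation of observed values from a declared intent."""

    def __init__(self, target, tolerance, window=5):
        self.target = target         # declared intent, e.g. 68.0 °F
        self.tolerance = tolerance   # acceptable deviation from target
        self.readings = deque(maxlen=window)

    def observe(self, value):
        """Record a reading; return True once the full window has drifted."""
        self.readings.append(value)
        full = len(self.readings) == self.readings.maxlen
        return full and all(abs(v - self.target) > self.tolerance
                            for v in self.readings)

detector = DriftDetector(target=68.0, tolerance=2.0, window=3)
alerts = [detector.observe(v) for v in [68.5, 73.0, 72.5, 71.5, 72.0]]
# A single in-band reading (68.5) keeps early windows quiet; once three
# consecutive readings all exceed the tolerance, the detector fires.
```

In a real deployment, a fired alert would feed the explanation and rollback steps rather than act autonomously.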

You May Be a Good Fit If You

  • Have strong engineering capability and can reason across software, infrastructure, and user-facing systems; you can ship robust tools and processes, not just slideware.
  • Have experience leading teams in ambiguous, high-stakes environments where safety, privacy, and human trust are core product requirements.
  • Can translate between technical systems and human systems: incentives, rituals, routines, and community governance are first-class inputs to your work.
  • Have experience with sociotechnical practice such as community organizing, facilitation, anthropology/ethnography, participatory research, or adjacent fields that make you effective in real communities.
  • Communicate clearly in writing and in live settings with mixed audiences (households, technicians, engineers, and leadership).
  • Have a track record of owner-operator work (startups, independent practice, community leadership, field programs, or equivalent) that demonstrates judgment and follow-through.

Strong Candidates May Also Have

  • Experience building or operating agentic systems, evaluation harnesses, safety reviews, or incident response for autonomy in the real world.
  • Experience with embedded/edge systems, home automation, energy/water systems, or other cyber-physical deployments where maintenance is part of product reality.
  • Background in HCI, STS, design research, or policy work relevant to consent, governance, and social legitimacy of infrastructure.
  • Serious creative practice (e.g., photography, writing, archival work, making) that strengthens observation, explanation, and taste in ambiguous settings.
  • Experience building community-facing pedagogy: workshops, documentation, onboarding rituals, and “how this works” explanations that reduce fear and misuse.

Annual Salary:

The annual compensation range for this role is listed below.

$280,000 - $650,000 USD

Logistics

  • Education requirements: Bachelor’s degree in a related field or equivalent practical experience.
  • Location-based hybrid policy: We currently expect staff to be in an Anthropic office at least 25% of the time; some field programs may require more in-person collaboration.
  • Travel: Periodic travel to deployment sites for onboarding, maintenance, and incident response.
  • Visa sponsorship: We do sponsor visas when feasible for the role and candidate, and we make reasonable efforts to support immigration pathways when extending an offer.
  • Relocation: We are open to relocation for this role and assess case-by-case support.

How We're Different

You will be successful in this role if you can hold two truths at once: systems must be technically correct, and they must be socially correct. This team treats intention as an engineering object, not a marketing claim.

We value people who can operate with empathy without becoming vague: you can name tradeoffs, document decisions, and make changes that improve both system behavior and community understanding.

Come work with us!

Anthropic is a public benefit corporation headquartered in San Francisco. We offer competitive compensation and benefits, optional equity donation matching, generous vacation and parental leave, flexible working hours, and a collaborative office environment.

Guidance on Candidates' AI Usage: Learn about our policy for using AI in our application process.

We encourage you to apply even if you do not meet every listed qualification. Strong candidates come from many routes, and we value a broad range of lived, professional, and disciplinary perspectives.

Your safety matters to us. Anthropic recruiters only contact candidates from @anthropic.com addresses or clearly identified partner agencies. We will never request fees or banking information prior to employment.

As set forth in Anthropic’s Equal Employment Opportunity policy, we do not discriminate on the basis of any protected group status under applicable law.
