Practice Lead, Societal Impacts, Red Teaming, and Futures Prototyping
About Anthropic
Anthropic’s mission is to build reliable, interpretable, and steerable AI systems that are beneficial to society. We are researchers, engineers, policy practitioners, and operators working together to make frontier AI safe and useful in practice.
We are hiring for a role at the seam between societal impacts research, adversarial evaluation, red teaming, and artifact-based inquiry. This role is designed for a builder-researcher who can make subtle sociotechnical risks inspectable before they scale into default behavior.
About the Team
Many of the most consequential risks from advanced AI systems do not first appear as obvious policy violations or benchmark failures. They appear as shifts in trust, authority, dependence, institutional routine, and social meaning. They arrive in useful, attractive, and operationally convenient forms, and can therefore normalize before they are legible.
This role exists to work on that terrain. You will study and prototype the subtle societal effects of frontier AI systems, especially where model behavior is technically compliant yet socially corrosive, over-relied upon, misinterpreted as authoritative, or gradually embedded into institutions in ways that are difficult to reverse.
We are looking for someone who can operate with unusual range: someone who can code, build prototypes in hardware and software, frame research questions with humanities-level sensitivity, translate qualitative observations into evaluable hypotheses, and produce artifacts that help mixed audiences reason about difficult futures. This is not a communications role and not a conventional policy role. It is a research-and-building role for someone who can make the world around AI legible through evidence-bearing prototypes, adversarial probes, scenario artifacts, and concrete evaluation ideas.
Strong candidates will often have a non-standard profile for a frontier AI company: formal engineering training, advanced scholarship on technology and society, founder or operator experience, experience carrying products from concept to reality, and a visible record of teaching, publishing, convening, or otherwise helping wider communities understand emerging technology.
What You'll Do
- Design and run research on subtle societal impacts of frontier AI systems, including over-reliance, authority cues, dependency formation, normalization effects, and institutional adoption patterns.
- Develop red-teaming and adversarial evaluation methods for harms that are contextually harmful or socially corrosive even when they do not present as explicit policy violations.
- Translate qualitative findings from artifact probes, interviews, field observation, usage analysis, or cross-functional inquiry into concrete taxonomies, test sets, evaluation proposals, and mitigation priorities.
- Build artifact-based research outputs such as future documents, simulated interfaces, institutional memos, household scenarios, operational field guides, and other prototypes that help teams inspect how model behavior may shape everyday life.
- Partner with Research, Safeguards, Policy, Communications, and Product teams to connect findings to model behavior work, launch criteria, user-facing interventions, and external explanation.
- Create methods that help Anthropic reason not only about what models can do, but about what forms of social practice and institutional behavior they are helping create.
- Operate as a bridge across technical and non-technical contexts: produce rigorous internal artifacts for researchers and also clear explanatory materials for leadership, partners, and public-interest audiences.
- Engage in public speaking, writing, facilitation, workshops, and other public-facing work to help shape discourse on the societal impacts of frontier AI systems and responsible innovation practices.
Sample Projects
- Design an adversarial study on how users misread fluency, tone, and confidence as competence in emotionally charged or professionally consequential contexts; turn the findings into a new evaluation suite and product guidance.
- Build a set of artifact probes for institutional adoption of AI in education, media, administration, or care, then derive a taxonomy of subtle failure modes around trust, deference, and normalization.
- Prototype a newspaper, memo set, or procedural handbook from a plausible near future in which a model has become quiet decision infrastructure; use the artifact to surface risks that are hard to see in abstract policy discussion.
- Create a recurring workflow that converts weak sociotechnical signals into red-teamable hypotheses, reproducible tests, and candidate mitigations.
- Develop a field guide to socially embedded model failures that helps Anthropic teams discuss and compare risks that do not fit neatly into current benchmark regimes.
- Run a cross-functional workshop in which speculative artifacts, empirical research, and model evaluations are used together to align on a high-consequence deployment question.
You May Be a Good Fit If You
- Have deep technical fluency and can build working prototypes in software; familiarity with hardware, interfaces, or physical computing is especially valuable because many socially consequential systems do not stay purely on-screen.
- Have substantial experience designing and shipping work under ambiguity, including founder/operator, entrepreneurial, independent-lab, or similarly high-ownership environments.
- Have a non-traditional career path: rather than moving from tech company to tech company, you have built things in the world and developed your own methods along the way.
- Have a track record of building and shipping products, research programs, or operational systems that required end-to-end ownership from early concept through prototyping, testing, and real-world uptake.
- Have experience with artifact-based research methods such as speculative prototyping, design fiction, scenario building, worldbuilding, service enactments, or adjacent practices used to make uncertainty concrete.
- Can turn qualitative observations into operational research outputs: taxonomies, protocols, evaluations, decision briefs, and concrete interventions.
- Have excellent written and verbal communication; you can publish, teach, facilitate, and make complex technical questions legible without flattening them.
- Bring evidence of broad synthesis across domains such as engineering, research, product, education, media, design, policy, or community practice.
- Can carry work from framing through implementation: not only ideas, but working artifacts, tested methods, and durable organizational practice.
- Can move comfortably between engineering rigor and humanities-grounded inquiry; you know how to interpret systems technically and culturally.
Strong Candidates May Also Have
- Advanced academic training in a field that sharpens sociotechnical reasoning, such as STS, history of technology, anthropology, human-computer interaction, media studies, or adjacent disciplines.
- Formal engineering training alongside experience in cultural analysis, strategic research, or public pedagogy.
- A track record of building and operating a hardware or software product company, especially where product concept, prototyping, manufacturing, logistics, and customer experience all required direct ownership.
- Experience with patents, inventive systems work, or applied R&D that demonstrates comfort moving between idea generation, technical implementation, and institutional protection or transfer.
- A serious creative practice such as writing, photography, filmmaking, archival work, publishing, or fabrication that strengthens research and explanation.
- Experience leading workshops, seminars, or learning programs that help technical and non-technical groups reason together about emerging technology.
- Public-facing work that has helped shape discourse on technological futures, design methods, or responsible innovation.
- Experience building tools, prototypes, or inquiry methods that improve institutional decision-making rather than simply generating ideas.
Annual Salary
The expected annual compensation range for this role is $360,000 - $880,000 USD.
Logistics
- Education requirements: Bachelor’s degree in a related field or equivalent practical experience. We are particularly interested in candidates whose educational background spans engineering and sociotechnical or humanities-based inquiry.
- Location-based hybrid policy: We currently expect staff to be in an Anthropic office at least 25% of the time; some work may benefit from more in-person collaboration.
- Visa sponsorship: We do sponsor visas when feasible for the role and candidate, and we make reasonable efforts to support immigration pathways when extending an offer.
- Relocation: We are open to relocation for this role and assess case-by-case support.
- Interview format: For this role, we conduct all interviews in Python. Interviews run in a private, time-boxed Docker environment using hypothetical tasks aligned to the role.
How we're different
This role is for someone who understands that some of the most important AI failures are not loud failures. They are soft, cumulative, and organizationally convenient. They often become visible only when given concrete form.
We value candidates who have built things in the world, taught others how to see emerging systems, and developed original methods rather than only inheriting existing ones. The strongest applicants will show evidence that imagination is part of their engineering practice, not decorative frosting on top of it.
An especially strong fit for this role is someone whose portfolio already demonstrates unusual continuity across engineering, cultural analysis, product invention, public explanation, and artifact-based futures work. We are interested in candidates whose range compounds into rigor.
Come work with us!
Anthropic is a public benefit corporation headquartered in San Francisco. We offer competitive compensation and benefits, optional equity donation matching, generous vacation and parental leave, flexible working hours, and a collaborative office environment.
Guidance on Candidates' AI Usage: Learn about our policy for using AI in our application process.
We encourage you to apply even if you do not meet every listed qualification. Strong candidates often have non-linear backgrounds, unusual combinations of experience, and evidence of high judgment across multiple domains.
Interview tasks are synthetic and hypothetical. They are designed to reflect anticipated work types for this role and are not based on active production projects.
Your safety matters to us. Anthropic recruiters only contact candidates from @anthropic.com addresses or clearly identified partner agencies. We will never request fees or banking information prior to employment.
As set forth in Anthropic’s Equal Employment Opportunity policy, we do not discriminate on the basis of any protected group status under applicable law.