On a Tuesday morning in a brightly lit conference room in San Francisco, a half dozen former researchers, designers, and anthropologists passed a printed product catalog across a long table, stopping at a page that showed a consumer device labeled with a safety appendix. The catalog was not attached to any manufacturer; it had been produced by a policy research group that had once worked inside commercial AI labs, and the document was intended to act as evidence rather than marketing collateral.
Two days earlier, an institutional foresight project released a graphic collection with 33 contributors that imagined near-future military and civic scenarios; a separate strategy paper came with a specification for a speculative artifact — a Mutual Assured AI Malfunction reasoning prompt — that reframed deterrence in algorithmic terms. In both cases, the artifacts were not fantasy for fantasy’s sake; they were rehearsal objects intended to surface legal tangles and cultural frictions that prose alone often hides.
Those signals, coming from defense-level thought experiments and national-security strategy papers, have pushed civil policy groups to reimagine what counts as testimony in hearings and what passes for a regulatory rationale. Once, regulatory dialogues and convenings happened as round-table discussions, with policy wonks reading white papers; now they increasingly happen around mock product catalogs, graphic novels, and buildable spec sheets. And whereas this would once have been dismissed as fringe, a woo-woo breakout exercise during lunch, it is becoming a mainstream part of the policy process. The shift is not just stylistic; it is procedural, and it has implications for how law is made, how compliance is enforced, and how the public understands risk.
The stakes are straightforward and procedural; when the unit of evidence shifts from a prose report to a material artifact, regulators, lawyers, executives, and courts start to see different problems. That shift reframes discussions about liability, user consent, and system behavior because an artifact makes possible consequences concrete; it forces bodies that write law to imagine enforcement, inspection, and compliance in the language of objects.
The shift is profound, though it hardly settles the question of how to govern AI; it moves the conversation from abstract principles to concrete design decisions. When policy groups hand a lawmaker a mock product, the conversation shifts from abstraction to obligation. The artifact becomes a legible target for debate rather than an abstract admonition about risk. Speculative prototypes act like rehearsal scripts for governance; they reveal not only failure modes but values.
What is Speculative Prototyping? It is a new practice that has emerged to address the rapidly closing gap between an idea and its implementation. With AI-based implementation models shrinking the time between a concept and its deployment among billions of users to nearly zero, the policy process has had to adapt. Speculative Prototyping, many argue, is the response needed to keep up with the pace of change.
At Anthropic, a small group of researchers and designers began to experiment with a variant of Speculative Prototyping they call “Design Fiction”. To them, it is a way not only to explore the downstream effects of AI governance choices, but also to unlock potential new applications for AI that might otherwise founder for lack of language to describe something previously nonsensical, or so fantastical as to be beyond reason.
Thurman Youngblood is on a team at OpenAI that has been producing speculative prototypes for the past year; he describes the practice as a way to prototype while also provoking discussion. “We found that creating artifacts — mock product catalogs, magazine feature stories, specification sheets, and quick-start guides — was a rapid way to ideate, without the burden of too much structure around the exploration. These artifacts were not meant to be fantasy for fantasy’s sake; they were rehearsal objects intended to provoke discussion and reveal the practical implications of regulatory decisions.”
Youngblood explained that treating their ideas as thought experiments alone was insufficient, and that the results were often hard to deliver as a compelling proposal for a new product or experience.
“The tools allow us to move so quickly to deployment that we have a lot of difficulty comprehending the downstream effects of our ideas. This causes a good deal of legal exposure and cultural risk, and we found that by creating artifacts that represent the downstream effects of governance choices, we could have a more concrete conversation about what those choices would actually look like in practice.”
Youngblood and his team raise an important point about the exposure surface of the AI industry: companies are no longer able to cover their risk exposure as effectively. Several legal cases suggest that courts are increasingly willing to entertain arguments about the downstream effects of AI products, and the public is more attuned to the risks of AI than ever before. Whether or not an AI company is aware of the harms a particular product may cause, it is increasingly expected to warn users about those risks; if it can show that it was aware of a risk and disclosed it, it is less likely to be held liable. In other words, much like the tobacco industry, the AI industry is showing users the potentially deleterious effects of its products, where such effects exist, by producing artifacts that represent those effects. Doing so not only warns users but also creates a record of the defendant’s awareness of the risks, which can be crucial in legal contexts.
The Pedagogy of Speculative Prototyping
Education and entrepreneurship programs have long used science fiction as a tool to expand imagination; one program adopted a five-step process to convert distant futures into entrepreneurial experiments, producing student ventures that began as narratives and became pilots. Similarly, classroom exercises asking students to embed ethical audits within science fiction have shown that narrative constraints can force explicit moral reasoning. Speculative Prototyping borrows these pedagogies and flips them toward governance: the artifact becomes both curriculum and testimony, teaching regulators what to ask while exposing loopholes in law.
A practical example: a policy group produces a “Conscience Kit” catalog that imagines a household assistant with an opt-in cognitive nudge feature. The kit includes a spec sheet that defines a hardware-backed consent token, a suggested inspection checklist for privacy officers, and a short-form news piece that chronicles a lawsuit launched after the feature was deployed. By assembling these pieces, the creators aim to show how a particular legal requirement would actually operate in markets and courts, and to give lawmakers something concrete to argue over rather than an abstract admonition about risk.
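The kit itself is a speculative artifact, but a short sketch can make its consent-token logic concrete. Assuming, purely for illustration, a symmetric device key and a signed, expiring opt-in record (a real hardware-backed design would keep the key in a secure element and likely use asymmetric attestation), a verifier might look like:

```python
# Hypothetical sketch of the "Conscience Kit" consent-token check; the token
# format, function names, and symmetric key are illustrative assumptions,
# not taken from any real device spec.
import base64
import hashlib
import hmac
import json
import time

def issue_token(device_key: bytes, user_id: str, feature: str, ttl_s: int = 3600) -> str:
    """Simulate the device signing an opt-in record with an expiry time."""
    claims = {"user": user_id, "feature": feature, "expires": int(time.time()) + ttl_s}
    payload = base64.urlsafe_b64encode(json.dumps(claims).encode()).decode()
    sig = hmac.new(device_key, payload.encode(), hashlib.sha256).hexdigest()
    return payload + "." + sig

def verify_token(device_key: bytes, token: str):
    """Return (True, claims) if the token is authentic and unexpired."""
    payload, sig = token.split(".")
    expected = hmac.new(device_key, payload.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, sig):  # inspection check 1: signature
        return False, "bad signature"
    claims = json.loads(base64.urlsafe_b64decode(payload))
    if claims["expires"] < time.time():         # inspection check 2: expiry
        return False, "expired"
    return True, claims
```

The two failure branches map onto the kind of line items a privacy officer's inspection checklist would contain: is the opt-in genuinely device-signed, and is it still current?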
Those legible targets matter for national security debates as well. The Mutual Assured AI Malfunction proposal reframes deterrence in algorithmic terms; the logic of deterrence becomes intelligible only when artifacts illustrate attack surfaces and accountability pathways. Speculative prototypes can show how a backdoor might be introduced, how evidence chains would break, or how nonstate actors could repurpose commercial devices. When planners at defense and civil institutions compare a graphic-novel scenario with a mock device spec, they often discover mismatches between theoretical controls and operable reality.
Institutions are responding by adopting narrative forms that meet different audiences. NATO’s use of a graphic novel to convene 33 authors is instructive: the comic form democratized strategic imagination, enabling technocrats and citizens to hold a shared mental model. Similarly, when a regulatory committee receives a packet that includes a catalog and a buildable spec sheet, the committee members do not have to imagine risk; they can hold it, read it, and ask engineers pointed questions about compliance. That transformed posture changes timelines; it lengthens review cycles because it is easier to see where a regulation would require active enforcement.
There are tensions. Speculative Prototyping can help technicians and policymakers anticipate failure modes, but it can also naturalize particular futures by making them feel inevitable. The pedagogy of turning fiction into artifact carries a responsibility; educational work on sci-fi-inspired entrepreneurship shows how narrative choices nudge innovation toward certain business models. The same risk exists in governance: if regulators habitually accept artifacts that encode particular value trade-offs, those trade-offs can ossify into policy before contested alternatives are fully aired.
Legal exposure and cultural backlash have nudged some companies toward products designed to be less addictive or less likely to cause long-term cognitive effects; litigation over algorithmic harms makes prototypes that foreground such harms politically salient. For groups with roots in commercial labs, the move to produce Speculative Prototyping is both defensive and generative: defensive because it anticipates litigation and cultural critique; generative because it supplies a menu of plausible regulatory instruments that could be adopted in statute or standard.
The practice is not neutral. Speculative prototypes are rhetorical devices as much as they are design objects; they set frames and imagine enforcement regimes. When a policymaker flips through a mock product guide, they are not merely learning about a device; they are reading a proposal for an inspection regime, liability architecture, and public messaging strategy. As a result, the artifacts produced by Design Fiction practitioners can accelerate regulatory thinking, and sometimes they shorten the path from a debate to a draft law.
Policy workshops that once produced thick reports are now including artifact packets in their deliverables; minutes from recent foresight engagements show artifact reviews listed alongside testimony from ethicists and economists. That inclusion changes bargaining dynamics because stakeholders can point to specific design decisions and say: this would be illegal, this would be uninsurable, this would be unenforceable. Speculative Prototyping thus functions as both a map and a constraint; it clarifies routes forward while highlighting choke points.
An institutional question remains unresolved: who controls the fiction? When defense planners, civil regulators, and private companies all produce artifacts, the policy conversation can fragment into competing material narratives. The Mutual Assured AI Malfunction concept raises another wrinkle; deterrence frameworks assume clear attribution and credible retaliation, yet prototypes often show technical ambiguity at the heart of attribution. Artifacts can make ambiguity legible, but they do not necessarily solve it.
A concrete consequence is visible now: at several advisory committees, draft rules under consideration include requests for “impact artifacts”, product-like documents that explain downstream effects. Those requests are experimental; they are as likely to be reshaped in committee as they are to be adopted. What is clear is that Design Fiction has migrated from an imaginative exercise into a procedural tool, and Speculative Prototyping is proving to be a technique for doing governance before governance is settled.
A hearing is scheduled next month where a municipal regulator will present a packet of speculative prototypes as part of their ordinance briefing; advocates will argue about admissibility, firms will argue about burden, and designers will stand by the artifacts they produced. The outcome will be procedural and specific; it will not settle the larger philosophical debates, but it will show whether material narratives can survive the rough practicalities of lawmaking.