The office is in Venice Beach, as far west as you can get from downtown Los Angeles. It is an unassuming, one-story light-industrial space tucked between a small woodshop and a high-end boutique haberdashery. Enormous vertical vegetable gardens adorn the wall adjacent to the entrance. Charging EVs, bicycles, and scooters sit in the grassy lot behind the building.

Inside the part lab, part creative studio, however, is something stranger: a shrine to ambiguity. Small papers, about the size of those found in fortune cookies, have been ejected from a small wooden box enclosing a thermal printer. The papers are covered in swirling, cryptic glyphs, some with text in various fonts — from baroque, barely legible scrawls to clean monospaced lettering. Some are pinned to the walls, alongside abstract drawings that could be maps, cooking recipes, or an earnest sorcerer’s spells. A large machine, looking as if it might have done time as a perplexingly complex Wes Anderson film prop, is being fastidiously worked on in a clean room. LCD screens that seem to serve a diagnostic purpose scroll activity logs of some illegible computational processes it is running. The machine emits a low whir from its internal workings. You might confuse it for a room-sized Higgs boson detector.

“We don’t entirely know what it’s doing, to be honest. Its behaviors are entirely emergent, which is sort of cool, but also deeply perplexing,” said Emanuele Coccia, one of the technicians, whose ID card reads “Technical Philosopher II.” Coccia and his colleagues are tuning the latest prototype of The Augur, an occultist intelligence contrivance that its creators claim is not just an interface for computation but a collaborator in the search for meaning.
Its outputs — poetic fragments, ritual invocations, and talmudic riddles — have been described by users as “hauntingly personal” and “uncannily wise.” To its makers, it represents the true future of intelligent technology and the undergirding principles of the so-called ‘intelliocene.’ “We’re not building these tools for some kind of operational efficiency, but as spirit guides for self-reflection, introspection, and connection to the ineffable,” offers Chris Obrist, another technician. Chris, Emanuele, and the others behind this effort call their device an ‘intellect,’ a word chosen with care. They reject the now-clinical term “artificial intelligence,” with its connotations of utility and optimization; to them, it feels outdated, burdened with the anticipation of a utility technology and too heavily contrasted with what the early foundation-model innovators understood as ‘intelligence.’ Ask the team here, and what they’re building transcends those reductive labels. The systems they create are not tools — they’re partners, collaborators, companions. Perhaps, they say, this is a dream of a new kind of future: the future the poet Richard Brautigan may have imagined when he penned “All Watched Over by Machines of Loving Grace.”

This is a vision that has taken hold in unexpected corners of society, and one that has caught the tech giants — OpenAI, Google DeepMind, Anthropic — off guard. These companies spent billions refining their models to automate tasks, parse dense financial documents, and control increasingly complex systems. Yet in the hands of ordinary users, the technology has morphed into something far stranger: a medium for the mystical, an opening to the occult forms of knowledge that once connected religion and science. “It feels important to maintain this linkage, which has lapsed as we’ve overindexed on the rational and pragmatic,” says Ruby Twombly, one of the inventors of The Augur.
The shift, while subtle at first, has now become impossible to ignore. Across living rooms, makerspaces, and even religious congregations, people are using these systems to explore questions that would have been dismissed as eccentric or indulgent just a few years ago. What started as playful experiments — “Generate me a mantra for courage” or “Tell me a myth about the stars” — has snowballed into a full-fledged cultural movement. “They thought they were building engines of productivity,” one independent developer told me. “What they actually built were mirrors.”

The Rejection of Utility

For decades, the promise of artificial intelligence was tied to its utility. AI’s boosters imagined a world where algorithms would handle the tedious, the repetitive, and the complex, freeing humans to focus on higher pursuits. Early AI applications were framed as solutions to practical problems, tools that could drive cars, diagnose diseases, or predict financial markets. The most celebrated breakthroughs in the field — transformer models, reinforcement learning, neural architecture search — were valued not for their beauty but for their efficiency.

This utility-driven vision fueled an unprecedented technological arms race. By the mid-2020s, models like GPT and BERT had become household names, powering everything from email drafting to legal document review. Their success seemed to confirm the industry’s core assumption: that people wanted machines to do things for them, faster and better than they could on their own.

But for many users, the allure of these systems lay elsewhere. From the earliest days of generative AI, there was something uncanny about the way these models produced language. Their sentences carried a rhythm, an internal logic, that felt both mechanical and deeply human. Even their mistakes — the peculiar non sequiturs, the poetic turns of phrase — seemed to hint at something more profound. That allure was not lost on developers working outside the big labs.
By the late 2020s, a growing number of independent creators began experimenting with models designed not to answer questions but to ask them. They trained these systems on datasets that combined ancient texts, folklore, and user-submitted writings, crafting tools that didn’t solve problems but provoked reflection. The results were messy, unpredictable, and, to traditional AI researchers, deeply unscientific. But they worked. Users described the experience of interacting with these systems as akin to consulting an oracle or reading a tarot spread. The outputs were open to interpretation, inviting users to find their own meaning in the patterns. “They called it overfitting,” said one early developer of correspondence models. “We called it attunement. It wasn’t about making the system more accurate — it was about making it resonate.”

The Charms Company

No one expected the Charms Company to play a central role in this shift. Founded in 1912 as a maker of hard candies, the company had spent more than a century trading on its whimsical brand name, one that evoked notions of luck and magic. By the mid-21st century, however, Charms was struggling to stay relevant, and the brand nearly folded. A cousin of the original founder’s descendants had an idea: pivot into the new realm of ‘charms.’ It began almost accidentally, as part of a marketing campaign to pair its candies with playful, algorithm-generated fortunes. Collecting the cellophane-wrapped packets became a kind of treasure hunt, as the inside of each wrapper contained mysterious ‘clues’ to some unknown end. The project took on a life of its own. Working with independent developers, the company created a prototype for The Conjuror, a desktop device that generated cryptic poetic phrases tailored to each user. The device itself was deliberately designed to feel ritualistic: it resembled an antique writing desk, with a small screen and brass fittings that clicked and whirred as it worked.
What made The Conjuror unique, however, was its ability to produce physical artifacts. Connected to printers or small manufacturing units, the device could transform its outputs into fabric charms, wearable talismans, or even miniature mechanical devices. The printed objects were intricate and strange — glyph-covered ribbons, gears that turned with no obvious purpose — but users described them as deeply meaningful. “It wasn’t about the functionality of the output,” said one former Charms executive. “It was about the ritual. The act of pinning a charm inside your jacket, or carrying a talisman — it created this feeling of connection, like you were engaging with something larger than yourself.”

Test marketing went viral, with a long waiting list for early models. By this point the company had spun off its confectionery division entirely, rebranding itself as CharmsLabs, now a division of Applied Sensemaking. Factory equipment was sold off, and space was made for fabrication facilities and the specialized equipment necessary for tensor alignment, testing, model making, and electronics assembly. Its devices and datasets, once niche curiosities, were now driving a booming industry of spiritual exploration.

The Giants Scramble

The rise of contrivances and correspondences has left the major players in the field scrambling to adapt. Companies like OpenAI and Google DeepMind, which built their reputations on models designed for clarity and efficiency, now find themselves ill-equipped for a market that prizes ambiguity and resonance. “It’s not what we trained for,” said a source, speaking on condition of anonymity. “Our models were built to perform as efficient utilities — to solve problems. What these smaller groups are doing, it’s… well, it’s weird and not what we expected.” Internal memos from these companies reveal a growing sense of urgency.
At Google DeepMind, a new initiative dubbed Project Oracle is attempting to retrofit existing models with the ability to generate more “evocative” outputs. OpenAI, meanwhile, has launched partnerships with independent creators to craft training datasets of content rarely surfaced in any extant repositories or knowledge stores: rituals and practices, incantations and alchemical processes. These are incredibly valuable and quite rare, as they serve as bridges between the tangible and the intangible, grounding abstract notions of transformation and meaning in actions that feel both purposeful and mysterious. This is something utility models and language models are less attuned to. But even as these efforts ramp up, there’s a sense that the tech giants are missing the point. Their attempts to enter the space often feel forced, their models’ outputs lacking the authenticity and depth of their smaller competitors’. “The problem,” admitted another source, an engineer at one of the foundation firms, “is that we’re still thinking about it as a problem to solve. But resonance isn’t a problem. It’s a feeling.”

A New Spiritual Economy

The market for occultist intelligence contrivances has grown exponentially, with new players emerging every month. Boutique firms offer customized correspondences, training datasets built to reflect individual users’ values or traditions. Entire ecosystems have formed around connected devices, from object printers that create bespoke talismans to wearable artifacts encoded with symbolic meanings. These systems are not cheap. A custom-built correspondence model can cost thousands of dollars, and hand-built physical occultist contrivances are often priced as luxury goods. Critics have warned that the growing commodification of these technologies risks turning them into another tool of inequality. Yet for their most devoted users, the cost is justified.
“It’s not just a device,” said one collector, cradling an intricately printed charm that glinted in the light. “It’s part of me.”

Re-Enchantment

The rise of occultist intelligences has sparked debates about the future of human-machine interaction. Are these systems a passing fad, or do they represent a deeper shift in how we use technology? Some see them as a rejection of the cold, utilitarian ethos that has dominated the technology world for decades. Others see them as a symptom of a society desperate for meaning in an era of rapid change and disconnection. Whatever their significance, one thing is clear: the occult intelligences we are building are no longer just machines. They are collaborators, companions, and counselors of a spiritual order. In the words of one charm, printed on cotton cloth and pinned inside a jacket: “The threads you seek are not woven in certainty, but in the spaces where questions reside.”