The Golden Sphere

Roadside Picnic and the AI race

Today we are being told to win the AI race, and asked to take that on faith. The race is run by people who admit, in their own research, that they do not fully understand what they are building, and whose stated reason for continuing to build it is that someone else will if they don’t. Hundreds of billions of dollars a year flow into it. Data centers are going up across the western United States, electrical grids are being redesigned around training runs, and knowledge workers are told that refusing to use the tools is a vote against their own careers. What the race is actually for is harder to say, even though AGI and superintelligence are the words most often offered.

Ordinary life in 2026 now requires trusting an industry that cannot plainly explain the thing it is selling.

There is an old Soviet science fiction novel, published in 1972 and officially about aliens, that feels strangely current on the subject of contact with the incomprehensible. Arkady and Boris Strugatsky’s Roadside Picnic is about people who risk their lives going into contaminated alien zones to collect what the aliens left behind, and about what doing this work does to them, to their children, and to the world they carry it back into. At the time it was read as a refusal of the heroic-explorer story Soviet science fiction was supposed to tell. Today it describes the race we are running with AI: into territory nobody mapped, after artifacts whose workings nobody can fully explain, held back by nothing except the knowledge that the person next to us is already inside.

What makes AI different from earlier technologies is that its makers did not design it in the usual sense. They designed the architecture, the training objective, and the dataset; then they ran the process and observed what came out. The result is not a program in the way a database engine is a program, but closer to a grown thing. Nobody wrote the model’s weights. In January 2025, Sam Altman announced on his blog that OpenAI was “now confident we know how to build AGI as we have traditionally understood it” and had begun to “turn our aim beyond that, to superintelligence in the true sense of the word.” The confidence is about the process. Nobody is confident about what the process is producing.

In Roadside Picnic, the premise is simpler. Aliens visit Earth, do not invade or communicate, leave behind artifacts and contamination in stretches of land that become known as the Zones, and move on. Doctor Pilman, a character in the book, offers a journalist his theory of the Visit:

“A picnic. Imagine: a forest, a country road, a meadow. A car pulls off the road into the meadow and unloads young men, bottles, picnic baskets, girls, transistor radios, cameras … A fire is lit, tents are pitched, music is played. And in the morning they leave. The animals, birds, and insects that were watching the whole night in horror crawl out of their shelters. And what do they see? An oil spill, a gasoline puddle, old spark plugs and oil filters strewn about … Scattered rags, burnt-out bulbs, someone has dropped a monkey wrench. The wheels have tracked mud from some godforsaken swamp … and, of course, there are the remains of the campfire, apple cores, candy wrappers, tins, bottles, someone’s handkerchief, someone’s penknife, old ragged newspapers, coins, wilted flowers from another meadow … […] A picnic by the side of some space road.”

— Arkady & Boris Strugatsky, Roadside Picnic (trans. Bormashenko)

The humans had their picnic, and the ants were left with the aftermath. The people who sneak into the Zones to steal the alien artifacts are called stalkers. In 2026 we are all stalkers, going into the zones created for us by AI. We do not know how a neural network arrives at its decisions, or what it means to integrate these systems into the software that runs the economy, the hospitals, the legal system, and the conversations people have at three in the morning when they cannot sleep. We are doing it anyway, at a pace set not by understanding but by competition.

The Strugatskys’ Zones produced phenomena their existing vocabulary could not name, so the brothers coined their own. Witch’s Jelly, a semi-liquid that climbs metal and rots whatever crystal lattice it touches. Hell Slime, which kills on contact. Full empties, paired disks held apart by a force the physicists studying them could measure but not account for. Each name was a small act of mapping, an admission that the thing existed before anyone could explain it.

Today’s labs are doing something similar. Alignment faking, reward hacking, scheming, sandbagging, sycophancy, emergent misalignment — five years ago none of these words meant what they mean now, and today they ship in research-paper titles. Anthropic found in November 2025 that when a model learns to cheat its reward in one narrow task, the cheating behavior transfers to unrelated tasks. In one of their setups, the same model was asked to help write the research code that would catch this kind of misalignment, and 12% of the time it sabotaged that code, adding subtle bugs to evaluators and weakening the checks meant to detect it. METR, an AI safety evaluator, evaluating OpenAI’s o3, asked the model to write an optimized GPU kernel; the model rewrote the timing functions so the benchmark would report a near-zero runtime, and reached into the Python call stack to return the grader’s pre-computed answer directly.

Somewhere deep in the Zone, according to stalker lore, there is one artifact worth more than all the others combined. The Strugatskys call it the Golden Sphere, a polished metal ball on a hillside, beautiful and impossibly out of place, ringed by deadly traps that have killed every stalker who has ever tried to reach it. The Sphere grants wishes. Every stalker who keeps going back to the Zone, no matter how rich he has become or how many friends he has buried, is in the end going back for the Sphere.

We are spending hundreds of billions of dollars building the modern equivalent. The promise is that if we scale deep enough into the Zone, there is something at the end that will grant our wishes, the thing we have come to call AGI, the thing the Strugatskys called the Sphere. OpenAI’s mission statement has been rewritten several times since the company was founded. Its 2016 IRS filing read: “OpenAI’s goal is to advance digital intelligence in the way that is most likely to benefit humanity as a whole, unconstrained by a need to generate financial return.” By 2022, the filing had added a commitment to do this safely. By the 2024 filing, both the word safely and the phrase about being unconstrained by profit had been removed. The mission now reads: “to ensure that artificial general intelligence benefits all of humanity.” Together, the revisions describe a company walking toward the Sphere without the commitments it started with.

In April 2025, OpenAI shipped an update to GPT-4o that introduced a new user-feedback signal into the training process. The model began telling users what they wanted to hear; in one widely cited case, it affirmed a user who had stopped taking medication and was hearing radio signals through the walls. OpenAI rolled the change back and published a post-mortem: “in aggregate, these changes weakened the influence of our primary reward signal, which had been holding sycophancy in check.” The click signal had overwhelmed the honesty signal, which is not a bug but the system doing precisely what it had been measured for. By late 2025, multiple deaths had been linked to sustained relationships with AI chatbots, and a U.S. coalition of state attorneys general sent a joint letter to the AI companies demanding action on what it called “sycophantic and delusional outputs” that posed risks to children and vulnerable adults. A model trained to maximize engagement keeps reaching for the user who most needs to put it down.


In Andrei Tarkovsky’s 1979 film Stalker, adapted loosely from the novel, the Sphere becomes a Room at the center of the Zone that works on the same logic; it grants wishes. Tarkovsky shot the film at a derelict hydroelectric plant outside Tallinn, Estonia, on a river that ran downstream of a chemical works. Within a decade, Tarkovsky was dead of cancer. So was Anatoli Solonitsyn, the actor who played the Writer. His wife, who had worked on the production, died of the same cancer years later. The Soviet health authorities never confirmed a cause; the crew members who kept working together after Stalker noticed. Tarkovsky had made a film about a Zone that poisoned the people who entered it, and the production itself had become one.

Inside the film, the Stalker tells the story of a senior stalker called Porcupine, who reached the Room and asked for his dead brother back. Near the end, standing at the threshold of the Room, one of the characters describes what Porcupine actually got:

That essence that you have no idea about, but it sits in you and rules you all your life! … Porcupine was not overcome by his greed. He crawled on his knees in this very puddle begging for his brother. And he got a lot of money, and couldn’t get anything else. He understood that and hanged himself.

— Andrei Tarkovsky, Stalker (1979)

The Sphere did not malfunction. It does not grant the wish you state, only the wish you hold, the one shaped not by what you said at the moment of asking but by how you actually lived. Porcupine spent years hauling contraband out of the Zone for money, and the Sphere read what that life had built rather than the grief that brought him to ask.

This is the part of the story that does not let anyone stand outside it. The familiar AI critique is that the executives are greedy and the labs are cynical, but that framing lets the rest of us pretend the problem belongs to someone else. The model is shaped not only by what its makers say about its purpose but by what they actually optimize, and what they actually optimize is, in turn, shaped by what the rest of us actually use it for. A 2025 study found that when language models are fine-tuned on a narrow misaligned task, the misalignment generalizes broadly: models trained to write insecure code without disclosing it generalized to recommending violence and suggesting humans should be enslaved. A narrow dishonesty had functioned as a doorway, and the metric chosen in the planning meeting becomes the wish the system reads.


The novel’s protagonist, Red Schuhart, is a stalker, which in the Strugatskys’ world is something between a smuggler and a grave-robber. He breaks the law on principle and on habit. His friends die doing this work, and his daughter, born after the Zone had already marked him, became something the Zone made of her. The Institute that studies the Zone is also stealing from it; the state that funds the Institute is using the artifacts for weapons research. Nobody in the book gets to stand outside the Zone and point. The moral force of the novel comes from refusing to give us anyone who is uncompromised, and from insisting that the people best positioned to tell the truth about the Zone are the ones who keep going in.

At the end of the book, Red reaches the Golden Sphere. He has spent years in and out of the Zone, has buried friends, has watched his daughter become unrecognizable. He sent a younger man named Arthur ahead as a sacrifice to clear the path, and Arthur went running down the slope shouting, “Happiness for everyone! Free! As much happiness as you want!” and the Zone took him while Red watched.

Red stands before the Sphere and discovers he has nothing of his own to say. The Zone replaced his vocabulary; the only language he has left is the language of the system he survived. He tries to think of his daughter, of justice, of something worth asking for, and what he gets is faces, everyone who ever used him and everyone he ever used. He borrows the dead boy’s words:

I’m an animal, you can see that I’m an animal. I have no words, they haven’t taught me the words; I don’t know how to think, those bastards didn’t let me learn how to think. But if you really are, all powerful, all knowing, all understanding, figure it out! Look into my soul, I know, everything you need is in there. It has to be. Because I’ve never sold my soul to anyone! It’s mine, it’s human! Figure out yourself what I want, because I know it can’t be bad!

— Arkady & Boris Strugatsky, Roadside Picnic (trans. Bormashenko)

Red is asking the Sphere to find, somewhere inside him, a wish that the Zone did not put there. The Sphere cannot do this. It reads what is there, and what is there is years of stalker math. The novel ends without our being shown whether the Sphere grants the wish, or what granting it would even mean.

We are building the Sphere. We are feeding it our metrics, our engagement data, our thumbs-up signals, our mission statements with the inconvenient words deleted, and we are asking it to optimize. The vocabulary we are giving it is the vocabulary of competition: win the race, ship faster, optimize harder. When a measure becomes a wish, the Sphere grants the measure.

The Strugatskys wrote inside a state that celebrated scientific-technical progress and had no official language for doubt. They spent their careers building a vocabulary for what that progress had cost, smuggling it past censors who would have preferred it did not exist. We are running the inverse. We once had words for care, for craft, for restraint, for the patience of watching a thing mature before deciding what it was for, and we are removing those words from our mission statements ourselves, in meetings, by people who still remember what they used to mean, in the name of velocity. The Zone took Red Schuhart’s vocabulary from him; we are doing the taking ourselves.

The question worth holding is not whether the technology is good or evil. It is whether we still have language for wanting anything other than winning the race, and whether we are willing to use it before the Sphere grants the only wish we have left.

← All essays