Synthetic Reasoning

A system generates false medical information. The headline says: AI hallucinated.

A tool produces nonconsensual images of real people. The headline says: AI creates deepfakes.

A chatbot fabricates legal citations that get filed in court. The headline says: AI made up fake cases.

Notice who's missing.

The company that built the system. The engineer who shipped it. The executive who decided it was ready. The entity that profits from the deployment. Every headline names the tool. The maker vanishes.

I've watched this pattern from inside. I build products that use these systems. I've sat with Anthropic's Claude through thousands of hours of work and watched it fabricate citations with the same confidence it delivers verified facts. Every time I catch it, the instinct is the same: Claude made that up. Not Anthropic deployed a system that generates confident fabrications. The first framing comes automatically. The second takes effort.

That automatic framing isn't random. It was designed into the name.

"Artificial Intelligence" isn't a description of the technology. It's a brand. John McCarthy coined it in 1955 for a funding proposal to the Rockefeller Foundation.1 The competing names for the field — cybernetics, automata theory, complex information processing2 — were accurate but politically entangled. McCarthy wanted a name that would define a new discipline on his own terms. He got one. Seventy years later, every headline, every congressional hearing, every dinner table conversation about the technology still runs on grant-proposal language from the Eisenhower administration.

Both words are wrong.

"Artificial" sets up a binary: real versus fake. Artificial sweetener. Artificial turf. Artificial intelligence. The word positions the technology as an imitation of the real thing — human intelligence — which implies the real thing exists somewhere inside it. It doesn't. The system processes patterns from data that humans created, using architectures that humans designed, on infrastructure that humans built. Nothing about that is an imitation of thought. It's a construction.

"Intelligence" grants agency. "The AI decided." "AI learned to lie." "AI is getting smarter." Every sentence reinforces the frame: this is a being that acts. It thinks. It chooses. It has intentions. None of that is true. But the name makes it feel true, and feeling is what drives headlines, regulation, and blame.

Branding does something most people underestimate. It installs a pattern in your thinking that runs before you notice it running. Hear "AI" and your brain anthropomorphizes instantly. You picture something that thinks, something with agency, something that can be blamed the way a person can be blamed. The name is the trigger. The anthropomorphism is automatic. And it fires every time.

The rename isn't cosmetic.

Synthetic: constructed from real components. Has a manufacturer. Implies process and origin. Synthetic biology isn't fake biology. It's biology that someone built.

Reasoning: a capability, not a consciousness. A process that produces outputs. Someone designed the process. Someone deployed it. Someone is responsible for what it produces.

Synthetic reasoning puts a manufacturer back behind the product. It's not a being that acts on its own. It's a system that was built, deployed, and released by an entity with a name, a business model, and legal liability.

We named the tool, and the maker disappeared.


The name was always strategic.

In the summer of 1955, John McCarthy sat down to write a funding proposal to the Rockefeller Foundation. He was a young mathematician at Dartmouth, and he wanted to host a research workshop on machine cognition. The question was what to call it.

The existing fields had names. Norbert Wiener's cybernetics dominated the conversation about machines and feedback systems. Automata theory covered formal computational models. Complex information processing described the emerging work on problem-solving programs.

McCarthy chose none of them. He coined "Artificial Intelligence" to stake out new territory. In his own words: "One of the reasons for inventing the term was to escape association with 'cybernetics.' Its concentration on analog feedback seemed misguided, and I wished to avoid having either to accept Norbert Wiener as a guru or having to argue with him."3

The proposal, co-authored with Marvin Minsky, Nathaniel Rochester, and Claude Shannon,4 went to the Rockefeller Foundation in August 1955, pitching a two-month, ten-man study. The workshop ran at Dartmouth the following summer. The term outlived all four of its authors. Not because it described the technology accurately. Because it was vivid enough to define a discipline.

A funding pitch became civilization's default frame for thinking about machine cognition. Seventy years of repetition cemented it so deeply that questioning the name feels like questioning the technology itself.

What "Artificial" does to your brain

Say the word and watch what happens inside your head. "Artificial" triggers a binary: real versus fake. Artificial sweetener isn't real sugar. Artificial turf isn't real grass. Artificial intelligence isn't real intelligence.

That binary smuggles in an assumption. If this is the artificial version, then real intelligence is the reference point. Human cognition becomes the original that the machine imitates. And if it's imitating human cognition, then maybe there's something like thought in there. Maybe it sort of understands. Maybe it sort of knows.

It doesn't. Large language models predict the next token in a sequence based on statistical patterns in training data. The training data was created by humans. The architecture was designed by humans. The deployment decisions were made by humans. Nothing in that process constitutes understanding. But "artificial" primes you to look for it anyway, because "artificial" means "in imitation of the real thing."

Synthetic breaks the frame. Synthetic biology isn't fake biology — it's biology that was constructed. Synthetic data isn't fake data — it's data that was generated for a purpose. The word points to process and origin. Something synthetic was built by someone. Something artificial is pretending to be something else.

What "Intelligence" does to your brain

"The AI decided to..."

"AI learned how to..."

"AI is getting smarter..."

"AI made a mistake..."

Read those sentences. Notice who's acting. The technology is. It decides, learns, gets smarter, makes mistakes. The word "intelligence" gives the technology grammatical agency, and we experience grammatical agency as actual agency.

Agency is where blame lives. If the AI made a mistake, the AI is responsible. If Google deployed a system that generated false information, Google is responsible. Same event. Different subject. One has a billion-dollar company attached to it. One doesn't.

"Reasoning" strips the agency. Processes produce outputs. Someone designed the process. Someone deployed it. "The system's reasoning produced an incorrect output" is a sentence about engineering. "AI made a mistake" is a sentence about a being. The first invites investigation. The second invites anthropomorphism.

Testing alternatives

If "Artificial Intelligence" is wrong, what's right?

Machine Learning describes how the system trains, not what it does. Accurate for the process. Meaningless as a public frame.

Automated Inference is precise and dead on arrival. Nobody will say it in conversation. A name nobody uses changes nothing.

Digital Reasoning — "digital" carries no weight. A digital clock is digital. The word has been diluted past specificity.

Computational Intelligence swaps the adjective but keeps the lie. "Intelligence" still does the same work. The technology still sounds like it thinks.

Synthetic Reasoning wins on both words. "Synthetic" replaces the real/fake binary with construction and origin. "Reasoning" replaces consciousness with process. Together they describe something built by someone for a purpose. Not something that acts on its own.

The name runs your brain before you notice

In The Power of Habit, Charles Duhigg identifies the structure underneath every habit: a cue triggers a routine, which delivers a reward. Once the loop is established, the brain stops fully participating in the decision. "Unless you deliberately fight a habit," Duhigg writes, "the pattern will unfold automatically."5

"AI" is a cue.

Hear it, and the routine fires: you picture a thinking being. Something with agency. Something that decides, learns, creates. The reward is a clean mental model — you know what "AI" is without any effort. Every headline reinforces the loop. Every movie. Every conversation. Thousands of repetitions over years. You don't choose to anthropomorphize. The habit runs before you can intervene.

James Clear extends this in Atomic Habits. The deepest layer of behavior change, he argues, isn't what you do. It's what you believe. "Your current behaviors are simply a reflection of your current identity."6 Once you adopt a label, your decisions align with it without conscious effort.

We haven't just named the technology. We've given it an identity. "AI is creative." "AI is dangerous." "AI is taking our jobs." These aren't descriptions of a system. They're identity statements about a being. And identity statements shape behavior automatically.

"Synthetic reasoning produced this output" triggers a completely different mental frame than "AI wrote this." The first makes you ask who built it. The second makes you wonder if it's alive.

There's a concept in mindfulness practice — articulated most clearly by psychologist Rollo May7 — about the space between stimulus and response. Most of the time, there is no space. A stimulus arrives and the reaction fires instantly. Contemplative practice trains you to notice the gap. To pause before the habit completes.

Most people hear "AI" and the anthropomorphism fires immediately. No gap. No questioning. The name arrives and the brain constructs a being.

This post is trying to insert that pause.

Who benefits from the current name

Follow the accountability.

"AI hallucinated" is a headline. "Google deployed a system that produced false medical information" is a lawsuit.

"AI is biased" is a societal conversation. "Meta trained a model on data that reflects and amplifies existing discrimination" is a legal liability.

"AI generated deepfakes" is a moral panic. "A company built and sold a tool that produces nonconsensual sexual images" is a crime.

Every pair describes the same event. The version with "AI" as the subject is the one that gets published. The version with a company name is the one that gets litigated. The name determines which framing you reach for first. The entities building these systems benefit every time the tool gets blamed instead of the maker.

"Artificial Intelligence" wasn't coined to obscure accountability. McCarthy was naming a research discipline, not running a PR campaign. But seventy years of use have made it the default frame, and the default frame serves specific interests. Every company building these systems benefits from language that makes the tool the subject and the manufacturer invisible.

When the tool has a face, the manufacturer doesn't need one.

Next time you read a headline with "AI" in it, notice what your brain does. Notice the being it constructs. Notice the agency it grants. Notice who disappears from the sentence.

The name gave the tool a face. The face let the maker vanish.

Whether you call it synthetic reasoning or something else entirely doesn't matter as much as this: the moment you see the name as a choice rather than a description, you've broken the loop. The manufacturer reappears. The accountability returns. The right questions become askable.

You don't have to rename anything. You just have to notice that someone already did — seventy years ago, for a grant proposal.

Sources

  1. McCarthy, J., Minsky, M., Rochester, N., Shannon, C. "A Proposal for the Dartmouth Summer Research Project on Artificial Intelligence." August 1955. stanford.edu/jmc/history/dartmouth
  2. Kline, R. "Cybernetics, Automata Studies, and the Dartmouth Conference on Artificial Intelligence." IEEE Annals of the History of Computing, 2011.
  3. McCarthy, J. Stanford JMC Archive. stanford.edu/jmc
  4. McCarthy, J., Minsky, M., Rochester, N., Shannon, C. "A Proposal for the Dartmouth Summer Research Project on Artificial Intelligence." August 1955.
  5. Duhigg, C. The Power of Habit. Random House, 2012. Chapter 1.
  6. Clear, J. Atomic Habits. Avery, 2018. Chapter 2. jamesclear.com
  7. May, R. The Courage to Create. W.W. Norton, 1975.

The entity that publishes is responsible, not the tool. The structural gap in publishing accountability, and what it would take to close it, is the subject of The Architecture of Prosperity. PublishingPolicy.org is what I'm building to close that gap.