The Architecture of Prosperity

Every movement that changed the world started the same way. Someone published something, and enough people believed it.

Published research eradicated smallpox. Investigative journalism toppled a president. The printing press made the Bible the most widely read book in history and the Reformation inevitable. A pamphlet called Common Sense convinced thirteen colonies they could be a country.

Publishing is the most powerful tool humanity has ever created. Not software. Not hardware. The mechanism for distributing information to the public. It shapes what billions of people believe about the world. What they fear. What they want. Who they trust. We rely on published information to make decisions about our health, our money, our safety, our votes. We trust it so deeply that we usually don't notice we're trusting it.

Now imagine a world where a surgeon could operate on whoever they want, however they want. No license. No oversight. No board to answer to. No straightforward way to identify what they did wrong, and no way to bring them to justice.

Now replace "surgeon" with "publisher."

Every other profession that can cause serious harm has formal accountability, defined by the profession itself. Doctors have malpractice. It covers everything from negligence to deliberate misconduct. A surgeon who skips a standard procedure and a surgeon who operates on the wrong limb are both committing malpractice. The word is an umbrella. It captures the full range of professional failure.

Publishing has none of this. Not across any industry. Not in any shared language.

There's no word for publishing malpractice.

That's insane.

...And no, this isn't a call for government intervention. It's not about silencing anyone. If anything, it's the opposite. Please hear me out. What I'm proposing uses free speech to solve the problem free speech created. That distinction matters, and I'll make it concrete later. But first, the gap itself.

We have narrow terms. Libel. Plagiarism. Fraud. Propaganda. Yellow journalism. Each names a specific symptom. But no umbrella term captures the full range of publishing failures. No shared concept of what it means to violate a publishing standard, because most publishers don't have a stated standard to violate. Every time publishing causes serious harm, we invent another narrow term. And each new term delays recognition that they're all symptoms of the same failure.

Purdue Pharma ghost-wrote thousands of medical articles promoting OxyContin. Published science that said it was safe. Over a million Americans are dead from opioid overdoses. Nobody called it publishing malpractice, because the term doesn't exist.

In late 2025, an AI tool generated three million sexualized deepfake images in eleven days. Twenty-three thousand depicted children. Thirty-five attorneys general wrote a letter asking the company to stop. The most powerful response available was: please.

So I coined one. Malpublish. To commit publishing malpractice. It covers the full range — from negligence to deliberate misconduct — the same way malpractice does in medicine. One word for the structural failure that every narrow term has been circling without naming.

But a word alone doesn't solve anything. If the system that enforces it is biased, it becomes another weapon. Another fact-checker. Another arbiter of truth that half the population dismisses on sight. The system has to be different. It has to be something no one controls and everyone can verify. That's PublishingPolicy.org. Organizations define their own publishing standards, publish them openly, and are held to them. Not censorship. Not regulation. Self-defined accountability, externally verifiable.

That's the idea. What follows is how I got here, why the problem is deeper than it looks, and why I think it's the most important thing nobody's named.


Still here? Good. This is where it gets interesting.

I know how powerful publishing is, and how easily that power gets exploited, because I helped build one of the machines that exploits it.

At UploadVR, we published fifteen articles a week. Minimum. We covered virtual reality when most people still thought it was a gimmick from the '90s, something that promised too much and delivered headaches. But we weren't just covering VR. We were building the story of VR.

We facilitated hackathons where developers built their first VR prototypes overnight, sleep-deprived and electric with the feeling that they were working on something that mattered. We hosted events where people strapped on headsets for the first time and came out wide-eyed, grabbing the nearest person to tell them what they'd just experienced. We connected technologists with problems worth solving, then wrote about the connections we'd made. We created the news, then published it. A slot machine of stories designed to keep readers coming back.

And it wasn't random. We'd figured out the pattern. A hardware review would generate shares among enthusiasts. A developer story would get picked up by tech blogs. A healthcare application piece would cross over into medical publications. A celebrity trying VR for the first time would go mainstream. We weren't just publishing content. We were engineering reach. Each article was designed to pull a slightly different audience into the same narrative: VR is the future, and it's happening now.

As the product designer, my job was building the editorial tools, redesigning the site to look like a real publication, studying how competitors laid out their posts and ran their campaigns. Weeks went into analyzing what made people click, what made them stay, what made them share. We tracked everything. Which headlines drove clicks. Which story structures kept people reading past the fold. Which topics generated comments, and which generated shares, because those are different behaviors driven by different impulses. Commenting means the reader has an opinion. Sharing means the reader wants to be associated with the idea. We optimized for sharing.

Our Editor-in-Chief set the voice, approved every piece, managed the writers. Together we built a machine that reached over 200 million readers.

That number needs explaining, because on its own it doesn't mean much. Two hundred million is the population of Brazil. More than double the population of Germany. We weren't reaching all of them simultaneously. That's cumulative reach, the total number of unique readers who encountered our content over the life of the publication. But on any given day, tens of thousands of people were reading what we published, forming opinions about a technology based on stories we chose to tell.

That number stopped being abstract the first time I watched a VR company's valuation shift after we published a story. We weren't a major newspaper. We weren't a broadcast network. We were a niche publication covering a technology most people hadn't tried, and our content was shaping investment decisions worth billions of dollars. That's when I started to understand what we'd actually built.

We'd built a narrative engine. Not a news publication, not really. A machine that took raw reality — technological developments, prototype demos, corporate announcements — and processed it into belief. The raw material went in as facts. What came out was a story people internalized. They didn't just read about VR. They started to believe in VR. They believed so thoroughly that they made career decisions, investment decisions, and strategic bets based on a narrative we'd constructed.

The machine worked. VR had existed for decades before we showed up. Surgeons were already operating remotely across continents. Architects were designing buildings in virtual space. Doctors were treating lazy eye by forcing each eye to work independently. Real applications, real results, real science. Nobody knew about any of it. Not until consistent publishing made people curious. Not until the narrative existed.

That's the thing about publishing. The technology was there. The breakthroughs were there. The applications were there. But they didn't matter until someone published about them consistently enough that an audience formed, and that audience started believing. Publication is what turns reality into perception. And perception is what drives action.

I didn't understand any of this while it was happening. When you're inside the machine, it just feels like work. You're writing articles, optimizing layouts, chasing metrics. The philosophical weight of what you're doing doesn't hit until later, when you watch the same mechanics produce outcomes you never anticipated. It wasn't until I stepped back from the daily grind of publishing that I could see the shape of the machine we'd built. And by then, I'd already started seeing it everywhere.

"Content is king" is something everyone says. Most people think it means content is valuable. It doesn't. It means content is powerful. Well-structured, consistent publishing doesn't just inform people. It shapes how they see the world. It causes them to believe things. First they believe you, then they believe in you.

The mechanism is simple. Publish consistently, with enough specificity and enough confidence, and you can make an audience curious about anything. Sustain it long enough, and curiosity turns into conviction. Conviction turns into action. People invest. People buy. People vote. People change what they believe about what's possible, based on what they've read.

That's not hyperbole. I watched it happen in real time. A startup would demo a buggy prototype at one of our events. We'd publish an article about it. Investors would read the article. The startup would get funded based partly on our coverage. They'd use the funding to build a better version. We'd write about the improvement. More investors, more funding, more development, more coverage. A flywheel of published narrative driving real-world capital allocation. The content wasn't reporting on reality. It was constructing it.

The VR bubble was partly our fault. I'm convinced of that. We pushed so hard, so consistently, that companies invested billions based on a story we helped write. Facebook bought Oculus for $2 billion. Magic Leap raised $2.6 billion on a product that barely worked. Google, Samsung, HTC, Sony, Microsoft. All racing to build headsets for a market that our narrative said was inevitable. The entire investment thesis was built on a published story, and we were one of the loudest voices telling it.

That's not inherently wrong. The technology was real. The potential was real. But the gap between what VR could do today and what the narrative promised for tomorrow? We published that gap into existence. We filled it with optimism and specificity and the kind of authoritative consistency that makes people trust what they're reading. And nobody, including us, had a framework for asking whether we should have.

The writing was on the wall for anyone who looked closely enough. But the narrative was so consistent, published so frequently, by enough outlets including us, that questioning it felt like questioning gravity. That's the power of consistent publication. It doesn't just inform. It creates the conditions under which doubt feels unreasonable.

This can be used for extraordinary things. It can also be extraordinarily damaging.

I was learning the mechanics of persuasion at the exact moment those same mechanics were being deployed at national scale.

It was 2016. On one side of my screen, I was studying how to flood the zone with enough consistent content that curiosity turned into conviction. I was reading analytics, testing headlines, watching which stories made people click and which made people share. On the other side of my screen, I watched the same playbook used to shape what an entire country believed about itself.

Both sides of the political spectrum were running the same machine. The mechanics were identical. The intentions were different. The guardrails were nonexistent.

That's what struck me. Not the politics. The mechanics. Both sides had discovered the same thing we'd discovered at UploadVR: if you publish enough, with enough consistency, you can construct a narrative that people accept as reality. Nobody on either side was violating any publishing standard, because there was no standard to violate. The playbook was legal, effective, and completely unaccountable.

The implications were staggering. The same tool I'd been using to build interest in virtual reality was being used to build entire worldviews. And the tool had no safety mechanism. No standard. No name for misuse. A surgeon who harms a patient through negligence can be held accountable because we have a word for what they did and a system for addressing it. A publisher who harms a democracy through negligence can't, because we don't.

I'd found a zero-day vulnerability. Not in software. In civilization.

The same mechanics that made people curious about virtual reality could make an entire country believe anything. And there was nothing in the system to distinguish responsible publishing from reckless publishing. No word. No concept. No framework.

Publishing has reshaped civilization at every turn, and it has never been accountable at any of them. Rulers distributed decrees. The printing press democratized knowledge. Pamphlets fueled revolutions. Radio broadcasts consolidated dictatorships. Television changed what candidates looked like. Social networks put a publishing button in front of every person on earth. And through all of it, centuries of this power being wielded and abused, we never developed a shared word for what happens when it goes wrong.

The fish had been rotting for centuries. And we still hadn't named the smell.

The examples started piling up. Every case where publishing caused measurable harm got cataloged. Pharmaceutical companies ghost-writing medical research. Platforms amplifying conspiracy theories. Influencers promoting products they'd never used. News outlets publishing unverified claims as fact. AI systems generating articles no human had reviewed. Each case was treated as unique. Each generated its own outrage cycle, its own vocabulary, its own set of proposed solutions. And each time, the conversation would exhaust itself without arriving at the structural insight: these aren't different problems. They're the same problem. They just look different because we have no shared language for the underlying failure.

That realization hardened over years. I kept looking for the word. Someone must have named this already. A media critic, an ethicist, a regulator, a philosopher. I searched academic papers, journalism ethics codes, media literacy curricula. Everyone was describing symptoms. Misinformation. Disinformation. Fake news. Media bias. Filter bubbles. Echo chambers. Content moderation. Platform accountability. Each term naming a specific phenomenon. None naming the structural failure underneath all of them.

The word malpublish came in 2023, but the idea had been forming since 2016. Seven years to crystallize. Earlier attempts to share it — in online communities, in conversations, in half-formed pitches — went nowhere. Most people didn't get it. The definition wasn't mature. The mechanism wasn't clear. A problem without a system.

But the pattern kept confirming itself. Every major publishing failure I encountered fit the same structural gap. A news outlet would publish something harmful. People would get angry. There would be calls for accountability. And every time, the conversation would hit the same wall: accountability to what? What standard was violated? Who defined it? Where is it written? Nobody could answer, because the answers didn't exist.

A word alone doesn't change anything. You need a system.

And without the word, there was no system. Without the system, every conversation about publishing accountability dissolved into the same dead-end argument: who gets to decide what's true?

That's the wrong question. I spent years figuring out why.

The pattern is everywhere once you see it. Different industries. Different scales. Different intentions. The same structural failure.

In 2002 and 2003, the New York Times published a series of front-page stories claiming Iraq possessed weapons of mass destruction. The primary reporter, Judith Miller, relied heavily on sources with clear agendas inside the intelligence community and among Iraqi defectors. The stories were presented as investigative journalism. They were read by millions. They were cited by politicians building the public case for military action. They shaped what an entire country understood about why a war was necessary.

The stories ran under headlines designed to convey certainty. "U.S. Says Hussein Intensifies Quest for A-Bomb Parts." The language of investigative journalism lent the claims an authority that opinion columns never could. Readers didn't just believe the conclusions. They believed the methodology. Because it was published as news, not commentary, the distinction between evidence-based reporting and source-dependent speculation was invisible.

The war happened. The weapons didn't exist.

The Times later published an editors' note acknowledging the failures. "Editors at several levels who should have been challenging reporters on every aspect of the story were not." The paper treated it as a "failure of journalism." An institutional lapse.

What standards were violated? Where were they published? What mechanism existed for anyone outside the Times to hold the paper accountable before the damage was done? None. The most respected newspaper in the world published stories that helped build the case for a war, and the only accountability came after, internally, using language that treated the failure as an anomaly. If the New York Times can publish its way into a war with no external accountability framework, the problem isn't the Times. The problem is the absence of a framework.

In 2018, a United Nations fact-finding mission concluded that Facebook had played a "determining role" in inciting offline violence against the Rohingya in Myanmar. The platform was used to spread inflammatory content, dehumanizing language, and direct calls to violence that contributed to the displacement of over 740,000 people and the deaths of more than 9,000.

Facebook had two Burmese-speaking content moderators for 18 million users.

The UN report was specific. Posts shared on Facebook described the Rohingya as "dogs," "maggots," and "rapists." Military officials used the platform to coordinate attacks and spread propaganda. Buddhist nationalists published fabricated stories about Muslim violence that were shared hundreds of thousands of times. The platform didn't create the hatred. But it published it, amplified it, and distributed it at a speed and scale that made intervention nearly impossible.

When the world tried to name what happened, the language was revealing. "Platform failure." "Content moderation gap." "Amplification of hate speech." Each phrase described a piece of what went wrong. None named the core act: a publisher distributed content that contributed to genocide, with no stated standard for what it would and wouldn't distribute, and no accountability framework for when that distribution caused harm.

Two moderators for 18 million users isn't a content moderation gap. It's a publisher with no publishing policy.

Beginning in late 2022, CNET quietly published 77 articles generated by artificial intelligence. The articles covered financial topics. Loan calculations. Interest rates. Savings strategies. The kind of information people use to make real decisions about real money.

More than half contained errors. Basic math wrong on loan calculations people might actually use to plan their finances. No disclosure that the articles were AI-generated. No stated policy about AI-authored content. No way for readers to know they were trusting a machine rather than a financial journalist.

The articles carried a byline, "CNET Money Staff," that read like an ordinary authorship credit. The note that a machine had drafted them was tucked where few readers would ever look. Readers had no practical way of knowing. The trust they extended to CNET's brand, built over decades of human journalism, was being applied to machine output with no editorial review commensurate with the claims being made.

When the errors surfaced, CNET pulled the articles and published corrections. But the response treated it as an editorial oversight. There was no stated policy to violate, so there was nothing to hold them to.

This case matters because CNET wasn't acting with malicious intent. They weren't trying to deceive anyone. They were trying to scale content production using new tools, and they didn't think to tell their readers. The harm came from negligence, not malice. And that's exactly the range publishing accountability has to cover. Malpractice in medicine isn't reserved for deliberate harm. It covers the surgeon who makes an honest mistake and the one who cuts corners knowingly. Publishing accountability has to work the same way.

In the lead-up to FTX's collapse, YouTube influencers promoted the cryptocurrency exchange to millions of followers. Some received payments they never disclosed. Others genuinely believed in the platform. The content they published looked the same either way. Authentic recommendation and paid promotion were indistinguishable.

Some of these creators had audiences of millions. Their recommendations carried the weight of personal trust. Followers didn't just receive information. They received it from someone they felt they knew, someone whose judgment they'd been trained to trust through years of consistent content. That's publishing. The intimacy doesn't change what it is. The informality doesn't reduce the responsibility. When you distribute information to the public that influences their financial decisions, it doesn't matter whether you're wearing a suit at a podium or sitting on a couch in a ring light.

FTX collapsed in November 2022. Eight billion dollars in customer funds vanished. Some of the influencers who promoted it were later named in investor lawsuits. But the underlying act had no industry term, no framework, no shared standard. Publishing promotional content disguised as authentic opinion, repeated across hundreds of creators, and each case was treated as if it had never happened before.

That's what the absence of an umbrella term does. Every instance of publishing harm gets litigated from scratch. No precedent accumulates. No standard evolves. No pattern gets recognized, because the word that would name the pattern doesn't exist. Every case is the first case.

A newspaper publishing its way into a war. A pharmaceutical company ghost-writing science. A platform enabling genocide. A news outlet publishing AI errors as financial advice. Influencers selling fraud as enthusiasm. A tool generating abuse imagery faster than anyone could respond. Different industries, different scales, different intentions. In every case, publishing was the mechanism of harm. In every case, we named everything except the publishing.

Six cases. Six different industries. Six different scales of harm, from financial loss to loss of life. And in every single one, the same three things are true: publishing was the mechanism, no standard existed to prevent it, and no word existed to name it. The accumulation is the argument. One case could be an anomaly. Two could be coincidence. Six is a pattern. And six is just what I've included here. The actual number is incalculable.

Some people are thinking: we already have fact-checkers. Entire organizations dedicated to fighting misinformation.

They're failing. And the reason is structural.

Fact-checking organizations position themselves as arbiters of truth. The moment you do that, you become a target. Whoever has the loudest loudspeaker drowns you out. Whoever generates the most content wins the attention war. Whoever funds the organization influences which facts get checked. It devolves into he said, she said. The room gets too loud for anyone to hear signal.

The problem isn't that fact-checkers are bad at their jobs. The problem is that the arbiter-of-truth model can never escape the charge of bias. And everyone knows it. Every conversation about fact-checking eventually becomes a conversation about who's checking the checkers. It's a structural dead end.

Consider the asymmetry. A content farm can produce a thousand articles in the time it takes a fact-checker to verify one. A politician can make a dozen claims in a single speech. An AI can generate misinformation at a scale that would take an entire newsroom years to debunk. The fact-checking model assumes the volume of questionable content can be met with an equal volume of corrections. It can't. The math doesn't work. And even when a correction is published, it reaches a fraction of the audience that saw the original claim. The architecture of the internet rewards speed and reach. Corrections have neither.

So here's the question I couldn't stop asking: what if accountability didn't depend on someone else deciding what's true? What if instead, you were simply held to your own words?

A friend of mine, an attorney, was one of the first people I told about this idea. She was immediately intrigued. In her profession, each state's bar association sets explicit ethical rules. Lawyers can't call themselves "experts" unless officially certified. They can't promise results. They must distinguish between educational content and promotional material. They must include disclaimers. There's a defined set of forbidden practices, an enforcement body, and real consequences for violations.

Publishing has nothing like this. But it could. Not by importing a legal framework, but by building something native to publishing itself.

Top-down regulation would never work. I knew that from the beginning. The moment any authority tries to dictate publishing standards, it gets called censorship. That's not just pushback. It's a legitimate concern. Centralized control over publishing is genuinely dangerous. History proves it. Every authoritarian regime begins by controlling what people can publish.

So the system has to work differently.

PublishingPolicy.org works from the bottom up. The premise is simple: organizations define their own publishing policy. Not because someone forces them, but because it's better for them.

Think about what a publishing policy actually is. It's a public statement of commitments. What an organization will and won't do when it publishes. How it handles corrections. Whether it discloses AI-generated content. How it manages conflicts of interest. What process it follows before publishing claims that could cause harm. What happens when it gets something wrong.

Every organization already has implicit standards. Editors make judgment calls. Platforms write community guidelines. News organizations follow internal style guides. Marketing teams have brand guidelines that govern tone and claims. The problem is that none of these are public, standardized, or externally verifiable. They exist in people's heads, in internal documents nobody outside the company reads, in policies that get updated quietly and applied inconsistently.

PublishingPolicy.org makes them explicit. Organizations publish their own standards in a structured, open-source framework. The policies are stored independently, decentralized away from the companies themselves. A neutral, third-party record of what every participating organization has committed to.

It's not an external body defining what publishers should do — it's publishers defining it for themselves, and then being held to it.
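
To make that concrete, here is a minimal sketch, in Python, of what a structured, machine-readable policy record might look like. Everything in it is hypothetical: the field names, the example organization, and the commitment wording illustrate the idea of versioned, publicly stored commitments, not the actual PublishingPolicy.org format.

```python
# Hypothetical sketch of a structured publishing policy record.
# Field names, the organization, and the commitments are illustrative only.
from dataclasses import dataclass, field

@dataclass
class Commitment:
    id: str    # stable identifier a flag can point to, e.g. "ai-1"
    text: str  # the commitment, in the publisher's own words

@dataclass
class PublishingPolicy:
    organization: str
    version: str          # policies change; every version stays on record
    effective_date: str   # ISO date this version took effect
    commitments: list[Commitment] = field(default_factory=list)

policy = PublishingPolicy(
    organization="Example News",
    version="1.2",
    effective_date="2025-01-15",
    commitments=[
        Commitment("ai-1", "We disclose any content generated or substantially "
                           "drafted by AI."),
        Commitment("conflicts-1", "We disclose all commercial relationships "
                                  "with companies we cover."),
        Commitment("corrections-1", "We publish corrections prominently, within "
                                    "48 hours of confirming an error."),
    ],
)
```

The specifics matter less than the properties: the commitments are in the organization's own words, every version is kept, and the record lives somewhere the organization can't quietly edit.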

Think about what this means in practice. A news organization publishes its policy: "We will not publish unverified claims from anonymous sources without independent corroboration from at least two additional sources." Then they publish a story based on a single anonymous source. Anyone can point to the discrepancy. Not "this story is wrong." That's a truth claim anyone can dispute. But "this story violates your own stated commitment." That's a factual observation anyone can verify.

That's the shift. The question changes from "is this true?" to "did you follow your own rules?" The first question is philosophical. The second is operational. The second has an answer.

If an organization publishes something that contradicts its own stated policy, anyone can flag it. A reader. A viewer. A researcher. An AI system monitoring for inconsistencies. The flag is public. The organization's response is public. Over time, this builds a track record. A compliance history. A measure of whether an organization lives up to its own words.

Over time, those compliance records become meaningful. An organization that has violated its own policy seventeen times in a year looks different from one that has never been flagged. Consumers can see that. Advertisers can see that. Partners can see that. The track record speaks for itself, without anyone having to argue about truth or bias or politics.
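
Continuing that hypothetical sketch, here is one naive way a flag and a track record could be represented. None of this is the real system; how flags get reviewed, who resolves them, and how the history should be weighted are open design questions. The point is only that a compliance record is ordinary, verifiable data.

```python
# Hypothetical continuation of the sketch above: a flag cites a specific
# commitment in a specific policy version, and the track record is just
# the public history of flags and how they were resolved.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Flag:
    organization: str
    policy_version: str
    commitment_id: str   # which stated commitment the content appears to violate
    content_url: str     # the published item being flagged
    note: str            # the discrepancy, stated as an observation
    upheld: Optional[bool] = None  # None until public review resolves it

def track_record(flags: list[Flag]) -> dict[str, int]:
    """Summarize an organization's public compliance history."""
    return {
        "flags_raised": len(flags),
        "upheld": sum(1 for f in flags if f.upheld is True),
        "dismissed": sum(1 for f in flags if f.upheld is False),
        "open": sum(1 for f in flags if f.upheld is None),
    }

flags = [
    Flag(
        organization="Example News",
        policy_version="1.2",
        commitment_id="ai-1",
        content_url="https://news.example/story/123",
        note="Article appears to be AI-drafted but carries no disclosure, "
             "contrary to commitment ai-1.",
    ),
]
print(track_record(flags))
# {'flags_raised': 1, 'upheld': 0, 'dismissed': 0, 'open': 1}
```

Notice that nothing in this record makes a truth claim. A flag cites a commitment and a piece of content; everything else is bookkeeping anyone can check.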

Nobody is silenced. Nobody is censored. Publishers still publish whatever they want. The system doesn't judge what's true or false. It doesn't decide what should or shouldn't be published. It tracks one thing: whether organizations violate their own commitments. That's not opinion. That's verifiable.

Internally, a stated policy gives every editor a reference point for every decision. Not "what does my boss think today?" but "what did we commit to publicly?" It transforms editorial judgment from subjective instinct into auditable practice. It makes the implicit explicit and the personal institutional.

Externally, people trust an organization with a stated standard over one without. The same way people trust a legal system with stated rules over one that runs on someone's whim. The same way you trust a restaurant that displays its health inspection grade over one that refuses to. Transparency doesn't just build trust. It makes trust measurable.

Here's the part that makes the whole thing cohere. My very use of free speech, coining this term and building this framework, is what curbs bad actors from using free speech to cause harm. That's not a contradiction. That's how free speech is supposed to work. Speech checks speech. The act of naming malpublish and creating a system to track it is itself an exercise of the right it protects. The information ecosystem gets clearer, not quieter.

The scope is broad. Official company content. User-generated posts. Comments. All of it counts as publishing. All of it is distributing information to the public. Platforms would have publishing policies that their users are subject to when they choose to publish publicly. Private conversations stay private. But the moment information becomes available to the public, it should be held to a stated standard.

I need to be honest about something.

At UploadVR, we didn't have a publishing policy.

Our Editor-in-Chief's personal compass was the only standard we operated by. He set the voice, he approved the content, he managed the staff. It all ran through one person's judgment. There was nothing public, nothing codified, nothing anyone could point to and say "you violated your own rules."

Lots of our content was sensational. Much of it had substance. But our readers had no way to know which was which, because we'd never told them what to expect from us. We never made our standards visible. We never gave our audience the tools to hold us accountable. We just expected them to trust us, because we trusted ourselves.

I think about the stories we could have handled differently. The partnerships we had with companies we were covering. The events we hosted where the sponsors were also the subjects of our reporting. None of this is unusual in niche media. But none of it was stated. If we'd published a policy that said "we disclose all commercial relationships with companies we cover," we would have been forced to confront the tensions we'd been comfortable ignoring. The policy wouldn't have changed what we published. It would have changed what we were willing to publish.

Looking back, we should have had a publishing policy. If we had, I believe our content would have been better for it, and our readers would have trusted us more. Not because we would have published less, but because we would have published with stated intention. Every piece would have existed in the context of a commitment. That changes the calculus of every editorial decision.

I'm not pointing fingers from the outside. I helped build one of these machines. I know how they work. I know what's missing.

Here's why this matters more now than it ever has.

In the age of AI, anyone can write. Anyone can generate thousands of words in seconds. A single person with an AI tool can produce more published text in a day than a newsroom of fifty journalists. The barrier to creating content has effectively disappeared.

This isn't theoretical. Researchers have identified over 2,000 websites operating as AI content farms. Automated systems generating articles on every topic imaginable, optimized for search engines, designed to capture attention and monetize it. No editorial standards. No stated policies. No accountability of any kind. Just volume.

And the content is getting harder to distinguish from human-written work. The AI-generated articles that CNET published weren't gibberish. They were plausible, well-structured, and wrong in ways that required expertise to detect. The next generation will be better. The generation after that will be indistinguishable. At that point, the question of "who wrote this?" becomes unanswerable. The only question that matters is "who published this, and what did they commit to?"

This is why "published by" matters more than "written by." When a human journalist writes a false claim, we hold the journalist accountable. When an AI writes a false claim, we can't hold the AI accountable. It has no judgment, no intent, no professional standing. But someone chose to publish it. Someone decided to distribute that content to the public. That decision is where the responsibility lives, and that's the decision a publishing policy governs. The tool doesn't matter. The act of distribution does.

The act of writing is no longer the point. The act of publishing, the decision to distribute information to the public, is where accountability lives. That's why I say "published by," not "written by." Writing is creation. Publishing is a choice. And choices carry responsibility.

When someone publishes AI-generated financial advice with a 53% error rate, the harm doesn't come from the writing. It comes from the publishing. From the decision to distribute that content to people who will use it to make real decisions about real money. The question isn't who wrote it. The question is who published it, and what standard did they commit to?

Without a framework for publishing accountability, AI doesn't just amplify the problem. It makes the problem infinite. Every vulnerability that has existed since the printing press can now be exploited at a scale no fact-checker, no moderator, no government can keep up with.

The only thing that scales as fast as content is a system. Not one that judges what's true. One that tracks what publishers committed to, and whether they kept their word.

Imagine a world where every source has a stated publishing standard and the content is expected to align with it. When it doesn't, people flag it. AI flags it. The organization's track record becomes visible.

Consumers start asking a simple question: What is your publishing policy?

When an organization doesn't have one, the consumer moves to one that does. Market pressure, not regulation. Trust becomes a competitive advantage. Organizations adopt publishing policies not because they're forced to, but because it's good business to be trustworthy. The same way restaurants display health inspection grades not out of altruism but because customers choose the restaurant with the A on the wall. The same way financial advisors disclose conflicts of interest not because they want to but because the clients who ask are the clients worth having.

This isn't utopian. The infrastructure is simple. The concept is portable. The incentive structure aligns with how markets actually behave. Companies that demonstrate accountability attract trust. Companies that refuse to state their standards look like they have something to hide. The system doesn't require universal adoption to work. It requires enough adoption that the absence of a publishing policy becomes conspicuous.

Start with the organizations that already care. Reputable news outlets, established platforms, companies that already invest in editorial standards but have no way to make those standards visible. PublishingPolicy.org gives them a competitive advantage. It lets them differentiate from the noise. It lets them say: here's what we stand for, here's how we operate, and here's our record of keeping our word.

Then watch what happens. As more organizations adopt stated policies, the organizations without them start to look conspicuous. Not because anyone forced them into a corner. Because the question became normal. "What is your publishing policy?" becomes as standard as "what are your terms of service?" And the organizations that can't answer start losing trust to the ones that can.

That's not regulation. That's evolution. The market rewarding transparency and penalizing opacity, the same way it always has when consumers have enough information to make informed choices. Publishing policies don't restrict what anyone can publish. They inform what everyone chooses to trust.

I think about the 23,000 images of children generated by that AI tool. I think about the doctors who prescribed OxyContin because they trusted what they read. I think about the people in Myanmar who were targeted by violence amplified through a platform with two moderators for 18 million users. I think about the families who lost their savings to an exchange promoted by influencers with no stated standard. Every one of these harms traveled through the same channel: published content, distributed to the public, with no framework for accountability. How many could have been caught, flagged, or prevented if the organizations responsible had stated their standards and been held to them? Not all of them. But more of them. Measurably more.

People who've sat with this idea tell me the same thing: once you see it, you can't unsee it. Every sensational headline. Every undisclosed sponsorship. Every AI-generated article passed off as human reporting. Every platform that has "community guidelines" but no publishing policy. Every organization that publishes to millions without ever stating what standards they hold themselves to.

I've been staring at this vulnerability since 2016. The years since have been spent confirming it, trying to explain it, failing, refining, and trying again. This is the clearest version I've found. Not because the explanation got more sophisticated, but because it got simpler. There is no word for publishing malpractice. And because there's no word, there's no system. And because there's no system, every failure is novel, every response is improvised, and every conversation about accountability loops back to the same unanswerable question: who gets to decide what's true?

Nobody gets to decide what's true. That was always the wrong question.

The right question is: what did you say you'd do, and did you do it?

Publishing reshaped civilization with the printing press. It reshaped civilization with the newspaper. It reshaped civilization with social media. Each time, the power grew and the accountability didn't. Each time, we found new words for the symptoms and never named the disease.

This is the name. This is the system. This is the moment when publishing has become so powerful, so fast, and so automated that we can't afford to go another century without it.

Prosperity for all. That's a genuine belief, not a slogan. And the architecture of that prosperity requires a shared understanding of what publishing accountability looks like. Without it, we're at the mercy of whoever can generate the most content. Without it, we keep talking past each other. Without it, unwitting people keep getting swept up in rhetoric that has no standards and no name for when it fails.

More signal. Less noise.

The word is malpublish. The system is PublishingPolicy.org. And once you see the problem, I don't think you'll be able to unsee it either.


PublishingPolicy.org is being built. If you see what I see, I'd like to hear from you.