The Narrator and the Storyteller: How Information Actually Moves Now
By Nick Rygiel
There's a distinction nobody is making clearly enough in the AI conversation: the difference between narration and storytelling.
A narrator retells. A storyteller creates. Both are essential. They are not the same thing.
AI is a narrator. The best narrator that has ever existed. It can take any idea, restructure it for any audience, validate it against every published source, translate it into forty languages, and distribute it instantly. The narration layer has been commoditized to zero. That's what everyone is reacting to.
But narration without a story is just noise. And we're drowning in it.
What Changed
For most of human history, the bottleneck was narration. You could have the most original idea in the world, but if you couldn't write clearly, couldn't get published, couldn't reach an audience — the idea died in your notebook. The storyteller needed the narrator (the editor, the publisher, the distributor) and there weren't enough of them to go around.
AI removed that bottleneck overnight. Anyone can narrate now. The question is no longer "Can you communicate this idea?" but "Do you have an idea worth communicating?"
The supply of narration is approaching infinity. The supply of original stories is not. Which means the value of original stories just went up, not down.
Chris Sawyer Built RollerCoaster Tycoon in Assembly
One person. Assembly language. One of the best-selling PC games of all time.
Sawyer didn't use assembly because better tools didn't exist. He used it because he could, and because the craft was inseparable from the product. The game was good because of how it was built, not despite it.
AI is enabling more people to build at that level of technical sophistication. You don't need to know assembly anymore. You don't need to know any programming language. The implementation barrier is collapsing. But the person who decides what's worth building — the storyteller — is still the scarce input.
Before AI, there were thousands of people with a vision for a game, a product, a business, a thesis — who couldn't build it because the technical barrier was too high. They had the story but no way to narrate it into existence. AI unlocks that latent creativity. The non-technical storytellers who were blocked by implementation can now build.
That's not AI replacing humans. That's AI unblocking humans.
Machine-to-Machine, Human-to-Human
Here's where it gets interesting for information sharing specifically.
The old model was linear: human creates content → human consumes content. One storyteller, one audience, connected by a publication. The New York Times. A research paper. A conference talk.
The current model is breaking: human creates content → AI narrates it into a thousand formats → AI-generated responses reference it → humans consume AI-mediated summaries → nobody reads the original. The storyteller is disintermediated from their own audience. This is Ethan Mollick's "dead internet" concern playing out.
The emerging model is a loop: human generates an original thesis → AI validates it against existing research → AI distributes it across platforms and formats → other humans' AI agents ingest it and cross-reference it against their own context → those humans generate responses or build on it → their AI agents narrate the response back → the original storyteller's AI ingests the response and surfaces the signal.
Machine-to-machine in the middle. Human curation on both ends.
The human on the originating end decides what's worth saying. The human on the receiving end decides what's worth acting on. The machines handle everything in between — the narration, the validation, the cross-referencing, the distribution, the summarization.
This changes what it means to publish an idea. You're no longer writing for a human audience alone. You're writing for humans AND for every AI agent that will ingest, reference, and redistribute your work. The format matters less. The substance matters more. AI agents don't care about your prose style; they care about whether your claim is testable, your data is sound, and your reasoning is traceable.
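What "structured, cited, and falsifiable" can look like in practice: a minimal sketch of a machine-readable sidecar published alongside a thesis. The schema and field names here are hypothetical, invented purely for illustration; no standard format is implied.

```python
import json

# Hypothetical machine-readable sidecar for a published thesis.
# Field names are illustrative, not a standard schema.
claim = {
    "thesis": "AI power demand will drive 35GW of new generation capacity by 2030",
    "testable": True,
    "resolution_date": "2030-12-31",
    "falsified_if": "New generation capacity attributable to AI demand falls well short of 35GW",
    "sources": ["Example Energy Outlook (placeholder citation)"],
}

# Serialized form an AI agent could ingest, cross-reference, and redistribute.
payload = json.dumps(claim, indent=2)
print(payload)
```

The point isn't the specific fields; it's that a claim carried in an explicit, parseable structure survives machine mediation far better than one buried in prose.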
The Verification Layer
This is why prediction markets are gaining traction. They provide something that neither narration nor storytelling can: resolution.
A storyteller says "I think Houston beats Illinois." A narrator can explain why in ten different formats. But only the game resolves the claim. The score is final. There's no narrative spin on a 65-55 loss.
Markets are less clean than basketball games — companies don't have final scores, regimes don't have buzzer-beaters, and the game never really ends. An investment thesis exists in a permanently impermanent state. Water freezes and melts and vaporizes, and you can't capture the steam and put it back in the ice tray.
But the underlying physics is the same regardless of state. Competitive advantages decay at measurable rates. Regime signals persist across cycles. Quality metrics predict outcomes over sufficiently large samples. The fact that any individual outcome is uncertain doesn't mean the process is random.
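"Decay at measurable rates" is the standard exponential-decay model. A minimal sketch, with a hypothetical half-life chosen only for illustration:

```python
import math

def decayed_value(initial: float, half_life_years: float, t_years: float) -> float:
    """Exponential decay: value remaining after t years, given a half-life."""
    decay_constant = math.log(2) / half_life_years
    return initial * math.exp(-decay_constant * t_years)

# Hypothetical: a competitive advantage with a 5-year half-life.
# After 5 years half the edge remains; after 10 years, a quarter.
print(round(decayed_value(100.0, 5.0, 5.0), 1))   # 50.0
print(round(decayed_value(100.0, 5.0, 10.0), 1))  # 25.0
```

Estimating the decay constant from data is the hard part; the model itself is the easy part. That asymmetry is exactly the storyteller/narrator split.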
The verification layer — whether it's a prediction market, a back-test, or a documented track record — is what separates storytelling from speculation. The storyteller says "here's my thesis." The narrator distributes it. The verification layer resolves it. And the negative feedback loop — the willingness to document when the thesis was wrong and why — is what makes the whole system trustworthy.
What This Means Practically
If you're creating content — writing, building, researching, advising — the shift is:
Stop optimizing for narration. AI already narrates better than you do. Your beautifully crafted sentence is competing with a machine that can produce a thousand beautifully crafted sentences per second. That's not a competition you win.
Start optimizing for original theses. What do you believe that's testable, non-obvious, and derived from experience the machine doesn't have? That's your story. That's the scarce input.
Publish for machines AND humans. Your ideas will be ingested by AI agents whether you intend it or not. Make them structured, cited, and falsifiable so the machines can reference them accurately. Make them compelling so the humans want to engage.
Build verification into everything. State your thesis. State what would disprove it. Track the outcome. Publish the result — especially when you're wrong. The negative feedback loop is the credibility signal that no amount of narration can fake.
Leverage the narrator. The storyteller who refuses to use AI is like Chris Sawyer insisting everyone code in assembly. The craft matters, but the reach matters too. Use AI to validate your thesis against the literature, distribute it across formats, and surface the responses worth engaging with. The narrator extends the storyteller's reach without replacing the storyteller's voice.
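The state-the-thesis, state-the-disproof, track-the-outcome loop above can be sketched as a small record type. Everything here is illustrative (the class, its fields, the example claim); it's not an existing tool.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Thesis:
    claim: str                       # the original, testable statement
    falsified_if: str                # what would disprove it
    outcome: Optional[bool] = None   # True = survived, False = disproved
    notes: list = field(default_factory=list)

    def resolve(self, survived: bool, note: str) -> None:
        # Record the result either way; the documented failure is the
        # credibility signal that no amount of narration can fake.
        self.outcome = survived
        self.notes.append(note)

t = Thesis(
    claim="Houston beats Illinois",
    falsified_if="Illinois wins the game",
)
t.resolve(survived=False, note="Lost 65-55; thesis disproved, published anyway.")
print(t.outcome, t.notes[-1])
```

The design choice worth noting: `resolve` takes no "spin" parameter. The record either survived falsification or it didn't, which is the whole point of the verification layer.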
The distinction isn't human vs. machine. It's story vs. narration. Both are needed. The interconnection is the product.
Nick Rygiel is Managing Partner and CTO of Protocol Wealth LLC, a registered investment adviser building systematic investment tools powered by AI. He writes about the intersection of investment frameworks, AI, and decision-making under uncertainty.
Protocol Wealth Blog Addendum
The following section appears in the Protocol Wealth blog version only; it adds the investment context and ties the piece to our framework.
What This Means for Investment Research
The narrator/storyteller distinction maps directly to how we think about investment analysis at Protocol Wealth.
The narration layer of investment research — gathering financial data, calculating ratios, screening for factors, summarizing earnings calls — has been commoditized. Every AI agent can do this. Every retail investor now has access to the same analytical narration that cost institutional investors six figures a year in terminal subscriptions.
The storytelling layer — generating an original investment thesis from synthesized experience — remains human. "AI power demand will drive 35GW of new generation capacity by 2030, and the investment opportunity is in the power source, not the AI companies" is a story. It requires understanding thermodynamics, energy infrastructure, technology cycles, and capital allocation theory simultaneously. No amount of narration produces that thesis from first principles.
Our Entropic Macro Framework is a storytelling framework narrated by AI. The original theses — decay constants, regime detection, durability classification — were generated by human insight drawing on decades of academic research and practical experience. The implementation — 128+ MCP tools scoring stocks, detecting regimes, and managing portfolios — is the narration layer, executed by AI at a scale no human team could match.
The negative feedback loop is the verification layer. We systematically try to disprove our own theses. When we fail — when a thesis survives falsification — it becomes stronger. When we succeed in disproving it, we update the framework. We recently ran the entire EMF scoring methodology through an unrelated domain (NCAA Tournament bracket prediction) as a cross-domain stress test. The model's picks were 80.4% accurate through the Sweet 16. The failures produced five specific improvements that we're now implementing in the investment tools — improvements that are directly relevant to the current market environment.
That cycle — original thesis, AI-powered validation, real-world verification, documented failure, framework improvement — is the future of investment research. Not AI replacing the adviser. Not the adviser ignoring AI. The storyteller and the narrator, working together, with the client's outcomes as the only score that matters.
Protocol Wealth LLC is a registered investment adviser (CRD #335298). This content is for informational purposes only and does not constitute investment advice. Past performance does not guarantee future results.