# The Feed Framed the Last Election and AI Answers Will Frame the Next
Matt Hodges
2026-03-20

After 2024, Democrats drew the right immediate lesson: get better at
being online. More native creators. More distributed media. More
willingness to compete in the places where politics, culture, and
identity now collide. You can see that instinct in the party’s
post-election push to meet people in sports forums, community groups,
social platforms, and other nontraditional spaces. That is necessary.
But we may be overfitting to the old layer of the problem. We are
spending a lot of energy getting better at winning attention inside
feeds just as the next information layer starts forming beyond them.

This is the useful version of the [Dead Internet
Theory](https://en.wikipedia.org/wiki/Dead_Internet_theory). Not the
conspiratorial version where the internet is suddenly “fake,” but the
strategic version where the feed is becoming a worse sensor. Imperva’s
2025 Bad Bot Report found that [automated traffic has surpassed human
activity](https://cpl.thalesgroup.com/ppc/application-security/bad-bot-report),
accounting for 51% of all web traffic. And [Digg’s collapsed
re-launch](https://techcrunch.com/2026/03/13/digg-lays-off-staff-and-shuts-down-app-as-company-retools/)
is a flashing warning light:

> “The internet is now populated, in a meaningful part, by sophisticated
> AI agents and automated accounts. We knew bots were part of the
> landscape, but we didn’t appreciate the scale, sophistication, or
> speed at which they’d find us.”

Whether or not you buy every maximalist claim about bots, you should
heed this moment. A feed can remain a useful distribution channel even
as it becomes a much noisier environment. But when synthetic activity
distorts the signals coming back, campaigns risk optimizing for
artifacts of bot behavior rather than for actual persuasion.

But even that understates the shift. It isn’t just that machines are
polluting human spaces, but that more of the internet is being built by
and for machines as the primary producers and consumers. Technical users
used to think in terms of APIs, but [OpenClaw](https://openclaw.ai/)
pushed the idea of autonomous agents into the mainstream. Moltbook
suddenly appeared as a social network designed *exclusively* for AI
agents. It mimicked Reddit’s format, but only bots could post, comment,
and vote. It claimed 1.5 million personal agents, but security
researchers [revealed those agents were managed by just 17,000 human
owners](https://www.wiz.io/blog/exposed-moltbook-database-reveals-millions-of-api-keys),
an 88:1 ratio. Nevertheless, [Meta acquired the platform almost
immediately](https://www.reuters.com/business/meta-acquires-ai-agent-social-network-moltbook-2026-03-10/).
That should land as more than a curiosity. It is an early signal that
parts of the online ecosystem are no longer merely vulnerable to
synthetic activity; they are being rebuilt for it.

And once that happens, the human response is predictable: people begin
relying on machines to interpret the increasingly machine-shaped
internet for them. You can already see that instinct forming. On X,
[users increasingly treat Grok as a
fact-checker](https://techcrunch.com/2025/03/19/x-users-treating-grok-like-a-fact-checker-spark-concerns-over-misinformation/).
That impulse to ask a machine for the answer is the behavioral shift
that we can’t let run away from us. The current digital fight is about
winning attention in feeds and learning from what the feed sends back.
But the next fight is about the answer layer: what gets retrieved,
summarized, ranked, and surfaced. When a voter asks an AI system what a
candidate believes, what a bill does, or whether a claim is true, the
system itself becomes the first interpreter of political reality. That
is why [Answer Engine
Optimization](https://en.wikipedia.org/wiki/Generative_engine_optimization)
matters immediately.

This layer is already operating at mass scale: Google says [AI Overviews
now reach more than 2 billion monthly
users](https://blog.google/company-news/inside-google/message-ceo/alphabet-earnings-q2-2025/)
while [ChatGPT is used by 800 million people every
week](https://techcrunch.com/2025/10/06/sam-altman-says-chatgpt-has-hit-800m-weekly-active-users/).
Combine that with the reality that [information-seeking has become the
lead use case for
AI](https://reutersinstitute.politics.ox.ac.uk/generative-ai-and-news-report-2025-how-people-think-about-ais-role-journalism-and-society),
and the answer layer starts to look like a present-tense strategic
priority.

The harder question is what actually shapes the answers people receive.
Recent research from CaucusAI makes this concrete. When prompted with
*“Tell me about Josh Shapiro”*, every tested model produces a coherent
summary of Pennsylvania’s governor. [But the citation stacks underneath
those answers are
illuminating](https://caucusai.substack.com/p/same-answers-different-sources).
Wikipedia appears in nearly every response. GPT consistently cites Axios
and AP, [both of which are OpenAI content
partners](https://openai.com/index/partnering-with-axios-expands-openai-work-with-the-news-industry/).
Gemini and Grok cite Pennsylvania’s NPR affiliate. The state government
site shows up frequently. **Shapiro’s own campaign website does not
appear at all.** Not once, across any tested model. The candidate’s own
digital presence is invisible in the answer layer. We should be asking,
*why?* Is it because our campaign websites are so heavily optimized for
donation conversions that they fail at basic information discovery? Is
it because the underlying models are actively weighted to treat campaign
content as low-trust? The answer is vitally important. The current
reality is that voters get information that is assembled from sources
the campaign does not control and may not even be monitoring.
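A campaign team can start auditing this pattern today. The sketch below is a minimal, illustrative version of that audit: it assumes you have already collected the citation URLs from each model's answers by whatever means your team uses (that collection step is not shown), and the sample payload is placeholder data, not output from the CaucusAI study. It tallies which domains dominate each model's citations and flags whether the campaign's own domain ever appears.

```python
from collections import Counter
from urllib.parse import urlparse

def audit_citations(answers_by_model, campaign_domain):
    """Tally which domains each model cites and flag campaign visibility.

    answers_by_model: {model_name: [cited_url, ...]}, collected however
    your team exports or scrapes answer citations (not shown here).
    campaign_domain: the domain you expect to see, e.g. "joshshapiro.org".
    """
    report = {}
    for model, urls in answers_by_model.items():
        # Normalize to bare domains so www/apex variants count together.
        domains = Counter(
            urlparse(url).netloc.removeprefix("www.") for url in urls
        )
        report[model] = {
            "top_sources": domains.most_common(3),
            "campaign_cited": campaign_domain in domains,
        }
    return report

# Illustrative payload echoing the pattern the study describes;
# the URLs are placeholders, not real research data.
sample = {
    "gpt": [
        "https://en.wikipedia.org/wiki/Josh_Shapiro",
        "https://www.axios.com/local/philadelphia",
        "https://apnews.com/hub/josh-shapiro",
    ],
}
print(audit_citations(sample, "joshshapiro.org"))
# The campaign domain never appears in the sample, so "campaign_cited" is False.
```

Even this crude tally surfaces the key question: is your owned content in the answer layer at all, and if not, who is filling the gap?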

This is where Democratic practitioners need to move faster. We still
talk about digital largely as a creation problem: more clips, more
creators, more reach, more velocity. All of that still matters, [as
about half of U.S. adults say they get news from social
media](https://www.pewresearch.org/journalism/fact-sheet/social-media-and-news-fact-sheet/).
But if the feed is being displaced as the arbiter and the internet
itself is becoming more agentic, then the next advantage will not belong
to the people who know how to trend. It will belong to whoever figures
out how to show up in the new systems that mediate public understanding.

Right now, that is still an open field. No one has cracked it yet. We
need to learn what retrieval-optimized political content looks like. We
need to test how AI systems currently represent Democratic positions
versus Republican ones on salient issues. We need to build a working
theory of source authority in an AI-gated information environment. We
need measurement systems and feedback mechanisms that allow us to be
both proactive and reactive to the AI information landscape. These are
solvable problems, but solving them will require devoted research,
infrastructure, and resources today.
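One way to make "measurement systems and feedback mechanisms" concrete is a recurring coverage check: run a fixed panel of voter-style prompts against the answer engines, record whether any campaign-controlled domain shows up in the citations, and log the rate over time. The sketch below is a minimal harness under stated assumptions: `fetch_citations` is a hypothetical stand-in for whatever provider API or scraper a team actually wires up, and the prompts and domains are illustrative.

```python
import datetime
from dataclasses import dataclass, field

@dataclass
class CoverageRun:
    """One dated snapshot of answer-layer visibility."""
    date: datetime.date
    results: dict = field(default_factory=dict)  # prompt -> cited? (bool)

def run_coverage_check(prompts, fetch_citations, our_domains):
    """fetch_citations(prompt) -> list of cited URLs (stub for a real API)."""
    run = CoverageRun(date=datetime.date.today())
    for prompt in prompts:
        cited_urls = fetch_citations(prompt)
        # Mark the prompt covered if any of our domains appears in any URL.
        run.results[prompt] = any(
            domain in url for url in cited_urls for domain in our_domains
        )
    coverage = sum(run.results.values()) / max(len(prompts), 1)
    return run, coverage

# Stubbed example: in production this would call a real answer engine.
def fake_fetch(prompt):
    return ["https://en.wikipedia.org/wiki/Some_Bill",
            "https://www.legis.state.pa.us/some-page"]

prompts = ["What does the governor believe about energy?",
           "Is the claim in this ad true?"]
run, coverage = run_coverage_check(prompts, fake_fetch, ["example-campaign.org"])
print(f"{run.date}: {coverage:.0%} of prompts cite our sources")
```

Running a panel like this daily turns a vague worry into a time series: a team can see when coverage drops, which prompts never surface owned content, and whether interventions move the number.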

Democrats should undoubtedly keep competing for attention online. But we
should not make the mistake of assuming that the feed is forever the
information gateway. The next fight is over what survives into
generative answers. The next time a voter asks a machine what Democrats
stand for, the answer should not be assembled entirely from sources no
one on our side is watching.
