After 2024, Democrats drew the right immediate lesson: get better at being online. More native creators. More distributed media. More willingness to compete in the places where politics, culture, and identity now collide. You can see that instinct in the party’s post-election push to meet people in sports forums, community groups, social platforms, and other nontraditional spaces. That is necessary. But we may be overfitting to the old layer of the problem. We are spending a lot of energy getting better at winning attention inside feeds just as the next information layer starts forming beyond them.
This is the useful version of the Dead Internet Theory. Not the conspiratorial version where the internet is suddenly “fake,” but the strategic version where the feed is becoming a worse sensor. Imperva’s 2025 Bad Bot Report found that automated traffic has surpassed human activity, accounting for 51% of all web traffic. And Digg’s collapsed re-launch is a bright flashing light:
“The internet is now populated, in a meaningful part, by sophisticated AI agents and automated accounts. We knew bots were part of the landscape, but we didn’t appreciate the scale, sophistication, or speed at which they’d find us.”
Whether or not you buy every maximalist claim about bots, the strategic lesson stands. A feed can remain a useful distribution channel even as it becomes a much noisier measurement environment. When synthetic activity distorts the signals coming back, campaigns risk optimizing against artifacts rather than actual persuasion.
But even that understates the shift. It’s not just that machines are polluting human spaces. It’s that more of the internet is being built for, by, and around machines as the primary producers and consumers. Technical users used to think in terms of APIs, but OpenClaw pushed the idea of autonomous agents into the mainstream. Moltbook suddenly appeared as a social network designed exclusively for AI agents. It mimicked Reddit’s format, but only bots could post, comment, and vote. It claimed 1.5 million personal agents, but security researchers revealed those agents were managed by just 17,000 human owners, an 88:1 ratio. Nevertheless, Meta acquired the platform almost immediately. That should land as more than a curiosity. It is an early signal that parts of the online ecosystem are no longer merely vulnerable to synthetic activity; they are being rebuilt for it.
And once that happens, the human response is predictable: people begin relying on machines to interpret the increasingly machine-shaped internet for them. You can already see that instinct forming. On X, users increasingly treat Grok as a fact-checker. That impulse to ask a machine for the answer is the behavioral shift we cannot afford to let get ahead of us. The current digital fight is about winning attention in feeds and learning from what the feed sends back. But the next fight is about the answer layer: what gets retrieved, summarized, ranked, and surfaced. When a voter asks an AI system what a candidate believes, what a bill does, or whether a claim is true, the system itself becomes the first interpreter of political reality. That is why Answer Engine Optimization matters immediately.
This layer is already operating at mass scale: Google says AI Overviews now reach more than 2 billion monthly users while ChatGPT is used by 800 million people every week. Combine that with the reality that information-seeking has become the lead use-case for AI, and the answer layer starts to look like a present-tense strategic priority.
The harder question is what actually shapes the answers people receive. Recent research from CaucusAI makes this concrete. When prompted with “Tell me about Josh Shapiro,” every tested model produces a coherent summary of Pennsylvania’s governor. But the citation stacks underneath those answers are illuminating. Wikipedia appears in nearly every response. GPT consistently cites Axios and AP, both of which are OpenAI content partners. Gemini and Grok cite Pennsylvania’s NPR affiliate. The state government site shows up frequently. Shapiro’s own campaign website does not appear at all. Not once, across any tested model. The candidate’s own digital presence is invisible in the answer layer. We should be asking why. Is it because our campaign websites are so heavily optimized for donation conversions that they fail at basic information discovery? Is it because the underlying models are actively weighted to treat campaign content as low-trust? The answer is vitally important. The current reality is that voters get information that is assembled from sources the campaign does not control and may not even be monitoring.
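The kind of citation audit described above is straightforward to reproduce once answers and their cited domains have been collected. The sketch below is a minimal illustration, not CaucusAI's methodology: the model names, domain lists, and the owned-domain set are hypothetical placeholders, and in practice the citation lists would be scraped or exported from each answer engine.

```python
from collections import Counter

# Hypothetical audit data: domains cited by each model for one prompt.
# These lists are illustrative placeholders, not real measurements.
citations = {
    "model_a": ["en.wikipedia.org", "axios.com", "apnews.com", "pa.gov"],
    "model_b": ["en.wikipedia.org", "whyy.org", "pa.gov"],
    "model_c": ["en.wikipedia.org", "whyy.org", "apnews.com"],
}

# Properties the campaign controls (hypothetical example domain).
owned = {"joshshapiro.org"}

def audit(citations, owned):
    """Return overall citation frequency and the models citing no owned domain."""
    freq = Counter(d for domains in citations.values() for d in domains)
    missing = [model for model, domains in citations.items()
               if not owned & set(domains)]
    return freq, missing

freq, missing = audit(citations, owned)
print("most-cited sources:", freq.most_common(3))
print("models with zero owned-domain citations:", missing)
```

Run across many prompts and repeated over time, even a simple tally like this surfaces the core finding: which sources dominate the citation stack, and whether the campaign's own properties appear at all.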
This is where Democratic practitioners need to move faster. We still talk about digital largely as a creation problem: more clips, more creators, more reach, more velocity. All of that still matters, as about half of U.S. adults say they get news from social media. But if the feed is being displaced as the arbiter and the internet itself is becoming more agentic, then the next advantage will not belong to the people who know how to trend. It will belong to whoever figures out how to show up in the new systems that mediate public understanding.
Right now, that is still an open field. No one has cracked it yet. We need to learn what retrieval-optimized political content looks like. We need to test how AI systems currently represent Democratic positions versus Republican ones on salient issues. We need to build a working theory of source authority in an AI-gated information environment. We need measurement systems and feedback mechanisms that allow us to be both proactive and reactive to the AI information landscape. These are solvable problems, but solving them will require devoted research, infrastructure, and resources today.
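One concrete starting point for those measurement systems is a share-of-voice metric: across repeated audits, what fraction of an answer's citations come from sources the campaign owns, sources it can influence through earned media, and sources no one on the campaign is monitoring. A minimal sketch, with hypothetical tiers and illustrative data:

```python
# Hypothetical source tiers; real tiers would be maintained by the campaign.
TIERS = {
    "owned":  {"joshshapiro.org"},          # campaign-controlled properties
    "earned": {"apnews.com", "whyy.org"},   # press the campaign can pitch
}

def share_of_voice(cited_domains):
    """Classify each cited domain into a tier and return per-tier shares."""
    counts = {"owned": 0, "earned": 0, "unmonitored": 0}
    for domain in cited_domains:
        tier = next((t for t, s in TIERS.items() if domain in s), "unmonitored")
        counts[tier] += 1
    total = len(cited_domains) or 1  # avoid division by zero on empty answers
    return {tier: n / total for tier, n in counts.items()}

# Illustrative audit snapshot (not real data): four citations, none owned.
snapshot = share_of_voice(["en.wikipedia.org", "apnews.com", "whyy.org", "pa.gov"])
print(snapshot)
```

Tracking this number over time gives both the proactive signal (is owned content gaining any footing in the answer layer?) and the reactive one (which unmonitored sources are suddenly shaping the answer?).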
Democrats should undoubtedly keep competing for attention online. But we should not make the mistake of assuming that the feed is forever the information gateway. The next fight is over what survives into generative answers. The next time a voter asks a machine what Democrats stand for, the answer should not be assembled entirely from sources no one on our side is watching.