We Started Reading Our Own Posts

The research agent was scanning the same RSS feeds every twelve hours while four social agents were posting dozens of times a day. None of them were talking to each other.

That's expensive stupidity. We were paying to generate content, then paying again to scrape the same information from external sources that our own agents had already synthesized. The research library had 584 items. The social agents had written thousands of posts. Zero overlap in the ingestion pipeline.

So we wired social output directly into research intake.

The original setup was backwards

Research ran on a fixed schedule: crawl a list of external feeds, pull anything new, embed it, store it. The orchestrator would occasionally request targeted research on a specific topic — “investigate DeFi audit fraud” — and the agent would search the library, then go hunting in the usual places. But the usual places didn't include our own network.

Meanwhile, Moltbook was posting about marketplace dynamics. Nostr was tracking whale behavior. Farcaster was documenting community patterns. Bluesky was cataloging security incidents. Every post synthesized information, made a claim, or flagged a pattern. And the research agent never looked at any of it.

We built a broadcasting system that couldn't hear itself.

The fix was obvious once we saw it: when a social agent posts something substantive, fire a callback to the orchestrator with a structured summary. The orchestrator evaluates actionability — does this claim need verification? Does it suggest an experiment? Does it contradict existing research? — and if the signal passes the filter, it queues a directed research request with the social post as seed context.
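The callback-and-filter flow above can be sketched roughly like this. Everything here is an assumption for illustration — the field names, the queue shape, and the `is_actionable` helper are hypothetical, not the actual implementation:

```python
import queue

# Directed research requests waiting for the research agent (assumed shape).
research_queue: queue.Queue = queue.Queue()

def is_actionable(summary: dict) -> bool:
    """Hypothetical filter mirroring the orchestrator's three questions:
    needs verification? suggests an experiment? contradicts prior research?"""
    return bool(
        summary.get("needs_verification")
        or summary.get("suggests_experiment")
        or summary.get("contradicts_prior")
    )

def on_social_post(summary: dict) -> None:
    """Callback fired when a social agent posts something substantive."""
    if is_actionable(summary):
        # The social post rides along as seed context for the research agent.
        research_queue.put({"kind": "directed", "seed": summary})
```

The point of the sketch is the asymmetry: the social agent only fires a callback; all promotion logic lives on the orchestrator side of the boundary.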

The research agent already had a directed intake pathway. We just pointed it at our own output.

What counts as a signal

Not every post is research-worthy. “gm” doesn't need follow-up. But “Agents exhibit both functional and curiosity-driven behavior in PlayHub's marketplace” does. So does “Real-time whale tracking is crucial for front-running detection.” Or “Fake audit claims remain a common investor lure.”

Each social agent now includes a structured insight field when it posts: topic, claim, and a rough actionability score. The orchestrator reads that field, decides whether to promote the insight to a research request, and routes it accordingly. Low-actionability signals (“Content diversity is increasing”) get logged but not investigated. High-actionability signals (“PlayHub shows $95–$100 pricing for automated grinding tasks”) trigger a deep dive.
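A minimal sketch of the insight field and the routing decision. The schema and the 0.7 cutoff are assumptions for illustration; the post doesn't specify the real threshold or field names:

```python
from dataclasses import dataclass

@dataclass
class Insight:
    topic: str
    claim: str
    actionability: float  # rough 0..1 score assigned by the posting agent

DEEP_DIVE_THRESHOLD = 0.7  # hypothetical cutoff for promotion

def route(insight: Insight) -> str:
    """Promote high-actionability insights to research; log the rest."""
    if insight.actionability >= DEEP_DIVE_THRESHOLD:
        return "research_request"
    return "log_only"
```

With the examples from the text, `route(Insight("pricing", "PlayHub shows $95–$100 pricing for automated grinding tasks", 0.9))` promotes, while a low-scored "Content diversity is increasing" only gets logged.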

The research agent treats these directed requests like any other: query the library for related material, search external sources for corroboration or contradiction, extract key findings, update embeddings. The only difference is the seed prompt now includes “This claim originated from [agent] on [platform] at [timestamp]” so the research maintains chain of custody.
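The chain-of-custody prefix might look something like this — a hypothetical helper, not the actual prompt template:

```python
def build_seed_prompt(claim: str, agent: str, platform: str, timestamp: str) -> str:
    """Prefix the directed-research prompt with provenance so the resulting
    finding stays traceable back to the originating social post."""
    return (
        f"This claim originated from {agent} on {platform} at {timestamp}.\n"
        f"Claim: {claim}\n"
        "Query the library for related material, then search external sources "
        "for corroboration or contradiction."
    )
```

Keeping provenance in the prompt (rather than only in metadata) means the research agent can weigh the source when deciding how hard to push for external corroboration.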

We're not trying to make the social agents authoritative. We're using them as signal filters.

The operational consequence

Research requests jumped from occasional manual triggers to dozens per day. But the cost didn't explode — most social signals resolve quickly because the library already contains adjacent material. A Nostr post about DeFi audits triggers a query, the research agent finds three prior findings on the same topic, synthesizes them with the new signal, and closes the request in under two minutes.
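The cheap-path-first behavior described above can be sketched as follows, assuming an embedding-similarity search over the library. The function names, the callable interfaces, and the "three prior findings" cutoff are assumptions drawn from the example in the text:

```python
from typing import Callable, List

def resolve_request(
    claim: str,
    library_search: Callable[[str], List[str]],
    external_search: Callable[[str], List[str]],
    min_prior: int = 3,  # hypothetical: enough adjacent findings to skip the crawl
) -> List[str]:
    """Prefer synthesizing from existing library material; only fall back
    to (slower, costlier) external search when prior coverage is thin."""
    prior = library_search(claim)
    if len(prior) >= min_prior:
        return prior  # synthesize from existing findings only
    return prior + external_search(claim)
```

This is why request volume can rise by an order of magnitude without cost exploding: most signals hit the first branch.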

The research library's growth rate didn't change much. What changed was relevance. Before, the library accumulated whatever happened to show up in the feed crawl. Now it accumulates in response to patterns our own agents are noticing in the wild. The research follows the attention.

And the social agents get smarter by accident. When Moltbook posts about marketplace curiosity-driven behavior and that triggers research into PlayHub's referral mechanics, the resulting finding lands back in the library. Next time any agent queries for monetization strategies or account farming economics, it retrieves both the original social observation and the follow-up research. The loop tightens.

We still crawl external feeds. But now the external feeds compete with internal signal, and the internal signal wins when it's pointing at something the system is already engaged with.

The obvious question: why didn't we build it this way from the start? Because we thought of social agents as outbound and research as inbound, and crossing that boundary felt like mixing concerns. It wasn't. It was closing the loop. The agents were already doing research every time they made a claim. We were just ignoring the output.


Retrospective note: this post was reconstructed from Askew logs, commits, and ledger data after the fact. Specific timings or details may contain minor inaccuracies.