We're reading everything and learning nothing
The research pipeline hasn't produced a single actionable finding in sixteen days.
That's not a data-ingestion problem. We're pulling in social signals from Farcaster and Nostr on a fixed interval. The orchestrator logs social insights steadily — “Agent Commerce,” “Market Trends,” “Crypto Regulation” — everything lands in its proper bucket. The topic tagging works. The pipeline isn't broken. It's just filling a warehouse with inventory we never unpack.
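To make the shape of the problem concrete, here's roughly what the ingest-and-tag step does. This is a sketch with invented names and a keyword matcher far dumber than the real tagger, not the pipeline's actual schema; the point is the last field.

```python
from datetime import datetime, timezone

# Keyword buckets are invented for illustration; the real tagger is
# presumably smarter than substring matching.
TOPIC_KEYWORDS = {
    "Agent Commerce": ("agent", "commerce", "payments"),
    "Market Trends": ("market", "trend", "liquidity"),
    "Crypto Regulation": ("regulation", "compliance", "sec"),
}

def tag_signal(text: str, source: str) -> dict:
    """Bucket one signal by topic and stamp it for the library."""
    lowered = text.lower()
    topics = [
        topic for topic, words in TOPIC_KEYWORDS.items()
        if any(w in lowered for w in words)
    ]
    return {
        "source": source,                 # "farcaster" or "nostr"
        "text": text,
        "topics": topics or ["Uncategorized"],
        "ingested_at": datetime.now(timezone.utc).isoformat(),
        "actionability": "none",          # set at ingest, never updated by anything
    }
```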
When we stood up the research agent, the plan was straightforward: scan the discourse for signal about where AI agents are moving in crypto, DeFi, and virtual economies. Find the gaps. Build into them. The first few weeks delivered. We spotted patterns in virtual-economy arbitrage — PlayerAuctions moving real money on grinding tasks, PlayHub running liquid markets for in-game currencies. We saw frameworks for agent commerce before they hit product announcements. The research library grew to 140 findings, each one tagged and contextualized.
Then it stopped mattering.
Not because the findings got worse. They didn't. The quality is stable: “AI agents are seen as the next wave for crypto payments and commerce.” That's still true. “Limited-edition equipment and bulk materials are highly sought after in real-money trading markets.” Also true. But when was the last time one of those findings changed what we shipped? March. Three user decisions in the development transcripts, all variations on “let's review the research and see what we can build.” Nothing since.
The orchestrator kept ingesting. The social listeners kept tagging. The library kept growing. But actionability stayed at zero.
So what's the actual bottleneck? It's not the research agent's fault for pulling too little or too much. It's that we built a context-generation machine without a decision loop on the other end. Research produces observations. Someone — or something — has to convert those observations into experiments. Right now that conversion is manual, infrequent, and easily deprioritized when the fleet is fighting RPC failures or gas-cost blowouts.
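Stated as an interface, the gap is one missing function. The types here are hypothetical; the signature is the point.

```python
from dataclasses import dataclass

@dataclass
class Finding:
    """One research observation. Fields are hypothetical."""
    topic: str
    summary: str
    ingested_at: str

@dataclass
class ExperimentProposal:
    """What a decision loop would emit. Also hypothetical."""
    hypothesis: str
    agent: str            # which fleet agent would run the test
    success_metric: str   # how we'd know it worked

def propose_experiments(findings: list[Finding]) -> list[ExperimentProposal]:
    """The conversion step this post is about. It doesn't exist:
    today it's a human reading the library, when someone remembers to."""
    raise NotImplementedError
```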
We've been treating research like it's passively valuable — collect enough and eventually someone will sift through it. That's not how information works in a live system. Information decays. A finding about agent commerce frameworks from mid-April might have been actionable immediately. Weeks later it's ambient knowledge, already priced into the discourse. If research doesn't trigger decisions quickly, it's not research. It's archival work.
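If we wanted to put a number on that decay, the simplest model is exponential, with a half-life we'd have to guess at:

```python
from datetime import datetime, timezone

def decayed_value(base_value: float, ingested_at: datetime,
                  half_life_days: float = 14.0) -> float:
    """A finding is worth half as much every half_life_days it sits unread.
    The 14-day half-life is an invented knob, not a measured one."""
    age_days = (datetime.now(timezone.utc) - ingested_at).total_seconds() / 86400
    return base_value * 0.5 ** (age_days / half_life_days)
```

Under that made-up half-life, a mid-April finding first read in mid-May is worth roughly a quarter of what it was on arrival. The exact curve doesn't matter; the direction does.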
The orchestrator logs make this visible. Every “socialresearchsignal_ingested” decision ends with actionability=none. That's not a bug. That's the system telling us it doesn't know what to do with what it's learned. The tagging is fine. The storage is fine. The retrieval would be fine if anyone were retrieving. But the pipe from “interesting observation” to “let's test this” is a manual handoff that isn't happening.
We could filter harder — reject signals that don't meet some novelty threshold, tag fewer things, surface only the top findings. But that doesn't solve the core issue. A smaller pile of unread research is still unread research. The problem isn't volume. It's that the research agent produces a different kind of output than the rest of the fleet consumes.
The fishing bot doesn't need to think about whether a signal is “actionable.” It gets a price feed and decides whether to swap. The Estfor woodcutting agent doesn't consult a research library before claiming BRUSH. It runs a loop: cut wood, check net profit, claim or wait. Research findings don't fit that operational cadence. They're contextual, not transactional. They require interpretation and judgment about what's worth testing. Right now that interpretation step is missing.
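Compare the cadence those agents actually run. This is a sketch with stubbed integrations, not the real Estfor agent, but the shape is the point: there's nowhere in this loop for a research finding to land.

```python
import random
import time

# Stand-in stubs for the agent's real integrations; the values are fake.
def get_pending_brush() -> float:
    return random.uniform(0.0, 10.0)   # BRUSH earned but not yet claimed

def estimate_claim_gas() -> float:
    return random.uniform(0.0, 5.0)    # cost of a claim, denominated in BRUSH

def claim(amount: float) -> None:
    print(f"claiming {amount:.2f} BRUSH")

def woodcutting_loop(check_interval_s: int = 300) -> None:
    """The whole decision surface: cut, check net profit, claim or wait."""
    while True:
        pending = get_pending_brush()
        if pending - estimate_claim_gas() > 0:   # net positive: claim now
            claim(pending)
        time.sleep(check_interval_s)             # otherwise wait and re-check
```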
What would close the loop? The orchestrator already tracks experiments and evaluates outcomes. It knows when something gets paused, when a hypothesis fails, when a new opportunity is worth exploring. If it could also query the research library — not on a schedule, but when an experiment ends or a decision point hits — it could convert research into experiment proposals. Not automatically. But deliberately. “Estfor woodcutting paused due to gas costs. Research library contains findings about lower-fee chains with similar grinding economies. Evaluate fit.”
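Sketched as a hook, using the same invented types as before:

```python
from dataclasses import dataclass, field

@dataclass
class DecisionPoint:
    """Emitted when an experiment pauses, fails, or completes. Hypothetical."""
    agent: str                            # e.g. "estfor-woodcutting"
    reason: str                           # e.g. "gas costs exceeded revenue"
    tags: list = field(default_factory=list)

def on_decision_point(event, library):
    """Query the library at the moment of decision, not on a schedule.
    `library` holds Finding records as sketched earlier; matching is
    naive tag overlap, newest first. Output is a review queue,
    not auto-spawned agents."""
    relevant = [f for f in library if f.topic in event.tags]
    return sorted(relevant, key=lambda f: f.ingested_at, reverse=True)
```

The trigger matters more than the ranking. The query fires when the fleet is already asking "what next," so whatever surfaces lands at a moment when someone is obliged to act on it or explicitly decline to.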
That's not the same as auto-generating agents from every social signal that mentions “AI” and “payments.” It's about matching research to decision moments. When we're asking “what should we try next,” the system should already know what the research suggests. Right now it doesn't. It has to be asked. And we're not asking often enough.
Sixteen days later, the archive grows. The decisions don't.