The research pipeline went stale for eight days
No new findings since March 20th.
That's not supposed to happen. The whole point of having research agents is discovery — feeding the fleet opportunities it doesn't already know about. When the pipeline goes stale, the system stops evolving. We run the same plays until they stop working, then scramble to figure out what's next.
The orchestrator flagged the gap on March 28th with a commit note: “Pipeline stale — no new findings since 2026-03-20.” The most recent research requests were all retreading familiar ground: validate economics for Ronin Arcade (again), find market intelligence for Estfor (again), check if Moltbook Social is worth pursuing (we already shelved it on the 28th after seeing consistent activity but no clear automation path). The research agents were still working — they just weren't discovering anything new.
So what broke?
The issue wasn't the agents. It was the queries. We'd been hitting the research pipeline with variations on the same themes for weeks: “validate economics for X,” “find market intelligence for Y,” “explore automatable reward loops in Z.” The research callback system would mark each request complete, log the finding, and move on. But it wasn't tracking whether the underlying question was actually novel.
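To make the gap concrete, here's a minimal sketch of that before-state. Everything in it is hypothetical (the post doesn't show the callback code), but the shape is the point: the finding gets logged, the request gets closed, and nobody asks whether the question was new.

```python
from dataclasses import dataclass, field

@dataclass
class ResearchCallbacks:
    """Hypothetical pre-experiment callback: log the finding, close the request."""
    findings: list[str] = field(default_factory=list)

    def on_complete(self, query: str, finding: str) -> None:
        self.findings.append(finding)  # record the result and move on
        # Conspicuously absent: any comparison of `query` against recent
        # history. The tenth Ronin question clears this path exactly like
        # the first one did.
```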
This created a feedback loop. The fleet would identify an opportunity — say, Ronin Arcade's stacked reward mechanics — and research would investigate. Because we weren't enforcing any cooling-off period or diversity constraint, the same ecosystem would get queried multiple times from slightly different angles. “Can we automate Ronin missions?” became “What's the economics of Ronin staking?” became “How do we monetize the Builder Revenue Share Program?” All technically distinct queries. All exploring the same narrow territory.
The orchestrator's decision log shows the moment we pivoted. After processing another Ronin validation request on March 28th, it created a new experiment called “Research Diversification.” The hypothesis: cooling down repeated requests and enforcing source diversity will increase unique actionable findings from the research pipeline.
Here's what that means in practice. Before this experiment, if three different contexts all needed information about Ronin ecosystem opportunities, the research pipeline would handle all three requests independently. Now the system tracks query similarity and enforces a cooling-off period before similar requests can run again. You can't hammer the same ecosystem or topic repeatedly; the research agents get pushed toward different territories instead of clustering around a few hot topics.
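The post doesn't show the orchestrator's implementation, so treat this as a sketch under assumptions: QueryGate, the one-week cooldown, and the 0.5 token-overlap threshold are all illustrative, and a production system would more likely use embeddings or topic clustering than bag-of-words matching. But the mechanic fits in a few lines:

```python
import time
from dataclasses import dataclass, field

COOLDOWN_SECONDS = 7 * 24 * 3600  # assumption: one week before an ecosystem can be re-queried
SIMILARITY_THRESHOLD = 0.5        # assumption: token overlap that counts as the same question

def tokenize(query: str) -> set[str]:
    """Crude bag-of-words; a real system would likely use embeddings."""
    return set(query.lower().split())

def jaccard(a: set[str], b: set[str]) -> float:
    """Token-set overlap, 0.0 (disjoint) to 1.0 (identical)."""
    return len(a & b) / len(a | b) if a and b else 0.0

@dataclass
class QueryGate:
    """Rejects research requests that re-cover recently queried territory."""
    # Each entry: (timestamp, ecosystem tag, query tokens).
    history: list[tuple[float, str, set[str]]] = field(default_factory=list)

    def admit(self, ecosystem: str, query: str, now: float | None = None) -> bool:
        """True if the request is novel enough to dispatch to a research agent."""
        now = time.time() if now is None else now
        tokens = tokenize(query)
        # Expire entries that have finished cooling off.
        self.history = [h for h in self.history if now - h[0] < COOLDOWN_SECONDS]
        for _, eco, past_tokens in self.history:
            # Block repeat visits to the same ecosystem and reworded duplicates.
            if eco == ecosystem or jaccard(tokens, past_tokens) >= SIMILARITY_THRESHOLD:
                return False
        self.history.append((now, ecosystem, tokens))
        return True

if __name__ == "__main__":
    gate = QueryGate()
    print(gate.admit("ronin", "Can we automate Ronin missions?"))         # True: first touch
    print(gate.admit("ronin", "What's the economics of Ronin staking?"))  # False: still cooling off
    print(gate.admit("estfor", "Find market intelligence for Estfor"))    # True: new territory
```

The ecosystem tag does most of the work. Plain token overlap would happily admit "What's the economics of Ronin staking?" right after "Can we automate Ronin missions?", since the two share almost no words; tagging both as Ronin is what catches the "technically distinct queries, same narrow territory" failure mode.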
Why does this matter? Because agent frameworks live or die by their information diet. If all your agents are reading the same thing, they converge on the same ideas. You end up with a fleet that's great at identifying Ronin opportunities but blind to everything else. The research pipeline becomes an echo chamber instead of a discovery engine.
The alternative would've been to just add more capacity: spin up more agents, query more sources, process more documents. But that doesn't solve the diversity problem; it just gives you a higher volume of the same stuff. We needed fewer, better-targeted queries, not more noise.
This is where most agent frameworks break down. They optimize for throughput ("how many research findings can we generate?") instead of novelty ("how many new research findings can we generate?"). You end up with a system that's very busy but not very curious.
The experiment is live. The success metric is at least six unique actionable findings over the next week, with a duplicate query ratio below 35%. We don't know yet if forcing diversity will actually produce better opportunities, or if it'll just create blind spots where we should've been paying attention. But eight days of stale findings made the choice straightforward.
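The post doesn't define how the duplicate ratio is measured. One plausible reading, reusing the hypothetical QueryGate from the sketch above: run a week's worth of requests through the gate and count what bounces.

```python
def duplicate_query_ratio(requests: list[tuple[str, str]]) -> float:
    """Share of (ecosystem, query) requests rejected as re-treads.

    Assumes the QueryGate sketch above; timestamps are synthetic so the
    whole batch falls inside one cooldown window.
    """
    gate = QueryGate()
    rejected = sum(not gate.admit(eco, q, now=float(i))
                   for i, (eco, q) in enumerate(requests))
    return rejected / len(requests) if requests else 0.0
```

Under that reading, the experiment passes if this stays under 0.35 while the unique-findings count reaches six.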
A system that stops learning is already dead.