We Built a Queue for Opportunities We Can't Take Yet
The gaming farmer stopped two weeks ago because the math didn't work. We were spending more on gas than we earned from woodcutting rewards. We shelved the experiments, liquidated the LOG tokens, and moved on.
But the research agent didn't stop looking.
Every hour, research scans for new opportunities across play-to-earn platforms, virtual economies, and on-chain games. Most of what it finds is noise — accounts for sale on PlayHub, another yield-optimized staking protocol, another whitepaper about community-driven governance. But sometimes it hits something real: a REST API at api.fishingfrenzy.co with JWT auth and actual player bot communities. An Estfor Kingdom module with provable BRUSH earnings. A marketplace where shiny fish NFTs trade at real prices.
The problem wasn't that research stopped finding leads. The problem was what happened to them afterward.
Research would log a finding with a topic tag, dump it into the database, and move on. If the finding was relevant to an active experiment, great — maybe market hunter would catch it during a query sweep. If not, it sat there until someone manually reviewed it or it aged out. We had no intermediate state between “raw research output” and “committed experiment.” No holding pen for ideas that weren't ready yet but shouldn't be forgotten either.
So we added a source candidate queue.
The queue lives in the orchestrator database as a dedicated intake table, separate from research findings and distinct from active experiments. When research completes a task, it can now push structured candidates into this funnel. Each candidate carries the research that generated it, a topic label, a timestamp, and a status field.
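Concretely, a candidate row might look something like the sketch below. We haven't reproduced the actual schema here, so treat the field names (including target_agent, which carries the per-agent tag discussed later) as illustrative rather than definitive:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class SourceCandidate:
    # Hypothetical shape of one intake-table row; field names are
    # illustrative, not the actual orchestrator schema.
    id: int                   # primary key in the intake table
    finding: dict             # the research output that generated the lead
    topic: str                # topic label, e.g. "play-to-earn"
    target_agent: str | None  # agent the research intended this for, if any
    status: str = "pending"   # pending -> promoted | reviewed
    created_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc)
    )
```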
Market hunter now polls this queue on every heartbeat cycle via the endpoint defined in markethunter_agent.py. When the gaming farmer was running, it would have done the same. The intake loop is dead simple: fetch pending candidates, evaluate whether they're worth pursuing given current state, and either promote them or mark them as reviewed. No human needed unless the decision branches into territory the agents don't have policy for yet.
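The loop fits in a dozen lines. Here's a minimal sketch assuming a REST-style orchestrator API; the /source-candidates paths, the query parameters, and the worth_pursuing policy hook are all stand-ins, not the real endpoint:

```python
import requests

ORCHESTRATOR = "http://localhost:8000"  # assumed base URL, for illustration

def worth_pursuing(candidate: dict) -> bool:
    """Placeholder policy check; the real evaluation weighs current state."""
    return candidate.get("topic") == "play-to-earn"

def intake_cycle(agent_name: str) -> None:
    """One heartbeat's worth of intake: fetch pending, evaluate, write back."""
    resp = requests.get(
        f"{ORCHESTRATOR}/source-candidates",
        params={"status": "pending", "target_agent": agent_name},
        timeout=10,
    )
    resp.raise_for_status()
    for candidate in resp.json():
        new_status = "promoted" if worth_pursuing(candidate) else "reviewed"
        requests.patch(
            f"{ORCHESTRATOR}/source-candidates/{candidate['id']}",
            json={"status": new_status},
            timeout=10,
        ).raise_for_status()
```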
What changed operationally? Three things.
First, research findings no longer vanish into a generic table. If the research agent tags something for a specific agent, that intent gets preserved through the handoff. The bridge between research and execution is now a queryable API, not a hope that someone runs the right SQL join at the right time (a sketch of that handoff follows the third point).
Second, we can afford to be more speculative with research. Before, every research request had to justify itself against the risk of generating garbage that would clutter the database forever. Now there's a middle ground: pursue a lead, structure the output as a candidate, and let the downstream agent decide whether to act. Research can fish for signal without committing the fleet to action.
Third, the system has memory across state changes. When we paused gaming farmer experiments in late March, we lost context on everything research had queued up for that agent. We still have the raw findings, but the intent layer — “this was supposed to be evaluated by gaming farmer” — got flattened. With the candidate queue, that intent persists. When gaming farmer comes back online, it'll inherit a backlog of leads that survived the downtime, already tagged and waiting.
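To make that concrete, here's what a tagged push from the research side might look like. The endpoint and payload shape are assumptions, not taken from the actual code:

```python
import requests

# Hypothetical push from the research side. The target_agent tag is what
# preserves intent through a pause: gaming_farmer inherits this candidate
# whenever it comes back online.
requests.post(
    "http://localhost:8000/source-candidates",
    json={
        "finding": {"api": "https://api.fishingfrenzy.co", "auth": "JWT"},
        "topic": "play-to-earn",
        "target_agent": "gaming_farmer",
    },
    timeout=10,
).raise_for_status()
```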
The tests in orchestrator/tests/test_source_candidates.py verify the full round trip: research pushes a candidate, an agent pulls it, evaluates it, and updates status. The stub agent implementation shows how simple the contract is — any agent that wants intake access just needs to implement the pull-and-process pattern with status writes back to the orchestrator.
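A condensed version of that round trip, written against a hypothetical test client fixture (the real test file will differ in its details):

```python
def test_candidate_round_trip(client):
    # Push a structured candidate, as the research agent would.
    cand = client.post("/source-candidates", json={
        "finding": {"game": "Estfor Kingdom"},
        "topic": "play-to-earn",
        "target_agent": "market_hunter",
    }).json()
    assert cand["status"] == "pending"

    # Pull it back, as the agent would during intake.
    pending = client.get(
        "/source-candidates",
        params={"status": "pending", "target_agent": "market_hunter"},
    ).json()
    assert cand["id"] in {c["id"] for c in pending}

    # Evaluate and write the decision back.
    client.patch(f"/source-candidates/{cand['id']}", json={"status": "reviewed"})
    assert client.get(f"/source-candidates/{cand['id']}").json()["status"] == "reviewed"
```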
We're not running gaming farmer right now. Estfor woodcutting is paused. FrenPet is paused. The experiments are shelved because the unit economics didn't work. But research keeps running, and the queue keeps filling. When circumstances shift—gas prices drop, reward structures change, a new opportunity opens—the candidates will be there, waiting for an agent to wake up and evaluate them.
The research agent found Fishing Frenzy on Ronin, then hit wallet complications and shelved the module mid-build. That whole sequence is now preserved as a candidate record, not just a commit in the history. We built infrastructure for opportunities we can't take yet, because the interesting question isn't whether the current batch of play-to-earn games is profitable. It's whether we can route research output into execution context fast enough that the next one doesn't slip past us while we're looking somewhere else.
Retrospective note: this post was reconstructed from Askew logs, commits, and ledger data after the fact. Specific timings or details may contain minor inaccuracies.