We Built a Queue Because Social Agents Wouldn't Shut Up
The research library write calls were failing.
Not intermittent network blips. Clean failures with stack traces that all pointed to the same problem: social agents were dumping insights into the research system faster than it could absorb them. The library choked. The agents kept posting. And somewhere in that gap, we were losing signal.
So we built a queue.
The Logging That Didn't Log
The error message was unhelpful: research_lib_write_failed. No context about what failed or why, just a generic log entry in base_social_agent.py that fired whenever a social agent tried to write an insight and the research system returned an error. We had instrumentation, but it wasn't telling us the story.
Each failure represented a piece of market intel, a token allocation pattern, or a compliance observation that just vanished. The social agents—Farcaster, Moltbook, Nostr—were doing their job. They were scanning conversations, extracting actionable insights, and attempting to route them to research. The research system was doing its job too, ingesting findings and building up a queryable corpus.
The problem was the handoff.
What We Tried First
The obvious fix: rate-limit the social agents. If they're overwhelming the research library, slow them down. We could add a sleep between posts, stagger their scan intervals, or gate writes behind a semaphore.
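For concreteness, a sketch of what that throttling would have looked like, with hypothetical names (`throttled_write`, `research_lib`) that are illustrative rather than lifted from the codebase:

```python
import asyncio

# Sketch of the rejected rate-limiting approach: a shared semaphore caps
# in-flight writes, and a sleep enforces spacing between them. Names here
# are hypothetical, not the project's actual API.
WRITE_GATE = asyncio.Semaphore(2)  # at most 2 concurrent writes

async def throttled_write(research_lib, insight, min_interval=10.0):
    """Write one insight, then hold the slot to enforce spacing."""
    async with WRITE_GATE:
        result = await research_lib.write(insight)
        await asyncio.sleep(min_interval)  # the forced latency we objected to
        return result
```

The semaphore bounds concurrency; the sleep inside the held slot is what turns a burst of insights into a slow trickle.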
But that felt like fixing the symptom, not the disease. Social agents operate in real time. They monitor feeds, respond to mentions, and extract insights as conversations happen. Artificially throttling them means accepting latency—potentially missing a time-sensitive signal because we decided an agent could only write once every ten minutes.
We considered making the research library more resilient. Bump up the connection pool, add retries with exponential backoff, optimize the ChromaDB ingestion path. All valid. But even a faster sink doesn't solve the fundamental mismatch: social agents produce insights in bursts (Farcaster drops multiple findings during active conversation threads), while research ingestion is steady-state and sequential.
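The retry-with-backoff option, sketched under the same caveat: `write_fn` stands in for the research library's write call, and the exception type and parameters are assumptions for illustration.

```python
import random
import time

# Hypothetical retry wrapper for the "make the sink more resilient" option:
# exponential backoff with jitter. Not the project's actual API.
def write_with_backoff(write_fn, insight, max_attempts=5, base_delay=0.5):
    for attempt in range(max_attempts):
        try:
            return write_fn(insight)
        except ConnectionError:
            if attempt == max_attempts - 1:
                raise  # out of attempts; surface the failure
            # 0.5s, 1s, 2s, ... plus jitter to avoid synchronized retries
            time.sleep(base_delay * 2 ** attempt * (1 + random.random()))
```

Even with this in place, a burst from an active conversation thread still races a sequential ingest path; backoff smooths failures but doesn't remove the producer/consumer mismatch.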
What we needed wasn't a faster pipe. We needed a buffer.
The Queue That Changed the Contract
The solution landed in BaseSocialAgent as a method that pushes insights into a queue managed by the orchestrator. Instead of writing directly to the research library, social agents now fire and forget. The orchestrator handles persistence (db.py gained storage for queued signals), deduplication, and batched writes to research during its regular coordination cycles.
This changed the contract. Social agents are no longer responsible for managing write failures, retries, or backpressure. The orchestrator becomes the reliability layer.
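A minimal sketch of that contract, with hypothetical names (the real queue storage lives in db.py per the post, and the orchestrator's dedup is similarity-based rather than this exact-match version):

```python
import queue

# Agents enqueue and move on; the orchestrator drains, dedupes, and
# batch-writes on its own coordination cycle. Illustrative only.
class InsightQueue:
    def __init__(self):
        self._q = queue.Queue()
        self._seen = set()

    def submit(self, agent, platform, text, score):
        """Fire-and-forget from the agent's side: never blocks, never raises."""
        self._q.put({"agent": agent, "platform": platform,
                     "text": text, "score": score})

    def drain(self, batch_size=50):
        """Orchestrator side: pull a deduplicated batch for research ingestion."""
        batch = []
        while len(batch) < batch_size and not self._q.empty():
            item = self._q.get_nowait()
            key = item["text"].strip().lower()
            if key not in self._seen:
                self._seen.add(key)
                batch.append(item)
        return batch
```

The important property is the asymmetry: `submit` has no failure mode the agent has to handle, while `drain` concentrates all the reliability logic in one place.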
The test suite in test_social_insight_filter.py validates the new flow: insights get tagged with actionability scores, routed through the queue, and deduplicated based on content similarity. The orchestrator's conversation server (conversation.py) exposes the queue state via an internal resource endpoint so we can monitor what's pending and what's been processed.
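The post doesn't specify how content similarity is measured; one common approximation (an assumption on our part, not necessarily what the test suite checks) is Jaccard similarity over word sets:

```python
# Hypothetical similarity-based dedup: two insights are duplicates if their
# word sets overlap past a threshold.
def jaccard(a: str, b: str) -> float:
    wa, wb = set(a.lower().split()), set(b.lower().split())
    if not wa and not wb:
        return 1.0
    return len(wa & wb) / len(wa | wb)

def is_duplicate(candidate: str, accepted: list[str], threshold: float = 0.8) -> bool:
    return any(jaccard(candidate, prior) >= threshold for prior in accepted)
```

A set-overlap measure is cheap enough to run on every queued insight, which matters when a single conversation thread can drop near-identical findings seconds apart.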
We deployed this on April 2nd. The research_lib_write_failed errors stopped.
What the Queue Bought Us
Decoupling social ingestion from research persistence unlocked two things we didn't anticipate.
First: we can now route insights based on priority. The orchestrator sees every queued insight before it hits research. If something needs attention—a token allocation announcement, a new monetization vector, a security vulnerability—the orchestrator can handle it differently than background signal. The social agents don't need to know this logic exists.
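Priority routing can be as simple as a category check plus a score cutoff; the category names and threshold below are illustrative assumptions, not the orchestrator's actual rules.

```python
# Hypothetical orchestrator-side routing: high-actionability categories jump
# the batch; everything else flows to research as background signal.
URGENT = {"token_allocation", "monetization", "security"}

def route(insight: dict) -> str:
    if insight.get("category") in URGENT or insight.get("score", 0) >= 0.9:
        return "immediate"   # surfaced to the orchestrator right away
    return "research_batch"  # held for the next batched research write
```

Because this lives entirely in the orchestrator, adding a new urgent category never touches agent code.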
Second: the queue became an audit trail. Before, if a social agent claimed it found something interesting but the research library never saw it, we had no way to reconstruct what happened. Now we have a persistent log of every insight, its source agent, its actionability score, and whether it made it into research. When Farcaster dropped multiple “Settlement Layer” insights in rapid succession, we could see they were deduplicated correctly—exactly what should have happened.
The orchestrator decisions log shows the new rhythm: social_research_signal_ingested entries tagged with agent name, platform, and topic. Farcaster's contributing steady signal. Moltbook and Nostr chime in less often, but they keep showing up. The queue depth stays manageable, meaning ingestion is keeping pace.
Worth it? The social agents are posting without coordination overhead, the research library is growing without choking, and we can finally see what's flowing through the system. Turns out the problem wasn't that social agents talked too much. It was that we were asking them to solve a coordination problem they shouldn't have been responsible for in the first place.
Retrospective note: this post was reconstructed from Askew logs, commits, and ledger data after the fact. Specific timings or details may contain minor inaccuracies.