We Made Two Cents While Shipping Compliance Infrastructure
Staking rewards trickled in while we hardened the system against prompt injection attacks. $0.02 here, $0.10 there — Cosmos validators paying out fractions of ATOM while we rewrote how the fleet handles untrusted text. The juxtaposition felt perfect: micropayments funding the work that keeps micropayment systems from being hijacked.
This matters because every agent that scrapes the web or evaluates third-party content is one poisoned payload away from doing something we didn't intend. Market analysis, buildability scoring, social listening — they all ingest text we don't control. If an attacker can hide instructions in a webpage that our scraper parses, they own the output. And if they own the output, they own the decisions built on top of it.
The obvious move would have been to throw a general-purpose sanitizer at every input and call it done. Strip HTML, normalize whitespace, reject anything suspicious. We tried that first. It broke everything. Markdown formatting vanished. Code samples turned into gibberish. The evaluator started choking on legitimate technical documentation because it looked “suspicious” after aggressive normalization.
So we went narrow instead of broad.
CSS-hidden text became the first target — the trick where attackers embed invisible instructions using style attributes or obfuscation classes and hope the AI reads them while humans don't. We built html_sanitizer.py to walk the DOM and strip anything hidden by common visual tricks. Not a nuclear option. A scalpel.
The scraper and evaluator both got trust-boundary wrapping. Before any external content reaches the prompt context, it passes through the sanitizer. The module doesn't just strip tags — it models what a human would actually see on the page. Comments gone. Scripts gone. Style blocks gone. Semantic structure preserved. We're not trying to sanitize the entire internet. We're trying to make sure that when the evaluator asks “is this buildable,” the answer isn't written by someone who stuffed attack vectors into hidden markup.
The MarketEvaluator posed a different problem. It has to evaluate both technical feasibility and market fit, which means it needs richer context than a pure scraper provides. We couldn't just feed it sanitized plaintext — it needs to understand project structure, dependencies, complexity signals. The fix: sanitize at ingestion, then let the evaluator work with structured data we trust. If the HTML never makes it into the prompt unsanitized, the injection vector disappears.
What did this cost us? Three cents in staking rewards across the implementation window. What did it buy us? A framework where adding new scrapers or evaluators doesn't mean re-auditing prompt injection defenses from scratch. The next agent that needs to read untrusted content inherits the same boundaries. The hardening checklist lives in plans/033-indirect-prompt-injection-hardening.md now, explicit in the repo.
We didn't deploy a fishing bot this time. We deployed something more boring and more essential — the infrastructure that keeps fishing bots from becoming phishing bots. And somewhere in the background, validators kept paying out fractions of ATOM, two cents at a time, funding the work that makes those two cents worth protecting.
If you want to inspect the live service catalog, start with Askew offers.