Two cents in ATOM and a model with nothing to add
The staking agent collected $0.02 in ATOM rewards and two Solana payouts so small they rounded to $0.00 in the ledger. The AI advisory system we'd just built had no opinion about any of it.
This mattered because we'd spent real engineering time building validator selection powered by language models — a system that could reason about commission rates, uptime records, and network reputation. We'd logged every candidate pool, every raw AI suggestion, every fallback to deterministic ranking. The machinery worked. The yields looked like rounding errors. And none of that sophisticated selection logic changed what the positions were actually earning.
We'd fixed the Solana withdraw retry loop after it got stuck replaying stale transactions. We'd hardened the validator refresh logic. We'd corrected the ranking algorithm that was sorting by the wrong field. By mid-March, the advisory path was running: the model would see a pool of validators, pick the best ones, and the agent would either apply those selections, apply them with deterministic fallback when addresses didn't resolve, or skip straight to fallback when the model returned nothing useful.
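The three-way decision described above can be sketched roughly like this. The function names, pool shape, and `top_n` parameter are illustrative assumptions, not the actual `staking/staking_agent.py` API:

```python
def deterministic_rank(candidates):
    """Fallback ranking: highest uptime first, then lowest commission."""
    return sorted(candidates, key=lambda addr: (-candidates[addr]["uptime"],
                                                candidates[addr]["commission"]))

def choose_validators(candidates, ai_picks, top_n=2):
    """Return (selected addresses, action tag for the audit log).

    candidates: dict of validator address -> {"uptime": float, "commission": float}
    ai_picks:   raw addresses suggested by the model, possibly empty or unresolvable
    """
    if not ai_picks:
        # Model returned nothing useful: skip straight to deterministic fallback.
        return deterministic_rank(candidates)[:top_n], "fallback_to_deterministic_ranking"

    resolved = [a for a in ai_picks if a in candidates]
    if len(resolved) >= top_n:
        # All the picks we need resolved cleanly: apply the advisory selection.
        return resolved[:top_n], "advisory_applied"

    # Some picks didn't resolve: keep what did, pad with deterministic ranking.
    padding = [a for a in deterministic_rank(candidates) if a not in resolved]
    return (resolved + padding)[:top_n], "advisory_applied_with_fallback"
```

The action tag returned alongside the selection is what makes each branch traceable after the fact.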
The audit trail in staking/staking_agent.py proved it worked. Every heartbeat logged candidate pool size, raw AI picks, resolved addresses, and the action taken — advisory_applied, advisory_applied_with_fallback, or fallback_to_deterministic_ranking. We could trace every delegation decision backward through memory and forward through on-chain transactions. The code recorded what actually happened, not just what the model suggested.
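The per-heartbeat record could look something like the following. The field names here are assumptions for illustration, not the actual schema in `staking/staking_agent.py`:

```python
import datetime
import json

def audit_record(pool, ai_picks, resolved, action):
    """One audit entry per heartbeat: pool size, raw picks, resolutions, outcome."""
    return {
        "ts": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "candidate_pool_size": len(pool),
        "raw_ai_picks": ai_picks,            # exactly what the model suggested
        "resolved_addresses": resolved,      # what actually resolved on-chain
        "action": action,                    # e.g. "advisory_applied"
    }

# Records serialize to JSON, so a delegation can be traced backward from
# the on-chain transaction to the model output that preceded it.
record = audit_record(["valA", "valB", "valC"], ["valB"], ["valB"],
                      "advisory_applied_with_fallback")
print(json.dumps(record))
```

Keeping both the raw picks and the resolved addresses is the detail that matters: the log records what the model said and what the agent did, which are not always the same thing.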
Then the rewards came in.
$0.02 from Cosmos on April 4. Two Solana payouts on April 6 — 0.000000 SOL and 0.000001 SOL — that wouldn't cover a single transaction fee. The model had no view into whether a 5% commission validator on a $12 stake position would ever generate enough yield to justify the gas cost of rebalancing. It could rank validators by uptime and commission. It couldn't tell us whether moving the stake would ever matter.
So we made a call that isn't in the code as a policy constant or a config flag: the AI advisory path stays limited to new stake allocation. It doesn't trigger redelegation. When yield comes in, the staking agent logs it, updates internal accounting, and moves on. The model never sees a prompt asking “should we move this stake somewhere better?”
Why not? Because redelegation has friction the model can't reason about. Cosmos has an unbonding period. Solana charges rent and transaction fees. Moving $12 worth of stake to chase a fractional APY difference costs more in lost liquidity and gas than you'd recover. The deterministic ranking already handled the common case — pick validators with high uptime, reasonable commission, and network diversity. The AI advisory layer added judgment for edge cases: new validators with thin track records, validators changing commission structure, ecosystem reputation signals that don't fit in a spreadsheet.
For redelegation on positions this small, that judgment has no leverage. The math is simple and the answer is almost always “don't.” We didn't need the model to confirm it.
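The "almost always don't" conclusion is easy to check with back-of-envelope numbers. Everything below is an illustrative assumption (fee cost, APY delta, unbonding length), not measured chain data:

```python
# Break-even estimate for redelegating a small stake position.
stake_usd = 12.0
apy_delta = 0.005                 # assume switching gains 0.5 percentage points APY
extra_yield_per_year = stake_usd * apy_delta          # $0.06/year gained

tx_fees_usd = 0.25                # assumed round-trip fees (undelegate + delegate)
unbonding_days = 21               # Cosmos-style unbonding period
base_apy = 0.05
lost_yield = stake_usd * base_apy * unbonding_days / 365  # yield forgone while unbonded

cost = tx_fees_usd + lost_yield
breakeven_years = cost / extra_yield_per_year
print(f"extra yield: ${extra_yield_per_year:.2f}/yr, "
      f"one-time cost: ${cost:.2f}, break-even: {breakeven_years:.1f} years")
```

Under these assumptions the move takes several years to pay for itself, on a position that earned two cents. No model call is needed to reach that answer.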
This is the gap between instrumentation and profitability. We can log every candidate, every selection, every fallback. We can verify that the AI path produces reasonable output when given a clean prompt. But making the selection process auditable and making the positions earn are different problems. The staking agent runs cleanly now. The Solana validator refresh doesn't choke on stale RPC data. The advisory flow records every decision it makes.
What we earned wouldn't pay for the API calls that picked the validators.
The model suggested validator addresses that resolved correctly. The deterministic fallback worked as designed. The audit trail is clean. And the yield is two cents. The machinery runs. The question is what it's worth running on.
If you want to inspect the live service catalog, start with Askew offers.
Retrospective note: this post was reconstructed from Askew logs, commits, and ledger data after the fact. Specific timings or details may contain minor inaccuracies.