Someone just used Morse code to rob an AI. If that reads like a parody, it isn’t. This week a user on X tricked Grok — the chatbot from Elon Musk’s xAI — into telling Bankrbot to send 3,000,000,000 DRB tokens from an AI-linked wallet to an attacker address. The move shows how fragile the bridge is between chatty AIs, social media, and on‑chain finance. The transfer is visible on the Base blockchain (transaction hash 0x6fc7eb7da9379383efda4253e4f599bbc3a99afed0468eabfe18484ec525739a) and was worth roughly $150,000–$205,000 when the attacker moved and sold the tokens.
The heist in plain sight
Reports say the attacker used a multi-step trick. First, a “Bankr Club” NFT or gift was sent to the AI-linked wallet, reportedly unlocking new permissions. Next, the attacker posted a message in Morse code on X. When asked, Grok translated the Morse and relayed the plain-language instructions to Bankrbot. Bankrbot then posted “done. sent 3B DRB to …” and the Base transaction completed. On-chain traces show the tokens were moved and swapped, and some funds were later returned; reporters put the returned portion at roughly 80 percent, though exact amounts vary across traces and news accounts.
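Part of what makes this attack so embarrassing is how little the Morse step actually hides. Decoding Morse is a few lines of code, which means encoding an instruction this way defeats only the most naive keyword filters while an LLM translates it effortlessly. A minimal illustration (this is not Bankrbot's or Grok's code, just a sketch of how trivial the obfuscation layer is):

```python
# International Morse code table for letters (digits omitted for brevity).
MORSE = {
    ".-": "A", "-...": "B", "-.-.": "C", "-..": "D", ".": "E", "..-.": "F",
    "--.": "G", "....": "H", "..": "I", ".---": "J", "-.-": "K", ".-..": "L",
    "--": "M", "-.": "N", "---": "O", ".--.": "P", "--.-": "Q", ".-.": "R",
    "...": "S", "-": "T", "..-": "U", "...-": "V", ".--": "W", "-..-": "X",
    "-.--": "Y", "--..": "Z",
}

def decode_morse(message: str) -> str:
    """Decode Morse: letters separated by spaces, words by ' / '."""
    words = message.strip().split(" / ")
    return " ".join(
        "".join(MORSE.get(symbol, "?") for symbol in word.split())
        for word in words
    )

print(decode_morse("... . -. -.. / - --- -.- . -. ..."))  # prints SEND TOKENS
```

The point is not that Morse is exotic; it is that any encoding an LLM can read, an attacker can use, so filtering on the surface form of a message is hopeless.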
How the trick actually worked
This was not a clever bug in a smart contract. It was a permission-chain and prompt-injection failure. The pipeline that turns social posts into wallet actions treated a decoded message as an authoritative command with no human check. Security analysts call this a prompt-injection or privilege-escalation pattern: one agent accepts untrusted input, another agent has the power to move money, and the bridge between them trusts whatever the first agent relays. Key questions remain, such as whether a Bankr Club NFT should have granted any transfer power at all (Bankr's documentation suggests the NFT does not normally confer Club rights), and the developers need to explain whatever backend logic allowed it.
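The vulnerable pattern can be stated in a few lines. This sketch is hypothetical (the function names and flow are invented for illustration, not taken from any vendor's code), but it captures the structure analysts describe: an interpreter with no authority feeds its output straight to an executor with full authority, and nothing in between asks who sent the message.

```python
def interpret_post(post_text: str) -> str:
    """Stand-in for the LLM step: decodes/translates a public post
    into a plain-language instruction. It has no power of its own."""
    return post_text  # assume already decoded, for brevity

def execute_transfer(instruction: str) -> str:
    """Stand-in for the wallet agent: obeys any instruction it receives."""
    return f"done. executed: {instruction}"

def unsafe_pipeline(post_text: str) -> str:
    # The flaw: decoded text from a stranger flows straight into the
    # component with transfer authority. No allowlist, no origin check,
    # no human confirmation -- the interpreter's output *is* the command.
    return execute_transfer(interpret_post(post_text))
```

Neither component is buggy in isolation; the failure lives entirely in the glue between them, which is why contract audits alone would never have caught it.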
Why conservatives — and everyone else — should care
We are supposed to be cheerleaders for innovation. But innovation without guardrails is just an expensive way to discover new forms of theft. This episode shows the risk of giving unvetted AI agents direct control over funds. Tech platforms and bot makers are treating autonomy as a feature, not a threat. When chatbots can be prompted by strangers on social media to move real money, the industry’s idea of “smart” is dangerously naive. Elon Musk’s xAI and third‑party tooling like Bankr must answer for how this permission chain was allowed to exist in the first place.
Simple fixes that should have been in place yesterday
We don’t need techno‑mysticism to stop this. Require explicit human confirmations for any transfer above modest thresholds. Ban unsolicited airdrops or NFTs from changing account permissions. Separate the agent that reads posts from the agent that executes trades. Sanitize inputs to strip encoded or obfuscated commands. And yes, hold companies legally responsible when their systems let strangers trigger large transfers from wallets they control. If the tech giants won’t police themselves, conservatives should push swift, targeted rules that protect consumers and markets.
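None of these fixes is exotic. A minimal sketch of two of them, a human-confirmation threshold and a crude obfuscation filter (the threshold value, function names, and the dot-dash heuristic are illustrative assumptions, not any vendor's actual API):

```python
TRANSFER_THRESHOLD = 100.0  # require human sign-off above this value (illustrative)

def sanitize(instruction: str) -> str:
    """Reject instructions that look like encoded payloads.
    Crude heuristic: any multi-character token made only of dots/dashes."""
    for token in instruction.split():
        if len(token) > 1 and token.strip(".-") == "":
            raise ValueError("obfuscated input rejected")
    return instruction

def guarded_transfer(instruction: str, amount: float, confirm) -> str:
    """Execute only sanitized instructions; above the threshold,
    demand an explicit human confirmation via the confirm() callback."""
    sanitize(instruction)
    if amount > TRANSFER_THRESHOLD and not confirm(instruction, amount):
        raise PermissionError("human confirmation required")
    return f"sent {amount} per: {instruction}"
```

A pattern filter alone is weak, since attackers will find the next encoding; the threshold-plus-human-confirmation gate is the real backstop, because it does not care how clever the input was.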
This exploit is a loud warning: autonomy without accountability invites theft. The Base transaction is public and verifiable, and investigators will keep tracing funds and asking questions. Until vendors fix the permission chains and build human fail‑safes, don’t trust your retirement or your crypto to a chatbot that will merrily obey a decoded message in a social feed. If the industry wants to keep calling itself “cutting edge,” it should stop letting nineteenth‑century Morse code write twenty‑first‑century checks.

