Balancer v2’s Overlooked Guard
Before we dig in: public forensics is still evolving. Early threads highlighted a check in the Vault's `manageUserBalance` path; some auditors think the real trigger is state changes that happen just before the withdrawal proxy is created. The numbers are fluid as responders triage pools and chains. It also appears the attack may have been "vibe coded": the attackers left console.log comments in their attack contract, a beginner mistake that is highly unusual for professionals but typical of AI models like ChatGPT.
What happened, step by step
- Target and symptom. Balancer v2 Vaults saw rapid outflows from multiple pools on several chains. First tallies ranged widely, with some trackers calling ~$70M in losses and others listing $100M+; one round‑up pegged it at about $117M while responders were still isolating the impact. (CCN; some dashboards and posts showed lower figures as separate pools were counted or excluded.)
- The hot path most people pointed at. Balancer's Vault contract exposes `manageUserBalance`, which accepts a list of "ops" that can deposit, withdraw, or transfer "internal balances." In the leaked traces and screenshots circulating in research chats, the operation `kind = WITHDRAW_INTERNAL` is visible alongside a user‑supplied `op.sender` and a recipient. The claim: `_validateUserBalanceOp` compared `msg.sender` to the user‑provided `op.sender`, which let an attacker craft withdrawals that didn't belong to them. That's the simple story.
- Nuance from auditors. Multiple researchers, including kebabsec, countered that in the suspicious calls `ops.sender == msg.sender`, making the "wrong‑sender" test a red herring. They suggest the bug sits one step earlier: a state mutation during the setup of the withdrawal proxy left the Vault in a permissive state that the later `manageUserBalance` call exploited. That read keeps `manageUserBalance` as the visible trigger, but shifts the root cause to a pre‑withdrawal initialization/authorization gap.
- Why it worked across pools. The Vault is a shared accounting hub; once an attacker can move internal balances or convince the Vault to honor a forged proxy, they can drain heterogeneous pools quickly. That design choice is a strength for gas and UX, but it also makes invariants in the Vault the highest‑value seam to test and to guard.
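To make the first reading concrete, here is a toy Python model of the straight access‑control story. Everything in it (the ledger, function names, the relayer set) is invented for illustration; Balancer's actual code is Solidity and structured differently.

```python
# Toy model of reading (a): an authority check that never binds back to the
# actual caller. Illustrative only — NOT Balancer's real code.
balances = {"victim": 100, "attacker": 0}
approved_relayers = set()  # (owner, relayer) pairs allowed to act for owner

def _move(owner, recipient, amount):
    # Shared bookkeeping: debit owner, credit recipient.
    if balances.get(owner, 0) < amount:
        raise ValueError("insufficient internal balance")
    balances[owner] -= amount
    balances[recipient] = balances.get(recipient, 0) + amount

def withdraw_internal_buggy(msg_sender, op_sender, recipient, amount):
    # BUG: authority is never tied back to msg_sender, so the caller can
    # name any account as op.sender and move that account's balance.
    _move(op_sender, recipient, amount)

def withdraw_internal_fixed(msg_sender, op_sender, recipient, amount):
    # FIX: the caller must BE op.sender, or an explicitly approved relayer.
    if msg_sender != op_sender and (op_sender, msg_sender) not in approved_relayers:
        raise PermissionError("caller not authorized for op.sender")
    _move(op_sender, recipient, amount)
```

In the buggy variant, an attacker simply passes the victim's address as `op_sender`; in the fixed variant the same call reverts before touching balances.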
Balancer v2 contracts are mature, widely reviewed, and battle‑tested
Balancer v2 went live in May 2021 with the single‑Vault architecture and years of production time since then. See the original launch coverage and Balancer’s own “risks” page describing a heavily audited Vault that has “secured over $3b” since 2021. The public repo documents independent reviews by Certora, OpenZeppelin, and Trail of Bits and a long‑running bug bounty. Those are real investments — and they still don’t guarantee immunity to a subtle validation gap that only looks trivial once you see it. (Balancer risks; v2 monorepo security section.)
If anything, this incident underlines a lesson we relearn every cycle: audits reduce risk; they don’t extinguish it. Review depth, evolving code paths, and the sheer complexity of Vault‑level invariants leave room for low‑entropy mistakes to survive until someone weaponizes them.
This year’s pattern: most losses were plain old cyber first, code second
Look at 2025 incident data from multiple firms and you'll see the same signal: social engineering, cloud keys, and front‑end tampering dominated losses; code bugs came second. CertiK's Q2/H1 report describes more than $2.1B lost by June, with wallet compromises and phishing leading by a wide margin. That framing matters, because the Balancer v2 hack is a reminder that both classes of risk are active.
Two quick parallels:
- The Bybit heist. DPRK‑linked operators compromised a Safe{Wallet} developer, pivoted into AWS, and shipped a malicious front‑end on the legitimate domain that targeted Bybit’s wallet flow — ending in a ~$1.4–1.5B drain. Read the forensics from Sygnia, the emulation from Elastic Security Labs, and coverage by MarketWatch and Wired. The code was fine; the supply chain and cloud were not.
- SwissBorg. A third-party API for staking got owned; the attacker minted requests that siphoned ~$41–42M without touching core app logic.
Even Balancer itself suffered a front‑end hijack in 2023 via DNS/BGP tampering, where users on the real domain were served malicious scripts. That episode cost a few hundred thousand dollars and shows how web2 edges can betray web3 cores.
So yes, the Balancer v2 hack is a smart‑contract failure, but it lands in a year where traditional compromises have been the bigger thief.
The bug, in plain language
- The Vault keeps per‑user "internal balances."
- `manageUserBalance` lets an authorized caller withdraw from their internal balance to a recipient.
- The alleged fault: acceptance of a withdrawal path where the authority check didn't bind tightly enough to the actual funds being moved — either because (a) the function compared `msg.sender` to an attacker‑chosen field, or (b) pre‑creation of the withdrawal proxy caused the Vault to treat later calls as authorized when they weren't. The net result is the same: a crafted sequence could withdraw balances the caller didn't own.
If (a) holds, it’s a straight access‑control bug. If (b) holds, it’s a temporal authorization bug where state flips open a door for one transaction, then a second transaction walks through it. Both patterns are classic, both survive ordinary test coverage, and both are exactly the kind of thing invariants and exhaustive simulation should pin down once you know what to look for.
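The temporal pattern (b) can also be sketched in a few lines. Again, this is a hypothetical Python model of the *shape* of the bug, not Balancer's code: transaction 1 mutates authorization state without a permission check, and transaction 2 passes a check that looks correct in isolation.

```python
# Toy model of reading (b): a temporal authorization bug. Names and
# mechanics are invented; only the two-transaction shape matters.
class ToyVault:
    def __init__(self):
        self.balances = {"victim": 100, "attacker": 0}
        self.authorized = set()  # (caller, owner) pairs

    def init_withdrawal_proxy(self, caller, owner):
        # BUG: setup mutates authorization state WITHOUT verifying that
        # `caller` may act on behalf of `owner`.
        self.authorized.add((caller, owner))

    def manage_user_balance(self, caller, owner, recipient, amount):
        # This check looks correct in isolation; it trusts state that the
        # earlier, unguarded setup call was allowed to poison.
        if caller != owner and (caller, owner) not in self.authorized:
            raise PermissionError("unauthorized")
        self.balances[owner] -= amount
        self.balances[recipient] = self.balances.get(recipient, 0) + amount

# Transaction 1 flips the state open; transaction 2 walks through the door.
vault = ToyVault()
vault.init_withdrawal_proxy("attacker", "victim")
vault.manage_user_balance("attacker", "victim", "attacker", 100)
```

Single‑call tests on `manage_user_balance` would pass here; only a test that exercises the two‑call sequence catches the drain.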
So what do we learn?
- Maturity isn’t immunity. Balancer v2 has been live since 2021 and reviewed by multiple firms with a large bounty program. A Vault‑level invariant slipped through anyway. (v2 monorepo “Security” section; Balancer risks page.) The same was true of GMX v1, another reputable DeFi protocol: live since 2021, heavily audited, and covered by a bounty program. That case was resolved by luck, with the attacker returning most of the funds in exchange for a $5M bounty.
- Design centralization raises stakes. A single, generalized Vault gives better gas economics and UX, but it also concentrates failure.
- Defense must span code and operations. 2025 loss data shows cloud keys, phishing, and DNS compromise dominating the leaderboard; smart‑contract issues still deliver high‑impact black‑swans.
Multiple layers, not one silver bullet
Teams often deploy three brittle controls: an audit, a multisig, and a domain registrar account with weak change controls. That stack fails in too many ways. A more resilient posture stacks several independent circuit breakers:
- Transaction‑model guards at the wallet boundary. Treat a Safe as programmable policy, not just signers. A guard should parse calldata, simulate effects, and allow only modeled transactions from a published catalog. Anything else routes into a high‑friction path with extra reviews and a timelock.
- Front‑end distrust by default. Even on the “right” domain, inject runtime checks in your browser extension or signing module that compare the transaction to known templates for that dapp. This kills fake‑UI drains that piggyback on your real session. See prior front‑end compromises across Balancer’s 2023 DNS incident and sector‑wide DNS hijacks tied to registrar shifts.
- Cloud blast‑radius controls. Force changes to public assets (S3/CloudFront) through immutable pipelines, not console edits; wire CloudTrail/Config to page humans when a front‑end object or DNS record changes. The Bybit post‑mortems show how much is possible once a vendor’s cloud is inside the blast radius.
- Invariant testing that targets the hub. If your protocol has a central accounting core, concentrate fuzzing, simulation, and formal rules there. The invariant for internal balances is simple to state: nobody except the owner or an explicitly authorized delegate can reduce a user’s internal balance — even across multi‑step flows. Tie that to properties across transaction sequences, not just single calls.
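The last point can be sketched as a minimal sequence‑level fuzzer, assuming a toy internal‑balance ledger (all names and rules here are invented, not Balancer's): random call sequences run against the model, and the owner‑or‑delegate property is asserted after every successful state change, not per call.

```python
import random

class Ledger:
    """Toy internal-balance model; names and rules invented for illustration."""
    def __init__(self):
        self.balances = {"alice": 100, "bob": 100, "attacker": 0}
        self.delegates = {("alice", "router")}  # (owner, approved delegate)

    def withdraw(self, caller, owner, recipient, amount):
        if caller != owner and (owner, caller) not in self.delegates:
            raise PermissionError("unauthorized")
        if self.balances.get(owner, 0) < amount:
            raise ValueError("insufficient")
        self.balances[owner] -= amount
        self.balances[recipient] = self.balances.get(recipient, 0) + amount

def fuzz_invariant(steps=1000, seed=0):
    # Invariant under test: only the owner or an explicitly approved
    # delegate ever reduces an owner's internal balance, across any
    # sequence of calls — not just within a single call.
    rng = random.Random(seed)
    ledger = Ledger()
    users = ["alice", "bob", "attacker", "router"]
    for _ in range(steps):
        caller, owner, recipient = (rng.choice(users) for _ in range(3))
        before = dict(ledger.balances)
        try:
            ledger.withdraw(caller, owner, recipient, rng.randint(1, 50))
        except (PermissionError, ValueError):
            continue  # rejected calls cannot violate the invariant
        if ledger.balances.get(owner, 0) < before.get(owner, 0):
            assert caller == owner or (owner, caller) in ledger.delegates
    return True
```

Real‑world equivalents run the same idea against the deployed bytecode with handler‑based fuzzing or formal rules; the value is in stating the invariant over sequences, so temporal bugs like pattern (b) have a chance of being caught.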
Can we isolate smart contract risk using vaults?
Yes, if the vault enforces the transaction model, not an allowlist.
At OKcontract, we are building Chainwall Protocol: heavily guarded vaults that sit in front of hot protocol interactions. The core is a unique, 100% onchain transaction‑verification model:
- Every allowed flow lives in an explicit catalog: “addLiquidity on Balancer pool X with token set Y, max slippage Z, from addresses A, B, C”.
- The guard inspects `to`, `value`, calldata shape, decoded params, and the expected event set, and can add any post‑state verification. If the actual transaction (not simulated — this matters) deviates from the model — wrong pool, different function selector, unexpected token movements — the vault rejects the call.
- Catalog entries are easy to approve; out‑of‑catalog transactions trigger a slow path: higher signer thresholds, out‑of‑band review, and a timelock.
Why not just allowlist contract addresses or method IDs? Because that would have failed here. A blanket “Vault.manageUserBalance is allowed” rule would have green‑lit the malicious shape. A model that says “only deposit or transfer internal balance between self‑owned accounts; never WITHDRAW_INTERNAL to an arbitrary recipient” would have blocked it.
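The difference can be sketched in a few lines of Python. The addresses, selector, and catalog entry below are placeholders invented for the example, not Chainwall's real API: the point is that a `(to, selector)` allowlist passes both transactions, while a model over the decoded ops rejects the withdrawal shape.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class TxModel:
    to: str                     # target contract (placeholder value below)
    selector: str               # function selector (placeholder value below)
    allowed_kinds: frozenset    # decoded op kinds the model permits
    allowed_recipients: frozenset

# One illustrative catalog entry: internal-balance ops on the Vault are fine,
# but only deposits/transfers, and only to a known treasury address.
CATALOG = [
    TxModel(
        to="0xVAULT",
        selector="0xSELECTOR",
        allowed_kinds=frozenset({"DEPOSIT_INTERNAL", "TRANSFER_INTERNAL"}),
        allowed_recipients=frozenset({"0xTREASURY"}),
    ),
]

def guard_check(tx_to, tx_selector, decoded_ops):
    """True only if the transaction matches a catalog entry exactly;
    anything else routes to the slow path (timelock, extra signers)."""
    for model in CATALOG:
        if tx_to == model.to and tx_selector == model.selector and all(
            op["kind"] in model.allowed_kinds
            and op["recipient"] in model.allowed_recipients
            for op in decoded_ops
        ):
            return True
    return False
```

A naive allowlist would stop at the `to`/`selector` comparison and approve a `WITHDRAW_INTERNAL` op to an arbitrary recipient; the decoded‑op checks are what block it.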
There are two ways to integrate Chainwall Protocol:
- On top of existing protocols: To secure interactions with the protocol itself, including compromised frontends or APIs, human errors, etc.
- At the protocol level directly: To allow an easily predefined set of operations (e.g. swap, add liquidity on Balancer with the usual parameters) while requiring a more complex route for anything else, including admin operations. This would have helped today.
Closing thought
Balancer v2's design has many strengths, and it deserves one more defense somewhere between the browser and the smart contracts. The market learned that lesson the hard way this year through attacks like Bybit's supply‑chain compromise; this week we learned it again on the smart‑contract side.

