Cardano mainnet incident: facts at a glance

Whenever something unexpected happens, the usual swirl of rumours and half-truths begins. Let’s ground everything in clear facts instead of noise.

Yes, this was a serious incident, but the network stayed online, the chain maintained its integrity, and the ecosystem responded with speed and professionalism.

To everyone helping share accurate information today: thank you. Just as our SPOs and engineers rallied, this is our moment to do the same.

What happened

  • A malformed delegation transaction exploited a dormant deserialization bug in certain recent node versions.
  • This created a temporary chain partition: a “poisoned” chain whose nodes accepted the malformed transaction and a “healthy” chain whose nodes rejected it (see the sketch after this list).
  • Block production on the healthy chain slowed but did not stop, and Cardano maintained its integrity.
  • The network converged back to a single healthy chain within 14.5 hours, after node operators, centralized exchanges and network contributors coordinated node upgrades.
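
To make the partition mechanics concrete, here is a toy sketch in Haskell (the language the Cardano node is written in). It is purely illustrative, with hypothetical names and no relation to the actual node code: two groups of nodes apply different validation rules to the same transaction, so each group extends a different chain.

```haskell
-- Toy model of a validation-rule partition (illustrative only).
data Tx = ValidTx | MalformedDelegation deriving (Eq, Show)

-- Buggy deserialization path: the malformed transaction slips through.
buggyNodeAccepts :: Tx -> Bool
buggyNodeAccepts _ = True

-- Healthy path (older or patched nodes): the malformed transaction is rejected.
healthyNodeAccepts :: Tx -> Bool
healthyNodeAccepts MalformedDelegation = False
healthyNodeAccepts _                   = True

main :: IO ()
main = do
  -- The same transaction is accepted by one group and rejected by the
  -- other, so each group builds on a different tip: a partition.
  putStrLn ("buggy node accepts malformed tx:   " ++ show (buggyNodeAccepts MalformedDelegation))
  putStrLn ("healthy node accepts malformed tx: " ++ show (healthyNodeAccepts MalformedDelegation))
```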

Who responded

Teams from Input | Output Global, Cardano Foundation, EMURGO, Intersect, exchanges, security experts, technical community leaders and hundreds of SPOs worked together in a coordinated incident response.

Myths vs facts

Myth 1: “Cardano went down” or “the chain stopped”

  • Fact: Cardano never went offline. Blocks continued to be produced on both chains throughout the incident.
  • What did happen was a temporary partition and a slowdown in block production while the bug was being mitigated.

Myth 2: “Funds were stolen” or “Cardano was hacked at the protocol level”

  • Fact: No funds were stolen. The healthy chain maintained a consistent ledger state.
  • This was a consensus bug in node software triggered by a malformed transaction, not broken cryptography or a protocol-level failure.
  • Exchanges paused deposits and withdrawals as a precaution only, to avoid confusion while the chains converged.

Myth 3: “Nobody uses Cardano so nobody noticed”

  • Fact: The issue was detected quickly, discussed publicly and addressed in real time by SPOs, exchanges and teams across the ecosystem. Teams were alerted within minutes of detection, and a technical response was already in preparation because a similar transaction had appeared on the Preview testnet the previous day.
  • User-facing services such as wallets and explorers saw temporary glitches, and exchanges took protective action. People absolutely noticed, which is why the response was so rapid.

Myth 4: “An AI-using teenager brought the whole network down”

  • Fact: A single user crafted and submitted the malformed transaction after a similar transaction was observed on the Preview testnet the previous day.
  • The impact was limited by the protocol’s design and by the fact that a majority of nodes could be upgraded and coordinated onto the healthy chain.
  • Relevant agencies, including law enforcement such as the FBI, are being notified: this is being treated as a serious cyber incident, not a prank.

Myth 5: “This proves Cardano is centrally controlled”

  • Fact: No central kill switch was used and no single party “rolled back” the chain. SPOs (including exchanges) were responsible for the recovery.
  • The recovery depended on independent SPOs, exchanges and relay operators choosing to upgrade to node versions 10.5.2 and 10.5.3, increasing the weight of the healthy chain until it dominated under Ouroboros consensus (see the sketch after this list).
  • A formal disaster recovery process based on CIP-135 was prepared as a contingency in case nodes on the poisoned chain were unable to rejoin automatically, but the voluntary upgrades by distributed stake pool operators were enough to bring all nodes back onto the single healthy chain at around 22:17 UTC.
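
As a rough sketch of why voluntary upgrades were enough, assume the simple longest-chain preference at the heart of Ouroboros-style selection (real chain selection also checks validity and applies tie-breaking rules; the names and numbers below are invented for illustration): each node independently adopts whichever valid chain is longer, so once upgraded stake extends the healthy chain faster, every node converges on it without any central switch.

```haskell
-- Minimal longest-chain preference (illustrative only).
data Chain = Chain { chainName :: String, chainLength :: Int } deriving Show

-- A node adopts a candidate chain only if it is strictly longer than
-- the one it currently holds.
selectChain :: Chain -> Chain -> Chain
selectChain current candidate
  | chainLength candidate > chainLength current = candidate
  | otherwise                                   = current

main :: IO ()
main = do
  let poisoned = Chain "poisoned" 90   -- falls behind as stake upgrades away
      healthy  = Chain "healthy" 120   -- extended faster by upgraded majority stake
  print (selectChain poisoned healthy) -- every node independently picks the healthy chain
```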

What actually happened under the hood

  • A legacy bug in hash deserialization (introduced in 2022 but only exercised by newer code paths) allowed an oversized hash in a delegation transaction to pass validation on some node versions (see the sketch after this list).
  • Older nodes and tools rejected the malformed transaction and stayed on the healthy chain; newer nodes accepted it, creating the poisoned chain.
  • The incident was contained by releasing patched node versions (10.5.2 and 10.5.3), and by SPOs and exchanges upgrading quickly so that the healthy chain gained the majority of stake and density.
  • Intersect has set up a reconciliation working group to review data and ensure that any valid transactions affected by the partition are handled correctly.
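
For intuition about the bug class, here is a minimal sketch of strict versus lenient fixed-size hash decoding. It assumes the 28-byte Blake2b-224 digest size used for Cardano credential hashes; the function names are hypothetical and not taken from the node codebase.

```haskell
import qualified Data.ByteString as BS

expectedHashSize :: Int
expectedHashSize = 28  -- Blake2b-224 digest size used for Cardano credentials

-- Strict decoder: rejects anything that is not exactly 28 bytes.
strictDecodeHash :: BS.ByteString -> Either String BS.ByteString
strictDecodeHash bs
  | BS.length bs == expectedHashSize = Right bs
  | otherwise = Left ("expected " ++ show expectedHashSize
                      ++ "-byte hash, got " ++ show (BS.length bs) ++ " bytes")

-- Buggy path: no length check, so an oversized hash passes validation.
lenientDecodeHash :: BS.ByteString -> Either String BS.ByteString
lenientDecodeHash = Right

main :: IO ()
main = do
  let oversized = BS.replicate 32 0xAB  -- 32 bytes: too long for a credential hash
  print (strictDecodeHash oversized)    -- rejected with a length error
  print (lenientDecodeHash oversized)   -- accepted by the buggy path
```

The fix in the patched node versions corresponds to restoring the strict behaviour: malformed transactions fail validation instead of entering the chain.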

Disaster recovery planning and testing

  • Cardano has a documented disaster recovery framework based on CIP-135 that defines how to select and coordinate a canonical chain in extreme scenarios.
  • This framework has been exercised in advance through fire drills on the governance testnet, SanchoNet, involving real operators and realistic scenarios.
  • For this incident, the CIP-135 plan was prepared as a fallback, but the network recovered via the less intrusive path of coordinated upgrades and normal consensus.

Security, bug bounties and why this is treated as a serious incident

  • The Cardano ecosystem runs active bug bounty and responsible disclosure programs that encourage ethical reporting of vulnerabilities through defined channels.
  • Researchers who find issues are expected to use testnets, follow responsible disclosure processes and claim rewards instead of targeting mainnet users.
  • In this case, the individual replicated the issue that was found on a testnet, then chose to craft and submit the transaction on mainnet.
  • Because there were responsible and rewarded alternatives available, bypassing them and attacking mainnet is being treated as a potentially malicious act, which is why law enforcement has been engaged.

Impact on users and markets

  • User funds remained safe throughout on the healthy chain.
  • The majority of retail users did not need to take any action.
  • Some transactions that were submitted to the “poisoned” chain did not appear on the healthy chain, and may need to be resubmitted if they are still required.
  • Some exchanges temporarily paused services in coordination with the incident response, as a standard precautionary measure during a chain partition.
  • The ada token saw a short-term price dip in line with wider market conditions and headline-driven FUD, but there was no on-chain loss of value through theft or protocol compromise.

Lessons and next steps

  • A full after-action review is in progress and will be published once data reconciliation is complete.
  • Priorities include:
    • Improving test coverage for edge cases and legacy code
    • Accelerating and coordinating node upgrade cycles
    • Strengthening monitoring and communication so that incidents are detected, understood and acted on even faster

Intersect will be requesting a code quality review from the core developers, and governance discussions are also underway, covering security research and how to encourage testing on testnets rather than mainnet.