Celestia is a modular data availability network that allows L2s to post arbitrary data as blobs.
There are staked assets on the DA layer that can be slashed in case of a data withholding attack. A dishonest supermajority of validators must collude to finalize a block with missing or invalid data. The invalid block would be added to the chain but rejected by honest full nodes.
The DA layer uses data availability sampling (DAS) to protect against data withholding attacks. However, the block reconstruction protocol, which enables the minimum number of light nodes to collectively reconstruct the block, is still under development.
Celestia uses CometBFT, the canonical implementation of the Tendermint consensus protocol. The consensus protocol is fork-free by construction under an honest majority of stake assumption. Celestia achieves finality at each block, with an average time between blocks of 12 seconds.
In Celestia, blobs are user-submitted data that do not modify the blockchain state.
Each blob has two components: a binary object containing the raw data bytes, and the namespace identifying the specific application for which the blob data is intended.
All data posted in a Celestia block is divided into chunks of fixed size, called shares, and arranged in a k * k matrix of shares. Currently k = 64, for a total of 4096 original shares. The rows and columns of the share matrix are erasure-coded into a 2k * 2k extended matrix and committed to in Namespaced Merkle Trees (NMTs), a version of a standard Merkle tree using a namespaced hash function. In NMTs, every node in the tree includes the range of namespaces of all its child nodes, allowing applications to request and retrieve data for a specific namespace sub-tree while maintaining all functionalities (e.g., inclusion and range proofs) of a standard Merkle tree. Ultimately, a single data root (availableDataRoot) is computed as the root of a Merkle tree whose leaves are the row and column roots. This data root is included in the block header as the root of commitments to the erasure-coded data, so that individual shares in the matrix can be proven to belong to a single data root.
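The sketch below illustrates the two ideas in this paragraph: splitting raw bytes into fixed-size shares and the namespaced hash that makes NMT nodes carry their children's namespace range. It is a simplification under stated assumptions, not the Celestia implementation: real shares are 512 bytes with namespace and info prefixes, and namespaces are 29-byte values rather than small integers.

```typescript
import { createHash } from "crypto";

const SHARE_SIZE = 512; // assumed fixed share size

// Split raw blob bytes into fixed-size shares, zero-padding the last one.
function splitIntoShares(data: Buffer): Buffer[] {
  const shares: Buffer[] = [];
  for (let i = 0; i < data.length; i += SHARE_SIZE) {
    const share = Buffer.alloc(SHARE_SIZE);
    data.copy(share, 0, i, Math.min(i + SHARE_SIZE, data.length));
    shares.push(share);
  }
  return shares;
}

interface NmtNode {
  minNs: number; // smallest namespace in the subtree
  maxNs: number; // largest namespace in the subtree
  digest: Buffer;
}

// Leaf hash: bind a share to its (simplified, single-byte) namespace.
function leafHash(namespace: number, share: Buffer): NmtNode {
  const digest = createHash("sha256")
    .update(Buffer.from([0x00, namespace])) // 0x00 = leaf domain separator (assumed)
    .update(share)
    .digest();
  return { minNs: namespace, maxNs: namespace, digest };
}

// Inner hash: every node carries the namespace range of its children,
// which is what lets clients request and verify a whole namespace sub-tree.
function innerHash(left: NmtNode, right: NmtNode): NmtNode {
  const digest = createHash("sha256")
    .update(Buffer.from([0x01, left.minNs, left.maxNs, right.minNs, right.maxNs]))
    .update(left.digest)
    .update(right.digest)
    .digest();
  return {
    minNs: Math.min(left.minNs, right.minNs),
    maxNs: Math.max(left.maxNs, right.maxNs),
    digest,
  };
}
```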
To ensure data availability, Celestia light nodes perform sampling on the 2k * 2k extended data matrix. Each light node randomly selects a set of unique coordinates within the extended matrix and requests the corresponding data shares and Merkle proofs from full nodes. Currently, a Celestia light node must perform a minimum of 16 samples before declaring that a block is available. This sampling rate ensures that, if the minimum number of shares needed to make a block unrecoverable is withheld, a light node samples at least one unavailable share with more than 99% probability. For more details on the DAS probabilistic analysis, see the Fraud and Data Availability Proofs paper.
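A back-of-the-envelope check of the ~99% figure, assuming k = 64 (a 128 x 128 extended matrix) and that at least (k+1)^2 shares must be withheld to make a block unrecoverable, as in the Fraud and Data Availability Proofs analysis. Sampling is approximated as independent draws with replacement.

```typescript
const k = 64;
const extendedShares = (2 * k) ** 2;      // 16384 shares in the extended matrix
const minWithheld = (k + 1) ** 2;         // 4225 shares must be withheld
const unavailableFraction = minWithheld / extendedShares; // ~0.258

// P(at least one of `samples` random shares is unavailable)
function detectionProbability(samples: number): number {
  return 1 - (1 - unavailableFraction) ** samples;
}

console.log(detectionProbability(16)); // ~0.9915
```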
Light nodes performing data availability sampling must have the guarantee that the sampled data is erasure-coded correctly. In Celestia, light nodes can be notified of a maliciously encoded block through Bad Encoding Fraud Proofs (BEFPs). Full nodes receiving invalid erasure-coded data can generate a fraud proof and transmit it to all light and full nodes in the DA network. The proof is generated by a full node reconstructing the original data from the block data and checking whether the recomputed data root matches the data root in the block header; a mismatch proves that the block was badly encoded. Upon receiving and verifying a BEFP, all Celestia nodes should halt providing services (e.g., submitTx).
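A hedged sketch of the check behind a BEFP: a full node re-extends one row (or column) from its original shares and compares the recomputed root with the one committed to in the block header. The two helper functions are placeholders for illustration only, not Celestia APIs; a real implementation would use the actual Reed-Solomon code and NMT construction.

```typescript
// Placeholder stand-ins (assumptions), not real Celestia primitives.
function reedSolomonExtend(originalShares: Buffer[]): Buffer[] {
  throw new Error("placeholder: substitute a real Reed-Solomon implementation");
}
function nmtRoot(shares: Buffer[]): Buffer {
  throw new Error("placeholder: substitute the real NMT root computation");
}

function rowIsBadlyEncoded(originalRowShares: Buffer[], committedRowRoot: Buffer): boolean {
  // Re-extend the row from its original shares and recompute its root.
  const extendedRow = [...originalRowShares, ...reedSolomonExtend(originalRowShares)];
  const recomputedRoot = nmtRoot(extendedRow);
  // If the recomputed root differs from the committed one, the row was
  // maliciously encoded and a BEFP can be produced for it.
  return !recomputedRoot.equals(committedRowRoot);
}
```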
L2s can post data to Celestia by submitting blobs through a payForBlobs transaction. The transaction can include data as a single blob or multiple blobs, with the total maximum size determined by the maximum block size. The transaction fee is determined by the size of the data and the current gas price. Applications can then retrieve the data by querying the Celestia blockchain for the data root of the blob and the namespace of the application. The data can be reconstructed by querying the Celestia network for the shares of the data matrix and reconstructing the data using the erasure coding scheme.
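The sketch below shows how an application might post and later retrieve a blob through a local celestia-node light node's JSON-RPC API. The endpoint, the auth-token handling, and the exact parameter shapes of blob.Submit and blob.GetAll are assumptions here; consult the celestia-node API documentation for the authoritative schema.

```typescript
// Requires Node 18+ for the built-in fetch.
const RPC_URL = "http://localhost:26658"; // assumed default celestia-node endpoint
const AUTH_TOKEN = process.env.CELESTIA_NODE_AUTH_TOKEN;

async function rpc(method: string, params: unknown[]): Promise<any> {
  const res = await fetch(RPC_URL, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${AUTH_TOKEN}`,
    },
    body: JSON.stringify({ jsonrpc: "2.0", id: 1, method, params }),
  });
  const { result, error } = await res.json();
  if (error) throw new Error(error.message);
  return result;
}

async function main() {
  // Hypothetical base64-encoded 29-byte namespace and payload.
  const namespace = "BASE64_NAMESPACE_PLACEHOLDER";
  const data = Buffer.from("hello world").toString("base64");

  // Submit a single blob; the fee depends on its size and the gas price
  // (the options object shape is an assumption).
  const height = await rpc("blob.Submit", [[{ namespace, data }], { gas_price: 0.002 }]);

  // Retrieve every blob posted under that namespace at that height.
  const blobs = await rpc("blob.GetAll", [height, [namespace]]);
  console.log(height, blobs);
}

main().catch(console.error);
```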
Funds can be lost if a dishonest supermajority of Celestia validators finalizes an unavailable block and there are no light nodes on the network verifying data availability, or the light nodes fail to socially signal that the data is unavailable.
Funds can be lost if a dishonest supermajority of Celestia validators finalizes an unavailable block, and the light nodes on the network cannot collectively reconstruct the block.
The Blobstream bridge serves as a ZK light client, enabling the bridging of data availability commitments between Celestia and Ethereum.
The committee requires an honest minority (at least 1/3) of members (or of the network stake) to prevent the DA bridge from accepting an unavailable data commitment. Participation in the committee is permissionless, subject only to stake requirements and an honest majority of validators processing a new operator's request to join the active set.
There is no delay in the upgradeability of the bridge. Users have no time to exit the system before the bridge implementation update is completed.
The relayer role is permissioned, and the DA bridge does not have a Security Council or a governance mechanism to propose new relayers. In case of relayer failure, the DA bridge will halt and be unable to recover without the intervention of a centralized entity.
The Blobstream bridge is a data availability bridge that enables data availability commitments to be bridged between Celestia and Ethereum.
The Blobstream bridge is composed of three main components: the Blobstream contract, the Succinct Gateway contracts, and the Verifier contracts.
By default, Blobstream operates asynchronously, handling requests in a fulfillment-based manner. First, zero-knowledge proofs of Celestia block ranges are requested for proving. Requests can be submitted either off-chain through the Succinct API, or on-chain through the requestCall() method of the Succinct Gateway smart contract.
Alternatively, it is possible to run an SP1 Blobstream operator with local proving, allowing for self-generating the proofs.
Once a proving request is received, the off-chain prover generates the proof and relays it to the Blobstream contract. The Blobstream contract verifies the proof with the corresponding verifier contract and, if successful, stores the data commitment in storage.
Verifying a header range includes verifying Tendermint consensus (i.e., that the header is signed by 2/3 of the stake) and verifying the data commitment root.
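Once a commitment is stored, a consumer can check that a specific Celestia data root is covered by it. The sketch below queries the Blobstream contract's verifyAttestation view function using viem; the function signature and struct layouts are assumed from the published IDAOracle-style interface, and the address and proof values are placeholders, so treat this as illustrative rather than authoritative.

```typescript
import { createPublicClient, http, parseAbi } from "viem";
import { mainnet } from "viem/chains";

// Assumed interface, modeled on the Blobstream attestation verifier.
const blobstreamAbi = parseAbi([
  "struct DataRootTuple { uint256 height; bytes32 dataRoot; }",
  "struct BinaryMerkleProof { bytes32[] sideNodes; uint256 key; uint256 numLeaves; }",
  "function verifyAttestation(uint256 proofNonce, DataRootTuple tuple, BinaryMerkleProof proof) view returns (bool)",
]);

const client = createPublicClient({ chain: mainnet, transport: http() });

// Returns true if the given Celestia (height, dataRoot) tuple is included
// in the data commitment stored under `proofNonce`.
async function isDataRootAttested(
  blobstreamAddress: `0x${string}`, // placeholder: the deployed Blobstream address
  proofNonce: bigint,
  height: bigint,
  dataRoot: `0x${string}`,
  proof: { sideNodes: `0x${string}`[]; key: bigint; numLeaves: bigint },
): Promise<boolean> {
  return client.readContract({
    address: blobstreamAddress,
    abi: blobstreamAbi,
    functionName: "verifyAttestation",
    args: [proofNonce, { height, dataRoot }, proof],
  });
}
```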
By default, Blobstream on Ethereum is updated by the Succinct operator at a regular cadence of 4 hours.
For Blobstream on Arbitrum and Blobstream on Base, the update interval is 1 hour.
Funds can be lost if the DA bridge accepts an incorrect or malicious data commitment provided by 2/3 of Celestia validators.
Funds can be frozen if the permissioned relayers are unable to submit DA commitments to the Blobstream contract.
This is a Gnosis Safe with 4 / 6 threshold. This multisig is the admin of the Blobstream contract. It holds the power to change the contract state and upgrade the bridge.
Those are the participants of the BlobstreamMultisig.
Those are the participants of the SuccinctGatewaySP1Multisig.
The Blobstream guardians hold the power to freeze the bridge contract, update the SuccinctGateway contract and update the list of authorized relayers.
This is a Gnosis Safe with 4 / 6 threshold. This multisig is the admin of the Blobstream contract. It holds the power to change the contract state and upgrade the bridge.
Those are the participants of the BlobstreamMultisig.
Those are the participants of the SuccinctGatewaySP1Multisig.
This is a Gnosis Safe with 4 / 6 threshold. This multisig is the admin of the Blobstream contract. It holds the power to change the contract state and upgrade the bridge.
Those are the participants of the BlobstreamMultisig.
Those are the participants of the SuccinctGatewaySP1Multisig.
Users can interact with this contract to request proofs on-chain, emitting a RequestCall event for off-chain provers to consume.
Users can interact with this contract to request proofs on-chain, emitting a RequestCall event for off-chain provers to consume.
Users can interact with this contract to request proofs on-chain, emitting a RequestCall event for off-chain provers to consume.
The current deployment carries some associated risks:
Funds can be lost if the bridge contract or its dependencies receive a malicious code upgrade. There is no delay on code upgrades.
Funds can be frozen if the bridge contract is frozen by the Guardian (BlobstreamMultisig).