
Is the DA battle over? Deconstructing PeerDAS and how it helps Ethereum regain data sovereignty

2025/12/19 00:44
PANews

By mToken

Towards the end of 2025, the Ethereum community welcomed the Fusaka upgrade.

Looking back over the past year, while exploration of base-layer technology upgrades has gradually faded from the market's spotlight, many on-chain users have likely felt a significant change: Ethereum L2s are becoming cheaper.

Behind today's on-chain interactions, whether simple transfers or complex DeFi operations, gas fees often amount to a few cents or even a negligible sum. The Dencun upgrade and the blob mechanism certainly deserve credit; and now, with PeerDAS (Peer Data Availability Sampling) officially activated as the core feature of the Fusaka upgrade, Ethereum is decisively moving past the "full download" era of data validation.

We can say that blobs were only the beginning; PeerDAS is the next step.

I. What Is PeerDAS

To understand the revolutionary significance of PeerDAS, we have to go back to a key milestone on Ethereum's scaling roadmap: the Dencun upgrade of March 2024.

Dencun's centerpiece, EIP-4844, introduced a new transaction type that embeds large amounts of transaction data into blobs, allowing L2s to move away from expensive calldata storage to temporary blob storage.

This change directly cut Rollup costs to roughly one tenth of their previous level: L2 platforms could offer cheaper and faster transactions without compromising Ethereum's security and decentralization, giving many of us users our first taste of the "low-gas era".

However, useful as blobs are, the number of blobs each Ethereum block can carry has a hard ceiling (a target of 3 and a maximum of 6 under Dencun), for a very practical reason: physical bandwidth and disk space are limited.

In the traditional validation model, every validator on the network, whether a server run by a professional operator or an ordinary home computer, must still download and propagate the full blob data to confirm its availability.

This creates a dilemma:

  • Increase the number of blobs (to scale up): data volume surges, home nodes' bandwidth saturates and their disks fill up, forcing them offline; the network rapidly centralizes and eventually becomes a giant chain that can only run in data centers.
  • Cap the number of blobs (to stay decentralized): L2 capacity is locked in and cannot meet future surges in demand.

Frankly, blobs only took the first step, solving the problem of where to put the data. While data volumes are small, everything is fine; but if the number of Rollups keeps growing, each submitting data at high frequency, and blob capacity keeps expanding, then bandwidth and storage pressure on nodes becomes a new centralization risk.

If the traditional full-download model continues and the bandwidth pressure goes unresolved, Ethereum's scaling roadmap will run headlong into the wall of physical bandwidth. PeerDAS is the key to unlocking this dead end.

PeerDAS is, in essence, a brand-new data-validation architecture that breaks the iron law that validators must download everything in full, allowing blob capacity to expand beyond the current physical throughput ceiling (for example, jumping from 6 blobs per block to 48 or more).

II. Blobs Solve "Where to Put It"; PeerDAS Solves "How to Keep It"

As noted above, blobs took the first step of scaling by solving where to store the data (moving from expensive calldata to temporary blob space); PeerDAS is about how to keep that data available more efficiently.

The central question it addresses: how do we keep nodes' bandwidth requirements flat while data volume grows exponentially? The idea is straightforward, built on probability and distributed collaboration: "no one needs to hold the complete data, yet everyone can confirm with high probability that it exists."

This is reflected in the full name of PeerDAS: peer-to-peer data availability sampling.
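The probabilistic intuition can be made concrete. Assuming the data is erasure-extended so that an attacker must withhold more than half the chunks to make it unrecoverable, each uniform random sample a node draws then hits an available chunk with probability at most 0.5, and a handful of samples drives the chance of being fooled toward zero. A minimal sketch (the function name and parameters are illustrative, not spec constants):

```python
# Probability that a data-withholding attack escapes detection by one node.
# Assumes a 2x erasure extension: data is unrecoverable only if more than
# half the chunks are withheld, so each random sample succeeds against
# withheld data with probability at most 0.5.
def miss_probability(samples: int, available_fraction: float = 0.5) -> float:
    """Chance that all `samples` random queries succeed even though
    the blob data as a whole is unrecoverable."""
    return available_fraction ** samples

for k in (2, 4, 8, 16):
    print(f"{k:2d} samples -> fooled with probability {miss_probability(k):.6f}")
```

With 16 samples the attacker's chance of slipping past a single honest node is already below 0.002%, and every additional sampling node compounds the odds against them.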

The concept sounds obscure, but a popular metaphor captures the paradigm shift. Full validation in the past was like a library receiving a thousand-page Encyclopaedia Britannica (the blob data): to guard against loss, every librarian (node) had to keep a complete copy of the whole book as backup.

That means only those with resources to spare (bandwidth and disk) can be librarians; and as the encyclopedia (blob data) keeps expanding, ordinary participants are eventually squeezed out and decentralization erodes.

Now, under PeerDAS's sampling approach, techniques such as erasure coding are introduced: the book is torn into countless fragments and extended with mathematical encoding, so each librarian no longer holds the whole book but only a few randomly assigned pages.

Even so, no single person ever needs to produce the whole book: in theory, as long as 50% of the fragments across the entire network can be gathered (no matter whether they are page 10 or page 100), mathematical algorithms can reconstruct the complete book with 100% certainty.
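The "any 50% of the fragments suffices" property comes from erasure coding. A minimal sketch in Python using Lagrange interpolation over a small prime field; production PeerDAS uses Reed-Solomon codes over the BLS12-381 scalar field with KZG commitments, so the modulus, chunk sizes, and helper names here are purely illustrative:

```python
# Toy erasure code: treat k data values as evaluations of a degree-(k-1)
# polynomial over a prime field, extend to n points, and recover the
# originals from ANY k of the n shares via Lagrange interpolation.
P = 2**31 - 1  # an illustrative prime modulus (not the BLS field)

def interpolate(shares, x, p=P):
    """Evaluate at x the unique polynomial through the (xi, yi) shares."""
    total = 0
    for xi, yi in shares:
        num, den = 1, 1
        for xj, _ in shares:
            if xj != xi:
                num = num * (x - xj) % p
                den = den * (xi - xj) % p
        total = (total + yi * num * pow(den, -1, p)) % p
    return total

def extend(data, n, p=P):
    """Encode k data values as n shares (x, f(x)), where f(0..k-1) = data."""
    base = list(enumerate(data))
    return [(x, interpolate(base, x, p)) for x in range(n)]

data = [42, 7, 99, 1234]                 # k = 4 original chunks
shares = extend(data, 8)                 # n = 8: a 2x extension
subset = [shares[1], shares[4], shares[6], shares[7]]  # any 4 survive
recovered = [interpolate(subset, x) for x in range(4)]
print(recovered)  # -> [42, 7, 99, 1234]
```

Here half the shares were lost, including three of the four originals, yet the surviving half pins down the polynomial uniquely and the data comes back exactly.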

This is the magic of PeerDAS: the burden of downloading data is lifted from individual nodes and dispersed across a collaborative network of thousands of nodes.

Source: @Maaztwts

Looking at the raw numbers alone: before the Fusaka upgrade, the blob count was stuck in single digits (a target of 3, a maximum of 6). With PeerDAS live, this ceiling is torn apart, allowing the blob target to jump from 6 to 48 or more.

When a user initiates a transaction on Arbitrum or Optimism and the data is batched back to mainnet, the complete data package no longer needs to be broadcast across the whole network, which lets Ethereum scale without node costs growing linearly with data volume.
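A back-of-the-envelope calculation shows why sampling breaks the linear cost curve. The column count and per-node sample count below are illustrative assumptions, not exact Fusaka parameters:

```python
# Approximate bandwidth per node, in KiB per block.
# Figures are illustrative assumptions, not exact spec constants.
BLOB_KIB = 128        # size of one blob under EIP-4844
EXTENSION = 2         # erasure-coding extension factor
COLUMNS = 128         # columns the extended data is split into
SAMPLED = 8           # columns a sampling node downloads per block

def per_node_kib(blobs: int, full_download: bool) -> float:
    if full_download:                        # legacy model: fetch every blob
        return float(blobs * BLOB_KIB)
    extended = blobs * BLOB_KIB * EXTENSION  # sampling: fetch a thin slice
    return extended * SAMPLED / COLUMNS

print(per_node_kib(6, True))    # 6 blobs, full download  -> 768.0
print(per_node_kib(48, False))  # 48 blobs, with sampling -> 768.0
```

Under these assumptions, a node sampling 48 blobs per block spends no more bandwidth than it previously did fully downloading 6, which is exactly the headroom the blob-target jump relies on.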

Objectively speaking, blobs plus PeerDAS form a complete DA (data availability) solution, and from the roadmap's perspective this is also the key transition from today's Ethereum toward full Danksharding.

III. The New Normal of the Post-Fusaka Era

As is well known, two years ago third-party modular DA layers such as Celestia gained enormous market traction because Ethereum's fees were so high; their narrative logic rested on the premise that storing data on Ethereum was expensive in the first place.

With blobs and now PeerDAS, Ethereum's data availability is both cheap and extremely secure: the cost for L2s to publish data to L1 has been cut in half, and Ethereum has the largest validator set in the industry, offering stronger security than any third-party chain.

Objectively, this is a dimensionality-reduction strike against third-party DA solutions such as Celestia, marking Ethereum's recovery of sovereignty over data availability and squeezing their living space.

You might ask: what does any of this have to do with my wallet, my transfers, my DeFi?

The relationship is very direct. If PeerDAS lands successfully, L2 data costs can stay low for the long term, Rollups will not be forced to raise fees because of DA bottlenecks, on-chain applications can be designed for high-frequency interaction, and wallets and DApps will no longer have to compromise between features and costs.

In other words, the cheap L2s we can use today are the achievement of blobs; if they remain cheap tomorrow, the silent contribution of PeerDAS will be indispensable.

This is why, on Ethereum's scaling roadmap, PeerDAS, though low-profile, has always been regarded as an unavoidable stop. It is, in a sense, the best form of technology: "you enjoy the benefit without ever noticing it."

At the end of the day, PeerDAS proves that, through sophisticated mathematical design (such as data availability sampling), a blockchain can carry Web2-scale data volumes without excessively sacrificing the vision of decentralization.

So Ethereum's data highway has now been fully paved; which cars will drive on it is the question to be answered at the application layer.

Let's wait and see.
