How to distribute 255GB of HTML and still make it browsable.
This is sick. The Dat team is benchmarking Dat 2.0 using a dump of Wikipedia. One peer seeds the whole archive; the peer in the video selectively downloads only the files it needs. And pages render in a few seconds.
The total archive is 255GB of content with 5GB of internal metadata. This browsing session pulled down just 3MB of the metadata and 6MB of content to the local device. (Again, this benchmark shows the site being served fresh over the LAN from another device.)
The innovation here is the new hash-trie index, which Mathias Buus laid out in his recent talk at Data Terra Nemo.
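The payoff of a hash-trie index is that looking up one path touches only the handful of index blocks along a single trie branch, rather than a full file listing, which is why 3MB of metadata suffices against a 5GB index. Here's a minimal sketch of the idea in Python (an illustration, not Dat's actual implementation): keys are hashed, the hash is split into 4-bit chunks, and each chunk picks a branch, with each node counted as one "block fetch" to show how few reads a lookup costs.

```python
# Illustrative hash-trie sketch (NOT Dat's real code or wire format):
# each trie node is stored as a separate "block", standing in for a
# block a peer would fetch over the network on demand.
import hashlib

def hash_chunks(key: str):
    """Split the key's SHA-256 hash into 4-bit chunks (16-way branching)."""
    digest = hashlib.sha256(key.encode()).digest()
    chunks = []
    for byte in digest:
        chunks.append(byte >> 4)
        chunks.append(byte & 0x0F)
    return chunks

class HashTrie:
    def __init__(self):
        self.blocks = {0: {}}   # block id -> node: chunk -> leaf or child pointer
        self.next_id = 1

    def put(self, key, value):
        kc = hash_chunks(key)
        node_id, depth = 0, 0
        while True:
            node = self.blocks[node_id]
            slot = node.get(kc[depth])
            if slot is None:
                node[kc[depth]] = ("leaf", key, value)
                return
            if slot[0] == "leaf":
                _, okey, oval = slot
                if okey == key:                     # overwrite existing entry
                    node[kc[depth]] = ("leaf", key, value)
                    return
                # collision on this chunk: push the old leaf one level down
                child_id = self.next_id
                self.next_id += 1
                oc = hash_chunks(okey)
                self.blocks[child_id] = {oc[depth + 1]: ("leaf", okey, oval)}
                node[kc[depth]] = ("node", child_id)
                node_id, depth = child_id, depth + 1
            else:
                node_id, depth = slot[1], depth + 1

    def get(self, key):
        """Return (value, blocks_fetched): one block read per trie level."""
        kc = hash_chunks(key)
        node_id, fetched = 0, 0
        for chunk in kc:
            fetched += 1
            slot = self.blocks[node_id].get(chunk)
            if slot is None:
                return None, fetched
            if slot[0] == "leaf":
                return (slot[2] if slot[1] == key else None), fetched
            node_id = slot[1]
        return None, fetched
```

With 10,000 paths inserted, a lookup resolves in a handful of block reads (roughly log base 16 of the key count), which is the same shape of win that lets a peer browse a 255GB archive while fetching megabytes.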
To me, this is reassuring. Beaker has really made progress toward becoming a stable peer-to-peer web browser, and seeing them hustle on performance, working to improve the fundamentals, gives me great confidence. I can’t see Beaker becoming mainstream, but I think it could be tremendously useful to everyone else: artists, archivers, the underground—not in a ‘dark web’ sense, but in the sense of those who want to experiment and innovate outside of the main network.
Anyway—just want to encourage this work. This team is really pouring work into the protocol. Happy to give them some kudos.
In fact, maybe what could happen here is a kind of Web between the centralized one and the ‘dark’ one. Fully anonymized networks just have such a target on their backs.