Kicks Condor
05 Oct 2018

‘One day my window was darkened by the form of a young hunter. The man was wearing leather and carrying a rifle. After looking at me for a moment, he came to my door and opened it without knocking. He stood in the shadow of the door and stared at me. His eyes were milky blue and his reddish beard hardly concealed his skin. I immediately took him for a half-wit and was terrified. He did nothing: after gazing at what was in the room, he shut the door behind him and went away.’

— from “The House Plans” by Lydia Davis, p53 in The Collected Stories

This post accepts webmentions. Do you have the URL to your post?

You may also leave an anonymous comment. All comments are moderated.

The GeoCities Research Institute

A gateway to the Old Web and its sparkling, angelic imagery.

I try not to get too wrapped up in mere nostalgia here—I’m more interested in where the Web is going next than where it’s been. But, hell, then I fumble into a site like this one and I just get sucked up into the halcyon GIFs.

This site simply explores the full GeoCities torrent, reviewing and screenshotting and digging up history. The writers tackle the archive in thematic bites: sites that were last updated right after 9/11, hunts for construction cones, or the denizens of the ‘Pentagon’ neighborhood.

Their restoration of the Papercat is really cool. Click on it. Yeah, check that out. Now here’s something. Get your pics scanned and I’ll mail you back? Oh, crikey, Dave (HBboy). What a time to be alive.

But, beyond that, there is a network of other blogs and sites connected to this one:

Pixel art of a woman on a swing.

I was also happy to discover that the majority (all?) of the posts are done by Olia Lialina, who is one of the original net.artists—I admire her other work greatly! Ok, cool.


02 Oct 2018

Taming Outlandish TiddlyWikis

A prototype for the time being.

I’m sorry to be very ‘projecty’ today—I will get back to linking and surfing straightway. But, first, I need to share a prototype that I’ve been working on.

Our friend h0p3[1] has now filled his personal, public TiddlyWiki to the brim—a whopping 21 MEGAbyte file full of, oh, words. Phrases. Dark-triadic memetic, for instance. And I’m not eager for him to abandon this wiki to another system—and I’m not sure he can.

So, I’ve fashioned a doorway.

This is not a permanent mirror yet. Please don’t link to it.

Screenshot of the h0p3 archive page.

Yes, there is also an archive page. I took these from his GitHub repo, which appears to go all the way back to the beginning.

Ok, yes, so it does have one other feature: it works with the browser cache. This means that if you load snapshot #623 and then load #624, it will not reload the entire wiki all over again—just the changes. This is because they are both based on the same snapshot (which is #618, to be precise.) So—if you are reading over the course of a month, you should only load the snapshot once.

Snapshots are taken once the changes go beyond 2 MB—though this can be tuned, of course.
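That policy can be sketched in a few lines. This is a hypothetical `plan_snapshots` helper, not the actual code from the repository; it just illustrates the rule: accumulate diff sizes until they pass the threshold, then start a fresh full snapshot.

```python
THRESHOLD = 2 * 1024 * 1024  # 2 MB of accumulated diffs; tunable

def plan_snapshots(delta_sizes, threshold=THRESHOLD):
    """Given each version's diff size in bytes, return the indices of
    versions that become fresh full (base) snapshots."""
    bases = [0]          # the first version is always a full snapshot
    accumulated = 0
    for i, size in enumerate(delta_sizes[1:], start=1):
        accumulated += size
        if accumulated > threshold:
            bases.append(i)   # changes passed the threshold: re-snapshot
            accumulated = 0
    return bases
```

With daily diffs in the 20–50k range, a new base snapshot would only be taken every month or two.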

  • Total size of the raw archive: 6.2 gigs.
  • Size of my kicksnap’d archive: 736 megs.

Shrunk to 11% of its original size. This is done through the use of judicious diffs (or deltas). The code is in my TiddlyWiki-loader repository.
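To make the diff idea concrete, here is a minimal sketch using Python’s stdlib difflib. (The actual loader in the repo has its own delta format; this only illustrates the principle of storing one base snapshot plus a small per-version delta, and rebuilding any version on demand.)

```python
import difflib

def make_delta(base_lines, version_lines):
    # The ndiff delta encodes both sequences; it stays small when the
    # edit between base and version is small.
    return list(difflib.ndiff(base_lines, version_lines))

def restore_version(delta):
    # Sequence 2 of the delta is the newer text.
    return list(difflib.restore(delta, 2))

base = ["<html>", "tiddler one", "</html>"]
v624 = ["<html>", "tiddler one", "tiddler two", "</html>"]
delta = make_delta(base, v624)
assert restore_version(delta) == v624
assert list(difflib.restore(delta, 1)) == base  # the base is recoverable too
```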

A Few Lessons I Picked Up

I picked up this project last week and kind of got sucked into it. I tried a number of approaches—both in snapshotting the thing and in loading the HTML.

I ended up using an IFRAME. It was just so much faster to push a 21 MB string through the IFRAME’s srcdoc property than to use innerHTML or parseHTML or any of the other strategies.

Also: document.write (and document.open and document.close) seems immensely slow and unreliable. Perhaps I was doing it wrong? (You can look through the commit log on GitHub to find my old work.)

On the Snapshot Technique

I originally thought I’d settled on splitting the wiki up into ~200 pieces that would be updated with changes each time the wiki gets synchronized. I got a fair bit into the algorithm here (and, again, this can be seen in the commit log—the kicksplit.py script.)

But two hundred chunks of a 21 MB file still works out to about 100k per chunk. And usually a single day of edits would result in twenty chunks being updated. This meant a single snapshot would be two megs. In a few days, we’re up to eight megs.

Once I went back to diffs and saw that a single day usually only comprised 20-50k of changes (and that this stayed consistent over the entire life of h0p3’s wiki), I was convinced. The use of diffs also made it very simple to add an archives page.
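For the curious, the size comparison works out like this. A quick back-of-the-envelope check, using the post’s own rough figures:

```python
# Chunked approach: a 21 MB wiki split into ~200 pieces.
wiki_mb = 21
chunks = 200
chunk_kb = wiki_mb * 1024 / chunks        # roughly 107 KB per chunk

# A typical day of edits touches ~20 chunks.
daily_chunk_mb = 20 * chunk_kb / 1024     # comes out to about 2 MB

# Diff approach: a day's changes are only 20-50 KB.
daily_diff_kb = 50                        # the upper end

assert 100 < chunk_kb < 110
assert round(daily_chunk_mb, 1) == 2.1
# Diffs win by more than an order of magnitude on a typical day:
assert daily_chunk_mb * 1024 / daily_diff_kb > 40
```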

In addition, this will help with TiddlyWikis that are shared on the Dat network[2]. Right now, if you have a Dat with a TiddlyWiki in it, it will grow in size just like the 6 gig folder I talked about in the last box. If you use this script, you can be down to a reasonable size. (I also believe I can get this to work directly from TiddlyWiki from inside of Beaker.)

And, so, yeah, here is a dat link you can enjoy: dat://38c211…a3/

I think that’s all that I’ll discuss here; for further technical details (and how to actually use it), see the README. I just want to offer help to my friends out there who are doing this kind of work and encourage anyone else who might be worried that hosting a public TiddlyWiki could drain too much bandwidth.


  1. philosopher.life, dontchakno? I’m not going to type it in for ya. ↩︎

  2. The network used by the Beaker Browser, which is one of my tultywits. ↩︎


Nikita’s Collected Knowledge

Along with a discussion of personal encyclopedias.

There has been a small, barely discernible flurry of activity lately[1] around the idea of personal knowledge bases—in the same vicinity as personal wikis that I like to read. (I’ve been a fan of personal encyclopedias since discovering Samuel Johnson and, particularly, Thomas Browne, as a child—and am always on a search for the homes of these types of individuals in modernity.)

Nikita’s wiki is the most established of those I’ve seen so far, enhanced by the proximity of Nikita’s Learn Anything, which appears to be a kind of ‘awesome directory’[2] laid out in a hierarchical map.

Screenshot of learn-anything.xyz

Another project that came up was Ceasar Bautista’s Encyclopedia, which I installed to get a feel for. You add text files to this thing and it generates nice pages for them. However, it requires a bunch of supporting software, so most people are probably better served by TiddlyWiki. This encyclopedia’s main page is a simple search box—which would be a novel way of configuring a TiddlyWiki.

I view these kinds of personal directories as the connective tissue of the Web. They are pure linkage, connecting the valuable parts. And, in the sense that they curate and edit this material, they are themselves valuable and generous works. To be an industrious librarian, journalist or archivist is to enrich the species—to credit one’s sources and to simply pay attention to others.

I will also point you to the Meta Knowledge repo, which lists a number of similar sites out there. I am left wondering: where does this crowd congregate? Who can introduce me to them?


  1. Mostly centering around these two discussion threads:

    ↩︎
  2. Discussed at The Awesome Directories. ↩︎


Reply: Strategy: Minimize

Tim Swast

Neat idea. My strategy has been to minimize the amount of shared headers and footers. Your method seems much more flexible.

Took a look at your blog—it’s sweet! I will be sure to include it in my next href hunt. I enjoyed the article about Dat and am interested in finding others who write about practical uses of the ‘dweb’—unfortunately, many of the links on Tara Vancil’s directory are ‘broken’ (perhaps ‘vanished’ is more correct?) and I’m not sure how to discover more.

At any rate, good to meet you!


Reply: Owning Your Content

This is a great article! It follows all of the same things I put in my reply—which makes me nod my head, for sure—but it also goes into much more detail and thought, which I very much appreciate.

The war against the word ‘content’ is also rad. Yeah, keep that up.



Fake HTML Includes (for Beaker)

My personal strategy for handling HTML on the distributed Web.

So, HTML is a bit different on the distributed Web (the Dat network which the Beaker Browser uses, IPFS and so on) because your file history sticks around there. Normally on the Web, you upload your new website and it replaces the old one. With all of these other ‘webs’, it’s not that way—you add your new changes on top of the old site.

Things tend to pile up. You’re filling these networks with files. So, with a blog, for instance, there are these concerns:

  • I want common things like headers and footers to be in separate files—because they bloat every one of my pages.
  • I also want them in separate files so that when I change something in my header it doesn’t change EVERY PAGE in my site—pushing lots of changes onto the network.
  • The trend with Dat seems to be that websites are delivered more as applications—where you could potentially access the underlying posts in a format like JSON, rather than just having a raw HTML dump.

Ultimately, I might end up delivering a pure JavaScript site on the Dat network. It seems very efficient to do that actually—this site weighs in at 19 MB normally, but a pure JavaScript version should be around 7 MB (with 5 MB of that being images.)

My interim solution is to mimic HTML includes. My tags look like this:

<link rel="include" href="/includes/header.html">

The code to load these is this:

document.addEventListener('DOMContentLoaded', function() {
  let eles = document.querySelectorAll("link[rel='include']");
  for (let i = 0; i < eles.length; i++) {
    let ele = eles[i];
    let xhr = new XMLHttpRequest();
    xhr.onload = function() {
      // Parse the fetched HTML into a real fragment. (Scripts created
      // through createContextualFragment still execute when inserted,
      // unlike scripts assigned via innerHTML.)
      let frag = document.createRange().
        createContextualFragment(this.responseText);
      // Insert the fragment's children just before the <link> tag, one
      // at a time. When an external script is hit, pause and resume
      // from seq() after it loads, so scripts run in document order.
      let seq = function () {
        while (frag.children.length > 0) {
          let c = frag.children[0];
          if (c.tagName == "SCRIPT" && c.src) {
            c.onload = seq;
            c.onerror = seq;
          }
          ele.parentNode.insertBefore(c, ele);
          if (c.onload == seq) {
            break;
          }
        }
      };
      seq();
    };
    xhr.open('GET', ele.href);
    xhr.send();
  }
});

You can put this anywhere on the page you want—in the <head> tags, in a script that gets loaded. It will also load any scripts inside the HTML fragment that gets loaded.

This change saved me 4 MB immediately. But, in the long run, the savings are much greater because my whole site doesn’t rebuild when I add a single tag (which shows up in the ‘archives’ box on the left-hand side of this site.)

I would have used ‘HTML imports’—but they aren’t supported by Firefox and are a bit weird for this (because they don’t actually put the HTML into the page.)

I am grateful to anyone for improvements that can be made to this.


New technologies always seem to, at least initially, create more problems than they solve.


PLUNDER THE ARCHIVES

This page is also at kickssy42x7...onion and on hyper:// and ipns://.

MOVING ALONG LET'S SEE MY FAVORITE PLACES I NO LONGER LINK TO ANYTHING THATS VERY FAMOUS

glitchyowl, the future of 'people'.

jack & tals, hipster bait oracles.

maya.land, MAYA DOT LAND.

hypertext 2020 pals: h0p3 level 99 madman + ᛝ ᛝ ᛝ — lucid highly classified scribbles + consummate waifuist chameleon.

yesterweblings: sadness, snufkin, sprite, tonicfunk, siiiimon, shiloh.

surfpals: dang, robin sloan, marijn, nadia eghbal, elliott dot computer, laurel schwulst, subpixel.space (toby), things by j, gyford, also joe jenett (of linkport), brad enslen (of indieseek).

fond friends: jacky.wtf, fogknife, eli, tiv.today, j.greg, box vox, whimsy.space, caesar naples.

constantly: nathalie lawhead, 'web curios' AND waxy

indieweb: .xyz, c.rwr, boffosocko.

nostalgia: geocities.institute, bad cmd, ~jonbell.

true hackers: ccc.de, fffff.at, voja antonić, cnlohr, esoteric.codes.

chips: zeptobars, scargill, 41j.

neil c. "some..."

the world or cate le bon you pick.

all my other links are now at href.cool.