So, I actually have a long history of (mostly failed or unsatisfying) hypermedia-adjacent or Alan Kay-ish projects, starting way back in the sourceforge days.
First big one was called Sakura Hypermedia Desktop, a Tcl/Tk thing that you'd configure as your root window. You could rewrite parts of the code from a popup command line and persist those changes, and it had its own widgets. Ultimately, lack of proper multithreading and an inability to overlap widgets killed my plans for this.
(I could redo it today and make it way better (even sticking to Tk) just by writing it in Python & using shit I know how to do now. I was a much less competent programmer back then.
Also, Tcl is almost worse than Perl in terms of how easy it is to write tiny, powerful, & totally unmaintainable code, and I made full use of that back then, because I had never worked on a big project before.
The site is still up: http://sakura-os.sourceforge.net/
All the code probably still works as well as it ever did. But, I was writing this in high school.
(The Sakura package contains just the desktop thing. Pak is the hypermedia archive tool. Papyrus is a never-functional text editor and Enki is a never-functional Pak navigator based on a 3d space of overlapping Pak-format cards.)
(Sakura & Pak are probably the only things worth actually looking at here.)
At one time I had all sorts of experimental integration with (shitty) speech synthesis and speech recognition tools in this too, though they were never actually usable & I don’t think that code ever got checked in anywhere or showed up in tarballs.
At the time I was obsessed with hypermedia demos like Knowledge Navigator & Starfire, and with descriptions by Alan Kay & Ted Nelson, but I hadn't actually seen any of this stuff, because I had dialup and was using DOS & a c. 1998 release of Linux.)
The second big thing was a storage format called Pak, which stored bytestreams plus metadata, including links between parts of those bytestreams. Text was best-supported, but I had facilities to represent parts of videos (in time and space) as links, & wanted to create something like DVD-style menus. Lack of compatibility with third-party tools & heavy use of Python's pickle format limited it, but it fundamentally worked, and reliably, unlike the rest of Sakura.
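To make the idea concrete, here's a rough sketch (in Python, since that's what Pak was written in) of what that kind of format boils down to: stored bytestreams, plus link records whose endpoints are regions — a byte span for text, or a time/space box for video. All the names here are hypothetical illustrations, not the actual Pak code.

```python
# Sketch of a Pak-style archive: bytestreams plus link metadata.
# Link endpoints are regions: byte spans for text, time/space boxes for video.

from dataclasses import dataclass, field

@dataclass
class ByteSpan:
    stream_id: str
    start: int      # byte offset into the stream
    length: int

@dataclass
class VideoRegion:
    stream_id: str
    t_start: float  # seconds
    t_end: float
    box: tuple      # (x, y, w, h) in frame coordinates

@dataclass
class Link:
    source: object  # ByteSpan or VideoRegion
    target: object
    kind: str = "jump"  # e.g. a DVD-menu-style jump target

@dataclass
class Archive:
    streams: dict = field(default_factory=dict)  # stream_id -> bytes
    links: list = field(default_factory=list)

    def resolve(self, span: ByteSpan) -> bytes:
        """Pull the linked region's bytes out of its stream."""
        data = self.streams[span.stream_id]
        return data[span.start:span.start + span.length]
```

The point is that links live in the metadata layer and address *into* the content rather than being embedded in it — which is exactly what makes interop with third-party tools (which assume embedded markup) painful.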
Later, I discovered actual Xanadu documentation and wrote a whole bunch of attempted implementations of Xanadu-inspired stuff.
The first one was Project NAMCUB, which was an attempt to implement xu88/udanax-green style enfilades in Prolog. It was later ported to Erlang, which also didn’t work.
The second was Project XANA, an OS in D that had a zigzag-inspired UI and an xu88-inspired filesystem. It worked but was horribly slow, & is mostly notable for being the first or second OS written in D.
Later, I took some code from Project XANA to implement iX, an OS in C that had a UI much closer to original ZigZag & features much closer as well. This was usable and basically stable, but I had misunderstood how certain ZigZag facilities were supposed to work and why.
iX is the project that got me noticed by Ted Nelson and from 2011 to 2016 all my hypertext work was for Ted and kept secret :c
But, between finishing iX & meeting Ted I worked on a couple other ZZ implementations.
The best of these was Dimscape (which I helped with, but only a little). Dimscape supports playing videos & displaying images as well as regular text-based cells. There's a version of Dimscape that properly implements all ZZ keybindings and features, but it has Xanadu-owned code grafted on, so I can't actually post it; the old Dimscape is still great & worth using.
But, I also built a handheld thing with an Arduino that ran iX-but-worse.
Now, this brings us basically to the present. I’ve got an independent implementation of a proper ZigZag backend, a nice gopher client, a not-yet-working hypermedia viewer & editor using ipfs, an abandoned attempt at a hypermedia viewer/editor with a web-based front-end (meant as an alternative to Medium, but also as a trojan horse to introduce people to real transclusion), and I think I might have a curses-based zigzag frontend half-implemented somewhere.
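For anyone unfamiliar with what a ZigZag backend even is: the core structure is cells linked along named dimensions, with at most one neighbor per direction per dimension, so every dimension partitions the cells into linear "ranks." A minimal sketch (names hypothetical — this is not code from any of the projects above):

```python
# Minimal ZigZag-style structure: cells linked along named dimensions.
# Invariant: each cell has at most one '+' and one '-' neighbor per dimension.

class Cell:
    def __init__(self, content):
        self.content = content
        self.links = {}  # dim -> {'+': Cell, '-': Cell}

    def neighbor(self, dim, sign):
        return self.links.get(dim, {}).get(sign)

def connect(a, b, dim):
    """Link a -> b along dim in the positive direction."""
    if a.neighbor(dim, '+') is not None or b.neighbor(dim, '-') is not None:
        raise ValueError("cell already has a neighbor on that side of " + dim)
    a.links.setdefault(dim, {})['+'] = b
    b.links.setdefault(dim, {})['-'] = a

def rank(cell, dim):
    """List the contents of cell's whole rank along dim, negwards end first."""
    while cell.neighbor(dim, '-') is not None:
        cell = cell.neighbor(dim, '-')
    out = []
    while cell is not None:
        out.append(cell.content)
        cell = cell.neighbor(dim, '+')
    return out
```

The same cell can sit on ranks in many dimensions at once, which is where all the interesting ZigZag behavior comes from — and also where most of the subtle semantics I originally misunderstood in iX live.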
When I say that I’ve “achieved all my dreams and realized they’re crap” in part what I mean is that all the stuff I’m into now in terms of what I like to implement is stuff that I was into at age 12. Hypermedia, distributed computing, novel programming languages and UI systems, planner-based reasoning systems, and prose generation were all also my jam in middle school & high school, even though I didn’t have the skills to articulate that stuff.
(I’ve lost a couple interests since then. I no longer really obsess over robots, strong AI, or neural nets.)
And, I’ve implemented stuff that’s pretty close to what I wanted back then but found it less satisfying than I expected.
I guess that’s life.
So pro tip for all you young people:
Never have goals. If you achieve them you will realize they were crap all along, and if you don’t then you’ll be stressed out about failing.
I wrote a lot of stuff, but it was over a really enormous amount of time. Like, the last time I touched Sakura was apparently 2005, and I worked on that for years.
On the other hand, I’d like to believe that my experience demonstrates that the fruit is so low-hanging in this area that an incompetent kid can implement genuinely interesting things in his free time. Basically, the bar is low because very few people have actually worked on this seriously since the late 80s, and Alan Kay is 3 of them.
The biggest stumbling block I’ve run into is the difficulty of using third party libraries — particularly third-party UI libraries. Compatibility with existing standards is a problem for the same reason. There are a lot of assumptions that proper hypermedia violates.
The second biggest stumbling block is that trying to stay cross-platform & avoid large, complicated dependencies tends to mean sticking with old, limited libraries. (For instance, Tk is nice to use and provides most of what I want, but it doesn't support alpha or overlapping widgets, even on canvases.)
I still believe pretty strongly that the wonderful hypermedia future shown in the Starfire & Knowledge Navigator demos — the future that people like Alan Kay and Ted Nelson are still fighting for, and that Jef Raskin and Doug Engelbart didn’t live to see — is possible. There’s no technical reason we can’t have those systems — they were implemented on a small scale over and over. There are some new faces working on this stuff too, but progress feels much slower than it did in the 90s.
I’m starting to think, however, that existing incentive structures discourage people from making the kind of novel and interesting UI decisions that make it possible to build a bridge from novice to expert.
At the very least, those of us interested in real hypertext and real hypermedia (as opposed to the web) will need to work together to create utility libraries that are flexible enough to support all our disparate visions of possible futures but specific enough that using them isn’t a Turing tar-pit.
(Adapted from a thread on mastodon: https://niu.moe/@enkiv2/99270035409991776)