Spatial Hypertext As Contextual Tailorability
Part 3 of 5
Characters of the Net, Unite!
Futures Of Text Through The Looking Glass Of Tailorability
Way Back In 1992…
I snuck into the Pen Computing Conference in San Jose.
At the time, I was doing research at an early biotech startup in Burlingame, California, but I was fascinated by the emerging pen computing world. The $695 conference admission fee was beyond my means, so when I told a coworker about my desire to go, he challenged me with a story about Edwin Land.
My colleague told me that before co-founding Polaroid, Edwin Land would routinely sneak into the Chemistry laboratory at Columbia University and run science experiments.
Inspired by Edwin Land, I grabbed a pair of scissors and cut out a conference logo from a computer magazine and hastily laminated it into a fake badge. Then I jumped into my thrashed ’65 Dodge Dart convertible and puttered down highway 280 to San Jose.
Pen Computing 1992 was the first computer conference I attended.
I arrived mid-morning when the conference was in full swing. Gigantic video screens magnified guest speakers on stage. Loudspeakers evangelized Promised Lands for Pen Computing. Excited chatter by engineers, salespeople, and marketers buzzed like neon.
I heard Jeff Hawkins introduce his startup Palm Computing. Vern Raburn touted Slate’s AtHand pen-based spreadsheet. On the exhibit floor, I tried Momenta’s pentop running WindowBuilder — a Smalltalk framework my childhood buddy created (and later sold to Microsoft). Vendors rhapsodized about the next great operating system — Windows for Pen Computing, GRiD Systems, Go Corporation’s PenPoint OS, and my personal favorite, Pen/GEOS by GeoWorks. Steve Ballmer was on the conference agenda, but I missed him. Apple was notably absent — in stealth mode building the Newton. A year later they would announce the MessagePad, like an iPad mini but much thicker and heavier, with a stylus, short battery life, and a dull monochrome screen.
With so much excitement and investment, the computing world seemed to be evolving on the spot, a moment of punctuated equilibrium (a term for sudden biological evolution that Lester Thurow would embrace a few years later in The Future of Capitalism). The swirl, thrill, and anticipation around Pen Computing reminded me of a rock band on the cusp of stardom — playing in a night club thronging with early adopters, fanatics, and acolytes — like Nine Inch Nails at the I-Beam.
What struck me was how the pen could turn context into hypertext.
Instead of typing within rigid constraints — lines, paragraphs, and fixed columns up and down the page — using a pen as input unleashed the power of direct manipulation and liberated where data — digital ink or recognized handwritten text — was placed on the tablet screen. And most profoundly, proximity of scribbles in relation to each other created implied relationships — semantic links based on context!
With a pen, you could easily write ink anywhere on the screen and bundle together notes and doodles as you would with paper. This spatial arrangement of digital ink conveyed informal hints about meaning. As scribbles appeared near each other, contextual groups emerged. Scribbles with similar style and color could also form contextual links. Ink, text, and photos could all be grouped and sorted on an infinite, zoomable canvas based on proximity and common attributes.
I did not know this whole idea had a name — it was called Spatial Hypertext.
Soon after the conference, I headed back to the Stanford Computer Science library and read about Aquanet (1991) and later VIKI (1994). In contrast to document-centered approaches of earlier hypertext systems like Notecards (1984), these spatial hypertext systems emphasized map-centered user interfaces. They empowered users to tailor contextual relationships by dragging nodes near each other. They leveraged spatial memory — our ability to remember associations based on where we put objects — and automatically recognized collections based on layout and visual appearance of objects. Spatial hypertext introduced an expressive authoring style where users could spontaneously drag symbols around and create informal relationships that could be modified with a simple nudge.
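The core mechanism those systems shared, collections inferred from layout rather than explicit links, can be sketched in a few lines of Python. This is a hypothetical illustration, not code from Aquanet or VIKI; the names (`Node`, `implicit_groups`) and the 50-pixel threshold are invented for the example:

```python
from dataclasses import dataclass

@dataclass
class Node:
    label: str
    x: float
    y: float

def near(a, b, threshold=50.0):
    """Two nodes are implicitly 'related' if they sit close together."""
    return (a.x - b.x) ** 2 + (a.y - b.y) ** 2 <= threshold ** 2

def implicit_groups(nodes, threshold=50.0):
    """Infer collections from spatial proximity alone: any chain of
    nearby nodes forms one implicit group, no explicit links needed."""
    groups = []
    for node in nodes:
        touching = [g for g in groups
                    if any(near(node, m, threshold) for m in g)]
        merged = [node]
        for g in touching:  # merge every group the new node touches
            merged.extend(g)
            groups.remove(g)
        groups.append(merged)
    return groups

notes = [Node("milk", 10, 10), Node("eggs", 40, 20),
         Node("rent", 300, 300), Node("utilities", 330, 320)]
for group in implicit_groups(notes):
    print(sorted(n.label for n in group))
```

Note the design choice: grouping is transitive, so nudging one note toward another can merge two whole clusters, which mirrors how a simple drag reshapes relationships in spatial hypertext.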
NoteCards co-investigator Halasz worked on Aquanet with fellow Xerox PARC spatial hypertext pioneers. In 1999, Catherine Marshall and Frank Shipman — apparently still teaming up today, studying social networks — wrote a tidy overview of the field, “Spatial Hypertext: An Alternative to Navigational and Semantic Links.”
Marshall and Shipman chronicled the emergence of spatial hypertext and key implications and requirements of this evolution.
“The use of these map-based hypertext systems to author new information spaces uncovered an interesting phenomenon. Users avoided the explicit linking mechanisms in favor of the more implicit expression of relationships through spatial proximity and visual attributes…
…Specifically, there was a need to support the expression of the implicit and transient relationships that develop between nodes. With that requirement came the concomitant need to support manipulation, the movement of nodes and structures of nodes within the information space.”
The ability to create indirect relationships through direct input devices had been possible decades before — beginning with Ivan Sutherland’s Sketchpad (1963) and GRAIL (1966) — but in 1992 word processing programs did not think outside the text edit box nor did they support free flow composition with contextual hyperlinks.
The Pen Inkwell Dries Up
Dipping into Pen Computing 1992 for a few hours didn’t teach me how to manufacture low-cost polarizers and revolutionize the photography industry and beyond.
But that afternoon changed my career path.
Inspired by the conference and research projects like Aquanet, I quit my biotech job to become a programmer. I wanted to build spatial hypertext apps for pen computing! I immediately learned C++ and hung out regularly at Cubberley Community Center in Palo Alto, the mecca for Bay Area Windows programmers. In the early 90s, the Windows SIG (Special Interest Group) met weekly at Cubberley, where programmers and industry guests showed off their latest hacks, products, and development tools. I remember a long presentation about Borland’s Turbo Pascal IDE. A young Kraig Brockschmidt evangelized Microsoft’s early component programming model and Object Linking and Embedding (OLE). Steven Sinofsky, then lead for Microsoft Foundation Classes (MFC), killed it one night talking about MFC and passing out WADGs (Windows Application Development Guidelines) as “prizes” for the most wayward UX schemes and questions. Rapping over pizza and beer afterward was tradition. Sinofsky joked — pepperoni in cheek — that when his car battery died, he just bought a new car. :-)
Life was great for Microsoft during these Windows 3.x days.
But like Pen Computing, it wouldn’t last.
Within eight months of the Pen Computing 1992 conference in February, investment and enthusiasm had evanesced. Pen Computing disappeared as quickly as it came. No killer apps transformed the industry since there was no industry left to transform.
Apple launched Newton a year later, but unlike iPad, it went out with a whimper. One of my few memories of the MessagePad is how Bob Dylan took Apple to court for trademark infringement over Newton’s dynamic language, Dylan. They eventually settled out of court. The absurdity was that Bob Dylan had embroidered the truth about his own background and borrowed that very name himself — whether lifted from Sheriff “Dillon” of Gunsmoke or the Welsh poet Dylan Thomas. After playing that gig, Robert Zimmerman of Duluth seemed less authentic to me.
Fifteen years would go by, Blowin’ in the wind, before a new generation of mobile devices would enable futures of text based on spatial hypertext.
A Future of Text is Hypertags
“The world is like a fertile field that’s waiting to be harvested. The seeds have been planted, and what I do is go out and help plant more seeds and harvest them.”
— Edwin Land
In 2007, Apple announced the iPhone. Steve Jobs famously renounced the stylus in favor of a more natural pointing device, the finger. Despite not having a pen, the iPhone instantly reinvented pen computing as a mobile phone. iPhone provided Multi-Touch input, a 320×480 display with 18-bit color, WiFi, Bluetooth, and a rear-facing camera.
Nine years later, iPhones, iPads, and wearable devices from many vendors are far more powerful than that original iPhone.
Mobile devices now provide an array of input sensors and output features — 3D-Touch, Retina displays, GPS, BLE, magnetometers, GPUs, amazing audio, front- and rear-facing high-definition cameras, accelerometers, haptic feedback, and even a Pencil!
With the ubiquity and power of today’s mobile devices, spatial hypertext can augment where, when, and how hypertext can be created and triggered.
In particular, spatial hypertext can include geo-spatial locations. By virtually “tagging” real world objects and locations as hypertext, we can now create hypertags.
Hypertags are like hashtags that can be slapped virtually anywhere — on your phone, subway stops and coffee shops, concert halls and campus walls, city parks and landmarks.
When slapped on a wearable device like a digital watch, hypertags act like a hashtag you can wear and broadcast to people around you: “I care about #ThisIdeaCausePersonGroupCallToAction”.
When slapped on locations, hypertags are digital graffiti stickers, “Boycott this place because of XYZ”.
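As a rough sketch of what a hypertag might look like under the hood, the Python below pairs a hashtag-style message with a geofence. The class name, fields, and 30-meter radius are all invented for illustration; only the haversine great-circle distance formula is standard:

```python
import math

class Hypertag:
    """A hashtag virtually 'slapped' onto a real-world location
    (hypothetical sketch, not an existing API)."""

    def __init__(self, tag, lat, lon, radius_m):
        self.tag = tag
        self.lat = lat
        self.lon = lon
        self.radius_m = radius_m

    def triggered_by(self, lat, lon):
        """Fire when a device wanders inside the tag's geofence.
        Uses the haversine formula with Earth radius ~6371 km."""
        r = 6371000.0
        p1, p2 = math.radians(self.lat), math.radians(lat)
        dp = math.radians(lat - self.lat)
        dl = math.radians(lon - self.lon)
        a = (math.sin(dp / 2) ** 2 +
             math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2)
        return 2 * r * math.asin(math.sqrt(a)) <= self.radius_m

# A digital graffiti sticker on a (hypothetical) downtown coffee shop:
tag = Hypertag("#BoycottXYZ", 45.5202, -122.6742, radius_m=30)
print(tag.triggered_by(45.5203, -122.6741))  # standing at the door
print(tag.triggered_by(45.5300, -122.6742))  # blocks away
```

A real system would layer identity, moderation, and discovery on top, but the essence is just this: a message, an anchor in the world, and a trigger radius.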
In 2011, I was at Occupy Portland. A tent city sprouted up in the Plaza Blocks covering Lownsdale Square and Chapman Square. From a communications viewpoint, what was most apparent was how loud, crowded, and chaotic the encampment was. Radio and phone calls were hard to hear because of the noise. Lots of music and shouting. Organizers resorted to wearing colored tape armbands to identify membership in ad-hoc groups — Medical, Legal, Communications, Outreach, Sanitation, etc.
What was missing from this emergent organizational movement was a quick, reliable way to visually identify groups.
In the future, I think hypertag solutions might help organize real world events, encourage serendipity, and improve workflow…
A Future of Text is Holotext
I doubt many readers were alive in 1977 when Star Wars came out.
My friends and I waited in a serpentine line that wrapped all the way around the Westgate Theater in Beaverton, Oregon. It is the longest line I have waited in for any event in my life. But it was worth it. Many childhood memories are from Star Wars — the epic soundtrack, the Millennium Falcon, Obi-Wan, Darth Vader and the voice of James Earl Jones…
And Princess Leia’s hologram message!
I never imagined exchanging holograms would become an everyday reality! With improvements to spatial hypertext tools and Virtual/Augmented Reality, we are progressing toward this day!
Spatial hypertext lets us directly manipulate objects and arrange these objects anywhere in a contextual environment — like a 3-D virtual world.
As we continue to create, arrange, carry, and nudge virtual objects around, new ideas for launching hypertext emerge. For example, hyperlinks might be triggered whenever moving objects intersect — e.g., as drawn ink strokes or typed text characters overlap or, as with hypertags, when we walk near each other or enter geospatial locations.
By tracking movement of objects, spatial hypertext exploits the dimension of time. Movement of objects is familiar to us in games and cartoons. This is animation.
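One way such intersection triggers could work is a per-tick overlap test during animation, sketched below in Python. Every name here is hypothetical, and a real system would use actual stroke geometry rather than bounding boxes:

```python
class InkObject:
    """A moving canvas object with an axis-aligned bounding box
    (hypothetical sketch of intersection-triggered hyperlinks)."""

    def __init__(self, name, x, y, w, h, vx=0.0, vy=0.0):
        self.name, self.x, self.y, self.w, self.h = name, x, y, w, h
        self.vx, self.vy = vx, vy

    def step(self, dt=1.0):
        # Advance position by velocity each animation tick.
        self.x += self.vx * dt
        self.y += self.vy * dt

    def overlaps(self, other):
        # Standard axis-aligned bounding-box intersection test.
        return (self.x < other.x + other.w and other.x < self.x + self.w and
                self.y < other.y + other.h and other.y < self.y + self.h)

def animate(a, b, ticks, on_collide):
    """Advance both objects; trigger the hyperlink on first contact."""
    for t in range(ticks):
        a.step()
        b.step()
        if a.overlaps(b):
            on_collide(t, a, b)
            return t
    return None

stroke = InkObject("stroke", 0, 0, 10, 10, vx=5)  # ink drifting right
word = InkObject("word", 100, 0, 10, 10)          # stationary text
hit = animate(stroke, word, ticks=40,
              on_collide=lambda t, a, b: print(f"link fired at tick {t}"))
```

The same loop generalizes to hypertags: swap the bounding-box test for a geofence check, and the “collision” becomes two people walking near each other.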
Project Draco by Autodesk is one of several new tools that enable sophisticated animation authoring…
Draco leaps beyond simple GIF creation and cinemagrams. Draco is like Ted Nelson’s hypergrams but without user interaction triggers…
Hypergrams — as well as hypermaps and branching movies — envisioned by Nelson over 50 years ago, are interactive pictures that respond to direct manipulation. Instead of tapping an underlined word to trigger hypertext, specific locations and parts of diagrams become live regions for hypermedia. Hypergrams also link drawing context together so manipulating one area of a drawing directly changes another area of the drawing.
Last week’s #GDC16 (Game Developers Conference 2016) and the adjunct #VRDC (Virtual Reality Developers Conference) in SF were overflowing with Virtual and Augmented Reality.
This was in stark contrast to GDC way back in 2004. Back then the video game industry was only $10B and there were no signs of VR or AR.
All last week the media hyped VR/AR immersion, experiences, and storytelling as well as new products, services, stories, prognostication, investment, and money.
With today’s VR/AR hardware, 2-D hypergrams can inflate into 3-D as holograms.
Exchanging interactive holograms — holotext — is the next frontier in rich messaging.
Messaging is evolving from text to emoticons to emojis, stickers, GIFs, digital ink, and videos to a new world of Princess Leia-like interactive, moving, colliding, talking, volumetric holograms. In addition to exchanging holograms with your friends, transporting yourself — your holosapien (avatar) — to virtual locations for shared meet-ups will become common.
Google’s Tilt Brush already lets users extrude and share holograms in a VR space …
“Tilt Brush is amazing. It’s magical. It’s a bunch of stuff that sounds like hyperbole when you write about it.” — www.polygon.com
At a recent TED conference, Metavision demoed hologram messaging …
And tools that let you create user-generated worlds are popping up, like Oculus Toybox and in-VR game editors from Unreal, Unity, and Minecraft …
By end of year, VR designer and architectural modeling tools will surely be available.
A Future of Text is the Holoverse
After growing up without serious competition for over a quarter century, the World Wide Web finally has contenders on the horizon — the many universes within Virtual and Augmented Reality.
Back in 1992, in Snow Crash, Neal Stephenson prophetically introduced the Metaverse as interconnected virtual worlds where real people interact as avatars. Conversely, AR enables avatars and holograms to interact in the real world among real people — a complement to the Metaverse — the Superverse. In totality, the Web, immersive VR, and hyperreal AR experiences create a universe for interactive holograms — the Holoverse.
Web + Metaverse + Superverse = Holoverse
The DNA of the Web is interconnectivity. Connecting disparate realms together is exactly what’s missing from today’s VR/AR. Metaverse projects come and go as new ones like Internet 2021 sprout up.
Driven ultimately by joint business synergy, in the near future, the Web, VR, and AR industries will converge with frameworks, protocols, and services that link holographic worlds together.
Web incumbents — Google, Facebook, Microsoft — with billion-dollar VR/AR budgets are fighting to own the HMDs, tools, and apps of the Holoverse. Likewise, Visual Arts, Sports, and Entertainment producers — from Disney, Lucasfilm, ILM, Pixar, and UMG to the NFL, NBA, etc. — are jockeying to become the Netflix and YouTube of VR/AR, delivering content for this new form of immersive television — Telemersion.
The Holoverse completely shakes up the balance of power in the developer landscape...
With VR and AR, the Game industry — players like Unity, Epic, Valve, Twitch, Sony — displaces Mobile as the center of the developer universe.
Game vendors are skilled, seasoned, and wealthy competitors in the quest for hologemony. Games currently provide the apps, tools, engines, developers, artists, and SDKs that power today’s VR/AR. With the spread of the Holoverse beyond games and entertainment, to the enterprise, game programmers are also poised to be the next unicorn business app developers.
Apple appears to be playing catch-up in the Holoverse, particularly in developer tools and mindshare. Today’s developers amped about VR/AR are downloading Unity or Unreal and programming in C#/OpenGL, not Xcode/Swift/Metal. Like a Greek tragedy, maybe Apple’s hubris brought this upon itself. Apple has been hostile toward its own (few remaining) SpriteKit game developers.
In November, Apple betrayed every SpriteKit developer when they released iOS 9 and deliberately broke all SpriteKit apps.
Apple followed up this act by blithely ignoring all SpriteKit developer outrage. No apologies for throwing every SpriteKit programmer and product under the bus. A case of moral hazard for sure. Perhaps because of wide, religious, end-user fealty to the luxury status of brand Apple, Cupertino can afford to eat their fledgling SpriteKit game developer community with impunity. Apple — who ran the most famous marketing campaign in 1984 against the specter of tyranny — has ironically become Orwellian.
Make no mistake about it, the Holoverse is about money and control. Time will tell what Apple’s next move will be. Maybe they will reveal secret VR/AR products and tools (via Metaio) and, optimistically, make amends for jilting their SpriteKit developer community.
Spatial hypertext is about context creation, connecting the interstices of space and time. Rod Serling captured those endless possibilities so poetically in 1959.
Tools to create new forms of context are rapidly evolving.
Tailorability in the Holoverse is a continuum — artistic extrusions and expressions that capture experiences like virtual sculptures. This is followed by user-generated excursions to rooms and realms within new mediums of VR/AR. Finally, movement, animation, and telepresence — tailoring dynamic and interactive holograms as messages and transporting ourselves to any location in holo res.
Developing spatial hypertext applications for the Holoverse demands new skills, tools, and best practices for maximizing product safety. The bar for full-stack developer has moved up several notches — you need command of graphics processing, linear algebra, game engines, and physics — like dynamic 2-D and 3-D motion, optics, electricity and magnetism. The developer trek traverses new equations, concepts, and vocabulary — terrain that’s maybe forgotten (long ago in college) or completely foreign to Web and backend coders — Laplacian, convolution, proprioception, Maxwell’s equations, quaternions, affine transformations, active contours, texture caches…
For marketers, journalists, and pundits, there’s a new VR/AR jargon to be developed for products, behaviors, locations, and entities, e.g…
“vrowsers” (browsers) that let us “vroam” (surf) to “vrooms” (pages) within “vrealms” (sites) linked by “vroads” (hypertext) addressed by “vrls” (urls).
Homeowners will dedicate rooms for their HTC Vives creating livingvrooms, gamevrooms, and playvrooms. Businesses will build workvrooms and meetingvrooms for collaboration with internal teams, external customers, and partners. Schools, universities, and MOOCs will invest in schoolvrooms and classvrooms where students, teachers, and parents learn together in VR. #jovrnalism will edit, publish, and distribute immersive stories in newsvrooms. Rock bands will jam, practice, and perform in musicvrooms. People, er, their avatars, will frequent chatvrooms, barvrooms, and dancevrooms. And every vroom and vrealm in the Holoverse will be linked and indexed for search and discovery by The Next Google.
After decades of technology innovation, spatial hypertext creates revolutionary tools for poiesis. The Holoverse is emerging from the pages of science fiction into VR/AR reality.
Jump-in and bring your most fantastic ideas, stories, and dreams to life!
“The best way to predict the future is to invent it.”
— Alan Kay
This series continues in Part 4 of 5 where I’ll explore how applications derived from IRC lead to futures of text based on collaborative tailorability.
A Public Cellyzen