#3: AR Roundtable

Jon Bell
Sep 13, 2018

I recently discovered a bunch of old projects I had littered throughout the internet and completely forgotten about. Finding them all at once helped me to see some pretty clear themes in the sort of work I like to do.

I love trying new things. For example, in 2014 I did a game called The Long Talk, based on an old Stephen King novel called The Long Walk. Each day I’d send an email to a big group of people that asked a question or posed a challenge. The people that responded got to keep competing. So each day the list got smaller until eventually a winner was declared. I learned a lot running the game, and I’m considering doing it again. (Let me know if you’re interested!)

I like to be quick and intuitive rather than overthinking and perfecting. I found an old email newsletter where each edition was fun and sloppy early design thinking. For example I might say “Here’s how I’d design a bathtub” and then write 500 or 1,000 words as I worked through the problem. Then I’d hit send before I had a chance to second-guess it. I return to this format again and again, especially to break out of writer’s block.

I don’t like to specialise. I found a different abandoned newsletter where every edition was a handful of brief descriptions of people. That was the whole premise. But if you read the newsletter long enough, you’d see the same characters emerge again, you’d see their lives change, and you could guess what was happening between the lines. I loved this format because it wasn’t expected to be any one particular thing. I could invent or write anything.

I love reading and receiving interesting email. I love how open-ended writing can be, and I love reaching someone on the other end of the line and hearing from them. I put a lot of time into email discussions. I’ve probably found more value in them than almost any other thing in my digital life. I’m on Dave Pell’s team. He says Newsletters Are Immortal, and I agree with him.

When I try to think about the future, I start with what’s good. What do people enjoy, what resonates with them, what needs are going unmet? And the word “meaning” keeps emerging. There are so many things I can do in my life, but I keep returning to silly improvised games, or being curious about some side project, or writing long emails with friends. It seems like the more likely something is to turn a profit, the harder it is to find real human value in it.

For this issue of Near Future Field Notes, I blended old and new. I roped designers Mark Vitazko and Lukas Mathis into an email thread and asked them to think about the future. Right off the bat, Lukas proposed a big idea: what if augmented reality is the computing endgame?

Email Roundtable: Augmented Reality

Hypothesis: AR is the personal computing endgame.

People say that we tend to overestimate short-term progress, and underestimate long-term progress. I don’t think this is generally true. Rather, I think we tend to overestimate how quickly technology evolves (we still don’t have truly bendable screens, even though people have been talking about foldable phones for two decades), and underestimate how disruptive to society technological innovations are (Russia probably changed the outcome of the US elections because there’s a social network that turns stupidity into a viral infection that spreads quickly and without control).

Having said that, I think this is true for AR: we overestimate the short-term impact, and underestimate the long-term impact.

Five years ago, people had pie-in-the-sky ideas about what VR and AR would do. 2013 was the year AR was supposed to go mainstream with Google Glass. It didn’t, but we all had to get used to the term “glassholes”, and to smug blog posts about how smug all of these glassholes were (this oddly summarizes a large portion of today’s technology journalism, which has replaced the enthusiasm for technology from the 80s and 90s with an odd sort of passive-aggressive dystopian neo-luddite view that technology is only good if it comes from the one company you swore allegiance to).

2017, then, was supposed to be the year VR changed the face of videogames forever. It didn’t, but it did give us sweaty palms and shaky knees when we played Richie’s Plank Experience, so that’s something.

We went straight from thinking that these technologies would change everything to laughing at how dumb we were just a year ago.

That’s a mistake.

AR won’t change computing this year, or the next year. But it will get better. Resolution will improve, devices will get smaller, we’ll have more power and more battery life and better inside-out tracking, and one day, probably not that far in the future, we’ll have sunglasses that have the perceived resolution of a retina screen and cost as much as a laptop. And once we have that, we don’t need any other screens anymore. We don’t need laptops, TVs, watches, iPads, mobile phones, or any of these devices, because these screens, this information, will all be rendered dynamically into our world by our AR glasses.

IBM ruled the mainframe. Microsoft ruled the desktop PC. Google rules the mobile phone. One company will rule AR, and there won’t come another platform after that. AR is the last platform. This is the computing endgame. The company that wins this market, wins.

— Lukas

Google Glass and Hololens

I saw myself in several of your asides, and I wasn’t always on the flattering side of it. For example, I remember being confused by the Google Glass launch. Yes, skydiving is exciting. Yes, the super-idealistic marketing video was, in fact, fun to think about. But.

People were relentlessly negative on the internet even then, and I remember it only took my little slice of the internet about a day to make a video with ads crammed into every nook and cranny of the idealistic experience. And I never thought they’d do that, necessarily. There isn’t enough screen real estate for a traditional banner ad.

In fact, this was when the idea awakened in me that maybe you can litter a website or app with ads, but once you try to do it with a wearable of any kind, I suspect people are far less ok with it. So how do you make money off it? Data. Invasion of privacy, something Google had been roundly criticised for well before 2013.

So I found myself pretty down on Google Glass for the privacy questions/concerns, but the bigger thing was the interaction model. I tried to suss out how you’d action things. Tap? Long tap? Look in a certain direction? Surely it could be modelled out with boxes and arrows, and I was curious how a user would progress through them.

Like everyone, I spent time trying to figure out how the thing would work. The more I learned, the more I kept coming back to the interaction model question. I couldn’t figure out what a computer on my face could offer me. The best I could come up with was something we loved on Windows Phone. The concept of seeing a cool thing happen and saying “record that to video,” knowing the device could go back in time ten seconds and make sure to save the clip. That could be cool. But it would destroy battery life.

Around this time, I wrote the following email:

http://allthingsd.com/20130412/you-lookin-at-me-reflections-on-google-glass/

“What a strange article. He said very little with so much swagger.”

But I actually like the article now. He’s talking about what’s socially acceptable, he’s talking about privacy. I think I was looking to see if they were using a carousel design pattern — really specific tactical stuff — and he was thinking like a researcher about what it means in the context of other people. Not just “privacy” but bigger questions too.

I don’t mean to be disagreeable, but I remember exactly zero people saying 2013 was the year AR would go mainstream. But there were definitely people that thought Google Glass was really super cool.

My poor mom. She wrote me a while later when Google Glass was going to be sold to the public for a single day, and asked if I had one. By then I think I had tried one and immediately had a negative reaction to the interaction model. For all the talking about “do we really want to be poking glass for the rest of our lives?” I was annoyed to discover that Google Glass required a combination of tapping … the side of your glasses and scrolling … the side of your glasses. I wrote back a curt “no.” She asked if I wanted any. Which is odd, she’s not the kind of person to randomly send me tech gadgetry. Never happened before or since, but I said “Ok, sure.”

So I got them. I tried them for several days and really tried to give them a chance. But nope. Can’t understand how they ever made it on my face, let alone out of the plane. I just didn’t see it. It wasn’t just the privacy angle, which was huge. I just didn’t see what it was adding. Even with infinite battery life, even with apps, I just never thought the scenarios were worth much.

I know I sound like I’ve picked my corner (Apple) and I’m just going to hate on everything done by anyone else. And maybe that’s true. But Hololens actually excited me. First, I knew that Microsoft wasn’t selling it as a complete solution yet. Second, I got the sense it should be used in certain situations like work, not every situation the way Google Glass was marketed. Google Glass was saying “this is the new way of life” and I felt like Hololens was saying “we can help with some parts of your life.” That made it resonate more.

But I don’t want to make this a “which device is cooler” discussion. I want to talk about AR/VR being the endgame for personal computing.

First, I think there’s a tactile angle that AR/VR struggles with. I’m a huge believer in real buttons. You can rest your finger on a button (hover) and also press on it (mousedown). The fact that those are two separate actions is vital. In the real world, I don’t accidentally open a door by brushing against it. Things are weighty and respond differently to different interactions.

But if you put me in a full VR helmet and try to emulate a full kitchen, you have to make the floor move, and build some sort of shape-shifting system so that every time I reach for anything — egg, trash can, wall — everything reacts exactly right. That’s a tall order. It’ll happen one day, I guess. But it’s a while out.

I’m reading Dawn of the New Everything right now, by an early VR pioneer, and I’m really liking it. He talks a lot about this stuff. (http://www.jaronlanier.com/dawn/)

So what about AR glasses? Sure, we’ll get there. Things will get cheaper and better and lighter and so forth. I have no doubt of that. And there will be scenarios that are hard to even imagine now that are obvious and awesome when the glasses are here.

But even with glasses that are literally identical to the ones on my face right now, with infinite battery life, I still think about the interaction model. How do I control it? How do I write a book with it? How do I watch Netflix with it? I know the 2018 versions of these answers, but I don’t find them satisfying. I control them by eye tracking or tapping my glasses. I write a book by using a bluetooth keyboard and broadcasting on any surface I want. I watch Netflix either in the glasses themselves or using a big flat black wall in my house where a TV used to be.

I know I’m going to be shocked and confused and delighted by all sorts of things I can’t predict right now. But that’s where my head is. I suspect I’ll really enjoy AR, but that it will live side by side with a bunch of other things for a very long time.

— Jon

Three Quick Notes

When I mentioned people’s expectations for AR in 2013, I wasn’t thinking of cynical online tech commentators, I was thinking of things like this:

I don’t think people care about privacy all that much. Heck, I don’t even turn off location tracking on my Android phone, because I like the fact that I can go back to three years ago and see exactly over which pass we drove on that Sunday in late Summer.

How do you write a novel using an AR device? The same way you write one using an iPad, with a keyboard. AR doesn’t prevent you from using all of the data input devices you already use right now. AR may also allow for some new ways of data input (I love painting in Tilt Brush, for example, and eventually, these devices will know exactly where your hands and fingers are, so they can put buttons on real-world surfaces, or make user interfaces float before you, like VR games do), but in my opinion, AR’s main value is data output. You don’t need a laptop and an iPad and a phone and a watch if you have AR. All of these screens are now obsolete.

— Lukas

Enterprise, User Needs, Gazes

1. VR’s consumer future is uncertain but when it comes to enterprise, it’s already a huge success. Why is this?

I’ve never seen developer enthusiasm like I have in VR. It’s like they’ve discovered music and they’re trying to figure out what sounds good. Yet with anything new come unknowns: so many tools to build and best practices to establish, all in an area with a decidedly niche audience (and therefore a gamble for developers trying to turn a profit). As you said, the price will come down, the tech will improve, but even in the best case it will still be dwarfed by more profitable avenues like mobile gaming.

Switch over to the enterprise and it’s a whole different story. VR has been taking off in training scenarios and remote collaboration (eyeing the promise of reducing real-world travel) where companies are more than willing to foot the bill for devices AND development. Medicine, manufacturing, defense, construction, retail, any field where you want to be immersed in a potential situation and tested (in a perfectly quantified virtual environment), or work over distances.

Also, you mentioned Glass… Google Glass was a consumer failure but it’s had a kind of renaissance in the enterprise. Even in its current form (which is hardly AR, just a tiny head-mounted screen) it’s providing a substitute for holding a tablet full of documentation. This is crucial in scenarios where you need both your hands free to perform a task. To take it further, if you can give any worker access to your company’s full database of knowledge, it means that worker is more capable, more flexible. Or better yet, use a camera on the device to allow the user to ask for remote assistance (chatting with an expert located elsewhere)…

One last point on AR/VR in enterprise is that nobody minds looking like a dork if a) it makes their job easier/safer and b) they’re getting paid for it!

2. AR’s future is less about what the user sees and more about what the device sees. What does this mean for design?

There’s this core technology that AR uses called SLAM. Simultaneous Localization and Mapping. It’s the bit of software (and hardware) that helps a device map the world and figure out where in the world it is. It’s the same tech that’s behind self-driving cars and most recently VR (no need for external cameras).

Curiously, as companies are looking at sticking more cameras on things, the field of computer vision has seen a Cambrian explosion of progress. Now these devices can not only map the world (and position themselves in it) but also recognize the room you’re in and the objects around you, predict the path of moving objects, and track what direction your eyes are gazing. Suddenly we’ve given these devices a pair of eyes to see the world and understand the context in which they exist.

In terms of user experience, this reduces the burden of input from the user. To use a simple example: when I talk to my Alexa today I have to describe, in inane detail, which light I want to turn off (“Upstairs bedroom floor lamp!”). The promise of a seeing device is that I can simply gaze (or point) and say “Off”. This gets interesting when you think about a room full of people wearing seeing devices (say, a factory floor). Or better yet, a room filled with networked cameras (say, an Amazon Go store), where I don’t really need a headset at all. If whatever device I have (my phone, ear buds) can access information about my context, suddenly the interface needed to execute a task changes dramatically to the point where it might disappear entirely (like comparing Apple Pay to Amazon Go).
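To make that concrete, here’s a toy sketch of the idea. Every name in it is made up (the scene labels, the resolve_gaze_target helper, the devices); it’s only meant to show how gaze plus a one-word utterance could replace all that inane detail:

```python
# Toy sketch of "gaze plus voice": the headset resolves what you're looking
# at, so the spoken command can shrink to a single word. Every name here
# (scene labels, resolve_gaze_target, device names) is hypothetical.
from dataclasses import dataclass
from typing import Optional


@dataclass
class Device:
    name: str
    is_on: bool = True

    def set_power(self, on: bool) -> None:
        self.is_on = on
        print(f"{self.name} -> {'on' if on else 'off'}")


# Pretend the device's scene understanding has already labelled these objects.
SCENE = {
    "floor_lamp": Device("Upstairs bedroom floor lamp"),
    "ceiling_light": Device("Bedroom ceiling light"),
}


def resolve_gaze_target(gaze_object_id: str) -> Optional[Device]:
    """Map whatever object the eye tracker says we're fixating on to a device."""
    return SCENE.get(gaze_object_id)


def handle_utterance(gaze_object_id: str, utterance: str) -> None:
    """Combine the gaze target with a one-word voice command."""
    target = resolve_gaze_target(gaze_object_id)
    if target is None:
        print("Nothing controllable in view; fall back to asking for a name.")
        return
    command = utterance.strip().lower()
    if command == "off":
        target.set_power(False)
    elif command == "on":
        target.set_power(True)


# Instead of "Alexa, turn off the upstairs bedroom floor lamp":
handle_utterance(gaze_object_id="floor_lamp", utterance="Off")
```

A real system would obviously need confidence thresholds and a graceful fallback when the gaze is ambiguous, but the shape of the interaction is the point: the device supplies the context, and the person supplies one word.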

There’s real opportunity when you start breaking down device sensing capabilities (gaze tracking, hearing) and mixing/matching (gaze sensing + voice input). AR will certainly play a role but unbundling the underlying technology is a playground for design thinking.

3. Are AR glasses the new bendable phone? What’s the user need being addressed?

I enjoyed your bit about bendable/rollable screens because I feel like they’ve been promised forever! I think it’s worth remembering that it probably started at a time when phones barely fit in our pockets. “Imagine if I could stuff this tablet in my pants!” This reminds me a bit of the idea of ‘perfect AR glasses’ and how they will replace our screens and be the end-all, be-all device. “Imagine if I could stop staring at my phone! The digital world merged with the real world!”

But what user need is this addressing? What problem is this solving? And, crucially, is an AR device the best way to solve it? A common scenario for consumer AR is visual search: look out anywhere in the world and instantly know what it is, instantly have an interface for it… but is the appeal of that scenario that the information is in front of my eyes? Or that I don’t have to open an app, describe what I’m seeing into a Google search box, and hope for the best? What if I could just wave my phone and have it instantly recognize something (taking into account multiple signals of my context) with little or no action on my part? Maybe I don’t even need to look at my phone after I do that and I can just hear about it in my ear buds.

I was telling Jon that so many of these promised futures are technology-led, but what’s the user experience goal? I think that in however many years it takes to make perfect AR glasses, user needs will remain the same: We all hate telling computers obvious things (like who you are), we all like to fill the boring moments in our lives with something more interesting (like reading Twitter when you’re standing in line), most of us enjoy short human interaction (like your barista remembering your name)… The question is, will companies address these needs with a single, wearable device? Or will we want devices similar to what we have today but with dramatically less fiddling? Honestly, I’m not sure…

— Mark

Privacy and a Provocative Question

Writing a novel with an AR device

True, you’d do it with a keyboard, like you would on an iPad. But after Surface and iPad I am keenly aware that I love my laptop partially because it’s a really great surface on my lap. You *can* prop a Surface on it, or an iPad, but — let’s just cut to the chase — it’s not as good.

Which isn’t a death blow, of course. There are huge benefits to a tablet and I am a huge fan of them. But in the tradeoff columns, we’ve got “it’s just more awesome to write on a laptop than anything else I’ve found” in the pro-laptop column.

Now, maybe putting glasses on my face with super amazing battery life will have the right number of tradeoffs. Maybe I think of that scenario sort of like writing at a typewriter, where not having other features is part of the appeal. Maybe I want to put on writing goggles and only be able to write, because I like the amount of focus.

What’s the user need being addressed?

I think that’s what it comes down to for every product. You have to figure out the user need. That’s hardly a controversial statement.

I trust that it’ll emerge. And I also trust we’ll sort of know it when we see it. For example, Google Glass (and related products) has found its way into some key scenarios. (I can’t believe I didn’t mention that bit. When the stories came out saying the team thought it’d be good for work but Google execs wanted a big commercial splash, I rolled my eyes knowingly. And then later on I discovered I actually know the father of Google Glass personally! Our kids hung out in Seattle back in the day and suddenly he moved his family to SF. For Google Glass, I learned later.)

So I think targeted, rather than general, scenarios are probably going to win the day. More than perhaps we’re used to in the PC/Internet/Mobile categories, where part of the joy was that everything was possible. Maybe face computers are more like pressure washers. When you want one, you need one. But maybe not everyone needs one as badly as they need more common items like cars.

Privacy

Yeah, on the spectrum between a conspiracy theorist living in a cabin, convinced the government has bugged his phone, and a pragmatist living their life in the mainstream, aware of privacy but not obsessed with it, I think more people lean towards the latter. But.

I don’t think this is a question of if people care or not. It’s more like “ok, we know a lot of things require more access than ever before. And sometimes that brings a lot of value. But is the software or product asking in a way that makes me feel comfortable? Do I trust the product or the company?” I don’t have time to go look up all the stats right now, so I’ll just assert some things:

* Privacy concerns affect how much people trust Facebook

* People change their behaviour based on their level of trust in a service. So Facebook still has a lot of people on its service, but posts in 2018 are different from posts in 2010. Less authentic. More aware of the implications.

* There’s a myth that younger people don’t care about privacy. Actually, data shows that baby boomers are the ones that are more cavalier (and trusting) whereas younger generations know enough about tech to be more wary, generally speaking.

So here’s a thought exercise.

Apple, Google, and Microsoft All Make a Face Computer

Let’s say Apple, Google, and Microsoft all released basically the same Face Computer technology at the same time. (That would never happen, but bear with me.) Let’s say the prices were roughly the same (also would never happen) and let’s control for a bunch of other factors.

Assume the underlying technology is pretty much the same, and even assume the ease of use is about the same. Also assume people seem to like them. It appears the tech might be a mainstream success, and now it’s just a matter of seeing which of these three companies gets which market share. (Heck, let’s put Amazon and Facebook in there too.)

What happens? How are the five products received? I have some thoughts. What do both of you think?

— Jon

Information, Writing, and An Answer

Information

I love the example of telling Alexa to turn on a light, vs using an AR system, which knows where you’re looking, to achieve the same goal. This is the kind of thing that will be super obvious to people in hindsight, but seems revelatory right now. And there will be many of these kinds of UX revelations that will change how we interact with the world. In the light example, we’re still interacting with a real-world item, but AR means that there doesn’t have to be a real item there.

How many things in our rooms are just there to give us information? Your TV, the little screen on your oven, the paintings on the wall, the books. Or, let’s go a few steps further, why do you even need lights if you have AR? Lights are just there to allow you to get information about your surroundings, but with AR, you can get that same information without the help of external light sources.

Writing

Tablets are terrible for writing. But so are laptops! They’re really bad! They’re better than tablets, but that doesn’t make them good, it just makes them not quite as bad. Laptops are big and bulky (or they’re small, and then they have screens that aren’t big enough). Their keyboards suck, and the screen is not in the right position relative to the keyboard. Also, the trackpad is in the wrong place, since laptops are designed to be ambidextrous. And when you’re sitting in a plane or a train, there’s never enough space on the tray for the laptop. Laptops are bad for writing!

Now imagine writing in a plane with your AR goggles. You only have to carry a keyboard and a mouse or trackpad, so you can make those devices much better than what you get in a laptop, because they’re purpose-built for writing. When you’re sitting in the plane, you can just remove the seats in front of you from your view, and instead put a large screen there. Need to keep track of research? Put a pinboard next to your screen where you track your chapters, characters, or research papers you need to refer to. Now you have a purpose-built environment for writing, instead of the crappy little screen on your laptop, and the cramped keyboard that gives you RSI since you have to hold your hands weird to reach it properly because it’s attached to the screen.

Everybody makes AR devices

Not sure what happens. Probably Apple users buy the Apple device, since it integrates perfectly with everything else they have, and since it will offer the best privacy protection. Probably everybody else picks the Google device, since it will offer features nobody else will be able to offer, thanks to Google’s relentless data collection. But then I’m just assuming that things turn out the same as they did with the last product that disrupted the computing market, the mobile phone, so I’m most likely very wrong.

— Lukas

The Five Companies

I love this prompt, Jon. Lots worth noodling on… Thinking through the five companies, three points resonated with me:

1. Each of these companies has market advantages (things they do best) that would likely be reflected in the product, each of which is tied closely to how these companies make money.

At the risk of oversimplifying: Apple sells hardware/services, Google sells ads, Facebook sells ads, Amazon sells things/services, and Microsoft sells software/services. Any face computer these companies build would need to serve their bottom line, i.e. while Apple would make a premium hardware device, Amazon would make a product that would strive to support a ubiquitous service via an ecosystem of devices (that can help Amazon sell things). Apple’s HomePod and Amazon’s Echo/Alexa are a good example of this today.

From a user experience point of view, Apple needs to justify that premium purchase with a flawless experience (preferably within a well-constructed/tested walled garden), while a company like Amazon (or Google) strives for scale, offering a cheaper device (and perhaps a less robust UX) at the promise of greater flexibility to the user. There’s a whole rabbit hole to dive into here, the details of ecosystem strengths, app developers, core UX, but I think it’s worth looking at two companies in particular…

2. Facebook and Amazon would likely both aim a face computer at understanding user (consumer) intent, but with different approaches…

The more a company like Facebook or Google knows about you, the more it can target you, and the more it can charge companies for access to you by selling ads. Facebook doesn’t just sell ads, they sell the best ads. No company knows its users quite like Facebook does. Google can be very good at selling search ads (i.e. users have specific search queries and Google can sell those specifics to ad buyers), but Facebook is on a whole different level in terms of you and your social graph. What Facebook doesn’t know is what you’re doing when you’re not using Facebook. It might know where you go (especially if you check in) but it doesn’t know what you’re doing there or what you’re looking at. Face computers, and their ability to perfectly quantify what your eyes are gazing at and understand the environment you’re in, can provide exponentially more detail about your actions and intentions. This becomes even more valuable when you consider you’re not ‘opening an app’. We’re not thinking in terms of monthly active or daily active use but near-constant use. Not to mention, of course, the limitless potential for ad space in the real world, at times when you might actually want to see an ad for goods or services. This is where Amazon comes in…

Amazon also wants to understand consumer intent, not because it wants to sell ads to others, but to sell you those products directly. Amazon currently has a discovery problem, particularly for certain categories (i.e. fashion, cars). Users aren’t starting at the Amazon search box, they’re going to Google or Pinterest where they can more readily browse options and get inspired to make a purchase. It’s around that intent to purchase that Amazon would position a face computer, with its ability to understand what you see and when you see it. If I’m walking down the street and see a car I like, maybe my gaze follows it and a price tag pops up. Maybe it knows I like certain colors and overlays the color on the car, or maybe I even see myself in the driver’s seat as the car drives away…

On the other side of intent is the idea of just-in-time purchases that Alexa has been aiming for. The moments when I run out of paper towels while I’m cleaning up a mess and can shout to Alexa to order more (rather than go through the friction of opening an app or remembering to do so later). An Amazon face computer would take this further: the world becomes an ‘Everything Store’, where Amazon Go (or Amazon Prime Now) is hyper-charged and everything on the planet becomes a showroom for purchasable goods and services. Again, without turning this into a whole essay, I think the linchpin in this thinking is around the idea of privacy… but I’m not so sure our idea of privacy today will be relevant to products of tomorrow.

3. Privacy is quickly becoming a key signifier of success today… will that be true in the future?

Apple has led a strong narrative around privacy in its products, a carryover from Steve Jobs’s famously vigilant stance. Meanwhile, Facebook has been put through the wringer on privacy, while Amazon is somewhere in the middle with Alexa (“Look at all the value you get when we can listen to you 24/7!”). I think today most consumers would flat-out reject the idea of giving their eye/gaze data to a major corporation, to say nothing of other people in the environment concerned about being recorded (as seen with Google Glass, and more recently with Snap’s Spectacles). Although if I were to guess, I’d say most people will eventually have a more relaxed stance on this concept (for better or worse). I look at what’s happening in China and see a public opinion that is largely in support of camera-filled public spaces and even advanced facial recognition (opting for the benefits of simplicity and safety).

If you consider something like gaze/eye-tracking to be a fundamental part of the experience as it relates to how a company makes money… will the company’s stance on privacy make or break a product like this? Is Amazon better suited to succeed since there’s an obvious benefit to the user while Facebook’s benefit is more nebulous? Is this where a player like Apple (with privacy built fundamentally into their company’s DNA) has an out-sized advantage? Or will companies play it safe, potentially limiting the full potential of the device (and the experience) to avoid privacy concerns?

Lots to think about!

— Mark

Typing on a plane

Lukas, I agree 100% with the plane thing. You’re right, laptops are better than tablets but that doesn’t make them great. And you won me over when you described the cramped plane situation. I can absolutely see that scenario lighting up for me if the tradeoffs were right.

It reminds me of two experiences in particular. When the iPad first came out, I was doing a lot of research in the field. I ended up with a setup where I’d put an iPad on my lap, closed, then a bluetooth keyboard on top. Then I could touch-type answers as I interviewed people. Neat.

But it went further than that, because the note taking app was Soundnote. (Side note I learned later: Mark knows the guy that made it!) Soundnote’s claim to fame was that the audio and text are synched, so later you can tap a word in your notes and it jumps the audio to the correct place. Amazing.

And then I realised later on that you don’t need to take great notes with a setup like that. You can just look at your interviewee while they say stuff, then when they hit interview gold you can just mash your hands on your keyboard. “MI#OI3io#NLS” is an appropriate marker, because then you can just tap that later and the audio jumps right to the quote.
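The trick is simple enough to sketch in code. Here’s a hypothetical reconstruction of the idea (definitely not Soundnote’s actual implementation, just the concept): every note fragment gets stored with an offset into the recording, so tapping it later is just a lookup plus a seek.

```python
# Hypothetical sketch of synced notes: each note fragment is stored with the
# elapsed recording time, so "tap a word, jump the audio" is a lookup + seek.
import time
from dataclasses import dataclass, field
from typing import List, Optional


@dataclass
class Note:
    text: str
    offset_seconds: float  # how far into the recording this was typed


@dataclass
class InterviewSession:
    started_at: float = field(default_factory=time.monotonic)
    notes: List[Note] = field(default_factory=list)

    def add_note(self, text: str) -> None:
        """Record what was typed and when, relative to the start of the audio."""
        self.notes.append(Note(text, time.monotonic() - self.started_at))

    def seek_for(self, text: str) -> Optional[float]:
        """Return the audio offset for a tapped note, if it exists."""
        for note in self.notes:
            if note.text == text:
                return note.offset_seconds
        return None


session = InterviewSession()
session.add_note("background questions")
session.add_note("MI#OI3io#NLS")  # mashed-keys marker meaning "that quote was gold"

# Later: tapping the mashed marker tells the audio player where to jump.
print(session.seek_for("MI#OI3io#NLS"))
```

The mashed-keys marker works precisely because the text doesn’t matter; only the timestamp does.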

This meant I was far more present, getting far better data, and there was no downside. I could still get back to the great quotes. In fact I could do it better than someone trying to interview the traditional way 50 years prior. It was a good reminder that the end goal can sometimes get lost. The end goal is not “take notes,” it’s “get good data.” And I realised that “taking notes” was perhaps getting in the way of “get good data,” so my mashing keyboard approach seemed crazy but was actually a leap forward.

That’s memory number one. And it ties into this discussion because the same principle applies. The goal isn’t “type on a laptop,” it’s “write well.” And one of the great forgotten details about writing well is that distractions kill writing. I would absolutely give up everything about a laptop (including internet access) if I knew I was getting more focus. (And the keyboard quality would have to be equal or better.)

Here’s memory number two.

Once I was writing on a plane with my iPad keyboard, like I do. And the guy beside me kept sneaking a peek. And this particular story was embarrassing in some way. Maybe I was describing people dating, so I was struggling through date dialogue, and I didn’t want some stranger watching me try to conjure romantic-comedy chemistry on screen. So at one point I sighed and switched to my interview setup: the iPad went back into my bag while I typed on a bluetooth keyboard by touch.

I tried to correct errors as I felt them, but I also knew the errors didn’t matter much. I basically closed my eyes and channeled my thinking straight into the keyboard, with no visual feedback at all. And I switched into a whole new mode. There were fewer than zero distractions. It was almost distracting how connected I was to the writing. It’s like how silence can sometimes make your ears ring. The focus was so intense that I could feel it.

Later, I shared the story and was told it was really powerfully written. That intensity I channeled into it could be felt by other people. So it made me wonder if there’s some real benefit to that. In some situations, anyway.

Imagine an AR setup with a “do not disturb” mode that did nothing other than block out visual distraction. For someone like me who just wants to write without any distractions, that could actually be an interesting side feature. Not a core one necessarily, but I’d use it :)
