Funding the future of Human-Computer Interaction is about redefining our reality

Jad El Jamous · Published in Humanity Sparks
Jan 13, 2018 · 34 min read

Part 1: The start of a post-smartphone era

Part 2: The new paradigms of reality

Part 3: Hacking perception to amplify interaction

Part 4: The case for human augmentation

- Productivity enhancement as a pillar of economic growth

- Dealing with complexity through transcending limitations

Part 5: Human-machine value exchange systems

- On-demand access to intelligent systems

- Symbiosis with the robots

Part 6: Expanding our acquisition of knowledge

- XR as an experiential education tool

- Feeling the world

Part 7: Changing our collective reality

- Human-to-human communication of thoughts, feelings, and desires

- Increasing beauty in the world

Part 8: The role and considerations of Venture Capital

Part 1: The start of a post-smartphone era

Before the launch of the iPhone in 2007, Steve Jobs explained to Wired Magazine's Steven Levy that "people hated their cell phones". Levy at the time described it better as a love-hate relationship: the phone could do the tasks we wanted, yet not with the ease we expected. This feeling is reignited today towards our smartphones as we pass the iPhone's 10th anniversary, and is echoed by Washington Post tech columnist Geoff Fowler in his "I love you, I hate you" letter to the iPhone. After praising the smartphone as his beloved life partner, he proceeds to portray it as a limited and addictive distraction machine.

Modernity is defined by a generation of people spending multiple hours per day on a tiny screen designed to hold their attention for as long as possible, using perverse addiction feedback loops and behavioural modification — all in the name of advertising revenues. From an alien perspective, it is absolutely crazy that we prefer to gaze continuously at screens instead of interacting with the people and things around us, detaching ourselves from the world and from each other.

We've been fed for many years the marketing slogans of having "all the information in our pocket" and "instant access at the touch of a screen". But when we dig deeper into those claims, we find that the pocket and the screen are not the main selling points. The phone is just a hardware form factor for delivering information and experiences, and in reality it has fallen short of doing so. There is still not enough real estate on the screen even as phones get bigger (and they can't get much bigger); the speakers don't provide good listening quality; mobile games are so primitive they feel ages behind current gaming hardware such as the PS4 and Xbox; and out of the millions of apps available, only a handful get used or paid for.

It is easy to look at the world today and see the smartphone — a device that 3 billion people carry with them wherever they go — as the embodiment of contemporary culture. Yet it is one of the fallacies of our times to believe that it is our "smartest" invention. In the evolution of technology, we saw the rise of the PC in the 1980s and the rise of the mobile phone in the 2000s, and new paradigms in human-computer interaction are just starting to emerge as we approach the 2020s. We are today at what Phillip Moffitt, in a 1980s article entitled "The Dark Side of Excellence", calls an observation point, where we can see both the past and the future. The next 3 billion people will adopt the smartphone between now and 2020. As the smartphone approaches the end of its lifecycle, we are seeing rapid innovation and commercialization in augmented and virtual reality, voice assistants, wearables, cognified objects, holographic projections and haptic interfaces. These are the emerging technologies of a new ubiquitous computing era that has been long in the making, but is just reaching a critical commercialization point.

Big Tech has pushed its investments in the space, and prominent venture capital investors are starting to take notice. According to Digi-Capital, AR/VR startup investment reached $3 billion during 2017. Recent launches in XR include the Oculus Rift, Microsoft's HoloLens, the Meta 2, the Magic Leap One and Google Glass 2.0, with more than 60 other XR headsets in development. Mixed reality (XR) is part of a bigger trend: the future of human-computer interaction (HCI). Smart speakers with voice control like the Amazon Echo and the Google Home are already assisting people in tens of millions of homes. Gesture-recognition devices and wearables like the Singlecue from Eyesight Technologies, the Myo armband from Thalmic Labs, and Nod are also making headlines. Voice assistants and smartwatches are brought together in Mobvoi's products from China. Moreover, reports about brain-machine interface startups such as Kernel and Neuralink have started appearing in many technology review magazines. And collaborative robotics companies such as Rethink Robotics and Kindred.ai are quickly entering industrial enterprises to complement workers.

Roelof Botha, partner at Sequoia and board member at Unity and Mapillary, two companies building the blocks of mixed reality, explains in a recent interview: "I'm sure most of the audience has seen the movie Her. This idea of having an invisible user interface, which is voice-based, and having a different way of interacting with technology. If you look at people at lunch breaks, it's kind of strange that we've evolved where we're all sitting there hunched over these very small screens, all developing neck strains. It's hard for me to imagine that's the end state."

Within the limits of the smartphone, new S-curves are starting to appear. Luckily, it is not just one new curve that is emerging; there are actually many, and some will fail while some will succeed. What we know for sure is that the next 10 years will look nothing like the last 10.

Part 2: The new paradigms of reality

The evolution of the graphical user interface (GUI) into a touchscreen was the innovation that pushed the smartphone towards consumer success and allowed for a better relationship with information. The new interfaces being developed today take HCI towards the promise of full immersion in the digital. This process is about moving the internet from the tiny colourful 2D flatscreens we stare down at to the real 3D world that we live in and move through. Why can't technology make our direct surroundings better instead of pushing us to disconnect from them? If we think of our smartphones today as intelligent devices that unlock a world of access and power, let's try to imagine this intelligence overflowing into the world around us and immersing us in it at will.

One way to get people off their phones and out in the world is to digitize reality itself by overlaying holographic content onto the real world. Another way, one that the Internet of Things experts have been promising us, is a world filled with billions of always-on interactive devices. The technologies of AR and IoT will soon converge to usher in an internet made up of both digital and material objects that will surround us wherever we go.

We will therefore need new technological interfaces within our reach that are both part of this intelligence ecosystem and that let us connect to it. Clarke's Third Law states that "any sufficiently advanced technology is indistinguishable from magic". We indeed want our tech to be more like Harry Potter's wand than like a smartphone. Manifesting our intentions in the real world should be as easy as making a small hand gesture and making a wish.

In this world, our AR headsets will integrate links between objects in the real world and augmented digital apps. Voice interfaces will become ubiquitous, as demonstrated by the recent move of Alexa into cars and appliances. Current form factors of wearables will evolve to include advanced haptic-feedback systems, as well as new connected fabrics and even neural stimulators. Embedded devices will have a continuous connection to thousands of nearby objects that unlock new experiences, to robots that execute tasks for us, and to sensors alerting us to incoming opportunities and dangers. Instead of pushing buttons outside our body, we will feel our magical cyber-capabilities within our innate senses. Every digital interaction in that scenario is much grander and much more frictionless than it could ever be through a stand-alone connected smartphone.

Part 3: Hacking perception to amplify interaction

To make this new HCI vision come true, we will need to hack the five senses and allow a direct connection between the natural workings of our inner world and the AI systems we build in the outside world. For as long as humans have existed, our senses of touch, sound, smell, taste and vision have been the interfaces between us and an environment that is constantly sending stimuli. This input-output process works in feedback loops to construct our view of reality. When those biological processes are affected by computer simulations that feed them new sensations, our reality will change, our feelings will change, and hence the world will change. The convergence of man and intelligent technology will therefore come true as the body becomes a sophisticated electronics platform and our devices become bionic, linked directly with skin cells, muscles and nerves, and running through the blood and brain at the nano-level.

With immersive XR headsets, our senses are hijacked to create an illusion of reality. The eyes are tricked into replacing "real" world output with the "virtual". They can, for example, show us a person in the room who is not really there, yet we will see them as if they were. These interfaces are still primitive today, in that they do not yet trick us all the way into thinking the illusion is real, because they target only our visual system. Hence, to bring the whole body into mixed reality is to create better interfaces of perception, interaction and consciousness, with constant feedback among them to reshape the whole human experience.

The next step on this journey is to have people feel, interact with and manipulate the objects around them, and have the objects interact back — possibly autonomously. This gives more life to the objects and necessitates input detection such as body motion-capture and touch controllers. Using our hands and bodies, we can move, mix and mould simulated objects to achieve specific goals. Haptic gloves and body suits are new inventions (Noitom, Manus VR, Valkyrie) that bring the sense of touch into XR; these wearables mimic real touch sensations with digital objects by exchanging strain energy with the environment. There is also a way to do it without wearables, through natural gesture recognition (Manomotion), but this misses touch sensations unless complemented with an ultrasound technique that makes users feel certain shapes (Ultrahaptics). Both solutions feel more natural than touch controllers such as those from Tactical Haptics, although the latter will be needed for some kinds of input that would traditionally use real-life tools. Robots are also touted to play a big role in XR, as they can provide tactile and force feedback and guide us in the simulated world through direct interaction.

Still, the critical step for full immersion is to create new emotions and hack our thought patterns. It is our brain, however, that is the ultimate sensing and thinking machine, where everything starts and ends. The peak of immersive mixed reality will come with brain-machine interfaces, aka Neural Input-Output (NIO), that electrically link our neural systems with digital information. One company working on NIO to drive game and software control in mixed reality — in near real time — is Neurable. Its early demos show people picking up and throwing items in the virtual world just by using their minds. The technology understands user intention by deciphering and localizing brain signals. Facebook is working on similar mind-control technologies around typing and clicking; they are calling it the "brain-mouse". The Swiss company Mindmaze, whose mission is to power all other XR platforms, is scanning our whole brains and, in a way, bringing our whole consciousness with us into the digital realm.
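To make the "deciphering brain signals" step concrete, here is a minimal sketch of how an intent detector of this general kind could be trained: classify short EEG epochs as "select" versus "rest" from crude power features. Everything here — the channel count, the synthetic data, the feature choice — is a hypothetical stand-in, not Neurable's actual (unpublished) method.

```python
# Minimal sketch of EEG-based intent detection. All data is synthetic;
# a real pipeline would use recorded epochs, artifact rejection and
# per-user calibration rather than this toy setup.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)
N_EPOCHS, N_CHANNELS, N_SAMPLES = 200, 8, 128  # hypothetical 8-channel headset

def band_power_features(epochs):
    """Crude per-channel log signal power as a stand-in for spectral features."""
    return np.log((epochs ** 2).mean(axis=2))

# Synthetic calibration data: "select" epochs carry slightly more power.
rest = rng.normal(0.0, 1.0, (N_EPOCHS, N_CHANNELS, N_SAMPLES))
select = rng.normal(0.0, 1.3, (N_EPOCHS, N_CHANNELS, N_SAMPLES))
X = band_power_features(np.concatenate([rest, select]))
y = np.array([0] * N_EPOCHS + [1] * N_EPOCHS)  # 0 = rest, 1 = select

clf = LinearDiscriminantAnalysis().fit(X, y)  # train on a calibration session

new_epoch = rng.normal(0.0, 1.3, (1, N_CHANNELS, N_SAMPLES))
if clf.predict(band_power_features(new_epoch))[0] == 1:
    print("intent detected: pick up the virtual object")
```

In practice, such classifiers are calibrated per user and per session, which is one reason "near real time" mind control remains hard to ship as a consumer product.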

The potential exists to transfer data in both directions and exchange information between the brain and outside networks. In the academic world, biomedical engineers from the University of the Witwatersrand in Johannesburg have already connected the human brain to the internet in real time, in a project called "Brainternet". They explain that streaming brainwaves onto the internet turns the brain into a node on the Internet of Things (IoT). Spanish tech blogger (and my former teacher) Enrique Dans believes that there will be no shortage of entrepreneurs willing to close the gap between controlling computers through brain stimuli and using those stimuli for more complicated tasks.

Brain computing might also not be as far from early commercialization as previously thought. Kernel founder Bryan Johnson, with $100M in capital raised at a very early stage, is aiming to make our neural code programmable and notes that "Our connection with our new creations of intelligence is limited by screens, keyboards, gestural interfaces, and voice commands — constrained input/output modalities. We have very little access to our own brains, limiting our ability to co-evolve with silicon-based machines in powerful ways." Elon Musk — whose long-term business plans are always admirable — puts 2021 as a milestone for people with disabilities to use products from his brain-machine interface startup Neuralink, and expects people without disabilities to start using them in 8 years. We should, however, keep in mind that mapping brain neural function is one of the hardest problems in science today and the timeline is blurry. The ultimate challenge also remains in putting connected implants in the brain to capture, record and transmit data.

The big question is how these interfaces will allow human cognition to be expanded alongside our reality. Eliezer Yudkowsky, AI theorist and co-founder of the Machine Intelligence Research Institute in Berkeley, speculates that "Anything that could give rise to smarter-than-human intelligence — in the form of Artificial Intelligence, brain-computer interfaces, or neuroscience-based human intelligence enhancement — wins hands down beyond contest as doing the most to change the world." Transcranial electrical stimulation (TES), a working model of safe, non-invasive brain tools delivered through headbands, is now believed to be a way of enhancing brain function. TES is becoming more popular among researchers with the introduction of a technique that uses direct currents to trigger neural activity. Recent research positions it as a way to support functions such as language, learning, memory and visuospatial processing: people administered TES in research environments could multitask better, learn vocabulary faster, and improve their learning success over a significant period of time.

These new interfaces will therefore encompass and change the entire human experience, allowing people to feel things they have never felt and to interact with the world in ways they have never imagined, augmenting mental, physical and social abilities. Not long from now, we will be able to combine the power of cognified devices with the power of our brains and bodies. Humans need more logic-processing power, and AI needs more of the humanity that took hundreds of millions of years to evolve. Together they create a far more powerful intelligence than if AI were developed further on its own. Ben Medlock, co-founder of SwiftKey, calls for AI researchers to take this insight into consideration and to combine their algorithmic creations with millions of years of human evolution.

Part 4: The case for human augmentation

Productivity enhancement as a pillar of economic growth

There has always been a push from technologists towards using our tools as extensions of our biological capacities, and it is intriguing that many theorists believe the promise of superintelligence will make us philosophically rethink our humanity. The internet indeed rose with an optimistic liberatory movement in the 1990s, but the narrative quickly turned it into a dark entertainment machine monopolized by advertising companies. This is understandable, since extrapolating what the internet offers today — escapism and distraction — would lead mostly to an ad-filled, fully immersive story-telling environment revolving around media and games. Yet the idea of a wholly simulated entertainment world is not enough to excite anyone about the future. Entertainment is in fact only part of what we seek as human beings, and the real promise lies in augmenting our reality on the basis that we want to accomplish meaningful and self-actualizing tasks in daily life.

The Harvard Business Review portrays XR as an "upskilling technology" that is starting to be used in industrial contexts to improve worker performance and safety, and to counter the slowing economic growth around the world that previous technological advancements failed to impact. Within the heated debate around automation lies an argument for using technology to increase human productivity, and not only because robots are not capable of doing everything yet (and probably won't be for another couple of decades). It is also because of an emerging ethical view of empowering employees and improving their satisfaction through human-machine collaboration. This will also help combat rising stress levels in the workplace and push the firms that integrate these approaches into their culture to the top of "best employers" lists.

Emergence Capital's 2017 mobile enterprise trends report explains that the firm's enthusiasm for this market comes from the roughly 80% of workers who are deskless — in jobs like manufacturing, healthcare, hospitality and field service — a market it sizes at $250 billion, and from the constant worry of boosting worker productivity. The solution, the report predicts, involves in large part many organizations moving towards augmentation technologies such as HMDs and IoT as they get less expensive and offer more valuable use cases.

Further job losses to automation should be met with further augmentation. An Uber driver who loses their job to an autonomous vehicle in the near future could find a solution in XR headsets that offer rapid training in other kinds of tasks. Augmented environments have the potential not only to train workers in specific complex capabilities, but also to help them execute tasks in real time, such as assembling products at a factory. In the same way that drivers follow machine commands through driving maps today, workers will follow real-time instructions, delivered through HMD waveguide displays, for a wide range of complex industrial tasks.

I also expect new IoT and AR to support productivity in two more ways. One is their integration with cognitive software interfaces that help in unique problem-solving tasks such as risk-reward calculations and data-based managerial decisions; Ray Dalio, manager of Bridgewater Associates, for example, has been developing an algorithmic system that manages decision-making alongside employees in his fund. The other is collaborative robots that take notes for you, learn tasks and organize meetings — additional assets that interact with workers in business settings to improve their quality of work. Automata and Wandelbots, for example, make robot arms that you can teach to do almost anything.

In arguing for human augmentation, I am not actually fighting against outsourcing many jobs to the machines. Both extremes can work together. We do need to fully automate the routine and mindless jobs that people today dread but that can be handled by AI systems. All that should be left for humans is the real work of creative production and visionary thinking, also known as full-time hobbyism. Automation and augmentation actually feed into each other: when we increase worker performance with new assistive interfaces, we at the same time increase the cost of labor, and therefore incentivize companies to invest in automation faster and push humans out of bullshit jobs into creative ones — narrow AI becomes cheaper than augmented, efficient humans. I know for certain that my salary without a laptop would not be as high as my salary with one, because I would not get anything done (I cannot, however, say the same about my phone; quite the contrary). The rise of AI should not be seen as a dystopian automation nightmare, but as a process of releasing people from the drudgery of subnormal wages so we can openly consider new purposes for human existence.

Dealing with complexity through transcending limitations

A big aspect of being an "intelligent being" is the ability to adapt and react to a changing world, but it seems less and less likely that people constantly bombarded with information on their devices can actually do this. In December 2016, Donald Trump himself articulated the problem very simply: "I think that computers have complicated lives very greatly. The whole Age of Computer has made it where nobody knows exactly what is going on." Trump is right in that technology has greatly increased complexity, and this complexity has confused us to the point of paralysis.

As many have pointed out before me, the societal problem with the modern internet is not information overload but filter failure, which means that attention is hijacked and stretched away from where we actually want it to flow. Our human minds are not equipped to handle the ever-increasing body of knowledge and world news available to dig into. It is possibly even reducing our knowledge, as the majority of people become absorbed in mindless consumption rather than creation. The problem with an infinite information stream on the internet is that we don't do much with it. We just move rapidly through endless pieces of information instead of taking the time to act upon them, or even think about them.

First, this means that information has become entertainment. Second, it means that our society has become a "spectator society": to just share and retweet, hoping someone else will do the right thing, is typical user behavior in an advertising-fueled, entertainment-focused digital world. Third, equipping man with intelligence and material power without the wisdom to use them leads to a meaningless life, and we are feeling this nihilism more with each passing day.

The mainstream reaction to this problem seems counter-intuitive. A society in crisis should not try to revert to past times when things actually worked, as reflected in many primitivist chants today calling for economic localism, the reversal of technological advancement, or a retreat to conservative minimalism and slow-web movements. There is another, more realistic option: stepping forward into the future and embracing the state of flux, as humans always have. This option is to expand human capacity to process information and make decisions in a complex world. We actually do want more information, because it is only through the sharing of knowledge that economic and personal growth has happened throughout history. It is in fact those who can still find ways to attain real knowledge, and those who "create", who rise to the top of the hierarchy today.

We can therefore create better ways to re-focus on action, creativity and better psychological health. In a fast-paced world, it seems that we are supposed to be constantly learning and constantly making decisions. This way of life seems an inevitable part of society if we want to be regarded, and to regard ourselves, as "good" human beings. We therefore need all the help we can get from new technology to make our lives easier and alleviate the weight of the world on our shoulders. That, again, can be done through new interfaces that promote enhanced cognition and new mechanisms of human-machine collaboration. It is basically impossible to produce new consumer value without completely changing the front-end experiences that rely on interaction design.

The simplest example: the limits to how we process information on social media platforms today are our inability to maintain connections with more than about 150 people (the Dunbar number) and our inability to cope with all the updates from the people and pages we follow, so Facebook is pushed to create filtering algorithms that decide on our behalf what is important to us. The sort of solutions I foresee lie in new user interfaces that upgrade our mindware to catch all the updates on our social media feeds and to assimilate all the information thrown at us by our contacts, so that we learn more, connect with everyone we want to connect with, and are able to react to everything we want to react to in a productive manner.
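To make the filtering mechanism concrete, here is a toy sketch of how a feed might be ranked when a user cannot process every update. All the names, fields and weights are hypothetical illustrations of the general idea, not Facebook's actual ranking system.

```python
# Hypothetical sketch of feed filtering: score each update and show the
# highest-scoring ones first. Weights and fields are invented for illustration.
from dataclasses import dataclass

@dataclass
class Update:
    author_affinity: float       # 0..1, how close the user is to the author
    predicted_engagement: float  # 0..1, a model's guess the user will react
    age_hours: float

def score(u: Update) -> float:
    """Favour recent updates from close contacts the user is likely to act on."""
    recency = 1.0 / (1.0 + u.age_hours)
    return 0.5 * u.author_affinity + 0.3 * u.predicted_engagement + 0.2 * recency

feed = [Update(0.9, 0.2, 1.0), Update(0.3, 0.8, 5.0), Update(0.7, 0.6, 30.0)]
for u in sorted(feed, key=score, reverse=True):
    print(round(score(u), 3), u)
```

The augmentation argument in the paragraph above is essentially that better interfaces could raise the user's own capacity instead of delegating this scoring function entirely to the platform.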

There should be more initiatives within new HCI to design for well-being and positive behaviours. The challenge here is to provoke emotions and please the senses, because most of the emotions and behaviours generated by the internet lately have ranged from unhealthy to detrimental — from the constant fear of missing out and the anxiety produced when not connected, to what Douglas Rushkoff calls digiphrenia, the scattering of one's many personalities across digital and real life.

Part 5: Human-machine value exchange systems

On-demand access to intelligent systems

Prominent San Francisco design consultant Tiago Forte has a more sci-fi take on the future of HCI: "The day that I can reach out my hand and, with nothing but my thoughts, make my intentions manifest in the real world, will be the day that technology can be considered grown up". It is already becoming very convenient to tell a chatbot what we want so that it executes those tasks for us. Take the example of the Amazon Echo turning off the lights at home — a simple task that links the user, who expresses an intention, to Alexa, which processes the demand, to the light switch, which is connected and ready to execute. Conversational AI is about speaking things into existence, but it is only one interface of many. Wearables, embedded tech and NIO will make way for more complicated direct links with intelligent systems in the smart home, the smart enterprise and the smart city. They all become an extension of our own intelligence.
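As a concrete illustration of that intention-processing-execution chain, here is a minimal sketch of a toy voice-command handler. The device registry and the LightSwitch class are hypothetical stand-ins; a production assistant would route a parsed intent through a real smart-home API rather than this crude substring matching.

```python
# Toy sketch of the chain: utterance -> intent parsing -> device command.
from dataclasses import dataclass

@dataclass
class LightSwitch:
    name: str
    on: bool = False

    def set_power(self, on: bool) -> None:
        self.on = on
        print(f"{self.name}: {'on' if on else 'off'}")

DEVICES = {"living room light": LightSwitch("living room light")}

def handle_utterance(text: str) -> None:
    """Very crude intent parser: finds a known device name in the utterance
    and reads the desired power state from the words 'on'/'off'."""
    text = text.lower()
    for device_name, device in DEVICES.items():
        if device_name in text:
            device.set_power("on" in text.split())
            return
    print("Sorry, I couldn't find that device.")

handle_utterance("Turn off the living room light")  # -> living room light: off
```

The point of the sketch is how short the chain already is; the essay's argument is that wearables and NIO will shorten it further, removing even the spoken utterance.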

Access to productivity and service applications can help us get things done with the help of intelligent data and robotic systems, and can even link us to support partners such as personal coaches, physicians and dieticians who can react to any changes or problems. If we thought the jump from calling the restaurant to using Deliveroo on our mobiles removed friction, imagine what kind of value will be created when we link our senses directly to the restaurant, see the menu right before our eyes through XR glasses, and convey to the chefs exactly how we want our food cooked with just a thought.

In a few years, I envision we are going to go through our days effortlessly accessing transportation and payment networks, wearing stylish AR glasses that identify objects in the real world as tokens for augmented digital services catering to our own needs, and taking advantage of all the data being processed around us in many different ways.

These connections will become ever more personalized and contextualized. Ideally, the AI would learn about me the more I interact with it, and start adapting to what I am already good at or what I wish I were capable of. With implants tracking our bodies, our thoughts and our emotions, the software will take everything it learns about us into its service feedback loops and combine it with contextual intelligence to be proactive about the right course of action for the user.

Our personal health will also be augmented: picture a patient monitored at home with cameras and embedded devices that send information to a personal physician or to AI-powered health apps, which may then send commands to administer doses through smart skin membranes, body implants, or brain-focused biochemical interactions.

Symbiosis with the robots

Digital assistants in the form of voice-controlled home devices are very likely to evolve more and more into humanoid forms, as demonstrated by Intuition Robotics, Furhat Robotics and Jibo. A young field of research called human-robot interaction suggests that since we are hardwired to place our trust in humans, we should make robots more humanlike so that people share more of themselves with them. Amazon is slowly adding touchscreens and cameras to its Echo to complement its voice interface, and the Amazon Alexa Fund invested in 2017 in the social assistive robotics startup Embodied. Social home robots can in fact become great caregivers if we can program social intelligence skills into them with careful design thinking. I personally look forward to the day the Amazon Echo gets smart enough to become a family doctor, alerting me when a serious issue arises with any of my family members. In addition to diagnosis and assistance, automated healthcare and treatment robots should be able to administer doses of medication automatically when needed, to prevent problems.

AI expert David Levy proposes that we are not far from a time (he suggests roughly the year 2050) when humans will desire robots as friends, sexual partners, even spouses. Levy argues that a few decades from now the differences between human and android may be "no greater than the cultural differences between peoples from different countries or even from different parts of the same country." The combination of speech, computer vision and emotion recognition is already starting to create affective human-machine collaboration, and collaborative robots will become our partners in daily life. Affectiva is a startup working hard to build those multimodal emotion-recognition techniques into software and machines. The evolution of this concept means a move beyond command-and-control relationships towards symbiotic, empathetic relationships and (hopefully) peaceful coexistence.

Part 6: Expanding our acquisition of knowledge

XR as an experiential education tool

Education has been at the forefront of the discussion on how we can make the world better, yet it has been stuck in the past, unable to provide the kind of innovation that helps young people learn and older generations "go back to school" and pursue a lifelong journey of constant re-education that helps them not only progress in their careers but also solve the complicated problems the real world faces. In one of his Shots of Awe episodes, Jason Silva explains that humans are hardwired to discover and explore, and that our visual cortex is linked to our opioid receptors, meaning that our brain rewards the acquisition of new knowledge.

Immersive and augmented environments are proving to be a much more natural way of learning new ideas. Conscious engagement and interaction with a high-level simulated representation of a complex scientific idea can increase the effectiveness of learning outcomes. Students better contextualize what they are experiencing because their senses and their whole body are engaged in the process; often-cited (though contested) research claims that the brain remembers 90% of what it "does" versus 20% of what it hears or reads.

Copenhagen-based Labster is re-imagining STEM education through immersive simulations, saving universities the huge costs of physical lab experiments by digitizing the whole environment. The startup Curiscope has created a physical t-shirt that shows the inside of our bodies when viewed through an HMD, illustrating one direction for AR: giving us more knowledge about the real world by looking directly at it. The Boulevard team, which is applying immersive education to arts and culture, wrote on TechCrunch in 2016 that "This global distribution of VR content and access will undoubtedly influence a pedagogical shift as these new technologies allow a literature teacher in Chicago to "take" her students to Verona to look at the setting for Shakespeare's Romeo and Juliet, or a teacher in the Bronx to "bring" her Ancient Civilizations class to the ancient Mayan ruins at Chichen Itza". Similarly, startups such as STRIVR and FundamentalVR are building game-like simulations for healthcare and industrial training. The latter's founder explains: "We're starting to look at using actual scans of real patients in simulation, so that a surgeon can practise doing a real procedure in simulation before they do it on a real person."

There is definitely going to be less and less sitting in classrooms, listening passively to someone speak about a concept that might be hard to grasp without visualizing it. Immersive learning is finally close to happening, and Google is at the forefront of it with Expeditions, a project that fits into the company's mission of organizing the world's information and possibly extends it towards translating this information into real knowledge and social progress.

As briefly mentioned in the augmentation part, knowledge acquisition will also move into real time when we can actually augment our surroundings with information such as text or charts, and with tutorial-like instructions using waveguides and arrows. As XR evolves into NIO, we can foresee a direct upload of knowledge into the brain. Futurists also expect that one day we will be able to create an exact mind replica and enhance it with more processing power for higher productivity and wakefulness. When the brain and the replica are linked, we can have knowledge and ideas that are sparked in either one.

Feeling the world

But that is not all that new interfaces directly linked to the senses can bring. Our senses seem to have evolved over millions of years as a way of bringing life under control, of abstracting reality to make sense of sensory input and enable both understanding of the world and action on it. In fact, Donald Hoffman and others have argued, on evolutionary grounds, that we don't see reality as it is; we only see an expression of reality that fits our evolutionary fitness function. So what if you could actually get information about the world through your body, by wearing or embedding technologies designed just for that? To transcend our perceptual limitations is therefore to make the transfer of information between us and the world easier. As with the Facebook example I gave before, it is our best solution for adapting to increasing complexity and pushing for greater control.

The skin is a very important part of this new era. The smartphones and smartwatches we use today give us simple vibrations that we naturally understand. My Fitbit gives one long vibration on my wrist when I have been sitting for more than an hour, several short celebratory buzzes when I reach my 10,000-step count, and another buzz pattern when someone calls or messages me. The potential for these patterns to become a new communication method is clear: the intensity or number of pulses can change with the urgency of the context, and more complicated tactile feedback patterns can emerge. There will be clothing that writes on our skin and sends bursts of electrical signals directly through our nerves. Projects such as Neosensory and Cyborg Nest are creating new modalities to transfer information in parallel with vision and hearing; they work by letting the brain transform incoming patterns on the skin, or through the body, into useful information. The input can range from internet and social signals to weather patterns and other environmental cues. North Sense from Cyborg Nest enables users to naturally detect where North is through a piercing. Its founder explains: "This was not another step towards us becoming machines, but towards us becoming more human."
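A minimal sketch of the encoding idea follows, assuming a hypothetical wearable: each event type gets a distinct vibration motif, and urgency modulates repetition. The event names and the (duration, pause) motifs are invented for illustration; real devices expose their own haptic APIs.

```python
# Hypothetical sketch: encoding events as distinguishable vibration motifs.
import time

PATTERNS = {
    "inactivity": [(800, 0)],                   # one long buzz
    "step_goal": [(100, 100)] * 5,              # several short celebratory buzzes
    "incoming_call": [(300, 200), (300, 200)],  # distinct two-pulse motif
}

def play(event: str, urgency: int = 1) -> None:
    """Repeat the event's motif `urgency` times; intensity could scale too."""
    for _ in range(urgency):
        for duration_ms, pause_ms in PATTERNS[event]:
            print(f"buzz {duration_ms}ms")  # stand-in for a motor-driver call
            time.sleep((duration_ms + pause_ms) / 1000)

play("incoming_call", urgency=2)
```

The design insight is that the skin learns motifs the way the ear learns words: as the vocabulary of patterns grows, the channel carries more information without ever engaging the eyes.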

The haptic perceptual system can easily learn to recognize distinctive patterns, and it will fit together with augmented vision and all the other senses to provide a much more dynamic model of exchange with the world. WIRED magazine calls this the "alphabet of haptics" and touts it as a new mode of communication, one that continues the progression towards felt language after we have gone from spoken language to written language.

It is believed that in early Upper Paleolithic times, language evolved as a way to describe the feed of sensations and images that humans were subject to, and so to explain the world. Language, however, is not a prerequisite for thought: it is widely believed that sentences and propositions are not the only way we think, and this too can be changed. A recent research paper by the Google Brain team, which mentions maps, diagrams and mathematics as alternative ways of thought, argues for a new view of artificial intelligence that "changes and expands human thought". It shows examples of new cognitive software interfaces that change the representations of how we think by surfacing the world in new ways. So just as language is an abstraction of reality, I believe we will try much harder to create new abstraction techniques that in turn create new ways of understanding the world, thereby making humans more efficient agents of reality. One of the same authors, Michael Nielsen of Y Combinator, explains in another essay: "You begin to think with the interface, learning patterns of thought that would formerly have seemed strange, but which become second nature. The interface begins to disappear, becoming part of your consciousness."

Part 7: Changing our collective reality

Human-to-human communication of thoughts, feelings, and desires

In early human times, language brought with it a cultural explosion because we could finally hold beliefs about the world, communicate them to others and possess intentions to act in the world. We are in fact social creatures, and we build our world by collaborating with others through common language. The new interfaces discussed above will amplify these connections and redefine our shared reality and our relationships with each other.

The internet has always been a connection machine, and today we are experiencing not only our own lives but everyone else's at the same time. We are seeing the world through each other's eyes, with cameras that are on us at all times. Our camera can break reality down into 1s and 0s and transfer them over 10,000 km in real time to another machine that friends are looking at, and that is absolutely amazing by itself. While form factors change and improve, our desires and hopes don't change; it is how we experience them that changes. We have always wanted to travel and meet people, but it is the car that enables us to do so easily, and it was the telescope that helped us discover the world beyond our understanding. We are social creatures, and the future is in more collaboration and more empathy, with XR touted since its conception as "the empathy machine".

The nature of being human is that people put parts of themselves into others; we are communicating beings with a drive to become one with everything, and this has been going on since the start of human history. We work together as humans to construct new perceptual realities, even if we all see and feel different things at different times through a unique individual perspective. While we live to influence the world and manipulate it in a way that makes sense to us, we also change the world for everyone else. Matt Miesnieks of Super Ventures, a US-based XR-focused fund, writes: "I personally believe that AR won't really affect all our day-to-day lives until AR lets us communicate and share in new and engaging ways that have never been possible before." MimesysVR, for example, is a startup working in Paris on collaboration tools for virtual work settings.

On a grand level, we should aim for augmented reality and NIO to bring about a species-wide merging of consciousness, in which everyone understands everyone else and their perspectives, hopefully eliminating any kind of "us vs. them" mindset. This hope is driven by the realization that a major consequence of networking brains together will be neurosocial activities such as the direct transfer of knowledge and creative collaboration. The startup Unanimous.AI raised a seed round in 2017 to continue working on a "wisdom of the crowd" collaborative decision-making product that mimics real-time swarm intelligence in animals like birds and bees. Its founder Louis Rosenberg writes: "When users come together in swarms, they're engaging in a synchronous activity that makes them feel connected to others in real-time. They become part of something larger than themselves." In other news, early NIO research on rats has shown potential in making a rat in the US and a rat in Brazil share knowledge about which levers to press through connected brains, beginning to work together as one nervous system.

When we reach full automation, there will be nothing left for humans but collaboration: helping each other and engaging each other in social play. The internet of today is already a "playground" and a place to discuss political issues, but I return to the point that the interfaces being used today are broken. The goal should be not to distract people away from productive work and problem-solving, but to problem-solve first and leave full-time play as a celebratory event for the age of the fully automated post-scarcity society.

Increasing beauty in the world

Humans have been engaged for thousands of years in manipulating their environment for their own benefit through technology, and this perpetual process of augmenting reality is naturally progressing today with AR and ubiquitous computing. Our artefacts — cars, buildings, appliances and artistic creations — are objects we imagined to enhance our lives and help us better adapt to the harsh environment nature originally bestowed on us. Yet we are accustomed in modern culture to calling everything physical "real" and everything on a screen "virtual". The new digitally augmented reality generated by computer simulations won't be any different from the material augmented reality generated with atoms.

In no time, the line between reality and virtuality will start disappearing. The 3D virtual objects we project onto the world will in fact bring new "real" things into existence, amplifying every existing atom, even the ones in our bodies. Augmented reality will move the fundamental essence of productive work towards designing a new world. Since the digital sphere is easier to code than real-life products, the efforts of everyone involved in the economy will be focused on making the world a better and more enjoyable place not through the production of more "stuff", but through the creation of more interactive art and content that can be layered on top of the real world.

Tools for artists are going to emerge that let anyone create their own virtual imagined worlds. Thanks to machine learning algorithms, adding a snowy mountain or a medieval castle becomes a simple user command. Moreover, people will start bringing to virtual life famous paintings by Salvador Dalí and whole kingdoms from Game of Thrones. Art lovers, gamers and fiction nerds will pay to immerse themselves in these surreal environments, in which they can freely follow their curiosity and surrender all their senses to wonder. AR is unique in that it puts us right inside new narratives of digital environments, narratives produced by designers and developers who put their creations to the test. It takes people out into the world, making them more active, and makes the world a more beautiful place to live in. The Cyberdelics incubator in East London is one place where XR projects are taken to their spiritual and psychedelic limits in order to produce new states of consciousness and powerful emotional responses for the consumer.

In a continuation of what the experience economy is showing us today, people will continue to prefer to engage in mind-engulfing immersive experiences such as these, and even in much simpler but positive ones. As everything becomes more "normal" and déjà vu, we spend a big part of our lives chasing beauty. As we grow older, some of us make a hobby of trekking into nature every weekend, of travelling somewhere new every month, or of going to music festivals where our emotions are hacked with sights and sounds and we unite with other people. It is our own way of chasing and discovering beauty. Beauty is really not just forms and shapes that attract our sight; it is something deeper that invades our emotions and makes us experience everything in new ways, which is exactly what those input-output technologies can do. Beauty's purpose is to change perspectives, and that is what the new augmented reality artists will do.

Tomorrow's experience designers have a mission to create beautiful things that improve people's lives through the new interfaces and IoT products. We have already created so much beauty in the world, and looking back on history, everyone participated in it. We are now on the brink of creating much more.

Part 8: The role and considerations of Venture Capital

The ideas in this essay have been circulating since AI and VR started showing their faces to the world. I am very positive there are a lot of bright entrepreneurs with similar dreams of redefining reality working at this moment on making it happen. And I do believe the time to make it happen has come. The canvas of an augmented reality, however, is brand new, and we will need lots of inspiration.

We always knew these "frontier technologies" were coming, but the big question for VCs is: how do you time these investments? The answer is now. Machine learning, applied to big data, is the main catalyst behind this push into future HCI and augmentation. We can now better deploy computer vision systems, NLP, chip-based edge computing, and intelligent algorithms to find patterns in any kind of data.

On timing specifically, mobile AR is already here (with ARKit and ARCore), but it is a small stepping stone towards HMDs. When asked by Digi-Capital about investing in AR, HTC Vive's Alvin Wang Graylin predicts: "as we see AR and VR merging in coming years. Mobile AR is bringing a new group of developers into immersive computing, many of whom could migrate from mobile to premium platforms…We see new products emerging in 2018, 5G helping to scale usage in 2019, with 2020 as the inflection point for more advanced AR/VR platforms."

As I have been researching the European AR/IoT market for the past few months, I would like to zoom in on the European VC space. Europe saw a 40% increase in total capital raised in 2017 and a 16% increase in deal activity. As we go into 2018, we are already witnessing an increase in transactions going into "deep tech". A recent Dealroom report points to deep tech crossing 20% of total investments in Europe and reaching €9.5 billion in total since 2012, with half of that in 2017 alone (it is the highest-growth category within VC). The UK, Germany and France are shaping up to be the darlings of this transformation on the continent. London, along with Oxford, Cambridge and Bristol, provides outstanding deep-tech talent from world-class universities; Germany can marry Berlin's alternative, customer-centric mindset with a country-wide industrial powerhouse; and Paris is pushing hard to position France as an innovation nation, having made quite a name for itself in 2017 with more capital, R&D and innovation labs.

When investors in Europe think of robotics, wearables and IoT companies, they are immediately turned off by the hardware aspect. Yet the future of hardware is looking brighter as production costs go down and agile development is increasingly implemented. Some new hardware products will definitely fail, but some may succeed in a big way. And it is not just about the hardware; it is more about the premium of the hardware-software combination, as Andrew Parker from Spark Capital explains. Chris Quintero of Bolt also writes that the "focus on hardware is occurring because there's a growing recognition among tech companies that vertically integrated product experiences keep competition at bay". His advice is two-fold: it is the combination of hardware and software that will produce the next successes, and a niche focus is required in order not to go head-on against big tech/GAFA. A great example is the pet-healthcare-focused wearable Whistle.

Going both into hardware and into a niche enables a startup to become defensible by owning the whole ecosystem. Bradford Cross, founding partner at DCVC, believes that a deep tech startup can be really valuable when it "Provides a full-stack fully-integrated solution to the end customer problem from the interface that solves for the need all the way down the stack to the functionality, models, and data that power the interface".

It is really hard to imagine a future where one type of wearable, HMD, robot or nano-implant monopolizes a whole market the way the smartphone did. There will be many devices with specific use cases. Aesthetics and branding will also matter a lot for adoption, especially on the consumer front. Investments like these also take patience and a lot of capital, hence the need for more follow-on money from early-stage investors. According to Matt Turck of FirstMark, frontier tech startups look far behind in metrics at their Series A and B pitches compared to non-frontier startups, and their investors need much more cash for follow-on investing. The dilemma here is that some specialized investors will want to go in at the earlier stages in order not to get diluted later on, but this will be harder to do without those metrics.

I also believe that computing shifts create opportunities in setting up the horizontal building blocks of platforms. For example, the time to invest in the infrastructure and middleware of AR is now: waveguides to project optical illusions into our eyes (Waveoptics, Dispelix), SLAM to map the world (Mapillary, Kudan, Scape), photogrammetry to beam products (Augment) and people (8i) into virtual existence, holographic technology that places volumetric objects in the real world without the need for glasses (Leia 3D, Kino-mo), and even simulated "autonomous" content (Asteroid). These are more likely to go head to head with the thousands of developers at Facebook's Oculus, Microsoft's HoloLens, Google Glass and other corporate IoT players, though we have already seen many acquisitions by these big players. To my knowledge, there is no one-size-fits-all success factor for these kinds of plays; we must really understand the moats, the potential customer margins to target and the exit opportunities for each horizontal startup on its own.

Venture capital is a risky game, but I personally see it as a force for good in a world where big tech has somewhat lost its regard for creating real value for people. VCs are gatekeepers of the future: wherever their capital flows there will be progress, and it is their excitement about emerging technologies that guides how AI, XR and IoT are developed. We should start seeing within artificial intelligence research and development a focus on people, thereby changing the debate from automation fear towards human enhancement and well-being. One of Azeem Azhar's predictions for AI in 2018 echoes this sentiment: "We will increasingly demonstrate how AI is augmenting human capabilities and move the narrative beyond that of AI exceeding human capabilities in speech/image recognition/Go."

I see some VCs pushing for automation solutions as a way to disrupt the enterprise, but on the macro-economic level they have to realize that there will not be much investment in automation if labor continues to be cheap and competitive. There certainly also won't be an acceleration in automation investment if AI continues to be a stand-alone system instead of an embodied one. Furthermore, progress is as much an ethical and moral concept as a technological one. While many investors subscribe to the philosophy of abundance, I personally prefer Murray Bookchin's definition of abundance (he calls it post-scarcity), in which he states: "Scarcity is more than a condition of scarce resources: the word, if it is to mean anything in human terms, must encompass the social relations and cultural apparatus that foster insecurity in the psyche…The human relationships and psyche of the individual in a post-scarcity society must fully reflect the freedom, security and self-expression that this abundance makes possible. Post-scarcity society, in short, is the fulfillment of the social and cultural potentialities latent in a technology of abundance."

I cannot predict what future interfaces will look like, but what I have tried to do is point to ideas and products that have surfaced, and to look at specific problems to be solved and their future consequences. What is obvious is that there won't be one technology that changes the world; they will all converge to redefine what we think is "real". Hopefully this will give me and other investors the conviction to collectively aim for real progress and high social impact.

Jad El Jamous

London, UK


Techpreneur. Cultural innovator. Working on 3 ventures for well-being. LBS MBA2018. Ex Growth lead @Anghami & @Englease. Digital business MiM @IEBusinessSchool.