Hello Tomorrow
Feb 12 · 14 min read

Welcome back to the Hello Tomorrow long-form series: more in-depth pieces that allow you to sink your teeth into specific topics and discover upcoming trends before everyone else.

Want to know what the future will be made of? Get comfy, set 14 min aside and read on.

Planck, Einstein, Bohr, Heisenberg — the names of these great physicists rarely appear in articles presenting our future. However, the quantum theory they developed about 100 years ago determines our everyday life today and will continue doing so.

At the beginning of the 20th century, Planck and Einstein discovered that energy and light aren’t continuously distributed but come in packets, which today we call photons. Then Bohr came to the understanding that the energy of electrons can take on only certain values, and Heisenberg put forward the idea that electrons exist in a defined place only when interacting with something else. While even Einstein admitted that this last addition to the theory was counterintuitive, it was confirmed decades later, and the current information age is largely built upon the principles laid out back then. Without quantum theory, we would not have built transistors and computers. And it works, for real.

Today, we are generating and harnessing the properties of photons — the quantum unit of light — to transmit information through fibre optics or laser beams at the speed of light. This allows us to send digital information around the world in milliseconds.

In a similar fashion, we harness the properties of electrons. The directed movement of electrons provides electricity which we are able to generate, distribute, store and convert into other energy forms. In combination with photons, the properties of electrons allow for today’s digital information processing.

Moving away from the past, we want this article to focus on new, more immediate developments. Hello Tomorrow is in a unique position to preview the near-term future, as we see many startups coming through our Global Startup Challenge, which received over 4,500 applications worldwide this year alone.

As we spend many hours prodding at their data, we have built up a solid idea of what is going on in deep tech. The following article highlights the great work done by some of these entrepreneurs.

Analysing the details provided by our applicants, let’s open a window into the digital future.

What we foresee — core trends

If you only read the following, you will have a pretty good overview of the major trends currently dominating the digital sector.

The Internet of Things continues to transform industries

The term “Internet of Things” was officially introduced in 1999, and since then the manufacturing industry has increasingly applied connected sensors directly on the production floor to collect data about the status of machines or track the location of assets. If, for example, a machine breaks down, connected sensors can automatically locate the issue and trigger a service request, thus dramatically improving operating efficiency. But we knew that already: so far, so old.

The next generation of the Industrial Internet of Things (IIoT) offers more advanced functions. Where sensors previously only warned humans that a machine was failing, the newest IIoT devices help manufacturers predict a failure before the machine enters a dangerous operating condition and eventually breaks down. This ensures better safety for human operators and huge cost savings. How is it done? For instance, the sensor introduced by OneWatt analyzes the sound frequencies of motors; similarly, CARFIT monitors the vibrations of a given car engine. The data obtained is then analyzed by AI-backed algorithms to identify upcoming anomalies.
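The startups above keep their actual models proprietary, but the underlying principle of acoustic or vibration-based monitoring can be sketched simply: compare a machine’s current frequency spectrum against a known-healthy baseline and flag drift. The code below is an illustrative toy, not any vendor’s method; the signals, sample rate and scoring metric are all assumptions.

```python
import numpy as np

def spectral_signature(signal, sample_rate):
    """Return the magnitude spectrum of a vibration/sound recording."""
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sample_rate)
    return freqs, spectrum

def anomaly_score(healthy_spectrum, current_spectrum):
    """Relative distance between the current spectrum and a healthy baseline.
    A rising score hints at developing faults (bearing wear, imbalance, ...)."""
    return float(np.linalg.norm(current_spectrum - healthy_spectrum)
                 / np.linalg.norm(healthy_spectrum))

# Simulated motor: a healthy 50 Hz hum vs. the same hum plus a 180 Hz fault tone.
rate = 1000
t = np.arange(0, 1, 1 / rate)
healthy = np.sin(2 * np.pi * 50 * t)
faulty = healthy + 0.4 * np.sin(2 * np.pi * 180 * t)

_, base = spectral_signature(healthy, rate)
_, now = spectral_signature(faulty, rate)
print(anomaly_score(base, base))  # ~0: the machine matches its own baseline
print(anomaly_score(base, now))   # clearly above 0: the spectrum has drifted
```

In practice, the threshold between “normal drift” and “incipient failure” is exactly what the machine-learning models mentioned above are trained to learn from historical data.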

Others, such as Tellmeplus, go further by developing AI-driven analytics software platforms which can integrate data from various sensors and machines to create customized predictive models. Likewise, Amiral Technologies offers a predictive maintenance solution based on machine learning which allows the constant self-adaptation of the underlying predictive model to yield more accurate predictions.

Beyond machine management, the IIoT can transform traditional, linear manufacturing lines into dynamic, interconnected systems. The need to switch from the Ford-era assembly line to a more dynamic model is driven by the diversification of products: instead of standard products, most manufacturers offer customized solutions to be able to compete in the market. Connected sensors and autonomous systems like the one introduced by Arculus, Hello Tomorrow’s 2017 Industry 4.0 Track winner, increase productivity thanks to an optimized use of machine operating time and existing infrastructure.

Ford assembly line, back in the day, vs. the Arculus modular one in 2019.

Like Arculus in the automotive industry, Safety Line applies a network of sensors and autonomous systems to transform ground traffic at airport sites. All aircraft on the ground are handled by autonomous, electric tugs, and those vehicles are remotely controlled by a central system that optimizes the airport traffic flow.

The third industrial application of sensors is in the field of non-destructive testing — the analysis of material without disassembling it into its smallest units. The team at Senorics builds customized sensors to optically test any given organic material like food ingredients. Its sensor measures the unique “optical fingerprint” of substances by detecting their specific absorption characteristics in the near infrared wavelength range.

Cybersecurity solutions

About 10 years ago, our real, analogue life and our digital life were two separate entities. Today, these two worlds have merged into one for many of us. We manage more and more of our personal and professional lives digitally and are quickly becoming surrounded by smart gadgets and various connected devices.

While most cyber attacks in the past consisted of stealing confidential information by introducing malware onto a computer in a network, entire populations and countries are now vulnerable because of the connected devices around us and in systems like power grids. Why is this the case? Because the security standards of connected devices are often fairly low. For instance, consumer demand and market competition rushed the automotive industry into adding numerous digital services and networked systems to its cars, which led to an increased number of car-hacking incidents. Enter Trillium Secure, whose response to these hacks was to develop a multi-layered cybersecurity platform, based on a combination of encryption, authentication and key-management technologies, to protect the in-vehicle network from intrusion.

One of the main issues in securing IoT devices is the need to adapt security solutions to the small chips and typically low-power operating conditions of these devices. These limitations call for security protocols that consume little energy and require little storage. As a result, suppliers can’t use common, high-standard security solutions but must develop proprietary ones for each product individually. To overcome this bottleneck, Acklio developed a compression-decompression technology that shrinks the internet protocol enough to implement it, along with the associated security protocols, on the chips commonly used for IoT devices, thus enhancing their protection.
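To give an intuition for header compression without claiming anything about Acklio’s actual implementation: the general idea behind such static-context schemes is that both endpoints share a table of rules mapping known, repetitive header field values to a tiny rule ID, so the constrained radio link carries only the ID plus the payload. The rule table, field names and packet layout below are all made up for illustration.

```python
# Toy static-context header compression sketch (purely illustrative;
# real protocols in this space are far more sophisticated).
# Both endpoints share a "context": rules mapping full header field
# values to a one-byte rule ID.

CONTEXT = {
    1: {"version": 6, "src": "2001:db8::1", "dst": "2001:db8::2", "port": 5683},
}

def compress(packet):
    """Replace a known header with its rule ID; keep only ID + payload."""
    for rule_id, fields in CONTEXT.items():
        if all(packet.get(k) == v for k, v in fields.items()):
            return bytes([rule_id]) + packet["payload"]
    raise ValueError("no matching rule; send uncompressed")

def decompress(frame):
    """Rebuild the full packet from the shared context."""
    rule_id, payload = frame[0], frame[1:]
    packet = dict(CONTEXT[rule_id])
    packet["payload"] = payload
    return packet

pkt = {"version": 6, "src": "2001:db8::1", "dst": "2001:db8::2",
       "port": 5683, "payload": b"\x01\x02"}
frame = compress(pkt)  # 3 bytes on the air instead of a ~40-byte IPv6 header
assert decompress(frame) == pkt
```

The saving is what makes standard internet protocols, and the security layers built on top of them, viable on tiny radio chips.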

Another common problem is that the encryption protocols of IoT devices often rely on pseudo-random numbers. Yet pseudo-random number generators are based on mathematical formulas, so they are, in the end, not truly random. By exploiting the quantum properties of light instead of a mathematical formula, a team at Crypta Labs developed a true random number generator and integrated it into a portable hardware security module with which any IoT device can be equipped.

Aside from security solutions for connected devices, there are other novel security concepts emerging. Traditional approaches to data privacy and access control require one central point of trust: a hardware security module or a centralized server managed by a third party. This centralized approach requires trust in the service provider, which may be inappropriate for security-critical applications. Nucypher provides an alternative to this trust-dependent approach by using a combination of encryption and blockchain-like decentralization of access management.

Lastly, another cybersecurity concern is the widespread use of biometrics. From face recognition and iris scans to fingerprints, authentication processes force us to reveal our unique biological information more and more often.

As a result, two main issues arise. First, biometrics are inherently public, so these identifiers are easily accessible to anyone. Secondly, since biometrics reveal part of a user’s identity, they can, if stolen, be used to falsify legal documents, which can cause more harm than a stolen credit card number. And unlike passwords or credit card PINs, you can’t replace physical identifiers if someone has a copy of them.

Pretty grim perspective, isn’t it? The good news is that scientists are working on so-called “zero-knowledge proofs”, which authenticate you without revealing biological information. For example, Crayonic developed a stylus which recognises its owner based on a handwritten PIN code or signature. Once the stylus confirms the presence of its owner, it produces a zero-knowledge cryptographic proof for authentication.

Artificial Intelligence

A recently published report by the WIPO shows that the number of patents and scientific papers in the field of Artificial Intelligence (AI) has doubled in the last five years. Although the initial concept was introduced in the 1950s by Alan Turing, AI remained a fairly niche topic for almost 60 years. We’re now past that. Today, new AI-based applications make newspaper headlines and their implications for society are part of the daily public discourse. People both fear and dream about the potential of AI: we read a lot in the media about AI-generated “deepfakes”, or how DeepMind’s StarCraft-playing AI “AlphaStar” taught itself a highly complex real-time strategy game. The recent advances in AI have already changed our world and will keep doing so, increasingly. AI itself is neither good nor evil; it is up to us to make it work to the benefit of everyone.

The significantly increased activity in the field of AI today is due to advances in machine learning, particularly in the method called deep learning. Thanks to the improved design of these algorithms, in combination with increasing computing power and data storage capabilities, AI has been successfully implemented across multiple industries, from connected sensors and voice assistants to medical imagery analysis, as described in our deep dive into tomorrow’s healthcare.

While AI has many applications, one field stands out: around 49% of all AI patents relate to computer vision, a technique that gives a machine the sense of sight when equipped with adequate sensors.

However, sight doesn’t come with an intuitive understanding of the physical universe. For that, an AI needs training, just as children do. Right now, AIs need far more training examples than children but can go through them significantly faster. Humans learn how the world works by observing it, yet the underlying molecular mechanisms of learning in the brain are not well understood. What we do know is that our biological neural networks are good at interpreting visual information even when an image doesn’t look exactly like the ones we learned from. Moreover, despite a large field of vision, we can focus on a single detail amid a huge amount of background information. And finally, we perform these processes at incredibly low energy consumption. While current state-of-the-art machine learning is very far from achieving the energy efficiency of our brains, we are getting closer to human image perception, as the following examples show.

In fact, a lot of preprocessing work is done on input images to reduce background information, thereby creating a human-like focus on the details of interest. A smart, novel approach to achieving this was introduced by a team at Insightness. Their novel vision sensor compresses the data directly in the pixel circuit without compromising dynamic range, thus saving the computing power needed to analyze the detail of interest.

At the same time, a long-standing dream in computer vision is to create a robust AI system that does not require pristine input data for accurate output. In other words, scientists are trying to teach machines to navigate the world using common sense. A team at ZAC developed a more general AI algorithm which takes us one step closer to this goal. Their algorithm recognizes 3D objects from any direction while requiring far fewer training samples than the AI approaches commonly used today.

Others such as BlinkAI introduce a deep learning approach to reconstruct images taken in low signal conditions, thus enhancing image fidelity which is particularly important for radar applications.

In contrast to the solutions presented above, which push the technical frontiers of machine sight further, Heptasense uses state-of-the-art image recognition to create a human behaviour model. Its AI-backed algorithm analyzes footage from surveillance cameras to detect security incidents based on its behavioural model, differentiating it from other solutions which take gender, ethnicity or age into account.

Under the radar — picking up on new trends emerging

Now that you’re up to date on the core trends, stay onboard to explore our analysis of the more niche topics and quench your thirst for knowledge with these under-the-radar emerging trends.

Network technologies

Widely announced in the media and tested at huge events such as the Olympic Games in South Korea, the connected part of the world is looking forward to replacing the current 4G mobile communication standard with 5G. 5G will be able to handle a trillion connections, supporting a practically unlimited number of connected devices. It will also cut network latency from about fifty milliseconds to just a few, and provide bandwidth of a gigabit per second under optimal conditions.

And yet, other parts of the world are not even on the map of the global digital infrastructure. Less than 60% of the world population has access to the internet. Since much of our economic growth depends on the capability to quickly exchange data and retrieve information, the gap between connected regions and those without access to the global network keeps widening.

Mynaric’s laser technology

The good news is, there are solutions in sight to digitally connect those regions. Mynaric’s laser technology uses accurately steered laser beams to wirelessly transmit large amounts of data across wide distances between airborne or spaceborne flight platforms and the ground — they literally beam down high-speed internet from space into any remote part of the world.

Others aim at providing connections for IoT devices in remote areas, which do not require the same bandwidth or latency as internet-providing networks. Helios Wire uses satellite constellations to cost-effectively provide narrowband connectivity to retrieve data from sensors installed in remote areas. Another approach is taken by Fleetspace Technologies, which developed stand-alone, smart gateways that collect and analyse data from IoT devices within a radius of 15 km. The analysis of the integrated sensor data is uploaded via satellite links, which cuts down the amount of transferred data and consequently the costs. In contrast, Mesh++’s solution expands a wireless broadband network across the area of a hundred football fields from a single internet access point on the ground, thanks to their self-powered, highly efficient routers.

While the above-listed solutions can connect remote regions in the world to the internet, the team at Humanitas have developed a stand-alone, wireless network for use in emergency situations when the local network is down. Once deployed, it allows communication at up to 20 Mb/s within 160 km between any off-the-shelf devices such as smartphones, tablets or smart gadgets without the need for any pre-existing network infrastructure.

A different connectivity problem is tackled by R3 Communications. Today, wireless links are barely used in industrial machine-to-machine communications because the latency and reliability of existing wireless systems are insufficient for industrial applications. The software developed by R3 Communications, named EchoRing™, turns standard radio chips into real-time wireless communication systems with low latency, which enables time-critical industrial applications.

Last but not least, StealthCase developed passive signal-repeating structures that allow mobile signals to pass through walls. These can be embedded in construction materials to make walls “mobile-friendly” and achieve significantly better signal penetration into large buildings. This could prove particularly valuable once the 5G mobile standard is deployed, since 5G is projected to use frequencies above 20 GHz, which are easily blocked by building walls.

A glimpse into the far future

Now that you know pretty much everything about core trends and niche topics, we are also able to build a longer-term view of what the digital future holds. Let’s go deep into the lab.

New computing systems

Today, we exploit known physical phenomena to efficiently compute difficult problems.

Credit: IBM Research Flickr

The most recent example is the quantum superposition and entanglement used by quantum computers. In contrast to bits in today’s computers, which can be either 1 or 0, the so-called qubits in quantum computers can represent various combinations of 1 and 0 at the same time. This admittedly counterintuitive feature is called superposition and originates from quantum theory. It allows quantum computers to explore a vast number of potential solutions much more quickly.

The second phenomenon, quantum entanglement, describes the fact that the state of one qubit influences the state of another in a predictable way, even when they are separated by very long distances. Consequently, adding qubits increases the computing power of quantum machines exponentially, whereas doubling the bits in today’s computers only doubles processing power.
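The exponential growth is easy to make concrete. A classical n-bit register is in exactly one of 2^n states at a time, but fully describing an n-qubit register in superposition takes a vector of 2^n complex amplitudes. The sketch below builds the uniform superposition by applying a Hadamard gate to each qubit; it is a textbook illustration, not a description of any specific quantum computer.

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)  # Hadamard gate

def uniform_superposition(n):
    """State vector after applying a Hadamard to each of n qubits,
    starting from |00...0>: every basis state gets equal amplitude."""
    state = np.array([1.0])
    zero = np.array([1.0, 0.0])  # single-qubit |0>
    for _ in range(n):
        state = np.kron(state, H @ zero)
    return state

for n in (1, 2, 10):
    s = uniform_superposition(n)
    print(n, len(s))  # the state vector doubles with every qubit: 2, 4, 1024
```

Ten qubits already require 1,024 amplitudes to describe; fifty would require about 10^15, which is why simulating even modest quantum machines strains classical computers.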

These physical phenomena combined are the basis of the massive computing power of quantum computers. However, to control these phenomena, the quantum computer has to be cooled down to cryogenic temperatures and any vibration must be avoided to not disturb the state of superposition and entanglement. For this reason, quantum computers won’t replace desktop computers any time soon.

Yet, it doesn’t mean that there aren’t other, non-quantum physical properties that one may leverage to compute more efficiently. In fact, scientists at MemComputing developed a system based on memory that allows distant parts of a machine to correlate with each other efficiently, without resorting to the entanglement of quantum mechanics. This dynamic set-up shortens the computational time required to solve a problem while also decreasing the amount of storage and energy used.

Others focus on developing memory modules based on a new architecture to reduce the performance gap between CPUs and RAM. Besides providing greater memory speed, the memory developed by BlueShift Memory is also useful for AI applications based on deep learning, thereby leading the trend away from general-purpose computers towards task-optimized, specialised ones.

Metasurfaces

Optical metasurfaces are nano-patterned layers that strongly interact with light. Instead of relying on lenses and filters to manipulate the properties of light, this method is based on nano-scale structures. These nanostructures capture the light and re-emit it with defined properties, thus allowing the sculpting of light waves with unprecedented accuracy. This opens up numerous applications, such as deep penetration of light into biological tissue, which is hardly possible with today’s equipment, or high-resolution 3D imaging. Scientists at Greenerwave developed a digitally reconfigurable metasurface for imaging applications in the automotive industry. Such metasurfaces can also function as signal enhancers for improved mobile or satellite communication.

Similarly, Metasonics has developed acoustic metamaterials allowing the creation of precise three-dimensional acoustic landscapes, which are needed for numerous applications, from personal audio spotlights and ultrahaptics to the non-destructive evaluation of materials, medical therapeutics and imaging.

– Nicolas Goeldel, PhD, Deeptech Lead at Hello Tomorrow –

If you would like to keep up to date with news from the science entrepreneur community, follow Hello Tomorrow on Twitter, LinkedIn & Facebook, or come see us and the startups mentioned here at the Global Summit in Paris on 14–15 March 2019.

Hello Tomorrow Stories

Unlocking the power of deep technologies to solve some of the world’s most pressing issues. We are a nonprofit initiative run by science entrepreneurs, for science entrepreneurs. Find out more at www.hello-tomorrow.org
