Vision of the future embraced by Virtualia

The Virtualia Team
40 min read · Jun 2, 2022


One defining trait of the Virtualia team is that we craft a vivid vision of the future that keeps us striving.

Let’s have a look at the next technological steps for the coming years. By 2030, Virtualia could deeply rely on many technologies to work smoothly across all regions of the planet in cities as well as in rural places (offline mode).

However, there are 4 challenges to overcome:

TOPIC 1: A farm of exascale green supercomputers combined with Virtualians' home devices, millions of TeraFlops-class mobile devices, 6G telecommunications, and cloud computing, all managed by a smart AI engine that processes heavy 3D data streams for virtual reality services, fuels the virtual worlds, and makes them accessible offline.

TOPIC 2: AI human-like interactivity for an enhanced shopping 3.0+ experience, including targeted conversation, entertainment, and recommendations for every visitor of a physical store, delivered directly to their phone or wearable device such as AR lenses, glasses, or a connected watch.

TOPIC 3: Nanotechnology and 3D printing for custom Augmented Reality (AR) lenses and eyeglasses.

TOPIC 4: Blockchain P2P payments for virtual services, and delivery of connected objects using autonomous vehicles and drones for logistics.

In 2022, how far are we from that goal? And what are the challenges in those four topics?

First topic: smart AI super-computers grid

Before talking about the AI part of the topic, we need to compare brains vs supercomputers as supercomputers are leading the way toward AI. Read below to learn how brains and supercomputers differ from each other and process at such great speeds.

Brains vs supercomputers

For now, brains and computers work in totally different ways. But what are the key benchmark figures that would help us make some comparison between human brains and computers?

Matching those benchmark numbers will not, by itself, bring AI closer to humans: the harder challenge may not be hardware but improving how computers work, closing the gap with humans in areas such as real-time multitasking and real-time spatial data analytics, to name just two.

✔ Processing Power: 1 Exaflop. This will be easily overtaken by the next generation of supercomputers.
The human brain is estimated to operate at about 1 exaflop[1]. Even though we don't know exactly how the brain works, this is the order of magnitude suggested by our understanding of neurons, gray matter, and synapse connections (a short back-of-envelope sketch follows the checklist below).

✔ Frequency: 1 kHz vs the 3.2–3.6 GHz range
Think of our brain as a network of 100 billion 1 kHz processors. However, the comparison stops there: each neuron is connected to thousands of other neurons (about 10¹³ connections in total), we process data in no particular order (while most digital computers process serially), with different layers of task priority and no central clock (neurons can fire at any time; there is no central synchronization system equivalent to a computer's clock speed). We do use different frequency bands you may have heard of, such as alpha (5–8 Hz), beta (9–12 Hz), and gamma (40–80 Hz), but they emerge spontaneously and in parallel in different parts of the brain. It is interesting to note that neurons' average firing rate seems to be in the 0.1–2 Hz range, with peaks up to 1000 Hz, but those are still rough numbers and theory today. In short, we don't know exactly how the brain works or what its architecture is (taking a weighted average of inputs to get an output is overly simplistic).

✔ Architecture: unknown.
As we will see, neuromorphic or quantum computing may be a better way to model the brain than digital computers with their weighted-average encoding of input signals. The brain handles information statistically, working with estimations of reality (perception) rather than exact input values. In the meantime, a neuron's transmission can be modeled as a binary signal: firing or not firing.

✔ Longevity: 50 years vs 5 years.
We took 50 years as the average span over which a human being stays in good mental health and at 90% of maximum cognitive capacity. Supercomputers, on the other hand, become obsolete quickly; that is not a flaw in itself, but innovation keeps shortening their useful life span, and will keep doing so until physical limits or technological ceilings are reached.

✔ Power: 20 W vs 2–50 MW.
You read that right: the brain is roughly a million times more power-efficient than top-ranked supercomputers. Moreover, on a per-neuron basis, human gray matter uses about 2 nanowatts per neuron[2].

✔ Energy efficiency: 50,000 TeraFlops per W vs 0.01 TeraFlops per W at best (Green500).

✔ Memory: In the range of Petabytes. Matched by top supercomputers.
As neuroscience progresses, our estimate of the brain's memory capacity may change by several orders of magnitude, so those figures have to be taken with a pinch of salt. To put it in perspective, since there are about 8 billion human brains, the total human capacity on Earth is in the range of 10,000 zettabytes, while world data should reach around 100 zettabytes next year. By 2040, the total data created worldwide by private and public IT infrastructures, utility infrastructure, cloud data centers, personal computing devices, and IoT should match the storage capacity of all human brains combined.
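To make these checklist figures concrete, here is a minimal back-of-envelope sketch in Python. Every constant (neuron count, synapses per neuron, firing rate, FLOPs per synaptic event, bytes per brain) is a rough assumption or an estimate quoted above, not a measured value.

```python
# Back-of-envelope checks of the brain figures quoted in the checklist above.
# Every constant is a rough assumption or an estimate cited in the text.

# Processing power: neurons x synapses x firing rate x assumed FLOPs per synaptic event
neurons = 1e11                  # ~100 billion neurons
synapses_per_neuron = 1e4       # thousands of connections each (assumption)
firing_rate_hz = 1.0            # average rate, roughly 0.1-2 Hz
flops_per_event = 100           # assumed cost of modelling one synaptic event
print(f"~{neurons * synapses_per_neuron * firing_rate_hz * flops_per_event:.0e} FLOPS")
# ~1e17 FLOPS, within an order of magnitude of the 1 exaflop estimate[1]

# Energy efficiency: 1 exaflop on a 20 W budget
print(f"~{1e18 / 1e12 / 20:,.0f} TFlops per W")              # ~50,000 TFlops/W

# Memory: ~1 PB per brain, aggregated over ~8 billion people
print(f"~{1.25e15 * 8e9 / 1e21:,.0f} zettabytes in total")   # ~10,000 ZB vs ~100 ZB of world data
```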

However, it is generally accepted that human brains easily win on prioritization, creativity, and energy use, while computers win on task processing across volume, velocity, variety, and veracity, as well as logic and math.

Will supercomputers be necessary to handle our virtual worlds?

It is an important question that helps determine the future of our virtual worlds. In addition, the usage of supercomputers also enables us to estimate the quality of virtual worlds. Read below to find the answer to this question.

For our virtual worlds, we estimate about 10 billion 3D assets will be created, each in the gigabyte range, so a total of about 10 exabytes is required, a 0.01% share of the total world data created by next year, or roughly a tenth of the video streamed by Netflix to its 200M subscribers.

In addition, the Virtual Shop catalog target is 1M assets over the first years of existence, with an average of 30 MB per asset, for a total of 30 TB of storage capacity, plus 1M daily users consuming 1 GB of bandwidth a day, for roughly 0.3 exabytes of downstream traffic per year.
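For reference, the catalog and bandwidth figures above come straight from this arithmetic; note that the 0.3 exabytes corresponds to roughly a year of downstream traffic under these assumptions.

```python
# Storage and bandwidth estimates for the Virtual Shop catalog, as stated above.
assets = 1_000_000
avg_asset_bytes = 30e6                  # ~30 MB per 3D asset
print(f"Catalog: ~{assets * avg_asset_bytes / 1e12:.0f} TB")            # ~30 TB

daily_users = 1_000_000
bytes_per_user_per_day = 1e9            # ~1 GB downstream per user per day
downstream_per_day = daily_users * bytes_per_user_per_day
print(f"Downstream: ~{downstream_per_day / 1e15:.0f} PB/day, "
      f"~{downstream_per_day * 365 / 1e18:.2f} EB/year")                # ~1 PB/day, ~0.37 EB/year
```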

Here is a table to put numbers in perspective

2010 and beyond capacities in perspective
Second part

Let’s come back to human brains

The human brain seems to be made of slow independent components (neurons), but its complex wiring and extreme parallelism make it yet unmatched by today’s supercomputers.

And yet, this computing is limited by the many data biases it receives:

  • Cultural, vocabulary, ethical, philosophical, and educational biases in judging the raw data received from our senses.
  • Confirmation biases in real-time audio (heard by the ears) and video (seen by the eyes).
  • Subjectivity and cultural biases in the perception of tastes and smells (what is perceived as good or bad).
  • Real time-varying spatial accuracy given changes in self-orientation perception (am I perfectly aware of where I am?), orientation speed (am I stable or moving fast?), coordination boundaries (what are the possible movements right now?), and external physical limits (environment).

The brain handles these types of complex computational work every second, simultaneously and with all those biases. Now think of everything else your brain performs without your being fully conscious of it, like breathing, keeping the heart beating, and delivering the right amount of nerve stimulation for proper movement coordination.

Furthermore, all those cognitive responses would have to be modeled. While many consider supercomputers a close match for the human brain, we think they are in a very different league. Supercomputers outperform humans at boring, repetitive, and easy tasks, at mathematical and logical computation, and in areas like classification, identification at lightning speed, and massive data storage and access. However, clear limits appear in multitasking, creativity, self-learning, and ranking data by quality.

It is clear that computers are getting faster quickly, following Moore's Law[9], or a slowing version of it, but still growing at an astonishing exponential rate[10].

Sourced from Erich Strohmaier, a senior scientist at Lawrence Berkeley National Laboratory

DeepMind and leading robotics corporations have made impressive leaps forward, but we are still two major technological breakthroughs away from matching overall average human capabilities, let alone geniuses:

✔ The first breakthrough is software/data related: software needs to catch up with hardware capabilities, given the never-ending increase in data stored every second, and we need analytical tools to understand that data and to train AI with quality data.

✔ The second breakthrough is hardware related, and we foresee it coming through neuromorphic or quantum computing, which will help us deal with the ecological constraint. As more data and computing power become available to the public, reducing the amount of energy required is mandatory if we ever want AI robots at home, given how much energy exaflop computing currently consumes. Otherwise, this technology would remain restricted to big corporations and billionaires.

Both challenges can be summarized this way: it is forecast that, theoretically, by 2040 we will have the computing power of a human brain on any personal device. We think that statement is very optimistic, not only because energy consumption limits (around 200 W on average per home device, much less for laptops or mobile devices) and miniaturization are roadblocks, but also because software needs to catch up with hardware, not to mention that all devices may need to be recycled or adapted as old digital technology becomes obsolete with the rise of quantum computing by that time.

First required breakthrough: Improving data software digestion

We need to better prepare the data that supercomputers and AI will digest, and keep up with the rate of growth we are experiencing. Data preparation will still be performed by humans for the foreseeable two decades, and it requires intensive years of university study in mathematics and computer science, with ever earlier specialization as IT becomes more complex, broader, and even multi-disciplinary (think neurosciences). We also need to prepare a full generation of data professionals, a lot of them; we probably do not have enough well-trained people in a given generation to match the needs of SMEs wanting to take advantage of these technological advances. In addition, we might face a technological gap between big tech and the remaining corporations, not to mention individuals. Finally, we need to deal with increased complexity in data security and integration, and build new tools to handle these growing amounts of data, both for top supercomputers and for home devices with limited power consumption.

It's not only about raw data but about getting quality data. IoT devices such as sensors and embedded systems will pre-process the raw data and feed supercomputers with far more useful, qualitative data, making digestion much safer and easier.

The switch from IPv4 to IPv6 is a huge advancement[11], particularly for the data cleaning performed by those IoT devices. It will also help a lot with data digestion and AI training.

A second probable solution is cloud computing, with home and mobile devices acting only as front ends performing specific tasks, while the daunting calculations are handled by edge networks, data centers, or supercomputers.

That's why the whole Virtualia infrastructure is built around cloud computing, HPC and many-task supercomputer farming, and the interconnection of Virtualians' home devices, game consoles, and mobile phones, which both run the blockchain and participate in the overall computing power grid. This is a big difference from other tech companies, which rely solely on private data centers to solve these data challenges. Just to give an idea, 10M connected devices each contributing 1 TB of free space by 2030 would provide 10 exabytes of data storage, enough to fuel the virtual worlds as suggested earlier. Virtualia is yours and you are a part of Virtualia.
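A quick sanity check of that pooling figure, under the stated assumption of 10M devices contributing 1 TB each:

```python
# Pooled storage from Virtualians' devices, under the assumption stated above.
devices = 10_000_000            # 10M connected devices by 2030 (assumption)
free_bytes_per_device = 1e12    # ~1 TB of free space each
print(f"~{devices * free_bytes_per_device / 1e18:.0f} EB of pooled storage")  # ~10 EB, the 3D-asset budget above
```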

Today's supercomputers are roughly 20 years ahead of mobile devices. However, megawatt electricity bills are not conceivable at home. The top supercomputers of 10 years ago are today's home desktops or gaming consoles. Data centers are about 5 years behind supercomputers, while the tail of the Top500 list lags the number 1 supercomputer by about 8 years.

Nowadays, personal devices' power depends more on graphics card improvements (GPU rather than CPU, especially for 3D and image processing). We generally use a 1000 USD desktop computer as the reference benchmark. According to this data analysis article[12], "the Passmark trend seems to suggest the 95th-percentile GPU price / single-precision FLOPS of GPUs has fallen by around 13% per year, for a factor of ten in ~17 years". Even assuming significant inflation, with $1000 today being worth $1500 in 10 years, we can still estimate that the gap between personal computers and supercomputers will widen by a very large margin over a decade.
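As a quick sanity check of that quoted trend, a 13% yearly decline does compound to roughly a factor of ten over about 17 years:

```python
# Quick check of the quoted GPU trend: a 13% yearly drop in price per FLOPS
# compounds to roughly a factor of ten over ~17 years.
annual_decline = 0.13
years = 17
factor = (1 - annual_decline) ** years
print(f"price per FLOPS after {years} years: x{factor:.2f}")  # ~x0.09, i.e. about one tenth
```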

This analysis emphasizes the need to invest heavily in a top supercomputer for zettaflops-scale tasks by 2030, and in smart computer grids backed by a data center cloud for storage.
On the other hand, if we are optimistic and assume personal computers achieve the same energy-efficiency gains seen in the top 100 supercomputers of 2020, while keeping the same power consumption, we should reach 1-petaflop personal computers thanks to graphics card performance, with top gaming desktops (such as Alienware) an order of magnitude faster. It is important to note that the average desktop consumes about 200 watts and a laptop about 30 watts. Will it be possible to reach this computing power while keeping the same energy-efficiency trend as supercomputers? Keep reading, as we answer that question a little further down.

Console farming is also part of the solution; a game console is generally representative of a mid-to-high-end desktop computer (and generally costs less than 1000 USD). On top of that, its architecture and usage are optimized for gaming. Consoles generally catch up with PC GPUs, with major releases every 5–8 years. Over a span of 20 years we saw an increase of about x5000 (roughly 50% per year), and over the past 8 years about 27% per year, a trend that supports the idea of energy consumption limits for home devices as well as a widening gap with top supercomputers.

Top consoles announced Flops past 20 years

If we consider 30 million unit sales for a recent console generation, with 500 GB of storage on average, we estimate an accumulated 15 exabytes of storage capacity. Unit capacities have stagnated in recent years, so assuming 1 TB of free space on individual home devices in 2030 sounds realistic too.
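As a quick check of that accumulated figure, using only the numbers just quoted:

```python
# Accumulated storage of a single console generation, from the figures above.
units_sold = 30_000_000
storage_per_unit_bytes = 500e9   # ~500 GB on average
print(f"~{units_sold * storage_per_unit_bytes / 1e18:.0f} EB across the installed base")  # ~15 EB
```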

If we look at mobile devices[13], GPU performance has increased at a rate of 86% per year, and more recently 33%, again an observable slowdown. Storage has increased x33 in a decade, so we can expect mobile devices to reach 1–16 TB by 2030, with 128 GB of memory and 10 TFlops, a 10-year lag behind consoles and average desktop computers. This analysis nevertheless makes us confident that top mobile devices will be able to handle augmented reality, navigate virtual worlds, and scroll 3D virtual stores on the Virtualia network.

Top iPhone models Storage, Ram and GPU GFlops

Clearly, GPU computing on mobile devices is required, and Virtualia will only integrate them into the network if such improvement materializes during the decade.

What about AR lenses?

We can speculate that lenses or AR glasses will probably lag mobile devices by 10 years, i.e., lag consoles or the average home desktop by 20 years. Still, by 2030, 1-TFlops AR glasses with 10 GB of memory and 1 TB of storage should be enough to deliver our 3.0+ enhanced shopping experience. This lag could be shortened by a technical breakthrough in nanotechnology or quantum computing; we bet that will occur after 2030, so the gap between AR glasses and home desktops should shrink by several factors in the 2030–2040 era. We also expect a big slowdown in home desktop computing progress over the next decade. Why? Let's have a look at the energy efficiency trend.

About energy efficiency trend and how it affects Virtualian business model

While we observe an exponential law in energy efficiency, now measured in GFlops per watt, the trend observed for the Green500 supercomputers is slightly slower than the overall trend of the Top500, meaning energy efficiency does keep up with the increase in computing power, but with a small lag.

Current trend of Top supercomputers

However, we should not worry per se about the 3–4% yearly increase in the average for the top 100 supercomputers, as efforts have recently been made to make the overall industry greener. Solutions will be found, and the trend may even reverse, i.e., overall energy consumption may start to decrease.

If we consider the Top500 list[14], the totals average 886 MW across 242 respondents in 2015, and 1753 MW across 180 respondents in 2021. That is a change of about 12% per year, and it shows that not all supercomputers are putting effort into becoming greener. The difference in trend between the top 10 and the average of the pool (about 10%) shows that the effort to become greener is not taken seriously by all players. Adding more megawatts for an extra boost in computing power is probably more attractive than increasing the budget to improve energy efficiency.

What about home desktops?

If we apply the Green500 supercomputers' energy-efficiency trend and compare it with the energy consumption of a home desktop, we can see that by 2030, with 200 W average home desktop computers, we could easily manage 100 TFlops. Indeed, a simple projection of the top Green500 trend, at 33% per year from 2021 to 2030 (a 9-year window), gives around 500 GFlops per watt[15], i.e., 100 TFlops for a 200 W home desktop. Today, for 200 W and a $1000 budget, we can find home desktops of up to 5 TFlops, i.e., about 25 GFlops per watt, which confirms, with a back-of-the-envelope calculation, that the average Green500 energy-efficiency factor is close enough to that of home desktop computers (on a logarithmic scale, i.e., within a factor of 2–4).
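The projection above can be reproduced in a few lines. The 2021 baseline (roughly the top Green500 entry) and the 33% yearly growth are the article's working assumptions, not a forecast.

```python
# Extrapolating the Green500 efficiency trend to a 200 W desktop, as done above.
baseline_gflops_per_w = 39     # roughly the top Green500 entry in 2021
annual_growth = 0.33           # ~33% per year (working assumption)
years = 9                      # 2021 -> 2030

efficiency_2030 = baseline_gflops_per_w * (1 + annual_growth) ** years
desktop_tflops = efficiency_2030 * 200 / 1000   # 200 W average desktop
print(f"~{efficiency_2030:.0f} GFlops/W -> ~{desktop_tflops:.0f} TFlops at 200 W")
# ~500 GFlops/W and ~100 TFlops, matching the figures above
```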

Another benchmark is to consider game consoles as they tend to match the average home desktop computing power. From 2013 to 2021, the game console computing power trend was an increase of about 27% per year which is slower than the trend observed on supercomputers. By 2030, consoles should peak at 100TFlops.

Another benchmark is to look at the top 10 GPU cards as the key peaks in computing power come from massive parallel capabilities offered by graphic cards.

While we are confident about 100 TFlops, reaching 1 petaflop seems harder, because we feel the home computer trend will be slightly too slow, for many reasons, to match supercomputers' computing power and energy-efficiency trends. We also think that most computers will rely on cloud storage and cloud computing rather than their own engine for demanding tasks such as rendering.

Another key point to consider is the trend in graphics cards, as home desktops will basically rely on the horsepower of those cards to achieve that 1-petaflop expectation by 2030. Let's now have a look at Passmark data to compute the cost per GFlops.

AI Impacts (aiimpacts.org) analyzed data up to 2017[16] and again in 2019[17]: extrapolated, the log cost per GFlops would reach roughly -5 by 2030, i.e., 1 cent per TFlops, or 300 TFlops per $1000 home desktop (considering graphics cards to account for 1/3 of the total cost). For half-precision FLOPS, the 95th-percentile Passmark data they scraped projects roughly 10¹¹ FLOPS per $ by 2030, i.e., 30 TFlops for a $300 GPU card, with a big difference between the high-end range (1 PFlops, or 3×10⁻¹² $/FLOPS single-precision) and the low range (about 10 TFlops, i.e., 2 orders of magnitude less than top graphics cards).

Once again, the average $1000 computer of 2030 should achieve 100 TFlops rather than 1 PFlops. Only computers in the $5k–10k range would allow 1 petaflop, out of reach for the average Virtualian.

Note

It's important to remember that this is a simple trend analysis, and that computing power expressed only in floating-point operations per second depends on the task (rendering, crypto, etc.) and on which precision you use for the benchmark. That's why graphics card specifications come with additional information such as FP16 (half), FP32 (float), and FP64 (double) performance, but also pixel fill rate, texture fill rate, TMUs, ROPs, Tensor cores, RT cores, and ray-tracing performance. A deep analysis can become complex, also because consistent metrics over a 10- or 20-year span of graphics cards are missing, not to mention that we were pushing CPUs long before switching to GPUs and that software architectures depended on the industry culture at the time of their release. Further, an analysis of miniaturization and size constraints (fitting a small home desktop motherboard), limits of cooling systems, and a better sample for cost per GFlops would complete the picture.

Finally, how many Virtualians will use a home desktop versus the comfort of a mobile device (10-year lag) or a laptop (5-year lag)? This distribution is also an important consideration when pooling Virtualians' computing power.

Coming back to the data handling challenges

With supercomputer home desktops, whether at 100 TFlops or 1 PFlops, we can assume a similar exponential trend in data storage. Accordingly, if we don't improve how software handles massive data, and train the average Joe to do so, home devices in 2030 will not be able to fully digest and take advantage of their data capabilities. One way around this would be to host a custom program developed by Virtualia engineers that handles these daunting tasks remotely, rather than relying on existing software or on the average Joe to exploit his supercomputer home device.

This pushes Virtualia's strategy toward developing, in the coming years, internal software dedicated to handling 3D data, visual graphics, and crypto analytics, tuned to the architecture of the average Joe's home desktop, to profit from the large-scale pooling effect.

By 2040, the theory goes, a home supercomputer will match a human brain. Once again, software development, training the average Joe to use his computer's capabilities, or having automation handled in the background by AI becomes critical. If we consider a probable breakthrough in quantum computing by 2040, the challenge of having adequate AI software for this new architecture becomes even more important than the raw computing power itself. It's like putting a space shuttle rocket in your car without the car structure or the right real-time embedded software to manage the horsepower and drive carefully down the street.

As someone summarized it well[18], "the problem with hardware moving too fast is software has tried to keep up by releasing poorly tuned code. […] The cool thing is that if Moore's law should fail, we can tune the software to achieve higher quality instructions instead of simply more instructions!"

What about the electricity demand?

Another critical point to consider is electricity generation capacity. As the Virtualia philosophy is to build on green energy sources or employ existing resources (pooling dormant average-Joe home devices), we must look at the expected trend in electricity consumption by 2030 and whether supply can answer the demand without skyrocketing electricity bills, as we are experiencing in 2022.

The electricity grid will already be in heavy demand from autonomous vehicles, drones, and smart-grid cities. Hence, the question is whether the increase in electricity production will move even more slowly than the real need for powering personal devices, the cloud, and supercomputers, with their never-ending growth in data storage, streaming, and calculations for virtual worlds. Even if the computing power question can be answered, the critical question is: at what cost? More demand than supply, even if eventually matched by new sources, may push the average kWh price into very expensive territory over the next decade. This is a complex question requiring data on what the energy cost would be of having one billion people in different virtual worlds, not only Virtualia's but competitors' as well. Will it be a modest demand below 1 Tn kWh, or much bigger and thus much more demanding in terms of increased electricity production?

To answer such a complex question, we must divide it into several smaller ones (a rough back-of-envelope sketch for Q3 follows the list):

✔ Q1. What is the currently expected trend in electricity consumption by 2030?

✔ Q2. Is the current electric consumption assumption correct for the decade?

✔ Q3. What will be the need for running virtual worlds if one billion people use virtual worlds actively?

✔ Q4. Can it be produced by renewable energies and at a reasonable cost?
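To give an order of magnitude for Q3, here is a rough, purely illustrative back-of-envelope. Every constant (device power, daily usage, backend overhead) is an assumption, not data.

```python
# Rough sketch for Q3: electricity needed if one billion people use virtual worlds actively.
users = 1e9
device_power_w = 100        # assumed average draw of a device while in a virtual world
hours_per_day = 2           # assumed daily usage
backend_overhead = 2.0      # assumed multiplier for cloud, network, and rendering backends

kwh_per_year = users * device_power_w / 1000 * hours_per_day * 365 * backend_overhead
print(f"~{kwh_per_year / 1e12:.2f} Tn kWh per year")  # ~0.15 Tn kWh, well below 1 Tn kWh
```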

Most forecasters did not anticipate the intense paradigm shift toward virtual worlds, nor the computing power that blockchain and crypto will demand over the next decade, and they probably underestimated the demand placed on the grid by EVs and autonomous vehicles.

Here is a table of electricity consumption per key decades[19]

Net consumption of electricity worldwide

Out of 31 Tn kWh, 8 Tn[20] would come from information and communication technologies, with big data, IoT, and blockchain having the strongest impact.

40% would come from renewables[21] (mainly hydroelectric, then solar and wind), up from 30% in 2020; thus the major share of the increase in electricity production should come from greener sources[22], probably mostly from solar energy.

Two-thirds[23] will come from non-OECD demand, with probably less efficient infrastructure than their OECD counterparts, and the major share of the increase will come from non-OECD demand. This alone makes us think OECD demand is underestimated, especially if everyone goes crazy for virtual worlds and augmented reality.

Peer-to-peer distributed energy trading in smart grids will become a common trend by the end of the decade. As such, the Virtualia infrastructure strategy is to embrace P2P smart-grid technology as soon as possible. We can imagine an AI-driven cloud elasticity mechanism that lets data centers monitor and adjust themselves to the different energy sources and the demand in real time, or even forecast it.

Second required breakthrough: neuromorphic or quantum computing for low energy consumption

Quantum vs Digital

Quantum computers sound fancy. Nevertheless, there is no big hype yet outside specialized reviews, but that may come after the virtual reality revolution we are forecasting, taking part in, and promoting in our whitepaper.

So, what are quantum computers? Are they regular computers? Read below to fully understand the concept of quantum computers.

Well, first we must clarify that today's classic computers are electronic machines processing digital information in parallel, while quantum computers manipulate subatomic particles and quantum phenomena like superposition and entanglement to perform computation. This computation is based on multiple "states", existing in coherent superposition, of 0s and 1s at the same time; each state carries a probability of being measured as 0 or 1.

This capability lets quantum computers with even a small number of qubits (the quantum-mechanical analog of the classic computer's bit) vastly outperform digital supercomputers on certain problems. Those qubits are entangled, i.e., connected with other qubits: when one changes, the others are affected.

Qubits are not the only way to store quantum information. While qubits are based on a 2-state quantum system, qutrits are realized with a 3-level quantum system, similar in spirit to ternary computers. That said, most research focuses on qubits.

A quantum register is a set of qubits. N qubits are represented by a superposition state vector in a 2^N-dimensional Hilbert space; hence 2 qubits are represented in a four-dimensional space. According to IEEE Spectrum[24], "300 qubits could perform more calculations simultaneously than there are atoms in the visible universe".
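To make the scaling concrete, here is a small illustrative sketch using numpy (no quantum hardware involved): the state-vector dimension grows as 2^N, and even the simplest entangled 2-qubit register, a Bell state, already shows the probabilistic measurement described above.

```python
import numpy as np

# State-vector sizes grow as 2^N: the reason classical simulation of quantum
# registers quickly becomes intractable.
for n in (2, 10, 50, 300):
    print(f"{n} qubits -> {2**n:.3e} complex amplitudes")

# A 2-qubit Bell state (|00> + |11>)/sqrt(2): the simplest entangled register.
bell = np.zeros(4, dtype=complex)
bell[0] = bell[3] = 1 / np.sqrt(2)
print(np.abs(bell) ** 2)  # [0.5, 0, 0, 0.5]: measurement gives 00 or 11 with equal probability
```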

It is difficult to estimate how many perfect qubits would be needed to achieve the performances of Exascale supercomputers.

What is clear is that the recent Gordon Bell prize winner[25] and IBM[26] showed that classical simulation of quantum circuits can narrow the disputed theoretical gap, called "quantum supremacy", between the achievements claimed for quantum computers and what actual supercomputers can do.

A quantum computer. Credit: Handout.

A few hundred qubits, but not many more[27], should be sufficient to achieve quantum supremacy. This would theoretically be possible within the decade, but we will still need to prove that the result cannot be replicated on a digital supercomputer. Performance alone is not sufficient without the software infrastructure to use this power, which would take another decade of new software engineering in a newly created field (e.g., quantum software engineering). This could lead to the next revolution before the end of the decade (creating massive job opportunities), or better said a paradigm shift, as we envision it coming after virtual reality (that is, providing a clear advantage or a new way to solve problems).

For now, there is still one clear quantum advantage, energy efficiency, which makes quantum computing a very good candidate to match the human brain's efficiency.

Whether human brains work like quantum computers is another question still under research. It is a difficult hypothesis to test, as it requires measuring the behavior of molecules inside the human brain.

Energy consumption and neuromorphic computing

Before quantum computing closes the gap with human brain capabilities, current AI can be enhanced through the next generation of intelligent computers based on neuromorphic (cognitive) computing.

Neuromorphic computing takes a more biological approach to learning and uses nanomaterial designs to mimic a human brain with neurons and synapses, while digital computer architecture separates logic blocks (CPU) from memory (RAM), as originally designed by von Neumann, which unfortunately leads to large energy consumption.

The first generation of neuromorphic computing leveraged existing MOS transistor technology as artificial synapses in artificial neural networks for image classification and speech recognition, but was limited to a small hardware scale. The next generation, with a bottom-up approach at the nanoscale, takes advantage of 2- and 3-terminal memristive devices and has a promising future[28], specifically for algebraic tasks, pattern recognition, and image/face classification, aiming to approach the huge density of real synapses and neurons.

This new technology has many advantages:

✔ Physical emulation of neurons and synapses will help reduce the energy bill by a big factor (from Megawatts to the standard home computer energy consumption).

✔ Smart prosthetic and bioelectronics application to take advantage of one-dimensional nanomaterial technology.

✔ Reproduction of synaptic plasticity between different components of the computer system as the human brain does when learning and connecting new information with larger nanomaterials.

✔ Animation of artificial skin for robotics thanks to sensing, mechanical flexibility, and biocompatibility attributes of those nanomaterials.

We can assume that merging quantum computers with neuromorphic architecture is a clear target to achieve human brain simulation.

Given all of the above, at Virtualia Interactive Technologies we are keeping a close eye on progress in this field over the coming decade. It will enhance the basic functions and capacities of computers and, as a result, provide Virtualians with better outcomes.

Second topic: AI human-like interactivity for an enhanced shopping 3.0+ experience

Challenges to overcome to get AI human-like interactivity

The era of petaflop supercomputers at home may arrive around the 2030s, but as we discussed in the previous topic on data handling, there are three challenges to overcome in order to fully exploit that computing power and avoid keeping a Ferrari in the garage that we never drive. Read the list of challenges below to learn how we can enhance our AI-aided shopping experience.

✔ Improve software capabilities to digest massive amounts of data with better-tuned code models, signals, reactions, and algorithms.

✔ Train the average Joe to take advantage of their supercomputer at home, or provide automated tools, RPA, and small AIs that perform those tasks at desktop level or remotely, leveraging cloud proximity.

✔ Feed the AI engine with quality inputs, using sensors, embedded devices, or small AI routines that handle raw data, cross-check multiple sources, and clean it. This is very important for AI bots. We see four classes of datasets to feed them with, and we explain below what they are and where to find them:
(1) Cultural and semantic datasets, which include moral values, idiomatic expressions, grammar, and vocabulary, for better interaction with the user. Sources include social networks, comments, radio, TV series and programs, grammar books, dictionaries, etc.
(2) Foundational knowledge, such as the hard sciences, soft sciences, and humanities, coming from recent, reliable sources such as generally accepted academia and well-known published books.
(3) Thematic data, such as knowing all the products of the shop or the field. An AI bot on an insurance company's website should know the rules, laws, and taxonomy of its domain. Subject-matter expert documents, conferences, seminars, Ph.D. theses on the topic, and thematic newspapers are possible sources.
(4) Unstructured raw IoT data. This will become a new feeding class in the coming years and includes visual, audio, and, one day, taste and smell data analyzed from sensors. These provide instant reality-check feedback and are generally considered independent.
However, as with humans, biases must be treated as critical. Biases can be semantic, cultural, thematic, scientific, or physical, and they grow as new knowledge is acquired through human perception. Whether to have independent bots or culturally biased bots could in the end be a choice left to the end user. What, when, and how you feed AI engines has a big impact on the achieved result.

As our knowledge of the human brain advances with technological breakthroughs and neuroscience research, we might one day arrive at a better understanding of creativity.

However, the main gap between AI as we build it and humans is not calculation power but creativity.

We can easily fake creativity, as we do with painting or architectural designs, by giving the AI our understanding of colors, chemistry, shapes, and civil engineering rules as input. We can also classify each output by theme and then ask an AI architect to draw my new dream home[29] or to draw me a horse. However, we might perceive the result as a copy of an original author (the one the AI was trained on, hence its bias), or get results that feel very… non-human. One might say this is due to not feeding the right data to produce outputs that look human, or to a poorly designed training model. Another might say this is culturally biased, and that the AI simply produces a new culture, one the new transhumanist movement may come to value one day. In the end, everything becomes relative as moral values change over time, and the accepted boundary between what defines a human and what defines an AI becomes blurrier and more entangled.

What kind of AI human-like interactivity can we expect soon?

Virtualia Interactive Technologies has a strong interest in producing the best kind of AI human-like interactivity. Thus, our VR and AR applications and the entire Virtualia network operate on the following layers (a minimal configuration sketch follows the list):

✔ Anonymity: the user can easily turn off all tracking with a single button, including non-essential cookies and fingerprinting, an option that does not exist in the marketplace today. The user won't see any ads or recommendations. Bots are generic, meaning they don't identify the user specifically and don't collect any data from anonymous users. Payments are only made in crypto. Note that, due to regulation, only specific virtualized physical stores can offer this option, while most purely virtual stores may allow it.

✔ Standard: the user allows some form of tracking, the standard way of navigating websites today. This includes targeted ads and marketing based on website visits, clicks, and the preferences listed on the user's profile. In return, the user sees better ads and recommendations according to that profile. These data are collected by third parties, and we don't know how deeply they mine user information. Bots are also used to help the visitor, can be tailored to the user's profile, and collect data to improve recommendations. Push notifications are also used, based on your interests, the proximity of local stores around you, and whether you are walking in the city or at home.

✔ AI-based: this includes targeted conversation, entertainment, and recommendations for every visitor of a physical store, delivered directly to their phone or wearable device such as AR lenses, glasses, or a connected watch. Dedicated "concierge" AI agent services exist and are developed by the modding community. The AI agent knows a lot about the user, thanks to data sharing across platforms, the data the user has opened access to on their profile, and the polls they have submitted to improve their shopping journey. The user can also choose the agent's voice, female or male, as well as its look among thousands of hyper-realistic virtual humanoids. AI deep-fake techniques let you see its face and mouth move, and automated 3D animations make it move according to specific events.
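As a minimal illustration of how these three layers could be represented in code, here is a hedged sketch; the class and field names are hypothetical, not Virtualia's actual data model.

```python
from dataclasses import dataclass
from enum import Enum
from typing import Optional

# Illustrative representation of the three interaction layers described above.
class PrivacyMode(Enum):
    ANONYMOUS = "anonymity"   # no tracking, generic bots, crypto payments only
    STANDARD = "standard"     # profile-based ads, recommendations, push notifications
    AI_BASED = "ai-based"     # dedicated concierge AI agent, cross-platform data sharing

@dataclass
class VisitorSession:
    mode: PrivacyMode
    allow_push_notifications: bool = False
    concierge_voice: Optional[str] = None   # only meaningful in AI_BASED mode

session = VisitorSession(mode=PrivacyMode.ANONYMOUS)
print(session.mode is PrivacyMode.ANONYMOUS)  # True: bots stay generic, nothing is collected
```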

Granular intermediate levels can be implemented over time through feedback loops with Virtualians. The maturity of new technologies will also improve interactions. We start with virtual AI agents and improve their learning until their conversation is very close to a human's, say a store or product specialist; then we will deploy real robots in luxury or tech partners' physical stores; then we will move to holographic 3D interaction with virtual assistants. Virtual stores will already feature those holographic AI agents as you "walk in" to shop for products.

Note that during the early expansion years, stores will be added every day faster than you could scroll through all their products in a lifetime! Hence AI will be very helpful to make sure you don't miss a valuable discount, a rare product, or something you might really like.

Differences between 3.0 shopping experience we are promoting and future 3.0+ shopping experience

The 3.0 shopping journey as of today

Virtualia promises to bring positive changes to the 3.0 shopping experience in the near future by introducing a new version, the 3.0+ shopping experience. Read below to learn how the two differ.

We are shouting to the world about a new way of experiencing shopping through a virtualized store, where you can walk around in 3D and shop for real products from anywhere, anytime, whether the physical store is open or closed. You can view products in 3D with VR or AR applications. You can visit luxury stores on the Champs-Élysées while on holiday in Bora Bora.

You can scroll through thousands of similar products and compare them all in 3D. You can project the desired product into your home, onto your desk, or into your room, and share it with your partner, friends, and family so they can give their opinion on whether it suits you or your home. This can be furniture, clothes, a new car, virtual home staging, or a renovation. Improved bots help you digest the full catalog in a matter of milliseconds based on your chat with them. Recommendations and push notifications help you avoid missing discounts and cashback while you are walking through a new city as a tourist or a local. Worldwide or local brand enigmas (treasure hunts) are deployed at large scale, leveraging AR and VR applications, to help you discover their shops and earn membership points and discounts.

Store owners can augment their physical store by adding a new layer of virtual rooms to present all their products. People can walk in 3D via their personal devices or use their phones in the real store to immerse themselves in secretive virtual stores by scanning a QR code on a wall.

Finally, you can sell real or digital products in virtual stores on our multiverse VirtualiaWorlds platform, or digital products in our virtual worlds, paid in fiat or Virtualia coins. It means that any artist, indeed anyone, can become a store owner.

The 3.0+ shopping journey by 2025

The enhanced 3.0+ shopping experience includes body scanning, trying clothes on your own body, and being connected to a pool of real and virtual AI fashion designers, aficionados, and experts who provide valuable information on how to match different clothes or give you recommendations for a specific style or a change of look.

Virtualia fashion stores are thus open for living that immersive experience. Mobile devices, then AR lenses and glasses, are used to personalize your journey, improve your discovery, and share your finds with your private or social network. On top of everything, holographic AI shop agents are implemented for better interactivity with visitors, again first through the visitor's own devices and later through AR lenses and glasses.

So, the 3.0 shopping experience becomes common around the world.

Third topic: Nanotechnology and 3D printing for custom AR lenses and eyeglasses

Virtualia Interactive is keeping an eye on trends in nanotechnology and 3D printing for custom AR lenses and eyeglasses. Read below to find out how our company looks at the possible solutions.

The third key challenge concerns the use of nanomaterials to generate high enough computing power at small size and weight for augmented reality eyeglasses, or for two-part AR lens systems that use earbuds to process data from voice commands and smart lenses to project the resulting information, such as videos and images. There are plenty of other techniques (such as other forms of human-machine communication), but we are focusing on these two, which avoid implanting anything in your brain.

By 2030, 3D printing should be more common at the industrial level, while home 3D printers should remain limited to repairing or creating large spare parts (like our first-ever Virtualia coin). Miniaturization may therefore not yet be ready by 2030, but repairing earbuds or parts of the glasses we produce could be feasible.

What is the key difference between AR lenses and AR glasses?

✔AR lenses display information with sensors integrated into the corner of a smart eye contact lens.

✔ Eyeglasses on the other hand display information over the interface of the glasses.

At this stage, this topic is still largely in the research and development phase, but by 2030 first-generation AR lenses for the general public may appear, and we are willing to be part of this venture with our own product and attached services, or by partnering with an existing player.

For the moment, Virtualia's roadmap is to experiment with how it would look in virtual reality using a common headset. This would already provide a lot of valuable product information.

A basic application will be to display mobile phone push notifications, sparing you from taking off your VR headset.

Many questions are pending and many solutions are possible[30]

This phase is critical for assessing what type of information should be selected and what user-experience mechanism (staring time, eye movement) best fits most practical cases without being heavy, awkward (constantly looking at the corner of your vision must be avoided, perhaps by using a smart wristband), or painful (e.g., messages firing all the time), while meeting safety regulation standards (lumens projected into your pupils, radio-wave intensity and exposure duration, and safeguards such as never obstructing your vision at a critical moment). Will it replace your phone, or be more of an augmented tool providing key information only, such as major push notifications or ads when entering one of our virtualized stores? Should it be used to project sports games, launch apps, or embed a camera and microphone, as some providers have done so far? Should it be paired with a smart device used as a joystick to scroll pages, respond to texts, send messages, clear notifications, and select menus? Should it be paired with an earbud for sound and communication?

What technology must be improved for these devices to fit naturally into our everyday lives and become a utility rather than a mere status symbol?

✔ For both AR lenses and glasses, we must develop low-energy wireless radio communication, nanomaterial motion sensors for eye-tracking and image stabilization, and embedded software that provides virtual assistant and multilingual voice and speech functionalities without needing a smartphone.

✔ AR glasses should keep the curvature of the lens so they look like classic glasses, be customizable to one's personality (many models available), and, unlike AR lenses, they may use micro-projectors that throw rays of light onto a thin film in the lens to reproduce holographic images.

✔ Pairing with a personal device should not be limited by how third-party apps can handle those messages, or by restrictions on using a third-party virtual assistant. Constraints imposed by high-tech manufacturers are certainly a clear barrier to mass adoption. Overcoming them would probably require a wireless connection to cell towers outside the phone ecosystem, with its own cloud servers, blockchain, and communication tools. Smart wireless technology will therefore help a lot.

✔ Battery life should match that of a smartphone on a single charge. Having your lenses or glasses charge on their native pods every night would make this painless; those charging pods should be well designed, more like a decor piece than a geeky object.

✔ The weight should be low, around 20–50 g, so people don't feel a difference from classic eyeglasses and don't need to take them off because of the weight.

Finally, they should confer status, a social distinction, even for those not used to wearing eyeglasses. But status alone is not enough if they do not provide utility benefits or clear advantages over a smartphone as a substitute product in specific cases.

Of course, these typical use cases go beyond the Virtualia shopping experience where they could be used to guide you, browse products, and see them in AR. End usage could be for avoiding teleprompters, seeing in the dark, improving vision for deficient eyes, or providing you driving directions as a passenger.

We will go for AR eyeglasses before AR lenses. Why? The key reason is that it is easier to create a status market where benefit and social status are attached to wearing beautiful, branded AR glasses that everybody can recognize, as we recognize blue Ray-Bans. They should not look high-tech and nerdy, but be beautifully crafted AR eyeglasses.

One of the key constraints is to avoid looking abnormal or bulky, either in the lenses or in the arms. That's why nanomaterials are a key practical field for AR glasses to be massively adopted, in addition to providing benefits such as our virtual shopping experience.

On top of all this, we would probably pair them with your current smartphone, so you can swipe across 3D products and rotate 3D assets by touching your smartphone's screen while visualizing them in augmented reality through your AR eyeglasses. Earbuds will be used for voice chatting with your friends when asking their opinion about the product you have just scanned via a QR code.

The customer touches the smartphone screen to rotate the 3D model synchronously in his smart eyeglasses

This is clearly a new market; it requires heavy investment (on the billion-dollar scale) and several years of research and development before reaching consumers.

On selecting wireless IoT technology for Virtualia AR lenses and eyeglasses

To select among the currently promising wireless technologies, one must find a trade-off between range, data rate, battery life, and latency[31] (see the sketch after the list below).

Among the options, current technologies are:

✔ Wi-Fi technology comes with a lot of security risks, from encryption to interoperability and access points. It could work only within a closed, limited, and known network, such as barcode scanners and connected machines. Dropped out.

✔ Cellular uses SIM cards, and IoT devices have to pay a mobile operator for network access. The good point is that most areas are covered by cellular networks. The key disadvantage is that streaming 3D data sets on the order of hundreds of megabytes would be very expensive. It can be suitable for autonomous vehicles. Finally, maintaining the connection requires a large power budget, which is very inefficient. Dropped out.

✔ Three main LPWAN players, Sigfox, LoRaWAN, and Ingenu, could provide interesting solutions for power-efficient, large-scale area coverage in the long term, but for the moment it will take years to cover an area that is realistically acceptable from a business point of view given human mobility. These are more appropriate for automatic meter reading and utility management in smart cities. Dropped out.

✔ Mesh Networks use short-range, high data rate wireless to build networks. Unfortunately, development and performance are major drawbacks. Dropped out.

✔ Symphony Link is a wireless system with long-range, low latency, high capacity but low bandwidth which perfectly fits the sensor and controller industrial market but not the consumer 3D streaming market.

✔ Bluetooth is popular in the consumer electronics market, which would enable mass adoption, and it is cheap. A clear drawback is that it is short-range: all smart devices must be near an access point.
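The trade-off mentioned before the list can be made explicit with a small, purely illustrative scoring sketch; the scores and weights below are our qualitative judgments turned into numbers, not benchmarks.

```python
# Illustrative scoring of the trade-offs discussed above (1 = poor, 5 = good).
technologies = {
    "Wi-Fi":         {"range": 2, "data_rate": 5, "battery": 2, "cost": 4, "security": 1},
    "Cellular":      {"range": 5, "data_rate": 4, "battery": 1, "cost": 1, "security": 3},
    "LPWAN":         {"range": 5, "data_rate": 1, "battery": 5, "cost": 3, "security": 3},
    "Mesh":          {"range": 3, "data_rate": 4, "battery": 2, "cost": 2, "security": 3},
    "Symphony Link": {"range": 5, "data_rate": 1, "battery": 4, "cost": 3, "security": 4},
    "Bluetooth":     {"range": 1, "data_rate": 3, "battery": 4, "cost": 5, "security": 3},
}

# For consumer AR eyeglasses paired with a phone, battery life and cost dominate.
weights = {"range": 1, "data_rate": 2, "battery": 3, "cost": 3, "security": 2}
ranked = sorted(technologies, key=lambda t: -sum(technologies[t][k] * w for k, w in weights.items()))
print(ranked[0])  # Bluetooth, consistent with the pairing solution described below
```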

Our first solution is a two-part system pairing earbuds and AR eyeglasses with your smartphone, before one day having both access point and endpoint in the same wearable product (the earbuds disappear). The earbuds and glasses connect to the smartphone over Bluetooth, and the smartphone connects to the cloud through cell towers as usual. For a current example, look at how your smartwatch connects to the internet, whether over Wi-Fi or over Bluetooth within range of your smartphone.

We would do something similar. When entering Virtualia fashion stores, customers would use our Wi-Fi bandwidth for free to view our products in 3D AR[32] through their smart eyeglasses and share them with friends, family, or their social network for live assessment or live fashion styling.

Picture of two customers projecting their body-self avatars with a new leather cloth in 3D VRAR. The leather adapts to the body size close enough to reality.

Fourth topic: Blockchain P2P payments for virtual services, and delivery of connected objects using autonomous vehicles and drones for logistics

This is the last pillar in constructing the Virtualia network: leveraging blockchain P2P payments between Virtualians for virtual services on VirtualiaWorlds, or between a consumer and a virtual store owner who delivers real smart objects after payment on Virtualia Shop or the Virtualia App.

Read below to learn the major pros and cons of this resourceful payment system: blockchain P2P payment.

For physical stores, we will use their storage capacity and the network's connections with local start-ups that promote efficient and electric means of transportation to deliver on the same day. Then, by 2030, we will move to autonomous vehicle and drone delivery solutions once many of the drawbacks have been resolved by third-party players, probably in specific cities where regulated, licensed operations are available.

What are the key challenges linked to the blockchain P2P payment system?

✔ "In blockchain we trust". This is the motto of the Virtualia Coin as it appears on its face. Blockchain, with its distributed structure that allows parties to trade directly without a middleman, has great potential for many applications within the Virtualia network. Smart contracts can enable automatic, direct payment once the conditions of the contract have been digitally verified. Decentralized finance gives users access to typical financial products such as consumer credit loans or interest-bearing savings accounts.

✔ The cryptographic method called elliptic curve cryptography (ECC), an alternative to the well-known RSA technique based on prime numbers, is mathematically difficult to crack while offering good performance and security, faster key generation and signing, less memory use, and smaller key sizes[33] than RSA, though it is harder to implement. ECC has great potential for wireless mobile devices and is widely used for digital signatures in cryptocurrencies. We also use this technique for our web applications (a minimal signing example follows this list).

✔ It must be understood that the predictability of randomness is THE key battle for the security of blockchain applications. With the efforts being made in quantum computation, blockchains may not be secure enough in the future (by 2030?), as factoring very large integers may no longer be intractable. IoT devices with RSA keys shorter than 8192 bits could be vulnerable and pose a high cyber risk to smart cities.

✔ Lattice-based public-key cryptosystems and the like are proposals to resist attacks from quantum computation; we call them quantum-resistant cryptographic algorithms. Several methods are being researched to reduce the size of the public keys and signatures used by these lattice cryptosystems. One method[34] leverages IPFS (the InterPlanetary File System) with 5G to store the complete content of public keys and signature values off-chain and keep only hash values on the blockchain, reducing the number of bytes occupied by each transaction (a process similar to our NFT 3D marketplace; see the sketch after this list).

✔ As Virtualia's strategy is to invest in quantum computing research and acquire a practicable machine by 2030, it is key that a future version of the Virtualia blockchain be quantum-resistant, replacing the original signature with a digital signature from a quantum-resistant algorithm (based on lattice ciphers, Bonsai Trees technology[35], or other methods under research).

✔ Not only must security against quantum computing be reached, but very low transaction fees, high capacity, and short mining times are key for the business model. Analyzing the duration distributions of signing, verification, and transactions is therefore important for achieving the shortest average blockchain times.
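To make two of the points above concrete, here is a minimal, illustrative sketch. Part 1 shows an elliptic-curve (ECDSA) signature round trip using the widely used Python `cryptography` package; it illustrates the general ECC technique, not Virtualia's production code. Part 2 shows the size-reduction idea of keeping only hashes of bulky post-quantum material on-chain; the key and signature sizes are placeholder assumptions.

```python
import hashlib
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec

# 1) Elliptic-curve signature round trip.
private_key = ec.generate_private_key(ec.SECP256K1())   # curve used by Bitcoin and Ethereum
message = b"payment: 10 Virtualia coins to store #42"   # illustrative payload
signature = private_key.sign(message, ec.ECDSA(hashes.SHA256()))
private_key.public_key().verify(signature, message, ec.ECDSA(hashes.SHA256()))  # raises if invalid
print("ECDSA signature size:", len(signature), "bytes")  # ~70-72 bytes, DER-encoded

# 2) Off-chain storage of bulky post-quantum material: keep the lattice public key
# and signature elsewhere (e.g. on IPFS) and record only their hashes on-chain.
lattice_public_key = b"\x01" * 1_000_000   # placeholder sizes, purely illustrative
lattice_signature = b"\x02" * 50_000
on_chain_record = {
    "pubkey_hash": hashlib.sha256(lattice_public_key).hexdigest(),
    "signature_hash": hashlib.sha256(lattice_signature).hexdigest(),
}
print("off-chain bytes:", len(lattice_public_key) + len(lattice_signature),
      "-> on-chain hex chars:", sum(len(v) for v in on_chain_record.values()))
```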

What are the key challenges linked to autonomous logistics?

Here we need to define what qualifies as "autonomous". There are five stages of automation (Lemmer, 2016) that have been used in a legislative proposal and serve as the basis for further legislation (a small sketch follows the list):

0. Driver only. The driver has full control of longitudinal and lateral guidance permanently.

1. Assisted. The robot assists with either longitudinal or lateral guidance.

2. Partly automated. Driver monitors while the robot takes full control in certain situations.

3. Highly automated. The driver does not have to monitor the system for a determined time.

4. Fully automated. Full robot guidance for a concrete application.

5. Autonomous. The driver is not needed anymore.
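As a small illustration, the five stages can be captured in an enum that delivery-planning code might branch on; the names and the helper function are illustrative only.

```python
from enum import IntEnum

# The five automation stages listed above (after Lemmer, 2016).
class AutomationLevel(IntEnum):
    DRIVER_ONLY = 0
    ASSISTED = 1
    PARTLY_AUTOMATED = 2
    HIGHLY_AUTOMATED = 3
    FULLY_AUTOMATED = 4
    AUTONOMOUS = 5

def driver_required(level: AutomationLevel) -> bool:
    """A human driver must still be on board below full autonomy."""
    return level < AutomationLevel.AUTONOMOUS

print(driver_required(AutomationLevel.FULLY_AUTOMATED))  # True: stage 4 still needs a driver
```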

Stage 4 should be attained in the mid-2020s and full autonomy in the 2030s; it is this final stage that is needed to face the challenges of future logistics.

Electrified autonomous transport for logistics will bring many significant disruptions:
A lower carbon footprint per capita as mostly autonomous electric vehicles are used (with no CO2 emissions during operation), lower traffic congestion and fewer accidents (theoretically), better connectivity between production sites and warehouses, reduced noise, and optimized routes for public transportation (managed by a central AI).

Furthermore, indirect benefits like higher productivity and savings per capita could result from these changes. Driving jobs are a major source of employment in most countries, however, and a probable negative impact would arise, requiring a transformation of those jobs during the next 30 years.

An analysis of transport mode frequency in Europe shows that road transportation is by far the most frequent mode, followed at a large distance by deep-sea and air freight, then rail, and very far behind, inland waterways[36]. Road transportation therefore seems to be of increasing importance in the foreseeable future, making AV development a priority on roads.

Nevertheless, among all the options, autonomous trucks, which show the highest applicability potential and benefits, are seen by the transport industry as unrealistic at mass scale by 2030 due to road traffic regulations, legal constraints, initial costs, and worldwide connectivity. Fully automated driving (stage 4) can, however, become a reality, which means a driver will still be required. Finally, the second application, air drones, faces many regulations and barriers to large-scale use by 2030.

These findings make us believe that fully autonomous vehicles should not be counted on for this decade, but that fully automated truck transportation can become a reality. As for drones, given their limitations (price and energy consumption per distance per kg), we think they can be left aside in the logistics framework we are considering for the decade.

Finally, all of this seems only marginally relevant to our business model, and less important than automation in tracking from the logistics point of view.

We can conclude that this latter aspect can bring a clear paradigm shift, with a massive cultural and behavioral impact on consumerism and the shopping experience. For that reason, our whitepaper does not detail the effort Virtualia will make on this topic. What can be said is that there is a real revolution coming around packaged smart products, IoT sensors, blockchain, delivery (classic or automated), mobile app tracking, and payment systems, not only for stores but also for other markets not yet mentioned. Several patents are under consideration. What we are looking for are solutions that will not erase the existing workforce but create new job profiles requiring an equivalent level of skill. The new jobs won't mean hiring only skilled engineers; we will refit a whole category of employment and competency into new tasks of the same level, reconverting those jobs into the smart logistics chain.

Conclusion

In this article, we analysed four technological breakthroughs to be achieved before we get to the point where, by 2030, Virtualia relies on many mature technologies to work smoothly across all regions of the planet, in cities as well as in rural areas.

In our next article, we will tell a fictional story set in 2030, after the challenges mentioned here have been overcome.

[1] 1 exaFLOP equals 10¹⁸ floating operations per second

[2] https://www.pnas.org/content/118/32/e2107022118

[3] https://rivery.io/blog/big-data-statistics-how-much-data-is-there-in-the-world/

[4] https://www.comparitech.com/blog/vpn-privacy/netflix-statistics-facts-figures/

[5] https://www.yansmedia.com/blog/facebook-video-statistics

[6] https://rivery.io/blog/big-data-statistics-how-much-data-is-there-in-the-world/

[7] https://www.nielsen.com/us/en/insights/article/2021/tops-of-2020-nielsen-streaming-unwrapped/

[8] https://www.imore.com/how-shoot-trim-edit-and-share-4k-video-iphone

[9] “The observation that the logic density of silicon integrated circuits has closely followed the curve (bits per square inch) = 2^(t — 1962) where t is time in years; that is, the amount of information storable on a given amount of silicon has roughly doubled every year since the technology was invented”

[10] A billion billion (10¹⁸) times faster since 1962

[11] As IPv6 uses a 128-bit address, theoretically allowing approximately 3.4×10³⁸ unique addresses

[12] https://aiimpacts.org/2019-recent-trends-in-gpu-price-per-flops/

[13] https://gadgetversus.com/processor/

[14] https://www.top500.org/lists/green500/2021/11/ and https://www.top500.org/lists/top500/2021/11/ . Same reference for 2015.

[15] probably the rate will decrease but that’s fine for a rule of thumb calculation

[16] https://aiimpacts.org/wikipedia-history-of-gflops-costs/ and https://aiimpacts.org/trends-in-the-cost-of-computing/

[17] https://aiimpacts.org/2019-recent-trends-in-gpu-price-per-flops/

[18] https://www.xentity.com/are-supercomputers-really-that-super/

[19] https://www.statista.com/statistics/238610/projected-world-electricity-generation-by-energy-source/

[20] https://www.enerdata.net/publications/executive-briefing/between-10-and-20-electricity-consumption-ict-sector-2030.html

[21] https://www.eia.gov/todayinenergy/detail.php?id=42555

[22] https://www.cleanenergywire.org/factsheets/germanys-energy-consumption-and-power-mix-charts

[23] https://www.eia.gov/todayinenergy/detail.php?id=12251

[24] https://spectrum.ieee.org/qubit-supremacy

[25] https://www.hpcwire.com/2021/11/18/2021-gordon-bell-prize-goes-to-exascale-powered-quantum-supremacy-challenge/

[26] https://www.inc.com/eric-mack/no-google-its-quantum-computer-arent-killing-bitcoin-anytime-soon.html

[27] https://spectrum.ieee.org/qubit-supremacy

[28] https://www.nature.com/articles/s41565-020-0647-z

[29] Actually, this is new research we are developing at Virtualia Interactive Technologies

[30] Check that great article by wired: https://www.wired.com/review/focals-by-north-smart-glasses/

[31]“Selecting a wireless technology for new industrial internet of things products” by LinkLabs

[32] 3D assets projected in augmented reality

[33] https://avinetworks.com/glossary/elliptic-curve-cryptography/

[34] https://www.hindawi.com/journals/scn/2021/6671648/

[35] C. Y. Li, X. B. Chen, Y. L. Chen et al., “A new lattice-based signature scheme in post-quantum blockchain network,” IEEE Access, vol. 7, 2019.

[36] Digitalized and autonomous transport — challenges and changes, by Stradner, Sashca and Brunner, Uwe, 2019

We are open to capital investment

We are looking for between €300K (one product to market) and €50M (the whole ecosystem). Contact us at investor@virtualia.ai

To learn more about what we do

1- Stay tuned to our Medium article campaign (over 100 articles) by subscribing @The Virtualia Team

2- Visit our website at https://virtualia.ai

3- Follow us on LinkedIn at https://www.linkedin.com/company/virtualia-interactive-technologies/

4- Read our first introductory article and our second article on the spirit of the Virtualia ecosystem

5- Join our community discord server to discuss around 3DVRAR and what we do at https://discord.gg/UJ4xkAXUbB

--


The Virtualia Team

Virtualia is an ecosystem of mobile, web applications and virtual worlds built around a blockchain leveraging AI, VRAR, IoT, 5G, and space satellite imagery.