WebAR Wearable Digital Fashion NFT, Bio-Sensors, 3D DNN Shape Completion, Animations, GAN Synthesis

We interviewed Emma-Jane MacKinnon-Lee, the CEO of Digitalax, for Part 2 of our series. This time we were joined by Anina Net, CEO and founder of the 360Fashion Network, and Cecile Tamura, President and CEO of Okasaki Tech Holdings Corp and a regular contributor to Silicon Valley Global News.

Silicon Valley Global News SVGN.io
45 min read · Mar 7, 2021


Digitalax X SVGN.io News X Anina Net. Watch the interview here:

You can learn more about Anina here https://en.wikipedia.org/wiki/Anina_(model)

The Meta Jacket

Worn by model, influencer, and cosplay artist Shirleen, it creates a window allowing us to see into the future of Augmented Reality with Digital Fashion and Fashion Tech.

The Meta Jacket, created by RTFKT Studios as its first Digital Fashion Jacket, and worn by model, influencer, and cosplayer Shirleen (aka richchocolit on Instagram), is a vision of our native digital future.

“The METAJACKET is RTFKT’s first Digital Fashion Jacket. Dropping Exclusively on the DIGITALAX Marketplace.”

The Meta Jacket is a leading example of fashion designers selling Digital Fashion as NFTs, using blockchain technology to validate a garment's uniqueness so customers can prove they own the original and not a copy. This is implemented with Ethereum smart contracts and the MONA NFT token.

This Meta Jacket video was created in post-production, but the reality is that in the near future all of us will be able to wear Digital Fashion in AR and VR in real time.

Deep Neural Networks in Fashion Tech

Wearing Digital Fashion in AR and VR becomes possible in part because of 3D deep neural networks, which are being used for things like real-time pose prediction, eye tracking, hand and finger tracking, holistic body and environment tracking, and, most importantly, 3D semantic segmentation.

Mediapipe Holistic

MediaPipe is an example of an application framework built with Deep Neural Network technology (TensorFlow) that can add face tracking, body pose tracking, hand tracking, and object tracking, with new features like 3D semantic segmentation and 3D scene reconstruction coming in the future. It runs from a web browser, and it works with WebAR and WebVR apps built with the WebXR API.
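To make this concrete, MediaPipe's APIs return lists of normalized landmarks (x, y, z, plus a visibility score) for each frame. The sketch below uses synthetic landmarks standing in for real tracker output, and the `hands_together()` check is a hypothetical stand-in for real gesture logic, not a MediaPipe function:

```python
from dataclasses import dataclass
from math import dist

@dataclass
class Landmark:
    # MediaPipe-style normalized landmark: x/y in [0, 1] image space,
    # z roughly relative to the hips, visibility in [0, 1]
    x: float
    y: float
    z: float
    visibility: float = 1.0

def landmark_distance(a: Landmark, b: Landmark) -> float:
    """Euclidean distance between two landmarks in normalized space."""
    return dist((a.x, a.y, a.z), (b.x, b.y, b.z))

def hands_together(left_wrist: Landmark, right_wrist: Landmark,
                   threshold: float = 0.1) -> bool:
    """Toy gesture check: are the wrists close enough to count as 'together'?"""
    return landmark_distance(left_wrist, right_wrist) < threshold

# Synthetic landmarks standing in for real holistic-tracking output
left = Landmark(x=0.48, y=0.60, z=-0.05)
right = Landmark(x=0.52, y=0.61, z=-0.04)
print(hands_together(left, right))  # True: the wrists are ~0.04 apart
```

Real applications would feed camera frames through the tracker each frame and run logic like this on the resulting pose landmarks.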

Your future designer glasses will have augmented reality built into them thanks to technologies such as MediaPipe, TensorFlow 3D, PyTorch3D, and other neural networks dedicated to 3D semantic segmentation. The computer will be able to apply Digital Fashion to you, to other people, and to everything in the world.

Emma-Jane writes this about the role she envisions for Digitalax:

“DIGITALAX is a comprehensive umbrella of projects all attacking the same core and fundamental problems, and each project has its clear role to play. The fundamental point is that we take a modular, component approach, just like all of the best software projects on the internet today. What this allows for is considerate attention to each independent area of concern while still benefiting from a network flow of value generation between them. Another way of understanding this is that it is a graph model. We are ecosystem builders. Some people in prominent positions and their followers do not believe that such a thing is possible. However, I would kindly suggest that they visit my homeland of Australia and check out the Great Barrier Reef. Keystone species in ecosystem building environments are very real and have been applied to tremendous success in business time and time again.” — Emma-Jane, CEO of Digitalax

Digitalax is bringing together not only all of these technologies with DASH (which is written about in more detail later in the article), but also with the Artists, the Game Designers, Fashion Designers, the VR AR application engineers, and the Player-creators so everyone can work and play together.

This is why I say Emma-Jane is at the center of this technology and industry convergence in a sense.

There are astonishing possibilities that can be realized with the convergence of technologies such as 3D Deep Neural Networks, WebAR & WebXR, Digital Fashion & Art, NFT, and all the new sensors being integrated into our devices, some of which include brain computer interface sensors.

Our devices, phones, watches, VR AR headsets will have the capability to recognize people, recognize breathing, heart beats, predict intentions, predict emotions, detect your medical conditions, identify animals, cars, and objects in real time, and render Digital Fashion clothing in the real world via Augmented Reality websites (or WebAR), with the physics of the digital materials you are wearing to reacting appropriate to your environment, such as making digital clothing fit you correctly, and making it deform correctly when you sit down.

Deep Neural Network (DNN) Animations: the Neural State Machine

To see an example of 3D fashion deforming (self-modifying its shape in reaction to the environment so that it moves correctly), we can look at a video from ACM SIGGRAPH Asia 2019, where Adobe demonstrated a Neural State Machine for characters, running in Unity with TensorFlow.

In this example imagine that everything the character is doing is what your Digital Fashion will be doing, in terms of the clothing modifying its animations to match whatever situation it encounters.

That means that whenever the character (or clothing) encounters a new object, it responds to the environment in a way that makes sense: a neural network animates the character in real time, so that the character can adapt to variations in the shape and type of an object.

Animating characters is a difficult task when it comes to interacting with objects and the environment. What if we used computer brains instead? In this research, we present the Neural State Machine, a data-driven deep learning framework that can handle such animations. The system is able to learn character-scene interactions from motion capture data, and produces high-quality animations from simple control commands. The framework can be used for creating natural animations in games and films, and is the first of such frameworks to handle scene interaction tasks for data-driven character animation. The research is implemented in Unity and TensorFlow, and published under the ACM Transactions on Graphics / SIGGRAPH Asia 2019.

This idea shows how your Digital Fashion clothing, in place of this character, will respond correctly to your real environment depending on what you interact with. If you are in a windy environment, for example, your digital clothing will animate with accurate physics. In 2019 this was inside a game; in the coming years this technology will be in the world with us.
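Conceptually, the Neural State Machine blends the outputs of specialized "expert" networks using a gating network conditioned on the character's goal. The sketch below shows only the blending mechanics, with random linear "experts" and a hand-written gate; the real trained system from the paper is far more sophisticated:

```python
import numpy as np

rng = np.random.default_rng(0)

# Two toy "expert" pose predictors (e.g. one tuned for walking, one for
# sitting), each a tiny linear map from a state vector to joint values.
n_state, n_joints = 8, 4
expert_walk = rng.standard_normal((n_joints, n_state))
expert_sit = rng.standard_normal((n_joints, n_state))

def gate(goal_distance: float) -> float:
    """Toy gating function: blend toward 'sit' as the character nears a chair."""
    return float(np.clip(1.0 - goal_distance, 0.0, 1.0))

def predict_pose(state: np.ndarray, goal_distance: float) -> np.ndarray:
    """Blend the experts' weights with the gating value, mixture-of-experts style."""
    w = gate(goal_distance)
    blended = (1.0 - w) * expert_walk + w * expert_sit
    return blended @ state

state = rng.standard_normal(n_state)
far = predict_pose(state, goal_distance=5.0)   # far from the chair: pure walking
near = predict_pose(state, goal_distance=0.0)  # at the chair: pure sitting
```

In the actual system the gating network is itself learned from motion-capture data, and the blending happens every frame, which is what lets animations transition smoothly between walking, sitting, carrying, and so on.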

Technology like the Neural State Machine will be able to apply to the real world in part because we will have pixel-perfect 3D models of the world, captured by our devices at 90 to 120 fps thanks in part to technologies like the lidar-RGB fusion algorithms included in, for example, the Varjo XR-3 Mixed Reality headset.

Varjo (pronounced var-yo) is based in Helsinki and is creating the world's most advanced VR/XR hardware and software for industrial use. The Varjo XR-3 Mixed Reality headset does volumetric capture, turning the world into a cinematic-quality 3D model in real time thanks to its integration of a lidar-RGB fusion neural network.

Here is a description from Urho Konttori, Co-Founder and Chief Technology Officer of Varjo: “Varjo’s XR-3 and VR-3 headsets are not only making photorealistic immersive applications more widely accessible than ever, they are also accelerating the spatial computing revolution in the workplace overall. Thanks to a lower price point, coupled with the absolute best technology available, such as human-eye resolution at over 70 pixels per degree and LiDAR for seamless depth awareness, these headsets unlock an entirely new set of experiences where users can no longer tell the difference between what is real and what is not. This is an essential factor for driving broader professional adoption and seamlessly integrating XR technologies into users’ daily workflows.”

As of March 4th, 2021 Varjo’s Next Generation VR-3 and XR-3 Headsets are now shipping worldwide. Companies can purchase the Varjo VR-3 and XR-3 at www.varjo.com

I think the Varjo headset's volumetric capture and lidar-RGB fusion software shows us what our future phones and other devices will someday be capable of with lidar-RGB fusion and 3D Deep Neural Networks for Shape Completion, Semantic Segmentation, GAN Synthesis, and Interpolation Rendering.

OpenAI's GPT-3 and DALL-E, Microsoft Mesh, GAN Synthesis, Interpolation, and Shape Completion

On January 5th, 2021, OpenAI revealed DALL-E, a version of GPT-3 trained to generate images from text descriptions. DALL-E works alongside CLIP, a sister neural network focused on multi-modal image and text recognition. CLIP, for example, learned images of chairs alongside the semantic concept of a chair in language, and likewise learned images of avocados alongside the semantic concept of an avocado. With that information, DALL-E could be asked to create an avocado chair, and as you can see in the picture below, it did so.
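Under the hood, CLIP scores an image against candidate captions by embedding both into a shared vector space and comparing directions via cosine similarity. Here is a toy sketch with made-up 3-D embeddings; real CLIP embeddings have hundreds of dimensions and come from trained encoders:

```python
import numpy as np

def normalize(v: np.ndarray) -> np.ndarray:
    """Scale vectors to unit length so a dot product is cosine similarity."""
    return v / np.linalg.norm(v, axis=-1, keepdims=True)

# Hypothetical embeddings: CLIP maps images and captions into one shared space
text_emb = normalize(np.array([[1.0, 0.0, 0.2],    # "an avocado"
                               [0.1, 1.0, 0.0]]))  # "a chair"
image_emb = normalize(np.array([0.9, 0.1, 0.25]))  # a photo of an avocado

# Score each caption against the image by cosine similarity
scores = text_emb @ image_emb
best = int(np.argmax(scores))
print(best)  # 0: the avocado caption matches best
```

This matching in a shared space is also how CLIP can rank the images a generator produces against the text prompt that requested them.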

This immediately reminded me of the concepts of GAN synthesis and 3D model interpolation that I had seen in a San Francisco talk, “Deep Learning for “Exotic” Data Like 3D Meshes and Point-Clouds,” presented by Or Litani. In the slides below from that talk, you can see 3D semantic segmentation neural networks being used for Shape Completion, and even a slide on turning a 2D photograph into a 3D model.

Read a paper about Shape Completion here: http://proceedings.mlr.press/v80/achlioptas18a/achlioptas18a.pdf

A slide from “Deep Learning for “Exotic” Data Like 3D Meshes and Point-Clouds” presented by Or Litani

Using 3D Semantic Segmentation for Shape Completion means we can eventually turn the incomplete point-cloud representations of people in Microsoft Mesh into complete and accurate live models of people.

Deep learning Shape Completion, which builds on 3D Semantic Segmentation, means that your volumetric video avatar in Microsoft Mesh will eventually be pixel perfect, while GAN synthesis, interpolation, and the Neural State Machine mean you will be able to wear your Digital Fashion clothing on your live volumetric video avatar during your live meetings.
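The interpolation trick these point-cloud papers rely on is simple at its core: walk a straight line between two learned latent codes and decode each point along the way into a shape. A sketch with hypothetical 4-D latent codes follows; real models use much larger codes and a trained decoder network to turn each code back into geometry:

```python
import numpy as np

def interpolate_latents(z_a: np.ndarray, z_b: np.ndarray, steps: int = 5):
    """Linear interpolation between two learned latent codes, the trick behind
    'in-between' shapes in latent-space point-cloud generative models."""
    ts = np.linspace(0.0, 1.0, steps)
    return [(1.0 - t) * z_a + t * z_b for t in ts]

# Hypothetical latent codes for two learned shapes (real models use ~128-D)
z_chair = np.array([1.0, 0.0, 0.5, -1.0])
z_sofa = np.array([0.0, 1.0, 0.5, 1.0])

path = interpolate_latents(z_chair, z_sofa, steps=3)
# path[0] is the chair code, path[-1] the sofa code, path[1] the midpoint,
# which a trained decoder would render as a chair-sofa hybrid
```

Shape completion works in the same latent space: encode the partial scan, then decode the resulting code into a full shape.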

These deep learning technologies also mean that, in the near future, you will be able to make your headset invisible, so that other people see your face as if you were not wearing the HoloLens.

It’s amazing to think about how far we have come.

A slide of history: 3D Deep Neural Networks in 2007 (aka 3D object/semantic segmentation)
By 2009 the DARPA LAGR program was already demonstrating semantic segmentation of 3D objects with self-driving vehicles.

A recent talk from Yann LeCun dove into the history of Deep Learning. The slide called ‘3D image segmentation for “connectomics”’ shows us that 3D neural networks (3D ConvNets) were already in use in 2007, before the wider world discovered that Deep Learning was cool.

Watch Yann LeCun's talk here for a great window into the history of Deep Learning, as told by someone who was a core contributor to that history. Thanks to Cecile Tamura, President and CEO at Okasaki Tech Holdings Corp, for sharing it.

A recent talk from Yann LeCun dived into the history of Deep Learning

To summarize this section: with 3D semantic segmentation, the computer learns the 3D structure of what it is presented with, down to the individual pixels or points that belong to an object, allowing it to complete shapes when presented with a partial object. It can deform meshes to make animations that look physically accurate and react to other objects, both real and digital. It can interpolate, or imagine, new 3D models that lie somewhere between previously learned models, and with GAN synthesis it can create new hybrid 3D structures, such as the avocado chair mentioned earlier in this article.

The Scan Truck

At first, The Scan Truck might remind you of the Mixed Reality Capture studios at Microsoft, because both have a room with cameras pointed at you from every angle, but it's very different.

I wanted to include this story about The Scan Truck, because I think it is an excellent demonstration of what is possible with advances in 3D scanning technologies, in terms of creating photo realistic avatars.

At SIGGRAPH 2019 I experienced one of The Scan Truck's hyper-realistic digital avatars in virtual reality: it looked at me and tracked me with eye contact, head movement, and body movement while it talked, and I was stunned by how good it was. I did not sense the uncanny valley effect, thanks in part to the Unreal Engine by Epic Games.

The Scan Truck approach, that you can see in the video, involves using an array of cameras to capture high resolution images of a person from every angle, to turn that into a 3D avatar, and then to record that person’s facial expressions with a camera mounted in front of that person’s face. The motion capture is mapped to the high resolution model resulting in an incredible leap forward for volumetric characters.

What’s special about The Scan Truck approach in my opinion is that it is using a 3D photogrammetry scan for the body, to create a high resolution model, and then they are just updating parts of that 3D scan, such as the face, with motion capture, so it’s much cheaper than the Volumetric Capture studios at Microsoft.

See this paper from Facebook Reality Labs published in March 2021 called Mixture of Volumetric Primitives for Efficient Neural Rendering for what I think is a similar concept. “Mixture of Volumetric Primitives (MVP), a representation for rendering dynamic 3D content that combines the completeness of volumetric representations with the efficiency of primitive-based rendering, e.g., point-based or mesh-based methods.” https://arxiv.org/pdf/2103.01954.pdf

In summary, your phone, VR headset, AR headset (and perhaps your smart watch) will someday replace the entire Microsoft Mixed Reality Capture studio, with the ability not only to capture the highest resolution 3D models with lidar-RGB fusion algorithms, but also to complete them with neural network Shape Completion so they are pixel perfect, and to compress them with technologies like the Fourier projection slice theorem and the DASH File Format Specification and File Intercommunication Architecture from Emma-Jane at Digitalax.
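Compression via the Fourier projection slice theorem may sound abstract, but the theorem itself is easy to verify numerically: the 1-D Fourier transform of an image's projection equals the central slice of the image's 2-D Fourier transform. A minimal NumPy check follows; this only illustrates the theorem, and is not DASH's actual implementation, which has not been published in detail:

```python
import numpy as np

# Fourier projection-slice theorem: the 1-D FFT of an image's projection
# (sum along one axis) equals the central slice of the image's 2-D FFT.
rng = np.random.default_rng(4)
image = rng.standard_normal((64, 64))

projection = image.sum(axis=0)            # project the image onto the x-axis
fft_of_projection = np.fft.fft(projection)
central_slice = np.fft.fft2(image)[0, :]  # the ky = 0 row of the 2-D FFT

print(np.allclose(fft_of_projection, central_slice))  # True
```

The practical appeal for volumetric data is that a handful of 1-D projections can stand in for slices of the full transform, which is the same principle CT reconstruction is built on.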

I had additional questions about Digitalax's technology, and one of the things Emma-Jane wrote in reply was this, about DASH:

“I must say that this part is also the area with unbounded potential to greatly enhance and transform everything I spoke about above, as well as how we communicate, connect and create. DASH is a file format built for the metaverse and beyond. It is solving information transfer and interoperability the right way. It is not about optimizing on what has already failed. DASH is embedding the application layer directly into the file layer through elegant usage of some of the most beautiful math ever recorded by humanity. The whole creator, player, developer economy will change through this as they will be able to seamlessly transfer 3D/4D information through different graphic and digital economy environments with consideration for creative control. DASH also establishes business model innovation and exemplifies the power of open source and community driven collaboration, as we embed community incentive models into the upgradeability of the DASH Transformation Set.” — Emma-Jane, CEO of Digitalax

Digital Fashion X Fashion Technology, BCI, and lots of new sensors in all our devices, and on our bodies.

This technology convergence also comes with new brain computer interface sensors, and other sensors that track the entire body, the eyes, the heart rate, and your breathing, in order to do affective computing: predicting your emotions and intentions, and even diagnosing you with diseases.

All the major hardware tech companies in the VR, AR, smart phone space, and wearable tech space (including smart watches, clothing, shoes etc) are adding new sensors to our devices.

In January 2021 Gabe Newell talked about Brain Computer Interfaces, OpenBCI, and the future integration with VR, such as with the Valve Index

“Gabe Newell says brain-computer interface tech will allow video games far beyond what human ‘meat peripherals’ can comprehend”

A great article by Skarred Ghost goes in depth into the cooperation between OpenBCI and Valve for future Valve Index devices.

It's easy to imagine that a Valve Index headset with eye tracking and EEG sensors could lead to new experiences in online game worlds built with WebXR technology, such as Cryptovoxels. With BCI and eye tracking, the computer could do what is called affective computing: predicting your emotions and intentions, and perhaps even giving you a medical diagnosis.
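A typical first step in EEG-based affective computing is extracting band-power features (alpha, beta, gamma) before any classifier sees the data. Below is a minimal sketch on a synthetic "EEG" trace rather than real sensor data; actual pipelines use proper spectral estimators and many channels:

```python
import numpy as np

def band_power(signal: np.ndarray, fs: float, lo: float, hi: float) -> float:
    """Mean power of `signal` in the [lo, hi] Hz band via an FFT periodogram,
    a standard first feature for EEG-based affect classifiers."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2
    mask = (freqs >= lo) & (freqs <= hi)
    return float(psd[mask].mean())

# Synthetic 2-second "EEG" trace: a 10 Hz alpha rhythm plus noise
fs = 256.0
t = np.arange(0, 2.0, 1.0 / fs)
rng = np.random.default_rng(1)
eeg = np.sin(2 * np.pi * 10 * t) + 0.1 * rng.standard_normal(t.size)

alpha = band_power(eeg, fs, 8.0, 12.0)   # the dominant band in this trace
gamma = band_power(eeg, fs, 30.0, 45.0)  # mostly noise here
print(alpha > gamma)  # True
```

Relative band powers like these, per electrode, are what simple emotion classifiers are trained on before deep networks enter the picture.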

Tobii, an eye tracking company whose products have been integrated into some HTC Vive headsets, is also part of the new deal with Valve.
The Tobii, Valve, and OpenBCI partnership means eye tracking and EEG sensors will probably be included in the next Valve Index headset.

Facebook Research at Siggraph

At SIGGRAPH 2019, Facebook Research revealed its work to capture the face, eyes, and mouth with sensors in a VR headset, and to reconstruct your face as an avatar in VR (and in AR).

VR facial animation via multiview image translation

Apple’s AR VR Patents

What Apple’s patents tell us about their future VR and AR headsets is that Apple is interested in using many more cameras and other sensors to track everything about your face and body, to reconstruct the digital you in VR & AR.

The Apple AR VR patents have shown us descriptions of how they plan to capture eye movement, jaw movement, hands, fingers, gestures, and facial expressions, with extra cameras for high-precision hand tracking that includes two-handed gestures.

This will be combined with outward-facing sensors to track the body, the room, others. The Apple patent “Display System Having Sensors” describes some of this.

Just think about how Apple phones can already use complex face tracking software to unlock the phone, or to animate Animoji. From these patents we can guess that Apple intends to provide, in its AR VR headsets, the same features that Facebook has been showing for years at SIGGRAPH.

Google’s Soli sensor

A radar beam that captures finger motion in 3D space: press buttons and turn dials with your fingers, palms, and thumbs. You will see your hands in WebAR and Virtual Reality.

Facebook doesn’t think Google’s Soli is safe

With this information we can infer that Apple's patents point toward the same functionality as Google's Soli, only accomplished with many more cameras instead of a radar beam.

The overarching point here is that all of the major tech companies in the AR VR device space want to be able to capture every part of you, and of the world, in order to digitize both you and the world.

At some point your Apple watch, for example, with these technologies, will know everything about you. If your watch observes you admiring a can of soda for example, the question that follows from that is, will your insurance company somehow get ahold of this information, adjust your risk profile, and increase your monthly bill?

These VR AR headsets also have microphones like your phone. With Deep Neural Networks the possibilities are endless.

For example researchers have been using Deep Neural Networks to analyze voices and coughs to try to diagnose Covid-19. What other medical conditions could we diagnose from your biometric sensor readings?

Covid-19 has been detected from a cough.

Since March 2020 there has been intense research to diagnose Covid-19 from your voice.

Clubhouse can be used to mass diagnose viral infections and ID you.

The Mars Rover: A new discovery Deep Learning + Microphones

I heard a story about an engineer working on IoT devices who discovered that, with deep learning neural networks, he could use the microphone to provide data that would normally be extrapolated from the accelerometer. The microphone could, for example, be used to filter noise caused by vibrations. Apparently the scientists at NASA working on the Mars Rover were surprised to discover they could do the same with the microphones on the Rover, something they only realized after the Rover had landed on Mars.

After a lot of searching I found a paper titled “Soundr: Head Position and Orientation Prediction Using a Microphone Array,” which is another surprising way to use microphones and deep learning. Potentially this means the microphones on the Mars Rover could be used to track which way the Rover is facing.
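The classical signal-processing idea underneath microphone-array localization is time-difference-of-arrival: find the lag at which two microphones' signals best align. The sketch below uses a synthetic broadband source; the Soundr paper trains a deep network on multi-channel audio on top of cues like this, so treat this as the building block, not their method:

```python
import numpy as np

def estimate_delay(ref: np.ndarray, delayed: np.ndarray) -> int:
    """Estimate the sample delay between two microphone signals by locating
    the peak of their cross-correlation (the classic TDOA building block)."""
    corr = np.correlate(delayed, ref, mode="full")
    return int(np.argmax(corr) - (len(ref) - 1))

rng = np.random.default_rng(2)
src = rng.standard_normal(1024)  # broadband source signal
mic_a = src
mic_b = np.roll(src, 7)          # the same signal arriving 7 samples later

print(estimate_delay(mic_a, mic_b))  # 7
```

Given the delay in samples, the speed of sound, and the microphone spacing, simple geometry converts that lag into a direction toward the sound source.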

I also found “A deep learning approach to multi-track location and orientation in gaseous drift chambers” https://www.sciencedirect.com/science/article/abs/pii/S0168900220310378

Deep Learning + Wi-Fi or Radar

New research: using radar or Wi-Fi, researchers can analyze tiny changes in transmitted signals caused by subtle body motion, such as heartbeats and breathing. Combined with deep learning, this was used to predict subjects' emotions. A good question is what else deep learning and wireless signals could reveal about a person. Could we identify someone's biosignature this way? Could this be another tool to diagnose a medical condition? Could we predict human intentions, or even things like a person's political party, when combined with other data sources?

“For this study, the scientists employed deep learning techniques, where an artificial neural network learns its own features from time-dependent raw data, and showed that this approach could detect emotions more accurately than traditional machine learning methods.” source https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0242946
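The breathing-rate part of such systems can be sketched without any deep learning: the reflected signal's amplitude rises and falls with the chest, so the dominant low-frequency peak of its spectrum is the breathing rate. A toy example on a synthetic trace follows; the emotion prediction in the paper layers a neural network on top of raw time-dependent data like this:

```python
import numpy as np

def breathing_rate_bpm(signal: np.ndarray, fs: float) -> float:
    """Estimate breathing rate from a slowly varying signal (e.g. radar or
    Wi-Fi channel amplitude) by finding its dominant frequency."""
    signal = signal - signal.mean()           # drop DC so it can't win argmax
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    spectrum = np.abs(np.fft.rfft(signal))
    peak = freqs[np.argmax(spectrum)]
    return float(peak * 60.0)                 # Hz -> breaths per minute

# Synthetic 60-second capture: 0.25 Hz breathing (15 bpm) plus sensor noise
fs = 10.0
t = np.arange(0, 60, 1 / fs)
rng = np.random.default_rng(3)
trace = np.sin(2 * np.pi * 0.25 * t) + 0.2 * rng.standard_normal(t.size)

print(round(breathing_rate_bpm(trace, fs)))  # 15
```

Heart rate can be pulled out the same way from a higher frequency band, which is why a single reflected signal can yield several vital signs at once.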

It does not seem likely that any human knows the limits of what can be detected and predicted when combining sensors with deep learning.

There are also numerous advances in medical imaging technologies on the horizon, including but not limited to the following:

Electrical Impedance Tomography with Deep Learning

Furaxa Microwave Imaging



Functional NIRS imaging

The conversation “Optical Imaging with Kyle E Mathewson” dove into how fNIRS, or optical imaging technology, might detect the firing of a neuron, because the body of the neuron swells when it fires. It also points to how we might detect a disease like Covid-19: the virus SARS-CoV-2 causes vasoconstriction (by degrading ACE2 receptors, largely in the endothelial lining) and thrombosis, or blood clots (the damaged endothelial lining releases blood-clotting factors like von Willebrand factor (VWF)). This affects the flow of blood, and blood flow is what fNIRS and similar technologies detect.


“HD-DOT” “uses light to detect the rush of blood”

“What we’ve shown in this paper is that, using optical tomography, we can decode some brain signals with an accuracy above 90%, which is very promising.”

You can read more about this description of Sars-Cov2 here
1. Pulmonary Vascular Endothelialitis, Thrombosis, and Angiogenesis in Covid-19 https://pubmed.ncbi.nlm.nih.gov/32437596/

2. This review of 1000 Sars-Cov2 papers studying the effects of Covid-19 on the human brain also comments on this idea, but particularly it shows how the Sars-CoV-2 ACE2 receptor is able to invade the human brain as a consequence of viral damage to the endothelial cells which form part of the blood brain barrier. “The critical role of proteins S and E in HCoVs, specifically 0C43, and the slow movement of the blood in the brain’s microcirculation, can aid in the interaction of the SARS-CoV-2 S protein with the ACE2 receptor expressed in the capillary endothelium. Viral damage and recruitment of endothelial cells can promote invasion of the CNS by SARS-CoV-2 [23, 59–62].” https://pubmed.ncbi.nlm.nih.gov/32925078/

I also wrote about how SARS-CoV-2, via ACE2, degrades your endothelial lining, and about the implications of that, with links to many papers, in April of 2020: https://medium.com/silicon-valley-global-news/d-ribose-coronavirus-covid-19-sars-cov-2-research-on-potential-therapeutics-47d8ef56a9ff

If you are super interested in talking about next generation brain computer interface research, you should join this Facebook group.

Brain Computer Interfaces, and new sensors to track every aspect of humanity on a mass scale force us to think about a broad range of new considerations from neuro-hacking, to neuro marketing, cognitive nudging, privacy, public safety, human rights, and environmentalism.

With all of these advances in sensor technologies, especially combined with deep neural networks, it would be understandable if you eventually began to question your reality and your mind, as Yuval Noah Harari explains in his conversation with Mark Zuckerberg from April 2019.

Can we trust technology companies like Facebook and Google to protect our privacy, safety, our consent and our human rights? I like technology companies but history shows us that we cannot blindly trust corporations to do this by default. We need legislation as well to protect humanity including foreigners, from the potential risks of these new converging technologies.

Mark Zuckerberg explained in his talk with Sapiens author Yuval Noah Harari how, when free nations demand that companies store data locally (in their own countries), it legitimizes authoritarian nations doing the same. This in effect allows any nation to secretly compel Facebook to hand over data for its own purposes, as long as Facebook's data centers are located in that country.

Since some of Facebook's servers are located inside the United States, we can deduce from many news reports that the US Government can use Facebook Messenger to spy on other countries, and also on Americans who have conversations with people who live in other countries, which includes almost all Americans. The US Government can then map and study the relationships of all these Facebook friendships, and it has the option to study private conversations with the aid of computer automation and analysis tools.

Mark Zuckerberg explained this perhaps as a warning, as foreign nations increasingly demand that Facebook build data centers in their countries. This would be the most plausible reason why.

In addition, we have a statement from former US Attorney General William Barr, who infamously wrote a public letter asking Facebook not to add end-to-end encryption to Facebook Messenger by default. End-to-end encryption could in theory prevent, or at least make difficult, Facebook reading your messages, and thus make it difficult for governments to read your messages.

The risks of these converging technologies are not limited to neuro-hacking with brain computer interfaces, or to nation states forcing tech companies to comply with mass spying. There are potential dangers from hacking Augmented Reality, potential safety issues with using AR in public due to distractions, and the potential for environmental pollution from over-use of energy-expensive proof-of-work systems.

The potential dangers with Augmented Reality usage in public.


Andrew Bosworth, Head of Facebook Reality Labs, has said that facial recognition technology is something Facebook is “looking at” in relation to AR devices. This scenario presents challenges around protecting information privacy: as with Google Glass, you are going to have a camera recording what you see. Won't people be upset about that? Or will they be okay with it?

These devices have (or will have) cameras, world tracking, eye tracking, audio recording, and multi-modal deep neural networks that could reconstruct a volumetric representation of you, the places you visit, and your entire daily life. They may predict your thoughts and feelings, and, like Apple's fingerprint and face recognition, they may capture a person's unique biometric signatures in body movement, gait, and voice. They could potentially be used to extract medical information: whether you are having a heart attack, a seizure, or a stroke, or are tired, drunk, or on drugs. From this information they could figure out who you are attracted to and establish a semantic record of your love life, all your other relationships, your psychology, your faults, and your strengths.

So solving the privacy issue in theory means that this captured data can't be saved or uploaded to the cloud, and that is what a commercial company like Facebook is talking about right now: a privacy-first idea. But what is going to enforce Facebook's privacy-first idea? Only legislation. Until governments pass legislation requiring tech companies to enforce it, we have no means of enforcement should Facebook fail to live up to its ideal.

With WebAR, Deep Learning & Digital Fashion there could come a day when we need to worry about what a computer could do “if the computer had a contextual understanding of the environment you are in? It could change anything in the environment.”

Imagine that this next generation software could actively hide things from you that you should be avoiding, such as a low-hanging branch, or a nail with wood sticking out of it. It could, in theory, deliberately hide the sight of an oncoming car from you, or accidentally distract you from something essential, like your friend being in danger.

With the first Microsoft HoloLens, which I own but have loaned out permanently, it was easy to imagine banging your head on something you can't see because you are distracted by AR graphics. I have lost track of things in my environment when using Google's Project Tango device, and I have also been distracted from my environment by Niantic's Pokémon Go on my phone, using the AR mode built on ARKit.

This creates an enormous potential safety problem if you use WebAR to visit an untrusted and secretly malicious website, but also if you just use trusted websites that accidentally distract you from your environment.

The ethical considerations for designing AR social networks

All of these concerns call upon us to think about the ethics of design, because it’s not just about hacking your brain, or getting distracted, there is also the social networking element of WebAR and VR.

There is a great talk from GDC in 2017 that you should check out if you want to hear more about the importance of ethics in AR VR social networks:

“MMO designer Raph Koster talks about the social and ethical implications of turning the real world into a virtual world, and how the lessons of massively multiplayer virtual worlds are more relevant than ever.”

Raph Koster helps us to understand that we developers need to be proactive about designing programs that protect people in chatrooms from things like virtual rape, virtual molestation, using systems like consent (people have to be approved by you before they can speak, or be seen or have other privileges), a real names policy, and other considerations.

We also need tech companies and governments to respect the consent of the people. There are projects like the Cognitive Integrity Protection Act (which I heard about in a Clubhouse chat but still know very little about) that apparently aim to create national legislation compelling companies to protect user privacy, and to not use our devices and their software to nudge our minds or influence our decisions.

Will these technologies, like Deep Learning, BCI, WebAR, the Internet, Digital Fashion NFT remain open and accessible to everyone worldwide? So that every country can develop with them? Will totalitarianism overcome democracy? Will the powerful effectively end freedom? Or will freedom, democracy, and self governance prevail?

These are issues that everyone in the world must eventually consider as they come to understand the world we are moving towards, with its mass convergence of science and technology, and so we must ask ourselves about the larger implications.

Are we helping human rights? Are we reducing the total global energy consumption? Are we correctly addressing the present and future issues of climate change?

When I was thinking through the different possible ethical considerations of Augmented Reality, I was glad to see that Emma-Jane, CEO of Digitalax, had partnered with the Human Rights Foundation.


The Human Rights Foundation’s “Wear Your Values” campaign is partnering with DIGITALAX “to draw attention to human rights concerns in closed societies, voice the importance of a globally transparent supply chain and bridge the Digi-Fizzy (Digital-Physical) realms for promoting positive compounding action & distributed content generation native to movements for social & economic change.” read more with this link:

Part of Emma-Jane's contribution to addressing these human rights issues has to do with PODE, which is an ERC-1155 access token.

When I asked Emma-Jane for additional clarification on PODE, she wrote this:

“PODE is an access token into another project under the bigger umbrella. It is its own container and it serves as another keystone, where if it were removed from the overall system the other stones would collapse. You can’t just take gears out of a machine and then expect the machine to work. PODE, as stated in the original released article, acts as a proof of ownership for DIGITALAX’s digital experiences, digital economies and digital networks. This is true.” — Emma-Jane, CEO of Digitalax

“To breakdown this further to reinforce how we are approaching this in a way that actually creates proper value for the rest of the ecosystem, it is an entry point for young and upcoming players and creators to break in, field test and level up. This is what then generates and ensures defensibility, community growth and long term value with a consistent byproduct of rewards, which in part are returned to the PODE holders for engaging in this system. This means, that PODE holders directly hold the keys to being able to test run early games, mods, content experiences, digital experiences and events of the upcoming creators — evaluate and assess this content through DIGITALAX’s Native Digital Valuation Mechanism (Released and detailed further under our FGO standard). These BETA economies are where the value for the PODE holders comes from. Just like the significance of the on-ramp of fiat to crypto. There must be a low barrier to entry for the highest levels of potential talent at the earliest points in their trajectory, for those crossing into this digital realm. The ability to level up must mean something and that is why we take the approach to developing PODE that we do.” — Emma-Jane, CEO of Digitalax

Digitalax is helping to make Digital Fashion environmentally friendly.

In addition, Digitalax runs its smart contracts and $MONA transactions on Polygon, a “zero-gas,” proof-of-stake blockchain platform that is interoperable with Ethereum, which lets artists know they are not polluting the environment.

Digitalax only accepts $MONA for payment, in auctions and in the instant-buy collections (Exclusive, Semi-Rare, and Common). However, $MONA ERC-20 can be mapped from Ethereum to Polygon and back again.
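Because $MONA is an ERC-20 token, on-chain amounts are stored as integers in the token's smallest unit rather than as decimals. Here is a minimal sketch of converting between human-readable amounts and base units, assuming the common ERC-20 convention of 18 decimals (check the actual $MONA contract for its `decimals` value):

```javascript
// Convert between human-readable token amounts and ERC-20 base units.
// ERC-20 balances are integers; 18 decimals is the common convention.
// Sketch only: no negative amounts or input validation.
const DECIMALS = 18n;

function toBaseUnits(amount) {
  // amount: decimal string like "1.5" -> BigInt base units
  const [whole, frac = ""] = amount.split(".");
  const fracPadded = (frac + "0".repeat(Number(DECIMALS))).slice(0, Number(DECIMALS));
  return BigInt(whole) * 10n ** DECIMALS + BigInt(fracPadded);
}

function fromBaseUnits(units) {
  const whole = units / 10n ** DECIMALS;
  const frac = (units % 10n ** DECIMALS).toString().padStart(Number(DECIMALS), "0");
  // Trim trailing zeros (and the dot itself for whole amounts)
  return `${whole}.${frac}`.replace(/\.?0+$/, "") || "0";
}

console.log(toBaseUnits("1.5"));               // 1500000000000000000n
console.log(fromBaseUnits(1500000000000000000n)); // 1.5
```

This is the same kind of conversion that libraries like ethers.js provide with their built-in unit helpers; the sketch just shows what is going on under the hood.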

See my story:

“Don’t blame climate change on artists or fashion designers who use NFT. A deep dive into Cryptoart.wft numbers & logic. What is the big picture for climate change, what are the solutions, how can we help?”

We have established that NFTs are environmentally smart when combined with proof of stake, especially when the blockchain is powered by green energy sources such as solar, wind, and hydropower.

We can and must pass legislation to protect individuals from past and future corporate and government abuse. We need to go beyond talking about these issues, and beyond commitments from company leaders to do better, to passing legislation: a way to legally enforce and protect human rights when it comes to technology. We can work together to encourage our local, state, and national governments to pass common-sense laws, for example laws that compel companies like Facebook to keep their word about protecting privacy and other human rights. Part of this goal is to compel Facebook and other companies to finally make all their messenger apps encrypted end to end by default, and to install systems of consent in social media, augmented reality, and virtual reality chat applications, like Raph Koster talked about, along with something like a system of consent that protects our cognition from being influenced or nudged by companies that use data or advertising to try to influence people.

Assuming that we get these ethical considerations resolved, and these technologies are actually delivered into our devices, the next question is how will people use these new technologies in the native digital economy of the future?

A historical example could be the Player-Creator gaming community from Japan, with in-game economies.

“Japanese games have pioneered the way for more advanced user gameplay, appealing to the early adopter demographic. MyCryptoHeroes, an RPG game featuring a sophisticated in-game economy, came onto the scene and continues to top the charts of DappRadar. MyCryptoHeroes was one of the first games to combine on-chain ownership with more sophisticated off-chain gameplay. Users could use their heroes inside of the game and then transfer them to Ethereum when they wanted to sell them on secondary markets.” quote from OpenSea source link https://opensea.io/blog/guides/non-fungible-tokens/

MyCryptoHeroes featured an in-game economy; with these technologies we will be able to be the characters in these games in real life, inside real-life RPGs with cool graphics.

Accel World

Accel World is a Japanese anime series that features people playing games together in Augmented Reality that transforms their appearances. Their AR devices are brain computer interfaces, called Neuro Linkers (comparable to the Augma or NerveGear from Sword Art Online).

On the left is one of the main heroes, Kuroyukihime, as she normally looks; on the right is the same hero as she appears in Augmented Reality.


Much of the technology we have discussed in this article would be a necessary precursor for their technology to work. Accel World is set in the same universe as Sword Art Online.

Cecile Tamura wrote “The concept about a holographic figure has been in Japan since 2007. She is also a vocaloid”

GaiaOnline (h/t Cecile)

A long time ago (think 2007) GaiaOnline was an Anime-themed social networking and forums-based website.

“Users had the ability to customize their avatar in many ways, including skin tone, eye style and color, hairstyle and color, gender, race (e.g. human, vampire, elf, zombies), and attire. Numerous clothing items and accessories for avatars can be purchased from a range of NPC-run stores using the site currencies, Gaia Platinum and Gaia Cash. Avatars appear next to posts in the forums and profile comments (the post itself encapsulated in a “speech bubble”), and in Gaia Towns and Gaia rallies, and other environments the avatar appears as a movable character that can travel from place to place, interacting with the environment (catching bugs, shaking trees, digging for buried treasure, collecting trash and flowers, etc.) and other users.”

Elder Scrolls Online

You will be able to buy clothing for your game just like you shop in games such as The Elder Scrolls Online, where I recently watched a well-known player named Khaljitt on Twitch (who has more than 78.3K followers) selecting a costume for their character.

With these technologies Khaljitt could wear this character in real life; if you have AR glasses and the right permission, you could see Khaljitt's AR character.

Now you can imagine that in the near future you are walking down your favorite street in New York City wearing designer fashion AR glasses, seeing yourself as a character in an RPG, or wearing the latest Digital Fashion.

Thanks to partnerships between tech companies and designer fashion brands, your glasses have WebAR, the Augmented Reality Web inside them, overlaying interactive computer graphics on the world around you, and allowing you to wear your Digital Fashion NFT purchases.

How will this become mainstream?

  1. Major Technology companies will increasingly partner with major Fashion Brands.
  2. Technology companies will use deep learning to create pixel perfect avatars of you with accurate measurements that you can share with retailers.
  3. Retailers will help you to find fashion, clothing, shoes, underwear that fits you perfectly.
  4. Retailers will give the customer the option to share their measurements with Luxury Designer Fashion Brands

Specifically, technology companies like Facebook, Apple, Google, Microsoft, Magic Leap, Valve, Varjo, Vive, and many more will capture your eyes, expressions, and posture into 3D models in real time, so that in AR mode you can see someone's avatar, hiding their AR glasses and wearing their digital fashion items. Fashion designers, with your consent, will be able to use your avatar to get precise measurements of your entire body that they can use to fit their designs.
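What consent-gated measurement sharing might look like in software is still an open design question. Purely to make the idea concrete, here is a hypothetical sketch: every field name, the record shape, and the grant/read helpers are invented for illustration and are not any company's actual API:

```javascript
// Hypothetical sketch: a body-measurement record that the customer
// controls, with explicit per-brand consent grants that expire.
// All names here are invented for illustration.
function createMeasurementProfile(ownerId, measurementsCm) {
  return {
    ownerId,
    measurementsCm,     // e.g. { chest: 96, waist: 81, inseam: 79 }
    grants: new Map(),  // brandId -> expiry timestamp (ms)
  };
}

function grantAccess(profile, brandId, durationMs, now = Date.now()) {
  // The customer explicitly grants a brand time-limited access.
  profile.grants.set(brandId, now + durationMs);
}

function readMeasurements(profile, brandId, now = Date.now()) {
  const expiry = profile.grants.get(brandId);
  if (expiry === undefined || now > expiry) {
    return null; // no consent, or consent expired
  }
  return { ...profile.measurementsCm }; // copy, never the live record
}

const profile = createMeasurementProfile("user-1", { chest: 96, waist: 81 });
grantAccess(profile, "brand-A", 60000, 0);
console.log(readMeasurements(profile, "brand-A", 1000)); // { chest: 96, waist: 81 }
console.log(readMeasurements(profile, "brand-B", 1000)); // null
```

The key design choice is that access is time-boxed and revocable by the owner, which is the "system of consent" idea from earlier in the article applied to body data.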

In September 2020 Amazon created its Luxury Fashion Hub, featuring designer Oscar de la Renta

Amazon, the biggest retailer in the world and a big investor in AR and VR technology, will likely use these pixel-perfect body measurements (thanks in part to PyTorch3D, TensorFlow 3D, and 3D semantic segmentation), with the customer's permission, to help customers find clothes that fit perfectly. Amazon will also likely make it easy for you to share your measurements with designers and fashion brands like Oscar de la Renta, and Amazon's partners will likely profit enormously, because improved measurements translate to greater customer satisfaction. Other retailers, fashion brands, luxury brands, and clothing and shoe makers around the world will then follow suit so they are not left behind.

Facebook and EssilorLuxottica

Mark Zuckerberg has said (paraphrasing) that augmented reality glasses (and VR headsets) will someday look like an ordinary pair of glasses; to him the form factor is super important. He has talked about partnering with EssilorLuxottica, an Italian eyewear conglomerate and the world's largest company in the eyewear industry, with the idea that you will eventually have AR (and VR) built into fashion glasses. Facebook's first AR glasses, code-named Project Aria, are set to launch this year, in 2021.

Facebook Partners with designer glasses brand EssilorLuxottica

At Facebook Connect in 2020, Mark Zuckerberg revealed his vision to combine Facebook's augmented reality glasses with designer glasses by discussing his partnership with EssilorLuxottica, the maker of Ray-Ban and Oakley, which also makes frames for Armani and Versace. These designer augmented reality glasses are expected to ship this year, in 2021.

In the scenario that I think is implied, you might wear AR glasses most of the day every day, and potentially never have to sit at a computer desk or use a laptop again.

Aria, Facebook's AR glasses effort, is, in terms of its smaller form factor, really also the future direction of the Oculus Quest. I hope the Quest 3 will have higher-resolution color cameras for pass-through AR, so developers can work on apps that run on both Quest and Aria.

There are other examples of Luxury Designer Brands converging with Tech Companies, to create Fashion Tech products and Digital Fashion. I will just note a few examples:

Ralph Lauren's PoloTech T-shirt has sensors that track breath depth, heart rate, balance, calories, and more, sending the data to an app on your iPhone.

Drest, a small company with a gaming app, signed Gucci, Stella McCartney, Burberry, Valentino, Prada, and 100 more luxury brands, so that digital versions of luxury clothing appear in apps and video games, and customers can then buy physical versions of those luxury items.

Louis Vuitton, a luxury brand partnered with video game League of Legends by offering in-game Digital Fashion “skins” including a capsule collection by designer Nicolas Ghesquière.

Gucci partnered with Wanna Kicks to create Augmented Reality filters that let customers preview Ace sneakers on their feet.

Digital Fashion also leads to a new experiences in retail shopping, especially with 3D Light Field Displays and 3D Deep Learning.

In the video discussion at the beginning of this article, Anina, the CEO of 360Fashion Network, pointed out that the future of Digital Fashion includes novel uses of screens to display 3D models in retail environments, revolutionizing the in-person shopping experience. When the store becomes a screen, your store can be anywhere: an airport in Singapore, a giant mall in China, or your favorite street in Paris, France.

The screen you see can act like a magic mirror that lets you try on items digitally, because thanks to 3D Deep Learning it will be able to create a perfect 3D avatar of you in real time, with precise measurements. You can then try on anything in the store's inventory, or have items fitted to you in real time; tap a button and the real item gets shipped to your home so it's there when you arrive.

The outdoor 3D displays that will help transform the retail experience include light field displays. There are a few companies in this area that I am watching; they will help bring AR, VR, and Digital Fashion applications to the world without headsets or phones. Light field displays will help create the Holodeck imagined in Star Trek.

This will be exciting for people in real life when they visit their favorite places: think of the real being brought into the virtual realm, and the virtual being brought into the real. An early example might be the Instagram areas you can see at some malls in China.

Looking Glass Factory


Light Field Lab

Emma-Jane MacKinnon-Lee the CEO of Digitalax is playing a key role in helping to make this vision of the future possible.

Digitalax, as we discussed last time, interests me in part because it is a giant ball of concepts neatly organized and stacked together. You've got PODE, Dash, MONA, Fractional Garment Ownership, NFTs, smart contracts, ESPA, the Player-Creator Portal, and the Player Access Card, and on top of that I could not help but connect everything she is doing with the convergence of brain computer interfaces, sensors, Deep Learning, Augmented Reality, Virtual Reality, and AI-enhanced affective computing in our devices and headsets.

There are so many things to talk about. Emma-Jane, for example, is an expert on many things the average person knows little about; each of these topics is easily a day's conversation, a podcast, an article, or a class with prerequisite classes for some people (I've been reading her articles). The sheer scale of what Emma-Jane and Digitalax are doing, everything combined, including the speed at which Digitalax is moving, towers over what most people in the world are imagining. This is why I embarked on creating a series of articles showing how her technology is connected to all these other technologies and global industries. This is the second article; the first can be found here.

Emma-Jane writes: “DIGITALAX: This is the underlying digital assets supply chain and logistics router. It serves as the umbrella and the shepherd for each of the subset projects. Digital Fashion is the wedge into the market and spiritual underpinning. Fashion is one of the most essential components of what it means to be human; self expression, identity forming, creativity, how we select our mates. And it is also where computing came from. Textile production at the start of the industrial revolution is what led directly to the machines that we are typing these conversations on. There is something undeniably beautiful about that. The purpose of DIGITALAX is to create a sustainable on-chain transparent supply chain for native digital goods, particularly digital fashion — the core industry. Part of the reason for that is to establish the ability to assess value in an industry that dwarfs most others ($3 trillion is quite a bit of money before it has even been amplified by the transformation into digital). But more than that, it’s not about the industry size, it’s about human nature. Fashion is an inevitable and existential pillar of what it means to live fully digital lives. And digital is already dominant. So brief summary, DIGITALAX is building the infrastructure for a sustainable and scalable digital fashion industry to exist, as a vital bridge into fully digital lives” — Emma-Jane. CEO of Digitalax

At the center of Digitalax is ESPA

“ESPA is the core use case and token platform implementation architecture for DIGITALAX’s native ERC-20 utility token, $MONA. The whitepaper token economics introduced the concept of “Casual Play” into DIGITALAX, and the importance of $MONA when it comes to serving to further incentivise utility and application in the Player-Creator economy.” Source:

“ESPA provides, for the first time, a full incentive driven triad between Digital Fashion Designers, Developers, Players, to engage in an ecosystem that directly embeds sustainability into the model — through our native ERC-20 utility token $MONA.” — Emma-Jane. CEO of Digitalax

“MONA: There were 10, 000 $MONA issued, to be distributed through our NFT and LP staking over the course of 12 months. We started staking distribution in December, and there are still 9 months of token distribution to go. Staking was chosen as the most fair way to distribute this token. The team took 0% of $MONA token allocation, and the DIGITALAX treasury was allocated 10% of the total 10,000 for R&D maintenance, operations and project furtherance.” — Emma-Jane. CEO of Digitalax

“You can think of ESPA as a well defined Layer 2 utility and application environment to the entire gaming, VR, 3D industry. Any game developer can plug in and start showcasing their content, allowing players to engage in casual esports battles and earn income streams in $MONA. The Digital fashion from the DIGITALAX supply chain and marketplace is the core identity authentication for the players as they engage in cross-content matches. The designers and devs within this ecosystem also get income streams in $MONA. We are starting with indie devs, modders, because we recognise the human and business sense in identifying radically undervalued assets and people and removing the barriers arbitrarily blocking their growth.” — Emma-Jane. CEO of Digitalax

A 3D clustered chart, made with D3 & Threejs

These converging technologies mean that you will be able to use software libraries like D3 and three.js to create 3D graphs of your life, from your health data, to your trades, to the progress in your games.
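A chart like the one pictured is typically built in two steps: map the data to 3D positions, then render those positions with three.js. Here is a minimal sketch of the data-prep step, computing a centroid per cluster so each cluster can be placed and labeled in the scene (this simplified layout is my own illustration, not the code behind the pictured chart):

```javascript
// Compute a 3D centroid for each labeled cluster of points: a typical
// prep step before rendering each cluster as a group of meshes in three.js.
function clusterCentroids(points) {
  // points: [{ cluster: "health", x, y, z }, ...]
  const sums = new Map();
  for (const p of points) {
    const s = sums.get(p.cluster) ?? { x: 0, y: 0, z: 0, n: 0 };
    s.x += p.x; s.y += p.y; s.z += p.z; s.n += 1;
    sums.set(p.cluster, s);
  }
  const centroids = {};
  for (const [cluster, s] of sums) {
    centroids[cluster] = { x: s.x / s.n, y: s.y / s.n, z: s.z / s.n };
  }
  return centroids;
}

const pts = [
  { cluster: "health", x: 0, y: 0, z: 0 },
  { cluster: "health", x: 2, y: 2, z: 2 },
  { cluster: "trades", x: 10, y: 0, z: 0 },
];
console.log(clusterCentroids(pts));
// { health: { x: 1, y: 1, z: 1 }, trades: { x: 10, y: 0, z: 0 } }
```

In three.js you would then create a small sphere mesh for each point and position a text label at each centroid, while D3 handles scales and color mapping.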

Imagine that you could wear your bio activity, as something that could change your digital fashion in ways that people who know you understand.

Read more about the 3D Clustered Chart, D3 and threejs here:

NFT, Crypto, XR, Deep Learning Success Stories

Nyan Cat sold for 300 ETH; a rare Hashmask sold for 650K. Deep Learning is projected to create more value than the Internet. Oculus Quest developers are making millions of dollars, indicating an explosion in sales for the VR industry.

NBA Top Shot: collectible sports cards with videos that let you own a moment in sports history:

Plasmapay: DeFi for The Masses

“Consider PlasmaPay for example — a digital payments company that has optimized the fiat-crypto-fiat conversion process for millions of people from over 165 countries. The PlasmaPay app lets users purchase digital assets with virtually any Visa/MC card. In other words, you could deposit money from your bank account and instantly spend.”

CryptoKitties was one of the first success stories in the history of NFTs.

I recommend that people read the “History of non-fungible tokens (2017–2020)” by OpenSea. Today the developer of CryptoKitties is creating a new success story called NBA Top Shot; I wrote a story about that below.


BlockRocket was building NFT platforms before ERC-721 was ratified. Read this article, where James Morgan of BlockRocket explains his company's support for Digitalax and Emma-Jane's vision of the future of digital fashion.

“We at BlockRocket have been building NFT platforms for ourselves and others since before ERC-721 was even a ratified standard. When Emma approached us with her vision of the future of digital fashion it was an opportunity we could not refuse.”


Digitalax has had many successful NFT auctions since then and continues to innovate in the NFT space: working with artists, fashion designers, and smart contract writers; creating worlds; developing a new file format and specification for 3D file interoperability between third-party programs; and building a player-creator economy that bridges a convergence of technologies that will transform the world and create the native digital economy for future generations.


Polygon is making it easy for NFT companies like Digitalax to have much lower-cost, zero-gas transfers and to protect the environment with a proof-of-stake network that is interoperable with Ethereum.


“This decentralized trading platform for ‘Fractional NFTs’ connects economic DeFi incentives to real-life statistical data. Thanks to the new governance token, TSX, users can now get paid for their sports knowledge, by wagering on their favorite athletes’ performance estimates.”

Beeple at Christies

The incredible story of Beeple selling digital art, created over 5,000 days, at Christie's:

“Created over 5,000 days by the groundbreaking artist, this monumental collage is the first purely digital artwork (NFT) ever offered at Christie’s “

“‘Christie’s has never offered a new media artwork of this scale or importance before,’ says Noah Davis, specialist in Post-War & Contemporary Art at Christie’s in New York. ‘Acquiring Beeple’s work is a unique opportunity to own an entry in the blockchain itself created by one of the world’s leading digital artists.’”

Other NFT successes include, but are not limited to: Ben Mauro's EVOLUTION, Bitcoin Origins, KOGS, Nifty Gateway, Rarible, and SuperRare.


Ready to learn how to program with WebXR? This is a screenshot from a new course on three.js, a library at the heart of WebXR development: Three.js Journey, a class taught by Bruno Simon, at threejs-journey.xyz.

Learn: Threejs with Threejs Journey

Check out 8thwall’s real time reflections.


In our previous article I mentioned that developers can create WebXR applications with three.js, A-Frame, react-three-fiber, Babylon.js, Unity, PlayCanvas, and more. This week I thought I would share some resources for how people can create WebAR apps specifically.

Here is a link to the previous article by the way:

If you want to start building WebAR today, you can of course start with Unity, 8th Wall, or AR.js, or you can try your hand at making things with just the WebXR API, which can be used in combination with A-Frame, three.js, and react-three-fiber (via React XR), and should be compatible with Babylon.js, PlayCanvas, and others either now or in the future (I'm not sure when).

To start with the WebXR API directly, I can recommend a two-part article. This works great if you have a newer Google Android phone to develop on, such as the Pixel 4 or 5; it uses ARCore for tracking. The WebXR community is seeing evidence on the internet that Apple will soon bring WebXR support to WebKit, and thus to iOS phones, so that the same applications will use ARKit for tracking on iPhone. In addition, you can write applications that work on the HoloLens and the Magic Leap thanks to the Firefox Reality browser.
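The WebXR API's entry point is small: feature-detect the API, ask whether immersive AR is supported, then request a session. A minimal sketch (the optional features shown are common choices for AR content placement, not requirements; check browser support for your target device):

```javascript
// Feature-detect WebXR and request an immersive AR session.
// `xr` is navigator.xr in the browser; passing it in as a parameter
// keeps this helper testable outside a browser.
async function startImmersiveAR(xr) {
  if (!xr) {
    throw new Error("WebXR is not available in this browser");
  }
  const supported = await xr.isSessionSupported("immersive-ar");
  if (!supported) {
    throw new Error("immersive-ar sessions are not supported on this device");
  }
  // hit-test and dom-overlay are commonly requested optional features
  // for placing AR content and overlaying HTML UI.
  return xr.requestSession("immersive-ar", {
    optionalFeatures: ["hit-test", "dom-overlay"],
  });
}

// Browser usage:
// startImmersiveAR(navigator.xr).then((session) => {
//   // hand the session to your three.js / A-Frame renderer
// });
```

Frameworks like three.js and A-Frame wrap this same flow; three.js, for example, drives its renderer from the session you obtain here via its WebXRManager.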

WebAR with WebXR API: Part 1

WebAR with WebXR API, Part 2

I also host a coding meetup (now in its 4th year) where we will be focusing on WebAR this year. It's called WebXR Online Coding Support.

WebXR Online Coding Support is a meetup only for programmers (not recruiters or companies looking to hire) making apps with WebXR-related open-source code libraries, including but not limited to three.js, A-Frame, react-three-fiber, Babylon.js, PlayCanvas, WebGL, WebGL 2, WASM, WebGPU, Rust, neurotech, and neural networks.

You can get a link to join the WebXR Online Coding Support discord here:

Feel welcome to follow me on twitter. twitter.com/worksalt

Micah Blumberg on twitter

Please join the Deep Learning group on Facebook and other groups that Cecile and I both admin to comment, share your ideas, and participate in the growing conversations.