Digility: Digital Realities at Photokina 2018

Spencer Marsden
11 min read · Oct 18, 2018


For @BBC Blue Room

AR/VR/MR/XR/AI/ML

The views expressed in this report are the personal views of the author, Spencer Marsden, and should not be taken as the views or policies of the BBC.

The soon-to-be-annual camera-fest that is Photokina in the beautiful city of Cologne has for two years held a new attraction: Digility.

I attended September’s Digility while my Blue Room colleague Colin Warhurst absorbed as much of Photokina as he could. Colin’s report can be found here: www.blrm.io/photokina2018report

We looked for similarities, crossovers and contradictions between the two events. Photokina is aimed squarely at professionals working in stills photography; Digility is aimed at… well, I’d find out.

Digility is a portmanteau of Digital Realities and as such reflects a perspective on all things VR, AR, MR, AI, ML and IoT: our environment is being digitally enhanced, and it’s happening now.

To cut to the chase a bit, and to get to the headlines in the first minute of reading, I left the two-day conference with a sense of the different technologies coming together and being useful. Digital Reality is becoming more realistic, if you will.

I saw positive steps toward standardisation in Web AR/VR standards, some great examples of native (i.e. VR-in-VR) creation tools, and a much stronger conviction in the future and inevitability (for some) of AR.

I was struck by the confidence in the economic case for XR (XR is the catch-all shorthand for VR, AR, MR and RR) at this conference. Anyone whose job involves interactions between people, places and things will have their abilities amplified. Sure, VR is great for gaming, and AR is a cute way of showing your brand off, but what’s the roadmap for these technologies becoming ubiquitous and indispensable in our lives?

Answer: follow the money. Training and supporting staff (or robots) where skills are learned and enhanced with all the benefits of spatial relationships in VR, and then supported and distributed out in the real world via AR. Working inside a VR environment removes the abstraction of traditional flat workflows. Accessing a digital twin of our environment with an AR device helps people to work smarter, giving them a whole range of new abilities.

Five quotes from Digility 2018:

“If motion makes you sick you’re too old!”

“Augmented Reality is THE human computer interface”

“AR and VR means the death of paper manuals”

“AI: There has never been this speed of development in human history”

“Departments can order and install VR just like any other IT equipment request.”

Shared out-of-home VR experiences such as this Foosball-VR hybrid point the way to XR enhancing a night out.

We took a lot of photographs and images at the event, which you can check out here.

VR for training robots, AI for space

Nvidia gave a few presentations, opening by sharing confident figures for the industries in which they sit: gaming as a $100 billion industry, autonomous vehicles at $10 trillion and AI at a cool $35 trillion.

In Nvidia’s ISAAC programme, VR is used as a way of naturally and safely interacting between a simulated robot learning a task and its human teachers. The commonality of GPU hardware between the simulation and the real robot means that once the simulated robot has reached a satisfactory level, it can be brought into the real world to carry out its tasks. Nvidia’s robotics hardware is the Jetson Xavier, a low-energy system-on-a-chip with 9 billion transistors, delivering 30 TOPS (trillion operations per second).

Alison Lowndes from Nvidia talked about how gaming is fuelling the AI revolution, with software agents in simulation undergoing hundreds of human lifetimes’ worth of in-game experience, using Reinforcement Learning to develop strategies which are then tested on the fields of digital battle. Alison stated that this is a period of technological development unlike any other, with at least 80 scientific papers a day released related to AI.
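
To make that “lifetimes of in-game experience” idea concrete, here is a minimal sketch of a reinforcement learning loop running entirely in simulation. The toy environment, rewards and episode counts are invented for illustration and have nothing to do with Nvidia’s actual Isaac tooling.

```python
# Hypothetical sketch: tabular Q-learning in a toy simulated task.
# Everything here (states, rewards, thresholds) is made up for illustration.
import random

N_STATES, N_ACTIONS = 10, 2
q = [[0.0] * N_ACTIONS for _ in range(N_STATES)]    # Q-value table

def sim_step(state, action):
    """Toy simulator: action 1 moves towards the goal state, which ends the episode."""
    next_state = min(state + 1, N_STATES - 1) if action == 1 else max(state - 1, 0)
    done = next_state == N_STATES - 1
    reward = 1.0 if done else -0.01                  # small per-step cost encourages speed
    return next_state, reward, done

def train(episodes=5000, alpha=0.1, gamma=0.95, eps=0.1):
    for _ in range(episodes):                        # "lifetimes" of experience are cheap in simulation
        state, done = 0, False
        while not done:
            if random.random() < eps:                # explore
                action = random.randrange(N_ACTIONS)
            else:                                    # exploit current knowledge
                action = max(range(N_ACTIONS), key=lambda a: q[state][a])
            next_state, reward, done = sim_step(state, action)
            target = reward if done else reward + gamma * max(q[next_state])
            q[state][action] += alpha * (target - q[state][action])
            state = next_state

if __name__ == "__main__":
    train()
    # Only once the learned policy performs well enough in simulation would it
    # be transferred to the physical robot (the sim-to-real step).
    print([max(range(N_ACTIONS), key=lambda a: q[s][a]) for s in range(N_STATES)])
```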

New Nvidia GPU cards include Turing Tensor cores, which bring real-time deep learning to gaming applications. These are used to enhance graphics and rendering for games as well as AI applications. They have also released the Titan V, a £3K graphics card boasting over 5000 CUDA cores, with included access to a suite of cloud AI services for development.

Nvidia apply their research not only to headliner AI use cases such as autonomous cars, but also as part of NASA’s Frontier Development Lab, where Nvidia and companies such as Google Cloud, IBM and Intel collaborate on interplanetary issues such as the detection of exoplanets and, in 2016, planetary defence.

360, the Bigger Picture

Back at Digility and staying with 360, Peggy Stein from Steinzet MedienDesign shared some of the working practices involved in capturing high-resolution HDR flat 360 images for use as backplates (aka skyboxes) for CGI presentations and games. Using a Canon DSLR and a bespoke pan/tilt rig, 32-bit images are captured at a resolution of 31800 x 15900 px.
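
As a quick back-of-the-envelope check on that figure, assuming the standard 2:1 equirectangular projection covering 360 x 180 degrees:

```python
# Back-of-the-envelope check on the quoted backplate resolution.
width_px, height_px = 31800, 15900

print(f"total: {width_px * height_px / 1e6:.0f} megapixels")   # ~506 MP per frame
print(f"horizontal: {width_px / 360:.1f} px per degree")       # ~88.3
print(f"vertical: {height_px / 180:.1f} px per degree")        # ~88.3, i.e. a consistent 2:1 frame
```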

Both Colin and I checked out the few innovations in 360 cameras, mainly incremental improvements in the Insta360 Pro 2 and the Vuze+.

The Insta360 Pro 2 looks like the result of user feedback: more reliable wireless control, better stitching and easier integration with video editing software. A great improvement is that the camera allows offline editing, with the final stitch and render left until the end of the edit, saving lots of time and meaning you don’t need a supercomputer to work with the footage.

Insta360 Pro 2: attenSHUN

An interesting partnership between Facebook and RED cameras has yielded the Manifold. This boasts 16 x 8K sensors with Schneider 8mm f/4 180-degree fisheye lenses, capturing at 60fps. What’s different about this device, though, apart from the unannounced and probably large price tag, is that it can offer environments with six degrees of freedom rather than the standard three for 360 footage. By that I mean you can move in a limited way in and out of the scene, made possible by Facebook’s depth-sensing algorithms, as demonstrated here.

Facebook and RED’s Manifold 360 6DoF camera

This seems similar to Google’s Light Fields work, where multiple perspectives are shot from within a capture area and then rendered to the viewer, adapting to the viewer’s line of sight to give a sense of depth and parallax.
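
As a rough sketch of the underlying idea, here is how a 360 frame with a per-pixel depth map might be re-projected for a small head movement. This is a naive forward warp for illustration only (holes and occlusions are ignored), not Facebook’s or Google’s actual pipeline.

```python
# Naive sketch of depth-assisted 6DoF playback: with a depth value per pixel,
# an equirectangular 360 frame can be forward-warped for a small head offset.
import numpy as np

def reproject_equirect(rgb, depth, head_offset):
    """rgb: (H, W, 3) image, depth: (H, W) in metres, head_offset: (3,) in metres."""
    h, w, _ = rgb.shape

    # Longitude/latitude of every pixel in the equirectangular frame
    lon = (np.arange(w) + 0.5) / w * 2 * np.pi - np.pi        # -pi .. pi
    lat = np.pi / 2 - (np.arange(h) + 0.5) / h * np.pi        # +pi/2 .. -pi/2
    lon, lat = np.meshgrid(lon, lat)

    # Back-project each pixel to a 3D point on the captured scene surface
    x = depth * np.cos(lat) * np.sin(lon)
    y = depth * np.sin(lat)
    z = depth * np.cos(lat) * np.cos(lon)

    # Shift the viewpoint, then convert back to angles on the new sphere
    x, y, z = x - head_offset[0], y - head_offset[1], z - head_offset[2]
    new_lon = np.arctan2(x, z)
    new_lat = np.arctan2(y, np.hypot(x, z))

    # Nearest-pixel splat into the output frame (unfilled pixels stay black)
    u = ((new_lon + np.pi) / (2 * np.pi) * w).astype(int) % w
    v = ((np.pi / 2 - new_lat) / np.pi * h).clip(0, h - 1).astype(int)
    out = np.zeros_like(rgb)
    out[v, u] = rgb
    return out

# e.g. novel_view = reproject_equirect(frame, depth_map, head_offset=(0.05, 0.0, 0.0))
```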

More Tools for VR authoring, with the emphasis on native

Germany’s Dear Reality demonstrated their DearVR Spatial Connect, a way of mixing and editing audio in a virtual environment. Swapping faders and dials for VR controllers, the engineer can grab sound sources and parameters in the 3D space and manipulate them in a much more intuitive manner. It acts as a bridge between a games engine, in this case Unity, and an audio plugin operating within the audio software. Control interfaces and music have a long symbiotic history, and while there’s nothing out there yet that makes sense in terms of music composition, I feel this is on the right track (pardon the pun) and is addressing a real need that can be met with the VR toolset.
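
Under the hood, grabbing and moving a sound source in VR comes down to recomputing its position relative to the listener every frame and turning that into a gain and a direction for the spatialiser. The sketch below is purely conceptual, using a simple inverse-distance law and invented function names rather than anything DearVR actually ships.

```python
# Conceptual sketch only: reducing a grabbed 3D sound source to a distance
# gain plus a direction for the spatialiser. Names and the attenuation law
# are invented; real tools do far more sophisticated processing.
import math

def spatialise(source_pos, listener_pos, listener_yaw_rad=0.0, ref_distance=1.0):
    """Return (gain, azimuth_deg, elevation_deg) for a point source."""
    dx = source_pos[0] - listener_pos[0]
    dy = source_pos[1] - listener_pos[1]
    dz = source_pos[2] - listener_pos[2]
    distance = max(math.sqrt(dx * dx + dy * dy + dz * dz), 1e-6)

    gain = min(1.0, ref_distance / distance)                       # inverse-distance attenuation
    azimuth = math.degrees(math.atan2(dx, dz) - listener_yaw_rad)  # 0 deg = straight ahead
    azimuth = (azimuth + 180.0) % 360.0 - 180.0                    # wrap to [-180, 180)
    elevation = math.degrees(math.asin(dy / distance))
    return gain, azimuth, elevation

# Dragging a source with a VR controller simply updates source_pos every frame:
print(spatialise(source_pos=(2.0, 0.5, 2.0), listener_pos=(0.0, 0.0, 0.0)))
```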

Native VR development tools include Unity’s EditorXR, Unreal Engine’s VR Mode and Quill from Facebook, to name just a few. I expect the games engines to incorporate these audio editing features, albeit more simplistically, much as AR marker detection went from being Vuforia’s technology to a standard part of Apple’s ARKit.

This was made in Quill, Facebook’s VR paint and animate tool

VR starts to stand alone

Hervé Fontaine from Vive talked about the Vive Focus at Digility, while at the same time in the US Oculus announced their Quest headset. Both are standalone, all-in-one systems that don’t require a cable to a powerful PC but contain all the processing and power within.

The presentation from Vive was more geared towards their enterprise packages, indicating their commitment via their range of products, platform and support, but it was a chance to check the Focus out. In the context of Digility, Vive were talking about the range of use cases for Focus, such as education, sales support, museums and galleries and training.

Still finding it hard to look suave in a VR headset

The Focus headset itself was lighter than I’d expected, comfortable, and had the same resolution as the Vive Pro. The only content on there seemed to have lagging artefacts as I moved my head around, though that could be the fault of the experience rather than the headset itself.

Will standalone devices be the silver bullet that gets more people into headsets? Based on this and my experience with the three-degree-of-freedom Oculus Go, I’d say removing the fuss involved in wearing a VR headset can only be a good thing. Phone-based systems such as Daydream or Gear VR don’t quite cut it, as the phone gets too hot with use or interrupts with notifications and so on. I feel the tipping point may be when we start seeing people in standalone headsets on airplanes, and not just in first class!

Web XR is developing, standardising, improving — but it’s still a mess!

“With just one link you can use the biggest distribution system in existence”

Dr Diego Gonzalez-Zuniga from Samsung Internet

Experiencing VR or AR content delivered by the web is an exciting prospect, with the code running on an XR device via the browser. There is development towards standardisation but, as the picture shows, it is still a confusing state of affairs.

An Augmented Reality check

With numbers like these, how could AR possibly fail?

As I mentioned earlier, there was less of a wide-eyed, techno-shamanic evangelism regarding XR here, rather a practical recognition of the usefulness of a new toolset available to business. A look into the AR crystal ball by Sacha Kiener from Augmentio revealed a list of multinational corporations already actively using this technology, including Intel, Siemens, Audi, Lego, Electrolux, Zeiss, Opel, Mercedes Benz, VW, McDonalds, E.ON and Nokia. Thomas Hoger from 3Spin and Dr Stefan Roth from DB Systel noted that Germany has a skilled labour shortage, and that on-site technical support in the form of AR could be a way of beginning to address it. Use of AR to train pilots led to a 15% improvement over paper-based training materials; perhaps AR and VR really could mean the death of paper manuals?

They also demonstrated a field trial of using Microsoft Hololens to repair the coffee machines on Deutsche Bahn trains, leading me as a UK train user to feel mildly jealous that unmaintained coffee machines are the biggest problem on German trains…

AR was presented as an effective bridge between the digital and the human components of Industry 4.0, or the Fourth Industrial Revolution, comprising Cloud Computing, the Internet of Things and Machine Learning (thank you, Wikipedia…).

“The Internet of Things exists to add digital capabilities and behaviours to physical things in the world around us, and the reason we do this is because it’s cheaper than not doing it”

The above statement is from Dirk Songür, Studio Head of Microsoft Mixed Reality, who made the case for the Internet of Things as the bedrock connecting the real world and Augmented Reality.

A simple thought experiment is to imagine an area with sensors embedded in, well, everything (roads, production lines, floor tiles, lights, CCTV, furniture, pedestrian crossings and so on) and to ask how the resulting data could be useful to a range of interests. Looking at the scene as a police officer, shopper, shop owner, courier, commuter, planner, engineer or tourist surfaces different interpretations of the same data, driven by each viewer’s needs.

Objects give out data, telling us how they are, were and will be, and we can then make smarter decisions and selections based on that data. Augmented Reality offers a way to see, hear and feel that information in real time, or, as it was put, a window into this digital world. If this makes sense (aka profit) for business, then we will see it becoming more ubiquitous in everyday life. Will we get to a point where we wonder how we ever did without seeing digital objects and information overlaid onto the real world?
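
Going back to the thought experiment above, here is a toy illustration of how the same stream of readings gets filtered very differently depending on who is asking; the readings, fields and roles are entirely made up.

```python
# Toy illustration: one stream of (made-up) sensor readings, viewed through
# different roles. Field names, values and roles are invented for the example.
readings = [
    {"thing": "pedestrian crossing", "footfall_per_hour": 740, "fault": None},
    {"thing": "shop doorway camera", "footfall_per_hour": 310, "fault": None},
    {"thing": "street light 42", "footfall_per_hour": None, "fault": "lamp out"},
]

interests = {
    "shop owner": lambda r: r["footfall_per_hour"] is not None,   # who is passing by?
    "engineer": lambda r: r["fault"] is not None,                 # what needs fixing?
}

for role, is_relevant in interests.items():
    print(role, "->", [r["thing"] for r in readings if is_relevant(r)])
```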

While I was writing this, Magic Leap described a similar vision at their LEAP conference, branding it, perhaps a tad prosaically, the ‘Magicverse’.

As with a lot of trends, I think it is always useful to look at how artists are reacting to new technology. I feel graffiti artists in particular have a singular perspective on graphical interventions into the public realm, and the videos below give a taste of this.

Bond Truluv’s AR graffiti

To sum up, Digility had a very rich and interesting programme of presentations, and the tech on display reflected VR’s current position on the hype curve: firmly out of the ‘trough of disillusionment’ and on its way to the ‘plateau of productivity’. I left with a sense that business is adding a much-needed calm and practical voice to the XR conversation, and I’m hoping the organisers will be able to build on this stimulating event on a yearly cycle.

I nearly managed to make it through the article without a Black Mirror reference….nearly.

And how does all this affect the BBC? It all depends upon our audience’s expectation of content and interaction, and what technology they expect to experience it upon. Will we make more immersive and augmented content in three dimensions, capturing objects and performances volumetrically? What content works best on what device and in which scenario? What technologies will support the distribution of this content?

I hope this has been interesting, and thanks for reading!

The Uber wasn’t what I expected…

https://www.flickr.com/photos/bbcblueroom/albums/72157696231005420


Spencer Marsden

Engineer for BBC Blue Room, spiking on interesting tech