Taking the Metaverse to the Next Level with Data Compression

Guido Meardi, CEO and Co-Founder of V-Nova, makes a believable case and mentions 'Ready Player One', Atari and 'The Matrix' along the way.

Forget previous attempts at immersive virtual worlds. The metaverse is here and it’s going to wipe the floor with all of them. Or at least that’s the intention. It’s a concept that is on the lips of everyone connected to the technology world, and it’s one to be taken seriously.

In 2021, Facebook put all of its eggs into a Meta-branded basket, signalling a major marketing transformation and commitment to creating its own virtual world.

But what exactly is the metaverse? Essentially, it’s a brand-new type of Internet user interface: a way to interact with other parties and access data as a seamless augmentation of the actual world in front of us. It aims to replicate the six degrees of freedom (6DoF) and inherent depth we experience in the real world, as opposed to the flat screens we typically consume data from.

Right now, however, the metaverse remains at the concept stage and in the realm of the early adopter. To make it a mass-market reality, organisations developing their virtual destinations – some of which may eventually rival the diverse world seen in Ready Player One, though most will be smaller, more focused applications such as concert venues or virtual offices – need to consider the volumes of data required to deliver an immersive experience.

There’s also the need to perform so-called “split computing”: rendering 3D objects on one machine and then casting high-quality, ultra-low-latency video to the “extended reality” (XR) display device.

In addition, end users expect visually stunning and immersive experiences. This is of little surprise when video gamers now take for granted that the average football game lets them see the beads of sweat dripping from the brow of a very recognisable virtual Cristiano Ronaldo.

Participating in a virtual dance party or in a virtual training class next to sketchy low-res avatars moving at sluggish frame rates will not cut it, and may even make some people motion sick.

It’s All Down to Data…

Yes, it’s that word again. But the importance of data can’t be overstated when it comes to the development of the metaverse. Volumetric objects need to be compressed and streamed in real time to the rendering device, and this requires suitable lossy (and ideally multi-layer) coding technologies to efficiently compress, decompress and stream data types such as meshes, textures and point clouds.

Truly photorealistic 6DoF volumetric experiences are unlikely to be feasible at current HD video streaming bandwidths, otherwise organisations risk a graphical output that will evoke memories of the Atari 2600’s capabilities. The network backbone and CDNs must therefore be able to reliably stream volumetric worlds.
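To put rough numbers on that claim, here is an illustrative back-of-envelope calculation; every figure in it is an assumption chosen for the sake of the example, not a measurement of any particular system.

```python
# Illustrative back-of-envelope estimate; all figures below are assumptions.
points_per_frame = 1_000_000        # assumed density of one photorealistic volumetric object
bits_per_point = 3 * 16 + 3 * 8     # assumed 16-bit x/y/z coordinates plus 8-bit RGB colour
frames_per_second = 30              # assumed capture/playback rate

raw_mbps = points_per_frame * bits_per_point * frames_per_second / 1_000_000
hd_budget_mbps = 8                  # assumed typical HD video streaming bandwidth

print(f"Raw volumetric stream: ~{raw_mbps:,.0f} Mbps")
print(f"HD streaming budget:   ~{hd_budget_mbps} Mbps")
print(f"Compression required:  ~{raw_mbps / hd_budget_mbps:,.0f}x")
```

Even with these deliberately modest assumptions, a single uncompressed volumetric object lands in the gigabit-per-second range, hundreds of times beyond an HD streaming budget, which is why efficient lossy coding rather than extra raw bandwidth alone is the practical answer.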

The other piece of the XR/metaverse puzzle is split computing, i.e., the need to render 3D objects on a device that is different from the lightweight XR device that – most typically – users will wear on their face. Lightweight XR headsets and eyeglasses will not be powerful enough to guarantee sufficient quality, so we will need to render somewhere else (either locally or in the cloud) and then cast an ultra-low-latency high-resolution high-frame-rate video to the display device.
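As a rough illustration of that split, the sketch below separates the heavy rendering loop from the lightweight headset loop. It is a toy in Python with stand-in data and hypothetical function names, not any particular engine’s or headset’s API.

```python
# Hypothetical sketch of split computing; all names and data are illustrative stand-ins.
import queue
import threading

def rendering_host(poses, link):
    """Heavy side: a local PC or cloud GPU renders each frame and sends a compressed packet."""
    for pose in poses:
        frame = f"rendered view for pose {pose}"   # stand-in for a full-quality 3D render
        link.put(frame.encode())                   # stand-in for low-latency video encoding
    link.put(None)                                 # end-of-stream marker

def xr_headset(link):
    """Light side: the headset only decodes and displays; it never renders the 3D scene."""
    while (packet := link.get()) is not None:
        print("display:", packet.decode())         # stand-in for decoding and showing the frame

link = queue.Queue()                               # stand-in for the Wi-Fi/5G video link
threading.Thread(target=rendering_host, args=([1, 2, 3], link)).start()
xr_headset(link)
```

The point of the structure is that everything expensive stays on the rendering host, while the device on the user’s face only has to receive, decode and display a video stream.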

The compression method used between the rendering device and the XR display device also needs to be efficient enough to fit within the realistic bandwidth offered by latest-generation Wi-Fi and/or 5G, which is typically 30-50 Mbps or less.
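For a sense of how demanding that is, the following illustrative calculation compares an uncompressed per-eye video feed with such a link; the resolution, frame rate and bit depth are assumed round numbers, not a product specification.

```python
# Illustrative only: resolution, frame rate and bit depth are assumptions.
width, height = 2000, 2000          # assumed per-eye panel resolution
eyes = 2
fps = 90                            # assumed XR refresh rate
bits_per_pixel = 24                 # assumed 8-bit RGB

raw_mbps = width * height * eyes * fps * bits_per_pixel / 1_000_000
link_mbps = 40                      # middle of the 30-50 Mbps range cited above

print(f"Uncompressed cast:          ~{raw_mbps:,.0f} Mbps")
print(f"Available link:             ~{link_mbps} Mbps")
print(f"Required compression ratio: ~{raw_mbps / link_mbps:,.0f}:1")
```

Under these assumptions the codec has to deliver a compression ratio of several hundred to one while keeping latency imperceptibly low, which is the real engineering constraint of split computing.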

Ultimately, these enormous volumes of data need to be manageable in real time with reasonable devices and low power consumption. Simply building new data centres and putting down new fibre cables won’t make the grade, since the above examples were already best-case scenarios.

Compress to Impress

Data compression that fits the quality, bandwidth, processing and latency constraints is the name of the metaverse game. Luckily, standard solutions are available to efficiently accommodate higher-resolution displays and streaming quality for video feeds and volumetric objects, allowing true user immersion within realistic technical constraints.

The MPEG-5 LCEVC (Low Complexity Enhancement Video Coding) coding enhancement standard facilitates compression of ultra-low-latency video feeds to fit within XR streaming constraints, enabling high-quality immersive experiences without wearing a game console on our face.
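The toy sketch below illustrates the general principle behind enhancement-layer coding of this kind: a base codec works at a lower resolution, and a thin enhancement layer carries the residual detail needed to reconstruct the full-resolution picture. It is a conceptual NumPy illustration of layered coding, not the actual LCEVC toolset or bitstream.

```python
# Conceptual toy of layered enhancement coding; this is NOT the LCEVC algorithm itself.
import numpy as np

def downscale(img):   # 2x2 averaging stands in for the encoder's downscale to the base layer
    return img.reshape(img.shape[0] // 2, 2, img.shape[1] // 2, 2).mean(axis=(1, 3))

def upscale(img):     # nearest-neighbour upsampling stands in for the decoder's base upscale
    return img.repeat(2, axis=0).repeat(2, axis=1)

frame = np.random.rand(8, 8)              # stand-in for one full-resolution video frame

base_layer = downscale(frame)             # would be handed to an existing base codec (e.g. AVC/HEVC)
enhancement = frame - upscale(base_layer) # small residual layer restoring full-resolution detail

decoded = upscale(base_layer) + enhancement   # decoder: upscaled base plus enhancement layer
print("max reconstruction error:", np.abs(decoded - frame).max())   # zero in this lossless toy
```

A real base codec would of course introduce its own loss, and the enhancement layer itself would be compressed, but the division of labour between a cheap base and a lightweight correction layer is the idea that keeps complexity and latency low.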

On the volumetric object side, a point-cloud technology based on LCEVC’s close relative standard SMPTE VC-6 is available to compress the otherwise intractable volumes of data of photorealistic volumetric movies into manageable assets. These are then ready for distribution, real-time decoding and consumption on all major VR headsets.
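To make the multi-layer idea concrete on the volumetric side, here is a minimal sketch that derives progressively coarser levels of detail from a point cloud using a voxel grid. It only illustrates hierarchical level-of-detail in general; it is not the VC-6-based point-cloud technology itself.

```python
# Minimal illustration of hierarchical point-cloud levels of detail; not the VC-6 codec.
import numpy as np

def downsample(points, voxel_size):
    """Keep one representative point per occupied voxel of the given size."""
    keys = np.floor(points / voxel_size).astype(np.int64)
    _, idx = np.unique(keys, axis=0, return_index=True)
    return points[np.sort(idx)]

cloud = np.random.rand(100_000, 3)     # stand-in for a captured object, coordinates in a unit cube
for voxel in (0.01, 0.02, 0.04):       # progressively coarser layers; the coarsest can stream first
    layer = downsample(cloud, voxel)
    print(f"voxel size {voxel:.2f} -> {len(layer):,} points")
```

Streaming the coarse layers first and refining them with finer ones is what lets a photorealistic volumetric asset start playing quickly and scale its quality to the available bandwidth and device.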

Efficient multi-layer data coding methods make higher-quality XR experiences possible, encouraging mass adoption. They enable better visual quality, faster processing and more reliable ultra-low-latency video casting to lightweight XR devices.

Delivering a Believable Digital World

The ever-expanding volume of 3D data and the tight video streaming constraints of split computing are the key challenges to address to ensure that immersive worlds can be delivered to users in an interoperable way and at scale. By incorporating supporting technologies such as LCEVC to ensure optimum visual quality within realistic constraints, metaverse pioneers can deliver worlds so believable that even Neo from The Matrix wouldn’t notice that they are not real.

By Guido Meardi, CEO and Co-Founder of V-Nova.
