The Metaverse Is Not What You’ve Been Told

Mark Rolston, Founder and Chief Creative Officer of argodesign, argues that the future is the meta-me, not the metaverse.

The way the metaverse is commonly described today suggests that we as users will need to enter into a VR world in order to enjoy a wide range of new experiences. This vision is essentially backwards.

The evolution of computing points the other way – towards a computer that becomes part of the world we live in. Computers are getting very good at meeting us where we are, and they will only get better at this.

The real inflection point is not a place. It’s us – the empowered self. In other words, the future is the meta-me, not the metaverse.

Despite the amount of attention showered on Mark Zuckerberg’s vision of the metaverse, under scrutiny it’s hard to be particularly excited. There’s little demand among users, and what enthusiasm exists in business is largely self-serving.

But what I can appreciate is that the metaverse has started a much-needed conversation. We should be talking about our future, especially about what role computing will take in our lives. And we should be especially suspicious of a vision that suggests we hand over much of our digital interaction to a fully immersive private universe – a concept as wild as committing to live and work in a town owned, ruled, and administered by a corporate giant: McDonaldstown, Coca-Colaville, or Amazon City.

When we talk about the metaverse, we’re really talking about a new approach to computing that will reconcile our digital and physical universes into one experience. The reason we’re just now talking about this is the maturation of several key technologies: virtual reality (VR), augmented/mixed reality (AR/MR), and artificial intelligence (AI). So let’s first explore what’s valuable about these technologies.

VR for Niche Applications

Meta (formerly Facebook) has focused heavily on VR as part of its drive to push the metaverse concept to the public. In reality, however, VR will represent only a small percentage of the interactions we’re likely to use in daily life, for the simple reason that VR, like traditional computing, requires the user to stop what they’re doing and “plug in” – and even then the experience is not comfortable for extended use.

In reality, most of us will experience technology that fits ever more effectively and comfortably into our existing world. That’s not to say VR won’t have any role at all, but its role will be limited to niche applications.

For example, the physicality and immersiveness of VR creates a clear space for full-body video games, both single- and multi-player. It also has commercial applications in simulation, such as providing 1:1 mockups of architectural plans or archaeological sites. And VR has some clear professional applications that leverage its immersiveness, such as tele-surgery or sensitive conference calls that require access to participants’ body language.

Ultimately, though, VR will represent only a subset of interactions that fit into the larger computing landscape. Instead, we’re going to increasingly adopt interfaces that are embedded in our existing universe (the real world).

The Ascent of AR

AR (sometimes called MR, or mixed reality) can refer to a variety of currently novel interfaces: Google Glass-style spectacles, projected interfaces, and holograms on everyday objects and furniture.

What’s great about AR is that it enables us to work with computers in positions and contexts other than the phone in our hand or the keyboard at our desk. Let’s use a simple example: we want to access content associated with a specific location – perhaps a presentation that was placed in a real conference room. We call this concept “placefullness” – the imbuing of digital information with physical location context. There are hundreds of uses for such an idea, and today we largely achieve it through brute force. In future scenarios, a pair of AR glasses will be aware of the user’s location and able to project information into that space as if it were a common real-world object left there. It’s also possible to enjoy the same spatial context with today’s smartphones, only less elegantly presented.
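To make the idea a little more concrete, here is a minimal sketch of how location-anchored content might work – my own illustration rather than anything from argodesign, with hypothetical names throughout:

```python
from dataclasses import dataclass, field
from math import radians, sin, cos, asin, sqrt


def _haversine_m(lat1, lon1, lat2, lon2) -> float:
    """Great-circle distance in metres between two lat/lon points."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6_371_000 * asin(sqrt(a))


@dataclass
class PlacedContent:
    """A piece of digital content anchored to a real-world location."""
    name: str
    latitude: float
    longitude: float
    payload: str  # e.g. a link to the presentation file


@dataclass
class PlacefulStore:
    """Toy registry of location-anchored content ("placefullness")."""
    items: list[PlacedContent] = field(default_factory=list)

    def place(self, item: PlacedContent) -> None:
        self.items.append(item)

    def nearby(self, lat: float, lon: float, radius_m: float = 30.0) -> list[PlacedContent]:
        """Return content anchored within radius_m of the user's position."""
        return [i for i in self.items
                if _haversine_m(lat, lon, i.latitude, i.longitude) <= radius_m]


# A presentation "left" in a conference room; AR glasses (or a phone) query by position.
store = PlacefulStore()
store.place(PlacedContent("Q3 deck", 30.2672, -97.7431, "https://example.com/q3-deck"))
print(store.nearby(30.26721, -97.74311))  # -> [PlacedContent('Q3 deck', ...)]
```

The real work, of course, lies in the glasses knowing precisely where the user is standing and rendering the content convincingly in place; the registry itself is the easy part.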

AR will provide more organic means for us to communicate and collaborate with colleagues, letting us interact with less friction and with the context of real-world space. This opens up multitasking, along with a greater degree of spontaneity in how we approach computing.

AI Will Reduce the Overhead

While AI is not strictly a new interface technology, it has a major role to play in the integration of our digital and physical realities. First, it will reduce the overhead of interfacing with ever more sophisticated computing scenarios, and in more of the situations where we won’t have access to the touchscreens, keyboards, or mice that let us interact effectively today.

Machine learning (ML) will also unlock exciting possibilities in the form of models that come to learn our preferences and habits, model our decisions, and can act on our behalf. Such models – what I dub the meta-me – will extend what it means to be human in the digital world by giving us a chance to automate away many of the smaller decisions we need to make on a day-to-day basis, such as life admin, organising calendars, making repeat purchases, or signing contracts.

Most excitingly, our respective meta-mes would be able to meet, interact with one another, share information, and form agreements or make plans on our behalf. All of this can happen without us having to think about it, except when we review the outcomes of their actions. This represents a comprehensive merging of our digital and physical footprints that ultimately makes our day-to-day lives more productive, fulfilling, and enjoyable.
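As a thought experiment – again my own hypothetical sketch, not an existing product or API – two meta-mes might settle on a meeting time from their owners’ learned habits without either person being involved:

```python
from dataclasses import dataclass


@dataclass
class MetaMe:
    """Toy stand-in for a model that has learned its owner's preferences."""
    owner: str
    free_slots: set[str]       # inferred from the owner's calendar habits
    preferred_slots: set[str]  # times the owner tends to accept

    def propose(self) -> set[str]:
        """Slots the owner is both free for and likely to accept."""
        return self.free_slots & self.preferred_slots

    def negotiate(self, other: "MetaMe") -> str | None:
        """Agree on a slot that works for both owners, if one exists."""
        common = self.propose() & other.propose()
        return min(common) if common else None


alice = MetaMe("Alice", {"Tue 10:00", "Wed 14:00", "Thu 09:00"}, {"Tue 10:00", "Thu 09:00"})
bob = MetaMe("Bob", {"Tue 10:00", "Thu 09:00"}, {"Thu 09:00"})

slot = alice.negotiate(bob)
print(f"Meeting booked for {slot}; Alice and Bob only review the outcome.")
# -> Meeting booked for Thu 09:00; Alice and Bob only review the outcome.
```

The interesting questions are all hidden inside those preference sets: how the model learns them, and how much authority we are willing to delegate to it.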

The meta-me is a vision of the future person, fully able to interact with the digital world and all that it offers – not limited to a VR interface, nor tied to any special device, but meeting technology wherever they are, employing everything around them in their world, and conversant with all that the real world demands of them.

The Metaverse as a Confluence

From my point of view, the metaverse cannot be considered a killer app or a singular experience. Instead, if we want to use this word, metaverse, it must be thought of as the next pattern of computing – all devices, all modalities. It will see the silo between computing and the real world knocked down and replaced by an integrated experience in which computing is an omnipresent and increasingly invisible part of our everyday lives.

Ultimately, in contrast to Meta’s vision, the metaverse is going to be a means to enhance our experience in the world, not a way to distract us from it. Zuckerberg’s concept of the metaverse is ridiculous for this very reason: we don’t want to enter the machine, we want it to meet us on our terms.

By Mark Rolston, Founder and Chief Creative Officer of argodesign.
