Edges of the Metaverse, Part 2 of 6: The Rising Compute Needs of Spatial Navigation

The metaverse challenges us to stretch our collective imaginations to the edges of our digital experience.

While building the navigation engine for the metaverse, we at Lighthouse have grown accustomed to thinking about its edge cases, mysteries, and unresolved dilemmas.

And we’d love to invite you into some of our late-night musings.

Powering a truly immersive open metaverse will require as much as 1,000x our currently available computing power. In this second installment of our six-part series, The Edges of the Metaverse, we map out what it takes to get the compute we need.

Key Takeaways

  • In physical environments, attendance capacity is limited by space constraints. In digital realms, that capacity is determined by computing power.

  • Computing efficiency will need to increase dramatically before a fully embodied metaverse experience can get anywhere near the realism of our physical one.

  • Major companies, including Improbable and Hadean, are building interoperability standards and the necessary infrastructure for high-fidelity, low-latency virtual spaces with more than 10,000 concurrent users.


How do we gather in the physical world? Typically, we consider places that provide enough space to fit everyone we want to invite. Spaces provide the borders (and thus, the constraints) for how — and how many of us — can come together.

At first glance, the metaverse seems to eliminate those limitations, offering expansive potential for gathering and community building without the constraints of physical space and resources.

Get rid of those and “in theory, the ‘total addressable market’ for the metaverse is anyone with an internet connection,” the Water and Music community writes in its “9 design principles for a musical metaverse” report. “The ability to reach many more people at once than what would otherwise be possible in the physical world is a key value proposition of musical experiences in the metaverse.”

For example, a world-renowned medical specialist might invite thousands of students to closely shadow her at once, rather than just the handful that could fit in the brick-and-mortar surgery room. Large-scale conferences, conventions, and activism movements could assemble for a fraction of the cost while reducing flights and carbon footprints.

But the digital reality of maintaining a persistent 3D virtual environment that supports real-time social interactions is not that straightforward. After all, the goal isn’t just to meet virtually — we can already do that on Zoom — but to experience embodied digital immersion that engages all of our senses (touch, and someday perhaps even smell and taste) and simulates physical encounters.

That kind of embodiment — one that supports serendipitous encounters, spatial audio, and hundreds of concurrent users in the same area — requires a level of technological sophistication and global resource mobilization significantly beyond our current capabilities.

Scoping the size of the computing challenge

Raja Koduri, VP of Intel’s accelerated computing systems and graphics group, wrote that an immersive metaverse requires “a 1,000-times increase in computational efficiency from today’s state of the art” and major upgrades to “the entire plumbing of the internet.” As he said in an interview with Quartz, “You need access to petaflops (one thousand teraflops) of computing in less than a millisecond, less than ten milliseconds, for real-time uses.”
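To make those figures concrete, here is a quick back-of-envelope calculation (ours, not Intel’s): delivering a petaflop of work inside a one-millisecond budget implies exascale sustained throughput.

```python
# Back-of-envelope math on Koduri's figures (our calculation, not Intel's).
WORK_FLOP = 1e15  # one petaflop of computation per real-time interaction

for budget_s in (1e-3, 1e-2):  # 1 ms and 10 ms latency budgets
    throughput = WORK_FLOP / budget_s  # FLOP/s needed to finish in budget
    print(f"{budget_s * 1e3:>4.0f} ms budget -> {throughput:.0e} FLOP/s sustained")

# Output:
#    1 ms budget -> 1e+18 FLOP/s sustained  (exascale)
#   10 ms budget -> 1e+17 FLOP/s sustained
```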

That is because every interaction in a fully embodied metaverse is exceedingly complex. Even placing just two individuals as convincing avatars with realistic features in a virtual space involves massive real-time rendering and sensor data tracking, high-bandwidth data transfers with extremely low latencies, and a persistent environment model that can support accurate interaction. As we wrote in our report, The Road to Interoperability:

When a particular metaverse world or experience inside that world reaches its upper limits for hosting concurrent users with high-enough quality UX to meet users’ expectations for synchronicity — or realistic presence with others sharing the same experience — the environment can crash or glitch in annoying ways that destroy the experience. This is because maintaining shared server state becomes exponentially harder with each additional user. This challenge is especially daunting for smaller worlds that lack the resources and computing power of market leaders like Roblox or Fortnite.
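To see why each added user is so costly, consider the naive case in which every client’s updates are replicated to every other client: message volume grows with the square of the user count. A minimal sketch of that scaling, assuming a full-mesh topology (our illustration, not any engine’s actual protocol):

```python
# Naive full-mesh state replication (illustrative only): each user emits
# `updates_per_user` state changes per tick, and each change must reach
# the other n - 1 users, so total traffic grows roughly with n squared.
def messages_per_tick(n_users: int, updates_per_user: int = 20) -> int:
    return n_users * updates_per_user * (n_users - 1)

for n in (10, 100, 1_000, 10_000):
    print(f"{n:>6} users -> {messages_per_tick(n):>18,} messages/tick")

# 10x the users means roughly 100x the messages; real engines fight this
# with interest management, spatial partitioning, and server-authoritative state.
```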

And it is not just emerging virtual worlds that face this upper-limit problem. Even metaverse giant Meta struggles to sustain concurrent usage at scale. Each of the 10,000 worlds that make up its metaverse environment, Horizon, can only accommodate a few dozen people at any given time because projecting a virtual shared space across multiple headsets requires more computational capacity than is available today. Meta compensates by duplicating worlds that exceed capacity, creating an overflow environment. Naturally, mitigation strategies for smaller up-and-comers in the open metaverse space may not be quite as comfortable or adequate as Meta’s.
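The overflow pattern itself is easy to picture. Here is a toy sketch of the general idea (our illustration; Meta’s actual system is not public), in which newcomers are routed to a duplicate instance once a world hits its concurrency cap:

```python
# Toy "overflow instancing" router: when a world is at capacity, clone it
# and send new arrivals to the copy. The cap of 40 is illustrative only.
class World:
    def __init__(self, name: str, cap: int = 40):
        self.name, self.cap, self.users = name, cap, []

class OverflowRouter:
    def __init__(self, base_name: str, cap: int = 40):
        self.base_name, self.cap = base_name, cap
        self.instances = [World(base_name, cap)]

    def join(self, user: str) -> World:
        for world in self.instances:            # fill existing copies first
            if len(world.users) < world.cap:
                world.users.append(user)
                return world
        clone = World(f"{self.base_name}#{len(self.instances) + 1}", self.cap)
        clone.users.append(user)                # spin up an overflow copy
        self.instances.append(clone)
        return clone
```

The catch, of course, is that users in the overflow copy can no longer see or interact with those in the original, which is exactly the compromise described above.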

Things get even more daunting when scaling to the billions of users — as expected and hoped for by those of us building the metaverse. Last December, Koduri forecast that Intel would reach the next stage of “zettascale computing” within the next five years, with additional capacities to achieve a truly immersive metaverse even further off.

“The technology might not exist in 2032,” as Max Cherney of Protocol wrote in February, concluding that the metaverse “will require computing tech no one knows how to build.”

And those computational challenges don’t even factor in the added complexity of sourcing enough super chips when countries are already struggling to produce ordinary ones amid a global semiconductor shortage that is “nowhere near ending,” as the CEO of car manufacturer Stellantis recently put it.

Solving for immersive embodiment

While the capacities for enabling billions of concurrent metaverse users may still be decades away, a number of companies are working on solving the challenges of building smaller-scale communities in today’s metaverse.

For over a decade, the British tech company Improbable has been pushing the boundaries of virtual-world infrastructure with its Morpheus technology, which can power more than 10,000 real users interacting in the same high-fidelity, lag-free environment at the same time.

Key to the large-scale simulation platform’s efforts has been its work with virtual ecosystems to develop and prioritize common standards enabling greater interoperability between them. Last April, Improbable raised $150m to establish and develop M² (MSquared), a network of interoperable web3 virtual worlds powered by its Morpheus technology.

Hadean, another UK company, is doing similar metaverse infrastructure work, with clients that include Minecraft, Epic Games, and PixelMax. The company laid out its vision for powering a fully immersive and scalable web3 metaverse in 2021, and last September raised $30m to further build its open platform for distributed cloud computing.

“Today's virtual worlds are a limited experience — small scale, siloed, and insecure,” said Craig Beddis, Hadean’s Co-Founder and CEO, while announcing the Series A funding round. “We believe the true success and mass adoption of the metaverse will rely on the ease by which creators will be able to build their own experiences at scale, leveraging open and robust metaverse-as-a-service technologies.”

Both companies have drawn significant attention from developers and technologists for impressive results in hosting concurrent users. Hadean featured its 14,000-player Unreal Engine demo at the 2019 Game Developers Conference. Improbable hosted a concert featuring K-pop star AleXa and 1,450 fans in November 2021, and, earlier that year, conducted a playtest event that featured over 4,000 people battling 10,000 AI-controlled zombies.

Gaming illustrates the challenge and opportunity

One exciting result of the expanded investments in metaverse infrastructure and ecosystems in recent years? The industry’s elevation of game developers and virtual designers as creators with huge utility in the spatial web.

Their importance is understandable, as a16z gaming head James Gwertzman, a PlayFab founder and ex-Microsoft gaming general manager, writes in his Future piece “Unlocking the Metaverse: New Opportunities in Games Infrastructure”:

“Why game creators? No other industry has as much experience building massive online worlds, in which hundreds of thousands (and sometimes tens of millions!) of online participants engage with each other—often simultaneously.”

Key elements of the future metaverse already exist in massively multiplayer online role-playing games, from crafting materials and trading resources within in-game economies to streaming for audiences on Twitch or building user-generated content on games like Roblox.

For a long time, games “were primarily monolithic, fixed experiences,” as Gwertzman writes. “We’re now in the era of Games-as-a-Service, whereby developers continuously update their games post-launch.”

In the past, popular game engines like Unity or Unreal, as well as hand-built engines, were housed within the games themselves. Users acted in siloed, closed ecosystems and could almost never transfer assets between them.

The infrastructure of the metaverse, including the individual pieces of it built by Improbable, Hadean, and other companies, presents a new paradigm. As Gwertzman writes, in the metaverse “it is likely that games will be wrapped and hosted within the engine.” In other words, the larger game engine — and its requisite infrastructure — becomes a platform upon which a number of virtual worlds and ecosystems operate.

Source: "Unlocking the Metaverse," Future.com
Source: "Unlocking the Metaverse," Future.com

One can already see the beginnings of this with Improbable’s M² network: The Morpheus technology becomes the game engine, offering the infrastructure and common standards upon which other virtual worlds are built.

For users to travel between those worlds within the network, each of them must agree to some level of composability — “recycling, reusing, and recombining basic building blocks,” as Gwertzman defines it — as well as interoperability, or the ability for the components of one world to work within another.
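As an entirely hypothetical illustration of composability at the data level, imagine worlds agreeing on a minimal shared asset schema (the `SharedAsset` type below is ours, not M²’s actual format) so a building block authored in one world can be ingested by another:

```python
# Hypothetical minimal shared asset schema -- not a real M2 or Morpheus
# format, just a sketch of what "agreeing on building blocks" could mean.
from dataclasses import dataclass

@dataclass(frozen=True)
class SharedAsset:
    asset_id: str   # globally unique identifier
    mesh_url: str   # geometry in an agreed-upon format (e.g. glTF)
    license: str    # reuse terms that travel with the asset

def import_asset(inventory: dict[str, SharedAsset], asset: SharedAsset) -> None:
    """Any world that understands the schema can ingest the asset."""
    inventory[asset.asset_id] = asset

sword = SharedAsset("sword-042", "https://assets.example/sword.gltf", "CC-BY-4.0")
world_a_inventory: dict[str, SharedAsset] = {}
world_b_inventory: dict[str, SharedAsset] = {}
import_asset(world_a_inventory, sword)   # the same asset works in both worlds
import_asset(world_b_inventory, sword)
```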

Of course, that expansive model doesn’t have to be confined to just virtual worlds on the M² network. Unlocking the full potential requires building out the technical, creative, and experiential layers of the metaverse so that a vast galaxy of interoperable worlds can be built in the spatial web.

Source: "Unlocking the Metaverse," Future.com
Source: "Unlocking the Metaverse," Future.com

Unity and Unreal provide key services like graphics rendering, audio playback, physics, and multiplayer. But the metaverse requires extended capabilities, such as player avatars and identities, social services, and robust tools and asset libraries to help players create their own content.

Providing such services, and making them widely interoperable and composable across multiple virtual worlds, won’t just require additional computing power but also more sophisticated versions of current gaming technology. Think:

  • Advanced matchmaking: In gaming, matchmaking connects players with each other, often based on skill level. In the metaverse, the needs expand, as the a16z article points out, from “find me a group of players to raid a dungeon” to “find me another person to practice Spanish with” (see the sketch after this list).

  • Serverless multiplayer: In the current model, developers might have to spin up their own servers to host games and other content they have built. In the metaverse, serverless multiplayer could allow developers to use game logic to automatically host and scale the game on the cloud.

  • Sophisticated cloud storage: Processing high-fidelity, low-latency interactions requires not just storage but also catch-and-release systems for “literally millions of files,” Gwertzman writes, from textures and characters to animations, visual effects, recorded dialogue and music — and that’s just for one game!
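To make the matchmaking example above concrete, here is a hypothetical sketch of interest-based matching (the `Player` type and `match_by_interest` function are ours, not any real platform’s API):

```python
# Hypothetical interest-based matchmaking: instead of ranking by skill,
# pair users who share a declared interest ("practice Spanish", etc.).
from dataclasses import dataclass, field

@dataclass
class Player:
    name: str
    interests: set[str] = field(default_factory=set)

def match_by_interest(pool: list[Player], interest: str, size: int = 2) -> list[Player]:
    """Return up to `size` players from the pool who share the interest."""
    return [p for p in pool if interest in p.interests][:size]

pool = [
    Player("ana", {"spanish", "raids"}),
    Player("bo", {"spanish"}),
    Player("kit", {"raids"}),
]
print([p.name for p in match_by_interest(pool, "spanish")])  # ['ana', 'bo']
```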

We’re still a ways away from making these innovations a reality, but thinking about future needs helps concretize the possibilities that will emerge once developers, builders, and creators crack the code on the metaverse’s critical computational challenge.

Check back next week for Part 3, where we explore how the metaverse upends our expectations around privacy, identity, and self-expression. Because, as hot as it is to talk about digital privacy, it gets positively scalding when we bring immersion and biometrics into it.

In the meantime, come world hop with us! Our Chrome extension makes navigating the spatial internet dead simple.

Get it here: https://extension.lighthouse.world
