Edges of the Metaverse, Part 1 of 6: The Role of AI in the Social & Spatial Web

The metaverse challenges us to stretch our collective imaginations to the edges of our digital experience.

While building the navigation engine for the metaverse, we at Lighthouse have grown accustomed to thinking about its edge cases, mysteries, and unresolved dilemmas.

And we’d love to invite you into some of our late-night musings.

In this first installment of our six-part series, The Edges of the Metaverse, we explore why it’s impossible to imagine a fully realized virtual environment without AI.


Key Takeaways:

  • Machine learning, generative AI, and other AI technologies have already attracted massive investment and consumer interest in art and writing. Less well known, but likely more impactful, will be AI’s role in handling social functionality in the metaverse, including building virtual assets and protecting privacy and consent.

  • Without AI services to support privacy, moderation, and UGC, virtual worlds are already struggling to keep up — including those owned by Meta and Microsoft. These challenges will increase as millions more worlds come online.

  • With the right frameworks in place, AI can enable a more enjoyable and more human social experience, providing quality and protecting safety in ways that human moderators cannot handle at scale.


A number of outlets, including the New York Times, have suggested that crypto, web3 tech, and, yes, the metaverse are being overshadowed in the public eye by artificial intelligence — so much so that The Information recently asked: “Is AI Stealing Web3’s Thunder?”

The sentiment is understandable: A year ago, Facebook changed its name to Meta, Decentraland valuations were sky-high, Nike was acquiring a metaverse studio in RTFKT, and the creators of Pokémon Go had just raised $300M to build a metaverse.

Now, AI technologies like ChatGPT and Midjourney command near-household-name status, while companies like OpenAI (reportedly seeking a new investment round led by Microsoft) and Stability AI (which just raised $100M+ to train AI systems) dominate the news cycle.

It might feel like AI is on a totally divergent path from crypto and that the two do not naturally intersect. But that conclusion is incorrect. In fact, for those of us cutting our teeth in the metaverse trenches, it is difficult to picture a functional virtual universe that does not use machine learning, generative AI, and other AI technologies to handle core functions.

If the metaverse is thunder, then AI is the lightning — and you can’t have the former without the latter. Here’s why.

#1 Building the Metaverse

Empowering developers with affordable, interactive, AI-assisted UGC.

It’s challenging to build on current virtual world infrastructure, and interoperability between ecosystems is mostly nascent. Even within a specific platform, simple tasks like placing a single object can require multiple steps.

On Decentraland, for example, “you’ve got to build a lot of TypeScript to build your scene,” said Jonathan Luebeck, a lead developer for a third-party DCL scene editing tool, in a recent Twitter Spaces event hosted by Lighthouse. “If you want to place a single object in your scene, you have to write 15 lines of code to place that object.”
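
To make that boilerplate concrete, here’s a minimal sketch of what placing one object looks like with the legacy Decentraland TypeScript SDK. The model path and transform values are hypothetical, and a real scene adds more setup around this:

```typescript
// Sketch using the legacy Decentraland SDK (decentraland-ecs), where
// Entity, GLTFShape, Transform, Vector3, Quaternion, and engine are
// ambient globals. The asset path and coordinates are made up.
const chair = new Entity()

// Attach a 3D model to the entity.
chair.addComponent(new GLTFShape('models/chair.glb'))

// Position, rotate, and scale the model within the scene's parcel.
chair.addComponent(
  new Transform({
    position: new Vector3(8, 0, 8),
    rotation: Quaternion.Euler(0, 90, 0),
    scale: new Vector3(1, 1, 1),
  })
)

// Nothing renders until the entity is added to the engine.
engine.addEntity(chair)
```

AI-assisted tooling could plausibly generate scaffolding like this from a plain-language prompt, turning “place a chair by the window” into code rather than a dozen hand-written lines.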

Now think about developers building (and then executing) more complex interactions, such as live video streamed from the physical world onto a virtual TV set. As Jonathan Lai wrote in his a16z piece, “Meet Me in the Metaverse,” giving developers more powerful tools for creating content will be a massive boon to the entire metaverse. One of those key tools will be AI.

That’s because the combination of user-generated content and the evolution of AI is a perfect storm to unlock massive, spontaneous social creation across the entire galaxy of virtual worlds.

Source: “Meet Me in the Metaverse,” a16z

Current open-world games like Grand Theft Auto or The Witcher 3 allow for significant exploration and discovery, but “the scope of these worlds has also been limited by the amount of content a professional team can create,” Lai writes. For example, the MMO game Star Wars: The Old Republic cost more than $200 million and required a team of over 800 people working for six years to simulate just a few worlds within the Star Wars universe.

“In this respect, one of the biggest challenges with building the Metaverse is figuring out how to create enough high-quality content to sustain it,” Lai writes. “It would take a tremendous amount of content to populate the intricate worlds shown in Ready Player One’s OASIS.”

AI-driven graphics could enable exponential growth of user-generated content. But actually porting AI-generated graphics into the metaverse at any meaningful scale requires a significant increase in our computational infrastructure, a challenge for the entire metaverse that we will explore in depth in Part 2 of this series. (Check back here next week!)

Some projects have shown intriguing work in building out AI-assisted graphics. Webaverse’s Sentient tech, for example, helps creators “build procedural worlds, NPCs, and evolving stories which connect to other stories” as part of its Upstreet metaverse ecosystem.

However, AI-driven graphics creation “has been more challenging than text and audio due to exponentially larger asset sizes and computational requirements,” as Lai writes. “After text and audio, graphics is the final frontier.”

#2 Protecting the Metaverse

Providing the framework for scalable consent.

The metaverse may be associated with terms like “augmented” and “virtual” reality for now, but one day it will simply be called “reality” for most people.

Whether they spend a minute or a month in the metaverse, scores of avatars that represent people will have to be managed by the virtual ecosystems those people visit. With that many people spread out across countless digital universes, it’s easy to see how consent and privacy frameworks and policies could run into conflict with each other or fail altogether.

We’ve already seen plenty of examples of how scale challenges these basic protections: As Meta has expanded its plans for the metaverse, articles have highlighted two key areas where consent is often violated: VR harassment and data privacy violations.

In February, Microsoft shut down all of the hosted social space hubs in AltspaceVR, a virtual world it acquired in 2017, for safety reasons. In addition, it turned certain safety features on by default, including its camera-muting features and personal boundary bubbles that limit how close avatars can get to one another.

"As platforms like AltspaceVR evolve, it is important that we look at existing experiences and evaluate whether they're adequately serving the needs of customers today and in the future," Alex Kipman, who led Microsoft’s HoloLens mixed reality group, wrote in a blog post at the time.

AI can make the tall task of sifting through scores of worlds and avatars more manageable. For instance, AI-assisted privacy and preference management could extrapolate from a user’s history — as well as the preferences and history of users like them — to make informed predictions about how a user would want a certain consent moment handled.

In the metaverse, literally every single action — from moving an arm to taking a step — will be a moment driven by data, which, of course, means that data could be collected and used by someone else.

Requiring a pop-up asking for approval at each consent moment quickly becomes tedious. But with the help of AI, platforms could take a user’s experiences in one world and forecast how they would want a similar experience to play out in another.

Explicit consent would still be required in moments where new information is necessary to make a decision — such as a physical interaction between two users. However, certain users may be able to establish a history in which they don’t have to click “yes” every time a specific company asks for permission to access their data, for example.
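
As a thought experiment, here is what such a consent layer might look like in code. Everything here (the interfaces, thresholds, and function names) is an illustrative assumption, not an existing API:

```typescript
// Hypothetical sketch of AI-assisted consent handling. All types,
// names, and threshold values are illustrative assumptions.
interface ConsentRequest {
  userId: string
  requester: string   // e.g., the company or world asking for data
  permission: string  // e.g., "motion-data", "proximity-interaction"
}

interface ConsentModel {
  // Predicts, from this user's history and the preferences of similar
  // users, the probability that the user would grant this request.
  predictGrantProbability(req: ConsentRequest): Promise<number>
}

async function handleConsent(
  req: ConsentRequest,
  model: ConsentModel,
  promptUser: (req: ConsentRequest) => Promise<boolean>
): Promise<boolean> {
  const p = await model.predictGrantProbability(req)

  // Auto-resolve only when the prediction is near-certain; anything
  // ambiguous or novel still triggers an explicit pop-up the user
  // answers directly, preserving meaningful consent.
  if (p > 0.95) return true
  if (p < 0.05) return false
  return promptUser(req)
}
```

The design choice is the point: the AI absorbs the repetitive, high-confidence decisions, while the user keeps the final word on anything new.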

Managing countless interactions like this for millions of people is not practicable with just humans at the helm, but with the help of AI, digital consent starts looking less like an unobtainable luxury and more like a baseline expectation.

#3 Moderating the Metaverse

Managing content moderation, when “content” engages all of the senses.

The consent question, which takes on a physical element when applied to a fully immersive metaverse that engages all of the senses, shares many similarities with the question of content moderation in the metaverse.

Although the medium may be new, the challenge itself is not. For years, social media giants and news publishing platforms alike have struggled to balance user safety with creative and expressive freedoms.

Tiffany Xingyu Wang is the Chief Strategy Officer at Spectrum Labs, a content moderation platform that uses AI to identify toxic behaviors in text and voice content across languages, advertising its ability to detect such behaviors 10 times better than alternatives. Spectrum Labs has built moderation that tackles solicitation and doxxing on dating apps, hate speech and radicalization in gaming, spam and fraud on marketplaces, and bullying on social platforms — foreshadowing the breadth of challenges that content moderators are likely to face in the metaverse as well.

Wang is also the founder of the Oasis Consortium, a think tank bringing together digital leaders focused on building safer communities and promoting data privacy and inclusion online. They are taking the lessons from Web2’s challenges and applying them to the future of content moderation, beginning with the publication of their “User Safety Standards,” developed after months of interviews with hundreds of trust and safety professionals.

By 2025, humans will create about 463 exabytes of data — the equivalent of more than 200 million DVDs — per day. When the metaverse is fully realized, it will almost certainly add magnitudes to that estimate. The immensity of the space presents a number of challenges for content moderators, writes Rem Darbinyan, Founder and CEO of SmartClick, an AI and machine learning company, in Forbes.

Human moderators aren’t just dealing with increasing volumes of information. They also have to grapple with changing perceptions of moderation: on one hand, those who call for more stringent moderation; on the other, those who think moderation should be far less strict.

Individuals will always play a role in moderation, if only by helping dictate the algorithmic parameters for what is and isn’t acceptable to say or express on a platform. However, putting routine decisions in the hands of AI allows moderation to happen at far greater scale and speed, decreasing exposure to harmful content while still allowing humans to weigh in and overrule the algorithm where necessary.
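
A rough sketch of that division of labor might look like the following, with humans setting the thresholds and reviewing the ambiguous middle. The classifier interface and cutoff values are assumptions for illustration:

```typescript
// Hypothetical human-in-the-loop moderation pipeline. The classifier
// interface and threshold values are illustrative assumptions.
type Verdict = 'allow' | 'remove' | 'review'

interface ToxicityClassifier {
  // Returns a score in [0, 1]: the likelihood the content is harmful.
  score(content: string): Promise<number>
}

async function moderate(
  content: string,
  classifier: ToxicityClassifier,
  reviewQueue: { push: (content: string) => void }
): Promise<Verdict> {
  const score = await classifier.score(content)

  if (score > 0.98) return 'remove' // clear violations: act instantly
  if (score < 0.1) return 'allow'   // clearly benign: let it through

  // Humans tune the thresholds above and adjudicate the ambiguous
  // middle, where they can also overrule the algorithm.
  reviewQueue.push(content)
  return 'review'
}
```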

The metaverse will eventually require even further evolution of content moderation technology, from tech that mostly focuses on text and keywords to tech that can interpret touch, smell, sight, and other senses. The challenge of moderating visual information is already playing out on TikTok, which has struggled to strike a balance while employing both human moderators and AI tools.

#4 Populating the Metaverse

Creating identities and characters beyond the human.

For those of us who got our start in virtual worlds with turn-of-the-century gaming, there’s a certain strangeness — and joy — in the fact that “You’re an NPC” has become a common IRL insult among younger generations, simply because it’s hard to imagine our old high school classmates even knowing what a “non-player character” is, much less using it in everyday parlance.

The underlying meaning of the insult is simple enough: Nobody wants to be an NPC.

In the metaverse, though, NPCs will play essential roles in facilitating virtual interactions between human avatars. Just think about all the jobs that people already don’t want to do in real life…and now imagine trying to get them to do that work in a virtual world.

Autonomy Network, a decentralized automation protocol backed by top web3 incubators, including Protocol Labs and ConsenSys, has already worked to introduce the concept of “Sentient NFTs.” These “aNPCs,” as Autonomy founder James Key calls them, are alive, independent, and can exist across games or virtual worlds.

One of their first pilot projects? An aNPC bouncer in a Decentraland casino that users can try to sneak past to enter the high-roller tables. If the bouncer checks them and sees they don’t have a high enough balance, they get unceremoniously kicked out.
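
The underlying logic is simple enough to sketch. This toy version is our own guess at the shape of it; the wallet lookup, kick mechanics, and threshold are hypothetical stand-ins, not Autonomy Network’s actual implementation:

```typescript
// Toy sketch of the bouncer behavior described above. The helpers
// passed in (getBalance, kick) and the threshold are hypothetical.
interface Player {
  id: string
  walletAddress: string
}

// Minimum balance, in the smallest units of some in-world token.
const HIGH_ROLLER_MINIMUM = 1_000n

async function onEnterHighRollerArea(
  player: Player,
  getBalance: (address: string) => Promise<bigint>,
  kick: (playerId: string) => void
): Promise<void> {
  const balance = await getBalance(player.walletAddress)

  // Not enough funds? The bouncer unceremoniously kicks them out.
  if (balance < HIGH_ROLLER_MINIMUM) {
    kick(player.id)
  }
}
```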

It’s unclear to what extent these NPCs utilize AI technology currently, but aNPCs provide an early prototype for what the future of AI-based NPCs may look like.

“They’ve never existed before, so we’re going to have to literally create an entirely new industry,” Key wrote in one of the company’s first Medium posts. “Once the aNPC industry is growing without us providing life support to it, we will then build on this momentum to make them seem a little more…life-like.”

While metaverse explorers may be OK with some avatars being NPCs, they likely won’t want those NPCs to distract from a truly immersive experience. Unless AI technology enables authentic-sounding dialogue and interactions, buggy NPC avatars will noticeably detract from the experience of interacting with human-controlled avatars.

Regardless, there will likely be a learning curve for humanity to get used to AI-controlled NPCs. M3 steward Jin (who goes by @dankvr on Twitter) has suggested such AI characters should be tested as digital pets before taking on more humanoid forms.

Using pets as a proxy for AI NPCs may help address the “uncanny valley effect,” he told Lighthouse in a Twitter Space last year focused on avatars, privacy, and identity. The uncanny valley describes experiences or concepts that are very close to human but deviate just enough to inspire revulsion, fear, and other negative associations — think humanoid robots. “Pets are so forgivable,” Jin said. “If they’re a little derpy, it’s fine.”

AI NPCs will be especially critical to the metaverse in its early days for one less-discussed reason: they’re sorely needed to populate nascent virtual worlds, which otherwise may be so sparsely populated that they turn off would-be users.

“Being one of the first few users in a new world may feel briefly exciting,” as Ben Goertzel writes for Cointelegraph. “But if there’s nobody there to interact with and nobody doing interesting things, it will get old fast.”

The metaverse is inherently social. After all, people are not going to flock to empty, lonely landscapes. Perhaps ironically, it’s not people but AI that will help unlock more of that social serendipity and authenticity by playing a major role in facilitating shared social experiences. We foresee AI aiding in everything from protecting and moderating to building and populating virtual worlds.

Check back next week for Part 2, where we explore the metaverse’s computing requirements, stretching the limits of available capacity.

In the meantime, come world hop with us! Our Chrome extension makes navigating the spatial internet dead simple. Get it here: https://extension.lighthouse.world
