Neal Stephenson’s foundational cyberpunk novel Snow Crash brought the concept of a metaverse to the public: a virtual reality in which people interact using avatars in a manufactured ecosystem, eschewing the limitations of human existence. More recently, Ready Player One capitalized on that idea and brought it back to prominence, first as a bestselling novel and then as a film adaptation.
Amid rebranding efforts and a search for a new way forward, Mark Zuckerberg has made it Facebook’s (now Meta Platforms) priority to build a platform that could turn the metaverse into a mainstream technology with the sort of reach that its social networks and WhatsApp have. After acquiring VR pioneer Oculus in 2014, Facebook has struggled to shepherd the technology towards mass adoption, with the overall growth of the VR market falling short of expectations and even dipping in 2020 as interest in using shared or public equipment declined during the pandemic.
Persistent online worlds are not new: they have been around since text-based multi-user dungeons (MUDs) were set up on computer networks in the early 1980s, and they saw a meteoric rise in prominence in the 2000s with games such as EverQuest, EVE Online, and World of Warcraft, as well as the more daily-life-oriented Second Life. More recently, platforms such as Fortnite and Roblox have caught general attention as early examples of metaverses, mainly due to their impressive revenue, transmedia efforts, and ability to attract and retain young fanbases.
Facebook’s plans go a step beyond, aiming to create a platform that pushes the boundaries of these gaming experiences and establishes a bona fide parallel world like the one presented in Snow Crash. Disagreements, however, already start from within. 3D gaming pioneer and Oculus CTO John Carmack stated in a recent keynote: “the big challenge now is to try to take all of this energy and make sure it goes to something positive, and we’re able to build something that has real near-term user value, because my worry is that we could spend years and thousands of people possibly, and wind up with things that didn’t contribute all that much to the ways that people are actually using the devices and hardware today.”
For the Internet governance ecosystem, several questions immediately become apparent when considering the implications of a large-scale metaverse: infrastructure, protocols, interoperability, bandwidth, security, privacy, ownership, and access, all debates that are already at the core of this field. It has been shown time and again that the community needs to look ahead and monitor developments in these areas before they come to fruition, so it is worth thinking early about some of these challenges: even if Facebook does not achieve its vision of a metaverse, the push for some company to build one is now inevitable.
Our first question should be whether there is even enough infrastructure in the world to enable the success of such a project. At the moment, more realistic VR experiences depend on powerful computers and specialized equipment, relying largely on data stored on the user’s side. To achieve the level of dynamism a true metaverse demands and make it accessible to more people than a select elite, streaming technology will be necessary. Frame rate is also a significant consideration: it has been widely observed that a constant 60 frames per second (FPS) is the bare minimum needed to avoid motion sickness in most people during prolonged VR use.
A paper cowritten by a group of telecom industry experts roughly outlines the bandwidth requirements for streaming a 360-degree VR experience. At an undesirable 30 FPS and a reasonable 2K resolution, 100 Mbps with 30 ms of latency would be required; to achieve a passable 60 FPS at 4K resolution, 400 Mbps with 20 ms of latency would be required. Their findings can be summarized as follows:
Scenario | Bandwidth | Latency
---|---|---
2K at 30 FPS | 100 Mbps | 30 ms
4K at 60 FPS | 400 Mbps | 20 ms
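To get a sense of where figures of this magnitude come from, here is a rough back-of-the-envelope sketch. The frame dimensions, color depth, and compression ratio below are illustrative assumptions, not values taken from the paper:

```python
# Back-of-the-envelope estimate of the bitrate needed to stream VR video.
# All parameters are illustrative assumptions, not figures from the paper.

def required_bitrate_mbps(width, height, fps, bits_per_pixel=24, compression_ratio=30):
    """Raw pixel data per second divided by an assumed compression ratio."""
    raw_bits_per_second = width * height * bits_per_pixel * fps
    return raw_bits_per_second / compression_ratio / 1e6  # bits/s -> Mbps

# Assume low-latency encoding only achieves roughly 30:1 compression
# (offline encoders do far better, but add too much delay for interactive use).
print(f"2K @ 30 FPS: ~{required_bitrate_mbps(2560, 1440, 30):.0f} Mbps")
print(f"4K @ 60 FPS: ~{required_bitrate_mbps(3840, 2160, 60):.0f} Mbps")
```

Under these assumptions the estimates land close to the 100 Mbps and 400 Mbps figures above, which suggests the requirements presume fairly conservative, low-latency compression.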
Data from the streaming platform Parsec can give us some insight into current bandwidth availability among users who already engage in more demanding interactive experiences and are potentially more willing to invest in better Internet connectivity. In the Americas, Asia, and Europe, even among this group, the average available bandwidth falls below 100 Mbps, as can be observed below:
Region | Avg. available bandwidth (Mbps)
---|---
Europe | 42.56
America | 82.96
Asia | 86.03
Meanwhile, average latency also falls short of what would be desirable, as outlined here:
Region | Avg. latency (ms)
---|---
Europe | 42.00
America | 39.14
Asia | 69.10
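As a quick, illustrative check, the sketch below compares these regional averages against the two requirement tiers quoted earlier. The figures are the ones from the tables above, but the code itself is only a sketch, not Parsec’s own tooling:

```python
# Compare the Parsec regional averages above against the two requirement
# tiers from the telecom paper. Figures are taken from the text; the code
# is purely illustrative.

regions = {
    "Europe":  {"bandwidth_mbps": 42.56, "latency_ms": 42.00},
    "America": {"bandwidth_mbps": 82.96, "latency_ms": 39.14},
    "Asia":    {"bandwidth_mbps": 86.03, "latency_ms": 69.10},
}

tiers = {
    "2K @ 30 FPS": {"bandwidth_mbps": 100, "latency_ms": 30},
    "4K @ 60 FPS": {"bandwidth_mbps": 400, "latency_ms": 20},
}

for region, stats in regions.items():
    for tier, req in tiers.items():
        ok = (stats["bandwidth_mbps"] >= req["bandwidth_mbps"]
              and stats["latency_ms"] <= req["latency_ms"])
        verdict = "meets" if ok else "falls short of"
        print(f"{region} {verdict} the {tier} requirements")
```

On these averages, none of the three regions meets even the lower 2K/30 FPS tier, let alone the 4K/60 FPS one.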
One important consideration is that 5G technology could, in theory, be the panacea that makes large-scale metaverses viable. 5G should be able to offer ample bandwidth, average download speeds at least in the 100-200 Mbps range, and average latency between 1-4ms (or around 10ms, to be more realistic). However, current real-world data shows that even in major cities in the developed world, those promises have yet to materialize, and the overall rollout of the technology has been slow.
In other words, we do not currently have the infrastructure necessary to create an experience transformative enough to motivate people to adopt VR and the metaverse as solutions to their needs and interests. Of course, a watered-down, simplified version of the experience could be delivered with lower requirements, but for that, users can already turn to non-VR, NFT-centric platforms like Decentraland and pay nearly 1 million USD for a theoretical patch of virtual land that they can decorate with graphics comparable to a glorified version of Minecraft. But that’s not the metaverse dream.
If metaverse technology is really to take hold, significant investment will need to be made in infrastructure, and consistently low latency will become a top priority. This is part of the premise of 5G and the Internet of Things, so it is possible that a concerted push in this direction could make all of these technologies viable. Right now, however, we are still very much tied to this particular universe.
More articles may follow this one in the near future, discussing other potential impacts outlined in the introduction.
One of the problems with hoping 5G will be the answer is that 5G is a last-mile technology. If you do some probing with traceroute and ping, you find that much of the latency comes not from the last-mile connection to your ISP but from backbone providers or the link between your ISP and the backbone. The same holds true for bandwidth limits. The best 5G service in the world won’t help when your bandwidth is being limited by congestion on the backbone link between Seattle and Chicago.
On top of that, you run up against physics. Consider that for advanced VR (per your table) you want a 20ms ping but it takes ~16ms just for the signal to go from the west coast of the US to the east coast. That means that if you’re a customer in San Francisco and the data center’s in Virginia, the entire network in between can introduce no more than 4ms of latency on top of the lightspeed delay before it exceeds the 20ms limit. I think that, for the foreseeable future at least, VR is going to depend on asset models being sent to the client and rendering being done locally. Changing that is, I think, going to require someone inventing the ansible.
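To make the propagation-delay arithmetic in the comment above concrete, here is a rough sketch; the distance and propagation speeds are approximations (real routes are longer than the straight-line distance):

```python
# Rough one-way propagation delay between the US west and east coasts.
# Distance and propagation speeds are approximations for illustration only.

DISTANCE_KM = 4700           # very roughly San Francisco to Virginia
C_VACUUM_KM_PER_MS = 300.0   # speed of light in vacuum, km per millisecond
C_FIBER_KM_PER_MS = 200.0    # ~2/3 of c, typical for light in optical fiber

LATENCY_BUDGET_MS = 20       # the 4K / 60 FPS target discussed in the article

delay_vacuum_ms = DISTANCE_KM / C_VACUUM_KM_PER_MS  # ~15.7 ms
delay_fiber_ms = DISTANCE_KM / C_FIBER_KM_PER_MS    # ~23.5 ms

print(f"One-way at c (vacuum): {delay_vacuum_ms:.1f} ms, "
      f"leaving {LATENCY_BUDGET_MS - delay_vacuum_ms:.1f} ms of the budget")
print(f"One-way in fiber:      {delay_fiber_ms:.1f} ms, "
      f"already over the {LATENCY_BUDGET_MS} ms budget")
```

Even under the optimistic vacuum-speed figure, only a few milliseconds remain for routing, queuing, encoding, and rendering, which reinforces the point about keeping rendering local.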
Thank you for the comment, Todd. I think your observation reflects something I kept thinking about as I wrote the article. In the 5G spec, there is a clearly established ceiling for user plane latency, but as you say, this does not necessarily mean a whole lot in practice. My hope was that by writing "and average latency between 1-4ms (or around 10ms, to be more realistic)" I would be able to convey that uncertainty to some degree. I'll continue to mull over this subject and take this into further consideration in future articles on this theme.
A dedicated protocol will be the best way to make the Metaverse stand out and outlast the World Wide Web.
Graphics and information architecture can be run on dedicated blockchain protocols.
The dedicated protocol will be used for live streams. The rest will be served off mobile relays/CDNs, say 10% at most still using HTTPS, where users share locally generated image data (like we do with a Comcast router when we allow it to join Xfinity connections).
Mr. Blum, very prescient of you to post this, as I'll do a follow-up discussing the implications of creating new standards for the Metaverse. :) Regards,