Meta Architecture

OVERVIEW

The metaverse is a persistent and continuous 3D virtual space, a new frontier full of possibility. It will allow an added information layer to be projected onto the world around us, building on the information connection we already enjoy from our smartphones today.
It will unlock a new layer of value upon the reality around us.

Today we already enjoy virtual worlds through reactive experiences like real-time games, and through more passive, observational experiences like film and animation.
These experiences allow us to transport to different worlds with their own unique stories.
Most of these experiences are isolated, and there are currently few examples of interoperability between them. However, they are important to the birth of an eventual metaverse that connects all things.

This post will talk about virtual worlds and the way they are made and distributed in the modern day: how the games industry plays the mother role, and how the architecture for virtual worlds needs to be positioned correctly for an open, interoperable and persistent metaverse.

AR & VR

The metaverse as an experiential layer will be expressed through Augmented Reality [AR] and Virtual Reality [VR] experiences.
I would argue that most of it will be through AR: we will see a world where we combine the real with the artificial, allowing for projections and visual mapping over the surfaces of the world that we know. This will enhance and provide a layer of information for everything, from helping workers with daily tasks, to providing entertainment through augmented characters cast into one's own house, to allowing artists to project their works over otherwise boring, lifeless buildings and areas of nothingness.

The field of AR, like VR, has moved slowly, but it has had a few breakthrough moments. Pokemon GO and the filters pioneered by Snapchat pushed the medium massively. The next big unlock will come from a universal and widely accepted piece of hardware that does for this medium what the iPhone did for smartphones. Today we are still looking to Apple and their long-awaited Glasses as a potential catalyst.

Virtual reality is a replacement medium and is a method of complete transportation to virtual worlds. Immersion is key as it allows the participant to completely leave the real world behind. VR will lead the way for certain games, reactive films and storytelling experiences.
It will add an immersion layer to almost all forms of entertainment that we know today. I think that many will end up gravitating toward this medium in the future as one of the most effective ways to experience various forms of the arts.

I remain unsure whether VR will be the primary medium of the metaverse, as the barrier to entry is much higher for the average participant and there remains a slight disconnection from reality over prolonged use. But all it could take is a culturally accepted move towards VR, or even a breakthrough in tech, to break down these barriers. I have used Ready Player One as an example before: the Oasis, as a world-replacement social hub and almost Second Life-like experience, would be enough to push this medium to primary.

GAME ENGINES

Designing virtual worlds currently requires systems of creators, programmers, animators and engineers working in harmony. Game design in the modern day is the early form of, and example case for, how the metaverse will be built.

Engines like Unity and Unreal have the capacity to develop effective virtual experiences, and have had for a while. Gaming has slowly grown into a cultural powerhouse as of 2022 and, in my opinion, is one of the most beautiful mediums of art in the world today. These engines are also being used more commonly as a way of crafting sets for films.

Game engines are experiential reality creators. They contain the framework to build everything from realistic environments to effective AI & physics calculations and do so in an all-in-one solution. They are incredibly powerful.
Hopefully they will never be deployed as an inconceivable illusion like in the film The Matrix.

All of this was pioneered by the gaming industry and the culture that followed. From early 2D games like Pong in the 70s through to games like DOOM, these evolutions set standards for the game creation pipeline and have led us to the point we're at now.
Game engines will allow creators and engineers to give birth to the reality around us in the metaverse.


ARCHITECTURE

In this section, I will talk about the creation of worlds and their effective distribution as the experiences we know today. The architecture for distributing these experiences cannot be applied to AR, which requires a different toolset and methods of application; instead, this section deals only with real-time applications.

Distribution

The most favoured method of releasing an experience has always been to simply ship a .exe via discs, or to make it available for download through Steam or the Xbox and PlayStation stores.

With the advent of web3, this simply cannot be the case moving forward. Asking an audience to download a .exe file in today's web3 world, with its legions of scams and hacks, is nigh on impossible. Instead, a safer solution is needed.

That said, this doesn't echo the wider market: central entities may dominate market share in the years to come and allow the normalised, integrated marketplace/download model to remain. But a standard may be set in the web3 movement via a number of methods, which could shift the model away from what is known.

Webplayers and HTML

This was a popular method in the youth of the web3 movement, and most early metaverse protocols and experiences on offer use it: the WebGL standard (hardware-accelerated rendering derived from OpenGL) combined with HTML. With this route, you can display basic but interactive games and social experiences in a way that doesn't compromise the user by making them download any content; it's accessible entirely via a webpage.

Some of these include Decentraland, Sandbox, Somnium Space and Cryptovoxels.

There are limits to this option. Without full access to the GPU through native APIs like DirectX and HLSL (High-Level Shading Language), creators are limited in the resources they can use to make the game come to life. This is predominantly a concern for graphics, but complex interactions can also be limited when the game is already under load.

HTML5 seemed incredibly promising in providing more resources to web player-based experiences, though it still carries limitations like its previous generations, including memory requirements and downloads. The future seemed bright for this tech, but in a swift move, Unreal discontinued HTML5 packaging support in 4.24 to give full attention and development to Pixel Streaming as a preferred and future-proof distribution method (more on Pixel Streaming in the next section).

A web player solution will never provide the full experience of what we imagine the metaverse to be, but it serves as a great distribution tool for the early concepts of a decentralized internet.

Pixel Streaming

A young but incredibly promising technology. Pixel Streaming is a method that allows the game to be rendered in the cloud and streamed directly to any device, similar to how movies and videos stream to your device from a Netflix server. The connection isn't one-way, however: with a movie streaming solution you only download, but games and interactive experiences have both input and output. The player sends inputs via direction keys, interaction buttons and movement; in return, the cloud machine feeds these into the running application and returns the download stream. This is a continuous cycle, a continual conversation with the cloud computer.
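To make that cycle concrete, here is a minimal sketch of the input/output loop in Python. The `CloudRenderer` class and its frame format are hypothetical stand-ins of my own; real pipelines such as Unreal's Pixel Streaming transport inputs and encoded video over WebRTC rather than function calls.

```python
# Minimal sketch of the Pixel Streaming round trip, under the assumption
# that the cloud machine can be modelled as a simple step function.

class CloudRenderer:
    """Stands in for the cloud machine running the game."""

    def __init__(self):
        self.player_x = 0

    def step(self, inputs):
        # Apply the player's uploaded inputs to the running application...
        if "right" in inputs:
            self.player_x += 1
        if "left" in inputs:
            self.player_x -= 1
        # ...then return the rendered frame (here, a stand-in string
        # instead of an encoded video frame).
        return f"frame(player_x={self.player_x})"


def client_session(renderer, input_stream):
    """The client continuously uploads inputs and downloads frames."""
    frames = []
    for inputs in input_stream:               # upload: keys, clicks, movement
        frames.append(renderer.step(inputs))  # download: the video stream
    return frames


frames = client_session(CloudRenderer(), [{"right"}, {"right"}, {"left"}])
print(frames[-1])  # frame(player_x=1)
```

The point of the sketch is the shape of the loop: every frame the player receives is a function of the inputs they just sent, which is exactly why latency and bandwidth, not local hardware, become the limiting factors.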

Because everything is rendered on a virtual computer in the cloud, the only bottleneck to the experience is the participant's internet bandwidth. This is where things get interesting.
The virtual computer is often much more powerful than most people's home equipment, whether that's a phone, a desktop PC bought four years ago or a Mac. None of these has the necessary ability to run the most recent AAA games with raytracing and high-fidelity realism.
Specialized virtual computers can handle these loads, so as long as the user has a good internet connection, they can enjoy the experience from anywhere.
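As a rough sense of scale for that bandwidth bottleneck, here is a back-of-envelope estimate. The bits-per-pixel figure is my own illustrative assumption for an H.264-class encoder; real encoders vary widely with content and settings.

```python
# Back-of-envelope estimate of the bandwidth a pixel-streamed session
# needs. 0.1 bits per pixel is an assumed, typical figure for modern
# video encoders, not a measured value from any real deployment.

def stream_mbps(width, height, fps, bits_per_pixel=0.1):
    """Approximate encoded video bitrate in megabits per second."""
    return width * height * fps * bits_per_pixel / 1_000_000

# A 1080p60 stream at ~0.1 bits/pixel lands around 12 Mbps: comfortable
# on a decent home connection, heavy on a congested mobile link.
print(round(stream_mbps(1920, 1080, 60), 1))  # 12.4
```

The same arithmetic at 4K120 lands closer to 100 Mbps, which is why the streaming model leans so heavily on network improvements like 5G and fibre.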

Pixel Streaming is the distribution method I am running with for THE CORE virtual world.

While we still have a lot to discover about just how powerful processing units will become, this may well one day be a non-issue: the ARM processors in ten years' time that fit into those AR glasses or wireless VR headsets may be all we need to run a reality similar in fidelity to the one we know in the real world.
I could also imagine a solution appearing in the meantime in which modern networking infrastructure improves greatly and, with the advent of 5G, most experiential solutions could stream to a headset of choice without any of the issues of running a high-end game today.

A LANGUAGE FOR ALL

Currently, the metaverse as a concept lacks the architecture required to be constructed. The building blocks of this new world still have some way to go, in fact.


I once read a tweet from Tim Sweeney (CEO Epic Games) that stuck with me and got me thinking. It said something along the lines of ‘The language of the metaverse has not been invented yet.’ (I couldn’t find the actual quote.)

The next frontier of the web as we know it will need a universal coding language for optimal interoperability and persistence. On the web today, front-facing sites, programs and applications are written in numerous programming languages, making it a difficult task to achieve a compatible, all-in-one social virtual space.
There is no standard for separate entities to abide by.

Most languages can't talk to each other directly.
There are too many walls and workarounds to achieve the high-bandwidth cross-communication between applications that is needed.

Whatever the metaverse ‘will be’, just like the web today there will be a huge range of applications covering many different operations and needs. Hence, there needs to be a language that compiles all of these requirements into one universal solution that can be used to build the medium out.

Very similar to languages, there is a huge interoperability issue between real-time applications today. Something the NFT world doesn't yet understand is that a particular asset needs to be redesigned for almost every individual application it is implemented in. If you own, say, a sword via an NFT, or simply through a traditional MMORPG asset database, then to use that sword in another game the developers of that game will need to recreate the asset to the guidelines of their own production template. You can't drop a COD weapon into a mobile or VR game; budgets and processing power don't allow it. The diverse specifications of hardware and languages in the world today create an incredibly difficult environment for interoperability.
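A small sketch shows why the same sword can't simply travel between games. The platform names and budget numbers below are entirely illustrative assumptions of mine, not any real pipeline's limits, but the shape of the problem is accurate: every target has its own production template the asset must fit.

```python
# Sketch of the asset-porting problem: one asset must meet each
# platform's budget before it can appear there. All numbers and
# platform names are illustrative, not real production limits.

PLATFORM_BUDGETS = {          # (max triangles, max texture size in px)
    "pc_aaa": (150_000, 4096),
    "mobile": (10_000, 1024),
    "vr":     (20_000, 2048),
}

def fits(asset, platform):
    """Does this asset fit inside the platform's production template?"""
    max_tris, max_tex = PLATFORM_BUDGETS[platform]
    return asset["triangles"] <= max_tris and asset["texture"] <= max_tex

# A PC-grade hero weapon, far too heavy for mobile or VR budgets.
cod_style_sword = {"triangles": 120_000, "texture": 4096}

supported = [p for p in PLATFORM_BUDGETS if fits(cod_style_sword, p)]
print(supported)  # ['pc_aaa']
```

Ownership of the token says nothing here: to ship on the other two platforms, someone still has to author lower-poly, lower-texture versions of the sword by hand.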

Moving forward, unless the desired universal programming language is implemented or open-source hardware standards are established, every major party focused on building will have to make it a hard priority to establish thorough pipelines and standards for everything from digital assets to virtual world resource requirements and cross-language communication.

DISTRIBUTED GAME ENGINES/CLIENTS, SERVERS & NETWORKING

In a traditional multiplayer game today, we use a basic client and server partnership. The client on the participant's machine does the rendering and handles input and output. The server keeps track of all connected clients and determines the world state, i.e. where projectiles are flying, whether that body really got hit, whether the building is occupied in a king-of-the-hill game mode.

This has been the optimal way of doing things basically since the beginning of multiplayer gaming. It is effective and streamlined, but it is limited, because there is a central entity. The central server calculates everything, and it can only do so much before the calculations surpass its resources. This is why you can only have certain-sized maps with certain-sized teams and certain caps on AI and event processing. Similar to the issue of mobile games having limited resources discussed previously, because there are so many factors to consider in the server processing, limits are set on nearly all areas of a game, and this knock-on effect generally makes every area of a game lean conservative in its building approach.
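The authoritative client/server split can be sketched in a few lines. Everything here is a toy of my own devising: clients report only their inputs, and the server alone resolves the world state, including whether "that body really got hit".

```python
# Toy sketch of an authoritative game server: clients send inputs,
# the server owns the world state and resolves all outcomes.

class AuthoritativeServer:
    def __init__(self):
        self.positions = {}                  # the server-owned world state

    def connect(self, client_id, pos):
        self.positions[client_id] = pos

    def tick(self, inputs):
        """One simulation step: apply every client's input, resolve hits."""
        for client_id, (dx, dy) in inputs.items():
            x, y = self.positions[client_id]
            self.positions[client_id] = (x + dx, y + dy)
        # The server, not the shooter's client, decides whether a hit
        # landed; here a "hit" is two players occupying the same cell.
        return {a for a in self.positions for b in self.positions
                if a != b and self.positions[a] == self.positions[b]}

server = AuthoritativeServer()
server.connect("p1", (0, 0))
server.connect("p2", (2, 0))
hits = server.tick({"p1": (1, 0), "p2": (-1, 0)})  # both move to (1, 0)
print(sorted(hits))  # ['p1', 'p2']
```

Every entity and event flows through that single `tick`, which is exactly the central bottleneck: the map size, team size and AI caps all exist to keep this one loop inside its resource budget.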

Distributed systems like Improbable's M2 may provide a solution to this issue. They use a novel distributed worker system that scales dynamically. Breaking it down, a worker can be a player's client (the game on their PC), the physics engine, an AI node or a logic node. It is customizable, so even the game's weather system can be calculated entirely by a worker. The system can run anywhere from a few workers to hundreds if not thousands. With information now flowing between many points instead of one, you have more bandwidth and the ability to process more information, faster. This architecture is incredibly important to unlocking 3D virtual spaces with participants in the millions or billions.
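One common way such systems spread the load is spatial partitioning: each worker owns a region of the map, so simulation scales out instead of saturating one server. The sketch below is my own illustration of that idea only, not Improbable's actual M2 architecture or API.

```python
# Toy sketch of distributing a world across workers by spatial region:
# each strip of the map is simulated by its own worker process. This
# illustrates the general idea, not any specific product's design.

def assign_workers(entities, region_size, workers):
    """Map each entity to a worker by the spatial region it occupies."""
    assignment = {}
    for name, (x, _y) in entities.items():
        region = int(x // region_size)
        assignment[name] = workers[region % len(workers)]
    return assignment

entities = {"npc_a": (5, 0), "npc_b": (55, 0), "player": (105, 0)}
workers = ["worker-0", "worker-1", "worker-2"]

# Each 50-unit strip of the map is owned by a different worker.
assignment = assign_workers(entities, 50, workers)
print(assignment)
```

Adding players then means adding workers (or shrinking regions), rather than squeezing more out of one central machine; the hard part in real systems is handing entities off smoothly as they cross region boundaries.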

While there is still much more to figure out in networking architecture to support virtual worlds with participants covering most of the world's population, the standards seem to be aligning with scalability and flexibility in mind to meet the needs of what the metaverse 'will be'.

CONCLUDING THOUGHTS

We are well on the way to achieving the grand visions set by pioneers of the XR movements and by novelists as early as the 1970s. While there is still much work to be done, there has been an incredible amount of progress in the fields of gaming, graphics engineering, web3, the cloud and infrastructure, putting the goal within our grasp.

One thing I would like to see is more collaboration on setting standards between parties. We are already seeing this between some of the biggest companies in the world, which are forming metaverse interoperability alliances to work these issues out. There is also a web3 equivalent that focuses on the open and decentralized side: https://twitter.com/oma3dao

The biggest advancements will come from directly addressing, or collaborating on, a new open language powerful enough to build a virtual world on top of our real one, as well as an open standard for the hardware units in AR and VR headsets. If these are achieved, progress on building the metaverse will be vastly accelerated, and it will be accessible without centralized, hardware-gated control. It would be what the pioneers intended it to be: an open and free new world for all.
