Fran Tufro

Decentraland Adventures: Part 3 - Roadblocks

🗓 2022-10-24 ⏳ 8m.

This is the non-stripped-down version of this month's update, because the update form's fields were limiting my verbosity.

Introduction

This is not a happy update, I must say, but in my opinion it's a pretty hopeful one.

As mentioned in the previous update, we've been trying to build a scene runtime for the current Decentraland scenes in Rust using Deno. This has proven very difficult for a number of reasons. So as not to bore most of you with technical details, I'll leave the explanation for the blockers section; you can read it if you're technically savvy and interested in this topic. The conclusion is: this is not the right time to implement a scene runtime, and it would take us much more time (roughly 10 times more) to do it now than in the future. This means we won't be implementing 2D rendering of 3D scenes, and will instead focus on better support for 2D scenes.

After venting some frustrations with @maraoz, we decided to change direction a bit based on the actual goal we have, which is: “let's see how far we can get creating a new client for Decentraland that does not use the existing implementation”.

Highlights

Besides the failed experiments with rendering the 3d scenes in 2d, we had some pretty amazing wins this month.

Our prototype client now communicates with Catalyst asynchronously and can download scenes as we walk through Decentraland 2D (there is no 2D content deployed yet, but we'll get there!).
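The pattern is roughly the following, sketched here in Python rather than the client's actual Rust code. The parcel math uses the real 16 m parcel size, but `prefetch` and `fetch_scene` are illustrative stand-ins, not Catalyst's real API:

```python
import asyncio

PARCEL_SIZE = 16  # Decentraland parcels are 16x16 meters

def parcels_around(x: float, y: float, radius: int = 1) -> list[str]:
    """Map a world position to "x,y" parcel pointers: the parcel under
    the player plus its neighbors within `radius` parcels."""
    cx, cy = int(x // PARCEL_SIZE), int(y // PARCEL_SIZE)
    return [
        f"{cx + dx},{cy + dy}"
        for dy in range(-radius, radius + 1)
        for dx in range(-radius, radius + 1)
    ]

async def prefetch(x: float, y: float, fetch_scene):
    """Start one download task per nearby parcel and await them all,
    so walking never blocks on a single slow request."""
    pointers = parcels_around(x, y)
    tasks = [asyncio.create_task(fetch_scene(p)) for p in pointers]
    return await asyncio.gather(*tasks)
```

In the real client, `fetch_scene` would wrap a request to a Catalyst content server; it's left as a parameter here so the concurrency pattern stands on its own.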

We started working on the tools needed to compile a scene defined as JSON into a binary format that is both smaller and faster to load. We decided to go with MessagePack for the serialization, since it's available in many different languages, so people can create tools in whatever language they want and still serialize to a valid scene object.
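To make the size argument concrete, here is a toy sketch of the idea, not the project's actual tooling or schema: a minimal encoder covering only the compact "fix" types of the MessagePack spec, applied to a hypothetical scene object whose field names (`parcel`, `layers`) are invented for illustration.

```python
import json

def pack(value) -> bytes:
    """Encode a small JSON-like value into MessagePack bytes.
    Supports only the compact "fix" forms: maps/arrays of up to 15
    entries, strings up to 31 UTF-8 bytes, ints 0..127, and booleans."""
    if isinstance(value, bool):                 # check before int: bool is an int subclass
        return b"\xc3" if value else b"\xc2"
    if isinstance(value, int):
        if 0 <= value <= 0x7F:
            return bytes([value])               # positive fixint: the value itself
        raise ValueError("int out of fixint range")
    if isinstance(value, str):
        data = value.encode("utf-8")
        if len(data) > 31:
            raise ValueError("string too long for fixstr")
        return bytes([0xA0 | len(data)]) + data # fixstr: length in the low 5 bits
    if isinstance(value, list):
        if len(value) > 15:
            raise ValueError("array too long for fixarray")
        return bytes([0x90 | len(value)]) + b"".join(pack(v) for v in value)
    if isinstance(value, dict):
        if len(value) > 15:
            raise ValueError("map too big for fixmap")
        out = bytes([0x80 | len(value)])        # fixmap: entry count in the low 4 bits
        for k, v in value.items():
            out += pack(k) + pack(v)
        return out
    raise TypeError(f"unsupported type: {type(value).__name__}")

# A hypothetical scene object, just to compare sizes:
scene = {"parcel": "10,20", "layers": [0, 1]}
assert len(pack(scene)) == 24          # MessagePack: 24 bytes
assert len(json.dumps(scene)) == 37    # same data as JSON text: 37 bytes
```

In practice you'd use an off-the-shelf MessagePack library (they exist for Rust, Python, C#, and many others) rather than hand-rolling the encoding; the point is that the binary form drops quotes, braces, and whitespace, and the savings compound for a real scene with many entities.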

Blockers

As mentioned in the introduction, we hit a major blocker with the scene runtime. There are three main reasons why we decided to stop making progress there and move in a different direction:

  1. Lack of documentation
  2. The Foundation's in-progress migration to ECS 7
  3. Difference in philosophy

Lack of documentation

By no means is this a critique of the Foundation (or is it?), but there is a complete void in the documentation of how scenes communicate with the kernel. RFC-2 is a good starting point in terms of the protocol, but when you get to the runtime, it's all TODO.

So, in order to build a working runtime, we'd have to reverse engineer almost everything from the reference implementation. That's a lot of hard work that would take a lot of time, and it isn't justified. I don't think we should use DAO money to reverse engineer the implementation; that money should go toward documenting the existing protocol, but I think that's outside this grant's scope.

Moving forward, I think Decentraland should be built from specifications, not the other way around, but that's a discussion for the DAO to have with the Foundation. I understand we're where we are because this has been an effort to get Decentraland going, but to fulfill its mission of being an open-source metaverse, it needs to work from documentation into implementation. Otherwise it becomes much harder for other developers to jump in, which is exactly our experience.

ECS 7

The Foundation is currently working on a very nice improvement to the scene runtime. This means we've been trying to dig into a codebase that is halfway done on a nice new path (ECS 7), very dirty on the old path (ECS 6), and without any documentation saying what's new, what's old, what's used and what's not. We estimate it would take us 8 to 9 months to implement a scene runtime while digging through all of this, which, as I mentioned before, is not a good use of DAO grant money.

I'm not sure the consequences of these two points are obvious: they block us from automatically generating the pixelated graphics for our 2D scenes. We already have the 3D-to-2D algorithm working, but building the runtime needed to obtain a serialization we can work from is just unnecessarily complicated. We could, in theory, modify the existing explorer to generate these images, but that goes against the original spirit of the project, which is to create an independent tech stack.

We can re-evaluate this situation once the Foundation is done with the migration to ECS 7; hopefully they'll take this clean-slate opportunity to write good documentation for us to use. So it makes sense for us to postpone integrating with existing scenes, to minimize the amount of money the DAO spends on this. Or to not do it at all, if we (meaning the Decentraland community) decide that.

Difference in philosophy

Now this is more related to our future vision of Decentraland than to the current implementation.

In our opinion, for Decentraland to become an open-source metaverse, there are 3 very important values to take into account:

  1. Performance
  2. Long-Term Portability
  3. Developer toolset choice

The current runtime is written in JavaScript, with layer upon layer of indirection, concepts borrowed from React (reducers, etc.), and a general level of complexity that feels very overwhelming to a developer who doesn't come from the modern JavaScript ecosystem (a.k.a. us, and most other game developers).

The decision to use JavaScript probably came from the fact that this was a web project from the get-go. I think that was a good short- to mid-term choice, but a bad long-term one.

The current kernel is very tied to the browser, to the point that the "native" client needs to open a browser in the background (through Electron!) to even work. In my opinion this is just building on top of bad decisions instead of stopping and reworking what's wrong to support a better future.

The three values we defined earlier are suffering immensely because of the choices of using JavaScript and coupling to the browser.

Performance suffers because JavaScript is not a fast language. Video games are real-time simulations that require you to squeeze as much as you can out of the CPU; using a language this slow goes against that, big time.

Long-Term Portability suffers because the kernel depends on a browser environment. We don't know what will happen ten years from now, but depending on existing browser implementations is certainly not a good idea, especially when it's not a necessary dependency. The less we depend on external environments, the better, and choosing technologies designed with this problem in mind is highly recommended.

Developer toolset choice suffers too: if we want many developers to join the ranks, we can't make heavy choices for them; we need to let them work with the tools they want as much as we can. Using JavaScript and npm is a statement that says "we only want to work with modern web developers". That's too alienating for other kinds of developers who aren't familiar with the complexity of the JS ecosystem, especially game developers.

We believe the alternative is to choose a technology that is meant to be fast, portable, and able to support multiple toolchains. In our opinion that technology is WebAssembly. (From Wikipedia: "The main goal of WebAssembly is to enable high-performance applications on web pages, but it does not make any Web-specific assumptions or provide Web-specific features, so it can be employed in other environments as well. It is an open standard and aims to support any language on any operating system, and in practice all of the most popular languages already have at least some level of support.")

Next Steps

We're almost ready to start working on documentation and tooling to upload 2d scenes.

If you're interested in being part of the initial batch of miserable (but very valuable) humans who will have to deal with broken stuff to upload a 2D scene to their land, please join our grant channel on Discord and let me know.

Once that's done, we'll start working on v1 of the runtime using WebAssembly. We'll try to work from documentation into implementation, so that even if we're not done implementing by the end of the grant, at least the spec will be ready.