Third Room Development Update 4-25-2022 #47
robertlong announced in Announcements
Replies: 4 comments
- Awesome work!!!!
- That sounds great! Supporting Mozilla Hubs and VRM avatar skeletons is important, I think.
- I'm loving what you guys are doing! But I'm wondering how many simultaneous users a space can accommodate without slowdown/lag?
Third Room Development Update 4-25-2022
A lot has been happening since we kicked off Third Room development back in February. In this update we'll give you a summary of all the cool things going on with Third Room's development and what's coming up next.
General Development Updates
We recently set up a bunch of developer tooling: we switched to Vitest for testing, added automated testing on GitHub Actions, and set up automated deployment to Netlify. thirdroom.io now tracks the main branch of the Third Room repository. Also, all PRs build deploy previews, so you can check out the status of a feature without having to run it locally. This includes our super early and rudimentary Storybook replacement, which is available in development and deploy previews at the /storybook route. Production deploys do not include the storybook route.
UI Progress
We've finished the most recent Third Room UI designs and started implementing them. You can see some of them taking shape on thirdroom.io. Some pages, such as the login and loading screens, have yet to be styled, and others still have bugs or are in a half-implemented state, but we're making progress and building out our design system to speed up development as we go. We'll be sharing more of our designs publicly as we finalize this design system. For now, if you'd like to help out and contribute to frontend development, reach out to us in the #thirdroom-dev:matrix.org room and we'll get you set up. We hope to improve our outside contribution process as we lay down the foundation for the design system.
Here's a sneak peek of some of the updated designs:
Shift to multi-threaded engine design
Building a game is a constant balancing act between CPU and GPU performance. From the start we've had fairly good CPU performance. Our ECS framework, bitECS, is built around data-oriented design: we iterate over component properties laid out linearly in memory, avoiding CPU cache misses. We also use a WASM-based physics engine called Rapier, which performs much better than JS-based options like CannonJS. However, rendering with WebGL carries a huge amount of CPU overhead. WebGL's API involves calling many functions each frame to set up the pipeline for drawing each object, and WebGL has higher overhead than OpenGL's APIs because it must validate each function's parameters to ensure a web page can't crash your computer or do other nasty things with your GPU.
So, to get around the CPU overhead of rendering complex scenes, we separated our render thread from our game thread. The game thread runs at a fixed 60hz tick rate and updates the physics simulation and other game logic, while the render thread runs at your device's native refresh rate and renders the scene using Three.js. The game thread reads your keyboard and mouse inputs and computes an updated scene graph every 16.666ms. This game state is stored in a SharedArrayBuffer which can be read by the render thread.
However, these two threads run at different rates. To share the game thread's state with the render thread, we use a data structure called a triple buffer. A triple buffer stores 3 copies of the game state: one the game thread writes to, one the render thread reads from, and a spare that the game thread swaps in when it has finished updating and needs a new buffer to write to. This means the game thread keeps running at a constant 16.666ms tick rate and always has a fresh buffer available when it's done, without any locking. The render thread then swaps in the most recently updated game state buffer and doesn't need to lock either. This gives us a full 16.666ms of frame time to update the game state and, at 60hz, a full 16.666ms of frame time to render the scene. We get to submit to the GPU right at the start of the frame, and the GPU always stays saturated.
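To make the triple-buffer idea concrete, here's a minimal, hypothetical sketch of the scheme described above; this is not Third Room's actual implementation, and the class and method names are illustrative. Three copies of the state live in one SharedArrayBuffer, and a single atomic cell packs the spare buffer's index (bits 0-1) with a "fresh data" flag (bit 2), so neither thread ever locks:

```typescript
// Hypothetical lock-free triple buffer sketch (not Third Room's actual code).
const FLOATS_PER_STATE = 16; // illustrative game-state size
const FRESH = 0b100;         // bit 2 of the flag cell: new data published

class TripleBuffer {
  private states: Float32Array[];
  private flags: Int32Array; // one cell: spare buffer index | FRESH bit
  private writeIdx = 0;      // owned by the game thread
  private readIdx = 1;       // owned by the render thread

  constructor() {
    // 3 state copies plus one Int32 flag cell, all in a SharedArrayBuffer.
    const sab = new SharedArrayBuffer(FLOATS_PER_STATE * 4 * 3 + 4);
    this.states = [0, 1, 2].map(
      (i) => new Float32Array(sab, i * FLOATS_PER_STATE * 4, FLOATS_PER_STATE)
    );
    this.flags = new Int32Array(sab, FLOATS_PER_STATE * 4 * 3, 1);
    Atomics.store(this.flags, 0, 2); // buffer 2 starts as the spare
  }

  // Game thread: the buffer it may freely write this tick.
  get writeState(): Float32Array {
    return this.states[this.writeIdx];
  }

  // Game thread, end of tick: publish the written buffer, take the spare.
  publish(): void {
    const old = Atomics.exchange(this.flags, 0, this.writeIdx | FRESH);
    this.writeIdx = old & 0b11;
  }

  // Render thread, start of frame: swap in the freshest published buffer.
  latest(): Float32Array {
    if (Atomics.load(this.flags, 0) & FRESH) {
      const old = Atomics.exchange(this.flags, 0, this.readIdx);
      this.readIdx = old & 0b11;
    }
    return this.states[this.readIdx];
  }
}

// Single-threaded demonstration (the real threads would share the buffer):
const tb = new TripleBuffer();
tb.writeState[0] = 42; // game thread writes some state
tb.publish();
console.log(tb.latest()[0]); // → 42, read by the render thread
```

The key property is that both `publish` and `latest` are a single atomic exchange, so the writer never waits for the reader and vice versa.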
This is something we haven't seen much of on the web so far, largely because SharedArrayBuffer has only been enabled in all browsers within the last year. It's also worth noting that the render thread is only split out into its own WebWorker in browsers that support OffscreenCanvas (currently all Blink-based browsers like Chrome/Edge). Still, moving the game thread to a web worker is a huge performance improvement in Safari and Firefox.
This also means that the game thread no longer has access to Three.js or DOM APIs, which has required a huge rethinking of how we interact with our scene graph. All scene graph logic is controlled from the game thread. The game thread can create resources on the render thread asynchronously using the postMessage API. It also increments an atomic counter to create a resource id that you can use immediately to reference resources such as textures, materials, geometry, meshes, lights, cameras, and more. All Three.js objects are loaded and used on the render thread. We still use Three.js's loaders like the TextureLoader and GLTFLoader, but most of that data stays on the render thread. For the GLTFLoader, we send back a minimal data structure describing the glTF's scene graph and components like transforms, lights, meshes, etc. In practice, this game thread API ends up looking something like this:
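A hypothetical sketch of how such an API could look; the names `createRemoteTexture`, `nextResourceId`, and `RenderPort` are illustrative, not Third Room's actual API. The game thread mints an id from an atomic counter, posts a creation message, and can use the id immediately while the render thread loads the resource in the background:

```typescript
// Minimal message channel to the render thread; a real WebWorker's
// postMessage would stand in for this interface.
interface RenderPort {
  postMessage(msg: unknown): void;
}

// The counter lives in a SharedArrayBuffer so any thread can mint unique ids.
const idCounter = new Uint32Array(new SharedArrayBuffer(4));

function nextResourceId(): number {
  // Atomics.add returns the previous value; ids therefore start at 1.
  return Atomics.add(idCounter, 0, 1) + 1;
}

function createRemoteTexture(port: RenderPort, url: string): number {
  const resourceId = nextResourceId();
  // The render thread loads the texture asynchronously; meanwhile the game
  // thread can already attach resourceId to materials, meshes, etc.
  port.postMessage({ op: "create-texture", resourceId, url });
  return resourceId;
}

// Usage with a stubbed port:
const port: RenderPort = { postMessage: () => {} };
const textureId = createRemoteTexture(port, "/textures/wood.png");
const materialId = nextResourceId(); // ids stay unique across resource types
```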
We're still working out the details of the game worker API, but we hope over time to enable a simple yet powerful API to work with. This API will also likely be very similar to what we expose to our WASM user generated content API which third party developers can use to add behaviors to their Third Room worlds. We hope to share more details on the UGC API soon.
Networking
Currently networking is in a proof of concept state. You can join a Matrix room, move, jump around, and spawn cubes at the moment. We've integrated the hydrogen-view-sdk and the work in progress group calls implementation. It's still unstable due to how it handles sessions and a number of other things, but progress is being made quickly. We hope to add spatialized voice chat, work on connection stability, and synchronize additional networked object properties in the coming weeks.
Once support for group calls lands in the Hydrogen SDK, we'll add support for room state events so we can implement world room types and store data like a world's scene URL and members' avatar URLs. This will allow us to finish filtering worlds out from other Matrix rooms, load a different environment per world, and show a custom avatar per member.
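As an illustration, world data like this could be carried in Matrix room state events. The event types and field names below are hypothetical, not a finalized schema:

```typescript
// Hypothetical shapes for Third Room state events; all type and field
// names here are illustrative, not an actual or proposed schema.

// Per-room world data, keyed by an empty state_key (one per room).
const worldStateEvent = {
  type: "org.thirdroom.world", // hypothetical event type
  state_key: "",
  content: {
    scene_url: "mxc://example.org/SomeSceneMediaId", // glTF scene to load
  },
};

// Per-member avatar data, keyed by the member's user id.
const memberAvatarEvent = {
  type: "org.thirdroom.avatar", // hypothetical event type
  state_key: "@alice:example.org",
  content: {
    avatar_url: "mxc://example.org/SomeAvatarMediaId", // VRM or Hubs avatar
  },
};

console.log(worldStateEvent.content.scene_url);
```

Because state events are replicated to every member of a room, both pieces of data would arrive through the same Matrix sync the client already performs.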
Ecosystem Updates
We're actively participating in the Open Metaverse Interoperability Group, which just celebrated its one-year anniversary! We're working on standardizing audio and physics glTF extensions. Standardizing glTF extensions like these prevents vendor lock-in for third-party content and promotes more interoperability in the greater 3D community.
What's Next?
Aside from the updates above, we have a lot on our short-term roadmap as we make progress towards the MVP of Third Room.
Content creation tools
Third Room needs an editor. We've been weighing our options and there isn't a perfect solution yet. We'll likely have to build out our own Third Room editor due to our unique multithreaded engine design and glTF based UGC system. We'd love feedback from the community on what you'd like to see for environment/avatar creation tooling. We ideally want an experience that is immediately previewable in engine and has a path towards collaborative content creation.
We also need to build out tooling for scene optimization. This includes texture compression with Basis Universal, light baking, precomputing irradiance volumes / reflection probes, and much more. This needs to integrate into our editor and ideally run in-browser.
Audio
Unfortunately, with the switch to our multithreaded design, audio has become a lot more complicated. The WebAudio API is not available in WebWorkers, so we need to implement a system that always runs on the main thread while staying synchronized with the render thread, and we need to be able to control its behavior from the game thread. We have big plans for audio in Third Room, but we'll likely start small with basic spatialized member audio streams and environmental audio.
Avatars and Character Controller Improvements
Right now you run around Third Room as a cube. As fun and cute as the cube is, we realize we need to implement an avatar system. We'd like to support both the Mozilla Hubs and VRM avatar skeletons and implement an animation retargeting system that works with both. There are a couple different libraries out there that we can use to speed up this work, but we still have a huge amount of work ahead of us in this space. As with audio, we'll start simple and iterate on it.
You may also notice some bugs with the existing physics-based character controller. You can jump infinitely, sometimes launch yourself high into the air, and hit many other pleasantly (or not so pleasantly) annoying bugs. Rest assured, we'll be cleaning up our character controller soon.
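As one illustration, an infinite-jump bug is commonly fixed by gating the jump impulse behind a grounded check. A hypothetical sketch, not Third Room's actual controller code:

```typescript
// Hypothetical character controller step; all names and constants here are
// illustrative, not Third Room's actual implementation.
interface CharacterState {
  verticalVelocity: number; // m/s, positive is up
  grounded: boolean;        // standing on something this tick?
}

const JUMP_VELOCITY = 6; // m/s, illustrative
const GRAVITY = -9.81;   // m/s^2

function step(state: CharacterState, jumpPressed: boolean, dt: number): void {
  // Only apply the jump impulse while grounded; in a real engine `grounded`
  // would come from a physics query (e.g. a short downward ray cast).
  if (jumpPressed && state.grounded) {
    state.verticalVelocity = JUMP_VELOCITY;
    state.grounded = false;
  }
  // Gravity integrates every tick regardless of input.
  state.verticalVelocity += GRAVITY * dt;
}
```

Holding jump mid-air then does nothing: the impulse can only fire again after the physics engine reports the character grounded.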
Summary
We've accomplished a lot in the last couple months and we're looking forward to much more fun and visually interesting updates to come!
If you'd like to get involved or stay up to date with Third Room development, make sure to join our Matrix room: #thirdroom-dev:matrix.org