This document describes alternatives for synchronizing scenes and renderers using the CRDT protocol. It presents the problems of dropped frames and frame delay between the renderer and the scene, and describes the sequencing and locking of the scene loop and the renderer's render frame to optimize interactivity. An implementation similar to "double buffering" is chosen.
Decentraland scenes run in contexts isolated from the rendering engine (the renderer from now on), in the worst case in a different process, and can communicate only via messaging. Since scenes have an independent update loop (the scene frame from now on), clear synchronization points need to be designed to reach consistent states between the renderer and the scene.
The context of synchronizing the scenes and the renderer is complex, and many dimensions participate in the analysis.
This document will enumerate the considered alternatives and the implications of each one of them.
This approach is the simplest to explain: a scene frame runs in the scene, which then sends its updates to the renderer and waits for the response before running the next scene frame.
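This lockstep loop can be sketched as follows. The `update` and `sendToRenderer` callbacks and the `RendererAck` shape are illustrative assumptions, not part of any SDK:

```typescript
// Lockstep sketch: each scene frame starts only after the renderer has
// acknowledged the previous one. All names here are illustrative assumptions.
type RendererAck = { frameNumber: number }

async function lockstepSceneLoop(
  update: (dt: number) => Uint8Array, // runs all systems, returns CRDT updates
  sendToRenderer: (updates: Uint8Array) => Promise<RendererAck>,
  frames: number
): Promise<number> {
  let lastAcked = -1
  for (let frame = 0; frame < frames; frame++) {
    const updates = update(1 / 30)            // run the scene frame
    const ack = await sendToRenderer(updates) // wait for the renderer's response
    lastAcked = ack.frameNumber               // only now may the next frame start
  }
  return lastAcked
}
```

The cost of this alternative is latent in the `await`: the scene is fully idle while the renderer works, which motivates the pipelined alternatives below.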
To illustrate better what happens inside the scene frame and renderer frame, we will consider some stages for each of them:
For the scene frame:
- Scene.Update: run all the scene's systems
- Scene.Send: send the resulting updates to the renderer
- Scene.Receive: receive and process the renderer's response
For the renderer frame:
- Renderer.Receive: receive and process the messages from the scenes
- Renderer.Update: run local systems such as physics and transformations
- Renderer.Send: send the updates back to the scenes
- Renderer.Render: render the frame (GPU work)
Now that the frame is decomposed into smaller stages, it can be observed that the Renderer.Render stage does not need to block the Renderer.Send back to the scene. Processing the updates is, however, a prerequisite for Renderer.Render, since the physics and transformations are used to calculate the GPU buffers for the next frame.
An extension to this optimization is that the Renderer.Receive can happen in parallel, while the previous GPU frame is still being processed by the GPU process, effectively removing the dropped scene frames caused by excessive waiting.
Considerations: this approach performs considerably better on multi-threaded systems, since it becomes possible to parallelize the rendering and the Renderer.Send.
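The gain from overlapping Renderer.Receive with the previous GPU frame can be illustrated with a toy two-stage pipeline model. The millisecond figures and function names below are illustrative assumptions, not measurements:

```typescript
// Toy throughput model for the pipelined alternative.
function sequentialTime(frames: number, receiveMs: number, renderMs: number): number {
  // Without overlap, every frame pays both stage costs in series.
  return frames * (receiveMs + renderMs)
}

function pipelinedTime(frames: number, receiveMs: number, renderMs: number): number {
  // With overlap, after the first receive each frame is bounded only by the
  // slower of the two stages (a classic two-stage pipeline).
  return receiveMs + frames * Math.max(receiveMs, renderMs)
}
```

For example, with a hypothetical 4 ms receive and 12 ms render over 100 frames, the sequential model spends 1600 ms while the pipelined one spends 1204 ms, and the advantage grows as the stage costs become more similar.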
The scenes' runtimes are RECOMMENDED to be independent and to run in parallel. It is also RECOMMENDED that the Renderer can process those updates concurrently.
There is one explicit synchronization point that implementers MUST consider: The renderer MUST NOT respond to the scene until all the messages of the previous frame have been processed and the physics and camera position calculated.
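This synchronization point can be sketched as a renderer-side gate; the class and method names are hypothetical, not engine APIs:

```typescript
// Gate for the mandatory synchronization point: the response to a scene is
// released only once every queued message from the previous frame has been
// applied AND the physics/camera state has been computed.
class FrameGate {
  private pending = 0
  private physicsDone = false

  messageEnqueued(): void { this.pending++ }
  messageProcessed(): void { this.pending = Math.max(0, this.pending - 1) }
  physicsComputed(): void { this.physicsDone = true }

  // The renderer polls this before sending the frame response to the scene.
  canRespond(): boolean {
    return this.pending === 0 && this.physicsDone
  }
}
```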
It was considered for this design that a scene in the renderer could take several renderer frames to process all the queued messages. Implementers SHOULD process all messages in order, and MUST prioritize "global scenes" first, then the remaining scenes ordered by distance.
Scenes that are far away MAY receive only eventual updates, because the closest scenes MAY consume most of the processing quota.
This is to prioritize experiences where the user is participating while keeping the world visible in the surroundings.
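The mandated ordering can be sketched as a pure sorting function; the `SceneInfo` shape is an assumption for illustration:

```typescript
// Sketch of the processing order: global scenes first, then scenes sorted by
// distance. The SceneInfo type and its fields are illustrative assumptions.
type SceneInfo = { id: string; isGlobal: boolean; distance: number }

function processingOrder(scenes: SceneInfo[]): string[] {
  return [...scenes]
    .sort((a, b) => {
      if (a.isGlobal !== b.isGlobal) return a.isGlobal ? -1 : 1 // globals first
      return a.distance - b.distance                            // then nearest first
    })
    .map((s) => s.id)
}
```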
Update messages will arrive at the Renderer via sockets or shared memory. It is RECOMMENDED that those operations are batched and executed while the GPU process is rendering the previous frame.
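One way to batch those operations is a double-buffered inbox: the transport appends to one buffer while the renderer drains the other during the previous GPU frame. A minimal sketch, with illustrative names:

```typescript
// Double-buffered message inbox. The transport pushes into the write buffer;
// once per render frame the buffers are swapped and the renderer drains the
// batch while the GPU processes the previous frame.
class MessageInbox<T> {
  private writeBuf: T[] = []
  private readBuf: T[] = []

  // Called by the transport (socket / shared memory) as messages arrive.
  push(message: T): void {
    this.writeBuf.push(message)
  }

  // Called once per render frame; returns the batch to process.
  swap(): T[] {
    const batch = this.writeBuf
    this.writeBuf = []
    this.readBuf = batch
    return batch
  }
}
```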
The first frame of a scene is either sent by the scene code or by the runtime (via main.crdt, as stated in ADR-133). The physics phase of this initial frame MUST only be executed after all its messages have been processed and all the models have been loaded.
This enables the scene to embed Raycast queries that will hit the models being loaded in this first frame.
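The initial-frame rule can be sketched as follows; `applyMessages`, `loadModel`, and `runPhysics` are hypothetical stand-ins, not engine APIs:

```typescript
// Sketch of the initial-frame rule: physics for the first frame only runs
// after every queued message is applied AND every referenced model finished
// loading, so raycasts issued in that frame can hit the loaded colliders.
async function runInitialFrame(
  applyMessages: () => string[],           // processes messages, returns referenced model URLs
  loadModel: (src: string) => Promise<void>,
  runPhysics: () => void
): Promise<void> {
  const models = applyMessages()           // 1. process every queued message
  await Promise.all(models.map(loadModel)) // 2. wait for all models to load
  runPhysics()                             // 3. only now resolve physics (and raycasts)
}
```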
The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", "SHOULD", "SHOULD NOT", "RECOMMENDED", "NOT RECOMMENDED", "MAY", and "OPTIONAL" in this document are to be interpreted as described in RFC 2119 and RFC 8174.