ADR-65: Avatar System for Renderer (Unity)


Problem Statement

Avatars are a key part of Decentraland, and the system behind the scenes in charge of loading and rendering them must be resilient, scalable and performant. This ADR's goal is to introduce the system itself as well as to serve as documentation for new contributors.


For an avatar model (a set of wearable ids with color settings), the system must resolve the wearables, download their assets and produce the final rendered avatar.


Existing solution

Although it falls outside the scope of this ADR, it's worth stating the problems associated with our previous implementation so they can be explicitly taken into consideration in the proposal. It consisted of a monolithic multipurpose class in a god object anti-pattern, with several downsides.


The DCL avatar system is in constant evolution and needs to be as modular and clear as possible. The first step is to split the loading of an avatar into different modules:


Our starting point is an avatar profile: just a collection of wearable ids without metadata, plus some color settings for skin and hair.


The IAvatarCurator transforms ids into a readable avatar profile with metadata.

public interface IAvatarCurator : IDisposable
{
    (
        WearableItem bodyshape,
        WearableItem eyes,
        WearableItem eyebrows,
        WearableItem mouth,
        List<WearableItem> wearables,
        List<WearableItem> emotes
    ) Curate(AvatarSettings settings, IEnumerable<string> wearablesId);
}

Internally it uses IWearableItemResolver to fetch the metadata of a wearable based on its id.

public interface IWearableItemResolver : IDisposable
{
    (
        List<WearableItem> wearables,
        List<WearableItem> emotes
    ) ResolveAndSplit(IEnumerable<string> wearableIds);

    WearableItem[] Resolve(IEnumerable<string> wearableId);
    WearableItem Resolve(string wearableId);

    void Forget(List<string> wearableIds);
    void Forget(string wearableId);
}
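The Resolve/Forget pair implies an id-keyed cache of wearable metadata: resolving an id twice should hit the network only once, and forgetting it should force a re-fetch. A minimal, Unity-free sketch of that caching behavior (the WearableItem stand-in and the fetch delegate are illustrative, not the real types):

```csharp
using System;
using System.Collections.Generic;

// Illustrative stand-in for the real WearableItem metadata class.
public class WearableItem
{
    public string id;
}

public class CachingResolver
{
    private readonly Dictionary<string, WearableItem> cache = new Dictionary<string, WearableItem>();
    private readonly Func<string, WearableItem> fetch; // in the real system, a catalyst request

    public CachingResolver(Func<string, WearableItem> fetch) { this.fetch = fetch; }

    // Returns the cached metadata, or fetches and caches it on a miss.
    public WearableItem Resolve(string wearableId)
    {
        if (!cache.TryGetValue(wearableId, out var item))
        {
            item = fetch(wearableId);
            cache[wearableId] = item;
        }
        return item;
    }

    // Evicts an id so the next Resolve re-fetches it.
    public void Forget(string wearableId) => cache.Remove(wearableId);
}
```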


The ILoader receives a set of wearables with metadata and outputs a single SkinnedMeshRenderer combining the wearables and the bodyshape. At the moment FacialFeatures are handled separately.

public interface ILoader : IDisposable
{
    public enum Status { /* … */ }

    GameObject bodyshapeContainer { get; }
    SkinnedMeshRenderer combinedRenderer { get; }
    List<Renderer> facialFeaturesRenderers { get; }
    Status status { get; }

    void Load(WearableItem bodyshape, WearableItem eyes, WearableItem eyebrows, WearableItem mouth, List<WearableItem> wearables, AvatarSettings settings);
    Transform[] GetBones();
}

The Loader makes use of IWearableLoader and IBodyshapeLoader to download and prepare each wearable (including the bodyshape).

public interface IWearableLoader : IDisposable
{
    public enum Status { /* … */ }

    WearableItem wearable { get; }
    Rendereable rendereable { get; }
    Status status { get; }

    void Load(GameObject container, AvatarSettings avatarSettings);
}

public interface IBodyshapeLoader : IWearableLoader
{
    WearableItem eyes { get; }
    WearableItem eyebrows { get; }
    WearableItem mouth { get; }

    SkinnedMeshRenderer eyesRenderer { get; }
    SkinnedMeshRenderer eyebrowsRenderer { get; }
    SkinnedMeshRenderer mouthRenderer { get; }
    SkinnedMeshRenderer headRenderer { get; }
    SkinnedMeshRenderer feetRenderer { get; }
    SkinnedMeshRenderer upperBodyRenderer { get; }
    SkinnedMeshRenderer lowerBodyRenderer { get; }

    bool IsValid(WearableItem bodyshape, WearableItem eyebrows, WearableItem eyes, WearableItem mouth);
}

Internally, both delegate the heavy lifting of downloading and retrieving the assets to IWearableRetriever.

public interface IWearableRetriever : IDisposable
{
    Rendereable rendereable { get; }

    Rendereable Retrieve(GameObject container, ContentProvider contentProvider, string baseUrl, string mainFile);
}

In the case of the BodyshapeLoader we also have to retrieve the facial features using IFacialFeatureRetriever.

public interface IFacialFeatureRetriever : IDisposable
{
    (Texture main, Texture mask) Retrieve(WearableItem facialFeature, string bodyshapeId);
}

Once the bodyshape and every wearable are downloaded and the colors for hair and skin are set, we merge them into a single multi-material mesh. There's an in-depth post about that here.

The merge of the avatar is done by an IAvatarMeshCombinerHelper.

public interface IAvatarMeshCombinerHelper : IDisposable
{
    public bool useCullOpaqueHeuristic { get; set; }
    public bool prepareMeshForGpuSkinning { get; set; }
    public bool uploadMeshToGpu { get; set; }
    public bool enableCombinedMesh { get; set; }

    public GameObject container { get; }
    public SkinnedMeshRenderer renderer { get; }

    public bool Combine(SkinnedMeshRenderer bonesContainer, SkinnedMeshRenderer[] renderersToCombine);
    public bool Combine(SkinnedMeshRenderer bonesContainer, SkinnedMeshRenderer[] renderersToCombine, Material materialAsset);
}


At this point we have a fully loaded avatar combined in a single mesh. The next step is to prepare it for animations. IAnimator takes care of that:

public interface IAnimator
{
    bool Prepare(string bodyshapeId, GameObject container);
    void PlayEmote(string emoteId, long timestamps);
    void EquipEmote(string emoteId, AnimationClip clip);
    void UnequipEmote(string emoteId);
}

IAnimator.Prepare sets up the locomotion animations and creates the components Unity needs at the root of the avatar hierarchy.


The equipped emotes are received along with the rest of the wearables in the user profile. Once they are identified (by the AvatarCurator), a whole process to download and process the animations is required.

Requesting, retrieving, caching and processing animations is not trivial and it's explained on its own ADR.

To summarize: IEmoteAnimationEquipper takes care of requesting an emote animation and waits until it's ready before equipping it in the IAnimator.

public interface IEmoteAnimationEquipper : IDisposable
{
    void SetEquippedEmotes(string bodyShapeId, IEnumerable<WearableItem> emotes);
}

GPU Skinning

GPU Skinning is part of our optimization tweaks. It composes the transformation matrix for each bone in an animation and forwards them to the shader, which relocates every vertex.

public interface IGPUSkinning
{
    Renderer renderer { get; }

    void Prepare(SkinnedMeshRenderer skr, bool encodeBindPoses = false);
    void Update();
}
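As a toy illustration of what skinning computes, whether on CPU or GPU: each vertex is transformed by every bone that influences it, and the results are blended by the bone weights. The sketch below uses System.Numerics and two bones; it is not the renderer's actual shader code, and in the real system each bone matrix is the composition of the bone's world transform and its bind pose.

```csharp
using System.Numerics;

public static class LinearBlendSkinning
{
    // Blends a vertex between two bone transforms by their weights (w0 + w1 == 1).
    // The skinning shader runs this per vertex, for up to four bone influences.
    public static Vector3 Skin(Vector3 vertex, Matrix4x4 bone0, float w0, Matrix4x4 bone1, float w1)
    {
        Vector3 p0 = Vector3.Transform(vertex, bone0);
        Vector3 p1 = Vector3.Transform(vertex, bone1);
        return p0 * w0 + p1 * w1;
    }
}
```

With both bones at identity the vertex stays put; as one bone moves, the vertex follows it proportionally to that bone's weight.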

It also comes with a throttler, which spreads the updates across frames for avatars that are further away:

public interface IGPUSkinningThrottler : IDisposable
{
    void Bind(IGPUSkinning gpuSkinning);
    void SetThrottling(int framesBetweenUpdates);
    void Start();
    void Stop();
}
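A frame-skipping throttler can be as simple as a frame counter. A Unity-free sketch of the idea behind IGPUSkinningThrottler (class and member names are illustrative):

```csharp
using System;

// Invokes the bound update action only every N frames, so distant avatars
// refresh their GPU skinning less often than nearby ones.
public class FrameThrottler
{
    private readonly Action update;
    private int framesBetweenUpdates = 1;
    private int frameCount;

    public FrameThrottler(Action update) { this.update = update; }

    public void SetThrottling(int frames) => framesBetweenUpdates = Math.Max(1, frames);

    // Called once per rendered frame.
    public void Tick()
    {
        frameCount++;
        if (frameCount >= framesBetweenUpdates)
        {
            frameCount = 0;
            update();
        }
    }
}
```

With SetThrottling(3), an avatar's skinning updates on every third frame instead of every frame.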


The LOD system allows disabling expensive rendering features based on distance. At the moment three levels have been implemented:

LOD0: Fully 3D Avatar.

LOD1: Fully 3D Avatar without SSAO and FacialFeatures.

LOD2: A billboard impostor with a texture of the body on top.
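Mapping a camera distance to one of these three levels is, conceptually, a threshold lookup. The distances below are made-up illustrative values, not the renderer's real tuning:

```csharp
public static class AvatarLod
{
    // Illustrative thresholds in meters; the real values are tuned in the renderer.
    public const float Lod1Distance = 16f;
    public const float Lod2Distance = 32f;

    // Returns 0 (full avatar), 1 (no SSAO or facial features) or 2 (billboard impostor).
    public static int GetLodIndex(float distanceToCamera)
    {
        if (distanceToCamera < Lod1Distance) return 0;
        if (distanceToCamera < Lod2Distance) return 1;
        return 2;
    }
}
```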

public interface ILOD : IDisposable
{
    int lodIndex { get; }

    void Bind(Renderer combinedAvatar);
    void SetLodIndex(int lodIndex, bool inmediate = false);
    void SetImpostorTexture(Texture2D texture);
    void SetImpostorTint(Color color);
}

LOD will also make use of the visibility handler (see below) to hide different parts of the avatar.


The final step of the loading process is the visibility handler. It's not as easy as just turning the avatar on/off: multiple systems have different reasons to hide or show an avatar, and they usually conflict with one another. An avatar can be hidden because it's behind the camera, inside an AvatarModifierArea or because the max budget for avatars has been reached...

To avoid these conflicts, a visibility constraints system has been implemented.

public interface IVisibility : IDisposable
{
    void Bind(Renderer combinedRenderer, List<Renderer> facialFeatures);

    void AddGlobalConstrain(string key);
    void RemoveGlobalConstrain(string key);

    void AddCombinedRendererConstrain(string key);
    void RemoveCombinedRendererConstrain(string key);

    void AddFacialFeaturesConstrain(string key);
    void RemoveFacialFeaturesConstrain(string key);
}

The avatar itself won't be rendered if any global or CombinedRenderer constraint exists.

The FacialFeatures won't be rendered if any global or FacialFeatures constraint exists.

E.g. a constraint own_player_invisible is added when toggling between the 1st and 3rd person camera.
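The constraint system boils down to named sets: a renderer stays visible only while the sets that affect it are empty. A Unity-free sketch of that rule (the method names mirror the interface above, but the class itself is illustrative):

```csharp
using System.Collections.Generic;

public class VisibilityConstraints
{
    private readonly HashSet<string> global = new HashSet<string>();
    private readonly HashSet<string> combinedRenderer = new HashSet<string>();
    private readonly HashSet<string> facialFeatures = new HashSet<string>();

    public void AddGlobalConstrain(string key) => global.Add(key);
    public void RemoveGlobalConstrain(string key) => global.Remove(key);
    public void AddCombinedRendererConstrain(string key) => combinedRenderer.Add(key);
    public void RemoveCombinedRendererConstrain(string key) => combinedRenderer.Remove(key);
    public void AddFacialFeaturesConstrain(string key) => facialFeatures.Add(key);
    public void RemoveFacialFeaturesConstrain(string key) => facialFeatures.Remove(key);

    // The avatar renders only if no global or CombinedRenderer constraint exists.
    public bool IsAvatarVisible => global.Count == 0 && combinedRenderer.Count == 0;

    // Facial features render only if no global or FacialFeatures constraint exists.
    public bool AreFacialFeaturesVisible => global.Count == 0 && facialFeatures.Count == 0;
}
```

Because constraints are keyed, each system adds and removes its own reason for hiding the avatar without stomping on the others.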

Test suite

The test suite is fairly simple compared to the complexity of the avatar system. Every dependency is injected through the constructor and based on an interface. This pattern, known as dependency injection, allows isolating every subsystem by mocking its dependencies with any of the available mocking frameworks (in our case, NSubstitute).


public class EmoteAnimationEquipperShould
{
    private EmoteAnimationEquipper equipper;
    private IAnimator animator;

    [SetUp]
    public void SetUp()
    {
        animator = Substitute.For<IAnimator>();
        equipper = new EmoteAnimationEquipper(animator);
    }

    [Test]
    public void AssignReferencesOnConstruction()
    {
        Assert.AreEqual(animator, equipper.animator);
        Assert.AreEqual(0, equipper.emotes.Count);
    }
}



The new system replaces the previous implementation, which was a single class following the god object anti-pattern (see the section on the existing solution above). The new avatar system, while sticking to industry best practices, fixes the design flaws of that implementation (refer to "Benefits").


Status: Living

Copyright and related rights waived via CC0-1.0.