Scene optimization for Oculus Quest
User skill level: basic
Standalone headsets do not require a connection to a computer, but this comes at the cost of lower computing power. Assets used in applications running on these headsets therefore need to be well optimized. In this article, I want to show which optimization techniques can be applied, using the example of a scene prepared for the Oculus Quest VR headset.
Cleaning and retopology of models
The number of triangles is far too large for such a simple model. As you can see in the photo below, the model has unnecessary detail on the bridge railings. The effect of rounded corners is better achieved with a normal map than with geometry.
Camera position and its influence on optimization for Oculus Quest
We should position the camera and prepare the assets so that we can determine which elements are visible to the player, and from where. This lets us adjust the models so that they have topology only in the places the player can see; the goal is to thin the mesh wherever it will never be displayed. Suppose we place the player at this location:
The player can look in every direction and has a 360-degree view, but their position does not change. From this perspective, we will be able to see the bridge, the gate, and the fence from one side only.
By setting the camera position, we already know from which side the player will see the bridge. The photo below shows the bridge seen from the player’s perspective:
And here is the other side of the bridge, invisible to the player, with the elements we can remove marked in orange; the player should not notice their absence. We can do this because, in the scene we are optimizing, the player does not move and will never see the other side of the bridge. In this project, the user has a fixed position in space and is limited to looking around.
Here we can see both bridges, before and after, with textures, from a viewpoint inaccessible to the player:
As you can see in the photo below, the two bridges do not differ significantly from the perspective the player will see. At the same time, the number of triangles drops from 1,236 to 750, a reduction of about 39%. Applying this method to the other models will have a positive effect on the optimization of the final scene.
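Since the player's viewpoint is fixed, the idea above can be sketched as a simple visibility test: any triangle whose front side faces away from the camera can be deleted from the mesh. The function name, the mesh layout, and the numbers below are all illustrative, not part of any real tool; in practice this cleanup is done by hand in a modeling package.

```python
# Minimal sketch: cull triangles that face away from a fixed camera position.
# Each triangle is represented as (centroid, normal), both 3-tuples.

def visible_from(camera, triangles):
    """Keep only triangles whose front side faces the fixed camera."""
    kept = []
    for centroid, normal in triangles:
        # Vector from the triangle toward the camera.
        to_camera = tuple(c - p for c, p in zip(camera, centroid))
        dot = sum(n * v for n, v in zip(normal, to_camera))
        if dot > 0:  # normal points toward the camera -> triangle is visible
            kept.append((centroid, normal))
    return kept

camera = (0.0, 1.7, 0.0)  # the player's fixed eye position
triangles = [
    ((0.0, 1.0, -5.0), (0.0, 0.0, 1.0)),   # faces the camera: kept
    ((0.0, 1.0, -5.0), (0.0, 0.0, -1.0)),  # back side of the bridge: removed
]
print(len(visible_from(camera, triangles)))  # -> 1
```

A real mesh would also need an occlusion check (faces pointing toward the camera but hidden behind other geometry), which is why the orange areas in the screenshots were chosen by eye rather than by a formula.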
Optimization for Oculus Quest: LOD (Level of detail)
This technique relies on using models with different levels of three-dimensional complexity, so-called LODs (Levels of Detail).
The Wikipedia definition of LOD is, roughly: "the level of complexity of a three-dimensional object at a given distance from the point of view. LOD increases rendering efficiency by reducing the number of vertices displayed. The reduced quality of the model is often unnoticeable because of its small effect on the object's appearance when it is far away or moving quickly."
This means that at the modeling stage we create variants of the model with high-, medium-, and low-density meshes. LOD is the process of preparing less detailed versions of the same model that replace the base model the further it is from the camera. As the camera moves away from objects with LODs applied, models with a complex mesh are swapped for models with a lower degree of complexity, and the player may not even notice that the quality has dropped. As a result, we reduce the number of vertices displayed at once, which significantly improves the performance of the scene: the rendering load is lower, so we can display more objects while maintaining a high frame rate.
The example below shows that replacing a highly detailed model with a simpler one at a distance from the camera does not noticeably affect how it is perceived.
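The swap described above boils down to picking a LOD variant from the camera distance. A minimal sketch, with illustrative threshold values (Unity's built-in LOD Group actually switches on screen-relative size rather than raw distance):

```python
# Minimal sketch: choose a LOD variant based on camera distance.
# The thresholds (10 m, 30 m) are made-up values for illustration.

LOD_THRESHOLDS = [
    (10.0, "LOD0"),          # within 10 m: full-detail mesh
    (30.0, "LOD1"),          # 10-30 m: reduced mesh
    (float("inf"), "LOD2"),  # beyond 30 m: lowest-detail mesh
]

def lod_for_distance(distance):
    """Return the name of the LOD variant to render at this distance."""
    for max_distance, lod in LOD_THRESHOLDS:
        if distance <= max_distance:
            return lod

print(lod_for_distance(5.0))   # -> LOD0
print(lod_for_distance(50.0))  # -> LOD2
```

Tuning these thresholds is exactly the "pop-in" compensation discussed below: if a switch is visible, the corresponding threshold is pushed further from the camera.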
LOD optimization can be divided into two parts, the first is the geometry (number of polygons) and the second is the texture resolution.
LODs must follow a specific naming convention for the engine to pick them up. In Unity, it is LOD0, LOD1, LOD2, and so on. LOD0 is the most detailed model, visible close to the camera; from LOD1 onward, the models have fewer and fewer details.
LODs are widely used in game development, and when playing games you can sometimes see the switch from one LOD to another, called "pop-in," when it occurs too close to the camera. This is an undesirable effect. It is best avoided by adjusting the LOD switching distances so that the model change is not noticeable.
Unity has built-in LOD functionality. All we have to do is name the models appropriately and set the camera distances at which they switch.
Creating geometry LODs means reducing the amount of geometry in the model, i.e., retopology. We can do this manually or with tools such as Instant Meshes, but remember that automatic tools can damage the UVs.
Creating LODs from textures
Creating LODs from textures is a much easier process. For example, if we use a 4096×4096 texture for LOD0, it is enough to halve it to 2048×2048 for LOD1, 1024×1024 for LOD2, and so on, depending on how many LOD levels we need.
The important thing to remember is that engines like Unity prefer texture sizes that are powers of 2. Creating LODs is advisable even on small projects: the cost in perceived quality is small, and they allow better performance on less powerful hardware.
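The halving rule above can be sketched in a few lines, including the power-of-two check. The function name is illustrative; this is just the arithmetic from the paragraph above, not an engine API:

```python
# Minimal sketch: derive texture sizes for successive LODs by halving,
# enforcing the engine-friendly power-of-two constraint.

def texture_lod_sizes(base, levels):
    """Return [base, base // 2, base // 4, ...] for the given number of LODs."""
    # A power of two has exactly one bit set, so base & (base - 1) == 0.
    assert base > 0 and base & (base - 1) == 0, "use a power-of-two base size"
    return [base >> i for i in range(levels)]

print(texture_lod_sizes(4096, 3))  # -> [4096, 2048, 1024]
```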
For more information on the use of LOD in Unity, please refer to the documentation:
Limiting the number of objects in the scene
A good practice I recommend is to create models that are not broken up into many smaller pieces. For example, as you can see in the picture below, the gate model was divided into many smaller parts. Each separate object reduces the performance of the scene and translates into a lower frame rate, because each separate element must be rendered in its own draw call, whereas a single combined element is rendered only once. At the same time, the whole scene should not be merged into one object, because then all elements, such as trees and other objects, are rendered constantly even when we are not looking at them. Below is the gate model before combining its elements into one object; as you can see, it consists of 12 separate objects.
Here we already see the gate as one object:
As you can see, the hierarchy now consists of a single object.
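Under the hood, a "combine meshes" step concatenates the vertex lists of the parts and re-offsets their triangle indices so they keep pointing at the right vertices. A minimal sketch with an illustrative mesh layout (a list of vertices plus a list of triangle index tuples), not any engine's actual data structure:

```python
# Minimal sketch of merging several meshes into one object:
# concatenate vertices and shift each mesh's triangle indices
# by the number of vertices already accumulated.

def merge_meshes(meshes):
    vertices, triangles = [], []
    for verts, tris in meshes:
        offset = len(vertices)  # indices of this mesh start here
        vertices.extend(verts)
        triangles.extend(tuple(i + offset for i in tri) for tri in tris)
    return vertices, triangles

# Two quads, each 4 vertices and 2 triangles (like two parts of the gate).
quad_a = ([(0, 0), (1, 0), (1, 1), (0, 1)], [(0, 1, 2), (0, 2, 3)])
quad_b = ([(2, 0), (3, 0), (3, 1), (2, 1)], [(0, 1, 2), (0, 2, 3)])

merged = merge_meshes([quad_a, quad_b])
print(len(merged[0]), len(merged[1]))  # -> 8 4
```

The merged result is one mesh, hence one object in the hierarchy and one draw call instead of two.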
Using the Terrain tool in Unity and optimizing the created terrain
There are a few things to keep in mind when using Terrain in Unity.
- Draw Instanced – in our case, it was best to set this option to OFF.
- Pixel Error – responsible for the accuracy of the mapping between the terrain maps and the generated terrain; a higher value means less accuracy but also a lower rendering load.
- Base Map Dist. – the maximum distance at which Unity displays the terrain textures at full resolution; beyond this distance, a lower-resolution map is used.
- Cast shadows – in our case there was no need for terrain to cast shadows and we could set this value to OFF.
- Detail Distance – the distance from the camera beyond which details are culled.
- Detail Density – the number of detail objects (such as grass) in a given unit of area. Lowering this value reduces rendering overhead.
- Tree Distance – the distance from the camera beyond which trees are culled.
The settings that interest us most are:
- Terrain Width, Terrain Length, Terrain Height – the parameters responsible for the dimensions of our terrain.
- Detail Resolution Per Patch – the size of a single patch, rendered with a single draw call.
- Detail Resolution – the resolution of the map that defines separate areas of detail/grass. A higher resolution results in smaller and more detailed patches.
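As a rough model of how the last two settings interact, the detail map is split into square patches and each patch costs a draw call, so the patch count grows with the square of the ratio between the two values. The function below is an illustrative simplification of that relationship, not Unity's actual implementation:

```python
# Minimal sketch: how Detail Resolution and Detail Resolution Per Patch
# relate. The terrain's detail map is divided into square patches, and
# each patch is rendered with its own draw call (simplified model).

def detail_patches(detail_resolution, resolution_per_patch):
    """Approximate number of detail patches the terrain is split into."""
    per_side = detail_resolution // resolution_per_patch
    return per_side * per_side

# A 1024 detail map with 128 cells per patch -> an 8x8 grid of 64 patches.
print(detail_patches(1024, 128))  # -> 64
# Doubling the per-patch size quarters the patch count (fewer draw calls,
# at the cost of coarser culling granularity).
print(detail_patches(1024, 256))  # -> 16
```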
Finally, we used the Bakery asset to bake the lighting into textures, so we do not need to use real-time lights.
Summing up: the number of batches dropped from 1,136 to 177, the number of triangles from 443,000 to 81,000, and the number of passes from 335 to 121.
Before optimization, the application running on the Oculus Quest averaged 25 fps without any post-processing in the scene. After optimization, it averaged 60 fps, even with post-processing enabled.
This is only part of what we can do to improve scene optimization.