This is the second blog post in the sequence in which I talk about the LensVR rendering engine. In the first post, I discussed the high-level architecture of LensVR rendering. In this post, I will get into the specifics of our implementation – how we use the GPU for all drawing. I will also share data from a performance comparison I did with Chrome and Servo Nightly.
Rendering to the GPU
After we have our list of high-level rendering commands, we have to execute them on the GPU. Renoir transforms these commands into low-level graphics API calls (OpenGL, DirectX, etc.). For instance, a “FillRectangle” high-level command becomes a series of calls that set vertex/index buffers and shaders, followed by a draw call.
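As a rough sketch of what this translation involves (the structures and names here are hypothetical, not Renoir’s actual API), a fill-rectangle command can be expanded into the two triangles that a low-level graphics API actually draws:

```cpp
#include <cstdint>
#include <vector>

// Hypothetical high-level command, loosely modeled on the post's
// "FillRectangle" example; Renoir's real structures are not public.
struct FillRectangle {
    float x, y, width, height;
    uint32_t color; // RGBA
};

struct Vertex { float x, y; uint32_t color; };

// Expand the rectangle into four vertices and six indices (two
// triangles) ready for a vertex/index buffer.
void TranslateFillRectangle(const FillRectangle& cmd,
                            std::vector<Vertex>& vertices,
                            std::vector<uint16_t>& indices) {
    const uint16_t base = static_cast<uint16_t>(vertices.size());
    vertices.push_back({cmd.x,             cmd.y,              cmd.color});
    vertices.push_back({cmd.x + cmd.width, cmd.y,              cmd.color});
    vertices.push_back({cmd.x + cmd.width, cmd.y + cmd.height, cmd.color});
    vertices.push_back({cmd.x,             cmd.y + cmd.height, cmd.color});
    const uint16_t quad[6] = {0, 1, 2, 2, 3, 0};
    for (uint16_t i : quad) indices.push_back(base + i);
    // A real backend would now upload the buffers, bind a shader and
    // issue a DrawIndexed(6)-style call through OpenGL/DirectX.
}
```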
Renoir “sees” all commands that will happen in the frame and can make high-level decisions to optimize for three important constraints:
- Minimize GPU memory usage
- Minimize synchronization with the GPU
- Minimize rendering API calls
When drawing a web page, certain effects require the use of intermediate render targets. Renoir groups as many of those effects as possible into the same target to minimize the memory used and to reduce render-target switches on the GPU, which are fairly slow operations. It also aggressively caches textures and targets and tries to re-use them, avoiding the continual creation and destruction of resources, which is quite slow.
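The caching idea can be illustrated with a minimal sketch (the class and method names are my own invention, not Renoir’s): freed render targets are kept in a pool keyed by size, so a later request of the same size reuses an existing target instead of allocating a new one.

```cpp
#include <map>
#include <utility>
#include <vector>

// Minimal sketch of a render-target cache: instead of creating and
// destroying an intermediate target for every effect, released
// targets are kept keyed by size and handed out again on request.
class RenderTargetCache {
public:
    struct Target { int width, height, id; };

    Target Acquire(int width, int height) {
        auto key = std::make_pair(width, height);
        auto it = free_.find(key);
        if (it != free_.end() && !it->second.empty()) {
            Target t = it->second.back(); // reuse: no GPU allocation
            it->second.pop_back();
            return t;
        }
        return Target{width, height, next_id_++}; // allocate a new one
    }

    void Release(Target t) {
        free_[std::make_pair(t.width, t.height)].push_back(t);
    }

    int AllocationCount() const { return next_id_; }

private:
    std::map<std::pair<int, int>, std::vector<Target>> free_;
    int next_id_ = 0;
};
```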
The rendering API commands are sent to the GPU immediately on the rendering thread; there is no “intermediate” command list, as opposed to Chrome, where a dedicated GPU process is in charge of interacting with the GPU. The “path to pixel” is significantly shorter in LensVR than in other web browsers, with far fewer abstraction layers in between, which is one of the keys to its speed.
The GPU works as a consumer of commands and memory buffers generated by the CPU; it can complete its work several frames after that work has been submitted. So what happens when the CPU tries to modify some memory (say, a vertex or index buffer) that the GPU hasn’t processed yet?
Graphics drivers keep track of these situations, called “hazards”, and either stall the CPU until the resource has been consumed or perform a process called “renaming” – basically cloning the resource under the hood and letting the CPU modify the fresh copy. Most of the time the driver will do the renaming, but if excessive memory is used, it can also stall.
Neither possibility is great. Resource renaming increases the CPU time required to complete the API calls because of the bookkeeping involved, while a stall will almost certainly introduce serious jank in the page. Newer graphics APIs such as Metal, Vulkan and DirectX 12 let the developer track and avoid hazards themselves. Renoir was likewise designed to manually track the usage of its resources to prevent renaming and stalls, so it fits the architecture of the new modern APIs perfectly. Renoir has native rendering backend implementations for all major graphics APIs and uses the best one for the platform it is running on. On Windows, for instance, it uses DirectX 11 directly. In comparison, Chrome has to go through an additional abstraction library called ANGLE, which generates DirectX API calls from OpenGL ones – Chrome (which uses Skia) only understands OpenGL at the time of this post.
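Manual hazard tracking can be sketched with a simple fence scheme (all names here are hypothetical, a simplified model of the idea rather than Renoir’s implementation): each buffer remembers the frame in which the GPU last used it, and the CPU only overwrites it once the GPU has signalled completion of that frame.

```cpp
#include <cstdint>

// A buffer tagged with the frame in which the GPU last consumed it.
struct TrackedBuffer {
    uint64_t last_used_frame = 0;
};

class FrameTracker {
public:
    void MarkUsed(TrackedBuffer& b) { b.last_used_frame = cpu_frame_; }
    void SubmitFrame() { ++cpu_frame_; }
    void SignalGpuComplete(uint64_t frame) { gpu_completed_ = frame; }

    // Safe to overwrite only if the GPU has finished every frame that
    // touched the buffer; otherwise the caller picks another buffer,
    // avoiding both a driver stall and a hidden "rename".
    bool CanWrite(const TrackedBuffer& b) const {
        return b.last_used_frame <= gpu_completed_;
    }

private:
    uint64_t cpu_frame_ = 1;     // frame currently being recorded
    uint64_t gpu_completed_ = 0; // highest frame the GPU has finished
};
```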
Renoir tries very hard to reduce the number of rendering API calls through a process called “batching” – combining multiple drawn elements into one draw call.
Most elements in Renoir can be drawn with one shader, which makes them much easier to batch together. A classic way of doing batching in games is combining opaque elements together and relying on the depth buffer to draw them correctly in the final image.
Unfortunately, this is much less effective on modern web pages. Many elements have transparency or blending effects, and these have to be applied in the z-order of the page; otherwise, the final picture will be wrong.
Renoir keeps track of how elements are positioned on the page; if they don’t intersect, it batches them together, and in that case the z-order no longer breaks batching. The final result is a substantial reduction in draw calls. Renoir also pre-compiles all the required shaders in the library, which significantly improves “first-use” performance. Other 2D libraries like Skia rely on run-time shader compilation, which can be very slow on mobile (seconds on first-time use) and can introduce stalls.
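A greedy version of this batching rule can be sketched as follows (a simplification under my own naming, not Renoir’s actual algorithm): walk the elements in z-order and merge an element into the current batch only if it does not overlap anything already in that batch, so the draw order inside a batch cannot affect blending.

```cpp
#include <vector>

struct Rect { float x, y, w, h; };

// Axis-aligned overlap test.
bool Intersects(const Rect& a, const Rect& b) {
    return a.x < b.x + b.w && b.x < a.x + a.w &&
           a.y < b.y + b.h && b.y < a.y + a.h;
}

// Returns the number of draw calls needed for a z-ordered element
// list: non-overlapping elements share a batch, overlapping ones
// force a flush to preserve the page's blending order.
int CountBatches(const std::vector<Rect>& elements) {
    std::vector<Rect> current;
    int batches = 0;
    for (const Rect& r : elements) {
        bool overlaps = false;
        for (const Rect& b : current)
            if (Intersects(r, b)) { overlaps = true; break; }
        if (overlaps) {  // order matters here: flush the batch
            ++batches;
            current.clear();
        }
        current.push_back(r);
    }
    if (!current.empty()) ++batches;
    return batches;
}
```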
Results & Comparisons
For the performance comparison, I took a page that is used as an example in the WebRender post. I made a small modification, substituting the gradient effect with an “opacity” effect, which is more common and a good stress test for any web rendering library. I also changed the layout to flex-box, because it’s very common in modern web design. Here is how it looks:
Link to page here.
All tests were performed on Windows 10 on an Intel Core i7-6700HQ @ 2.6GHz with 16GB RAM and an NVIDIA GeForce GTX 960M, at 1080p.
I measured only the rendering part in the browsers, using Chrome 61.0.3163.100 (stable) with GPU raster ON, Servo nightly from 14 Oct 2017, and LensVR alpha 0.6.
Chrome version 61.0.3163.100 results
The page definitely takes a toll on Chrome’s rendering: it struggles to maintain 60 FPS, though it is significantly faster than in the video. The reasons are probably additional improvements in their code and the fact that the laptop I’m testing on is significantly more powerful than the machine used in the original Mozilla video.
Let’s look at the performance chart:
I’m not sure why, but the rasterization always happens on one thread. Both raster and GPU tasks are quite heavy and a bottleneck on the page – they dominate the time needed to finish one frame.
On average for “painting” tasks I get ~5.3ms on the main thread with large spikes of 10+ms, ~20ms on raster tasks and ~20ms on the GPU process. Raster and GPU tasks seem to “flow” between frames and dominate the frame time.
Servo nightly (14 Oct 2017) results
Servo fares significantly better rendering-wise; unfortunately, there are some visual artifacts. I suspect Servo’s Windows port is still a bit shaky.
You can notice that Servo doesn’t achieve 60 FPS either, but that seems to be due to the flex-box layout; we ignore that here and look only at the rendering. The rendering part is measured by WebRender as “Backend CPU time”, at ~6.36ms.
LensVR rendering engine (alpha 0.6)
Here is one performance frame zoomed in, inside Chrome’s profiling UI, which LensVR uses for its profiling as well.
The rendering-related tasks are “Paint” at the top, which interprets the Renoir commands, performs batching and executes the graphics API calls, and “RecordRendering” on the far right, which walks the DOM elements and generates the Renoir commands.
The sum of both on average is ~2.6ms.
The following graphic shows the “linearized” time for all rendering-related work in each browser. While parallelism will shorten the time to frame, the overall linear time is a good indicator of battery-life impact.
Both WebRender and Renoir, with their novel approaches to rendering, have a clear advantage. LensVR is faster than WebRender, probably because of better command generation and API interaction code. I plan to do a deeper analysis in a follow-up post. Let me know what you think about the post, and if you want to try LensVR yourself, sign up for the technical preview.
Author: Stoyan Nikolov, lead architect of the LensVR browser. Follow him on Twitter.