How We Found a WebVR Developer for the LensVR UX Team


by Bilyana Vacheva

Virtual Reality is a new platform that poses a lot of UX challenges – I’ve discussed some of them in previous articles. One of our main goals is to discover the most intuitive ways to interact in VR, and the only way to find out what makes a good user experience on this platform is through experimentation. So I needed to hire a WebVR developer to implement our concepts and see what actually works in VR and what doesn’t. If you are planning to build a WebVR app but have no WebVR experience on the team, you will most likely go through the same process, so you may find this article helpful.

A few months ago, when we identified the need for a WebVR developer to join the LensVR UX team, we had several options:

  • hire someone who has experience with WebVR
  • find a WebGL developer
  • get someone who has web experience and train him to become a WebVR pro.

The WebVR API was pretty new, so there were not that many people who were actively using it. I knew most of them through Twitter or Slack, but they were passionate about their own projects, so it would have been hard to attract them to join our team.

The second option was to find a good WebGL developer as the WebVR API is based on WebGL. It would have been easy for such a person to transition to the new role. However, experienced WebGL developers are extremely rare and expensive. It could have been months before I found the right fit for my team.

Thus, the best option was to hire for talent and grow the person internally. There are more web developers with several years of experience than WebGL developers, so we figured that it would be easier to find such a person.

The Perfect WebVR Developer Profile

Hiring for talent is tough. It’s much easier to evaluate someone based on previous experience, rather than on what he can become in the future. We came up with the following list of what our future WebVR developer should be:

  • a curious developer who is eager to learn new skills
  • someone who is excited by Virtual Reality
  • someone who has worked with several JS libraries such as jQuery, Angular, or React, since it would be easier for such a person to pick up a new one

Grow the talent internally

It took me less than a month to find a motivated, wholehearted web developer with a passion for VR. (This hire, like most of the hires we make, came through a referral from a current LensVR team member. The way to hire great people is to look after your current team members – they will help you find the right talent.) The next step was to come up with a challenging but doable training program that would give our new teammate practical experience with 3D and WebVR within a month. Here we faced the same challenge many developers face, namely how to get started with WebVR. If you have the same dilemma, I recommend reading a previous article I’ve posted, which will help you better understand WebVR and the JS frameworks you could use to build VR experiences.

The senior developers on my team lent me a hand here and we came up with the following program:

  • First task: Get to know WebGL and 3D by developing a 3D viewer from the ground up
  • Second task: Get to know A-frame by repeating the same task, but also adding the controls for Oculus Rift.
  • Third task: Get familiar with React VR by recreating the 3D viewer, but also adding the controls for HTC Vive.

Although we planned to use higher-level libraries such as three.js, A-frame, and React VR for the UX prototypes, we decided that a WebVR developer should start by learning the basics of WebGL. Understanding the basic principles of 3D would help him identify problems and solutions when something does not behave as expected (there is always something that does not behave as expected, trust me).

We chose the task of developing a model viewer that loads, rotates, zooms in and out of a 3D model because it gives an overview of all basic aspects of 3D including file formats, texture, light, math, camera perspective, etc. It is doable within 2 weeks and there are a large number of tutorials to help a person get started.

Understanding WebGL

WebGL is an API for creating 3D graphics in the browser. It is based on OpenGL ES 2.0, renders into the HTML5 <canvas> element, and all calls are made through JavaScript.
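To make that concrete, here is a minimal sketch of how a page gets hold of a WebGL context from a canvas. The helper name is my own; the `'experimental-webgl'` fallback is the older context identifier some browsers used at the time:

```javascript
// Tries the standard 'webgl' context first, then the older
// 'experimental-webgl' identifier used by some browsers.
function getGLContext(canvas) {
  return canvas.getContext('webgl') || canvas.getContext('experimental-webgl');
}

// In a page:
// const canvas = document.querySelector('canvas');
// const gl = getGLContext(canvas);
// if (!gl) { console.warn('WebGL is not supported in this browser'); }
```

Everything else – buffers, shaders, draw calls – then goes through the `gl` object this returns.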

Step 1

Nikolay, our new teammate, had no previous experience with WebGL, except for a small experiment he had done for a university project. He started with the simple task of creating a cube and learning how to rotate it left and right. Then he added a texture and a simple UI through which he could rotate the cube.
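Rotating a cube boils down to multiplying its vertex positions by a rotation matrix every frame. A sketch of the math for rotation around the Y axis (the function name and convention are illustrative, not Nikolay's actual code):

```javascript
// Rotates a [x, y, z] point around the Y axis by `angle` radians.
function rotateY(point, angle) {
  const [x, y, z] = point;
  const c = Math.cos(angle);
  const s = Math.sin(angle);
  return [x * c + z * s, y, -x * s + z * c];
}

// Calling this with a growing angle on each requestAnimationFrame
// tick is what makes the cube spin.
```

In practice the same matrix is built on the CPU and passed to the vertex shader as a uniform, rather than transforming each vertex in JavaScript.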

Here are some of the tutorials he found useful:

WebGL basics
OpenGL fundamentals series for lighting etc. – keep in mind that not everything applies to WebGL, but the tutorial is still very useful
WebGL Tips

Simple WebGL model

Step 2

The second step was to load a complex 3D model. A great place to get free 3D models is Google Blocks. The file format Nikolay decided to use was .obj, a standard format for low-poly models. An .obj file contains vertex data: vertex positions, UV texture coordinates, vertex normals, and faces that define polygons as lists of position, texture, and normal indices. However, the Google Blocks models turned out to be quite complex, so for the first attempt it was easier to use the simple monkey-head model from a tutorial on building a 3D viewer. Here is a link to the tutorial:
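The vertex data .obj stores is plain text, which makes the format easy to inspect and parse by hand. A minimal sketch of a parser for the record types mentioned above (a real loader also needs material and negative-index handling):

```javascript
// Parses the core .obj record types into arrays of numbers.
function parseObj(text) {
  const positions = [];  // 'v'  lines: vertex positions
  const texCoords = [];  // 'vt' lines: UV texture coordinates
  const normals = [];    // 'vn' lines: vertex normals
  const faces = [];      // 'f'  lines: polygons as v/vt/vn index triples
  for (const line of text.split('\n')) {
    const [type, ...parts] = line.trim().split(/\s+/);
    if (type === 'v') positions.push(parts.map(Number));
    else if (type === 'vt') texCoords.push(parts.map(Number));
    else if (type === 'vn') normals.push(parts.map(Number));
    else if (type === 'f') faces.push(parts.map((p) => p.split('/').map(Number)));
  }
  return { positions, texCoords, normals, faces };
}
```

Note that .obj face indices are 1-based, so they have to be shifted before building WebGL index buffers.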

Step 3

The next step was to add diffuse light to the scene. At this point, it became obvious that something was wrong: the monkey’s head had strange shading that did not match the direction from which the light should have been illuminating the object.

WebGL 3D viewer

Half of the time dedicated to the task had already passed, so Nikolay had to step back and evaluate the work he had done so far. Although .obj is a common 3D format, it is not the preferred file format for VR – the preferred one is .glTF. glTF is optimized for the web and smaller in size; it carries the object hierarchy, skeletal structure and animation, information about light sources and cameras, and it supports complex materials and shaders. That is why it is regarded as the best file format for WebVR. We discussed it and decided to switch from .obj to .glTF, as the change would resolve the lighting problem by itself.
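The piece that broke the shading was the normals. When a model does not carry reliable vertex normals, they can be recomputed from the triangle geometry. A sketch, assuming counter-clockwise vertex winding (the function name is illustrative):

```javascript
// Computes the unit normal of a triangle from its three [x, y, z] vertices
// using the cross product of two edge vectors (counter-clockwise winding).
function faceNormal(a, b, c) {
  const u = [b[0] - a[0], b[1] - a[1], b[2] - a[2]];
  const v = [c[0] - a[0], c[1] - a[1], c[2] - a[2]];
  const n = [
    u[1] * v[2] - u[2] * v[1],
    u[2] * v[0] - u[0] * v[2],
    u[0] * v[1] - u[1] * v[0],
  ];
  const len = Math.hypot(n[0], n[1], n[2]);
  return n.map((x) => x / len);
}
```

Averaging these per-face normals over the faces sharing a vertex gives smooth per-vertex normals for diffuse lighting.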

And indeed, the .glTF models looked better than before.

Here is a sample code of the shader Nikolay used (including lighting and texture), followed by the two tutorials on lighting 3D models he used.

// Fragment shader

#version 100

precision mediump float;

varying vec2 fragTexCoord;
varying vec3 surfaceNormal;
varying vec3 toLightVector;
varying vec3 toCameraVector;
varying vec3 fragVertPosition;

uniform sampler2D sampler;
uniform vec3 lightColor;
uniform float reflectivity;
uniform float shineDamper;
uniform vec3 lightPosition;

void main() {
    // Diffuse term: angle between the surface normal and the light direction,
    // clamped to 0.2 so faces turned away from the light keep some ambient.
    vec3 unitNormal = normalize(surfaceNormal);
    vec3 unitLightVector = normalize(toLightVector);
    float normalDot = dot(unitNormal, unitLightVector);
    float brightness = max(normalDot, 0.2);
    vec3 diffuse = brightness * lightColor;

    // Specular term: reflect the light around the normal and compare it with
    // the direction to the camera, damped by the surface shininess.
    vec3 unitToCamera = normalize(toCameraVector);
    vec3 lightDirection = -unitLightVector;
    vec3 reflectedLightDirection = reflect(lightDirection, unitNormal);
    float specularFactor = max(dot(reflectedLightDirection, unitToCamera), 0.0);
    float dampedFactor = pow(specularFactor, shineDamper);
    vec3 finalSpecular = dampedFactor * reflectivity * lightColor;

    gl_FragColor = vec4(diffuse, 1.0) * texture2D(sampler, fragTexCoord) + vec4(finalSpecular, 1.0);
}
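The fragment shader consumes several varyings that have to be produced by a matching vertex shader, which isn't shown here. The following is only a sketch of what such a counterpart could look like – the attribute and matrix uniform names are illustrative assumptions:

```glsl
// Vertex shader (sketch – attribute and uniform names are assumptions)
#version 100

attribute vec3 vertPosition;
attribute vec2 vertTexCoord;
attribute vec3 vertNormal;

uniform mat4 mWorld;   // model-to-world transform
uniform mat4 mView;    // world-to-camera transform
uniform mat4 mProj;    // perspective projection
uniform vec3 lightPosition;
uniform vec3 cameraPosition;

varying vec2 fragTexCoord;
varying vec3 surfaceNormal;
varying vec3 toLightVector;
varying vec3 toCameraVector;
varying vec3 fragVertPosition;

void main() {
    vec4 worldPosition = mWorld * vec4(vertPosition, 1.0);

    fragTexCoord = vertTexCoord;
    fragVertPosition = worldPosition.xyz;
    surfaceNormal = (mWorld * vec4(vertNormal, 0.0)).xyz;
    toLightVector = lightPosition - worldPosition.xyz;
    toCameraVector = cameraPosition - worldPosition.xyz;

    gl_Position = mProj * mView * worldPosition;
}
```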

Lighting tutorial 1

Lighting tutorial 2


WebVR developer training

Phong lighting model (diffuse + specular lighting)

Step 4

The last step of the project was adding support for mouse rotation in addition to the UI buttons. It took less than a day. You can see the code here:

let isMouseDown = false;
let lastMouseCoords = {
    x: 0,
    y: 0
};

function mouseDown(event) {
    lastMouseCoords.x = event.clientX;
    lastMouseCoords.y = event.clientY;
    isMouseDown = true;
}

function mouseUp() {
    isMouseDown = false;
}

function mouseMove(event) {
    if (isMouseDown) {
        let deltaX = lastMouseCoords.x - event.clientX;
        let deltaY = lastMouseCoords.y - event.clientY;

        model.rotateX(-deltaY / 2);
        model.rotateY(-deltaX / 2);

        lastMouseCoords.x = event.clientX;
        lastMouseCoords.y = event.clientY;
    }
}

// Attachment of listeners
window.addEventListener('mousedown', mouseDown);
window.addEventListener('mousemove', mouseMove);
window.addEventListener('mouseup', mouseUp);
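Zooming with the mouse follows the same pattern via the wheel event. A sketch of the distance update – the clamp limits and scale factor here are illustrative, as the article does not show the original values:

```javascript
// Maps wheel movement to a new camera distance, clamped so the model can
// neither be entered nor lost in the distance (limits are illustrative).
const MIN_DISTANCE = 1;
const MAX_DISTANCE = 20;

function zoomDistance(current, wheelDeltaY) {
  const next = current + wheelDeltaY * 0.01;
  return Math.min(MAX_DISTANCE, Math.max(MIN_DISTANCE, next));
}

// window.addEventListener('wheel', (event) => {
//   camera.distance = zoomDistance(camera.distance, event.deltaY);
// });
```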

So after two weeks, Nikolay had a 3D viewer that could rotate the monkey left, right, up, and down, and zoom in and out with UI buttons or the mouse. The project was a great way for our new teammate to learn the basic concepts of 3D without going too deep into WebGL. Equipped with these skills, he was ready to move on to high-level frameworks such as A-frame and React VR.

WebVR Viewer example

Let me know if you found the article useful and I will share more of our experience.

Author: Billy Vacheva, Twitter

