This example dynamically generates a combined ambient occlusion and direct illumination lightmap on the GPU. Ambient occlusion refers to the shadows caused by a uniformly lit hemisphere enclosing the model, which approximates the light arriving from the sky. Try turning off the accumulation across frames to see what the instantaneous shadow test looks like.
Lightmap generation is traditionally done by tracing thousands of rays per pixel, bouncing them around the scene, and accumulating the light contributed along each ray into a lightmap. This is very hard to do on the GPU for anything other than small special-cased scenes because rays that bounce everywhere have very incoherent data access patterns that interact poorly with the GPU cache.
However, if the lightmap only needs to represent a single bounce of light, rasterization can be used to massively speed everything up. Instead of picking a unique random direction for every ray, you can randomly pick one direction and test many parallel rays along that direction. These two approaches average out to the same result in the long run, but the parallel-rays approach can be GPU-accelerated using orthographic rasterization. That's what this demo does.
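As a rough sketch of the per-frame direction pick (the name sampleSkyDirection and the use of Math.random are illustrative, not taken from the demo's source), uniformly sampling the sky hemisphere might look like this:

```typescript
type Vec3 = [number, number, number];

// Uniformly sample a direction on the upper (sky) hemisphere, z >= 0.
// Illustrative sketch only; the demo's actual sampling code may differ.
function sampleSkyDirection(): Vec3 {
  const z = Math.random();                      // cos(theta), uniform in [0, 1)
  const r = Math.sqrt(Math.max(0, 1 - z * z));  // sin(theta)
  const phi = 2 * Math.PI * Math.random();      // azimuth in [0, 2*pi)
  return [r * Math.cos(phi), r * Math.sin(phi), z];
}

// One direction is drawn per frame; every lightmap texel is tested against
// this same direction, which is what makes the rays parallel and lets a
// single orthographic shadow-map render stand in for all of them.
const direction: Vec3 = sampleSkyDirection();
```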
Both sides of every quad in the scene are assigned square patches in the lightmap texture. The scene is rendered once every frame from a random direction into a shadow map using an orthographic camera. That shadow map is then accumulated onto the lightmap. The direction alternates between a direction near the primary light and a direction uniformly sampled from the sky hemisphere. This alternation simulates both a soft direct light and a global ambient light.
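A minimal sketch of that per-frame loop is shown below. The helpers renderShadowMap, accumulateLightmap, jitterAround, and the value of primaryLightDirection are placeholders assumed for illustration, not the demo's actual API:

```typescript
type Vec3 = [number, number, number];

// Assumed primary light direction; the demo's actual value may differ.
const primaryLightDirection: Vec3 = [0.5, 0.5, 0.7];

// Placeholders for the demo's real passes and helpers (not its actual API):
declare function sampleSkyDirection(): Vec3;     // see the earlier sketch
declare function jitterAround(dir: Vec3): Vec3;  // small random perturbation
declare function renderShadowMap(dir: Vec3): WebGLTexture;
declare function accumulateLightmap(
  shadowMap: WebGLTexture, dir: Vec3, weight: number): void;

let frameCount = 0;

function renderFrame(): void {
  frameCount++;

  // Alternate between a direction jittered around the primary light (soft
  // direct lighting) and a uniform sample of the sky hemisphere (ambient).
  const direction: Vec3 =
    frameCount % 2 === 0
      ? jitterAround(primaryLightDirection)
      : sampleSkyDirection();

  // Render the scene into a depth shadow map with an orthographic camera
  // looking along `direction`, then blend the shadow test for each lightmap
  // texel into the lightmap with weight 1/N so the texture converges to the
  // running average of every sample taken so far.
  const shadowMap = renderShadowMap(direction);
  accumulateLightmap(shadowMap, direction, 1 / frameCount);
}
```

The 1 / frameCount weight is the usual incremental-average trick: blending each new sample in with that factor gives the same result as summing all samples and dividing by their count.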
Even though it uses forward rendering, this scene can't be multisampled: attribute values are extrapolated at partially covered edge pixels, which causes leaking around the edges of quads. For more details, see the article at https://www.opengl.org/pipeline/article/vol003_6/. These artifacts could be avoided with centroid sampling, but it isn't supported by WebGL.