I'm working on some code based on the work by Ng et al.; here is what I have done. An example of what this looks like is here.
Background: a "lighting cubemap" is a bunch of color values that sit on the faces of a cube, one per texel, with each texel corresponding to a direction around the scene.
This lighting cubemap can be treated as a light source surrounding the scene, where each individual pixel represents the quantity of light arriving from its direction.
Now, say we find the visibility function at each vertex. A visibility function is just a description of which directions a vertex receives light from, stored as a cubemap over the same set of directions: where a direction is blocked by geometry, set the pixel to 0 (total blockage of ambient light), and where it is unblocked, set the pixel to 1 (totally clear ambient light).
So clearly, if you just multiply the two pixel by pixel and sum, i.e. take the dot product (lighting cubemap • visibility function), you get the total light that the lighting cubemap sends to that vertex.
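To make the bookkeeping concrete, here is a minimal sketch of that multiply-and-sum step. Everything in it is invented for illustration (the toy resolution, the fake visibility mask, a single grayscale channel); a real implementation stores 6 × res × res texels per map and one value per color channel.

```python
import numpy as np

# Toy sizes: a real lighting cubemap has 6 faces of res x res texels,
# with an RGB value per texel; one channel is enough to show the idea.
FACE_RES = 16
N = 6 * FACE_RES * FACE_RES          # total number of cubemap texels

rng = np.random.default_rng(0)
lighting = rng.random(N)             # one light value per texel (flattened)

# Visibility cubemap at one vertex: 1 = direction unblocked, 0 = blocked.
# Here we just pretend the first third of the directions are occluded.
visibility = np.ones(N)
visibility[: N // 3] = 0.0

# Total light reaching the vertex: multiply texel by texel, then sum.
total_light = np.dot(lighting, visibility)
print(total_light)
```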
Now here is the strange part:
The massive number of visibility maps needed for a large scene makes memory consumption enormous and the dot product extremely computationally expensive. So, take the Haar transform of the visibility maps and the Haar transform of the lighting cubemap, then drop all but 1% of the coefficients (keeping only the largest ones) in both of these.
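In case it helps to see what "Haar transform plus dropping coefficients" means operationally, here is a 1-D sketch (a cubemap would use the 2-D Haar basis on each face, but the idea is the same). The function names are mine, and I'm assuming the truncation keeps the largest-magnitude coefficients, as in Ng et al.'s non-linear approximation.

```python
import numpy as np

def haar_transform(x):
    """Orthonormal 1-D Haar transform; len(x) must be a power of two."""
    x = np.asarray(x, dtype=float).copy()
    n = len(x)
    while n > 1:
        half = n // 2
        avg  = (x[0:n:2] + x[1:n:2]) / np.sqrt(2.0)   # coarse (average) part
        diff = (x[0:n:2] - x[1:n:2]) / np.sqrt(2.0)   # detail (wavelet) part
        x[:half], x[half:n] = avg, diff               # keep recursing on the coarse part
        n = half
    return x

def keep_top_fraction(coeffs, frac=0.01):
    """Zero out everything except the largest-magnitude `frac` of the coefficients."""
    k = max(1, int(len(coeffs) * frac))
    kept = np.zeros_like(coeffs)
    idx = np.argsort(np.abs(coeffs))[-k:]             # indices of the k biggest terms
    kept[idx] = coeffs[idx]
    return kept

# Tiny usage example: an 8-sample signal, keeping its 2 largest coefficients.
sig = np.array([4.0, 4.0, 4.0, 4.0, 0.0, 0.0, 2.0, 2.0])
print(keep_top_fraction(haar_transform(sig), frac=0.25))
```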
Dotting the filtered Haar transform of the visibility function with the filtered Haar transform of the light function yields approximately the same result!! You don't have to apply the inverse ("unwavelet") transform at all; you just multiply and add, and whammo, you get (approximately) the correct color at each vertex in the scene.
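To show concretely what I mean, here is a toy 1-D check, reusing haar_transform and keep_top_fraction from the snippet above. The two signals are made up, chosen to be smooth / piecewise constant so they compress well under Haar; with them, the coefficient dot product comes out essentially the same as the plain dot product, even after heavy truncation.

```python
# Toy 1-D check of the observation above (haar_transform and
# keep_top_fraction come from the previous snippet).
N = 4096
t = np.linspace(0.0, 1.0, N)

lighting   = 0.2 + np.exp(-((t - 0.3) ** 2) / 0.002)   # smooth, one bright region
visibility = np.ones(N)
visibility[(t > 0.45) & (t < 0.8)] = 0.0                # one big occluded region

exact = np.dot(lighting, visibility)                    # straight multiply-and-sum

L = haar_transform(lighting)
V = haar_transform(visibility)
print(np.dot(L, V), "vs", exact)                        # same, up to round-off

L_small = keep_top_fraction(L, frac=0.01)               # keep only ~1% of each
V_small = keep_top_fraction(V, frac=0.01)
print(np.dot(L_small, V_small), "vs", exact)            # still very close
```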
Why is that?