Setting min_t = EPS_F as described in the implementation details resolved the issue. For this bug, ChatGPT was useful in identifying the higher-level problem (floating-point inconsistencies), but it was otherwise unhelpful: it led us down several incorrect paths before we noticed the proper solution in the implementation details section.
In this assignment we took a quantum leap in our ability to render objects that look realistic. Up until this point, we have learned how to represent and render geometrically complex objects, but our understanding of shading has been fairly shallow. Prior to path tracing, we had a very simple method of approximating lighting (Blinn-Phong shading). This method was okay at capturing direct relationships between objects, viewers, and light sources, but could not adequately capture indirect lighting, in which light is scattered and reflected off of objects to create a field of illumination. As we see in this assignment, indirect lighting is absolutely critical to a photorealistic final product.
Below, we walk through the steps that we followed to simulate the transport of light.
For this part of the assignment, we implemented a bounding volume hierarchy (BVH) data structure to speed up ray-object intersection. The BVH is a tree in which every node stores a bounding box enclosing all of its elements and either links to left and right BVH subtrees or, if the element list is small enough, holds a list of primitive objects directly. The bounding box is very useful because a ray does not need to run intersection tests against a primitive if it does not intersect that primitive's bounding box. To construct the BVH, you recursively subdivide the list of primitives into left and right nodes (building an enclosing bounding box at each step) until the list is no larger than the max leaf size; at that point you simply store pointers to that part of the list in a leaf node. The most complex part of BVH construction is deciding how to partition the list into left and right sections. Our algorithm measures the extent of a node's bounding box in each dimension (i.e. max_x - min_x, max_y - min_y, max_z - min_z) and splits along the dimension with the largest extent. We then sort the list in ascending order by each primitive's bounding-box centroid along that dimension and choose the midpoint of the list as the split point. This keeps the left and right halves well balanced and guarantees that neither is ever empty (assuming the total list size is >= 2, which is always the case).
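The partitioning step described above can be sketched as follows. This is a minimal illustration, not our actual CGL code: `Vec3`, `longest_axis`, and `median_split` are hypothetical stand-ins that operate on primitive centroids rather than on real `Primitive` pointers.

```cpp
#include <algorithm>
#include <array>
#include <cassert>
#include <vector>

// Hypothetical stand-in for a primitive's bounding-box centroid.
using Vec3 = std::array<double, 3>;

// Choose the axis along which the centroids span the largest extent
// (max - min), as described above.
int longest_axis(const std::vector<Vec3>& centroids) {
    Vec3 lo = centroids.front(), hi = centroids.front();
    for (const Vec3& c : centroids) {
        for (int a = 0; a < 3; ++a) {
            lo[a] = std::min(lo[a], c[a]);
            hi[a] = std::max(hi[a], c[a]);
        }
    }
    int best = 0;
    for (int a = 1; a < 3; ++a)
        if (hi[a] - lo[a] > hi[best] - lo[best]) best = a;
    return best;
}

// Sort by centroid along the longest axis and split at the midpoint.
// Both halves are guaranteed non-empty whenever the list has >= 2 elements.
size_t median_split(std::vector<Vec3>& centroids) {
    int axis = longest_axis(centroids);
    std::sort(centroids.begin(), centroids.end(),
              [axis](const Vec3& a, const Vec3& b) { return a[axis] < b[axis]; });
    return centroids.size() / 2;  // [0, mid) goes left, [mid, n) goes right
}
```

In practice `std::nth_element` would avoid the full O(n log n) sort at each level, but the full sort stays closest to the approach described above.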
We struggled with many bugs in this implementation and in our BBox intersection code, which led to many hours of debugging. We used ChatGPT for assistance in this debugging journey, with mixed results. By far our worst bug was incorrectly partitioning the sublists used for the left and right children of each BVH node. You can read more in the AI acknowledgement section.
| Scene | BVH Render Time (s) | Naive Render Time (s) |
|---|---|---|
| Cow | 0.0577 | 15.7713 |
| Building | 0.0326 | 116.7138 |
| Banana | 0.0431 | 6.6515 |
| Teapot | 0.0521 | 6.3522 |
| Coil | 0.0660 | 24.3718 |
As you can see above, implementing the BVH radically sped up render times for our ray tracing algorithm. The amazing thing is that our BVH implementation stayed quite performant even as the number of primitives increased: render time grew slightly, but roughly logarithmically (O(log n)) rather than linearly. The naive implementation, by contrast, scaled linearly with the number of primitives. This would have become a far bigger issue deeper into the project, as the global illumination code requires many, many samples per pixel. So while this acceleration was very helpful for rendering the assets in task 2, it was absolutely critical for rendering some heavier assets (like those in task 4).
| Uniform Hemisphere Sampling | Light Sampling |
|---|---|
In this scene with an area light, as we increase the number of light rays from 1 to 64 (using importance sampling with 1 sample per pixel), the soft shadows become progressively smoother and less noisy. With just 1 ray, the shadows are extremely grainy and undefined. At 4 rays, the shadow edges start to form but remain noisy. By 16 rays, the shadows look much more natural, and at 64 rays, they appear soft and clean with minimal noise—closely matching the expected look of realistic area lighting.
When comparing the rendered images of hemisphere sampling and importance sampling, the difference in visual quality is quite striking. Hemisphere sampling, which distributes rays uniformly across the upper hemisphere, tends to produce images with noticeable graininess, especially in shadowed areas, corners, and surfaces facing away from light sources. Using indirect illumination with hemisphere sampling also yields blotchy results that take a very large number of samples to converge to something clean. Additionally, point lights contribute no light at all under hemisphere sampling, since a uniformly sampled ray has zero probability of hitting an infinitesimal light source; this can be seen in the banana image rendered with hemisphere lighting.

Importance sampling, on the other hand, aims its rays toward the directions of actual light sources, so it captures the contribution of direct lighting much more efficiently. The resulting image is significantly smoother, with sharper shadows, brighter highlights, and reduced noise across the entire frame, even with fewer samples. Surfaces that are partially lit or receive bounced light benefit especially from the denser, more relevant sampling directions.

In summary, while both methods are unbiased and will eventually converge (in the case of area lights), importance sampling dramatically improves convergence speed and visual clarity, making it much more practical for photorealistic rendering.
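The convergence gap can be illustrated with a toy version of the two estimators. Everything here is hypothetical and simplified: a cap-shaped "light" of unit radiance subtending the directions where cos(theta) > kCosThetaMax, with both estimators targeting the same integral. Only the pdfs differ: 1/(2*pi) for uniform hemisphere sampling versus 1/Omega for light sampling.

```cpp
#include <cassert>
#include <cmath>
#include <random>

const double kPi = 3.14159265358979323846;
const double kCosThetaMax = 0.9;                        // light cap boundary (toy value)
const double kOmega = 2.0 * kPi * (1.0 - kCosThetaMax); // exact solid angle of the cap

// Uniform hemisphere sampling: pdf = 1 / (2*pi). For a uniform direction on the
// hemisphere, the z-coordinate is itself uniform on [0, 1), so one draw suffices
// here. Most samples miss the light entirely and contribute zero, causing noise.
double estimate_uniform(int n, std::mt19937& rng) {
    std::uniform_real_distribution<double> u(0.0, 1.0);
    double sum = 0.0;
    for (int i = 0; i < n; ++i) {
        double cos_theta = u(rng);
        double radiance = (cos_theta > kCosThetaMax) ? 1.0 : 0.0;
        sum += radiance * 2.0 * kPi;  // radiance / pdf
    }
    return sum / n;
}

// Light (importance) sampling: draw directions only inside the cap, pdf = 1 / kOmega.
// Radiance is 1 everywhere on the light, so every sample contributes exactly kOmega
// and the estimator has zero variance in this toy setup.
double estimate_light(int n) {
    double sum = 0.0;
    for (int i = 0; i < n; ++i) sum += 1.0 * kOmega;  // radiance / pdf
    return sum / n;
}
```

Both estimators are unbiased for the same quantity, but the uniform one needs orders of magnitude more samples to become smooth, which mirrors the grainy renders above.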
In the CBbunny scene shown above, we compare direct illumination only (left) with indirect illumination only (right), both rendered with 1024 samples per pixel.

Direct illumination (left): The lighting comes solely from the area light source on the ceiling. This results in sharp, well-defined shadows under the bunny and strong highlights on surfaces directly facing the light. However, regions not directly visible to the light (e.g., parts of the bunny facing away from the ceiling) remain in deep shadow, and there is no visible color bleeding from the red and blue walls.

Indirect illumination (right): Here, the light source itself is blacked out, and all visible lighting comes from secondary bounces, i.e., light that has reflected off other surfaces before reaching the camera. The shadows are extremely soft and the scene is generally dimmer, but we observe realistic color bleeding: the bunny picks up red and blue hues from the adjacent walls. The illumination is more diffuse, with ambient fill in areas that would be completely dark under direct light alone.
This comparison highlights how indirect illumination is essential for realistic global lighting—it fills in the shadows, simulates interreflection, and contributes to the overall color harmony of the scene.
isAccumBounces=True
isAccumBounces=False
For the second and third bounces of light, we observe increasingly subtle indirect lighting effects that contribute to the overall realism of the scene.

In the second bounce (top left and bottom left images, max_ray_depth = 2), we begin to see clear color bleeding from the red and blue walls onto the white bunny and the surrounding surfaces. The corners and shadowed regions also become brighter than in the one-bounce results, because indirect light has reflected once off another surface before arriving.

In the third bounce (top right and bottom right images, max_ray_depth = 3), the scene becomes even more evenly lit. Previously darker areas receive more illumination, and the soft indirect shadows under the bunny become more diffuse. The overall effect is subtle but visible, especially in how the light balances out across the surfaces and enhances the interreflection between the walls and the object.

This progression illustrates how recursive bounces enhance global illumination, capturing complex light behavior like multi-bounce color bleeding and improved ambient fill.
Compared to traditional rasterization-based rendering, which typically uses approximations like ambient occlusion or baked lighting, path tracing with multiple bounces produces much more physically accurate results. It simulates light transport as it naturally occurs in the real world, accounting for indirect lighting, soft shadows, and subtle color blending across surfaces. This greatly improves the realism and visual richness of the final rendered image, especially in complex scenes with occlusion and interreflections.
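The diminishing contribution of successive bounces can be seen in a heavily simplified toy model. We assume each bounce attenuates radiance by a constant albedo; `bounce_radiance` and its parameters are hypothetical and only illustrate the isAccumBounces semantics (sum all bounces up to the depth versus show only the depth-th bounce).

```cpp
#include <cassert>
#include <cmath>

// Toy model: bounce k carries emitted * albedo^k of radiance.
// accumulate == true  -> sum bounces 0..depth (isAccumBounces=True)
// accumulate == false -> only the depth-th bounce (isAccumBounces=False)
double bounce_radiance(double emitted, double albedo, int depth, bool accumulate) {
    double total = 0.0, term = emitted;
    for (int k = 0; k <= depth; ++k) {
        if (accumulate || k == depth) total += term;
        term *= albedo;  // each extra bounce is attenuated again
    }
    return total;
}
```

With emitted = 1 and albedo = 0.5, the isolated third bounce (0.125) is half the isolated second bounce (0.25), matching the "subtle but visible" gains we observed per additional bounce.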
To improve rendering efficiency and reduce unnecessary computation in deep light paths, we apply the Russian Roulette strategy during path tracing. This method probabilistically ends low-contributing rays while keeping the overall radiance estimation unbiased.
Russian Roulette helps us avoid tracing every light path to the maximum allowed depth. Instead, we introduce randomness to determine whether to continue or terminate each ray bounce based on a fixed continuation probability. This reduces noise in deeper bounces and balances performance with image quality.
We only apply Russian Roulette after the first bounce (r.depth > 1) to ensure that at least some indirect light is always computed. Whether a ray terminates is decided by coin_flip(termination_prob).
Suppose a ray continues with probability \( q \), and contributes radiance \( L \) when it does. The expected contribution across many such rays becomes:
\( \mathbb{E}[\text{Radiance}] = q \cdot \frac{L}{q} + (1 - q) \cdot 0 = L \)
This approach guarantees that our estimator remains consistent with the physically correct light transport, while allowing us to randomly skip computationally expensive paths that are statistically insignificant.
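The expectation above can be sanity-checked numerically. The helper below is illustrative, not code from our renderer: a path with true contribution L continues with probability q, and a surviving path's contribution is divided by q, so the average converges to L.

```cpp
#include <cassert>
#include <cmath>
#include <random>

// Monte Carlo check that the Russian Roulette estimator is unbiased:
// E[estimate] = q * (L / q) + (1 - q) * 0 = L.
double rr_estimate(double L, double q, int n, std::mt19937& rng) {
    std::bernoulli_distribution continue_path(q);
    double sum = 0.0;
    for (int i = 0; i < n; ++i)
        sum += continue_path(rng) ? L / q : 0.0;  // terminated paths contribute 0
    return sum / n;
}
```

Note that while the mean is exactly L, lower continuation probabilities increase variance, which is the performance/noise trade-off mentioned above.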
Adaptive sampling lives in the raytrace_pixel() function, which traces and estimates lighting for one pixel at a time. For each pixel we maintain two running sums: s1, the sum of illuminance (brightness), and s2, the sum of squared illuminance. Samples are traced in batches (samplesPerBatch), using jittered subpixel positions to avoid aliasing.
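The stopping rule those running sums drive can be sketched as follows. This is a minimal version of the standard 95% confidence-interval test, assuming the usual definitions; `pixel_converged` is an illustrative name rather than our actual function.

```cpp
#include <cassert>
#include <cmath>

// After n samples with s1 = sum of illuminance and s2 = sum of squared
// illuminance, the pixel has converged when the 95% confidence interval
// half-width I falls below maxTolerance * mean:
//   mean = s1 / n
//   var  = (s2 - s1^2 / n) / (n - 1)
//   I    = 1.96 * sqrt(var / n)
bool pixel_converged(double s1, double s2, int n, double maxTolerance) {
    double mean = s1 / n;
    double var = (s2 - s1 * s1 / n) / (n - 1);  // unbiased sample variance
    double I = 1.96 * std::sqrt(var / n);       // 95% CI half-width
    return I <= maxTolerance * mean;
}
```

Checking only once per batch (rather than per sample) keeps the bookkeeping cheap while still letting low-variance pixels terminate early.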
We have worked together for all three projects in this class so far and are in the same group for the final project. We have enjoyed working together, as both of us are very communicative and hardworking. On this project we made the mistake of moving on to the next task instead of making sure that the previous task was completed correctly. This left several subtle bugs that kept our program in a state where it was mostly, but not fully, working. It caused a huge debugging journey, because the bugs were harder to isolate with more points of failure in play. We banged our heads against the wall and kept a never-ending WhatsApp chat going for the last five days. Thankfully, it all came together yesterday. Below are a few of our debugging experiences; also refer to the AI acknowledgement to read about our debugging experience for task 2, which was quite a doozy.
Our shadow rays initially used default min_t/max_t bounds, so they could re-intersect the surface they originated from. Offsetting both ends by EPS_F fixed this:

```cpp
Ray shadow_ray(hit_p, wi_world);
shadow_ray.min_t = EPS_F;                // avoid self-intersection at the hit point
shadow_ray.max_t = distToLight - EPS_F;  // stop just short of the light itself
```

With these bounds, the lighting result was corrected and the rendered image matched the expected output. ChatGPT was useful in confirming that our issue was most likely a floating-point issue.
Once we set cpdf = 1.0 to always continue the path, we were able to render correct ceiling bounces and significantly improve color accuracy. While ChatGPT was not directly involved in this fix, it helped us explore other RR-related debugging ideas before the TA's insight ultimately resolved the issue.