CS184/284A Spring 2025 Homework 1 Write-Up

Names: Yuhe Qin & Henry Michaelson

Link to webpage: hw-webpages-yuhe-henry-webpage
Link to GitHub repository: hw-rasterizer-yuhe-henry
Atomic Rasterization

Overview

We worked together to build a simple rasterization pipeline to render many vertices and textures into computer-based works of art.

Task 1: Drawing Single-Color Triangles

To rasterize triangles we do the following steps:


We implemented the bounding box technique, so we only ever test the pixels that could possibly be covered. We identified the leftmost x (lowest value), rightmost x (highest value), top y (lowest value), and bottom y (highest value) of the three vertices and used these to define a rectangle to evaluate inside of. Pixels outside this rectangle cannot be filled, as they are guaranteed to fall outside the triangle. One caveat is that we floor the bounding box coordinates to transform them into pixel indices instead of keeping a raw float bounding box. We then used <= as the comparison in the box iteration instead of <, so we capture the highest valid x, y values.

Here are two images created by implementing task 1:

This is the requested image. You can see that we highlighted the artifacts appearing on the smushed red triangle; these are expected given the simple implementation called for in task 1.
This is an extra image showing how the point sampling works.

Task 2: Antialiasing by Supersampling

To implement supersampling we did the following steps:


Supersampling is useful because it gives us more information about a pixel's true value. When we sample only one point at the center of the pixel, we lose information about whether the other points spanned by the pixel are inside the triangle or not. This can lead to aliasing artifacts such as moiré patterns and jaggies. Supersampling lets us better approximate a pixel's true coverage; it also acts as a convolution that blurs out the sub-pixel features that often cause aliasing.

As described above, our main changes were to enlarge the sample buffer data structure so it holds the supersampled pixel values, to write those supersampled values out in the rasterize-triangle method, and to update the framebuffer-resolution method to downsample (average) when writing to the rgb_buffer.

After implementing supersampling, we were able to antialias our triangles by increasing the supersampling rate. As the rate increased, the averaged pixel value became a better approximation of the triangle's continuous coverage. In other words, as we increased the supersample rate, we better approximated a 1×1-pixel box-filter convolution that blurs out aliasing artifacts. You can see in each of the images below that as the supersample rate increases, the edges become smoother and contain fewer aliasing artifacts.

Here are four images created by implementing task 2:

Naive point sampling
Supersampling @4
Supersampling @9
Supersampling @16

Task 3: Transforms

After implementing the transformation code, we were able to show my_robot diving head first into a swimming pool:

Diving Robot

Extra Credit

We implemented viewport rotation using the Q and W keys.

Task 4: Barycentric Coordinates


After learning about barycentric coordinates, we visualized how a triangle can smoothly blend colors at each vertex using this coordinate system.

This image shows a triangle with red, green, and blue vertices. The interior colors are smoothly interpolated using barycentric coordinates.

Color Interpolation Using Barycentric Coordinates

Barycentric coordinates are a way to describe the position of a point inside a triangle by expressing it as a weighted combination of the triangle's three vertices.

Each point inside the triangle is assigned three weights (usually called α, β, and γ), which tell us how close the point is to each of the triangle's corners. These weights always add up to 1.

For example, a point at one of the vertices has weight 1 for that vertex and 0 for the other two, while the centroid has all three weights equal to 1/3.

This is very useful in computer graphics because we can use these weights to smoothly interpolate values, like color, texture coordinates, or lighting, across the triangle.

A PNG screenshot of svg/basic/test7.svg with default viewing parameters and sample rate 1

Task 5: "Pixel sampling" for texture mapping

Explanation

Pixel sampling is the process of determining the final color of a pixel on screen by looking up color data from a texture image. In texture mapping, screen-space triangles are mapped to texture-space coordinates, and pixel sampling helps decide which color from the texture should be used at each pixel.

How we implemented it to perform texture mapping

To implement pixel sampling for texture mapping, we updated the rasterize_textured_triangle function to compute barycentric coordinates for each pixel covered by the triangle. For every pixel center (x + 0.5, y + 0.5), we calculated the barycentric weights (α, β, γ) and used them to interpolate the UV coordinates from the triangle's vertices.

To prepare for proper texture sampling, we also computed the UV coordinates at neighboring positions (x + 1, y) and (x, y + 1), which allowed us to estimate the partial derivatives ∂u/∂x, ∂v/∂x, ∂u/∂y, and ∂v/∂y. These are passed into the SampleParams struct, along with the current UV and the selected pixel and level sampling modes (psm and lsm), which the GUI can toggle.

Then we passed the SampleParams to the tex.sample() function. Inside the texture class, we implemented both nearest-neighbor and bilinear sampling:

This setup allows flexible switching between sampling modes and supports antialiasing via mipmap level selection, which we handle in later parts of the assignment (e.g., implementing get_level and L_LINEAR interpolation).

Sampling Methods

Screenshots & Comparison


Nearest sampling at 1 sample/pixel
Nearest sampling at 16 samples/pixel
Bilinear sampling at 1 sample/pixel
Bilinear sampling at 16 samples/pixel

Comments on Differences

When Differences Are Significant

Bilinear sampling performs significantly better when the texture contains sharp, high-frequency detail, when the texture is magnified on screen, and when the supersampling rate is low.

This is because nearest sampling may pick discontinuous texel values, resulting in visible artifacts, while bilinear blends neighboring texels to produce a more consistent appearance.

Task 6: "Level Sampling" with mipmaps for texture mapping

Explanation

Level sampling is the process of choosing which mipmap level to use when looking up a texture during rendering. Mipmaps are precomputed, downscaled versions of a texture that help improve performance and reduce aliasing when the texture is viewed at smaller sizes on screen.

Instead of always sampling from the original high-resolution texture (level 0), we select a lower-resolution mipmap level depending on how much the texture is being minified. This helps prevent visual artifacts such as shimmering and moiré patterns.

How we implemented it

To implement level sampling, we modified the rasterize_textured_triangle function to compute screen-space derivatives of the texture coordinates. For each pixel inside the triangle, we calculated the interpolated (u, v) using barycentric coordinates.

To estimate texture coordinate changes, we computed the coordinates at neighboring points: (x+1, y) and (x, y+1). These were used to generate p_dx_uv and p_dy_uv. We then filled the SampleParams struct with the current coordinates and their derivatives, and passed it to tex.sample(sp).

Inside Texture::sample, we handled three level sampling modes: L_ZERO always samples from the full-resolution level 0; L_NEAREST rounds the computed level and samples the nearest mipmap; and L_LINEAR samples the two adjacent mipmap levels and linearly blends the results.

In get_level, we computed the norm of the derivatives in UV space, scaled them by the base level's width and height, and used log2 of the maximum value to estimate the mipmap level. We clamped the result to ensure it remains non-negative.

This implementation helps reduce aliasing when textures are minified and provides smoother transitions between mipmap levels.

Tradeoffs Between Speed, Memory Usage, and Antialiasing Power

In texture mapping and image filtering, various techniques are used to balance performance, memory, and image quality. The table below compares five such techniques:

| Technique | Speed | Memory Usage | Antialiasing Power |
|---|---|---|---|
| Pixel Sampling (Nearest / Bilinear) | Very fast | Low | Low to Moderate |
| Level Sampling (L_ZERO / L_NEAREST / L_LINEAR) | Fast (L_ZERO) to Moderate (L_LINEAR) | Moderate (due to mipmaps) | Moderate to High |
| Anisotropic Filtering | Slower than mipmaps and bilinear filtering | Higher (requires directional sampling) | Very High (especially for oblique surfaces) |
| Summed Area Tables (SATs) | Fast lookup after preprocessing | High (stores summed values for every texel) | Very High (precise rectangular averaging) |
| Trilinear Filtering | Moderate | Moderate (uses two mip levels) | High (smoother LOD transitions) |

Each technique serves a different purpose depending on the texture distortion, viewing angle, and performance requirements. For example, anisotropic filtering is ideal for extreme angles, while summed area tables are powerful for large blur kernels and rectangular averaging.

Screenshots & Comparison

These 8 images demonstrate the visual results of different combinations of pixel sampling and level sampling strategies:


L_ZERO + P_NEAREST
L_ZERO + P_LINEAR
L_ZERO + P_ANISOTROPIC
L_ZERO + P_SAT
L_NEAREST + P_NEAREST
L_NEAREST + P_LINEAR
L_NEAREST + P_ANISOTROPIC
L_NEAREST + P_SAT

Extra Credit

We implemented both anisotropic filtering and summed area tables (the P_ANISOTROPIC and P_SAT sampling modes shown above).