Spring 2023, Yunhao Cao and Yuqi Zhai
We implemented every task in the project. The most challenging part of the journey was handling the many edge cases that could crash the program. This project also showed us concretely how algorithmic complexity affects the response time of a CG program: even a 2x change in complexity can make a huge difference.
In part 1 of this project, we are given three points representing a triangle and a Color object containing the r, g, b values to fill the triangle with. The challenge here is deciding which pixels belong to the triangle. Our approach is to loop over the triangle's bounding box and test each pixel's center against the three edge line equations; a pixel is filled when its center lies on the same side of all three edges.
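As an illustrative sketch of this point-in-triangle test (the function names `edgeFunction` and `insideTriangle` are our own, not the project's):

```cpp
// Hypothetical sketch of the three-line point-in-triangle test.
struct Vec2 { float x, y; };

// Signed area of the parallelogram spanned by (b - a) and (p - a);
// positive when p is to the left of the directed edge a -> b.
static float edgeFunction(const Vec2 &a, const Vec2 &b, const Vec2 &p) {
    return (b.x - a.x) * (p.y - a.y) - (b.y - a.y) * (p.x - a.x);
}

// A point is inside when it lies on the same side of all three edges,
// which handles both clockwise and counterclockwise winding orders.
bool insideTriangle(const Vec2 &p, const Vec2 &v0, const Vec2 &v1, const Vec2 &v2) {
    float e0 = edgeFunction(v0, v1, p);
    float e1 = edgeFunction(v1, v2, p);
    float e2 = edgeFunction(v2, v0, p);
    return (e0 >= 0 && e1 >= 0 && e2 >= 0) || (e0 <= 0 && e1 <= 0 && e2 <= 0);
}
```

Treating edges with `>=`/`<=` keeps samples that land exactly on an edge, which avoids dropped pixels along shared triangle boundaries.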
In this part, we are given a triangle and a color, and we need to rasterize the triangle with an antialiasing technique. We need this because the screen's pixel grid usually does not line up perfectly with the triangle's edges; through supersampling we get a better approximation of the edge shape via smoother color transitions.
We enlarge the sample buffer from width * height to width * height * sample_rate, where sample_rate is the number of samples we take in each pixel, and write samples into it through the fill_pixel function. At the end of the frame we downsample in the resolve_to_frame_buffer function. Instead of the one-to-one copy of task 1, we first sum the colors of all samples belonging to each pixel (adding up the r, g, b values of the sample-buffer indices corresponding to that pixel) and average them by dividing by sample_rate. Then we assign the averaged rgb value to the pixel in the frame buffer.

[Figure: the same triangle rasterized at sample rates of 1, 4, 9, and 16 per pixel.]
Analysis: As sample rate increases, we see a smoother transition of color at the edge of the triangle.
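The averaging step can be sketched as follows; the buffer layout (flat float RGB arrays, pixel-major samples) and the function name are our own simplifications, not the project's actual types:

```cpp
#include <vector>

// Minimal sketch of resolve_to_frame_buffer-style averaging.
// sample_buffer holds sample_rate RGB samples per pixel, laid out pixel-major.
std::vector<float> resolveToFrameBuffer(const std::vector<float> &sample_buffer,
                                        int width, int height, int sample_rate) {
    std::vector<float> frame_buffer(width * height * 3, 0.0f);
    for (int px = 0; px < width * height; ++px) {
        // Sum each channel over all samples belonging to this pixel...
        for (int s = 0; s < sample_rate; ++s)
            for (int c = 0; c < 3; ++c)
                frame_buffer[px * 3 + c] += sample_buffer[(px * sample_rate + s) * 3 + c];
        // ...then divide by sample_rate to average.
        for (int c = 0; c < 3; ++c)
            frame_buffer[px * 3 + c] /= sample_rate;
    }
    return frame_buffer;
}
```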
For this task we implemented 3 functions that perform transformations in the 2D plane. Here is a picture of the rendered (and modified) little robot using those functions.
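The three 2D transforms can be sketched as 3x3 homogeneous matrices; the `Mat3` type and function signatures below are illustrative assumptions, not the project's actual API:

```cpp
#include <array>
#include <cmath>

// Row-major 3x3 homogeneous transform matrix (illustrative type).
using Mat3 = std::array<float, 9>;

Mat3 translate(float dx, float dy) {
    return {1, 0, dx,
            0, 1, dy,
            0, 0, 1};
}

Mat3 scale(float sx, float sy) {
    return {sx, 0, 0,
            0, sy, 0,
            0, 0, 1};
}

// Counterclockwise rotation about the origin, angle in degrees.
Mat3 rotate(float deg) {
    const float kPi = 3.14159265358979f;
    float r = deg * kPi / 180.0f;
    return {std::cos(r), -std::sin(r), 0,
            std::sin(r),  std::cos(r), 0,
            0, 0, 1};
}
```

Using homogeneous coordinates lets all three transforms compose by plain matrix multiplication.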
In this task, we are asked to rasterize triangles with interpolated color values. The challenge here is that we need to determine the color of each pixel inside the triangle.
Barycentric coordinates are a coordinate system for triangles of the form (alpha, beta, gamma), where each component represents the proportional distance of a point relative to one of the three vertices. This is extremely useful for interpolating color, texture coordinates, etc. We compute each proportion from the perpendicular distance of the point to the opposite edge. Using the three proportions, we can interpolate the color of each point inside the triangle by blending the colors of the three vertices.
In this picture borrowed from the Internet, we can see that as we move away from the red vertex, the interpolated color becomes less and less red: the distance from that vertex grows, so its influence shrinks. The same holds for the blue and green vertices.
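A minimal sketch of the barycentric computation, using signed sub-triangle areas (which is equivalent to the perpendicular-distance ratios described above); all names here are our own:

```cpp
struct Vec2 { float x, y; };
struct Bary { float alpha, beta, gamma; };

// Twice the signed area of triangle (a, b, c).
static float signedArea2(const Vec2 &a, const Vec2 &b, const Vec2 &c) {
    return (b.x - a.x) * (c.y - a.y) - (b.y - a.y) * (c.x - a.x);
}

// Each coordinate is the area of the sub-triangle opposite a vertex
// divided by the whole triangle's area; they sum to 1.
Bary barycentric(const Vec2 &p, const Vec2 &a, const Vec2 &b, const Vec2 &c) {
    float total = signedArea2(a, b, c);
    float alpha = signedArea2(p, b, c) / total;
    float beta  = signedArea2(a, p, c) / total;
    float gamma = 1.0f - alpha - beta;
    return {alpha, beta, gamma};
}
```

The interpolated color is then `alpha * colorA + beta * colorB + gamma * colorC`.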
Pixel sampling means sampling the texture map at a uv coordinate to fetch the texture element (texel) encoded there: either the texel nearest to the given uv coordinate, or an intermediate value obtained through bilinear interpolation. This plays the same role as the color parameter we were given when rasterizing triangles in previous tasks.
For nearest sampling, we scale u, v by the texture's width and height respectively, and round that number to get the nearest texel corresponding to the u, v position. For bilinear sampling, we instead blend the four texels surrounding that position, weighted by the fractional offsets.

[Figure: nearest vs. bilinear sampling, each at 1 and 16 samples per pixel.]
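The two pixel sampling modes can be sketched as below; the texture layout and method names are assumptions for illustration, not the project's actual classes:

```cpp
#include <vector>
#include <cmath>
#include <algorithm>

struct Color { float r, g, b; };

struct Texture {
    int width, height;
    std::vector<Color> texels; // row-major, single mip level

    Color texel(int x, int y) const {
        x = std::clamp(x, 0, width - 1);
        y = std::clamp(y, 0, height - 1);
        return texels[y * width + x];
    }

    // Nearest: scale (u, v) into texel space and round to the closest texel.
    Color sampleNearest(float u, float v) const {
        int x = (int)std::lround(u * width - 0.5f);
        int y = (int)std::lround(v * height - 0.5f);
        return texel(x, y);
    }

    // Bilinear: blend the four surrounding texels by fractional offsets.
    Color sampleBilinear(float u, float v) const {
        float x = u * width - 0.5f, y = v * height - 0.5f;
        int x0 = (int)std::floor(x), y0 = (int)std::floor(y);
        float tx = x - x0, ty = y - y0;
        auto lerp = [](const Color &a, const Color &b, float t) {
            return Color{a.r + t * (b.r - a.r),
                         a.g + t * (b.g - a.g),
                         a.b + t * (b.b - a.b)};
        };
        Color top = lerp(texel(x0, y0),     texel(x0 + 1, y0),     tx);
        Color bot = lerp(texel(x0, y0 + 1), texel(x0 + 1, y0 + 1), tx);
        return lerp(top, bot, ty);
    }
};
```

The `- 0.5f` offset treats texel colors as stored at texel centers, a common convention.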
Level sampling uses the appropriate mipmap level to avoid sampling many texture pixels for a single on-screen pixel. Mipmaps are a set of pre-filtered copies of the texture at progressively lower resolutions, and we can use the derivatives of the uv coordinates to determine which mipmap level to use.
We implemented level sampling by first computing the derivatives of the uv coordinates. We will use $(x, y)$ to represent pixel coordinates on the screen, $(u, v)$ to represent uv coordinates on the texture map, and $(s, t) = (u \cdot W, v \cdot H)$ to represent pixel coordinates on the level-0 texture map, where $W$ and $H$ are the texture's width and height.

We know that $\left(\frac{\partial s}{\partial x}, \frac{\partial t}{\partial x}\right)$ and $\left(\frac{\partial s}{\partial y}, \frac{\partial t}{\partial y}\right)$ both represent the movement across texture pixels (regardless of direction) when we move one pixel on the screen. Since each increase in mipmap level halves the resolution of the previous level (1/4x the pixels), we use the following formula to determine the appropriate mipmap level:

$$D = \log_2 \max\left( \sqrt{\left(\tfrac{\partial s}{\partial x}\right)^2 + \left(\tfrac{\partial t}{\partial x}\right)^2},\ \sqrt{\left(\tfrac{\partial s}{\partial y}\right)^2 + \left(\tfrac{\partial t}{\partial y}\right)^2} \right)$$

Sometimes $D$ comes out below $0$ or above the maximum level, so we need to clamp $D$ to the valid range. Those values happen because we might zoom in so much that one texel covers many screen pixels, or zoom out so much that multiple pixels of even the smallest mipmap correspond to one pixel on screen. We can't do much about either case, so we just clamp its range.
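A minimal sketch of this level computation; the function name is our own, and the inputs are assumed to be uv derivatives already scaled by the level-0 texture width and height:

```cpp
#include <cmath>
#include <algorithm>

// (ds_dx, dt_dx): change in texel coordinates per screen pixel in x;
// (ds_dy, dt_dy): the same per screen pixel in y.
float mipmapLevel(float ds_dx, float dt_dx, float ds_dy, float dt_dy, int max_level) {
    float lx = std::sqrt(ds_dx * ds_dx + dt_dx * dt_dx); // texel footprint along x
    float ly = std::sqrt(ds_dy * ds_dy + dt_dy * dt_dy); // texel footprint along y
    float D = std::log2(std::max(lx, ly));
    // Zooming in gives D < 0; zooming out can exceed the smallest level.
    return std::clamp(D, 0.0f, (float)max_level);
}
```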
After we get the appropriate mipmap level $D$, the behavior depends on our level sampling method. For L_NEAREST, we take round(D) and use the code in task 5 to perform sampling at that level. For L_LINEAR, we sample at both floor(D) and ceil(D) with the task 5 code, then use the following to linearly interpolate the two results:

```cpp
Color lowerC = task5(psm, floor(D), u, v);
Color upperC = task5(psm, ceil(D), u, v);
Color result = lowerC * (ceil(D) - D) + upperC * (1 - (ceil(D) - D));
```
Tradeoffs:
| Pixel Sampling Method | Level Sampling Method | Memory Footprint | Computational Cost | Antialiasing Power |
|---|---|---|---|---|
| P_NEAREST | L_ZERO | 1x | 1x | 1x |
| P_LINEAR | L_ZERO | 2x | 2x | 2x |
| P_NEAREST | L_NEAREST | 3x | 1x | 1x-4x |
| P_LINEAR | L_NEAREST | 5x | 2x | 2x-8x |
| P_NEAREST | L_LINEAR | 4x | 2x | 2x-8x |
| P_LINEAR | L_LINEAR | 6x | 4x | 4x-16x |
Visualizations:
| Pixel Sampling Method | Level Sampling Method | Result |
|---|---|---|
| P_NEAREST | L_ZERO | (image) |
| P_LINEAR | L_ZERO | (image) |
| P_NEAREST | L_NEAREST | (image) |
| P_LINEAR | L_NEAREST | (image) |
| P_NEAREST | L_LINEAR | (image) |
| P_LINEAR | L_LINEAR | (image) |
Credit and Side notes:
Webpage hosted at quantumcookie.xyz/Opensourced-Study-Notes-Berkeley/CS184/proj1-rasterizer-writeup