This is the HW4 description, and this is a sample transformation file.

Below are screengrabs of the PDF pages in the hw description link above.

Sent to everyone via email, on 4/27: Hi everyone, thought I'd post some notes to help you code up HW4. First off, to do the scan conversion, we'll relax the requirements a bit - you can use [almost] any scan conversion scheme you want. It still needs to be scanline oriented and still needs to be z-buffer based. In other words, you can't do flood filling or a variation of it called boundary filling. Try to get the polygon filling working first, then add lighting on top of it.

Here's a brute-force way to scan convert just one polygon. The idea is to start with what you had in HW2 (wireframe rendering). That involved projecting points into NDC, converting verts to (int_x, int_y) pixel coords, and doing Bresenham/DDA between adjacent verts to draw the wireframe edges. Here you'd draw those DDA pixels into a rectangular buffer in memory, not on the actual output window (these edge pixels will help you find scanline "spans" to fill). You can now march through the memory DDA line buffer one row at a time, and pick out and store (in two int arrays) the left_x and right_x integer pixel values found at that row (i.e. scanline). Some scanlines at the very top or very bottom might not contain any DDA pixels; that's ok (it means your poly is small and doesn't fill the image). Also, some scanlines will have only one pixel filled - that represents a topmost or bottommost vertex of your poly. You can take this single pixel's x value to be both left_x and right_x.

What you now have are SPANS for your (convex) poly. If you do a for() loop from top to bottom of your image and make setPixel() calls between left_x and right_x (both inclusive) to fill the span in each scanline (if a span exists - as mentioned above, scanlines at the top and bottom might not have spans), you should now see a 'solid' (nicely filled, with a single fill color) poly. This is a good thing to get working before moving on to what's below.
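The brute-force span idea above can be sketched roughly as follows. This is only an illustrative Python sketch, not the assignment's required code - the names dda_line, polygon_spans and fill_polygon are made up here, and the image is just a 2D list standing in for the memory buffer:

```python
# Illustrative sketch of the brute-force span method for ONE convex polygon.
# All names here are hypothetical; adapt to your own HW2 wireframe code.

def dda_line(x0, y0, x1, y1):
    """Yield integer pixels along an edge using simple DDA."""
    steps = max(abs(x1 - x0), abs(y1 - y0), 1)
    dx, dy = (x1 - x0) / steps, (y1 - y0) / steps
    x, y = float(x0), float(y0)
    for _ in range(steps + 1):
        yield round(x), round(y)
        x += dx
        y += dy

def polygon_spans(verts, height):
    """Rasterize all edges, recording left_x/right_x per scanline."""
    left = [None] * height    # left_x  per scanline (None = no span)
    right = [None] * height   # right_x per scanline
    n = len(verts)
    for i in range(n):
        (x0, y0), (x1, y1) = verts[i], verts[(i + 1) % n]
        for x, y in dda_line(x0, y0, x1, y1):
            if 0 <= y < height:
                # a single edge pixel becomes both left_x and right_x
                if left[y] is None or x < left[y]:
                    left[y] = x
                if right[y] is None or x > right[y]:
                    right[y] = x
    return left, right

def fill_polygon(verts, width, height, color, image):
    """Fill each scanline span [left_x, right_x], both ends inclusive."""
    left, right = polygon_spans(verts, height)
    for y in range(height):
        if left[y] is None:   # poly doesn't reach this scanline - skip
            continue
        for x in range(left[y], right[y] + 1):
            if 0 <= x < width:
                image[y][x] = color
```

In your actual program the inner image[y][x] = color assignment would be a setPixel() call on the output window (or FrameBuffer), once the spans look right.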
The above was the brute-force scan conversion step for just one polygon. In a polymesh you have multiple polys, parts of which hide each other. So you need to use a z (depth) buffer to decide, for each pixel in each span, whether or not to include it in the span. For this, you do camera-space depth comparisons between the z value "of the current pixel" and the existing z value in the z buffer for that pixel [the z buffer starts off with INF or INT_MAX at each pixel]. To get the z at a pixel, you can interpolate z values from the endpoint left_x and right_x pixels. To get THOSE z values, when you record DDA pixels into your memory buffer, you can interpolate the camera-space z values of the vertices. So in summary, given cam-space z values of just the verts of a poly, you should be able to calculate appropriate z values for all pixels that fill the poly. Again, it is good to verify that this calculation works before moving on. If you use a different (e.g. random) rgb value to fill the scanline pixels of different polys, doing the above (poly filling by consulting the z buffer at each pixel) should result in properly filled polys where polys in front correctly obscure ones behind. Make sure this works well before doing lighting.

All the above was just polygon filling (scan conversion). Additionally you need to do lighting, be it flat or Gouraud or Phong. Again, use values at the vertices, interpolate them along edges, and then interpolate THOSE along the scanline spans. Since you'd be calculating vert xyz values in cam space [for z depth comparisons], polygon normals found by taking edge cross products will also be in camera space. So if you express the light vector in cam space too, you can do all lighting calculations in that space. Flat shading is the easiest - an entire poly is filled with a SINGLE rgb value for all pixels in all spans. This value comes from the dot product of the light vector and the face normal, L.N.
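The z-buffered span fill boils down to one lerp and one compare per pixel. A minimal sketch, assuming the span endpoints already carry interpolated camera-space z values (the function and variable names are hypothetical, and the z-buffer starts at +infinity as described above):

```python
# Hypothetical sketch of a z-buffered span fill. xl/xr are the span's
# left_x/right_x; zl/zr are the camera-space depths already interpolated
# along the poly's edges for this scanline.

def lerp(a, b, t):
    return a + (b - a) * t

def fill_span(y, xl, zl, xr, zr, color, image, zbuf):
    """Fill one scanline span, testing each pixel against the z buffer."""
    for x in range(xl, xr + 1):
        t = 0.0 if xr == xl else (x - xl) / (xr - xl)
        z = lerp(zl, zr, t)      # camera-space depth at this pixel
        if z < zbuf[y][x]:       # closer than what's already there?
            zbuf[y][x] = z       # record the new nearest depth
            image[y][x] = color  # and the pixel wins the span
```

Calling this once per span per poly (with a distinct color per poly, as suggested above) is an easy way to eyeball that front polys correctly obscure back ones.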
Gouraud shading: calculate L.N [as for flat shading] rgb values using VERTEX normals in cam space, at each vertex. Interpolate those vertex rgb colors first along edges and then along spans. This should give you smoothly varying colors across a face (instead of a single color for the entire face). Phong shading: don't calculate L.N at each vert. Instead, interpolate the vertex normals themselves along each edge, and then interpolate the interpolated normals across each span. This gives you continuously varying normal vectors, where each pixel has a slightly different normal compared to its neighbors. Use such a custom normal value at a given pixel to do L.N and get an rgb for that pixel. The extra calculations stemming from normal interpolation and doing an L.N at each pixel [not just at the vertices] give you a higher quality image.

That's it - hope this clarifies things. The above is a truly brute-force way, but we'll still accept it. For efficiency, the z value at a pixel can be calculated using the plane equation of the poly, using integer (or even fixed-point) math, etc. Crow's algorithm is one such efficient method. For extra challenge (or if you want to "do it right") you can try these more efficient schemes (but they don't carry extra credit). If you look up the literature on scan conversion you keep encountering these more efficient schemes, and the detail (book-keeping) involved can appear daunting. So you can choose whether to implement one of these classic schemes or to "roll your own". For extra credit, you would submit a "pure OpenGL" program where you use OpenGL calls for the output image window, interaction, space transformations, and lighting/materials as well (i.e. OpenGL does the scan conversion and lighting calcs).
For the mandatory (non-extra-credit) scan conversion version, you can still use OpenGL calls, but only the ones for creating the output window and handling interaction (Ilya's FrameBuffer class encapsulates these OpenGL calls and is therefore a good choice). Good luck, Saty