W Randolph Franklin: ComputerGraphicsFall2014 home page (old version)


... and labs

Week 1a

1.  Mon 8/25/2014 (L1)

  1. Discuss Syllabus.
  2. Intro to OpenGL.
    1. It's a platform-neutral competitor to DirectX. See http://en.wikipedia.org/wiki/Comparison_of_OpenGL_and_Direct3D .
    2. The competition improves both.
    3. The designers made a decision to do only rendering, no input, audio, or windowing.
    4. The OpenGL standards committee gradually adds and subtracts features.
    5. The goal is to exploit the latest GPUs while keeping the size manageable.
    6. This makes the hello world program much longer, but makes complicated graphics more efficient.
    7. OpenGL 4 exploits the latest GPU features. However, many expensive business laptops, such as my Thinkpad X201, cannot run it.
    8. Because OpenGL 4 is lower level, it opens a niche for an easy-to-use replacement graphics API. I recommend Qt.
    9. Tools like Blender are higher-level, very good for realistic scenes, but too fancy for simple graphics programming. I'll leave them to other classes.
    10. See http://en.wikipedia.org/wiki/Opengl .
    11. WebGL is an API for HTML5.
    12. This course uses JavaScript because web-based graphics is easier with it.
    13. Although JavaScript is less efficient than C++, the serious graphics computing is done on the graphics card.
    14. The OpenGL API for C++ is quite similar to JavaScript's. Instead of the JavaScript framework, you need a C++ toolkit to provide the higher-level functions that OpenGL chose not to provide, such as creating windows. GLUT is one such toolkit; it is widely used but old, and there are newer alternatives.
  3. Why is this course different?
    1. Shader-based
    2. Most computer graphics courses use OpenGL but still rely on the fixed-function pipeline; they don't require shaders and don't use the full capabilities of the graphics processing unit (GPU).
    3. Web: with HTML5, WebGL runs in the latest browsers. It uses the local hardware and has no system dependencies.
  4. Some web resources
    1. http://www.cs.unm.edu/~angel/
    2. http://www.cs.unm.edu/~angel/WebGL/7E
    3. http://www.opengl.org
    4. http://get.webgl.org
    5. http://www.khronos.org/webgl
    6. http://www.chromeexperiments.com/webgl
    7. http://learningwebgl.com
  5. Long history of attempts at a good API described in text.
  6. OpenGL, 1992:
    1. platform-independent API that was
    2. Easy to use
    3. Close enough to the hardware to get excellent performance
    4. Focus on rendering
    5. Omitted windowing and input to avoid window system dependencies
  7. OpenGL evolution
    1. Originally controlled by an Architectural Review Board (ARB)
    2. Members included SGI, Microsoft, Nvidia, HP, 3DLabs, IBM, …
    3. Now the Khronos Group
    4. Was relatively stable (through version 2.1)
    5. Backward compatible
    6. Evolution reflected new hardware capabilities
    7. 3D texture mapping and texture objects
    8. Vertex and fragment programs
    9. Allows platform specific features through extensions
  8. Modern OpenGL
    1. Performance is achieved by using GPU rather than CPU
    2. Control GPU through programs called shaders
    3. Application’s job is to send data to GPU
    4. GPU does all rendering
  9. Immediate Mode Graphics
    1. Geometry specified by vertices
      1. Locations in space (2 or 3 dimensional)
      2. Points, lines, circles, polygons, curves, surfaces
    2. Immediate mode
      1. Each time a vertex is specified in application, its location is sent to the GPU
      2. Old style uses glVertex
      3. Creates bottleneck between CPU and GPU
      4. Removed from OpenGL 3.1 and OpenGL ES 2.0
  10. Retained Mode Graphics
    1. Put all vertex attribute data in array
    2. Send array to GPU to be rendered immediately
    3. Almost OK, but the problem is that we would have to resend the array each time we need another rendering of it
    4. Better to send array over and store on GPU for multiple renderings
  11. OpenGL 3.1
    1. Totally shader-based
      1. No default shaders
      2. Each application must provide both a vertex and a fragment shader
    2. No immediate mode
    3. Few state variables
    4. Most 2.1 functions deprecated
    5. Backward compatibility not required
      1. Exists a compatibility extension
  12. Other Versions
    1. OpenGL ES
      1. Embedded systems
      2. Version 1.0 simplified OpenGL 2.1
      3. Version 2.0 simplified OpenGL 3.1
      4. Shader based
    2. WebGL
      1. JavaScript implementation of ES 2.0
      2. Supported on newer browsers
    3. OpenGL 4.1, 4.2, …
      1. Add geometry, tessellation, compute shaders
  13. Source of much of this material: slides accompanying the text.
  14. Reading assignment: Angel, chapter 1.
  15. Homework 1 is online; due next Thurs.

Week 1b

2.  Tues 8/26/2014

Shen Wang's office hour today is at 4pm not 3, in JEC6037.

3.  Wed 8/27/2014

Today's lab is to help people with the programming question on the homework. Bring a computer. Know your RCS id and password.

4.  Thurs 8/28/2014 (L2)

4.1  Programming homework

AFS note: We saw in yesterday's lab that RPI's AFS has capacity restrictions. x7777 told me that it is not really maintained any more. When I asked how I could have known, since there is a web page documenting it, the answer was that a decision had been made not to announce this to the campus because too much information would just confuse people.

The consultant suggested using FileZilla, which works well. Sftp is an alternative. Samba is not, since it has been decommissioned.

You may also run WebGL on any machine, including on your laptop if you have a local webserver on it.

For the homework question, hand in a listing of any file that you changed and a screen dump of it working.

You need a reasonably new laptop; you cannot learn a new technology on an obsolete system.

When copying my files over to your account, you have to maintain the relative directory structure (or edit the files to reflect what you changed).

My RCS info: FRANKWR, 12356.

4.2  Lecture

Today's lecture will end around 5pm since I have to get away to a reception that Pres Jackson is holding.

Today's material is on slides from Angel's website. The links here are to my local cache.

  1. Angel_UNM_14_1_3.ppt Coding in WebGL
  2. Angel_UNM_14_1_4.ppt What is CG?
  3. Angel_UNM_14_1_5.ppt Image formation
  4. Angel_UNM_14_2_1.ppt Models and architectures
  5. Angel_UNM_14_2_1.ppt Programming with WebGL Part 1: Background

Week 2

5.  Wed 9/3/14: no lab

There's nothing new.

6.  Thurs 9/4/14 (L3)

We'll attempt to use the iClicker.

Angel_UNM_14_1_4.ppt ctd

SIGGRAPH 2014 - Emerging Technologies Media Tour

Angel_UNM_14_1_5.ppt

Homework 2 is online; due next Thurs.

(Enrichment, which means this will not be on a test.) If you're interested in the C++ API for OpenGL, the 2013 OpenGL SIGGRAPH short course is useful. See http://www.cs.unm.edu/~angel/. There's a Youtube video and PPTX slides.

The 2014 short course, which was just posted in the last few days, switched to WebGL, just as I did for this course.

The notes attached to each slide are quite good. They're longer than the slides but shorter than the textbook.

Notes written on my tablet during the lecture: 0903.pdf

Week 3a

7.  Linear algebra tutorial

  1. For students confused by the linear algebra questions on homework 1, the following may help:
    1. vector methods.
    2. Vector Math for 3D Computer Graphics
    3. Working with vectors, from Maths: A Student's Survival Guide
  2. Also see wikipedia.
  3. Some older graphics texts have appendices summarizing the relevant linear algebra.
  4. The scalar (dot) product is needed to do things like this:
    1. compute how a ball bounces off a wall in pong
    2. compute lighting effects (how light reflects from a surface)
  5. The cross product's applications include:
    1. compute torque and angular momentum.
    2. compute the area of a parallelogram.
    It is the only operator that you'll probably ever see that is not associative: A×(B×C) ≠ (A×B)×C.
  6. The triple product A·(B×C) computes the volume of a parallelepiped.
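These products can be sketched in plain JavaScript, the course's implementation language; the helper names below are my own, not from the course code.

```javascript
// Scalar (dot) product of two 3-vectors.
function dot(a, b) { return a[0]*b[0] + a[1]*b[1] + a[2]*b[2]; }

// Cross product of two 3-vectors.
function cross(a, b) {
  return [a[1]*b[2] - a[2]*b[1],
          a[2]*b[0] - a[0]*b[2],
          a[0]*b[1] - a[1]*b[0]];
}

// Triple product A·(B×C): signed volume of the parallelepiped on A, B, C.
function triple(a, b, c) { return dot(a, cross(b, c)); }
```

For instance, comparing cross([1,0,0], cross([1,1,0],[0,0,1])), which is [0,0,-1], with cross(cross([1,0,0],[1,1,0]), [0,0,1]), which is [0,0,0], shows the non-associativity mentioned above.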

8.  Mon 9/8/14 (L4)

  1. Graphics display hardware: The progress of Computer Graphics is largely the progress of hardware. We'll see more of this later. However, here's an intro.
    1. What physical principles are each type of HW based on?
      1. CRT: certain rare earth phosphors emit photons when hit by electrons. The closely related photoelectric effect, where photons knock out electrons, is what got Einstein his Nobel (not relativity).
      2. LCD: electric field causes big asymmetric molecules to untwist so that they no longer rotate polarized light passing through them.
    2. What engineering challenges required solving?
      1. Shadow-mask CRT: electron beams travel varying distances at different angles, but don't hit the wrong phosphor even as the system gets hotter. The precision is 0.1%.
      2. Hi-performance graphics requires hi bandwidth memory.
      3. Virtual reality headsets require knowing where your head is and its angle (harder).
        Think of 3 ways to do this.
    3. What tech advances enabled the solutions?
      1. Raster graphics requires cheap memory.
      2. LCD panels require large arrays of transistors.
  2. Note on Engineering Grounded In Reality and famous graphics alumni.
  3. Executive summary of Portability And Standards.
  4. SIGGRAPH 2014
  5. Angel_UNM_14_2_1.ppt Programming with WebGL Part 1: Background
  6. Angel_UNM_14_2_2.ppt Programming with WebGL Part 1: Background
  7. Angel_UNM_14_2_3.ppt Programming with WebGL Part 1: Background
  8. Angel_UNM_14_2_4.ppt Programming with WebGL Part 2: Complete Programs
  9. Angel_UNM_14_2_5.ppt Programming with WebGL Part 2: Complete Programs
  10. Angel_UNM_14_3_1.ppt Programming with WebGL Part 3: Shaders
  11. Notes written on my tablet during the lecture: 0903.pdf

More iClicker questions.

Week 3b

9.  Wed 9/10/14: Lab

Optional help on the homework. If you don't want help, you don't need to attend.

10.  Thurs 9/11/14 (L5)

  1. Extra material on colors
    1. Tetrachromacy
      Some women have 2 slightly different types of green cones in their retina. They see 4 primary colors.
    2. Metamers
      Different colors (either emitted or reflected) with quite different spectral distributions can appear perceptually identical. With reflected surfaces, this depends on what light is illuminating them. Two surfaces might appear identical under noon sunlight but quite different under incandescent lighting.
    3. CIE chromaticity diagram
      This maps spectral colors into a human perceptual coordinate system. Use it to determine what one color a mixture of colors will appear to be.
      1. More info: CIE_xyY
      2. Purple is not a spectral color.
    4. Color video standards: NTSC, SECAM, etc
    5. My note on NTSC And Other TV Formats.
    6. Failed ideas: mechanical color TV. http://en.wikipedia.org/wiki/Mechanical_television (enrichment)
    7. Additive vs subtractive colors.
    8. Sensitivity curves of the 3 types of cones vs wavelength.
    9. Material reflectivity vs wavelength.
    10. Incoming light intensity vs wavelength.
    11. Retina is a neural net - Mach band effect.
  2. SIGGRAPH 2014: Computer Animation Festival Trailer
  3. Angel_UNM_14_3_2.ppt Programming with WebGL Part 3: Shaders
  4. Angel_UNM_14_3_3.ppt Programming with WebGL Part 4: Color and Attributes
  5. Angel_UNM_14_3_4.ppt Programming with WebGL Part 5: More GLSL
  6. gasket2.html
  7. Angel_UNM_14_3_5.ppt Programming with WebGL Part 6: Three Dimensions
  8. gasket4.html
  9. Angel_UNM_14_3_6.ppt Programming with WebGL Part 6: Three Dimensions
  10. More iClicker questions.
  11. Homework 3 is online; due next Thurs.
  12. Notes written on my tablet during the lecture: 0911.pdf

Week 4

11.  Mon 9/15/14 (L6)

  1. SIGGRAPH 2014 : Technical Papers Preview Trailer
  2. This week will be mostly mathematics. I'm replacing the Angel lecture at Angel_UNM_14_3_E1.ppt Incremental and Quaternion Rotation, with my own material.
  3. My note on 3D rotation
    1. All rigid transformations in 3D that don't move the origin have a line of fixed points, i.e., an axis, that they rotate around.
    2. deriving the vector formula for a rotation given the axis and angle
    3. computing the matrix from a rotation axis and angle
    4. testing whether a matrix is a rotation
    5. if it is, then finding the axis and angle
  4. Quaternions. This is an alternative method to rotate in 3D. Its advantages are:
    1. It starts from the intuitive axis-angle API.
    2. Animating a large rotation in small steps (by varying the angle slowly) is easy. In contrast, stepping the 3 Euler angles does not work well, and there's no obvious way to gradually apply a {$3\times3$} rotation matrix, {$M$}. (You could compute {$M^{1/100}$} and apply it 100 times, but that computation is messy. Eigenvectors are useful here.)
    3. When combining multiple rotations, the axis and angle of the combo is easy to find.
    4. Having only 4 parameters to represent the 3 degrees of freedom of a 3D rotation is the right number. Using only 3 parameters, as Euler angles do, causes gimbal lock. That is, you cannot always represent a smooth rotation by smooth changes of the 3 parameters. OTOH, using 9 parameters, as with a matrix, gives too much opportunity for roundoff errors causing the matrix not to be exactly a rotation. (You can snap the matrix back to a rotation matrix, but that's messy.)
  5. More iClicker questions.
  6. Notes written on my tablet during the lecture: 0915.pdf
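The axis-angle quaternion machinery described above can be sketched in the course's JavaScript. The helper names are mine, not Angel's; quaternions are stored as [w, x, y, z].

```javascript
// Build a unit quaternion from a rotation axis and angle (radians).
function quatFromAxisAngle(axis, theta) {
  const len = Math.hypot(axis[0], axis[1], axis[2]);
  const s = Math.sin(theta / 2) / len;   // normalize the axis as we go
  return [Math.cos(theta / 2), axis[0]*s, axis[1]*s, axis[2]*s];
}

// Hamilton product of two quaternions.
function quatMul(p, q) {
  const [pw, px, py, pz] = p, [qw, qx, qy, qz] = q;
  return [
    pw*qw - px*qx - py*qy - pz*qz,
    pw*qx + px*qw + py*qz - pz*qy,
    pw*qy - px*qz + py*qw + pz*qx,
    pw*qz + px*qy - py*qx + pz*qw,
  ];
}

// Rotate vector v by unit quaternion q: v' = q v q*  (q* is the conjugate).
function quatRotate(q, v) {
  const conj = [q[0], -q[1], -q[2], -q[3]];
  const r = quatMul(quatMul(q, [0, v[0], v[1], v[2]]), conj);
  return [r[1], r[2], r[3]];
}
```

Animating a large rotation in small steps is then just varying the angle: quatFromAxisAngle(axis, k*theta/100) for k = 0..100.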

12.  Wed 9/17/14

Optional lab for people who wish extra help.

13.  Thurs 9/18/14 (L7)

  1. iClickers: LMS now has columns from iClicker for the last 4 lectures. In each class, you got 1 point for answering at least one question.
    There are many unregistered iClickers used in class. Coincidentally, there are many students on the roster who haven't registered their iClickers. It's not too late to register, which will then include all your old answers. Please register using your RCS ID, i.e. (approximately), last name + first initial + number. If you use your 9-digit RIN, then we have to match you manually. That makes us grumpy.
  2. More iclicker questions.
  3. More on 3D rotations:
    1. Finding the axis and angle from a rotation matrix.
    2. Quaternions.
  4. SIGGRAPH 2013: Technical Papers Preview Trailer
  5. Homework 4 is online; due next Thurs.
  6. Notes written on my tablet during the lecture: 0918.pdf
  7. (Unrelated to this course.) Tonight is the annual Ig Nobel award ceremony, parodying the upcoming real Nobels. However, in 2005, Roy Glauber, the janitor who usually sweeps paper airplanes off the stage during the Ig Nobel ceremony, won a real Nobel. (His day job was professor of physics at Harvard.) So, maybe the Nobels parody the Ig Nobels.
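Item 3.1 above, finding the axis and angle from a rotation matrix, can be sketched as follows. This is my own illustration, and it assumes a proper rotation whose angle is strictly between 0 and π.

```javascript
// Recover the axis and angle from a 3x3 rotation matrix R,
// given as an array of three rows.
function axisAngleFromMatrix(R) {
  // trace(R) = 1 + 2 cos(theta)
  const theta = Math.acos((R[0][0] + R[1][1] + R[2][2] - 1) / 2);
  // The antisymmetric part of R encodes (2 sin theta) times the axis.
  const s = 2 * Math.sin(theta);
  const axis = [
    (R[2][1] - R[1][2]) / s,
    (R[0][2] - R[2][0]) / s,
    (R[1][0] - R[0][1]) / s,
  ];
  return { axis, theta };
}
```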

Week 5

14.  Mon 9/22/14 (L8)

  1. Euler angles and gimbal lock
    1. http://www.youtube.com/watch?v=rrUCBOlJdt4&feature=related Gimble Lock - Explained.
      One problem with Euler angles is that multiple sets of Euler angles can degenerate to the same orientation. Conversely, making a small rotation from certain sets of Euler angles can require a jump in those angles. This is not just a math phenomenon; real gyroscopes experience it.
    2. http://en.wikipedia.org/wiki/Gimbal_lock
    3. What is Gimbal Lock and why does it occur? - an animator's view.
  2. This week, we'll see how the programs in http://www.cs.unm.edu/~angel/WebGL/7E/03/ work.
  3. Angel_UNM_14_4_1.ppt Input and Interaction
  4. Angel_UNM_14_4_2.ppt Animation
  5. Angel_UNM_14_4_3.ppt Working with Callbacks
  6. SIGGRAPH 2013 : Emerging Technologies Preview Trailer
  7. Notes written on my tablet during the lecture: 0922.pdf

15.  Wed 9/24/14

16.  Thu 9/25/14 (L9)

  1. SIGGRAPH 2012 Technical Papers Video Preview
  2. Note on 3D Interpolation (to replace textbook section on animating with Euler angles).
  3. Angel_UNM_14_4_4.ppt Position Input
  4. Angel_UNM_14_4_5.ppt Picking
  5. Angel_UNM_14_4_6.ppt Geometry
  6. Remote X sessions: (enrichment only) Running a program on one machine (the client) that does interactive graphics on another (the server) is hard to implement.
    1. That was one of the earliest goals of the ARPAnet in 1968. It didn't work out because the bandwidth over a phone line was too low, say 1KB/s. However the ARPAnet turned out to be useful for frivolous things like exchanging messages (smtp), as well as remote logins (telnet) and exchanging files (ftp). So it continued to grow...
      (Opinion): That's one of the most successful examples of government-private cooperation. The government (i.e., taxpayer) paid for the R&D, for several decades in this case. The immediate users were military and research universities. The tech was made freely available, and eventually private companies commercialized it.
    2. Project Athena was at MIT in the 1980s, sponsored by DEC and IBM, costing nearly $100M. It did distributed computing, including the X window system, which we still use. IMHO, it's still better than remote graphics on MS Windows.
    3. However, the default config for remote X sessions, which you get by ssh -X foo.rpi.edu is unusably slow. Too many low-level bits are sent, all encrypted.
    4. Here are some things I've discovered that help.
      1. Use a faster, albeit less secure, cipher:
        ssh -c arcfour,blowfish-cbc -C client.rpi.edu
      2. Use xpra; here's an example:
        1. On client: xpra start :7; DISPLAY=:7 xeyes&
        2. On server: xpra attach ssh:client.rpi.edu:7
      3. Use nx, which needs a server, e.g., FreeNX.
  7. Picking
    The user selects an object on the display with the mouse. How can we tell which object was selected? This is a little tricky.
    E.g., It's not enough to know what line of code was executed to draw that pixel (and even determining that isn't trivial). That line may be in a loop in a subroutine called from several places. We want to know, sort of, the whole current state of the program. Also, if that pixel was drawn several times, we want to know about only the last time that that pixel changed.
    There are various messy methods, which are all now deprecated. The new official way is to use the color buffer to code the objects.
  8. Homework 5 is online; due next Thurs.
  9. Here's the story about the bridge over the Rhine between Germany and Switzerland that had a 54cm vertical error between the two halves. The two countries have different standards for where sea level is. The difference is 27cm. That's ok, there are many different coordinate systems used, and the differences are published. However, someone added instead of subtracting. http://www.science20.com/news_articles/what_happens_bridge_when_one_side_uses_mediterranean_sea_level_and_another_north_sea-121600
    These differences started because in the 19th and early 20th centuries, various countries fit ellipses to their parts of the earth's surface. E.g., the US created the NAD27 (North American Datum of 1927) standard. The various datums usually disagree by tens of meters horizontally and much less vertically. Another problem is that sea level is not level.
    Before GPS all that was insignificant. Everyone then got together to create the World Geodetic System 1984 (WGS84), to replace the various national standards. However, I bought a map this summer that still used NAD27.
    Some years ago a Russian plane crashed when it tried to land 100m away from the runway. That sounds like a datum error. This is unrelated to the Russian plane crash where the pilot let his teenage son sit in his seat. The kid accidentally turned the autopilot half off, putting the plane into such a steep dive that the pilot had trouble reaching the controls. Unfortunately, the pilot's efforts prevented the autopilot from saving the plane. See http://en.wikipedia.org/wiki/Aeroflot_Flight_593 and http://www.nytimes.com/1994/09/28/world/tape-confirms-the-pilot-s-son-caused-crash-of-russian-jet.html .
    As another example, NASA once lost a space probe to Mars because one group used English units and the other metric.
    As another example, the clocks used by GPS and cellphones differ by 12 seconds. GPS doesn't use leap seconds; cellphones do.
    The importance of the above examples is that details matter in engineering. User interface design is one important detail.
  10. Notes written on my tablet during the lecture: 0925.pdf

Week 6

17.  Mon 9/29/14 (L10)

  1. NVIDIA's Light-field Glasses Prototype demo @ Siggraph 2013
  2. More programs from http://www.cs.unm.edu/~angel/WebGL/7E/03/ .
  3. Angel_UNM_14_5_1.ppt Representation.
  4. Angel_UNM_14_5_2.ppt Homogeneous Coordinates
    This is today's big idea. My take on the topic is here:
    HomogeneousCoords
  5. Angel_UNM_14_5_3.ppt Transformations
  6. Steve Baker's notes on some graphics topics:
    1. GL_MODELVIEW vs GL_PROJECTION
    2. Euler angles are evil
  7. Notes written on my tablet during the lecture: 0929.pdf
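The payoff of homogeneous coordinates, today's big idea, fits in a tiny JavaScript sketch (my own helpers): writing a point as (x, y, z, 1) turns translation, which is not linear in 3D, into a 4×4 matrix multiply, so it composes with rotations and scales by ordinary matrix multiplication.

```javascript
// Multiply a 4x4 matrix (array of 4 rows) by a homogeneous 4-vector.
function matVec4(M, v) {
  return M.map(row => row[0]*v[0] + row[1]*v[1] + row[2]*v[2] + row[3]*v[3]);
}

// Translation by (tx, ty, tz) as a 4x4 matrix.
function translation(tx, ty, tz) {
  return [
    [1, 0, 0, tx],
    [0, 1, 0, ty],
    [0, 0, 1, tz],
    [0, 0, 0, 1],
  ];
}
```

For example, matVec4(translation(1,2,3), [4,5,6,1]) gives [5,7,9,1].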

18.  Wed 10/1/14 (L11)

Regular lecture, in lab room.

  1. iClicker questions that I've asked in class. I'll update the list from time to time (the process is not automatic).
  2. An attempt at asking new iclicker questions.
  3. SIGGRAPH 2011 Technical Papers Video Preview
  4. HomogeneousCoords continued.
  5. http://www.ecse.rpi.edu/Homepages/wrf/wiki/ComputerGraphicsFall2014/Angel/WebGL/7E/04/ programs.
  6. Angel_UNM_14_5_4.ppt WebGL Transformations
    1. The old OpenGL modelview and projection matrix idea is now deprecated, but is interesting for its subdivision of transformations into two functions.
    2. The modelview matrix moves the world so that the camera is where you want it, relative to the objects. Unless you did a scale, the transformation is rigid - it preserves distances (and therefore also angles).
    3. The projection matrix view-normalizes the world to effect your desired projection and clipping. For a perspective projection, it does not preserve distances or angles, but does preserve straight lines.
  7. Angel_UNM_14_5_5.ppt Applying Transformations
  8. Adaptive Tearing and Cracking of Thin Sheets, SIGGRAPH 2014
  9. (Unrelated to course but a benefit of being at RPI) Steve Case is talking in EMPAC on Tuesday, October 21, 2014 at 3:00 p.m. There are interesting things all day, see http://lally50.splashthat.com/. Case is a textbook story of the dot-com boom and bust. He built up the internet startup AOL, rising from marketing consultant to CEO. In 2000 at the dot-com peak, AOL purchased Time Warner for $164G, based on AOL's very optimistic stock price. (TW was fundamentally a much bigger company.) That has been called the biggest business mistake ever (for TW; it was a great move for AOL and Case). AOL optimized the game perfectly by converting their frothy stock to something solid at the peak. In 2002, the combo lost $99G, the biggest corporate loss ever to that date. AOL's stock value eventually fell from $226G to $20G.
    Ref: http://en.wikipedia.org/wiki/Time_Warner, http://learning.blogs.nytimes.com/2012/01/10/jan-10-2000-aol-and-time-warner-announce-merger/
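The claim in the modelview item above, that a rotate-plus-translate transformation is rigid and preserves distances, is easy to check numerically. This sketch uses my own helpers (rotation about z only); it is illustrative, not Angel's code.

```javascript
// Rotate p about the z axis by theta, then translate by t
// (the homogeneous w = 1 component is implicit).
function applyRigid(theta, t, p) {
  const c = Math.cos(theta), s = Math.sin(theta);
  return [c*p[0] - s*p[1] + t[0],
          s*p[0] + c*p[1] + t[1],
          p[2] + t[2]];
}

// Euclidean distance between two 3D points.
function dist(a, b) {
  return Math.hypot(a[0]-b[0], a[1]-b[1], a[2]-b[2]);
}
```

For any theta and t, dist(p, q) equals dist(applyRigid(theta,t,p), applyRigid(theta,t,q)) up to roundoff; a scale would break this.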

etc.

19.  Thu 10/2/14 (L12)

  1. More powerpoint slides from the text, iclicker questions, new homework.
  2. Siggraph 2014: Facial Performance Enhancement Using Dynamic Shape Space Analysis
  3. Angel_UNM_14_6_1.ppt Building Models.
    big idea: separate the geometry from the topology.
  4. Angel_UNM_14_6_2.ppt The Rotating Square. big idea: render by elements.
  5. Angel_UNM_14_6_3.ppt Classical Viewing. Big ideas: parallel (orthographic) and perspective projections. The fine distinctions between subclasses of projections are IMO obsolete.
  6. Angel_UNM_14_6_4.ppt Computer Viewing: Positioning the Camera.
  7. Angel_UNM_14_6_5.ppt Computer Viewing: Projection. big idea: view normalization.
  8. Notes written on my tablet during the lecture: 1002.pdf

Week 8

20.  Mon 10/6/14

No class

21.  Wed 10/8/14

  1. Review for midterm
  2. Notes written on my tablet during the lecture: 1008.pdf

22.  Thu 10/9/14

Midterm F2014, Solution.

Week 9

23.  Tues 10/14/14 (L13)

  1. World's largest tablet: Ocado Technology's 42" sLablet
  2. No iclicker.
  3. New homework this evening.
  4. Siggraph 2014: Focus 3D: Compressive Accommodation Display
  5. Angel_UNM_14_E2.ppt Virtual Trackball.
    A minor application of rotation.
  6. Angel_UNM_14_7_1.ppt Orthogonal Projection Matrices.
    Big idea: Given any orthogonal projection and clip volume, we transform the object so that we can view the new object with projection (x,y,z) -> (x,y,0) and clip volume (-1,-1,-1) to (1,1,1) and get the same image. That's a normalization transformation. See slide 14.
  7. Angel_UNM_14_7_2.ppt Perspective Projection Matrices.
    Big idea: We can do the same with perspective projections. The objects are distorted like in a fun house. See slide 8.
  8. HomogeneousCoords continued.
  9. Angel_UNM_14_7_3.ppt Meshes.
    Big ideas:
    1. Meshes are common for representing complicated shapes.
    2. We can plot a 2 1/2 D mesh back to front w/o needing a depth buffer.
    3. Unstructured point clouds with millions of points from laser scanners are common. We can turn them into a mesh by using various heuristics about the expected object.
    4. Drawing the mesh edges a little closer than the faces is a hack that probably works most of the time if the scene isn't too complex. I.e., go ahead and use it, but don't be surprised if it occasionally fails.
  10. Angel_UNM_14_7_4.ppt Shadows.
    Big idea:
    1. If there is one light source, we can compute visibility with it as the viewer.
    2. Somehow mark the visible (i.e., lit) and hidden (i.e., shadowed) portions of the objects.
    3. Then recompute visibility from the real viewer and shade the visible objects depending on whether they are lit or shadowed.
    4. This works for a small number of lights.
  11. Angel_UNM_14_7_5.ppt Lighting and Shading I.
    Big big topic.
  12. Notes written on my tablet during the lecture: 1014.pdf
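The orthographic normalization in item 6 amounts to translating and scaling each point into the canonical cube. My own function below ignores OpenGL's z-sign convention and just shows the normalization idea.

```javascript
// Map a point p inside the clip volume [l,r] x [b,t] x [n,f]
// to the canonical cube (-1,-1,-1)..(1,1,1): translate the volume's
// center to the origin, then scale each axis to length 2.
function orthoNormalize(l, r, b, t, n, f, p) {
  return [
    (2*p[0] - (r + l)) / (r - l),
    (2*p[1] - (t + b)) / (t - b),
    (2*p[2] - (f + n)) / (f - n),
  ];
}
```

After this, projecting is simply (x,y,z) -> (x,y,0) and clipping is a comparison against ±1, as the slide says.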

24.  Wed 10/15/14

Lab to get graded midterm exams back, and to ask questions about them.

25.  Thurs 10/16/14 (L14)


  1. More powerpoint slides from the text, iclicker questions, new homework.
  2. This Woman Sees 100 Times More Colors Than The Average Person
  3. URP: Programming virtual environments for Cog Sci research
    Prof Fajen writes:
    I am looking for an undergraduate student to work as a programmer in my virtual reality lab. The primary responsibility will be to write code to develop virtual environments that will be used for NSF-sponsored research on human visual perception and motor control.
    Candidates must have a strong background in C++ or Python and some experience with computer graphics. Preference will be given to students with experience working on large-scale programming projects. The position pays $12 per hour. Students must be willing to devote at least 5 (and preferably 8-10) hours per week during the rest of the Fall 2014 semester. The position starts immediately and will continue into the Spring 2015 semester.
    This is an excellent opportunity for students to develop their programming skills and gain experience working with virtual reality equipment.
    Students who are interested should send an email expressing their interest in the position to Professor Brett Fajen (fajenb@rpi.edu <fajenb@rpi.edu>). Please include a resume with relevant coursework, GPA, and any work/research experience.
     
       Brett R. Fajen, PhD
       Professor of Cognitive Science
       Department of Cognitive Science
       Rensselaer Polytechnic Institute
       Carnegie Building 301D
       110 8th Street
       Troy NY 12180-3590
    
       Voice: (518) 276-8266
       Fax: (518) 276-8268
       Email: fajenb@rpi.edu
       http://panda.cogsci.rpi.edu
    
    
    He will be here at the start of Monday's class to talk about the project and show a video.
  4. http://www.ecse.rpi.edu/Homepages/wrf/wiki/ComputerGraphicsFall2014/Angel/WebGL/7E/05/ programs showing:
    1. sombrero function
    2. perspective and orthographic projection
    3. shadow
  5. Note that project proposals were due.
  6. Reading for the current slides is Chapter 6.
  7. Homework 6 is online; due next Thurs.
  8. Notes written on my tablet during the lecture: 1016.pdf

Week 10

26.  Mon 10/20/14 (L15)

  1. The new LMS column MidtermEstGrade was computed as follows:
    1. midterm exam out of 50, plus
    2. four best of the first five homeworks, each scaled to be out of 12.5, plus
    3. knowitall points
    The potential max is 100. It was computed outside LMS and uploaded. Therefore it won't change if your input grades change. However, it is just for your info.
  2. The two Oculus Rift headsets that I ordered in early August should finally ship soon. I'll figure out some way to share them around the class. So, how many people are interested:
    1. in playing with them for a couple of days, or
    2. in using them for a term project?
  3. Phong lighting model: The total light at a pixel is the sum of
    1. Incoming ambient light times ambient reflectivity of the material at the pixel,
    2. Incoming diffuse light times diffuse reflectivity times a factor for the light source being low on the horizon,
    3. Incoming specular light times specular reflectivity times a factor for the eye not being aligned to the reflection vector, with an exponent for the material shininess,
    4. Light emitted by the material.
  4. That is not intended to be completely physical, but to give the programmer lots of parameters to tweak.
  5. In OpenGL you can do several possible levels of shading. Pick one of the following choices. Going down the list makes the shading better but costlier.
    1. Shade the whole polygon to be the color that you specified for one of the vertices.
    2. Bilinearly shade the polygon, triangle by triangle, from the colors you specified for its vertices.
    3. Use the Phong lighting model to compute the color of each vertex from that vertex's normal. Bilinearly interpolate that color over the polygon. That is called Gouraud shading.
    4. Bilinearly interpolate a surface normal at each pixel from normals that you specified at each vertex. Then normalize the length of each interpolated normal vector. Evaluate the Phong lighting model at each pixel from the interpolated normal. That is called Phong shading.
  6. Maureen Stone: Representing Colors as 3 Numbers (enrichment)
  7. Why do primary schools teach that the primary colors are Red Blue Yellow?
  8. Today, from 5pm-6pm in Sage labs 5101: Dreamworks company overview talk. I'll end class at 4:50.
  9. Tuesday the 21st, from 10am to noon in Sage labs 2411: Dreamworks Technical Director presentation.
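The Phong sum in items 3-5 can be sketched per color channel as follows. The variable names are my own; all direction vectors are assumed unit length, with a single light and no distance attenuation.

```javascript
// Clamped dot product: a negative value means the light (or reflection)
// is behind the surface (or eye), contributing nothing.
function dot3(a, b) {
  return Math.max(0, a[0]*b[0] + a[1]*b[1] + a[2]*b[2]);
}

// ka, kd, ks: ambient, diffuse, specular reflectivities.
// N: surface normal, L: direction to light,
// R: reflection direction, V: direction to eye.
function phong(ka, kd, ks, shininess, N, L, R, V, emitted) {
  const ambient  = ka;                                    // ambient light
  const diffuse  = kd * dot3(N, L);                       // light low on the horizon dims this
  const specular = ks * Math.pow(dot3(R, V), shininess);  // eye off the reflection vector dims this
  return ambient + diffuse + specular + emitted;
}
```

As the notes say, this is not physical; it just gives the programmer lots of knobs to tweak.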

27.  Thurs 10/23/14 (L16)

  1. Siggraph 2014: 3D Object Manipulation in a Single Photograph using Stock 3D Models
    http://www.cs.cmu.edu/~om3d/ Photo-editing software restricts the control of objects in a photograph to the 2D image plane. We present a method that enables users to perform the full range of 3D manipulations, including scaling, rotation, translation, and nonrigid deformations, to an object in a photograph. As 3D manipulations often reveal parts of the object that are hidden in the original photograph, our approach uses publicly available 3D models to guide the completion of the geometry and appearance of the revealed areas of the object. The completion process leverages the structure and symmetry in the stock 3D model to factor out the effects of illumination, and to complete the appearance of the object. We demonstrate our system by producing object manipulations that would be impossible in traditional 2D photo-editing programs, such as turning a car over, making a paper-crane flap its wings, or manipulating airplanes in a historical photograph to change its story.
  2. Summary of the new part of WebGL/7E/06/ shadedCube:
    1. var nBuffer = gl.createBuffer();
      Reserve a buffer id.
    2. gl.bindBuffer( gl.ARRAY_BUFFER, nBuffer );
      1. Create that buffer as a buffer of data items, one per vertex.
      2. Make it the current buffer for future buffer operations.
    3. gl.bufferData( gl.ARRAY_BUFFER, flatten(normalsArray), gl.STATIC_DRAW );
      Write an array of normals, flattened to remove metadata, into the current buffer.
    4. var vNormal = gl.getAttribLocation( program, "vNormal" );
      Get the address of the shader (GPU) variable named "vNormal".
    5. gl.vertexAttribPointer( vNormal, 3, gl.FLOAT, false, 0, 0 );
      Declare that the current buffer contains 3 floats per vertex.
    6. gl.enableVertexAttribArray( vNormal );
      Enable the array for use.
    7. (in the shader) attribute vec3 vNormal;
      Declare the variable in the vertex shader that will receive each row of the JavaScript array as each vertex is processed.
    8. The whole process is repeated with the vertex positions.
      Note that the variable with vertex positions is not hardwired here. You pass in whatever data you want, and your shader program uses it as you want.
  3. Angel_UNM_14_8_1.ppt Lighting and Shading 2
  4. Angel_UNM_14_8_2.ppt Lighting in WebGL
  5. WebGL/7E/06/ shadedSphere1, 2
  6. Angel_UNM_14_8_3.ppt Polygonal Shading
  7. Angel_UNM_14_8_4.ppt Per Vertex and Per Fragment Shading
  8. Angel_UNM_14_E3.ppt Marching Squares
  9. Computing surface normals.
    1. For a curved surface, the normal vector at a point on the surface is the cross product of two tangent vectors at that point. They must not be parallel to each other.
    2. If it's a parametric surface, partial derivatives are tangent vectors.
    3. A mesh is a common way to approximate a complicated surface.
    4. For a mesh of flat (planar) pieces (facets):
      1. Find the normal to each facet.
      2. Average the normals of the facets around each vertex to get a normal vector at each vertex.
      3. Apply Phong (or Gouraud) shading from those vertex normals.
  10. Homework 7 is online; due next Thurs.
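The facet-normal recipe above (cross product of two edge vectors, then average the facet normals around each vertex) can be sketched in a few lines of JavaScript. The function names are invented for the example, and counterclockwise vertex order is assumed.

```javascript
function sub(a, b) { return [a[0] - b[0], a[1] - b[1], a[2] - b[2]]; }

function cross(a, b) {
  return [a[1] * b[2] - a[2] * b[1],
          a[2] * b[0] - a[0] * b[2],
          a[0] * b[1] - a[1] * b[0]];
}

function normalize(v) {
  const len = Math.hypot(v[0], v[1], v[2]);
  return [v[0] / len, v[1] / len, v[2] / len];
}

// Normal to a flat facet: cross product of two non-parallel edge vectors.
function facetNormal(p0, p1, p2) {
  return normalize(cross(sub(p1, p0), sub(p2, p0)));
}

// Vertex normal: normalized average of the normals of the facets
// meeting at that vertex.
function vertexNormal(facetNormals) {
  const sum = facetNormals.reduce(
    (s, n) => [s[0] + n[0], s[1] + n[1], s[2] + n[2]], [0, 0, 0]);
  return normalize(sum);
}
```

These vertex normals are exactly what would be written into the `normalsArray` buffer in the shadedCube summary above.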

Week 11

28.  ECSE-4965-01 Applied Parallel Computing for Engineers, Spring 2015

A computer engineering course. Engineering techniques for parallel processing. Provides knowledge and hands-on experience in developing application software for processors with massively parallel computing resources on inexpensive, widely available computers. Multithreaded shared memory programming with OpenMP. NVIDIA GPU multicore programming with CUDA and Thrust. Using NVIDIA gaming and graphics cards on current laptops and desktops for general-purpose parallel computing under Linux.

Details

29.  Mon 10/27/14 (L17)

  1. Angel_UNM_14_9_1.ppt Buffers
  2. Angel_UNM_14_9_2.ppt BitBlt
  3. Angel_UNM_14_9_3.ppt Texture Mapping I
  4. Lots of examples of crazy things done with fragment shaders: https://www.shadertoy.com/.
  5. Evans and Sutherland flight simulator history.
  6. How does A320 Full Flight Simulator work? - Baltic Aviation Academy
  7. Today's big new idea: Textures.
    1. Textures started as a way to paint images onto polygons to simulate surface details. They add per-pixel surface details without raising the geometric complexity of a scene.
    2. That morphed into a general array data format with fast I/O.
    3. If you read a texture with indices that are fractions, the hardware interpolates a value, using one of several algorithms. This is called sampling. E.g., reading T[1.1, 2] returns something like 0.9*T[1,2] + 0.1*T[2,2].
    4. Textures involve many coordinate systems:
      1. (x,y,z,w) - world.
      2. (u,v) - parameters on one polygon
      3. (s,t) - location in a texture.
  8. Aliasing is also important.
  9. Notes written on my tablet during the lecture: 1027.pdf
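The fractional-index texture read described above amounts to bilinear interpolation. Here is a hedged JavaScript sketch (illustrative names; the texture is stored as a plain 2D array, and real hardware also handles wrapping, clamping, and mipmap selection, which this omits) reproducing the T[1.1, 2] example:

```javascript
// Bilinearly sample a texture stored as a 2D array T[i][j]
// at fractional coordinates (x, y).
function sampleBilinear(T, x, y) {
  const i = Math.floor(x), f = x - i;   // integer part and fraction in x
  const j = Math.floor(y), g = y - j;   // integer part and fraction in y
  const a = (1 - f) * T[i][j]     + f * T[i + 1][j];
  const b = (1 - f) * T[i][j + 1] + f * T[i + 1][j + 1];
  return (1 - g) * a + g * b;
}
```

With x = 1.1 and an integer y, the second interpolation drops out and the result is exactly the 0.9/0.1 weighted average quoted above.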

30.  Wed 10/29/14 (L18)

Regular lecture (in the lab room).

  1. Folding and Crumpling Adaptive Sheets, SIGGRAPH 2013
  2. Structure-Aware Hair Capture (Siggraph 2013)
  3. Angel_UNM_14_9_4.ppt Texture Mapping II
  4. Angel_UNM_14_9_5.ppt WebGL Texture Mapping I
  5. Angel_UNM_14_9_6.ppt WebGL Texture Mapping II
  6. Notes written on my tablet during the lecture: 1029.pdf

31.  Thurs 10/30/14 (L19)

Regular lecture.

  1. Dean Shekhar Garde writes, "we require that all instructors take about 10-15 minutes to talk through the attached Power Point Presentation with their students in each of their classes within the next week"
    1. EbolaProtocols2.pptx
    2. EbolaGraphic.pdf
  2. Videos - commercial applications of graphics
    1. NGRAIN augmented reality capability demo
    2. Modeling and Simulation is Critical to GM Powertrain Development
    3. Hydraulic Fracture Animation
    4. Deepwater Horizon Blowout Animation www.deepdowndesign.com
    5. On-board fire-fighting training simulator made for the Dutch navy
    6. Ship accident SMS, the best video I have ever seen
  3. We are now in Chapter 7 of the textbook.
  4. Angel_UNM_14_10_1.ppt Reflection and Environment Maps
  5. Angel_UNM_14_10_2.ppt Bump Maps
  6. Angel_UNM_14_10_3.ppt Compositing and Blending
  7. Notes written on my tablet during the lecture: 1030.pdf

Week 12

There is no regular lecture this week. Instead, you should watch this video of Sebastian Thrun's keynote talk at this year's NVidia technical conference. (This is a baseline of a good term project, given that Thrun was hampered by being at Stanford not RPI.) (Local cache).

It is also a marvelous example of a successful engineering project. Many different parts all have to work to make the total project succeed. They include laser rangefinding, image recognition, a road database accurate to the specific lane, and GPS navigation. This is also a model for government - university - industry interaction.

DARPA (The Defense Advanced Research Projects Agency) started this concept with a contest paying several million dollars in prizes. (DARPA started connecting computers in different cities with phone lines in 1968. This was the ARPAnet. They funded computer graphics in the 1970s and some early steps in virtual reality in the 1980s.)

In the 1st contest, the best vehicle failed after about 10 miles. Five vehicles completed the 130-mile course in the 2nd contest. The winning project leader was Sebastian Thrun, a Stanford CS prof. He quit and moved to Google, which has now been funding this for several years.

Here is the talk abstract:

What really causes accidents and congestion on our roadways? How close are we to fully autonomous cars? In his keynote address, Stanford Professor and Google Distinguished Engineer, Dr. Sebastian Thrun, will show how his two autonomous vehicles, Stanley (DARPA Grand Challenge winner), and Junior (2nd Place in the DARPA Urban Challenge) demonstrate how close yet how far away we are to fully autonomous cars. Using computer vision combined with lasers, radars, GPS sensors, gyros, accelerometers, and wheel velocity, the vehicle control systems are able to perceive and plan the routes to safely navigate Stanley and Junior through the courses. However, these closed courses are a far cry from everyday driving. Find out what the team will do next to get one step closer to the holy grail of computer vision, and a huge leap forward toward the concept of fully autonomous vehicles.

Finally, Dr Tony Tether, Director of DARPA when this happened, is an RPI BS EE grad.

Week 13

32.  Mon 11/10/14 (L20)

Regular lecture.

  1. Videos - military applications of graphics
    1. I/ITSEC 2012 -- Rockwell Collins Shows Curved 6-projector Blended Simulation
    2. Complete Characters HD Soldier and Combat Animations - ITSEC 2012
    3. Terrasim Destructable Buildings, Custom Content Debut at I/ItSEC 2012
    4. Modeling and simulation is a standard term.
  2. Aliasing and anti-aliasing
    1. The underlying image intensity, as a function of x, is a signal, f(x).
    2. When the objects are small, say when they are far away, f(x) is changing fast.
    3. To display the image, the system evaluates f(x) at each pixel. That is, f(x) is sampled at x=0,1,2,3,...
    4. If f(x), when Fourier transformed, has frequencies higher than 1/2 (cycle per pixel), then that sampling is too coarse to capture the signal. See the Nyquist sampling theorem.
    5. When this hi-freq signal is sampled at too low a frequency, then the result computed for the frame buffer will have visual problems.
    6. It's not just that you won't see the hi frequencies. That's obvious.
    7. Worse, you will see fake low frequency signals that were never in the original scene. They are called aliases of the hi-freq signals.
    8. These artifacts may jump out at you, because of the Mach band effect.
    9. Aliasing can even cause (in NTSC) rapid intensity changes to produce fake colors, and vice versa.
    10. Aliasing can occur with time signals, like a movie of a spoked wagon wheel.
    11. This is like a strobe effect.
    12. The solution is to filter out the hi frequencies before sampling, or sample with a convolution filter instead of sampling at a point. That's called anti-aliasing.
    13. OpenGL solutions:
      1. Mipmaps.
      2. Compute scene on a higher-resolution frame buffer and average down.
      3. Consider pixels to be squares not points. Compute the fraction of each pixel covered by each object, like a line. Lines have to have finite width.
    14. Refs:
      1. http://en.wikipedia.org/wiki/Aliasing
      2. http://en.wikipedia.org/wiki/ClearType
      3. http://en.wikipedia.org/wiki/Wagon-wheel_effect
      4. http://en.wikipedia.org/wiki/Spatial_anti-aliasing (The H. Freeman referenced there worked at RPI for 10 years.)
      5. http://en.wikipedia.org/wiki/Mipmap
      6. http://en.wikipedia.org/wiki/Jaggies
  3. Angel_UNM_14_10_4.ppt Imaging Applications
  4. Angel_UNM_14_10_5.ppt Rendering the Mandelbrot Set
  5. WebGL/7E/07/ hawaiiImage, render1
  6. Notes written on my tablet during the lecture: 1110.pdf
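The fake-low-frequency claim above is easy to demonstrate numerically: a signal at 0.9 cycles per pixel, sampled once per pixel, produces exactly the same sample values as a 0.1 cycles-per-pixel signal. A small JavaScript check (illustrative, not course code):

```javascript
// Sample cos(2*pi*freq*n) at integer pixel positions n.
function sampleCos(freq, n) { return Math.cos(2 * Math.PI * freq * n); }

// 0.9 cycles/pixel is above the Nyquist limit of 0.5 cycles/pixel,
// so it aliases down to |0.9 - 1| = 0.1 cycles/pixel.
const hi = [], lo = [];
for (let n = 0; n < 16; n++) {
  hi.push(sampleCos(0.9, n));
  lo.push(sampleCos(0.1, n));
}
```

The two sample sequences agree to machine precision, which is exactly why the renderer must filter out the high frequencies before sampling rather than after.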

33.  Wed 11/12/14 (L21)

Regular lecture (in the lab room).

  1. Videos - 3D input
    1. Magic 3D Pen
    2. 123D Catch - How to Make 3D Models from Pictures
    3. SIGGRAPH 2013: Scott Metzger, The Foundry And Nvidia demonstrate the Rise Project
  2. Angel_UNM_14_11_1.ppt
  3. Angel_UNM_14_11_2.ppt

34.  Thurs 11/13/14 (L22)

Regular lecture.

  1. I now have two Oculus Rift DK2 units. One will be used for a term project (only one team asked). See Shen to borrow the other for a few days.
    The kit has a number of parts, cables, etc., that are listed in the manual. Please return them all.
  2. Stereo viewing. There are a number of techniques for stereo viewing of movies and videos, dating back to the 19th century.
    1. Make the left image red and the right one blue. Use red-blue glasses.
      This decreases the color experience, but the glasses are cheap.
    2. Polarize the left and right images differently. Movie theatres do this.
      The glasses are cheap. There is full color. The projection process is more complicated (= expensive).
    3. The glasses have shutters that block and clear the left and right eyes in turn. In sync, the TV or projector shows the left or right image.
      The projector is cheaper. The glasses are more expensive.
    4. Use a display with a parallax barrier.
      Only one person can view at a time. No glasses are needed.
    5. Use the fact that your eyes see bright objects faster than they see dim objects (the Pulfrich effect). The glasses have a simple grey filter over one eye. Beer companies have used this for commercials.
      No special display is needed. This works only with moving objects.
  3. Phoenix Aerial UAV LiDAR - V2
  4. Videos from students
    1. Head Tracking for Desktop VR Displays using the WiiRemote. The guy who made it is Johnny Lee. He does a bunch of other cool graphics and motion tracking stuff with the Nintendo Wii too. Here’s a link to his website with all of his other projects. Thanks to Nicholas Cesare.
    2. Cool Sound and Water Experiment related to aliasing. Thanks to Rebecca Nordhauser for this.
  5. Last week I was at the ACM SIGSPATIAL GIS conference in Dallas. Here is one big graphics dataset that was mentioned.
    1. http://chriswhong.com writes about processing a database listing all the taxi rides in Manhattan for a year.
  6. WebGL/7E/07/ render1: Sierpinski gasket rendered to a texture and then displayed on a square.
  7. render1v2: renders a triangle to a texture and displays it by a second rendering on a smaller square, so the blue clear color is visible around the rendered square.
  8. Angel_UNM_14_11_3.ppt - skip for now
  9. Angel_UNM_14_11_4.ppt - skip for now
  10. Angel_UNM_14_12_1.ppt Hierarchical Modeling 1
  11. Angel_UNM_14_12_2.ppt Hierarchical Modeling 2
  12. Angel_UNM_14_12_3.ppt Graphical Objects and Scene Graphs 1

Week 14

35.  Mon 11/17/14 (L23)

  1. Barbara Ruel, Director of Diversity and Women In Engineering Programs at Rensselaer Polytechnic Institute, will talk to us for a few minutes about a project being planned for next spring.
  2. Angel_UNM_14_12_4.ppt Graphical Objects and Scene Graphs 2
  3. Angel_UNM_14_12_5.ppt Rendering overview. At this point we've learned enough WebGL. The course now switches to learn the fundamental graphics algorithms used in the rasterizer stage of the pipeline.
  4. Notes written on my tablet during the lecture: 1117.pdf

36.  Wed 11/19/14

No lecture, just the regular chance to talk to Shen.

37.  Thurs 11/20/14 (L24)

A regular lecture.

  1. http://www.ted.com/talks/jinha_lee_a_tool_that_lets_you_touch_pixels TED talk: Reach into the computer and grab a pixel
  2. Angel_UNM_14_13_1.ppt Clipping
  3. View normalization or projection normalization
    1. We want to view the object with our desired perspective projection.
    2. To do this, we transform the object into another object that looks like an amusement park fun house (all the angles and lengths are distorted).
    3. However, the default parallel projection of this normalized object gives exactly the same result as our desired perspective projection of the original object.
    4. Therefore, we can always clip against a 2x2x2 cube, and project thus: (x,y,z)->(x,y,0) etc.
  4. Angel_UNM_14_13_2.ppt Polygon Rendering
  5. http://www.ted.com/talks/janet_iwasa_how_animations_can_help_scientists_test_a_hypothesis
  6. Notes written on my tablet during the lecture: 1120.pdf
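The fun-house normalization idea above can be checked numerically. This is a simplified sketch (eye at the origin looking down -z, image plane z = -1, near/far distances n and f; the function names are invented for the example, not WebGL API calls):

```javascript
// Desired perspective projection of an eye-space point onto z = -1.
function perspective(p) {
  return [-p[0] / p[2], -p[1] / p[2]];
}

// "Fun-house" normalization: divide x and y by -z now, and remap z so
// that the near plane goes to -1, the far plane to +1, and depth
// ordering is preserved inside the normalized box.
function normalizePoint(p, n, f) {
  const [x, y, z] = p;
  return [-x / z, -y / z, (f + n) / (f - n) + (2 * f * n) / ((f - n) * z)];
}

// Default parallel projection of the normalized object: just drop z.
function parallel(q) {
  return [q[0], q[1]];
}
```

Parallel projection of the distorted object gives the same (x, y) as the desired perspective projection of the original, which is why clipping and projection can always be done against the fixed 2x2x2 cube.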

Week 15

38.  Mon 11/24/14 (L25)

  1. Follow the instructions sent to you via LMS to sign up for your 5-minute project presentation. That's one presentation per group. Anyone who hasn't signed up by the end of Monday will be assigned a slot.
  2. Chris Kluwe on how augmented reality will change sports and build empathy.
  3. Visibility methods:
    1. Painters:
      1. The painter's algorithm is tricky when faces are close in Z.
      2. Sorting the faces is hard and maybe impossible. Then you must split some faces.
      3. However sometimes some objects are always in front of some other objects. Then you can render the background before the foreground.
    2. Z-buffer:
      1. Subpixel objects randomly appear and disappear (aliasing).
      2. Artifacts occur when objects are closer than their Z-extent across one pixel.
      3. This happens on the edge where two faces meet.
    3. BSP tree:
      1. In 3D, many faces must be split to build the tree.
    4. The scanline algorithm can feed data straight to the video D/A. That was popular decades ago before frame buffers existed. It is popular again when frame buffers are the slowest part of the pipeline.
    5. A real implementation, with a moving foreground and fixed background, might combine techniques.
    6. References: wikipedia.
  4. Angel_UNM_14_13_3.ppt Rasterization
  5. Angel_UNM_14_13_4.ppt Display Issues
  6. Many of these algorithms were developed for HW w/o floating point, where even integer multiplication was expensive.
  7. Efficiency is now less important in most cases (unless you're implementing in HW).
  8. The idea of clipping with a 6-stage pipeline is an important one.
  9. Jim Clark, http://en.wikipedia.org/wiki/James_H._Clark, a prof at Stanford, made a 12-stage pipeline using 12 copies of the same chip, and then left Stanford to found SGI.
    1. Later he bankrolled Netscape and 2 other companies.
    2. More recently he had the world's 4th largest yacht: http://www.forbes.com/sites/ryanmac/2012/05/15/billionaire-jim-clark-seeks-more-than-100-million-for-two-superyachts/.
  10. My note on Bresenham Line and Circle Drawing. Jack Bresenham, then at IBM, invented these very fast ways to draw lines and circles with only integer addition and subtraction. My note gives step-by-step derivations by transforming slow and clear programs to fast and obscure programs.
  11. James Patten on The best computer interface may be your hands.
  12. Raytracing jello brand gelatin
  13. My note on Two polygon filling algorithms
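The Bresenham note above advertises line drawing with only integer addition and subtraction; here is a hedged JavaScript sketch for the first octant (0 ≤ slope ≤ 1; names invented for the example, and my note gives the full derivation):

```javascript
// Integer-only Bresenham line, first octant: 0 <= (y1-y0) <= (x1-x0).
// Returns the list of pixels on the line.
function bresenham(x0, y0, x1, y1) {
  const dx = x1 - x0, dy = y1 - y0;
  let d = 2 * dy - dx;        // decision variable; all arithmetic is integer
  let y = y0;
  const pixels = [];
  for (let x = x0; x <= x1; x++) {
    pixels.push([x, y]);
    if (d > 0) { y++; d -= 2 * dx; }  // step up and reset the error
    d += 2 * dy;                      // accumulate the slope error
  }
  return pixels;
}
```

Note that the loop body contains only comparisons, additions, and subtractions, which is what made it fast on hardware without floating point or cheap multiplication.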

38.1  Curves.

This is the next chapter of Angel. Since we're out of time, and since WebGL does this worse than full OpenGL, here is a summary. Big questions:

  1. What math to use?
  2. How should the designer design a curve?
  3. OpenGL implementation.
  4. Reading: bezier.pdf
  5. Partial summary:
    1. To represent curves, use parametric (not explicit or implicit) equations.
    2. Use connected strings or segments of low-degree curves, not one high-degree curve.
    3. If the adjacent segments match tangents and curvatures at their common joint, then the joint is invisible.
    4. That requires at least cubic equations.
    5. Higher degree equations are rarely used because they have bad properties such as:
      1. less local control,
      2. numerical instability (small changes in coefficients cause large changes in the curve),
      3. roundoff error.
    6. See my note on that: Hi Degree Polynomials.
    7. One 2D Cartesian parametric cubic curve segment has 8 d.f.
      {$ x(t) = \sum_{i=0}^3 a_i t^i$},
      {$ y(t) = \sum_{i=0}^3 b_i t^i$}, for {$0\le t\le1$}.
    8. Requiring the graphic designer to enter those coefficients would be unpopular, so other APIs are common.
    9. Most common is the Bezier formulation, where the segment is specified by 4 control points, which also total 8 d.f.: P0, P1, P2, and P3.
    10. The generated curve starts at P0, goes near P1 and P2, and ends at P3.
    11. The curve stays inside the control polygon, the convex hull of the control points. A flatter control polygon means a flatter curve.
    12. A choice not taken would be to have the generated curve also go thru P1 and P2. That's called a Catmull-Rom-Oberhauser curve. However that would force the curve to go outside the control polygon by a nonintuitive amount. That is considered undesirable.
    13. Instead of 4 control points, a parametric cubic curve can also be specified by a starting point and tangent, and an ending point and tangent. That also has 8 d.f. It's called a Hermite curve.
    14. The three methods (polynomial, Bezier, Hermite) are easily interconvertible.
    15. Remember that we're using connected strings or segments of cubic curves, and if the adjacent segments match tangents and curvatures at their common joint, then the joint is invisible.
    16. That reduces each successive segment from 8 d.f. down to 2 d.f.
    17. This is called a B-spline.
    18. From a sequence of control points we generate a B-spline curve that is piecewise cubic and goes near, but probably not thru, any control point (except perhaps the ends).
    19. Moving one control point moves the adjacent few spline pieces. That is called local control. Designers like it.
    20. One spline segment can be replaced by two spline segments that, together, exactly draw the same curve. However they, together, have more control points for the graphic designer to move individually. So now the designer can edit smaller pieces of the total spline.
    21. Extending this from 2D to 3D curves is obvious.
    22. Extending to homogeneous coordinates is obvious. Increasing a control point's weight attracts the nearby part of the spline. This is called a rational spline.
    23. Making two control points coincide means that the curvature will not be continuous at the adjacent joint.
      Making three control points coincide means that the tangent will not be continuous at the adjacent joint.
      Making four control points coincide means that the curve will not be continuous at the adjacent joint.
      Doing this is called making the curve (actually the knot sequence) non-uniform. (The knots are the values of the parameter for the joints.)
    24. Putting all this together gives a non-uniform rational B-spline, or a NURBS.
    25. A B-spline surface is a grid of patches, each a bi-cubic parametric polynomial.
    26. Each patch is controlled by a 4x4 grid of control points.
    27. When adjacent patches match tangents and curvatures, the joint edge is invisible.
    28. The surface math is an obvious extension of the curve math.
      1. {$ x(u,v) = \sum_{i=0}^3\sum_{j=0}^3 a_{ij} u^i v^j $}
      2. {$y, z $} are similar.
      3. One patch has 48 d.f. for Cartesian points, or 64 d.f. for homogeneous points, although most of those are used to establish continuity with adjacent patches.
  6. My extra enrichment info on splines: Splines.
  7. Notes written on my tablet during the lecture: 1124.pdf
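The 4-control-point Bezier formulation in the summary above can be evaluated with de Casteljau's algorithm, which is nothing but repeated linear interpolation. A hedged JavaScript sketch (illustrative names, 2D points):

```javascript
// Linear interpolation between two 2D points.
function lerpPt(a, b, t) {
  return [a[0] + (b[0] - a[0]) * t, a[1] + (b[1] - a[1]) * t];
}

// Evaluate one cubic Bezier segment at parameter t in [0,1]
// by de Casteljau's repeated linear interpolation.
function bezier(p0, p1, p2, p3, t) {
  const q0 = lerpPt(p0, p1, t), q1 = lerpPt(p1, p2, t), q2 = lerpPt(p2, p3, t);
  const r0 = lerpPt(q0, q1, t), r1 = lerpPt(q1, q2, t);
  return lerpPt(r0, r1, t);
}
```

This makes the stated properties easy to see: at t = 0 the result is exactly P0 and at t = 1 exactly P3, and because every step is a convex combination, the curve can never leave the convex hull of the four control points.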

Week 16

39.  Mon 12/1/14

  1. Summary of CAD, ray tracing, and radiosity.
  2. http://geometrie.foretnik.net/files/NURBS-en.swf
  3. Notes written on my tablet during the lecture: 1201.pdf

40.  Wed 12/3/14

Student term project presentations.

  1. Kathryn Bennett,
  2. Andrew Lynch,
  3. Anthony Handwerker/Nick Colclasure,
  4. Joshua Goldberg/Dave Vanderzee/Nick Payne.
  5. Jay Jenson/ Vinson Liu/ Kyle Samson,

41.  Thurs 12/4/14

Student term project presentations.

  1. William Kronmiller,
  2. Ian Marshall/Jordan Horwich,
  3. Rachel King/Michael Mortimer,
  4. Andrew Morris/Dustin Tsui/Abhijith,
  5. Matthew Holmes
  6. Ben Mizera/Jimmy Li
  7. Jeremy Spitz
  8. Bryant Pong
  9. Bowen Nie/Yiwen Zhang
  10. Xitu Chen/Anli Ji
  11. John Conover/Christopher Ngai
  12. Spencer Godbout
  13. Trilok/Paul
  14. Tim Terrezza/Steven Cannavale

Term projects due.

  1. Notes written on my tablet during the lecture: 1204.pdf

Week N

42.  Sun 12/14/14 review

Review session, DCC (Darrin) 330, 5-7 pm.

  1. Some sample WebGL questions are here.
  2. 2013 final exam and solutions.
  3. William Rory Kronmiller's notes. (Thanks!)
  4. There will be no questions on
    1. shears
    2. the specific projection matrix.

43.  Mon 12/15/14 final exam

Final Exam, 3-6 pm, Low 4050. Solution.