W Randolph Franklin home page
ComputerGraphicsFall2013 home page


I (WRF) will update this page after each lecture, including important announcements etc, and, usually, a copy of what I wrote on my tablet or the overhead projector.

Quick links

  1. Course home page.
  2. Lectures, summarized on one page.
  3. Homeworks.
  4. RPI LMS WebCT.
  5. Google calendar.
  6. textbook.
  7. Programs I discussed in class (other than programs straight from Guha).

Contents

  1.  Mon 8/26/2013 (L1)
  2.  Linear algebra tutorial
  3.  Wed 8/28/2013
  4.  Thu 8/29/2013 (L2)
  5.  Thu 9/5/2013 (L3)
  6.  New Teaching Assistant
  7.  Mon 9/9 (L4)
  8.  Wed 9/11/13
  9.  Thu 9/12/13 (L5)
  10.  Mon 9/16 (L6)
  11.  Wed 9/18
  12.  Thurs 9/19 (L7)
  13.  Mon 9/23/13 (L8)
  14.  Wed 9/25/13
  15.  Thurs 9/26/13 (L9)
  16.  Mon 9/30/13 (L10)
  17.  Wed 10/2/13
  18.  Thurs 10/3/13 (L11)
  19.  Mon 10/7/13 (L12)
    19.1  EMPAC tour
    19.2  Possible term project - ECSE interactive video display
  20.  Wed 10/9/13 Review for midterm
  21.  Thurs 10/10/13 Midterm exam
  22.  Tues 10/15/13
    22.1  Sample Questions
  23.  Wed 10/16/13
  24.  Thu 10/17 (L13)
    24.1  Chapter 10 ctd.
    24.2  Chapter 11: Color and Light
    24.3  Sample questions:
  25.  Mon 10/21/13 (L15)
    25.1  Sample questions
  26.  Wed 10/23/13
  27.  Thurs 10/24/13 (L16)
    27.1  Chapter 14: Raster algorithms, p 527ff.
    27.2  Sample questions
  28.  Mon 10/28/13 (L17)
    28.1  Applied Parallel Computing for Engineers (Proposed spring course)
    28.2  Chapters 15, 16, 17 More splines
    28.3  Chapter 18 Projections, pp 641ff
    28.4  Chapter 19: Fixed Functionality Pipelines, pp 675ff.
    28.5  Chapter 20
  29.  Thurs 10/31/13 (L18)
    29.1  Sample questions
  30.  Mon 11/4/13 (L19)
  31.  Wed 11/6/13
  32.  Thurs 11/7/13
  33.  Mon 11/11/13 (L20)
    33.1  Sample questions
  34.  Thurs 11/14/13 (L21)
    34.1  Sample questions:
  35.  Mon 11/18/13 (L22)
    35.1  ECSE-4965-01 Applied Parallel Computing for Engineers
    35.2  Term Project Presentations
    35.3  Relevant performance at EMPAC
    35.4  Prof Radke special talk.
  36.  Thurs 11/21/13
    36.1  Studying for the final exam
    36.2  WebGL intro.
    36.3  Sample questions about WebGL:
    36.4  NVIDIA 2013 GPU Technology Conference
  37.  Mon 11/25/13
  38.  Mon 12/2/13
    38.1  What I did last Tues
    38.2  Term Project Grading
    38.3  Student term project presentations
    38.4  Review for final exam
  39.  Wed 12/4/13
    39.1  Final exam conflicts
    39.2  Student term project presentations
  40.  Thurs 12/5/13
    40.1  Student term project presentations
  41.  Wed 12/12 6:30-9:30 pm
    41.1  Presentation grades
    41.2  Final exam
  42.  After the semester and after you graduate
    42.1  Computer Programming Position Available Beginning Jan 2014

Week 1

1.  Mon 8/26/2013 (L1)

  1. Discuss syllabus
  2. Intro to OpenGL.
    1. It's a platform-neutral competitor to DirectX. See http://en.wikipedia.org/wiki/Comparison_of_OpenGL_and_Direct3D .
    2. The competition improves both.
    3. The designers made a decision to do only rendering, no input, audio, or windowing.
    4. The OpenGL standards committee gradually adds and subtracts features.
    5. The goal is to exploit the latest GPUs while keeping the size manageable.
    6. Our text uses OpenGL2; the most recent version is OpenGL4.
    7. OpenGL2 is much easier to write programs in. The hello world program is much smaller.
    8. OpenGL4 exploits the latest GPU features. However, many expensive business laptops, such as my Thinkpad x201, cannot run it.
    9. The Khronos Group, the OpenGL standards committee, hates OpenGL2 and wishes it were dead, but the users push back.
    10. Using any version teaches general graphics programming paradigms.
    11. I'll mostly teach OpenGL2 because it is more useful.
    12. I'll also introduce OpenGL4.
    13. OpenGL4, being lower level, opens a niche for an easy-to-use replacement graphics API. I recommend Qt.
    14. Tools like Blender are higher-level, very good for realistic scenes, but too fancy for simple graphics programming. I'll leave them to other classes.
    15. See http://en.wikipedia.org/wiki/Opengl .
    16. WebGL is an API for HTML5. It looks good.
  3. Show a simple OpenGL program, box.cpp, from the code zipfile on the publisher site.
  4. Reading assignment: Guha, chapter 1.
  5. Homework 1 is online; due next Wed.
  6. Student-faculty hike on Sun Sept 22:
    Jeff Trinkle is planning a hike for students and faculty members on Monument Mountain on Sunday September 22. Please let me know if you’re interested and able to join us.
  7. Lecture notes: 0826.pdf
    1. Watch History of Computer Animation - P1. Note that this is one person's view; there were many other contributors.

2.  Linear algebra tutorial

  1. For students confused by the linear algebra questions on homework 1, the following may help:
    1. vector methods.
    2. Vector Math for 3D Computer Graphics
    3. Working with vectors, from Maths: A Student's Survival Guide
  2. Also see wikipedia.
  3. Some older graphics texts have appendices summarizing the relevant linear algebra.
  4. The scalar (dot) product is needed to do things like this:
    1. compute how a ball bounces off a wall in pong
    2. compute lighting effects (how light reflects from a surface)
  5. The cross product's applications include:
    1. compute torque and angular momentum.
    2. compute the area of a parallelogram.
    It is probably the only non-associative operator you'll ever see: Ax(BxC) != (AxB)xC.
  6. The triple product (A.BxC) computes the volume of a parallelepiped.
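These operators are small enough to code directly. A minimal C++ sketch (the pong-style `reflect` helper is my own illustrative name, not from the text):

```cpp
#include <array>

using Vec3 = std::array<double, 3>;

// Scalar (dot) product: a.b = |a| |b| cos(theta).
double dot(const Vec3 &a, const Vec3 &b) {
    return a[0]*b[0] + a[1]*b[1] + a[2]*b[2];
}

// Cross product: perpendicular to a and b; |a x b| = area of the
// parallelogram they span. Note that it is not associative.
Vec3 cross(const Vec3 &a, const Vec3 &b) {
    return {a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0]};
}

// Triple product a.(b x c): signed volume of the parallelepiped on a, b, c.
double triple(const Vec3 &a, const Vec3 &b, const Vec3 &c) {
    return dot(a, cross(b, c));
}

// How a pong ball bounces off a wall with unit normal n: r = d - 2(d.n)n.
Vec3 reflect(const Vec3 &d, const Vec3 &n) {
    double k = 2 * dot(d, n);
    return {d[0] - k*n[0], d[1] - k*n[1], d[2] - k*n[2]};
}
```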

3.  Wed 8/28/2013

Lab to start using OpenGL and libqglviewer.

4.  Thu 8/29/2013 (L2)

  1. Graphics display hardware, based on Guha page 15. The progress of Computer Graphics is largely the progress of hardware. We'll see more of this later. However, here's an intro.
    1. What physical principles are each type of HW based on?
      1. CRT: certain rare earth phosphors emit photons when hit by electrons. (Einstein's Nobel was for the closely related photoelectric effect, not relativity.)
      2. LCD: electric field causes big asymmetric molecules to untwist so that they no longer rotate polarized light passing through them.
    2. What engineering challenges required solving?
      1. Shadow-mask CRT: electron beams travel varying distances at different angles, but don't hit the wrong phosphor even as the system gets hotter. The precision is 0.1%.
      2. Hi-performance graphics requires hi bandwidth memory.
      3. Virtual reality headsets require knowing where your head is and its angle (harder).
        Think of 3 ways to do this.
    3. What tech advances enabled the solutions?
      1. Raster graphics requires cheap memory.
      2. LCD panels require large arrays of transistors.
  2. Note on Engineering Grounded In Reality and famous graphics alumni.
  3. Executive summary of Portability And Standards.
  4. OpenGL levels of API abstraction:
    1. (lowest) OS
    2. GL
    3. GLU
    4. GLUT (or a competitor) GLUT - The OpenGL Utility Toolkit interfaces between OpenGL and your windowing system. It adds things, such as menus, mouse and keyboard interface, that were considered too platform-dependent and too far outside OpenGL's core mission to include in OpenGL. GLUT is platform independent, but quite basic. There are several alternatives, some of which are platform dependent but more powerful. You can't always have everything at once. However GLUT is the safe solution.
    5. (highest) your program
  5. I've assembled some important OpenGL points here: OpenGL Design tradeoffs and notes
  6. Example of abstraction: a logical keyboard sends keypresses. Physically, it might be a
    1. keyboard
    2. touchscreen
    3. voice input
  7. Name several logical input devices.
  8. OpenGL conflicting coordinate systems
    1. window origin is top left
    2. OpenGL coordinate origin is bottom left
    3. sometimes need to convert y
  9. nested regions (this terminology is not standardized)
    1. (biggest) computer screen
    2. OpenGL window
    3. viewport, to draw into
  10. OpenGL transformation matrices
    1. ModelView
    2. Projection
  11. Matrix notes
    1. Transformation routines modify the existing matrix.
    2. Calling glOrtho twice in succession is probably not what you want to do.
    3. The last routine that you call is the first one applied to the object.
  12. SIGGRAPH 2013 : Computer Animation Festival Preview Trailer
  13. New programs from Guha:
    1. squareannulus*.cpp (p 68) show more efficient ways to plot lots of vertices.
      Storing vertices in vertex arrays is required in current OpenGL.
  14. Reading assignment: Guha, chapters 2, 3.
  15. Lecture notes: 0829.pdf
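The "last routine called is the first one applied" rule in the matrix notes above can be demonstrated with plain matrices. A sketch in 2D homogeneous coordinates, imitating how OpenGL post-multiplies the current matrix (names are mine, not OpenGL's):

```cpp
#include <array>

// 3x3 matrices acting on 2D homogeneous points, stored row-major.
// OpenGL post-multiplies the current matrix: after calling translate then
// rotate, the current matrix is T*R, so the rotation (called last) is the
// first transformation the vertex sees.
using Mat3 = std::array<double, 9>;
using Pt = std::array<double, 3>;

Mat3 mul(const Mat3 &a, const Mat3 &b) {
    Mat3 c{};
    for (int i = 0; i < 3; ++i)
        for (int j = 0; j < 3; ++j)
            for (int k = 0; k < 3; ++k)
                c[3*i+j] += a[3*i+k] * b[3*k+j];
    return c;
}

Pt apply(const Mat3 &m, const Pt &p) {
    Pt q{};
    for (int i = 0; i < 3; ++i)
        for (int k = 0; k < 3; ++k)
            q[i] += m[3*i+k] * p[k];
    return q;
}

Mat3 translate(double tx, double ty) { return {1,0,tx, 0,1,ty, 0,0,1}; }

Mat3 rotate90() {   // 90-degree counterclockwise rotation about the origin
    return {0,-1,0, 1,0,0, 0,0,1};
}
```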

Week 2

5.  Thu 9/5/2013 (L3)

  1. Homework 2 is out; due in one week.
  2. New programs from Guha:
    1. circle.cpp (page 46) shows lineloop and approximating a curve with a polyline.
    2. circularannuluses.cpp (p 48) introduces the depth (or Z) buffer. With it, the nearest object on each pixel is displayed. W/o it, the last object drawn into each pixel is displayed.
    3. helix.cpp (p 51): perspective and ortho projections
      1. Example 2.1, p 53.
      2. Experiment 2.24, p 54.
      These builtin projection functions have been removed from OpenGL4; now you write your own. I've mentioned them because they are useful if you're using OpenGL2 or 3.0.
    4. hemisphere.cpp (p 59) shows
      1. rotation and translation
      2. perspective projection
      These builtin transformation functions have been removed from OpenGL4; now you write your own.
      Triangle fans and strips represent faceted objects with fewer vertices.
      Each new transformation concatenates onto the modelview matrix. When you plot a vertex, the current modelview matrix transforms it. The last transformation concatenated onto the modelview matrix is the first applied to the vertex.
    5. Ignore the display list examples in Section 3.2; display lists are deprecated.
    6. fonts.cpp, p 72: bitmapped and stroke fonts
    7. mouse.cpp: mouse button callback
    8. mouseMotion.cpp: mouse motion callback
    9. moveSphere.cpp, p 76: non ASCII key callback
    10. menus.cpp - p 77
    11. nopush.cpp - a new program to test what happens if you pop from the matrix stack w/o pushing. The program produces no output.
    12. extrapush.cpp - a new program to test the effect of more pushes than pops. The stack did not overflow, for the number of extra pushes that I tried.
    13. canvas.cpp - p 78 - Primitive drawing program.
  3. SIGGRAPH 2013: Technical Papers Preview Trailer
  4. Lecture notes: 0905.pdf
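The depth-buffer behavior that circularannuluses.cpp demonstrates can be sketched in a few lines. A toy one-pixel framebuffer (names mine): with depth testing the nearest fragment wins; without it the last fragment drawn wins.

```cpp
struct Pixel {
    int color = 0;
    double depth = 1e30;   // depth buffer initialized to "far away"
};

// Draw one fragment into the pixel. With the depth test enabled, the
// fragment is kept only if it is nearer than what is already there.
void drawFragment(Pixel &p, int color, double depth, bool depthTest) {
    if (!depthTest || depth < p.depth) {
        p.color = color;
        if (depthTest) p.depth = depth;
    }
}
```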

Week 3

6.  New Teaching Assistant

Because of messy multibody scheduling, our new teaching assistant lineup is:

  1. Dan Benedetti, benedd at rpi.edu,
    Benedetti.jpg
  2. Shan Zhong, zhongs2 at rpi.edu,
    Zhong.jpg

7.  Mon 9/9 (L4)

  1. SIGGRAPH 2013 : Emerging Technologies Preview Trailer
  2. New programs from Guha:
    1. moveSphere.cpp, p 76: non ASCII key callback
    2. menus.cpp - p 77
    3. canvas.cpp - p 78 - Primitive drawing program.
    4. glutObjects.cpp - p 80
      1. 9 builtin glut polyhedra
      2. rendered both shaded and as wire frames,
      3. rotated by user input
      4. initial look at lighting in OpenGL; we'll see this a lot more.
      5. enabling lighting,
      6. enabling specific lights,
      7. material properties,
      8. diffuse, ambient and specular lights.
    5. clippingPlanes.cpp - p 82
      1. add more clip planes to the 6 original ones (the sides of the view volume)
      2. enable and disable clipping planes.
      3. enabling specific clipping plane,
      4. clipping plane equation.
    6. viewports.cpp - p 86
      1. change the viewport to a different part of the window, and draw more stuff.
    7. windows.cpp - p 87
      1. multiple top level windows, each with its own callbacks.
  3. CGI Modeling Showreel 2013
  4. Reading assignment: Guha, chapter 4.
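The clipping plane equation in clippingPlanes.cpp works like this: a plane is a coefficient 4-tuple (a, b, c, d), and the half-space where ax + by + cz + d >= 0 is kept, mirroring glClipPlane's convention. A minimal CPU-side sketch:

```cpp
// An OpenGL-style clip plane: the equation a*x + b*y + c*z + d = 0.
// Points where a*x + b*y + c*z + d >= 0 are kept; the rest are clipped.
struct ClipPlane { double a, b, c, d; };

bool isKept(const ClipPlane &p, double x, double y, double z) {
    return p.a*x + p.b*y + p.c*z + p.d >= 0.0;
}
```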

8.  Wed 9/11/13

This lab is a chance for you to talk to the TAs.

Homework 3 is out; due in one week.

9.  Thu 9/12/13 (L5)

  1. TA office hours
    1. Dan Benedetti, Tues noon-1
    2. Shan Zhong, Fri 2-3
    Notes:
    1. They'll be in the Flipflop lounge, JEC 6037.
    2. Come near the start of the time; if there is no one there they may leave.
    3. If you need more time with the TAs, or a different time, then write them, and they will try to accommodate you.
    4. They both also attend the lab.
    5. I stay after most lectures (except today - the Dept Head is hosting the ECSE faculty at the pub) as long as anyone wants to talk to me.
    6. I can also schedule other times - just ask.
    7. I'll entertain questions on any legal topic, it doesn't have to be Computer Graphics.
    8. For your future reference, I copied this section, with the other permanent material, to the course syllabus.
  2. SIGGRAPH 2012 Technical Papers Video Preview
  3. Transformation review
    1. Each type of common transformation (translate, rotate, scale, project) is a matrix.
    2. If applying several transformations, it is faster to first multiply the matrices, then just multiply all the points by that one matrix.
    3. Most OpenGL transformation routines modify one of two current transformation matrices: the modelview or the projection.
    4. The modelview matrix moves the world so that the camera is where you want it, relative to the objects. Unless you did a scale, the transformation is rigid - it preserves distances (and therefore also angles).
    5. The projection matrix view-normalizes the world to effect your desired projection and clipping. For a perspective projection, it does not preserve distances or angles, but does preserve straight lines. We'll cover this later.
    6. The last transformation catenated onto the current matrix is the first transformation applied to the object.
    7. OpenGL combines the two matrices, so the modelview matrix is applied first to the object.
  4. Rotations: My note on 3D rotation
    1. all rigid transformations in 3D that don't move the origin have a line of fixed points, i.e., an axis, that they rotate around.
    2. deriving the vector formula for a rotation given the axis and angle
    3. computing the matrix from a rotation axis and angle
    4. testing whether a matrix is a rotation
    5. if it is, then finding the axis and angle
  5. NVIDIA's Light-field Glasses Prototype demo @ Siggraph 2013
  6. Programs I modified in class: moveSphere1.cpp, menus1.cpp
  7. Lecture notes: 0912.pdf
  8. (Unrelated to this course). Tonight is the annual IgNobel award ceremony, parodying the upcoming real Nobels. However, in 2005, Roy Glauber, the janitor who usually sweeps paper airplanes off the stage during the Ignobel ceremony, won a real Nobel. So, maybe the Nobels parody the Ignobels.
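The rotation-matrix test mentioned under Rotations above reduces to two checks: the matrix must be orthonormal (M transposed times M equals I) and have determinant +1 (determinant -1 would be a reflection). A sketch:

```cpp
#include <array>
#include <cmath>

using Mat3 = std::array<double, 9>;   // row-major 3x3

// A matrix is a rotation iff M^T M = I and det M = +1.
// (Given a rotation, the angle then satisfies cos(theta) = (trace-1)/2,
// and the axis is the eigenvector for eigenvalue 1.)
bool isRotation(const Mat3 &m, double eps = 1e-9) {
    // Check M^T M = I, entry by entry.
    for (int i = 0; i < 3; ++i)
        for (int j = 0; j < 3; ++j) {
            double s = 0;
            for (int k = 0; k < 3; ++k) s += m[3*k+i] * m[3*k+j];
            if (std::fabs(s - (i == j ? 1.0 : 0.0)) > eps) return false;
        }
    double det = m[0]*(m[4]*m[8]-m[5]*m[7])
               - m[1]*(m[3]*m[8]-m[5]*m[6])
               + m[2]*(m[3]*m[7]-m[4]*m[6]);
    return std::fabs(det - 1.0) < eps;
}
```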

Week 4

10.  Mon 9/16 (L6)

  1. SIGGRAPH 2011 Technical Papers Video Preview
  2. Rotations ctd: My note on 3D rotation
  3. Realtime Facial Animation With On-the-fly Correctives (SIGGRAPH 2013)
  4. Lecture notes: 0916.pdf

11.  Wed 9/18

Open lab.

  1. Homework 4 is online; due in one week.

12.  Thurs 9/19 (L7)

  1. 3-Sweep: Extracting Editable Objects from a Single Photo
  2. Rotations ctd: My note on 3D rotation
  3. Euler and angles and Gimbal lock
    1. http://www.youtube.com/watch?v=rrUCBOlJdt4&feature=related Gimble Lock - Explained.
      One problem with Euler angles is that multiple sets of Euler angles can degenerate to the same orientation. Conversely, making a small rotation from certain sets of Euler angles can require a jump in those angles. This is not just a math phenomenon; real gyroscopes experience it.
    2. http://en.wikipedia.org/wiki/Gimbal_lock
    3. What is Gimbal Lock and why does it occur? - an animator's view.
  4. Programs
    1. composeTransformations.cpp
  5. Lecture notes: 0919.pdf
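Gimbal lock can also be seen numerically: with the middle Euler angle at 90 degrees, the composed rotation Rz(a) Ry(90) Rx(c) depends only on a-c, so two of the three Euler angles now control the same degree of freedom. A sketch (the Z-Y-X convention here is one common choice, not necessarily the textbook's):

```cpp
#include <array>
#include <cmath>

using Mat3 = std::array<double, 9>;   // row-major 3x3

Mat3 mul(const Mat3 &x, const Mat3 &y) {
    Mat3 z{};
    for (int i = 0; i < 3; ++i)
        for (int j = 0; j < 3; ++j)
            for (int k = 0; k < 3; ++k)
                z[3*i+j] += x[3*i+k] * y[3*k+j];
    return z;
}

Mat3 rotX(double t) { double c = std::cos(t), s = std::sin(t);
    return {1,0,0, 0,c,-s, 0,s,c}; }
Mat3 rotY(double t) { double c = std::cos(t), s = std::sin(t);
    return {c,0,s, 0,1,0, -s,0,c}; }
Mat3 rotZ(double t) { double c = std::cos(t), s = std::sin(t);
    return {c,-s,0, s,c,0, 0,0,1}; }

// Z-Y-X Euler angles composed into one rotation matrix.
Mat3 euler(double a, double b, double c) {
    return mul(rotZ(a), mul(rotY(b), rotX(c)));
}
```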

Week 5

13.  Mon 9/23/13 (L8)

  1. Videos
    1. TED talk: Visualizing the medical data explosion by Anders Ynnerman
    2. Reconfigurable Camera Add-On, KaleidoCamera from SIGGRAPH 2013
  2. Rotation about the X, Y, or Z axis
  3. Programs
    1. rotatingHelix{1,2,3}.cpp, p 118. Animation with timer callback.
    2. ballAndTorus.cpp, p 120. Rotation about point not at origin.
    3. ballAndTorusWithFriction.cpp, p 124. Intro to physically-based modeling by adding friction.
    4. clown1.cpp, clown2.cpp, clown3.cpp develop an animated clown head, p 125
    5. floweringPlant.cpp, p 129
    6. boxWithLookAt.cpp transformation, p 133 - shows gluLookAt
    7. spaceTravel.cpp, p 153 - changes camera viewpoint as spacecraft moves
  4. Lecture notes: 0923.pdf
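The rotation about a point not at the origin in ballAndTorus.cpp is the standard sandwich T(P) R T(-P): translate P to the origin, rotate, translate back. A 2D homogeneous-coordinate sketch (names mine):

```cpp
#include <array>
#include <cmath>

using Mat3 = std::array<double, 9>;   // 2D homogeneous, row-major
using Pt = std::array<double, 3>;

Mat3 mul(const Mat3 &a, const Mat3 &b) {
    Mat3 c{};
    for (int i = 0; i < 3; ++i)
        for (int j = 0; j < 3; ++j)
            for (int k = 0; k < 3; ++k)
                c[3*i+j] += a[3*i+k] * b[3*k+j];
    return c;
}

Pt apply(const Mat3 &m, const Pt &p) {
    Pt q{};
    for (int i = 0; i < 3; ++i)
        for (int k = 0; k < 3; ++k)
            q[i] += m[3*i+k] * p[k];
    return q;
}

Mat3 translate(double tx, double ty) { return {1,0,tx, 0,1,ty, 0,0,1}; }

Mat3 rotate(double t) {
    double c = std::cos(t), s = std::sin(t);
    return {c,-s,0, s,c,0, 0,0,1};
}

// Rotate by angle t about the point (px, py): T(P) * R(t) * T(-P).
Mat3 rotateAbout(double t, double px, double py) {
    return mul(translate(px, py), mul(rotate(t), translate(-px, -py)));
}
```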

14.  Wed 9/25/13

Lab to talk to the TAs.

Homework 5 is online, due Oct 2.

15.  Thurs 9/26/13 (L9)

  1. Videos
    1. TED: Paul Debevec animates a photo-real digital face
    2. You can watch this at home: TED: Ed Ulbrich: How Benjamin Button got his face
    3. Physics demos Show off Fog at NVIDIA GTC keynote day 1
  2. Programs
    1. animateMan1.cpp etc, p 157 - interactively construct, then replay, an animation sequence with actor's joints' positions changing.
      I'll show the program running in class. You can study the code at home, and ask questions in class next Mon.
    2. ballAndTorusShadowed.cpp, p 161 - technique to simulate a shadow.
    3. getinfo.cpp shows how to retrieve OpenGL state such as version. OpenGL implementors have a lot of freedom for things like matrix stack size, and also often add extensions. You can read all this info.
      Note my printglstring macro, which prints a name, using the stringify operator, then gets and prints its value.
      #define printglstring(a) cout << #a << ": " << glGetString(a) << endl;
      I use similar macros to debug my own C++ programs. E.g., they print an expression, then print its value and maybe also print its execution time.
  3. Remote X sessions: (enrichment only) Running a program on one machine (the client) that does interactive graphics on another (the server) is hard to implement.
    1. That was one of the earliest goals of the ARPAnet in 1968. It didn't work out because the bandwidth over a phone line was too low, say 1KB/s. However the ARPAnet turned out to be useful for frivolous things like exchanging messages (smtp), as well as remote logins (telnet) and exchanging files (ftp). So it continued to grow...
      (Opinion): That's one of the most successful examples of government-private cooperation. The government (i.e., taxpayer) paid for the R&D, for several decades in this case. The immediate users were military and research universities. The tech was made freely available, and eventually private companies commercialized it.
    2. Project Athena was at MIT in the 1980s, sponsored by DEC and IBM, costing nearly $100M. It did distributed computing, including the X window system, which we still use. IMHO, it's still better than remote graphics on MS Windows.
    3. However, the default config for remote X sessions, which you get by ssh -X foo.rpi.edu is unusably slow. Too many low-level bits are sent, all encrypted.
    4. Here are some things I've discovered that help.
      1. Use a faster, albeit less secure, cipher:
        ssh -c arcfour,blowfish-cbc -C client.rpi.edu
      2. Use xpra; here's an example:
        1. On client: xpra start :7; DISPLAY=:7 xeyes&
        2. On server: xpra attach ssh:client.rpi.edu:7
      3. Use nx, which needs a server, e.g., FreeNX.
  4. Picking
    The user selects an object on the display with the mouse. How can we tell which object was selected? This is a little tricky.
    E.g., It's not enough to know what line of code was executed to draw that pixel (and even determining that isn't trivial). That line may be in a loop in a subroutine called from several places. We want to know, sort of, the whole current state of the program. Also, if that pixel was drawn several times, we want to know about only the last time that that pixel changed.
    Guha p 161-169 lists various messy methods, which are all now deprecated. The new official way is to use the color buffer to code the objects.
    First mention of the graphics pipeline, converting vertices to pixels.
  5. Other transformations in Chapter 5.
    1. Shears and reflections are easy.
    2. I won't examine you on affine transformations.
  6. Chapter 6: Advanced animation, p 241, space partitioning with Octrees
    1. Problems needing solving:
      1. If there are many objects but most are outside the viewing frustum, then OpenGL's processing objects that will later be clipped is slow.
      2. Testing the spacecraft against many asteroids for collisions is slow.
    2. Solutions:
      1. If there are only a few objects, don't worry, use the simple method.
      2. Otherwise, use quadtrees, octrees etc to partition space into blocks.
      3. I prefer a 1-level uniform grid; in practice it's usually not necessary to subdivide. Rather use a fine grid, say 100x100.
  7. Lecture notes: 0926.pdf
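The 1-level uniform grid suggested above for collision candidates can be sketched as follows (illustrative names; real code would also insert each object into every cell its extent overlaps and check neighboring cells):

```cpp
#include <vector>
#include <algorithm>

// A 1-level uniform grid over [0,1) x [0,1): hash each object's center
// into one of g*g cells; only objects sharing a cell need the exact
// (expensive) collision test.
struct Grid {
    int g;                                 // grid is g x g cells
    std::vector<std::vector<int>> cells;   // object ids per cell
    explicit Grid(int g_) : g(g_), cells(g_ * g_) {}

    int cellOf(double x, double y) const {
        int i = std::min(int(x * g), g - 1);
        int j = std::min(int(y * g), g - 1);
        return j * g + i;
    }
    void insert(int id, double x, double y) {
        cells[cellOf(x, y)].push_back(id);
    }
    // Candidate colliders: objects in the same cell as (x, y).
    const std::vector<int> &candidates(double x, double y) const {
        return cells[cellOf(x, y)];
    }
};
```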

Week 6

16.  Mon 9/30/13 (L10)

  1. Videos
    1. SIGGRAPH Asia 2012 : Technical Papers Trailer
    2. CGI VFX - Making of "Hulk" Part 1 - The Avengers - Industrial Light & Magic
  2. Note on 3D Interpolation (to replace textbook section on animating with Euler angles).
  3. Quaternions. This is an alternative method to rotate in 3D. Its advantages are:
    1. It starts from the intuitive axis-angle API.
    2. Animating a large rotation in small steps (by varying the angle slowly) is easy. In contrast, stepping the 3 Euler angles does not work well, and there's no obvious way to gradually apply a {$3\times3$} rotation matrix, {$M$}. (You could compute {$M^{1/100}$} and apply it 100 times, but that computation is messy. Eigenvectors are useful here.)
    3. When combining multiple rotations, the axis and angle of the combo is easy to find.
    4. Having only 4 parameters to represent the 3 degrees of freedom of a 3D rotation is the right number. Using only 3 parameters, as Euler angles do, causes gimbal lock. That is, you cannot always represent a smooth rotation by smooth changes of the 3 parameters. OTOH, using 9 parameters, as with a matrix, gives too much opportunity for roundoff errors causing the matrix not to be exactly a rotation. (You can snap the matrix back to a rotation matrix, but that's messy.)
    5. Guha p 253-268.
  4. (Enrichment) Broader impact: Why should the taxpayer fund this stuff? This was a concern when I was a Program Director at NSF, recommending funding for computer graphics. We were competing against researchers promising to cure cancer. However, we did get these awards (not all in graphics) funded.
    When you are job hunting, you should ask yourself what benefit your potential employer would get from hiring you. When I write a proposal to a federal agency to get money, I mention how it would benefit from funding me.
  5. Lecture notes: 0930.pdf
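The axis-angle API that makes quaternions intuitive can be sketched in a few lines: build the quaternion from an axis and angle, then rotate a vector v by computing q (0,v) q*. A minimal version (a production one would guard against zero-length axes):

```cpp
#include <array>
#include <cmath>

struct Quat { double w, x, y, z; };

// Unit quaternion for a rotation by `angle` about axis (ax, ay, az).
Quat fromAxisAngle(double ax, double ay, double az, double angle) {
    double len = std::sqrt(ax*ax + ay*ay + az*az);
    double s = std::sin(angle / 2) / len;
    return { std::cos(angle / 2), ax*s, ay*s, az*s };
}

// Hamilton product of two quaternions.
Quat mul(const Quat &a, const Quat &b) {
    return { a.w*b.w - a.x*b.x - a.y*b.y - a.z*b.z,
             a.w*b.x + a.x*b.w + a.y*b.z - a.z*b.y,
             a.w*b.y - a.x*b.z + a.y*b.w + a.z*b.x,
             a.w*b.z + a.x*b.y - a.y*b.x + a.z*b.w };
}

// Rotate v: embed it as the pure quaternion (0, v), then form q (0,v) q*.
std::array<double,3> rotate(const Quat &q, std::array<double,3> v) {
    Quat p{0, v[0], v[1], v[2]};
    Quat qc{q.w, -q.x, -q.y, -q.z};    // conjugate = inverse for unit q
    Quat r = mul(mul(q, p), qc);
    return { r.x, r.y, r.z };
}
```

Animating a large rotation in small steps then amounts to slowly increasing the angle passed to `fromAxisAngle`.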

17.  Wed 10/2/13

  1. Lab to talk to the TAs.
  2. Homework 6 online, due next week.

18.  Thurs 10/3/13 (L11)

  1. The final exam has been set for Wed 12/11 6:30-9:30.
    The first draft of the schedule had us on the very last exam slot. Because of student requests in previous years, I asked for an earlier time, and the registrar was kind enough to do this.
  2. Video
    1. SIGGRAPH 2012 : Emerging Technologies
  3. We will do Chapters 7, 8, and 9 quickly. Here are the important points:
    1. Often you have a triangle with some property given at each vertex. Examples include:
      1. position,
      2. color,
      3. normal vector to the surface.
    2. You have to interpolate that property to find its value at each interior pixel.
    3. Bilinear interpolation is used.
    4. If you have a more complicated polygon, like a quadrilateral or a ring, then first you triangulate it.
    5. That's tricky if the polygon is not convex; however we won't worry about that.
    6. The lines you added to triangulate the complicated polygon should be invisible in the resulting interpolation, i.e., the interpolated values in the two adjacent triangles should blend invisibly. That's not always completely true.
    7. There are many ways to triangulate a complicated polygon. They result in different interpolations.
    8. When animating a video with a complicated polygon, you also want the interpolations to be consistent from frame to frame. E.g., if the polygon rotates, you want to triangulate it consistently. That can be tricky, but is beyond this course.
  4. Lecture notes: 1003.pdf
    Warning I just noticed that Firefox's default PDF viewer does not display color typed text in my tablet notes. That's a problem since the colored typed text is used for important highlighted announcements. However (on linux) evince, okular, xpdf, and acroread all work.
    It's amazing how nonportable PDF can be. Once I created a simple drawing with inkscape that displayed four different ways on four different PDF renderers.
    The problem is that the PDF standard is large and has parts that are rarely used. So, some PDF writers and readers ignore those parts, or get them wrong. Then there's the question of how to render a PDF file that is slightly wrong but whose meaning appears clear. Acrobat ignores illegal objects that others try to render.
    This was a problem when I was helping to assemble the abstract book for the Fall Workshop in Computational Geometry at RPI in 2008. The speakers submitted 2-page PDF extended abstracts created with a variety of programs. I then used a variety of PDF-mangling tools to combine them into one PDF file with running page numbers. Unfortunately, the result looked a little better with the free programs I viewed it with, than when printed with acroread.
    There's a lesson here when designing standards. It makes OpenGL's decision to delete a lot of things more understandable.
  5. Chapter 10: curves etc. I gave an introduction, which I'll expand at the next lecture the week after next.
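The per-vertex interpolation described above can be implemented with barycentric coordinates, one standard way to carry a vertex property (color, normal, etc.) to an interior pixel (names mine):

```cpp
#include <array>

struct P2 { double x, y; };

// Twice the signed area of triangle (a, b, c).
double cross2(P2 a, P2 b, P2 c) {
    return (b.x - a.x) * (c.y - a.y) - (b.y - a.y) * (c.x - a.x);
}

// Value at point p inside triangle (a,b,c), given vertex values fa, fb, fc.
// The weights are area ratios and sum to 1.
double interp(P2 a, P2 b, P2 c, P2 p, double fa, double fb, double fc) {
    double area = cross2(a, b, c);
    double wa = cross2(p, b, c) / area;   // weight of vertex a
    double wb = cross2(a, p, c) / area;   // weight of vertex b
    double wc = cross2(a, b, p) / area;   // weight of vertex c
    return wa * fa + wb * fb + wc * fc;
}
```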

Week 7

19.  Mon 10/7/13 (L12)

19.1  EMPAC tour

Today for 4pm, go to the EMPAC lobby for an inside tour of EMPAC.

Eric Ameres has kindly agreed to show us around. He is Senior Research Engineer, Adjunct Professor at RPI EMPAC, and came to RPI from being CTO - EVP Engineering at On2 Technologies.

See his youtube channel.

19.2  Possible term project - ECSE interactive video display

This possible term project would design a prototype of an interactive large-screen video display for ECSE, to go in front of JEC6012, facing the elevators. The idea is that any member of the public could use a smart phone to control what information the display shows.

However, your prototype would use your laptop.

A large part of this project is designing the API. One idea is that the display would show a URL, which the user would browse to with his/her phone. At that URL would be an action menu. Leveraging on top of the phone's browser means that you don't need a separate app.

For security, to prevent remote control, we might frequently change the URL.

Although we could put a user keyboard in front of the display, letting them use their phones would be more exciting.

Possible info to display could be a faculty directory, ECSE courses, research projects, press releases, interesting stories from other sites like slashdot, etc, etc. The actual material is not so important yet, although you could populate it by scraping the ECSE web site, http://www.ecse.rpi.edu/ .

Another part of the API would be an easy way for ECSE members to add material. One method would leverage on top of AFS, because it's network-based and has flexible file permission facilities. There would be a storage point like /afs/rpi.edu/dept/ecse/videowall, with individual subdirectories for each person. The video wall control PC would read material from the directory, either randomly, in sequence, or controlled by the viewer.

The videowall might even have a few games for bored visitors to play while waiting for appointments. A game might even have two players, connected to different URLs on their phones.

If the prototype is good enough, then you're welcome to continue in the spring, perhaps as an independent study or URP. You'd have a budget to buy a large screen panel, with PC to control it, to install your system.

Eventually the videowall might have several screens, either with separate material, or joined to make one even larger display.

The goal is to make something to call attention to ECSE.

20.  Wed 10/9/13 Review for midterm

I'll be here to answer questions and review old exams.

Lecture notes: 1009.pdf

21.  Thurs 10/10/13 Midterm exam

Midterm exam will be in Amos Eaton 214, not in RI 211.

  1. Closed book & notes. You may bring one 2-sided letter-size note sheet.
  2. Questions are based on the programs and material covered in class, up through lecture 11.
  3. The midterm exam is here. Don't look at it before the exam.
  4. Especially don't look at the solution.

Week 8a

22.  Tues 10/15/13

Videos

  1. SIGGRAPH 2011 Computer Animation Festival Video Preview
  2. Evans and Sutherland flight simulator history.
  3. RealFlow Siggraph 2011 Showreel

Big idea: curves. Big questions:

  1. What math to use?
  2. How should the designer design a curve?
  3. OpenGL implementation.
  4. Reading: bezier.pdf
  5. Partial summary:
    1. To represent curves, use parametric (not explicit or implicit) equations.
    2. Use connected strings or segments of low-degree curves, not one hi-degree curve.
    3. If the adjacent segments match tangents and curvatures at their common joint, then the joint is invisible.
    4. That requires at least cubic equations.
    5. Higher degree equations are rarely used because they have bad properties such as:
      1. less local control,
      2. numerical instability (small changes in coefficients cause large changes in the curve),
      3. roundoff error.
    6. See my note on that: Hi Degree Polynomials.
    7. One 2D cartesian parametric cubic curve segment has 8 d.f.
      {$ x(t) = \sum_{i=0}^3 a_i t^i$},
      {$ y(t) = \sum_{i=0}^3 b_i t^i$}, for {$0\le t\le1$}.
    8. Requiring the graphic designer to enter those coefficients would be unpopular, so other APIs are common.
    9. Most common is the Bezier formulation, where the segment is specified by 4 control points, which also total 8 d.f.: P0, P1, P2, and P3.
    10. The generated curve starts at P0, goes near P1 and P2, and ends at P3.
    11. The curve stays inside the control polygon, the convex hull of the control points. A flatter control polygon means a flatter curve.
    12. A choice not taken would be to have the generated curve also go thru P1 and P2. That's called a Catmull-Rom-Oberhauser curve. However, that would force the curve to go outside the control polygon by a nonintuitive amount, which is considered undesirable.
    13. Instead of 4 control points, a parametric cubic curve can also be specified by a starting point and tangent, and an ending point and tangent. That also has 8 d.f. It's called a Hermite curve.
    14. The three methods (polynomial, Bezier, Hermite) are easily interconvertible.
    15. Remember that we're using connected strings or segments of cubic curves, and if the adjacent segments match tangents and curvatures at their common joint, then the joint is invisible.
    16. That reduces each successive segment from 8 d.f. down to 2 d.f.
    17. This is called a B-spline.
    18. From a sequence of control points we generate a B-spline curve that is piecewise cubic and goes near, but probably not thru, any control point (except perhaps the ends).
    19. Moving one control point moves the adjacent few spline pieces. That is called local control. Designers like it.
    20. One spline segment can be replaced by two spline segments that, together, exactly draw the same curve. However they, together, have more control points for the graphic designer to move individually. So now the designer can edit smaller pieces of the total spline.
    21. Extending this from 2D to 3D curves is obvious.
    22. Extending to homogeneous coordinates is obvious. Increasing a control point's weight attracts the nearby part of the spline. This is called a rational spline.
    23. Making two control points coincide means that the curvature will not be continuous at the adjacent joint.
      Making three control points coincide means that the tangent will not be continuous at the adjacent joint.
      Making four control points coincide means that the curve will not be continuous at the adjacent joint.
      Doing this is called making the curve (actually the knot sequence) non-uniform. (The knots are the values of the parameter for the joints.)
    24. Putting all this together gives a non-uniform rational B-spline, or a NURBS.
    25. A B-spline surface is a grid of patches, each a bi-cubic parametric polynomial.
    26. Each patch is controlled by a 4x4 grid of control points.
    27. When adjacent patches match tangents and curvatures, the joint edge is invisible.
    28. The surface math is an obvious extension of the curve math.
      1. {$ x(u,v) = \sum_{i=0}^3\sum_{j=0}^3 a_{ij} u^i v^j $}
      2. {$y, z $} are similar.
      3. One patch has 48 d.f., although most of those are used to establish continuity with adjacent patches.
  6. My extra enrichment info on splines: Splines.
  7. Guha's treatment of evaluators etc. is weak. I recommend googling for better descriptions. This one is good:
    OpenGL Programming Guide - Chapter 12 Evaluators and NURBS
  8. Different books define B-splines slightly differently, especially with the subscripts and end conditions.
  9. Programs covered:
    1. bezierCurves.cpp, p 390
      New OpenGL things:
      1. glMap1f(GL_MAP1_VERTEX_3, 0.0, 1.0, 3, order, controlPoints[0]);
      2. glEnable(GL_MAP1_VERTEX_3);
      3. glMapGrid1f(100, 0.0, 1.0);
      4. glEvalMesh1(GL_LINE, 0, 100);
    2. bezierCurveWithEvalMesh.cpp, p 391
      Draw 6th degree Bezier curve and control polygon.
    3. bezierCurveTangent.cpp, p 392
      Move control points of 2nd Bezier curve to make it join the 1st Bezier curve smoothly.
    4. bezierSurface.cpp, p 392
      Play with control grid of bicubic Bezier surface.
    5. bezierCanoe.cpp, p 394
      Design a canoe.
    6. torpedo.cpp, p 395
      and a torpedo.
    7. deCasteljau3.cpp, p 565
      Compute a point on a quadratic Bezier curve with 3 linear interpolations.
    8. sweepBezierSurface.cpp, p 578
      Make a surface by sweeping between 2 curves.
    9. bSplines.cpp, p 594
      (Enrichment.)
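
deCasteljau3.cpp above computes a point on a quadratic Bezier curve with 3 linear interpolations. Here is a minimal sketch of that idea (my own illustration, not Guha's code; the function names are mine):

```cpp
#include <cassert>
#include <utility>

// One linear interpolation (lerp) between two 2D points.
static std::pair<double,double> lerp(std::pair<double,double> a,
                                     std::pair<double,double> b, double t) {
    return { (1 - t) * a.first + t * b.first,
             (1 - t) * a.second + t * b.second };
}

// de Casteljau for a quadratic Bezier curve: 3 lerps total.
std::pair<double,double> deCasteljauQuadratic(std::pair<double,double> p0,
                                              std::pair<double,double> p1,
                                              std::pair<double,double> p2,
                                              double t) {
    auto q0 = lerp(p0, p1, t);   // lerp 1: along edge p0-p1
    auto q1 = lerp(p1, p2, t);   // lerp 2: along edge p1-p2
    return lerp(q0, q1, t);      // lerp 3: the point on the curve
}
```

With control points P0(0,0), P1(1,0), P2(1,1) and t=1/2 this returns (0.75, 0.25); at t=0 and t=1 it returns the end control points, as expected.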

Lecture notes: 1015.pdf

22.1  Sample Questions

  1. Designers usually use Bezier curves, where the curve goes near (but not through) the interior control points. They do not usually use Catmull-Rom curves, where the curve goes through all the control points. Why?
  2. How many degrees of freedom would a 2-D quadratic Cartesian Bezier curve have?
  3. Imagine a 2-D quadratic Cartesian Bezier curve with control points P0(0,0), P1(1,0), P2(1,1).
    1. What are the points on the curve with t=0, t=1/2, t=1?
    2. Imagine that you are joining a 2nd Bezier curve to the end of this one, so that there is C1 continuity at the joint. What are the 1st and 2nd control points of this new curve?
  4. What do these lines do:
    1. glMap1f(GL_MAP1_VERTEX_3, 0.0, 1.0, 3, order, controlPoints[0]);
    2. glEnable(GL_MAP1_VERTEX_3);
    3. glMapGrid1f(100, 0.0, 1.0);
    4. glEvalMesh1(GL_LINE, 0, 100);
    5. glMap2f(GL_MAP2_VERTEX_3, 0, 1, 3, 4, 0, 1, 12, 6, controlPoints[0][0]);
    6. glMapGrid2f(20, 0.0, 1.0, 20, 0.0, 1.0);
    7. glEvalMesh2(GL_LINE, 0, 20, 0, 20);
    8. glEvalCoord1f(u);
    9. glEnable(GL_MAP2_VERTEX_3);
  5. How many degrees of freedom would a 3D quadratic homogeneous Bezier patch have?
  6. What are the usually used conditions at the joint between 2 cubic Bezier curves so that the joint is invisible?
  7. Give 3 reasons why we usually use a sequence of low-degree curves instead of one high degree curve.
  8. Give 2 reasons why we use parametric curves instead of implicit curves.
  9. Give 1 reason why we use parametric curves instead of explicit curves.
  10. What do evaluators provide a way to do?
  11. What kind of curves can evaluators be used to draw?
  12. What does the word Rational in NURBS mean?
  13. What does the phrase non uniform in NURBS mean?
  14. When are 3D texture maps useful?
  15. An evaluator is often used to generate vertex positions. Name 3 other things an evaluator might generate.

23.  Wed 10/16/13

Lab to get back your graded midterms, and to query the TAs about any grading issues. Do it now; don't wait. This is your last chance to complain about homeworks 1-5.

No homework due this week.

Term project notes:

  1. Your term project proposal is due. Upload it to LMS, once per team. The other team members just upload a statement saying who the team leader is.
  2. You may have a larger team than 3 if you discuss it with me first.
  3. You do not have to use OpenGL, but may use any platform at least as powerful.
  4. You must do programming in a language at least as powerful as C. Merely designing something in Blender is not good enough.
  5. Unity is allowed.
  6. You are welcome to build on any existing package that you are legally allowed to modify.
  7. You are welcome to combine with other courses, if you ask all the instructors.

Week 8b

24.  Thu 10/17 (L13)

24.1  Chapter 10 ctd.

  1. Videos
    1. Animating Fire with Sound (SIGGRAPH 2011)
    2. Layered 3D: Tomographic Image Synthesis for Attenuation-based Light Field and HDR Displays
    3. SIGGRAPH 2012 : Computer Animation Festival Trailer

More curves:

  1. The equation for a point on a cubic Bezier curve, where {$P_i$} are the control points. {$P(t)$} is the output curve.
    Each point {$P(t) $} on the curve is a weighted sum of the 4 control points:
    {$ P(t) = \sum_{i=0}^3 B_i(t) P_i $}.
    {$ B_i(t) = {3 \choose i} t^i (1-t)^{3-i} $}.
    {$ {3 \choose i} $} is the combinatorial choose function. {$ {3 \choose i} = \frac{3!}{i!(3-i)!} $}
    The {$ B_i(t) $} are the weights; they're called Bernstein polynomials.
    They sum to 1: {$ \sum_{i=0}^3 B_i(t) = 1 $}
    They change as you move along the curve.
    E.g., at the start, {$ P(0)= P_0 $}, so {$ B_0(0) = 1$} and {$ B_1(0) = B_2(0) = B_3(0) = 0 $}.
    At the middle, the weights are {$ 1/8, 3/8, 3/8, 1/8 $}.
  2. cubicSplineCurve2.cpp
    1. This shows how to do a NURBS curve in OpenGL. When you drag a control point, only the closest segments of the curve move.
    2. The starting and ending knots are each repeated four times. That makes the spline go through the ending control points.
    3. Interesting parts of the program:
    // Create NURBS object.
    nurbsObject = gluNewNurbsRenderer();
    gluNurbsProperty(nurbsObject, GLU_SAMPLING_METHOD, GLU_PATH_LENGTH);
    gluNurbsProperty(nurbsObject, GLU_SAMPLING_TOLERANCE, 10.0);
    // Draw the spline curve.
    glColor3f(0.0, 0.0, 0.0);
    gluBeginCurve(nurbsObject);
    gluNurbsCurve(nurbsObject, 34, knots, 3, ctrlpoints[0], 4, GL_MAP1_VERTEX_3);
    gluEndCurve(nurbsObject);
  3. bicubicSplineSurfaceLitTextured.cpp
    1. This shows a NURBS surface.
    2. When you rotate the surface, the light (the small black square) does not rotate. Therefore the highlights change.
    3. Lighting, texture, and the depth test are all enabled.
    4. Surface normals are automatically computed from the surface, and then used by the lighting.
    5. The texture modulates the computed lighting.
    6. Both surface coordinates (vertices) and texture coordinates are NURBS.
    7. Unfortunately it fails on my x201 laptop.
    8. We'll study lighting and textures later.
    9. Interesting NURBS code:
      gluBeginSurface(nurbsObject);
      gluNurbsSurface(nurbsObject, 19, uknots, 14, vknots, 30, 3, controlPoints[0][0], 4, 4, GL_MAP2_VERTEX_3);
      gluNurbsSurface(nurbsObject, 4, uTextureknots, 4, vTextureknots, 4, 2, texturePoints[0][0], 2, 2, GL_MAP2_TEXTURE_COORD_2);
      gluEndSurface(nurbsObject);
  4. trimmedBicubicSplineSurface.cpp
    This shows trimming a NURBS surface to cut out holes. The trimming curve can also be a B-spline. It is defined in the parameter space of the surface. If the surface's control points are moved, the trimming curve moves along with the surface.
    1. Interesting NURBS code to define a trim line.
      gluBeginTrim(nurbsObject);
      gluNurbsCurve(nurbsObject, 10, curveKnots, 2, curvePoints[0], 4, GLU_MAP1_TRIM_2);
      gluEndTrim(nurbsObject);
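
The Bernstein-weight formula given earlier ({$ P(t) = \sum B_i(t) P_i $}) can be checked numerically. A minimal sketch (mine, not from Guha; the function names are illustrative):

```cpp
#include <cassert>
#include <cmath>

// Bernstein weight B_i(t) for a cubic (n=3); choose(3,i) = 1, 3, 3, 1.
double bernstein3(int i, double t) {
    static const double choose3[4] = {1, 3, 3, 1};
    return choose3[i] * std::pow(t, i) * std::pow(1 - t, 3 - i);
}

// One coordinate of a point on a cubic Bezier curve: the weighted
// sum of the 4 control-point coordinates. Repeat per coordinate.
double bezierPoint(const double p[4], double t) {
    double sum = 0;
    for (int i = 0; i < 4; ++i) sum += bernstein3(i, t) * p[i];
    return sum;
}
```

At t=1/2 the four weights come out 1/8, 3/8, 3/8, 1/8, and at any t they sum to 1, matching the text above.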

Other topics from chapter 10; read them on your own.

  1. swept volumes
  2. regular polyhedron
  3. quadrics

Week 8c

24.2  Chapter 11: Color and Light

This is the next topic. To prepare, read Guha, Section VI, Chapter 11, pages 403-524. (You don't have to read all of it for Mon.)

  1. Some points from the text
    1. Sensitivity curves of the 3 types of cones vs wavelength.
    2. Material reflectivity vs wavelength.
    3. Incoming light intensity vs wavelength.
    4. Retina is a neural net - Mach band effect.

Things I added to lecture, which are not in the book:

  1. Tetrachromacy
    Some women have 2 slightly different types of green cones in their retina. They see 4 primary colors.
  2. Metamers
    Different colors (either emitted or reflected) with quite different spectral distributions can appear perceptually identical. With reflected surfaces, this depends on what light is illuminating them. Two surfaces might appear identical under noon sunlight but quite different under incandescent lighting.
  3. CIE chromaticity diagram
    This maps spectral colors into a human perceptual coordinate system. Use it to determine what one color a mixture of colors will appear to be.
    1. More info: CIE_xyY
    2. Purple is not a spectral color.
  4. Color video standards: NTSC, SECAM, etc
  5. My note on NTSC And Other TV Formats.
  6. Failed ideas: mechanical color TV. http://en.wikipedia.org/wiki/Mechanical_television (enrichment)
  7. Additive vs subtractive colors.
  8. Phong lighting model: The total light at a pixel is the sum of
    1. Incoming ambient light times ambient reflectivity of the material at the pixel,
    2. Incoming diffuse light times diffuse reflectivity times a factor for the light source being low on the horizon,
    3. Incoming specular light times specular reflectivity times a factor for the eye not being aligned to the reflection vector, with an exponent for the material shininess,
    4. Light emitted by the material.
    See page 439.
  9. That is not intended to be completely physical, but to give the programmer lots of parameters to tweak.
  10. sphereInBox1.cpp - p 426.
  11. OpenGL has several possible levels of shading. Pick one of the following choices. Going down the list makes the shading better but costlier.
    1. Shade the whole polygon to be the color that you specified for one of the vertices.
    2. Bilinearly shade the polygon, triangle by triangle, from the colors you specified for its vertices.
    3. Use the Phong lighting model to compute the color of each vertex from that vertex's normal. Bilinearly interpolate that color over the polygon. That is called Gouraud shading.
    4. Bilinearly interpolate a surface normal at each pixel from normals that you specified at each vertex. Then normalize the length of each interpolated normal vector. Evaluate the Phong lighting model at each pixel from the interpolated normal. That is called Phong shading.
  12. Computing surface normals. See page 442ff.
    1. For a curved surface, the normal vector at a point on the surface is the cross product of two tangent vectors at that point. They must not be parallel to each other.
    2. If it's a parametric surface, partial derivatives are tangent vectors.
    3. A mesh is a common way to approximate a complicated surface.
    4. For a mesh of flat (planar) pieces (facets):
      1. Find the normal to each facet.
      2. Average the normals of the facets around each vertex to get a normal vector at each vertex.
      3. Apply Phong (or Gouraud) shading from those vertex normals.
  13. Steve Baker's notes on some color topics:
    1. basic OpenGL lighting
    2. Smooth Shading 'Gotcha's in OpenGL
  14. OpenGL programming guide on lighting: http://www.glprogramming.com/red/chapter05.html - very detailed.
  15. Maureen Stone: Representing Colors as 3 Numbers (enrichment)
  16. Why do primary schools teach that the primary colors are Red Blue Yellow?
  17. Homework 7 online, due 10/23/13.
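
The normal-computation recipe in item 12 (facet normals from a cross product, then averaged per vertex) can be sketched in a few lines. This is my own minimal illustration, not code from Guha; the struct and function names are mine:

```cpp
#include <cassert>
#include <cmath>

struct Vec3 { double x, y, z; };

Vec3 cross(Vec3 a, Vec3 b) {
    return { a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x };
}

Vec3 normalize(Vec3 v) {
    double len = std::sqrt(v.x*v.x + v.y*v.y + v.z*v.z);
    return { v.x/len, v.y/len, v.z/len };
}

// Normal of a flat facet: cross product of two non-parallel edge vectors.
Vec3 facetNormal(Vec3 p0, Vec3 p1, Vec3 p2) {
    Vec3 e1 = { p1.x-p0.x, p1.y-p0.y, p1.z-p0.z };
    Vec3 e2 = { p2.x-p0.x, p2.y-p0.y, p2.z-p0.z };
    return normalize(cross(e1, e2));
}

// Vertex normal: average (i.e., sum then renormalize) the normals of
// the facets sharing that vertex, then use it for Gouraud/Phong shading.
Vec3 vertexNormal(const Vec3 facetNormals[], int n) {
    Vec3 sum = {0, 0, 0};
    for (int i = 0; i < n; ++i) {
        sum.x += facetNormals[i].x;
        sum.y += facetNormals[i].y;
        sum.z += facetNormals[i].z;
    }
    return normalize(sum);
}
```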

Lecture notes: 1017.pdf

24.3  Sample questions:

  1. What do the following lines do:
    1. nurbsObject = gluNewNurbsRenderer();
    2. gluNurbsProperty(nurbsObject, GLU_SAMPLING_METHOD, GLU_PATH_LENGTH);
    3. gluNurbsProperty(nurbsObject, GLU_SAMPLING_TOLERANCE, 10.0);
    4. gluBeginCurve(nurbsObject);
    5. gluNurbsCurve(nurbsObject, 34, knots, 3, ctrlpoints[0], 4, GL_MAP1_VERTEX_3);
    6. gluEndCurve(nurbsObject);
    7. gluBeginSurface(nurbsObject);
    8. gluNurbsSurface(nurbsObject, 19, uknots, 14, vknots, 30, 3, controlPoints[0][0], 4, 4, GL_MAP2_VERTEX_3);
    9. gluNurbsSurface(nurbsObject, 4, uTextureknots, 4, vTextureknots, 4, 2, texturePoints[0][0], 2, 2, GL_MAP2_TEXTURE_COORD_2);
    10. gluEndSurface(nurbsObject);
    11. gluBeginTrim(nurbsObject);
    12. gluNurbsCurve(nurbsObject, 10, curveKnots, 2, curvePoints[0], 4, GLU_MAP1_TRIM_2);
    13. gluEndTrim(nurbsObject);
  2. What is the tristimulus theory of color?
  3. Why were experiments on people necessary to devise the CIE chromaticity diagram?
  4. What did Philo T Farnsworth invent?
  5. A material's color is represented by 10 numbers. Name them.
  6. What coordinate system do you define a trim line in?
  7. Define local control.

Week 9

25.  Mon 10/21/13 (L15)

  1. Videos
    1. SIGGRAPH 2011 : Real-Time Live Highlights
    2. TED: David Bolinsky animates a cell
  2. Lighting in OpenGL.
    1. This is now deprecated (see here), so I'll do it quickly. The point of this is to show you possibilities in lighting, though you now have to implement them yourselves.
    2. lightAndMaterial1.cpp - p. 428.
      1. interactively change material properties
      2. new: attenuating distant light
    3. lightAndMaterial2.cpp - p. 428.
      1. interactively change light properties
      2. moving light
    4. litTriangle.cpp - 2 sided lighting
    5. spotlight.cpp - p 437. New: spotlights, colormaterial mode
    6. litCylinder.cpp - p 457. User computes normals at the vertices and passes them to OpenGL for use in shading.
    7. When moving around in a scene with lights, you may fix the lights to the camera, or fix them in the scene.
  3. Chapter 12: Textures
    1. Motivation: Adding per-pixel surface details without raising the geometric complexity of a scene.
    2. loadTextures.cpp - p. 468. Lots of texture stuff.
    3. User settings:
      1. What to do if the texture is too small for the polygon?
      2. What to do if a texel is much smaller (or much larger) than a pixel?
      3. Do you combine the texture with the color, or totally replace the color?
    4. fieldAndSky.cpp - p. 476. Aliasing.
    5. fieldAndSkyFiltered.cpp - p. 482. Filters.
    6. compareFilters.cpp - p 485. Animate the texture.

Lecture notes: 1021.pdf

25.1  Sample questions

  1. What preprocessing technique handles a texel being much smaller (or much larger) than a pixel?

26.  Wed 10/23/13

Lab to talk to the TAs.

I computed an estimated midgrade score out of 100 and uploaded it to LMS, to one of the columns labelled Midgrade. (The other Midgrade column is blank.) The formula is:

{$ min\left( 10\left(\frac{HW1}{32}+\frac{HW2}{32}+\frac{HW3}{30}+\frac{HW4}{30}+\frac{HW5}{20}\right) \\ + \frac{5}{4}Midterm + Knowitall,\\ 100\right) $}

27.  Thurs 10/24/13 (L16)

  1. Videos
    1. Mipmapping
    2. SIGGRAPH 2010 : Technical Papers Trailer
  2. More textures
    1. texturedTorpedo.cpp - p 486. Texture Bezier patches.
    2. litTexturedCylinder.cpp - p 489. Texture Bezier patches.
    3. fieldAndSkyLit.cpp - p 489. Combine lighting and texture.
  3. Chapter 13 - Special Visual Techniques: Blending etc.
    1. blendRectangles1.cpp - p 496.
      Instead of overwriting a pixel's color, combine the old and new values using the α (alpha) values of the colors.
    2. sphereInGlassBox.cpp - p 499.
    3. ballAndTorusReflected.cpp - p 501.
    4. fieldAndSkyFogged.cpp - p 502.
    5. billboard.cpp - p 504.
    6. Antialiasing is important, but the antialiasing example doesn't show the benefit of antialiasing well.
    7. bumpMapping.cpp - p 521.
      Generate artificial perturbed normals to a flat surface, and feed those to the lighting equation.
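
The blending rule used by blendRectangles1.cpp above has a one-line form. This sketch (mine, not Guha's code) shows what OpenGL's usual setting, glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA), computes per color channel:

```cpp
#include <cassert>

// Source-over blending for one color channel:
//   out = alpha*src + (1-alpha)*dst
// i.e., what glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA) does,
// mixing the incoming fragment (src) with the frame buffer (dst).
double blend(double src, double dst, double alpha) {
    return alpha * src + (1 - alpha) * dst;
}
```

E.g., drawing white (src=1) over black (dst=0) with α=0.25 gives 0.25: a quarter-strength white.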

27.1  Chapter 14: Raster algorithms, p 527ff.

  1. Many of these algorithms were developed for HW w/o floating point, where even integer multiplication was expensive.
  2. Efficiency is now less important in most cases (unless you're implementing in HW).
  3. The idea of clipping with a 6-stage pipeline is an important one.
  4. Jim Clark, http://en.wikipedia.org/wiki/James_H._Clark, a prof at Stanford, made a 12-stage pipeline using 12 copies of the same chip, and then left Stanford to found SGI.
    1. Later he bankrolled Netscape and 2 other companies.
    2. More recently he had the world's 4th largest yacht: http://www.forbes.com/sites/ryanmac/2012/05/15/billionaire-jim-clark-seeks-more-than-100-million-for-two-superyachts/.
  5. My note on Bresenham Line and Circle Drawing. Jack Bresenham, then at IBM, invented these very fast ways to draw lines and circles with only integer addition and subtraction. My note gives step-by-step derivations by transforming slow and clear programs to fast and obscure programs.
  6. Lecture notes: 1024.pdf
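
To make the integer-only idea concrete, here is a sketch of Bresenham's line algorithm for the first octant (0 ≤ slope ≤ 1). This is my illustration, not taken from my note or from Guha; note the loop body uses only integer addition and subtraction:

```cpp
#include <cassert>
#include <utility>
#include <vector>

// Bresenham line from (x0,y0) to (x1,y1), restricted to the first
// octant (0 <= dy <= dx). The decision variable d tracks, in scaled
// integer form, whether the true line has passed the pixel midpoint.
std::vector<std::pair<int,int>> bresenham(int x0, int y0, int x1, int y1) {
    std::vector<std::pair<int,int>> pts;
    int dx = x1 - x0, dy = y1 - y0;
    int d = 2*dy - dx;                  // initial decision value
    for (int x = x0, y = y0; x <= x1; ++x) {
        pts.push_back({x, y});          // "set" this pixel
        if (d > 0) { ++y; d -= 2*dx; }  // step up to the next row
        d += 2*dy;                      // advance one column
    }
    return pts;
}
```

For the line (0,0)-(4,2) this produces the pixels (0,0), (1,0), (2,1), (3,1), (4,2).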

27.2  Sample questions

  1. Aliasing becomes a problem when objects are what compared to a pixel's size?
    1. smaller
    2. larger
    3. the same
    4. The pixel size is irrelevant; aliasing happens when objects are all the same size.
  2. The Bresenham line algorithm was devised to operate efficiently with what property of hardware?
  3. What might clipping do to the number of vertices?
    1. The number of vertices might stay the same or reduce but not grow.
    2. The number of vertices might stay the same or grow but not reduce.
    3. The number of vertices might stay the same or grow or reduce.
    4. The number of vertices must stay the same.
  4. In the 12 stage pipeline devised by Jim Clark, six stages are used for what operation?

Week 10a

28.  Mon 10/28/13 (L17)

  1. Videos
    1. Contemporary Tetrachromist Artist, Concetta Artico talks about "Metamorphosis"
    2. Folding and Crumpling Adaptive Sheets, SIGGRAPH 2013
    3. Structure-Aware Hair Capture (Siggraph 2013)

28.1  Applied Parallel Computing for Engineers (Proposed spring course)

A computer engineering course. Engineering techniques for parallel processing. Providing the knowledge and hands-on experience in developing applications software for processors on inexpensive widely-available computers with massively parallel computing resources. Multithread shared memory programming with OpenMP. NVIDIA GPU multicore programming with CUDA and Thrust. Using NVIDIA gaming and graphics cards on current laptops and desktops for general purpose parallel computing using linux.

Prereq: ECSE-2660 CANOS or equivalent, knowledge of C++, access to a computer with NVIDIA GPU running CUDA 2.1 or newer.

Credits: 3

28.2  Chapters 15, 16, 17 More splines

We've already done enough splines, though this material is interesting.

28.3  Chapter 18 Projections, pp 641ff

  1. OpenGL versions: OpenGL_version_note (enrichment)
  2. View normalization or projection normalization
    1. We want to view the object with our desired perspective projection.
    2. To do this, we transform the object into another object that looks like an amusement park fun house (all the angles and lengths are distorted).
    3. However, the default parallel projection of this normalized object gives exactly the same result as our desired perspective projection of the original object.
    4. Therefore, we can always clip against a 2x2x2 cube, and project thus: (x,y,z)->(x,y,0) etc.
    5. Guha p 643-655.
  3. Homogeneous coordinates
    1. This is today's big idea. We'll now see it in more detail than before.
      HomogeneousCoords
    2. Parallel lines intersect at an infinite point
    3. There is a line through any two distinct points
  4. OpenGL modelview vs projection matrices.
    1. Deprecated, but is interesting for its subdivision of transformations into two functions.
    2. The modelview matrix moves the world so that the camera is where you want it, relative to the objects. Unless you did a scale, the transformation is rigid - it preserves distances (and therefore also angles).
    3. The projection matrix view-normalizes the world to effect your desired projection and clipping. For a perspective projection, it does not preserve distances or angles, but does preserve straight lines.
  5. Debugging OpenGL: The OpenGL FAQ and Troubleshooting Guide is old but can be useful.
  6. Computer graphics in the real world (enrichment only)
    1. Forma Urbis Romae - reconstruction of a street map of 211AD Rome from 1186 surviving pieces.
  7. Another OpenGL tutorial
    The Practical Physicist's OpenGL tutorial Edward S. Boyden
  8. Steve Baker's notes on some graphics topics:
    1. GL_MODELVIEW vs GL_PROJECTION
    2. Euler angles are evil
  9. The OpenGL SuperBible has a lot of code. Each version has two parallel directory trees - for the source and the executables. Don't move files around. You need to compile and run from the executable tree.
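
The two homogeneous-coordinate facts above (parallel lines intersect at an infinite point; there is a line through any two distinct points) have a pleasantly uniform computational form in the 2D projective plane: both "line through two points" and "intersection of two lines" are the same cross product, and parallel lines intersect in a point whose w coordinate is 0. A small sketch of mine (the struct name is illustrative):

```cpp
#include <cassert>

// A homogeneous triple: either a point (x, y, w) or a line (a, b, c)
// satisfying a*x + b*y + c*w = 0.
struct H3 { double a, b, c; };

// In the projective plane, the line through two points AND the
// intersection point of two lines are both given by the cross product.
H3 cross(H3 p, H3 q) {
    return { p.b*q.c - p.c*q.b,
             p.c*q.a - p.a*q.c,
             p.a*q.b - p.b*q.a };
}
```

E.g., the parallel lines y=0, i.e. (0,1,0), and y=1, i.e. (0,1,-1), intersect at (-1,0,0): w=0, the infinite point in the x direction. And the points (0,0,1) and (1,0,1) give back the line (0,1,0), i.e. y=0.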

Week 10b

28.4  Chapter 19: Fixed Functionality Pipelines, pp 675ff.

  1. The fixed graphics pipeline:
    1. Process vertices, e.g. to apply transformations and compute normals.
    2. Rasterize, i.e., interpolate data from vertices to pixels (fragments)
    3. Process fragments, e.g., to compute Phong shading.
  2. Chapter 19: ray tracing, radiosity. Other refs:
    1. http://en.wikipedia.org/wiki/Ray_casting
    2. http://en.wikipedia.org/wiki/Ray_tracing_%28graphics%29
    3. http://en.wikipedia.org/wiki/Rendering_%28computer_graphics%29
    4. http://en.wikipedia.org/wiki/Radiosity_%283D_computer_graphics%29
  3. Visibility methods:
    1. Painters:
      1. The painter's algorithm is tricky when faces are close in Z.
      2. Sorting the faces is hard and maybe impossible. Then you must split some faces.
      3. However sometimes some objects are always in front of some other objects. Then you can render the background before the foreground.
    2. Z-buffer:
      1. Subpixel objects randomly appear and disappear (aliasing).
      2. Artifacts occur when objects are closer than their Z-extent across one pixel.
      3. This happens on the edge where two faces meet.
    3. BSP tree:
      1. In 3D, many faces must be split to build the tree.
    4. The scanline algorithm can feed data straight to the video D/A. That was popular decades ago before frame buffers existed. It is popular again when frame buffers are the slowest part of the pipeline.
    5. A real implementation, with a moving foreground and fixed background, might combine techniques.
    6. References: wikipedia.
  4. Shaders: customize steps 1 and 3 above.
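
The Z-buffer method listed above reduces to one comparison per fragment: keep the nearest depth seen so far at each pixel. A minimal sketch (my illustration, not tied to any of Guha's programs):

```cpp
#include <cassert>
#include <limits>
#include <vector>

// Minimal Z-buffer: at each pixel, keep the color of the nearest
// (smallest-z) fragment drawn so far. Later, closer fragments win.
struct ZBuffer {
    int w, h;
    std::vector<double> depth;   // initialized to "infinitely far"
    std::vector<int> color;
    ZBuffer(int w_, int h_)
        : w(w_), h(h_),
          depth(w_ * h_, std::numeric_limits<double>::infinity()),
          color(w_ * h_, 0) {}
    void plot(int x, int y, double z, int c) {
        int i = y * w + x;
        if (z < depth[i]) {      // the depth test
            depth[i] = z;
            color[i] = c;
        }
    }
};
```

Drawing fragments in any order gives the same final image, which is why the Z-buffer (unlike the painter's algorithm) needs no sorting.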

28.5  Chapter 20

  1. GPU programming - vertex and fragment shaders etc.
    1. Guha's description in Chapter 20 is excellent.
    2. GLSL/RedSquare
    3. GLSL/MultiColoredSquare2
    4. GLSL/InterpolateTextures
    5. GLSL/BumpMappingPerVertexLighting
    6. GLSL/BumpMappingPerPixelLighting
  2. More shader examples. These are from the 4th edition of the OpenGL Superbible. The local copy of the code and book is here. (The 5th edition has some negative reviews because of how they updated their code for the latest OpenGL version.)
    The code is arranged in two parallel trees: src has the C++ source code, while projects has the Makefile, executable, and shader files. Some interesting examples are:
    1. chapt16/vertexblend
    2. chapt16/vertexshaders
    3. chapt17/bumpmap
    4. chapt17/fragmentshaders
    5. chapt17/imageproc
    6. chapt17/lighting
    7. chapt17/proctex
    Those chapters of the SuperBible also have good descriptions.

Week 10c

29.  Thurs 10/31/13 (L18)

  1. Enrichment reading material for Halloween:
  2. SIGGRAPH 91 and 92 preliminary programs. This is a bowdlerized version, from which I have removed items that might give offense.
  3. Raytracing jello brand gelatin
  4. Videos - commercial applications of graphics
    1. NGRAIN augmented reality capability demo
    2. Modeling and Simulation is Critical to GM Powertrain Development
    3. Hydraulic Fracture Animation
    4. Deepwater Horizon Blowout Animation www.deepdowndesign.com
    5. On-board fire-fighting training simulator made for the Dutch navy
    6. Ship accident SMS, the best video I have ever seen
  5. More spline programs
    1. rationalBezierCurve2.cpp - p 661. Change the homogeneous weight of a control point. Interesting code:
      // Draw the red rational Bezier curve.
      glColor3f(1.0, 0.0, 0.0);
      glMap1f(GL_MAP1_VERTEX_4, 0.0, 1.0, 4, 3, controlPointsHomogeneous[0]);
      glEnable(GL_MAP1_VERTEX_4);
      glMapGrid1f(100, 0.0, 1.0);
      glEvalMesh1(GL_LINE, 0, 100);
    2. rationalBezierCurve3.cpp - p 664. Move and change the homogeneous weight of any of several control points.
    3. rationalBezierSurface.cpp - p 670. Move and change the homogeneous weight of any of several control points on a Bezier surface. Interesting code:
      // Specify and enable the Bezier surface.
      glMap2f(GL_MAP2_VERTEX_4, 0, 1, 4, 4, 0, 1, 16, 6, controlPointsHomogeneous[0][0]);
      glEnable(GL_MAP2_VERTEX_4);
      // Make a mesh approximation of the Bezier surface.
      glColor3f(0.0, 0.0, 0.0);
      glMapGrid2f(20, 0.0, 1.0, 20, 0.0, 1.0);
      glEvalMesh2(GL_LINE, 0, 20, 0, 20);
  6. Aliasing and anti-
    1. The underlying image intensity, as a function of x, is a signal, f(x).
    2. When the objects are small, say when they are far away, f(x) is changing fast.
    3. To display the image, the system evaluates f(x) at each pixel. That is, f(x) is sampled at x=0,1,2,3,...
    4. If f(x), when Fourier transformed, has frequencies higher than 1/2 (cycle per pixel), then that sampling is too coarse to capture the signal. See the Nyquist sampling theorem.
    5. When this hi-freq signal is sampled at too low a frequency, then the result computed for the frame buffer will have visual problems.
    6. It's not just that you won't see the hi frequencies. That's obvious.
    7. Worse, you will see fake low frequency signals that were never in the original scene. They are called aliases of the hi-freq signals.
    8. These artifacts may jump out at you, because of the Mach band effect.
    9. In NTSC, aliasing can even make rapid intensity changes produce fake colors, and vice versa.
    10. Aliasing can occur with time signals, like a movie of a spoked wagon wheel.
    11. This is like a strobe effect.
    12. The solution is to filter out the hi frequencies before sampling, or sample with a convolution filter instead of sampling at a point. That's called anti-aliasing.
    13. OpenGL solutions:
      1. Mipmaps.
      2. Compute scene on a higher-resolution frame buffer and average down.
      3. Consider pixels to be squares not points. Compute the fraction of each pixel covered by each object, like a line. Lines have to have finite width.
    14. Refs:
      1. http://en.wikipedia.org/wiki/Aliasing
      2. http://en.wikipedia.org/wiki/Clear_Type
      3. http://en.wikipedia.org/wiki/Wagon-wheel_effect
      4. http://en.wikipedia.org/wiki/Spatial_anti-aliasing (The H Freeman referenced worked at RPI for 10 years).
      5. http://en.wikipedia.org/wiki/Mipmap
      6. http://en.wikipedia.org/wiki/Jaggies
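
The fake low frequency in item 7 above has an exact closed form. Sampling sin(2πfx) at integer pixels x with f = 0.9 (above the Nyquist limit of 1/2 cycle per pixel) gives precisely the samples of -sin(2π·0.1·x): the alias. A sketch (mine, not from Guha):

```cpp
#include <cassert>
#include <cmath>

const double PI = 3.14159265358979323846;

// Sample the signal sin(2*pi*f*x) at integer pixel positions x.
// If f exceeds the Nyquist limit of 1/2 cycle per pixel, the samples
// coincide with those of a lower-frequency alias: f = 0.9 produces
// exactly the samples of -sin(2*pi*0.1*x), since
// sin(1.8*pi*x) = sin(2*pi*x - 0.2*pi*x) = -sin(0.2*pi*x) at integer x.
double sample(double f, int x) {
    return std::sin(2.0 * PI * f * x);
}
```

No computation on the samples alone can tell the 0.9-cycle signal from the 0.1-cycle one; that is why the high frequencies must be filtered out before sampling.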

29.1  Sample questions

  1. What is the aliasing problem (in Computer Graphics)?
  2. What are some solutions?
  3. What is a mipmap?
  4. What is a spline?
  5. Define cubic Bezier curve.
  6. What conditions are usually required when two Bezier curves meet at a joint?
  7. What happens if they're not met?

Week 11

30.  Mon 11/4/13 (L19)

  1. Videos - military applications of graphics
    1. I/ITSEC 2012 -- Rockwell Collins Shows Curved 6-projector Blended Simulation
    2. Complete Characters HD Soldier and Combat Animations - ITSEC 2012
    3. Terrasim Destructable Buildings, Custom Content Debut at I/ItSEC 2012
    4. Modeling and simulation is a standard term.
  2. Start current OpenGL Vertex Buffer Objects (VBO) and Vertex Array Objects (VAO).
  3. Reference: the handout, plus online material. The TAs will have extra copies.
  4. Sample programs: NewCode/

31.  Wed 11/6/13

Lab to talk to the TAs.

32.  Thurs 11/7/13

No class.

Instead, you should watch this video of Sebastian Thrun's keynote talk at this year's NVidia technical conference. (This is a baseline of a good term project, given that Thrun was hampered by being at Stanford not RPI.) (Local cache).

It is also a marvelous example of a successful engineering project. Many different parts all have to work to make the total project succeed. They include laser rangefinding, image recognition, a road database accurate to the specific lane, and GPS navigation. This is also a model for government - university - industry interaction.

DARPA (The Defense Advanced Research Projects Agency) started this concept with a contest paying several $million in prizes. (DARPA started connecting computers in different cities with phone lines in 1968. This was the ARPAnet. They funded computer graphics in the 1970s and some early steps in virtual reality in the 1980s.)

In the 1st contest, the best vehicle failed after about 10 miles. Five vehicles completed the 130 mile course in the 2nd contest. The winning project leader was Sebastian Thrun, a Stanford CS prof. He quit and moved to Google, which has now been funding this for several years.

Here is the talk abstract:

What really causes accidents and congestion on our roadways? How close are we to fully autonomous cars? In his keynote address, Stanford Professor and Google Distinguished Engineer, Dr. Sebastian Thrun, will show how his two autonomous vehicles, Stanley (DARPA Grand Challenge winner), and Junior (2nd Place in the DARPA Urban Challenge) demonstrate how close yet how far away we are to fully autonomous cars. Using computer vision combined with lasers, radars, GPS sensors, gyros, accelerometers, and wheel velocity, the vehicle control systems are able to perceive and plan the routes to safely navigate Stanley and Junior through the courses. However, these closed courses are a far cry from everyday driving. Find out what the team will do next to get one step closer to the holy grail of computer vision, and a huge leap forward toward the concept of fully autonomous vehicles.

Finally, Dr Tony Tether, Director of DARPA when this happened, is an RPI BS EE grad.

Week 12a

33.  Mon 11/11/13 (L20)

  1. Videos - 3D input
    1. Magic 3D Pen
    2. 123D Catch - How to Make 3D Models from Pictures
    3. SIGGRAPH 2013: Scott Metzger, The Foundry And Nvidia demonstrate the Rise Project
  2. More shader examples. These are from the 4th edition of the OpenGL Superbible. The local copy of the code and book is here. (The 5th edition has some negative reviews because of how they updated their code for the latest OpenGL version.)
    The code is arranged in two parallel trees: src has the C++ source code, while projects has the Makefile, executable, and shader files. In linux, I use the zsh, a superset of most other shells. In zsh, I wrote these functions to change between the source and corresponding executable directories.
    # Assuming that you are in the OpenGL
    # SuperBible linux executable dir,
    # change to the corresponding src dir.
    
    function sb-x2s() {
      cd ${PWD:s+projects/linux+src+} 
      }
    
    # Assuming that you are in the OpenGL
    # SuperBible linux src dir, change to
    # the corresponding executable dir.
    
    function sb-s2x() {
      cd ${PWD:s+src+projects/linux+} 
      }
    
    
    Some interesting examples are:
    1. chapt16/vertexblend
    2. chapt16/vertexshaders
    3. chapt17/bumpmap
    4. chapt17/fragmentshaders
    5. chapt17/imageproc
    6. chapt17/lighting
    7. chapt17/proctex
    Those chapters of the SuperBible also have good descriptions.

33.1  Sample questions

  1. What is a bumpmap, and why is it useful?
  2. What is a texture sampler, and how is it more powerful than an array?
  3. If you want to create a highlight in the middle of a triangle, can you do that by computing the color at each vertex and interpolating (rasterizing) the color across the triangle?
  4. In this code to compute diffuse lighting:
    float intensity = max(0.0, dot(N, normalize(L)));
    what's the point of max?
  5. When I write C code to multiply a vector and a matrix, I probably write a for loop. However, in shaders you see simply this:
    gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
    Why don't you write a loop there?

Week 12b

34.  Thurs 11/14/13 (L21)

  1. Videos from students
    1. Head Tracking for Desktop VR Displays using the WiiRemote. The guy who made it is Johnny Lee. He does a bunch of other cool graphics and motion tracking stuff with the Nintendo Wii too. Here’s a link to his website with all of his other projects. Thanks to Nicholas Cesare.
    2. Cool Sound and Water Experiment related to aliasing. Thanks to Rebecca Nordhauser for this.
  2. Videos on OpenGL by Jeffrey Chastine
    1. Tutorial 5 - Vertex Buffers in OpenGL
    2. Tutorial 6 - Vertex Buffers in OpenGL (code)
  3. OpenGL 4.0 pipeline and shaders
    1. This material is so new that many computers (including my laptop) cannot run the demo programs. In MS terms, your computer may require DirectX 11. The OpenGL people are also periodically adding new features. E.g., the compute shader was added to the OpenGL core only in 4.3 (updated on 2/12/2013), although it was available from 6/12/2012 as the ARB_compute_shader extension.
      So, I'll try to give an executive overview of the current pipeline with reference to some good tutorials:
      1. http://www.opengl.org/wiki/Rendering_Pipeline_Overview
      2. http://ogldev.atspace.co.uk/www/tutorial30/tutorial30.html
      3. http://ogldev.atspace.co.uk/index.html
      4. http://www.lighthouse3d.com/tutorials/glsl-core-tutorial/creating-a-shader/
      5. OpenGL 4.0 Tessellation
      6. Raytracing via Compute Shader in OpenGL. Stan Epp, the author, writes,
        Since I got curious about raytracing I wanted to implement a raytracer making use of GPGPU by myself. The result is a naive real time raytracer which is entirely executed on the GPU. It makes use of the compute shader of the new OpenGL version 4.3. The raytracer doesn't use any techniques to accelerate the rendering. It computes intersection points of the rays and the objects in a brute force manner.
        The scenes consist of at most 9 point lights and 15 primitives and have a resolution of 1024x768. This demo has been made on a system with the following specifications: CPU: Intel Core i7 3770K, GPU: Nvidia Geforce GTX 560 TI, RAM: 16GB.
        The project is here. We'll particularly look at the shader code.
  4. Videos showing simulations
    1. 2012 Science & Engineering Visualization Challenge: Alya Red -- A Computational Heart
    2. 100k Particle system using Python OpenGL Source.

34.1  Sample questions:

  1. Immediate mode in OpenGL:
    1. What is it?
    2. Name a function that is immediate mode.
    3. What are the major advantages and disadvantages of immediate mode?
    4. What happened to it in the current OpenGL?
  2. Name several things removed from the current (or recent) OpenGL.
  3. Vertex buffer object:
    1. What is it?
    2. What are its major advantages and disadvantages?
  4. What was the latest shader added to the OpenGL pipeline?
  5. What does it do?
  6. What does Primitive Assembly mean?
  7. What is Face Culling?
  8. What new shader computes the positions, normals, etc of new vertices created in a tessellation?
  9. Which shader outputs a depth value and color values that will potentially be written (without any more interpolation or other processing) to the frame buffer?
  10. What do these do:
    1. glEnableClientState(GL_VERTEX_ARRAY);
    2. glEnableClientState(GL_COLOR_ARRAY);
    3. glVertexPointer(3, GL_FLOAT, 0, vertices);
    4. glColorPointer(3, GL_FLOAT, 0, colors);
    5. glViewport (0, 0, (GLsizei)w, (GLsizei)h);
    6. glMatrixMode(GL_PROJECTION);
    7. glMatrixMode(GL_MODELVIEW);
    8. glLoadIdentity();
    9. glArrayElement(i);
    10. glClearColor(1.0, 1.0, 1.0, 0.0);
    11. glDrawElements(GL_TRIANGLE_STRIP, 10, GL_UNSIGNED_INT, stripIndices);
    12. float* bufferData = (float*)glMapBuffer(GL_ARRAY_BUFFER, GL_WRITE_ONLY);
    13. glGenBuffers(2, buffer);
    14. glBindBuffer(GL_ARRAY_BUFFER, buffer[VERTICES]);
    15. glBufferData(GL_ARRAY_BUFFER, sizeof(vertices) + sizeof(colors), NULL, GL_STATIC_DRAW);
    16. programId = glCreateProgram();
    17. glLinkProgram(programId);
  11. Why is it inefficient to do computations in the display routine that could be done at initialization time?

Week 13

35.  Mon 11/18/13 (L22)

35.1  ECSE-4965-01 Applied Parallel Computing for Engineers

(new course, Spring 2014)

Catalog description:

A computer engineering course. Engineering techniques for parallel processing. Provides knowledge and hands-on experience in developing applications software for processors on inexpensive, widely-available computers with massively parallel computing resources. Multithread shared memory programming with OpenMP. NVIDIA GPU multicore programming with CUDA and Thrust. Using NVIDIA gaming and graphics cards on current laptops and desktops for general purpose parallel computing using Linux.

Mon and Thurs 4-5:20. 3 credits.

Instructor: W. Randolph Franklin.

Rationale:

This is a new experimental course to provide students with knowledge and hands-on experience in developing applications software for processors with massively parallel computing resources. Specifically, this course will target NVIDIA GPUs because of their low cost (useful gaming cards cost only a few hundred dollars), and ubiquity (a majority of modern desktops and laptops have NVIDIA GPUs). The techniques learned here will also be applicable to larger parallel machines -- number 2 on the top 500 list has 18,688 NVIDIA GPUs.

Effectively programming these processors will require in-depth knowledge about parallel programming principles, as well as the parallelism models, communication models, and resource limitations of these processors. The target audiences of the course are students who want to develop exciting applications for these processors, as well as those who want to develop programming tools and future implementations for these processors.

Students will learn tools such as OpenMP, CUDA, and Thrust via extensive programming work.

The target audiences are ECSE seniors and others with comparable background who wish to develop parallel software.

Prereq: ECSE-2660 CANOS or equivalent, knowledge of C++, access to a computer with NVIDIA GPU running CUDA 2.1 or newer.

This course will draw on Programming Massively Parallel Processors with CUDA.

Textbooks TBD

35.2  Term Project Presentations

On Dec 2, 4, or 5, each team will give a 5-minute fast-forward powerpoint talk in class. We will allow up to 14 presentations each day. Please email Shan Zhong, z h o n g s 2, with your preferred day(s) by Nov 24. Each team should send only one email, listing all the members.

35.3  Relevant performance at EMPAC

Eric Ameres recommends Central Intelligence Agency, Nov 23 at 7pm.

35.4  Prof Radke special talk.

His new book is

Computer Vision for Visual Effects.

He is giving a course on this topic next spring.

"Modern blockbuster movies seamlessly introduce impossible characters and action into real-world settings using digital visual effects. These effects are made possible by research from the field of computer vision, the study of how to automatically understand images. We'll overview classical computer vision algorithms used on a regular basis in Hollywood (such as blue-screen matting, structure from motion, optical flow, and feature tracking) and exciting recent developments that form the basis for future effects (such as natural image matting, multi-image compositing, image retargeting, and view synthesis). We'll also discuss the technologies behind motion capture and three-dimensional data acquisition. The talk will motivate the concepts with many behind-the-scenes images and video clips from TV and movies."

36.  Thurs 11/21/13

36.1  Studying for the final exam

There was a question about what you need to know from the last part of the semester for the final exam. The obvious answer is everything.

I'll prepare studying points to help you. Also there will be a review.

36.2  WebGL intro.

It's "a JavaScript API for rendering interactive 3D graphics and 2D graphics within any compatible web browser without the use of plug-ins" - http://en.wikipedia.org/wiki/WebGL. It uses your display's graphics HW. If your display lacks serious functionality, then WebGL can't add it. Apart from that, it is pretty device-independent.

It's a living API; see news at http://learningwebgl.com/blog/.

There is a series of lessons and demos starting at http://learningwebgl.com/blog/?p=11.

I'll preview WebGL by studying this raytracing example. Feel free to browse the source while I'm talking; in Firefox, you can see it with ctrl-U. This example can be better for learning than many on learningwebgl.com since it is more self-contained; the others reference several support functions in separate files. However, the tutorials have good descriptions.

There is lots of other info on WebGL on the web.

WebGL is similar (but not identical) to OpenGL ES, designed for embedded systems. OpenGL ES is similar (but not identical) to OpenGL.

36.3  Sample questions about WebGL:

  1. What does this do:
    <body onload="webGLStart(); resizeCanvas(600); flipAnim()">
  2. Look at the following block of code:
      attribute vec2 aVertexPosition;
      attribute vec3 aPlotPosition;
      varying vec3 vPosition;
      void main(void)  {
        gl_Position = vec4(aVertexPosition, 1.0, 1.0);
        vPosition = aPlotPosition; }
    
    
    1. In two words, what is this block of code?
    2. What does attribute mean?
    3. What does varying mean?
    4. What variable(s) are output to the next stage?
    5. What is the next stage?

36.4  NVIDIA 2013 GPU Technology Conference

Keynote presentation by Jen-Hsun Huang, Co-Founder, President, and CEO of NVIDIA. Hear about what's next in computing and graphics, and preview disruptive technologies and exciting demonstrations across industries. (The link points to a streaming video and PDF slides.)

Week 14

37.  Mon 11/25/13

  1. No class.
  2. Thanksgiving trivia questions:
    1. What language did Samoset, the first Indian to greet the Pilgrims, use?
      Answer: http://www.holidays.net/thanksgiving/pilgrims.htm
    2. How many European countries had Squanto, the 2nd Indian to greet the Pilgrims, visited?
  3. Next week is for student fast forward presentations. (This technique is popular at conferences.)
    1. Each team is to prepare a 5-minute timed MS Powerpoint presentation on your project.
    2. I will run the presentations continuously w/o a break.
    3. Your five minutes will include the time it takes you to get to the front of the class.
    4. The whole presentation must be in one ppt or pptx file.
    5. You may talk while your presentation is running, or remain silent (but then it should have sound).
    6. You must not talk into the next person's presentation.
  4. Regardless of your presentation time, your project is due on Dec 5. (RPI rule: after the last class is only for studying.)
  5. Submit your term project on LMS.

Week 15

38.  Mon 12/2/13

38.1  What I did last Tues

I gave a keynote talk at GeoInfo 2013, the XIV Brazilian Symposium on GeoInformatics. That was 4 days in Brazil plus almost a full day of traveling each way.

38.2  Term Project Grading

Fast forward presentation
    project clearly described: 10
    good use of video or graphics: 10
    neat and professional: 10
Project itself
    graphics programming with good coding style: 10
    use of interactivity or 3D: 10
    a nontrivial amount of it works (and that is shown): 10
    unusually creative: 10
Writeup
    describes key design decisions: 10
    good examples: 10
    neat and professional: 10
Total: 100

Notes:

  1. In addition to the above list, rules violations, such as late submission of the powerpoint file or a project that is seriously off-topic, will have an effect.
  2. A 10-minute demonstration to Shan or Dan is optional. If you do one, they will give me a modifier of up to 10 points either way. I.e., a good demo will help; a bad one will hurt.

38.3  Student term project presentations

  1. Format:
    1. Each team gets 5 minutes, measured from when the previous team ends until when I stand up and stop you talking.
    2. If you gave me your powerpoint file in time, then it is on the classroom computer. To make things easier, you may start it.
    3. If you did not give me your file in time, you're welcome to load it onto the classroom computer or to use your own computer. In either case, the setup time must be part of your 5 minutes.
    4. To treat everyone on all 3 days the same, these rules will apply today and Wed as well as Thurs, even though there are not so many people today and Wed.
    5. There are 64 registered students (62 for 3 credits, and 2 for 4 credits). Only 53 have contacted us about your presentation. If any of the remaining 11 are still alive, could we talk? Thanks.
  2. Presenters:
    1. Kevin Fung, Scott Todd
    2. Rebecca Nordhauser
    3. Timothy Mahon, M. Ryan Fredericks.

38.4  Review for final exam

We will use the rest of today as a review for the final exam, by looking at the sample questions that I've been adding to this page.

39.  Wed 12/4/13

39.1  Final exam conflicts

Only two of the students who told me in class about conflicts have emailed me. Could the other(s) please email me and tell me what exams you have on Thurs 12/12 and Fri 12/13 (so I can find an acceptable time). Thanks.

39.2  Student term project presentations

  1. Kelly DeBarr
  2. Wesley Miller, Eric Zhang, Cameron Chu
  3. Anamyaya Wonaphotimuke, Jesse Levine, Eunkyoung Lee
  4. Viktor Rumanuk, Nathan Berkley, Steve Cropp
  5. Robert Barron
  6. Kevin Lyman, Dan Schlegel, Amanda Knight
  7. Gabriel Violette, Geo Kersey
  8. Ying Lu, Jielei Wang
  9. Joshua Carrelli
  10. Ryan Tuck
  11. Sam Moodys
  12. Andrew Ryan
  13. Ashley Tanski, Nick Lewis
  14. Julia Strandberg
  15. Kevin Hendricks

40.  Thurs 12/5/13

40.1  Student term project presentations

  1. Ben Ciummo, Eric Lowry, Matt Hancock
  2. David Hedin, Raj Pandya
  3. Arthur Jones, Katie Sousa
  4. Nicholas Cesare, Andrew Leing, Kevin Turner
  5. Gaby Ciavardoni
  6. Phil Watters
  7. Ian Kettlewell, Jonathan Go
  8. Kevin Law
  9. Kai VanDrunen
  10. Brandon Waite, Mike Boswell
  11. Jesse Freitas
  12. Tom Weithers
  13. Wade Okawa-Scannell, Kevin Sullivan
  14. Kevan DuPont, Jacob Oarethu, Michael Napolitano
  15. Daniel Newton, James Ross

Week 16

41.  Wed 12/12 6:30-9:30 pm

41.1  Presentation grades

I uploaded presentation comments and grades to LMS on Sunday.

41.2  Final exam

  1. You are allowed two 2-sided letter-size note sheets.
  2. Calculators are not allowed since there are no serious calculation questions.
  3. Please space yourselves out in the exam room.

42.  After the semester and after you graduate

I'll be available to discuss most legal topics.

42.1  Computer Programming Position Available Beginning Jan 2014

Brett R. Fajen writes:

I am looking for an undergraduate student to work as a programmer in my virtual reality lab. The lab is equipped with a head-mounted display, a motion tracking system, and an Alienware machine. The primary responsibility will be to write code to develop virtual environments that will be used for research on human visual perception and motor control.

Candidates must have experience with Python and C++. A background in graphics programming and/or game-engine architecture is helpful but not necessary. The position pays $12 per hour.

This is an excellent opportunity for students to develop their programming skills and gain experience working with virtual reality and motion capture equipment.

For more information, please contact Brett Fajen (fajenb@THEUSUALRPIDOMAIN).

  Brett R. Fajen, Associate Professor
  Department of Cognitive Science
  Rensselaer Polytechnic Institute
  Carnegie Building 301D
  110 8th Street
  Troy NY 12180-3590

  Voice: (518) 276-8266
  Fax: (518) 276-8268
  Email: fajenb@THEUSUALRPIDOMAIN
  http://panda.cogsci.rpi.edu