Final Project
Click here to see a collage of all student final projects
Overview
There will be one large final project in this course, which accounts for 25% of the final grade and which will be done over roughly half of the course. The projects all span fun topics which should lead to tangible deliverables at the end of the course. They must be done in groups. Each project is novel in some way, where novelty could be measured by the problem, the approach, or the accessibility of existing methods to non-experts. Project groups and topics will be determined about a third of the way through the course.
Project Presentations / Documentation
Due to the larger class size this semester, it is simply not feasible for everyone to present their final projects during class time. Instead, students will be required to make videos summarizing their work. Talking over a PowerPoint is fine, but this is also an opportunity to make a more visually polished result, possibly even with animations, which is appropriate given the nature of this course. Once the videos are completed, each student will be randomly assigned 3-4 videos from other groups in the course to watch and provide critical feedback on. This feedback will factor into the participation grade.
NOTE: I will of course also be watching each video carefully myself.
NOTE ALSO: This fabulous idea about video-recorded presentations is adapted from one of my research collaborators, Paul Bendich.
No formal writeup will be required for the projects, but students will be expected to submit a brief document summarizing their accomplishments in bullet form, providing a summary of completed code, and providing directions for using the code.
Grading Rubric
20% | Initial project milestone: How close did you get to accomplishing the initial goals we set out? |
45% | Technical refinement: How much did the project mature over the time you worked on it? How close are you to the final goal that we had? |
15% | Narrated video: Graded for overall clarity, quality of figures/animations, and demonstration of what you did so that other students can understand it |
10% | Code/Documentation/Mini Report: In lieu of a formal final report, you will submit a brief summary of what you accomplished, along with code and directions on how to use it. You can think of this as an extended README. You will be graded on the quality of your code and documentation (how easy is it for someone who doesn't know your project to run your code or to get started replicating your results?). You should also submit a single slide with a representative screenshot and some bullet points describing what you did for the class final project collage. |
10% | Above and beyond: How much did you do to refine this project and to make it your own? Did you put any unique twists on it that weren't suggested by the instructor? |
Project Topics
Below is a list of projects that students can choose from. Towards the beginning of the second unit of the course, groups will rank their top three choices, and the instructor will assign projects based on interest and the technical background of each group.
NOTE: There will likely be more than one group working on the same project. In that case, some collaboration between the groups is expected to get through core issues. Otherwise, unique solutions are expected, or the groups should carve out different portions of the same problem.
- Equidecomposability of 3D Surface Meshes
Given a polygon A and a polygon B with the same area, it is always possible to cut polygon A up into a finite number of pieces that can be rigidly rearranged (rotated/translated in the plane) to form polygon B (this is the Bolyai-Gerwien theorem). We learn in the course that one popular 3D surface representation is the triangle mesh, which is just a bunch of 2D polygons stitched together. The goal of this project will be to cut two meshes with the same surface area (after rescaling) into pieces which can be rearranged into each other, and to create a cool animation of the parts flying from one shape to the other.
This project is perfect for students who are stronger on the programming side of the spectrum, as the math is surprisingly straightforward but the code is devilishly tricky due to numerical precision issues. A fantastic deliverable for the course would be a Javascript application which can take two triangle meshes as input, scale them so they have the same surface area, and show the pieces flying from one shape to the other. Such an app would be worthy of the front page of Reddit!
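As a rough illustration of the rescaling step only, here is a minimal sketch (assuming meshes arrive as a flat vertex coordinate array plus index triples, which is a made-up input format for this example) of computing a mesh's surface area and the uniform scale factor needed to match another mesh's area:

```javascript
// Surface area of a triangle mesh given as a flat array of vertex
// coordinates [x0, y0, z0, x1, ...] and an array of [i, j, k] index triples.
function surfaceArea(vertices, triangles) {
    let area = 0.0;
    for (const [i, j, k] of triangles) {
        // Two edge vectors of this triangle
        const ax = vertices[3*j]   - vertices[3*i];
        const ay = vertices[3*j+1] - vertices[3*i+1];
        const az = vertices[3*j+2] - vertices[3*i+2];
        const bx = vertices[3*k]   - vertices[3*i];
        const by = vertices[3*k+1] - vertices[3*i+1];
        const bz = vertices[3*k+2] - vertices[3*i+2];
        // Half the magnitude of their cross product is the triangle's area
        const cx = ay*bz - az*by;
        const cy = az*bx - ax*bz;
        const cz = ax*by - ay*bx;
        area += 0.5 * Math.sqrt(cx*cx + cy*cy + cz*cz);
    }
    return area;
}

// Factor by which to uniformly scale mesh A so its surface area matches mesh B's.
// Area grows with the square of a uniform scale factor, hence the square root.
function rescaleFactor(areaA, areaB) {
    return Math.sqrt(areaB / areaA);
}
```

The hard part of the project is everything after this step: computing the cuts robustly in the face of numerical precision issues.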
- Ghissi Altarpiece: Virtual Real Time Rendering for The North Carolina Museum of Art
For those in the class who want to learn more about real time rendering with WebGL in the browser and shaders, or who want to explore a space in between image processing and geometry processing, there is an exciting opportunity that has popped up here at Duke. Professor Ingrid Daubechies is working on a project re-imagining paintings from the Ghissi Altarpiece in a virtual world, which will be put on display in the North Carolina Museum of Art in Fall 2016. The group working on this project would add some 3D geometry effects to enhance the background of the paintings and to help simulate what specular lighting effects would have looked like as a person walked around the painting, and possibly also do geometry-based rejuvenation of the existing paintings. The output of this project may actually be featured in the North Carolina Museum of Art, so it's a unique opportunity.
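One standard way to get view-dependent highlights of this kind is the Blinn-Phong specular model; whether the group ends up using it is their choice, but here is a minimal plain-JavaScript sketch of the specular term (in practice this would live in a WebGL fragment shader):

```javascript
function normalize(v) {
    const n = Math.hypot(v[0], v[1], v[2]);
    return [v[0]/n, v[1]/n, v[2]/n];
}

// normal: surface normal, lightDir: direction toward the light,
// viewDir: direction toward the viewer, shininess: specular exponent.
// Returns the specular intensity in [0, 1] for this viewing configuration.
function blinnPhongSpecular(normal, lightDir, viewDir, shininess) {
    const N = normalize(normal);
    const L = normalize(lightDir);
    const V = normalize(viewDir);
    // Halfway vector between the light and view directions
    const H = normalize([L[0]+V[0], L[1]+V[1], L[2]+V[2]]);
    const nDotH = Math.max(0, N[0]*H[0] + N[1]*H[1] + N[2]*H[2]);
    return Math.pow(nDotH, shininess);
}
```

Because the term depends on the view direction, re-evaluating it as the virtual camera moves is what makes the highlights appear to shift as a visitor "walks around" the painting.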
- Making Nasher Museum Medieval Statues Speak in The Browser
I have had some ongoing work with the Nasher Art Museum modeling heads, texturing them, and making them speak using the Laplacian Mesh algorithm (assignment 4) to transfer 3D facial landmarks of me talking onto the heads, as part of a Bass Connections project. Click here to see my prototype in action. It would be awesome to port this to the browser for the Nasher museum's web site, but this is challenging because there is a lack of fast numerical tools in Javascript. Also, my speech acquisition app for people who want to record speech for the statue is quite hacky at the moment, and it's important to have a good interface so that art history students without much technical knowledge can record meaningful monologues for the statues. Therefore, there would be two deliverables for this project:
- A Cholesky factorization implementation for sparse matrices. This is a numerical algorithm that doesn't exist in Javascript at the moment and which is vital to making the speech transfer fast. It will be a good contribution to the Javascript development community (a rough dense sketch, as a possible starting point, appears after this project description).
- A user interface on top of the Intel RealSense 3D sensor for recording 3D facial landmarks synchronized with sound, and a way to place these landmarks on virtual statues.
This project is great for those (like me) who are interested in the intersection of technology and the humanities, and it will also give the group a jump start on assignment #4. It may also be featured in the Nasher Art Museum on Duke's campus as an exhibit, depending on progress.
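For the Cholesky deliverable above, one reasonable starting point is the plain dense factorization, which uses the same recurrence that a sparse implementation would apply only to nonzero entries. A minimal sketch, assuming the matrix is symmetric positive definite and stored as an array of rows:

```javascript
// Dense Cholesky factorization: given a symmetric positive definite matrix A
// (array of rows), returns the lower-triangular matrix L with A = L * L^T.
// The project's sparse version would use the same recurrence but store and
// visit only nonzero entries.
function cholesky(A) {
    const n = A.length;
    const L = Array.from({length: n}, () => new Array(n).fill(0));
    for (let j = 0; j < n; j++) {
        let diagSum = 0;
        for (let k = 0; k < j; k++) {
            diagSum += L[j][k] * L[j][k];
        }
        L[j][j] = Math.sqrt(A[j][j] - diagSum);
        for (let i = j + 1; i < n; i++) {
            let sum = 0;
            for (let k = 0; k < j; k++) {
                sum += L[i][k] * L[j][k];
            }
            L[i][j] = (A[i][j] - sum) / L[j][j];
        }
    }
    return L;
}
```

Once L is available, the Laplacian mesh systems can be solved quickly with forward and back substitution, which is why this factorization matters for making the speech transfer interactive.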
- 3D Face Fitting And Parameterization for Expression Synthesis / Face Morphing
This project is closely related to the talking heads Nasher museum project. Students will implement the technique in this paper, which is a variant on the ICP algorithm, to fit a 3D face model corrupted by noise to a database of faces. The group will then use PCA on the database to synthesize new expressions on the scanned face. The end goal will be to take a scan of one of the students in this class and make him/her smile, frown, look surprised, etc., regardless of the expression he/she had when the initial 3D scan came in. It is also possible to change other aspects of the face, as shown here (you can make the face younger/older, more masculine/feminine, etc.). Time permitting, this group can also try to fit pictures of faces to 3D models to modify the facial expressions, following techniques in this paper, so that a 3D scan isn't even necessary.
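To make the PCA step concrete, here is a minimal sketch of the synthesis stage only, assuming the mean face and the top principal components of the database (each a flat array of 3D vertex coordinates) have already been computed with some eigen-solver; computing those components is a separate part of the project:

```javascript
// PCA expression synthesis sketch: a new face is the mean face plus a
// weighted combination of principal components. meanFace and each entry of
// components are flat arrays of the same length; coefficients holds one
// weight per component (e.g. chosen to move toward a "smile" expression).
function synthesizeFace(meanFace, components, coefficients) {
    const face = meanFace.slice();
    for (let c = 0; c < components.length; c++) {
        for (let i = 0; i < face.length; i++) {
            face[i] += coefficients[c] * components[c][i];
        }
    }
    return face;
}
```

Sweeping a single coefficient from negative to positive while keeping the others fixed is a simple way to animate a transition between expressions.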
- 3D Blood Vessel Shape Statistics
In this project, students will compute shape statistics on the surfaces of 3D blood vessel data to summarize information about the shear stress of blood vessels in that region. Students will primarily be working with open data from a 3D vascular modeling competition. The goal will be to come up with a more descriptive summary of regions of the blood vessel than simply the average shear stress in a plane slice. Examples could include computing spin images or spherical harmonic descriptors at different parts of the blood vessel (in this way, it is basically an application of group assignment 2). Time permitting, students may also use these statistics to help match regions between two different hearts to put the hearts into correspondence.
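As one concrete example of such a per-point descriptor, here is a rough spin image sketch, assuming the surface is available as an array of [x, y, z] points and that a unit normal is known at the basis point; the bin count and bin size are arbitrary placeholders:

```javascript
// Rough spin image sketch: bin neighboring points into a 2D histogram of
// (alpha, beta) coordinates relative to a basis point p with unit normal n,
// where beta is the signed height along n and alpha is the radial distance
// from the normal axis.
function spinImage(points, p, n, numBins, binSize) {
    const hist = Array.from({length: numBins}, () => new Array(numBins).fill(0));
    for (const x of points) {
        const d = [x[0]-p[0], x[1]-p[1], x[2]-p[2]];
        const beta = d[0]*n[0] + d[1]*n[1] + d[2]*n[2];
        const distSq = d[0]*d[0] + d[1]*d[1] + d[2]*d[2];
        const alpha = Math.sqrt(Math.max(0, distSq - beta*beta));
        // Shift beta so negative heights still land inside the histogram
        const row = Math.floor((beta + numBins*binSize/2) / binSize);
        const col = Math.floor(alpha / binSize);
        if (row >= 0 && row < numBins && col >= 0 && col < numBins) {
            hist[row][col] += 1;
        }
    }
    return hist;
}
```

Descriptors like this computed at many points along the vessel could then be compared or clustered to characterize regions, rather than reducing each region to a single average shear stress value.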
- Animating MOCAP data in the browser / 3D Lemur Tracking (?)
There is lots of interesting 3D data on the web of tracked joints during human activities, including the CMU MOCAP database. There are a number of tools for viewing this data, but most of them require C++ knowledge, a Matlab license, or some form of technical ability beyond the average computer user. This project will enable people with limited technical skill to browse and interactively visualize motion capture databases, which could open these databases up to a wider research audience. This group should also experiment with motion capture software for the Kinect v2 to see if they can create an interface to capture and display new data. Time permitting, the group may also try to do "skinning" animations, in which they animate surfaces based on the motion capture data.
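The simplest possible playback step is to interpolate joint positions between captured frames; here is a minimal sketch, assuming each frame is just an array of [x, y, z] joint positions (a simplified stand-in for real MOCAP formats, which typically store joint angles in a skeleton hierarchy):

```javascript
// Linearly interpolate joint positions between two frames, t in [0, 1].
function interpolateFrame(frameA, frameB, t) {
    return frameA.map((joint, j) => [
        (1 - t) * joint[0] + t * frameB[j][0],
        (1 - t) * joint[1] + t * frameB[j][1],
        (1 - t) * joint[2] + t * frameB[j][2]
    ]);
}

// Given a playback time in seconds and the capture frame rate, pick the two
// surrounding frames and the blend parameter between them.
function sampleAnimation(frames, fps, timeSeconds) {
    const f = Math.min(timeSeconds * fps, frames.length - 1);
    const i = Math.floor(f);
    const j = Math.min(i + 1, frames.length - 1);
    return interpolateFrame(frames[i], frames[j], f - i);
}
```

An in-browser viewer would call something like sampleAnimation once per rendered frame and draw the resulting joints (and, for the skinning extension, deform a surface attached to them).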
The group(s) who work on this project may also take this work in a fairly unique direction by working with the Duke Lemur Center. The scientists there would like to get motion capture data of lemurs as they move around in 3D, and it would be interesting to see if Kinect software that is calibrated to work on humans will work at all on lemurs. It would be great if the group could make enough progress to get to this point, as this is a truly novel use of 3D geometry which is certainly unique to Duke!