Group Assignment 1
Image Sources for Acoustic Simulations (100 Points)

By Chris Tralie

Click here to see art contest results!

Overview

Now that students have had some practice with vector manipulations in Javascript, it's time to apply these skills to a larger and more comprehensive problem. In this first partner assignment, students will implement the image sources algorithm to model all specular reflections (angle in equals angle out) between a sound source and a sound receiver, up to a certain order (number of bounces), in a virtual 3D environment modeled by a collection of polygons. Each polygon has material properties that describe how much sound it absorbs or reflects, and the polygons are placed somewhere within a scene graph (a hierarchy of transformations between polygons) specified in JSON. Based on the lengths of the paths from source to receiver and the material properties, students will then compute an impulse response (a function of time that records when each of the bounces arrives), which can be used to simulate what any sound emanating from the source would sound like at the receiver. Students can then run and listen to sound simulations in the environment they created, in real time in the browser.

Point System

In this assignment, groups of 1 or 2 will have to earn 100 points, while groups of 3 will have to earn 120 points (groups of 3 will have their final score scaled down by a factor of 5/6). The tasks highlighted in blue are mandatory for all teams, but otherwise you can get to 100 points however you'd like out of the remaining tasks. If you choose to do more than the base number of required points, then each point beyond that is worth 9/10 as much as the previous one (leading to a geometric series). So, for instance, if you were on a team of 2 and earned 105 points, you would get 100 + (9/10) + (9/10)^2 + (9/10)^3 + (9/10)^4 + (9/10)^5 points. Thus, the absolute highest score you could get is bounded from above by 109 points (since that geometric series asymptotically approaches 9).
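To make the scheme concrete, here is a small sketch of the computation in Javascript (the function name is made up for illustration; this is not part of the starter code):

// Illustrative sketch of the diminishing-returns grading scheme
// basePoints is 100 for groups of 1-2 and 120 for groups of 3
// (a group of 3's result is then scaled down by 5/6 separately)
function adjustedScore(rawPoints, basePoints) {
    var score = Math.min(rawPoints, basePoints);
    var worth = 1.0;
    for (var i = 0; i < rawPoints - basePoints; i++) {
        worth *= 9.0/10.0; // Each extra point is worth 9/10 of the previous one
        score += worth;
    }
    return score;
}
// adjustedScore(105, 100) gives 100 + 0.9 + 0.9^2 + ... + 0.9^5, about 103.7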
I stole this idea from my undergraduate adviser.

Deadlines

The first section (50/100 points) will be due at 11:59 PM on Monday 2/22. Groups who do not make this deadline will be docked 15 points, but these points can be made up by the final deadline with extra tasks. No late days can be used on this first submission. The final deadline will be at 11:59 PM on Wednesday 3/2. For the final deadline, groups of 1-2 must submit an assignment with at least 100 points' worth of tasks completed, and groups of 3 must submit an assignment with at least 120 points' worth of tasks completed.

What To Submit

You should submit a README file that, for every section, includes what you did and any known bugs. Even for the basic tasks, there are different ways of doing things (such as finding areas of polygons), and there are optional special cases mentioned during the course of the assignment, so please tell me exactly what you did to get full credit and possibly extra points. For the scene file sections, please describe each scene file and provide screenshots.
(PDF format would be best, but .docx is also fine.)

Also please don't forget to submit all scene files and sounds that go along with them.

Getting Started / Software Description

Click here to download the starter code

The main file you will be editing is Algorithms.js. You can also add debugging code in the rendering functions in SceneFile.js. The main entry point for running the code is index.html.

NOTE: Because of browser protections against Javascript cross-site scripting attacks, certain browsers may not let you run this code directly from your filesystem. You may have to run a local webserver on your computer first. The easiest way to do this is to use Python's built-in webserver. If you have Python 2, pull up a terminal at the root of the code directory and type

python -m SimpleHTTPServer
Or, if you have Python 3 installed, type
python3 -m http.server
(on Windows, py -m http.server also works). By default, this will provide access to your code on port 8000 on your local machine, so you would go to the link http://127.0.0.1:8000/ to run your code. Please post on Piazza if you are having trouble with this.

Scene File Format / Running Code

The most important organizing principle in this assignment is the scene graph. We talked about scene graphs in class; they are a way to organize a complex hierarchy of transformations in a virtual environment. In this assignment, you will load environments into the simulator that are specified as scene graphs in JSON. At the root of each scene, you need to specify an initial position of the receiver and the source (both of which can be changed interactively in the GUI in index.html). You then need to specify an array of children, each of which is an object with four fields; an example scene that places two boxes on top of a square is sketched below.

NOTE: It is also possible to have a "dummy node"; that is, a node with no mesh, just a transformation and children. It may be useful to add a dummy node above a section of your scene graph to apply a single transformation to all of the nodes under it. DummyNodeExample.scn shows an example of such a scene, with a dummy node that translates the two boxes below it up by 5 meters.
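As promised above, here is a rough sketch of a scene file that places two boxes on top of a square. The field names (mesh, rcoeff for the reflection material property, transform as a flattened 4x4 matrix, and children) are illustrative assumptions, not a verbatim copy of the provided format, so defer to the provided .scn files for the exact schema:

{
    "receiver": [0, 2, 2],
    "source": [0, 2, -2],
    "children": [
        {
            "mesh": "meshes/square.off",
            "rcoeff": 0.8,
            "transform": [10, 0, 0, 0,
                          0, 1, 0, 0,
                          0, 0, 10, 0,
                          0, 0, 0, 1],
            "children": []
        },
        {
            "mesh": "meshes/box.off",
            "rcoeff": 0.5,
            "transform": [1, 0, 0, -2,
                          0, 1, 0, 0.5,
                          0, 0, 1, 0,
                          0, 0, 0, 1],
            "children": []
        },
        {
            "mesh": "meshes/box.off",
            "rcoeff": 0.5,
            "transform": [1, 0, 0, 2,
                          0, 1, 0, 0.5,
                          0, 0, 1, 0,
                          0, 0, 0, 1],
            "children": []
        }
    ]
}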

Click here to see a live demo of the scene graph software on a simple scene

Once you have loaded the scene, you can switch between receiver/source/external to change the positions of those objects (external is just an external camera viewer that doesn't impact the simulation). Use the following controls to navigate the virtual environment and move around these objects:

W: Forward
S: Backward
A: Left
D: Right
E: Up
C: Down
Click + Drag: Rotate up/down and left/right


Then, once you've filled in the core techniques, a typical run of the program goes in the following order, using the provided buttons:
  1. Position the source/receiver
  2. Compute image sources up to a certain order
  3. Extract all paths from source to receiver based on the generated images
  4. Load in an audio file
  5. Compute the impulse response
  6. Play the impulse response (listen to it)
  7. Recompute the convolution
  8. Play the convolution (to hear the loaded sound with all of the echoes)
Some sample sounds have been provided in the sounds directory. Note that if you change the position of the source/receiver, you will have to repeat steps 2-3. If you load a new sound with a different sampling rate, you will need to redo step 5 (since the impulse response sample times depend on the sampling rate). In both of these cases, you will need to repeat step 7 to recompute the convolution in order for the auralized echoes to be correct.
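To make steps 5-8 more concrete, here is a rough sketch of the kind of computation step 5 performs. This is not the starter code (the function and field names are made up for illustration), and it assumes a simple 1/r distance attenuation with a single product of reflection coefficients per path:

// Hypothetical sketch: build an impulse response from a list of paths,
// where each path has a total length in meters and rcoeff, the product
// of reflection coefficients of the faces it bounced off of
function computeImpulseResponseSketch(paths, Fs) {
    var c = 340.0; // Speed of sound in m/s (assumed)
    // Find the longest path to size the impulse response
    var maxLen = 0.0;
    for (var i = 0; i < paths.length; i++) {
        maxLen = Math.max(maxLen, paths[i].length);
    }
    var impulse = new Float32Array(Math.round(maxLen*Fs/c) + 1);
    for (var i = 0; i < paths.length; i++) {
        // Each path contributes an impulse whose arrival time is its
        // length divided by the speed of sound, converted to samples
        var idx = Math.round(paths[i].length*Fs/c);
        impulse[idx] += paths[i].rcoeff / paths[i].length;
    }
    return impulse;
}

This also shows why step 5 must be redone when the sampling rate changes: the arrival sample indices scale with Fs.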

Accessing Polygon Faces / Polygon Normals

Every node that isn't a dummy node has a mesh object field, and every mesh object has an array of faces. Every object in faces has a function getVerticesPos() that returns an array of vec3 objects describing the locations of that face's vertices in the node's coordinate system. You can assume that each face is a convex polygon (this makes the containment test in rayIntersectPolygon() easier). Also have a look at the code in the provided function scene.rayIntersectFaces() for an example of accessing the mesh face geometry this way (this code also provides a good example of how to recursively traverse the scene graph). Note that getVerticesPos() returns the vertex locations with respect to that node's coordinate system, but you need to place them in world coordinates when generating image sources or extracting paths, based on that node's position in the scene graph and the transformations that occurred along the hierarchy above it. Below is a code snippet that demonstrates looping through all of the faces of the mesh associated with a particular node and getting the vertices of each face.
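(A minimal sketch: node is a scene graph node as described above, and mtx is assumed to hold the accumulated mat4 transformation along the hierarchy above the node.)

for (var f = 0; f < node.mesh.faces.length; f++) {
    var verts = node.mesh.faces[f].getVerticesPos();
    for (var v = 0; v < verts.length; v++) {
        // verts[v] is a vec3 in the node's coordinate system; to get
        // world coordinates, apply the accumulated transformation mtx
        var vWorld = vec3.create();
        vec3.transformMat4(vWorld, verts[v], mtx);
    }
}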

Similarly, if you want to compute the normal of a face, you can use the function face.getNormal() to get the normal in the node's coordinate system, but, as discussed in class, you will have to transform this normal into world coordinates in a special way with a "normal matrix" (the function mat3.normalFromMat4() may come in handy here). To avoid this mess, you can also just transform all of the points into world coordinates and compute the normal from scratch the ordinary way (the same way you did in mini assignment 1 for the above/below test), using the transformed vertices.
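For instance, a sketch of the normal matrix approach with glMatrix might look like the following (again assuming mtx holds the node's accumulated world transformation):

// Transform a face normal into world coordinates with a normal matrix
var nMat = mat3.create();
mat3.normalFromMat4(nMat, mtx); // Inverse transpose of the upper-left 3x3
var nWorld = vec3.create();
vec3.transformMat3(nWorld, face.getNormal(), nMat);
vec3.normalize(nWorld, nWorld); // Re-normalize, since scales change lengths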

Core Technique

This is the basic image sources algorithm that you will run to create an impulse response from a chosen source position to a chosen receiver position. Every task in this section is required, and successful completion of the tasks will get you to 65/100 points.
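As a reminder of the geometric step at the heart of the algorithm, reflecting a point across the plane of a face, here is a sketch (not the starter code; the helper name is made up, and p and n are assumed to be a point on the face's plane and its unit normal, both in world coordinates):

// image = source - 2*((source - p) . n)*n
function reflectAcrossPlane(source, p, n) {
    var dv = vec3.create();
    vec3.subtract(dv, source, p); // Vector from the plane point to the source
    var d = vec3.dot(dv, n);      // Signed distance from the plane to the source
    var image = vec3.create();
    vec3.scaleAndAdd(image, source, n, -2.0*d);
    return image;
}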

Scene File Submissions

Once the algorithm is working, you should test it out on a variety of scene files with different geometry to make sure you're getting results which make sense. Some examples are below. For each example you submit, please write in your README what it is showing and what you observe.

Additional Sound Effects

The tasks in this section extend the basic single-channel (mono), specular-reflections-only model to include more features, such as (possibly frequency-dependent) sound transmission and 3D sound.

Computational Improvements

Other