# AMATH 482/582 Homework 4: Extended Yale Faces B Database – Eigenfaces & Music Genre Identification

## Yale Faces B

Using the Extended Yale Faces B database of face images, perform the following analysis.

1. Do an SVD analysis of the images (where each image is reshaped into a column vector and
each column is a new image).
2. What is the interpretation of the U, Σ and V matrices?
3. What does the singular value spectrum look like and how many modes are necessary for good
image reconstructions? (i.e. what is the rank r of the face space?)
4. Compare the results for the cropped (and aligned) images versus the uncropped images.
This is an exploratory homework, so play around with the data and be sure to plot quantities such as the SVD modes and the singular value spectrum. Good luck, and have fun.
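The analysis in steps 1–3 can be sketched as follows. This is a minimal outline, not the assignment's reference solution: a random matrix stands in for the reshaped face images, and the image dimensions and the 95% energy threshold for choosing the rank are assumptions.

```python
import numpy as np

# Synthetic stand-in for the face data: each column is one flattened image.
# (192x168 matches the cropped Yale B images; swap in the real data here.)
rng = np.random.default_rng(0)
n_pixels, n_images = 192 * 168, 100
X = rng.standard_normal((n_pixels, n_images))

# Economy-size SVD: the columns of U are the "eigenfaces" (spatial modes),
# S holds the singular value spectrum, and the rows of Vt give each image's
# weights on those modes.
U, S, Vt = np.linalg.svd(X, full_matrices=False)

# Cumulative energy captured by the first r modes; one common way to pick
# the rank r of the face space (the 95% cutoff is an assumption).
energy = np.cumsum(S**2) / np.sum(S**2)
r = int(np.searchsorted(energy, 0.95)) + 1

# Rank-r reconstruction of the image matrix for visual comparison.
X_r = U[:, :r] @ np.diag(S[:r]) @ Vt[:r, :]
```

Plotting `S` (or `energy`) against the mode index gives the singular value spectrum, and reshaping columns of `U` back to image dimensions displays the modes.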

## Music Classification

Music genres are instantly recognizable to us, whether it be jazz, classical, blues, rap, rock, etc. One
can always ask how the brain classifies such information and how it makes a decision based upon
hearing a new piece of music. The objective of this homework is to attempt to write a code that can
classify a given piece of music by sampling a 5-second clip.

As an example, consider Fig. 1. Four classic pieces of music are demonstrated spanning genres
of rap, jazz, classic rock and classical. Specifically, a 3-second sample is given of Dr. Dre’s Nuthin’
but a ’G’ thang (The Chronic), John Coltrane’s A Love Supreme (A Love Supreme), Led Zeppelin’s
Over The Hills and Far Away (Houses of the Holy), and Mozart’s Kyrie (Requiem). Each has a
different signature, thus begging the question of whether a computer could distinguish between genres
based upon such a characterization of the music.

• (test 1) Band Classification: Consider three different bands of your choosing, each from a different
genre. For instance, one could pick Michael Jackson, Soundgarden, and Beethoven. By taking
5-second clips from a variety of each artist's songs, i.e. building training sets, see if you can
build a statistical testing algorithm capable of accurately identifying "new" 5-second clips of
music from the three chosen bands.

• (test 2) The Case for Seattle: Repeat the above experiment, but with three bands from
within the same genre. This makes the testing and separation much more challenging. For
instance, one could focus on the late 90s Seattle grunge bands: Soundgarden, Alice in Chains,
and Pearl Jam. What is your accuracy in correctly classifying a 5-second sound clip? Compare
this with the first experiment with bands of different genres.
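Building any of these training sets amounts to turning each 5-second clip into a feature vector. A minimal sketch using SciPy's spectrogram, with a synthetic tone standing in for real audio (the sample rate, window length, and tone frequencies are all placeholders):

```python
import numpy as np
from scipy.signal import spectrogram

# Synthetic 5-second "clip" standing in for a real song excerpt.
fs = 8000                         # placeholder sample rate
t = np.arange(0, 5, 1 / fs)
clip = np.sin(2 * np.pi * 440 * t) + 0.5 * np.sin(2 * np.pi * 660 * t)

# Spectrogram of the clip: rows are frequencies, columns are time windows.
f, times, Sxx = spectrogram(clip, fs=fs, nperseg=256, noverlap=128)

# Flattening the log spectrogram gives one feature vector per clip;
# stacking many such vectors as columns builds the training matrix.
feature = np.log(Sxx + 1e-12).ravel()
```

The log scale tames the large dynamic range of audio spectra; any monotone rescaling that keeps the clip-to-clip features comparable would serve the same purpose.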

• (test 3) Genre Classification: One could also use the above algorithms to more broadly
classify songs as jazz, rock, classical, etc. In this case, the training sets should be various bands
within each genre. For instance, classic rock could be classified using sound clips from
Zep, AC/DC, Floyd, etc., while classical could be classified using Mozart, Beethoven, Bach,
etc. Perhaps you can limit your results to three genres, for instance, rock, jazz, and classical.

Figure 1: Instantly recognizable, these four pieces of music are (in order from top to bottom): Dr.
Dre’s Nuthin’ but a ’G’ thang (The Chronic), John Coltrane’s A Love Supreme (A Love Supreme),
Led Zeppelin’s Over The Hills and Far Away (Houses of the Holy), and Mozart’s Kyrie (Requiem).
Illustrated is a 3-second clip, from time 2 seconds to 5 seconds, of each song. (The waveform panels
themselves are omitted here; the horizontal axis of each is time in seconds.)

WARNING and NOTES: You will probably want to take the SVD of the spectrograms of the songs
rather than of the songs themselves. Interestingly, this will give you the dominant spectrogram modes
associated with a given band. Moreover, you may want to re-sample your data (i.e. take every other
point) in order to keep the data sizes manageable. Regardless, you will need lots of processing time.
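Putting these notes together — re-sampling, taking the SVD of spectrogram features, and classifying in the reduced space — can be sketched end to end. Everything here is a hedged stand-in: two hypothetical "bands" are represented by noisy tones, and a nearest-centroid rule substitutes for a proper statistical test such as LDA.

```python
import numpy as np
from scipy.signal import spectrogram, decimate

rng = np.random.default_rng(1)
fs = 8000  # placeholder sample rate

def clip(freq):
    """Synthetic 5-second stand-in for a band's clip: a noisy tone."""
    t = np.arange(0, 5, 1 / fs)
    return np.sin(2 * np.pi * freq * t) + 0.3 * rng.standard_normal(t.size)

def feature(x):
    """Re-sample (keep every other point, anti-aliased), then flatten
    the log spectrogram into one feature vector."""
    x = decimate(x, 2)
    _, _, Sxx = spectrogram(x, fs=fs // 2, nperseg=256)
    return np.log(Sxx + 1e-12).ravel()

# Training matrix: columns are clips from two hypothetical "bands".
train = np.column_stack([feature(clip(f0)) for f0 in [220] * 10 + [880] * 10])
labels = np.array([0] * 10 + [1] * 10)

# SVD of the mean-centered spectrogram features; the columns of U are the
# dominant spectrogram modes, and we project onto the first r of them.
mean = train.mean(axis=1, keepdims=True)
U, S, Vt = np.linalg.svd(train - mean, full_matrices=False)
r = 5
proj = U[:, :r].T @ (train - mean)

# Nearest-centroid classifier in the reduced space.
centroids = np.column_stack([proj[:, labels == k].mean(axis=1) for k in (0, 1)])

def classify(x):
    z = U[:, :r].T @ (feature(x)[:, None] - mean)
    return int(np.argmin(np.linalg.norm(z - centroids, axis=0)))
```

Accuracy for tests 1–3 would then be estimated by holding out clips the classifier never saw during training and counting correct labels.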