
CPE 695A/WS Homework 2 Solved


1. [6 points] Prove Bayes' Theorem. Briefly explain why it is useful for machine learning problems, i.e., how it converts a posterior probability into a likelihood and a prior probability.
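As a reference point, Bayes' theorem follows in two lines from the definition of conditional probability (the generic events $A$ and $B$ are my notation, not the lecture's):

```latex
% Sketch: Bayes' theorem from the definition of conditional probability.
\begin{align*}
  P(A \mid B)\,P(B) &= P(A \cap B) = P(B \mid A)\,P(A)
      && \text{(definition, applied twice)} \\
  \Rightarrow\ P(A \mid B) &= \frac{P(B \mid A)\,P(A)}{P(B)},
      \qquad P(B) > 0.
\end{align*}
```

In machine-learning terms, with hypothesis $h$ and data $D$: the posterior $P(h \mid D) = P(D \mid h)\,P(h)/P(D)$, so a hard-to-measure posterior is rewritten as a likelihood times a prior, both of which we can usually estimate from the model and the data.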
2. [10 points] In Lecture 3-1, we gave the normal equation (i.e., the closed-form solution) for linear regression using MSE as the cost function. Prove that the closed-form solution for Ridge Regression is $\mathbf{w} = (\lambda I + X^T \cdot X)^{-1} \cdot X^T \cdot \mathbf{y}$, where $I$ is the identity matrix, $X = (x^{(1)}, x^{(2)}, \ldots, x^{(m)})^T$ is the input data matrix, $x^{(i)} = (1, x_1, x_2, \ldots, x_n)$ is the $i$-th data sample, and $\mathbf{y} = (y^{(1)}, y^{(2)}, \ldots, y^{(m)})$. Assume the hypothesis function $h_w(x) = w_0 + w_1 x_1 + w_2 x_2 + \cdots + w_n x_n$, and $y^{(j)}$ is the measurement of $h_w(x)$ for the $j$-th training sample. The cost function of Ridge Regression is $E(\mathbf{w}) = \mathrm{MSE}(\mathbf{w}) + \frac{\lambda}{2} \sum_{i=1}^{m} w_i^2$. [Hint: refer to the proof of the normal equation for linear regression. Note: please use the following rectified definition of MSE in your proof: $\mathrm{MSE}(\mathbf{w}) = \sum_{i=1}^{m} (\mathbf{w}^T \cdot x^{(i)} - y^{(i)})^2$.]
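A proof sketch in matrix form, under the definitions above (writing the squared-error sum as $\lVert X\mathbf{w} - \mathbf{y}\rVert^2$; absorbing the leftover constant factor into $\lambda$ is my assumption, since the lecture's exact convention for the $\lambda/2$ factor may differ):

```latex
% Sketch: set the gradient of the Ridge cost to zero and solve for w.
\begin{align*}
  E(\mathbf{w}) &= (X\mathbf{w} - \mathbf{y})^T (X\mathbf{w} - \mathbf{y})
                   + \tfrac{\lambda}{2}\,\mathbf{w}^T \mathbf{w} \\
  \nabla_{\mathbf{w}} E &= 2X^T(X\mathbf{w} - \mathbf{y}) + \lambda\mathbf{w}
                   = \mathbf{0} \\
  \Rightarrow\ (2X^T X + \lambda I)\,\mathbf{w} &= 2X^T \mathbf{y}.
\end{align*}
% Dividing through by 2 (i.e., absorbing the constant into \lambda) yields
% the stated form: w = (\lambda I + X^T X)^{-1} X^T y.
```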
3. [10 points] Recall the multi-class Softmax Regression model on page 16 of Lecture 3-3. Assume we have K different classes. The posterior probability is $\hat{p}_k = \delta(s(x))_k = \frac{\exp(s_k(x))}{\sum_{j=1}^{K} \exp(s_j(x))}$ for $k = 1, 2, \ldots, K$, where $s_k(x) = \theta_k^T \cdot x$, and the input $x$ is an $n$-dimensional vector.
1) To learn this Softmax Regression model, how many parameters do we need to estimate? What are these parameters?
2) Consider the cross-entropy cost function $J(\Theta)$ (see page 16 of Lecture 3-3) of $m$ training samples $\{(x_i, y_i)\}_{i=1,2,\ldots,m}$. Derive the gradient of $J(\Theta)$ with respect to $\theta_k$, as shown on page 17 of Lecture 3-3.
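For part 2), a sketch of the expected result, assuming the usual one-hot cross-entropy with a $1/m$ normalization (both are assumptions; check the slide for the exact convention):

```latex
% Sketch: gradient of the softmax cross-entropy w.r.t. \theta_k.
\begin{align*}
  J(\Theta) &= -\frac{1}{m}\sum_{i=1}^{m}\sum_{k=1}^{K}
               y_k^{(i)} \log \hat{p}_k^{(i)} \\
  \nabla_{\theta_k} J(\Theta)
            &= \frac{1}{m}\sum_{i=1}^{m}
               \bigl(\hat{p}_k^{(i)} - y_k^{(i)}\bigr)\,x^{(i)},
\end{align*}
% using d\hat{p}_k / ds_j = \hat{p}_k ([j = k] - \hat{p}_j) and
% \sum_k y_k^{(i)} = 1 for one-hot labels.
```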
Programming Problem:
4. [44 points] In this problem, we write a program to find the coefficients for a linear regression model for the dataset provided (data2.txt). Assume a linear model: y = w0 + w1*x. You need to
1) Plot the data (i.e., x-axis for the 1st column, y-axis for the 2nd column),
and use Python to implement the following methods to find the coefficients:
2) Normal equation, and
3) Gradient Descent, using both batch and stochastic modes:
a) Determine an appropriate termination condition (e.g., when the cost function falls below a threshold and/or after a given number of iterations).
b) Plot the cost function vs. iterations for each mode; compare and discuss the batch and stochastic modes in terms of accuracy and speed of convergence.
c) Choose the best learning rate. For example, you can plot the final cost vs. the learning rate to determine the best one.
Please implement the algorithms yourself and do NOT use the fit() function of any library; a minimal sketch is given below.
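A minimal, self-contained Python sketch of the three solvers, assuming data2.txt holds two whitespace-separated columns (x, then y); the file name and the model y = w0 + w1*x come from the problem statement, while the learning rate, iteration counts, and tolerance are illustrative placeholders:

```python
# Minimal sketch: normal equation, batch GD, and stochastic GD for y = w0 + w1*x.
# Assumptions: data2.txt has two columns (x, y); hyperparameters are placeholders.
import numpy as np

def load_data(path="data2.txt"):
    """Load two-column data; adjust `delimiter` if the file is comma-separated."""
    data = np.loadtxt(path)
    x, y = data[:, 0], data[:, 1]
    X = np.column_stack([np.ones_like(x), x])  # prepend a bias column of 1s
    return X, y

def mse(X, y, w):
    """Mean squared error of the current weights."""
    err = X @ w - y
    return np.mean(err ** 2)

def normal_equation(X, y):
    """Closed-form solution of (X^T X) w = X^T y via a linear solve."""
    return np.linalg.solve(X.T @ X, X.T @ y)

def batch_gd(X, y, lr=0.01, max_iters=5000, tol=1e-8):
    """Batch gradient descent; stops when the cost improvement falls below tol."""
    m, n = X.shape
    w = np.zeros(n)
    costs = [mse(X, y, w)]
    for _ in range(max_iters):
        grad = (2.0 / m) * X.T @ (X @ w - y)   # gradient of the MSE
        w -= lr * grad
        costs.append(mse(X, y, w))
        if abs(costs[-2] - costs[-1]) < tol:   # termination condition
            break
    return w, costs

def stochastic_gd(X, y, lr=0.01, epochs=50, seed=0):
    """Stochastic gradient descent: one randomly ordered sample per update."""
    rng = np.random.default_rng(seed)
    m, n = X.shape
    w = np.zeros(n)
    costs = []
    for _ in range(epochs):
        for i in rng.permutation(m):
            grad = 2.0 * (X[i] @ w - y[i]) * X[i]  # single-sample gradient
            w -= lr * grad
        costs.append(mse(X, y, w))  # track cost once per epoch
    return w, costs

if __name__ == "__main__":
    X, y = load_data()
    print("normal equation:", normal_equation(X, y))
    w_b, costs_b = batch_gd(X, y)
    w_s, costs_s = stochastic_gd(X, y)
    print("batch GD:      ", w_b)
    print("stochastic GD: ", w_s)
```

To answer 3b) and 3c), record the returned cost histories and plot them with matplotlib (cost vs. iteration for each mode, and final cost vs. a small grid of learning rates); if the cost diverges, lower the learning rate or standardize x first.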