
UCLA CS 145 Homework #1

DUE DATE: Friday 01/25/2019 11:59 pm

 

Note:

  • You are expected to submit both a report and code. For your code, please include clear README files. When not specifically mentioned, use the default data provided in our program package.
  • “########## Please Fill Missing Lines Here ##########” is used where input from you is needed.

 

  1. Linear Regression

1.1 In LinearRegression\linearRegression.py, fill in the missing lines in the Python code to estimate the weight vector β, using (1) the closed-form solution, (2) batch gradient descent, and (3) stochastic gradient descent.

  1. Report the learned weights and the MSE (Mean Square Error) on the test dataset for each version. Are they the same, and why?
  2. Apply z-score normalization to each feature x, and report whether the normalization affects the learned weights and the MSE (Mean Square Error) on the test dataset, for all three versions of the algorithm, and why.
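The three estimators can be sketched on synthetic data as follows. This is a minimal standalone illustration — the data, step sizes, and iteration counts are made up for the example, and the variable names are not those of the course's linearRegression.py:

```python
import numpy as np

# Synthetic regression problem (illustrative, not the course's dataset)
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
true_beta = np.array([2.0, -1.0, 0.5])
y = X @ true_beta + 0.1 * rng.normal(size=200)

# (1) Closed-form solution (normal equations): beta = (X^T X)^{-1} X^T y
beta_cf = np.linalg.solve(X.T @ X, X.T @ y)

# (2) Batch gradient descent on the MSE cost J(b) = (1/n) ||y - X b||^2
beta_gd = np.zeros(3)
lr = 0.1
for _ in range(1000):
    grad = -2.0 / len(y) * X.T @ (y - X @ beta_gd)
    beta_gd -= lr * grad

# (3) Stochastic gradient descent: one randomly ordered sample per update
beta_sgd = np.zeros(3)
lr = 0.01
for epoch in range(50):
    for i in rng.permutation(len(y)):
        grad_i = -2.0 * X[i] * (y[i] - X[i] @ beta_sgd)
        beta_sgd -= lr * grad_i

mse = lambda b: np.mean((y - X @ b) ** 2)
print(beta_cf, mse(beta_cf))
```

On a convex MSE cost, the closed form and batch gradient descent reach the same minimizer; SGD with a constant step size hovers near it, which is one starting point for the "are they the same" discussion.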

1.2 Ridge regression adds an L2 regularization term to the original mean square error cost function of linear regression: J(β) = Σ_{i=1}^{n} (y_i − β^T x_i)^2 + λ ‖β‖^2, where λ ≥ 0 is a trade-off between the two terms. Please derive the closed-form solution for β.
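Whatever scaling convention the course uses for the cost (a 1/2 or 1/n factor only rescales λ), a candidate closed form can be checked numerically: at the minimizer, the ridge gradient must vanish. A quick sketch, assuming the unscaled cost J(β) = ‖y − Xβ‖^2 + λ‖β‖^2:

```python
import numpy as np

# Arbitrary illustrative data
rng = np.random.default_rng(1)
X = rng.normal(size=(100, 4))
y = rng.normal(size=100)
lam = 0.5

# Candidate closed form: beta = (X^T X + lambda I)^{-1} X^T y
beta = np.linalg.solve(X.T @ X + lam * np.eye(4), X.T @ y)

# Gradient of J(b) = ||y - X b||^2 + lam ||b||^2 is
# -2 X^T (y - X b) + 2 lam b; it should vanish at the minimizer.
grad = -2 * X.T @ (y - X @ beta) + 2 * lam * beta
print(np.max(np.abs(grad)))  # ~0 (floating-point noise)
```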

 

  2. Logistic Regression and Model Selection

In LogisticRegression\LogisticRegression.py, fill in the missing lines in the Python code to estimate the weight vector β, using (1) batch gradient descent and (2) the Newton-Raphson method.

  1. Report the learned weights and the accuracy on the test dataset for each version. Are they the same, and why? Discuss the pros and cons of the two methods.
  2. Similar to linear regression, regularization can be added to logistic regression. Consider the new objective function:

J(β) = −(1/n) Σ_{i=1}^{n} [ y_i log σ(β^T x_i) + (1 − y_i) log(1 − σ(β^T x_i)) ] + λ Σ_{j=1}^{p} β_j^2,

where n is the total number of data points in the training dataset, p is the dimensionality of the attributes, and σ(z) = 1/(1 + e^{−z}) is the sigmoid function. Please compute its first derivative ∂J/∂β and implement a regularized batch gradient descent algorithm accordingly. Discuss how the regularization affects the training loss (in terms of the average log likelihood) and the test accuracy, based on the experimental results.
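The two optimizers can be sketched on a synthetic binary-classification problem. This is a standalone illustration with an L2 penalty of the form λ‖β‖^2 on the averaged negative log likelihood — the regularization strength, step size, and iteration counts are assumptions, not the package's defaults:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Synthetic labeled data (illustrative, not the course's dataset)
rng = np.random.default_rng(2)
X = rng.normal(size=(300, 2))
true_beta = np.array([1.5, -2.0])
y = (rng.random(300) < sigmoid(X @ true_beta)).astype(float)

lam = 0.1  # regularization strength (assumed for the example)
n = len(y)

def grad(beta):
    # Gradient of -(1/n) * log likelihood + lam * ||beta||^2
    return X.T @ (sigmoid(X @ beta) - y) / n + 2 * lam * beta

# (1) Regularized batch gradient descent
beta_gd = np.zeros(2)
for _ in range(2000):
    beta_gd -= 0.5 * grad(beta_gd)

# (2) Newton-Raphson: beta <- beta - H^{-1} grad, where the Hessian is
# H = (1/n) X^T diag(p (1 - p)) X + 2 lam I
beta_nr = np.zeros(2)
for _ in range(20):
    p = sigmoid(X @ beta_nr)
    H = (X.T * (p * (1 - p))) @ X / n + 2 * lam * np.eye(2)
    beta_nr -= np.linalg.solve(H, grad(beta_nr))
```

The regularized objective is strongly convex, so both methods converge to the same minimizer; Newton-Raphson does so in far fewer iterations, at the cost of forming and solving the Hessian each step — a trade-off worth raising in the pros-and-cons discussion.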

 
