
Question

I am trying to solve this exercise, but my code appears to get stuck in an infinite loop. Can someone please help me fix it?

Logistic regression: iterative algorithm
The algorithm alternates between the following four steps until convergence:
1. Estimate $s_i = \sigma(a^\top \tilde{x}_i)$ for $i = 1, \ldots, N_{\text{samples}}$
2. Evaluate the error $e_i = s_i - y_i$, where $y_i$ are the true labels
3. Evaluate the per-sample gradient $g_i = \tilde{x}_i (s_i - y_i)$; the full gradient $G$ is their sum over all samples
4. Update $a \leftarrow a - \gamma G$ using gradient descent with step size $\gamma$
Here, $x_i$ are the vectorized digits of dimension $784 \times 1$, and $\tilde{x}_i$ are vectors of dimension $785 \times 1$, obtained by appending a 1 to the end. Note that the above operations can be computed efficiently in matrix form as follows (see the shape-check sketch after this list):
1. Estimate $s = \sigma(a^\top X)$, where $s$ is a $1 \times N$ matrix
2. Evaluate the error $e = s - y$
3. Evaluate the gradient $G = X (s - y)^\top$
4. Update $a \leftarrow a - \gamma G$ using gradient descent with step size $\gamma$
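To make the shapes concrete, here is a minimal sketch of a single vectorized update on random placeholder data. The array sizes, the labels, and the sigmoid helper are illustrative assumptions that mirror the exercise, not part of it:

import numpy as np

def sigmoid(z):
    # Elementwise logistic function: sigma(z) = 1 / (1 + exp(-z))
    return 1.0 / (1.0 + np.exp(-z))

Nsamples, Nfeatures = 100, 784                # placeholder sizes
X_train = np.random.rand(Nsamples, Nfeatures) # fake "digits"
y_train = np.random.randint(0, 2, size=(1, Nsamples))  # 1 x N binary labels

a = np.random.randn(Nfeatures + 1, 1)         # 785 x 1: weights plus bias
Xtilde = np.concatenate((X_train, np.ones((Nsamples, 1))), axis=1).T  # 785 x N

s = sigmoid(a.T @ Xtilde)                     # 1 x N       (step 1)
e = s - y_train                               # 1 x N       (step 2)
G = (Xtilde @ e.T) / Nsamples                 # 785 x 1     (step 3, averaged)
a = a - 0.1 * G                               # gamma = 0.1 (step 4)
print(s.shape, e.shape, G.shape, a.shape)     # (1, 100) (1, 100) (785, 1) (785, 1)

Dividing by Nsamples averages the gradient, which is what the training loop below does as well; summing instead would only rescale the effective step size.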
Complete the code below
# Assumes numpy as np and matplotlib.pyplot as plt are already imported,
# and that sigmoid() is defined (e.g. as in the sketch above).
Nsamples, Nfeatures = X_train.shape
Nclasses = 2
a = np.random.randn(Nfeatures + 1, Nclasses - 1)   # 785 x 1: pixel weights + bias
Xtilde = np.concatenate((X_train, np.ones((Nsamples, 1))), axis=1).T  # 785 x Nsamples
gamma = 1e-1
for it in range(1500):                    # 'it' avoids shadowing the builtin iter
    z = np.dot(a.T, Xtilde)               # 1 x Nsamples
    y_pred = sigmoid(z)                   # s = sigma(a^T X)
    # If y_train is 1-D with shape (Nsamples,), "y_train - y_pred.T" broadcasts
    # to an Nsamples x Nsamples matrix; every iteration then becomes so slow
    # that the loop looks infinite. Reshaping y_train to a column fixes this.
    error = np.reshape(y_train, (-1, 1)) - y_pred.T   # Nsamples x 1
    gradient = -np.dot(Xtilde, error) / Nsamples      # 785 x 1: X (s - y)^T / N
    a = a - gamma * gradient
    if np.mod(it, 100) == 0:
        print("Error = ", np.sum(error**2))

# Plot after the loop finishes, so only one figure is created
fig, ax = plt.subplots(1, 2)
ax[0].plot(y_pred[:, 0:200].T)            # 's' was never defined; y_pred holds it
ax[0].plot(np.ravel(y_train)[0:200])
ax[0].set_title('True and predicted labels')
ax[1].plot(error)
ax[1].set_title('Prediction Errors')
plt.show()

plt.imshow(np.reshape(a[:-1], (28, 28)))  # drop the bias weight before reshaping
plt.title("weights")
plt.show()