In backward propagation, we calculate the derivatives of the cost with respect to the weights and the bias.
We compute the gradient of the weights by:
calculating the dot product of X and (A - Y).T, and
dividing the result by m, the total number of samples in the dataset.
We compute the gradient of the bias by:
calculating np.sum(A - Y), and
dividing the result by m, the total number of samples in the dataset.
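Written as equations (a sketch assuming the same conventions as the code below: X has shape (n_features, m), while A and Y have shape (1, m)):

\[
dw = \frac{1}{m}\, X\,(A - Y)^{T}, \qquad
db = \frac{1}{m} \sum_{i=1}^{m} \left(a^{(i)} - y^{(i)}\right)
\]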
Note:
np.dot(a, b) calculates the dot product of a and b (the matrix product when the inputs are 2-D arrays).
np.sum(x) calculates the sum of all elements in the input array.
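As a quick illustration of these two NumPy calls (the values here are made up purely for demonstration):

import numpy as np

a = np.array([[1.0, 2.0],
              [3.0, 4.0]])
b = np.array([[0.5],
              [0.5]])

print(np.dot(a, b))   # matrix product: [[1.5], [3.5]]
print(np.sum(a))      # sum of all elements: 10.0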
The back_prop function below calculates the gradients of w and b. Copy-paste the following function.
import numpy as np

def back_prop(X, A, Y):
    # X: inputs of shape (n_features, m); A: predictions and Y: labels, both of shape (1, m)
    m = X.shape[1]                        # total number of samples
    dw = (1 / m) * np.dot(X, (A - Y).T)   # gradient of the weights
    db = (1 / m) * np.sum(A - Y)          # gradient of the bias
    grads = {'dw': dw, 'db': db}
    return grads
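For reference, here is a minimal sketch of how back_prop could be called. The toy X and Y values and the sigmoid-based forward pass are illustrative assumptions, not part of the exercise:

import numpy as np

X = np.array([[1.0, 2.0, -1.0],
              [3.0, 0.5, -2.0]])               # shape (2, 3): 2 features, 3 samples
Y = np.array([[1, 0, 1]])                      # shape (1, 3): labels
w = np.zeros((2, 1))                           # one weight per feature
b = 0.0                                        # bias

A = 1 / (1 + np.exp(-(np.dot(w.T, X) + b)))    # forward pass: sigmoid activations, shape (1, 3)
grads = back_prop(X, A, Y)
print(grads['dw'].shape, grads['db'])          # (2, 1) and a scalar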