Forward and backward propagation are the key steps for computing the cost and the gradients when training our algorithm.
In forward propagation, we compute the dot product of the weights and the feature matrix X, add the bias b to the result, and apply the sigmoid function to get the activations A. The cost is then calculated from A and the labels Y.
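For illustration, here is a minimal sketch of a `forward_prop` function with the signature used later in this exercise, assuming a sigmoid activation and the binary cross-entropy (log-loss) cost; the course's actual implementation may differ:

```python
import numpy as np

def sigmoid(z):
    # Squash z into the range (0, 1)
    return 1 / (1 + np.exp(-z))

def forward_prop(w, b, X, Y):
    # Illustrative sketch, not necessarily the course's exact implementation
    m = X.shape[1]                   # number of training examples
    A = sigmoid(np.dot(w.T, X) + b)  # activations, shape (1, m)
    # Binary cross-entropy (log-loss) cost
    cost = -np.sum(Y * np.log(A) + (1 - Y) * np.log(1 - A)) / m
    return A, cost
```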
In backward propagation, we compute the gradients of the cost with respect to the weights and the bias: dw = (1/m) X(A - Y)ᵀ and db = (1/m) Σ(A - Y).
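A matching sketch of a `back_prop` function, under the same assumptions as above, could look like this (the dict keys `"dw"` and `"db"` are assumed, since the exercise does not show the structure of `grads`):

```python
def back_prop(X, A, Y):
    # Illustrative sketch; grads is returned as a dict with assumed keys
    m = X.shape[1]                 # number of training examples
    dw = np.dot(X, (A - Y).T) / m  # gradient of the cost w.r.t. w, shape (n, 1)
    db = np.sum(A - Y) / m         # gradient of the cost w.r.t. b, a scalar
    return {"dw": dw, "db": db}
```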
Now let us write the `propagate` function, which calls the `forward_prop` and `back_prop` functions at the appropriate places in the code below:
```python
def propagate(w, b, X, Y):
    # Forward propagation
    A, cost = << your code comes here >>(w, b, X, Y)

    # Backward propagation
    grads = << your code comes here >>(X, A, Y)

    return grads, cost
```
For example, we could get the gradients and cost returned by the `propagate` function as follows:
```python
# Example inputs; the weight values [[1.], [2.]] are illustrative
w, b, X, Y = np.array([[1.], [2.]]), 2., np.array([[1, 2], [3, 4]]), np.array([[1, 0]])
grads, cost = propagate(w, b, X, Y)
```
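With the sketch implementations and the illustrative weights above, this call would return approximately dw ≈ [[0.99993216], [1.99980262]], db ≈ 0.49993523, and cost ≈ 6.000065.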
No hints are available for this assessment.
Answer is not available for this assessment.