In forward propagation, we aim to calculate the activations and the cost.
To calculate the activations, we follow two steps:
Calculate the dot product of w.T and X, then add b.
Pass the result of the previous step to the sigmoid function.
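The two steps above can be sketched as follows. This is a minimal illustration, assuming w is a column vector of weights with one row per feature, X stores one training example per column, and b is a scalar bias:

```python
import numpy as np

def sigmoid(z):
    # logistic function, squashes any real number into (0, 1)
    return 1 / (1 + np.exp(-z))

# hypothetical toy inputs: 2 features, 3 examples
w = np.array([[0.5], [-0.25]])
b = 0.1
X = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0]])

z = np.dot(w.T, X) + b   # step 1: dot product of w.T and X, plus b
A = sigmoid(z)           # step 2: pass the result to the sigmoid
print(A.shape)           # (1, 3): one activation per example
```

Note that w.T has shape (1, 2) and X has shape (2, 3), so z and A have shape (1, 3), one activation per training example.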
To compute the cost, we use the following NumPy functions:
np.log calculates the natural logarithm of all the elements in the array.
np.dot(a,b) calculates the dot product of the two vectors a and b.
np.sum(x) calculates the sum of elements in the input array.
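These three functions are enough to build the cross-entropy cost from labels Y and activations A. A small sketch with made-up values (Y and A here are illustrative, not taken from the exercise):

```python
import numpy as np

# hypothetical labels and predicted activations for m = 4 examples
Y = np.array([1, 0, 1, 1])
A = np.array([0.9, 0.2, 0.8, 0.6])
m = Y.shape[0]

# cross-entropy cost, assembled from np.log, elementwise products, and np.sum
cost = (-1 / m) * np.sum(Y * np.log(A) + (1 - Y) * np.log(1 - A))
print(cost)
```

The cost is small when A is close to Y and grows as the predictions drift away from the labels.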
Create a list x with elements 1, 2, 3.
x = << your code comes here >>
Calculate the natural logarithm of all the elements in x using the np.log function, and store the result in x_log.
x_log = << your code comes here >>(x)
Calculate the dot product of the arrays a and b using the np.dot() function, and store the result in c.
a = np.array([[1,2],[3,4]])
b = np.array([[10,20],[30,40]])
c = << your code comes here >>(a,b)
print(c)
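For reference, when both arguments are 2-D arrays, np.dot performs matrix multiplication. A quick stand-alone illustration with the same values:

```python
import numpy as np

a = np.array([[1, 2], [3, 4]])
b = np.array([[10, 20], [30, 40]])

# for 2-D inputs, np.dot is matrix multiplication
c = np.dot(a, b)
print(c)  # [[ 70 100]
          #  [150 220]]
```

Each entry of c is the dot product of a row of a with a column of b, e.g. c[0, 0] = 1*10 + 2*30 = 70.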
Use np.ones() to create a NumPy array of shape (4,4) with all of its elements equal to 1.
ones_array = << your code comes here >>(shape=(4,4))
Calculate the sum of the elements of ones_array using np.sum(), and store the result in sum_of_ones_array.
sum_of_ones_array = << your code comes here >>(ones_array)
The forward_prop function below calculates the activations A and the cross-entropy cost cost. Copy-paste the following function.
def forward_prop(w, b, X, Y):
    # calculate activations
    z = np.dot(w.T, X) + b
    A = sigmoid(z)
    # calculate cost; m is the number of examples (columns of X)
    m = X.shape[1]
    cost = (-1/m) * np.sum(Y * np.log(A) + (1-Y) * (np.log(1-A)))
    cost = np.squeeze(cost)
    return A, cost
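A self-contained sanity check of this forward pass is sketched below. The toy inputs are made up for illustration; note that m must be the number of examples, i.e. X.shape[1], for the averaging in the cost to be correct:

```python
import numpy as np

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

def forward_prop(w, b, X, Y):
    # activations: one sigmoid output per column (example) of X
    z = np.dot(w.T, X) + b
    A = sigmoid(z)
    # m is the number of examples, i.e. the number of columns of X
    m = X.shape[1]
    cost = (-1/m) * np.sum(Y * np.log(A) + (1 - Y) * np.log(1 - A))
    cost = np.squeeze(cost)
    return A, cost

# hypothetical toy inputs: 2 features, 3 examples
w = np.zeros((2, 1))
b = 0.0
X = np.array([[1.0, 2.0, -1.0],
              [0.5, -0.5, 2.0]])
Y = np.array([[1, 0, 1]])

A, cost = forward_prop(w, b, X, Y)
print(A)     # all activations are 0.5 when w and b are zero
print(cost)  # ln(2) ≈ 0.6931 for uniform 0.5 predictions
```

With zero weights and bias, every activation is sigmoid(0) = 0.5, so the cost reduces to -ln(0.5) = ln(2), which is a handy baseline when debugging an implementation.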