In forward propagation, we aim to calculate the activations and the cost.
To calculate the activations, we follow two steps (see the sketch after this list):
Calculate the dot product of X and w.T, then add b.
Pass the result obtained above to the sigmoid function.
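The snippet below is a minimal sketch of these two steps. The sigmoid helper and all the numeric values are illustrative assumptions, not part of the exercise:

import numpy as np

def sigmoid(z):
    # squashes any real number into the range (0, 1)
    return 1 / (1 + np.exp(-z))

# hypothetical shapes: w is (n_features, 1), X is (n_features, m_examples)
w = np.array([[0.5], [-0.25]])
b = 0.1
X = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0]])

z = np.dot(w.T, X) + b   # step 1: dot product of w.T and X, plus b
A = sigmoid(z)           # step 2: activations, shape (1, m_examples)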
To compute the cost, we use the binary cross-entropy formula: cost = (-1/m) * sum(Y * log(A) + (1 - Y) * log(1 - A)), where m is the number of training examples.
Note:
np.log calculates the natural logarithm of all the elements in the array.
np.dot(a, b) calculates the dot product of the two arrays a and b (for 2-D arrays, this is the matrix product).
np.sum(x) calculates the sum of elements in the input array.
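For a quick illustration of these three functions (the values here are arbitrary examples, not part of the exercise):

import numpy as np

v = np.array([1.0, np.e, np.e ** 2])
print(np.log(v))      # [0. 1. 2.] -- element-wise natural logarithm

u = np.array([1, 2, 3])
t = np.array([4, 5, 6])
print(np.dot(u, t))   # 32, i.e. 1*4 + 2*5 + 3*6

print(np.sum(u))      # 6, the sum of all elements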
Create a list x with elements 1, 2, 3.
x = << your code comes here >>
Calculate the natural logarithm of all the elements in x using the np.log function, and store it in x_log.
x_log = << your code comes here >>(x)
Calculate the dot product of the arrays a and b using the np.dot() function, and store the result in c.
a = np.array([[1,2],[3,4]])
b = np.array([[10,20],[30,40]])
c = << your code comes here >>(a,b)
print(c)
Use np.ones() to create a NumPy array of shape (4, 4) with all of its elements equal to 1.
ones_array = << your code comes here >>(shape=(4,4))
Calculate the sum of the elements of ones_array using the np.sum() function.
sum_of_ones_array = << your code comes here >>(ones_array)
The forward_prop function below calculates the activations A and the cross-entropy cost cost. Copy and paste the following function.
def forward_prop(w, b, X, Y):
    # calculate activations
    z = np.dot(w.T, X) + b
    A = sigmoid(z)
    # calculate cost (m is the number of training examples)
    m = X.shape[1]
    cost = (-1/m) * np.sum(Y * np.log(A) + (1-Y) * np.log(1-A))
    cost = np.squeeze(cost)
    return A, cost
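As a quick sanity check, here is a hypothetical call to forward_prop. The sigmoid helper (defined in an earlier exercise) and all the data values are illustrative assumptions:

import numpy as np

def sigmoid(z):
    # assumed helper from an earlier exercise
    return 1 / (1 + np.exp(-z))

# toy data: 2 features, 3 training examples
w = np.zeros((2, 1))
b = 0.0
X = np.array([[1.0, 2.0, -1.0],
              [3.0, 0.5, -2.0]])
Y = np.array([[1, 0, 1]])

A, cost = forward_prop(w, b, X, Y)
print(A)     # all 0.5, since w and b are zero
print(cost)  # ~0.6931 (= log 2), the cost at initialization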