Let us now implement the function which calculates the accuracy. This function takes as arguments the predicted labels and the actual labels of the corresponding dataset.
We do this in 2 steps:
1. Using np.abs(Y_predicted - Y_actual), we calculate the absolute difference between the predicted labels and the actual labels.
2. Then, we use np.mean() on these differences and calculate the accuracy.
Note:
np.abs gets the absolute value of each element in the input array.
np.mean returns the mean of the elements in the input array.
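For example, here is a quick illustration of both functions (assuming numpy is imported as np):

a = np.array([1, -2, 3])
print(np.abs(a))     # [1 2 3]
print(np.mean(a))    # 0.6666666666666666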
Let us assume y_actual and y_predicted are the actual labels and predicted labels respectively. Copy the following code.
import numpy as np

y_actual = np.array([1, 1, 1, 0, 1])
print("y_actual :", y_actual)
y_predicted = np.array([1, 0, 0, 0, 1])
print("y_predicted :", y_predicted)
Get the absolute differences of the corresponding elements in y_actual and y_predicted using np.abs(), and store them in c.
c = << your code comes here >>(y_actual - y_predicted)
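If the blank is filled in correctly, c holds the element-wise absolute differences. For the example arrays above, the expected output is:

print("c :", c)    # expected: c : [0 1 1 0 0]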
Store the mean of the elements of c in c_mean using np.mean().
c_mean = << your code goes here >>(c)
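For the example arrays, 2 of the 5 labels differ, so the expected mean is:

print("c_mean :", c_mean)    # expected: c_mean : 0.4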
The accuracy can then be calculated as:
accuracy = 100 - (c_mean * 100)
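Plugging in the value from the example, accuracy = 100 - (0.4 * 100) = 60, meaning 3 of the 5 predictions are correct.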
This logic is implemented in the following get_accuracies function. Copy-paste it into your code.
def get_accuracies(Y_predicted, Y_actual):
    # Element-wise absolute difference: 0 where labels match, 1 where they differ
    abs_diff = np.abs(Y_predicted - Y_actual)
    # Mean of the differences is the error rate; subtract its percentage from 100
    accuracy = 100 - np.mean(abs_diff) * 100
    return accuracy
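As a quick sanity check (assuming numpy is imported as np and reusing the example arrays from above), the function should report 60.0:

print(get_accuracies(y_predicted, y_actual))    # 60.0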