End-to-End ML Project - Beginner Friendly


Measuring error

Let's compare the actual and predicted values for our first 5 data points.

Predicted values = [210644.60459286, 317768.80697211, 210956.43331178,  59218.98886849, 189747.55849879]

Actual values = [286600.0, 340600.0, 196900.0,  46300.0, 254500.0]

We can see that the predictions are noticeably off from the actual values.

Let's measure the model's performance by calculating the error in our predictions using the RMSE (Root Mean Square Error) metric, which we have discussed before.
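As a quick refresher, RMSE is the square root of the average of the squared differences between actual and predicted values. Here is a minimal sketch computing it by hand for the five points above (assuming NumPy is available as np):

import numpy as np

# The five actual and predicted values shown above
actual = np.array([286600.0, 340600.0, 196900.0, 46300.0, 254500.0])
predicted = np.array([210644.60459286, 317768.80697211, 210956.43331178,
                      59218.98886849, 189747.55849879])

# Square the errors, average them, then take the square root
rmse = np.sqrt(np.mean((actual - predicted) ** 2))
print(rmse)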

We'll do it by using the mean_squared_error() function from sklearn.metrics, which measures the mean squared error regression loss. To make it return the RMSE instead, we have to set the parameter squared to False (its default value is True). Its syntax is:

mean_squared_error(actual_target_values, predicted_target_values, squared=some_boolean_value)

where actual_target_values holds the actual target values for the data points, and predicted_target_values holds the values our model predicted for the same data points.

For further details about the function, refer to the MSE documentation.
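For instance, here is a minimal sketch applying the function to the five points shown above (the variable names are illustrative):

from sklearn.metrics import mean_squared_error

actual_target_values = [286600.0, 340600.0, 196900.0, 46300.0, 254500.0]
predicted_target_values = [210644.60459286, 317768.80697211, 210956.43331178,
                           59218.98886849, 189747.55849879]

# squared=False makes the function return the root of the mean squared error
rmse = mean_squared_error(actual_target_values, predicted_target_values, squared=False)
print(rmse)  # should match the hand-computed value above

One caveat: recent scikit-learn releases have deprecated and then removed the squared parameter in favor of a separate root_mean_squared_error function, so the examples here assume a version where squared is still available.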

INSTRUCTIONS
  1. Import mean_squared_error from sklearn.metrics.

  2. Measure the RMSE between the actual values (housing_labels) and the predicted values (predictions), and store the output in a variable lin_rmse. Don't forget to specify the squared parameter (see the sketch after these instructions).
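
A minimal sketch of these two steps, assuming housing_labels and predictions are already defined from the earlier training step:

from sklearn.metrics import mean_squared_error

# squared=False turns the mean squared error into its square root, i.e. the RMSE
lin_rmse = mean_squared_error(housing_labels, predictions, squared=False)
print(lin_rmse)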



