{"id":3750,"date":"2022-03-28T05:50:13","date_gmt":"2022-03-28T05:50:13","guid":{"rendered":"https:\/\/cloudxlab.com\/blog\/?p=3750"},"modified":"2022-10-13T10:34:10","modified_gmt":"2022-10-13T10:34:10","slug":"classification-metrics","status":"publish","type":"post","link":"https:\/\/cloudxlab.com\/blog\/classification-metrics\/","title":{"rendered":"Classification metrics and their Use Cases"},"content":{"rendered":"\n<p id=\"block-5884ef89-25cb-445e-93bd-e2962d69ffcd\"><br>In this blog, we will discuss about commonly used classification metrics. We will be covering  <strong>Accuracy Score<\/strong>, <strong>Confusion Matrix<\/strong>, <strong>Precision<\/strong>, <strong>Recall<\/strong>, <strong>F-Score<\/strong>,  <strong>ROC-AUC<\/strong> and will then learn how to extend them to the <strong>multi-class classification<\/strong>. We will also discuss in which scenarios, which metric will be most suitable to use.<\/p>\n\n\n\n<p id=\"68e5\">First let\u2019s understand some important terms used throughout the blog-<\/p>\n\n\n\n<p id=\"0372\"><strong>True Positive (TP):&nbsp;<\/strong>When you predict an observation belongs to a class and it actually does belong to that class.<\/p>\n\n\n\n<p id=\"5380\"><strong>True Negative (TN):&nbsp;<\/strong>When you predict an observation does not belong to a class and it actually does not belong to that class.<\/p>\n\n\n\n<p id=\"1f68\"><strong>False Positive (FP)<\/strong>: When you predict an observation belongs to a class and it actually does not belong to that class.<\/p>\n\n\n\n<p id=\"a8ac\"><strong>False Negative(FN):&nbsp;<\/strong>When you predict an observation does not belong to a class and it actually does belong to that class.<\/p>\n\n\n\n<p id=\"32a2\">All classification metrics work on these four terms. 
Let\u2019s start understanding the classification metrics.<\/p>\n\n\n\n<!--more-->\n\n\n\n<h2 id=\"48a6\">Accuracy Score-<\/h2>\n\n\n\n<p id=\"2cfb\">Classification accuracy is what we usually mean when we use the term <strong>accuracy<\/strong>. It is the ratio of the number of correct predictions to the total number of input samples.<\/p>\n\n\n\n<figure class=\"wp-block-image\"><img src=\"https:\/\/miro.medium.com\/max\/746\/0*Jfl8h92KtjNnnlk1.gif\" alt=\"Accuracy score\"\/><\/figure>\n\n\n\n<p id=\"6ab1\">For binary classification, we can calculate accuracy in terms of positives and negatives using the formula below:<\/p>\n\n\n\n<pre id=\"2f6e\" class=\"wp-block-preformatted\">Accuracy=(TP+TN)\/(TP+TN+FP+FN)<\/pre>\n\n\n\n<p id=\"7ac2\">It works well only if there is a roughly equal number of samples belonging to each class. For example, consider a training set with 98% samples of class A and 2% samples of class B. Our model can easily get&nbsp;<strong>98% training accuracy<\/strong>&nbsp;by simply predicting that every training sample belongs to class A. When the same model is tested on a test set with 60% samples of class A and 40% samples of class B, the&nbsp;<strong>test accuracy drops down to 60%.&nbsp;<\/strong>Classification accuracy is simple, but it can give us a false sense of achieving high performance.<\/p>\n\n\n\n<blockquote class=\"wp-block-quote\"><p>So, you should use the accuracy score only for class-balanced data.<\/p><\/blockquote>\n\n\n\n<p id=\"2dd7\">You can use it as follows:<\/p>\n\n\n\n<pre class=\"wp-block-preformatted\"><strong>from<\/strong> <strong>sklearn.metrics<\/strong> <strong>import<\/strong> accuracy_score<\/pre>\n\n\n\n<p id=\"7921\">In sklearn, there is also <a href=\"https:\/\/scikit-learn.org\/stable\/modules\/generated\/sklearn.metrics.balanced_accuracy_score.html#sklearn.metrics.balanced_accuracy_score\" target=\"_blank\" rel=\"noreferrer noopener\">balanced_accuracy_score<\/a>, which works for imbalanced class data. 
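A minimal sketch of both scores on an imbalanced toy dataset (the label vectors below are made up for illustration):

```python
from sklearn.metrics import accuracy_score, balanced_accuracy_score

# Imbalanced toy data: class 0 dominates, and the model predicts 0 almost always.
y_true = [0, 0, 0, 0, 0, 0, 1, 1]
y_pred = [0, 0, 0, 0, 0, 0, 0, 1]

print(accuracy_score(y_true, y_pred))           # 0.875 (looks good)
print(balanced_accuracy_score(y_true, y_pred))  # 0.75 (exposes the weak minority class)
```

The balanced score is just the average of per-class recalls: class 0 is recalled perfectly (6/6) while class 1 is only half recalled (1/2), giving (1.0 + 0.5) / 2 = 0.75.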
The&nbsp;<code><a href=\"https:\/\/scikit-learn.org\/stable\/modules\/generated\/sklearn.metrics.balanced_accuracy_score.html#sklearn.metrics.balanced_accuracy_score\" rel=\"noreferrer noopener\" target=\"_blank\"><strong>balanced_accuracy_score<\/strong><\/a><\/code>&nbsp;function computes the&nbsp;<a href=\"https:\/\/en.wikipedia.org\/wiki\/Accuracy_and_precision\" rel=\"noreferrer noopener\" target=\"_blank\">balanced accuracy<\/a>, which avoids inflated performance estimates on imbalanced datasets. It is the macro-average of recall scores per class or, equivalently, raw accuracy where each sample is weighted according to the inverse prevalence of its true class. Thus for balanced datasets, the score is equal to accuracy.<\/p>\n\n\n\n<figure class=\"wp-block-image\"><img src=\"https:\/\/miro.medium.com\/max\/1400\/0*H0shp9Z6mR9HH9qX.png\" alt=\"\"\/><\/figure>\n\n\n\n<h2 id=\"3290\">Confusion matrix-<\/h2>\n\n\n\n<p id=\"ad44\">A confusion matrix is a table that is often used to&nbsp;<strong>describe the performance of a classification model<\/strong>&nbsp;on a set of test data for which the true values are known.<\/p>\n\n\n\n<figure class=\"wp-block-image\"><img src=\"https:\/\/miro.medium.com\/max\/1400\/0*ljB7owdVnYuel-vF.png\" alt=\"Confusion Matrix\"\/><\/figure>\n\n\n\n<p id=\"acdd\">It is extremely useful for computing Recall, Precision, Specificity, Accuracy and, most importantly, the AUC-ROC curve.<\/p>\n\n\n\n<pre class=\"wp-block-preformatted\"><strong>from<\/strong> <strong>sklearn.metrics<\/strong> <strong>import<\/strong> confusion_matrix<\/pre>\n\n\n\n<h2 id=\"e7be\">Precision-<\/h2>\n\n\n\n<p id=\"beff\">It is the ratio of true positives to all predicted positives. 
It tells you, out of all the samples we predicted as positive, how many are actually positive.<\/p>\n\n\n\n<figure class=\"wp-block-image\"><img src=\"https:\/\/miro.medium.com\/max\/1104\/0*H1NPa1Jp58Ee6N_o.png\" alt=\"Precision Formula\"\/><\/figure>\n\n\n\n<pre class=\"wp-block-preformatted\"><strong>from<\/strong> <strong>sklearn.metrics<\/strong> <strong>import<\/strong> precision_score<\/pre>\n\n\n\n<h2 id=\"e432\">Recall (True Positive Rate)-<\/h2>\n\n\n\n<p>It tells you, out of all the actual positive samples, how many we predicted correctly.<\/p>\n\n\n\n<p>Recall should be as high as possible. Note that it is also called sensitivity.<\/p>\n\n\n\n<figure class=\"wp-block-image\"><img src=\"https:\/\/miro.medium.com\/max\/1400\/0*r0HQklFcAKzakSn1.png\" alt=\"\"\/><\/figure>\n\n\n\n<pre class=\"wp-block-preformatted\"><strong>from<\/strong> <strong>sklearn.metrics<\/strong> <strong>import<\/strong> recall_score<\/pre>\n\n\n\n<h2 id=\"1db0\">F1-Score-<\/h2>\n\n\n\n<p id=\"d848\">It is difficult to compare two models when one has high precision but low recall and the other the reverse. Moreover, if you try to increase precision, recall may decrease, and vice versa.<\/p>\n\n\n\n<p>So, to make models comparable, we use the F1-score, which measures recall and precision at the same time.<\/p>\n\n\n\n<figure class=\"wp-block-image\"><img src=\"https:\/\/miro.medium.com\/max\/716\/0*C13DJ3QJKDu39HXw.png\" alt=\"\"\/><\/figure>\n\n\n\n<p id=\"76c8\">It uses the harmonic mean in place of the arithmetic mean, which punishes extreme values more.<\/p>\n\n\n\n<pre class=\"wp-block-preformatted\"><strong>from<\/strong> <strong>sklearn.metrics<\/strong> <strong>import<\/strong> f1_score<\/pre>\n\n\n\n<p id=\"a5f4\">We use it when we have imbalanced class data. 
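Assuming a small made-up set of binary labels, the confusion matrix, precision, recall and F1-score can all be computed together with sklearn:

```python
from sklearn.metrics import confusion_matrix, precision_score, recall_score, f1_score

# Toy labels for illustration: TP = 3, TN = 3, FP = 1, FN = 1.
y_true = [1, 0, 1, 1, 0, 0, 1, 0]
y_pred = [1, 0, 0, 1, 0, 1, 1, 0]

# Rows are actual classes, columns are predicted classes: [[TN, FP], [FN, TP]].
print(confusion_matrix(y_true, y_pred))  # [[3 1]
                                         #  [1 3]]
print(precision_score(y_true, y_pred))   # 0.75 -> TP / (TP + FP) = 3 / 4
print(recall_score(y_true, y_pred))      # 0.75 -> TP / (TP + FN) = 3 / 4
print(f1_score(y_true, y_pred))          # 0.75 -> harmonic mean of the two
```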
In most real-life classification problems an imbalanced class distribution exists, and thus the F1-score is a better metric than accuracy to evaluate our model on.<\/p>\n\n\n\n<p id=\"867f\">But it is less interpretable. Precision and recall are more interpretable than the f1-score, since precision reflects type-1 (false positive) errors and recall reflects type-2 (false negative) errors, while the f1-score only summarizes the trade-off between the two. Still, when a single number is needed, we use the f1-score instead of juggling both.<\/p>\n\n\n\n<p id=\"0415\"><strong>Specificity (True Negative Rate):&nbsp;<\/strong>It tells you what fraction of all negative samples are correctly predicted as negative by the classifier. To calculate specificity, use the following formula.<\/p>\n\n\n\n<figure class=\"wp-block-image\"><img src=\"https:\/\/miro.medium.com\/max\/928\/0*t1J1PyacuKoBIdKS.png\" alt=\"\"\/><\/figure>\n\n\n\n<p id=\"af16\"><strong>False Positive Rate&nbsp;<\/strong>: FPR tells us what proportion of the negative class got incorrectly classified by the classifier.<\/p>\n\n\n\n<figure class=\"wp-block-image\"><img src=\"https:\/\/miro.medium.com\/max\/926\/0*GB45Et8SevTBmQXb.png\" alt=\"\"\/><\/figure>\n\n\n\n<p id=\"6ba8\"><strong>False Negative Rate:&nbsp;<\/strong>False Negative Rate (FNR) tells us what proportion of the positive class got incorrectly classified by the classifier.<\/p>\n\n\n\n<figure class=\"wp-block-image\"><img src=\"https:\/\/miro.medium.com\/max\/300\/0*82GvGNRjkilZhZns.gif\" alt=\"\"\/><\/figure>\n\n\n\n<blockquote class=\"wp-block-quote\"><p>If you know clearly what task you have to accomplish, it&#8217;s often better to use precision and recall over the f1-score. For example, suppose the government launches a scheme for free cancer detection. A single test is costly to perform, so the government assigns you the task of building a machine learning model to identify whether a person has cancer. 
It will be an initial screening test: the government will take predictions from your model and test only those persons your model predicted to have cancer with the real diagnostic machines, confirming whether they really have cancer. That will reduce the cost of the scheme to a great extent.<\/p><p>In such a case, it is most important to identify all the persons who have cancer. We can tolerate a healthy person being flagged as having cancer, because the follow-up test will reveal the truth, but we cannot tolerate a person with cancer going undetected, because that could cost them their life. So here you would use the recall metric to check the performance of your model.<\/p><p>But if you are working on a task where precision and recall are equally important, then you may prefer the f1-score over precision and recall.<\/p><\/blockquote>\n\n\n\n<h2 id=\"f17f\">ROC-AUC curve-<\/h2>\n\n\n\n<p id=\"1026\">Besides numerical metrics, we also have plot-based metrics like the ROC (Receiver Operating Characteristic) curve and its AUC (Area Under the Curve).<\/p>\n\n\n\n<p id=\"30c2\">The AUC \u2014 ROC curve is a performance measurement for classification problems at various threshold settings. The graph plots the true positive rate against the false positive rate. The area under the curve (AUC) summarizes this curve and tells us how good the model is at separating the two classes.<\/p>\n\n\n\n<p id=\"b310\">A model with a higher AUC is considered better than the others. So, we can conclude that the higher the AUC, the better the model will be at classifying actual positives and actual negatives.<\/p>\n\n\n\n<ul><li>If the AUC is 1, we can be assured that the model is perfect at classifying the positive class as positive and the negative class as negative. <\/li><li>If the AUC is 0, the model is the worst possible. 
It will predict the positive class as negative and the negative class as positive. <\/li><li>If the value is 0.5, the model cannot differentiate between the positive and negative classes; its predictions are no better than random.<\/li><li>The desired range of AUC values is 0.5-1.0: the closer to 1.0, the better our model differentiates positive-class values from negative-class values.<\/li><\/ul>\n\n\n\n<p id=\"549d\">Let\u2019s take a predictive model for example. Say we are building a logistic regression model to detect whether a person has cancer or not. Suppose our model returns a <strong>prediction score of 0.8<\/strong> for a particular patient; that means the patient is likely to have cancer. For another patient, it returns a&nbsp;<strong>prediction score of 0.2<\/strong>, meaning the patient most likely doesn&#8217;t have cancer. But what about a patient with a&nbsp;<strong>prediction score of 0.6<\/strong>? <\/p>\n\n\n\n<p id=\"549d\">In this scenario, we must define a&nbsp;<strong>classification threshold<\/strong>. By default, the logistic regression model assumes the&nbsp;<strong>classification threshold to be 0.5<\/strong>; that is, all patients with a prediction score of 0.5 or above are classified as having cancer, and the rest are not. But note that thresholds are completely problem-dependent. In order to achieve the desired output, we can&nbsp;<strong>tune the threshold<\/strong>. But now the question is: how do we tune the threshold?<\/p>\n\n\n\n<p id=\"8427\">For different threshold values we will get different TPR and FPR values. So, in order to visualize which threshold suits the classifier best, we plot the ROC curve. 
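The (FPR, TPR) pair at every candidate threshold, and the resulting AUC, can be sketched like this (the prediction scores are made up for illustration):

```python
from sklearn.metrics import roc_curve, auc

y_true   = [0, 0, 1, 1, 0, 1, 1, 0]
y_scores = [0.1, 0.4, 0.35, 0.8, 0.2, 0.9, 0.6, 0.7]  # predicted probabilities

# roc_curve sweeps over the distinct score thresholds for us.
fpr, tpr, thresholds = roc_curve(y_true, y_scores)
print(auc(fpr, tpr))  # 0.8125

# Each threshold yields one (FPR, TPR) point on the curve.
for f, t, th in zip(fpr, tpr, thresholds):
    print(f"threshold={th:.2f}  FPR={f:.2f}  TPR={t:.2f}")
```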
A typical ROC curve looks like this:<\/p>\n\n\n\n<figure class=\"wp-block-image size-large\"><img width=\"333\" height=\"329\" src=\"https:\/\/blog.cloudxlab.com\/wp-content\/uploads\/2022\/03\/s.png\" alt=\"\" class=\"wp-image-3753\"\/><\/figure>\n\n\n\n<p id=\"7ba8\">The ROC curve of a random classifier (as shown below) is always a straight diagonal line. This random-classifier ROC curve is considered the baseline for measuring the performance of a classifier. The two areas separated by this line indicate the performance level \u2014 good or poor.<\/p>\n\n\n\n<figure class=\"wp-block-image size-large\"><img width=\"420\" height=\"421\" src=\"https:\/\/blog.cloudxlab.com\/wp-content\/uploads\/2022\/03\/s.drawio-1.png\" alt=\"\" class=\"wp-image-3755\"\/><\/figure>\n\n\n\n<p>ROC curves that fall in the bottom-right area indicate poor performance and are not desired, whereas ROC curves that fall in the top-left area indicate good performance and are the desired ones. The perfect ROC curve is denoted by the blue line. <\/p>\n\n\n\n<blockquote class=\"wp-block-quote\"><p>Smaller values on the x-axis of the plot indicate lower false positives and higher true negatives. Larger values on the y-axis of the plot indicate higher true positives and lower false negatives.<\/p><\/blockquote>\n\n\n\n<p id=\"ba44\">Although the theoretical range of the AUC score is between 0 and 1, the actual scores of meaningful classifiers are greater than 0.5, which is the AUC of a random classifier. The ROC curve shows the trade-off between <strong>Recall<\/strong> (or TPR) and <strong>specificity<\/strong> (1 \u2014 FPR).<\/p>\n\n\n\n<pre class=\"wp-block-preformatted\"><strong>from sklearn.metrics import<\/strong> roc_curve, auc<\/pre>\n\n\n\n<p id=\"ffd1\">Sometimes, we replace the <strong>y-axis with precision<\/strong> and the <strong>x-axis with recall<\/strong>. 
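That per-threshold computation of precision and recall can be sketched with sklearn (the scores below are made up for illustration):

```python
from sklearn.metrics import precision_recall_curve

y_true   = [0, 0, 1, 1, 0, 1, 1, 0]
y_scores = [0.1, 0.4, 0.35, 0.8, 0.2, 0.9, 0.6, 0.7]  # predicted probabilities

# One (precision, recall) point per candidate threshold.
precision, recall, thresholds = precision_recall_curve(y_true, y_scores)
for p, r in zip(precision, recall):
    print(f"precision={p:.2f}  recall={r:.2f}")
```

By convention, sklearn appends a final point with precision 1 and recall 0 so the curve is anchored at the right-hand end.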
The plot is then called the <strong>precision-recall curve<\/strong>, which does the same thing (calculates the value of precision and recall at different thresholds). In sklearn, however, it is restricted to binary classification.<\/p>\n\n\n\n<pre class=\"wp-block-preformatted\"><strong>from<\/strong> <strong>sklearn.metrics<\/strong> <strong>import<\/strong> precision_recall_curve<\/pre>\n\n\n\n<h2 id=\"a5d6\">Extending the above to multiclass classification-<\/h2>\n\n\n\n<figure class=\"wp-block-image size-large\"><img width=\"295\" height=\"245\" src=\"https:\/\/blog.cloudxlab.com\/wp-content\/uploads\/2022\/03\/s.drawio-2.drawio.png\" alt=\"\" class=\"wp-image-3758\"\/><\/figure>\n\n\n\n<p id=\"8dce\">In the confusion matrix for multiclass classification, we don\u2019t use TP, FP, FN and TN directly. We just put the predicted classes on the y-axis and the actual classes on the x-axis. In the above figure, <em>cell<\/em><sub><em>1<\/em><\/sub> denotes how many samples were actually apple and were predicted apple, and <em>cell<sub>2<\/sub><\/em> denotes how many samples were banana but were predicted as apple. In the same way, <em>cell<sub>8<\/sub><\/em> denotes how many samples were banana but were predicted as watermelon.<\/p>\n\n\n\n<p id=\"49bf\">The true positives, true negatives, false positives and false negatives for each class would be calculated by adding the cell values as follows:<\/p>\n\n\n\n<figure class=\"wp-block-image size-large\"><img width=\"552\" height=\"151\" src=\"https:\/\/blog.cloudxlab.com\/wp-content\/uploads\/2022\/03\/a.drawio.png\" alt=\"\" class=\"wp-image-3757\"\/><\/figure>\n\n\n\n<p id=\"b3d5\">Precision, recall and F1 scores can also be defined in the multi-class setting. Here, the metrics can be \u201caveraged\u201d across all the classes in several possible ways. 
Some of them are:<\/p>\n\n\n\n<ul><li><strong>micro<\/strong>: Calculate metrics globally by counting the total true positives, false negatives and false positives across all classes.<\/li><li><strong>macro<\/strong>: Calculate metrics for each class independently, and find their unweighted mean. This does not take label imbalance into account.<\/li><li><strong>None<\/strong>: Return the score for each class separately, without averaging.<\/li><\/ul>\n\n\n\n<p id=\"f14a\">ROC curves are typically used in binary classification to study the output of a classifier. To extend them, you have to convert your problem into binary using the&nbsp;<code>OneVsAll<\/code>&nbsp;approach, so you&#8217;ll have&nbsp;<code>n_class<\/code>&nbsp;ROC curves.<\/p>\n\n\n\n<p id=\"5b77\">In sklearn there is also classification_report, which gives a summary of precision, recall and f1-score for each class. It also reports support, which is simply the number of occurrences of each class in the dataset.<\/p>\n\n\n\n<pre class=\"wp-block-preformatted\"><strong>from<\/strong> <strong>sklearn.metrics<\/strong> <strong>import<\/strong> classification_report<\/pre>\n","protected":false},"excerpt":{"rendered":"<p>In this blog, we will discuss commonly used classification metrics. We will be covering Accuracy Score, Confusion Matrix, Precision, Recall, F-Score, ROC-AUC and will then learn how to extend them to the multi-class classification. We will also discuss in which scenarios, which metric will be most suitable to use. 
First let\u2019s understand some important &hellip; <a href=\"https:\/\/cloudxlab.com\/blog\/classification-metrics\/\" class=\"more-link\">Continue reading<span class=\"screen-reader-text\"> &#8220;Classification metrics and their Use Cases&#8221;<\/span><\/a><\/p>\n","protected":false},"author":36,"featured_media":3796,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":[],"categories":[67,29,28],"tags":[143,149,142,144,147,150,145,146,148],"yoast_head":"<!-- This site is optimized with the Yoast SEO plugin v16.2 - https:\/\/yoast.com\/wordpress\/plugins\/seo\/ -->\n<title>Classification metrics and their Use Cases | CloudxLab Blog<\/title>\n<meta name=\"description\" content=\"In this blog, we will discuss several classification metrics and when to use each of them. We&#039;ll also see how to extend them to multiclass classification.\" \/>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"https:\/\/cloudxlab.com\/blog\/classification-metrics\/\" \/>\n<meta property=\"og:locale\" content=\"en_US\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"Classification metrics and their Use Cases | CloudxLab Blog\" \/>\n<meta property=\"og:description\" content=\"In this blog, we will discuss several classification metrics and when to use each of them. 
We&#039;ll also see how to extend them to multiclass classification.\" \/>\n<meta property=\"og:url\" content=\"https:\/\/cloudxlab.com\/blog\/classification-metrics\/\" \/>\n<meta property=\"og:site_name\" content=\"CloudxLab Blog\" \/>\n<meta property=\"article:publisher\" content=\"https:\/\/www.facebook.com\/cloudxlab\" \/>\n<meta property=\"article:published_time\" content=\"2022-03-28T05:50:13+00:00\" \/>\n<meta property=\"article:modified_time\" content=\"2022-10-13T10:34:10+00:00\" \/>\n<meta property=\"og:image\" content=\"https:\/\/blog.cloudxlab.com\/wp-content\/uploads\/2022\/03\/Untitled-Diagram.drawio-12.drawio.png\" \/>\n\t<meta property=\"og:image:width\" content=\"322\" \/>\n\t<meta property=\"og:image:height\" content=\"322\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:creator\" content=\"@CloudxLab\" \/>\n<meta name=\"twitter:site\" content=\"@CloudxLab\" \/>\n<meta name=\"twitter:label1\" content=\"Est. reading time\">\n\t<meta name=\"twitter:data1\" content=\"11 minutes\">\n<script type=\"application\/ld+json\" class=\"yoast-schema-graph\">{\"@context\":\"https:\/\/schema.org\",\"@graph\":[{\"@type\":\"WebSite\",\"@id\":\"https:\/\/cloudxlab.com\/blog\/#website\",\"url\":\"https:\/\/cloudxlab.com\/blog\/\",\"name\":\"CloudxLab Blog\",\"description\":\"Learn AI, Machine Learning, Deep Learning, Devops &amp; Big Data\",\"potentialAction\":[{\"@type\":\"SearchAction\",\"target\":\"https:\/\/cloudxlab.com\/blog\/?s={search_term_string}\",\"query-input\":\"required 
name=search_term_string\"}],\"inLanguage\":\"en-US\"},{\"@type\":\"ImageObject\",\"@id\":\"https:\/\/cloudxlab.com\/blog\/classification-metrics\/#primaryimage\",\"inLanguage\":\"en-US\",\"url\":\"https:\/\/cloudxlab.com\/blog\/wp-content\/uploads\/2022\/03\/Untitled-Diagram.drawio-12.drawio.png\",\"contentUrl\":\"https:\/\/cloudxlab.com\/blog\/wp-content\/uploads\/2022\/03\/Untitled-Diagram.drawio-12.drawio.png\",\"width\":322,\"height\":322},{\"@type\":\"WebPage\",\"@id\":\"https:\/\/cloudxlab.com\/blog\/classification-metrics\/#webpage\",\"url\":\"https:\/\/cloudxlab.com\/blog\/classification-metrics\/\",\"name\":\"Classification metrics and their Use Cases | CloudxLab Blog\",\"isPartOf\":{\"@id\":\"https:\/\/cloudxlab.com\/blog\/#website\"},\"primaryImageOfPage\":{\"@id\":\"https:\/\/cloudxlab.com\/blog\/classification-metrics\/#primaryimage\"},\"datePublished\":\"2022-03-28T05:50:13+00:00\",\"dateModified\":\"2022-10-13T10:34:10+00:00\",\"author\":{\"@id\":\"https:\/\/cloudxlab.com\/blog\/#\/schema\/person\/4438d405318314ec50940bde93ef548a\"},\"description\":\"In this blog, we will discuss several classification metrics and when to use each of them. 
We'll also see how to extend them to multiclass classification.\",\"breadcrumb\":{\"@id\":\"https:\/\/cloudxlab.com\/blog\/classification-metrics\/#breadcrumb\"},\"inLanguage\":\"en-US\",\"potentialAction\":[{\"@type\":\"ReadAction\",\"target\":[\"https:\/\/cloudxlab.com\/blog\/classification-metrics\/\"]}]},{\"@type\":\"BreadcrumbList\",\"@id\":\"https:\/\/cloudxlab.com\/blog\/classification-metrics\/#breadcrumb\",\"itemListElement\":[{\"@type\":\"ListItem\",\"position\":1,\"item\":{\"@type\":\"WebPage\",\"@id\":\"https:\/\/cloudxlab.com\/blog\/\",\"url\":\"https:\/\/cloudxlab.com\/blog\/\",\"name\":\"Home\"}},{\"@type\":\"ListItem\",\"position\":2,\"item\":{\"@id\":\"https:\/\/cloudxlab.com\/blog\/classification-metrics\/#webpage\"}}]},{\"@type\":\"Person\",\"@id\":\"https:\/\/cloudxlab.com\/blog\/#\/schema\/person\/4438d405318314ec50940bde93ef548a\",\"name\":\"Shubh Tripathi\",\"image\":{\"@type\":\"ImageObject\",\"@id\":\"https:\/\/cloudxlab.com\/blog\/#personlogo\",\"inLanguage\":\"en-US\",\"url\":\"https:\/\/secure.gravatar.com\/avatar\/76bb13891affbf9da48fa9701d774ff0?s=96&d=mm&r=g\",\"contentUrl\":\"https:\/\/secure.gravatar.com\/avatar\/76bb13891affbf9da48fa9701d774ff0?s=96&d=mm&r=g\",\"caption\":\"Shubh Tripathi\"}}]}<\/script>\n<!-- \/ Yoast SEO plugin. 
-->","_links":{"self":[{"href":"https:\/\/cloudxlab.com\/blog\/wp-json\/wp\/v2\/posts\/3750"}],"collection":[{"href":"https:\/\/cloudxlab.com\/blog\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/cloudxlab.com\/blog\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/cloudxlab.com\/blog\/wp-json\/wp\/v2\/users\/36"}],"replies":[{"embeddable":true,"href":"https:\/\/cloudxlab.com\/blog\/wp-json\/wp\/v2\/comments?post=3750"}],"version-history":[{"count":13,"href":"https:\/\/cloudxlab.com\/blog\/wp-json\/wp\/v2\/posts\/3750\/revisions"}],"predecessor-version":[{"id":3957,"href":"https:\/\/cloudxlab.com\/blog\/wp-json\/wp\/v2\/posts\/3750\/revisions\/3957"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/cloudxlab.com\/blog\/wp-json\/wp\/v2\/media\/3796"}],"wp:attachment":[{"href":"https:\/\/cloudxlab.com\/blog\/wp-json\/wp\/v2\/media?parent=3750"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/cloudxlab.com\/blog\/wp-json\/wp\/v2\/categories?post=3750"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/cloudxlab.com\/blog\/wp-json\/wp\/v2\/tags?post=3750"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}