Evaluation Metrics
When we make predictions with our model, as we did earlier, how do we know whether those predictions are good? We need a way to evaluate how well the model performs. Evaluation metrics commonly used in binary classification include prediction accuracy and error, precision and recall, the area under the precision-recall curve, the receiver operating characteristic (ROC) curve, the area under the ROC curve (AUC), and the F-measure.
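As a minimal sketch of how these metrics are computed in practice, the snippet below uses scikit-learn's `sklearn.metrics` module on a small set of hypothetical labels and scores (the data, not the metrics, are made up for illustration):

```python
from sklearn.metrics import (accuracy_score, precision_score, recall_score,
                             f1_score, roc_auc_score, precision_recall_curve, auc)

# Hypothetical ground-truth labels and model outputs (illustrative only).
y_true   = [0, 1, 1, 0, 1, 0, 1, 1]                    # actual classes
y_pred   = [0, 1, 0, 0, 1, 1, 1, 1]                    # thresholded predictions
y_scores = [0.2, 0.9, 0.4, 0.3, 0.8, 0.6, 0.7, 0.95]   # predicted probabilities

accuracy  = accuracy_score(y_true, y_pred)    # fraction of correct predictions
error     = 1.0 - accuracy                    # fraction of incorrect predictions
precision = precision_score(y_true, y_pred)   # TP / (TP + FP)
recall    = recall_score(y_true, y_pred)      # TP / (TP + FN)
f1        = f1_score(y_true, y_pred)          # harmonic mean of precision and recall

# Curve-based metrics use the continuous scores, not the hard 0/1 predictions.
roc_auc = roc_auc_score(y_true, y_scores)     # area under the ROC curve
p, r, _ = precision_recall_curve(y_true, y_scores)
pr_auc  = auc(r, p)                           # area under the precision-recall curve
```

Note that accuracy, precision, recall, and the F-measure depend on a chosen decision threshold, while the ROC and precision-recall curves summarize performance across all thresholds, which is why the curve-based functions take the raw scores rather than the thresholded predictions.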