CatBoost objectives (can be used for optimization)

- Logloss: targets are hard class labels in {0, 1}
- CrossEntropy: targets may also be probabilities in [0, 1]
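A minimal sketch of selecting each objective on synthetic data (shapes and iteration counts are illustrative). The generic `CatBoost` estimator is used for the soft-target case because label validation in `CatBoostClassifier` can differ between versions:

```python
import numpy as np
from catboost import CatBoost, CatBoostClassifier

rng = np.random.default_rng(0)
X = rng.random((200, 5))

# Logloss: hard class labels in {0, 1}.
y_hard = rng.integers(0, 2, size=200)
clf = CatBoostClassifier(loss_function="Logloss", iterations=50, verbose=False)
clf.fit(X, y_hard)

# CrossEntropy: soft targets, i.e. probabilities in [0, 1].
# (Generic estimator used here; see the note above about label validation.)
y_soft = rng.random(200)
model = CatBoost({"loss_function": "CrossEntropy", "iterations": 50,
                  "logging_level": "Silent"})
model.fit(X, y_soft)
```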
CatBoost classification metrics
| Metric | Used for optimization |
|---|---|
| Precision | - |
| Recall | - |
| F | - |
| F1 | - |
| BalancedAccuracy | - |
| BalancedErrorRate | - |
| MCC | - |
| Accuracy | - |
| CtrFactor | - |
| AUC | - |
| QueryAUC | - |
| NormalizedGini | - |
| BrierScore | - |
| HingeLoss | - |
| HammingLoss | - |
| ZeroOneLoss | - |
| Kappa | - |
| WKappa | - |
| LogLikelihoodOfPrediction | - |
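The metrics above are evaluation-only (hence the "-" in the optimization column); in the Python package they are typically requested by name through `eval_metric` and `custom_metric`. A minimal sketch (metric choices and parameter values are illustrative):

```python
from catboost import CatBoostClassifier
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
X_train, X_val, y_train, y_val = train_test_split(X, y, random_state=0)

model = CatBoostClassifier(
    loss_function="Logloss",             # the objective that is actually optimized
    eval_metric="AUC",                   # metric used for overfitting detection / best-model selection
    custom_metric=["Precision", "Recall", "F1", "MCC"],  # extra metrics logged on eval_set
    iterations=300,
    verbose=False,
)
model.fit(X_train, y_train, eval_set=(X_val, y_val))

# Per-dataset, per-iteration values of every requested metric.
evals = model.get_evals_result()
print(evals.keys())                      # e.g. dict_keys(['learn', 'validation'])
```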
scikit-learn classification metrics (sklearn.metrics)
See the Classification metrics section of the scikit-learn user guide for further details; a usage sketch follows the table.
| Function | Description |
|---|---|
| metrics.accuracy_score(y_true, y_pred, *[, …]) | Accuracy classification score. |
| metrics.auc(x, y) | Compute Area Under the Curve (AUC) using the trapezoidal rule. |
| metrics.average_precision_score(y_true, …) | Compute average precision (AP) from prediction scores. |
| metrics.balanced_accuracy_score(y_true, …) | Compute the balanced accuracy. |
| metrics.brier_score_loss(y_true, y_prob, *) | Compute the Brier score loss. |
| metrics.classification_report(y_true, y_pred, *) | Build a text report showing the main classification metrics. |
| metrics.cohen_kappa_score(y1, y2, *[, …]) | Cohen’s kappa: a statistic that measures inter-annotator agreement. |
| metrics.confusion_matrix(y_true, y_pred, *) | Compute confusion matrix to evaluate the accuracy of a classification. |
| metrics.dcg_score(y_true, y_score, *[, k, …]) | Compute Discounted Cumulative Gain. |
| metrics.det_curve(y_true, y_score[, …]) | Compute error rates for different probability thresholds. |
| metrics.f1_score(y_true, y_pred, *[, …]) | Compute the F1 score, also known as balanced F-score or F-measure. |
| metrics.fbeta_score(y_true, y_pred, *, beta) | Compute the F-beta score. |
| metrics.hamming_loss(y_true, y_pred, *[, …]) | Compute the average Hamming loss. |
| metrics.hinge_loss(y_true, pred_decision, *) | Average hinge loss (non-regularized). |
| metrics.jaccard_score(y_true, y_pred, *[, …]) | Jaccard similarity coefficient score. |
| metrics.log_loss(y_true, y_pred, *[, eps, …]) | Log loss, aka logistic loss or cross-entropy loss. |
| metrics.matthews_corrcoef(y_true, y_pred, *) | Compute the Matthews correlation coefficient (MCC). |
| metrics.multilabel_confusion_matrix(y_true, …) | Compute a confusion matrix for each class or sample. |
| metrics.ndcg_score(y_true, y_score, *[, k, …]) | Compute Normalized Discounted Cumulative Gain. |
| metrics.precision_recall_curve(y_true, …) | Compute precision-recall pairs for different probability thresholds. |
| metrics.precision_recall_fscore_support(…) | Compute precision, recall, F-measure and support for each class. |
| metrics.precision_score(y_true, y_pred, *[, …]) | Compute the precision. |
| metrics.recall_score(y_true, y_pred, *[, …]) | Compute the recall. |
| metrics.roc_auc_score(y_true, y_score, *[, …]) | Compute Area Under the Receiver Operating Characteristic Curve (ROC AUC) from prediction scores. |
| metrics.roc_curve(y_true, y_score, *[, …]) | Compute Receiver operating characteristic (ROC). |
| metrics.top_k_accuracy_score(y_true, y_score, *) | Top-k Accuracy classification score. |
| metrics.zero_one_loss(y_true, y_pred, *[, …]) | Zero-one classification loss. |
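A minimal sketch of the scikit-learn counterparts, split into metrics that take hard labels and metrics that take scores or probabilities (the model and data here are placeholders):

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import (accuracy_score, f1_score, log_loss,
                             precision_score, recall_score, roc_auc_score)
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
y_pred = clf.predict(X_test)               # hard labels, for label-based metrics
y_prob = clf.predict_proba(X_test)[:, 1]   # positive-class probability, for score-based metrics

print("accuracy :", accuracy_score(y_test, y_pred))
print("precision:", precision_score(y_test, y_pred))
print("recall   :", recall_score(y_test, y_pred))
print("f1       :", f1_score(y_test, y_pred))
print("roc_auc  :", roc_auc_score(y_test, y_prob))   # expects scores, not labels
print("log_loss :", log_loss(y_test, y_prob))
```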
Reference: https://catboost.ai/en/docs/concepts/loss-functions-classification#used-for-optimization
