Metrics
- pygod.metrics.eval_average_precision(labels, pred)
Average precision score for binary classification.
- Parameters:
  - labels (numpy.ndarray) – Labels in shape of (N, ), where 1 represents outliers and 0 represents normal instances.
  - pred (numpy.ndarray) – Outlier scores in shape of (N, ).
- Returns:
ap – Average precision score.
- Return type:
float
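A minimal usage sketch with toy numpy arrays; the label and score values below are illustrative only, not output from a real detector.

```python
import numpy as np
from pygod.metrics import eval_average_precision

# Toy ground truth: 1 marks an outlier, 0 a normal instance.
labels = np.array([0, 0, 1, 0, 1])
# Illustrative outlier scores (higher means more anomalous).
pred = np.array([0.1, 0.3, 0.9, 0.2, 0.7])

ap = eval_average_precision(labels, pred)
print(f"Average precision: {ap:.4f}")  # 1.0 here: both outliers outscore all normal instances
```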
- pygod.metrics.eval_ndcg(labels, pred)
Normalized discounted cumulative gain for ranking.
- Parameters:
  - labels (numpy.ndarray) – Labels in shape of (N, ), where 1 represents outliers and 0 represents normal instances.
  - pred (numpy.ndarray) – Outlier scores in shape of (N, ).
- Returns:
ndcg – NDCG score.
- Return type:
float
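A quick sketch of scoring a ranking with eval_ndcg; the arrays are toy values chosen only to show the call.

```python
import numpy as np
from pygod.metrics import eval_ndcg

# Toy case: one outlier is ranked first, the other behind a normal instance.
labels = np.array([0, 1, 0, 1, 0])
pred = np.array([0.2, 0.8, 0.5, 0.4, 0.3])

ndcg = eval_ndcg(labels, pred)
print(f"NDCG: {ndcg:.4f}")  # should fall below 1.0, since the ranking is imperfect
```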
- pygod.metrics.eval_precision_at_k(labels, pred, k)
Precision score for top k instances with the highest outlier scores.
- Parameters:
  - labels (numpy.ndarray) – Labels in shape of (N, ), where 1 represents outliers and 0 represents normal instances.
  - pred (numpy.ndarray) – Outlier scores in shape of (N, ).
  - k (int) – The number of instances to evaluate.
- Returns:
precision_at_k – Precision for top k instances with the highest outlier scores.
- Return type:
float
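A sketch of evaluating the top-k predictions; the choice of k=2 and the toy arrays are assumptions for illustration.

```python
import numpy as np
from pygod.metrics import eval_precision_at_k

labels = np.array([0, 1, 0, 1, 0, 0])
pred = np.array([0.1, 0.9, 0.4, 0.7, 0.2, 0.3])

# The two highest-scoring instances (0.9 and 0.7) are both true outliers,
# so precision@2 should come out to 1.0.
p_at_2 = eval_precision_at_k(labels, pred, 2)
print(f"Precision@2: {p_at_2:.4f}")
```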
- pygod.metrics.eval_recall_at_k(labels, pred, k)
Recall score for top k instances with the highest outlier scores.
- Parameters:
  - labels (numpy.ndarray) – Labels in shape of (N, ), where 1 represents outliers and 0 represents normal instances.
  - pred (numpy.ndarray) – Outlier scores in shape of (N, ).
  - k (int) – The number of instances to evaluate.
- Returns:
recall_at_k – Recall for top k instances with the highest outlier scores.
- Return type:
float
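The same toy setup works for recall at k; again, k=2 and the data are illustrative assumptions.

```python
import numpy as np
from pygod.metrics import eval_recall_at_k

labels = np.array([0, 1, 0, 1, 0, 1])
pred = np.array([0.1, 0.9, 0.4, 0.7, 0.2, 0.3])

# The top-2 scoring instances cover two of the three true outliers,
# so recall@2 should come out to 2/3.
r_at_2 = eval_recall_at_k(labels, pred, 2)
print(f"Recall@2: {r_at_2:.4f}")
```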
- pygod.metrics.eval_roc_auc(labels, pred)
ROC-AUC score for binary classification.
- Parameters:
  - labels (numpy.ndarray) – Labels in shape of (N, ), where 1 represents outliers and 0 represents normal instances.
  - pred (numpy.ndarray) – Outlier scores in shape of (N, ).
- Returns:
roc_auc – Average ROC-AUC score across different labels.
- Return type:
float
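A minimal sketch of the call; in practice pred would hold a detector's outlier scores, and the arrays below are toy values.

```python
import numpy as np
from pygod.metrics import eval_roc_auc

labels = np.array([0, 0, 1, 0, 1])
pred = np.array([0.1, 0.3, 0.9, 0.2, 0.7])

auc = eval_roc_auc(labels, pred)
print(f"ROC-AUC: {auc:.4f}")  # 1.0 here: every outlier outscores every normal instance
```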