pygod.metric
- pygod.metric.eval_average_precision(label, score)
Average precision score for binary classification.
- Parameters:
  - label (torch.Tensor) – Labels in shape of (N, ), where 1 represents outliers, 0 represents normal instances.
  - score (torch.Tensor) – Outlier scores in shape of (N, ).
- Returns:
  - ap – Average precision score.
- Return type:
  - float
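The semantics can be sketched in plain Python: average precision is the area under the precision-recall curve, accumulated by walking the instances in descending score order. The helper name and list inputs below are illustrative only, not part of the pygod API (pygod takes torch.Tensor inputs), and ties are broken by sort order rather than averaged:

```python
def average_precision(label, score):
    """Area under the precision-recall curve, sketched for list inputs."""
    # Rank instances by outlier score, highest first.
    order = sorted(range(len(label)), key=lambda i: score[i], reverse=True)
    n_pos = sum(label)  # total number of outliers (label == 1)
    tp, ap = 0, 0.0
    for rank, i in enumerate(order, start=1):
        if label[i] == 1:
            tp += 1
            # Precision at this rank, weighted by the recall increment 1 / n_pos.
            ap += (tp / rank) / n_pos
    return ap

# Two of four instances are outliers; one is ranked below a normal point.
average_precision([1, 0, 1, 0], [0.9, 0.8, 0.7, 0.1])  # ≈ 0.833
```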
- pygod.metric.eval_f1(label, pred)
F1 score for binary classification.
- Parameters:
  - label (torch.Tensor) – Labels in shape of (N, ), where 1 represents outliers, 0 represents normal instances.
  - pred (torch.Tensor) – Outlier predictions in shape of (N, ).
- Returns:
  - f1 – F1 score.
- Return type:
  - float
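Unlike the other metrics here, this one takes hard 0/1 predictions rather than scores. A minimal plain-Python sketch of the computation (the helper name and list inputs are illustrative, not the pygod API):

```python
def f1(label, pred):
    """Harmonic mean of precision and recall for binary 0/1 predictions."""
    tp = sum(1 for l, p in zip(label, pred) if l == 1 and p == 1)  # true positives
    fp = sum(1 for l, p in zip(label, pred) if l == 0 and p == 1)  # false positives
    fn = sum(1 for l, p in zip(label, pred) if l == 1 and p == 0)  # false negatives
    denom = 2 * tp + fp + fn
    return 2 * tp / denom if denom else 0.0

f1([1, 0, 1, 1], [1, 0, 0, 1])  # 0.8: precision 1.0, recall 2/3
```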
- pygod.metric.eval_precision_at_k(label, score, k=None)
Precision score for top k instances with the highest outlier scores.
- Parameters:
  - label (torch.Tensor) – Labels in shape of (N, ), where 1 represents outliers, 0 represents normal instances.
  - score (torch.Tensor) – Outlier scores in shape of (N, ).
  - k (int, optional) – The number of instances to evaluate. None for precision. Default: None.
- Returns:
  - precision_at_k – Precision for top k instances with the highest outlier scores.
- Return type:
  - float
- pygod.metric.eval_recall_at_k(label, score, k=None)
Recall score for top k instances with the highest outlier scores.
- Parameters:
  - label (torch.Tensor) – Labels in shape of (N, ), where 1 represents outliers, 0 represents normal instances.
  - score (torch.Tensor) – Outlier scores in shape of (N, ).
  - k (int, optional) – The number of instances to evaluate. None for recall. Default: None.
- Returns:
  - recall_at_k – Recall for top k instances with the highest outlier scores.
- Return type:
  - float
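Recall@k asks the complementary question: of all true outliers, how many land in the detector's top-k? A plain-Python sketch (the helper name and list inputs are illustrative, not the pygod API; the k=None fallback to the number of outliers is an assumption about the default behavior):

```python
def recall_at_k(label, score, k=None):
    """Fraction of all true outliers captured among the top-k scores."""
    if k is None:
        # Assumed fallback: evaluate as many instances as there are outliers.
        k = sum(label)
    topk = sorted(range(len(score)), key=lambda i: score[i], reverse=True)[:k]
    return sum(label[i] for i in topk) / sum(label)

# Both outliers (indices 0 and 2) rank within the top 3 scores.
recall_at_k([1, 0, 1, 0], [0.9, 0.8, 0.7, 0.1], k=3)  # 1.0
```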
- pygod.metric.eval_roc_auc(label, score)
ROC-AUC score for binary classification.
- Parameters:
  - label (torch.Tensor) – Labels in shape of (N, ), where 1 represents outliers, 0 represents normal instances.
  - score (torch.Tensor) – Outlier scores in shape of (N, ).
- Returns:
  - roc_auc – Average ROC-AUC score across different labels.
- Return type:
  - float
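For binary labels, ROC-AUC has a convenient pairwise-ranking interpretation: the probability that a randomly chosen outlier is scored higher than a randomly chosen normal instance, counting ties as half. A plain-Python sketch (the helper name and list inputs are illustrative, not the pygod API; this O(N²) form is for clarity, not efficiency):

```python
def roc_auc(label, score):
    """ROC-AUC via the pairwise-ranking (Mann-Whitney U) formulation."""
    pos = [s for s, l in zip(score, label) if l == 1]  # outlier scores
    neg = [s for s, l in zip(score, label) if l == 0]  # normal-instance scores
    # Count outlier/normal pairs where the outlier scores higher; ties count 0.5.
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# One of the two outliers (score 0.7) is out-ranked by a normal point (0.8).
roc_auc([1, 0, 1, 0], [0.9, 0.8, 0.7, 0.1])  # 0.75
```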