matchingresult ¶
Module for the abstract MatchingResult class
Classes¶
MatchingResult ¶
Bases: ABC
Abstract class to calculate precision (per class) and recall (per class) based on true_pos, false_pos, and false_neg instance labels
Parameters:
- true_pos – list of correctly predicted instance labels (prediction matches ground truth)
- false_pos – list of predicted instance labels with no matching ground truth
- false_neg – list of ground truth instance labels not found by any prediction
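For orientation, a minimal sketch of the state this class holds, assuming only that an instance label carries a class name (the stand-in types below are hypothetical, not niceml's actual classes):

```python
from dataclasses import dataclass, field
from typing import List


@dataclass
class InstanceLabel:
    """Hypothetical stand-in for niceml's InstanceLabel; only the
    class name is needed for the sketches on this page."""

    class_name: str


@dataclass
class MatchingResultSketch:
    """Minimal sketch of the documented state: three label lists."""

    true_pos: List[InstanceLabel] = field(default_factory=list)
    false_pos: List[InstanceLabel] = field(default_factory=list)
    false_neg: List[InstanceLabel] = field(default_factory=list)
```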
Functions¶
__add__ ¶
Adds two MatchingResult objects
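Given the documented fields, addition plausibly concatenates the three label lists. A hedged sketch reusing the stand-in types above (an assumption, not the library's actual implementation):

```python
def add_results(a: MatchingResultSketch, b: MatchingResultSketch) -> MatchingResultSketch:
    # Assumed semantics: __add__ merges two results by concatenating
    # the true positive, false positive, and false negative lists.
    return MatchingResultSketch(
        true_pos=a.true_pos + b.true_pos,
        false_pos=a.false_pos + b.false_pos,
        false_neg=a.false_neg + b.false_neg,
    )
```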
calculate_per_class_precision ¶
Calculates the precision per class
calculate_per_class_recall ¶
Calculates the recall per target class
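Both per-class metrics presumably follow the standard definitions; with $TP_c$, $FP_c$, and $FN_c$ counting the true positive, false positive, and false negative labels of class $c$:

$$
\text{precision}_c = \frac{TP_c}{TP_c + FP_c},
\qquad
\text{recall}_c = \frac{TP_c}{TP_c + FN_c}
$$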
calculate_precision ¶
calculate_recall ¶
get_containing_class_names ¶
Returns all class names occurring in the true positives, false positives, and false negatives
Functions¶
match_classification_prediction_and_gt ¶
Matches region and class of predictions to ground truth labels and checks whether each ground truth label is found by at least one prediction. Ground truth labels without a match are counted as false negatives; predictions without a match are counted as false positives. The sum of false negatives and true positives equals the number of ground truth labels.
Parameters:
- pred_labels (List[InstanceLabel]) – prediction labels (bounding box or mask)
- gt_labels (List[InstanceLabel]) – ground truth labels (bounding box or mask)
- matching_iou (float, default: 0.5) – minimum IoU for region matching
Returns:
- MatchingResult – MatchingResult with true_pos, false_pos and false_neg
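A hedged usage sketch (the import path follows the documented source location; label construction is omitted because it depends on niceml's actual InstanceLabel API, and argument-free calls to the metric methods are an assumption):

```python
# Assumed import path, based on "niceml/utilities/matchingresult.py".
from niceml.utilities.matchingresult import match_classification_prediction_and_gt

# pred_labels / gt_labels are lists of InstanceLabel objects
# (bounding boxes or masks); their construction is not shown here.
result = match_classification_prediction_and_gt(
    pred_labels=pred_labels,
    gt_labels=gt_labels,
    matching_iou=0.5,  # documented default
)
precision_per_class = result.calculate_per_class_precision()
recall_per_class = result.calculate_per_class_recall()
```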
match_detection_prediction_and_gt ¶
Matches regions of predictions to ground truth label regions and checks whether each ground truth label is found by at least one prediction. Ground truth labels without a match are counted as false negatives; predictions without a match are counted as false positives. The sum of false negatives and true positives equals the number of ground truth labels.
Parameters:
- pred_labels (List[InstanceLabel]) – prediction labels (bounding box or mask)
- gt_labels (List[InstanceLabel]) – ground truth labels (bounding box or mask)
- matching_iou (float, default: 0.5) – minimum IoU for region matching
Returns:
- MatchingResult – MatchingResult with true_pos, false_pos and false_neg
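To make the documented matching rule concrete, a simplified sketch of region-only matching, reusing the stand-in types from the class sketch above (an illustration of the described behavior, not niceml's actual implementation; compute_iou is a hypothetical helper):

```python
from typing import Callable, List


def match_by_iou(
    pred_labels: List[InstanceLabel],
    gt_labels: List[InstanceLabel],
    compute_iou: Callable[[InstanceLabel, InstanceLabel], float],
    matching_iou: float = 0.5,
) -> MatchingResultSketch:
    """Sketch: a ground truth label is a true positive if at least one
    prediction overlaps it with IoU >= matching_iou, so true_pos plus
    false_neg always equals the number of ground truth labels."""
    result = MatchingResultSketch()
    matched_preds = set()
    for gt in gt_labels:
        found = False
        for idx, pred in enumerate(pred_labels):
            if compute_iou(pred, gt) >= matching_iou:
                matched_preds.add(idx)
                found = True
        if found:
            result.true_pos.append(gt)  # ground truth was found
        else:
            result.false_neg.append(gt)  # ground truth was missed
    for idx, pred in enumerate(pred_labels):
        if idx not in matched_preds:
            result.false_pos.append(pred)  # prediction without ground truth
    return result
```

For the classification variant above, the match predicate would additionally require equal class names; the exact assignment strategy niceml uses (e.g. greedy versus one-to-one matching) is not specified on this page.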