Evaluations¶
Detection VOC¶
eval_detection_voc¶
chainercv.evaluations.eval_detection_voc(pred_bboxes, pred_labels, pred_scores, gt_bboxes, gt_labels, gt_difficults=None, iou_thresh=0.5, use_07_metric=False)¶
Calculate average precisions based on evaluation code of PASCAL VOC.
This function evaluates predicted bounding boxes obtained from a dataset which has \(N\) images by using average precision for each class. The code is based on the evaluation code used in the PASCAL VOC Challenge.
Parameters:
- pred_bboxes (iterable of numpy.ndarray) – An iterable of \(N\) sets of bounding boxes. Its index corresponds to an index for the base dataset. Each element of pred_bboxes is a set of coordinates of bounding boxes. This is an array whose shape is \((R, 4)\), where \(R\) corresponds to the number of bounding boxes, which may vary among images. The second axis corresponds to \(y_{min}, x_{min}, y_{max}, x_{max}\) of a bounding box.
- pred_labels (iterable of numpy.ndarray) – An iterable of labels. Similar to pred_bboxes, its index corresponds to an index for the base dataset. Its length is \(N\).
- pred_scores (iterable of numpy.ndarray) – An iterable of confidence scores for predicted bounding boxes. Similar to pred_bboxes, its index corresponds to an index for the base dataset. Its length is \(N\).
- gt_bboxes (iterable of numpy.ndarray) – An iterable of ground truth bounding boxes whose length is \(N\). An element of gt_bboxes is a set of bounding boxes whose shape is \((R, 4)\). Note that the number of bounding boxes in each image does not need to be the same as the number of corresponding predicted boxes.
- gt_labels (iterable of numpy.ndarray) – An iterable of ground truth labels which are organized similarly to gt_bboxes.
- gt_difficults (iterable of numpy.ndarray) – An iterable of boolean arrays which are organized similarly to gt_bboxes. This tells whether the corresponding ground truth bounding box is difficult or not. By default, this is None. In that case, this function considers all bounding boxes to be not difficult.
- iou_thresh (float) – A prediction is correct if its Intersection over Union with the ground truth is above this value.
- use_07_metric (bool) – Whether to use the PASCAL VOC 2007 evaluation metric for calculating average precision. The default value is False.
Returns: The keys, value-types and the description of the values are listed below.
- ap (numpy.ndarray): An array of average precisions. The \(l\)-th value corresponds to the average precision for class \(l\). If class \(l\) does not exist in either pred_labels or gt_labels, the corresponding value is set to numpy.nan.
- map (float): The average of Average Precisions over classes.
Return type: dict
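The matching between predicted and ground truth boxes is governed by iou_thresh. As a standalone illustration (not ChainerCV's internal code), the Intersection over Union of two boxes in the \((y_{min}, x_{min}, y_{max}, x_{max})\) format can be computed like this:

```python
import numpy as np

def bbox_iou(a, b):
    """IoU of two boxes given as (y_min, x_min, y_max, x_max)."""
    # Intersection rectangle (empty if the boxes do not overlap).
    top, left = max(a[0], b[0]), max(a[1], b[1])
    bottom, right = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, bottom - top) * max(0.0, right - left)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

# A prediction overlapping half of a ground truth box:
pred = np.array([0., 0., 10., 10.])
gt = np.array([0., 5., 10., 15.])
print(bbox_iou(pred, gt))  # intersection 50, union 150 -> 1/3
```

With the default iou_thresh=0.5, this prediction would not be counted as correct.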
calc_detection_voc_ap¶
chainercv.evaluations.calc_detection_voc_ap(prec, rec, use_07_metric=False)¶
Calculate average precisions based on evaluation code of PASCAL VOC.
This function calculates average precisions from given precisions and recalls. The code is based on the evaluation code used in the PASCAL VOC Challenge.
Parameters:
- prec (list of numpy.array) – A list of arrays. prec[l] indicates precision for class \(l\). If prec[l] is None, this function returns numpy.nan for class \(l\).
- rec (list of numpy.array) – A list of arrays. rec[l] indicates recall for class \(l\). If rec[l] is None, this function returns numpy.nan for class \(l\).
- use_07_metric (bool) – Whether to use the PASCAL VOC 2007 evaluation metric for calculating average precision. The default value is False.
Returns: This function returns an array of average precisions. The \(l\)-th value corresponds to the average precision for class \(l\). If prec[l] or rec[l] is None, the corresponding value is set to numpy.nan.
Return type: numpy.ndarray
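When use_07_metric is True, AP is the mean of interpolated precision sampled at eleven recall thresholds. A minimal numpy sketch of that 11-point rule (mirroring, not reusing, the PASCAL VOC evaluation code):

```python
import numpy as np

def voc07_ap(prec, rec):
    """11-point interpolated AP: average the best attainable precision
    at recall >= t for t in {0.0, 0.1, ..., 1.0}."""
    ap = 0.0
    for t in np.arange(0.0, 1.1, 0.1):
        mask = rec >= t
        p = np.max(prec[mask]) if mask.any() else 0.0
        ap += p / 11.0
    return ap

prec = np.array([1.0, 1.0, 1.0])
rec = np.array([0.1, 0.2, 0.35])
# Precision 1.0 is attainable up to the t = 0.3 threshold, so four
# of the eleven sample points contribute.
print(voc07_ap(prec, rec))  # 4/11, roughly 0.3636
```

With use_07_metric=False, the area under the precision envelope is integrated over all recall points instead of being sampled at eleven.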
calc_detection_voc_prec_rec¶
chainercv.evaluations.calc_detection_voc_prec_rec(pred_bboxes, pred_labels, pred_scores, gt_bboxes, gt_labels, gt_difficults=None, iou_thresh=0.5)¶
Calculate precision and recall based on evaluation code of PASCAL VOC.
This function calculates precision and recall of predicted bounding boxes obtained from a dataset which has \(N\) images. The code is based on the evaluation code used in the PASCAL VOC Challenge.
Parameters:
- pred_bboxes (iterable of numpy.ndarray) – An iterable of \(N\) sets of bounding boxes. Its index corresponds to an index for the base dataset. Each element of pred_bboxes is a set of coordinates of bounding boxes. This is an array whose shape is \((R, 4)\), where \(R\) corresponds to the number of bounding boxes, which may vary among images. The second axis corresponds to \(y_{min}, x_{min}, y_{max}, x_{max}\) of a bounding box.
- pred_labels (iterable of numpy.ndarray) – An iterable of labels. Similar to pred_bboxes, its index corresponds to an index for the base dataset. Its length is \(N\).
- pred_scores (iterable of numpy.ndarray) – An iterable of confidence scores for predicted bounding boxes. Similar to pred_bboxes, its index corresponds to an index for the base dataset. Its length is \(N\).
- gt_bboxes (iterable of numpy.ndarray) – An iterable of ground truth bounding boxes whose length is \(N\). An element of gt_bboxes is a set of bounding boxes whose shape is \((R, 4)\). Note that the number of bounding boxes in each image does not need to be the same as the number of corresponding predicted boxes.
- gt_labels (iterable of numpy.ndarray) – An iterable of ground truth labels which are organized similarly to gt_bboxes.
- gt_difficults (iterable of numpy.ndarray) – An iterable of boolean arrays which are organized similarly to gt_bboxes. This tells whether the corresponding ground truth bounding box is difficult or not. By default, this is None. In that case, this function considers all bounding boxes to be not difficult.
- iou_thresh (float) – A prediction is correct if its Intersection over Union with the ground truth is above this value.
Returns: This function returns two lists: prec and rec.
- prec: A list of arrays. prec[l] is precision for class \(l\). If class \(l\) does not exist in either pred_labels or gt_labels, prec[l] is set to None.
- rec: A list of arrays. rec[l] is recall for class \(l\). If class \(l\) that is not marked as difficult does not exist in gt_labels, rec[l] is set to None.
Return type: tuple of two lists
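For a single class on a single image, the precision/recall bookkeeping can be sketched as follows. This is a simplified illustration assuming no difficult boxes; chainercv's implementation additionally handles multiple images, multiple classes and gt_difficults:

```python
import numpy as np

def bbox_iou(a, b):
    """IoU of two (y_min, x_min, y_max, x_max) boxes."""
    top, left = max(a[0], b[0]), max(a[1], b[1])
    bottom, right = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, bottom - top) * max(0.0, right - left)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def prec_rec_single(pred_bbox, pred_score, gt_bbox, iou_thresh=0.5):
    """Greedy matching in descending score order; each ground truth
    box can be matched at most once."""
    order = np.argsort(-pred_score)
    matched = np.zeros(len(gt_bbox), dtype=bool)
    tp = np.zeros(len(pred_bbox))
    for rank, i in enumerate(order):
        ious = np.array([bbox_iou(pred_bbox[i], g) for g in gt_bbox])
        j = int(np.argmax(ious))
        if ious[j] >= iou_thresh and not matched[j]:
            matched[j] = True
            tp[rank] = 1
    cum_tp = np.cumsum(tp)
    prec = cum_tp / np.arange(1, len(pred_bbox) + 1)
    rec = cum_tp / len(gt_bbox)
    return prec, rec

# One correct detection followed by a false positive:
pred_bbox = np.array([[0., 0., 10., 10.], [20., 20., 30., 30.]])
pred_score = np.array([0.9, 0.8])
gt_bbox = np.array([[0., 0., 10., 10.]])
prec, rec = prec_rec_single(pred_bbox, pred_score, gt_bbox)
# prec: 1.0 then 0.5; rec: 1.0 at both cutoffs
```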
Instance Segmentation VOC¶
eval_instance_segmentation_voc¶
chainercv.evaluations.eval_instance_segmentation_voc(pred_masks, pred_labels, pred_scores, gt_masks, gt_labels, iou_thresh=0.5, use_07_metric=False)¶
Calculate average precisions based on evaluation code of PASCAL VOC.
This function evaluates predicted masks obtained from a dataset which has \(N\) images by using average precision for each class. The code is based on the evaluation code used in FCIS.
Parameters:
- pred_masks (iterable of numpy.ndarray) – An iterable of \(N\) sets of masks. Its index corresponds to an index for the base dataset. Each element of pred_masks is an object mask and is an array whose shape is \((R, H, W)\), where \(R\) corresponds to the number of masks, which may vary among images.
- pred_labels (iterable of numpy.ndarray) – An iterable of labels. Similar to pred_masks, its index corresponds to an index for the base dataset. Its length is \(N\).
- pred_scores (iterable of numpy.ndarray) – An iterable of confidence scores for predicted masks. Similar to pred_masks, its index corresponds to an index for the base dataset. Its length is \(N\).
- gt_masks (iterable of numpy.ndarray) – An iterable of ground truth masks whose length is \(N\). An element of gt_masks is an object mask whose shape is \((R, H, W)\). Note that the number of masks \(R\) in each image does not need to be the same as the number of corresponding predicted masks.
- gt_labels (iterable of numpy.ndarray) – An iterable of ground truth labels which are organized similarly to gt_masks. Its length is \(N\).
- iou_thresh (float) – A prediction is correct if its Intersection over Union with the ground truth is above this value.
- use_07_metric (bool) – Whether to use the PASCAL VOC 2007 evaluation metric for calculating average precision. The default value is False.
Returns: The keys, value-types and the description of the values are listed below.
- ap (numpy.ndarray): An array of average precisions. The \(l\)-th value corresponds to the average precision for class \(l\). If class \(l\) does not exist in either pred_labels or gt_labels, the corresponding value is set to numpy.nan.
- map (float): The average of Average Precisions over classes.
Return type: dict
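For masks, the same threshold-based matching is performed with mask IoU rather than box IoU. A minimal sketch of IoU between two boolean masks (an illustration, not the FCIS evaluation code):

```python
import numpy as np

def mask_iou(a, b):
    """IoU of two boolean masks of identical (H, W) shape."""
    inter = np.logical_and(a, b).sum()
    union = np.logical_or(a, b).sum()
    return inter / union

a = np.zeros((4, 4), dtype=bool)
a[:2] = True   # top two rows: 8 pixels
b = np.zeros((4, 4), dtype=bool)
b[1:3] = True  # middle two rows: 8 pixels
print(mask_iou(a, b))  # overlap is one row: 4/12 = 1/3
```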
calc_instance_segmentation_voc_prec_rec¶
chainercv.evaluations.calc_instance_segmentation_voc_prec_rec(pred_masks, pred_labels, pred_scores, gt_masks, gt_labels, iou_thresh)¶
Calculate precision and recall based on evaluation code of PASCAL VOC.
This function calculates precision and recall of predicted masks obtained from a dataset which has \(N\) images. The code is based on the evaluation code used in FCIS.
Parameters:
- pred_masks (iterable of numpy.ndarray) – An iterable of \(N\) sets of masks. Its index corresponds to an index for the base dataset. Each element of pred_masks is an object mask and is an array whose shape is \((R, H, W)\), where \(R\) corresponds to the number of masks, which may vary among images.
- pred_labels (iterable of numpy.ndarray) – An iterable of labels. Similar to pred_masks, its index corresponds to an index for the base dataset. Its length is \(N\).
- pred_scores (iterable of numpy.ndarray) – An iterable of confidence scores for predicted masks. Similar to pred_masks, its index corresponds to an index for the base dataset. Its length is \(N\).
- gt_masks (iterable of numpy.ndarray) – An iterable of ground truth masks whose length is \(N\). An element of gt_masks is an object mask whose shape is \((R, H, W)\). Note that the number of masks \(R\) in each image does not need to be the same as the number of corresponding predicted masks.
- gt_labels (iterable of numpy.ndarray) – An iterable of ground truth labels which are organized similarly to gt_masks. Its length is \(N\).
- iou_thresh (float) – A prediction is correct if its Intersection over Union with the ground truth is above this value.
Returns: This function returns two lists: prec and rec.
- prec: A list of arrays. prec[l] is precision for class \(l\). If class \(l\) does not exist in either pred_labels or gt_labels, prec[l] is set to None.
- rec: A list of arrays. rec[l] is recall for class \(l\). If class \(l\) does not exist in gt_labels, rec[l] is set to None.
Return type: tuple of two lists
Semantic Segmentation IoU¶
eval_semantic_segmentation¶
chainercv.evaluations.eval_semantic_segmentation(pred_labels, gt_labels)¶
Evaluate metrics used in Semantic Segmentation.
This function calculates Intersection over Union (IoU), Pixel Accuracy and Class Accuracy for the task of semantic segmentation.
The definition of metrics calculated by this function is as follows, where \(N_{ij}\) is the number of pixels that are labeled as class \(i\) by the ground truth and class \(j\) by the prediction.
- \(\text{IoU of the i-th class} = \frac{N_{ii}}{\sum_{j=1}^k N_{ij} + \sum_{j=1}^k N_{ji} - N_{ii}}\)
- \(\text{mIoU} = \frac{1}{k} \sum_{i=1}^k \frac{N_{ii}}{\sum_{j=1}^k N_{ij} + \sum_{j=1}^k N_{ji} - N_{ii}}\)
- \(\text{Pixel Accuracy} = \frac {\sum_{i=1}^k N_{ii}} {\sum_{i=1}^k \sum_{j=1}^k N_{ij}}\)
- \(\text{Class Accuracy} = \frac{N_{ii}}{\sum_{j=1}^k N_{ij}}\)
- \(\text{Mean Class Accuracy} = \frac{1}{k} \sum_{i=1}^k \frac{N_{ii}}{\sum_{j=1}^k N_{ij}}\)
A more detailed description of the above metrics can be found in a review of semantic segmentation [1].
The number of classes \(n\_class\) is \(max(pred\_labels, gt\_labels) + 1\), which is the maximum class id of the inputs plus one.
[1] Alberto Garcia-Garcia, Sergio Orts-Escolano, Sergiu Oprea, Victor Villena-Martinez, Jose Garcia-Rodriguez. A Review on Deep Learning Techniques Applied to Semantic Segmentation. arXiv 2017.
Parameters:
- pred_labels (iterable of numpy.ndarray) – A collection of predicted labels. The shape of a label array is \((H, W)\). \(H\) and \(W\) are the height and width of the label. For example, this is a list of labels [label_0, label_1, ...], where label_i.shape = (H_i, W_i).
- gt_labels (iterable of numpy.ndarray) – A collection of ground truth labels. The shape of a ground truth label array is \((H, W)\), and its corresponding prediction label should have the same shape. A pixel with value -1 will be ignored during evaluation.
Returns: The keys, value-types and the description of the values are listed below.
- iou (numpy.ndarray): An array of IoUs for the \(n\_class\) classes. Its shape is \((n\_class,)\).
- miou (float): The average of IoUs over classes.
- pixel_accuracy (float): The computed pixel accuracy.
- class_accuracy (numpy.ndarray): An array of class accuracies for the \(n\_class\) classes. Its shape is \((n\_class,)\).
- mean_class_accuracy (float): The average of class accuracies.
Return type: dict
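Given a confusion matrix \(N\), the formulas above reduce to a few lines of numpy. The following is a sketch of the definitions, not chainercv's implementation:

```python
import numpy as np

# Confusion matrix: confusion[i, j] = number of pixels labeled
# class i by the ground truth and class j by the prediction.
confusion = np.array([[3., 1.],
                      [0., 4.]])

gt_per_class = confusion.sum(axis=1)    # sum_j N_ij
pred_per_class = confusion.sum(axis=0)  # sum_j N_ji
diag = np.diag(confusion)               # N_ii

iou = diag / (gt_per_class + pred_per_class - diag)
miou = iou.mean()
pixel_accuracy = diag.sum() / confusion.sum()
class_accuracy = diag / gt_per_class
mean_class_accuracy = class_accuracy.mean()

print(iou)             # per-class IoU: 3/4 and 4/5
print(pixel_accuracy)  # 7 of 8 pixels correct
```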
calc_semantic_segmentation_confusion¶
chainercv.evaluations.calc_semantic_segmentation_confusion(pred_labels, gt_labels)¶
Collect a confusion matrix.
The number of classes \(n\_class\) is \(max(pred\_labels, gt\_labels) + 1\), which is the maximum class id of the inputs plus one.
Parameters: - pred_labels (iterable of numpy.ndarray) – A collection of predicted labels. The shape of a label array is \((H, W)\). \(H\) and \(W\) are height and width of the label.
- gt_labels (iterable of numpy.ndarray) – A collection of ground truth labels. The shape of a ground truth label array is \((H, W)\), and its corresponding prediction label should have the same shape. A pixel with value -1 will be ignored during evaluation.
Returns: A confusion matrix. Its shape is \((n\_class, n\_class)\). The \((i, j)\)-th element corresponds to the number of pixels that are labeled as class \(i\) by the ground truth and class \(j\) by the prediction.
Return type: numpy.ndarray
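The counting can be done in one pass with numpy.bincount by encoding each (ground truth, prediction) pair as a single index. A self-contained sketch of the idea (an illustration, not chainercv's code):

```python
import numpy as np

def confusion_matrix(pred_label, gt_label):
    """confusion[i, j] counts pixels with ground truth class i and
    predicted class j; ground truth pixels of -1 are ignored."""
    mask = gt_label >= 0
    pred = pred_label[mask]
    gt = gt_label[mask]
    n_class = int(max(pred.max(), gt.max())) + 1
    # Encode each (gt, pred) pair as one index in [0, n_class**2).
    return np.bincount(
        n_class * gt + pred, minlength=n_class ** 2
    ).reshape(n_class, n_class)

gt = np.array([[0, 0], [1, -1]])
pred = np.array([[0, 1], [1, 1]])
print(confusion_matrix(pred, gt))
# [[1 1]
#  [0 1]]
```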
calc_semantic_segmentation_iou¶
chainercv.evaluations.calc_semantic_segmentation_iou(confusion)¶
Calculate Intersection over Union with a given confusion matrix.
The definition of Intersection over Union (IoU) is as follows, where \(N_{ij}\) is the number of pixels that are labeled as class \(i\) by the ground truth and class \(j\) by the prediction.
- \(\text{IoU of the i-th class} = \frac{N_{ii}}{\sum_{j=1}^k N_{ij} + \sum_{j=1}^k N_{ji} - N_{ii}}\)
Parameters: confusion (numpy.ndarray) – A confusion matrix. Its shape is \((n\_class, n\_class)\). The \((i, j)\)-th element corresponds to the number of pixels that are labeled as class \(i\) by the ground truth and class \(j\) by the prediction.
Returns: An array of IoUs for the \(n\_class\) classes. Its shape is \((n\_class,)\).
Return type: numpy.ndarray
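The formula maps directly onto the matrix: the diagonal holds \(N_{ii}\), row sums give \(\sum_j N_{ij}\) and column sums give \(\sum_j N_{ji}\). A short sketch of that mapping:

```python
import numpy as np

def iou_from_confusion(confusion):
    """Per-class IoU: N_ii / (row_sum_i + col_sum_i - N_ii)."""
    diag = np.diag(confusion)
    return diag / (confusion.sum(axis=1) + confusion.sum(axis=0) - diag)

confusion = np.array([[3., 1.],
                      [0., 4.]])
print(iou_from_confusion(confusion))  # class 0: 3/4, class 1: 4/5
```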