Evaluations

Detection VOC

eval_detection_voc_ap

chainercv.evaluations.eval_detection_voc_ap(pred_bboxes, pred_labels, pred_scores, gt_bboxes, gt_labels, gt_difficults=None, iou_thresh=0.5, use_07_metric=False)

Calculate average precisions based on evaluation code of PASCAL VOC.

This function evaluates predicted bounding boxes obtained from a dataset of \(N\) images by computing the average precision for each class. The code is based on the evaluation code used in the PASCAL VOC Challenge.

Parameters:
  • pred_bboxes (iterable of numpy.ndarray) – An iterable of \(N\) sets of bounding boxes. Its index corresponds to an index for the base dataset. Each element of pred_bboxes is a set of coordinates of bounding boxes. This is an array whose shape is \((R, 4)\), where \(R\) corresponds to the number of bounding boxes, which may vary among images. The second axis corresponds to y_min, x_min, y_max, x_max of a bounding box.
  • pred_labels (iterable of numpy.ndarray) – An iterable of labels. Similar to pred_bboxes, its index corresponds to an index for the base dataset. Its length is \(N\).
  • pred_scores (iterable of numpy.ndarray) – An iterable of confidence scores for predicted bounding boxes. Similar to pred_bboxes, its index corresponds to an index for the base dataset. Its length is \(N\).
  • gt_bboxes (iterable of numpy.ndarray) – An iterable of ground truth bounding boxes whose length is \(N\). An element of gt_bboxes is a set of bounding boxes whose shape is \((R, 4)\). Note that the number of bounding boxes in each image does not need to be the same as the number of corresponding predicted boxes.
  • gt_labels (iterable of numpy.ndarray) – An iterable of ground truth labels which are organized similarly to gt_bboxes.
  • gt_difficults (iterable of numpy.ndarray) – An iterable of boolean arrays which is organized similarly to gt_bboxes. This tells whether the corresponding ground truth bounding box is difficult or not. By default, this is None. In that case, this function considers all bounding boxes to be not difficult.
  • iou_thresh (float) – A prediction is correct if its Intersection over Union with the ground truth is above this value.
  • use_07_metric (bool) – Whether to use PASCAL VOC 2007 evaluation metric for calculating average precision. The default value is False.
Returns:

This function returns an array of average precisions. The \(l\)-th value corresponds to the average precision for class \(l\). If class \(l\) does not exist in either pred_labels or gt_labels, the corresponding value is set to numpy.nan.

Return type:

numpy.ndarray
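
For illustration, here is a minimal sketch with toy arrays for a dataset of \(N = 2\) images; all coordinates, labels, and scores below are made up.

import numpy as np
from chainercv.evaluations import eval_detection_voc_ap

# Each bounding box array has shape (R, 4) in (y_min, x_min, y_max, x_max) order.
pred_bboxes = [
    np.array([[10., 20., 60., 90.], [15., 25., 70., 95.]]),
    np.array([[5., 5., 40., 40.]]),
]
pred_labels = [np.array([0, 0]), np.array([1])]
pred_scores = [np.array([0.9, 0.3]), np.array([0.8])]
gt_bboxes = [
    np.array([[12., 22., 58., 88.]]),
    np.array([[4., 6., 42., 38.]]),
]
gt_labels = [np.array([0]), np.array([1])]

ap = eval_detection_voc_ap(
    pred_bboxes, pred_labels, pred_scores, gt_bboxes, gt_labels)
print(ap)              # per-class average precisions; numpy.nan for absent classes
print(np.nanmean(ap))  # mean average precision (mAP) over the classes that appear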

calc_detection_voc_ap

chainercv.evaluations.calc_detection_voc_ap(prec, rec, use_07_metric=False)

Calculate average precisions based on evaluation code of PASCAL VOC.

This function calculates average precisions from given precisions and recalls. The code is based on the evaluation code used in PASCAL VOC Challenge.

Parameters:
  • prec (list of numpy.ndarray) – A list of arrays. prec[l] indicates precision for class \(l\). If prec[l] is None, this function returns numpy.nan for class \(l\).
  • rec (list of numpy.ndarray) – A list of arrays. rec[l] indicates recall for class \(l\). If rec[l] is None, this function returns numpy.nan for class \(l\).
  • use_07_metric (bool) – Whether to use PASCAL VOC 2007 evaluation metric for calculating average precision. The default value is False.
Returns:

This function returns an array of average precisions. The \(l\)-th value corresponds to the average precision for class \(l\). If prec[l] or rec[l] is None, the corresponding value is set to numpy.nan.

Return type:

numpy.ndarray
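
As a sketch, the precision and recall curves below are toy cumulative values for two classes; class 1 has no data, so its entries are None.

import numpy as np
from chainercv.evaluations import calc_detection_voc_ap

# Toy cumulative precision/recall curves; class 1 is absent.
prec = [np.array([1., 1., 0.67, 0.75]), None]
rec = [np.array([0.25, 0.5, 0.5, 0.75]), None]

ap = calc_detection_voc_ap(prec, rec)                         # continuous AP
ap_07 = calc_detection_voc_ap(prec, rec, use_07_metric=True)  # 11-point 2007 metric
# ap[1] and ap_07[1] are numpy.nan because prec[1] and rec[1] are None.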

calc_detection_voc_prec_rec

chainercv.evaluations.calc_detection_voc_prec_rec(pred_bboxes, pred_labels, pred_scores, gt_bboxes, gt_labels, gt_difficults=None, iou_thresh=0.5)

Calculate precision and recall based on evaluation code of PASCAL VOC.

This function calculates precision and recall of predicted bounding boxes obtained from a dataset of \(N\) images. The code is based on the evaluation code used in the PASCAL VOC Challenge.

Parameters:
  • pred_bboxes (iterable of numpy.ndarray) – An iterable of \(N\) sets of bounding boxes. Its index corresponds to an index for the base dataset. Each element of pred_bboxes is a set of coordinates of bounding boxes. This is an array whose shape is \((R, 4)\), where \(R\) corresponds to the number of bounding boxes, which may vary among images. The second axis corresponds to y_min, x_min, y_max, x_max of a bounding box.
  • pred_labels (iterable of numpy.ndarray) – An iterable of labels. Similar to pred_bboxes, its index corresponds to an index for the base dataset. Its length is \(N\).
  • pred_scores (iterable of numpy.ndarray) – An iterable of confidence scores for predicted bounding boxes. Similar to pred_bboxes, its index corresponds to an index for the base dataset. Its length is \(N\).
  • gt_bboxes (iterable of numpy.ndarray) – An iterable of ground truth bounding boxes whose length is \(N\). An element of gt_bboxes is a set of bounding boxes whose shape is \((R, 4)\). Note that the number of bounding boxes in each image does not need to be the same as the number of corresponding predicted boxes.
  • gt_labels (iterable of numpy.ndarray) – An iterable of ground truth labels which are organized similarly to gt_bboxes.
  • gt_difficults (iterable of numpy.ndarray) – An iterable of boolean arrays which is organized similarly to gt_bboxes. This tells whether the corresponding ground truth bounding box is difficult or not. By default, this is None. In that case, this function considers all bounding boxes to be not difficult.
  • iou_thresh (float) – A prediction is correct if its Intersection over Union with the ground truth is above this value.
Returns:

This function returns two lists: prec and rec.

  • prec: A list of arrays. prec[l] is precision for class \(l\). If class \(l\) does not exist in either pred_labels or gt_labels, prec[l] is set to None.
  • rec: A list of arrays. rec[l] is recall for class \(l\). If class \(l\) does not exist in either pred_labels or gt_labels, rec[l] is set to None.

Return type:

tuple of two lists
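
The two calc_* functions compose into eval_detection_voc_ap. A minimal sketch on a toy single-image dataset (all values made up):

import numpy as np
from chainercv.evaluations import (
    calc_detection_voc_ap, calc_detection_voc_prec_rec)

# Toy dataset with N = 1 image and a single class.
pred_bboxes = [np.array([[10., 20., 60., 90.]])]
pred_labels = [np.array([0])]
pred_scores = [np.array([0.9])]
gt_bboxes = [np.array([[12., 22., 58., 88.]])]
gt_labels = [np.array([0])]

prec, rec = calc_detection_voc_prec_rec(
    pred_bboxes, pred_labels, pred_scores, gt_bboxes, gt_labels,
    iou_thresh=0.5)
ap = calc_detection_voc_ap(prec, rec)  # same result as eval_detection_voc_ap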

PCK

eval_pck

chainercv.evaluations.eval_pck(pred, expected, alpha, L)

Calculate PCK (Percentage of Correct Keypoints).

This function calculates the fraction of keypoints whose positions are correctly predicted. A predicted keypoint is correctly matched to the ground truth if it lies within Euclidean distance \(\alpha \cdot L\) of the ground truth keypoint, where \(L\) is the size of the image and \(0 < \alpha < 1\) is a variable we control. \(L\) is determined differently depending on the context. For example, in evaluation of keypoint matching for the CUB dataset, \(L=\sqrt{h^2 + w^2}\) is used.

Parameters:
  • pred (ndarray) – An array of shape \((K, 2)\), where \(K\) is the number of keypoints to be evaluated. The two elements of the second axis correspond to the \(y\) and \(x\) coordinates of the keypoint.
  • expected (ndarray) – Same kind of array as pred. This contains the ground truth locations of the keypoints that the user tries to predict.
  • alpha (float) – A control variable \(\alpha\).
  • L (float) – The size of an image. Its definition changes from task to task.
Returns:

The ratio of correctly matched keypoints (PCK).

Return type:

float
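
A minimal sketch with made-up keypoints, taking \(L\) as the image diagonal in the CUB style mentioned above:

import numpy as np
from chainercv.evaluations import eval_pck

# Toy (K, 2) keypoint arrays in (y, x) order for a hypothetical 300x400 image.
pred = np.array([[50., 60.], [100., 120.], [200., 210.]])
expected = np.array([[52., 58.], [140., 160.], [198., 212.]])

h, w = 300, 400
L = np.sqrt(h ** 2 + w ** 2)  # image diagonal, as in CUB evaluation
pck = eval_pck(pred, expected, alpha=0.05, L=L)
# alpha * L = 25, so the second keypoint (distance ~56.6) counts as a miss:
print(pck)  # 2 of 3 keypoints match -> about 0.667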

Semantic Segmentation IoU

eval_semantic_segmentation_iou

chainercv.evaluations.eval_semantic_segmentation_iou(pred_labels, gt_labels)

Evaluate Intersection over Union from labels.

This function calculates Intersection over Union (IoU) for the task of semantic segmentation.

The definitions of IoU and a related metric, mean Intersection over Union (mIoU), are as follows, where \(N_{ij}\) is the number of pixels that are labeled as class \(i\) by the ground truth and class \(j\) by the prediction.

  • \(\text{IoU of the i-th class} = \frac{N_{ii}}{\sum_{j=1}^k N_{ij} + \sum_{j=1}^k N_{ji} - N_{ii}}\)
  • \(\text{mIoU} = \frac{1}{k} \sum_{i=1}^k \frac{N_{ii}}{\sum_{j=1}^k N_{ij} + \sum_{j=1}^k N_{ji} - N_{ii}}\)

mIoU can be computed by taking numpy.nanmean of the IoUs returned by this function. A more detailed description of the above metrics can be found in a review on semantic segmentation [1].

The number of classes \(n\_class\) is \(\max(pred\_labels, gt\_labels) + 1\), i.e. one more than the maximum class id appearing in the inputs.

[1] Alberto Garcia-Garcia, Sergio Orts-Escolano, Sergiu Oprea, Victor Villena-Martinez, Jose Garcia-Rodriguez. A Review on Deep Learning Techniques Applied to Semantic Segmentation. arXiv 2017.
Parameters:
  • pred_labels (iterable of numpy.ndarray) – A collection of predicted labels. The shape of a label array is \((H, W)\). \(H\) and \(W\) are height and width of the label. For example, this is a list of labels [label_0, label_1, ...], where label_i.shape = (H_i, W_i).
  • gt_labels (iterable of numpy.ndarray) – A collection of ground truth labels. The shape of a ground truth label array is \((H, W)\), and its corresponding prediction label should have the same shape. A pixel with value -1 will be ignored during evaluation.
Returns:

An array of IoUs for the \(n\_class\) classes. Its shape is \((n\_class,)\).

Return type:

numpy.ndarray
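
A minimal sketch with toy 2×3 label maps (all values made up; note the ignored pixel):

import numpy as np
from chainercv.evaluations import eval_semantic_segmentation_iou

# Toy label maps; -1 in the ground truth marks a pixel ignored in evaluation.
pred_labels = [np.array([[0, 0, 1],
                         [1, 2, 2]])]
gt_labels = [np.array([[0, 1, 1],
                       [1, 2, -1]])]

iou = eval_semantic_segmentation_iou(pred_labels, gt_labels)  # shape (n_class,)
miou = np.nanmean(iou)  # mIoU, as described above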

calc_semantic_segmentation_confusion

chainercv.evaluations.calc_semantic_segmentation_confusion(pred_labels, gt_labels)

Collect a confusion matrix.

The number of classes \(n\_class\) is \(\max(pred\_labels, gt\_labels) + 1\), i.e. one more than the maximum class id appearing in the inputs.

Parameters:
  • pred_labels (iterable of numpy.ndarray) – A collection of predicted labels. The shape of a label array is \((H, W)\). \(H\) and \(W\) are height and width of the label.
  • gt_labels (iterable of numpy.ndarray) – A collection of ground truth labels. The shape of a ground truth label array is \((H, W)\), and its corresponding prediction label should have the same shape. A pixel with value -1 will be ignored during evaluation.
Returns:

A confusion matrix. Its shape is \((n\_class, n\_class)\). The \((i, j)\)-th element corresponds to the number of pixels that are labeled as class \(i\) by the ground truth and class \(j\) by the prediction.

Return type:

numpy.ndarray
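
Continuing the toy label maps from the previous sketch:

import numpy as np
from chainercv.evaluations import calc_semantic_segmentation_confusion

pred_labels = [np.array([[0, 0, 1],
                         [1, 2, 2]])]
gt_labels = [np.array([[0, 1, 1],
                       [1, 2, -1]])]

confusion = calc_semantic_segmentation_confusion(pred_labels, gt_labels)
# confusion[i, j] counts pixels with ground truth class i predicted as class j;
# the pixel with gt == -1 does not contribute.
print(confusion)  # a (3, 3) matrix for the three class ids that appear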

calc_semantic_segmentation_iou

chainercv.evaluations.calc_semantic_segmentation_iou(confusion)

Calculate Intersection over Union with a given confusion matrix.

The definition of Intersection over Union (IoU) is as follows, where \(N_{ij}\) is the number of pixels that are labeled as class \(i\) by the ground truth and class \(j\) by the prediction.

  • \(\text{IoU of the i-th class} = \frac{N_{ii}}{\sum_{j=1}^k N_{ij} + \sum_{j=1}^k N_{ji} - N_{ii}}\)
Parameters:
  • confusion (numpy.ndarray) – A confusion matrix. Its shape is \((n\_class, n\_class)\). The \((i, j)\)-th element corresponds to the number of pixels that are labeled as class \(i\) by the ground truth and class \(j\) by the prediction.
Returns:

An array of IoUs for the \(n\_class\) classes. Its shape is \((n\_class,)\).

Return type:

numpy.ndarray
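
A sketch with a made-up 3-class confusion matrix, together with the equivalent NumPy expression of the formula above:

import numpy as np
from chainercv.evaluations import calc_semantic_segmentation_iou

# Toy confusion matrix: rows index ground truth classes, columns predictions.
confusion = np.array([[50,  2,  3],
                      [ 4, 60,  1],
                      [ 0,  5, 40]])

iou = calc_semantic_segmentation_iou(confusion)

# The same computation written out with NumPy:
iou_manual = np.diag(confusion) / (
    confusion.sum(axis=1) + confusion.sum(axis=0) - np.diag(confusion))
assert np.allclose(iou, iou_manual)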