Extensions

Evaluator

DetectionCOCOEvaluator

class chainercv.extensions.DetectionCOCOEvaluator(iterator, target, label_names=None, comm=None)

An extension that evaluates a detection model by the MS COCO metric.

This extension iterates over an iterator and evaluates the prediction results. The results consist of average precisions (APs) and average recalls (ARs) as well as the mean of each (mean average precision, mAP, and mean average recall, mAR). This extension reports the values listed in the table below. Please note that if label_names is not specified, only the mAPs and mARs are reported.

The underlying dataset of the iterator is assumed to return img, bbox, label or img, bbox, label, area, crowded.

key                                                          description
ap/iou=0.50:0.95/area=all/max_dets=100/<label_names[l]>      [1]
ap/iou=0.50/area=all/max_dets=100/<label_names[l]>           [1]
ap/iou=0.75/area=all/max_dets=100/<label_names[l]>           [1]
ap/iou=0.50:0.95/area=small/max_dets=100/<label_names[l]>    [1] [5]
ap/iou=0.50:0.95/area=medium/max_dets=100/<label_names[l]>   [1] [5]
ap/iou=0.50:0.95/area=large/max_dets=100/<label_names[l]>    [1] [5]
ar/iou=0.50:0.95/area=all/max_dets=1/<label_names[l]>        [2]
ar/iou=0.50:0.95/area=all/max_dets=10/<label_names[l]>       [2]
ar/iou=0.50:0.95/area=all/max_dets=100/<label_names[l]>      [2]
ar/iou=0.50:0.95/area=small/max_dets=100/<label_names[l]>    [2] [5]
ar/iou=0.50:0.95/area=medium/max_dets=100/<label_names[l]>   [2] [5]
ar/iou=0.50:0.95/area=large/max_dets=100/<label_names[l]>    [2] [5]
map/iou=0.50:0.95/area=all/max_dets=100                      [3]
map/iou=0.50/area=all/max_dets=100                           [3]
map/iou=0.75/area=all/max_dets=100                           [3]
map/iou=0.50:0.95/area=small/max_dets=100                    [3] [5]
map/iou=0.50:0.95/area=medium/max_dets=100                   [3] [5]
map/iou=0.50:0.95/area=large/max_dets=100                    [3] [5]
mar/iou=0.50:0.95/area=all/max_dets=1                        [4]
mar/iou=0.50:0.95/area=all/max_dets=10                       [4]
mar/iou=0.50:0.95/area=all/max_dets=100                      [4]
mar/iou=0.50:0.95/area=small/max_dets=100                    [4] [5]
mar/iou=0.50:0.95/area=medium/max_dets=100                   [4] [5]
mar/iou=0.50:0.95/area=large/max_dets=100                    [4] [5]

[1] Average precision for class label_names[l], where \(l\) is the index of the class. If class \(l\) does not exist in either pred_labels or gt_labels, the corresponding value is set to numpy.nan.

[2] Average recall for class label_names[l], where \(l\) is the index of the class. If class \(l\) does not exist in either pred_labels or gt_labels, the corresponding value is set to numpy.nan.

[3] The average of average precisions over classes.

[4] The average of average recalls over classes.

[5] Skipped if gt_areas is None.

Parameters
  • iterator (chainer.Iterator) – An iterator. Each sample should be a tuple img, bbox, label or img, bbox, label, area, crowded.

  • target (chainer.Link) – A detection link. This link must have a predict() method that takes a list of images and returns bboxes, labels and scores.

  • label_names (iterable of strings) – An iterable of names of classes. If this value is specified, average precisions and average recalls for each class are also reported.

  • comm (CommunicatorBase) – A ChainerMN communicator. If it is specified, this extension scatters the iterator of the root worker and gathers the results to the root worker.
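A minimal usage sketch (not part of the original reference), assuming a FasterRCNNVGG16 detection link and a COCO-style dataset whose samples also carry area and crowded (required for the area=small/medium/large entries); trainer and test_dataset are placeholders that would come from the surrounding training script.

import chainer

from chainercv.datasets import coco_bbox_label_names
from chainercv.extensions import DetectionCOCOEvaluator
from chainercv.links import FasterRCNNVGG16

# Any detection link with a predict() method returning bboxes, labels and
# scores can be used as the target.
model = FasterRCNNVGG16(n_fg_class=len(coco_bbox_label_names))

# test_dataset is a placeholder; it should yield
# img, bbox, label or img, bbox, label, area, crowded.
test_iter = chainer.iterators.SerialIterator(
    test_dataset, batch_size=1, repeat=False, shuffle=False)

# Report COCO APs/ARs on the validation set once per epoch.
trainer.extend(
    DetectionCOCOEvaluator(
        test_iter, model, label_names=coco_bbox_label_names),
    trigger=(1, 'epoch'))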

DetectionVOCEvaluator

class chainercv.extensions.DetectionVOCEvaluator(iterator, target, use_07_metric=False, label_names=None, comm=None)

An extension that evaluates a detection model by the PASCAL VOC metric.

This extension iterates over an iterator and evaluates the prediction results by average precisions (APs) and their mean (mean Average Precision, mAP). This extension reports the values listed below. Please note that 'ap/<label_names[l]>' is reported only if label_names is specified.

  • 'map': Mean of average precisions (mAP).

  • 'ap/<label_names[l]>': Average precision for class label_names[l], where \(l\) is the index of the class. For example, this evaluator reports 'ap/aeroplane', 'ap/bicycle', etc. if label_names is voc_bbox_label_names. If there is no bounding box assigned to class label_names[l] in either ground truth or prediction, it reports numpy.nan as its average precision. In this case, mAP is computed without this class.

Parameters
  • iterator (chainer.Iterator) – An iterator. Each sample should be a tuple img, bbox, label or img, bbox, label, difficult. img is an image, bbox contains the coordinates of the bounding boxes, label contains the labels of the bounding boxes, and difficult indicates whether each bounding box is difficult or not. If difficult is returned, difficult ground truth is ignored during evaluation.

  • target (chainer.Link) – A detection link. This link must have a predict() method that takes a list of images and returns bboxes, labels and scores.

  • use_07_metric (bool) – Whether to use the PASCAL VOC 2007 evaluation metric for calculating average precision. The default value is False.

  • label_names (iterable of strings) – An iterable of names of classes. If this value is specified, the average precision for each class is also reported with the key 'ap/<label_names[l]>'.

  • comm (CommunicatorBase) – A ChainerMN communicator. If it is specified, this extension scatters the iterator of the root worker and gathers the results to the root worker.
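Because the evaluator is a chainer.training extension, it can also be invoked directly without a trainer. The following sketch (not from the original reference) assumes the SSD300 weights pretrained on VOC and the VOC2007 test split, and assumes that the reported keys are prefixed with the target name ('main/') when the extension is called this way.

import chainer

from chainercv.datasets import VOCBboxDataset, voc_bbox_label_names
from chainercv.extensions import DetectionVOCEvaluator
from chainercv.links import SSD300

model = SSD300(pretrained_model='voc0712')
# Return the difficult flag so that difficult ground truth is excluded.
test_data = VOCBboxDataset(
    year='2007', split='test', use_difficult=True, return_difficult=True)
test_iter = chainer.iterators.SerialIterator(
    test_data, batch_size=8, repeat=False, shuffle=False)

evaluator = DetectionVOCEvaluator(
    test_iter, model, use_07_metric=True,
    label_names=voc_bbox_label_names)
result = evaluator()              # runs evaluation outside a trainer
print(result['main/map'])         # assumed key prefix for the 'main' target
print(result['main/ap/aeroplane'])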

InstanceSegmentationCOCOEvaluator

class chainercv.extensions.InstanceSegmentationCOCOEvaluator(iterator, target, label_names=None, comm=None)

An extension that evaluates an instance segmentation model by the MS COCO metric.

This extension iterates over an iterator and evaluates the prediction results. The results consist of average precisions (APs) and average recalls (ARs) as well as the mean of each (mean average precision, mAP, and mean average recall, mAR). This extension reports the values listed in the table below. Please note that if label_names is not specified, only the mAPs and mARs are reported.

The underlying dataset of the iterator is assumed to return img, mask, label or img, mask, label, area, crowded.

key                                                          description
ap/iou=0.50:0.95/area=all/max_dets=100/<label_names[l]>      [6]
ap/iou=0.50/area=all/max_dets=100/<label_names[l]>           [6]
ap/iou=0.75/area=all/max_dets=100/<label_names[l]>           [6]
ap/iou=0.50:0.95/area=small/max_dets=100/<label_names[l]>    [6] [10]
ap/iou=0.50:0.95/area=medium/max_dets=100/<label_names[l]>   [6] [10]
ap/iou=0.50:0.95/area=large/max_dets=100/<label_names[l]>    [6] [10]
ar/iou=0.50:0.95/area=all/max_dets=1/<label_names[l]>        [7]
ar/iou=0.50:0.95/area=all/max_dets=10/<label_names[l]>       [7]
ar/iou=0.50:0.95/area=all/max_dets=100/<label_names[l]>      [7]
ar/iou=0.50:0.95/area=small/max_dets=100/<label_names[l]>    [7] [10]
ar/iou=0.50:0.95/area=medium/max_dets=100/<label_names[l]>   [7] [10]
ar/iou=0.50:0.95/area=large/max_dets=100/<label_names[l]>    [7] [10]
map/iou=0.50:0.95/area=all/max_dets=100                      [8]
map/iou=0.50/area=all/max_dets=100                           [8]
map/iou=0.75/area=all/max_dets=100                           [8]
map/iou=0.50:0.95/area=small/max_dets=100                    [8] [10]
map/iou=0.50:0.95/area=medium/max_dets=100                   [8] [10]
map/iou=0.50:0.95/area=large/max_dets=100                    [8] [10]
mar/iou=0.50:0.95/area=all/max_dets=1                        [9]
mar/iou=0.50:0.95/area=all/max_dets=10                       [9]
mar/iou=0.50:0.95/area=all/max_dets=100                      [9]
mar/iou=0.50:0.95/area=small/max_dets=100                    [9] [10]
mar/iou=0.50:0.95/area=medium/max_dets=100                   [9] [10]
mar/iou=0.50:0.95/area=large/max_dets=100                    [9] [10]

[6] Average precision for class label_names[l], where \(l\) is the index of the class. If class \(l\) does not exist in either pred_labels or gt_labels, the corresponding value is set to numpy.nan.

[7] Average recall for class label_names[l], where \(l\) is the index of the class. If class \(l\) does not exist in either pred_labels or gt_labels, the corresponding value is set to numpy.nan.

[8] The average of average precisions over classes.

[9] The average of average recalls over classes.

[10] Skipped if gt_areas is None.

Parameters
  • iterator (chainer.Iterator) – An iterator. Each sample should be a tuple img, mask, label or img, mask, label, area, crowded.

  • target (chainer.Link) – An instance segmentation link. This link must have a predict() method that takes a list of images and returns masks, labels and scores.

  • label_names (iterable of strings) – An iterable of names of classes. If this value is specified, average precisions and average recalls for each class are also reported.

  • comm (CommunicatorBase) – A ChainerMN communicator. If it is specified, this extension scatters the iterator of the root worker and gathers the results to the root worker.
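Usage mirrors DetectionCOCOEvaluator; the only differences are that the samples carry masks instead of bounding boxes and that the target's predict() must return masks, labels and scores. A brief sketch (not from the original reference), with model, trainer and test_dataset as placeholders and coco_instance_segmentation_label_names assumed to be available in chainercv.datasets.

import chainer

from chainercv.datasets import coco_instance_segmentation_label_names
from chainercv.extensions import InstanceSegmentationCOCOEvaluator

# test_dataset should yield img, mask, label or img, mask, label, area, crowded;
# model is any link whose predict() returns masks, labels and scores.
test_iter = chainer.iterators.SerialIterator(
    test_dataset, batch_size=1, repeat=False, shuffle=False)

trainer.extend(
    InstanceSegmentationCOCOEvaluator(
        test_iter, model,
        label_names=coco_instance_segmentation_label_names),
    trigger=(1, 'epoch'))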

InstanceSegmentationVOCEvaluator

class chainercv.extensions.InstanceSegmentationVOCEvaluator(iterator, target, iou_thresh=0.5, use_07_metric=False, label_names=None, comm=None)

An extension that evaluates an instance segmentation model by the PASCAL VOC metric.

This extension iterates over an iterator and evaluates the prediction results by average precisions (APs) and their mean (mean Average Precision, mAP). This extension reports the values listed below. Please note that 'ap/<label_names[l]>' is reported only if label_names is specified.

  • 'map': Mean of average precisions (mAP).

  • 'ap/<label_names[l]>': Average precision for class label_names[l], where \(l\) is the index of the class. For example, this evaluator reports 'ap/aeroplane', 'ap/bicycle', etc. if label_names is sbd_instance_segmentation_label_names. If there is no instance assigned to class label_names[l] in either ground truth or prediction, it reports numpy.nan as its average precision. In this case, mAP is computed without this class.

Parameters
  • iterator (chainer.Iterator) – An iterator. Each sample should be a tuple img, mask, label or img, mask, label, difficult. img is an image, mask contains the pixel-wise masks of the instances, label contains the labels of the instances, and difficult indicates whether each instance is difficult or not. If difficult is returned, difficult ground truth is ignored during evaluation.

  • target (chainer.Link) – An instance-segmentation link. This link must have a predict() method that takes a list of images and returns masks, labels and scores.

  • iou_thresh (float) – Intersection over Union (IoU) threshold for calculating average precision. The default value is 0.5.

  • use_07_metric (bool) – Whether to use the PASCAL VOC 2007 evaluation metric for calculating average precision. The default value is False.

  • label_names (iterable of strings) – An iterable of names of classes. If this value is specified, the average precision for each class is also reported with the key 'ap/<label_names[l]>'.

  • comm (CommunicatorBase) – A ChainerMN communicator. If it is specified, this extension scatters the iterator of the root worker and gathers the results to the root worker.
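A short sketch of evaluating at a stricter overlap threshold than the default (not from the original reference; test_iter and model are placeholders, and the returned keys are assumed to carry the 'main/' prefix as with the other evaluators).

from chainercv.datasets import sbd_instance_segmentation_label_names
from chainercv.extensions import InstanceSegmentationVOCEvaluator

# Evaluate with IoU threshold 0.7 instead of the default 0.5.
evaluator = InstanceSegmentationVOCEvaluator(
    test_iter, model, iou_thresh=0.7, use_07_metric=False,
    label_names=sbd_instance_segmentation_label_names)
result = evaluator()
print(result['main/map'])
# Per-class AP; classes absent from both prediction and ground truth are nan.
print(result['main/ap/aeroplane'])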

SemanticSegmentationEvaluator

class chainercv.extensions.SemanticSegmentationEvaluator(iterator, target, label_names=None, comm=None)

An extension that evaluates a semantic segmentation model.

This extension iterates over an iterator and evaluates the prediction results of the model by common evaluation metrics for semantic segmentation. This extension reports the values with the keys listed below. Please note that 'iou/<label_names[l]>' and 'class_accuracy/<label_names[l]>' are reported only if label_names is specified.

  • 'miou': Mean of IoUs (mIoU).

  • 'iou/<label_names[l]>': IoU for class label_names[l], where \(l\) is the index of the class. For example, if label_names is camvid_label_names, this evaluator reports 'iou/Sky', 'iou/Building', etc.

  • 'mean_class_accuracy': Mean of class accuracies.

  • 'class_accuracy/<label_names[l]>': Class accuracy for class label_names[l], where \(l\) is the index of the class.

  • 'pixel_accuracy': Pixel accuracy.

If there is no label assigned to class label_names[l] in the ground truth, the values corresponding to the keys 'iou/<label_names[l]>' and 'class_accuracy/<label_names[l]>' are numpy.nan. In that case, the corresponding means are computed with those classes excluded.

For details on the evaluation metrics, please see the documentation for chainercv.evaluations.eval_semantic_segmentation().

Parameters
  • iterator (chainer.Iterator) – An iterator. Each sample should be a tuple img, label. img is an image and label is a pixel-wise label map.

  • target (chainer.Link) – A semantic segmentation link. This link should have a predict() method that takes a list of images and returns labels.

  • label_names (iterable of strings) – An iterable of names of classes. If this value is specified, IoU and class accuracy for each class are also reported with the keys 'iou/<label_names[l]>' and 'class_accuracy/<label_names[l]>'.

  • comm (CommunicatorBase) – A ChainerMN communicator. If it is specified, this extension scatters the iterator of the root worker and gathers the results to the root worker.
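A usage sketch with CamVid and SegNetBasic (not from the original reference; any dataset returning img, label and any link whose predict() returns per-pixel labels would work the same way, and the 'main/' key prefix is again an assumption about how the results are reported when the extension is called directly).

import chainer

from chainercv.datasets import CamVidDataset, camvid_label_names
from chainercv.extensions import SemanticSegmentationEvaluator
from chainercv.links import SegNetBasic

model = SegNetBasic(
    n_class=len(camvid_label_names), pretrained_model='camvid')
test_data = CamVidDataset(split='test')
test_iter = chainer.iterators.SerialIterator(
    test_data, batch_size=8, repeat=False, shuffle=False)

evaluator = SemanticSegmentationEvaluator(
    test_iter, model, label_names=camvid_label_names)
result = evaluator()
print(result['main/miou'])
print(result['main/iou/Sky'])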

Visualization Report

DetectionVisReport

class chainercv.extensions.DetectionVisReport(iterator, target, label_names=None, filename='detection_iter={iteration}_idx={index}.jpg')

An extension that visualizes output of a detection model.

This extension visualizes the predicted bounding boxes together with the ground truth bounding boxes.

Internally, this extension takes examples from the iterator, predicts bounding boxes for the images in the examples, and visualizes them using chainercv.visualizations.vis_bbox(). The process can be illustrated by the following code.

batch = next(iterator)
# Convert batch -> imgs, gt_bboxes, gt_labels
pred_bboxes, pred_labels, pred_scores = target.predict(imgs)
# Visualization code
for img, gt_bbox, gt_label, pred_bbox, pred_label, pred_score \
        in zip(imgs, gt_bboxes, gt_labels,
               pred_bboxes, pred_labels, pred_scores):
    # the ground truth
    vis_bbox(img, gt_bbox, gt_label)
    # the prediction
    vis_bbox(img, pred_bbox, pred_label, pred_score)

Note

gt_bbox and pred_bbox are float arrays of shape \((R, 4)\), where \(R\) is the number of bounding boxes in the image. Each bounding box is organized by \((y_{min}, x_{min}, y_{max}, x_{max})\) in the second axis.

gt_label and pred_label are integer arrays of shape \((R,)\). Each label indicates the class of the bounding box.

pred_score is a float array of shape \((R,)\). Each score indicates how confident the prediction is.

Parameters
  • iterator – Iterator object that produces images and ground truth.

  • target – Link object used for detection.

  • label_names (iterable of strings) – Names of labels ordered according to label ids. If this is None, label names are not displayed.

  • filename (str) – Basename for the saved image. It can contain two keywords, '{iteration}' and '{index}'. They are replaced with the iteration of the trainer and the index of the sample when this extension saves an image. The default value is 'detection_iter={iteration}_idx={index}.jpg'.
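A sketch of attaching the report to a trainer (not from the original reference; trainer, test_dataset and model are placeholders, and the images are expected to be written under the trainer's output directory).

import chainer

from chainercv.datasets import voc_bbox_label_names
from chainercv.extensions import DetectionVisReport

# test_dataset should yield img, bbox, label; model needs a predict() method
# returning bboxes, labels and scores, as described above.
vis_iter = chainer.iterators.SerialIterator(
    test_dataset, batch_size=1, repeat=False, shuffle=False)

# Save files such as detection_iter=10000_idx=0.jpg every 10000 iterations.
trainer.extend(
    DetectionVisReport(
        vis_iter, model, label_names=voc_bbox_label_names,
        filename='detection_iter={iteration}_idx={index}.jpg'),
    trigger=(10000, 'iteration'))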