Extensions

Detection

DetectionVisReport

chainercv.extensions.DetectionVisReport(iterator, target, label_names=None, filename='detection_iter={iteration}_idx={index}.jpg')

An extension that visualizes output of a detection model.

This extension visualizes the predicted bounding boxes together with the ground truth bounding boxes.

Internally, this extension takes examples from an iterator, predicts bounding boxes for the images in the examples, and visualizes them using chainercv.visualizations.vis_bbox(). The process is illustrated by the following code.

from chainercv.visualizations import vis_bbox

batch = next(iterator)
# Convert batch -> imgs, gt_bboxes, gt_labels
pred_bboxes, pred_labels, pred_scores = target.predict(imgs)
# Visualization code
for img, gt_bbox, gt_label, pred_bbox, pred_label, pred_score \
        in zip(imgs, gt_bboxes, gt_labels,
               pred_bboxes, pred_labels, pred_scores):
    # the ground truth
    vis_bbox(img, gt_bbox, gt_label)
    # the prediction
    vis_bbox(img, pred_bbox, pred_label, pred_score)

Note

gt_bbox and pred_bbox are float arrays of shape \((R, 4)\), where \(R\) is the number of bounding boxes in the image. Each bounding box is organized as (y_min, x_min, y_max, x_max) along the second axis.

gt_label and pred_label are integer arrays of shape \((R,)\). Each label indicates the class of the bounding box.

pred_score is a float array of shape \((R,)\). Each score indicates how confident the prediction is.
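For concreteness, here is a minimal NumPy sketch of these arrays; the values are hypothetical and only illustrate the shapes described above.

import numpy as np

# A hypothetical image containing R = 2 bounding boxes.
gt_bbox = np.array([[10., 20., 110., 120.],
                    [50., 60., 150., 160.]], dtype=np.float32)  # shape (R, 4)
gt_label = np.array([0, 7], dtype=np.int32)  # shape (R,), class ids
pred_score = np.array([0.95, 0.42], dtype=np.float32)  # shape (R,), confidences

assert gt_bbox.shape == (2, 4)  # each row is (y_min, x_min, y_max, x_max)
assert gt_label.shape == (2,)
assert pred_score.shape == (2,)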

Parameters:
  • iterator – Iterator object that produces images and ground truth.
  • target – Link object used for detection.
  • label_names (iterable of str) – Names of labels ordered according to label ids. If this is None, labels are not displayed.
  • filename (str) – Basename for the saved image. It can contain two keywords, '{iteration}' and '{index}'. They are replaced with the iteration of the trainer and the index of the sample when this extension saves an image. The default value is 'detection_iter={iteration}_idx={index}.jpg'.
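As a usage sketch, the extension is registered on a trainer like any other Chainer extension. In the snippet below, dataset, model, trainer and label_names are placeholders for an existing detection dataset, detection link, chainer.training.Trainer and list of class names; only DetectionVisReport itself is part of this API.

from chainer import iterators

from chainercv.extensions import DetectionVisReport

# dataset: yields (img, bbox, label); model: a link with predict();
# trainer: an already constructed chainer.training.Trainer.
it = iterators.SerialIterator(
    dataset, batch_size=1, repeat=False, shuffle=False)
trainer.extend(
    DetectionVisReport(it, model, label_names=label_names),
    trigger=(1, 'epoch'))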

DetectionVOCEvaluator

chainercv.extensions.DetectionVOCEvaluator(iterator, target, use_07_metric=False, label_names=None)

An extension that evaluates a detection model by PASCAL VOC metric.

This extension iterates over an iterator and evaluates the prediction results by average precisions (APs) and their mean (mean Average Precision, mAP). This extension reports the following values with the keys below. Please note that 'ap/<label_names[l]>' is reported only if label_names is specified.

  • 'map': Mean of average precisions (mAP).
  • 'ap/<label_names[l]>': Average precision for class label_names[l], where \(l\) is the index of the class. For example, this evaluator reports 'ap/aeroplane', 'ap/bicycle', etc. if label_names is voc_detection_label_names. If there is no bounding box assigned to class label_names[l] in either ground truth or prediction, it reports numpy.nan as its average precision. In this case, mAP is computed without this class (see the sketch after this list).
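The handling of absent classes can be sketched with NumPy: mAP is the mean of the per-class APs with numpy.nan entries excluded. This is an illustration of the reported behavior, not the evaluator's actual implementation.

import numpy as np

# Hypothetical per-class APs; the second class has no assigned
# bounding boxes, so its AP is reported as numpy.nan.
ap = np.array([0.7, np.nan, 0.5])
map_ = np.nanmean(ap)  # nan entries are excluded -> (0.7 + 0.5) / 2 = 0.6
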
Parameters:
  • iterator (chainer.Iterator) – An iterator. Each sample should be a tuple img, bbox, label or img, bbox, label, difficult, where img is an image, bbox holds the coordinates of the bounding boxes, label holds the labels of the bounding boxes, and difficult indicates whether each bounding box is difficult or not. If difficult is returned, difficult ground truth boxes are excluded from the evaluation.
  • target (chainer.Link) – A detection link. This link must have a predict() method that takes a list of images and returns bboxes, labels and scores.
  • use_07_metric (bool) – Whether to use PASCAL VOC 2007 evaluation metric for calculating average precision. The default value is False.
  • label_names (iterable of strings) – An iterable of names of classes. If this value is specified, average precision for each class is also reported with the key 'ap/<label_names[l]>'.
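A hedged usage sketch follows; model and test_dataset are placeholders for a trained detection link and a VOC-style dataset, and the import of voc_detection_label_names assumes a ChainerCV version that provides that name (it is the label list referred to above).

from chainer import iterators

from chainercv.datasets import voc_detection_label_names
from chainercv.extensions import DetectionVOCEvaluator

it = iterators.SerialIterator(
    test_dataset, batch_size=2, repeat=False, shuffle=False)
evaluator = DetectionVOCEvaluator(
    it, model, use_07_metric=True,
    label_names=voc_detection_label_names)
result = evaluator()
# Keys in the result are prefixed by the evaluator (e.g. 'main/map'),
# so printing the whole dict avoids guessing the exact key.
print(result)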