tracklab.wrappers.reid package

Submodules

tracklab.wrappers.reid.bpbreid_api module

class tracklab.wrappers.reid.bpbreid_api.BPBReId(cfg, tracking_dataset, dataset, device, save_path, job_id, use_keypoints_visibility_scores_for_reid, training_enabled, batch_size)[source]

Bases: DetectionLevelModule

collate_fn()

Puts each data field into a tensor with outer dimension batch size
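This default stacking behaviour can be sketched as follows. This is a hypothetical illustration, not the actual tracklab implementation; the field names (`crop`, `score`) are made up for the example.

```python
import numpy as np

def collate_sketch(samples):
    """Sketch of the default collate behaviour: each sample is a dict of
    equally-shaped arrays, and every field is stacked along a new leading
    (batch) dimension."""
    batch = {}
    for key in samples[0]:
        batch[key] = np.stack([s[key] for s in samples], axis=0)
    return batch

# Two fake per-detection samples, each with a 3-channel 4x4 "crop".
samples = [{"crop": np.zeros((3, 4, 4)), "score": np.array(0.9)},
           {"crop": np.ones((3, 4, 4)), "score": np.array(0.7)}]
batched = collate_sketch(samples)
# batched["crop"].shape == (2, 3, 4, 4); batched["score"].shape == (2,)
```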

download_models(load_weights, pretrained_path, backbone)[source]
input_columns = ['bbox_ltwh']
output_columns = ['embeddings', 'visibility_scores', 'body_masks']
preprocess(image, detection: Series, metadata: Series)[source]

Adapts the default input to your specific case.

Parameters:
  • image – a numpy array of the current image

  • detection – a Series containing the detection to preprocess

  • metadata – additional information about the image

Returns:

input for the process function

Return type:

preprocessed_sample

process(batch, detections: DataFrame, metadatas: DataFrame)[source]

The main processing function. Runs on GPU.

Parameters:
  • batch – The batched outputs of preprocess

  • detections – The previous detections.

  • metadatas – The previous image metadatas.

Returns:

A DataFrame containing the new/updated detections.

The output can be a list of Series, a list of DataFrames, or a single DataFrame. The returned objects are aggregated automatically according to the name of each Series or the index of each DataFrame, so it is mandatory to name your Series and index your DataFrames correctly. The output overrides the previous detections that share the same name/index.

Return type:

output
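The naming rule above can be illustrated with a small pandas sketch. The column names and index values are made up for the example; the point is that aggregation boils down to an index-aligned assignment keyed on the Series name.

```python
import pandas as pd

# Hypothetical detections frame: one row per detection.
detections = pd.DataFrame({"bbox_ltwh": [[0, 0, 10, 20], [5, 5, 10, 20]]},
                          index=[101, 102])

# A correctly *named* Series: the name selects the detections column to
# create or override, the index says which detection rows it applies to.
embeddings = pd.Series([[0.1, 0.2], [0.3, 0.4]], index=[101, 102],
                       name="embeddings")

# Aggregation reduces to an index-aligned column assignment.
detections[embeddings.name] = embeddings
```

An unnamed Series (or a DataFrame with the wrong index) could not be matched to a column or to the right rows, which is why the naming requirement is mandatory.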

train()[source]

tracklab.wrappers.reid.bpbreid_dataset module

class tracklab.wrappers.reid.bpbreid_dataset.ReidDataset(tracking_dataset: TrackingDataset, reid_config, pose_model=None, masks_dir='', **kwargs)[source]

Bases: ImageDataset

ad_pid_column(gt_dets)[source]
annotations_dir = 'posetrack_data'
build_reid_set(tracking_set, reid_config, split, is_test_set)[source]

Build the ReID metadata for a given MOT dataset split. Only a subset of all MOT ground-truth detections is used for ReID; detections are selected according to the filtering criteria specified in the ‘reid_cfg’ config. Image crops and human parsing labels (masks) are generated only for the selected detections. If the config is changed and more detections are selected, crops and masks are generated only for those new detections.
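The filtering step can be sketched as below. This is a minimal hypothetical sketch, not the actual ‘reid_cfg’ criteria: the column names (`person_id`, `height`, `visibility`) and thresholds are assumptions made for the example.

```python
import pandas as pd

def select_for_reid(dets_df, min_h=50, min_vis=0.3, max_per_id=20):
    """Hypothetical filtering sketch: keep detections whose crop is large
    and visible enough, then cap the number of samples per identity.
    The real criteria live in the 'reid_cfg' config."""
    keep = dets_df[(dets_df["height"] >= min_h)
                   & (dets_df["visibility"] >= min_vis)]
    return keep.groupby("person_id").head(max_per_id)

# Toy ground-truth detections: two identities, four detections.
dets_df = pd.DataFrame({
    "person_id":  [1,   1,   1,   2],
    "height":     [60,  40,  80,  70],
    "visibility": [0.9, 0.9, 0.1, 0.8],
})
selected = select_for_reid(dets_df, min_h=50, min_vis=0.3, max_per_id=1)
# One detection survives per identity: the too-small and the occluded
# crops are dropped, then each identity is capped to one sample.
```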

dataset_dir = 'PoseTrack21'
gallery_filter(q_pid, q_camid, q_ann, g_pids, g_camids, g_anns)[source]

camid refers to the video id: gallery samples from a different video than the query sample are removed.
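The rule reduces to a boolean mask over the gallery, sketched here under the assumption that camids are plain integers (the pid/annotation arguments are omitted for brevity):

```python
import numpy as np

def gallery_filter_sketch(q_camid, g_camids):
    """Sketch of the video-id rule: since camid encodes the video id, a
    gallery sample is kept only when it comes from the same video as the
    query sample."""
    g_camids = np.asarray(g_camids)
    return g_camids == q_camid  # boolean mask of gallery samples to keep

# Query from video 3 against a gallery spanning videos 1, 3, 3 and 2.
mask = gallery_filter_sketch(3, [1, 3, 3, 2])
# Only the two video-3 gallery samples are kept.
```

This is the opposite of the usual person-ReID convention (which discards same-camera gallery samples): for tracking, matching across videos is meaningless, so only same-video candidates are evaluated.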

static get_masks_config(masks_dir)[source]
images_anns_filename = 'reid_crops_anns.json'
img_ext = '.jpg'
load_reid_annotations(gt_dets, reid_anns_filepath, columns)[source]
masks_anns_filename = 'reid_masks_anns.json'
masks_dirs = {'gaussian_joints': (10, False, '.npy', ['p1', 'p2', 'p3', 'p4', 'p5', 'p6', 'p7', 'p8', 'p9', 'p10', 'p11', 'p12', 'p13', 'p14', 'p15', 'p16']), 'gaussian_keypoints': (17, False, '.npy', ['p1', 'p2', 'p3', 'p4', 'p5', 'p6', 'p7', 'p8', 'p9', 'p10', 'p11', 'p12', 'p13', 'p14', 'p15', 'p16']), 'pose_on_img': (35, False, '.npy', ['p1', 'p2', 'p3', 'p4', 'p5', 'p6', 'p7', 'p8', 'p9', 'p10', 'p11', 'p12', 'p13', 'p14', 'p15', 'p16', 'p17', 'p18', 'p19', 'p20', 'p21', 'p22', 'p23', 'p24', 'p25', 'p26', 'p27', 'p28', 'p29', 'p30', 'p31', 'p32', 'p33', 'p34'])}
masks_ext = '.npy'
reid_anns_dir = 'anns'
reid_dir = 'reid'
reid_fig_dir = 'figures'
reid_images_dir = 'images'
reid_masks_dir = 'masks'
rescale_and_filter_keypoints(keypoints, bbox_ltwh, new_w, new_h)[source]
sample_detections_for_reid(dets_df, reid_cfg)[source]
save_reid_img_crops(gt_dets, save_path, set_name, reid_anns_filepath, metadatas_df, max_crop_size)[source]

Save to disk the image crops of all ground-truth detections used to build the ReID dataset, and create a JSON annotation file with the crops’ metadata.

save_reid_masks_crops(gt_dets, masks_save_path, fig_save_path, set_name, reid_anns_filepath, metadatas_df, fig_size, masks_size, mode='gaussian_keypoints')[source]

Save to disk the human parsing ground truth for each ReID crop, and create a JSON annotation file with the human parsing metadata.

to_torchreid_dataset_format(dataframes)[source]
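Torchreid image datasets expect each sample as an (img_path, pid, camid) tuple; a conversion from a detections DataFrame can be sketched as below. The column names (`img_path`, `pid`, `camid`) are assumptions made for this example, not necessarily the columns the real method consumes.

```python
import pandas as pd

def to_torchreid_tuples_sketch(df):
    """Hypothetical sketch: flatten a detections DataFrame into the list of
    (img_path, pid, camid) tuples that torchreid's ImageDataset expects."""
    return list(df[["img_path", "pid", "camid"]]
                .itertuples(index=False, name=None))

df = pd.DataFrame({"img_path": ["a.jpg", "b.jpg"],
                   "pid":      [0, 1],
                   "camid":    [0, 0]})
samples = to_torchreid_tuples_sketch(df)
# → [("a.jpg", 0, 0), ("b.jpg", 1, 0)]
```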
uniform_tracklet_sampling(_df, max_samples_per_id, column)[source]
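Uniform tracklet sampling can be sketched as follows: sort one tracklet's detections by the given column (e.g. the frame index) and keep at most `max_samples_per_id` rows spread evenly along the tracklet. This is an illustrative sketch, not the library's implementation, and the `frame` column name is an assumption.

```python
import numpy as np
import pandas as pd

def uniform_tracklet_sampling_sketch(df, max_samples, column):
    """Keep at most `max_samples` detections of a single tracklet, spaced
    uniformly along the values of `column` (e.g. the frame index)."""
    df = df.sort_values(column)
    if len(df) <= max_samples:
        return df
    # Evenly spaced positional indices from first to last detection.
    idx = np.linspace(0, len(df) - 1, max_samples).round().astype(int)
    return df.iloc[idx]

# A 10-frame tracklet sampled down to 3 detections.
tracklet = pd.DataFrame({"frame": range(10), "det_id": range(100, 110)})
sampled = uniform_tracklet_sampling_sketch(tracklet, 3, "frame")
```

Uniform spacing keeps the first and last detections and covers the whole tracklet, which gives the ReID model more appearance variety than taking the first N consecutive frames.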