tracklab.pipeline package

Submodules

tracklab.pipeline.datasetlevel_module module

tracklab.pipeline.detectionlevel_module module

class tracklab.pipeline.detectionlevel_module.DetectionLevelModule(batch_size: int)[source]

Bases: Module

Abstract class to implement a module that operates directly on detections.

This can be, for example, a top-down pose estimator or a re-identification module.

The functions to implement are:
  • __init__, which can take any configuration needed

  • preprocess

  • process

  • datapipe (optional) : returns an object which will be used to create the pipeline.

    (Only modify this if you know what you’re doing)

  • dataloader (optional) : returns a dataloader for the datapipe

You should also provide the following class properties:
  • input_columns : what info you need for the detections

  • output_columns : what info you will provide when called

  • collate_fn (optional) : the function that will be used for collating the inputs in a batch. (Default: the PyTorch collate function)

A description of the expected behavior is provided below.
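
For orientation, here is a minimal (hypothetical) skeleton of such a module. The column names bbox_ltwh and keypoints_xyc and the model object are illustrative assumptions, not names required by tracklab; concrete sketches of preprocess and process are given with the corresponding methods below.

from tracklab.pipeline.detectionlevel_module import DetectionLevelModule

class MyPoseEstimator(DetectionLevelModule):
    # Hypothetical column names, for illustration only.
    input_columns = ["bbox_ltwh"]
    output_columns = ["keypoints_xyc"]
    # collate_fn could optionally be set here; by default the PyTorch
    # collate function described below is used.

    def __init__(self, model, batch_size: int):
        super().__init__(batch_size=batch_size)
        self.model = model  # any top-down pose model

    def preprocess(self, image, detection, metadata):
        ...  # build one model-ready sample per detection (sketched below)

    def process(self, batch, detections, metadatas):
        ...  # run the model on the collated batch (sketched below)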

collate_fn()

Function that takes in a batch of data and puts the elements within the batch into a tensor with an additional outer dimension - batch size. The exact output type can be a torch.Tensor, a Sequence of torch.Tensor, a Collection of torch.Tensor, or left unchanged, depending on the input type. This is used as the default function for collation when batch_size or batch_sampler is defined in DataLoader.

Here is the general input type (based on the type of the element within the batch) to output type mapping:

  • torch.Tensor -> torch.Tensor (with an added outer dimension batch size)

  • NumPy Arrays -> torch.Tensor

  • float -> torch.Tensor

  • int -> torch.Tensor

  • str -> str (unchanged)

  • bytes -> bytes (unchanged)

  • Mapping[K, V_i] -> Mapping[K, default_collate([V_1, V_2, …])]

  • NamedTuple[V1_i, V2_i, …] -> NamedTuple[default_collate([V1_1, V1_2, …]), default_collate([V2_1, V2_2, …]), …]

  • Sequence[V1_i, V2_i, …] -> Sequence[default_collate([V1_1, V1_2, …]), default_collate([V2_1, V2_2, …]), …]

Parameters:

batch – a single batch to be collated

Examples

>>> # Example with a batch of `int`s:
>>> default_collate([0, 1, 2, 3])
tensor([0, 1, 2, 3])
>>> # Example with a batch of `str`s:
>>> default_collate(['a', 'b', 'c'])
['a', 'b', 'c']
>>> # Example with `Map` inside the batch:
>>> default_collate([{'A': 0, 'B': 1}, {'A': 100, 'B': 100}])
{'A': tensor([  0, 100]), 'B': tensor([  1, 100])}
>>> # Example with `NamedTuple` inside the batch:
>>> # xdoctest: +SKIP
>>> Point = namedtuple('Point', ['x', 'y'])
>>> default_collate([Point(0, 0), Point(1, 1)])
Point(x=tensor([0, 1]), y=tensor([0, 1]))
>>> # Example with `Tuple` inside the batch:
>>> default_collate([(0, 1), (2, 3)])
[tensor([0, 2]), tensor([1, 3])]
>>> # Example with `List` inside the batch:
>>> default_collate([[0, 1], [2, 3]])
[tensor([0, 2]), tensor([1, 3])]
>>> # Two options to extend `default_collate` to handle specific type
>>> # Option 1: Write custom collate function and invoke `default_collate`
>>> def custom_collate(batch):
...     elem = batch[0]
...     if isinstance(elem, CustomType):  # Some custom condition
...         return ...
...     else:  # Fall back to `default_collate`
...         return default_collate(batch)
>>> # Option 2: In-place modify `default_collate_fn_map`
>>> def collate_customtype_fn(batch, *, collate_fn_map=None):
...     return ...
>>> default_collate_fn_map.update({CustomType: collate_customtype_fn})
>>> default_collate(batch)  # Handle `CustomType` automatically
dataloader(engine: TrackingEngine)[source]
property datapipe
input_columns = None
output_columns = None
abstract preprocess(image, detection: Series, metadata: Series) → Any[source]

Adapts the default input to your specific case.

Parameters:
  • image – a numpy array of the current image

  • detection – a Series containing a single detection from the current image

  • metadata – additional information about the image

Returns:

input for the process function

Return type:

preprocessed_sample
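
For instance, a top-down pose estimator would typically crop each detection out of the frame and resize it to a fixed input size, so that samples can be stacked by the collate function. A minimal sketch, assuming a hypothetical bbox_ltwh column holding a (left, top, width, height) array:

import torch
import torch.nn.functional as F

def preprocess(self, image, detection, metadata):
    # bbox_ltwh is a hypothetical column: (left, top, width, height).
    l, t, w, h = detection["bbox_ltwh"].astype(int)
    crop = image[t:t + h, l:l + w]
    # HWC uint8 array -> CHW float tensor, resized to a fixed shape so the
    # default collate function can stack the samples into one batch.
    crop = torch.from_numpy(crop).permute(2, 0, 1).float().unsqueeze(0)
    crop = F.interpolate(crop, size=(256, 192), mode="bilinear",
                         align_corners=False)
    return crop.squeeze(0)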

abstract process(batch: Any, detections: DataFrame, metadatas: DataFrame)[source]

The main processing function. Runs on GPU.

Parameters:
  • batch – The batched outputs of preprocess

  • detections – The previous detections.

  • metadatas – The previous image metadatas.

Returns:

A DataFrame containing the new/updated detections.

The returned value can be a list of Series, a list of DataFrames, or a single DataFrame. The returned objects are aggregated automatically according to the name of each Series or the index of each DataFrame, so it is mandatory to name your Series and index your DataFrames correctly. The output overrides the previous detections with the same name/index.

Return type:

output
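
Because outputs are merged back by Series name / DataFrame index, a typical process keeps the index of the incoming detections. A minimal sketch, with a hypothetical keypoints_xyc output column and model call:

import pandas as pd

def process(self, batch, detections, metadatas):
    # batch is the collated output of preprocess: one sample per detection.
    keypoints = self.model(batch)  # hypothetical model call
    # Reuse the index of the incoming detections so tracklab can merge
    # the new column back onto the right rows.
    return pd.DataFrame(
        {"keypoints_xyc": list(keypoints.cpu().numpy())},
        index=detections.index,
    )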

tracklab.pipeline.imagelevel_module module

class tracklab.pipeline.imagelevel_module.ImageLevelModule(batch_size: int)[source]

Bases: Module

Abstract class to implement a module that operates directly on images.

This can be, for example, a bounding box detector or a bottom-up pose estimator (which outputs keypoints directly).

The functions to implement are:
  • __init__, which can take any configuration needed

  • preprocess

  • process

  • datapipe (optional) : returns an object which will be used to create the pipeline.

    (Only modify this if you know what you’re doing)

  • dataloader (optional) : returns a dataloader for the datapipe

You should also provide the following class properties:
  • input_columns : what info you need for the detections

  • output_columns : what info you will provide when called

  • collate_fn (optional) : the function that will be used for collating the inputs in a batch. (Default: the PyTorch collate function)

A description of the expected behavior is provided below.
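
For orientation, a bounding box detector could be sketched as follows; the detector object and the bbox_ltwh / bbox_conf column names are illustrative assumptions, not tracklab requirements. Concrete sketches of preprocess and process are given with the corresponding methods below.

from tracklab.pipeline.imagelevel_module import ImageLevelModule

class MyDetector(ImageLevelModule):
    input_columns = []  # a detector needs nothing from prior detections
    output_columns = ["bbox_ltwh", "bbox_conf"]  # hypothetical names

    def __init__(self, detector, batch_size: int):
        super().__init__(batch_size=batch_size)
        self.detector = detector

    def preprocess(self, image, detections, metadata):
        ...  # turn one frame into a model-ready sample (sketched below)

    def process(self, batch, detections, metadatas):
        ...  # create one detection row per predicted box (sketched below)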

collate_fn()

Function that takes in a batch of data and puts the elements within the batch into a tensor with an additional outer dimension - batch size. The exact output type can be a torch.Tensor, a Sequence of torch.Tensor, a Collection of torch.Tensor, or left unchanged, depending on the input type. This is used as the default function for collation when batch_size or batch_sampler is defined in DataLoader.

Here is the general input type (based on the type of the element within the batch) to output type mapping:

  • torch.Tensor -> torch.Tensor (with an added outer dimension batch size)

  • NumPy Arrays -> torch.Tensor

  • float -> torch.Tensor

  • int -> torch.Tensor

  • str -> str (unchanged)

  • bytes -> bytes (unchanged)

  • Mapping[K, V_i] -> Mapping[K, default_collate([V_1, V_2, …])]

  • NamedTuple[V1_i, V2_i, …] -> NamedTuple[default_collate([V1_1, V1_2, …]), default_collate([V2_1, V2_2, …]), …]

  • Sequence[V1_i, V2_i, …] -> Sequence[default_collate([V1_1, V1_2, …]), default_collate([V2_1, V2_2, …]), …]

Parameters:

batch – a single batch to be collated

Examples

>>> # Example with a batch of `int`s:
>>> default_collate([0, 1, 2, 3])
tensor([0, 1, 2, 3])
>>> # Example with a batch of `str`s:
>>> default_collate(['a', 'b', 'c'])
['a', 'b', 'c']
>>> # Example with `Map` inside the batch:
>>> default_collate([{'A': 0, 'B': 1}, {'A': 100, 'B': 100}])
{'A': tensor([  0, 100]), 'B': tensor([  1, 100])}
>>> # Example with `NamedTuple` inside the batch:
>>> # xdoctest: +SKIP
>>> Point = namedtuple('Point', ['x', 'y'])
>>> default_collate([Point(0, 0), Point(1, 1)])
Point(x=tensor([0, 1]), y=tensor([0, 1]))
>>> # Example with `Tuple` inside the batch:
>>> default_collate([(0, 1), (2, 3)])
[tensor([0, 2]), tensor([1, 3])]
>>> # Example with `List` inside the batch:
>>> default_collate([[0, 1], [2, 3]])
[tensor([0, 2]), tensor([1, 3])]
>>> # Two options to extend `default_collate` to handle specific type
>>> # Option 1: Write custom collate function and invoke `default_collate`
>>> def custom_collate(batch):
...     elem = batch[0]
...     if isinstance(elem, CustomType):  # Some custom condition
...         return ...
...     else:  # Fall back to `default_collate`
...         return default_collate(batch)
>>> # Option 2: In-place modify `default_collate_fn_map`
>>> def collate_customtype_fn(batch, *, collate_fn_map=None):
...     return ...
>>> default_collate_fn_map.update({CustomType: collate_customtype_fn})
>>> default_collate(batch)  # Handle `CustomType` automatically
dataloader(engine: TrackingEngine)[source]
property datapipe
input_columns = None
output_columns = None
abstract preprocess(image, detections: DataFrame, metadata: Series) → Any[source]

Adapts the default input to your specific case.

Parameters:
  • image – a numpy array of the current image

  • detections – a DataFrame containing all the detections pertaining to a single image

  • metadata – additional information about the image

Returns:

input for the process function

Return type:

preprocessed_sample
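
An image-level preprocess often just resizes the frame to a fixed shape and records whatever is needed (for example the scale factor) to map model outputs back to the original image. A minimal sketch; the dict layout and the cv2 resize are illustrative assumptions:

import cv2
import torch

def preprocess(self, image, detections, metadata):
    h, w = image.shape[:2]
    resized = cv2.resize(image, (640, 640))  # fixed size so samples collate
    return {
        "image": torch.from_numpy(resized).permute(2, 0, 1).float(),
        # Keep the scale so boxes can be mapped back in process().
        "scale": torch.tensor([w / 640.0, h / 640.0]),
    }

Returning a Mapping plays well with the default collate function described above, which collates each key separately.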

abstract process(batch: Any, detections: DataFrame, metadatas: DataFrame)[source]

The main processing function. Runs on GPU.

Parameters:
  • batch – The batched outputs of preprocess

  • detections – The previous detections.

  • metadatas – The previous image metadatas.

Returns:

Either a DataFrame containing the new/updated detections, or a tuple containing detections and metadatas (in that order).

The returned detections can be a list of Series, a list of DataFrames, or a single DataFrame. The returned objects are aggregated automatically according to the name of each Series or the index of each DataFrame, so it is mandatory to name your Series and index your DataFrames correctly. The output overrides the previous detections with the same name/index.

Return type:

output
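
Since an image-level module may also update the image metadata, process can return a (detections, metadatas) tuple. A minimal sketch, with a hypothetical detector call and column names; a real module would also assign a unique index to the new detection rows:

import pandas as pd

def process(self, batch, detections, metadatas):
    # Hypothetical detector returning per-image boxes and scores.
    boxes_per_image, scores_per_image = self.detector(batch["image"])
    rows = []
    for image_id, boxes, scores in zip(metadatas.index,
                                       boxes_per_image, scores_per_image):
        for box, score in zip(boxes, scores):
            rows.append({"image_id": image_id,
                         "bbox_ltwh": box, "bbox_conf": float(score)})
    new_detections = pd.DataFrame(rows)
    # Optionally update the image metadata as well and return both.
    metadatas = metadatas.assign(
        num_detections=[len(b) for b in boxes_per_image])
    return new_detections, metadatas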

tracklab.pipeline.module module

class tracklab.pipeline.module.MetaModule(name, bases, namespace, **kwargs)[source]

Bases: ABCMeta

property level
property name
class tracklab.pipeline.module.Module[source]

Bases: object

forget_columns = []
get_input_columns(level)[source]
get_output_columns(level)[source]
input_columns = None
property level
property name
output_columns = None
training_enabled = False
validate_input(dataframe)[source]
validate_output(dataframe)[source]
class tracklab.pipeline.module.Pipeline(models: List[Module])[source]

Bases: object

is_empty()[source]
validate(load_columns: dict[str, set])[source]
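A Pipeline wraps an ordered list of modules, and validate checks that each module’s input_columns are produced by an earlier module or already loaded. A hypothetical usage sketch, reusing the example modules from above; the load_columns keys are a guess and depend on your dataset:

from tracklab.pipeline.module import Pipeline

# `detector` and `model` stand in for your actual models.
pipeline = Pipeline(models=[
    MyDetector(detector, batch_size=8),
    MyPoseEstimator(model, batch_size=32),
])
assert not pipeline.is_empty()
# Hypothetical load_columns, listing the columns already available per level:
# pipeline.validate({"image": set(), "detection": set()})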
class tracklab.pipeline.module.Skip(**kwargs)[source]

Bases: Module

property name

tracklab.pipeline.videolevel_module module

class tracklab.pipeline.videolevel_module.VideoLevelModule[source]

Bases: Module

Abstract class to implement a module that operates on whole videos, image per image.

This can be, for example, an offline tracker or a video visualizer (by implementing process_video directly), or an online tracker (by implementing preprocess and process).

The functions to implement are:
  • __init__, which can take any configuration needed

  • process

You should also provide the following class properties:
  • input_columns : what info you need for the detections

  • output_columns : what info you will provide when called

A description of the expected behavior is provided below.

input_columns = None
output_columns = None
abstract process(detections: DataFrame, metadatas: DataFrame)[source]

The main processing function. Runs on GPU.

Parameters:
  • detections – The previous detections.

  • metadatas – The previous image metadatas.

Returns:

Either a DataFrame containing the new/updated detections, or a tuple containing detections and metadatas (in that order).

The returned detections can be a list of Series, a list of DataFrames, or a single DataFrame. The returned objects are aggregated automatically according to the name of each Series or the index of each DataFrame, so it is mandatory to name your Series and index your DataFrames correctly. The output overrides the previous detections with the same name/index.

Return type:

output
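
As an offline, whole-video illustration, a module could smooth each track’s boxes once all frames are available. A minimal sketch; the track_id and bbox_ltwh columns and the 3-frame moving average are illustrative assumptions:

import numpy as np
from tracklab.pipeline.videolevel_module import VideoLevelModule

class TrackSmoother(VideoLevelModule):
    input_columns = ["track_id", "bbox_ltwh"]
    output_columns = ["bbox_ltwh"]

    def process(self, detections, metadatas):
        def smooth(group):
            boxes = np.stack(group["bbox_ltwh"].to_numpy())  # (n, 4)
            # Simple 3-frame moving average over each track's boxes.
            kernel = np.ones(3) / 3.0
            smoothed = np.apply_along_axis(
                lambda col: np.convolve(col, kernel, mode="same"), 0, boxes)
            group = group.copy()
            group["bbox_ltwh"] = list(smoothed)
            return group

        # Keep the original index so the output overrides the input rows.
        return detections.groupby("track_id", group_keys=False).apply(smooth)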