tracklab.engine package
Submodules
tracklab.engine.batch module
tracklab.engine.engine module
- class tracklab.engine.engine.TrackingEngine(modules: Pipeline, tracker_state: TrackerState, num_workers: int, callbacks: Dict[Callback] = None)[source]
Bases:
ABC
Manages the full tracking pipeline.
After initializing the TrackingEngine, users should call track_dataset, which tracks each video in turn. The call stack looks like:

track_dataset
  video_step
    -> detect_multi_step
    -> detect_single_step
    -> reid_step
    -> track_step

Implementors of TrackingEngine need to implement at least video_loop(). For example, an online engine will simply call each step in turn for every image in a video, whereas an offline engine might run each step on all of the images before moving on to the next step in the pipeline (a sketch of an online-style subclass follows the parameter list below). You should take care to implement the different callback hooks by calling:
self.fabric.call("a_callback_function", *args, **kwargs)
- Parameters:
detect_multi_model – The bbox/pose detection model
detect_single_model – The pose detection model
reid_model – The re-identification (ReID) model
track_model – The tracking model
tracker_state – Contains the pipeline inputs and outputs
callbacks – Callbacks invoked at the different pipeline steps
num_workers – Number of workers for preprocessing
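For illustration, here is a minimal sketch of an online-style subclass. Only the step names, the video_loop() signature and the self.fabric.call hook pattern come from this page; the frame-iteration helper, the per-step signatures and the callback hook names are assumptions.

import pandas as pd

from tracklab.engine.engine import TrackingEngine


class MyOnlineEngine(TrackingEngine):
    """Illustrative online engine: runs every step in turn for each image."""

    def video_loop(self, tracker_state, video_metadata, video_id) -> pd.DataFrame:
        # Hypothetical hook names; check the Callback API for the real ones.
        self.fabric.call("on_video_loop_start", video_metadata=video_metadata)

        detections = []
        # `iter_frames` is a hypothetical helper; the real engine gets its
        # image batches from the tracker_state / dataset machinery.
        for batch in self.iter_frames(tracker_state, video_id):
            dets = self.detect_multi_step(batch)          # bbox/pose detection
            dets = self.detect_single_step(batch, dets)   # per-detection pose
            dets = self.reid_step(batch, dets)            # appearance features
            dets = self.track_step(batch, dets)           # association into tracks
            detections.append(dets)

        self.fabric.call("on_video_loop_end", video_id=video_id)
        return pd.concat(detections) if detections else pd.DataFrame()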
- default_step(batch: Any, task: str, detections: DataFrame, image_pred: DataFrame, **kwargs)[source]
- abstract video_loop(tracker_state: TrackerState, video_metadata: Series, video_id: int) → DataFrame [source]
Run tracking on one video.
The pipeline for each video looks like:
detect_multi -> (detect_single) -> reid -> track
- Parameters:
tracker_state (TrackerState) – tracker state object
video_metadata (pd.Series) – metadata for the video
video_id (int) – id of the video
- Returns:
A dataframe of all detections.
- Return type:
pd.DataFrame
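As a usage sketch, a concrete engine (for example the OfflineTrackingEngine documented below) is built from the constructor arguments listed above and then driven by track_dataset. The Pipeline, TrackerState and callback objects are assumed to be constructed elsewhere, and track_dataset is assumed to take no arguments:

from tracklab.engine.offline import OfflineTrackingEngine

# `pipeline`, `tracker_state` and `callbacks` are assumed to have been built
# elsewhere (a Pipeline, a TrackerState and a dict of Callback objects).
engine = OfflineTrackingEngine(
    modules=pipeline,
    tracker_state=tracker_state,
    num_workers=4,
    callbacks=callbacks,
)

# Tracks every video in the dataset in turn, calling video_loop() once per video.
engine.track_dataset()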
tracklab.engine.offline module
- class tracklab.engine.offline.OfflineTrackingEngine(modules: Pipeline, tracker_state: TrackerState, num_workers: int, callbacks: Dict[Callback] = None)[source]
Bases:
TrackingEngine
- video_loop(tracker_state, video, video_id)[source]
Run tracking on one video.
The pipeline for each video looks like:
detect_multi -> (detect_single) -> reid -> track
- Parameters:
tracker_state (TrackerState) – tracker state object
video_metadata (pd.Series) – metadata for the video
video_id (int) – id of the video
- Returns:
A dataframe of all detections.
- Return type:
pd.DataFrame
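For intuition, an offline-style video_loop (under the same assumptions as the online sketch above: hypothetical per-step signatures and a hypothetical frame-loading helper) runs each stage over the whole video before moving on to the next stage. This mirrors the description of offline engines given earlier, not the actual OfflineTrackingEngine implementation.

import pandas as pd

from tracklab.engine.engine import TrackingEngine


class MyOfflineEngine(TrackingEngine):
    """Illustrative offline ordering: one stage at a time over all frames."""

    def video_loop(self, tracker_state, video_metadata, video_id) -> pd.DataFrame:
        # `load_frames` is a hypothetical helper returning every image batch
        # of the video up front.
        batches = list(self.load_frames(tracker_state, video_id))

        # Detect on every frame first ...
        detections = pd.concat([self.detect_multi_step(b) for b in batches])
        # ... then run the remaining stages once over all accumulated detections.
        detections = self.detect_single_step(batches, detections)
        detections = self.reid_step(batches, detections)
        return self.track_step(batches, detections)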
tracklab.engine.pipelined module
- class tracklab.engine.pipelined.PipelinedTrackingEngine(**kwargs)[source]
Bases:
TrackingEngine
Pipelined implementation of an online tracking engine.
- video_loop(video, video_id) → DataFrame [source]
Run tracking on one video.
The pipeline for each video looks like:
detect_multi -> (detect_single) -> reid -> track
- Parameters:
tracker_state (TrackerState) – tracker state object
video_metadata (pd.Series) – metadata for the video
video_id (int) – id of the video
- Returns:
A dataframe of all detections.
- Return type:
pd.DataFrame
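As a rough conceptual picture of what "pipelined" means here (not the library's actual implementation), each stage can be thought of as a worker fed by a queue, so that later frames are still being detected while earlier frames are already being re-identified or tracked:

import queue
import threading


def run_stage(fn, inbox, outbox):
    """Consume items from `inbox`, apply `fn`, and forward results to `outbox`."""
    while True:
        item = inbox.get()
        if item is None:      # poison pill: propagate and stop
            outbox.put(None)
            return
        outbox.put(fn(item))


# Dummy stand-ins for the detect -> reid -> track stages on a single frame.
stages = [lambda frame: frame, lambda frame: frame, lambda frame: frame]

pipes = [queue.Queue() for _ in range(len(stages) + 1)]
workers = [
    threading.Thread(target=run_stage, args=(fn, pipes[i], pipes[i + 1]))
    for i, fn in enumerate(stages)
]
for w in workers:
    w.start()

# Frames keep entering the front of the pipeline while earlier frames are
# still being processed further down it.
for frame_id in range(5):
    pipes[0].put(frame_id)
pipes[0].put(None)

results = []
while (item := pipes[-1].get()) is not None:
    results.append(item)
for w in workers:
    w.join()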