Hi, I just learned about DALI and wanted to ask if it was the correct tool for my use case.
I have a dataset of videos and I want to load them with a `DataLoader` in PyTorch.
I work on multiple GPUs.
My pipeline goes like this:
1. Get `file_name` by accessing `fnames[index]` (`fnames: List[str]`).
2. Get the number of frames of the video stored at `file_name` (the number of frames might differ for each video).
3. Compute the indices of the `T: int` frames I want to extract. `T` is a constant and so is the same for each video, but the indices may differ. (With `T=3` frames uniformly sampled, a video of 101 frames gives `[0,50,100]`, while a video of 201 frames gives `[0,100,200]`.)
4. Extract those `T` frames from the video (hopefully without having to decode the entire video).
5. Convert this into a PyTorch tensor of shape `T,C,H,W`.
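The uniform-sampling step above can be sketched with NumPy (a minimal illustration, not DALI-specific; `sample_indices` is a hypothetical helper name):

```python
import numpy as np

def sample_indices(n_frames: int, T: int) -> np.ndarray:
    """Pick T frame indices spread evenly over a video of n_frames frames."""
    # linspace includes the first and last frame and spaces the rest evenly
    return np.linspace(0, n_frames - 1, T).round().astype(int)

print(sample_indices(101, 3).tolist())  # [0, 50, 100]
print(sample_indices(201, 3).tolist())  # [0, 100, 200]
```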
Now I want the batched version of this in a distributed manner: a pipeline that gives me frames of shape `B,T,C,H,W`.
I am currently using a custom `DataLoader`, and in `__getitem__(index: int) -> torch.Tensor` I call a `load_video(fname: str, rel_indices: np.ndarray) -> torch.Tensor` that can be implemented with different engines (Decord, torchvision.io, ...), all of which are too slow.
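For reference, a minimal sketch of that per-item flow, with a dummy decoder standing in for the real engines (Decord, torchvision.io, ...); the names and shapes are illustrative, and NumPy is used in place of torch so the sketch stays self-contained:

```python
import numpy as np

def load_video(fname: str, rel_indices: np.ndarray,
               C: int = 3, H: int = 8, W: int = 8) -> np.ndarray:
    """Dummy stand-in: a real engine would seek to each index and decode
    only those frames, returning a (T, C, H, W) array."""
    return np.zeros((len(rel_indices), C, H, W), dtype=np.uint8)

frames = load_video("video_0.mp4", np.array([0, 50, 100]))
print(frames.shape)  # (3, 3, 8, 8)
```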
If I understand correctly, the setup of DALI is different as it directly processes batches?
Do you think DALI could be useful in my use case, and if so, how could I implement this? Keep in mind that I am working in a distributed setup with multiple GPUs (and potentially multiple nodes later on) and that the number of frames extracted `T` is significantly smaller than the number of frames available.
Thanks!
Check for duplicates
I have searched the open bugs/issues and have found no duplicates for this bug report
Thank you for reaching out. I'm afraid DALI doesn't support the sampling pattern you ask for. What it can do is sample videos with a constant step and stride, while in your case you want a fixed number of frames distributed evenly over each video.
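For contrast, a minimal NumPy sketch of the two patterns: constant-stride sampling (the kind of pattern DALI's video reader exposes through parameters such as `sequence_length` and `stride`) versus the fixed-count sampling asked for here. The helper names are illustrative, not part of any API:

```python
import numpy as np

def stride_sample(start: int, sequence_length: int, stride: int) -> np.ndarray:
    # Constant-stride sampling: frame count and spacing are fixed up front,
    # independent of how long the video actually is.
    return start + stride * np.arange(sequence_length)

def uniform_sample(n_frames: int, T: int) -> np.ndarray:
    # Fixed-count sampling: T frames spread evenly over the whole video,
    # so the spacing adapts to each video's length.
    return np.linspace(0, n_frames - 1, T).round().astype(int)

print(stride_sample(0, 3, 50).tolist())  # [0, 50, 100]  (same for every video)
print(uniform_sample(101, 3).tolist())   # [0, 50, 100]
print(uniform_sample(201, 3).tolist())   # [0, 100, 200]
```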