
简体中文 | English

BMN


Contents

  • Introduction
  • Data
  • Train
  • Test
  • Reference

Introduction

The BMN model contains three modules: the Base Module handles the input feature sequence and outputs a feature sequence shared by the following two modules; the Temporal Evaluation Module evaluates the starting and ending probabilities of each location in the video to generate boundary probability sequences; and the Proposal Evaluation Module contains the BM layer, which transfers the feature sequence into a BM feature map, followed by a series of 3D and 2D convolutional layers that generate the BM confidence map.


BMN Overview
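As a rough illustration of this three-module layout (not the actual implementation behind configs/localization/bmn.yaml), the sketch below wires a Base, Temporal Evaluation, and Proposal Evaluation module together with paddle.nn layers. The class name BMNSketch, all channel and kernel sizes, and the simplified stand-in for the BM layer are assumptions for illustration only.

```python
# Illustrative sketch only: layer sizes and the simplified "BM layer" below are
# assumptions, not the layers defined in configs/localization/bmn.yaml.
import paddle
import paddle.nn as nn
import paddle.nn.functional as F


class BMNSketch(nn.Layer):
    def __init__(self, feat_dim=400, hidden=128, tscale=100, sample=8):
        super().__init__()
        self.tscale, self.sample = tscale, sample
        # Base Module: shared 1D convolutions over the input feature sequence.
        self.base = nn.Sequential(
            nn.Conv1D(feat_dim, hidden, 3, padding=1), nn.ReLU(),
            nn.Conv1D(hidden, hidden, 3, padding=1), nn.ReLU())
        # Temporal Evaluation Module: start/end probability for every location.
        self.tem = nn.Conv1D(hidden, 2, 1)
        # Proposal Evaluation Module: a 3D conv collapses the sampling axis,
        # then a 2D conv produces the 2-channel BM confidence map.
        self.pem_3d = nn.Conv3D(hidden, hidden, (sample, 1, 1))
        self.pem_2d = nn.Conv2D(hidden, 2, 1)

    def forward(self, x):                      # x: [N, feat_dim, T]
        feat = self.base(x)                    # shared features [N, hidden, T]
        start_end = F.sigmoid(self.tem(feat))  # boundary probabilities [N, 2, T]
        # Stand-in for the real BM layer: BMN samples features for every
        # (start, duration) pair; here we simply broadcast the sequence into a
        # [N, hidden, sample, duration, T] map to keep the sketch short.
        bm_feat = paddle.tile(feat.unsqueeze(2).unsqueeze(3),
                              [1, 1, self.sample, self.tscale, 1])
        bm_map = self.pem_2d(self.pem_3d(bm_feat).squeeze(2))  # [N, 2, D, T]
        return start_end, bm_map


# Example forward pass with random 400-dim features over 100 temporal locations.
model = BMNSketch()
start_end, bm_map = model(paddle.randn([1, 400, 100]))
```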

Data

We use the ActivityNet dataset to train this model. For data preparation, please refer to ActivityNet dataset.

Train

You can start training with the following command:

export CUDA_VISIBLE_DEVICES=0,1,2,3

python -B -m paddle.distributed.launch --gpus="0,1,2,3"  --log_dir=log_bmn main.py  --validate -c configs/localization/bmn.yaml
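If you only have a single card available, a minimal variant is sketched below. It assumes main.py can also be launched directly without paddle.distributed.launch and that the -o override shown in the test command below works for training as well; the batch size of 4 is only an example, adjust it to your GPU memory:

export CUDA_VISIBLE_DEVICES=0

python main.py --validate -c configs/localization/bmn.yaml -o DATASET.batch_size=4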

Test

You can start testing with the following command:

python main.py --test -c configs/localization/bmn.yaml -w output/BMN/BMN_epoch_00009.pdparams -o DATASET.batch_size=1
  • For now, testing is only supported with a single card and batch_size=1.

  • Please download the activity_net_1_3_new.json label file and set METRIC.ground_truth_filename in the config file to its path (a quick sanity check of the downloaded file is sketched after this list).

  • The -w argument is used to specify the model path. You can download our trained model from BMN.pdparams.
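As mentioned in the list above, it is worth confirming that the label file downloaded correctly before running the test. The snippet below only checks that it parses as JSON and is non-empty; the file path is an assumed example location, so use the same path you set for METRIC.ground_truth_filename.

```python
# Sanity-check the downloaded label file before pointing the config at it.
# The path below is an assumed example; adjust it to wherever you saved the file.
import json

with open("activity_net_1_3_new.json", "r", encoding="utf-8") as f:
    ground_truth = json.load(f)

# This README does not document the file's internal layout, so only report its size.
print(f"loaded ground truth: top-level type={type(ground_truth).__name__}, entries={len(ground_truth)}")
```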

Test accuracy on ActivityNet 1.3:

| AR@1 | AR@5 | AR@10 | AR@100 | AUC |
| :---: | :---: | :---: | :---: | :---: |
| 33.26 | 49.48 | 56.86 | 75.19 | 67.23% |

Reference

  • BMN: Boundary-Matching Network for Temporal Action Proposal Generation, Tianwei Lin, Xiao Liu, Xin Li, Errui Ding, Shilei Wen.