Pre-training for Video Understanding Challenge

Track 1 Leaderboard

#  Team Name        BLEU@4  METEOR  CIDEr-D  SPICE
1  Bigsea           26.33   21.07   35.32    7.88
2  starwar          25.70   21.39   33.91    7.60
3  CV_MM            22.59   20.11   29.40    7.22

Track 2 Leaderboard

#  Team Name        Top-1 accuracy (%)
1  AutoX-4Paradigm  62.39
2  CV_MM            59.87
3  Bigsea           57.65

Metrics

For the downstream task of video captioning, we evaluate on the testing set of the MSR-VTT dataset and publish the automatic metrics BLEU@4, METEOR, CIDEr-D, and SPICE on the leaderboard.
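
One common way to compute these captioning metrics is the COCO caption evaluation toolkit. The sketch below assumes the pycocoevalcap package is installed (METEOR and SPICE additionally require a Java runtime); the video id and captions are illustrative placeholders, not challenge data.

```python
# Minimal sketch of captioning-metric computation, assuming the
# pycocoevalcap toolkit is installed. Ids and captions are placeholders.
from pycocoevalcap.bleu.bleu import Bleu
from pycocoevalcap.meteor.meteor import Meteor
from pycocoevalcap.cider.cider import Cider
from pycocoevalcap.spice.spice import Spice

# Both mappings go from a video id to a list of tokenized captions:
# references may hold several ground-truth sentences, predictions hold one.
references = {"video0": ["a man is playing a guitar", "a person plays guitar"]}
predictions = {"video0": ["a man plays a guitar"]}

scorers = [
    ("BLEU@4", Bleu(4)),
    ("METEOR", Meteor()),
    # Cider() stands in for the CIDEr-D column; the exact variant
    # depends on the toolkit version.
    ("CIDEr-D", Cider()),
    ("SPICE", Spice()),
]
for name, scorer in scorers:
    score, _ = scorer.compute_score(references, predictions)
    if isinstance(score, list):
        score = score[-1]  # Bleu(4) returns BLEU@1..BLEU@4; keep BLEU@4
    print(f"{name}: {score:.4f}")
```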

For the downstream task of video categorization, we report top-1 accuracy on the testing set of the downstream dataset.
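
Top-1 accuracy is the fraction of videos whose highest-scoring predicted category matches the ground-truth label. A minimal sketch, using hypothetical scores and labels:

```python
import numpy as np

def top1_accuracy(logits: np.ndarray, labels: np.ndarray) -> float:
    """Fraction of samples whose top-scoring class equals the label.

    logits: (num_videos, num_classes) array of per-class scores.
    labels: (num_videos,) array of ground-truth class indices.
    """
    preds = logits.argmax(axis=1)  # index of the top-scoring class per video
    return float((preds == labels).mean())

# Illustrative example: 3 videos, 4 categories.
logits = np.array([[0.10, 0.70, 0.10, 0.10],
                   [0.30, 0.20, 0.40, 0.10],
                   [0.25, 0.25, 0.25, 0.25]])
labels = np.array([1, 2, 3])
print(top1_accuracy(logits, labels))  # 2 of 3 correct -> 0.6667
```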



Citations

@article{autogif2020,
  title={Auto-captions on GIF: A Large-scale Video-sentence Dataset for Vision-language Pre-training},
  author={Yingwei Pan and Yehao Li and Jianjie Luo and Jun Xu and Ting Yao and Tao Mei},
  journal={arXiv preprint arXiv:2007.02375},
  year={2020}
}

@inproceedings{msrvtt,
  title={MSR-VTT: A Large Video Description Dataset for Bridging Video and Language},
  author={Jun Xu and Tao Mei and Ting Yao and Yong Rui},
  booktitle={Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
  year={2016}
}