Please use this identifier to cite or link to this item:
https://hdl.handle.net/11147/6802
Title: The visual object tracking VOT2013 challenge results
Authors: Kristan, Matej; Pflugfelder, Roman; Leonardis, Ales; Matas, Jiri; Porikli, Fatih; Cehovin, Luka; Nebehay, Georg; Fernandez, Gustavo; Vojir, Tomas; Gatt, Adam; Khajenezhad, Ahmad; Salahledin, Ahmed; Soltani-Farani, Ali; Zarezade, Ali; Petrosino, Alfredo; Milton, Anthony; Bozorgtabar, Behzad; Li, Bo; Chan, Chee Seng; Heng, Cher Keng; Ward, Dale; Kearney, David; Monekosso, Dorothy; Karaimer, Hakkı Can; Rabiee, Hamid R.; Zhu, Jianke; Gao, Jin; Xiao, Jingjing; Zhang, Junge; Xing, Junliang; Huang, Kaiqi; Lebeda, Karel; Cao, Lijun; Maresca, Mario Edoardo; Lim, Mei Kuan; El Helw, Mohamed; Felsberg, Michael; Remagnino, Paolo; Bowden, Richard; Goecke, Roland; Stolkin, Rustam; Lim, Samantha YueYing; Maher, Sara; Poullot, Sebastien; Wong, Sebastien; Satoh, Shin'ichi; Chen, Weihua; Hu, Weiming; Zhang, Xiaoqin; Li, Yang; Zhi Heng, Niu
Keywords: Visual object tracking challenge; VOT2013; Object appearance
Publisher: Institute of Electrical and Electronics Engineers Inc.
Abstract: Visual tracking has attracted significant attention in the last few decades. The recent surge in the number of publications on tracking-related problems has made it almost impossible to follow developments in the field. One reason is the lack of commonly accepted annotated datasets and standardized evaluation protocols that would allow objective comparison of different tracking methods. To address this issue, the Visual Object Tracking (VOT) workshop was organized in conjunction with ICCV2013. Researchers from academia as well as industry were invited to participate in the first VOT2013 challenge, which aimed at single-object visual trackers that do not apply pre-learned models of object appearance (model-free). Presented here is the VOT2013 benchmark dataset for evaluation of single-object visual trackers, as well as the results obtained by the trackers competing in the challenge. In contrast to related attempts at tracker benchmarking, the dataset is labeled per-frame with visual attributes that indicate occlusion, illumination change, motion change, size change and camera motion, offering a more systematic comparison of the trackers. Furthermore, we have designed an automated system for performing and evaluating the experiments. We present the evaluation protocol of the VOT2013 challenge and the results of a comparison of 27 trackers on the benchmark dataset. The dataset, the evaluation tools and the tracker rankings are publicly available from the challenge website (http://votchallenge.net).
Description: 2013 14th IEEE International Conference on Computer Vision Workshops, ICCVW 2013; Sydney, NSW, Australia; 1 December 2013 through 8 December 2013
URI: http://doi.org/10.1109/ICCVW.2013.20
http://hdl.handle.net/11147/6802
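The abstract notes that the benchmark labels every frame with visual attributes (occlusion, illumination change, motion change, size change, camera motion) so trackers can be compared per attribute. A minimal sketch of that idea, with assumed attribute names and an assumed data layout (not the actual VOT2013 annotation format), might look like:

```python
# Hypothetical per-frame attribute labels for one sequence; the attribute
# set follows the abstract, but the data layout here is an assumption.
ATTRIBUTES = ("occlusion", "illumination_change", "motion_change",
              "size_change", "camera_motion")

# One boolean flag per attribute for each frame of the sequence.
frame_labels = [
    {"occlusion": False, "illumination_change": True,  "motion_change": False,
     "size_change": False, "camera_motion": True},   # frame 0
    {"occlusion": True,  "illumination_change": False, "motion_change": False,
     "size_change": False, "camera_motion": False},  # frame 1
]

def frames_with(attribute, labels):
    """Return indices of frames where the given attribute is active,
    enabling per-attribute comparison of tracker performance."""
    return [i for i, flags in enumerate(labels) if flags[attribute]]

print(frames_with("occlusion", frame_labels))  # -> [1]
```

Restricting an accuracy or robustness measure to `frames_with("occlusion", ...)` is what makes the "more systematic comparison" described in the abstract possible: each tracker can be ranked separately on the subset of frames exhibiting each attribute.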
ISBN: 978-1-4799-3022-7
Appears in Collections: Computer Engineering / Bilgisayar Mühendisliği; Scopus İndeksli Yayınlar Koleksiyonu / Scopus Indexed Publications Collection; WoS İndeksli Yayınlar Koleksiyonu / WoS Indexed Publications Collection
Scopus citations: 209 (checked on Nov 15, 2024)
Web of Science citations: 132 (checked on Nov 9, 2024)
Page views: 262 (checked on Nov 18, 2024)
Downloads: 1,308 (checked on Nov 18, 2024)
Items in GCRIS Repository are protected by copyright, with all rights reserved, unless otherwise indicated.