Character Face in Video

Citation Author(s):
Zilinghan Li (University of Illinois at Urbana-Champaign)
Xiwei Wang (University of Illinois at Urbana-Champaign)
Zhenning Zhang (University of Illinois at Urbana-Champaign)
Volodymyr Kindratenko (National Center for Supercomputing Applications, University of Illinois at Urbana-Champaign)
Submitted by: Xiwei Wang
Last updated: Tue, 01/03/2023 - 00:46
DOI: 10.21227/q2pn-z054
License:

Abstract 

Measuring the time slots during which characters appear in videos remains an unsolved problem in computer vision, and existing datasets for this task are scarce. The Character Face In Video (CFIV) dataset provides labeled appearance time slots for the characters of interest in ten YouTube video clips, two face images per character for training, and a script for downloading each video. Additionally, three of the videos include around 100 images per character for evaluating the accuracy of the face recognizer.

Instructions: 

There are ten folders in total, vct1 through vct10, each containing the information for one of the ten YouTube video clips.

In each folder, the youtube_download.py script can be used to download the corresponding video clip from YouTube, the face folder contains two face images per character of interest for training the video character tracker, and the time_slot folder contains the appearance time slots for those characters.
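As a rough illustration, the sketch below iterates over the vct1 to vct10 folders and runs each folder's youtube_download.py to fetch the corresponding clip. Only the folder names and the script name come from the dataset layout described above; the dataset root path and the assumption that the script takes no arguments are hypothetical, so check the script itself for its actual usage and dependencies.

```python
import subprocess
from pathlib import Path

# Hypothetical path to the extracted CFIV dataset.
DATASET_ROOT = Path("CFIV")

for i in range(1, 11):
    folder = DATASET_ROOT / f"vct{i}"
    script = folder / "youtube_download.py"
    if not script.exists():
        continue
    # Assumes the download script can be run as-is from its own folder;
    # it may require extra arguments or packages not shown here.
    subprocess.run(["python", script.name], cwd=folder, check=True)
```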

Additionally, three video folders (vct2, vct4, and vct8) contain more data for further evaluation of the face recognizer. Specifically, the face1 folder contains only one face image per character, for evaluating the recognizer when it is trained on a single image per character. The testset folder contains around 100 labeled images per character for measuring face recognition accuracy, and the unseen folder contains about 100 images of a character who never appears in the video clip, for assessing how well the face recognizer can reject unseen faces.
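As a sketch of how the extra folders in vct2, vct4, and vct8 might be used, the code below computes top-1 accuracy on the labeled testset images. The recognize callable and the assumption that images are grouped into one subfolder per character (named after that character) are placeholders, not part of the dataset documentation.

```python
from pathlib import Path

def evaluate_testset(testset_dir: Path, recognize) -> float:
    """Compute top-1 accuracy, assuming testset_dir holds one subfolder
    of face images per character, named after that character."""
    correct = total = 0
    for char_dir in sorted(p for p in testset_dir.iterdir() if p.is_dir()):
        for img_path in char_dir.glob("*.jpg"):
            predicted = recognize(img_path)  # hypothetical face recognizer
            correct += int(predicted == char_dir.name)
            total += 1
    return correct / total if total else 0.0
```

The same loop could be pointed at the unseen folder, where a well-behaved recognizer should ideally return a rejection label rather than the name of any known character.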
