
UCF101: A Dataset of 101 Human Actions Classes From Videos in The Wild

Khurram Soomro, Amir Roshan Zamir and Mubarak Shah

CRCV-TR-12-01

November 2012

Keywords: Action Dataset, UCF101, UCF50, Action Recognition

Center for Research in Computer Vision

University of Central Florida

4000 Central Florida Blvd.

Orlando, FL 32816-2365 USA

Khurram Soomro, Amir Roshan Zamir and Mubarak Shah
Center for Research in Computer Vision, Orlando, FL 32816, USA
{ksoomro, aroshan, shah}@cs.ucf.edu
http://crcv.ucf.edu/data/UCF101.php

Abstract

We introduce UCF101, currently the largest dataset of human actions. It consists of 101 action classes, over 13k clips, and 27 hours of video data. The database consists of realistic, user-uploaded videos containing camera motion and cluttered backgrounds. Additionally, we provide baseline action recognition results on this new dataset using the standard bag-of-words approach, with an overall accuracy of 44.5%. To the best of our knowledge, UCF101 is currently the most challenging dataset of actions due to its large number of classes, large number of clips, and the unconstrained nature of those clips.

1 Introduction

The majority of existing action recognition datasets suffer from two disadvantages: 1) The number of classes is typically very low compared to the richness of actions humans perform in reality; e.g., the KTH [11], Weizmann [3], UCF Sports [10], and IXMAS [12] datasets include only 6, 9, 9, and 11 classes respectively. 2) The videos are recorded in unrealistically controlled environments. For instance, KTH, Weizmann, and IXMAS are staged by actors, while HOHA [7] and UCF Sports are composed of movie clips captured by professional filming crews. Recently, web videos have been used in order to utilize unconstrained user-uploaded data to alleviate the second issue [6, 8, 9, 5]. However, the first disadvantage remains unresolved, as the largest existing dataset includes no more than 51 actions, while several works have shown that the number of classes plays a crucial role in evaluating an action recognition method [4, 9]. Therefore, we have compiled a new dataset with 101 actions and 13320 clips, which is nearly twice as large as the largest existing datasets in terms of both the number of actions and the number of clips. (HMDB51 [5] and UCF50 [9] are currently the largest ones, with 6766 clips of 51 actions and 6681 clips of 50 actions respectively.)

[Figure 1: Sample frames of 6 action classes from UCF101]

The dataset is composed of web videos which are recorded in unconstrained environments and typically include camera motion, various lighting conditions, partial occlusion, low quality frames, etc. Fig. 1 shows sample frames of 6 action classes from UCF101.

2 Dataset Details

[Figure 2: A sample frame for each of the 101 action classes of UCF101]

Action Classes: UCF101 includes a total of 101 action classes, which we have divided into five types: Human-Object Interaction, Body-Motion Only, Human-Human Interaction, Playing Musical Instruments, and Sports.

UCF101 is an extension of UCF50, which included the following 50 action classes: {Baseball Pitch, Basketball Shooting, Bench Press, Biking, Billiards Shot, Breaststroke, Clean and Jerk, Diving, Drumming, Fencing, Golf Swing, High Jump, Horse Race, Horse Riding, Hula Hoop, Javelin Throw, Juggling Balls, Jumping Jack, Jump Rope, Kayaking, Lunges, Military Parade, Mixing Batter, Nun chucks, Pizza Tossing, Playing Guitar, Playing Piano, Playing Tabla, Playing Violin, Pole Vault, Pommel Horse, Pull Ups, Punch, Push Ups, Rock Climbing Indoor, Rope Climbing, Rowing, Salsa Spins, Skate Boarding, Skiing, Skijet, Soccer Juggling, Swing, TaiChi, Tennis Swing, Throw Discus, Trampoline Jumping, Volleyball Spiking, Walking with a dog, Yo Yo}. The colors of the class labels specify which predefined action type they belong to.

[Figure 3: Number of clips per action class; colors indicate clip durations]
[Figure 4: Mean clip length (green) and total duration of clips (blue) per action class]

The following 51 new classes are introduced in UCF101: {Apply Eye Makeup, Apply Lipstick, Archery, Baby Crawling, Balance Beam, Band Marching, Basketball Dunk, Blow Drying Hair, Blowing Candles, Body Weight Squats, Bowling, Boxing-Punching Bag, Boxing-Speed Bag, Brushing Teeth, Cliff Diving, Cricket Bowling, Cricket Shot, Cutting In Kitchen, Field Hockey Penalty, Floor Gymnastics, Frisbee Catch, Front Crawl, Hair cut, Hammering, Hammer Throw, Handstand Pushups, Handstand Walking, Head Massage, Ice Dancing, Knitting, Long Jump, Mopping Floor, Parallel Bars, Playing Cello, Playing Daf, Playing Dhol, Playing Flute, Playing Sitar, Rafting, Shaving Beard, Shot put, Sky Diving, Soccer Penalty, Still Rings, Sumo Wrestling, Surfing, Table Tennis Shot, Typing, Uneven Bars, Wall Pushups, Writing On Board}. Fig. 2 shows a sample frame for each action class of UCF101.

Clip Groups: The clips of one action class are divided into 25 groups which contain 4-7 clips each. The clips in one group share some common features, such as the background or actors.

The bar chart of Fig. 3 shows the number of clips in each class. The colors on each bar illustrate the durations of different clips included in that class. The chart shown in Fig. 4 illustrates the average clip length (green) and total duration of clips (blue) for each action class.

The videos are downloaded from YouTube [2], and the irrelevant ones are manually removed. All clips have a fixed frame rate of 25 FPS and a fixed resolution of 320×240. The videos are saved as .avi files compressed with the DivX codec available in the K-Lite package [1]. The audio is preserved for the clips of the 51 new actions. Table 1 summarizes the characteristics of the dataset.

Table 1: Summary of the characteristics of UCF101

Actions: 101
Clips: 13320
Groups per Action: 25
Clips per Group: 4-7
Mean Clip Length: 7.21 sec
Total Duration: 1600 mins
Min Clip Length: 1.06 sec
Max Clip Length: 71.04 sec
Frame Rate: 25 fps
Resolution: 320×240
Audio: Yes (51 actions)
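As a quick arithmetic check, the mean clip length and clip count reported above are consistent with the total duration:

```python
# Consistency check on Table 1: 13320 clips at a mean length of 7.21 s
# should come close to the reported total of 1600 minutes of video.
clips = 13320
mean_clip_len_s = 7.21

total_minutes = clips * mean_clip_len_s / 60
# total_minutes is roughly 1600, matching the table (small rounding
# differences are expected since the mean is reported to 2 decimals).
```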

Naming Convention: The zipped file of the dataset (available at http://crcv.ucf.edu/data/UCF101.php) includes 101 folders, each containing the clips of one action class. The name of each clip has the following form:

v_X_gY_cZ.avi

where X, Y and Z represent the action class label, group number, and clip number respectively. For instance, v_ApplyEyeMakeup_g03_c04.avi corresponds to clip 4 of group 3 of the action class ApplyEyeMakeup.
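For illustration, the naming convention can be parsed as follows; `parse_clip_name` is a hypothetical helper, not part of the released dataset tools. It assumes the action label contains no group/clip-like suffix, which holds for UCF101:

```python
import re

def parse_clip_name(filename):
    """Split a UCF101 clip name of the form v_X_gY_cZ.avi into its fields.

    Returns (action, group, clip) with group and clip as integers.
    """
    m = re.match(r"v_(?P<action>\w+)_g(?P<group>\d+)_c(?P<clip>\d+)\.avi$",
                 filename)
    if m is None:
        raise ValueError(f"unexpected clip name: {filename}")
    return m.group("action"), int(m.group("group")), int(m.group("clip"))

# Example from the text:
action, group, clip = parse_clip_name("v_ApplyEyeMakeup_g03_c04.avi")
# action == "ApplyEyeMakeup", group == 3, clip == 4
```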

Table 2: Summary of major action recognition datasets

Dataset          Actions  Clips  Background  Camera Motion  Release Year  Resource
KTH [11]         6        600    Static      Slight         2004          Actor Staged
Weizmann [3]     9        81     Static      No             2005          Actor Staged
UCF Sports [10]  9        182    Dynamic     Yes            2009          TV, Movies
IXMAS [12]       11       165    Static      No             2006          Actor Staged
UCF11 [6]        11       1168   Dynamic     Yes            2009          YouTube
HOHA [7]         12       2517   Dynamic     Yes            2009          Movies
Olympic [8]      16       800    Dynamic     Yes            2010          YouTube
UCF50 [9]        50       6681   Dynamic     Yes            2010          YouTube
HMDB51 [5]       51       6766   Dynamic     Yes            2011          Movies, YouTube, Web
UCF101           101      13320  Dynamic     Yes            2012          YouTube

3 Experimental Results

We performed an experiment using the bag-of-words approach, which is widely accepted as a standard action recognition method, to provide baseline results on UCF101.

From each clip, we extracted Harris3D corners (using the implementation by [7]) and computed a 162-dimensional HOG/HOF descriptor for each. We clustered a randomly selected set of 100,000 space-time interest points (STIPs) using k-means to build the codebook. The size of our codebook is k=4000, which has been shown to yield good results over a wide range of datasets. The descriptors were assigned to their closest visual words by nearest-neighbor matching, and each clip was represented by a 4000-dimensional histogram of its words. Utilizing a leave-one-group-out 25-fold cross-validation scenario, an SVM was trained using the histogram vectors of the training folds. We employed a nonlinear multiclass SVM with a histogram intersection kernel and 101 classes, each representing one action. For testing, a similar histogram representation of the query video was computed and classified using the trained SVM. This method yielded an overall accuracy of 44.5%; the confusion matrix for all 101 actions is shown in Fig. 5.
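The quantization and kernel steps of this pipeline can be sketched in NumPy. This is a minimal illustration with random toy descriptors and a tiny codebook (k=8 rather than the paper's k=4000), not the actual baseline implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins: 500 HOG/HOF descriptors (162-D) from one clip, and a
# codebook that would normally come from k-means over 100,000 STIPs.
descriptors = rng.standard_normal((500, 162))
codebook = rng.standard_normal((8, 162))

# Assign each descriptor to its nearest visual word (Euclidean distance).
dists = np.linalg.norm(descriptors[:, None, :] - codebook[None, :, :], axis=2)
words = dists.argmin(axis=1)

# Represent the clip as a normalized k-bin histogram of word counts.
hist = np.bincount(words, minlength=len(codebook)).astype(float)
hist /= hist.sum()

def histogram_intersection(h1, h2):
    """Histogram intersection kernel, as used by the baseline SVM."""
    return np.minimum(h1, h2).sum()
```

The resulting per-clip histograms would then be fed to a multiclass SVM with `histogram_intersection` as its kernel.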

The accuracies for the predefined action types are: Sports (50.54%), Playing Musical Instrument (37.42%), Human-Object Interaction (38.52%), Body-Motion Only (36.26%), and Human-Human Interaction (44.14%). Sports actions achieve the highest accuracy since performing sports typically requires distinctive motions, which makes classification easier. Moreover, the backgrounds in sports clips are generally less cluttered compared to other action types. Unlike Sports actions, Human-Object Interaction clips typically have a highly cluttered background. Additionally, the informative motions typically occupy only a small portion of the motion in the clips, which explains the low recognition accuracy of this action type.

We recommend a 25-fold cross-validation experimental setup using all the videos in the dataset, to keep the results reported on UCF101 consistent; the baseline results provided in this section were computed using the same scenario.
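The recommended leave-one-group-out split can be sketched as follows; `make_folds` is a hypothetical helper, assuming each clip is represented as an (action, group, clip) tuple following the naming convention:

```python
def make_folds(clips, n_groups=25):
    """Leave-one-group-out folds: fold i tests on group i+1, trains on the rest.

    `clips` is a list of (action, group, clip) tuples; groups are
    numbered 1..n_groups as in the UCF101 naming convention.
    """
    folds = []
    for g in range(1, n_groups + 1):
        test_set = [c for c in clips if c[1] == g]
        train_set = [c for c in clips if c[1] != g]
        folds.append((train_set, test_set))
    return folds
```

Because clips within a group share backgrounds and actors, splitting by group (rather than by clip) avoids leaking near-duplicate footage between training and test sets.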

4 Related Datasets

UCF Sports, UCF11, UCF50 and UCF101 are the four action datasets compiled by UCF, in chronological order; each one includes its precursor. We made two minor modifications to the portion of UCF101 that includes the UCF50 videos: the number of groups is fixed at 25 for all actions, and each group includes up to 7 clips. Table 2 lists existing action recognition datasets with detailed characteristics of each. Note that UCF101 is remarkably larger than the rest.

[Figure 5: Confusion matrix of the baseline method over all 101 actions]

5 Conclusion

We introduced UCF101, the most challenging dataset for action recognition compared to the existing ones. It includes 101 action classes and over 13k clips, which makes it substantially larger than other datasets. UCF101 is composed of unconstrained videos downloaded from YouTube, which feature challenges such as poor lighting, cluttered backgrounds, and severe camera motion. We provided baseline action recognition results on this new dataset using the standard bag-of-words method, with an overall accuracy of 44.5%.

References

  • [1] K-Lite codec package. http://codecguide.com/.
  • [2] YouTube. http://www.youtube.com/.
  • [3] M. Blank, L. Gorelick, E. Shechtman, M. Irani, and R. Basri. Actions as space-time shapes. International Conference on Computer Vision (ICCV), 2005.
  • [4] G. Johansson, S. Bergstrom, and W. Epstein. Perceiving Events and Objects. Lawrence Erlbaum Associates, 1994.
  • [5] H. Kuehne, H. Jhuang, E. Garrote, T. Poggio, and T. Serre. HMDB: A large video database for human motion recognition. International Conference on Computer Vision (ICCV), 2011.
  • [6] J. Liu, J. Luo, and M. Shah. Recognizing realistic actions from videos "in the wild". IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2009.
  • [7] M. Marszałek, I. Laptev, and C. Schmid. Actions in context. IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2009.
  • [8] J. Niebles, C. Chen, and L. Fei-Fei. Modeling temporal structure of decomposable motion segments for activity classification. European Conference on Computer Vision (ECCV), 2010.
  • [9] K. Reddy and M. Shah. Recognizing 50 human action categories of web videos. Machine Vision and Applications Journal (MVAP), 2012.
  • [10] M. Rodriguez, J. Ahmed, and M. Shah. Action MACH: A spatio-temporal maximum average correlation height filter for action recognition. IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2008.
  • [11] C. Schuldt, I. Laptev, and B. Caputo. Recognizing human actions: A local SVM approach. International Conference on Pattern Recognition (ICPR), 2004.
  • [12] D. Weinland, E. Boyer, and R. Ronfard. Action recognition from arbitrary views using 3D exemplars. International Conference on Computer Vision (ICCV), 2007.