Overview

Action recognition has received increasing attention from the computer vision and machine learning communities over the last decade. In that time, the recognition task has evolved from single-view recordings under controlled laboratory conditions to unconstrained environments (e.g., surveillance footage or user-generated videos). Furthermore, recent work has focused on other aspects of the action recognition problem, such as cross-view classification, cross-domain learning, multi-modality learning, and action localization. Despite this large variety of studies, we observed few works that explore the open-set and open-view classification problem, which is an inherent property of action recognition. In other words, a well-designed algorithm should robustly identify an unfamiliar action as “unknown” and achieve similar performance across sensors with a similar field of view. To address this issue, we recorded a new dataset, the Multi-Camera Action Dataset (MCAD), which is designed to evaluate the open-view classification problem in a surveillance environment.

Unlike common action datasets, our multi-camera action dataset is recorded with a total of five cameras of two types (Static and PTZ). Specifically, there are three static cameras (Cam04, Cam05, and Cam06) with fish-eye lens distortion and two Pan-Tilt-Zoom (PTZ) cameras (PTZ04 and PTZ06). The static cameras have a resolution of 1280×960 pixels, while the PTZ cameras have a resolution of 704×576 pixels and a smaller field of view. Furthermore, the illumination is not controlled: recordings are made under two contrasting conditions (daytime and nighttime), which makes our dataset more challenging than datasets with strictly controlled illumination. The distribution of the cameras is shown in the picture on the right.
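As a quick reference, the camera setup above can be summarized in a small metadata structure. The Python sketch below is only illustrative; the variable names and dictionary layout are our own and are not part of any released MCAD tooling.

    # Illustrative summary of the five MCAD camera views described above.
    # The dictionary layout is our own convention, not an official MCAD API.
    CAMERAS = {
        "Cam04": {"type": "Static", "resolution": (1280, 960)},
        "Cam05": {"type": "Static", "resolution": (1280, 960)},
        "Cam06": {"type": "Static", "resolution": (1280, 960)},
        "PTZ04": {"type": "PTZ", "resolution": (704, 576)},
        "PTZ06": {"type": "PTZ", "resolution": (704, 576)},
    }

    # Example: group the views by camera type, e.g., for cross-view experiments.
    static_views = [name for name, meta in CAMERAS.items() if meta["type"] == "Static"]
    ptz_views = [name for name, meta in CAMERAS.items() if meta["type"] == "PTZ"]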

We identified 18 single-person daily actions, with and without objects, inherited from datasets such as KTH, IXMAS, and TRECVID. The list and definitions of the actions are shown in the table. The actions fall into four categories: micro actions without object (action IDs 01, 02, 05), micro actions with object (action IDs 10, 11, 12, 13), intense actions without object (action IDs 03, 04, 06, 07, 08, 09), and intense actions with object (action IDs 14, 15, 16, 17, 18). We recruited a total of 20 human subjects. Each subject repeated each action 8 times (4 times during the day and 4 times at night) under each camera, and the five cameras recorded the action samples separately. During recording we only told the subjects the action name, and they performed the action freely in their own manner, as long as it stayed within the field of view of the current camera. This makes our dataset much closer to reality; as a result, there is high intra-class variation among the action samples, as shown in the picture of action samples and in the sketch below.
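Putting the recording protocol together, the nominal number of action samples follows directly from these figures. The short Python sketch below is our own illustration of the arithmetic, not part of the MCAD release.

    # Nominal dataset size implied by the recording protocol above
    # (illustrative only; the actual release may contain slightly fewer clips).
    num_actions = 18      # single-person daily actions (IDs 01-18)
    num_subjects = 20     # recruited human subjects
    num_repetitions = 8   # 4 daytime + 4 nighttime repetitions per action
    num_cameras = 5       # Cam04, Cam05, Cam06, PTZ04, PTZ06

    nominal_samples = num_actions * num_subjects * num_repetitions * num_cameras
    print(nominal_samples)  # 14400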

Sample images (shown at relative pixel resolution) from the MCAD dataset. Each column shows a unique action recorded with different individuals across the 5 distinct camera views. Columns 1-3 were recorded during daytime and the remaining columns during nighttime.


Reference

This dataset is made available to the scientific community for non-commercial research purposes such as academic research, teaching, scientific publications or personal experimentation. If you use this database, please cite MCAD as follows:

    paper:

        Wenhui Li, Yongkang Wong, An-An Liu, Yang Li, Yu-Ting Su, Mohan Kankanhalli, Multi-Camera Action Dataset (MCAD): A Dataset for Studying Non-overlapped Cross-Camera Action Recognition, CoRR abs/1607.06408, 2016.

    Bibtex entry:

        @article{MCAD,
                AUTHOR = {Wenhui Li and Yongkang Wong and An-An Liu and Yang Li and Yu-Ting Su and Mohan Kankanhalli},
                TITLE = {Multi-Camera Action Dataset ({MCAD}): A Dataset for Studying Non-overlapped Cross-Camera Action Recognition},
                JOURNAL = {CoRR},
                VOLUME = {abs/1607.06408},
                YEAR = {2016}}


Download

    1. Video

    2. Download links of image files

    3. Download links of dense trajectory features

    4. Person bounding box


    5. Human annotated 2D joints


Contacts

If you have any questions regarding the dataset, please contact:

    {yongkang döt wong ät ieee döt org}


Acknowledgement


This research is supported by the National Research Foundation, Prime Minister’s Office, Singapore under its International Research Centre in Singapore Funding Initiative.