UPM Institutional Repository

Vision-based human action recognition using time delay input radial basis function networks


Citation

Kalhor, Davood (2011) Vision-based human action recognition using time delay input radial basis function networks. Masters thesis, Universiti Putra Malaysia.

Abstract

Understanding human actions from video sequences is one of the most active and challenging research topics in computer vision. In spite of several promising works, particularly in recent years, on achieving high accuracy, there is still a lack of efficient systems for real-time applications, and hence a growing demand for faster systems. In other words, high-performance systems for real-time applications must consider both accuracy and speed. In practice, however, achieving high accuracy and high speed concurrently is very challenging. This thesis addresses this problem and proposes a method that is sufficiently fast for real-time human action recognition at 10 frames per second (fps). The proposed method consists of two main parts. In the first part, a feature vector is extracted for each frame, and an action descriptor is then constructed by concatenating these vectors. The choice of appropriate features is of vital importance to the successful design of a high-performance system. Unlike most previous works, which describe actions with very complex, high-dimensional feature vectors, this thesis proposes a new descriptor with low dimensionality and complexity that preserves the required discriminative power. The feature vector is built by merging three information channels: grid-based shape features, the bounding box, and the mass center of the silhouette. In the second part, these feature vectors are classified using a Time Delay Input Radial Basis Function Network (TDIRBFN). This network has no integration layer and therefore has fewer model parameters and requires less computation during model selection. A growing-cell approach is suggested for training this network. The work is evaluated using a leave-one-actor-out protocol on a human action dataset (provided by the University of Illinois at Urbana-Champaign) containing 14 actions. Based on experimental results obtained in the MATLAB environment, the average execution time for constructing feature vectors is almost 20 ms per frame (50 fps), significantly lower than times reported in the literature. The proposed method can be trained to meet two different objectives: high speed (the main requirement of real-time systems) and high accuracy (the main requirement of non-real-time systems). The achieved results are 15.5 fps (classifier speed) with 90.66% accuracy for the first objective, and 94.52% accuracy at 2.37 fps (classifier speed) for the second. A comparative analysis demonstrates that, in addition to accuracy comparable with the literature, the proposed system outperforms state-of-the-art methods in terms of both speed and overall performance. The findings are significant in that they offer simpler descriptors, as well as the TDIRBFN, as an alternative approach to classifying human actions, particularly for real-time applications.
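The abstract describes the descriptor only at a high level. The Python sketch below illustrates one plausible reading of it, assuming a binary silhouette mask per frame; the grid size, normalization, and channel ordering are assumptions for illustration and are not specified in the abstract.

```python
import numpy as np

def frame_features(silhouette, grid=(4, 4)):
    """Per-frame feature vector from a binary (0/1) silhouette mask.

    Illustrative only: the grid size, normalization, and channel order
    are assumptions, not taken from the thesis.
    """
    h, w = silhouette.shape
    ys, xs = np.nonzero(silhouette)
    if len(ys) == 0:
        return np.zeros(grid[0] * grid[1] + 4 + 2)

    # Channel 2: bounding box of the silhouette, normalized by frame size
    y0, y1, x0, x1 = ys.min(), ys.max(), xs.min(), xs.max()
    bbox = np.array([y0 / h, x0 / w, (y1 - y0) / h, (x1 - x0) / w])

    # Channel 3: mass center of the silhouette, normalized by frame size
    center = np.array([ys.mean() / h, xs.mean() / w])

    # Channel 1: grid-based shape features -- fraction of foreground
    # pixels in each cell of a coarse grid laid over the bounding box
    crop = silhouette[y0:y1 + 1, x0:x1 + 1]
    gy, gx = grid
    cells = np.zeros(gy * gx)
    rows = np.array_split(np.arange(crop.shape[0]), gy)
    cols = np.array_split(np.arange(crop.shape[1]), gx)
    for i, r in enumerate(rows):
        for j, c in enumerate(cols):
            block = crop[np.ix_(r, c)]
            cells[i * gx + j] = block.mean() if block.size else 0.0

    return np.concatenate([cells, bbox, center])


def action_descriptor(silhouette_sequence):
    """Concatenate per-frame feature vectors into one action descriptor."""
    return np.concatenate([frame_features(s) for s in silhouette_sequence])
```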
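For the classification stage, the sketch below shows a generic radial basis function classifier over the concatenated descriptors, as a rough stand-in for the TDIRBFN. It is not the thesis's network: the growing-cell training procedure is replaced by randomly sampled centers with least-squares output weights, and the Gaussian width is a single assumed hyperparameter.

```python
import numpy as np

class SimpleRBFClassifier:
    """Generic RBF classifier over concatenated frame features.

    A stand-in, not the thesis's TDIRBFN: centers are drawn at random
    instead of grown, and output weights are fit by least squares.
    """

    def __init__(self, n_centers=50, sigma=1.0, seed=None):
        self.n_centers = n_centers
        self.sigma = sigma          # shared Gaussian width (assumed)
        self.rng = np.random.default_rng(seed)

    def _phi(self, X):
        # Gaussian activations of each sample against each center
        d2 = ((X[:, None, :] - self.centers[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2.0 * self.sigma ** 2))

    def fit(self, X, y):
        # X: (n_samples, n_features) action descriptors; y: integer labels
        self.classes_ = np.unique(y)
        idx = self.rng.choice(len(X), size=min(self.n_centers, len(X)),
                              replace=False)
        self.centers = X[idx]
        # One-hot targets; output weights by linear least squares
        T = (y[:, None] == self.classes_[None, :]).astype(float)
        self.W, *_ = np.linalg.lstsq(self._phi(X), T, rcond=None)
        return self

    def predict(self, X):
        return self.classes_[np.argmax(self._phi(X) @ self.W, axis=1)]
```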


Download File

PDF: FK 2011 155 ir.pdf (785kB)

Additional Metadata

Item Type: Thesis (Masters)
Subject: Computer vision
Subject: Gesture
Subject: Time delay systems
Call Number: FK 2011 155
Chairman Supervisor: Professor Ishak Aris, PhD
Divisions: Faculty of Engineering
Depositing User: Haridan Mohd Jais
Date Deposited: 03 Mar 2016 03:26
Last Modified: 03 Mar 2016 03:26
URI: http://psasir.upm.edu.my/id/eprint/41836
