Int J Performability Eng, 2018, Vol. 14, Issue (1): 101-110. doi: 10.23940/ijpe.18.01.p11.101110

• Original Article •

A Novel Double-Layer Framework for Joint Segmentation and Recognition of Multiple Actions

Cuiwei Liu (a), Yaguang Lu (b), Xiangbin Shi (a,b), Deyuan Zhang (a), and Fang Liu (a)

a. Computer Science, Shenyang Aerospace University, Shenyang, 110136, China
b. School of Information, Liaoning University, Shenyang, 110036, China

Abstract:

This paper addresses the problem of jointly segmenting and recognizing multiple actions in a long-term video. Since features extracted from a single frame cannot describe human motion over a period of time, some existing methods first divide a long-term video into fixed-length clips and represent the video as a sequence of such clips. However, a fixed-length clip may contain frames from two adjacent actions, which can significantly degrade the performance of action segmentation and recognition. In this paper, we develop a double-layer framework for segmenting and recognizing multiple actions in a long-term video. In the first layer, a novel unsupervised method based on the directions of velocity is proposed to initially divide an input video into a series of variable-length clips. The second layer takes the sequence of video clips as input and employs a joint segmentation and recognition method to group the clips into several segments while simultaneously labeling the action category of each segment. Experiments conducted on the IXMAS action dataset verify the effectiveness of the proposed approach.
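The double-layer pipeline described above can be sketched in simplified form. The abstract does not give the actual boundary criterion or the recognition model, so the snippet below is a minimal illustrative sketch under two assumptions: the first layer cuts clips wherever the dominant per-frame velocity direction turns sharply (here, by more than a hypothetical angle threshold), and the second layer merges consecutive clips that share a predicted action label into labeled segments. The function names, the threshold, and the toy inputs are all hypothetical, not from the paper.

```python
import math

def split_into_clips(velocities, angle_thresh=math.pi / 4):
    """Layer 1 (sketch): cut a frame sequence into variable-length clips
    wherever the motion direction turns by more than angle_thresh radians.
    velocities: list of (vx, vy) per-frame velocity vectors."""
    boundaries = [0]
    for i in range(1, len(velocities)):
        (ux, uy), (vx, vy) = velocities[i - 1], velocities[i]
        nu, nv = math.hypot(ux, uy), math.hypot(vx, vy)
        if nu > 0 and nv > 0:
            # angle between consecutive velocity vectors
            cos_a = (ux * vx + uy * vy) / (nu * nv)
            ang = math.acos(max(-1.0, min(1.0, cos_a)))
            if ang > angle_thresh:
                boundaries.append(i)
    boundaries.append(len(velocities))
    return [(boundaries[k], boundaries[k + 1])
            for k in range(len(boundaries) - 1)]

def label_and_merge(clip_labels):
    """Layer 2 (sketch): group consecutive clips with the same predicted
    action label into segments of (start_clip, end_clip, label)."""
    segments, start = [], 0
    for i in range(1, len(clip_labels) + 1):
        if i == len(clip_labels) or clip_labels[i] != clip_labels[start]:
            segments.append((start, i, clip_labels[start]))
            start = i
    return segments

# Toy input: rightward motion for three frames, then a sharp turn upward.
vels = [(1, 0), (1, 0), (1, 0), (0, 1), (0, 1)]
clips = split_into_clips(vels)                     # [(0, 3), (3, 5)]
segs = label_and_merge(["wave", "wave", "kick"])   # [(0, 2, 'wave'), (2, 3, 'kick')]
```

In the paper's framework, the per-clip labels fed to the second layer would come from the joint segmentation-and-recognition model rather than being given directly as they are in this toy example.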


Submitted on October 2, 2017; Revised on November 15, 2017; Accepted on December 10, 2017
References: 32