The focus of this work is on classifying the most common non-manual (facial) gestures in sign language. This goal is achieved in two consecutive steps: first, automatic facial landmarking is performed using Multi-resolution Active Shape Models (MRASMs); second, the tracked landmarks are normalized and expressions are classified with multivariate Continuous Hidden Markov Models (CHMMs). We collected a video database of expressions from Turkish Sign Language (TSL) to test the proposed approach. The expressions used are universal, so the results are applicable to other sign languages. Single-view vs. multi-view and person-specific vs. generic MRASM trackers are compared for both tracking and expression recognition. The multi-view person-specific tracker performs best and tracks the landmarks robustly. For expression classification, the proposed CHMM classifier is tested on different training and test set combinations and the results are reported. We observe that the classification performance for the distinct classes is very high.
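The CHMM classification step described above amounts to a maximum-likelihood decision: one continuous HMM is trained per expression class, and a landmark sequence is assigned to the class whose model gives it the highest likelihood. The sketch below illustrates this decision rule with a pure-NumPy log-space forward algorithm; it is a hypothetical minimal illustration, not the authors' implementation, and it assumes diagonal-covariance Gaussian emissions over the normalized landmark features (all model parameters shown are invented toy values).

```python
import numpy as np

def log_gauss(x, mean, var):
    # Log-density of a diagonal-covariance Gaussian emission.
    return -0.5 * np.sum(np.log(2 * np.pi * var) + (x - mean) ** 2 / var)

def forward_loglik(obs, log_pi, log_A, means, variances):
    # Forward algorithm in log space: returns log P(obs | model).
    # obs: (T, D) sequence of normalized landmark feature vectors.
    n_states = len(log_pi)
    log_alpha = log_pi + np.array(
        [log_gauss(obs[0], means[s], variances[s]) for s in range(n_states)]
    )
    for t in range(1, len(obs)):
        log_alpha = np.array([
            np.logaddexp.reduce(log_alpha + log_A[:, s])
            + log_gauss(obs[t], means[s], variances[s])
            for s in range(n_states)
        ])
    return np.logaddexp.reduce(log_alpha)

def classify(obs, models):
    # Max-likelihood decision over per-class CHMMs.
    scores = {name: forward_loglik(obs, *m) for name, m in models.items()}
    return max(scores, key=scores.get)

# Toy example: two single-state models with different emission means.
models = {
    "neutral": (np.array([0.0]), np.array([[0.0]]),
                np.array([[0.0]]), np.array([[1.0]])),
    "surprise": (np.array([0.0]), np.array([[0.0]]),
                 np.array([[5.0]]), np.array([[1.0]])),
}
obs = np.array([[0.1], [0.2], [-0.1]])
print(classify(obs, models))  # → neutral
```

In practice each class model would be trained (e.g. with Baum-Welch) on landmark trajectories of that expression, and the number of states and mixture components tuned on held-out data.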