A Comprehensive Review of Hand Gesture Recognition: Vision-Based vs. Wearable Sensor Approaches
Keywords:
Gesture Recognition, Machine Learning, Deep Learning, Vision Data, Wearable Sensors

Abstract
This comprehensive review systematically compares vision-based and wearable sensor approaches to hand gesture recognition (HGR), addressing a critical gap in the existing literature, which often treats these modalities in isolation. Whereas previous surveys have predominantly focused on either vision-based or sensor-based methods separately, this review provides an integrated analysis of both paradigms, highlighting their complementary strengths and limitations across the entire HGR pipeline, from data acquisition and preprocessing to feature extraction and classification. We examine diverse datasets, including the UCI MYO Thalmic collection and RGB-based collections, analyzing preprocessing techniques (data augmentation, noise reduction, normalization) alongside both traditional machine learning (SVM, ANN, KNN) and deep learning methods (CNN, RNN, LSTM). Our comparative analysis reveals that sensor-based methods excel in controlled environments requiring precise motion capture, while vision-based approaches offer greater usability at the cost of environmental sensitivity. This review contributes a unified framework for selecting appropriate HGR technologies based on application requirements, computational constraints, and user needs, particularly in healthcare rehabilitation, virtual reality, and assistive technologies for hearing-impaired communities.