Subject Area: Computer Science
This study presents a Deep Convolutional Neural Network (Deep CNN) model for multimodal biometric recognition that integrates facial and gait features to improve person identification, especially in scenarios with limited training data. The proposed system employs advanced preprocessing techniques, including the RetinaFace algorithm for robust face detection and Gait Energy Image (GEI) extraction for effective gait representation. Features are extracted by separate deep CNN-based extractors for the facial and gait modalities; feature-level fusion then combines the extracted features into a unified representation for classification. The model was evaluated on two widely used datasets, CASIA-B and Extended Yale-B, encompassing biometric data from 25 individuals under diverse conditions. The proposed system achieved an average accuracy of 92.3%, precision of 91.4%, recall of 93.0%, and an F1-score of 92.2%, demonstrating high reliability and robustness. These results highlight the model's effectiveness in handling variations in body size, clothing, and environmental conditions, making it suitable for real-world applications such as identity verification, security surveillance, and behavioral monitoring. Overall, this work demonstrates the potential of deep learning-based multimodal biometric systems to improve the accuracy and dependability of automated human recognition technologies.
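Two of the steps named above, GEI extraction and feature-level fusion, admit a compact illustration. The following is a minimal NumPy sketch, not the paper's implementation: it assumes silhouettes arrive as pre-aligned binary masks of equal size, and the function names (`gait_energy_image`, `fuse_features`) and the L2-normalise-then-concatenate fusion scheme are illustrative choices, not details taken from the study.

```python
import numpy as np

def gait_energy_image(silhouettes):
    """Form a GEI by averaging aligned binary silhouettes over one gait cycle.

    `silhouettes` is an iterable of equally sized 2-D arrays with values in
    {0, 1}; the pixel-wise mean yields a grayscale energy image in [0, 1].
    """
    frames = np.asarray(list(silhouettes), dtype=np.float64)
    return frames.mean(axis=0)

def fuse_features(face_feat, gait_feat):
    """Feature-level fusion: L2-normalise each modality, then concatenate.

    Normalising per modality before concatenation keeps one feature vector
    from dominating the fused representation purely by scale.
    """
    f = np.asarray(face_feat, dtype=np.float64).ravel()
    g = np.asarray(gait_feat, dtype=np.float64).ravel()
    f = f / (np.linalg.norm(f) + 1e-12)  # small epsilon guards zero vectors
    g = g / (np.linalg.norm(g) + 1e-12)
    return np.concatenate([f, g])

# Usage: a two-frame toy cycle gives a uniform 0.5 energy image, and fusing
# a 3-D face vector with a 2-D gait vector yields a 5-D joint representation
# that a downstream classifier would consume.
gei = gait_energy_image([np.zeros((4, 4)), np.ones((4, 4))])
fused = fuse_features(np.ones(3), np.ones(2))
```

In a real pipeline the fused vector would feed the classification head, and the silhouettes would come from background subtraction and alignment rather than being supplied directly.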