1. Dr. SANTOSH SINGH - University Department of Information Technology, University of Mumbai.
2. SUJATA KOTIAN - University Department of Information Technology, University of Mumbai.
Speech is among the most expressive mediums humans use to communicate emotion, yet recognizing basic emotional states remains difficult, especially for low-resource languages such as Hindi. Current speech emotion recognition (SER) systems rely heavily on acoustic and prosodic characteristics, while the contribution of behavioral cues, including pauses, speech rhythm, speaking rate, and emphasis, remains largely unexamined. This paper presents a behavior-aware multimodal deep learning framework that quantifies how these higher-level speech behaviors contribute to Hindi SER performance. A Hindi emotional speech corpus was curated and preprocessed through standardized pipelines of normalization, segmentation, and feature extraction, yielding three complementary feature groups: acoustic, prosodic, and behavioral descriptors. These were fed into a dual-branch multimodal architecture that combines convolutional or transformer-based acoustic encoders with BiLSTM-attention encoders for behavioral-prosodic patterns. Ablation experiments showed that adding behavioral features substantially improves classification accuracy and macro-F1, particularly for low-arousal emotions such as sadness and neutral, for which spectral cues alone are poorly discriminative. The full multimodal model outperformed both the acoustic-only and bimodal acoustic-prosodic baselines and proved more robust under degraded spectral conditions, suggesting that behavioral cues remain reliable even when spectral information is weak. Qualitative analysis of attention weights further confirmed the interpretability of the approach, revealing that the network tends to concentrate on behaviorally salient speech regions.
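To make the behavioral descriptors concrete, a minimal sketch of pause-based measures is shown below. It assumes a frame-level energy envelope as input; the silence threshold and frame rate are illustrative values, not parameters taken from the paper's pipeline.

```python
import numpy as np

def behavioral_features(energy, frame_rate=100, thresh=0.05):
    """Sketch of pause descriptors from a frame-level energy envelope.

    energy     : 1-D array of per-frame energy values
    frame_rate : frames per second (illustrative: 100)
    thresh     : silence threshold (illustrative, not from the paper)
    """
    voiced = energy > thresh
    # fraction of the utterance spent in silence
    pause_ratio = 1.0 - voiced.mean()
    # count pause segments as voiced-to-silent transitions
    transitions = np.diff(voiced.astype(int))
    n_pauses = int((transitions == -1).sum())
    duration_s = len(energy) / frame_rate
    pause_rate = n_pauses / duration_s  # pauses per second
    return {"pause_ratio": float(pause_ratio),
            "pause_rate": float(pause_rate)}

# toy envelope: 0.5 s speech, 0.2 s pause, 0.3 s speech
env = np.array([0.3] * 50 + [0.0] * 20 + [0.4] * 30)
feats = behavioral_features(env)
```

Descriptors of this kind, alongside rhythm and rate statistics, would form the behavioral feature group fed to the BiLSTM-attention branch.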
Overall, the results demonstrate the value of incorporating behavioral intelligence into deep learning models for Hindi SER and provide a sound foundation for building culturally adaptive, noise-resilient affective technologies.
Keywords: Hindi Speech Emotion Recognition; Behavioral Speech Features; Multimodal Deep Learning; Prosody; Acoustic Modeling; Attention Mechanisms; Low-Resource Languages; Speech Rhythm; Pause Analysis; Affective Computing.