[1] France, D. J., et al. “Acoustical properties of speech as indicators of depression and suicidal risk.” IEEE Trans. Biomedical Engineering, vol. 47, no. 7, 2000, pp. 829-837.
[2] Pao, T., C. Wang. “A study on the search of the most discriminative speech features in the speaker dependent speech emotion recognition.” Proc. IEEE Fifth Int. Symp. Parallel Architectures, Algorithms and Programming, 2012, pp. 157-162.
[3] Chandaka, S., A. Chatterjee, S. Munshi. “Support vector machines employing cross-correlation for emotional speech recognition.” Measurement, vol. 42, 2009, pp. 611-618.
[4] Hübner, D., B. Vlasenko. “The performance of the speaking rate parameter in emotion recognition from speech.” Proc. IEEE Int. Conf. Multimedia and Expo Workshops, 2012, pp. 296-301.
[5] Esmaileyan, Z., H. Marvi. “A database for automatic Persian speech emotion recognition: Collection, processing and evaluation.” IJE Trans. A: Basics, vol. 27, no. 1, 2014, pp. 79-90.
[6] Breazeal, C., L. Aryananda. “Recognition of affective communicative intent in robot-directed speech.” Autonomous Robots, vol. 12, 2002, pp. 83-104.
[7] Nwe, T. L., S. W. Foo, L. C. De Silva. “Speech emotion recognition using hidden Markov models.” Speech Communication, vol. 41, 2003, pp. 603-623.
[8] Slaney, M., G. McRoberts. “BabyEars: A recognition system for affective vocalizations.” Speech Communication, vol. 39, 2003, pp. 367-384.
[9] Burkhardt, F., et al. “A database of German emotional speech.” Proc. Interspeech, Lisbon, Portugal, 2005, pp. 1517-1520.
[10] Keshtiari, N., M. Kuhlmann, M. Eslami, G. Klann-Delius. “Recognizing emotional speech in Persian: A validated database of Persian emotional speech (Persian ESD).” 2014.
[11] Albornoz, E., et al. “Spoken emotion recognition using hierarchical classifiers.” Computer Speech and Language, 2011, pp. 556-570.
[12] Yang, B., M. Lugger. “Emotion recognition from speech signals using new harmony features.” Signal Processing, vol. 90, 2010, pp. 1415-1423.
[13] Bitouk, D., et al. “Class-level spectral features for emotion recognition.” Speech Communication, vol. 52, 2010, pp. 613-625.
[14] Hassan, A., R. Damper. “Classification of emotional speech using 3DEC hierarchical classifier.” Speech Communication, vol. 54, 2012, pp. 903-916.
[15] Hübner, D., et al. “The performance of the speaking rate parameter in emotion recognition from speech.” Proc. IEEE Int. Conf. Multimedia and Expo Workshops, 2012.
[16] Gaurav, M. “Performance analysis of spectral and prosodic features and their fusion for emotion recognition in speech.” Proc. IEEE Spoken Language Technology Workshop (SLT), 2008.
[17] Yao, J., Y. Zhang. “Bionic wavelet transform: A new time-frequency method based on an auditory model.” IEEE Trans. Biomedical Engineering, vol. 48, no. 8, 2001, pp. 856-863.
[18] Chen, F., Zhang. “A new implementation of discrete bionic wavelet transform: Adaptive tiling.” Digital Signal Processing, vol. 16, no. 3, 2006, pp. 233-246.
[19] Ntalampiras, S., N. Fakotakis. “Modeling the temporal evolution of acoustic parameters for speech emotion recognition.” IEEE Trans. Affective Computing, vol. 3, no. 1, 2012, pp. 116-125.