[1] STURM B L. The state of the art ten years after a state of the art: future research in music information retrieval[J]. Journal of New Music Research, 2014, 43(2): 147-172.
[2] BHALKE D G, RAO C B R, BORMANE D S. Automatic musical instrument classification using fractional Fourier transform based-MFCC features and counter propagation neural network[J]. Journal of Intelligent Information Systems, 2016, 46(3): 425-446.
[3] LOUGHRAN R, WALKER J, O'NEILL M, et al. Musical instrument identification using principal component analysis and multi-layered perceptrons[C]// ICALIP 2008: Proceedings of the 2008 International Conference on Audio, Language and Image Processing. Piscataway, NJ: IEEE, 2008: 643-648.
[4] BURRED J J, ROBEL A, SIKORA T. Dynamic spectral envelope modeling for timbre analysis of musical instrument sounds[J]. IEEE Transactions on Audio, Speech, and Language Processing, 2010, 18(3): 663-674.
[5] YU L F, SU L, YANG Y H. Sparse cepstral codes and power scale for instrument identification[C]// ICASSP 2014: Proceedings of the 2014 IEEE International Conference on Acoustics, Speech and Signal Processing. Piscataway, NJ: IEEE, 2014: 7460-7464.
[6] HAN Y, LEE S, NAM J, et al. Sparse feature learning for instrument identification: effects of sampling and pooling methods[J]. Journal of the Acoustical Society of America, 2016, 139(5): 2290-2298.
[7] HU Y, LIU G. Instrument identification and pitch estimation in multi-timbre polyphonic musical signals based on probabilistic mixture model decomposition[J]. Journal of Intelligent Information Systems, 2013, 40(1): 141-158.
[8] WEESE J L. A convolutive model for polyphonic instrument identification and pitch detection using combined classification[J]. Machine Learning, 2013, 15(2): 12-17.
[9] ARORA V, BEHERA L. Instrument identification using PLCA over stretched manifolds[C]// NCC 2014: Proceedings of the 2014 20th National Conference on Communications. Piscataway, NJ: IEEE, 2014: 1-5.
[10] PATIL K, PRESSNITZER D, SHAMMA S, et al. Music in our ears: the biological bases of musical timbre perception[J]. PLOS Computational Biology, 2012, 8(11): e1002759.
[11] RABINER L R, SCHAFER R W. Theory and Applications of Digital Speech Processing[M]. Upper Saddle River, NJ: Prentice Hall Press, 2011: 124-136.
[12] MEDDIS R, LOPEZ-POVEDA E A, FAY R R, et al. Computational Models of the Auditory System[M]. Berlin: Springer, 2010: 135-149.
[13] ABDI H, WILLIAMS L J. Principal component analysis[J]. Wiley Interdisciplinary Reviews: Computational Statistics, 2010, 2(4): 433-459.
[14] LU H, PLATANIOTIS K N, VENETSANOPOULOS A N. MPCA: multilinear principal component analysis of tensor objects[J]. IEEE Transactions on Neural Networks, 2008, 19(1): 1-18.
[15] University of Iowa Electronic Music Studio. A musical instrument database[DB/OL]. [2017-03-08]. http://theremin.music.uiowa.edu/MISflute.html.
[16] JIANG Z, LIN Z, DAVIS L S. Label consistent K-SVD: learning a discriminative dictionary for recognition[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2013, 35(11): 2651-2664.
[17] 韩纪庆, 张磊, 郑铁然. 语音信号处理[M]. 北京: 清华大学出版社, 2004: 76-85. (HAN J Q, ZHANG L, ZHENG T R. Speech Signal Processing[M]. Beijing: Tsinghua University Press, 2004: 76-85.)