X. Ai, “Research on a Mongolian Language Model Based on Speech Recognition,” Inner Mongolia University, Hohhot, 2007.
H. Bourlard and N. Morgan, “Continuous Speech Recognition by Connectionist Statistical Methods,” IEEE Transactions on Neural Networks, vol. 4, no. 6, pp. 893-909, 1993.
P. F. Brown, P. V. de Souza, and R. L. Mercer, “Class-Based N-gram Models of Natural Language,” Computational Linguistics, vol. 18, no. 4, pp. 467-479, 1992.
Y. Bengio, R. Ducharme, P. Vincent, and C. Jauvin, “A Neural Probabilistic Language Model,” Journal of Machine Learning Research, vol. 3, pp. 1137-1155, 2003.
A. Coates and A. Y. Ng, “Learning Feature Representations with K-Means,” in Neural Networks: Tricks of the Trade, Lecture Notes in Computer Science, vol. 7700, pp. 561-580, 2012.
S. F. Chen and J. Goodman, “An Empirical Study of Smoothing Techniques for Language Modeling,” in Proceedings of the 34th Annual Meeting of the Association for Computational Linguistics, Santa Cruz, California, USA, June 1996, pp. 310-318.
H. X. Hou, Q. Liu and Z. W. Liu, “Skip-N Mongolian Statistical Language Model,” Journal of Inner Mongolia University (Natural Science Edition), vol. 39, no. 2, pp. 220-224, 2008.
R. Kuhn and R. De Mori, “A Cache-Based Natural Language Model for Speech Recognition,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 12, no. 6, pp. 570-583, 1990.
R. Kneser and H. Ney, “Improved Backing-off for N-gram Language Modeling,” in Proceedings of the International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 181-184, 1995.
H. Li, D. Qu and W. L. Zhang, “Recurrent Neural Network Language Model with Global Word Vector Features,” Journal of Signal Processing, vol. 32, no. 6, pp. 715-723, 2016.
Y. X. Li, J. Q. Zhang and D. Pan, “A Study of Speech,” Journal of Integrative Plant Biology, vol. 51, no. 9, pp. 1936-1944, 2014.
T. Mikolov, “Statistical Language Models Based on Neural Networks,” Ph.D. thesis, Brno University of Technology, 2012.
T. Mikolov, M. Karafiát and L. Burget, “Recurrent Neural Network Based Language Model,” in Proceedings of Interspeech, pp. 1045-1048, 2010.
W. De Mulder, S. Bethard and M. F. Moens, “A Survey on the Application of Recurrent Neural Networks to Statistical Language Modeling,” Computer Speech & Language, vol. 30, no. 1, pp. 61-98, 2015.
Z. Q. Ma, Z. G. Zhang and R. Yan, “N-gram Based Language Identification for Mongolian Text,” Journal of Chinese Information Processing, vol. 30, no. 1, pp. 133-139, 2016.
I. Sutskever, “Training Recurrent Neural Networks,” Ph.D. thesis, University of Toronto, Toronto, 2013.
A. Vinciarelli, S. Bengio and H. Bunke, “Offline Recognition of Unconstrained Handwritten Texts Using HMMs and Statistical Language Models,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 26, no. 6, pp. 709-720, 2004.
L. Wang, J. A. Yang and L. Chen, “Recurrent Neural Network Based Chinese Language Modeling Method,” Technical Acoustics, vol. 34, no. 5, pp. 431-436, 2015.
P. Xu and F. Jelinek, “Random Forests in Language Modeling,” in Proceedings of the 2004 Conference on Empirical Methods in Natural Language Processing (EMNLP 2004), Barcelona, Spain, July 2004, pp. 325-332.
Y. K. Xing and S. P. Ma, “A Survey on Statistical Language Models,” Computer Science, vol. 30, no. 9, pp. 22-26, 2003.
C. X. Zhai, “Statistical Language Models for Information Retrieval,” Now Publishers Inc., 2008.
J. Zhang, D. Qu and Z. Li, “Recurrent Neural Network Language Model Based on Word Vector Features,” Pattern Recognition and Artificial Intelligence, vol. 28, no. 4, pp. 299-305, 2015.
L. Zhou, “Exploration of the Working Principle and Application of Word2vec,” Library and Information Guide, no. 2, pp. 145-148, 2015.