Int J Performability Eng, 2025, Vol. 21, Issue (2): 65-73. DOI: 10.23940/ijpe.25.02.p1.6573
• Original article •
Akanksha Mehndiratta* and Krishna Asawa
Cite this article: Akanksha Mehndiratta and Krishna Asawa. Modeling Discourse for Dialogue Systems using Spectral Learning [J]. Int J Performability Eng, 2025, 21(2): 65-73.