Learning High-Order Semantic Representation for Intent Classification and Slot Filling on Low-Resource Language via Hypergraph
| dc.contributor.author | Qi, Xianglong | |
| dc.contributor.author | Gao, Yang | |
| dc.contributor.author | Wang, Ruibin | |
| dc.contributor.author | Zhao, Minghua | |
| dc.contributor.author | Cui, Shengjia | |
| dc.contributor.author | Mortazavi, Mohsen | |
| dc.date.accessioned | 2026-02-06T17:58:41Z | |
| dc.date.issued | 2022 | |
| dc.department | Doğu Akdeniz Üniversitesi | |
| dc.description.abstract | Representation of language is the first and critical task for Natural Language Understanding (NLU) in a dialogue system. Pretraining, embedding models, and fine-tuning for intent classification and slot filling are popular and well-performing approaches, but they are time-consuming and inefficient for low-resource languages. Concretely, out-of-vocabulary words and transfer across languages are two tough challenges for multilingual pretrained and cross-lingual transfer models. Furthermore, quality-proven parallel data are necessary for current frameworks. Stepping over these challenges, and unlike existing solutions, we propose a novel approach, the Hypergraph Transfer Encoding Network "HGTransEnNet". The proposed model leverages off-the-shelf, high-quality pretrained word embedding models of resource-rich languages to learn high-order semantic representations of low-resource languages in a transductive clustering manner via hypergraph modeling, which does not need parallel data. The experiments show that the representations learned by HGTransEnNet for low-resource languages are more effective on intent classification and slot-filling tasks over Indonesian and English datasets than state-of-the-art language models pretrained on large-scale multilingual or monolingual corpora. © 2022 Xianglong Qi et al. | |
| dc.identifier.doi | 10.1155/2022/8407713 | |
| dc.identifier.issn | 1024-123X | |
| dc.identifier.scopus | 2-s2.0-85138986513 | |
| dc.identifier.scopusquality | N/A | |
| dc.identifier.uri | https://doi.org/10.1155/2022/8407713 | |
| dc.identifier.uri | https://search.trdizin.gov.tr/tr/yayin/detay/ | |
| dc.identifier.uri | https://hdl.handle.net/11129/7694 | |
| dc.identifier.volume | 2022 | |
| dc.indekslendigikaynak | Scopus | |
| dc.language.iso | en | |
| dc.publisher | Hindawi Limited | |
| dc.relation.ispartof | Mathematical Problems in Engineering | |
| dc.relation.publicationcategory | Makale - Uluslararası Hakemli Dergi - Kurum Öğretim Elemanı | |
| dc.rights | info:eu-repo/semantics/openAccess | |
| dc.snmz | KA_Scopus_20260204 | |
| dc.subject | Classification (of information) | |
| dc.subject | Computational linguistics | |
| dc.subject | Embeddings | |
| dc.subject | Large dataset | |
| dc.subject | Modeling languages | |
| dc.subject | Semantics | |
| dc.subject | Speech processing | |
| dc.subject | Critical tasks | |
| dc.subject | Dialogue systems | |
| dc.subject | High-order | |
| dc.subject | Higher-order | |
| dc.subject | Hyper graph | |
| dc.subject | Low resource languages | |
| dc.subject | Natural language understanding | |
| dc.subject | Parallel data | |
| dc.subject | Semantic representation | |
| dc.subject | Filling | |
| dc.title | Learning High-Order Semantic Representation for Intent Classification and Slot Filling on Low-Resource Language via Hypergraph | |
| dc.type | Article | |
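
The abstract above describes learning representations for low-resource tokens by propagating information from resource-rich word embeddings over a hypergraph in a transductive manner. The sketch below is a minimal, hypothetical illustration of one standard HGNN-style hypergraph convolution step; it is not the authors' HGTransEnNet, and the incidence construction, function names, and toy dimensions are illustrative assumptions only.

```python
# Hypothetical sketch of one HGNN-style hypergraph convolution step:
# X' = Dv^{-1/2} H W De^{-1} H^T Dv^{-1/2} X Theta
# (generic formulation; not the paper's HGTransEnNet architecture).
import numpy as np

def hypergraph_conv(X, H, Theta, edge_weights=None):
    """One hypergraph convolution.

    X: (n_nodes, in_dim) node features, e.g. pretrained embeddings of
       resource-rich-language words mixed with low-resource tokens.
    H: (n_nodes, n_edges) binary incidence matrix; each hyperedge groups
       nodes that a clustering step judged semantically related.
    Theta: (in_dim, out_dim) learnable projection.
    """
    n_nodes, n_edges = H.shape
    w = np.ones(n_edges) if edge_weights is None else edge_weights
    Dv = (H * w).sum(axis=1)                # node degrees
    De = H.sum(axis=0)                      # hyperedge degrees
    Dv_inv_sqrt = np.diag(1.0 / np.sqrt(np.maximum(Dv, 1e-12)))
    De_inv = np.diag(1.0 / np.maximum(De, 1e-12))
    W = np.diag(w)
    # Transductive smoothing: every node aggregates features from all nodes
    # sharing a hyperedge with it, so low-resource tokens inherit structure
    # from the resource-rich embeddings in the same hyperedge.
    A = Dv_inv_sqrt @ H @ W @ De_inv @ H.T @ Dv_inv_sqrt
    return np.maximum(A @ X @ Theta, 0.0)   # ReLU activation

# Toy usage: 6 nodes, 2 hyperedges, 8-dim embeddings projected to 4 dims.
rng = np.random.default_rng(0)
X = rng.normal(size=(6, 8))
H = np.array([[1, 0], [1, 0], [1, 1], [0, 1], [0, 1], [0, 1]], dtype=float)
Theta = rng.normal(size=(8, 4))
print(hypergraph_conv(X, H, Theta).shape)   # -> (6, 4)
```

The resulting node representations could then feed downstream intent-classification and slot-filling heads, as the abstract indicates; how the hyperedges are actually built from clustering in HGTransEnNet is described in the full paper, not here.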