Abstract:
Most state-of-the-art (SOTA) Neural Machine Translation (NMT) systems today achieve outstanding results based only on large parallel corpora. Large-scale parallel corpora for high-resource languages are easily obtainable. However, the translation quality of NMT for morphologically rich languages is still unsatisfactory, mainly because of the data sparsity problem encountered in Low-Resource Languages (LRLs). In the low-resource NMT paradigm, Transfer Learning (TL) has developed into one of the most efficient methods. It is difficult for a model trained on high-resource languages to capture the information of both the parent and child models, since the initially trained model contains only the lexicon features and word embeddings of the parent model rather than those of the child language. In this work, we address this issue by proposing a language-independent Hybrid Transfer Learning (HTL) method for LRLs that shares lexicon embeddings between parent and child languages without leveraging back translation or manually injecting noise. First, we train the High-Resource Languages (HRLs) as the parent model with their vocabularies. Then, we combine the parent and child language pairs using an oversampling method to train the hybrid model, which is initialized by the previously trained parent model. Finally, we fine-tune the morphologically rich child model using the hybrid model. Besides, we report some interesting findings about the original TL approach. Experimental results show that our model consistently outperforms five SOTA methods on two languages, Azerbaijani (Az) and Uzbek (Uz). Meanwhile, our approach is practical and significantly better, achieving improvements of up to 4.94 and 4.84 BLEU points for the low-resource child languages Az → Zh and Uz → Zh, respectively.
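The three-stage training recipe described in the abstract (parent model → hybrid model on an oversampled mixed corpus → fine-tuned child model, with a lexicon embedding shared through a joint vocabulary) can be sketched as follows. This is a minimal illustrative sketch under stated assumptions, not the authors' implementation: all names (build_shared_vocab, oversample, train_nmt, hybrid_transfer_learning) are hypothetical placeholders, and train_nmt stands in for training a real NMT model with an actual toolkit.

```python
# Minimal sketch of the Hybrid Transfer Learning (HTL) pipeline from the abstract.
# All function names are hypothetical; train_nmt is a stub for real NMT training.
import random

def build_shared_vocab(parent_pairs, child_pairs):
    """Collect a joint vocabulary so lexicon embeddings can be shared
    between the parent and child languages."""
    vocab = set()
    for src, tgt in parent_pairs + child_pairs:
        vocab.update(src.split())
        vocab.update(tgt.split())
    return sorted(vocab)

def oversample(child_pairs, target_size):
    """Repeat low-resource child pairs (with replacement) until the child
    portion of the hybrid corpus roughly matches the parent portion."""
    extra = max(0, target_size - len(child_pairs))
    return child_pairs + random.choices(child_pairs, k=extra)

def train_nmt(corpus, vocab, init_params=None):
    """Placeholder for NMT training; a real implementation would initialize
    a model from init_params (if given) and optimize it on `corpus`."""
    return {"vocab_size": len(vocab), "num_pairs": len(corpus),
            "warm_started": init_params is not None}

def hybrid_transfer_learning(parent_pairs, child_pairs):
    vocab = build_shared_vocab(parent_pairs, child_pairs)

    # Stage 1: train the high-resource parent model with the shared vocabulary.
    parent_model = train_nmt(parent_pairs, vocab)

    # Stage 2: mix parent data with oversampled child data and continue
    # training, initialized from the parent model (the "hybrid" model).
    hybrid_corpus = parent_pairs + oversample(child_pairs, len(parent_pairs))
    hybrid_model = train_nmt(hybrid_corpus, vocab, init_params=parent_model)

    # Stage 3: fine-tune on the child pairs only, initialized from the hybrid model.
    return train_nmt(child_pairs, vocab, init_params=hybrid_model)

if __name__ == "__main__":
    parent = [("ein beispiel", "an example")] * 1000  # toy high-resource pairs
    child = [("bir misol", "an example")] * 50        # toy low-resource pairs
    print(hybrid_transfer_learning(parent, child))
```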
Document Information
Title: Enriching the Transfer Learning with Pre-Trained Lexicon Embedding for Low-Resource Neural Machine Translation
Source journal: 清华大学学报自然科学版(英文版) (Tsinghua Science and Technology)
Section: REGULAR ARTICLES
Year, volume (issue): 2022, (1)
Pages: 150-163 (14 pages)
Language: English
DOI: 10.26599/TST.2020.9010029
Journal Information
Journal: 清华大学学报自然科学版(英文版) (Tsinghua Science and Technology)
Frequency: Bimonthly
ISSN: 1007-0214
CN: 11-3745/N
Format: 16开
Address: 北京市海淀区双清路学研大厦B座908 (Room 908, Building B, Xueyan Building, Shuangqing Road, Haidian District, Beijing)
Founded: 1996
Language: English
Published articles: 2269
Total downloads: 0