Abstract:
Since a complete DNA chain contains a vast amount of data (typically billions of nucleotides), determining the function of each sequence segment is challenging. Several powerful models for predicting DNA sequence function have been proposed, including the CNN (convolutional neural network), RNN (recurrent neural network), and LSTM [1] (long short-term memory). However, each has flaws; for example, a plain RNN can hardly retain long-term memory. Here, we build on one of these models, DanQ, which combines a CNN with an LSTM, and extend it into an improved DanQ model that predicts DNA sequence function more efficiently. In the original DanQ model, regulatory grammar is learned jointly: the convolution layer captures regulatory motifs, and the recurrent layer captures the long-term dependencies between those motifs, thereby increasing prediction accuracy. In our tests, DanQ improves markedly on several metrics; for some regulatory markers, it achieves improvements of more than 50% in the area under the precision-recall curve.
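The hybrid architecture the abstract describes can be sketched as follows. This is a minimal, illustrative PyTorch implementation of a DanQ-style model, not the authors' code: a 1-D convolution scans one-hot-encoded DNA for motifs, max-pooling downsamples, a bidirectional LSTM (BLSTM) models long-range dependencies between motifs, and dense layers with dropout produce multi-task predictions. The layer sizes follow the published DanQ configuration, but the class name and exact hyperparameters here are assumptions for demonstration.

```python
import torch
import torch.nn as nn


class DanQLike(nn.Module):
    """Hypothetical sketch of a DanQ-style CNN + BLSTM model.

    Input: one-hot-encoded DNA of shape (batch, 4, seq_len).
    Output: sigmoid probabilities for n_targets regulatory marks.
    """

    def __init__(self, seq_len: int = 1000, n_targets: int = 919):
        super().__init__()
        # Convolution layer: learns regulatory motifs (320 filters of width 26).
        self.conv = nn.Conv1d(in_channels=4, out_channels=320, kernel_size=26)
        self.pool = nn.MaxPool1d(kernel_size=13, stride=13)
        self.drop1 = nn.Dropout(0.2)
        # Recurrent layer: BLSTM captures long-term dependencies between motifs.
        self.blstm = nn.LSTM(input_size=320, hidden_size=320,
                             batch_first=True, bidirectional=True)
        self.drop2 = nn.Dropout(0.5)
        pooled_steps = (seq_len - 26 + 1) // 13   # conv output length, then pooled
        self.fc1 = nn.Linear(pooled_steps * 2 * 320, 925)
        self.fc2 = nn.Linear(925, n_targets)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h = torch.relu(self.conv(x))              # (batch, 320, L-25)
        h = self.drop1(self.pool(h))              # (batch, 320, steps)
        h, _ = self.blstm(h.transpose(1, 2))      # (batch, steps, 640)
        h = self.drop2(h).flatten(start_dim=1)
        h = torch.relu(self.fc1(h))
        return torch.sigmoid(self.fc2(h))         # per-target probabilities


model = DanQLike()
model.eval()
with torch.no_grad():
    x = torch.zeros(2, 4, 1000)                   # two dummy 1 kb sequences
    y = model(x)
print(tuple(y.shape))
```

Training such a model would typically use a multi-label binary cross-entropy loss (`nn.BCELoss`), since each sequence can carry many regulatory marks at once; the random-dropout layers listed in the keywords correspond to the two `nn.Dropout` modules above.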
Article Information
Title: An Improved Deep Learning Model for Predicting DNA Sequence Function
Journal: Intelligent Information Management (English) | Discipline: Engineering
Keywords: BLSTM; Convolutional Neural Network; DanQ Model; Random Dropout
Year, Volume (Issue): 2020, (1)
Pages: 36-42 (7 pages) | Classification: TP1
DOI:
Journal Information
Intelligent Information Management (English), semimonthly, ISSN 2160-5912
Address: 38 Tangxunhu North Road (Guanggu Headquarters Space), Jiangxia District, Wuhan
Published articles: 114