SIA OpenIR > Robotics Laboratory
Multi-class Latent Concept Pooling for computer-aided endoscopy diagnosis
Wang S(王帅); Cong Y(丛杨); Fan HJ(范慧杰); Fan BJ(范保杰); Liu LQ(刘连庆); Yang YS(杨云生); Tang YD(唐延东); Zhao HC(赵怀慈); Yu HB(于海斌)
Department: Robotics Laboratory
Keywords: Computer-aided Diagnosis; Multi-class Sparse Dictionary Learning; Latent Concept Pooling; Endoscopy
Journal: ACM Transactions on Multimedia Computing, Communications and Applications
ISSN: 1551-6857
Publication Year: 2017
Volume: 13, Issue: 2, Pages: 1-18
Indexed by: SCI; EI
EI Accession Number: 20171503556213
WOS ID: WOS:000401537300003
Affiliation Ranking: 1
Funding: NSFC (61375014, 61533015, U1613214, 61333019, and 61401455)
Abstract: Successful computer-aided diagnosis systems typically rely on training datasets containing sufficient and richly annotated images. However, detailed image annotation is often time-consuming and subjective, especially for medical images, which becomes the bottleneck for collecting large datasets and thus for building computer-aided diagnosis systems. In this article, we design a novel computer-aided endoscopy diagnosis system that addresses the multi-class classification problem of electronic endoscopy medical records (EEMRs) containing sets of frames, where labels of EEMRs can be mined from the corresponding text records using an automatic text-matching strategy without special manual labeling. With unambiguous EEMR labels and ambiguous frame labels, we propose a simple but effective pooling scheme called Multi-class Latent Concept Pooling, which learns a codebook from EEMRs of different classes step by step and encodes EEMRs based on a soft weighting strategy. With our method, a computer-aided diagnosis system can be extended to new unseen classes with ease and applied to the standard single-instance classification problem even when detailed annotated images are unavailable. To validate our system, we collect 1,889 EEMRs with more than 59K frames and successfully mine labels for 348 of them. The experimental results show that our proposed system significantly outperforms state-of-the-art methods. Moreover, we apply the learned latent concept codebook to detect abnormalities in endoscopy images and compare it with a supervised learning classifier; the evaluation shows that our codebook learning method can effectively extract the true prototypes related to different classes from the ambiguous data.
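The soft-weighting encoding described in the abstract can be sketched roughly as follows. This is an illustrative reconstruction, not the authors' exact formulation: the Gaussian kernel, the bandwidth `sigma`, and the final max-pooling step are all assumptions made for the sketch.

```python
import numpy as np

def soft_weight_encode(frames, codebook, sigma=1.0):
    """Encode a variable-length set of frame descriptors against a
    latent concept codebook, producing one fixed-length vector.

    frames:   (n_frames, d) array of per-frame features
    codebook: (n_atoms, d) array of learned latent concept prototypes
    Returns an (n_atoms,) encoding of the whole record.
    """
    # Squared Euclidean distance between every frame and every codebook atom
    d2 = ((frames[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
    # Soft assignment: closer atoms receive exponentially larger weights
    w = np.exp(-d2 / (2.0 * sigma ** 2))
    # Normalize per frame so each frame contributes a unit of weight
    w /= w.sum(axis=1, keepdims=True)
    # Max-pool over frames: an atom's score is its strongest response
    return w.max(axis=0)

rng = np.random.default_rng(0)
code = soft_weight_encode(rng.normal(size=(50, 8)), rng.normal(size=(16, 8)))
print(code.shape)  # one fixed-length vector per record, regardless of frame count
```

The key property this illustrates is that records with different numbers of frames map to encodings of the same length, which is what lets set-level (EEMR) labels drive a standard classifier.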
Language: English
WOS Headings: Science & Technology; Technology
WOS Categories: Computer Science, Information Systems; Computer Science, Software Engineering; Computer Science, Theory & Methods
WOS Keywords: ABNORMAL EVENT DETECTION; CAPSULE ENDOSCOPY; IMAGE CLASSIFICATION; FEATURE-SELECTION; VIDEO SEGMENTATION; RECOGNITION; FEATURES; DESCRIPTORS; INFORMATION
WOS Research Area: Computer Science
Document Type: Journal Article
Identifier: http://ir.sia.cn/handle/173321/20369
Collection: Robotics Laboratory
Corresponding Author: Cong Y (丛杨)
Author Affiliations:
1. Shenyang Institute of Automation, Chinese Academy of Sciences, Shenyang 110016, China
2. College of Automation, Nanjing University of Posts and Telecommunications, Nanjing 210003, China
3. Chinese PLA General Hospital, Beijing 100853, China
Recommended Citation:
GB/T 7714: Wang S, Cong Y, Fan HJ, et al. Multi-class Latent Concept Pooling for computer-aided endoscopy diagnosis[J]. ACM Transactions on Multimedia Computing, Communications and Applications, 2017, 13(2): 1-18.
APA: Wang S., Cong Y., Fan HJ., Fan BJ., Liu LQ., ... & Yu HB. (2017). Multi-class Latent Concept Pooling for computer-aided endoscopy diagnosis. ACM Transactions on Multimedia Computing, Communications and Applications, 13(2), 1-18.
MLA: Wang S, et al. "Multi-class Latent Concept Pooling for computer-aided endoscopy diagnosis." ACM Transactions on Multimedia Computing, Communications and Applications 13.2 (2017): 1-18.
Files in This Item:
File Name/Size: Multi-class Latent Concept Pooling for computer-aided endoscopy diagnosis.pdf (1531KB)
Document Type: Journal Article
Version: Author's Accepted Manuscript
Access: Open Access
License: ODC PDDL
File Name: Multi-class Latent Concept Pooling for computer-aided endoscopy diagnosis.pdf
Format: Adobe PDF
Unless otherwise stated, all content in this system is protected by copyright, and all rights are reserved.