Hierarchy-Dependent Cross-Platform Multi-View Feature Learning for Venue Category Prediction
Jiang SQ(蒋树强)1,2; Min WQ(闵巍庆)1,3; Mei SH(梅舒欢)1,4
Department: Robotics Laboratory
Source Publication: IEEE TRANSACTIONS ON MULTIMEDIA
ISSN: 1520-9210
Year: 2019
Volume: 21  Issue: 6  Pages: 1609-1619
Indexed By: SCI
WOS ID: WOS:000469337400021
Contribution Rank: 2
Funding Organization: Beijing Natural Science Foundation ; National Natural Science Foundation of China ; Lenovo Outstanding Young Scientists Program ; National Program for Special Support of Eminent Professionals ; National Program for Support of Top-notch Young Professionals ; China Postdoctoral Science Foundation ; State Key Laboratory of Robotics
Keywords: Feature extraction ; knowledge transfer ; supervised learning ; video signal processing ; Web 2.0
Abstract: In this paper, we focus on visual venue category prediction, which can facilitate various applications for location-based services and personalization. Considering the complementarity of different media platforms, it is reasonable to leverage venue-relevant media data from different platforms to boost prediction performance. Intuitively, recognizing a venue category involves multiple semantic cues, especially objects and scenes, which should therefore contribute jointly to venue category prediction. In addition, venues can be organized in a natural hierarchical structure, which provides prior knowledge to guide venue category estimation. Taking these aspects into account, we propose a Hierarchy-dependent Cross-platform Multi-view Feature Learning (HCM-FL) framework for venue category prediction from videos by leveraging images from other platforms. HCM-FL includes two major components, namely Cross-Platform Transfer Deep Learning (CPTDL) and Multi-View Feature Learning with the Hierarchical Venue Structure (MVFL-HVS). CPTDL is capable of reinforcing the deep network learned from videos using images from other platforms. Specifically, CPTDL first trains a deep network on videos; images from other platforms are then filtered by this learned network, and the selected images are fed back into the network to enhance it. Two kinds of pre-trained networks, on the ImageNet and Places datasets, are employed, so that both object-oriented and scene-oriented deep features can be harnessed through the enhanced networks. MVFL-HVS is then developed to enable multi-view feature fusion, and is capable of embedding the hierarchical structure ontology to support more discriminative joint feature learning. We conduct experiments on videos from Vine and images from Foursquare. The experimental results demonstrate the advantage of our proposed framework in jointly utilizing multi-platform data, multi-view deep features, and hierarchical venue structure knowledge.
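The CPTDL component described in the abstract follows a train-filter-retrain pattern. The sketch below is a minimal, hypothetical illustration of that idea (not the authors' released code): a classifier is first trained on video-domain features, then used to confidence-filter cross-platform images, and finally reinforced with the retained images. The linear classifier head, synthetic feature tensors, dataset sizes, and the 0.5 confidence threshold are all illustrative assumptions.

```python
# Minimal sketch (not the authors' implementation) of the CPTDL idea:
# 1) train a venue classifier on video-domain data,
# 2) filter cross-platform images by the trained model's confidence,
# 3) reinforce the model with the selected images.
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.utils.data import DataLoader, TensorDataset

NUM_VENUES = 20            # assumed number of venue categories
FEATURE_DIM = 512          # assumed dimensionality of a frozen backbone's features

# Stand-ins for object- or scene-oriented features from a pre-trained backbone
# (e.g. an ImageNet- or Places-trained CNN); random tensors keep the sketch runnable.
video_feats  = torch.randn(1000, FEATURE_DIM)
video_labels = torch.randint(0, NUM_VENUES, (1000,))
image_feats  = torch.randn(5000, FEATURE_DIM)   # cross-platform images, weakly labelled
image_labels = torch.randint(0, NUM_VENUES, (5000,))

classifier = nn.Linear(FEATURE_DIM, NUM_VENUES)
opt = torch.optim.Adam(classifier.parameters(), lr=1e-3)

def train(feats, labels, epochs=3):
    loader = DataLoader(TensorDataset(feats, labels), batch_size=64, shuffle=True)
    for _ in range(epochs):
        for x, y in loader:
            opt.zero_grad()
            F.cross_entropy(classifier(x), y).backward()
            opt.step()

# Step 1: learn from the video domain.
train(video_feats, video_labels)

# Step 2: keep only cross-platform images that the video-trained model
# already predicts with high confidence for their tagged venue.
with torch.no_grad():
    probs = classifier(image_feats).softmax(dim=1)
    conf = probs[torch.arange(len(image_labels)), image_labels]
keep = conf > 0.5          # assumed confidence threshold
print(f"kept {int(keep.sum())} of {len(keep)} cross-platform images")

# Step 3: reinforce the network with the selected images.
train(image_feats[keep], image_labels[keep])
```

In the full framework the paper describes, the filtering and enhancement would act on ImageNet- and Places-pre-trained deep networks rather than a linear head, and the resulting object- and scene-oriented features would then be fused by MVFL-HVS under the hierarchical venue structure.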
Language: English
WOS Subject: Computer Science, Information Systems ; Computer Science, Software Engineering ; Telecommunications
WOS Research Area: Computer Science ; Telecommunications
Funding Project: State Key Laboratory of Robotics ; China Postdoctoral Science Foundation [2017T100110] ; National Program for Support of Top-notch Young Professionals ; National Program for Special Support of Eminent Professionals ; Lenovo Outstanding Young Scientists Program ; National Natural Science Foundation of China [61602437] ; National Natural Science Foundation of China [61532018] ; Beijing Natural Science Foundation [4174106]
Citation statistics
Cited Times (WOS): 1
Document Type: Journal Article
Identifier: http://ir.sia.cn/handle/173321/24724
Collection: Robotics Laboratory
Corresponding Author: Jiang SQ (蒋树强)
Affiliation:
1. Key Laboratory of Intelligent Information Processing, Institute of Computing Technology, Chinese Academy of Sciences, Beijing 100190, China
2. University of Chinese Academy of Sciences, Beijing 100049, China
3. State Key Laboratory of Robotics, Shenyang Institute of Automation, Chinese Academy of Sciences, Shenyang 110016, China
4. Shandong University of Science and Technology, Shandong 266590, China
Recommended Citation
GB/T 7714: Jiang SQ, Min WQ, Mei SH. Hierarchy-Dependent Cross-Platform Multi-View Feature Learning for Venue Category Prediction[J]. IEEE TRANSACTIONS ON MULTIMEDIA, 2019, 21(6): 1609-1619.
APA: Jiang SQ, Min WQ, & Mei SH. (2019). Hierarchy-Dependent Cross-Platform Multi-View Feature Learning for Venue Category Prediction. IEEE TRANSACTIONS ON MULTIMEDIA, 21(6), 1609-1619.
MLA: Jiang SQ, et al. "Hierarchy-Dependent Cross-Platform Multi-View Feature Learning for Venue Category Prediction". IEEE TRANSACTIONS ON MULTIMEDIA 21.6 (2019): 1609-1619.
Files in This Item:
File Name: Hierarchy-Dependent Cross-Platform Multi-View Feature Learning for Venue Category Prediction.pdf (4658 KB)
DocType: Journal article    Version: Published version    Access: Open access    License: CC BY-NC-SA    Format: Adobe PDF
Items in the repository are protected by copyright, with all rights reserved, unless otherwise indicated.