000 06619nam a2200769 i 4500
001 8845048
003 IEEE
005 20200413152933.0
006 m eo d
007 cr cn |||m|||a
008 190927s2019 caua fob 000 0 eng d
020 _a9781681736297
_qelectronic
020 _z9781681736303
_qhardcover
020 _z9781681736280
_qpaperback
024 7 _a10.2200/S00938ED1V01Y201907IVM020
_2doi
035 _a(CaBNVSL)thg00979531
035 _a(OCoLC)1121141680
040 _aCaBNVSL
_beng
_erda
_cCaBNVSL
_dCaBNVSL
050 4 _aHM742
_b.N546 2019eb
082 0 4 _a302.23/1
_223
100 1 _aNie, Liqiang,
_eauthor.
245 1 0 _aMultimodal learning toward micro-video understanding /
_cLiqiang Nie, Meng Liu, and Xuemeng Song.
264 1 _a[San Rafael, California] :
_bMorgan & Claypool,
_c[2019]
300 _a1 PDF (xv, 170 pages) :
_bcolor illustrations.
336 _atext
_2rdacontent
337 _aelectronic
_2isbdmedia
338 _aonline resource
_2rdacarrier
490 1 _aSynthesis lectures on image, video, and multimedia processing,
_x1559-8144 ;
_v#20
538 _aMode of access: World Wide Web.
538 _aSystem requirements: Adobe Acrobat Reader.
500 _aPart of: Synthesis digital library of engineering and computer science.
504 _aIncludes bibliographical references.
505 0 _a1. Introduction -- 1.1. Micro-video proliferation -- 1.2. Practical tasks -- 1.3. Research challenges -- 1.4. Our solutions -- 1.5. Book structure
505 8 _a2. Data collection -- 2.1. Dataset I for popularity prediction -- 2.2. Dataset II for venue category estimation -- 2.3. Dataset III for micro-video routing -- 2.4. Summary
505 8 _a3. Multimodal transductive learning for micro-video popularity prediction -- 3.1. Background -- 3.2. Research problems -- 3.3. Feature extraction -- 3.4. Related work -- 3.5. Notations and preliminaries -- 3.6. Multimodal transductive learning -- 3.7. Multi-modal transductive low-rank learning -- 3.8. Summary
505 8 _a4. Multimodal cooperative learning for micro-video venue categorization -- 4.1. Background -- 4.2. Research problems -- 4.3. Related work -- 4.4. Multimodal consistent learning -- 4.5. Multimodal complementary learning -- 4.6. Multimodal cooperative learning -- 4.7. Summary
505 8 _a5. Multimodal transfer learning in micro-video analysis -- 5.1. Background -- 5.2. Research problems -- 5.3. Related work -- 5.4. External sound dataset -- 5.5. Deep multi-modal transfer learning -- 5.6. Experiments -- 5.7. Summary
505 8 _a6. Multimodal sequential learning for micro-video recommendation -- 6.1. Background -- 6.2. Research problems -- 6.3. Related work -- 6.4. Multimodal sequential learning -- 6.5. Experiments -- 6.6. Summary
505 8 _a7. Research frontiers -- 7.1. Micro-video annotation -- 7.2. Micro-video captioning -- 7.3. Micro-video thumbnail selection -- 7.4. Semantic ontology construction -- 7.5. Pornographic content identification.
506 _aAbstract freely available; full-text restricted to subscribers or individual document purchasers.
510 0 _aCompendex
510 0 _aINSPEC
510 0 _aGoogle Scholar
510 0 _aGoogle Book Search
520 _aMicro-videos, a new form of user-generated content, have been spreading widely across social platforms such as Vine, Kuaishou, and TikTok. Unlike traditional long videos, micro-videos are usually recorded by smart mobile devices, at any place, within a few seconds. Owing to their brevity and low bandwidth cost, micro-videos are gaining increasing user enthusiasm. The blossoming of micro-videos opens the door to many promising applications, ranging from network content caching to online advertising. Thus, it is highly desirable to develop an effective scheme for high-order micro-video understanding. Micro-video understanding is, however, non-trivial due to the following challenges: (1) how to represent micro-videos that convey only one or a few high-level themes or concepts; (2) how to utilize the hierarchical structure of venue categories to guide micro-video analysis; (3) how to alleviate the influence of low quality caused by complex surrounding environments and camera shake; (4) how to model multimodal sequential data, i.e., textual, acoustic, visual, and social modalities, to enhance micro-video understanding; and (5) how to construct large-scale benchmark datasets for analysis. These challenges have been largely unexplored to date. In this book, we focus on addressing them by proposing several state-of-the-art multimodal learning theories. To demonstrate the effectiveness of these models, we apply them to three practical tasks of micro-video understanding: popularity prediction, venue category estimation, and micro-video routing. In particular, we first build three large-scale real-world micro-video datasets for these tasks. We then present a multimodal transductive learning framework for micro-video popularity prediction. Furthermore, we introduce several multimodal cooperative learning approaches and a multimodal transfer learning scheme for micro-video venue category estimation. We also develop a multimodal sequential learning approach for micro-video recommendation. Finally, we conclude the book and outline future research directions in multimodal learning toward micro-video understanding.
530 _aAlso available in print.
588 _aTitle from PDF title page (viewed on September 27, 2019).
650 0 _aSocial media
_xData processing.
650 0 _aSocial media
_xForecasting.
650 0 _aLearning.
650 0 _aMultiple intelligences.
653 _amicro-video understanding
653 _amultimodal transductive learning
653 _amultimodal cooperative learning
653 _amultimodal transfer learning
653 _amultimodal sequential learning
653 _apopularity prediction
653 _avenue category estimation
653 _amicro-video recommendation
700 1 _aLiu, Meng
_c(Computer scientist),
_eauthor.
700 1 _aSong, Xuemeng
_c(Computer scientist),
_eauthor.
776 0 8 _iPrint version:
_z9781681736303
_z9781681736280
830 0 _aSynthesis lectures on image, video, and multimedia processing ;
_v#20.
830 0 _aSynthesis digital library of engineering and computer science.
856 4 0 _3Abstract with links to full text
_uhttps://doi.org/10.2200/S00938ED1V01Y201907IVM020
856 4 2 _3Abstract with links to resource
_uhttps://ieeexplore.ieee.org/servlet/opac?bknumber=8845048
999 _c562438
_d562438
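
For readers who handle this record programmatically, here is a minimal sketch of how a couple of its key fields could be rebuilt and queried in Python. It assumes the third-party pymarc library (5.x API, with the Subfield namedtuple); the field values are copied from the record above, and the code is illustrative only, not part of the record.

# Minimal sketch: rebuild and query two fields of this record with pymarc.
# Assumes: pip install pymarc (5.x API); field data copied from the record above.
from pymarc import Record, Field, Subfield

record = Record()
record.add_field(
    Field(
        tag="245",
        indicators=["1", "0"],
        subfields=[
            Subfield(code="a", value="Multimodal learning toward micro-video understanding /"),
            Subfield(code="c", value="Liqiang Nie, Meng Liu, and Xuemeng Song."),
        ],
    ),
    Field(
        tag="856",
        indicators=["4", "0"],
        subfields=[
            Subfield(code="3", value="Abstract with links to full text"),
            Subfield(code="u", value="https://doi.org/10.2200/S00938ED1V01Y201907IVM020"),
        ],
    ),
)

# Title proper: first 245 field, first $a subfield.
print(record["245"]["a"])

# All electronic-access URLs (856 $u) in the record.
print([f["u"] for f in record.get_fields("856")])

The same Record object can be serialized back to ISO 2709 with record.as_marc(), which is how a full round-trip of a record like this one would typically be done.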