This book presents an in-depth exploration of multimodal learning toward recommendation, along with a comprehensive survey of the most important research topics and state-of-the-art methods in this area.
First, it presents a semantic-guided feature distillation method, which employs a teacher-student framework to robustly extract effective recommendation-oriented features from generic multimodal features. Next, it introduces a novel multimodal attentive metric learning method to model users' diverse preferences for various items. Then it proposes a disentangled multimodal representation learning recommendation model, which can capture users' fine-grained attention to different modalities on each factor in user preference modeling. Furthermore, a meta-learning-based multimodal fusion framework is developed to model the various relationships among multimodal information. Building on the success of disentangled representation learning, it further proposes an attribute-driven disentangled representation learning method, which uses attributes to guide the disentanglement process in order to improve the interpretability and controllability of conventional recommendation methods. Finally, the book concludes with future research directions in multimodal learning toward recommendation.
The book is suitable for graduate students and researchers who are interested in multimodal learning and recommender systems. The multimodal learning methods presented are also applicable to other retrieval- or ranking-related research areas, such as image retrieval, moment localization, and visual question answering.
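To give a flavor of the first method mentioned above, the teacher-student idea behind semantic-guided feature distillation can be sketched in a few lines: a frozen "teacher" supplies generic multimodal item features, and a small "student" projection is trained to reproduce teacher-guided targets in a compact, recommendation-oriented space. This is a hypothetical, minimal illustration only; all names (`teacher_feats`, `proj`, `W`) are made up here, and the book's actual method is far more elaborate.

```python
import numpy as np

rng = np.random.default_rng(0)

# "Generic" multimodal features for 32 items from a frozen teacher encoder.
teacher_feats = rng.normal(size=(32, 8))

# Teacher-guided targets in a lower-dimensional, task-oriented space
# (here simply a fixed random projection, standing in for semantic guidance).
proj = rng.normal(size=(8, 4))
targets = teacher_feats @ proj

# Student: a linear map trained by gradient descent on an MSE
# distillation loss between its output and the teacher-guided targets.
W = np.zeros((8, 4))
lr = 0.1
for _ in range(1000):
    pred = teacher_feats @ W
    grad = teacher_feats.T @ (pred - targets) / len(teacher_feats)
    W -= lr * grad

mse = float(np.mean((teacher_feats @ W - targets) ** 2))
print(mse < 1e-4)  # the student has matched the teacher-guided targets
```

In a real distillation setup the student would be a deep network, the targets would come from semantic supervision rather than a random projection, and the distillation loss would be combined with a recommendation loss.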
"Sinopsis" puede pertenecer a otra edición de este libro.
Fan Liu is a Research Fellow with the School of Computing, National University of Singapore (NUS). His research interests lie primarily in multimedia computing and information retrieval. His work has been published in top venues, including ACM SIGIR, MM, WWW, TKDE, TOIS, TMM, and TCSVT. He is an area chair of ACM MM and a senior PC member of CIKM.
Zhenyang Li is a postdoctoral researcher with the Hong Kong Generative AI Research and Development Center Limited. His research interests are primarily in recommendation and visual question answering. His work has been published in top venues, including ACM MM, TIP, and TMM.
Liqiang Nie is a Professor and the Dean of the School of Computer Science and Technology, Harbin Institute of Technology (Shenzhen). His research interests are primarily in multimedia computing and information retrieval. He has co-authored more than 200 articles and four books. He is a regular area chair of ACM MM, NeurIPS, IJCAI, and AAAI, and a member of the ICME steering committee. He has received many awards, including the ACM MM and SIGIR best paper honorable mentions in 2019, SIGMM Rising Star in 2020, TR35 China 2020, DAMO Academy Young Fellow in 2020, and SIGIR best student paper in 2021.
"Sobre este título" puede pertenecer a otra edición de este libro.
Bookseller: Best Price, Torrance, CA, United States of America
Condition: New. SUPER FAST SHIPPING. Item ref.: 9783031831874
Quantity available: 1
Bookseller: GreatBookPrices, Columbia, MD, United States of America
Condition: New. Item ref.: 49563644-n
Quantity available: 1
Bookseller: Grand Eagle Retail, Bensenville, IL, United States of America
Paperback. Condition: New. Shipping may be from multiple locations in the US or from the UK, depending on stock availability. Item ref.: 9783031831874
Quantity available: 1
Bookseller: BuchWeltWeit Ludwig Meier e.K., Bergisch Gladbach, Germany
Paperback. Condition: New. This item is printed on demand; it takes 3-4 days longer. New stock. 152 pp. English. Item ref.: 9783031831874
Quantity available: 2
Bookseller: GreatBookPrices, Columbia, MD, United States of America
Condition: As New. Unread book in perfect condition. Item ref.: 49563644
Quantity available: 1
Bookseller: moluna, Greven, Germany
Condition: New. This is a print-on-demand item and will be printed for you after you place your order. Item ref.: 2054249771
Quantity available: More than 20
Bookseller: Books Puddle, New York, NY, United States of America
Condition: New. Item ref.: 26403601002
Quantity available: 4
Bookseller: Majestic Books, Hounslow, United Kingdom
Condition: New. Print on demand. Item ref.: 410634677
Quantity available: 4
Bookseller: Biblios, Frankfurt am Main, Hesse, Germany
Condition: New. Print on demand. Item ref.: 18403600992
Quantity available: 4
Bookseller: Revaluation Books, Exeter, United Kingdom
Paperback. Condition: Brand New. 169 pages. 9.25x6.10x9.25 inches. In stock. Item ref.: x-303183187X
Quantity available: 1