In today’s world, where intelligent technologies are deeply transforming human-computer interaction and virtual reality, multi-modal human modeling, analysis and synthesis have become central topics in computer vision. As application scenarios grow increasingly complex, new techniques continue to emerge to address these challenges, and they demand systematic summarization and practical guidance.
To meet this need, Multi-Modal Human Modeling, Analysis and Synthesis takes a structured approach, building a comprehensive technical framework for multi-modal human modeling, analysis and synthesis that progresses from local details to holistic views, and from facial features to body dynamics.
This book begins by examining the anatomical structures and characteristics of human faces and bodies, then analyzes how traditional methods and deep learning approaches provide robust optimization solutions for modeling. For example, it explores how to address challenges in face recognition caused by lighting changes, occlusions, facial expressions and aging, as well as methods for body localization, reconstruction, recognition and anomaly detection in multi-modal scenarios. It also explains how multi-modal data can drive realistic face and body synthesis. A standout feature is its focus on Huawei’s MindSpore framework, bridging the gap between algorithms and engineering through practical case studies: from building face detection and recognition pipelines with the MindSpore toolkit, to accelerating model training via automatic parallel computing, to solving large language model (LLM) training challenges, each step is supported by reproducible code and design logic.
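As a flavor of the kind of hands-on material described above, the following is a minimal, purely illustrative sketch (not taken from the book) of a MindSpore face-embedding model run in graph mode; the class name FaceEmbedder, the layer sizes and the embedding dimension are hypothetical choices for demonstration only.

# Illustrative sketch only (not code from the book): a toy MindSpore
# face-embedding model in graph mode. Names and sizes are hypothetical.
import numpy as np
import mindspore as ms
import mindspore.nn as nn
import mindspore.ops as ops

ms.set_context(mode=ms.GRAPH_MODE)  # whole-graph compilation and optimization

class FaceEmbedder(nn.Cell):
    """Map a face crop to an L2-normalized embedding for recognition."""
    def __init__(self, embed_dim=128):
        super().__init__()
        self.features = nn.SequentialCell(
            nn.Conv2d(3, 32, 3, stride=2), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2), nn.ReLU(),
        )
        self.fc = nn.Dense(64, embed_dim)
        self.l2norm = ops.L2Normalize(axis=1)

    def construct(self, x):
        x = self.features(x)            # (N, 64, H/4, W/4)
        x = x.mean(axis=(2, 3))         # global average pooling -> (N, 64)
        return self.l2norm(self.fc(x))  # unit-length embeddings

net = FaceEmbedder()
faces = ms.Tensor(np.random.rand(4, 3, 112, 112), ms.float32)  # dummy face crops
print(net(faces).shape)  # (4, 128)

# For multi-device training, the book's case studies rely on MindSpore's
# automatic parallelism; conceptually this is enabled with, e.g.,
# ms.set_auto_parallel_context(parallel_mode="auto_parallel") before training.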
Designed for researchers and engineers in computer vision and AI, this book balances theoretical foundations with industry-ready technical details. Whether you aim to enhance the reliability of biometric recognition, explore creative possibilities in virtual-real interactions or optimize the deployment of deep learning frameworks, this guide serves as an essential link between academic advancements and real-world applications.
"Sinopsis" puede pertenecer a otra edición de este libro.
Jun Yu is currently an associate professor and laboratory director with the Department of Automation and the Institute of Advanced Technology, University of Science and Technology of China. His research interests include multimedia computing and intelligent robotics. He has published more than 200 journal articles and conference papers in venues such as TPAMI, IJCV, JMLR, TIP and TMM. He has received six Best Paper Awards from premier venues, including CVPR PBVS, ICCV MFR, ICME and FG, and has won more than 60 championships in grand challenges held at NeurIPS, CVPR, ICCV, MM, ECCV, IJCAI and AAAI.
Changwei Luo is an assistant research fellow at the Department of Electronic Engineering, Tsinghua University, and also works with the Academy of Military Sciences, Beijing, China. His research interests cover computer vision and human-machine interaction. He has published more than 40 papers.
Chang Wen Chen is currently Chair Professor of Visual Computing at The Hong Kong Polytechnic University. He was previously an Empire Innovation Professor of Computer Science and Engineering at the University at Buffalo, State University of New York, from 2008 to 2021. He also served as Dean of the School of Science and Engineering at The Chinese University of Hong Kong, Shenzhen, from 2017 to 2020. He was the Allen Henry Endowed Chair Professor at the Florida Institute of Technology from 2003 to 2007, and he was on the faculty of Electrical and Computer Engineering at the University of Missouri-Columbia from 1996 to 2003 and at the University of Rochester from 1992 to 1996.
He is currently the Associate Editor-in-Chief of IEEE Transactions on Biometrics, Behavior, and Identity Science. He has been an Editor-in-Chief or Editor for several other major IEEE transactions and journals, including the IEEE Transactions on Multimedia, IEEE Transactions on Circuits and Systems for Video Technology, Proceedings of the IEEE, IEEE Journal on Selected Areas in Communications, and IEEE Journal on Emerging and Selected Topics in Circuits and Systems. He has served as conference chair for several major IEEE, ACM and SPIE conferences related to multimedia, video communications and signal processing. His research has been supported by NSF, DARPA, the Air Force, NASA, the Whitaker Foundation, Microsoft, Intel, Kodak, Huawei, and Technicolor.
Chen received his BS degree from the University of Science and Technology of China in 1983, his MSEE degree from the University of Southern California in 1986, and his Ph.D. degree from the University of Illinois at Urbana-Champaign in 1992. He and his students have received nine Best Paper or Best Student Paper Awards. He has also received several research and professional achievement awards, including the Sigma Xi Excellence in Graduate Research Mentoring Award in 2003, the Alexander von Humboldt Research Award in 2009, the University at Buffalo Exceptional Scholar - Sustained Achievement Award in 2012, the State University of New York System Chancellor’s Award for Excellence in Scholarship and Creative Activities in 2016, and the Distinguished ECE Alumni Award from the University of Illinois at Urbana-Champaign in 2019. He has been an IEEE Fellow since 2004, an SPIE Fellow since 2007, and a member of Academia Europaea since 2021.
"Sobre este título" puede pertenecer a otra edición de este libro.
EUR 19.49 shipping from Germany to Spain
Bookseller: moluna, Greven, Germany
Condition: New. This is a print-on-demand item and will be printed after you place your order. Item reference no.: 2294618875
Quantity available: more than 20
Bookseller: Majestic Books, Hounslow, United Kingdom
Condition: New. Item reference no.: 409165764
Quantity available: 3
Bookseller: THE SAINT BOOKSTORE, Southport, United Kingdom
Hardback. Condition: New. New copy, usually dispatched within 4 working days. Item reference no.: B9781032527642
Quantity available: more than 20
Bookseller: Books Puddle, New York, NY, United States
Condition: New. Item reference no.: 26404021275
Quantity available: 3
Bookseller: Biblios, Frankfurt am Main, Hesse, Germany
Condition: New. Item reference no.: 18404021265
Quantity available: 3
Bookseller: THE SAINT BOOKSTORE, Southport, United Kingdom
Hardback. Condition: New. This item is printed on demand. New copy, usually dispatched within 5-9 working days. Item reference no.: C9781032527642
Quantity available: more than 20
Bookseller: CitiRetail, Stevenage, United Kingdom
Hardcover. Condition: New. Shipping may be from our UK warehouse or from our Australian or US warehouses, depending on stock availability. Item reference no.: 9781032527642
Quantity available: 1
Bookseller: Grand Eagle Retail, Bensenville, IL, United States
Hardcover. Condition: New. Shipping may be from multiple locations in the US or from the UK, depending on stock availability. Item reference no.: 9781032527642
Quantity available: 1