Data collected by multi-modality sensors are used to detect and characterize the behavior of entities and events in a given situation. To transform multi-modality sensor data into useful information leading to actionable decisions, a robust data fusion model is essential. Such a model should be able to acquire data from multi-agent sensors and exploit the spatio-temporal characteristics of multi-modality sensors to build better situational awareness, in particular by assisting with soft fusion of multi-threaded information from a variety of sensors under task uncertainties. This book presents a novel image-based model for multi-modality data fusion. The concept is biologically inspired by the human brain's energy perceptual model. Just as the human brain has designated regions that map immediate sensory experiences and fuses collective heterogeneous sensory perceptions into a situational understanding for decision-making, the proposed image-based fusion model follows an analogous data-to-information fusion scheme for actionable decision-making in intelligent surveillance systems.
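To make the general idea of an image-based fusion scheme concrete, the following is a minimal sketch, not the book's actual method: heterogeneous sensor reports registered to a common grid are accumulated into a 2-D "energy image" with temporal decay, and cells whose fused energy crosses a threshold are flagged as actionable. The grid size, decay rate, threshold, and sensor readings are all illustrative assumptions.

```python
# Minimal sketch (illustrative, not the book's implementation) of image-based
# multi-modality fusion: heterogeneous readings are blended into a shared
# 2-D "energy map", and high-energy cells are treated as actionable.
import numpy as np

GRID = (64, 64)          # common spatial frame shared by all modalities (assumed)
DECAY = 0.9              # temporal decay applied each fusion step (assumed)
ALERT_THRESHOLD = 1.5    # fused-energy level treated as "actionable" (assumed)

def fuse_step(energy_map, readings):
    """Blend one time step of multi-modality readings into the energy map.

    readings: list of (row, col, intensity, confidence) tuples, one per
    sensor report, already registered to the common grid.
    """
    energy_map *= DECAY                      # older evidence fades over time
    for row, col, intensity, confidence in readings:
        energy_map[row, col] += confidence * intensity
    return energy_map

def actionable_regions(energy_map):
    """Return grid cells whose fused energy exceeds the alert threshold."""
    return np.argwhere(energy_map > ALERT_THRESHOLD)

# Toy usage: an acoustic and an infrared report at the same location; neither
# alone crosses the threshold, but their fused energy does.
emap = np.zeros(GRID)
emap = fuse_step(emap, [(10, 12, 1.0, 0.8),   # acoustic sensor report
                        (10, 12, 1.2, 0.9)])  # infrared sensor report
print(actionable_regions(emap))  # -> [[10 12]]
```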
"Sinopsis" puede pertenecer a otra edición de este libro.
Dr Aaron Rasheed Rababaah is an Associate Professor of Computer Science at the American University of Kuwait. He holds a BSc in Industrial Engineering, an MSc in Computer Science, and a PhD in Computer Systems Engineering. He has eight years of teaching experience at four universities. His research interests include intelligent systems, machine vision, and robotics.
"Sobre este título" puede pertenecer a otra edición de este libro.
EUR 19.49 shipping from Germany to Spain
Bookseller: moluna, Greven, Germany
Condition: New. This is a print-on-demand item and will be printed for you after your order. Author: Rababaah Aaron. Item reference no.: 151238823
Quantity available: more than 20
Bookseller: BuchWeltWeit Ludwig Meier e.K., Bergisch Gladbach, Germany
Paperback. Condition: New. This item is printed on demand; delivery takes 3-4 days longer. New stock. 240 pp. English. Item reference no.: 9783330651531
Quantity available: 2
Bookseller: AHA-BUCH GmbH, Einbeck, Germany
Paperback. Condition: New. Printed after ordering; new stock. Item reference no.: 9783330651531
Quantity available: 1
Bookseller: buchversandmimpf2000, Emtmannsberg, BAYE, Germany
Paperback. Condition: New. New stock. VDM Verlag, Dudweiler Landstraße 99, 66123 Saarbrücken. 240 pp. English. Item reference no.: 9783330651531
Quantity available: 2
Bookseller: Revaluation Books, Exeter, United Kingdom
Paperback. Condition: Brand New. 240 pages. 8.66 x 5.91 x 0.55 inches. In stock. Item reference no.: 3330651539
Quantity available: 1