Federated learning enables collaborative model training across devices while preserving data privacy. However, this decentralized nature also opens new vulnerabilities, particularly to adversarial attacks and data poisoning, where malicious actors can inject corrupted data or manipulate model updates to degrade performance or extract sensitive information. As the adoption of federated learning accelerates, understanding and mitigating these threats is essential to ensuring model integrity and resilience in real-world deployments. Adversarial AI and Data Poisoning in Federated Learning provides a comprehensive examination of emerging threats, attack vectors, and defense mechanisms within federated learning systems. The book highlights vulnerabilities of federated learning architectures, explores strategies for detecting and mitigating adversarial threats, and presents real-world case studies.
Dr. Shikha Khullar is an accomplished academic and researcher with a Ph.D. in Data Mining and Artificial Intelligence. She currently serves as an Associate Professor at Poornima University, where she is recognized for her dedication to academic excellence and her dynamic involvement in research. With a deep-rooted passion for innovation and discovery, Dr. Khullar actively engages in interdisciplinary research that bridges technology and real-world applications, particularly in areas such as intelligent systems, smart data analytics, and sustainable digital solutions. Her work is characterized by a forward-thinking approach: she not only explores theoretical models but also contributes to practical implementations that address pressing societal and industrial challenges. At Poornima University, she has been instrumental in mentoring young minds, guiding Ph.D. scholars, and contributing to curriculum development in emerging areas such as Artificial Intelligence, Blockchain, and Smart Agriculture Technologies. She is a member of IAENG and ERDA.
Seller: CitiRetail, Stevenage, United Kingdom
Hardcover. Condition: New. This item is printed on demand. Shipping may be from our UK warehouse or from our Australian or US warehouses, depending on stock availability. Seller inventory #: 9798337362243
Quantity available: 1
Seller: AussieBookSeller, Truganina, VIC, Australia
Hardcover. Condition: New. This item is printed on demand. Shipping may be from our Sydney, NSW warehouse or from our UK or US warehouse, depending on stock availability. Seller inventory #: 9798337362243
Quantity available: 1
Seller: AHA-BUCH GmbH, Einbeck, Germany
Hardcover. Condition: New. New stock. Seller inventory #: 9798337362243
Quantity available: 2