This book introduces basic supervised learning algorithms applicable to natural language processing (NLP) and shows how the performance of these algorithms can often be improved by exploiting the marginal distribution of large amounts of unlabeled data. One reason unlabeled data helps is data sparsity, i.e., the limited amount of labeled data available in NLP. However, in most real-world NLP applications our labeled data is also heavily biased. This book introduces extensions of supervised learning algorithms to cope with data sparsity and different kinds of sampling bias. The book is intended to be both readable by first-year students and interesting to an expert audience. My intention was to introduce what is necessary to appreciate the major challenges we face in contemporary NLP related to data sparsity and sampling bias, without spending too much time on the details of supervised learning algorithms or particular NLP applications. I use text classification, part-of-speech tagging, and dependency parsing as running examples, and limit myself to a small set of cardinal learning algorithms. I have worried less about theoretical guarantees ("this algorithm never does too badly") than about useful rules of thumb ("in this case this algorithm may perform really well"). In NLP, data is so noisy, biased, and non-stationary that few theoretical guarantees can be established, and we are typically left with our gut feelings and a catalogue of crazy ideas. I hope this book will provide its readers with both. Throughout the book we include snippets of Python code and empirical evaluations, when relevant.
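Since the synopsis mentions Python snippets and learning from unlabeled data, here is a minimal sketch of one such method, self-training, which bootstraps a supervised classifier with pseudo-labels on unlabeled text. This is not the book's own code: the toy corpus, the 0.6 confidence threshold, and the tf-idf/logistic-regression pipeline are illustrative assumptions; scikit-learn's SelfTrainingClassifier uses -1 to mark unlabeled examples.

    import numpy as np
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import Pipeline
    from sklearn.semi_supervised import SelfTrainingClassifier

    # Tiny illustrative corpus: 1 = positive, 0 = negative,
    # -1 marks unlabeled examples (SelfTrainingClassifier's convention).
    texts = [
        "a wonderful, moving film",
        "dull and painfully slow",
        "an absolute joy to watch",
        "I want those two hours back",
        "moving and wonderful throughout",
    ]
    labels = np.array([1, 0, -1, -1, -1])

    # Self-training: fit on the labeled data, pseudo-label unlabeled
    # examples whose predicted probability exceeds the threshold,
    # refit, and repeat until no new example is confident enough.
    model = Pipeline([
        ("tfidf", TfidfVectorizer()),
        ("clf", SelfTrainingClassifier(LogisticRegression(), threshold=0.6)),
    ])
    model.fit(texts, labels)
    print(model.predict(["a joy, wonderful and moving"]))

Note that the vectorizer is fit on the unlabeled texts as well, so even the feature space exploits the marginal distribution of the unlabeled data.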
"Sinopsis" puede pertenecer a otra edición de este libro.
Anders Søgaard is a father of three and a published poet, as well as a Full Professor of Computer Science at the University of Copenhagen. He is currently funded by the Novo Nordisk Foundation, the Lundbeck Foundation, and the Innovation Fund Denmark; before that, he held an ERC Starting Grant and a Google Focused Research Award. He has won best paper awards at NAACL, EACL, and CoNLL, among others. He previously wrote Semi-Supervised Learning and Domain Adaptation in NLP (Morgan & Claypool, 2013) and Cross-Lingual Word Embeddings (Morgan & Claypool, 2019), the latter with co-authors Ivan Vulic, Sebastian Ruder, and Manaal Faruqui.
"Sobre este título" puede pertenecer a otra edición de este libro.
EUR 17.04 shipping from the United States of America to Spain
EUR 5.19 shipping from the United Kingdom to Spain
Bookseller: Ria Christie Collections, Uxbridge, United Kingdom
Condition: New. Item ref. no.: ria9783031010217_new
Quantity available: more than 20
Bookseller: BuchWeltWeit Ludwig Meier e.K., Bergisch Gladbach, Germany
Paperback. Condition: New. This item is printed on demand, which takes 3-4 days longer. New stock. 104 pp. English. Item ref. no.: 9783031010217
Quantity available: 2
Bookseller: California Books, Miami, FL, United States of America
Condition: New. Item ref. no.: I-9783031010217
Quantity available: more than 20
Bookseller: AHA-BUCH GmbH, Einbeck, Germany
Paperback. Condition: New. Print on demand; new stock, printed after ordering. Item ref. no.: 9783031010217
Quantity available: 1
Bookseller: moluna, Greven, Germany
Condition: New. This is a print-on-demand item and will be printed for you after you order. Item ref. no.: 608129271
Quantity available: more than 20
Bookseller: GreatBookPrices, Columbia, MD, United States of America
Condition: New. Item ref. no.: 44545671-n
Quantity available: more than 20
Bookseller: buchversandmimpf2000, Emtmannsberg, BAYE, Germany
Paperback. Condition: New. New stock. Springer Verlag GmbH, Tiergartenstr. 17, 69121 Heidelberg. 104 pp. English. Item ref. no.: 9783031010217
Quantity available: 2
Bookseller: GreatBookPrices, Columbia, MD, United States of America
Condition: As New. Unread book in perfect condition. Item ref. no.: 44545671
Quantity available: more than 20
Bookseller: GreatBookPricesUK, Woodford Green, United Kingdom
Condition: New. Item ref. no.: 44545671-n
Quantity available: more than 20
Bookseller: Majestic Books, Hounslow, United Kingdom
Condition: New. Print on demand. Item ref. no.: 401726189
Quantity available: 4