The Generalized LR parsing algorithm (some call it "Tomita's algorithm") was originally developed in 1985 as part of my Ph.D. thesis at Carnegie Mellon University. When I was a graduate student at CMU, I tried to build a couple of natural language systems based on existing parsing methods. Their parsing speed, however, always bothered me. I sometimes wondered whether it was ever possible to build a natural language parser that could parse reasonably long sentences in a reasonable time without help from large mainframe machines. At the same time, I was always amazed by the speed of programming language compilers, because they can parse very long sentences (i.e., programs) very quickly even on workstations. There are two reasons for this. First, programming languages are considerably simpler than natural languages. Second, they have very efficient parsing methods, most notably LR. The LR parsing algorithm first precompiles a grammar into an LR parsing table, and at actual parsing time it performs shift-reduce parsing guided deterministically by that table. The key to LR's efficiency is thus the grammar precompilation, something that had never been tried for natural languages in 1985. Of course, there was a good reason why LR had never been applied to natural languages: it was simply impossible. If a context-free grammar is sufficiently more complex than those of programming languages, its LR parsing table will contain entries with multiple conflicting actions, and deterministic parsing is no longer possible.
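To make the table-driven idea and the conflict problem concrete, here is a minimal sketch in Python of a generalized shift-reduce parser. It is not the book's implementation and it omits Tomita's graph-structured stack; the ambiguous grammar S -> S + S | a, the hand-written ACTION/GOTO tables, and all names are illustrative assumptions. State 4 has a shift/reduce conflict on '+', so a deterministic LR parser could not use this table; the generalized parser simply forks a copy of its stack for each conflicting action and therefore recovers every parse of an ambiguous sentence.

# Minimal generalized shift-reduce parser (illustrative sketch only;
# NOT Tomita's graph-structured-stack algorithm, and not from the book).
#
# Grammar (deliberately ambiguous):   S -> S + S   |   a
PRODUCTIONS = {
    1: ("S", 3),  # S -> S + S   (left-hand side, length of right-hand side)
    2: ("S", 1),  # S -> a
}

# Precompiled LR-style tables.  The entry for (state 4, '+') lists two
# actions: that is the conflict a deterministic LR parser cannot handle.
ACTION = {
    (0, "a"): [("shift", 2)],
    (1, "+"): [("shift", 3)],
    (1, "$"): [("accept",)],
    (2, "+"): [("reduce", 2)],
    (2, "$"): [("reduce", 2)],
    (3, "a"): [("shift", 2)],
    (4, "+"): [("shift", 3), ("reduce", 1)],  # shift/reduce conflict: fork here
    (4, "$"): [("reduce", 1)],
}
GOTO = {(0, "S"): 1, (3, "S"): 4}

def glr_parse(tokens):
    """Return every parse tree for `tokens` (a token list ending with '$')."""
    results = []
    # A configuration is (stack of (state, tree) pairs, index of next token).
    configs = [([(0, None)], 0)]
    while configs:
        stack, i = configs.pop()
        state, tok = stack[-1][0], tokens[i]
        for action in ACTION.get((state, tok), []):  # no entry: this branch dies
            if action[0] == "shift":
                configs.append((stack + [(action[1], tok)], i + 1))
            elif action[0] == "reduce":
                lhs, length = PRODUCTIONS[action[1]]
                children = tuple(tree for _, tree in stack[-length:])
                rest = stack[:-length]
                goto_state = GOTO[(rest[-1][0], lhs)]
                configs.append((rest + [(goto_state, (lhs, children))], i))
            else:  # accept
                results.append(stack[-1][1])
    return results

# "a + a + a" is ambiguous: both groupings are returned.
for tree in glr_parse(["a", "+", "a", "+", "a", "$"]):
    print(tree)

Copying the whole stack at each conflict is the simplest way to express the nondeterminism; Tomita's contribution was to share these forked stacks in a graph-structured stack so that the number of configurations does not blow up combinatorially.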
"Sinopsis" puede pertenecer a otra edición de este libro.
The Generalized LR parsing algorithm (some call it "Tomita's algorithm") was originally developed in 1985 as a part of my Ph.D thesis at Carnegie Mellon University. When I was a graduate student at CMU, I tried to build a couple of natural language systems based on existing parsing methods. Their parsing speed, however, always bothered me. I sometimes wondered whether it was ever possible to build a natural language parser that could parse reasonably long sentences in a reasonable time without help from large mainframe machines. At the same time, I was always amazed by the speed of programming language compilers, because they can parse very long sentences (i.e., programs) very quickly even on workstations. There are two reasons. First, programming languages are considerably simpler than natural languages. And secondly, they have very efficient parsing methods, most notably LR. The LR parsing algorithm first precompiles a grammar into an LR parsing table, and at the actual parsing time, it performs shift-reduce parsing guided deterministically by the parsing table. So, the key to the LR efficiency is the grammar precompilation; something that had never been tried for natural languages in 1985. Of course, there was a good reason why LR had never been applied for natural languages; it was simply impossible. If your context-free grammar is sufficiently more complex than programming languages, its LR parsing table will have multiple actions, and deterministic parsing will be no longer possible.
"Sobre este título" puede pertenecer a otra edición de este libro.
Bookseller: Lucky's Textbooks, Dallas, TX, United States of America
Condition: New. Item reference no.: ABLIING23Mar2716030033340
Quantity available: more than 20
Bookseller: GreatBookPrices, Columbia, MD, United States of America
Condition: New. Item reference no.: 19201975-n
Quantity available: 15
Bookseller: Grand Eagle Retail, Mason, OH, United States of America
Paperback. Condition: New. Shipping may be from multiple locations in the US or from the UK, depending on stock availability. Item reference no.: 9781461368045
Quantity available: 1
Bookseller: GreatBookPrices, Columbia, MD, United States of America
Condition: As New. Unread book in perfect condition. Item reference no.: 19201975
Quantity available: 15
Bookseller: Ria Christie Collections, Uxbridge, United Kingdom
Condition: New. Item reference no.: ria9781461368045_new
Quantity available: more than 20
Bookseller: moluna, Greven, Germany
Condition: New. This is a print-on-demand item and will be printed for you after your order is placed. Item reference no.: 4194967
Quantity available: more than 20
Bookseller: Books Puddle, New York, NY, United States of America
Condition: New. 184 pp. Item reference no.: 2658584882
Quantity available: 4
Bookseller: Majestic Books, Hounslow, United Kingdom
Condition: New. Print on demand, 184 pp., B&W, 6.14 x 9.21 in (234 x 156 mm, Royal 8vo), perfect bound on white paper with gloss lamination. Item reference no.: 51007725
Quantity available: 4
Bookseller: Biblios, Frankfurt am Main, Hesse, Germany
Condition: New. Print on demand, 184 pp. Item reference no.: 1858584888
Quantity available: 4
Bookseller: BuchWeltWeit Ludwig Meier e.K., Bergisch Gladbach, Germany
Paperback. Condition: New. This item is printed on demand, which takes 3-4 days longer. 184 pp. English. Item reference no.: 9781461368045
Quantity available: 2