Building on the success of the first edition, which offered a practical introductory approach to the techniques of error correction, this book, now fully revised and updated, provides a comprehensive treatment of the subject and includes a wealth of additional features. The Art of Error Correcting Coding, Second Edition explores intermediate and advanced level concepts as well as those which will appeal to the novice.
All key topics are discussed, including Reed-Solomon codes, Viterbi decoding, and the soft-output decoding algorithms MAP, log-MAP and max-log-MAP. The reliability-based GMD and Chase algorithms are examined, as are turbo codes, both serially and parallel concatenated, as well as low-density parity-check (LDPC) codes and their iterative decoders.
This edition provides an essential resource for engineers, computer scientists and graduate students alike for understanding and applying ECC techniques in the transmission and storage of digital information.
"Sinopsis" puede pertenecer a otra edición de este libro.
Robert H. Morelos-Zaragoza received BSEE and MSEE degrees from the National Autonomous University of Mexico (UNAM) in 1985 and 1987, respectively, and a PhD in Electrical Engineering from the University of Hawaii at Manoa in 1992. He has held numerous research posts in Mexico and Japan. In 2002, he joined the Department of Electrical Engineering at San José State University as an Associate Professor. His current research interests include error correcting coding (ECC/FEC), advanced digital communication receiver design, software-defined radio (SDR), space-time signal processing techniques and ultra-wideband (UWB) communication systems. Prof. Morelos-Zaragoza is a Senior Member of the IEEE and a member of the IEICE (Japan) and of Eta Kappa Nu.
The history of error correcting coding (ECC) started with the introduction of the Hamming codes (Hamming 1950), at or about the same time as the seminal work of Shannon (1948). Shortly afterward, Golay codes were invented (Golay 1949). These first two classes of codes are optimal, and will be defined in a subsequent section.
Figure 1.1 shows the block diagram of a canonical digital communications/storage system. This is the famous Figure 1 in most books on the theory of ECC and digital communications (Benedetto and Biglieri 1999). The information source and destination will include any source coding scheme matched to the nature of the information. The ECC encoder takes the information symbols from the source as input and adds redundant symbols to them, so that most of the errors introduced in modulating a signal, transmitting it over a noisy medium and demodulating it can be corrected (Massey 1984; McEliece 1977; Moon 2005).
Usually, the channel is assumed to be such that samples of an additive noise process are added to the modulated symbols (in their equivalent complex baseband representation). The noise samples are assumed to be independent of the source symbols. This model is relatively easy to treat mathematically and includes additive white Gaussian noise (AWGN) channels, flat Rayleigh fading channels and binary symmetric channels (BSC). Frequency-selective channels can also be included, since techniques such as spread spectrum and multicarrier modulation (MCM) effectively transform them into either AWGN channels or flat Rayleigh fading channels.
At the receiver end, the ECC decoder utilizes the redundant symbols and their relationship with the information symbols in order to correct channel errors. In the case of error detection, the ECC decoder can best be thought of as a re-encoder of the received information, followed by a check that the redundant symbols it generates are the same as those received.
In classical ECC theory, the combination of modulation, noisy medium and demodulation was modeled as a discrete memoryless channel with input $\bar{v}$ and output $\bar{r}$. An example of this is binary transmission over an AWGN channel, which is modeled as a BSC. This is illustrated in Figure 1.2. The BSC has a probability of channel error $p$ (also called the transition probability) equal to the probability of a bit error for binary signaling over an AWGN channel,
$p = Q\left(\sqrt{2E_b/N_0}\right)$, (1.1)
where $E_b/N_0$ is the energy-per-bit-to-noise ratio (also referred to as the bit signal-to-noise ratio (SNR), or SNR per bit) and
$Q(x) = \frac{1}{\sqrt{2\pi}} \int_x^{\infty} e^{-z^2/2}\, dz$ (1.2)
is the Gaussian Q-function. In terms of the complementary error function, the Q-function can be written as
$Q(x) = \frac{1}{2}\operatorname{erfc}\left(\frac{x}{\sqrt{2}}\right)$. (1.3)
Equation (1.2) is useful in analytical derivations, while Equation (1.3) is used when computing performance bounds and approximations with C programs or Matlab scripts.
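As an illustration of such a computation, the following minimal C sketch (illustrative, not from the book) evaluates the BSC crossover probability of Equation (1.1) through the erfc form of Equation (1.3), using the C99 math library:

```c
#include <math.h>
#include <stdio.h>

/* Gaussian Q-function via the complementary error function, Equation (1.3). */
static double gaussian_q(double x)
{
    return 0.5 * erfc(x / sqrt(2.0));
}

int main(void)
{
    /* BSC crossover probability p = Q(sqrt(2 Eb/N0)), Equation (1.1),
       for a range of bit SNR values given in dB. */
    for (double ebn0_db = 0.0; ebn0_db <= 10.0; ebn0_db += 2.0) {
        double ebn0 = pow(10.0, ebn0_db / 10.0); /* dB to linear scale */
        double p = gaussian_q(sqrt(2.0 * ebn0));
        printf("Eb/N0 = %4.1f dB  ->  p = %.3e\n", ebn0_db, p);
    }
    return 0;
}
```

At $E_b/N_0 = 0$ dB this prints p ≈ 7.86e-02, the familiar bit error probability of uncoded binary signaling over the AWGN channel.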
Massey (1974) suggested considering ECC and modulation as a single entity, known in the modern literature as coded modulation. By jointly designing codes and signal constellations, this approach achieves higher efficiency and coding gain than the serial concatenation of ECC and modulation. Several methods of combining coding and modulation are covered in this book, including trellis-coded modulation (TCM) (Ungerboeck 1982) and multilevel coded modulation (MCM) (Imai and Hirakawa 1977). In a coded modulation system, the (soft-decision) channel outputs are processed directly by the decoder. In contrast, in a classical ECC system, the hard-decision bits from the demodulator are fed to a binary decoder.
Codes can be combined in several ways. An example of serial concatenation (that is, concatenation in the classical sense) is the following. For years, the most popular concatenated ECC scheme has been the combination of an outer Reed-Solomon (RS) code, through intermediate interleaving, with an inner binary convolutional code. This scheme has been used in numerous applications, ranging from space communications to digital broadcasting of high-definition television. The basic idea is that the soft-decision decoder of the convolutional code produces bursts of errors that can be broken into smaller pieces by the deinterleaving process and handled effectively by the RS decoder. RS codes are nonbinary codes that operate on symbols composed of several bits and can deal with multiple bursts of errors. Serial concatenation has the advantage of requiring two separate decoders, one for the inner code and one for the outer code, instead of a single but very complex decoder for the overall code.
This book examines these types of ECC systems. First, basic code constructions and their decoding algorithms in the Hamming space (that is, dealing with bits) are presented. In the second part of the book, important soft-decision decoding (SDD) algorithms for binary transmission are introduced. These algorithms work over the Euclidean space and achieve a reduction in the required transmitted power per bit of at least 2 dB compared with Hamming-space (hard-decision) decoders. Several kinds of soft-decision decoders are considered, with attention given to their algorithmic aspects (how they work) rather than their theoretical aspects (why they work). Finally, combinations of codes and interleaving for iterative decoding, and of coding and modulation for bandwidth-efficient transmission, are the topics of the last part of the book.
1.1 Error correcting coding: Basic concepts
All error correcting codes are based on the same basic principle: redundancy is added to information in order to correct any errors that may occur in the process of transmission or storage. In a basic (and practical) form, redundant symbols are appended to information symbols to obtain a coded sequence or code word. For the purpose of illustration, a code word obtained by encoding with a block code is shown in Figure 1.3. Such an encoding is said to be systematic. Systematic encoding means that the information symbols always appear in the first (leftmost) $k$ positions of a code word. The remaining (rightmost) $n - k$ symbols in a code word are a function of the information symbols, and provide redundancy that can be used for error correction and/or detection purposes. The set of all code sequences is called an error correcting code, and will henceforth be denoted by $C$.
1.1.1 Block codes and convolutional codes
According to the manner in which redundancy is added to messages, ECC can be divided into two classes: block and convolutional. Both types of coding schemes have found practical applications. Historically, convolutional codes have been preferred, apparently because of the availability of the soft-decision Viterbi decoding algorithm and the belief over many years that block codes could not be efficiently decoded with soft-decisions. However, recent developments in the theory and design of SDD algorithms for linear block codes have helped to dispel this belief. Moreover, the best ECC known to date remain block codes (long irregular low-density parity-check (LDPC) codes).
Block codes process the information on a block-by-block basis, treating each block of information bits independently from others. In other words, block coding is a memoryless operation, in the sense that code words are independent from each other. In contrast, the output of a convolutional encoder depends not only on the current input information, but also on previous inputs or outputs, either on a block-by-block or a bit-by-bit basis. For simplicity of exposition, we begin with a study of the structural properties of block codes. Many of these properties are common to both types of codes.
It should be noted that block codes do, in fact, have memory when encoding is thought of as a bit-by-bit process within a code word. More recently, the difference between block and convolutional codes has become less and less well defined, especially after advances in the understanding of the trellis structure of block codes and the tail-biting structure of some convolutional codes. Indeed, colleagues working on convolutional codes sometimes refer to block codes as "codes with a time-varying trellis structure." Similarly, researchers working with block codes may consider convolutional codes as "codes with a regular trellis structure."
1.1.2 Hamming distance, Hamming spheres and error correcting capability
Consider an error correcting code $C$ with binary elements. As mentioned above, block codes are considered for simplicity of exposition. In order to achieve error correcting capabilities, not all the $2^n$ possible binary vectors of length $n$ are allowed to be transmitted. Instead, $C$ is a subset of the $n$-dimensional binary vector space $V_2 = \{0, 1\}^n$, such that its elements are as far apart as possible.
Consider two vectors $\bar{x}_1 = (x_{1,0}, x_{1,1}, \ldots, x_{1,n-1})$ and $\bar{x}_2 = (x_{2,0}, x_{2,1}, \ldots, x_{2,n-1})$ in $V_2$. Then the Hamming distance between $\bar{x}_1$ and $\bar{x}_2$, denoted $d_H(\bar{x}_1, \bar{x}_2)$, is defined as the number of elements in which the vectors differ,

$d_H(\bar{x}_1, \bar{x}_2) = \left|\{\, i : x_{1,i} \oplus x_{2,i} = 1,\ 0 \le i < n \,\}\right|$, (1.4)

where $|A|$ denotes the number of elements in (or the cardinality of) a set $A$ and $\oplus$ denotes addition modulo 2 (exclusive-OR).
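In software, Equation (1.4) amounts to a bitwise exclusive-OR followed by a count of the nonzero positions. A minimal C sketch (illustrative, not from the book):

```c
/* Hamming distance, Equation (1.4), for binary vectors of length <= 32
   packed into unsigned integers: the XOR marks the positions in which
   the two vectors differ, and the set bits are then counted. */
static int hamming_distance(unsigned x1, unsigned x2)
{
    unsigned diff = x1 ^ x2; /* bit-by-bit modulo-2 addition */
    int d = 0;
    while (diff != 0) {
        d += (int)(diff & 1u);
        diff >>= 1;
    }
    return d;
}
```

For example, hamming_distance(0u, 7u) returns 3, the distance between (000) and (111) in Example 1.1.1 below.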
Given a code $C$, its minimum Hamming distance, $d_{\min}$, is defined as the minimum Hamming distance among all possible distinct pairs of code words in $C$,

$d_{\min} = \min \{\, d_H(\bar{v}_1, \bar{v}_2) : \bar{v}_1, \bar{v}_2 \in C,\ \bar{v}_1 \ne \bar{v}_2 \,\}$. (1.5)
Throughout the book, the notation $(n, k, d_{\min})$ is used to denote the parameters of a block code of length $n$ that encodes messages of length $k$ bits and has a minimum Hamming distance $d_{\min}$. The assumption is made that the size of the code is $|C| = 2^k$.
Example 1.1.1 The simplest error correcting code is a binary repetition code of length 3. It repeats each bit three times, so that a "0" is encoded onto the vector (000) and a "1" onto the vector (111). Since the two code words differ in all three positions, the Hamming distance between them is equal to three. Figure 1.4 is a pictorial representation of this code. The three-dimensional binary space corresponds to the set of $2^3 = 8$ vertices of the three-dimensional unit-volume cube. The Hamming distance between code words (000) and (111) equals the number of edges in a path between them. This is equivalent to the number of coordinates one needs to change to convert (000) into (111), or vice versa. Thus $d_H((000), (111)) = 3$. Since there are only two code words in this case, $d_{\min} = 3$.
The binary vector space $V_2$ is also known as a Hamming space. Let $\bar{v}$ denote a code word of an error correcting code $C$. A Hamming sphere $S_t(\bar{v})$, of radius $t$ and centered around $\bar{v}$, is the set of vectors in $V_2$ at a distance less than or equal to $t$ from the center $\bar{v}$,

$S_t(\bar{v}) = \{\, \bar{x} \in V_2 \mid d_H(\bar{x}, \bar{v}) \le t \,\}$. (1.6)
Note that the size of (that is, the number of vectors in) $S_t(\bar{v})$ is given by the following expression:
$|S_t(\bar{v})| = \sum_{i=0}^{t} \binom{n}{i}$. (1.7)
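Equation (1.7) can be evaluated with a short loop that accumulates binomial coefficients, as in this illustrative C fragment (not from the book):

```c
/* Size of a Hamming sphere of radius t in {0,1}^n, Equation (1.7):
   |S_t(v)| = sum_{i=0}^{t} C(n, i). The running value binom holds
   C(n, i) and is updated incrementally at each step. */
static unsigned long long sphere_size(int n, int t)
{
    unsigned long long size = 0, binom = 1; /* binom = C(n, 0) = 1 */
    for (int i = 0; i <= t; i++) {
        size += binom;
        binom = binom * (unsigned long long)(n - i) / (i + 1); /* C(n, i+1) */
    }
    return size;
}
```

For the (3, 1, 3) repetition code, sphere_size(3, 1) returns 4 = 1 + 3, and the two radius-1 spheres together account for all $2^3 = 8$ vectors of $V_2$.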
Example 1.1.2 Figure 1.5 shows the Hamming spheres of radius t = 1 around the code words of the (3, 1, 3) binary repetition code.
Note that the Hamming spheres for this code are disjoint, that is, there is no vector in $V_2$ (or vertex in the unit-volume three-dimensional cube) that belongs to both $S_1(000)$ and $S_1(111)$. As a result, if there is a change in any one position of a code word $\bar{v}$, then the resulting vector will still lie inside the Hamming sphere centered at $\bar{v}$. This concept is the basis for understanding and defining the error correcting capability of a code $C$.
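To make this concrete, the following C sketch (illustrative, not the book's code) decodes a received word of the (3, 1, 3) repetition code to the nearest code word, which for this code reduces to a majority vote; any single-bit error leaves the received word inside the sphere of the transmitted code word and is therefore corrected:

```c
#include <stdio.h>

/* A length-3 word is packed into the low 3 bits of an unsigned int,
   with the leftmost position in bit 2. */
static int weight3(unsigned x)
{
    return (int)((x & 1u) + ((x >> 1) & 1u) + ((x >> 2) & 1u));
}

/* Nearest-code-word decoding for the (3,1,3) repetition code: r is
   decoded to (000) if it lies in S_1(000) and to (111) if it lies in
   S_1(111), which is exactly a majority vote on the three bits. */
static unsigned decode_rep3(unsigned r)
{
    return (weight3(r) >= 2) ? 7u : 0u;
}

int main(void)
{
    /* Flip each single bit of the code word (111) and decode. */
    for (int i = 0; i < 3; i++) {
        unsigned r = 7u ^ (1u << i); /* one channel error */
        printf("received %u%u%u -> decoded %s\n",
               (r >> 2) & 1u, (r >> 1) & 1u, r & 1u,
               decode_rep3(r) == 7u ? "111" : "000");
    }
    return 0;
}
```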
The error correcting capability, $t$, of a code $C$ is the largest radius of Hamming spheres $S_t(\bar{v})$ around all the code words $\bar{v} \in C$ such that, for all pairs of distinct code words $\bar{v}_i, \bar{v}_j \in C$, the corresponding Hamming spheres are disjoint, that is,

$t = \max \left\{\, \ell : S_\ell(\bar{v}_i) \cap S_\ell(\bar{v}_j) = \varnothing,\ \forall\, \bar{v}_i, \bar{v}_j \in C,\ \bar{v}_i \ne \bar{v}_j \,\right\}$. (1.8)
In terms of the minimum distance of $C$, $d_{\min}$, an equivalent and more common definition is

$t = \left\lfloor \frac{d_{\min} - 1}{2} \right\rfloor$, (1.9)

where $\lfloor x \rfloor$ denotes the largest integer less than or equal to $x$.
Note that in order to compute the minimum distance $d_{\min}$ of a block code $C$ in accordance with Equation (1.5), a total of $2^{k-1}(2^k - 1)$ distances between distinct pairs of code words must be computed. This is practically impossible even for codes of relatively modest size, say, $k = 50$ (for which approximately $2^{99}$ code word pairs would need to be examined). One of the advantages of linear block codes is that the computation of $d_{\min}$ requires knowing only the Hamming weights of the $2^k - 1$ nonzero code words.
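The weight-based computation is a short loop over the $2^k - 1$ nonzero messages. The C sketch below illustrates it on a systematic (7, 4, 3) Hamming code; the particular generator matrix is an assumed, illustrative choice (it is not given in this excerpt), and Equation (1.9) then yields the error correcting capability $t$:

```c
#include <stdio.h>

#define K 4
#define N 7

/* Rows of an illustrative generator matrix G = [I_4 | P] of a systematic
   (7,4,3) Hamming code, each row packed into the low 7 bits with the
   leftmost bit in position 6. Not taken from the book. */
static const unsigned g_rows[K] = {
    0x46, /* 1 0 0 0 | 1 1 0 */
    0x23, /* 0 1 0 0 | 0 1 1 */
    0x17, /* 0 0 1 0 | 1 1 1 */
    0x0D  /* 0 0 0 1 | 1 0 1 */
};

/* Hamming weight of a packed binary vector. */
static int weight(unsigned x)
{
    int w = 0;
    for (; x != 0; x >>= 1)
        w += (int)(x & 1u);
    return w;
}

/* Encode a K-bit message u (bit i of u is u_i) as v = uG over GF(2):
   XOR together the rows of G selected by the nonzero message bits. */
static unsigned encode(unsigned u)
{
    unsigned v = 0;
    for (int i = 0; i < K; i++)
        if ((u >> i) & 1u)
            v ^= g_rows[i];
    return v;
}

int main(void)
{
    /* For a linear code, d_min equals the minimum weight over the
       2^k - 1 nonzero code words; no pairwise distances are needed. */
    int dmin = N;
    for (unsigned u = 1; u < (1u << K); u++) {
        int w = weight(encode(u));
        if (w < dmin)
            dmin = w;
    }
    printf("d_min = %d, t = %d\n", dmin, (dmin - 1) / 2); /* Equation (1.9) */
    return 0;
}
```

For this matrix the program prints d_min = 3 and t = 1, examining only $2^4 - 1 = 15$ code word weights rather than 120 pairwise distances.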
1.2 Linear block codes
As mentioned above, finding a good code means finding a subset of [V.sub.2] with elements as far apart as possible. This is very difficult. In addition, even when such a set is found, there is still the problem of how to assign code words to information messages.
Linear codes are vector subspaces of [V.sub.2]. This means that encoding can be accomplished by matrix multiplications. In terms of digital circuitry, simple encoders can be built using exclusive-OR gates, AND gates and D flip-flops. In this chapter, the binary vector space operations of sum and multiplication are meant to be the output of exclusive-OR (or modulo 2 addition) and AND gates, respectively. The tables of addition and multiplication for the binary elements in {0, 1} are as follows:
a  b  |  a + b  |  a · b
------+---------+-------
0  0  |    0    |    0
0  1  |    1    |    0
1  0  |    1    |    0
1  1  |    0    |    1
which are seen to correspond to the outputs of a binary exclusive-OR gate and an AND gate, respectively.
1.2.1 Generator and parity-check matrices
Let $C$ denote a binary linear $(n, k, d_{\min})$ code. Since $C$ is a $k$-dimensional vector subspace, it has a basis, say $\{\bar{v}_0, \bar{v}_1, \ldots, \bar{v}_{k-1}\}$, such that any code word $\bar{v} \in C$ can be represented as a linear combination of the basis elements:
$\bar{v} = u_0 \bar{v}_0 + u_1 \bar{v}_1 + \cdots + u_{k-1} \bar{v}_{k-1}$, (1.10)
where $u_i \in \{0, 1\}$, $0 \le i < k$. Equation (1.10) can be written in terms of a generator matrix $G$ and a message vector $\bar{u} = (u_0, u_1, \ldots, u_{k-1})$ as follows:
$\bar{v} = \bar{u} G$, (1.11)
where
$G = \begin{pmatrix} \bar{v}_0 \\ \bar{v}_1 \\ \vdots \\ \bar{v}_{k-1} \end{pmatrix} = \begin{pmatrix} v_{0,0} & v_{0,1} & \cdots & v_{0,n-1} \\ v_{1,0} & v_{1,1} & \cdots & v_{1,n-1} \\ \vdots & \vdots & \ddots & \vdots \\ v_{k-1,0} & v_{k-1,1} & \cdots & v_{k-1,n-1} \end{pmatrix}$. (1.12)
Due to the fact that $C$ is a $k$-dimensional vector subspace of $V_2$, there is an $(n - k)$-dimensional dual space $C^\perp$, generated by the rows of a matrix $H$, called the parity-check matrix, such that $G H^\top = 0$, where $H^\top$ denotes the transpose of $H$. In particular, note that for any code word $\bar{v} \in C$,
$\bar{v} H^\top = \bar{0}$. (1.13)
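As a sketch of how Equations (1.11) and (1.13) work together, the following C program encodes all messages with the same illustrative (7, 4, 3) generator matrix used earlier (an assumed example, not from the book) and verifies that every code word yields the all-zero syndrome under the matching parity-check matrix $H = [P^\top \mid I_3]$:

```c
#include <stdio.h>

#define K 4
#define N 7

/* Illustrative systematic (7,4,3) Hamming code, not taken from the book.
   Rows of G = [I_4 | P], packed with the leftmost bit in position 6. */
static const unsigned g_rows[K] = { 0x46, 0x23, 0x17, 0x0D };

/* Rows of the matching H = [P^T | I_3], chosen so that G H^T = 0 over GF(2). */
static const unsigned h_rows[N - K] = {
    0x5C, /* 1 0 1 1 | 1 0 0 */
    0x72, /* 1 1 1 0 | 0 1 0 */
    0x39  /* 0 1 1 1 | 0 0 1 */
};

/* Parity (sum modulo 2) of the bits of x. */
static int parity(unsigned x)
{
    int p = 0;
    for (; x != 0; x >>= 1)
        p ^= (int)(x & 1u);
    return p;
}

/* v = uG, Equation (1.11): XOR the rows of G selected by the message bits. */
static unsigned encode(unsigned u)
{
    unsigned v = 0;
    for (int i = 0; i < K; i++)
        if ((u >> i) & 1u)
            v ^= g_rows[i];
    return v;
}

/* Syndrome s = vH^T: bit j of s is the GF(2) inner product of v with
   row j of H. By Equation (1.13), s = 0 for every code word. */
static unsigned syndrome(unsigned v)
{
    unsigned s = 0;
    for (int j = 0; j < N - K; j++)
        s |= (unsigned)parity(v & h_rows[j]) << j;
    return s;
}

int main(void)
{
    for (unsigned u = 0; u < (1u << K); u++) {
        unsigned v = encode(u);
        printf("u = %2u  v = 0x%02X  syndrome = %u\n", u, v, syndrome(v));
    }
    return 0;
}
```

A nonzero syndrome would flag a received word that is not a code word, which is exactly the error detection check described earlier.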
(Continues...)
Excerpted from The Art of Error Correcting Coding by Robert H. Morelos-Zaragoza. Copyright © 2006 by John Wiley & Sons, Ltd. Excerpted by permission.
All rights reserved. No part of this excerpt may be reproduced or reprinted without permission in writing from the publisher.
"Sobre este título" puede pertenecer a otra edición de este libro.
EUR 4.56 shipping from United Kingdom to Spain
FREE shipping from United States of America to Spain

Bookseller: Romtrade Corp., STERLING HEIGHTS, MI, United States of America
Condition: New. Brand new. Softcover international edition with a different ISBN and cover image, priced lower than the standard edition, usually to make it more affordable for students abroad. The core content is generally the same as the standard edition. Country selling restrictions may be printed on the book but pose no problem for personal use. This item may be shipped from the US or any other country, as we have multiple locations worldwide. Item ref.: ABNR-6838
Quantity available: 5

Bookseller: Phatpocket Limited, Waltham Abbey, HERTS, United Kingdom
Condition: Good. Your purchase helps support the Sri Lankan children's charity 'The Rainbow Centre'. Ex-library, so some stamps and wear, but in good overall condition. Our donations to The Rainbow Centre have helped provide an education and a safe haven to hundreds of children who live in appalling conditions. Item ref.: Z1-B-007-02569
Quantity available: 1

Bookseller: moluna, Greven, Germany
Hardcover. Condition: New. This book includes additional features to the first edition, which offered a practical introductory approach to the techniques of error concealment. Comprehensive treatment of the subject is provided here, exploring intermediate and advanced level concepts as well as those which will appeal to the novice. Item ref.: 594692228
Quantity available: More than 20

Bookseller: Ria Christie Collections, Uxbridge, United Kingdom
Condition: New. Item ref.: ria9780470015582_new
Quantity available: More than 20

Bookseller: GreatBookPrices, Columbia, MD, United States of America
Condition: As New. Unread book in perfect condition. Item ref.: 3319902
Quantity available: More than 20

Bookseller: GreatBookPricesUK, Woodford Green, United Kingdom
Condition: As New. Unread book in perfect condition. Item ref.: 3319902
Quantity available: More than 20

Bookseller: GreatBookPricesUK, Woodford Green, United Kingdom
Condition: New. Item ref.: 3319902-n
Quantity available: More than 20

Bookseller: AHA-BUCH GmbH, Einbeck, Germany
Book. Condition: New. New stock. Item ref.: 9780470015582
Quantity available: 2

Bookseller: Kennys Bookshop and Art Galleries Ltd., Galway, GY, Ireland
Condition: New. This book includes additional features to the first edition, which offered a practical introductory approach to the techniques of error concealment. Comprehensive treatment of the subject is provided here, exploring intermediate and advanced level concepts as well as those which will appeal to the novice. Num pages: 278 pages, illustrations. BIC classification: TJFM; TJK; URY. Category: (P) Professional & Vocational. Dimensions: 252 x 179 x 23 mm. Weight: 654 g. 2006. 2nd edition. Hardcover. Item ref.: V9780470015582
Quantity available: More than 20

Bookseller: GreatBookPrices, Columbia, MD, United States of America
Condition: New. Item ref.: 3319902-n
Quantity available: More than 20