I received the note below from Berlekamp, about recent developments in Error Correcting Codes. Posted with his permission. --Rich

--------------

(from Elwyn Berlekamp, Nov 22 2005)

Coding theory remains an active and dynamic subject, as it has been for the past 58 years. Although I haven't published any papers in this area since 1996, I remain an interested spectator.

Low Density Parity Check Codes were invented and studied by Bob Gallager in his PhD thesis in the early 1960s. Even now, most of what is known about them was published by him then. He became an assistant professor at MIT soon after he finished his PhD. Coding theory was then very active at MIT. I was Gallager's second PhD student; Shannon, Elias, and Wozencraft were also on my committee. No one favored further work on low density parity check codes at that time, because they were felt to be relatively well understood but impractical.

Convolutional codes with sequential decoding enjoyed popularity in the early and mid 1960s. In the 1970s, Viterbi decoding became more popular, especially in military voice communication systems. Reed-Solomon codes, mostly using decoding algorithms with which I was personally involved, also became popular, especially in computer memory devices such as magnetic discs, optical discs, and magnetic tapes. They now enjoy very widespread use.

Turbo codes appeared in the late 1990s. A revival of interest in low density parity check codes began about the same time. It is true that, for certain channels including the white Gaussian noise channel, both turbo codes and low density parity check codes can achieve performance much closer to the Shannon capacity than any of their competitors. Each is now the topic of many papers. Many theorists study turbo codes in an effort to better understand their good performance, which was initially demonstrated almost entirely empirically. There are some system environments in which one or the other of these two types of coding systems is the clear winner.

It is also true that most communication and computer memory systems, even new ones, continue to use older coding techniques such as Reed-Solomon. That's because real systems require tradeoffs among many parameters. In 1980 I published an overview entitled "The Technology of Error-Correcting Codes" in the Proceedings of the IEEE. Most of the considerations discussed there, including code rate, block length, latency, and interface issues, remain highly relevant. The advantages of low density parity check codes become decisive only at VERY long block lengths. A decade or two ago, memory was viewed as too expensive; latency has now become the bigger concern. Turbo codes require considerably more decoding computation than Viterbi codes or RS codes. The relative performance advantage or disadvantage is strongly dependent on the code rate, and for various reasons not directly related to the coding, most memory systems use very high-rate codes, an environment which is especially favorable to Reed-Solomon. So I would say that the performance claims of turbo codes and low density parity check codes are true, but their impact is often exaggerated.

Another major development of the 1990s is the work of Madhu Sudan and others on list decoding of low-rate Reed-Solomon codes. This buys improved RS performance at the cost of more computation. Its primary use, to date, has been in computer science programs doing probabilistic search rather than in any of the more traditional applications of coding theory.