
Talk:Soft-in soft-out decoder


I'm considering adding the following:

As a simple but unrealistic example, imagine a 100-dimensional multidimensional parity-check code with data bits arranged into a hypercube 2 bits to a side. Hence there are 2^100 data bits and, disregarding parity on parity, 100 * 2^99 parity bits. We then transmit a message consisting entirely of zeros over a noisy transmission line that results in 10% of the bits being flipped. When we analyze the resulting pattern, we would naturally expect that if a data bit is zero then 90% of its parity bits will agree with its value and 10% will disagree. For data bits that are ones we expect the exact opposite. Correcting this would be simple.

But if we now transmit a message over a transmission line that results in 30% of all bits being flipped, then it is more difficult. We assume that data bits that are right will have more parity bits agreeing with their values, but we can no longer be sure of that. A first pass through the decoder will correct many mistakes, but it will also introduce a small number of mistakes of its own. The message as a whole, however, will contain far fewer mistakes than it did before. Multiple passes through the decoder get progressively closer to the original error-free message. just-emery (talk) 17:41, 2 June 2009 (UTC)[reply]
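For concreteness, here is a minimal sketch (Python) of the iterative majority-vote idea described above. It uses a hypothetical 10-dimensional code with 2 bits to a side and a 10% flip rate rather than the 100-dimensional example, purely so it can actually run; none of the parameters come from the article.

import numpy as np

rng = np.random.default_rng(0)
n_dim = 10                     # 10 dimensions instead of 100, purely for tractability
shape = (2,) * n_dim           # hypercube, 2 bits to a side
p = 0.10                       # channel flip probability

data = np.zeros(shape, dtype=np.uint8)                                  # all-zeros message
parity = [np.bitwise_xor.reduce(data, axis=d) for d in range(n_dim)]    # disregarding parity on parity

# Binary symmetric channel: flip each transmitted bit with probability p.
rx_data = data ^ (rng.random(shape) < p).astype(np.uint8)
rx_parity = [pa ^ (rng.random(pa.shape) < p).astype(np.uint8) for pa in parity]

est = rx_data.copy()
for it in range(5):
    fails = np.zeros(shape, dtype=np.int32)
    for d in range(n_dim):
        # A check fails where the XOR along this axis disagrees with the received parity bit.
        bad = np.bitwise_xor.reduce(est, axis=d) ^ rx_parity[d]
        fails += np.expand_dims(bad, axis=d)          # both data bits on that line see the failure
    est ^= (fails > n_dim // 2).astype(np.uint8)      # flip bits whose checks mostly disagree
    print(f"pass {it + 1}: {int(est.sum())} data-bit errors remain")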

I think you're confusing the iterative turbo code principle with SISO decoding. An iterative decoder will necessarily use a SISO decoder during each iteration, but the two concepts are distinct. Oli Filth(talk|contribs) 20:09, 2 June 2009 (UTC)[reply]
That's fine with me. Of course it should still be pointed out that it can be used iteratively. The only thing you have said or done that I disagree with is when you said that a many-dimensional parity scheme can't correct more than n/2 errors. Do you maintain that the purely theoretical 100-dimensional parity scheme described above cannot, using an iterative soft-in soft-out decoder, correct more than 50 mistakes? (Of course, it's not guaranteed to correct more than 50 mistakes.) just-emery (talk) 21:31, 2 June 2009 (UTC)[reply]
You mean this edit? Yes, on second thoughts, I spoke too soon there with my edit summary. You're right, a soft decoder may correct an error syndrome that would appear to a hard decoder as uncorrectable. But I don't think soft-decision decoders are usually referred to in terms of their error-correction capability; rather, the coding gain at a given SNR, for instance. As this is true for all block codes, I don't think we should pay undue weight to the soft-decoding aspect, but nevertheless, I've restored some detail about this to that article. Oli Filth(talk|contribs) 22:47, 2 June 2009 (UTC)[reply]
Yes, your edit put it very well. just-emery (talk) 17:37, 3 June 2009 (UTC)[reply]
You obviously know a great deal about error correction. The fact that even you didn't know this is exactly why I think it needs to be pointed out. It will obviously not work with only 2 dimensions, so the statement as it now reads might be confusing to someone unfamiliar with the subject. just-emery (talk) 04:07, 4 June 2009 (UTC)[reply]
The fact that having more dimensions introduces this new capability is itself a very interesting observation that ought ideally to be worked into one of the articles somehow. (I can't be sure, but this might only be true for multidimensional parity-check codes.) just-emery (talk) 04:27, 4 June 2009 (UTC)[reply]
Forgive my slowness, but why couldn't a soft decoder work with a 2D code? It should theoretically be possible to form a maximum-likelihood decision based on a Euclidean (soft) metric, just like any other block code. Oli Filth(talk|contribs) 17:13, 4 June 2009 (UTC)[reply]
It would work, but if the input is hard, it won't be able to correct any more mistakes than a hard-decision decoder would. just-emery (talk) 23:22, 4 June 2009 (UTC)[reply]
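To illustrate the Euclidean-metric point, here is a small self-contained sketch (Python, with an arbitrary made-up noise level) of brute-force maximum-likelihood decoding of the 2x2 two-dimensional parity-check code: the soft decoder picks the codeword closest to the received real-valued samples, while the hard decoder thresholds first and then picks the closest codeword in Hamming distance.

import itertools
import numpy as np

def encode(data_bits):
    """data_bits: 4 bits for a 2x2 array; append 2 row and 2 column parities."""
    d = np.array(data_bits, dtype=np.uint8).reshape(2, 2)
    return np.concatenate([d.ravel(), d.sum(axis=1) % 2, d.sum(axis=0) % 2])

codebook = np.array([encode(bits) for bits in itertools.product([0, 1], repeat=4)])
bpsk = 1.0 - 2.0 * codebook           # map 0 -> +1, 1 -> -1

rng = np.random.default_rng(1)
tx = bpsk[0]                                      # transmit the all-zeros codeword
rx = tx + rng.normal(0.0, 0.8, size=tx.shape)     # AWGN, arbitrary noise level

# Soft-decision ML: pick the codeword closest to rx in Euclidean distance.
soft_choice = np.argmin(((rx - bpsk) ** 2).sum(axis=1))

# Hard-decision: threshold first, then pick the closest codeword in Hamming distance.
hard_bits = (rx < 0).astype(np.uint8)
hamming = (hard_bits ^ codebook).sum(axis=1)
hard_choice = np.argmin(hamming)

print("soft ML decision:", codebook[soft_choice])
print("hard decision   :", codebook[hard_choice], "(ties possible at distance", int(hamming.min()), ")")
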
Going back to the original point, in the case of MDPC, I'm not sure how dimensionality affects the issue, unless it happens that the 2D version is perfect, which it isn't (but even if it were, discussions about perfect codes are only really relevant if one is using an optimal decoder). Oli Filth(talk|contribs) 02:23, 6 June 2009 (UTC)[reply]
Soft decoding relies on redundant parity bits allowing one to make fine distinctions between how right or how wrong you think a bit is; 2 dimensions just isn't redundant enough. It should be pretty obvious why a 2-dimensional code with 2 errors can't be solved by any kind of decoder if the input is hard: there are 2 possible configurations that are equally likely to produce that outcome. just-emery (talk) 02:37, 6 June 2009 (UTC)[reply]
If the input is hard, then in any number of dimensions there will always be some configuration of 1 + n/2 errors that no decoder can repair. In 2 dimensions it just happens that ANY configuration of 2 or more errors is unfixable. just-emery (talk) 02:48, 6 June 2009 (UTC)[reply]
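Here is a tiny self-contained check (Python, 2x2 case, my own construction) of that ambiguity: the received word produced by two data-bit errors on one diagonal is exactly as close to the all-zeros codeword as it is to the codeword whose data bits are all ones, so no hard-input decoder can choose between the two explanations.

import numpy as np

def encode2d(d):
    d = np.asarray(d, dtype=np.uint8)
    return np.concatenate([d.ravel(), d.sum(axis=1) % 2, d.sum(axis=0) % 2])

all_zeros = encode2d([[0, 0], [0, 0]])
all_ones  = encode2d([[1, 1], [1, 1]])      # its parities are also all zero, so it is a valid codeword

received = all_zeros.copy()
received[0] ^= 1                            # flip data bit (0,0)
received[3] ^= 1                            # flip data bit (1,1)

print("distance to all-zeros codeword    :", int((received ^ all_zeros).sum()))   # 2
print("distance to all-ones-data codeword:", int((received ^ all_ones).sum()))    # also 2
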

(outdent) That's quite an intuitive reason, which is obvious once one draws the situation on paper! (Formally, every codeword is guaranteed to have one neighbour a Hamming distance of 4 away). So what's the proof that the equivalent problem doesn't exist in higher dimensions? Oli Filth(talk|contribs) 03:15, 6 June 2009 (UTC)[reply]

Yeah, fair enough. How about a counterexample: if the input is hard, then a 3-dimensional code shouldn't be able to fix 2 errors, but simply imagine that 2 parity bits of one dimension flip. Since the groups of data bits they correspond to don't cross one another, they should be fixable. just-emery (talk) 02:48, 7 June 2009 (UTC)[reply]
In addition, if the coordinates of each parity bit are given by x, y, z, then the parity bits must not share more than one value; i.e., if they are z-dimension parity bits then their values for x and y must be different (this is why it won't work in 2 dimensions). Then it would be solvable. just-emery (talk) 18:04, 7 June 2009 (UTC)[reply]
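A brute-force check of that counterexample (Python, hypothetical 2x2x2 case, my own construction): flip two parity bits of the same dimension whose lines share no data bits, and the transmitted codeword remains the unique nearest codeword, so even a hard-decision minimum-distance decoder repairs these two errors.

import itertools
import numpy as np

def encode3d(data_bits):
    d = np.array(data_bits, dtype=np.uint8).reshape(2, 2, 2)
    parities = [np.bitwise_xor.reduce(d, axis=ax).ravel() for ax in range(3)]   # no parity on parity
    return np.concatenate([d.ravel()] + parities)                               # 8 data + 12 parity bits

codebook = np.array([encode3d(bits) for bits in itertools.product([0, 1], repeat=8)])

received = codebook[0].copy()       # transmit the all-zeros codeword
z_parity = 8 + 4 + 4                # offset of the third dimension's parity block
received[z_parity + 0] ^= 1         # flip the z parity bit at (x=0, y=0)
received[z_parity + 3] ^= 1         # flip the z parity bit at (x=1, y=1); no shared data bits

dist = (received ^ codebook).sum(axis=1)
print("nearest codeword distance:", int(dist.min()),
      "- codewords at that distance:", int((dist == dist.min()).sum()))
# Expected output: distance 2, achieved only by the transmitted (all-zeros) codeword.
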

Fuzzy logic


Previously the article stated that the decoder uses fuzzy logic. This was edited out and replaced with the phrase 'the incoming data may take on values other than 0 or 1, in order to indicate reliability'. It isn't entirely clear to me that it doesn't in fact use a form of fuzzy logic. I would just like to hear some explanation of what the difference between the two is. just-emery (talk) 21:39, 2 June 2009 (UTC)[reply]

I know almost nothing about fuzzy logic, so for all I know the maths is very similar! But on the other hand, I've never heard a soft decoder referred to as using fuzzy logic; they're almost exclusively explained in terms of straightforward maximum-likelihood or maximum a posteriori criteria. If you can find a source that describes them as "fuzzy logic" then that would be fine, but otherwise I think it would be synthesis to denote them as such in the article. Oli Filth(talk|contribs) 22:19, 2 June 2009 (UTC)[reply]
http://74.125.155.132/search?q=cache:Wfb_8eKmXecJ:www.newworldencyclopedia.org/entry/Fuzzy_logic+%22fuzzy+logic%22+decoder+ecc+OR+%22error+correction%22&cd=6&hl=en&ct=clnk&gl=us&client=firefox-a just-emery (talk) 22:49, 2 June 2009 (UTC)[reply]
Unfortunately, that's largely derived from the Fuzzy logic article here, and I'm not sure that it counts as a reliable source. Even if it were, all it says is that "fuzzy logic may be used to do error correction", but not that "soft decoding uses fuzzy logic". Oli Filth(talk|contribs) 23:03, 2 June 2009 (UTC)[reply]
It's the 'New World Encyclopedia'!
http://www.newworldencyclopedia.org/entry/New_World_Encyclopedia:About
The New World Encyclopedia is intended for use by teachers and students who are drawn to the ease of use of Wikipedia, but are concerned about quality, consistency, and core values. New World Encyclopedia combines the great benefits of open source internet media with those of traditional and careful editorial supervision by scholars
The relevant part is:
A more sophisticated practical example is the use of fuzzy logic in high-performance error correction to improve information reception over a limited-bandwidth communication link affected by data-corrupting noise using turbo codes. The front-end of a decoder produces a likelihood measure for the value intended by the sender (0 or 1) for each bit in the data stream. The likelihood measures might use a scale of 256 values between extremes of "certainly 0" and "certainly 1." Two decoders may analyze the data in parallel, arriving at different likelihood results for the values intended by the sender. Each can then use as additional data the other's likelihood results, and repeats the process to improve the results until consensus is reached as to the most likely values.
This does not appear anywhere in the Wikipedia article. And it specifically mentions 'turbo codes'. just-emery (talk) 00:15, 3 June 2009 (UTC)[reply]
Compare the above to:
http://en.wikipedia.org/wiki/Turbo_code#Solving_hypotheses_to_find_bits just-emery (talk) 00:45, 3 June 2009 (UTC)[reply]
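As an aside on the "256 values between extremes of 'certainly 0' and 'certainly 1'" part of that quote, here is a small sketch (Python) of how a front end can produce such graded values. It assumes BPSK mapping (0 -> +1, 1 -> -1), Gaussian noise with known variance, and a made-up scaling; none of that comes from the quoted text.

def llr(sample, noise_var):
    """Log-likelihood ratio log[P(bit=0 | sample) / P(bit=1 | sample)] for BPSK over AWGN."""
    return 2.0 * sample / noise_var

def to_8bit_confidence(l, scale=8.0):
    """Map an LLR onto a 0..255 scale: 255 = 'certainly 0', 0 = 'certainly 1', 128 = undecided."""
    return max(0, min(255, round(128 + scale * l)))

for y in (+1.3, +0.2, -0.05, -1.1):
    l = llr(y, noise_var=0.5)
    print(f"received {y:+.2f} -> LLR {l:+.2f} -> confidence {to_8bit_confidence(l)}")
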
Offhand, I don't know how reliable it is, but it's certainly no textbook or academic journal. I would be reluctant to include something to the effect of "soft decoders use fuzzy logic" based on this single data point, given that I've never seen it described this way anywhere else, including the several texts I have on ECC.
It uses (Bayesian) Belief propagation. According to the article Fuzzy logic there seems to be some dispute over whether Bayesian probability is a form of fuzzy logic or not. I would happily settle for a link to belief propagation instead of a link to fuzzy logic. just-emery (talk) 19:17, 3 June 2009 (UTC)[reply]
It sounds likely that this is relevant. Again, I would be perfectly happy to see a link if we can find a suitable source that describes it as such. Oli Filth(talk|contribs) 21:56, 3 June 2009 (UTC)[reply]
I'm not sure what you're getting at with turbo codes? Oli Filth(talk|contribs) 10:23, 3 June 2009 (UTC)[reply]

Belief propagation

[edit]

Here is a nice online PDF about error-correcting codes: http://www.cs.toronto.edu/~mackay/itprnn/book.pdf which I found here: http://www.inference.phy.cam.ac.uk/mackay/itila/book.html The chapter on LDPC codes begins with: "The best algorithm known is the sum–product algorithm, also known as iterative probabilistic decoding or belief propagation". So it's at least certain that LDPC codes use it. just-emery (talk) 19:24, 4 June 2009 (UTC)[reply]

Indeed! And elsewhere in the book, MacKay describes the BCJR algorithm as "belief propagation". So certainly we're at the stage where we can at least say "some SISO algorithms use belief propagation". I would certainly stop short of eliminating the word "some" in that statement; however, I'm no expert on the topic, so I don't wish to make any ill-informed decisions! Oli Filth(talk|contribs) 20:28, 4 June 2009 (UTC)[reply]
Absolutely agree. just-emery (talk) 20:38, 4 June 2009 (UTC)[reply]
This article: http://ieeexplore.ieee.org/Xplore/login.jsp?url=http%3A%2F%2Fieeexplore.ieee.org%2Fiel5%2F4234%2F33323%2F01576590.pdf&authDecision=-203
states that there is a difference between 'reliability' decoding and 'adaptive belief propagation' decoding:
"the soft output values delivered by the adaptive belief propagation algorithm are used as reliability values to perform reduced order reliability-based decoding of the code considered. This approach allows to bridge the gap between the error performance achieved by the lower order reliability-based decoding algorithms which remain sub-optimum, and the maximum likelihood decoding, which is too complex to be implemented for most codes employed in practice. Simulations results for various linear block codes are given and elaborated"
just-emery (talk) 20:25, 4 June 2009 (UTC)[reply]
Adaptive belief propagation is just one type of belief propagation. It's very good but very complex. just-emery (talk) 22:06, 4 June 2009 (UTC)[reply]
This one, on the other hand: http://www.wipo.int/pctdb/en/wo.jsp?IA=WO2006%2F123543&WO=2006%2F123543&DISPLAY=DESC
states:
"As shown in Figure 1 for conventional BP (belief propagation) decoding, the check node processor 110 and a bit node processor 120 operate serially while passing reliability messages to each other based on the belief propagation principle"
It also states:
"Although BP decoding for these LDPC codes provides excellent performance, it is too complex for hardware implementation. BP decoding can be simplified by a check node processor with a simple minimum operation, resulting in a min-sum decoding method.". just-emery (talk) 20:44, 4 June 2009 (UTC)[reply]
The article Decoding methods mentions maximum-likelihood decoding but doesn't mention reliability-based decoding. just-emery (talk) 21:24, 4 June 2009 (UTC)[reply]
Finally, a real reference!
http://ita.ucsd.edu/workshop/09/files/paper/paper_151.pdf
"Methods for decoding LDPC codes can be classified into three general categories: 1) soft-decision decoding; 2) hard-decision decoding; and 3) reliability-based decoding. Soft-decision decoding algorithms for LDPC codes are mostly iterative in nature and devised based on belief propagation"
just-emery (talk) 22:45, 4 June 2009 (UTC)[reply]

Shannon limit/sub-optimal decoders


Are we in agreement that codes that use optimal maximum-likelihood decoders are only able to operate far below the Shannon limit (due to the need for a small block size/short constraint length), and that getting close to the Shannon limit therefore requires (in practice, not in theory) the use of sub-optimal decoders? just-emery (talk) 18:21, 7 June 2009 (UTC) Regarding LDPC codes, see Gallager, sections 1.3 and 4.1. just-emery (talk) 20:33, 7 June 2009 (UTC)[reply]

If by "close", you mean as close as turbo codes and LDPCs can get (0.5dB or (much) lower), then yes, in practice. Oli Filth(talk|contribs) 21:10, 7 June 2009 (UTC)[reply]