Talk:Frame check sequence

From Wikipedia, the free encyclopedia

Merging with Block check character

The following discussion is closed. Please do not modify it. Subsequent comments should be made in a new section. A summary of the conclusions reached follows.
No consensus to merge

Against:

FCS is a term that is used in Data link layer protocols in particular, while BCC is used in another world. - DéRahier 12:38, 1 October 2007 (UTC)[reply]
  • Strongly Against as per DéRahier. FCS appears at the end of Ethernet frames at layer 2, but BCC appears periodically within IBM bisync communications with a mainframe at layer 5. If the bisync comm were contained in Ethernet frames across a LAN or MAN or WAN, then there would be a BCC at layer 5 within the payload of the Ethernet frame and then an FCS at layer 2 in the trailer of the Ethernet frame. The FCS in the Ethernet trailer is 4 bytes and uses the polynomial x^32+x^26+x^23+x^22+x^16+x^12+x^11+x^10+x^8+x^7+x^5+x^4+x^2+x+1. The BCC in the IBM terminal HDLC stream is 16 bits and uses the polynomial x^16+x^15+x^2+1. Ethernet frames are not an HDLC stream nor vice versa. I must admit that I am annoyed with people who do nontechnical "copy editing" such as merging articles that shouldn't be merged and deleting articles that shouldn't be deleted without doing the necessary homework. Simple Google searches on "Ethernet FCS polynomial" and "IBM BCC polynomial" pulled up this information quite easily in the first few search results. I am afraid that I see little commonality at all between FCS and BCC other than they are a bunch of ones and zeros. Perhaps we should merge all teledatacom and computer articles into one article called Ones and zeros. Please do some of the obvious Google searches to see if topics are synonymous *before* suggesting that their articles be merged. —optikos 11:31, 4 October 2007 (UTC)[reply]
I agree. I have found too much recent merging and simplification of Wikipedia articles, which eliminates useful information. I gave up helping. I came here searching for the relevance of FCS errors (when you find them); I know you tend to see them with a duplex mismatch, but forgot which side shows the FCS errors.--128.196.164.121 (talk) 19:58, 26 November 2007 (UTC)[reply]
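(Aside for readers comparing the two polynomials cited above: the difference is easy to see concretely. The following is a minimal, illustrative Python sketch of a plain MSB-first CRC, with zero preset and no bit reflection or final complement, so it deliberately omits refinements the real 802.3 CRC-32 specifies; the variable and constant names are mine, not from any standard.)

```python
def crc(data: bytes, poly: int, width: int) -> int:
    """Plain MSB-first CRC remainder (widths >= 8): zero preset,
    no bit reflection, no final XOR. Real Ethernet CRC-32 adds
    all three refinements; this only illustrates the division."""
    top = 1 << (width - 1)
    mask = (1 << width) - 1
    reg = 0
    for byte in data:
        reg ^= byte << (width - 8)
        for _ in range(8):
            reg = ((reg << 1) ^ poly) & mask if reg & top else (reg << 1) & mask
    return reg

CRC32_802_3 = 0x04C11DB7  # x^32+x^26+x^23+x^22+x^16+x^12+x^11+x^10+x^8+x^7+x^5+x^4+x^2+x+1
CRC16_IBM   = 0x8005      # x^16+x^15+x^2+1

frame = b"example payload"
fcs = crc(frame, CRC32_802_3, 32)   # 4-byte check, Ethernet-style generator
bcc = crc(frame, CRC16_IBM, 16)     # 2-byte check, bisync-style generator

# Receiver-side property of any such CRC: a message with its own
# remainder appended divides out to zero.
assert crc(frame + bcc.to_bytes(2, "big"), CRC16_IBM, 16) == 0
assert crc(frame + fcs.to_bytes(4, "big"), CRC32_802_3, 32) == 0
```

Whatever one concludes about merging, note that the same division loop serves both generators; only the polynomial constant and the width differ.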

For They're both the same thing with different names, and should be described (and understood) together. Protocols get encapsulated within others all the time; there's no limit on the number of checksums you can have in a single packet if you count all the different layers of encapsulation.

The complaint above seems somewhat ridiculous to me. Ethernet and HDLC checksums are not just both bits, they're both fixed-length CRC trailers protecting variable-length data frames. (Even a simple longitudinal XOR is actually a CRC with a polynomial of x^8+1.) The only difference is the polynomials used and the terminology used to describe them. This is not "nontechnical copy editing", it's good pedagogy recognizing multiple instances of the same basic principle and explaining them together. I expect harmonic oscillator to explain both the mechanical and electrical forms, and I expect one unified article to explain checksum trailers. 71.41.210.146 (talk) 13:35, 12 January 2008 (UTC)[reply]
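(Aside: the parenthetical claim that a longitudinal XOR is a CRC with generator x^8+1 can be checked directly. Below is an illustrative Python sketch, zero preset and bitwise, with function names of my own invention; it demonstrates the equivalence, not any particular protocol's algorithm.)

```python
from functools import reduce
from operator import xor

def crc8_x8_plus_1(data: bytes) -> int:
    """Bitwise CRC over GF(2) with generator x^8 + 1, zero preset.
    Because x^8 = 1 modulo (x^8 + 1), every byte folds back onto
    the same 8 positions, so the remainder is the XOR of all bytes."""
    reg = 0
    for byte in data:
        reg ^= byte
        for _ in range(8):
            reg = ((reg << 1) ^ 0x01) & 0xFF if reg & 0x80 else (reg << 1) & 0xFF
    return reg

msg = b"HELLO, WORLD"
lrc = reduce(xor, msg, 0)          # longitudinal XOR of all bytes
assert crc8_x8_plus_1(msg) == lrc  # the LRC really is a CRC in disguise
```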

Well, I consider your desire to conflate a BCC character in a transmission-block string at layer 5 with an FCS bitfield in an 802.3 frame at layer 2 to be ridiculous too. To find the location of a BCC at the end of the transmission-block string, we parse that transmission-block string by searching for EOB, EOT, ETX, or ETB characters as mentioned in the transmission block article and in Federal Standard 1037C. There is absolutely no searching for EOB, EOT, ETX or ETB characters in an 802.3 Ethernet frame. We don't go looking for EOB, EOT, ETX, or ETB characters to find FCS. Why? Because an 802.3 Ethernet frame is not a transmission block because it is not a string of graphemes. Layer 5 might have graphemes, but layer 2 absolutely certainly does not. This is an incredibly fundamental difference between layers 2 and 5. Instead, Layer 2 has frames (not transmission blocks) where the frame has some sort of header, and some sort of a priori determination of length, and optionally (present in 802.3's case) a frame trailer, where the FCS is located. So in layer 5 we see a string of graphemes that are parsed to find the end of transmission block via a posteriori means. In layer 2 we see a frame whose length is known a priori from the length field in the 802.3 frame header and is not organized into a string of graphemes. Therefore, perhaps you can see that I consider your desire to conflate entirely separate and different concepts as itself ridiculous. Therefore, trying to play a trump card that claims that my position is ridiculous and yours is not is probably not a fruitful line of reasoning, because we both consider each other's premise deserving of ridicule and exposure of the facts. For me, I think that the numerous facts are on my side of the discussion that separate BCC from FCS quite distinctly. Let us count the ways in which they are different:
  • 1) FCS is 32-bit, but BCC is 16-bit.
  • 2) FCS is calculated by the polynomial x^32+x^26+x^23+x^22+x^16+x^12+x^11+x^10+x^8+x^7+x^5+x^4+x^2+x+1, but BCC is calculated by the polynomial X^16+X^15+X^2+1.
  • 3) FCS appears within an 802.3 Ethernet frame, but BCC appears within a transmission-block string.
  • 4) FCS's position is determined a priori by the 802.3 frame-header's length field, but BCC's position is determined a posteriori by parsing graphemes, looking for EOB, EOT, ETX, or ETB graphemes from a character-set.
  • 5) FCS appears in layer 2 in the OSI protocol-stack reference model, but BCC appears in layer 5.
  • 6) FCS is internationally standardized by IEEE, but BCC and HDLC are IBM proprietarianism.
  • 7) FCS is an aspect defined by its layer-2 technology (802.3) to determine the validity of that layer-2 technology's frame (as opposed to being corrupted), but BCC is a layer-5 add-on that supplements a layer-2 technology (HDLC) to rectify an insufficient amount of validation in that layer-2 technology.
  • 8) FCS is known to layer-2 802.3 Ethernet, but BCC is unknown to its corresponding layer-2 technology: HDLC.
  • 9) FCS is always calculated at line rate by the framer silicon or NPs that are responsible for layer-2, but BCC either is calculated by layer-5 software at general-purpose-software speed or imposes an arcane layer-2-to-layer-5 jump to be performed by the HDLC framer silicon or NPs.
  • 10) FCS is a subclass of 32-bit CRCs, but BCC is a subclass of 16-bit CRCs. 16-bit CRC and 32-bit CRC are a subclass of CRC in general. FCS and BCC are second-cousins, not siblings and not even first cousins.
  • Wow! off the top of my head I have listed 10 ways that they are quite different and undeserving of article conflation. I once again reiterate: I do not see how "they are both the same thing with different names" as you claim. That is a quite adept David-Copperfield-like "different name" that can 1) double the number of bits from 16 to 32, 2) swap out HDLC for 802.3 layer-2 technology, 3) swap out IBM proprietarianism for an IEEE international standard, 4) swap out locating via a posteriori string parsing with locating via a priori arithmetic, 5) swap out a layer-5 & layer-2 symbiosis for an entirely layer-2 self-contained solution, 6) swap out silicon and NPs at layer 2 for general-purpose software at layer 5, and so forth. That is quite a "name difference" that can accomplish all of that for "the same thing"! That is a very hard-working "name difference". Too hard-working, perhaps. But then again, if FCS and BCC are "the same thing", why would a "name difference" need to accomplish all of this drastic swap-out in the first place?! If they were the same thing to begin with, none of that swap out would be necessary for the so-called "name difference" to accomplish as one of its duties.

Harmonic oscillator is a good superordinate term (i.e., name of a superclass) that contains electrical harmonic oscillator and mechanical harmonic oscillator as subordinate terms (i.e., names of subclasses). An FCS is not a BCC, in the same way that a mechanical harmonic oscillator is not an electrical harmonic oscillator. A BCC is not an FCS, in the same way that an electrical harmonic oscillator is not a mechanical harmonic oscillator. What superordinate term (i.e., what superclass) do you propose to house any alleged commonality between BCC and FCS? Are you proposing merging both BCC and FCS into the CRC article? If so, that is not what is proposed. I stand firmly against merging a subclass into another subclass. I am willing to entertain merging all subclasses into a superclass as sections within the superclass article. Should we merge the articles for the subclasses sport-utility vehicle and pick-up truck? They are both subclasses of the superclass passenger vehicle. So, should we merge sport-utility vehicle into pick-up truck or should we merge pick-up truck into sport-utility vehicle? Or should we merge both sport-utility vehicle and pick-up truck into passenger vehicle, much as automobile already has been? —optikos (talk) 19:05, 13 January 2008 (UTC)[reply]

Whew, that's a lot to respond to! Let me try.
  • 1) and 2) That's a non-difference. Other protocols, such as HDLC, use a 16-bit FCS. It's a trailing checksum; the details of its computation are irrelevant.
  • 3) Right, they both appear within some sort of "data packet".
  • 4) Factual disagreement Standard DIX Ethernet doesn't have a length field in the header. While 802.3 went and added one, 99% of all Ethernet packets transmitted these days use that field for the "EtherType" value, typically setting it to 0x0800 for IPv4. (See Ethernet II framing for details.) The length is determined by searching for a trailing delimiter. The exact delimiter is physical-layer dependent. This is a cessation of signal in the simple 10 Mbit Manchester code versions, and reserved 4b/5b code values for 100baseTX (IEEE 802.3-2005, section 2, paragraph 24.2.2.1.5).
    Likewise, HDLC frames are terminated by a "flag" pattern (01111110) containing 6 consecutive 1 bits. Within the data, a 0 bit is "stuffed" after 5 consecutive 1 bits, and removed by the receiver, so the flag pattern cannot occur in normal data. (7 or more consecutive 1 bits is an abort, something like an RS-232 break.) It is very unusual for a link layer protocol to use a length prefix; the only one I know of that does is DDCMP.
  • 5) This is opinion, but I think the layer difference is irrelevant. It's a packetization/encapsulation operation. I point out that IBM's Bisync protocol, which is also a link-layer protocol, has a "block check code". (Frame format is SYN1 SYN2 (optional SOH+header) STX data ETX BCC)
  • 6) Factual disagreement IBM developed SDLC, then ISO got hold of it and created HDLC out of it. While the two are related, the HDLC name implies the non-IBM-proprietary version. HDLC is defined by the ISO 13239 standard.
  • 7) I'm afraid that I don't understand what you're saying. "Block Check Character" is simply the term for "checksum" that tends to be used in certain contexts. There are dozens of examples in computer science of synonyms that are still in common use. Program counter vs. Instruction pointer. Translation lookaside buffer vs. Address translation cache. Frame vs. packet vs. datagram vs. transmission block. Secondary storage vs. DASD. IBM and large standards bodies like ISO are often given to making up their own terminology. Not to mention the disagreement over bit numbering!
  • 8) As I said in the previous point, the name difference is an accident of history. People who called their packets "frames" developed the term "frame check sequence". People who called their packets "blocks" developed the term "block check character" or "block check code". They're no more different than an egg is from un oeuf.
  • 9) Factual disagreement An FCS is not always calculated by silicon. Hardware is used when the data rate makes it necessary, and software when that's fast enough. The Point-to-Point Protocol uses a stripped-down version of HDLC, which includes a "frame check sequence" uniformly computed in software. Heck, I make one product at work that supports both 16- and 32-bit FCSs on HDLC, and uses hardware to compute the 16-bit FCS (because it's available), but turns it off and uses software when generating a 32-bit FCS (because the hardware doesn't support it). The implementation choice is a strictly internal detail that doesn't affect the signal on the wire in the least. Modern "WinModems" do most of V.32bis in software, and all of the higher layers - including V.42 HDLC-style packetization - in software.
    (Just to give an example in the other direction, the Motorola MC68302 "Integrated multiprotocol processor" does Bisync, including BCC generation and checking, in hardware.)
  • 10) Excuse me? You're saying that the difference between a 16- and a 32-bit CRC is a major qualitative difference? That's... crazy. USB uses 5- and 16-bit CRCs, ATM cells use an 8-bit CRC on the cell header (but AAL5 uses a 32-bit CRC on the packets overall), Ultra-ATA uses a 16-bit CRC, HDLC uses a 16- or 32-bit CRC (depending on the version, and some flavors like PPP negotiate it at run time), PGP uses a 24-bit CRC, lots of standards (802.3, IEEE 1394) use a 32-bit CRC, and some use a 64-bit CRC. They're all the same basic math. Are you suggesting that the Cyclic redundancy check article needs to be split into different articles based on CRC size? It's trivial to make a single piece of hardware, or a single software routine, that can compute CRCs of different sizes. This is strictly a quantitative difference. The CRC size is chosen to make the error budget work out.
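(Aside on the "single software routine" point: one generic bit-at-a-time remainder loop really does cover every width listed in point 10. This Python sketch uses zero preset and no reflection, so it does not reproduce the standardized check values of those protocols; it only demonstrates that width is a parameter. The polynomial constants are the well-known generators.)

```python
def crc(data: bytes, poly: int, width: int) -> int:
    """One bit-at-a-time CRC remainder loop for any width
    (zero preset, MSB-first, no reflection; illustration only)."""
    mask = (1 << width) - 1
    top = 1 << (width - 1)
    reg = 0
    for byte in data:
        for i in range(7, -1, -1):
            feedback = bool(reg & top) ^ bool((byte >> i) & 1)
            reg = ((reg << 1) & mask) ^ (poly if feedback else 0)
    return reg

msg = b"123456789"
for name, poly, width in [
    ("USB CRC-5",      0x05,               5),   # x^5+x^2+1
    ("LRC (x^8+1)",    0x01,               8),
    ("CRC-16 (IBM)",   0x8005,             16),  # x^16+x^15+x^2+1
    ("CRC-24 (PGP)",   0x864CFB,           24),
    ("CRC-32 (802.3)", 0x04C11DB7,         32),
    ("CRC-64 (ECMA)",  0x42F0E1EBA9EA3693, 64),
]:
    assert crc(msg, poly, width) < (1 << width)  # remainder fits the width
```

The same loop, six widths: the difference really is quantitative.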
I wouldn't really object to merging both the FCS and BCC articles into CRC, but there's a problem... the CRC article is already big enough. Not to mention that an FCS doesn't have to be a CRC (it just usually is), and the term tends to be used in a particular context (networking), and that's worth explaining. It's also worth explaining the context in which the term BCC tends to be used. But the reason for combining those is that they're both really small articles, and it's just not worth carrying two around when they should have essentially identical content.
Pickup truck and sport-utility vehicle are both similar small trucks, but there is a distinction (pickup trucks are by definition open; SUVs are enclosed), and there's enough interest in each to justify two decent-sized articles.
For a better example, look at the articles on car trunk and car boot: both link to the same article, because they're both the same thing, just on different sides of the pond. There are numerous terminology differences - truck vs. lorry, gas vs. petrol, tire vs. tyre, hood vs. bonnet, turn signal vs. blinker, etc. but everyone recognizes the terms are synonymous. The fact that British automobiles have the steering wheel on the other side, tend to be smaller, are often sold under different names, are more likely to use metric components, and don't cope with desert temperatures very well doesn't change that fact. I could point out dozens of differences, but none of them are important differences.
Just like with English dialects in different countries, there are different dialects of technical jargon in different environments. See the Jargon File for lots of examples. They are slowly being eroded by global communication, but had decades to develop before the internet became omnipresent, and are enshrined in various standards documents and cultural traditions. For some strange reason we still talk about core dumps, even though magnetic cores are long gone.
I maintain that the difference between a Bisync block check code and a PPP frame check sequence is the name and nothing but. Yes, Bisync uses either IBM's CRC-16 polynomial (x^16+x^15+x^2+1) or a 1-byte longitudinal redundancy code (x^8+1), while PPP uses either the 16-bit CRC-CCITT (x^16+x^12+x^5+1) or CRC-32 (negotiable at connection time), but complaining that trailing 16-bit CRCs are somehow significantly different because they use different polynomials is utterly ridiculous.
Both are packet-based data link layer protocols, both use an embedded trailing delimiter, both provide for byte-stuffing to escape delimiters that appear in the data stream, and both use a trailing checksum to validate those packets.
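(Aside: the shared stuffing-to-protect-the-delimiter mechanism is simple enough to sketch. Here is an illustrative HDLC-style bit-stuffer in Python, operating on lists of bits; the function names are mine, and real implementations work on the serialized bitstream in hardware or in tight software loops.)

```python
def bit_stuff(bits):
    """Transmitter: insert a 0 after every run of five 1 bits so the
    01111110 flag can never be mimicked inside the frame body."""
    out, run = [], 0
    for b in bits:
        out.append(b)
        run = run + 1 if b else 0
        if run == 5:
            out.append(0)
            run = 0
    return out

def bit_unstuff(bits):
    """Receiver: drop the 0 that follows every run of five 1 bits."""
    out, run, skip = [], 0, False
    for b in bits:
        if skip:            # this is the stuffed 0; discard it
            skip, run = False, 0
            continue
        out.append(b)
        run = run + 1 if b else 0
        if run == 5:
            skip, run = True, 0
    return out

payload = [1, 1, 1, 1, 1, 1, 1, 0, 1]              # seven 1s in a row
stuffed = bit_stuff(payload)
assert "111111" not in "".join(map(str, stuffed))  # flag can't appear
assert bit_unstuff(stuffed) == payload             # round-trips exactly
```

Bisync's byte-level DLE escaping plays the same role with a different mechanism; either way, the delimiter stays unambiguous.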
Finally, the most important point is that the Block check character article is a 2-paragraph stub that hasn't received a single non-cosmetic edit since it was imported from Federal Standard 1037C six years ago, and should be merged somewhere if reasonably possible.
There is a (slightly) larger article on a synonymous term, and the synonymous term is somewhat more popular (as it's used by both the HDLC and 802.3 protocol families, which are much more popular than Bisync-derived protocols), so that's the direction I'd suggest merging, but that detail is open to discussion.
71.41.210.146 (talk) 15:26, 14 January 2008 (UTC)[reply]
Let's cut to the chase scene. What would it take for you to lose this argument? What would convince you that you are perhaps incorrect? Or is there no such line of reasoning that would convince you? —optikos (talk) 02:44, 4 February 2008 (UTC)[reply]
What it would take is a significant weight of WP:SOURCEs that make a strong distinction. As compared to sources that consider them variants on the same thing:
  • http://www.tml.tkk.fi/Studies/Tik-110.300/1998/Essays/error_detection.html (section 3.2, "Block Check")
  • http://faculty.washington.edu/ddey/msis-523/download/ch03.pdf (page 6)
  • http://www.freepatentsonline.com/EP0405041.html "It generally involves a computing mechanism based upon a polynomial value in order to perform a Cyclic Redundancy Checking (CRC), the result of which being a Block Check Character (BCC) also called Frame Check Sequence (FCS)."
  • Carl Stephen Clifton (1987), What every Engineer should know about Data Communications, CRC press, p. 90, ISBN 0-8247-7566-X, This field is followed by the 16-bit frame check sequence (FCS) and the final flag. The FCS is a block check character (BCC) or cyclic redundancy check character… (emphasis added)
  • http://www.cob.niu.edu/faculty/m10cfg1/summer460_03/dc_chp4.ppt (slides 24–25)
  • The Philips SCN2652 multi-protocol communications controller uses the term "FCS" with bit-oriented protocols, and "BCC" with byte-oriented ones, but offers CRC-CCITT for both. (For compatibility, the bit-oriented CRC is preset to −1, while the byte-oriented one is preset to 0.)
  • Dario J. Toncich (1993), Data Communications and Networking for Manufacturing Industries (PDF) (2nd ed.), pp. 235–236, ISBN 0-646-10522-1
  • http://www.cse.shirazu.ac.ir/~zjahromi/data%20course/halsall/ch3_4.pdf beginning on page 4
  • http://www.patentstorm.us/patents/5043989/description.html "Block Check Character (BCC) also called Frame Check Sequence (FCS)" (emphasis added)
Note that many of these sources reserve the term BCC for 1-byte longitudinal parity (CRC with polynomial x^8+1), and use the name FCS for more sophisticated CRCs, which I agree is a common distinction that should be documented, but they all agree that they are alternatives with equivalent function but different performance. (Like propeller vs. jet engines, or diesel vs. petrol engines.) If someone had enough material to fill two articles, it would be a reasonable place to divide them, but that's far from the case here.
71.41.210.146 (talk) 15:02, 17 September 2009 (UTC)[reply]
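(Aside on the SCN2652 point in the source list above: the effect of the preset choice, all-ones for bit-oriented protocols vs. zero for byte-oriented ones, is easy to demonstrate in software. An illustrative Python sketch of CRC-CCITT with a selectable preset; the function name is mine.)

```python
def crc_ccitt(data: bytes, preset: int) -> int:
    """MSB-first CRC with generator x^16+x^12+x^5+1 (0x1021) and a
    selectable preset, mirroring the controller's two init modes."""
    reg = preset & 0xFFFF
    for byte in data:
        reg ^= byte << 8
        for _ in range(8):
            reg = ((reg << 1) ^ 0x1021) & 0xFFFF if reg & 0x8000 else (reg << 1) & 0xFFFF
    return reg

msg = b"123456789"
# Same polynomial, same data, different preset -> different check value:
assert crc_ccitt(msg, 0x0000) != crc_ccitt(msg, 0xFFFF)
```

The preset (like the polynomial and the width) is a knob on one shared mechanism, which is the point the sources above keep making.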
The discussion above is closed. Please do not modify it. Subsequent comments should be made on the appropriate discussion page. No further edits should be made to this discussion.

Frame standard


Hi, I'm a bit confused by the image and its explanation in this article. I thought only the IEEE frame format had an SFD (length 1B + a preamble of 7B) as opposed to the DIX frame format which has an 8B preamble without an SFD. If that's so, then this is an image of an IEEE frame format. However, this format contains a "length" field (length [46;1500]B), whereas the image displays a "EtherType" field which makes me think of the DIX frame format's "Type" field (length >1500B). Any clarification? 91.183.186.227 (talk) 10:04, 5 November 2014 (UTC)[reply]

First of all, there is no longer a distinction between "DIX" and "IEEE" frame formats. In the original IEEE 802.3 specification, the 2-byte field after the destination and source addresses was a length field, but, as of IEEE Std 802.3y-1998, it is a field that's a type field if it has a value >= 1536 and a length field if it has a value <= 1500 (yes, that's really what IEEE Std 802.3-2008, section 1, part 3.2.6 "Length/Type field" says, including no valid interpretation of that field for values between 1501 and 1535).
Second of all, the 8 octet stuff before the frame data is the same in D/I/X Ethernet and 802.3 Ethernet - 7 octets of 10101010 followed by one octet of 10101011. The difference between "8-octet preamble" and "7-octet preamble followed by a one-octet SFD" is the difference between calling the 8th octet, with its different value, the 8th octet of the preamble or a separate Start Frame Delimiter. (At least that's what my D/I/X 1.0 spec says; perhaps the 2.0 spec decided to call the 8th octet an SFD.) Guy Harris (talk) 21:45, 5 November 2014 (UTC)[reply]
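(For readers following along: the Length/Type rule described in the reply above amounts to a three-way comparison. A tiny illustrative sketch, with a function name of my own choosing:)

```python
def classify_length_type(value: int) -> str:
    """802.3 Length/Type rule as described above: <= 1500 is a length,
    >= 1536 is an EtherType, and 1501-1535 has no valid interpretation."""
    if value <= 1500:
        return "length"
    if value >= 1536:
        return "type"
    return "invalid"

assert classify_length_type(0x0800) == "type"   # 0x0800 = 2048: IPv4 EtherType
assert classify_length_type(46) == "length"     # minimum payload length
assert classify_length_type(1510) == "invalid"  # the undefined gap
```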