Talk:Byte/Archive 1


Paradoxical "Citation Needed?"

In the beginning of the article we see "There is no standard[citation needed] for the size of a byte in bits."

If we were to find a valid citation for that statement, wouldn't the statement be invalid, as whatever we cite would define the standard? (Even if the definition was no specific number?)

And it seems to me that by leaving it as-is, we leave original research on Wikipedia!

How can we deal with this eyesore?

Universalss (talk) 06:50, 31 August 2009 (UTC)

Removed "citation needed" and rearranged some for readability. Please feel free to refine it further!
HenkeB (not logged in) 83.255.41.8 (talk) 22:13, 16 September 2009 (UTC)

I see there are also paradoxical "Citation Needed" tags in the body of the article:

"The unit symbol for the byte is specified in IEEE 1541 and the Metric Interchange Format[8] as the upper-case character B, while other standards, such as the International Electrotechnical Commission (IEC) standard IEC 60027, appear silent on the subject.[citation needed]" ... and "It is used primarily in its decadic fraction, the decibel (dB), for signal strength and sound pressure level measurements, while a unit for one tenth of a byte, i.e. the decibyte, is never used.[citation needed]"

If an authority is silent on a subject, where do we find evidence of their silence? Similarly, citations for the non-use of the term 'decibyte' don't exist anywhere, precisely because the term is unused! Do we require citations for the non-existence of unicorns?

Benklaasen (talk) Wed Sep 21 17:45:48 GMTDT 2011 —Preceding undated comment added 16:46, 21 September 2011 (UTC).

THERE ARE EIGHT BITS IN A BYTE! 98.210.246.205 (talk) 05:21, 23 October 2015 (UTC)

Older, miscellaneous comments

The text describing the image used to represent the difference between the decimal and binary interpretations of values is somewhat ambiguous. For example, 15 is 50% higher than 10 but 10 is only 33% lower than 15... As a result, the current text with the image simply describing it as a difference between the two is ambiguous, and both sets of figures uploaded are correct. I have tried changing the text to read "This chart shows the growing percentage shortfall of the decimal interpretation from the binary interpretations of the unit prefixes plotted against the logarithm of storage size. Example uses multiple units of bytes." SEoF (talk) 11:42, 10 May 2010 (UTC)
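For anyone checking the figures behind that chart, here is a minimal sketch in C (the prefix names up to yotta are assumed) computing the percentage shortfall of each decimal prefix 10^(3n) from its binary interpretation 2^(10n):

 #include <math.h>
 #include <stdio.h>
 
 /* Shortfall of each decimal prefix 10^(3n) from the binary
    interpretation 2^(10n), for n = 1 (kilo) to n = 8 (yotta). */
 int main(void) {
     const char *prefix[] = {"kilo", "mega", "giga", "tera",
                             "peta", "exa", "zetta", "yotta"};
     for (int n = 1; n <= 8; n++) {
         double shortfall = (1.0 - pow(10.0, 3.0 * n) / pow(2.0, 10.0 * n)) * 100.0;
         printf("%-5s byte: %5.2f%% smaller\n", prefix[n - 1], shortfall);
     }
     return 0;
 }

This prints about 2.34% for kilo, rising to about 17.3% for yotta, which is why the gap appears as a growing shortfall rather than a constant difference.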


The reason that a kilobyte is 2^10 is that the same numbers a computer stores as data are also used to define the addresses where that data is held, so a 10-bit address bus can address 2^10 bytes (a kilobyte), a 20-bit address bus can address 2^20 bytes (a megabyte), and so on. The IBM PC's CPU had 16-bit registers, but its address bus was actually 20 bits wide; the Intel 8086 spent an extra cycle to produce a 20-bit address word by combining 16 bits of one word with 4 bits of another. This is why in some 8086 assembly programs you might see hexadecimal memory addresses written like this: [00bb:aafe]. In fact, the 8086 is the virtual machine for Intel chips following the 386 release. It is neither efficient nor easy to use word sizes of 20 or 10 bits without loss, as internally computers use 8-, 16-, 32-, or 64-bit data buses, so memory addressing changes in proportion. On indirect storage media like tapes, flash cards, and such, where the address word is loaded into the device before reading out the data (rather than sending the entire data word to the device), the storage can be of any length.
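A minimal sketch of that segment:offset arithmetic, using the [00bb:aafe] value from the comment above (the segment * 16 + offset rule is the standard 8086 real-mode scheme, a more precise statement of the "16 bits of one word and 4 of another" description):

 #include <stdint.h>
 #include <stdio.h>
 
 /* 8086 real-mode addressing: a 20-bit physical address is built
    from two 16-bit words as segment * 16 + offset. */
 int main(void) {
     uint16_t segment = 0x00bb, offset = 0xaafe;  /* the [00bb:aafe] example */
     uint32_t physical = ((uint32_t)segment << 4) + offset;
     printf("[%04x:%04x] -> physical address %05x\n",
            segment, offset, (unsigned)physical);
     return 0;
 }

This prints a 20-bit physical address of 0b6ae.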



I changed the text from "byte is a basic unit of storage" because the bit is the basic unit of storage; the byte is derived.

It could just as easily be said that the byte is the basic unit and the bit comes from making eighths of a byte. --Ihope127 17:25, 5 September 2005 (UTC)

A few years ago I too thought byte meant 8 bits, and my eyes were opened (by a Lisp hacker, as it happens). If you don't believe the claims about 7-bit bytes, 9-bit bytes and so on, I suggest you look on the web.

I hope this article is an improvement.

--drj


I also remember an etymology that byte originally stood for by eight, but the Jargon File says that even the first machines having bytes were not 8-bit. Oh well, maybe it's worth it just for the mnemonic value. --Chexum


I found some additional things for the table... they're really there and official. Example: 5 EB = all the words ever spoken by human beings

Is the "doggabyte" unit for real? Reference? Also, do we really need articles about anything past exabyte, since these have no practical application yet and most probably never will?—Eloquence 23:10, Dec 31, 2003 (UTC)

I removed the links to nonabyte, doggabyte, nobibyte, and dogbibyte, because I realized that the fact that I had made those pages redirects to byte created a self-link situation. I think the other articles for absurdly large units are at least not so stubby that they have to go. As for practical application, remember when you thought no one could possibly ever fill up a whole gigabyte?! --Ed Cormany 23:40, 31 Dec 2003 (UTC)
I'd still like to know if doggabyte is a real term. Google provides no credible reference (only a couple of websites mention it).—Eloquence
Supposing "doggabyte" is a real term, wouldn't the binary form be "dobibyte" instead of "dogbibyte"? Consider "zettabyte," which goes to "zebibyte." Also, "dogbibyte" is hard to say. --Bkell 08:11, 14 Feb 2004 (UTC)

Nice article! Good work! Dpbsmith 02:00, 27 Jan 2004 (UTC)


C for example defines byte to be synonymous with unsigned char -- this makes it sound as though standard C allows definitions of the form "byte foo". Marnanel 00:36, 20 Feb 2004 (UTC)


I suggest removing mention of the "standards"-based terms, as they have not caught on and (likely) never will. Pgunn 15:30, 22 Feb 2004 (EST)

The "standards"-based terms (try to) redefine existing standards, 1KB=1024b, it shouldn't suddenly be 1KB=1000b. That should probably be mentioned in the article.
Simon Arlott 19:36, 22 Apr 2004 (UTC)
Common usage does not imply "standard". The "kibi", "mebi", etc. prefixes are IEC standards.
By the way, ask a hard drive manufacturer what a "gigabyte" is, and their answer will be "1 000 000 000 bytes", since this allows them to claim that a disk with 60 000 000 000 bytes is 60 GB, whereas common usage would call this a 55.9 GiB disk (see the sketch after this thread).
Jrv 5 July 2005 19:35 (UTC)
Standard means "a quality or measure which is established by authority, custom, or general consent", so, while the authoritative IEC prefixes are standard, so are the actual, generally consented prefixes. On the other hand, I question the authoritativeness of an organization that charges money for access to its "standards".

Undeference (not logged in - Sept 22)
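To make the hard-drive arithmetic above concrete, a minimal sketch in C (the 60 000 000 000-byte figure is taken from the comment):

 #include <stdio.h>
 
 /* A disk sold as "60 GB" (60 000 000 000 bytes), expressed in
    binary gibibytes (2^30 bytes each). */
 int main(void) {
     double bytes = 60e9;
     double gib = bytes / (1024.0 * 1024.0 * 1024.0);
     printf("%.0f bytes = %.1f GiB\n", bytes, gib);  /* prints 55.9 GiB */
     return 0;
 }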


There is no mention in the article for why a kibibyte is different from a kilobyte. See this IEC article for several important reasons why 1000 != 1024.

Also, there is nothing in this entry about the potential confusion between "B" (SI abbreviation for Bel) and "B" (popular abbreviation for a byte).

Jrv 5 July 2005 19:35 (UTC)

I seriously doubt that anyone has ever actually been confused by this, but go ahead and mention it if you want. – Smyth\talk 5 July 2005 21:53 (UTC)
I too found this interesting since IIRC (correct me if I'm wrong) the IEC recommends the use of "kB" to mean 1000 bytes. The closest I can come up with to make bytes use a proper SI form would be kbytes, Mbytes, etc., but this is not as nice as the full kilobyte, megabyte, etc. forms (it's like kmeter instead of kilometer or km). - Undeference (not logged in - Feb 6)
In practice, the risk of confusing B(els) with B(ytes) is negligible: bels are only used with the deci prefix (see decibel). Talking about decibytes would be ridiculous! Not even decibits make sense to me, although fractions of bits actually do exist, for example in information theory (see the sketch after this thread), as it is just as easy to write 0.1 bits as 1 decibit. TERdON 04:19, 3 January 2007 (UTC)
Bels are marked without the deci prefix. "The bel is still used to represent noise power levels in hard drive specifications, for instance. The Richter scale uses numbers expressed in bels as well, though they are not labeled with a unit. In spectrometry and optics, the blocking unit used to measure optical density is equivalent to −1 B. In astronomy, the apparent magnitude measures the brightness of stars logarithmically, since just as the ear responds logarithmically to acoustic power, the eye responds logarithmically to brightness." (from Bel) --tonsofpcs (Talk) 08:34, 3 January 2007 (UTC)
I'd like that first claim about hard drives referenced, as I've only ever seen dB numbers. The other usage areas aren't really directly related to computers, so the risk would only exist in very specialized contexts. TERdON 11:57, 3 January 2007 (UTC)
Bels are not SI units; the bel was once proposed but explicitly rejected by the committee (see decibel), so I removed that claim. 63.115.63.178 (talk) 22:54, 17 April 2015 (UTC)
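A minimal sketch of the fractional-bits point above, computing the Shannon entropy of a biased coin in bits per toss (the 0.99 probability is an arbitrary illustration):

 #include <math.h>
 #include <stdio.h>
 
 /* Shannon entropy of a biased coin, in bits per toss: a value
    between 0 and 1, i.e. a genuine fraction of a bit. */
 static double entropy_bits(double p) {
     return -(p * log2(p) + (1.0 - p) * log2(1.0 - p));
 }
 
 int main(void) {
     printf("fair coin (p=0.50):   %.3f bits\n", entropy_bits(0.50));  /* 1.000 */
     printf("biased coin (p=0.99): %.3f bits\n", entropy_bits(0.99)); /* 0.081 */
     return 0;
 }

A heavily biased coin carries only about 0.081 bits of information per toss, i.e. well under one bit.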

True or false??

True or false: it is very likely that the "kibibyte"-like terms will not get widely adopted anytime between now and 2040. 66.245.117.101 00:13, 27 Jun 2004 (UTC)

Extremely doubtful. I find it more than a little annoying that Wikipedia articles use "kibibyte" and "mebibyte" when the vast majority of readers know these as the more widespread and familiar terms "kilobyte" and "megabyte" (or even "K" and "megs"). In spite of the wishes of ISO and its ilk, forcing its terminology on the masses just ain't gonna happen. Sheesh. — Loadmaster 23:03, 5 July 2007 (UTC)
I don't see why Wikipedia should respect some crazed standard ISO pulled out and virtually nobody ever liked or used. I consider this article to be wrong, on the basis that it wrongly specifies kilo/mega/etc. units as powers of 10. By doing this, Wikipedia only creates confusion and helps dishonest marketing guys. 217.125.117.197 14:12, 16 November 2007 (UTC)
Agreed. Nobody uses the mega/kilo units in their power-of-ten form, however correct by SI standards, when referring to data. The only exception to this is hard-disk manufacturers who obviously achieve a larger-looking drive size by hijacking the meaning of the units. And now Wikipedia is, in effect, endorsing that! - Edam (talk) 11:44, 26 June 2008 (UTC)
People ought to use ISO units. If a kilobyte means 2^10 bytes, but a kilopascal means 10^3 pascals, then we'll waste the time of generations of students to come - just as previous generations wasted their time learning British Imperial units (with the slug for a unit of mass, the perch for a unit of distance, etc.), or as U.S. students waste their time learning the unrevised British Imperial units (inches, feet, yards, miles, gallons etc.).
If nothing else, the U.S. use of non-metric units is an anticompetitive barrier, akin to a tariff, on parts manufacturers from other nations: try selling bolts to the U.S. automotive industry if you're from any non-U.S. (i.e. metric-using) nation. Similarly, U.S. cars are harder to sell abroad when they use non-metric parts. (These last points may have finally been sorted out; it's been a while since I checked.) The U.S. insistence on using its own system of units is parochialism at its worst and underlines that the U.S. is only interested in "free trade" when it benefits itself.
StandardPerson (talk) 05:04, 16 February 2014 (UTC)

Just about every operating system (if not actually every one) and all software I've used defines a bit as 1 or 0, a byte as 8 bits, a kilobyte as 1024 bytes, a megabyte as 1024 kilobytes, and so on. For this article to state that these units of measure can be expressed as simple exponents is simply misleading. So I agree that the use of the kibibyte as if it were fact is annoying. It's a largely unused and unimplemented standard. I've yet to see it really used except in places such as Wikipedia, and by hardware manufacturers that like to make an extra buck confusing customers. When I got my A+ certification not too long ago, the test still used the 1024 system. I really don't think there is any point in changing it. Just seems silly. —Preceding unsigned comment added by 66.227.223.5 (talk) 19:07, 8 May 2008 (UTC)

This is wrong, because these units (Ki-, Mi-, etc.) can be seen as default units in Ubuntu on the GNOME desktop. I'm sure you can find other examples. I disagree with previous comments stating that it is unclear: the use of KB or MB is confusing, because you don't know if they are multiples of ten or powers of two, whereas KiB is crystal clear: it is 1024 B! Calimo (talk) 17:22, 1 February 2009 (UTC)

Origin of the word byte ?

The line that comes closest to answering this question is: "The word was coined by mutating the word bite so it would not be accidentally misspelled as bit." Now what's the relation between the word "bite" and the concept of the byte? How did bite come up in the first place? Jay 16:27, 16 August 2005 (UTC)

http://www.byte.com/art/9502/sec2/art12.htm

From the Polish byt (być = to be), as an equivalent of the German "wesend", i.e. the set of a class and its complement in a sign system or "univers du discours"
Please reply in the same section as what you're replying to. It's just plain very annoying when one doesn't do that. --Ihope127 17:27, 5 September 2005 (UTC)

I asked the same question on Wikipedia:Reference desk/Language. Copied here. Jay 16:52, 6 September 2005 (UTC)

In the byte article, the line that comes closest to answering this question is: "The word was coined by mutating the word bite so it would not be accidentally misspelled as bit." Now what's the relation between the word "bite" and the concept of the byte? How did 'bite' come up in the first place? And why would someone mispronounce 'bite' as 'bit'? Jay 18:36, 30 August 2005 (UTC)

A "bit" is the fundamental unit of digital information and can take on one of two values, commonly either zero or one. It takes a group of "bits" to define say a character, typically an eight bit group. This group of eight bits is called a "byte". There are exceptions to the eight bit grouping so this is a generalization. Now, imagine a long string of bits (zeros and ones) from which you take a "bite" exactly eight bits long and that each "bite" represents a number or letter. That is the concept (I think) that you were asking about. There was opportunity for confusion between the words "bit" and "bite" and so the word "byte" was coined to replace "bite" in an attempt to reduce the confusion. hydnjo talk 19:54, 30 August 2005 (UTC)
Thanks, you've precisely answered my question. So bite refers to the physical action of "biting". But is the name origin just an assumption? Is there some link which says that byte comes from bite, meaning "take a bite from a long string"? This link gives an alternate meaning (backronym?): "Binary Yoked Transfer Element". Jay 17:26, 1 September 2005 (UTC)
I imagine the similarity with "bit" also led to the introduction of "byte". Then someone thought they were too similar, and changed the "i" to a "y". — Nowhither 01:55, 31 August 2005 (UTC)
Would this person be Werner Buchholz ? Byte article mentions only this one name. This link says that he published a memo in 1956 July where he mentioned the word "byte", meaning the transition from bite to byte wouldn't have taken much time. Jay 17:26, 1 September 2005 (UTC)
I saw in a recent post in the comp.std.c newsgroup that Buchholz coined the term according to "Blaauw and Brooks". I'm guessing that they authored a historical account of the event. — Loadmaster 17:07, 19 October 2006 (UTC)
Also, the word "bit" comes from "binary digit". Superm401 | Talk 02:16, August 31, 2005 (UTC)
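A minimal sketch of the "bite of bits" idea from the explanation above (the 32-bit value is arbitrary; shift-and-mask is just one common way to pull out an 8-bit group):

 #include <stdint.h>
 #include <stdio.h>
 
 /* Taking 8-bit "bites" out of a longer string of bits: each
    shift-and-mask pulls one byte from a 32-bit word. */
 int main(void) {
     uint32_t word = 0x48692121;  /* arbitrary; its bytes spell "Hi!!" in ASCII */
     for (int i = 3; i >= 0; i--) {
         uint8_t byte = (word >> (8 * i)) & 0xFF;
         printf("bite %d: 0x%02X ('%c')\n", 3 - i, byte, byte);
     }
     return 0;
 }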

Possible backronyms

I've always thought that BYTE comes from BinarY TErm. At least this is what I was taught over and over 20+ years ago...

Sounds like a backronym to me. — Loadmaster 17:08, 19 October 2006 (UTC)

Other sources have also said that the word byte comes from the following: BinarY TablE

Again, backronym. --Air 09:37, 25 January 2007 (UTC)


There is a reference to the STRETCH origin of the word here: http://mywebpages.comcast.net/georgetrimble/A.html (just before the Multiple General Purpose Registers section).

The reference I have is "The origin of the word byte" by Buchholz, IEEE Annals of the History of Computing, 3(1), page 72. However, I have not seen this paper and don't have access to a copy. Can anybody help out? Derek farn 12:07, 25 January 2007 (UTC)

The definition of BYTE is BinarY TablE. This is based on a paragraph in one of the textbooks I used. The textbook said "binary table", referring to a table which listed the various viewable characters on the left, the ASCII decimal equivalent next, a hexadecimal equivalent to its right, and a binary equivalent at the far right. This was to demonstrate that the computer uses these combinations of binary digits (or BITs) to create all the characters and do all the actions we want the computer to show and do. This does not dispute anything stated by anyone else. It is only a definition, not a description of how it looks, nor does it state its size. (User: Amigan_1)

Not "to bite" but "a bite"

Here is a claim someone has made to coining the word byte. http://www.byte.com/art/9502/sec2/art12.htm

Here is an excerpt from a letter the author of that Byte magazine Letter (my father) wrote to me:

On good days, we would have the XD-1 up and running and all the programs doing the right thing, and we then had some time to just sit and talk idly, as we waited for the computer to finish doing its thing. On one such occasion, I coined the word "byte", they (Jules Schwartz and Dick Beeler) liked it, and we began using it amongst ourselves. The origin of the word was a need for referencing only a part of the word length of the computer, but a part larger than just one bit...Many programs had to access just a specific 4-bit segment of the full word...I wanted a name for this smaller segment of the fuller word. The word "bit" led to "bite" (meaningfully less than the whole), but for a unique spelling, "i" could be "y", and thus the word "byte" was born.

He was referring to "bite" as a noun, not a verb, the definition of which is "less than the whole." Webster.com defines it even better: "an amount taken usually in one operation for one purpose," which is precisely what I think he meant. User:koydooley

In support of this, I have heard the term "nybble" used to mean half a byte (as nibble might be half a bite). I am not aware that "nybble" was in broad use, but everyone I worked with felt that byte came from being a bite of data, e.g. a part that could be taken and easily consumed. 192.240.14.1 (talk) 21:11, 6 October 2014 (UTC)

The term nybble was coined in a humorous article in Datamation that ridiculed the word byte. Shmuel (Seymour J.) Metz Username:Chatul (talk) 21:41, 6 October 2014 (UTC)

See also Byte Magazine, Volume 2 Number 2, page 144, letter by William Buchholz https://archive.org/stream/byte-magazine-1977-02/1977_02_BYTE_02-02_Usable_Systems#page/n145/mode/2up

Byte = By-Eight?

I read somewhere that Byte is actually from "by eight" originally, not 'bite'. (Maybe it should have looked like "byght") 67.5.157.143 16:59, 9 May 2006 (UTC)

A "byte" was usually less than eight bits in the early years of the use of the word. -R. S. Shaw 03:46, 10 May 2006 (UTC)

Larger Units Tables

It seems redundant to have two tables about the same thing (one at the top and the other at the bottom). I think they should be combined. If not, then the lower table should be expanded to include exabytes. This term is in common usage today in the computer industry. (For example, a 64-bit processor can address 16 EB of virtual memory.) I will go ahead and expand the lower table, but I'll leave the redundancy question for you all.
MAzari 19:40, 11 August 2006 (UTC)

I was looking for larger powers as are in use in various other places, see e.g. https://sites.google.com/site/byteisinfinity/. It would be beneficial to include here. Yet I am unsure about an official source. Comments anyone? --Boris 'pi' Piwinger (talk) 11:45, 26 March 2013 (UTC)

Crumb

Did somebody add "crumb"? I've never heard (read) the term. Could you please cite sources to verify this?

Good point. Google throws up nothing of interest except Wikipedia related material. I will ask around. Derek farn 23:51, 22 January 2007 (UTC)

I think this should be removed as being either a hoax or a rarely-used neologism (which is against Wikipedia standards). I found a non-Wikipedia entry for it at MathWorld, but MathWorld is sometimes quite suspect. I've run across an earlier case or two of neologisms from there. Other editors have expressed concern about MathWorld: Wikipedia talk:WikiProject Mathematics/Archive 11#MathWorld, Wikipedia talk:Reliable sources/archive2#Judging secondary sources from their sources; also there was a prod'd (deleted) page, at the moment still in Google cache, deleted because it was a MathWorld neologism.
Another note, there is a "definition" like this at Crumb, which was tagged with a {fact} tag (but somebody has removed the tag). -R. S. Shaw 04:48, 23 January 2007 (UTC)

Programming languages

Article used to say: "It is one of the basic integral data types in many programming languages." I have corrected this to some programming languages, especially systems programming languages. Higher-level languages typically do not have an integral "byte" data type, though they may have a data type to represent characters (which is sometimes only for ASCII characters, sometimes more general). The C family (including Java) of course has bytes as integral types (called char), but there are other language families around.... --Macrakis 23:28, 25 February 2007 (UTC)

Well, just to be clear here: C and C++ have the "char" type, which is an integer data type capable of holding the smallest addressable unit of memory (actually that's "unsigned char", but no matter). Java has a "byte" type that is an 8-bit signed integer type, and it also has a "char" type that holds a 16-bit Unicode character. To confuse matters, C/C++ also has a "wchar_t" type to hold wide character values, which may be as small as 8 bits on some systems. — Loadmaster 22:59, 5 July 2007 (UTC)
Also, at least one well-known higher-level language (Python 3) has a built-in bytes type, used to represent the encoding of data in the outside world (in files, network connection data streams, etc.) as distinct from characters, which are abstract. Paul Koning (talk) 18:54, 25 March 2015 (UTC)
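A minimal C sketch of the points above (char as C's byte type; CHAR_BIT and UCHAR_MAX come from the standard <limits.h> header; the printed values assume a typical platform with 8-bit bytes):

 #include <limits.h>
 #include <stdio.h>
 
 /* In C the byte type is char: sizeof(char) is 1 by definition,
    and CHAR_BIT is the number of bits in a byte -- at least 8,
    but not required to be exactly 8 by the standard. */
 int main(void) {
     printf("bits per byte (CHAR_BIT): %d\n", CHAR_BIT);
     printf("sizeof(char):             %zu\n", sizeof(char));       /* always 1 */
     printf("unsigned char max:        %u\n", (unsigned)UCHAR_MAX); /* 255 if CHAR_BIT == 8 */
     return 0;
 }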

Update rejection due to architectural dependency?

I just added "page" to the list of "Alternate words". A TW script by Oli Filth immediately comes back with "Page size depends on architecture, etc.." How is the dependence on "architecture, etc.." any less true of the other terms on the list? In fact, most of them are followed by qualifiers specifying under which circumstances the term is valid. Oscar 14:37, 18 August 2007 (UTC)

"Page" has a specific technical meaning; it refers to a concept, and doesn't refer to any particular size. You're right that terms such as "double word" are also architecture dependent, but phrases such as these don't have any technical meaning other than to refer to specific amounts of memory. In my opinion, adding "page" to this list would just cause confusion. Oli Filth 14:53, 18 August 2007 (UTC)
Also, the terms currently in the list refer to atomic storage elements. In general, a page is a relatively large area of memory; the data stored there is unlikely to be atomic. Oli Filth 14:58, 18 August 2007 (UTC)
"Page" refers to a 256-byte block of memory on certain architectures, just as "word" refers to a 32-bit block of memory on certain architectures. "Page 0" has special technical significance on certain architectures, and "Page 6" on others, but those aren't the terms being used here. The phrase "I'm copying a page of memory into this buffer" is syntactically identical to "I'm copying a double-word of memory into this buffer." Also, the term "page" is obviously inspired by the term "word", which further establishes its belonging on this list.
Although you're right that the existing terms are atomic on certain architectures, the list isn't one of "Alternate terms for atomic memory blocks".
Looking at it from a different perspective: Outside of "page", can you think of other terms for memory blocks that aren't simply SI prefixes on "byte" and follow in the same tradition of "the normal hacker-ly enjoyment of punning wordplay" as the terms on the existing list? I guess I'm thinking that this term belongs on the list more than it doesn't belong on the list, and if it doesn't belong on the list, then there should be a list that it belongs on, and that list should include the words already on this list... Do you see where I'm going with this?
On a somewhat related note: does the last paragraph (about the motivation of the terms) really make any sense? The other terms are at least as ambiguous as "word", so I don't see how disambiguation could be a fundamental motivation.
~ RedSolstice 15:29, 18 August 2007 (UTC)

IEC

Someone might want to link to the correct page for IEC instead of making it to the disambiguation page. There's more than one instance of this on more than one page. LtDonny 23:04, 23 September 2007 (UTC)

Multiples vs. Powers

Should the word 'powers' be used instead of 'multiples' in "Since computer memory comes in multiples of 2 rather than 10"? 131.107.0.105 02:18, 7 November 2007 (UTC)

Yes, you are correct -- it should be powers. Indefatigable 21:06, 7 November 2007 (UTC)

Reason for 8 bits instead of any other number?

It can be noted that a byte is 8 bits because 7 bits have too few possibilities (128, or 2^7) and 9 bits have too many for efficiency (512 possibilities).

I remember reading that somewhere on the internet. I'll post later when I find it again. --KelvinHOWiknerd(talk) 16:40, 1 May 2008 (UTC)

I'd say it's more like 4 bits is too small and 16 bits is too big. And anything in-between is not "round" enough. Evanh (talk) 00:50, 13 September 2008 (UTC)
More likely the answer is simply that it was 8 in the IBM/360, which is the machine that made the term ubiquitous. Paul Koning (talk) 18:55, 25 March 2015 (UTC)
Prior to the IBM System/360, various vendors referred to byte sizes of 7, 8, 9 and 12. Stretch allowed byte sizes of 1 to 8. Shmuel (Seymour J.) Metz Username:Chatul (talk) 13:37, 26 March 2015 (UTC)

Meaning is a bit lacking for the most common use of Byte

"#1 A contiguous sequence of a fixed number of bits (binary digits). The use of a byte to mean 8 bits has become nearly ubiquitous."

This needs to clearly state that it's a measure of memory capacity, rather than the second meaning, where a byte is a datatype.

Evanh (talk) 00:54, 13 September 2008 (UTC)

Archaic use aside

Where is a byte defined as other than 8 bits? Nowhere past 1980. You shouldn't bring up archaic uses to make the meaning of a term less clear. Likebox (talk) 22:01, 15 November 2008 (UTC)

There is no strict definition, and WP is not the place to create one. Kbrose (talk) 22:24, 15 November 2008 (UTC)
I have seen "Byte = 8bits" in every computer manual I ever read. Granted, it was more than twenty years ago, but still, the definition hasn't changed. The first place I learned that there were other conventions about what a "byte" was decades ago was right here, but that doesn't mean that the word doesn't have a single meaning now--- it's always eight bits.Likebox (talk) 00:15, 16 November 2008 (UTC)
Just to be clear --- I am saying that there is a strict definition --- a byte is 8 bits. If you were talking about a word, that is ambiguous still--- it can be 16 or 32 bits. A doubleword is 32 or 64 bits, but the bigger values I think are becoming standard.Likebox (talk) 00:23, 16 November 2008 (UTC)
The past and current versions of the C (language) Standard recognise that there are not always 8 bits in a byte, in the definition of the CHAR_BIT macro. I agree that such machines are rare, but they do exist. Derek farn (talk) 02:39, 16 November 2008 (UTC)
Ok, sorry. I didn't know it's in the standard. But shouldn't the article discuss the current accepted usage first, and then discuss the very rare exceptions later? I think that's better than making a long confusing intro for such a simple concept. But maybe the article is clear enough as it is.Likebox (talk) 05:23, 16 November 2008 (UTC)
Bytes with different numbers of bits are mostly obsolete these days, but may be worth mentioning. The high level description need not mention these but should avoid inaccuracy. I suggest something like the following: "Today, a byte universally contains 8 bits (but see History below for exceptions)." Or something. Dcoetzee 06:31, 16 November 2008 (UTC)
Dcoetzee, as you say, non-8-bit bytes are "mostly obsolete" (they appeared on older machines and still sometimes appear in research machines, so obsolete is not quite the right term). This means a byte does not yet universally contain 8 bits. How about "almost universally"? ISO communications standards got around this problem by defining the term octet. Derek farn (talk) 14:30, 16 November 2008 (UTC)
The VSDSP signal processor has 16-bit bytes, and it's definitely not obsolete. Probably other signal processors and special processors have non-8-bit word lengths too... 195.197.254.3 (talk) 13:21, 14 August 2012 (UTC)
The article quite adequately points out the situation with neutrality. There is no need to introduce additional word acrobatics. Kbrose (talk) 17:35, 16 November 2008 (UTC)

uppercase "o"

French-speaking countries sometimes use an uppercase "o" for "octet"[7].

First, this is wrong, because I've never seen an uppercase "o" used here, only lowercase.

Second, this is not what the ref states:

French-speaking countries often use "o" for "octet", nowadays a synonym for byte, but this is unacceptable in SI because of the risk of confusion with the zero.

There is no uppercase here.

Third, and probably most important, this ref is using content from the Wikipedia article SI prefix, which is itself unsourced and doesn't claim that it is "unacceptable".

Therefore I am removing it and putting it here. The following paragraph about the use of "octet" in non-English-speaking countries is sufficient.

French-speaking countries sometimes use an uppercase "o" for "octet"[1]. This is not consistent with SI because of the risk of confusion with the zero, and the convention that capitals are reserved for unit names derived from proper names, such as the ampere (whose symbol is A) and joule (symbol J), versus the second (symbol s) and metre (symbol m).

Calimo (talk) 17:40, 1 February 2009 (UTC)


Okay. Here's What I Think:

I do not understand 95 percent of this article. And how wonderful. So what about quantabytes? What is within a bit? A "nibble" or a "microbit"? —Preceding unsigned comment added by Himboy484wikidude (talkcontribs) 23:08, 6 September 2009 (UTC)

A nibble is half an octet. The bit is the basic unit of information; there's nothing "within" it. I understand the confusion, a bit, but it's really quite simple: meters can be divided as needed, bits can't. 62.152.133.235 (talk) 11:46, 17 March 2012 (UTC)

A bit is translated into an electrical charge read by the hardware -- you can't have a fraction of an electrical charge. You either do or don't (1 or 0). This is Madness300 05:33, 30 June 2012 (UTC)
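A minimal sketch of nibble extraction, assuming an 8-bit byte (the 0xA7 value is an arbitrary example):

 #include <stdint.h>
 #include <stdio.h>
 
 /* A nibble is half an octet: the high and low 4-bit halves of a byte. */
 int main(void) {
     uint8_t byte = 0xA7;
     uint8_t high = (byte >> 4) & 0x0F;  /* 0xA */
     uint8_t low  = byte & 0x0F;         /* 0x7 */
     printf("byte 0x%02X -> high nibble 0x%X, low nibble 0x%X\n", byte, high, low);
     return 0;
 }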

By Eight nonsense

Can we get rid of this stupid line?

 The spelling change both reduced the chance of a bite being mistaken for a bit and made the new word a contraction of "by eight".

Bytes predate 8-bit standardisation (actually, I seem to remember 7-bit was more common initially).

I'm not sure BinarY TablES is any better, though... it wouldn't be the only occurrence of backronyms being snuck into processor opcodes.

89.238.157.212 (talk) 08:22, 13 March 2010 (UTC)

Linearly growing percentage

I'm no mathematician, so I could be mistaken/confused by the term 'linearly growing percentage', but the numbers don't grow linearly, and when plotted over a larger range they show a definite curve. (wolframalpha) SHayter (talk) 02:16, 18 July 2010 (UTC)

Should be more concise

This article, IMO, is a prime example of how Wikipedia can get far too lost in the details. Shouldn't the very first sentence of the article get the common usage out of the way, the usage ~90% of readers are probably searching for? Less technical readers are going to become extraordinarily confused by the time they reach the 5th sentence's still-not-very-concise explanation that "The size of a byte is typically hardware dependent, but the modern de facto standard is eight bits, as this is a convenient power of two."

Shouldn't the very first sentences be something like: "A byte is a small unit of computer storage, typically 8 bits. This common byte can represent values from 0 to 255. However, some rare systems define a byte as a different, arbitrary number of bits such as"...[proceed to detailed explanation]

Understand that I'm not trying to undervalue the notion that byte is not always 8 bits; this should definitely be mentioned. But when the vast majority of the time it *is* 8 bits, why spend so much time FIRST talking about how it can be other sizes? Isn't that better suited for LATER explanation? The way the article is currently structured seems like trying to explain e-mail to a rookie by defining the data protocol... rather than telling them it's a way to send text over the internet. --76.117.251.119 (talk) 17:15, 7 November 2010 (UTC)

You're absolutely right. I think you should go ahead and change the first sentence to your wording. Indefatigable (talk) 23:20, 8 November 2010 (UTC)

French octets

The real, unspoken reason that the French use "octets" instead of "bytes" is the fact that "byte" sounds like an obscene word in French, right? Shouldn't that warrant a mention? Xezlec (talk) 17:14, 26 February 2011 (UTC)

What French word are you referring to? Bitte, which IIRC means "penis"? If someone can find a reliable source that gives this as the reason byte is not used in French, then it deserves a mention. Indefatigable (talk) 23:59, 27 February 2011 (UTC)

Citation needed on decibel vs. decibyte

Why does the statement that a decibyte is never used require any citation at all? It is self-evident. Almost all computer systems use a byte composed of eight bits, and the bit is indivisible. Thus, decibytes are never used because they cannot exist on any computer system of any significance. 76.107.196.142 (talk) 05:14, 18 November 2011 (UTC)

I agree, this is self-evident. I've removed it. 2p0rk (talk) 11:59, 16 June 2012 (UTC)