Talk:36-bit computing

Six-bit character encodings?

It also allowed the storage of six alphanumeric characters encoded in a six-bit character encoding.

Can someone give an example of a six-bit character encoding? --Abdull 18:23, 26 January 2007 (UTC)

See sixbit, which I just created. --Macrakis 20:50, 26 January 2007 (UTC)

Rename as "36-bit"?

I suggest this article be renamed "36-bit", for consistency with the other articles using Template:N-bit. Letdorf (talk) 14:56, 25 March 2008 (UTC).

C

The C programming language requires that all memory be accessible as bytes, so C implementations on 36-bit machines use 9-bit bytes.

I don't believe that that's true. The C language requires that a C compiler recognize the datatype "char", but puts few restrictions on its size, other than that "char" can't be larger than "short", "int", or "long". As far as the requirements of the C language are concerned, it would be perfectly acceptable for a "char" to always occur on a 36-bit-word boundary and to occupy any or all of that 36-bit word, again provided only that "short" was no smaller. If C compilers on historic 36-bit mainframe computers used 9-bit bytes, that was a choice of the compiler authors in the interest of making the most efficient use of memory, which (by 2011 standards) was shockingly limited and appallingly expensive — it wasn't a requirement. In comparison, the Pascal programming language supports both types of character storage, and allows the choice to be made by the Pascal program author ("array" vs. "packed array") rather than by the Pascal compiler author; C gives that choice to the C compiler author only. 76.100.17.21 (talk) 10:50, 23 January 2011 (UTC)

What is it that you believe is not true in the above statement? "The C programming language requires that all memory be accessible as bytes"? Or "The C programming language requires that all memory be accessible as bytes"? The first one is a requirement that limits the byte length to divisors of 36, i.e. 6, 9, 12, 18 and 36. Of these, 9 was chosen, being the smallest practical length (6 being too small). The restriction imposed by C is that within words there cannot be bits not accessible by the char type, not the matter that you discuss above. I believe that your argumentation misses the point discussed in the article, so I am removing the "dubious" remark. 129.112.109.245 (talk) 23:10, 1 April 2011 (UTC)

I'm not sure what the preceding note was asking about, as the same quoted text was offered for both alternatives, but I am sure that the preceding author is confusing the general requirements of the C language with the specific design decisions, however reasonable and well chosen, of compiler developers. C requires that a compiler support a scalar datatype named "char", with no restrictions on its size, offset, or alignment within a machine word; the only constraint is that a "char" must be large enough to hold a character of "the basic execution character set". Note how minimal that constraint is! The basic execution character set is not required to be ASCII, EBCDIC, or any other particular representation; it's not required to be the same as the compilation / source code character set; and there may even be multiple execution / run-time character sets, of which only the basic one need fit within a "char". On wide-word, word-addressable hardware, there is no requirement that multiple C "char"s be stored within a word, and if they are, there is no requirement that they fill all available bits without gaps, or even that they be spaced uniformly within that word. So the claim that "the C programming language requires that all memory be accessible as bytes" is wrong -- but I'll be conservative and just tag it "dubious" rather than deleting it. If I'm the one who's wrong, please cite the parts of the C standard document that say so, and correct me.  :) And as for the second part of the original claim, that "C implementations on 36-bit machines use 9-bit bytes", not only does that not follow from the preceding part of the claim, but such a sweeping statement can't be supported without citing documentation for every C compiler which ever ran on 36-bit machines. If there are no objections after a reasonable period, I'll just edit the article to mention that, within the C language, the use of 9-bit chars packed four per 36-bit word is a natural and efficient choice of compiler designers, and omit the claim that the requirements of the C language somehow dictate that result. 76.100.17.21 (talk) 00:24, 23 October 2011 (UTC)

The C standard requires that character types have at least 8 bits, that the signed char type have a range of at least -127 to +127, and that the unsigned char type have a range of at least 0 to 255 (ISO/IEC 9899:1999 section 5.2.4.2.1), and that all C types other than bitfields be a multiple of the size of a character (section 6.2.6.1 paragraphs 2 and 4). The only character sizes that satisfy these requirements on a machine with a 36-bit word are 9, 12, 18, and 36. --Brouhaha (talk) 03:18, 23 October 2011 (UTC)
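
To illustrate what those clauses add up to, here is a minimal sketch in portable C. The values mentioned in the comments for a hypothetical 36-bit implementation with 9-bit chars are assumptions chosen for illustration, not measurements from any historical compiler:

    #include <limits.h>
    #include <stdio.h>

    int main(void)
    {
        /* CHAR_BIT is the number of bits in a char; the standard requires at least 8. */
        printf("CHAR_BIT = %d\n", CHAR_BIT);

        /* sizeof counts in chars, so every object type occupies a whole number of chars. */
        printf("sizeof(int)    = %zu chars = %zu bits\n",
               sizeof(int), sizeof(int) * CHAR_BIT);
        printf("sizeof(double) = %zu chars = %zu bits\n",
               sizeof(double), sizeof(double) * CHAR_BIT);

        /* On an assumed 36-bit implementation with 9-bit chars, this would print
           CHAR_BIT = 9 and sizeof(int) = 4 chars = 36 bits. */
        return 0;
    }

The point of section 6.2.6.1 is exactly that the last multiplication accounts for every bit of the object representation, with no bits in the word left inaccessible to the char type.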

The Marshall Cline reference and the other references in this article seem to support these surprising-to-me requirements. I added the Marshall Cline reference to this article. It says there *is* a requirement that C "char"s "fill all available bits without gaps" and that "the C programming language requires that all memory be accessible as bytes". It uses the 36-bit PDP-10 as an example, and derives the same "9, 12, 18, and 36" options that Brouhaha lists. As far as I can tell, the only reason for those requirements is so memcpy() can copy arbitrary data, of any data type, from one "C object" or "plain old data" struct to another, without needing to know exactly what kind of data it is, and without losing any bits or accidentally overwriting either neighboring struct. I agree that this is a surprising and apparently unnecessary restriction, at least to people like us (people who are familiar with Pascal "packed array" and unpacked arrays, and C data structure alignment, and the "Unpredictable struct construction" section of "The Top 10 Ways to get screwed by the "C" programming language"[1]). --DavidCary (talk) 21:12, 6 April 2013 (UTC)
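
A small sketch of the memcpy() point made above, in ordinary portable C (the struct and its field names are invented for the example; nothing here is specific to 36-bit hardware):

    #include <stdio.h>
    #include <string.h>

    struct sample {
        int a;
        double b;
    };

    int main(void)
    {
        struct sample src = { 42, 3.5 };
        struct sample dst;

        /* memcpy copies sizeof src chars, i.e. the complete object representation,
           including any padding.  This only works because every bit of the object
           is reachable through char-sized units and sizeof accounts for all of them. */
        memcpy(&dst, &src, sizeof src);

        printf("%d %f\n", dst.a, dst.b);
        return 0;
    }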

address space

The following statement appears in the article:

These computers used 18-bit word addressing, not byte addressing, giving an address space of 2^18 36-bit words,

The computers referenced include the IBM 7090, which had only 15 bits of address space, or 32,768 36-bit words. If there is no objection I will replace the statement in the article with the following:

These computers had addresses 15 to 18 bits in length. The addresses referred to 36-bit words, so the computers were limited to addressing between 32768 and 262144 words, or 196608 to 1572864 characters. John Sauter (talk) 04:24, 4 February 2012 (UTC)

Good point. However, I doubt that every 36-bit computer had at least 32768 words of memory installed. Perhaps it would be simpler and more accurate to state something more like:

These computers had addresses 15 to 18 bits in length. The addresses referred to 36-bit words, so the computers were limited to addressing at most 262144 words, or at most 1572864 characters.

--DavidCary (talk) 02:15, 7 April 2013 (UTC)
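
For reference, the figures in both proposed wordings follow from the same arithmetic (six 6-bit characters per 36-bit word); a throwaway C snippet that reproduces them:

    #include <stdio.h>

    int main(void)
    {
        long words15 = 1L << 15;   /* 15-bit word addresses:  32768 words */
        long words18 = 1L << 18;   /* 18-bit word addresses: 262144 words */

        /* Six 6-bit characters fit in each 36-bit word. */
        printf("15-bit addressing: %ld words, %ld characters\n", words15, words15 * 6);
        printf("18-bit addressing: %ld words, %ld characters\n", words18, words18 * 6);
        return 0;
    }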

BCD math

A recently added paragraph claims that computers were motivated to have 36-bit words because of BCD math. Based on my personal experience, this is not true. The DEC PDP-6 and its successors used 36-bit words but did not do BCD math—they did not use BCD at all; instead they used a 6-bit form of ASCII known as sixbit along with 7-bit ASCII. Math was done using two's complement 36-bit integers and two's complement 36-bit floating point.

The IBM 7090, which was a successor of IBM's original 36-bit computer, the 701, used BCD, but did arithmetic using sign-magnitude 36-bit integers and sign-magnitude 36-bit floating point.

If there are no objections, I will delete the paragraph. John Sauter (talk) 06:08, 10 September 2019 (UTC)
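
As an aside, a minimal sketch of the sixbit packing mentioned above, simulated in the low 36 bits of a 64-bit integer (the helper name pack_sixbit and the sample string are invented for the example; the encoding assumed is the DEC convention of ASCII code minus 32):

    #include <stdint.h>
    #include <stdio.h>

    /* Pack up to six characters into the low 36 bits of a 64-bit integer,
       six bits per character, padding short strings with spaces. */
    static uint64_t pack_sixbit(const char *s)
    {
        uint64_t word = 0;
        for (int i = 0; i < 6; i++) {
            unsigned c = *s ? (unsigned char)*s++ : ' ';
            if (c >= 'a' && c <= 'z')
                c -= 'a' - 'A';              /* SIXBIT has no lowercase */
            word = (word << 6) | ((c - 32) & 077);
        }
        return word & 0777777777777ULL;      /* keep 36 bits */
    }

    int main(void)
    {
        /* Print the word as 12 octal digits, the usual way 36-bit words are written. */
        printf("%012llo\n", (unsigned long long)pack_sixbit("DECSYS"));
        return 0;
    }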

The IBM 650 had 5-bit digits, with characters taking two digits, as did the IBM 7070, IBM 7072, and IBM 7074. The IBM 702 appeared to divide memory into 6-bit characters, not digits (digits were represented as the characters 0 through 9); presumably the IBM 705 and IBM 7080 were similar. The IBM 1401 and other IBM 1400 series machines did so as well.
So none of the "commercial" IBM machines appeared to have 6-bit digits.
The UNIVAC I also used 6-bit characters, with digits represented as the characters 0 through 9. The UNIVAC Solid State machines appeared to have 4-bit-plus-parity digits.
The NCR 315 had memory divided into 12-bit units, containing either two 6-bit characters or three 4-bit decimal digits.
So which computers, if any, really did have 6-bit digits, rather than 6-bit characters? The version of the article before the recent edits said
Early binary computers aimed at the same market therefore often used a 36-bit word length. This was long enough to represent positive and negative integers to an accuracy of ten decimal digits (35 bits would have been the minimum). It also allowed the storage of six alphanumeric characters encoded in a six-bit character code.
I don't see any decimal computers with 6-bit digits, as per the above. For binary computers, they may have supported BCD as well, either in hardware or software, but did any of them use 6 bits, rather than 4 bits, for digits? Guy Harris (talk) 06:56, 10 September 2019 (UTC)
So, absent any citation for 6-bit digits, I'd say we should either delete the paragraph or have it refer to 6-bit characters rather than 6-bit digits. Guy Harris (talk) 06:56, 10 September 2019 (UTC)
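
Tangentially, the "(35 bits would have been the minimum)" figure in the passage quoted above checks out; a quick throwaway computation (the variable names are invented for the example):

    #include <stdio.h>

    int main(void)
    {
        unsigned long long max10 = 9999999999ULL;   /* largest ten-digit decimal number */
        int bits = 0;

        /* Count how many bits are needed to hold the magnitude. */
        while (max10 >> bits)
            bits++;

        printf("magnitude bits: %d, with sign bit: %d\n", bits, bits + 1);   /* 34 and 35 */
        return 0;
    }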