
Talk:128-bit computing


Format based heading


I'm not sure you need a crystal ball to see 128-bit processors... what about Cell (microprocessor) for example? -- Coneslayer 18:35, 2005 Apr 26 (UTC)[reply]

What about it? Vector processors have had 128-bit-wide registers and data paths for several years (or stranger combinations like those present in the MIPS R5900-based PlayStation 2 CPU, which lets you combine the high and low portions of registers for varying tasks). What is still a ways off is a 128-bit-wide superscalar processor; for a superscalar design, having a bus and register width that large is wasteful for most tasks. The issue here is similar to the reason that a memory word in most scalar microcomputer architectures is 8 bits wide. -- uberpenguin 16:06, 2005 May 3 (UTC)


I've got here a copy of the VAX Architecture Handbook (title slogan: Architecture For The 80's), which defines an octaword as a 128-bit integer type, and there are even a couple of instructions which act on such quantities. --bjh21 22:05, 26 Apr 2005 (UTC)

Yeah, again, the concept isn't novel. The TIMI implemented in the System/38 on through the AS/400 is a true virtual implementation of a 128-bit machine, designed originally for use on 32-bit processors but in such a way that it would easily scale to newer designs for years to come (and that MI is still used in the brand-new iSeries you can buy today). -- uberpenguin 16:06, 2005 May 3 (UTC)

Uberpenguin, the point is that when this article was created, someone put a "speedy delete" tag on it, saying that "Wikipedia is not a crystal ball." My comment and bjh21's served to point out that the concepts on this page are relevant in the present. -- Coneslayer 16:21, 2005 May 3 (UTC)

Going double width will not double computing power. It will increase addressable memory space and allow higher-precision arithmetic, but will not, in general, increase computing power. The last section is therefore unfactual and misleading. I suggest it be removed.

Today's move to 64-bit architectures is mostly about memory addressing. The increase in performance comes mostly from other changes, such as adding more registers.

Not quite. Address width is not dependent upon data width. But you're right about it being a negligible performance advantage. (It's actually a penalty in some cases—i.e. moving small payloads in large containers.)
überRegenbogen (talk) 06:50, 25 June 2010 (UTC)[reply]

Expert


Here is a 128-bit CPU: Emotion_Engine. Linux even runs on it (because Linux runs on the PlayStation 2). —The preceding unsigned comment was added by 213.189.165.28 (talkcontribs).

We will never exceed the 64-bit address space. Even with superfast RAM and huge memory bandwidth, it would take thousands of years just to clear (write a 0 to) every location of a 64-bit address space. A transition to 128-bit might happen for other reasons, though. Large address spaces can be used to "hide" pages, instead of using an MMU to enforce memory safety. It would take an attacker decades (or a lot of luck) to find the pages of another process.
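A rough back-of-the-envelope check of the arithmetic behind that claim; this is only a sketch in C, the bandwidth figures are assumptions chosen for illustration, and how many years it comes to depends entirely on the sustained bandwidth assumed.

    #include <stdio.h>

    /* Estimate how long it takes to write one byte to every address of a
       full 64-bit address space, for a few assumed sustained memory
       bandwidths (illustrative figures only). */
    int main(void) {
        const double bytes = 18446744073709551616.0;     /* 2^64 bytes */
        const double bandwidths[] = { 1e9, 1e10, 1e11 }; /* 1, 10, 100 GB/s */
        const double seconds_per_year = 365.25 * 24 * 3600;

        for (int i = 0; i < 3; i++) {
            double seconds = bytes / bandwidths[i];
            printf("%6.0f GB/s -> %10.1f years\n",
                   bandwidths[i] / 1e9, seconds / seconds_per_year);
        }
        return 0;
    }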

Reply: What about when 64 bit VMs are hosted? One thousand of them on a single machine, each supposedly granting a large amount of address space. --64.142.36.76 (talk) 20:57, 29 May 2008 (UTC)[reply]

A 128-bit virtual address space could be useful, even if a 128-bit physical address space isn't. It's the virtual address space that determines the important things in the software world like pointer sizes. --Doradus (talk) 16:57, 5 August 2008 (UTC)[reply]
All of you need to read WP:NOTAFORUM. Anyhow, the Emotion Engine isn't 128-bit. Read about its details; no, it's not 128-bit. — Preceding unsigned comment added by Jasper Deng (talkcontribs) 04:49, 29 December 2010 (UTC)[reply]

Pleb


"Expert"'s comment above assumes sequential memory access. I can imagine (and probably show) system designs where a process can, will and should exceed 2^64 address bits for regular memory access. This will and does require parallel (non sequentially-dependent) memory access, irrelevant of MMU or potential hacks. IMHO, address-length constraints will "go away" as we require more content-related - non-address-based - data access.

Tying address-space bit count to ALU word length is a result of at least:

  • processor construction constraints;
  • programmers' desire for (C-type) pointers;
  • consumer-level marketing.
It's not even necessary for pointers in high-level languages. C handles 24- and 20-bit pointers just fine when compiling for 16-bit CPUs, such as the 65816, 80286, and 8086/8088. Likewise with 16-bit pointers on 8-bit CPUs, like the 6502 and Z80. The resulting machine code is just slightly more complex.
überRegenbogen (talk) 06:30, 25 June 2010 (UTC)[reply]
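A minimal C sketch of that point, assuming nothing beyond a standard C compiler: pointer width and integer width are separate properties of the implementation, and the sizes printed below vary by platform.

    #include <stdio.h>

    /* Pointer width and integer width are independent properties of a
       C implementation; the language only requires each type to be
       self-consistent. */
    int main(void) {
        printf("sizeof(int)   = %zu bytes\n", sizeof(int));
        printf("sizeof(void*) = %zu bytes\n", sizeof(void *));
        /* On a typical x86-64 system this prints 4 and 8; a "large model"
           8086 compiler would give 2 and 4 (16-bit data, segmented
           20-bit addresses). */
        return 0;
    }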


Paragraph about "128-bit consoles" was removed


This really has no place here. The consoles were referred to as "128-bit" because of their graphics memory bus width, not because they had 128-bit processors. The Nintendo 64 has a graphics memory bus width of 64 bits; it doesn't have a 64-bit CPU.

The paragraph was "The sixth generation of game consoles, released in 2000 and 2001 and including the Playstation 2, GameCube and Xbox, is sometimes incorrectly referred to as "the 128-bit era" as the Gamecube and Xbox both used 32-bit CPUs, while the Playstation 2 used a 64-bit CPU. As of January 1, 2008, no video game consoles using 128-bit CPUs have been announced."

But the 7 "co"-processors (SPEs) in the Cell processor are more-or-less pure 128-bit SIMD, as far as I know. However, they are not used for general-purpose processing, but more like *graphics cards* or whatever. Read it yourself :b? Crakkpot (talk) 13:07, 23 February 2008 (UTC)[reply]
128-bit SIMD doesn't equal one big 128-bit value, since it is two or more smaller values being stored in one bigger register. Rilak (talk) 09:16, 24 February 2008 (UTC)[reply]
128-bit SIMD simply means the processor operates simultaneously on groups of smaller integers or floating-point numbers. This is useful in applications, such as graphics, that tend to consist of doing the same operation over and over again on large groups of data, as it saves the processor the trouble of having to decode the same instruction over and over again. It's true that the PS3 was unusually geared towards such operations, and this kind of processing is basically all a modern GPU does. I would not call this a "128-bit processor", however, because the data is still 32-bit/64-bit, as are the memory addresses; the only things that are really 128 bits wide are the registers. The SIMD capabilities aren't somehow special: 128-bit SIMD was introduced to the 32-bit x86 Pentium III processor in 1999 in the form of SSE, and with the introduction of AVX in 2011 the SIMD registers were expanded to a full 256 bits wide. Increasing any of those other metrics to 128 bits, however, would be an absolute waste of silicon that would yield little to no performance improvement. Not even supercomputer architectures go there.

When it comes to the PlayStation 2's SIMD registers, all they did was basically combine the pre-existing 64-bit integer registers of the base MIPS architecture into larger 128-bit ones, share them between the integer and the newer SIMD instruction sets, and add a second ALU so that they could process four 32-bit integer instructions at once (strangely, although integer math was 64-bit, the SIMD instructions could not operate on groups of two 64-bit numbers, only on 128-bit-wide groupings of 32-bit, 16-bit, and 8-bit ones). This is a comparatively minimal investment of silicon; the biggest investment would've been the extra ALU and, amusingly, the whole part that allows them to claim 128 bits was entirely recycled. No one ever tries to argue that the Xbox was 128-bit despite having all of the same SIMD capabilities (the Xbox's Pentium III was actually superior here, because it had dedicated SIMD registers instead of sharing them with the integer instructions, and its SIMD capabilities could operate on either floating-point or integer data, rather than being confined to just integers), just because everyone "knows" boring old x86 was 32-bit. But Sony was able to pull a fast one, as it was a rather strange-sounding architecture to a layman. In reality, its base MIPS architecture was fairly standard RISC, common throughout the 90s in non-consumer computers, with 128-bit SIMD capabilities sort of tacked on with a bit of an inelegant hack. But to a layman, all of that talk is just big words to filter out. When they said "128-bit-wide SIMD registers", people heard "128-bit blah blah blah", and so it was 128-bit.

This was only relevant because of the ridiculous bit war of the 90s, where a console's power was somehow thought to be directly tied to however many bits it had. In truth, more bits help when doing math on larger numbers, affect the maximum amount of addressable memory, and a few other things. Moreover, doubling the number of bits produces exponential gains in these metrics, to the point that not even Moore's law is able to catch up. Going from 8-bit to 16-bit with the Super Nintendo was a reasonable move at the time; it allowed, for instance, an increase in palette from 256 to 65k colors.

And going to 32-bit was perhaps a sensible move for the next generation; even though the PlayStation doesn't need to address 2 GB of memory, I suppose it does allow operations on numbers in the millions and some higher-precision floating-point operations. But, for some reason, manufacturers just couldn't be sated, and Nintendo just had to jump the gun and claim 64 of those bit thingies. Once we get to 64-bit, we're basically outside the realm of numbers people know how to pronounce (16 quintillion, to be precise; that's the thing that comes after trillion and quadrillion). The Nintendo 64 can't even begin to hope to use up a 32-bit address space, and yet it claims a full 64 bits. Just ridiculous. And if you actually look under the hood, yeah, it's not. That was just a gimmick. It wouldn't have been a particular technological achievement to actually have a 64-bit processor; you could very well do it easily with the technology of the day, it would just be dumb, as the capabilities would go unused.

And at the end of things, the PlayStation 2 waltzes in and, well, we just know it has to have 128 of those bit thingies, don't we? Since those have to double with every generation? So, of course, it claims a number of bits that would basically leave it prepared for the space age, if and when it ever needs to back up the brain of every member of the human species in a single addressable memory space; a number of bits that even supercomputers in the service of scientists crunching the most strenuous and onerous tasks of our day don't bother with; for an entry-level home gaming device. Just in case. And, again, if you look under the hood it doesn't; it's just catching up with some advancements in microarchitecture at the time, which involved storing multiple sets of 32-bit data in one big register. What next? Why not count the size of your RAM in your favor, and claim to have millions of those bit thingies?

However, after this point, people seem to have finally tired of bit hype; it had already died down to the point that Sony did not feel the need to title their dang console after it, and people mostly forgot about 128-bit. Six years later, more than a decade after the introduction of the N64, processors that actually had 64 bits were finally introduced in systems with many, many times the power, RAM, and storage. In the modern day, a decade after that, most programs are still compiled as 32-bit x86, just because the 64-bit capabilities aren't necessary and don't do much good. 108.131.77.176 (talk) 11:06, 16 May 2014 (UTC)[reply]
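For reference, a minimal sketch of the kind of 128-bit SIMD operation described above, using the SSE intrinsics introduced with the Pentium III (this assumes an x86 compiler with SSE enabled, e.g. gcc -msse): a single instruction adds four 32-bit floats held in one 128-bit register, and no individual 128-bit number is involved.

    #include <stdio.h>
    #include <xmmintrin.h>   /* SSE intrinsics (Pentium III and later) */

    int main(void) {
        /* Two 128-bit registers, each holding four 32-bit floats. */
        __m128 a = _mm_set_ps(4.0f, 3.0f, 2.0f, 1.0f);
        __m128 b = _mm_set_ps(40.0f, 30.0f, 20.0f, 10.0f);

        /* One instruction (ADDPS) adds all four lanes in parallel. */
        __m128 sum = _mm_add_ps(a, b);

        float out[4];
        _mm_storeu_ps(out, sum);
        printf("%g %g %g %g\n", out[0], out[1], out[2], out[3]);  /* 11 22 33 44 */
        return 0;
    }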

Windows 8 and "AMD128"


Microsoft is working with Intel, AMD, HP and IBM to offer support for IA-128 (in development) on Windows 8 and Windows 9. [1]

That's no source, that's a mix of blog vanity and corporate shilling 70.53.136.14 (talk) 23:04, 7 October 2009 (UTC)[reply]

To quote the reference: A LinkedIn profile, which has already been taken down, for a Robert Morgan, Senior Research & Development at Microsoft, has shone a sliver of light on the possibility of 128-bit support coming to Windows 8.
This wouldn't constitute a valid source even if WP had less strict demands for reliability. Adrianwn (talk) 19:12, 8 October 2009 (UTC)[reply]

It seems that the only "source" for that claim is a deleted entry on a social networking site, all other references go to various blogs containing mostly speculation. Neither Microsoft nor Robert Morgan have confirmed this rumour. So please, before you add this stuff again, get some reliable sources. Adrianwn (talk) 17:14, 24 October 2009 (UTC)[reply]

This rumor still seems to exist although it is false, but any statement refuting it would probably be OR, and simply mentioning the rumor without refuting it would lend too much credibility to it. If you disagree, please state your reasons here. Adrianwn (talk) 05:59, 26 April 2010 (UTC)[reply]

There are no plans to create a 128-bit processor yet, and I doubt that even one will appear in the next century. The market hasn't even completed the transition to 64-bit. These sources might be talking about 128-bit SIMD support, but that is already included in all versions of Windows supporting x86-64. Thus I considered that statement in the article plain nonsense and removed it. 1exec1 (talk) 12:15, 10 August 2010 (UTC)[reply]

References

  1. ^ "Microsoft mulling 128-bit versions of Windows 8, Windows 9". Eightforums Blog. October 7, 2009.

AS/400


This article says:

The AS/400 virtual instruction set defines all pointers as 128-bit.

while the AS/400 article says:

The IBM System i's instruction set defines all pointers as 48-bit.

which is true? —Preceding unsigned comment added by 213.61.9.74 (talk) 15:38, 28 February 2011 (UTC)[reply]

On the AS/400, pointers are 128-bit, but physical addresses were 48-bit, and now 64-bit. 70.239.13.84 (talk) 09:51, 28 May 2011 (UTC)[reply]
On the AS/400, pointers at the Machine Interface level are 128 bits, and they contain a 1-byte type field, a 7-byte type-dependent data field, and an 8-byte virtual address.[1][2] I suspect the CISC AS/400s, and probably the System/38, had 2 spare bytes and a 6-byte virtual address. Guy Harris (talk) 07:38, 6 August 2023 (UTC)[reply]
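Purely as an illustration of the layout described above (the C struct below is a sketch; the actual MI pointer is a tagged object managed by the machine, not an ordinary C data structure):

    #include <stdint.h>

    /* Illustrative only: the 128-bit MI pointer layout as described above,
       i.e. a 1-byte type field, 7 bytes of type-dependent data, and an
       8-byte virtual address. */
    struct mi_pointer {
        uint8_t  type;          /* pointer type */
        uint8_t  type_data[7];  /* type-dependent data */
        uint64_t vaddr;         /* 64-bit virtual address */
    };
    /* sizeof(struct mi_pointer) == 16 bytes, i.e. 128 bits. */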

Address width is not dependent upon data width


In the Uses section, "128-bit processors would allow memory addressing for..." confuses two separate issues.

64-bit computing is not about memory capacity. 64-bit address buses could exist in systems with smaller data widths. Whilst it is convenient to be able to manipulate pointers as units, data and address width are not interdependent.

There have been many CPU architectures with wider address buses than data buses. The computing world before the late 1980s was dominated by them (65816, 80286, 8086, 6502, 6800, Z80, 8080, 8008, 4004, ...). The catch was that pointers had to be handled in pieces or constricted to segments. 64-bit word and address widths happened coincidentally because it was convenient—not necessary.
überRegenbogen (talk) 05:59, 25 June 2010 (UTC)[reply]
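As a concrete sketch of "pointers handled in pieces" on a machine whose address bus was wider than its data bus: the 8086 forms a 20-bit physical address from two 16-bit values, segment and offset.

    #include <stdint.h>
    #include <stdio.h>

    /* On the 16-bit 8086, a 20-bit physical address is formed from two
       16-bit registers: physical = segment * 16 + offset. */
    static uint32_t phys_8086(uint16_t segment, uint16_t offset) {
        return ((uint32_t)segment << 4) + (uint32_t)offset;
    }

    int main(void) {
        /* F000:FFF0 is the well-known reset vector location, 0xFFFF0. */
        printf("0x%05X\n", (unsigned)phys_8086(0xF000, 0xFFF0));
        return 0;
    }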

Grammar


I've noticed many grammar errors regarding the use of plural nouns, like IPv6 addresses. You don't say:

  • The length of IPv6 address...

Instead, say:

  • The length of an IPv6 address...

or

  • The length of IPv6 addresses...

I fixed some of these errors. Please watch out for them. Jasper Deng (talk) 06:11, 23 February 2011 (UTC)[reply]

Clarification to lead paragraph


I made a clarification to the lead paragraph, changing the following:

   There are currently no mainstream general-purpose processors built to operate on 128-bit integers or addresses,
   though a number of processors do operate on 128-bit data. The IBM System/370 could be considered the first
   rudimentary 128-bit computer as it used 128-bit floating point registers. Most modern CPUs feature SIMD
   instruction sets (..)

to

   While there are currently no mainstream general-purpose processors built to operate on 128-bit integers or addresses,
   a number of processors do have specialized ways to operate on 128-bit chunks of data. The IBM System/370 could be
   considered the first rudimentary 128-bit computer, as it used 128-bit floating point registers. Most modern CPUs feature
   SIMD instruction sets (SSE, AltiVec etc.) where 128-bit vector registers are used to
   store several smaller numbers, such as four 32-bit floating-point numbers. A single instruction can then operate on all
   these values in parallel. However, these processors do not operate on individual numbers that are 128 binary digits in length,
   only their registers have the size of 128-bits.

Is this correct? (Is there a "simple" text-box formatting wiki-markup that follows HTML syntax, such as <box wrap=true> whatever text </box>?) — Preceding unsigned comment added by Jimw338 (talkcontribs) 22:55, 9 November 2012 (UTC)[reply]
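As an aside on the last sentence of that proposed wording, here is a sketch in C of what operating on an individual 128-bit integer entails on a 64-bit machine: the value lives in two 64-bit halves and the carry is propagated by hand, which is essentially what compilers that offer a 128-bit integer extension generate.

    #include <stdint.h>
    #include <stdio.h>

    /* A 128-bit unsigned integer held as two 64-bit halves. */
    typedef struct { uint64_t lo, hi; } u128;

    /* Add two 128-bit values: the low halves are added first, and the
       carry out of the low half is added into the high half. */
    static u128 add128(u128 a, u128 b) {
        u128 r;
        r.lo = a.lo + b.lo;
        r.hi = a.hi + b.hi + (r.lo < a.lo);  /* carry out of the low half */
        return r;
    }

    int main(void) {
        u128 a = { UINT64_MAX, 0 };  /* 2^64 - 1 */
        u128 b = { 1, 0 };
        u128 s = add128(a, b);       /* 2^64: lo wraps to 0, hi becomes 1 */
        printf("hi=%llu lo=%llu\n",
               (unsigned long long)s.hi, (unsigned long long)s.lo);
        return 0;
    }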

Sentence


"Quadruple precision (128-bit) floating-point numbers can store 64-bit fixed point numbers or integers accurately without losing precision. "

This doesn't make sense to me. You can store more than 64-bit fixed-point numbers. What does it mean? 64 bits before the decimal point (or binary point) and 64 bits after the point? Bubba73 You talkin' to me? 01:24, 1 August 2020 (UTC)[reply]

Yes, up to 113 bits (before + after the point). I suspect the one who wrote that was thinking about converting a 64-bit integer type (which exists in practice) to quadruple precision, and this is always done exactly. Vincent Lefèvre (talk) 01:50, 1 August 2020 (UTC)[reply]
Why 113 bits? You could use seven bits to tell where the point goes, leaving 121 bits for the data. Bubba73 You talkin' to me? 04:06, 1 August 2020 (UTC)[reply]
Because an IEEE quadruple-precision floating-point number has a 113-bit significand. You can represent integers from −2^113 to 2^113 exactly, but also 113-bit fixed-point numbers (which are integers scaled by a fixed exponent, assumed to be known/fixed at "compile time"), in addition to floating-point numbers, of course. — Vincent Lefèvre (talk) 08:31, 1 August 2020 (UTC)[reply]
Thanks, I could really use 113-bit integers. Bubba73 You talkin' to me? 18:47, 1 August 2020 (UTC)[reply]
Yes, I've just done a change (mentioning 113-bit integers and 64-bit integers in particular). Vincent Lefèvre (talk) 23:23, 1 August 2020 (UTC)[reply]
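The same effect can be demonstrated in miniature with ordinary double precision, whose 53-bit significand plays the role the 113-bit significand plays for quadruple precision: every integer of magnitude up to 2^53 is exact, while 2^53 + 1 is not (a small C sketch).

    #include <stdio.h>

    int main(void) {
        /* double has a 53-bit significand: all integers up to 2^53 are
           representable exactly, but 2^53 + 1 is not and rounds back. */
        double exact   = 9007199254740992.0;  /* 2^53 */
        double rounded = 9007199254740993.0;  /* 2^53 + 1, not representable */

        printf("%.0f\n%.0f\n", exact, rounded);  /* prints the same number twice */
        /* By analogy, a 113-bit significand represents every 64-bit integer
           (and every 113-bit integer) exactly. */
        return 0;
    }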