Talk:Graphics processing unit/Archive 1


More general tip regarding halts in the development of Wiki pages

This is a tip applicable to Wikipedia generally, not just to the specific direction of this article: suggestions have been made below but not actioned. I'm guessing, but this may be because people have become far more hesitant about editing the main page after trying previously, receiving a slap for meddling, and having the page reverted. This perhaps pushes the culture towards these open discussions. The problem with that is that many suggestions are made and few of them are actioned. So, I believe, it may help if suggestions could receive votes on a -10 to +10 scale, from "Absolutely don't consider this" through "Whatever" to "Absolutely, make this happen!". To address the obvious weakness in this idea, namely that it will still be open to fan/partisan attacks or views, perhaps it would also help if anybody expressing a belief supplied an explanation with it. — Preceding unsigned comment added by 82.31.180.54 (talk) 18:22, 15 July 2015 (UTC)

undo vandalism

Change: GPUs added my shading back to: GPUs added programmable shading Then you may delete this comment. -- Michael s. Hoffman tiny iphone — Preceding unsigned comment added by 166.137.209.176 (talk) 17:45, 4 October 2014 (UTC)

Needs a separate Architecture section

I came to this page to see just how a GPU is different from a CPU, since neither can do the other's job very well and a GPU can obviously handle parallel operations. While there is plenty of history, other than the brief sentence in the intro saying how GPUs are massively parallel, there is no mention whatsoever of the actual architecture of a GPU and how it is internally different from a general-purpose CPU. Most of the article discusses history, graphics memory, and graphics cards, but not the GPU itself... —Preceding unsigned comment added by 209.77.137.57 (talk) 18:13, 21 August 2008 (UTC)

I think this would be very helpful in the article, for sure. --DRKThree (talk) 15:10, 26 June 2011 (UTC)

Seconded. This is wise. One of the chief concerns with the article is that it links to a multitude of ideas from individuals, acting separately, reporting on what they believe is an example of a GPU, but nowhere does the entry properly consider or address the defining characteristics of what, technically, a GPU actually is. It is this: a graphics processor which embodies a combination of general-purpose programmability and rendering power. Provided it covers these two defining characteristics, it's a GPU. If it has only programmability and a graphics orientation, as the i860 did before it was retasked, then it is a microprocessor. If it has rendering power, in any capacity, but offers nothing which can reasonably be considered programming power, then it is a graphics accelerator. — Preceding unsigned comment added by 194.75.238.182 (talk) 09:42, 15 July 2015 (UTC)

Introduction

The intro states "In more than 90% of desktop and notebook computers integrated GPUs are usually far less powerful than their add-in counterparts." This almost led me to believe that 10% of integrated GPUs are more or equally powerful as add-in ones. However, that is not what the external source means.

Quoting: "In fact... more than 90 percent of new desktop and notebook computers use integrated graphics."

Is my view valid?--64.230.4.64 (talk) 22:38, 10 February 2008 (UTC)

Actually, the intro states "More than 90% of new desktop and notebook computers have integrated GPUs, which are usually far less powerful than those on a video card". I believe it is clear that the 90% reference is for desktops and notebooks, while the "usually far less powerful" is for the integrated GPUs. The comma in the middle of the phrase is the key. —Preceding unsigned comment added by 201.235.18.87 (talk) 17:37, 8 December 2008 (UTC)

Untitled discussion

This page has been directly quoted in PCWorld Magazine in the October 2006 edition, in the article about GPUs.--Tiresais 08:22, 7 September 2006 (UTC)


NVIDIA Corporation coined the term around 1999 to describe its GeForce range of graphics chips, based on the abbreviation "CPU" for a computer's central processor. However, Sony may have used the term in 1994 to describe the graphics hardware inside its PlayStation game console.

Okay, so if Sony or nVidia coined the term "GPU", how does one explain this?
The information in question has been removed until verification. Thanks for pointing to this issue. Optim 15:19, 9 Jan 2004 (UTC)
Why has the comment that nVidia coined the term been restored? GPU was being used in the mid 1980s, and nVidia was founded in 1993 Crusadeonilliteracy 13:20, 21 Feb 2004 (UTC)
I've been wondering that for a while now. I'll try to fix it. -lee 22:53, 5 Jan 2005 (UTC)
I removed the claim once more. Please do not add it again, since (as demonstrated above), the term has been in use LONG before NVidia has been on the scene. -- uberpenguin 02:59, 18 December 2005 (UTC)
What? If it showed up again between January and December, I didn't put it there. Please get your attributions straight. -lee 22:43, 9 January 2006 (UTC)
I wasn't singling you out, nor was I blaming you for adding it. It's just a general warning to anybody involved not to re-add the text. -- uberpenguin 23:42, 9 January 2006 (UTC)
The graphics processor in the Atari Jaguar was called "Graphics Processing Unit" / "GPU" even before its release in 1993. See also the Atari Jaguar FAQ or the Wikipedia entry about the Atari Jaguar. — Preceding unsigned comment added by Irata42 (talkcontribs) 13:31, 7 June 2012 (UTC)

Untrue Claim

"Several (very expensive) graphics boards for PCs and computer workstations used digital signal processor chips (like TI's TMS340 series) to implement fast drawing functions,..." TMS340 never was a DSP, this was a VDP (from my knowledge term VDP is also missused to other chips than TM340 familly). TMS34010 and TMS34020 from programmer point of view was a normal CPU but this familly has additionaly special instructions for graphic operation and highly integrated graphics subsytem ie vertical and horizontal counters, with these counters and additional hardware this VDP was synchronized with electron beam (this claim is valid for CRT displaying technology). TMS340 can be used to build normal uComputer - they dont need for normal calculations ie not graphics any additional - universal CPU's. Also from my knowledge TI DSP (TMS320) was used as a part of the various graphics subsystems (for acceleration of the calculation).

I went ahead and fixed this (though it's been a while now). -lee 15:09, 27 March 2007 (UTC)

The unit price of the device is not relevant to its definition. This is the stuff of Amiga fanboydom: "Yeah, but they don't count because I couldn't afford them". Irrelevant. — Preceding unsigned comment added by 217.42.71.124 (talk) 16:56, 11 July 2015 (UTC)

RAM

The article needs to make better sense of the whole RAM thing. It states that:

A GPU will typically have access to a limited amount of high-performance VRAM directly on the card, which offers much greater speed than dynamic RAM, though at much greater cost. For example, most modern cards have 256 MB of VRAM, with some having as much as 512MB of VRAM, whereas the computer itself may have 1GB or more of system memory.

But both RAM links redirect to Dynamic random access memory. And when I was looking at video cards, most of them listed that their RAM was DDR - the same kind as the standard system memory. So is the video RAM really a different type of RAM, or is it just in a different place - on the video card as opposed to in the motherboard's main DIMM slots?

Well, today most cards are using GDDR3 which is definitely not in-use on motherboards. But some cheaper cards use DDR2, or even regular DDR SDRAM. And before that cards used SDRAM, EDO DRAM, FPM DRAM. So yes the RAM chips are often absolutely the same as those used for the system. The implementation of them is just different. Graphics boards can run their memory faster because it is easier to make the bus faster if it is short and the RAM is directly soldered on the board instead of in sockets. And, of course, because graphics boards often use faster specced RAM chips. --Swaaye 04:05, 30 March 2006 (UTC)
Eh... No, the real utility of having onboard RAM is that it is dedicated, not that its implementation is somehow faster than that of primary storage. Graphics card memory (which is more or less used for framebuffer and texture memory) is directly addressable by a GPU and therefore isn't subject to all the delays involved in having to utilize the CPU (or DMA of some sort) for main memory access. The physical parameters of the graphics expansion board have pretty much no bearing on making the RAM somehow "faster" (at least in the context of this discussion). Rather, the onboard memory is dedicated for usage by the GPU and therefore invokes much less latency than using computer primary storage would. I'll improve this article's treatment of on-card memory a bit... -- uberpenguin @ 2006-03-30 04:40Z
This isn't entirely correct. The fact that it is local memory is the key factor in lower-end cards where memory bandwidth isn't a limiting factor. However, in high-end cards, speed is absolutely the most important feature of video card ram. Take the X1950XTX for example, its memory is clocked at 1GHz (2GHz DDR), far faster than any system memory to date. In short, video cards use a local batch of fast, dedicated memory to perform memory intensive functions. The faster the memory, the more performance you get. -- michaelothomas
I basically removed the offending text and cleaned up the paragraph a tad. This article doesn't need to explain the design motives regarding expansion cards, so I didn't really add any more content. -- uberpenguin @ 2006-03-30 04:54Z
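To make the dedicated-memory point above concrete, here is a minimal sketch (assuming Python with NumPy and the CuPy library plus a CUDA-capable card, none of which are mentioned in this thread) of how data in system RAM has to be copied into the card's local memory before the GPU can work on it:

import numpy as np
import cupy as cp  # assumes a CUDA-capable GPU and CuPy are available

host_buf = np.random.rand(1024, 1024).astype(np.float32)  # allocated in system RAM
dev_buf = cp.asarray(host_buf)   # explicit host -> video-memory copy across the expansion bus
dev_buf *= 2.0                   # the GPU computes entirely out of its local, dedicated memory
result = cp.asnumpy(dev_buf)     # copy the result back to system RAM for the CPU to use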

It is also true that the width of the memory interface makes a difference in performance. The higher the number, the better. Many mid-range cards these days use a 128-bit interface, while more powerful or specially enhanced editions of cards usually have 256 bits or more. Alexander 04:02, 11 August 2007 (UTC)
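As a back-of-the-envelope illustration of the two factors discussed above (effective memory clock and bus width), here is a hypothetical sketch in Python; the figures are illustrative, not measurements of any particular card:

def peak_bandwidth_gb_per_s(effective_clock_hz, bus_width_bits):
    # Theoretical peak = transfers per second x bytes moved per transfer.
    return effective_clock_hz * (bus_width_bits / 8) / 1e9

print(peak_bandwidth_gb_per_s(2e9, 256))  # 1 GHz GDDR (2 GHz effective) on a 256-bit bus: 64.0 GB/s
print(peak_bandwidth_gb_per_s(2e9, 128))  # the same chips on a 128-bit mid-range bus: 32.0 GB/s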

Transistors

I think that GPU sizes are twice that of current generation CPUs. Is this true? Wizzy 08:17, 10 August 2006 (UTC) Yeah, I've got two nVidia 8800GTXs and they're 11" long, 2" deep and 4" wide. These are the first DirectX 10 compatible cards on the market; they have 756MB of GDDR3. These things are ginormous.

The article should be split or moved

1. No-one ever used the GPU term until nVidia invented it for the GeForce 256.
2. The correct name has always been "video card", and it continues to be used. --Dmitry (talkcontibs ) 09:09, 17 September 2006 (UTC)

Oppose 1: See the first topic of this page.
2: There's no such thing as a correct term. NVIDIA might have coined it but, if you're saying that it is not in common usage, you must be living under a rock still. rohith 19:44, 17 November 2006 (UTC)
Still, though, the scope here does seem pretty narrow. As noted further down, people working in academic and industrial applications had GPUs and frame buffers long before Atari, Apple, IBM and Commodore made them common on PCs, and long before Nvidia started applying the term to the GeForce. I'm wondering if we should indeed merge the PC-specific items into the video card article (or perhaps another article like History of video hardware on personal computers) and leave this one for a general treatment of the subject (covering things GPUs do in general, like BitBLT, Bézier curves, etc), and some of the history I didn't know about when I first rewrote this. -lee 15:22, 27 March 2007 (UTC)

Which GPU has a 3D accelerator on it?

Can someone tell me which one it is exactly? I'm confused. KanuT 03:03, 3 January 2007 (UTC)

Hello! Can anyone tell me which GPU has a 3D accelerator on it? KanuT 21:17, 14 January 2007 (UTC)

These days, they all do. Even really old cards like the ATI Rage family have at least some 3D capability. -lee 15:23, 27 March 2007 (UTC)

Generally speaking, do integrated GPUs use the same instruction set as dedicated ones? Does their performance shortfall mainly come from the lack of dedicated VRAM? (bzongbzongbzong@163.com 00:07, 06 Aug 2011) — Preceding unsigned comment added by 221.6.29.71 (talk)

History re-ordered

I think the history section needs to be re-arranged. For example, the following bit:

In the late 1980s and early 1990s, high-speed, general-purpose microprocessors became popular for implementing high-end GPUs. Several high-end graphics boards for PCs and computer workstations used TI's TMS340 series (a 32-bit CPU optimized for graphics applications, with a frame buffer controller on-chip) to implement fast drawing functions; these were especially popular for CAD applications. Also, many laser printers from Apple shipped with a PostScript raster image processor (a special case of a GPU) running on a Motorola 68000-series CPU, or a faster RISC CPU like the AMD 29000 or Intel i960. A few very specialised applications used digital signal processors for 3D support, such as Atari Games' Hard Drivin' and Race Drivin' games.

is under the "1970's" heading, despite clearly not being relevant to the 70's. I'd trivially cut&paste it into the 80's section, but I wonder if maybe it should be split up between the 80s and 90s? Emteeoh 20:49, 12 January 2007 (UTC)

Will do. I don't know how that got split up like that. -lee 15:00, 27 March 2007 (UTC)

I have a PC, and I don't know how to check what graphics card I have... could someone tell me how to figure it out, like through Control Panel or something? Thanks.

Microsoft bias in the history section?

The history section of this article seems to concentrate overmuch on the impact of DirectX/Direct3D and its predecessors, only mentioning the older and far more developed standard OpenGL later on, and neglecting any mention of 3Dfx's Glide API or MiniGL. While DirectX did bring many of the mentioned features into general use among Windows users, they were generally not the first to bring them to market, or even into popular use. GreenReaper 04:22, 14 January 2007 (UTC)

Which do you think would come first and dominate an article on war: WWII or the secret 100-year burger war of 1732? The fact of the matter is, DirectX dominates the market, is far more widely known and is far more "important" (though not better) than OpenGL. We have always had the policy of not letting personal views get in the way of obvious topic domination. --Jimmi Hugh 03:35, 13 April 2007 (UTC)
DirectX is about as close to winning the war as the US is to winning the war in Iraq; while Glide may have died out, OpenGL is still in strong industry use, and it is more than worth mentioning the hundreds of graphical APIs that came before DirectX. So, while your personal view may be that DirectX has won, direct evidence to the contrary proves otherwise.76.89.26.83 11:16, 28 October 2007 (UTC)
The wonderful thing about history is that it's large and expansive; you can present just the DirectX side of it, but that's hardly comprehensive. There is no reason to focus only on DirectX, and there is no reason to give non-Microsoft APIs credit where it's not due, but pretending that they don't exist helps no one. —Preceding unsigned comment added by Gordmoo (talkcontribs) 04:20, 11 October 2007 (UTC)

Rename or broaden

The above discussion notwithstanding, this strikes me as a reasonably competent version of what it is. However, it is NOT an article about GPUs in general. It is an article about the history of GPUs in PC applications, with emphasis on MS Windows.

There is little attention to the basic theory of operation or the general evolution of this type of processor. The word "vector" appears only twice. Scalability issues are not addressed. Pioneers in vector processing and computer imaging with little PC involvement, like Cray, TI, or SGI, are not mentioned at all.

There's no sense criticizing an apple for not tasting like a pear, but we shouldn't call it a pear either. I suggest that either the scope of the article be broadened or the title be changed to better reflect the content, and a separate, more general GPU article (linked to this one) be started.

- ef

I have to agree here. I wrote most of the beginning a while back; it was even worse then (apparently someone thought NVIDIA had invented the term!). Unfortunately, most of my knowledge of graphics systems comes from the PC and consumer electronics world; I'm familiar with some of the high-end stuff by name (E&S should be on that list too), but I don't really know enough to flesh this out. In any case, if you know more than I do, feel free to edit the article and add what you know. Thanks. -lee 14:59, 27 March 2007 (UTC)

90% ??

Working in the industry, I have to seriously question the figure that 90% of desktops have integrated video on the motherboard. There's also the issue of referencing Intel-based systems, but not AMD-based systems.

69.95.74.113 20:13, 23 June 2007 (UTC)

It's 90% of desktops and notebooks. Not many notebooks use a separate video card. And the figure is from a cited source. If you know of a better source, please add it. --Harumphy 21:10, 23 June 2007 (UTC)

Actually, anything other than a very low-end business-only laptop these days has a mobile version of a desktop graphics card (or onboard graphics that are still superior to Intel chipsets, like ATI onboard graphics or the Nvidia 6100). Also, outside of business or entry-level oriented machines, actual graphics cards are featured. There is no way it is as high as 90% unless you count non-PC computers. Alexander 04:07, 11 August 2007 (UTC)

Opinions and anecdotes are of no help. We need facts from cited sources. Harumphy 12:32, 11 August 2007 (UTC)

Oh for Christ's sake, come on guys, the quoted article is, as I understand it, not necessarily any more reliable than any of us. And at any rate, when talking about this kind of technology, there is no point relying on an article written more than a year ago! 90%?!? That's rubbish and we all know it. Go shopping for computers on the internet and you will conclude that even most laptops don't have an IGP anymore. —Preceding unsigned comment added by 83.95.198.203 (talk) 15:59, 14 December 2008 (UTC)

Going shopping for computers and concluding that most laptops don't have an IGP is original research, which policy strictly forbids. Rilak (talk) 04:39, 15 December 2008 (UTC)

Policy bumboy right there!!!!! — Preceding unsigned comment added by 86.5.254.250 (talk) 18:35, 29 August 2011 (UTC)

This source from JPR indicates that in 2011, 38% of consumer PCs and notebooks sold had integrated graphics. — Preceding unsigned comment added by Jkrshw (talkcontribs) 07:20, 12 October 2011 (UTC)

Weird formulation?

"A GPU can sit on top of a video card, or it can be integrated directly into the motherboard in more than 90% of desktop and notebook computers"

Should not this be something like:

"A GPU can sit on top of a video card, or it can be integrated directly into the motherboard, as it is in more than 90% of desktop and notebook computers" —Preceding unsigned comment added by LarsPM (talkcontribs) 09:31, 4 September 2007 (UTC)

Merge proposal: free software graphics drivers

  • oppose - not the same topic. --Gronky 11:16, 23 October 2007 (UTC)
  • oppose - two very different topics. --Harumphy 11:34, 23 October 2007 (UTC)
  • Support. Free software drivers are only notable in an encyclopedic context when discussing either free software or GPUs. The current standalone article reads like an essay because it's basically a floating discussion. As this article currently doesn't discuss graphics card drivers at all, and such drivers are an important aspect of modern GPU considerations, I think this is an appropriate merge target. Chris Cunningham 11:40, 23 October 2007 (UTC)
  • Oppose - This is its own free-standing topic, and I believe a quite notable one. It seems to be gaining in notability, so if this does get merged it will probably separate out as its own article again in a little while; although I believe it's already sufficiently notable for that. --Daniel11 08:00, 26 October 2007 (UTC)
  • Oppose - See above. --Alexc3 (talk) 00:52, 6 December 2007 (UTC)
  • Oppose - FOSS is not an important aspect of GPUs for most people. It's a narrow issue that doesn't belong in a general discussion of graphics cards. Since everyone except one person has opposed this over the past three months, I'm removing the merge proposal. 72.229.28.14 (talk) 20:11, 1 February 2008 (UTC)

Requested move

The following discussion is an archived discussion of the proposal. Please do not modify it. Subsequent comments should be made in a new section on the talk page. No further edits should be made to this section.

The result of the proposal was no consensus to support move at this time. JPG-GR (talk) 17:30, 29 May 2008 (UTC)

This is to address the question of "GPU" (graphics processing unit) being used as a title when it is not the most common name per WP:COMMONNAME, and to settle the charge that it is not a vendor-neutral term. Regardless of the particular etymology of "GPU", the fact remains that neither 3dfx, ATI nor Intel has ever marketed their graphics products as GPUs, to my knowledge. ATI calls them VPUs, which is really just an attempt at counter-branding obviously derived from NVIDIA's use of GPU. 3dfx and Intel have both relied on generic phrases such as graphics "processor", "accelerator", "chip", "chipset", etc. And these terms are still widespread, used both by Intel and in colloquial usage. I don't see any reason why the wiki shouldn't use similar terminology. Google results:

  1. "graphics processor" -wikipedia 3.2m
  2. "video chipset" -wikipedia 1.3m
  3. gpu OR "graphics processing unit" -wikipedia 1.25m
  4. "gpu" -wikipedia 1.2m, but many hits aren't about computer hardware
  5. "video processor" -wikipedia 1.2m
  6. "graphics chip" -wikipedia 1m
  7. "graphics accelerator" -wikipedia 880k
  8. "graphics processing unit" -wikipedia 313k

As you can see, even if we were to be extremely generous by A) allowing the acronym "GPU" to stand in for the spelled-out phrase, B) attributing every ghit to this context, and C) combining hits from both "GPU" and "graphics processing unit" (although many would be redundant), the phrase "graphics processor" is still more than twice as common. If we discount the non-relevant hits for GPU, we would likely end up with "graphics chip" still being more common. Thus "GPU" and its lesser equivalent "graphics processing unit" together are riding third in common usage. I will also note that "processor" and "chip" both refer to the same scope of hardware as "GPU", and don't carry the implication that "video card" does. Given that "graphics processor" is the WP:COMMONNAME and that GPU is brand-specific terminology (in spite of the popularity of the brand), the best title for the article would be Graphics processor, unless someone can find one that is even more common. Ham Pastrami (talk) 04:17, 19 May 2008 (UTC)

For some reason, I get 20,500,000 hits for GPU on Google, and 1,750,000 for "graphics processor" in quotes. I don't think it's a proprietary term at all, just like CPU isn't. --Wirbelwindヴィルヴェルヴィント (talk) 06:33, 19 May 2008 (UTC)
Addendum: Searching google for "gpu AND graphics OR video OR processor OR processing OR unit OR video" yields 696,000 results. --Wirbelwindヴィルヴェルヴィント (talk) 06:39, 19 May 2008 (UTC)
Because you're not including the -wikipedia part. That filters out links that are from or likely related to the usage on Wikipedia, so that the incumbent phrase (such as the one currently in use in this and related articles) does not influence the count. It is a bit curious that allowing Wikipedia links would cause the number to shrink, though. Maybe someone who knows a little more about how Google runs its searches can answer this. I don't think CPU is a proprietary term either, but that's not what we're discussing (simply "processor" might be more common, but is ambiguous, and ambiguity is not a problem in this case). CPU is also not a valid comparison, for the reason shown below. Ham Pastrami (talk) 09:15, 19 May 2008 (UTC)
  • Comment the COMMON NAME is GPU. And VPU was originated by 3DLabs for their WildCat line of professional graphics boards, and later co-opted by both nVidia and ATI, though no one else calls them VPUs. And neither Intel, nVidia, ATI, nor Matrox originated the GPU term. IIRC, NEC was the first to use it for an ISA 16 pro-graphics board that required you to solder it into your motherboard. Aside from the fact that GPU is regular computer jargon, like signal processing unit, central processing unit. A graphics processor could be a box the size of a PC, or a VAX. 70.55.86.17 (talk) 08:24, 19 May 2008 (UTC)

Is GPU regular computer jargon "just like" CPU and SPU? Let's find out:

  • "cpu" -wikipedia 16m (just dandy)
  • "central processing unit" -wikipedia 2.2m (ok)
  • "signal processor" -wikipedia 2.36m (ok)
  • "signal processing unit" -wikipedia 327k (oops!)
  • "spu" -wikipedia (8 out of top 10 hits unrelated to circuitry)

SPU is ambiguous and "signal processing unit" receives far fewer hits than "signal processor". Where are these articles located? Central processing unit. Signal processor. Ok, that's perfectly in line with the demonstrated common names. So now we have further evidence that "GPU" and other derivative acronyms are not comparable to "CPU" in terms of commonality, and are inconsistent with Wikipedia naming conventions. Ham Pastrami (talk) 09:15, 19 May 2008 (UTC) Regarding "graphics processor" potentially being some kind of unholy box. That title already redirects to this article. If there were a practical cause for concern about ambiguity, that would not be the case. Can you at least give an example of a "graphics processor" of this size? Also, where does it say that a GPU can never be the size of a box? Video cards are getting to that size already, so it's not completely out of the question for the core or chipset to some day reach this size. In any case this seems like a red herring to me. Ham Pastrami (talk) 09:19, 19 May 2008 (UTC)

(I'm an administrator who came here via WP:RM.) I don't think you've demonstrated sufficiently what the common name for this thing is, and I don't think there's consensus to move. Have you looked at books, journal articles and other scholarly works to see what they call this thing? enochlau (talk) 09:22, 29 May 2008 (UTC)

The above discussion is preserved as an archive of the proposal. Please do not modify it. Subsequent comments should be made in a new section on this talk page. No further edits should be made to this section.

No mention of Silicon Graphics?

Before there was OpenGL, there was GL, upon which OpenGL was very closely based. GL is the graphics library that accompanied the graphics-dedicated machines produced by Silicon Graphics, Inc. (the original machine was named the "Iris"), which contained what I would call the first GPU ever. At that time ('80s-early '90s) the distinction between a GPU and a "graphics card" was, I think, blurred, but the Silicon Graphics machines came with one or more chips that performed graphics primitives very fast (at least for that era), and every research or CAD group that could afford one would buy one.

OpenGL was released by Silicon Graphics.

(Disclaimer: I have never had any financial or other relationship with Silicon Graphics. But from this article it seems as though memories of computer graphics history are extremely short.)Daqu (talk) 17:12, 23 September 2008 (UTC)

Silicon Graphics hardware didn't use GPUs, they used ASICs for each stage or a few stages of the graphics pipeline. The IRIS doesn't have a GPU, it has a Geometry Engine that implemented one stage of graphics pipeline - matrix transformations, clipping and mapping to output device coordinates. Further information can be found in a paper by James H. Clark, "The Geometry Engine: A VLSI Geometry System for Graphics". There should be a copy of this paper somewhere on the Internet. Rilak (talk) 18:28, 26 September 2008 (UTC)
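For readers unfamiliar with what that pipeline stage does, here is a toy sketch (Python/NumPy, illustrative numbers only, not SGI's implementation) of the work the Geometry Engine handled: transform a vertex by a 4x4 matrix, clip-test it against the view volume, and map it to output device coordinates:

import numpy as np

def geometry_stage(vertex, matrix, width, height):
    # Transform into clip space (model/view/projection combined into one matrix).
    x, y, z, w = matrix @ np.append(vertex, 1.0)
    # Trivial clip test against the canonical view volume.
    if not (-w <= x <= w and -w <= y <= w and -w <= z <= w):
        return None
    ndc = np.array([x, y, z]) / w            # perspective divide
    sx = (ndc[0] * 0.5 + 0.5) * width        # map to device (window) coordinates
    sy = (ndc[1] * 0.5 + 0.5) * height
    return sx, sy, ndc[2]

print(geometry_stage(np.array([0.25, -0.5, 0.0]), np.eye(4), 640, 480))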


This whole "GPU" article has had a wonk which has not been fixed since the beginning. The problem with it is that people are placing additions based on what they think should be thought of as a GPU to satisfy them personally, such as the Amiga's "blitter", which certainly isn't in the same class as anything that can rightfully be called a GPU and as such should be dropped from consideration without a thought. Only Amiga disciples will protest, thus pinpointing a key problem with public access infomation. For the article to be truly dispassionate, the feelings of discples should not be taken seriously. Technical accuracy is the aim, not another page of Amiga publicity/eulogy passed off as information. — Preceding unsigned comment added by 217.42.71.124 (talk) 17:13, 11 July 2015 (UTC)

History

With regard to OpenGL and DirectX history, this article is not very accurate. As a graphics researcher at Bell Labs and Microsoft Research, I was an eyewitness to much of this. OpenGL was primarily developed by Silicon Graphics and was more or less based on their workstation hardware architecture. SGI promoted the concept of multi-pass rendering to achieve more sophisticated graphics effects, but computer game developers were designing software that was based on the concept of a single pass with "multi texture". The most important example being John Carmack's Quake game, which used two texture maps, one for detailed texture color and a second to store low-resolution radiosity for more realistic lighting.

3DFX introduced the first dual texturing unit in 1998, for the Voodoo2 card. This was motivated by Carmack's multitexture games, but it was clearly an important future trend. nVidia and ATI were quick to follow, and Microsoft began working on modifications to DirectX to support the feature. DirectX 6 (Summer 1998) had support for multitexture hardware, and a joint design process involving Microsoft, nVidia and ATI worked out the design of vertex and pixel shaders, which appeared partly in DirectX 7 (Fall 1999) and fully in DirectX 8 (early 2001 or late 2000?).

While SGI resisted multitexture in favor of multipass (their workstation hardware was based on the latter), both nVidia and ATI immediately exposed DirectX pixel/vertex shader functionality via the OpenGL extensions mechanism. Thus they were able to bypass the conservative ARB (architecture review board), which was dominated by the aging UNIX workstation industry. John Carmack, upset by the poor quality of DirectX 5, had decided to base his games on OpenGL. Ironically, Carmack was encouraged by the Windows NT team at Microsoft and possibly even assisted by them. This was concurrent with a war within Microsoft between the DirectX team (part of the Windows 9x division) and the Windows NT division, which supported the OpenGL API. Bill Gates personally ended the conflict: when asked to terminate DirectX, he decided instead to terminate the OpenGL effort to avoid having part of the Windows kernel be dictated by an outside committee (the ARB).

Again, Carmack's dominance of the industry at that time was an important factor in nVidia's and ATI's decision to support OpenGL and extend it with DirectX shader functionality. Typically a subset was implemented, often referred to as "QuakeGL", since at that time support of Carmack's software was the imperative. DonPMitchell (talk) 16:55, 31 July 2009 (UTC)

Some friends have pointed out that multi-texture hardware predates Carmack's software. Also, Carmack's break with Direct3D was around the timeframe of DX 3.0. DonPMitchell (talk) 23:31, 31 July 2009 (UTC)
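To illustrate the multitexture combine described above, here is a software sketch (Python/NumPy with made-up data; not Quake's actual code) of a detailed colour texture being modulated by a low-resolution lightmap in a single pass, the operation that otherwise required two rendering passes:

import numpy as np

texture = np.random.rand(256, 256, 3).astype(np.float32)   # detailed colour texture
lightmap = np.random.rand(16, 16).astype(np.float32)        # coarse radiosity lightmap

# Upsample the lightmap to texture resolution (nearest-neighbour, for brevity).
light_full = np.kron(lightmap, np.ones((16, 16), dtype=np.float32))

# Per-texel modulate: the second texture unit scales the first (GL_MODULATE-style combine).
shaded = texture * light_full[..., None]
print(shaded.shape)   # (256, 256, 3)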

Citation Update for GPU

Adding some suggested citations for anybody who wants to read and update the article. As the main article states, the page is way behind (market data from 2008 - almost three years ago). Here are some of the more recent market share figures:

http://jonpeddie.com/press-releases/details/jon-peddie-research-announces-report-on-cpus-with-embedded-graphics-process/ TIBURON, Calif--September 13, 2010

Category            | 2008  | 2009  | 2010
Embedded HPU        | 0.0%  | 0.0%  | 0.1%
Embedded EPG        | 0.0%  | 0.0%  | 22.8%
IGP                 | 67.3% | 71.2% | 46.7%
Discrete            | 32.7% | 28.8% | 30.3%
Area of Contention  | 0.0%  | 0.0%  | 0.0%

http://jonpeddie.com/press-releases/details/jon-peddie-research-announces-3rd-quarter-pc-graphics-shipments/ TIBURON, Calif--October 25, 2010

Vendor  | This Quarter Market Share | Last Quarter Market Share | Unit Growth Qtr-Qtr | This Quarter Last Year Market Share | Growth Yr-Yr
AMD     | 22.3%  | 25.0%  | -11.4% | 20.1%  | 11.0%
Intel   | 55.6%  | 53.4%  | 3.2%   | 53.6%  | 3.7%
Nvidia  | 21.2%  | 20.7%  | 1.5%   | 25.3%  | -16.1%
Matrox  | 0.1%   | 0.1%   | 0.0%   | 0.0%   | 102.5%
SiS     | 0.0%   | 0.1%   | -96.7% | 0.3%   | -99.4%
VIA/S3  | 0.8%   | 0.8%   | -4.7%  | 0.7%   | 16.4%
Total   | 100.0% | 100.0% | -0.9%  | 100.0% | 0.0%

--MarsInSVG (talk) 09:28, 26 November 2010 (UTC)

IGP section is out of date.

Add new Intel Sandy Bridge to the section. —Preceding unsigned comment added by 71.29.75.158 (talk) 23:22, 19 December 2010 (UTC)


Bruteforceattack

What does bruteforceattack have in common with graphicalprocessingunit? —Preceding unsigned comment added by 89.31.161.101 (talk) 00:24, 9 January 2011 (UTC)

photo is sensationalist bs for gamers

GPUs aren't covers, they are circuits. Find an 'open' one. --62.1.89.9 (talk) 22:19, 11 January 2011 (UTC)

Did the Amiga have a gpu or not?

I've removed the following from the article page, where it does not belong:


    • I've corrected the information above once already and had the edit removed, so I'd like to make the point that the copy above is an opinion. It includes no citations to factually back up any of the claims. To wit: 1. There were other, more sophisticated graphics accelerators before the Amiga. 2. Having separate CPU and graphics chips was common to virtually all home computers. The only real difference is that fans of the hardware are more keen to accentuate aspects of the Amiga they felt made it superior to the PC standard of the time. 3. The Amiga video hardware lacked critical features which were present on its predecessor, the 64. 4. The argument about whether or not it is actually important to have graphics hardware on a separate unit is moot. As time goes on and on-chip multi-threading becomes typical, GPU and CPU functions are tending to converge. The truth is, how many components the motherboard design is divided into is not vital. Arguably it's better to have tighter integration.

I don't know who wrote it. It seems clear the author misses the point, though. A GPU isn't a separate chip for rendering the contents of a frame buffer. A chip that reads the contents of a frame buffer and then uses that data to output to a video port is not a GPU. A GPU is a specialized memory manipulator dedicated to altering memory intended for eventual output as video. Granted, GPUs aren't restricted to manipulating just image data, but that is their first purpose, hence the 'G' for graphics in GPU. Mc6809e (talk) 19:34, 7 April 2011 (UTC)

Amiga

      • Lovely, thanks very much for the revision of the paragraph above. However, the information below is still, basically, user-generated advertising. It also contradicts the factual information supplied above in this very entry, in which it is written, correctly, that the term 'GPU' was coined and defined by NVidia specifically, as "a single-chip processor with integrated transform, lighting, triangle setup/clipping, and rendering engines that is capable of processing a minimum of 10 million polygons per second." Critical to the validity of this article as well is the fact that the term GPU is understood to refer to graphics processors which are themselves capable of handling the floating-point operations used for graphics, which were previously the remit of the CPU. This is the significant difference which separates a "graphics accelerator" from a "GPU": floating-point math, in hardware. The Amiga chip is an accelerator, similar to the ones that went into PC graphics cards of the time; this is not a GPU by any measure. Hardware implementations of graphics bitblit operations pre-date the Amiga by more than ten years. Please don't overestimate the value of finding the Amiga to be the first personal computer you encountered which had graphics hardware. Thank you for the edit of the above text, it's more honest now. I would be most grateful if you would make appropriate changes accordingly to the text below, since you are more versed in Wikipedia style. Thank you! One more thing in passing: almost anything you can name was the first example of <whatever it was>, since everything is unique; that alone doesn't make it special. Worth a thought if you're thinking <anything at all> was the only thing which ever possessed its own particular complement of features and benefits... because it's true of every product! PS. I don't mean to hijack this Wikipedia page, I truly believe that writing like the stuff immediately below is entirely fit for Amiga fan pages, but NOT an encyclopaedia which people trust to be true and impartial. *** Vapourmile (talk) 19:55, 13 April 2011 (UTC)

Visual Processing Unit?

The first line reads "graphics processing unit or GPU (also occasionally called visual processing unit or VPU)...". The term "VPU" has not been used since the launch of the first ATI cards, so I don't think that it is relevant to be used in the first sentence of this article. Doing a Google search on "Visual Processing Unit" can prove that this term hasn't been in use for a long time. DRKThree (talk) 15:08, 26 June 2011 (UTC)

A specific paragraph needs a citation

Originally posted on this article with this edit by IP 2.96.102.236. I decided to move it to here for discussion.

"The paragraph above needs a citation: It's reads like fan-mail. Detracting reasons: An IBM Personal Computer would be classed as 'Using a GPU' if it were equipped with a graphics card with hardware acceleration which had been avaiable since 1983. Also, the Smalltalk-74 system used a silicon implentation of a blitter which the Amiga blitter was a secong generation imitation of. Thirdly, this argument refers to blitter hardware which a very different device from a GPU, as defined by Nvidia above, whose name belongs to silicon with a far greater depth of features than a blitter which is a relatively simplistic device. The Amiga blitter is certainly not the same class of device as a GPU. The Amiga is a particular culprit for attracting fan-posting which is far more an expression of the inflated opinion of its users than actual historical fact. Please amend with respect to this."

Haseo9999 (talk) 01:40, 29 March 2012 (UTC)

  • Several issues with the above. What card are you referring to as being available in 1983? The earliest graphics card I'm aware of that accelerated graphics for the PC was the Professional Graphics Controller, which was released in 1984. Even then, the card was essentially a separate 8088 on a card with direct access to video memory for speed. That doesn't qualify as a GPU. Second, Smalltalk-74 systems used microcode to drive the discrete logic chips that constituted their CPU. There was no blitter hardware -- only a microcode sequencer driving the CPU's ALU and registers. Third, the Amiga's blitter fits the definition provided by the article, but if we restrict ourselves to NVidia's definition then we must do away with the many references to early graphics accelerators mentioned therein. I think that's a mistake. Additionally, the Amiga's blitter isn't the whole of the Amiga's graphics acceleration. The Amiga graphics hardware has a separate processor with its own instruction set running a program out of memory that is capable of operating and directing the blitter to perform a sequence of graphics operations independent of the CPU. This ability to run a sequence of operations in parallel and independent of the CPU seems to me enough to graduate the Amiga's graphics hardware from a mere assistant to the CPU to being a full GPU. Also, the Amiga's blitter distinguishes itself from other BITBLT assist hardware in that it supports three source operands, making a single-blit stencil operation possible. This is important when dealing with multi-bitplane color displays. Finally, three-source-operand support allows the blitter to be used to construct a full adder circuit with just two blits. Indeed, something similar to this was done in Dec '87 as part of an implementation of Conway's game of life using the Amiga's blitter, making it perhaps the first example of general-purpose computing with a GPU.

Mc6809e (talk) 03:21, 7 May 2012 (UTC)
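A sketch of the three-source-operand point made above (Python/NumPy on single bit planes, purely illustrative; the actual hardware works on words of planar bitmap data): with sources A (mask), B (image) and C (background), one minterm gives a stencilled copy in a single blit:

import numpy as np

rng = np.random.default_rng(0)
A = rng.integers(0, 2, (8, 8), dtype=np.uint8)   # source A: stencil/mask plane
B = rng.integers(0, 2, (8, 8), dtype=np.uint8)   # source B: image plane to stamp down
C = rng.integers(0, 2, (8, 8), dtype=np.uint8)   # source C: existing background plane

# D = (A AND B) OR (NOT A AND C): B shows through where the mask is set, C elsewhere.
D = (A & B) | ((1 - A) & C)
print(D)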

Mention of GPUs embedded in SoCs

SoCs that include one (or more) GPU or GPU-like IP blocks include, for example, the Apple Ax, OMAP, Snapdragon, Tegra, NovaThor, Exynos, etc.

Please also read Talk:Graphics_processing_unit#No_mention_of_Silicon_Graphics.3F, because sometimes these units are called multimedia processors. At least the VideoCore is reported here: Ars Technica: Video iPod – Vivisection, to replace the "Wolfson audio codec" and add "video processing and output". Doors5678 (talk) 18:06, 13 April 2012 (UTC)

I corrected a grammar error

I usually do that when I find them, but the people that run the website should look over the pages people post for those kinds of errors.

No mention of 3Dlabs?

The 3Dlabs GLINT 300SX, described by 3Dlabs as "the industry's first single chip, 3D-capable graphics device that was shipped on graphics boards from multiple vendors", was announced in 1994. It provided workstation graphics (OpenGL) on the PC platform (Windows NT, PCI) in a single chip. It was something of a landmark in the story of the GPU. Lostdistance (talk) 14:51, 1 January 2014 (UTC)

Good point! Please feel free to add this info into the article's "History" section, it would be a good addition. — Dsimic (talk) 01:38, 2 January 2014 (UTC)

Probably also worth mentioning their Gamma[1] chip, which put transform and lighting in silicon for the first time, and their P10[2] chip, which I think introduced programmable shaders. 109.224.164.230 (talk) 10:11, 5 September 2014 (UTC)

See also

I think there are too many wikilinks in the see also section. Not every one of those links is necessary. If it's OK with other editors, I can trim down the section. --Chamith (talk) 04:41, 3 April 2015 (UTC)

1980s/1990s

Can someone with more time than I have put some info about the very influential and widespread TMS9918/TMS9928 video interfaces in here? Wayne Hardman (talk) 21:02, 2 April 2017 (UTC)

Proposed merge with PGPU

Same topic Widefox; talk 13:34, 7 September 2016 (UTC)

Agreed, let's merge. TranslucentCloud (talk) 13:55, 7 September 2016 (UTC)
Merge. Sizeofint (talk) 14:55, 17 September 2016 (UTC)

Proposed merge with VGPU

subtopic no more than dict def Widefox; talk 13:35, 7 September 2016 (UTC)

Agreed, let's merge. TranslucentCloud (talk) 13:55, 7 September 2016 (UTC)
Merge. Could potentially be merged into hardware virtualization as well. Sizeofint (talk) 14:57, 17 September 2016 (UTC)

No mention of Evans & Sutherland?

Evans & Sutherland were doing GPU hardware decades before nvidia were formed. This article seems to have a very "consumer PC" slant with no mention of the actual history of 3D graphics. — Preceding unsigned comment added by 172.91.92.28 (talk) 08:53, 17 October 2016 (UTC)

ARM buys Apical

Things are heating up with this ARM acquisition of Apical. This wiki article should be updated with these latest GPU developments. 82.25.154.11 (talk)

basics missing?

There is a great write-up by game developer Fabian Giessen, A trip through the Graphics Pipeline 2011, though it is a little technical. The Wikipedia article seems to miss out on explaining the basics, like the GPU command processor, command buffer, graphics scheduler, user mode, kernel mode, and many others. --ThurnerRupert (talk) 11:04, 6 February 2017 (UTC)

What is the GPU only computer?

A computer which only uses many GPUs and not a CPU. It usually requires a different operating system to perform better. — Preceding unsigned comment added by 2A02:587:410A:D400:F1C7:FBA2:7C6F:1F05 (talk) 05:08, 11 May 2017 (UTC)

"Embarrassingly parallel"?

"GPGPU can be used for many types of embarrassingly parallel tasks" -- who is it that is embarrassed? And why aren't "highly" or "massively" more appropriate adjectives? AllTheGoodNamesWereTaken (talk) 16:30, 16 July 2017 (UTC)

GPUs as the underlying/enabling hardware for modern AI (CNNs/"deep learning")

I'm putting a line in the intro about the importance of the modern GPU for most of the AI research that's made such big leaps and bounds in the last decade, and hopefully a short section expounding further at a later date. Let me know if there is a better place for it somewhere else or if it's largely a duplicate of other content. Krb19 (talk) 22:48, 3 August 2017 (UTC)

Citation 36 is to a Wiki article that has no citation of what is claimed

It cites the 3dfx Interactive Wiki page for the claim, and the claim is not even mentioned in the main Glide API article. — Preceding unsigned comment added by 5lood237 (talkcontribs) 23:49, 16 February 2019 (UTC)


Missing Cirrus Logic

Cirrus Logic was a big player in the early GPU market during the 1990s. Cirrus Logic acquired Acumos and dominated PC graphics until 1999. Cirrus purchased Pixel Semiconductor in 1993 and launched multimedia-enabled GPUs. The history of GPUs is incomplete without acknowledging Cirrus' contributions. Someone kindly research and add that. Thank you. — Preceding unsigned comment added by 192.104.24.223 (talk) 16:42, 15 September 2017 (UTC)

External links modified

Hello fellow Wikipedians,

I have just modified 4 external links on Graphics processing unit. Please take a moment to review my edit. If you have any questions, or need the bot to ignore the links, or the page altogether, please visit this simple FaQ for additional information. I made the following changes:

When you have finished reviewing my changes, you may follow the instructions on the template below to fix any issues with the URLs.

This message was posted before February 2018. After February 2018, "External links modified" talk page sections are no longer generated or monitored by InternetArchiveBot. No special action is required regarding these talk page notices, other than regular verification using the archive tool instructions below. Editors have permission to delete these "External links modified" talk page sections if they want to de-clutter talk pages, but see the RfC before doing mass systematic removals. This message is updated dynamically through the template {{source check}} (last update: 18 January 2022).

  • If you have discovered URLs which were erroneously considered dead by the bot, you can report them with this tool.
  • If you found an error with any archives or the URLs themselves, you can fix them with this tool.

Cheers.—InternetArchiveBot (Report bug) 17:31, 22 October 2017 (UTC)

The use of GPU for crypto mining

At the present moment, many high-quality GPUs are purchased to mine cryptocurrencies. An article on this would be relevant: the consequences for the gaming industry, and the long-term effect that offering huge amounts of GPU power can have on all related subjects. — Preceding unsigned comment added by UniversusTech (talkcontribs) 11:26, 6 February 2018 (UTC)

Five-month-old spam link removal

An unregistered user from IP 105.102.175.151 made two edits to Wikipedia on April 13th linking to the same external page. It led to a defunct website, but I removed the link on this page a few minutes ago rather than trying to archive it. The original edit that added it is here: https://en.wikipedia.org/w/index.php?diff=858415919&oldid=836268311&title=Graphics_processing_unit The other external link was on another article's talk page and was not caught before being archived on August 19th.

I'm not sure if this is a common tactic used by spammers but I suspect it helps fuel SEO and I would not be surprised if such links are found elsewhere on Wikipedia. 161.6.219.225 (talk) 01:26, 7 September 2018 (UTC)

Well spotted, definitely link spam. Thanks! --Zac67 (talk) 05:27, 7 September 2018 (UTC)

eGPU

Why doesn't a search for eGPU point to this topic? — Preceding unsigned comment added by 68.109.194.172 (talk) 21:49, 8 April 2020 (UTC)

Technology name/existence?

Just heard it in one tech video. It seems it's not SLI (parallel performance, dividing the workload).

at 4:58: performance of one GPU being passed through the other (on board) GPU  89.201.184.159 (talk) 04:28, 26 April 2020 (UTC)
Hybrid Crossfire. It's not possible to "pass performance", but output from one GPU is passed to the output buffer of another. It's hybrid when two (vastly) different GPUs are combined. --Zac67 (talk) 08:36, 26 April 2020 (UTC)
It's not that. The motivation for "that" was to reuse Nvidia's (very expensive) mining cards (after Bitcoin plummeted) for gaming; they could be bought at 1/3 or 1/4 of the price at that time, otherwise they would have ended up in the trash. But the problem is that those mining cards didn't have I/O ports. It seems that the two cards in use in the video were the same model, but at 13:55 it seems that that doesn't need to be the case. The thing I mentioned can be heard from 4:58 to 5:10 and read at 13:55. It's a prerequisite to watch the video to know what went on, and not talk from the head or from experience, because there are some problems with that, so it's not common knowledge whatever that is. I am not knowledgeable in this area, but something weird (and maybe inventive) went on and it worked. 89.201.184.8 (talk) 09:40, 26 April 2020 (UTC)

vGPU, parallel processing, VMs

These are not mentioned, or barely touched upon, but they are important aspects:

  • Sharing GPU over multiple Virtual Machines
  • vGPU [1]
  • parallel processing: Scalable Link Interface / AMD CrossFireX / NVLink 89.201.184.8 (talk) 08:08, 26 April 2020 (UTC)

mobile/desktop GPU

There is no mention of it in the article, but there is a distinction. There are mobile and desktop versions of the same GPU model, and mobile GPUs are used in some desktop computers (e.g. the iMac 5K Retina 27" Display from 2014) and laptops. Setenzatsu.2 (talk) 15:13, 5 December 2021 (UTC)