Talk:GeForce 400 series

Dubious

  • "At CES 2010 Jen-Hsun Huang said that development had yet started."

This needs citation and clarification. Did he say that development had yet to start or that it had started? —Preceding unsigned comment added by KodakYarr (talkcontribs) 23:25, 21 February 2010 (UTC)

  • "combined with support for Visual Studio and C++"

This is very unclear. Does this mean the C++ code is run on the host computer or on the GPU? —Preceding unsigned comment added by 110.66.66.3 (talk) 12:41, 26 February 2010 (UTC)


OpenGL?

I'm surprised that the source of the Direct3D version support data didn't also include OpenGL version support data. Will it support OpenGL 4? Swiftpaw (talk) 19:58, 26 March 2010 (UTC)

OpenGL 4 has "feature parity" with DirectX 11, so yes. --Ysangkok (talk) 10:09, 28 March 2010 (UTC)

Why isn't the OpenGL version listed in the summary box? -- Stou (talk) 09:35, 18 April 2010 (UTC)

Not sure, but someone had already added them to the infobox; the template didn't display them, so I've added them to the infobox template and slightly modified the entries already present in this article. --Rfdparker2002 (talk) 11:49, 11 May 2010 (UTC)

Power Consumption

How many watts? —Preceding unsigned comment added by 92.46.101.79 (talk) 11:07, 27 March 2010 (UTC)

http://www.tomshardware.com/reviews/geforce-gtx-480,2585-15.html --Ysangkok (talk) 10:10, 28 March 2010 (UTC)

Performance Comparison

Should there really be a performance comparison (without ANY source) on this page? I'd say it's better to leave the benchmarks to the tech sites (e.g. tomshardware). —Preceding unsigned comment added by 93.196.13.224 (talk) 22:41, 2 April 2010 (UTC)

While a quick survey of the other GPU pages didn't turn up any existing inter-vendor comparisons, I actually can't see that there's any issue as long as it's sourced and reasonably useful (not just the numbers from NVidia marketing, or some single benchmark run by some forum poster). The numbers are roughly correct (though there is of course significant spread over different applications and different review sites). I remember some place where they aggregated a whole heap of results from multiple review sites, and pulled some averages out of it, but I can't find it at the moment. I'll see if I can find it again, as it'd solve the WP:OR issues ... Maybe something along the lines of "averaging over a number of review sites and benchmarks, the GTX 480 was xyz% faster than the HD4870" or similar? AntiStatic (talk) 11:08, 3 April 2010 (UTC)
I'm not convinced that performance comparisons are appropriate for Wikipedia at all. That's something we've mostly managed to avoid in past articles. ButOnMethItIs (talk) 15:55, 3 April 2010 (UTC)

"GTX 480 is the highest-end model from the GTX 400 series"[edit]

I think this sentence in the article is incorrect. Surely there will be a "GTX 495" with a dual GPU (like the GTX 295), right? 81.154.253.88 (talk) 21:29, 4 April 2010 (UTC)

Considering the power consumption of the GTX 480, this won't be possible without violating the PCIe specs or drastically lowering the shader count and frequency on each of the GPUs, which would defeat the purpose of such a dual-GPU card. --Tdeu (talk) 20:59, 8 April 2010 (UTC)
This is not correct. By that reasoning the GTX 295, HD 5970, and HD 4870 X2 would all violate the PCIe specs. There have also been two complete dual-GPU PCBs (one by ASUS and one by Galaxy) made, demonstrating that such a card is entirely feasible. In fact, the PCBs appear to be production models and not prototypes. In dual-GPU configurations, both GPUs do not run at full load. The article should read that the "GTX 480 is CURRENTLY the highest-end model from the GTX 400 series". There are both a GTX 475 and a GTX 485 on the horizon as well. Annoyed with fanboys (talk) 05:37, 15 August 2010 (UTC)
The PCIe ceiling is 300W. It's possible to ignore that ceiling, but it would not be a PCIe compliant device and could not be advertised as such. Without dramatic power reduction, two Fermis will not appear on a single card. The article should not be changed to anticipate new chips or new cards unless there are official announcements. ButOnMethItIs (talk) 06:38, 15 August 2010 (UTC)

About fast memory onboard

G200 has 16384 registers (64 kB) per SM (8 SPs); GF100 has 32768 registers (128 kB) per SM (32 SPs).

G200 has 8 kB of constant cache per SM (8 SPs) and 24 kB of texture cache per three SMs (24 SPs), plus 16 kB of shared memory per SM (8 SPs), which averages out to 32 kB of fast memory per SM (8 SPs). GF100 has 64 kB of fast memory per SM (32 SPs), which can be split 16/48 or 48/16 between L1 cache and shared memory.

Thus, the fastest memory available to the SMs was cut by half: counting registers plus cache and shared memory, G200 has 96 kB per 8 SPs (12 kB per SP), while GF100 has 192 kB per 32 SPs (6 kB per SP). Since the L2 cache only increased from 256 kB per 240 SPs to 768 kB per 480 (or 512) SPs, the amount of fast memory per SP actually decreased, which can affect your code.
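
(An aside for readers following the numbers: the 16/48 split described above is selectable per kernel through the CUDA runtime. Below is a minimal sketch; the kernel name and sizes are illustrative, not taken from this discussion.)

 #include <cuda_runtime.h>
 
 __global__ void stencil(const float *in, float *out, int n) {
     // Uses shared memory as a software-managed cache, so it wants the
     // 48 kB shared / 16 kB L1 configuration on Fermi (compute 2.x).
     __shared__ float tile[256];
     int i = blockIdx.x * blockDim.x + threadIdx.x;
     tile[threadIdx.x] = (i < n) ? in[i] : 0.0f;
     __syncthreads();
     if (i < n) out[i] = tile[threadIdx.x];
 }
 
 int main() {
     // Ask for the 48/16 shared/L1 split for this kernel; on devices
     // without a configurable L1 (e.g. G200) the preference is ignored.
     cudaFuncSetCacheConfig(stencil, cudaFuncCachePreferShared);
     // ... allocate device buffers and launch stencil<<<blocks, 256>>>(...)
     return 0;
 }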

I ask you, as someone who actually works with these cards: why is this not a legitimate drawback? ButOnMethItIs is repeatedly undoing my changes without answering this point. —Preceding unsigned comment added by 140.109.189.244 (talk) 09:56, 17 April 2010 (UTC)

You offer nothing in the way of reliable sources. It's all original research with a heavy helping of POV. I don't think you understand the role of a Wikipedia editor. ButOnMethItIs (talk) 10:10, 17 April 2010 (UTC)
There is no POV in the last version of the article, which I just checked in. Or, for that matter, in the second-to-last version. The number of registers and the amount of memory on a chip is constant; it can't change and is not something that can be made up. All the details can be verified, and I have now found where my official reference book originally came from on NVIDIA's website. But you go around throwing insults such as "I don't think you know what you are talking about." I don't think that is the role of a Wikipedia editor either.
I would invite you to actually cite references. Protip: an Nvidia document describing how to check this information via CUDA doesn't cut it, not at all. After that, I would invite you to read WP:OR to see what else is wrong with what you're trying to do. ButOnMethItIs (talk) 11:20, 17 April 2010 (UTC)
You know, the version you just undid included a page reference to the official Nvidia table, namely Appendix G, pages 139-140 of the CUDA 3.0 official reference manual (located at http://developer.download.nvidia.com/compute/cuda/3_0/toolkit/docs/NVIDIA_CUDA_ProgrammingGuide.pdf), where all the relevant data on the onboard SRAM are tabulated. I don't know how much more authoritative a reference on a piece of hardware needs to be than a manual published by the vendor. There is no original research here. Again, let me ask the innocent bystanders: is it the role of a Wikipedia editor to remove factual references without reading them, or, for that matter, to remove a note that the Error Correcting Code (ECC) feature is not present in the cards (i.e., the GeForce GTX 400 series) for which the current article is titled, while leaving untouched a paragraph which states that ECC is present in the architecture? —Preceding unsigned comment added by 140.109.189.244 (talk) 13:09, 17 April 2010 (UTC)
That's fascinating. But from what I can tell, that document doesn't mention the GTX 480, GF100, Fermi, or anything relevant to this article. As for ECC support, it's in the memory controller; to "activate it", you would need ECC memory. It needs to be made clear that Fermi does not equal the GeForce 400 series, but that doesn't seem to be what you're interested in. ButOnMethItIs (talk) 13:38, 17 April 2010 (UTC)
Actually, if you hadn't been so hasty to undo every change I make and had done a search for "Fermi" in your Acrobat Reader, you would have seen that the document does mention Fermi cards and states that they are of compute capability 2.x (page 14, section 2.5). And Appendix A tells you that G200 cards are all of compute capability 1.3.
I have been told, and include an official reference that seems indicative of the fact, that ECC can be turned on and off on Tesla cards using existing memory. That is, a Tesla 2070 that has 6 GB of RAM will have 6*7/8 = 5.25 GB of memory accessible when you turn the ECC feature on. In other words, it doesn't cost you extra to activate ECC; it just decreases the amount of memory that you get to see. This is the same way that the Core i7 920 is the same chip as the Xeon W3520, except that you cannot turn on the ECC feature of the memory controller even though the circuitry is present. I believe that this meets the usual definition of "disabled".
As someone who works with CUDA programming, I am interested in exploring how well Nvidia cards work and in disseminating correct knowledge. Granted, I was annoyed earlier and had included some unneeded strong language. But I have edited all of that out by now, leaving only facts about the architecture and the cards in question. —Preceding unsigned comment added by 140.109.189.244 (talk) 14:15, 17 April 2010 (UTC)
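
(A side note for anyone wanting to check the disputed numbers themselves: compute capability, per-block register and shared memory sizes, and the ECC state can all be read off the device with the CUDA runtime API. A minimal sketch, assuming a CUDA 3.0 or later toolkit:)

 #include <cstdio>
 #include <cuda_runtime.h>
 
 int main() {
     cudaDeviceProp prop;
     cudaGetDeviceProperties(&prop, 0);  // device 0
     // G200 parts report compute capability 1.3; GF100/Fermi reports 2.x.
     printf("%s: compute capability %d.%d\n", prop.name, prop.major, prop.minor);
     printf("registers per block: %d\n", prop.regsPerBlock);
     printf("shared memory per block: %lu bytes\n",
            (unsigned long)prop.sharedMemPerBlock);
     // 0 on GeForce 400 cards; 1 on Tesla boards with ECC switched on.
     printf("ECC enabled: %d\n", prop.ECCEnabled);
     return 0;
 }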
I've left it for several days and the section hasn't matured into anything respectable. I've removed everything that was blatant original research or in the wrong section. What's left should be merged with a section describing the Fermi architecture, which is what this article needs. ButOnMethItIs (talk) 14:29, 22 April 2010 (UTC)
ECC is a valuable feature for those who do computations with the card. The fact that companies like Nvidia and Intel advertise chips with ECC as such should be enough to convince any reasonable person that not having ECC is a limitation. The error-checking and correction circuitry is still there in the chip, but can't be turned on because Nvidia doesn't allow it. I already answered your mistaken presumption that extra memory chips are required. The earlier section states that Fermi has ECC support; someone who reads this section would be justified in thinking that the GTX 470/480 has ECC support. It is a fact that the 470/480 does NOT have ECC support. I would suggest that someone who appears to be so concerned about the quality of the article should never have deleted the ECC paragraph. Moved it, maybe.
I've removed this claim once again because it's original research. Do not reinsert it without a citation that clearly supports the claim. ButOnMethItIs (talk) 16:34, 24 April 2010 (UTC)

I'll answer you, 140.109.189.244. If you want citations, I'd suggest the website google.com; I'm not doing that for you. First, both devices have 64 kB of constant memory (globally, across all SMs). However, constant memory accesses, cached or not, are fast only if all threads in a block (or certainly a warp) access the same address at the same time. Otherwise, the reads are serialized. I didn't actually know there was a constant cache at all. In any case, constant memory has the same size and latency properties on both cards. Constant memory is just a cache you can't write to from the device, and it is used for very specific things: small numbers of constant parameters, in fact, that are the same across all threads on the board. It was useful on the G200 as one of the few regions of low latency; now I hardly think about it because of its restrictions.
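
(To make the broadcast-versus-serialized point concrete, a small sketch; the kernel and array names are made up:)

 #include <cuda_runtime.h>
 
 // 64 kB of constant memory per device on both G200 and GF100.
 __constant__ float coeff[16];
 
 __global__ void scale(const float *in, float *out, int n) {
     int i = blockIdx.x * blockDim.x + threadIdx.x;
     if (i >= n) return;
     // Fast: every thread in the warp reads coeff[0], a single broadcast.
     float fast = in[i] * coeff[0];
     // Slower: threads read different addresses, so the warp's
     // constant-cache reads are serialized.
     out[i] = fast + coeff[i % 16];
 }
 
 // Host side: cudaMemcpyToSymbol(coeff, hostCoeff, 16 * sizeof(float));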

Now, listen carefully: memory spaces are to be thought of with respect to SMs, not SPs. SPs execute single instructions; SMs manage SP instruction scheduling, memory spaces, access, and so on. The unit of execution is a warp; on both architectures this is 32 threads. But on the G200 there were only enough SPs to execute a quarter warp, and either half a warp or a whole one was scheduled (I don't remember which). On Fermi, two warps are scheduled and dispatched, and the 16 SPs can run a half warp (plus four special function units that compute sqrt, cos, sin and other graphics functions very fast, and lots of other stuff not present in GT200). So all that's changed is that the instruction hardware (the SPs) has caught up with the software concept (the warp). In other words: it's just faster. You can't program an SP directly, or not easily. More instruction units does not mean less memory in some weird way; it just means more parallelism and more latency hiding.

GT200 had *no* L2 cache. Where you got the idea that it did, I'm not sure. A GTX 480 has 768 kB, which is 48 kB of L1 on each SM times the number of SMs (the 16 intended SMs, not the 15 actual, since one SM is switched off). This is the *only* fast memory available *between* SMs that I know of on a device. I'm not sure, but I doubt the current ATI has a proper CPU-like L1/L2 caching system at all. That is what Fermi is all about. Random access to global memory is slow when all threads are considered in aggregate; to be at all fast, the global reads and writes must be coalesced. Look that up. Even then, it is still slow, as you seem to be aware (but a coalesced read is bigger on Fermi than on GT200).

With an L1 cache, you only take a hit for an arbitrary random access once, unless your cache fills and a line is popped off to L2 or global. Do you get that? That is the point of all this trouble and fuss. That's the biggest architectural goal and accomplishment: memory, all kinds of it, can be wicked quick without having to be structured in a certain way.
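
(For anyone who wants to see what "coalesced" means in code, a minimal sketch of the two access patterns; the kernel names are illustrative. On G200 the strided version shatters each half-warp's access into many transactions; on Fermi the L1 cache softens, but does not remove, the penalty:)

 __global__ void copy_coalesced(const float *in, float *out, int n) {
     // Adjacent threads touch adjacent addresses: the warp's loads and
     // stores merge into a few wide memory transactions.
     int i = blockIdx.x * blockDim.x + threadIdx.x;
     if (i < n) out[i] = in[i];
 }
 
 __global__ void copy_strided(const float *in, float *out, int n, int stride) {
     // Adjacent threads touch addresses 'stride' floats apart: accesses
     // fall in different memory segments, so the hardware issues many
     // more transactions for the same amount of data.
     int i = blockIdx.x * blockDim.x + threadIdx.x;
     if (i * stride < n) out[i * stride] = in[i * stride];
 }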

As for texture memory and texture cache: I don't know whether Fermi has a texture cache; maybe not, indeed. You can use the regular cache hierarchy (whose effective size is *not* a function of the number of instruction units on an SM, as I said). Fermi is a move away from the notion of separable texture memory. But for what it's worth: texture alignment on a G200 is 256 bytes; on Fermi it's 512. What does that mean? It means that a texture cache, or a cache used for texture access (which stores *addresses*, obviously), only needs to be half the size to get the same data in a coalesced read from texture memory in the same time (or less, since memory bandwidth is in general faster and coalesced reads are bigger). I'd ask you to figure out why that is, once you've read what coalescing is.

Finally, in general terms, Fermi SMs have 16 load/store units that can simultaneously request memory fetches or stores (seamlessly through the caching system) for half a warp in one cycle (the half warp it's running, or about to run), making access to *any* kind of memory much faster.

Read it. Program it. Figure it out. 140.109.189.244 researches ? talks : NULL

Duracell (talk) 22:37, 23 June 2010 (UTC)


Fermi

Referring to Fermi as "the inventor of the nuclear reactor" is like referring to Jesus as an early ophthalmologist who gave sight to the blind. Fermi made massive contributions to many areas of physics, not just nuclear: there are dozens of physical properties named after him, along with a whole class of particles (fermions), a chemical element, and the biggest particle accelerator lab on US soil. From "his" Wikipedia article: "Fermi is widely regarded as one of the leading scientists of the 20th century, highly accomplished in both theory and experiment". Anyway, I think that part of the description should be removed, and since he had dual citizenship it would be more correct to describe him as the "Italian-American" physicist. -- Stou (talk) 09:35, 18 April 2010 (UTC)

First section needs help!

I don't know how I feel about the grammar in the first section. In a few days I hope to get around to correcting these errors, but if anyone would like to help in the meantime, please do! Cozzycovers (talk) 15:34, 26 April 2010 (UTC)

Great big table

I don't think the new columns added to the great-big-table achieve very much: they repeat, for each card, things which are properties of the GF100 chip. I can see a point in adding a 'chip used' column once Nvidia produces cards in this series which are not just fused-down versions of the same chip, but five repeats of 'three billion transistors' or 'supports DX11' are uninteresting. Similarly, I'd put 'shader clock is twice main clock' rather than listing two numbers, one of which is twice the other. Fivemack (talk) 10:02, 4 June 2010 (UTC)

295

I notice that my edits concerning the notion that the "theoretical floating point performance at both single and double precision is only 13% greater than a 295" have been removed. I'm not a video gamer; I'm a scientist. I'm sitting at a box that has both a GTX 480 and a GTX 295 in it. Firstly, the 295 is two GPUs, not one. It's two chips. They can communicate with each other only through the bus, at about 5 GB/s. The global memory bandwidth on the 480 is about 170+ GB/s. So even for video games this is a very misleading statement, since that drop in global memory bandwidth will have a significant impact. When I used my 295 to look at a 3D game to see what it could do (I don't remember which one, cryo-something maybe), I saw little improvement when using dual GPU mode.
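
(The two-chips point is easy to verify from code; a GTX 295 enumerates as two separate CUDA devices. A minimal sketch:)

 #include <cstdio>
 #include <cuda_runtime.h>
 
 int main() {
     int count = 0;
     cudaGetDeviceCount(&count);  // a GTX 295 reports two devices here
     for (int d = 0; d < count; ++d) {
         cudaDeviceProp p;
         cudaGetDeviceProperties(&p, d);
         printf("device %d: %s, %d multiprocessors\n",
                d, p.name, p.multiProcessorCount);
     }
     return 0;
 }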

Secondly, I'm not certain how double precision performance being half that of the Fermi spec fairly translates into being only 13% faster than the previous GT200/Tesla architecture. The Fermi spec is 50% throughput for double compared to float, so the GTX 480 would seem to be at 25% (in my *actual* experience it is closer to 50%, but be that as it may). A GT200 has 10% of the instruction throughput at double as compared to single. On a 295 that's compounded in practice by the fact that a 295 is not one GPU; it is two. It shows up as two devices, and they communicate over the bus. Now, even if it were accepted that "theoretically" (and that must mean: independent threads that never access anything other than registers and shared memory, not texture or global) a 480 had only 13% better double throughput (for which the 295 requires double the SMs, being two GPUs), how is the calculation made that single precision performance is similarly only 13% improved?

I'd suggest it was done by induction. Because my code ran, *before* I optimised it for Fermi, about 6 times faster than on the 295. After optimisation, closer to ten times faster (at single precision, non-IEEE, equivalent to the GT200).

The scientific community is not disappointed with this device, at all. I would suggest that the gamer community realise that its architecture is fundamentally very different to other chips, and that NVIDIA took a big risk. If it is *programmed right* you will see enhancements in games, certainly relating to graphics, but also to physics simulation and much else. But current games will not take advantage of this.

And: the floating point performance of this device is *incredibly* fast. It's not 13% faster than a 295, "theoretically" or otherwise. That's why the chip runs almost hot enough to boil water. It's not doing it for its health.

The comments need clarification, expansion, *data* and citation, or I will remove them, replace them with my own cited remarks, and frankly pull rank as an expert. Because, to be honest, I suspect the negative assessment of this device is not NPOV. Or perhaps, rather, it comes from someone who does not know how to program a device like this for graphics or any other purpose, but imagines they have some understanding from reading tomshardware or whatever. Duracell (talk) 21:33, 23 June 2010 (UTC)

Update: the comments read: "For both products, double precision performance has been limited to a quarter of that of the "full" Fermi architecture,[24] which according to the official table of CUDA instruction throughputs shows that the GTX 400 series has the same throughput per SP per clock cycle in double precision floating point as the GTX 200 series,[25] hence the GTX 480 has a theoretical maximum floating point performance only 13% better than that of the GTX 295 in both double and single precision (equal to the ratio of their respective shader clock speeds)."

Now, 1/4 of the proposed double precision performance on a Fermi is 1/8 of the card's performance at single precision. The estimate is relative to the single precision performance of the card itself, not an absolute reference point. A 295 operates at about 1/10 of its own single precision performance. So, taking the ratio of the SP clocks, I can see how one might arrive at a figure like 15% improvement at double precision (although it's not that simple).
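
(Spelling out the clock-ratio arithmetic behind the quoted figure, assuming the commonly cited shader clocks of 1401 MHz for the GTX 480 and 1242 MHz for the GTX 295, and 480 SPs on each product (1 × 480 versus 2 × 240): <math>\tfrac{480 \times 1401}{480 \times 1242} \approx 1.13</math>, i.e. roughly the quoted 13%. Likewise, a cap of <math>\tfrac{1}{4}</math> on the full-Fermi double precision rate of <math>\tfrac{1}{2}</math> gives <math>\tfrac{1}{4} \times \tfrac{1}{2} = \tfrac{1}{8}</math> of the single precision rate, the same per-SP-per-clock double precision ratio the throughput table gives for compute capability 1.3.)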

But how do you extrapolate from that that the *single* precision throughput is only 13% greater? There's a fallacy here. The fallacy is that the measurement of double performance is *relative* to single performance. Another fallacy is equating clock rate with throughput when comparing two completely different architectures. Would you compare a SPARC and a P4 in this way? No. A device's clock rate is not an automatic indication of FLOPS. And you must clarify what is meant by "theoretical". I can see that if each FP unit is simply performing, say, a floating point addition using only registers, then this claim may be "theoretically" true (although it isn't at single precision; I tested it), but this is not a scenario that has any practical use, nor were the devices designed for it. In practice, the single precision performance is hugely faster on the Fermi cards, even the consumer cards like the GTX 480. I mean *hugely*, like 5x faster. And with IEEE numbers that don't go wrong when transferred to the host. And no one really uses double precision arithmetic anyway, certainly not in video games. It isn't necessary. If you need that, you can afford to buy Nvidia's more expensive Fermi solutions.

I must also look, with respect to the table here (and it's good that it's here), at how the GFLOPS are calculated. I think a variety of real world scenarios should be included, for example FLOPS under different memory access patterns, or under heavy use of the expf and sqrtf functions, and so on. A 295 will perform poorly under these conditions, which are what the devices are actually used for. It will not execute as many floating point operations per second as the newer card, which is optimized not for "theoretical FLOPS" but for FLOPS under *actual use cases*.

The fact is, our code, and the code of others, is just *tremendously* faster. More FLOPS are occurring, in theory and in practice. Duracell (talk) 12:33, 8 July 2010 (UTC)

I've made an edit to this section which makes it clearer that the (1/4) performance applies to the GTX line of GPUs when compared to the Tesla line, and added a reference to a statement by NVIDIA on this topic. Although I have no reason to doubt the statement made about the performance relative to the GTX 295 (it looks correct to me), I removed the direct comparison between the GTX 295 and the GTX 480. I felt that comparing an arbitrary pair of GPUs and extrapolating their performance from relative clock speeds was not necessary to make this point. I accept that was an arbitrary decision, and if others feel it improves the clarity of the article, I would not object to this comparison being added again. Matthewleslie (talk) 14:53, 12 July 2010 (UTC)

FX5800 series repeated

The common sentiment amongst many reviewers and Internet users is that the GTX 4xx series is a reiteration of the ill-famed original GeForce FX series, namely the 58xx cards. This information is probably worth adding to the article. Artem-S-Tashkinov (talk) 14:17, 9 July 2010 (UTC)

No, it's not worth adding to the article. 173.187.0.44 (talk) 18:30, 11 July 2010 (UTC)
Seconded. The comparison adds nothing to the article. ButOnMethItIs (talk) 18:39, 11 July 2010 (UTC)

OpenGL Problems section

One user continues to blank this section. It seems to be well sourced. Am I missing something?! Unflavoured (talk) 10:38, 21 October 2010 (UTC)

This section continues to be a problem. Why not discuss it here on the talk page?! Is there a problem with the sources?! Unflavoured (talk) 13:36, 11 November 2010 (UTC)

I agree: the information provided is well sourced, and something that I for one appreciate being provided, as it concerns a problem with the subject. This information belongs in the GeForce_400_Series section. --BadActor 09:06, 17 November 2010 (UTC) —Preceding unsigned comment added by Evilparty (talkcontribs)

There is this rumor around the net that Nvidia is intentionally crippling their GeForce cards in 3D applications via drivers, the reason being to make room for their more expensive Quadro cards (which are mostly just rebranded GeForces). People who contact Nvidia support don't get any concrete answers on this one. Seeing as December is shopping season, an "OpenGL Problems" section surely isn't in their best interest. I mean, why would anyone want to continually flush that section without any discussion whatsoever? What are the user's motives? Is it safe to assume there's something really weird going on here? (Undoum) 11:42, 14 December 2010 (ECT)

Most definitely an important issue that isn't getting any attention from Nvidia. Does this issue also affect non-game 3D applications running in Direct3D instead of OpenGL (say, 3ds Max)? 79.180.16.241 (talk) 11:31, 18 December 2010 (UTC)

See reference number 14 of the article. It links to a forum post where a user reports having problems with a GTX 470 and 3ds Max. I don't remember if 3ds Max perhaps also supports OpenGL, but I'd think it mainly uses Direct3D. (Undoum) 16:23, 20 December 2010 (ECT) — Preceding unsigned comment added by Undoum (talkcontribs)

There are no OpenGL problems; the same Quadro cards running the same GF100 GPUs do not have any issues. When will you retards learn that this was intentionally done by ATI trolls? Most of the useless threads have one reply; they were clearly planted by ATI trolls. —Preceding unsigned comment added by 124.13.113.224 (talk) 01:56, 7 January 2011 (UTC)

You are going to have to address the issue properly: "This is all untrue" is not as convincing as the sources cited. And since we have had quite a bit of to and fro with this particular section, perhaps it is best to discuss the problem here fully and obtain a consensus that satisfies both sides, instead of simply blanking the section over and over. Unflavoured (talk) 04:47, 7 January 2011 (UTC)
I will continue to remove fake trolling info and there's nothing you all can do since you all are retards. —Preceding unsigned comment added by 124.13.113.224 (talk) 09:13, 7 January 2011 (UTC)
Name calling will not help much. Someone may come along and ban you and revert your edits, and then your efforts will go to waste. On the other hand, if we discussed this properly, we could reach a consensus that satisfies both sides. Perhaps you can suggest a different wording, or add more info representing your viewpoint, so that it is balanced. But simply deleting the whole section, especially when it is sourced, will not do. Unflavoured (talk) 09:28, 7 January 2011 (UTC)
I'll remove fake trolling info because you retards can't understand simple English: there are no OpenGL problems, only fake trolling threads concocted by ATI trolls, so you deserve to be called retards. —Preceding unsigned comment added by 124.13.113.224 (talk) 11:42, 7 January 2011 (UTC)
Again, all you are offering is "No, there are no problems" when the cited sources show otherwise. Granted, some of these are forum posts, but some of these are also articles. You cannot simply dismiss the whole lot and offer us your opinion: "None of it is true." If you want to add something along the lines of: "This has no effect in most games" or provide a few sources that negate the given info, that would be much better than simply blanking the section. Unflavoured (talk) 13:06, 7 January 2011 (UTC)

Please do not blank the talk page. Unflavoured (talk) 03:50, 8 February 2011 (UTC)

Again, please do not blank the talk page. Unflavoured (talk) 09:49, 9 February 2011 (UTC)

Pre-launch statements

Why does the article even have this section? Most of the statements there are not noteworthy in and of themselves. —Preceding unsigned comment added by 99.188.150.12 (talk) 20:16, 26 February 2011 (UTC)

OpenGL Problems section, again...

One user continues to blank this section again and again. Please use the talk page to discuss ways to reach a consensus instead of having this "tug-of-war." Please realize that it is very easy to revert your edits, and that if you continue to blank the section instead of discussing how to improve it and reaching a consensus, your efforts will go to waste. Simply deleting it is not the answer. Unflavoured (talk) 07:32, 15 March 2011 (UTC)

Please do not blank the talk page, thank you. Unflavoured (talk) 03:55, 4 April 2011 (UTC)


That section seems to be relying entirely on forum posts and other self-published material. Unless there are better sources, it needs to go. Jeff Song (talk) 19:11, 21 July 2011 (UTC)

Which sources are articles that are not self-published? Please specify, as I don't see them. Jeff Song (talk) 18:45, 22 July 2011 (UTC)

This is not a BLP. Self-published sources can be found in relevant articles, especially ones about technology. For example, go to the Sandy Bridge page: you will find Anandtech, Bit-tech, Tomshardware and others being cited as sources. Nearly all articles focused on very in-depth/specific PC components will use self-published sources. Now, Fermi is smaller (from the mainstream POV) than Sandy Bridge, so it is much less likely to see (e-)ink in places like, say, CNN, never mind a small aspect such as the OpenGL problems, which affect very few users anyway. Just because a source is self-published does not mean that you should simply dismiss it, especially when there are multitudinous people who all agree that this problem does exist. Unflavoured (talk) 06:59, 25 July 2011 (UTC)

Self-published sources are not supposed to be used at all, not just on BLPs. If this problem were indeed notable, it would have been published in some reliable publication. That other pages might suffer from a similar problem is beside the point. Please find better sources. Jeff Song (talk) 00:07, 26 July 2011 (UTC)
I have taken this to the reliable sources noticeboard. I don't think Alien Babel Tech is reliable; it seems to be a website put up by a collection of self-described "enthusiasts", and the person who wrote the criticism in this article is not even known by his real name, only "BFG10K". But I'm willing to hear other opinions. Jeff Song (talk) 00:48, 26 July 2011 (UTC)
The website is notable enough to have its own Wikipedia article. You deleted the section again, with the summary "blog posts", even though these are not blog posts. Unflavoured (talk) 02:13, 26 July 2011 (UTC)
Being notable is not the same as being reliable, and that's even before getting into the fact that the article about the website was written almost entirely by one of the website's creators. The reference I deleted was an entry written by an anonymous "BFG10K", as a "post" published on blogging software (WordPress), with an RSS feed, user comments and post archives - it is very clearly a blog. Jeff Song (talk) 16:59, 26 July 2011 (UTC)
I've removed the newsgroup and forum postings as they are clearly not reliable sources. ButOnMethItIs (talk) 10:00, 25 July 2011 (UTC)

Unflavoured needs to be permanently banned from Wikipedia for constantly vandalising this page. — Preceding unsigned comment added by 60.51.59.4 (talk) 20:59, 27 July 2011 (UTC)

Problematic user

There seems to be a user who continues to blank a section of the article, and to blank the talk page section discussing that section. After so many reverts (I have lost count of how many), I believe that this may now be considered vandalism. Can this page be semi-protected?! Unflavoured (talk) 04:17, 12 April 2011 (UTC)

External links

Somebody needs to fix or remove the first external link:

Access Denied You don't have permission to access "http://www.nvidia.com/object/gf100.html" on this server.

Reference #18.a2c53342.1368027796.3e31402

Regarding Direct3D 12.0 support

Please see http://techreport.com/news/26210/directx-12-will-also-add-new-features-for-next-gen-gpus before you state that the GeForce 400, 500, 600, and 700 series fully support Direct3D 12 or DirectX 12. They do not support the full Direct3D 12.0 standard, because that standard will support features that had not been invented when those GPU series were built. Nvidia's blog was technically true but quite misleading. While those GPUs will run Direct3D 12, they will run that API at a reduced feature level, according to the article I cited. Jesse Viviano (talk) 03:02, 10 May 2014 (UTC)
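
(A concrete way to see the feature-level distinction with today's public Direct3D 12 API: a device only has to be created at feature level 11_0, and the maximum level the adapter actually supports can then be queried. A hedged C++ sketch; the helper name is made up, and it must be linked against d3d12.lib:)

 #include <windows.h>
 #include <d3d12.h>
 #include <wrl/client.h>
 using Microsoft::WRL::ComPtr;
 
 bool QueryMaxFeatureLevel(IUnknown *adapter, D3D_FEATURE_LEVEL &outLevel) {
     // D3D12 only requires feature level 11_0 to create a device, which is
     // how pre-12_0 hardware can "run Direct3D 12" at a reduced level.
     ComPtr<ID3D12Device> device;
     if (FAILED(D3D12CreateDevice(adapter, D3D_FEATURE_LEVEL_11_0,
                                  IID_PPV_ARGS(&device))))
         return false;  // no D3D12 driver for this adapter at all
 
     const D3D_FEATURE_LEVEL levels[] = { D3D_FEATURE_LEVEL_12_1,
                                          D3D_FEATURE_LEVEL_12_0,
                                          D3D_FEATURE_LEVEL_11_1,
                                          D3D_FEATURE_LEVEL_11_0 };
     D3D12_FEATURE_DATA_FEATURE_LEVELS data = {};
     data.NumFeatureLevels = 4;
     data.pFeatureLevelsRequested = levels;
     if (FAILED(device->CheckFeatureSupport(D3D12_FEATURE_FEATURE_LEVELS,
                                            &data, sizeof(data))))
         return false;
     outLevel = data.MaxSupportedFeatureLevel;  // e.g. 11_0 on these GPUs
     return true;
 }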

GTX 460

The GTX 460 has two sets of data and two prices in the table, and no explanation at all can be found in the article.

There is a "GTX 460v2" (and a "460 SE" and a "460 OEM;" I hate these idiots with their marketing schemes) which is NOT the same as this "GTX 460v1.1."98.232.226.203 (talk) 18:55, 8 February 2016 (UTC)[reply]

External links modified

Hello fellow Wikipedians,

I have just modified one external link on GeForce 400 series. Please take a moment to review my edit. If you have any questions, or need the bot to ignore the links, or the page altogether, please visit this simple FaQ for additional information. I made the following changes:

When you have finished reviewing my changes, you may follow the instructions on the template below to fix any issues with the URLs.

This message was posted before February 2018. After February 2018, "External links modified" talk page sections are no longer generated or monitored by InternetArchiveBot. No special action is required regarding these talk page notices, other than regular verification using the archive tool instructions below. Editors have permission to delete these "External links modified" talk page sections if they want to de-clutter talk pages, but see the RfC before doing mass systematic removals. This message is updated dynamically through the template {{source check}} (last update: 18 January 2022).

  • If you have discovered URLs which were erroneously considered dead by the bot, you can report them with this tool.
  • If you found an error with any archives or the URLs themselves, you can fix them with this tool.

Cheers.—InternetArchiveBot (Report bug) 18:50, 8 January 2017 (UTC)

A Commons file used on this page or its Wikidata item has been nominated for deletion

The following Wikimedia Commons file used on this page or its Wikidata item has been nominated for deletion:

Participate in the deletion discussion at the nomination page. —Community Tech bot (talk) 15:08, 16 August 2021 (UTC)