The NVIDIA GeForce GTX 980 Review: Maxwell Mark 2
by Ryan Smith on September 18, 2014 10:30 PM EST
At the risk of sounding like a broken record, the biggest story in the GPU industry over the last year has been about what isn’t happening as opposed to what is. What isn’t happening is this: after nearly 3 years of TSMC’s 28nm process serving as the leading edge manufacturing node for GPUs, it isn’t being replaced any time soon. As of this fall TSMC has 20nm up and running, but only for SoC-class devices such as Qualcomm Snapdragons and Apple’s A8. Consequently, if you’re making something big and powerful like a GPU, all signs point to an unprecedented 4th year of 28nm being the leading node.
We start off with this tidbit because it’s important to understand the manufacturing situation in order to frame everything that follows. In years past TSMC would produce a new node every 2 years, and farther back yet there would even be half-nodes in between. This meant that every 1-2 years GPU manufacturers could take advantage of Moore’s Law and pack more hardware into a chip of the same size, rapidly increasing their performance. Given the embarrassingly parallel nature of graphics rendering, it’s this cadence in manufacturing improvements that has driven so much of the advancement of GPUs for so long.
With 28nm however that 2 year cadence has stalled, driving GPU manufacturers into an interesting and truly unprecedented corner. They can’t merely rest on their laurels for the 4 years between 28nm and the next node – their continued existence depends on having new products every cycle – so they must find new ways to move forward. They must iterate on their designs and technology so that now, more than ever, it’s their designs driving progress and not improvements in manufacturing technology.
What this means is that consumers and technology enthusiasts alike are venturing into uncharted territory. With no real precedent to draw from, we can only guess what AMD and NVIDIA will do to maintain the pace of innovation in the face of manufacturing stagnation. This makes it a frustrating time – who doesn’t miss GPUs doubling in performance every 2 years? – but also an interesting one. How will AMD and NVIDIA solve the problem they face and bring newer, better products to the market? We don’t know, and not knowing the answer leaves us open to be surprised.
From NVIDIA, the answer to that question has come in two parts this year. NVIDIA’s Kepler architecture, first introduced in 2012, has just about reached its retirement age. NVIDIA continues to develop new architectures on roughly a 2 year cycle, so new manufacturing process or not, they have something ready to go. And that something is Maxwell.
GTX 750 Ti: First Generation Maxwell
At the start of this year we saw the first half of the Maxwell architecture in the form of the GeForce GTX 750 and GTX 750 Ti. Based on the first generation Maxwell GM107 GPU, NVIDIA did something we can still hardly believe and pulled off a trifecta of improvements over Kepler: GTX 750 Ti was significantly faster than its predecessor, it was denser than its predecessor (though larger overall), and perhaps most importantly it consumed less power than its predecessor. In GM107 NVIDIA was able to significantly improve performance and reduce power consumption at the same time, all on the same 28nm manufacturing node we’ve come to know since 2012. For NVIDIA this was a major accomplishment, and to this day competitor AMD doesn’t have a real answer to GM107’s energy efficiency.
However GM107 was only the start of the story. In a deviation from their typical strategy of launching a high-end GPU first – either a 100/110 or 104 GPU – NVIDIA told us up front that they were launching at the low end first because that made the most sense for them, and that they would follow up on GM107 later this year with what at the time was being called “second generation Maxwell”. Now, 7 months later and true to their word, NVIDIA is back in the spotlight with the first of the second generation Maxwell GPUs, GM204.
GM204 itself follows up on GM107, bringing everything we loved about the first Maxwell GPU and then some. “Second generation” in this case is not just a description of the second wave of Maxwell GPUs, but in fact a technically accurate description of the Maxwell 2 architecture. As we’ll see in our deep dive into the architecture, Maxwell 2 has learned some new tricks compared to Maxwell 1 that make it an even more potent processor, and further extends the functionality of the family.
NVIDIA GPU Specification Comparison

| | GTX 980 | GTX 970 (Corrected) | GTX 780 Ti | GTX 770 |
|---|---|---|---|---|
| Memory Clock | 7GHz GDDR5 | 7GHz GDDR5 | 7GHz GDDR5 | 7GHz GDDR5 |
| Memory Bus Width | 256-bit | 256-bit | 384-bit | 256-bit |
| FP64 | 1/32 FP32 | 1/32 FP32 | 1/24 FP32 | 1/24 FP32 |
| Manufacturing Process | TSMC 28nm | TSMC 28nm | TSMC 28nm | TSMC 28nm |
Today’s launch will see GM204 placed into two video cards, the GeForce GTX 980 and GeForce GTX 970. We’ll dive into the specs of each in a bit, but from an NVIDIA product standpoint these two parts are the immediate successors to the GTX 780/780Ti and GTX 770 respectively. As was the case with GTX 780 and GTX 680 before it, these latest parts are designed and positioned to offer a respectable but by no means massive performance gain over the GTX 700 series. NVIDIA’s target for the upgrade market continues to be owners of cards 2-3 years old – so the GTX 600 and GTX 500 series – where the accumulation of performance and feature enhancements over the years adds up to the kind of 70%+ performance improvement most buyers are looking for.
At the very high end the GTX 980 will be unrivaled. It is roughly 10% faster than GTX 780 Ti and consumes almost 1/3rd less power in the process. This is enough to keep the single-GPU performance crown solidly in NVIDIA’s hands, maintaining a 10-20% lead over AMD’s flagship Radeon R9 290X. Meanwhile GTX 970 should fare similarly well; however, as our sample is having compatibility issues that we haven’t been able to resolve in time, that is a discussion we will need to save for another day.
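As a quick sanity check on those numbers – a sketch, assuming NVIDIA's official TDPs of 165W for the GTX 980 and 250W for the GTX 780 Ti, along with the roughly 10% average performance delta cited above – the implied efficiency gain works out to about two-thirds more performance per watt:

```python
# Rough performance-per-watt comparison using rated TDPs
# (165W for GTX 980, 250W for GTX 780 Ti) and the ~10%
# average performance advantage cited in this review.
def perf_per_watt_gain(perf_ratio, old_tdp, new_tdp):
    """Relative perf/W of the new card versus the old one."""
    return perf_ratio * (old_tdp / new_tdp)

gain = perf_per_watt_gain(perf_ratio=1.10, old_tdp=250, new_tdp=165)
print(f"GTX 980 perf/W vs GTX 780 Ti: {gain:.2f}x")  # ~1.67x
```

The exact figure will vary by workload, but it illustrates why a modest absolute performance lead still represents a large architectural efficiency jump on the same 28nm node.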
NVIDIA will be placing the MSRP on the GTX 980 at $549 and the GTX 970 at $329. Depending on what you’re using as a baseline, this is either a $50 increase over the last price of the GTX 780 and launch price of the GTX 680, or a roughly $100 price cut compared to the launch prices of the GTX 780 and GTX 780 Ti. Meanwhile GTX 970 is effectively a drop-in replacement for GTX 770, launching at the price that GTX 770 has held for so long. We should see both GPUs at the usual places, though at present neither Newegg nor Amazon is showing any inventory yet – likely thanks to the odd time of launch as this coincides with NVIDIA's Game24 event – but you can check on GTX 980 and GTX 970 tomorrow.
Fall 2014 GPU Pricing Comparison

| AMD | Price | NVIDIA |
|---|---|---|
| Radeon R9 295X2 | $1000 | |
| | $550 | GeForce GTX 980 |
| Radeon R9 290X | $500 | |
| Radeon R9 290 | $400 | |
| | $330 | GeForce GTX 970 |
| Radeon R9 280X | $280 | |
| Radeon R9 285 | $250 | |
| Radeon R9 280 | $220 | GeForce GTX 760 |
Finally, on a housekeeping note, today’s article will be part of a series of articles on the GTX 980 series. As NVIDIA has only given us about half a week to look at GTX 980, we are splitting up our coverage to work within the time constraints. Today we will be covering GTX 980 and the Maxwell 2 architecture, including its construction, features, and the resulting GM204 GPU. Next week we will be looking at GTX 980 SLI performance, PCIe bandwidth, and taking a deeper look at the image quality aspects of NVIDIA’s newest anti-aliasing technologies, Dynamic Super Resolution and Multi-Frame Sampled Anti-Aliasing (MFAA). Finally, we will also be taking a look at the GTX 970 next week once we have a compatible sample. So stay tuned for the rest of our coverage on the Maxwell 2 family.
Kutark - Sunday, September 21, 2014
I'd hold on to it. That's still a damn fine card. Honestly you could probably find a used one on eBay for a decent price and SLI it up.
IMO though I'd splurge for a 970 and call it a day. I've got dual 760s right now, first time I've done SLI in probably 10 years. And honestly, the headaches just aren't worth it. Yeah, most games work, but some games will have weird graphical issues (BF4 near release was a big one, DOTA 2 doesn't seem to like it), others don't utilize it well, etc. I kind of wish I'd just stuck with the single 760. Either way, my 2p.
SkyBill40 - Wednesday, September 24, 2014
@ Kutark:
Yeah, I tried to buy a nice card at that time despite wanting something higher than a 660 Ti. But, as my wallet was the one doing the dictating, it's what I ended up with and I've been very happy. My only concern with a used one is just that: it's USED. Electronics are one of those "no go" zones for me when it comes to buying second hand, since you have no idea about the circumstances surrounding the device, and seeing as it's a video card and not a Blu-ray player or something, I'd like to know how long it's run, if it's been OC'd or not, and the like. I'd be fine with buying another one new, but not for the prices I'm seeing that are right in line with a 970. That would be dumb.
In the end, I'll probably wait it out a bit more and decide. I'm good for now and will probably buy a new 144Hz monitor instead.
Kutark - Sunday, September 21, 2014
Psshhhhh.... I still have my 3dfx Voodoo SLI card. Granted it's just sitting on my desk, but still!!!
In all seriousness though, my roommate, who is NOT a gamer, is still using an old 7800 GT card I had laying around because the video card in his ancient computer decided to go out and he didn't feel like building a new one. Can't say I blame him; Core 2 Quads are juuust fine for browsing the web and such.
Kutark - Sunday, September 21, 2014
Voodoo 2, I meant; realized I didn't type the 2.
justniz - Tuesday, December 9, 2014
>> the power bills are so ridiculous for the 8800 GTX!
Sorry but this is ridiculous. Do the math.
Best info I can find is that your card is consuming 230W.
Assuming you're paying 15¢/kWh, even gaming for 12 hours a day every day for a whole month will cost you $12.59. Doing the same with a GTX 980 (165W) would cost you $9.03/month.
So you'd be paying maybe $580 to save $3.56 a month.
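The commenter's arithmetic checks out. A quick sketch of the same math, assuming their figures (15¢/kWh, 12 hours/day, and an average month of 365/12 days):

```python
# Monthly electricity cost for a GPU: energy in kWh times the rate.
# Assumes 12 hours of gaming per day over an average-length month.
def monthly_cost(watts, hours_per_day=12, rate_per_kwh=0.15, days=365 / 12):
    kwh = watts * hours_per_day * days / 1000  # watt-hours -> kWh
    return kwh * rate_per_kwh

old = monthly_cost(230)   # older ~230W card
new = monthly_cost(165)   # GTX 980, 165W TDP
print(f"${old:.2f} vs ${new:.2f}, saving ${old - new:.2f}/month")
# → $12.59 vs $9.03, saving $3.56/month
```

At those rates the power savings alone would take over a decade to pay back the price of the new card, which is the commenter's point.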
LaughingTarget - Friday, September 19, 2014
There is a major difference between market capitalization and available capital for investment. Market Cap is just a rote multiplication of the number of shares outstanding by the current share price. None of this is available for company use and is only an indirect measurement of how well a company is performing. Nvidia has $1.5 billion in cash and $2.5 billion in available treasury stock. Attempting to match Intel's process would put a significant dent in that with little indication it would justify the investment. Nvidia already took on a considerable chunk of debt going into this year as well, which would mean that future offerings would likely go for a higher cost of debt, making such an investment even harder to justify.
While Nvidia is blowing out AMD 3:1 on R&D and capacity, Intel is blowing both of them away, combined, by a wide margin. Intel is dropping $10 billion a year on R&D, which is a full $3 billion beyond the entire asset base of Nvidia. It's just not possible to close the gap right now.
Silma - Saturday, September 20, 2014
I don't think you realize how many billions of dollars you need to spend to open a 14nm factory, not even counting R&D and yearly costs.
It's humongous; there's a reason why there are so few foundries in the world.
sp33d3r - Saturday, September 20, 2014
Well, if the NVIDIA/AMD CEOs are blind enough that they cannot see it coming, then Intel is gonna manufacture their next integrated graphics on a 10 or 8nm chip, and though immature it will be tough competition for them in terms of power and efficiency and even weight.
Remember, currently PCs include Intel's integrated graphics as a must, and people add third-party graphics only 'cause Intel's isn't good enough, literally adding the weight of two graphics solutions (Intel's and the third party's) to the product. It would be far more convenient if integrated graphics outperformed or could challenge third-party GPUs; we would just throw away NVIDIA, and guess what, they wouldn't just lose their position, they'd be completely wiped out.
Besides, Intel's integrated graphics are getting more mature with every launch, and not just in die size; just compare the 4000 series with the 5000 series. It won't be long before they catch up.
wiyosaya - Friday, September 26, 2014
I have to agree that it is partly not about the verification cost breaking the bank. However, I think the more likely reason is that since the current node works, they will try to wring every penny out of it. Look at the prices for the Titan Z. If that is not an attempt to fleece the "gotta have it" buyer, I don't know what is.
Ushio01 - Thursday, September 18, 2014
Wouldn't paying to use the 22nm fabs be a better idea, as they're about to become underused and all the teething troubles have been fixed?