The NVIDIA GeForce GTX 970 Review: Featuring EVGA
by Ryan Smith on September 26, 2014 10:00 AM EST
Last week we took a look at NVIDIA’s newest consumer flagship video card, the GeForce GTX 980. Based on the company’s new GM204 GPU, GTX 980 further cemented NVIDIA’s ownership of the performance crown with a combination of performance improvements, new features, and power consumption reductions. Combined with a lower price than the now-dethroned GTX 780 Ti, GTX 980 is an impressive flagship with a mix of attributes that NVIDIA hopes will entice existing 600 and 500 series owners to upgrade.
Of course, even though GTX 980 was cheaper than the outgoing GTX 780 Ti, it is still a flagship card and at $549 is priced accordingly. But as in every GeForce product lineup there is a GeForce x70 right behind it, and for GTX 980 its lower-tier, lower-priced counterpart is the GeForce GTX 970. Based on the same GM204 but configured with fewer active SMMs, a slightly lower clock speed, and a lower TDP, GTX 970 fills the gap by providing a lower-performance but much lower-priced alternative to the flagship GTX 980. In fact, at $329 it’s some 40% cheaper than GTX 980, one of the largest discounts for a second-tier GeForce card in recent memory.
For this reason GTX 970 is an interesting card on its own, if not more interesting overall than its bigger sibling. The performance decrease from the reduced clock speeds and fewer SMMs is going to be tangible, but then so is a $220 savings to the pocketbook. With GTX 980 already topping our charts, if GTX 970 can stay relatively close then it would be a very tantalizing value proposition for enthusiast gamers who want to buy in to GM204 at a lower price.
|NVIDIA GPU Specification Comparison|
| |GTX 980|GTX 970 (Corrected)|GTX 780|GTX 770|
|---|---|---|---|---|
|Memory Clock|7GHz GDDR5|7GHz GDDR5|6GHz GDDR5|7GHz GDDR5|
|Memory Bus Width|256-bit|256-bit|384-bit|256-bit|
|FP64|1/32 FP32|1/32 FP32|1/24 FP32|1/24 FP32|
|Manufacturing Process|TSMC 28nm|TSMC 28nm|TSMC 28nm|TSMC 28nm|
Compared to GTX 980 and its full-fledged GM204 GPU, GTX 970 takes a harvested GM204 that drops 3 of the SMMs, reducing its final count to 13 SMMs or 1664 CUDA cores. It also sheds part of a ROP/L2 cache partition while retaining the 256-bit memory bus of its bigger sibling, bringing the ROP count down to 56 ROPs and the L2 cache down to 1.75MB, a configuration option new to Maxwell.
Along with the reduction in SMMs, clock speed is also reduced slightly for GTX 970. It ships at a base clock speed of 1050MHz and a boost clock speed of 1178MHz. This puts the theoretical performance difference between it and the GTX 980 at about 85% of the ROP performance or about 79% of the shading/texturing/geometry performance. Given that the GTX 970 is unlikely to be ROP bound with so many ROPs, real-world performance should track the 79% figure much more closely, meaning there is still potentially a significant performance delta between the GTX 980 and GTX 970.
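Those percentages can be reproduced with simple unit-times-clock arithmetic. A minimal sketch (Python), taking the GTX 980's published 2048 CUDA cores, 64 ROPs, and 1216MHz boost clock as givens:

```python
# Rough theoretical throughput ratios between GTX 970 and GTX 980,
# comparing functional units multiplied by boost clock. GTX 980 figures
# (2048 CUDA cores, 64 ROPs, 1216MHz boost) are its published specs.

gtx980 = {"cores": 2048, "rops": 64, "boost_mhz": 1216}
gtx970 = {"cores": 1664, "rops": 56, "boost_mhz": 1178}

def ratio(card_a, card_b, unit):
    """Theoretical throughput of card A relative to card B for one unit type."""
    return (card_a[unit] * card_a["boost_mhz"]) / (card_b[unit] * card_b["boost_mhz"])

shader_ratio = ratio(gtx970, gtx980, "cores")  # shading/texturing/geometry
rop_ratio = ratio(gtx970, gtx980, "rops")      # pixel fill

print(f"Shader throughput: {shader_ratio:.0%} of GTX 980")  # 79%
print(f"ROP throughput:    {rop_ratio:.0%} of GTX 980")     # 85%
```

The shader figure also stands in for texturing and geometry throughput, since the texture units and geometry engines live inside the SMMs and are disabled along with them.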
Elsewhere the memory configuration is unchanged from GTX 980. This means we’re looking at 4GB of GDDR5 clocked at 7GHz, all on a 256-bit bus. Compared to the GTX 770 that the GTX 970 replaces, this is a welcome and much needed upgrade from what has been the 2GB VRAM standard that NVIDIA has held to for the last two and a half years.
GTX 970’s TDP meanwhile is lower than GTX 980’s thanks to the reduced clock speeds and SMM count. The stock GTX 970 will be shipping with a TDP of just 145W, some 80W less than GTX 770’s official TDP of 225W. NVIDIA’s official designs still include two 6-pin PCIe power sockets despite the fact that the card should technically be able to operate on just one; it is not clear at this time whether this is for overclocking purposes (150W would leave almost no power headroom) or for safety purposes, since NVIDIA would be so close to exceeding PCIe specifications.
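The headroom math behind that single-connector question is straightforward. A quick sketch, assuming the standard PCIe power budget of up to 75W from the x16 slot and up to 75W per 6-pin connector:

```python
# Why one 6-pin connector would be tight for a 145W card: per the PCIe
# spec, the x16 slot supplies up to 75W and each 6-pin connector up to 75W.
SLOT_W = 75
SIX_PIN_W = 75
GTX_970_TDP_W = 145

one_connector = SLOT_W + SIX_PIN_W       # 150W total budget
two_connectors = SLOT_W + 2 * SIX_PIN_W  # 225W total budget

print(f"Headroom with one 6-pin:  {one_connector - GTX_970_TDP_W}W")   # 5W
print(f"Headroom with two 6-pin:  {two_connectors - GTX_970_TDP_W}W")  # 80W
```

With only 5W of margin on a single connector, any meaningful overclock would push the card past its power budget, which is consistent with NVIDIA specifying two connectors.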
Like the GTX 980, NVIDIA’s target market for the GTX 970 will be owners of GTX 600/500/400 series cards and their AMD equivalents. GTX 970 is faster than GTX 770 but not immensely so, and as a result NVIDIA does not expect GTX 770 owners to want to upgrade so soon. Meanwhile GTX 670 owners and beyond are looking at 65%+ improved performance for cards at the same tier, while power consumption will remain roughly consistent from the GTX 670’s 140W GPU Boost 1.0-based power target.
Furthermore, as we mentioned in our GTX 980 review, GTX 970 has been a pure virtual (no reference card) launch, which means all of NVIDIA’s partners are launching their custom cards right out of the gate. A lot of these have been recycled or otherwise only slightly modified GTX 700/600 series designs, owing to the fact that GM204’s memory bus has been held at 256-bits and its power requirements are so low.
Meanwhile, since NVIDIA did not produce reference cards, GTX 970 reviewers are being sampled directly by NVIDIA’s partners. For our review today we will be looking at EVGA’s GeForce GTX 970 FTW ACX 2.0, the company’s highest performance GTX 970 card. Accordingly, we will be taking a look at both its out-of-the-box performance and its performance when reconfigured as a stock card, to showcase both performance profiles.
With the discontinuation of the GTX 780 series and GTX 770, competition for the GTX 970 will be split between the GTX 760 and GTX 980 on the NVIDIA side. On the AMD side things will be even more spread out; AMD’s closest cards from a pricing perspective are the R9 280X and R9 290 priced below and above the $329 GTX 970 respectively, but as we’ll see even R9 290X is not necessarily out of the picture thanks to GM204’s strong performance.
Surprisingly, even a week after the launch of the GTX 900 series, AMD has yet to officially respond with any further price cuts or additional incentives beyond their existing Never Settle Forever bundle. In the meantime some retailers have been running their own promotions; our pricing benchmark retailer Newegg has been offering 15% discounts on some of their PowerColor R9 290 series cards, while some other cards qualify for a $40 Newegg gift card (which cannot be applied retroactively to the purchase). Since the bulk of these cards don’t qualify for the price discount, we’re holding our reference prices at $500 for the R9 290X and $400 for the R9 290; however, the very cheapest of these PowerColor cards with the discount in play can go for as little as $450 and $340 respectively.
Meanwhile GTX 900 series sales have been brisk, and while the cards remain in supply, not all models are regularly available. At the very least everything from reference-clocked cards to significantly overclocked cards is available at Newegg, so there is still a range of options, though they are coincidentally all EVGA cards as of publication time.
|Fall 2014 GPU Pricing Comparison|
|AMD|Price|NVIDIA|
|---|---|---|
|Radeon R9 295X2|$1000| |
| |$550|GeForce GTX 980|
|Radeon R9 290X|$500| |
|Radeon R9 290|$400| |
| |$330|GeForce GTX 970|
|Radeon R9 280X|$280| |
|Radeon R9 285|$250| |
|Radeon R9 280|$220|GeForce GTX 760|
Comments
Casecutter - Friday, September 26, 2014
I’m confident that if we had two of what were the normal "AIB OC customs" of both a 970 and a 290, things between them might not appear so skewed. First, as much as folks want this level of card to get them into 4K, they’re not there... So it really just boils down to seeing what similar generic OC customs offer and letting them "spar back and forth" @2560x1440 depending on the titles.
As to power, I wish these reviews would halt the inadequate testing like it’s still 2004! Power (for the complete PC) should be benchmarked for each game, recording in real time the oscillation of power in milliseconds, then outputting the mean over the test duration. As boost frequency fluctuates differently in every title, the mean for each game is different. Then each mean can be added and the average across the number of titles would offer the most straightforward evaluation of power while gaming. Also, as most folks today "sleep" their computers (and not many idle for more than 10-20 min), I believe the best calculation for power is what a graphics card draws while doing nothing, which is something like 80% of each month. I’d like to see how AMD ZeroCore impacts a machine’s power usage over a month’s time, versus the savings only during gaming. Consider gaming 3 hours a day, which constitutes 12.5% of a month: does the 25% difference in power while gaming beat the 5W saved with ZeroCore during the other 80% of the month? Saving energy while using and enjoying something is fine, but wasting watts while doing nothing is incomprehensible.
Impulses - Sunday, September 28, 2014
Ehh, I recently bought 2x custom 290, but I've no doubt that even with a decent OC the 970 can at the very least still match it in most games... I don't regret the 290s, but I also only paid $350/360 for my WF Gigabyte cards; had I paid closer to $400 I'd be kicking myself right about now.
Iketh - Monday, September 29, 2014
most PCs default to sleeping during long idles and most people shut it off
dragonsqrrl - Friday, September 26, 2014
Maxwell truly is an impressive architecture, I just wish Nvidia would stop further gimping double precision performance relative to single precision with each successive generation of their consumer cards. GF100/110 were capped at 1/8, GK110 was capped at 1/24, and now GM204 (and likely GM210) is capped at 1/32... What's still yet to be seen is how they're capping the performance on GM204, whether it's a hardware limitation like GK104, or a clock speed limitation in firmware like GK110.
Nvidia: You peasants want any sort of reasonable upgrade in FP64 performance? Pay up.
D. Lister - Friday, September 26, 2014
"Company X: You peasants want any sort of reasonable upgrade in product Y? Pay up."
Well, that's capitalism for ya... :p. Seriously though, if less DP ability means a cheaper GPU then as a gamer I'm all for it. If a dozen niche DP hobbyists get screwed over, and a thousand gamers get a better deal on a gaming card then why not? Remember what all that bit mining nonsense did to the North American prices of the Radeons?
D. Lister - Friday, September 26, 2014
Woah, it seems they do tags differently here at AT :(. Sorry if the above message appears improperly formatted.
Mr Perfect - Friday, September 26, 2014
It's not you, the italic tag throws in a couple extra line breaks. Bold might too, I seem to remember that mangling a post of mine in the past.
D. Lister - Sunday, September 28, 2014
Oh, okay, thanks for the explanation :).
wetwareinterface - Saturday, September 27, 2014
this^
You seem to be under the illusion that Nvidia intended to keep shooting themselves in the foot forever by releasing their high-end GPGPU chip under a gaming designation and relying on the driver (which is easy to hack) to keep people from buying a gamer card for workstation loads. Face it, they wised up and charge extra for FP64 and the higher RAM count now. No more cheap workstation cards. The benefit, as already described, is cheaper gaming cards that are designed to be more efficient at gaming and leave the workstation loads to the workstation cards.
dragonsqrrl - Saturday, September 27, 2014
This is only partially true, and I think D. Lister basically suggested the same thing so I'll just make a single response for both. The argument for price and efficiency would really only be the case for a GK104 type scenario, where on-die FP64 performance is physically limited to 1/24 FP32 due to there being 1/24 as many FP64 CUDA cores. But what about GK110? There is no reason to limit it to 1/24 SP other than segmentation. There's pretty much no efficiency or price argument there, and we see proof of that in the Titan, no less efficient at gaming and really no more expensive to manufacture outside the additional memory and maybe some additional validation. In other words there's really no justification (or at least certainly not the justification you guys are suggesting) for why the GTX780 Ti couldn't have had 1/12 SP with 3GB GDDR5 at the same $700 MSRP, for instance. Of course other than further (and in my opinion unreasonable) segmentation.
This is why I was wondering how they're capping performance in GM204.