The NVIDIA GeForce RTX 2070 Founders Edition Review: Mid-Range Turing, High-End Price
by Nate Oh on October 16, 2018 9:00 AM EST
Meet The GeForce RTX 2070 Founders Edition Card
Touching quickly on the card itself, there's little we haven't already seen with the RTX 2080 Ti and 2080 Founders Editions. The biggest change is, of course, the new open air cooler design. Along with the Founders Edition specification changes of +10W TDP and +90 MHz boost clockspeed, the cards might be considered 'reference' in the sense that they are first-party video cards sold direct by NVIDIA, but strictly speaking they are not, as they no longer carry reference specifications.
Wrapped in the flattened industrial design introduced with the other RTX cards, the RTX 2070 Founders Edition looks essentially the same, save for a few exceptions. The single 8-pin power connector sits at the front of the card, while the NVLink SLI connectors are absent, as the RTX 2070 does not support SLI. Internally, the dual 13-blade fans accompany a vapor chamber, while a 6-phase system provides power for the 185W TDP RTX 2070 Founders Edition.
So while the single 8-pin configuration, suitable for up to 225W total draw, has carried over from the GTX 1070, the TDP has not. The RTX 2070 Founders Edition comes in at 185W, with the reference specification at 175W, compared to the 150W GTX 1070 and 145W GTX 970, following the trend of the 2080 Ti and 2080 pushing up the watts.
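For those keeping count, that 225W ceiling is straightforward arithmetic from the standard PCIe power budget, and the card sits comfortably under it:

\[ \underbrace{75~\text{W}}_{\text{PCIe slot}} + \underbrace{150~\text{W}}_{\text{8-pin connector}} = 225~\text{W} \;>\; 185~\text{W (TDP)} \]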
As for I/O, there is one difference between the 2070 and its older siblings: the RTX 2070 Founders Edition drops the isolated DisplayPort in favor of a DVI port, matching the GTX 1070's outputs. This is in addition to DisplayPort 1.4 and DSC support, the latter of which is part of the DP1.4 spec, as well as the VR-centric USB-C VirtualLink port, which also carries an associated 30W of power delivery not included in the overall TDP. While the past few years have seen DVI excised from top-end cards, keeping it is more a matter of practicality for mid-range cards (inasmuch as $500 is a midrange price), which are often paired with budget DVI monitors, particularly as a drop-in upgrade for an aging video card.
As mentioned in the RTX 2080 Ti and 2080 launch article, something to note is the potential impact of this reference design change on OEM sales. The RTX 2070 also arrives as an open air design, and so it can no longer guarantee self-cooling independent of chassis airflow. Combined with the price and lower-volume nature of these parts, this makes the RTX reference cards less suitable for large OEMs.
PeachNCream - Tuesday, October 16, 2018
You've got a good graphics card in the 970 that should get you at least a couple more years of reasonable performance. If I were in your position, I wouldn't be in the market for a new GPU. However, I do sympathize with you when it comes to the cost it takes to be able to play these days, and I agree that a shift to some form of console is a sensible alternative. PC hardware pricing has been on the rise in the last few years, and it stings when you've come to expect the performance improvements alongside cost reductions that we've been enjoying for the majority of the years since microcomputers found their way into homes in the 1980s.
I think what's driving that is a diminishing market. Economies of scale don't work when there's no further growth for what's become a mature industry (PCs in general) and a declining segment (desktop PCs specifically) due to the slow shift of computing tasks to mobile phones. I don't see anywhere for desktop component prices to go but further up as we lean into the physical limits of the materials we have available while also contending with falling sales numbers. Compound that with the damage these prices will inflict on the appeal of PC gaming to the masses, and we're starting to look at a heck of an ugly snowball on its way down the hill.
It's probably a good time to make a graceful exit like you're mulling over now. As someone else who's thrown in the towel, I can happily confirm there's lots of fun to be had on very modest, inexpensive hardware. From older games to new releases with low system requirements, I have faith that there will always be a way to burn up a lot of free time at a keyboard, even if you end up with very old, very cheap hardware.
WarlockOfOz - Wednesday, October 17, 2018
Concur. I'm still rocking a 750 Ti and feeling no need to upgrade it or the even older CPU (Phenom X4) despite having money put aside. I'll replace it when it breaks, like my fridge, unless something does make going past 1080p compelling - whether that's VR, ray tracing, or a must-have game that I can't play at all.
nikon133 - Tuesday, October 16, 2018
I hear you.
Been considering making my current rig - an older i7 (Haswell) with a recently added 1070 - my last gaming PC. It really boils down to how next gen consoles turn out - but even as the current gen is, I seem to be spending more time on PS4 than on my gaming PC. In fact, MHW is the only game I am playing on PC atm, and even that is because of friends who insisted on playing it on PC. Eventually, we are lucky if we get to play it together once a week, on average... definitely not worth the investment in a new rig, for me.
ingwe - Wednesday, October 17, 2018
I mean, you don't need the most top-end or recent parts. I am gaming on a 5850 and an i5-4670K (I think that's the model; it has been so long I might be mixing things up). It runs great. 256 GB SSD and 16 GB of RAM.
The prices are crazy for the high end, but you also don't need the highest end and most recent gen when performance improvements are marginal.
Farfolomew - Monday, October 22, 2018
In the 486 days, computer gaming was worth that much money. The landscape was rapidly changing, games were rapidly changing. The internet was taking hold, and 3D gfx were just being born. It was amazing. It was money well spent to be able to play groundbreaking new types of games.
Nowadays, although overall less expensive perhaps, your money doesn't buy you much new in terms of originality and exciting gameplay. All we get are prettier and prettier textures with duller and duller games. WoW and Counter-Strike are STILL massively popular games, certainly not for their gfx.
Eris_Floralia - Tuesday, October 16, 2018
Nate, iirc they handicapped the Tensor performance of FP16 with FP32 accumulate, which is only half the rate of equivalent Quadro cards; maybe that's why HGEMM performance is low.
Yojimbo - Tuesday, October 16, 2018
The chart says half precision GEMM. So I think a lack of accelerated 32-bit accumulation should not be slowing the GPU down. As far as I know, the Turing Tensor Cores perform FP16 multiplications with FP16 accumulation at 8 operations per clock, much like Volta Tensor Cores perform FP16 multiplications with FP32 accumulation at 8 operations per clock.
Eris_Floralia - Tuesday, October 16, 2018
Turing FP16 with FP16 accumulate is fully enabled on all RTX cards, but FP16 with FP32 accumulate is 1/2 rate on GeForce cards.
They used the out-of-the-box configuration, which likely used Volta's FP16 with FP32 accumulate, resulting in half the performance.
HGEMM results for the 2080 Ti/2080/2070 are very close to their 54/40/30 TFLOPS theoretical performance. If it were a Quadro card, you would see double the performance with this config. If they updated the binary support, you'd likely see double the perf with FP16 accumulate too.
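For reference, those theoretical figures fall out of the half-rate math using NVIDIA's reference boost clocks, assuming the Tensor Cores deliver 8x the FP32 FMA rate at full speed. Taking the RTX 2080 Ti (4352 CUDA cores, 1545 MHz reference boost) as the worked example:

\[ 4352 \times 2~\tfrac{\text{FLOP}}{\text{FMA}} \times 1.545~\text{GHz} \times 8 \approx 107.6~\text{TFLOPS (FP16 acc.)} \;\Rightarrow\; 107.6 / 2 \approx 54~\text{TFLOPS (FP32 acc.)} \]

The same arithmetic with the 2080 (2944 cores, 1710 MHz) and the 2070 (2304 cores, 1620 MHz) gives roughly 40 and 30 TFLOPS at half rate.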
Yojimbo - Tuesday, October 16, 2018
"They used the out-of-the-box configuration, which likely used Volta's FP16 with FP32 accumulate, resulting in half the performance."
It could be some driver error. But I don't see why the GPUs not having full-rate FP32 accumulate should be the ultimate cause of the poor results. I admit I don't know much about the test, but why should the test demand FP16 multiplications with FP32 accumulate? That's more or less an experimental configuration only available commercially in NVIDIA's hardware, as far as I know. If the test is meant to use FP16 accumulate and FP32 accumulate is being forced, then the reason for the poor results is a driver or testing error, not that Turing GPUs only offer FP16 accumulate at full Tensor Core speed.
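To make the configuration difference being debated above concrete, below is a minimal cuBLAS sketch (CUDA 10-era API) of a Tensor Core HGEMM where the accumulation precision is selected via cublasGemmEx's computeType parameter. The matrix size is an arbitrary placeholder, and whether the benchmark in question was configured this way is an assumption on our part, not something we can confirm:

```cpp
#include <cublas_v2.h>
#include <cuda_fp16.h>
#include <cuda_runtime.h>
#include <cstdio>

int main() {
    const int n = 4096;  // illustrative square matrices, not the benchmark's actual size

    // For computeType CUDA_R_16F, the scaling factors must also be FP16.
    const __half alpha = __float2half(1.0f);
    const __half beta  = __float2half(0.0f);

    __half *A, *B, *C;
    cudaMalloc((void**)&A, (size_t)n * n * sizeof(__half));
    cudaMalloc((void**)&B, (size_t)n * n * sizeof(__half));
    cudaMalloc((void**)&C, (size_t)n * n * sizeof(__half));

    cublasHandle_t handle;
    cublasCreate(&handle);
    // Allow cuBLAS to dispatch to Tensor Core kernels.
    cublasSetMathMode(handle, CUBLAS_TENSOR_OP_MATH);

    // C = alpha*A*B + beta*C, all FP16 storage. The computeType argument picks
    // the accumulation precision: CUDA_R_16F = FP16 accumulate (full rate on
    // GeForce Turing), CUDA_R_32F = FP32 accumulate (half rate on GeForce,
    // full rate on the equivalent Quadro parts, per the discussion above).
    // Note: with CUDA_R_32F, alpha/beta must be passed as floats instead.
    cublasStatus_t status = cublasGemmEx(
        handle, CUBLAS_OP_N, CUBLAS_OP_N, n, n, n,
        &alpha,
        A, CUDA_R_16F, n,
        B, CUDA_R_16F, n,
        &beta,
        C, CUDA_R_16F, n,
        CUDA_R_16F,                      // computeType: FP16 accumulate
        CUBLAS_GEMM_DEFAULT_TENSOR_OP);
    std::printf("cublasGemmEx status: %d\n", (int)status);

    cublasDestroy(handle);
    cudaFree(A); cudaFree(B); cudaFree(C);
    return 0;
}
```

Built with something like nvcc -arch=sm_75 hgemm_sketch.cu -lcublas, timing this call with the two compute types would show the rate difference the commenters describe on GeForce Turing cards.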