Yesterday, Nvidia took the wraps off its high-end GP100 GPU and gave us a look at what its top-end HPC configuration would look like come Q1 2017. While this new card is explicitly aimed at the scientific computing market and Nvidia has said nothing about future consumer products, the information the company revealed confirms some of what we’ve privately heard about next-generation GPUs from both AMD and Nvidia.
If you’re thinking about spending some of your tax refund on a new GPU or just eyeing the market in general, we’d recommend waiting at least a few more months before pulling the trigger. It may even be worth waiting until the end of the year based on what we now know is coming down the pipe.
What to expect when you’re expecting (a new GPU)
First, a bit of review: We already know AMD is launching a new set of GPUs this summer, codenamed Polaris 10 and Polaris 11. These cores are expected to target the sweet spot of the add-in-board (AIB) market, which typically means the $199 – $299 price segment. High-end cards like the GTX 980 Ti and Fury X may command headlines, but both AMD and Nvidia ship far more GTX 960s and Radeon R7 370s than they do top-end cards.
Polaris 10 and 11 are expected to use GDDR5 rather than HBM (I’ve seen the rumors claiming some Polaris SKUs might use HBM1; it’s technically possible, but I think it exceedingly unlikely), and AMD has said these new GPUs will improve performance-per-watt by 2.5x compared with their predecessors. The company’s next-generation Vega GPU family, which arrives late this year, is rumored to pair 4,096 shader cores with HBM2 memory and to be the first ground-up new architecture since GCN debuted in 2012.
We don’t know yet what Nvidia’s plans are for any consumer-oriented Pascal cards, but the speeds and core counts on GP100 tell us rather a lot about the benefits of 16nm FinFET and how it will impact Nvidia’s product lines this generation.
With GP100, Nvidia increased its enabled core count by 17% over the Tesla M40 while simultaneously ramping up the base clock by 40%. Baseline TDP for this GPU, meanwhile, increased by 20%, to 300W. The relationship between clock speed, voltage, and power consumption is not linear, but consumer clocks have tended to track the Tesla parts closely: the GTX Titan X shipped with a base clock of 1GHz, only slightly higher than the Tesla M40’s 948MHz. The GP100 die has up to 60 SM units (only 56 are enabled), which puts the total number of cores on-die at 3,840. That’s 25% more cores than the old M40, but the die is just 3% larger.
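Those percentages are easy to verify with quick arithmetic. The sketch below uses the Tesla M40's published 3,072 cores and 948MHz base clock, and the widely reported 1,328MHz base clock for GP100 (a figure not stated above, so treat it as our assumption):

```python
# Sanity check on the GP100 vs. Maxwell figures quoted above.
m40_cores = 3072        # Tesla M40 (full GM200 die)
gp100_enabled = 3584    # 56 enabled SMs x 64 cores on shipping GP100
gp100_on_die = 3840     # 60 SMs x 64 cores on the full die

m40_clock = 948         # MHz, Tesla M40 base clock
gp100_clock = 1328      # MHz, GP100 base clock (reported figure, our assumption)

print(f"Enabled-core increase: {gp100_enabled / m40_cores - 1:.0%}")  # -> 17%
print(f"On-die core increase:  {gp100_on_die / m40_cores - 1:.0%}")   # -> 25%
print(f"Base clock increase:   {gp100_clock / m40_clock - 1:.0%}")    # -> 40%
```

The numbers line up with the percentages quoted in the text, which is why the clock figure in particular is so striking for a 300W part.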
We may not know details, but the implications are straightforward: Nvidia should be able to deliver a high-end consumer card with 30-40% higher clocks and significantly higher core counts within the same price envelopes that Maxwell occupies today. We don’t know when Team Green will start refreshing its hardware, but it’ll almost certainly be within the next nine months.
Here’s the bottom line: AMD is going to start refreshing its midrange cards this summer, and it’d be unusual if Nvidia didn’t have fresh GPUs of its own to meet them. Both companies will likely follow with high-end refreshes towards the end of the year or very early next year, again, probably within short order of each other.
When waiting makes sense
There’s a cliché in the tech industry that claims it’s foolish to try to time your upgrades because technology is always advancing. Ten to twelve years ago, when AMD and Nvidia were nearly doubling their top-end performance every single year, this kind of argument made sense. Today, it’s much less valid. Technology still advances year-on-year, but the rate and pace of those advances can vary significantly.
The 14/16nm node is a major stepping stone for GPU performance because it’s the first full-node shrink that’s been available to the GPU industry in more than four years. If you care about low power consumption and small form factors, upcoming chips should be dramatically more power efficient. If you care about high-end performance, you may have to wait another nine months, but the amount of GPU you’ll be able to buy for the same amount of money should be 30-50% higher than what you’ll get today.
There’s also the question of VR technology. We don’t know yet how VR will evolve or how seriously it will impact the future of gaming; estimates I’ve seen range from total transformation to a niche market for a handful of well-heeled enthusiasts. Regardless, if you plan on jumping on the VR bandwagon, it behooves you to wait and see what kind of performance next-generation video cards can offer.
Remember this: VR technology demands both high frame rates and extremely smooth frame delivery, and this has knock-on effects on which GPUs can reliably deliver that experience. A GPU that drives 50 frames per second where 30 is the minimum requirement is delivering 1.67x the frame rate the user demands as a floor. A GPU that delivers 110 frames per second where 90 is the minimum requirement is only 1.22x above the target. It doesn’t take much in the way of additional eye candy before that second GPU is bottoming out at its 90 FPS floor again.
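The headroom arithmetic above can be expressed in a couple of lines (a sketch; the function name is ours, not anything from a real benchmarking tool):

```python
# How much margin a GPU has above a given minimum frame rate.
def headroom(actual_fps: float, minimum_fps: float) -> float:
    """Return the actual frame rate as a multiple of the required minimum."""
    return actual_fps / minimum_fps

print(f"{headroom(50, 30):.2f}x above a 30 FPS floor")   # 1.67x
print(f"{headroom(110, 90):.2f}x above a 90 FPS floor")  # 1.22x
```

The point is that a higher absolute frame rate can still mean thinner relative margins, and VR's 90 FPS floor leaves much less room for settings creep than a conventional 30 FPS target.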
The final reason to consider delaying an upgrade is whether you plan to move to a 4K monitor at any point in the next few years. 4K pushes 4x the pixels of a 1080p monitor, and modern graphics cards often lose 33-50% of their frame rate at that resolution compared with 1080p. Waiting a few more months to buy at the beginning of the new cycle could mean 50% more performance for the same price, and gives you a better chance of buying a card that can handle 4K in a wider variety of titles.
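The 4x figure isn't an approximation; it falls straight out of the resolutions themselves:

```python
# Pixel counts: 4K UHD (3840x2160) vs. 1080p (1920x1080).
uhd = 3840 * 2160   # 8,294,400 pixels per frame
fhd = 1920 * 1080   # 2,073,600 pixels per frame

print(uhd / fhd)    # -> 4.0: 4K pushes exactly 4x the pixels of 1080p
```

Both dimensions double, so the pixel count quadruples, which is why 4K is such a heavy lift even for today's top-end cards.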
If your GPU suddenly dies tomorrow or you can’t stand running an old HD 5000 or GTX 400-series card another minute, you can upgrade to a newer AMD or Nvidia model available today and still see an enormous performance uplift — but customers who can wait for the next-generation refreshes to arrive will be getting much more bang for their buck. We don’t know what the exact specs will be for any specific AMD or Nvidia next-gen GPU, but what we’re seeing and hearing about the 16/14nm node is extremely encouraging. If you can wait, you almost certainly won’t regret it — especially if you want a clearer picture of which company, AMD or Nvidia, performs better in DirectX 12.