Nvidia today announced a new graphics processor with 240 computing cores, giving PCs the horsepower needed to run three-dimensional games and scientific applications.
The new GeForce GTX 280, the largest GPU ever built by Nvidia, includes 1.4 billion transistors and delivers 933 gigaflops of performance. It succeeds the GeForce 8800 GTX, which had 128 cores and delivered 518 gigaflops of performance.
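The quoted gigaflops figures follow directly from the core counts once you factor in the shader clock. As a sketch, assuming the widely published shader clocks (1296 MHz for the GTX 280, 1350 MHz for the 8800 GTX) and three FLOPs per core per clock (the dual-issue MAD + MUL Nvidia uses when counting peak throughput), the numbers line up:

```python
# Back-of-the-envelope check of Nvidia's quoted gigaflops figures.
# Assumed inputs (not stated in the article): shader clocks of 1296 MHz
# (GTX 280) and 1350 MHz (8800 GTX), and 3 FLOPs per core per clock
# (dual-issue MAD + MUL, as Nvidia counts peak throughput).

def peak_gflops(cores: int, shader_clock_mhz: float, flops_per_clock: int = 3) -> float:
    """Peak single-precision throughput in gigaflops."""
    return cores * shader_clock_mhz * flops_per_clock / 1000.0

gtx280 = peak_gflops(240, 1296)   # ~933 GFLOPS, matching the quoted figure
g80    = peak_gflops(128, 1350)   # ~518 GFLOPS for the GeForce 8800 GTX
print(f"GTX 280: {gtx280:.0f} GFLOPS, 8800 GTX: {g80:.0f} GFLOPS")
```

Note that these are theoretical peaks; sustained throughput in real shaders is considerably lower.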
What you make of the GeForce GTX 280 may hinge on where you come down on the multi-GPU question.
Clearly, the GTX 280 is far and away the new single-GPU performance champ, and Nvidia has done it again by nearly doubling the resources of the G80. Its performance is strongest, relatively speaking, at high resolutions where current solutions suffer most, surely in part because of its true 1GB memory size. And one can’t help but like the legion of tweaks and incremental enhancements Nvidia has made to an already familiar and successful basic GPU architecture, from better tuning of the shader cores to the precipitous reduction in idle power draw.
All other things being equal, I’d rather have a big single-GPU card like the GTX 280 than a dual-chip special like the Radeon HD 3870 X2 or the GeForce 9800 GX2 any day. Multi-GPU setups are fragile, and in some games, their performance simply doesn’t scale very well. Also, Nvidia’s support for multiple monitors in SLI and GX2 solutions is pretty dreadful. The trouble is, things are pretty decidedly not equal. More often than not, the GeForce 9800 GX2 is faster than the GTX 280, and the GX2 is currently selling for as little as 470 bucks, American money. Compared to that, the GTX 280’s asking price of $649 seems mighty steep. Even the GTX 260 at $399 feels expensive in light of the alternatives (dual GeForce 8800 GTs in SLI, for instance), unless you’re committed to the single-GPU path.
Another problem with cards like the 9800 GX2 is simply that they’ve shown us that there’s more performance to be had in today’s games than what the GTX 260 and 280 can offer. One can’t escape the impression, seeing the benchmark results, that the GT200’s performance could be higher. Yet many of the changes Nvidia has introduced in this new GPU fall decidedly under the rubric of future-proofing. We’re unlikely to see games push the limits of this shader core for some time to come, for example. I went back and looked, and it turns out that when the GeForce 8800 GTX debuted, it was often slower than two GeForce 7900 GTX cards in SLI. No one cared much at the time because the G80 brought with it a whole boatload of new capabilities. One can’t exactly say the same for the GT200, but then again, things like a double-size register file for more complex shaders or faster stream-out for geometry shaders may end up being fairly consequential in the long run. It’s just terribly difficult to judge these things right now, when cheaper multi-GPU alternatives will run today’s games faster.
The first reviews can be found on AnandTech, Benchmark Reviews, Bjorn3D, CHW, ComputerBase, Driver Heaven, Guru3D, ExtremeTech, EliteBastards, HardOCP, Hardware Secrets, Hot Hardware, ChilleHardware, InsideHW, MadBoxPC, Neoseeker, Overclockers Club, Overclock3D.net, PC Perspective, techPowerUp!, Technic3D, The TechReport, TheInquirer, Tweaktown, t-break.