Intel is getting back into the graphics game, and we don’t mean with graphics cores onboard its CPUs. In 2020 Intel is expected to debut its first add-in graphics card since the Intel740 released back in 1998. The technology behind this GPU and its potential performance remain shrouded in mystery, but as time goes on more and more details are coming to light. If it proves a viable alternative to Nvidia and AMD, this will be one of the most important events in the graphics card industry this century.
Here’s what we know so far.
Pricing and availability
At the beginning of August, we got our first hint at how Intel might price its upcoming Xe graphics cards. In an interview with a YouTube channel, translated from Russian and since taken down, Intel Chief Architect Raja Koduri said its new cards will debut at around $200. This price approximation makes a lot of sense and aligns with what we'd heard so far about the intended audience for these cards: they may end up being less for gamers and more for data centers and content creation.
The translation, however, has since been corrected by Intel, clarifying that “Raja was making the point that not all users will buy a $500-$600 graphics card, and that Intel strategy revolves around going for the full stack that ranges from Client to the Data Center. The $200 reference in the interview was an example of general entry pricing for Client dGPUs – and not a confirmation of Intel dGPU pricing.”
Intel has, however, stated that it intends to cast a wide net. The company says Xe will show up in every segment, from integrated graphics to top-shelf data center solutions, including midrange and enthusiast discrete graphics.
A recent Intel driver leak referenced four different discrete graphics cards, suggesting that for gamers and hardware fans, there will be a relatively broad selection of graphics cards to pick from.
Intel has slated the graphics cards for a 2020 release and has remained firm on that, so as we edge closer to the end of 2019, we have less than a year to wait to see these graphics cards launch. That is, as long as Intel doesn't face the kind of delays that have plagued its CPU range as of late.
Architecture and performance
When Intel made its official announcement about the new graphics technology it was working on, it made clear that it was developing a dedicated graphics card. While that might suggest something entirely distinct from its existing onboard GPU ventures, these cards will be based on the same 12th-generation architecture at the core of the integrated graphics in its upcoming CPU generations.
The driver leak in late July 2019 suggested that different models of discrete Xe graphics cards would feature varying numbers of "execution units," as per Tom's Hardware. The most modest model would sport 128, with two more capable cards sporting 256 and 512. This would suggest these cards are targeting the midrange of graphics performance, but we'll need to learn more before we can make an educated guess about actual performance.
Intel will transition to the "Xe" branding for all future graphics endeavors, whether onboard or discrete, though there will be significant performance differences between them.
To give that some context, Intel's 9th-gen graphics architecture includes GPU cores like its UHD Graphics 630, which can be found in everything from entry-level Pentium Gold G5500 CPUs to the fantastically powerful Core i7-9700K. Intel's more recent 11th-generation architecture (it skipped the 10th generation) powers the graphics cores in its new Ice Lake mobile CPUs.
Gen11 isn’t Intel Xe, but it already makes improvements that put Intel on the right path. It targets a teraflop of power, much greater than past Intel IGPs. It also adds support for HDR and Adaptive Sync, two popular features found on discrete cards from AMD and Nvidia. But we’re told that the 12th-generation will be an even grander leap in graphical performance, which is what should put Intel in competition with AMD and Nvidia.
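To see how execution-unit counts translate into that "teraflop" figure, here is a rough back-of-the-envelope sketch. It assumes the Gen-architecture convention that each execution unit retires up to 16 FP32 FLOPS per clock (two SIMD-4 FPUs, with a fused multiply-add counted as two operations); the clock speeds used for the leaked Xe configurations are placeholders, since Intel has not disclosed them.

```python
# Rough theoretical FP32 throughput for Intel Gen-style GPUs.
# Assumption: each execution unit (EU) retires up to 16 FP32 FLOPS
# per clock (2x SIMD-4 FPUs, FMA counted as 2 ops).
FLOPS_PER_EU_PER_CLOCK = 16

def peak_tflops(eu_count: int, clock_ghz: float) -> float:
    """Theoretical peak FP32 TFLOPS; real-world throughput is lower."""
    return eu_count * FLOPS_PER_EU_PER_CLOCK * clock_ghz / 1000

# Gen11 (Iris Plus, 64 EUs at roughly 1.1 GHz) lands right around
# the teraflop mark Intel is targeting.
print(f"Gen11, 64 EUs: {peak_tflops(64, 1.1):.2f} TFLOPS")

# Hypothetical Xe configurations from the driver leak; the 1.2 GHz
# clock here is purely a placeholder assumption.
for eus in (128, 256, 512):
    print(f"Xe, {eus} EUs: {peak_tflops(eus, 1.2):.2f} TFLOPS")
```

Even under these optimistic assumptions, the 128-EU part would sit in entry-level-to-midrange territory, which fits the $200-class positioning discussed above.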
A rumored move to HBM
In the same (now removed) interview mentioned above, Koduri also suggested that these new graphics cards might not use the more typical GDDR6 memory. Instead, it was suggested that the GPUs could use higher-spec, high-bandwidth memory (HBM), a more expensive and less common type of memory.
That has since been discredited by Intel's clarification of the interview. The actual translation instead says this: "So the strategy we're taking is we're not really worried about the performance range, the cost range and all because eventually our architecture as I've publicly said, has to hit from mainstream, which starts even around $100, all the way to Data Center-class graphics with HBM memories and all, which will be expensive."
In other words, no, it doesn’t sound like HBM will be used on entry-level graphics cards. Seeing that sort of memory pop up in data center-class cards is a bit more what we’d expect.
Most consumer graphics cards stick with GDDR6, up to and including Nvidia's high-end RTX 2080 Ti. The only recent exception has been AMD's high-end cards, which have used HBM in the past. More recently, AMD moved to HBM2 for its Radeon VII, though it ditched that memory on its newer, midrange Radeon cards. Xe cards with HBM2 memory would be an interesting proposition, and would again position Intel's chips more toward non-gaming use cases that require the extra bandwidth.
Ray tracing support
Nvidia has gone all-in on real-time ray tracing, making it a key feature of its current RTX graphics cards. Despite its slow start, the technology has the potential to become the most important new feature in computer graphics over the coming years. The problem is that the increase in lifelike lighting and shadows can be costly in terms of performance. AMD has been more hesitant about diving into the world of ray tracing for that exact reason, though it plans to support it in the future, notably on consoles like the PlayStation 5.
Well before the launch of Xe, Intel has already come out of the gate touting support for ray tracing in its future GPUs. Jim Jeffers, Intel's senior principal engineer (and senior director of advanced rendering and visualization), made the following statement on the matter: "Intel Xe architecture roadmap for data center optimized rendering includes ray tracing hardware acceleration support for the Intel Rendering Framework family of APIs and libraries." We don't yet know what that statement means for ray tracing in games, but if the hardware acceleration is there, we'd be surprised if Intel didn't also bring it to gamers.
AMD alumni are helping to make it
Intel hasn't released a dedicated graphics card in more than 20 years. It did develop what became a co-processor, Larrabee, in the late 2000s, but that proved far from competitive with contemporary graphics cards, even if it found some intriguing use cases in its own right. To develop its graphics architecture into something worthy of a dedicated graphics card, Intel hired some industry experts, most notably Raja Koduri. He came straight from AMD, where he had spent several years as chief architect of the Radeon Technologies Group, heading up development of AMD's Vega and Navi architectures.
He's been there for over a year, and he was joined in mid-2018 by Jim Keller, the lead architect of AMD's Zen architecture. Keller is heading up Intel's silicon development and will, according to Intel itself, help "change the way [Intel] builds silicon." That could be considered additional evidence of Intel's push toward viable 10nm production.
Other ex-AMD employees that Intel has picked up over the past few months include former director of global product marketing at AMD, Chris Hook, who spent 17 years working at the company, and Darren McPhee, who now heads up Intel’s product marketing for discrete graphics.