What Is Intel's Xe Graphics Card?

Janidu Jayasanka
Jul 2

It’s been a wild year in the graphics space, topped by Intel’s announcement that it has started a new program to launch graphics cards for both gaming and enterprise markets (purportedly code-named Arctic Sound). That’s shocking because Nvidia and AMD have been the primary two discrete GPU producers for the last 20 years.

Developing a new GPU is an ambitious goal for Intel, especially considering its failed Larrabee project in 2010. But the company has been busy assembling the right team for the job. That includes veterans like Raja Koduri, rock star chip architect Jim Keller, and graphics marketer extraordinaire Chris Hook, to name just a few. The company also recently purchased Ineda Systems, a graphics company based out of Hyderabad, India, for its experience with SoC design.

With the team in place, it now comes down to the hardware and design, but Intel is prepared there, too. Intel has an installed base of over a billion screens around the world. That advantage comes courtesy of the integrated graphics in its CPUs, which make Intel the world's largest GPU manufacturer. The company also has an IP war chest (at one point it owned more graphics patents than the other vendors combined) to do battle.

Intel whipped the covers off its Xe graphics branding at its recent Architecture Day, but Xe isn't a final product brand name along the lines of Radeon or GeForce. Instead, the Xe branding signifies Intel's full range of low- to high-power graphics solutions.

According to Intel, its coming Gen11 engine represents a huge step forward for its integrated graphics. Given the facts and figures the company presented, we can reasonably estimate the raw performance of these new integrated graphics as being in the range of the Radeon Vega 8 graphics that come with the Ryzen 3 2200G.

Intel’s Xe line will come after Gen11, replacing what would otherwise have been the Gen12 branding. These graphics processors will scale from integrated graphics chips on processors up to discrete mid-range, enthusiast, and data center/AI cards. Intel says it will split these graphics solutions into two distinct architectures, with both integrated and discrete graphics cards for the consumer market (client), and discrete cards for the data center. Intel also disclosed that the cards would be built on its 10nm process and arrive in 2020, which is in line with its previous projections.

Intel also plans for the graphics cards to work with its OneAPI software, which the company designed to simplify programming across its GPU, CPU, FPGA, and AI accelerators. The new software provides unified libraries that will allow applications to move seamlessly between Intel’s diverse types of compute. If successful, this could be Intel’s answer to Nvidia’s CUDA, except it would work on any Intel processor. On the media side, the new graphics hardware should also bring support for 4K streaming, legacy VP8 and AVC codecs, HEVC 10-bit decode/encode, VP9 8/10-bit decode, and VP9 8-bit encode (and perhaps even VP9 10-bit encode). We also expect HDR and Wide Color Gamut support.
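Intel's actual OneAPI interface is built around Data Parallel C++ and its libraries, but the core "one code path, many kinds of compute" idea can be sketched in a few lines of plain Python. The backend names and registry below are purely illustrative, not OneAPI's real API: a single entry point runs the same kernel on whichever device backend happens to be available.

```python
# Conceptual sketch of a unified compute API (hypothetical -- NOT Intel's
# actual OneAPI/DPC++ interface): one entry point, many device backends.

def vector_add_cpu(a, b):
    # Plain-Python "CPU" implementation used as the universal fallback.
    return [x + y for x, y in zip(a, b)]

# Registry of available backends; a real runtime would probe the hardware
# and register GPU/FPGA/AI-accelerator implementations here as well.
BACKENDS = {"cpu": vector_add_cpu}

def vector_add(a, b, device="auto"):
    """Run the same kernel on the selected (or best available) backend."""
    if device == "auto":
        # Prefer an accelerator if one were registered, else fall back to CPU.
        device = "gpu" if "gpu" in BACKENDS else "cpu"
    return BACKENDS[device](a, b)

print(vector_add([1, 2, 3], [4, 5, 6]))  # [5, 7, 9]
```

The point of this abstraction, and of OneAPI as Intel describes it, is that application code calls `vector_add` without knowing or caring which compute device executes it.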

It’s natural to expect Intel’s freshman effort to target the mid-range enthusiast market with enticing price points, but true to its heritage, the company may also bring high-end cards based on enterprise designs to the enthusiast space, just like it does with its high-end desktop processors.

Only time will tell, but Intel has the deep pockets to take losses for the first few generations as it builds a customer base. In either case, Intel’s new cards will force both AMD and Nvidia to become more competitive on pricing.

Intel recently surprised the world with its announcement of its new Foveros (Greek for awesome) 3D chip stacking technology. Intel says it built Foveros upon the lessons it learned with its innovative EMIB (Embedded Multi-Die Interconnect Bridge) technology, which is a complicated name for a technique that provides high-speed communication between several chips. The Foveros chip stacking technique connects multiple stacked dies into one 3D package. This technology may sound like a far-off pipe dream, but Intel already has working chips based on the technology (albeit small ones) coming to market in 2019. That leaves the company some room to debut the technology with its new discrete GPUs in 2020, though that hasn’t been confirmed.

In either case, speculation is running rampant. Intel announced that it is already developing a new FPGA using the Foveros technology. The company claims that this technology will enable up to two orders of magnitude performance improvement over the next-gen Falcon Mesa FPGAs, along with density and power efficiency improvements. Those same types of radical performance advances could also apply to GPUs built on the same technology. That means it is possible that we could see 3D stacked GPUs from Intel in the future, even if they don’t debut in the company’s first discrete graphics cards.

Intel could also use an MCM (Multi-Chip Module) approach that essentially ties multiple small die together into one heterogeneous package but in a less-risky 2D alignment. Both AMD and Nvidia also rely on outside fabs. Intel’s fabs have been a liability lately as the company struggles with its 10nm process, but when used correctly, Intel’s fabs are a tangible advantage that could give the company a leg up on its competitors. Chip packaging techniques are becoming the true differentiator in the waning light of Moore’s Law, and it’s fair to say that the companies that produce AMD and Nvidia’s GPUs (TSMC and GlobalFoundries) cannot compete with Intel’s next-gen packaging technologies. At least for now.
