- cross-posted to:
- hardware@lemmit.online
I wonder why they invest so much into the iGPU of their CPUs. Does anyone really buy a 13700K and then game on its iGPU?
Perhaps someone here can clarify the logic behind this.
They’re planning to enter the handheld space with Meteor Lake. Gen-to-gen improvements with a focus on the iGPU keep them relevant. Can’t fall behind in any space.
The ROG Ally uses AMD’s Z1 Extreme APU. Intel is making a Meteor Lake APU.
Ooh, very interesting, it would be cool to see Intel in the handheld space.
Though what you posted is about Meteor Lake (which is mobile only, not desktop). I specifically wondered why they would include it in Arrow Lake (desktop) chips. Who would ever care about iGPU performance on high-end desktop CPUs?
Probably to compete with AMD’s desktop APUs. The desktop APU space does rather well in the enterprise/education sector for mass deployment in office/IT rooms where a full-on workstation isn’t required.
That was back when AMD didn’t have iGPUs at all. Every Intel non-F SKU is what AMD would have called an APU.
Yeah, see, the article I linked says that Meteor Lake is lacking XMX and may be lacking XeSS as a result, so I’m assuming that they just keep pouring R&D money into the iGPU sector and implementing the new builds so they can stay competitive.
Nobody is really using it at the desktop level, but they are furthering the platform so it can be used in the handheld sector.
Also, yeah, commercial applications of APUs. No need for a discrete GPU to have a manageable workstation.
Arrow Lake will also have a mobile SoC. Lunar Lake will only cover the low-power mobile segment; Arrow Lake will cover everything else.
Furthermore, the iGPU and CPU tiles are separate now, so they can replace the iGPU tile or take it out entirely if they want. My guess is the desktop SoC will get a gimped iGPU version.
You are right, I forgot about mobile ARL.
Handhelds are a microscopic fraction of the laptop market. Even if only a small portion of the laptop market cares about iGPU performance, that’s still far more impactful than anything with handhelds. This is for laptops first and foremost.
The article is old, btw. The Meteor Lake iGPU does support XeSS; Intel posted a demonstration on YouTube.
WOW! So excited!
Does anyone really buy a 13700K and then game on its iGPU?
The 13700K has a considerably smaller GPU than a 1360P actually, so no.
Companies like Apple and AMD have useful GPUs in their SoCs, something that Intel is still catching up on. On desktop it’s not that big of a deal, but for portables it is a good way to save on power and cost while still delivering adequate performance.
Also, they’re going to be paying to develop these GPU IP blocks anyway, so they may as well use the IP as much as possible against the competition and amortize that R&D cost. Arrow Lake will also have the GPU segregated from the CPU chiplet, so defects are less of a problem even if they up the GPU size. Something like the 13700K is already pretty big on its own even with the small iGPU, so it would be relatively expensive to slap a good iGPU on there.
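To make the yield side of that argument concrete, here’s a rough back-of-envelope sketch using the classic Poisson yield model. The defect density and die areas below are made-up illustrative numbers, not real Intel figures.

```python
import math

# Classic Poisson yield model: Y = exp(-D * A).
# Defect density and die areas are illustrative assumptions, not real figures.
def poisson_yield(defects_per_cm2: float, area_cm2: float) -> float:
    """Probability that a die of the given area comes out defect-free."""
    return math.exp(-defects_per_cm2 * area_cm2)

D = 0.2  # assumed defects per cm^2

monolithic = poisson_yield(D, 2.0)   # ~200 mm^2 CPU + big iGPU on one die
cpu_tile   = poisson_yield(D, 1.5)   # ~150 mm^2 CPU-only tile
gpu_tile   = poisson_yield(D, 0.5)   # ~50 mm^2 separate GPU tile

print(f"monolithic yield: {monolithic:.1%}")  # ~67%
print(f"CPU tile yield:   {cpu_tile:.1%}")    # ~74%
print(f"GPU tile yield:   {gpu_tile:.1%}")    # ~90%
# With separate tiles, a defect only scraps the small tile it lands on,
# so growing the iGPU tile costs far less than growing a monolithic die.
```

Under those assumed numbers, splitting the GPU onto its own tile means a defect no longer takes out the whole CPU+GPU die, which is the point about being able to enlarge the iGPU more cheaply.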
Yes
Have you considered that the vast majority of users don’t use their computers primarily for gaming?
The iGPU isn’t for playing games, even these bigger ones that can. They’re for media encoding and productivity. They’re meant to go against Apple chips.
Anyone know when Arrow Lake is releasing?
Supposed to be late 2024.
Is Arrow Lake where the Adamantine cache went?
Doubt it’s coming to ARL either. ARL and MTL appear to share the same base tile tech, so there’s probably reuse there. LNL and PTL are marked for the next generation of Foveros Omni, with 25 µm bump pitches.
How many X letters do you want? Yes
So is that Battlemage or not? Are they just sticking to Arc for laptop and APUs because it’s more suited in some way?
It’s Alchemist; previous iGPUs were the old Xe-LP Iris architecture.
From my perspective as someone who works with media a lot, having a powerful iGPU alongside a CPU with great decoders/encoders speeds up lots of workflows and reduces the need for a workstation with multiple GPUs, which, of course, some might still need regardless. Ideally, Nvidia and AMD would improve the ones in their GPUs, and so would Intel, but so far I haven’t seen any signs of that, especially with Nvidia, who wants you to buy a Quadro-type card for that. At the moment, Apple’s decoders/encoders do so much heavy lifting that the CPU and GPU on their SoC can accelerate other operations. I’d like to see this on the PC side. We used to buy add-in cards for that, say for Avid systems, Media100 systems, Red footage, etc. If anyone has expertise on this, feel free to chime in and educate me some more. So maybe it’s not too applicable to heavy gaming, but for content creation and heavy media work it would be a great thing.
There are already hardware encoders and decoders for H.264/H.265/VP9/AV1 on Intel GPUs; these are codec-specific. The article this post links to points to Intel increasing the capabilities of the GPU, which is usually accompanied by an increase in encoding/decoding performance and efficiency.
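For context, this is roughly what tapping those fixed-function blocks looks like from software: a minimal sketch driving ffmpeg’s Quick Sync (QSV) encoder from Python. It assumes an ffmpeg build with QSV support and an Intel GPU with a media engine; the file names and bitrate are placeholders.

```python
import subprocess

# Minimal sketch: decode and re-encode a clip on the GPU's fixed-function media engine
# via ffmpeg's QSV integration. Requires an ffmpeg build with Quick Sync support.
cmd = [
    "ffmpeg",
    "-hwaccel", "qsv",        # use the hardware decoder
    "-i", "input.mp4",        # placeholder input file
    "-c:v", "hevc_qsv",       # hardware H.265/HEVC encoder
    "-b:v", "10M",            # placeholder target bitrate
    "-c:a", "copy",           # leave audio untouched
    "output_hevc.mp4",        # placeholder output file
]
subprocess.run(cmd, check=True)
```

The same pattern works with `h264_qsv` or `av1_qsv` where the hardware and ffmpeg build support them; the CPU cores stay mostly free for the rest of the workflow.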
Right. So adding more codecs is what I’d like to see, with improved performance and efficiency: ProRes, BRAW, Avid DNx, etc. I’m not sure if this could happen though. Intel doesn’t have to beat Apple on ProRes speed, but something close, and the same with the others. Of course, this might be me wishing for pie in the sky, lol. And with ray tracing and render engines, any ideas on that? I know it’s trending a bit off topic.
The thing about codec support is that you essentially have to add specific circuits that are used purely for decoding and encoding video using that specific codec. Each addition takes up transistors and increases the complexity of the chip.
XMX cores are mostly used for XeSS and other AI inferencing tasks as far as I understand. While it could be feasible to create an AI model that encodes video to very small file sizes, it would likely consume a lot of power in the process. For video encoding with relatively high bitrates it’s more likely an ASIC would consume a lot less power.
XeSS is already a worthy competitor/answer to DLSS (in contrast to AMD’s FSR2), so adding XMX cores to accelerate XeSS alone can be worth it. I also suspect Intel GPUs use the XMX cores for raytracing denoising.
Ah, got it. I’m guessing this is why Intel leaves those types of circuits for the GPU. Then I’m looking forward to seeing what Battlemage brings, and how these innovations trickle down to the iGPU.