A look at entry-level, laptop gaming

Osvaldo Doederlein
11 min read · Aug 26, 2023


If you follow me here or on Twitter you know I’m a desktop guy. However, my kid needed the mobility, so I got this HP Victus: 7840HS, RTX 3050 6GB, 16GB DDR5-5600, 512GB NVMe, 16.1" 1080p 144Hz. $749 at MicroCenter (not an endorsement). It’s branded as a Gaming Laptop, and the entry-level 3050 GPU qualifies for that, but the machine is not just for gaming (I hope!) and the overall specs looked great for that price range.

I used the opportunity to check out the tech. The system offers an 8-core Zen4 CPU with integrated RDNA3 780M graphics, plus AMD’s new AI Engine (largely a promise today). The RTX 3050 Mobile is the 6 GB refresh, roughly 10% stronger than the desktop RX 580 I had 4 years ago.

Detailed CPU & GPU specs (not all correct, see below).

Tuning &amp; Test Setup

The laptop has little if any OC potential. I didn’t mess with the BIOS, CPU, or GPUs; all stock. I disabled Windows 11’s Memory Integrity and removed a cataclysmic amount of bloatware. I wanted to compare this device to my gaming desktop: 7900 XTX at stock, 5900X with Ryzen Master-managed per-core CO, 32GB DDR4-3800. (Ryzen Master doesn’t even support the 7840HS.) For the 780M iGPU, the Radeon App’s Tuning tab only offers an option to optimize DRAM allocation for productivity (reserving 512MB for VRAM) or gaming (2GB); I used the latter setting only when testing the iGPU.

This laptop has a MUX switch, but the optimal setting for dGPU gaming is not the default: you enable it via HP’s Omen app (the single HP preload I kept). I tested both Hybrid and Discrete modes, with two findings:

  • The dGPU performs identically in both modes. I couldn’t detect any advantage (over error margin) of Discrete mode, even in synthetic benchmarks that usually maximize small architectural factors.
  • The iGPU works perfectly well even in Discrete mode.

It looks like the 780M iGPU can pipe pixels from the 3050 without a visible penalty, so the major factor (not tested here) should be power/battery use.

Synthetic Benchmarks, Laptop Edition

Let's start with the synthetic tests that few GPU reviews include (but should). I’m running all tests at 1080p, except VRMark, which uses a fixed resolution. Most of the 3DMark tests require a Custom Run for 1080p.

For the purposes of this study, I was interested in each synthetic test's performance as a fraction of my high-end desktop's; the goal is to identify functional areas where this entry-level mobile gaming system is particularly weak or strong compared to a "standard" gaming PC.
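All of the "relative performance" figures in this article are simple ratios of laptop score to desktop score. A minimal sketch of that calculation (the scores below are placeholders, not my actual benchmark results):

```python
# Relative performance: laptop score as a fraction of the desktop score.
def relative(laptop_score: float, desktop_score: float) -> float:
    """Return the laptop's score as a percentage of the desktop's."""
    return 100.0 * laptop_score / desktop_score

# Placeholder example: a laptop scoring 4200 vs. a desktop scoring 12000.
print(f"{relative(4200, 12000):.0f}%")  # → 35%
```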

Synthetic tests / Relative performance: 7840HS + RTX 3050 6GB vs. 5900X + 7900 XTX.

As one would expect, the laptop is generally much slower, but there are some surprises. Comments on the interesting findings:

  • 3DMark PCIe: 50% of desktop, despite GPU-Z's claim of PCIe 4.0 x16. The tool is incorrect; the mobile 3050 is specced for only 8 lanes.
  • 3DMark CPU: This mobile Zen4 beats my desktop Zen3's single-thread score by 5%; the multi-threaded test hits 74% of the desktop, but does so with only 66% as many cores and a 54W TDP limit.
  • 3DMark FireStrike: The Physics test runs 79% as fast as my desktop. That's another CPU-bound test, suggesting that a system with this CPU/GPU balance could shine in some genres, e.g. simulation games.
  • 3DMark API Overhead: A final CPU-bound test, measuring GPU driver code that pushes draw commands; scores are very competitive. In the DX11 Multithreaded test I actually got +40% over my desktop! That's probably because my big PC is running a Radeon GPU; multithreaded DX11 is Radeon’s Achilles’ heel. See this great analysis by Intel.
  • VRMark Cyan Room: Reaches 90% of my desktop, 90fps at 2,264 x 1,348 resolution (1,132 x 1,348 per eye). I suppose there's some bottleneck, since the test hits only 130fps on a 7900 XTX, very low considering the sub-1440p pixel count and 2017-vintage rendering.
  • 3DMark Mesh Shader: The test with Mesh Shaders Off is very close to desktop performance; that's a very simple rasterization benchmark, possibly bandwidth-limited. But with Mesh Shaders On, the desktop 7900 XTX gains +256% fps while the mobile 3050 gains only +36%.
  • 3DMark Night Raid: Designed for "integrated graphics gaming", but on a 2018-era DX12 engine, this test is easy on the 3050 with both sections above 300fps: about 50% of my PC (in part because the test becomes CPU-limited). See next section for iGPU performance.
3DMark Solar Bay
  • 3DMark Solar Bay: a brand-new "cross-platform benchmark for RT capable Windows and Android devices", Solar Bay is the successor of Night Raid for those devices, including any PC with a good iGPU. I also love that it's a modern, RT-enabled Vulkan bench. (No iOS support, since Apple hates open standards.) Solar Bay looks decent on a 1080p laptop; the level of detail is close to a decade-old PC game, except for heavy use of RT reflections. The 3050 turns in 176fps, a respectable 38% of my desktop. I anticipated simple tests or games hitting the display's 144fps limit, but not with any RT. Of course, Ray Tracing is easier with the low geometric complexity that fits the other restrictions of entry-level platforms. It's also the easiest kind of RT — 100% specular reflections, literally perfect mirrors.
  • FurMark: Does very poorly with only 17% of the desktop's performance. This is my only OpenGL-based synthetic test so it's hard to draw any conclusions; also, FurMark is better as a stress/VRM test.
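As an aside, the per-core arithmetic behind the 3DMark CPU finding above can be made explicit: 74% of the throughput with 66% of the cores implies better per-core throughput on the mobile chip. A quick sketch (only the 74% figure and the core counts come from the text; the rest is illustrative):

```python
# Figures from the article: the 7840HS multi-threaded 3DMark CPU score is
# 74% of the 5900X's, achieved with 8 cores vs. 12 (≈66%).
mt_score_ratio = 0.74
core_ratio = 8 / 12

# Per-core throughput of the mobile Zen4 relative to the desktop Zen3:
per_core = mt_score_ratio / core_ratio
print(f"per-core throughput: {per_core:.2f}x")  # → 1.11x
```

So despite the 54W TDP limit, each mobile Zen4 core delivers about 11% more multi-threaded throughput than a desktop Zen3 core in this test.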

Update: Solar Bay was updated on 11–09–2023 to support iOS, but that was accomplished by porting to Apple’s Metal API so scores aren’t easily comparable between iOS and other platforms — there are some multi-API benchmarks that exhibit double-digit differences between APIs on the same hardware.

Integrated Graphics

I repeated the same tests on the 7840HS's integrated graphics (780M), but now presenting all scores as fractions of the dedicated RTX 3050's:

Synthetic tests / Relative performance: 7840H/780M (iGPU) 2GB (DDR5) vs. RTX 3050 6GB.

The iGPU performs mostly around 30%-50% of the dGPU, which is not bad, but not good enough for modern AAA games even at 1080p — Phoenix is not yet the big APU that entry-level gamers need. Highlights:

  • 3DMark API tests apparently show that the 780M's driver is not well optimized for any multithreaded API; it only does well in DX11 ST. But I believe that's just the effect of very unbalanced CPU/GPU power, i.e. the 12-CU iGPU cannot consume draw commands as fast as the driver can push them with multithreaded APIs on the 8-core/16-thread CPU.
  • 3DMark Port Royal, Speed Way and Solar Bay, all the Ray Tracing tests, have relative performance below the average of the raster-only tests. Still, Solar Bay's 72fps is surprisingly good. You wouldn't imagine that a small RDNA3 iGPU could do any level of RT while locking 60fps, even at 1080p.

Note that the 7840HS has almost the same CPU &amp; iGPU specs as the Asus ROG Ally's Z1 Extreme APU. The laptop part offers higher CPU clocks with some extra TDP, but I'm not sure if the custom Z1 design counters with advantages of its own. In any case, the Ally would be a good argument that this iGPU is good enough for 1080p low-settings gaming, but I didn't have the time or motivation to fully test the iGPU here. For one thing, handhelds get away with lower expectations of playable framerates, but I barely recognize anything at 30–40fps as a working videogame.

Gaming Tests

Moving on to actual games, I chose a subset of my built-in benchmarks representing a range of ages and APIs, all big AAA titles at launch. Rules first: I’m testing all games at the “High” preset (not Ultra, if there is one), with no Ray Tracing even if the High preset enables it (but see the Ray Tracing section below). Games are tested at native 1080p and upscaled at the Quality level when available: DLSS2 on the laptop, FSR or XeSS on my Radeon desktop.

Gaming tests, native 1080p and DLSS2/Quality upscaling when available.

I was able to get all games above 60fps at native 1080p, except for Returnal, which needed DLSS2/Quality. A second Returnal test with the Low preset did the trick: 60fps native, with DLSS2/Q averaging a safer 68fps. The best surprise was Hitman 3: 110fps native, with DLSS2/Q almost meeting the display's 144Hz limit at 138fps.

In relative terms, compared to my desktop's performance: SOTR & Batman ~35%, Horizon & Hitman ~40%, Returnal (both High and Low) ~45%.

Mobile Upscaling

My tests also show that upscaling is not a panacea for entry-level gaming. In Returnal, DLSS2/Q gains a very mediocre 13%; in Horizon and Hitman, the same Quality factor gains 25%, which is more useful but far from what's possible at this scaling factor — in my desktop benchmarks at 4K, Quality upscaling gains >60% in Returnal and >80% in many games. SOTR is the only good upscaling result, with XeSS/Quality delivering +67% frames.
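For reference, the upscaling gains quoted here are just the ratio of upscaled to native framerate, minus one. A minimal sketch with hypothetical fps values (not my measured numbers):

```python
def upscaling_gain(native_fps: float, upscaled_fps: float) -> float:
    """Extra frames from upscaling, as a percentage over native."""
    return 100.0 * (upscaled_fps / native_fps - 1.0)

# Hypothetical example: 52fps native rising to 65fps with Quality upscaling.
print(f"+{upscaling_gain(52, 65):.0f}%")  # → +25%
```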

Meanwhile, Batman represents older games well: typically easy on modern hardware, running at a pretty good 80fps. You'd better like the native performance, though, because there's no support for any modern upscaler.

Effectiveness of all upscalers: Extra FPS for Quality upscaling vs Native, 1080p.

Going back to our synthetic suite, 3DMark includes tests for the big three upscalers. Relative performance is better for DLSS2 and FSR2 than for XeSS. DLSS2 is indisputably the winner, since it sits at the top in both image-quality analyses (from every major reviewer) and performance.

But the FSR2 vs. XeSS debate is more complicated. XeSS’s quality comes at a framerate penalty (on non-Arc GPUs). This gap makes the choice harder even in games where XeSS meaningfully beats FSR2’s quality — since the reason to use upscalers is to gain FPS! But I find that quality gap smaller on a laptop’s display, even one on the bigger side. FSR2/Q at 1080p looks good enough to me that I wouldn’t trade 10% extra frames for hardly-noticeable extra quality. On my desktop with a 32" display, any additional artifact is much easier to spot — and IMO the debate about upscaling quality is biased by reviewers using desktops with big, high-quality gaming monitors. Even if one uses a native-1080p display to test upscalers at 1080p (as one should), that’s likely to be at least a 27" panel, way bigger than any laptop’s display.

At higher scaling factors (e.g. Performance), even DLSS2 is a bigger quality tradeoff at 1080p, noticeable even on a small display. But my rule of thumb is that if Quality upscaling can’t push a game over 60fps, the hardware isn’t good enough to run the game at that resolution. Before upscaling even more, I’d rather lower other settings… unless I want to keep Ray Tracing. On my desktop I might take Balanced upscaling for 4K with maxed RT. But RT is generally out of reach for entry-level GPUs (see next section).

I wanted to confirm those 3DMark upscaling results with real-game tests. You need something that's current/AAA, supports all upscalers well, runs generally well on the laptop, and includes a good built-in benchmark. Hitman 3, recently updated for XeSS 1.1, was a great choice.

Upscaling gains in Hitman 3, Raster and RT.

My results help clarify recent debates on Twitter, where some people (IIRC all using desktop Ada GPUs) report that XeSS 1.1 has erased the FPS gap to other upscalers. That's clearly not true for my 7900 XTX, in every test, synthetic or real-game, that I can find. It's also not true for this mobile RTX 3050.

XeSS's performance deficit is bigger in the raster tests; with RT maxed out it's still behind FSR2, but much closer. That might be why some don't see the gap, at least on some GPU/game combinations. The upscaling pass has identical cost with and without RT; however, the relative gain depends on factors like main rendering time and scheduling. RTX GPUs can run tasks asynchronously across shaders, tensor cores, and RT cores; RDNA has strong support for async compute, but it runs everything on the Shading Units — its "RT cores" and "AI accelerators" are better described as extensions to the SUs, not independent execution units. In any case, XeSS's smaller gap with RT is not a big win on a device that can't do RT well even with upscaling.

Ray Tracing

I also tried to run all games in my selection with RT enabled, but results at native 1080p were consistently too poor to be worth it — even DLSS2/Quality wouldn't make most games playable at >60fps.

The best result by far was Hitman 3. Not because its RT implementation is particularly efficient (quite the opposite), but because upscaling works very well for this game at 1080p — well enough to score an exact 60fps with DLSS2 and FSR2 (56fps for XeSS). But that's only the average of a benchmark where some scenes dip well under that number, and it's only with the Performance mode, which is tough to endure at 1080p. That's still High settings, so there's some room to improve by reducing other options, but I think the exercise is sufficient to prove the obvious: AAA+RT is not worth trying on an entry-level Ampere GPU.

VRAM

VRAM size was a big meme in tech/gaming over 2023, thanks to many PC releases with poor optimization at launch that couldn't work decently with even "only 8GB". Most of those games improved after a few patches, but I had this issue on my mind because this laptop's GPU offers just 6GB. (Its 14 Gbps memory speed matches the same-class desktop RTX 3050 8GB, though total bandwidth also depends on bus width.)
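A note on the bandwidth math: 14 Gbps is the per-pin data rate, and total bandwidth also depends on bus width. A quick sketch, assuming the commonly reported bus widths (96-bit mobile, 128-bit desktop), which are not stated in this article:

```python
# Memory bandwidth in GB/s = (bus width in bits / 8) * per-pin data rate in Gbps.
# Bus widths below are my assumption of the usual specs, not from this article.
def bandwidth_gbs(bus_bits: int, gbps_per_pin: float) -> float:
    return bus_bits / 8 * gbps_per_pin

print(bandwidth_gbs(96, 14))   # mobile RTX 3050 6GB → 168.0 GB/s
print(bandwidth_gbs(128, 14))  # desktop RTX 3050 8GB → 224.0 GB/s
```

So even at the same memory speed, the narrower bus would leave the mobile part with about 75% of the desktop card's bandwidth.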

I observed good performance across my whole suite of big games; of course, it's not a large selection, and all raster-only at 1080p. I still wouldn't buy this platform for gaming if I wanted to play every new AAA release, especially at launch rather than after the fifth patch. But that's a game-optimization problem more than an intrinsic limitation of GPUs with 8GB or even 6GB today. One could argue that VRAM size is part of the problem for my poor Ray Tracing results, since RT generally consumes more VRAM than rasterization; but it's not likely the big bottleneck here.


Osvaldo Doederlein

Software engineer at Google. Husband, Father. Likes science fiction, gaming, PC hardware, tech in general.