
Faster gaming on Intel UHD 620 (Kaby Lake R) by upgrading RAM

In searching around the web, I found it surprisingly difficult to find RAM benchmarks (single/dual rank, single/dual channel) for gaming on recent Intel integrated graphics. Ryzen? Sure. Intel with a dedicated video card? Yep. Intel Iris? Some old stuff. The little Intel HD and UHD? Not so much.

It’s a little surprising because there are tons of laptops floating around with Intel’s basic iGPU. There are very few upgrades that can be done to these machines, and RAM is one of them.

Some Benchmark Results

Forewarning:

For the upgraded RAM, I went with the Kingston HyperX Impact line (HX424S14IBK2/32 in the 2x16GB dual kit aka HX424S14IB/16 in the 1x16GB single).

These 16GB sticks are dual-rank whereas the original stick was single-rank. The speeds are the same as stock (DDR4-2400), but the Kingston HyperX Impact runs at CL14, whereas the original stick in the machine used the slower CL17 (the base JEDEC standard that most manufacturers use).

The good news is that if you’re upgrading RAM in a laptop similar to mine for performance reasons, you’ll probably be at least considering the same Kingston HyperX Impact line for its fast timings. That might put you in the same boat as me and make my results very applicable to you.

The bad news is that this isn’t a “pure” single-rank vs dual-rank comparison – for that you would ideally have RAM that is identical in all ways except for being single/dual rank. So for somebody who isn’t upgrading to RAM with tighter timings and is instead hoping to just get a feel for dual-rank benefits, you’ll want to take my results with an extra bit of salt.

Configuration and notable impacts:

The Dell Inspiron 5570 laptop I’m using comes with the 8th generation Intel i5-8250U and UHD 620. Note that the UHD 620 is extremely close on paper to the older HD 530 and HD 630.

Original RAM was a Micron 8GB 2400MHz DDR4 SODIMM, single-rank, CL17.
Upgraded RAM was Kingston HyperX Impact 16GB 2400MHz DDR4 SODIMM, dual-rank, CL14.

  • 8GB results – this is the base (Micron single-rank CL17) operating in single-channel.
  • 16GB results – this is the upgraded stick (Kingston dual-rank CL14) operating in single-channel. Part of the improvements you see in these results may come from “dual rank” and part may come from the tighter CL14.
  • 8GB+16GB – the 2 sticks above mixed (single-rank + dual-rank) operating in “flex mode” (dual channel on the first 16GB of the 24GB total – see the sketch after this list). Note that because the slowest timing (CL17) is used in this configuration, there is no benefit from the faster timings normally offered by the 16GB stick.
  • 16GB+16GB – two upgraded sticks (Kingston dual-rank CL14) operating in dual-channel. The following parts all come together here for a performance improvement: dual rank, dual channel, tighter CL14 timings.
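For anyone unclear on the “flex mode” arrangement mentioned above, here’s a minimal sketch of the arithmetic. This is my own illustration (the function name is made up), not anything pulled from Intel documentation: the matched portion of the two sticks is interleaved across both channels, and whatever is left over runs single-channel.

```python
# Rough sketch of how a "flex mode" split works for mismatched sticks.
# The stick sizes are inputs; the 8GB + 16GB case below matches the
# configuration benchmarked in this article.

def flex_mode_split(stick_a_gb, stick_b_gb):
    """Return (dual_channel_gb, single_channel_gb) for two sticks."""
    dual = 2 * min(stick_a_gb, stick_b_gb)   # matched portion is interleaved across both channels
    single = abs(stick_a_gb - stick_b_gb)    # the remainder runs single-channel
    return dual, single

dual, single = flex_mode_split(8, 16)
print(f"{dual}GB dual-channel + {single}GB single-channel of {dual + single}GB total")
# -> 16GB dual-channel + 8GB single-channel of 24GB total
```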

Let’s start with some games (emphasis on games – you’ll see why later):

 

Intel Integrated UHD 620 - game performance of various RAM combinations

Similar to my previous article, to make an “at a glance” look a little cleaner, I’ve gone with % here rather than listing frame rates (though if you want the frame rates they’re listed in the next sets of charts). The stock stick is listed at the 100% mark for each test.
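To be explicit about what the percentages mean, here’s a tiny sketch of the normalization with made-up FPS numbers (these are purely for illustration, not from my runs):

```python
# Hypothetical example of how results are normalized against the stock stick.
# The stock 8GB configuration is pinned at 100%; the later charts use the same
# idea but express it as % improvement centered on 0% instead.

baseline_fps = 30.0                                  # stock 8GB single-rank CL17
results_fps = {"16GB": 32.2, "8GB+16GB": 33.6, "16GB+16GB": 39.9}

for config, fps in results_fps.items():
    relative = fps / baseline_fps * 100              # 100% = same as stock
    improvement = (fps / baseline_fps - 1) * 100     # 0% = same as stock
    print(f"{config}: {relative:.1f}% of stock ({improvement:+.1f}%)")
```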

The top results (DX11/DX12) were run in Windows – the remainder were on Linux via the Phoronix Test Suite.

Dissecting the Chart

Let’s start with the obvious… Games on the Intel UHD 620 integrated graphics absolutely see a benefit from upgrading RAM. Whether it’s simply adding another stick for dual-channel, or carefully choosing your RAM for dual-rank and timings, you’re getting something.

The elephant in the room: The 16GB + 16GB combination (both sticks dual-rank HyperX) absolutely decimated the other results about 40% of the time. I actually re-ran a number of those tests because it didn’t really make sense:

  • In many ways that 16GB+16GB is just combining the advantages of the 16GB test (HyperX latency and dual-rank) with the advantages of the 8+16GB test (dual channel via flex). An “additive” effect would make sense, but this was way beyond that.
  • The actual RAM capacity (ie 8GB vs 32GB) shouldn’t make a huge difference because there isn’t a lot of memory consumption during these tests – heck, the Linux system uses under 2GB total with Urban Terror running.
  • If it were a matter of flex channel (unmatched capacity) being worse than dual channel (matched capacity), I would have expected similar behavior from the other 60% of games tested. Yet most of the Windows results were closer to “additive” and a game like UT2004 was quite a bit less-than-additive.
  • It didn’t seem to be tied to frame rate.

…the tests were redone the next day and I was left with the same results.

I’ll touch a bit more on the games later.

Results of all 80-something benchmarks – both CPU and GPU stuff

Click for larger images. Note that:

  • First 2: left side is fps, MB/s, points, etc – whatever the benchmark used where higher is better. Right side is % improvement (higher=better).
  • Last 2: left side is time taken where lower is better. Right side is % improvement (higher=better).
  • Instead of centering around 100% the data centers around 0% – again, easier to parse at a glance.

RAM Benchmark Results - Games and Graphics - Higher is better
RAM Benchmark Results - The Rest - Higher is better
RAM Benchmark Results - Timed Benchmarks 1
RAM Benchmark Results - Timed Benchmarks 2

  • Light Green are the games you saw above.
  • Dark Green are 3D programs (generally similar to games but either benchmarks or focused on rendering).
  • Orange tend to revolve around game logic (Chess, Connect 4, Sudoku, etc), but don’t do anything graphically.
  • Teal are memory benchmarks. RAMspeed and Stream are a little more RAM-oriented whereas CacheBench spends its time in the CPU cache.
  • Peach is for compilers. Applicable if you build a lot of software.
  • Pink are the common video encoders (VP9, x264, x265). Applicable if you use Handbrake or do a lot of video editing.
  • Yellow is for file compression.
  • Grey is everything else. CPU-oriented benchmarks, encryption, crypto, etc. Generally more niche use cases.

Summing up the Charts above:

Games had impressive gains. Averages for each of the 3 “upgraded” configurations here are 7.2%, 12.0%, and 33.2% respectively.

3D programs and benchmarks were in the same ballpark as games, though a little less consistent. IndigoBench had a flat 0% across the board, Java2D and RenderBench actually went negative, and GLmark2 takes the cake for the highest 16GB+16GB result at a whopping +102.5%.

The rest…? Ignoring the RAM benchmarks (which are obviously extremely high), a fairly disappointing show overall, with most sitting in the 0-2% range. There are a number of exceptions but this is a situation where you generally have to know a bit about how your program behaves (whether it spends a lot of time accessing memory beyond the L1/L2 cache or not) to predict what sort of benefit you might see.
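If you’re trying to guess whether one of your own programs falls into that category, a crude experiment like the one below can help. It’s a sketch of my own (not part of the benchmark suite used above): stream over an array that fits in cache versus one that only fits in RAM and compare the effective bandwidth. The closer your workload is to the second case, the more a memory upgrade is likely to matter.

```python
# A crude way to see the difference between a cache-resident working set and a
# RAM-resident one: stream over a small array many times vs a large array a few
# times.  Array sizes are rough guesses for a typical laptop's caches, not
# anything specific to the i5-8250U.

import time
import numpy as np

def effective_bandwidth_gbs(n_floats, repeats):
    data = np.ones(n_floats, dtype=np.float64)
    start = time.perf_counter()
    for _ in range(repeats):
        data.sum()                                   # touches every element each pass
    elapsed = time.perf_counter() - start
    return data.nbytes * repeats / elapsed / 1e9

print(f"cache-sized (~256KB): {effective_bandwidth_gbs(32_000, 10_000):.1f} GB/s")
print(f"RAM-sized   (~512MB): {effective_bandwidth_gbs(64_000_000, 5):.1f} GB/s")
```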

Power Consumption

After reading the above, you may be thinking “okay, that got a little complicated at times… but a benefit is a benefit, this is one of the few things I can upgrade in my laptop, and I’m leaning towards a RAM upgrade”.

Not so fast!

DRAM power consumption

Focus on the bottom 2 rows.

The RAM upgrade does have a downside: increased power consumption.

This generally isn’t significant: on a 42Wh battery, the upgrade from 8GB to 32GB of RAM would result in me going:

  • from 7h36m to 7h0m at idle
  • from 1h11m to 1h9m if sitting in my Garrison in WoW

Technically, you’re looking at an extra kWh of power for every 500-2500 hours of “on” time, depending on what you’re doing. Depending on the electricity rates where you live, that might mean another $0.10-$0.25 every 3 weeks if gaming non-stop, or every 3 months if idle 24/7.
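For anyone who wants to sanity-check those figures, here’s a rough sketch of my back-of-the-envelope arithmetic, assuming the extra draw is simply the 42Wh capacity divided by the measured runtimes (so treat the outputs as approximations):

```python
# Back-of-the-envelope arithmetic behind the battery-life figures above
# (42Wh battery, runtimes from my own measurements -- approximate).

battery_wh = 42.0

def extra_watts(hours_before, hours_after):
    """Extra average draw implied by the drop in runtime."""
    return battery_wh / hours_after - battery_wh / hours_before

idle_extra = extra_watts(7 + 36 / 60, 7.0)          # 7h36m -> 7h00m
wow_extra = extra_watts(1 + 11 / 60, 1 + 9 / 60)    # 1h11m -> 1h09m

for label, watts in [("idle", idle_extra), ("WoW Garrison", wow_extra)]:
    hours_per_kwh = 1000 / watts                    # hours of use per extra kWh
    print(f"{label}: +{watts:.2f}W, ~1 extra kWh every {hours_per_kwh:.0f} hours")
```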

That said, lots of RAM means less time accessing the hard drive, so in actuality it will probably *save* power overall if you’re doing any amount of work that touches the hard drive (or playing games which often do). This is just harder to measure because we all use our machines differently and don’t all have the same make/model of hard drive.

It’s worth knowing about anyway.

Conclusion / Recommendations

I came into this with a focus on integrated graphics performance in games. As it turned out, that’s where the RAM upgrade has the most benefit. It may not be enough to turn a slideshow into a playable game, but as those of you who have played games at 20-30fps know: every extra frame (or few frames) makes a notable difference and can bring a game from “I can’t take this” to “okay this is bearable”. If you’re in the 60+ fps range in a game already, capping the frame rate can save some power.

When it comes to other programs that don’t utilize the integrated graphics, a RAM upgrade just isn’t enticing when it comes to performance unless you’re already memory-starved or are using specific workloads. Heck, the undervolt I previously benchmarked usually gave more performance in CPU-centric tasks than this RAM upgrade. And the undervolt was free!

Thus, when it comes to recommendations:

If you play games and are on 1 stick of RAM: Grab a 2nd stick, even if it’s a different capacity. Dual channel “flex” mode does work and provides a sizeable benefit. Note that unmatched sticks, particularly from different manufacturers, can cause compatibility issues with some laptops – sometimes putting the stick with “weaker” timings in slot 1 can avoid this, but other times you just have to spring for a kit.

If you play games and are on 2 sticks of RAM (dual channel): With RAM prices being what they are in 2018, I don’t think I’d upgrade just for timings or dual-rank unless you’re desperate for every ounce of performance. But if upgrading for the sake of capacity (ie 2x4GB to 2x16GB), I’d look at dual-rank and tight timings (as long as the price difference isn’t extreme) since you’re doling out a chunk of money already anyway. Note that as it stands most 16GB sticks are dual-rank by necessity. For 8GB or smaller, most are single-rank so you really have to do your shopping carefully.
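If you’re not sure what’s already installed, the rank of the existing stick(s) is usually visible in the SPD/SMBIOS data. Here’s a small sketch for Linux that parses dmidecode output (the helper name is mine; it needs root). On Windows, a tool like CPU-Z shows the same information.

```python
# Small sketch for checking the rank of installed memory on Linux by parsing
# dmidecode's "Memory Device" output (requires root).  The helper name is mine.

import subprocess

def installed_memory_info():
    out = subprocess.run(
        ["dmidecode", "--type", "memory"],
        capture_output=True, text=True, check=True,
    ).stdout
    interesting = ("Size:", "Part Number:", "Rank:", "Speed:")
    return [line.strip() for line in out.splitlines()
            if line.strip().startswith(interesting)]

if __name__ == "__main__":
    for line in installed_memory_info():
        print(line)   # e.g. "Size: 16384 MB", "Rank: 2", "Part Number: ..."
```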

If you play games and your motherboard has more than 2 banks of RAM – or if you are overclocking: Be cautious with dual-rank, as it’s been known to limit the max RAM frequency when overclocking. It can also be less well-behaved (and provide less benefit, or none at all) when you use more than 2 DIMMs. Single-rank is safer, and 4x single-rank should behave fairly similarly to 2x dual-rank anyway.

If you play games, but on dedicated graphics (AMD/nVidia): Do not use my results in making a decision. Most benchmarks I’ve seen in the wild indicate that Ryzen gets a notable benefit, and that Intel may receive a slight benefit, but search around for something similar to your setup.

If you do not play games: Unless your workload is similar to something in the chart (or you find other benchmarks that indicate it might benefit), I’d probably just get whatever RAM is cheapest from the manufacturer you prefer and wouldn’t worry a whole lot about single/dual rank/channel or timings. Frequency is another matter, but if you’re on a basic laptop without XMP or overclocking options, you probably don’t have much choice when it comes to frequency to begin with.

3 Comments

  1. Thanks for sharing, will be considered at my next PC build & buy

  2. Tiago Silva

    Hello and thank you for this science here! It’s really difficult to find any testing on this UHD 620.
    I’m just wondering if you have experience editing 1080p videos with this GPU. Does it lag while previewing, or when going backwards and forwards during editing? And what about rendering a 10 minute video, for example? I’m thinking of opting for this machine with an SSD, but my concern is the Intel UHD 620 not being capable of handling these tasks properly.
    Thanks again for the great piece of info you brought us.

    • I haven’t delved into video editing on this machine yet, though I’ve delved into editing on various Intel machines in the past. Maybe someone else who has would be willing to chime in.

      That said, the answer is likely to depend at least partially on the editing software being used. For example, software that makes use of Intel’s QuickSync where appropriate, and software that is highly optimized across hardware platforms is likely to do quite well. If using a program that does CPU-based (“software”) rendering and encoding, the GPU won’t have much impact on that portion of the process.

      If you have access to another Intel-based system that is relatively recent (within 5 years or so… say… Ivy Bridge or newer excluding eDRAM models), it could be worth downloading a demo of whatever software you’re considering to test on it to get a feel for the rough ballpark you might be hitting performance wise.
