Faster gaming on Intel UHD 620 (Kaby Lake R) by upgrading RAM

In searching around the web, I found it surprisingly difficult to find RAM benchmarks (single/dual rank, single/dual channel) for gaming on recent Intel integrated graphics. Ryzen? Sure. Intel with a dedicated video card? Yep. Intel Iris? Some old stuff. The little Intel HD and UHD? Not so much.

It’s a little surprising because there are tons of laptops floating around with Intel’s basic iGPU. There are very few upgrades that can be done to these machines, and RAM is one of them.

Some Benchmark Results

Forewarning:

For the upgraded RAM, I went with the Kingston HyperX Impact line (HX424S14IBK2/32 for the 2x16GB dual kit, aka HX424S14IB/16 for the 1x16GB single stick).

These 16GB sticks are dual-rank, whereas the original stick was single-rank. The speeds are the same as stock (DDR4-2400), but the Kingston HyperX Impact here runs at CL14, whereas the original stick that shipped in the machine used the slower CL17 (the base JEDEC standard that most manufacturers use).
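For context, CAS latency is quoted in clock cycles, so converting to actual time makes the difference easier to appreciate: DDR4 transfers data twice per clock, meaning one cycle of DDR4-2400 lasts 2000/2400 ns. A quick sketch of that standard arithmetic (the function is my own illustration):

```python
def cas_latency_ns(cl: int, transfer_rate_mts: int) -> float:
    """True CAS latency in nanoseconds: DDR transfers data twice per
    clock, so one clock cycle lasts 2000 / transfer_rate nanoseconds."""
    return cl * 2000 / transfer_rate_mts

print(cas_latency_ns(17, 2400))  # stock CL17 stick: ~14.17 ns
print(cas_latency_ns(14, 2400))  # HyperX CL14:      ~11.67 ns (~18% lower)
```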

The good news is that if you’re upgrading RAM in a laptop similar to mine for performance reasons, you’ll probably be at least considering the same Kingston HyperX Impact line for its fast timings. This might put you in the same boat as me and make my results very applicable to you.

The bad news is that this isn’t a “pure” single-rank vs dual-rank comparison – for that you would ideally have RAM that is identical in all ways except for rank. So if you aren’t upgrading to RAM with tighter timings and are instead hoping just to get a feel for dual-rank benefits, you’ll want to take my results with an extra grain of salt.

Configuration and notable impacts:

The Dell Inspiron 5570 laptop I’m using comes with the 8th generation Intel i5-8250U and UHD 620. Note that the UHD 620 is extremely close on paper to the older HD 530 and HD 630.

Original RAM was a Micron 8GB DDR4-2400 SODIMM, single-rank, CL17.
Upgraded RAM was a Kingston HyperX Impact 16GB DDR4-2400 SODIMM, dual-rank, CL14.

  • 8GB results – this is the base (Micron single-rank CL17) operating in single-channel.
  • 16GB results – this is the upgraded stick (Kingston dual-rank CL14) operating in single-channel. Part of the improvement you see in these results may come from dual-rank and part may come from the tighter CL14.
  • 8GB+16GB – the 2 sticks above mixed (single-rank + dual-rank) operating in “flex mode” (dual channel across the first 16GB of the 24GB total – see the sketch after this list). Note that because the slowest timing is used in this configuration (CL17), there is no benefit from the faster timings normally offered by the 16GB stick.
  • 16GB+16GB – two upgraded sticks (Kingston dual-rank CL14) operating in dual-channel. The following parts all come together here for a performance improvement: dual-rank, dual-channel, and the tighter CL14 timings.
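To make the flex-mode layout concrete, here’s a minimal sketch of how the capacity splits – my own rough model of Intel Flex Memory behavior, not code from Intel: the overlapping capacity across both sticks is interleaved dual-channel, and whatever remains on the larger stick runs single-channel.

```python
def flex_mode_regions(stick_a_gb: int, stick_b_gb: int):
    """Rough model of Intel Flex Memory with mismatched sticks: the
    overlapping capacity is interleaved dual-channel; the remainder of
    the larger stick runs single-channel."""
    dual_channel = 2 * min(stick_a_gb, stick_b_gb)
    single_channel = abs(stick_a_gb - stick_b_gb)
    return dual_channel, single_channel

print(flex_mode_regions(8, 16))   # (16, 8): first 16GB dual-channel, last 8GB single
print(flex_mode_regions(16, 16))  # (32, 0): fully dual-channel
```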

Let’s start with some games (emphasis on games – you’ll see why later):

[Chart: game benchmark results, shown as % of stock-stick performance]

Similar to my previous article, to make an “at a glance” look a little cleaner, I’ve gone with % here rather than listing frame rates (though if you want the frame rates they’re listed in the next sets of charts). The stock stick is listed at the 100% mark for each test.

The top results (DX11/DX12) were run in Windows – the remainder were on Linux via the Phoronix Test Suite.

Dissecting the Chart

Let’s start with the obvious… Games on the Intel UHD 620 integrated graphics absolutely see a benefit from upgrading RAM. Whether it’s simply adding another stick for dual-channel, or carefully choosing your RAM for dual-rank and timings, you’re getting something.

The elephant in the room: The 16GB + 16GB combination (both sticks dual-rank HyperX) absolutely decimated the other results about 40% of the time. I actually re-ran a number of those tests because it didn’t really make sense:

  • In many ways that 16GB+16GB is just combining the advantages of the 16GB test (HyperX latency and dual-rank) with the advantages of the 8+16GB test (dual channel via flex). An “additive” effect would make sense, but this was way beyond that.
  • The actual RAM capacity (i.e. 8GB vs 32GB) shouldn’t make a huge difference because there isn’t a lot of memory consumption during these tests – heck, the Linux system uses under 2GB total with Urban Terror running.
  • If it were a matter of flex channel (unmatched capacity) being worse than dual channel (matched capacity), I would have expected similar behavior from the other 60% of games tested. Yet most of the Windows results were closer to “additive” and a game like UT2004 was quite a bit less-than-additive.
  • It didn’t seem to be tied to frame rate.

…the tests were redone the next day and I was left with the same results.
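For the curious, here’s what a purely “additive” expectation would look like using the game averages from later in this article (roughly +7.2% for the 16GB stick alone and +12.0% for flex-mode dual channel) – a simple sanity check, not a rigorous model:

```python
def expected_combined(gain_a: float, gain_b: float) -> float:
    """Naive 'additive' expectation: stack two independent speedups
    multiplicatively (for small gains this is close to simply adding)."""
    return (1 + gain_a) * (1 + gain_b) - 1

# ~+7.2% (dual-rank CL14 stick) stacked with ~+12.0% (dual channel)
# predicts roughly +20% -- well short of the +33.2% game average
# actually observed for 16GB+16GB.
print(f"{expected_combined(0.072, 0.120):.1%}")  # 20.1%
```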

I’ll touch a bit more on the games later.

Results of all 80-something benchmarks – both CPU and GPU stuff

Note that:

  • First 2: left side is fps, MB/s, points, etc – whatever the benchmark used where higher is better. Right side is % improvement (higher=better).
  • Last 2: left side is time taken where lower is better. Right side is % improvement (higher=better).
  • Instead of centering around 100%, the data centers around 0% – again, easier to parse at a glance (the sketch below shows the exact arithmetic).
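For reference, the percentages in all four charts boil down to the same calculation, with the sign flipped for timed benchmarks – a minimal sketch:

```python
def pct_improvement(baseline: float, result: float,
                    lower_is_better: bool = False) -> float:
    """Percent improvement over the stock-stick baseline. For scores
    (fps, MB/s, points) higher is better; for timed benchmarks a
    *reduction* in time counts as improvement."""
    if lower_is_better:
        return (baseline - result) / baseline * 100
    return (result - baseline) / baseline * 100

print(pct_improvement(30.0, 40.0))                          # 40fps vs 30fps -> +33.3%
print(pct_improvement(120.0, 100.0, lower_is_better=True))  # 100s vs 120s   -> +16.7%
```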

[Charts: RAM Benchmark Results – Games and Graphics (higher is better); The Rest (higher is better); Timed Benchmarks 1 and 2 (lower is better)]

  • Light Green are the games you saw above.
  • Dark Green are 3D programs (generally similar to games but either benchmarks or focused on rendering).
  • Orange tend to revolve around game logic (Chess, Connect 4, Sudoku, etc), but don’t do anything graphically.
  • Teal are memory benchmarks. RAMspeed and Stream are a little more RAM-oriented whereas CacheBench spends its time in the CPU cache.
  • Peach is for compilers. Applicable if you build a lot of software.
  • Pink are the common video encoders (VP9, x264, x265). Applicable if you use Handbrake or do a lot of video editing.
  • Yellow is for file compression.
  • Grey is everything else. CPU-oriented benchmarks, encryption, crypto, etc. Generally more niche use cases.

Summing up the Charts above:

Games had impressive gains. Averages for each of the 3 “upgraded” categories here are 7.2%, 12.0%, and 33.2% (16GB, 8GB+16GB, and 16GB+16GB respectively).

3D programs and benchmarks were in the same ballpark as games, though a little less consistent. IndigoBench had a flat 0% across the board, Java2D and RenderBench actually went negative, and GLmark2 takes the cake for the highest 16GB+16GB result at a whopping +102.5%.

The rest…? Ignoring the RAM benchmarks (which are obviously extremely high), a fairly disappointing show overall, with most sitting in the 0-2% range. There are a number of exceptions but this is a situation where you generally have to know a bit about how your program behaves (whether it spends a lot of time accessing memory beyond the L1/L2 cache or not) to predict what sort of benefit you might see.
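If you’re unsure which camp a workload falls into, a crude way to get a feel for it is to compare throughput on a working set that fits in cache against one that doesn’t. Here’s a rough, hypothetical sketch using NumPy (absolute numbers will vary wildly by machine – only the gap between the two matters):

```python
import time
import numpy as np

def sum_throughput_gbs(n_elems: int, repeats: int) -> float:
    """GB/s achieved summing a float64 array of n_elems, averaged over repeats."""
    data = np.ones(n_elems, dtype=np.float64)
    start = time.perf_counter()
    for _ in range(repeats):
        data.sum()
    elapsed = time.perf_counter() - start
    return n_elems * 8 * repeats / elapsed / 1e9

# ~256 KB working set: fits in L2, mostly exercises the cache.
print("cache-resident:", sum_throughput_gbs(32_000, 10_000))
# ~800 MB working set: far beyond L3, dominated by RAM bandwidth.
print("RAM-bound:     ", sum_throughput_gbs(100_000_000, 3))
```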

Power Consumption

After reading the above, you may be thinking “okay, that got a little complicated at times… but a benefit is a benefit, this is one of the few things I can upgrade in my laptop, and I’m leaning towards a RAM upgrade”.

Not so fast!

[Chart: DRAM power consumption]

Focus on the bottom 2 rows.

The RAM upgrade does have a downside: increased power consumption.

This generally isn’t significant: on a 42Wh battery, the upgrade from 8GB to 32GB of RAM would result in me going:

  • from 7h36m to 7h0m at idle
  • from 1h11m to 1h9m if sitting in my Garrison in WoW

Technically you’re looking at an extra kWh of power every 500-2500 hours of “on” time depending on what you’re doing which might mean another $0.10-$0.25 every 3 weeks (if gaming non-stop – on the other end 3 months if idle 24/7) depending on the electricity rates where you live.

That said, more RAM means less time accessing the hard drive, so in actuality it will probably *save* power overall if you’re doing any amount of work that touches the hard drive (or playing games, which often do). This is just harder to measure because we all use our machines differently and don’t all have the same make/model of hard drive.

It’s worth knowing about anyway.

Conclusion / Recommendations

I came into this with a focus on integrated graphics performance in games. As it turned out, that’s where the RAM upgrade has the most benefit. It may not be enough to turn a slideshow into a playable game, but as those of you who have played games at 20-30fps know: every extra frame (or few frames) makes a notable difference and can bring a game from “I can’t take this” to “okay this is bearable”. If you’re in the 60+ fps range in a game already, capping the frame rate can save some power.

When it comes to other programs that don’t utilize the integrated graphics, a RAM upgrade just isn’t enticing when it comes to performance unless you’re already memory-starved or are using specific workloads. Heck, the undervolt I previously benchmarked usually gave more performance in CPU-centric tasks than this RAM upgrade. And the undervolt was free!

Thus, when it comes to recommendations:

If you play games and are on 1 stick of RAM: Grab a 2nd stick, even if it’s a different capacity. Dual-channel “flex” mode does work and provides a sizable benefit. Note that unmatched sticks, particularly from different manufacturers, can cause compatibility issues with some laptops – sometimes putting the stick with “weaker” timings in slot 1 can avoid this, but other times you just have to spring for a kit.

If you play games and are on 2 sticks of RAM (dual channel): With RAM prices being what they are in 2018, I don’t think I’d upgrade just for timings or dual-rank unless you’re desperate for every ounce of performance. But if upgrading for the sake of capacity (e.g. 2x4GB to 2x16GB), I’d look at dual-rank and tight timings (as long as the price difference isn’t extreme) since you’re doling out a chunk of money already anyway. Note that as it stands, most 16GB sticks are dual-rank by necessity; for 8GB or smaller, most are single-rank, so you really have to do your shopping carefully.
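If you’re not sure what rank the sticks already in a machine are, on Linux dmidecode will usually tell you. A hypothetical helper, assuming the firmware actually populates the SMBIOS “Rank” attribute (some machines report “Unknown”, in which case you’re back to checking the part number):

```python
import subprocess

def list_module_ranks():
    """Parse `dmidecode -t memory` (run as root) into (size, rank) pairs.
    Assumes the firmware populates the SMBIOS 'Rank' attribute."""
    out = subprocess.run(["dmidecode", "-t", "memory"],
                         capture_output=True, text=True, check=True).stdout
    modules, size = [], None
    for raw in out.splitlines():
        line = raw.strip()
        if line.startswith("Size:"):
            size = line.split(":", 1)[1].strip()
        elif line.startswith("Rank:") and size not in (None, "No Module Installed"):
            modules.append((size, line.split(":", 1)[1].strip()))
    return modules

print(list_module_ranks())  # e.g. [('16384 MB', '2'), ('16384 MB', '2')]
```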

If you play games and your motherboard has more than 2 banks of RAM — or if you are overclocking: Be cautious with dual-rank, as it’s been known to limit the max RAM frequency when overclocking. It can also be less well-behaved (and have less benefit or no benefit) when you use more than 2 DIMMs. Single-rank is safer, and 4x single-rank should behave fairly similarly to 2x dual-rank anyway.

If you play games, but on dedicated graphics (AMD/NVIDIA): Do not use my results in making a decision. Most benchmarks I’ve seen in the wild indicate that Ryzen gets a notable benefit and that Intel may receive a slight one, but search around for something similar to your setup.

If you do not play games: Unless your workload is similar to something in the chart (or you find other benchmarks that indicate it might benefit), I’d probably just get whatever RAM is cheapest from the manufacturer you prefer and wouldn’t worry a whole lot about single/dual rank/channel or timings. Frequency is another matter, but if you’re on a basic laptop without XMP or overclocking options, you probably don’t have much choice when it comes to frequency to begin with.

13 Comments
  1. Dirk on August 29, 2018
    Thanks for sharing – this will be considered for my next PC build & buy.
  2. Tiago Silva on October 26, 2018
    Hello, and thank you for the science here! It's really difficult to find any testing on this UHD 620.
    I'm just wondering if you have experience editing 1080p video with this GPU. Does it lag while previewing, or when scrubbing backwards and forwards during editing? And what about rendering a 10-minute video, for example? I'm thinking of opting for this machine with an SSD, but my concern is the Intel UHD 620 not being capable of handling these tasks properly.
    Thanks again for the great piece of info you brought us.
    • Matt Gadient on October 26, 2018
      I haven't delved into video editing on this machine yet, though I've delved into editing on various Intel machines in the past. Maybe someone else who has would be willing to chime in.

      That said, the answer is likely to depend at least partially on the editing software being used. For example, software that makes use of Intel's QuickSync where appropriate, and software that is highly optimized across hardware platforms is likely to do quite well. If using a program that does CPU-based ("software") rendering and encoding, the GPU won't have much impact on that portion of the process.

      If you have access to another Intel-based system that is relatively recent (within 5 years or so... say... Ivy Bridge or newer excluding eDRAM models), it could be worth downloading a demo of whatever software you're considering to test on it to get a feel for the rough ballpark you might be hitting performance wise.
  3. Jakub on January 15, 2019
    Hello! How did you achieve such a great improvement in WoW? I upgraded from one 8GB single-rank stick to 2x 8GB single-rank, and CPU-Z confirmed it's running dual channel, but in WoW I can't see any difference. The only scenes where my FPS is under 50 are particle-heavy ones, and I'm still getting the same FPS drops there after the upgrade.
    On my desktop PC, this problem was solved by overclocking the CPU, but on the laptop with the i5-8250U the CPU is only at around 20% usage during these scenes while the GPU is at 100% load. That's confusing me. Is the Intel UHD 620 the bottleneck? Particles should be handled mainly by the CPU, I suppose. Sorry for the duplicate – I didn't fill in any initials the first time. :X
    • Matt Gadient on January 15, 2019
      Both comments came through (without an email address the comment still makes it, but instead of a pending comment message the page looks like it simply refreshed).

      In any case, to elaborate a little I did the WoW testing from within my Garrison since it's an instanced area where I can log/relog without changing position/rotation, framerates stayed relatively consistent across multiple runs, and there weren't other players nearby who can influence the results. This may obviously be different from actual gameplay where there are a lot of other factors that can come into play including what area/environment you're in, how many players are around, etc.

      As for particles, nobody will know their engine as well as Blizzard does, but generally while particles do tend to be CPU-heavy they're often also GPU heavy. They can result in additional draw calls, may have an impact on shader passes (particularly if they use transparency or otherwise are impacted by their surroundings), can use somewhat complex shaders themselves, and so on. To be clear though, there are a number of ways to do "particles" and the CPU/GPU impact of each method can vary widely.

      The UHD 620 will almost certainly be the bottleneck in most of the stuff you play (including WoW): Intel's integrated graphics has come a long way since... oh... say the days of the GMA line, but it still tends to be trounced by even a mid-range modern dedicated GPU. Intel iGPUs are power-efficient and come "free" as part of the processor, but dedicated GPUs are still the ones with the muscle.
  4. KirkArg on January 19, 2019
    Excellent info. I have the same notebook but with a dedicated AMD 530, and because of the poor thermal design the CPU and GPU are always throttling and I lose performance. So I was thinking of upgrading the RAM to 2x G.Skill 8GB Ripjaws (the Kingston Impact is way too expensive in my country). My idea is to disable the dedicated GPU and use the UHD 620 instead – would you recommend doing so? Thanks in advance
    • Matt Gadient on January 19, 2019
      Assuming the AMD GPU can be disabled in the BIOS, I'd probably give that a shot on its own first and do a few before/after benchmarks to see how the UHD620 compares under a potentially-throttled load. If the UHD620 comes out ahead, then you'd have to decide whether any gains via a RAM upgrade are a worthwhile tradeoff for you compared to the cost. OTOH, if the AMD 530 comes out ahead, since it uses dedicated video memory, moving to dual-rank/dual-channel memory probably isn't going to give much benefit at all (see the non-game benchmark charts for what I'd expect): in that case I'd probably look at it simply from a "do I want more RAM in this machine, and how much" point of view.

      Edit: If the AMD 530 you have is the 4GB/GDDR5 variant, benchmarks seem to point to it being quite a bit faster than the UHD620 - for the UHD620 to come out ahead I suspect it would have to be either throttling extremely badly or be generating so much heat that the CPU is forced to throttle badly. The weaker AMD 530 variants on the other hand could be a closer call since benchmarks seem to align them a little closer to the UHD620.
      • KirkArg on January 19, 2019
        Thanks for the response. So far I've seen that in games like Overwatch the CPU throttles to about 45-50% and the GPU clock drops from 1,100 MHz to 600 MHz, yet the game runs fine, so my goal for this specific game would be to lower the system temperatures if I manage to disable the dedicated GPU. (If I can't do it from the BIOS, I can try the advanced config in the power settings.)
        My model is the one with 4GB; in benchmarks it's usually 20% faster, but they don't say what RAM configuration they're using for the test, so it's quite complicated to estimate the gains of the upgrade.
        For now, I will try to find a way to disable (or not) the GPU under specific programs on the fly.
        My system has 1 stick of 16GB, the same brand you used to have. I'm not looking to get more than 16GB (my peak RAM use was 12GB, but it's usually around 8GB), so I've come to the conclusion that 2x 8GB is a good investment not only for the iGPU but also for the system.
        One other thing I'm going to try, and it may be off-topic, is changing the thermal compound to Kryonaut – it could probably lead to better temperatures. I'm doing some research for now, but it could be a good complement.
        Thanks for your time, and sorry if my English is a little bit broken
        • KirkArg on February 4, 2019
          Hi there, I just want to give you a quick update on my own adventure with this laptop.
          So far, what I did to avoid thermal throttling is reduce the max workload on the CPU to 80% – only when I want to play a game. How I got to that number was straightforward but boring: I bought the 3DMark software and, while running Core Temp, started taking notes on the GPU score, CPU score, and max temp. The numbers were quite interesting: between 50% and 80% workload the GPU scores were almost the same, but the CPU scores went from ~1400 to ~2200 while the temperatures stayed almost the same at 78-82°C. If we consider that this CPU has a max TDP of 25W and I'm reducing performance by 20%, we could estimate that the heatsink can efficiently dissipate about 20W; beyond that, the CPU easily hits 92-98°C and starts throttling.
          About performance: in some games that aren't CPU-intensive I couldn't find any considerable drop in FPS, which makes sense because the real bottleneck here is the low-end GPU. So the balance of locking the CPU workload to 80% (even 70% works fine, but it's nice to have that extra 10% for some heavy-duty moments) is the best I could find.
          Now that I've done this, I'm going to upgrade the RAM and disable the dedicated GPU to see if there is any improvement.
          Note: to make the task of changing power settings easy, I created 2 bat files that each switch to a specific power setting, and put them on the taskbar.
          • Matt Gadient on February 4, 2019
            Interesting to hear your results!

            Just a quick note in regards to the 20W / 25W bits: do keep in mind that the relationship between power consumption and frequency/voltage often isn't linear (80% workload may result in less than 80% power usage). If you end up needing to use a specific wattage when determining something you may want to either grab a Kill-A-Watt to measure from the wall, or fire up Intel's Power Gadget tool (or Intel XTU) to see what Intel reports for on-die power.
  5. Anonymous on November 13, 2019
    Certainly good work!
    Thanks.
  6. Anonymous on July 5, 2020
    Thanks for the information. There really isn't much information out there on how to improve a laptop with Intel UHD 620 graphics. I was looking for something like this – thanks, some of my doubts have been clarified. 4GB+4GB RAM (dual channel) plus an SSD could sound interesting too.
  7. mayt200 on February 8, 2022
    Nice work, it helps a lot! Big THX!
