7 watts idle on Intel 12th/13th gen: the foundation for building a low power server/NAS

We shall start with a bit of history:

Not all my systems have been so successful. In 2022 I measured a couple other systems at 19 watts and 27 watts as part of Curbing the “Gas-Guzzling” tendencies of AMD Radeon with Multi-Monitor. While I did manage to get that 27 watt AMD system down in power some time later, not every CPU/motherboard combo is destined for the 10 watt ballpark.

Before going further, note that the 7 watt figure for this system was measured before any storage was added. The 7 watts (measured at the wall) includes:

  • Motherboard (Intel H770)
  • CPU (Intel i5-12400)
  • 64GB RAM
  • SSD (booting Ubuntu Server 23.04)
  • PSU (Corsair)
  • C-States set up in BIOS so that it reaches C8
  • powertop with auto-tune (which disabled the USB keyboard when the port went to sleep)

Note that if I don’t allow powertop to disable the keyboard, I get 8 watts measured at the wall.
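
In case it helps with reproducing this, the powertop portion looks roughly like the following (a minimal sketch – the USB device path is an example; find yours under /sys/bus/usb/devices):

# apply every suggested tuning, including USB autosuspend
sudo powertop --auto-tune

# interactive mode: the "Tunables" and "Idle stats" tabs show what changed
sudo powertop

# to spend the extra watt and keep the keyboard awake, force power on for
# just that one device instead of reverting everything
echo 'on' | sudo tee /sys/bus/usb/devices/1-3/power/control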

Let’s get into detailed specs and component choices. This time around I had the following goals:

  • low idle power
  • reasonable CPU performance for compression
  • able to handle 12 hard drives + at least 1 NVMe
  • capacity to (eventually) convert those 12 hard drives to 6 NVMe + 6 SSD SATA
  • keep costs under control – since a motherboard purchase would be required, try to stick with DDR4 and reuse a CPU I already have.

Putting together a new system with the hopes of getting in the ballpark of the 10 watt range *measured from the wall* is often not only a challenge, but a bit of a gamble. Sometimes you just have to take your best educated guesses in terms of components, build your rig, tune what you can, and let the chips fall where they may.

Motherboard – ASUS Prime H770-Plus D4

Before I begin, here is a quick look at the motherboard layout. The GREEN CPU-connected slots and ORANGE chipset-connected slots will become relevant throughout this write-up.

ASUS PRIME H770 with M.2 and PCIe port layout

At the time of writing, widely available consumer options were motherboards in the Intel 600/700-series and AMD 500/600-series.

One of my goals above was the capacity for an eventual 6 NVMe drives.

Digging into deeper details as to why this can be a challenge (feel free to skip this section)…

Problem: There are 0 consumer motherboards with 6x M.2 slots that can all be used at the same time in PCIe mode. On AMD the MEG X570S Unify-X Max *looks* like it does, but check the manual and you’ll find that if you try to populate all 6, the last one has to be a SATA variant. The ASRock Z790 PG Sonic also has 6 slots, but you can only use 5 of them (with a legitimate excuse: they offer a Gen5 NVMe slot but it comes with an either/or caveat).

Why This Problem Exists: There are chipset lane limitations on consumer boards. Assuming I want the ability to run all M.2 in Gen4x4, and assuming a manufacturer were actually willing to devote all the lanes to M.2 NVMe slots (they’re not), AMD X570 and Intel B760 would max out at three M.2 slots, with AMD B650 and Intel H670/Q670/Z690/W680 managing four. Five M.2 slots are possible on AMD X670 and Intel H770 boards. Six on a Z790 board. Beyond that, extraordinary measures like robbing the main PCIE slot of lanes would be required. If sheer M.2 count were desired, manufacturers could theoretically run lanes in Gen4x2 or add some Gen3 M.2 slots, but at that point they’ve created a *very* niche product.

The Solution: PCI-E to M.2 adapters became necessary. Now when searching for a motherboard, it became a matter of adding the included M.2 slots to whatever available PCI-E slots were capable of x4 or higher. My options were now limited to AMD X570, Intel H770, and Intel Z790 motherboards. Note that while using bifurcation is a possibility on some motherboards to get more than 1 NVMe out of the main PCIe slot, I decided not to rely on it.

I decided to go the Intel route for a few reasons:

  1. Chipset TDP: 600/700-series Intel chipsets all have a 6W TDP, whereas the TDP of the AMD X670 chipset is pretty high (7W+7W, since it’s a two-chip design). AMD chipset power consumption has concerned me for a while, as previous X570 chipsets had a TDP of 11W and needed a fan.
  2. Chipset Speed: Intel H670/Q670/W680/Z690/H770/Z790 chipsets have a DMI 4.0 x8 link to the CPU. AMD X570/B650/X670 have a PCIe 4.0 x4 link to the CPU. Theoretical throughput on the Intel should be twice as much as AMD (16GB/s vs 8GB/s).
  3. I already had 64GB of DDR4 that the Intel system could use. AMD 600-series chipsets are all DDR5-only.
  4. I already had an Intel 12th Gen CPU.
  5. I’ve yet to see any positive discussion around AM5 power consumption. At all. Update: as I was writing this, news actually came out about AMD 7000-series CPUs burning/bulging where the motherboard socket pins meet the CPU. Yeah, sorry AMD, not this time.

So Intel it was. After checking out available DDR4 motherboards on the market, I quickly narrowed options to 2 manufacturers: MSI and ASUS.

Don’t care about the board comparisons? Feel free to skip this.

The enticing MSI boards were the PRO Z790-P WIFI DDR4 and Z790-A WIFI DDR4. Nearly identical on the surface, except the “A” is a little more premium (audio, rear ports, heatsinks, power phases, etc). Pros/cons:

  • Pro: 4x M.2 (Gen4x4) + 1x PCIE Gen5x16 + 1x PCIE Gen4x4 supports a total of 6 Gen4 NVMe
  • Pro: 2x PCIE Gen3x1 extra
  • Pro: 6 SATA ports
  • Con: Intel 2.5G LAN (known to be problematic and buggy)
  • Con: I’m not a fan of the MSI BIOS
  • Con: My current B660 board, which idles at higher power than expected, is also an MSI.

Attractive ASUS options were the Prime H770-Plus D4 and Prime Z790-P D4 (optional WIFI edition). Getting into the TUF, Strix, or ProArt was just too expensive.

I’ll start by listing pros/cons for the H770-Plus:

  • Pro: 3x M.2 (Gen4x4) + 1x PCIE Gen5x16 + 2x PCIE Gen4x4 supports a total of 6 Gen4 NVMe
  • Pro: 2x PCIE Gen3x1 extra
  • Con: Only 4 SATA ports
  • Pro: 2.5G Realtek network adapter (preferable to Intel 2.5G LAN these days – see comments)

The Z790-P D4 is similar except it has more power phases, better heatsinking, more USB ports, extra fan header, and for our purposes…:

  • +1 PCIE Gen4x4
  • -1 PCIE Gen3x1

Ultimately the ASUS Prime H770-Plus D4 was about $100 cheaper at the time and is what I chose.

One upside I’ve found with “cheaper” boards is they tend to have fewer components and thus less vampire power drain at idle, though this isn’t always a certainty.

CPU – Intel i5-12400 (H0 stepping) – Alder Lake

I already had this CPU as part of a previous desktop build. At the time it was chosen for the desktop system because:

  • it had AV1 hardware decode
  • it had the highest performance available in Intel’s 12th-gen lineup while avoiding the E-core silicon overhead
  • in that build, I was getting a new motherboard with 2xDP anyway, and going older-gen didn’t make sense to me.

That desktop build turned out to be a disappointment, and ranks as one of my least favorite builds.

Some details…

I had issues where sometimes only 1 of 2 DP-attached monitors would wake in Linux which meant I had to either pull/reconnect the other DP connector, or manually suspend/resume the system so it could try again.

Another issue was that rebooting between Windows/Linux sometimes caused odd issues which necessitated a full poweroff/restart.

Hardware decode on Ubuntu using Wayland was still problematic, and when programs tried to use it to play video, problems would ensue.

Finally, unlike my previous Intel systems which could all be brought down near the 10 watt mark, this one was idling at 19 watts, though I suspected the MSI motherboard I was using may have been a factor.

Most of the headaches I experienced were related to the GPU and display. Since I was about to build something server-oriented, that was no longer a factor.

MEMORY – 64GB DDR4-3200

Here’s what I used:

  • 2x16GB Kingston HyperX dual-rank (Hynix DJR)
  • 2x16GB Kingston HyperX single-rank (Hynix CJR)

This was memory I already had. I ran the 4 sticks of memory at the XMP profile of the dual-rank kit, which was 16-18-18-36. Everything else was essentially left at the defaults, except that I ran the RAM at 1.25 volts (higher than the stock 1.20V, but lower than the XMP 1.35V setting). TestMem5 and Memtest86 showed stability at 1.22V, but since testing this memory on previous motherboards had shown 1.22V to be unstable, I boosted the voltage to 1.25V for a little extra stability buffer.

Boot Drive – Sandisk Ultra 3D 1TB SSD

This component wasn’t deliberately chosen. When I wanted a fresh Ubuntu Server install for testing, this happened to be the only SSD I had kicking around that wasn’t currently being used. I was going to be doing a lot of A/B testing on PCIE and NVMe devices, so installing Ubuntu 23.04 to a SATA SSD made sense to keep PCIE slots free.

Note that after testing, the main OS was to be run on a Samsung SSD 970 EVO Plus 500GB NVMe. Not much to say except that Samsung stuff tends to reliably go into low power modes.

Having used both drives, I can’t measure any power difference between them in my testing. Tom’s Hardware tested the Samsung idle at 0.072 watts (via ASPM/APST), and Anandtech tested the Sandisk Ultra 3D idle to be 0.056 watts (via ALPM). Both are well below the 1W resolution of my Kill-A-Watt meter.

PSU – Corsair RM750

As much as this 750W PSU may appear to be overkill for a system intended to sit around 10 watts, when 12 drive motors spin up at the same time, the instantaneous load is likely to be quite high. Seagate states 2A/3A DC/AC peak currents on the 12V rail for one of their 10TB 3.5″ drives. Even peak random read/writes can clock in at over 2A.

This bursty power demand has the potential to be problematic if the PSU isn’t up to the task. If an array of 6 drives collectively pull 150-200 watts at the same moment the CPU spikes to pull a peak 120W, that’s a jump from around 10 watts idle to around 400 watts. This could easily cause an instantaneous voltage dip – if it dips enough to cause an immediate crash/reboot it’s probably not a big deal, but if it dips just enough that data is corrupted during a memory refresh or when another drive is mid-write… that’s a more painful problem. Oversizing the PSU to some degree (or adding some in-line capacitors to the power rails) makes sense.

Fortunately, despite operating well outside its peak-efficiency range at these loads, much of the Corsair RM series stays reasonably efficient across a wide range.

Power Measurements – Initial

A few important bits:

  • Power measured from the wall
  • Intel PowerTOP was used to auto-tune settings
  • Ubuntu Server 23.04

A few potentially-important BIOS bits:

  • CPU C-states were enabled in the BIOS (C10)
  • ASPM was enabled with everything set to L1
  • RC6 (Render Standby) enabled
  • Aggressive LPM Support enabled (ALPM)
  • DISABLED: HD Audio, Connectivity Mode, LEDs, GNA Device, Serial Port

9-10 watts was the consumption when the display output was on.

7 watts was the consumption once the display turned off (consoleblank=600 kernel boot parameter for a 600s timer), which is where this system sits most of the week.

8 watts was the consumption if the USB keyboard power management was disabled. If you don’t SSH into the server from elsewhere, spending the extra watt for keyboard use might be necessary.
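
For reference, the consoleblank parameter mentioned above goes on the kernel command line. A minimal sketch, assuming Ubuntu’s GRUB defaults (append to any options already present):

# /etc/default/grub – blank the console after 600 seconds of inactivity
GRUB_CMDLINE_LINUX_DEFAULT="consoleblank=600"

# apply and reboot
sudo update-grub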

Problematic Power Measurements – Loaded up with spinning rust (spun-down)

As mentioned in the beginning, I started with 12 hard drives. Half were 2.5″ and the other half were 3.5″. Because the motherboard only has 4 SATA ports, a SATA controller and a port multiplier were used to handle the remaining drives. Additionally, 4 NVMe drives were used early on: one of them, a Western Digital SN770, had a tendency to get quite hot even at idle, which indicates it probably wasn’t going into a low power mode.

With all the equipment connected, at idle, with display off, and with the 12 drives spun down to standby, I was shocked to see that my idle power consumption had gone from 7 watts all the way up to a whopping 24-25 watts. Far too much! Something was amiss.

Power Consumption Puzzles – High Power Investigation and Diagnosis

I disconnected the hard drives and started testing components one at a time. These were fairly crude tests meant to get a rough idea as to the culprit, so numbers here aren’t precise.

I quickly discovered that the JMB585 SATA controller I was using caused power consumption to increase by something in the 6-10 watt range (precise measurements in a later section). The controller itself is only supposed to take a couple watts, and the tiny heatsink stayed cool, so there was obviously more going on. Where was the power going?

I decided to watch the CPU package C-states. Without the JMB585 SATA controller, the system hit C6. When the JMB585 was reconnected, the best the system hit was C3. Ah ha! But why? Turns out that if a PCIE-connected device won’t go into ASPM L1, the CPU won’t go into as deep a sleep. The JMB585 controller cards don’t seem to have ASPM support.
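
If you want to check the same two things on your own hardware, both are easy to inspect (a quick sketch – turbostat comes from Ubuntu’s linux-tools packages, and 04:00.0 is an example address to replace with your controller’s from lspci):

# watch package C-state residency (the Pkg%pc* columns); powertop's
# "Idle stats" tab shows the same information
sudo turbostat --quiet --interval 5

# check whether a device negotiated ASPM: LnkCap lists what it supports,
# LnkCtl shows what's actually enabled
sudo lspci -vvv -s 04:00.0 | grep -E 'LnkCap:|LnkCtl:'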

A little further experimentation revealed something else that I hadn’t known, and it has to do with C6 vs C8. The system will only hit C8 if there’s nothing hooked up to the CPU-attached PCIE lanes. In other words, if anything is plugged in to the top PCIE slot or the top NVMe slot, C6 is the maximum. The power consumption difference between C6 and C8 *seemed* to be less than a watt in a simple test.

So while C8 would be a luxury, hitting C6 was a must. C3 uses too much power. If SATA controllers were going to prevent the CPU from hitting the best power saving states, I started to wonder whether I should have been looking for a motherboard with 6-8 SATA ports so that I wouldn’t have to rely on add-on controllers…

A little searching for SATA HBAs showed that while there aren’t many options here, the ASM1166 SATA controller should support ASPM L1, though the firmware has to be flashed for it to work properly (and to work at all on newer Intel boards). This was something I’d have to order: I have Marvell and JMicron spares, but they don’t support ASPM. I’d actually been avoiding ASMedia for years, but out of necessity they were now getting another chance: I ordered a couple of ASM1166 6-port SATA controllers.

Aside: BadTLP, Bad! AER Bus Errors from the pcieport

Worth a mention… During initial testing with a WD Black SN770 (Gen4 NVMe), I found a problem when the primary (top CPU-attached) PCIE and NVMe ports were used. Running dmesg resulted in output littered with stuff like:

pcieport 0000:00:06.0: AER: Corrected error received: 0000:02:00.0
nvme 0000:02:00.0: PCIe Bus Error: severity=Corrected, type=Physical Layer, (Receiver ID)
pcieport 0000:00:06.0: PCIe Bus Error: severity=Corrected, type=Data Link Layer, (Transmitter ID)
pcieport 0000:00:06.0: AER: Error of this Agent is reported first
nvme 0000:02:00.0: [ 6] BadTLP

…after much trial-and-error I found that if the “PEG – ASPM” BIOS setting was set to [Disabled] or [L0s] there were no errors.

ASUS PRIME H770-PLUS BIOS Advanced Platform Misc ASPM

Of course, this was a bad option, as [L1] is crucial for power savings. If [L1] or [L0sL1] were used, the only option was to set the Link Speed of those ports to [Gen3], which didn’t stop the errors, but reduced them substantially.

Some research showed the root cause can be any number of things. Because swapping the motherboard or CPU wasn’t a pleasant thought, my best hope was swapping to a different brand of NVMe.

I ordered some Crucial P3 NVMe drives. This turned out to be a successful endeavor: with the WD drives replaced by the Crucial drives, I was no longer getting any errors, though keep in mind these are Gen3 drives.

Power Consumption Puzzles – Finding L1.1 and L1.2 to be enabled on chipset-connected ports only

When I had the 2 Crucial P3 NVMe drives installed in the CPU-connected PCIEx16 slot and the top M2 slot, I noticed higher idle temps than expected. While the NAND sat at about 27-29C, the controllers were reporting 49-50C – much higher than I expected for these particular drives.

I moved the one from the PCIEx16 slot to a chipset-connected PCIEx4 slot. An interesting difference between these drives showed up via lspci -vvv:

CPU-connected M2 slot: L1SubCtl1: PCI-PM_L1.2- PCI-PM_L1.1- ASPM_L1.2- ASPM_L1.1-
Chipset-connected PCIE slot: L1SubCtl1: PCI-PM_L1.2+ PCI-PM_L1.1+ ASPM_L1.2+ ASPM_L1.1+

L1 sub-states only seem to get enabled on the chipset-connected slots. Unfortunate, but it does seem to coincide with the available BIOS settings in the screenshot above.

Let’s reference that motherboard picture again to show the situation:

ASUS PRIME H770 with M.2 and PCIe port layout

I put both NVMe drives on chipset-connected PCIE slots. Now both showed L1.1+/L1.2+ and both controller temps were down from the 49-50C range to 38-41C.
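
If you want to keep an eye on these temperatures yourself, NVMe drives expose both sensors through SMART (a quick sketch using nvme-cli or smartmontools – which sensor corresponds to the controller vs the NAND varies by model):

# composite temperature plus the individual "Temperature Sensor" readings
sudo nvme smart-log /dev/nvme0

# same data via smartmontools
sudo smartctl -a /dev/nvme0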

Unfortunately when attempting various A/B tests using these 2 Crucial NVMe drives with different slot configurations and various BIOS settings, I saw very inconsistent behavior in terms of temperature, though it’s worth noting the JMB585 and an NVMe boot drive were also connected during these tests. For example, both drives might idle at around 40C until a soft reboot at which point 1 (or both) might now idle at the 50C range. Sometimes it seemed possible to keep 1 drive on the CPU-connected M.2 and retain 40C temperatures on both drives as long as the x16 slot wasn’t populated. Presumably I was hitting some sort of bug. The Samsung boot NVMe seemed to keep a consistent idle temperature regardless of what was happening with the Crucial NVMe drives, so I suspected the Crucial drives themselves are at least partly to blame.

Interestingly, sometimes one (or both) controller temps would drop all the way down to the 29C range when on the chipset-connected slots. Since trying to find a low-power 4TB NVMe replacement for the Crucial P3 wasn’t a realistic goal, my best hope at this point was that the ASPM-incompatible JMicron JMB585 was somehow to blame, since it was soon to be replaced with the ASPM-compatible ASMedia ASM1166.

Late Update: I unfortunately didn’t keep track of temperatures throughout the rest of the testing, and heatsinks/airflow between drives have all been jumbled around. But for whatever it’s worth, in the final build my Crucial P3 controller temps are 31-34C, and NAND temps are 23-24C.

Power Consumption Puzzles – Swapping from the JMB585 to the ASM1166

After a couple weeks the ASM1166 arrived. First a couple bits regarding the card which you might find helpful if you’re considering it…

I began with a firmware flash – ASM1166 cards often have old firmware which doesn’t work with Intel 600-series motherboards and from what I understand can have issues with power management. Newer firmware can be found floating around in various places, but I decided to grab a copy from SilverStone (“fix compatibility issue” in the Download section of https://www.silverstonetek.com/en/product/info/expansion-cards/ECS06/) and followed the instructions at https://docs.phil-barker.com/posts/upgrading-ASM1166-firmware-for-unraid/ . Note that the SilverStone files had an identical MD5 to firmware I found by following the thread at https://forums.unraid.net/topic/102010-recommended-controllers-for-unraid/page/8/#comment-1185707 .

For anyone planning to purchase one of these ASMedia cards, I should note that like most SATA controllers and HBAs out there, the quality really varies. One of my cards had a heatsink that was on a bit crooked: the thermal pad was thick enough to prevent it from shorting nearby components, but be aware that these products can be really hit-and-miss. This is one of the situations where paying a little more to buy from somewhere with a good return policy can be prudent.

I did quite a bit of A/B testing, so here is a quick “JMicron JMB585 vs ASMedia ASM1166” in terms of total system power consumption, though it may only be applicable to this platform (or perhaps even this specific motherboard).

JMicron JMB585 vs ASMedia ASM1166

DRIVELESS

First, power consumption without any drives connected to the cards (the SATA SSD boot drive is connected to the motherboard) to get a baseline. PowerTOP used on all devices except for the keyboard (adding +1 watt). Measurements after the display output went to sleep.

  • 8 watts – No SATA controller – C8 power state
  • 9 watts – ASM1166 on a chipset-connected x4 slot – C8 power state
  • 12 watts – JMB585 on the CPU-connected x16 slot – C3 power state
  • 15 watts – JMB585 on a chipset-connected x4 slot – C3 power state
  • 22 watts – ASM1166 on the CPU-connected x16 slot – C2 power state

The ASM1166 does well here if plugged into a chipset-connected slot (only +1 watt), but does horribly if connected to the main PCI-E slot (+14 watts) where the CPU package power state plummets to C2. Shockingly, the JMB585 behaves in the opposite manner: its consumption is lower on the CPU-connected slot (and it didn’t cause C2) – however, you’ll soon see that things change when drives are actually connected…

I did additional testing with the controllers, including playing “musical chairs” with a couple NVMe drives to see if multiple devices would throw a wrench into things, but nothing unexpected took place so I’ll skip those details.

ADDING DRIVES

With baseline measurements complete, next it was time to actually put some drives on these controllers. The SATA SSD boot drive stayed on the motherboard, 2 NVMe drives were added to the mix (chipset-connected unless otherwise noted), and 4 of the 2.5″ SATA hard drives were placed on the controller. I’ll list the “spun down” consumption after the hard drives went into standby – “spun up” was exactly 2 watts higher in every test while the drives were idle.

  • 10 watts – ASM1166 on a chipset-connected x4 slot – C8 power state
  • 11 watts – ASM1166 on a chipset-connected x4 slot with 1 NVMe moved to the CPU-connected x16 slot – C6 power state
  • 11 watts – 2x ASM1166 on chipset-connected x4 slots, with only 1 NVMe drive – C8 power state
  • 16 watts – JMB585 on a chipset-connected x4 slot – C3 power state
  • 24 watts – JMB585 on CPU-connected x16 slot – C2 power state

With 4 drives connected via a chipset-connected slot, the ASM1166 adds +2 watts to system power consumption, whereas the JMB585 adds +8 watts. No contest.
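
Incidentally, if you’re running similar A/B tests, there’s no need to wait on standby timers – drives can be spun down on demand (a minimal sketch, assuming hdparm and a drive at /dev/sdX):

sudo hdparm -y /dev/sdX      # force the drive into standby (spin down) now
sudo hdparm -C /dev/sdX      # report the power state without waking the drive
sudo hdparm -S 120 /dev/sdX  # optional: auto-standby after 10 minutes (120 x 5s)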

An additional benefit is that I was able to use both of the ASM1166 cards in the system, whereas attempting to use both of my JMB585 cards at the same time resulted in the system refusing to boot, though that could be a platform or motherboard-specific issue.

There is a trade-off though – I always found the JMB585 to be rock-solid reliable, including when paired with a JMB575 port multiplier. My past experience with ASMedia SATA controllers has been less than stellar: reliability with the ASM1166 remains to be seen, but at the very least it’s a bad candidate for a port multiplier since it doesn’t support FBS (only CBS).

A couple other minor hiccups that presented with the ASM1166:

  1. When removing/reinserting the NVMe boot drive, a BIOS message appeared claiming that it couldn’t boot due to GPT corruption. The ASM1166 cards had to be temporarily removed for the BIOS to “find” the NVMe boot drive again (after which they could be reinstalled).
  2. The ASM1166 cards claim to have a *lot* of ports – this causes additional boot time as Linux has to iterate through all of them.

ASMedia ASM1166 claiming many ports it does not actually have
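
If you’re curious what the kernel actually sees, the phantom ports are easy to spot (a quick sketch – the port count reported for the controller will be far higher than the physical 6):

# count the ATA ports Linux enumerated
ls /sys/class/ata_port

# check what the AHCI driver reported for the controller
sudo dmesg | grep -i ahci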

Update: SATA and SSD Brands

One of the comments mentioned an older Samsung 840 PRO SSD limiting the system to C3, whereas a Corsair Force GT SSD allowed C8. While those are older drives, I still found this a bit surprising. It was worth investigating.

I used the H770 as a testbed with a Samsung 850 EVO SATA SSD boot drive along with a Crucial P3 NVMe and built a custom kernel to allow the Realtek network adapter to reach L1.2. No ASM1166, just using the Intel onboard SATA. I reached C10 after running powertop with auto-tune and allowing the display to sleep. I tried various drives I have on hand, powering off the system each time to swap drives and repeat the process. Here were the results.

Drives that resulted in the system being stuck at C6:

  • 1TB Patriot P210 SATA SSD

Drives that allowed C10:

  • 500GB Samsung 850 EVO SATA SSD
  • 4TB 2.5″ Seagate SATA HDD
  • 8TB 3.5″ Seagate SATA HDD
  • 14TB Toshiba SATA HDD
  • 1TB Sandisk Ultra 3D SATA SSD
  • 4TB Sandisk Ultra 3D SATA SSD (note: slow trim)
  • 4TB Crucial MX500

I suggest being cautious when selecting SATA SSD brands and models. I’ll try to update this list over time with drives I’ve tested, but keep in mind certain manufacturers in the storage space have shown a propensity towards silently swapping to inferior components in some of their mainline products, so you should always verify the claimed performance metrics of any storage devices you buy while within your return window. Feel free to leave a comment with good/bad drives you come across.

Power Consumption Puzzles – Conclusion

A few important bits if aiming for low consumption:

1) Motherboard support and BIOS configuration are critical – I’ve had motherboards with very inflexible BIOS’s. On this one, “Native ASPM” and the appropriate L1 states must be enabled (to allow OS-controlled instead of BIOS-controlled) for low power consumption to work.

2) Devices all need to support ASPM L1. Otherwise you’re really rolling the dice. The hardest part here, as you might have guessed, is finding SATA controllers that support it – if possible, get a motherboard with enough Intel chipset-connected SATA ports to avoid needing a separate card. I should note that finding NVMe drives that have working low-power APST power states under ASPM isn’t always a given, and you’ll want to do some research there too. A quick way to audit ASPM across your devices is shown after this list.

3) If you can hit the C8 power state, avoid using CPU-attached PCIe lanes (top PCIe and M2 slot). On this specific motherboard, my advice would be to avoid using them altogether if you can, unless you either need the low-latency full-bandwidth path to the CPU or your devices are so active they never sleep anyway. Recall that BOTH my JMicron and ASMedia SATA cards caused the CPU Package C-State to plummet to C2 if plugged into the x16 PCI-E slot.

4) Measuring power from the wall is the only way to make sure that what you *think* is happening is actually happening. A Kill-A-Watt device will pay for itself over time if you use it – consider that I bought mine in 2006 ($16USD + $14USD shipping at the time through eBay). At that time I found our rarely-used fax machine which was always powered on used 7 watts… just keeping that one device powered off when unused during the next 10 years more than paid for the Kill-A-Watt.
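
As for auditing ASPM across all devices (point 2 above), a quick sketch – root is needed so lspci can read the link control registers:

# print each PCIe device followed by its negotiated ASPM state
sudo lspci -vv | awk '/^[0-9a-f]/ {dev=$0} /LnkCtl:/ {print dev; print "   " $0}'

# confirm the kernel is allowed to manage ASPM ("default" or "powersave" is good)
cat /sys/module/pcie_aspm/parameters/policy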

Power Consumption when loaded up with a bunch of HDDs

Now that a variety of parts have moved in-and-out of the system throughout this process, the current setup is as follows:

  • 1x Samsung 970 EVO Plus NVMe (500GB boot drive)
  • 2x Crucial P3 NVMe (4TB each)
  • 5x Seagate 2.5″ HDD (5TB each – 4TB utilized)
  • 6x Seagate 3.5″ HDD (10TB each – 8TB utilized)
  • 2x ASM1166 cards providing SATA ports

Total power measured from the wall (display on, keyboard enabled):

  • 50 watts with all 11 HDD in active-idle
  • 38 watts with the 6x 3.5″ HDD in Idle B
  • 34 watts with the 6x 3.5″ HDD in Idle C
  • 21 watts with the 6x 3.5″ HDD in Standby_Z (spun down)
  • 18 watts with the 5x 2.5″ HDD ALSO in Standby
  • 16 watts with the display output ALSO off
  • 15 watts when PowerTOP is allowed to disable the USB Keyboard

Seagate rates standby consumption of these 3.5″ drives at about 0.8w each, and the 2.5″ drives at about 0.18w each. This lines up with what I’m seeing above. My active-idle numbers actually match up pretty well to Seagate specs too.
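
As a rough sanity check on those rated figures: 6 × 0.8W + 5 × 0.18W works out to about 5.7 watts of standby draw, and subtracting that from the 18 watt measurement leaves roughly 12 watts for the rest of the system – right in line with the earlier driveless baselines.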

The obvious observation: compared to the rest of the system components, the 3.5″ drives are power-hungry monsters.

The HDDs will eventually be replaced with SSDs. With idle consumption as low as it is during HDD standby, there isn’t a major rush and this process will gradually take place as my HDD drives/spares fail and SSD prices fall.

The plan for “end game” is for an all-SSD build. Originally the plan was for 1 boot drive, 6xNVMe (likely Crucial P3 4TB) for a RAIDZ2 array, and 6xSATA (likely Samsung 870 QVO 8TB) for the 2nd RAIDZ2 array. Since using the CPU-connected M2/PCIe slots not only brings unpredictability but also comes at a slight C-state/power/temperature cost, I might alter that plan and give up a couple NVMe in the first array and use SATA instead so that I don’t have to touch CPU-connected lanes. Time will tell.

Unnecessary Storage Details

This part is only worth reading if you’re interested in meticulous details about the storage. Feel free to skip to the final section otherwise.

NVMe boot drive

As alluded to earlier, this is a Samsung 970 EVO Plus. Currently less than 4GB of the 500GB space is used (a 64GB swap partition exists but always sits at 0 used). It was originally chosen because Samsung had developed a reputation for reliability (which has been falling by the wayside lately), and Samsung also scored well in reviews every time it came to idle power consumption. This drive is almost always idle and both Controller and NAND temps stayed low throughout all testing (20-24C). It may eventually be swapped to a SATA SSD to free up an NVMe port.

2.5″ HDD

These drives are used for the primary 6-drive ZFS RAIDZ2 array – the one that gets the most use. One day a week it’s busy with a task that involves reading a few TB over the course of 24 hours. Usage through the rest of the week is sporadic, and the drives spend most of the week spun down. For anyone wondering why piddly 2.5″ drives are used instead of 3.5″ drives, there *is* a reason: power consumption.

Power consumption of the 2.5″ Seagate drives is honestly pretty impressive. Spun down they’re each rated at 0.18w, in low power idle they’re rated at 0.85w, and the read/write averages are rated at about 2w. There are plenty of SSDs out there with worse power consumption numbers than this spinning rust. 5TB capacity gives a lot of storage-per-watt.

The major downsides to these 2.5″ Seagate drives are:

  • Not great performers. 80-120MB/s peak read/write. To be fair though, many TLC/QLC SSDs fall to these write levels when their SLC cache is exhausted.
  • SMR (Shingled Magnetic Recording). Reads are fine, but write performance absolutely plummets when random writes take place – it acts like a QLC SSD without an SLC cache that also doesn’t have TRIM.
  • Low rated workload (55TB/year vs 550TB/year for 3.5″ Exos drives).
  • No configurable error recovery time (SCT ERC), and these drives can hang for minutes if they hit an error while they relentlessly try to re-read the problematic sector. Ubuntu needs to be configured to wait instead of trying to reset the drive after 30 seconds (see the sketch after this list).
  • Higher error rates if they heat up (I’ve had to replace a few and have discovered they don’t like running hot).
  • Typical HDD pain points (slow to spin up, etc).
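
On the error-recovery point above: since SCT ERC isn’t configurable on these drives, the usual workaround is raising the kernel’s command timeout so Ubuntu waits the drive out rather than resetting it after the default 30 seconds (a sketch – the udev rule filename is arbitrary):

# one-off, per boot:
echo 180 | sudo tee /sys/block/sdX/device/timeout

# persistent, e.g. in /etc/udev/rules.d/60-hdd-timeout.rules:
ACTION=="add", SUBSYSTEM=="block", KERNEL=="sd[a-z]", ATTR{device/timeout}="180"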

To be absolutely fair to Seagate, these are sold as external USB backup drives. Pulling these 15mm tall drives out of the enclosures and using them as RAID members in a NAS isn’t exactly using them as intended. The ultra low power consumption is tremendous, but there are obvious trade-offs.

Long term, these 2.5″ 4/5TB drives will slowly be replaced by 4TB SSD drives (possibly all NVMe). SSDs in 4TB capacity started to become available on the consumer end in 2021/2022 at about 4-5x the cost of the spinners. Less than 2 years later they’ve dropped to about 2x the cost, and I expect decent brands to last more than 2x as long as the Seagate spinners.

If availability of the Crucial P3 (Gen3) model continues, I’ll likely stick with this model despite being limited to Gen3 speeds. I strongly considered the Crucial P3 Plus (Gen4), but its power consumption in reviews was higher while there were very few situations where its performance was notably better. My biggest concern with the P3 Plus (Gen4) was that if I had issues with ASPM/APST, Tom’s Hardware showed it carrying a 0.3W idle power premium over the P3 (Gen3) for the 2TB model. I prefer “worst-case scenario” power to be as low as possible.

3.5″ HDD

Used in the secondary 6-drive RAIDZ2 array – a backup array that’s spun up for about 2 hours a week where it receives constant heavy writes.

Power consumption of the 3.5″ Seagate drives is about what you’d expect. These 10TB drives are rated at about 0.8w each in standby, 2-5w idle, and 6-9w reading and writing.

Two concerns here:

  • These are rated to collectively pull about 45-50 watts when writing. That’s a bit of extra UPS load I don’t really want if a lengthy power outage takes place during the backups (I stick with consumer 1500 watt UPS’s).
  • These are rated to collectively pull about 4.8 watts when in standby. Again, some UPS load I wouldn’t mind shaving off.

Long-term these drives will likely be replaced by Samsung 870 QVO 8TB SATA drives. The 870 QVO sports 0.041w/0.046w idle with ALPM, 0.224w/0.229w idle without, and 2.0-2.7w during a copy (according to Toms/Anandtech).

Price-wise, the Samsung 8TB SATA SSD is currently a fair bit more expensive than 8TB spinners (closer to 3x the cost) so unless these drives start to see more frequent use for some reason, replacement with the SSDs will almost certainly wait until I’ve run out of spares.

NVMe Cache Drive

Replacing my spinning rust with SSDs is a process that will likely take a while.

In the meantime, ZFS has a couple options to make use of high-speed storage (typically SSD) in front of slower storage:

  • “Special” Allocation Class – allows you to create a vdev specifically for metadata and for “small” blocks if desired.
  • A cache drive, known commonly as an L2ARC.

If you create the “special” vdev at pool creation, all your metadata (and optionally, small blocks of a size you choose) will go on the “special” vdev instead of your spinning rust. Very fast file listings and directory traversal whilst keeping the spinning rust for the files themselves. Yes, you can “ls” a bunch of directories without waking your HDDs from sleep. Biggest downside is that because all your metadata is on this vdev, if it ever dies, access to all your data is essentially gone. So it really should be at least mirrored. Maybe even a 3-way mirror. Say goodbye to a few ports.

The L2ARC is a bit different. It’s essentially a level 2 cache. When the cache in RAM gets full, ZFS will copy some of the blocks to the L2ARC before it evicts that content from RAM. The next time that data needs to be accessed, it’ll be read from the L2ARC instead of the disk.

One benefit compared to the “special” vdev is that you’re fine with only 1 SSD – if there’s a problem with the data in the L2ARC (bad checksum, drive dies, etc), ZFS will just read the content from the original disk. Also, once the L2ARC is full, ZFS will just start again at the beginning of the L2ARC SSD and overwrite stuff it wrote before, which has some pros (old data never accessed anymore) and cons (data that was frequently accessed and will need to get written to the L2ARC again). You can also add/remove L2ARC devices from the pool at your leisure – want to add a 64GB SSD, 500GB SSD, and 2TB SSD? Go right ahead – ZFS will distribute blocks among them. Need to remove the 500GB SSD from the pool a few days later and use it elsewhere? Go right ahead.

The biggest downside to the L2ARC is that if you forget to specify “cache” when adding the device, you probably mucked up your pool. It’s also imperfect: even with careful tuning it’s hard to get ZFS to write EVERYTHING you want to the L2ARC before it gets evicted from memory. At the same time, depending on your data, the L2ARC may see a lot of writes, and you may have to carefully watch the health of your SSD.
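
To make the difference concrete, here’s roughly what the commands look like (a sketch with a hypothetical pool named “tank” and hypothetical device names – note how easy it would be to forget the “cache” keyword on the L2ARC line):

# mirrored "special" vdev for metadata/small blocks - must be redundant,
# because losing it means losing access to the pool
sudo zpool add tank special mirror nvme1n1 nvme2n1
sudo zfs set special_small_blocks=64K tank  # optionally send blocks <=64K there too

# single L2ARC cache device - safe to lose, and removable at any time
sudo zpool add tank cache nvme3n1
sudo zpool remove tank nvme3n1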

In the past I’ve used the “special”, used L2ARC, and have used both at the same time (you can even tell the L2ARC not to cache things already contained in the “special” vdev).

This time around I simply went with an L2ARC on a 4TB NVMe: once all the other 2.5″ drives have been replaced by SSD and the speed benefits of an SSD cache no longer apply, I can simply remove this cache device (though theoretically having 1 L2ARC cache drive handling the bulk of reads *would* allow the other NVMe drives to stay in low power mode more…).

 

Conclusion – Regrets? Second-guessing? What could have gone differently?

Unlike the ASRock J4005 build where I realized part way through that I’d kneecapped myself in a number of ways, I don’t get the same sense here. This time I ended up with low idle power AND a pretty capable system that should be flexible even if repurposed in the future.

I’m quite happy with my component choices, though I’d be curious to know how the MSI PRO Z790-P DDR4 (one of the other motherboards I considered) would do in comparison. Functionality-wise the MSI has the advantage of 6xSATA ports, but it comes with the obvious downside of the notorious Intel 2.5G networking chip. The MSI also has a PS/2 port and I’ve never actually checked to see if PS/2 keyboard power consumption is lower than USB (recall that I save 1 watt if I allow powertop to shut down the USB keyboard port). And of course it would be interesting to compare the ASPM and ALPM settings, and to see if the snags I hit with CPU-attached PCIe/M.2 slots exist in the same way.

While this system currently sits in the 15-16 watt range when idle with drives in standby, once all HDDs are replaced with SSDs, I’d expect idle consumption of around 10-11 watts which isn’t bad for 72TB worth of drives, 64GB of RAM, and a pretty decent processor.

Update: Recent Linux kernels disable the L1 power saving modes of most Realtek NICs which prevents the CPU from entering decent C-states, thus increasing power consumption by quite a lot. While there are workarounds, moving forward I’ll likely limit myself to motherboards containing Intel 1 Gigabit network adapters (perhaps moving to Intel 2.5 Gigabit when it becomes clear they’ve worked out all the kinks). You can find further details about the Realtek NIC situation in the comments below.
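
For the curious, the workarounds involve force-enabling L1 on the NIC after boot – either via setpci (see the kernel wiki link in the comments below) or, on kernels that expose the ASPM controls in sysfs, something along these lines (a hedged sketch: the PCI address is an example, and the link/ attributes only exist if the kernel was built with ASPM sysfs support):

# find the Realtek NIC's address first: lspci | grep -i realtek
echo 1 | sudo tee /sys/bus/pci/devices/0000:04:00.0/link/l1_aspm

# verify it took effect
sudo lspci -vvv -s 04:00.0 | grep LnkCtl: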

Comments

  1. Anonymous on May 14, 2023
    Hi Matt, great piece!

    I guess a lot of headaches could have been avoided if you found a board with more sata ports on it!

    On my end I never managed to get my chip beyond C3. I purposefully tried to reduce the amount of excess components (like those SATA controllers, I'd read about how hit and miss they could be).

    I'll double check my BIOS settings to make sure that I've enabled all the relevant things you mentioned in your piece.
  2. Geert on May 22, 2023
    Hi,
    Very interesting article, many thanks.
    So you don’t care about ECC, some say it’s a must for an always on server especially with ZFS.
    Also NVME’s seem to burn more fuel than SSD’s.
    I am looking for a frugal ECC motherboard but did not find anything yet, W680 boards are hard to get.
    In the meantime I am running Unraid on a J5040 Asrock board with two 1TB SSD’s in mirror and 3 mechanical WD’S that are sleeping most of the time.
    The system burns 19 watt at idle, its was 16-17 watt (C6) before adding an Asmedia Controller (4). I will replace the old seasonic PSU by Corsair soon.
    Regards
    Geert
  3. Hamun on July 4, 2023
    What OS did you use ?
  4. Anonymous on August 9, 2023
    Amazing article Matt. This has inspired me a lot. Since there's no write up, what do you think about Gigabyte H470M DS3H with i5 for low power low profile media server with 30-40TB media?
    • I actually used it for a period of time as the NAS. As a media server the CPU would lack hardware AV1 decode, but aside from that I suspect it would be fine.

      Keep in mind that if the 2nd M.2 slot is populated, only 5 of the 6 SATA ports will work. If I recall correctly the BIOS on the H470M DS3H also hid a few options (like forcing IGPU or dedicated GPU) unless put in CSM mode. Additionally it would randomly hang on the boot screen with an error if I had an LSI SAS HBA installed, necessitating another restart attempt - regular SATA controllers worked fine though. Putting aside those weird little nuances, I found it to be reliable and it worked great and I quite like the board.
  5. Robert on August 15, 2023
    Hi Matt, thanks for the interesting read. I am trying to minimize power consumption on a NAS system with two 3.5'' HDDs that is also running the OS and some virtual machines on two SSDs. With an Intel J4205 board and 2 WD Red 6TB drives, the system crashes a few minutes after I set the HDDs to sleep. By crashing I mean everything is off and I need to repower the system. Did you ever encounter something like this? The system is running normally at 25 W, with the HDDs powered down it is at 15 W. Power supply is some 250 W unit I had flying around here. Is it possible that the ATX power supply switches off due to the small load?
    • Matt Gadient on August 15, 2023
      Some older power supplies shut off if load is too low (some BIOS's actually have a setting for a dummy load to combat this). Some really old power supplies go out of voltage spec if load on a rail is very low. Power line fluctuations can be more problematic at very low loads as well.
      • Robert on August 22, 2023
        Small update, I ordered a new 300W in the 50€ range. Fun fact, the power consumption is 1 W less than before, either due to higher efficiency or because the cooling fan is running less. The low power use case is also fine now.
  6. Ahmed on August 30, 2023
    ECC could be important to have for a system running 24/7 and handling important NAS data (regardless of using ZFS or not it is still a nice feature to have ECC for NAS).

    Do you plan to publish a similar article but for a system with ECC support and low idle power consumption that would still be compatible with Linux (I think low idle power consumption from AMD is not the best on Linux, as an example)?

    I am planning to make such a build soon myself and I would like to know if I should start making my build in the next month or two, or maybe wait a bit to read your publications, which would provide some useful insights that can help me make better educated decisions on the component choices.

    Nevertheless, thank you very much for the thoroughly written article. You did an impressive job here highlighting the important parts of building a very efficient low power NAS.
    • Matt Gadient on August 30, 2023
      Nothing planned in the short term in regards to ECC. I generally just stick to Kingston memory and run it rigorously through Memtest86 and TestMem5 before putting it into use. Were it possible to get ECC in a current low-power low-cost platform I'd go for it, but for me it would be more of a nicety than a necessity.

      In any case, best of luck with your build!
  7. Olivier on September 6, 2023
    Hi Matt,
    Thanks for your very detailed and informative NAS articles!
    I'm putting mine together with an i3-10500T, 16GB and 4xHDD.
    For the power supply, I found an Antec EarthWatts 380W EA-380D (80+ Bronze) reconditioned for 25€. Is it worth it in your opinion? or is it too old?
    If you have another model to recommend, I'd love to hear from you. Thanks in advance.
    • Matt Gadient on September 6, 2023
      I normally buy the Corsair RM or SF series these days due to their very good efficiency at idle. But those are a bit expensive. I've always liked the EA-380D power supplies a lot (though I recently gave one of my last ones away), so if that sits nicely in your budget I'd say go for it.
  8. xblax on September 11, 2023
    That article helped me decide on a B760M-K D4 mainboard with an i3-12100 for my home server upgrade after seeing here what low power consumption is possible. I upgraded from an FM2A88M-HD+ with an AMD A4-4000 and was able to reduce idle power from 40W to 15W, which means the new hardware will basically pay for itself in a couple of years.

    I also got a 970 Evo Plus (2TB) as the boot drive and can confirm that it must be connected to the chipset in order to reach low package C-states (C8). What I found interesting is that the difference between package C3 and C8 was much bigger when the SSD is connected to the chipset. I believe that's because the chipset itself will only go into deep sleep states when all attached devices support ASPM and SATA Link Power Management is active.

    Connecting the SSD to the CPU PCIe only increased power consumption by ~2W (package C3 vs C8), while not having ASPM on a device connected to the chipset seems to take an additional 5W just for the chipset, but has the same effect (C3) on the package C-state.

    One interesting thing worth noting is that I have a PCIe 1.1 DVB-C capture card connected to the chipset. Even though ASPM is listed as a capability for the card by lspci and I booted the kernel with pcie_aspm=force, it didn't get enabled for that card. I had to force-enable ASPM via setpci, see https://wireless.wiki.kernel.org/en/users/documentation/aspm - seems to work without issues. That helped me reach that 15W idle power. Unfortunately the capture card still takes ~5W; otherwise I currently only have 2x4TB HDDs from Toshiba connected, which spin down when idle.

    Btw. SATA Hot Plug must be disabled for all ports, or otherwise the package will only reach C6.
  9. danwat1234 on September 15, 2023
    Looks like you aren't a fan of USB-connected drives, could have used a hub or two and some enclosures. Good writeup!
  10. Anonymous on September 22, 2023
    Hey,

    Super great article, thanks for all these informations.

    I’m planning building my nas. As the power consumption is the main topic, what do you think about the following build (but I’m kinda a noob about the system and what’s possible and/or the limitation of such a low tdp chip) ?

    ASRock N100M micro ATX (with the new Intel Quad-Core N100 processor (up to 3.4 GHz) and a 6W TDP). As there are only 2 SATA ports, the idea is to add a SAS HBA card with 8 additional SATA ports in the 1x PCIe 3.0 x16 slot. For the storage it would be 1 M.2 (the one from the motherboard) for the TrueNAS OS, 2 mirrored SATA SSDs for VMs, docker, … and 8 Seagate Exos 7200 rpm HDDs as a final step (2 at the beginning and then evolving based on the need).

    For the power supply, a Seasonic Focus PX 550W - Modular 80+ Platinum ATX and finally a unique stick of 32GB of ram (non ECC).

    Many thanks in advance
    • Matt Gadient on September 22, 2023
      I've actually considered the N100 recently, which seems to be the latest darling of the mini PC world. Only 1 memory channel on the N100 boards, but for the majority of situations where memory bandwidth isn't critical that's perfectly fine. The biggest issue I've found over the past few years is these ASRock onboard-CPU boards have gone up in price to the point where a cheap motherboard + CPU is often within reach, along with more PCI-E lanes, more onboard SATA, and similar power consumption as long as you can reach high c-states. But I'd snap up the ASRock N100 quickly if the price were right. Note that the x16 slot runs at x2 so you'll max out at 1GB/s throughput on a PCIe 2.0 card, and 2GB/s on a PCIe 3.0 card - unlikely you'd hit those speeds under normal usage anyway across a bunch of HDDs but it's something to be aware of on these boards.

      On the SAS HBA card, I'd suggest looking around to see what idle power consumption others are seeing on the specific card you're considering: the popular ones often pull a few watts while doing absolutely nothing. Not sure how *BSD handles the cards, but of the few that seem to have ASPM enabled by default, Linux eventually seems to disable it in the kernel at some point due to issues. That said, this is a situation where the ASRock N100 might fare better than a separate CPU/motherboard combo as I'd expect it to be less sensitive to c-state implications of an expansion card, though this is just a guess based on what I saw with my ASRock J4x05 boards and may not apply to N100.

      The Seasonic PX 550W looks like a great choice.

      Overall looks like a solid build!
  11. paldepind on September 23, 2023
    Thanks for a great post full of helpful information.

    Do you have any tips for identifying motherboards that can achieve low power usage? People sometimes recommend ITX motherboards but I haven't found any measurements about how many watts ITX vs ATX usually saves. Now, ITX wouldn't have worked for this build, but ATX doesn't seem to have been a significant source of power consumption anyway. In general, it seems very hard to figure out which motherboards are power-efficient and which are not?

    What do you mean with "the E-core silicon overhead" and why did you try to avoid it? I understand the CPUs with E-cores are probably more complex, but I would've thought that the E-cores could lead to lower power usage when the CPU is doing non-intensive tasks at low load.

    Again, thanks for the great info. I hope to be able to build a system with similar power efficiency. Right now I have a Gigabyte Z790 UD AX motherboard and an i5-13500 system that I can not get below 28W.
    • Matt Gadient on September 23, 2023
      In terms of low power motherboards, my general rule of thumb is that lower component counts tend to result in lower power consumption. This is not a robust rule, but it usually holds up well enough here. A quick "sniff test" is looking at the number of power phases (something manufacturers advertise heavily): lots of phases running at high switching frequencies are great for hard core overclockers, but for low power we want few phases, switched at such a low frequency that if the motherboard has a MOSFET heatsink it's mostly decorative.

      The advantage to ITX is that it tends to limit the component count, but it's not strictly necessary - last week I actually repurposed the "Intel i3-10320 on a Gigabyte H470M DS3H" I mentioned at the beginning and got it down to 6 watts idle (headless, no keyboard, onboard Intel i219V 1GbE network only, c-states in BIOS, 3 Samsung SATA SSDs 840/860/870, Corsair RM850 power supply, Ubuntu Server with powertop). It's a very utilitarian motherboard. I won't do a separate write-up because the board is no longer available, but 6 watts on that MicroATX Gigabyte H470 board and 7 watts on the ATX ASUS H770 board in this write-up are my best 2 results so far, and notably neither were ITX. Something else I just noticed: both these boards only have 6 power phases.

      As to the "E-core silicon overhead", a lot of details can be found at https://www.hwcooling.net/en/the-same-and-yet-different-intel-core-i5-12400-duel-h0-vs-c0/ , but I'll try to summarize. The i5-12400 comes with 6 P-cores and 0 E-cores enabled, commonly referred to as 6+0. However, it came in 2 variants: a "C0" stepping which was originally an 8+8 that had cores fused off to become a 6+0, and an "H0" stepping which was manufactured directly as a 6+0 and never had any E-core hardware inside to begin with. In the tests (page 5 of that article), the C0 used up to 16 watts more power than the H0 depending on the benchmark, including almost 11 watts more at idle. Now it's always possible their C0 sample had other contributing issues causing power leakage, or that there's some other variable at play, but either way the 2 chips that had physical E-core hardware inside didn't fare well in the idle test.

      Because I focus on extremely low idle consumption for most of my systems, I can't justify buying any of the combined P/E-core chips until I see some data that shows chips with E-cores doing under 10 watts idle. And I simply haven't yet. This is an area where Intel is very much under threat these days: the AMD Mini PCs are now getting down to about 6-7 watts idle power consumption for a Ryzen 9 7940HS ( https://youtu.be/l3Vaz7S3HmQ?t=610 ) and if AMD brings this type of APU design to the desktop side or someone like ASRock starts to package some of these impressive HS chips in a custom motherboard, Intel could quickly lose the low-idle-power market.
      • paldepind on September 27, 2023
        Thanks a lot for the great reply 🙏. There's not a lot of info out there on this kind of stuff, so you sharing your knowledge is very valuable and appreciated.

        I can see that the motherboard I bought is probably not ideal (it advertises a lot of phases).
        • Matt Gadient on September 27, 2023
          As for the phases, while I use it as a quick sniff test it's also not perfect: if your motherboard phases utilize very efficient MOSFETs it could outperform a motherboard with fewer but less-efficient MOSFETs at a given switching frequency. And there are certainly many other motherboard components that add variables to the situation. Point being that I'll bet there's a 6-phase motherboard out there somewhere that guzzles electricity and probably some 20-phase out there that allows for a sub 10 watt build, though I wouldn't expect it to be the common case.

          With that said, even if your board had a lot of inefficient MOSFETs, the 28 watt power consumption you said you're getting seems a bit high unless you've got some spinning rust or a PCIe card that guzzles power. Have you checked to see if you're hitting C6 or better power states? Recall that when I put the ASM1166 on the main PCIe slot I was limited to C2 and was consuming 22 watts.
          • paldepind on October 9, 2023
            Sorry for the late reply (I wasn't expecting one and I get no notifications). Good thing I needed to re-read some of the great info here

            You are indeed correct that the 28 W I shared was not as good as it could get. I made the mistake of thinking that unplugging the SATA cables from my HDDs would leave them powered off. As is obvious in hindsight, you also need to unplug them from the PSU. Additionally, I had a bunch of peripherals connected to the PC that I didn't realize would consume power (in particular a plugged-in monitor makes a big difference). After disconnecting all HDDs and all peripherals I get readings in the 8-10W range.

            To hopefully make this a useful data point for others I'll share some more details. The CPU is an i5-13500 in a Gigabyte Z790 UD AX motherboard. The only thing connected is a SATA SSD and a single stick of memory. The PSU is an 850W Corsair RM850x. The system reaches C8 and even C10. A few more things could be done to reduce the power draw. I was measuring while idling in GNOME (I assume having no DE running will save a tiny bit of CPU), I have two CPU fans that are running slowly even at low temps, the system is on WiFi (I assume ethernet consumes less power), and I haven't disabled case LEDs nor HD Audio.

            I'm now very happy with the level of power consumption. Perhaps, one takeaway is that Intel E-cores do not affect the idle power draw much, at least in this CPU. The only problem I have now is that the system is unstable and sporadically reboots 😭. I think I've narrowed the issue down to a faulty CPU or a faulty motherboard (I've tried replacing the PSU and memtest86+ says that the memory is fine). The company where I purchased the parts claims that both are fine, but unless I find another solution I'll try and replace the CPU and motherboard with some low-end parts: a 13th gen i3 and an Asus B760M-A motherboard. If that fixes the problem hopefully I'll be able to return the other parts, in the worst case I'll use the budget parts for a server and the higher-end parts for a workstation.
        • Jon on December 13, 2023
          Hi paldepind,

          I have exactly the same setup (i5-13500 + Z790 UD AX); my boot drive is a Samsung 990 Pro 4TB,
          and I have exactly the same problem - sporadic reboots. Have you managed to find what is causing it?

          I've tried the latest BIOS F9b as well as F5 and changed multiple BIOS settings, but so far nothing helps. My suspicion is that the boot drive goes into some low power mode and is unable to recover from it, but I don't know how to prove it.
          • Daniel on December 13, 2023
            In your cases, do you have any events recorded in the system logs prior to the reset?
          • Jon on December 18, 2023
            Hello,

            There are no events before the crash. I also have netconsole configured - still nothing logged.
            With the latest BIOS, default settings, and a 970 Evo Plus as the boot device (no other disks attached), the system seems stable, but unfortunately it draws 32W on average while idling, which is not acceptable.

            Currently I'm changing one setting at a time and waiting 12h+ in order to figure out what is really causing this, and that takes a lot of time.
          • Matt Gadient on December 18, 2023
            This could be an entirely different scenario, but I had picked up the Gigabyte Z790 D DDR4 about a month ago, and when attempting to compress a 1TB directory containing about 1 million files from a RAIDZ array into a tar.zstd file on another drive, it would experience a random reboot within 30 minutes. This happened over and over. Disabling C-States in the BIOS prevented the issue from occurring. Swapping all the hardware over to the ASUS H770 board resulted in everything working correctly. I've bought many Gigabyte boards over the years - this was the first one I've had to return.
          • Zac on May 20, 2024
            I've been running into a similar issue on a Gigabyte Z790 board (Z790 Aorus Elite AX DDR4): everything is stable until I enable the high C-states (C8/C10).
            I've tried all settings; once I enable C8/C10 the system becomes unstable and shuts down at some point. I've checked all drives and cables, replaced the PSU twice, and am currently awaiting a CPU RMA from Intel. I've also tried various BIOS firmwares without any luck, so judging from this thread, if the CPU RMA does not fix it, it looks like an issue with Gigabyte Z790 motherboards.

            Disappointing for sure, as I am out of the return window and really do not want to deal with a Gigabyte RMA, so if the CPU RMA doesn't work, I will just have to live with it or sell it off and buy a different board.

            Since this thread is a little old (Dec 2023), I wonder if anyone else has had this issue and was able to resolve it.

            Thank you.
      • Daniel on November 14, 2023 - click here to reply
        I'm not sure the number of phases is a good indicator.

        I'm currently testing an ASUS TUF GAMING B760M-PLUS WIFI D4 (12+1 DrMos) and at idle, with the monitor and USB (mouse, keyboard) suspended, the power meter shows 6.7-8.1 W. The rest of the system:

        - i5 13500
        - 2 x 16 GB 3600 MHz (gear 1)
        - 1 TB KC 3000
        - RM550x 2021
        - 2 x 120 mm fans @ 450 rpm
        - audio codec on
        - WiFi off

        Arch Linux + RTL8125 module (my router does not support EEE)

        With the Realtek card disabled, the power meter shows 6.4 - 6.7 W

        PC states w/ LAN
        C2 (pc2) 0.7%
        C3 (pc3) 1.3%
        C6 (pc6) 41.1%
        C7 (pc7) 0.0%
        C8 (pc8) 0.0%
        C9 (pc9) 0.0%
        C10 (pc10) 55.8%

        PC states w/o LAN
        C2 (pc2) 0.6%
        C3 (pc3) 0.9%
        C6 (pc6) 0.0%
        C7 (pc7) 0.0%
        C8 (pc8) 0.0%
        C9 (pc9) 0.0%
        C10 (pc10) 97.8%

        I had similar results on a B660 AORUS MASTER DDR4 (16+1+1).
        • Daniel on November 16, 2023 - click here to reply
          I forgot to mention: although the results are decent, I do not recommend this ASUS board - it is unstable at idle. I had several random shutdowns (sudden power cuts).
        • Jackie D on July 9, 2024 - click here to reply
          Does the AORUS MASTER motherboard have the same instability at idle?
          • Daniel on July 10, 2024
            No. It's rock solid. Runs nearly 24/7, so far without a single crash.
      • Wolfgang on December 1, 2023 - click here to reply
        Thanks for the article Matt!

        I recently bought an i3-13100 (4+0) and an i5-13500 (6+8) to test out the "E-core overhead" claims I've seen online. I'm happy to report that the idle power consumption is identical for both of these chips! Perhaps the elevated power consumption issue is unique to the i5-12400 C0; unfortunately I don't have one on hand to test.
  12. Dave on September 25, 2023 - click here to reply
    Please can you recommend a UPS with low idle power consumption? I've read that the idle power draw massively varies.
  13. htmlboss on October 9, 2023 - click here to reply
    Hey, thanks for this write-up Matt! I've been looking to build a low-power system to act as my personal cloud dev environment that I can SSH into from any PC (not a fan of the GitHub Codespaces pricing lol). A first skim of your research indicates this is a good starting point for me :)

    I'm currently running an off-lease Quanta 1U that I grabbed off eBay just before COVID hit, and its single-core performance is really showing its age. Also it idles at 80W >.<
  14. Lukas on October 12, 2023 - click here to reply
    Hi, thank you for the great article!
    I would like to share my experience with my new 12-14 W PC.

    I just built a fanless mini-ITX PC. The case is also a passive cooler (AKASA Maxwell Pro), and inside is an AMD Ryzen 5600G (Zen 3, 65W TDP), a Gigabyte B550I AORUS PRO AX (BIOS FB), 1x 16GB DDR4 (I plan to upgrade to 2x32GB), and a 2TB Samsung 980 Pro M.2 SSD. It's powered by a 12V AC/DC power supply from AKASA (max. 150W) and an Inter-Tech MINI-ITX PSU 160 W.

    12-14W idle power consumption for the whole PC under Windows 10 (measured on the DC side; the power plan is balanced, but I enabled ASPM, P-states and C-states in the BIOS, plus PCIe power saving in the advanced settings of the Windows power plan).
    Under load (Cinebench R23) it draws 61-65W. Currently I'm undervolting to get better power consumption and temperatures.

    ----------

    my small home-lab & NAS has under 2W idle power consumption ‼️

    I recommend the Odroid H3 (H3+) with BIOS 1.11 and Debian 11 + DietPi + kernel 6.0.0 (or newer) + tweaks applied via powertop: it has an idle power consumption of only 1.2-1.5W (compared to 2.7W for the RPi 4 - source) ⚡️ (with my configuration: 1x 16GB RAM and 1x SATA SSD).

    See: https://i.ibb.co/7QD390m/H3-1-10-6-0-0-sata-idle.gif

    Max memory size is 64 GB RAM, and it has 1x M.2 port, 2x SATA 3, and 2x 2.5Gbps LAN. It's much faster than a Raspberry Pi 4, with lower idle power consumption. Under load it can consume 20W (+ depends on connected devices).

    If you need more SATA ports, the M.2 port can be expanded to 5x SATA using this: https://wiki.odroid.com/odroid-h3/application_note/m.2_to_sata_adapter
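
    By the way, since the powertop tweaks reset at every reboot, one simple way to make them persist is a root crontab entry. A minimal sketch (the powertop path is distro-dependent - check with "which powertop"):

    @reboot /usr/sbin/powertop --auto-tune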
  15. Martin on October 17, 2023 - click here to reply
    Thanks for a great article, some very useful info.

    I'm currently struggling to get my new NAS to use less than 40W at idle with no data drives, and I can't for the life of me understand why it's using that much. My gaming desktop idles at less.

    It's an ASRock H670M-ITX/ac with an i3-12100, 8GB RAM, and a be quiet! 400W PSU. I originally used a Kingston NV2 NVMe for the OS, but found that replacing it with a SATA SSD decreased idle power by about 10W (50W -> 40W).

    According to powertop, cores get into C7 no problem, but package refuses to leave C2. Not sure how important that is.

    I'll keep working at it, with your article as reference. :)
    • Matt Gadient on October 17, 2023 - click here to reply
      The package C-states make the difference. Steps I generally take: (1) remove anything and everything connected to a PCIE slot, (2) ensure C-States are enabled in BIOS, with the highest C-state selected (usually C10 but only C6-C8 on some motherboards), (3) start trial-and-erroring BIOS settings until you find the culprit. I usually use a Ubuntu USB stick for testing as sometimes you can get weird edge-case OS things with a live install (Debian Testing recently thwarted one of my attempts).
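
      If powertop's numbers ever look ambiguous, turbostat (part of the kernel tools) is a handy cross-check for package C-state residency. A sketch, assuming Ubuntu-style package naming:

      sudo apt install linux-tools-common linux-tools-$(uname -r)
      sudo turbostat --quiet sleep 30

      It prints the package C-state residency (pc2/pc3/pc6/pc8/pc10 columns - exact names vary a little by version) measured across the 30-second sleep.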
  16. nice on October 28, 2023 - click here to reply
    Thank you! I get to C10 normally after configuring that build with Proxmox. However, on the recent Ubuntu 23.10 I only reach C3 - I wonder if it's a kernel problem, and whether other people are seeing this too.
    • Dan on November 2, 2023 - click here to reply
      I too have found this issue. After many frustrating hours and days of testing, I have narrowed it down a little. I use Arch Linux, and after booting with each monthly live USB I see that my system cannot reach a deeper package C-state than C3 on anything after kernel 6.4.7. This equates to about 7-9 watts more draw than C8. It may not be the kernel itself but a package within the Arch Linux live USB. Not sure where to go from here. Any ideas from anyone on how to narrow this down would be welcome.

      Matt,
      Your articles inspired me, so I purchased the same motherboard (Prime H770-Plus D4) and a similar processor etc. I was able to reach 12-14W minimum. Happy with that for now, but the additional 8W due to the package no longer reaching C8 has been very frustrating. Have you seen anything similar in your build?
      • Matt Gadient on November 2, 2023 - click here to reply
        Very interesting. It's looking like the issue I hit in Debian might not have been an isolated edge-case as I'd thought: A little over a month ago, I swapped the motherboard/CPU with my desktop system, installed Debian 12, upgraded to Debian trixie/testing and was hitting what now seems to be the same C-state issues you both have. Using a Ubuntu USB stick, c-states worked correctly. I didn't have time to start diagnosing, so I simply reinstalled Debian 12 bookworm/stable (currently running kernel 6.1) which allowed C8.
        • Matt Gadient on November 2, 2023 - click here to reply
          As a follow-up, today I updated Debian 12, resulting in the kernel being updated from 6.1.0-12-amd64 to 6.1.0-13-amd64. Upon reboot I was stuck at C3. Reverting to 6.1.0-12 restored C8. It's really starting to look like we may be seeing a kernel bug/regression that is evidently being backported into older kernels. In the 6.1 branch it looks like maybe somewhere between 6.1.52 and 6.1.55. No idea if anyone's submitted bug reports or if it's been fixed in mainline, and I don't have time to sift through the changelogs at the moment, but if it doesn't get resolved I'll eventually have to dig into it I suppose.
  17. Marc Gutt on November 4, 2023 - click here to reply
    Hi Matt,
    you should consider a different PSU like the Corsair RM550x (2021) or the be quiet! Pure Power 12 M 550W. The Corsair is the best for low-power setups but extremely hard to get. It will reduce the power consumption even further (by 2 to 4 watts).
    This and other tweaks are mentioned in this topic:
    https://forums.unraid.net/topic/98070-reduce-power-consumption-with-powertop/
  18. Dan on November 6, 2023 - click here to reply
    I can confirm that updating the Realtek r8125 driver to the latest from the Realtek website solved the problem for me. My system now reaches the C8 state once more. Not sure it ever reaches C10, but the BIOS says it is supported. Can anyone advise how to check for C10?
    • Matt Gadient on November 7, 2023 - click here to reply
      To check for C10, since powertop updates slowly you can leave powertop running while the display sleeps, wait a minute or so, then wake the display and see if powertop shows any percentage in C10. Alternately, SSH into the machine and run powertop when the display is asleep.
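
      As a concrete sketch (hostname and timing window are placeholders), something like this from another machine works while the server's display is asleep:

      ssh user@server "sudo powertop --time=60 --csv=/tmp/pt.csv"
      ssh user@server "grep -i 'C10' /tmp/pt.csv"

      powertop's CSV report includes the idle-state residency section, so you can read the C10 percentage without ever touching the console.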
      • Dan on November 9, 2023 - click here to reply
        Thanks Matt, SSHing into the machine confirmed C10. Trying now to get lower than 12W. I've tried everything in the BIOS, so the remaining culprits are the PSU, DRAM power/timings, and possibly the power meter (can't get a Kill A Watt meter in the UK, but I think mine is equivalent). Cheers
  19. Matt Gadient on November 6, 2023 - click here to reply
    Thanks to those who passed along the Realtek network adapter details in regards to the c-states. The cause is the following patch which seems to have been pushed out to 6.x kernels built on or after Sept 13 2023: https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git/diff/drivers/net/ethernet/realtek/r8169_main.c?id=v6.1.53&id2=v6.1.52

    Sadly, this was an intentional change to the Realtek driver contained in the kernel. It ensures that L1 is disabled on virtually every Realtek adapter, seemingly in response to a number of Realtek adapters experiencing stability issues when these low power states are enabled. On my current test system, this results in a 4 watt power increase at idle with the screen slept as the system no longer goes beyond C3.

    Trying the Realtek driver that was linked by Tseting is likely to be the easiest workaround for now, though I have no idea how it will fare on unsupported kernel versions and I'm personally not a huge fan of kernel modules. I'll paste it here for convenience: https://www.realtek.com/en/component/zoo/category/network-interface-controllers-10-100-1000m-gigabit-ethernet-pci-express-software

    There is of course a harder workaround - for those familiar with compiling the kernel, reverting the change in the diff above will restore L1/L1.1 on RTL8168h/8111h, RTL8107e, RTL8168ep/8111ep, RTL8168fp/RTL8117, and RTL8125A/B devices (anything older already had it disabled) - if you want to allow L1.2 you can force rtl_aspm_is_safe() to return true, though on my test system it didn't provide any benefit over L1.1.

    Unless the kernel devs have a change of heart here, it looks like Intel NICs may be the only viable option moving forward. Intel 1G NICs have generally been very solid. Worryingly, I found that disabling the Realtek NIC on my MSI board doesn't completely detach it (still stuck at C3) so buying a board with a Realtek NIC with plans to disable it and use an Intel network expansion card may be risky. Worth noting that moving forward there is a flag vendors can set on an 8125A/B adapter to indicate L1.2 is tested and allowed which the linux kernel will respect, but I don't know if it's made it into any motherboards or expansion cards.
  20. SaveEnergy on November 12, 2023 - click here to reply
    Hello,
    thanks for the detailed information you shared.
    Your setup has inspired me, so I purchased the "Prime H770-Plus" along with 4800 DDR5 RAM.
    Unfortunately I also have problems with NVMe SSDs when ASPM is enabled in the BIOS (along with PCI Express clock gating). The workarounds to block certain power-saving modes of the WDs did not help. I tried the SN750 (1TB) and the SN850X (2TB).
    Can you still recommend the Crucial P3, or does it also regularly fail with ASPM problems?

    Who has other NVMe drives running reliably on this board with ASPM enabled?

    Do you run the setup productively in continuous operation?

    Additionally, I have found that an 840 Pro (256GB), at least on SATA1, prevents the system from going lower than C3. A Corsair Force GT (128GB), on the other hand, works up to C8.
    I got around the problem with the Realtek NIC by removing the check in the kernel.
    Thank you and best regards
    • SaveEnergy on November 13, 2023 - click here to reply
      Hello,
      I have now tried a Samsung 990 Pro. Unfortunately it still leads to the already-known NVMe ASPM errors.
      Does anyone else have any ideas?
      It can't really be the power supply (I would find that very strange), because a Corsair CX750 is being used here temporarily.
      If nothing helps, does anyone have a good alternative board?
      Somehow I put my foot in my mouth too often when it comes to hardware ;-(.
      • Firesucker on September 3, 2024 - click here to reply
        For me, a 1TB Samsung Pro works perfectly - reaching C10 even with an Intel X710-DA2 attached.
  21. jit-010101 on November 14, 2023 - click here to reply
    As to:

    "I guess a lot of headaches could have been avoided if you found a board with more sata ports on it!"

    That's not true by default, because it comes down to which SATA controller is used and/or how those ports are connected. Chances are high that it's using an onboard controller; specifically for the N5105, there's a widely known 6-SATA-port NAS variant floating around, sold by Kingnovy and Topton.

    The black one uses the JMS585 and the green PCB one uses the ASM1166 - the black is stuck at C3 while the green can go down to C8 (verified myself, since I have the green variant). If I needed anything more than a backup server, I'd go the route described here - with a much more powerful Intel CPU on LGA1700.

    A good example of how low you can go with idle power consumption is the Odroid H3 - <2W idle with 2x HDDs via SATA in spindown ... however, as soon as you add (the wrong) components, that will climb quickly - check the power consumption stats here:

    https://github.com/fenio/ugly-nas

    TLDR: In the end it's sipping more power than your build here - and I have to add that I previously owned an Odroid H2, which fried its 5V lane and took the (expensive) SSDs with it ... ever since then I've stayed away from chasing the absolute lowest power consumption with exotic combinations like the Odroid H3.

    I'd say in the end it all comes down to how practical everything is vs. the power consumption levels.

    That said, I'm impressed with this build - 7W idle with a 750W PSU is quite something.
  22. Frederik on November 18, 2023 - click here to reply
    Hi Matt, I've read your article multiple times and am quite impressed with the knowledge found in the post and the comments.
    I am currently upgrading my home server from a J3455 to an i5-11400.
    While switching hardware I found a strange issue:
    i5-11400 + 2x8GB 2666 DDR4 + ASUS Prime B560M-A (latest BIOS) + ASM1166.


    If I boot from USB without any SATA drives attached, the package C-state reaches C8.
    If I boot from USB with a SATA drive attached to the onboard SATA controller, the package C-state only reaches C2 (+4W).
    If I boot from USB with a SATA drive attached to a PCIe ASM1166, the package C-state reaches C8.

    So to me it seems the B560 SATA controller has problems with power saving. Even with L1 enabled for everything and after running powertop's tuning, it will not go below C2.

    Do you have an idea what could make the B560 SATA controller draw 4W more?
    • Matt Gadient on November 18, 2023 - click here to reply
      Not sure. Onboard SATA on that board should be from the Intel chipset and you mention you have L1 enabled for everything. The first things I'd try:
      • Check that SATA Link State Power Management is enabled (on my H770 it's under Advanced/PCHStorageConfiguration/AggressiveLPMSupport)
      • The ASUS Prime B560M-A motherboard page states the SATA_2 port shares bandwidth with the M.2_2 slot. I don't know how this is handled internally but if you're plugged into SATA_2, try one of the other SATA ports on the motherboard
      • Disable SATA Hot Plug (xblax mentioned this above)
      • Try another SSD if you have one around to see if there's a difference (SaveEnergy mentioned this above)
      • Frederik on November 20, 2023 - click here to reply
        Hi Matt, thanks for your input! I tried the following, but was not successful. The board is still burning 4-5W because of C2.
        In the end it could be cheaper long-term to add another ASM1166 instead of using the onboard controller. :D

        Check that SATA Link State Power Management is enabled (on my H770 it's under Advanced/PCHStorageConfiguration/AggressiveLPMSupport)

        This is enabled, I also tried with disabled, but it did not change power consumption or C-States. (leaving it enabled for now)

        The ASUS Prime B560M-A motherboard page states the SATA_2 port shares bandwidth with the M.2_2 slot. I don't know how this is handled internally but if you're plugged into SATA_2, try one of the other SATA ports on the motherboard

        In the BIOS I am able to specify whether M.2_2 uses SATA or PCIe. According to the BIOS and the manual, SATA6G_2 is only blocked if M.2_2 is set to SATA.
        But I have the ASM1166 connected in M.2_2 and configured as PCIe. I confirmed that all onboard SATA ports work as expected with this setting.

        Disable SATA Hot Plug (xblax mentioned this above)

        Hotplug is disabled for all ports by default. I enabled it to see if it changed anything, but it did not. Leaving it disabled for now.

        Try another SSD if you have one around to see if there's a difference (SaveEnergy mentioned this above)

        I booted from USB and tried different devices: 2x SSDs (an Emtec and an older Samsung), 2x 3.5" HDDs (WD and Seagate), and even an LG DVD burner.
        It seems that it doesn't matter what kind of device is attached.


        It is always the same: as soon as I connect a device to the onboard SATA, C2 is the maximum.
        To verify this I booted from the USB stick with SATA devices attached, and then unplugged all of them from the live system.
        As soon as the last SATA device is physically disconnected, it goes to PC6 immediately and PC8 shortly after.

        When reconnecting all the SATA devices it stays in PC6/8, but dmesg also does not recognize the replug (most likely because hotplug is disabled).
        I will crawl through the dmesg boot logs; maybe something interesting pops up.
  23. Anonymous on December 1, 2023 - click here to reply
    Hi, I'm on a system with an ASRock H610M-ITX and an i3-12100, and the ASM1166 M.2 adapter I have does not seem to be recognized even after I've flashed the firmware from SilverStone. From the spec sheet of the H610M-ITX, it seems the M.2 slot is connected to the chipset. Is there anything that I can do at this point?
    • Matt Gadient on December 1, 2023 - click here to reply
      Not sure. I'd reseat it first just in case the connection is being quirky and/or temporarily pop in an NVMe SSD and/or double check that nothing in the BIOS is disabling that slot. Next I'd verify it's working correctly in another system with a drive connected, and then double check that the firmware updated correctly. If that checks out, to determine whether it's specific to that M.2 slot on your motherboard, I'd try putting the ASM1166 on a PCIe M2 adapter (currently about $3 USD from Aliexpress or $10 USD shipped from Amazon) and install to the main PCIe slot - I've used the x1 and x4 variants and they're quite handy.
      • Anonymous on January 31, 2024 - click here to reply
        I finally got around to trying a PCIe M.2 adapter in the CPU x16 slot. It works, but the package state only stays at C2, just like what you found. Oh well, looks like I won't be able to utilize some of the hard drive bays in my NAS case.
    • baegopooh on January 25, 2024 - click here to reply
      I have the same problem here with an ASRock Z170-ITX Fatal1ty / ASM1166 M.2 SATA controller.
      - The M.2 slot of the mobo works (tested with an M.2 SSD)
      - The SATA controller works (tested with an M.2-to-PCIe adapter in the x16 slot of the mobo, and in the M.2 slot of another mini PC)
      But with the ASM1166 connected to the M.2 slot, nothing shows up in the BIOS or lspci.

      So it looks like the ASRock board has some problem.
      Don't know how to proceed from here.
      • Matt Gadient on January 25, 2024 - click here to reply
        Definitely seems like it might be a strange issue related to ASRock as you suggested and/or an issue with that M.2 slot. I took a quick look through the manual for that motherboard and didn't see anything BIOS-related that looked like it might apply. If it were me I might try turning CSM on/off just for the heck of it (sometimes it affects weird things), but I wouldn't be optimistic.
      • baegopooh on February 7, 2024 - click here to reply
        Disabling CSM completely made the boot fail with 5 beeps (something wrong with the GPU). Enabling CSM and disabling the storage OpROM didn't help.
      • Anonymous on March 26, 2024 - click here to reply
        Interestingly Wolfgang from Wolfgang's Channel got the ASM1166 M.2 adapter to work in an ASRock N100DC-ITX board, with the system being able to hit C8 power state. All the PCIe lanes on that board should be coming directly from the N100.
        https://youtu.be/-DSTOUOhlc0?t=254
  24. Walter on December 1, 2023 - click here to reply
    Hi Matt, thanks for the best and most detailed explanation on how to do this. I bought the ASUS Prime H770-Plus D4 for $99 on Amazon yesterday to replicate your setup (delivery in a couple of weeks), and I wanted to ask if the Intel i5-12400 is still the CPU you would recommend at this time. I'm asking because you mention that you already had the CPU (although it was carefully chosen at the time for a different project). I need to buy a CPU specifically for this and would like the most powerful one, assuming it would still idle at low power.

    Also, one thing I don't understand in general: if I use the home server as a NAS but also as my router, would that prevent reaching high C-states at idle, given that it would always have to do some work? I have a gigabit connection and am currently running OpenWRT on a Pi 4, but with QoS enabled I only get half my download speed.

    Thanks again.
    • Matt Gadient on December 1, 2023 - click here to reply
      At least a couple people in the comments are using the i5-13500 and have been hitting the low power marks. It's got a higher turbo frequency, more cache, and 8 e-cores. So given that you're looking for the most powerful CPU while hitting low idle power, I suspect that might be a better option. It's possible some of the faster processors would be perfectly fine too, but obviously you're looking at spending a heftier chunk of cash to find out for certain and I'm not sure the ASUS H770 would handle the high power demands during turbo.

      As for being a router, I'm not sure although it's something I hope to test out within the next couple weeks. Currently I've got the ole ASRock/Intel J4005B-ITX running OpenWRT here as a router but the motherboard limit on it is C6 (though it spends 88-92% of time there after powertop with typical home traffic including a YouTube stream I tested just now). The thing's powered by one of my old Antec PSUs, and it reliably sits at a constant 10 watts.


      EDIT/UPDATE Dec 13: Just to follow up, I did a bit of testing with the ASUS H770 + i3-12100 + Intel i350-4T running OpenWRT in a privileged systemd-nspawn container with SYSTEMD_SECCOMP=0. Results are a bit messy.

      Doing nothing, it would be in C10 90-95% of the time.

      TEST A: Handling just 1 device watching YouTube, 90% C10. Connected to very light household traffic (spiky, with under 3Mbit/s average), it sat in C10 for 80-90% of the time. Downloading from one of my web servers, which caps the rate at 4MB/s (roughly 32Mbit/s), it dropped to the 50-60% range in C10.

      TEST B: Running iperf tests from an exterior VPS to a local machine (forwarding a port; a sample rate-capped command is sketched just after these numbers):
      - 0Mbps = approx 82% C10 (household traffic only)
      - 1Mbps = approx 73% C10
      - 5Mbps = approx 61% C10
      - 10Mbps = approx 58% C10
      - 20Mbps = approx 30% C10
      - 30Mbps = approx 12% C10
      ...it hit 0 at just over 40Mbps.
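
      (For anyone wanting to reproduce runs like these, a sketch with iperf3 - the address, port, rate, and duration are placeholders:

      iperf3 -s -p 5201                                  # on the home machine, behind the forwarded port
      iperf3 -c your.wan.address -p 5201 -b 20M -t 120   # on the VPS; -b caps the sending bitrate

      ...the -b cap is what produces each fixed-rate step above.)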

      TEST C: Interestingly, passing through the router between 2 local networks (no NAT, just forwarding) offered different results:
      - 0Mbps = approx 82% C10 (household traffic only)
      - 1Mbps = approx 82% C10
      - 5Mbps = approx 80% C10
      - 10Mbps = approx 74% C10
      - 20Mbps = approx 70% C10
      - 30Mbps = approx 64% C10
      ...it hit 0 at just over 70Mbps.

      Since I'm in an nspawn container, I wasn't able to try software flow offloading in the firewall section of OpenWRT to see if it would soften the impact in tests A and B - it's quite possible that the "flow offloading" it does would bring the results closer to test C. Also possible that IPv6 might do better in tests A and B by skipping NAT, though the impact of smaller-but-more packets could always throw a wrench into things.

      So far as I can tell, the takeaway here is that there's some degree of traffic the computer can handle as a firewall while still finding opportunities to sleep. But if it's being kept awake... other options start to look enticing too. I just tested the Intel J4005 again (plugged into nothing but a monitor) and it sits at 9-10W in OpenWRT even if C-states are disabled, and I suspect the J5xxx series would be similar (no idea about the N100). If some oomph is needed, my Ryzen 5600G does 22-23W on a Ubuntu LiveDVD with C-states disabled. Both of those start to look equally attractive any time Alder Lake loses its C-state advantage, in my view.


      EDIT/UPDATE Dec 15: Bare metal, configured Debian as a router. TEST B was nearly identical except time in C10 was +6% for each item - still hit the hard wall just over 40Mbps. Flow tables didn't help.

      Test C fared a bit better, with the final numbers being:
      - 70Mbps = approx 70% C10
      - 75Mbps = approx 60% C10
      - 80Mbps = 0% C10 (hit a hard wall here)

      When I enabled flow tables for Test C, I eked out a little bit more:
      - 80Mbps = approx 60% C10
      - 85Mbps = approx 45% C10
      - 90Mbps = 0% C10 (hit a hard wall here)

      TEST D: To test the impact of an increasing number of connections, I fired up a torrent client and added a bunch of torrents with a capped global download speed of 1MB/s.
      - 0 connections = approx 80% C10 (household traffic only)
      - 16 connections = varied 36-39% C10
      - 32 connections = varied 33-35% C10
      - 64 connections = varied 26-29% C10
      - 128 connections = varied 21-29% C10
      - 256 connections = approx 20% C10
      - 512 connections = approx 15% C10
      - 1024 connections = approx 5% C10
      ...I tried flow tables at various points. No positive difference.

      I came across a few interesting discoveries along this journey.

      First, flow tables didn't help much at all. If anything, online speed tests seemed to peak a bit lower. Maybe it's something specific to the Intel i350-T4 I used (in a bare-metal Debian H770 and a bare-metal OpenWRT J4005).

      Second, OpenWRT in a container wasn't a pleasant experience. I had weird issues cropping up where some connections were solid and others struggled. Perhaps with enough tweaking and coaxing it could be made to work smoothly. I found that a VM ate 2-2.5% CPU full-time on a bare install and wasn't easy on the C-states so I didn't chase that one further.

      Third, and this is very obscure and likely specific to the ASUS H770 or perhaps my combination of hardware or perhaps even the linux kernel I ran... if the built-in Realtek NIC was enabled in the BIOS but was NOT activated (via a systemd-networkd .network file), having another network card installed and activated caused the system to spend 50% of the time in C3. By "activated", I mean even something as simple as a [Match]name=X with the rest being empty. I tried an i210, i340 and i350. When using the i350-T4, I noticed that a corresponding SMBUS item in powertop also disappeared after I'd disabled the onboard NIC and moved the card to the second PCIEx1 slot. Sometimes it seems like ASUS has some gremlins running around on the PCIE bus.
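
      (For anyone trying to reproduce that last test: by an "activated" interface I mean a systemd-networkd file as minimal as the following - the filename and interface name are just examples:

      # /etc/systemd/network/10-card.network
      [Match]
      Name=enp2s0

      ...no [Network] section needed for the C-state effect to show up.)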
  25. Walter on December 1, 2023 - click here to reply
    Thanks a lot, I'll get the i5-13500 and will report back the results in a few weeks once I collect all the parts.
  26. tom on December 7, 2023 - click here to reply
    Hello,

    I followed your build and bought the same motherboard with an i3-13100 CPU.
    I have one issue that I can't resolve, and I don't know where or what to look for.

    I have installed Ubuntu 22.04 and also tried 23.04, but the issue is the same:

    Whenever I try to ping www.google.com, I have an issue AS SOON AS I remove my keyboard and mouse:
    - either "ping sendmsg no buffer space available" with drivers r8168
    - or "pcie link is down" with drivers r8125

    I have disabled every power management option I could find.
    I tried plugging in another USB device.

    Any clues ?
    • Matt Gadient on December 7, 2023 - click here to reply
      Not sure. Really strange that USB devices would impact the network. To gather more information, these are the things I'd try:
      1. Check output of "dmesg" for any associated events.
      2. See if it happens when booting via the 22.04/23.04 LiveCD or if it's only post-install.
      3. Try Ubuntu 23.10 (I *think* the release version shipped with the kernel that disabled ASPM on the RTL8125) - either LiveCD or install depending on results of #1.
      4. Try a different set of USB ports, do 1 at a time to narrow down whether it's keyboard or mouse (or both), try a different brand keyboard/mouse to see if it makes a difference.
      5. Unplug/replug the network cable to see whether the network comes back up or if it's down for good.
      6. Disable "F1 for error" in the BIOS and try booting to the OS without keyboard/mouse plugged in, then see what happens when it's plugged in and unplugged.
      7. If you have a PCIe network card around (ideally non-Realtek for this test), see if it suffers from the same issue.
      Perhaps someone else who has hit the same issue will reply. This isn't a situation I'd have encountered as I ended up leaving my keyboard plugged in (eventually swapping to a wireless keyboard which didn't mind being slept by powertop). Could always be a weird issue with defective hardware.
      • tom on December 10, 2023 - click here to reply
        Tried everything, but nothing changed. Gonna send it back and try another one.
  27. Rishi on December 7, 2023 - click here to reply
    Thanks for the write up - I thought I would share my findings.

    CPU: i5-12600K
    PSU: Corsair RM750e
    NVMe: Samsung 980 Pro
    RAM: 2x16 GB DDR5
    OS: Ubuntu 23.04 Server (running off USB)

    I initially bought an ASRock B760M Pro RS. After tuning the BIOS, and even after forcibly enabling ASPM on the Realtek card, I couldn't get lower than PC3. My total wattage was about ~15 watts. Not terrible, but this machine was for a new home server that would run 24x7, and I knew it could do better. I emailed ASRock asking if they plan to add support for explicitly setting package C-state values (their BIOS just has a tri-state of Auto, Enabled and Disabled) and heard nothing back. So I was done with them.

    I returned the ASRock and switched to an ASUS Prime B760M-A. I configured the BIOS and ran powertop. ASPM L1 was working on the Realtek without user changes. I was at about 11 watts. After unplugging the DP cable and the USB wireless KB/mouse, it dropped down to 7.2 watts. Awesome! It was able to get down to PC10, and the system seems very stable. Pretty incredible how far desktop computers have come for power usage.
    • Paul on March 12, 2024 - click here to reply
      Thanks for making this post! I was wondering if K-series CPUs could be used in a power-efficient build. In my country the 12600K is way cheaper than the non-K variant.
  28. SaveEnergy on December 14, 2023 - click here to reply
    With my H770-Plus (DDR5), C8 is also reached when the SSD is in the CPU-assigned NVMe slot. After I had problems with my Samsung and WD SSDs, I am now on an SK hynix Platinum P41, which runs stable so far. Unfortunately I am currently back at C3 due to a dual-port network card from LogiLink :-(. I'm currently trying to order an XL710-QDA from China *g*, but I'm not sure if it makes sense.
  29. etnicor on December 17, 2023 - click here to reply
    Hello,
    just did a build with an ASUS Prime B760M D4 and had exactly the same findings on my motherboard. I contacted ASUS to see if they have any feedback regarding the CPU-connected PCIe/M.2 slots. I get to C6 when running an Intel I226-V in the x16 slot and a Samsung 970 Evo in the CPU M.2 slot.

    However, I have to run my X710-DA2 network card in the x4 slot, since when it's used in the x16 slot the package C-state only goes to C2/C3.

    I have no issues with my I226-V NIC.

    My use case was to build a low-power 10 gigabit router. Currently running OPNsense, but I may switch to VyOS for lower system load.
    • baegopooh on February 6, 2024 - click here to reply
      Trying to reach a deep C-state (at least C6) with a 2.5G or 10G NIC in the CPU-connected x16 slot here.
      I tried:
      1. An Intel X520-DA2 (board from China), which does not support ASPM
      2. A Mellanox ConnectX-4 (board from China), which supports ASPM but only allows C3
      3. A random I226 NIC from China, which supports ASPM, but I can't enable ASPM

      So, can you let me know the exact model of your I226-V NIC, or where you bought it, please?
  30. Tony on December 23, 2023 - click here to reply
    Hi Matt,

    I'm sure you've heard this before but thank you for your informative posts and time.
    100% new to this all, and it's a bit overwhelming to be honest, but the thought of owning a server is incredibly appealing. I approached this exactly as I did when I started out PC building: watched a bunch of YouTube videos, read a bunch of articles, and sort of muddled my way through. Researching server builds, it's immediately apparent that you actually need to know what you're doing, and reading your articles makes this even starker.

    What are your suggestions for a Plex/Jellyfin server: always on, low power, full transcoding capabilities? Folks seem either high or low on products like Synology and QNAP, which gives me pause. Thanks once again.
    • Matt Gadient on December 23, 2023 - click here to reply
      Personally, I'd actually hold off a little while if possible and see how the Ryzen 8700G does power-wise when it comes out - if idle consumption is better than the 5x00G series was, it could become a strong contender. Beyond that, I haven't tested my 12100/12400 in transcoding tasks and don't have much specific to offer there beyond a vague awareness that you get 2 codec engines once you reach the 12500 (compared to 1 in weaker models). Maybe someone else will be willing to chime in with some recent experiences and observations though.
  31. JT on December 23, 2023 - click here to reply
    Check your board: on mine, the Intel network chip (I226-V) wouldn't let the system go into deeper C-states - looks like it's still buggy.
  32. Alex on December 31, 2023 - click here to reply
    Hi Matt,

    Thanks for the great write-up! This year I built myself a low-powered NAS to replace my pre-built QNAP TS-351 (Intel J1800) and a Gigabyte Brix J4105. My requirements were that this new system needed to consume less power on average than both of those systems combined and have far superior performance. With the help of your (previous) articles, the Dutch Tweakers forum, the German Hardwareluxx forum, the unRAID forum and some other sources I came up with the following:

    - Intel i5-13500
    - Gigabyte B760M GAMING X DDR4
    - Crucial Pro CP2K16G4DFRA32A
    - be quiet! Pure Power 12 M 550W
    - 1x Toshiba MD04ACA50D 5TB (from my NAS)
    - 2x Toshiba MG07ACA14TE 14TB (from my NAS)
    - Crucial P1 1TB (NVMe)
    - Samsung 980 1TB (NVMe)
    - Crucial BX500 1TB (backup connected through USB)
    - Transcend SSD230S 2TB (SATA SSD)
    - Philips OEM USB2.0 drive (boot drive)

    With this setup I currently run almost 50 Docker containers with various applications and databases, and I can reach 17W from the wall at idle (everything spun down and no services accessed except SSH), which I'm pretty pleased with. Package C8 can be reached, especially when most applications aren't doing much or when I stop them. When I stop everything, I can reach 11W at the lowest on unRAID.

    Another thing I (and several others) have noticed on Intel 600/700 series is that using USB 2.0 serial devices like Zigbee or Z-Wave sticks increases power consumption by a lot, something like 5-7W. I currently use ser2net on a Pi to circumvent this. I reached out to Gigabyte and Intel, but both denied that this was an issue.

    I also pin most apps to the E-cores, as this saved me 1-2W on average. The same goes for system processes, which I tend to move to those cores with taskset. That seems to work out pretty well.

    Regarding the Realtek NIC, I recently tried the 'native' Realtek driver that's available in the app store, but that disabled L1 completely for me, resulting in an additional 4W. Reverting to the kernel driver and forcing L1 with `echo 1 > /sys/bus/pci/devices/0000\:04\:00.0/link/l1_aspm` works.

    If you have any questions, you can always reach out. Have a great new years <3!
    • Matt Gadient on January 1, 2024 - click here to reply
      Looks like a great setup, appreciate you passing along the details. I was actually unaware of the /sys/bus/pci/devices/$DEVICELOC/link/l1_aspm setting - thanks for this. Just tried it with the RTL8125 on my MSI board and it works great - much easier to just set this at boot instead of building a custom kernel. Unfortunately the RTL8168 on the older Gigabyte H110N here doesn't seem to have that sysfs setting exposed so it will continue getting the custom kernel treatment until I get the chance to swap in an Intel i210 (which despite being 1Gbit is now slightly more expensive than the 2.5Gbit Intel i225 on AliExpress).
      • Jon on January 9, 2024 - click here to reply
        A slightly easier way to get to the ASPM L1 setting for the network interface is `/sys/class/net/$$$$/device/link/l1_aspm` where `$$$$` is the network interface name like `enp1s0`.
        • Matt Gadient on January 9, 2024 - click here to reply
          That is definitely easier than iterating through all the PCI devices as I'd been doing in my script. Thanks Jon.
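
          For anyone wanting to persist this across reboots, a minimal oneshot systemd unit is one way to do it. A sketch - the unit name and interface name (enp1s0) are placeholders, and depending on driver load timing you may need a different ordering dependency:

          # /etc/systemd/system/nic-l1-aspm.service
          [Unit]
          Description=Enable ASPM L1 on the NIC
          After=network.target

          [Service]
          Type=oneshot
          ExecStart=/bin/sh -c 'echo 1 > /sys/class/net/enp1s0/device/link/l1_aspm'

          [Install]
          WantedBy=multi-user.target

          ...followed by a one-time "systemctl enable nic-l1-aspm.service".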
    • Wojtek on January 6, 2024 - click here to reply
      Hi Alex,

      How is the stability of this build so far? I'm thinking of getting the same CPU/mobo combo, but I'm worried it will be unstable at idle, which would defeat my purpose of retiring my old Xeon system.

      Regards,
      Wojtek
  33. Matt Gadient on January 3, 2024 - click here to reply
    Unfortunately not at this time.
  34. voltron4lyfe on January 11, 2024 - click here to reply
    Thank you for this article, although I'll have to admit I feel a bit defeated. I was never able to get beyond C3 or below 20W. All C-state references are to the package state; the core states seem to be much more in the C10 range.

    My setup:

    i5-12500
    ASRock B660M Steel Legend
    64GB DDR4-3200 Corsair LPX
    2x NVMe drives (1 SK hynix Gold P31, 1 TeamGroup)
    ASM1064 PCIe 3.0 x1 to 4-port SATA adapter
    Realtek 2.5G NIC
    3x 120mm fans
    Proxmox 8 w/ Linux 6.5.11 kernel. Powersave governor enabled.

    I tried to strip everything down to the basics and removed all storage and other devices. I disabled the onboard audio, the Realtek NIC, and the ASM1062 SATA controller. Booting from a USB stick and running powertop, it was ~50% C2 and ~50% C3. Never above. I confirmed that all devices support ASPM using lspci and enabled every ASPM-related power setting I could find in the BIOS. I also looked for sneaky overclocking settings but didn't find any enabled. In this config, the power usage was ~20W. Adding the ASM1064 SATA controller and the NVMe drives didn't make a significant difference. I ran powertop --auto-tune each time with no appreciable effect.

    I then added an Nvidia 1660 Super GPU, which I pass through to a lightweight Linux VM. I run the Nvidia drivers with persistence mode, and nvidia-smi reports that it's using ~1 watt. This saves ~10-20W.

    Adding ~8 HDDs (a combo of Western Digital Reds and Seagate Exos drives), the machine idles at ~100W and spends 80% of its time in C2.

    I then start my main VM and pass through the SATA controllers. The power usage increases by 30W to 130W and stays there. powertop on the host shows the package never entering any C-states. I don't have any drive spindown configured.

    Not sure if I'm asking something, just sharing my experience. Even if it didn't make a big difference, I certainly learned a few things.

    I'm thinking that I may migrate from the VM to a Linux LXC container on the host. I'm wondering if the VM is somehow affecting power management. Thanks again for the very detailed and interesting writeup!
  35. Daniel on January 13, 2024 - click here to reply
    Hello, sharing my "just installed" experience

    Corsair SFX SF450 Platinum
    ASRock Z690-ITX
    Corsair DDR4 2*16GB 3200 MHz
    2 Samsung NVMe 980 Pro 1TB in the onboard M.2 slots (ZFS mirror)
    1 Samsung EVO 870 2TB on onboard SATA (ext4)
    i5-12400 stepping H0
    Realtek 2.5G disabled
    1 Noctua Fan on CPU (92mm)
    Proxmox 8.1 (1 Docker LXC running with plex)
    No Screen attached, no keyboard, no mouse
    BIOS loaded with ASRock default parameters + Audio/WiFi disabled, ASPM enabled everywhere

    Idle Consumption at the wall = 18W
    Powertop says | Pkg C2 28%, C3 64% | Core CPU C7 97%
  36. UnraidUser on January 22, 2024 - click here to reply
    Hi @MattGadient, thanks for your awesome post. I wanted to ask if you could share the powertop version and the Linux kernel version you used. Several of us in the Unraid community have noticed that powertop is not showing any pkg C-states, and the core C-states show ACPI, which according to GitHub happens when C-states are not correctly read by powertop. Thanks for your support
    • Matt Gadient on January 22, 2024 - click here to reply
      On my 2 systems running the Intel i3-12100/i5-12400, PowerTOP 2.14 on kernel 6.1, and PowerTOP 2.15 on kernel 6.5.
      • Anonymous on January 24, 2024 - click here to reply
        Thanks for your answer. It seems that even when ACPI is shown, the CPU can still reach lower C-states. I don't know if you're an expert on Unraid, but basically I have a couple of Crucial P3 NVMes formatted as a ZFS mirror, and on these drives I have all the folders/volumes etc. for the Docker containers. Do you think that if these SSDs are active the entire day, they could prevent the CPU from ever going to lower C-states?
        Anyway, I enabled everything on the ASPM page of my motherboard (Asus Pro WS W680-ACE IPMI). The only thing I noticed is that Native ASPM resets to Auto every time (when Enabled it should add OS ASPM support); every time I enter the BIOS I see it set to Auto. Do you have any clue?
        When I run this command, this is the actual status of the devices:
         lspci -vv | awk '/ASPM/{print $0}' RS= | grep --color -P '(^[a-z0-9:.]+|ASPM )'

        0000:00:1b.0 PCI bridge: Intel Corporation Device 7ac0 (rev 11) (prog-if 00 [Normal decode])
        LnkCap: Port #17, Speed 8GT/s, Width x1, ASPM L0s L1, Exit Latency L0s <1us, L1 <4us
        LnkCtl: ASPM L0s L1 Enabled; RCB 64 bytes, Disabled- CommClk-
        0000:00:1c.0 PCI bridge: Intel Corporation Alder Lake-S PCH PCI Express Root Port #1 (rev 11) (prog-if 00 [Normal decode])
        LnkCap: Port #1, Speed 8GT/s, Width x1, ASPM L1, Exit Latency L1 <64us
        LnkCtl: ASPM L1 Enabled; RCB 64 bytes, Disabled- CommClk+
        0000:00:1c.1 PCI bridge: Intel Corporation Alder Lake-S PCH PCI Express Root Port #2 (rev 11) (prog-if 00 [Normal decode])
        LnkCap: Port #2, Speed 8GT/s, Width x1, ASPM L1, Exit Latency L1 <64us
        LnkCtl: ASPM L1 Enabled; RCB 64 bytes, Disabled- CommClk+
        0000:00:1c.3 PCI bridge: Intel Corporation Device 7abb (rev 11) (prog-if 00 [Normal decode])
        LnkCap: Port #4, Speed 2.5GT/s, Width x1, ASPM L1, Exit Latency L1 <64us
        LnkCtl: ASPM Disabled; RCB 64 bytes, Disabled- CommClk+
        0000:00:1c.4 PCI bridge: Intel Corporation Alder Lake-S PCH PCI Express Root Port #5 (rev 11) (prog-if 00 [Normal decode])
        LnkCap: Port #5, Speed 8GT/s, Width x4, ASPM L1, Exit Latency L1 <64us
        LnkCtl: ASPM L1 Enabled; RCB 64 bytes, Disabled- CommClk+
        0000:00:1d.0 PCI bridge: Intel Corporation Alder Lake-S PCH PCI Express Root Port #9 (rev 11) (prog-if 00 [Normal decode])
        LnkCap: Port #9, Speed 8GT/s, Width x4, ASPM L0s L1, Exit Latency L0s <1us, L1 <4us
        LnkCtl: ASPM L0s L1 Enabled; RCB 64 bytes, Disabled- CommClk-
        0000:02:00.0 Ethernet controller: Intel Corporation Ethernet Controller I226-LM (rev 06)
        LnkCap: Port #0, Speed 5GT/s, Width x1, ASPM L1, Exit Latency L1 <4us
        LnkCtl: ASPM L1 Enabled; RCB 64 bytes, Disabled- CommClk+
        0000:03:00.0 Ethernet controller: Intel Corporation Ethernet Controller I226-LM (rev 06)
        LnkCap: Port #0, Speed 5GT/s, Width x1, ASPM L1, Exit Latency L1 <4us
        LnkCtl: ASPM L1 Enabled; RCB 64 bytes, Disabled- CommClk+
        0000:04:00.0 PCI bridge: ASPEED Technology, Inc. AST1150 PCI-to-PCI Bridge (rev 06) (prog-if 00 [Normal decode])
        LnkCap: Port #0, Speed 5GT/s, Width x1, ASPM L0s L1, Exit Latency L0s <512ns, L1 <32us
        LnkCtl: ASPM Disabled; RCB 64 bytes, Disabled- CommClk+
        0000:06:00.0 VGA compatible controller: NVIDIA Corporation GP107 [GeForce GTX 1050] (rev a1) (prog-if 00 [VGA controller])
        LnkCap: Port #0, Speed 8GT/s, Width x16, ASPM L0s L1, Exit Latency L0s <512ns, L1 <16us
        LnkCtl: ASPM L1 Enabled; RCB 64 bytes, Disabled- CommClk+
        0000:06:00.1 Audio device: NVIDIA Corporation GP107GL High Definition Audio Controller (rev a1)
        LnkCap: Port #0, Speed 8GT/s, Width x16, ASPM L0s L1, Exit Latency L0s <512ns, L1 <4us
        LnkCtl: ASPM L0s L1 Enabled; RCB 64 bytes, Disabled- CommClk+
        10000:e0:06.0 PCI bridge: Intel Corporation 12th Gen Core Processor PCI Express x4 Controller #0 (rev 02) (prog-if 00 [Normal decode])
        LnkCap: Port #5, Speed 8GT/s, Width x4, ASPM L1, Exit Latency L1 <16us
        LnkCtl: ASPM Disabled; RCB 64 bytes, Disabled- CommClk+
        10000:e0:1a.0 PCI bridge: Intel Corporation Alder Lake-S PCH PCI Express Root Port #25 (rev 11) (prog-if 00 [Normal decode])
        LnkCap: Port #25, Speed 8GT/s, Width x4, ASPM L1, Exit Latency L1 <64us
        LnkCtl: ASPM Disabled; RCB 64 bytes, Disabled- CommClk+
        10000:e0:1b.4 PCI bridge: Intel Corporation Device 7ac4 (rev 11) (prog-if 00 [Normal decode])
        LnkCap: Port #21, Speed 8GT/s, Width x4, ASPM L1, Exit Latency L1 <64us
        LnkCtl: ASPM Disabled; RCB 64 bytes, Disabled- CommClk+
        10000:e1:00.0 Non-Volatile memory controller: Micron/Crucial Technology P2 [Nick P2] / P3 / P3 Plus NVMe PCIe SSD (DRAM-less) (rev 01) (prog-if 02 [NVM Express])
        LnkCap: Port #1, Speed 8GT/s, Width x4, ASPM L1, Exit Latency L1 unlimited
        LnkCtl: ASPM Disabled; RCB 64 bytes, Disabled- CommClk+
        10000:e2:00.0 Non-Volatile memory controller: Phison Electronics Corporation E12 NVMe Controller (rev 01) (prog-if 02 [NVM Express])
        LnkCap: Port #0, Speed 8GT/s, Width x4, ASPM L1, Exit Latency L1 <64us
        LnkCtl: ASPM Disabled; RCB 64 bytes, Disabled- CommClk+
        10000:e3:00.0 Non-Volatile memory controller: Micron/Crucial Technology P2 [Nick P2] / P3 / P3 Plus NVMe PCIe SSD (DRAM-less) (rev 01) (prog-if 02 [NVM Express])
        LnkCap: Port #1, Speed 8GT/s, Width x4, ASPM L1, Exit Latency L1 unlimited
        LnkCtl: ASPM Disabled; RCB 64 bytes, Disabled- CommClk+
        • Matt Gadient on January 24, 2024 - click here to reply
          I haven't used Unraid, but I'll answer what I can based on my generic Linux experience. On the first question, it depends what you mean by "SSDs running the entire day". Constant disk activity will prevent the package from moving to better C-states. If you mean the disks sitting idle with ASPM disabled, as they appear to be, the degree to which C-states are impacted seems to vary by device, and unfortunately I don't recall having tested the P3 with ASPM completely disabled.

          As to the BIOS, my guess is that "Native ASPM" Auto/Enabled probably result in the same thing anyway. You could try setting "Native ASPM" to Disabled to try and force your BIOS settings on the OS, then see whether "lspci" reports an improvement on any of 8 entries that currently say "ASPM Disabled" - however you should probably make sure you have external backups just in case the kernel has intentionally disabled it on your system for reasons pertaining to data loss. I'm not sure what your ASPEED AST1150 bridge runs, but since ASPM seems to be disabled on it, I wouldn't be surprised if any devices downstream from it might be disabled too.

          When troubleshooting you may find it useful to change 1 BIOS setting at a time between rebooting/testing. If you can't get PowerTOP to tell you anything (even via something like a Ubuntu Live DVD), your next best bet is probably going to be to just use a Kill A Watt device to measure the impact of your changes.
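
          One more quick check that sometimes pays off is the kernel's global ASPM policy:

          cat /sys/module/pcie_aspm/parameters/policy

          The bracketed entry is the active policy (e.g. [default] performance powersave powersupersave). If it's stuck on performance, booting with pcie_aspm.policy=powersave on the kernel command line is a cheap experiment - with the same data-integrity caveat as above.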
  37. Maria on January 26, 2024 - click here to reply
    Thank you for the excellent article.
    What do you think of the board: ASRock PG Riptide Intel Z690 So. 1700 Dual Channel DDR4 ATX Retail

    Since I would have 8 SATA ports, I wouldn't need another PCIe card (until further notice). Plus the i5-12400F with existing HDDs and SSDs.

    In my office computer with an MSI mainboard, I don't have any of the C-state, energy-saving, or other settings mentioned here. Is this due to MSI or the board itself? It is a B450M PRO-VDH MAX (MS-7A38).

    Thanks for any help and grateful for comments :)
    • Matt Gadient on January 26, 2024 - click here to reply
      So far nobody in the comments has managed better than C3 using an ASRock 600/700-series motherboard, with 15-20W being the typical idle power consumption reported in a minimal configuration.

      Not sure on the B450M motherboard you mentioned, though it's an AMD motherboard and I only have a couple B550 boards so not a lot of experience with them. You might have a really basic toggle called "Global C-State Control" although you may have to dig through all the menu items to find it. Depending on the CPU, if you're using Linux, newer kernels in the 6.x line support different amd_pstate scaling drivers which may have a positive overall power benefit.
    • baegopooh on February 6, 2024 - click here to reply
      AMD only allows C1, C2 and C6, and powertop shows C6 as C3 (AFAIK)
  38. Maria on January 30, 2024 - click here to reply
    Many thanks for the answer. 15-20W is of course not 7W ;)
    Now a colleague has bought a mini PC and installed Unraid on it, using an M.2-to-SATA adapter to get 6 SATA ports. Something similar would probably also be available with 2x M.2, so that 2 such adapter cards could be installed.

    https://www.amazon.de/gp/product/B0BCVCWWF3/ https://www.amazon.de/gp/product/B0BWYXLNFT/

    Consumption without any tuning, with 2 Docker containers and 2 VMs, is approx. 9W at idle. 5 HDDs are connected.

    The appeal is that there's a mobile CPU behind it, and you can find machines like that readily. Do you have any experience with these? The results look good and the performance should be sufficient.
    • Matt Gadient on January 30, 2024 - click here to reply
      An M.2 SATA adapter in a Mini PC like that is something I've thought about but haven't attempted. If planning to run drives internally, the biggest consideration for me aside from space and heat is how they'd be powered - a Mini PC with an internal SATA connector might be a simple way to power a few drives via a power splitter depending on drive consumption.

      I'd expect CPU performance to be completely fine for most tasks.
      • Maria on January 30, 2024 - click here to reply
        My colleague has solved it as follows:
        - The Mini PC has its own power supply, and the HDDs are supplied via a 2nd Pico PSU with 160W. He has an empty NAS enclosure, but its two fans alone consume an additional 11W, so he'll take another look at that.

        My idea was to open up the Mini PC and install everything, including the HDDs, in my ATX case with 1 or 2 Noctua fans turned down considerably, then power it all with an existing ATX power supply in addition to the Pico PSU for the Mini PC.
  39. Daniel on January 31, 2024 - click here to reply
    === I have an Intel NUC11-ATKPE002 ===

    Hardware:
    * CPU Intel N6005 (Jasper Lake)
    * Samsung 980 1TB
    * 32 GB SO-DIMM
    * 1 Gb/s network

    Software
    * Proxmox 8
    * LXC Home-Assistant
    * LXC Plex (I915 hardware transcoding)
    * LXC dokuwiki
    * LXC fileserve (smbd,filebrowser,vsftpd)
    * LXC Syncthing
    * LXC mosquitto
    * LXC heimdall
    * LXC gotify
    * LXC rtsptoweb
    * LXC esphome
    etc..

    RUNNING 24/7

    Consumption measured at the wall (mostly idle): 7 watts

    === I also have a home-built 'pseudo NAS' based on an ASRock Z690 ===

    * CPU I5-12400
    * 2 NVMe 1TB
    * 8 hard disks ZFS Raidz2
    * Proxmox 8
    * 2.5 Gb/s network to my Desktop PC

    Consumption measured at the wall (mostly idle): 70 watts

    RUNNING ON-DEMAND

    === Switch 8 ports ===

    Consumption : 8 watts

    === my current experiment ===

    For home usage, I don't want to run my 'pseudo NAS' 24/7, only on demand.

    I have installed the Debian wakeonlan package on the NUC11.
    When needed (either on a schedule or by manual request) I can run:

    * wakeonlan [MAC-ADDRESS-Z690] to wakeup the NAS
    * ssh [PSEUDO-NAS-Z690] systemctl suspend to put the NAS in sleep mode

    All this works pretty well.
    I am working on a graphical interface in Home-Assistant which will also monitor the consumption.

    This is the best compromise I have found (NAS, low consumption, home automation).
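
    For anyone wanting to script the same wake-and-wait pattern, a minimal sketch (the MAC and hostname placeholders are the same as above):

      #!/bin/sh
      # Wake the NAS, then wait until it answers on the network.
      wakeonlan [MAC-ADDRESS-Z690]
      until ping -c 1 -W 1 [PSEUDO-NAS-Z690] >/dev/null 2>&1; do sleep 2; done
      echo "NAS is up"
      # Later, suspend it again over SSH:
      # ssh [PSEUDO-NAS-Z690] systemctl suspend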
  40. Dave on February 1, 2024 - click here to reply
    Thanks to everyone for all the great info on this page. It was a big help. I found a nice low power itx configuration with a core desktop processor, so I thought I would share out my configuration and results to help others. My use case is not the same as some of the above, but might be useful to some.

    Idle power: ~5.3 watts

    Hardware:
    * ASUS PRIME H610I-PLUS D4
    * Intel Core i3-12100
    * Crucial RAM 32GB DDR4 2666 MHz CL19 CT32G4DFD8266
    * SAMSUNG 980 PRO SSD 2TB NVMe
    * PicoPSU-150-XT
    * Noctua NH-L9i-17xx, Low-Profile CPU Cooler
    * Noctua NF-A8 PWM, Case Fan
    Software:
    * BIOS 3010
    * Ubuntu 22.04.3 Server (running from a USB thumb drive)
    BIOS Settings:
    * EZ System Tuning set to: “Power Saving” (this automagically set all the needed BIOS settings)
    * PL1/PL2 set to 35/69 watts (because of the tight enclosure and the low-power PicoPSU)
    Software Settings (set from the attached TV console display)
    * powertop --auto-tune
    * setterm -blank 1

    Measured at the wall with a Kill-A-Watt. The reading mostly hangs out between 5.1 and 5.5 watts. The power measurement is taken once the console screen blanks. Package C-states are gathered through ssh. The processor spends most of its time in package C-state C10 (~90%).

    Some benefits to this configuration:
    * Very nice idle power
    * Simple BIOS configuration.
    * PCIe 4.0 x16 slot for potential upgrades/expansion (however, using it looks like it might mess up the low idle power)

    Some drawbacks to this configuration:
    * 1GbE vs 2.5GbE
    * The picoPSU has a 4 pin power connector, the board wants an 8 pin power connector (internet research indicates this is OK for low power situations)
    * Single M.2 slot. PCIe 3.0 x4 mode

    I had wanted 2.5GbE. When I hit critical mass in the house for 2.5GbE, I’ll probably play around with a 2.5GbE card in the PCIe 4.0 x16 slot.

    Other Info:
    * The magic “Power Saving” BIOS setting didn’t turn on “Native ASPM”. However, turning it on didn’t make a difference.
    * Disabling the WIFI/Bluetooth module in the BIOS didn’t make a difference.
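
    For anyone wanting to sanity-check ASPM from the OS side, a couple of read-only commands (run as root):

      # The kernel's global ASPM policy (the [bracketed] entry is the active one):
      cat /sys/module/pcie_aspm/parameters/policy
      # Per-device ASPM capabilities and current state:
      lspci -vv | grep -i aspm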
    • Rusty on February 12, 2024 - click here to reply
      Thanks for your build. I copied what you did (imitation is the sincerest form of flattery!). Where did you change the BIOS PL1/PL2? Here's what I changed:
      OS: Unraid on USB
      PSU: RM850x (2021), as it is very efficient at low loads.
      Disks: added 2x12TB WD Red Plus NAS Drives alongside the NVMe
      RAM: 1x32GB 3400MHz DDR4 (salvaged from current PC)
      Fans: Intel stock CPU fan + case fans from the Fractal Node 304 case at low speed.

      My idle wattage measured from my UPS is ~21W, with Linux ISOs seeding on one HDD and the other drive in standby mode. The drives are clearly adding a lot of wattage to the build. I set the parity drive to only spin up when syncing the NVMe cache to the data drive once a day (at least that's how I plan for it to function...), and the CPU is plenty fast for watching Creative Commons licensed films. At any rate, 21W is nice for my first seedbox :D


      This PSU efficiency spreadsheet should be useful for people here! Data was taken from Cybenetics:
      https://docs.google.com/spreadsheets/d/1TnPx1h-nUKgq3MFzwl-OOIsuX_JSIurIq3JkFZVMUas
  41. IrishMarty on February 8, 2024 - click here to reply
    Really nice article. Low power builds are now an absolute necessity for me here in the UK. I'm trying to do an ITX build, but your results on the CPU-connected PCIe slot concern me. Will have to keep researching.
  42. etnicor on February 15, 2024 - click here to reply
    Don't know if this is of interest, but
    I was able to get the ASMedia ASM1166 to run in the PCIe x16 slot, use the CPU-connected M.2 slot, and still get to C8.

    Had to mod the ASUS BIOS to get access to the root port settings. Disabled Multi-VC on root port 1, and I can now use the x16 slot while getting down to the C8 CPU package state. I also reach C8 when using the CPU-connected M.2 slot at the same time.

    Simplified steps:
    - Desoldered the BIOS chip from the motherboard
    - Dumped the BIOS with a CH341 3.3V programmer
    - Modded the BIOS with the open-source UEFI-Editor, "unhiding" the root port settings
    - Flashed the BIOS
    - Resoldered it back
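
    For reference, the dump/flash side of this is typically done with flashrom; a sketch assuming a CH341A programmer and a correctly wired 3.3V chip:

      # Read the flash twice and compare, to be confident the dump is good:
      flashrom -p ch341a_spi -r dump1.bin
      flashrom -p ch341a_spi -r dump2.bin
      cmp dump1.bin dump2.bin
      # After editing with UEFI-Editor, write the modded image (flashrom verifies the write):
      flashrom -p ch341a_spi -w modded.bin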

    My motherboard is an ASUS PRIME B760M-A D4, but it should be the same on the Prime H770-Plus D4.

    I have to thank Intel support for suggesting disabling Multi-VC; I wouldn't have figured that out myself.
  43. Marcin on February 23, 2024 - click here to reply
    Thanks for the writeup! I built a similar config and was able to reach 10.5W at idle (would be lower with a better PSU) but I'm pretty happy. Power consumption was measured with a Shelly Plug S.

    CPU: i5-14500
    MB: Asus Z790-P D4 (got this one as a customer return for the same price as the H770-P D4, and as you said it has an extra PCIe 4.0 x4)
    RAM: 4x 8GB (no XMP) => will be replaced by 2x 32GB
    NVMe: Sabrent Rocket 1TB PCIe 3.0
    PSU: Seasonic Focus GX-550

    This was tested on TrueNAS Scale 23.10.2 and Ubuntu Server 23.10. For both of them, I grabbed the Realtek driver.

    A few observations:
    • With the Z790-P D4, I can use the CPU-connected NVMe slot without any impact on C-states. I need to test whether the PCIe 5.0 slots can also be used without any BIOS modding.
    • Plugging any kind of USB device on the motherboard (keyboard, USB dongle) adds 5W
    • The PSU I'm using right now is pretty horrible for low loads: 60% at 10W and 71% at 20W. With a better PSU (81% eff) this build would get down to 7.8W
    • Using 'consoleblank' didn't seem to have an effect on RC6. It always reported 0% in powertop. More testing needed here
    • Daniel on February 24, 2024 - click here to reply
      Are you sure that any kind of USB device adds 5W? Can you please check the communication speed, with 'usb-devices', for example?
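
      For example, something along these lines will list each device along with its negotiated speed (the Spd= field, in Mbps):

        usb-devices | grep -E 'Product=|Spd='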
      • Marcin on February 25, 2024 - click here to reply
        Hi Daniel!

        Yes, I am sure. When I connected my Logitech MX Keys USB 2.0 dongle to the motherboard, the power consumption on the Shelly went from 10.5 to 15-16W. If I remember correctly, it was messing with the CPU C-states (i.e. no longer going into C6/C8).

        I have only tested with USB 2.0 devices on the ports attached to the motherboard (on the rear I/O). USB 2.0 devices weren't even recognized in the non-USB 2.0 ports.
        My theory is it's the BIOS and/or the drivers for the USB controller. Since I'm not using any USB devices in normal operation, I didn't investigate further. I'm now back on TrueNAS, and it's stuck with powertop 2.14, which doesn't support my CPU, so C-state reporting is broken.

        For info, I'm running BIOS version 1611 and Intel ME 16.1.30.2307v4.

        As for the other tests:
        • 'consoleblank' did actually have an effect on RC6. I just wasn't looking at the right place. The iGPU was 100% in RC6.
        • I can also use the CPU-connected PCIe 5.0 port while still reaching C8, with some caveats: C6-C8 were hovering around 10-20%. I only tested this with an Arc A750 because it's the only PCIe device I currently have. Possibly the card itself is to blame here, so I'll retest once I have an ASM1166.
        • Daniel on February 26, 2024 - click here to reply
          I'm asking because I noticed something similar on B660 and B760 boards, but only with 12 Mbps USB devices and I'm curious if this also applies to Z690/Z790.
  44. Michael on February 28, 2024 - click here to reply
    Hello Matt,
    Congratulations on this well-researched and clearly time-consuming article. It's the best source I could find on the net on this subject, especially regarding the technical background. I've jumped on the bandwagon and will report back ;-)

    One short question remains: Is there an exact specification of which Kingston HyperX memory was used? What role does RAM play in power consumption?

    Thanks again and best regards,
    Michael
    • Matt Gadient on February 28, 2024 - click here to reply
      I don't have the model numbers of the sticks I used during the test (and different memory is being used at the moment). As for power consumption, I didn't A/B test. The last time I went out of my way to deliberately gauge the power consumption of RAM was in a laptop with DDR4 SODIMMs: according to "Intel Power Gadget", an 8GB stick used 0.15W idle and 1.67W gaming, and a 16GB stick used 0.29W idle and 2.18W gaming, though I assume those were likely estimates. If curious, you can find details about that near the end of the following page: https://mattgadient.com/faster-gaming-on-intel-uhd-620-kaby-lake-r-by-upgrading-ram/
      • Michael on February 29, 2024 - click here to reply
        Thank you very much for the detailed answer. I'll get started and report back on what I've achieved.
        The question remains as to which hard disks enable C10. There really isn't much to be found on the net. Do you have any experience with 3.5" Seagate Exos disks? The Toshibas seem to work. Thank you
        • Matt Gadient on February 29, 2024 - click here to reply
          The following 3.5" Seagate 10TB SATA drives are all currently working for me with C10. Seagate Exos X16 (ST10000NM001G), Seagate Ironwolf Pro (ST10000NE0008), Seagate Barracuda (ST10000DM0004). So far there haven't been any Seagate drives that have caused me problems with C10.
  45. Paul on March 8, 2024 - click here to reply
    Hello!
    Is it a lottery which stepping of the 12400 I'll get when buying one from a retail shop? I've heard the C0 stepping consumes more at idle than the H0 stepping.
    Also wanted to ask, if 12600 could be a good alternative?
    • At a retail store if you look at the S-spec on the box label (or the CPU itself), it should be a situation where SRL4V=C0 and SRL5Y=H0 for the 12400. A number of comments have indicated that chips with E-Cores haven't incurred a negative power cost for them though. As to other models, can't say with certainty regarding the 12600 but at least a couple people in the comments have used the i5-13500 successfully at low power (and it contains E-Cores).
      • Paul on March 12, 2024 - click here to reply
        Thanks for the info about 12400 steppings!
        I am considering the 12600 since it has a more powerful iGPU than the 12400. Both CPUs have 6 P-cores and 0 E-cores.
        Do you know if 12600 also has different steppings?
  46. Michael on March 27, 2024 - click here to reply
    Hello Matt,
    Your article and comments are a real treasure. Thanks to everyone who has contributed so much. I would like to share my experience briefly:

    ASUS Prime H770-Plus D4 + i5-12400 H0 + 2x16GB Kingston DDR4-3200 Kingston Fury Beast + Samsung 970 Evo Plus
    Without keyboard and HDMI
    Powertop 2.14 (--auto-tune)

    Ubuntu Server 23.04 Kernel 6.2.0-20 (new, no updates):
    6-7 watts (mainly C10)

    Note: a connected HDMI display adds 2-3 watts and a Logitech MX Bluetooth receiver adds 4-5 watts, so headless operation saves about 7-8 watts in total.

    With 2x additional Samsung 970 EVO: 7-8 watts
    With 4x additional Seagate X20 (in sleep mode): 9-10 watts
    With 2x 140mm fans (lowest speed): 12-13 watts

    Just under 13 watts with 2x NVMe and 4x 3.5" drives is not a bad value (C10 at 90%).

    After a kernel update to 6.2.0-39, C10 was no longer possible; the system only reached C3 and consumed 21 watts.
    Obviously Realtek-associated. The manual setting of
    setpci -s 00:1c.2 0x50.B=0x42
    setpci -s 04:00.0 0x80.B=0x42

    fortunately leads back to the C10 status and thus 12-13 watts again
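
    (For the curious: assuming the standard Intel PCH layout where the PCIe capability sits at offset 0x40, register 0x50 is the Link Control register, and the value 0x42 enables ASPM L1 (bit 1) plus Common Clock Configuration (bit 6); the first command targets the root port, the second the Realtek NIC's end of the link.)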

    Similar behavior with Ubuntu Server 23.10 (kernel 6.5.x): 16-18 watts (mostly C3)...

    I tried Unraid (6.1.74) as a test. The system is not quite as economical and hangs at 16 watts, even after manually setting the ASPM states via setpci (see above). Ubuntu draws a few watts less; some more research is needed.

    In any case, building and researching was a lot of fun, and the old Synology consumes many times more than the new system.
    Thanks again Matt and everyone else.
  47. Alexander on March 27, 2024 - click here to reply
    Hi! Read all comments, but cannot find clear answer...
    My ASRock B760M Pro RS/D4 struggles with C-states on the 13500 (C3 maximum) and 20W idle power on a clean Proxmox install.
    I tested it with an Arc A380, and none of the recipes from the internet worked; it draws 20W idle :facepalm:

    Please recommend an mATX motherboard for the 13500.
    It would be perfect if the board also supported PCIe bifurcation (x8/x4/x4).
  48. kihoon on April 8, 2024 - click here to reply
    I'm reading this from far away in Korea, and I think it will be very helpful for my NAS configuration, thank you.
    I have a question. Is there any difference in power consumption between the H610, B660, and B760? Also, if I connect a GPU to PCIe, will I only have an inefficient C2 state?
  49. cromo on April 12, 2024 - click here to reply
    I attempted to repurpose an HP Z2 G9 workstation motherboard, which is W680 and supports ECC, but not without power efficiency issues. I explained it in detail here: https://forums.servethehome.com/index.php?threads/a-cost-effective-intel-w680-ecc-server-repurposing-an-hp-z2-g9-motherboard.43943/

    Long story short, I am having a problem with the first PCIe slot reducing the C-states to C2 when a GPU is installed. Otherwise, with 2x Lexar 1TB NVMe drives and 64GB of memory, I am seeing power consumption as low as 4.5-6W with the 12600K, which is astonishing.
  50. Anonymous on April 13, 2024 - click here to reply
    Great article. Really good read.
  51. Martin on April 20, 2024 - click here to reply
    Interesting read, and the comments on the article were thoughtful. ;)
  52. Rasmus on April 28, 2024 - click here to reply
    Hi

    I'm looking to build an ITX-based i3-12100 NAS with the ASM1166 chip. However, as I understand it, using the x16 PCIe slot will force the system to remain at C2.

    But will running 8 HDDs, 2 SSDs, and one (chipset-connected) NVMe drive make reaching high C-states a bit irrelevant?

    I assume your wattage tests only show the ASM1166 in a non-CPU-connected slot. It looks like I would save roughly 20W by achieving higher C-states with my CPU? And the hard drives' idle behavior, and thus their power savings, is not connected to the C-states?
    • Based on my results, I'd expect roughly 12-14W of extra power consumption any time you're in C2 instead of ~C8 - whether that's relevant or insignificant depends on your design goals. As for HDDs, I mention the power-saving specifics of the drives I used under the "unnecessary storage details" section near the end of the post, but to ballpark it here, I saw almost 5W saved per 3.5" drive when spun down instead of sitting in active idle.

      Drive power savings and C-state savings are mostly unconnected: I say mostly because certain NVMe drives, certain SSDs, and even certain SATA configurations seem to inhibit C-states. But the power savings from having an HDD physically spun down (vs spun up) should be completely independent from anything else happening in the system power-wise, including C-states.
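
      If you want to experiment with spindown yourself, a minimal hdparm sketch (the device name is a placeholder, and -S values map non-linearly to timeouts):

        # Check the drive's power state (normally doesn't wake a sleeping drive):
        hdparm -C /dev/sdX
        # Spin the drive down immediately:
        hdparm -y /dev/sdX
        # Set a standby timeout; values 1-240 are multiples of 5 seconds, so 120 = 10 minutes:
        hdparm -S 120 /dev/sdX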
  53. duxet on May 2, 2024 - click here to reply
    I want to buy a B760 ITX motherboard for my next NAS setup, but I'm not sure which 2.5GbE NIC to choose: Intel or Realtek. I was sure Intel would be the better choice, but I read on the Unraid forum that their NICs actually prevent the system from reaching a better C-state than C2 when used with a 2.5Gb link, while Realtek can reach C8. Even if that's true, the question is whether it's possible without using the out-of-tree r8125 driver instead of r8169. Could anyone confirm or deny this?
    • Alex left a comment (scroll up to Dec 31 2023) where he used
      echo 1 > /sys/bus/pci/devices/0000\:04\:00.0/link/l1_aspm
      to allow L1 for the Realtek NIC on his Gigabyte B760M, and Jon followed up in a comment with an easier device path.
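
      If that works and you want it to survive reboots, one option is a small systemd oneshot unit (a sketch; the unit name is arbitrary and the PCI address is the one from Alex's comment, which may differ on your board):

        # /etc/systemd/system/realtek-aspm.service
        [Unit]
        Description=Enable ASPM L1 on the Realtek NIC

        [Service]
        Type=oneshot
        ExecStart=/bin/sh -c 'echo 1 > /sys/bus/pci/devices/0000:04:00.0/link/l1_aspm'

        [Install]
        WantedBy=multi-user.target

      Enable it with: systemctl enable --now realtek-aspm.service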
      • duxet on May 3, 2024 - click here to reply
        But does it work with newer kernel versions? Looks like it's a very bumpy road with Realtek and ASPM, as sometimes it gets enabled: https://github.com/torvalds/linux/commit/a99790bf5c7f3d68d8b01e015d3212a98ee7bd57 but later it's disabled again: https://lists.ubuntu.com/archives/kernel-team/2023-September/142666.html

        I am a bit worried that even if I force ASPM to be enabled then it could cause some weird stability issues. I guess 1GbE cards are still the only safe choice to be sure that power management will work properly with Linux.
        • Luis on May 9, 2024 - click here to reply
          I can confirm: on a build I just completed with a 2.5GbE Intel NIC and an i5-13500, I'm idling at 7-8W with no HDDs. I can hit C10 97% of the time, and the I226-V chip reports ASPM as fully supported. My guess is that with a couple more tweaks I can stay closer to 7W idle.
          • Philipb on September 16, 2024
            Thanks Luis, which board did you use?
          • vixmix on November 7, 2024
            Hi Luis - what model of Motherboard and chipset you use?
  54. Carsten on May 3, 2024 - click here to reply
    I just got an ASUS PRIME Z790-P (V2) with DDR5 and Intel i3-14100. Would be neat to see you doing this with the latest gen and DDR5 differences.
  55. Alex on May 13, 2024 - click here to reply
    Does anyone have/know of an Intel-chip-based PCIe card with at least two gigabit RJ45 connectors which doesn't cause the HW package to stay at C3 instead of going to C8? I want to build a Proxmox server running OPNsense as a VM, and if I can save 7W just by picking the right card, that would be fantastic.
  56. Michael on May 16, 2024 - click here to reply
    Hi Matt,
    thanks for the great article. I use the same build and get C10 states using both of the lower M.2 slots with Samsung 970 EVOs. What bothers me is the temperature: the left slot consistently runs 15 degrees Celsius cooler. I am using be quiet! heatsinks. I googled but did not find a clue. Has anybody observed a similar problem?
    Thanks a lot Michael
  57. Alex on May 17, 2024 - click here to reply
    Adding some puzzle elements....

    ASUS Prime H770-Plus D4, i3-12100, 2x 32GB Corsair Vengeance LPX 2666MHz RAM, be quiet! Pure Power 12 M (550W), WD Blue SN570 512GB NVMe drive, Supermicro AOC-SG-I2 dual-port GbE, be quiet! Pure Rock Slim 2 CPU cooler

    After forcing ASPM on the Realtek driver (which seemed to cause trouble even while it was deactivated), I can plug the WD Blue into the Gen4 x4 NVMe slot and the Supermicro into the Gen4 x4 PCIe slot without the hardware package staying at C3. With Proxmox 8.2 I get around 60% of time in C8 for the HW package, 98% C10 for the CPU, and 96% C7 for the Core (HW), resulting in around 8-9W power consumption measured at the wall. All after applying all the standard BIOS power-saving settings and running powertop --auto-tune, plus setting L1 ASPM on the Realtek via the command line.

    All I see are short spikes where consumption goes up to 15W, just to drop back to 8W, while running powertop. So I have 2 additional slots for NVMe drives, or places where I can add NVMe -> 6x SATA adapter cards (with the ASM1166 chipset).

    I can also confirm that after setting ASPM on the Realtek NIC, other cards no longer cause C3 on the package. If someone wants a cheaper option than the Supermicro, the "10Gtek dual NIC card with 82576 Intel chipset" also works fine.
  58. eduardz on May 18, 2024 - click here to reply
    Hello Matt

    Do you have an optimised build for 2024, or any hints or suggestions?

    I plan a nas build for zfs (raidz2) with 6 or 8 drives + plex server.

    I was looking at a tower case / motherboard for an i5-14500T (Plex iGPU transcoding) with at least DDR5 (it has some on-die error correction, though not as good as real ECC), or maybe a board that supports unbuffered DDR4 ECC.
  59. Anonymous on May 18, 2024 - click here to reply
    Nice article, especially the power consumption data.
  60. ChrisC on June 9, 2024 - click here to reply
    Excellent article, but I am also gobsmacked at what has been achieved.

    The reason I found your article in the first place is that I have been experimenting with ASPM on a desktop system and found it only saved me a miserable 2 watts. Your article indicates something isn't right with my findings, and I perhaps need to do more digging; sadly, hardly anyone publishes power-saving numbers for specific changes. Your article suggests the biggest gain from ASPM is that it allows the CPU to go into a deep power-saving state, so I suspect that's where my problem lies. I am also taking an interest because, like you, I am using desktop parts to run a storage platform which I would like to have as low a power draw as possible.

    Interesting also that you have drives that have such low idle power.

    I bought two Seagate IronWolf drives, which I regret: the idle power draw of these drives is a whopping 8W (when spinning), although in standby they are under 1W. I then bought some 12TB WD helium drives which are under 4W idle when spinning, and similar to the Seagates in standby.

    Other issues with the two Seagate drives: (a) if any command is sent to the drive, e.g. a SMART query, they will spin up, which is very odd behaviour I've never seen from a drive before; even querying their sleep state with the SeaChest tools spins them up, while the WD drives attached to the same ASMedia controller card don't exhibit this behaviour. (b) The slower-RPM spin mode, which I think is the idle_c state, doesn't work on my two Seagate drives. TrueNAS, the software I'm using, sends a SMART query every 5 minutes for its temperature monitoring feature, which is hard to turn off and is global on/off only. So if the Seagate drives are spun down, it wakes them up. I ended up manually patching the code so I could blacklist the two Seagate drives from temperature monitoring, and also so that SMART health checks are skipped if a drive is in standby.
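
    As an aside, on drives that behave normally, smartmontools can skip sleeping drives without any patching; a sketch (device name is a placeholder, and these particular Seagates reportedly wake on any command regardless):

      # Skip the SMART query entirely if the drive reports it is in standby:
      smartctl -n standby -a /dev/sdX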

    With this in mind, though, my power usage is horrific compared to yours: the system runs headless with 4 spindles, 3 SATA SSDs, and 1 NVMe SSD, and at idle it's about 48W, and about 34W with all spindles spun down. There are 3 fans in the system.

    I also have some interesting observations regarding NVMe.

    In my systems, I have found NVMe drives consistently run much hotter than SATA SSDs, and usually hotter than active spindles as well. In a NUC, I had to resort to active cooling, as the NVMe was pegged at the 70C throttle limit even when idle. Like you, I have discovered these things are a bit all over the place when it comes to power states.

    I have discovered, as an example, that in Windows ASPM seems to have absolutely no impact on any of the 3 NVMe drives in the system; L1 and L0s both have no impact on temperatures. I have a Samsung 980 Pro, a WD SN850X, and a PCI Express Intel DC P4600 which has its own beefy heatsink.

    Essentially, the DC P4600 is always below 30C and isn't affected by ASPM mode.
    The WD SN850X runs in the low to mid 40s depending on ambient and is not affected by ASPM.
    I also tested a WD SN570, and it behaves the same as the SN850X: no effect from ASPM, idling at around 45C.
    Finally, the 980 Pro is also not affected by ASPM; however, I can trigger lower power states with this drive. It idles very high, at about 54-60C.

    Windows has hidden power settings that allow you to play with the NVMe power states directly. The Samsung 980 Pro will drop by about 7-8C in the first NVMe power-saving state, which, while appreciated, still leaves it my hottest NVMe drive and way hotter than my SATA SSDs, which run in the 20s. Interestingly, the second power-saving state only drops a further 1-2C.

    WD NVMe drives seem to consistently not support a deeper power-saving state, according to various reviews and reports on the net. Luckily, my WD drives don't run as hot as my Samsung drive.

    I also own a 970 EVO which used to idle at about 65C; I got it down to the low 40s by placing it inside a PCIe NVMe adapter with a big heatsink on it, so my experience of Samsung drives isn't great for temperatures. I have decided I am not a fan of NVMe drives: they run hot for me and have very high idle power compared to SATA SSDs. Yet you appear to have cracked them.
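
    On the Linux side, nvme-cli can show what a drive advertises and whether autonomous transitions are enabled; a read-only sketch (device name is a placeholder):

      # List the power states the controller advertises (mp: = max power in each state):
      nvme id-ctrl /dev/nvme0 | grep -E '^ps '
      # Check the Autonomous Power State Transition (APST) feature (feature id 0x0c):
      nvme get-feature /dev/nvme0 -f 0x0c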
  61. Edward on July 24, 2024 - click here to reply
    Thanks for sharing all your testing. The observations about using CPU-attached PCIe lanes really surprised me. That means on a mini-ITX build it would be very difficult to go beyond 4 SATA ports, no?
    • Edward on July 26, 2024 - click here to reply
      Found a possible solution...
      MSI and ASRock both have a Z790 mITX motherboard with 3 M.2 slots, two of which are chipset-connected. That would allow you to have an M.2-to-SATA adapter and an NVMe drive while avoiding the CPU-connected PCIe/M.2 slots. Both boards are in the $250-300 range though, so the power-savings/cost ratio takes a hit.
  62. RD on August 27, 2024 - click here to reply
    Found this incredibly helpful article. I'm thinking about building a NAS to upgrade from a Synology box and have been worried about increased power consumption. I do run a number of applications (home automation) which use storage pretty much constantly so optimizing *idle* power use isn't as useful as reducing power under active use. Will the choices and settings you have recommended above generally also help with having an overall lower power system when it's running?
    • Matt Gadient on August 27, 2024 - click here to reply
      The Crucial P3 NVMe SSD and Seagate 2.5" HDD drives are both extremely efficient when being accessed (check the TomsHardware review of the Crucial P3 2TB at https://www.tomshardware.com/reviews/crucial-p3-ssd-review/2 to get a good sense as to how it compared with others). The Corsair RM(x) PSU is also up there in terms of efficiency. I don't think you can go wrong with any of those components regardless of whether you're idling or under heavy use.

      As for the CPU/motherboard, the Intel 12th gen really starts to lose its benefit if it's not able to spend much time in the C6-C10 states (and unfortunately there's no way to know whether you'll reach those states until you try). But generally speaking, for a system that's going to be moderately loaded 24/7, I'd personally start to gravitate more towards AMD at this point in time.
  63. Bastian on August 31, 2024 - click here to reply
    Hi Matt, love the detailed work you provide to the low-power home server community!

    I was wondering if you by any chance considered the CSM version of the ASUS Prime H770-Plus D4. From what I've gathered, CSM in combination with an i5 like the 12500 or 13500 should allow vPro Enterprise features, which in turn are supposed to bring DASH, with KVM/IPMI-like remote management capabilities. DASH is by far not as powerful as a full-blown ASPEED AST2600 BMC solution, which usually draws an additional 5-10W.

    At the same time, CSM is also intended to bring enterprise-like stability, which could mean fewer BIOS options for tuning C-states and so on.
    • Matt Gadient on August 31, 2024 - click here to reply
      The CSM variant wasn't on my radar at the time, which likely means there was no local availability when I was motherboard shopping.

      Taking a quick look at the product page now, the only difference I'm seeing between it and the non-CSM model is access to ASUS Control Center Express. The tech specs page for the CSM variant is identical except for adding "1 x ACC Express Activation Key Card". The manual is identical, with no distinguishing features mentioned. I downloaded both BIOSes (version 1663) and the checksums were identical, though I suppose features could be flipped via flags.

      I wouldn't be surprised if features/functionality/BIOS were all exactly the same, with the exception of the functionality provided by ACC Express software. But no way to know for sure except for someone to try.
  64. Anonymous on September 3, 2024 - click here to reply
    Trying to rebuild your whole system: 12400, 32GB, H770-Plus D4, in a Fantec 24-bay enclosure with an X710-DA2 and a 9600-24i. The main NVMe is a 1TB Samsung Pro. The Corsair is no longer available, so I went with a Seasonic Titanium.

    The system can reach C10 without the 9600-24i. It's not prepped yet, so I didn't insert it.

    What really sucks on this board is the minimum fan speed of 20%. And I don't know why, but my minimum draw is 11.5W, though that is with the X710-DA2 installed. I will switch the CPU cooler to a passive one; that might be another watt gained.

    Just some info for others interested in the X710-DA2: no WoL, at least for me. Patching that thing is PURE cancer - do it in EFI. The card becomes HOT in active mode, so you need to either cool it actively or swap the heatsink. I went with the latter: turn the card horizontal with a PCIe riser kit and fit a https://www.reichelt.de/kuehlkoerper-75-mm-alu-1-3-k-w-sk-89-75-kl-ssr-p227795.html?search=Sk+89+7 (so I now also own a Dremel). You need to make the card STAND off the motherboard with distance spacers. Hint: the EFI shell only works with a self-made EFI image; forget about that motherboard opening its own EFI shell.
  65. Philipb on September 16, 2024 - click here to reply
    Absolutely amazing write up thank you so much. I'm pretty much going to clone this.

    By any chance, for the same CPU or an i3 12th gen, do you know of any good mATX or ITX boards for low power, like the ones above?
    • Matt Gadient on September 16, 2024 - click here to reply
      Given my issues with the Gigabyte board I tried, if I were trying to build an identical system but in mATX or ITX form factors (and willing to lose a number of PCI-E and NVMe slots), I would personally stick with the ASUS PRIME models and try one of:
      • ASUS PRIME H610I-PLUS D4 (ITX, DDR4, but 1G Realtek ethernet might be a major dice-roll)
      • ASUS PRIME H610M-A D4-CSM (mATX, DDR4, 1G Intel)
      • ASUS PRIME Z790M-PLUS (mATX, DDR5, 1G Intel)
      It won't come as a surprise that I really like the Intel 1G Ethernet on the last 2.

      To be clear, I haven't actually tried any of these: my assumption is that BIOS etc would probably be similar to the H770-PLUS D4 and that I'd be able to mirror the outcome. But I don't know for sure. The Z790M-PLUS packs 4xPCIe and 3xNVMe on it - assuming the DDR5 doesn't pose an issue, it's probably the one I'd gravitate towards. However I suspect that small chipset heatsink might need some airflow.

      There could be many other suitable options, mind you, including outside of the PRIME series, and including outside of ASUS. Historically, Gigabyte was always my go-to. It just didn't work out for me this generation.
      • Jim on October 22, 2024 - click here to reply
        Hi Matt, I did some trials with the ASUS PRIME H610M-A D4. (I needed a motherboard which fits into my Jonsbo C2.)

        Setup:
        - ASUS PRIME H610M-A D4-CSM
        - i3 12100 with Intel cooler
        - Samsung 970 Evo Plus 1TB as boot drive (on first M.2 slot)
        - Mushkin 2x 16GB DDR4 3200
        - Be quiet M12 550
        - minimal Debian 12

        I transferred Matt's BIOS settings to the mATX mainboard!
        Measured at the wall (Fritz!DECT 200) = 6.4 watts (idle)

        With a PicoPSU-90 I could reach 5.7 watts (but with much higher consumption in WoL standby!)

        Thanks for your great work; without your BIOS instructions it would take months to get to a low-power state...
  66. Lukas on September 18, 2024 - click here to reply
    Hey Matt, thank you for this indepth article!
    I am actually also looking for mATX mainboards for a really efficient homeserver. I just have an extra caveat of building it in a passively cooled case (HDPlex H3).
    Now, this makes my mainboard choice a bit hard because, as it seems, I have to choose between two mainboards:
    Asus Prime Z790M Plus
    + Intel 1G Ethernet
    - kinda small and flimsy VRM cooling
    Asus TUF B760M Plus II
    + cheaper in my country by a fair bit
    + bulkier heatsink
    - 2.5G Realtek Ethernet

    I'm currently leaning towards the TUF, as another Ethernet card is more easily added than a stronger VRM heatsink, but I wanted your opinion on how much of a showstopper the 2.5G Realtek Ethernet controller is.
    Thanks in advance!
    • Matt Gadient on September 18, 2024 - click here to reply
      Don't know if you've checked already, but make sure there's enough clearance above the VRM cooler for the heatpipes on your HDPlex H3 case. Taking a quick glance at product pictures for that case, it seems like the heatpipes aren't all that high up.

      With that in mind, depending on the CPU you're planning to run, the VRM cooling might not matter. The H770-PLUS I used doesn't even have a VRM cooler across the top VRMs. While I added some small heatsinks to mine, they weren't really needed as the VRMs didn't get very hot to begin with.

      My bigger concern with passive cooling would be the chipset heatsink because that can get extremely hot when the chipset is pushing a lot of data. The TUF GAMING B760M Plus II looks like it has the same one as the Prime Z790M Plus (just in black and rotated). Probably not much you can do here short of popping off the heatsink and doing some warranty-voiding modding though.

      Putting all that aside, if you're willing to jump through the hoops necessary to get good C-States, the Realtek 2.5G isn't necessarily a show-stopper. Though I'd make sure to buy from a retailer with an easy return policy just in case it's more problematic than hoped.
    • Daniel on September 21, 2024 - click here to reply
      I had the TUF B760M (DDR4). The chipset heatsink is even thinner than it looks in the pictures, but the real problem was idle and light-load stability: every few days or so, the system would randomly shut down.
      I have no idea whether this was a faulty unit or a design flaw, but I wouldn't risk buying this board from a shop that doesn't offer a 30-day return policy. ASUS support was completely useless, as is the warranty in cases like this.
  67. Andy23 on September 19, 2024 - click here to reply
    Appreciate your writeup, Matt! I'd like to share my experience. I'm able to reach 10.1W at idle (measured at wall), with Proxmox 8.2.2 and BIOS & powertop tweaks (C10 state reached).

    CPU: i7-14700
    Motherboard: Asus Z790-P WIFI (2.5G LAN connected)
    RAM: 96GB DDR5 (2x32GB + 2x16GB)
    NVMe: Samsung 970 EVO Plus 1TB
    PSU: Corsair RM750x
    The power consumption was measured without a monitor.

    A few observations:
    ---- importance of PSU ----
    1) I started with a Seasonic Focus 1000W PSU and the power consumption was 14.4W. Switching to the Corsair RM750x brought a ~4W reduction. The RM750x has a power efficiency of ~80% at 20W, which means my original Focus 1000W must have had really bad efficiency at this load (10.1*0.8/14.4 = 56%)!! So folks, please consider investing in a better PSU (reference: RM750x numbers in this sheet: https://docs.google.com/spreadsheets/d/1TnPx1h-nUKgq3MFzwl-OOIsuX_JSIurIq3JkFZVMUas/edit?gid=110239702#gid=110239702)
    2) powertop 2.14 does NOT support Intel 13th/14th gen CPUs (it initially only showed C3 states). I had to compile the latest powertop 2.15, which allowed me to see the C6-C10 states.
    ---- Problematic onboard SATA controller of ASUS Z790-P ----
    3) A surprising observation: if I attach an SSD to the onboard SATA interface, the CPU C-state only gets as far as C6 (no C8, no C10 at all). However, if I attach the same SSD to an ASM1166 PCIe-to-SATA card, I can reach C10. This seems to suggest that the onboard SATA controller of this motherboard does NOT support ASPM properly ("lspci -vv" output for the onboard SATA controller does not show any ASPM capability).
    ---- Cost of adding a GPU ----
    4) Adding an Nvidia GTX 1050 Ti GPU led to a whopping 16W increase at idle. Soon I realized the Nvidia GPU's idle draw can be reduced with "nvidia-smi --persistence-mode=1", which brings the 16W down to 8W (GPU performance mode P8). However, that's still an additional 80% on top of my 10.1W idle without the GPU, so probably not worth leaving the GPU in. More research showed the GPU itself only consumes about 3W at idle, but because of the GPU the CPU C-states are held at C6, hence the 8W difference (with GPU vs. without).
    5) Power consumption increased by another 6W when I attached the GPU to the chipset-attached PCIe slot (initially it was in the CPU-attached slot). Yes, with all optimizations, the GPU adds 8+6=14W if you attach it to the chipset-attached slot of this motherboard. I noticed the C-state ceiling changed from C6 to C3, which is probably the source of the additional 6W.

    Overall I'm happy with the result, and it's been a fun experience. I'd really like to add a GPU (e.g. for photo-related tasks), but it's too expensive power-consumption-wise. Does anyone have experience with GPUs on low-power servers? Are there any ways to free the CPU C-states so they can reach C8/C10? Note I'm talking about power consumption at idle (the GPU is not used for graphics output and is not connected to a monitor).
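
    A side note on the persistence-mode tweak: it resets on reboot. Assuming the driver package ships NVIDIA's persistence daemon, one way to make it stick:

      # Preferred: enable the persistence daemon that ships with the driver:
      systemctl enable --now nvidia-persistenced
      # Or re-apply the one-off setting at each boot:
      nvidia-smi --persistence-mode=1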
  68. Federico on September 21, 2024 - click here to reply
    Amazing analysis! I was getting lost looking for a power-conscious HBA and I had not even started considering the impact on power states...

    I'm surprised that AMD chipsets have fallen so far behind on power usage. I miss the old times when there were multiple good options for mini-ITX motherboards with the early AMD APUs and there was a concerted effort to have the whole package be low-power.

    I just saw the new AMD X870 chipset is supposed to have a TDP of 7 W but the motherboards announced so far seem a bit overkill ("The ROG Strix X870-A Gaming WIFI comes with 16+2+2 power stages rated for up to 90A", says alktech).
  69. lowpowerobsession on October 2, 2024 - click here to reply
    Hi Matt (and everyone else)!

    Looking to upgrade my pfSense and TrueNAS boxes with 10GbE networking, I came across this post. Taking the advice of going with a board that has an Intel 1GbE controller, I went with the ASRock B660M Pro RS and a Pentium G7400 for both boxes. I wanted an extra full-length PCIe slot for some future expandability.

    Going to try the tweaks here: https://www.reddit.com/r/ASRock/comments/1998ozl/how_to_get_higher_pkg_cstates_on_asrock/

    to see if I can get some acceptable power consumption. I saw your comment about using an H610M from ASUS and will probably switch to that if I don't have much luck with these boards. Will report back, as I can't seem to find much info on whether people have been successful in lowering B660M power draw.

    Any advice would be appreciated!
  70. mack on October 16, 2024 - click here to reply
    My build:
    - Chinese motherboard CWWK Q670 V1; the board has 3x NVMe, 8x SATA, PCIe 5.0 x16 with the option of splitting into 2x x8 (so theoretically 5 NVMe disks can be fitted), and 2x 2.5GbE
    - i5-13500T
    - 1x 48GB DDR5 Crucial Pro module
    - 2x Samsung 970 EVO Plus, 1x Lexar NM790
    - Pico power supply (I don't currently have another)
    - LAN connected on one port
    In idle state with Proxmox running: about 8-10W

    Generally the parameters are nice, but in my opinion the BIOS is not refined; there are problems with ASPM in one port - only the Samsung EVO works in it with ASPM enabled, while other disks are fine in the remaining ports.

    The processor only goes down to the C6 state.
