Currently if you pay a somewhat average rate for electricity, the math works out pretty nicely: 1W = $1/year (approx) in electricity for something running 24/7. Subtract a little if you pay to heat your home, and add a little for extra AC in the summer.
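That rule of thumb is easy to sanity-check: a constant 1W works out to 8.76 kWh over a year, and the rule holds almost exactly at an electricity rate of roughly $0.114/kWh (the rate here is my stand-in for "somewhat average"):

```shell
# 1 W running 24/7 for a year, priced at an assumed ~$0.114/kWh
awk 'BEGIN {
    kwh_per_year = 1 * 24 * 365 / 1000          # 8.76 kWh
    rate = 0.114                                # $/kWh (assumed rate)
    printf "%.2f kWh/year -> $%.2f/year\n", kwh_per_year, kwh_per_year * rate
}'
# prints: 8.76 kWh/year -> $1.00/year
```

Scale the rate up or down to match your utility bill and the dollar figure scales with it.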
I needed to put together a NAS / file server to replace an old power-hungry one. This time I was looking to do better in terms of power usage, and hoping to spend a bit less.
I started by looking at my most recent (non-NAS) machines. The most recent machine I put together ran an i3-6300 which I wrote about at Building a low power PC on Skylake – 10 watts idle. It idled at 10 watts (spoiler) and pulled 56-58 watts running Prime95 depending on the undervolt. Both measurements taken from the wall. However, it was being used as a typical desktop machine.
My laptop with a Kaby Lake (R) manages 5-8W at idle (and that includes the screen!). While I’d obviously have a very hard time hitting that in a desktop machine with off-the-shelf components, I was hoping to build something that would at least idle in the 6-8W ballpark.
Was I successful? Let’s find out!
Normally I’d start at the CPU/motherboard but this is a situation where X relies on Y which relies on Z. It’s easier to start with Z.
In How to shuck the Seagate Expansion 4TB portable (STEA4000400), and why…, I talked about 2.5″ drives pulling about 1-2 watts whereas 3.5″ drives tended to pull from 3-10 watts.
Let’s look at some data for current Seagate SMR drives.
| | 2.5″ 5TB | 3.5″ 5TB | 3.5″ 8TB |
| --- | --- | --- | --- |
| idle low power | 0.85 W | ? | ? |
| standby/sleep | 0.18 W | under 0.75 W | under 0.75 W |
The chart uses SMR variants because that’s the only place you can get high-capacity 2.5″ drives. The reality is the 3.5″ drives tend to use 3-4x more power across the board. Note that there are a number of non-SMR 3.5″ drives that do a bit better than the one in the chart (the chart shows the Seagate Archive), though they still fall into the “pulls 3x-4x more power” category.
Power consumption really starts to matter when you get multiple drives going:
- At some point, those spinning-rust hard drives become the most power hungry device in your machine.
- The annual electrical cost starts to add up when you have a lot of 3.5″ drives, and that’s before you account for extra fans and higher summer AC use.
- Finding a PSU that’s efficient at a low-power idle with drives spun down while ALSO having the capacity to spin up a bunch of 3.5″ hard drives is challenging.
Going with shucked 2.5″ drives is currently an economical long term choice when it’s viable. That said, if you’re doing frequent writes, need fast resilver/rebuild times, or need huge amounts of total storage and are limited by SATA ports, 3.5″ drives (ideally non-SMR for performance) may be the way to go.
Of course, if you have small storage requirements (under 4TB, for example), high-capacity SSDs will get you high performance with low power draw, but at a much higher up-front cost.
For my usage (heavy reads, fewer writes), 2.5″ SMR drives were the way to go.
Four of the 2.5″ drives means a total of less than 1 watt when spun down, approximately 4-5 watts when spun up and idle, and approximately 8 watts when actively reading and writing.
One advantage the previous 10 watt Skylake machine had was that it was powered by an extremely efficient Antec pico-style PSU built into the case and powered from a 19V adapter.
On the other hand, the old power-hungry machine used a standard ATX power supply.
For this new build, I strongly considered going with a Pico PSU but eventually decided against it. Here’s why…
Pico PSU 5V Amp/Current Capability
When multiple hard drives spin up, they can pull a good bit of juice from the 5V rail. On its own that's not bad: 6x hard drives would generally peak at under 25 watts over the 5V rail for typical drives. Other components powered via the motherboard come into play though: for example, each device connected to a USB 3.0 port can pull up to 0.9A (or 1.5A if it's a charging port), so ballpark 4.5W to 7.5W per device there. As for motherboard-specific components, the total power draw generally isn't advertised.
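To put rough numbers on that 5V budget, here's a worst-case tally. The per-device currents are ballpark assumptions for illustration, not measurements:

```shell
# Worst-case 5V rail tally (ballpark assumptions, not measured values)
awk 'BEGIN {
    drives = 6 * 0.7      # six 2.5" drives at ~0.7 A each during activity
    usb    = 2 * 0.9      # two USB 3.0 devices at the full 0.9 A spec limit
    amps   = drives + usb
    printf "~%.1f A on the 5V rail (~%.0f W)\n", amps, amps * 5
}'
# prints: ~6.0 A on the 5V rail (~30 W)
```

Even this modest scenario lands at the top of the 6-8A range that most standalone Pico supplies are rated for, before counting any motherboard components.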
The majority of standard ATX PSUs handle 20A on the 5V rail, but it's pretty tough to find a standalone Pico ATX PSU that handles much more than 6-8A. If you look at the specs of a number of Pico-style supplies you'll find that the 12V rails have ample power, but 3.3V/5V don't scale up as well. This makes sense, as most PicoPSUs seem to essentially pass through 12V from the adapter, so most of the current-related work they do is bucking down to 5V or 3.3V.
I did find some that were “rated” to handle the eventual power draw I predicted. However, in a number of cases the wires or pins were undersized for the amperage I’d be asking of it, and voltage drop became a concern.
If this were the only issue, I’d have direct-soldered some new wires and given it a shot. However….
Power Brick Quality and Pricing
I was mildly alarmed to see no-name power adapters “frequently bought together” on Amazon with the higher end Pico PSUs. Since the bulk of the work/protection/filtering happens in the mains adapter, it would be odd to cheap out here.
Looking on Digikey for some reasonable adapters (with high efficiency), it became clear that I could get solid adapters with detailed spec sheets, but the cost was starting to get up there.
Still, the total price was competitive with ATX PSUs. However…
Pico PSU Quality Concerns
I’m pretty sure I’ve bought $5 buck converters with higher component counts than some of the Pico PSUs I came across. And those buck converters didn’t have the same strict requirements that typical ATX power supplies do for ripple, transient response, overload/short-circuit protection, power sequencing, etc.
Looking again at the Antec Pico-style supply still running the Skylake machine, I realized it was substantially more complex than any of the Pico PSUs I came across, despite having a power brick to do a lot of the work.
Ultimately, this is what ended the PicoPSU search. For a basic desktop it wouldn’t be a major problem if a Pico PSU caused instability or destroyed a component. Instability causing a RAID array to be corrupted (or multiple drives destroyed) on the other hand…. kind of a big risk to take. Despite being around for a number of years, Pico PSUs are still a bit of a “wild west”, similar to the early ATX PSU days before major web publications started doing substantial testing.
PSU (continued) – Antec Earthwatts 380W ATX
Since a Pico PSU was out of the question, I intended to get the most efficient ATX supply I could find. Tom's Hardware performs phenomenal testing on PSUs, and their review of the Corsair RM650 seemed to show the best efficiency at low wattages. Unfortunately, after ordering it, I found it was too long for the case (oops).
Ideally I'd have something under 200W, but since it's almost impossible to find branded sub-300W ATX power supplies, I dug into my storage bin, pulled out some spare PSUs, and tested them for no-load power draw as well as power draw with the motherboard I ended up using (which you'll read about next).
Quick PSU Power Draw results
| PSU | Off | No-Load | MB + BIOS |
| --- | --- | --- | --- |
| Antec EarthWatts 380W Bronze | 0 W | 3 W | 9 W |
| Antec EarthWatts 500W | 0 W | 4 W | 10-11 W |
| Antec EarthWatts 450W Platinum | 0 W | 4 W | 8-9 W |
| Apevia WIN-500XSPX | 4 W | 17-18 W | no test |
Yikes on the APEVIA! I wasn't about to hook it up to the motherboard, by the way. It actually came with a case I wrote about years ago and was never used beyond the initial picture (cables are still twist-tied together). Yes, I'm glad I never used it. Yes, it's possible I'll salvage the fan. Yes APEVIA, you could have lowered the weight of all the cases you sold by omitting the PSU and just FedEx-ing all your PSUs directly to the dump.
I settled on the Antec Earthwatts 380W Bronze.
Just being powered on with no load aside from its own fan, the PSU used 3W right off the hop. Powering the MB/CPU/RAM brought things to 9W.
Speaking of the motherboard…
Motherboard and CPU – ASRock here we come
I knee-capped myself a bit here. And by a bit, I mean a lot. Two factors pushed me towards a certain motherboard:
- I already had an extra 8GB DDR4 SO-DIMM kicking around from my laptop upgrade.
- I was looking to spend as little as possible, while still getting a current generation CPU.
If you try to find a motherboard that solves both of the above problems, right now you'll undoubtedly land on an ASRock Jxxx ITX board complete with a Goldmont Plus CPU (Celeron J4005, Celeron J4105, or Pentium Silver J5005).
Here’s what I ended up with… the ASRock J4005B-ITX motherboard:
It’s less blurry in real life.
How did this motherboard/CPU choice kneecap me?
Here are a few limitations:
- The Intel controller within Goldmont Plus processors only has support for 2 SATA ports.
- The Goldmont Plus processors only have 6 PCI Express lanes which limits the number of 3rd party SATA controllers the manufacturer (ASRock in this case) can put in.
- These ASRock boards are ITX, which means they have only 1 PCIe slot, so only 1 PCIe SATA card can be installed.
- These budget Goldmont Plus boards have no voltage/frequency tuning available, so no undervolting. No S0ix power states are available either for further power reduction (though I don't know whether this is a CPU or motherboard limitation).
Let’s stop for a moment and evaluate. I’m creating a NAS, and I already limited my ability to add hard drives. Aiming for low power and I’ve already limited my ability to tweak power settings in the BIOS… not off to a good start, are we?!
If I'd been willing to spend a bit more up front and forgo using my extra DDR4 SODIMM, I'd likely have considered a current generation i3 and a non-ITX motherboard with more SATA ports and some expansion slots for extra controllers. If I'd only been willing to forgo the DDR4 SODIMM, the ASRock J4005M or J4105M micro-ATX boards would at least have given 3 PCI Express slots.
Motherboard Woes: ASRock J4005B-ITX and J4105-ITX
I also picked up the J4105-ITX for another machine which is fairly similar. Here are some pain points I came across between the 2 boards:
- The worst memory QVL I've ever seen. Seriously, I actually searched for a lot of the modules listed and they're not even available at retail. To make matters worse, reviews show people running into memory compatibility problems.
- PCIe incompatibility with a PCIe x2 card, which knocked out the Ethernet (mentioned later).
- Turbo won’t work if you use Windows Server 2019 and you’ll have a lot of missing devices shown in Device Manager (neither Windows Update nor ASRock have drivers available for Win Server). Note that Win 10 is fine as it has most of the drivers via Windows Update with ASRock filling in the rest.
- Hard power-offs can cause the system to not boot unless the power is killed for a period of time.
- Swapping RAM can require the CMOS to be cleared (or numerous restart attempts).
- J4105-ITX-specific: The ASM1061 that adds an extra 2 SATA ports (for a total of 4) started dying within a year, causing Command Timeouts to any hard drive that was plugged into it. Not that the ASM1061 is a very good controller to begin with…
On the plus side, both motherboards do support 16GB of RAM despite the specs claiming a max of 8. I tried 1 x 16GB stick and 2 x 8GB sticks for dual-channel. I didn’t test 32GB, though I suspect it would work. The RAM I tested with was a Kingston HyperX 16GB stick (dual-rank DDR4-2666 though it comes up as 2400), Kingston ValueRAM 8GB stick (single-rank), and my original Micron 8GB stick (single-rank).
UPDATE: I did finally get 32GB of RAM going (2x Kingston HyperX 16GB DDR4-2666 @ 2400MHz). As the BIOS is extremely fickle when changing RAM, the process that worked consistently for me was to (a) put in the new RAM, (b) short the clear-CMOS pins for a few seconds then release, (c) power up the machine, and (d) keep hitting the Delete key. After what seems like 30+ seconds, the fan speed changes briefly and the system hard reboots (powers off then on automatically), but this time with the screen coming on and allowing you to press DEL to enter setup.
Power Consumption – Early Idle Tests (10-12 watts)
The initial test with just a keyboard and monitor attached resulted in 9 watts at the BIOS screen.
Once I'd added an SSD and booted into an OS, both Windows and Ubuntu would idle around 10-12 watts (though Ubuntu needed "powertop" tuning to get there).
It’s worth noting that in Ubuntu, power consumption was in that 10-12 watt range regardless of whether the Desktop or Server (cli-only) edition was used. Some GNOME stuff in the background would cause the CPU to bounce out of certain idle states, but if you’re trying to decide between Desktop and Server it’s really not going to make much difference in terms of power consumption. If you have a monitor hooked up you may as well use Desktop as it’s quick/easy to get it to turn off the screen after X minutes whereas the Server edition seems to simply leave it on all the time by default: fine if it’s headless but unfortunate if you have a monitor attached and forget to shut it off manually (using consoleblank in grub can help here).
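For reference, the consoleblank approach on the Server edition looks something like the following. The 600-second timeout is an arbitrary value I've chosen for illustration:

```shell
# Blank the text console after 10 minutes (600 s) of inactivity.
# Append consoleblank=600 to the kernel command line in /etc/default/grub:
sudo sed -i 's/^GRUB_CMDLINE_LINUX_DEFAULT="/&consoleblank=600 /' /etc/default/grub
sudo update-grub    # regenerate the grub config, then reboot

# Or apply it to the current session without rebooting (timeout in minutes):
sudo sh -c 'setterm --blank 10 < /dev/tty1 > /dev/tty1'
```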
Power Consumption – Pre-Tuning Idle AND Heavy Network/Disk Activity (4 HDD) (13-14 watts / 22 watts)
I installed a 4-port Marvell 88SE9215 SATA controller card in the PCIe slot.
I also tried an 8-port SATA controller card: the SA3008 which uses an ASM1806 PCIE bridge to drive 4x ASM1061 SATA controllers (incidentally, the ASRock J4105 uses the ASM1061 for 2 of the 4 ports it provides on the motherboard). The tiny bit of literature on the SA3008 out there suggests that it uses a 2x PCIE interface (despite being a 4x-sized card) and this motherboard supports 2x PCIE.
Unfortunately, the SA3008 card interfered with the Realtek network controller, which wouldn't come up. The card also pulled +4 watts compared to the Marvell-based card, got quite warm even when inactive, and didn't have any TIM between the controllers and the heatsink.
Update: I did later install an 8-port Marvell/JMicron 1x card which has worked quite well (written about here), though the power results below reflect the 4-port Marvell card.
Next, I set up a BTRFS RAID5 array with zstd:9 compression enabled across 4x Seagate SMR drives (4-5TB each).
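Creating such an array boils down to a couple of commands. The device names and mount point below are placeholders, and pairing RAID5 data with RAID1 metadata is my own choice here (a common one given btrfs RAID5's write-hole caveats), not necessarily what I used:

```shell
# Build a 4-drive btrfs array: RAID5 for data, RAID1 for metadata
# (device names are placeholders -- double-check yours with lsblk first!)
sudo mkfs.btrfs -d raid5 -m raid1 /dev/sdb /dev/sdc /dev/sdd /dev/sde

# Mount with zstd compression forced on at level 9
sudo mkdir -p /mnt/array
sudo mount -o compress-force=zstd:9 /dev/sdb /mnt/array
```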
Idle with this setup (drives not spun down) I was looking at 13-14 watts.
I ran an rsync from the old server to the new one. rsync and sshd had the CPU pegged and total consumption at the wall came to 25 watts. Note that rsync was operating between 6-32MB/s as it went through the files despite a gigabit connection, gravitating towards the low end as time went on. I eventually disabled mitigations and mounted the BTRFS array with nobarrier and speeds went up to a consistent 30+MB/s. Most of the CPU usage can be attributed to ZSTD compression being forced at a fairly high level.
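For the curious, those two tweaks amounted to roughly the following. Both trade safety for speed (mitigations=off weakens security, and nobarrier risks corruption on power loss and has since been removed from btrfs in newer kernels), so treat this as illustration rather than recommendation. The mount point is a placeholder:

```shell
# Disable CPU vulnerability mitigations via the kernel command line
# (trades security for throughput); update-grub and reboot afterwards.
sudo sed -i 's/^GRUB_CMDLINE_LINUX_DEFAULT="/&mitigations=off /' /etc/default/grub
sudo update-grub

# Remount the array without write barriers (risks corruption on power loss;
# note the nobarrier option was later removed from btrfs entirely)
sudo mount -o remount,nobarrier /mnt/array
```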
If you’re doing substantial rsyncs onto a compressed BTRFS file system and are thinking about using these Jxxx-ITX boards, you may want to consider opting for a 4-core variant if you need a higher copy speed.
Power Consumption – Post Tuning Idle
As I alluded to above, I had done some tweaking. Here are the major bits:
- PowerTOP in Linux (auto-tune at start).
- Hard drives spun down after 30 mins.
- Replaced PSU fan with a Noctua fan.
- 1 case fan with the speed just above stall.
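The software half of that list can be reproduced with a couple of commands. Device names are placeholders, and you'd want to run these at boot via your init system of choice:

```shell
# Apply all of powertop's suggested power-saving tunables in one shot
sudo powertop --auto-tune

# Spin the data drives down after 30 minutes of inactivity.
# (hdparm -S values 241-251 are in 30-minute units, so 241 = 30 min)
for dev in /dev/sdb /dev/sdc /dev/sdd /dev/sde; do
    sudo hdparm -S 241 "$dev"
done
```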
With the hard drives spun down, I was looking at a consistent 9 watts idle from the wall.
Highlights: The strengths of this setup
9 watts at spun-down idle (where it sits most of the time) is fairly reasonable considering there's 1 SSD + 4 hard drives at my disposal with 16-20TB total capacity (12-15TB via RAID-5). Keeping in mind that it's being driven by an ATX PSU, this isn't a bad showing. For comparison, I took a look at a few Synology NAS devices and, with the exception of a couple of models, they all idle at a higher power draw.
If capacity were to become a major issue in the future, switching to 3.5″ drives could be done at the expense of about +15 watts at idle. In a situation where they could be aggressively kept in sleep/standby, I suspect the increase would only be 1-3 watts, which is still less than the extra draw of the 8-port controller I tried.
Sitting in open air (19 C), the CPU heatsink was at about 32 C during the rsync and touching each chip on the motherboard after power-down, none were detectably warm. The hottest component was the heatsink on the Marvell SATA controller card which sat at about 38 C.
The low power clearly translated to low heat, which meant I was able to get by with just 1 case fan at an extremely low setting: to be honest I probably could have relied on the PSU fan alone.
Limitations: The weak points of this setup
Unfortunately, the system as it stands will hold a max of 6 drives: 1 OS drive and 5x storage drives. Realistically, 4x storage drives becomes the day-to-day max because it’s worth having 1 spare port ready for hard drive upgrades/replacements. Other controller cards are possibilities down the road but options are really limited when the only expansion slot operates at a max PCIE rate of 2x.
The CPU being maxed during the file transfer is another drawback. This 2-core Celeron is worked pretty hard, and while it may be able to handle some other tasks in the future (e.g. Plex transcoding via Intel Quick Sync), any time it's asked to do 2 things at once I suspect it'll slow to a crawl.
Switching from the J4005B to the J4105 would add 2 SATA ports (bringing the max drive count from 6 to 8) and would double the core count: I'd expect slightly higher power usage, but I didn't repeat all my tests with that configuration.
Doing it all over again: What I’d do differently
On one hand, I’m pleased that I managed to get below 10 watts: I’ve got a system that’ll likely serve files and do other tasks for years to come, all inside a nice low power envelope.
On the other hand, I’m really left to wonder if I might have managed to get there anyway with an undervolted i3 or Pentium Gold on a 300-series motherboard with the multiplier capped. Keep in mind that my previous Skylake build was idling at 10W – while it didn’t have 4x spun down drives or a full ATX PSU to contend with, it’s within the realm of possibility that improvements in Kaby Lake and beyond may be enough to offset those.
In any case, were I to go at this again, I suspect I’d go with a Micro-ATX board with 6 SATA ports and just tinker as much as possible to get the power consumption down. Obviously the cost would be a bit higher (and I wouldn’t have been able to use my spare DDR4-SODIMM), but future expansion would be substantially easier.
Update: Going Newer – Comet Lake at 11 Watts!
I’ll keep this short. By chance, I was doing some testing of one of my newer systems: an Intel i3-10320 on a Gigabyte H470M DS3H motherboard, running on a Corsair SF 450W Platinum power supply (my new favorite PSU for low power, though I had to extend the 24-pin cable to reach most motherboards). At idle with 2x16GB sticks of standard DDR4 and a couple NVMe drives, it pulls 11 watts, both in Windows and Ubuntu Desktop. The only out-of-the-ordinary trick here was forcing on all the C-states in the BIOS and choosing C10 as the desired C-state.
Update 2: Newer Still – 7 watt to 16 watt range on Alder Lake!
A full write-up for this one can be found at 7 watts idle on Intel 12th/13th gen: the foundation for building a low power server/NAS. Lots of details within, but as a teaser, when this Alder Lake 6-core 64GB DDR4-3200 system was in a similar 4×2.5″ SATA HDD configuration, it pulled 10 watts at idle with drives in standby (the 16 watt value is for 3xNVMe + 5×2.5″ SATA HDD + 6×3.5″ SATA HDD idle with drives in standby). It took a lot of work to get there, but may be worth a look if you’re hoping for a newer system.