9w idle: creating a low power home NAS / file server with 4 storage drives

Currently if you pay a somewhat average rate for electricity, the math works out pretty nicely: 1W = $1/year (approx) in electricity for something running 24/7. Subtract a little if you pay to heat your home, and add a little for extra AC in the summer.
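
To sanity-check that rule of thumb (using an example rate of $0.114/kWh, which is in that "somewhat average" range):

```shell
# $/year ≈ watts × hours-per-year ÷ 1000 × $/kWh
# at 1W and an assumed $0.114/kWh:
awk 'BEGIN { printf "%.2f\n", 1 * 24 * 365 / 1000 * 0.114 }'
# prints 1.00
```

Adjust the rate to your own bill and the 1W ≈ $1/year figure scales linearly.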

I needed to put together a NAS / file server to replace an old power-hungry one. This time I was looking to do better in terms of power usage, and hoping to spend a bit less.

I started by looking at my most recent (non-NAS) machines. The most recent one I put together ran an i3-6300, which I wrote about in Building a low power PC on Skylake – 10 watts idle. It idled at 10 watts (spoiler) and pulled 56-58 watts running Prime95 depending on the undervolt, both measurements taken at the wall. However, it was being used as a typical desktop machine.

My laptop with a Kaby Lake (R) manages 5-8W at idle (and that includes the screen!). While I’d obviously have a very hard time hitting that in a desktop machine with off-the-shelf components, I was hoping to build something that would at least idle in the 6-8W ballpark.

Was I successful? Let’s find out!


Hard Drives

Normally I’d start at the CPU/motherboard but this is a situation where X relies on Y which relies on Z. It’s easier to start with Z.

In How to shuck the Seagate Expansion 4TB portable (STEA4000400), and why…, I talked about 2.5″ drives pulling about 1-2 watts whereas 3.5″ drives tended to pull from 3-10 watts.

Let’s look at some data for current Seagate SMR drives.

                  2.5″ 5TB    3.5″ 5TB      3.5″ 8TB
spinup max        3.75W       10-24W        10-24W
idle low power    0.85W       ?             ?
standby/sleep     0.18W       under 0.75W   under 0.75W

The chart uses SMR variants because that’s the only place you can get high-capacity 2.5″ drives. The reality is the 3.5″ drives tend to use 3-4x more power across the board. Note that there are a number of non-SMR 3.5″ drives that do a bit better than the one in the chart (the chart shows the Seagate Archive), though they still fall into the “pulls 3x-4x more power” category.

Power consumption really starts to matter when you get multiple drives going:

  • At some point, those spinning-rust hard drives become the most power hungry device in your machine.
  • The annual electrical cost starts to add up when you have a lot of 3.5″ drives, and that’s before you account for extra fans and higher summer AC use.
  • Finding a PSU that’s efficient at a low-power idle with drives spun down while ALSO having the capacity to spin up a bunch of 3.5″ hard drives is challenging.

Going with shucked 2.5″ drives is currently an economical long-term choice when it's viable. That said, if you're doing frequent writes, need fast resilver/rebuild times, or need huge amounts of total storage and are limited by SATA ports, 3.5″ drives (ideally non-SMR for performance) may be the way to go.

Of course, if your storage requirements are small (under 4TB, for example), high-capacity SSDs will get you high performance with low power draw, but at a much higher up-front cost.

For my usage (heavy reads, fewer writes), 2.5″ SMR drives were the way to go.

4 of the 2.5″ drives means a total of less than 1 watt when spun down, approx 4-5 watts when spun up, and approx 8 watts when actively reading and writing.



PSU

One advantage the previous 10-watt Skylake machine had was that it was powered by an extremely efficient Antec pico-style PSU built into the case and fed from a 19V adapter.

On the other hand, the old power-hungry machine used a standard ATX power supply.

For this new build, I strongly considered going with a Pico PSU but eventually decided against it. Here’s why…

Pico PSU 5V Amp/Current Capability

When multiple hard drives spin up, they can pull a good bit of current from the 5V rail. On its own that's not bad: 6 hard drives would generally peak at under 25 watts over the 5V rail for typical drives. But other components powered via the motherboard come into play too. For example, each device connected to a USB 3.0 port can pull up to 0.9A (or 1.5A if it's a charging port), so ballpark 4.5W to 7.5W per port. As for motherboard-specific components, the total power draw generally isn't advertised.
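
Putting rough numbers to it (using the ballpark figures above, not measured values):

```shell
# rough 5V rail budget: 6 spinning-up drives at ~25W total on the 5V rail,
# plus two USB 3.0 devices at 0.9A each (all ballpark figures)
awk 'BEGIN {
  drive_amps = 25 / 5      # 25W over 5V -> 5A
  usb_amps   = 2 * 0.9     # two non-charging USB 3.0 ports
  printf "%.1fA\n", drive_amps + usb_amps
}'
# prints 6.8A
```

That 6.8A is already at the top of the 6-8A range mentioned below, before counting anything the motherboard itself draws from 5V.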

The majority of standard ATX PSUs handle 20A on the 5V rail, but it's pretty tough to find a standalone Pico ATX PSU that handles much more than 6-8A. If you look at the specs of a number of Pico-style supplies you'll find that the 12V rails have ample power, but the 3.3V/5V rails don't scale up as well. This makes sense, as most Pico PSUs seem to essentially pass through 12V from the adapter, so most of the current-related work they do is bucking down to 5V or 3.3V.

I did find some that were “rated” to handle the eventual power draw I predicted. However, in a number of cases the wires or pins were undersized for the amperage I’d be asking of it, and voltage drop became a concern.

If this were the only issue, I’d have direct-soldered some new wires and given it a shot. However….


Power Brick Quality and Pricing

I was mildly alarmed to see no-name power adapters “frequently bought together” on Amazon with the higher end Pico PSUs. Since the bulk of the work/protection/filtering happens in the mains adapter, it would be odd to cheap out here.

Looking on Digikey for some reasonable adapters (with high efficiency), it became clear that I could get solid adapters with detailed spec sheets, but the cost was starting to get up there.

Still, the total price was competitive with ATX supplies. However…


Pico PSU Quality Concerns

I’m pretty sure I’ve bought $5 buck converters with higher component counts than some of the Pico PSUs I came across. And those buck converters didn’t have the same strict requirements that typical ATX power supplies do for ripple, transient response, overload/short-circuit protection, power sequencing, etc.

Looking again at the Antec Pico-style supply still running the Skylake machine, I realized it was substantially more complex than any of the Pico PSUs I came across, despite having a power brick to do a lot of the work.

Ultimately, this is what ended the PicoPSU search. For a basic desktop it wouldn’t be a major problem if a Pico PSU caused instability or destroyed a component. Instability causing a RAID array to be corrupted (or multiple drives destroyed) on the other hand…. kind of a big risk to take. Despite being around for a number of years, Pico PSUs are still a bit of a “wild west”, similar to the early ATX PSU days before major web publications started doing substantial testing.

PSU (continued) – Antec Earthwatts 380W ATX

Since a Pico PSU was out of the question, I intended to get the most efficient ATX supply I could find. Tom's Hardware performs phenomenal testing on PSUs, and their review of the Corsair RM650 seemed to show the best efficiency at low wattages. Unfortunately, after ordering it I found it was too long for the case (oops).

Ideally I'd have something under 200W, but since it's almost impossible to find branded sub-300W ATX power supplies, I dug into my storage bin, pulled out some spare PSUs, and tested them for no-load power draw in addition to power draw with the motherboard I ended up using (which you'll read about next).

Quick PSU Power Draw results

                               Off   No-Load   MB + BIOS
Antec EarthWatts 380W Bronze   0W    3W        9W
Antec EarthWatts 500W          0W    4W        10-11W
Antec EarthWatts 450W Plat     0W    4W        8-9W
Apevia WIN-500XSPX             4W    17-18W    no test

Yikes on the APEVIA! I wasn't about to hook it up to the motherboard, by the way. It actually came with a case I wrote about years ago and was never used beyond the initial picture (cables are still twist-tied together). Yes, I'm glad I never used it. Yes, it's possible I'll salvage the fan. Yes APEVIA, you could have lowered the weight of all the cases you sold by omitting the PSU and just FedEx-ing all your PSUs directly to the dump.

I settled on the Antec Earthwatts 380W Bronze.

Just being powered on with no load aside from its own fan, the PSU used 3W off the hop. Powering the MB/CPU/RAM brought things to 9W.

Speaking of the motherboard…

Motherboard and CPU – ASRock here we come

I knee-capped myself a bit here. And by a bit, I mean a lot. Two factors pushed me towards a certain motherboard:

  1. I already had an extra 8GB DDR4 SO-DIMM kicking around from my laptop upgrade.
  2. I was looking to spend as little as possible, while still getting a current generation CPU.

If you try to find a motherboard that satisfies both of the above, right now you'll undoubtedly land on an ASRock Jxxx ITX complete with a Goldmont Plus CPU (Celeron J4005, Celeron J4105, or Pentium Silver J5005).

Here’s what I ended up with… the ASRock J4005B-ITX motherboard:

The ASRock J4005B-ITX

It’s less blurry in real life.

How did this motherboard/CPU choice kneecap me?

Here are a few limitations:

  • The Intel controller within Goldmont Plus processors only has support for 2 SATA ports.
  • The Goldmont Plus processors only have 6 PCI Express lanes which limits the number of 3rd party SATA controllers the manufacturer (ASRock in this case) can put in.
  • These ASRock boards are ITX, so they only have 1 PCIE slot, which means only 1 PCIE SATA card can be installed.
  • These budget Goldmont Plus boards have no voltage/frequency tuning, so no undervolting. No S0ix power states either for further power reduction (though I don't know if this is a CPU or motherboard limitation).

Let’s stop for a moment and evaluate. I’m creating a NAS, and I already limited my ability to add hard drives. Aiming for low power and I’ve already limited my ability to tweak power settings in the BIOS… not off to a good start, are we?!

If I’d been willing to spend a bit more up front and forego the usage of my extra DDR4 SODIMM, I’d have likely considered a current generation i3 and a non-ITX motherboard that had more SATA ports with some expansion slots for extra controllers. If only willing to forgo the DDR4 SODIMM, the ASRock J4005M or J4105M micro-ATX boards would have at least given 3 PCI-Express slots.



Motherboard Woes: ASRock J4005B-ITX and J4105-ITX

I also picked up the J4105-ITX for another machine which is fairly similar. Here are some pain points I came across between the 2 boards:

  1. The worst memory QVL (Qualified Vendor List) I've ever seen. Seriously, I searched for a lot of the modules listed and they're not even available at retail. To make matters worse, reviews show people running into memory compatibility problems.
  2. PCI-E incompatibility with a PCIE-x2 card which kicked the ethernet out (mentioned later).
  3. Turbo won’t work if you use Windows Server 2019 and you’ll have a lot of missing devices shown in Device Manager (neither Windows Update nor ASRock have drivers available for Win Server). Note that Win 10 is fine as it has most of the drivers via Windows Update with ASRock filling in the rest.
  4. Hard power-offs can cause the system to not boot unless the power is killed for a period of time.
  5. Swapping RAM can require the CMOS to be cleared (or numerous restart attempts).
  6. J4105-ITX-specific: The ASM1061 that adds an extra 2 SATA ports (for a total of 4) started dying within a year, causing Command Timeouts to any hard drive that was plugged into it. Not that the ASM1061 is a very good controller to begin with…

On the plus side, both motherboards do support 16GB of RAM despite the specs claiming a max of 8. I tried 1 x 16GB stick and 2 x 8GB sticks for dual-channel. I didn’t test 32GB, though I suspect it would work. The RAM I tested with was a Kingston HyperX 16GB stick (dual-rank DDR4-2666 though it comes up as 2400), Kingston ValueRAM 8GB stick (single-rank), and my original Micron 8GB stick (single-rank).

UPDATE: I did finally get 32GB of RAM going (2x Kingston HyperX 16GB DDR4-2666 @ 2400MHz). As the BIOS is extremely fickle when changing RAM, the process I ended up using which worked consistently was to (a) put in the new RAM, (b) short the clear-CMOS pins for a few seconds then release, (c) power up the machine, and (d) keep hitting the delete key. After what seems like 30+ seconds the fan speed changes briefly and the system then hard reboots (powers off then on automatically), but this time with the screen coming on and allowing you to press DEL to enter setup.

Power Consumption – Early Idle Tests (10-12 watts)

The initial test with just a keyboard and monitor attached resulted in 9 watts at the BIOS screen.

Once an SSD was added and an OS booted, both Windows and Ubuntu would idle around 10-12 watts (though Ubuntu needed "powertop" tuning to get there).

It's worth noting that in Ubuntu, power consumption was in that 10-12 watt range regardless of whether the Desktop or Server (CLI-only) edition was used. Some GNOME stuff in the background would cause the CPU to bounce out of certain idle states, but if you're trying to decide between Desktop and Server, it really won't make much difference in terms of power consumption. If you have a monitor hooked up you may as well use Desktop, as it's quick and easy to have it turn off the screen after X minutes, whereas the Server edition seems to simply leave it on all the time by default: fine if it's headless, but unfortunate if you have a monitor attached and forget to shut it off manually (using consoleblank in GRUB can help here).
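
As an example of the consoleblank approach (the 600-second timeout here is just an example value):

```shell
# /etc/default/grub -- blank the console after 600 seconds of inactivity
GRUB_CMDLINE_LINUX_DEFAULT="quiet consoleblank=600"

# then regenerate the grub config and reboot:
#   sudo update-grub
```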

Components for the 9W NAS

Power Consumption – Pre-Tuning Idle AND Heavy Network/Disk Activity (4 HDD) (13-14 watts / 25 watts)

I installed a 4-port Marvell 88SE9215 SATA controller card in the PCIE slot.

I also tried an 8-port SATA controller card: the SA3008 which uses an ASM1806 PCIE bridge to drive 4x ASM1061 SATA controllers (incidentally, the ASRock J4105 uses the ASM1061 for 2 of the 4 ports it provides on the motherboard). The tiny bit of literature on the SA3008 out there suggests that it uses a 2x PCIE interface (despite being a 4x-sized card) and this motherboard supports 2x PCIE.

Unfortunately, the SA3008 card interfered with the Realtek network controller, which wouldn't come up. The card also pulled +4 watts compared to the Marvell-based card, got really warm even when inactive, and didn't have any TIM between the controllers and the heatsink.

Update: I did later install an 8-port Marvell/JMicron 1x card which has worked quite well (written about here), though the power results below reflect the Marvell 4-port card.

Next, I set up a BTRFS RAID5 array with zstd:9 compression enabled across 4x Seagate SMR drives (4-5TB each).
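
The setup boils down to something like this (a sketch only: the device names and mount point are examples rather than my actual ones, and I've shown metadata on RAID1 since BTRFS metadata on RAID5 is generally discouraged):

```shell
# create the array: RAID5 for data, RAID1 for metadata
mkfs.btrfs -d raid5 -m raid1 /dev/sdb /dev/sdc /dev/sdd /dev/sde

# mount with compression forced to zstd at level 9
# (any member device can be named; btrfs assembles the rest)
mount -o compress-force=zstd:9 /dev/sdb /mnt/array
```

Note that per-level zstd (the `:9` suffix) needs a reasonably recent kernel; without it you're limited to the default zstd level.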

Idle with this setup (drives not spun down) I was looking at 13-14 watts.

I ran an rsync from the old server to the new one. rsync and sshd had the CPU pegged and total consumption at the wall came to 25 watts. Note that rsync was operating between 6-32MB/s as it went through the files despite a gigabit connection, gravitating towards the low end as time went on. I eventually disabled mitigations and mounted the BTRFS array with nobarrier and speeds went up to a consistent 30+MB/s. Most of the CPU usage can be attributed to ZSTD compression being forced at a fairly high level.
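
For the curious, those two tweaks amount to something like the following (example paths; keep in mind `mitigations=off` trades security for speed and `nobarrier` trades crash consistency for speed, so weigh both carefully):

```shell
# /etc/default/grub -- disable CPU vulnerability mitigations
GRUB_CMDLINE_LINUX_DEFAULT="quiet mitigations=off"
# (run update-grub and reboot afterward)

# remount the BTRFS array without write barriers
mount -o remount,nobarrier /mnt/array
```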

If you’re doing substantial rsyncs onto a compressed BTRFS file system and are thinking about using these Jxxx-ITX boards, you may want to consider opting for a 4-core variant if you need a higher copy speed.


Power Consumption – Post Tuning Idle

As I alluded to above, I had done some tweaking. Here are the major bits:

  • PowerTOP in Linux (auto-tune at start).
  • Hard drives spun down after 30 mins.
  • Replaced PSU fan with a Noctua fan.
  • 1 case fan with the speed just above stall.
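
For reference, the first two items look something like this (drive names are examples):

```shell
# apply PowerTOP's tunable suggestions (run at boot, e.g. from a systemd unit)
powertop --auto-tune

# spin the storage drives down after 30 minutes of inactivity
# (hdparm -S values 241-251 count in 30-minute units, so 241 = 30 min)
for d in /dev/sd[b-e]; do hdparm -S 241 "$d"; done
```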

With the hard drives spun down, I was looking at a consistent 9 watts idle from the wall.


Highlights: The strengths of this setup

9 watts at spun-down idle (where it sits most of the time) is fairly reasonable considering it's running 1 SSD + 4 hard drives with 16-20TB total capacity (12-15TB usable via RAID-5). Keeping in mind it's being driven by an ATX PSU, this isn't a bad showing all things considered. For comparison, I took a look at a few Synology NAS devices and, with the exception of a couple of models, they all idle at a higher power draw.

If capacity were to become a major issue in the future, 3.5″ drives could be used instead at the expense of about +15 watts at idle. In a situation where they could be aggressively kept in sleep/standby, I suspect the increase would only be 1-3 watts, which is still less than the extra draw of the 8-port controller I tried.

Sitting in open air (19 C), the CPU heatsink was at about 32 C during the rsync, and touching each chip on the motherboard after power-down, I found none detectably warm. The hottest component was the heatsink on the Marvell SATA controller card, which sat at about 38 C.

The low power clearly translated to low heat, which meant I was able to get by with just 1 case fan at an extremely low setting: to be honest I probably could have relied on the PSU fan alone.


Limitations: The weak points of this setup

Unfortunately, the system as it stands will hold a max of 6 drives: 1 OS drive and 5x storage drives. Realistically, 4x storage drives becomes the day-to-day max because it’s worth having 1 spare port ready for hard drive upgrades/replacements. Other controller cards are possibilities down the road but options are really limited when the only expansion slot operates at a max PCIE rate of 2x.

The CPU being maxed during the file transfer is another drawback. This 2-core Celeron is worked pretty hard, and while it may be able to handle some other tasks in the future (e.g. Plex transcoding via Intel Quick Sync), any time it's asked to do 2 things at once I suspect it'll slow to a crawl.

Switching from the J4005B to the J4105 would add 2 SATA ports which brings max drives from 6 to 8, and would double the core count: I’d expect slightly higher power usage but didn’t repeat all my tests with that configuration.



Doing it all over again: What I’d do differently

On one hand, I’m pleased that I managed to get below 10 watts: I’ve got a system that’ll likely serve files and do other tasks for years to come, all inside a nice low power envelope.

On the other hand, I’m really left to wonder if I might have managed to get there anyway with an undervolted i3 or Pentium Gold on a 300-series motherboard with the multiplier capped. Keep in mind that my previous Skylake build was idling at 10W – while it didn’t have 4x spun down drives or a full ATX PSU to contend with, it’s within the realm of possibility that improvements in Kaby Lake and beyond may be enough to offset those.

In any case, were I to go at this again, I suspect I’d go with a Micro-ATX board with 6 SATA ports and just tinker as much as possible to get the power consumption down. Obviously the cost would be a bit higher (and I wouldn’t have been able to use my spare DDR4-SODIMM), but future expansion would be substantially easier.

27 Comments

  1. Jaron Ensley on December 31, 2019
    Hi Matt -

    Just wanted to say that your work regarding 32-bit EFI/64-bit CPU Macbooks was a lifesaver. Just wanted to say thanks and you should make a YouTube video on how to do it correctly, because there are a lot of videos on how to do it wrong.

    Thanks, Jaron
  2. Luis on February 24, 2020
    Very good post. Similar configuration to what I'm looking for (I may wait for J5040 processor to be released). I was searching for pico-PSU, after reading this, I changed my mind.
  3. Valerio on April 6, 2020
    very nice article !
    If you went for a i3 and a non-ITX motherboard, how much more power it should require compared to the atom build ?
    From your experience, what idle power can a cpu like Pentium Gold G5400(T) with a mATX motherboard reach (just cpu/ram running idle) ?

    • Hey Valerio. As for an i3, I had a previous Skylake build that I managed to get down to 10W idle. Of course, at load the wattage was quite a bit higher than the Goldmont's. As to a non-ITX motherboard, it shouldn't inherently pull more power; however, they often have a higher component count, and fewer components is usually better in terms of power savings. The biggest challenge really seems to be the ATX PSU... efficiency tends to really start dropping off at sub-20W, to the point where you're fighting for every watt saved.
      • Valerio on April 6, 2020
        Thanks Matt,
        so basically, just use lower wattage and small components, low power psu, small motherboard, etc.. every build i saw with i3 and such, does not go lower than 30w idle :( I'll try to see how efficient my spare enermax Eco80+ is, if it's not enough i'll try a different one, maybe a pico psu, it's hard to find this kind of real world tests online :D
  4. xtos on May 4, 2020
    Excellent article....thank you. Just was I was looking for.
    (I feel grateful google's 1st result was your page for search term "lowest power consumption pc as nas")
  5. LucianLS on May 10, 2020
    Thanks for this article! Here's my build with the similar ASRock J4105: https://forum.openmediavault.org/index.php?thread/32310-my-low-power-nas-in-a-closet/
    • Definitely like your build! Also nice to see the consumption while watching a film: it's something I was curious about but never got around to testing.
  6. Mathew7 on May 27, 2020
    My concern is about UPS runtime, not yearly cost. I don't have exact power measurements, but my BackUPS RS 900G estimates 200minutes of uptime (5%, so around 27W) with NAS(+1xspun-up 3.5")+router (9W)+modem. My previous server had 44W by itself with 2x 3.5" HDDs spun-up.

    So I ended up using a Qnap TS-253Be (non-e also similar) with linux and a single 14TB 3.5" Seagate Ironwolf (thinking about a lower-rpm WD red and move this to the backup server)
    My config:
    - J3455 CPU + 2x8GB RAM (came with 1x2GB)
    - 1x14TB 3.5" Seagate Ironwolf
    - 4xPCIe slot with 2x M.2 NVME adapter (Qnap PCIe 2.0, 4xPCIe to 2 4xPCIE)
    - 512GB Samsung 970 PRO (for torrents)
    - 128GB Samsung SM951 (for OS)
    But I lack ECC RAM and questionable PSU (although it's the original "certified" by Qnap)

    - BIOS boots only from internal flash or SATA drives (no boot from PCIe slots, so I have to load kernel from internal flash)
    - this model has 4GB internal flash (older Intel NASes had only 512MB)
    Don't know about Windows (maybe install on a SATA SSD and then transfer OS to NVME with boot partition on internal flash)

    I assume the 4-bay version to be as low-power as this and have 4x3.5" + 2xM.2. I think Qnap even has a 10G+2xM.2 PCIe adapter.
  7. Airbag888 on June 4, 2020
    OMG where have you been all my life! I have looked for low power enthusiasts all over and never seem to find them..
    Albeit my use case intends on combining NAS, Home Server box in 1 I also am after the holy grail of low power consumption.
    My current aging NAS (Dlink ugh) caps at 11MB/s writes which sucks when transferring drone videos.
    I also want a space for docker images and some VMs for services like homeautomation.
    Anyhow electricity being expensive here I NEED the low power goodness for 24/7 runtime..

    What were the synology NAS that you looked at and what were their idle power consumption btw?

    Have you considered the Asrock A300 (with ryzen, etc) might be overkill for a NAS though :)

    Anyway thank you and look forward to more write ups or better youtube videos in that niche
    • As to the Synology NAS products, I'd looked at a few that were commonly available (Amazon etc) and then checked Synology's website (they list power consumption for models under the "Specs" heading). Currently, in the 2-5 bay range depending on model, they seem to list 5-15W consumption with "HDD Hibernation" and 15-35W as "Access", though those power measurements seem to be taken with 1TB WD Reds, which have lower power consumption than typical higher-capacity 3.5" drives. To get a specific number for a model you'll have to look at its spec sheet. Those numbers are certainly decent, but obviously there are potential advantages to building something custom.
    • ballardian on August 12, 2020
      I don't know if you have done your build yet, but your use case sounds similar to mine. I'm still running a Windows 10 Pro home server on an embedded Atom 330 processor from 10 years ago. At this point a Raspberry Pi 4 is probably just as capable or more. I'm looking to replace it with a box that is more powerful but still low power/heat due to my case and use.

      My leading candidate is the GigaIPC mITX-1605A -- basically it runs a Ryzen Mobile processor at 17 TDP max (another 7 TDP max for graphics onboard for a total of 25W). This is as powerful on passmark as my 1 year old i7 laptop. Draw back is it doesn't have 6 SATA like I need but it does have a mini PCIe slot that I plan to add a SATA controller into though. It isn't a cheap board but after my last very low power, very low cost board has run for 10 years paying a bit more for longevity I think is worth it this time around. As my server isn't on all the time only when needed a higher draw is a good trade off for very low sleep mode.

      There is more information on my research / ideas on my blog which is techdabble dot wordpress dot com if you are interested.
  8. Danny on July 5, 2020
    Thanks so much for posting this, it helps me a lot with my research. The NUC7CJYH (J4005 NUC) seems to be more efficient, drawing about 5W idle. It's not much power saved, and it only has 1 2.5" drive plus M.2, so it's only suitable as a media server. However, if you already have a NAS and are looking for a home server, the NUC could be a better (and cheaper) choice.
  9. Danny on July 5, 2020
    The Synology systems tend to be pretty power efficient actually; the spec sheets undersell their real world performance. For example, the 2-bay 220j will do 5.5W from wall on idle.

    Here is a test: https://www.techpowerup.com/review/synology-ds220j-2-bay-nas/12.html

    If your primary concern is power consumption, the Synology NASes will have you covered. There are many other reasons to build custom, but power consumption isn't really one IMO.
  10. Marc Gutt on September 23, 2020
    I'm using the ASRock J5005 in my backup Unraid NAS, and as it's possible to use SATA port multipliers, you are NOT limited to the 4 ports!

    But my main NAS has a low power consumption as well. The Gigabyte C246N-WU2 (CEC 2019 enabled, ErP enabled) with an i3-8100 consumes only 6.65W incl. SATA SSD, 16GB RAM and 1G LAN connection. Now the final NAS with Unraid installed consumes 23.60W with 8 (!) 12TB HGST 3.5 inch HDDs in standby and a 10G network adapter (which alone consumes 6W). Sadly there is no adapter that adds the SATA DevSleep ability (it uses the 3.3V pins to send the HDD into a state that consumes only 5mW). This is something used in enterprise storage and notebooks.

    "the 2-bay 220j will do 5.5W"
    It seems you did not read the test setup. They used super small SSDs for this test, and these consume nearly nothing. This is good for comparing different NAS models, but has nothing to do with consumption in the real world - as long as you don't install SSDs as well ;)
  11. Maurizio on October 26, 2020
    Excellent and accurate article. I am experiencing something similar: I have an ASRock J4105M and I would like to add an 8-port SATA card with the 88SE9215 chipset to it, but before buying it I would like to know if it is compatible with this MB and whether all the ports show up. I tried an IBM M5015 but it was not seen by the BIOS and it got too hot. I have already installed OpenMediaVault under Debian 10. Has anyone had such experiences?
  12. Sean on October 30, 2020
    Thanks for your write up Matt.
    I've gone down mini-itx path with soc for my silent server, so am trying to decide between the built in 4xSATA (J4105) or 2xSATA(J4105B). Either way I need the PCIE card that adds more SATA ports (I've already ordered, due to the long lead time, the one you recommended on your other page: PCE8SAT-M01)
    You said that your J4105 had issues with the ASM1061 chip serving 2 of the 4 SATAs on your other page. I was thinking that having 4xSATA ports built-in would be an advantage as you wouldn't have to put as many drives on the PCIE card (and therefore theoretically each HDD could run at a higher data transfer rate), but your experience with the ASM1061 chip is causing me some hesitation. Do you think J4105B is the safer bet?
    Second question: The J4105B has a 16x mechanical PCIE slot, however when reading the Asrock website I understood that it only had two lanes. Did you give any thought to going with a 2-lane PCIE 2.0 card (instead of the 1x PCE8SATA-M01), to get even more bandwidth to share among additional hdds?
    I'm probably going to run a single SSD for the OS and 6 hdds.
    • Matt Gadient on October 30, 2020
      I generally do prefer onboard SATA. You avoid card seating issues, free up PCIE slots, BIOS/boot integration (and sometimes a degree of configuration) is taken care of by the motherboard manufacturer, and there are a few other positives when components are integrated directly onto the mainboard.

      With that said, my ASM1061 did fail. There are a ton of these chips out there and they certainly don't all fail. But I don't know what the overall failure rate is. I'm not a huge fan of the ASM1061 in general... IIRC if adding a port multiplier it lacks FBS, and it also doesn't support the lowest link state power management level, which is a bit of an irritant. Going off of memory here, but I seem to recall power consumption being a bit higher than that of the common cheap 4-port Marvell controller (a controller I actually prefer).

      Putting all that aside, assuming my failure is closer to a 1-off than it is the-tip-of-the-iceberg, you certainly do get 2 high speed ports.

      To the 2x card question, I did try what purported to be a 2x card (the ASM-based SA3008 - listed as 4x slot and 2x interface/bandwidth) and it prevented the ethernet from coming up. Specs for these things are... sparse, so it's always possible it's really a 4x interface. Or maybe something else about that card just doesn't play nice with my motherboard. Or maybe my card was just wonky. You could always try a 2x card out though: I took a quick peek on AliExpress just now and there seem to be some 8-port cards that list "Marvell 88SE9705", which I'm assuming is supposed to be 88SM9705 (5-port port multiplier, presumably feeding off of an x2 controller like the 88SE9235). As for the big reason I haven't bothered to try it: essentially, I've just accepted that I'll have up to 8 drives sharing a 1x interface. Since my OS drive gets very few reads/writes, it's on the 8-port to free up another Intel port for a drive that can make better use of the bandwidth. Throughput's fine for my current usage, but if I ended up needing more I'm probably at the point where the most sensible course of action would be to use a mATX board with more ports and PCI-E slots.
      • Sean on November 4, 2020
        Thanks for your reply Matt. Answered all my questions perfectly. I'm probably leaning towards the J4105-ITX (4xSATA) and if there are any issues with the ASM1061 (i.e. losing 2xSATA) it would still be equivalent to what I would have out-of-the-box with the J4105B-ITX anyway. I'll stick with my 1x card for the moment.
        On a side note, I've got an 8 year lead-acid battery background and hadn't ever heard about reviving sulphated batteries using the method you discussed. You seem to have a broad knowledge across quite a few subjects. All the best.
  13. Sty_X on January 27, 2021

    Your experience interests me a lot because I'm also working on the design of a NAS whose #1 goal is the lowest possible power consumption.
    Just like you, the ASRock J-series motherboards seemed like a good possibility to me, but they don't seem to support RAID.
    What I don't understand is that in your article you seem to set up a RAID 5 with this type of board. Something must have escaped me... Could you enlighten me?

    Yours sincerely,
    • Matt Gadient on January 27, 2021
      Hi Sty_X. The motherboard may not support Intel RAID (normally enabled in the BIOS on motherboards that support it), but software RAID options work because they do not need any sort of hardware-level support. So ZFS or BTRFS (Linux) or Storage Spaces (Windows) all work fine. I used BTRFS for this system.
  14. Bjorn on February 16, 2021
    Hi Matt and many thanks for this article!

    Let me ask your opinion, if you have some time to give me, about my low-consumption server project.

    Currently I have an i3-8100T, an ASRock Z370M-ITX/ac motherboard, 3 SSDs (2 in software RAID and one SSD for backup) and a be quiet! 80+ Gold 400W power supply. At the basic console under Debian it draws 22-23W. I have the impression that the SSDs don't really consume anything, as the power consumption is about 18-19W.
    Reading your comments I understand that you were able to reduce the consumption of this type of processor (i3) to around 10W. Could you point me in that direction? I have never overclocked or underclocked components.

    I also have the possibility of getting an ASRock J5005-ITX. Do you think it might be worthwhile to replace my current hardware with this board?

    Thanks again !
    • Matt Gadient on February 17, 2021
      The 10W Skylake computer was using the power supply that came with the Antec ISK 110, which I suspect is extremely efficient. I don't know what the power consumption is of your Be Quiet 400W power supply at low loads, but that could be a factor.

      Undervolting did not impact the idle power consumption for my processor (it was 10W whether undervolted or not). Intel's improvements to idle power consumption over the years have really been amazing. Undervolting only affected the power consumption under load, so if your computer is usually idle I don't think I'd go to the effort of undervolting if you're unfamiliar with it. A bad crash at the wrong time can cause data loss, so usually I tinker with undervolting and stress testing immediately after purchase and then do a wipe/reinstall once everything is stable.

      Keep in mind your motherboard has 2xLAN and Wifi as well as a more capable Z370 chipset (mine had the H110 which is a very low-end chipset). Those could easily pull a little extra power.

      If you're intent on reducing the power consumption as much as possible, you may want to try disabling one of the LAN ports and the Wifi in the BIOS (assuming you only need 1 LAN port), along with anything else you don't use. Setting the CPU fan in the BIOS to as low a speed as you can manage without causing overheating can be worth looking into as well, since fans can easily sneak a few watts of power if they're running at high speeds.

      As far as SSDs go, some actually do use a little more power than others - usually it's small enough not to matter, but when you're into the under-20W range it can certainly be measurable. I usually see what AnandTech or Tom's Hardware measured for power consumption of current SSDs before I buy - otherwise I tend to go for Samsung in a low power build since theirs are usually consistently low. Really though, the only way to know for sure what each of your components is using is to test everything individually.

      As for the J5005-ITX, it will almost certainly use less power than your current motherboard/CPU, but obviously you have the cost of DDR4 SODIMM modules, and whether it's worth the extra cost to shave off perhaps 5-10W is something you'd have to decide for yourself. I'm also not excited about the ASMedia SATA ports (2 out of the 4 ports) that the ASRock Jxxxx-series motherboards all seem to use. Aside from that though, they're nice efficient little motherboards.
      • Bjorn on February 17, 2021

        I have just tested the power supply at no load (i.e. by starting it by shorting 2 pins of the ATX plug): it seems to consume 8W (measured with a wattmeter at the outlet)... I find this quite substantial!

        If the power supply consumes 8W of the 18-20W total, that would put the consumption of the motherboard+CPU+RAM at around 10-12W. The ASRock J4005B-ITX board from your article seems to consume about 6-7W, is that right?

        Concerning the motherboard settings, I will need Wifi and one LAN port. I don't think it's possible to switch off the power to one of the LAN ports.

        I didn't quite understand the meaning of one of your remarks which seems to be important "but it is obvious that you have the cost of the DDR4 SODIMM modules, and you will have to decide yourself if it is worth paying the extra 5 to 10 W that it will cost you to shave it".

        Thank you!
        • Matt Gadient on February 17, 2021
          An estimate of 6-7W for the J4005B-ITX would be reasonable. However, the 2 sticks of DDR4 SODIMM RAM likely use around 0.5W, and it's hard to know how much of the power supply's no-load draw overlaps with what it pulls once the motherboard is connected, or what the exact efficiency of the power supply is at those low levels. So 6-7W is okay as an estimate, but you'd really need a bench power supply to know for certain.
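To illustrate why the no-load draw can't simply be subtracted from the wall reading, here's a toy model with a fixed overhead plus a load-dependent efficiency. The overhead and efficiency figures are assumptions for illustration, not measurements of any particular power supply.

```python
# Toy model of PSU behavior at low loads: a fixed overhead that is always
# paid, plus conversion losses on the actual DC load. The no-load reading
# (fixed overhead alone) is not simply additive on top of the DC load.
# Both numbers below are illustrative assumptions, not measurements.

def wall_power(dc_load_w, fixed_overhead_w=5.0, efficiency=0.80):
    """Estimate wall draw for a given DC load on the ATX rails."""
    return fixed_overhead_w + dc_load_w / efficiency

# Suppose the board+CPU+RAM actually draw 11 W DC:
print(f"{wall_power(11):.2f} W at the wall")
print(f"{wall_power(0):.2f} W at no load")
```

With these made-up numbers an 11W DC load reads ~18.75W at the wall, which shows why subtracting the 8W no-load figure from the wall reading can overestimate what the motherboard itself is drawing.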

          I only mentioned the DDR4 SODIMM because it's another expense to consider unless you have DDR4 SODIMM memory sitting around. Buying a new motherboard + RAM to perhaps save 5-10W is a tough trade-off because in most countries it would take 20+ years before the electricity savings paid for the new motherboard and RAM. Of course, there can be other reasons that make it worth doing (low heat, silent operation, building a new computer anyway, running a long time on an uninterruptible power supply, great for limited off-grid power, etc.) - long-term cost savings just isn't one of them.
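The payback arithmetic here can be sketched with the article's rule of thumb that 1W running 24/7 costs roughly $1/year in electricity. The hardware cost and wattage saved below are placeholder assumptions, not actual prices.

```python
# Rough payback estimate using the rule of thumb from the article:
# a continuous 1 W costs about $1/year in electricity at an average rate.
# The hardware cost and watts saved are placeholder assumptions.

def payback_years(hardware_cost, watts_saved, dollars_per_watt_year=1.0):
    """Years of 24/7 operation before electricity savings cover the hardware."""
    return hardware_cost / (watts_saved * dollars_per_watt_year)

# e.g. a hypothetical $150 board + RAM swap that saves 7.5 W on average:
print(f"~{payback_years(150, 7.5):.0f} years to break even")
```

With those assumed numbers the break-even point lands at 20 years, which is where the "20+ years" figure above comes from for this class of upgrade.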
