9W idle: creating a low power home NAS / file server with 4 storage drives

Currently, if you pay a somewhat average rate for electricity, the math works out pretty nicely: 1W = $1/year (approx) in electricity for something running 24/7, since 1W for a year is about 8.77kWh, which at a typical $0.114/kWh comes to roughly $1. Subtract a little if you pay to heat your home, and add a little for extra AC in the summer.

I needed to put together a NAS / file server to replace an old power-hungry one. This time I was looking to do better in terms of power usage, and hoping to spend a bit less.

I started by looking at my most recent (non-NAS) machines. The latest was an i3-6300 build, which I wrote about in Building a low power PC on Skylake – 10 watts idle. It idled at 10 watts (spoiler) and pulled 56-58 watts running Prime95 depending on the undervolt, with both measurements taken from the wall. However, it was being used as a typical desktop machine.

My laptop with a Kaby Lake (R) manages 5-8W at idle (and that includes the screen!). While I’d obviously have a very hard time hitting that in a desktop machine with off-the-shelf components, I was hoping to build something that would at least idle in the 6-8W ballpark.

Was I successful? Let’s find out!


Hard Drives

Normally I’d start at the CPU/motherboard but this is a situation where X relies on Y which relies on Z. It’s easier to start with Z.

In How to shuck the Seagate Expansion 4TB portable (STEA4000400), and why…, I talked about 2.5″ drives pulling about 1-2 watts whereas 3.5″ drives tended to pull from 3-10 watts.

Let’s look at some data for current Seagate SMR drives.

                  2.5″ 5TB    3.5″ 5TB       3.5″ 8TB
spinup max        3.75W       10-24W         10-24W
write             2.10W       ~5.5W          ~7.5W
read              1.9W        ~5.5W          ~7.5W
idle              1.3W        ~3.5W          ~5.0W
idle low power    0.85W       ?              ?
standby/sleep     0.18W       under 0.75W    under 0.75W

The chart uses SMR variants because that’s the only place you can get high-capacity 2.5″ drives. The reality is the 3.5″ drives tend to use 3-4x more power across the board. Note that there are a number of non-SMR 3.5″ drives that do a bit better than the one in the chart (the chart shows the Seagate Archive), though they still fall into the “pulls 3x-4x more power” category.


Power consumption really starts to matter when you get multiple drives going:

  • At some point, those spinning-rust hard drives become the most power-hungry devices in your machine.
  • The annual electrical cost starts to add up when you have a lot of 3.5″ drives, and that’s before you account for extra fans and higher summer AC use (see the quick calculation after this list).
  • Finding a PSU that’s efficient at a low-power idle with drives spun down while ALSO having the capacity to spin up a bunch of 3.5″ hard drives is challenging.
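
Here’s the cost point above worked out quickly with bc, using the idle numbers from the drive table (the $0.114/kWh rate is an assumption; substitute your own):

    # annual cost = watts x 8.766 kWh-per-watt-year x rate ($/kWh)
    echo '4 * 5.0 * 8.766 * 0.114' | bc   # four 3.5" drives at ~5W idle each: ~$20/year
    echo '4 * 1.3 * 8.766 * 0.114' | bc   # four 2.5" drives at ~1.3W idle each: ~$5.20/year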

Going with shucked 2.5″ drives is currently an economical long-term choice when it’s viable. That said, if you’re doing frequent writes, need fast resilver/rebuild times, or need huge amounts of total storage and are limited by SATA ports, 3.5″ drives (ideally non-SMR for performance) may be the way to go.

Of course, if your storage requirements are small (under 4TB, for example), high-capacity SSDs will get you high performance with low power draw, though at a much higher up-front cost.

For my usage (heavy reads, fewer writes), 2.5″ SMR drives were the way to go.

Four of the 2.5″ drives means a total of less than 1 watt when spun down, approx 4-5 watts when spun up and idle, and approx 8 watts when actively reading and writing.


PSU

One advantage the previous 10 watt Skylake machine had was that it was powered by an extremely efficient Antec pico-style PSU built into the case and powered from a 19V adapter.

On the other hand, the old power-hungry machine used a standard ATX power supply.

For this new build, I strongly considered going with a Pico PSU but eventually decided against it. Here’s why…

Pico PSU 5V Current Capability

When multiple hard drives spin up, they can pull a good bit of juice from the 5V rail. On its own that’s not bad: 6x hard drives would generally peak at under 25 watts over the 5V rail for typical drives. Other components powered via the motherboard come into play though: for example, each device connected to a USB 3.0 port can pull up to 0.9A (or 1.5A if it’s a charging port), so ballpark 4.5W to 7.5W there. As for motherboard-specific components, the total power draw generally isn’t advertised.
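
To sanity-check that budget with some quick arithmetic (the per-device currents are ballpark assumptions based on the figures above):

    # back-of-envelope 5V rail load, in watts
    echo '6 * 0.75 * 5' | bc   # six 2.5" drives spinning up at ~0.75A each: ~22.5W
    echo '0.9 * 5' | bc        # one USB 3.0 device at spec max (0.9A): 4.5W
    echo '1.5 * 5' | bc        # one USB charging port (1.5A): 7.5W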

The majority of standard ATX PSUs handle 20A on the 5V rail, but it’s pretty tough to find a standalone Pico ATX PSU that handles much more than 6-8A. If you look at the specs of a number of Pico-style supplies you’ll find that the 12V rails have ample power, but 3.3V/5V don’t scale up as well. This makes sense, as most Pico PSUs essentially pass through 12V from the adapter, so most of the current-related work they do is bucking down to 5V or 3.3V.

I did find some that were “rated” to handle the eventual power draw I predicted. However, in a number of cases the wires or pins were undersized for the amperage I’d be asking of it, and voltage drop became a concern.

If this were the only issue, I’d have direct-soldered some new wires and given it a shot. However….


Power Brick Quality and Pricing

I was mildly alarmed to see no-name power adapters “frequently bought together” on Amazon with the higher end Pico PSUs. Since the bulk of the work/protection/filtering happens in the mains adapter, it would be odd to cheap out here.

Looking on Digikey for some reasonable adapters (with high efficiency), it became clear that I could get solid adapters with detailed spec sheets, but the cost was starting to get up there.

Still, the total price was competitive with an ATX supply. However…


Pico PSU Quality Concerns

I’m pretty sure I’ve bought $5 buck converters with higher component counts than some of the Pico PSUs I came across. And those buck converters didn’t have the same strict requirements that typical ATX power supplies do for ripple, transient response, overload/short-circuit protection, power sequencing, etc.

Looking again at the Antec Pico-style supply still running the Skylake machine, I realized it was substantially more complex than any of the Pico PSUs I came across, despite having a power brick to do a lot of the work.

Ultimately, this is what ended the PicoPSU search. For a basic desktop it wouldn’t be a major problem if a Pico PSU caused instability or destroyed a component. Instability causing a RAID array to be corrupted (or multiple drives destroyed) on the other hand…. kind of a big risk to take. Despite being around for a number of years, Pico PSUs are still a bit of a “wild west”, similar to the early ATX PSU days before major web publications started doing substantial testing.

PSU (continued) – Antec Earthwatts 380W ATX

Since a Pico PSU was out of the question, I intended to get the most efficient ATX supply I could find. Tom’s Hardware performs phenomenal testing on PSUs, and their review of the Corsair RM650 seemed to show the best efficiency at low wattages. Unfortunately, after ordering it, I found it was too long for the case (oops).

Ideally I’d have something under 200W, but since it’s almost impossible to find branded sub-300W ATX power supplies, I dug into my storage bin, pulled out some spare PSUs, and tested them for no-load power draw as well as draw with the motherboard I ended up using (which you’ll read about next).

Quick PSU Power Draw results

                                  Off    No-Load    MB + BIOS
Antec EarthWatts 380W Bronze      0W     3W         9W
Antec EarthWatts 500W             0W     4W         10-11W
Antec EarthWatts 450W Platinum    0W     4W         8-9W
Apevia WIN-500XSPX                4W     17-18W     no test

Yikes on the APEVIA! I wasn’t about to hook it up to the motherboard, by the way. It actually came with a case I wrote about years ago and was never used beyond the initial picture (cables are still twist-tied together). Yes, I’m glad I never used it. Yes, it’s possible I’ll salvage the fan. Yes APEVIA, you could have lowered the weight of all the cases you sold by omitting the PSU and just FedEx-ing your PSUs directly to the dump.

I settled on the Antec Earthwatts 380W Bronze.

Just being powered on with no load aside from its own fan, the PSU used 3W off the hop. Powering the MB/CPU/RAM brought things to 9W.

Speaking of the motherboard…

Motherboard and CPU – ASRock here we come

I knee-capped myself a bit here. And by a bit, I mean a lot. Two factors pushed me towards a certain motherboard:

  1. I already had an extra 8GB DDR4 SO-DIMM kicking around from my laptop upgrade.
  2. I was looking to spend as little as possible, while still getting a current generation CPU.

If you try to find a motherboard that solves both of the above problems, right now you’ll undoubtedly land on an ASRock Jxxx ITX board complete with a Goldmont Plus CPU (Celeron J4005, Celeron J4105, or Pentium Silver J5005).

Here’s what I ended up with… the ASRock J4005B-ITX motherboard:

The ASRock J4005B-ITX

It’s less blurry in real life.

How did this motherboard/CPU choice kneecap me?

Here are a few limitations:

  • The Intel controller within Goldmont Plus processors only supports 2 SATA ports.
  • Goldmont Plus processors only have 6 PCIe lanes, which limits the number of 3rd party SATA controllers the manufacturer (ASRock in this case) can put in.
  • These ASRock boards are ITX, which means only 1 PCIe slot, and thus only 1 PCIe SATA card can be installed.
  • These budget Goldmont Plus boards have no voltage/frequency tuning available, so no undervolting. No S0ix power states available either for further power reduction (though I don’t know if this is a CPU or motherboard limitation).

Let’s stop for a moment and evaluate. I’m creating a NAS, and I already limited my ability to add hard drives. Aiming for low power and I’ve already limited my ability to tweak power settings in the BIOS… not off to a good start, are we?!

If I’d been willing to spend a bit more up front and forgo the use of my extra DDR4 SODIMM, I’d likely have considered a current generation i3 and a non-ITX motherboard with more SATA ports and some expansion slots for extra controllers. If only willing to forgo the DDR4 SODIMM, the ASRock J4005M or J4105M micro-ATX boards would at least have given 3 PCIe slots.


Motherboard Woes: ASRock J4005B-ITX and J4105-ITX

I also picked up the fairly similar J4105-ITX for another machine. Here are some pain points I came across with the 2 boards:

  1. The worst memory QVL list I’ve ever seen. Seriously, I searched for a lot of the modules listed and they’re not even available at retail. To make matters worse, reviews show people having problems with memory compatibility.
  2. PCIe incompatibility with a PCIe x2 card, which knocked out the ethernet (mentioned later).
  3. Turbo won’t work if you use Windows Server 2019 and you’ll have a lot of missing devices shown in Device Manager (neither Windows Update nor ASRock have drivers available for Win Server). Note that Win 10 is fine as it has most of the drivers via Windows Update with ASRock filling in the rest.
  4. Hard power-offs can cause the system to not boot unless the power is killed for a period of time.
  5. Swapping RAM can require the CMOS to be cleared (or numerous restart attempts).
  6. J4105-ITX-specific: The ASM1061 that adds an extra 2 SATA ports (for a total of 4) started dying within a year, causing Command Timeouts to any hard drive that was plugged into it. Not that the ASM1061 is a very good controller to begin with…

On the plus side, both motherboards do support 16GB of RAM despite the specs claiming a max of 8. I tried 1 x 16GB stick and 2 x 8GB sticks for dual-channel. I didn’t test 32GB, though I suspect it would work. The RAM I tested with was a Kingston HyperX 16GB stick (dual-rank DDR4-2666 though it comes up as 2400), Kingston ValueRAM 8GB stick (single-rank), and my original Micron 8GB stick (single-rank).

UPDATE: I did finally get 32GB of RAM going (2x Kingston HyperX 16GB DDR4-2666 @ 2400MHz). As the BIOS is extremely fickle when changing RAM, the process that worked consistently for me was to (a) put in the new RAM, (b) short the clear-CMOS pins for a few seconds then release, (c) power up the machine, and (d) keep hitting the Delete key. After what seems like 30+ seconds, the fan speed changes briefly and the system hard-reboots (powers off then on automatically), but this time with the screen coming on and allowing you to press DEL to enter setup.

Power Consumption – Early Idle Tests (10-12 watts)

The initial test with just a keyboard and monitor attached resulted in 9 watts at the BIOS screen.

Once an SSD was added and an OS booted, both Windows and Ubuntu would idle around 10-12 watts (though Ubuntu needed “powertop” tuning to get there).
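
For reference, the powertop tuning amounts to something like this (a minimal sketch, assuming Ubuntu’s stock powertop package):

    sudo apt install powertop
    sudo powertop --auto-tune   # flip every tunable to its power-saving setting
    sudo powertop               # interactive view; the Idle Stats tab shows C-state residency

Note that --auto-tune doesn’t persist across reboots, so you’ll want to re-run it at boot (a systemd oneshot unit or rc.local both work).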

It’s worth noting that in Ubuntu, power consumption was in that 10-12 watt range regardless of whether the Desktop or Server (CLI-only) edition was used. Some GNOME stuff in the background would cause the CPU to bounce out of certain idle states, but if you’re trying to decide between Desktop and Server, it’s really not going to make much difference in terms of power consumption. If you have a monitor hooked up, you may as well use Desktop: it’s quick and easy to have it turn off the screen after X minutes, whereas the Server edition seems to simply leave it on all the time by default. That’s fine if it’s headless, but unfortunate if you have a monitor attached and forget to shut it off manually (using consoleblank in GRUB can help here; a sketch follows).
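
If you do go the consoleblank route, it’s just a kernel parameter (a sketch assuming Ubuntu/GRUB; the 600-second timeout is an arbitrary choice):

    # /etc/default/grub: blank the console after 600 seconds of inactivity
    GRUB_CMDLINE_LINUX_DEFAULT="quiet consoleblank=600"

    # then apply the change and reboot
    sudo update-grub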

Components for the 9W NAS

Power Consumption – Pre-Tuning Idle AND Heavy Network/Disk Activity (4 HDD) (13-14 watts / 22 watts)

I installed a 4-port Marvell 88SE9215 SATA controller card in the PCIe slot.

I also tried an 8-port SATA controller card: the SA3008, which uses an ASM1806 PCIe bridge to drive 4x ASM1061 SATA controllers (incidentally, the ASRock J4105 uses the ASM1061 for 2 of the 4 ports it provides on the motherboard). The tiny bit of literature on the SA3008 out there suggests that it uses a PCIe x2 interface (despite being an x4-sized card), and this motherboard supports PCIe x2.

Unfortunately, the SA3008 card interfered with the Realtek network controller, which wouldn’t come up. The card also pulled +4 watts compared to the Marvell-based card, got quite warm even when inactive, and didn’t have any TIM between the controllers and the heatsink.

Update: I did later install an 8-port Marvell/JMicron 1x card which has worked quite well (written about here), though the power results below reflect the 4-port Marvell card.

Next, I set up a BTRFS RAID5 array with zstd:9 compression enabled across 4x Seagate SMR drives (4-5TB each).
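
For the curious, the array setup looks roughly like this (a sketch: device names and the mirrored-metadata choice are my assumptions, and zstd compression levels on BTRFS need kernel 5.1+):

    # create the array: data striped with parity, metadata mirrored
    sudo mkfs.btrfs -d raid5 -m raid1 /dev/sdb /dev/sdc /dev/sdd /dev/sde

    # mount with zstd compression forced at level 9
    sudo mount -o compress-force=zstd:9 /dev/sdb /mnt/array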

Idle with this setup (drives not spun down) I was looking at 13-14 watts.

I ran an rsync from the old server to the new one. rsync and sshd had the CPU pegged, and total consumption at the wall came to 25 watts. Note that rsync operated between 6-32MB/s as it went through the files despite a gigabit connection, gravitating toward the low end as time went on. I eventually disabled mitigations and mounted the BTRFS array with nobarrier, and speeds went up to a consistent 30+MB/s. Most of the CPU usage can be attributed to ZSTD compression being forced at a fairly high level.
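
The transfer and the two tweaks look roughly like this (hostnames and paths are hypothetical; mitigations=off trades security for speed, and nobarrier, which existed as a BTRFS mount option on kernels of that era, risks data loss on power failure):

    # pull everything over from the old server
    rsync -aH --info=progress2 root@oldserver:/srv/files/ /mnt/array/files/

    # disable CPU vulnerability mitigations via /etc/default/grub, then update-grub:
    #   GRUB_CMDLINE_LINUX_DEFAULT="... mitigations=off"

    # remount the array without write barriers
    sudo mount -o remount,nobarrier /mnt/array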

If you’re doing substantial rsyncs onto a compressed BTRFS file system and are thinking about using these Jxxx ITX boards, you may want to opt for a 4-core variant if you need a higher copy speed.


Power Consumption – Post Tuning Idle

As I alluded to above, I had done some tweaking. Here are the major bits:

  • PowerTOP in Linux (auto-tune at start).
  • Hard drives spun down after 30 mins (see the hdparm sketch after this list).
  • Replaced PSU fan with a Noctua fan.
  • 1 case fan with the speed just above stall.
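
The spin-down item is a one-liner per drive (a sketch; device names are assumptions, and 241 maps to 30 minutes in hdparm’s standby-timeout encoding):

    sudo hdparm -S 241 /dev/sd[b-e]   # standby after 30 minutes of inactivity
    sudo hdparm -y /dev/sdb           # or push a single drive into standby immediately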

With the hard drives spun down, I was looking at a consistent 9 watts idle from the wall.


Highlights: The strengths of this setup

9 watts at spun-down idle (where it sits most of the time) is fairly reasonable considering there’s 1 SSD + 4 hard drives at my disposal with 16-20TB total capacity (12-15TB via RAID5). Keeping in mind it’s being driven by an ATX PSU, this isn’t a bad showing at all. For comparison, I took a look at a few Synology NAS devices, and with the exception of a couple of models, they all idle at a higher power draw.

If capacity were to become a major issue in the future, switching to 3.5″ drives could be done at the expense of about +15 watts idle. In a situation where they could be aggressively kept in sleep/standby, I suspect the increase would only be 1-3 watts, which is still less than the extra draw of the 8-port controller I tried.

Sitting in open air (19 C), the CPU heatsink was at about 32 C during the rsync, and after power-down none of the chips on the motherboard were detectably warm to the touch. The hottest component was the heatsink on the Marvell SATA controller card, which sat at about 38 C.

The low power clearly translated to low heat, which meant I was able to get by with just 1 case fan at an extremely low setting: to be honest I probably could have relied on the PSU fan alone.


Limitations: The weak points of this setup

Unfortunately, the system as it stands will hold a max of 6 drives: 1 OS drive and 5x storage drives. Realistically, 4x storage drives becomes the day-to-day max, because it’s worth having 1 spare port ready for hard drive upgrades/replacements. Other controller cards are possibilities down the road, but options are really limited when the only expansion slot operates at a max rate of PCIe x2.

The CPU being maxed during the file transfer is another drawback. This 2-core Celeron is worked pretty hard, and while it may be able to handle some other tasks in the future (e.g. Plex transcoding via Intel Quick Sync), any time it’s asked to do 2 things at once I suspect it’ll slow to a crawl.

Switching from the J4005B to the J4105 would add 2 SATA ports (bringing the max drive count from 6 to 8) and would double the core count. I’d expect slightly higher power usage, but I didn’t repeat all my tests with that configuration.


Doing it all over again: What I’d do differently

On one hand, I’m pleased that I managed to get below 10 watts: I’ve got a system that’ll likely serve files and do other tasks for years to come, all inside a nice low power envelope.

On the other hand, I’m really left to wonder if I might have managed to get there anyway with an undervolted i3 or Pentium Gold on a 300-series motherboard with the multiplier capped. Keep in mind that my previous Skylake build idled at 10W. While it didn’t have 4x spun-down drives or a full ATX PSU to contend with, it’s within the realm of possibility that improvements in Kaby Lake and beyond would be enough to offset those.

In any case, were I to go at this again, I suspect I’d go with a Micro-ATX board with 6 SATA ports and just tinker as much as possible to get the power consumption down. Obviously the cost would be a bit higher (and I wouldn’t have been able to use my spare DDR4-SODIMM), but future expansion would be substantially easier.

16 Comments

  1. Jaron Ensley on December 31, 2019

    Hi Matt –

    Just wanted to say that your work regarding 32-bit EFI/64-bit CPU Macbooks was a lifesaver. Just wanted to say thanks and you should make a YouTube video on how to do it correctly, because there are a lot of videos on how to do it wrong.

    Thanks, Jaron

  2. Luis on February 24, 2020

    Very good post. Similar configuration to what I’m looking for (I may wait for the J5040 processor to be released). I was searching for a pico PSU; after reading this, I changed my mind.

  3. Valerio on April 6, 2020

    Hi,
    Very nice article!
    If you went for an i3 and a non-ITX motherboard, how much more power would it require compared to the Atom build?
    From your experience, what idle power can a CPU like the Pentium Gold G5400(T) with an mATX motherboard reach (just CPU/RAM running idle)?

    Thanks
    Valerio

    • Hey Valerio. As for an i3, I had a previous Skylake build that I managed to get down to 10W idle. Of course, at load the wattage was quite a bit higher than the Goldmont. As to a non-ITX motherboard, it shouldn’t inherently pull more power; however, they often have a higher component count, and fewer components is usually better in terms of power savings. The biggest challenge really seems to be the ATX PSU… efficiency tends to really start dropping off at sub-20W to the point where you’re fighting for every watt saved.

      • Valerio on April 6, 2020

        Thanks Matt,
        so basically, just use lower wattage and smaller components, a low power PSU, a small motherboard, etc. Every build I saw with an i3 and such doesn’t go lower than 30W idle :( I’ll try to see how efficient my spare Enermax Eco80+ is; if it’s not enough I’ll try a different one, maybe a pico PSU. It’s hard to find this kind of real-world test online :D

  4. xtos on May 4, 2020

    Excellent article… thank you. Just what I was looking for.
    (I feel grateful Google’s 1st result was your page for the search term “lowest power consumption pc as nas”)

  5. LucianLS on May 10, 2020

    Thanks for this article! Here’s my build with the similar ASRock J4105: https://forum.openmediavault.org/index.php?thread/32310-my-low-power-nas-in-a-closet/

    • Definitely like your build! Also nice to see the consumption while watching a film: it’s something I was curious about but never got around to testing.

  6. Mathew7 on May 27, 2020

    My concern is about UPS runtime, not yearly cost. I don’t have exact power measurements, but my BackUPS RS 900G estimates 200 minutes of uptime (5%, so around 27W) with the NAS (+1x spun-up 3.5″) + router (9W) + modem. My previous server drew 44W by itself with 2x 3.5″ HDDs spun up.

    So I ended up using a Qnap TS-253Be (the non-e model is similar) with Linux and a single 14TB 3.5″ Seagate IronWolf (thinking about a lower-rpm WD Red and moving this one to the backup server)
    My config:
    – J3455 CPU + 2x8GB RAM (came with 1x2GB)
    – 1x 14TB 3.5″ Seagate IronWolf
    – PCIe x4 slot with a 2x M.2 NVMe adapter (Qnap PCIe 2.0, x4 upstream to 2x M.2 x4)
    – 512GB Samsung 970 PRO (for torrents)
    – 128GB Samsung SM951 (for OS)
    But I lack ECC RAM, and the PSU is questionable (although it’s the original one “certified” by Qnap)

    Notes:
    – BIOS boots only from internal flash or SATA drives (no boot from PCIe slots, so I have to load kernel from internal flash)
    – this model has 4GB internal flash (older Intel NASes had only 512MB)
    Don’t know about Windows (maybe install on a SATA SSD and then transfer OS to NVME with boot partition on internal flash)

    I assume the 4-bay version to be as low-power as this and have 4×3.5″ + 2xM.2. I think Qnap even has a 10G+2xM.2 PCIe adapter.

  7. Airbag888 on June 4, 2020

    OMG, where have you been all my life! I have looked for low power enthusiasts all over and never seem to find them..
    Albeit my use case intends on combining NAS and home server in 1 box, I am also after the holy grail of low power consumption.
    My current aging NAS (Dlink, ugh) caps at 11MB/s writes, which sucks when transferring drone videos.
    I also want space for docker images and some VMs for services like home automation.
    Anyhow, electricity being expensive here, I NEED the low power goodness for 24/7 runtime..

    What were the Synology NASes that you looked at, and what was their idle power consumption btw?

    Have you considered the ASRock A300 (with Ryzen, etc)? Might be overkill for a NAS though :)

    Anyway, thank you, and I look forward to more write-ups (or better, YouTube videos) in that niche

    • As to the Synology NAS products, I’d looked at a few that were commonly available (Amazon etc) and then checked Synology’s website (they list power consumption for models under the “Specs” heading). Currently, in the 2-5 bay range depending on model, they seem to list in the 5-15W range with “HDD Hibernation” and the 15-35W range for “Access”, though those power measurements seem to be taken with 1TB WD Reds, which have lower power consumption than typical higher capacity 3.5″ drives. To get a specific number for a model you’ll have to look at its spec sheet. Those numbers are certainly decent, but obviously there are potential advantages to building something custom.

    • ballardian on August 12, 2020

      I don’t know if you have done your build yet, but your use case sounds similar to mine. I’m still running a Windows 10 Pro home server on an embedded Atom 330 processor from 10 years ago. At this point a Raspberry Pi 4 is probably just as capable or more. I’m looking to replace it with a box that is more powerful but still low power/heat due to my case and use.

      My leading candidate is the GigaIPC mITX-1605A: basically, it runs a Ryzen Mobile processor at a 17W max TDP (another 7W max for the onboard graphics, for a total of 25W). It’s as powerful on PassMark as my 1-year-old i7 laptop. The drawback is that it doesn’t have the 6 SATA ports I need, but it does have a mini PCIe slot that I plan to add a SATA controller into. It isn’t a cheap board, but after my last very low power, very low cost board ran for 10 years, paying a bit more for longevity seems worth it this time around. As my server isn’t on all the time, only when needed, a higher draw is a good trade-off for a very low sleep mode.

      There is more information on my research / ideas on my blog which is techdabble dot wordpress dot com if you are interested.

  8. Danny on July 5, 2020

    Thanks so much for posting this; it helps me a lot with my research. The NUC7CJYH (J4005 NUC) seems to be more efficient, drawing about 5W idle. It’s not much power saved, and it only has room for 1x 2.5″ drive plus an M.2, so it’s only suitable as a media server. However, if you already have a NAS and are looking for a home server, the NUC could be a better (and cheaper) choice.

  9. Danny on July 5, 2020

    The Synology systems tend to be pretty power efficient actually; the spec sheets undersell their real-world performance. For example, the 2-bay DS220j will do 5.5W from the wall at idle.

    Here is a test: https://www.techpowerup.com/review/synology-ds220j-2-bay-nas/12.html

    If your primary concern is power consumption, the Synology NASes will have you covered. There are many other reasons to build custom, but power consumption isn’t really one IMO.

  10. Marc Gutt on September 23, 2020

    I’m using the ASRock J5005 in my backup Unraid NAS, and as it’s possible to use SATA port multipliers, you are NOT limited to the 4 ports!

    But my main NAS has low power consumption as well. The Gigabyte C246N-WU2 (CEC 2019 enabled, ErP enabled) with an i3-8100 consumes only 6.65W incl. SATA SSD, 16GB RAM and a 1G LAN connection. The final NAS with Unraid installed consumes 23.60W with 8 (!) 12TB HGST 3.5″ HDDs in standby and a 10G network adapter (which alone consumes 6W). Sadly there is no adapter that adds the SATA DevSleep ability (it uses the 3.3V pins to put the HDD into a state that consumes only 5mW). This is something used in enterprise storage and notebooks.

    @Danny
    “the 2-bay 220j will do 5.5W”
    It seems you did not read the test setup. They used super small SSDs for this test, and these consume nearly nothing. That’s good for comparing different NAS models, but it has nothing to do with real-world consumption (unless you install SSDs as well) ;)
