Ampere Server Build Log 1

I found myself needing a second prod-ready server, and the AIC RMC-2E chassis that I ordered a few weeks ago has finally arrived.

And there it is, proudly sitting on the bottom of the rack. I think that this 2U form factor is fantastic, and now my 3U main rig feels so chonky.

It's still empty inside, awaiting delivery of the rest of the server parts.

The stock fans were replaced with Noctuas for reduced noise, and I've also added a SAS/SATA PCI-E card, as the motherboard that I purchased does not have SATA ports.
AIC RMC-2E Reversible Front/Rear 2U Rackmount Chassis, 4 x 2.5"
The RMC-2E is AIC's first chassis model focused on edge-computing appliances. Its biggest selling point is the flexibility to support either front or rear I/O, adjustable in the field with a simple change, and the 450mm depth fits easily into most cabinets.

It's slightly unfortunate that the final production color is black, while the chassis in their promotional YouTube videos was silver.

I'm happy with my rough spray paint job though. (I made so many dumb mistakes in the process, but the end result was fine.)

CPU - Ampere Altra Q64-22

I've decided to go with an Ampere Altra Q64-22 as the CPU of choice, which is a 64-core ARM CPU.

It's one of the lower-end SKUs that Ampere offers, but the options available to me from Newegg were the Q64-22, M128-26, and M128-30.

The M128s are rated between 150-180W TDP, whereas the Q64-22 is only 69W TDP, and given that this chassis only supports 60mm chassis fans, I wanted a low-TDP build to keep noise levels low. I decided that the 64-core, lower-TDP SKU was the right model for me. I also don't expect to be running anything CPU-heavy, as this machine is designed for just a few things:

  • A ZFS+Samba backup SSD NAS
  • The leader of my Docker Swarm cluster, which should be more reliable as the primary Portainer host than my Raspberry Pis running off of MicroSD cards.
  • An LLM Server
  • Just a place to experiment with being an early adopter of the Ampere CPU
  • Remote development environment, including a self-hosted CI/CD tool, deployment orchestration, and build host

Although this server will not be for gaming, I still want to equip it with a GPU for running LLMs. My 3U server has been mainly running Monster Hunter Wilds for the past few weeks and it doesn't have the additional bandwidth to be my main LLM machine. 😅

GPU - RTX 3090

As for GPU choice, I've decided to go with a Turbo Edition RTX 3090, which is the blower-style variant of the 3090, perfect for a 2U chassis in terms of both size and airflow. Blower cards take up exactly two PCI-E rows of space and do not extend past the PCI-E mounting bracket in width. Also, airflow in servers must travel front-to-back, which makes blower GPUs more effective than open-air gaming GPUs at exhausting heat in a rack.

The 3090s are quite popular GPUs for their Price/VRAM ratio, as they have a sizeable 24GB VRAM, allowing you to run larger models.
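
As a rough sanity check on what 24GB actually buys you (this is my own back-of-the-envelope math; the overhead allowance and the model/quantization picks are assumptions, not benchmarks): a quantized model's weights take roughly parameter count times bits per weight divided by eight, plus headroom for the KV cache and runtime.

```python
# Back-of-the-envelope check of which quantized models fit in 24GB of VRAM.
# The ~2GB overhead allowance and the model/quantization picks are my own assumptions.

def weight_gb(params_billion: float, bits_per_weight: float) -> float:
    """Approximate size of the model weights alone, in GB."""
    return params_billion * 1e9 * bits_per_weight / 8 / 1e9

VRAM_GB = 24
OVERHEAD_GB = 2  # rough allowance for KV cache, activations, CUDA context

for params, bits in [(8, 8), (14, 4), (32, 4), (70, 4)]:
    need = weight_gb(params, bits) + OVERHEAD_GB
    fits = "fits" if need <= VRAM_GB else "does not fit"
    print(f"{params}B @ {bits}-bit: ~{need:.1f}GB -> {fits} in {VRAM_GB}GB")
```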

The only other consumer-level GPU that offers more than 24GB of VRAM is the RTX 5090 at 32GB, and we all know those don't exist. The 5090 FE is technically exactly two rows tall, but it protrudes quite a bit in width, and the chassis does not have enough clearance for it.

Out in the wild, I heard that there are 48GB blower 4090s, but I don't know if I really want to go that far.

My current GPU, the RTX 5000 ADA

My original plan with GPUs was to transplant my current RTX 5000 ADA, which is a 32GB VRAM card, to this second server, and eventually upgrade my gaming rig to a dedicated gaming GPU. However, a few things happened:

  1. The 50-series launch had me disappointed. The 5090, for example, is rated for a whopping 500+ W TDP! That's more than double my RTX 5000 ADA. And because of the crazy TDP requirements, the 5090 does not have a SKU with a blower-style fan. The 5080 is also rated for a hefty 360W TDP with no blower-style SKU. Any other card would most likely be a downgrade from the RTX 5000 ADA.
  2. After getting the RMC-2E 2U Chassis, I've come to the revelation that 2U is the best U. My current 3U chassis looks chonky on the rack, and I eventually want to downsize it to a Sliger CX2151c when I need to free up the last precious row of space on my 12U rack.
CX2151c | Sliger
Sliger Designs is a manufacturing company based in the United States specializing in computer cases and systems.

Downsizing means I need to stick to a 2-row-height GPU: either keep the RTX 5000 ADA or side-grade to a Turbo Edition (blower-style) RTX 4080. That left me with the following options:

Option 1. Grab a 4080 for my gaming rig and give my new server the 5000 ADA

Option 2. Keep the 5000 ADA for the gaming rig and grab one of the following:

| GPU | TDP | VRAM | Market Price | Price per GB of VRAM |
| --- | --- | --- | --- | --- |
| RTX A4500 | 200W | 20GB | $1300 | $65 |
| RTX 4000 ADA | 130W | 20GB | $1500 | $75 |
| RTX A5000 | 230W | 24GB | $1400 | $58 |
| RTX 4500 ADA | 210W | 24GB | $2300 | $96 |
| RTX 3090 | 350W | 24GB | $1000 | $41 |

While the 3090 was the best bang for my buck, the RTX A5000 was also a close consideration. The A5000 is essentially an RTX 3080 with more VRAM and a lower TDP, so I figured I could power-cap the 3090 to about 250W and end up with the perfect GPU for my use case.
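
The simplest way to do that is an `nvidia-smi -pl 250` at boot, but here's a rough sketch of the same thing through NVML's Python bindings; the 250W target and GPU index 0 are my assumptions, and setting the limit requires root.

```python
# Hedged sketch: cap the 3090 at ~250W using the NVML Python bindings.
# Assumes `pip install nvidia-ml-py` and that the 3090 is GPU index 0;
# applying the new limit requires root. `nvidia-smi -pl 250` does the same thing.
import pynvml

TARGET_WATTS = 250  # my assumed cap

pynvml.nvmlInit()
handle = pynvml.nvmlDeviceGetHandleByIndex(0)

current_mw = pynvml.nvmlDeviceGetPowerManagementLimit(handle)  # milliwatts
print(f"current limit: {current_mw / 1000:.0f}W")

# NVML expects the new limit in milliwatts.
pynvml.nvmlDeviceSetPowerManagementLimit(handle, TARGET_WATTS * 1000)
print(f"new limit: {TARGET_WATTS}W")

pynvml.nvmlShutdown()
```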

In the end, I leaned towards option 2, with an RTX 3090 going into this new Ampere server and the RTX 5000 ADA staying as my gaming GPU.

RAM, Storage, and other peripherals

I've already purchased an LSI SAS3008 card as a SATA controller, as the ASRock Ampere motherboard does not have any SATA ports for the chassis's SATA backplane. The SlimSAS ports on the motherboard are also PCI-E only, not SATA.

As for RAM, these Ampere systems are known to scale really well with more memory channels, so I decided to go with 8 sticks of 32GB RDIMMs to fill all channels, for a total of 256GB.
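
To put a number on "scale really well with more channels" (my own arithmetic, assuming the Altra's 8-channel DDR4-3200 memory controller): peak theoretical bandwidth is just channels times transfer rate times eight bytes per transfer.

```python
# Rough peak memory bandwidth for an 8-channel DDR4-3200 setup (DDR4-3200 is my
# assumption for this board); this is a theoretical peak, not a measured figure.
CHANNELS = 8
TRANSFERS_PER_SEC = 3200e6   # DDR4-3200 = 3200 MT/s
BYTES_PER_TRANSFER = 8       # 64-bit channel

bandwidth_gb_s = CHANNELS * TRANSFERS_PER_SEC * BYTES_PER_TRANSFER / 1e9
print(f"~{bandwidth_gb_s:.1f} GB/s peak")  # ~204.8 GB/s with all 8 channels populated
```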

Lastly, I need 4x 2.5" drives to fill the SSD backplane for my ZFS pool. I'm still debating which drives are the right ones to get here, but I am physically restricted to SATA SSDs.
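
Whatever drives I land on, the other half of the decision will be the pool layout. Here's a quick sketch comparing the usable capacity of the two layouts I'd realistically consider for four drives; the 4TB drive size is purely a placeholder.

```python
# Usable-capacity comparison for a 4-drive pool, before ZFS overhead.
# The 4TB drive size is hypothetical; raidz1 vs striped mirrors are the
# two layouts I'd realistically consider for four drives.
DRIVES = 4
DRIVE_TB = 4  # placeholder size

layouts = {
    "raidz1 (single parity)": (DRIVES - 1) * DRIVE_TB,
    "2x mirror vdevs":        (DRIVES // 2) * DRIVE_TB,
}

for name, usable in layouts.items():
    print(f"{name}: ~{usable}TB usable of {DRIVES * DRIVE_TB}TB raw")
```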

Next update should be the build log of the system, alongside the OS installation, migrating the Docker Swarm leader and the Portainer host, setting up a remote development environment, and more.