Downsizing my 3U Gaming build to 2U


2U is the best U, and you're wrong if you disagree.

Considering that I only have 12 U's of precious space available on the rack, and how the rest of my rack-mount equipment is 2U or less, 3U just felt a little bit too chonky.

When I purchased the 3U chassis, it felt like the perfect balance between size and cooling, and I still do not doubt that decision.

CX3171a | Sliger
Sliger Designs is a manufacturing company based in the United States specializing in computer cases and systems.

This chassis can support up to a 3-slot GPU, fits most 360mm AIOs, and you can even further improve cooling by adding smaller exhaust fans on the back.

3U build that I am using

The problem? It's 3U. It looks unnecessarily big when juxtaposed with the rest of the rack. And I don't "really" need an AIO: I'm running a 120W TDP CPU (9900X) and a 250W TDP GPU (RTX 5000 Ada), both of which are designed to work well air-cooled.

3U is compact for a gaming PC, but it still feels too big on the rack.

The GPU is also a blower-style workstation GPU, meaning that it is exactly 2 slots thick (or I'd call it 2 slots thin), so its cooling performance isn't really impacted by 2U vs 3U.

At the end of the day, I've made up my mind and I'm planning a downsized build around another Sliger chassis, the CX2151c. The move is primarily for aesthetics, but I also wanted those additional 2.5" bays on the front, and it frees up precious rack space for the future.

CX2151c | Sliger

The only thing I would need to replace is the CPU cooler: I had to find a low-profile cooler for AM5 to stand in for the 360mm AIO.
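Some rough napkin math shows how little height there is to work with (the ~20mm consumed by motherboard standoffs, PCB, and socket is my assumption; exact clearance depends on the chassis):

2U = 2 × 44.45mm = 88.9mm of total height
88.9mm − ~20mm (standoffs + PCB + socket) ≈ 68mm of cooler clearance

That rules out tower coolers entirely and leaves low-profile down-draft coolers like the ones below.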

Sliger officially recommends the ID-Cooling IS-55, a popular SFF air cooler whose performance you can further improve a bit by replacing the provided 120mm fan with a Noctua one.

During my research, the Thermalright AXP90-X53 also caught my attention; I've heard its performance exceeds that of the IS-55.

Thermalright AXP90-X53 Full Black Review
by u/kikimaru024 in sffpc

Lastly, I kept the Dynatron A47 as another option, a CPU cooler supposedly designed for server chassis. I've been happy with the Dynatron W1 in my 2U Ampere server, and I rate their products highly.

Reality strikes

Long story short, the ID-Cooling IS-55 didn't fit. I am using extra-tall RAM sticks, and the IS-55 requires heatsink-less RAM for clearance.

I also almost purchased the Dynatron A47, but something felt wrong. That bad feeling was confirmed when I read several reviews (of the lower-end A24 model, but applicable here): the way Dynatron's coolers mount to consumer AM5 motherboards, the cooler blows air sideways, not the expected front-to-back. This comes down to how the socket and RAM placements differ between server and consumer motherboards:

On the left is a server AM5 motherboard. On the right is a consumer AM5 motherboard.

Note that the RAM and CPU socket are rotated on the server motherboard, which is optimized for front-to-back airflow, unlike the consumer motherboard, which is designed for traditional towers.

Essentially, without buying a new motherboard, the Dynatron A47 was not an option.

That meant I was left with the Thermalright AXP90-X53.

Test Fit

To make sure I wasn't making a huge mistake, I went ahead and tried fitting the cooler in my 3U chassis. This is obviously not a 1:1 comparison, as the 2U will have even less space and the CPU will be choking a lot more, but this was more of a "test fit" to set my expectations.

Firstly, I encountered some pretty bad thermal throttling, going over 90°C in MH:Wilds.

This was kind of expected, because AMD's "120W TDP" isn't truly 120W. The 9900X is configured out of the box to draw up to 162W under load, and I'm not even talking about a short boost, I'm talking about sustained draw. So the first thing I did was power-limit the CPU through Ryzen Master, setting the PPT value to 120W so the CPU would not draw any more than that, and this was all I needed to get temps down to manageable levels.
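For context, that 162W figure isn't arbitrary. On desktop Ryzen, AMD sets the default package power limit (PPT) at 1.35× the rated TDP:

PPT = 1.35 × TDP = 1.35 × 120W = 162W

Capping PPT at 120W effectively runs the chip at its advertised TDP, trading a bit of all-core clock speed for much lower sustained heat output.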

The Chassis Arrives

Well, it looks really good in person. It doesn't have the chonky feeling that the 3U did.

Transplanting the parts was easy. The build itself felt exactly the same as the 3U build, just in a slightly smaller form factor; during the build process you don't really notice the size difference.

I'm using 3× 80mm Arctic PWM fans, which were the fans recommended by Sliger, and they are really quiet even at 100%, although I am a bit skeptical of their cooling performance: they only go up to 3000RPM. (Note that my definition of quiet might be different from yours.)

You might already see that the RAM sticks are blocking off airflow to the CPU, and I'm not too sure this build is going to be able to handle sustained heavy loads.

While quiet, I've seen the temps climb to 95°C under load (I haven't run any benchmarks yet; "load" here is a quick MH:Wilds playtest). No throttling yet, but I'm not too comfortable knowing that I essentially have no thermal headroom.
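Since my "load" data so far is just a playtest, I'll eventually want logged numbers rather than glances at an overlay. Below is a minimal temperature-logging sketch; it assumes a Linux environment with the k10temp driver exposed through psutil (psutil's sensor API isn't available on Windows, where HWiNFO or Ryzen Master fills the same role), so treat it as an illustration rather than what I actually ran:

```python
# Minimal CPU temperature logger (assumption: Linux with the k10temp
# driver loaded; psutil.sensors_temperatures() is not implemented on Windows).
import time
import psutil

def cpu_temp():
    """Return the hottest k10temp reading (Tctl/Tdie on Ryzen), or None."""
    readings = psutil.sensors_temperatures().get("k10temp", [])
    return max((r.current for r in readings), default=None)

peak = 0.0
try:
    while True:  # sample every 2 seconds until Ctrl+C
        t = cpu_temp()
        if t is not None:
            peak = max(peak, t)
            print(f"current: {t:5.1f}C  peak: {peak:5.1f}C")
        time.sleep(2)
except KeyboardInterrupt:
    print(f"\nSession peak: {peak:.1f}C")
```

Watching the peak over a full play session is a much better headroom check than eyeballing a number mid-fight.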

In case the temps just don't work out, I've made a backup purchase of a server AM5 motherboard plus the Dynatron A47, which I'll use if I can't find another way to tame the temps. These two parts are arriving tomorrow, but I'll be returning them unless I absolutely need them.

Painted, Mounted, and Next Steps

I finished up spray-painting the front plate to match the rest of my Unifi-aluminum color scheme, and I can definitely say that the rack looks much better now.

I haven't yet decided on what exactly I will do with the remaining 1U of space.

I might grab a Unifi Aggregation switch for 10G networking, but if I really do need 10G, I would rather upgrade my current Pro Max PoE to a Pro HD PoE switch, the main difference being that the Pro HD model has multiple 10G ports. This means I can get 10G networking without having to consume the extra rack unit.

Switch Pro HD 24 PoE - Ubiquiti Store
Professional-grade, Layer 3 Etherlighting™ switch with (2) 10 GbE PoE++, (22) 2.5 GbE PoE++, and (4) 10G SFP+ ports.

Or maybe more Raspberry Pis? Nvidia Jetsons? Mac mini? To be quite honest, I don't really have a need to add anything more to my rack at this moment, so I'll be rocking this configuration until the next new shiny thing comes up.