The fine folks over at ServeTheHome have done a fantastic job documenting the Dell C6100. This is a rather interesting system because:

  1. It packs quite a bit of compute power into 2U for colocation
  2. They are surprisingly cheap on the secondary markets

I'll point you over to the ServeTheHome blog posts, which deliver a nice overview.

My ideal config is a redundant set of servers running border services (routing, NAT, firewall, VPN) and a redundant set of servers running a variety of applications, VMs, etc.

The C6100 comes in either a 12-disk 3.5" chassis or a 24-disk 2.5" chassis. The bays are split between the nodes at the factory: three 3.5" bays per node or six 2.5" bays per node.

My border networking nodes don't need a lot of disk space. In fact, they don't even really need RAID thanks to the magic of PF and CARP, so I'd rather not waste the hot swap bays on them. There are several options here: PXE or iSCSI boot, booting from a USB thumb drive, or figuring out how to cram some storage inside the unit. It turns out the last option isn't that hard if you want traditional local storage.
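Since I keep mentioning PF and CARP: here's a minimal sketch of what the failover side of a border node can look like on OpenBSD. The interface names (em0/em1/em2), addresses, and password below are placeholders, not my actual config:

    # /etc/hostname.carp0 -- shared WAN address both border nodes advertise
    inet 203.0.113.10 255.255.255.0 NONE vhid 1 carpdev em0 pass examplepass

    # /etc/hostname.pfsync0 -- sync firewall state over a dedicated link
    up syncdev em2

    # /etc/pf.conf fragment -- NAT out the shared address, allow carp/pfsync
    ext_if = "em0"
    int_if = "em1"
    match out on $ext_if inet from $int_if:network to any nat-to (carp0)
    pass quick proto carp
    pass quick on em2 proto pfsync

The second node runs the same carp0 setup with a higher advskew so it stays backup until the master disappears; with pfsync keeping the state tables aligned, a failover (or me yanking one of these nodes for maintenance) shouldn't drop established connections.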

Using info from the ServeTheHome forums, I built a 5V power tap. I bound the two 5V rails and two grounds from the internal USB header to a donor 4-pin Molex to SATA power converter. I recommend sticking with a low-power SSD like the Crucial M500 or certain Samsung units, as this is stretching the USB-standard power envelope a bit at 0.5 A per port.
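For a rough sense of the headroom (assuming both header ports follow the usual USB 2.0 budget of 5 V at 0.5 A):

    2 ports x 5 V x 0.5 A = 5 W total

A low-power SATA SSD typically draws only a few watts even under sustained writes, so it fits, but there isn't much margin for a hungrier drive.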

Dell c6100 5V USB to SATA power tap

Here's a parts list per node:

  • Qty: 4 - Molex 12" PicoBlade pre-crimped wires - Mouser link
  • Qty: 1 - Molex 8-position PicoBlade connector - Mouser link
  • Qty: 1 - SATA power connector salvaged from a 4-pin Molex converter
  • Qty: 1 - 6" SATA cable

Dell c6100 internal SSD storage

I mounted the internal SSDs with a stack of automotive trim tape on top of the Southbridge heatsink. A block of foam usually sits here to support mezzanine cards so I'm not too concerned. If need be, I can cold swap these disks without too much trouble while the other nodes continue to run.

This in turn frees up the hot swap bays to be split between the two app nodes, which is great: six 3.5" bays per app node give me the right balance of flexibility, capacity, performance, and cost. Rewiring this is a bit of a chore. You need to order two SFF-8087 to 4x SATA 7-pin breakout cables (check Monoprice). You can reuse the two original cables with some creative wiring (the front bays end up numbered right to left), or purchase four of the aforementioned cables for correct numbering.

Full c6100 loadout

If you're running SSDs in an array, I recommend hunting down the LSI SAS2008-based "XX2X2" mezzanine card, which runs at 6 Gbps and supports >2 TB disks. You need various bits to install it, including new SATA cables, a PCIe riser, and metal brackets. At the time of my purchase, the easiest way to get all that was to buy the older "Y8Y69" mezzanine card (which comes with those bits) plus a bare "XX2X2" card.

These hacks don't make sense in every setting, as they're fairly time-consuming. If you have the budget, the 24-disk 2.5" chassis is the way to go. But for personal use, this is a great build for me!
