Last update: 2014.01.13
I know from time to time here I mention my home lab. I thought I’d take some time to share the details with you.
I am one of the geekiest guys I personally know, with more computing power in my home lab than most small businesses. I continually invest in this lab because I feel strongly about the importance of continuing self-education. I'd like to share the details of this lab with you so you can see how inexpensively you can build one yourself. The constant self-education I put myself through is how I've been able to grow rapidly as a technologist. I cannot stress enough that self-education is one of the biggest keys to success in this field.
Containment and Management
A while back, I scavenged an older 7′ Compaq server rack and a two-post telecom rack. I placed them in my utility room so I don’t have to hear the servers quite as much. My goal is absolute containment – if a new purchase won’t fit in these two racks, something has to go to make room for the new stuff!
I also have a rack-mount LCD and keyboard combo that I repaired a while back, and my brother-in-law was nice enough to repair the power supply in a salvaged Dell KVM-over-IP switch for me. Thanks Joe!
I also have a Panasonic FV-20NLF1 constantly running fan in that room that moves hot air out from behind the servers and into the rest of the basement. During the winter it heats the rest of the house quite nicely. During the summer, I have plans to vent the hot air out to the garage, where it will dissipate with no impact to the household cooling.
I also have two rack-mount APC Smart-UPS 2200VA UPSs, five APC Back-UPS XPS 1500VA UPSs, and an APC Back-UPS 1000VA UPS in the server room to keep the nasty storms that roll through this part of Nebraska from cooking my lab. Check out this photo I snapped in 2012 during one of those storms. See what I mean? This was directly over my house!
When it comes to any sort of lab or shared computing environment, shared storage is your number one focus. I have two storage arrays in the lab. If you run the volume of 24/7 VMs that I do, one or two shared disks simply will not do unless you have incomprehensible amounts of patience. My storage was constructed with performance in mind, as well as redundancy and manageability. IOPs DO matter, even in a lab. Eventually I will migrate this to smaller, quieter SSD-based units that produce less heat, but for now, these units work great!
My first storage device is a home-built SAN with 18 500GB SATA drives, similar to units sold by eBay vendors. It's a Rackables 3U rackmount chassis with 16 hot-swap SATA bays in the front and two more in the back. It has two dual-core AMD Opterons with 8GB of memory and two 3Ware 8-port SATA controllers (so it cost less than comparable prebuilt units). I also added a dual-port PCI-X gigabit network adapter for fast, reliable network performance.
I added a Compact Flash to SATA adapter that mounts in an expansion bay and popped in a 4GB CF card so that I can boot this thing with no spindles attached. I have the latest stable version of OpenFiler installed on the CF card and use it as the SAN's operating system. I have an iSCSI target configured on the machine and use it as shared storage for the VMware environment. It is absolutely bulletproof and has been rock solid for years.
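For reference, pointing an ESXi host at an iSCSI target like this one only takes a few commands. This is just a sketch: the adapter name (`vmhba33`) and target address are illustrative assumptions, not values from my lab.

```shell
# Enable the software iSCSI initiator on the ESXi host
# (adapter name and target IP below are illustrative assumptions)
esxcli iscsi software set --enabled=true

# Point the initiator at the OpenFiler target for dynamic discovery
esxcli iscsi adapter discovery sendtarget add \
    --adapter=vmhba33 --address=192.168.1.50:3260

# Rescan so the new LUNs show up as candidate datastores
esxcli storage core adapter rescan --adapter=vmhba33
```

After the rescan, the presented LUNs appear as devices you can format as VMFS datastores from the vSphere Client.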
The most expensive part of this storage array is, of course, the disks. I have 18 500GB Hitachi Deskstar SATA drives in this unit, arranged as two nine-disk RAID-5 groups, each with its own hot spare. This gives me about six terabytes of usable space that I can present to my VMware and Hyper-V environments in whatever manner I need.
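The capacity arithmetic for that layout is straightforward; here is a quick sketch based on the configuration described above (two nine-disk groups, each losing one disk to a hot spare and one disk's worth of capacity to RAID-5 parity):

```shell
# Usable capacity of the array described above
disks_per_group=9
hot_spares=1      # one hot spare per group
parity_disks=1    # RAID-5 loses one disk's worth of capacity per group
drive_gb=500
groups=2

data_disks=$((disks_per_group - hot_spares - parity_disks))   # 7 per group
usable_gb=$((data_disks * drive_gb * groups))
echo "${usable_gb} GB usable"                                 # prints: 7000 GB usable
```

That 7000 decimal GB is roughly 6.4 TiB in binary units, which squares with the "about six terabytes" figure once formatting overhead is accounted for.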
I did not want to just put local storage in my compute nodes, because then I'd have fewer hands-on opportunities to move data around a virtualized environment and work through the design considerations that come with it. Plus, shared storage makes it super easy to swap compute nodes in and out.
I purchased a Synology 1812+ eight-bay network attached storage unit. I absolutely love this device. I’ve got four Western Digital 3TB Red drives and four 1.5TB disks for secondary storage. It is a file server for the house, media server for the home theater, VMware iSCSI target (it’s even VAAI ready!) for primary VMs, OpenVPN server, data replicator to an off-site location’s Synology DS713+ for disaster recovery purposes, and even replicates some of my data to Amazon Glacier for archival purposes. This thing is incredible, and I wholeheartedly recommend that you get one of these for your home.
The second array is a mix-and-match array that serves a number of different uses. It is a Norco 4U rackmount case with 20 hot-swap bays. I have a quad-core AMD Phenom II with 8GB of RAM in it, and a handful of different SATA controllers: a SuperMicro SAS controller with a SAS-to-SATA cable adapter, the onboard ports, and a couple of 3Ware 4-port adapters. It runs OpenFiler for the NAS software and has a multitude of SATA disks in it, for a total of about 10TB of usable storage. Right now this unit is powered off because the Synology unit took over from it, but it's ready to go if I need it.
My networking infrastructure is not very complex. I have four 24-port and one 48-port gigabit switches that I either bought or scavenged over the years. I have one dedicated to iSCSI storage traffic. I have one dedicated to management and traditional VM traffic. I have one that is configured for 20 VLANs that I use when working with vCloud Director, vFabric Data Director, and other packages that are designed for VLANs rather than flat networks. I also have two 48-port 100Mb switches that I use for all other things in the home network, such as printers, temperature monitors, game consoles, Blu-Ray players, etc.
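As an illustration of how a VM plugs into one of those tagged VLANs, here is how a Linux firewall or router guest could create a tagged sub-interface with iproute2. The interface name, VLAN ID, and address are assumptions for the example, not my actual configuration.

```shell
# Create a tagged sub-interface for VLAN 20 on eth0
# (illustrative names; requires the 8021q module and root privileges)
ip link add link eth0 name eth0.20 type vlan id 20
ip addr add 10.0.20.1/24 dev eth0.20
ip link set dev eth0.20 up
```

The VM's port group just has to pass the tagged frames through (VLAN trunking on the vSwitch side) for the guest to see them.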
Everything goes out of the house through a SonicWall TZ2400 through business-class cable Internet from Cox Communications. I will soon have a VLAN carved off just for a DMZ, and will put a few honeypot VMs in there for fun.
I primarily use VMware vSphere 5.5 in the home lab. It’s still my favorite virtualization platform on the market today, and it makes my life easy. It allows me to abstract the hardware from the resources contained in the compute nodes, and it will scale depending on how I need it to. I can do anything I want in this cluster, and then some.
Three compute nodes are currently configured, with nine datastores and a truckload of shared resources connected.
VMware vSphere Compute Nodes – HP DL380 G6
Here is where the fun is at. I have a full VMware cluster.
The VMware cluster is made up of three HP DL380 G6 servers. They run dual-socket, quad-core Intel Xeon X5550 CPUs at 2.67GHz, and each has 72GB of memory and four gigabit NICs on the motherboard. Each server boots ESXi 5.5 from an SD card, and each has four 146GB 10k SAS disks for local temporary storage (which will hopefully participate in vSAN activities soon).
These are most certainly fast enough for my purposes, and have been absolutely rock solid with reliability. I cannot stress this enough. With my home lab, speed is not the primary concern. The time taken to maintain this environment matters more to me than raw performance, because I simply don't have enough free time to babysit misbehaving equipment. In eight years of having the DL380s and DL385s in my lab, I have never had to replace or repair anything with this hardware. This includes power supplies, fans, power regulators, etc. This experience keeps me purchasing HP for the home setup when appropriate.
These nodes can be purchased on eBay for less than $1000 shipped. The vendor I use has been very good to me, and their prices on refurbished HP servers are unparalleled.
Microsoft Hyper-V Compute Nodes – SuperMicro X7DCA-L
I also have three SuperMicro X7DCA-L machines configured as a Hyper-V 2012 R2 failover cluster. They are 1U, dual-socket, quad-core Intel Xeon L5420 servers with 24GB of RAM and a 250GB internal drive. With rails, they were each less than $350 shipped. I have a PCI-Express riser card in each server so that I can add an additional network adapter, and I purchased dual-port PCI-Express Intel gigabit network adapters for each of these servers. Again, they are not the fastest, but they are great buys for the money. More on these as I ramp up on Hyper-V this year!
As for the assortment of virtual machines, I have a little bit of everything. I’ve got about 85 VMs at the moment, with about 20 running all the time. MSDN and VMware trials help a lot.
Of course, I have a bunch of SQL Server VMs around for testing. I have everything from SQL Server 2000 through AlwaysOn clusters on SQL Server 2012, all the way to clones of SQL Server 2014 CTP releases.
I am experimenting with VMware vCloud Director and VMware vFabric Data Director so that I can rapidly provision and destroy machines for individual tests without having to worry about consuming network IPs, renaming and configuring domain settings, etc.
I also have experimental Active Directory environments, Linux firewalls and routers, development tools such as Subversion and Redmine, security toolkits like BackTrack and Metasploit, and an occasional virtual desktop or two.
I try to have the capabilities to simulate any situation that I might encounter in the day-to-day life of this geek. This gives me the ability to experiment with patches and infrastructure so that when I am in the field, I know what to expect and what to warn customers about.
I also want the computing power to work through any one of the research projects that I think of. All of the examples in this blog come from this environment!
Anyways, that’s the end of the home lab details. It has taken a while to get to this state of equipment, and it’s always in flux and evolving with the changing needs (and the availability of the almighty dollar to fund the changes). Contact me if you want more details on the lab or have any other questions!
And… when I was taking photos of the equipment at Kendal Van Dyke (@SQLDBA)’s request, my cat wandered in and wanted a photo snapped of him! Say hello to my cat Jack.