From time to time I mention my home lab here, and I thought I’d take some time to share the details with you.
I am one of the geekiest guys that I personally know. I have more computing power in the home lab than most small businesses. I continually invest in this lab because of how strongly I feel about the importance of continuing self-education. I’d like to share the details of this lab with you so you have an understanding of how inexpensively you can build yourself a lab. The self-education that I constantly put myself through is how I’ve been able to grow so rapidly as a technologist. I cannot stress enough that self-education is one of the biggest keys to success in this field.
Containment and Management
A while back, I scavenged an older 7′ Compaq server rack and a two-post telecom rack. I placed them in my utility room so I don’t have to hear the servers quite as much. My goal is absolute containment – if a new purchase won’t fit in these two racks, something has to go to make room for the new stuff!
I also have a rack-mount LCD and keyboard combo that I repaired a while back, and my brother-in-law was nice enough to repair the power supply in a salvaged Dell KVM-over-IP switch for me. Thanks Joe!
I also have a Panasonic FV-20NLF1 constantly-running fan in that room that moves hot air out from behind the servers and into the rest of the basement. During the winter it heats the rest of the house quite nicely. During the summer, I have plans to vent the hot air out to the garage where it will dissipate with no problems or impact to the household cooling.
I also have five APC Back-UPS XPS 1500VA UPS devices to keep the nasty storms that roll through this part of Nebraska from cooking my lab. Check out this photo I snapped last year during one of those storms. See what I mean? This was directly over my house!
Primary Storage
When it comes to any sort of lab or shared computing environment, shared storage is your number one focus. I have two storage arrays in the lab. If you run the volume of VMs 24/7 that I do, one or two shared disks simply will not do unless you have incomprehensible amounts of patience. My storage was built with performance in mind, as well as redundancy and manageability. Spindle and cache counts DO matter, even in a lab.
My first storage device is a home-built SAN with 18 500GB SATA drives. It is similar to a device sold by an eBay vendor located here. It’s a Rackables 3U rackmount chassis that has 16 hot-swap SATA bays in the front and two more in the back. It has two dual-core AMD Opterons with 8GB of memory and two 3Ware 8-port SATA controllers (so it cost less than the item listed above). I also added a dual-port PCI-X gigabit network adapter so I have fast, reliable network performance.
I added a CompactFlash-to-SATA adapter that mounts in an expansion bay and popped in a 4GB CF card so that I can boot this thing with no spindles attached. I have the latest stable version of OpenFiler installed on the CF card and treat it as the firmware for the SAN. I have the iSCSI target configured on the machine and use it as shared storage for the VMware environment. It is absolutely bulletproof and has been rock solid for years.
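For reference, here is roughly what pointing an ESXi host at that iSCSI target looks like when scripted. This is just a minimal sketch using the pyVmomi Python bindings (not something I use in the lab itself, where the vSphere Client does the job), and the hostnames, credentials, and the OpenFiler box’s address are placeholders rather than my real values.

```python
# Minimal pyVmomi sketch: point an ESXi host's software iSCSI initiator at an
# OpenFiler target and rescan. Hostnames, credentials, and IPs are placeholders,
# and the software iSCSI adapter is assumed to already be enabled on the host.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()            # lab only: skip cert validation
si = SmartConnect(host="vcenter.lab.local", user="administrator",
                  pwd="changeme", sslContext=ctx)
content = si.RetrieveContent()

# Grab the first ESXi host; a real script would look one up by name.
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.HostSystem], True)
host = view.view[0]
storage = host.configManager.storageSystem

# Find the software iSCSI adapter.
iscsi_hba = next(hba for hba in storage.storageDeviceInfo.hostBusAdapter
                 if isinstance(hba, vim.host.InternetScsiHba)
                 and hba.isSoftwareBased)

# Add the OpenFiler box as a dynamic discovery (send) target, then rescan.
target = vim.host.InternetScsiHba.SendTarget(address="192.168.1.50", port=3260)
storage.AddInternetScsiSendTargets(iScsiHbaDevice=iscsi_hba.device,
                                   targets=[target])
storage.RescanAllHba()
storage.RescanVmfs()

Disconnect(si)
```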
The most expensive part of this storage array is, of course, the disks. I have 18 500GB Hitachi Deskstar SATA drives in this unit. They are arranged in two disk groups of nine disks each, configured as RAID-5 with a hot spare per group. This gives me roughly six terabytes of usable disk that I can present to my VMware environment in whatever manner I need.
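If you’re curious how those spindle counts translate into numbers, here is the back-of-the-envelope math. The per-spindle IOPS figure is a generic assumption for 7,200 RPM SATA drives rather than a measurement from my array, and the raw capacity math lands a bit above six terabytes; formatting overhead and binary-versus-decimal units eat the difference.

```python
# Rough capacity and IOPS math for the primary array: 18 x 500GB SATA drives,
# two 9-disk groups, each RAID-5 with one hot spare (layout from the post).
# The ~75 IOPS/spindle figure is an assumed average for 7.2K SATA, not measured.
DISKS_PER_GROUP = 9
HOT_SPARES_PER_GROUP = 1
DISK_GB = 500
GROUPS = 2
IOPS_PER_SPINDLE = 75          # assumed
RAID5_WRITE_PENALTY = 4        # classic read-modify-write penalty

active = DISKS_PER_GROUP - HOT_SPARES_PER_GROUP      # 8 disks in each RAID-5 set
usable_gb_per_group = (active - 1) * DISK_GB         # one disk's worth lost to parity
total_usable_tb = usable_gb_per_group * GROUPS / 1000

read_iops = active * GROUPS * IOPS_PER_SPINDLE
write_iops = read_iops / RAID5_WRITE_PENALTY

print(f"Usable capacity: ~{total_usable_tb:.1f} TB")
print(f"Aggregate random read IOPS: ~{read_iops}")
print(f"Aggregate random write IOPS (after RAID-5 penalty): ~{write_iops:.0f}")
```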
I did not want to just put local storage in my compute nodes, because then I would get less experience with moving data around a VMware environment and the design considerations that come with shared storage. Plus, it makes it super easy to swap compute nodes in and out.
Secondary Storage
The second array is a mix-and-match box that serves a number of different uses. It is a Norco 4U rackmount case with 20 hot-swap bays. I have a quad-core AMD Phenom II with 8GB of RAM in it. I have a handful of different disk controllers – a SuperMicro SAS controller with a SAS-to-SATA cable adapter, the onboard SATA ports, and a couple of 3Ware 4-port adapters.
I run Ubuntu Linux 10.04, configured with NFS and an iSCSI target so I can present storage to VMware in whichever way works best.
I have three Intel gigabit PCI-Express NICs connected to the network for stable, performant connectivity.
I have four 1.5TB drives set up in two RAID-1 pairs for a file share. I have 2TB and 3TB disks set up for media storage and backup targets, and some of these are presented to VMware via NFS for CD/ISO images. I have six independent 500GB disks presented to the VMware infrastructure as scratch disks for temporary deployment of VMs.
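Mounting one of those NFS exports as an ISO datastore can also be scripted. Here is a rough pyVmomi sketch; the export path, datastore name, and addresses below are hypothetical, not my actual layout.

```python
# Rough pyVmomi sketch: mount an NFS export from the secondary array as a
# datastore for ISO images. Paths, names, and addresses are hypothetical.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()            # lab only
si = SmartConnect(host="vcenter.lab.local", user="administrator",
                  pwd="changeme", sslContext=ctx)
content = si.RetrieveContent()
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.HostSystem], True)

nfs_spec = vim.host.NasVolume.Specification(
    remoteHost="192.168.1.60",        # the Ubuntu storage box
    remotePath="/exports/isos",       # hypothetical export path
    localPath="iso-library",          # datastore name as seen by vSphere
    accessMode="readWrite")

# Mount it on every host so the ISO library shows up cluster-wide.
# (CreateNasDatastore raises an error if the datastore is already mounted.)
for host in view.view:
    host.configManager.datastoreSystem.CreateNasDatastore(nfs_spec)

Disconnect(si)
```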
Networking
My networking infrastructure is not very complex. I have three 24-port gigabit switches that I either bought or scavenged over the years. I have one dedicated to iSCSI storage traffic. I have one dedicated to management and traditional VM traffic. I have one that supports VLANs that I use when working with vCloud Director, vFabric Data Director, and other packages that work better with the use of VLANs than flat networks. I also have a 48-port 100Mb switch that I use for all other things in the home network, such as printers, temperature monitors, game consoles, etc.
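When I need a new VLAN-backed network for something like vCloud Director, adding the port group to the hosts boils down to one call per host. This is another hedged pyVmomi sketch; the vSwitch name, port group name, and VLAN ID are placeholders.

```python
# Sketch: add a VLAN-tagged port group to a standard vSwitch on each host.
# vSwitch name, port group name, and VLAN ID are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()            # lab only
si = SmartConnect(host="vcenter.lab.local", user="administrator",
                  pwd="changeme", sslContext=ctx)
content = si.RetrieveContent()
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.HostSystem], True)

pg_spec = vim.host.PortGroup.Specification(
    name="vcd-external-101",
    vlanId=101,                       # tag carried by the VLAN-capable switch
    vswitchName="vSwitch1",
    policy=vim.host.NetworkPolicy())  # default policy

for host in view.view:
    host.configManager.networkSystem.AddPortGroup(portgrp=pg_spec)

Disconnect(si)
```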
Everything goes out of the house through an old Linksys WRT54GS that I flashed with the DD-WRT enhanced firmware. I have a SonicWall TZ2400, but I have not taken the time to get it running yet. I have business-class cable Internet that just works, which is saying something for rural internet. I have a VLAN carved off just for a DMZ, and have a couple of VMs there.
VMware
I use VMware vSphere 5.0 in the home lab. It’s the best server virtualization platform on the market today, and it makes my life easy. It allows me to abstract the resources in the compute nodes away from the underlying hardware, and it scales however I need it to.
To conserve power and pump less heat into the basement, I have Distributed Power Management, or DPM, configured with an aggressive threshold. In a nutshell, if the VMware infrastructure determines that the workload can be consolidated onto a subset of the servers, it will clear off a node and put it to sleep. If the workload increases and it is determined that the extra resources are needed, it will wake the sleeping machine back up and rebalance.
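For the curious, enabling DPM on a cluster programmatically looks something like the sketch below. The cluster name and connection details are placeholders, and I’m only showing the enable-and-automate part; the aggressiveness slider corresponds to the hostPowerActionRate field (1–5), which I leave at its default here.

```python
# Sketch: turn on DPM in automated mode for a cluster. Cluster name and
# connection details are placeholders; the UI's aggressiveness slider maps to
# DpmConfigInfo.hostPowerActionRate (1-5), which is not set in this sketch.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()            # lab only
si = SmartConnect(host="vcenter.lab.local", user="administrator",
                  pwd="changeme", sslContext=ctx)
content = si.RetrieveContent()
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.ClusterComputeResource], True)
cluster = next(c for c in view.view if c.name == "Intel-Cluster")  # placeholder

spec = vim.cluster.ConfigSpecEx(
    dpmConfig=vim.cluster.DpmConfigInfo(
        enabled=True,
        defaultDpmBehavior="automated"))  # let DPM power hosts off and on itself

task = cluster.ReconfigureComputeResource_Task(spec=spec, modify=True)
# A real script would wait on the task; omitted here for brevity.

Disconnect(si)
```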
Compute Nodes – AMD
Here is where the fun is at. I have two VMware clusters – an Intel cluster and an AMD cluster.
The AMD cluster is made up of four older HP DL385s. They are dual-socket, dual-core first-generation AMD Opteron servers, and each has 16GB of memory. I use two 18GB SCSI disks per server to run ESXi 5.0 from. I have two dual-port PCI-X Intel gigabit network adapters in each. Three are currently racked and one is sitting on the side. Two of these will become Hyper-V 3 exploratory servers once the next version hits RTM.
These are not the fastest machines, but they are fast enough for my purposes, and they have been absolutely rock solid. I cannot stress this enough: with my home lab, speed is not the primary concern. The time it takes to maintain this environment matters more to me than raw performance, because I simply don’t have enough free time to babysit misbehaving equipment. In eight years of having the DL380s and, most recently, the DL385s in my lab, I have never had to replace or repair anything on this hardware. This includes power supplies, fans, power regulators, etc. This experience keeps me purchasing HP for the home setup when appropriate.

These nodes can be purchased from eBay for less than $150 shipped.
Compute Nodes – Intel
My more recent cluster purchase was three SuperMicro X7DCA-Ls. They are 1U, dual-socket, quad-core Intel Xeon L5420 servers with 24GB of RAM and a 250GB internal drive. With rails, they were each less than $350 shipped. I have a PCI-Express riser card in each server so that I can add an additional network adapter, and I purchased a dual-port PCI-Express Intel gigabit network adapter for each of these servers. Again, they are not the fastest, but they are great buys for the money.
These servers are fast, quiet, and reliable. I simply did not want to spend the money on equivalent HPs at the time I purchased these, and I am happy with my decision. ESXi 5.0 works great on them, and they serve my purposes very well. I’ll probably be buying two or three more by the end of the year.
Virtual Machines
As for the assortment of virtual machines, I have a little bit of everything. I’ve got about 85 VMs at the moment, with about 20 running all the time. MSDN and VMware trials help a lot.
Of course, I have a bunch of SQL Server VMs around for testing. I have everything from SQL Server 2000 up to AlwaysOn clusters with SQL Server 2012.
I am experimenting with VMware vCloud Director so that I can rapidly provision and destroy machines for individual tests without having to worry about consuming network IPs, renaming and configuring domain settings, etc.
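vCloud Director wraps this up in catalogs and its own API, but the underlying pattern is simply clone from a template, run the test, and tear the copy down. As a rough illustration of that pattern only – this is plain vCenter cloning via pyVmomi, not the vCloud Director workflow, and every name below is made up:

```python
# Illustration of the clone / test / destroy pattern against plain vCenter.
# This is NOT the vCloud Director API; template, cluster, and VM names are
# all hypothetical.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()            # lab only
si = SmartConnect(host="vcenter.lab.local", user="administrator",
                  pwd="changeme", sslContext=ctx)
content = si.RetrieveContent()

vm_view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.VirtualMachine], True)
template = next(vm for vm in vm_view.view if vm.name == "win2008r2-template")

cl_view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.ClusterComputeResource], True)
pool = cl_view.view[0].resourcePool               # root resource pool of the first cluster

clone_spec = vim.vm.CloneSpec(
    location=vim.vm.RelocateSpec(pool=pool),      # templates need a target pool
    powerOn=True,
    template=False)

# Spin up a throwaway VM from the template...
task = template.CloneVM_Task(folder=template.parent, name="scratch-sql-test",
                             spec=clone_spec)
# ...run the experiment, then tear it down when finished:
#   vm.PowerOffVM_Task(); vm.Destroy_Task()

Disconnect(si)
```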
I also have experimental Active Directory environments, Linux firewalls and routers, development tools such as Subversion and Redmine, security toolkits like BackTrack and Metasploit, and an occasional virtual desktop or two.
I try to have the capability to simulate any situation that I might encounter in the day-to-day life of this geek. This gives me the ability to experiment with patches and infrastructure so that when I am in the field, I know what to expect and what to warn customers about.
I also want the computing power to work through any one of the research projects that I think of. All of the examples in this blog come from this environment!
Conclusion
Anyway, that’s the end of the home lab details. It has taken a while to get to this state, and the lab is always in flux, evolving with my changing needs (and the availability of the almighty dollar to fund the changes). Contact me if you want more details on the lab or have any other questions!
And… when I was taking photos of the equipment at Kendal Van Dyke (@SQLDBA)’s request, my cat wandered in and wanted a photo snapped of him! Say hello to my cat Jack.