Archive | Home Lab

New Gear – Synology DS1813+

Late last week saw the arrival of a long-awaited new NAS, the Synology DS1813+. This unit will replace my extremely old ReadyNAS NVX. I am now in the process of migrating data from the old box to the new one. Once that is complete, I will have some more information to share in a series of posts in the home lab and other sections of this blog. For now, I give you the hardware specs for the Synology DS1813+ to whet your appetite for more info!

Hardware

  • CPU Frequency : Dual Core 2.13GHz
  • Floating Point Unit
  • Memory : DDR3 2GB (Expandable, up to 4GB)
  • Internal HDD/SSD : 3.5″ or 2.5″ SATA(II) X 8 (Hard drive not included)
  • Max Internal Capacity : 32TB (8 X 4TB HDD) (capacity varies by RAID type; see the sketch after this list)
  • Hot Swappable HDD
  • External HDD Interface : USB 3.0 Port X 2, USB 2.0 Port X 4, eSATA Port X 2
  • Size (HxWxD) : 157 X 340 X 233 mm
  • Weight : 5.21kg
  • LAN : Gigabit X 4
  • Link Aggregation
  • Wake on LAN/WAN
  • System Fan : 120 X 120mm X 2
  • Easy Replacement System Fan
  • Wireless Support (dongle)
  • Noise Level : 24.1 dB(A)
  • Power Recovery
  • AC Input Power Voltage : 100V to 240V AC
  • Power Frequency : 50/60 Hz, Single Phase
  • Power Consumption : 75.19W (Access); 34.12W (HDD Hibernation)
  • Operating Temperature : 5°C to 35°C (41°F to 95°F)
  • Storage Temperature : -10°C to 70°C (14°F to 158°F)
  • Relative Humidity : 5% to 95% RH
  • Maximum Operating Altitude : 6,500 feet
  • Certification : FCC Class B, CE Class B, BSMI Class B
  • Warranty : 3 Years
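
Since usable capacity depends on the RAID level, here is a quick sketch of the arithmetic behind that 32TB maximum. This is a rough illustration only; it ignores filesystem overhead, hot spares, and Synology's SHR variants.

    def usable_tb(drives: int, size_tb: float, raid: str) -> float:
        """Rough usable capacity for common RAID levels (ignores overhead)."""
        parity = {"raid0": 0, "raid5": 1, "raid6": 2, "raid10": drives // 2}
        return (drives - parity[raid]) * size_tb

    for level in ("raid0", "raid5", "raid6", "raid10"):
        print(level, usable_tb(8, 4.0, level), "TB")
    # raid0 32.0 TB, raid5 28.0 TB, raid6 24.0 TB, raid10 16.0 TB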

Home Lab 3.0

Today, I performed a huge update to the home lab. There have been so many changes since my last post that I think it warrants a v3.0 designation. What follows is some info on the new hardware.

First, let’s detail the new ESXi hosts. I recently discovered the Intel NUC via Twitter and a couple of blog posts. This box is a small form-factor machine (about 4in by 4in) with a dual-core 1.8GHz Core i3, supporting up to 16GB of RAM. It is smaller and lower-power than the Mac Mini ESXi hosts that I built a year or so ago. I wish these had been around back then, as I could have built five or six of them for the price of the two Mac Minis that I am still using.

My goal in adding these machines to the home lab was to provide additional resources for my vCloud Suite testing. Now I have three functional clusters with at least two hosts each. The first, a 16GB, 2-node cluster, will be used for the vCenter Server Appliance and miscellaneous management systems. The second, a 32GB, 2-node cluster, is for the vCloud Director appliance and vShield. Finally, the new 80GB, 5-node cluster will be used for vCloud resources. Below is a networking diagram detailing most of the environment.

As you can see from the diagram, I am up to 9 hosts at this point. Pretty crazy, I know, but there are so many products in VMware’s portfolio that it’s difficult to test them without a decent amount of memory.

[Diagram: home lab network layout; screenshot: vCenter inventory]

Also, as you can see, I am running several Netgear ReadyNAS units. The main one is the ReadyNAS NVX, which I use for iSCSI and NFS connectivity. The other two, ReadyNAS Duo units, are used primarily for ISO and template storage and run NFS exclusively. I am expecting some new hardware that I will review and detail in the coming months, so you’ll have to watch this space for more information about that.

Finally, I added an 8-port Netgear switch, the GS108T, which supports LACP and will give me a good way to aggregate ports for the new NAS in the near future (see the sketch below). It comes at a good time, since one of my GS116 switches is having some issues; I’ll need to send that one in for repair shortly.
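
To give a sense of what the LACP support buys: the Synology side will be configured through the DSM GUI, but since the NAS is Linux underneath, an 802.3ad (LACP) bond in plain iproute2 terms looks roughly like the sketch below. This is illustrative only; the interface names and address are hypothetical, and the connected GS108T ports must be configured as a matching LAG.

    # Create an 802.3ad (LACP) bond and enslave two NICs.
    ip link add bond0 type bond mode 802.3ad miimon 100
    ip link set eth0 down && ip link set eth0 master bond0
    ip link set eth1 down && ip link set eth1 master bond0
    ip link set bond0 up
    ip addr add 192.168.1.50/24 dev bond0   # hypothetical NAS address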

Stay tuned for more updates soon. Thanks for reading!

Home Lab Update – Japan Edition

Some small part of my time here in Japan, outside of my working hours, has gone into making some changes and improvements to the home lab (it’s a global network, after all ;).

While I was living here, I had a lab in the server closet of my house. It consisted of a VPN firewall, a couple of switches, a NAS, and a couple of ESXi servers. You may have seen a post detailing the hardware in a previous entry (it has been a great while, though!). In the US, I had a smaller version of this setup, all of it connected via VPN to Japan. This time, I am turning the tables.

I don’t have any lab gear here (yet), per se, but I did bring along a WatchGuard XTM to create a “branch office” VPN here. At the very least, this makes it easier for me to troubleshoot issues with my parents-in-law’s computer. I may add to this infrastructure in the future as a DR site, given the availability of extremely low-power PCs such as the Intel NUC. Despite the cheap and fast bandwidth here, though, the latency to the US remains a hurdle to any serious use.

In addition, I brought along a Cisco SPA303 SIP phone. This is part of an experiment I started a while back to get free long distance between the US and Japan (mainly between my wife and her parents). In preparation for this, I have installed and configured Asterisk 11.x on two of my cloud servers inside the Rackspace Cloud. I have configured a softphone on my home PC and on my iPhone to connect to my Asterisk server in the cloud, and I have configured the Cisco SPA303 to connect to the same Asterisk instance.

Which brings me to my issues… NAT (I suspect) is causing me headaches. Although the connection works, whenever I make a call, the audio disappears into oblivion before I can finish saying hello. What’s worse, the audio only works from the iPhone softphone side, not from the Cisco SPA303; I can’t get any audio from that at all.

Does anyone have any suggestions for how to resolve this? I have a couple of options, both of which I have tried on the VPN side: 1) NAT port 5060 to the internal IP of the phone, or 2) set up a SIP application-layer gateway on the WatchGuard (I’m not exactly sure I am doing it right). It doesn’t sound like I can do both at the same time; I have to choose one or the other. On the Asterisk side, the usual first step is to tell chan_sip about the NAT situation, as in the sketch below.
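
For what it’s worth, one-way or missing audio through NAT is usually an RTP problem rather than a SIP one: port 5060 only carries the call signaling, while the audio travels over a separate range of RTP ports that also has to traverse the firewall. Below is a minimal sketch of the NAT-related settings for Asterisk 11’s chan_sip; the peer name and secret are hypothetical placeholders, not my actual configuration.

    ; /etc/asterisk/sip.conf -- NAT-related settings for Asterisk 11 (chan_sip)
    [general]
    nat=force_rport,comedia   ; trust the source address/port of incoming packets,
                              ; not what the NATed phone advertises in its SDP

    [spa303]                  ; hypothetical peer entry for the SPA303
    type=friend
    host=dynamic
    secret=changeme           ; placeholder
    directmedia=no            ; keep RTP relayed through Asterisk rather than
                              ; letting the endpoints try to talk directly
    qualify=yes

Since the Rackspace server has a public IP, externip/localnet should not be needed there, but the RTP port range (10000-20000 by default, set in rtp.conf) does have to be reachable on the server; if only 5060 is open, calls will set up fine but carry no audio, which matches the symptom above.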

Any help would be greatly appreciated. Thanks for reading!

ESXi 5 on an Apple Mac Mini – Update

Well, yesterday brought some potentially good news for those of us who are both Apple and VMware fanboys. The major limitation of the Mac Mini that most people were warning me about (and I agreed) was that, when running ESXi, we were limited to a single GigE port. Once you add FT, iSCSI/NFS, and other network traffic, you would quickly hit a wall in terms of network bandwidth.

It seems Apple has come out with a Thunderbolt-to-GigE adapter that could potentially alleviate this bottleneck. The question is whether this will truly be an extension of the PCI Express bus, as some have suggested, and show up without any additional Thunderbolt drivers for ESXi, or whether we will need to add a custom/third-party VIB into the vmkernel to get it to work.
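
Either way, it is easy to check what the vmkernel sees from the ESXi shell. A quick sketch (device names will vary):

    # List the NICs that ESXi has bound a driver to; a recognized adapter
    # shows up as an extra vmnic alongside the on-board port.
    esxcli network nic list

    # If nothing new appears, check whether the device is at least visible
    # on the PCIe bus, which would point to a missing driver rather than
    # the adapter not being seen at all.
    lspci | grep -i network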

The good news is that the Thunderbolt-to-GigE adapter is available now for only $29. My plan is to drop by the Apple Store on my way home from work today and try it out when I get home. Worst case, if it doesn’t work for some reason, I have a GigE adapter for my MacBook Air.

Let me know via comment on this post or on other forms of social media if you have tried or plan to try this. Perhaps we can compare notes and try to get it working… assuming there are issues!

ESXi 5 on an Apple Mac Mini

I have been looking to refresh my home lab environment for some time now. Since I moved into a rental townhouse upon my move to NC, I have had power issues when my home lab is 100% powered on. A couple of times, I have even tripped the breaker entirely. Between my desktop machine, (2) HP ProLiant ML110 G5s, a Netgear ReadyNAS NVX, and (2) Netgear ReadyNAS Duos, it’s just too much load for a single breaker to handle, to say nothing of the cost of powering the whole thing.

I have had my eye on the Apple Mac Mini 2011 w/Lion Server since it was released, as it would make the perfect low-power and low-noise option. This model comes with a quad-core CPU, supports up to 16GB of RAM, and uses 85W (peak) of power per machine. The current servers I have are dual-core and are all limited to 8GB of RAM. The idea was to replace the (4) 8GB servers that I have with (2) Apple Mac Minis running ESXi 5.0U1. I had some minor concerns about the fact that the Mini has only a single NIC, but I don’t really foresee 100% utilization; if I do hit that wall, I always have the older servers available for more capacity. Perhaps someone will come up with a Thunderbolt-to-Ethernet adapter to address this bottleneck.
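
As a rough sense of the power math, here is a back-of-the-envelope sketch. The 85W peak figure is from Apple’s specs above; the old servers’ draw and the electricity rate are hypothetical.

    # Back-of-the-envelope: watts running 24/7 -> dollars per year.
    def yearly_cost(watts: float, rate_per_kwh: float = 0.11) -> float:
        return watts / 1000 * 24 * 365 * rate_per_kwh

    print(yearly_cost(2 * 85))   # two Minis at peak: ~$164/year
    print(yearly_cost(4 * 250))  # four hypothetical 250W servers: ~$964/year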

When the new Mac Mini was released, someone tried to get ESXi installed and running but had issues getting the on-board Gigabit NIC recognized in ESXi. Apparently, the driver for the Broadcom NIC that Apple uses didn’t get included in the release of ESXi 5. As a result, I put my plans on the back burner, thinking that someone would eventually figure it out.

Well, it finally appears as though someone got it working with ESXi by installing a custom VIB from VMware for the infamous Broadcom NIC (found here). This was posted on the following site back in January (I have been busy; what can I say?). The blog post was pointed out to me on Google+ by my blogger friend and fellow Tech Field Day delegate, Shannon Snowden, over at Virtualization Information.

Since he was also successful in getting it working, I thought I would take a stab at it. I did, however, have a couple of prerequisites around what I wanted to accomplish by doing this.

I wanted the ability to host nested ESXi servers on the machine so that I could have an all-in-one ESXi lab cluster. To realistically accomplish this, I needed an SSD in the machine, one with high IOPS performance and enough capacity to hold all the VMs. I currently use an OCZ Vertex 2 in my desktop machine, so I did a quick search of the current deals at Newegg, Amazon, etc. By coincidence, Newegg had a deal (which has since expired) for a 240GB OCZ Vertex 3 with a free 32GB OCZ Onyx for $249.99 (before a $20 rebate). They also had the 2x8GB Corsair SODIMMs that others have installed successfully in the Mac Mini for $99. I decided to pick up two of each in anticipation of a 2-node build.

The parts arrived two days later, so I took a lunch trip over to the Apple Store and picked up the Mac Mini w/Lion Server ($939 with my NetApp discount). After returning, I spent 30 minutes disassembling the Mini and swapping out the RAM and the two HDDs. It was a bit tricky getting at the 2nd HDD, but the process is well documented on iFixit. After getting the hardware in working order, I installed ESXi on the 32GB drive and reserved the 240GB drive as a local VMFS datastore. In addition, I added the required Broadcom NIC driver (a sketch of the install commands follows below). It only took 1-2 hours total, including the hardware upgrade, to get everything working. Once it was proven that everything worked, I picked up another Mini last Friday and performed the same operation; this time it went much faster. By about 10PM on Friday night, I had a working, silent, 2-node ESXi cluster.
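
For reference, installing a third-party driver VIB like this one is done from the ESXi shell (SSH or the local console). A minimal sketch, assuming the VIB has already been copied to a local datastore; the filename is a placeholder, not the actual driver package name:

    # Allow community-built VIBs (the default acceptance level is stricter).
    esxcli software acceptance set --level=CommunitySupported

    # Install the driver from an absolute datastore path, then reboot.
    esxcli software vib install -v /vmfs/volumes/datastore1/net-broadcom-driver.vib
    reboot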

I did run into one issue that I want to point out… Upon powering on the 2nd machine and starting the ESXi installer, I noticed some pretty sluggish performance. The installer was taking longer to go through the motions than it had for the initial build, and I thought maybe I had a memory issue or some other problem. As I was moving the machine around on my desk, I noticed an unreasonable amount of heat coming off the aluminum casing. I removed the cover/foot from the bottom and realized that the power connector for the fan wasn’t fully seated, and thus the fan was not spinning. As this is the last component to be re-installed after replacing the HDDs, it is somewhat difficult to ensure that it is re-seated properly. Also, it’s hard to hear whether the fan is spinning, as the Mini is so quiet. Please check it carefully before closing the access panel so you don’t make the same mistake. Thankfully, it doesn’t appear that any harm was done, and the system is running much better and much cooler now.

Once I had everything in working physical order, I started setting up the virtual layer to suit my needs. I began by moving my AD, MSSQL, and vCenter VMs temporarily to the local SSD storage to get an idea of the performance. It was pretty good; however, I wanted to make sure I wasn’t isolating those important VMs on local storage, so I moved them back to my iSCSI datastore on the ReadyNAS, where they now sit. This allowed me to use Update Manager to update the ESXi installs with all the latest patches.

I can report that the Minis are working quite well and successfully survived a “burn-in” over the weekend. I haven’t done any real load testing on them yet, but I do plan on getting the nested ESXi builds started this week (a sketch of the nesting configuration follows below). I will update you all with another blog post on the results of that testing.
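
For anyone planning the same experiment, nested ESXi on ESXi 5.0 needs virtual hardware-assisted virtualization switched on. Below is a minimal sketch of the commonly documented settings; verify them against current community guidance before relying on this.

    # On the physical ESXi 5.0 host, append to /etc/vmware/config to allow
    # virtual hardware-assisted virtualization (VHV) for guests:
    vhv.allow = "TRUE"

    # In the nested ESXi VM's .vmx file, identify the guest as an ESXi
    # kernel and run the monitor with hardware virtualization:
    guestOS = "vmkernel"
    monitor.virtual_exec = "hardware"
    monitor.virtual_mmu = "hardware"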

Finally, I have decided that this is such a cool use case for nested ESXi, and such perfect hardware for a home lab, that last week I submitted an abstract for VMworld on how to implement it and get the most out of it as a way to learn ESXi with a very portable VMware lab solution. Please stay tuned as public voting becomes available. I would love to have your vote so that I can convey the importance and usefulness of a home lab to more current and potential VMware professionals. Also, stay tuned for more on the Mac Mini as I perform additional testing.

Thanks, as always, for reading!
