Re-organization of my multi-site homelab deployment – Part 1: Primary datacenter

We are now a month on from my previous post about the re-organization of the lab, and a lot of things have started moving already. In this post I will explain what has changed from the first plan and what has been arranged so far. The biggest change this past month is that I have found a new job, where I start on Nov 8th, which means my datacenter rental will expire soon. My last working day at my current employer is Oct 29th, and I want to have everything moved over before I start the new job. This post focuses on my primary datacenter location; I also had to close down another one, which will get its own post.

Primary location - Trans-IX Datacenter Amsterdam

I had to move out of the rackspace that my current employer was renting and find something new. Through a supplier of one of our customers I got in contact with Trans-IX, and last week I signed a contract for 3U of rackspace and a /29 IP block. I will write a separate post later about the network design behind the lab, as I am going to try some cool things with these IP addresses. The previous plan was to rent 2U and run a virtualized firewall on the server cluster. As a Cisco guy I wanted to use a Cisco ASA or FTD firewall, which is possible, but that was way too expensive; it turned out cheaper to rent 1U more and reuse my current ASA 5506-X.

The plan is to move in on Oct 16th, but I still have to prepare a couple of things. Below I will explain my setup and what hardware I am going to use.

Location

I am renting 3U of rackspace at Trans-IX, hosted at the NorthC location in Amsterdam. Power costs will be determined after the first week, when they will adjust the power usage on my contract based on what I actually draw (hopefully it gets lower). I will have a single uplink cable but a /29 subnet.
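For context on how much a /29 actually gives you, here is a quick sketch with Python's `ipaddress` module. The block shown is a hypothetical one from the TEST-NET-3 documentation range; the real assigned block is not listed in this post.

```python
import ipaddress

# Hypothetical /29 from the TEST-NET-3 documentation range;
# the actual assigned block is not published here.
block = ipaddress.ip_network("203.0.113.0/29")

print(block.num_addresses)       # 8 addresses in total
print(len(list(block.hosts())))  # 6 usable (network + broadcast excluded)
```

In a typical colo setup the provider gateway and the firewall's outside interface each take one of those six, which still leaves a handful of public addresses to experiment with.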

Hardware

2x Dell PowerEdge R620

  • 2x Intel E5-2640 v1 (got a deal on two servers; will probably upgrade the CPUs to v2 later)
  • 240GB DDR3 ECC registered
  • 8x 600GB 10K SAS
  • 1x 500GB NVMe SSD
  • Dell H310 MM flashed to IT mode
  • Broadcom dual-port 10GbE NIC
  • Broadcom quad-port 1GbE NIC

Dell PowerConnect 5524

  • 24x 1GbE ports
  • 2x 10GbE ports
  • HDMI stacking ports
  • Fully managed

Cisco ASA 5506-X

  • 8x 1GbE ports

Quick picture of the planned hardware at home. The old ASA is just there to visualize the new setup.

Software

The software has not really changed that much from the planning post. I will still be using VMware ESXi in a 2-node vSAN cluster, but instead of using a Raspberry Pi as a witness node I will use another host in my network. The VPN has been so stable over the past 2 years of operating my homelab that I am willing to try it without the Pi. And if I do end up needing a local witness node, I can now easily add one thanks to the extra rackspace and the switch.

Networking

Last time I was still researching the best way to set up the networking at this location, but now it will be the easiest part of all. I will reuse the same Cisco ASA that I currently have at the datacenter location and only change the WAN IP addresses. The internal networks will stay exactly the same. To connect everything together I only have to create a couple of VLANs on the Dell switch and set up the LACP trunks.
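As a rough sketch, the VLAN and LACP part on a PowerConnect 5524 looks something like the following. The VLAN IDs and port numbers are made up for illustration, and the exact syntax can differ per firmware version:

```
vlan database
vlan 10,20
exit

interface range gigabitethernet 1/0/1-2
channel-group 1 mode auto
exit

interface port-channel 1
switchport mode trunk
switchport trunk allowed vlan add 10,20
exit
```

On the 55xx CLI, `channel-group ... mode auto` negotiates the bundle with LACP, while `mode on` would create a static port-channel.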

After the configuration here is finished, I will have to add the new IP addresses to the configs of the other locations to rebuild the VPN and BGP sessions.
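For reference, the BGP side of such a rebuild on an ASA (supported since 9.2) looks roughly like this. The ASN, neighbor address, and advertised network below are placeholders, not my real values:

```
router bgp 65010
 bgp log-neighbor-changes
 address-family ipv4 unicast
  neighbor 203.0.113.9 remote-as 65020
  neighbor 203.0.113.9 activate
  network 10.10.0.0 mask 255.255.255.0
```

After the move, only the neighbor statements pointing at the old WAN addresses need to change on the remote ends; the internal networks being advertised stay the same.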

After all the changes to the location have been finished, I will write a new post explaining the network design I have chosen.

Costs

Below is a quick write-up of the one-time and monthly costs of this move.

Hardware costs:

  • 2x Dell R620 with dual E5-2640 CPUs / 4x 8GB memory / dual 10GbE NIC / dual 1GbE NIC (not used) = €510 together
  • 2x Dell H310 MM RAID controllers = €50
  • EZDIY-FAB PCIe-to-NVMe card = €15
  • 4x SanDisk Cruzer Fit 16GB = €21
  • Network and power cables = €85

Hosting costs:

  • 3U + power/internet = ~€150 monthly

Total costs: €681 one-time, ~€150 monthly
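A quick sanity check on the one-time total, with the prices taken from the list above (item names shortened):

```python
# One-time hardware costs from the list above, in EUR
one_time = {
    "2x Dell R620 bundle": 510,
    "2x Dell H310 controllers": 50,
    "PCIe-to-NVMe adapter": 15,
    "4x 16GB USB sticks": 21,
    "cables": 85,
}

total = sum(one_time.values())
print(f"one-time total: EUR {total}")  # one-time total: EUR 681
```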

It became quite a lot more expensive than planned, but it does give me a better and more complete solution for my primary location. Making this change also freed up a couple of other servers that can be sold to recover some of the costs.

Conclusion

The last post was really all about a plan, an idea I had at the time for keeping my datacenter location alive. Now, a month further on, most of that plan has made it through and become reality. Some parts did change along the way, mainly to make the setup more stable and complete instead of as cheap as possible.