Plan: re-organization of my multi-site homelab deployment – Part 1: primary datacenter

After a two-year break from posting and using this blog, I am back! The last time around I was working on a new post showing you my plans for setting up my multi-site homelab network; sadly I did not finish or save the post, so it never appeared on the site. Now that I have used this homelab for some time, it is time for some re-organization of the lab. I will probably be working somewhere else next year, which will close down two of the four locations I currently have for hosting servers. The primary datacenter location will be closed down soon anyway, as we are slowly moving our customers over to cloud-based services and Azure VMs; the other location might still be saved.

A couple of ideas I had prior to creating this post helped me put together the following plan.

Primary location - Trans-ix Datacenter Amsterdam

My current setup is based around one primary location and a fallback location. The fallback will always be my home network, but I prefer to have the primary in a datacenter somewhere. At the moment I am lucky that I have been able to place my server in the rack that my employer is renting, so I only have to pay for the energy that the server consumes. I did research the cost of renting my own rack space, with the idea of renting 5U (2x 2U servers + 1U for networking), but this would become really expensive and I did not want to pay that much. 2U should be doable, though, if I subtract the current costs for power and the VPS that I have hosted at OVH. There is also a possibility that I can obtain two HP DL360p Gen8 servers with 24x 8GB of memory each; my plan is centered around these servers.

Location

Trans-ix Amsterdam, renting 2U of space with hopefully 2A of power. Networking could be done with a single IPv4 address if my plan for the other location goes through.

Hardware

2x HP DL360p Gen8 with the following hardware:

  • 2x Intel Xeon E5-2630 v2 (might upgrade to E5-2697 v2)
  • 192GB DDR3 ECC registered
  • 8x 600GB 10k SAS (if possible)
  • 1x 500GB NVMe SSD (have to buy)
  • Built-in P420i RAID controller
  • Dual-port 10GbE NIC
  • Quad-port 1GbE NIC (FlexibleLOM)

1x Raspberry Pi 4 8GB (have to buy)

  • 32GB microSD card for boot
  • 2x 32GB+ USB sticks for vSAN storage

Software

The two servers will be running ESXi with vSAN in a 2-node cluster. One of the requirements for this setup is a witness node. I could run the witness on one of the other servers in my lab, but I want to try it with the RPi. The Raspberry Pi is small enough that I can probably just tuck it somewhere in the rack without it consuming more U space. There are also HATs that can take two USB inputs to create a redundant power supply for the RPi; this could be fed from the two servers.
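Once the cluster is up, a small script can keep an eye on it. Below is a minimal pyVmomi sketch, with assumed hostnames, credentials, and cluster name, that connects to vCenter and prints the connection state of every host in the cluster; with only two data nodes plus a witness, a single disconnected member is already worth knowing about.

```python
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

VCENTER = "vcenter.lab.local"   # hypothetical: vCenter runs on this cluster
CLUSTER = "dc1-vsan"            # hypothetical cluster name

# Homelab assumption: self-signed certificates, so skip verification.
ctx = ssl._create_unverified_context()
si = SmartConnect(host=VCENTER, user="administrator@vsphere.local",
                  pwd="secret", sslContext=ctx)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.ClusterComputeResource], True)
    cluster = next(c for c in view.view if c.name == CLUSTER)
    view.Destroy()

    # Print each cluster member and whether vCenter can still reach it.
    for host in cluster.host:
        print(f"{host.name}: {host.runtime.connectionState}")
finally:
    Disconnect(si)
```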

Management of the servers will be done using vCenter, which will be hosted on this cluster as well (the primary location hosts vCenter).

Networking

This is the part where I still have to do some research. The idea now is to connect the two servers directly to each other using the 10GbE ports, which will be used primarily for the vSAN traffic. But then I have two other challenges: how do I connect the RPi to the cluster, and how am I going to connect to the internet? My first and most ideal thought would be to use a Cisco Firepower 1010 as my firewall. This firewall has eight switch ports, which gives me a firewall and switch in one single device; it would only cost me another U and some power. The firewall would also fix both of my issues.

I could also use the distributed switch from ESXi and configure one or more uplinks that are connected to each other. One of the servers would also have a second uplink that connects to the RPi. I do not know if this works the way I want it to, and it is also not the best solution.

The last idea I had is buying a small 8-port managed switch and attaching it, together with the RPi, to the servers. On this switch I can connect my uplink to the internet, my RPi, two uplinks to the servers, and the iLO of both servers. Yes, this is a single point of failure, but making it completely redundant would cost a lot more.
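Whichever option I end up with, I can at least verify from vCenter how the uplinks actually got wired. A small pyVmomi sketch, reusing the `cluster` object from the snippet above and assuming standard vSwitches rather than a distributed switch:

```python
from pyVmomi import vim

def print_uplinks(host: vim.HostSystem) -> None:
    """Print the physical NICs and standard-vSwitch uplinks of one ESXi host."""
    net = host.config.network
    for pnic in net.pnic:
        # linkSpeed is None when the port has no link.
        speed = f"{pnic.linkSpeed.speedMb} Mb/s" if pnic.linkSpeed else "no link"
        print(f"  {pnic.device}: {speed}")
    for vsw in net.vswitch:
        # vsw.pnic lists the keys of the physical NICs backing this vSwitch.
        print(f"  {vsw.name} uplinks: {list(vsw.pnic)}")

# Usage, with `cluster` from the earlier snippet:
# for host in cluster.host:
#     print(host.name)
#     print_uplinks(host)
```

If the plan works out, the 10GbE ports should show up here as a vSwitch carrying only the vSAN VMkernel traffic, with the 1GbE FlexibleLOM ports carrying everything else.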

For the firewall I am researching the pricing of virtual Cisco firewalls; my preference goes to Cisco hardware with the AnyConnect VPN.

Costs

The hardware costs for the first location would not be that high with the current plans, because I already have most of the hardware. The hosting costs I still have to work out; I was told it is about 25 euro per month per U, and on top of this you pay the power and networking costs.

Hardware costs:

  • NVMe SSD = ~60 euro each (one per server)
  • RPi = ~90 euro
  • Misc = ~30 euro

Networking costs:

  • 8-port switch = ~100 euro new (will buy this from eBay)
  • Virtual firewall = still waiting on the offer, but probably too expensive

Hosting costs:

  • 2U = 50 euro per month
  • Power 2A = ?
  • Internet, 1x IPv4 = ?
  • Management = ?

Total costs: ~340 euro one-time, ~100 euro monthly
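As a sanity check on those totals (with the unknown monthly items filled in as my own rough guesses, not quotes):

```python
# One-time costs in euro, taken from the lists above.
one_time = {
    "nvme_ssd": 2 * 60,  # one ~60 euro SSD per server
    "rpi": 90,
    "misc": 30,
    "switch": 100,
}
print(sum(one_time.values()))  # 340

# Monthly costs in euro. Power, internet and management are still unknown;
# ~50 euro combined is an assumption that lands at the ~100/month estimate.
monthly = {"rack_2u": 2 * 25, "power_internet_mgmt": 50}
print(sum(monthly.values()))  # 100
```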

Conclusion

The plans for the new location are quite nice and redundant, with quite low hardware costs, but it will cost quite a bit more per month than what I pay now. If I had to guess what I pay now for hosting (minus the services that will stay), it would be around 45 euro per month; this plan will easily double that.

It might also be possible to run this without the RPi at all by deploying the witness node on one of the other servers. But I will have to research the impact that has on the VPN and what will happen if the VPN drops. There are still some things to research and figure out before I can continue.

Looking at the current solutions for the network side of the deployment, I might just have to get 3U of space and do it properly. The low costs are also based on the idea that I can keep the server that my employer gifted me a long time ago. For now it will just stay a plan, but I will research the last couple of things to see if it is viable to deploy this setup.