Faking Bare Metal in the Cloud
with Ravello Systems
As CloudOps’ Lead OpenStack Architect, I wrote this technical how-to explaining how Ravello Systems can simulate a bare metal lab using public cloud resources like AWS and Google Cloud. Ravello provides capabilities, such as nested virtualization, that public clouds do not expose natively, while keeping the lab on a monthly operational expense model.
Using Ravello Systems to Simulate a Bare-Metal-Like Mirantis OpenStack Lab on AWS and Google Cloud
A month before joining CloudOps (January 2015), I was looking for a way to leverage Amazon Web Services (AWS) to deploy an OpenStack-based cloud. I could have used DevStack on AWS (QEMU) and it would have worked, but my requirements were highly specific: I wanted to replicate a Mirantis OpenStack deployment using public cloud resources.
Like many cloud testing ideas, it started with a Google search, and thanks to AdWords, Ravello popped up at the top of the list. After going through the website and grasping a few elements of how they were using nested virtualization, I was hooked.
It was exciting to think that I could go to my soon-to-be CEO and COO and say that my bare metal lab was no longer necessary and that I would have an all-OPEX environment.
About Mirantis OpenStack (MOS)
MOS is a hardened OpenStack distribution with the Fuel deployment orchestrator at the helm. Basically, once your Fuel node is up, you PXE boot the other nodes of your OpenStack cloud and they will show up in Fuel under “unallocated nodes”. From there you select them, assign them a role, and hit deploy. Note that MOS allows deploying the control plane in HA, which is a real time saver.
This implementation is based on Mirantis OpenStack 6.0.
Step 1 – Preparing the network layout (VLAN)
Fuel operates with a set of logical networks. In this scheme, these logical networks are mapped as follows:
- Admin (Fuel) network: untagged in this scheme
- Public/Floating network: VLAN 101 (we don’t use VLAN tagging for it in our lab)
- Management network: VLAN 100
- Storage network: VLAN 102
- Fixed/Private network: VLANs 103-200 (for our lab I used 110 to 119)
Since we are doing one network per NIC, the VLAN tagging (trunk) at the Ravello level is only necessary for the Fixed/Private network.
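The plan above can be summarized in a short sketch (Python here is purely illustrative; the subnets are the ones this lab uses in the later steps, with the admin/PXE network on 10.30.0.0/24):

```python
# Logical-network plan for this lab: one network per NIC, so VLAN tagging
# (a trunk at the Ravello level) is only needed on the fixed/private network.
lab_networks = {
    "admin (PXE)":     {"subnet": "10.30.0.0/24",   "vlans": [],    "ravello_trunk": False},
    "public/floating": {"subnet": "172.16.0.0/24",  "vlans": [],    "ravello_trunk": False},
    "management":      {"subnet": "192.168.0.0/24", "vlans": [100], "ravello_trunk": False},
    "storage":         {"subnet": "192.168.1.0/24", "vlans": [102], "ravello_trunk": False},
    "fixed/private":   {"subnet": "192.168.2.0/24",
                        "vlans": list(range(110, 120)),  # VLANs 110-119
                        "ravello_trunk": True},
}

# Only the fixed/private NIC needs to be configured as a trunk in Ravello.
trunked = [name for name, cfg in lab_networks.items() if cfg["ravello_trunk"]]
print(trunked)  # ['fixed/private']
```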
Step 2 – The Fuel Node
The Fuel node starts from the EMPTY VM template in Ravello (two vCPUs / 4 GB of RAM). Make sure two NICs are in place with the following network information: NIC1: 10.20.0.2/24 (gateway and DNS 10.20.0.1); NIC2: 10.30.0.2/24 (no gateway/DNS). You also need to upload the Fuel ISO to your library, attach it to the CD-ROM, and boot the empty node from it.
Of course, you’ll also have to assign external services to NIC1 for external access via SSH and HTTP.
The final network layout of the Fuel node will look like this:
You can now launch the Fuel node and proceed to the installation!
Installation instructions for Fuel, including the PXE configuration changes required prior to installation, are available on the Mirantis website.
All you need to do is activate the second NIC, give it the same IP information as NIC2 in Ravello, and make sure you tell Fuel that NIC2 is the PXE interface.
Step 3 – A little help from Ravello
Now that our Fuel node is up and running, you need to set up a node that will become the template for your control plane and all compute and storage resources.
To do that, start with the EMPTY VM from Ravello, give it five NICs (all set to DHCP), remove all external services, and add iPXE.iso on the CD-ROM as the first bootable device.
You can download iPXE.iso from http://boot.ipxe.org/ipxe.iso and like the Fuel ISO, you’ll need to upload it to your own library.
Lastly, save the VM to your library (More / Save to Library) and enable the “Nested Virtualization” flag so it can run KVM; this VM will now be viewed as a bare metal box for your OpenStack deployment.
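Once the “Nested Virtualization” flag is enabled, you can sanity-check from inside a booted Linux guest that hardware virtualization is actually exposed to it. A minimal sketch (my own check, not part of the Ravello or Fuel tooling):

```python
import os

def kvm_ready(cpuinfo_path="/proc/cpuinfo"):
    """Return True if this Linux guest can run KVM: the CPU flags must
    include vmx (Intel) or svm (AMD), and /dev/kvm must be present."""
    try:
        with open(cpuinfo_path) as f:
            flags = f.read()
    except OSError:
        return False
    has_ext = "vmx" in flags or "svm" in flags   # nested virt exposed by Ravello
    has_dev = os.path.exists("/dev/kvm")         # kvm module loaded in the guest
    return has_ext and has_dev

print("KVM-capable guest:", kvm_ready())
```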
Step 4 – Preparing the other nodes (NON-HA Lab)
Now that you have your Ravello MOS VM, you can build your node pool for your lab deployment.
The steps are simple: deploy as many as needed and prep them by type.
- 1 x Controller
  - 2 vCPUs / 6 GB of RAM / 150 GB of storage
  - Set a static IP on all NICs in accordance with the networks defined in Step 1
  - Make sure ipxe.iso is the first bootable device
- 3 x Compute-Cinder
  - 2 vCPUs / 8 GB of RAM / 200 GB of storage
- 1 x Zabbix (the Zabbix role is available in Fuel 6.0 when experimental features are enabled)
  - 2 vCPUs / 4 GB of RAM / 150 GB of storage
You must pay particular attention to the network settings on each node. Below is an example of Ravello MOS VM network (5 NICs) configuration. Please note that you MUST select VirtIO for the NIC device type.
| Node       | PXE (NIC1)    | Management (NIC2) | Public (NIC3)  | Storage (NIC4) | Private (NIC5) |
|------------|---------------|-------------------|----------------|----------------|----------------|
| Controller | 10.30.0.3/24  | 192.168.0.2/24    | 172.16.0.2/24  | 192.168.1.2/24 | 192.168.2.2/24 |
| Compute-1  | 10.30.0.4/24  | 192.168.0.3/24    | 172.16.0.3/24  | 192.168.1.3/24 | 192.168.2.3/24 |
| Compute-2  | 10.30.0.5/24  | 192.168.0.4/24    | 172.16.0.4/24  | 192.168.1.4/24 | 192.168.2.4/24 |
| Compute-3  | 10.30.0.6/24  | 192.168.0.5/24    | 172.16.0.5/24  | 192.168.1.5/24 | 192.168.2.5/24 |
| Zabbix     | 10.30.0.7/24  | 192.168.0.6/24    | 172.16.0.6/24  | 192.168.1.6/24 | 192.168.2.6/24 |

On the Controller’s public NIC: GW: 172.16.0.1, DNS: 220.127.116.11, with external services on ports 80 and 6080.
Note that only the controller node has Gateway, DNS and external services defined.
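Since the per-node addressing follows a regular pattern, it can be rebuilt programmatically as a sanity check. A sketch (node names and subnets as in the table above; the PXE range starts at .3 because Fuel itself holds 10.30.0.2):

```python
import ipaddress

# NIC order follows Fuel's expectation: PXE / management / public / storage / private.
subnets = ["10.30.0.0/24", "192.168.0.0/24", "172.16.0.0/24",
           "192.168.1.0/24", "192.168.2.0/24"]

# First host octet used per subnet: .3 on the PXE network (Fuel is .2), .2 elsewhere.
first_host = [3, 2, 2, 2, 2]

nodes = ["Controller", "Compute-1", "Compute-2", "Compute-3", "Zabbix"]

plan = {}
for offset, name in enumerate(nodes):
    plan[name] = [
        str(ipaddress.ip_network(cidr).network_address + start + offset)
        for cidr, start in zip(subnets, first_host)
    ]

for name, nics in plan.items():
    print(name, nics)
```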
The final network layout of your MOS Lab will look like this:
Step 5 – Booting up the nodes and using Fuel to deploy your OpenStack lab
Now that you have a full lab defined in Ravello, boot all the nodes so that Fuel can detect them.
Node booting up on iPXE:
You can now log into the Fuel interface to create your environment. Use the external-access URL that Ravello assigns to your Fuel node on port 8000.
You can now create your new OpenStack environments:
Now you can add your nodes to your environment. Just match the last four characters of each node’s PXE NIC MAC address in the Ravello interface to the node you see in Fuel, and assign the roles.
Once all the nodes are assigned a role, select them all and click on Configure Interfaces.
Put the NICs in their proper order (PXE / management / public / storage / private) and click Apply.
The last step is to go to the network settings section and adjust the VLANs accordingly; for example, set the Neutron L2 VLAN ID range (110-119 here) to match what you configured on the private NIC in the Ravello interface, then verify the network.
Once saved, click Verify Networks to make 100% sure that your network settings are accurate.
If everything is green, you can click Deploy in the top right corner and the deployment will start.
If the deployment succeeds, you will see the public IPs assigned to Horizon (on the controller) and Zabbix. Make sure the same IPs are mapped on the public network NICs in Ravello (if they differ); you will then be able to log in to Zabbix (IP or FQDN/zabbix) and Horizon from their Internet-facing public IPs.
Once deployed, you can access the Horizon UI on port 80 via the public IP that the Ravello interface assigns to the controller node.
Ravello Systems enables us to build our labs using public cloud resources and save the application blueprint for future use, all on an OPEX billing model.
This is by far the most flexible lab/learning environment I’ve come across. SMBs and large enterprises alike could benefit from this service.
By Stacy Véronneau, Lead OpenStack Architect at CloudOps
Since 2005, CloudOps has enabled hundreds of enterprises and web-based companies to build their businesses in the cloud. Our best-in-class cloud architecture and proven approaches allow companies to confidently, securely, and reliably capture business opportunities while achieving higher levels of performance.
We also build, own and operate cloud.ca, a 100% Canadian cloud infrastructure.