Installing RDO using Triple-O
The Triple-O project is an OpenStack installation tool developed by the OpenStack community. A Triple-O deployment consists of two OpenStack deployments. One of the deployments is an all-in-one OpenStack installation that is used as a provisioning tool to deploy a multi-node target OpenStack deployment. This target deployment is the deployment intended for end users. Triple-O stands for OpenStack on OpenStack. OpenStack on OpenStack would be OOO, which lovingly became referred to as Triple-O. It may sound like madness to use OpenStack to deploy OpenStack, but consider that OpenStack is really good at provisioning virtual instances. Triple-O applies this strength to bare-metal deployments to deploy a target OpenStack environment. In Triple-O, the two OpenStacks are called the undercloud and the overcloud. The undercloud is a baremetal-management-enabled, all-in-one OpenStack installation that will be built for you in a very prescriptive way. Baremetal-management-enabled means it is intended to manage physical machines instead of virtual machines. The overcloud is the target deployment of OpenStack that is intended to be exposed to end users. The undercloud will take a cluster of nodes provided to it and deploy the overcloud, a fully featured OpenStack deployment, to them. In real deployments, this is done with a collection of baremetal nodes. Fortunately, for learning purposes, we can mock having a bunch of baremetal nodes by using virtual machines.
Mind blown yet? Let's get started with this RDO-Manager-based OpenStack installation and start unraveling what all this means. There is an RDO Manager quickstart project that we will use to get going.
The RDO Triple-O wiki page will be the most up-to-date place to get started with RDO Triple-O. If you have trouble with the directions in this book, please refer to the wiki. Open source changes rapidly, and RDO Triple-O is no exception. In particular, note that these directions refer to the Mitaka release of OpenStack. The release name will most likely be the first thing to change on the wiki page that impacts your future deployments with RDO Triple-O.
Start by downloading the pre-built undercloud image from the RDO Project's repositories. This is something you could build yourself, but doing so would take much more time and effort than downloading the pre-built one. As mentioned earlier, the undercloud is a pretty prescriptive all-in-one deployment, which lends itself well to starting with a pre-built image. These instructions come from the readme of the Triple-O quickstart GitHub repository (https://github.com/redhat-openstack/tripleo-quickstart/):
myhost# mkdir -p /usr/share/quickstart_images/
myhost# cd /usr/share/quickstart_images/
myhost# wget https://ci.centos.org/artifacts/rdo/images/mitaka/delorean/stable/undercloud.qcow2.md5 \
https://ci.centos.org/artifacts/rdo/images/mitaka/delorean/stable/undercloud.qcow2
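Since the checksum was downloaded alongside the image, it is worth a moment to verify the download before using it. Assuming the .md5 file is in standard md5sum output format, a check from the same directory should report undercloud.qcow2: OK:
myhost# cd /usr/share/quickstart_images/
myhost# md5sum -c undercloud.qcow2.md5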
Make sure that your ssh key exists:
myhost# ls ~/.ssh
If you don't see the id_rsa and id_rsa.pub files in that directory listing, run the command ssh-keygen. Then make sure that your public key is in the authorized keys file:
myhost# cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
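If you did have to run ssh-keygen, note that by default it prompts for a file location and passphrase. For an unattended lab host like this one, a non-interactive invocation such as the following works; the empty passphrase (-N "") is a convenience assumption for this lab setup, not a production recommendation:
myhost# ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa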
Once you have the undercloud image and your ssh keys, pull down a copy of the quickstart.sh file, install the dependencies, and execute the quickstart script:
myhost# cd ~
myhost# wget https://raw.githubusercontent.com/redhat-openstack/tripleo-quickstart/master/quickstart.sh
myhost# sh quickstart.sh -u \
file:///usr/share/quickstart_images/undercloud.qcow2 \
localhost
quickstart.sh will use Ansible to set up the undercloud virtual machine and will define a few extra virtual machines that will be used to mock a collection of baremetal nodes for an overcloud deployment. To see the list of virtual machines that quickstart.sh created, use virsh to list them:
myhost# virsh list --all
 Id    Name          State
----------------------------------------------------
 17    undercloud    running
 -     ceph_0        shut off
 -     compute_0     shut off
 -     control_0     shut off
 -     control_1     shut off
 -     control_2     shut off
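If you are curious how these mock baremetal nodes are sized, virsh can report the memory and vCPU count of each domain; for example, for the compute node definition listed above:
myhost# virsh dominfo compute_0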
Along with the undercloud virtual machine, there are ceph, compute, and control virtual machine definitions. These are the nodes that will be used to deploy the OpenStack overcloud. Using virtual machines like this to deploy OpenStack is not suitable for anything but your own personal OpenStack enrichment. These virtual machines stand in for the physical machines that a real deployment exposed to end users would use. To continue the undercloud installation, connect to the undercloud virtual machine and run the undercloud configuration:
myhost# ssh -F /root/.quickstart/ssh.config.ansible undercloud
undercloud# openstack undercloud install
The undercloud install command will set up the undercloud machine as an all-in-one OpenStack installation ready to be told how to deploy the overcloud. Once the undercloud installation is completed, the final steps are to seed the undercloud with configuration about the overcloud deployment and execute the overcloud deployment:
undercloud# source stackrc
undercloud# openstack overcloud image upload
undercloud# openstack baremetal import --json instackenv.json
undercloud# openstack baremetal configure boot
undercloud# neutron subnet-list
undercloud# neutron subnet-update <subnet-uuid> --dns-nameserver 8.8.8.8
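None of these seeding commands print much on success, so a quick verification pass can be reassuring. A sketch of what that could look like with the Mitaka-era clients (ironic node-list is the Ironic CLI's listing command, and <subnet-uuid> is the same UUID you just updated):
undercloud# openstack image list
undercloud# ironic node-list
undercloud# neutron subnet-show <subnet-uuid>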
There are also some scripts and other automated ways to make these steps happen; look at the output of the quickstart script or the Triple-O quickstart docs in the GitHub repository for more information about automating some of these steps.
The source command puts information into the shell environment to tell the subsequent commands how to communicate with the undercloud. We will look at this more in depth in Chapter 2, Identity Management. The image upload command uploads disk images into Glance that will be used to provision the overcloud nodes. We will look at this more in Chapter 3, Image Management. The first baremetal command imports information about the overcloud environment that will be deployed. This information was written to the instackenv.json file when the undercloud virtual machine was created by quickstart.sh. The second configures the images that were just uploaded in preparation for provisioning the overcloud nodes. The two neutron commands configure a DNS server for the network that the overcloud nodes will use, in this case Google's. Finally, execute the overcloud deploy:
undercloud# openstack overcloud deploy --control-scale 1 \
--compute-scale 1 --templates --libvirt-type qemu \
--ceph-storage-scale 1 \
-e /usr/share/openstack-tripleo-heat-templates/environments/storage-environment.yaml
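This command will run for quite a while. Under the hood, it drives a Heat stack named overcloud on the undercloud, so from a second shell you can watch progress with the Heat client; a minimal sketch, assuming the Mitaka-era heat CLI:
undercloud# source stackrc
undercloud# heat stack-list
undercloud# heat resource-list overcloud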
Let's talk about what this command is doing. In OpenStack, there are two basic node types, control and compute. A control node runs the OpenStack API services, the OpenStack scheduling service, database services, and messaging services. Pretty much everything except the hypervisors is part of the control tier and is segregated onto control nodes in a basic deployment. In an HA deployment, there are at least three control nodes. This is why you see three control nodes in the list of virtual machines quickstart.sh created. RDO Triple-O can do HA deployments, though we will focus on non-HA deployments in this book. Note that in the command you have just executed, control scale and compute scale are both set to 1. This means that you are deploying one control node and one compute node. The other virtual machines will not be used. Take note of the libvirt-type parameter. It is only required if the compute node itself is virtualized, which is what we are doing with RDO Triple-O, so that the configuration is set properly for the instances to be nested. Nested virtualization is when virtual machines run inside of a virtual machine. In this case, the instances will be virtual machines running inside of the compute node, which is itself a virtual machine. Finally, the Ceph storage scale and storage environment file will deploy Ceph as the storage backend for Glance and Cinder. If you leave off the Ceph and storage environment file parameters, one less virtual machine will be used for the deployment. There is more information on storage backends in Chapter 6, Block Storage. The indication that the overcloud deploy has succeeded will be a Keystone endpoint and a success message:
Overcloud Endpoint: http://192.0.2.6:5000/v2.0
Overcloud Deployed
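At this point, the overcloud is up. Triple-O also writes credentials for the new cloud to an overcloudrc file in the home directory on the undercloud, so one way to take a first look around is to source that file and ask Keystone which services were deployed; a minimal sketch, assuming the default overcloudrc location:
undercloud# source ~/overcloudrc
undercloud# openstack service list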