- Extending OpenStack
- Omar Khedher
Production OpenStack environment
Before starting to deploy our infrastructure code in the production environment, several important points must be considered:
- The infrastructure code should be fully tested in both test and staging (preproduction) environments.
- Any new service that will extend the OpenStack layout should go through a change management pipeline.
- Make sure that the base production setup is 100% consistent; no fragile components should exist.
- The production setup of the OpenStack services should be treated as a set of Ansible modules. When you add a new module to extend a service, it should be designed as an integral part of the layout and tested independently.
- Design for failure. OpenStack is well architected to keep its components highly available.
In this section, we will go through a sample production layout. The first deployment will bring the basic OpenStack services up and running. Bear in mind that the first design layout should be extensible and managed from the system management tool.
The sample diagram layout shown in the first section of this chapter can be extended, and eventually more services can be forked across different nodes.
As the OpenStack production environment must provide redundancy and service extensibility, Ansible Playbooks have been designed for that purpose.
The following diagram illustrates an extended layout of the production environment, which defines the following hosts:
- Cloud Controllers: These run the following OpenStack management services:
- Keystone-* APIs
- Glance-* APIs
- Nova-* APIs
- Cinder-* APIs
- Neutron-server
- Swift proxy
- Horizon
Optionally, the cloud controllers could also run the common infrastructure services, as follows:
- RabbitMQ
- MySQL Galera database
- Compute Nodes: These are the hypervisor machines that run the following services:
- Nova-compute
- Neutron-plugin-agent
- Logging host: Logs generated by OpenStack services need to be shipped and filtered for fast troubleshooting. The logging host will run a full logging stack, including the following:
- ElasticSearch
- Logstash
- Rsyslog
- Kibana
- Network node(s): This will run the following Neutron agents:
- L2 agent
- L3 agent
- DHCP agent
- Block storage node(s): These will host block storage volumes backed by LVM and run the following OpenStack services:
- Cinder-volume
- Cinder-scheduler
- Object storage node(s): Optionally, a dedicated blob storage device can run the following object storage service:
- Swift-* API
- Load balancer node(s): This runs the following services:
- HAProxy
- Keepalived
- Deployment host: This will run the following services:
- Ansible service and repository
- Razor PXE boot server
From a network perspective, a production setup might differ slightly from a staging environment because of the high cost of network devices. On the other hand, it is essential to design a preproduction OpenStack setup that is as close to the production ideal as possible, even if you have to consolidate different networks on the same logical interface. For this reason, the network layout design varies with the cost and performance of the hardware devices available. The previous diagram depicts a network layout suitable for supporting and integrating new OpenStack services, and it can easily be extended at scale.
The different network segments are as follows (a sample mapping sketch follows the list):
- Administrative network: A dedicated network for Ansible and the PXE boot installer.
- VM internal network: This is a private network between the virtual machines and the L3 network, providing routing to the external network and mapping floating IPs back to the virtual machines.
- Management network: This carries OpenStack service communication, including infrastructure traffic such as database queries and message queuing traffic.
- Storage network: This isolates storage traffic for both the block and object storage clusters using a virtual LAN on the switch.
- External network: This faces the public internet, providing external connectivity to instances. It also exposes the virtual IPs of the load balancers used to reach the internal OpenStack service APIs.
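As an illustration of how these segments map into the deployment configuration, the following is a minimal, hedged sketch of the provider_networks section under global_overrides in /etc/openstack_deploy/openstack_user_config.yml. The bridge names (br-mgmt, br-vxlan, br-storage), container interfaces, and the VXLAN range are assumptions and must be aligned with your actual host networking:

```yaml
# Sketch only: maps the management, VM internal (tunnel), and storage
# networks to container bridges. Bridge names, interface names, and the
# VXLAN range are placeholders to adapt to your environment.
global_overrides:
  provider_networks:
    - network:
        container_bridge: "br-mgmt"          # management network
        container_type: "veth"
        container_interface: "eth1"
        ip_from_q: "management"
        type: "raw"
        group_binds:
          - all_containers
          - hosts
        is_container_address: true
        is_ssh_address: true
    - network:
        container_bridge: "br-vxlan"         # VM internal (tunnel) network
        container_type: "veth"
        container_interface: "eth10"
        ip_from_q: "tunnel"
        type: "vxlan"
        range: "1:1000"
        net_name: "vxlan"
        group_binds:
          - neutron_linuxbridge_agent
    - network:
        container_bridge: "br-storage"       # storage network
        container_type: "veth"
        container_interface: "eth2"
        ip_from_q: "storage"
        type: "raw"
        group_binds:
          - glance_api
          - cinder_api
          - cinder_volume
          - nova_compute
```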
At this level, a successful run of the playbooks will be achieved when the following criteria are met:
- The network is configured correctly
- Target machines are reachable by Ansible (a quick check is sketched after this list)
- Required packages are installed on each target host
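As a quick sanity check before launching the full deployment, the reachability criterion can be validated with a minimal ad-hoc playbook such as the following sketch; it assumes the generated inventory is in place and only confirms that Ansible can reach every target host:

```yaml
# Minimal sketch: confirm that all target machines are reachable by
# Ansible before running the OpenStack playbooks. This only checks SSH
# access and a working Python interpreter on each host.
- hosts: all
  gather_facts: false
  tasks:
    - name: Check host reachability
      ping:
```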
The /etc/openstack_deploy/openstack_user_config.yml file will just need to be adjusted based on the networking IP configuration. The basic physical environment parameters to customize are as follows (a hedged sample follows the list):
- cidr_networks:
  - management
  - tunnel
  - storage
- used_ips
- global_overrides: internal_lb_vip_address and external_lb_vip_address
- shared-infra_hosts
- os-infra_hosts
- storage-infra_hosts
- identity_hosts
- compute_hosts
- storage_hosts
- network_hosts
- repo-infra_hosts
- log_hosts
- haproxy_hosts
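A hedged sample of these parameters for the layout described above might look like the following; every CIDR, IP address, and host name (infra01, compute01, and so on) is a placeholder to replace with your own addressing plan, and the provider_networks block sketched earlier also belongs under the same global_overrides key:

```yaml
# Illustrative values only; adjust networks, reserved ranges, and host
# addresses to match the production environment.
cidr_networks:
  management: 172.16.0.0/22
  tunnel: 172.16.4.0/22
  storage: 172.16.8.0/22

used_ips:
  - "172.16.0.1,172.16.0.50"        # reserved for network gear and VIPs

global_overrides:
  internal_lb_vip_address: 172.16.0.10
  external_lb_vip_address: 192.168.100.10

shared-infra_hosts:
  infra01:
    ip: 172.16.0.11

os-infra_hosts:
  infra01:
    ip: 172.16.0.11

identity_hosts:
  infra01:
    ip: 172.16.0.11

compute_hosts:
  compute01:
    ip: 172.16.0.21

storage_hosts:
  storage01:
    ip: 172.16.0.31

network_hosts:
  network01:
    ip: 172.16.0.41

haproxy_hosts:
  lb01:
    ip: 172.16.0.51

log_hosts:
  log01:
    ip: 172.16.0.61
```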
In addition, the /etc/openstack_deploy/user_variables.yml file can be adjusted to use kvm as the virtualization type for the compute nodes. The previous layout can be extended further with additional components using Ansible playbooks.
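As a minimal sketch, the KVM override in /etc/openstack_deploy/user_variables.yml could look like the following, assuming the standard nova_virt_type variable consumed by the OpenStack-Ansible Nova role:

```yaml
# Minimal sketch of /etc/openstack_deploy/user_variables.yml:
# select KVM as the hypervisor driver on the compute nodes.
nova_virt_type: kvm
```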