Online Notes

Preparing nodes for openstack deployment

Posted on October 9, 2020 (updated December 4, 2020) by sandeep

There are 4 physical servers behind a UTM device, which also hosts the DNS service and acts as the gateway. This device is connected to a managed switch, to which all the physical servers are connected. The IP of the UTM device (gateway and DNS server) is 10.1.1.1.

All nodes except the controller node will host compute and cinder services, so two physical partitions were configured on these nodes. The second partition is for cinder volumes.

Step 1 : Install Ubuntu Server 18.04 (minimal) on all nodes.

DNS is updated with the following entries (if DNS services are not available, update /etc/hosts with the following on all nodes so that each node can reach the others by hostname):

10.1.1.30 node1 node1.xxxxx.net
10.1.1.40 node2 node2.xxxxx.net
10.1.1.50 node3 node3.xxxxx.net
10.1.1.60 controller controller.xxxxx.net
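A quick way to confirm that every node resolves (whether via DNS or /etc/hosts) is a small loop over the hostnames from the table above:

```shell
# Check that each node resolves, either via DNS or /etc/hosts
for h in node1 node2 node3 controller; do
  getent hosts "$h" >/dev/null && echo "$h resolves" || echo "$h FAILED"
done
```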

Step 2 : Configure the FQDN to match the DNS entries on each node – the FQDN could not be configured during installation.

sudo hostnamectl set-hostname controller.xxxx.net
sudo hostnamectl set-hostname node1.xxxx.net

Comment out the following line in /etc/hosts :

# 127.0.1.1 controller
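To confirm the hostname change took effect, the expected values (from the DNS table above) can be checked with:

```shell
# Confirm the new hostname matches the DNS entries
hostname -s   # short name, e.g. controller
hostname -f   # FQDN, e.g. controller.xxxx.net
```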

Step 3 : Changes to network management – Optional; skip to the next step if desired. Just a personal choice.

On all nodes, disable Consistent Network Device Naming by updating the /etc/default/grub entry as follows :

GRUB_CMDLINE_LINUX="net.ifnames=0 biosdevname=0"

Running update-grub is essential for the changes to take effect on the next boot.

update-grub

Enable classic networking (ifupdown) and disable netplan

sudo apt install -y ifupdown

On the controller node, which has only two NICs, configure one for the management network and one for the provider network. Edit /etc/network/interfaces and update the contents as follows :

source /etc/network/interfaces.d/*
auto lo
iface lo inet loopback
allow-hotplug eth0
auto eth0
iface eth0 inet static
  address 10.1.1.25
  netmask 255.255.255.0
  broadcast 10.1.1.255
  gateway 10.1.1.1
  dns-nameservers 10.1.1.1 8.8.8.8

allow-hotplug eth1
auto eth1
iface eth1 inet manual
  up ip link set dev $IFACE up
  down ip link set dev $IFACE down

On the other nodes, which have 4 NICs, bond the first three interfaces into a single bonded interface and use the fourth for the provider network.

Note : On the (Cisco) switch, a LAG was configured for the 3 interfaces and LACP was also enabled.

Edit /etc/network/interfaces and update with following contents :

source /etc/network/interfaces.d/*

auto lo
iface lo inet loopback

auto eth0
iface eth0 inet manual
bond-master bond0

auto eth1
iface eth1 inet manual
bond-master bond0

auto eth2
iface eth2 inet manual
bond-master bond0

auto bond0
iface bond0 inet static
  address 10.1.1.30
  netmask 255.255.255.0
  broadcast 10.1.1.255
  gateway 10.1.1.1
  dns-nameservers 10.1.1.1 8.8.8.8
  bond-slaves eth0 eth1 eth2
  bond-mode 6
  bond-miimon 100
  bond-downdelay 200
  bond-updelay 200

auto eth3
iface eth3 inet manual
  up ip link set dev $IFACE up
  down ip link set dev $IFACE down

Install the ifenslave package and enable the bonding module (only where bonding is planned – in my case, all nodes other than the controller).

sudo apt install -y ifenslave
sudo modprobe bonding
echo "bonding" | sudo tee -a /etc/modules

Disable and mask NetworkManager and netplan in all nodes – enable “networking”.

Copy the following commands into a script, say nmchanges.sh. Add execute permission ("chmod +x nmchanges.sh") and execute the script with nohup so that momentary network disruptions do not break the ssh session :

sudo systemctl stop systemd-networkd.socket systemd-networkd networkd-dispatcher systemd-networkd-wait-online

sudo systemctl disable systemd-networkd.socket systemd-networkd networkd-dispatcher systemd-networkd-wait-online

sudo systemctl mask systemd-networkd.socket systemd-networkd networkd-dispatcher systemd-networkd-wait-online

sudo apt -y purge nplan netplan.io

sudo systemctl unmask networking

sudo systemctl enable networking
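The invocation can look like the following (nmchanges.log is just an example log file name):

```shell
# Run the script detached so a momentary network drop does not kill it
chmod +x nmchanges.sh
nohup sudo ./nmchanges.sh > nmchanges.log 2>&1 &
# Optionally watch progress with: tail -f nmchanges.log
```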

Reboot the system so that the network management changes take effect.

sudo reboot
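After the reboot, a quick sanity check on the bonded nodes confirms that the bond came up as expected:

```shell
# On the bonded nodes, verify the bond after reboot
lsmod | grep bonding                 # bonding module is loaded
cat /proc/net/bonding/bond0          # bond mode, slaves, and link state
ip -br addr show bond0               # bond0 has the expected address
```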

Step 4 : Disable snapd and motd (message of the day). Optional – skip to the next step if desired. Just a personal choice.

sudo apt purge -y snapd

Disable the message of the day. Edit /etc/default/motd-news and set ENABLED=0. In addition to that, we can disable the motd-news.timer

sudo systemctl disable motd-news.timer
sudo systemctl mask motd-news.timer

Comment out the following lines in /etc/pam.d/sshd as shown below

# session optional pam_motd.so motd=/run/motd.dynamic
# session optional pam_motd.so noupdate
# session optional pam_mail.so standard noenv # [1]

Remove the execute permission for a few motd scripts

sudo chmod -x /etc/update-motd.d/10-help-text /etc/update-motd.d/50-motd-news /etc/update-motd.d/80-livepatch
sudo service sshd restart

Step 5 : Configure timezone and update DNS resolver configuration

timedatectl set-timezone Asia/Kolkata

Update the DNS resolver configuration and restart the service – edit /etc/systemd/resolved.conf, then uncomment and update the DNS entry. 10.1.1.1 is the IP address of the UTM device, which also hosts DNS services.

[Resolve]
DNS=10.1.1.1

systemctl restart systemd-resolved
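The same edit can be done non-interactively, which is handy when preparing several nodes. This is a sketch assuming the stock resolved.conf, where the DNS= line ships commented out:

```shell
# Uncomment (or overwrite) the DNS= line and point it at the UTM device
sudo sed -i 's/^#\?DNS=.*/DNS=10.1.1.1/' /etc/systemd/resolved.conf
sudo systemctl restart systemd-resolved
```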

Step 6 : Install and configure chrony for time synchronization

apt install -y chrony

Edit /etc/chrony/chrony.conf

Comment out the pool entries and add one server entry (a public NTP server on the controller node, and controller.xxxx.net on the compute nodes).

server time.google.com iburst

Add the following to allow compute nodes to sync from the controller (controller node only)

allow 10.1.1.0/24
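The edits above can also be scripted; a sketch assuming the stock Ubuntu 18.04 chrony.conf, where the defaults are "pool" lines:

```shell
# Comment out the default pool entries and add a single server entry
sudo sed -i 's/^pool /# pool /' /etc/chrony/chrony.conf
echo 'server time.google.com iburst' | sudo tee -a /etc/chrony/chrony.conf
# Controller node only: let the 10.1.1.0/24 subnet sync from this node
echo 'allow 10.1.1.0/24' | sudo tee -a /etc/chrony/chrony.conf
```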

Restart chrony services

systemctl restart chronyd.service

Step 7 : Other miscellaneous configuration updates

To avoid running out of file descriptors, add the following at the end of /etc/security/limits.conf (the format is domain, type, item, value):

* hard nproc 65535
* soft nproc 65535
* hard nofile 65535
* soft nofile 65535
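The limits apply to new login sessions, so after logging in again they can be verified with the shell's ulimit builtin:

```shell
# Verify the new limits in a fresh login session
ulimit -n   # max open files (nofile)
ulimit -u   # max user processes (nproc)
```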

Disable swap – Optional. In my case all compute nodes have 192G RAM and the dedicated controller has 48G RAM, hence a personal choice.

swapoff -a
rm -f /swap.img

Edit /etc/fstab and remove the line related to swap, so that swap stays disabled permanently.
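That swap is fully disabled can be confirmed with:

```shell
# swapon prints nothing and SwapTotal reads 0 kB when swap is off
swapon --show
grep SwapTotal /proc/meminfo
```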

Optionally reboot the system – not mandatory, just a personal choice.

reboot

Step 8 : Add the openstack packages repository and install the openstack client. The "crudini" tool will be helpful while updating config files.

add-apt-repository -y cloud-archive:train
apt update 
apt -y upgrade
apt install -y python3-openstackclient crudini
