In my case, all three compute nodes will also be storage nodes. On all three nodes, a single virtual disk was created on the RAID controller. /dev/sda1 was used for the EFI boot partition and /dev/sda2 for the OS installation; a new partition, /dev/sda3, was created from the remaining space.
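Before creating the LVM objects, it can help to confirm the partition layout. A quick check (an extra step, not part of the original instructions; lsblk is available on a default Ubuntu install):
lsblk /dev/sda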
Install the supporting utility packages
apt install -y lvm2 thin-provisioning-tools
Create the LVM physical volume /dev/sda3
pvcreate /dev/sda3
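To verify the physical volume was created on the expected partition (a quick check, not part of the original steps):
pvs /dev/sda3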
Create the LVM volume group cinder-volumes
vgcreate cinder-volumes /dev/sda3
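The new volume group can be verified before moving on (again, just a sanity check):
vgs cinder-volumes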
Reconfigure LVM to scan only the devices that contain the cinder-volumes volume group. Edit /etc/lvm/lvm.conf and, in the devices section, add a filter that accepts the /dev/sda3 device and rejects all other devices:
devices {
...
filter = [ "a/sda3/", "r/.*/" ]
}
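After saving the filter, listing the physical volumes again is a quick way to confirm LVM still sees the intended device and nothing else (an extra verification step, not in the original guide); only /dev/sda3 should appear:
pvs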
Install and configure components
apt install -y cinder-volume
Edit /etc/cinder/cinder.conf and update the configuration in the respective sections:
[database]
connection = mysql+pymysql://cinder:commonpass@controller/cinder

[DEFAULT]
transport_url = rabbit://openstack:commonpass@controller
auth_strategy = keystone
# Replace my_ip with the compute node's management IP address
my_ip = 10.1.1.x
glance_api_servers = http://controller:9292

# Add the following sections
[keystone_authtoken]
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = cinder
password = commonpass

[lvm]
volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
volume_group = cinder-volumes
target_protocol = iscsi
target_helper = tgtadm

[oslo_concurrency]
lock_path = /var/lib/cinder/tmp
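The [lvm] section only takes effect if that backend is enabled. The upstream install guide also sets enabled_backends in the [DEFAULT] section, so add it there if it is not already present:
[DEFAULT]
enabled_backends = lvm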
Restart the Block Storage volume service including its dependencies
service tgt restart
service cinder-volume restart
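To confirm the volume service on each storage node registered correctly, the service list can be checked from the controller node (assuming the admin credentials have been sourced there):
openstack volume service list
Each storage node's cinder-volume service should appear with state up.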