How to do it...
- Map the block device on client-node1:
# rbd map --image rbd1 --name client.rbd

Notice that mapping the image has failed due to a feature-set mismatch!
- With Ceph Jewel, the default format for new RBD images is format 2, and the default Ceph Jewel configuration enables the following Ceph Block Device features:
- layering: layering support
- exclusive-lock: exclusive locking support
- object-map: object map support (requires exclusive-lock)
- deep-flatten: snapshot flatten support
- fast-diff: fast diff calculations (requires object-map)
Using the krbd (kernel RBD) client on client-node1, we will be unable to map the block device image on CentOS kernel 3.10, as this kernel does not support object-map, deep-flatten, and fast-diff (support was introduced in kernel 4.9). To work around this, we will disable the unsupported features. There are several options for doing so:
- Disable the unsupported features dynamically (this is the option we will be using):
# rbd feature disable rbd1 exclusive-lock object-map deep-flatten fast-diff
- When creating the RBD image, initially use the --image-feature layering option with the rbd create command, which will enable only the layering feature:
# rbd create rbd1 --size 10240 --image-feature layering --name client.rbd
- Disable the features in the Ceph configuration file:
rbd_default_features = 1
Note that all of these features work with the user-space RBD client, librbd.
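To confirm which features are enabled before and after the disable step, you can inspect the image. A quick check, assuming the same image and client name used above (this requires a running Ceph cluster, so run it on client-node1):

```shell
# Show the image's enabled feature set; after disabling the unsupported
# features, the features line should list only "layering".
rbd info rbd1 --name client.rbd | grep features
```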

- Retry mapping the block device with the unsupported features now disabled:
# rbd map --image rbd1 --name client.rbd
- Check the mapped block device:
# rbd showmapped --name client.rbd

- To make use of this block device, we should create a filesystem on it and mount it:
# fdisk -l /dev/rbd0
# mkfs.xfs /dev/rbd0
# mkdir /mnt/ceph-disk1
# mount /dev/rbd0 /mnt/ceph-disk1
# df -h /mnt/ceph-disk1

- Test the block device by writing data to it:
# dd if=/dev/zero of=/mnt/ceph-disk1/file1 count=100 bs=1M

- To map the block device across reboots, we will need to create and configure a systemd service file:
- Create a new file, rbd-mount, in the /usr/local/bin directory to map and mount the block device:
# cd /usr/local/bin
# vim rbd-mount

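As a sketch of what this script needs to do, assuming the pool (rbd), image (rbd1), mount point (/mnt/ceph-disk1), and client name (client.rbd) used throughout this recipe, rbd-mount might look like the following (the canonical version lives in the book's repository):

```shell
#!/bin/bash
# Hypothetical rbd-mount sketch; the exact script ships in the book's
# repository. Pool, image, mount point, and client name follow this recipe.
poolname=rbd
rbdimage=rbd1
mountpoint=/mnt/ceph-disk1

# Map the RBD image at boot, then mount it via the udev-created
# /dev/rbd/<pool>/<image> symlink.
rbd map $rbdimage --pool $poolname --name client.rbd
mount /dev/rbd/$poolname/$rbdimage $mountpoint
```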
- Save the file and make it executable:
# chmod +x rbd-mount
This can be done automatically by grabbing the rbd-mount script from the Ceph-Designing-and-Implementing-Scalable-Storage-Systems repository and making it executable:
# wget https://raw.githubusercontent.com/PacktPublishing/Ceph-Designing-and-Implementing-Scalable-Storage-Systems/master/Module_1/rbdmap -O /usr/local/bin/rbd-mount
# chmod +x /usr/local/bin/rbd-mount
- Go to the systemd directory and create the service file rbd-mount.service:
# cd /etc/systemd/system/
# vim rbd-mount.service

This can be done automatically by grabbing the service file from the Chapter02 directory of the Ceph-Designing-and-Implementing-Scalable-Storage-Systems repository:
# wget https://raw.githubusercontent.com/PacktPublishing/Ceph-Designing-and-Implementing-Scalable-Storage-Systems/master/Chapter02/rbd-mount.service
- After saving the file and exiting Vim, reload the systemd files and enable the rbd-mount.service to start at boot time:
# systemctl daemon-reload
# systemctl enable rbd-mount.service
- Reboot client-node1 and verify that block device rbd0 is mounted to /mnt/ceph-disk1 after the reboot:
root@client-node1 # reboot -f
# df -h
