This section exports RBD (block storage) through LIO as iSCSI (an IP SAN), so that other systems can consume it. This performs better than re-exporting a FUSE mount, because FUSE traffic has to cross in and out of the kernel, paying for the context switches.
0. I originally planned to keep running this in a container, but could not get it to work; if you manage it, please let me know how.
# This route did not pass my testing, although in principle it should work the same way
ceph orch daemon add iscsi rbd --placement="1 ceph-0002"

The steps below build everything from source instead.
1. In "CEPH环境搭建03" (Ceph environment setup, part 3) we already created the rbd pool, together with three RBD images (note that the later steps use r2 and r4):
rbd ls
r1
r2
r4
2. Upgrade the Linux kernel
#ceph-0002
# The current kernel is too old and makes LIO operations fail, so it must be upgraded.
# When attaching or creating a disk in gwcli you get:
#   Issuing disk create/update request Failed : disk create/update failed on xxx. LUN allocation failure...
# The rbd-target-api log then shows:
#   Could not set LIO device attribute cmd_time_out/qfull_time_out for device: rbd/r2. Kernel not supported. - error(Cannot find attribute: qfull_time_out)
uname -a
4.15.0-91-generic x86_64 GNU/Linux

# List the kernel versions available
apt list | grep linux-generic
linux-generic/bionic-updates,bionic-security 4.15.0.101.91 amd64 [upgradable from: 4.15.0.91.83]
linux-generic-hwe-16.04/bionic-updates,bionic-security 4.15.0.101.91 amd64
linux-generic-hwe-16.04-edge/bionic-updates,bionic-security 4.15.0.101.91 amd64
linux-generic-hwe-18.04/bionic-updates,bionic-security 5.3.0.53.110 amd64
linux-generic-hwe-18.04-edge/bionic-updates,bionic-security 5.3.0.53.110 amd64

# Install the newer kernel
apt-get install linux-generic-hwe-18.04-edge

# Reboot
reboot
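The version check can be scripted before upgrading blindly. A minimal sketch: the `5.3` threshold below is an assumption taken from this walkthrough (4.15 is shown to fail and 5.3 to work); the true minimum kernel for the `qfull_time_out` attribute may be lower, so verify it for your distribution.

```shell
#!/bin/sh
# Compare two version strings using sort -V (natural version ordering).
version_at_least() {
    # usage: version_at_least HAVE WANT  -> exit 0 if HAVE >= WANT
    [ "$(printf '%s\n%s\n' "$2" "$1" | sort -V | head -n1)" = "$2" ]
}

# Strip the "-91-generic" suffix from the running kernel release.
have="$(uname -r | cut -d- -f1)"

if version_at_least "$have" 5.3; then
    echo "kernel $have looks new enough"
else
    echo "kernel $have is too old; upgrade before configuring LIO"
fi
```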
3. Build and install from source
#ceph-0002
# Install the required packages
apt install pkg-config libglib2.0-dev librbd1 libnl-3-200 libkmod2

# python2
pip install kmod pyudev urwid pyparsing rados rbd netifaces crypto requests flask
# Two packages failed to install; so far this has had no impact
#pip install gobject python-openssl

# python3
apt-get install python3-pip python3-dev python3-openssl
apt install python-dev python3-pyparsing
pip3 install gobject pyudev urwid pyparsing netifaces crypto requests flask
# Three packages failed to install; so far this has had no impact
#pip3 install kmod rados rbd

# tcmu-runner
git clone https://github.com/open-iscsi/tcmu-runner
cd tcmu-runner
# Edit extra/install_dep.sh and add ",ubuntu" after "debian", so that Ubuntu is handled the same way as Debian
./extra/install_dep.sh
cmake -Dwith-glfs=false -Dwith-qcow=false -DSUPPORT_SYSTEMD=ON -DCMAKE_INSTALL_PREFIX=/usr .
make install
cd ..

# rtslib-fb
git clone https://github.com/open-iscsi/rtslib-fb.git
cd rtslib-fb
python setup.py install
cd ..

# configshell-fb
git clone https://github.com/open-iscsi/configshell-fb.git
cd configshell-fb
python setup.py install
cd ..

# targetcli-fb
git clone https://github.com/open-iscsi/targetcli-fb.git
cd targetcli-fb
python setup.py install
mkdir /etc/target
mkdir /var/target
cd ..

# ceph-iscsi
git clone https://github.com/ceph/ceph-iscsi.git
cd ceph-iscsi
python setup.py install --install-scripts=/usr/bin
cp usr/lib/systemd/system/rbd-target-gw.service /lib/systemd/system
cp usr/lib/systemd/system/rbd-target-api.service /lib/systemd/system

# Start the services
systemctl daemon-reload
systemctl enable tcmu-runner
systemctl start tcmu-runner
systemctl enable rbd-target-gw
systemctl start rbd-target-gw
systemctl enable rbd-target-api
systemctl start rbd-target-api
4. Edit the configuration file
#ceph-0002
vi /etc/ceph/iscsi-gateway.cfg

[config]
# Name of the Ceph storage cluster. A suitable Ceph configuration file allowing
# access to the Ceph storage cluster from the gateway node is required, if not
# colocated on an OSD node.
cluster_name = ceph
pool = rbd
# cluster_client_name = client.igw.ceph-0002
minimum_gateways = 1
gateway_ip_list = 192.168.1.102,192.168.1.103

# Place a copy of the ceph cluster's admin keyring in the gateway's /etc/ceph
# directory and reference the filename here
gateway_keyring = ceph.client.admin.keyring

# API settings.
# The API supports a number of options that allow you to tailor it to your
# local environment. If you want to run the API under https, you will need to
# create cert/key files that are compatible for each iSCSI gateway node, that is
# not locked to a specific node. SSL cert and key files *must* be called
# 'iscsi-gateway.crt' and 'iscsi-gateway.key' and placed in the '/etc/ceph/' directory
# on *each* gateway node. With the SSL files in place, you can use 'api_secure = true'
# to switch to https mode.

# To support the API, the bare minimum settings are:
api_secure = false

# Additional API configuration options are as follows, defaults shown.
# api_user = admin
# api_password = admin
# api_port = 5001
# trusted_ip_list = 192.168.1.101,192.168.1.102,192.168.1.103,192.168.1.104
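Before starting the services it can be worth sanity-checking a couple of values in the file. A small sketch that pulls one key out of the INI with awk (`iscsi-gateway.cfg` has a single `[config]` section, so no section handling is needed; the fallback sample file here is only for demonstration when the real file is absent):

```shell
#!/bin/sh
# Read one "key = value" entry from a simple INI-style file.
ini_get() {
    # usage: ini_get FILE KEY
    awk -F' *= *' -v key="$2" '$1 == key { print $2; exit }' "$1"
}

cfg=/etc/ceph/iscsi-gateway.cfg
if [ ! -f "$cfg" ]; then
    # No gateway config on this machine: fall back to a local sample.
    cfg=$(mktemp)
    cat > "$cfg" <<'EOF'
[config]
cluster_name = ceph
pool = rbd
api_secure = false
EOF
fi

echo "pool       = $(ini_get "$cfg" pool)"
echo "api_secure = $(ini_get "$cfg" api_secure)"
```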
5. Register the gateway with the dashboard
#ceph-0001
ceph dashboard iscsi-gateway-list
{"gateways": {}}

ceph dashboard iscsi-gateway-add http://admin:admin@ceph-0002:5000
Success

ceph dashboard iscsi-gateway-list
{"gateways": {"localhost.vm": {"service_url": "http://admin:admin@ceph-0002:5000"}}}
# Note the gateway name "localhost.vm"; it is needed below
6. Configure iSCSI
#ceph-0004
gwcli

# Create the target and the gateway
# (the first two commands were not captured in the original transcript; the prompts below imply them)
> /> cd /iscsi-targets
> /iscsi-targets> create iqn.2020-06.com.neohope.iscsi-gw:iscsi-igw
> /iscsi-targets> cd iqn.2020-06.com.neohope.iscsi-gw:iscsi-igw/gateways
> /iscsi-target...-igw/gateways> create localhost.vm 192.168.1.102 skipchecks=true
OS version/package checks have been bypassed
Adding gateway, sync'ing 0 disk(s) and 0 client(s)
ok
> /iscsi-target...-igw/gateways> ls
o- gateways ......................... [Up: 1/1, Portals: 1]
  o- localhost.vm ..................... [192.168.1.102 (UP)]

# Attach the disks
> /iscsi-target...-igw/gateways> cd /disks
> /disks> attach rbd/r2
ok
> /disks> ls
o- disks ............................... [1G, Disks: 1]
  o- rbd ..................................... [rbd (1G)]
    o- r2 ................................... [rbd/r2 (1G)]
> /disks> attach rbd/r4
ok
> /disks> ls
o- disks ............................... [2G, Disks: 2]
  o- rbd ..................................... [rbd (2G)]
    o- r2 ................................... [rbd/r2 (1G)]
    o- r4 ................................... [rbd/r4 (1G)]

# Authorization management
> /iscsi-targets> cd iqn.2020-06.com.neohope.iscsi-gw:iscsi-igw/
> /iscsi-target...-gw:iscsi-igw> ls
o- iqn.2020-06.com.neohope.iscsi-gw:iscsi-igw ...[Auth: None, Gateways: 1]
  o- disks ..................................... [Disks: 0]
  o- gateways ....................... [Up: 1/1, Portals: 1]
  | o- localhost.vm .................. [192.168.1.102 (UP)]
  o- host-groups ............................. [Groups : 0]
  o- hosts .................. [Auth: ACL_ENABLED, Hosts: 0]

# Create the initiator
> /iscsi-target...-gw:iscsi-igw> cd hosts
> /iscsi-target...csi-igw/hosts> create iqn.2020-06.com.neohope:ceph-0004
ok
> /iscsi-target...ope:ceph-0004> ls
o- iqn.2020-06.com.neohope:ceph-0004 ... [Auth: None, Disks: 0(0.00Y)]

# Set CHAP credentials
> /iscsi-target...ope:ceph-0004> auth username=myissicuid password=myissicpwdpwd
ok

# Assign the disk
> /iscsi-target...ope:ceph-0004> disk add rbd/r2
ok
> /iscsi-target...ope:ceph-0004> ls
o- iqn.2020-06.com.neohope:ceph-0004 ......... [Auth: CHAP, Disks: 1(1G)]
  o- lun 0 .............................. [rbd/r2(1G), Owner: localhost.vm]
7. Mount the iSCSI disk
#ceph-0004
# Install the required package
apt-get install open-iscsi

# Discover the available iSCSI targets
iscsiadm -m discovery -t sendtargets -p 192.168.1.102
192.168.1.102:3260,1 iqn.2020-06.com.neohope.iscsi-gw:iscsi-igw

# Edit the configuration file; the initiator name must match the one created above
vim /etc/iscsi/initiatorname.iscsi
InitiatorName=iqn.2020-06.com.neohope:ceph-0004

# Restart the service so the new initiator name takes effect
systemctl restart iscsid

# Configure the CHAP login credentials
iscsiadm -m node -T iqn.2020-06.com.neohope.iscsi-gw:iscsi-igw --op update --name node.session.auth.authmethod --value=CHAP
iscsiadm -m node -T iqn.2020-06.com.neohope.iscsi-gw:iscsi-igw --op update --name node.session.auth.username --value=myissicuid
iscsiadm -m node -T iqn.2020-06.com.neohope.iscsi-gw:iscsi-igw --op update --name node.session.auth.password --value=myissicpwdpwd

# Log in, attaching the iSCSI disk
iscsiadm -m node -T iqn.2020-06.com.neohope.iscsi-gw:iscsi-igw --login
Logging in to [iface: default, target: iqn.2020-06.com.neohope.iscsi-gw:iscsi-igw, portal: 192.168.1.102,3260] (multiple)
Login to [iface: default, target: iqn.2020-06.com.neohope.iscsi-gw:iscsi-igw, portal: 192.168.1.102,3260] successful.
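Each `sendtargets` discovery line has the fixed shape `portal:port,tpgt target-iqn`, which makes it easy to script the follow-up `iscsiadm` calls. A sketch using only POSIX parameter expansion (the sample line is the one from the discovery above):

```shell
#!/bin/sh
# Split one "sendtargets" discovery line into its portal and target IQN.
line='192.168.1.102:3260,1 iqn.2020-06.com.neohope.iscsi-gw:iscsi-igw'

portal=${line%% *}   # first field: "192.168.1.102:3260,1"
portal=${portal%,*}  # drop the ",tpgt" suffix: "192.168.1.102:3260"
target=${line#* }    # everything after the first space: the IQN

echo "portal=$portal"
echo "target=$target"
# These could then feed e.g.:
#   iscsiadm -m node -T "$target" -p "$portal" --login
```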
8. Use the iSCSI disk
#ceph-0004
# List the disks; a new one has appeared
fdisk -l
Disk /dev/vda: 40 GiB, 42949672960 bytes, 83886080 sectors
Disk /dev/vdb: 20 GiB, 21474836480 bytes, 41943040 sectors
Disk /dev/mapper/ceph--44634c9f--cf41--4215--bd5b--c2db93659bf1-osd--block--b192f8e5--55f2--4e75--a7ce--54d007410829: 20 GiB, 21470642176 bytes, 41934848 sectors
Disk /dev/sda: 1 GiB, 1073741824 bytes, 2097152 sectors

# Inspect the new sda disk
fdisk -l /dev/sda
Disk /dev/sda: 1 GiB, 1073741824 bytes, 2097152 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

# Format it
sudo mkfs.ext4 -m0 /dev/sda
mke2fs 1.44.1 (24-Mar-2018)
Creating filesystem with 262144 4k blocks and 65536 inodes
Filesystem UUID: 42229c39-e23c-46b2-929d-469e66196498
Superblock backups stored on blocks:
        32768, 98304, 163840, 229376
Allocating group tables: done
Writing inode tables: done
Creating journal (8192 blocks): done
Writing superblocks and filesystem accounting information: done

# Mount it
mount -t ext4 /dev/sda /mnt/iscsi

# Basic operations
cd /mnt/iscsi/
ls
vi iscsi.txt
ls
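To survive reboots, the mount would normally go into /etc/fstab, identified by UUID rather than by /dev/sda (device names for iSCSI disks are not stable). A sketch that builds such an entry; the `_netdev` and `nofail` options are standard mount options, and the UUID shown is the one mkfs printed above:

```shell
#!/bin/sh
# Build an /etc/fstab line for an iSCSI-backed filesystem.
# _netdev defers mounting until the network is up (the iSCSI login must
# happen first); nofail keeps boot from hanging if the target is down.
fstab_entry() {
    # usage: fstab_entry UUID MOUNTPOINT FSTYPE
    printf 'UUID=%s %s %s _netdev,nofail 0 2\n' "$1" "$2" "$3"
}

# On the real host the UUID would come from: blkid -s UUID -o value /dev/sda
fstab_entry 42229c39-e23c-46b2-929d-469e66196498 /mnt/iscsi ext4
```

The generated line can then be appended to /etc/fstab and tested with `mount -a` before rebooting.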
9. Detach the iSCSI disk
#ceph-0004
# Unmount
umount /mnt/iscsi

# Log out
iscsiadm -m node -T iqn.2020-06.com.neohope.iscsi-gw:iscsi-igw --logout
Logging out of session [sid: 1, target: iqn.2020-06.com.neohope.iscsi-gw:iscsi-igw, portal: 192.168.1.102,3260]
Logout of [sid: 1, target: iqn.2020-06.com.neohope.iscsi-gw:iscsi-igw, portal: 192.168.1.102,3260] successful.

# List the disks again; the iSCSI disk is gone
fdisk -l