Roles of the Spring Boot Modules

1. spring-boot
Core module: embedded web container integration, rapid development, and unified management of the application context, externalized configuration, and logging.

2. spring-boot-autoconfigure
Auto-configuration: detects which jars are on the classpath and automatically configures what they need.

3. spring-boot-actuator
Production-readiness management, exposed as a set of REST endpoints:
mappings/autoconfig/configprops/beans
env/info/health/heapdump/metrics
loggers/logfile/dump/trace
shutdown/auditevents

4. spring-boot-starters
Pre-packaged dependency bundles for common features, used to quickly assemble whatever functionality an application needs.

5. spring-boot-loader
Loads jars nested inside a jar, which is what makes running a single jar/war possible.
Note: when a jar is placed inside another jar, it must not be compressed again; nested jars have to be stored as-is.
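If you ever repackage an exploded boot jar by hand, the nested jars therefore have to be added with the "store" (no compression) method; a minimal sketch using the JDK jar tool, where the directory and jar names are hypothetical:

#repackage an exploded Spring Boot jar; the 0 flag stores entries
#without compression so the nested jars stay loadable
cd exploded-app
jar cfm0 ../myapp-repacked.jar META-INF/MANIFEST.MF .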

6. spring-boot-cli
Command-line tool for rapid development with Groovy scripts.

7. spring-boot-devtools
Developer conveniences for easier debugging, including remote debugging support.

Building a Private Cloud with OpenStack, Part 10

This part covers basic object storage operations; all commands are run on CT01 only.

. user01-openrc
#check the account status
swift stat
#create a container
openstack container create container01
#upload a file
openstack object create container01 hi.txt
#list the objects
openstack object list container01
#show object metadata
openstack object show container01 hi.txt
#set a property (tag)
openstack object set --property owner=neohope container01 hi.txt
#show object metadata again
openstack object show container01 hi.txt
#remove the property
openstack object unset --property owner container01 hi.txt
#show object metadata again
openstack object show container01 hi.txt
#download the object (move the local copy aside first)
mv hi.txt hi.txt.bak
openstack object save container01 hi.txt
#delete the object
openstack object delete container01 hi.txt
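
As a quick sanity check after the save step above, the downloaded copy can be compared with the original that was moved aside:

#should produce no output if the object came back unchanged
diff hi.txt hi.txt.bak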

PS:
If you run into permission problems, try fixing the SELinux context on /srv/node:

#chcon -R system_u:object_r:swift_data_t:s0 /srv/node

Building a Private Cloud with OpenStack, Part 9

This part installs Swift, which manages object storage; commands are run on CT01, OS01 and OS02.
I. Install the required packages on CT01
1. Create the user and endpoints

. admin-openrc
openstack user create --domain default --password-prompt swift
openstack role add --project serviceproject --user swift admin
openstack service create --name swift --description "OpenStack Object Storage" object-store

openstack endpoint create --region Region01 object-store public http://CT01:8080/v1/AUTH_%\(tenant_id\)s
openstack endpoint create --region Region01 object-store internal http://CT01:8080/v1/AUTH_%\(tenant_id\)s
openstack endpoint create --region Region01 object-store admin http://CT01:8080/v1

2. Install the packages

apt-get install swift swift-proxy python-swiftclient python-keystoneclient python-keystonemiddleware memcached

3. Edit the configuration files
3.1. Create the /etc/swift directory and download the sample file into it

curl -o /etc/swift/proxy-server.conf https://git.openstack.org/cgit/openstack/swift/plain/etc/proxy-server.conf-sample?h=stable/newton

3.2. Edit the configuration file
/etc/swift/proxy-server.conf

[DEFAULT]
bind_port = 8080
user = swift
swift_dir = /etc/swift

[pipeline:main]
pipeline = catch_errors gatekeeper healthcheck proxy-logging cache container_sync bulk ratelimit authtoken keystoneauth container-quotas account-quotas slo dlo versioned_writes proxy-logging proxy-server

[app:proxy-server]
use = egg:swift#proxy
account_autocreate = True

[filter:keystoneauth]
use = egg:swift#keystoneauth
operator_roles = admin,user

[filter:authtoken]
paste.filter_factory = keystonemiddleware.auth_token:filter_factory
auth_uri = http://CT01:5000
auth_url = http://CT01:35357
memcached_servers = CT01:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = serviceproject
username = swift
password = swift
delay_auth_decision = True

[filter:cache]
use = egg:swift#memcache
memcache_servers = CT01:11211

II. Install the required packages on OS01 and OS02
1. Initialize the disks (each storage VM has two extra disks)
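
Before formatting anything, it helps to confirm which block devices are the spare data disks on each storage node; a quick check (the sdb/sdc names simply follow this setup):

#sdb and sdc should show up with no partitions and no mount points
lsblk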

apt-get install xfsprogs rsync
mkfs.xfs /dev/sdb
mkfs.xfs /dev/sdc
mkdir -p /srv/node/sdb
mkdir -p /srv/node/sdc

2. Edit /etc/fstab

/dev/sdb /srv/node/sdb xfs noatime,nodiratime,nobarrier,logbufs=8 0 2
/dev/sdc /srv/node/sdc xfs noatime,nodiratime,nobarrier,logbufs=8 0 2

3. Mount the disks

mount /srv/node/sdb
mount /srv/node/sdc

4. Edit /etc/rsyncd.conf (address should be the node's own management IP; the example below uses 10.0.3.13)

uid = swift
gid = swift
log file = /var/log/rsyncd.log
pid file = /var/run/rsyncd.pid
address = 10.0.3.13

[account]
max connections = 2
path = /srv/node/
read only = False
lock file = /var/lock/account.lock

[container]
max connections = 2
path = /srv/node/
read only = False
lock file = /var/lock/container.lock

[object]
max connections = 2
path = /srv/node/
read only = False
lock file = /var/lock/object.lock

5. Edit /etc/default/rsync

RSYNC_ENABLE=true

6. Start rsync

service rsync start
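
To confirm the rsync daemon is up and exporting the three modules, you can list them from the node itself (an optional check; use each node's own address):

#should list the account, container and object modules
rsync rsync://10.0.3.13/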

7. Install the storage packages and download the sample configs

apt-get install swift swift-account swift-container swift-object
curl -o /etc/swift/account-server.conf https://git.openstack.org/cgit/openstack/swift/plain/etc/account-server.conf-sample?h=stable/newton
curl -o /etc/swift/container-server.conf https://git.openstack.org/cgit/openstack/swift/plain/etc/container-server.conf-sample?h=stable/newton
curl -o /etc/swift/object-server.conf https://git.openstack.org/cgit/openstack/swift/plain/etc/object-server.conf-sample?h=stable/newton

8. Edit /etc/swift/account-server.conf (in this and the next two files, bind_ip is the node's own management IP; the examples use 10.0.3.13)

[DEFAULT]
bind_ip = 10.0.3.13
bind_port = 6202
user = swift
swift_dir = /etc/swift
devices = /srv/node
mount_check = True

[pipeline:main]
pipeline = healthcheck recon account-server

[filter:recon]
use = egg:swift#recon
recon_cache_path = /var/cache/swift

9. Edit /etc/swift/container-server.conf

[DEFAULT]
bind_ip = 10.0.3.13
bind_port = 6201
user = swift
swift_dir = /etc/swift
devices = /srv/node
mount_check = True

[pipeline:main]
pipeline = healthcheck recon container-server

[filter:recon]
use = egg:swift#recon
recon_cache_path = /var/cache/swift

10. Edit /etc/swift/object-server.conf

[DEFAULT]
bind_ip = 10.0.3.13
bind_port = 6200
user = swift
swift_dir = /etc/swift
devices = /srv/node
mount_check = True

[pipeline:main]
pipeline = healthcheck recon object-server

[filter:recon]
use = egg:swift#recon
recon_cache_path = /var/cache/swift
recon_lock_path = /var/lock

11. Set ownership and permissions

chown -R swift:swift /srv/node
mkdir -p /var/cache/swift
chown -R root:swift /var/cache/swift
chmod -R 775 /var/cache/swift

III. Configure on CT01
1. Build the ring files (in the create commands below, 10 3 1 means 2^10 partitions, 3 replicas, and a minimum of 1 hour between moves of a partition)

cd /etc/swift

swift-ring-builder account.builder create 10 3 1
swift-ring-builder account.builder add --region 1 --zone 1 --ip 10.0.3.13 --port 6202 --device sdb --weight 100
swift-ring-builder account.builder add --region 1 --zone 1 --ip 10.0.3.13 --port 6202 --device sdc --weight 100
swift-ring-builder account.builder add --region 1 --zone 2 --ip 10.0.3.14 --port 6202 --device sdb --weight 100
swift-ring-builder account.builder add --region 1 --zone 2 --ip 10.0.3.14 --port 6202 --device sdc --weight 100
swift-ring-builder account.builder
swift-ring-builder account.builder rebalance

swift-ring-builder container.builder create 10 3 1
swift-ring-builder container.builder add --region 1 --zone 1 --ip 10.0.3.13 --port 6201 --device sdb --weight 100
swift-ring-builder container.builder add --region 1 --zone 1 --ip 10.0.3.13 --port 6201 --device sdc --weight 100
swift-ring-builder container.builder add --region 1 --zone 2 --ip 10.0.3.14 --port 6201 --device sdb --weight 100
swift-ring-builder container.builder add --region 1 --zone 2 --ip 10.0.3.14 --port 6201 --device sdc --weight 100
swift-ring-builder container.builder
swift-ring-builder container.builder rebalance

swift-ring-builder object.builder create 10 3 1
swift-ring-builder object.builder add --region 1 --zone 1 --ip 10.0.3.13 --port 6200 --device sdb --weight 100
swift-ring-builder object.builder add --region 1 --zone 1 --ip 10.0.3.13 --port 6200 --device sdc --weight 100
swift-ring-builder object.builder add --region 1 --zone 2 --ip 10.0.3.14 --port 6200 --device sdb --weight 100
swift-ring-builder object.builder add --region 1 --zone 2 --ip 10.0.3.14 --port 6200 --device sdc --weight 100
swift-ring-builder object.builder
swift-ring-builder object.builder rebalance
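
After the rebalance, /etc/swift should contain the three builder files plus the compressed ring files that the next step distributes; a quick check:

ls -l /etc/swift/*.builder /etc/swift/*.ring.gz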

2. Distribute the ring files
Copy account.ring.gz, container.ring.gz and object.ring.gz to /etc/swift on OS01 and OS02.
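
One straightforward way to push them out, assuming root SSH access from CT01 to the storage nodes:

scp /etc/swift/*.ring.gz OS01:/etc/swift/
scp /etc/swift/*.ring.gz OS02:/etc/swift/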

3. Download the configuration file

sudo curl -o /etc/swift/swift.conf https://git.openstack.org/cgit/openstack/swift/plain/etc/swift.conf-sample?h=stable/newton

4. Edit /etc/swift/swift.conf

[swift-hash]
swift_hash_path_suffix = neohope
swift_hash_path_prefix = neohope

[storage-policy:0]
name = Policy-0
default = yes

5. Copy swift.conf to /etc/swift on every node

6. On the non-storage (proxy) node, run

chown -R root:swift /etc/swift
service memcached restart
service swift-proxy restart

7. On the object storage nodes, run

chown -R root:swift /etc/swift
swift-init all start
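
Once the proxy and storage services are running, a quick end-to-end check from CT01 (the same command used in the next part) should return account headers rather than an error:

. admin-openrc
swift stat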

Building a Private Cloud with OpenStack, Part 8

This part launches virtual machines from the command line; all commands are run on CT01 only.

I. Network configuration
1. Create the virtual network (external/provider)

. admin-openrc
openstack network create --share --external --provider-physical-network provider --provider-network-type flat provider

2. Confirm the configuration files are correct (external network)
/etc/neutron/plugins/ml2/ml2_conf.ini:

[ml2_type_flat]
flat_networks = provider

/etc/neutron/plugins/ml2/linuxbridge_agent.ini:
[linux_bridge]
physical_interface_mappings = provider:enp0s8

3. Create the subnet (external network)

openstack subnet create --network provider --allocation-pool start=192.168.12.100,end=192.168.12.120 --dns-nameserver 8.8.8.8 --gateway 172.16.172.2 --subnet-range 192.168.12.0/24 provider

4. Create the virtual network (internal/self-service)

openstack network create selfservice

5. Confirm the configuration files are correct (internal network)
/etc/neutron/plugins/ml2/ml2_conf.ini:

[ml2]
tenant_network_types = vxlan

[ml2_type_vxlan]
vni_ranges = 1:1000

6. Create the subnet (internal network)

openstack subnet create --network selfservice --dns-nameserver 8.8.8.8 --gateway 172.16.172.2 --subnet-range 192.168.13.0/24 selfservice

7. Create a router so the internal network can reach the outside world through the provider network

. admin-openrc
openstack router create router
neutron router-interface-add router selfservice
neutron router-gateway-set router provider

ip netns
neutron router-port-list router
ping -c 4 192.168.12.107

II. Flavor configuration

openstack flavor list
openstack flavor create --id 0 --vcpus 1 --ram 64 --disk 2 flavor02

III. Keypair configuration

ssh-keygen -q -N ""
openstack keypair create --public-key ~/.ssh/id_rsa.pub mykey
openstack keypair list

IV. Security group configuration

openstack security group rule create --proto icmp default
openstack security group rule create --proto tcp --dst-port 22 default

V. Review the configuration

openstack flavor list
openstack image list
openstack network list
openstack security group list

VI. Create and access instances
1. Instance on the provider (external) network
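
The create commands in this step use the placeholders PROVIDER_NET_ID and SELFSERVICE_NET_ID; one way to look up the actual UUIDs (the network names come from the steps in section I):

openstack network list
#or capture them directly into shell variables
PROVIDER_NET_ID=$(openstack network show provider -f value -c id)
SELFSERVICE_NET_ID=$(openstack network show selfservice -f value -c id)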

openstack server create --flavor flavor02 --image cirros --nic net-id=PROVIDER_NET_ID --security-group default --key-name mykey provider-instance

openstack server list
openstack console url show provider-instance
ping -c 4 192.168.12.1
ping -c 4 openstack.org

ping -c 4 192.168.12.104 
ssh cirros@192.168.12.104 

2. Instance on the self-service (internal) network

openstack server create --flavor flavor02 --image cirros --nic net-id=SELFSERVICE_NET_ID --security-group default --key-name mykey selfservice-instance

openstack server list
openstack console url show selfservice-instance
ping -c 4 192.168.13.1
ping -c 4 openstack.org

openstack floating ip create provider
openstack server add floating ip selfservice-instance 192.168.12.106
openstack server list
ping -c 4 192.168.12.106
ssh cirros@192.168.12.106

VII. Create and attach a block storage volume
1. Create and attach

. admin-openrc
openstack volume create --size 2 volumeA
openstack volume list
openstack server add volume provider-instance volumeA

2. Verify inside the VM

sudo fdisk -l

Building a Private Cloud with OpenStack, Part 7

This part installs Cinder, which manages block storage; commands are run on CT01 and BS01.

I. Install the required packages on CT01
1. Create the database

CREATE DATABASE cinder;
GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'localhost' IDENTIFIED BY 'cinder';
GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'%' IDENTIFIED BY 'cinder';

2. Create the user and endpoints

. admin-openrc
openstack user create --domain default --password-prompt cinder
openstack role add --project serviceproject --user cinder admin
openstack service create --name cinder --description "OpenStack Block Storage" volume
openstack service create --name cinderv2 --description "OpenStack Block Storage" volumev2

openstack endpoint create --region Region01 volume public http://CT01:8776/v1/%\(tenant_id\)s
openstack endpoint create --region Region01 volume internal http://CT01:8776/v1/%\(tenant_id\)s
openstack endpoint create --region Region01 volume admin http://CT01:8776/v1/%\(tenant_id\)s

openstack endpoint create --region Region01 volumev2 public http://CT01:8776/v2/%\(tenant_id\)s
openstack endpoint create --region Region01 volumev2 internal http://CT01:8776/v2/%\(tenant_id\)s
openstack endpoint create --region Region01 volumev2 admin http://CT01:8776/v2/%\(tenant_id\)s

3. Install

apt install cinder-api cinder-scheduler

4. Edit the configuration files
4.1. /etc/cinder/cinder.conf

[DEFAULT]
transport_url = rabbit://openstack:openstack@CT01
auth_strategy = keystone
my_ip = 10.0.3.10

[database]
connection = mysql+pymysql://cinder:cinder@CT01/cinder

[keystone_authtoken]
auth_uri = http://CT01:5000
auth_url = http://CT01:35357
memcached_servers = CT01:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = serviceproject
username = cinder
password = cinder

[oslo_concurrency]
lock_path = /var/lib/cinder/tmp

4.2. /etc/nova/nova.conf

[cinder]
os_region_name = Region01

5. Populate the database and restart the services

sudo su -s /bin/sh -c "cinder-manage db sync" cinder

service nova-api restart
service cinder-scheduler restart
service apache2 restart

II. Install the required packages on BS01
1. Install lvm2 and initialize the volume group

apt install lvm2

pvcreate /dev/sdb
vgcreate cinder-volumes /dev/sdb
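
An optional check that the physical volume and the cinder-volumes group were created as expected:

pvs
vgs cinder-volumes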

2. Edit the LVM configuration file
/etc/lvm/lvm.conf (the filter below accepts only /dev/sdb, the cinder disk, and rejects everything else; if the system disk also sits on LVM, accept it as well, as in the commented alternatives)

devices {
    filter = [ "a/sdb/", "r/.*/"]
    #filter = [ "a/sda/", "a/sdb/", "r/.*/"]
    #filter = [ "a/sda/", "r/.*/"]
}

3. Install cinder-volume

apt install cinder-volume

4. Edit the configuration file
/etc/cinder/cinder.conf

[DEFAULT]
auth_strategy = keystone
transport_url = rabbit://openstack:openstack@CT01
my_ip = 10.0.3.12
enabled_backends = lvm
glance_api_servers = http://CT01:9292

[database]
connection = mysql+pymysql://cinder:cinder@CT01/cinder

[keystone_authtoken]
auth_uri = http://CT01:5000
auth_url = http://CT01:35357
memcached_servers = CT01:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = serviceproject
username = cinder
password = cinder

[oslo_concurrency]
lock_path = /var/lib/cinder/tmp

[lvm]
volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
volume_group = cinder-volumes
iscsi_protocol = iscsi
iscsi_helper = tgtadm
iscsi_ip_address=10.0.3.12

5. Restart the services

service tgt restart
service cinder-volume restart

III. Verify on CT01

. admin-openrc
openstack volume service list

You can now create and attach block storage volumes from the Dashboard.

Building a Private Cloud with OpenStack, Part 6

This part installs the Dashboard (Horizon), used to manage OpenStack; all commands are run on CT01 only.

1. Install

apt install openstack-dashboard

2. Edit the configuration
/etc/openstack-dashboard/local_settings.py

OPENSTACK_HOST = "CT01"
OPENSTACK_KEYSTONE_DEFAULT_DOMAIN = "Default"
OPENSTACK_KEYSTONE_DEFAULT_ROLE = "user"

#hosts allowed to access the Dashboard
ALLOWED_HOSTS = ['*']

SESSION_ENGINE = 'django.contrib.sessions.backends.cache'

CACHES = {
    'default': {
        'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
        'LOCATION': 'CT01:11211',
    }
}

OPENSTACK_KEYSTONE_URL = "http://%s:5000/v3" % OPENSTACK_HOST

OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT = True

OPENSTACK_API_VERSIONS = {
    "identity": 3,
    "image": 2,
    "volume": 2,
}

TIME_ZONE = "Asia/Shanghai"

3. Restart the service

service apache2 reload

4. Open the page in a browser
http://CT01/horizon
You can log in with the admin or user01 account.

PS: if you get a 500 error

#the apache error log shows the permissions on the file below are wrong; fixing them resolves the error
sudo chown www-data:www-data /var/lib/openstack-dashboard/secret_key

5. Create an instance with the following steps:
create the network, then the flavor, then the instance

6. Once the instance has started, click into it to connect through the console.

Building a Private Cloud with OpenStack, Part 5

This part installs the Neutron service, which manages virtual networking; the relevant packages are installed on CT01 and PC01.

I. Install the required packages on CT01
1. Create the database

CREATE DATABASE neutron;
GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' IDENTIFIED BY 'neutron';
GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' IDENTIFIED BY 'neutron';

2. Create the user and endpoints

. admin-openrc
openstack user create --domain default --password-prompt neutron
openstack role add --project serviceproject --user neutron admin
openstack service create --name neutron --description "OpenStack Networking" network

openstack endpoint create --region Region01 network public http://CT01:9696
openstack endpoint create --region Region01 network internal http://CT01:9696
openstack endpoint create --region Region01 network admin http://CT01:9696

3. Install

apt install neutron-server neutron-plugin-ml2 neutron-linuxbridge-agent neutron-l3-agent neutron-dhcp-agent neutron-metadata-agent

4. Edit the configuration files
4.1. /etc/neutron/neutron.conf

[database]
connection = mysql+pymysql://neutron:neutron@CT01/neutron

[DEFAULT]
core_plugin = ml2
service_plugins = router
allow_overlapping_ips = true
transport_url = rabbit://openstack:openstack@CT01
auth_strategy = keystone
notify_nova_on_port_status_changes = true
notify_nova_on_port_data_changes = true

[keystone_authtoken]
auth_uri = http://CT01:5000
auth_url = http://CT01:35357
memcached_servers = CT01:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = serviceproject
username = neutron
password = neutron

[nova]
auth_url = http://CT01:35357
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = Region01
project_name = serviceproject
username = nova
password = nova

4.2. /etc/neutron/plugins/ml2/ml2_conf.ini

[ml2]
type_drivers = flat,vlan,vxlan
tenant_network_types = vxlan
mechanism_drivers = linuxbridge,l2population
extension_drivers = port_security

[ml2_type_flat]
flat_networks = provider

[ml2_type_vxlan]
vni_ranges = 1:1000

[securitygroup]
enable_ipset = true

4.3. /etc/neutron/plugins/ml2/linuxbridge_agent.ini

[linux_bridge]
physical_interface_mappings = provider:enp0s8

[vxlan]
enable_vxlan = true
local_ip = 10.0.3.10
l2_population = true

[securitygroup]
enable_security_group = true
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver

4.4. /etc/neutron/l3_agent.ini

[DEFAULT]
interface_driver = linuxbridge

4.5. /etc/neutron/dhcp_agent.ini

[DEFAULT]
interface_driver = linuxbridge
dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
enable_isolated_metadata = true

4.6. /etc/neutron/metadata_agent.ini

[DEFAULT]
nova_metadata_ip = CT01
metadata_proxy_shared_secret = metadata

4.7. /etc/nova/nova.conf

[neutron]
url = http://CT01:9696
auth_url = http://CT01:35357
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = Region01
project_name = serviceproject
username = neutron
password = neutron
service_metadata_proxy = true
metadata_proxy_shared_secret = metadata

5. Populate the database and restart the services

sudo su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron

sudo service nova-api restart
sudo service neutron-server restart
sudo service neutron-linuxbridge-agent restart
sudo service neutron-dhcp-agent restart
sudo service neutron-metadata-agent restart
sudo service neutron-l3-agent restart

II. Install the required packages on PC01
1. Install

apt install neutron-linuxbridge-agent

2. Edit the configuration files
2.1. /etc/neutron/neutron.conf

[database]
#comment out the line below
#connection

[DEFAULT]
transport_url = rabbit://openstack:openstack@CT01
auth_strategy = keystone

[keystone_authtoken]
auth_uri = http://CT01:5000
auth_url = http://CT01:35357
memcached_servers = CT01:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = serviceproject
username = neutron
password = neutron

2.2. /etc/neutron/plugins/ml2/linuxbridge_agent.ini

[linux_bridge]
physical_interface_mappings = provider:enp0s8

[vxlan]
enable_vxlan = true
local_ip = 10.0.3.11
l2_population = true

[securitygroup]
enable_security_group = true
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver

2.3. /etc/nova/nova.conf

[neutron]
url = http://CT01:9696
auth_url = http://CT01:35357
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = Region01
project_name = serviceproject
username = neutron
password = neutron

3. Restart the services

service nova-compute restart
service neutron-linuxbridge-agent restart

III. Verify on CT01
1. Verify

. admin-openrc
openstack extension list --network
openstack network agent list

Building a Private Cloud with OpenStack, Part 4

This part installs the Nova service, which manages virtual compute; the relevant packages are installed on CT01 and PC01.

I. First, install the required packages on CT01

1. Create the databases

CREATE DATABASE nova_api;
CREATE DATABASE nova;
CREATE DATABASE nova_cell0;

GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'localhost' IDENTIFIED BY 'nova';
GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'%' IDENTIFIED BY 'nova';
GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' IDENTIFIED BY 'nova';
GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' IDENTIFIED BY 'nova';
GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'localhost' IDENTIFIED BY 'nova';
GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'%' IDENTIFIED BY 'nova';

2. Create the users and endpoints

. admin-openrc
openstack user create --domain default --password-prompt nova
openstack role add --project serviceproject --user nova admin
openstack service create --name nova --description "OpenStack Compute" compute

openstack endpoint create --region Region01 compute public http://CT01:8774/v2.1
openstack endpoint create --region Region01 compute internal http://CT01:8774/v2.1
openstack endpoint create --region Region01 compute admin http://CT01:8774/v2.1

openstack user create --domain default --password-prompt placement
openstack role add --project serviceproject --user placement admin
openstack service create --name placement --description "Placement API" placement

openstack endpoint create --region Region01 placement public http://CT01:8778
openstack endpoint create --region Region01 placement internal http://CT01:8778
openstack endpoint create --region Region01 placement admin http://CT01:8778

3. Install nova

apt install nova-api nova-conductor nova-consoleauth nova-novncproxy nova-scheduler nova-placement-api

4. Edit the configuration file
/etc/nova/nova.conf

[api_database]
connection = mysql+pymysql://nova:nova@CT01/nova_api

[database]
connection = mysql+pymysql://nova:nova@CT01/nova

[api]
auth_strategy = keystone

[keystone_authtoken]
auth_uri = http://CT01:5000
auth_url = http://CT01:35357
memcached_servers = CT01:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = serviceproject
username = nova
password = nova

[vnc]
enabled = true
vncserver_listen = $my_ip
vncserver_proxyclient_address = $my_ip

[glance]
api_servers = http://CT01:9292

[oslo_concurrency]
lock_path = /var/lib/nova/tmp

[placement]
os_region_name = Region01
project_domain_name = Default
project_name = serviceproject
auth_type = password
user_domain_name = Default
auth_url = http://CT01:35357/v3
username = placement
password = placement

[DEFAULT]
transport_url = rabbit://openstack:openstack@CT01
my_ip = 10.0.3.10
use_neutron = True
firewall_driver = nova.virt.firewall.NoopFirewallDriver
#remove the log_dir option below
#log_dir 

5. Initialize the databases and cells

sudo su -s /bin/sh -c "nova-manage api_db sync" nova
sudo su -s /bin/sh -c "nova-manage cell_v2 map_cell0" nova
sudo su -s /bin/sh -c "nova-manage cell_v2 create_cell --name=cell1 --verbose" nova
sudo su -s /bin/sh -c "nova-manage db sync" nova
sudo nova-manage cell_v2 list_cells

6. Restart the services

sudo service nova-api restart
sudo service nova-consoleauth restart
sudo service nova-scheduler restart
sudo service nova-conductor restart
sudo service nova-novncproxy restart

II. Then install the required packages on PC01
1. Install

apt install nova-compute
apt install nova-compute-qemu

2. Edit the configuration files
2.1. /etc/nova/nova.conf

[DEFAULT]
transport_url = rabbit://openstack:openstack@CT01
my_ip = 10.0.3.11
use_neutron = True
firewall_driver = nova.virt.firewall.NoopFirewallDriver
#log_dir

[api]
auth_strategy = keystone

[keystone_authtoken]
auth_uri = http://CT01:5000
auth_url = http://CT01:35357
memcached_servers = CT01:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = serviceproject
username = nova
password = nova

[vnc]
enabled = True
vncserver_listen = 0.0.0.0
vncserver_proxyclient_address = $my_ip
novncproxy_base_url = http://CT01:6080/vnc_auto.html

[glance]
api_servers = http://CT01:9292

[oslo_concurrency]
lock_path = /var/lib/nova/tmp

[placement]
os_region_name = Region01
project_domain_name = Default
project_name = serviceproject
auth_type = password
user_domain_name = Default
auth_url = http://CT01:35357/v3
username = placement
password = placement

2.2. /etc/nova/nova-compute.conf

[libvirt]
#egrep -c '(vmx|svm)' /proc/cpuinfo
#if the command above returns 0, the CPU lacks hardware virtualization support and virt_type must stay qemu; otherwise kvm can be used
virt_type = qemu

3. Restart the service

service nova-compute restart

III. Back on CT01, finish the setup
1. Add PC01 to cell management
1A. Run the commands

. admin-openrc
openstack hypervisor list
sudo su -s /bin/sh -c "nova-manage cell_v2 discover_hosts --verbose" nova


1B. Edit the configuration file
/etc/nova/nova.conf

[scheduler]
discover_hosts_in_cells_interval = 300

2. Verify the installation

. admin-openrc
openstack compute service list
openstack catalog list
openstack image list

Building a Private Cloud with OpenStack, Part 3

This part installs the Glance service, which manages virtual machine images; all commands are run on CT01 only.

1. Create the database

CREATE DATABASE glance CHARACTER SET utf8;
GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' IDENTIFIED BY 'glance';
GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' IDENTIFIED BY 'glance';

2. Create the OpenStack user and endpoints

. admin-openrc

openstack user create --domain default --password-prompt glance
openstack role add --project serviceproject --user glance admin
openstack service create --name glance --description "OpenStack Image" image

openstack endpoint create --region Region01 image public http://CT01:9292
openstack endpoint create --region Region01 image internal http://CT01:9292
openstack endpoint create --region Region01 image admin http://CT01:9292

3. Install glance

apt install glance

4. Edit the configuration files
4.1. /etc/glance/glance-api.conf

[database]
connection = mysql+pymysql://glance:glance@CT01/glance

[keystone_authtoken]
#comment out all other options in this section
auth_uri = http://CT01:5000
auth_url = http://CT01:35357
memcached_servers = CT01:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = serviceproject
username = glance
password = glance

[paste_deploy]
flavor = keystone

[glance_store]
stores = file,http
default_store = file
filesystem_store_datadir = /var/lib/glance/images/

4.2. /etc/glance/glance-registry.conf

[database]
connection = mysql+pymysql://glance:glance@CT01/glance

[keystone_authtoken]
#comment out all other options in this section
auth_uri = http://CT01:5000
auth_url = http://CT01:35357
memcached_servers = CT01:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = serviceproject
username = glance
password = glance

[paste_deploy]
flavor = keystone

5. Populate the database and restart the services

sudo su -s /bin/sh -c "glance-manage db_sync" glance

service glance-registry restart
service glance-api restart

6. Download a system image and upload it

. admin-openrc

wget -O cirros-0.3.5-x86_64-disk.img http://download.cirros-cloud.net/0.3.5/cirros-0.3.5-x86_64-disk.img

openstack image create "cirros" --file cirros-0.3.5-x86_64-disk.img --disk-format qcow2 --container-format bare --public

openstack image list

Building a Private Cloud with OpenStack, Part 2

This part installs the Keystone service, which manages all authentication and authorization within OpenStack; all commands are run on CT01 only.

1. Install MySQL and PyMySQL

#install mysql
apt-get install mysql-server

#edit the configuration file
vi /etc/mysql/my.cnf
#add the following
[client]
default-character-set=utf8
[mysqld]
character-set-server=utf8
 
#restart mysql
/etc/init.d/mysql restart

#install pymysql
pip install pymysql 

2. Install RabbitMQ

#install
apt install rabbitmq-server

#create the openstack user and grant permissions
rabbitmqctl add_user openstack openstack
rabbitmqctl set_permissions openstack ".*" ".*" ".*"
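
An optional check that the user and its permissions took effect:

rabbitmqctl list_users
rabbitmqctl list_permissions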

3. Install memcached

#install
apt install memcached python-memcache

#edit the configuration file (listen on CT01)
vi /etc/memcached.conf
-l CT01

#restart the service
service memcached restart

4. Create the Keystone database

CREATE DATABASE keystone CHARACTER SET utf8;
GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' IDENTIFIED BY 'keystone';
GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' IDENTIFIED BY 'keystone';

5. Install Keystone

apt install keystone

6. Edit the Keystone configuration file
/etc/keystone/keystone.conf

[database]
connection = mysql+pymysql://keystone:keystone@CT01/keystone
[token]
provider = fernet

7. Initialize

#populate the database
su -s /bin/sh -c "keystone-manage db_sync" keystone

#initialize the Fernet key repositories and bootstrap the identity service
keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone
keystone-manage credential_setup --keystone-user keystone --keystone-group keystone
keystone-manage bootstrap --bootstrap-password bootstrap --bootstrap-admin-url http://CT01:35357/v3/ --bootstrap-internal-url http://CT01:5000/v3/ --bootstrap-public-url http://CT01:5000/v3/ --bootstrap-region-id Region01

#remove the unneeded SQLite database
rm -f /var/lib/keystone/keystone.db

#perform the configuration step
keystone-install-configure

8. Run the following commands

export OS_USERNAME=admin
export OS_PASSWORD=bootstrap
export OS_PROJECT_NAME=admin
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_DOMAIN_NAME=Default
export OS_AUTH_URL=http://CT01:35357/v3
export OS_IDENTITY_API_VERSION=3

9. Create the projects, a user and a role

openstack project create --domain default --description "service os project" serviceproject
openstack project create --domain default --description "user os project" userproject

openstack user create --domain default --password-prompt user01
openstack role create user
openstack role add --project userproject --user user01 user

10. Disable the admin_token authentication mechanism
/etc/keystone/keystone-paste.ini

#remove admin_token_auth from the pipelines in the following sections
[pipeline:public_api],[pipeline:admin_api],[pipeline:api_v3] 

11. Verify the installation

unset OS_AUTH_URL OS_PASSWORD
openstack --os-auth-url http://CT01:35357/v3 --os-project-domain-name default --os-user-domain-name default --os-project-name admin --os-username admin token issue
openstack --os-auth-url http://CT01:5000/v3 --os-project-domain-name default --os-user-domain-name default --os-project-name userproject --os-username user01 token issue

12. Create two credential scripts
12.1. admin-openrc

export OS_PROJECT_DOMAIN_NAME=Default
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_NAME=admin
export OS_USERNAME=admin
export OS_PASSWORD=bootstrap
export OS_AUTH_URL=http://CT01:35357/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2

12.2. user01-openrc

export OS_PROJECT_DOMAIN_NAME=Default
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_NAME=userproject
export OS_USERNAME=user01
export OS_PASSWORD=user01
export OS_AUTH_URL=http://CT01:5000/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2

12.3. Verify

. admin-openrc
openstack token issue

. user01-openrc
openstack token issue