OpenStack Newton HA (High Availability) Deployment
Author: Internet
Refresh the yum repositories:
yum clean all
yum makecache
Rename the network interfaces (revert to ethX naming):
vi /etc/default/grub
Add net.ifnames=0 to the GRUB_CMDLINE_LINUX line
grub2-mkconfig -o /boot/grub2/grub.cfg
Rename the network interface configuration files accordingly
Install common tools:
yum install vim net-tools wget ntpdate ntp bash-completion -y
Configure /etc/hosts (vim /etc/hosts) and add:
10.1.1.141 controller1
10.1.1.142 controller2
10.1.1.143 controller3
10.1.1.144 compute1
10.1.1.146 cinder1
Network layout:
external:10.254.15.128/27
admin mgt:10.1.1.0/24
tunnel:10.2.2.0/24
Time synchronization:
ntpdate 10.6.0.2
vim /etc/ntp.conf
Add:
server 10.6.0.2 iburst
systemctl enable ntpd
systemctl restart ntpd
systemctl status ntpd
ntpq -p
Passwordless SSH login:
ssh-keygen -t rsa
ssh-copy-id -i /root/.ssh/id_rsa.pub -p 22 root@10.254.15.141
ssh-copy-id -i /root/.ssh/id_rsa.pub -p 22 root@10.254.15.142
ssh-copy-id -i /root/.ssh/id_rsa.pub -p 22 root@10.254.15.143
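Once the keys are copied, a quick loop from controller1 (a sketch, assuming the /etc/hosts names above resolve) confirms that passwordless login works:
for h in controller1 controller2 controller3; do ssh -o BatchMode=yes root@$h hostname; done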
I. Building the MariaDB Galera Cluster
1. Introduction to MariaDB Galera Cluster
MariaDB Galera Cluster is a high-availability and scalability solution for MySQL.
Official site: http://galeracluster.com/products/
MariaDB Galera Cluster is a multi-master architecture built on top of the MySQL InnoDB storage engine with real-time synchronous replication. The application layer needs no read/write splitting; database read and write load can be distributed across the nodes according to whatever rules you define. At the data level it is fully compatible with MariaDB and MySQL.
Features:
(1) Synchronous replication
(2) Active-active multi-master topology
(3) Reads and writes can go to any node in the cluster
(4) Automatic membership control; failed nodes are automatically dropped from the cluster
(5) Automatic node joining
(6) True parallel replication, at row level
(7) Direct client connections through the native MySQL interface
(8) Every node holds a complete copy of the data
(9) Data synchronization between the databases is implemented through the wsrep API
Limitations:
(1) Replication currently only supports the InnoDB storage engine. Writes to tables using any other engine, including the mysql.* tables, are not replicated. DDL statements are replicated, however, so creating a user is replicated, but INSERT INTO mysql.user ... is not.
(2) DELETE is not supported on tables without a primary key. Rows of such tables may be stored in a different order on different nodes, so SELECT ... LIMIT ... can return different result sets.
(3) In a multi-master setup LOCK/UNLOCK TABLES is not supported, nor are the locking functions GET_LOCK(), RELEASE_LOCK(), etc.
(4) The query log cannot be written to a table. If the query log is enabled, it can only go to a file.
(5) The maximum allowed transaction size is defined by wsrep_max_ws_rows and wsrep_max_ws_size; any larger operation, e.g. a huge LOAD DATA, is rejected.
(6) Because the cluster uses optimistic concurrency control, a transaction can still be aborted at commit time. If two transactions on different nodes write the same row and commit, the losing node aborts its transaction; for such a cluster-level abort, the cluster returns a deadlock error code (Error: 1213 SQLSTATE: 40001 (ER_LOCK_DEADLOCK)).
(7) XA transactions are not supported, since they could be rolled back at commit.
(8) The write throughput of the whole cluster is limited by the weakest node: if one node becomes slow, the whole cluster becomes slow. For stable high performance, all nodes should run on identical hardware.
(9) A cluster needs at least 3 nodes.
(10) A problematic DDL statement can break the cluster.
2. Install MariaDB Galera
Run the following on all three nodes:
yum install -y MariaDB-server MariaDB-client galera xinetd rsync ntp ntpdate bash-completion percona-xtrabackup socat gcc gcc-c++ vim
systemctl start mariadb.service
mysql_secure_installation (run this to set the MySQL root password)
3. On all three nodes, create the sst user, which is used by xtrabackup-v2 for data synchronization
mysql -uroot -p
grant all privileges on *.* to 'sst'@'localhost' identified by 'gdxc1902';
flush privileges;
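To confirm the grant took effect, a quick check from the same mysql session (sketch):
SELECT User, Host FROM mysql.user WHERE User = 'sst';
SHOW GRANTS FOR 'sst'@'localhost';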
4. Configure the MariaDB cluster
On the first node, add the following:
vim /etc/my.cnf.d/client.cnf
[client]
port = 3306
socket = /var/lib/mysql/mysql.sock
vim /etc/my.cnf.d/server.cnf
Add the following content:
[isamchk]
key_buffer_size = 16M
#key_buffer_size sets the size of the cache for index blocks. It is shared by all threads and largely determines how fast the database processes indexes, especially index reads.
[mysqld]
datadir=/var/lib/mysql
#指定数据data目录绝对路径
innodb-data-home-dir = /var/lib/mysql
#指定innodb数据存放的家目录
basedir = /usr
#指定mariadb的安装路径,填写全路径可以解决相对路径所造成的问题
binlog_format=ROW
#This parameter accepts three values: row, statement and mixed. row records in the binary log the final value of each table row after a write; the slave nodes participating in replication apply this final value to their own tables. statement records the data-modifying statement itself rather than the final result, and each slave replays the statement to produce the final records. mixed combines the two; the MySQL server automatically picks the most suitable logging format for the current situation.
character-set-server = utf8
#Set the database character set
collation-server = utf8_general_ci
#The collation defines how characters are represented, sorted and compared in the database; here we use utf8_general_ci
max_allowed_packet = 256M
#Maximum packet size MySQL accepts. Very large SQL statements can otherwise fail; this parameter raises the limit on how large a statement may be.
max_connections = 10000
#Maximum number of connections for the MySQL cluster; this is very important in an OpenStack environment
ignore-db-dirs = lost+found
#Do not treat the lost+found directory as a database directory
init-connect = SET NAMES utf8
#Initial connection character set (only applies to non-super users)
innodb_autoinc_lock_mode = 2
#In this mode no type of INSERT takes the auto-inc table lock, which gives the best performance, but auto-increment values may have gaps within a single statement
innodb_buffer_pool_size = 2000M
#Size in bytes of the InnoDB buffer pool, the memory area where InnoDB caches table and index data. The larger the value, the less disk I/O is needed for repeated access to the same table data. On a dedicated database server it can be set as high as 80% of physical memory. In practice, however, increasing it without limit does not bring significant gains and raises CPU pressure, so a sensible value is best. Reference: https://my.oschina.net/realfighter/blog/368225
innodb_doublewrite = 0
#Setting 0 disables the doublewrite buffer. If you do not care about data consistency (e.g. RAID0) or the file system guarantees no partial writes, you can disable doublewrite by setting innodb_doublewrite to 0.
innodb_file_format = Barracuda
#Set the file format to Barracuda, the format introduced with the InnoDB plugin. Barracuda also supports the Antelope format but compresses data better, and it pairs with innodb_file_per_table = 1 below.
innodb_file_per_table = 1
#Enable per-table tablespaces so every InnoDB table has its own file; dropping a table then actually reclaims that space.
innodb_flush_log_at_trx_commit = 2
#The default value of 1 flushes the log to disk at every transaction commit and every statement outside a transaction, which is expensive, especially with a battery-backed cache. A value of 2 writes to the OS cache instead of flushing to disk; the log is still flushed to disk once per second, so you generally lose no more than 1-2 seconds of updates. A value of 0 is slightly faster but less safe: data can be lost if mysqld crashes. With 2, data is only at risk if the whole operating system goes down, so 2 is the safer choice for an OpenStack environment.
innodb_flush_method = O_DIRECT
#Controls how data files are opened and flushed. O_DIRECT means InnoDB opens the data files with O_DIRECT and uses fsync() to flush logs and data files. O_DIRECT minimizes the effect of caching on I/O: I/O operates directly on the user-space buffer and is synchronous, so both read() and write() go to disk. Purely in terms of write speed O_DIRECT is the slowest mode, but in an OpenStack environment it prevents the OS-level VFS cache from consuming too much memory and conflicting with InnoDB's own buffer pool, and it also takes some pressure off the operating system.
innodb_io_capacity = 500
#Controls InnoDB's I/O capacity at checkpoint time. A rough rule of thumb is 200 per 15000 rpm SAS disk, so a 6-disk SAS RAID10 can be set to about 600; an ordinary single SATA disk counts as about 100. Note that a normal mechanical disk tops out at roughly 300 random IOPS, so setting innodb_io_capacity too high actually makes disk I/O uneven. With SSDs, I/O capability is far higher and the value can be raised to 2000 or more. The OpenStack environment here uses ordinary mechanical disks to keep costs down, so tune the value to your own environment.
innodb_locks_unsafe_for_binlog = 1
#Adjust the transaction locking mechanism so MySQL uses multi-version consistent reads.
innodb_log_file_size = 2000M
#With heavy writes to InnoDB tables, choosing a suitable innodb_log_file_size matters a lot for MySQL performance. Setting it too large, however, increases recovery time, so after a crash or sudden power loss MySQL may take a long time to recover. The OpenStack environment the author maintains has 12 compute nodes and around 500 VMs.
innodb_read_io_threads = 8
#Number of threads used for reading from disk, for concurrency; set according to CPU cores and read/write frequency
innodb_write_io_threads = 8
#Number of threads used for writing to disk, for concurrency; set according to CPU cores and read/write frequency
key_buffer_size = 64
#Size of the index block cache, shared by all threads; it largely determines index processing speed, especially index reads. To know whether key_buffer_size is reasonable, check the status values key_read_requests and key_reads: the ratio key_reads/key_read_requests should be as low as possible, e.g. 1:100, 1:1000 or 1:10000. The values can be obtained with show status like 'key_read%';
myisam-recover-options = BACKUP
#How MyISAM tables are automatically repaired. backup mode repairs automatically and, if the data file was changed during recovery, backs up tbl_name.MYD as tbl_name-datetime.BAK.
myisam_sort_buffer_size = 64M
#Buffer used when MyISAM tables are re-sorted after changes; 64M is generally enough
open_files_limit = 102400
#Maximum number of open files; set it with the OS ulimit value and max_connections in mind
performance_schema = on
#Enable the performance_schema database that collects database performance metrics
query_cache_limit = 1M
#Maximum result size a single query may use in the cache
query_cache_size = 0
#Disable the query cache
query_cache_type = 0
#Disable query_cache_type
skip-external-locking
#Skip external locking. When external locking is in effect, every process must wait for the previous process to finish and release its lock before accessing a table; the constant waiting degrades MySQL performance, so we skip it here.
skip-name-resolve
#Disable DNS hostname lookups
socket = /var/lib/mysql/mysql.sock
#Absolute path of mysql.sock
table_open_cache = 10000
#Size of the table descriptor cache; reduces the number of file open/close operations
thread_cache_size = 8
#You can check the current open_tables value with show status like 'open%tables'; and adjust accordingly
thread_stack = 256K
#Memory used to store per-thread information such as the thread id and runtime environment; thread_stack decides how much memory each thread gets
tmpdir = /tmp
#Directory for MySQL temporary files
user = mysql
#System account that runs MySQL
wait_timeout = 1800
#Idle timeout for a single connection. The MySQL default is 8 hours; here we use 1800 seconds, so a connection idle for more than 30 minutes is released.
[galera]
wsrep_on=ON
wsrep_provider=/usr/lib64/galera/libgalera_smm.so
wsrep_cluster_address="gcomm://10.1.1.141,10.1.1.142,10.1.1.143"
#gcomm is a special address used only when the Galera cluster is bootstrapped
wsrep_cluster_name = openstack
#Name of the MySQL cluster
wsrep_node_name=controller1
wsrep_node_address=10.1.1.141
#IP address of this node
wsrep_sst_method=xtrabackup-v2
#The options are the xtrabackup and rsync methods; newer versions support xtrabackup-v2. rsync is the fastest at data synchronization (SST and IST) but locks the donor node, which then cannot serve requests; xtrabackup only locks the node briefly and barely affects access. SST (state snapshot transfer) is how a node is initialized, a full data sync; IST (incremental state transfer) is possible when the joining node's GUID matches the current cluster and the missing data can still be found in the donor's write-set cache, otherwise a full SST is required. Based on this, xtrabackup-v2 is currently the best SST method.
wsrep_sst_auth=sst:gdxc1902
#MySQL user and password used for SST. Since we use xtrabackup-v2 above, that method uses this sst MySQL user for authentication between nodes; gdxc1902 is the password
wsrep_slave_threads=4
#Number of applier threads. A good starting point is 4 threads per core; the value is heavily influenced by I/O capability (on the author's OpenStack cluster servers, 48-core CPUs with 1.8T 10000 rpm disks, this is set to 12)
default_storage_engine=InnoDB
#Default storage engine is InnoDB
bind-address=10.1.1.141
#IP address the MySQL service binds to
[mysqld_safe]
nice = 0
#Uses the system nice command to set the process priority. Ordinary Linux users can only use values 0-19, and the mysql user is an ordinary user, so 0 gives the mysqld process the highest priority it can get.
socket = /var/lib/mysql/mysql.sock
syslog
vim /etc/my.cnf.d/mysql-clients.cnf
Add the following content:
[mysqldump]
max_allowed_packet = 16M
#MySQL limits the packet size the server accepts according to the config file. Large inserts and updates can hit the max_allowed_packet limit and fail.
quick
#Force mysqldump to stream rows from the server directly to output instead of buffering the whole result set in memory
quote-names
#Quote table and column names with backticks. Enabled by default; use --skip-quote-names to disable it.
Note: see the official documentation for detailed parameter descriptions: http://galeracluster.com/documentation-webpages/mysqlwsrepoptions.html
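Before starting the cluster it is also worth confirming that the Galera provider library referenced by wsrep_provider is actually present on every node (it comes from the galera package installed earlier):
ls -l /usr/lib64/galera/libgalera_smm.so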
The my.cnf configuration on the second and third nodes is the same; just change the relevant IPs and node names:
vim /etc/my.cnf.d/client.cnf
[client]
port = 3306
socket = /var/lib/mysql/mysql.sock
vim /etc/my.cnf.d/server.cnf
[isamchk]
key_buffer_size = 16M
[mysqld]
datadir=/var/lib/mysql
innodb-data-home-dir = /var/lib/mysql
basedir = /usr
binlog_format=ROW
character-set-server = utf8
collation-server = utf8_general_ci
max_allowed_packet = 256M
max_connections = 10000
ignore-db-dirs = lost+found
init-connect = SET NAMES utf8
innodb_autoinc_lock_mode = 2
innodb_buffer_pool_size = 2000M
innodb_doublewrite = 0
innodb_file_format = Barracuda
innodb_file_per_table = 1
innodb_flush_log_at_trx_commit = 2
innodb_flush_method = O_DIRECT
innodb_io_capacity = 500
innodb_locks_unsafe_for_binlog = 1
innodb_log_file_size = 2000M
innodb_read_io_threads = 8
innodb_write_io_threads = 8
key_buffer_size = 64
myisam-recover-options = BACKUP
myisam_sort_buffer_size = 64M
open_files_limit = 102400
performance_schema = on
query_cache_limit = 1M
query_cache_size = 0
query_cache_type = 0
skip-external-locking
skip-name-resolve
socket = /var/lib/mysql/mysql.sock
table_open_cache = 10000
thread_cache_size = 8
thread_stack = 256K
tmpdir = /tmp
user = mysql
wait_timeout = 1800
[galera]
wsrep_on=ON
wsrep_provider=/usr/lib64/galera/libgalera_smm.so
wsrep_cluster_address="gcomm://10.1.1.141,10.1.1.142,10.1.1.143"
wsrep_cluster_name = openstack
wsrep_node_name=controller2
wsrep_node_address=10.1.1.142
wsrep_sst_method=xtrabackup-v2
wsrep_sst_auth=sst:gdxc1902
wsrep_slave_threads=4
default_storage_engine=InnoDB
bind-address=10.1.1.142
[mysqld_safe]
nice = 0
socket = /var/lib/mysql/mysql.sock
syslog
vim /etc/my.cnf.d/mysql-clients.cnf
[mysqldump]
max_allowed_packet = 16M
quick
quote-names
5. Set the MySQL maximum number of connections
After editing server.cnf, also modify the mariadb.service unit file so the database supports up to 10000 connections (this really matters: in the OpenStack environment the author maintains, when the number of VMs and the load are high, running out of database connections regularly leaves parts of the dashboard unable to load).
vim /usr/lib/systemd/system/mariadb.service
Add the following two lines under [Service]:
LimitNOFILE = 10000
LimitNPROC = 10000
After all the changes, run
systemctl daemon-reload
Once the mariadb service is up, you can check the current value with show variables like 'max_connections';
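For example, once MariaDB is running, both limits can be checked in one go (password as set earlier):
mysql -uroot -p -e "SHOW VARIABLES LIKE 'max_connections'; SHOW VARIABLES LIKE 'open_files_limit';"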
6. MySQL service start-up order
After my.cnf is configured on all three nodes, stop MariaDB everywhere:
systemctl stop mariadb.service
Then on the first node, bootstrap the MariaDB cluster with:
/usr/sbin/mysqld --wsrep-new-cluster --user=root &
Start MariaDB on the other two nodes:
systemctl start mariadb.service
systemctl status mariadb.service
If the nodes above cannot connect, try the following command instead:
/usr/sbin/mysqld --wsrep-cluster-address="gcomm://10.1.1.141:4567"
Finally, once the other two nodes have started successfully, go back to the first node and run:
pkill -9 mysql
pkill -9 mysql
systemctl start mariadb.service
systemctl status mariadb.service
Note: if the service fails to start, check the specific error. If it is [ERROR] Can't init tc log, it can be fixed as follows:
cd /var/lib/mysql
chown mysql:mysql *
Then restart the service. The cause is that /var/lib/mysql/tc.log is not owned by the mysql user and group; fixing the ownership is enough.
7. Check the MariaDB cluster status
mysql -uroot -p
show status like 'wsrep_cluster_size%';
show variables like 'wsrep_sst_meth%';
Log into MySQL on these nodes and you will see that wsrep_cluster_size is 3, which means the cluster has 3 members and has been built successfully!
8. Testing
Now let's test it: create a table on ctr3 and insert a record, then check whether it can be queried from ctr1 and ctr2.
The steps are as follows:
ctr3: create the test database
CREATE DATABASE test;
ctr2: use the test database created on ctr3 and create a table named example
USE test;
CREATE TABLE example (node_id INT PRIMARY KEY, node_name VARCHAR(30));
ctr1: insert a test record
INSERT INTO test.example VALUES (1,'TEST1');
SELECT * FROM test.example;
Query the record on ctr1, ctr2 and ctr3 respectively; it can be found on all of them
SELECT * FROM test.example;
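After confirming that replication works, the test database can be dropped from any one node; the DROP replicates to the others as well (optional cleanup):
DROP DATABASE test;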
II. Installing the RabbitMQ Cluster
1. Install Erlang on every node
yum install -y erlang
2. Install RabbitMQ on every node
yum install -y rabbitmq-server
3. On every node, start RabbitMQ and enable it at boot
systemctl enable rabbitmq-server.service
systemctl restart rabbitmq-server.service
systemctl status rabbitmq-server.service
systemctl list-unit-files |grep rabbitmq-server.service
4. Create the openstack user; replace the password with one of your own
rabbitmqctl add_user openstack gdxc1902
5. Grant the openstack user permissions
rabbitmqctl set_permissions openstack ".*" ".*" ".*"
rabbitmqctl set_user_tags openstack administrator
rabbitmqctl list_users
6. Check the listening port; RabbitMQ uses port 5672
netstat -ntlp |grep 5672
7. List the RabbitMQ plugins
/usr/lib/rabbitmq/bin/rabbitmq-plugins list
8. Enable the relevant RabbitMQ plugins on every node
/usr/lib/rabbitmq/bin/rabbitmq-plugins enable rabbitmq_management mochiweb webmachine rabbitmq_web_dispatch amqp_client rabbitmq_management_agent
After enabling the plugins, restart the RabbitMQ service
systemctl restart rabbitmq-server
In a browser, open http://10.254.15.141:15672 ; the default username/password is guest/guest
This UI gives a clear view of RabbitMQ's status and load
We do not have to use guest; we can create another user instead, e.g. mqadmin
rabbitmqctl add_user mqadmin mqadmin
rabbitmqctl set_user_tags mqadmin administrator
rabbitmqctl set_permissions -p / mqadmin ".*" ".*" ".*"
We can also change a password from the command line, e.g. change the guest user's password to passw0rd
rabbitmqctl change_password guest passw0rd
9. Check the RabbitMQ status
rabbitmqctl status
10. Cluster configuration
Run cat /var/lib/rabbitmq/.erlang.cookie on each node to view its cookie
On controller1:
scp /var/lib/rabbitmq/.erlang.cookie controller2:/var/lib/rabbitmq/.erlang.cookie
scp /var/lib/rabbitmq/.erlang.cookie controller3:/var/lib/rabbitmq/.erlang.cookie
On controller2:
systemctl restart rabbitmq-server
rabbitmqctl stop_app
rabbitmqctl join_cluster --ram rabbit@controller1
rabbitmqctl start_app
On controller3:
systemctl restart rabbitmq-server
rabbitmqctl stop_app
rabbitmqctl join_cluster --ram rabbit@controller1
rabbitmqctl start_app
Check the cluster status:
rabbitmqctl cluster_status
11. Cluster management
If RabbitMQ split-brain occurs, follow these steps to rebuild the cluster:
Log in to the node that failed to join the cluster:
rabbitmqctl stop_app
rabbitmqctl reset
rabbitmqctl start_app
Finally, run the join-cluster steps again!
If a node has extra files under /var/lib/rabbitmq, delete them all!
Normally only the mnesia directory should be there.
If there is a pile of other files, the node is in a bad state.
12. RabbitMQ tuning
There is usually not much to tune in RabbitMQ. Summarizing material found online and a Mirantis blog post, the advice boils down to the following points:
a. Deploy RabbitMQ on dedicated servers whenever possible
On a dedicated node the RabbitMQ service gets all of the CPU resources, which gives higher performance.
b. Run RabbitMQ in HiPE mode
RabbitMQ is written in Erlang, and enabling HiPE precompiles the Erlang code, which can improve performance by more than 30% (for benchmark results see https://github.com/binarin/rabbit-simple-benchmark/blob/master/report.md).
However, with HiPE enabled the first RabbitMQ start-up is slow, roughly 2 minutes; in addition, debugging RabbitMQ may become harder, because HiPE can corrupt error backtraces and make them unreadable.
To enable HiPE:
vim /etc/rabbitmq/rabbitmq.config
Uncomment the {hipe_compile, true} entry (including the trailing comma), then restart the RabbitMQ service (you will notice the start-up is slow).
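If your rabbitmq.config has no usable entry to uncomment, a minimal file that only enables HiPE looks roughly like this (Erlang term syntax; note the trailing dot). This is only a sketch; merge it with any other settings you already have in that file:
[
  {rabbit, [
    {hipe_compile, true}
  ]}
].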
scp -p /etc/rabbitmq/rabbitmq.config controller2:/etc/rabbitmq/rabbitmq.config
scp -p /etc/rabbitmq/rabbitmq.config controller3:/etc/rabbitmq/rabbitmq.config
c. Do not use queue mirroring for RPC queues
Research shows that enabling queue mirroring on a 3-node cluster roughly halves message throughput. On the other hand, RPC messages are short-lived: if one is lost, only the operation in progress fails, so leaving RPC queues unmirrored is a good trade-off. This does not apply to every queue, though: the ceilometer queues can be mirrored, because ceilometer messages must be preserved. If your environment includes the ceilometer component, it is best to give ceilometer its own RabbitMQ cluster: normally ceilometer does not generate many messages, but if ceilometer gets stuck, its queues can overflow and crash the RabbitMQ cluster, which inevitably interrupts the other OpenStack services.
d. Reduce the number and/or frequency of metrics sent
Another best practice for RabbitMQ in an OpenStack environment is to reduce the number and/or frequency of metrics being sent. This lowers the chance of messages piling up in RabbitMQ, leaving more resources for the important OpenStack service queues and indirectly improving performance. The ceilometer and mongodb queues in particular should be moved aside where possible.
e. Increase the maximum number of open sockets for RabbitMQ
vim /etc/sysctl.conf
Add at the very end: fs.file-max = 1000000
Run sysctl -p to apply the change
scp -p /etc/sysctl.conf controller2:/etc/sysctl.conf
scp -p /etc/sysctl.conf controller3:/etc/sysctl.conf
Set the ulimit maximum number of open files
vim /etc/security/limits.conf
Add two lines:
* soft nofile 655350
* hard nofile 655350
scp -p /etc/security/limits.conf controller2:/etc/security/limits.conf
scp -p /etc/security/limits.conf controller3:/etc/security/limits.conf
Set the maximum number of open files for services managed by systemd to 1024000
vim /etc/systemd/system.conf
Add two lines:
DefaultLimitNOFILE=1024000
DefaultLimitNPROC=1024000
scp -p /etc/systemd/system.conf controller2:/etc/systemd/system.conf
scp -p /etc/systemd/system.conf controller3:/etc/systemd/system.conf
After the changes, reboot the servers; once they are back up, verify the values with ulimit -Hn
After the change, the RabbitMQ web UI shows that the maximum number of open files and sockets has increased; the defaults are 1024 open files and 829 sockets.
References: https://www.mirantis.com/blog/best-practices-rabbitmq-openstack/
https://www.qcloud.com/community/article/135
Bug where pcs restart causes RabbitMQ users to be lost: https://access.redhat.com/solutions/2374351
III. Installing Pacemaker
The following packages are required on the three controller nodes:
pacemaker
pcs(centos or rhel) or crmsh
corosync
fence-agent (centos or rhel) or cluster-glue
resource-agents
yum install -y lvm2 cifs-utils quota psmisc
yum install -y pcs pacemaker corosync fence-agents-all resource-agents
yum install -y crmsh
1. Enable the pcsd service at boot on all three controller nodes
systemctl enable pcsd
systemctl enable corosync
systemctl start pcsd
systemctl status pcsd
2. Set the hacluster user password; this must be done on every node
passwd hacluster
3. Write the corosync.conf configuration file
vim /etc/corosync/corosync.conf
Add the following content:
totem {
    version: 2
    secauth: off
    cluster_name: openstack_cluster
    transport: udpu
}
nodelist {
    node {
        ring0_addr: controller1
        nodeid: 1
    }
    node {
        ring0_addr: controller2
        nodeid: 2
    }
    node {
        ring0_addr: controller3
        nodeid: 3
    }
}
quorum {
    provider: corosync_votequorum
}
logging {
    to_logfile: yes
    logfile: /var/log/cluster/corosync.log
    to_syslog: yes
}
scp -p /etc/corosync/corosync.conf controller2:/etc/corosync/corosync.conf
scp -p /etc/corosync/corosync.conf controller3:/etc/corosync/corosync.conf
Start the corosync service on all three nodes
systemctl enable corosync
systemctl restart corosync
systemctl status corosync
4. Set up mutual cluster authentication; run on controller1 only
pcs cluster auth controller1 controller2 controller3 -u hacluster -p gdxc1902 --force
5. On controller1, create the cluster named openstack_cluster with controller1, controller2 and controller3 as members
pcs cluster setup --force --name openstack_cluster controller1 controller2 controller3
6. Enable the cluster at boot
pcs cluster enable --all
7. Start the cluster
pcs cluster start --all
8. View and set cluster properties
pcs cluster status
9. Check the pacemaker processes
ps aux |grep pacemaker
10. Verify the corosync installation and its current status
corosync-cfgtool -s
corosync-cmapctl |grep members
pcs status corosync
11. Check whether the configuration is correct (no output means it is correct)
crm_verify -L -V
If you want to ignore this error, do the following:
Disable STONITH
pcs property set stonith-enabled=false
When quorum cannot be reached, choose to ignore it
pcs property set no-quorum-policy=ignore
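Both properties can be checked afterwards with:
pcs property list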
12. Other pcs commands
List the resource agent providers known to pcs
pcs resource providers
13. Create the VIPs via crm
crm
configure
crm(live)configure# primitive vip_public ocf:heartbeat:IPaddr2 params ip="10.254.15.140" cidr_netmask="27" nic=eth0 op monitor interval="30s"
crm(live)configure# primitive vip_management ocf:heartbeat:IPaddr2 params ip="10.1.1.140" cidr_netmask="24" nic=eth1 op monitor interval="30s"
commit
Or with pcs (note that pcs does not use the crm primitive/params keywords):
pcs resource create vip_public ocf:heartbeat:IPaddr2 ip=10.254.15.140 cidr_netmask=27 nic=eth0 op monitor interval=30s
pcs resource create vip_management ocf:heartbeat:IPaddr2 ip=10.1.1.140 cidr_netmask=24 nic=eth1 op monitor interval=30s
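Whichever tool you use, you can verify that the VIPs came up on one of the nodes with something like:
pcs status resources
ip addr show eth0 | grep 10.254.15.140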
IV. Installing HAProxy
1. Install HAProxy
Install HAProxy on all three nodes
yum install -y haproxy
systemctl enable haproxy.service
2. Configure HAProxy logging through rsyslog; do this on all three nodes
cd /etc/rsyslog.d/
vim haproxy.conf
Add:
$ModLoad imudp
$UDPServerRun 514
$template Haproxy,"%rawmsg% \n"
local0.=info -/var/log/haproxy.log;Haproxy
local0.notice -/var/log/haproxy-status.log;Haproxy
local0.* ~
scp -p /etc/rsyslog.d/haproxy.conf controller2:/etc/rsyslog.d/haproxy.conf
scp -p /etc/rsyslog.d/haproxy.conf controller3:/etc/rsyslog.d/haproxy.conf
systemctl restart rsyslog.service
systemctl status rsyslog.service
3. Configure haproxy.cfg on all three nodes
cd /etc/haproxy
mv haproxy.cfg haproxy.cfg.orig
vim haproxy.cfg
Add the following content:
global
log 127.0.0.1 local0
log 127.0.0.1 local1 notice
maxconn 16000
chroot /usr/share/haproxy
user haproxy
group haproxy
daemon
defaults
log global
mode http
option tcplog
option dontlognull
retries 3
option redispatch
maxconn 10000
contimeout 5000
clitimeout 50000
srvtimeout 50000
frontend stats-front
bind *:8088
mode http
default_backend stats-back
backend stats-back
mode http
balance source
stats uri /haproxy/stats
stats auth admin:gdxc1902
listen RabbitMQ-Server-Cluster
bind 10.1.1.140:56720
mode tcp
balance roundrobin
option tcpka
server controller1 controller1:5672 check inter 10s fastinter 2s downinter 3s rise 3 fall 3
server controller2 controller2:5672 check inter 10s fastinter 2s downinter 3s rise 3 fall 3
server controller3 controller3:5672 check inter 10s fastinter 2s downinter 3s rise 3 fall 3
listen RabbitMQ-Web
bind 10.254.15.140:15673
mode tcp
balance roundrobin
option tcpka
server controller1 controller1:15672 check inter 10s fastinter 2s downinter 3s rise 3 fall 3
server controller2 controller2:15672 check inter 10s fastinter 2s downinter 3s rise 3 fall 3
server controller3 controller3:15672 check inter 10s fastinter 2s downinter 3s rise 3 fall 3
listen Galera-Cluster
bind 10.1.1.140:3306
balance leastconn
mode tcp
option tcplog
option httpchk
server controller1 controller1:3306 check port 9200 inter 20s fastinter 2s downinter 2s rise 3 fall 3
server controller2 controller2:3306 check port 9200 inter 20s fastinter 2s downinter 2s rise 3 fall 3
server controller3 controller3:3306 check port 9200 inter 20s fastinter 2s downinter 2s rise 3 fall 3
listen keystone_admin_cluster
bind 10.1.1.140:35357
balance source
option httpchk
option httplog
option httpclose
server controller1 controller1:35358 check inter 10s fastinter 2s downinter 2s rise 30 fall 3
server controller2 controller2:35358 check inter 10s fastinter 2s downinter 2s rise 30 fall 3
server controller3 controller3:35358 check inter 10s fastinter 2s downinter 2s rise 30 fall 3
listen keystone_public_internal_cluster
bind 10.1.1.140:5000
balance source
option httpchk
option httplog
option httpclose
server controller1 controller1:5002 check inter 10s fastinter 2s downinter 2s rise 30 fall 3
server controller2 controller2:5002 check inter 10s fastinter 2s downinter 2s rise 30 fall 3
server controller3 controller3:5002 check inter 10s fastinter 2s downinter 2s rise 30 fall 3
listen Memcache_Servers
bind 10.1.1.140:22122
balance roundrobin
mode tcp
option tcpka
server controller1 controller1:11211 check inter 10s fastinter 2s downinter 2s rise 30 fall 3
server controller2 controller2:11211 check inter 10s fastinter 2s downinter 2s rise 30 fall 3
server controller3 controller3:11211 check inter 10s fastinter 2s downinter 2s rise 30 fall 3
listen dashboard_cluster
bind 10.254.15.140:80
balance source
option httpchk
option httplog
option httpclose
server controller1 controller1:8080 check inter 2000 fall 3
server controller2 controller2:8080 check inter 2000 fall 3
server controller3 controller3:8080 check inter 2000 fall 3
listen glance_api_cluster
bind 10.1.1.140:9292
balance source
option httpchk
option httplog
option httpclose
server controller1 controller1:9393 check inter 10s fastinter 2s downinter 3s rise 3 fall 3
server controller2 controller2:9393 check inter 10s fastinter 2s downinter 3s rise 3 fall 3
server controller3 controller3:9393 check inter 10s fastinter 2s downinter 3s rise 3 fall 3
listen glance_registry_cluster
bind 10.1.1.140:9090
balance roundrobin
mode tcp
option tcpka
server controller1 controller1:9191 check inter 10s fastinter 2s downinter 3s rise 3 fall 3
server controller2 controller2:9191 check inter 10s fastinter 2s downinter 3s rise 3 fall 3
server controller3 controller3:9191 check inter 10s fastinter 2s downinter 3s rise 3 fall 3
listen nova_compute_api_cluster
bind 10.1.1.140:8774
balance source
option httpchk
option httplog
option httpclose
server controller1 controller1:9774 check inter 10s fastinter 2s downinter 3s rise 3 fall 3
server controller2 controller2:9774 check inter 10s fastinter 2s downinter 3s rise 3 fall 3
server controller3 controller3:9774 check inter 10s fastinter 2s downinter 3s rise 3 fall 3
listen nova-metadata-api_cluster
bind 10.1.1.140:8775
balance source
option httpchk
option httplog
option httpclose
server controller1 controller1:9775 check inter 10s fastinter 2s downinter 3s rise 3 fall 3
server controller2 controller2:9775 check inter 10s fastinter 2s downinter 3s rise 3 fall 3
server controller3 controller3:9775 check inter 10s fastinter 2s downinter 3s rise 3 fall 3
listen nova_vncproxy_cluster
bind 10.1.1.140:6080
balance source
option tcpka
option tcplog
server controller1 controller1:6080 check inter 2000 rise 2 fall 5
server controller2 controller2:6080 check inter 2000 rise 2 fall 5
server controller3 controller3:6080 check inter 2000 rise 2 fall 5
listen neutron_api_cluster
bind 10.1.1.140:9696
balance source
option httpchk
option httplog
option httpclose
server controller1 controller1:9797 check inter 10s fastinter 2s downinter 3s rise 3 fall 3
server controller2 controller2:9797 check inter 10s fastinter 2s downinter 3s rise 3 fall 3
server controller3 controller3:9797 check inter 10s fastinter 2s downinter 3s rise 3 fall 3
listen cinder_api_cluster
bind 10.1.1.140:8776
balance source
option httpchk
option httplog
option httpclose
server controller1 controller1:8778 check inter 10s fastinter 2s downinter 3s rise 3 fall 3
server controller2 controller2:8778 check inter 10s fastinter 2s downinter 3s rise 3 fall 3
server controller3 controller3:8778 check inter 10s fastinter 2s downinter 3s rise 3 fall 3
scp -p /etc/haproxy/haproxy.cfg controller2:/etc/haproxy/haproxy.cfg
scp -p /etc/haproxy/haproxy.cfg controller3:/etc/haproxy/haproxy.cfg
systemctl restart haproxy.service
systemctl status haproxy.service
Parameter explanations; for full details see the HAProxy reference manual:
inter <delay>: interval between health checks, in milliseconds, default 2000; fastinter and downinter can be used to tune the delay according to the server state.
rise <count>: number of consecutive successful checks required before a server that was down is considered operational again.
fall <count>: number of consecutive failed checks before a server is considered unavailable.
4. Configure HAProxy to monitor the Galera database cluster
On controller1, enter MySQL and create the clustercheck user:
grant process on *.* to 'clustercheckuser'@'localhost' identified by 'gdxc1902';
flush privileges;
On all three nodes, create the clustercheck file containing the clustercheckuser credentials
vim /etc/sysconfig/clustercheck
Add:
MYSQL_USERNAME=clustercheckuser
MYSQL_PASSWORD=gdxc1902
MYSQL_HOST=localhost
MYSQL_PORT=3306
scp -p /etc/sysconfig/clustercheck controller2:/etc/sysconfig/clustercheck
scp -p /etc/sysconfig/clustercheck controller3:/etc/sysconfig/clustercheck
Check that /usr/bin/clustercheck exists; if not, download it from the Internet and place it in /usr/bin, remembering to run chmod +x /usr/bin/clustercheck to make it executable.
This script is what lets HAProxy monitor the state of the Galera cluster.
scp -p /usr/bin/clustercheck controller2:/usr/bin/clustercheck
scp -p /usr/bin/clustercheck controller3:/usr/bin/clustercheck
Check the Galera status on controller1
clustercheck (if /usr/bin/clustercheck is present you can run the clustercheck command directly)
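If you ever need to rebuild the script yourself, its core is just an HTTP wrapper around one Galera status query. A heavily simplified sketch (the real clustercheck script handles more options) that reuses the credentials file created above:
#!/bin/bash
# Minimal clustercheck sketch: answer HTTP 200 when the local node is Synced (wsrep_local_state = 4), else 503.
source /etc/sysconfig/clustercheck
STATE=$(mysql -u"$MYSQL_USERNAME" -p"$MYSQL_PASSWORD" -h"$MYSQL_HOST" -P"$MYSQL_PORT" \
        -Nse "SHOW STATUS LIKE 'wsrep_local_state';" 2>/dev/null | awk '{print $2}')
if [ "$STATE" = "4" ]; then
    printf "HTTP/1.1 200 OK\r\nContent-Type: text/plain\r\nConnection: close\r\n\r\nGalera node is synced.\r\n"
else
    printf "HTTP/1.1 503 Service Unavailable\r\nContent-Type: text/plain\r\nConnection: close\r\n\r\nGalera node is not synced.\r\n"
fi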
Hook it into xinetd to monitor the Galera service (install xinetd on all three nodes)
yum install -y xinetd
vim /etc/xinetd.d/mysqlchk
Add the following content:
# default: on
# description: mysqlchk
service mysqlchk
{
# this is a config for xinetd, place it in /etc/xinetd.d/
disable = no
flags = REUSE
socket_type = stream
port = 9200
wait = no
user = nobody
server = /usr/bin/clustercheck
log_on_failure = USERID
only_from = 0.0.0.0/0
# recommended to put the IPs that need
# to connect exclusively (security purposes)
per_source = UNLIMITED
}
scp -p /etc/xinetd.d/mysqlchk controller2:/etc/xinetd.d/mysqlchk
scp -p /etc/xinetd.d/mysqlchk controller3:/etc/xinetd.d/mysqlchk
vim /etc/services
Add as the last line: mysqlchk 9200/tcp # mysqlchk
scp -p /etc/services controller2:/etc/services
scp -p /etc/services controller3:/etc/services
Restart the xinetd service
systemctl restart xinetd.service
systemctl status xinetd.service
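You can then hit the health-check port from any controller; a synced node should answer with an HTTP 200 (IPs as used in this guide):
curl -i http://10.1.1.141:9200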
5. Adjust kernel parameters on all three nodes
echo 'net.ipv4.ip_nonlocal_bind = 1'>>/etc/sysctl.conf
echo 'net.ipv4.ip_forward=1'>>/etc/sysctl.conf
sysctl -p
The first parameter lets HAProxy bind to an address that does not belong to a local network interface.
The second parameter controls whether the kernel forwards packets; it is disabled by default, so we enable it here.
Note! Without these two parameters, the HAProxy service will not start on the second and third controller nodes.
6. Start the HAProxy service on all three nodes
systemctl restart haproxy.service
systemctl status haproxy.service
7. Access the HAProxy web stats page
http://10.254.15.140:8088/haproxy/stats admin/gdxc1902
V. Installing and Configuring Keystone
1. Create the keystone database on controller1
CREATE DATABASE keystone;
2. On controller1, create the database user and grant privileges
GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' IDENTIFIED BY 'gdxc1902';
GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' IDENTIFIED BY 'gdxc1902';
Remember to replace the database password with your own
3. Install keystone and memcached on all three nodes
yum install -y openstack-keystone httpd mod_wsgi python-openstackclient memcached python-memcached openstack-utils
4. Tune the memcached configuration
vim /etc/sysconfig/memcached
PORT="11211"
#Listening port
USER="memcached"
#User that runs memcached
MAXCONN="8192"
#Maximum number of connections
CACHESIZE="1024"
#Maximum memory usage in MB
OPTIONS="-l 127.0.0.1,::1,10.1.1.141 -t 4 -I 10m"
# -l sets the bind addresses, -t the number of threads, -I the maximum slab page size
Note! Change 10.1.1.141 in OPTIONS to each node's own IP
scp -p /etc/sysconfig/memcached controller2:/etc/sysconfig/memcached
scp -p /etc/sysconfig/memcached controller3:/etc/sysconfig/memcached
5. On all three nodes, start memcached and enable it at boot
systemctl enable memcached.service
systemctl restart memcached.service
systemctl status memcached.service
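A quick way to confirm memcached is answering on the management address is the memcached-tool script shipped with the memcached package, for example:
memcached-tool 10.1.1.141:11211 stats | head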
6. Configure the /etc/keystone/keystone.conf file
cp /etc/keystone/keystone.conf /etc/keystone/keystone.conf.bak
>/etc/keystone/keystone.conf
openstack-config --set /etc/keystone/keystone.conf DEFAULT debug false
openstack-config --set /etc/keystone/keystone.conf DEFAULT verbose true
openstack-config --set /etc/keystone/keystone.conf DEFAULT admin_endpoint http://10.1.1.140:35357
openstack-config --set /etc/keystone/keystone.conf DEFAULT public_endpoint http://10.1.1.140:5000
openstack-config --set /etc/keystone/keystone.conf eventlet_server public_bind_host 10.1.1.141
openstack-config --set /etc/keystone/keystone.conf eventlet_server admin_bind_host 10.1.1.141
openstack-config --set /etc/keystone/keystone.conf cache backend oslo_cache.memcache_pool
openstack-config --set /etc/keystone/keystone.conf cache enabled true
openstack-config --set /etc/keystone/keystone.conf cache memcache_servers controller1:11211,controller2:11211,controller3:11211
openstack-config --set /etc/keystone/keystone.conf cache memcache_dead_retry 60
openstack-config --set /etc/keystone/keystone.conf cache memcache_socket_timeout 1
openstack-config --set /etc/keystone/keystone.conf cache memcache_pool_maxsize 1000
openstack-config --set /etc/keystone/keystone.conf cache memcache_pool_unused_timeout 60
openstack-config --set /etc/keystone/keystone.conf catalog template_file /etc/keystone/default_catalog.templates
openstack-config --set /etc/keystone/keystone.conf catalog driver sql
openstack-config --set /etc/keystone/keystone.conf database connection mysql://keystone:gdxc1902@10.1.1.140/keystone
openstack-config --set /etc/keystone/keystone.conf database idle_timeout 3600
openstack-config --set /etc/keystone/keystone.conf database max_pool_size 30
openstack-config --set /etc/keystone/keystone.conf database max_retries -1
openstack-config --set /etc/keystone/keystone.conf database retry_interval 2
openstack-config --set /etc/keystone/keystone.conf database max_overflow 60
openstack-config --set /etc/keystone/keystone.conf identity driver sql
openstack-config --set /etc/keystone/keystone.conf identity caching false
openstack-config --set /etc/keystone/keystone.conf fernet_tokens key_repository /etc/keystone/fernet-keys/
openstack-config --set /etc/keystone/keystone.conf fernet_tokens max_active_keys 3
openstack-config --set /etc/keystone/keystone.conf memcache servers controller1:11211,controller2:11211,controller3:11211
openstack-config --set /etc/keystone/keystone.conf memcache dead_retry 60
openstack-config --set /etc/keystone/keystone.conf memcache socket_timeout 1
openstack-config --set /etc/keystone/keystone.conf memcache pool_maxsize 1000
openstack-config --set /etc/keystone/keystone.conf memcache pool_unused_timeout 60
openstack-config --set /etc/keystone/keystone.conf oslo_messaging_rabbit rabbit_hosts controller1:5672,controller2:5672,controller3:5672
openstack-config --set /etc/keystone/keystone.conf oslo_messaging_rabbit rabbit_userid openstack
openstack-config --set /etc/keystone/keystone.conf oslo_messaging_rabbit rabbit_password gdxc1902
openstack-config --set /etc/keystone/keystone.conf oslo_messaging_rabbit rabbit_use_ssl false
openstack-config --set /etc/keystone/keystone.conf oslo_messaging_rabbit rabbit_ha_queues true
openstack-config --set /etc/keystone/keystone.conf oslo_messaging_rabbit rabbit_retry_interval 1
openstack-config --set /etc/keystone/keystone.conf oslo_messaging_rabbit rabbit_retry_backoff 2
openstack-config --set /etc/keystone/keystone.conf oslo_messaging_rabbit rabbit_max_retries 0
openstack-config --set /etc/keystone/keystone.conf token expiration 3600
openstack-config --set /etc/keystone/keystone.conf token caching False
openstack-config --set /etc/keystone/keystone.conf token provider fernet
scp -p /etc/keystone/keystone.conf controller2:/etc/keystone/keystone.conf
scp -p /etc/keystone/keystone.conf controller3:/etc/keystone/keystone.conf
7. Configure the httpd.conf file
vim /etc/httpd/conf/httpd.conf
servername controller1 (on controller2 write controller2)
listen 8080 (change 80 to 8080; HAProxy already uses 80, so httpd will not start otherwise)
sed -i "s/#ServerName www.example.com:80/ServerName controller1/" /etc/httpd/conf/httpd.conf
sed -i "s/Listen 80/Listen 8080/" /etc/httpd/conf/httpd.conf
8. Integrate keystone with httpd
vim /etc/httpd/conf.d/wsgi-keystone.conf
Listen 5002
Listen 35358
<VirtualHost *:5002>
WSGIDaemonProcess keystone-public processes=5 threads=1 user=keystone group=keystone display-name=%{GROUP}
WSGIProcessGroup keystone-public
WSGIScriptAlias / /usr/bin/keystone-wsgi-public
WSGIApplicationGroup %{GLOBAL}
WSGIPassAuthorization On
ErrorLogFormat "%{cu}t %M"
ErrorLog /var/log/httpd/keystone-error.log
CustomLog /var/log/httpd/keystone_access.log combined
<Directory /usr/bin>
Require all granted
</Directory>
</VirtualHost>
<VirtualHost *:35358>
WSGIDaemonProcess keystone-admin processes=5 threads=1 user=keystone group=keystone display-name=%{GROUP}
WSGIProcessGroup keystone-admin
WSGIScriptAlias / /usr/bin/keystone-wsgi-admin
WSGIApplicationGroup %{GLOBAL}
WSGIPassAuthorization On
ErrorLogFormat "%{cu}t %M"
ErrorLog /var/log/httpd/keystone-error.log
CustomLog /var/log/httpd/keystone_access.log combined
<Directory /usr/bin>
Require all granted
</Directory>
</VirtualHost>
Copy this file to the other two nodes:
scp -p /etc/httpd/conf.d/wsgi-keystone.conf controller2:/etc/httpd/conf.d/wsgi-keystone.conf
scp -p /etc/httpd/conf.d/wsgi-keystone.conf controller3:/etc/httpd/conf.d/wsgi-keystone.conf
9. Sync the keystone database on controller1
su -s /bin/sh -c "keystone-manage db_sync" keystone
10. Initialize the fernet keys on all three nodes
keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone
keystone-manage credential_setup --keystone-user keystone --keystone-group keystone
11. Sync the fernet keys across the three nodes; run on controller1
scp -p /etc/keystone/fernet-keys/* controller2:/etc/keystone/fernet-keys/
scp -p /etc/keystone/fernet-keys/* controller3:/etc/keystone/fernet-keys/
scp -p /etc/keystone/credential-keys/* controller2:/etc/keystone/credential-keys/
scp -p /etc/keystone/credential-keys/* controller3:/etc/keystone/credential-keys/
12. On all three nodes, start httpd and enable it at boot
systemctl enable httpd.service
systemctl restart httpd.service
systemctl status httpd.service
systemctl list-unit-files |grep httpd.service
13. Bootstrap the admin user and role on controller1
keystone-manage bootstrap \
--bootstrap-password gdxc1902 \
--bootstrap-role-name admin \
--bootstrap-service-name keystone \
--bootstrap-admin-url http://10.1.1.140:35357/v3/ \
--bootstrap-internal-url http://10.1.1.140:35357/v3/ \
--bootstrap-public-url http://10.1.1.140:5000/v3/ \
--bootstrap-region-id RegionOne
With this, the admin account can be used on the openstack command line.
Verify that the configuration is sound:
openstack project list --os-username admin --os-project-name admin --os-user-domain-id default --os-project-domain-id default --os-identity-api-version 3 --os-auth-url http://10.1.1.140:5000 --os-password gdxc1902
14. On controller1, create the admin environment variables: create the file /root/admin-openrc with the following content:
vim /root/admin-openrc
Add the following content:
export OS_USER_DOMAIN_ID=default
export OS_PROJECT_DOMAIN_ID=default
export OS_USERNAME=admin
export OS_PROJECT_NAME=admin
export OS_PASSWORD=gdxc1902
export OS_IDENTITY_API_VERSION=3
export OS_AUTH_URL=http://10.1.1.140:35357/v3
15. Create the service project on controller1
source /root/admin-openrc
openstack project create --domain default --description "Service Project" service
16. Create the demo project on controller1
openstack project create --domain default --description "Demo Project" demo
17. Create the demo user on controller1
openstack user create --domain default demo --password gdxc1902
Note: gdxc1902 is the demo user's password
18. On controller1, create the user role and add the demo user to it
openstack role create user
openstack role add --project demo --user demo user
19. Verify keystone on controller1
unset OS_TOKEN OS_URL
openstack --os-auth-url http://10.1.1.140:35357/v3 --os-project-domain-name default --os-user-domain-name default --os-project-name admin --os-username admin token issue --os-password gdxc1902
openstack --os-auth-url http://10.1.1.140:35357/v3 --os-project-domain-name default --os-user-domain-name default --os-project-name demo --os-username demo token issue --os-password gdxc1902
20. On controller1, create the demo environment variables: create the file /root/demo-openrc with the following content:
vim /root/demo-openrc
Add:
export OS_USER_DOMAIN_ID=default
export OS_PROJECT_DOMAIN_ID=default
export OS_USERNAME=demo
export OS_PROJECT_NAME=demo
export OS_PASSWORD=gdxc1902
export OS_IDENTITY_API_VERSION=3
export OS_AUTH_URL=http://10.1.1.140:35357/v3
scp -p /root/admin-openrc controller2:/root/admin-openrc
scp -p /root/admin-openrc controller3:/root/admin-openrc
scp -p /root/demo-openrc controller2:/root/demo-openrc
scp -p /root/demo-openrc controller3:/root/demo-openrc
VI. Installing and Configuring Glance
1. Create the glance database on controller1
CREATE DATABASE glance;
2. On controller1, create the database user and grant privileges
GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' IDENTIFIED BY 'gdxc1902';
GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' IDENTIFIED BY 'gdxc1902';
3. On controller1, create the glance user and grant it the admin role
source /root/admin-openrc
openstack user create --domain default glance --password gdxc1902
openstack role add --project service --user glance admin
4. Create the image service on controller1
openstack service create --name glance --description "OpenStack Image service" image
5. Create the glance endpoints on controller1
openstack endpoint create --region RegionOne image public http://10.1.1.140:9292
openstack endpoint create --region RegionOne image internal http://10.1.1.140:9292
openstack endpoint create --region RegionOne image admin http://10.1.1.140:9292
6. Install the glance packages on all three nodes
yum install -y openstack-glance
7. On all three nodes, edit the glance configuration file /etc/glance/glance-api.conf
Note: replace the highlighted passwords with your own
cp /etc/glance/glance-api.conf /etc/glance/glance-api.conf.bak
>/etc/glance/glance-api.conf
openstack-config --set /etc/glance/glance-api.conf DEFAULT debug False
openstack-config --set /etc/glance/glance-api.conf DEFAULT verbose True
openstack-config --set /etc/glance/glance-api.conf DEFAULT bind_host controller1
openstack-config --set /etc/glance/glance-api.conf DEFAULT bind_port 9393
openstack-config --set /etc/glance/glance-api.conf DEFAULT registry_host controller1
openstack-config --set /etc/glance/glance-api.conf DEFAULT registry_port 9191
openstack-config --set /etc/glance/glance-api.conf DEFAULT auth_region RegionOne
openstack-config --set /etc/glance/glance-api.conf DEFAULT registry_client_protocol http
openstack-config --set /etc/glance/glance-api.conf DEFAULT show_image_direct_url False
openstack-config --set /etc/glance/glance-api.conf DEFAULT workers 4
openstack-config --set /etc/glance/glance-api.conf DEFAULT backlog 4096
openstack-config --set /etc/glance/glance-api.conf DEFAULT image_cache_dir /var/lib/glance/image-cache
openstack-config --set /etc/glance/glance-api.conf DEFAULT rpc_backend rabbit
openstack-config --set /etc/glance/glance-api.conf DEFAULT scrub_time 43200
openstack-config --set /etc/glance/glance-api.conf DEFAULT delayed_delete False
openstack-config --set /etc/glance/glance-api.conf DEFAULT enable_v1_api False
openstack-config --set /etc/glance/glance-api.conf DEFAULT enable_v2_api True
openstack-config --set /etc/glance/glance-api.conf oslo_messaging_rabbit rabbit_hosts controller1:5672,controller2:5672,controller3:5672
openstack-config --set /etc/glance/glance-api.conf oslo_messaging_rabbit rabbit_userid openstack
openstack-config --set /etc/glance/glance-api.conf oslo_messaging_rabbit rabbit_password gdxc1902
openstack-config --set /etc/glance/glance-api.conf oslo_messaging_rabbit rabbit_use_ssl False
openstack-config --set /etc/glance/glance-api.conf oslo_messaging_rabbit rabbit_ha_queues True
openstack-config --set /etc/glance/glance-api.conf oslo_messaging_rabbit rabbit_retry_interval 1
openstack-config --set /etc/glance/glance-api.conf oslo_messaging_rabbit rabbit_retry_backoff 2
openstack-config --set /etc/glance/glance-api.conf oslo_messaging_rabbit rabbit_max_retries 0
openstack-config --set /etc/glance/glance-api.conf oslo_messaging_rabbit amqp_durable_queues False
openstack-config --set /etc/glance/glance-api.conf oslo_concurrency lock_path /var/lock/glance
openstack-config --set /etc/glance/glance-api.conf database connection mysql+pymysql://glance:gdxc1902@10.1.1.140/glance
openstack-config --set /etc/glance/glance-api.conf database idle_timeout 3600
openstack-config --set /etc/glance/glance-api.conf database max_pool_size 30
openstack-config --set /etc/glance/glance-api.conf database max_retries -1
openstack-config --set /etc/glance/glance-api.conf database retry_interval 2
openstack-config --set /etc/glance/glance-api.conf database max_overflow 60
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken auth_uri http://10.1.1.140:5000
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken auth_url http://10.1.1.140:35357
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken memcached_servers controller1:11211,controller2:11211,controller3:11211
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken auth_type password
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken project_domain_name default
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken user_domain_name default
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken username glance
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken password gdxc1902
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken project_name service
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken token_cache_time -1
openstack-config --set /etc/glance/glance-api.conf paste_deploy flavor keystone
openstack-config --set /etc/glance/glance-api.conf glance_store stores file,http
openstack-config --set /etc/glance/glance-api.conf glance_store default_store file
openstack-config --set /etc/glance/glance-api.conf glance_store filesystem_store_datadir /var/lib/glance/images/
scp -p /etc/glance/glance-api.conf controller2:/etc/glance/glance-api.conf
scp -p /etc/glance/glance-api.conf controller3:/etc/glance/glance-api.conf
8. On all three nodes, edit the glance configuration file /etc/glance/glance-registry.conf
cp /etc/glance/glance-registry.conf /etc/glance/glance-registry.conf.bak
>/etc/glance/glance-registry.conf
openstack-config --set /etc/glance/glance-registry.conf DEFAULT debug False
openstack-config --set /etc/glance/glance-registry.conf DEFAULT verbose True
openstack-config --set /etc/glance/glance-registry.conf DEFAULT bind_host controller1
openstack-config --set /etc/glance/glance-registry.conf DEFAULT bind_port 9191
openstack-config --set /etc/glance/glance-registry.conf DEFAULT workers 4
openstack-config --set /etc/glance/glance-registry.conf oslo_messaging_rabbit rabbit_hosts controller1:5672,controller2:5672,controller3:5672
openstack-config --set /etc/glance/glance-registry.conf oslo_messaging_rabbit rabbit_userid openstack
openstack-config --set /etc/glance/glance-registry.conf oslo_messaging_rabbit rabbit_password gdxc1902
openstack-config --set /etc/glance/glance-registry.conf oslo_messaging_rabbit rabbit_use_ssl False
openstack-config --set /etc/glance/glance-registry.conf oslo_messaging_rabbit rabbit_ha_queues True
openstack-config --set /etc/glance/glance-registry.conf oslo_messaging_rabbit rabbit_retry_interval 1
openstack-config --set /etc/glance/glance-registry.conf oslo_messaging_rabbit rabbit_retry_backoff 2
openstack-config --set /etc/glance/glance-registry.conf oslo_messaging_rabbit rabbit_max_retries 0
openstack-config --set /etc/glance/glance-registry.conf oslo_messaging_rabbit amqp_durable_queues False
openstack-config --set /etc/glance/glance-registry.conf database connection mysql+pymysql://glance:gdxc1902@10.1.1.140/glance
openstack-config --set /etc/glance/glance-registry.conf database idle_timeout 3600
openstack-config --set /etc/glance/glance-registry.conf database max_pool_size 30
openstack-config --set /etc/glance/glance-registry.conf database max_retries -1
openstack-config --set /etc/glance/glance-registry.conf database retry_interval 2
openstack-config --set /etc/glance/glance-registry.conf database max_overflow 60
openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken auth_uri http://10.1.1.140:5000
openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken auth_url http://10.1.1.140:35357
openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken memcached_servers controller1:11211,controller2:11211,controller3:11211
openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken auth_type password
openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken project_domain_name default
openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken user_domain_name default
openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken project_name service
openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken username glance
openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken password gdxc1902
openstack-config --set /etc/glance/glance-registry.conf paste_deploy flavor keystone
openstack-config --set /etc/glance/glance-registry.conf glance_store filesystem_store_datadir /var/lib/glance/images/
openstack-config --set /etc/glance/glance-registry.conf glance_store os_region_name RegionOne
scp -p /etc/glance/glance-registry.conf controller2:/etc/glance/glance-registry.conf
scp -p /etc/glance/glance-registry.conf controller3:/etc/glance/glance-registry.conf
9. Sync the glance database on controller1
su -s /bin/sh -c "glance-manage db_sync" glance
10. On all three nodes, start glance and enable it at boot
systemctl enable openstack-glance-api.service openstack-glance-registry.service
systemctl restart openstack-glance-api.service openstack-glance-registry.service
systemctl status openstack-glance-api.service openstack-glance-registry.service
11. On all three nodes, append the glance API version to the openrc environment files
echo " " >> /root/admin-openrc && \
echo " " >> /root/demo-openrc && \
echo "export OS_IMAGE_API_VERSION=2"|tee -a /root/admin-openrc /root/demo-openrc
12. Set up the glance backend storage
Because this is an HA environment, the three controller nodes must share a backend store; otherwise, since a request may be handled by any controller's glance service, an image stored on only one node leads to image-not-found errors when creating VMs.
Here we use NFS as the glance backend store. In a real production environment Ceph, GlusterFS or similar would normally be used; NFS is used here only as an example.
First prepare a physical or virtual machine with plenty of disk space, ideally on a 10 Gb network.
Here we use the virtual machine at 10.1.1.125.
First install the glance components on that machine:
yum install -y openstack-glance python-glance python-glanceclient python-openstackclient openstack-nova-compute
Next install the NFS services
yum install -y nfs-utils rpcbind
Create the glance images directory and give the glance user ownership of it:
mkdir -p /var/lib/glance/images
chown -R glance:glance /var/lib/glance/images
Configure NFS to export the /var/lib/glance directory
vim /etc/exports
/var/lib/glance *(rw,sync,no_root_squash)
Start the services and enable NFS at boot
systemctl enable rpcbind
systemctl enable nfs-server.service
systemctl restart rpcbind
systemctl restart nfs-server.service
systemctl status nfs-server.service
Verify that the NFS export is active
showmount -e
Then do the following on the 3 controller nodes:
mount -t nfs 10.1.1.125:/var/lib/glance/images /var/lib/glance/images
echo "/usr/bin/mount -t nfs 10.1.1.125:/var/lib/glance/images /var/lib/glance/images" >> /etc/rc.d/rc.local
chmod +x /etc/rc.d/rc.local
df -h
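As an alternative to the rc.local entry, the mount can also be made persistent with a normal fstab record (a sketch; adjust the options to your environment):
10.1.1.125:/var/lib/glance/images /var/lib/glance/images nfs defaults,_netdev 0 0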
13. Download a test image on controller1
wget http://10.254.15.138/images/cirros-0.3.4-x86_64-disk.img
14. Upload the image to glance on controller1
source /root/admin-openrc
glance image-create --name "cirros-0.3.4-x86_64" --file cirros-0.3.4-x86_64-disk.img --disk-format qcow2 --container-format bare --visibility public --progress
If you have built your own CentOS 6.7 image, you can upload it with the same command, for example:
glance image-create --name "CentOS6.7-x86_64" --file CentOS6.7.qcow2 --disk-format qcow2 --container-format bare --visibility public --progress
List the images:
glance image-list
VII. Installing and Configuring Nova
1. Create the nova databases on controller1
CREATE DATABASE nova;
CREATE DATABASE nova_api;
2. On controller1, create the database user and grant privileges
GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' IDENTIFIED BY 'gdxc1902';
GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' IDENTIFIED BY 'gdxc1902';
GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'localhost' IDENTIFIED BY 'gdxc1902';
GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'%' IDENTIFIED BY 'gdxc1902';
3. On controller1, create the nova user and grant it the admin role
source /root/admin-openrc
openstack user create --domain default nova --password gdxc1902
openstack role add --project service --user nova admin
4. Create the compute service on controller1
openstack service create --name nova --description "OpenStack Compute" compute
5. Create the nova endpoints on controller1
openstack endpoint create --region RegionOne compute public http://10.1.1.140:8774/v2.1/%\(tenant_id\)s
openstack endpoint create --region RegionOne compute internal http://10.1.1.140:8774/v2.1/%\(tenant_id\)s
openstack endpoint create --region RegionOne compute admin http://10.1.1.140:8774/v2.1/%\(tenant_id\)s
6. Install the nova components on the three controller nodes
yum install -y openstack-nova-api openstack-nova-conductor openstack-nova-cert openstack-nova-console openstack-nova-novncproxy openstack-nova-scheduler
7. On the three controller nodes, edit the nova configuration file /etc/nova/nova.conf
cp /etc/nova/nova.conf /etc/nova/nova.conf.bak
>/etc/nova/nova.conf
openstack-config --set /etc/nova/nova.conf DEFAULT debug False
openstack-config --set /etc/nova/nova.conf DEFAULT verbose True
openstack-config --set /etc/nova/nova.conf DEFAULT enabled_apis osapi_compute,metadata
openstack-config --set /etc/nova/nova.conf DEFAULT osapi_compute_listen_port 9774
openstack-config --set /etc/nova/nova.conf DEFAULT metadata_listen_port 9775
openstack-config --set /etc/nova/nova.conf DEFAULT my_ip 10.1.1.141
openstack-config --set /etc/nova/nova.conf DEFAULT use_neutron True
openstack-config --set /etc/nova/nova.conf DEFAULT firewall_driver nova.virt.firewall.NoopFirewallDriver
openstack-config --set /etc/nova/nova.conf DEFAULT rpc_backend rabbit
openstack-config --set /etc/nova/nova.conf DEFAULT auth_strategy keystone
openstack-config --set /etc/nova/nova.conf DEFAULT scheduler_use_baremetal_filters False
openstack-config --set /etc/nova/nova.conf DEFAULT scheduler_default_filters RetryFilter,AvailabilityZoneFilter,RamFilter,CoreFilter,DiskFilter,ComputeFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter,ServerGroupAntiAffinityFilter,ServerGroupAffinityFilter
openstack-config --set /etc/nova/nova.conf DEFAULT scheduler_weight_classes nova.scheduler.weights.all_weighers
openstack-config --set /etc/nova/nova.conf DEFAULT scheduler_host_subset_size 30
openstack-config --set /etc/nova/nova.conf DEFAULT scheduler_driver nova.scheduler.filter_scheduler.FilterScheduler
openstack-config --set /etc/nova/nova.conf DEFAULT scheduler_max_attempts 3
openstack-config --set /etc/nova/nova.conf DEFAULT scheduler_available_filters nova.scheduler.filters.all_filters
openstack-config --set /etc/nova/nova.conf DEFAULT ram_allocation_ratio 3.0
openstack-config --set /etc/nova/nova.conf DEFAULT disk_allocation_ratio 1.0
openstack-config --set /etc/nova/nova.conf DEFAULT cpu_allocation_ratio 16.0
openstack-config --set /etc/nova/nova.conf DEFAULT service_down_time 180
openstack-config --set /etc/nova/nova.conf DEFAULT osapi_compute_workers 4
openstack-config --set /etc/nova/nova.conf DEFAULT metadata_workers 4
openstack-config --set /etc/nova/nova.conf DEFAULT rootwrap_config /etc/nova/rootwrap.conf
openstack-config --set /etc/nova/nova.conf DEFAULT notify_on_state_change vm_and_task_state
openstack-config --set /etc/nova/nova.conf DEFAULT allow_resize_to_same_host True
openstack-config --set /etc/nova/nova.conf DEFAULT novncproxy_host 10.1.1.141
openstack-config --set /etc/nova/nova.conf DEFAULT novncproxy_port 6080
openstack-config --set /etc/nova/nova.conf database connection mysql+pymysql://nova:gdxc1902@10.1.1.140/nova
openstack-config --set /etc/nova/nova.conf database idle_timeout 3600
openstack-config --set /etc/nova/nova.conf database max_pool_size 30
openstack-config --set /etc/nova/nova.conf database retry_interval 2
openstack-config --set /etc/nova/nova.conf database max_retries -1
openstack-config --set /etc/nova/nova.conf api_database connection mysql+pymysql://nova:gdxc1902@10.1.1.140/nova_api
openstack-config --set /etc/nova/nova.conf oslo_messaging_rabbit rabbit_hosts controller1:5672,controller2:5672,controller3:5672
openstack-config --set /etc/nova/nova.conf oslo_messaging_rabbit rabbit_userid openstack
openstack-config --set /etc/nova/nova.conf oslo_messaging_rabbit rabbit_password gdxc1902
openstack-config --set /etc/nova/nova.conf oslo_messaging_rabbit rabbit_use_ssl False
openstack-config --set /etc/nova/nova.conf oslo_messaging_rabbit rabbit_ha_queues True
openstack-config --set /etc/nova/nova.conf oslo_messaging_rabbit rabbit_retry_interval 1
openstack-config --set /etc/nova/nova.conf oslo_messaging_rabbit rabbit_retry_backoff 2
openstack-config --set /etc/nova/nova.conf oslo_messaging_rabbit rabbit_max_retries 0
openstack-config --set /etc/nova/nova.conf oslo_messaging_rabbit amqp_durable_queues False
openstack-config --set /etc/nova/nova.conf oslo_concurrency lock_path /var/lib/nova/tmp
openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_uri http://10.1.1.140:5000
openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_url http://10.1.1.140:35357
openstack-config --set /etc/nova/nova.conf keystone_authtoken memcached_servers controller1:11211,controller2:11211,controller3:11211
openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_type password
openstack-config --set /etc/nova/nova.conf keystone_authtoken project_domain_name default
openstack-config --set /etc/nova/nova.conf keystone_authtoken user_domain_name default
openstack-config --set /etc/nova/nova.conf keystone_authtoken project_name service
openstack-config --set /etc/nova/nova.conf keystone_authtoken username nova
openstack-config --set /etc/nova/nova.conf keystone_authtoken password gdxc1902
openstack-config --set /etc/nova/nova.conf glance api_servers http://10.1.1.140:9292
openstack-config --set /etc/nova/nova.conf conductor use_local False
openstack-config --set /etc/nova/nova.conf conductor workers 4
openstack-config --set /etc/nova/nova.conf vnc enabled True
openstack-config --set /etc/nova/nova.conf vnc vncserver_listen 0.0.0.0
openstack-config --set /etc/nova/nova.conf vnc vncserver_proxyclient_address 10.1.1.141
openstack-config --set /etc/nova/nova.conf vnc novncproxy_base_url http://10.1.1.140:6080/vnc_auto.html
Note: remember to replace the IPs and passwords on the other nodes.
scp -p /etc/nova/nova.conf controller2:/etc/nova/nova.conf
scp -p /etc/nova/nova.conf controller3:/etc/nova/nova.conf
8. Sync the nova databases on controller1
su -s /bin/sh -c "nova-manage api_db sync" nova
su -s /bin/sh -c "nova-manage db sync" nova
9. Enable the nova services at boot on controller1
systemctl enable openstack-nova-api.service openstack-nova-cert.service openstack-nova-consoleauth.service openstack-nova-scheduler.service openstack-nova-conductor.service openstack-nova-novncproxy.service
Enable the services at boot on controller2 and controller3 (note that openstack-nova-consoleauth.service is omitted compared with controller1):
systemctl enable openstack-nova-api.service openstack-nova-cert.service openstack-nova-scheduler.service openstack-nova-conductor.service openstack-nova-novncproxy.service
Start the nova services on controller1:
systemctl restart openstack-nova-api.service openstack-nova-cert.service openstack-nova-consoleauth.service openstack-nova-scheduler.service openstack-nova-conductor.service openstack-nova-novncproxy.service
systemctl status openstack-nova-api.service openstack-nova-cert.service openstack-nova-consoleauth.service openstack-nova-scheduler.service openstack-nova-conductor.service openstack-nova-novncproxy.service
Start the nova services on controller2 and controller3:
systemctl restart openstack-nova-api.service openstack-nova-cert.service openstack-nova-scheduler.service openstack-nova-conductor.service openstack-nova-novncproxy.service
systemctl status openstack-nova-api.service openstack-nova-cert.service openstack-nova-scheduler.service openstack-nova-conductor.service openstack-nova-novncproxy.service
systemctl list-unit-files |grep openstack-nova-*
10. Verify the nova services on any node
unset OS_TOKEN OS_URL
echo "export OS_REGION_NAME=RegionOne" >> admin-openrc
source /root/admin-openrc
nova service-list
openstack endpoint list
VIII. Installing and Configuring Neutron
1. Create the neutron database on controller1
CREATE DATABASE neutron;
2. On controller1, create the database user and grant privileges
GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' IDENTIFIED BY 'gdxc1902';
GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' IDENTIFIED BY 'gdxc1902';
3. On controller1, create the neutron user and grant it the admin role
source /root/admin-openrc
openstack user create --domain default neutron --password gdxc1902
openstack role add --project service --user neutron admin
4. Create the network service on controller1
openstack service create --name neutron --description "OpenStack Networking" network
5. Create the neutron endpoints on controller1
openstack endpoint create --region RegionOne network public http://10.1.1.140:9696
openstack endpoint create --region RegionOne network internal http://10.1.1.140:9696
openstack endpoint create --region RegionOne network admin http://10.1.1.140:9696
6. Install the neutron packages on the three controller nodes
yum install -y openstack-neutron openstack-neutron-ml2 openstack-neutron-linuxbridge ebtables
7. On the three controller nodes, edit the neutron configuration file /etc/neutron/neutron.conf
cp /etc/neutron/neutron.conf /etc/neutron/neutron.conf.bak
>/etc/neutron/neutron.conf
openstack-config --set /etc/neutron/neutron.conf DEFAULT debug False
openstack-config --set /etc/neutron/neutron.conf DEFAULT verbose true
openstack-config --set /etc/neutron/neutron.conf DEFAULT bind_host controller1
openstack-config --set /etc/neutron/neutron.conf DEFAULT bind_port 9797
openstack-config --set /etc/neutron/neutron.conf DEFAULT core_plugin neutron.plugins.ml2.plugin.Ml2Plugin
openstack-config --set /etc/neutron/neutron.conf DEFAULT service_plugins neutron.services.l3_router.l3_router_plugin.L3RouterPlugin,neutron.services.metering.metering_plugin.MeteringPlugin
openstack-config --set /etc/neutron/neutron.conf DEFAULT auth_strategy keystone
openstack-config --set /etc/neutron/neutron.conf DEFAULT allow_overlapping_ips True
openstack-config --set /etc/neutron/neutron.conf DEFAULT notify_nova_on_port_status_changes True
openstack-config --set /etc/neutron/neutron.conf DEFAULT notify_nova_on_port_data_changes True
openstack-config --set /etc/neutron/neutron.conf DEFAULT advertise_mtu True
openstack-config --set /etc/neutron/neutron.conf DEFAULT rpc_response_timeout 180
openstack-config --set /etc/neutron/neutron.conf DEFAULT mac_generation_retries 32
openstack-config --set /etc/neutron/neutron.conf DEFAULT dhcp_lease_duration 600
openstack-config --set /etc/neutron/neutron.conf DEFAULT global_physnet_mtu 1500
openstack-config --set /etc/neutron/neutron.conf DEFAULT control_exchange neutron
openstack-config --set /etc/neutron/neutron.conf DEFAULT api_workers 4
openstack-config --set /etc/neutron/neutron.conf DEFAULT rpc_workers 4
openstack-config --set /etc/neutron/neutron.conf DEFAULT agent_down_time 75
openstack-config --set /etc/neutron/neutron.conf DEFAULT dhcp_agents_per_network 2
openstack-config --set /etc/neutron/neutron.conf DEFAULT router_distributed False
openstack-config --set /etc/neutron/neutron.conf DEFAULT router_scheduler_driver neutron.scheduler.l3_agent_scheduler.ChanceScheduler
openstack-config --set /etc/neutron/neutron.conf DEFAULT allow_automatic_l3agent_failover True
openstack-config --set /etc/neutron/neutron.conf DEFAULT l3_ha True
openstack-config --set /etc/neutron/neutron.conf DEFAULT max_l3_agents_per_router 0
openstack-config --set /etc/neutron/neutron.conf DEFAULT min_l3_agents_per_router 2
openstack-config --set /etc/neutron/neutron.conf database connection mysql+pymysql://neutron:gdxc1902@10.1.1.140/neutron
openstack-config --set /etc/neutron/neutron.conf database idle_timeout 3600
openstack-config --set /etc/neutron/neutron.conf database max_pool_size 30
openstack-config --set /etc/neutron/neutron.conf database max_retries -1
openstack-config --set /etc/neutron/neutron.conf database retry_interval 2
openstack-config --set /etc/neutron/neutron.conf database max_overflow 60
openstack-config --set /etc/neutron/neutron.conf oslo_messaging_rabbit rabbit_hosts controller1:5672,controller2:5672,controller3:5672
openstack-config --set /etc/neutron/neutron.conf oslo_messaging_rabbit rabbit_userid openstack
openstack-config --set /etc/neutron/neutron.conf oslo_messaging_rabbit rabbit_password gdxc1902
openstack-config --set /etc/neutron/neutron.conf oslo_messaging_rabbit rabbit_ha_queues True
openstack-config --set /etc/neutron/neutron.conf oslo_messaging_rabbit rabbit_retry_interval 1
openstack-config --set /etc/neutron/neutron.conf oslo_messaging_rabbit rabbit_retry_backoff 2
openstack-config --set /etc/neutron/neutron.conf oslo_messaging_rabbit rabbit_max_retries 0
openstack-config --set /etc/neutron/neutron.conf oslo_messaging_rabbit amqp_durable_queues False
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken auth_uri http://10.1.1.140:5000
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken auth_url http://10.1.1.140:35357
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken memcached_servers controller1:11211,controller2:11211,controller3:11211
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken auth_type password
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken project_domain_name default
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken user_domain_name default
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken project_name service
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken username neutron
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken password gdxc1902
openstack-config --set /etc/neutron/neutron.conf nova auth_url http://10.1.1.140:35357
openstack-config --set /etc/neutron/neutron.conf nova auth_type password
openstack-config --set /etc/neutron/neutron.conf nova project_domain_name default
openstack-config --set /etc/neutron/neutron.conf nova user_domain_name default
openstack-config --set /etc/neutron/neutron.conf nova region_name RegionOne
openstack-config --set /etc/neutron/neutron.conf nova project_name service
openstack-config --set /etc/neutron/neutron.conf nova username nova
openstack-config --set /etc/neutron/neutron.conf nova password gdxc1902
openstack-config --set /etc/neutron/neutron.conf oslo_concurrency lock_path /var/lib/neutron/tmp
openstack-config --set /etc/neutron/neutron.conf agent report_interval 30
openstack-config --set /etc/neutron/neutron.conf agent root_helper sudo\ neutron-rootwrap\ /etc/neutron/rootwrap.conf
scp -p /etc/neutron/neutron.conf controller2:/etc/neutron/neutron.conf
scp -p /etc/neutron/neutron.conf controller3:/etc/neutron/neutron.conf
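Note that bind_host was set to controller1 above, so after copying the file the other two nodes still point at controller1. A quick way to fix that (a sketch, assuming passwordless ssh between the controllers):
ssh controller2 "openstack-config --set /etc/neutron/neutron.conf DEFAULT bind_host controller2"
ssh controller3 "openstack-config --set /etc/neutron/neutron.conf DEFAULT bind_host controller3"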
8. Configure /etc/neutron/plugins/ml2/ml2_conf.ini on all three controller nodes
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 type_drivers flat,vlan,vxlan
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 mechanism_drivers linuxbridge,l2population
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 extension_drivers port_security
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 tenant_network_types vxlan
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 path_mtu 1500
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2_type_flat flat_networks provider
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2_type_vxlan vni_ranges 1:1000
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini securitygroup enable_ipset True
scp -p /etc/neutron/plugins/ml2/ml2_conf.ini controller2:/etc/neutron/plugins/ml2/ml2_conf.ini
scp -p /etc/neutron/plugins/ml2/ml2_conf.ini controller3:/etc/neutron/plugins/ml2/ml2_conf.ini
9. Configure /etc/neutron/plugins/ml2/linuxbridge_agent.ini on all three controller nodes
openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini DEFAULT debug false
openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini linux_bridge physical_interface_mappings provider:eth0
openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini vxlan enable_vxlan True
openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini vxlan local_ip 10.2.2.141
openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini vxlan l2_population True
openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini agent prevent_arp_spoofing True
openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini securitygroup enable_security_group True
openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini securitygroup firewall_driver neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
scp -p /etc/neutron/plugins/ml2/linuxbridge_agent.ini controller2:/etc/neutron/plugins/ml2/linuxbridge_agent.ini
scp -p /etc/neutron/plugins/ml2/linuxbridge_agent.ini controller3:/etc/neutron/plugins/ml2/linuxbridge_agent.ini
Note: eth0 here is the public NIC. The interface named in physical_interface_mappings should be the one that can reach the external network; if it is not, the VMs will be cut off from the outside world.
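If you are unsure which interface is the external one, a quick check before filling in physical_interface_mappings might look like this (8.8.8.8 is just an arbitrary outside address, used to see which interface the default route leaves through):
ip route get 8.8.8.8
ip addr show eth0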
10. Configure /etc/neutron/l3_agent.ini on all three controller nodes
openstack-config --set /etc/neutron/l3_agent.ini DEFAULT interface_driver neutron.agent.linux.interface.BridgeInterfaceDriver
openstack-config --set /etc/neutron/l3_agent.ini DEFAULT external_network_bridge
openstack-config --set /etc/neutron/l3_agent.ini DEFAULT debug false
scp -p /etc/neutron/l3_agent.ini controller2:/etc/neutron/l3_agent.ini
scp -p /etc/neutron/l3_agent.ini controller3:/etc/neutron/l3_agent.ini
11. Configure /etc/neutron/dhcp_agent.ini on all three controller nodes
openstack-config --set /etc/neutron/dhcp_agent.ini DEFAULT interface_driver neutron.agent.linux.interface.BridgeInterfaceDriver
openstack-config --set /etc/neutron/dhcp_agent.ini DEFAULT dhcp_driver neutron.agent.linux.dhcp.Dnsmasq
openstack-config --set /etc/neutron/dhcp_agent.ini DEFAULT enable_isolated_metadata True
openstack-config --set /etc/neutron/dhcp_agent.ini DEFAULT verbose True
openstack-config --set /etc/neutron/dhcp_agent.ini DEFAULT debug false
scp -p /etc/neutron/dhcp_agent.ini controller2:/etc/neutron/dhcp_agent.ini
scp -p /etc/neutron/dhcp_agent.ini controller3:/etc/neutron/dhcp_agent.ini
12. Reconfigure /etc/nova/nova.conf on all three controller nodes; this step is what lets instances use neutron networking
openstack-config --set /etc/nova/nova.conf neutron url http://10.1.1.140:9696
openstack-config --set /etc/nova/nova.conf neutron auth_url http://10.1.1.140:35357
openstack-config --set /etc/nova/nova.conf neutron auth_plugin password
openstack-config --set /etc/nova/nova.conf neutron project_domain_id default
openstack-config --set /etc/nova/nova.conf neutron user_domain_id default
openstack-config --set /etc/nova/nova.conf neutron region_name RegionOne
openstack-config --set /etc/nova/nova.conf neutron project_name service
openstack-config --set /etc/nova/nova.conf neutron username neutron
openstack-config --set /etc/nova/nova.conf neutron password gdxc1902
openstack-config --set /etc/nova/nova.conf neutron service_metadata_proxy True
openstack-config --set /etc/nova/nova.conf neutron metadata_proxy_shared_secret gdxc1902
openstack-config --set /etc/nova/nova.conf cinder os_region_name RegionOne
13. On all three controller nodes, write dhcp-option-force=26,1450 into /etc/neutron/dnsmasq-neutron.conf (option 26 sets the instance MTU; 1450 leaves room for the roughly 50-byte VXLAN overhead on a 1500-byte network)
echo "dhcp-option-force=26,1450" > /etc/neutron/dnsmasq-neutron.conf
14. Configure /etc/neutron/metadata_agent.ini on all three controller nodes
openstack-config --set /etc/neutron/metadata_agent.ini DEFAULT nova_metadata_ip 10.1.1.140
openstack-config --set /etc/neutron/metadata_agent.ini DEFAULT metadata_proxy_shared_secret gdxc1902
openstack-config --set /etc/neutron/metadata_agent.ini DEFAULT metadata_workers 4
openstack-config --set /etc/neutron/metadata_agent.ini DEFAULT verbose True
openstack-config --set /etc/neutron/metadata_agent.ini DEFAULT debug false
openstack-config --set /etc/neutron/metadata_agent.ini DEFAULT nova_metadata_protocol http
scp -p /etc/neutron/metadata_agent.ini controller2:/etc/neutron/metadata_agent.ini
scp -p /etc/neutron/metadata_agent.ini controller3:/etc/neutron/metadata_agent.ini
15. Create the plugin symlink on all three controller nodes
ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini
16. Sync the neutron database on controller1
su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron
17. Restart the nova API service on all three controller nodes, since nova.conf was just changed
systemctl restart openstack-nova-api.service
systemctl status openstack-nova-api.service
18. Restart the neutron services on all three controller nodes and enable them at boot
systemctl enable neutron-server.service neutron-linuxbridge-agent.service neutron-dhcp-agent.service neutron-metadata-agent.service
systemctl restart neutron-server.service neutron-linuxbridge-agent.service neutron-dhcp-agent.service neutron-metadata-agent.service
systemctl status neutron-server.service neutron-linuxbridge-agent.service neutron-dhcp-agent.service neutron-metadata-agent.service
19. Start neutron-l3-agent.service on all three controller nodes and enable it at boot
systemctl enable neutron-l3-agent.service
systemctl restart neutron-l3-agent.service
systemctl status neutron-l3-agent.service
20. Verify from any node
source /root/admin-openrc
neutron agent-list
21. Create the networks in vxlan mode so that VMs can reach the outside world
a. First load the environment variables
source /root/admin-openrc
b. Create the public network in flat mode; note that this is the external (outbound) network and must be flat
neutron --debug net-create --shared provider --router:external True --provider:network_type flat --provider:physical_network provider
After this step, check in the dashboard that the public network is marked as shared and external; the result after creation is as follows (the screenshot is not reproduced here):
c. Create the subnet of the public network, named provider-sub here, on 10.254.15.160/27 with an allocation pool of 10.254.15.162-10.254.15.190 (these generally serve as floating IPs for VMs), DNS 218.30.26.68 and gateway 10.254.15.161
neutron subnet-create provider 10.254.15.160/27 --name provider-sub --allocation-pool start=10.254.15.162,end=10.254.15.190 --dns-nameserver 218.30.26.68 --gateway 10.254.15.161
d. Create a private network named private-test, in vxlan mode
neutron net-create private-test --provider:network_type vxlan --router:external False --shared
e. Create the private subnet named private-subnet on 192.168.1.0/24; this is the range from which VMs get their fixed IP addresses
neutron subnet-create private-test --name private-subnet --gateway 192.168.1.1 192.168.1.0/24
For example, if your private cloud serves different departments (administration, sales, technology, and so on), you can create three separately named private networks:
neutron net-create private-office --provider:network_type vxlan --router:external False --shared
neutron subnet-create private-office --name office-subnet --gateway 192.168.2.1 192.168.2.0/24
neutron net-create private-sale --provider:network_type vxlan --router:external False --shared
neutron subnet-create private-sale --name sale-subnet --gateway 192.168.3.1 192.168.3.0/24
neutron net-create private-technology --provider:network_type vxlan --router:external False --shared
neutron subnet-create private-technology --name technology-subnet --gateway 192.168.4.1 192.168.4.0/24
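For the VMs on these vxlan networks to actually reach the outside, a router still has to connect the private subnets to the provider network. A minimal sketch with the Newton neutron CLI (the router name router1 is illustrative; with l3_ha True set above, the router is created in HA mode):
neutron router-create router1
neutron router-gateway-set router1 provider
neutron router-interface-add router1 private-subnet
Repeat the router-interface-add step for office-subnet, sale-subnet and technology-subnet if those networks also need external access.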
22. Check the network agents
neutron agent-list
IX. Install the dashboard
1. Install the dashboard packages
yum install openstack-dashboard -y
2. Modify the configuration file /etc/openstack-dashboard/local_settings
wget http://10.254.15.147/local_settings
Edit the downloaded file for this environment, then copy it into place:
cp local_settings /etc/openstack-dashboard/
scp -p /etc/openstack-dashboard/local_settings controller2:/etc/openstack-dashboard/local_settings
scp -p /etc/openstack-dashboard/local_settings controller3:/etc/openstack-dashboard/local_settings
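The downloaded local_settings is not reproduced above; the values that normally have to match this deployment look roughly like the following (a sketch, assuming the 10.1.1.140 VIP and the three memcached instances configured earlier):
OPENSTACK_HOST = "10.1.1.140"
OPENSTACK_KEYSTONE_URL = "http://%s:5000/v3" % OPENSTACK_HOST
OPENSTACK_KEYSTONE_DEFAULT_DOMAIN = "default"
ALLOWED_HOSTS = ['*']
SESSION_ENGINE = 'django.contrib.sessions.backends.cache'
CACHES = {
    'default': {
        'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
        'LOCATION': ['controller1:11211', 'controller2:11211', 'controller3:11211'],
    }
}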
3. Start the dashboard services and enable them at boot
systemctl enable httpd.service memcached.service
systemctl restart httpd.service memcached.service
systemctl status httpd.service memcached.service
X. Install and configure Cinder
1. Create the cinder database and database user, and grant privileges, on controller1
CREATE DATABASE cinder;
GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'localhost' IDENTIFIED BY 'gdxc1902';
GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'%' IDENTIFIED BY 'gdxc1902';
2. Create the cinder user and grant it the admin role on controller1
source /root/admin-openrc
openstack user create --domain default cinder --password gdxc1902
openstack role add --project service --user cinder admin
3. Create the volume services on controller1
openstack service create --name cinder --description "OpenStack Block Storage" volume
openstack service create --name cinderv2 --description "OpenStack Block Storage" volumev2
4. Create the cinder endpoints on controller1
openstack endpoint create --region RegionOne volume public http://10.1.1.140:8776/v1/%\(tenant_id\)s
openstack endpoint create --region RegionOne volume internal http://10.1.1.140:8776/v1/%\(tenant_id\)s
openstack endpoint create --region RegionOne volume admin http://10.1.1.140:8776/v1/%\(tenant_id\)s
openstack endpoint create --region RegionOne volumev2 public http://10.1.1.140:8776/v2/%\(tenant_id\)s
openstack endpoint create --region RegionOne volumev2 internal http://10.1.1.140:8776/v2/%\(tenant_id\)s
openstack endpoint create --region RegionOne volumev2 admin http://10.1.1.140:8776/v2/%\(tenant_id\)s
5. Install the cinder packages on all three controller nodes
yum install -y openstack-cinder
6. Configure /etc/cinder/cinder.conf on all three controller nodes
cp /etc/cinder/cinder.conf /etc/cinder/cinder.conf.bak
>/etc/cinder/cinder.conf
openstack-config --set /etc/cinder/cinder.conf DEFAULT debug False
openstack-config --set /etc/cinder/cinder.conf DEFAULT verbose True
openstack-config --set /etc/cinder/cinder.conf DEFAULT my_ip 10.1.1.141
openstack-config --set /etc/cinder/cinder.conf DEFAULT osapi_volume_listen_port 8778
openstack-config --set /etc/cinder/cinder.conf DEFAULT auth_strategy keystone
openstack-config --set /etc/cinder/cinder.conf DEFAULT enable_v1_api True
openstack-config --set /etc/cinder/cinder.conf DEFAULT enable_v2_api True
openstack-config --set /etc/cinder/cinder.conf DEFAULT enable_v3_api True
openstack-config --set /etc/cinder/cinder.conf DEFAULT glance_api_servers http://10.1.1.140:9292
openstack-config --set /etc/cinder/cinder.conf DEFAULT glance_api_version 2
openstack-config --set /etc/cinder/cinder.conf DEFAULT storage_availability_zone nova
openstack-config --set /etc/cinder/cinder.conf DEFAULT default_availability_zone nova
openstack-config --set /etc/cinder/cinder.conf DEFAULT allow_availability_zone_fallback True
openstack-config --set /etc/cinder/cinder.conf DEFAULT service_down_time 180
openstack-config --set /etc/cinder/cinder.conf DEFAULT report_interval 10
openstack-config --set /etc/cinder/cinder.conf DEFAULT osapi_volume_workers 4
openstack-config --set /etc/cinder/cinder.conf DEFAULT enable_force_upload True
openstack-config --set /etc/cinder/cinder.conf DEFAULT rootwrap_config /etc/cinder/rootwrap.conf
openstack-config --set /etc/cinder/cinder.conf database connection mysql+pymysql://cinder:gdxc1902@10.1.1.140/cinder
openstack-config --set /etc/cinder/cinder.conf database idle_timeout 3600
openstack-config --set /etc/cinder/cinder.conf database max_pool_size 30
openstack-config --set /etc/cinder/cinder.conf database max_retries -1
openstack-config --set /etc/cinder/cinder.conf database retry_interval 2
openstack-config --set /etc/cinder/cinder.conf database max_overflow 60
openstack-config --set /etc/cinder/cinder.conf oslo_messaging_rabbit rabbit_hosts controller1:5672,controller2:5672,controller3:5672
openstack-config --set /etc/cinder/cinder.conf oslo_messaging_rabbit rabbit_userid openstack
openstack-config --set /etc/cinder/cinder.conf oslo_messaging_rabbit rabbit_password gdxc1902
openstack-config --set /etc/cinder/cinder.conf oslo_messaging_rabbit rabbit_ha_queues True
openstack-config --set /etc/cinder/cinder.conf oslo_messaging_rabbit rabbit_use_ssl False
openstack-config --set /etc/cinder/cinder.conf oslo_messaging_rabbit rabbit_retry_interval 1
openstack-config --set /etc/cinder/cinder.conf oslo_messaging_rabbit rabbit_retry_backoff 2
openstack-config --set /etc/cinder/cinder.conf oslo_messaging_rabbit rabbit_max_retries 0
openstack-config --set /etc/cinder/cinder.conf oslo_messaging_rabbit amqp_durable_queues False
openstack-config --set /etc/cinder/cinder.conf keystone_authtoken auth_uri http://10.1.1.140:5000
openstack-config --set /etc/cinder/cinder.conf keystone_authtoken auth_url http://10.1.1.140:35357
openstack-config --set /etc/cinder/cinder.conf keystone_authtoken memcached_servers controller1:11211,controller2:11211,controller3:11211
openstack-config --set /etc/cinder/cinder.conf keystone_authtoken auth_type password
openstack-config --set /etc/cinder/cinder.conf keystone_authtoken project_domain_name default
openstack-config --set /etc/cinder/cinder.conf keystone_authtoken user_domain_name default
openstack-config --set /etc/cinder/cinder.conf keystone_authtoken project_name service
openstack-config --set /etc/cinder/cinder.conf keystone_authtoken username cinder
openstack-config --set /etc/cinder/cinder.conf keystone_authtoken password gdxc1902
openstack-config --set /etc/cinder/cinder.conf oslo_concurrency lock_path /var/lib/cinder/tmp
scp -p /etc/cinder/cinder.conf controller2:/etc/cinder/cinder.conf
scp -p /etc/cinder/cinder.conf controller3:/etc/cinder/cinder.conf
7. Sync the cinder database on controller1
su -s /bin/sh -c "cinder-manage db sync" cinder
8. Start the cinder services on all three controller nodes and enable them at boot
systemctl enable openstack-cinder-api.service openstack-cinder-scheduler.service
systemctl restart openstack-cinder-api.service openstack-cinder-scheduler.service
systemctl status openstack-cinder-api.service openstack-cinder-scheduler.service
9. Set up the cinder (storage) node; it needs an extra disk (/dev/sdb) for the volume service (note: this step is performed on the cinder node)
yum install lvm2 -y
10. Start the LVM metadata service and enable it at boot (note: this step is performed on the cinder node)
systemctl enable lvm2-lvmetad.service
systemctl start lvm2-lvmetad.service
systemctl status lvm2-lvmetad.service
11. Create the LVM physical volume and volume group; /dev/sdb here is the extra disk added above (note: this step is performed on the cinder node)
fdisk -l
pvcreate /dev/sdb
vgcreate cinder-volumes /dev/sdb
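A quick sanity check that the physical volume and volume group were created as expected:
pvs
vgs cinder-volumes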
12. Edit lvm.conf on the storage node
vim /etc/lvm/lvm.conf
In the devices section (around line 129), add filter = [ "a/sda/", "a/sdb/", "r/.*/" ] so that LVM scans only sda and sdb and rejects all other devices, as sketched below.
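In context, the devices section ends up looking roughly like this (the exact line number varies between lvm2 versions):
devices {
        filter = [ "a/sda/", "a/sdb/", "r/.*/" ]
}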
Then restart the lvm2 metadata service:
systemctl restart lvm2-lvmetad.service
systemctl status lvm2-lvmetad.service
13. Install openstack-cinder and targetcli (note: this step is performed on the cinder node)
yum install openstack-cinder openstack-utils python-keystone scsi-target-utils targetcli ntpdate -y
14. Configure cinder.conf (note: this step is performed on the cinder node)
cp /etc/cinder/cinder.conf /etc/cinder/cinder.conf.bak
>/etc/cinder/cinder.conf
openstack-config --set /etc/cinder/cinder.conf DEFAULT debug False
openstack-config --set /etc/cinder/cinder.conf DEFAULT verbose True
openstack-config --set /etc/cinder/cinder.conf DEFAULT rpc_backend rabbit
openstack-config --set /etc/cinder/cinder.conf DEFAULT auth_strategy keystone
openstack-config --set /etc/cinder/cinder.conf DEFAULT my_ip 10.1.1.146
openstack-config --set /etc/cinder/cinder.conf DEFAULT enabled_backends lvm
openstack-config --set /etc/cinder/cinder.conf DEFAULT glance_api_servers http://10.1.1.140:9292
openstack-config --set /etc/cinder/cinder.conf DEFAULT glance_api_version 2
openstack-config --set /etc/cinder/cinder.conf DEFAULT enable_v1_api True
openstack-config --set /etc/cinder/cinder.conf DEFAULT enable_v2_api True
openstack-config --set /etc/cinder/cinder.conf DEFAULT enable_v3_api True
openstack-config --set /etc/cinder/cinder.conf DEFAULT storage_availability_zone nova
openstack-config --set /etc/cinder/cinder.conf DEFAULT default_availability_zone nova
openstack-config --set /etc/cinder/cinder.conf DEFAULT service_down_time 180
openstack-config --set /etc/cinder/cinder.conf DEFAULT report_interval 10
openstack-config --set /etc/cinder/cinder.conf DEFAULT osapi_volume_workers 4
openstack-config --set /etc/cinder/cinder.conf DEFAULT os_region_name RegionOne
openstack-config --set /etc/cinder/cinder.conf DEFAULT api_paste_config /etc/cinder/api-paste.ini
openstack-config --set /etc/cinder/cinder.conf oslo_messaging_rabbit rabbit_hosts controller1:5672,controller2:5672,controller3:5672
openstack-config --set /etc/cinder/cinder.conf oslo_messaging_rabbit rabbit_userid openstack
openstack-config --set /etc/cinder/cinder.conf oslo_messaging_rabbit rabbit_password gdxc1902
openstack-config --set /etc/cinder/cinder.conf oslo_messaging_rabbit rabbit_ha_queues True
openstack-config --set /etc/cinder/cinder.conf oslo_messaging_rabbit rabbit_use_ssl False
openstack-config --set /etc/cinder/cinder.conf oslo_messaging_rabbit rabbit_retry_interval 1
openstack-config --set /etc/cinder/cinder.conf oslo_messaging_rabbit rabbit_retry_backoff 2
openstack-config --set /etc/cinder/cinder.conf oslo_messaging_rabbit rabbit_max_retries 0
openstack-config --set /etc/cinder/cinder.conf oslo_messaging_rabbit amqp_durable_queues False
openstack-config --set /etc/cinder/cinder.conf database connection mysql+pymysql://cinder:gdxc1902@10.254.15.140/cinder
openstack-config --set /etc/cinder/cinder.conf database idle_timeout 3600
openstack-config --set /etc/cinder/cinder.conf database max_pool_size 30
openstack-config --set /etc/cinder/cinder.conf database max_retries -1
openstack-config --set /etc/cinder/cinder.conf database retry_interval 2
openstack-config --set /etc/cinder/cinder.conf database max_overflow 60
openstack-config --set /etc/cinder/cinder.conf keystone_authtoken auth_uri http://10.1.1.140:5000
openstack-config --set /etc/cinder/cinder.conf keystone_authtoken auth_url http://10.1.1.140:35357
openstack-config --set /etc/cinder/cinder.conf keystone_authtoken memcached_servers controller1:11211,controller2:11211,controller3:11211
openstack-config --set /etc/cinder/cinder.conf keystone_authtoken auth_type password
openstack-config --set /etc/cinder/cinder.conf keystone_authtoken project_domain_name default
openstack-config --set /etc/cinder/cinder.conf keystone_authtoken user_domain_name default
openstack-config --set /etc/cinder/cinder.conf keystone_authtoken project_name service
openstack-config --set /etc/cinder/cinder.conf keystone_authtoken username cinder
openstack-config --set /etc/cinder/cinder.conf keystone_authtoken password gdxc1902
openstack-config --set /etc/cinder/cinder.conf lvm volume_driver cinder.volume.drivers.lvm.LVMVolumeDriver
openstack-config --set /etc/cinder/cinder.conf lvm volume_group cinder-volumes
openstack-config --set /etc/cinder/cinder.conf lvm iscsi_protocol iscsi
openstack-config --set /etc/cinder/cinder.conf lvm iscsi_helper lioadm
openstack-config --set /etc/cinder/cinder.conf oslo_concurrency lock_path /var/lib/cinder/tmp
15. Start openstack-cinder-volume and target and enable them at boot (note: this step is performed on the cinder node)
systemctl enable openstack-cinder-volume.service target.service
systemctl restart openstack-cinder-volume.service target.service
systemctl status openstack-cinder-volume.service target.service
16. Verify from any node that the cinder services are healthy
source /root/admin-openrc
cinder service-list
netstat -ntlp|grep 3260
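As a further check, a small test volume can be created and deleted from any node (a sketch using the unified CLI):
openstack volume create --size 1 test-vol
openstack volume list
openstack volume delete test-vol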
17. Related commands
cinder service-list //list the cinder services
cinder-manage service remove cinder-volume controller1 //remove an unused cinder service entry
http://blog.csdn.net/qq806692341/article/details/52397440 //a summary of Cinder commands
XI. Deploy the compute node
1. Install the required packages
yum install -y openstack-selinux python-openstackclient yum-plugin-priorities openstack-nova-compute openstack-utils ntpdate
2. Configure nova.conf
cp /etc/nova/nova.conf /etc/nova/nova.conf.bak
>/etc/nova/nova.conf
openstack-config --set /etc/nova/nova.conf DEFAULT debug False
openstack-config --set /etc/nova/nova.conf DEFAULT verbose True
openstack-config --set /etc/nova/nova.conf DEFAULT force_raw_images True
openstack-config --set /etc/nova/nova.conf DEFAULT remove_unused_original_minimum_age_seconds 86400
openstack-config --set /etc/nova/nova.conf DEFAULT image_service nova.image.glance.GlanceImageService
openstack-config --set /etc/nova/nova.conf DEFAULT use_cow_images True
openstack-config --set /etc/nova/nova.conf DEFAULT heal_instance_info_cache_interval 60
openstack-config --set /etc/nova/nova.conf DEFAULT notify_on_state_change vm_and_task_state
openstack-config --set /etc/nova/nova.conf DEFAULT rootwrap_config /etc/nova/rootwrap.conf
openstack-config --set /etc/nova/nova.conf DEFAULT allow_resize_to_same_host True
openstack-config --set /etc/nova/nova.conf DEFAULT connection_type libvirt
openstack-config --set /etc/nova/nova.conf DEFAULT instance_usage_audit True
openstack-config --set /etc/nova/nova.conf DEFAULT instance_usage_audit_period hour
openstack-config --set /etc/nova/nova.conf DEFAULT rpc_backend rabbit
openstack-config --set /etc/nova/nova.conf DEFAULT auth_strategy keystone
openstack-config --set /etc/nova/nova.conf DEFAULT my_ip 10.1.1.144
openstack-config --set /etc/nova/nova.conf DEFAULT use_neutron True
openstack-config --set /etc/nova/nova.conf DEFAULT firewall_driver nova.virt.firewall.NoopFirewallDriver
openstack-config --set /etc/nova/nova.conf DEFAULT vif_plugging_is_fatal False
openstack-config --set /etc/nova/nova.conf DEFAULT vif_plugging_timeout 30
openstack-config --set /etc/nova/nova.conf DEFAULT resume_guests_state_on_host_boot True
openstack-config --set /etc/nova/nova.conf DEFAULT api_rate_limit False
openstack-config --set /etc/nova/nova.conf DEFAULT block_device_allocate_retries_interval 3
openstack-config --set /etc/nova/nova.conf DEFAULT network_device_mtu 1500
openstack-config --set /etc/nova/nova.conf DEFAULT report_interval 60
openstack-config --set /etc/nova/nova.conf DEFAULT remove_unused_base_images False
openstack-config --set /etc/nova/nova.conf DEFAULT reserved_host_memory_mb 512
openstack-config --set /etc/nova/nova.conf DEFAULT service_down_time 180
openstack-config --set /etc/nova/nova.conf oslo_messaging_rabbit rabbit_hosts controller1:5672,controller2:5672,controller3:5672
openstack-config --set /etc/nova/nova.conf oslo_messaging_rabbit rabbit_userid openstack
openstack-config --set /etc/nova/nova.conf oslo_messaging_rabbit rabbit_password gdxc1902
openstack-config --set /etc/nova/nova.conf oslo_messaging_rabbit rabbit_use_ssl False
openstack-config --set /etc/nova/nova.conf oslo_messaging_rabbit rabbit_ha_queues True
openstack-config --set /etc/nova/nova.conf oslo_messaging_rabbit rabbit_retry_interval 1
openstack-config --set /etc/nova/nova.conf oslo_messaging_rabbit rabbit_retry_backoff 2
openstack-config --set /etc/nova/nova.conf oslo_messaging_rabbit rabbit_max_retries 0
openstack-config --set /etc/nova/nova.conf oslo_messaging_rabbit amqp_durable_queues False
openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_uri http://10.1.1.140:5000
openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_url http://10.1.1.140:35357
openstack-config --set /etc/nova/nova.conf keystone_authtoken memcached_servers controller1:11211,controller2:11211,controller3:11211
openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_type password
openstack-config --set /etc/nova/nova.conf keystone_authtoken project_domain_name default
openstack-config --set /etc/nova/nova.conf keystone_authtoken user_domain_name default
openstack-config --set /etc/nova/nova.conf keystone_authtoken project_name service
openstack-config --set /etc/nova/nova.conf keystone_authtoken username nova
openstack-config --set /etc/nova/nova.conf keystone_authtoken password gdxc1902
openstack-config --set /etc/nova/nova.conf vnc enabled True
openstack-config --set /etc/nova/nova.conf vnc keymap en-us
openstack-config --set /etc/nova/nova.conf vnc vncserver_listen 0.0.0.0
openstack-config --set /etc/nova/nova.conf vnc vncserver_proxyclient_address 10.1.1.144
openstack-config --set /etc/nova/nova.conf vnc novncproxy_base_url http://10.1.1.140:6080/vnc_auto.html
openstack-config --set /etc/nova/nova.conf glance api_servers http://10.1.1.140:9292
openstack-config --set /etc/nova/nova.conf oslo_concurrency lock_path /var/lib/nova/tmp
openstack-config --set /etc/nova/nova.conf libvirt virt_type kvm
openstack-config --set /etc/nova/nova.conf libvirt cpu_mode host-model
Note: virt_type is set to kvm above because this is a physical host; if the compute node is itself a virtual machine (nested), change virt_type to qemu.
Live migration requirements:
The source and destination nodes must have the same CPU type.
The source and destination nodes must run the same libvirt version.
The source and destination nodes must be able to resolve each other's hostnames, for example by adding each other to /etc/hosts.
vim /etc/nova/nova.conf
openstack-config --set /etc/nova/nova.conf libvirt block_migration_flag VIR_MIGRATE_UNDEFINE_SOURCE,VIR_MIGRATE_PEER2PEER,VIR_MIGRATE_LIVE,VIR_MIGRATE_NON_SHARED_INC
openstack-config --set /etc/nova/nova.conf libvirt live_migration_flag VIR_MIGRATE_UNDEFINE_SOURCE,VIR_MIGRATE_PEER2PEER,VIR_MIGRATE_LIVE,VIR_MIGRATE_PERSIST_DEST
Note: if the CPU models differ (one node has an older CPU, the other a newer one), VMs can be live- or cold-migrated from the node with the older CPU to the node with the newer CPU, but not the other way round. To migrate from a newer CPU to an older one, add the following two options to the [libvirt] section (in Newton the option names are cpu_mode and cpu_model):
vim /etc/nova/nova.conf
openstack-config --set /etc/nova/nova.conf libvirt cpu_mode custom
openstack-config --set /etc/nova/nova.conf libvirt cpu_model kvm64
Modify /etc/sysconfig/libvirtd and /etc/libvirt/libvirtd.conf so that libvirtd listens on TCP without SASL authentication:
sed -i 's/#listen_tls = 0/listen_tls = 0/g' /etc/libvirt/libvirtd.conf
sed -i 's/#listen_tcp = 1/listen_tcp = 1/g' /etc/libvirt/libvirtd.conf
sed -i 's/#auth_tcp = "sasl"/auth_tcp = "none"/g' /etc/libvirt/libvirtd.conf
sed -i 's/#LIBVIRTD_ARGS="--listen"/LIBVIRTD_ARGS="--listen"/g' /etc/sysconfig/libvirtd
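After restarting libvirtd, the TCP listener can be verified from another node (a sketch; 16509 is libvirt's default TCP port and compute1 is this node's hostname):
systemctl restart libvirtd
netstat -ntlp | grep 16509
virsh -c qemu+tcp://compute1/system list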
On the nfs-backend node:
mkdir -p /var/lib/nova/instances
mkdir -p /var/lib/glance/imagecache
Then add the following to /etc/exports:
/var/lib/nova/instances *(rw,sync,no_root_squash)
/var/lib/glance/imagecache *(rw,sync,no_root_squash)
Restart the NFS-related services:
systemctl restart rpcbind
systemctl restart nfs-server
Verify that the directories are exported:
showmount -e
Mount the shared directories on the compute node:
mount -t nfs 10.1.1.125:/var/lib/nova/instances /var/lib/nova/instances
mount -t nfs 10.1.1.125:/var/lib/glance/imagecache /var/lib/nova/instances/_base
echo "/usr/bin/mount -t nfs 10.1.1.125:/var/lib/nova/instances /var/lib/nova/instances" >> /etc/rc.d/rc.local
echo "/usr/bin/mount -t nfs 10.1.1.125:/var/lib/glance/imagecache /var/lib/nova/instances/_base" >> /etc/rc.d/rc.local
cd /var/lib/nova
chown -R nova:nova instances/
chown -R nova:nova instances/_base
chmod +x /etc/rc.d/rc.local
cat /etc/rc.d/rc.local
df -h
nova-manage vm list
nova live-migration ID compute2
nova-manage vm list
3. Enable libvirtd.service and openstack-nova-compute.service to start at boot
systemctl enable libvirtd.service openstack-nova-compute.service
systemctl restart libvirtd.service openstack-nova-compute.service
systemctl status libvirtd.service openstack-nova-compute.service
4. Create the environment variable files (the bodies of the two heredocs are omitted here; see the sketch below)
cat <<END >/root/admin-openrc
cat <<END >/root/demo-openrc
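The heredoc bodies normally mirror the openrc files created on the controllers. A rough sketch of /root/admin-openrc, assuming the admin password and the 10.1.1.140 VIP used elsewhere in this guide (demo-openrc follows the same pattern with the demo user/project and port 5000):
export OS_PROJECT_DOMAIN_NAME=default
export OS_USER_DOMAIN_NAME=default
export OS_PROJECT_NAME=admin
export OS_USERNAME=admin
export OS_PASSWORD=gdxc1902
export OS_AUTH_URL=http://10.1.1.140:35357/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2
export OS_REGION_NAME=RegionOne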
5. Verify
source /root/admin-openrc
openstack compute service list
6. Install the neutron packages
yum install -y openstack-neutron-linuxbridge ebtables ipset
7. Configure neutron.conf
cp /etc/neutron/neutron.conf /etc/neutron/neutron.conf.bak
>/etc/neutron/neutron.conf
openstack-config --set /etc/neutron/neutron.conf DEFAULT debug False
openstack-config --set /etc/neutron/neutron.conf DEFAULT verbose True
openstack-config --set /etc/neutron/neutron.conf DEFAULT rpc_backend rabbit
openstack-config --set /etc/neutron/neutron.conf DEFAULT auth_strategy keystone
openstack-config --set /etc/neutron/neutron.conf DEFAULT rpc_response_timeout 180
openstack-config --set /etc/neutron/neutron.conf DEFAULT bind_host compute1
openstack-config --set /etc/neutron/neutron.conf DEFAULT dhcp_lease_duration 600
openstack-config --set /etc/neutron/neutron.conf DEFAULT global_physnet_mtu 1500
openstack-config --set /etc/neutron/neutron.conf DEFAULT advertise_mtu True
openstack-config --set /etc/neutron/neutron.conf DEFAULT dhcp_agents_per_network 2
openstack-config --set /etc/neutron/neutron.conf DEFAULT control_exchange neutron
openstack-config --set /etc/neutron/neutron.conf DEFAULT nova_url http://10.1.1.140:8774/v2
openstack-config --set /etc/neutron/neutron.conf agent root_helper sudo
openstack-config --set /etc/neutron/neutron.conf agent report_interval 10
openstack-config --set /etc/neutron/neutron.conf oslo_messaging_rabbit rabbit_hosts controller1:5672,controller2:5672,controller3:5672
openstack-config --set /etc/neutron/neutron.conf oslo_messaging_rabbit rabbit_userid openstack
openstack-config --set /etc/neutron/neutron.conf oslo_messaging_rabbit rabbit_password gdxc1902
openstack-config --set /etc/neutron/neutron.conf oslo_messaging_rabbit rabbit_use_ssl False
openstack-config --set /etc/neutron/neutron.conf oslo_messaging_rabbit rabbit_ha_queues True
openstack-config --set /etc/neutron/neutron.conf oslo_messaging_rabbit rabbit_retry_interval 1
openstack-config --set /etc/neutron/neutron.conf oslo_messaging_rabbit rabbit_retry_backoff 2
openstack-config --set /etc/neutron/neutron.conf oslo_messaging_rabbit rabbit_max_retries 0
openstack-config --set /etc/neutron/neutron.conf oslo_messaging_rabbit amqp_durable_queues False
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken auth_uri http://10.1.1.140:5000
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken auth_url http://10.1.1.140:35357
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken memcached_servers controller1:11211,controller2:11211,controller3:11211
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken auth_type password
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken project_domain_name default
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken user_domain_name default
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken project_name service
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken username neutron
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken password gdxc1902
openstack-config --set /etc/neutron/neutron.conf oslo_concurrency lock_path /var/lib/neutron/tmp
8. Configure /etc/neutron/plugins/ml2/linuxbridge_agent.ini
cp /etc/neutron/plugins/ml2/linuxbridge_agent.ini /etc/neutron/plugins/ml2/linuxbridge_agent.ini.bak
>/etc/neutron/plugins/ml2/linuxbridge_agent.ini
openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini linux_bridge physical_interface_mappings provider:eth1
openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini vxlan enable_vxlan True
openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini vxlan local_ip 10.2.2.144
openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini vxlan l2_population True
openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini securitygroup enable_security_group True
openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini securitygroup firewall_driver neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
9. Configure nova.conf so that nova uses neutron
openstack-config --set /etc/nova/nova.conf neutron url http://10.1.1.140:9696
openstack-config --set /etc/nova/nova.conf neutron auth_url http://10.1.1.140:35357
openstack-config --set /etc/nova/nova.conf neutron auth_type password
openstack-config --set /etc/nova/nova.conf neutron project_domain_name default
openstack-config --set /etc/nova/nova.conf neutron user_domain_name default
openstack-config --set /etc/nova/nova.conf neutron region_name RegionOne
openstack-config --set /etc/nova/nova.conf neutron project_name service
openstack-config --set /etc/nova/nova.conf neutron username neutron
openstack-config --set /etc/nova/nova.conf neutron password gdxc1902
10. Restart and enable the related services
systemctl restart libvirtd.service openstack-nova-compute.service
systemctl enable neutron-linuxbridge-agent.service && systemctl restart neutron-linuxbridge-agent.service
systemctl status openstack-nova-compute.service neutron-linuxbridge-agent.service
11. If the compute node is to use cinder, add the following to nova.conf (note: this step is performed on the compute node)
openstack-config --set /etc/nova/nova.conf cinder os_region_name RegionOne
systemctl restart libvirtd.service openstack-nova-compute.service
systemctl status libvirtd.service openstack-nova-compute.service
12. Then restart the nova API service on the three controller nodes
systemctl restart openstack-nova-api.service
systemctl status openstack-nova-api.service
13. Verify
source /root/admin-openrc
neutron ext-list
neutron agent-list
At this point the compute node is fully deployed; running nova host-list shows the newly added compute1 node.
To add another compute node, simply repeat the steps above, remembering to change the hostname and the IP addresses (see the sketch below).
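For example, for a hypothetical compute2 at 10.1.1.145 (management) and 10.2.2.145 (tunnel), the node-specific changes would be roughly:
hostnamectl set-hostname compute2
openstack-config --set /etc/nova/nova.conf DEFAULT my_ip 10.1.1.145
openstack-config --set /etc/nova/nova.conf vnc vncserver_proxyclient_address 10.1.1.145
openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini vxlan local_ip 10.2.2.145
(plus an entry for compute2 in /etc/hosts on all nodes)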
Appendix:
Flavor creation commands:
openstack flavor create m1.tiny --id 1 --ram 512 --disk 1 --vcpus 1
openstack flavor create m1.small --id 2 --ram 2048 --disk 20 --vcpus 1
openstack flavor create m1.medium --id 3 --ram 4096 --disk 40 --vcpus 2
openstack flavor create m1.large --id 4 --ram 8192 --disk 80 --vcpus 4
openstack flavor create m1.xlarge --id 5 --ram 16384 --disk 160 --vcpus 8
openstack flavor list
https://github.com/gaelL/openstack-log-colorizer/ //a tool for viewing log files with color highlighting
wget -O /usr/local/bin/openstack_log_colorizer https://raw.githubusercontent.com/gaelL/openstack-log-colorizer/master/openstack_log_colorizer
chmod +x /usr/local/bin/openstack_log_colorizer
cat log | openstack_log_colorizer --level warning
cat log | openstack_log_colorizer --include error TRACE
cat log | openstack_log_colorizer --exclude INFO warning
Scheduled (cron) tasks:
crontab -e //edit the crontab
* * * * * source /root/admin-openrc && /usr/bin/python /root/ln_all_images.py
* * * * * sleep 10; source /root/admin-openrc && /usr/bin/python /root/ln_all_images.py
* * * * * sleep 20; source /root/admin-openrc && /usr/bin/python /root/ln_all_images.py
* * * * * sleep 30; source /root/admin-openrc && /usr/bin/python /root/ln_all_images.py
* * * * * sleep 40; source /root/admin-openrc && /usr/bin/python /root/ln_all_images.py
* * * * * sleep 50; source /root/admin-openrc && /usr/bin/python /root/ln_all_images.py
crontab -l //list the crontab entries
systemctl restart crond.service
tail -f <logfile> //follow a log file
nova reset-state --active <ID> //reset an instance's state to active so it comes back normally after a reboot
Automatic image-cache hard-link script:
vim /root/ln_all_images.py
# Python 2 script (it uses the commands module): hard-links glance images into
# the shared image cache so compute nodes can reuse them as nova base images.
import logging
import logging.handlers
import hashlib
import commands

# Logging: rotate ln_all_image.log at 1 MB, keep 5 backups.
LOG_FILE = 'ln_all_image.log'
handler = logging.handlers.RotatingFileHandler(LOG_FILE, maxBytes=1024 * 1024, backupCount=5)
fmt = '%(asctime)s - %(filename)s:%(lineno)s - %(name)s - %(message)s'
formatter = logging.Formatter(fmt)
handler.setFormatter(formatter)
logger = logging.getLogger('images')
logger.addHandler(handler)
logger.setLevel(logging.DEBUG)

# Image IDs known to glance, and the status column of every image.
#image_list = commands.getoutput("ls -l -tr /var/lib/glance/images | awk 'NR>1{ print $NF }'").strip().split('\n')
image_list = commands.getoutput("""glance image-list |awk -F"|" '{print $2}'|grep -v -E '(ID|^$)'""").strip().split()
status = commands.getoutput("""openstack image list |awk 'NR>2{print $6}'|grep -v -E '(ID|^$)'""").strip().split()
queued = "queued"
saving = "saving"

if queued in status or saving in status:
    # A snapshot is still being written: skip the newest entries and only link
    # the images that are already complete (all but the last three files).
    image_list_1 = commands.getoutput("ls -l -tr /var/lib/glance/images | awk 'NR>1{l[NR]=$0} END {for (i=1;i<=NR-3;i++)print l[i]}' | awk '{print $9}' |grep -v ^$").strip().split()
    logger.info('a new snapshot is being created now...')
    for ida in image_list_1:
        # nova's image cache names files by the sha1 hash of the image ID.
        image_id_hash = hashlib.sha1()
        image_id_hash.update(ida.strip())
        newid1 = image_id_hash.hexdigest()
        commands.getoutput('ln /var/lib/glance/images/{0} /var/lib/glance/imagecache/{1}'.format(ida, newid1))
        commands.getoutput('chown qemu:qemu /var/lib/glance/imagecache/{0}'.format(newid1))
        commands.getoutput('chmod 644 /var/lib/glance/imagecache/{0}'.format(newid1))
else:
    # No snapshot in progress: link every image in the glance store.
    image_list_2 = commands.getoutput("ls -l -tr /var/lib/glance/images | awk 'NR>1{ print $NF }'").strip().split()
    logger.info('no snapshot in progress, linking all images...')
    for idb in image_list_2:
        image_id_hash = hashlib.sha1()
        image_id_hash.update(idb.strip())
        newid2 = image_id_hash.hexdigest()
        commands.getoutput('ln /var/lib/glance/images/{0} /var/lib/glance/imagecache/{1}'.format(idb, newid2))
        commands.getoutput('chown qemu:qemu /var/lib/glance/imagecache/{0}'.format(newid2))
        commands.getoutput('chmod 644 /var/lib/glance/imagecache/{0}'.format(newid2))
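To test the script once by hand before relying on cron (the log file is written to the working directory):
cd /root && source /root/admin-openrc
python /root/ln_all_images.py
tail -n 5 /root/ln_all_image.log
ls -l /var/lib/glance/imagecache/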
XII. Add the OpenStack services and resources to Pacemaker
0. Pacemaker parameter notes
Format for adding a primitive:
primitive <unique ID> <resource agent class>:<resource agent provider>:<resource agent name>
params attr_list
meta attr_list
op op_type [<attribute>=<value>...]
primitive parameter notes:
Resource agent class: lsb, ocf, stonith, service
Resource agent provider: heartbeat, pacemaker
Resource agent name: the resource agent itself, e.g. IPaddr2, httpd, mysql
params: instance attributes, parameters specific to a resource class that determine how the resource behaves and which service instance it controls.
meta: meta attributes, options that can be added to a resource; they tell the CRM how to treat that particular resource.
op: operations. By default the cluster does not keep checking that a resource stays healthy; to have it do so, add a monitor operation to the resource definition. A monitor can be added for any class of resource agent.
op_type: one of start, stop, monitor
interval: how often to run the operation, in seconds.
timeout: how long to wait before declaring the operation failed.
requires: the conditions that must hold for this operation to run. Allowed values: nothing, quorum and fencing. The default depends on whether fencing is enabled and whether the resource class is stonith; for STONITH resources the default is nothing.
on-fail: what to do when this operation fails. Allowed values:
ignore: pretend the resource did not fail.
block: take no further action on the resource.
stop: stop the resource and do not start it anywhere else.
restart: stop the resource and restart it (possibly on a different node).
fence: fence (STONITH) the node on which the resource failed.
standby: move all resources away from the node on which the resource failed.
enabled: if false, treat the operation as if it did not exist. Allowed values: true, false.
Example:
primitive r0 ocf:linbit:drbd \
params drbd_resource=r0 \
op monitor role=Master interval=60s \
op monitor role=Slave interval=300s
meta attribute notes:
priority: if not all resources can be active, the cluster stops lower-priority resources in order to keep higher-priority ones active.
target-role: the state the cluster should keep this resource in, either Started or Stopped.
is-managed: whether the cluster is allowed to start and stop the resource; true or false.
migration-threshold: how many failures a resource may have before it is moved. Suppose a location constraint makes the resource prefer one node; if it fails there, the failure count is compared against migration-threshold, and once failure count >= migration-threshold the resource is migrated to the next preferred node.
By default, once the threshold is reached, the failed resource is only allowed to run on that node again after the administrator manually resets its failure count (after fixing the cause of the failure).
However, the failure count can be made to expire by setting the resource's failure-timeout option. With migration-threshold=2 and failure-timeout=60s, the resource migrates to a new node after two failures and may be allowed to move back after one minute (depending on stickiness and constraint scores).
There are two exceptions to the migration-threshold concept, for start failures and stop failures: a start failure sets the failure count to INFINITY and therefore always causes an immediate migration, while a stop failure causes fencing (when stonith-enabled is true, which is the default). If no STONITH resource is defined (or stonith-enabled is false), the resource is not migrated at all.
failure-timeout: how many seconds to wait before treating the resource as if the failure never happened (and allowing it to return to the node on which it failed); default 0 (disabled).
resource-stickiness: how strongly the resource prefers to stay where it currently is; default 0.
multiple-active: what the cluster should do if the resource is found active on more than one node:
block (mark the resource unmanaged), stop_only (stop all active instances), stop_start (the default: stop all active instances and start the resource on one node).
requires: defines the conditions under which the resource can be started. The default is fencing, except for the following values:
*nothing - the cluster can always start the resource;
*quorum - the cluster can start the resource only if a majority of nodes are online; this is the default when stonith-enabled is false or the resource class is stonith;
*fencing - the cluster can start the resource only if a majority of nodes are online and any failed or unknown nodes have been powered off;
*unfencing - the cluster can start the resource only if a majority of nodes are online, any failed or unknown nodes have been powered off, and the node itself has been unfenced; this is the default for a fencing device whose stonith meta parameter is set to provides=unfencing.
1. Add the rabbitmq service to the cluster
The rabbitmq OCF resource agent lives under /usr/lib/ocf/resource.d/rabbitmq.
On every controller node:
systemctl disable rabbitmq-server
Then, in the crm configure shell on one node:
primitive rabbitmq-server systemd:rabbitmq-server \
op start interval=0s timeout=30 \
op stop interval=0s timeout=30 \
op monitor interval=30 timeout=30 \
meta priority=100 target-role=Started
clone rabbitmq-server-clone rabbitmq-server meta target-role=Started
commit
On controller1:
cat /var/lib/rabbitmq/.erlang.cookie //check this node's rabbitmq Erlang cookie (used as erlang_cookie below)
crm configure
primitive p_rabbitmq-server ocf:rabbitmq:rabbitmq-server-ha \
params erlang_cookie=NGVCLPABVAERDMWKMGYT node_port=5672 \
op monitor interval=30 timeout=60 \
op monitor interval=27 role=Master timeout=60 \
op start interval=0s timeout=360 \
op stop interval=0s timeout=120 \
op promote interval=0 timeout=120 \
op demote interval=0 timeout=120 \
op notify interval=0 timeout=180 \
meta migration-threshold=10 failure-timeout=30s resource-stickiness=100
ms p_rabbitmq-server-master p_rabbitmq-server \
meta interleave=true master-max=1 master-node-max=1 notify=true ordered=false requires=nothing target-role=Started
commit
It takes roughly 4-5 minutes after the resource is added before the cluster has fully taken the service over:
crm status
2. Add haproxy to the cluster:
On every controller node:
systemctl disable haproxy
On controller1:
crm configure
primitive haproxy systemd:haproxy \
op start interval=0s timeout=20 \
op stop interval=0s timeout=20 \
op monitor interval=20s timeout=30s \
meta priority=100 target-role=Started
Tie the haproxy service to the VIPs (colocation constraints):
colocation haproxy-with-vip_management inf: vip_management:Started haproxy:Started
colocation haproxy-with-vip_public inf: vip_public:Started haproxy:Started
verify
commit
3. Add httpd and memcached to the cluster
On every controller node:
systemctl disable httpd
systemctl disable memcached
On controller1 (in crm configure):
primitive httpd systemd:httpd \
op start interval=0s timeout=30s \
op stop interval=0s timeout=30s \
op monitor interval=30s timeout=30s \
meta priority=100 target-role=Started
primitive memcached systemd:memcached \
op start interval=0s timeout=30s \
op stop interval=0s timeout=30s \
op monitor interval=30s timeout=30s \
meta priority=100 target-role=Started
clone openstack-dashboard-clone httpd meta target-role=Started
clone openstack-memcached-clone memcached meta target-role=Started
commit
4. Add the glance services to the cluster
On every controller node:
systemctl disable openstack-glance-api openstack-glance-registry
On controller1 (in crm configure):
primitive openstack-glance-api systemd:openstack-glance-api \
op start interval=0s timeout=30s \
op stop interval=0s timeout=30s \
op monitor interval=30s timeout=30s \
meta priority=100 target-role=Started
primitive openstack-glance-registry systemd:openstack-glance-registry \
op start interval=0s timeout=30s \
op stop interval=0s timeout=30s \
op monitor interval=30s timeout=30s \
meta priority=100 target-role=Started
clone openstack-glance-api-clone openstack-glance-api \
meta target-role=Started
clone openstack-glance-registry-clone openstack-glance-registry \
meta target-role=Started
commit
5. Add the nova services to the cluster
On every controller node:
systemctl disable openstack-nova-api openstack-nova-cert openstack-nova-consoleauth openstack-nova-scheduler openstack-nova-conductor openstack-nova-novncproxy
On controller1 (in crm configure):
primitive openstack-nova-api systemd:openstack-nova-api \
op start interval=0s timeout=45 \
op stop interval=0s timeout=45 \
op monitor interval=30s timeout=30 \
meta priority=100 target-role=Started
primitive openstack-nova-cert systemd:openstack-nova-cert \
op start interval=0s timeout=45 \
op stop interval=0s timeout=45 \
op monitor interval=30s timeout=30 \
meta priority=100 target-role=Started
primitive openstack-nova-consoleauth systemd:openstack-nova-consoleauth \
op start interval=0s timeout=45 \
op stop interval=0s timeout=45 \
op monitor interval=30s timeout=30 \
meta priority=100 target-role=Started
primitive openstack-nova-scheduler systemd:openstack-nova-scheduler \
op start interval=0s timeout=45 \
op stop interval=0s timeout=45 \
op monitor interval=30s timeout=30 \
meta priority=100 target-role=Started
primitive openstack-nova-conductor systemd:openstack-nova-conductor \
op start interval=0s timeout=45 \
op stop interval=0s timeout=45 \
op monitor interval=30s timeout=30 \
meta priority=100 target-role=Started
primitive openstack-nova-novncproxy systemd:openstack-nova-novncproxy \
op start interval=0s timeout=45 \
op stop interval=0s timeout=45 \
op monitor interval=30s timeout=30 \
meta priority=100 target-role=Started
clone openstack-nova-api-clone openstack-nova-api \
meta target-role=Started
clone openstack-nova-cert-clone openstack-nova-cert \
meta target-role=Started
clone openstack-nova-scheduler-clone openstack-nova-scheduler \
meta target-role=Started
clone openstack-nova-conductor-clone openstack-nova-conductor \
meta target-role=Started
clone openstack-nova-novncproxy-clone openstack-nova-novncproxy \
meta target-role=Started
commit
6. Add the cinder services to the cluster
On every controller node:
systemctl disable openstack-cinder-api openstack-cinder-scheduler
Then, on controller1 in crm configure:
primitive openstack-cinder-api systemd:openstack-cinder-api \
op start interval=0s timeout=45 \
op stop interval=0s timeout=45 \
op monitor interval=30s timeout=30 \
meta priority=100 target-role=Started
primitive openstack-cinder-scheduler systemd:openstack-cinder-scheduler \
op start interval=0s timeout=45 \
op stop interval=0s timeout=45 \
op monitor interval=30s timeout=30 \
meta priority=100 target-role=Started
clone openstack-cinder-api-clone openstack-cinder-api \
meta target-role=Started
clone openstack-cinder-scheduler-clone openstack-cinder-scheduler \
meta target-role=Started
commit
7. Add the neutron services to the cluster
On every controller node:
systemctl disable neutron-server neutron-l3-agent neutron-linuxbridge-agent neutron-dhcp-agent neutron-metadata-agent
Then, on controller1 in crm configure:
primitive openstack-neutron-server systemd:neutron-server \
op start interval=0s timeout=45 \
op stop interval=0s timeout=45 \
op monitor interval=30s timeout=30 \
meta priority=100 target-role=Started
primitive openstack-neutron-l3-agent systemd:neutron-l3-agent \
op start interval=0s timeout=45 \
op stop interval=0s timeout=45 \
op monitor interval=30s timeout=30 \
meta priority=100 target-role=Started
primitive openstack-neutron-linuxbridge-agent systemd:neutron-linuxbridge-agent \
op start interval=0s timeout=45 \
op stop interval=0s timeout=45 \
op monitor interval=30s timeout=30 \
meta priority=100 target-role=Started
primitive openstack-neutron-dhcp-agent systemd:neutron-dhcp-agent \
op start interval=0s timeout=45 \
op stop interval=0s timeout=45 \
op monitor interval=30s timeout=30 \
meta priority=100 target-role=Started
primitive openstack-neutron-metadata-agent systemd:neutron-metadata-agent \
op start interval=0s timeout=45 \
op stop interval=0s timeout=45 \
op monitor interval=30s timeout=30 \
meta priority=100 target-role=Started
clone openstack-neutron-server-clone openstack-neutron-server \
meta target-role=Started
clone openstack-neutron-l3-agent-clone openstack-neutron-l3-agent \
meta target-role=Started
clone openstack-neutron-linuxbridge-agent-clone openstack-neutron-linuxbridge-agent \
meta target-role=Started
clone openstack-neutron-dhcp-agent-clone openstack-neutron-dhcp-agent \
meta target-role=Started
clone openstack-neutron-metadata-agent-clone openstack-neutron-metadata-agent \
meta target-role=Started
commit
8. Troubleshooting: clear failed resource actions with crm resource cleanup
crm resource cleanup rabbitmq-server-clone
crm resource cleanup openstack-dashboard-clone
crm resource cleanup openstack-memcached-clone
crm resource cleanup openstack-glance-api-clone
crm resource cleanup openstack-glance-registry-clone
crm resource cleanup openstack-nova-api-clone
crm resource cleanup openstack-nova-cert-clone
crm resource cleanup openstack-nova-scheduler-clone
crm resource cleanup openstack-nova-conductor-clone
crm resource cleanup openstack-nova-novncproxy-clone
crm resource cleanup openstack-cinder-api-clone
crm resource cleanup openstack-cinder-scheduler-clone
crm resource cleanup openstack-neutron-server-clone
crm resource cleanup openstack-neutron-l3-agent-clone
crm resource cleanup openstack-neutron-linuxbridge-agent-clone
crm resource cleanup openstack-neutron-dhcp-agent-clone
crm resource cleanup openstack-neutron-metadata-agent-clone
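To see why a resource needed cleaning up in the first place, the failure counts can be inspected before clearing them, for example:
crm status
crm_mon -1 -f
crm resource failcount openstack-nova-api show controller1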
Nova configuration for a Windows Server 2016 compute node:
[libvirt]
cpu_mode=host-passthrough
To download a whole directory over sftp, change into that directory first and then use get -r ./. to fetch everything inside it.