Install and configure MGR (single-primary mode)
MGR stands for MySQL Group Replication. In essence it is built on top of the MySQL binary log, using ROW-format events plus GTIDs, a group communication framework, and transaction ordering/certification.
MySQL installation (omitted)
192.168.30.113  MySQL 5.7.24
192.168.30.118  MySQL 5.7.24
192.168.30.119  MySQL 5.7.24
Add hostname-to-IP mappings
Add the following to /etc/hosts on all three machines:
192.168.30.118 mysql-118
192.168.30.119 mysql-119
192.168.30.113 mysql-113
Configure my.cnf
gtid_mode=on
enforce_gtid_consistency=ON
master_info_repository=TABLE
relay_log_info_repository=TABLE
binlog_checksum=NONE
transaction_write_set_extraction=XXHASH64
loose-group_replication_group_name="3db33b36-0e51-409f-a61d-c99756e90155"
loose-group_replication_start_on_boot=off
loose-group_replication_local_address="192.168.30.119:44061"    # change to this node's own address
loose-group_replication_group_seeds="192.168.30.113:44061,192.168.30.118:44061,192.168.30.119:44061"    # identical on all 3 nodes
loose-group_replication_bootstrap_group=off
group_replication_ip_whitelist="192.168.30.118,192.168.30.119,192.168.30.113"
# Enable exactly one of the two blocks below.
# Multi-primary mode:
#loose-group_replication_single_primary_mode=off
#loose-group_replication_enforce_update_everywhere_checks=on
# Single-primary mode (used in this article):
loose-group_replication_single_primary_mode=on
loose-group_replication_enforce_update_everywhere_checks=off
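The loose- prefix simply lets mysqld start even though the group replication plugin is not installed yet. After restarting mysqld it is worth confirming the settings took effect (a quick check, not part of the original steps):

mysql> show variables like 'gtid_mode';
mysql> show variables like 'group_replication%';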
Create the replication account and install the plugin
Log in to each of the 3 MGR nodes; on every one, create the replication account and install the group replication plugin:
grant replication slave on *.* to rpl_user@'%' identified by 'mgrtest';
install plugin group_replication SONAME 'group_replication.so';
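Before proceeding, you can confirm the plugin is active on each node (a quick check, not in the original walkthrough):

mysql> select plugin_name, plugin_status from information_schema.plugins
    -> where plugin_name = 'group_replication';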
Build the cluster
Run the following on each node; it sets the credentials for the group_replication_recovery channel, which MGR uses for distributed recovery when a member joins:
change master to master_user='rpl_user',master_password='mgrtest' for channel 'group_replication_recovery';
Start MGR
Bootstrap the group on node1, which becomes the initial primary:
mysql> set global group_replication_bootstrap_group=on;    # run only on the bootstrapping node
Query OK, 0 rows affected (0.00 sec)

mysql> show variables like 'group_replication_bootstrap_group';
+-----------------------------------+-------+
| Variable_name                     | Value |
+-----------------------------------+-------+
| group_replication_bootstrap_group | ON    |
+-----------------------------------+-------+
1 row in set (0.00 sec)

mysql> start group_replication;
Query OK, 0 rows affected (2.06 sec)

mysql> select * from performance_schema.replication_group_members;
+---------------------------+--------------------------------------+-------------+-------------+--------------+
| CHANNEL_NAME              | MEMBER_ID                            | MEMBER_HOST | MEMBER_PORT | MEMBER_STATE |
+---------------------------+--------------------------------------+-------------+-------------+--------------+
| group_replication_applier | 7b8d01ee-081d-11e9-94a1-005056a99b7f | tikv3       | 4406        | ONLINE       |
+---------------------------+--------------------------------------+-------------+-------------+--------------+
1 row in set (0.00 sec)
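Once the first member reports ONLINE, switch bootstrapping back off so that a later restart of this node cannot accidentally bootstrap a second, independent group (a step worth adding here, per the standard MGR procedure):

mysql> set global group_replication_bootstrap_group=off;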
Then join node2 and node3 one at a time. Note that group_replication_allow_local_disjoint_gtids_join relaxes the GTID consistency check at join time and is deprecated in later 5.7 releases; use it only when you know the extra local transactions are harmless:
mysql> set global group_replication_allow_local_disjoint_gtids_join=ON;
Query OK, 0 rows affected, 1 warning (0.00 sec)

mysql> start group_replication;
Query OK, 0 rows affected, 1 warning (5.78 sec)
After joining, check the group status:
mysql> select * from performance_schema.replication_group_members;
+---------------------------+--------------------------------------+-------------+-------------+--------------+
| CHANNEL_NAME              | MEMBER_ID                            | MEMBER_HOST | MEMBER_PORT | MEMBER_STATE |
+---------------------------+--------------------------------------+-------------+-------------+--------------+
| group_replication_applier | 7b8d01ee-081d-11e9-94a1-005056a99b7f | mysql-118   | 4406        | ONLINE       |
| group_replication_applier | 8b8d01ee-081d-11e9-94a1-005056a99b7f | mysql-119   | 4406        | ONLINE       |
| group_replication_applier | 9b8d01ee-081d-11e9-94a1-005056a99b7f | mysql-113   | 4406        | ONLINE       |
+---------------------------+--------------------------------------+-------------+-------------+--------------+
3 rows in set (0.00 sec)
Find out which node is the primary:
mysql> select variable_value from performance_schema.global_status where variable_name='group_replication_primary_member';
+--------------------------------------+
| variable_value                       |
+--------------------------------------+
| 8b8d01ee-081d-11e9-94a1-005056a99b7f |
+--------------------------------------+
1 row in set (0.00 sec)
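That query returns only the member UUID. To resolve it to a hostname, you can join against replication_group_members (a small convenience query, not part of the original steps):

mysql> select member_host, member_port
    ->   from performance_schema.replication_group_members
    ->  where member_id = (select variable_value
    ->                       from performance_schema.global_status
    ->                      where variable_name = 'group_replication_primary_member');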
Install and configure consul
Download consul
[root@mysql-118 ~]# wget https://releases.hashicorp.com/consul/1.4.0/consul_1.4.0_linux_amd64.zip
[root@mysql-118 ~]# unzip consul_1.4.0_linux_amd64.zip
Archive:  consul_1.4.0_linux_amd64.zip
  inflating: consul

Copy the consul binary to /opt/consul and create the data directory:

[root@mysql-118 ~]# mkdir -p /data/consul
[root@mysql-118 data]# cd /opt/consul/
Configure and start consul
[root@mysql-118 consul]# vi conf/server.json
{
  "data_dir": "/data/consul",
  "enable_script_checks": true,
  "datacenter": "dc1",
  "log_level": "INFO",
  "server": true,
  "bootstrap_expect": 3,
  "bind_addr": "192.168.30.118",
  "client_addr": "0.0.0.0",
  "ui": true,
  "retry_join": ["192.168.30.119"],
  "retry_interval": "30s",
  "rejoin_after_leave": true,
  "start_join": ["192.168.30.119"]
}

Note: bootstrap_expect tells a 3-server cluster when it may elect a leader. In start.sh the redirections must come in the order shown below (the reversed order would leave stderr on the terminal):

[root@mysql-118 consul]# vi start.sh
#!/bin/bash
nohup ./consul agent -config-dir=/opt/consul/conf >consul.log 2>&1 &

[root@mysql-118 consul]# ./start.sh
[root@mysql-118 consul]# ps -ef|grep consul
root  21959     1  1 Feb20 pts/0  01:21:48 ./consul agent -config-dir=/opt/consul/conf
root  22306  2326  0 20:30 pts/0  00:00:00 grep --color=auto consul
Build the consul cluster
Copy consul from 118 to 119 and 113 and start it on each node, then run the join command on 119 and 113 to join 118:
[root@mysql-113 consul]# ./consul join 192.168.30.118
[root@mysql-119 consul]# ./consul join 192.168.30.118
[root@mysql-118 consul]# ./consul members
Node       Address              Status  Type    Build  Protocol  DC   Segment
mysql-113  192.168.30.113:8301  alive   server  1.4.0  2         dc1  <all>
mysql-118  192.168.30.118:8301  alive   server  1.4.0  2         dc1  <all>
mysql-119  192.168.30.119:8301  alive   server  1.4.0  2         dc1  <all>
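Besides the members list, you can confirm a Raft leader has actually been elected through consul's HTTP API (a quick check, not in the original walkthrough):

[root@mysql-118 consul]# curl http://127.0.0.1:8500/v1/status/leader
# prints the address of the current leader, e.g. "192.168.30.118:8300"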
Check consul status
Open http://192.168.30.118:8500/ui in a browser, or query from the command line:
[root@mysql-118 consul]# ./consul catalog nodes
Node       ID        Address         DC
mysql-113  7ee4a169  192.168.30.113  dc1
mysql-118  92d621a3  192.168.30.118  dc1
mysql-119  7309e6e7  192.168.30.119  dc1

[root@mysql-118 consul]# dig @127.0.0.1 -p 8600 mysql-118.node.consul

; <<>> DiG 9.9.4-RedHat-9.9.4-72.el7 <<>> @127.0.0.1 -p 8600 mysql-118.node.consul
; (1 server found)
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 48405
;; flags: qr aa rd; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 2
;; WARNING: recursion requested but not available

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
;; QUESTION SECTION:
;mysql-118.node.consul.    IN    A

;; ANSWER SECTION:
mysql-118.node.consul.    0    IN    A    192.168.30.118

;; ADDITIONAL SECTION:
mysql-118.node.consul.    0    IN    TXT    "consul-network-segment="

;; Query time: 1 msec
;; SERVER: 127.0.0.1#8600(127.0.0.1)
;; WHEN: Mon Feb 25 20:32:29 CST 2019
;; MSG SIZE  rcvd: 102
Node failover test
Stop the consul process on 118 and you can see that 119 has become the leader:
[root@mysql-118 consul]# ./consul operator raft list-peers
Node       ID                                    Address              State     Voter  RaftProtocol
mysql-118  92d621a3-5c68-934a-2ad9-151aff8bb27d  192.168.30.118:8300  follower  true   3
mysql-113  7ee4a169-bd6b-3997-927e-2d04cdf076a1  192.168.30.113:8300  follower  true   3
mysql-119  7309e6e7-9795-19a3-fd8d-b11ee2dc0cfc  192.168.30.119:8300  leader    true   3
Cluster data replication test
1. With all 3 nodes healthy, any node can accept writes, and the data can be read from every node.
Write a key id on 119; all nodes can read it:
[root@mysql-119 consul]# ./consul kv put id 11
Success! Data written to: id
[root@mysql-119 consul]# ./consul kv get id
11
[root@mysql-113 consul]# ./consul kv get id
11
[root@mysql-118 consul]# ./consul kv get id
11
Then change id to 12 on 113:
[root@mysql-113 consul]# ./consul kv put id 12
Success! Data written to: id
[root@mysql-113 consul]# ./consul kv get id
12
[root@mysql-119 consul]# ./consul kv get id
12
[root@mysql-118 consul]# ./consul kv get id
12
Write name on 118:
[root@mysql-118 consul]# ./consul kv put name 'test'
Success! Data written to: name
[root@mysql-118 consul]# ./consul kv get name
test
[root@mysql-113 consul]# ./consul kv get name
test
[root@mysql-119 consul]# ./consul kv get name
test
2. With one node down, the other 2 nodes keep accepting writes, and the failed node automatically catches up once it recovers.
Stop 118; the leader switches to 119. Then change the value of id on 119:
[root@mysql-113 consul]# ./consul operator raft list-peers
Node       ID                                    Address              State     Voter  RaftProtocol
mysql-118  92d621a3-5c68-934a-2ad9-151aff8bb27d  192.168.30.118:8300  follower  true   3
mysql-113  7ee4a169-bd6b-3997-927e-2d04cdf076a1  192.168.30.113:8300  follower  true   3
mysql-119  7309e6e7-9795-19a3-fd8d-b11ee2dc0cfc  192.168.30.119:8300  leader    true   3

[root@mysql-119 consul]# ./consul kv put id 119
Success! Data written to: id
[root@mysql-119 consul]# ./consul kv get id
119
[root@mysql-113 consul]# ./consul kv get id
119
Start 118 again and check whether id is also 119 there:
[root@mysql-118 consul]# ./start.sh
[root@mysql-118 consul]# /opt/consul/consul operator raft list-peers
Node       ID                                    Address              State     Voter  RaftProtocol
mysql-118  92d621a3-5c68-934a-2ad9-151aff8bb27d  192.168.30.118:8300  follower  true   3
mysql-113  7ee4a169-bd6b-3997-927e-2d04cdf076a1  192.168.30.113:8300  follower  true   3
mysql-119  7309e6e7-9795-19a3-fd8d-b11ee2dc0cfc  192.168.30.119:8300  leader    true   3
[root@mysql-118 consul]# ./consul kv get id
119
consul uses Raft for distributed consensus: a write is committed once a majority of the servers (2 of 3 here) acknowledge it, which is why the cluster keeps working with a single node down and why a restarted node catches up automatically.
Add an MGR health-check service
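The original stops at this heading, so what follows is only a minimal sketch of the step. The idea: register a MySQL service on every node whose check passes only on the current MGR primary, so the service name always resolves to the primary. The service name mysql-mgr-write, the script path, and the monitor/monitor_pass account are placeholders, not from the original; enable_script_checks is already true in server.json above, so consul will run the script at the given interval:

[root@mysql-118 consul]# vi conf/mgr-service.json
{
  "service": {
    "name": "mysql-mgr-write",
    "port": 4406,
    "check": {
      "args": ["/opt/consul/conf/check_mgr_primary.sh"],
      "interval": "10s"
    }
  }
}

[root@mysql-118 consul]# vi conf/check_mgr_primary.sh
#!/bin/bash
# Healthy (exit 0) only when this node is the current MGR primary.
# "monitor"/"monitor_pass" is a placeholder account with SELECT on performance_schema.
primary=$(mysql -umonitor -pmonitor_pass -h127.0.0.1 -P4406 -N -s -e \
    "select variable_value from performance_schema.global_status \
     where variable_name='group_replication_primary_member';")
local_uuid=$(mysql -umonitor -pmonitor_pass -h127.0.0.1 -P4406 -N -s -e \
    "select @@server_uuid;")
[ -n "$primary" ] && [ "$primary" = "$local_uuid" ]

[root@mysql-118 consul]# chmod +x conf/check_mgr_primary.sh
[root@mysql-118 consul]# ./consul reload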
Set up DNS resolution
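This step is also only named in the original. The usual pattern: rather than having applications query port 8600 directly, forward the .consul domain to the local consul agent via dnsmasq. The file path below is a common convention, not taken from the original:

[root@mysql-118 ~]# vi /etc/dnsmasq.d/10-consul
# Forward every *.consul lookup to the local consul DNS endpoint.
server=/consul/127.0.0.1#8600

[root@mysql-118 ~]# systemctl restart dnsmasq
[root@mysql-118 ~]# dig mysql-mgr-write.service.consul +short   # should return the current primary's IP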
Configure mycat
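The mycat side is likewise not spelled out in the original. Under the assumptions above, the key point is that the writeHost url uses the consul service name instead of a fixed IP, so after an MGR failover the health check flips and DNS redirects writes automatically. A minimal schema.xml dataHost fragment (names and credentials are placeholders):

<dataHost name="mgr" maxCon="1000" minCon="10" balance="0" writeType="0"
          dbType="mysql" dbDriver="native" switchType="1">
    <heartbeat>select user()</heartbeat>
    <writeHost host="hostM1" url="mysql-mgr-write.service.consul:4406"
               user="mycat_user" password="mycat_pass"/>
</dataHost>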
Testing
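As a final end-to-end check (a sketch, assuming mycat listens on its default port 8066 and the placeholder account above): connect through mycat and confirm which backend answers, then stop mysqld on the current primary and repeat; once MGR elects a new primary and the consul check flips, the same command should succeed against the new primary.

mysql -umycat_user -pmycat_pass -h192.168.30.118 -P8066 -e "select @@hostname;"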