
MySQL MMM High Availability

Published: 2021-07-01 10:21:17


Advantages: high availability, good scalability, and automatic failover. For the master-master pair, only one node accepts writes at any given time, which guarantees data consistency. When the active master fails, the other master takes over immediately and the slaves switch over automatically, with no manual intervention.

Disadvantages: the monitor node is a single point of failure, although it can itself be made highly available with Keepalived or Heartbeat. The scheme requires at least three nodes, so it places demands on host count, and it requires read/write splitting, meaning a read/write-splitting layer must be written in front of the databases. Under very read- and write-intensive workloads it is not particularly stable and may suffer replication lag or failed switchovers. MMM is therefore not well suited to environments that demand high data safety while being busy with both reads and writes.

Suitable scenarios:

MMM suits deployments with heavy database traffic that can implement read/write splitting.

MMM's main functionality is provided by the following three scripts:

mmm_mond: the monitoring daemon, responsible for all monitoring work and for deciding node removal (it performs periodic heartbeat checks; on failure it floats the write VIP to the other master), and so on.

mmm_agentd: the agent daemon running on each MySQL server, exposing a simple set of remote services to the monitor node.

mmm_control: a command-line tool for managing the mmm_mond process.

Throughout the monitoring process, dedicated MySQL accounts must be created: an mmm_monitor user and an mmm_agent user, plus an mmm_tools user if you want to use MMM's backup tools.

II. Deployment

1. Environment

OS: CentOS 7.2 (64-bit); database: MySQL 5.7.13

Disable SELinux.

Configure NTP so that all hosts' clocks are synchronized.

| Role             | IP             | hostname | Server-id | Write VIP    | Read VIP     |
|------------------|----------------|----------|-----------|--------------|--------------|
| Master1          | 192.168.31.83  | master1  | 1         | 192.168.31.2 |              |
| Master2 (backup) | 192.168.31.141 | master2  | 2         |              | 192.168.31.3 |
| Slave1           | 192.168.31.250 | slave1   | 3         |              | 192.168.31.4 |
| Slave2           | 192.168.31.225 | slave2   | 4         |              | 192.168.31.5 |
| monitor          | 192.168.31.106 | monitor1 |           |              |              |

2. On all hosts, add the following entries to /etc/hosts:

# vim /etc/hosts

192.168.31.83 master1

192.168.31.141 master2

192.168.31.250 slave1

192.168.31.225 slave2

192.168.31.106 monitor1
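The entries above can be sanity-checked with a short script. This is a sketch run against a temporary copy of the file so it works anywhere; on a real node, point hosts_file at /etc/hosts instead:

```shell
# Verify that each cluster hostname maps to the expected IP in a hosts file.
hosts_file=$(mktemp)
cat > "$hosts_file" <<'EOF'
192.168.31.83 master1
192.168.31.141 master2
192.168.31.250 slave1
192.168.31.225 slave2
192.168.31.106 monitor1
EOF
for h in master1 master2 slave1 slave2 monitor1; do
  # print "name -> ip" for each expected host
  awk -v h="$h" '$2 == h {print h, "->", $1}' "$hosts_file"
done
```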

Install perl, perl-devel, perl-CPAN, libart_lgpl.x86_64, rrdtool.x86_64, and rrdtool-perl.x86_64 on all hosts:

# yum -y install perl-* libart_lgpl.x86_64 rrdtool.x86_64 rrdtool-perl.x86_64

Note: installed from the standard CentOS 7 online yum repositories.

Install the required Perl modules:

# cpan -i Algorithm::Diff Class::Singleton DBI DBD::mysql Log::Dispatch Log::Log4perl Mail::Send Net::Ping Proc::Daemon Time::HiRes Params::Validate Net::ARP

 

 

3. Install MySQL 5.7 on master1, master2, slave1, and slave2 and configure replication

master1 and master2 are masters of each other; slave1 and slave2 are slaves of master1.

Add the following to /etc/my.cnf on each MySQL host; server-id must be unique on every host.

master1:

log-bin = mysql-bin

binlog_format = mixed

server-id = 1

relay-log = relay-bin

relay-log-index = slave-relay-bin.index

log-slave-updates = 1

auto-increment-increment = 2

auto-increment-offset = 1

master2:

log-bin = mysql-bin

binlog_format = mixed

server-id = 2

relay-log = relay-bin

relay-log-index = slave-relay-bin.index

log-slave-updates = 1

auto-increment-increment = 2

auto-increment-offset = 2
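The two auto-increment settings above are what keep AUTO_INCREMENT keys from colliding in the master-master pair: both masters step by 2, with master1 starting at 1 and master2 at 2. A quick illustration of the resulting sequences:

```shell
# Simulate the id sequences produced under auto-increment-increment=2
# with offsets 1 (master1) and 2 (master2): they interleave and never overlap.
increment=2
m1_ids=""; m2_ids=""
for n in 0 1 2 3; do
  m1_ids="$m1_ids $((1 + n * increment))"   # master1: odd ids
  m2_ids="$m2_ids $((2 + n * increment))"   # master2: even ids
done
echo "master1:$m1_ids"
echo "master2:$m2_ids"
```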

slave1:

server-id = 3

relay-log = relay-bin

relay-log-index = slave-relay-bin.index

read_only   = 1

slave2:

server-id = 4

relay-log = relay-bin

relay-log-index = slave-relay-bin.index

read_only = 1

After editing my.cnf, restart MySQL with systemctl restart mysqld.

Note: every MySQL host must have a distinct server uuid; if necessary, edit the value in /usr/local/mysql/data/auto.cnf.

 

On the four database hosts, either disable the firewall or, if it stays enabled, add a rule allowing MySQL:

firewall-cmd --permanent --add-port=3306/tcp

firewall-cmd --reload

Replication setup (master1 and master2 as master-master; slave1 and slave2 as slaves of master1):

Grant the replication user on master1:

mysql> grant replication slave on *.* to rep@'192.168.31.%' identified by '123456';

Grant the replication user on master2:

mysql> grant replication slave on *.* to rep@'192.168.31.%' identified by '123456';

master2slave1slave2配置成master1的从库:

master1上执行show master status; 获取binlog文件和Position

mysql> show master status;

+------------------+----------+--------------+------------------+-------------------+
| File             | Position | Binlog_Do_DB | Binlog_Ignore_DB | Executed_Gtid_Set |
+------------------+----------+--------------+------------------+-------------------+
| mysql-bin.000001 |      452 |              |                  |                   |
+------------------+----------+--------------+------------------+-------------------+

master2slave1slave2执行

mysql> change master to master_host='192.168.31.83',master_port=3306,master_user='rep',master_password='123456',master_log_file='mysql-bin.000001',master_log_pos=452;

mysql> start slave;

Verify replication:

On master2:

mysql> show slave status\G

*************************** 1. row ***************************

Slave_IO_State: Waiting for master to send event

Master_Host: 192.168.31.83

Master_User: rep

Master_Port: 3306

Connect_Retry: 60

Master_Log_File: mysql-bin.000001

Read_Master_Log_Pos: 452

Relay_Log_File: relay-bin.000002

Relay_Log_Pos: 320

Relay_Master_Log_File: mysql-bin.000001

Slave_IO_Running: Yes

Slave_SQL_Running: Yes

On slave1:

mysql> show slave status\G

*************************** 1. row ***************************

Slave_IO_State: Waiting for master to send event

Master_Host: 192.168.31.83

Master_User: rep

Master_Port: 3306

Connect_Retry: 60

Master_Log_File: mysql-bin.000001

Read_Master_Log_Pos: 452

Relay_Log_File: relay-bin.000002

Relay_Log_Pos: 320

Relay_Master_Log_File: mysql-bin.000001

Slave_IO_Running: Yes

Slave_SQL_Running: Yes

On slave2:

mysql> show slave status\G

*************************** 1. row ***************************

Slave_IO_State: Waiting for master to send event

Master_Host: 192.168.31.83

Master_User: rep

Master_Port: 3306

Connect_Retry: 60

Master_Log_File: mysql-bin.000001

Read_Master_Log_Pos: 452

Relay_Log_File: relay-bin.000002

Relay_Log_Pos: 320

Relay_Master_Log_File: mysql-bin.000001

Slave_IO_Running: Yes

Slave_SQL_Running: Yes

If Slave_IO_Running and Slave_SQL_Running are both Yes, replication is configured correctly.
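That check can be scripted by counting the two thread states in the slave status output. The sketch below runs against a captured sample rather than a live connection; on a real node, swap in the commented mysql call (which assumes login credentials are available, e.g. via ~/.my.cnf):

```shell
# Count 'Running: Yes' lines in slave status output:
# 2 means both the IO thread and the SQL thread are healthy.
status='Slave_IO_Running: Yes
Slave_SQL_Running: Yes'
# On a real node: status=$(mysql -e 'show slave status\G')
running=$(echo "$status" | grep -c 'Running: Yes')
if [ "$running" -eq 2 ]; then
  echo "replication OK"
else
  echo "replication BROKEN ($running/2 threads running)"
fi
```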

master1配置成master2的从库:

master2上执行show master status ;获取binlog文件和Position

mysql> show master status;

+------------------+----------+--------------+------------------+-------------------+
| File             | Position | Binlog_Do_DB | Binlog_Ignore_DB | Executed_Gtid_Set |
+------------------+----------+--------------+------------------+-------------------+
| mysql-bin.000001 |      452 |              |                  |                   |
+------------------+----------+--------------+------------------+-------------------+

master1上执行:

mysql> change master to master_host='192.168.31.141',master_port=3306,master_user='rep',master_password='123456',master_log_file='mysql-bin.000001',master_log_pos=452;

mysql> start slave;

Verify replication:

On master1:

mysql> show slave status\G

*************************** 1. row ***************************

Slave_IO_State: Waiting for master to send event

Master_Host: 192.168.31.141

Master_User: rep

Master_Port: 3306

Connect_Retry: 60

Master_Log_File: mysql-bin.000001

Read_Master_Log_Pos: 452

Relay_Log_File: relay-bin.000002

Relay_Log_Pos: 320

Relay_Master_Log_File: mysql-bin.000001

Slave_IO_Running: Yes

Slave_SQL_Running: Yes

If Slave_IO_Running and Slave_SQL_Running are both Yes, replication is configured correctly.

4. mysql-mmm configuration

4mysql节点上创建用户

Create the agent account:

mysql> grant super,replication client,process on *.* to 'mmm_agent'@'192.168.31.%' identified by '123456';

Create the monitor account:

mysql> grant replication client on *.* to 'mmm_monitor'@'192.168.31.%' identified by '123456';

1:因为之前的主从复制,以及主从已经是ok的,所以我在master1服务器执行就ok了。

Check that the monitor and agent accounts exist on all three of master2, slave1, and slave2:

mysql> select user,host from mysql.user where user in ('mmm_monitor','mmm_agent');

+-------------+--------------+
| user        | host         |
+-------------+--------------+
| mmm_agent   | 192.168.31.% |
| mmm_monitor | 192.168.31.% |
+-------------+--------------+

mysql> show grants for 'mmm_agent'@'192.168.31.%';

+-------------------------------------------------------------------------------+
| Grants for mmm_agent@192.168.31.%                                             |
+-------------------------------------------------------------------------------+
| GRANT PROCESS, SUPER, REPLICATION CLIENT ON *.* TO 'mmm_agent'@'192.168.31.%' |
+-------------------------------------------------------------------------------+

mysql> show grants for 'mmm_monitor'@'192.168.31.%';

+-----------------------------------------------------------------+
| Grants for mmm_monitor@192.168.31.%                             |
+-----------------------------------------------------------------+
| GRANT REPLICATION CLIENT ON *.* TO 'mmm_monitor'@'192.168.31.%' |
+-----------------------------------------------------------------+

2

mmm_monitor用户:mmm监控用于对mysql服务器进程健康检查

mmm_agent用户:mmm代理用来更改只读模式,复制的主服务器等

5. Installing mysql-mmm

monitor主机(192.168.31.106) 上安装监控程序

#cd /tmp

#wget http://pkgs.fedoraproject.org/repo/pkgs/mysql-mmm/mysql-mmm-2.2.1.tar.gz/f5f8b48bdf89251d3183328f0249461e/mysql-mmm-2.2.1.tar.gz

#tar -zxf mysql-mmm-2.2.1.tar.gz

#cd mysql-mmm-2.2.1

#make install

Install the agent on the database servers (master1, master2, slave1, slave2):

#cd /tmp

#wget http://pkgs.fedoraproject.org/repo/pkgs/mysql-mmm/mysql-mmm-2.2.1.tar.gz/f5f8b48bdf89251d3183328f0249461e/mysql-mmm-2.2.1.tar.gz

#tar -zxf mysql-mmm-2.2.1.tar.gz

#cd mysql-mmm-2.2.1

#make install

6. Configuring MMM

Write the configuration files; they must be identical on all five hosts.

After installation, all configuration files live under /etc/mysql-mmm/. Both the monitor server and the database servers need the shared file mmm_common.conf, with the following content:

active_master_role      writer      # name of the active master role; all db servers start with read_only on, and the monitor's agent automatically turns read_only off on the current writer

 

<host default>
cluster_interface       eno16777736                 # network interface the cluster VIPs are bound to
pid_path                /var/run/mmm_agentd.pid     # pid file path
bin_path                /usr/lib/mysql-mmm/         # path to the executables
replication_user        rep                         # replication user
replication_password    123456                      # replication user's password
agent_user              mmm_agent                   # agent user
agent_password          123456                      # agent user's password
</host>

 

<host master1>                          # master1's host section
ip      192.168.31.83                   # master1's IP
mode    master                          # role: master
peer    master2                         # host name of master1's peer, i.e. master2
</host>

 

<host master2>                          # same concepts as for master1
ip      192.168.31.141
mode    master
peer    master1
</host>

 

<host slave1>                           # a slave host; repeat an identical block for each additional slave
ip      192.168.31.250                  # the slave's IP
mode    slave                           # role: slave
</host>

 

<host slave2>                           # same as slave1
ip      192.168.31.225
mode    slave
</host>

 

<role writer>                           # writer role
hosts   master1,master2                 # hosts allowed to take writes; listing only one master avoids switchovers triggered by network jitter, but then a master failure leaves MMM with no writer at all, serving reads only
ips     192.168.31.2                    # write VIP exposed to applications
mode    exclusive                       # exclusive: only one writer (a single write VIP) at any time
</role>

 

<role reader>                           # reader role
hosts   master2,slave1,slave2           # hosts serving reads; the active master could be added here as well
ips     192.168.31.3, 192.168.31.4, 192.168.31.5    # read VIPs; these are not mapped one-to-one to hosts, and the counts may differ, in which case one host carries two VIPs
mode    balanced                        # balanced: load-balance reads across the hosts
</role>

 

Copy this file unchanged to the other servers:

#for host in master1 master2 slave1 slave2 ; do scp /etc/mysql-mmm/mmm_common.conf $host:/etc/mysql-mmm/ ; done

 

Agent configuration

Edit /etc/mysql-mmm/mmm_agent.conf on each of the four MySQL nodes.
On the database servers this is the second file that needs modifying; its content is:

include mmm_common.conf

this master1

Note: this file is configured only on the db servers; the monitor server does not need it. Change the host name after `this` to the local server's hostname.
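Since mmm_agent.conf differs between hosts only in that `this` line, it can be stamped out mechanically. A hedged sketch, written against a temp file so it runs anywhere; on a real node the target would be /etc/mysql-mmm/mmm_agent.conf:

```shell
# Generate an mmm_agent.conf whose `this` line names the local host.
conf=$(mktemp)
printf 'include mmm_common.conf\nthis placeholder\n' > "$conf"
# Replace the `this` line with this machine's short hostname.
sed -i "s/^this .*/this $(hostname -s)/" "$conf"
grep '^this ' "$conf"
```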

Start the agent daemon

In the init script /etc/init.d/mysql-mmm-agent, add the following line just below #!/bin/sh:

# vim /etc/init.d/mysql-mmm-agent
source /root/.bash_profile 

Register it as a system service and enable it at boot:

#chkconfig --add mysql-mmm-agent

#chkconfig mysql-mmm-agent on

#/etc/init.d/mysql-mmm-agent start

Note: sourcing /root/.bash_profile is what allows the mysql-mmm-agent service to start at boot. The only difference between an automatic and a manual start is that a manual start inherits a console and its environment, so when the daemon fails to start as a service, a missing environment variable is the likely cause.

If the service fails to start, the error looks like this:

Daemon bin: '/usr/sbin/mmm_agentd'

Daemon pid: '/var/run/mmm_agentd.pid'

Starting MMM Agent daemon... Can't locate Proc/Daemon.pm in @INC (@INC contains: /usr/local/lib64/perl5 /usr/local/share/perl5 /usr/lib64/perl5/vendor_perl /usr/share/perl5/vendor_perl /usr/lib64/perl5 /usr/share/perl5 .) at /usr/sbin/mmm_agentd line 7.

BEGIN failed--compilation aborted at /usr/sbin/mmm_agentd line 7.

failed

Fix: install the missing Perl modules:

# cpan Proc::Daemon
# cpan Log::Log4perl

# /etc/init.d/mysql-mmm-agent start

Daemon bin: '/usr/sbin/mmm_agentd'

Daemon pid: '/var/run/mmm_agentd.pid'

Starting MMM Agent daemon... Ok

# netstat -antp | grep mmm_agentd

tcp  0   0 192.168.31.83:9989    0.0.0.0:*   LISTEN      9693/mmm_agentd

Configure the firewall:

firewall-cmd --permanent --add-port=9989/tcp

firewall-cmd --reload

Edit /etc/mysql-mmm/mmm_mon.conf on the monitor host:

include mmm_common.conf

 

<monitor>
ip              127.0.0.1                       # listen on localhost only, for safety; mmm_mond listens on port 9988 by default
pid_path        /var/run/mmm_mond.pid
bin_path        /usr/lib/mysql-mmm/
status_path     /var/lib/misc/mmm_mond.status
ping_ips        192.168.31.83,192.168.31.141,192.168.31.250,192.168.31.225      # IPs used to test network reachability; the network counts as up if any one of them answers; do not list the local address here
auto_set_online 0                               # seconds before a recovered node is automatically set online (default 60); 0 means immediately
</monitor>

 

<check default>
check_period    5           # check interval, default 5s
trap_period     10          # a node that keeps failing checks for trap_period seconds is considered failed, default 10s
timeout         2           # check timeout, default 2s
restart_after   10000       # restart the checker process after this many checks, default 10000
max_backlog     86400       # maximum rep_backlog value recorded by the check, default 60
</check>

 

<host default>
monitor_user        mmm_monitor     # user the monitor uses to connect to the db servers
monitor_password    123456          # that user's password
</host>

 

debug   0       # 0 = normal mode, 1 = debug mode

 

Start the monitor daemon:

In the init script /etc/init.d/mysql-mmm-monitor, add the following line just below #!/bin/sh:
source /root/.bash_profile

Register it as a system service and enable it at boot:

#chkconfig --add mysql-mmm-monitor

#chkconfig mysql-mmm-monitor on

#/etc/init.d/mysql-mmm-monitor start

If startup fails with:

Starting MMM Monitor daemon: Can't locate Proc/Daemon.pm in @INC (@INC contains: /usr/local/lib64/perl5 /usr/local/share/perl5 /usr/lib64/perl5/vendor_perl /usr/share/perl5/vendor_perl /usr/lib64/perl5 /usr/share/perl5 .) at /usr/sbin/mmm_mond line 11.
BEGIN failed--compilation aborted at /usr/sbin/mmm_mond line 11.

failed

Fix: install the missing Perl modules:

# cpan Proc::Daemon

# cpan Log::Log4perl

[root@monitor1 ~]# /etc/init.d/mysql-mmm-monitor start

Daemon bin: '/usr/sbin/mmm_mond'

Daemon pid: '/var/run/mmm_mond.pid'

Starting MMM Monitor daemon: Ok

[root@monitor1 ~]# netstat -anpt | grep 9988

tcp  0  0 127.0.0.1:9988   0.0.0.0:*      LISTEN      8546/mmm_mond

1:无论是在db端还是在监控端如果有对配置文件进行修改操作都需要重启代理进程和监控进程。

2MMM启动顺序:先启动monitor,再启动 agent

 

Check the cluster status:

[root@monitor1 ~]# mmm_control show

master1(192.168.31.83) master/ONLINE. Roles: writer(192.168.31.2)

master2(192.168.31.141) master/ONLINE. Roles: reader(192.168.31.5)

slave1(192.168.31.250) slave/ONLINE. Roles: reader(192.168.31.4)

slave2(192.168.31.225) slave/ONLINE. Roles: reader(192.168.31.3)
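To find which host currently holds the writer role programmatically, the `mmm_control show` output can be parsed. Demonstrated here against the captured output above; on the monitor you would feed in the live command instead:

```shell
# Extract the host name carrying the writer role from mmm_control show output.
show='master1(192.168.31.83) master/ONLINE. Roles: writer(192.168.31.2)
master2(192.168.31.141) master/ONLINE. Roles: reader(192.168.31.5)
slave1(192.168.31.250) slave/ONLINE. Roles: reader(192.168.31.4)
slave2(192.168.31.225) slave/ONLINE. Roles: reader(192.168.31.3)'
# On the monitor: show=$(mmm_control show)
writer=$(echo "$show" | awk '/writer\(/ {sub(/\(.*/, "", $1); print $1}')
echo "current writer: $writer"
```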

If a server's state is not ONLINE, bring it online with:

#mmm_control set_online <hostname>

For example: [root@monitor1 ~]# mmm_control set_online master1

The output above shows that the write VIP is on master1 and that all slave nodes treat master1 as their master.

 

Check whether the VIPs are bound:

[root@master1 ~]# ip addr show dev eno16777736

eno16777736: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000

link/ether 00:0c:29:6d:2f:82 brd ff:ff:ff:ff:ff:ff

inet 192.168.31.83/24 brd 192.168.31.255 scope global eno16777736

valid_lft forever preferred_lft forever

inet 192.168.31.2/32 scope global eno16777736

valid_lft forever preferred_lft forever

    inet6 fe80::20c:29ff:fe6d:2f82/64 scope link

valid_lft forever preferred_lft forever
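Whether the write VIP is bound can also be checked non-interactively. Shown against a captured sample of the output above; on a real node, replace the sample with the live command:

```shell
# Check that the write VIP 192.168.31.2 appears among the interface addresses.
sample='inet 192.168.31.83/24 brd 192.168.31.255 scope global eno16777736
inet 192.168.31.2/32 scope global eno16777736'
# On a real node: sample=$(ip addr show dev eno16777736)
if echo "$sample" | grep -q 'inet 192\.168\.31\.2/'; then
  echo "write VIP bound"
else
  echo "write VIP absent"
fi
```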

[root@master2 ~]# ip addr show dev eno16777736

eno16777736: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000

link/ether 00:0c:29:75:1a:9c brd ff:ff:ff:ff:ff:ff

inet 192.168.31.141/24 brd 192.168.31.255 scope global dynamic eno16777736

valid_lft 35850sec preferred_lft 35850sec

inet 192.168.31.5/32 scope global eno16777736

valid_lft forever preferred_lft forever

    inet6 fe80::20c:29ff:fe75:1a9c/64 scope link

valid_lft forever preferred_lft forever

[root@slave1 ~]# ip addr show dev eno16777736

eno16777736: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000

link
