Group Replication is a relatively new technology, and from time to time we run into issues.
Today's issue: the primary node's server crashed, and when it came back up it did not rejoin my cluster.
I logged in to the server and manually started Group Replication, but got an error.
+--------------------------------------+----------------+-------------+--------------+-------------+
| MEMBER_ID | MEMBER_HOST | MEMBER_PORT | MEMBER_STATE | MEMBER_TYPE |
+--------------------------------------+----------------+-------------+--------------+-------------+
| a1c5e25e-2715-11e7-bbe4-0800273fb9a2 | 192.168.56.103 | 3306 | OFFLINE | SECONDARY |
+--------------------------------------+----------------+-------------+--------------+-------------+
1 row in set (0.00 sec)

root@lnxmygrp03 [(none)]> start group_replication;
ERROR 3092 (HY000): The server is not configured properly to be an active member of the group. Please see more details on error log.
root@lnxmygrp03 [(none)]> exit
The error log shows the following:
2017-04-22T01:03:42.886040-05:00 0 [ERROR] Plugin group_replication reported: 'The member contains transactions not present in the group. The member will now exit the group.'
2017-04-22T01:05:07.042575-05:00 18 [Warning] Plugin group_replication reported: '[GCS] Automatically adding IPv4 localhost address to the whitelist. It is mandatory that it is added.'
2017-04-22T01:05:09.879836-05:00 0 [ERROR] Plugin group_replication reported: 'This member has more executed transactions than those present in the group. Local transactions: a1c5e25e-2715-11e7-bbe4-0800273fb9a2:1-3, a1e3e0e0-2715-11e7-b2cd-0800274048c8:1-3, a293fead-2715-11e7-bdf9-080027d1779e:1-3, af35aeee-66ac-4b38-9b79-9964c5ebe251:1-10 > Group transactions: a1e3e0e0-2715-11e7-b2cd-0800274048c8:1-3, a293fead-2715-11e7-bdf9-080027d1779e:1-3, af35aeee-66ac-4b38-9b79-9964c5ebe251:1-11'
2017-04-22T01:05:09.880160-05:00 0 [ERROR] Plugin group_replication reported: 'The member contains transactions not present in the group. The member will now exit the group.'
[admin@lnxmygrp03 mysql_tranlog]$
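The two GTID sets in that log tell the whole story. You can compute the errant set directly with MySQL's `GTID_SUBTRACT()` function; a sketch using the "Local" and "Group" sets copied from the log above (it is a pure set operation, so it can run on any node):

```sql
-- Everything in the member's set that is NOT in the group's set
-- is the errant transaction set.
SELECT GTID_SUBTRACT(
  'a1c5e25e-2715-11e7-bbe4-0800273fb9a2:1-3,a1e3e0e0-2715-11e7-b2cd-0800274048c8:1-3,a293fead-2715-11e7-bdf9-080027d1779e:1-3,af35aeee-66ac-4b38-9b79-9964c5ebe251:1-10',
  'a1e3e0e0-2715-11e7-b2cd-0800274048c8:1-3,a293fead-2715-11e7-bdf9-080027d1779e:1-3,af35aeee-66ac-4b38-9b79-9964c5ebe251:1-11'
) AS errant_transactions;
```

Here the result is a1c5e25e-2715-11e7-bbe4-0800273fb9a2:1-3 — three transactions that exist only on node 3.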
Node 1 is now the new primary.
+--------------------------------------+----------------+-------------+--------------+-------------+
| MEMBER_ID | MEMBER_HOST | MEMBER_PORT | MEMBER_STATE | MEMBER_TYPE |
+--------------------------------------+----------------+-------------+--------------+-------------+
| a1e3e0e0-2715-11e7-b2cd-0800274048c8 | 192.168.56.101 | 3306 | ONLINE | PRIMARY |
| a293fead-2715-11e7-bdf9-080027d1779e | 192.168.56.102 | 3306 | ONLINE | SECONDARY |
+--------------------------------------+----------------+-------------+--------------+-------------+
2 rows in set (0.00 sec)

root@lnxmygrp01 [(none)]>
Now let's validate the executed GTID sets on each node.
root@lnxmygrp01 [(none)]> SELECT @@global.gtid_executed;
+-------------------------------------------------------------------------------------------------------------------------------+
| @@global.gtid_executed |
+-------------------------------------------------------------------------------------------------------------------------------+
| a1e3e0e0-2715-11e7-b2cd-0800274048c8:1-3,
a293fead-2715-11e7-bdf9-080027d1779e:1-3,
af35aeee-66ac-4b38-9b79-9964c5ebe251:1-12 |
+-------------------------------------------------------------------------------------------------------------------------------+
1 row in set (0.01 sec)

root@lnxmygrp01 [(none)]>

root@lnxmygrp02 [(none)]> SELECT @@global.gtid_executed;
+-------------------------------------------------------------------------------------------------------------------------------+
| @@global.gtid_executed |
+-------------------------------------------------------------------------------------------------------------------------------+
| a1e3e0e0-2715-11e7-b2cd-0800274048c8:1-3,
a293fead-2715-11e7-bdf9-080027d1779e:1-3,
af35aeee-66ac-4b38-9b79-9964c5ebe251:1-12 |
+-------------------------------------------------------------------------------------------------------------------------------+
1 row in set (0.00 sec)

root@lnxmygrp02 [(none)]>

root@lnxmygrp03 [(none)]> SELECT @@global.gtid_executed;
+-------------------------------------------------------------------------------------------------------------------------------+
| @@global.gtid_executed |
+-------------------------------------------------------------------------------------------------------------------------------+
| a1c5e25e-2715-11e7-bbe4-0800273fb9a2:1-3,   <<<== this is the issue
a1e3e0e0-2715-11e7-b2cd-0800274048c8:1-3,
a293fead-2715-11e7-bdf9-080027d1779e:1-3,
af35aeee-66ac-4b38-9b79-9964c5ebe251:1-10 |
+-------------------------------------------------------------------------------------------------------------------------------+
1 row in set (0.00 sec)

root@lnxmygrp03 [(none)]>
How do we fix it? Node 3 has three transactions (a1c5e25e-2715-11e7-bbe4-0800273fb9a2:1-3) that never made it into the group, so we inject empty transactions with those same GTIDs on the current primary. Note this only reconciles the GTID bookkeeping; whatever data those transactions actually changed on node 3 is not touched, so make sure they contain nothing that matters before proceeding.
root@lnxmygrp01 [(none)]> SET GTID_NEXT='a1c5e25e-2715-11e7-bbe4-0800273fb9a2:1';
Query OK, 0 rows affected (0.00 sec)

root@lnxmygrp01 [(none)]> BEGIN;
Query OK, 0 rows affected (0.01 sec)

root@lnxmygrp01 [(none)]> COMMIT;
Query OK, 0 rows affected (0.00 sec)

root@lnxmygrp01 [(none)]> SET GTID_NEXT='a1c5e25e-2715-11e7-bbe4-0800273fb9a2:2';
Query OK, 0 rows affected (0.00 sec)

root@lnxmygrp01 [(none)]> BEGIN;
Query OK, 0 rows affected (0.00 sec)

root@lnxmygrp01 [(none)]> COMMIT;
Query OK, 0 rows affected (0.01 sec)

root@lnxmygrp01 [(none)]> SET GTID_NEXT='a1c5e25e-2715-11e7-bbe4-0800273fb9a2:3';
Query OK, 0 rows affected (0.00 sec)

root@lnxmygrp01 [(none)]> BEGIN;
Query OK, 0 rows affected (0.00 sec)

root@lnxmygrp01 [(none)]> COMMIT;
Query OK, 0 rows affected (0.01 sec)

root@lnxmygrp01 [(none)]> SET GTID_NEXT='AUTOMATIC';
Query OK, 0 rows affected (0.00 sec)

root@lnxmygrp01 [(none)]>
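The general pattern is one empty transaction per errant GTID, executed on the primary so the GTID becomes part of the group's history and replicates to every member. The `<uuid>:<n>` below is a placeholder for each errant GTID, not a literal value:

```sql
SET GTID_NEXT = '<uuid>:<n>';   -- pin the next transaction's GTID to the errant one
BEGIN;                          -- empty transaction: no statements between BEGIN and COMMIT
COMMIT;
-- repeat SET GTID_NEXT / BEGIN / COMMIT for each errant GTID, then:
SET GTID_NEXT = 'AUTOMATIC';    -- return to normal GTID assignment
```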
Now let's go back to node 3 and start Group Replication.
root@lnxmygrp03 [(none)]> start group_replication;
Query OK, 0 rows affected (2.60 sec)
root@lnxmygrp03 [(none)]>
+--------------------------------------+----------------+-------------+--------------+-------------+
| MEMBER_ID | MEMBER_HOST | MEMBER_PORT | MEMBER_STATE | MEMBER_TYPE |
+--------------------------------------+----------------+-------------+--------------+-------------+
| a1e3e0e0-2715-11e7-b2cd-0800274048c8 | 192.168.56.101 | 3306 | ONLINE | PRIMARY |
| a1c5e25e-2715-11e7-bbe4-0800273fb9a2 | 192.168.56.103 | 3306 | ONLINE | SECONDARY |
| a293fead-2715-11e7-bdf9-080027d1779e | 192.168.56.102 | 3306 | ONLINE | SECONDARY |
+--------------------------------------+----------------+-------------+--------------+-------------+
3 rows in set (0.00 sec)
root@lnxmygrp03 [(none)]>
Check whether all nodes now have the same GTID sets.
root@localhost [(none)]> SELECT @@global.gtid_executed;
+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| @@global.gtid_executed |
+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| a1c5e25e-2715-11e7-bbe4-0800273fb9a2:1-3,
a1e3e0e0-2715-11e7-b2cd-0800274048c8:1-3,
a293fead-2715-11e7-bdf9-080027d1779e:1-3,
af35aeee-66ac-4b38-9b79-9964c5ebe251:1-13 |
+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
1 row in set (0.00 sec)
root@localhost [(none)]>
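Instead of eyeballing each node, you can also make a session block until a server has applied a given GTID set, using `WAIT_FOR_EXECUTED_GTID_SET()` (available in MySQL 5.7+). A sketch, run on the node you want to check, with the primary's set from above and an assumed 60-second timeout:

```sql
-- Returns 0 once this server's gtid_executed contains the given set,
-- or 1 if the timeout (in seconds) expires first.
SELECT WAIT_FOR_EXECUTED_GTID_SET(
  'a1c5e25e-2715-11e7-bbe4-0800273fb9a2:1-3,a1e3e0e0-2715-11e7-b2cd-0800274048c8:1-3,a293fead-2715-11e7-bdf9-080027d1779e:1-3,af35aeee-66ac-4b38-9b79-9964c5ebe251:1-13',
  60);
```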
Now my replication is back to 3 nodes.
Hello, thanks for this.

After following these steps I am getting the error "There was an error when connecting to the donor server. Check group replication recovery's connection credentials."
I am setting up MySQL InnoDB Cluster on Red Hat virtual machines; after a lot of effort I got this far, but I am stuck at this point while adding the instance to the cluster.
The same setup on Ubuntu runs perfectly, without any errors or extra effort.
Please help.