Redis Cluster: How to Troubleshoot the CLUSTERDOWN Hash slot not served Error

Background: a data center failure took down one node in our Redis Cluster. I brought the node back up, rejoined it to the cluster, and thought no more of it.

A little while later a server disk alert came in. Checking the Tomcat logs on the application server, I found them scrolling non-stop with a Redis error: CLUSTERDOWN Hash slot not served.

Alarmed, I immediately went to the Redis cluster to check whether the node I had just rejoined was the problem.

[root@admin ~]# redis-cli -c -h 192.168.207.251 -p 7001
192.168.207.251:7001> set testfxkj 6666
(error) CLUSTERDOWN Hash slot not served
192.168.207.251:7001> exit

So the error points to a problem with the hash slot assignment.
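Some quick background on why this error appears: Redis Cluster hashes every key (CRC16 of the key, mod 16384) to one of 16384 slots, and a write fails with CLUSTERDOWN Hash slot not served when the key's slot is not currently served by any master. You can ask any node which slot a key maps to; the session below is my own illustration, and the returned slot number is hypothetical, not captured from this incident:

[root@admin ~]# redis-cli -c -h 192.168.207.251 -p 7001
192.168.207.251:7001> CLUSTER KEYSLOT testfxkj
(integer) 5460

If the returned slot falls in an unassigned range, every read and write for that key will fail this way.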

Exit the cluster shell and run redis-trib.rb check to inspect the cluster.

Note: I am running Redis 3.2 here, so I use redis-trib.rb check for the inspection. On Redis 5 and later, redis-trib.rb is deprecated; use redis-cli --cluster check instead.
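For reference, the newer invocation (a sketch for Redis 5+, not part of the original run) would be:

[root@admin ~]# redis-cli --cluster check 192.168.207.251:7001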

[root@admin ~]# redis-trib.rb check 192.168.207.251:7001

The check reports: [ERR] Not all 16384 slots are covered by nodes

Not all of the 16384 slots (numbered 0-16383) are assigned to a node.
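You can also spot the gap directly from any node: CLUSTER SLOTS lists every assigned slot range, so a hole in the ranges stands out (in this incident, the first master served 0-5459 and the next range started at 5461, leaving slot 5460 uncovered). A minimal way to look, assuming the same node as above:

[root@admin ~]# redis-cli -h 192.168.207.251 -p 7001 cluster slots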

Next, we try the redis-trib.rb fix command recommended by Redis to repair the cluster (again, on Redis 5 and later use redis-cli --cluster fix instead).

[root@admin ~]# redis-trib.rb fix 192.168.207.251:7001
>>> Performing Cluster Check (using node 192.168.207.251:7001)
M: 3d1808df8a82dd705a67e8fef26cb8c8482bd69a 192.168.207.251:7001
   slots:0-5459 (5460 slots) master
   1 additional replica(s)
S: 6fa8513ee71e0153e2e746fef2a3b1ce08348766 192.168.207.159:7003
   slots: (0 slots) slave
   replicates a10cf2bdaf1c445c34d6ae447a2f79bc5d4e9c32
M: 36dc79859c644cb03e6c1b0ec4a2d8bf56cfa39c 192.168.207.160:7001
   slots:10923-16383 (5461 slots) master
   1 additional replica(s)
S: d057d13bc64b9be9f68c0291700965b3751127c0 192.168.207.159:7005
   slots: (0 slots) slave
   replicates 36dc79859c644cb03e6c1b0ec4a2d8bf56cfa39c
M: a10cf2bdaf1c445c34d6ae447a2f79bc5d4e9c32 192.168.207.160:7002
   slots:5461-10922 (5462 slots) master
   1 additional replica(s)
S: a84cb46e811925ee48f0ff8c25e753be752fc45c 192.168.207.159:7004
   slots: (0 slots) slave
   replicates 3d1808df8a82dd705a67e8fef26cb8c8482bd69a
[ERR] Nodes don't agree about configuration!
>>> Check for open slots...
>>> Check slots coverage...
[ERR] Not all 16384 slots are covered by nodes.
>>> Fixing slots coverage...
List of not covered slots: 5460
Slot 5460 has keys in 0 nodes: 
The folowing uncovered slots have no keys across the cluster:
5460
Fix these slots by covering with a random node? (type 'yes' to accept): yes
>>> Covering slot 6829 with 192.168.207.251:7001
/usr/local/ruby/lib/ruby/gems/2.2.0/gems/redis-3.2.2/lib/redis/client.rb:114:in `call': ERR Slot 6829 is already busy (Redis::CommandError)
        from /usr/local/ruby/lib/ruby/gems/2.2.0/gems/redis-3.2.2/lib/redis.rb:2646:in `block in method_missing'
        from /usr/local/ruby/lib/ruby/gems/2.2.0/gems/redis-3.2.2/lib/redis.rb:57:in `block in synchronize'
        from /usr/local/ruby/lib/ruby/2.2.0/monitor.rb:211:in `mon_synchronize'
        from /usr/local/ruby/lib/ruby/gems/2.2.0/gems/redis-3.2.2/lib/redis.rb:57:in `synchronize'
        from /usr/local/ruby/lib/ruby/gems/2.2.0/gems/redis-3.2.2/lib/redis.rb:2645:in `method_missing'
        from /usr/local/bin/redis-trib.rb:462:in `block in fix_slots_coverage'
        from /usr/local/bin/redis-trib.rb:459:in `each'
        from /usr/local/bin/redis-trib.rb:459:in `fix_slots_coverage'
        from /usr/local/bin/redis-trib.rb:398:in `check_slots_coverage'
        from /usr/local/bin/redis-trib.rb:361:in `check_cluster'
        from /usr/local/bin/redis-trib.rb:1139:in `fix_cluster_cmd'
        from /usr/local/bin/redis-trib.rb:1695:in `<main>'

The fix failed: slot 6829 is reported as already busy. This means at least one node's local configuration still marks the slot as assigned, so the fix cannot re-cover it.

Log in to the cluster and run CLUSTER DELSLOTS 6829 (this asks the node to forget that the given hash slot is assigned to a master, removing it from the node's local slot table).

[root@admin ~]# redis-cli -c -h 192.168.207.251 -p 7001
192.168.207.251:7001> CLUSTER DELSLOTS 6829
OK
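One caveat worth noting: CLUSTER DELSLOTS only changes the slot table of the node you send it to. If a different master is the one still claiming the busy slot, the command may need to be run against that node instead. CLUSTER NODES shows which master advertises which slot ranges; a quick way to check (my own sketch, not from the original session):

[root@admin ~]# redis-cli -h 192.168.207.251 -p 7001 cluster nodes | grep master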

Now run redis-trib.rb fix again to repair the cluster.

[root@admin ~]# redis-trib.rb fix 192.168.207.251:7001
>>> Performing Cluster Check (using node 192.168.207.251:7001)
M: 3d1808df8a82dd705a67e8fef26cb8c8482bd69a 192.168.207.251:7001
   slots:0-6829 (6829 slots) master
   1 additional replica(s)
S: 6fa8513ee71e0153e2e746fef2a3b1ce08348766 192.168.207.159:7003
   slots: (0 slots) slave
   replicates a10cf2bdaf1c445c34d6ae447a2f79bc5d4e9c32
M: 36dc79859c644cb03e6c1b0ec4a2d8bf56cfa39c 192.168.207.160:7001
   slots:10923-16383 (5461 slots) master
   1 additional replica(s)
S: d057d13bc64b9be9f68c0291700965b3751127c0 192.168.207.159:7005
   slots: (0 slots) slave
   replicates 36dc79859c644cb03e6c1b0ec4a2d8bf56cfa39c
M: a10cf2bdaf1c445c34d6ae447a2f79bc5d4e9c32 192.168.207.160:7002
   slots:5461-10922 (5462 slots) master
   1 additional replica(s)
S: a84cb46e811925ee48f0ff8c25e753be752fc45c 192.168.207.159:7004
   slots: (0 slots) slave
   replicates 3d1808df8a82dd705a67e8fef26cb8c8482bd69a
[ERR] Nodes don't agree about configuration!
>>> Check for open slots...
>>> Check slots coverage...
[ERR] Not all 16384 slots are covered by nodes.
>>> Fixing slots coverage...
List of not covered slots: 6829
Slot 5460 has keys in 0 nodes: 
The folowing uncovered slots have no keys across the cluster:
5460
Fix these slots by covering with a random node? (type 'yes' to accept): yes
>>> Covering slot 6829 with 192.168.207.251:7001

Check the cluster again (running fix once more also performs the full check):

[root@admin ~]# redis-trib.rb fix 192.168.207.251:7001
>>> Performing Cluster Check (using node 192.168.207.251:7001)
M: 3d1808df8a82dd705a67e8fef26cb8c8482bd69a 192.168.207.251:7001
   slots:0-6829  (6829 slots) master
   1 additional replica(s)
S: 6fa8513ee71e0153e2e746fef2a3b1ce08348766 192.168.207.159:7003
   slots: (0 slots) slave
   replicates a10cf2bdaf1c445c34d6ae447a2f79bc5d4e9c32
M: 36dc79859c644cb03e6c1b0ec4a2d8bf56cfa39c 192.168.207.160:7001
   slots:10923-16383 (5461 slots) master
   1 additional replica(s)
S: d057d13bc64b9be9f68c0291700965b3751127c0 192.168.207.159:7005
   slots: (0 slots) slave
   replicates 36dc79859c644cb03e6c1b0ec4a2d8bf56cfa39c
M: a10cf2bdaf1c445c34d6ae447a2f79bc5d4e9c32 192.168.207.160:7002
   slots:5460-10922 (5463 slots) master
   1 additional replica(s)
S: a84cb46e811925ee48f0ff8c25e753be752fc45c 192.168.207.159:7004
   slots: (0 slots) slave
   replicates 3d1808df8a82dd705a67e8fef26cb8c8482bd69a
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.

All 16384 hash slots are now fully assigned.
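CLUSTER INFO confirms the same thing at a glance: a healthy cluster reports cluster_state:ok with all 16384 slots assigned. The output below is abridged to the relevant fields:

[root@admin ~]# redis-cli -h 192.168.207.251 -p 7001 cluster info
cluster_state:ok
cluster_slots_assigned:16384
cluster_slots_ok:16384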

Now log back in to the cluster and test writing a key:

[root@admin ~]# redis-cli -c -h 192.168.207.251 -p 7001
192.168.207.251:7001> set testfxkj 6666
OK

The key was written successfully.
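As an extra smoke test (my own addition; the probe:$i key names are made up), a short loop writes keys that spread across many different slots. With -c, redis-cli follows MOVED redirections, so if coverage is truly complete every write returns OK:

[root@admin ~]# for i in $(seq 1 20); do redis-cli -c -h 192.168.207.251 -p 7001 set "probe:$i" "$i"; done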

And the Tomcat logs on the application server stopped flooding with CLUSTERDOWN Hash slot not served errors.
