MinIO 4*4 Cluster Fault Test
Because one of our MinIO clusters had a fault (file writes were failing), I tested the cluster's fault tolerance against the official theory.
A sizing rule
In 4*4 mode the default erasure-code stripe size is 16 (the official calculator takes the maximum by default, though it can be adjusted on the calculator page; MinIO itself picks this automatically). By that rule the cluster should tolerate one server, i.e. 4 drives, failing, so a single node going down should not make the service unavailable: with a 16-drive stripe and EC:4 parity, any 12 of the 16 shards are enough to reconstruct an object, and losing one server's 4 drives leaves exactly those 12.
The erasure-code parity count can also be set explicitly; in a typical cluster environment it is 4.
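If you want to pin the parity rather than rely on the default, MinIO reads it from the MINIO_STORAGE_CLASS_STANDARD environment variable in EC:N form; a minimal sketch, added to each minio service's environment section of the compose file below:

    environment:
      MINIO_ACCESS_KEY: minio
      MINIO_SECRET_KEY: minio123
      MINIO_STORAGE_CLASS_STANDARD: "EC:4"  # 4 parity shards per stripe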
Environment preparation
- docker-compose file
version: '3.7'
services:
  sidekick:
    image: minio/sidekick:v1.2.0
    tty: true
    ports:
      - "80:80"
    command: --health-path=/minio/health/ready --address :80 http://minio{1...4}:9000
  gateway:
    image: minio/minio:RELEASE.2022-03-26T06-49-28Z
    command: gateway s3 http://sidekick --console-address ":19000"
    environment:
      MINIO_ACCESS_KEY: minio
      MINIO_SECRET_KEY: minio123
    ports:
      - "9000:9000"
      - "19000:19000"
  minio1:
    image: minio/minio:RELEASE.2022-03-26T06-49-28Z
    volumes:
      - data1-1:/data1
      - data1-2:/data2
      - data1-3:/data3
      - data1-4:/data4
    ports:
      - "9001:9000"
      - "19001:19001"
    environment:
      MINIO_ACCESS_KEY: minio
      MINIO_SECRET_KEY: minio123
    command: server http://minio{1...4}/data{1...4} --console-address ":19001"
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:9000/minio/health/live"]
      interval: 30s
      timeout: 20s
      retries: 3
  minio2:
    image: minio/minio:RELEASE.2022-03-26T06-49-28Z
    volumes:
      - data2-1:/data1
      - data2-2:/data2
      - data2-3:/data3
      - data2-4:/data4
    ports:
      - "9002:9000"
      - "19002:19002"
    environment:
      MINIO_ACCESS_KEY: minio
      MINIO_SECRET_KEY: minio123
    command: server http://minio{1...4}/data{1...4} --console-address ":19002"
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:9000/minio/health/live"]
      interval: 30s
      timeout: 20s
      retries: 3
  minio3:
    image: minio/minio:RELEASE.2022-03-26T06-49-28Z
    volumes:
      - data3-1:/data1
      - data3-2:/data2
      - data3-3:/data3
      - data3-4:/data4
    ports:
      - "9003:9000"
      - "19003:19003"
    environment:
      MINIO_ACCESS_KEY: minio
      MINIO_SECRET_KEY: minio123
    command: server http://minio{1...4}/data{1...4} --console-address ":19003"
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:9000/minio/health/live"]
      interval: 30s
      timeout: 20s
      retries: 3
  minio4:
    image: minio/minio:RELEASE.2022-03-26T06-49-28Z
    volumes:
      - data4-1:/data1
      - data4-2:/data2
      - data4-3:/data3
      - data4-4:/data4
    ports:
      - "9004:9000"
      - "19004:19004"
    environment:
      MINIO_ACCESS_KEY: minio
      MINIO_SECRET_KEY: minio123
    command: server http://minio{1...4}/data{1...4} --console-address ":19004"
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:9000/minio/health/live"]
      interval: 30s
      timeout: 20s
      retries: 3
volumes:
  data1-1:
  data1-2:
  data1-3:
  data1-4:
  data2-1:
  data2-2:
  data2-3:
  data2-4:
  data3-1:
  data3-2:
  data3-3:
  data3-4:
  data4-1:
  data4-2:
  data4-3:
  data4-4:
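With the compose file saved, bring the stack up and check that every service is running (standard docker-compose usage):

    docker-compose up -d
    docker-compose ps   # sidekick, gateway, and the four minio nodes should all be Up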
Simple fault test
Stop one server node directly (this takes 4 drives offline), for example docker-compose stop minio1, then upload files through the gateway and the sidekick entry point to check reads and writes; afterwards restart the node and watch how the data is recovered. A sketch of the commands follows.
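A minimal sketch of that test with the mc client; mc is assumed to be installed, and the alias name test and bucket name demo are my own choices:

    # stop one node: its 4 drives go offline, quorum should still hold
    docker-compose stop minio1

    # point mc at the sidekick entry point (or the gateway on :9000) and try a write
    mc alias set test http://localhost:80 minio minio123
    mc mb test/demo
    mc cp ./somefile test/demo/

    # bring the node back; MinIO heals in the background, or trigger a scan
    docker-compose start minio1
    mc admin heal -r test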
Server node logs
The logs of minio4 show the cluster reacting to the stopped node:
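For example:

    docker-compose logs -f minio4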
Check the file recovery on minio1
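One way to watch the recovery directly, assuming the volume layout from the compose file above; object shards reappear under /data1../data4 as healing proceeds:

    docker-compose exec minio1 ls -lR /data1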
Notes
This simple test suggests that the MinIO cluster itself is reliable; the core of our original problem most likely lies in the nginx entry-point configuration, which should be adjusted later. The above simply reproduces the practical issue to verify whether the theory holds.
References
- https://github.com/minio/minio/blob/master/docs/distributed/SIZING.md
- https://min.io/product/erasure-code-calculator?number_of_servers=8&drives_per_server=16&drive_capacity=8&stripe_size=16&parity_count=4
- https://github.com/rongfengliang/minio-cluster-sidekick-learning