
[Repost] Fixing: failed to set bridge addr: "cni0" already has an IP address different from 10.244.2.1/24



 

When starting a pod, it kept reporting the following error:

Warning FailedCreatePodSandBox 3m18s kubelet Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "1506a90c486e2c187e21e8fb4b6888e5d331235f48eebb5cf44121cc587a6f05" network for pod "ds-d58vg": networkPlugin cni failed to set up pod "ds-d58vg_kube-system" network: failed to set bridge addr: "cni0" already has an IP address different from 10.244.2.1/24
Normal SandboxChanged 3m1s (x12 over 4m13s) kubelet Pod sandbox changed, it will be killed and re-created.
Warning FailedCreatePodSandBox 2m59s (x4 over 3m14s) kubelet (combined from similar events): Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "a8dc84257ca6f4543c223735dd44e79c1d001724a54cd20ab33e3a7596fba5c9" network for pod "ds-d58vg": networkPlugin cni failed to set up pod "ds-d58vg_kube-system" network: failed to set bridge addr: "cni0" already has an IP address different from 10.244.2.1/24

 

Check the interface configuration with ifconfig:

# ifconfig
cni0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1450
inet 10.244.0.1 netmask 255.255.255.0 broadcast 10.244.0.255
inet6 fe80::80bc:10ff:feb0:9d1b prefixlen 64 scopeid 0x20<link>
ether 82:bc:10:b0:9d:1b txqueuelen 1000 (Ethernet)
RX packets 1478990 bytes 119510314 (113.9 MiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 1486862 bytes 136242849 (129.9 MiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

...

flannel.1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1450
inet 10.244.0.0 netmask 255.255.255.255 broadcast 0.0.0.0
inet6 fe80::605e:12ff:feb8:7ce3 prefixlen 64 scopeid 0x20<link>
ether 62:5e:12:b8:7c:e3 txqueuelen 0 (Ethernet)
RX packets 55074 bytes 9896264 (9.4 MiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 57738 bytes 5642813 (5.3 MiB)
TX errors 0 dropped 10 overruns 0 carrier 0 collisions 0

 

Check the flannel subnet configuration:

# cat /run/flannel/subnet.env
FLANNEL_NETWORK=10.244.0.0/16
FLANNEL_SUBNET=10.244.0.1/24
FLANNEL_MTU=1450
FLANNEL_IPMASQ=true
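The root cause is exactly what the error message says: flannel assigned the node the subnet recorded in FLANNEL_SUBNET, but the cni0 bridge kept an address from an earlier subnet. Note that the error mentions 10.244.2.1/24, i.e. node2's expected subnet, so the comparison must be done on the affected node. A minimal sketch of that comparison, assuming flannel's standard /run/flannel/subnet.env layout (the helper name is mine, not from the original post):

```shell
#!/bin/sh
# Compare the cni0 bridge address with the subnet flannel assigned to
# this node. Returns 0 when the bridge IP equals the gateway address in
# FLANNEL_SUBNET (e.g. 10.244.2.1 for 10.244.2.1/24), 1 otherwise.
bridge_matches_subnet() {
    bridge_ip=$1          # e.g. 10.244.0.1
    flannel_subnet=$2     # e.g. 10.244.2.1/24 (FLANNEL_SUBNET)
    [ "$bridge_ip" = "${flannel_subnet%/*}" ]   # strip the prefix length
}

# On a live node you would feed in the real values, e.g.:
#   . /run/flannel/subnet.env
#   bridge_ip=$(ip -4 -o addr show cni0 | awk '{print $4}' | cut -d/ -f1)
#   bridge_matches_subnet "$bridge_ip" "$FLANNEL_SUBNET" || echo "cni0/flannel mismatch"
bridge_matches_subnet 10.244.0.1 "10.244.2.1/24" || echo "cni0/flannel mismatch"
# prints: cni0/flannel mismatch
```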

 

One option is to delete the cni0 bridge directly:

# ifconfig cni0 down
# ip link delete cni0
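A slightly safer sketch of the same teardown wraps the commands in a dry-run guard and also bounces the node's flannel pod, so the CNI plugin recreates cni0 from the current FLANNEL_SUBNET. The details here are assumptions to adapt: the flannel pod label (app=flannel) and the use of hostname as the node name depend on your cluster.

```shell
#!/bin/sh
# Dry-run by default: prints the commands instead of executing them.
# Set DRY_RUN=0 to actually tear down cni0 on this node.
DRY_RUN=${DRY_RUN:-1}

run() {
    if [ "$DRY_RUN" = "1" ]; then
        echo "would run: $*"
    else
        "$@"
    fi
}

run ifconfig cni0 down       # take the stale bridge down
run ip link delete cni0      # remove it entirely
# Restart this node's flannel pod so CNI recreates cni0 with the subnet
# currently recorded in /run/flannel/subnet.env (label is an assumption).
run kubectl -n kube-system delete pod -l app=flannel \
    --field-selector "spec.nodeName=$(hostname)"
```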

 

After doing this, the original error goes away and the pod runs normally, but the DNS pods get knocked over:

# kubectl get po -o wide -n kube-system
NAME                      READY   STATUS             RESTARTS       AGE   IP             NODE     NOMINATED NODE   READINESS GATES
coredns-6d8c4cb4d-7lswb   0/1     CrashLoopBackOff   9 (116s ago)   22h   10.244.0.3     master   <none>           <none>
coredns-6d8c4cb4d-84z48   0/1     CrashLoopBackOff   9 (2m6s ago)   22h   10.244.0.2     master   <none>           <none>
ds-4cqxm                  1/1     Running            0              33m   10.244.0.4     master   <none>           <none>
ds-d58vg                  1/1     Running            0              33m   10.244.2.185   node2    <none>           <none>
ds-sjxwn                  1/1     Running            0              33m   10.244.1.48    node1    <none>           <none>

 

Inspecting one of the coredns pods at this point:

# kubectl describe po coredns-6d8c4cb4d-84z48 -n kube-system
Name: coredns-6d8c4cb4d-84z48
Namespace: kube-system
Priority: 2000000000
......

Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning Unhealthy 28m (x5 over 29m) kubelet Liveness probe failed: Get "http://10.244.0.2:8080/health": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
Normal Killing 28m kubelet Container coredns failed liveness probe, will be restarted
Normal Pulled 28m (x2 over 22h) kubelet Container image "registry.aliyuncs.com/google_containers/coredns:v1.8.6" already present on machine
Normal Created 28m (x2 over 22h) kubelet Created container coredns
Normal Started 28m (x2 over 22h) kubelet Started container coredns
Warning BackOff 9m29s (x27 over 16m) kubelet Back-off restarting failed container
Warning Unhealthy 4m32s (x141 over 29m) kubelet Readiness probe failed: Get "http://10.244.0.2:8181/ready": context deadline exceeded (Client.Timeout exceeded while awaiting headers)

 

So another fix was needed. Deleting the earlier pods did not help; the DNS pods stayed broken. In the end, deleting the DNS pods and letting them be recreated automatically resolved the problem:

# kubectl delete pod coredns-6d8c4cb4d-7lswb -n kube-system
pod "coredns-6d8c4cb4d-7lswb" deleted
# kubectl delete pod coredns-6d8c4cb4d-84z48 -n kube-system
pod "coredns-6d8c4cb4d-84z48" deleted

# kubectl get pod -n kube-system -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
coredns-6d8c4cb4d-8xghq 1/1 Running 0 3m48s 10.244.2.186 node2 <none> <none>
coredns-6d8c4cb4d-q65vq 1/1 Running 0 3m48s 10.244.1.49 node1 <none> <none>
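Once the new coredns pods are Running, it is worth confirming that cluster DNS actually resolves. A sketch of such a check (the test-pod name, busybox image, and helper function are illustrative, not from the original post):

```shell
#!/bin/sh
# Hypothetical helper: decide from nslookup output whether a name resolved.
# busybox nslookup prints a "Name:" line on success.
resolved() {
    printf '%s\n' "$1" | grep -q '^Name:'
}

# Live usage (illustrative): run nslookup from a throwaway pod and check it.
#   out=$(kubectl run dns-test --rm -i --restart=Never --image=busybox:1.28 -- \
#         nslookup kubernetes.default 2>/dev/null)
#   resolved "$out" && echo "DNS OK" || echo "DNS still broken"
resolved "Server:    10.96.0.10
Address 1: 10.96.0.10

Name:      kubernetes.default
Address 1: 10.96.0.1" && echo "DNS OK"
# prints: DNS OK
```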

 

Original article: https://blog.csdn.net/red_sky_blue/article/details/123401541

Source: https://www.cnblogs.com/leozhanggg/p/16241951.html