Solving the proxy problem when migrating a project into a Kubernetes cluster
As Kubernetes matures, more and more companies choose to manage their projects with Kubernetes clusters. New projects are straightforward: you can size a cluster appropriately and build from scratch. Migrating an existing project into a Kubernetes cluster, however, requires weighing many factors, since the service cannot be interrupted for long.
Problem background
While migrating a project into a Kubernetes cluster recently, I ran into an interesting problem. The project's Dubbo version was too old to register with ZooKeeper, so the developers had to upgrade Dubbo before those services could be packaged into images; the Node.js services were therefore migrated into the Kubernetes cluster first. Because only part of the business was moving into the cluster, a layer of Nginx had to sit in front of Traefik (Nginx is the entry point for the legacy services and reverse-proxies the backend microservices; the Alibaba Cloud SLB points at Nginx, and once everything has been migrated the SLB will point at Traefik instead). This is a two-layer proxy architecture: SLB --> Nginx --> Traefik --> Service.
Diagram: SLB --> Nginx --> Traefik --> Service (image omitted)
Candidate solutions:
- Route the migrated services through NodePort: Nginx --> NodePort. Exposing every application directly via NodePort is hard to manage: with, say, 10,000 machines you cannot give every service its own NodePort, the ports have to be planned by hand, and every node ends up exposing every port. It simply does not scale.
- Route the migrated services through ClusterIP: Nginx --> Traefik --> Service. This is the reasonable approach.
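To make the contrast concrete, here is a minimal sketch of the two Service styles for a hypothetical app (the names and the nodePort value are illustrative, not taken from the migration itself):

```yaml
# Option 1: NodePort -- every cluster node opens this port;
# port numbers must be planned globally across all services
apiVersion: v1
kind: Service
metadata:
  name: demo-nodeport        # hypothetical name
spec:
  type: NodePort
  selector:
    app: demo
  ports:
    - port: 80
      targetPort: 80
      nodePort: 30080        # must be unique cluster-wide (default range 30000-32767)
---
# Option 2: ClusterIP (the default) -- only reachable inside the cluster,
# so an ingress proxy such as Traefik fronts it
apiVersion: v1
kind: Service
metadata:
  name: demo-clusterip       # hypothetical name
spec:
  selector:
    app: demo
  ports:
    - port: 80
      targetPort: 80
```

With ClusterIP, only Traefik's own entry points need to be reachable from Nginx; the per-service port planning problem disappears.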
Working through the problem
I can hardly demonstrate on the production environment in a blog post, so I will use virtual machines instead; the only real difference between the VMs and the production machines is the network environment.
Plan
- Deploy the k8s cluster
- Deploy Nginx
- Deploy Traefik
- Deploy a test application
- End-to-end testing
Deploy the k8s cluster
Follow the deployment method from my earlier post: https://www.cnblogs.com/zisefeizhu/p/12505117.html
Deploy Nginx
Download the required components
```
# hostname -I
20.0.0.101
# cat /etc/redhat-release
CentOS Linux release 7.6.1810 (Core)
# uname -a
Linux fuxi-node02-101 4.4.186-1.el7.elrepo.x86_64 #1 SMP Sun Jul 21 04:06:52 EDT 2019 x86_64 x86_64 x86_64 GNU/Linux
# wget http://nginx.org/download/nginx-1.10.2.tar.gz
# wget http://www.openssl.org/source/openssl-fips-2.0.10.tar.gz
# wget http://zlib.net/zlib-1.2.11.tar.gz
# wget https://ftp.pcre.org/pub/pcre/pcre-8.40.tar.gz
# yum install gcc-c++
```
Configure, compile, and install
```
# tar zxvf openssl-fips-2.0.10.tar.gz
# cd openssl-fips-2.0.10/
# ./config && make && make install
# cd ..
# tar zxvf pcre-8.40.tar.gz
# cd pcre-8.40/
# ./configure && make && make install
# tar zxvf zlib-1.2.11.tar.gz
# cd zlib-1.2.11/
# ./configure && make && make install
# tar zxvf nginx-1.10.2.tar.gz
# cd nginx-1.10.2/
# ./configure --with-http_stub_status_module --prefix=/opt/nginx
# make && make install
```
Start Nginx
```
# pwd
/opt/nginx
# ll
total 4
drwx------ 2 nobody root    6 Apr 22 11:30 client_body_temp
drwxr-xr-x 2 root   root 4096 Apr 22 12:53 conf
drwx------ 2 nobody root    6 Apr 22 11:30 fastcgi_temp
drwxr-xr-x 2 root   root   40 Apr 22 11:29 html
drwxr-xr-x 2 root   root   41 Apr 22 14:24 logs
drwx------ 2 nobody root    6 Apr 22 11:30 proxy_temp
drwxr-xr-x 2 root   root   19 Apr 22 11:29 sbin
drwx------ 2 nobody root    6 Apr 22 11:30 scgi_temp
drwx------ 2 nobody root    6 Apr 22 11:30 uwsgi_temp
# sbin/nginx
```
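Before wiring anything else up, it is worth confirming that the freshly built Nginx actually answers; a quick sketch, assuming the /opt/nginx prefix used above:

```
# sbin/nginx -t                          # validate the configuration
# curl -sI http://127.0.0.1/ | head -1   # the default page should answer with HTTP/1.1 200 OK
```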
Deploy Traefik
https://www.cnblogs.com/zisefeizhu/p/12692979.html
Environment check
```
# kubectl get pods,svc -A | grep traefik
kube-system   pod/traefik-ingress-controller-z5qd7   1/1   Running   0   136m
kube-system   service/traefik   ClusterIP   10.68.251.132   <none>   80/TCP,443/TCP,8080/TCP   4h14m
```
Open the Traefik dashboard in a browser (screenshot omitted).
Deploy the application
The test application here uses the containous/whoami image.
Deploying the test application
```
# cat whoami.yaml
##########################################################################
#Author:        zisefeizhu
#QQ:            2********0
#Date:          2020-04-22
#FileName:      whoami.yaml
#URL:           https://www.cnblogs.com/zisefeizhu/
#Description:   The test script
#Copyright (C): 2020 All rights reserved
##########################################################################
apiVersion: v1
kind: Service
metadata:
  name: whoami
spec:
  ports:
    - protocol: TCP
      name: web
      port: 80
  selector:
    app: whoami
---
kind: Deployment
apiVersion: apps/v1
metadata:
  name: whoami
  labels:
    app: whoami
spec:
  replicas: 2
  selector:
    matchLabels:
      app: whoami
  template:
    metadata:
      labels:
        app: whoami
    spec:
      containers:
        - name: whoami
          image: containous/whoami
          ports:
            - name: web
              containerPort: 80
# kubectl get svc,pod
NAME             TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE
service/whoami   ClusterIP   10.68.109.151   <none>        80/TCP    3h30m
NAME                         READY   STATUS    RESTARTS   AGE
pod/whoami-bd6b677dc-jvqc2   1/1     Running   0          3h30m
pod/whoami-bd6b677dc-lvcxp   1/1     Running   0          3h30m
```
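Applying the manifest and checking that the Service has picked up endpoints is a useful sanity check before any proxy gets involved (commands assume the whoami.yaml above):

```
# kubectl apply -f whoami.yaml
# kubectl get endpoints whoami   # should list two pod IPs; an empty list means the selector does not match the pod labels
```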
End-to-end testing
Since the chosen solution is nginx --> traefik --> service, the verification proceeds in three steps:
- traefik --> service
- nginx --> traefik
- nginx --> service
traefik --> service
The resource manifest that has Traefik proxy the test application:
```
# cat traefik-whoami.yaml
##########################################################################
#Author:        zisefeizhu
#QQ:            2********0
#Date:          2020-04-22
#FileName:      traefik-whoami.yaml
#URL:           https://www.cnblogs.com/zisefeizhu/
#Description:   The test script
#Copyright (C): 2020 All rights reserved
##########################################################################
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
  name: simpleingressroute
spec:
  entryPoints:
    - web
  routes:
    - match: Host(`who.linux.com`) && PathPrefix(`/notls`)
      kind: Rule
      services:
        - name: whoami
          port: 80
```
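Before putting Nginx into the path, the route can be exercised directly against the Traefik node by forcing the Host header, so no hosts entry is needed yet (20.0.0.202 is the Traefik node in this setup):

```
# kubectl apply -f traefik-whoami.yaml
# curl -H 'Host: who.linux.com' http://20.0.0.202/notls
```

If Traefik is routing correctly, whoami answers with its hostname and the request headers it saw.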
Add a local hosts entry.
The Traefik dashboard confirms the route is proxied successfully (screenshot omitted).
Visit who.linux.com/notls (screenshot omitted).
nginx --> traefik
```
# cat conf/nginx.conf
user nobody;
worker_processes 4;
events {
    use epoll;
    worker_connections 2048;
}
http {
    upstream app {
        server 20.0.0.202;
    }
    server {
        listen 80;
        # server_name who2.linux.com;
        access_log logs/access.log;
        error_log logs/error.log;
        location / {
            proxy_set_header X-Real-IP $remote_addr;
            # append the client address to any existing X-Forwarded-For chain
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header Host $host;
            proxy_headers_hash_max_size 51200;
            proxy_headers_hash_bucket_size 6400;
            proxy_redirect off;
            proxy_read_timeout 600;
            proxy_connect_timeout 600;
            proxy_pass http://app;
        }
    }
}
# cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
20.0.0.202  who.linux.com    # the k8s node Traefik runs on; any cluster node would work
# curl -iL who.linux.com/notls
HTTP/1.1 200 OK
Content-Length: 388
Content-Type: text/plain; charset=utf-8
Date: Wed, 22 Apr 2020 07:33:52 GMT

Hostname: whoami-bd6b677dc-lvcxp
IP: 127.0.0.1
IP: 172.20.46.67
RemoteAddr: 172.20.177.153:58168
GET /notls HTTP/1.1
Host: who.linux.com
User-Agent: curl/7.29.0
Accept: */*
Accept-Encoding: gzip
X-Forwarded-For: 20.0.0.101
X-Forwarded-Host: who.linux.com
X-Forwarded-Port: 80
X-Forwarded-Proto: http
X-Forwarded-Server: traefik-ingress-controller-z5qd7
X-Real-Ip: 20.0.0.101
```
(The original config set X-Forwarded-For twice, once to $remote_addr and once to $proxy_add_x_forwarded_for, which would send the header twice; only the $proxy_add_x_forwarded_for form is kept here.)
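Whenever conf/nginx.conf changes, the configuration can be validated and reloaded without dropping connections:

```
# sbin/nginx -t && sbin/nginx -s reload
```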
If you are not familiar with Nginx, see this excellent post: https://www.cnblogs.com/kevingrace/p/6095027.html
nginx --> service
```
# cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
20.0.0.101  who.linux.com
# curl -iL who.linux.com/notls
HTTP/1.1 200 OK                           # response status
Server: nginx/1.10.2                      # the response now comes through Nginx
Date: Wed, 22 Apr 2020 07:27:46 GMT
Content-Type: text/plain; charset=utf-8
Content-Length: 389
Connection: keep-alive

Hostname: whoami-bd6b677dc-jvqc2
IP: 127.0.0.1
IP: 172.20.46.111
RemoteAddr: 172.20.177.153:38298
GET /notls HTTP/1.1
Host: who.linux.com
User-Agent: curl/7.29.0
Accept: */*
Accept-Encoding: gzip
X-Forwarded-For: 20.0.0.101
X-Forwarded-Host: who.linux.com
X-Forwarded-Port: 80
X-Forwarded-Proto: http
X-Forwarded-Server: traefik-ingress-controller-z5qd7
X-Real-Ip: 20.0.0.101
```
The Nginx access log:
```
# tail -f access.log
20.0.0.101 - - [22/Apr/2020:15:28:28 +0800] "GET /notls HTTP/1.1" 200 389 "-" "curl/7.29.0"
```
Browser test (screenshot omitted).
Further testing
Now shut the Traefik deployment down and test again:
```
# kubectl delete -f .
configmap "traefik-config" deleted
customresourcedefinition.apiextensions.k8s.io "ingressroutes.traefik.containo.us" deleted
customresourcedefinition.apiextensions.k8s.io "ingressroutetcps.traefik.containo.us" deleted
customresourcedefinition.apiextensions.k8s.io "middlewares.traefik.containo.us" deleted
customresourcedefinition.apiextensions.k8s.io "tlsoptions.traefik.containo.us" deleted
customresourcedefinition.apiextensions.k8s.io "traefikservices.traefik.containo.us" deleted
ingressroute.traefik.containo.us "traefik-dashboard-route" deleted
service "traefik" deleted
daemonset.apps "traefik-ingress-controller" deleted
serviceaccount "traefik-ingress-controller" deleted
clusterrole.rbac.authorization.k8s.io "traefik-ingress-controller" deleted
clusterrolebinding.rbac.authorization.k8s.io "traefik-ingress-controller" deleted
```
```
# kubectl delete -f traefik-whoami.yaml    # remove the whoami IngressRoute
ingressroute.traefik.containo.us "simpleingressroute" deleted
```
With Traefik removed the request no longer succeeds, so the test result is unambiguous: traffic for who.linux.com flows nginx --> traefik --> service.
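As a repeatable check of the whole chain, probing just the status code works well; run it from a host whose /etc/hosts points who.linux.com at Nginx (a sketch, not from the original post):

```
# curl -s -o /dev/null -w '%{http_code}\n' http://who.linux.com/notls
```

This prints 200 while the chain is intact and a 5xx from Nginx once Traefik has been removed.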
Source: https://blog.51cto.com/u_15162069/2766339