
Helm: Hands-On Practice


A quick hands-on introduction to Helm, with real results

1 Concepts

This guide covers the basics of using Helm to manage software packages on a Kubernetes cluster.

Helm has several important concepts:
    1. helm: a command-line client used to create, package, publish, and manage Kubernetes application charts.
    2. Chart: an application description; a collection of files describing a set of related Kubernetes resources.
    3. Release: a deployment instance based on a chart. When Helm runs a chart it produces a corresponding release; the release consists of the real resource objects created in the Kubernetes cluster. A chart can usually be installed into the same cluster multiple times; each install creates a new release, and every release has its own release name.
    4. Repository: essentially a web server that hosts a collection of chart packages for users to download, plus an index file listing the charts in that repository. Helm can manage multiple repositories at the same time.

With these concepts in place, Helm can be described like this:

Helm installs charts into Kubernetes, creating a new release for each installation. To find new charts, you search Helm chart repositories.
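As a quick illustration of these terms (the repository URL and chart below are only an assumed example, not something this guide installs later), the same chart can be installed twice into one cluster, producing two independent releases:

# add a repository, find a chart, then install it twice under different release names
helm repo add bitnami https://charts.bitnami.com/bitnami
helm search repo nginx
helm install web1 bitnami/nginx    # creates release "web1"
helm install web2 bitnami/nginx    # creates release "web2" from the same chart
helm list                          # lists both releases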

1.1 Installation Steps

Download the current latest version of Helm. Newer versions (V3) removed Tiller; the V3 and V2 architectures compare as follows:

In the V2 architecture, Tiller runs inside the Kubernetes cluster, and requests from the Helm client to Tiller must pass RBAC authentication. In V3, Helm connects to kube-apiserver directly through kubeconfig, so users no longer need to configure RBAC permissions for a server-side component.
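A minimal sketch of what this means in practice (the kubeconfig path below is just the assumed default): Helm v3 talks to the cluster with whatever credentials kubectl already uses, and nothing needs to be deployed on the cluster side.

# Helm v3 reads the same kubeconfig as kubectl; no Tiller is installed in the cluster
export KUBECONFIG=$HOME/.kube/config   # default path, shown only for illustration
kubectl cluster-info                   # verify the kubeconfig works
helm list --all-namespaces             # Helm uses the same connection and permissions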

curl -SLO https://get.helm.sh/helm-v3.4.2-linux-amd64.tar.gz   # or download from the releases page (for Linux, pick amd64): https://github.com/helm/helm/releases
tar -zxvf helm-v3.4.2-linux-amd64.tar.gz
mv linux-amd64/helm /usr/local/bin/helm
helm version   # verify the installation succeeded

After a successful installation, add a repository:
# add the Microsoft Azure mirror repo
helm repo add stable http://mirror.azure.cn/kubernetes/charts
# add the Aliyun repo
helm repo add aliyun https://kubernetes.oss-cn-hangzhou.aliyuncs.com/charts

Update the repositories:

helm repo update

List the configured repositories:

helm repo list

Remove a repository:

helm repo remove aliyun

Commonly used helm commands for working with charts:

# create a new chart:
$ helm create mychart
Created mychart/
# after editing the chart, package it into a chart archive:
$ helm package mychart
Archived mychart-0.1.0.tgz
# use helm lint to check the chart for format or content problems
$ helm lint mychart
No issues found
$ helm search repo         # list which charts are available
$ helm search repo mysql   # search for mysql-related charts
NAME                                    CHART VERSION   APP VERSION     DESCRIPTION                                       
aliyun/mysql                            0.3.5                           Fast, reliable, scalable, and easy to use open-...
stable/mysql                            1.6.9           5.7.30          DEPRECATED - Fast, reliable, scalable, and easy...
stable/mysqldump                        2.6.2           2.4.1           DEPRECATED! - A Helm chart to help backup MySQL...
stable/prometheus-mysql-exporter        0.7.1           v0.11.0         DEPRECATED A Helm chart for prometheus mysql ex...
aliyun/percona                          0.3.0                           free, fully compatible, enhanced, open source d...
aliyun/percona-xtradb-cluster           0.0.2           5.7.19          free, fully compatible, enhanced, open source d...
stable/percona                          1.2.3           5.7.26          DEPRECATED - free, fully compatible, enhanced, ...
stable/percona-xtradb-cluster           1.0.8           5.7.19          DEPRECATED - free, fully compatible, enhanced, ...
stable/phpmyadmin                       4.3.5           5.0.1           DEPRECATED phpMyAdmin is an mysql administratio...
aliyun/gcloud-sqlproxy                  0.2.3                           Google Cloud SQL Proxy                            
aliyun/mariadb                          2.1.6           10.1.31         Fast, reliable, scalable, and easy to use open-...
stable/gcloud-sqlproxy                  0.6.1           1.11            DEPRECATED Google Cloud SQL Proxy                 
stable/mariadb                          7.3.14          10.3.22         DEPRECATED Fast, reliable, scalable, and easy t...

1.2 Creating a Default Chart

$ helm create vpc-client   # create the vpc-client chart

$ helm install vpcins vpc-client    # install vpc-client; release name: vpcins, chart: vpc-client
NAME: vpcins
LAST DEPLOYED: Wed Jan  6 14:44:13 2021
NAMESPACE: default
STATUS: deployed
REVISION: 1
NOTES:
1. Get the application URL by running these commands:
  export POD_NAME=$(kubectl get pods --namespace default -l "app.kubernetes.io/name=vpc-client,app.kubernetes.io/instance=vpcins" -o jsonpath="{.items[0].metadata.name}")
  export CONTAINER_PORT=$(kubectl get pod --namespace default $POD_NAME -o jsonpath="{.spec.containers[0].ports[0].containerPort}")
  echo "Visit http://127.0.0.1:8080 to use your application"
  kubectl --namespace default port-forward $POD_NAME 8080:$CONTAINER_PORT

$ helm status vpcins   # track the status of the release

$ helm list
NAME    NAMESPACE       REVISION        UPDATED                                 STATUS          CHART                   APP VERSION
vpcins  default         1               2021-01-06 14:44:13.378954443 +0800 CST deployed        vpc-client-0.1.0        1.16.0     

$ helm uninstall vpcins
release "vpcins" uninstalled

$ helm list
NAME    NAMESPACE       REVISION        UPDATED STATUS  CHART   APP VERSION

The file structure of the generated chart directory is as follows:

├── charts                    # dependent charts
├── Chart.yaml                # the chart's own version and metadata
├── templates                 # template directory
│   ├── deployment.yaml
│   ├── _helpers.tpl          # helper templates used when building Kubernetes object configuration
│   ├── hpa.yaml
│   ├── ingress.yaml
│   ├── NOTES.txt             # usage notes printed by helm
│   ├── serviceaccount.yaml
│   ├── service.yaml          # kubernetes service
│   └── tests
└── values.yaml

The templates directory holds the template files. Template files read values defined in values.yaml via {{ .Values.xxxxxx }}, for example {{ .Values.replicas }} or {{ .Values.image }}.
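For instance, a minimal illustrative pairing (the field names replicas and image here are placeholders, not fields from the generated chart):

# values.yaml (illustrative)
replicas: 2
image: nginx:1.19

# templates/deployment.yaml fragment that reads those values
spec:
  replicas: {{ .Values.replicas }}
  template:
    spec:
      containers:
        - name: app
          image: {{ .Values.image }}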

Also, the command to deploy the custom chart is helm install vpcins vpc-client. The first argument to helm install is the release name, which you can choose freely; templates can read it with {{ .Release.Name }}. The second argument, vpc-client, is the directory name of the custom chart.

Below is the content of deployment.yaml:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ include "vpc-client.fullname" . }}
  labels:
    {{- include "vpc-client.labels" . | nindent 4 }}
spec:
  {{- if not .Values.autoscaling.enabled }}
  replicas: {{ .Values.replicaCount }}
  {{- end }}
  selector:
    matchLabels:
      {{- include "vpc-client.selectorLabels" . | nindent 6 }}
  template:
    metadata:
      {{- with .Values.podAnnotations }}
      annotations:
        {{- toYaml . | nindent 8 }}
      {{- end }}
      labels:
        {{- include "vpc-client.selectorLabels" . | nindent 8 }}
    spec:
      {{- with .Values.imagePullSecrets }}
      imagePullSecrets:
        {{- toYaml . | nindent 8 }}
      {{- end }}
      serviceAccountName: {{ include "vpc-client.serviceAccountName" . }}
      securityContext:
        {{- toYaml .Values.podSecurityContext | nindent 8 }}
      containers:
        - name: {{ .Chart.Name }}
          securityContext:
            {{- toYaml .Values.securityContext | nindent 12 }}
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag | default .Chart.AppVersion }}"
          imagePullPolicy: {{ .Values.image.pullPolicy }}
          ports:
            - name: http
              containerPort: 80
              protocol: TCP
          livenessProbe:
            httpGet:
              path: /
              port: http
          readinessProbe:
            httpGet:
              path: /
              port: http
          resources:
            {{- toYaml .Values.resources | nindent 12 }}
      {{- with .Values.nodeSelector }}
      nodeSelector:
        {{- toYaml . | nindent 8 }}
      {{- end }}
      {{- with .Values.affinity }}
      affinity:
        {{- toYaml . | nindent 8 }}
      {{- end }}
      {{- with .Values.tolerations }}
      tolerations:
        {{- toYaml . | nindent 8 }}
      {{- end }}

The above is the Deployment's YAML configuration. The parts wrapped in double curly braces are Go templates, and the Values they reference are defined in the values.yaml file.

# values.yaml content
# Default values for vpc-client.
# This is a YAML-formatted file.
# Declare variables to be passed into your templates.

replicaCount: 1

image:
  repository: nginx
  pullPolicy: IfNotPresent
  # Overrides the image tag whose default is the chart appVersion.
  tag: ""

For example, the container image defined in deployment.yaml is image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}".

The two variable values above are defaults generated automatically when the chart was created; change the default image address and tag so they point at your own image, as shown below.
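You can either edit values.yaml directly or override the two values at install time with --set (the registry address and tag below are placeholders):

helm install vpcins vpc-client \
  --set image.repository=registry.example.com/my-app \
  --set image.tag=1.0.0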

1.3 Modifying the YAML

Unless there are special requirements, the places that usually need changes are image, service, healthCheck, and persistentVolume.mountPaths. The values.yaml file below can serve as a reference:

# Default values for mod-chart.
# This is a YAML-formatted file.
# Declare variables to be passed into your templates.

## Global Docker image parameters
## Please, note that this will override the image parameters, including dependencies, configured to use the global value
## Current available global Docker image parameters: imageRegistry and imagePullSecrets
##
global:  # when set, this overrides the default image registry configured below
  imageRegistry: ""
  imagePullSecrets: []
#     - myRegistryKeySecretName

statefulset:
  enabled: false

## String to partially override fullname template (will maintain the release name)
##
nameOverride: ""

## String to fully override fullname template
##
fullnameOverride: ""

## By default deploymentStrategy is set to rollingUpdate with maxSurge of 25% and maxUnavailable of 25% .
## You can change type to `Recreate` or can uncomment `rollingUpdate` specification and adjust them to your usage.
deploymentStrategy: {}
  # rollingUpdate:
  #   maxSurge: 25%
  #   maxUnavailable: 25%
  # type: RollingUpdate

# number of replicas
replicaCount: 1

# container image and tag
image:
  registry: docker.io
  repository: bitnami/nginx
  tag: latest
  pullPolicy: IfNotPresent  # IfNotPresent: skip the pull if the image exists locally (less traffic); Always: always pull regardless of tag (useful when updating under an unchanged tag)
  pullSecrets: []
  #  - private-registry-key

service:
  type: ClusterIP  # usually does not need to be changed
  ingressPort: 8080
  ports:
    http:  # to expose multiple ports, duplicate this block
      port: 8080  # Service port number for client-a port.
      protocol: TCP  # Service port protocol for client-a port.

## env set
## ref: https://kubernetes.io/docs/tasks/inject-data-application/define-environment-variable-container/
env: []
#  - name: DEMO_GREETING
#    value: "Hello from the environment"
#  - name: DEMO_FAREWELL
#    value: "Such a sweet sorrow"

## command set
startCommand: []
#  - "java -Xdebug -Xnoagent -Djava.compiler=NONE"
#  - "-Xrunjdwp:transport=dt_socket,address=5005,server=y,suspend=n"
#  - "-Djava.security.egd=file:/dev/urandom"
#  - "-jar /test.jar"
#  - "-Duser.timezone=GMT+08"

## Enable configmap and add data in configmap
config:
  enabled: false
  subPath: ""
  mountPath: /conf
  data: {}
    
############################# Examples ####################################
## The following example mounts a single file at /conf/app.conf
#  enabled: true
#  mountPath: /conf/app.conf
#  subPath: app.conf    # when subPath is used, mountPath above must be the full absolute path of the file
#  data:
#    app.conf: |-
#      appname = example-chart

## The following example mounts several files under /conf/
#  enabled: true
#  mountPath: /conf    # no subPath
#  data:
#    app.conf: |-
#      appname = example-chart
#    bpp.conf: |-
#      bppname
#
## To mount several files at different paths, templates/deployment-statefulset.yaml must be modified accordingly
############################# Examples ####################################

## To use an additional secret, set enable to true and add data
## Usage is the same as above and is not repeated here
secret:
  enabled: false
  mountPath: /etc/secret-volume
  subPath: ""
  readOnly: true
  data: {} 

## liveness and readiness 
## ref: https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-probes/
healthCheck:
  enabled: true
  type: tcp  # http/tcp
  port: http  # port name or number used for the health check
  httpPath: '/'  # required when type is http
  livenessInitialDelaySeconds: 10  # initial delay in seconds
  livenessPeriodSeconds: 10  # probe interval; default 10, minimum 1
  readinessInitialDelaySeconds: 10  # initial delay in seconds
  readinessPeriodSeconds: 10   # probe interval; default 10, minimum 1
 
resources: {}
  # container resource settings
  # We usually recommend not to specify default resources and to leave this as a conscious
  # choice for the user. This also increases chances charts run on environments with little
  # resources, such as Minikube. If you do want to specify resources, uncomment the following
  # lines, adjust them as necessary, and remove the curly braces after 'resources:'.
  # limits:
  #   cpu: 100m
  #   memory: 128Mi
  # requests:
  #   cpu: 100m
  #   memory: 128Mi

## Node labels and tolerations for pod assignment
### ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#nodeselector
### ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#taints-and-tolerations-beta-feature
labels: {}
podAnnotations: {}
nodeSelector: {}
tolerations: []
affinity: {}
annotations: {}

## Enable persistence using Persistent Volume Claims
## ref: http://kubernetes.io/docs/user-guide/persistent-volumes/
##
persistentVolume:   # whether to enable persistent storage
  enabled: false
  ## If defined, storageClassName: <storageClass>
  ## If set to "-", storageClassName: "", which disables dynamic provisioning
  ## If undefined (the default) or set to null, no storageClassName spec is
  ##   set, choosing the default provisioner.  (gp2 on AWS, azure-disk on
  ##   Azure, standard on GKE, AWS & OpenStack)
  ##
  storageClass: "-"
  accessMode: ReadWriteOnce
  annotations: {}
  #   helm.sh/resource-policy: keep
  size: 1Gi  # volume size
  existingClaim: {}  # use an existing PVC
  mountPaths: []
  #  - name: data-storage
  #    mountPath: /config
  #    subPath: config  # to share one PVC across several paths use subPath, same usage as in the config example above
  #  - name: data-storage
  #    mountPath: /data
  #    subPath: data

ingress:  # whether to expose a domain or port through an nginx ingress
  enabled: false
  annotations: {}
    # kubernetes.io/ingress.class: nginx
    # kubernetes.io/tls-acme: "true"
  path: /
  hosts:
    - chart-example.local
  tls: []
  #  - secretName: chart-example-tls
  #    hosts:
  #      - chart-example.local

## Add init containers. e.g. to be used to give specific permissions for data
## Add your own init container or uncomment and modify the given example.
initContainers: []

## Prometheus Exporter / Metrics
##
metrics:
  enabled: false
  image:
    registry: docker.io
    repository: nginx/nginx-prometheus-exporter
    tag: 0.1.0
    pullPolicy: IfNotPresent
    ## Optionally specify an array of imagePullSecrets.
    ## Secrets must be manually created in the namespace.
    ## ref: https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/
    ##
    pullSecrets: []
    #   - myRegistrKeySecretName
  ## Metrics exporter pod Annotation and Labels
  podAnnotations:
    # prometheus.io/scrape: "true"
    # prometheus.io/port: "9113"
    ## Metrics exporter resource requests and limits
    ## ref: http://kubernetes.io/docs/user-guide/compute-resources/
    ##
  resources: {}

## Uncomment and modify this to run a command after starting the core container.
## ref: https://kubernetes.io/docs/concepts/containers/container-lifecycle-hooks/
lifecycle: {}
  # preStop:
  #   exec:
  #     command: ["/bin/bash","/pre-stop.sh"]
  # postStart:
  #   exec:
  #     command: ["/bin/bash","/post-start.sh"]

## Deployment additional volumes.
deployment:
  additionalVolumes: []

## init containers
## ref: https://kubernetes.io/docs/concepts/workloads/pods/init-containers/
## Add init containers. e.g. to be used to give specific permissions for data
## Add your own init container or uncomment and modify the given example.
initContainers: {}
#  - name: fmp-volume-permission
#    image: busybox
#    imagePullPolicy: IfNotPresent
#    command: ['chown','-R', '200', '/extra-data']
#    volumeMounts:
#      - name: extra-data
#        mountPath: /extra-data

## Additional containers to be added to the core pod.
additionalContainers: {}
#  - name: my-sidecar
#    image: nginx:latest
#  - name: lemonldap-ng-controller
#    image: lemonldapng/lemonldap-ng-controller:0.2.0
#    args:
#      - /lemonldap-ng-controller
#      - --alsologtostderr
#      - --configmap=$(POD_NAMESPACE)/lemonldap-ng-configuration
#    env:
#      - name: POD_NAME
#        valueFrom:
#          fieldRef:
#            fieldPath: metadata.name
#      - name: POD_NAMESPACE
#        valueFrom:
#          fieldRef:
#            fieldPath: metadata.namespace
#    volumeMounts:
#    - name: copy-portal-skins
#      mountPath: /srv/var/lib/lemonldap-ng/portal/skins
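As a sketch of how such a reference values file is typically consumed (the chart directory mod-chart and the override values below are assumptions, not taken from the text above), the usual pattern is to leave the defaults untouched and layer a small override file on top at install time:

# my-values.yaml (overrides only what differs from the defaults)
replicaCount: 2
image:
  registry: docker.io
  repository: bitnami/nginx
  tag: 1.19.6
healthCheck:
  enabled: true
  type: http
  httpPath: /healthz
persistentVolume:
  enabled: true
  size: 5Gi

# install the chart with the overrides applied on top of values.yaml
helm install myapp ./mod-chart -f my-values.yaml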

1.4 Custom chart

Let's create our first template, vpc.

Step 1: First create vpc-client/templates/vpc.yaml:

apiVersion: kubeovn.io/v1
kind: Vpc
metadata:
  name: vpc336

Step 2: Install the chart

$ helm install vpcins vpc-client
NAME: vpcins
LAST DEPLOYED: Wed Jan  6 15:45:01 2021
NAMESPACE: default
STATUS: deployed
REVISION: 1
NOTES:
1. Get the application URL by running these commands:
  export POD_NAME=$(kubectl get pods --namespace default -l "app.kubernetes.io/name=vpc-client,app.kubernetes.io/instance=vpcins" -o jsonpath="{.items[0].metadata.name}")
  export CONTAINER_PORT=$(kubectl get pod --namespace default $POD_NAME -o jsonpath="{.spec.containers[0].ports[0].containerPort}")
  echo "Visit http://127.0.0.1:8080 to use your application"
  kubectl --namespace default port-forward $POD_NAME 8080:$CONTAINER_PORT

Step 3: View the rendered templates that were loaded

$ helm get manifest vpcins
---
# Source: vpc-client/templates/vpc.yaml
apiVersion: kubeovn.io/v1
kind: Vpc
metadata:
  name: vpc336

Step 4: Delete the newly created release

$ helm uninstall vpcins

1.5 Adding a Simple Template Call

Hard-coding the name: of a resource is generally considered bad practice. Names should be unique per release, so we would like to generate the name field by inserting the release name.

Tip: because of limitations in the DNS system, the name: field is limited to 63 characters. Release names are therefore limited to 53 characters.

Make a small change to vpc.yaml:

apiVersion: kubeovn.io/v1
kind: Vpc
metadata:
  name: vpc{{.Release.Name}}

The template directive {{ .Release.Name }} injects the release name into the template. The values passed into a template can be thought of as namespaced objects, where a dot (.) separates each namespace element.

The leading dot before Release indicates that we start from the top-most namespace of the current scope (scope is discussed a bit later). So .Release.Name can be read as: "starting from the top-level namespace, find the Release object, then look inside it for an object named Name".
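Besides .Release, Helm exposes other built-in objects (such as .Chart) that can be combined in the same way; the fragment below is only an illustration, not part of the vpc-client chart:

apiVersion: kubeovn.io/v1
kind: Vpc
metadata:
  name: vpc{{ .Release.Name }}
  labels:
    helm.sh/chart: {{ .Chart.Name }}-{{ .Chart.Version }}
    app.kubernetes.io/managed-by: {{ .Release.Service }}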

Tip

If you want to test template rendering without actually installing anything, use helm install --debug --dry-run ./mychart. This sends the chart to Kubernetes for rendering, but instead of installing the chart it returns the rendered templates so you can inspect the output:

[root@k8s-master ~]# helm install --debug --dry-run vpcins ./vpc-client
install.go:172: [debug] Original chart version: ""
install.go:189: [debug] CHART PATH: /root/vpc-client

NAME: vpcins
LAST DEPLOYED: Wed Jan  6 16:40:05 2021
NAMESPACE: default
STATUS: pending-install
REVISION: 1

---
# Source: vpc-client/templates/vpc.yaml
apiVersion: kubeovn.io/v1
kind: Vpc
metadata:
  name: vpcvpcins

NOTES:
1. Get the application URL by running these commands:
  export POD_NAME=$(kubectl get pods --namespace default -l "app.kubernetes.io/name=vpc-client,app.kubernetes.io/instance=vpcins" -o jsonpath="{.items[0].metadata.name}")
  export CONTAINER_PORT=$(kubectl get pod --namespace default $POD_NAME -o jsonpath="{.spec.containers[0].ports[0].containerPort}")
  echo "Visit http://127.0.0.1:8080 to use your application"
  kubectl --namespace default port-forward $POD_NAME 8080:$CONTAINER_PORT

Using --dry-run makes it easier to test your code, but it does not guarantee that Kubernetes itself will accept the generated templates. It is best not to assume that your chart will install successfully just because a --dry-run run succeeds.
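To catch issues that --dry-run alone may miss, it can also help to run helm lint for static checks and helm template for purely client-side rendering; a short sketch:

helm lint ./vpc-client              # static checks for problems in chart structure and templates
helm template vpcins ./vpc-client   # render the manifests locally without contacting the cluster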

 

