Kustomize: A Configuration Management Tool

TOC

Understanding Kustomize

What is Kustomize?
Kustomize is a client-side, Kubernetes-native configuration management tool, comparable to a lightweight Helm. It customizes Kubernetes objects through a kustomization file.

The Kustomize Environment

★Prerequisites★

What you need before learning Kustomize:

  • 1. Access to a Kubernetes cluster to apply manifests to
  • 2. kubectl installed (Kustomize is built into kubectl), or the standalone Kustomize tool
  • 3. Familiarity with basic Kubernetes concepts such as Deployment and Service

Installing the Kustomize Tool

The standalone Kustomize tool offers more complete functionality than the version built into kubectl.
On Linux:

curl -s "https://raw.githubusercontent.com/kubernetes-sigs/kustomize/master/hack/install_kustomize.sh" | bash
chmod +x kustomize
sudo mv kustomize /usr/local/bin/

Via Go:

go install sigs.k8s.io/kustomize/kustomize/v5@latest
sudo mv ~/go/bin/kustomize /usr/local/bin/

On macOS:

brew install kustomize

Basic Usage

A Simple Example

Let's illustrate basic Kustomize usage with a simple example.
Create a directory and place the YAML manifests in it. The file structure looks like this:

$ tree demo
demo
├── configmap.yaml
├── deployment.yaml
├── kustomization.yaml
└── service.yaml

Notes:

  • The file name kustomization.yaml is fixed;
  • kubectl apply -k path automatically looks for the kustomization.yaml file under path.
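Given those rules, a minimal kustomization.yaml for this layout only needs a resources list; the full file used in this demo (shown further below) adds more fields, but this sketch is already enough for kubectl apply -k to work:

```yaml
# demo/kustomization.yaml (minimal form)
resources:
- configmap.yaml
- deployment.yaml
- service.yaml
```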

The contents of configmap.yaml:

#configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-cm
data:
  Username: "admin"
  Password: "ePyvce5tp+84R2lA"

The contents of deployment.yaml:

# deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: oneapp
  namespace: default
spec:
  replicas: 3
  selector:
    matchLabels:
      app: oneapp
  template:
    metadata:
      labels:
        app: oneapp
    spec:
      containers:
        - name: oneapp
          image: oneapp:latest
          command:
            - ./hello
            - --port=80
            - --user=$(USERNAME)
            - --passwd=$(PASSWORD)
          env:
            - name: USERNAME
              valueFrom:
                configMapKeyRef:
                  name: app-cm
                  key: Username
            - name: PASSWORD
              valueFrom:
                configMapKeyRef:
                  name: app-cm
                  key: Password

The contents of service.yaml:

# service.yaml
apiVersion: v1
kind: Service
metadata:
  name: oneapp
  labels:
    app: oneapp
spec:
  selector:
    app: oneapp
  type: ClusterIP
  ports:
    - port: 80
      protocol: TCP
      targetPort: 80

The contents of kustomization.yaml:

# kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
metadata:
  name: demo  # for identification/documentation only
labels:     # every built resource gets the app=oneapp label
  - pairs:
      app: oneapp
namePrefix: demo-   # add a name prefix to all resources
nameSuffix: -v1    # add a name suffix to all resources
namespace: default     # set a common namespace for all resources

resources:     # base resource manifests to include; they must live in this directory
- configmap.yaml
- deployment.yaml
- service.yaml

Build and deploy the configured project:

# Inspect the built manifests, with either the kustomize tool or the kubectl built-in command
kustomize build demo/
kubectl kustomize demo/
# Deploy to the Kubernetes cluster
kubectl apply -k demo/

Generating Resources

Kustomize can generate the ConfigMaps and Secrets that hold configuration or sensitive data needed by other Kubernetes objects (such as Pods).
By declaring generators in kustomization.yaml, we can have ConfigMaps and Secrets generated for us, with no need to create separate YAML files for them.

configMapGenerator

Generating a ConfigMap from a file

Add entries to the files list of configMapGenerator. For example:

# redis.conf
cat > redis.conf << EOF
host=100.100.10.102
port=6379
EOF
# kustomization.yaml
cat > kustomization.yaml << EOF
configMapGenerator:
- name: redis-configmap
  files:
  - redis.conf
EOF

Inspect the generated ConfigMap with the kustomize tool; the output is:

$ kustomize build .
apiVersion: v1
data:
  redis.conf: |-
    host=100.100.10.102
    port=6379
kind: ConfigMap
metadata:
  name: redis-configmap-kfkkk57t22

Generating a ConfigMap from an env file

Add an entry to the envs list of configMapGenerator. For example:

# .env
cat > .env << EOF
host=100.100.10.102
port=6379
EOF
# kustomization.yaml
cat > kustomization.yaml << EOF
configMapGenerator:
- name: env-configmap
  envs:
  - .env
EOF

Inspect the generated ConfigMap with the kustomize tool; the output is:

$ kustomize build .
apiVersion: v1
data:
  host: 100.100.10.102
  port: "6379"
kind: ConfigMap
metadata:
  name: env-configmap-gtgd256796

Note: each variable in the .env file becomes a separate key in the generated ConfigMap. This differs from the previous example, which embedded a file named redis.conf (with all of its entries) as the value of a single key.
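All generated names carry a content-hash suffix (e.g. env-configmap-gtgd256796). If that is undesirable, for instance when an external tool references the ConfigMap by a fixed name, the generatorOptions field can disable it; a sketch, assuming the same .env file:

```yaml
# kustomization.yaml
configMapGenerator:
- name: env-configmap
  envs:
  - .env
generatorOptions:
  disableNameSuffixHash: true   # generated name stays exactly env-configmap
```

Note that disabling the hash also disables the automatic rollout you get when a Deployment references a generated name and its content changes, so use it deliberately.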

Generating a ConfigMap from literal key-value pairs

A ConfigMap can also be generated from literal key-value pairs by adding entries to the literals list of configMapGenerator. For example:

# kustomization.yaml
configMapGenerator:
- name: literals-configmap
  literals:
  - host=100.100.10.102
  - port=6379

Inspect the generated ConfigMap with the kustomize tool; the output is:

$ kustomize build .
apiVersion: v1
data:
  host: 100.100.10.102
  port: "6379"
kind: ConfigMap
metadata:
  name: literals-configmap-gtgd256796

secretGenerator

Generated Secrets are consumed the same way as generated ConfigMaps: reference the Secret in deployment.yaml by its secretGenerator name, and Kustomize automatically replaces that name with the generated one.
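For example, a container could read a generated Secret through environment variables; a sketch (the same pattern appears in the overlay example later in this document):

```yaml
env:
  - name: PASSWORD
    valueFrom:
      secretKeyRef:
        name: literals-secrets   # Kustomize rewrites this to the hashed name
        key: Password
```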

Generating a Secret from a file

To generate a Secret from file contents, add entries to the files list under secretGenerator. For example:

# .password
cat > .password << EOF
Username="admin"
Password="ePyvce5tp+84R2lA"
EOF
# kustomization.yaml
cat > kustomization.yaml << EOF
secretGenerator:
- name: file-secrets
  files:
  - .password
EOF

Inspect the generated Secret with the kustomize tool; the output is:

$ kustomize build .
apiVersion: v1
data:
  .password: VXNlcm5hbWU9ImFkbWluIgpQYXNzd29yZD0iZVB5dmNlNXRwKzg0UjJsQSI=
kind: Secret
metadata:
  name: file-secrets-45m74ft5th
type: Opaque

Generating a Secret from literal key-value pairs

To generate a Secret from literal key-value pairs, add entries to the literals list of secretGenerator. For example:

# kustomization.yaml
secretGenerator:
- name: literals-secrets
  literals:
  - Username="admin"
  - Password="ePyvce5tp+84R2lA"

Inspect the generated Secret with the kustomize tool; the output is:

$ kustomize build .
apiVersion: v1
data:
  Password: ZVB5dmNlNXRwKzg0UjJsQQ==
  Username: YWRtaW4=
kind: Secret
metadata:
  name: literals-secrets-fh627bbh2c
type: Opaque
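Note that the data values in a generated Secret are base64-encoded, not encrypted. You can reproduce them with the standard base64 tool (GNU coreutils flags assumed):

```shell
# encode: reproduces the Username value shown above
printf '%s' 'admin' | base64                          # YWRtaW4=
# decode: recovers the original password from the Secret data
printf '%s' 'ZVB5dmNlNXRwKzg0UjJsQQ==' | base64 -d    # ePyvce5tp+84R2lA
```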

Using generated resources in a Deployment

(ConfigMap is used as the example here; Secrets are consumed the same way.)
To use a generated ConfigMap in a Deployment, reference it by its configMapGenerator name. Kustomize automatically replaces that name with the generated one. For example:

apiVersion: apps/v1
kind: Deployment
  ......
spec:
  ......
  template:
    ......
    spec:
      volumes:
        - name: demo-cm
          configMap:
            name: redis-configmap
      containers:
        ......
        volumeMounts:
          - name: demo-cm
            mountPath: /etc/redis/redis.conf
            subPath: redis.conf

Build with the kustomize tool to verify: the resulting Deployment references the generated ConfigMap by name, as shown below:

$ kustomize build .
apiVersion: v1
data:
  redis.conf: |-
    host=100.100.10.102
    port=6379
kind: ConfigMap
metadata:
  name: redis-configmap-kfkkk57t22
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo
spec:
  replicas: 3
  selector:
    matchLabels:
      app: demo
  template:
    metadata:
      labels:
        app: demo
    spec:
      containers:
      - image: demo:latest
        name: demo
        volumeMounts:
        - mountPath: /etc/redis/redis.conf
          name: demo-cm
          subPath: redis.conf
      volumes:
      - configMap:
          name: redis-configmap-kfkkk57t22
        name: demo-cm

images

The images field updates image references in resources.
Editing the manifest directly:
Add an images field to kustomization.yaml, as shown below:

images:
  - name: myapp                  # placeholder image name (as used in the base)
    newName: docker.io/nginx    # new image address
    newTag: v2                  # new tag

Updating via the command line:
Run the kustomize command below in the project directory. You will see that kustomization.yaml is automatically updated with the content above, and the kustomize build output then uses the new image name and tag:

kustomize edit set image myapp=docker.io/nginx:v2

Note: kustomize edit set image must be run from the directory containing kustomization.yaml; it accepts neither an absolute path nor a -f flag. kustomize edit is a local file operation that modifies the kustomization.yaml in the current directory: it does not read or resolve resources, and it does not parse the whole build structure, so there is no way to point it at another path.

Advanced Usage: Environment Overlays

The previous sections showed how to manage a project's manifests with Kustomize. In practice we usually need different configurations for different environments (development, staging, production). Kustomize solves this elegantly with the concepts of base and overlay.
With overlays, multiple variants of a project live in one directory tree: the shared resources go into a base directory, and each environment overrides only the parts that vary, producing a complete set of manifests per environment.

Key syntax:
The key field in this advanced usage is patches. It applies targeted modifications to existing resources (such as Deployment or Service), acting as a patch rather than a full redefinition of the resource file.
The steps to customize:

  • Create overlays to separate the environments. The original manifests can be extracted into a base; each environment layer then defines only the values it overrides.
  • Give each environment layer its own kustomization.yaml.
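Besides file-based strategic-merge patches like the ones used below, the patches field also accepts inline JSON6902 operations against a target; a sketch with a hypothetical replicas override:

```yaml
patches:
  - target:
      kind: Deployment
      name: oneapp
    patch: |-
      - op: replace
        path: /spec/replicas
        value: 5
```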

Overlay Example

The directory hierarchy:

$ tree demo-overlays
demo-overlays
├── base
│   ├── deployment.yaml
│   ├── kustomization.yaml
│   └── service.yaml
└── overlays
    ├── production
    │   ├── deployment.yaml
    │   └── kustomization.yaml
    └── staging
        ├── deployment.yaml
        └── kustomization.yaml

5 directories, 7 files

Each directory under overlays holds the manifest configuration for one environment: the values that vary between environments, such as replicas, strategy, and resources.

Base configuration

First, configure the base kustomization.yaml file, as shown below:

# base/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
metadata:
  name: demo-overlays
namespace: default

resources:
- deployment.yaml
- service.yaml

configMapGenerator:
- literals:
  - app.key=afe105eb4dc2dac3d020453aeb2e6d8f
  name: demo-cm
secretGenerator:
- literals:
  - Username="admin"
  - Password="ePyvce5tp+84R2lA"
  name: demo-secrets

The base deployment.yaml file, as shown below:

# base/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: oneapp
spec:
  replicas: 1
  progressDeadlineSeconds: 600
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app: oneapp
  template:
    metadata:
      labels:
        app: oneapp
    spec:
      containers:
        - name: oneapp
          image: oneapp
          command:
            - ./hello
            - --port=80
            - --user=$(USERNAME)
            - --passwd=$(PASSWORD)
          env:
            - name: APPKEY
              valueFrom:
                configMapKeyRef:
                  name: demo-cm
                  key: app.key
            - name: USERNAME
              valueFrom:
                secretKeyRef:
                  name: demo-secrets
                  key: Username
            - name: PASSWORD
              valueFrom:
                secretKeyRef:
                  name: demo-secrets
                  key: Password
      dnsPolicy: ClusterFirst
      imagePullSecrets:
        - name: default-secret
      restartPolicy: Always
      schedulerName: default-scheduler
      terminationGracePeriodSeconds: 30

The base service.yaml file, as shown below:

# base/service.yaml
apiVersion: v1
kind: Service
metadata:
  name: oneapp
  labels:
    app: oneapp
spec:
  selector:
    app: oneapp
  type: ClusterIP
  ports:
    - port: 80
      protocol: TCP
      targetPort: 80

Staging configuration

Next, configure the staging kustomization.yaml file, as shown below:

# overlays/staging/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
# nameSuffix: -staging
labels:     # labels applied to every built resource
  - pairs:
      app: oneapp
      variant: staging
commonAnnotations:
  note: Hello, I am staging!
resources:
- ../../base
patches:
  - path: deployment.yaml

The staging deployment.yaml file, as shown below:

# overlays/staging/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: oneapp
spec:
  replicas: 1
  strategy:
    type: Recreate
  template:
    spec:
      containers:
        - name: oneapp
          resources:
            limits:
              cpu: 800m
              memory: 2Gi
            requests:
              cpu: 500m
              memory: 2Gi
          startupProbe:
            httpGet:
              port: 80
              path: /ping
            failureThreshold: 30
            periodSeconds: 5
          readinessProbe:
            httpGet:
              port: 80
              path: /ping
            failureThreshold: 3
            periodSeconds: 5
          livenessProbe:
            httpGet:
              port: 80
              path: /ping
            failureThreshold: 3
            periodSeconds: 5
            timeoutSeconds: 10

Production configuration

Next, configure the production kustomization.yaml file, as shown below:

# overlays/production/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
# nameSuffix: -prod
labels:     # labels applied to every built resource
  - pairs:
      app: oneapp
      variant: production
commonAnnotations:
  note: Hello, I am production!
resources:
- ../../base
patches:
  - path: deployment.yaml

The production deployment.yaml file, as shown below:

# overlays/production/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: oneapp
spec:
  replicas: 10
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
  template:
    spec:
      containers:
        - name: oneapp
          resources:
            limits:
              cpu: 2
              memory: 4Gi
            requests:
              cpu: 800m
              memory: 2Gi
          startupProbe:
            httpGet:
              port: 80
              path: /ping
            failureThreshold: 30
            periodSeconds: 5
          readinessProbe:
            httpGet:
              port: 80
              path: /ping
            failureThreshold: 3
            periodSeconds: 5
          livenessProbe:
            httpGet:
              port: 80
              path: /ping
            failureThreshold: 3
            periodSeconds: 5
            timeoutSeconds: 10

Verification

Let's compare the manifests generated for the different environments.

Verifying staging

Build and inspect the manifests with the kustomize tool, as shown below:

$ kustomize build demo-overlays/overlays/staging
apiVersion: v1
data:
  app.key: afe105eb4dc2dac3d020453aeb2e6d8f
kind: ConfigMap
metadata:
  annotations:
    note: Hello, I am staging!
  labels:
    app: oneapp
    variant: staging
  name: demo-cm-tdcmdt2ff7
  namespace: default
---
apiVersion: v1
data:
  Password: ZVB5dmNlNXRwKzg0UjJsQQ==
  Username: YWRtaW4=
kind: Secret
metadata:
  annotations:
    note: Hello, I am staging!
  labels:
    app: oneapp
    variant: staging
  name: demo-secrets-fh627bbh2c
  namespace: default
type: Opaque
---
apiVersion: v1
kind: Service
metadata:
  annotations:
    note: Hello, I am staging!
  labels:
    app: oneapp
    variant: staging
  name: oneapp
  namespace: default
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: oneapp
  type: ClusterIP
---
apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    note: Hello, I am staging!
  labels:
    app: oneapp
    variant: staging
  name: oneapp
  namespace: default
spec:
  progressDeadlineSeconds: 600
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app: oneapp
  strategy:
    type: Recreate
  template:
    metadata:
      annotations:
        note: Hello, I am staging!
      labels:
        app: oneapp
    spec:
      containers:
      - command:
        - ./hello
        - --port=80
        - --user=$(USERNAME)
        - --passwd=$(PASSWORD)
        env:
        - name: APPKEY
          valueFrom:
            configMapKeyRef:
              key: app.key
              name: demo-cm-tdcmdt2ff7
        - name: USERNAME
          valueFrom:
            secretKeyRef:
              key: Username
              name: demo-secrets-fh627bbh2c
        - name: PASSWORD
          valueFrom:
            secretKeyRef:
              key: Password
              name: demo-secrets-fh627bbh2c
        image: oneapp
        livenessProbe:
          failureThreshold: 3
          httpGet:
            path: /ping
            port: 80
          periodSeconds: 5
          timeoutSeconds: 10
        name: oneapp
        readinessProbe:
          failureThreshold: 3
          httpGet:
            path: /ping
            port: 80
          periodSeconds: 5
        resources:
          limits:
            cpu: 800m
            memory: 2Gi
          requests:
            cpu: 500m
            memory: 2Gi
        startupProbe:
          failureThreshold: 30
          httpGet:
            path: /ping
            port: 80
          periodSeconds: 5
      dnsPolicy: ClusterFirst
      imagePullSecrets:
      - name: default-secret
      restartPolicy: Always
      schedulerName: default-scheduler
      terminationGracePeriodSeconds: 30

Verifying production

As with staging, build and inspect the manifests with the kustomize tool:

$ kustomize build demo-overlays/overlays/production
apiVersion: v1
data:
  app.key: afe105eb4dc2dac3d020453aeb2e6d8f
kind: ConfigMap
metadata:
  annotations:
    note: Hello, I am production!
  labels:
    app: oneapp
    variant: production
  name: demo-cm-tdcmdt2ff7
  namespace: default
---
apiVersion: v1
data:
  Password: ZVB5dmNlNXRwKzg0UjJsQQ==
  Username: YWRtaW4=
kind: Secret
metadata:
  annotations:
    note: Hello, I am production!
  labels:
    app: oneapp
    variant: production
  name: demo-secrets-fh627bbh2c
  namespace: default
type: Opaque
---
apiVersion: v1
kind: Service
metadata:
  annotations:
    note: Hello, I am production!
  labels:
    app: oneapp
    variant: production
  name: oneapp
  namespace: default
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: oneapp
  type: ClusterIP
---
apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    note: Hello, I am production!
  labels:
    app: oneapp
    variant: production
  name: oneapp
  namespace: default
spec:
  progressDeadlineSeconds: 600
  replicas: 10
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app: oneapp
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
  template:
    metadata:
      annotations:
        note: Hello, I am production!
      labels:
        app: oneapp
    spec:
      containers:
      - command:
        - ./hello
        - --port=80
        - --user=$(USERNAME)
        - --passwd=$(PASSWORD)
        env:
        - name: APPKEY
          valueFrom:
            configMapKeyRef:
              key: app.key
              name: demo-cm-tdcmdt2ff7
        - name: USERNAME
          valueFrom:
            secretKeyRef:
              key: Username
              name: demo-secrets-fh627bbh2c
        - name: PASSWORD
          valueFrom:
            secretKeyRef:
              key: Password
              name: demo-secrets-fh627bbh2c
        image: oneapp
        livenessProbe:
          failureThreshold: 3
          httpGet:
            path: /ping
            port: 80
          periodSeconds: 5
          timeoutSeconds: 10
        name: oneapp
        readinessProbe:
          failureThreshold: 3
          httpGet:
            path: /ping
            port: 80
          periodSeconds: 5
        resources:
          limits:
            cpu: 2
            memory: 4Gi
          requests:
            cpu: 800m
            memory: 2Gi
        startupProbe:
          failureThreshold: 30
          httpGet:
            path: /ping
            port: 80
          periodSeconds: 5
      dnsPolicy: ClusterFirst
      imagePullSecrets:
      - name: default-secret
      restartPolicy: Always
      schedulerName: default-scheduler
      terminationGracePeriodSeconds: 30