Deploying Harbor on k8s

Current as of: mid-2019

Prerequisites

  • k8s cluster
  • helm3
  • harbor helmchart
  • nginx ingress controller
  • nfs share server & nfs pv

Related links

Notes

Harbor itself is made up of quite a few components: the core third-party component registry, chartmuseum for storing helm charts, clair for vulnerability scanning, postgres and redis for storage and caching, jobservice for running background jobs, and so on.

The deployments after a successful install look like this:

harbor-harbor-chartmuseum     1/1     1            1           2d16h
harbor-harbor-clair           1/1     1            1           2d16h
harbor-harbor-core            1/1     1            1           2d16h
harbor-harbor-jobservice      1/1     1            1           2d16h
harbor-harbor-notary-server   1/1     1            1           2d16h
harbor-harbor-notary-signer   1/1     1            1           2d16h
harbor-harbor-portal          1/1     1            1           2d16h
harbor-harbor-registry        1/1     1            1           2d16h

And the pods after a successful install:

harbor-harbor-chartmuseum-69c9cdcbf8-gmnl7     1/1     Running   5          2d16h
harbor-harbor-clair-c9757f6cb-nq6zk            1/1     Running   14         2d5h
harbor-harbor-core-7c44cfc7f-xp9nm             1/1     Running   10         2d16h
harbor-harbor-database-0                       1/1     Running   5          2d16h
harbor-harbor-jobservice-d5db6cc65-csg94       1/1     Running   11         2d16h
harbor-harbor-notary-server-79f8d8b8b8-hld8l   1/1     Running   9          2d5h
harbor-harbor-notary-signer-86cb6fb8bf-xqv2x   1/1     Running   11         2d5h
harbor-harbor-portal-7dddbf759c-8w7s8          1/1     Running   5          2d16h
harbor-harbor-redis-0                          1/1     Running   5          2d16h
harbor-harbor-registry-5c499d8d49-h7h7r        2/2     Running   10         2d16h

Since the stack is fairly complex, it is worth deploying it with the Helm chart. Note that Helm recently released v3, which, compared with v2, drops the server-side tiller service and is simpler to use; the Harbor helm chart can be deployed with Helm v3.
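One way to get the chart is to clone the goharbor/harbor-helm repository (URL and layout assumed here; check out the branch or tag that matches your Harbor version):

git clone https://github.com/goharbor/harbor-helm
cd harbor-helm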

Configuring PV and PVC

The helm configuration is centralized in values.yaml; the first thing to sort out there is the PV and PVC settings:

persistence:
  enabled: true
  # Setting it to "keep" to avoid removing PVCs during a helm delete
  # operation. Leaving it empty will delete PVCs after the chart deleted
  resourcePolicy: "keep"
  persistentVolumeClaim:
    registry:
      # Use the existing PVC which must be created manually before bound,
      # and specify the "subPath" if the PVC is shared with other components
      existingClaim: "harbor-registry"
      # Specify the "storageClass" used to provision the volume. Or the default
      # StorageClass will be used(the default).
      # Set it to "-" to disable dynamic provisioning
      storageClass: ""
      subPath: ""
      accessMode: ReadWriteMany
      size: 5Gi
    chartmuseum:
      existingClaim: "harbor-chartmuseum"
      storageClass: ""
      subPath: ""
      accessMode: ReadWriteMany
      size: 5Gi
    jobservice:
      existingClaim: "harbor-jobservice"
      storageClass: ""
      subPath: ""
      accessMode: ReadWriteMany
      size: 1Gi
    # If external database is used, the following settings for database will
    # be ignored
    database:
      existingClaim: "harbor-database"
      storageClass: ""
      subPath: ""
      accessMode: ReadWriteMany
      size: 1Gi

Generally the helm values do not pin a concrete PV; they only name the PVC to use. Here we set existingClaim to the PVC names we want.

Below are two relatively simple, workable ways to provide the corresponding PVs.

The first is a local hostPath PV. Note that each PV should get its own directory; do not let several PVs share one (see the Issues section for why).

Here is one example:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: harbor-registry
  namespace: harbor
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /home/harbor/registry
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: harbor-registry
  namespace: harbor
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
  volumeName: "harbor-registry"
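After applying a PV/PVC pair, it is worth checking that the claim actually bound to the intended volume before installing the chart, for example:

kubectl get pv
kubectl get pvc -n harbor        # STATUS should be Bound for every claim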

The other fairly simple option is to mount NFS volumes:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: harbor-registry
  namespace: harbor
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteMany
  nfs:
    # FIXME: use the right IP
    server: 172.19.0.11
    path: "/var/nfsshare/registry"
  persistentVolumeReclaimPolicy: Recycle
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: harbor-registry
  namespace: harbor
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 5Gi
  volumeName: "harbor-registry"

Note that hostPath does not support shared writes across nodes, so only ReadWriteOnce can be used, not ReadWriteMany. A PVC can be bound to a specific PV via volumeName, and a PV can be pinned to a node via nodeAffinity.
For NFS volumes, the matching server and path have to be configured (NFS installation is covered below).
Again, give each component its own directory; do not mix them.
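For strict node pinning, the local volume type (which requires nodeAffinity) can be used instead of plain hostPath. A minimal sketch, where node-1 and the path are placeholders for your own node name and directory:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: harbor-registry
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteOnce
  # local volumes must declare which node holds the directory
  local:
    path: /home/harbor/registry
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - node-1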

Configuring the nginx ingress controller

We use a standalone ingress controller here, for example nginx-ingress.

The nginx ingress controller can be deployed in several ways (e.g. as a Deployment or as a DaemonSet), and it can also be installed via helm.

Here is the DaemonSet-based deployment, applying the manifests directly:

kubectl apply -f common/ns-and-sa.yaml
kubectl apply -f common/default-server-secret.yaml
kubectl apply -f common/nginx-config.yaml
kubectl apply -f rbac/rbac.yaml
kubectl apply -f daemon-set/nginx-ingress.yaml
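After applying the manifests, the controller pods should be running on every node, which can be checked with:

kubectl get pods -n nginx-ingress -o wide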

Note that, so images can be pushed through this ingress layer later on, the body size limit in common/nginx-config.yaml needs to be raised, along these lines:

kind: ConfigMap
apiVersion: v1
metadata:
  name: nginx-config
  namespace: nginx-ingress
data:
  proxy-connect-timeout: "10s"
  proxy-read-timeout: "10s"
  client-max-body-size: "100m"
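On the Harbor side, how the services are published through this ingress is controlled by the expose section and externalURL in the chart's values.yaml. A minimal sketch, where the hostnames are placeholders to be replaced with your own domain:

expose:
  type: ingress
  tls:
    enabled: true
  ingress:
    hosts:
      core: core.harbor.example.com
      notary: notary.harbor.example.com
externalURL: https://core.harbor.example.com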

Deploying Harbor

After adjusting the values.yaml of the Harbor helm chart, run the following from the harbor-helm directory (the one containing values.yaml):

helm install harbor --namespace=harbor .
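Keep in mind that helm v3 does not create the target namespace by itself, so it has to exist before the first install; afterwards the rollout can be watched with kubectl:

kubectl create namespace harbor   # run once, before the first install
kubectl get pods -n harbor -w     # wait until all components are Running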

If the deployment runs into problems, it can be removed with:

helm uninstall --namespace harbor harbor

Installing the NFS server

Install the packages:

sudo apt-get install nfs-kernel-server nfs-common

Create the shared directories:

sudo mkdir -p /var/nfsshare/xxx
sudo chmod -R 777 /var/nfsshare/xxx

Add to /etc/exports:

/var/nfsshare/xxx    xxx.xxx.xxx.*(rw,sync,no_root_squash,no_all_squash)

or something like:

/var/nfsshare/xxx    *(rw,sync,no_root_squash,no_all_squash)

Restart the service:

sudo /etc/init.d/nfs-kernel-server restart
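To pick up later changes to /etc/exports without a restart, and to verify what the server actually exports, something like the following can be used (the server IP is the example one from above):

sudo exportfs -ra          # re-export everything in /etc/exports
showmount -e 172.19.0.11   # list the exports visible to clients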