013-K8S-Installing Harbor with Helm

Current cluster environment

IP            Hostname       Role
10.20.1.139   k8s-master01   NFS server
10.20.1.140   k8s-node01
10.20.1.141   k8s-node02
10.20.1.142   k8s-node03     Harbor

I. Preparation

1. Install NFS and configure the storage class

Install NFS and set up a storage class that automatically provisions PVs, which Harbor will use to persist the images it stores.

k8s-master01 (10.20.1.139) serves as the NFS server; the other nodes act as NFS clients.

1.1 Install the NFS packages

Run on all nodes: master, node01, node02, and node03.

$ yum install -y nfs-utils rpcbind

1.2 Create the shared directory

Run only on the NFS server (the master node).

# Create the shared directory
$ mkdir -p /root/data/harbor/

# Open up the permissions
$ chmod 777 /root/data/harbor/

# Change the owner to nobody
$ chown nobody /root/data/harbor/

1.3 Configure read/write access for the shared directory

Run only on the NFS server (the master node).

$ vim /etc/exports

/root/data     10.20.1.0/24(rw,fsid=0,no_root_squash)
/root/data/harbor     10.20.1.0/24(rw,no_root_squash,no_all_squash,sync)

This setup uses NFSv4: /root/data is exported as the pseudo-root (fsid=0), and the configuration above allows any host in the 10.20.1.0/24 subnet to mount the /root/data/harbor directory from the NFS server.
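
If you change /etc/exports later, while the NFS service is already running, a small sketch like the following reloads the export table and verifies it without restarting the service (run on the NFS server):

# Re-export everything listed in /etc/exports
$ exportfs -ra

# Show the currently exported directories and their options
$ exportfs -v

# Check the export list as a client would see it
$ showmount -e 10.20.1.139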

1.4 Start the NFS services

Run on every node in the cluster.

# Start the services
$ systemctl start rpcbind
$ systemctl restart nfs-server.service

# Enable them at boot
$ systemctl enable rpcbind
$ systemctl enable nfs-server.service

1.5 Test mounting the NFS directory

# On an NFS client: create the mount point
$ mkdir -p /data/test1

# On an NFS client: mount the directory from the NFS server (/harbor is relative to the NFSv4 pseudo-root /root/data)
$ mount -t nfs4 10.20.1.139:/harbor /data/test1

# On the NFS client: check the mount
$ mount | grep /data/test1
10.20.1.139:/harbor on /data/test1 type nfs4 (rw,relatime,vers=4.2,rsize=524288,wsize=524288,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=10.20.1.142,local_lock=none,addr=10.20.1.139)

# On the NFS client: write some data
$ echo "this is client" > /data/test1/a.txt

# On the NFS server: read the data; the client's write is visible in the exported directory
$ cat /root/data/harbor/a.txt 
this is client

# On the NFS server: write some data
$ echo "This is Server" >> /root/data/harbor/a.txt

# On the NFS client: read the data; the server's write is visible through the mount
$ cat /data/test1/a.txt 
this is client
This is Server

Unmounting

# Unmount
$ umount /data/test1


# If unmounting fails with an error such as:
$ umount /data/test1
umount.nfs4: /data/test1: device is busy
# find the process holding the directory
$ fuser -m /data/test1
/data/test1:         32679c
# and kill it
$ kill -9 32679


# Option 2: lazy unmount
$ umount -l /data/test1

1.6 Configure the storage provisioner

1.6.1 Create a namespace

The NFS provisioner resources are kept in a dedicated namespace.

# Create the namespace
$ kubectl create ns nfs-storage
namespace/nfs-storage created

1.6.2 Create nfs-storage.yaml and apply it

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs-client # StorageClass is cluster-scoped, so no namespace is set here
# provisioner: nfs-provisioner
provisioner: k8s-sigs.io/nfs-subdir-external-provisioner # dynamic provisioner: the NFS subdir external provisioner
parameters:
  pathPattern: ${.PVC.namespace}/${.PVC.name} # NFS subdirectory created per PVC, in the form <PVC namespace>/<PVC name>, e.g. nfs-storage/test-claim
  archiveOnDelete: "true" # when a PV is deleted, archive its contents instead of removing them

---

apiVersion: apps/v1
kind: Deployment
metadata:
  namespace: nfs-storage
  name: nfs-client-provisioner # NFS dynamic provisioner that automatically creates PVs for PVCs
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nfs-client-provisioner
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccountName: nfs-client-provisioner # ServiceAccount used by the provisioner
      containers:
        - name: nfs-client-provisioner
          image: k8s.dockerproxy.com/sig-storage/nfs-subdir-external-provisioner:v4.0.2
          imagePullPolicy: IfNotPresent
          volumeMounts:
            - name: nfs-client-root # must match the volume name defined under volumes
              mountPath: /persistentvolumes # the NFS share is mounted here; the provisioner creates PV directories under this path
          env:
            - name: PROVISIONER_NAME
              value: k8s-sigs.io/nfs-subdir-external-provisioner # provisioner name, must match the StorageClass
            - name: NFS_SERVER
              value: 10.20.1.139 # NFS server address
            - name: NFS_PATH
              value: /root/data/harbor # NFS export path
      volumes:
        - name: nfs-client-root # NFS volume pointing at the server and path above
          nfs:
            server: 10.20.1.139 # NFS server address
            path: /root/data/harbor # NFS export path
      nodeName: k8s-node02 # pin the Pod to the k8s-node02 node

---

apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-client-provisioner # name of the ServiceAccount
  namespace: nfs-storage

---

kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: nfs-client-provisioner-runner
rules:
  - apiGroups: [""]
    resources: ["nodes"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "update", "patch"]

---

kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-client-provisioner
subjects:
  - kind: ServiceAccount # subject type
    name: nfs-client-provisioner # ServiceAccount name
    namespace: nfs-storage
roleRef:
  kind: ClusterRole # kind of role being bound
  name: nfs-client-provisioner-runner # the ClusterRole defined above
  apiGroup: rbac.authorization.k8s.io

---

kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner # role used by the provisioner's leader-election mechanism
  namespace: nfs-storage
rules:
  - apiGroups: [""] # the empty string denotes the core API group, which contains basic resources such as endpoints
    resources: ["endpoints"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]

---

kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-leader-locking-nfs-client-provisioner
  namespace: nfs-storage
subjects:
  - kind: ServiceAccount # subject type
    name: nfs-client-provisioner # ServiceAccount name
    namespace: nfs-storage
roleRef:
  kind: Role # namespaced Role in the nfs-storage namespace
  apiGroup: rbac.authorization.k8s.io
  name: leader-locking-nfs-client-provisioner # Role name

Apply the manifest

$ kubectl apply -f nfs-storage.yaml

$ kubectl get all -n nfs-storage
NAME                                         READY   STATUS    RESTARTS   AGE
pod/nfs-client-provisioner-c485df84f-44hmr   1/1     Running   0          112s

NAME                                     READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/nfs-client-provisioner   1/1     1            1           112s

NAME                                               DESIRED   CURRENT   READY   AGE
replicaset.apps/nfs-client-provisioner-c485df84f   1         1         1       112s
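
Before moving on, you can optionally confirm that dynamic provisioning works by creating a throwaway PVC against the nfs-client StorageClass (the PVC name test-claim below is only an example):

# Create a test PVC
$ cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-claim
  namespace: nfs-storage
spec:
  storageClassName: nfs-client
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 100Mi
EOF

# The PVC should become Bound and a PV should be created automatically
$ kubectl get pvc -n nfs-storage test-claim

# Per the pathPattern, a matching subdirectory should appear on the NFS server
$ ls /root/data/harbor/nfs-storage/

# Clean up
$ kubectl delete pvc -n nfs-storage test-claim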

2. Install Helm and add the Harbor repository

2.1 Install Helm

Reference: https://georgechan95.github.io/blog/d8e3c7b3.html

Download the binary package

Download page: https://github.com/helm/helm/releases

wget https://get.helm.sh/helm-v3.12.3-linux-amd64.tar.gz

Extract the archive

$ tar -zxvf helm-v3.12.3-linux-amd64.tar.gz

Move the binary to a directory on the PATH

$ mv linux-amd64/helm /usr/local/bin/helm

Verify

# Check the Helm version
$ helm version
version.BuildInfo{Version:"v3.12.3", GitCommit:"3a31588ad33fe3b89af5a2a54ee1d25bfe6eaa5e", GitTreeState:"clean", GoVersion:"go1.20.7"}
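
Optionally, shell completion can be enabled for helm; this is only a convenience and not required for the remaining steps:

# Generate and load bash completion for helm
$ helm completion bash > /etc/bash_completion.d/helm
$ source /etc/bash_completion.d/helm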

3. Install the Ingress controller

Reference: https://georgechan95.github.io/blog/6436eaf1.html, Section 3

# Images required by ingress-nginx
# These images also need to be imported on the k8s-node03 node
docker pull registry.k8s.io/ingress-nginx/controller:v1.9.4
docker pull registry.k8s.io/defaultbackend-amd64:1.5
docker pull registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0
docker pull registry.k8s.io/ingress-nginx/opentelemetry:v20230721-3e2062ee5

Reinstall ingress-nginx from the master node so that the Ingress controller also runs on k8s-node03.

# Uninstall the existing ingress-nginx release
[root@k8s-master01 /opt/k8s/10/ingress-nginx]$ helm uninstall ingress-nginx --namespace ingress-nginx
# or, alternatively, upgrade the existing release
$ helm upgrade ingress-nginx /opt/k8s/10/ingress-nginx --namespace ingress-nginx -f /opt/k8s/10/ingress-nginx/values.yaml

# Reinstall
[root@k8s-master01 /opt/k8s/10/ingress-nginx]$ helm install ingress-nginx --namespace ingress-nginx --create-namespace .

# Check the pods: the ingress controller is now running on k8s-node03 as well
$ kubectl get pod -n ingress-nginx -o wide
NAME                             READY   STATUS    RESTARTS   AGE     IP            NODE         NOMINATED NODE   READINESS GATES
ingress-nginx-controller-5wzqt   1/1     Running   0          2m33s   10.20.1.141   k8s-node02   <none>           <none>
ingress-nginx-controller-9wfpq   1/1     Running   0          2m33s   10.20.1.142   k8s-node03   <none>           <none>
ingress-nginx-controller-z9sl8   1/1     Running   0          2m33s   10.20.1.140   k8s-node01   <none>           <none>
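
Two quick checks that are useful before installing Harbor (the pod IPs above are the node IPs, which suggests the controller runs with hostNetwork on each node):

# Confirm the IngressClass name that Harbor's Ingress will reference (usually "nginx")
$ kubectl get ingressclass

# See how the controller service is exposed
$ kubectl get svc -n ingress-nginx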

II. Installing Harbor

2.1 Add the Harbor repository and download the chart

# Add the Harbor Helm repository
$ helm repo add harbor https://helm.goharbor.io

# Search the repository for the harbor chart (latest 4 versions)
$ helm search repo harbor -l |  grep harbor/harbor  | head  -4

# Pull a specific version of the harbor chart
$ helm pull harbor/harbor --version 1.17.1

# Extract it
tar -zxvf harbor-1.17.1.tgz harbor
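
If you want to review the chart's default configuration before editing, the full default values can also be dumped to a file (the output filename is arbitrary):

# Save the chart's default values for reference
$ helm show values harbor/harbor --version 1.17.1 > harbor-default-values.yaml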

2.2 Create a namespace and pick the target node

Harbor will be installed on the k8s-node03 node, in its own namespace.

# Label the k8s-node03 node; the label is used later by the nodeSelector entries in values.yaml
$ kubectl label node k8s-node03 harbor=env

Create the namespace:

$ cat namespace-harbor.yaml 
apiVersion: v1
kind: Namespace
metadata:
  name: harbor


# Apply the manifest
$ kubectl apply -f namespace-harbor.yaml

# Check the namespace
$ kubectl get ns | grep harbor
harbor            Active   44m

2.3 Edit values.yaml in the harbor chart directory

2.3.1 Set hostname to your own domain

expose:
  type: ingress
  ingress:
    hosts:
      core: my.harbor.docker # custom domain
externalURL: https://my.harbor.docker # must match the hostname above

2.3.2 Persistent volume configuration

Change storageClass for each persistent volume claim to the nfs-client StorageClass defined earlier.

persistence:
  enabled: true
  resourcePolicy: "keep"
  persistentVolumeClaim:
    registry:
      storageClass: "nfs-client"
      accessMode: ReadWriteOnce
      size: 5Gi
    jobservice:
      jobLog:
        storageClass: "nfs-client"
        accessMode: ReadWriteOnce
        size: 1Gi
    database:
      storageClass: "nfs-client"
      accessMode: ReadWriteOnce
      size: 1Gi
    redis:
      storageClass: "nfs-client"
      accessMode: ReadWriteOnce
      size: 1Gi
    trivy:
      storageClass: "nfs-client"
      accessMode: ReadWriteOnce
      size: 5Gi

2.3.3 Set the Pod node selectors

Set nodeSelector so that all Harbor Pods run on the k8s-node03 node (which carries the harbor=env label added earlier).

nginx:
  nodeSelector:
    harbor: env
    
portal:
  nodeSelector:
    harbor: env
    
core:
  nodeSelector:
    harbor: env
    
jobservice:
  nodeSelector:
    harbor: env
    
registry:
  nodeSelector:
    harbor: env
    
trivy:
  nodeSelector:
    harbor: env
    
database:
  nodeSelector:
    harbor: env
    
redis:
  nodeSelector:
    harbor: env
    
exporter:
  nodeSelector:
    harbor: env

Adjust the database resource requests

database:
  internal:
    resources:
      requests:
        memory: 256Mi
        cpu: 100m

2.4 Install Harbor into the cluster

# Install Harbor from the extracted chart directory
$ helm install harbor /opt/software/harbor -n harbor

# After editing values.yaml, upgrade the release instead
#$ helm upgrade harbor /opt/software/harbor -n harbor -f /opt/software/harbor/values.yaml

# Or install from any directory by passing the chart path and values file explicitly
#$ helm install harbor /opt/software/harbor -n harbor -f /opt/software/harbor/values.yaml
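
Once the release is installed, the rollout can be followed with the commands below; until the images are present on k8s-node03 the pods will likely sit in ImagePullBackOff (see the note that follows):

# Watch the Harbor pods come up (Ctrl+C to stop)
$ kubectl get pods -n harbor -o wide -w

# All PVCs should be Bound through the nfs-client StorageClass
$ kubectl get pvc -n harbor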

Important note:

Harbor depends on images hosted on registries outside China, so image pulls are likely to fail during installation. Here I downloaded the images through a Docker proxy and imported them onto the k8s-node03 node; see the separate post on configuring a network proxy for Docker to pull images from foreign registries.

All of the required images (and their tags) are listed in values.yaml.

docker pull goharbor/nginx-photon:v2.13.1
docker pull goharbor/harbor-portal:v2.13.1
docker pull goharbor/harbor-core:v2.13.1
docker pull goharbor/harbor-jobservice:v2.13.1
docker pull goharbor/registry-photon:v2.13.1
docker pull goharbor/harbor-registryctl:v2.13.1
docker pull goharbor/trivy-adapter-photon:v2.13.1
docker pull goharbor/harbor-db:v2.13.1
docker pull goharbor/redis-photon:v2.13.1
docker pull goharbor/harbor-exporter:v2.13.1
echo "镜像下载完成"



docker save goharbor/nginx-photon:v2.13.1 -o nginx-photon-v2.13.1.tar
docker save goharbor/harbor-portal:v2.13.1 -o harbor-portal-v2.13.1.tar
docker save goharbor/harbor-core:v2.13.1 -o harbor-core-v2.13.1.tar
docker save goharbor/harbor-jobservice:v2.13.1 -o harbor-jobservice-v2.13.1.tar
docker save goharbor/registry-photon:v2.13.1 -o registry-photon-v2.13.1.tar
docker save goharbor/harbor-registryctl:v2.13.1 -o harbor-registryctl-v2.13.1.tar
docker save goharbor/trivy-adapter-photon:v2.13.1 -o trivy-adapter-photon-v2.13.1.tar
docker save goharbor/harbor-db:v2.13.1 -o harbor-db-v2.13.1.tar
docker save goharbor/redis-photon:v2.13.1 -o redis-photon-v2.13.1.tar
docker save goharbor/harbor-exporter:v2.13.1 -o harbor-exporter-v2.13.1.tar
echo "镜像打包完成"



docker load -i nginx-photon-v2.13.1.tar
docker load -i harbor-portal-v2.13.1.tar
docker load -i harbor-core-v2.13.1.tar
docker load -i harbor-jobservice-v2.13.1.tar
docker load -i registry-photon-v2.13.1.tar
docker load -i harbor-registryctl-v2.13.1.tar
docker load -i trivy-adapter-photon-v2.13.1.tar
docker load -i harbor-db-v2.13.1.tar
docker load -i redis-photon-v2.13.1.tar
docker load -i harbor-exporter-v2.13.1.tar
echo "镜像加载完成"

2.5 Import the Harbor certificate into Docker

2.5.1 Export the Harbor CA certificate

# List the Secrets created by Harbor
$ kubectl get secret -n harbor
NAME                           TYPE                 DATA   AGE
harbor-core                    Opaque               8      13m
harbor-database                Opaque               1      13m
harbor-ingress                 kubernetes.io/tls    3      13m
harbor-jobservice              Opaque               2      13m
harbor-registry                Opaque               2      13m
harbor-registry-htpasswd       Opaque               1      13m
harbor-registryctl             Opaque               0      13m
harbor-trivy                   Opaque               2      13m
sh.helm.release.v1.harbor.v1   helm.sh/release.v1   1      13m

# 1. Inspect the harbor-ingress Secret; the ca.crt field is the CA certificate
$ kubectl get secret harbor-ingress -n harbor -o yaml
apiVersion: v1
data:
  ca.crt: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURFekNDQWZ1Z0F3SUJBZ0lRTlZRd3VoTk5ERjNSQmZPTTM3c25sakFOQmdrcWhraUc5dzBCQVFzRkFEQVUKTVJJd0VBWURWUVFERXdsb1lYSmliM0l0WTJFd0hoY05NalV3TnpBNE1EWXhNRE0wV2hjTk1qWXdOekE0TURZeApNRE0wV2pBVU1SSXdFQVlEVlFRREV3bG9ZWEppYjNJdFkyRXdnZ0VpTUEwR0NTcUdTSWIzRFFFQkFRVUFBNElCCkR3QXdnZ0VLQW9JQkFRRGZQdjFoWDhPYVpYWkNkOXFzSld0RUtWbXdUakRrMlhGZFJQRS93dnV5T2V5UW1tZDYKSzFJUW0zYXpqQThvMGlvSHdScXhZaGNjYjN2SmUxRld1TDFoTTBQTkRWRWRVakxzc0kzWm1pcXlTano1N1YxMgpVeWlabGRqcHVCYlBQcGNhL3dLc0V6bG04SFFhMkI4NndsaEJzUFUxd0dxT1FPbSt5WlVBdEZ1dGVrT2xDNGZHCnM0SFd5MlVJNm5pclFKRXFWc2Foek83dHFaRzhNK1NZQTJLRnpkTEN1R0pkTnpjR3QrTzJ2NjRyNy9SVVRHTk8KVWlpRTIweXhYVFl2NTVrcnd6dlhRNXhKRlBzb0w1YmsycjVxU3lKVTlydmtpNEhjV0pubUhHSFN3OTJTU3FGZwpwVUZKNm4rQkEzbThEdk1CL2pZOW5EcUk1NzluM3dxSm9QaURBZ01CQUFHallUQmZNQTRHQTFVZER3RUIvd1FFCkF3SUNwREFkQmdOVkhTVUVGakFVQmdnckJnRUZCUWNEQVFZSUt3WUJCUVVIQXdJd0R3WURWUjBUQVFIL0JBVXcKQXdFQi96QWRCZ05WSFE0RUZnUVVoZXRCUEpwUkxSKzdOTHFFWEZyN2FXakJ6VDB3RFFZSktvWklodmNOQVFFTApCUUFEZ2dFQkFINDJqOGtSUFNMdjNjaGFEU3BMUDdWeG9obU9lRythajYvMVBSTmFpWjlvZVdDd3kveGVRODlZCnFycEViVmF5anVJUlpFcDh2VE4yL3pxcUFzOGh5MHRSY3NxNzAraE8rUlJpdStNUFJ0ZzNmTlY3RlZVVzd5aXkKMG0xTmpPL1U1RXpxbnlQQkNGbWJZM1Z6Q3Vmdzc0bElFU0JDZHc4SW03ODRTeDBoa1dTRmd4RFY3djZZZDlKdgorN3laek4zQ2Flem1KS2xXT3VNQjByeTZNTC95bldPKzJxWVFZQjB6OGhyTFFwMS9aTzhpb05rMEVtL1pDL1JXCkNocVRmUFFsYzZUbGhJUFkrSnVYRDNVLzlIRXd4YWVWVzVqblI4cXpXdEJMVHUzSTE5ZERSUGJUQVN0N0w4MUoKU1BTb1lJMTFLTCt2cDBPRDY2ankzWjZ6VDQ4enNtbz0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=
  tls.crt: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUROakNDQWg2Z0F3SUJBZ0lRTTh2VW9aMlhzMHowbVJFNXYyVU1BekFOQmdrcWhraUc5dzBCQVFzRkFEQVUKTVJJd0VBWURWUVFERXdsb1lYSmliM0l0WTJFd0hoY05NalV3TnpBNE1EWXhNRE0wV2hjTk1qWXdOekE0TURZeApNRE0wV2pBYk1Sa3dGd1lEVlFRREV4QnRlUzVvWVhKaWIzSXVaRzlqYTJWeU1JSUJJakFOQmdrcWhraUc5dzBCCkFRRUZBQU9DQVE4QU1JSUJDZ0tDQVFFQXpLVEVmcjg1S1hza1JUVjF4TGhYbnRraExTUmhGM25GV0hYSzZyQ3QKMjdSRXhTSnlabDdZQU81U2RlT2tCRGs1dE5oRVhVTG83MHhYbThRdTQxNC9oeThtdHpvYmJXWWJrcHNtNGxlYQp2UmM1dTl3REhiSjVhNVpnV1FLaWJTd01nUWtLR3RmbHdhUUFUcTVMQm05TXRVc2hCN1hUQXB6R1dENXZEc3MzCm5MZXpCbFp0TUFyZmtncU9vLzQxMFVWTWtYV05CcVFpNFFkUUFoTVVuclR6d0ovdWo5UDhFR3podGxBMkVMTU4KamIrL1FUK05HcGVEQmFGWXcrNDdJa0V5RkY2RFJqdE9VTSt0SWt0Q2hCek5xd1BVUjJ0UCtSWWhrZnM4TnJlYQp2bDdwNC8rWnlIRWkwVXMwV3B4R2lqbXZITHY3Rmw0dENXam5LZkRoMVZjZ253SURBUUFCbzMwd2V6QU9CZ05WCkhROEJBZjhFQkFNQ0JhQXdIUVlEVlIwbEJCWXdGQVlJS3dZQkJRVUhBd0VHQ0NzR0FRVUZCd01DTUF3R0ExVWQKRXdFQi93UUNNQUF3SHdZRFZSMGpCQmd3Rm9BVWhldEJQSnBSTFIrN05McUVYRnI3YVdqQnpUMHdHd1lEVlIwUgpCQlF3RW9JUWJYa3VhR0Z5WW05eUxtUnZZMnRsY2pBTkJna3Foa2lHOXcwQkFRc0ZBQU9DQVFFQVAyZnUwd2p1CkQ0em1uWFdNRVYxMnhPbnJmZEUwNkxiUysxTXZMcmwyVGluZkwvY1F2M1VuNkZJWlVDVEsrSFJkUWxLdmZsK2cKSTZJUzJHWXJpcWQ1ZWk5Q1Z5YjZORUMrZzZ2RkhCZjZlK0tQaFNYNTVVTDhFQ2pDaUU4aURRdEZjOEdVUWg5MwoxNTFSdmo3VTZxVTV1WnBuck10NURJTy9LQ2F3NHRjcEpJUjhBR3RUUVFQUnRXRExvYzB5UityNWlaVWRpZyszClJNVzNReS9LZEtqamprK0UzYUU1Y2Z1RmFzeHNVZU9IcDcxQ2lhaEtGN3MwRW96QzhJRSsySFNEK3pLTFhpV3kKM203Q2pTdWhyTnhVL1pkNXZKSmkvVUUxWjB6dUh4dFBZenlHTWVQYWN0VFBDK1ZOWDJlU3hHRGgrN2tNS0Y3cQpqcmk1RVlRanNHQU43QT09Ci0tLS0tRU5EIENFUlRJRklDQVRFLS0tLS0K
  tls.key: LS0tLS1CRUdJTiBSU0EgUFJJVkFURSBLRVktLS0tLQpNSUlFb3dJQkFBS0NBUUVBektURWZyODVLWHNrUlRWMXhMaFhudGtoTFNSaEYzbkZXSFhLNnJDdDI3UkV4U0p5ClpsN1lBTzVTZGVPa0JEazV0TmhFWFVMbzcweFhtOFF1NDE0L2h5OG10em9iYldZYmtwc200bGVhdlJjNXU5d0QKSGJKNWE1WmdXUUtpYlN3TWdRa0tHdGZsd2FRQVRxNUxCbTlNdFVzaEI3WFRBcHpHV0Q1dkRzczNuTGV6QmxadApNQXJma2dxT28vNDEwVVZNa1hXTkJxUWk0UWRRQWhNVW5yVHp3Si91ajlQOEVHemh0bEEyRUxNTmpiKy9RVCtOCkdwZURCYUZZdys0N0lrRXlGRjZEUmp0T1VNK3RJa3RDaEJ6TnF3UFVSMnRQK1JZaGtmczhOcmVhdmw3cDQvK1oKeUhFaTBVczBXcHhHaWptdkhMdjdGbDR0Q1dqbktmRGgxVmNnbndJREFRQUJBb0lCQUhIbUV1ZG9qdXdqZWFCNwpqTHljelVmQUdkTUNPSGZVY3A0MWtXYm1SeDNOUzZsYzdzZERhbjI2SjNNdDdBL2R1ZHlKc2lNbUpuZHB5aWtNCkcveTRiQ3RWZHZyc0FHLzNNTWw4U1R3WS9pcllUbTNjbW05ZzhtdUxHcnp2MW05azRPREFvenNsaHQ4cjVHL20KV2lPT3R1Y0FsYld3NFd6R3pTNDRNWi9PUTNtWlZkMHJXVG8vQWxJK1FCOE9oYmNVajZCMUEyaGhqL25Pa1BzUApXcFlsNzV2NURXQXo5b3lPSWZOY0dKaUo3d2pCUTBHZjBDK1ZMTnhGRCtyT25JTDhYQzA2ZHorVERpZHFLVzliCk1FMXdzSXRQVFI2QklQNEZPcm1QNWZvWDFaZmNMY2xCOVpiRW0yZFRrUzI4OG51L1gwU1VqTnl3VkRlVkpEd3UKSjl3VklDRUNnWUVBNkpZbDNxcVJtREpYUlpjR2hlRG82bStJOXBnYXJqOVJHbUVHWmFiRHBmMHl6SEl6YXpCTApmR2pmUHY5N0pnSEgxdHh3U2NuaUZtR0JZNnF2M0RJTWdrdmtraXM5MWdJQ0MrM2Z4N1E5YmhGMmVrMEZqMmM1CmkrM2pMRkZKSytNY0lwZUxYbnNEUFhndVV3VUljSVUxcWhTR0JENTNhM1FVRnU2aXljVW9FeTBDZ1lFQTRUNkYKQXllUktrZjVjNjlMN2Y5a1pLNFc1UnlBQlQzUExOU3NoV0dwclRMMDZnUTlWZkkwY0pzRmVSSHVOeFpsUU1hMQpKR0grQk1tNndJU3d0ODZZazhFK010YXpKZDRQRC8vT1lsc1VzdDEyWW5HTmVycS9zU0lZcnRVL20rcU1ZQUhQCkhPVGVlNE1zZkVMc0F6Wndqc214WkdlU2F2TXJyL2RaMnoybjBuc0NnWUFqM1pOMVpLUVM3aUJiRU5EbXNDbjYKakx4NEdqaHpDang5YnR6SHJCR2JkUkh5U09INDgzZVFkYk9IU1dvNkVDZzZ6NzlaQVpLbGxOK1krT2NwYzJaTwphVm1UMktzdVp4emRyZzdHQXRzK0w5OHZPTlZVcWJ4TUFhRDRZb2lBQmdOK3FoUEp1L3BoN2pobWdPNHVPN3hzCnY4Rnl3aGMwTUxBd1lSZ2xPUXZXK1FLQmdHNDVrUS9kSWYzRjRQM0tyK2FVejBVeHFFU1FNTm5meUcyUTJhZ2cKQmMrYkd4MFYzQW9lRDZsM1F6TmZJZXJWUzlGcUxDVFV5MkQrY3lSWkNyMjRIUlJaUVozUlVUUGJ1aFZEUW5VQgpTMXpJWVhHRlRnM2NLNGg4UGdYNGx6c3VpV2xHR1Z0emFLaWFwWDlkcEc5aUNhem1hS2ZRdzJjUS9yVUszMjhaCmVmSFhBb0dCQUsxQ2d6LzU4MWdOVmhXclM1NXhpMVVQaXd0NjZkU0dhMUUxMmNYRUtrM09Ubk1pd0hiczBZKzAKbnhPdHJMZVBhMzdDRGxrWjhnMEZMUU53TStLM2pPNnQ3RG53UWZDUi9BdjdvdVl4WEFtdFh0NzQvRCs3VjlLdQpxU2lhVTZabjJaNW04T2U4cHZaa2hrZlZNbEsyNEpuVlk5QlFqL29tbk1iZ3l2aENBTzhyCi0tLS0tRU5EIFJTQSBQUklWQVRFIEtFWS0tLS0tCg==
kind: Secret
metadata:
  annotations:
    meta.helm.sh/release-name: harbor
    meta.helm.sh/release-namespace: harbor
  creationTimestamp: "2025-07-08T06:10:36Z"
  labels:
    app: harbor
    app.kubernetes.io/instance: harbor
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: harbor
    app.kubernetes.io/part-of: harbor
    app.kubernetes.io/version: 2.13.1
    chart: harbor
    heritage: Helm
    release: harbor
  name: harbor-ingress
  namespace: harbor
  resourceVersion: "866456"
  uid: f816d990-e733-45a1-999c-51ea9efeb1f7
type: kubernetes.io/tls



# 2. Base64-decode the ca.crt value and save it to a file
$ echo 'LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURFekNDQWZ1Z0F3SUJBZ0lRTlZRd3VoTk5ERjNSQmZPTTM3c25sakFOQmdrcWhraUc5dzBCQVFzRkFEQVUKTVJJd0VBWURWUVFERXdsb1lYSmliM0l0WTJFd0hoY05NalV3TnpBNE1EWXhNRE0wV2hjTk1qWXdOekE0TURZeApNRE0wV2pBVU1SSXdFQVlEVlFRREV3bG9ZWEppYjNJdFkyRXdnZ0VpTUEwR0NTcUdTSWIzRFFFQkFRVUFBNElCCkR3QXdnZ0VLQW9JQkFRRGZQdjFoWDhPYVpYWkNkOXFzSld0RUtWbXdUakRrMlhGZFJQRS93dnV5T2V5UW1tZDYKSzFJUW0zYXpqQThvMGlvSHdScXhZaGNjYjN2SmUxRld1TDFoTTBQTkRWRWRVakxzc0kzWm1pcXlTano1N1YxMgpVeWlabGRqcHVCYlBQcGNhL3dLc0V6bG04SFFhMkI4NndsaEJzUFUxd0dxT1FPbSt5WlVBdEZ1dGVrT2xDNGZHCnM0SFd5MlVJNm5pclFKRXFWc2Foek83dHFaRzhNK1NZQTJLRnpkTEN1R0pkTnpjR3QrTzJ2NjRyNy9SVVRHTk8KVWlpRTIweXhYVFl2NTVrcnd6dlhRNXhKRlBzb0w1YmsycjVxU3lKVTlydmtpNEhjV0pubUhHSFN3OTJTU3FGZwpwVUZKNm4rQkEzbThEdk1CL2pZOW5EcUk1NzluM3dxSm9QaURBZ01CQUFHallUQmZNQTRHQTFVZER3RUIvd1FFCkF3SUNwREFkQmdOVkhTVUVGakFVQmdnckJnRUZCUWNEQVFZSUt3WUJCUVVIQXdJd0R3WURWUjBUQVFIL0JBVXcKQXdFQi96QWRCZ05WSFE0RUZnUVVoZXRCUEpwUkxSKzdOTHFFWEZyN2FXakJ6VDB3RFFZSktvWklodmNOQVFFTApCUUFEZ2dFQkFINDJqOGtSUFNMdjNjaGFEU3BMUDdWeG9obU9lRythajYvMVBSTmFpWjlvZVdDd3kveGVRODlZCnFycEViVmF5anVJUlpFcDh2VE4yL3pxcUFzOGh5MHRSY3NxNzAraE8rUlJpdStNUFJ0ZzNmTlY3RlZVVzd5aXkKMG0xTmpPL1U1RXpxbnlQQkNGbWJZM1Z6Q3Vmdzc0bElFU0JDZHc4SW03ODRTeDBoa1dTRmd4RFY3djZZZDlKdgorN3laek4zQ2Flem1KS2xXT3VNQjByeTZNTC95bldPKzJxWVFZQjB6OGhyTFFwMS9aTzhpb05rMEVtL1pDL1JXCkNocVRmUFFsYzZUbGhJUFkrSnVYRDNVLzlIRXd4YWVWVzVqblI4cXpXdEJMVHUzSTE5ZERSUGJUQVN0N0w4MUoKU1BTb1lJMTFLTCt2cDBPRDY2ankzWjZ6VDQ4enNtbz0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=' | base64 -d > ca.crt
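
Instead of copying the Base64 string by hand, the same file can be produced directly from the Secret with a jsonpath query:

# Extract and decode ca.crt in one step
$ kubectl get secret harbor-ingress -n harbor -o jsonpath='{.data.ca\.crt}' | base64 -d > ca.crt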

2.5.2 Install the certificate for Docker

# Save the Harbor certificate under Docker's certs.d directory, in a subdirectory named after the registry domain.
# It is recommended to do this on every node.
$ mkdir -p /etc/docker/certs.d/my.harbor.docker
$ cp ca.crt /etc/docker/certs.d/my.harbor.docker/
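
Assuming ca.crt was exported on the master node, a small loop like this copies it to the remaining nodes (adjust the IP list to your environment):

# Distribute the certificate to the other Docker hosts
$ for ip in 10.20.1.140 10.20.1.141 10.20.1.142; do ssh root@$ip "mkdir -p /etc/docker/certs.d/my.harbor.docker"; scp ca.crt root@$ip:/etc/docker/certs.d/my.harbor.docker/; done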

2.5.3 Edit Docker's daemon.json

sudo vi /etc/docker/daemon.json
# Add the following entry; my.harbor.docker is the custom Harbor domain
{
  "insecure-registries": ["my.harbor.docker"]
}

# Save the file, then reload and restart Docker
systemctl daemon-reload
systemctl restart docker
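
After the restart, you can confirm that Docker picked up the new registry entry:

# The custom domain should show up under "Insecure Registries"
$ docker info | grep -A 3 "Insecure Registries"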

2.5.4 Log in to Harbor with Docker

# Add a hosts entry for the Harbor domain on the server
$ echo "10.20.1.142 my.harbor.docker" >> /etc/hosts


# Log in to the private Harbor registry (default credentials: admin / Harbor12345)
$ docker login -u admin -p Harbor12345 my.harbor.docker
WARNING! Using --password via the CLI is insecure. Use --password-stdin.

WARNING! Your credentials are stored unencrypted in '/root/.docker/config.json'.
Configure a credential helper to remove this warning. See
https://docs.docker.com/go/credential-store/

Login Succeeded
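
As the warning suggests, the password can also be piped in on stdin instead of being passed as a command-line flag:

# Supply the password on stdin, as recommended by the warning above
$ echo 'Harbor12345' | docker login -u admin --password-stdin my.harbor.docker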

2.5.5 Access Harbor from a browser

Add a hosts entry on the client machine:

10.20.1.142 my.harbor.docker

Then open: https://my.harbor.docker/

(Screenshot: logging in to Harbor in the browser)

2.5.6 Test pushing an image to the Harbor registry

Create a new project in Harbor first.

(Screenshot: creating a new project)

# Tag the image for the Harbor registry
$ docker tag busybox:latest my.harbor.docker/my-harbor/busybox:latest

# List the images
$ docker images | grep busybox
busybox                                                                       latest                ff7a7936e930   9 months ago    4.28MB
my.harbor.docker/my-harbor/busybox                                            latest                ff7a7936e930   9 months ago    4.28MB

# Push the image to Harbor
$ docker push my.harbor.docker/my-harbor/busybox:latest
The push refers to repository [my.harbor.docker/my-harbor/busybox]
068f50152bbc: Pushed 
latest: digest: sha256:f2e98ad37e4970f48e85946972ac4acb5574c39f27c624efbd9b17a3a402bfe4 size: 527
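
From any other node that has the certificate and hosts entry in place, the image can be pulled back to confirm end-to-end access to the registry:

# Pull the image from Harbor on another node
$ docker pull my.harbor.docker/my-harbor/busybox:latest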

Check Harbor again in the browser: the image now appears in the project.

(Screenshot: the pushed image listed in Harbor)

2.6 Uninstall Harbor

$ helm uninstall harbor -n harbor
$ kubectl delete pvc -n harbor --all
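
Note that helm uninstall leaves the PVCs in place (the chart was installed with resourcePolicy: "keep", hence the explicit kubectl delete pvc above), and because the StorageClass was created with archiveOnDelete: "true", the provisioner archives each deleted volume's data on the NFS share rather than wiping it; check the export root on the NFS server if you want to reclaim that space:

# Archived PV directories remain under the NFS export root
$ ls /root/data/harbor/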

References

https://www.cnblogs.com/qlsem/p/17714509.html

