Common Problems and Solutions for Interconnecting with the Tanzu Kubernetes Cluster

This section describes the common problems and solutions for interconnecting with the Tanzu Kubernetes cluster. The following problems currently occur during interconnection:

  • A Pod cannot be created because the PSP permission is not created.
  • The kubelet mount point on the host differs from that of native Kubernetes. As a result, a volume fails to be mounted.
  • The livenessprobe container port conflicts with the vSphere CSI port used by Tanzu. As a result, the container restarts repeatedly.
  • A generic ephemeral volume fails to be created because the PSP policy does not allow ephemeral volumes.

1 - A Pod Cannot Be Created Because the PSP Permission Is Not Created

Symptom

When huawei-csi-controller and huawei-csi-node are deployed, only the Deployment and DaemonSet resources are created successfully, and no Pods are created for the controller or node.

Root Cause Analysis

The service account used to create the resources does not have the "use" permission on the PSP policy.
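
To confirm that the PSP admission plugin is the cause, you can check the namespace events; the huawei-csi namespace used below is the typical installation namespace and may differ in your environment.

    # Look for admission errors reported by the Deployment/DaemonSet controllers.
    kubectl get events -n huawei-csi | grep -i forbidden

Events containing "unable to admit pod" indicate that Pod creation is being rejected by the PSP admission plugin.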

Solution or Workaround

  1. Use a remote access tool, such as PuTTY, to log in to any master node in the Kubernetes cluster through the management IP address.

  2. Run the vi psp-use.yaml command to create a file named psp-use.yaml.

    vi psp-use.yaml
    
  3. Configure the psp-use.yaml file.

    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRole
    metadata:
      name: huawei-csi-psp-role
    rules:
    - apiGroups: ['policy']
      resources: ['podsecuritypolicies']
      verbs: ['use']
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRoleBinding
    metadata:
      name: huawei-csi-psp-role-cfg
    roleRef:
      kind: ClusterRole
      name: huawei-csi-psp-role
      apiGroup: rbac.authorization.k8s.io
    subjects:
    - kind: Group
      apiGroup: rbac.authorization.k8s.io
      name: system:serviceaccounts:huawei-csi
    - kind: Group
      apiGroup: rbac.authorization.k8s.io
      name: system:serviceaccounts:default
    
  4. Run the following command to create the PSP permission.

    kubectl create -f psp-use.yaml
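
After creating the resources, you can optionally verify that the ClusterRole and ClusterRoleBinding exist and that the controller and node Pods are now created; the huawei-csi namespace below is an assumption based on the typical installation.

    kubectl get clusterrole huawei-csi-psp-role
    kubectl get clusterrolebinding huawei-csi-psp-role-cfg
    kubectl get pod -n huawei-csi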
    

2 - Changing the Mount Point of a Host

Symptom

A Pod fails to be created, and the error message "mount point does not exist" is recorded in the Huawei CSI logs.
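
To locate the error in your environment, one option is to inspect the logs of the huawei-csi-node Pods; the namespace and workload name below assume a default installation.

    kubectl logs daemonset/huawei-csi-node -n huawei-csi --all-containers | grep "mount point"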

Root Cause Analysis

The pods-dir directory configured for huawei-csi-node follows the kubelet path of a native Kubernetes cluster, which is different from the kubelet path used by the Tanzu Kubernetes cluster.

Solution or Workaround

  1. Go to the helm/esdk/ directory and run the vi values.yaml command to open the configuration file.

    vi values.yaml
    
  2. Change the value of kubeletConfigDir to the actual installation directory of kubelet.

    # Specify kubelet config dir path.
    # kubernetes and openshift is usually /var/lib/kubelet
    # Tanzu is usually /var/vcap/data/kubelet
    kubeletConfigDir: /var/vcap/data/kubelet
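
If you are unsure which directory kubelet actually uses on a Tanzu worker node, you can inspect the running kubelet process; this is only a sketch, and if no --root-dir flag is present the kubelet default applies.

    # Run on a worker node: look for the --root-dir argument of kubelet.
    ps -ef | grep kubelet | grep -v grep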
    

3 - Changing the Default Port of the livenessprobe Container

Symptom

The livenessprobe container of the huawei-csi-controller component keeps restarting.

Root Cause Analysis

The default port (9808) of the livenessprobe container of huawei-csi-controller conflicts with the existing vSphere CSI port of Tanzu.
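
To confirm the conflict, you can check which process is already listening on port 9808 on the node where the huawei-csi-controller Pod is scheduled; ss is used here as an example, and netstat works equally well if available.

    # Run on the node hosting the huawei-csi-controller Pod.
    ss -tlnp | grep 9808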

Solution or Workaround

Change the default port of the livenessprobe container to an idle port.

  1. Go to the helm/esdk directory and run the vi values.yaml command to open the configuration file.

    vi values.yaml
    
  2. Change the default value 9808 of controller.livenessProbePort to an idle port, for example, 9809.

    controller:
      livenessProbePort: 9809
    
  3. Update Huawei CSI using Helm. For details, see Upgrading Huawei CSI.
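
The exact command depends on your Helm release name and namespace; the following is only a sketch assuming the release is named huawei-csi, installed in the huawei-csi namespace, and that you are still in the helm/esdk directory.

    helm upgrade huawei-csi ./ -n huawei-csi -f values.yaml
    # Afterwards, confirm that the livenessprobe container no longer restarts.
    kubectl get pod -n huawei-csi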

4 - Failed to Create an Ephemeral Volume

Symptom

A generic ephemeral volume fails to be created, and the error message PodSecurityPolicy: unable to admit pod: [spec.volumes[0]: Invalid value: "ephemeral": ephemeral volumes are not allowed to be used] is displayed.

Root Cause Analysis

The current PSP policy does not contain the permission to use ephemeral volumes.
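
For reference, a generic ephemeral volume is declared directly in the Pod spec, so the PSP volumes list must include the ephemeral type for such Pods to be admitted. The following minimal manifest is a hypothetical example (Pod name, image, and StorageClass are placeholders) that would be rejected with the error above.

    apiVersion: v1
    kind: Pod
    metadata:
      name: ephemeral-demo
    spec:
      containers:
      - name: app
        image: nginx
        volumeMounts:
        - mountPath: /data
          name: scratch
      volumes:
      - name: scratch
        ephemeral:
          volumeClaimTemplate:
            spec:
              accessModes: ["ReadWriteOnce"]
              storageClassName: my-huawei-sc   # placeholder StorageClass
              resources:
                requests:
                  storage: 10Gi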

Solution or Workaround

Add the permission to use ephemeral volumes to the default PSPs pks-privileged and pks-restricted. The following example modifies pks-privileged:

  1. Use a remote access tool, such as PuTTY, to log in to any master node in the Kubernetes cluster through the management IP address.

  2. Run the following command to modify the pks-privileged configuration.

    kubectl edit psp pks-privileged
    
  3. Add ephemeral to spec.volumes. The following is an example.

    # Please edit the object below. Lines beginning with a '#' will be ignored,
    # and an empty file will abort the edit. If an error occurs while saving this file will be
    # reopened with the relevant failures.
    #
    apiVersion: policy/v1beta1
    kind: PodSecurityPolicy
    metadata:
      annotations:
        apparmor.security.beta.kubernetes.io/allowedProfileNames: '*'
        seccomp.security.alpha.kubernetes.io/allowedProfileNames: '*'
      creationTimestamp: "2022-10-11T08:07:00Z"
      name: pks-privileged
      resourceVersion: "1227763"
      uid: 2f39c44a-2ce7-49fd-87ca-2c5dc3bfc0c6
    spec:
      allowPrivilegeEscalation: true
      allowedCapabilities:
      - '*'
      supplementalGroups:
        rule: RunAsAny
      volumes:
      - glusterfs
      - hostPath
      - iscsi
      - nfs
      - persistentVolumeClaim
      - ephemeral
    
  4. Run the following command to check whether the addition is successful.

    kubectl get psp pks-privileged -o yaml
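
If your workloads are admitted by pks-restricted instead, repeat the same modification there; the commands mirror the ones above.

    kubectl edit psp pks-restricted
    kubectl get psp pks-restricted -o yaml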