1 - Installing Helm 3

This section describes how to install Helm 3.

For details, see https://helm.sh/docs/intro/install/.

Prerequisites

Ensure that the master node in the Kubernetes cluster can access the Internet.

Procedure

  1. Run the following command to download the Helm 3 installation script.

    curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/master/scripts/get-helm-3
    
  2. Run the following command to modify the permissions of the Helm 3 installation script so that it is executable.

    chmod 700 get_helm.sh
    
  3. Determine the Helm version to be installed based on the version mapping between Helm and Kubernetes. For details about the version mapping, see Helm Version Support Policy. Then set the DESIRED_VERSION environment variable to the Helm version to be installed and run the installation command.

    DESIRED_VERSION=v3.9.0 ./get_helm.sh
    
  4. Run the following command to check whether Helm 3 of the specified version is successfully installed.

    helm version
    

    If the following information is displayed, the installation is successful.

    version.BuildInfo{Version:"v3.9.0", GitCommit:"7ceeda6c585217a19a1131663d8cd1f7d641b2a7", GitTreeState:"clean", GoVersion:"go1.17.5"}
    
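The check in step 4 can also be scripted by extracting the version field from the helm version output. The sketch below parses the sample output shown above; in practice, capture the real output with OUTPUT=$(helm version).

```shell
# Sample output line copied from step 4; in practice capture it with:
#   OUTPUT=$(helm version)
OUTPUT='version.BuildInfo{Version:"v3.9.0", GitCommit:"7ceeda6c585217a19a1131663d8cd1f7d641b2a7", GitTreeState:"clean", GoVersion:"go1.17.5"}'
# Extract the quoted value after {Version: (anchoring on the brace avoids
# matching the GoVersion field).
VERSION=$(echo "$OUTPUT" | sed -n 's/.*{Version:"\([^"]*\)".*/\1/p')
echo "$VERSION"
```

If the printed version does not match the DESIRED_VERSION you set in step 3, rerun the installation script.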

2 - Collecting Information

2.1 - Obtaining the CSI Version

This section describes how to view the CSI version.

Procedure

  1. Use a remote access tool, such as PuTTY, to log in to any master node in the Kubernetes cluster through the management IP address.

  2. Run the following command to query information about the node where huawei-csi-node resides.

    kubectl get pod -A -o wide | grep huawei-csi-node
    

    The following is an example of the command output.

    NAMESPACE     NAME                                       READY   STATUS    RESTARTS        AGE     IP               NODE            NOMINATED NODE   READINESS GATES
    huawei-csi    huawei-csi-node-87mss                      3/3     Running   0               6m41s   192.168.129.155      node-1          <none>           <none>
    huawei-csi    huawei-csi-node-xp8cc                      3/3     Running   0               6m41s   192.168.129.156      node-2          <none>           <none>
    
  3. Use a remote access tool, such as PuTTY, to log in to any node where huawei-csi-node resides through the node IP address.

  4. Run the following command to view the CSI version.

    cat /var/lib/kubelet/plugins/csi.huawei.com/version
    

    The version information is displayed as follows:

    4.5.0
    
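To list only the node names that run huawei-csi-node (for step 3), the NODE column can be cut out of the step 2 output with awk. The sketch below runs on the sample lines reproduced from step 2; in practice, pipe the real command instead: kubectl get pod -A -o wide | grep huawei-csi-node | awk '{print $8}'.

```shell
# Sample lines from step 2 (fields: NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE ...).
SAMPLE='huawei-csi huawei-csi-node-87mss 3/3 Running 0 6m41s 192.168.129.155 node-1 <none> <none>
huawei-csi huawei-csi-node-xp8cc 3/3 Running 0 6m41s 192.168.129.156 node-2 <none> <none>'
# The NODE column is the 8th whitespace-separated field.
echo "$SAMPLE" | awk '{print $8}'
```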

2.2 - Viewing Huawei CSI Logs

Viewing Logs of the huawei-csi-controller Service

  1. Run the following command to obtain the node where huawei-csi-controller is located.

    kubectl get pod -A -o wide | grep huawei
    

    The following is an example of the command output, where IP indicates the node IP address and NODE indicates the node name.

    NAME                                    READY   STATUS    RESTARTS   AGE     IP             NODE         NOMINATED NODE   READINESS GATES
    huawei-csi-controller-695b84b4d8-tg64l  9/9     Running  0          14s     <host1-ip>     <host1-name> <none>           <none>
    
  2. Use a remote access tool, such as PuTTY, to log in to the node where the huawei-csi-controller service resides in the Kubernetes cluster through the management IP address.

  3. Go to the log directory.

    cd /var/log/huawei
    
  4. Run the following command to view the customized output logs of the container.

    vi huawei-csi-controller
    
  5. Go to the container directory.

    cd /var/log/containers
    
  6. Run the following command to view the standard output logs of the container.

    vi huawei-csi-controller-<name>_huawei-csi_huawei-csi-driver-<container-id>.log
    
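The file name in step 6 follows the standard Kubernetes container log naming scheme, <pod-name>_<namespace>_<container-name>-<container-id>.log. A sketch of how the path is assembled, using the pod name from the sample output above and a hypothetical container ID:

```shell
# Pod name taken from the sample output in step 1; the container ID below is
# hypothetical — substitute the values from your cluster.
POD_NAME=huawei-csi-controller-695b84b4d8-tg64l
CONTAINER_ID=0123456789ab
echo "/var/log/containers/${POD_NAME}_huawei-csi_huawei-csi-driver-${CONTAINER_ID}.log"
```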

Viewing Logs of the huawei-csi-node Service

  1. Run the following command to obtain the node where huawei-csi-node is located.

    kubectl get pod -A -o wide | grep huawei
    

    The following is an example of the command output, where IP indicates the node IP address and NODE indicates the node name.

    NAME                     READY   STATUS    RESTARTS   AGE     IP             NODE         NOMINATED NODE   READINESS GATES
    huawei-csi-node-g6f7z    3/3     Running  0          14s     <host2-ip>     <host2-name> <none>           <none>
    
  2. Use a remote access tool, such as PuTTY, to log in to the node where the huawei-csi-node service resides in the Kubernetes cluster through the management IP address.

  3. Go to the log directory.

    cd /var/log/huawei
    
  4. Run the following command to view the customized output logs of the container.

    vi huawei-csi-node
    
  5. Go to the container directory.

    cd /var/log/containers
    
  6. Run the following command to view the standard output logs of the container.

    vi huawei-csi-node-<name>_huawei-csi_huawei-csi-driver-<container-id>.log
    

2.3 - Collecting Logs

Performing Check Before Collection

  1. Use a remote access tool, such as PuTTY, to log in to the node where the oceanctl tool is installed in the Kubernetes cluster through the management IP address.

  2. Run the following command and check that the displayed version is v4.5.0.

    oceanctl version
    

    The following is an example of the command output.

    Oceanctl Version: v4.5.0
    
  3. Run the oceanctl collect logs --help command. The following information is displayed.

    $ oceanctl collect logs --help
    Collect logs of one or more nodes in specified namespace in Kubernetes
    
    Usage:
      oceanctl collect logs [flags]
    
    Examples:
      # Collect logs of all nodes in specified namespace
      oceanctl collect logs -n <namespace>
    
      # Collect logs of specified node in specified namespace
      oceanctl collect logs -n <namespace> -N <node>
    
      # Collect logs of all nodes in specified namespace
      oceanctl collect logs -n <namespace> -a
    
      # Collect logs of all nodes in specified namespace with a maximum of 50 nodes collected at the same time
      oceanctl collect logs -n <namespace> -a --threads-max=50
    
      # Collect logs of specified node in specified namespace
      oceanctl collect logs -n <namespace> -N <node> -a
    
    Flags:
      -a, --all                Collect all nodes messages
      -h, --help               help for logs
      -n, --namespace string   namespace of resources
      -N, --nodename string    Specify the node for which information is to be collected.
          --threads-max int    set maximum number[1~1000] of threads for nodes to be collected. (default 50)
    
    Global Flags:
          --log-dir string   Specify the directory for printing log files. (default "/var/log/huawei")
    
  4. Run the following command to check whether a Pod is started properly. In the command, huawei-csi indicates the namespace for installing CSI.

    kubectl get deployment -n huawei-csi
    

    The following is an example of the command output.

    NAME                    READY   UP-TO-DATE   AVAILABLE   AGE
    huawei-csi-controller   1/1     1            1           21h
    

Collecting All Logs in the CSI Namespace Using oceanctl

  1. Use a remote access tool, such as PuTTY, to log in to the node checked in Performing Check Before Collection through the management IP address.

  2. Run the oceanctl collect logs -n <namespace> -a --threads-max=<max_node_processing_num> command to collect CSI logs of all nodes where CSI containers reside in the cluster. In the command, threads-max indicates the maximum number of nodes for which logs can be collected at the same time. The default value is 50. You can set the value based on the host performance and load.

    oceanctl collect logs -n huawei-csi -a --threads-max=10
    
  3. Check the log package generated in the /tmp directory. You can run the unzip <zip_name> -d collect_logs command to decompress the log package. In the preceding command, <zip_name> indicates the package name.

    # date
    Wed Sep 20 02:49:24 EDT 2023
    
    # ls
    huawei-csi-2023-09-20-02:48:22-all.zip
    
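Judging from the example file names above, the generated package appears to follow the pattern <namespace>-<timestamp>-<scope>.zip, where scope is "all" for a full collection (-a) or the node name for a single-node collection (-N). A sketch reconstructing the example name:

```shell
# Values reproduce the example above; the name pattern is an observation from
# the sample output, not a documented contract.
NAMESPACE=huawei-csi
TIMESTAMP=2023-09-20-02:48:22
SCOPE=all
echo "/tmp/${NAMESPACE}-${TIMESTAMP}-${SCOPE}.zip"
```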

Collecting the Log of a Single CSI Node Using oceanctl

  1. Use a remote access tool, such as PuTTY, to log in to the node checked in Performing Check Before Collection through the management IP address.

  2. Run the oceanctl collect logs -n <namespace> -N <nodeName> command to collect the CSI logs of the specified node.

    oceanctl collect logs -n huawei-csi -N node-1
    
  3. Check the log package generated in the /tmp directory. You can run the unzip <zip_name> -d collect_logs command to decompress the log package. In the preceding command, <zip_name> indicates the package name.

    # date
    Thu Sep 21 04:08:47 EDT 2023
    
    # ls
    huawei-csi-2023-09-21-04:05:15-node-1.zip
    

3 - Downloading a Container Image

Downloading a Container Image Using containerd

  1. Run the following command to download an image to a local path. In the command, image:tag indicates the image to be pulled and its tag.

    ctr image pull <image>:<tag>
    
  2. Run the following command to export the image to a file. In the command, image:tag indicates the image to be exported, and file indicates the name of the exported image file.

    ctr image export <file>.tar <image>:<tag>
    

Downloading a Container Image Using Docker

  1. Run the following command to download an image to a local path. In the command, image:tag indicates the image to be pulled.

    docker pull <image>:<tag>
    
  2. Run the following command to export the image to a file. In the command, image:tag indicates the image to be exported, and file indicates the name of the exported image file.

    docker save <image>:<tag> -o <file>.tar
    
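When several images need to be pulled and exported, the two Docker commands above can be batched. The sketch below is a dry run: the image names are hypothetical placeholders, and each command pair is printed for review rather than executed (remove the echo wrappers to run them).

```shell
# Hypothetical image list for illustration; substitute your real images.
IMAGES="huawei-csi:4.5.0 busybox:1.36"
for IMG in $IMAGES; do
  # Derive an export file name from the image name and tag,
  # e.g. huawei-csi:4.5.0 -> huawei-csi-4.5.0.tar
  FILE="${IMG%%:*}-${IMG##*:}.tar"
  echo "docker pull $IMG"
  echo "docker save $IMG -o $FILE"
done
```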

Downloading a Container Image Using Podman

  1. Run the following command to download an image to a local path. In the command, image:tag indicates the image to be pulled.

    podman pull <image>:<tag>
    
  2. Run the following command to export the image to a file. In the command, image:tag indicates the image to be exported, and file indicates the name of the exported image file.

    podman save <image>:<tag> -o <file>.tar 
    

4 - Updating the huawei-csi-controller or huawei-csi-node Service

Perform this operation when you need to update the huawei-csi-controller or huawei-csi-node service, for example, changing the number of replicas of the huawei-csi-controller service.

Procedure

  1. Use a remote access tool, such as PuTTY, to log in to any master node in the Kubernetes cluster through the management IP address.

  2. Go to the /helm/esdk directory and run the following command to obtain the original service configuration file. helm-huawei-csi indicates the Helm chart name specified during the installation of the earlier version, and huawei-csi indicates the Helm chart namespace specified during the installation of the earlier version. For details about the component package path, see Table 1.

    helm get values helm-huawei-csi -n huawei-csi -a > ./update-values.yaml
    
  3. Run the vi update-values.yaml command to open the file obtained in 2 and modify the configuration items by referring to Parameters in the values.yaml File of Helm. After the modification, press Esc and enter :wq! to save the modification.

  4. Run the following command to update Huawei CSI services.

    helm upgrade helm-huawei-csi ./ -n huawei-csi  -f ./update-values.yaml
    

5 - Modifying the Log Output Mode

huawei-csi supports two log output modes: file and console. file indicates that logs are written to a fixed directory (/var/log/huawei), and console indicates that logs are written to the standard output of the container. You can set the log output mode as required. The default mode is file.

Procedure

  1. Use a remote access tool, such as PuTTY, to log in to any master node in the Kubernetes cluster through the management IP address.

  2. Go to the /helm/esdk directory and run the following command to obtain the original service configuration file. helm-huawei-csi indicates the Helm chart name specified during the installation of the earlier version, and huawei-csi indicates the Helm chart namespace specified during the installation of the earlier version. For details about the component package path, see Table 1.

    helm get values helm-huawei-csi -n huawei-csi -a > ./update-values.yaml
    
  3. Run the vi update-values.yaml command to open the file obtained in 2 and modify the configuration items. After the modification, press Esc and enter :wq! to save the modification.

    # The CSI driver parameter configuration
    csiDriver:
      # Driver name, it is strongly recommended not to modify this parameter
      # The CCE platform needs to modify this parameter, e.g. csi.oceanstor.com
      driverName: csi.huawei.com
      # Endpoint, it is strongly recommended not to modify this parameter
      endpoint: /csi/csi.sock
      # DR Endpoint, it is strongly recommended not to modify this parameter
      drEndpoint: /csi/dr-csi.sock
      # Maximum number of concurrent disk scans or detaches, support 1~10
      connectorThreads: 4
      # Flag to enable or disable volume multipath access, support [true, false]
      volumeUseMultipath: true
      # Multipath software used by fc/iscsi. support [DM-multipath, HW-UltraPath, HW-UltraPath-NVMe]
      scsiMultipathType: DM-multipath
      # Multipath software used by roce/fc-nvme. only support [HW-UltraPath-NVMe]
      nvmeMultipathType: HW-UltraPath-NVMe
      # Timeout interval for waiting for multipath aggregation when DM-multipath is used on the host. support 1~600
      scanVolumeTimeout: 3
      # Timeout interval for running command on the host. support 1~600
      execCommandTimeout: 30
      # check the number of paths for multipath aggregation
      # Allowed values:
      #   true: the number of paths aggregated by DM-multipath is equal to the number of online paths
      #   false: the number of paths aggregated by DM-multipath is not checked.
      # Default value: false
      allPathOnline: false
      # Interval for updating backend capabilities. support 60~600
      backendUpdateInterval: 60
      # Huawei-csi-controller log configuration
      controllerLogging:
        # Log record type, support [file, console]
        module: file
        # Log Level, support [debug, info, warning, error, fatal]
        level: info
        # Directory for storing logs
        fileDir: /var/log/huawei
        # Size of a single log file
        fileSize: 20M
        # Maximum number of log files that can be backed up.
        maxBackups: 9
      # Huawei-csi-node log configuration
      nodeLogging:
        # Log record type, support [file, console]
        module: file
        # Log Level, support [debug, info, warning, error, fatal]
        level: info
        # Directory for storing logs
        fileDir: /var/log/huawei
        # Size of a single log file
        fileSize: 20M
        # Maximum number of log files that can be backed up.
        maxBackups: 9
    
  4. Run the following command to update the log configuration.

    helm upgrade helm-huawei-csi ./ -n huawei-csi  -f ./update-values.yaml
    
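The edit in step 3 can also be made non-interactively. The sketch below demonstrates the substitution on a fragment written to /tmp; for real use, run the same sed against your update-values.yaml. This simple pattern is safe here only because, in the values.yaml excerpt above, the module key appears solely under the two logging sections.

```shell
# Write a demo fragment mirroring the two logging sections shown above.
printf '  controllerLogging:\n    module: file\n  nodeLogging:\n    module: file\n' > /tmp/demo-values.yaml
# Switch both log output modes from file to console.
sed -i 's/module: file/module: console/' /tmp/demo-values.yaml
cat /tmp/demo-values.yaml
```

After the change, apply it with the helm upgrade command from step 4.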

6 - Enabling the ReadWriteOncePod Feature Gate

The ReadWriteOncePod access mode is the fourth access mode introduced by Kubernetes v1.22 for PVs and PVCs. If you create a Pod using a PVC in ReadWriteOncePod access mode, Kubernetes ensures that the Pod is the only Pod in the cluster that can read or write the PVC.

The ReadWriteOncePod access mode is an alpha feature in Kubernetes v1.22/1.23/1.24. Therefore, you need to enable the ReadWriteOncePod feature in feature-gates of kube-apiserver, kube-scheduler, and kubelet before using the access mode.

Currently, the CCE or CCE Agile platform does not support the ReadWriteOncePod feature gate.

Procedure

  1. Enable the ReadWriteOncePod feature gate for kube-apiserver.

    1. Use a remote access tool, such as PuTTY, to log in to any master node in the Kubernetes cluster through the management IP address.

    2. Run the vi /etc/kubernetes/manifests/kube-apiserver.yaml command, press I or Insert to enter the insert mode, and add --feature-gates=ReadWriteOncePod=true to the kube-apiserver container. After the modification is complete, press Esc and enter :wq! to save the modification.

      ...
      spec:
        containers:
        - command:
          - kube-apiserver
          - --feature-gates=ReadWriteOncePod=true
          ...
      

      After the editing is complete, Kubernetes will automatically apply the updates.

  2. Enable the ReadWriteOncePod feature gate for kube-scheduler.

    1. Use a remote access tool, such as PuTTY, to log in to any master node in the Kubernetes cluster through the management IP address.

    2. Run the vi /etc/kubernetes/manifests/kube-scheduler.yaml command, press I or Insert to enter the insert mode, and add --feature-gates=ReadWriteOncePod=true to the kube-scheduler container. After the modification is complete, press Esc and enter :wq! to save the modification.

      ...
      spec:
        containers:
        - command:
          - kube-scheduler
          - --feature-gates=ReadWriteOncePod=true
          ...
      

      After the editing is complete, Kubernetes will automatically apply the updates.

  3. Enable the ReadWriteOncePod feature gate for kubelet.

    Dynamic kubelet configuration has been deprecated since v1.22 and was removed in v1.24. Therefore, you need to perform the following operations on kubelet on each worker node in the cluster.

    1. Use a remote access tool, such as PuTTY, to log in to any worker node in the Kubernetes cluster through the management IP address.

    2. Run the vi /var/lib/kubelet/config.yaml command, press I or Insert to enter the insert mode, and add ReadWriteOncePod: true to the featureGates field of the KubeletConfiguration object. If the featureGates field does not exist, add it as well. After the modification is complete, press Esc and enter :wq! to save the modification.

      apiVersion: kubelet.config.k8s.io/v1beta1
      featureGates:
        ReadWriteOncePod: true
        ...
      

      The default path of the kubelet configuration file is /var/lib/kubelet/config.yaml. Enter the path based on site requirements.

    3. After the configuration is complete, run the systemctl restart kubelet command to restart kubelet.

7 - Configuring Access to the Kubernetes Cluster as a Non-root User

Procedure

  1. Copy the authentication file of the Kubernetes cluster. Replace /etc/kubernetes/admin.conf with the actual path of the authentication file.

    mkdir -p $HOME/.kube
    sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
    
  2. Change the user and user group of the authentication file.

    sudo chown $(id -u):$(id -g) $HOME/.kube/config
    
  3. Configure the KUBECONFIG environment variable of the current user. The following uses Ubuntu 20.04 as an example.

    echo "export KUBECONFIG=$HOME/.kube/config" >> ~/.bashrc
    source ~/.bashrc
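Step 3 appends to ~/.bashrc unconditionally, so running it twice leaves duplicate lines. An idempotent variant is sketched below, demonstrated on a temporary file; point RC at "$HOME/.bashrc" for real use. grep -qxF matches the exact line, so the append only happens when the line is missing.

```shell
# Demo on a temporary file; for real use set RC="$HOME/.bashrc".
RC=$(mktemp)
# Single quotes keep $HOME unexpanded; the shell expands it when .bashrc is sourced.
LINE='export KUBECONFIG=$HOME/.kube/config'
grep -qxF "$LINE" "$RC" || echo "$LINE" >> "$RC"
grep -qxF "$LINE" "$RC" || echo "$LINE" >> "$RC"   # second run is a no-op
cat "$RC"
```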