Troubleshooting

Symptom: Node pods crash with errors about FUSE device access.

Cause: The worker node does not have the FUSE kernel module loaded or /dev/fuse is not accessible.

Solution:

  1. Verify the FUSE device exists on the host:
    ls -la /dev/fuse
  2. If missing, load the kernel module:
    sudo modprobe fuse
  3. To make it persistent across reboots, add fuse to /etc/modules-load.d/fuse.conf.
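The check in step 1 can be scripted; a minimal sketch (read-only, no root required) that reports whether the FUSE device is available on the host:

```shell
# Report whether /dev/fuse is available on this host.
# (Loading the module itself requires root: sudo modprobe fuse)
if [ -e /dev/fuse ]; then
  status="present"
else
  status="missing"
fi
echo "/dev/fuse is ${status}"
```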

Most Linux distributions (Ubuntu, Amazon Linux 2, RHEL 8+) include FUSE3 by default.

Symptom: Node DaemonSet pods fail to schedule with “privileged” security policy errors.

Cause: The cluster’s Pod Security Policy, Pod Security Standards, or OPA/Gatekeeper rules prevent privileged pods.

Solution: The flexFS CSI node pods require privileged mode because they need to:

  • Access /dev/fuse for FUSE mounts
  • Perform bind mounts into the kubelet directory tree with bidirectional mount propagation

You must allow privileged pods in the kube-system namespace for the flexfs-csi-node DaemonSet. Example for Pod Security Standards:

apiVersion: v1
kind: Namespace
metadata:
  name: kube-system
  labels:
    pod-security.kubernetes.io/enforce: privileged
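Equivalently, the label can be applied with kubectl. This assumes your cluster enforces Pod Security Standards via Pod Security Admission; if you use OPA/Gatekeeper, add an exemption for the flexfs-csi-node DaemonSet in your constraint instead:

```shell
# Allow privileged pods in kube-system via Pod Security Standards
kubectl label namespace kube-system \
  pod-security.kubernetes.io/enforce=privileged --overwrite
```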

Symptom: A pod using a flexFS PVC stays in ContainerCreating state.

Diagnosis:

kubectl describe pod <pod-name>
kubectl logs -n kube-system <flexfs-csi-node-pod> -c flexfs-csi-driver

Common causes:

  • Admin server unreachable. Log message: request creation failed or connection timeouts. Solution: verify that the adminAddr in your Secret is correct and reachable from the cluster.
  • Invalid account token. Log message: statusCode: 401 or statusCode: 403. Solution: verify that the token in your Secret is valid.
  • Volume does not exist. Log message: statusCode: 404. Solution: for static provisioning, verify that the volumeHandle matches an existing volume name.
  • Encryption secret missing. Log message: Error: encryption secret must be at least 8 characters. Solution: add the secret field to your Kubernetes Secret.
  • Stale FUSE mount. Log message: detected a broken base mount. Solution: the driver will attempt to unmount it automatically; if it persists, manually unmount from the node.
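To scan the node-pod logs for all of these signatures in one pass (the pod name is a placeholder, as in the Diagnosis commands above):

```shell
kubectl logs -n kube-system <flexfs-csi-node-pod> -c flexfs-csi-driver \
  | grep -Ei 'request creation failed|connection timeout|statusCode: (401|403|404)|encryption secret|broken base mount'
```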

Symptom: Volumes fail to mount with “missing admin addr” or “missing account token”.

Solution: Verify your Secret contains the correct field names (adminAddr, token) and that the Secret is in the correct namespace:

kubectl get secret flexfs-secret -n default -o yaml

The Secret must be referenced correctly in both the StorageClass (for dynamic provisioning) and the PersistentVolume (for static provisioning).
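Secret data is base64-encoded, so reading the raw YAML can hide typos in the values. A sketch that decodes the two fields the driver reads:

```shell
# Decode the adminAddr and token fields from the Secret
kubectl get secret flexfs-secret -n default -o jsonpath='{.data.adminAddr}' | base64 -d; echo
kubectl get secret flexfs-secret -n default -o jsonpath='{.data.token}' | base64 -d; echo
```

If either command prints nothing, the field name in the Secret does not match what the driver expects.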

Symptom: CSI pods show ImagePullBackOff or ErrImagePull.

Solution:

  1. Verify the image name and tag in your values or manifests. The default image is flexfs/csi-driver-1.9:latest.
  2. If using a private registry, create an imagePullSecret and reference it in the pod spec.
  3. Verify network connectivity from the nodes to the container registry.
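For step 2, a sketch of creating the pull secret (the registry credentials and the secret name flexfs-regcred are placeholders):

```shell
kubectl create secret docker-registry flexfs-regcred \
  --docker-server=<registry> \
  --docker-username=<user> \
  --docker-password=<password> \
  -n kube-system
```

Then reference flexfs-regcred under imagePullSecrets in the pod specs of both the Deployment and the DaemonSet.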

Symptom: Pods cannot mount volumes; the CSI driver appears healthy but volumes never attach.

Cause: The kubelet on your nodes uses a non-standard directory (not /var/lib/kubelet).

Solution: If your kubelet uses a custom --root-dir, update the DaemonSet volume mounts to match:

  • kubelet-dir hostPath
  • registration-dir hostPath
  • The --kubelet-registration-path argument on the csi-node-driver-registrar container
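As a sketch, assuming a kubelet root directory of /data/kubelet (the path and the driver plugin directory are placeholders; only the affected fields are shown):

```yaml
# hostPath volumes on the flexfs-csi-node DaemonSet
volumes:
  - name: kubelet-dir
    hostPath:
      path: /data/kubelet
      type: Directory
  - name: registration-dir
    hostPath:
      path: /data/kubelet/plugins_registry
      type: Directory
```

The csi-node-driver-registrar container argument must point at the socket under the same root, e.g. --kubelet-registration-path=/data/kubelet/plugins/&lt;driver-name&gt;/csi.sock.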

Symptom: Pods that were running become unresponsive when accessing the mounted volume, with “transport endpoint is not connected” errors.

Cause: The underlying FUSE process (mount.flexfs) was terminated or crashed.

Solution: The CSI driver detects this condition and attempts to re-establish the mount automatically during the next NodePublishVolume call. To force recovery:

  1. Delete the affected pod (Kubernetes will recreate it)
  2. If the issue persists, check the mount client log at /var/lib/kubelet/flexfs/<volume-name>/log-rw inside the node pod
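The two recovery steps above as commands (pod and volume names are placeholders; this assumes the node-pod image includes tail):

```shell
# 1. Delete the affected pod; its controller recreates it
kubectl delete pod <pod-name>
# 2. If the issue persists, read the mount client log from the node pod
kubectl exec -n kube-system <flexfs-csi-node-pod> -c flexfs-csi-driver -- \
  tail -n 100 /var/lib/kubelet/flexfs/<volume-name>/log-rw
```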

Verbose logging

Enable verbose logging on the CSI driver by adding the --verbose flag to the csi.flexfs start command in both the Deployment and DaemonSet specs:

containers:
  - name: flexfs-csi-driver
    image: flexfs/csi-driver-1.9:latest
    args: ["start", "--verbose"]

To enable verbose logging on the mount client, pass verbose as a mount option on the PV:

spec:
  mountOptions:
    - verbose

Mount client logs are written to /var/lib/kubelet/flexfs/<volume-name>/log-rw (or log-ro for read-only mounts) inside the node pod.

Collecting logs
# Controller logs
kubectl logs -n kube-system deployment/flexfs-csi-controller -c flexfs-csi-driver
# Node logs (pick a specific node pod)
kubectl logs -n kube-system <flexfs-csi-node-pod> -c flexfs-csi-driver
# Provisioner sidecar logs
kubectl logs -n kube-system deployment/flexfs-csi-controller -c csi-provisioner