Logging and Diagnostics
FlexFS provides multiple logging and diagnostic tools for troubleshooting performance issues and understanding system behavior.
Mount client log file
In daemon mode (the default), the mount client writes logs to a file. Specify the log file path with:
```sh
mount.flexfs start my-volume /mnt/data --logFile /var/log/flexfs/mount.log
```

If no --logFile is specified, logs are written to a default location determined during init creds.
In foreground mode (--foreground), logs are written to stdout.
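In daemon mode the log file grows until something rotates it. If nothing in your environment rotates it already, an external logrotate rule is a reasonable precaution. This is a sketch, not part of FlexFS: it assumes the /var/log/flexfs/mount.log path from the example above, and uses copytruncate on the assumption that the daemon keeps the file open without a documented reopen mechanism.

```
/var/log/flexfs/mount.log {
    daily
    rotate 7
    compress
    missingok
    notifempty
    # copytruncate avoids having to signal the daemon to reopen its log;
    # a small window of log lines can be lost during the truncate.
    copytruncate
}
```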
Verbose logging
Enable detailed logging with the --verbose flag:
```sh
mount.flexfs start my-volume /mnt/data --verbose
```

Verbose mode automatically enables all RTT (round-trip time) logging and memory stats logging. It produces significantly more output and should be used for debugging only.
For servers:
```sh
meta.flexfs start --verbose
proxy.flexfs start --verbose
```

RTT logging
Round-trip time logging measures and logs the latency of individual operations. Each RTT category can be enabled independently for targeted diagnostics:
| Flag | What it measures |
|---|---|
| --blockRTT | Block storage system round-trip times (read/write to object storage or proxy). |
| --metaRTT | Metadata server round-trip times (RPC calls to meta.flexfs). |
| --storeRTT | Store subsystem round-trip times (internal store operations). |
| --fuseRTT | FUSE round-trip times (time spent processing each FUSE operation). |
Example:
```sh
mount.flexfs start my-volume /mnt/data --blockRTT --metaRTT
```

RTT log entries include the operation type, latency, and relevant context (such as block ID or inode number), enabling you to identify bottlenecks in specific subsystems.
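RTT entries can also be aggregated offline to spot slow operation types at a glance. The helper below is a local sketch (summarize_rtt is our name, not a FlexFS tool) and it assumes a hypothetical log line format of `RTT <category> <op> <latency_ms>`; adjust the field positions to the actual mount.flexfs log output before relying on it.

```sh
#!/bin/sh
# Summarize RTT log entries: count, mean and max latency per operation.
# ASSUMED log format: "RTT <category> <op> <latency_ms>", e.g.
# "RTT block read 12.4" -- adapt the fields ($2..$4) to the real format.
summarize_rtt() {
  awk '/^RTT / {
    key = $2 "/" $3
    n[key]++; sum[key] += $4
    if ($4 > max[key]) max[key] = $4
  }
  END {
    for (k in n)
      printf "%s count=%d mean=%.1fms max=%.1fms\n", k, n[k], sum[k] / n[k], max[k]
  }' "$1" | sort
}
```

Run it against the mount log, e.g. `summarize_rtt /var/log/flexfs/mount.log`, after a test workload to compare block vs. metadata latency.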
Memory stats
The --memStats flag logs periodic statistics about the mount client’s internal buffer pools and LRU caches:
```sh
mount.flexfs start my-volume /mnt/data --memStats
```

This is useful for tuning cache sizes (--memCapacity, --dirtyCapacity, --poolCapacity) and understanding memory usage patterns.
When --verbose is enabled, memory stats are automatically logged.
pprof profiling
The --pprof flag starts a Go pprof HTTP server for CPU and memory profiling:
```sh
mount.flexfs start my-volume /mnt/data --pprof
```

The pprof endpoint is available at http://localhost:6063/debug/pprof/ for mount.flexfs (the port varies by component — see table below).
To capture a CPU profile:
```sh
go tool pprof http://localhost:6063/debug/pprof/profile?seconds=30
```

To capture a heap profile:
```sh
go tool pprof http://localhost:6063/debug/pprof/heap
```

pprof is also available on server components:
| Component | pprof port |
|---|---|
| admin.flexfs | 6060 |
| csi.flexfs | 6061 |
| meta.flexfs | 6062 |
| mount.flexfs | 6063 |
| proxy.flexfs | 6064 |
| stat.flexfs | 6065 |
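To check which components currently expose pprof, the ports in the table can be probed with curl. This is a local convenience sketch (probe_pprof is our name, not a FlexFS tool); it only tests that the standard Go /debug/pprof/ index answers over HTTP.

```sh
#!/bin/sh
# Probe pprof ports and report which ones answer.
# Usage: probe_pprof <host> <port> [<port> ...]
probe_pprof() {
  host=$1; shift
  for port in "$@"; do
    if curl -fsS --max-time 2 -o /dev/null "http://$host:$port/debug/pprof/"; then
      echo "$port up"
    else
      echo "$port down"
    fi
  done
}

# Example: probe_pprof localhost 6060 6061 6062 6063 6064 6065
```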
Server logging
The metadata server and proxy server log to stdout by default. Use --verbose for detailed operation logging.
When running under systemd, logs are captured by the journal:
```sh
# Metadata server logs
journalctl -u flexfs-meta -f

# Proxy server logs
journalctl -u flexfs-proxy -f
```

CSI driver logging
The CSI driver logs to stdout within the container. Enable verbose logging by passing --verbose to the csi.flexfs start command:
```sh
# View CSI driver logs
kubectl logs -n kube-system <flexfs-csi-node-pod> -c flexfs-csi-driver
```

Mount client logs within the CSI driver are written to /var/lib/kubelet/flexfs/<volume-name>/log-rw (or log-ro for read-only mounts).
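On a node hosting several FlexFS volumes, it can help to enumerate those per-volume log files in one pass. A small sketch (list_csi_logs is our helper, not part of FlexFS), assuming the /var/lib/kubelet/flexfs/<volume-name>/ layout described above; the root directory is overridable, e.g. for testing.

```sh
#!/bin/sh
# List the per-volume mount client logs written by the CSI driver.
list_csi_logs() {
  root=${1:-/var/lib/kubelet/flexfs}
  find "$root" -maxdepth 2 \( -name log-rw -o -name log-ro \) 2>/dev/null | sort
}

# Example: show the tail of every volume's log on this node:
#   list_csi_logs | xargs tail -n 20
```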
Diagnostic checklist
| Symptom | Diagnostic steps |
|---|---|
| Slow reads | Enable --blockRTT and --metaRTT. Check if latency is in block storage or metadata. |
| Slow writes | Enable --blockRTT. Check dirty cache queue with --memStats. |
| High memory usage | Enable --memStats. Review cache sizes. |
| Mount hangs | Enable --fuseRTT. Check for blocked FUSE operations. |
| Metadata server slow | Enable --verbose on meta.flexfs. Check flexfs_meta_rpc_duration_seconds metrics. |
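The checklist above can be wrapped in a small helper that maps a symptom to the flags to enable. This is a local convenience sketch (diag_flags and the symptom keys are ours, not FlexFS features); the flag names come from the table.

```sh
#!/bin/sh
# Map a symptom from the diagnostic checklist to suggested logging flags.
diag_flags() {
  case "$1" in
    slow-reads)  echo "--blockRTT --metaRTT" ;;
    slow-writes) echo "--blockRTT --memStats" ;;
    high-memory) echo "--memStats" ;;
    mount-hangs) echo "--fuseRTT" ;;
    *)           echo "--verbose" ;;
  esac
}

# Example: remount with the flags suggested for slow reads:
#   mount.flexfs start my-volume /mnt/data $(diag_flags slow-reads)
```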
Next steps
- Metrics reference — Prometheus metrics catalog
- Prometheus setup — scrape configuration
- Alerting — alert rules
- Performance tuning — cache and FUSE tuning