
# Performance Tuning

This guide covers the key tuning parameters for optimizing flexFS performance across different workloads.

## Block size

The block size determines how file data is split into chunks for storage. It is configured at volume creation time and cannot be changed afterward.

| Block size | Best for | Trade-offs |
| --- | --- | --- |
| `256Ki` (256 KiB) | Small files, random access | More metadata overhead; more requests per large file |
| `512Ki` (512 KiB) | Mixed workloads | Balanced |
| `1Mi` (1 MiB) | General purpose | Good default for most workloads |
| `2Mi` (2 MiB) | Large sequential files | Less metadata, fewer requests |
| `4Mi` (4 MiB) | Large files, HPC, genomics | Default. Optimal for files > 100 MB |
| `8Mi` (8 MiB) | Very large sequential files, streaming | Highest throughput for large files; wastes space on small files |
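To make the metadata/request trade-off concrete, here is a back-of-the-envelope calculation of how many blocks (and therefore how many storage requests and metadata entries) a 1 GiB file needs at the smallest block size versus the default:

```shell
# Blocks needed to store a 1 GiB file at two block sizes.
file_bytes=$((1024 * 1024 * 1024))             # 1 GiB
blocks_256k=$((file_bytes / (256 * 1024)))     # at 256Ki blocks
blocks_4m=$((file_bytes / (4 * 1024 * 1024)))  # at 4Mi blocks (default)
echo "256Ki: $blocks_256k blocks"              # 4096 blocks
echo "4Mi:   $blocks_4m blocks"                # 256 blocks
```

A 16x larger block size means 16x fewer requests per large file, which is why the larger sizes win for sequential HPC-style workloads.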

## In-memory block cache

The in-memory block cache reduces latency by keeping recently accessed blocks in RAM.

| Flag | Default | Description |
| --- | --- | --- |
| `--memCapacity` | Auto-tuned | Number of blocks to keep in the in-memory LRU cache. |
| `--dirtyCapacity` | Auto-tuned | Number of dirty (unwritten) blocks to buffer in memory before flushing. |
| `--poolCapacity` | Auto-tuned | Size of the block buffer pool. |

The defaults are automatically calculated based on available system RAM. Override them for specific workloads:

```sh
mount.flexfs start my-volume /mnt/data --memCapacity 2048 --dirtyCapacity 512
```

Higher --memCapacity improves read performance for working sets that fit in memory. Higher --dirtyCapacity improves write throughput by allowing more blocks to be buffered before being flushed to storage.
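Because both capacities are counted in blocks, the worst-case RAM footprint scales with the volume's block size. A rough sizing sketch for the values above, assuming the `4Mi` default block size and treating the two pools as separate (which may overcount if they share buffers):

```shell
# Worst-case RAM held by cached blocks: block count x block size.
mem_capacity=2048                  # --memCapacity
dirty_capacity=512                 # --dirtyCapacity
block_bytes=$((4 * 1024 * 1024))   # 4Mi volume block size
total_gib=$(( (mem_capacity + dirty_capacity) * block_bytes / 1024 / 1024 / 1024 ))
echo "up to ${total_gib} GiB"      # up to 10 GiB
```

Check this against available RAM before raising the capacities on shared hosts.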

## On-disk cache

The on-disk cache provides a second tier of caching that survives process restarts and can be much larger than the memory cache.

| Flag | Default | Description |
| --- | --- | --- |
| `--diskFolder` | `/dev/shm/.flexfs-cache-<pid>` | Path to the on-disk cache folder. |
| `--diskQuota` | (disabled) | Maximum disk space to use. Accepts absolute values (e.g., `50G`) or percentages (e.g., `80%`). Disk caching is disabled unless this is set. |
| `--diskMaxBlockSize` | `131072` (128 KiB) | Maximum processed block size, in bytes, that will be cached to disk. |
```sh
mount.flexfs start my-volume /mnt/data \
  --diskFolder /var/cache/flexfs \
  --diskQuota 100G
```
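Percentage quotas are resolved against the filesystem that holds `--diskFolder`. flexFS does this internally; the sketch below only shows what an `80%` quota works out to on a given filesystem (point `cache_dir` at your cache directory):

```shell
# Resolve an 80% quota to an absolute size for the filesystem holding the cache.
cache_dir=.   # substitute your --diskFolder path
total_kb=$(df -Pk "$cache_dir" | awk 'NR==2 {print $2}')
quota_gib=$(( total_kb * 80 / 100 / 1024 / 1024 ))
echo "80% quota on this filesystem = ${quota_gib} GiB"
```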

## Writeback caching

Enable disk-level writeback caching to mask write latency:

```sh
mount.flexfs start my-volume /mnt/data \
  --diskFolder /var/cache/flexfs \
  --diskWriteback
```

With --diskWriteback enabled, writes are acknowledged as soon as the block is written to the local disk cache. The block is then asynchronously uploaded to object storage (or the proxy). This significantly reduces write latency for workloads that can tolerate a short window where data exists only on local disk.
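One quick way to see the effect is to time a small flush-terminated write burst into the mount with and without `--diskWriteback`. The target path here is a placeholder; set `TARGET_DIR` to your flexFS mountpoint:

```shell
# Time a 64 MiB write burst; conv=fsync makes dd wait for a final flush,
# so the elapsed time reflects acknowledged-write latency.
target=${TARGET_DIR:-$(mktemp -d)}   # set TARGET_DIR=/mnt/data for a real run
result=$(dd if=/dev/zero of="$target/bench.tmp" bs=1M count=64 conv=fsync 2>&1 | tail -n 1)
rm -f "$target/bench.tmp"
echo "$result"
```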

## Write concurrency

| Flag | Default | Description |
| --- | --- | --- |
| `--dirtyActive` | Auto-tuned | Maximum number of dirty blocks that can be actively syncing to storage simultaneously. Higher values increase write throughput at the cost of more concurrent network connections. |
| `--maxBops` | Auto-tuned | Maximum number of block operations (reads and writes) that can run in parallel. |

For write-heavy workloads:

```sh
mount.flexfs start my-volume /mnt/data --dirtyActive 64 --maxBops 128
```

## Prefetching

Block prefetching detects sequential read patterns and preloads upcoming blocks before they are requested.

| Flag | Default | Description |
| --- | --- | --- |
| `--noPrefetch` | `false` | Disable prefetching entirely. |
| `--prefetchActive` | Auto-tuned | Maximum number of prefetch operations running in parallel. |

For sequential workloads (e.g., streaming large files, genomics pipelines):

```sh
mount.flexfs start my-volume /mnt/data --prefetchActive 32
```

For purely random workloads, prefetching adds overhead. Disable it:

```sh
mount.flexfs start my-volume /mnt/data --noPrefetch
```

## FUSE options

These flags control the Linux FUSE interface behavior:

| Flag | Default | Description |
| --- | --- | --- |
| `--asyncRead` | `false` | Enable FUSE async reads, allowing the kernel to issue multiple read requests concurrently. |
| `--attrValid` | `3600` | Time in seconds for which file attributes are cached in the kernel. Higher values reduce metadata server load; event-driven invalidation handles most remote changes promptly. |
| `--entryValid` | `1` | Time in seconds for which directory entry lookups are cached in the kernel. |
| `--noMaxPages` | `false` | Do not set the FUSE `max_pages` option to its maximum value. By default, flexFS maximizes the FUSE request size for best throughput. |

For maximum single-client throughput:

```sh
mount.flexfs start my-volume /mnt/data \
  --asyncRead \
  --attrValid 60 \
  --entryValid 60
```

For multi-client workloads where freshness matters:

```sh
mount.flexfs start my-volume /mnt/data \
  --attrValid 1 \
  --entryValid 1
```

## Kernel readahead

Linux kernel readahead can complement flexFS prefetching. Increase it for sequential workloads:

```sh
# Set readahead to 4 MiB for the mount's backing-device-info (BDI) entry.
# The BDI is keyed by the mount's device number, which findmnt reports.
echo 4096 > /sys/class/bdi/$(findmnt -no MAJ:MIN /mnt/data)/read_ahead_kb
```
## Directory streaming

| Flag | Default | Description |
| --- | --- | --- |
| `--dirPageSize` | `5000` | Number of directory entries per page in the directory stream. Larger values improve performance for large directories. |
| `--dirTTL` | `10` | Time in seconds for which directory stream pages are cached. |
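At the default page size, listing a very large directory takes one stream request per 5,000 entries:

```shell
# Directory stream pages needed to list a 1,000,000-entry directory.
entries=1000000
page_size=5000   # --dirPageSize default
pages=$(( (entries + page_size - 1) / page_size ))
echo "$pages pages"   # 200 pages
```

Raising `--dirPageSize` cuts the request count proportionally, at the cost of larger individual responses.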

## Proxy placement (Enterprise)

For Enterprise deployments using proxy groups:

- Place proxy servers in the same region as the mount clients they serve.
- Mount clients automatically select the lowest-latency proxy group via RTT probing.
- Use multiple proxy servers per group for load distribution (blocks are distributed via rendezvous hashing).
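Rendezvous (highest-random-weight) hashing gives every client the same server choice for a given block without any coordination: each server's name is hashed together with the block ID, and the server with the highest score owns the block. A minimal sketch of the idea, with hypothetical server and block names (flexFS's actual hash function is not specified here):

```shell
# Pick the owning proxy for a block via rendezvous hashing:
# score each (server, block) pair, then take the highest score.
block="my-volume/block-000042"
owner=$(for server in proxy-a proxy-b proxy-c; do
  printf '%s  %s\n' "$(printf '%s:%s' "$server" "$block" | sha256sum | cut -c1-16)" "$server"
done | sort -r | head -n 1 | awk '{print $2}')
echo "$block -> $owner"
```

A useful property of this scheme: adding or removing a proxy only remaps the blocks that hashed to that proxy, rather than reshuffling the whole keyspace.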
## Example configurations

Large sequential reads (e.g., genomics pipelines):

```sh
mount.flexfs start genomics-vol /mnt/data \
  --diskFolder /nvme/cache \
  --diskQuota 500G \
  --prefetchActive 32 \
  --asyncRead
```
Machine-learning training data:

```sh
mount.flexfs start training-vol /mnt/data \
  --diskFolder /nvme/cache \
  --diskQuota 1T \
  --asyncRead \
  --prefetchActive 64
```
Shared volume with a percentage-based disk quota:

```sh
mount.flexfs start shared-vol /mnt/data \
  --diskFolder /var/cache/flexfs \
  --diskQuota 80%
```