# Performance Tuning
This guide covers the key tuning parameters for optimizing flexFS performance across different workloads.
## Block size

The block size determines how file data is split into chunks for storage. It is configured at volume creation time and cannot be changed afterward.
| Block size | Best for | Trade-offs |
|---|---|---|
| 256Ki (256 KiB) | Small files, random access | More metadata overhead, more requests per large file |
| 512Ki (512 KiB) | Mixed workloads | Balanced |
| 1Mi (1 MiB) | General purpose | Good all-round choice |
| 2Mi (2 MiB) | Large sequential files | Less metadata, fewer requests |
| 4Mi (4 MiB) | Large files, HPC, genomics | Default. Optimal for files > 100 MB |
| 8Mi (8 MiB) | Very large sequential files, streaming | Highest throughput for large files; wastes space on small files |
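The request-count trade-off in the table is easy to quantify: reading a file takes roughly file size ÷ block size requests. A quick sketch (the 10 GiB file size is illustrative):

```shell
# Block reads needed for a 10 GiB file at several block sizes:
# larger blocks mean fewer round trips to storage.
FILE_MIB=10240    # 10 GiB expressed in MiB
for BS_MIB in 1 4 8; do
  echo "${BS_MIB}Mi blocks: $(( FILE_MIB / BS_MIB )) block reads"
done
```

At 1Mi that is 10,240 reads per pass over the file; at 8Mi it drops to 1,280, which is why large sequential workloads favor big blocks.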
## Memory cache sizing

The in-memory block cache reduces latency by keeping recently accessed blocks in RAM.
| Flag | Default | Description |
|---|---|---|
| --memCapacity | Auto-tuned | Number of blocks to keep in the in-memory LRU cache. |
| --dirtyCapacity | Auto-tuned | Number of dirty (unwritten) blocks to buffer in memory before flushing. |
| --poolCapacity | Auto-tuned | Size of the block buffer pool. |
The defaults are automatically calculated based on available system RAM. Override them for specific workloads:

```shell
mount.flexfs start my-volume /mnt/data --memCapacity 2048 --dirtyCapacity 512
```

Higher --memCapacity improves read performance for working sets that fit in memory. Higher --dirtyCapacity improves write throughput by allowing more blocks to be buffered before being flushed to storage.
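Because these capacities are counted in blocks, their RAM footprint scales with the volume's block size. A rough sketch, assuming a 4Mi block-size volume (the default):

```shell
# Upper bound on block-cache RAM: memCapacity blocks x block size.
MEM_CAPACITY=2048   # --memCapacity value
BLOCK_MIB=4         # volume block size in MiB (4Mi is the default)
echo "memory cache: up to $(( MEM_CAPACITY * BLOCK_MIB / 1024 )) GiB"
# → memory cache: up to 8 GiB
```

The same --memCapacity value on a 256Ki volume would cap out at 512 MiB, so size the flag with the volume's block size in mind.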
## Disk cache

The on-disk cache provides a second tier of caching that survives process restarts and can be much larger than the memory cache.
| Flag | Default | Description |
|---|---|---|
| --diskFolder | /dev/shm/.flexfs-cache-<pid> | Path to the on-disk cache folder |
| --diskQuota | (disabled) | Maximum disk space to use. Accepts absolute values (e.g., 50G) or percentages (e.g., 80%). Disk caching is disabled unless this is set. |
| --diskMaxBlockSize | 131072 | Maximum processed block size (bytes) that will be cached to disk |
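Note that the default --diskFolder location sits under /dev/shm, which on Linux is RAM-backed tmpfs: it survives a process restart but not a reboot, and it competes with the memory cache for RAM. For a true second tier, point --diskFolder at real disk; you can check what backs a candidate folder with standard tools:

```shell
# Show the filesystem type backing the default cache location;
# a "tmpfs" type means RAM-backed, not persistent disk.
df -T /dev/shm
```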
```shell
mount.flexfs start my-volume /mnt/data \
  --diskFolder /var/cache/flexfs \
  --diskQuota 100G
```

## Disk writeback
Enable disk-level writeback caching to mask write latency:
```shell
mount.flexfs start my-volume /mnt/data \
  --diskFolder /var/cache/flexfs \
  --diskWriteback
```

With --diskWriteback enabled, writes are acknowledged as soon as the block is written to the local disk cache. The block is then asynchronously uploaded to object storage (or the proxy). This significantly reduces write latency for workloads that can tolerate a short window where data exists only on local disk.
## Write tuning

| Flag | Default | Description |
|---|---|---|
| --dirtyActive | Auto-tuned | Maximum number of dirty blocks that can be actively syncing to storage simultaneously. Higher values increase write throughput at the cost of more concurrent network connections. |
| --maxBops | Auto-tuned | Maximum number of block operations (reads and writes) that can run in parallel. |
For write-heavy workloads:
```shell
mount.flexfs start my-volume /mnt/data --dirtyActive 64 --maxBops 128
```

## Prefetch tuning
Block prefetching detects sequential read patterns and preloads upcoming blocks before they are requested.
| Flag | Default | Description |
|---|---|---|
| --noPrefetch | false | Disable prefetching entirely. |
| --prefetchActive | Auto-tuned | Maximum number of prefetch operations running in parallel. |
For sequential workloads (e.g., streaming large files, genomics pipelines):
```shell
mount.flexfs start my-volume /mnt/data --prefetchActive 32
```

For purely random workloads, prefetching adds overhead. Disable it:
```shell
mount.flexfs start my-volume /mnt/data --noPrefetch
```

## FUSE tuning
These flags control the Linux FUSE interface behavior:
| Flag | Default | Description |
|---|---|---|
| --asyncRead | false | Enable FUSE async reads. Allows the kernel to issue multiple read requests concurrently. |
| --attrValid | 3600 | Time in seconds for which file attributes are cached in the kernel. Higher values reduce metadata server load. Event-driven invalidation handles most remote changes promptly. |
| --entryValid | 1 | Time in seconds for which directory entry lookups are cached in the kernel. |
| --noMaxPages | false | Do not set the FUSE max_pages option to its maximum value. By default, flexFS maximizes FUSE page size for best throughput. |
For maximum single-client throughput:
```shell
mount.flexfs start my-volume /mnt/data \
  --asyncRead \
  --attrValid 60 \
  --entryValid 60
```

For multi-client workloads where freshness matters:
```shell
mount.flexfs start my-volume /mnt/data \
  --attrValid 1 \
  --entryValid 1
```

## Kernel readahead
Linux kernel readahead can complement flexFS prefetching. Increase it for sequential workloads:
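The 0:NN directory under /sys/class/bdi is named by the filesystem's major:minor device ID. Assuming util-linux is installed, mountpoint -d prints that ID for any mount, which avoids hard-coding it (shown against / here; substitute your flexFS mount point):

```shell
# mountpoint -d prints the major:minor device ID that names the
# mount's entry under /sys/class/bdi (use your flexFS mount point).
BDI=$(mountpoint -d /)
echo "readahead control: /sys/class/bdi/$BDI/read_ahead_kb"
```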
```shell
# Set readahead to 4 MB for the FUSE device
echo 4096 > /sys/class/bdi/0:$(stat -c %T /dev/fuse)/read_ahead_kb
```

## Directory tuning
| Flag | Default | Description |
|---|---|---|
| --dirPageSize | 5000 | Number of directory entries per page in the directory stream. Larger values improve performance for large directories. |
| --dirTTL | 10 | Time in seconds for which directory stream pages are cached. |
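For directories with hundreds of thousands of entries, larger pages and a longer TTL reduce round trips while listing. A sketch using the flags above (the values are illustrative, not benchmarked recommendations):

```shell
mount.flexfs start my-volume /mnt/data --dirPageSize 20000 --dirTTL 60
```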
## Proxy group placement

For Enterprise deployments using proxy groups:
- Place proxy servers in the same region as the mount clients they serve.
- Mount clients automatically select the lowest-latency proxy group via RTT probing.
- Use multiple proxy servers per group for load distribution (blocks are distributed via rendezvous hashing).
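For intuition on the last point: rendezvous (highest-random-weight) hashing scores every proxy against each block ID and picks the top scorer, so each block has one stable owner, and removing a proxy only remaps that proxy's share of blocks. A toy sketch with hypothetical proxy names (it illustrates the general technique, not flexFS's exact hash or key format):

```shell
# Toy rendezvous hashing: hash(block_id:proxy) for every proxy,
# keep the proxy with the lexicographically highest digest.
pick_proxy() {
  local block_id=$1 best= best_score= proxy score
  for proxy in proxy-a proxy-b proxy-c; do    # hypothetical proxy names
    score=$(printf '%s:%s' "$block_id" "$proxy" | sha256sum | cut -c1-16)
    if [ -z "$best" ] || [ "$score" \> "$best_score" ]; then
      best=$proxy
      best_score=$score
    fi
  done
  echo "$best"
}

pick_proxy my-volume/block/42   # same block ID always maps to the same proxy
```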
## Workload-specific recommendations

### Genomics / HPC
```shell
mount.flexfs start genomics-vol /mnt/data \
  --diskFolder /nvme/cache \
  --diskQuota 500G \
  --prefetchActive 32 \
  --asyncRead
```

### AI/ML training data
```shell
mount.flexfs start training-vol /mnt/data \
  --diskFolder /nvme/cache \
  --diskQuota 1T \
  --asyncRead \
  --prefetchActive 64
```

### General interactive workloads
```shell
mount.flexfs start shared-vol /mnt/data \
  --diskFolder /var/cache/flexfs \
  --diskQuota 80%
```

## Next steps
- Capacity planning
- Logging and diagnostics — use RTT logging to identify bottlenecks
- Caching architecture — understand the three-tier cache model