# Maintenance
## Database folder
The metadata server stores all filesystem metadata in a database folder. The default location is `~/.flexfs/meta/data`. This can be changed with the `--dbFolder` flag. The folder is created automatically on first start.
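When the server runs under systemd (the `flexfs-meta` unit name is taken from the backup example on this page; the binary path, drop-in filename, and alternative folder are assumptions for illustration), the flag could be set with a drop-in like this sketch:

```ini
# Hypothetical drop-in: /etc/systemd/system/flexfs-meta.service.d/dbfolder.conf
[Service]
# Clear the packaged ExecStart, then relaunch with an explicit database folder.
ExecStart=
ExecStart=/usr/local/bin/flexfs-meta --dbFolder /var/lib/flexfs/meta
```

After editing a drop-in, run `systemctl daemon-reload` and restart the service for the change to take effect.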
### Folder contents
The database folder contains the metadata database files. These files should not be modified manually. The metadata server manages compaction, garbage collection, and space reclamation automatically.
### Storage requirements
Metadata database size depends on:
- Number of files and directories — Each inode (file, directory, symlink) consumes metadata entries.
- Extended attributes and ACLs — Volumes with extensive xattrs or ACLs require more space.
- Retention window — When block retention is enabled (for time-travel), historical metadata versions are retained, increasing database size.
As a rough guideline, expect 1-2 KiB of metadata per file/directory. A volume with 10 million files requires approximately 10-20 GiB of metadata storage.
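The guideline above can be turned into a quick back-of-envelope estimate. This sketch only does the arithmetic from the 1-2 KiB-per-inode figure; integer division rounds the ~9.5 GiB low bound down:

```shell
# Back-of-envelope metadata sizing using the 1-2 KiB-per-inode guideline
inodes=10000000   # 10 million files/directories

echo "low estimate:  $(( inodes * 1 / 1024 / 1024 )) GiB"   # ~9.5 GiB, printed as 9
echo "high estimate: $(( inodes * 2 / 1024 / 1024 )) GiB"   # ~19 GiB
```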
## Backup strategies
### File-level backup
Stop the metadata server before taking a filesystem-level backup of the database folder:
```shell
sudo systemctl stop flexfs-meta
cp -a ~/.flexfs/meta/data /backup/meta-data-$(date +%Y%m%d)
sudo systemctl start flexfs-meta
```
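The same copy-and-verify pattern can be rehearsed in a scratch directory before relying on it. Everything below (paths, file names) is illustrative, not part of the product:

```shell
# Rehearse the cp -a backup pattern in a throwaway directory (illustrative paths)
tmp=$(mktemp -d)
mkdir -p "$tmp/data"
echo "metadata" > "$tmp/data/example.db"

# Snapshot the folder, preserving permissions and timestamps, then verify it
cp -a "$tmp/data" "$tmp/meta-data-$(date +%Y%m%d)"
diff -r "$tmp/data" "$tmp"/meta-data-* && echo "backup matches source"

rm -rf "$tmp"
```

A `diff -r` against the live folder is only meaningful while the server is stopped, since the database files change underneath it otherwise.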
### Block storage durability

The block data itself is stored in cloud object storage (S3, GCS, Azure Blob, OCI), which provides its own durability guarantees (typically 99.999999999% for standard storage classes). Block data does not need to be backed up separately.
### Point-in-time recovery
For volumes with block retention configured, the time-travel feature provides point-in-time access to historical filesystem state. This can serve as a complement to traditional backups for data recovery scenarios.
## Memory tuning
Section titled “Memory tuning”Database cache
The `--dbMemCapacity` flag controls how much memory the metadata database allocates for its in-memory block cache (index and data blocks). This is the most impactful tuning parameter for metadata server performance.
| Value | Description |
|---|---|
| `40%` (default) | 40% of system RAM |
| `4G` | 4 GiB absolute |
| `512M` | 512 MiB absolute |
Guidelines:
- For dedicated metadata servers, the default of 40% is a good starting point.
- For servers running alongside other services, reduce to 10-20% or use an absolute value.
- If the metadata database is larger than available cache, frequently accessed metadata will still be served from cache, but random access patterns may incur disk I/O.
- Monitor the server’s memory usage and adjust if the system is under memory pressure.
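For the shared-host case, an absolute value can be derived from total RAM. This sketch is Linux-specific (it reads `/proc/meminfo`), and the 15% target is just an example point inside the 10-20% range above:

```shell
# Derive an absolute --dbMemCapacity value from total RAM (Linux only;
# the 15% target is an example, not a recommendation from the product docs)
total_kib=$(awk '/^MemTotal:/ {print $2}' /proc/meminfo)
cache_mib=$(( total_kib * 15 / 100 / 1024 ))
echo "suggested flag: --dbMemCapacity ${cache_mib}M"
```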
### Sync mode
The `--sync` flag forces the database to fsync every write operation, ensuring full crash durability at the cost of higher write latency. By default, sync is disabled, and the database relies on the operating system's page cache and periodic flushing.
- **Sync enabled:** Every metadata write is durable before acknowledgment. Recommended for environments where power loss or kernel crashes are a concern.
- **Sync disabled (default):** Higher throughput. A crash may lose the last few seconds of metadata operations, but the database remains consistent (no corruption).
## Scaling considerations
The metadata server is a single-instance service. Scaling is vertical:
- CPU: The metadata server benefits from multiple cores for concurrent RPC session handling and background maintenance tasks.
- Memory: More memory allows a larger database cache, reducing disk I/O.
- Storage: Fast local storage (NVMe) reduces metadata operation latency.
- Network: The metadata server handles RPC connections from all mount clients and REST requests from utilities. Ensure sufficient network bandwidth for your expected session count.
For very large deployments (thousands of concurrent mounts), consider:
- Splitting volumes across multiple metadata servers (each metadata server handles a subset of volumes).
- Increasing the database cache size.
- Using the fastest available local storage.