Hybrid Cloud / On-Prem
This guide covers deploying flexFS in a hybrid topology where compute runs on-premises and block data resides in cloud object storage.
Challenge
On-premises workloads accessing cloud object storage face high latency (tens to hundreds of milliseconds per operation). This makes interactive workloads and write-heavy pipelines impractical without a caching layer.
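A quick back-of-envelope calculation shows the scale of the problem. The per-operation latencies below are hypothetical round-trip times chosen for illustration, not measured flexFS figures:

```python
# Rough, illustrative latency budget for 10,000 sequential small reads.
# The per-operation latencies are hypothetical, not flexFS measurements.
ops = 10_000
cloud_rtt_s = 0.050   # ~50 ms per object-storage round trip
local_rtt_s = 0.0005  # ~0.5 ms for a local cache hit

cloud_total_s = ops * cloud_rtt_s  # 500 s for the whole pass
local_total_s = ops * local_rtt_s  # 5 s for the same pass

print(f"direct to cloud: {cloud_total_s:.0f} s, via local cache: {local_total_s:.0f} s")
```

Two orders of magnitude separate the two runs, which is why a local caching layer is the difference between practical and impractical for interactive work.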
Solution: On-Prem Proxy Groups
Deploy proxy servers on-premises to create a local write and read cache:
- Reads: First read of a block fetches from cloud storage and caches locally. Subsequent reads are served from the on-prem cache at local network speed.
- Writes: Dirty blocks are written to the local proxy cache and asynchronously flushed to cloud storage in the background. Applications see local write latency.
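The read and write paths above can be sketched as a read-through cache with an asynchronous writeback thread. This is an illustrative model of the behavior, not the flexFS implementation; the class and callback names are invented for the sketch:

```python
# Sketch of the proxy's caching behavior: read-through on miss,
# write-back with asynchronous flush. Names are illustrative only.
import queue
import threading

class ProxyCache:
    def __init__(self, fetch_from_cloud, flush_to_cloud):
        self.fetch = fetch_from_cloud  # blocking cloud GET, by block id
        self.flush = flush_to_cloud    # blocking cloud PUT
        self.cache = {}                # block_id -> bytes (local disk in reality)
        self.dirty = queue.Queue()     # block ids awaiting writeback
        threading.Thread(target=self._writeback, daemon=True).start()

    def read(self, block_id):
        if block_id not in self.cache:           # miss: fetch once, cache locally
            self.cache[block_id] = self.fetch(block_id)
        return self.cache[block_id]              # hit: local-speed read

    def write(self, block_id, data):
        self.cache[block_id] = data              # application sees local latency
        self.dirty.put(block_id)                 # flushed in the background

    def _writeback(self):
        while True:
            block_id = self.dirty.get()
            self.flush(block_id, self.cache[block_id])
            self.dirty.task_done()
```

The key property for the hybrid topology is that only `fetch` and `flush` cross the high-latency WAN link, and `flush` is off the application's critical path.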
Deployment Steps
1. Cloud Infrastructure
Set up the admin server, metadata server, and block store in a cloud region as described in the Getting Started guide.
2. On-Premises Proxy Servers
Deploy one or more proxy.flexfs instances on-premises. Each proxy server needs:
- Network connectivity to the cloud object storage endpoint (HTTPS)
- Local disk for the block cache (SSD recommended)
- Network connectivity from on-prem compute hosts
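Before installing, it can be worth verifying those requirements from the proxy host. A hypothetical preflight sketch, assuming a generic TCP reachability check and a free-space check (the host names, ports, and thresholds are placeholders for your environment, not flexFS defaults):

```python
# Hypothetical preflight checks for a proxy host, mirroring the
# requirements above. Endpoints and thresholds are placeholders.
import shutil
import socket

def port_reachable(host, port, timeout=5.0):
    """Can we open a TCP connection (e.g. HTTPS to the storage endpoint)?"""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def enough_cache_space(cache_path, min_free_bytes):
    """Does the cache disk have at least min_free_bytes available?"""
    return shutil.disk_usage(cache_path).free >= min_free_bytes
```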
proxy.flexfs init creds
proxy.flexfs start --diskFolder /ssd/cache --diskQuota 90%
3. Create an On-Prem Proxy Group
Register the on-premises proxies as a group:
configure.flexfs create provider --code onprem --name On-Premises
configure.flexfs create region --providerCode onprem --code dc1 --name "Datacenter 1"
configure.flexfs create proxy-group \
  --providerCode onprem --regionCode dc1 \
  --addresses proxy1.internal:443,proxy2.internal:443
4. Associate with Volumes
configure.flexfs create volume-proxy-group \
  --volumeID my-volume --proxyGroupID 1
5. Mount On-Premises
Mount clients on-premises automatically discover and select the on-prem proxy group (lowest RTT):
mount.flexfs start my-volume /mnt/flexfs
Network Considerations
- Proxy to object storage: Requires outbound HTTPS to the cloud storage endpoint. Bandwidth determines the writeback throughput and initial cache fill speed.
- Mount clients to proxy: Local network (low latency, high bandwidth). All data flows through the proxy — mount clients do not need direct internet access.
- Mount clients to metadata server: Requires connectivity to the cloud-hosted metadata server. Metadata operations are small and frequent (a few KB per RPC call), so moderate latency is tolerable.
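Since uplink bandwidth bounds writeback throughput, a simple division gives the time needed to drain a dirty backlog. The queue size and bandwidth below are illustrative, not flexFS measurements:

```python
# Back-of-envelope writeback drain time: how long the proxy needs to
# flush a given amount of dirty data over the uplink to object storage.
def drain_seconds(dirty_bytes, uplink_bytes_per_s):
    return dirty_bytes / uplink_bytes_per_s

# Example: 50 GiB of dirty blocks over a ~1 Gbit/s uplink (~125 MB/s ceiling)
secs = drain_seconds(50 * 2**30, 125 * 10**6)
print(f"drain time: {secs / 60:.1f} minutes")
```

If sustained write rates exceed what the uplink can drain, the dirty backlog grows without bound, so size the uplink against the workload's average (not peak) write rate.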
Writeback Tuning
The proxy server’s writeback behavior can be tuned with internal flags:
- Cache capacity is controlled by --diskQuota (default 95% of the cache volume).
- The --sync flag enables fsync on every dirty block write for crash durability.
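The trade behind a per-write fsync option is durability versus latency: with fsync, a cached dirty block survives a proxy host crash before it has been flushed to the cloud; without it, the block may sit only in the page cache. A generic sketch of that mechanism (not flexFS internals; the function and its arguments are illustrative):

```python
# Generic sketch of what a per-write fsync option implies: each dirty
# block is forced to stable storage before the write returns.
import os

def write_block(path, data, sync=False):
    fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_TRUNC, 0o644)
    try:
        os.write(fd, data)
        if sync:
            os.fsync(fd)  # survives a host crash, at added per-write latency
    finally:
        os.close(fd)
```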
Community Edition Alternative
If proxy groups are not available (Community edition), enable local on-disk writeback caching on the mount client:
mount.flexfs start my-volume /mnt/flexfs \
  --diskFolder /ssd/cache \
  --diskQuota 80% \
  --diskWriteback
This provides local write latency mitigation without a separate proxy server, though it only benefits the single mount client host.