
Hybrid Cloud / On-Prem

This guide covers deploying flexFS in a hybrid topology where compute runs on-premises and block data resides in cloud object storage.

On-premises workloads accessing cloud object storage face high latency (tens to hundreds of milliseconds per operation). This makes interactive workloads and write-heavy pipelines impractical without a caching layer.

Deploy proxy servers on-premises to create a local write and read cache:

  1. Reads: First read of a block fetches from cloud storage and caches locally. Subsequent reads are served from the on-prem cache at local network speed.
  2. Writes: Dirty blocks are written to the local proxy cache and asynchronously flushed to cloud storage in the background. Applications see local write latency.
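The read-through and write-back behavior above can be sketched in a few lines of Python. This is a conceptual model only, not flexFS code; the class and method names are illustrative, and the "cloud store" is just a dict standing in for object storage:

```python
class WritebackCache:
    """Conceptual model of the proxy's block cache (illustrative, not flexFS code)."""

    def __init__(self, cloud_store):
        self.cloud = cloud_store   # slow, high-latency backend (a dict here)
        self.local = {}            # fast on-prem cache
        self.dirty = set()         # blocks written locally but not yet flushed

    def read(self, block_id):
        # Read-through: the first access pays the cloud round trip,
        # subsequent reads are served locally.
        if block_id not in self.local:
            self.local[block_id] = self.cloud[block_id]
        return self.local[block_id]

    def write(self, block_id, data):
        # Write-back: the application sees only local write latency.
        self.local[block_id] = data
        self.dirty.add(block_id)

    def flush(self):
        # In a real proxy this runs asynchronously in the background.
        for block_id in list(self.dirty):
            self.cloud[block_id] = self.local[block_id]
            self.dirty.discard(block_id)
```

A real proxy additionally bounds the cache size and must never evict a dirty block before it has been flushed.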

Set up the admin server, metadata server, and block store in a cloud region as described in the Getting Started guide.

Deploy one or more proxy.flexfs instances on-premises. Each proxy server needs:

  • Network connectivity to the cloud object storage endpoint (HTTPS)
  • Local disk for the block cache (SSD recommended)
  • Network connectivity from on-prem compute hosts
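The first and last of these requirements can be sanity-checked before starting a proxy with a plain TCP reachability probe. This is a generic check, not a flexFS tool, and the endpoint names in the comments are placeholders:

```python
import socket

def reachable(host, port, timeout=3.0):
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Placeholders -- substitute your real endpoints:
# reachable("s3.us-east-1.amazonaws.com", 443)   # cloud object storage, from the proxy
# reachable("proxy1.internal", 443)              # proxy, from a compute host
```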

```sh
proxy.flexfs init creds
proxy.flexfs start --diskFolder /ssd/cache --diskQuota 90%
```

Register the on-premises proxies as a group:

```sh
configure.flexfs create provider --code onprem --name On-Premises
configure.flexfs create region --providerCode onprem --code dc1 --name "Datacenter 1"
configure.flexfs create proxy-group \
  --providerCode onprem --regionCode dc1 \
  --addresses proxy1.internal:443,proxy2.internal:443
```
Then attach the volume to the proxy group:

```sh
configure.flexfs create volume-proxy-group \
  --volumeID my-volume --proxyGroupID 1
```

Mount clients running on-premises automatically discover the available proxy groups and select the one with the lowest RTT, which will be the on-prem group:

```sh
mount.flexfs start my-volume /mnt/flexfs
```
Three network paths matter in this topology:

  • Proxy to object storage: requires outbound HTTPS to the cloud storage endpoint. Bandwidth on this path determines writeback throughput and initial cache-fill speed.
  • Mount clients to proxy: local network (low latency, high bandwidth). All block data flows through the proxy, so mount clients do not need direct internet access.
  • Mount clients to metadata server: requires connectivity to the cloud-hosted metadata server. Metadata operations are small and frequent (a few KB per RPC), so moderate latency is tolerable.
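The lowest-RTT proxy-group selection described above can be approximated as follows. This is a sketch, not flexFS's actual probing logic, and the `measure_rtt` helper here is simply a TCP connect timer:

```python
import socket
import time

def measure_rtt(host, port, timeout=3.0):
    """Time a TCP connect as a rough RTT estimate; None if unreachable."""
    start = time.monotonic()
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return time.monotonic() - start
    except OSError:
        return None

def pick_proxy_group(groups):
    """groups: {name: [(host, port), ...]}.

    Return the name of the group whose best proxy has the lowest RTT,
    skipping groups with no reachable proxies.
    """
    best_name, best_rtt = None, None
    for name, addrs in groups.items():
        rtts = [r for r in (measure_rtt(h, p) for h, p in addrs) if r is not None]
        if rtts and (best_rtt is None or min(rtts) < best_rtt):
            best_name, best_rtt = name, min(rtts)
    return best_name
```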

The proxy server’s writeback behavior can be tuned with the following flags:

  • Cache capacity is controlled by --diskQuota (default 95% of the cache volume).
  • The --sync flag enables fsync on every dirty block write for crash durability.
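The trade-off behind --sync is the standard write-back durability question: without an fsync, a crash can lose data the application believes was written. In POSIX terms, what the flag adds looks like this (a generic illustration, not flexFS internals):

```python
import os

def write_block(path, data, sync=False):
    """Write data to path; optionally fsync, as the --sync flag does."""
    with open(path, "wb") as f:
        f.write(data)
        if sync:
            f.flush()
            os.fsync(f.fileno())  # data reaches stable storage before we return

# Without sync=True, the write may sit in the OS page cache; a power loss
# before the kernel flushes it would lose the block. With sync=True, every
# dirty block write pays the cost of a device flush.
```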

If proxy groups are not available (Community edition), enable local on-disk writeback caching on the mount client:

```sh
mount.flexfs start my-volume /mnt/flexfs \
  --diskFolder /ssd/cache \
  --diskQuota 80% \
  --diskWriteback
```

This provides local write latency mitigation without a separate proxy server, though it only benefits the single mount client host.
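Whether the cache lives on a proxy or on the mount client, it has to stay under its disk quota by evicting blocks. A minimal LRU eviction policy conveys the idea (illustrative only; flexFS's actual eviction strategy is not documented here, and a write-back cache must also avoid evicting unflushed dirty blocks):

```python
from collections import OrderedDict

class QuotaCache:
    """LRU cache bounded by a byte quota (conceptual sketch of --diskQuota)."""

    def __init__(self, quota_bytes):
        self.quota = quota_bytes
        self.used = 0
        self.blocks = OrderedDict()  # block_id -> bytes, least recently used first

    def put(self, block_id, data):
        if block_id in self.blocks:
            self.used -= len(self.blocks.pop(block_id))
        self.blocks[block_id] = data
        self.used += len(data)
        # Evict least-recently-used blocks until we are back under quota.
        while self.used > self.quota and self.blocks:
            _, evicted = self.blocks.popitem(last=False)
            self.used -= len(evicted)

    def get(self, block_id):
        if block_id in self.blocks:
            self.blocks.move_to_end(block_id)  # mark as recently used
            return self.blocks[block_id]
        return None
```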