Proxy Server Overview

Proxy servers (proxy.flexfs) act as a CDN-like block caching layer between mount clients and object storage. They receive block read and write requests over HTTPS, cache blocks on local disk, and read from or write through to the underlying object store. Proxy servers are organized into proxy groups — sets of one or more proxy servers that collectively cache blocks for one or more volumes.

Proxy servers provide several benefits:

  • Multi-client read amplification — When many mount clients read the same data, proxy servers serve cached blocks instead of each client hitting object storage independently.
  • Egress cost reduction — Blocks served from the proxy cache avoid object storage egress charges.
  • Regional and edge caching — Place proxy groups near compute workloads in different regions. Mount clients automatically select the lowest-latency group.
  • Hybrid cloud / on-prem — Proxy groups with writeback caching bridge on-premises compute with cloud object storage, masking the higher latency of cross-network writes. See Hybrid Deployments.

A proxy group is a collection of one or more proxy server addresses. Each volume can be associated with multiple proxy groups (e.g., one per region). The admin server stores the mapping between volumes and proxy groups.
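For illustration, the volume-to-group mapping might look like the following sketch (group names, volume names, and addresses are all hypothetical; the admin server's actual schema is not shown here):

```python
# Hypothetical admin-server mapping: each volume lists the proxy
# groups that cache its blocks, e.g. one group per region.
volume_proxy_groups = {
    "vol-analytics": ["us-east-proxies", "eu-west-proxies"],
    "vol-training": ["us-east-proxies"],
}

# Each proxy group is a set of one or more proxy server addresses.
proxy_groups = {
    "us-east-proxies": ["10.0.1.10:443", "10.0.1.11:443"],
    "eu-west-proxies": ["10.8.0.10:443"],
}

def groups_for_volume(volume: str) -> list[list[str]]:
    """Resolve a volume to the address lists of its proxy groups."""
    return [proxy_groups[g] for g in volume_proxy_groups[volume]]
```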

When a mount client starts, it probes every proxy group configured for its volume and measures the round-trip time (RTT) to each. The client selects the group with the lowest RTT. If no proxy group is reachable, the mount client bypasses proxies entirely and communicates directly with object storage. This graceful fallback ensures mounts never fail due to proxy unavailability.
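A sketch of this selection logic, using TCP connect time as the RTT estimate (function names are illustrative; the real client's probing mechanism may differ):

```python
import socket
import time

def probe_rtt(host: str, port: int, timeout: float = 1.0):
    """Measure TCP connect time as an RTT estimate; None if unreachable."""
    start = time.monotonic()
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return time.monotonic() - start
    except OSError:
        return None

def select_proxy_group(groups: dict):
    """Return the name of the reachable group with the lowest RTT.

    Returns None if no group is reachable, signalling the caller to
    fall back to direct object storage access.
    """
    best, best_rtt = None, float("inf")
    for name, (host, port) in groups.items():
        rtt = probe_rtt(host, port)
        if rtt is not None and rtt < best_rtt:
            best, best_rtt = name, rtt
    return best
```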

Within the selected proxy group, mount clients distribute block requests across proxy servers using rendezvous hashing (also known as highest random weight hashing). This ensures:

  • Deterministic routing — All clients route the same block to the same proxy server, maximizing cache hit rates.
  • Minimal disruption — When a proxy server joins or leaves the group, only the blocks assigned to that server are redistributed. Other assignments remain stable.
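The routing scheme above can be sketched in a few lines (server names and the hash choice are illustrative, not the client's actual implementation):

```python
import hashlib

def rendezvous_pick(servers: list[str], block_id: str) -> str:
    """Route a block to the server with the highest random weight.

    Every client computing this over the same server list picks the
    same server, and removing one server only reassigns the blocks
    that were routed to it.
    """
    def weight(server: str) -> int:
        h = hashlib.sha256(f"{server}:{block_id}".encode()).digest()
        return int.from_bytes(h[:8], "big")
    return max(servers, key=weight)
```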

Proxy server addresses within a group can be added or removed via the admin server while mounts are active. Mount clients periodically refresh their proxy group configuration and adapt their routing tables accordingly.

Read path:

  1. Mount client determines the target proxy server via rendezvous hashing.
  2. Proxy server checks its local disk cache.
  3. Cache hit: Block is returned immediately from disk.
  4. Cache miss: Proxy server fetches the block from object storage, caches it locally, and returns it to the client.
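The read path above can be sketched as a small cache class (the class name and the `fetch_from_object_store` callable are stand-ins, not the product's API):

```python
import os

class BlockCache:
    """Sketch of the proxy read path: serve from local disk on a hit,
    otherwise fetch from object storage, cache, and return."""

    def __init__(self, cache_dir: str, fetch_from_object_store):
        self.cache_dir = cache_dir
        self.fetch = fetch_from_object_store  # callable: block_id -> bytes
        os.makedirs(cache_dir, exist_ok=True)

    def _path(self, block_id: str) -> str:
        return os.path.join(self.cache_dir, block_id)

    def read_block(self, block_id: str) -> bytes:
        path = self._path(block_id)
        if os.path.exists(path):          # cache hit: serve from disk
            with open(path, "rb") as f:
                return f.read()
        data = self.fetch(block_id)       # cache miss: go to object storage
        with open(path, "wb") as f:       # populate cache for later readers
            f.write(data)
        return data
```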
Write path (writeback mode):

  1. Mount client sends the block to the target proxy server.
  2. Proxy server writes the block to its local disk cache and acknowledges the client immediately.
  3. The proxy server asynchronously writes the block through to object storage in the background.
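The writeback steps can be sketched with an in-memory stand-in for the disk cache and a background flusher thread (class and method names are hypothetical):

```python
import queue
import threading

class WritebackCache:
    """Sketch of writeback mode: the client is acknowledged after the
    local write; a background thread flushes to object storage."""

    def __init__(self, upload):
        self.local = {}        # in-memory stand-in for the disk cache
        self.upload = upload   # callable: (block_id, data) -> None
        self.queue = queue.Queue()
        threading.Thread(target=self._flusher, daemon=True).start()

    def write_block(self, block_id: str, data: bytes) -> str:
        self.local[block_id] = data        # cache locally...
        self.queue.put((block_id, data))   # ...schedule the async flush...
        return "ack"                       # ...and acknowledge immediately

    def _flusher(self):
        while True:
            block_id, data = self.queue.get()
            self.upload(block_id, data)    # write through to object storage
            self.queue.task_done()
```

The client-visible latency is the local write, not the object storage round trip, which is why this mode suits high-latency links.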
Write path (writethrough mode):

  1. Mount client sends the block to the target proxy server.
  2. Proxy server writes the block directly to object storage.
  3. The block is cached locally for future reads.
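For contrast with writeback mode, a writethrough sketch (again with hypothetical names and an in-memory stand-in for the disk cache):

```python
class WritethroughCache:
    """Sketch of writethrough mode: the block is durable in object
    storage before the client is acknowledged."""

    def __init__(self, upload):
        self.local = {}        # in-memory stand-in for the disk cache
        self.upload = upload   # callable: (block_id, data) -> None

    def write_block(self, block_id: str, data: bytes) -> str:
        self.upload(block_id, data)   # synchronous object storage write
        self.local[block_id] = data   # cache for future reads
        return "ack"
```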
| Scenario | Benefit |
| --- | --- |
| Multiple clients reading the same dataset | Shared cache reduces object storage load and egress |
| Cross-region compute accessing a central bucket | Regional proxy groups reduce read latency |
| On-premises compute with cloud storage | Writeback caching masks WAN latency for writes |
| Large-scale training or analytics jobs | Proxy cache absorbs repeated data access patterns |
| Egress cost sensitivity | Cached reads avoid per-GB egress charges |