The S3 Daemon

s3d is a daemon for data access using the S3 API - a modern cousin of nfsd, ftpd, httpd, etc. It is designed to be simple, tiny, blazing fast, and portable, so that it fits a variety of environments: developer machines, containers, Kubernetes, edge devices, and more.

By default, s3d serves the S3 API as a gateway to a main remote S3 storage (AWS or compatible) with full protocol compatibility (based on the AWS SDK and the Smithy API). It adds critical features needed by remote clients, such as queueing, caching, and syncing, in order to optimize performance, increase data availability, and provide service continuity for its clients.

The need for a daemon running alongside applications emerges in edge computing use cases, where data is stored and processed locally at the edge as it gets collected, while some of the data gets synced to and from a main data storage. Refer to Wikipedia - Edge computing: “Edge computing is a distributed computing paradigm that brings computation and data storage closer to the sources of data. This is expected to improve response times and save bandwidth. …”


Features

Maturity legend: 🥉 - design phase, 🥈 - development phase, 🥇 - released.

  • 🥈   S3 API
    • S3 protocol code generated with awslabs/smithy-rs, the same toolchain that builds the AWS SDK for Rust.
    • Provides compatible parsing of API operations, inputs, outputs, errors, and the server skeleton.
  • 🥈   Write Queue
    • Write new objects to the local filesystem first to tolerate connection issues.
    • Push them to the remote storage in the background.
  • 🥉   Read Cache
    • Store cached and prefetched objects in local filesystem.
    • Reduce egress costs and latency of reads from remote storage.
  • 🥉   Filters
    • Fine control over which objects to include/exclude for upload/cache/sync.
    • Filter by bucket name, bucket tags, object keys (or prefixes), object tags, and object meta-data.
    • Optional integration with OpenPolicyAgent (OPA) for advanced filtering.
  • 🥉   Sync Folder
    • Continuous, bidirectional, background sync of local directories with remote buckets.
    • A simple way to move data that begins as local files into the remote S3 storage.
  • 🥉   Fuse Mount
    • Virtual filesystem mountpoint provided by the daemon (see the kernel FUSE docs).
    • Applications that do not use the S3 API can access the data as a normal filesystem.
  • 🥉   Metrics
    • Optional integration with OpenTelemetry.
    • Expose logging, metrics, and tracing of S3 operations and s3d features.
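The write queue above follows a spool-and-push pattern: persist writes locally first, then upload in the background. Here is a minimal illustrative sketch of that pattern; the class, directory layout, and method names are hypothetical, not s3d's actual implementation (a local directory stands in for the remote S3 storage):

```python
import shutil
from pathlib import Path

class WriteQueue:
    """Hypothetical sketch of a write queue: objects are written to a
    local spool directory first, then pushed to remote storage later."""

    def __init__(self, spool_dir, remote_dir):
        self.spool = Path(spool_dir)
        self.remote = Path(remote_dir)  # stand-in for the remote S3 storage
        self.spool.mkdir(parents=True, exist_ok=True)
        self.remote.mkdir(parents=True, exist_ok=True)

    def put(self, key, data: bytes):
        # The local write succeeds even when the remote is unreachable.
        path = self.spool / key
        path.parent.mkdir(parents=True, exist_ok=True)
        path.write_bytes(data)

    def push_pending(self):
        # Background pass: upload spooled objects, then drop the local copy.
        for path in sorted(self.spool.rglob("*")):
            if path.is_file():
                dest = self.remote / path.relative_to(self.spool)
                dest.parent.mkdir(parents=True, exist_ok=True)
                shutil.copy2(path, dest)
                path.unlink()
```

The key property is that `put` never touches the network, so clients keep working through connection outages while `push_pending` drains the spool whenever the remote is reachable.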


Docs

  • 🧑‍🚀   User guide - how to use features and configurations.
  • 🥷   Developer guide - how to build and test.
  • 🧝   Architecture page - designs of components and software, and roadmap ideas.


🧪🧪❕   Experimental
🧪🧪❕   This project is still in its early days, which means it’s a great time
🧪🧪❕   to affect its direction, and it welcomes contributions and open discussions.
🧪🧪❕   Keep in mind that all internal and external interfaces are considered unstable
🧪🧪❕   and might change without notice.

Getting Started

Until the first releases are available, please refer to the Developer guide for building s3d from source.

Here are some commands to explore:

make                        # build from source (see dev guide for prerequisites)
eval $(make env)            # defines aliases such as s3d -> build output binary
aws configure list          # make sure remote S3 credentials are configured
s3d run                     # runs daemon (foreground)
s3d status                  # check daemon status
s3d status bucket/key       # check bucket or object status
s3d ls bucket/prefix        # list buckets or objects
s3d get bucket/key > file   # get object data to stdout (meta-data to stderr)
s3d put bucket/key < file   # put object data from stdin
s3d set bucket/key          \
  --tag s3d.upload=true     \
  --tag s3d.cache=pin       # set tags for bucket or object to be used in filters
s3d <command> --help        # show usage for command
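Tags such as `s3d.upload=true` in the `s3d set` example above are the kind of attributes the planned filters would match on. A minimal sketch of include/exclude filtering by bucket name, key prefix, and tags follows; the rule schema and function names are assumptions for illustration, not s3d's actual filter configuration:

```python
from fnmatch import fnmatch

def matches(rule, bucket, key, tags):
    """Return True if an object matches one filter rule (hypothetical schema)."""
    if "bucket" in rule and not fnmatch(bucket, rule["bucket"]):
        return False
    if "prefix" in rule and not key.startswith(rule["prefix"]):
        return False
    for tag_key, tag_val in rule.get("tags", {}).items():
        if tags.get(tag_key) != tag_val:
            return False
    return True

def should_upload(bucket, key, tags, include_rules, exclude_rules):
    # Exclude rules win over include rules; no include rules means include all.
    if any(matches(r, bucket, key, tags) for r in exclude_rules):
        return False
    if not include_rules:
        return True
    return any(matches(r, bucket, key, tags) for r in include_rules)
```

The same shape of decision could apply to caching and syncing, with each feature consulting its own include/exclude rule lists.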


Community

  • GitHub repo - the project lives here; watch it to get notified about releases, issues, etc.
  • Discord chat - use this invite link to join.
  • Redhat-et - this project was initiated at Red Hat Emerging Technologies.
  • License - Apache 2.0