We are building an S3-compatible distributed object storage system written in Rust (Apache 2.0). Today, we’re releasing a feature we’ve been working on for a while: a direct binary replacement tool to migrate existing MinIO instances to RustFS.
The Problem: As many of you know, MinIO recently archived its open-source repository. However, there are still millions of instances running older open-source versions that are no longer receiving security updates. For users with hundreds of terabytes or petabytes of data, migrating to new storage infrastructure over the network is cost-prohibitive and operationally terrifying.
The Solution: Because RustFS is fully S3-compatible, we decided to tackle the on-disk format. With our latest release (1.0.0-alpha.87), you don't need to provision new infrastructure or move data over the wire. You can simply swap the MinIO binary with the RustFS binary (or update your docker-compose image), and RustFS will boot up using your existing MinIO data directory.
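For a docker-compose deployment, the swap can be as small as changing the image line while keeping the same data volume. This snippet is purely illustrative: the image name, tag, and paths are assumptions, not exact values, so check our migration issue for the real ones:

```yaml
services:
  storage:
    # Before: image: minio/minio:<your-pinned-release>
    image: rustfs/rustfs:1.0.0-alpha.87   # image name/tag illustrative
    volumes:
      - /mnt/data:/data   # point at the SAME data directory MinIO was using
    ports:
      - "9000:9000"
```

The key point is that the volume mapping does not change: RustFS reads the existing MinIO on-disk layout in place, so no data is copied.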
Currently, we automatically migrate bucket metadata, objects (tags, object locks, versioning), IAM, and lifecycle management.
We are still in alpha, with a Beta planned for April and GA in July. Our long-term goal is to build a foundational storage system for AI infrastructure, with native support for RDMA and DPUs.
The link points to our GitHub issue detailing the migration steps. I'd love to hear your feedback, especially if you are managing large legacy MinIO clusters. Happy to answer any questions about our architecture, Rust implementation, or the roadmap!
Hi there, RustFS team member here! Thanks for taking a look.
To clarify our architecture: RustFS is purpose-built for high-performance object storage. We intentionally avoid relying on general-purpose consensus algorithms like Raft in the data path, as they introduce unnecessary latency for large blobs.
Instead, we rely on Erasure Coding for durability and Quorum-based Strict Consistency for correctness. A write is acknowledged only after the data has been safely persisted to a majority of drives. This means the concern about "eating committed writes" is addressed through strict read-after-write guarantees rather than a background consensus log.
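To give a flavor of the acknowledgment rule (a toy sketch, not RustFS internals): the decision reduces to a strict-majority count over the per-drive persist results.

```rust
/// Returns true once a strict majority of drives report a durable persist.
/// Hypothetical sketch of the quorum rule described above, not actual RustFS code.
fn write_quorum_met(persisted: &[bool]) -> bool {
    let ok = persisted.iter().filter(|&&p| p).count();
    ok * 2 > persisted.len() // strict majority: more than half
}

fn main() {
    // 4-drive set: 3 durable persists is a majority, 2 is not.
    assert!(write_quorum_met(&[true, true, true, false]));
    assert!(!write_quorum_met(&[true, true, false, false]));
    println!("quorum rule holds");
}
```

The real write path also has to handle erasure-coded shard placement and drive timeouts, but the client-visible guarantee is exactly this predicate: no ack until the majority threshold is crossed.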
While we avoid heavy consensus for data transfer, we utilize dsync—a custom, lightweight distributed locking mechanism—for coordination. This specific architectural strategy has been proven reliable in production environments at the EiB scale.
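To sketch the core idea behind majority-based locking (this is a simplified toy model, not the actual dsync implementation, which also handles timeouts, retries, and lock refresh): a lock is held only if it is granted on a strict majority of nodes, so two clients can never both hold it.

```rust
/// Toy single-process model of quorum locking. Hypothetical, for illustration only.
struct Node {
    held_by: Option<String>, // which owner, if any, holds the lock on this node
}

fn try_lock(nodes: &mut [Node], owner: &str) -> bool {
    // Attempt to grab the lock on every node that is currently free.
    let granted = nodes
        .iter_mut()
        .filter(|n| n.held_by.is_none())
        .map(|n| n.held_by = Some(owner.to_string()))
        .count();
    if granted * 2 > nodes.len() {
        true // strict majority granted: the lock is ours
    } else {
        // Failed to reach quorum: roll back the grants we did get.
        for n in nodes.iter_mut() {
            if n.held_by.as_deref() == Some(owner) {
                n.held_by = None;
            }
        }
        false
    }
}

fn main() {
    let mut nodes: Vec<Node> = (0..5).map(|_| Node { held_by: None }).collect();
    assert!(try_lock(&mut nodes, "client-a"));
    assert!(!try_lock(&mut nodes, "client-b")); // majority already held by client-a
    println!("mutual exclusion holds");
}
```

Because any two majorities of the same node set must overlap, a second client can never assemble its own majority while the first still holds one.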
Is there a paper or some other architecture document for dsync?
It's really hard to solve this problem without a consensus algorithm in a way that doesn't sacrifice something (usually correctness in edge cases/network partitions). Data availability is easy(ish), but keeping the metadata consistent requires some sort of consensus, either using Raft/Paxos/..., using strictly commutative operations, or similar. I'm curious how RustFS solves this, and I couldn't find any documentation.
EiB scale doesn't mean much - some workloads don't require strict metadata consistency guarantees, but others do.