
28 Apr 2022

Backing up Amazon S3: Details and Best Practices

Woon Jung, Co-Founder & CTO

I want to share some important details to consider when protecting your data in Amazon S3. There are several factors to weigh: backup versus replication, data layout, versioning, and the restore process. Read on for my insights.



Data Durability Does Not Eliminate The Need for Backup

S3’s famous 11 9’s of durability does not protect you from accidental deletion or account-level compromises. Even object-level versioning does not provide full protection, since a user can mistakenly or maliciously delete versioned objects, or configure lifecycle rules that result in unintended object deletions. Creating an air-gapped backup copy of the data is the best protection against accidental deletion or a malicious user.
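To illustrate how easily a lifecycle rule can silently remove data, here is a minimal, self-contained Python sketch. The object list, prefixes, and the `objects_expired_by_rule` helper are hypothetical stand-ins for illustration only; real lifecycle rules are evaluated by S3 itself.

```python
from datetime import datetime, timedelta, timezone

def objects_expired_by_rule(objects, prefix, expire_after_days, now):
    """Return the keys a lifecycle expiration rule would delete.

    objects: list of (key, last_modified) tuples.
    A rule scoped to `prefix` expires objects older than `expire_after_days`.
    """
    cutoff = now - timedelta(days=expire_after_days)
    return [key for key, modified in objects
            if key.startswith(prefix) and modified < cutoff]

now = datetime(2022, 4, 28, tzinfo=timezone.utc)
objects = [
    ("logs/2021/app.log", now - timedelta(days=400)),
    ("logs/2022/app.log", now - timedelta(days=10)),
    ("data/critical.csv", now - timedelta(days=400)),
]

# A rule intended for "logs/2021/" deletes exactly one object...
print(objects_expired_by_rule(objects, "logs/2021/", 365, now))
# ...but the same rule mistakenly scoped to the whole bucket ("")
# also expires critical data that merely happens to be old.
print(objects_expired_by_rule(objects, "", 365, now))
```

An independent, air-gapped backup copy is what lets you recover from exactly this kind of silent misconfiguration.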

Replication and Backup Have Different Uses

The purpose of replication is to keep a mirror copy of the primary bucket in a second location in order to fail over in case the primary data becomes unavailable (usually because of data loss or service outage).

In general, replication:

  • Mirrors the data structure/layout of the primary bucket into the secondary bucket
  • Copies every single operation made to the primary bucket to the secondary bucket immediately (the secondary bucket is a hot/active copy, updated constantly)
  • Requires versioning to be enabled, which creates unwanted copies of objects and significantly increases costs

Backup is similar to replication in the sense that a second copy is maintained; however, backup data is not directly accessed by the application. It is used to restore back to a known “good” copy.

  • Backup is allowed to change the data structure/layout to optimize for cost efficiency, making it cheaper to retain the copies. Your application will not directly access the backup data, so it doesn’t need to be structured the same as your primary bucket
  • Backup does not constantly copy every single update made to an object. Instead, backup copies your data at discrete recovery points, allowing you to perform Point-in-Time Recovery (PITR) while consuming backup storage optimally
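The distinction above can be sketched with a toy in-memory model. The `Bucket` class and the helper functions are hypothetical stand-ins, not any real API: the point is that replication mirrors every operation, including deletes, while backup preserves discrete recovery points you can restore from.

```python
import copy

class Bucket:
    """Toy stand-in for an S3 bucket: just a key -> data mapping."""
    def __init__(self):
        self.objects = {}

primary = Bucket()
replica = Bucket()
snapshots = {}  # discrete recovery points: time -> frozen copy of the bucket

def put(key, data):
    primary.objects[key] = data
    replica.objects[key] = data      # replication mirrors every write...

def delete(key):
    del primary.objects[key]
    replica.objects.pop(key, None)   # ...and every delete, immediately

def take_backup(t):
    # Backup captures the bucket at a discrete point in time.
    snapshots[t] = copy.deepcopy(primary.objects)

put("report.csv", "v1")      # t=1: object created
take_backup(t=2)             # t=2: recovery point taken
delete("report.csv")         # t=3: accidental deletion

print("replica:", replica.objects)    # empty -- the delete was mirrored
print("backup @t=2:", snapshots[2])   # still holds report.csv -- recoverable
```

The replica faithfully reproduces the mistake; only the recovery point lets you get the object back.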

Data Layout and Efficiency

  • Creating a lot of small objects is not ideal for S3 in terms of cost
  • Large numbers of small objects result in high PUT costs
  • Except for the S3 Standard tier, small objects are penalized by either a minimum billable size (e.g., S3 Standard-IA) or fixed per-object metadata overhead (e.g., Glacier)
  • The application drives the structure of the primary S3 bucket and the replica bucket (which mirrors the primary)
  • Conversely, backup can reorganize the data to optimize cost; for example, merging several small objects into one large object reduces multiple PUTs to a single operation
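A rough back-of-the-envelope calculation shows why packing small objects matters. The per-request rate and pack size below are illustrative assumptions, not actual pricing; real S3 costs vary by region and storage class.

```python
def put_request_cost(num_objects, cost_per_1000_puts=0.005):
    # Illustrative rate (dollars per 1,000 PUT requests); an assumption,
    # not a quote of actual S3 pricing.
    return num_objects * cost_per_1000_puts / 1000

def packed_object_count(num_objects, avg_object_kb, pack_target_mb=64):
    # Merge many small objects into ~pack_target_mb packed objects.
    total_kb = num_objects * avg_object_kb
    return max(1, -(-total_kb // (pack_target_mb * 1024)))  # ceiling division

n, avg_kb = 10_000_000, 4  # ten million 4 KB objects
naive = put_request_cost(n)                          # one PUT per object
packed = put_request_cost(packed_object_count(n, avg_kb))
print(f"naive PUT cost: ${naive:.2f}, packed: ${packed:.2f}")
```

Ten million tiny PUTs versus a few hundred large ones is the difference of several orders of magnitude in request cost, before even counting per-object minimum-size penalties in colder tiers.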

Until very recently, bucket replication was the only available data protection option for Amazon S3, so understandably, many organizations still use replication today. Because using replication in place of backup often results in inefficiencies and higher costs, organizations should review their data protection needs and make sure they are using the right tool for the job. The decision between continuous and discrete recovery points should be based on your organizational requirements.

The Restore Process

It is very easy to forget about the recovery process, but the whole point of backup is to restore your data when something goes wrong. You want data restoration to be quick enough to meet your Recovery Time Objective (RTO) and ensure business continuity.

Setting up S3 replication is easy, but restoring the data back is another story. Depending on where and how the replica bucket is stored, you will have to:

  • Select the right version (if restoring to a specific version/point-in-time)
  • Retrieve objects (if stored in Glacier)
  • Copy objects to the destination
  • Clean up the temporary retrieved copies (Glacier restores create time-limited copies)

While these are only a few steps, you may need to perform them on millions or billions of objects. You will need to track progress, handle or retry failures, and execute the restores efficiently. You want the restore process to be as quick and painless as possible; configuring Amazon S3 replication is easy, but restoring from it is a big undertaking.
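A minimal sketch of what such orchestration involves, with `restore_object` as a hypothetical stand-in for the version-select/retrieve/copy steps of a single object (the failure simulation and retry counts are illustrative assumptions):

```python
import random

def restore_object(key):
    """Hypothetical stand-in for selecting, retrieving, and copying one object."""
    if random.random() < 0.1:        # simulate a transient failure ~10% of the time
        raise IOError(f"transient failure restoring {key}")
    return True

def restore_all(keys, max_retries=3):
    """Restore every key, retrying transient failures and tracking progress."""
    restored, failed = [], []
    for i, key in enumerate(keys, 1):
        for _attempt in range(max_retries):
            try:
                restore_object(key)
                restored.append(key)
                break
            except IOError:
                continue             # retry this object
        else:
            failed.append(key)       # exhausted retries
        if i % 1000 == 0:            # progress tracking for large restores
            print(f"{i}/{len(keys)} objects processed")
    return restored, failed

random.seed(0)
restored, failed = restore_all([f"obj-{i}" for i in range(100)])
print(len(restored), "restored,", len(failed), "failed")
```

Even this toy loop needs retry logic, failure bookkeeping, and progress reporting; at hundreds of millions of objects you would also need parallelism, checkpointing, and resumability, which is exactly the undertaking the text describes.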

Experience Clumio Protect for Amazon S3

Clumio performs numerous storage optimizations in order to back up data in the most cost-efficient way possible.
In Clumio Protect for Amazon S3, versioning is optional, which helps optimize retention cost. Clumio also helps you save more by providing a framework to specify what you want to protect and what you don’t. For instance:

  • Do you care about all versions or only specific recovery points?
  • Do you want to back up specific prefixes and exclude others?
  • Do you want to back up only specific types of objects or storage classes?

While the amount of production data in S3 continues to increase, S3 is also used to store plenty of data that is not critical to back up. It is therefore important to control costs by selectively backing up your S3 data.
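Conceptually, a selective-backup policy like the one described above is a predicate over object keys and storage classes. The function name, prefixes, and class list below are hypothetical illustrations, not Clumio’s actual API:

```python
def should_back_up(key, storage_class,
                   include_prefixes=("prod/",),
                   exclude_prefixes=("prod/tmp/",),
                   storage_classes=("STANDARD", "STANDARD_IA")):
    """Hypothetical policy: protect only matching prefixes and storage classes."""
    if not any(key.startswith(p) for p in include_prefixes):
        return False                  # not under any included prefix
    if any(key.startswith(p) for p in exclude_prefixes):
        return False                  # explicitly excluded (e.g., scratch data)
    return storage_class in storage_classes

print(should_back_up("prod/db/dump.gz", "STANDARD"))     # included
print(should_back_up("prod/tmp/cache.bin", "STANDARD"))  # excluded prefix
print(should_back_up("scratch/x.dat", "STANDARD"))       # not included
print(should_back_up("prod/logs/a.log", "GLACIER"))      # excluded class
```

Evaluating such a predicate at backup time is what keeps non-critical data out of the backup and the retention bill.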

We implemented workflows to optimize efficiency when restoring buckets, ensuring the fastest possible recovery time. Even with these optimizations, restoring millions of objects can still be time-consuming, so Clumio provides a global search-and-restore capability. You can search for and restore objects based on their:

  • Prefixes
  • Sizes
  • Classes
  • Object tags

By quickly recovering specific objects during a disaster or cloud outage, Clumio reduces your RTO, helping you meet your business continuity objectives.
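Conceptually, such a search is a multi-criteria filter over a backup inventory. The inventory shape and the `search_objects` helper below are hypothetical illustrations of the idea, not Clumio’s implementation:

```python
def search_objects(inventory, prefix=None, min_size=None, max_size=None,
                   storage_class=None, tags=None):
    """Filter a backup inventory by prefix, size, storage class, and tags."""
    results = []
    for obj in inventory:
        if prefix and not obj["key"].startswith(prefix):
            continue
        if min_size is not None and obj["size"] < min_size:
            continue
        if max_size is not None and obj["size"] > max_size:
            continue
        if storage_class and obj["class"] != storage_class:
            continue
        if tags and not all(obj["tags"].get(k) == v for k, v in tags.items()):
            continue
        results.append(obj["key"])
    return results

inventory = [
    {"key": "prod/a.csv", "size": 2048, "class": "STANDARD",
     "tags": {"team": "finance"}},
    {"key": "prod/b.csv", "size": 512, "class": "STANDARD_IA",
     "tags": {"team": "ops"}},
]
print(search_objects(inventory, prefix="prod/", tags={"team": "finance"}))
```

Restoring only the handful of objects that match, instead of the whole bucket, is what keeps recovery time short during an incident.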

In summary

  • You should consider backing up your S3 buckets. 11 9’s of durability means your data is very safe from server issues, but this does not protect you from other types of data loss.
  • Replication is not backup, just like backup is not replication. Each has a distinct purpose.
  • When considering replication and backup, think about the restore process and think in the context of hundreds of millions of objects.
  • Building and maintaining everything yourself is possible but probably isn’t your most efficient option.