Commvault Unveils Clumio Backtrack - Near Instant Dataset Recovery in S3
I want to share with you some important details to consider when protecting your data in Amazon S3. There are several factors to weigh: backup vs. replication, data layout, versioning, and how you will restore. Read on for my insights.
S3’s famous 11 9’s of durability does not protect you from accidental deletion or an account-level compromise. Even object-level versioning does not provide full protection, since a user can mistakenly or maliciously delete versioned objects, or configure lifecycle rules that result in unintended object deletions. Creating an air-gapped backup copy of the data is the best protection against any accidental deletion or malicious user.
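If you want to check your own exposure, here is a minimal boto3 sketch (the bucket name is a placeholder) that reports a bucket's versioning status and flags any lifecycle rule that permanently expires noncurrent versions:

```python
# Minimal audit sketch: check versioning status and lifecycle rules for a bucket.
# The bucket name is a placeholder; credentials/region come from your boto3 config.
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")
bucket = "my-production-bucket"  # placeholder

versioning = s3.get_bucket_versioning(Bucket=bucket)
print("Versioning status:", versioning.get("Status", "Disabled"))

try:
    lifecycle = s3.get_bucket_lifecycle_configuration(Bucket=bucket)
except ClientError as err:
    if err.response["Error"]["Code"] == "NoSuchLifecycleConfiguration":
        lifecycle = {"Rules": []}
    else:
        raise

for rule in lifecycle["Rules"]:
    # A rule with NoncurrentVersionExpiration permanently deletes old versions
    # after the configured number of days; versioning alone will not save you
    # from such a rule (or from a malicious change to it).
    if rule.get("Status") == "Enabled" and "NoncurrentVersionExpiration" in rule:
        days = rule["NoncurrentVersionExpiration"].get("NoncurrentDays")
        print(f"Rule {rule.get('ID', '<unnamed>')} expires noncurrent versions after {days} days")
```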
The purpose of replication is to keep a mirror copy of the primary bucket in a second location so that you can fail over if the primary data becomes unavailable (usually because of data loss or a service outage).
In general, replication keeps the replica continuously in sync with the primary bucket, which makes it well suited for availability but not for recovering from unwanted changes.
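For context, here is roughly what enabling replication looks like with boto3; the bucket names and IAM role ARN are placeholders, and both buckets must already have versioning enabled:

```python
# Minimal sketch: enable replication from a primary bucket to a replica bucket.
# Assumes versioning is already enabled on both buckets and the IAM role has the
# permissions S3 replication requires. All names/ARNs below are placeholders.
import boto3

s3 = boto3.client("s3")

s3.put_bucket_replication(
    Bucket="my-primary-bucket",  # placeholder
    ReplicationConfiguration={
        "Role": "arn:aws:iam::111122223333:role/s3-replication-role",  # placeholder
        "Rules": [
            {
                "ID": "replicate-everything",
                "Priority": 1,
                "Status": "Enabled",
                "Filter": {"Prefix": ""},  # empty prefix = replicate all objects
                "DeleteMarkerReplication": {"Status": "Disabled"},
                "Destination": {"Bucket": "arn:aws:s3:::my-replica-bucket"},
            }
        ],
    },
)
```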
Backup is similar to replication in the sense that a second copy is maintained; however, the backup copy is not directly accessed by the application. Instead, this secondary data is used to restore back to a known “good” copy.
Until very recently, bucket replication was the only available data protection option for Amazon S3, so understandably, many organizations still use replication today. But using replication in place of backup often results in inefficiencies and higher costs, so organizations should revisit their data protection needs and make sure they are using the right tool for the job. Whether you need continuous recovery points or discrete recovery points should be driven by your organizational requirements.
It is easy to forget about the recovery process, but the whole point of backup is to restore your data when something goes wrong. You want data restoration to be quick enough to meet your Recovery Time Objective (RTO) and ensure business continuity.
It is easy to set up S3 replication, but restoring the data back is another story. Depending on where and how you stored the replica bucket, you will have to identify the affected objects, restore them from any archival storage class if needed, and copy them back to the primary bucket.
While this is only a few steps, you may need to do it for millions or billions of objects. You will need to track progress, handle or retry any failures, and execute the restores in the most efficient manner. You want the restore process to be as quick, easy, and painless as possible. Configuring Amazon S3 replication is easy, but restoring from it is a big undertaking.
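To make that undertaking concrete, a do-it-yourself restore from a replica bucket might look roughly like the sketch below (the bucket names are placeholders and the retry handling is deliberately naive); at millions of objects you would want S3 Batch Operations or a fleet of parallel workers instead:

```python
# Rough DIY restore sketch: copy every object from the replica bucket back to
# the primary bucket, with a naive retry loop. Bucket names are placeholders.
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")
REPLICA, PRIMARY = "my-replica-bucket", "my-primary-bucket"  # placeholders

restored = failed = 0
paginator = s3.get_paginator("list_objects_v2")
for page in paginator.paginate(Bucket=REPLICA):
    for obj in page.get("Contents", []):
        key = obj["Key"]
        for attempt in range(3):  # naive retry; a real restore needs better bookkeeping
            try:
                s3.copy(
                    CopySource={"Bucket": REPLICA, "Key": key},
                    Bucket=PRIMARY,
                    Key=key,
                )
                restored += 1
                break
            except ClientError:
                if attempt == 2:
                    # Objects in archival tiers also need a restore request first.
                    failed += 1

print(f"restored={restored} failed={failed}")
```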
Clumio performs numerous storage optimizations in order to back up data in the most cost-efficient way possible.
In Clumio Protect for Amazon S3, versioning is optional, which helps optimize retention cost. Clumio also helps you save more by giving you a framework to tell us what you want to protect and what you don’t, for instance by backing up only the buckets and prefixes that actually need protection.

While the amount of production data in S3 continues to grow, S3 is also used to store a variety of data that is not critical to back up, so it is important to control costs by backing up your S3 data selectively.
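As an illustration (this is plain boto3, not Clumio's interface), one way to decide what is worth protecting is to measure how much data lives under each top-level prefix before defining your backup policies:

```python
# Illustration only: summarize object count and size per top-level prefix so you
# can decide which prefixes are worth backing up. Bucket name is a placeholder.
from collections import defaultdict
import boto3

s3 = boto3.client("s3")
bucket = "my-production-bucket"  # placeholder

counts, sizes = defaultdict(int), defaultdict(int)
paginator = s3.get_paginator("list_objects_v2")
for page in paginator.paginate(Bucket=bucket):
    for obj in page.get("Contents", []):
        key = obj["Key"]
        prefix = key.split("/", 1)[0] if "/" in key else "(root)"
        counts[prefix] += 1
        sizes[prefix] += obj["Size"]

for prefix in sorted(sizes, key=sizes.get, reverse=True):
    print(f"{prefix}: {counts[prefix]} objects, {sizes[prefix] / 1e9:.1f} GB")
```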
We implemented workflows to optimize efficiency when restoring buckets, ensuring the fastest possible recovery time. Despite these optimizations, restoring millions of objects can still be time consuming, so Clumio provides a global search-and-restore capability: you can search for and restore individual objects based on attributes such as their name (key) and size.
By quickly recovering specific objects during a disaster or cloud outage, Clumio reduces your RTO, helping you meet your business continuity objectives.
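For comparison, rolling a single object back to its last version before a point in time with native S3 versioning alone looks roughly like this (bucket, key, and cutoff are placeholders); it is exactly this kind of per-object manual work, multiplied across millions of objects, that a global search-and-restore removes:

```python
# DIY comparison: roll one object back to its last version before a cutoff time
# using native S3 versioning. Bucket, key, and cutoff are placeholders.
from datetime import datetime, timezone
import boto3

s3 = boto3.client("s3")
bucket, key = "my-production-bucket", "reports/2023/summary.csv"  # placeholders
cutoff = datetime(2023, 6, 1, tzinfo=timezone.utc)                # placeholder

versions = s3.list_object_versions(Bucket=bucket, Prefix=key).get("Versions", [])
candidates = [v for v in versions if v["Key"] == key and v["LastModified"] < cutoff]
if candidates:
    best = max(candidates, key=lambda v: v["LastModified"])
    # Copying the old version onto the same key makes it the current version again.
    s3.copy_object(
        Bucket=bucket,
        Key=key,
        CopySource={"Bucket": bucket, "Key": key, "VersionId": best["VersionId"]},
    )
    print("Restored version", best["VersionId"])
else:
    print("No version older than the cutoff was found")
```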