With all the new “cloud” products in the enterprise market today, it can be challenging to discern what was actually born in the cloud and what is just a cloud-native wannabe. Seemingly everyone markets their solution as “cloud-native,” but the real questions are: what is cloud-native, what difference does it make, and why should you care? In this blog, we review the benefits of being a cloud-native platform and contrast them with some of the downfalls of being a cloud-native wannabe or a cloud retrofit.
Before digging into the value of being cloud-native, let’s first take a look at Clumio’s cloud data platform architecture. Here are some of its key foundations:
- Leverage Cloud Scale: Most data protection solutions process data as the data flows into the platform, which creates a huge bottleneck. To avoid this, Clumio leverages cloud scalability as much as possible by transporting data directly into highly scalable and durable object storage, without our processing tier getting in the way.
- Microservices Processing Workflows: If you were building a cloud application from scratch today, there is a high likelihood you would build the platform on containers and serverless Lambda functions. We take what is typically a single application, break it up into thousands of tiny functions, and orchestrate those functions into workflows.
- Control Plane and Data Plane Disaggregation: We have separated the control plane from the data plane to enable multi-cloud functionality, without the need to copy data to a single cloud. This becomes very important as we protect data across clouds.
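To make the “workflow of tiny functions” foundation concrete, here is a minimal sketch of a backup expressed as a chain of small, independently retryable steps. The step names, payload shape, and deduplication logic are illustrative assumptions, not Clumio’s actual implementation.

```python
from typing import Callable, Dict, List

# A workflow step is just a small function that transforms a payload.
Step = Callable[[Dict], Dict]

def dedupe(payload: Dict) -> Dict:
    # Keep only blocks whose hashes have not been stored before.
    seen = payload.setdefault("stored_hashes", set())
    unique = []
    for block in payload["blocks"]:
        if block not in seen:
            seen.add(block)
            unique.append(block)
    payload["blocks"] = unique
    return payload

def compress(payload: Dict) -> Dict:
    payload["compressed"] = True  # stand-in for real compression
    return payload

def encrypt(payload: Dict) -> Dict:
    payload["encrypted"] = True  # stand-in for real encryption
    return payload

def run_workflow(steps: List[Step], payload: Dict) -> Dict:
    # Each step is small enough to be retried on its own if it fails.
    for step in steps:
        payload = step(payload)
    return payload

result = run_workflow([dedupe, compress, encrypt],
                      {"blocks": ["h1", "h2", "h1"]})
```

The point of the design is that each step is a self-contained unit: it can be scaled, retried, or replaced without touching the rest of the pipeline.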
With this architectural approach comes significant customer value in scalability, supportability, faster access to innovation, and economics. Let’s dig in…
Scalability
Building our platform predominantly on Lambda functions orchestrated into large workflows increases our ability to scale, because every request fans out into the same number of Lambda functions executed in parallel. Have a thousand customers request a backup at the same time? No problem. Clumio can seamlessly execute a thousand Lambda functions in parallel, instantly. This takes the guesswork out of compute sizing and delivers a predictable user experience.
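The fan-out model described above can be simulated in a few lines: each incoming request maps to its own independent execution, so a thousand requests run as a thousand parallel invocations rather than queuing behind a fixed compute tier. A `ThreadPoolExecutor` stands in for Lambda here, and `backup` is a hypothetical placeholder function.

```python
from concurrent.futures import ThreadPoolExecutor

def backup(request_id: int) -> str:
    # Placeholder for one self-contained backup invocation.
    return f"backup-{request_id}-done"

# One independent execution per request -- no shared bottleneck.
requests = range(1000)
with ThreadPoolExecutor(max_workers=64) as pool:
    results = list(pool.map(backup, requests))

print(len(results))  # 1000
```

In a real serverless platform the pool is effectively unbounded: the cloud provider supplies the parallelism, so request count, not cluster size, determines throughput.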
Supportability
Separating the platform into small functions also makes it quicker to restart from any failure and easier to diagnose what went wrong. If a backup fails, we can restart where we left off (at the function level) instead of starting over and forcing the same data to be sent again, which wastes time and bandwidth. Support can diagnose issues much faster, which increases availability and uptime for the customer while decreasing management time on our side. And since there are no servers, there is no patching or updating, no need for antivirus, and no OS crashes to worry about, which improves both security and availability.
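Function-level restart boils down to checkpointing: the workflow records which steps have completed, so a retry resumes at the failed step instead of re-sending data from the beginning. The sketch below illustrates the idea with hypothetical step names; it is not Clumio’s actual checkpoint mechanism.

```python
# calls records which steps actually execute on this attempt.
calls = []

def run_with_checkpoints(steps, state):
    done = state.setdefault("completed", [])
    for name, fn in steps:
        if name in done:
            continue          # finished in a prior attempt -- skip it
        fn(state)
        done.append(name)     # checkpoint this step as complete
    return state

steps = [
    ("ingest",  lambda s: calls.append("ingest")),
    ("dedupe",  lambda s: calls.append("dedupe")),
    ("persist", lambda s: calls.append("persist")),
]

# Simulate a crash after "ingest": the checkpoint already lists it,
# so the retry resumes at "dedupe" and the ingested data is not resent.
state = {"completed": ["ingest"]}
run_with_checkpoints(steps, state)
print(calls)  # ['dedupe', 'persist']
```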
Faster Access to Innovation
The faster you can get new features to market, the longer you stay ahead of the competition. Breaking a single application into tiny, self-contained building blocks makes it easier to add new features and value for customers without major upgrades. In addition, fixes and patches are localized to a single, small execution step. In fact, we push new platform updates every Thursday without customers having to lift a finger. This is another area where traditional backup products, with their monolithic architectures and quarterly or bi-annual updates, will never be able to keep up!
Economics
The cost and processing efficiency of serverless is mind-blowing. One of our customers shared a comparison they ran between EC2 auto-scaling groups and rewriting the same application on serverless Lambda functions: the result went from the tens of thousands (EC2) to the hundreds (Lambda). No matter how elegant an auto-scaling policy is, it will never come close to the efficiency and parallelism of running Lambda functions in a workflow. This is especially true for a service where only specific functions run at various times; when customers are not requesting the service, no costs are incurred.
Now, on to comparing the architectures of Clumio, the cloud-native wannabes, and the cloud retrofits…
Clumio Data Plane Architecture
One of the big benefits of the public cloud is access to a scale not typically available when building your own infrastructure. For example, when Clumio backs up data from a customer’s environment (VMware on-premises, VMware Cloud on AWS, or EBS in AWS), the data is deduplicated, compressed, and encrypted, then sent directly to S3 for persistent storage without putting our compute tier in the way. Once the data lands in S3, that event triggers hundreds of Lambda functions to process it. The same benefits apply to restores, where hundreds of Lambda functions can be invoked to make them super fast. This lets us fully leverage the massive scale of AWS instead of making our software the bottleneck. We can also directly access our key-value store (DynamoDB) for deduplication hash checks, again driven by Lambda functions when needed.
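The event-driven pattern described above can be sketched as an S3 “object created” notification invoking a handler that processes the newly landed object. The event shape follows AWS’s S3 notification format; `process_object` is a hypothetical stand-in for the real processing pipeline (indexing, hash checks against the key-value store, and so on).

```python
def process_object(bucket: str, key: str) -> str:
    # Placeholder: index the object, run dedupe hash checks, etc.
    return f"{bucket}/{key}"

def handler(event, context=None):
    # A Lambda-style handler: one invocation per S3 notification,
    # fired automatically whenever data lands in the bucket.
    processed = []
    for record in event.get("Records", []):
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        processed.append(process_object(bucket, key))
    return processed

# A minimal S3 "object created" notification (bucket/key are made up).
event = {"Records": [
    {"s3": {"bucket": {"name": "backups"},
            "object": {"key": "vm1/chunk-001"}}},
]}
print(handler(event))  # ['backups/vm1/chunk-001']
```

Because the storage event is the trigger, processing capacity scales with the data arriving, not with a pre-provisioned compute tier.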
The Cloud Native Wannabe Architecture
Compare this to a cloud-native wannabe that leverages EC2 auto-scaling groups for everything, and you can quickly see inherent deficiencies that impact the end-user experience. Their software becomes a bottleneck that limits scale, because every process must pass through the compute tier: deduplication checks, data ingest, key-value updates, and file-level indexing. If you front-end the data with compute and it doesn’t spin up fast enough, performance suffers while the application waits for capacity. If they spin up too much compute too fast, consumption is wasteful. If a process breaks, it must be restarted from the beginning, and patching means upgrading huge binaries that require extensive testing. Servers need to be patched, updated, maintained, and monitored. And if the software itself does not scale, no amount of compute will save the user experience.
The Cloud Retrofit Architecture
Then you have the good old cloud retrofits that try to use the cloud as both an archive and a more expensive virtualized appliance. Using the cloud as an archive lets you buy less on-premises hardware, which is great for cutting costs, but you still have to manage hardware, you get zero value from the cloud, and you are not getting a service offering. Virtualizing the appliance in the cloud is effectively a lift and shift: the customer pays the IaaS bill instead of buying hardware, licenses the software, and pays for the infrastructure to run 100% of the time, even when it sits idle. The end result is a cloud bill of $8,000 – $10,000 per month at a minimum, plus cloud infrastructure management, licensing costs from the vendor, and the requirement for an on-premises solution to do source deduplication, compression, and encryption. Forget cloud efficiency; this is as costly as it gets. The more performance or capacity you need, the more EC2 nodes with their associated EBS storage you get to add, and the more you pay to run them 100% of the time.
The Future of Cloud Native
As you can see, not all solutions running in the cloud are created equal. A true cloud-native architecture delivers huge benefits today, and even bigger benefits in the future. Done correctly, a cloud-native platform also integrates into the public cloud ecosystem and gains leverage from all the new native services AWS brings online every year. These new services can add value well beyond backup by putting the data stored in the Clumio platform to work. Stay tuned for more on this, but one thing is for sure: Clumio is only getting started with the surprises we plan to bring to the market.
Until next time, stay SaaSy my friends………