Sizing for Peak Workloads – Public Cloud Demands Intelligent Design
Sizing for peak workloads is a fundamental tenet of any responsible architecture. After spending the last 20 years helping customers evaluate and right-size storage architectures for these peaks, one thing was never lost on me: the peaks generally occurred during less than 20% of the environment's total run time. In the era of on-prem infrastructure, that meant a lot of dormant resources waiting in reserve.
At Clumio, as I now shift to helping customers right-size a data protection solution for the public cloud, the first thing that became apparent is that, done correctly, the dynamic scalability of the public cloud allows much more precise utilization of resources at every point in time. In fact, this optimization must occur before any idea can mature into a fully formed solution, because in the public cloud every second a resource is reserved or "in use" costs money. This is exactly why solutions designed for an on-prem problem cannot simply be lifted and shifted to the public cloud. We often hear customers talk about the need to 'refactor' an application. It is a time-consuming effort, but the value of wasting fewer resources is, in the end, worth the time spent redesigning the solution.
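The economics behind this argument can be sketched with some simple arithmetic. The figures below are illustrative assumptions, not actual pricing: they compare paying for peak-sized capacity around the clock against paying for peak capacity only during the roughly 20% of the time it is actually needed.

```python
# Hypothetical cost comparison: peak-provisioned capacity (paid for every
# hour, busy or idle) vs. pay-per-use capacity that scales between a
# baseline and the peak. All rates and unit counts are assumptions.

HOURS_PER_MONTH = 730

def provisioned_cost(peak_units: int, rate_per_unit_hour: float) -> float:
    """Peak-sized capacity is billed for every hour of the month."""
    return peak_units * rate_per_unit_hour * HOURS_PER_MONTH

def on_demand_cost(peak_units: int, baseline_units: int,
                   peak_fraction: float, rate_per_unit_hour: float) -> float:
    """Pay only for what runs: peak capacity during peaks, baseline otherwise."""
    peak_hours = peak_fraction * HOURS_PER_MONTH
    idle_hours = HOURS_PER_MONTH - peak_hours
    return (peak_units * peak_hours
            + baseline_units * idle_hours) * rate_per_unit_hour

if __name__ == "__main__":
    rate = 0.10  # assumed $/unit-hour
    fixed = provisioned_cost(peak_units=100, rate_per_unit_hour=rate)
    elastic = on_demand_cost(peak_units=100, baseline_units=20,
                             peak_fraction=0.20, rate_per_unit_hour=rate)
    print(f"peak-provisioned: ${fixed:.2f}/month, on-demand: ${elastic:.2f}/month")
```

Under these assumed numbers, elastic capacity costs roughly a third of peak-provisioned capacity; the exact ratio depends on the baseline level and how long the peaks last.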
For a SaaS application managing peak workloads at scale, resource utilization matters even more – not only for the resources presented in each customer's environment, but especially for the backend services that support not one customer but thousands. Only by building the application end-to-end around an intelligent resource utilization model can true scalability be achieved while minimizing cost for consumers of the service.
By taking a cloud-native, microservices approach, Clumio has designed its secure, enterprise backup service so that customers get the benefit of cloud efficiency without having to 'refactor' their existing backup applications. This is best exemplified by the way Clumio places zero fixed-compute resources in the data path and instead combines the parallelism of Amazon S3 with the dynamic scalability of AWS Lambda functions and Amazon DynamoDB to provide the seamless, fluid experience customers expect from SaaS. This also makes right-sizing a customer environment far simpler – Clumio provides the requested resources on demand.
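To make the pattern concrete, here is a minimal sketch – not Clumio's actual implementation – of a stateless function in that style: it copies an object to a backup bucket and records backup metadata in a key-value table. The function names and record fields are hypothetical; the clients are injected so the handler itself holds no fixed state (in AWS they would be a boto3 S3 client and a DynamoDB Table resource, whose `copy_object` and `put_item` calls are mirrored here).

```python
# Sketch of a stateless, on-demand backup step: no fixed compute in the
# data path, just a function invoked per object. Clients are passed in,
# so the handler can run anywhere (e.g. as an AWS Lambda invocation).

from datetime import datetime, timezone

def backup_object(s3, table, source_bucket: str, key: str,
                  backup_bucket: str) -> dict:
    """Copy one object to the backup bucket and record its metadata.

    `s3` must expose copy_object(Bucket=..., Key=..., CopySource=...),
    as boto3's S3 client does; `table` must expose put_item(Item=...),
    as a boto3 DynamoDB Table resource does.
    """
    # Server-side copy: the object's bytes never pass through this function.
    s3.copy_object(
        Bucket=backup_bucket,
        Key=key,
        CopySource={"Bucket": source_bucket, "Key": key},
    )
    # Record what was backed up and when, keyed by the source location.
    record = {
        "pk": f"{source_bucket}/{key}",
        "backup_bucket": backup_bucket,
        "backed_up_at": datetime.now(timezone.utc).isoformat(),
    }
    table.put_item(Item=record)
    return record
```

Because every invocation is independent, thousands of such copies can run in parallel during a backup window and cost nothing the rest of the time – which is the point of keeping fixed compute out of the data path.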
Clumio has effectively removed the arduous task of sizing for peak workloads. I find that I now spend most of my time educating customers on the importance of a truly optimized architecture and the value it brings to the consumer. Of course, the best way to discover the value of Clumio is to experience it first-hand – I encourage you to give it a test run.