A Recipe for Cloud Design
Isn’t it time you started creating the infrastructure designs your business consumers want?
By: James Houghton
Jan. 3, 2011 06:30 AM
The deployment of infrastructure systems to support applications has been a challenge since we first developed a choice beyond the venerable mainframe. To some it’s a simple formula: take a server, toss in a little network and storage, bake for a few weeks, and you’re done. Having a few friends over and need a little more? Simply add a few more servers. What the heck, hardware is cheap and gets cheaper every year... the advent of Cloud makes it even easier to maintain that philosophy. There are two flaws here: we design without understanding who we’re serving, and we assume cheap hardware will cover for it.
Let’s talk about the “hardware is cheap” flaw first, since it is more obvious and by now most readers are familiar with it. Much of the past decade in IT has focused on optimizing our data centers. Why? Because for far too many years we simply threw hardware at the problem, and then suddenly – in the post-dot-com, budget-slashing era – we realized we had thousands upon thousands of servers that were very poorly utilized and cost far more to support than to procure. Enter server consolidation, virtualization, and the ostensibly never-ending IT Optimization / Transformation programs. We will come back to this a little later.
Now let's discuss effective design. As it's the season to cook and consume far too much food, it seems fitting to employ the “recipe” analogy. To the casual observer, the process of cooking something seems like an extraordinarily simple exercise: mix the ingredients, cook to proper time and temperature, and voila – the meal is done (if only it were that simple…think back to all the times when you took a bite and immediately wished you hadn’t). How about the first time you tried to cook something by yourself? Suddenly it didn’t seem quite so simple – who knew that there were a dozen different kinds of chocolate chips, or that it really does make a difference when you use a 10x10 pan instead of the recommended 8x14.
Is IT design any different? There are many, many varieties of servers, storage, and networking gear with varying elements of cost and performance benefits. So why do many organizations generally behave like they are cooking for the Cub Scouts on a campout, using low cost ingredients and cooking to feed 100 people? The answer lies in all three of the concepts we’ve discussed:
1. Hardware is cheap, and Cloud makes it appear even cheaper
2. We don’t understand the users (who’s coming to the party)
3. We don’t know how many we’re cooking for, so let’s adopt an approach that allows us to serve as many as possible (quantity is more important than quality).
As amusing as this analogy may be, it’s actually a vicious cycle that we’ve been trapped in for some time. If our users don’t like the result at the end of step 3, we simply give them more resources … see step 1. Does that make our business partners happy (probably not)? Does it ensure we’ll be perpetually optimizing our IT landscape (definitely yes)? Is it any wonder that our business partners often opt to “go out for dinner”? Despite the many benefits of Cloud Computing, it does not fix this problem – it is the IT equivalent of ‘fast food’. What we gain in speed of service is (today) lost in quality, and I’ll politely suggest that long-term dining in the Cloud may have an adverse effect on our IT waistlines.
Ok, ok, enough with the food analogy – you get the point. The fact of the matter is that with our current approach to designing IT systems, using the Cloud as a delivery mechanism simply transfers the inefficiencies of that approach to the Cloud provider, who has greater economies of scale and can therefore offer it at a lower cost. What are we to do?
The correct approach starts with understanding the workload demand from the beginning, and characterizing it in terms of something we call Quality of Experience (QoE). This is a composite metric, based loosely on an understanding of the relative importance of performance, cost, and efficiency to the intended workload. If that is as clear as mud, imagine you have 100 points to distribute across those three attributes; you can assign the points any way you want so long as the total adds up to 100. Critical revenue-generating systems probably get 80 points or more for performance, whereas an employee expense-reporting application likely gets 70+ points on the cost scale.
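To make the point allocation concrete, here is a minimal sketch; the class name, field names, and example workloads are invented for illustration and are not part of any actual QoE tooling:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class QoEProfile:
    """A 100-point allocation across performance, cost, and efficiency."""
    performance: int
    cost: int
    efficiency: int

    def __post_init__(self):
        total = self.performance + self.cost + self.efficiency
        if total != 100:
            raise ValueError(f"QoE points must sum to 100, got {total}")

# A critical revenue-generating system: performance dominates.
trading = QoEProfile(performance=80, cost=10, efficiency=10)

# An employee expense-reporting app: cost dominates.
expenses = QoEProfile(performance=15, cost=70, efficiency=15)
```

Forcing the three weights to sum to a fixed budget is what makes this a statement of relative importance: you cannot demand top marks on every attribute at once.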
Now that we have a business understanding of the workload, we need to look at the various solutions (patterns, reference architectures) available and begin to apply our design rules to select the one that best matches the desired QoE and workload characteristics. What are design rules? They are the decisions we make when we take an abstract pattern and decide which hardware to fulfill it with, how to scale it to meet the anticipated peak demand, how to make it highly available (if necessary), and how to recover if there’s a fault. Good organizations take it a step further and apply still more rules governing what types of monitoring and reporting will be added to the solution, and top-notch groups with flexibility in their run-time environment will also determine when and how a workload can be dynamically allocated more resources … or have them taken away by a higher-priority workload.
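As a toy sketch of what rule-driven pattern selection could look like, consider matching a workload’s QoE point allocation against candidate patterns; the pattern names, 0–10 ratings, and scoring scheme below are all hypothetical, invented purely to illustrate the idea:

```python
# Each candidate pattern advertises how well it serves each QoE
# attribute on a 0-10 scale (illustrative numbers only).
PATTERNS = {
    "bare-metal, clustered, hot-standby": {"performance": 9, "cost": 2, "efficiency": 4},
    "virtualized, auto-scaled":           {"performance": 6, "cost": 6, "efficiency": 8},
    "shared multi-tenant cloud":          {"performance": 3, "cost": 9, "efficiency": 7},
}

def select_pattern(qoe: dict) -> str:
    """Pick the pattern whose strengths best match the workload's
    QoE point allocation (a simple weighted dot product)."""
    def score(attrs):
        return sum(qoe[k] * attrs[k] for k in qoe)
    return max(PATTERNS, key=lambda name: score(PATTERNS[name]))

# A cost-driven workload lands on the cheapest pattern.
print(select_pattern({"performance": 15, "cost": 70, "efficiency": 15}))
# → shared multi-tenant cloud
```

In a real rules engine the scoring would be far richer (peak demand, availability tiers, fault recovery, monitoring requirements), but the principle is the same: the designer’s judgment, once written down as rules, can be applied automatically to every new workload.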
If this sounds complicated, it’s because it is – and that is the fundamental reason most IT shops say, “The heck with it, let’s just cook for the Cub Scouts” and adopt very rigid infrastructure deployment options. Doing it properly takes too long, and requires someone with a wide range of skills and considerable experience to participate in each and every project.
It doesn’t have to be that way any more.
What we’ve done is take a step back and look at this process with the discipline of an engineer. In doing so we made two critical observations:
• A small percentage of projects actually require in-depth, manual design...the vast majority are simply a repeat application of something that’s been done in the past with minor variations.
• The majority of complex decisions and permutations an experienced designer (architect or engineer) makes can be codified into software rules.
That realization led us to develop our Blueprint platform: a flexible design suite that allows for the use of patterns and reference architectures and applies your (or our) design rules to generate mass-producible Blueprint designs specific to the workload requirements (QoE). Even the rules for run-time execution can be codified, whether the workload belongs in a traditional environment or in a private/public/hybrid Cloud. Isn’t it time you started creating the infrastructure designs your business consumers want, while also freeing up your best people to work on the projects that actually require their skills?