A Recipe for Cloud Design
Isn’t it time you started creating the infrastructure designs your business consumers want?

The deployment of infrastructure systems to support applications has been a challenge ever since we first had a choice beyond the venerable mainframe. To some it’s a simple formula: take a server, toss in a little network and storage, bake for a few weeks, and you’re done. If you’re having a few friends over (and need a little more), simply add a few more servers. What the heck, hardware is cheap and gets cheaper every year...and the advent of Cloud makes it even easier to maintain that philosophy.

Unfortunately that mentality is pervasive in much of IT, not to mention the business world. But it has two fundamental flaws: the first revolves around design effectiveness, and the second pertains to operational sustainability.

Let’s talk about the latter flaw first, since it is more obvious and by now most readers are familiar with it. Much of the past decade in IT has focused on optimizing our data centers. Why? Because for far too many years we simply threw hardware at the problem, and then suddenly – in the post dot-com IT budget slashing era – we realized we had thousands upon thousands of servers that were very poorly utilized and cost far more to support than to procure. Enter server consolidation, virtualization, and the seemingly never-ending IT Optimization / Transformation programs. We will come back to this a little later.

Now let's discuss effective design. As it's the season to cook and consume far too much food, it seems fitting to employ the “recipe” analogy. To the casual observer, cooking seems like an extraordinarily simple exercise: mix the ingredients, cook for the proper time at the proper temperature, and voila – the meal is done (if only it were that simple…think back to all the times when you took a bite and immediately wished you hadn’t). How about the first time you tried to cook something by yourself? Suddenly it didn’t seem quite so simple – who knew that there were a dozen different kinds of chocolate chips, or that it really does make a difference when you use a 10x10 pan instead of the recommended 8x14?

There are so many different choices to make when selecting the ingredients, and even more once you discover there are also multiple cooking methods. The permutations are more than most minds can grasp, and you start to understand a little better why it costs so much to dine in a five-star restaurant. A good chef does not arbitrarily select ingredients or the cooking method; he or she carefully considers the audience, the budget, and the amount of time available. Are you cooking for the President and his family, for 30 important guests, or for your local Cub Scout Pack? Each scenario imposes different criteria and forces decisions throughout the meal preparation process, any one of which, if made poorly, may negatively impact the outcome. Dinner will be late, bad, or hastily ordered pizza.

Is IT design any different? There are many, many varieties of servers, storage, and networking gear, each with its own cost and performance trade-offs. So why do so many organizations behave like they are cooking for the Cub Scouts on a campout, using low-cost ingredients and cooking to feed 100 people? The answer lies in all three of the concepts we’ve discussed:

1. Hardware is cheap, and Cloud makes it appear even cheaper

2. We don’t understand the users (who’s coming to the party)

3. We don’t know how many we’re cooking for, so let’s adopt an approach that allows us to serve as many as possible (quantity is more important than quality).

As amusing as this analogy may be, it’s actually a vicious cycle that we’ve been trapped in for some time. If our users don’t like the result at the end of step 3, we simply give them more resources … see step 1. Does that make our business partners happy (probably not)? Does it ensure we’ll be perpetually optimizing our IT landscape (definitely yes)? Is it any wonder that our business partners often opt to “go out for dinner”? Despite the many benefits of Cloud Computing, it does not fix this problem – it is the IT equivalent of ‘fast food’. What we gain in speed of service is (today) lost in quality, and I’ll politely suggest that long-term dining in the Cloud may have an adverse effect on our IT waistlines.

Ok, ok, enough with the food analogy – you get the point. The fact of the matter is that, with our current approach to designing IT systems, leveraging Cloud as a delivery mechanism simply transfers the inefficiencies of that approach to the Cloud provider, who has greater economies of scale and can thus provide it at a lower cost. What are we to do?

The correct approach starts with understanding the workload demand from the beginning, and characterizing it in terms of something we call Quality of Experience (QoE). This is a composite metric, based loosely on the relative importance of performance, cost, and efficiency to the intended workload. If that is as clear as mud, imagine you have 100 points to distribute across those attributes; you can assign the points any way you want so long as the total adds up to 100. Critical revenue-generating systems probably get 80 points or more for performance, whereas an employee expense reporting application likely gets 70+ points on the cost scale.
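
To make the point-allocation idea concrete, here is a minimal sketch in Python of what a QoE profile might look like. The class name, attribute names, and example weights are illustrative assumptions for this article, not Adaptivity's actual scoring model.

    # Hypothetical sketch of a QoE profile: 100 points split across
    # performance, cost, and efficiency. Names and values are illustrative only.
    from dataclasses import dataclass

    @dataclass(frozen=True)
    class QoEProfile:
        performance: int
        cost: int
        efficiency: int

        def __post_init__(self):
            total = self.performance + self.cost + self.efficiency
            if total != 100:
                raise ValueError(f"QoE points must total 100, got {total}")

    # A critical revenue-generating system weights performance heavily...
    trading_platform = QoEProfile(performance=80, cost=10, efficiency=10)
    # ...while an employee expense-reporting app is driven mostly by cost.
    expense_reporting = QoEProfile(performance=10, cost=70, efficiency=20)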

Now that we have a business understanding of the workload, we need to look at the various solutions (patterns, reference architectures) available and begin to apply our design rules to select the one that best matches the desired QoE and workload characteristics. What are design rules? They are the decisions we make when taking an abstract pattern and determining which hardware to fulfill it with, how to scale it to meet the anticipated peak demand, how to make it highly available (if necessary), and how to recover if there’s a fault. Good organizations take it a step further and apply still more rules governing what types of monitoring and reporting will be added to the solution, and top-notch groups with flexibility in their run-time environment will also determine when and how a workload can be dynamically allocated more resources … or have them taken away by a higher priority workload.
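
As a rough illustration of how such rules might be codified, the sketch below weights each candidate reference architecture by the workload's QoE points and picks the best match. The pattern names and attribute scores are invented for illustration; real design rules would also cover scaling, availability, and recovery choices.

    # Hypothetical design-rule sketch: score candidate patterns against a
    # workload's QoE point allocation. All names and numbers are invented.

    CANDIDATE_PATTERNS = {
        # pattern name: how well it delivers each attribute (0-100)
        "bare-metal, clustered, active-active": {"performance": 90, "cost": 20, "efficiency": 40},
        "virtualized, auto-scaled":             {"performance": 60, "cost": 70, "efficiency": 80},
        "shared multi-tenant cloud":            {"performance": 40, "cost": 90, "efficiency": 85},
    }

    def best_match(qoe_points: dict) -> str:
        """Return the pattern whose strengths line up best with the QoE profile."""
        def score(attrs: dict) -> float:
            return sum(qoe_points[a] * attrs[a] for a in qoe_points) / 100
        return max(CANDIDATE_PATTERNS, key=lambda name: score(CANDIDATE_PATTERNS[name]))

    print(best_match({"performance": 80, "cost": 10, "efficiency": 10}))  # bare-metal, clustered, active-active
    print(best_match({"performance": 10, "cost": 70, "efficiency": 20}))  # shared multi-tenant cloud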

If this sounds complicated, it’s because it is – and that is the fundamental reason most IT shops say, “The heck with it, let’s just cook for the Cub Scouts” and adopt very rigid infrastructure deployment options. Doing it properly takes too long, and requires someone with a wide range of skills and considerable experience to participate in each and every project.

It doesn’t have to be that way any more.

What we’ve done is take a step back and look at this process with the discipline of an engineer. In doing so we made two critical observations:

• A small percentage of projects actually require in-depth, manual design...the vast majority are simply a repeat application of something that’s been done in the past with minor variations.

• The majority of complex decisions and permutations an experienced designer (architect or engineer) makes can be codified into software rules.

That realization led us to develop our Blueprint platform: a flexible design suite that supports the use of patterns and reference architectures and applies your (or our) design rules to generate mass-producible Blueprint designs specific to the workload requirements (QoE). Even the rules for run-time execution can be codified, whether the workload belongs in a traditional environment or in a private/public/hybrid Cloud. Isn’t it time you started creating the infrastructure designs your business consumers want, while also freeing up your best people to work on the projects that actually require their skills?
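
For the run-time side, a codified rule might look something like the following sketch. The function name, priorities, and thresholds are purely hypothetical; the point is only that the "when" and "how" of reallocating resources can be expressed in software rather than re-decided by hand on every project.

    # Hypothetical run-time rule: a busy, higher-priority workload may borrow
    # capacity from an idle, lower-priority one. Thresholds are illustrative.

    def may_expand(requester_priority: int, requester_cpu: float,
                   donor_priority: int, donor_cpu: float) -> bool:
        """Decide whether the requesting workload may take resources from the donor."""
        return (requester_cpu > 0.85              # requester is under pressure
                and donor_cpu < 0.40              # donor has headroom
                and requester_priority > donor_priority)

    # A critical trading workload (priority 9, 92% CPU) can pre-empt a lightly
    # loaded reporting workload (priority 3, 25% CPU):
    print(may_expand(9, 0.92, 3, 0.25))   # True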

About James Houghton
James Houghton is Co-Founder & Chief Technology Officer of Adaptivity. In his CTO capacity Jim interacts with key technology providers to evolve capabilities and partnerships that enable Adaptivity to offer its complete SOIT, RTI, and Utility Computing solutions. In addition, he engages with key clients to ensure successful leverage of the ADIOS methodology.

Most recently, Houghton was the SVP Architecture & Strategy Executive for the infrastructure organization at Bank of America, where he drove legacy infrastructure transformation initiatives across 40+ data centers. Prior to that he was the Head of Wachovia’s Utility Product Management, where he drove the design, services, and offering for SOA and Utility Computing for the technology division of Wachovia’s Corporate & Investment Bank. He has also led leading-edge consulting practices at IBM Global Technology Services and Deloitte Consulting.
