Clouds Are Coming! | @CloudExpo @ZertoCorp #IoT #M2M #API #BigData

Clouds Are Coming, So Learn How to Fly

The cloud. Like a comic book superhero, there seems to be no problem it can't fix or cost it can't slash. Yet making the transition is not always easy and production environments are still largely on premise. Taking some practical and sensible steps to reduce risk can also help provide a basis for a successful cloud transition.

A plethora of surveys from the likes of IDG and Gartner show that more than 70 percent of enterprises have deployed at least one cloud application or workload. Yet closer inspection of the data reveals that less than half of these cloud projects involve production workloads, which suggests there is still apprehension about using the cloud for critical core infrastructure.

Some of these fears center on security, although the absence of any major cloud service provider breach suggests this concern is ebbing away. A more common explanation is that moving to the cloud is a relatively unknown experience for many IT teams; combine that with a natural reluctance to move mission-critical applications from stable systems to anywhere else, and palpitations are to be expected.

Faster than a speeding bullet...
Yet the benefits are clear. The switch to a more OPEX-centered model, combined with the ability to grow on demand, is well suited to an economy that rewards agile businesses that can adjust to supply, demand and wider market forces. Virtualization now has more than 70 percent adoption across enterprises, and combined with the increasing use of software-defined networking and storage, end users have a solid basis for reducing complexity and, ultimately, costs.

Yet many enterprises are still asking the question: how do we start moving our production environments to the cloud? The logical first step is to move a use case into the cloud that will test the environment thoroughly while allowing the organization to build confidence in the cloud's capabilities and security. Disaster Recovery (DR) is the best candidate because it closely resembles a production environment, and regular DR testing exercises the cloud again and again. For many organizations, DR still consists of backup processes where data sets or snapshots are replicated and stored at a distant datacenter. Yet this legacy, manual backup concept is undergoing a massive rethink due to slow recovery times and an inability to cope with massive growth in data and application complexity. Unsurprisingly, given its combination of compute, storage and connectivity, the cloud is supplanting legacy backup and becoming the basis for delivering DR and business continuity (BC).

Win Win!
For many organizations, this provides an elegant strategy. By initially focusing on a cloud-based BC/DR strategy, organizations can become comfortable with the cloud concept, develop the key skill set, and reduce risk ahead of moving production environments to the cloud. In essence, a modern cloud-based BC/DR solution can spin up an entire replacement production environment within a few minutes of an outage, using real data that is often less than 15 minutes old, and carry on with business as normal. When this is proven time and time again, organizations become comfortable with the cloud and know they can rely on it for their production environment. One caveat: this is only true of production environments that have been virtualized, and not all BC/DR designs are equal in the features and limitations they impose.

To the Hypervisor and beyond...
The complex interconnections of modern IT require a better BC/DR strategy. In response, many organizations are replicating complete workloads - groups of linked applications along with their data - instead of individual snapshots. Although there are many ways to replicate applications and data, hypervisor-based replication removes many of the limitations of the underlying infrastructure by encapsulating applications. The data and applications are not just linked; the entire workload becomes mobile, consisting of multiple VMs with interdependency rules covering networking, firewalls and other requirements, ensuring interoperability between different virtualization platforms whether on premise or in the cloud.
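To make the idea of a replicated workload with interdependency rules concrete, here is a minimal sketch in Python. It is purely illustrative - the class and field names are assumptions, not any vendor's API - but it captures the point that the unit of protection is a group of VMs plus the boot order and network mappings needed to bring the whole workload up on another platform.

```python
from dataclasses import dataclass, field

# Hypothetical model of a protected workload: a set of interdependent VMs
# plus the rules needed to start them correctly at a recovery site.

@dataclass
class ProtectedVM:
    name: str
    boot_order: int        # lower boots first (e.g., database before app tier)
    network_mapping: dict  # source network -> target/cloud network

@dataclass
class ProtectionGroup:
    name: str
    rpo_seconds: int       # target recovery point objective
    vms: list = field(default_factory=list)

    def boot_sequence(self):
        """VM names in the order they should start at the recovery site."""
        return [vm.name for vm in sorted(self.vms, key=lambda v: v.boot_order)]

group = ProtectionGroup(name="erp-workload", rpo_seconds=900)  # ~15-minute RPO
group.vms += [
    ProtectedVM("app-server", 2, {"prod-net": "cloud-net"}),
    ProtectedVM("db-server", 1, {"prod-net": "cloud-net"}),
    ProtectedVM("web-frontend", 3, {"dmz-net": "cloud-dmz"}),
]
print(group.boot_sequence())  # database first, then app, then web tier
```

Grouping the rules with the VMs is what makes the workload portable: the same definition can drive recovery on premise, in a cloud, or on a different hypervisor.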

In operation, hypervisor-based replication works alongside the virtual management console, such as VMware's vCenter or Microsoft's SCVMM. As part of the control plane, anything that happens within the virtualized domain can be replicated to the cloud in near real time. Hypervisor-based replication uses a Virtual Replication Appliance (VRA) that the virtual management console automatically deploys onto ESXi or Hyper-V hosts. The VRA continuously replicates data from user-selected virtual machines, compressing and sending that data to a remote site or storage target over LAN/WAN links. Because it is installed directly inside the virtual infrastructure, the VRA captures each I/O before it leaves the hypervisor and is committed to disk; a copy is made and sent to the recovery site. The end result is that the disaster recovery position is not a 12-hour-old backup of applications but a full history of all the changes that took place within the application and data. A final ancillary benefit is simplicity and reduced management overhead: because the VRA is deployed per host rather than per VM, a single appliance can manage multiple guests, unlike legacy solutions that require an agent on each virtual machine, adding complexity and draining capital and operational resources.
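The capture-compress-ship-journal flow described above can be sketched in a few lines of Python. This is not a real VRA - the classes are assumptions for illustration only - but it shows why journaling every captured write yields a full change history rather than a single stale backup.

```python
import zlib

# Illustrative sketch of hypervisor-style replication: each write is
# captured in the I/O path, compressed, shipped to a recovery site, and
# appended to a journal so recovery can roll back to any captured point.

class RecoverySite:
    """Remote target that keeps a journal of every replicated write."""
    def __init__(self):
        self.journal = []   # (sequence, offset, data) - full change history
        self.disk = {}      # latest state, keyed by block offset

    def receive(self, seq, offset, compressed):
        data = zlib.decompress(compressed)
        self.journal.append((seq, offset, data))  # history is never lost
        self.disk[offset] = data                  # current recovery image

class ReplicationAppliance:
    """Taps writes before they are committed and ships a copy downstream."""
    def __init__(self, target):
        self.target = target
        self.seq = 0

    def on_write(self, offset, data):
        # The original I/O proceeds untouched; only a copy is forwarded.
        self.seq += 1
        self.target.receive(self.seq, offset, zlib.compress(data))

site = RecoverySite()
vra = ReplicationAppliance(site)
vra.on_write(0, b"v1 of block 0")
vra.on_write(0, b"v2 of block 0")  # overwrite: both versions stay journaled
```

The key property is visible in the last two lines: overwriting a block updates the current image but leaves both versions in the journal, so the recovery position is a timeline, not a single snapshot.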

Building DR offers a cloud migration path
Although this is perfect for BC/DR, the same process and steps apply to migrating to the cloud: in essence, all you would need to do is synchronize the live production environment with the cloud and fail it over, at which point the cloud becomes the live environment. You would then create another BC/DR replication to a separate cloud, achieving a true continuity strategy. This also helps avoid cloud supplier lock-in, and the flexibility extends beyond cloud vendors: some of the more advanced hypervisor-based replication tools can easily move workloads between VMware ESXi and Microsoft Hyper-V, with support for other platforms on the horizon.
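The "migration as a failover" sequence can be summarized as sync, cut over, then re-protect. The short Python sketch below models it with plain dictionaries; the site names and functions are hypothetical, chosen only to make the three steps explicit.

```python
# Illustrative sketch: the same replication used for DR performs the
# migration. Sync the workload to the cloud, promote the cloud copy to
# production, then replicate onward to a second cloud for continuity.

def replicate(state, workload, src, dst):
    """Copy the workload's current data from src site to dst site."""
    state.setdefault(dst, {})[workload] = dict(state[src][workload])

def failover(state, workload, src, dst):
    replicate(state, workload, src, dst)  # final sync before cutover
    state["production"] = dst             # the cloud is now the live site

state = {"on-prem": {"erp": {"version": 42}}, "production": "on-prem"}

failover(state, "erp", "on-prem", "cloud-a")       # migrate to the cloud
replicate(state, "erp", "cloud-a", "cloud-b")      # re-protect: new BC/DR target
```

After the final step the workload runs in one cloud and is protected by another, which is also how the lock-in concern above is addressed: the next failover can target any supported platform.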

This trend is not just theoretical. A recent VMware survey of use cases among its cloud customers ranked "Disaster Recovery" as the second most common requirement, just behind "Packaged applications." The third most common use case, "Test and Dev," is another example where a cloud-based BC/DR solution not only protects against outages but also offers major operational benefits. The ability to clone a production environment and spin it up in a scalable resource like the cloud is a developer's dream. With the cloud's elastic nature, developers can run projects like complex stress testing at global scale using an accurate model of the real production environment.

The superhero that is the cloud is unlikely to put away its cape anytime soon. For organizations still feeling a bit awkward, simply doing the groundwork of building a cloud-based BC/DR solution is enough to gain an understanding of many of the processes involved. The next phase of moving production systems over will then be less daunting, with much less risk and, of course, reduced cost. Finally, the IT team gets to be the hero, and not just the cloud.

About Jennifer Gill
Jennifer Gill is the Director of Product Marketing at Zerto. She has more than 15 years of experience marketing high-tech products and solutions across a variety of industries with proven expertise in enterprise storage, virtualization and disaster recovery/business continuity (DR/BC). She currently leads Zerto’s messaging and product content strategy to increase awareness of Zerto Virtual Replication. Additionally, she leads Zerto’s customer satisfaction and reference program to deliver access to real customer stories and best practices as they relate to BC/DR strategies.

Previously, Jennifer held several management positions at EMC in the Solutions Marketing Group and played a key role in the launch of VCE, for which she was named a finalist for the John Howard Common Sense Award. She has a BS in Biomedical Engineering from Boston University and an MBA in Marketing and Leadership from the Goizueta School of Business at Emory University.
