Building the Next-Generation Datacenter – A Detailed Guide
Part 1: Server Consolidation

Virtualization has the power to transform the way business runs IT, and it's the most important transition happening in IT today. It promotes flexible utilization of IT resources, reduced capital and operating costs, high energy efficiency, highly available applications, and better business continuity. However, the virtualization journey can be long and difficult as virtualization brings with it a unique set of challenges around the management and security of the virtual infrastructure. Most organizations struggle, sooner or later, with workload migrations, visibility and control, virtual machine (VM) sprawl, and the lack of datacenter agility.

Working with customers, industry analysts and other experts, CA Technologies has devised a simple four-stage model, shown in Figure 1, to describe the progression from an entry-level virtualization project to a mature dynamic data center and private cloud strategy. These stages include server consolidation, infrastructure optimization, automation and orchestration, and dynamic data center.

Figure 1: Customer virtualization maturity cycle

Most organizations face one or more clear 'tipping points' during the virtualization journey, where a virtualization deployment stalls as IT stops to deal with new challenges. This 'VM stall' tends to coincide with transitions between stages in the virtualization maturity lifecycle - such as the move from tier 2/tier 3 server consolidation to the consolidation of mission-critical tier 1 applications, or from basic provisioning automation to a dynamic datacenter approach.

This four-part article provides guidance on the combination of people, processes and technology needed to overcome virtualization roadblocks and promote success at each of the four distinct stages of the virtualization maturity life cycle. It includes a definition of each stage and challenges associated with it, as well as a high-level project plan for a sample implementation.

It's important to note that the tasks and timelines for the sample project plans will vary depending upon the size and scope of the project, available resources, number and complexity of candidate applications, and other parameters.

Stage 1: Server Consolidation
Server consolidation, using virtualization, is an approach that makes efficient use of available compute resources and reduces the total number of physical servers in the data center. There are significant savings in hardware, facilities, operations and energy costs associated with server consolidation - hence it is one of the main drivers of virtualization and is being widely adopted within enterprises today.

Migrate with confidence
The bottom line for IT professionals undertaking initial server consolidation is that any failure at this stage could potentially stall or end the virtualization journey. Organizations undergoing server consolidation face key challenges with:

  • Gaining insight into application configuration, dependencies, and capacity requirements
  • Performing quick and accurate workload migrations in a multi-hypervisor environment
  • Ensuring application performance after the workload migration
  • Overcoming a lack of in-house virtualization expertise

Project Plan
A high-level project plan for a department-level server consolidation project within a mid-sized organization is presented in Table 1. It details some of the key tasks necessary for a successful server consolidation project. The timelines and tasks in Table 1 outline a tier 2/tier 3 departmental server consolidation project that targets converting approximately 200 production and non-production physical Windows and Linux servers onto about 40 virtual server hosts. The two- to three-person implementation team suggested for the project is expected to be proficient in project management, virtualization design and deployment, and systems management.

Table 1: Server consolidation project plan

A successful server consolidation project necessitates a structured approach that should consist of the following high-level tasks.

Server consolidation workshop
A server consolidation workshop should identify issues within the environment that may limit the organization's ability to successfully achieve expected results, and provide a project plan that details tasks, resources, and costs associated with the project. A well-defined checklist should be used to identify potential challenges with resources like servers, network, and storage. For example, the checklist should ensure that:

  • The destination servers are installed and a physical rack placement diagram is available that depicts the targeted location of equipment in the rack space provided.
  • Highly available and high-bandwidth network paths are available between the virtual servers and the core network to support both application and management traffic.
  • An optimal data store, and connectivity to it, are available for the virtual machine disks.
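Checks like these lend themselves to light automation. The Python sketch below is illustrative only, and assumes hypothetical values (an SSH management port as the reachability probe, a local mount as the datastore path); it is not a substitute for a vendor-supplied readiness assessment.

```python
import shutil
import socket

def host_reachable(host: str, port: int = 22, timeout: float = 3.0) -> bool:
    """True if a destination server answers on a management port."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def datastore_has_capacity(path: str, required_gb: float) -> bool:
    """True if the VM datastore mount has enough free space."""
    free_gb = shutil.disk_usage(path).free / 1024**3
    return free_gb >= required_gb

# Placeholder checks; a real checklist would cover racks, network paths, etc.
checks = {
    "datastore has 1 GB free": datastore_has_capacity("/", 1),
    "destination answers on SSH": host_reachable("localhost", 22, timeout=1.0),
}
for name, ok in checks.items():
    print(f"{name}: {'PASS' if ok else 'FAIL'}")
```

Each checklist item becomes a pass/fail function, so the workshop's findings can be re-verified automatically on migration day.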

The workshop should draw a complete picture of the potential challenges with the proposed server consolidation, and include concrete strategies and recommendations for moving forward with the project. It should result in a comprehensive project plan that clearly divides tasks among the implementation teams and individuals, defines timelines and contingency plans, and is approved by all key management and IT stakeholders.

Application and system discovery / profiling
Migrating physical workloads necessitates in-depth knowledge of the applications they support. Discovery and dependency mapping is one of the most important tasks during server consolidation, as not knowing the details of the environment can jeopardize the entire project. Although some hypervisor vendors provide free tools for light discovery (such as the Microsoft Assessment and Planning Toolkit and VMware Guided Consolidation), these do not dig deep enough to collect the information necessary for a successful server consolidation. Skipping detailed discovery and application/system dependency mapping almost invariably leads to problems during migration.

During the application discovery and profiling process, application and systems consultants should use a configuration management tool to store configuration and dependency details of the applications supported by the target workloads. These tools discover and snapshot application / system components to provide a comprehensive, cross-platform inventory of applications at a granular level, including directories, files, registries, database tables, and configuration parameters - thus allowing for greater success during workload migration.
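As a rough illustration of the snapshot idea, the Python sketch below builds a file-level inventory (path, size, content hash) that can be diffed before and after migration. The directory names are stand-ins; a real configuration management tool captures far more, including registries, database tables, and live process dependencies.

```python
import hashlib
import json
import tempfile
from pathlib import Path

def snapshot(config_dirs):
    """Record path, size, and content hash for every file under the given
    directories, producing an inventory that can be diffed later."""
    inventory = {}
    for root in config_dirs:
        for path in sorted(Path(root).rglob("*")):
            if path.is_file():
                inventory[str(path)] = {
                    "size": path.stat().st_size,
                    "sha256": hashlib.sha256(path.read_bytes()).hexdigest(),
                }
    return inventory

# Demo with a throwaway directory standing in for an application's config tree.
with tempfile.TemporaryDirectory() as tmp:
    (Path(tmp) / "app.conf").write_text("port = 8080\n")
    print(json.dumps(snapshot([tmp]), indent=2))
```

A pre-migration snapshot stored this way gives the migration team a concrete baseline to validate against once the workload lands on its virtual host.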

Capacity analysis and planning
Once organizations have profiled the candidate applications / systems for consolidation, they will need to determine what host hardware configuration and VM configuration will support optimal performance and scalability over time. Comprehensive capacity analysis and planning is essential to determine the optimal resource requirements in the target virtual infrastructure, and allows IT organizations to plan additional capacity purchases (server / storage hardware, bandwidth, etc.) prior to starting the migration process.

Here too, free tools (e.g., VMware Capacity Planner) are available from hypervisor vendors, but these are services-heavy and often biased toward the vendor's own hypervisor. In addition, they omit factors necessary for comprehensive capacity planning, such as power consumption, service-level requirements, organizational resource pools, security restrictions, and other non-technical considerations. They also lack critical features such as what-if analysis.

Capacity planning becomes even more important with critical applications. For instance, combining applications that have similar peak transaction times could have serious consequences, resulting in unnecessary downtime, missed SLAs, and consequent credibility issues with internal customers. To avoid such issues, historical performance data from the applications should be utilized during the capacity planning process.
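The core of the sizing exercise can be illustrated with a toy what-if model: pack workloads onto hosts by peak demand while reserving headroom for failover and growth. The host capacities, headroom factor, and workload figures in this Python sketch are invented for the example; real planning should use historical peak data per application.

```python
HOST_CPU_GHZ = 32.0   # assumed capacity of one virtualization host
HOST_RAM_GB = 128.0
HEADROOM = 0.8        # keep 20% spare for failover and growth

def plan_hosts(workloads):
    """First-fit-decreasing packing: return a list of hosts, each holding
    workloads whose combined peak CPU and RAM stay within the headroom."""
    hosts = []  # each entry: {"cpu": used, "ram": used, "vms": [...]}
    for name, cpu, ram in sorted(workloads, key=lambda w: -w[1]):
        for h in hosts:
            if (h["cpu"] + cpu <= HOST_CPU_GHZ * HEADROOM and
                    h["ram"] + ram <= HOST_RAM_GB * HEADROOM):
                h["cpu"] += cpu
                h["ram"] += ram
                h["vms"].append(name)
                break
        else:
            hosts.append({"cpu": cpu, "ram": ram, "vms": [name]})
    return hosts

# Illustrative peak demands: (name, peak CPU in GHz, peak RAM in GB)
demo = [("web1", 4.0, 16.0), ("web2", 4.0, 16.0),
        ("db1", 12.0, 64.0), ("batch", 10.0, 32.0)]
for i, h in enumerate(plan_hosts(demo), 1):
    print(f"host{i}: {h['vms']} (cpu {h['cpu']} GHz, ram {h['ram']} GB)")
```

Changing the headroom factor or host specification and re-running is a crude form of the what-if analysis that the free vendor tools lack.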

Workload migration
The workload migration process is easily the most complex component of an organization's virtualization endeavor. Migration refers to 'copying' an entire server/application stack to the virtual infrastructure. IT organizations face many challenges during workload migration - most succeed in migrating only 80-85% of target workloads, and even then with considerable difficulty. Some of the challenges include:

  • Migration in a multi-hypervisor environment, including possible V2P and V2V scenarios
  • Flexibility to work with either full snapshots or granular data migration
  • In-depth migration support for critical applications such as Active Directory, Exchange, and SharePoint
  • Application/system downtime during the migration process

Free migration tools are available from some hypervisor vendors, but they tend to work poorly and can require system shutdown for several hours during conversion. They may also limit the amount of data supported or require advance tests on storage to uncover and address bad blocks. Backup, High Availability (HA), and IP-based replication tools are a very good option for successful workload migrations: they not only help overcome or mitigate the above-mentioned challenges, but can also be used for comprehensive BCDR (Business Continuity and Disaster Recovery) capabilities.

From a process standpoint, ensure that migrations are performed per a pre-defined schedule and include acceptance testing and sign-off steps to complete the process. Have contingency plans in place, and budget a modest amount of troubleshooting time so that minor issues can be worked out in real time and the workload's migration completed in the same window, rather than rescheduling downtime later.
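One way to build such a schedule is to derive migration waves from the dependency map produced during discovery, so a workload moves only after everything it depends on has already been migrated. The Python sketch below uses the standard library's topological sorter; the dependency map is illustrative.

```python
from graphlib import TopologicalSorter

# Hypothetical dependency map from the discovery phase:
# each workload maps to the set of workloads it depends on.
deps = {
    "web-frontend": {"app-server"},
    "app-server": {"database"},
    "reporting": {"database"},
    "database": set(),
}

ts = TopologicalSorter(deps)
ts.prepare()
wave = 1
while ts.is_active():
    ready = tuple(ts.get_ready())   # workloads whose dependencies are migrated
    print(f"wave {wave}: migrate {sorted(ready)}")
    ts.done(*ready)
    wave += 1
```

Everything in one wave can be migrated in parallel within a single downtime window, and the sorter raises an error on circular dependencies, surfacing discovery gaps before migration day.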

VM configuration testing
Configuration error is the root cause of a large percentage of application performance problems; post-migration VM configuration testing prevents performance and availability problems caused by such errors. Another key post-migration challenge is preventing configuration drift. Maintaining many disparate VM/OS/application base templates can be very challenging; IT organizations can significantly ease configuration challenges by using a few well-defined gold-standard templates. After migration, application/systems consultants should use change and configuration management tools to:

  • Compare post-migration configurations to the validated configurations stored during the discovery/profiling task
  • Detect and remediate deviations from the post-migration configuration or gold-standard templates

These and related actions are essential to enable a successful migration, debug post-migration issues if any, and prevent configuration drift.
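At its simplest, drift detection reduces to diffing a current configuration against the validated or gold-standard one. The Python sketch below assumes configurations can be flattened into key/value pairs (e.g., from the snapshots taken during profiling); the keys and values are hypothetical.

```python
def config_drift(gold, current):
    """Return deviations from the gold standard: key -> (expected, actual)."""
    drift = {}
    for key in gold.keys() | current.keys():
        expected = gold.get(key, "<absent>")
        actual = current.get(key, "<absent>")
        if expected != actual:
            drift[key] = (expected, actual)
    return drift

# Illustrative gold-standard template vs. a freshly migrated VM's settings.
gold = {"java.heap": "4g", "db.pool_size": "50", "tls": "enabled"}
migrated = {"java.heap": "2g", "db.pool_size": "50"}
for key, (want, got) in sorted(config_drift(gold, migrated).items()):
    print(f"DRIFT {key}: expected {want}, found {got}")
```

Running the same diff periodically after go-live is what turns a one-time migration check into ongoing drift prevention.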

Production testing and final deliverables
The breadth and depth of post-migration testing will vary with the importance of the migrated workload; less critical workloads might require only basic acceptance tests, while critical ones might necessitate comprehensive QA testing. In addition, this task should include follow-up on any changes that the migration teams should have applied to a VM but could not, whether due to timing or the need for additional change-management approval. All such post-migration recommendations should be noted, as appropriate, in the post-test delivery document(s).

This final stage of the implementation process should include the delivery of documentation on the conversion and migration workflow and procedures for all workloads. Doing so will remove dependency on acquired tribal knowledge and allow staffing resources to be relatively interchangeable. These artifacts and related best practices documents will also allow continuation of the migration process for additional workloads in an autonomous fashion in the future if desired.

Conclusion
In the first part of this article, we looked at a simple four-stage model to describe the progression from an entry-level virtualization project to a mature dynamic data center and private cloud strategy. We then focused on server consolidation, and used a sample project plan to discuss the tasks required for successful server consolidation. In the next part of this article, we will focus on building and maintaining a mature and optimized infrastructure that is essential for IT organizations to virtualize tier 1 workloads and achieve increased capacity utilization on the virtual hosts.

About Birendra Gosai
Birendra Gosai has a Master's degree in Computer Science and over ten years of experience in the enterprise software industry. He has worked extensively on data warehousing, network & systems management, and security management technologies. He currently works in the virtualization management business at CA Technologies. You can view his blogs at: http://community.ca.com/members/Birendra-Gosai.aspx, or follow him on Twitter @BirendraGosai.
