Achieve Better, Faster & Cheaper Through Holistic Data Center Architecture
The network is uniquely qualified to enable virtualization at scale

Virtualization Magazine

The three perpetual business demands of better, faster and cheaper may just be three of the best reasons to consider infrastructure virtualization. Today's virtualization technologies, properly architected and deployed, can provide significant benefit to organizations working to evolve their IT infrastructure from an inflexible collection of individual assets into a system capable of rapidly adapting to meet business demands.

Although most enterprises evaluating virtualization may first implement server virtualization through the use of hypervisors, the journey toward a more dynamic infrastructure shouldn't stop there. Let's examine how existing and new network technologies can play a pivotal role in lowering costs and ensuring business continuity while positioning the IT infrastructure as an enabler of business agility.

Patterns in Virtualization
There are two common virtualization patterns. Segmentation refers to using virtualization technologies to carve multiple logical versions out of a single physical device; hypervisor-based server virtualization is a familiar example. Aggregation is the reverse: several physical devices are combined to present one logical instance, as in grid computing.

Segmentation and aggregation can be combined in the context of an individual application. Imagine the benefits of a server virtual machine (VM) that employs segmentation being able to depend on a network-attached storage array that uses aggregation. By combining the two, we can achieve both the runtime isolation and the storage scalability required by a given enterprise application. At the same time, we can attain improved server utilization and data recovery capabilities.
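The two patterns, and their combination, can be illustrated with a small sketch. This is a hypothetical model for intuition only; the class and method names below are invented and do not correspond to any real hypervisor or storage API.

```python
# Illustrative sketch of the two virtualization patterns (hypothetical model).

class PhysicalServer:
    """Segmentation: one physical device hosts many logical VMs."""
    def __init__(self, name, cpu_cores):
        self.name = name
        self.cpu_cores = cpu_cores
        self.vms = []

    def create_vm(self, vm_name, cores):
        # Carve a logical slice out of the physical device's capacity.
        used = sum(v["cores"] for v in self.vms)
        if used + cores > self.cpu_cores:
            raise RuntimeError("not enough capacity on " + self.name)
        vm = {"name": vm_name, "cores": cores, "host": self.name}
        self.vms.append(vm)
        return vm


class StorageArray:
    """Aggregation: many physical disks present one logical volume."""
    def __init__(self, disk_sizes_gb):
        self.disks = disk_sizes_gb

    @property
    def logical_capacity_gb(self):
        return sum(self.disks)


# Combining the patterns: a segmented VM backed by an aggregated volume
# gives runtime isolation plus storage scalability for one application.
host = PhysicalServer("esx-01", cpu_cores=16)
vm = host.create_vm("app-vm", cores=4)
array = StorageArray([500, 500, 1000])
print(vm["name"], "on", vm["host"], "backed by", array.logical_capacity_gb, "GB")
```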

Goals of Virtualization
Today's enterprises are under constant pressure to expand business capabilities, improve real-time information access, and provide richer user interactions. Globalization and new business models are creating an increasingly borderless enterprise. Governance and regulatory compliance requirements add complexities to information processing. Customers and users want their interactions with companies to incorporate new Internet capabilities.

Businesses are responding with a generation of applications that take advantage of the latest architectural trends such as service-oriented architecture (SOA) and Web 2.0 in the pursuit of delivering business value. However, the current generation of data center infrastructure is hard-pressed to keep up. New server installation often trails application demand even when adequate floor space exists. For many, the present-generation data center architecture may be at a breaking point.

What does the latest wave of virtualization, in which hypervisor technology has become the focal point, bring to the table? What opportunities does this trend present? Why are organizations so interested in virtualizing their server infrastructure?

  • Higher efficiency: Better use of existing assets leads to reductions in power, cooling and floor space.
  • Improved resiliency: Configured properly, an entire logical server and its workload can be recovered by restoring a copy of the VM's files on a new virtual server host, reducing application service recovery time from days to minutes.
  • Increased agility: Execution environments for applications can be deployed faster than ever before. Creating a new VM on existing physical hardware requires significantly less time and coordination than bringing a new physical server online.
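The resiliency point above can be sketched in miniature: because a VM is ultimately just a set of files, recovery amounts to copying those files to a healthy host and registering them there. The following is a hypothetical file-level sketch; real tooling would use a hypervisor's management API and multi-gigabyte disk images rather than stand-in text files.

```python
# Hypothetical sketch: a VM is a set of files, so recovery is file restoration.
import shutil
import tempfile
from pathlib import Path

def backup_vm(vm_dir: Path, backup_root: Path) -> Path:
    """Copy a VM's files (disk image + config) to backup storage."""
    dest = backup_root / vm_dir.name
    shutil.copytree(vm_dir, dest)
    return dest

def restore_vm(backup_dir: Path, new_host_dir: Path) -> Path:
    """'Recover' the VM on a new host simply by restoring its files."""
    dest = new_host_dir / backup_dir.name
    shutil.copytree(backup_dir, dest)
    return dest

# Demo with stand-in files in a temporary directory.
root = Path(tempfile.mkdtemp())
vm = root / "hosts" / "host-a" / "app-vm"
vm.mkdir(parents=True)
(vm / "disk.img").write_text("disk contents")
(vm / "vm.conf").write_text("cores=4")

backup = backup_vm(vm, root / "backups")
restored = restore_vm(backup, root / "hosts" / "host-b")
print(sorted(p.name for p in restored.iterdir()))  # the VM's files, intact
```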

Today's economic climate has led many enterprises to emphasize cutting costs over making information technology more responsive, flexible, and innovative. Even so, server virtualization initiatives can often be justified on cost savings alone, while also delivering on the strategic goal of making IT more responsive to the continual evolution of the business.

Virtualization in Enterprise Architecture
As we've seen, server virtualization can help IT be better, faster, and cheaper. But what is the best way to deploy it within the existing operational context? How do you ensure that as virtualization expands beyond the server and into storage, applications, and the desktop, over-optimized silos of individual technology virtualization don't degrade the overall data center environment? This is where enterprise architects play a key role.

Many organizations' infrastructure architecture is rather accidental. An IT department may have selected key standards for server, networking and storage hardware and built teams around these individual technologies, but as the data center has grown over time and added special-purpose appliances, an IT department may now have the equivalent of the application architect's "integration spaghetti." It's no surprise that many enterprises spend more than 75 percent of their budgets on maintaining their current assets instead of funding innovative projects.

Even if that doesn't accurately describe your own organization, the goals and objectives of the various infrastructure teams in your enterprise may not always align. Technology interdependencies may be poorly understood. Server, storage and network teams may blame each other during an application outage. In this setting, introducing virtualization technology presents both an opportunity and a new set of challenges.

With hypervisor technology in the data center, the VM workload, rather than the server, effectively becomes the data center's new "atomic unit." Because data center operations have focused for so long on what happens in and around the server, this seemingly small change has created a ripple effect on architects, engineers, and operations teams throughout the data center.

Nearly every data center system was originally built around physical servers. From designing network topology, to planning backup operations, to creating security policies, the server has likely been the center of it all. But what happens once you virtualize the server and focus on the workload? What are the implications for everything surrounding the workload? Expanding further to advanced virtualization techniques, such as the live migration of VMs between physical hosts, you may already see some potential challenges.

Significant work remains to be done beyond the hypervisor to achieve a more efficient, resilient and agile infrastructure. However, the disruption of virtualization also presents architectural opportunity. Incorporating server virtualization into your environment is sufficiently disruptive to warrant establishing or refreshing an architectural road map for the next-generation data center. A successful plan will encompass the right scope and perspective, ideally through a more holistic, systematic approach to the problem.

The Next-Generation Data Center Vision
A vision of the desired end state is essential for every successful architectural shift, and your next-generation data center is no different. Most organizations will agree that server virtualization is crucial. But how can other types of infrastructure virtualization help complete the picture? Do new capital expenditures even make sense when your management team has recently become interested in Amazon's Elastic Compute Cloud (EC2) and similar offerings from Microsoft, IBM and others? Where do you start?

Remember our goals: higher efficiency, improved resilience and greater agility. The business wants lower cost of ownership, continuous availability and faster deployment of new solutions. All of this points to the need for a new way of envisioning, architecting and ultimately implementing data center designs that result in a more dynamic infrastructure.

While the endpoints of the network are a logical place to start when introducing hypervisor technology, the network can play a significant role in realizing the vision of a dynamic data center. As the only common element that connects and enables communications between IT infrastructure components, the network can provide essential benefits that also merit thinking from the inside out - starting with core network capabilities and extending them to the VM endpoints.

Where Does the Network Fit In?
Virtualization is not a foreign concept to the network. Network administrators have used VPNs and VLANs for years to segment resources for individual users and applications. More recently, server load-balancing devices have been used to aggregate multiple server instances into high-performance clusters.

Today's load-balancing devices can be partitioned to create virtual contexts on a per-application basis, as can firewall services. The routing table on an individual router can be partitioned using VPN routing and forwarding (VRF) technology. Two devices can be aggregated to provide a single logical switch, or virtual switching system. Further, these virtual building blocks can be combined to construct unique solutions such as enabling guest access on a shared corporate network through the combination of VPNs plus VRF associated with particular VLANs.
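As a concrete, simplified illustration of combining these building blocks, a guest-access VRF bound to a guest VLAN might look like the following IOS-style configuration. The VRF name, route distinguisher, VLAN number, and addresses are invented for illustration; a production design would also include ACLs and routing toward the guest egress.

```
! Define a VRF to keep guest routing separate from the corporate routing table
ip vrf GUEST
 rd 65000:10
!
! Bind the guest VLAN's Layer 3 interface to that VRF
interface Vlan100
 description Guest access VLAN
 ip vrf forwarding GUEST
 ip address 192.0.2.1 255.255.255.0
```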

The network has two key roles as it enables the next-generation virtualized data center: virtualizing network services themselves and supporting a faster adoption of server virtualization and several related advanced techniques.

In the latter role, the network can uniquely combine the domains of storage, networking and server virtualization into a more unified computing experience. This evolution has occurred through innovative technologies that are open standards or are undergoing standardization. Virtual storage area networks (VSANs) became an ANSI standard in 2004. Lossless Ethernet and Fibre Channel over Ethernet (FCoE) capabilities included in data center Ethernet offerings now support the convergence of local and storage area networks into a single unified fabric. Not only is today's network a vital element in virtualizing storage, but it also helps reduce the physical cabling and adapter infrastructure needed for storage access. Linking I/O virtualization with server virtualization can greatly simplify an existing data center infrastructure and reduce cooling, power and space consumption, as shown in Figure 1.

Once the networked server and storage environment is linked to virtualized network services, an end-to-end virtualized environment that meets our stated goals begins to take shape. For an individual application, you can create a connected set of virtual resources that enables the end-to-end isolation required by many applications; increases application resiliency through workload portability among, and failover between, physical devices; and requires less floor space (see Table 1).

Yet to operate and maintain this set of virtualized abstractions, you need to be able to maintain a proper correlation between each virtual resource and its associated workload. The network and storage teams no longer have clear visibility into the dependencies between VMs and their surrounding network and storage services. New tools are required to consistently apply portable network and security policies to virtual workloads, especially on the enterprise scale. One approach is to apply uniquely identifiable Layer 2 tags to packets as they leave a given VM. These tags can be identified downstream to help match the proper network and security policy against each VM instance regardless of current location. With this approach, the network restores the traceability between components and makes virtualization transparent, which is critical to scaling operations in a virtual environment (see Figure 2).
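The tag-based approach can be sketched abstractly: policy follows the tag rather than the physical location, so a VM keeps its network and security treatment across a live migration. This is a hypothetical lookup model; in a real implementation the tag travels in the Ethernet frame and the policy is enforced by downstream network devices.

```python
# Hypothetical sketch: network/security policy keyed by a per-VM Layer 2 tag,
# so the same policy applies wherever the VM happens to be running.

POLICY_BY_TAG = {
    0x0A01: {"vlan": 100, "acl": "web-tier", "qos": "gold"},
    0x0A02: {"vlan": 200, "acl": "db-tier",  "qos": "silver"},
}

def classify(frame_tag: int, current_host: str) -> dict:
    """Look up policy by the frame's VM tag; the physical host is irrelevant."""
    policy = POLICY_BY_TAG[frame_tag]
    return {**policy, "observed_on": current_host}

# The VM keeps its policy even after live migration to another host.
before = classify(0x0A01, current_host="rack1-host3")
after = classify(0x0A01, current_host="rack7-host1")
assert before["acl"] == after["acl"] == "web-tier"
print(after)
```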

Clouds, SOA and Web 2.0
Cloud computing, SOA and Web 2.0 may also be top of mind as you plan your data center. Accessible from anywhere, cloud services are designed to scale dynamically and to use a "pay as you go" model in place of capital investment. Likewise, your next-generation virtualized data center can be the foundation for a "private" cloud offering the same advantages. By adopting the right network and security architecture, you should be able to scale your internal application services while making them accessible from any location, enabling a borderless enterprise.

"Infrastructure as a service" can then become a reality through the establishment of an internal chargeback model that allows application developers and owners to be billed for the use of a common, shared infrastructure. It then becomes a sourcing decision whether to host a particular application internally in your virtualized data center or in a public cloud provider environment.
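A chargeback model can be as simple as metering resource-hours per application owner and applying unit rates. The rates and meter fields below are invented purely for illustration; real chargeback systems draw metered usage from the virtualization management layer.

```python
# Hypothetical internal chargeback: bill owners for metered resource usage.

RATES = {"vcpu_hours": 0.05, "ram_gb_hours": 0.01, "storage_gb_months": 0.10}

def monthly_bill(usage: dict) -> float:
    """Sum rate * metered quantity across resource types."""
    return round(sum(RATES[k] * v for k, v in usage.items()), 2)

# Example: one application's metered usage for a month.
usage_app_a = {"vcpu_hours": 2920, "ram_gb_hours": 11680, "storage_gb_months": 500}
print(monthly_bill(usage_app_a))  # 0.05*2920 + 0.01*11680 + 0.10*500
```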

In adopting a data center architecture that can rapidly provision new applications, you also complement SOA and Web 2.0 application initiatives. In addition to rapid application development, agile infrastructure delivery is essential. A holistically architected next-generation data center can provide each application service with the security, availability and scalability it requires by leveraging network-based services. Combining the loosely coupled, abstracted application services offered by SOA with the virtualized abstracted infrastructure services of the next-generation data center greatly enhances your ability to rapidly deliver solutions to the business.

The Importance of People and Process
Moving toward a virtualized environment will undoubtedly affect the people and processes associated with the data center. The human factor requires an investment of time equal to that devoted to technology planning and implementation, and the transition to a virtualized environment should occur incrementally to accommodate the natural human resistance to change. It will be important to emphasize delivering capabilities like rapid instance provisioning instead of hardware instances and to communicate how a holistic plan for data center virtualization expedites such a transition.

Encourage close collaboration among the appropriate teams in your organization to plan the transition to this next-generation environment, tapping external expertise when needed. Exercise persistence and patience as new operational procedures and organizational boundaries evolve.

Better, Faster and Cheaper in Sight
Although the concept of virtualization is not new, server virtualization has created a sea change in the data center. This is an excellent opportunity to adopt a data center architecture that achieves the improved efficiency, resiliency and agility afforded by server virtualization while meeting the business demands of better, faster and cheaper.

To optimize the benefits of virtualization, look beyond the hypervisor and consider the entire system of services connected to the virtual workload via the network. The network is uniquely qualified to enable virtualization at scale as it touches every element of IT infrastructure.

About Chris Wiborg
Chris Wiborg helps drive understanding and adoption of Cisco’s Service-Oriented Network Architecture (SONA), with broad practical experience derived from more than 13 years as a practitioner in the IT and applications world. Most recently, Chris worked within Cisco’s Applications-Oriented Networking team where he was a key contributor to initial AON product concepts, developed vertical solution architectures and established key strategic partnerships. Previously, Chris was a founding member of Cisco IT's Enterprise Architecture team while part of Cisco's internal IT Infrastructure organization. In that role, Chris led the establishment of Cisco standards for enterprise-class Web services, application messaging infrastructure, and SOA best practices. Before joining Cisco, in addition to serving as an IT solutions architect at Applied Materials, Chris was a Senior Consultant with several boutique consulting companies both in Silicon Valley and on the East Coast. Chris holds a B.A. from Yale University.
