Network Services Virtualization
Challenging conventional wisdom about today’s networks

Virtualization Magazine on Ulitzer

Data center virtualization is changing the way we think about today's networks. Stress fractures have begun to appear in the network, triggered by the increasing adoption of virtualization across datacenter compute and storage platforms. The virtualization of datacenter resources, while good for the efficient utilization of costly physical hardware, places enormous demands on the underlying network and its operational support teams: support costs rise, as does the time spent implementing infrastructure changes driven by the dynamic nature of virtual services. Traditional network designs, and the tools that control and manage them, cannot keep pace.

This sharp increase in cost and time is a direct result of the number, and the complexity, of changes required across deployed devices and active configurations to maintain access and security models for the dynamic virtual resources and services being delivered to end users. While the increase in cost and time is troubling, more alarming is that the constant changes being made to the infrastructure expose gaps in existing security models and enterprise security frameworks - gaps that could mean disruption of access, the loss of critical business services or, even worse, the destruction or loss of data and other valuable enterprise assets.

As the pulse of virtualization continues to push the limits of the infrastructure and the tools that control and manage its devices, configurations and services, it is apparent that a new, holistic approach is required to support the day-to-day demands of a highly virtualized, dynamic network infrastructure. This need is even more evident in cloud computing models, where the demand on the network is more intensive still, driven by the instant-on, real-time delivery of virtualized compute and storage services.

In cloud-based service delivery the access model defining connectivity to the cloud as well as the security model that defines how that access and data is protected is created dynamically and deployed automatically as the service is requested. No longer are teams of IT and network engineers able to design, test and rollout infrastructure services over a period of weeks or months. Cloud services are turned on and off like a light bulb based solely on end-user need. The infrastructure that comprises these services - virtualized compute, storage and the underlying network infrastructure - must be able to respond to that "flick-of-the-switch" automatically, without error and without exposing other services or the enterprise at large to security risk.

In traditional infrastructure delivery models the network is constructed to connect everything together using a generally open access model judiciously layered with security applied at various levels via firewalls, VLANs, ACLs and user authentication enforced at both end points (network access and application).

In the emerging world of cloud and virtualization this traditional model has been overwhelmed as datacenter services (servers, storage and applications) are now as transient as the users on the network. Access and security models have to be adjusted as frequently as users move around the network or virtual machines are created, moved or destroyed. Support teams are being overloaded by demand, and the existing network management tools they have available have not been designed to deal effectively with this volume or type of network change. Existing management tools have simply reached their limits.

Existing Tool Categories - Functions and Limitations
Device Management and Control

  • Function: This category includes traditional network and element management systems where the primary focus is to manage the active configuration and behavior of individual devices that constitute a network. Most of these tools provide insight into the health/performance of devices and allow administrators to manipulate the configuration of a device via pre-defined configuration templates, custom scripts or a user interface. The templates and scripts contain actual configuration syntax created by network engineers that define service characteristics. These templates can be pushed to a device(s) to initiate or control access and security models across the infrastructure.
  • Limitations: The obvious problem with this approach is that network engineers simply cannot pre-define enough templates and scripts for every conceivable change or service definition required in the demand-driven cloud environment. In addition, changes to devices and services are made in a serial fashion, device-by-device, which is time-consuming and prone to human error and oversight. Most important, however, the services, once deployed, are not persisted or bound to the users and virtualized resources. As users or virtual resources are moved, network access and security models break down, opening gaps and exposing services, resources and the enterprise in general. Overall, these tools and the mechanisms they use to control the devices and services cannot scale with the amount of dynamic changes required for highly virtualized cloud environments.
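The template-push mechanism described above can be sketched in a few lines. This is a minimal, purely illustrative example: the template syntax, device names, and functions are hypothetical, and a real tool would transport the rendered configuration over SSH, SNMP, or NETCONF rather than return it.

```python
from string import Template

# Hypothetical pre-defined template, as a network engineer might author it.
# The CLI-style syntax is illustrative, not tied to any vendor.
VLAN_TEMPLATE = Template(
    "vlan $vlan_id\n"
    " name $vlan_name\n"
    "interface $interface\n"
    " switchport access vlan $vlan_id\n"
)

def render_change(vlan_id: int, vlan_name: str, interface: str) -> str:
    """Fill in the static template for one change request."""
    return VLAN_TEMPLATE.substitute(
        vlan_id=vlan_id, vlan_name=vlan_name, interface=interface
    )

def push_serially(devices: list[str], config: str) -> list[tuple[str, str]]:
    """Apply the same rendered config device-by-device - the serial,
    error-prone bottleneck described above."""
    return [(device, config) for device in devices]

config = render_change(42, "cloud-tenant-a", "GigabitEthernet0/1")
results = push_serially(["switch-01", "switch-02"], config)
```

The scaling problem is visible even in this toy: every new service shape needs a new hand-written template, and every change is replayed one device at a time.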

Configuration Management and Automation Engines

  • Function: These tools consist of systems that provide comprehensive archiving and control of configurations across a broad range of infrastructure devices. In addition, some of the more powerful engines provide the ability to manipulate infrastructure services and active configurations using more robust scripts and template architectures, all controlled through configurable workflow rules. In some cases the automation engines have been extended to provide complete IT operations support and orchestration (data center servers, applications, storage and network).
  • Limitations: Much like the device management tools described above, the configuration management and automation tools use predefined templates and scripts along with workflow engines to control device and configuration changes. While these systems are generally more robust than the device-by-device approach defined above, they face similar scaling limitations in terms of the types and amount of changes they can effectively address. In addition, as these tools take a broader approach to managing multiple types of devices across a network, they can run into limitations building the complex service chains that connect endpoints, which is a requirement in cloud service delivery. The use of static configuration templates or scripts to construct interconnected services between multiple types and models of devices will often require significant manual intervention, as the templates and scripts are not intelligent enough to negotiate service characteristics between available resources. Lastly, these configuration and automation tools, like the device management tools above, cannot bind access and security models to the endpoints, so as the endpoint resources move, access and security break down.
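The rule-gated workflow these engines add on top of templates can be sketched as follows. This is a hypothetical skeleton, assuming a simple ordered pipeline of (rule, action) pairs; real products layer archiving, approvals, and rollback onto the same idea.

```python
# Minimal sketch of a workflow-driven automation engine: ordered steps,
# each gated by a configurable rule, run against one change request.
# All names and step contents are illustrative.

def workflow_run(steps, context):
    """Execute steps in order; each step is (rule, action). The rule
    decides whether the action runs for this change request."""
    applied = []
    for rule, action in steps:
        if rule(context):
            applied.append(action(context))
    return applied

steps = [
    (lambda ctx: ctx["change"] == "add_vlan",
     lambda ctx: f"archive snapshot for {ctx['device']}"),
    (lambda ctx: ctx["change"] == "add_vlan",
     lambda ctx: f"push vlan {ctx['vlan']} to {ctx['device']}"),
    (lambda ctx: ctx.get("verify", True),
     lambda ctx: f"verify vlan {ctx['vlan']} on {ctx['device']}"),
]

result = workflow_run(steps, {"change": "add_vlan",
                              "device": "sw-01", "vlan": 7})
```

Note that the workflow still only sequences predefined templates; nothing in it negotiates service characteristics between devices or tracks where an endpoint has moved.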

Inventory and Capacity Management

  • Function: This category of tools reports on the available capacity of infrastructure devices and services in order to help network engineers determine if the devices/services have the excess capacity and processing power to do what is required at acceptable levels. Initially these tools were very device/hardware specific in their approach but have more recently been extended to monitor and report on the actual services running on the devices. Some of the tools in this category have the capabilities to monitor and report on end-to-end services running across multiple devices between endpoints.
  • Limitations: As a whole, these tools are relatively passive, in that they are not making changes to devices, running configurations or services. More often they are simply providing insight back to network engineers on how the devices and resources are being consumed and the remaining available capacity on the device/service. As defined thresholds are reached, alarms are generated and network engineers would use another tool to make changes to services, configurations or devices in order to address the problem. In terms of cloud services, the most effective inventory and capacity management tools are those that monitor the end-to-end services, as these tools can potentially alert engineers to access or security models that have failed, or are about to fail based on resource utilization. None of the inventory or capacity management platforms available today can react dynamically to changes made to cloud resources or be used proactively to control how cloud services are deployed across a network.
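The passive, threshold-driven reporting described above reduces to something like the following sketch. Device names and the threshold value are illustrative assumptions; the point is that the tool only raises alarms and leaves remediation to an engineer with a separate tool.

```python
# Sketch of passive capacity monitoring: the tool reports threshold
# crossings but never changes a device, configuration, or service.

def check_capacity(samples: dict[str, float], threshold: float = 0.8):
    """Return alarms for devices whose utilization (fraction of total
    capacity) has crossed the defined threshold."""
    return [
        f"ALARM: {device} at {util:.0%} utilization"
        for device, util in samples.items()
        if util >= threshold
    ]

alarms = check_capacity({"core-rtr-1": 0.92, "edge-sw-3": 0.41})
```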

Network Alarm and Correlation Management

  • Function: This category of tools focuses on the overall health of the network and to some extent the services running throughout the network. This is accomplished by actively monitoring all the devices/services present in the infrastructure. All infrastructure devices are built to generate alarms as things go wrong. These tools are configured to capture those alarms and organize, process and intelligently interpret the alarms in order to discern the root-cause of problems that may be impacting network or service performance.
  • Limitations: Most tools in this category focused initially on the "device" and not the services running on the device, but over the years have been extended to provide some form of service level management. As with the inventory and capacity management tools, most of these technologies are more passive in nature and only report on how the infrastructure, devices and services are behaving and do not make active changes to existing device configurations. Some of the larger, more comprehensive platforms have adopted similar features of the configuration management systems so that as alarms and outages are detected, limited forms of action can be taken based on predefined scripts or templates. Much like the capacity management tools, these engines are built for reporting, not the constant level of service deployment and change management demanded by cloud environments.
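The root-cause correlation these platforms perform can be illustrated with a simple dependency-suppression rule: when an upstream device is alarmed, alarms from everything behind it are treated as symptoms rather than causes. The topology and rule here are a deliberately simplified assumption; production correlators use far richer models.

```python
# Sketch of root-cause correlation over a burst of alarms, using an
# illustrative dependency map (device -> its upstream device).

UPSTREAM = {"access-sw-1": "dist-sw-1",
            "access-sw-2": "dist-sw-1",
            "dist-sw-1": "core-rtr-1"}

def correlate(alarmed: set[str]) -> set[str]:
    """Keep only alarms whose upstream device is NOT also alarmed;
    those are the candidate root causes. Downstream alarms are
    suppressed as symptoms of the upstream failure."""
    return {d for d in alarmed if UPSTREAM.get(d) not in alarmed}

root_causes = correlate({"access-sw-1", "access-sw-2", "dist-sw-1"})
```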

Each of these categories has a necessary place in the control of the infrastructure - even highly dynamic virtual and cloud environments. What is missing from this list is a solution that allows network engineers to build and deploy truly "fluid" network architectures that respond automatically to the ever-changing needs of virtual computing and cloud delivery models without sacrificing control or increasing security risks. This is where network service virtualization becomes essential.

A New Approach - Network Services Virtualization
Network services virtualization is needed for highly virtualized environments and cloud delivery models. It does for the network infrastructure what server virtualization has done for the data center - provide efficiency, elasticity, automation and control. Network services virtualization enables the transformation of static, rigid networks into a dynamic infrastructure that responds automatically to the demands of virtual and cloud environments based on rules and business policies defined by administrators.

The orchestration capabilities that are inherent with network services virtualization enable physical or virtualized compute/storage resources to be combined with network access and security models into a single holistic service - a cloud service - that is fully automated and can be deployed, on-demand, to selected end users. Policies define and capture the discrete elements of a cloud service and translate those elements into actual device services and configuration syntax that is automatically disseminated to the appropriate devices across the network in order to initiate the requested service.

From the activation of a policy that defines a new cloud service, network services virtualization automatically initiates the creation of the required virtual machines (VMs). As the VMs come online, the network services virtualization engine defines and deploys the network access and security models across all required infrastructure devices (routers, switches, firewalls) as needed to deliver the cloud service to the defined end users. The entire process completes in seconds and can include the setup and deployment of network routes, VPNs, VLANs and ACLs, the deployment of security certificates, and the configuring of firewall rules and DNS entries - all defined via the business policy and deployed automatically, without the command-line mistakes through which overtaxed network engineers can introduce security gaps.
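The activation sequence above can be sketched as a single policy-to-plan translation. This is a hypothetical illustration of the ordering only - the policy schema, device names, and action strings are all invented, and a real engine would execute each action against live infrastructure rather than collect strings.

```python
# Sketch of policy-driven activation: one policy object drives VM
# creation first, then the per-device network settings, then the
# service-wide entries. All fields and names are hypothetical.

def activate(policy: dict) -> list[str]:
    actions = []
    # 1. Create the virtual machines the service requires.
    for vm in policy["vms"]:
        actions.append(f"create VM {vm}")
    # 2. While VMs come online, derive device-level settings from policy.
    for device in policy["devices"]:
        actions.append(f"deploy VLAN {policy['vlan']} on {device}")
        actions.append(f"apply ACL permit {policy['users']} on {device}")
    # 3. Finish with service-wide entries (DNS, firewall, certificates).
    actions.append(f"add DNS entry {policy['service']}")
    return actions

plan = activate({"service": "crm.cloud.example", "vlan": 42,
                 "vms": ["crm-app-1"], "devices": ["fw-1", "sw-1"],
                 "users": "group:sales"})
```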

Once the business policy is implemented and the cloud service is active, the access and security models are bound to the endpoint resources and persisted. As users move to new locations, the access model that defines their connection to virtual resources, and the specific security settings, move with them. As VMs are relocated, access and security models for end users are adjusted automatically. As business policies that define cloud services are deactivated, VMs are destroyed and the deployed network access and security settings are removed from network devices.
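What distinguishes this binding from the template approaches earlier in the article is that the access rule is keyed to the endpoint's identity, not its location, so relocation re-derives the device-side settings automatically. A minimal sketch, with hypothetical names throughout:

```python
# Sketch of endpoint binding: the security profile follows the VM,
# so moving the VM tears down the old device settings and re-applies
# them at the new location. All names are illustrative.

bindings = {}  # vm name -> (current device, security profile)

def place(vm: str, device: str, profile: str) -> str:
    bindings[vm] = (device, profile)
    return f"apply {profile} for {vm} on {device}"

def relocate(vm: str, new_device: str) -> list[str]:
    old_device, profile = bindings[vm]
    bindings[vm] = (new_device, profile)
    # Tear down at the old location, re-apply at the new one.
    return [f"remove {profile} for {vm} on {old_device}",
            f"apply {profile} for {vm} on {new_device}"]

place("crm-app-1", "sw-01", "acl-sales")
moves = relocate("crm-app-1", "sw-07")
```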

This unique ability to create, deploy, persist, modify and tear down network services in a fully automated fashion based on policies that provide governance and control is what separates network service virtualization platforms from the rest of the traditional management tools. The ability to transform a static non-responsive infrastructure into a fluid, responsive infrastructure without compromising control, compliance or security is what enables enterprises and service providers to automate the deployment of cloud services.

As most early adopters of highly virtualized and cloud networking environments are discovering, true on-demand computing can only be fully realized when the underlying network infrastructure is as flexible and fluid as the dynamic needs of the business end users. Network services virtualization transforms the legacy infrastructure into the responsive, dynamic delivery apparatus that is essential to a fluid networking environment.

About Ken Ferderer
Ken Ferderer is CTO at LineSider Technologies, a leading provider of network service virtualization platforms. He received his Bachelor of Science degree in Computer Information Systems from DeVry Institute of Technology in Phoenix, Arizona, in 1991. An experienced and widely respected industry professional, Ken was named by Computerworld Magazine among their 2007 list of Premier 100 IT Leaders.
