net.work, November 2008

The next frontier: managing virtualised infrastructure

November 2008
Brett Haggard

While it is still a hot button, virtualisation in itself is already old hat. Once a company has virtualised its environment, however, how does it keep its more complex yet better-performing infrastructure from getting away from it? The answer is superior management – and it is the next big thing.

Virtualisation is undoubtedly a hot button in business IT circles today.
Testimony to this is the growing buzz surrounding the topic, and the fact that a Gartner report released last month cited virtualisation first on its list of strategic technologies businesses must take cognisance of in 2009.
The practice of virtualising server workloads and storage volumes is old news, however.
Hypervisors – the clever intermediary layers that allow for multiple server images to run on single physical servers – are in abundance and have become satisfactorily mature in their functionality.
Similarly, the technologies that allow storage volumes to be virtualised – to appear as a single volume even though they span multiple disparate physical disk systems – have reached a more than functional level.
And largely, customers have been extremely satisfied with the additional performance and efficiency they have been able to achieve by virtualising their infrastructure.
The next challenge is providing platforms that allow IT management teams to easily and effectively manage these virtualised environments. Without the ability to manage a virtualised datacentre using fewer resources, the benefits brought to the table by virtualisation are rendered moot.
Managing virtualised environments
Managing a non-virtualised datacentre environment is enough of a challenge today.
To keep their organisations’ business systems ticking over, IT teams require tools that allow them not only to monitor server load and storage capacity thresholds, but to do so across disparate operating systems and hardware architectures.
Throw virtualisation into the mix and suddenly there is an extra level of complexity they need to cope with.
Brandon Atkinson, Business Service Automation boss at HP Software, says that while virtualisation has on the one hand become the customer’s best friend, on the other it can prove to be a huge headache if they do not have the tools and processes to manage their new virtualised environment well.
“Our job in Business Service Automation is to help them manage that extra complexity virtualisation can add into the management of a datacentre,” he says.
“Today there are solutions capable of managing a fleet of virtual machines,” Atkinson says.
In fact, many of them come bundled as part of the hypervisor suites customers use to embark on a virtualisation drive.
“Most of the virtualisation management solutions available today are siloed in nature – they are designed to deal with either a client, server or storage specific discipline; and furthermore to deal with specific kinds of virtualisation, on specific operating systems.”
The problem, Atkinson says, is that a change in one part of a business process impacts more than one of these silos.
Cutting across the silos
For example, to manage a change in an order management system that is running on virtualised infrastructure, the IT team would require discrete tools to manage the virtualised server, storage, network interfaces and possibly even the desktops that solution is delivered to.
“That is four different silos,” he says, “and generally, the company would require the four different teams in charge of those domains to work perfectly together.
“To avert this potentially confusing situation, a new area in which HP is pioneering solutions – called ‘run-book’ or operations automation – comes into play,” he says.
Atkinson says the term ‘run book’ is a colloquialism for an operations manual.
Since an operations manual is where IT departments establish and set the technical IT workflows and processes, ‘run-book’ automation is the automation of those technical IT workflows and processes.
“With a ‘run-book’ automation solution in place, the IT professional that is responsible for deploying a new service, simply logs into a tool and provisions the new service – all of the individual steps involved in provisioning the underlying silos, such as servers, operating systems, storage and networking devices are orchestrated in the background.”
In some cases, he admits this will involve some manual interaction, like when a manager is required to sign off on a certain change or step.
“The ‘run-book’ automation solution however initiates the steps and carries them out once approval is given,” he says.
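To make the concept concrete, the minimal Python sketch below reduces a run-book to an ordered series of provisioning steps with an approval gate in front of the sensitive ones. It is our own illustration of the idea – every name in it is invented – and not a representation of HP’s software.

```python
# A minimal, hypothetical sketch of 'run-book' automation: an ordered
# series of provisioning steps, some of which pause for manual sign-off
# before the automation carries them out. All names here are invented
# for illustration; this is not HP's product.

from dataclasses import dataclass
from typing import Callable, List


@dataclass
class Step:
    name: str
    action: Callable[[], None]       # the work this step performs
    needs_approval: bool = False     # pause for a manager's sign-off?


def run_book(steps: List[Step], approve: Callable[[str], bool]) -> None:
    """Execute each step in order, pausing where sign-off is required."""
    for step in steps:
        if step.needs_approval and not approve(step.name):
            print(f"Run-book halted: '{step.name}' was not approved.")
            return
        print(f"Executing: {step.name}")
        step.action()


if __name__ == "__main__":
    # The individual silo operations are stubbed out with prints.
    book = [
        Step("Provision virtual server", lambda: print("  server ready")),
        Step("Allocate storage volume", lambda: print("  volume ready")),
        Step("Configure network interface", lambda: print("  NIC ready")),
        Step("Deploy order management service",
             lambda: print("  service live"), needs_approval=True),
    ]
    # Auto-approve for the demo; in practice this hook would wait for a
    # human decision in a change-management tool.
    run_book(book, approve=lambda name: True)
```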
Managing ongoing change
Atkinson says that this form of automation is not only useful when it comes to speeding up provisioning of virtual and physical infrastructure for new services.
“Since there is more change in virtual environments than in physical ones, companies need these tools to keep up with the constantly changing demands of their IT,” he says.
This could mean automatically provisioning additional processor capacity or storage as and when specific applications require it.
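Conceptually, that kind of demand-driven provisioning is little more than a feedback rule. The toy sketch below, with invented thresholds, illustrates the principle; it is not any vendor’s implementation.

```python
# A toy illustration (ours, not a vendor's) of demand-driven capacity:
# when an application's utilisation crosses a high-water mark, grant it
# another CPU; when it falls below a low-water mark, take one back.

def rebalance(allocated_cpus: int, utilisation: float,
              high: float = 0.80, low: float = 0.30) -> int:
    """Return the new CPU allocation for one application."""
    if utilisation > high:
        return allocated_cpus + 1            # grow under pressure
    if utilisation < low and allocated_cpus > 1:
        return allocated_cpus - 1            # shrink when idle
    return allocated_cpus


if __name__ == "__main__":
    cpus = 2
    for load in (0.85, 0.90, 0.55, 0.20, 0.15):
        cpus = rebalance(cpus, load)
        print(f"load {load:.0%} -> {cpus} CPU(s) allocated")
```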
Beyond this, he says that these new tools also assist customers in assessing the impact of a change on their environment before starting the process of initiating that change.
“If a company deploys a new virtual machine on a server that is already part of their virtualised environment, they need to see what services depend on that server being available. If it is too risky, they have the ability to opt out,” he says.
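Impact assessment of this sort boils down to walking a dependency map before a change is committed. The hypothetical sketch below, built on invented example data, shows the principle.

```python
# A hypothetical sketch of impact assessment: before touching a server,
# walk a dependency map to see which business services rely on it, and
# opt out if the blast radius is too big. The data below is invented.

from typing import Dict, List

# service -> servers it depends on
DEPENDENCIES: Dict[str, List[str]] = {
    "order-management": ["vmhost-01", "db-01"],
    "crm":              ["vmhost-01"],
    "intranet":         ["vmhost-02"],
}


def affected_services(server: str) -> List[str]:
    """List every service that depends on the given server."""
    return [svc for svc, hosts in DEPENDENCIES.items() if server in hosts]


if __name__ == "__main__":
    target = "vmhost-01"
    at_risk = affected_services(target)
    print(f"Changing {target} affects: {', '.join(at_risk)}")
    if len(at_risk) > 1:
        print("Too risky - opting out of the change.")
```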
Dealing with sprawl
Another area that needs addressing in the virtualised environment is virtual server sprawl.
“Because it is easier and cheaper than ever to spin up another operating system image in the virtualised world, customers can lose track of what is in their datacentre and what purpose it is serving.
“These tools allow tight controls to be applied to who in the business is allowed to spin up a new image, and keep track of the reason for its creation.”
If the company loses track of the images in its virtualised environment, it might well have virtual machines sitting around idle, sapping resources – and in doing so, rendering much of the efficiency benefit virtualisation offers moot.
“Conceptually it is not all that different from defragmenting a computer’s hard drive, except that it applies to virtual server instances in the datacentre,” he says.
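A sprawl sweep of the kind described amounts to checking an inventory of images against an idleness threshold, together with who created each image and why. The sketch below is our own illustration, populated with invented records.

```python
# An invented sweep for virtual server sprawl: flag images that have
# sat near-idle for weeks, along with who created them and why, so
# they can be reclaimed. Illustrative only.

from dataclasses import dataclass
from typing import List


@dataclass
class VMRecord:
    name: str
    owner: str        # who spun the image up
    reason: str       # the recorded reason for its creation
    idle_days: int    # days spent below a utilisation floor


def find_sprawl(inventory: List[VMRecord],
                max_idle_days: int = 30) -> List[VMRecord]:
    """Return every image idle for longer than the allowed period."""
    return [vm for vm in inventory if vm.idle_days > max_idle_days]


if __name__ == "__main__":
    inventory = [
        VMRecord("test-web-07", "j.smith", "UAT for ordering app", 45),
        VMRecord("prod-db-01", "ops-team", "production database", 0),
    ]
    for vm in find_sprawl(inventory):
        print(f"Reclaim candidate: {vm.name} ({vm.owner}: {vm.reason})")
```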
Governance, risk and compliance
A last benefit this new breed of solutions offers is the ability to take control of compliance, standardisation and security.
“IT fails its audits when it has not used the prescribed processes for doing things. And in the virtualised world, with its added complexity, this becomes a more common occurrence.
“We have tools that ensure adherence to customers’ defined compliance policies – regardless of whether physical or virtual servers are involved – and that furthermore allow IT to perform real-time reporting on how compliant it is with its own standards.”
Often, however, IT policies need to be deviated from – generally because an application has an idiosyncrasy that dictates a different operating system or virtual machine configuration.
Customers therefore need a red light/green light dashboard that indicates what is and is not compliant with policy and, when something is not, warns the IT team and gives it the ability to either define a new policy or document the reason for deviating from the existing one.
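At its core, such a dashboard is a policy check paired with an exceptions register. The following is a minimal sketch of the idea – ours, with an invented policy – rather than HP’s tooling.

```python
# A toy red-light/green-light compliance check of our own devising:
# each server is tested against policy; a non-compliant server shows
# red unless a documented exception has been recorded for it.

from typing import Dict

POLICY = {"os_patch_level": 12, "antivirus": True}   # invented policy

SERVERS: Dict[str, Dict] = {
    "web-01": {"os_patch_level": 12, "antivirus": True},
    "app-03": {"os_patch_level": 10, "antivirus": True},
}

# Documented deviations: server -> recorded reason
EXCEPTIONS = {"app-03": "legacy app certified only on patch level 10"}


def status(name: str, config: Dict) -> str:
    compliant = all(config.get(k) == v for k, v in POLICY.items())
    if compliant:
        return "GREEN"
    if name in EXCEPTIONS:
        return f"AMBER (documented deviation: {EXCEPTIONS[name]})"
    return "RED - warn the IT team"


if __name__ == "__main__":
    for name, cfg in SERVERS.items():
        print(f"{name}: {status(name, cfg)}")
```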
Atkinson believes that the convergence of tools addressing both physical and virtual infrastructure management is both needed and inevitable.
In many ways, it is the next frontier.
Virtualisation for the cloud
While it is accepted that hypervisors and virtualisation solutions have reached a good level of maturity, work is still ongoing to eke even more efficiency out of their hardware hosts, and to bring the advantages of virtualisation to currently unaddressed areas of the IT market.
Looking at where things are destined to go in the coming years, Citrix recently announced an updated version of the XenServer technology it acquired last year, as well as the extension of this solution into the cloud computing space.
Nick Keane, country manager for Citrix, says that of the roughly 300 enhancements announced with XenServer 5 last month, greater storage support and a new high availability option – one that allows customers to configure their datacentre to automatically restart a virtual machine on a different piece of hardware if it crashes – rank at the top.
The most exciting part of Citrix’s announcements, however, centres on its tailoring of solutions for the cloud computing world, in the form of its newly announced Cloud Centre solution.
“This new area of solutions allows us to build out the cloud environment for enterprises or service providers that provide functionality to customers in a cloud-computing manner,” he says.
Examples of the latter include organisations such as Salesforce.com and Internet service providers that host ERP, CRM and similar systems on behalf of their customers.
Keane says the Citrix Cloud Centre (C3) solution is designed to give cloud providers a complete set of service delivery infrastructure building blocks for hosting, managing and delivering cloud-based computing services.
Architecture for the future of services
Apart from the underlying virtualisation technology that has made XenServer an important player in this space, Cloud Centre includes a reference architecture that combines the individual capabilities of several Citrix product lines to offer a powerful, dynamic, secure and highly available service-based infrastructure ideally suited to large-scale, on-demand delivery of both IT infrastructure and application services.
This architecture consists of a bundling of four components that are already part of Citrix’s portfolio.
The first of these, XenServer Cloud Edition, is nothing new. From what we can gather, it is for all intents and purposes XenServer standard edition, except that it benefits from a consumption-based pricing model.
NetScaler – the second component – automatically scales the number of VMs or servers charged with taking care of an application or service in the cloud, so that optimal performance can be delivered to customers (an idea sketched below, once all four components have been introduced).
Keane says the third component, WANScaler, allows customers to easily begin moving their on-premise virtual machines and application resources into a cloud-based datacentre, and back again as needed.
“And lastly, Workflow Studio, provides an orchestration and workflow capability that dynamically controls and automates the architecture so that it fits in with the customer’s defined business and IT policies,” he says.
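The scaling behaviour Keane describes for NetScaler can, at least conceptually, be reduced to a feedback loop: measure how far response times sit from a target and size the pool of VMs accordingly. The sketch below is our own simplification of that idea, not Citrix code.

```python
# The scaling idea reduced to a feedback loop: measure how far response
# times are from a target and size the VM pool proportionally. This is
# our conceptual simplification, not Citrix code.

def vms_needed(current_vms: int, response_ms: float,
               target_ms: float = 200.0) -> int:
    """Scale the VM count in proportion to how far we miss the target."""
    return max(1, round(current_vms * (response_ms / target_ms)))


if __name__ == "__main__":
    vms = 4
    for observed in (350.0, 260.0, 210.0, 120.0):
        vms = vms_needed(vms, observed)
        print(f"response {observed:.0f} ms -> run {vms} VM(s)")
```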
While these solutions could enable the cloud-based computing model to flourish in the next couple of months, Keane acknowledges there are inhibitors for the local market.
“It is going to be challenging to enable WANScaler’s ability to move workloads to and from the cloud with the limited connectivity and bandwidth South Africa has at its disposal,” he says.
“We are actively on the lookout for a solution to this and welcome any and all solutions from the market,” he adds.
High-availability
While virtualisation has largely traded on its ability to give companies better efficiency in their IT environments, at the outset it did not provide much in the way of redundancy and high availability.
While the predominant players in grass-roots virtualisation – VMware, Sun Microsystems with its Solaris Containers and LDOMs, and IBM with mPars – have built some level of high availability (HA) into their solutions, most do not have the enterprise-ready features many customers require, such as rich application monitoring and failover.
“Furthermore,” says Eric Hennessey, director, Technical Product Management at Symantec, “these HA solutions only work on particular platforms. Most IT organisations have a mix of Windows, Unix, and server virtualisation in their datacentres.
“This means that the IT organisation has different tools on different platforms leading to extra personnel and training costs to manage the entire HA infrastructure,” he adds.
He says that Symantec’s new Veritas Cluster Server One takes a giant leap beyond traditional clustering and high availability.
“With this new product, IT organisations can use a single product to manage across their physical and virtual, multi-platform datacentre. And with the increasing complexity of applications – in some cases the web server or application server may reside in a virtual machine while the database sits on a physical server – they can now provide high availability for the entire IT service, even if part of it is running on virtual servers,” Dan Lamorena, senior product marketing manager at Symantec, chips in.
Hennessey adds that Veritas Cluster Server One’s virtualisation support also helps customers reduce the number of spare servers they need to purchase for high availability.
“VCS One allows administrators to assign applications a priority. For example, a mission critical workload may be a Tier 1 application, while a test/dev application may be a Tier 4.
“This allows customers to repurpose their test/dev servers to be failover targets for production workloads. In the event of a server failure, the test/dev system could be shut down, and the production application could be started in its place.
“This allows companies to truly get the benefits of consolidating – reduced capital expenditures.
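In principle, priority-based failover of this kind means picking the surviving server that runs the least important workload and repurposing it. The sketch below is a conceptual illustration with invented server names and tiers, not Symantec’s implementation.

```python
# A conceptual sketch (ours, not Symantec's) of priority-based
# failover: when a server dies, the surviving server running the
# least important workload is repurposed to host the failed
# production application. All names and tiers are invented.

from typing import Dict, Optional, Tuple

# server -> (application, tier); tier 1 is mission critical,
# tier 4 is test/dev.
CLUSTER: Dict[str, Tuple[str, int]] = {
    "srv-a": ("order-db", 1),
    "srv-b": ("test-build", 4),
    "srv-c": ("reporting", 2),
}


def failover_target(failed: str) -> Optional[str]:
    """Pick the survivor running the lowest-priority (highest-tier) app."""
    survivors = {s: v for s, v in CLUSTER.items() if s != failed}
    return max(survivors, key=lambda s: survivors[s][1], default=None)


if __name__ == "__main__":
    failed = "srv-a"
    app, tier = CLUSTER[failed]
    target = failover_target(failed)
    victim, victim_tier = CLUSTER[target]
    print(f"{failed} down: stop {victim} (tier {victim_tier}) on {target}, "
          f"restart {app} (tier {tier}) in its place")
```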
Reducing pain
“They can also leverage the capabilities that are coming out in new hardware in terms of capacity on demand, like what is available with IBM’s Dynamic Logical Partitions (DLPARs),” he says.
“With VCS One, they will be able to 'light up' the proper amount of CPU and memory resources when starting an application, then de-allocate those resources upon shutdown...turning the lights out when they leave the room, as it were,” Hennessey says.
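Hennessey’s ‘lights on, lights off’ behaviour maps naturally onto a scoped allocation: acquire capacity when an application starts and release it when the application stops, even if it stops badly. Below is a minimal sketch of that pattern – our own illustration, not VCS One code.

```python
# The 'lights on, lights off' pattern from the quote above, sketched as
# a Python context manager of our own invention: capacity is allocated
# when the application starts and released when it stops - even if it
# crashes. This is an illustration, not VCS One code.

from contextlib import contextmanager


@contextmanager
def capacity(cpus: int, memory_gb: int):
    print(f"Lights on: allocating {cpus} CPUs, {memory_gb} GB memory")
    try:
        yield
    finally:
        # De-allocate even when the application exits abnormally.
        print(f"Lights off: releasing {cpus} CPUs, {memory_gb} GB memory")


if __name__ == "__main__":
    with capacity(cpus=8, memory_gb=32):
        print("  application running...")
```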
Lamorena adds that the product is also designed to reduce the operational pain usually associated with high availability solutions.
“Administration with Veritas Cluster Server One is designed to be easy.
“It provides a single front end so you can manage all the applications running in an environment that you are authorised to view, or just the one you care about.
“So if you log in to VCS One and you belong to the IT group responsible for marketing, for example, you will only see those servers and applications that belong to marketing and will have full control of those applications.
“You can tell them to start, stop, move, and more. That is in addition to the traditional high availability functions that are still happening on the backend where if something breaks, Veritas Cluster Server One will move that application from one server to another,” he says.
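Conceptually, a role-scoped console like the one Lamorena describes is a filter applied before any command is accepted. The sketch below illustrates that with an invented application-to-group mapping; it is not Symantec’s API.

```python
# An invented sketch of a role-scoped console: an administrator sees,
# and may control, only the applications belonging to groups they are
# authorised for. Not Symantec's API; the mapping below is made up.

from typing import Dict, List

APPS: Dict[str, str] = {            # application -> owning group
    "campaign-site": "marketing",
    "leads-crm": "marketing",
    "payroll": "finance",
}


def visible_apps(user_groups: List[str]) -> List[str]:
    """Applications the user is allowed to see."""
    return [app for app, group in APPS.items() if group in user_groups]


def control(user_groups: List[str], app: str, command: str) -> str:
    """Accept start/stop/move commands only within the user's scope."""
    if app not in visible_apps(user_groups):
        return f"denied: no rights to {app}"
    return f"'{command}' issued to {app}"


if __name__ == "__main__":
    marketing = ["marketing"]
    print("Visible:", ", ".join(visible_apps(marketing)))
    print(control(marketing, "campaign-site", "stop"))
    print(control(marketing, "payroll", "start"))
```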
Opinion
Where virtualisation was a pipe dream a decade ago, today it is delivering good value to customers of all sizes. It is clear, though, that not all the work has been done yet. New concepts like cloud computing, and the need to cater for older yet critical concerns such as availability, business continuity and disaster recovery, are keeping this space exciting.

