net.work

The Way Business Is Moving

Issue Date: June 2007

Q&A: Paul Miller, VP of Marketing, HP Industry Standard Server Division

June 2007
Timothy Prickett Morgan

As vice president of marketing for Hewlett-Packard's Industry Standard Server division, which peddles the company's ProLiant rack and tower servers and its BladeSystem blade servers, Paul Miller has his hand on the steering wheel of the world's largest single server business in terms of the number of customers and units sold each year. Miller took some time out of his schedule recently to talk about the X64 biz.

Q: I really just wanted to talk for a bit about what is happening in the X64 market. Can you give me a general view of where things are at from Hewlett-Packard's point of view and where things are headed? I just want to talk about form factors, processors, and other peripheral technologies that are important as we look ahead into the rest of 2007 and beyond. For instance, we have been watching the transition into blades, and I think blades are doing alright but they are not doing great, although HP is certainly seeing better traction now than it has in the past couple of years, and IBM's numbers were not so great in the past few quarters. So, how is business?
A: Business continues to be strong in the X86 space, and in general, the nature of our conversations with customers continues to change. The hot topic, no pun intended, is power and cooling, and global warming and carbon emissions. For other people, the issue is that they are just running out of capacity in their data centres. This is a lot of what we are talking about. Part of that is an X86 thing because this architecture has the biggest footprint in the data centre, and it is the fastest-growing footprint there, too.
HP continues to be at the centre of the conversation.
We are still doing very well with blades, and I think part of the reason there is that with the c-Class blades, we redefined the value of the blades, which has made them relevant. Our initial foray into blades, with the p-Class, was all about packaging, and that is where I think HP's competitors are right now. That packaging drove a certain value proposition, and that got us and IBM to about 10% of the X86 mix. But we think getting it up to where we think it can be, around 30% of the mix, requires advanced integration and a value proposition around power and cooling, virtualisation, and management. These aspects are what we think will drive blades up to 30% of shipments.
Q: Is there a point in time where servers will be in a blade form factor no matter what? In other words, will we build large SMP servers by taking a bunch of blades and using light pipes to lash them together with chipsets and all that stuff? I keep thinking that, in the long run, what the industry wants to be able to do is make one blade and use it many times, whether it is in a standalone server, a cell board in an SMP system, or a vertical blade in a set of infrastructure servers. Silicon Graphics seems to be taking this approach with its Altix designs, and Fujitsu has a blade server that you can scale from two to eight sockets by plugging multiple blades together into an SMP; Appro has a similar design. The blade servers themselves seem to be a natural building block: what would have been a rack of individual servers stacked vertically like pizza boxes is now a set of blades slotted side by side in a chassis horizontally, and the same blade can also serve as the core component of a real shared memory, NUMA/SMP system.
A: I think you will see that sort of approach more and more. But will we get to the point where everything goes to blades? For the customers who are buying two or three servers per year, I agree that for them, in the future everything they need will be in a bladed form factor. And the granularity will go down further and further. Even with today's blades, the granularity is at a server form factor, a storage form factor, or a network switch form factor. As we look ahead, the blade form factor will be CPU, memory, and I/O.
Q: That is how the SGI design is set up. The Altix machines have CPU blades, memory blades, I/O blades. Everything is a blade. If you want to make an eight-way box, you plug in four two-socket blades and they use a shared global memory. If you want to extend the I/O, you plug in more I/O blades. I do not know if it is technically possible to do it in all cases, but this strikes me as the way to do this. But this approach may not work for a database server, I realise.
A: This approach works for niches today, and in some ways, this is how our Superdome servers are already built, too. But on an industry-standard cost basis, when you get to the point where everything is going out in high volumes, this might not make sense. Blades do not make sense for the company that is only buying a server once every three years. And in this case, you will have a mix of tower and rack systems that customers buy.
I think the interesting thing that we have seen across the different form factors is this: blades continue to be very strong in traditional businesses - the financial sector, insurance, and manufacturing. We are very strong there. On the other end of the spectrum, in the emerging markets like China and India, companies continue to drive a very strong tower business.
Q: Is that a function of the size of the typical company in these markets, or of their level of IT sophistication or relative computing needs, or budget?
A: It is not really the size of the business; it is how they think about the investment. They are thinking about what they can get that is the least expensive from an upfront cost perspective. They are not thinking about longevity, and they are not thinking about technology cycles.
Q: It is Beowulf clustering in HPC 10 years ago.
A: Exactly.
Q: People just took a bunch of PCs and made a cheap cluster, and then they realised that they had 400 PCs that were taking up a lot of space and that were difficult to manage. It is cheap, but it is hot and cranky, and it is not the right answer for the long run.
A: They are thinking about servers in terms of the budget they have to spend today, and they are not thinking about it as a long-term infrastructure investment.
In some countries, even in China, rack server growth is very strong. They are in some cases already moving from towers to racks. But the buying patterns are very different in sub-sectors and in sub-geographies.
Rack sales are starting to take off now, and we think that blades are going to be very big next year and beyond as companies go through an IT investment maturity cycle.
Q: So these companies in the emerging markets are, for all intents and purposes, entering the late 1990s? Everyone seems to have to go through all of the stages. Ontogeny recapitulates phylogeny is the principle in biology. OK, new topic. How do you track the penetration of server virtualisation among customers, and what kind of penetration are you seeing on ProLiant rack servers and BladeSystem blade servers? Obviously, on the entry tower servers, I do not expect virtualisation usage to be very high.
A: Overall, HP is shipping a server every 13 seconds or so, and the clock is going pretty fast. Virtualisation is very hard to track. We think virtualisation on X64 machines is somewhere between 10% and 15% of shipments. We know we have about a 4% to 5% virtualisation footprint based on our sales of VMware's virtualisation software through our channels, for which we collect revenues; there is a little bit of Xen out there and a little bit of Microsoft's Virtual Server, but the vast majority is VMware software. Some customers buy through Microsoft and apply licences that way. The great unknown is this: there are many different ways of downloading hypervisors, and we cannot track this, which is why HP believes we are in the range of 10% to 15% of industry standard servers being equipped with some kind of virtualisation.
When I talk to customers, most of them are raising their hands and saying that virtualisation is a very big topic for them, and it is for us, too. It relates back to our power and cooling focus as well as to some other things that we are doing around the virtualisation of clients. Blades have a much higher attach rate for virtualisation, too.
Q: But it is like Linux was seven or eight years ago, where companies did not really know where Linux was in their organisations because it was free or close to it.
A: We know that virtualisation is much higher in blades - it is not at 50% yet, but it is trending in that direction. Our Virtual Connect I/O switch, which is shipping now and which we announced last summer, breaks through a lot of barriers to virtualise I/O.
Q: I stick my neck out from time to time, and I think that in the long run, not the short run but the long run, this kind of virtualisation capability will eventually put a damper on X64 server shipments. Virtualisation will do so for servers of all kinds, and in my view, it has already done so for mainframes, proprietary minicomputers, and Unix machines, which all got virtualisation in various stages over the last two decades. As each one of these platforms virtualised, it drove down installed footprints. Some of that was competitive pressure - Unix replaced mainframes, Windows replaced Unix, and so on, for economic reasons. But some of the footprint shrink for these virtualised platforms was just because of the virtualisation.
Here is my thesis: once companies get through the technology upgrade cycle to machines that can support CPU, memory, I/O, and network virtualisation - and it takes a long time to get there with the X64 platform, so it might be two or three years, or more, from now - shipments take a hit. Once a company has consolidated and virtualised servers, it is just as apt to shift workloads around to get jobs to run as it is to buy lots of new capacity. I cannot imagine a world that has enough application growth where 25 million servers running at 5, 10, or 20% utilisation get consolidated and virtualised onto machines running at 60, 70, or 80% and footprints keep growing. Maybe server footprints contract, maybe they just level off. But I cannot imagine going from 8 million server shipments a year to 13 million server units, as some projections call for, with this virtualisation crunch coming.
I would like to be proved wrong on this one.
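As a rough illustration of the arithmetic behind this thesis, here is a minimal back-of-the-envelope sketch in Python. The 25-million installed base and the utilisation ranges come from the question above; the specific averages chosen (10% before virtualisation, 70% after) are assumptions for illustration only, not HP or analyst figures.

# Back-of-the-envelope consolidation arithmetic for the thesis above.
# Inputs are illustrative assumptions drawn from the ranges quoted in
# the question (25 million installed servers at 5-20% utilisation,
# consolidated onto hosts running at 60-80%); they are not HP figures.

installed_base = 25_000_000   # servers installed worldwide (figure from the question)
util_before = 0.10            # assumed average utilisation today (within the 5-20% range)
util_after = 0.70             # assumed average utilisation after virtualisation (within 60-80%)

# Aggregate work, expressed in "fully busy server" equivalents.
aggregate_load = installed_base * util_before        # 2.5 million server-equivalents

# Hosts needed to carry the same aggregate load at the higher utilisation.
hosts_needed = aggregate_load / util_after           # about 3.6 million hosts

consolidation_ratio = installed_base / hosts_needed  # about 7:1

print(f"Aggregate load: {aggregate_load / 1e6:.1f}M server-equivalents")
print(f"Hosts needed at {util_after:.0%} utilisation: {hosts_needed / 1e6:.1f}M")
print(f"Implied consolidation ratio: {consolidation_ratio:.1f}:1")

On those assumptions the same aggregate workload fits on roughly 3.6 million hosts, an implied consolidation of about 7:1, which is the intuition behind the scepticism about shipment growth.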
A: Let me answer that question in a number of different ways. First, HP is very bullish on virtualisation. I can envision the day when we will ship servers that will be virtualised right out of the chute. We are already working with customers on standardisation of virtualised environments - meaning, hypervisors that they want to see embedded in our systems right out of the factories. This could be two, three, or four years out - it is hard to say when it will happen - but servers will ship already virtualised, whether they are going to run a single application or multiple applications.
Q: Are you thinking about embedding the hypervisor in the system itself? I keep thinking that the hypervisor belongs on the system, just like the BIOS.
A: I do not want to disclose anything right now on what we are thinking about that. Now, back to virtualisation and server shipments. So, will the number of server units go down? Yes. Will the total revenue go down? No.
There are going to be winners and losers in this game. When you start to talk to customers about server virtualisation, what you see is that for the vendors who are winning in virtualisation - and we think we have the largest footprint out there - average selling prices are either flat or rising, which is bucking the industry trend. Customers are buying fewer units, but each unit has more CPUs, more memory, more I/O, and more software to control it. When you move from 50 1U rack-mounted servers running at 20% utilisation to 10 servers running at 80%, the load balancing, management, and resiliency requirements on those fewer servers go up.
People who are doing this consolidation and virtualisation grasp that they need to make a more robust system, too. So I do not believe that server revenues will go down because of virtualisation, but the nature of the revenue will change.
The last comment I want to make is that virtualisation is an interesting beast. This is where you have to take virtualisation to the next level. People talk about servers running at 20% of capacity and say you need to get them up to 80%. It is not that simple. I was at a customer site where they have an application running at an average of 4% utilisation, but it is a trading application that this financial services company needs to run once a day. They need to get an answer back from this application in less than two minutes, and that is only possible on a machine that can deliver a very large amount of performance in a very short period of time. Getting compute power to shift so they can have it where they need it and when they need it is the issue, and it will drive different types of revenue.
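Miller's 50-to-10 example above can be put in the same back-of-the-envelope terms. The sketch below uses only the utilisation figures quoted in his answer; the reading of what it means for per-host sizing is an illustration of his argument, not an HP sizing guideline.

# Miller's consolidation example: 50 1U servers at 20% utilisation
# replaced by 10 hosts running at 80%. The counts and utilisation
# figures come from the answer above; the rest is illustrative.

old_servers, old_util = 50, 0.20
new_servers, new_util = 10, 0.80

old_load = old_servers * old_util            # 10.0 busy-server equivalents of work
new_capacity_used = new_servers * new_util   # 8.0 busy-server equivalents (roughly comparable)

consolidation_ratio = old_servers / new_servers  # 5:1

# Each remaining host now carries about five machines' worth of work,
# which is why each unit needs more CPUs, memory, I/O, and management
# software, and why average selling prices can hold up as unit counts fall.
print(f"Work before consolidation: {old_load:.1f} busy-server equivalents")
print(f"Capacity used after:       {new_capacity_used:.1f} busy-server equivalents")
print(f"Consolidation ratio:       {consolidation_ratio:.0f}:1")

The 5:1 consolidation ratio is the link between falling unit counts and the flat or rising average selling prices Miller describes.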
Q: But do we not eventually get to the point where we have done that shift? That is the point that I am trying to make. We get everyone through that upgrade cycle and we all have Integrity-style, VSE-like virtualised ProLiants. And then you live by the incremental growth in the applications as a set; you can no longer grow revenues just because companies have isolated workloads and have to plan for peaks on each isolated system. To be fair, virtualisation will drive disaster recovery, since a lot of eggs will be in fewer baskets.
A: When companies start putting all their eggs in one basket, they actually buy more memory, more I/O, and more software. It is changing the balance of revenue around.
Source: Computergram

