Racks, blades and bricks

April 2002
Tiny Maubane: product manager - PRIMERGY servers, Fujitsu Siemens Computers

For a long time, system design followed the same formula: the more processors in a system, the greater its expandability and the more hard disks it held. This rule still applies to remote servers and to companies that require only one or a few servers. For data centre environments, however, this classic design is a poor compromise.

Unused hard disk or tape drive slots take up space, which is expensive in data centres - and the waste is greatest when data is stored outside the servers on disk sub-systems anyway. This separation permits a more flexible allocation of disks to servers and allows high-availability concepts such as clustering to be implemented.
For this type of architecture, highly compact computer nodes installed exclusively in racks are required - so-called rack-optimised servers. The goal is to pack the maximum computing power into the minimum space, with few internal hard disks. The most important measurement is the number of height units the server occupies in the rack (1 height unit or 1U = 1.75 inches, approximately 4.45 cm). Today, single- or dual-processor servers at 1 or 2U, 4-processor servers at 4U and 8-processor servers at 7U are considered standard.
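To see what this density means in practice, consider a worked example. The figures below assume a common 42U full-height rack; that rack height is an illustrative assumption, not something specified in this article:

# Illustrative arithmetic: servers and processors per rack at the
# standard U sizes quoted above, assuming a 42U full-height rack.
RACK_HEIGHT_U = 42

for name, height_u, cpus in [("dual-processor", 1, 2),
                             ("4-processor", 4, 4),
                             ("8-processor", 7, 8)]:
    fit = RACK_HEIGHT_U // height_u
    print(f"{name} at {height_u}U: {fit} servers, {fit * cpus} processors per rack")

A rack of 1U dual-processor machines thus holds 84 processors, against 48 for 7U eight-way systems - which is why density, not just per-box power, drives rack-optimised design.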
This trend continues unabated: within less than 18 months, rack-optimised servers have grown to more than 40% of all new installations.
Large server farms place new demands on the rack infrastructure. Since each server has multiple cable connections, the cabling must be structured in such a way that the rack can still be serviced. At the same time, the rack must be constructed so that the heat produced by so many systems can dissipate - otherwise they will overheat and shut down. Some server providers are therefore introducing new versions of server racks that take these requirements into account. One version, for example, places the cable guides on the side of the rack to prevent a 'cable curtain' at the back that would stop the server fans from blowing out heat. At the same time, articulated cable guides ensure that cables do not become entangled when a server is removed.
Upon closer examination of these problem areas, it quickly becomes apparent that further improvements in physical server consolidation are possible. As a result, 'blade server' architecture will be the main innovation topic of 2002.
A blade server is a sort of 'server farm in a box'. Unlike rack-optimised servers, each of which has its own infrastructure (power supplies, fans, etc), a blade server provides this infrastructure for multiple servers. A single server consists only of a system board with processors, main memory and hard disks (for the operating system). It is as thin as a 'blade'. These servers are then mounted vertically in the chassis, much like the line cards in a telephone system.
To achieve this compact construction, a server blade no longer has a PCI bus. Its only interface is a LAN connection, which means that a blade server requires network-attached storage (NAS) for its data management. For optimised data throughput, blades with Gigabit Ethernet technology are recommended. The processors are low-voltage versions: while they may be somewhat less powerful, they require substantially less energy and produce less heat.
Blade servers are typically used for applications that scale well by adding more servers (so-called scale-out scenarios) and permit the free allocation of clients to servers, as the sketch below illustrates. Web server and terminal server farms are considered ideal applications - the same environment that is currently dominated by single- and dual-processor 1U servers. For Web server farms, single- or dual-processor blades are best suited, while terminal server applications, with their higher performance requirements, work best with dual-processor blades.
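The 'free allocation of clients to servers' can be made concrete with a minimal round-robin dispatcher. The blade names here are invented, and this is a sketch of the principle, not any particular product's load balancer:

# Minimal scale-out sketch: any request may go to any node, so capacity
# grows simply by adding servers to the pool. Node names are hypothetical.
nodes = ["blade01", "blade02", "blade03"]
_next = 0

def dispatch():
    """Round-robin the request onto the next blade in the pool."""
    global _next
    node = nodes[_next % len(nodes)]
    _next += 1
    return node

for i in range(4):
    print(f"request {i} -> {dispatch()}")

nodes.append("blade04")   # scaling out: add a blade; clients are unaffected

The point of the sketch is that no client is tied to a fixed server, which is exactly what lets Web and terminal server farms grow by adding blades.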
A blade server can also substantially reduce the installation and maintenance costs associated with the cabling problems mentioned earlier, provided it is equipped with built-in switches that concentrate the external LAN links. With such a set-up, the amount of LAN and power cabling can be reduced by a factor of three.
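The factor-of-three figure can be sanity-checked with some back-of-the-envelope counting. All cable counts below are illustrative assumptions, not measured data:

# Rough cable count for 20 servers. Conventional rack servers each need
# their own LAN link and power cord; blades sit behind built-in switches.
SERVERS = 20
rack_cables = SERVERS * (1 + 1)      # 1 LAN + 1 power each = 40 cables

# Hypothetical blade set-up: two 10-blade chassis, each with a built-in
# switch concentrating the LAN links into 4 uplinks, plus 2 power feeds.
blade_cables = 2 * (4 + 2)           # = 12 cables

print(f"rack-optimised: {rack_cables} cables")
print(f"blade chassis:  {blade_cables} cables")
print(f"reduction: ~{rack_cables / blade_cables:.1f}x")   # roughly a factor of three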
Server consolidation with the Wintel mainframe
Blade servers will play a major role in the physical consolidation of large server farms (scale-out), but many business applications, due to the way they are structured, permit only a moderate distribution across a few powerful servers. For database systems, the greatest efficiency is achieved by running them on a single large system. In these cases, computing performance is increased by putting more, and more powerful, processors into a single system (the so-called 'scale-up' scenario). To gain more flexibility and lower administrative costs in such a scenario, IT departments are showing great interest in operating large Intel-based computers - 'Wintel mainframes'.
With mainframes, the system is logically divided into several sub-systems, ie 'partitions'. Each partition runs under a separate operating system. The advantage compared to multiple, physically separate servers is that resources can be added to or removed from the individual partitions as needed. For example, if the 'order processing' application is barely used at night, resources could be shifted to batch runs like payroll processing, thus allowing resources, such as processors, memory, etc, to be used more efficiently.
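A toy model makes the day/night shift concrete. The partition names and processor counts below are invented for illustration; real dynamic partitioning is a function of the operating system and firmware, as the next paragraph explains:

# Toy model of dynamic partitioning: processors move between logical
# partitions without a reboot. Names and sizes are invented.
partitions = {"order-processing": 12, "payroll-batch": 4}

def shift_cpus(src, dst, n):
    """Reassign n processors from one partition to another."""
    if partitions[src] < n:
        raise ValueError(f"{src} has only {partitions[src]} CPUs")
    partitions[src] -= n
    partitions[dst] += n

shift_cpus("order-processing", "payroll-batch", 8)   # night: lend to the batch run
print(partitions)   # {'order-processing': 4, 'payroll-batch': 12}
shift_cpus("payroll-batch", "order-processing", 8)   # morning: give them back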
All this only makes sense, however, if this resource allocation can be controlled dynamically, ie without having to shut down the system or parts of it. This requires an operating system that is able to perform such operations. At this time, dynamic partitioning is not yet available for Windows 2000, nor for any other operating system that runs on Intel. The successor to Windows 2000, Windows.NET, will not have this feature either. According to currently available information, it will only be included in the subsequent Windows version.
But the operating system is not the only problem. Another sore spot is the synchronisation between system, processor and operating system manufacturers. With current mainframes or large Unix systems, the processor, server and operating system technologies either come from a single source, or the manufacturers are at least closely allied. Processor versions are tightly coordinated with the development of computers and operating systems. In the Intel world, these relationships are much looser. Intel processors have very short lifecycles, and one quickly reaches the point where processors for a specific system board are no longer available. A manufacturer can try to add upgrade capability through modular system design, but the upgrade costs often exceed the cost of a completely new system - something no operator of mainframes or large computer systems will accept.
For this reason, most Intel-based servers top out at eight processors, and attempts to add more are usually short-lived. Only Unisys currently offers large systems with up to 32 processors. The next 64-bit Intel Itanium processor generation (McKinley) is expected to make larger systems possible, and the necessary chipsets are being developed. At the same time, the 64-bit technology will provide more functionality, especially for memory-intensive applications such as database systems, one of the most important large-system applications.
An alternative to large multiprocessor systems is the modular approach. In a modular architecture, separate systems (also called 'bricks') can be linked via a proprietary bus and expanded into a multiprocessor system. With this type of design, a user can start out with a traditionally sized system and add a second one when needed (for example, for a growing database). The advantage is that the full infrastructure of a multiprocessor system need not be bought at the start, but only as required. The systems can also be disconnected at any time and used separately.
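The brick idea reduces to incremental aggregation, which the following schematic sketch illustrates. The 4-processor brick size and the class names are assumptions for illustration, not a description of any shipping product:

# Schematic 'brick' model: the system is a collection of modules linked
# over a proprietary bus, and growth means adding a brick.
class Brick:
    def __init__(self, cpus=4, ram_gb=8):
        self.cpus, self.ram_gb = cpus, ram_gb

class ModularSystem:
    def __init__(self):
        self.bricks = []

    def add_brick(self, brick):
        self.bricks.append(brick)        # linked via the proprietary bus

    def remove_brick(self):
        return self.bricks.pop()         # detached to run stand-alone

    def total_cpus(self):
        return sum(b.cpus for b in self.bricks)

system = ModularSystem()
system.add_brick(Brick())                # start at a traditional size
system.add_brick(Brick())                # the database grew: add a brick
print(system.total_cpus())               # 8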
The right position in the IT infrastructure
The restrictions listed above are the reason that large organisations employ a mix of different computer architectures.
At the front-end server level, ie the part of the infrastructure that links clients with applications and data via Web and terminal services, Intel servers are the undisputed leader. Even Sun Microsystems, which focuses almost exclusively on Solaris/Sparc, has announced that it will offer Intel servers for this range. A scale-out, load-balancing network of inexpensive servers combines low system costs with high flexibility, which blade servers can increase even further. Among Web servers, Internet Information Server under Windows 2000 and Apache under Linux will dominate; the Meta Group expects a split of 40% Windows Web servers to 30% Linux Web servers by 2004.
In the field of business applications, Linux will not play a large role in large organisations for the foreseeable future (Meta Group). This level will be dominated by Intel servers under Windows, enterprise Unix systems (Sparc/Solaris, PA-RISC/HP-UX, etc) and mainframes (BS2000, MVS). Wintel is helped by its usually lower purchase cost, while enterprise Unix systems and mainframes benefit from their greater data centre functionality (dynamic partitioning, high automation, batch operation), which makes for very low operating costs.
In the database field, Intel servers with SQL Server and Oracle have enjoyed high growth rates, but due to their limited scalability they have not been an alternative for large implementations. Windows.NET, especially in combination with the Itanium processor, will push these limits markedly higher in 2003.

