Wednesday, September 18, 2013

IBM Aims NextScale Hyperscale Boxes At Clouds--And Possibly Power8

Big Blue is not quite as ready as it may have seemed to get out of the X86 server racket, at least not judging by the launch last week of the NextScale server line. And maybe, just maybe, the NextScale line of minimalist machines will sport current and future Power processors to give service providers and cloud builders a cheaper and denser alternative than the Flex System chassis announced a year and a half ago to build Power-based clouds. The future of the IBM i business may depend upon it, so getting Power processors inside of the NextScale nodes is important.

You have to have the right tool for the job in this world. The Flex System machines are a converged data center in a box, and as such, they are intended for consolidated workloads running on bare metal or logical partitions that might otherwise run on separate systems. The main drive behind the Flex Systems (and their various Pure packaging above the iron) is to consolidate workloads and provide a sophisticated, integrated management of the converged compute, storage, and networking in the box. There's nothing wrong with the Flex System machines, but cloud operators and hyperscale data center operators running massively distributed applications have different needs in terms of price points and architectures.

Dell has been peddling custom servers from its Data Center Solutions unit to hyperscale data center operators and has created the PowerEdge-C line to chase enterprise customers with high-density, quasi-custom machines. Hewlett-Packard has followed suit, first with its SL6500 and SL2500 series, and then with its microserver-packed Moonshot server enclosure.

Now, IBM wants a piece of the action, and the NextScale minimalist machines are all about getting cheap iron out there that is competitive with Dell, HP, Super Micro, and the several vendors that are building machines based on designs from the Open Compute Project started by Facebook. These machines are, as OCP puts it, "vanity free," meaning they don't have a single bit of metal that is not needed and they don't look like a piece of office furniture, as a System/3X did and most servers still do. These vanity-free designs are all about maximizing density and airflow and minimizing cost.

Here's the front and back view of the new NextScale n1200 chassis:

The chassis, as you can see, uses half-width server nodes, just like the Flex Systems do. (You can also get a full-width node for four-socket processing in the Flex System.) This enclosure is 6U high and has room for a dozen nodes. Unlike the Flex System, there is no midplane hooking the server nodes to integrated switches or passthrough modules and storage, which is what links the Flex System into a single complex with a single management tool. The NextScale machines assume you will have minimal internal storage on a node and external switching inside the rack.

The chassis can have up to six hot-swap power supplies, each rated at 900 watts and configured in N+1 or N+N redundancy modes, and it has up to ten hot-swap fans to pull air through the enclosure. With this enclosure, you can cram 84 two-socket server nodes into a single 42U rack, which is double the density of two-socket pizza box servers. These half-width servers are the norm for density workloads these days, and in fact some companies, Facebook and Rackspace Hosting to name two, are trying to do three nodes in the same 19-inch width by going just a little bit taller on the node. This is about a third more compute density than a BladeCenter could offer, and you don't have to skimp on processor watts, either. However, the iDataPlex machines that IBM introduced a number of years ago using non-standard racks (two half-depth racks sitting side by side) also held 84 two-socket nodes, so they had the same density as the new NextScale machines.
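The density and power figures above reduce to simple arithmetic. Here is a back-of-the-envelope sketch; the chassis height, node count, and power supply ratings come from the article, while the usable-power figures under each redundancy mode are my own straightforward calculations, not numbers IBM has published.

```python
# Rack-density and power math for the NextScale n1200, per the article.

RACK_U = 42               # standard rack height
CHASSIS_U = 6             # n1200 enclosure height
NODES_PER_CHASSIS = 12    # half-width two-socket nodes per enclosure

chassis_per_rack = RACK_U // CHASSIS_U                  # 7 enclosures
nodes_per_rack = chassis_per_rack * NODES_PER_CHASSIS   # 84 nodes

# A plain 1U two-socket pizza box yields 42 nodes per rack,
# so the n1200 doubles that density, as the article says.
pizza_boxes_per_rack = RACK_U // 1                      # 42 nodes

# Power: up to six hot-swap 900-watt supplies per chassis.
PSU_WATTS = 900
PSUS = 6
usable_n_plus_1 = (PSUS - 1) * PSU_WATTS   # one supply held in reserve
usable_n_plus_n = (PSUS // 2) * PSU_WATTS  # half the supplies redundant
```

Under N+1 redundancy the chassis has 4,500 watts of usable power to feed a dozen nodes; under the stricter N+N mode that drops to 2,700 watts, which is one reason dense enclosures like this one care so much about processor wattage.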

I haven't forgotten that IBM has the double-density Flex x222, which puts two whole servers in a single half-width slot in the Flex System. A rack of these Flex x222 machines packs 40 percent more compute, but you are limited to the Xeon E5-2400 v1 processors from Intel, not the full-on Xeon E5-2600 v2s, which have more clocks, cores, and cache. And the new Xeon E5-2600 v2 processors announced last week by Intel are not yet available in any Flex compute modules, but they are available in the NextScales.

The virtue of the NextScale machines is that they have the same density as the iDataPlex as well as a vanity-free design. And, because the enclosure and server nodes are made in IBM's factory in Shenzhen, China, they have prices that are competitive with HP's SL series Scalable Systems, says Gaurav Chaudhry, who is worldwide marketing manager for System x hyperscale computing solutions at Big Blue.


There is only one server node available for the NextScale n1200 chassis at the moment, and it is the nx360 M4. It is a two-socket node that has four memory slots per socket for a maximum of 128 GB of main memory per node. The compute node has one PCI-Express 3.0 x8 mezzanine connector for dual-port InfiniBand (56 Gb/sec) and dual-port Ethernet (10 Gb/sec) adapters plus another x16 slot for other peripheral connectivity. The server also has two 1 Gb/sec Ethernet ports for those who don't need fast networking. (And there are plenty of workloads that do not.) There is also a PCI-Express 3.0 slot with 24 lanes (x24 in the image above), and it is not clear at press time what this is used for, but it is probably meant to hook in peripherals like external disk arrays and various kinds of coprocessors. The node can have one 3.5-inch disk, two 2.5-inch disks, or four 1.8-inch SSDs.
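The memory ceiling implies a particular DIMM size. A quick sanity check, with the caveat that the 16 GB DIMM figure is my inference from the slot count and the 128 GB maximum, not something the article states:

```python
# Sanity check on the nx360 M4 memory ceiling quoted in the article:
# four DIMM slots per socket across two sockets, capped at 128 GB.

SOCKETS = 2
SLOTS_PER_SOCKET = 4
MAX_MEMORY_GB = 128

dimm_slots = SOCKETS * SLOTS_PER_SOCKET      # 8 slots per node
dimm_size_gb = MAX_MEMORY_GB // dimm_slots   # implies 16 GB DIMMs
```

Eight slots at 16 GB apiece is a modest memory configuration by two-socket standards, which fits the hyperscale positioning: these nodes are built for cheap, dense scale-out, not fat memory footprints.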

Red Hat's Enterprise Linux 6, SUSE Linux's Enterprise Server 11 SP3, and Microsoft's Windows Server 2012 are all supported on the nx360 M4 server. If customers want to virtualize, they have to use VMware's ESXi 5.1 hypervisor.

The new NextScale chassis and server nodes will ship on October 28. A base node with one six-core E5-2620 v2 processor spinning at 2.1 GHz, with 8 GB of memory and a single 3 TB disk, costs $4,049. Pricing for the chassis was not available at press time.

There is more to the NextScale than a chassis without a price and a node with one, as you can see:

IBM also has a storage expansion module, 1U high and half a chassis wide, that snaps onto a server node and will sport eight 4 TB disk drives and a RAID disk controller when it becomes available in November. An accelerator expansion module, which will incorporate Nvidia GPU and Intel Xeon Phi coprocessors, will come out next year.

Chaudhry says that the System x division is talking to the Power Systems division about the possibility of using Power processors in another set of NextScale nodes. Given that this machine is the replacement for the iDataPlex, and that IBM is very keen on promoting Power and Linux together to supercomputer and hyperscale data center operators, it seems inconceivable that there will not be Power nodes in this machine, particularly after water cooling of the components is brought to market to put the NextScale on par with the iDataPlex setup in that regard. And when asked about possible ARM-based server nodes, Chaudhry said IBM was looking at all of the options, including microservers. IBM could probably cram a lot of single-socket ARM and Power nodes into this NextScale chassis. And it could support a mix of IBM i, Linux, and AIX on the machines, too.

If enough service providers and cloud builders ask for it, IBM i and Power should happen with the NextScale. And probably with next year's Power8 processors if I had to bet. Now is the time to ask. 




By: Timothy Prickett Morgan
