Tuesday, June 10, 2014

IDC Data Shows IBM Security Outpacing the Market


With the massive media coverage of today’s advanced threats, customers and thought leaders are recognizing that a comprehensive, integrated approach rather than a proliferation of point products is necessary. IDC recently released its 2013 security software market share data, which showed that IBM Security gained share, maintained leadership positions and dramatically outpaced the market. This is substantive support for our value proposition.

“According to IDC’s Worldwide Semiannual Software Tracker analysis for calendar year 2013, IBM Security Systems maintained its number one position in identity and access management (IAM) and security vulnerability management (SVM), which includes security information and event management (SIEM), and improved its share in endpoint security and network security,” IDC said in a statement on the findings, which have not yet been released to the public. “IBM significantly outpaced overall security software market growth and remained the number three security software vendor in 2013.”

IDC Reports What Our Customers Already Know

This data from IDC is solid evidence that customers and partners support IBM Security’s value proposition, products, support, engagements and all the investments we’ve made to grow the business, a point further evidenced by the division’s consistent double-digit growth, as noted during our earnings calls.

A steady supply of media coverage regarding the latest advanced persistent threats (APTs), cyber attacks and breaches has elevated the need for advanced security solutions to a boardroom priority. This constant buzz also raises the stakes: CISOs and partners demand the best security products and services from their chosen vendors and will quickly find alternatives whenever necessary. The competition is fierce and will only ramp up as startups enter the fray and established vendors consolidate in order to enter this market. But the trend is towards integration: collecting massive amounts of data, applying analytics against the data to identify prioritized offenses and taking action. A collection of disparate security products puts the burden on the customer or partner, neither of whom has the budget or the advanced skills needed to act effectively.

Our relentless pursuit of product excellence clearly demonstrates to our customers and partners that we intend to lead this market through innovation by executing our strategy and, most of all, by partnering with them to solve the problems they face constantly. We offer a clear distinction: Customers all over the world can choose to acquire a best-in-class product or an integrated and open security intelligence system from a single vendor.
 
 
 
By: John Burnham

Tuesday, April 29, 2014

IBM announces IBM i 7.2

 


Today, IBM announces the release of IBM i 7.2, the first new IBM i release in four years. This release provides significant new function in DB2 for i, systems management and security, and enhances many other integrated components and licensed programs. IBM i 7.2 includes support for and takes advantage of the latest Power Systems server family running on new POWER8 technology.
 
New enhancements in IBM i 7.2 include:
  • Enhanced security options in DB2 for i
  • Many new functions for programmer productivity and expanded function in DB2 for i
  • Improved ease of use with IBM Navigator for i
  • Enhancements to iAccess Client Solutions
  • Extension of security to more applications through new single sign-on (SSO) environments
  • Liberty Core as the base for Integrated Application Server
  • Support for Zend Server 6.0 on IBM i 7.2
  • Performance improvements for the IFS
  • Extensions to the printing environments
  • Expanded Hub functions for Backup, Recovery, and Media Services (BRMS)
  • PowerHA SystemMirror for i Express Edition with new HyperSwap capability
  • Support for new Power Systems built with POWER8 architecture and processor
  • Additional I/O capabilities including support for WAN over LAN
  • Rational tools enhancements to support Free Format RPG
  • Support for the open-source file-serving solution Samba

IBM i 7.2 is supported on Power Systems servers and PureFlex systems with POWER8 processors, Power Systems servers and blades and PureFlex systems with POWER7/7+ processors, and Power Systems servers and blades with POWER6/6+ processors. Clients using POWER5/5+ servers or earlier servers must move to newer systems to take advantage of the new features in IBM i 7.2.

Clients running IBM i 7.1 or IBM i 6.1 can easily upgrade to IBM i 7.2, which will be available May 2, 2014, to benefit from the additional features and performance provided with the latest technologies included in the operating system.

To learn more, visit the IBM i 7.2 Knowledge Center website, and look for the May issue of IBM Systems Magazine, Power Systems edition, where IBM i Chief Architect Steve Will shares his thoughts on the new release.



By: Tami Deedrick

Monday, April 14, 2014

IBM FlashSystem 840 for Legacy-free Flash

Flash storage is at an interesting place and it’s worth taking the time to understand IBM’s new FlashSystem 840 and how it might be useful.
 
A traditional approach to flash is to treat it like a fast disk drive with a SAS interface, and assume that a faster version of traditional systems is the way of the future. This is not a bad idea, and with auto-tiering technologies this kind of approach was mastered by the big vendors some time ago; it can be seen, for example, in IBM’s Storwize family and DS8000, and as a cache layer in the XIV. Using auto-tiering we can perhaps expect large quantities of storage to deliver latencies around 5 milliseconds, rather than a more traditional 10 ms or higher (e.g. MS Exchange’s jetstress test only fails when you get to 20 ms).

No SSDs

Some players want to use all SSDs in their disk systems, which you can do with Storwize for example, but this is again really just a variation on a fairly traditional approach, and you’re generally looking at storage latencies down around one or two milliseconds. That sounds pretty good compared to 10 ms, but there are ways to do better, and I suspect that SSD-based systems will not be where it’s at in five years’ time.

The IBM FlashSystem 840 is a little different: it uses flash chips, not SSDs. Its primary purpose is to deliver very, very low latency. We’re talking as low as 90 microseconds for writes and 135 microseconds for reads. This is not a traditional system with a soup-to-nuts software stack. FlashSystem has a new Storwize GUI, but it is stripped back to keep it simple and to avoid anything that would impact latency.

This extreme low latency is a unique IBM proposition, since it turns out that even when other vendors use MLC flash chips instead of SSDs, by their own admission they generally still end up with latency close to 1 ms, presumably because of their controller and code-path overheads.
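
To put those numbers in perspective, here is a rough, back-of-the-envelope sketch (my own arithmetic, not a figure from any vendor) of the I/O rate a single synchronous thread can sustain at each latency level; with only one outstanding I/O at a time, throughput is simply the inverse of latency:

```python
# Per-thread IOPS ceiling implied by each latency figure quoted above:
# a single thread issuing one synchronous I/O at a time completes at most
# 1 / latency operations per second.
latencies = {
    "traditional disk (~10 ms)":      10e-3,
    "auto-tiered array (~5 ms)":      5e-3,
    "all-SSD array (~1 ms)":          1e-3,
    "FlashSystem 840 read (~135 us)": 135e-6,
    "FlashSystem 840 write (~90 us)": 90e-6,
}

for name, seconds in latencies.items():
    print(f"{name:32s} ~{1 / seconds:>8,.0f} IOPS per synchronous thread")
```

Real workloads keep many I/Os in flight, but this per-thread ceiling is what latency-sensitive paths such as database commits actually feel.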

FlashSystem 840
  • 2U appliance with hot-swap modules, power and cooling, controllers, etc.
  • Concurrent firmware upgrade and call-home support
  • Encryption is standard
  • Choice of 16G FC, 8G FC, 40G IB and 10G FCoE interfaces
  • Choice of upgradeable capacity
Net of 2-D RAID5      4 modules    8 modules    12 modules
2 TB modules          4 TB         12 TB        20 TB
4 TB modules          8 TB         24 TB        40 TB
  • Also a 2 TB starter option with RAID0
  • Each module has 10 flash chips and each chip has 16 planes
  • RAID5 is applied both across modules and within modules
  • Variable stripe RAID within modules is self-healing

I’m thinking that prime targets for these systems include databases and VDI, but also folks looking to future-proof their general performance. For a five-year purchase, not everyone will want to buy a ‘mature’, legacy-style SSD flash solution when they could instead buy into a disk-free architecture of the future.

But, as mentioned, FlashSystem does not have a full traditional software stack, so let’s consider the options if you need some of that stuff:
  • IMHO, when it comes to replication, databases are usually best replicated using log shipping, Oracle Data Guard etc.
  • VMware volumes can be replicated with native VMware server-based tools.
  • AIX volumes can be replicated using AIX Geographic Mirroring.
  • On AIX and some other systems, you can use logical volume mirroring to set up a mirror of your volumes with preferred read set to the FlashSystem 840 and writes mirrored to a V7000 (or DS8000, XIV, etc.), allowing full software-stack functions on the V7000 copy without slowing down reads off the FlashSystem.
  • You can also virtualize FlashSystem behind SVC or V7000
  • Consider using Tivoli Storage Manager dedup disk to disk to create a DR environment
Right now, FlashSystem 840 is mainly about screamingly low latency and high performance, with some reasonable data center class credentials, and all at a pretty good price. If you have a data warehouse, or a database that wants that kind of I/O performance, or a VDI implementation that you want to de-risk, or a general workload that you want to future-proof, then maybe you should talk to IBM about FlashSystem 840.


By: Jim

Made in IBM Labs: Enabling dynamic prioritization of data in the Cloud



As more and more companies take advantage of applications, processes and services delivered via the cloud, vendors are struggling with increased complexity and challenges associated with ensuring uninterrupted data availability. IBM's patented technique creates a cloud environment in which Quality of Service priorities can be modified according to real-time or expected conditions, to reduce data bottlenecks in the cloud, thereby ensuring that clients receive the level and quality of service they expect.

The new invention will help alleviate problems that cloud providers face when they need to provide simultaneous, efficient and uninterrupted service to a range of clients for applications, including online banking and shopping, real-time video, supply-chain management, enterprise resources planning and more. 

"Since companies are relying upon the cloud to manage and process critical business data and interactions, guaranteeing and delivering quality, reliable service is an imperative for cloud vendors," said IBM Cloud Offering Evangelist Rick Hamilton. "This patented invention will enable cloud service providers to dynamically respond to potential data choke points by changing quality of service priorities to ensure the free flow of data for their clients." 

IBM received U.S. Patent #8,631,154, "Dynamically modifying quality of service levels for resources in a networked computing environment," for the invention.

Since beginning work with clients and partners around cloud computing in 2007, IBM has continued to focus on building clouds for enterprise clients. IBM provides cloud services and collaborates with clients to create new opportunities to reach more of the market or extend their services by leveraging cloud delivery.

For 21 consecutive years, IBM has been the leading recipient of U.S. patents. IBM inventors have patented thousands of inventions that will enable significant innovations, position IBM to compete and lead in strategic areas such as IBM Watson, cloud computing and Big Data analytics, and advance the new era of cognitive systems, where machines will learn, reason and interact with people in more natural ways.

About IBM Cloud Computing 

IBM has helped more than 30,000 clients around the world with 40,000 industry experts. Today, IBM has more than 100 cloud SaaS solutions, thousands of experts with deep industry knowledge helping clients transform and a network of 40 data centers worldwide. Since 2007, IBM has invested more than $7 billion in 17 acquisitions to accelerate its cloud initiatives and build a high-value cloud portfolio. IBM holds 1,560 cloud patents focused on driving innovation. In fact, IBM topped the annual list of U.S. patent leaders for the 21st consecutive year. IBM processes more than 5.5 million client transactions daily through its public cloud.



By: Pr Newswire
Link: http://cloudcomputing.ulitzer.com/node/3054837

Wednesday, March 26, 2014

IBM Power8 rollout to start with scale out clusters

Officially, IBM has said that it would be launching its twelve-core Power8 processors in new Power Systems servers and PureSystems converged systems sometime around the middle of the year. But as EnterpriseTech has previously reported, the word on the street is that IBM is getting ready to get the first Power8 machines into the field sometime around the end of April or early May. If the latest scuttlebutt is right, then it looks like the first Power8 systems will be entry machines that can be clustered up to run supercomputing simulations, Hadoop analytics, parallel databases, and similar modern distributed workloads.

Or, as the case may turn out, homemade Power8 systems that might possibly be used by Google in its vast infrastructure or souped up boxes aimed at high frequency traders, as EnterpriseTech has previously told you about.

A late April or early May launch would coincide with the Impact 2014 event that Big Blue is hosting for customers and partners in Las Vegas from April 27 through May 1. The company is also participating in the annual COMMON conference for customers of its IBM i (formerly OS/400) midrange system, which runs from May 4 through 7. While the IBM i server business is considerably smaller than it was 15 years ago at its peak, Power Systems machines running IBM i are still used by around 150,000 customers worldwide, and that operating system is only available on Power-based servers from IBM. Significantly, most of those customers tend to buy entry-level machines because their workloads are fairly modest by modern standards, and this is also the same class of machine you might use if you wanted to build a Hadoop cluster running atop Linux on Power chips. Doing one big push to cover many different markets makes sense, particularly with IBM trimming costs as Power Systems revenues have been on the decline.

Just like other chip makers – notably Intel, AMD, Oracle, and Fujitsu in the enterprise space – IBM staggers its chip launches, although in this case it controls both the chips and the systems unlike Intel and, for the most part, AMD. For the past several processor generations, IBM has started the Power chip launch in the middle of its line, with machines that have from 4 to 16 sockets in a single system image. These are relatively low volume products, so it gave IBM time to ramp up its 45 nanometer process for eight-core Power7 chips and 32 nanometer process for eight-core Power7+ chips. The Power7+ had some microarchitecture improvements to boost per-core performance and a whole lot more cache per core to push the performance up even further. Then IBM launched entry machines using the chip, and in the case of the Power7, finished up with a 32-socket box that has a specially packaged version of the Power7 that allows it to clock higher than in the entry and midrange machines. The high-end machine in the Power Systems line does not get an upgrade to the “plus” variants of any processor, by the way. Whether or not IBM maintains this practice remains to be seen.

If history is any guide, IBM will have a high-end Power machine with 16 or 32 sockets available by the fall, for the big year-end sales push.
 
Starting at the bottom of the line this time around makes sense given that Intel refreshed its Xeon E5 processor lineup back in September with variants with six, ten, and twelve cores. If IBM wants to sell scale-out clusters based on Power8 chips against Intel, systems based on Xeon E5 v2 processors are the ones Big Blue has to beat.




The mantra coming out of IBM’s Systems and Technology Group about the Power8 launch is cloud, big data, open, and scale out. The open part means linking the Power8 launch to the OpenPower Consortium, which IBM started last year with Google, Nvidia, Tyan, and Mellanox Technologies. The consortium now has 14 paid members who are contributing to firmware, hypervisor, motherboard, and other parts of the Power8 system design, and one member, a startup called Suzhou PowerCore, is licensing the Power8 chip to create its own variants of the processor for the Chinese server, storage, and switching markets. Sources at IBM tell EnterpriseTech that there are more than 100 other companies that have expressed interest in joining the OpenPower Consortium.

The Power8 chip, like its predecessors, is probably relatively large compared to a Xeon E5 part and will probably consume more electricity and dissipate more heat, even though it uses a 22 nanometer process that puts it on par with what Intel can deliver at the moment. IBM’s 64-bit Power chips have always been larger and hotter than their Intel equivalents, but they make up for it with more throughput, enabled by radically higher I/O and memory bandwidth. The twelve-core Power8 chip will sport 96 MB of L3 cache on the die and will have an additional 16 MB of L4 cache on each “Centaur” buffer chip embedded on the memory cards used with Power8 systems. (On a 16-socket system linked gluelessly by the NUMA interconnect on each Power8 chip, that is a total of 128 MB of L4 cache on a system that can have 16 TB of main memory across those sockets. That machine will have 192 cores, and the interconnect has an extra link in it now so any socket can get to any other socket in one or two hops instead of a maximum of three.)
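
A quick sanity check of the figures in that parenthetical (just arithmetic on the numbers quoted above, not additional IBM specifications):

```python
# Arithmetic check on the glueless 16-socket Power8 system described above.
sockets = 16
cores_per_chip = 12
total_memory_tb = 16

print("total cores:", sockets * cores_per_chip)              # 192
print("memory per socket (TB):", total_memory_tb / sockets)  # 1.0
```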






The interesting thing to consider as we head out to the GPU Technology Conference hosted by Nvidia is precisely how Big Blue and the graphics chip maker are going to collaborate on ceepie-geepie systems starting with the Power8 generation and moving forward from there. Back in November at the SC13 supercomputing conference, Brad McCredie, vice president of Power Systems development within IBM’s Systems and Technology Group, told EnterpriseTech that the two companies were in the process of modifying the Power8 systems to better accommodate Nvidia’s Tesla GPUs and would be tweaking the IBM software stack to accelerate it with those GPUs. The Power8 chips have on-die PCI-Express 3.0 peripheral controllers. The jump to PCI-Express 3.0 is necessary to quickly move data back and forth between the CPU and GPU as they share work; the PCI-Express 2.0 used on Power7 and Power7+ chips was too slow to push these accelerators or high-speed InfiniBand and Ethernet cards.

To simplify the programming model for hybrid systems and speed up data transfers between the CPUs and GPUs, IBM created what it calls the Coherent Accelerator Processor Interface, or CAPI, which will be first implemented on the Power8 chip. This is an overlay for the PCI protocol that creates a virtual memory space made up of the CPU main memory and any memory used by any kind of accelerator that plugs into the PCI bus: GPU, FPGA, DSP, or what have you.

[Chart: IBM Power8 CAPI overview]



The CAPI interface will work with the Tesla GPU accelerators and the virtual memory in the CUDA environment to manage the movement of data between main and frame buffer memory, transparent to the application. Nvidia announced unified memory between the X86 CPUs and Tesla GPUs with CUDA 6 last fall ahead of the SC13 event. (It is probably not a coincidence that the accelerator side of the chart above is in two shades of green and black. The nearly unreadable print in the light green box says “Custom Hardware Application, FPGA or ASIC” in case you can’t read it.)

IBM has been very clear that it wants to accelerate Java workloads with GPUs, and work is progressing to get such acceleration into the field perhaps by 2015 or so. That is also, perhaps not coincidentally, when IBM expects to have a clean-slate system design based on Power chips and future Nvidia GPU coprocessors in the field that more tightly links the two together. Java, of course, is the language of choice for a lot of enterprise applications, mainly because it is easier to work with than C or C++ and more widely known than any of the legacy programming languages on IBM systems.

The thing to remember, as we have pointed out before, is that IBM can have a much tighter partnership with Nvidia than either Intel or AMD can. It is reasonable to expect the two companies to work more closely together on traditional supercomputing systems as well as other kinds of clustered and accelerated systems used throughout enterprises. Hopefully, the two companies will make some of their long-term plans clear at the GPU Technology Conference.


By: Timothy Prickett Morgan
Link: http://www.enterprisetech.com/2014/03/24/ibm-power8-rollout-start-scale-clusters/

Wednesday, February 26, 2014

IBM BlueMix: PaaS Play, explained

With BlueMix, IBM gives customers a cloud path for legacy apps. Here's how SoftLayer, Cloud Foundry, and WebSphere tools fit in.

IBM is putting together a PaaS platform that it has dubbed BlueMix, which is a combination of open source code, IBM software tools, and Big Blue's tried and true WebSphere middleware used by many of its oldest customers. In effect, it's investing $1 billion to give enterprise customers a path to move legacy systems into the cloud.

For enterprise users who want to move an application into IBM's SoftLayer unit's public cloud, the many components of IBM WebSphere middleware will be there and waiting as callable services through a SoftLayer API. IBM acquired SoftLayer and its 700 employees last July and made its provisioning, management, and chargeback systems the core of its future cloud services.

Not so fast, you say. IBM's Blu Acceleration for DB2, Watson advanced analytics, Cognos business intelligence, and many versions of WebSphere run on IBM Power Systems servers, not the cloud's ubiquitous x86 servers.

Lance Crosby, CEO of IBM's SoftLayer unit, agrees that's the case. And that's why Power servers are now being incorporated into the SoftLayer cloud. It will be one of the few public clouds with a paired hardware architecture approach. Crosby declined to predict how many Power servers may be added or what percentage they would become. SoftLayer currently has about 150,000 x86 servers. IBM is adding 4,000 to 5,000 x86 servers to that number a month, and x86 will remain the majority by a wide margin, Crosby told InformationWeek.

"Power servers were never about volume. They're about more memory capacity and processing power" to handle enterprise ERP and database applications, which require large amounts of both, Crosby said.

In addition, IBM is making a broad set of its data analytics, Rational development tools and applications, such as Q9 security and Maximo inventory management, available on SoftLayer as software-as-a-service. Developers producing next-generation applications will have the option of using services from IBM's software portfolio that they're already familiar with, Crosby added. IBM Tivoli systems management software will also be made available, though no date was announced. Crosby said IBM will seek to get the bulk of its portfolio into the BlueMix PaaS by the end of the year.


Although there's a strong legacy component, IBM claims the $1 billion figure comes into play because that's the amount it's spending to break Rational tools, WebSphere middleware, and IBM applications down into services and make them available via SoftLayer. It's also using part of that figure to acquire the database-as-a-service firm, Cloudant.

About two dozen tools and pieces of middleware are available for the beta release of BlueMix, with 150 to 200 products to become available when the cloud-enablement conversion process is done.

Much of the $1 billion will be needed to convert IBM's huge software portfolio, currently sold under the packaged and licensed model, into a set of "composable services" that developers can employ as parts of new applications. Only a fraction of that portfolio is ready with BlueMix's beta launch on Feb. 24. Crosby said the way IBM would have handled such an announcement in the past was to wait until it was finished converting distinct products or product sets before going public. But that's the old enterprise way of doing things.

IBM is trying to adopt more of a "born on the web," agile development approach, where software gets changed as soon as one update is ready and production systems have short upgrade cycles. "Our goal is to follow the mantra of the agile development approach as soon as we can," said Crosby.

IBM middleware will often appear through BlueMix incorporated into a predefined "pattern" created by IBM. BlueMix on SoftLayer will give developers the ability to capture a snapshot of a pattern with each application, so that it "can be deployed to 10 datacenters in an identical fashion at the click of a button," said Crosby. A pattern often consists of an application, a Web server, IBM middleware, and a database service.
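
As a purely illustrative sketch of the idea (the names, fields and datacenter labels below are invented for illustration and are not IBM's actual pattern format or API), a pattern can be thought of as a declarative snapshot of an application's tiers that the platform replays identically in every target datacenter:

```python
# Hypothetical, simplified model of a "pattern": a declarative description of
# an application's tiers, replayed identically in each target datacenter.
pattern = {
    "name": "order-processing",
    "tiers": [
        {"role": "web server", "image": "http-server",       "instances": 2},
        {"role": "middleware", "image": "websphere-liberty", "instances": 2},
        {"role": "database",   "image": "db2-service",       "instances": 1},
    ],
}

def deploy(pattern, datacenters):
    """Deploy the same pattern, unchanged, to every datacenter in the list."""
    for dc in datacenters:
        for tier in pattern["tiers"]:
            print(f"[{dc}] provisioning {tier['instances']} x "
                  f"{tier['image']} ({tier['role']})")

deploy(pattern, datacenters=["dc-01", "dc-02", "dc-03"])
```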

BlueMix will run in SoftLayer on top of the open source platform, Cloud Foundry, originally sponsored as a project by VMware. Cloud Foundry became the charge of the Pivotal subsidiary, as it was spun out of VMware and EMC. Now its organizers say they are moving the PaaS project out into its own foundation and governing board. The Apache Software Foundation, OpenStack, and other key open source code projects have followed a similar route to gain the broadest possible backing.

There are 20 million developers in the world, and three-quarters of them have yet to develop a cloud application or work with a cloud-based platform as a service, according to Evans Data, which regularly surveys developers' attitudes and skills around the world. IBM is launching BlueMix as a combination of open source code and proprietary software to capture its share of their future work in the cloud.

IBM announced in January that it was expanding the SoftLayer chain of datacenters from 13 to 40 locations around the world to give SoftLayer a competitive global reach. It is spending $1.2 billion this year on that initiative.



By: Charles Babcock
Link: http://www.informationweek.com/cloud/platform-as-a-service/ibm-bluemix-paas-play-explained/d/d-id/1113979

Friday, February 21, 2014

Knock out your mobile development deadlines with IBM Worklight

Have you been asked to deliver new functionality or a new application with an impossible deadline? How about delivering a fully featured and integrated mobile application for multiple platforms in five weeks? Yes, I know that is a ridiculous timeline. However, is it possible? With the help of an IBM Premier Business Partner (Avnet Technology Solutions) and IBM Worklight, we were able to deliver such an application on time and on budget.


How is that even possible?

In a recent blog post, “IBM Worklight to the rescue: Saving your company's reputation,” I discussed how the remote disable function of IBM Worklight could provide significant value to a company that needed to deny access to a specific version of its application. I recently completed a mobile application project with an IBM client that was successful in part because of the remote disable and direct update features of IBM Worklight.

So what did we really deliver?

We delivered a hybrid application built using JavaScript, HTML5 and CSS that would be approved by and available in iOS and Android app stores with custom phone and tablet versions. The application was tested on multiple devices, operating systems and form factors. I won’t bore you with all of the details, but here is a high-level list of the functional requirements that were delivered.
  • Push notifications
  • Remote database integration for lead and data collection
  • Device calendar integration (add events to personal calendars)
  • Custom Twitter integration
  • Custom RSS feed
  • Worklight analytics 

How did Worklight help make this possible?

We were able to ensure that this project was delivered as promised with several easy-to-use features that are included with IBM Worklight:
  • Adapters – secure integration with remote resources
  • Automated mobile functional testing – the same test runs across multiple devices and mobile operating systems
  • Unified push notification APIs – polled server-side apps to dispatch notifications, uniform access to push notification providers and the ability to monitor and control notification delivery
  • Direct update – web resources pushed to the app when the application connects to the Worklight Server
The application used SQL and HTTP adapters to store customer information and to insert push notification messages into a database that was polled regularly. When a new entry was found in the push notification table, the polling process would create and send a new push notification through the unified push notification APIs. Direct update came into play once the basic application structure was finished and accepted in the app stores, about three weeks into the project; that left the team two weeks to make content changes and correct any defects found during testing.
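
As a rough illustration of that flow (a generic sketch, not actual Worklight adapter code; the push_queue table and send_push function are invented for the example), the server side boils down to polling the notification table and handing any unsent rows to the push API:

```python
import sqlite3
import time

def send_push(user_id, message):
    # Stand-in for the unified push notification API call described above.
    print(f"push to {user_id}: {message}")

def poll_notifications(db_path, interval_seconds=30):
    """Poll the push_queue table and dispatch any rows that have not been sent."""
    conn = sqlite3.connect(db_path)
    while True:
        rows = conn.execute(
            "SELECT id, user_id, message FROM push_queue WHERE sent = 0"
        ).fetchall()
        for row_id, user_id, message in rows:
            send_push(user_id, message)
            conn.execute("UPDATE push_queue SET sent = 1 WHERE id = ?", (row_id,))
        conn.commit()
        time.sleep(interval_seconds)
```
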
In the end, the project was successful and the application was very well received by its users.


By: Drew Douglass
Link: http://asmarterplanet.com/mobile-enterprise/blog/2014/02/knock-mobile-development-deadlines-ibm-worklight.html

Wednesday, February 19, 2014

What can GPFS on Hadoop do for you?

The Hadoop Distributed File System (HDFS) is considered a core component of Hadoop, but it’s not an essential one. Lately, IBM has been talking up the benefits of hooking Hadoop up to the General Parallel File System (GPFS). IBM has done the work of integrating GPFS with Hadoop. The big question is, What can GPFS on Hadoop do for you?

IBM developed GPFS in 1998 as a SAN file system for use in HPC applications and IBM’s biggest supercomputers, such as Blue Gene, ASCI Purple, Watson, Sequoia, and MIRA. In 2009, IBM hooked GPFS to Hadoop, and today IBM is running GPFS, which scales into the petabyte range and has more advanced data management capabilities than HDFS, on InfoSphere BigInsights, its collection of Hadoop-related offerings, as well as Platform Symphony.

GPFS was originally developed as a SAN file system. That would normally prevent it from being used with Hadoop and the direct-attached disks that make up a cluster. This is where an IBM GPFS feature called File Placement Optimization (FPO) comes into play.

Phil Horwitz, a senior engineer at IBM’s Systems Optimization Competency Center, recently discussed how IBM is using GPFS with BigInsights and System x servers, and in particular how FPO is helping GPFS make inroads into Hadoop clusters. (IBM has since announced the sale of the System x business to Lenovo, which IBM will now have to work closely with for GPFS-based solutions, but the points are still valid.)

According to Horwitz, FPO essentially emulates a key component of HDFS: moving the application workload to the data. “Basically, it moves the job to the data as opposed to moving data to the job,” he says in the interview. “Say I have 20 servers in a rack and three racks. GPFS FPO knows a copy of the data I need is located on the 60th server and it can send the job right to that server. This reduces network traffic since GPFS-FPO does not need to move the data. It also improves performance and efficiency.”
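
A toy sketch of that idea (illustrative only, not how GPFS-FPO is implemented internally): given a map of which servers hold a replica of each block, the scheduler simply runs the task on one of those servers instead of copying the block across the network.

```python
# Toy illustration of "move the job to the data": run each task on a server
# that already holds a replica of the block it needs.
replica_map = {
    "block-0042": ["rack3-server20", "rack1-server07", "rack2-server11"],
    "block-0043": ["rack2-server03", "rack3-server18", "rack1-server15"],
}

def schedule(block_id, busy_servers=frozenset()):
    """Prefer any non-busy server that holds a local replica of the block."""
    for server in replica_map[block_id]:
        if server not in busy_servers:
            return server            # the task runs where the data already lives
    return replica_map[block_id][0]  # fall back: data must cross the network

print(schedule("block-0042"))                                   # rack3-server20
print(schedule("block-0042", busy_servers={"rack3-server20"}))  # rack1-server07
```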


Last month, IBM published an in-depth technical white paper titled “Deploying a Big Data Solution using IBM GPFS-FPO” that explains how to roll out GPFS on Hadoop. It also explains some of the benefits users will see from using the technology. For starters, GPFS is POSIX compliant, which enables any other applications running atop the Hadoop cluster to access data stored in the file system in a straightforward manner. With HDFS, only Hadoop applications can access the data, and they must go through the Java-based HDFS API.
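
The practical difference is easy to see in a short sketch (the mount point and paths below are hypothetical). Because GPFS is POSIX compliant, ordinary file I/O works against the mounted file system, while the same data kept in HDFS is only reachable through HDFS tooling or its API:

```python
import subprocess

# GPFS: any program, Hadoop or not, can read the data with plain file I/O.
with open("/gpfs/bigdata/weblogs/2014-02-18.log") as f:
    line_count = sum(1 for _ in f)   # an ordinary script, backup tool, etc.
print("lines on GPFS:", line_count)

# HDFS: the same data has to go through the HDFS API or command-line tooling,
# for example by shelling out to `hdfs dfs -cat`.
log = subprocess.run(
    ["hdfs", "dfs", "-cat", "/weblogs/2014-02-18.log"],
    capture_output=True, text=True, check=True,
).stdout
print("lines on HDFS:", log.count("\n"))
```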

The flexibility to access GPFS-resident data from Hadoop and non-Hadoop applications frees users to build more flexible big data workflows. For example, a customer may analyze a piece of data with SAS. As part of that workflow, they may use a series of ETL steps to manipulate data. Those ETL processes may be best executed by a MapReduce program. Trying to build this workflow on HDFS would require additional steps, as well as moving data in and out of HDFS. Using GPFS simplifies the architecture and minimizes the data movement, IBM says.

There are many other general IT housekeeping-type benefits to using GPFS. According to IBM’s "Harness the Power of Big Data" publication, POSIX compliance also allows users to manage their Hadoop storage “just as you would any other computer in your IT environment.” This allows customers to use traditional backup and restore utilities with their Hadoop clusters, as opposed to using the “copy” command in HDFS. What’s more, GPFS supports point-in-time snapshots and off-site replication capabilities, which aren't available in plain-vanilla HDFS.

The size of data blocks is also an issue with HDFS. In IBM's June 2013 whitepaper "Extending IBM InfoSphere BigInsights with GPFS FPO and IBM Platform Symphony" IBM makes the case that, because Hadoop MapReduce is optimized for blocks that are around 64MB in size, HDFS is inefficient at dealing with smaller data sizes. In the world of big data, it's not always the size of the data that matters; the number of data points and the frequency at which the data changes is important too.

GPFS also brings benefits in the area of data de-duplication, because it does not tend to duplicate data as HDFS does, IBM says. However, if users prefer to have copies of their data spread out in multiple places on their cluster, they can use the write-affinity depth (WAD) feature that debuted with the introduction of FPO. The GPFS quota system also helps to control the number of files and the amount of file data in the file system, which helps to manage storage.

Capacity planning of Hadoop clusters is easier when the data is stored in GPFS, IBM says. In HDFS, administrators need to carefully design the disk space dedicated to the Hadoop cluster, including dedicating space for the output of MapReduce jobs and log files. “With GPFS-FPO,” IBM says, “you only need to worry about the disks themselves filling up; there’s no need to dedicate storage for Hadoop.”



Other benefits include the capability to use policy-based information lifecycle management functions. That means third-party management tools, such as IBM’s Tivoli Storage Manager software, can manage the data storage for internal storage pools. The hierarchical storage management (HSM) capabilities that are built into GPFS mean you can keep the “hottest” data on the fastest disks. That feature is not available in plain-vanilla Hadoop running HDFS.

The shared-nothing architecture used by GPFS-FPO also provides greater resilience than HDFS by allowing each node to operate independently, reducing the impact of failure events across multiple nodes. The elimination of the HDFS NameNode also eliminates the single-point-of-failure problem that shadows enterprise Hadoop deployments. “By storing your data in GPFS-FPO you are freed from the architectural restrictions of HDFS,” IBM says.

The Active File Management (AFM) feature of GPFS also boosts resiliency by caching datasets in different places on the cluster, ensuring that applications can access data even when the remote storage cluster is unavailable. AFM also effectively masks wide-area network latencies and outages. Customers can either use AFM to maintain an asynchronous copy of the data at a separate physical location or use GPFS synchronous replication, which is what FPO replicas use.

Security is also bolstered with GPFS. Customers can use either traditional ACLs based on the POSIX model, or network file system (NFS) version 4 ACLs. IBM says NFS ACLs provide much more control of file and directory access. GPFS also includes immutability and appendOnly restriction capabilities, which can be used to protect data and prevent it from being modified or deleted.

You don’t have to be using IBM’s BigInsights (or its Platform Symphony offering) to take advantage of GPFS. The company will sell the file system to do-it-yourself Hadoopers, as well as those who are running distributions from other companies. And using GPFS allows you to use the wide array of Hadoop tools in the big data stack, such as Flume, Sqoop, Hive, Pig, Hbase, Lucene, Oozie, and of course MapReduce itself.

IBM added the FPO capabilities to GPFS version 3.5 in December 2012. Although it's POSIX compliant, GPFS-FPO is only available on Linux at this point. IBM says GPFS is currently being used in a variety of big data applications in the areas of bioinformatics, operational analytics, digital media, engineering design, financial analytics, seismic data processing, and geographic information systems.



By: Alex Woodie
Link: http://www.datanami.com/datanami/2014-02-18/what_can_gpfs_on_hadoop_do_for_you_.html

Monday, February 17, 2014

This tiny chip makes the Internet four times faster

 
The race is on to build a faster, better Internet. While Google is working on bringing super-high-speed connections to homes in select cities, IBM is working on a technology that could make the Internet faster everywhere.

It has created a new chip that beefs up Internet speeds to 200 to 400 gigabits per second, about four times faster than today's speeds, IBM says. Plus it sucks up hardly any power.

At this speed, a 2-hour ultra-high-definition movie (about 160 gigabytes) would download in a few seconds. It would only take a few seconds to download 40,000 songs, IBM says.
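
The claim is easy to check with a little arithmetic (ignoring protocol overhead): a 160 GB file is 1,280 gigabits, so even at the low end of the quoted range it transfers in a handful of seconds.

```python
# Rough transfer-time check for a 160 GB movie at the quoted link rates.
file_bits = 160 * 8e9              # 160 gigabytes expressed in bits
for gbps in (200, 400):
    seconds = file_bits / (gbps * 1e9)
    print(f"at {gbps} Gb/s: ~{seconds:.1f} seconds")   # ~6.4 s and ~3.2 s
```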

The chip fits into a part of the Internet that runs between data centers, not your computer or home router. 

The latest version of the chip is only a prototype right now, so it will be a while before it gets installed and the Internet gets better.

However, IBM says it has already signed on a customer for an earlier version of the technology, a company called Semtech. Semtech makes a device that converts analog signals (like radio signals) to digital signals that can be piped across the Internet.

Equally interesting is that IBM says it will manufacture the chip for the Semtech deal in the U.S. at its semiconductor fab in East Fishkill, N.Y.

That's of note because there's been speculation that IBM may be looking for a buyer for its semiconductor manufacturing unit. Breakthrough technology like this could either help the unit grow revenues, allowing IBM to keep it, or allow IBM to sell it for a higher price.



By: Julie Bort
Link: http://www.businessinsider.com/ibm-chip-makes-the-internet-faster-2014-2

Up Close and Personal With IBM PureApplication PaaS

The converged infrastructure value proposition, by now, is pretty evident to everyone in the industry. Whether that proposition can be realized is highly dependent on your particular organization and specific use case.

Over the past several months, I have had an opportunity to be involved with a very high-profile pilot, with immovable, over-the-top deadlines.  In addition, the security requirements were downright oppressive, and necessitated a completely isolated, separate environment. Multi-tenancy was not an option. 

With all this in mind, a pre-built, converged infrastructure package became the obvious choice. Since the solution would be built upon a suite of IBM software, they pitched their new PureApplication system. My first reaction was to look at it as an obvious IBM competitor to the venerable vBlock. But I quickly dismissed that, as I learned more. 

The PureApplication platform is quite a bit more than a vBlock competitor. It leverages IBM’s services expertise to provide a giant catalog of pre-configured multi-tiered applications that have been essentially captured, and turned into what IBM calls a “pattern”. The simplest way I can think of to describe a pattern is like the application blueprint that Aaron Sweemer was talking about a few months back. The pattern consists of all tiers of an application, which are deployed and configured simultaneously, and on-demand.

As an example, if one needs a message broker app, there’s a pattern for it. After it is deployed (usually within 20-30 mins.), what’s sitting there is a DataPower appliance, web services, message broker, and database. It’s all configured, and ready to run. Once you load up your specific BAR files, and configure the specifics of how inbound connections and messages will be handled, you can patternize all that with script packages, so that next time you deploy, you’re ready to process messages in 20 minutes.  If you want to create your own patterns, there’s a pretty simple drag and drop interface for doing so. 


I know what you’re thinking. . . There are plenty of other ways to capture images, vApps, etc. to make application deployment fast. But what PureApp brings to the table is the (and I hate using this phrase) best-practices from IBM’s years of consulting and building these solutions for thousands of customers. There’s no ground-up installation of each tier, with the tedious hours of configuration, and the cost associated with those hours. That’s what you are paying for when you buy PureApp. 

Don’t have anyone in house with years of experience deploying SugarCRM, Business Intelligence, Message Broker, SAP, or BPM from the ground up? No problem. There are patterns for all of them. There are hundreds of patterns so far, and many more are in the pipeline from a growing list of global partners. 

The PureApplication platform uses IBM blades, IBM switching, and IBM V7000 storage. The hypervisor is VMware, and they even run vCenter. Problem is, you can’t access vCenter, or install any add-on features. They’ve written their own algorithms for HA, and some of the other things that you’d expect vCenter to handle. The reasoning for this, ostensibly, is so they can support other hypervisors in the future. 

For someone accustomed to running VMware and vCenter, it can be quite difficult to get your head around having NO access to the hosts, or vCenter to do any troubleshooting, monitoring, or configuration. But the IBM answer is, this is supposed to be a cloud in a box, and the underlying infrastructure is irrelevant. Still, going from a provider mentality, to an infrastructure consumer one, is a difficult transition, and one that I am still struggling with personally. 

The way licensing is handled on this system is simple: you can use all the licenses for Message Broker, DB2, Red Hat, and the other IBM software pieces that you can possibly consume with the box. It’s a smart way to implement licensing. You’re never going to be able to run more licenses than you “pay for” with the finite resources included with each system. It’s extremely convenient for the end user, as there is no need to keep up with licensing for the patternized software.

Access to the PureApp platform is via the PureApp console, or CLI. It’s a good interface, but it’s also definitely a 1.x interface. There is very extensive scripting support for adding to patterns, and individual virtual machines. There are also multi-tenancy capabilities by creating multiple “cloud groups” to carve up resources.  There are things that need to be improved, like refresh, and access to more in-depth monitoring of the system.  Having said that, even in the past six months, the improvements made have been quite significant.  IBM is obviously throwing incredible amounts of resources at this platform. Deploying patterns is quite easy, and there is an IBM Image Capture pattern that will hook into existing ESXi hosts to pull off VM’s to use in Pure, and prepare them for patternization.


Having used the platform for a while now, I like it more every day. A couple of weeks ago, we were able to press a single button and upgrade firmware on the switches, blades, ESXi, and the V7000 storage with no input from us. My biggest complaint so far is that I have no access to vCenter to install things like vShield, backup software, monitoring software, etc. But again, it’s just getting used to a new paradigm that’s hard for me. IBM does have a monitoring pattern that deploys Tivoli, which helps with monitoring, but it’s one more thing to learn and administer. That said, I do understand why they don’t want people looking into the guts on a true PaaS.

Overall, I can say that I am impressed with the amount of work that has gone into building the PureApplication platform, and am looking forward to the features they have in the pipeline. The support has been great so far as well, but I do hope the support organization can keep up with the exponential sales growth. I have a feeling there will be plenty more growth in 2014. 



By: Brandon Riley
Link: http://www.virtualinsanity.com/index.php/2014/02/10/up-close-and-personal-with-ibm-pureapplication-paas/

Server market realignment

The server market is in the midst of a radical realignment, the likes of which have not been seen since the shakeout of the 1980s that saw most of the minicomputer makers, including Prime Computer, Data General and Digital Equipment Corp., disappear, devastating the Boston high tech corridor. And while the writing has been on the wall for some time, this major industry shift promises to happen much faster than that one.

[Photo caption: IBM System x General Manager Adalio Sanchez speaking at an IBM event in Beijing on January 16, 2014, to debut the company’s latest x86-based servers. IBM has announced plans for Lenovo to acquire IBM’s x86 server business for $2.3 billion.]

The first major shock came to the market last month, when IBM announced an agreement to sell its System x servers, x86 network switches and other x86-based products to Lenovo, continuing IBM’s transition into a software and services provider. While internal sources say that the sale, which includes the transfer of up to 6,700 IBM employees to the commodity system maker, will take several months to complete, this announcement definitely points to the future of x86 hardware.

Actually, the commoditization of x86 has been ongoing for several years and is well under way. It started with the invention of hyperscale by the big Web service companies, including Yahoo, Google, Amazon, and Facebook. These companies buy huge quantities of standardized white box servers direct from Taiwan and China for their mega-data centers, run them hard in highly automated environments and, when something breaks, throw it away and replace it with a new box. But even before that, the seeds of commoditization were sown by the major traditional players themselves when they handed manufacturing of their servers over to the Taiwanese. Essentially, they created their own replacements.

That arrangement worked for them as long as the hardware still required lots of attention, expensive built-in management software, and constant optimization fine tuning to handle the compute loads. But in the last decade three things have changed. First, Moore’s Law has driven compute power and network speed to the point that detailed optimization is no longer necessary for most compute loads. Second, the management software has moved to the virtualization layer. The result of these two trends is that increasingly the focus of IT organization attention is moving up the stack to software, and hardware is taken for granted. After 67 years, the techies are finally tiring of fiddling constantly with the hardware.

Third, increasing amounts of the compute load are moving steadily to the cloud. Companies that always had to buy extra compute to support peak loads now can move those applications into a hybrid cloud, size their hardware for the average load and burst the peaks to their public cloud partner. As those companies gain a comfort level with their public cloud service providers, they will start moving entire compute loads, particularly new Web-based applications such as big data analysis that have a strong affinity to the public cloud, entirely to those providers, in many cases by subscribing to SaaS services.

Trend toward standardization

 
The result of this is that the underlying hardware is becoming highly standardized, and the focus of computing is moving to software and services. Under the onslaught of hyperscale and cloud computing, the market for the traditional vendors is decreasing, a trend that will accelerate through this decade. And the market is shifting from piece-parts to converged systems as customers seek to simplify their supply chains and save money. As Wikibon CTO David Floyer points out, the more of the system that can be covered by a single SKU, the more customers can save. The hardware growth for both IBM and HP is clearly in their converged systems, and their differentiation increasingly comes from what they provide above the virtualization layer in middleware and applications. The expansion of virtualization from servers to the software-led data center will only drive this trend faster.


Open source hardware is beginning to appear in the market and can be expected to become commonplace over the next five years as the big Asian white box makers adopt them as the next step in driving cost out of the server manufacturing process.

The clear message for x86 server vendors is either to drive cost out of their hardware business and become commodity providers on the level of the Taiwanese while developing differentiation through higher level software running on top of those commoditized boxes or get out of the x86 hardware business entirely and source their servers from a commodity provider. IBM has clearly chosen the latter course with its sales of System x to Lenovo along with the creation of a close partnership with the Chinese commodity hardware manufacturer.

IBM’s strategy — partner with Lenovo

This is the right strategy for both companies. Since buying IBM’s PC manufacturing business a decade ago, Lenovo has proven itself as a quality commodity electronics maker, in the process passing HP last year to become the number one PC vendor in worldwide sales. IBM, meanwhile, is a highly creative company with a huge R&D budget that is betting its future on leading-edge areas including big data processing and analysis, business cloud services worldwide, and Watson.

The close partnership should leverage the very different strengths of both companies to create products that benefit from both, particularly in the IBM PureSystems line of integrated systems. Meanwhile Lenovo is likely to enter the hyperscale market once it has brought its manufacturing and marketing to bear on the IBM System x line. It also will certainly continue to sell its rebranded System x servers into the traditional business and governmental markets and can be expected to field its own x86-based converged system line probably in partnership with IBM. Since both companies will be profiting from the relationship, regardless of whose brand is on the box, they both will have strong business reasons to maintain a close partnership into the future.

IBM is in the market position it is in today in large part because of the visionary leadership of CEO Louis V. Gerstner Jr., IBM’s head for a decade through much of the 1990s and early 2000s. He foresaw the industry changes we are experiencing today, at least in their general form, and espoused the strategy of transforming IBM from a hardware giant to a software and services company. In retrospect this was prophetic, and while obviously nobody in 1993 could have anticipated the impact of cloud computing and in Gerstner’s time “services” mostly meant consulting, he moved IBM in a direction that puts it in a strong position today to capitalize on the burgeoning cloud services market. And his successors, Samuel J. Palmisano and Virginia M. Rometty, have continued to move IBM forward and make the hard decisions sometimes necessary for IBM’s transformation.

HP at the crossroads

But what about the other big x86 server vendors, who did not have the good fortune to have a visionary at their helm in the 1990s? HP in particular seems to have been searching for a path forward in recent years with its parade of short-lived CEOs. Carly Fiorina certainly had a strong vision, but unfortunately it proved to be the wrong one for the company.

After she left, HP suffered from a revolving door at the top. Mark Hurd was the only CEO to last long enough to create a vision for the company’s future, but he seemed mainly to see “more of the same”. Meg Whitman has been in charge for nearly two and a half years now and seems to have stabilized the company, but it is also suffering from market shrinkage. Its answer so far has been to bring forward some innovative hardware, but large parts of the company outside converged systems, Moonshot and storage seem to be going forward blindly, producing more of the same with no regard for the reefs ahead, and HP’s financial results in recent quarters have shown the result.

HP needs to make up its mind, and fast. In its first two decades it was a highly creative, if sometimes chaotic, company producing leading-edge products including some of the first servers and desktop printers. It really was the Apple Computer of its day. But it seems to have lost much of that, and today it buys more innovation than it produces in house, Moonshot notwithstanding.

The problem is that while it has some very innovative products, they are all one-offs, and large parts of the company appear to be drifting. Decisions seem to be tactical rather than strategic. The PC group, for instance, is obviously floundering. At one time HP was almost as much a consumer company, with its desktop printers and PCs, as it was a business-to-business vendor. It has neglected that part of its business, which is a mistake.

HP seems to be moving in the direction of becoming a U.S.-based commodity hardware supplier. If that is what its leadership wants, then it should embrace that completely and start competing on price in all its markets while driving cost out of its processes at all levels. If it wants to return to its roots as a highly creative company, then it should start building on products like Moonshot and revitalize its consumer and business market mindshare with new, creative electronic products that can create new markets. It cannot do both — no company can.



By: Bert Latamore

IBM successfully transmits text using graphene-based circuit

Big Blue confident it can use fragile material within smartphones and tablets.


IBM has successfully transmitted a text message using a circuit made out of graphene, as the firm shows the potential of carbon-based nanotechnology.

Graphene consists of a single layer of carbon atoms packed into a honeycomb structure. The low-cost material is an excellent conductor of electricity and heat, which makes it ideal for use in smartphones and tablets, as data can be transferred faster while saving power and cost.

The barrier to using graphene within integrated circuits is its fragility.  But IBM believes it has found a way to compensate for this weakness by using silicon as a backbone for circuits.

The firm created an RF receiver using three graphene transistors, four inductors, two capacitors, and two resistors. These components were packed into a 0.6 mm2 area and fabricated in a 200mm silicon production line.

Big Blue’s scientists were able to send and receive the digital text “I-B-M” over a 4.3 GHz signal with no distortion. The firm claims performance is 10,000 times better than previous efforts and is confident graphene can now be integrated into low-cost silicon technology.

The firm said applications could involve using graphene within smart sensors and RFID tags to send data signals at significant distances.

“This is the first time that someone has shown graphene devices and circuits to perform modern wireless communication functions comparable to silicon technology,” said Supratik Guha, director of physical sciences at IBM Research.


By: Khidr Suleman

Tuesday, January 28, 2014

New IBM Kenexa talent suite taps Big Data to energize today's workforce

IBM today announced the software-as-a-service (SaaS)-based IBM Kenexa Talent Suite, which allows Chief Human Resources Officers (CHROs) and C-suite executives to gain actionable insights into the deluge of data shared every day by their workforce. As a result, organizations can now streamline, modernize and add precision to hiring practices, increase workforce productivity and connect employees in ways that impact business results.

Organizations around the world today are on a mission to identify and hire top talent. By hiring precisely the right employees and then arming them with powerful social tools, businesses can more effectively manage and develop their workforce and put them in a position to succeed.

With the IBM Kenexa Talent Suite, HR professionals can look at large volumes of employee data – such as work experience, social engagement, skills development and individual interests – to identify the qualities that make top performers successful. Organizations and teams can then use those models to pursue candidates through additional targeted social marketing on social recruiting sites, where job seekers matching the profile are automatically connected with opportunities matching their skills. 

Customers can accelerate the onboarding and the integration of new hires through IBM Connections capabilities. This helps employees share information and find the right experts to accelerate learning and increase productivity and engagement, while at the same time providing a way for leaders to more effectively manage their teams. Through analytics and reporting, line of business leaders can better understand emerging employee trends and more effectively manage each individual's career path in areas like skill attainment, performance appraisals, compensation, succession planning and more.

"We know people are the lifeblood of an organization, and business success on today's stage requires not just talent but social capabilities that can energize, empower and nurture each team member so they can reach their full potential," said Craig Hayman, General Manager, Industry Cloud Solutions, IBM. "By combining social, behavioral science and analytics in the cloud, we give businesses a clear path to empower their most valued asset – employees." 

Interested customers can complement the Suite with Watson Foundations, a comprehensive, integrated set of Big Data and Analytics capabilities that enable clients to find and capitalize on actionable insights. Watson Foundations provides the tools and capabilities to tap into relevant data – regardless of source or type – and apply a full range of analytics to gain fresh insights in real time, securely across an enterprise.

Using Watson Foundations, customers will be able to conduct a deeper level of analysis on key workforce-related data, identify trends within the workforce, predict future trends and proactively take action. Executives can also look at the profiles and work performance of their top employees and determine the appropriate type of rewards needed to keep them engaged.
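As an illustration of the kind of predictive workforce analysis described above, here is a minimal sketch; it is not Watson Foundations or Kenexa code, and the employee attributes and data are entirely invented. It fits a simple model to estimate which employees are most likely to leave, the sort of signal HR could use to prioritise retention actions.

# Hypothetical sketch of predictive workforce analytics.
# NOT IBM product code; column names and figures are invented for illustration.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression

# Invented employee records: tenure (years), engagement score (0-100),
# last performance rating (1-5), and whether the employee later left (1) or stayed (0).
df = pd.DataFrame({
    "tenure_years":       [0.5, 1.2, 3.0, 4.5, 6.0, 2.1, 0.8, 7.5],
    "engagement_score":   [42,  55,  78,  81,  90,  47,  38,  85],
    "performance_rating": [2,   3,   4,   4,   5,   3,   2,   5],
    "left_company":       [1,   1,   0,   0,   0,   1,   1,   0],
})

X = df[["tenure_years", "engagement_score", "performance_rating"]]
y = df["left_company"]
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0, stratify=y)

model = LogisticRegression().fit(X_train, y_train)

# Estimated probability that each held-out employee leaves; a real system would
# feed scores like these into retention, reward or development planning.
print(model.predict_proba(X_test)[:, 1])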

According to an upcoming IBM C-Suite study that surveyed 342 CHROs representing 18 industries, many businesses are not taking full advantage of the insights delivered by workforce big data and analytics. The study found that just over half of organizations are using workforce analytics, with far fewer applying predictive analytics to optimize decision making and outcomes in areas such as sourcing and recruiting (7 percent), employee engagement and commitment (9 percent), talent development (10 percent) and retention (13 percent).

The CHRO study also found that human resources executives are in the early stages of applying social approaches within the organization. Currently, 66 percent are regularly using social tools for their recruiting efforts, but only 41 percent are using them for learning and 31 percent for knowledge sharing.

Today leading businesses such as AMC are benefiting from IBM talent management software. AMC, one of the world's premier entertainment companies, uses recruitment technologies from IBM to gain a deep understanding through data analytics of what it takes to succeed at the organization. AMC then uses that knowledge to attract candidates who are more likely to succeed once they're hired.   

"Harnessing the power of data gives us a better picture of what top talent looks like in our industry. IBM's talent management solutions allow us to use data in new ways so we can make better informed decisions that have a greater impact on our business," said Heather Jacox, Director, Diversity, Recruitment & Development at AMC.

The IBM Kenexa Talent Suite includes the following:
  • Talent Acquisition: Includes recruitment, skill and behavioral science-based assessments and onboarding. These integrated functions are designed to provide a deep understanding of what the best talent looks like and then how to attract, hire and engage them.
  • Talent Optimization: Includes performance management, succession planning and compensation planning to empower and get the most out of employees.
  • Social Networking: Increases productivity with expertise identification and knowledge discovery – connecting employees and accelerating the time to productivity.


By: PR Newswire
Link: http://cloudcomputing.ulitzer.com/node/2942997

lundi 20 janvier 2014

IBM announces revolutionary System x generation with modular design, 12 TB flash on memory bus

IBM today announced a complete redesign of its System x x86 server family, featuring up to 12.8 Tbytes of NAND flash directly on the memory bus of the server and a modular design that allows users to upgrade the server simply by replacing plug-in “Compute Books”. This sixth generation of System x has six core reference architectures, including one for SAP HANA, and is designed to facilitate the virtualization of ERP and other large, core enterprise-level applications for delivery through private and hybrid clouds. The scalable design can also reduce acquisition costs by up to 28 percent compared with competing Xeon-based x86 systems, IBM says.

The announcement includes a new System x3850 M4 BD storage server, a two-socket rack server supporting up to 14 flash and/or disk drives delivering up to 56 Tbytes of high-density storage. This server, which combines compute with storage, is specifically designed for large build-out architectures such as Hadoop big data installations, says Stuart McRae, IBM System x high-end business line manager.

It also includes the new IBM FlashSystem 840, providing double the bandwidth and performance – 1.1M IOPS – of its predecessor, the FlashSystem 820. It supports up to 48 Tbytes of usable capacity in a 2U unit with IBM Microlatency technology that cuts data access times to microseconds. Designed to support big data, it provides actionable insights from real-time data analytics faster than its predecessors. It also features a new management GUI and datacenter-optimized features such as hot-swap components and concurrent code load, enabling fast installation and easier management.

Virtualize your ERP for the cloud

“The System x6 is the first server family that’s been effectively designed from the ground up to incorporate flash storage,” McRae said. “Until now, flash storage has been kind of an add-on – you add on a PCI card to the server. This is integrating flash storage on the memory bus, the highest speed bus in the system, and making that available as a block storage device, that looks like any other block storage device to the application.”
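To make “looks like any other block storage device to the application” concrete, here is a minimal sketch assuming the memory-bus flash is surfaced by the operating system as an ordinary block device. The device path is hypothetical and would depend on the driver in use; on a real system, raw access would also typically require root privileges.

# Minimal sketch: a block device backed by memory-channel flash is read and written
# with the same ordinary file I/O an application would use against any SSD or disk.
# The device node below is hypothetical.
import os

DEVICE = "/dev/example_flashdimm0"   # hypothetical device node
BLOCK_SIZE = 4096

fd = os.open(DEVICE, os.O_RDWR)
try:
    # Write one 4 KiB block at offset 0, then read it back -- the same system
    # calls a filesystem or database would issue against a conventional SSD.
    payload = b"x" * BLOCK_SIZE
    os.pwrite(fd, payload, 0)
    data = os.pread(fd, BLOCK_SIZE, 0)
    assert data == payload
finally:
    os.close(fd)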

By putting the flash on the memory bus, it becomes the fastest flash storage on the market. And these new systems can support a lot of it.

“It looks exactly like a DDR3 DIMM,” he said. “These systems are going to have 92 DIMM sockets in a four-way, so it can support up to 6 terabytes of system memory on a four-way, or 12 terabytes in an eight-way. That’s three times as much memory as is available in a standard eight-way server today.”

This has major implications both for big data analytics and for virtualization of very large enterprise applications such as ERP. “If you wanted to cache a five terabyte database in the server for analytics applications, you can configure that as a cache.” So for instance, in the SAP HANA appliance, a large amount of that space is used for RAM, allowing users to have a very large data set in HANA while still providing large amounts of flash for staging data. And by spreading a HANA or similar installation across several servers, it can support very large databases on the memory bus while providing resilience and redundancy in case of a hardware or power failure in any one server.

Before the x6 generation, users were memory-constrained in what they could virtualize. “If you had a four-way server only supporting one or two Tbytes of memory, it’s hard to virtualize a terabyte application,” McRae said. “Now they can do that on these new platforms.” This opens the way for virtualization of ERP systems, even Oracle Red Stacks, that today run on bare metal, allowing customers to realize the advantages of server virtualization and deliver services based on their ERP and other core systems to users via their private clouds.

“I want to move my large databases to a cloud model. I want to move my SAP HANA to a cloud model. I want to move my big ERP applications. I don’t want to have to re-architect it to a new architecture, I want to move it now, and this provides the infrastructure to do that,” McRae said.

“Booking” your memory, flash, CPUs

The other part of the x6 revolution is the new parallel modular design that IBM calls “Compute Books”. Each server is made up of these plug-and-play modules, each with its own processor and memory. These plug into a backplane that provides power and IO.

This means that upgrading a server or replacing a failing unit is simply a matter of unplugging one or more modules and plugging in replacements. Then a simple restart brings the new hardware online, without requiring a forklift replacement and all the management that goes along with that. McRae says IBM estimates that the core server will support at least the next three generations of processor and memory/flash technology.

“Once you’ve architected the server and put your big applications on it, two years from now, when you say ‘Scotty, I need more power,’ you just pull the Compute Books out and plug the latest, greatest ones in. It’s all transparent to the back-end IO.” That provides a great deal of investment protection across generations.

And while it does require a reboot of the upgraded server, “because it’s a virtualized environment, and now we’ve virtualized these large applications, you have no application downtime.”

Six reference architectures

As part of the announcement, IBM also announced six pre-architected versions that come with software installed: an SQL data warehouse, a Hyper-V appliance running on Windows Server, an SAP HANA version, an SAP Business Suite version, a VMware vCloud version, and finally a version running DB2 with BLU Acceleration on Linux. The servers come with SUSE or Red Hat Linux, or with Microsoft Windows Server. While IBM does not have a reference architecture for it, Oracle includes System x on its compatibility list, so users can also run an Oracle Red Stack on the new System x. And because of the higher-end processors and the large amounts of memory and flash storage that the new generation supports, customers can decrease the number of licenses they need, saving significant cost, particularly with Oracle. System x also runs IBM Watson for users who want it in-house rather than consuming it from IBM’s cloud.
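A rough back-of-the-envelope sketch of that licensing argument, using entirely invented server sizes and workload figures (not IBM or Oracle numbers): when a workload is memory-bound, more memory per server means fewer servers to buy, and therefore fewer licensed cores for the same workload.

# Hypothetical sketch of why bigger memory per server can cut per-core licensing.
# All figures are invented for illustration.
import math

workload_memory_tb = 10            # memory footprint the database/ERP needs

old_server = {"memory_tb": 1, "cores": 40}   # smaller previous-generation box
new_server = {"memory_tb": 6, "cores": 60}   # larger x6-class four-way (illustrative)

def licensed_cores(server, memory_needed_tb):
    # A memory-bound workload forces you to buy (and license) whole servers
    # until their combined memory covers the footprint, even if cores sit idle.
    servers = math.ceil(memory_needed_tb / server["memory_tb"])
    return servers * server["cores"]

print("Old generation:", licensed_cores(old_server, workload_memory_tb), "licensed cores")  # 400
print("x6-class size: ", licensed_cores(new_server, workload_memory_tb), "licensed cores")  # 120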


By: Bert Latamore