vendredi 6 décembre 2013

IBM's Big Plans for Cloud Computing



Ambition is an impressive thing, particularly when a desire for world domination is combined with existential survival. 

Four heavyweight tech companies are translating that ambition into investments in their cloud computing services: IBM, Microsoft, Amazon and Google are all expected to spend more than $1 billion annually on their global networks in the coming years.

Even more important, however, is that all of these companies are developing, through their cloud services, the knowledge of how to run truly huge Internet-based computing systems — systems that may soon be nearly impossible for other companies to match. Any other company thinking of entering the business, such as China’s Tencent, will need to move fast or come up with something revolutionary.

IBM’s response? You ain’t seen nothing yet. 

In 2014, the company will make a series of announcements that will make all challengers shiver, according to Lance Crosby, chief executive of SoftLayer, a cloud computing company that IBM purchased earlier this year for $2 billion.

More than 100 products, like e-commerce and marketing tools, will be put inside the cloud as a comprehensive series of offerings for business, Mr. Crosby said. So will another 40 infrastructure services, like big data analysis and mobile applications development. 

“It will take Amazon 10 years to build all of this,” he said. “People will be creating businesses with this that we can only dream about.” 

Maybe. IBM already claims to lead in cloud computing, with $1 billion in revenue in the past quarter alone. That’s impressive, though the figure includes software revenue that used to be attributed to a different category at the company, and some of it is being generated by companies IBM recently acquired, including SoftLayer.

On many other fronts, such as the number of machines it operates, the number of major companies running big parts of their business on IBM’s public cloud, and the new technology it appears to have built for cloud computing, IBM is arguably the laggard among the top four providers. As the SoftLayer purchase indicates, it has had to buy big for what the others have mostly grown internally.

What IBM does have, however, is a lot of money and resources it plans to throw at cloud computing. And given its near-death experience in the early 1990s, when it missed a major technology shift, the company may also have the stomach for swift change.

The big push will begin in February, Mr. Crosby said, with a formal inauguration of its new cloud offerings by Virginia M. Rometty, IBM’s chief executive. 

IBM has also deployed 400 employees to OpenStack, an open source software project with more than 200 corporate members that competes with much of the proprietary cloud technology of Amazon, Microsoft and Google. This looks much like IBM’s involvement a decade ago in Linux, which helped that open source operating system win corporate hearts and minds. 

In addition to the consolidation of online software and services, Mr. Crosby said, IBM is “absolutely” looking to sell its big mainframe computing capabilities as a cloud-based service. It also plans to draw on the insights it has gained from building and licensing technology used by Microsoft in the Xbox gaming console, and Google in its own network operations, he said, and will make more acquisitions for the cloud business.

“We make the processors in Google’s server racks,” he said. “We understand where gaming is going. Before I got here, I thought this was a big old tech company, too; I didn’t see all of the assets.” 

It’s true that IBM is big. It is also a tech company, and undeniably 102 years old, which makes it both a survivor and a creature of successful processes. Mr. Crosby has two bosses between him and Ms. Rometty, and numerous executive vice presidents above him who may agree on the eventual future but have their own views about the speed with which they’ll get there.


By: Quentin Hardy
Link: http://mobile.nytimes.com/blogs/bits/2013/12/04/ibms-big-plans-for-cloud-computing/

mardi 3 décembre 2013

IBM storage GM: Flash impacts everything

Ambuj Goyal, a 31-year veteran of IBM, became the company's top storage executive in January of 2013 when he took over as general manager (GM) of IBM's System Storage & Networking business. Goyal's previous roles at IBM included GM of global development and manufacturing for the Systems and Technology Group, GM of IBM information management software, GM of workplace, portal and collaboration software, GM of solutions and strategy for software, vice president of services of software, and director of computer sciences. With his first year as the IBM storage GM winding down, we spoke with him about IBM's flash storage, how he sees the storage world changing, and where he sees it going in the near future.

What were your main goals for improving IBM storage when you took over as GM early this year?

Ambuj Goyal: There were a few things I wanted to change. We had a lot of products. We were doing a battle on speeds and feeds and capacity. What I tried to do was say, 'This is not a storage battle, this is a data battle.' Clients are looking at how they manage data. Their different workloads have different data management needs. When people think about data, the first thing they say is, 'Don't lose my data.' The second thing they say is, 'My data should be available to my workload.' And the third thing they say is, 'When I need capacity or performance, give it to me.' So we changed the focus to data management and that has rationalized the product portfolio.

Does that mean you have to get rid of products?

Goyal: When you get into speeds, feeds and capacity, everything feels like it is overlapping.
Let me give you three scenarios of data management. First is, I have business-critical data that I cannot survive without -- I cannot load my ledger, I cannot report earnings, whatever it is. I have to make sure the data is available and secure. The second scenario is, I have lots of data, let me understand the data and leverage data. Let me start quick and add value -- tell me what I want to keep and what I can throw away and what I can put into cost-effective scenarios and where I need real-time analytics. The third scenario is I have a clutter of data, I have lots of products, and I want to start a new project quickly to leverage all my existing data on depreciated capital.

In the first scenario, which is business-critical data, we lead with multi-copy data management. It's about how you access a created copy of the data [and] how fast you can create a copy. In the past it was called batch processing. Now it's called creating an analytics copy so you can do analytics associated with it. Our DS8000 family is useful for copy management of business-critical data.

In the world of start quick and add value, we have the world's most popular virtualization platform. Originally that was SVC (SAN Volume Controller) and we have significantly improved it over the last three or four years to create things like non-disruptive integration in a data center. So when you put it in, it takes a short time for applications to run, and it gives you a huge amount of utilization improvement through data reduction. And this product line is now called Storwize.

In the third scenario, where you have lots of data, the cloud, big data, people are looking for amazing capacity and a grid-scale architecture. I want to make sure I have so many applications running, yet it should automatically self-adjust and provision itself so I don't have to get humans involved. But it still has things like encryption of critical data. That is the XIV family. The XIV family is now the most used in the OpenStack, big data, cloud and analytics scenarios.

Our strategy has shifted. We start with a workload, understand what the need for that particular data is, and then lead with a solution rather than speeds and feeds. Now many of our clients have shifted from saying, 'Give me the best dollars per gigabyte,' to saying, 'I want to buy the right data architecture.'

Our sales team has really been well-educated about that, and that's why we are taking competitive share now.

Taking share? I haven't seen any numbers that indicate that.

Goyal: I don't know about IDC or [Gartner] Dataquest, they will publish the numbers. But I look at competitive changes. We are starting to get into many clients where the normal answer was, 'We are standardized on something, show me your value before we will even consider you' and we are starting to displace competitors.

In that sense we are starting to move forward. I'll give you an example. We just went into eBay. EBay is using our big data and cloud solutions for huge amounts of grid-scale data, and that's built on XIV.

Storage is more than 10 Gigabit Ethernet versus Fibre Channel versus 6 terabyte DASD (Direct Access Storage Device), or eMLC (Enterprise Multi-Level Cell) or SLC (Single-Level Cell) flash. Storage is about data management.

Where does flash fit in?

Goyal: Flash is impacting everything -- flash is not a product for us. Yes, we can sell a standalone product, but flash is leveraged behind Storwize and SVC, all-flash technology is leveraged in multi-copy data management scenarios with the DS8000, and flash is being leveraged in XIV for big data and cloud scenarios, as well.

We use flash to get the fastest ROI [return on investment] without any operational change in the data center. We just announced a DS8000 product line that is all-flash. There's a significant improvement in performance, a significant improvement for clients who want extremely consistent response time. An all-flash DS8000 is good because from an application perspective, the mainframe is using DS8000 and no software needs to change. You can get amazing response times and consistent response times with a reduction in floor space without changing a single line of software. If you roll in a new product with different APIs [application programming interfaces] and different management environments, then you will have to disrupt the data center and ROI will take a long time.  

What about the FlashSystem all-flash platform acquired from Texas Memory Systems?

Goyal: That goes into scenarios where clients say, 'I'm already doing things like data backup and replication and all the data loss prevention things in my software. All I want is the amazingly fast capability to access data.' In those scenarios we are seeing a huge interest in all-flash.

More than 1,000 organizations in six months have purchased FlashSystems, and we have exceeded 100 petabytes of flash.

What are the challenges your customers are seeing in backup?

Goyal: In backup, they want cheap and deep. For the data they want to access, it needs to be quick. They don't want to depend on a particular media. They say, 'Don't sell me tape, don't sell me flash, don't sell me a separate controller, I have a data management problem. I want to put away petabytes and petabytes of data, and then I want to tell you my recovery point and recovery time objectives, and then you decide the media and give me the cost associated with it.'

There are scenarios where we used to say the answer is a virtual tape library for backup if you only want disk, or real tape, or some other backup software scenario. Now we say [that] through a mechanism we are calling long-term file system, we can use any combination of tape and flash and disk. You tell us the capacity you need, we will give you the lowest capacity for dollars per gigabyte for storage and the best performance based on recovery point and recovery time objectives.
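To make that idea concrete, here is a deliberately tiny sketch in Java of the kind of policy logic Goyal is describing: the customer states a recovery time objective, and the software, not the customer, picks the cheapest media tier that can still meet it. The class, thresholds and tier choices below are invented for illustration; this is not IBM's LTFS-based implementation.

import java.time.Duration;

public class TierPicker {

    enum Media { FLASH, DISK, TAPE }

    // Pick the cheapest media tier that can still meet the stated recovery
    // time objective (RTO). The thresholds are made up for this example.
    static Media pickTier(Duration rto) {
        if (rto.compareTo(Duration.ofMinutes(5)) <= 0) {
            return Media.FLASH;   // near-instant restore required
        } else if (rto.compareTo(Duration.ofHours(4)) <= 0) {
            return Media.DISK;    // fast-enough restore at lower cost per gigabyte
        } else {
            return Media.TAPE;    // "cheap and deep" archival storage
        }
    }

    public static void main(String[] args) {
        System.out.println(pickTier(Duration.ofMinutes(2)));  // FLASH
        System.out.println(pickTier(Duration.ofHours(2)));    // DISK
        System.out.println(pickTier(Duration.ofDays(7)));     // TAPE
    }
}

A real policy engine would also weigh recovery point objectives, capacity and cost per gigabyte, but the point stands: the customer specifies objectives, and the system chooses the media.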

When storing data for archive -- think of an audit or a legal hold -- in those scenarios a combination of flash and tape works nicely. In a media asset management scenario where people need to play out movies, a combination of disk and tape is working nicely.

We don't want to have a separate media business associated with tape, disk or flash. I look at what problem I am trying to solve, and come up with the right software and media.

What is your cloud storage strategy?

Goyal: There are two ways to think about the cloud. One is being an arms supplier to people who are providing the cloud. Those can be MSPs [managed service providers], and there are lots of people building private clouds based on our XIV with a cloud option. The second way is through our SoftLayer offering.

Are you seeing any other storage trends in the market?

Goyal: Everything that we are doing is going open. Even when we do tape, the drives we put LTFS [Linear Tape File System] on can run on somebody else's libraries. Everything we are doing with respect to OpenStack, Cinder or Swift, we are not going to create proprietary APIs. Many clients say, 'We're stuck with proprietary APIs, and now our application is tied to the API associated with a proprietary vendor.' We want to be in a situation where we endorse open standards and win with execution.

So our strategy is to be open so people are not tied. Even with Storwize and SVC, you can put non-IBM storage behind it. Many of the flash vendors and our traditional competitors enable their storage behind our storage virtualization engine. I can change applications and change the storage associated with it because it's open [source]. I'm trying to endorse everything open. Just like I said flash is pervasive behind every architecture we are doing, open is also becoming pervasive. EBay would have never bought our XIV product without OpenStack support.

We want to win with execution, not by controlling your data center.



By: Sonia Lelii & Dave Raffo
Link: http://searchstorage.techtarget.com/news/2240209881/IBM-storage-GM-Flash-impacts-everything?src=5186535

Indian companies get on to cloud to manage workforce

How are leading Indian companies such as Bharti Airtel, HCL & Tata Motors using the cloud to make smarter appointments and increase productivity?



The biggest asset of any organisation is its internal consumer force, the workforce, and keeping this group well trained and motivated is now more important than ever. According to a recent announcement by IBM, 70 per cent of CEOs cite human capital as the single biggest contributor to sustained economic value.

Marrying data with employees’ qualities

To enhance the value of their human capital and to transform their workforce, leading Indian organisations such as Bharti Airtel Ltd, HCL and Tata Motors have turned to cloud-based social software. The aim of using IBM’s cloud-based social software is to combine industry-leading social business and analytics capabilities with human capital management offerings, giving organisations cloud-based tools to capture and analyse data shared by employees.

Bharti Airtel Limited, a leading global telecommunications company, was looking to implement evolved and proven methods to build organisational culture and employee engagement. By using Kenexa's employee engagement survey, which gives an accurate measure of employee opinions, Airtel is now able to understand the manageable factors that cause employees to be more productive, stay with the company longer and care more deeply about the work they do. As a result, leadership is able to use this feedback to change management strategy, align it with key cultural factors and ultimately improve business performance.

Right data to detect the right candidate!

Tata Motors, on the other hand, was looking for a service that would help it screen the right candidates for a particular position.
 
Tata Motors, a Fortune 500 and leading automobile player, has an internal job posting practice called Opportunities++.

The company teamed up with IBM to study job fit within the organisation by using an online system that also had the ability to screen candidates for the right position. The goal was to select candidates on various parameters, such as those who provide great customer service. Once Tata Motors pinpointed these characteristics, the company refocused its efforts on assessing and selecting the right people for open jobs.

“IBM is uniquely positioned to help organizations capture information, create insights and generate interactions that translate into real business value,” said Anmol Nautiyal, Director, Smarter Workforce, IBM India and South Asia. “By marrying powerful social and talent management offerings in the cloud, organisations can attract, develop and inspire their employees, which in the end help to ensure their growth and long-term business success.”

Major organisations in India usually hire once a year, picking up the best talent. However, with the country's growing youth population, the task gets more difficult by the day. To solve this issue, HCL, a leading global technology and IT enterprise, was looking to deploy a cloud-based recruitment system that would allow it to improve recruiter productivity, cycle time, quality of hire and fulfilment rate while at the same time reducing the cost of recruitment.

HCL was also looking to improve customer satisfaction through a better candidate and recruiter experience. Working with IBM, HCL replaced its homegrown Smart Recruit candidate/requisition tracking system with a new Talent Acquisition solution that automates its entire recruitment process.

Recruiting and then retaining trained staff has been a major challenge for organisations. With technology and data entering the space, organisations are empowered to make wiser decisions.

IBM’s Smarter Workforce initiatives help businesses capture and understand data and use those insights to empower their talent, manage expertise and optimise people-centric processes. They build on IBM’s $1.3 billion acquisition of Kenexa in 2012, which brought talent management, recruitment, compensation, engagement, leadership and assessment offerings.




By: Saloni
Link: http://e27.co/indian-companies-get-on-to-cloud-to-manage-workforce/

lundi 2 décembre 2013

Design: IBM's secret weapon in the coming SMACdown?

I’ve talked before about how Phil Gilbert – the former President and CTO of Lombardi Software who joined IBM when it bought his company – now has a role to develop a cross-company design practice in IBM. IBM Design is centred around a lab in Austin TX but with plans to spread wider. It’s the centre of excellence for IBM’s own take on Design Thinking, and is hoovering up design talent like you wouldn’t believe.

What I hadn’t realised – until I saw Gilbert present at last week’s Analyst Insights event – was the full extent to which IBM Design is really a kind of rediscovery of the company’s industrial design heritage.

Starting in IBM’s Software Group, Gilbert’s IBM Design group is applying IBM Design Thinking to existing and new products. This variant of Design Thinking – which itself might be roughly characterised as “an approach to design that’s focused first around the experience that the user has, rather than a product; and that takes an ‘outside-in’ approach to analysing problems and opportunities” – is being retooled in a way that enables IBM to scale the approach to very large teams and communities. The approach is being spread through IBM product teams through week-long intensive “designcamps”. The IBM Design group is focusing in particular on dramatically improving six areas of customer and user experience: find/install/setup, first use, everyday use, upgrade, API use, and maintenance.

Not content with spreading the religion amongst those building and improving products, IBM Design is also running distilled versions of its designcamps for senior executives in IBM’s Software Group. And Gilbert makes no secret of the fact that the ambition is to take IBM Design Thinking beyond the Software Group to other groups in IBM.

Well, this is all very nice. But so what?

The first part of why this is so important to IBM: one of the very legitimate ways that its competitors have in recent years been able to score points against IBM is to highlight how complicated its technologies are to navigate, implement and use. As Gilbert himself says: “too many users are working for our products; we want to turn this around.” From what I’ve seen of IBM Design’s work, it’s already started to have a pretty radical impact on the look and feel of some of IBM’s products, and on how intuitive they are.

The second part is more forward-looking. It relates to the ways in which technology vendors large and small are currently investigating and investing in Social, Mobile, Analytics and Cloud (“SMAC”) technologies and platforms for their customers, to augment the infrastructure platforms they already have and help to deliver on a Digital Enterprise vision. Every vendor has to have a story about how SMAC technologies affect them and how they’re taking advantage. This is all well and good; but the truth is that there’s a very real danger for enterprises as they embark on explorations of the new platforms being assembled for them.

The danger is that enterprises will slide into platform investments that bring *many* more moving parts and more integration points; and at the same time more control-point tussles between vendors with each trying to make sure that their own social front-end, or application development/design repository, or device management toolset, or whatever becomes the ‘master’ in the customer’s environment. Make no mistake, this will happen. Just as it always has when new business technology platforms have emerged.

This is where I think the power of IBM Design has the potential, possibly, to strengthen IBM’s strategic position. Note that IBM Design’s mission is to “design an IBM that works together, works the same, and ‘works for me’”.

It’s that last part that really resonates in today’s environment, I think. In Phil Gilbert’s own words:  “The only platforms that matter are the platforms in our customers’ organisations.” If this turns out to be more than words, then it will be very powerful indeed.

Of course, the proof of the pudding is in the eating. IBM Design is off to a great start, but it’ll be at least another year before we can say for certain what impact IBM Design Thinking is having on IBM’s business and its customers’ businesses.



By: Neil Ward-Dutton

IBM supercharges Power servers with graphics chips

IBM will support Nvidia GPUs in its Power servers, mainframes, and supercomputers starting next year.

IBM achieved a computing breakthrough when the Watson supercomputer outperformed humans on the game show "Jeopardy," but the company now wants to supercharge its high-end Power servers by tapping into graphics processors for the first time.
 
Starting next year, IBM will use Nvidia's Tesla graphics chips in servers built on its Power chips, which have been used in Watson and supercomputers like Sequoia, IBM said on Monday. 

Nvidia's Tesla graphics processors have been used alongside CPUs in some of the world's fastest supercomputers to accelerate technical computing. The addition of GPUs to Power servers would be new; previous servers were boosted by vector co-processors, FPGAs (field-programmable gate arrays) and other circuitry.

In addition to supercomputers, the combination of Power processors and Nvidia GPUs could speed up mainframes used for critical tasks like financial transaction processing.

The addition of Tesla GPUs to IBM's Power-based servers will help customers process and analyze data faster, said Sean Tetpon, an IBM spokesman, in an email. IBM plans to deploy Power-based rack servers with Nvidia's GPUs as early as 2014, Tetpon said.

The first servers could combine Tesla with IBM's upcoming 12-core Power8 chip, which will ship next year. IBM claims the Power8 chip is up to three times faster than the Power7, which was released in 2010 and is used in the Watson supercomputer.

In August, IBM unexpectedly announced that it would open up its Power8 architecture and start licensing intellectual property to third parties looking to build Power servers or components. IBM also established the OpenPower Consortium, whose members include Nvidia, Google, Tyan and Mellanox. Tyan will be the first company outside IBM to build a Power server.

IBM is also making it easier to plug co-processors like GPUs into Power8 servers. It is providing a connector called CAPI (Coherent Accelerator Processor Interface) to which third-party component makers can attach graphics cards, storage devices, field-programmable gate arrays, networking equipment or other hardware.

IBM already uses Nvidia GPUs in its System x servers, which use Intel's x86 processors. The Power8 server architecture is built around the PCI-Express 3.0 data-transfer standard, which is already used for GPUs in PCs and x86 servers.

IBM is also adding native support for Nvidia GPUs to its version of the Java Development Kit (JDK).

A lot of applications in distributed computing environments are written using Java, and Nvidia's GPUs will be able to process more mainstream applications in environments like Hadoop, said Sumit Gupta, general manager of Tesla Accelerated Computing products at Nvidia.
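As a rough illustration of what that means for ordinary Java code, the loop below is the kind of data-parallel, side-effect-free workload that a GPU-enabled JDK could in principle offload to a graphics processor. This is plain Java 8, not IBM-specific code, and whether any given loop is actually offloaded depends on the JDK and how it is configured.

import java.util.stream.IntStream;

public class SaxpyDemo {
    public static void main(String[] args) {
        int n = 1_000_000;
        float a = 2.0f;
        float[] x = new float[n];
        float[] y = new float[n];
        for (int i = 0; i < n; i++) { x[i] = i; y[i] = n - i; }

        // A data-parallel loop over primitive arrays with no shared mutable
        // state between iterations: the shape of work a GPU handles well.
        IntStream.range(0, n).parallel()
                 .forEach(i -> y[i] = a * x[i] + y[i]);

        System.out.println("y[0]     = " + y[0]);
        System.out.println("y[n - 1] = " + y[n - 1]);
    }
}

Each iteration touches only its own index, so the work can be split across thousands of GPU threads without coordination; loops with cross-iteration dependencies would not benefit in the same way.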

"This expands us into the broader general enterprise market," Gupta said.

Currently, Nvidia GPUs are used mostly to process scientific and math applications in supercomputers. IBM will support CUDA, Nvidia's proprietary parallel programming toolkit, in which code can be written for parallel execution across graphics processors.

IBM is "exploring many application areas" across its software portfolio that could be off-loaded to graphics processors, IBM's Tetpon said.

"Any existing or new compute applications that are developed with the NVIDIA CUDA programming model will be supported," Tetpon said.



By: Agam Shah
Link: http://podcasts.infoworld.com/d/computer-hardware/ibm-supercharges-power-servers-graphics-chips-231064

Graph500: IBM is #1 for Big Data Supercomputing

IBM has taken eight of the top 10 places on the latest Graph500 list, and it now builds the majority of the world’s best computers for processing big data. Big data is critical to IBM’s current strategy, and the Graph500 list ranks supercomputers on their ability to process huge amounts of it. The top three positions have been awarded to Lawrence Livermore National Laboratory’s Sequoia, Argonne National Laboratory’s Mira and Forschungszentrum Juelich’s (FZJ) JUQUEEN, all of which use IBM Blue Gene/Q systems.


Blue Gene supercomputers have ranked #1 on the Graph500 list since 2010, with Sequoia topping the list three consecutive times since 2012. IBM was also the top vendor on the most recent list, with 35 entries out of 160. Competitor Dell featured 12 times and Fujitsu seven.

The Graph500 was established in 2010 by a group of 50 international HPC industry professionals, academics, experts and national laboratory staff. There are five key areas the Graph500 tries to address with its benchmark: cybersecurity, medical informatics, data enrichment, social networks and symbolic networks. All of these fields process and analyze large amounts of data, which is why the Graph500 focuses on graph-based data problems, a foundation of most analytics work, and on the ability of systems to process and solve complex problems.

The name also comes from graph-type problems or algorithms, which are at the core of many analytics workloads, such as those for data enrichment. According to LLNL, a graph is made up of interconnected sets of data with edges and vertices; in a social media analogy it might resemble a map of Facebook, with each vertex representing a user and the edges the connections between users. The Graph500 ranking is compiled using a massive data set test: the speed with which a supercomputer, starting at one vertex, can discover all other vertices determines its ranking.
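At its core, that test is a breadth-first search. The toy Java sketch below (a six-vertex graph, nothing like the billion-edge, heavily tuned runs on a real Graph500 system) shows the "start at one vertex, discover all the others" idea the ranking measures.

import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import java.util.Queue;

public class TinyBfs {
    public static void main(String[] args) {
        // Adjacency lists for a tiny undirected graph with 6 vertices.
        int n = 6;
        List<List<Integer>> adj = new ArrayList<>();
        for (int v = 0; v < n; v++) adj.add(new ArrayList<>());
        int[][] edges = {{0, 1}, {0, 2}, {1, 3}, {2, 4}, {3, 5}};
        for (int[] e : edges) { adj.get(e[0]).add(e[1]); adj.get(e[1]).add(e[0]); }

        // Breadth-first search from vertex 0: discover every reachable vertex,
        // recording the vertex through which each one was first found.
        int[] parent = new int[n];
        Arrays.fill(parent, -1);
        parent[0] = 0;
        Queue<Integer> frontier = new ArrayDeque<>();
        frontier.add(0);
        while (!frontier.isEmpty()) {
            int v = frontier.remove();
            for (int w : adj.get(v)) {
                if (parent[w] == -1) {   // not yet discovered
                    parent[w] = v;
                    frontier.add(w);
                }
            }
        }

        for (int v = 0; v < n; v++)
            System.out.println("vertex " + v + " discovered via " + parent[v]);
    }
}

The Graph500 runs this kind of traversal over graphs with billions of edges and reports the result in traversed edges per second, which its creators argue is a better measure of data-intensive analytics work than the floating-point benchmarks behind the Top500.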

The rankings are geared toward enormous graph-based data problems, a core part of most analytics workloads. Big data problems currently represent a huge $270 billion market and are increasingly important for data-driven tech businesses such as Google, Facebook and Twitter. While the definition of what actually constitutes 'big data' continues to evolve rapidly, businesses and startups need to understand and unlock additional value from the data that is most relevant to them, no matter the size.



By: Hayden Richards