Wednesday, February 26, 2014

IBM BlueMix: PaaS Play, explained

With BlueMix, IBM gives customers a cloud path for legacy apps. Here's how SoftLayer, Cloud Foundry, and WebSphere tools fit in.

IBM is assembling a platform-as-a-service offering it has dubbed BlueMix: a combination of open source code, IBM software tools, and Big Blue's tried-and-true WebSphere middleware used by many of its oldest customers. In effect, it's investing $1 billion to give enterprise customers a path to move legacy systems into the cloud.

For enterprise users who want to move an application into the public cloud run by IBM's SoftLayer unit, the many components of IBM WebSphere middleware will be waiting as callable services behind a SoftLayer API. IBM acquired SoftLayer and its 700 employees last July and made its provisioning, management, and chargeback systems the core of its future cloud services.

Not so fast, you say. IBM's Blu Acceleration for DB2, Watson advanced analytics, Cognos business intelligence, and many versions of WebSphere run on IBM Power Systems servers, not the cloud's ubiquitous x86 servers.

Lance Crosby, CEO of IBM's SoftLayer unit, agrees that's the case. That's why Power servers are now being incorporated into the SoftLayer cloud, making it one of the few public clouds with a paired hardware architecture. Crosby declined to predict how many Power servers may be added or what percentage of the total they would represent. SoftLayer currently has about 150,000 x86 servers, IBM is adding 4,000 to 5,000 x86 servers a month to that number, and x86 will remain the majority by a wide margin, Crosby told InformationWeek.

"Power servers were never about volume. They're about more memory capacity and processing power" to handle enterprise ERP and database applications, which require large amounts of both, Crosby said.

In addition, IBM is making a broad set of its data analytics, Rational development tools and applications, such as Q9 security and Maximo inventory management, available on SoftLayer as software-as-a-service. Developers producing next-generation applications will have the option of using services from IBM's software portfolio that they're already familiar with, Crosby added. IBM Tivoli systems management software will also be made available, though no date was announced. Crosby said IBM will seek to get the bulk of its portfolio into the BlueMix PaaS by the end of the year.


Although there's a strong legacy component, IBM says the $1 billion figure comes into play because that's what it's spending to break Rational tools, WebSphere middleware, and IBM applications down into services and make them available via SoftLayer. Part of that sum also covers its acquisition of the database-as-a-service firm Cloudant.

About two dozen tools and pieces of middleware are available in the beta release of BlueMix, with 150 to 200 products expected once the cloud-enablement conversion process is done.

Much of the $1 billion will be needed to convert IBM's huge software portfolio, currently sold under the packaged and licensed model, into a set of "composable services" that developers can assemble into new applications. Only a fraction of that portfolio is ready with BlueMix's beta launch on Feb. 24. Crosby said the way IBM would have handled such an announcement in the past was to wait until it had finished converting distinct products or product sets before going public. But that's the old enterprise way of doing things.

IBM is trying to adopt more of a "born on the web," agile development approach, where software gets changed as soon as an update is ready and production systems have short upgrade cycles. "Our goal is to follow the mantra of the agile development approach as soon as we can," said Crosby.

IBM middleware will often appear through BlueMix incorporated into a predefined "pattern" created by IBM, typically consisting of an application, a Web server, IBM middleware, and a database service. BlueMix on SoftLayer will give developers the ability to capture a snapshot of a pattern with each application, so that it "can be deployed to 10 datacenters in an identical fashion at the click of a button," said Crosby.

BlueMix will run in SoftLayer on top of the open source PaaS platform Cloud Foundry, originally sponsored as a project by VMware. Cloud Foundry became the responsibility of Pivotal when that subsidiary was spun out of VMware and EMC. Now its organizers say they are moving the PaaS project into its own foundation with its own governing board. The Apache Software Foundation, OpenStack, and other key open source projects have followed a similar route to gain the broadest possible backing.
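For developers, the practical consequence of the Cloud Foundry underpinning is that BlueMix services attach to applications the way any Cloud Foundry service does: through bound service credentials. Here is a minimal Node.js sketch, assuming the standard Cloud Foundry convention that bound services show up in a VCAP_SERVICES environment variable; the "sqldb" label and the credential field names are illustrative placeholders, not an actual BlueMix catalog entry.

var http = require('http');

// Bound services arrive as JSON in the VCAP_SERVICES environment variable,
// keyed by service label; the 'sqldb' label below is a placeholder.
function getServiceCredentials(label) {
  var vcap = JSON.parse(process.env.VCAP_SERVICES || '{}');
  var instances = vcap[label];
  return (instances && instances.length) ? instances[0].credentials : null;
}

var dbCreds = getServiceCredentials('sqldb');

http.createServer(function (req, res) {
  res.writeHead(200, { 'Content-Type': 'text/plain' });
  res.end(dbCreds ? 'Bound database host: ' + dbCreds.hostname
                  : 'No database service bound');
}).listen(process.env.PORT || 8080);   // Cloud Foundry injects the listen port via the environment

The same lookup works for whatever services a pattern or the catalog binds to the app, which is what makes the middleware "composable" from a developer's point of view.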

There are 20 million developers in the world, and three-quarters of them have yet to develop a cloud application or work with a cloud-based platform as a service, according to Evans Data, which regularly surveys developers' attitudes and skills around the world. IBM is launching BlueMix as a combination of open source code and proprietary software to capture its share of their future work in the cloud.

IBM announced in January that it was expanding the SoftLayer chain of datacenters from 13 to 40 locations around the world to give SoftLayer a competitive global reach. It is spending $1.2 billion this year on that initiative.



By: Charles Babcock
Link: http://www.informationweek.com/cloud/platform-as-a-service/ibm-bluemix-paas-play-explained/d/d-id/1113979

Friday, February 21, 2014

Knock out your mobile development deadlines with IBM Worklight

Have you been asked to deliver new functionality or a new application with an impossible deadline? How about delivering a fully featured and integrated mobile application for multiple platforms in five weeks? Yes, I know that is a ridiculous timeline. But is it possible? With the help of an IBM Premier Business Partner (Avnet Technology Solutions) and IBM Worklight, we were able to deliver an application on time and on budget.


How is that even possible?

In a recent blog post, “IBM Worklight to the rescue: Saving your company's reputation,” I discussed how the remote disable function of IBM Worklight could provide significant value to a company that needed to deny access to a specific version of their application. I recently completed a mobile application project with an IBM client that was successful in part because of the remote disable and direct update features of IBM Worklight.

So what did we really deliver?

We delivered a hybrid application built using JavaScript, HTML5 and CSS that was approved by and made available in the iOS and Android app stores, with custom phone and tablet versions. The application was tested on multiple devices, operating systems and form factors. I won’t bore you with all of the details, but here is a high-level list of the functional requirements that were delivered.
  • Push notifications (see the client-side sketch after this list)
  • Remote database integration for lead and data collection
  • Device calendar integration (add events to personal calendars)
  • Custom Twitter integration
  • Custom RSS feed
  • Worklight analytics 
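For the push notification item, the client side of a Worklight hybrid app subscribes to a server-side event source through the WL.Client.Push API. The snippet below is a generic sketch rather than our project's code; the alias 'myPush', the adapter name 'PushAdapter', and the event source name 'PushEventSource' are placeholders.

// Generic client-side sketch for a Worklight hybrid app; names are placeholders.
WL.Client.Push.onReadyToSubscribe = function () {
	// Register the callback that fires when a notification arrives, bound to
	// the 'PushEventSource' event source in the 'PushAdapter' adapter.
	WL.Client.Push.registerEventSourceCallback(
		'myPush', 'PushAdapter', 'PushEventSource', pushNotificationReceived);
};

function doSubscribe() {
	WL.Client.Push.subscribe('myPush', {
		onSuccess: function () { WL.Logger.debug('Subscribed to push'); },
		onFailure: function (err) { WL.Logger.debug('Subscription failed: ' + JSON.stringify(err)); }
	});
}

function pushNotificationReceived(props, payload) {
	// props carries the notification text and badge; payload carries any custom data.
	alert(props.alert);
}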

How did Worklight help make this possible?

We were able to ensure that this project was delivered as promised with several easy-to-use features that are included with IBM Worklight:
  • Adapters: secure integration with remote resources
  • Automated mobile functional testing: the same test runs across multiple devices and mobile operating systems
  • Unified push notification APIs: polled server-side apps to dispatch notifications, uniform access to push notification providers, and the ability to monitor and control notification delivery
  • Direct update: web resources pushed to the app when it connects to the Worklight Server
The application used SQL and HTTP adapters to store customer information and to insert push notification messages into a database that was polled regularly. When a new entry was found in the push notification table, the polling process would create and send a new push notification through the unified push notification APIs. The direct update feature came into play once the basic application structure had been built and accepted by the app stores, about three weeks into the project. That left the team two weeks to make content changes and correct any defects found during testing.
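To give a flavor of what the server side of that flow looks like, here is a simplified sketch in Worklight's server-side JavaScript adapter API. It is illustrative only: the PUSH_QUEUE table, the security test name, and the adapter names are assumptions, and in a real project the SQL polling and the push event source would live in separate SQL and HTTP adapters, as described above.

// Illustrative Worklight server-side adapter code; table, adapter, and security
// test names are placeholders, not the project's actual configuration.

// Event source that client apps subscribe to for push notifications.
WL.Server.createEventSource({
	name: 'PushEventSource',
	onDeviceSubscribe: 'deviceSubscribeFunc',
	onDeviceUnsubscribe: 'deviceUnsubscribeFunc',
	securityTest: 'PushApplication-strong-mobile-securityTest'
});

function deviceSubscribeFunc(userSubscription, deviceSubscription) {}
function deviceUnsubscribeFunc(userSubscription, deviceSubscription) {}

// SQL adapter procedure: poll the queue table for unsent messages
// (the data source itself is declared in the adapter's XML descriptor).
var selectPending = WL.Server.createSQLStatement(
	'SELECT ID, USER_ID, MESSAGE FROM PUSH_QUEUE WHERE SENT = 0');

function getPendingNotifications() {
	return WL.Server.invokeSQLStatement({
		preparedStatement: selectPending,
		parameters: []
	});
}

// Send one notification to every device registered to a given user.
function submitNotification(userId, notificationText) {
	var userSubscription = WL.Server.getUserNotificationSubscription(
		'PushAdapter.PushEventSource', userId);
	if (userSubscription === null) {
		return { result: 'No subscription found for user ' + userId };
	}
	var notification = WL.Server.createDefaultNotification(notificationText, 1, {});
	WL.Server.notifyAllDevices(userSubscription, notification);
	return { result: 'Notification sent to user ' + userId };
}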
In the end, the project was successful and the application was very well received by its users.


By: Drew Douglass
Link: http://asmarterplanet.com/mobile-enterprise/blog/2014/02/knock-mobile-development-deadlines-ibm-worklight.html

Wednesday, February 19, 2014

What can GPFS on Hadoop do for you?

The Hadoop Distributed File System (HDFS) is considered a core component of Hadoop, but it’s not an essential one. Lately, IBM has been talking up the benefits of hooking Hadoop up to the General Parallel File System (GPFS). IBM has done the work of integrating GPFS with Hadoop. The big question is, What can GPFS on Hadoop do for you?

IBM developed GPFS in 1998 as a SAN file system for use in HPC applications and IBM’s biggest supercomputers, such as Blue Gene, ASCI Purple, Watson, Sequoia, and MIRA. In 2009, IBM hooked GPFS to Hadoop, and today IBM is running GPFS, which scales into the petabyte range and has more advanced data management capabilities than HDFS, on InfoSphere BigInsights, its collection of Hadoop-related offerings, as well as Platform Symphony.

GPFS was originally developed as a SAN file system, which would normally prevent it from being used with Hadoop and the direct-attached disks that make up a Hadoop cluster. This is where an IBM GPFS feature called File Placement Optimization (FPO) comes into play.

Phil Horwitz, a senior engineer at IBM’s Systems Optimization Competency Center, recently discussed how IBM is using GPFS with BigInsights and System x servers, and in particular how FPO is helping GPFS make inroads into Hadoop clusters. (IBM has since agreed to sell the System x business to Lenovo, which IBM must now work closely with for GPFS-based solutions, but the points are still valid.)

According to Horwitz, FPO essentially emulates a key component of HDFS: moving the application workload to the data. “Basically, it moves the job to the data as opposed to moving data to the job,” he says in the interview. “Say I have 20 servers in a rack and three racks. GPFS FPO knows a copy of the data I need is located on the 60th server and it can send the job right to that server. This reduces network traffic since GPFS-FPO does not need to move the data. It also improves performance and efficiency.”


Last month, IBM published an in-depth technical white paper titled “Deploying a Big Data Solution using IBM GPFS-FPO” that explains how to roll out GPFS on Hadoop. It also explains some of the benefits users will see from using the technology. For starters, GPFS is POSIX compliant, which enables any other applications running atop the Hadoop cluster to access data stored in the file system in a straightforward manner. With HDFS, only Hadoop applications can access the data, and they must go through the Java-based HDFS API.
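To make that contrast concrete, here is a minimal sketch, assuming GPFS is mounted at a hypothetical /gpfs/bigdata path: any ordinary program, a short Node.js script in this case, can read Hadoop-managed data through plain file I/O, while the same data in HDFS would have to be reached through the Java HDFS API or a gateway such as WebHDFS.

// Minimal sketch: reading Hadoop-managed data straight off a POSIX-compliant
// GPFS mount. The /gpfs/bigdata path and file name are assumptions for illustration.
var fs = require('fs');
var readline = require('readline');

var reader = readline.createInterface({
  input: fs.createReadStream('/gpfs/bigdata/clickstream/part-00000.csv')
});

var rows = 0;
reader.on('line', function () {
  rows++;                       // any non-Hadoop tool can consume the data this way
});
reader.on('close', function () {
  console.log('Rows read directly from GPFS: ' + rows);
});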

The flexibility to access GPFS-resident data from Hadoop and non-Hadoop applications frees users to build more flexible big data workflows. For example, a customer may analyze a piece of data with SAS. As part of that workflow, they may use a series of ETL steps to manipulate data. Those ETL processes may be best executed by a MapReduce program. Trying to build this workflow on HDFS would require additional steps, as well as moving data in and out of HDFS. Using GPFS simplifies the architecture and minimizes the data movement, IBM says.

There are many other general IT housekeeping-type benefits to using GPFS. According to IBM’s "Harness the Power of Big Data" publication, POSIX compliance also allows users to manage their Hadoop storage “just as you would any other computer in your IT environment.” This allows customers to use traditional backup and restore utilities with their Hadoop clusters, as opposed to using the “copy” command in HDFS. What’s more, GPFS supports point-in-time snapshots and off-site replication capabilities, which aren't available in plain-vanilla HDFS.

The size of data blocks is also an issue with HDFS. In IBM's June 2013 whitepaper "Extending IBM InfoSphere BigInsights with GPFS FPO and IBM Platform Symphony," IBM makes the case that, because Hadoop MapReduce is optimized for blocks that are around 64MB in size, HDFS is inefficient at dealing with smaller data sizes. In the world of big data, it's not always the size of the data that matters; the number of data points and the frequency at which the data changes are important too.

GPFS also brings benefits in the area of data de-duplication, because it does not tend to duplicate data as HDFS does, IBM says. However, if users prefer to have copies of their data spread out in multiple places on their cluster, they can use the write-affinity depth (WAD) feature that debuted with the introduction of FPO. The GPFS quota system also helps to control the number of files and the amount of file data in the file system, which helps to manage storage.

Capacity planning of Hadoop clusters is easier when the data is stored in GPFS, IBM says. In HDFS, administrators need to carefully design the disk space dedicated to the Hadoop cluster, including dedicating space for the output of MapReduce jobs and log files. “With GPFS-FPO,” IBM says, “you only need to worry about the disks themselves filling up; there’s no need to dedicate storage for Hadoop.”



Other benefits include the capability to use policy-based information lifecycle management functions. That means third-party management tools, such as IBM’s Tivoli Storage Manager software, can manage the data storage for internal storage pools. The hierarchical storage management (HSM) capabilities that are built into GPFS mean you can keep the “hottest” data on the fastest disks. That feature is not available in plain-vanilla Hadoop running HDFS.

The shared-nothing architecture used by GPFS-FPO also provides greater resilience than HDFS by allowing each node to operate independently, reducing the impact of failure events across multiple nodes. The elimination of the HDFS NameNode also eliminates the single-point-of-failure problem that shadows enterprise Hadoop deployments. “By storing your data in GPFS-FPO you are freed from the architectural restrictions of HDFS,” IBM says.

The Active File Management (AFM) feature of GPFS also boosts resiliency by caching datasets in different places on the cluster, ensuring applications can still access data even when a remote storage cluster is unavailable. AFM also effectively masks wide-area network latencies and outages. Customers can either use AFM to maintain an asynchronous copy of the data at a separate physical location or rely on GPFS synchronous replication, which is what FPO replicas use.

Security is also bolstered with GPFS. Customers can use either traditional ACLs based on the POSIX model or network file system (NFS) version 4 ACLs. IBM says NFS ACLs provide much more control over file and directory access. GPFS also includes immutability and appendOnly restriction capabilities, which can be used to protect data and prevent it from being modified or deleted.

You don’t have to be using IBM’s BigInsights (or its Platform Symphony offering) to take advantage of GPFS. The company will sell the file system to do-it-yourself Hadoopers, as well as those who are running distributions from other companies. And GPFS still works with the wide array of Hadoop tools in the big data stack, such as Flume, Sqoop, Hive, Pig, HBase, Lucene, Oozie, and of course MapReduce itself.

IBM added the FPO capabilities to GPFS version 3.5 in December 2012. Although it's POSIX compliant, GPFS-FPO is only available on Linux at this point. IBM says GPFS is currently being used in a variety of big data applications in the areas of bioinformatics, operational analytics, digital media, engineering design, financial analytics, seismic data processing, and geographic information systems.



By: Alex Woodie
Link: http://www.datanami.com/datanami/2014-02-18/what_can_gpfs_on_hadoop_do_for_you_.html

Monday, February 17, 2014

This tiny chip makes the Internet four times faster

The race is on to build a faster, better Internet. While Google is working on bringing super-high-speed connections to homes in select cities, IBM is working on a technology that could make the whole Internet faster.

It has created a new chip that beefs up Internet speeds to 200 to 400 gigabits per second, about four times faster than today's speeds, IBM says. Plus it sucks up hardly any power.

At this speed, a 2-hour ultra-high-definition movie (about 160 gigabytes) would download in a few seconds. It would only take a few seconds to download 40,000 songs, IBM says.
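As a rough sanity check on that claim (taking the 160-gigabyte figure and the 200 to 400 gigabits-per-second range at face value), the arithmetic does land in the "few seconds" range:

// Back-of-the-envelope check: 160 gigabytes = 1,280 gigabits.
var movieGigabits = 160 * 8;

console.log('At 400 Gb/s: ' + (movieGigabits / 400) + ' seconds');  // 3.2 seconds
console.log('At 200 Gb/s: ' + (movieGigabits / 200) + ' seconds');  // 6.4 seconds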

The chip fits into a part of the Internet that runs between data centers, not your computer or home router. 

The latest version of the chip is only a prototype right now, so it will be a while before it gets installed and the Internet gets better.

However, IBM says it has already signed on a customer for an earlier version of the technology, a company called Semtech. Semtech makes a device that converts analog signals (like radio signals) to digital signals that can be piped across the Internet.

Equally interesting is that IBM says it will manufacture the chip for the Semtech deal in the U.S. at its semiconductor fab in East Fishkill, N.Y.

That's of note because there's been speculation that IBM may be looking for a buyer for its semiconductor manufacturing unit. Breakthrough technology like this could either help the unit grow revenues, allowing IBM to keep it, or allow IBM to sell it for a higher price.



By: Julie Bort
Link: http://www.businessinsider.com/ibm-chip-makes-the-internet-faster-2014-2

Up Close and Personal With IBM PureApplication PaaS

The converged infrastructure value proposition, by now, is pretty evident to everyone in the industry. Whether that proposition can be realized is highly dependent on your particular organization and specific use case.

Over the past several months, I have had an opportunity to be involved with a very high-profile pilot with immovable, over-the-top deadlines. In addition, the security requirements were downright oppressive and necessitated a completely isolated, separate environment. Multi-tenancy was not an option.

With all this in mind, a pre-built, converged infrastructure package became the obvious choice. Since the solution would be built upon a suite of IBM software, IBM pitched its new PureApplication system. My first reaction was to look at it as an obvious IBM competitor to the venerable vBlock, but I quickly dismissed that as I learned more.

The PureApplication platform is quite a bit more than a vBlock competitor. It leverages IBM’s services expertise to provide a giant catalog of pre-configured multi-tiered applications that have been essentially captured and turned into what IBM calls a “pattern”. The simplest way I can think of to describe a pattern is as something like the application blueprint that Aaron Sweemer was talking about a few months back. The pattern consists of all tiers of an application, which are deployed and configured simultaneously and on demand.

As an example, if one needs a message broker app, there’s a pattern for it. After it is deployed (usually within 20 to 30 minutes), what’s sitting there is a DataPower appliance, web services, message broker, and database, all configured and ready to run. Once you load up your specific BAR files and configure how inbound connections and messages will be handled, you can patternize all of that with script packages, so that the next time you deploy, you’re ready to process messages in 20 minutes. If you want to create your own patterns, there’s a pretty simple drag-and-drop interface for doing so.


I know what you’re thinking: there are plenty of other ways to capture images, vApps, and so on to make application deployment fast. But what PureApp brings to the table is (and I hate using this phrase) the best practices from IBM’s years of consulting and building these solutions for thousands of customers. There’s no ground-up installation of each tier, with the tedious hours of configuration and the cost associated with those hours. That’s what you are paying for when you buy PureApp.

Don’t have anyone in house with years of experience deploying SugarCRM, Business Intelligence, Message Broker, SAP, or BPM from the ground up? No problem. There are patterns for all of them. There are hundreds of patterns so far, and many more are in the pipeline from a growing list of global partners. 

The PureApplication platform uses IBM blades, IBM switching, and IBM V7000 storage. The hypervisor is VMware, and they even run vCenter. The problem is, you can’t access vCenter or install any add-on features. IBM has written its own algorithms for HA and some of the other things you’d expect vCenter to handle. The reasoning for this, ostensibly, is so it can support other hypervisors in the future.

For someone accustomed to running VMware and vCenter, it can be quite difficult to get your head around having no access to the hosts or to vCenter to do any troubleshooting, monitoring, or configuration. But the IBM answer is that this is supposed to be a cloud in a box, and the underlying infrastructure is irrelevant. Still, going from a provider mentality to an infrastructure-consumer one is a difficult transition, and one that I am still struggling with personally.

Licensing on this system works like this: you can use as many licenses for Message Broker, DB2, Red Hat, and the other bundled IBM software pieces as you can possibly consume with the box. It’s a smart way to implement licensing. You’re never going to be able to run more licenses than you “pay for” with the finite resources included with each system, and it’s extremely convenient for the end user, as there is no need to keep up with licensing for the patternized software.

Access to the PureApp platform is via the PureApp console or CLI. It’s a good interface, but it’s also definitely a 1.x interface. There is very extensive scripting support for adding to patterns and individual virtual machines, and there are multi-tenancy capabilities: multiple “cloud groups” can be created to carve up resources. There are things that need to be improved, like refresh and access to more in-depth monitoring of the system. Having said that, even in the past six months the improvements made have been quite significant; IBM is obviously throwing incredible amounts of resources at this platform. Deploying patterns is quite easy, and there is an IBM Image Capture pattern that will hook into existing ESXi hosts to pull off VMs to use in Pure and prepare them for patternization.


Having used the platform for a while now, I like it more every day. A couple of weeks ago, we were able to press a single button and upgrade firmware on the switches, blades, ESXi, and the V7000 storage with no further input from us. My biggest complaint so far is that I have no access to vCenter to install things like vShield, backup software, monitoring software, and so on. But again, it’s just getting used to a new paradigm that’s hard for me. IBM does have a monitoring pattern that deploys Tivoli, which helps with monitoring, but it’s one more thing to learn and administer. That said, I do understand why they don’t want people looking into the guts of a true PaaS.

Overall, I can say that I am impressed with the amount of work that has gone into building the PureApplication platform, and am looking forward to the features they have in the pipeline. The support has been great so far as well, but I do hope the support organization can keep up with the exponential sales growth. I have a feeling there will be plenty more growth in 2014. 



By: Brandon Riley
Link: http://www.virtualinsanity.com/index.php/2014/02/10/up-close-and-personal-with-ibm-pureapplication-paas/

Server market realignment

The server market is in the midst of a radical realignment, the likes of which have not been seen since the shakeout of the 1980s that saw most of the minicomputer makers, including Prime Computer, Data General and Digital Equipment Corp., disappear, devastating the Boston high tech corridor. And while the writing has been on the wall for some time, this major industry shift promises to happen much faster than that one.

IBM System x General Manager Adalio Sanchez speaking at an IBM event in Beijing on January 16, 2014 to debut the company’s latest x86-based servers. Today IBM announced plans for Lenovo to acquire IBM’s x86 server business for $2.3 billion.

The first major shock came to the market last month, when IBM announced an agreement to sell its System x servers, x86 network switches and other x86-based products to Lenovo, continuing IBM’s transition into a software and services provider. While internal sources say that the sale, which includes the transfer of up to 6,700 IBM employees to the commodity system maker, will take several months to complete, this announcement definitely points to the future of x86 hardware.

Actually the commoditization of x86 has been ongoing for several years and is well under way. It started with the invention of hyperscale by the big Web service companies, including Yahoo, Google, Amazon, and Facebook. These companies buy huge quantities of standardized white box servers direct from Taiwan and China for their mega-data centers, run them hard in highly automated environments and, when something breaks, throw it away and replace it with a new box. But even before that, the seeds of commoditization were sown by the major traditional players themselves when they handed manufacturing of their servers over to the Taiwanese. Essentially they created their own replacements.

That arrangement worked for them as long as the hardware still required lots of attention, expensive built-in management software, and constant optimization and fine tuning to handle the compute loads. But in the last decade three things have changed. First, Moore’s Law has driven compute power and network speed to the point that detailed optimization is no longer necessary for most compute loads. Second, the management software has moved to the virtualization layer. The result of these two trends is that the focus of IT organization attention is increasingly moving up the stack to software, and hardware is taken for granted. After 67 years, the techies are finally tiring of fiddling constantly with the hardware.

Third, an increasing amount of the compute load is moving steadily to the cloud. Companies that always had to buy extra compute to support peak loads can now move those applications into a hybrid cloud, size their hardware for the average load, and burst the peaks to their public cloud partner. As those companies gain a comfort level with their public cloud service providers, they will start moving entire compute loads, particularly new Web-based applications such as big data analysis that have a strong affinity to the public cloud, entirely to those providers, in many cases by subscribing to SaaS services.

Trend toward standardization

 
The result of this is that the underlying hardware is becoming highly standardized, and the focus of computing is moving to software and services. Under the onslaught of hyperscale and cloud computing, the market for the traditional vendors is decreasing, a trend that will accelerate through this decade. And the market is shifting from piece-parts to converged systems as customers seek to simplify their supply chains and save money. As Wikibon CTO David Floyer points out, the more of the system that can be covered by a single SKU, the more customers can save. The hardware growth for both IBM and HP is clearly in their converged systems, and their differentiation increasingly comes from what they provide above the virtualization layer in middleware and applications. The expansion of virtualization from servers to the software-led data center will only drive this trend faster.


Open source hardware is beginning to appear in the market and can be expected to become commonplace over the next five years as the big Asian white box makers adopt it as the next step in driving cost out of the server manufacturing process.

The clear message for x86 server vendors is either to drive cost out of their hardware business and become commodity providers on the level of the Taiwanese, while developing differentiation through higher-level software running on top of those commoditized boxes, or to get out of the x86 hardware business entirely and source their servers from a commodity provider. IBM has clearly chosen the latter course with its sale of System x to Lenovo, along with the creation of a close partnership with the Chinese commodity hardware manufacturer.

IBM’s strategy — partner with Lenovo

This is the right strategy for both companies. Since buying IBM’s PC manufacturing business a decade ago, Lenovo has proven itself as a quality commodity electronics maker, in the process passing HP last year to become the number one PC vendor in worldwide sales. IBM, meanwhile, is a highly creative company with a huge R&D budget that is betting its future on leading-edge areas including big data processing and analysis, worldwide business cloud services, and Watson.

The close partnership should leverage the very different strengths of the two companies to create products that benefit from both, particularly in the IBM PureSystems line of integrated systems. Meanwhile, Lenovo is likely to enter the hyperscale market once it has brought its manufacturing and marketing to bear on the IBM System x line. It also will certainly continue to sell its rebranded System x servers into the traditional business and governmental markets and can be expected to field its own x86-based converged system line, probably in partnership with IBM. Since both companies will be profiting from the relationship, regardless of whose brand is on the box, they both will have strong business reasons to maintain a close partnership into the future.

IBM is in the market position it is in today in large part because of the visionary leadership of CEO Louis V. Gerstner Jr., IBM’s head for a decade through much of the 1990s and early 2000s. He foresaw the industry changes we are experiencing today, at least in their general form, and espoused the strategy of transforming IBM from a hardware giant to a software and services company. In retrospect this was prophetic, and while obviously nobody in 1993 could have anticipated the impact of cloud computing and in Gerstner’s time “services” mostly meant consulting, he moved IBM in a direction that puts it in a strong position today to capitalize on the burgeoning cloud services market. And his successors, Samuel J. Palmisano and Virginia M. Rometty, have continued to move IBM forward and make the hard decisions sometimes necessary for IBM’s transformation.

HP at the crossroads

But what about the other big x86 server vendors, who did not have the good fortune to have a visionary at their helm in the 1990s? HP in particular seems to have been searching for a path forward in recent years with its parade of short-lived CEOs. Carly Fiorina certainly had a strong vision, but unfortunately it proved to be the wrong one for the company.

After she left, HP suffered from a revolving door at the top. Mark Hurd was the only CEO to last long enough to create a vision for the company’s future, but he seemed mainly to see “more of the same.” Meg Whitman has been in charge for nearly two and a half years now and seems to have stabilized the company, but it is also suffering from market shrinkage. Its answer so far has been to bring forward some innovative hardware, but large parts of the company outside converged systems, Moonshot, and storage seem to be going forward blindly, producing more of the same with no regard for the reefs ahead, and HP’s financial results in recent quarters have shown the result.

HP needs to make up its mind, and fast. In its first two decades it was a highly creative, if sometimes chaotic, company producing leading-edge products, including some of the first servers and desktop printers. It really was the Apple Computer of its day. But it seems to have lost much of that, and today it buys more innovation than it produces in house, Moonshot notwithstanding.

The problem is that while it has some very innovative products, they are all one-offs, and large parts of the company appear to be drifting. Decisions seem to be tactical rather than strategic. The PC group, for instance, is obviously floundering. At one time HP was almost as much a consumer company, with its desktop printers and PCs, as it was a business-to-business vendor. It has neglected that part of its business, which is a mistake.

HP seems to be moving in the direction of becoming a U.S.-based commodity hardware supplier. If that is what its leadership wants, then it should embrace that completely and start competing on price in all its markets while driving cost out of its processes at all levels. If it wants to return to its roots as a highly creative company, then it should start building on products like Moonshot and revitalize its consumer and business market mindshare with new, creative electronic products that can create new markets. It cannot do both — no company can.



By: Bert Latamore

IBM successfully transmits text using graphene-based circuit

Big Blue confident it can use fragile material within smartphones and tablets.


IBM has successfully transmitted a text message using a circuit made out of graphene, as the firm shows the potential of carbon-based nanotechnology.

Graphene consists of a single layer of carbon atoms packed into a honeycomb structure. The low-cost material is an excellent conductor of electricity and heat, which makes it ideal for use in smartphones and tablets: data can be transferred faster, at lower power and lower cost.

The barrier to using graphene within integrated circuits is its fragility.  But IBM believes it has found a way to compensate for this weakness by using silicon as a backbone for circuits.

The firm created an RF receiver using three graphene transistors, four inductors, two capacitors, and two resistors. These components were packed into a 0.6 mm² area and fabricated on a 200mm silicon production line.

Big Blue’s scientists were able to send and receive the digital text “I-B-M” over a 4.3 GHz signal with no distortion. The firm claims performance is 10,000 times better than previous efforts and is confident graphene can now be integrated into low-cost silicon technology.

The firm said applications could involve using graphene within smart sensors and RFID tags to send data signals at significant distances.

“This is the first time that someone has shown graphene devices and circuits to perform modern wireless communication functions comparable to silicon technology,” said Supratik Guha, director of physical sciences at IBM Research.


By: Khidr Suleman