vendredi 26 avril 2013

IBM Continues to Top the Social Business Marketplace

Earlier this year Gartner announced its prediction that the social business market will be worth nearly $30 billion by 2015.  Such scale has prompted most of the big software companies to enter the fray.
Despite the growing competition, however, it's IBM that continues to come out on top, at least according to an annual IDC survey of social business software.
The report also found that the enterprise social software market reached $1 billion in 2012, a 25% increase on 2011.
"Businesses today are operating in the social age where innovation, speed and exceptional client experiences are critical," said Alistair Rennie, general manager of social business at IBM, in a statement. "Our social business platform is accelerating that transformation and helping change the way leaders are working."
As companies increasingly look to fully utilize the knowledge and collaborative potential of their employees, IBM predicts that the social business industry will only grow.
IBM reveals that 60% of Fortune 100 companies use its social business software, including 80% of the top 10 retailers and banks.
IBM Connections is the bedrock of the company's social business offering.  It enables instant collaboration, letting employees build social communities both inside and outside the enterprise.
Last year the company released a whitepaper on successful deployment of social business tools within the enterprise.  The aim of the document was to find out how companies are using social business, and which areas were currently most successful for them.  Three main areas jumped out as being particularly successful:
  • Creating valued customer experiences
  • Driving workforce productivity and effectiveness
  • Accelerating innovation
Such outcomes have become commonplace amongst IBM customers.  Construction equipment maker Caterpillar is one such example. Jeff Bowman, head of global e-business at Caterpillar, said his company is just getting started with e-business and social media.
"We're using it because we're focused on growing loyalty in the customers that we have," Bowman said.
Craig Hayman, general manager of industry solutions at IBM Software, said one of his key goals in this social strategy is to enable both IBM and its customers to have better relationships with customers.
"Providing an effective customer experience is all about perception—from awareness to customer loyalty," he said. "Customer experience is more important today than ever before."

IBM Acquires Marketing Automation Software Company Unica

IBM and Unica Corporation today announced they have entered into a definitive agreement for IBM to acquire Unica in a cash transaction at a price of $21 per share, or at a net price of approximately $480 million, after adjusting for cash.  A publicly held company in Waltham, Mass., Unica will expand IBM's ability to help organizations analyze and predict customer preferences and develop more targeted marketing campaigns.
The acquisition, which is subject to Unica shareholder approval, applicable regulatory clearances and other customary closing conditions, is expected to close in the fourth quarter of 2010.
Today's leading organizations place a high value on a consistent and relevant customer experience.  They must continuously focus on enhancing their brand by responding quickly to marketplace changes and differentiating themselves through more targeted, personalized marketing campaigns.  In order to achieve this, marketing professionals are increasingly investing in technology to automate and manage marketing planning and execution to help them better analyze customer preferences and trends and in turn, predict buying needs and drive relevant campaigns.
To meet this demand, IBM is assembling transformational capabilities to help clients create this consistent and relevant cross-channel brand experience to promote customer loyalty and satisfaction.  With sophisticated analytics and marketing process improvement, the combination of IBM and Unica will help clients streamline and integrate key processes including relationship marketing, online marketing and marketing operations.

By: John Ryan
Source:  https://websphere.sys-con.com/node/1499164

Social Business Infographic: IBM is named #1 in social software for the 4th year in a row by IDC




 By: Sandy Carter
Source:  http://socialbusinesssandy.com/2013/04/22/social-business-ibm-is-named-1-in-social-software-for-the-4th-year-in-a-row-by-idc/

IBM Research uses supercomputer tech to “harness the energy of 2,000 suns”

A collaboration of Swiss institutions, including IBM Research, has announced that it’s developing a highly efficient, low-cost photovoltaic system that’s capable of concentrating “the power of 2,000 suns.” The collaboration claims that the system, which is targeted at dry regions such as southern Europe, Africa, the south west of North America, South America, and Australia, will have an overall efficiency of 80% — and, miraculously, be able to provide a source of fresh water, too.
The High Concentration Photovoltaic Thermal system [PDF], or HCPVT for short, combines Airlight’s concentrated solar power (CSP) tech with IBM’s microchannel water cooling tech. In essence, the HCPVT system consists of a large parabolic dish that tracks the sun, with mirrored facets that concentrate the sun’s rays on a cluster of photovoltaic chips — which is where the real magic occurs. The HCPVT system uses triple-junction photovoltaic chips, which can harness the energy of three different wavelengths of light, compared to the single wavelength captured by a conventional single-junction photovoltaic cell.
IBM/Airlight photovoltaic/microchannel module 
Furthermore, these triple-junction chips (pictured right) are kept cool using IBM’s microchannel cooling, allowing the chips to continue operating nominally at solar concentrations between 2,000 and 5,000 times. This technology, called Aquasar, was originally developed by IBM to efficiently cool supercomputers, which require extraordinary cooling solutions to keep their densely packed processors at an acceptable temperature. For complete details of Aquasar, see our explainer. In essence, though, each photovoltaic chip is cooled by a network of tiny, water-filled microchannels, “inspired by the hierarchical blood supply system of the human body.”
These microchannels are so efficient that the complete HCPVT system can recover up to 50% of waste heat, bringing the total system efficiency up to 80%. But that’s not all: The hot water, which reaches temperatures of 90 Celsius (194F), is then passed through a porous membrane desalination system, producing clean, drinkable water. A square meter of microchannel-cooled photovoltaic chips would produce between 30 and 40 liters of water per day. A large installation would provide enough water for a small town. The hot water could also power an adsorption refrigerator, providing air conditioning — though, in reality, the ability to produce drinkable water will probably take precedence over cool air.

Zooming out from the micro, the macro-scale details of the HCPVT system are equally important. According to IBM Research, this system is only economically viable because the structure is fashioned from concrete and the primary optics are composed of inexpensive pneumatic mirrors (thin, reflective metal films pulled tight with pneumatics). All told, this equates to a system cost of below $250 per square meter of mirror, which is apparently “three times lower than comparable systems.” The levelized cost of energy from the HCPVT system — the price that must be charged to break even over the system’s lifetime — is just 10 cents per kilowatt-hour (kWh).
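
For readers unfamiliar with the metric, the levelized cost of energy quoted above is conventionally computed as lifetime costs divided by lifetime energy output, both discounted to present value. The expression below is that standard textbook definition; the symbols are generic and are not figures from the announcement:

    \mathrm{LCOE} \;=\; \frac{\sum_{t=0}^{N} C_t\,(1+r)^{-t}}{\sum_{t=1}^{N} E_t\,(1+r)^{-t}}

where C_t is the cost incurred in year t (capital, operations and maintenance), E_t is the energy delivered in year t, r is the discount rate and N is the system lifetime in years. A headline figure such as 10 cents per kWh therefore bundles assumptions about capital cost, output and lifetime into a single number.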

By:
Source: http://www.extremetech.com/extreme/154044-ibm-research-uses-supercomputer-tech-to-harness-the-energy-of-2000-suns

lundi 22 avril 2013

IBM attempts to harness the power of two thousand suns

Considering it’s Earth Day, it’s unsurprising that a research collaboration has been announced that focuses on getting the solar industry back on track — by creating systems able to harness the power of thousands of suns.
A three-year, $2.4 million grant from the Swiss Commission for Technology and Innovation has been awarded to IBM Research, Airlight Energy, ETH Zurich and the Interstate University of Applied Sciences Buchs NTB to research and develop an economical “High Concentration PhotoVoltaic Thermal” (HCPVT) system.
The project aims to create a system capable of concentrating solar radiation 2,000 times and converting 80 percent of the incoming radiation into useful energy, far beyond today’s solar panels, which convert only a fraction of the energy they capture.
The prototype features a large, mirrored parabolic dish attached to a sun tracking system. Sunlight reflects off the mirrors into several microchannel-liquid cooled receivers with triple junction photovoltaic chips that can convert 200-250 watts over a typical day. Each 1×1cm chip is mounted on a layer that pipes liquid coolant across it to absorb heat and draw it away. According to IBM, “the coolant maintains the chips almost at the same temperature for a solar concentration of 2,000 times and can keep them at safe temperatures up to a solar concentration of 5,000 times.”
“We plan to use triple-junction photovoltaic cells on a micro-channel cooled module which can directly convert more than 30 percent of collected solar radiation into electrical energy and allow for the efficient recovery of an additional 50 percent waste heat,” said Bruno Michel, manager, advanced thermal packaging at IBM Research.
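
As a back-of-the-envelope check of how those quoted figures combine, here is a minimal sketch using only the percentages stated above; the incident power is an assumed placeholder, not a number from IBM:

    # Rough sketch: how the quoted HCPVT figures add up (illustrative only).
    incident_power_kw = 10.0        # assumed incident solar power on the receiver, kW
    electrical_efficiency = 0.30    # >30% of collected radiation converted to electricity (quoted)
    heat_recovery_fraction = 0.50   # an additional 50% recovered as useful heat (quoted)

    electrical_kw = incident_power_kw * electrical_efficiency
    recovered_heat_kw = incident_power_kw * heat_recovery_fraction
    total_useful_kw = electrical_kw + recovered_heat_kw

    print(f"Electrical output: {electrical_kw:.1f} kW")
    print(f"Recovered heat:    {recovered_heat_kw:.1f} kW")
    print(f"Total useful:      {total_useful_kw:.1f} kW "
          f"({100 * total_useful_kw / incident_power_kw:.0f}% of incident)")

The 80 percent target is thus the sum of electrical conversion and recovered heat, not the electrical efficiency alone.
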
The scientists hope that their research will result in a HCPVT system that is more cost-effective than models currently on the market, and believe that they can achieve a cost per aperture area below $250 per square meter — three times lower than the cost of current systems.
In addition, it is expected that the system will be able to provide desalinated water and cool air in sunny, remote locations.

mercredi 17 avril 2013

Big Data, All Data, PureData, BLU Data

For some time now, when it comes to big data, my mantra has been "big data is simply all data".  IBM's April 3 announcement served admirably to reinforce that point of view. Was it a big data announcement, a DB2 announcement, or a hardware announcement?  The short answer is "yes", to all the above and more.

Weaving together a number of threads, Big Blue created a credible storyline that can be summarized in three key thoughts: larger, faster and simpler.  As many of you may know, I worked for IBM until early 2008, so my views on this announcement are informed by my knowledge of how the company works or, perhaps, used to work.  Last Wednesday, I came away impressed.  Here were a number of diverse, individual product developments that conform to a single theme across different lines and businesses.

Take BLU Acceleration as a case in point.  The headline, of course, is that DB2 LUW (on Linux, Unix and Windows) 10.5 introduces a hybrid architecture.  Data can be stored in columnar tables with extensive compression, making use of in-memory storage and taking further advantage of the parallel and vector processing techniques available on modern processors.  The result is an up to 25x improvement in analytic and reporting performance (and considerably more in specific queries) and up to 90% data compression.  In addition, the elimination of indexes and aggregates considerably reduces the need for manual tuning and maintenance of the database.  This is a direction long shown by smaller, newer vendors such as ParAccel and Vertica (now part of HP), so it is hardly a surprise.  IBM can claim a technically superior implementation, but more impressive is the successful retrofitting into the existing product base, the re-use of the technology in the separate Informix TimeSeries code base to enhance analytics and reporting there too, and the promise that it will be extended to other data workloads in the future.  It seems the product development organization is really pulling together across different product lines.  That's no mean feat within IBM.
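
To make the columnar idea concrete, here is a minimal Python sketch of why column-organized, dictionary-compressed storage speeds up analytic scans. It is a conceptual illustration only, not a description of DB2 BLU internals, and the table and column names are invented:

    # Conceptual sketch of column-organized storage with dictionary compression.
    # An analytic query that touches one column reads only that column's array,
    # and the encoded values can be scanned without reconstituting whole rows.

    rows = [
        {"region": "EMEA", "product": "A", "revenue": 120},
        {"region": "AMER", "product": "B", "revenue": 200},
        {"region": "EMEA", "product": "A", "revenue": 90},
        {"region": "APAC", "product": "C", "revenue": 150},
    ]

    # Build one array per column (column-organized layout).
    columns = {name: [row[name] for row in rows] for name in rows[0]}

    # Dictionary-encode a low-cardinality column: store small integer codes.
    dictionary = {value: code for code, value in enumerate(sorted(set(columns["region"])))}
    encoded_region = [dictionary[value] for value in columns["region"]]

    # Aggregate revenue for EMEA by scanning just two columns, comparing codes not strings.
    target = dictionary["EMEA"]
    emea_revenue = sum(
        revenue
        for code, revenue in zip(encoded_region, columns["revenue"])
        if code == target
    )
    print(emea_revenue)  # 210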

Another hint at the strength of the development team was the quiet announcement of a technology preview of JSON support in DB2 at the same time as the availability of 10.5.  JSON is one of the darlings of the NoSQL movement that provides significant agility to support unpredictable and changing data needs.  See my May 2012 white paper "Business Intelligence--NoSQL... No Problem" for more details.  As in its support for other NoSQL technologies, such as XML and RDF graph databases, IBM has chosen to incorporate support for JSON into DB2.  There are pros and cons to this approach.  Performance and scalability may not match a pure JSON database, but the ability to take advantage of the ACID and RAS characteristics of an existing, full-feature database like DB2 makes it a good choice where business continuity is a strong requirement.  IBM clearly recognizes that the world of data is no longer all SQL, but that for certain types of non-relational data, the difference is sufficiently small that they can be handled as an adjunct to the relational model through a "subservient" engine, allowing easier joining of NoSQL and SQL data types.  This is a vital consideration for machine-generated data, one of three information domains I've defined in a recent white paper, "The Big Data Zoo--Taming the Beasts".
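
The practical attraction of keeping JSON inside the same engine is that document data can be joined directly to relational data. The Python sketch below illustrates that idea only; the customer table, document fields and join key are invented for the example and say nothing about DB2's actual JSON interface:

    # Illustrative join of JSON-style documents with relational-style rows.
    import json

    # "Relational" customer rows (hypothetical table).
    customers = [
        {"customer_id": 1, "name": "Acme Corp"},
        {"customer_id": 2, "name": "Globex"},
    ]

    # Schema-flexible JSON documents, e.g. clickstream or support events.
    documents = [
        json.loads('{"customer_id": 1, "event": "ticket_opened", "severity": 2}'),
        json.loads('{"customer_id": 2, "event": "page_view", "path": "/pricing"}'),
        json.loads('{"customer_id": 1, "event": "page_view", "path": "/docs"}'),
    ]

    # Join the two on customer_id, the kind of SQL/NoSQL combination described above.
    by_id = {c["customer_id"]: c["name"] for c in customers}
    for doc in documents:
        print(by_id[doc["customer_id"]], "->", doc["event"])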

The announcement didn't ignore the little yellow elephant, either.  The PureData System family has been expanded with the PureData System for Hadoop, with built-in analytics acceleration and archiving, and provides significantly simpler and faster deployment of projects requiring the MapReduce environment.  And InfoSphere BigInsights 2.1 offers the Big SQL interface to Hadoop, an alternative file system, GPFS-FPO, with enhanced security and no single point of failure, as well as high availability.

While the announcement clearly targeted Big Data--at the Speed of Business, the underlying message, as seen above, is much broader.  This view is of an emerging information ecosystem that must be considered from a fully holistic viewpoint.  A key role, and perhaps even the primary role, for BigInsights / Hadoop is in exploratory analytics, where innovative, what-if thinking is given free rein.  But the useful insights gained here must eventually be transferred to production (and back) in a reliable, secure, managed environment--typically a relational database.  This environment must also operate at speed, with large data volumes and with ease of management and use.  These are characteristics that are clearly emphasized in this announcement.  They are also key components of the integrated information platform I described in the Data Zoo white paper already mentioned.  Missing still are some of the integration-oriented aspects such as the comprehensive, cross-platform metadata management, data integration and virtualization required to tie it all together.  IBM has more to do to deliver on the full breadth of this vision, but this announcement is a big step in the right direction.

By: Barry Devlin
Source: http://smartdatacollective.com/barry-devlin/117406/big-data-all-data-puredata-blu-data

The value of IBM PureSystems

In our first three pieces, we discussed what IBM PureSystems is and how it works in principle. But as with any new IT system you might be thinking about deploying, the biggest question is whether it will improve your business competitiveness, and in particular whether you will be saving money. In this video, IBM PureSystems Chief Strategy Officer John Warnants looks at some example areas where the value proposition for PureSystems is particularly strong. For example, PureSystems supports both X86 and Power processors, and if security is important the super-secure PowerVM hypervisor is available, among a host of other benefits. The advantages come both from the PureSystems components and the integrated system as a whole. Going all the way to the Pattern-based delivery we have discussed in a previous piece, the cost reduction can be significant. The speed of provision with Patterns means services can be deployed only when they are needed, and not permanently, just in case they will be used, which is a much more efficient arrangement.

By: ITProPortal

IBM Is Investing $1 Billion In Flash Storage R&D


IBM recently announced its plans for flash storage.  IBM is going to spend $1 billion on research and development and on a series of systems that utilize solid-state drives.  The company also announced a new FlashSystem line of appliances for businesses, based on technology acquired from Texas Memory Systems.  The FlashSystem line uses flash memory, which is 20 times faster than a standard spinning hard drive, and can store up to 24TB of data.

Ambuj Goyal, general manager of IBM’s System Storage & Networking, said that it makes financial sense for flash storage to be spread widely in data centers.  Flash memory will be integrated into hybrid systems in IBM’s hardware lineup, such as PowerSystems and DB2 PureScale.
“The economics and performance of Flash are at a point where the technology can have a revolutionary impact on enterprises, especially for transaction-intensive applications,” stated Goyal. “The confluence of ‘big data,’ social, mobile, and cloud technologies is creating an environment in the enterprise that demands faster, more efficient, access to business insights, and Flash can provide that access quickly.”
IBM is setting up 12 Centers of Competency around the world where potential clients will be able to test out IBM’s Flash servers.  The centers will be in India, China, France, Germany, Singapore, Japan, South America, the U.K. and the U.S.

By: Pulse 2.0
Source: http://pulse2.com/2013/04/13/ibm-is-investing-1-billion-in-flash-storage-rd-83989/

mardi 9 avril 2013

IBM makes next-gen transistors that mimic the human brain

IBM has found a way to make transistors that could be fashioned into virtual circuitry that mimics how the human brain operates.
The new transistors would be made from strongly correlated materials, such as metal oxides, which researchers say can be used to build more powerful -- but less power-hungry -- computation circuitry.
"The scaling of conventional-based transistors is nearing an end, after a fantastic run of 50 years," said Stuart Parkin, an IBM fellow at IBM Research. "We need to consider alternative devices and materials that operate entirely differently."
Researchers have been trying to find ways of changing conductivity states in strongly correlated materials for years. Parkin's team is the first to convert metal oxides from an insulating to a conductive state by applying oxygen ions to the material. The team recently published details of the work in the journal Science.
In theory, such transistors could mimic how the human brain operates in that "liquids and currents of ions [would be used] to change materials," Parkin said, noting that "brains can carry out computing operations a million times more efficiently than silicon-based computers."

By: Joab Jackson
Source: http://cw.com.hk/news/ibm-makes-next-gen-transistors-mimics-human-brain


IBM on the importance of network virtualization to a virtualized environment

Summary: IBM's Inder Gopal discusses his view that a balanced, high-performance, reliable virtualized environment requires a complete array of virtualization technology, including virtual processing, virtual storage and virtual networking working in harmony.

Inder Gopal, Vice President, IBM System Networking Development Systems and Technology Group, stopped by to discuss his views of what technology is required to create a balanced, reliable, high performance and agile data center environment. He also came by to introduce IBM's Software Defined Network for Virtual Environment (SDN VE).

What does it take to create a balanced data center environment?

Inder pointed out that, in IBM's view, such a balanced data center environment can only be created with a carefully selected mix of virtualization technology, including processing virtualization, storage virtualization and network virtualization (see Sorting out the different layers of virtualization for more information on the layers of virtualization technology). I pointed out that access virtualization, application virtualization and both management and security technology are also required.
He made the point that creating an environment that best meets an organization's requirements usually means deploying a mix of technologies coming from many vendors and is likely to also include many different system architectures and operating systems.
Then he turned back to the discussion of IBM's Software Defined Network for Virtual Environments.

Software Defined Network for Virtual Environment

IBM believes that creating a flexible, agile virtual environment requires the following networking components:
  • A network hypervisor
  • Management and security tools that support simple, easy operation in a virtual environment
  • A set of tools that allow the creation of an overlay network, making it possible to view the virtual network environment as just a traditional Ethernet-based LAN
  • Traditional network switches and OpenFlow-enabled network switches that make it possible to easily do the following:
    • Create virtual networks that can link together systems supporting components of a distributed, multi-tier, multi-site workload
    • Support a multi-tenant environment that isolates one virtual network from others
  • The environment must support multiple layers of communication and management, including the following:
    • The data plane - the layer that carries data packets from one place to another
    • The control plane - the layer containing the logic that controls where data packets go and who can see them
    • The management plane - the layer allowing a network administrator to log into a device and configure how devices work
The company is offering a collection of hardware and software products designed to help organizations design and implement such a virtual environment.
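
To illustrate the separation of concerns listed above, here is a small Python sketch of an overlay network: a control plane keeps a per-tenant mapping of virtual addresses to physical hosts, and a data-plane function encapsulates each frame with a tenant ID so that tenants stay isolated from one another. This is a generic, VXLAN-style illustration under assumed names, not IBM SDN VE's implementation:

    # Minimal overlay-network sketch: control plane (mapping) + data plane (encapsulation).

    class ControlPlane:
        """Decides where frames go: maps (tenant_id, virtual_mac) -> physical host."""
        def __init__(self):
            self.locations = {}

        def learn(self, tenant_id, virtual_mac, physical_host):
            self.locations[(tenant_id, virtual_mac)] = physical_host

        def lookup(self, tenant_id, virtual_mac):
            return self.locations.get((tenant_id, virtual_mac))

    def encapsulate(frame, tenant_id, control_plane):
        """Data plane: wrap the frame with its tenant ID and the destination host."""
        host = control_plane.lookup(tenant_id, frame["dst_mac"])
        if host is None:
            raise LookupError("destination unknown to this tenant's virtual network")
        return {"tenant_id": tenant_id, "outer_dst_host": host, "inner_frame": frame}

    cp = ControlPlane()
    cp.learn(tenant_id=10, virtual_mac="aa:bb", physical_host="host-1")
    packet = encapsulate({"dst_mac": "aa:bb", "payload": "hello"}, tenant_id=10, control_plane=cp)
    print(packet["outer_dst_host"])  # host-1

    # A second tenant reusing the same virtual MAC stays isolated: its lookup fails
    # unless its own control-plane entry exists.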

Getting from today's networks to software defined networking

Our conversation then turned to a discussion of the golden rules of IT (see Reprise of the Golden Rules of IT for more information on the rules). I pointed out that to be successful in today's world, it is necessary for suppliers to help organizations get from where they are today to a desired future state without having to abandon what they're doing and start over. Organizations don't rip out technology and replace it just for the joy of using new technology.
Inder agreed and said that is the reason IBM is so careful to design products and services that recognize that organizations need to continue to be productive even while they're carefully implementing their future. This means, he pointed out, that the company's products are designed to work in a multi-vendor, multi-platform, multi-site environment.
Where a networking product wasn't designed to operate in a virtual environment, IBM supplies tools such as the Distributed Virtual Switch, the OpenFlow Controller and, sometime in mid-2013, the Software Defined Network for Virtual Environments.

Snapshot analysis

I've long been a proponent of implementing an architecture and only acquiring products and services that fit into that architecture. The architecture should be based, as much as possible, on international and industry standards rather than just on products and technologies from a single vendor. It was refreshing to speak with an industry executive who appeared to operate based upon the same principles.
I would urge IT architects to learn more about what IBM is doing with virtualization technology in general and network virtualization in particular.

Source: http://www.zdnet.com/ibm-on-the-importance-of-network-virtualization-to-a-virtualized-environment-7000013677/

IBM Compresses 100Gbps Network Onto A CMOS Chip


IBM researchers have made cheap, low-power analogue-to-digital converters (ADCs) which could allow 100Gbps networks over long-distance fibre, with a cheap device to send and receive data at each end.
The breakthrough will allow cheaper and simpler devices that can convert the signals on fibres into digital information. This will cut energy use and could revolutionise mobile phones and radio astronomy, while making high speeds easier and cheaper to deliver to a wider range of devices. The technology, produced by IBM researchers, working with Ecole Polytechnique Fédérale de Lausanne (EPFL) in Switzerland, is based around tiny ADCs integrated onto the same CMOS chips which hold other functions.

Martin Schmatz, IBM 

Smaller ADCs than anyone else

Although light signals are sent along fibres in digital form, by the time they reach the far end of the cable they have to be processed to clean up and receive the data which was sent, explained Martin Schmatz, who manages the systems division of IBM’s Zurich lab: “The problem is fibre has dispersion,” he explained, so the signal is blurred when it reaches its destination. “During transit, some energy is dispersed to other frequencies. We need to shift that bit of energy back to the right place.”
Currently, long distance links use “dispersion compensating” fibres, in which the main fibre is combined with another which has the opposite dispersion characteristics. This is complex and expensive says Schmatz – and the IBM breakthrough allows the signal to be cleaned up with dispersion cancelled out electronically.
“You can actually correct for dispersion by a mathematical approach,” Schmatz told TechWeekEurope. “If you digitise the signal, you can then apply mathematical functions to correct for the dispersion.” Telephone lines regularly use maths to correct for signal dispersion but – as Schmatz pointed out – they only have to work at around 56kbps on digitised voice traffic. Digital correction on a fast fibre network would have to work far faster.
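
As a concrete (and greatly simplified) illustration of correcting for dispersion "by a mathematical approach", the sketch below digitises a signal and removes an assumed frequency-dependent phase distortion in the frequency domain with NumPy. The quadratic phase profile is invented for the example and is not IBM's actual signal processing:

    import numpy as np

    def compensate_dispersion(samples, dispersion_phase):
        """Undo a known per-frequency phase distortion by applying its inverse in the FFT domain."""
        spectrum = np.fft.fft(samples)
        corrected = spectrum * np.exp(-1j * dispersion_phase)
        return np.fft.ifft(corrected)

    n = 1024
    freqs = np.fft.fftfreq(n)
    phase = 2 * np.pi * 50.0 * freqs ** 2        # assumed quadratic (chromatic-dispersion-like) profile
    clean_tx = np.sin(2 * np.pi * 32 * np.arange(n) / n)

    # Simulate the fibre blurring the signal, then correct it digitally at the receiver.
    blurred_rx = np.fft.ifft(np.fft.fft(clean_tx) * np.exp(1j * phase))
    recovered = compensate_dispersion(blurred_rx, phase)
    print(np.allclose(recovered, clean_tx))      # True, within numerical precision
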
Fibre networks carry long-haul Ethernet data at 100Gbps, but viewing them as analogue signals and cleaning them up requires eight-bit resolution, said Schmatz: “If you have four channels, all of a sudden you have 2.5 Terabits per second (Tbps) coming out of the ADC.”
ADCs have normally been implemented as separate components made with different technology, because traditional CMOS circuits aren’t optimised for analogue signals. However,  it is expensive and inefficient to take data at that rate off one chip and onto another, Schmatz told us.
The IBM team managed to show it is possible to make a very tiny ADC out of standard 32nm CMOS, opening the way to integrating many of these components onto the same chip as the rest of the network hardware.
“It is power inefficient to ship the data from an ADC to a CMOS chip,” he said. “We were able to show that it is possible to use a plain vanilla digital process on a digital CMOS chip, not optimised for analogue, to build such an ADC which has an extremely high content of analogue circuits.”
These ADCs are very power efficient, and consume a tiny amount of the CMOS chip’s area: “The majority of the power – and area – needs to be assigned to the digital circuit. The ADCs need to be tiny.”
100Gbps Ethernet signals are sent as four 25Gbps streams, separated by phase and polarisation.  Because the Nyquist-Shannon theorem says they need “oversampling”, a 100Gbps Ethernet channel would actually need four 64Gbps ADCs.  That’s faster than can be easily done, so the group proposes using more ADCs, each of which handles a time slice of the total signal (so it might use 64 ADCs, each at 1Gbps).
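
The time-slicing idea is simple to sketch: distribute successive samples round-robin across many slower converters, then re-interleave their outputs. The converter count and sample count below are placeholders chosen for readability, not the figures in IBM's design:

    # Time-interleaved ADC sketch: N slow sub-ADCs together cover the full sample rate.
    def split_round_robin(samples, num_adcs):
        """Each sub-ADC sees every num_adcs-th sample (its 'time slice')."""
        return [samples[i::num_adcs] for i in range(num_adcs)]

    def reassemble(slices):
        """Interleave the sub-ADC outputs back into the original sample order."""
        out = []
        for i in range(max(len(s) for s in slices)):
            for s in slices:
                if i < len(s):
                    out.append(s[i])
        return out

    signal = list(range(16))             # stand-in for digitised samples
    slices = split_round_robin(signal, num_adcs=4)
    assert reassemble(slices) == signal  # interleaving 4 quarter-rate ADCs recovers the full stream
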
“We will end up with 256 ADCs on one side of the chip and a lot of processing on the other,” predicted Schmatz. “Is this possible? Absolutely!”

IBM fast ethernet ADC 

Radio telescopes and phones

The technique could be available in 2014, he said, and will have applications beyond fast long-haul Ethernet. “There are many applications where the signal you are using is in the analogue domain,” he said.
For instance, a set-top box could take in the whole frequency spectrum from a cable modem and sort out the individual channels in a single chip.
The approach could also revolutionise radio astronomy, which is all about finding signals in a big analogue spectrum. Schmatz hopes to provide signal processing equipment for the Square Kilometer Array (SKA), an international project to build the world’s largest and most sensitive radio telescope.
Another possibility would be to have one chip to handle all the radio signals coming into a mobile phone. “You could have one wideband antenna, sample everything from 400MHz to 6GHz – and then sort it out by number crunching so you have Wi-Fi and Bluetooth at 2.4GHz, and 3G and 4G at frequencies like 900MHz – and you do all of that in the digital domain.”
“Most of the ADCs on the market today weren’t designed to handle the massive Big Data applications we are dealing with today – it’s the equivalent of funnelling water through a straw from a fire hose.”

By: Peter Judge 
Source: http://www.techweekeurope.co.uk/news/ibm-adc-breakthrough-100gbps-big-data-108364 

lundi 8 avril 2013

IBM releases Hadoop box and database technology for quicker data insights

IBM announced a new PureData appliance for Hadoop and technology for speeding up analytic databases. The announcements come at a good time, with data sets growing and enterprises hankering for easy and fast analysis capability.

IBM talked up the latest ways in which it has sped up databases and introduced a Hadoop appliance at a press and analyst event in San Jose, Calif., on Wednesday. The developments aim to bring enterprises closer to running analytics on more types and greater quantities of data as close to real time as possible — a higher and higher priority as big-data projects proliferate.
In the long run, as more and more data piles up and in greater varieties, IBM wants to help to prevent its customers from drowning in the deluge of data and instead give them tools to get better results, such as more revenue, said Bob Picciano, general manager of information management at IBM Software. That’s why tools have to be fast, capable of working on huge data sets and easy to use.
Toward that end, IBM announced BLU Acceleration. When a user of an IBM database such as DB2 runs a query, BLU quickly slims down a big set of data to the amount needed for analysis and spreads tiny workloads across all available compute cores to give a result. One feature of BLU — data skipping — essentially fast-forwards over the data that’s not needed and homes in on the small area that is. And with BLU, data can stay compressed for almost the entire duration of the analysis. IBM claimed BLU produces results a thousand times faster than a previous version of the DB2 database without BLU in some tests.
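
Data skipping is easy to picture with a small sketch: keep a tiny min/max synopsis per block of column values and scan only the blocks whose range could contain matches. This is a generic zone-map illustration under invented names, not IBM's implementation:

    # Data-skipping sketch using per-block min/max synopses ("zone maps").
    def build_synopses(values, block_size):
        blocks = [values[i:i + block_size] for i in range(0, len(values), block_size)]
        return [(min(b), max(b), b) for b in blocks]

    def count_matches(synopses, low, high):
        """Scan only blocks whose [min, max] range overlaps the query predicate."""
        scanned = matches = 0
        for block_min, block_max, block in synopses:
            if block_max < low or block_min > high:
                continue                  # skip: no value in this block can qualify
            scanned += len(block)
            matches += sum(low <= v <= high for v in block)
        return matches, scanned

    sales = list(range(1_000_000))        # stand-in for a sorted/clustered column
    synopses = build_synopses(sales, block_size=10_000)
    matches, scanned = count_matches(synopses, low=42_000, high=43_000)
    print(matches, scanned)               # 1001 matches found after scanning only 10,000 of 1,000,000 values
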
The IBM PureData System for Hadoop appliance.
IBM also unveiled another IBM PureData box tailored for big-data purposes, this time around Hadoop. Previous boxes in the line include the PureData System for Analytics. The IBM PureData System for Hadoop appliance will become available later this year. It enables customers to start loading data in 90 minutes, compared with two or three weeks for a company’s Hadoop instance in a data center, said Phil Francisco, vice president of Netezza product management and product marketing at IBM Software. The box can store data processed in Hadoop right in the box, a perk for companies facing retention requirements.
Look for IBM to offer more big-data hardware and software. The company has spent $16 billion on big-data and analytics acquisitions, and it wants to “spend as much organically as well as inorganically to figure out what clients need in this space,” said Inhi Cho Suh, vice president of information management product strategy at IBM Software. Meanwhile, Picciano said IBM will soon come out with a way to do for many servers what BLU Acceleration does with the processors inside a single server.
The new IBM products sound like they could speed up analytics. If enterprises don’t believe the need is there now, they will as data gets bigger.

By: Jordan Novet
Source: http://gigaom.com/2013/04/03/ibm-releases-hadoop-box-and-database-technology-for-quicker-data-insights/

mercredi 3 avril 2013

City of Bunbury Selects IBM PureSystems to Take the Lead with Government Cloud

Sydney, Australia - IBM (NYSE: IBM) today announced that the City of Bunbury, one of the largest regional local governments in Western Australia, has selected IBM’s PureSystems technology to streamline and simplify its IT infrastructure and provide a cloud-ready environment to deliver future initiatives such as local government private cloud computing.
Faced with exponential growth of data and server sprawl, the City needed a scalable solution to address not only current needs but also to take it well into the future. Working closely with IBM Business Partner Stott + Hoare, Bunbury selected the IBM PureFlex System, which integrates servers, IBM Storwize V7000 storage, networking and software into a highly automated, simple-to-manage system. Bunbury, along with Stott + Hoare, will virtualize its desktops and mobile devices to reside on the IBM PureFlex system for greater organisational flexibility and ongoing cost reductions.
“Our end goal is to not only deliver exceptional 24/7 levels of service to Bunbury’s residents, but establish the City as a technology leader whose IT systems can become a template for other councils throughout the state and beyond; PureFlex offers us this long-term technical solution,” said City of Bunbury CEO, Andrew Brien. “We are on the cusp of significant reform where, for instance, the National Broadband Network is only one of several macro changes which will radically shape governance in Bunbury and Australia more broadly. Our primary goal has always been to make sure our IT systems can handle whatever changes may occur while remaining cost-effective and efficient. That way we know that when our own functions of government have to adapt and expand, the technology we use will be able to match us at every turn.”
The City of Bunbury expects the PureFlex-enabled virtual server environment to reduce hardware replacement costs, power consumption and time spent maintaining the multiple platforms that currently sustain the City’s IT operations. Another key factor in its selection was the ease of managing mass storage and disaster recovery.
The IBM PureSystems family offers clients an alternative to current enterprise computing models, where multiple and disparate systems require significant resources to set up and maintain.
The true value of PureSystems is the fully integrated and pre-tuned platforms. The PureFlex System enables organisations to more efficiently create and manage an infrastructure. PureApplication System helps organisations reduce the cost and complexity of rapidly deploying and managing applications. PureData System is tuned for cloud computing and can consolidate more than 100 databases on a single system. In addition to the common web application patterns supported by PureApplication System, the combination of both PureData and PureApplication Systems can be used for end to end transaction workloads.
For more information on IBM PureSystems visit: www.ibm.com/press/pure
For more information on City of Bunbury visit: http://www.bunbury.wa.gov.au/
For more information on Stott + Hoare visit: http://www.stotthoare.com.au/
IBM, the IBM logo, ibm.com, PureSystems, PureFlex, PureApplication, Power, Smarter Planet and the planet icon are trademarks of International Business Machines Corporation, registered in many jurisdictions worldwide. Other product and service names might be trademarks of IBM or other companies. For a current list of IBM trademarks, please see www.ibm.com/legal/copytrade.shtml
All other company, product or service names may be trademarks or registered trademarks of others. Statements concerning IBM’s future development plans and schedules are made for planning purposes only, and are subject to change or withdrawal without notice. Reseller prices may vary.


By: Web wire
Source: http://www.webwire.com/ViewPressRel.asp?aId=172526

mardi 2 avril 2013

Open standards for better portability in the cloud

Open standards in the cloud benefit everyone and can dramatically reduce the time it takes to move between public and private clouds, to deploy software and to implement complex application topologies.
IBM is working with independent software vendors (ISVs) and the OASIS Topology and Orchestration Specification for Cloud Applications (TOSCA) working group to define and deliver standards for pattern definition. More information can be found here.
At IBM Pulse this year, we hosted the Open Cloud Summit, where leaders in the cloud standards community from IBM and other vendors presented on the importance of standards for successful cloud deployments. The summit also featured a TOSCA demonstration that highlighted TOSCA implementations by IBM, SAP and others. IBM also had a pedestal in the solutions expo further promoting and informing attendees of the ongoing standards work.
In the IBM PureApplication System we are already working with other vendors like SAP to deliver on the promise of open standards and portability in the cloud. Pattern definition standards benefit implementers and customers alike. They allow customers to easily move from private clouds to public clouds, and allow for a level of abstraction above the underlying deployment, provisioning and orchestration technologies.
Taking complex distributed business applications and delivering deployment patterns that are executable out of the box helps customers get immediate value from their packaged software purchases. This IBM PureApplication System case study cites a 47 percent reduction in labor costs for software deployment, and in many cases steps are eliminated—freeing up resources who used to spend time provisioning virtual servers, configuring monitoring and backup software, installing and patching base operating systems, infrastructure and middleware software, and installing application software.
It’s not just about money either. Take any application that needs a few environments, such as production, test, development and so on. The traditional deployment models have either been three separate installation processes, or taking snapshots of images and re-configuring them. So either we have three different installations or three versions of the same error—or by luck everything works out perfectly. Taking into consideration a large application like SAP, where runbooks can be hundreds of pages of manual steps, you can start to understand how automation based on patterns developed by the experts who wrote the application can deliver huge productivity improvements and ongoing operational efficiencies.
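
The point about patterns versus runbooks can be sketched in a few lines: define the application topology once and instantiate it per environment, rather than repeating (and diverging on) manual steps three times. The component names and sizes below are invented for illustration and are not TOSCA syntax or an actual SAP pattern:

    # Pattern sketch: one topology definition, instantiated per environment.
    PATTERN = {
        "web_tier":   {"image": "app-server", "instances": 2},
        "db_tier":    {"image": "database",   "instances": 1},
        "monitoring": {"image": "monitor",    "instances": 1},
    }

    ENV_OVERRIDES = {
        "development": {"web_tier": {"instances": 1}},
        "test":        {},
        "production":  {"web_tier": {"instances": 4}},
    }

    def deploy(environment):
        """Merge the shared pattern with per-environment overrides and return the plan."""
        plan = {}
        for name, spec in PATTERN.items():
            plan[name] = {**spec, **ENV_OVERRIDES.get(environment, {}).get(name, {})}
        return plan

    for env in ("development", "test", "production"):
        print(env, deploy(env))   # identical topology everywhere; only the sizing differs
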
There is no question that this is the future of software deployment, and all organizations should be taking a close look at how much time and money they can save in this type of environment. As standards evolve and more vendors adopt them, customers will benefit from both efficiencies and portability across public and private clouds, as well as solutions like the IBM PureApplication System. All enterprise and infrastructure architects should be taking a close look at these standards.

IBM takes a step towards building artificial semiconductor synapses

The scientists and researchers who try to build computers that attempt to duplicate some of the brain’s capabilities, even crudely, have long faced a significant problem. Our best conventional technology looks nothing like the biological system it attempts to emulate, nor is this simply an issue of scale. Computer processors are built on 2D planar silicon, connect via controller hubs (both on-die and across server nodes), and use a simple binary system for determining whether or not a given transistor is on or off.
People tend to think of the problem as being one of scale. It isn’t. As John Hewitt explored in an article in January, the problem isn’t that transistors are too big, it’s that neurons connect to each other in 3D and are decidedly non-binary. The release of neurotransmitters in the brain is governed by the movement of calcium ions across cell membranes. Large influxes of calcium into the synapse produce larger downstream effects.
Synapse structure
IBM researchers have detailed a new discovery that brings us one step closer to bridging the gap between synapses and silicon. The research team has demonstrated a method for transforming an insulating layer into a conductive material by exposing it to a charged fluid. VO2 (vanadium(IV) oxide) is a compound with a particularly odd (and interesting) habit: it transforms from an insulator to a conductor depending on its temperature. That’s the sort of capability that makes scientists giddy, but it’s just the starting point for what the IBM team found.
By exposing the VO2 thin film to an ionic fluid, the scientists were able to stabilize the metallic phase of VO2 down to five degrees Kelvin. Normally, VO2 is an insulator below 340K (68C) and metallic/conductive at 68C or above. The previous explanation for this dramatic change in behavior is called electrolyte gating. This theory posited that the change in VO2's transition capabilities was caused by the introduction of the ionic liquid into the gate structure.
The research team tested this by cleaning the heck out of their test substrate. The VO2 thin film was examined using X-ray photoelectron spectroscopy (XPS) — no fluid was found. The treated VO2 film, meanwhile, could still be flipped between low and high conductance by a sufficient voltage change. The team confirmed its findings on VO2 thin films over different substrates, to make certain that particular properties of the underlying material weren’t the cause of the results.
Fluidic channel (image credit: New York Times)
The next step, according to the team, is to try and create larger fluidic circuits that flip on or off depending on local fluid concentrations. “We could form or disrupt connections just in the same way a synaptic connection in the brain could be remade, or the strength of that connection could be adjusted,” Dr. Parkin told the New York Times. Parkin believes the team will likely tackle a small memory array next.
What’s exciting about this work isn’t the short-term implications, but the long-term goals. It’s extremely difficult to model the behavior and function of a system if you can’t build a representative model of it. The Blue Brain project is one of the world’s leading efforts to simulate neuronal structure. The last major project milestone was the simulation of a cellular mesocircuit with 100 neocortical columns and a million cells in total. Doing so required the use of an IBM Blue Gene/P, one of the most power-efficient supercomputers in existence. At present, simulating one simplified component of a rat brain requires multiple orders of magnitude more power than an organic brain uses.
Blue Brain project goals
And that’s why advances like this matter. The ability to modify a material’s insulating properties without applying electricity could be critical to future attempts to scale brain modeling downward. Creating circuits that model synapse functions (even if they do so imperfectly and very simply) can help us understand how their biological counterparts function. It could dramatically reduce the power consumption (and waste heat) generated by such attempts, just as the advent of modern semiconductor manufacturing shrank computers from machines that filled warehouses to devices that fit in pockets.
It’s an exciting, if small, step in the right direction.

Why IBM Made a Liquid Transistor

IBM materials advance shows another promising path to replace the foundation of today’s computing technologies. 

Researchers at IBM last week unveiled an experimental new way to store information or control the switching of an electronic circuit.
The researchers showed that passing a voltage across electrolyte-filled nanochannels pushes a layer of ions—or charged atoms—against an oxide material, a reversible process that switches that material between a conducting and nonconducting state, thus acting as a switch or storing a bit, or a basic “1” or “0” of digital information.
Although it’s at a very early stage, the method could someday allow for very energy-efficient computing, says Stuart Parkin, the IBM Research Fellow behind the work at the company’s Almaden Research Lab in San Jose, California.  “Unlike today’s transistors, the devices can be switched ‘on’ and ‘off’ permanently without the need for any power to maintain these states,” he says. “This could be used to create highly energy-efficient memory and logic devices of the future.”
Even a small prototype circuit based on the idea is two to four years off, Parkin says. But ultimately, “we want to build devices, architecturally, which are quite different from silicon-based devices. Here, memory and logic are fully integrated,” he says.