Sunday, August 16, 2015

IBM Is Teaching Watson To Interpret Medical Images

But don't expect it to threaten radiologists' job security anytime soon.

By Joe Satran

 


In the latest sign that the singularity is nigh, IBM announced last week that it would start teaching its ultra-fast computer system "Watson" to be something like a robotic radiologist.
The goal is for Watson -- most famous for beating human opponents on "Jeopardy!" -- to be able to interpret medical images from sources such as CT scans, electrocardiograms and MRIs, as well as photographs of skin conditions such as melanoma.
IBM has already started training Watson to analyze visual data -- "to see," the company says. The supercomputer will soon bring this ability to bear on a trove of 30 billion medical images that IBM acquired in its recent $1 billion purchase of health tech company Merge, to figure out how to distinguish a normal result from an abnormal one.
So does that mean that your radiologist cousin could soon be out of a job? That the next time you get an MRI on your bum knee, you'll hear the results in C-3PO's voice?
Not quite. At least not according to radiologists -- admittedly not an unbiased group on this issue.
Dr. John Eng, a radiologist and machine learning expert at Johns Hopkins Hospital, was dubious that Watson will actually be able to compete with a human radiologist when it comes to visual diagnoses anytime soon.

"It seems like the claims are being made that Watson is going to look at images and make general diagnoses from those images and that seems like it's a ways off," he said.
That's because interpreting radiologic images is arguably one of the toughest visual reasoning tasks that humans take on -- far more difficult than, say, identifying a specific person in a photo posted on Facebook. "The human brain is a remarkable pattern recognition machine. It's going to be difficult to beat the brain," said Dr. Daniel Sodickson, the vice chair for research in radiology at the New York University School of Medicine.

But if Watson can't beat radiologists as easily as it beat other "Jeopardy!" contestants, it can join them and assume a key role in patient care.
"As good as we are as radiologists, and as much training as we have, there are still things that we miss on images every day," said Dr. Michael Recht, chair of radiology at the NYU School of Medicine. "And the goal would be to have aids that would help us make sure we wouldn't miss things."
Computers already help radiologists around the country interpret mammograms. Programs highlight potential problem areas on a patient's images, allowing the radiologist who examines them to spend their time most efficiently. And leading radiology departments have also started to adopt similar programs to assist with certain types of CT scans and MRIs -- for example, by tracking the size of abnormal growths more precisely than the human eye is able to.
Watson -- or some other artificial intelligence like Watson -- could, in the not so distant future, take that type of work further and act as a first filter for all sorts of medical images that are later examined by doctors. That could help them catch serious problems that are hard to see with the naked eye. A supercomputer could also act as a kind of second opinion, helping to confirm a doctor's suspicions about a somewhat unusual diagnosis. That, in turn, could cut down on redundant testing, which saves patients time, money and dangerous radiation exposure.
Watson could serve a particularly crucial role in areas underserved by advanced medicine, suggested Dr. Kimberly Amrami, a musculoskeletal radiologist at the Mayo Clinic in Minnesota.

"If I were a physician in a remote part of sub-Saharan Africa, say, I might have access to a computer, but not a bunch of people with specialized knowledge," she said. "So Watson could serve as a first pass and help determine whether, based on this exam, you need another more advanced, more expensive test, or consultation from an expert far away."
As for the idea that a computer could ever replace radiologists completely, Amrami was highly skeptical. She noted that some people worried about that happening when computer-assisted diagnostics first started to crop up decades ago -- but time has proven them wrong.
"When we went from film to digital, people were worried, but that enhancement in our technology actually made us more important," she said. "So I think that the same will be true here. Watson will only make us better radiologists."

Read more : http://www.huffingtonpost.com/entry/ibm-watson-radiology_55cbccf9e4b0898c48867c56?linkId=16277120
Article written by : Joe Satran
Posted : 08/13/2015

More on Joe Satran here : http://www.huffingtonpost.com/joe-satran/

Wednesday, May 13, 2015

Power8 Iron To Take On Four-Socket Xeons

May 11, 2015

The high ground in the server market used to be large-scale SMP and NUMA machines with 16, 32, 64, or 128 sockets all lashed together to make a big shared memory machine. But that was back in the days when processors had one or maybe two cores, and the pace of Moore's Law increases in transistor etching technologies has allowed processor makers like Intel, IBM, and Oracle to cram a lot more cores and threads onto a single die. Enterprise workloads do not grow as fast as hyperscale and HPC parallel workloads, and that means that over time a fairly modest machine from 2015 has the oomph of big iron from a decade ago.

Such is the case with machines based on Intel’s “Haswell-EX” Xeon E7 v3 processors, announced last week and scaling up to eight sockets using Intel’s on-chip NUMA links, and the Power8 midrange and high end, which IBM is updating to go up against Xeon-based machines, particularly for Linux-based applications. The Platform covered the expansion of IBM’s high-end Power E880 machines, which scale out to a maximum of 16 sockets and 16 TB of main memory, last week. IBM is formally announcing these high-end boxes at its Edge2015 conference in Las Vegas this week, but details on the largest of its Power8 machines slipped out a bit early. The final machine to be added to the Power8-based Power Systems lineup from Big Blue is the Power E850, and it is a four-socket machine that is aimed squarely at systems from Hewlett-Packard, Dell, Oracle, Fujitsu, NEC, and others that employ Intel’s Xeon E7-4800 v3 CPUs, which similarly support four-way NUMA clustering in their hardware.
The Power E850 is a bit different from four-socket boxes that IBM has shipped in the past. For one thing, it includes some capacity-on-demand features that up until now have only been available on larger Power Systems machines. With capacity on demand, IBM ships a box loaded with processors and main memory and allows customers to activate it as needed either permanently or temporarily on a daily or monthly basis with utility pricing. The base Power E850 system ships with two processors and a full memory complement (based on 16 GB, 32 GB, or 64 GB memory sticks), and customers activate Power8 cores and memory in 1 GB increments.

The engines in the Power E850 are based on IBM's "Murano" dual-chip module, which puts two half-cored Power8 chips into a single Power8 socket and links them by a crossbar. IBM uses dual-chip modules for a number of reasons, the first being that a smaller chip generally has a higher yield than a larger chip, and this is important because IBM's 22 nanometer copper/SOI process, now controlled by GlobalFoundries, is nowhere near as high volume as Intel's 22 nanometer Tri-Gate process, which is used to make the Haswell-EX Xeon E7 v3 processors. The DCM variants of the Power8 chips have more I/O capacity on their PCI-Express controllers, at 48 lanes per socket instead of the 32 lanes per socket used in the single-chip variants of the Power8 chips found in the high-end Power E870 and Power E880 systems, which respectively scale to eight and sixteen sockets in a single image. These SCM Power8 variants do not need as much I/O bandwidth per socket because they have many more sockets in a system.
Initially, IBM will be supporting up to 2 TB of maximum main memory across the 32 memory slots, but Steve Sibley, director of worldwide product management for IBM’s Power Systems line, says that Big Blue will double it up again with its 128 GB memory modules, maybe later this year or early next. That will give the Power E850 the same maximum memory per socket as the top-end Power E880.



Intel supports 6 TB of memory across four sockets using 64 GB DDR3 or DDR4 memory right now, and IBM only supports DDR3 memory (which generally runs hotter for a given level of performance). The memory controllers in the Power8 chips were designed to be protocol agnostic and can support either DDR3 or DDR4 memory, but IBM tends to lag when it comes to memory because it likes to keep its memory costs low and its profit margins high.
Importantly, all of the enterprise-class Power8 machines make use of IBM’s “Centaur” memory buffer and controller chip, which has a chunk of 16 MB memory on it that is used to make up to 128 MB of L4 cache memory between the main memory and the L3 cache subsystem on the Power8 processors. The Xeon E7 v3 processors do not have L4 cache memory, and one of the reasons why IBM has been able to jack up the memory bandwidth on the Power8s relative to X86 architectures is this distributed L4 cache. Memory bandwidth and higher performance per core are the two key selling points that IBM is leveraging to promote the Power8 chip over Xeon alternatives. (No one talks much about AMD Opterons anymore, but that could change in a few years if AMD revamps its X86 server business as it plans to.)




As you can see, IBM is packing a lot of electronic components into the 4U chassis of the Power E850 system. The machine has four processors, and it is very likely that they run in a 190 watt thermal envelope (like the merchant silicon variants of the Power8 chips that Google, Tyan, and others are building systems based on) instead of the hotter 250 watt chips that IBM has used in its other and less densely packed systems. IBM is supporting three different variants of the Murano DCM: one with eight cores in the package that run at 3.72 GHz, one with ten cores that run at 3.35 GHz, and one with twelve cores that run at 3.02 GHz.
The Power E850 has fans and drives in the front, and specifically, five large fans on top that blow first across the memory sticks, then the processors and then the PCI-Express slots in the back. Drive bays are below this – eight 2.5-inch drives, four 1.8-inch SSDs, and one DVD drive – and four power supplies fill the bottom of the rack behind the drives.
Customers with more storage requirements can hang up to four I/O drawers off the Power E850, which can have as many as 40 PCI-Express peripheral cards in them. The Power E850 system has eight PCI-Express 3.0 x16 slots and three PCI-Express 3.0 x8 slots internally. It seems very unlikely, given its dense packaging, that customers could put more than a couple GPU coprocessors into this machine, and given the workloads that the Power E850 is aimed at – database, in-memory processing (particularly SAP HANA and IBM DB2 Blu), analytics, and fat HPC cluster nodes – it is very unlikely that anyone will add GPUs to this machine. The Power E850 comes with a dual-port 10 Gb/sec Ethernet card in one of its PCI-Express slots by default.

Pushing Linux On Power Hard

Another thing that IBM is doing to keep the Power E850 competitive with systems using Intel's Xeon E7 v3 processors (and the impending Xeon E5-4600 v3 variant of the Haswell chip, aimed at lower-cost four-socket machines) is offering what it calls Integrated Facility for Linux, or IFL, pricing on the Power8 cores in the Power E850 system. With the IFL approach, IT shops can restrict cores to running Linux, and if they do, IBM gives them a big price break. The idea is somewhat counterintuitive, and the net result is that IBM ends up charging customers using its own AIX Unix and IBM i proprietary operating system considerably more for processors and memory than it charges customers using Linux. This might be annoying to customers using AIX or IBM i, but they are a lot more captive (given that IBM is the only system supplier that supports them, which probably will not change as OpenPower systems come to market unless IBM decides to get out of the server hardware design and manufacturing business) and hence have fewer options than Linux customers. IBM has to drop prices for Linux systems no matter what.
The Platform is putting together an analysis to compare the compute performance and price/performance for Xeon, Power, and Sparc processors to try to get a better handle on how these platforms stack up. Sibley says that a fully loaded, four-socket Power E850 with 48 cores will have somewhere between 5 percent and 10 percent more oomph in terms of raw performance compared to a four-socket Xeon E7-4800 v3 machine with 72 cores. Pricing for the machines configured with a hypervisor and Linux will be about the same, he says, and with a 70 percent utilization guarantee – meaning, IBM is promising that customers can load this machine up to that level of CPU capacity and still have workloads run with snappy response time – the gap widens up because, at least according to IBM’s tests, VMware ESXi on Xeons does not handle multiple workloads as well as IBM’s PowerVM hypervisor on Power8s. The gap on workloads could be as much as a 30 percent to 40 percent price/performance advantage favoring the Power E850 over a Xeon E7 box.
Intel can – and does – show its own charts illustrating how it beats Power8 machines.

What IBM is equally focused on is showing how the Power E850 offers a significant performance boost to its own customers running AIX and Linux workloads. (The Power E850 does not support the IBM i operating system, which is sure to annoy a bunch of the company's customers who need more than a two-socket machine. They will be encouraged to buy a half-loaded Power E870 machine, which is more expensive and which is put into a higher software pricing tier, too, making IBM systems software and third-party application software more expensive.)


The Power 750 four-socket machines that IBM announced in April 2010 using its Power7 processors are looking a little long in the tooth now and are the main targets in the IBM customer base where Big Blue and its partners are expected to push the new Power E850 four-socket box. Core for core, this Power E850 offers about twice the performance of the Power 750. The Power 750 used single-chip variants of the Power7 and Power7+ processors, but the Power 760 tested out the dual-chip module idea, and that is why you don’t see a Power E860 in the lineup. In effect, the Power E850 is the DCM variant; IBM did play around with the idea of having a SCM variant of the Power8 in a four-socket machine, but for whatever reason it has decided against the idea, so far. The point is, the resulting machine has a lot more oomph than the prior two generations, and customers who use this class of machine for application serving, database clustering, and in-memory processing will be looking hard at the Power E850. (Up until when the Chinese government put the brakes on buying Power Systems iron from IBM nearly two years ago, the Power 550 and Power 750 machines were hot sellers in the Middle Kingdom.)

The Power E850 will be available on June 5. IBM's AIX 6.1 and 7.1 Unix variants are supported on the machine. The Power8 chip supports big endian byte ordering (used by prior Power chips) and little endian byte ordering (used by X86 processors), and now Linux variants can run in either mode. Red Hat Enterprise Linux 6.6 and 7.1 are both supported in big endian mode, and so is SUSE Linux Enterprise Server 11 SP3. In little endian mode, RHEL 7.1, SLES 12, and Canonical Ubuntu Server 14.04 and 15.04 are all supported. For Linux and AIX, IBM is also moving the machine down to the small software tier, rather than the medium one, which further lowers the price of application software on the box. This may or may not make software vendors happy. Pricing on the Power E850 was not available at press time, but we are digging.


Link : http://www.theplatform.net/2015/05/11/power8-iron-to-take-on-four-socket-xeons/
Written by :   
May 11, 2015 
 

With an infrared rainbow, IBM optical chip outpaces copper wires

Big Blue's researchers have demonstrated fiber-optic technology that could help computers break through today's speed limits by transferring data faster.
by Stephen Shankland





IBM Research engineers have pushed a step ahead with a technology called silicon photonics designed to loosen up bottlenecks in the computing industry.

Silicon photonics marries conventional chip technology with the superfast data-transfer abilities of fiber optics. Sending data as light over optical links instead of electrons over copper wires offers big advantages in both speed and transmission distance, but because it's expensive, it's mostly limited to long-haul uses like connecting computers in different buildings, cities and continents.

But IBM's researchers demonstrated a computer chip that can simultaneously transmit and receive four different colors of infrared light over a single fiber-optic line -- a technology called multiplexing. Each link can transmit 25 gigabits of data per second, for a total of 100Gbps. That's enough to transfer a Blu-ray disc's full-resolution 25 gigabyte movie every 2 seconds.
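As a quick sanity check, the arithmetic behind those figures works out from the article's own numbers (a small illustrative calculation, not from the original piece):

// Sanity check: 4 wavelengths x 25 Gbit/s each, multiplexed onto one fiber
var totalGbps = 4 * 25;                  // 100 Gbit/s aggregate
var gigabytesPerSec = totalGbps / 8;     // 12.5 GB/s
var movieGB = 25;                        // one full-resolution Blu-ray movie
console.log(movieGB / gigabytesPerSec);  // 2 -> one movie every 2 seconds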


This multiplexing-based speed, combined with the chip's all-in-one design, is an industry first, IBM said in an announcement Tuesday.
It's only a demonstration chip from a research lab at this stage, but silicon photonics work from companies like IBM, Intel and Luxtera could play a crucial role in advancing services like Google search, Microsoft Office Online and Facebook social networking that are housed in mammoth data centers packed with thousands of servers. Those servers today are often linked with copper lines, but more economical fiber-optic links could help unify those servers into a larger, more powerful block of computing power. That means more sophisticated online services.

"People would love to have a way to do inexpensive silicon-compatible photonics," said Linley Group analyst David Kanter. But the technology hasn't been easy to develop, he said.

Silicon photonics dovetails with a number of technologies like spintronics, exotic carbon materials and quantum computing that are in development to ensure the computing industry can keep up its steady pace of progress even after conventional silicon runs out of steam. The steady progress is embodied in a 50-year-old observation called Moore's Law, named after Intel co-founder Gordon Moore.




Commercial use later

IBM Research typically works a step ahead of what's commercially feasible, but Big Blue expects the work will pay off for the company later.
"Making silicon photonics technology ready for widespread commercial use will help the semiconductor industry keep pace with ever-growing demands in computing power driven by big data and cloud services," said Arvind Krishna, senior vice president and director of IBM Research. So-called big-data services rely on computationally intense analysis that reveals patterns in things like shopping, traffic or product demand.
The four-link technique could cut data-center fiber-optic costs roughly in half, said Will Green, manager of IBM Research's Silicon Photonics Group.

"Multiplexing four wavelengths into one optical fiber means that you can carry four times as much data per fiber, and therefore will need four times less fiber in your interconnect system," Green said. "This fact translates into an additional system-level cost savings for the data-center application on the order of two times on the cost of installed fiber."

Longer-term future

In the longer run, fiber-optic links could tie together components within a computer, too.
Power-consumption limits have capped the speed of processors -- few chips ever make it past 4GHz these days, meaning that their internal clock speed ticks 4 billion times per second. As a result, computing engineers have been looking for other ways to improve overall system performance, and silicon photonics could play a role in keeping processors fed with the data they need to work at maximum efficiency instead of spending large fractions of their time idle.
Key to silicon photonics will be bringing the optical transmitters and receivers -- transceivers -- closer to the processors that need to send and receive data. Those components eventually will be stacked one atop another, linked with a technology called a through-silicon via (TSV), said An Steegen, senior vice president of process technology at Imec, a large Belgian-based chip research group. It'll take years to bring that idea to fruition, she predicted.

Intel has had a long-running interest in silicon photonics and with a technology called Light Peak hoped to build an inexpensive fiber-optic link for computers. It never commercialized that project, though, instead partnering with Apple on the Thunderbolt technology that uses either copper or fiber-optic links that today reach up to 40Gbps.
That's pretty fast, but copper has significant length limits. Copper Thunderbolt cables can reach 3 meters, but fiber-optical alternatives from Corning are available in lengths up to 60 meters.

Link : http://www.cnet.com/news/with-an-infrared-rainbow-ibm-optical-chip-outpaces-copper-wires/
Credits : Stephen Shankland

More on Stephen Shankland here : http://www.cnet.com/profiles/shankland/



Monday, May 4, 2015

IBM Shows Off a Quantum Computing Chip

By Tom Simonite on April 29, 2015
On Technology Review.


A new superconducting chip made by IBM demonstrates a technique crucial to the development of quantum computers.

When cooled down to a fraction of a degree above absolute zero, the four dark elements at the center of the circuit in the middle of this image can represent digital data using quantum mechanical effects.

---- 
A superconducting chip developed at IBM demonstrates an important step needed for the creation of computer processors that crunch numbers by exploiting the weirdness of quantum physics. If successfully developed, quantum computers could effectively take shortcuts through many calculations that are difficult for today’s computers.

IBM’s new chip is the first to integrate the basic devices needed to build a quantum computer, known as qubits, into a 2-D grid. Researchers think one of the best routes to making a practical quantum computer would involve creating grids of hundreds or thousands of qubits working together. The circuits of IBM’s chip are made from metals that become superconducting when cooled to extremely low temperatures. The chip operates at only a fraction of a degree above absolute zero.

IBM’s chip contains only the simplest grid possible, four qubits in a two-by-two array. But previously researchers had only shown they could operate qubits together when arranged in a line. Unlike conventional binary bits, a qubit can enter a “superposition state” where it is effectively both 0 and 1 at the same time. When qubits in this state work together, they can cut through complex calculations in ways impossible for conventional hardware. Google, NASA, Microsoft, IBM, and the U.S. government are all working on the technology.

There are different ways to make qubits, with superconducting circuits like those used by IBM and Google being one of the most promising. However, all qubits suffer from the fact that the quantum effects they use to represent data are very susceptible to interference. Much current work is focused on showing that small groups of qubits can detect when errors have occurred so they can be worked around or corrected.

Earlier this year, researchers at the University of California, Santa Barbara, and Google announced that they had made a chip with nine superconducting qubits arranged in a line (“Google Researchers Make Quantum Computing Components More Reliable”). Some of the qubits in that system could detect when their fellow devices suffered a type of error called a bit-flip, where a qubit representing a 0 changes to a 1 or vice versa.

However, qubits also suffer from a second kind of error known as a phase flip, where a qubit’s superposition state becomes distorted. Qubits can only detect that in other qubits if they are working together in a 2-D array, says Jay Gambetta, who leads IBM’s quantum computing research group at its T.J. Watson research center in Yorktown Heights, New York.
A paper published today details how IBM’s chip with four qubits arranged in a square can detect both bit and phase flips. One pair of qubits is checked for errors by the other pair of qubits. One of the pair doing the checking looks for bit flips and the other for phase flips.
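In classical terms, the bit-flip half of that scheme resembles a parity check: the checker learns that one of the data bits flipped without reading the data itself. Here is a toy sketch of that analogy (purely an illustration, not IBM's actual scheme; phase flips have no classical counterpart, which is exactly why the 2-D grid matters):

// Toy classical analogy of error detection, not IBM's quantum circuit:
// a parity "syndrome" flags a flip without revealing the data bits.
function parity(bits) {
  return bits.reduce(function(a, b) { return a ^ b; }, 0);
}
var data = [1, 1];         // prepared with even parity
console.log(parity(data)); // 0 -> no error detected
data[0] ^= 1;              // a stray bit flip occurs
console.log(parity(data)); // 1 -> error detected, data still unread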

“This is a stepping stone toward demonstrating a larger square,” says Gambetta.
“There will be other challenges that emerge as the square gets bigger, but it looks very optimistic for the next few steps.”
Gambetta says his team had to carefully design its new chip to overcome interference problems caused by putting the four qubits so close together. They are already experimenting with a chip that has a grid of eight qubits in a two-by-four rectangle, he says.

Raymond Laflamme, director of the Institute for Quantum Computing at the University of Waterloo, Canada, describes IBM's results as "an important milestone [toward] reliable quantum processors." Tackling errors is one of the field's most important problems. "Quantum computing promises to have many mind-boggling applications, but it is hindered by the fragility of quantum information."

Truly solving that problem requires going one step further than IBM’s latest results, and correcting qubit errors as well as detecting them. That can only be demonstrated on a larger grid of qubits, says Laflamme. However, not all quantum computing researchers think that qubits like those being built at IBM, Google, and elsewhere will ever be workable in large collections. Researchers at Microsoft and Bell Labs are working to create a completely different design of qubit that should be less prone to errors in the first place (see “Microsoft’s Quantum Mechanics”).

Link : www.technologyreview.com/news/537041/ibm-shows-off-a-quantum-computing-chip/

Credits for the article go to : Tom Simonite

Wednesday, April 22, 2015

5 Reasons to Attend IBM World of Watson

We are less than a month away from the IBM World of Watson main stage event. On May 5-6, some of the top business leaders, thinkers and developers will converge in NYC's Silicon Alley to talk about the future of cognitive computing and how businesses around the world are using IBM Watson services to fuel next-level innovation across different industries.






Registration is filling up fast and we want to make sure you’re on the list!
Here are the top five reasons why we think you should be there:

1. Get access to the IBM Watson Dev team responsible for emerging cognitive services
2. Meet the top IBM executives and developers responsible for IBM Watson breakthrough technology
3. Preview dozens of commercial apps heading to market
4. See how some of the world's biggest companies are taking IBM Watson to scale
5. Trade insights and learnings with the tons of developers onsite, including at IBM Watson's first-ever hackathon
Take advantage of the early bird rate (ends 4/15) and register today to get your place at IBM World of Watson.

Learn more about IBM Watson and current use cases on our YouTube channel. Also visit us on SlideShare


Link : https://developer.ibm.com/watson/blog/2015/04/09/5-reasons-to-attend-ibm-world-of-watson/ 


Monday, April 20, 2015

IBM's supercomputer Watson wrote a cookbook, and it's coming out soon!



Watson, the IBM supercomputer best known for crushing "Jeopardy!" contestants at their own game, will publish its first-ever cookbook next week, according to CNN Money.


The book, "Cognitive Cooking with Chef Watson," is a collaboration between IBM's Watson and the Institute of Culinary Education that goes on sale April 14.

But this is far from an ordinary cookbook. This will be the first cookbook that's co-created by computer algorithms.
Around three years ago, IBM began building an "idea-generating tool" for Watson, which would let the supercomputer tap into its massive data trove to create new and interesting ideas and suggestions. IBM immediately thought food would be a great category for Watson to innovate, since everyone eats and there are literally countless combinations of meals and flavors.
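To get a sense of how large that space is, here is a rough count (a back-of-the-envelope illustration with assumed numbers, not IBM's figures) of how fast ingredient combinations multiply:

// Rough illustration: 4-ingredient combinations from a 1,000-item pantry
// (both numbers are arbitrary assumptions for the sake of the example)
function choose(n, k) {
  var result = 1;
  for (var i = 0; i < k; i++) {
    result = result * (n - i) / (i + 1); // running binomial coefficient
  }
  return result;
}
console.log(choose(1000, 4)); // ~4.14e10 possible combinations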
According to the book, IBM taught Watson all about existing food dishes so it could learn how flavors and food chemicals interact, combine and contrast. It also learned about cultural preferences for certain foods and flavors, and it also learned about nutrition.
Once it had enough data, Watson began spewing out combinations of ingredients, and the Institute of Culinary Education helped convert those ideas into the real dishes used in the book.


While there are plenty of meals you might be accustomed to, Watson offers plenty of novel flavor combinations you probably wouldn't have dreamt of. How about an Indian burrito? What about Thai quiche? Or maybe some grilled asparagus on top of some sous vide pig's feet? 
Watson came up with thousands of recipes but eventually narrowed down the options to 100; the book only contains 65 different recipes, which are sorted by preferences and dietary constraints, but CNN Money says IBM might have more recipes and Watson cookbooks on the way.

Article written by Dave Smith 
Apr. 10, 2015

Wednesday, April 8, 2015

IBM Watson QA + Speech Recognition + Speech Synthesis = A Conversation With Your Computer

" Back in November I released a demo application here on my blog showing the IBM Watson QA Service for cognitive/natural language computing connected to the Web Speech API in Google Chrome to have real conversational interaction with a web application.  It’s a nice demo, but it always drove me nuts that it only worked in Chrome.  Last month the IBM Watson team released 5 new services, and guess what… Speech Recognition and Speech Synthesis are included!
These two services enable you to quickly add Text-To-Speech or Speech-To-Text capability to any application.  What’s a better way to show them off than by updating my existing app to leverage the new speech services?

So here it is: watsonhealthqa.mybluemix.net!

By leveraging the Watson services it can now run in any browser that supports getUserMedia (for speech recognition) and HTML5 <Audio> (for speech playback).

(Full source code available at the bottom of this post)

You can check out a video of it in action below:





If your browser doesn’t support the getUserMedia API or HTML5 <Audio>, then your mileage may vary.  You can check where these features are supported with these links: <Audio>getUserMedia
Warning: This is targeting desktop browsers – HTML5 Audio is a mess on mobile devices due to limited codec support and immature APIs.
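If you would rather test support programmatically than check those compatibility tables, a quick feature-detection sketch (my addition here, using the vendor prefixes common at the time) looks like this:

// Detect HTML5 <audio> and getUserMedia support (circa-2015 prefixes)
var hasAudio = !!document.createElement('audio').canPlayType;
var getUserMedia = navigator.getUserMedia ||
                   navigator.webkitGetUserMedia ||
                   navigator.mozGetUserMedia;
if (!hasAudio || !getUserMedia) {
  console.warn('This demo will not fully work in this browser.');
}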

So how does this all work?

Just like the QA service, the new Text To Speech and Speech To Text services are now available in IBM Bluemix, so you can create a new application that leverages any of these services, or you can add them to any existing application.
I simply added the Text To Speech and Speech To Text services to my existing Healthcare QA application that runs on Bluemix:

IBM Bluemix Dashboard


These services are available via a REST API. Once you’ve added them to your application, you can consume them easily within any of your applications.
I updated the code from my previous example in 2 ways: 1) to take advantage of the Watson Node.js Wrapper, which makes interacting with Watson a lot easier, and 2) to take advantage of the new speech services.

Watson Node.js Wrapper

Using the Watson Node.js Wrapper, you can now easily instantiate Watson services in a single line of code.  For example:

var watson = require('watson-developer-cloud');
var question_and_answer_healthcare = watson.question_and_answer(QA_CREDENTIALS);
var speechToText = watson.speech_to_text(STT_CREDENTIALS);
The credentials come from your environment configuration, then you just create instances of whichever services that you want to consume.
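For context, on Bluemix that environment configuration arrives through the VCAP_SERVICES variable that Cloud Foundry injects into your app. A minimal sketch of reading it (the service key names below are illustrative and depend on which services you bound):

// Read credentials injected by Bluemix/Cloud Foundry.
// Key names are illustrative; check your own VCAP_SERVICES contents.
var services = JSON.parse(process.env.VCAP_SERVICES || '{}');
var QA_CREDENTIALS = services['question_and_answer'][0].credentials;
var STT_CREDENTIALS = services['speech_to_text'][0].credentials;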

QA Service

The code for consuming a service is now much simpler than the previous version.  When you want to submit a question to the Watson QA service, you can now simply call the "ask" method on the QA service instance.
Below is my server-side code from app.js that accepts a POST submission from the browser, delegates the question to Watson, and takes the result and renders HTML using a Jade template. See the Getting Started Guide for the Watson QA Service to learn more about the wrappers for Node or Java.


// Handle the form POST containing the question
app.post('/ask', function(req, res) {

  // delegate to Watson
  question_and_answer_healthcare.ask({ text: req.body.questionText }, function(err, response) {
    if (err)
      console.log('error:', err);
    else {
      // merge the first answer set into the request body for rendering
      var response = extend({ 'answers': response[0] }, req.body);

      // render the template to HTML and send it to the browser
      return res.render('response', response);
    }
  });
});
Compare this to the previous version, and you’ll quickly see that it is much simpler.


Speech Synthesis

At this point, we already have a functional service that can take natural language text, submit it to Watson,  and return a search result as text.  The next logical step for me was to add speech synthesis using the Watson Text To Speech Service (TTS).  Again, the Watson Node Wrapper and Watson’s REST services make this task very simple.  On the client side you just need to set the src of an <audio> instance to the URL for the TTS service:
<audio controls="" autoplay="" src="/synthesize?text=The text that should generate the audio goes here"></audio>
On the server you just need to synthesize the audio from the data in the URL query string.  Here's an example of how to invoke the text to speech service, taken directly from the Watson TTS sample app:
var textToSpeech = new watson.text_to_speech(credentials);

// handle get requests
app.get('/synthesize', function(req, res) {

  // make the request to Watson to synthesize the audio file from the query text
  var transcript = textToSpeech.synthesize(req.query);

  // set content-disposition header if downloading the
  // file instead of playing directly in the browser
  transcript.on('response', function(response) {
    console.log(response.headers);
    if (req.query.download) {
      response.headers['content-disposition'] = 'attachment; filename=transcript.ogg';
    }
  });

  // pipe results back to the browser as they come in from Watson
  transcript.pipe(res);
});
The Watson TTS service supports .ogg and .wav file formats.  I modified this sample slightly to return .ogg files for Chrome and Firefox, and .wav files for other browsers.  On the client side, these are played using the HTML5 <audio> tag. You can see my modifications in the git repository.
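The gist of that change (a simplified sketch, not the exact code from the repo) is to pick the audio format from the browser's user agent before synthesizing:

// Simplified sketch: serve .ogg to Chrome/Firefox, .wav to other browsers
app.get('/synthesize', function(req, res) {
  var ua = req.headers['user-agent'] || '';
  req.query.accept = /Chrome|Firefox/.test(ua) ?
    'audio/ogg; codecs=opus' : 'audio/wav';
  textToSpeech.synthesize(req.query).pipe(res);
});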

Speech Recognition

Now that we’re able to process natural language and generate speech, that last part of the solution is to recognize spoken input and turn it into text.  The Watson Speech To Text (STT) service handles this for us.  Just like the TTS service, the Speech To Text service also has a sample app, complete with source code to help you get started.
This service uses the browser’s getUserMedia (streaming) API with socket.io on Node to stream the data back to the server with minimal latency. The best part is that you don’t have to setup any of this on your own. Just leverage the code from the sample app. Note: the getUserMedia API isn’t supported everywhere, so be advised.

On the client side you just need to create an instance of the SpeechRecognizer class in JavaScript and handle the result:


var recognizer = new SpeechRecognizer({
  ws: '',
  model: 'WatsonModel'
});

recognizer.onresult = function(data) {

  // get the transcript from the service result data
  var result = data.results[data.results.length - 1];
  var transcript = result.alternatives[0].transcript;

  // do something with the transcript
  search(transcript, result.final);
};
On the server, you need to create an instance of the Watson Speech To Text service, and setup handlers for the post request to receive the audio stream.


// create an instance of the speech to text service
var speechToText = watson.speech_to_text(STT_CREDENTIALS);

// Handle audio stream processing for speech recognition
app.post('/', function(req, res) {
  var audio;

  if (req.body.url && req.body.url.indexOf('audio/') === 0) {
    // sample audio stream
    audio = fs.createReadStream(__dirname + '/../public/' + req.body.url);
  } else {
    // malformed url
    return res.status(500).json({ error: 'Malformed URL' });
  }

  // use Watson to generate a text transcript from the audio stream
  speechToText.recognize({ audio: audio, content_type: 'audio/l16; rate=44100' }, function(err, transcript) {
    if (err)
      return res.status(500).json({ error: err });
    else
      return res.json(transcript);
  });
});

Source Code

You can interact with a live instance of this application at watsonhealthqa.mybluemix.net, and complete client and server side code is available at github.com/triceam/IBMWatson-QA-Speech.
Just set up your Bluemix app, clone the sample code, run npm install, and deploy your app to Bluemix using the Cloud Foundry CLI.


Released on 04.04.2015
Written by Andrew Trice. More on Andrew here : http://java.dzone.com/users/triceam

Link to the article here : http://java.dzone.com/articles/ibm-watson-qa-speech 

Monday, March 16, 2015

How this regular programmer became a 'Master Inventor' at IBM

IBM is awfully proud of its patent portfolio.
The company spends about $6 billion a year on R&D and has research scientists working on everything from nanotechnology to evidence of the Big Bang.

In 2014, IBM broke an invention record: it became the first company to earn more than 7,000 patents in a single year (7,534 patents). This was the 22nd consecutive year IBM topped the annual list of U.S. patent recipients. IBM inventors earned an average of more than 20 patents per day last year, the company boasted.

IBM's secret? It's not just research scientists doing all the inventing.
Any employee can become an inventor, and IBM has a team that helps hobbyist inventors apply for and land patents.

Take IBMer Kelly Abuelsaad, age 33

 

For instance, Kelly Abuelsaad, 33. She currently works as a software engineer for IBM's cloud services team, and she started as a system administrator. (That's an IT person who keeps a company's technology running smoothly.)
She bills herself as an "accidental inventor," yet she has invented so much for the company in the past six years that she's been crowned a "Master Inventor."  That's a special title at the company for someone who has lots of patents and helps other ordinary employees do the same.
"To date, I've filed 55 patent applications with the US Patent Office, 12 of which so far have been granted," she told Business Insider.
But seven years ago, "it wasn't really something I had ever considered doing. I had thought you needed to be like a rocket scientist in order to create a patent," she says.

Then a friend decided he wanted to try getting a patent on a way he had invented to view pages in specific Web browsers. He asked Abuelsaad to help him.
She worked with him to write up the idea, and a bunch of others, and submit them to an internal IBM team that reviews employees' patent ideas.

And a light bulb dawned

"We got three ideas through the process and it opened my eyes that creating inventions was something anybody can do. Really. It's not reserved for PhD rocket scientists," she says.
When she wanted to try her hand at a few patent ideas of her own, she joined a brainstorming support group of inventors scattered across the world who met online to discuss their ideas, and discovered, "This is something a lot of people do in their spare time at IBM."
The group was chock full of other IBMers in their 30s, too, including Lisa Seacat DeLuca, who at the age of 31 became IBM's most prolific woman inventor, with more than 370 patent applications. (Here's DeLuca's TED Talk.)

You, too, can become a master inventor

Abuelsaad says becoming a patent-producing inventor "is something a normal person can do." Here's how:
1. Look for problems you encounter "as a regular person using technology." The first few patents she did had nothing to do with her expertise in cloud computing. "They were really commonplace, everyday things," she says.

For instance, one of her patents is for providing cell phone subscriptions for email threads (U.S. Patent No. 8,489,690, granted July 16, 2013). Another is for adding a teleconference caller to a group instant messaging chat (U.S. Patent No. 8,605,882, granted December 10, 2013).



2. Use your imagination during your day job to spot problems that everyone deals with.  "Every day in all of our jobs, all of us have pressures to execute and deliver, deliver, deliver," she says. "Allow yourself to stop and observe all the problems you are solving in your work."
Maybe you are solving the same problem over and over again and you could come up with a permanent solution. Maybe something you are doing can be applied to a bigger audience, a broader problem.
3. Allow yourself to toy with solutions. Say to yourself, "Wouldn't it be neat if ... <solution to problem>," she says. For instance, wouldn't it be neat if there was a way to let someone know about a meeting when they were offline, and have the meeting notes automatically sent to them?
Start there.

4. Join an inventor's group. At IBM that's easy. Ditto for many other big tech companies that apply for lots of patents.  If your company can't help, you'll need to do some sleuthing to find an inventors meet-up that works for you.
"Brainstorming with like-minded people is incredibly liberating," she says. But its also helpful to find a mentor that can guide you through the patent process.

Creativity feeds on itself

"Getting a patent is a reward. It is something that I’ve got my permanent record inside IBM and outside IBM. All these inventions I created, I’m very proud of," she says.
But inventing also "helps you be more creative, observant and be more proactive about solving issues. Now whenever I see a technical issue, I think, what kind of invention could I create? Anybody can come up with ideas that are patentable, if you stop and train your mind."

Tuesday, February 24, 2015

IBM Spectrum Scale (formerly GPFS)

This IBM Redpaper Redbooks publication updates and complements the previous publication, Implementing the IBM General Parallel File System in a Cross Platform Environment, SG24-7844, which was released when the product was still called GPFS. Since then, two releases have been made available, up to the latest version, IBM Spectrum Scale 4.1. Topics discussed in this new IBM Redbooks publication include what is new in Spectrum Scale, Spectrum Scale licensing updates (Express/Standard/Advanced), Spectrum Scale infrastructure support and updates, storage support (IBM and OEM), operating system and platform support, Spectrum Scale global sharing with Active File Management (AFM), and IBM Spectrum Protect considerations for using Spectrum Scale for LAN-free backup.
This publication also covers topics such as planning, usability, best practices, monitoring, and problem determination. The main goal of this publication is to bring readers up to date with the latest features and capabilities of IBM Spectrum Scale, as the solution has become a key component of the reference architecture for clouds, analytics, mobile, social media, and much more.
This IBM Redpaper Redbooks publication is targeted at technical professionals (consultants, technical support staff, IT architects, and IT specialists) responsible for delivering cost-effective cloud services and big data solutions on IBM Power Systems, helping to uncover insights in clients' data so they can take actions to optimize business results, product development, and scientific discoveries.

Disclaimer

These pages are Web versions of IBM Redbooks- and Redpapers-in-progress. They are published here for those who need the information now and may contain spelling, layout and grammatical errors.
This material has not been submitted to any formal IBM test and is published AS IS. It has not been the subject of rigorous review. Your feedback is welcomed to improve the usefulness of the material to others.
IBM assumes no responsibility for its accuracy or completeness. The use of this information or the implementation of any of these techniques is a customer responsibility and depends upon the customer's ability to evaluate and integrate them into the customer's operational environment. 
 
By : IBM Redbooks publication  
Please find the complete publication on : http://www.redbooks.ibm.com/redpieces/abstracts/sg248254.html