Thursday, December 29, 2011

How IBM Built the Most Powerful Computer in the World (Popular Mechanics)

Capable of an astounding 209 teraflops per rack, IBM's forthcoming Blue Gene/Q will model hurricanes, analyze markets, and simulate nuclear explosions with incredible precision. Here's the story behind Blue Gene/Q, and a sneak peek at what it can do.

 Chris Marroquin is waist-deep in a hole in the floor. He's a tall guy with a medium build, but he looks awfully short now, and his shirt is pumped up to Schwarzenegger size by a 60-degree breeze blustering all around him. Grappling with a 1-inch-diameter hose, he attempts to explain the liquid-cooling system of IBM's next-generation supercomputer to me, but I can barely hear him over the howling wind. We're in a development room of IBM's Rochester, Minn., facility, where engineers test and assemble the company's Blue Gene supercomputers. The air buffeting Marroquin cools a small, four-rack Blue Gene/P system capable of 13.9 teraflops per rack, but the hose he's holding is part of a far more advanced cooling system. Filled with deionized water, the anti-corrosive agent benzotriazole and a dose of biocide, the tube feeds into a prototype of the company's new Blue Gene/Q computer. The Blue Gene/Q rack sitting on the raised floor has its own circulatory system—850 feet of copper pipe, with check valves, quick-disconnect rubber hoses and an electronic monitor that measures flow rate, pressure and dew point—designed to shut down if anything goes awry. "You don't want any drips," Marroquin says.
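For a sense of what such an interlock does, here is a purely illustrative Python sketch: a monitor that watches coolant flow rate, pressure and dew point and shuts the rack down if any reading drifts out of range. The thresholds, field names and shutdown behavior are hypothetical, not IBM's actual monitoring firmware; the sketch only shows the check-everything-then-stop logic Marroquin describes.

# Hypothetical coolant-loop interlock, loosely modeled on the monitoring
# described in the article. All thresholds are made up for illustration.
from dataclasses import dataclass

@dataclass
class CoolantReading:
    flow_lpm: float        # flow rate, liters per minute
    pressure_kpa: float    # loop pressure, kilopascals
    dew_point_c: float     # room dew point, degrees Celsius
    coolant_temp_c: float  # coolant temperature, degrees Celsius

MIN_FLOW_LPM = 10.0                   # hypothetical safe operating envelope
PRESSURE_RANGE_KPA = (100.0, 400.0)
DEW_POINT_MARGIN_C = 2.0              # coolant must stay warmer than the dew point, or it "drips"

def is_safe(r: CoolantReading) -> bool:
    """True only if every monitored value is inside its envelope."""
    return (
        r.flow_lpm >= MIN_FLOW_LPM
        and PRESSURE_RANGE_KPA[0] <= r.pressure_kpa <= PRESSURE_RANGE_KPA[1]
        and r.coolant_temp_c >= r.dew_point_c + DEW_POINT_MARGIN_C
    )

def monitor(readings):
    for r in readings:
        if not is_safe(r):
            print("Out-of-range reading, shutting down rack:", r)
            return
    print("All readings nominal")

monitor([CoolantReading(flow_lpm=12.0, pressure_kpa=250.0,
                        dew_point_c=10.0, coolant_temp_c=18.0)])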

As sophisticated as the cooling system is, what launches this machine into the realm of technological superlatives is its processing power: Each rack contains 1024 computer chips, and every one of those chips has 16 processor cores. That's a total of 16,384 processor cores per rack, making it capable of 209 teraflops, 15 times more power per rack than the Blue Gene/P. Within the next year IBM will ship 96 Blue Gene/Q racks to Bruce Goodwin at Lawrence Livermore National Laboratory (LLNL) in California. Collectively, those racks will become the most powerful computer in the world. It should be able to predict the path of hurricanes, decode gene sequences and analyze the ocean floor to discover oil. But Goodwin primarily wants to use it to blow up a nuclear bomb.
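The arithmetic behind those headline figures is easy to check. Here is a quick back-of-the-envelope calculation in Python, using only the per-rack numbers quoted in this article; treat it as a sanity check rather than an official IBM spec sheet.

# Back-of-the-envelope check of the Blue Gene/Q figures quoted above.
chips_per_rack = 1024          # compute chips per Blue Gene/Q rack
cores_per_chip = 16            # processor cores per chip
teraflops_per_rack = 209       # peak teraflops per rack
racks_in_sequoia = 96          # racks shipping to Lawrence Livermore

cores_per_rack = chips_per_rack * cores_per_chip
print(f"Cores per rack: {cores_per_rack:,}")                              # 16,384
print(f"Speedup vs. a Blue Gene/P rack: {teraflops_per_rack / 13.9:.1f}x")  # ~15x

sequoia_petaflops = racks_in_sequoia * teraflops_per_rack / 1000
print(f"Sequoia peak: ~{sequoia_petaflops:.0f} petaflops")                # ~20 petaflops
print(f"Sequoia cores: {racks_in_sequoia * cores_per_rack:,}")            # 1,572,864

The last two lines tie the 96-rack order directly to the 20-petaflop figure mentioned later in the piece.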

Goodwin used to explode nukes the old-fashioned way. From 1983 to 1991, he designed and oversaw five nuclear weapons tests at the Department of Energy's Nevada Test Site. He and other engineers would dig a 2000-foot-deep hole, toss a warhead and some highly specialized monitoring equipment into a 10-story-tall, 1-million-pound iron canister and lower it into the hole. Then everybody would move way the heck back, cross their fingers and detonate. "Sitting in the control room 10 miles away, it felt like a magnitude 5 or 6 earthquake," Goodwin says.

All that changed in October 1992, when then-President George H.W. Bush declared a moratorium on nuclear testing in anticipation of the Comprehensive Nuclear-Test-Ban Treaty of 1996. After that, if the United States wanted to test any of the warheads in its multithousand-weapon arsenal, it had to do a computer simulation. Thus, our interest in really powerful computers was nationalized.

Really powerful computers have been around as long as computers themselves, but the term supercomputer didn't arrive until 1976, when Seymour Cray built the Cray-1. It cost $8.8 million ($35 million in today's dollars) and cranked up to 160 megaflops. Yesterday's supercomputer, however, has less power than today's personal computer—a modern PC has more than 50 times the processing horsepower of the original Cray. In fact, the "super" prefix is so fuzzy that many computer scientists eschew the term supercomputer altogether and call such machines high-performance computers, or HPCs. In an attempt to bring some clarity to the genre, in 1993 a private group called the Top500 project started publishing a twice-yearly list of the 500 most powerful computers in the world. If your computer is on the list, it is by definition a supercomputer.
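To put those prefixes in perspective, here is a small illustrative calculation using only the figures quoted in this piece; it is a rough comparison, not a benchmark.

# Rough scale of "flops" (floating-point operations per second) prefixes,
# applied to the Cray-1 comparison in the text.
MEGA, GIGA, TERA = 1e6, 1e9, 1e12

cray_1_flops = 160 * MEGA                # Cray-1, 1976: 160 megaflops
modern_pc_flops = 50 * cray_1_flops      # ">50 times" the Cray-1, per the article
print(f"Modern PC (per the article): at least ~{modern_pc_flops / GIGA:.0f} gigaflops")

blue_gene_q_rack_flops = 209 * TERA      # one Blue Gene/Q rack
print(f"Cray-1s per Blue Gene/Q rack: {blue_gene_q_rack_flops / cray_1_flops:,.0f}")  # ~1.3 million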

For 17 of the Top500 list's 18 years, the U.S. and Japan have swapped supremacy. But in October 2010, China claimed the top spot with the 2.6-petaflop Tianhe-1A. The computer scientists who design and build these systems tend to work for multinational companies and are cautious about characterizing what they do as a statement of national pride. Regardless, supercomputers have come to symbolize the technological prowess of the countries that build them—a silicon-age version of the space race. In a sign of the whipsaw speed of technological progress, Japan eclipsed China just eight months later, in June 2011, unveiling the 8-petaflop K Computer. The Chinese countered in August, outlining a road map to "exascale" computing, essentially promising a 125-fold increase in computing power within 10 years. If Tianhe-1A was China's Sputnik moment, exascale is its moonshot.
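For scale, "exascale" means one exaflop, or 10**18 floating-point operations per second, and the 125-fold figure follows directly from the K Computer's 8 petaflops; a one-line check:

# 1 exaflop = 1000 petaflops = 10**18 flops
k_computer_petaflops = 8
print(1000 / k_computer_petaflops)   # 125.0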

The supercomputer's role in maintaining America's nuclear weapons justifies its status as a national security interest. But China's challenge to the West's computing dominance has led many computer scientists and policy wonks to claim that supercomputing is essential to U.S. economic security as well. These machines are force multipliers for American scientists, engineers and businesses, the argument goes, and whoever builds the best ones gains an advantage. Supercomputers don't just reflect intellectual and technological power, they also reinforce it.

The folks at IBM Rochester betray little interest in China's goal of supercomputing dominance. Their job is to work out the engineering for Blue Gene/Q, and they deliberately focus on the technology, not the politics. They are classic pocket-protector engineers, and their titles are inelegant bureaucratic artifacts that offer little clue to their actual roles. "We're a very small, roll-up-your-sleeves team effort," says Pat Mulligan, development manager for Global Server Integration (who, for the record, had his sleeves rolled up when we spoke). "We're not overly nationalistic, we just want to make the best computer we can."

The building where Marroquin, Mulligan and the rest of the IBM team are creating the 21st century's most powerful computers is a monument to mid-20th-century corporate futurism. Designed by architect Eero Saarinen (who also designed the St. Louis Gateway Arch), the sprawling structure is clad in dark blue glass. Hallways a half-mile long stretch through the interior. At some point IBM—always pushing the technological envelope—concealed wires in the hallway floors to guide robots that delivered parts and machinery from one assembly room to another. The robots are long gone, a dream of mechanical efficiency undone by reality: They were slow and broke down so often that the facility switched to human-guided forklifts.

The Blue Gene/Q computers I'm getting a look at in midsummer are not part of Bruce Goodwin's supercomputer (named Sequoia). These are test models, used to work out the kinks in the hardware and software. The manufacturing of Sequoia's 96 racks was due to ramp up soon after my visit, but Goodwin and his team at Lawrence Livermore are already logging in to Blue Gene/Q and tinkering from afar; a sign on one of the racks in the Rochester assembly room says LLNL REMOTE ACCESS MACHINE.

Goodwin's Terascale Simulation Facility (TSF) at Livermore is one of two DOE centers that perform nuclear simulations as part of the Stockpile Stewardship Program (the other is at Los Alamos National Laboratory in New Mexico). To get a simulation that delivers an acceptable degree of accuracy, Goodwin's team models a 50-microsecond explosion in three dimensions down to a scale of 10 microns. "It gets very complicated," Goodwin says. "These things are imploding and exploding, and you have to track the fluid mechanics with the precision of a Swiss watch." Every time a component is changed or upgraded in a U.S. nuclear warhead, the TSF virtually tests the bomb to make sure it will still go boom. The computer simulations have revealed aspects of nuclear fission that testers hadn't anticipated, and, consequently, the number and complexity of algorithms have increased over time. Modern simulations model only parts of a full explosion, and even then, the most complex sims Goodwin runs use about a million lines of code. If you had 1600 years, the calculations could conceivably be done on a laptop; Livermore's current 500-teraflop Blue Gene/P system, named Dawn, gets a high-complexity sim done in a month. When the 20-petaflop Sequoia system goes live in 2012, the test time should drop to a week.
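A rough sanity check on those run times follows, assuming (optimistically) that run time scales inversely with machine speed; the quoted figures themselves show why that assumption breaks down at scale.

# Scaling check on the simulation times quoted above. Real codes scale
# sub-linearly, which is why the article says "a week" rather than hours.
HOURS_PER_MONTH = 30 * 24
HOURS_PER_YEAR = 365 * 24

dawn_teraflops = 500                   # Blue Gene/P "Dawn"
sequoia_teraflops = 20_000             # 20-petaflop Sequoia
dawn_runtime_hours = HOURS_PER_MONTH   # "a month" for a high-complexity sim

# Laptop performance implied by "1600 years on a laptop":
laptop_runtime_hours = 1600 * HOURS_PER_YEAR
implied_laptop_teraflops = dawn_teraflops * dawn_runtime_hours / laptop_runtime_hours
print(f"Implied laptop speed: ~{implied_laptop_teraflops * 1000:.0f} gigaflops")  # ~26 gigaflops

# Ideal (perfectly linear) speedup on Sequoia:
ideal_sequoia_hours = dawn_runtime_hours * dawn_teraflops / sequoia_teraflops
print(f"Ideal Sequoia runtime: ~{ideal_sequoia_hours:.0f} hours")  # ~18 hours
print("Quoted Sequoia runtime: about a week, so scaling is far from perfectly linear")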



More of this article: Anatomy of a supercomputer


http://www.popularmechanics.com/technology/engineering/extreme-machines/how-ibm-built-the-most-powerful-computer-in-the-world  
