Monday, December 2, 2013

Graph500: IBM is #1 for Big Data Supercomputing

IBM has taken eight of the top 10 places on the latest Graph500 list, confirming its position as the builder of the majority of the world’s best computers for processing big data. Big data is central to IBM’s current strategy, and the Graph500 list ranks supercomputers on their ability to process huge amounts of data. The top three positions went to Lawrence Livermore National Laboratory’s Sequoia, Argonne National Laboratory’s Mira, and Forschungszentrum Juelich’s (FZJ) JUQUEEN, all of which are IBM Blue Gene/Q systems.


Blue Gene supercomputers have ranked #1 on the Graph500 list since 2010, with Sequoia topping the list three consecutive times since 2012. IBM was also the top vendor on the most recent list, with 35 entries out of 160. Competitor Dell featured 12 times, and Fujitsu seven.

The Graph500 was established in 2010 by a group of 50 international HPC industry professionals, academics, experts and national laboratory staff. The benchmark targets five key industries: cybersecurity, medical informatics, data enrichment, social networks, and symbolic networks. All of these industries process and analyze large amounts of data, which is why the Graph500 measures how well systems handle graph-based data problems, a foundation of most analytics work.

The name itself comes from graph-type problems, the algorithms at the core of many analytics workloads, such as those used for data enrichment. According to LLNL, a graph is made up of interconnected sets of data with edges and vertices; in a social media analogy, it might resemble a map of Facebook, with each vertex representing a user and each edge a connection between users. The Graph500 ranking is compiled using a massive data-set test: the speed with which a supercomputer, starting at one vertex, can discover all other vertices determines its ranking.
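That "start at one vertex and discover all the others" test is a breadth-first search (BFS), the core kernel the Graph500 times. As a minimal illustrative sketch, here is what that traversal looks like in Python on a toy in-memory adjacency list (the names bfs and connections are hypothetical; the real benchmark runs this on graphs with billions of edges distributed across thousands of compute nodes):

    from collections import deque

    def bfs(graph, start):
        """Discover every vertex reachable from `start`, level by level.

        `graph` is an adjacency list: {vertex: [neighboring vertices]}.
        Returns the vertices in the order they were discovered.
        """
        visited = {start}           # vertices already discovered
        order = [start]             # discovery order
        frontier = deque([start])   # vertices whose neighbors we still need to scan
        while frontier:
            vertex = frontier.popleft()
            for neighbor in graph[vertex]:
                if neighbor not in visited:
                    visited.add(neighbor)
                    order.append(neighbor)
                    frontier.append(neighbor)
        return order

    # Toy social graph: vertices are users, edges are connections.
    connections = {
        "alice": ["bob", "carol"],
        "bob": ["alice", "dave"],
        "carol": ["alice"],
        "dave": ["bob"],
    }
    print(bfs(connections, "alice"))  # ['alice', 'bob', 'carol', 'dave']

The ranking essentially measures how many edges per second a machine can traverse while doing this, which stresses memory and network performance rather than raw floating-point speed.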

The rankings are geared toward enormous graph-based data problems, a core part of most analytics workloads. Big data currently represents a huge $270 billion market and is increasingly important for data-driven tech businesses such as Google, Facebook and Twitter. While the threshold for what actually constitutes ‘big data’ continues to evolve rapidly, businesses and startups need to understand and unlock additional value from the data most relevant to them, no matter its size.



By: Hayden Richards
