jeudi 25 juillet 2013

New IBM zBC12 mainframe gets OpenStack integration

With the zEnterprise BC12 (zBC12), which is intended to bring capabilities found in the high-end zEnterprise EC12 to a broader market, IBM is releasing new versions of the z/OS operating system and the z/VM virtualisation platform across System z, along with numerous other enhancements.


By: IBM

IBM Watson: A Powerful Ally in Doctors' War on Cancer

Link : http://www.internetevolution.com/document.asp?doc_id=265538&sf15293407=1

By: Internet Evolution

New IBM Software Helps Companies Reinvent Relationships with Exceptional Digital Experiences - Retailers and Manufacturers Increase Engagement with Customers, Employees and Partners

IBM (NYSE: IBM) today announced new Digital Experience software that allows organizations to create customized digital experiences that reinvent the way they engage with their most important audiences: customers, employees and business partners. 

IBM Heat Map
IBM digital experience software combines with IBM customer experience management capabilities so marketing professionals can analyze customer activity
on a specific channel, such as a mobile device. With these unique views, marketers can gauge the behavior of customers across all digital channels, identify patterns and then adjust the digital experience based on this insight to improve the quality and appeal of the user's experience. (Credit: IBM)

Aligned with IBM's Smarter Commerce initiative, the new software allows line-of-business employees from marketing, sales, HR and customer loyalty to produce, share and distribute digital content on the fly to all mobile and social channels -- without the need for IT technical skills or outside assistance.
The growth of mobile, online, social media and commerce trends has spawned the rise of the digital consumer, which requires businesses to deepen their interactions with individuals and accelerate data-driven decisions in functions such as marketing, sales, service and human resources.

Building on these demands, IBM's Digital Experience software allows CMOs to provide customers with relevant information and offers that are based on their preferences and can be published quickly to all digital channels and mobile devices. For example, while at a conference, marketing and event teams can develop professional grade assets that incorporate client interviews, show floor footage, audio and text overlays and in a few simple clicks publish it to the broadest range of social, mobile and online channels.

Executives across the C-Suite acknowledge the need to reorient their businesses and deliver more personalized experiences to become more competitive in a digital economy. In fact, according to a Forrester survey of customer service professionals, more than 90 percent stated that customer experience is a top strategic priority for their firm.

HR executives can use these same technologies to connect new hires with seasoned employees who can answer questions and share pertinent insights that will improve and accelerate the orientation process.

"The digital era is being driven by the proliferation of mobile and social networks which have transformed the way organizations engage their key audiences,” said Larry Bowden, Vice President of Digital Experience Software, IBM. "To succeed, companies must look beyond websites to create digital experiences that marry analytics, deeper social engagement, compelling content and design for mobile delivery in order to engage audiences on their terms and on their time."

In order to meet these rising demands, CMOs, CHROs and other executives are joining forces with the emergence of the Chief Digital Officer to create comprehensive digital strategies across all lines of the business including marketing, product development, customer service, human resources and more. Businesses that are seeing the benefit of this digital transformation include Performance Bicycle, Amadori and Omron Europe.

Performance Bicycle, which operates across 110 stores in 20 states, collaborated with IBM and IBM business partner Sirius Computer Solutions to create a digital experience and transform the way it interacts with its clients. Working with IBM, the company launched Performance Learning Center, an online learning site where customers can engage with experts and peers, get answers to their cycling questions through articles, videos and online chats. Since its launch, the Learning Center has driven significant increases in traffic for Performance and ultimately an increase in sales.

“Working with IBM we have transformed the digital experience for our customers through a web site that features hundreds of unique content pieces and videos that address the most pressing questions of our customers,” said Aaron Pickrell, Director of Online Systems, Performance Bicycle. “Thanks to these efforts, we have not only increased both clicks and sales but provided a valuable service to customers looking to learn more about the complicated world of cycling, including tips on buying a child’s first bike and how to fine tune a rear derailleur.”

Success for Omron Europe, a global leader in the field of industrial automation, required that its staff have fast access to experts and specialized knowledge about varied industrial processes and technologies. Working with IBM and IBM business partner Portico Consultancy B.V., the company created a portal that includes social networking capabilities which allow employees to more easily share knowledge and collaborate with colleagues about topics such as product packing which can be used to help close deals with key clients. Since going live it has increased engagement while allowing its team to respond to the needs of customers more quickly.

With the new Digital Experience software, IBM allows companies to deepen engagement, uncover customer sentiment and build loyalty with their desired audience. The advanced capabilities include:

· Mobile Experiences: According to Morgan Stanley, 90 percent of mobile users keep their device within arm’s reach 100 percent of the time. Using the new software, CMOs and their teams can quickly design a single mobile application that can then be viewed on multiple devices to ensure a consistent brand experience as customers move between screens. When combined with IBM’s customer experience technology, e-commerce and customer service professionals can quickly assess the quality of a visitor's experience and then eliminate the pain points that may spur them to leave the site.
· Analytics and Optimization: Today 84 percent of businesses are integrating analytics into the digital experience. Through digital analytics capabilities, marketing and customer service professionals can analyze customer activity on a specific channel, such as a mobile device, or a web page. These unique views can gauge the behavior of customers across all digital channels at any time, identify patterns and then adjust plans based on this insight to out-maneuver the competition.

· Omni Channel Media Creation: Digital video consumption continues to rise and businesses must respond quickly. With new digital experience software, non-technical line of business employees can quickly and easily create compelling video content that can be viewed anywhere including a business's website, smart phones and tablets as well as social media destinations such as Facebook.

· Social Interaction: IBM delivers out-of-the-box integration with its premier social business platform, allowing companies to embed social experiences within the company portal or social networking sites such as Facebook, LinkedIn and Twitter, so customers and employees can more easily interact with one another. Using social analytics, teams can capture the sentiment of customers and employees and take action based on the data to be more responsive to their needs – a critical component for companies as they look to stay ahead of issues and be agile in the marketplace.

Avnet Services, Genus Technologies, Gemini Systems and Prolifics are among a host of IBM Business Partners working to meet the growing market demand for IBM's new Digital Experience software, helping clients embrace social and commerce transformation in today's digital marketplace.

By: IBM

 

New IBM zEnterprise BC12 Entry-Level Mainframe Launches

IBM announced a new cost-efficient mainframe system, the zEnterprise BC12, designed for analytics, cloud and mobile computing.

IBM has announced a new entry-level mainframe system that suits businesses of all sizes, the zEnterprise BC12 (zBC12). The new mainframe builds on IBM's decades of experience in enterprise computing and is designed for the latest in analytics, cloud, and mobile computing. Moreover, starting at $75,000, IBM is making one of the most secure and technologically advanced enterprise servers attractive to organizations of all sizes.
"Analytics, cloud and mobile computing are changing the way businesses in all industries engage with their customers," said Patrick Toole, general manager of IBM System z, in a statement. "IBM's zEnterprise technologies address these challenges by providing clients with a powerful and highly secure platform to manage new and emerging workloads, helping speed time to market, reduce costs and stimulate business growth by making stronger connections with customers."
In addition to the new mainframe hardware, IBM also announced new industry solutions and enhanced software and operating systems across its zEnterprise portfolio to help clients better serve their customers. These solutions are designed to enable banks to deliver new mobile banking services, insurance companies to prevent payment of fraudulent claims, and government agencies to interact and serve citizens using new applications in the cloud, IBM said.

The new software for analytics includes updates to Cognos, SPSS and DB2. New cloud and mobile offerings include updates for better integration and security in Tivoli, CICS and DB2. Big Blue officials said that, compared with its predecessor, the z114, the new zBC12 features a faster 4.2GHz processor and twice the available memory -- and it allows clients to grow into their system with a pay-as-you-grow approach.

When integrated with the IBM DB2 Analytics Accelerator, the zBC12 can perform business analytics workloads with response times up to nine times faster, 10 times better price performance and 14 percent lower total cost of acquisition than the closest competitor, IBM claimed, citing a customer study. For cloud computing, the zBC12 can consolidate up to 40 virtual servers per core, or up to 520 in a single footprint, for as low as $1 per day per virtual server. A single zBC12 can save clients up to 55 percent over x86 distributed environments, IBM said.

Meanwhile, new hardware functions provide CPU and storage savings by compressing data on the server, and a new high-speed, low-latency I/O connection enables up to an 80 percent reduction in network latency. With z/OS 2.1, IBM delivers performance and scale, as well as simplified management with z/OSMF. There is also a new 2-to-1 ratio for IBM System z Integrated Information Processor (zIIP) and zSeries Application Assist Processor (zAAP) special-purpose engines for improved workload economics. And with z/VM 6.3, clients can now consolidate up to 520 virtual servers in a single footprint, thanks to the increase in real memory and the new HiperDispatch function.

For the Linux crowd, IBM announced it is delivering a new Linux-only based version of the zBC12, the Enterprise Linux Server (ELS), to help clients that are rapidly growing their businesses, especially in growth markets. The ELS includes hardware, a z/VM Hypervisor and three years of maintenance. The system can run a portfolio of more than 3,000 Linux applications, and clients can extend it with two new solutions, ELS for Analytics and Cloud-Ready for Linux on System z, as an on-ramp for analytics and cloud computing. "ABK chose to consolidate our business systems onto the zBC12—all of our servers from Intel to Sun—in order to bring our development and production to a new level," said Armin Gerhardt, CEO of ABK Systeme GmbH, in a statement. "Our client work requires us to run several systems simultaneously and securely in order to keep projects moving forward while ensuring all the newest regulations are being observed. What convinced us was the ability the zBC12 had to react quickly, to implement new requirements rapidly and, above all, use tools that are common." For its part, IBM continues to build on its full range of analytics, cloud, mobile and security capabilities in zEnterprise with new software. For analytics the new IBM zEnterprise Analytics System 9710 now includes zBC12 and DB2 10 for z/OS VUE providing a foundation to deliver a cost-effective analytics deployment.

For cloud, IBM enhanced its Omegamon for z/OS family to detect performance problems in the cloud, minimize impact to the business and increase analytics visibility. IBM also continues to help clients bridge the gap between mobile devices and enterprise data and services with native JavaScript Object Notation (JSON) support and conversion between JSON and data structures in the new CICS Transaction Server Feature Pack for Mobile Extensions V1.0 and DB2 11 for z/OS (ESP).

Moreover, IBM said its new z/VM (v6.3) operating system builds on Live Guest Relocation and now supports up to 1 terabyte of real memory, enabling support for more virtual servers. It leverages HiperDispatch technology for improved system performance and enables OpenStack for advanced enterprise-wide service management. Meanwhile, the new z/OS 2.1 operating system supports the latest zEnterprise hardware features -- zEDC and SMC-R -- and includes many performance and scalability enhancements for data-serving workloads. A new capability, "Crypto as a Service," enables Linux on System z applications to use z/OS services to encrypt data, providing more secure encryption. Additionally, enhancements to the z/OS Management Facility improve startup times and provide services for automating workflow, further reducing costs.

IBM Global Financing can help credit-qualified clients acquire the new zBC12 for as low as $1,965 per month. IBM finance offerings can help clients lower their total cost of ownership (TCO) and accelerate ROI to keep pace with innovation and grow their businesses, IBM said.

By: Darryl K. Taft

jeudi 18 juillet 2013

Best practices for patterns adoption in IBM PureApplication System


Over the last few years, we have witnessed the start of a true revolution in how middleware operations are carried out. Starting with the release of the IBM WebSphere® CloudBurst® Appliance, and then with the subsequent releases of IBM Workload Deployer and IBM PureApplication System, the introduction of pattern-based deployment approaches has helped clients achieve a fundamental change in the way in which they plan, deploy, and manage IBM middleware. What we have seen is that this approach has changed the landscape of system operations, and has also had significant influence on the relationship between development and operations in those companies that have adopted it. The combination of these pattern-based approaches with the full system integration provided in PureApplication System has revolutionized the way in which the IT organizations that have adopted them function.
However, what we have also found is that in order for the full benefit of these changes to be realized, these companies need to also adopt a set of organizational and operational best practices that maximize the potential of the PureApplication System. This article documents several of these best practices, and provides a rationale for why you can realize a major positive effect on the efficiency of your organization and the total cost of ownership from adopting PureApplication System.
Before explaining these best practices, we have to begin with a brief discussion of the type of assets that exist in PureApplication System, and introduce a classification scheme that will help you understand the different management techniques we suggest be put in place for these assets in order to realize the maximum benefits that you can gain from PureApplication System.

At a very high level, there are two types of assets in PureApplication System; for the purposes of this article, let's call these shared assets and transient assets.
Shared assets are those assets that are long-lived resources that are used by multiple teams or individuals. These include the physical cloud assets (ITE hardware, storage, and networking switches) as well as virtual cloud assets (environment profiles, cloud groups, IP groups). This category also includes software assets like images and script packages, and importantly, virtual system patterns. We classify virtual system patterns as shared assets because they should be built in such a way as to maximize their reuse potential. This bears directly on the organizational structures that are most appropriate for creating and managing the lifecycle of these assets.
If shared assets are long-lived resources, then by contrast, transient assets are those that are relatively short-lived. Transient assets include virtual system pattern instances and volume allocations. Another key type of transient asset is a virtual application pattern. Virtual application patterns in general (and web app patterns specifically) contain specific deployable resources that are included as part of the pattern when it is built. In these "cloud-centric" types of applications, the topology is a transient asset; the actual topology can change at run time based on the execution of policies, such as scalability policies. The definition of the pattern is also a transient asset, because everything you specify in a virtual application pattern is application-specific. This includes the required resources (application servers, databases, and so on), as well as the policies that describe how those resources should be used. In this case, the EAR file or DDL is simply another application-specific asset that has the same relative lifetime as all the other aspects of that particular application.
Each asset in PureApplication System, regardless of whether it is a shared or transient asset, will have its own lifecycle and specific monitoring points within that lifecycle. In order to maximize the utility of PureApplication System, the IT organization needs to consider setting up an organization responsible for the following general aspects of asset management:
  • Maintaining the catalog or list of assets that are available.
  • Setting in place policies around managing the lifecycles of assets.
  • Tracking assets through their individual lifecycles and moving them along as appropriate.
While we won't specify an exhaustive list of the types of governance policies that you will need to put in place, here is a representative list that covers some of the most common scenarios.
Some of the common shared resource policies would include:
  • How long do you keep virtual system pattern versions available for use?
  • How long do you support previous or deprecated pattern versions?
  • What are the naming conventions for virtual system versions?
Some of the more common transient resource policies would include:
  • How much of each resource (storage, CPU, network) do you allow per pattern instance?
  • How long do you leave pattern instances (virtual system and virtual application) running if not in use?
  • How long do you keep pattern instances available if not currently in use?
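Governance questions like these are easier to enforce consistently when the answers are written down as data rather than tribal knowledge. The sketch below shows one way to encode them declaratively in Python; every name, threshold, and naming convention here is illustrative, not part of any PureApplication System API:

```python
import re

# Illustrative governance policies; all names and values are hypothetical
# examples of answers to the questions above.
SHARED_ASSET_POLICIES = {
    "pattern_version_retention_days": 180,   # keep pattern versions available this long
    "deprecated_version_support_days": 90,   # support deprecated versions this long
    "pattern_naming_convention": r"[a-z0-9-]+-v\d+\.\d+",  # e.g. "web-banking-v1.2"
}

TRANSIENT_ASSET_POLICIES = {
    "max_cpu_per_instance": 8,           # CPUs allowed per pattern instance
    "max_storage_gb_per_instance": 200,  # storage allowed per pattern instance
    "idle_stop_after_days": 7,           # stop instances unused for this long
    "unused_delete_after_days": 30,      # delete stored instances unused this long
}

def valid_pattern_name(name: str) -> bool:
    """Check a virtual system pattern version name against the naming convention."""
    return re.fullmatch(SHARED_ASSET_POLICIES["pattern_naming_convention"], name) is not None
```

A dictionary like this can then be consumed by whatever scripts or tooling the governance team uses, so the policy text and its enforcement never drift apart.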
When we consider the policies around the management of physical and virtual cloud assets (not just patterns), the list grows even longer. For instance, if a client owns more than one PureApplication System, then the management of pattern catalogs across two instances becomes an interesting and important consideration. What patterns are defined in the software catalog of each rack and what patterns are deployed onto each PureApplication System rack needs to be determined by a combination of factors, such as the need to support HA and DR configurations of specific patterns, the usage of each pattern (for example, a pattern with batch behavior like MDM might have high utilization only at certain times of the day), and the need to balance development needs like load testing against production needs. In general, it is possible to use racks both for production and development/test, as long as you set up appropriate divisions using cloud groups and environment profiles for isolation.


This section outlines the major best practices that we have identified for effectively adopting PureApplication System. We begin by discussing who creates the assets for which the policies are enforced, and what automation is needed to enforce the asset policies described.

IBM PureApplication System ships with a number of existing virtual system patterns that represent standard configurations of IBM WebSphere Application Server and other middleware, such as IBM DB2®. However, in most environments you will need to customize those patterns in one way or another, or build your own patterns that combine middleware from IBM and other vendors. Rather than have every project create its own patterns from scratch, a better approach is usually to have a common content team create a few standard patterns that represent the common uses across your organization as a whole, which each application team then customizes slightly as needed. Therefore, you'll need a few people in your organization with a wide view of all of its projects, so that they can perform that topology abstraction, determine which patterns need to be created, and then build them.
Likewise, IBM ships an even higher-level abstraction called virtual applications that provide developers with a way to specify policies that describe a desired outcome, and lets the system dynamically determine how to produce the correct middleware topology to meet those outcomes. Many business partners also provide virtual application patterns for their own applications through the IBM PureSystems Centre. However, advanced clients might also want to build their own virtual application pattern components customized to their own needs. For that purpose, IBM provides the Plugin Development Kit that enables developers to build these components. This requires some amount of thought and planning in order to figure out how to best meet your organization’s needs, so again, this is usually something that will be done by a centralized content team.
What we have found is that the alternative to this best practice (letting each development team create its own patterns) leads to less reuse than if forethought is put into developing a few reusable patterns from the start. It also becomes more difficult to manage the large and poorly organized pattern catalog that results from letting each individual developer or project team build its own patterns. When implemented properly, this foundational best practice creates a vehicle for implementing many of the best practices that follow.

As discussed earlier, PureApplication System contains two broad classes of assets, transient assets and shared assets. Transient assets, in particular virtual system pattern instances, have a particular lifecycle that is managed either by an operations team (in production) or by a development team (in development), or by a combined dev/ops team in both environments. What is worth pointing out is that most of the steps in this lifecycle can be automated. Let's consider the simple illustrative lifecycle of a virtual system pattern, shown in Figure 1.
In this simple case (which does not consider the possibility of patching a running virtual system instance), a newly provisioned pattern instance (let's say in a dev/test environment) begins in the Started state. At any time, an administrator can stop the virtual system instance, which does not release the reserved resources in the cloud, but does terminate the virtual machines comprising the virtual system. They could then Store the virtual system, which does release those resources, but keeps the virtual system instance available for redeployment at any time. All of these functions are available through the Instances > Virtual Systems menu in the workload console.
As discussed earlier, an IT organization should set up policies regarding how long an instance should reside in each of these states, and under what conditions they should move from one state to another. So, if an instance is not used for a period of time (which could be verified, for example, by referring to the HTTP server's web logs) then it should be stopped. It could then remain in that state for a fixed period of time before the owner restarts it, or it should be moved to the stored state. Finally, if an instance is in stored state and has not been started for some time, it can be deleted.
Once these policies are set in place and have been validated over a period of time, then our best practice recommendation is that they should be automated. This guarantees that the policies will be enforced consistently and that the shared resources of the PureApplication System will be utilized most efficiently. The PureApplication System command line interface and REST API contain functions that let you query the state of a virtual system pattern instance, as well as start, stop, store, and delete that instance. You can either write scripts to automate these functions and implement the defined policies, or you can write programs that manage this through the REST interfaces.
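As a sketch of what such automation might look like, the lifecycle transitions described above can be captured in a small policy function. The state names and thresholds below are illustrative assumptions; a real implementation would read each instance's state and last-use time through the PureApplication System CLI or REST API, and carry out the transition by invoking the corresponding start, stop, store, or delete operation:

```python
from datetime import timedelta
from enum import Enum

class InstanceState(Enum):
    """Simplified lifecycle states of a virtual system pattern instance."""
    STARTED = "started"
    STOPPED = "stopped"
    STORED = "stored"
    DELETED = "deleted"

# Illustrative retention thresholds; set these from your own governance policies.
IDLE_BEFORE_STOP = timedelta(days=7)      # stop if unused this long (e.g. per HTTP logs)
STOPPED_BEFORE_STORE = timedelta(days=14) # store if left stopped this long
STORED_BEFORE_DELETE = timedelta(days=30) # delete if left stored this long

def next_state(state: InstanceState, idle_for: timedelta) -> InstanceState:
    """Return the lifecycle state a pattern instance should move to, given how
    long it has been unused. The caller would then invoke the matching
    CLI/REST operation to actually perform the transition."""
    if state is InstanceState.STARTED and idle_for >= IDLE_BEFORE_STOP:
        return InstanceState.STOPPED
    if state is InstanceState.STOPPED and idle_for >= STOPPED_BEFORE_STORE:
        return InstanceState.STORED
    if state is InstanceState.STORED and idle_for >= STORED_BEFORE_DELETE:
        return InstanceState.DELETED
    return state  # no transition due yet
```

Keeping the policy decision separate from the API calls that enact it makes the thresholds easy to validate and adjust before the automation is trusted to act on its own.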
Even though we did not cover it in the previous example, the CLI and REST interfaces also contain functions that enable you to automate the patching and fix application processes for virtual systems as well. That is a more complicated case, and should be handled with care — in many cases, it might be better to simply replace the instance with another one at the newer patch level.
We have seen several different approaches to automating these lifecycle steps with different clients, utilizing a number of different technologies. In some cases, the automation is tied into existing system automation tools already in use in the datacenter; this is especially true if the organization is very mature in its use of virtualization and already had policies in place for managing virtual images as transient assets. However, in other situations where the organization is not as mature and does not have these policies and tools in place, we have seen companies be very successful in building net new automation with a combination of custom web portals and batch programs written within WebSphere Application Server.
As a final note, an emerging approach that we believe holds the most promise for hosting this kind of automation is IBM SmartCloud® Orchestrator. You can think of the implementation of these policies as being different workflows built using SmartCloud Orchestrator. While it is still early in the adoption of SmartCloud Orchestrator integrated with PureApplication System, we can easily envision standard libraries of different PureApplication System asset lifecycle policy implementations being built using SmartCloud Orchestrator. 

Essentially, when you build a virtual system pattern you are describing a general topology onto which you can suitably deploy transient application assets. If you build a clustered WebSphere Application Server pattern that connects to a standalone instance of IBM DB2 Enterprise Edition, it could potentially be used for many different applications that require that same topology. What differs for each particular application is detailed application configuration information, such as JVM configuration (heap size, and so on) and, critically, application assets such as EAR files and DDL. In a large organization with many applications, if each application built its own pattern you would end up with a proliferation of nearly identical patterns in the library. From simply an asset management perspective, it would become challenging to manage, maintain, and track all of these patterns. Where we have seen this approach taken with IBM Workload Deployer and PureApplication System, the result has been a pattern catalog that quickly becomes unmanageable.
What we encourage instead as a best practice is that virtual system patterns should not include purely application-centric information in the scripts that they execute. Instead, that kind of specific information should be read in by the scripts that run in the deployment process as configuration from a shared repository of information, or from instance-specific locations that are specified as part of the deployment process. Another possibility is to use an external deployment automation tool to deploy these application-specific configuration items into a pattern instance when that tool is triggered by an external event such as the completion of a new build. This encourages reuse of each virtual system pattern, and also reduces the total number of virtual system patterns that must be managed. This also fits with the tooling and policies that most development and operations teams already have in place for handling application build and automated deployment. It then becomes a matter of taking those tools that the IT department is already productive with and integrating those into the PureApplication System structure.
As a detailed example, consider the case of script packages within PureApplication System. In following this best practice, you would not include either direct references to application-dependent information in the script package (such as JDBC datasources, JMS resources, or JVM properties) or include specific application JAR or EAR files in your script package. Instead, reference these from an external location. This would enable a decoupling of the application lifecycle from the lifecycle of the script package and enable you to deploy the same pattern with the same script package into multiple environments, such as development, test, and production — even where the details of the JDBC datasources, JMS resources, or JVM properties differed in each environment.
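To make the decoupling concrete, here is a minimal sketch of a script package reading its environment-specific settings from externalized configuration instead of hard-coding them. The JDBC URLs, heap sizes, and environment names are invented for illustration; in practice the configuration would live in a shared repository or be passed in at deployment time rather than be embedded inline:

```python
import json

# Hypothetical per-environment configuration, kept outside the script package
# (e.g. in a shared Git repository or a configuration service). It is shown
# inline here only so the sketch is self-contained.
CONFIG_JSON = """
{
  "dev":  {"jdbc_url": "jdbc:db2://dev-db:50000/BANK",  "jvm_heap_mb": 1024},
  "prod": {"jdbc_url": "jdbc:db2://prod-db:50000/BANK", "jvm_heap_mb": 4096}
}
"""

def datasource_for(env: str) -> dict:
    """Return the settings a script package should apply for the given
    environment, so the same pattern and script package can be deployed
    unchanged into dev, test, and production."""
    return json.loads(CONFIG_JSON)[env]
```

Because the script package only knows the environment name it was deployed into, promoting an application from development to production requires no change to the pattern itself, only a different entry in the external configuration.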
IBM provides tooling inside PureApplication System (the Advanced Middleware Configuration tool) that can be used to separate configuration information from the specific deployments themselves. Likewise, IBM provides a tool called IBM SmartCloud Continuous Delivery (which fully integrates with PureApplication System) that helps development teams deploy and test software in a production-like environment in a straightforward, repeatable way. Another possibility is to use IBM UrbanCode uDeploy to automate artifact deployment into your pattern instances. 

A common question we receive is how big a pattern should be and what the scope of an application should be (for example, should an application be contained entirely within a single pattern, or spread across several patterns). In general, there is no single answer to this question, because the term "application" means different things to different people and organizations. To some organizations, a single application might comprise only a single EAR file running on an application server instance. In others, a single application can spread over multiple runtimes, including database runtimes, application server runtimes, portal server runtimes, and ESB runtimes. Rather, we should rephrase the question as "How do you determine the right scope of a virtual system pattern?" and "How do you determine the right scope of a virtual application pattern?" For those more tightly-scoped questions, there are some basic principles we can apply.
Essentially, we can phrase the recommendation as follows: A single pattern should contain only components that are directly related to each other and should not contain components that are shared by any components outside the pattern unless all those components are shared. Let's describe this by example.
Suppose you have two applications, a Web Banking application and a Wealth Management application. Each has EAR files deployed onto application servers, and both share common information in a shared database (the Customer Accounts database). However, the Wealth Management application also relies on a database that the Web Banking application does not use (a Portfolios database). In this case, it is best to divide these applications into three patterns along the following lines:
  • Pattern 1: Web Banking application servers and web servers
  • Pattern 2: Wealth Management application servers, web servers and Portfolios database
  • Pattern 3: Customer Accounts database
Now, any changes that happen to the individual applications are localized to the specific patterns that are affected by those changes. If a change needs to be made to the Web Banking application, then the Wealth Management application will continue to function unhindered, and vice versa.
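The scoping rule above can be made mechanical. The sketch below is illustrative only (the component names mirror the example, and the decomposition heuristic is our own): any component used by more than one application is pulled out into its own shared pattern.

```python
# Which components each application uses (names from the example above).
apps = {
    "web_banking": {"wb_app_server", "wb_web_server", "customer_accounts_db"},
    "wealth_management": {"wm_app_server", "wm_web_server", "portfolios_db",
                          "customer_accounts_db"},
}

def shared_components(apps):
    """Components used by two or more applications."""
    seen, shared = set(), set()
    for comps in apps.values():
        for c in comps:
            (shared if c in seen else seen).add(c)
    return shared

def suggest_patterns(apps):
    """One pattern per application (minus shared parts), plus one
    pattern per shared component, per the scoping rule."""
    shared = shared_components(apps)
    patterns = {name: comps - shared for name, comps in apps.items()}
    for c in shared:
        patterns["shared_" + c] = {c}
    return patterns

patterns = suggest_patterns(apps)
```

Applied to the example, this yields exactly the three patterns listed above: the Customer Accounts database lands in its own pattern, while the Portfolios database stays with Wealth Management because nothing else uses it.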

As discussed earlier, one of the approaches that enables you to gain maximum reuse from virtual applications is separating any application-specific information in them from the topology-related information that is general across an entire class of applications. The application-specific information would then be read from an external source at deployment time, most likely from an external source code repository like Git or Subversion. This same principle can and should be extended to the shared assets in general, such as the definitions of your virtual system patterns, your script packages, and your virtual images.
PureApplication System provides extensive import and export facilities for these shared assets, both between racks and from a rack to external storage. This is useful, for instance, where one PureApplication System rack is used for all dev/test environments while another is used for all production environments. However, what you really want is to tie the application-independent assets together with the application-dependent parts of your assets in a common external repository, so that they can be tracked as a single combined asset. This discipline ensures that if anything untoward happens during a change (say, an upgrade fails due to a poorly written script, or a major application bug necessitates a rollback to an earlier state), both the application topology (represented by the virtual system pattern, its images, and its script packages) and the application-specific information (such as the JDBC datasources and a specific version of the application EAR file) are available in a form that has previously been tested and shown to be stable.
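One simple way to get a "single combined asset" is to bundle the exported pattern definition, the script packages, and the application artifacts into one archive that the external repository tracks as a unit. The file names and contents below are hypothetical placeholders:

```python
import io, zipfile

# Application-independent assets and application-specific artifacts,
# bundled together so they version and roll back as one unit.
release = {
    "pattern/web_banking_pattern.json": '{"name": "Web Banking v1.3"}',
    "scripts/configure_jdbc.py": "# script package contents here",
    "app/WebBanking-1.3.ear": "ear-file-contents-here",
}

buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as z:
    for name, content in release.items():
        z.writestr(name, content)  # one archive == one tracked asset

archive_bytes = buf.getvalue()  # commit/tag this in the external repo
```

A rollback then means checking out one earlier archive, rather than hunting down matching versions of the pattern, the scripts, and the EAR file separately.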
Besides being a good general practice, this approach also addresses some specific gaps in the current version of PureApplication System. For example, IBM Workload Deployer and IBM PureApplication System do not (as of version 1.1.0.0) provide versioning for script packages. Likewise, the naming of script packages is arbitrary and nothing is enforced; while it is a best practice to indicate version numbers through a consistent naming policy, only keeping all versions of the script packages under source code control in an external repository ensures that they can be fully tracked and managed appropriately.
This leads to two approaches for dealing with the fact that versioning is not currently fully supported:
  • Version everything in the name. This is the most common approach: the names of all patterns and script packages include a version. To facilitate continuous integration, the build system can substitute some form of token in the cbscript.json and pattern.json files, and the resulting pattern can then be imported into the cloud provider as such. This approach works well but can result in an expanding catalog; it is therefore recommended that the build system also have a pruning method built into it (as described in this article) or some other form of regular purge of the catalog and instances.
  • Layer the pattern so that the fast-moving pieces are handled by an external system that understands builds and versioning. As described earlier, teams should build a standard set of "base patterns" that change relatively infrequently. Script packages are then layered on top of the base pattern; the job of a script package is to apply a specific version of a package on top of the base pattern. The script package typically takes the version of the package as an argument and can retrieve that content dynamically from a media library, build system, or repository. This is in line with our earlier recommendation to separate topology from configuration in virtual applications. It is also a good way to accomplish other objectives, such as integrating with existing build and automation systems, and maintaining automation or deployment assets that are not limited to the cloud. The downside of this approach is that the pattern is no longer a complete package, but requires the external build system or repository to function. However, we believe this tradeoff is worth the effort.
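The token substitution used in the first approach is simple enough to sketch. The @VERSION@ token is a naming convention assumed for illustration, not a product requirement:

```python
import json

def stamp_version(asset_text, version):
    """Stamp the build's version into an exported asset definition
    (e.g. cbscript.json or pattern.json) before it is re-imported
    into the catalog under a versioned name."""
    return asset_text.replace("@VERSION@", version)

# A fragment of a hypothetical cbscript.json containing the token:
cbscript = '{"name": "install_web_banking_app @VERSION@"}'
stamped = json.loads(stamp_version(cbscript, "1.4.2"))
```

Because the version lands in the asset's name, two builds can coexist in the catalog side by side, which is exactly what makes the pruning step mentioned above necessary.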
This second approach is productized in IBM SmartCloud Continuous Delivery, though it can be accomplished with many external automation solutions that offer version control and programmatic interfaces. For example, a script package can take a few parameters and call out to external systems such as BuildForge projects, Advanced Middleware Configuration projects, or IBM Urban{code} uDeploy. A number of teams have also implemented the "version everything" approach simply using IBM Rational® Team Concert and Jazz Build with Ant. The important points to keep in mind are that reusing mature automation systems is a good choice if continued investment in those systems is desired, and that input into patterns (such as version information) should be kept as simple as possible for the user. For example, if your pattern takes a build number as input, the pattern should have a sensible default (such as the last committed build) or should accept values such as {latest | committed | release}; a team member deploying a pattern should not need to spend effort looking up a valid value for any deployment parameter.
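That last recommendation amounts to a small resolution step inside the script package. The build index and build ids below are hypothetical; the aliases mirror the {latest | committed | release} values suggested above:

```python
# Hypothetical index maintained by the external build system.
build_index = {
    "latest":    "build-0142",  # newest build, tested or not
    "committed": "build-0140",  # last build that passed CI
    "release":   "build-0133",  # last build promoted to release
}

def resolve_build(requested="committed"):
    """Accept a friendly alias or an explicit build id, defaulting to
    the last committed build so deployers never have to look up a
    valid value before deploying the pattern."""
    return build_index.get(requested, requested)
```

A deployer who enters nothing gets the last committed build; a deployer who knows a specific build id can still pin it explicitly.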

Many clients have WebSphere Application Server cells and other system environments whose lifetime is very long — it usually takes a major version upgrade or something of similar magnitude to motivate a team to decommission an environment and create a new one. These are often long and complex processes in conventional environments. However, PureApplication System makes the creation of a new environment a fast and painless process. This leads to a major change in the way that environments are viewed, based upon the amount of effort required to create or re-create them.
If all of the important configuration information is stored in a pattern, rather than residing only in the configuration files of the cell, then it's easy to simply recreate the cell on any significant change to the configuration, even when adopting a new point release of WebSphere Application Server or other middleware. In IBM PureApplication System, the two cells can even run side by side for a few days or weeks until you are assured that the new cell is functioning normally, at which point the old cell can be deleted with the assurance that it could be recreated at any time if necessary. Another change we have seen is that clients used to build a few large cells because of the complexity of creating them; when a cell can be created simply from a pattern, it becomes possible for each project to be contained within its own cell, resulting in smaller cells, and more of them.
The major change for operations teams when they adopt this approach is that they must move all administrative aspects into scripts. While you might perform an administrative action, such as setting a tuning parameter or installing an application, in the console for that particular middleware (such as the WebSphere Application Server console), you should also, at a minimum, create a script that performs the same action and attach it to the virtual system pattern definition. This enables the recreation of the environment exactly as it stands at any point in time and guarantees that you can quickly recover from both planned and unplanned failures.
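A minimal sketch of that discipline: every console action also gets a scripted equivalent recorded for the pattern. The recorder below is plain Python; the recorded lines are illustrative wsadmin (Jython) calls, with the object ids and values invented for the example.

```python
recorded_actions = []

def record(line):
    """Capture the scripted form of an administrative action so the
    environment can be recreated exactly as it stands."""
    recorded_actions.append(line)

# The admin raises a thread-pool tuning parameter in the console and
# records the equivalent wsadmin script at the same time:
record("pool = AdminConfig.list('ThreadPool', server)")
record("AdminConfig.modify(pool, [['maximumSize', '50']])")
record("AdminConfig.save()")

# Attach this to the virtual system pattern definition for replay:
replay_script = "\n".join(recorded_actions)
```

The point is not the tooling but the habit: no change is "done" until its scripted form is attached to the pattern.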


This article demonstrated a few of the major best practices for integrating IBM PureApplication System into your IT governance policies and data center management strategies. There are many more best practices in other areas, such as application migration, that we will look at in follow-on articles. Likewise, in our discussion of assets we have hinted at the types of organizational responsibilities and changes that might be necessary to best leverage PureApplication System. The article Aligning organizations to achieve integrated system benefits with IBM PureApplication System begins describing those changes, but we will cover this subject in significantly more depth in another follow-on article. 

By developerWorks (IBM)

mercredi 10 juillet 2013

What We Can Expect From The Next Decade Of Technology

Technology tends to run in cycles.  Microsoft ruled the 90’s by building essential software for enterprises.  Then Apple created a new device-driven marketplace in which the consumer was king.  What will drive the next decade?
While these things are always hard to predict with any specificity, much of the writing is already on the wall. Humanlike, no-touch interfaces will combine with a pervasive array of sensors and intelligent back-end systems to form a new Web of Things.  Computing will become truly ubiquitous.
This new era of computing will be different than anything we’ve seen before.  Technology will cease to be something we turn on and off, but will become an inextricable part of not only our environment, but ourselves.  It is a future that is both utopian and dystopian (depending on your perspective), in that the human experience will change dramatically.

4 Digital Laws
When William Gibson said, “The future is already here – it’s just not evenly distributed,” he meant that the seeds of the future are sown in the present.  While there is no telling the exact composition of the fruit that those seeds will bear, we can expect the stalks to grow according to laws already apparent.
The information economy has been around long enough for us to have identified four digital laws that drive the growth and direction of technology:
Moore’s Law:  Back in the 80’s and 90’s, when computers first landed on our desktops, we were mostly concerned with processing power, because we wanted to be sure that our hardware would be capable of running the software that made computers useful.
Today, however, most of us pay little attention to processing speeds because we’re confident that whatever device we buy will be fast enough.  That’s because of Moore’s law, a principle first identified by Intel cofounder Gordon Moore in 1965, which states that the power of our chips doubles about every 18 months.
Kryder’s Law:  When Steve Jobs first returned to Apple, he revamped the product line and then went searching for the next big thing.  An avid music fan, he was disappointed with the primitive MP3 devices on the market and envisioned a new product that would allow him to carry around 1000 songs in his pocket.
In a matter of months, his team identified a supplier which could deliver drives that were both small enough and powerful enough to make good on his vision.  The iPod was born and Apple was on its way to becoming the most valuable company on the planet.
Of course, 1000 songs is no big deal anymore.  Today’s iPods carry 40,000, and you can buy a drive that can hold 1000 full-length movies for a few hundred dollars, less than the price of those original iPods.  This is thanks to Kryder’s law, under which storage doubles about every 12 months, even faster than Moore’s law increases processing power.
Nielsen’s Law:  Even after we stopped worrying about the speed of our computers and our hard drives became big enough that we didn’t need to clean out our e-mail archives every month, we still had trouble accessing content because Internet connections were so slow.  Now with 4G mobile connections, we scarcely have to worry about it.
This is thanks to Nielsen’s law, which observes that effective bandwidth doubles every 21 months.  That’s quite a bit slower than Moore’s law and Kryder’s law, which is why bandwidth has historically been such a limiting factor, but at current speeds we can do almost everything we want to, and 5G is expected around 2020.
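The three laws compound at very different rates. A quick sketch of the growth multipliers they imply over one decade (120 months) makes the gap concrete:

```python
def multiplier(doubling_months, horizon_months=120):
    """Capacity growth factor if capacity doubles every
    `doubling_months` months over the given horizon."""
    return 2 ** (horizon_months / doubling_months)

moore   = multiplier(18)  # processing power: roughly 100x per decade
kryder  = multiplier(12)  # storage: 1024x per decade
nielsen = multiplier(21)  # bandwidth: roughly 50x per decade
```

Storage ends the decade roughly ten times further ahead than processing, and twenty times further ahead than bandwidth, which is why bandwidth has been the persistent bottleneck.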
Kaku’s Caveman Law:  Now that we have eradicated most technical limits to everyday use, the most important law to pay attention to is what Michio Kaku calls the “caveman law”, which can be stated as follows:
Whenever there is a conflict between modern technology and the desires of our primitive ancestors, these primitive desires win each time.
It is this last law, riding the wave of the previous three, that will drive the next decade of technology.  Our devices will become not only vastly more powerful, but also more natural and eventually disappear altogether.  Effective computing will become less dependent on expertise and more a function of desire.

A New Digital Paradigm

While the digital laws may seem to be working steadily on our behalf, the numbers can be deceiving because they actually represent accelerating returns.  Simply follow the pace of Moore’s law alone and you will quickly realize that we will advance roughly the same amount in the next 18 months as we did in the previous thirty years.
At some point, a difference in degree becomes a difference in kind.  Having exhausted most of the possibilities we saw for computers a decade ago, we are beginning to focus our technology on completely new tasks, such as nanotechnology, genomics and energy. Clearly, we are entering a new digital paradigm.
To get an idea of how this will all play out, look at how supercomputing has progressed at IBM.  In the 90’s it focused its efforts on pure computation, eventually defeating chess champion Garry Kasparov with brute force.  In 2011, its Watson computer triumphed at Jeopardy!, a game show that requires intuition as well as intelligence.
Now, IBM is repurposing Watson for human professions, such as medicine, law and even customer service.  The line between man and machine is blurring beyond anything we could imagine even a few years ago.

Atoms Become The New Bits

There is probably no place the expansion of the digital economy is as dramatic as in the field of manufacturing, which until recently was assumed to be a low tech area best left to sweatshops and cheap labor.  Today, as Steve Denning reported in Forbes, companies from Apple to GE are finding it makes more sense to keep manufacturing closer to home.
The reason is that we are in the midst of a new industrial revolution in which the informational content of manufactured goods is becoming more valuable than the physical content.  An array of technologies, ranging from CAD software to 3D printing to lights-out factories populated entirely by robots, is reinventing the economics of making things.
Just as people gathered in places like the Homebrew Computer Club in the 70’s, there are now dozens of fab labs scattered across the globe where hobbyists can meet and build prototypes.  These designs can then be manufactured at just about any scale by services like Ponoko and Pololu.
Open software is now giving way to open hardware where, as Chris Anderson puts it, they “give away the bits and charge for the atoms.”  The maker economy is so potentially powerful that there is already talk of a Moore’s law for atoms that will bring accelerating returns to physical products.

Tech Becomes More Like Pharma

When the personal computer revolution took hold, it was driven by garage entrepreneurs.  Hobbyists tinkering with homemade kits could outfox big corporations and turn a clever idea into a billion dollar business.  This trend only deepened as software became dominant and any kid with a keyboard could compete with industry giants.
Smart companies embraced the start-up culture and became more nimble.  The tech industry began to resemble the entertainment industry, with the business press spending more and more time in sweaty convention halls hoping to catch a glimpse of the next blockbuster hit.
That’s changing as devices and applications become secondary to platforms.  The new paradigm shifts, such as IBM’s Watson, Google Brain and Microsoft’s Azure, take years and billions of dollars to develop.  The upshot is that the tech business is starting to look more like pharma, where the R&D pipeline is as important as today’s products.
And for better or worse, that’s where we’re heading.  Whereas previous tech waves transformed business and communication, the next phase will be marked by technology so pervasive and important, we’ll scarcely know it’s there.

vendredi 5 juillet 2013

IBM bets the server farm on flash







You may have seen the news that IBM has decided all enterprise Tier 1 storage should be flash-based and is putting in place plans to make the transition as fast as possible. Big Blue will be investing $1 billion to integrate flash into all of its servers and storage systems and is introducing its own flash-only appliance.
Why the sudden move? Data centers increasingly demand the ability to process information more quickly, but traditional hard drives have only shown a small increase in speed over the last few years. IBM claims that flash solutions can speed up processing by around 90 percent for banking and trading applications. Other benefits include lower energy consumption, less maintenance and a smaller footprint.
Demand for ever larger amounts of cloud storage is helping drive the change too. Speaking at last month's IBM Edge conference in Las Vegas, Tom Rosamilia, senior vice president, IBM Systems and Technology Group said, "Cloud computing and Big Data analytics are playing key roles in helping organizations lower operating expenses, improve efficiencies, and increase productivity."
Market research company IHS agrees with IBM's predictions. It says that flash-based storage is challenging the hard disk drive in all markets, not just the enterprise. SSD vendors had record-high profits in 2012, and not only because of inclusion in many data centers; the drives are also gaining popularity in the PC market. IHS predicts that by 2017 SSDs will account for just over 33 percent of all data storage shipments, an increase of 700 percent from current levels.
We've already seen the collaboration of IBM and Seagate to release the fastest enterprise hard drive in hybrid format and it's likely to be the first of many.
For those who don't need instant access to data, the electro-mechanical hard drive still has a price advantage. For this reason it should continue to dominate areas like archive and backup storage, where performance is less important. But when giants of the corporate IT world like IBM start to take flash storage seriously, you'd be foolish to bet against the trend.

By: Ian Barker
Source: http://betanews.com/2013/07/04/ibm-bets-the-server-farm-on-flash/


IBM PureSystems

IBM PureSystems introduce a fundamental shift in providing computing for business users. There are three systems, each specifically designed to provide infrastructure, application platform, or data analytics services. Each system simplifies setup and ownership by combining computing, data storage, management and software into a single product designed to work together.
Designed by IBM Design for International Business Machines Corp.

Source: http://www.idsa.org/ibm-puresystems