
Sunday, July 16, 2017

IBM z14 Mainframe = Trust and Security Benchmark

By Rich Ptak

Figure 1 z14 Design Goals (Image courtesy of IBM, Inc.)
IBM's introduction of the z14, the next-generation mainframe, raises the bar not only for enterprise security, scalability and performance, but also addresses pricing concerns. The first three are addressed with pervasive encryption and technological innovation; the latter with highly flexible container-based pricing models. 

In their announcement details, IBM focused on the enterprise and business relevance of the z14.
There are too many new features, capabilities, and innovative aspects to cover in one article.
We will highlight the design goals and provide a quick overview of the perennially interesting new pricing models. Then we look at the Open Enterprise Cloud aspects in a little more detail.

It's the z14 For Trusted Computing - Overview

The amount of business-critical data collected for rapid analysis and feedback continues to explode. Digital transformation is well on its way to reality for enterprises of all sizes. Data sharing includes an increasing number of partners and customers. The issues around data security, data integrity, data authentication, and the risk of compromise are of increasing concern. At the same time, an operating model built on the hybrid cloud (with collocation, shared infrastructure, multi-tenancy, etc.) is clearly establishing itself as the preferred enterprise computing infrastructure model for the foreseeable future. This puts enormous pressure on existing security and data-handling approaches to adapt and become more innovative and reliable.

In this increasingly interconnected, interactive world, trust, security, and risk reduction and management are critically important. It is such an operating environment that IBM aims to serve as it introduces the z14, the latest generation of mainframe computing.

IBM worked with three basic design goals and one major pricing innovation for the z14.
The design goals (see Figure 1) first:
  1. A new security model - pervasive encryption as the new standard for data protection and processing, with no changes to apps or impact on SLAs - the security perimeter extends from the center to the edge - designed for security, processing speed and power; the most efficiently secure mainframe ever. 
  2. Fully leverage continuous, in-built intelligence - complement and extend human-machine interaction with direct application of analytics and machine learning capabilities to data where it resides - leverage continuous intelligence across all enterprise operations.
  3. Provide the most open enterprise operating environment - new hardware, open standard firmware, operating system, middleware and tooling that simplify systems management for admins with minimal IBM z knowledge - more Open Source software supports agile computing, e.g. leveraging and extending existing APIs as service offerings and easier scaling of cloud services.

Next, pricing innovation:

After some extensive research with customers, IBM is introducing three new pricing models.
The goal is to provide increased operational flexibility with prices that are significantly more
competitive and attractive for modern digital workloads. Container Pricing for IBM z is designed
to provide "simplified software pricing for qualified solutions, combining flexible deployment
options with competitive economics that are directly relevant to those solutions." We provide
some details later. First, a look at the Open and Connected aspect of the z14.

Open and Connected

Today's market demands open, agile operating environments, and services with new or
extended capabilities being introduced rapidly and seamlessly. All to be delivered through an
agile, open enterprise cloud. The z14 software environment is designed to meet those expectations.
Advanced DevOps tools that leverage new and existing APIs can cut service build times by
90%. To speed innovation, IBM's extensive ecosystem of partners is developing and
delivering thousands of enterprise-focused, open source software packages to support the
mainframe in accelerating the "delivery of new digital services through the cloud." Let's look at
this a little more closely.

The new z14 is about leveraging APIs to speed development and ease access to mainframe
capabilities. The goal is to make the mainframe's power easier for developers and users to
access, simpler to use, and faster to deliver to market. This is achieved with new hardware,
firmware, operating system, middleware and tooling that simplify systems management tasks.
These also make the process easier for system administrators with minimal IBM z System
experience and knowledge.

The procedure breaks down into four tasks:

  1. Discover - leverage existing investments by helping developers quickly and automatically discover existing applications and services that can then be converted to API services. 
  2. Understand - prior to going into production or implementing application changes, identify the dependencies and interactions between the applications and APIs to determine how they are affected by any changes. Knowing where and what an API touches avoids downtime and reworking of changes. It also minimizes the risk of removing protection of critical data by exposing an API. 
  3. Connect - provide easy, automated creation of RESTful services based on industry-standard tooling to rapidly create new business value, e.g. link a vacation search to destination-appropriate clothing, hotels, interesting sites, etc. Or, associate an order for heavy equipment with a link that suggests purchasing insurance, maintenance, installation or operating services. (A simple sketch of consuming such a service follows this list.) 
  4. Analyze - use operational analytics and data collection to create an enterprise view of the mainframe and the surrounding operational environment. Integrate the z System data with data from over 140 different data sources in any format. Search, analyze and create a visual representation of service activities and interactions using SIEM tools such as Splunk or open source Elasticsearch. This helps in early identification of potential problem areas such as performance bottlenecks or operational conflicts.
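
To make the "Connect" step concrete, the sketch below shows what a consumer of such a RESTful service might look like. It is a minimal, hypothetical example: the gateway host, path, and fields are invented for illustration and are not a documented IBM interface.

```python
# Hypothetical illustration: calling a mainframe transaction that has been
# exposed as a RESTful service, then using the JSON response.
# The host name, path, and fields are invented for this sketch.
import requests

API_BASE = "https://zgateway.example.com/api/v1"   # invented gateway URL

def get_order_status(order_id):
    """Call a (hypothetical) order-status service fronting a mainframe transaction."""
    resp = requests.get(
        "{}/orders/{}/status".format(API_BASE, order_id),
        headers={"Accept": "application/json"},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()

if __name__ == "__main__":
    print(get_order_status("A1234"))
```

The same response data can then be forwarded to a SIEM or analytics index (the "Analyze" step) alongside the other operational feeds mentioned above.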

New capabilities add dramatically to the mainframe's already impressive performance and
scalability. These include zHyperLink, a new direct-connect, short-distance link designed for
low-latency connectivity between the z14 and FICON storage systems. It can lower latency by
up to 10x, which can reduce response time by up to 50% in I/O-sensitive workloads without any
code changes (for example, if I/O waits account for roughly half of a transaction's response
time, removing most of that wait approaches a 50% reduction). The z14 also offers, as a
purchasable option, Automatic Binary Optimizer for z/OS®, which automatically optimizes binary
code for COBOL applications and can reduce their CPU usage by 80% without recompilation. One
z14 can scale out to support an impressive 2 million Docker containers. Now, let's look at
pricing.

Container Pricing for IBM z

Any mainframe discussion is bound to include a discussion of pricing policies, management,
and control. Customers want predictability - to know what the bill will be. They want
transparency - knowing how billing is calculated. They want visibility - to understand the
impact of changing or moving workloads. They want managerial flexibility - ability to adjust
workload processing and scheduling to balance their needs with computing costs.

IBM's solution is the concept of Container Pricing for IBM z, which provides line-of-sight pricing
to make the true cost highly visible. It applies to a collection of software collocated in a single
container and determines a fixed price for that single container[1] of software, with no impact on the pricing of anything external to the container.



[1] A container is a collection of software treated for pricing purposes as a single item. The collection is priced separately and independently of any other software on the system.

A container pricing solution can be within a single logical partition or a collection of partitions.
Multiple, collocated and/or stacked containers are permitted. Separate containers with different
pricing models and metrics can reside in the same logical partition. Container deployment is
flexible to allow the best technical fit, independent of the costs. Three types of Container
Pricing solutions are offered now:
  1. Application Development and Test solution (DevTest) - provides DevTest capacity that can be increased (up to 3x) at no additional MLC cost. Clients choose the desired multiplier and set the reference point for MLC and OTC software. Additional DevOps tooling at unique, discounted prices is available. 
  2. New Application solutions - special, competitive pricing for those adding a new z/OS workload to existing environments. There is no impact on existing workload prices. The container size determines the billing for capacity-priced IBM software. 
  3. Payments Pricing solution - offers on-premise, Payments-as-a-Service on z/OS based on IBM Financial Transaction Manager. It applies to software or software-plus-hardware combinations. 
This is a simplified review of the new model. Contact IBM for more detailed information. IBM
will be refining and adding models to meet customer needs. Now, on to the other design goals.

Trust + Security thru Pervasive Encryption

Data and application security in enterprise IT have taken a beating in the last few years. Traditional security techniques and barriers have fallen victim to numerous attacks as well as rapidly evolving threats and scams. Successful attacks and breaches have come from sophisticated external criminals as well as from malicious or careless insiders. Victims range from large, sophisticated financial institutions to national governments and ministries. Even blockchain ledgers have proven vulnerable to weak implementations and clever hackers.

With data widely recognized as an asset of escalating value, the risks and costs of such breaches increase. Traditional security methods focused on trying to prevent successful intrusions or minimizing damage with selective encryption, rapid detection, and blocking. Selective data encryption proved too expensive, resource intensive and inconsistent in application. And significant risk remains when some data is left unprotected or weakly protected as hackers and intruders become more sophisticated. Also, new policies or evolving compliance requirements can make once non-critical data critical, further weakening selective methods.

IBM's solution was to design the z14 with hardware technology and software protections that make pervasive encryption, from the edge to the center and including the network, affordable, efficient and rapid. All data is encrypted all the time, without requiring any changes to applications and without impacting Service Level Agreements (SLAs).

Application of Machine Learning

Successfully leveraging artificial intelligence (AI) in the enterprise has been an elusive goal
for decades. Early attempts were frustrated by limitations in expertise, processing power, high
costs and the sheer amount of effort required to build and test models.

Today, the maturation and automation of modeling techniques along with improvements in
infrastructure and technology have allowed AI, more accurately described as machine learning,
to come into its own in the enterprise. Examples in the z14 include optimized instructions,
faster processing of Java code, and improved math libraries that speed and improve analytics.
The 32TB of memory means the z14 can process more information and analyze larger
workloads and in-memory databases in real-time. The results come in the form of prompt
availability of actionable business insights that result in better customer service. The
announcement contains much more about machine learning applications as well as Blockchain
capabilities - topics for future coverage.

The Final Word

The new z14 is an impressive and worthy addition to the IBM mainframe family. It promises
"Trusted" computing on the platform that has been the benchmark for processor security. That
is a much-desired deliverable in a highly integrated, totally connected, rapidly evolving world of
digital enterprise. There are many more attractive features in the new z14. These include
IBM's unique Blockchain services, which provide significant protection against fraud. There's
the ability to rapidly build microservices, choosing from over 20 different languages and
databases. There's free access to the mainframe for those interested in testing the
ease-of-use features or expanding their mainframe skillset (see https://ibm.biz/ibmztrial).

By delivering efficient, affordable, speedy 100% end-to-end encryption of all application and
database data, the z14 pushes infrastructure boundaries to achieve a uniquely secure environment
without requiring any changes to applications, services or data. IBM has also implemented
unique encryption key protection that removes the risk of keys being exposed. To do so without
changing applications or impacting the ability to meet SLAs is remarkable. IBM estimated encryption overhead at
"low-to-mid" single digits.

IBM's focus on automating and facilitating the utilization and optimization of API services is a
very smart move on their part. A long-standing critique of the mainframe has been that it is
inaccessible, living and operating in its own isolation. While true in the past, the last few years
have seen a dramatic change with the emergence of the "Open, Connected and Innovative"
mainframe. The change has been rapid and significant.

The significant impact of the introduction of Linux on Z and the proliferation of numerous Open
Standard solutions, APIs, tools and interfaces cannot be ignored. The introduction and
movement of numerous OpenStack products to the mainframe along with the addition of agile,
Open Source DevOps tools and APIs have made the mainframe's extensive capabilities easier
to access and faster to exploit by a much wider audience. This is reflected in the growth of the
highly diverse ecosystem of mainframe partners, ISVs and developers working with IBM. The
z14 looks to accelerate that process.

The mainframe, IBM's longest running product, has seen its ups and downs over the last 50+
years. Predictions of its death have filled far too much column space in IT commentary, stories
and speculation. The z14 fills a well-defined, valuable place in the IT
infrastructure.

Friday, July 14, 2017

IBM and Nutanix deliver no-compromise, on-premise Cloud computing with IBM Hyperconverged Systems powered by Nutanix

By Rich Ptak

Figure 1 IBM CS822  (Photo courtesy of IBM, Inc.)

Congratulations to IBM[1] and Nutanix[2] on their July 11th announcement of the industry’s first hyperconverged system that combines Nutanix software with POWER8-based systems (IBM CS821, IBM CS822). They are delivering two significant innovations:
  1. Immediate access to a fully-configured, full-stack, workload-optimized system with servers designed for data and high-performance workloads, e.g. high-volume transaction and cognitive analytics. This includes scale-out Linux workloads like IBM WebSphere® Application Server, NGINX, IBM BigInsights/Hadoop, etc.
  2. Vastly simplified, automated implementation of on-premise cloud-like operation. Nutanix’s world-class Enterprise Cloud Platform[3] makes cloud creation transparent as it simplifies operations and management with one-click access, operation, and management in an on-premise cloud-like environment.

Configuring the optimal combination of compute infrastructure elements (processor, storage, network, operating software, etc.) for a workload has been a challenge forever. The perennial trade-off has been between the heavy burden and expertise required to design a system for optimal workload performance; and the alternative of adapting the workload to an off-the-shelf system. Custom configurations involve a resource intensive, manual process requiring significant expertise with the significant downsides of cost, time and the need for specialized support. The standard alternative sacrifices performance, capacity, scalability or other features, for a lower cost, immediate availability and standard support. In today’s rapidly evolving, highly competitive market, such compromising may yield short-term advantage, but will more likely result in long-term problems.

A cloud-based solution would be an alternative, for those with the necessary expertise in cloud infrastructure design, configuration, management, etc. Or, a willingness to depend upon cloud provider expertise. Not to worry. Just last May, IBM and Nutanix announced plans to attack the problem head-on with a multi-year initiative to provide an integrated solution that combines Nutanix’s Enterprise Cloud Platform software with IBM’s Hyperconverged Systems optimized for specific enterprise workloads.

The first results are seen in these turn-key hyperconverged fully-scalable, on-premise cloud systems. They are impressive. There’s a lot more to the announcement. So, talk to IBM to get the full details. We expect customers will agree.

Monday, July 10, 2017

Compuware Further Boosts Mainframe Agility with Topaz for Total Test Enhancements and Integrations with Leading DevOps Tools

By Rich Ptak



Figure 1 Topaz for Total Test Speeds and Simplifies COBOL Unit Testing
Image courtesy of Compuware


It's a new quarter and time for a Compuware mainframe product announcement. This time the focus is on enhancements to Topaz for Total Test. As you may recall, we last commented on Topaz for Total Test's powerful automation capabilities for application test creation, implementation, execution, and cleanup at its introduction last January[1]. See Figure 1 above.

 Earlier announcements have addressed such topics as Source Code Management[2], Release Automation and Application Deployment and Application Audit, which increases overall cybersecurity and compliance with automated auditing of user behavior with applications. Integration with SIEM tools such as Splunk[3] allows the user to get a cross-enterprise view that speeds identification and detection of non-compliant and security threatening behavior by users. 

Topaz for Total Test, the subject of this commentary, addresses problems of COBOL code change management with groundbreaking automation and innovations in COBOL code testing. Given the abject failure of re-platforming initiatives, large enterprises hoping to avoid digital irrelevance must aggressively modernize their mainframe DevOps practices. Key to the modernization and ‘de-legacing’ of mainframe applications is the adoption of unit testing for COBOL code that is equivalent to and well-integrated with unit testing as practiced across the rest of the enterprise codebase. That is exactly the challenge Compuware addresses with Topaz for Total Test.

Compuware has committed to build on its solution base using agile, continuous, modern processes to deliver significant enhancements and extensions. In fulfillment of that commitment, they are developing new DevOps Toolchain integrations and extended support for DB2 SQL. Here is what they are bringing to market.

What’s new?


Compuware made an impressive start in January with the initial release of Topaz for Total Test, which enables developers at all skill levels to perform unit testing of COBOL code similar to how it is done for other programming languages (Java, PHP, etc.). Program Stubs were also a significant and highly popular innovation. Stubs allow sub-program calls to be disconnected from the main program. Therefore, the subprograms can be tested independently of the main program. Data Stubs eliminate the need to access data files or DB2 Tables. Testing becomes much easier, less complicated, less risky, and completes considerably faster. Testing can be repeated without disrupting the production environment, thereby significantly increasing operational flexibility. It was no surprise that customers responded enthusiastically by using stubs extensively and quickly identifying specific extensions to make the product even more attractive.
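
For readers more used to distributed-platform testing, the idea behind program and data stubs is the same one served by test doubles elsewhere. The sketch below is a Python analogy only, not Compuware tooling: a "subprogram" dependency is replaced with a stub so the calling logic can be unit tested in isolation; the names and data are invented.

```python
# Python analogy of stubbing a subprogram call; not Compuware's Topaz tooling.
import unittest
from unittest import mock

def lookup_customer_balance(customer_id, db_reader):
    """'Main program' logic: delegates data access to a 'subprogram' (db_reader)."""
    record = db_reader(customer_id)      # in the real case this would hit DB2/VSAM
    return round(record["balance"], 2)

class TestLookupCustomerBalance(unittest.TestCase):
    def test_balance_is_rounded(self):
        # The stub replaces the data-access call, so no live database is needed.
        stub_reader = mock.Mock(return_value={"balance": 101.239})
        self.assertEqual(lookup_customer_balance("C001", stub_reader), 101.24)
        stub_reader.assert_called_once_with("C001")

if __name__ == "__main__":
    unittest.main()
```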

 
Compuware quickly moved to explore the possibilities for further automation of the unit test process. Developers, like all skilled craftsmen, have favored tools. For developers, these include Jenkins (toolchain management), SonarQube (quality control) and Compuware’s own ISPW (source code management and deployment).

Compuware recognized the opportunity to completely automate the DevOps processes of Build – Test – Deploy. They also noted that the ability to test independently of the main program and without impacting operations was highly valuable as it simplified a frustrating, time-consuming task. Further, data stubbing could be used in other areas to eliminate or reduce dependencies to further strengthen, simplify and speed testing. This release responds to those requests. The results are the enhancements included in the announcement. They are:
  • Topaz for Total Test integration with Jenkins which enables COBOL unit testing to be automatically triggered as part of a DevOps toolchain and/or continuous delivery process. The result is a significant increase in efficiency.  
  • Topaz for Total Test integration with SonarSource’s SonarQube ensures quality trends are visible throughout the development process by displaying pass/fail testing results along with all cross-platform DevOps activities. 
  • Topaz for Total Test integration with Compuware ISPW tightly couples test cases with source code to enable the sharing of test assets, enhanced workflow and the enforcement of testing policies as part of the DevOps toolchain.
  • New “stubbing” for DB2 databases allows developers to run unit tests without requiring an active connection to a live DB2 database. This is huge. Testing can be done against real data without impacting or risking corruption of the production database (a generic illustration of the idea follows this list). With stubbing, Topaz for Total Test can test code processing most types of mainframe data. The unique capability for stubbing of DB2, VSAM, and QSAM data types means that creating repeatable tests is much easier. Data stubs can be created automatically with no re-compilation needed. 
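
To illustrate the general idea behind data stubbing, again as a cross-platform analogy rather than the DB2 mechanism Topaz uses, the sketch below runs a unit test against an in-memory SQLite stand-in seeded with sample rows, so no live database connection is required; the table and column names are invented.

```python
# Generic illustration of testing against stubbed data instead of a live database.
# An in-memory SQLite table stands in for captured test data; names are invented.
import sqlite3
import unittest

def high_value_orders(conn, threshold):
    """Logic under test: return order ids whose amount exceeds a threshold."""
    rows = conn.execute(
        "SELECT order_id FROM orders WHERE amount > ? ORDER BY order_id",
        (threshold,),
    )
    return [r[0] for r in rows]

class TestHighValueOrders(unittest.TestCase):
    def setUp(self):
        # 'Data stub': an in-memory table seeded with sample rows.
        self.conn = sqlite3.connect(":memory:")
        self.conn.execute("CREATE TABLE orders (order_id TEXT, amount REAL)")
        self.conn.executemany(
            "INSERT INTO orders VALUES (?, ?)",
            [("O1", 50.0), ("O2", 500.0), ("O3", 1200.0)],
        )

    def test_threshold_filters_orders(self):
        self.assertEqual(high_value_orders(self.conn, 100.0), ["O2", "O3"])

if __name__ == "__main__":
    unittest.main()
```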

There’s still more in the announcement. The DB2 data used to make SQL statement stubs can be collected automatically, in real-time, from on-line test databases. These data stubs can be saved and used to create and run new scenarios for use by other testers. Data stubs can be reused or overwritten by multiple testing programs. Decoupling code into subprograms allows unit testing to be done in smaller increments, speeding results, simplifying testing and allowing for more granular analysis and better testing. All this means testing can be done without requiring a large test system. Testing can be done on-line with no risk to the production database. Job Control Language (JCL) can be created and reused from Profiles, eliminating the need to recreate it every time. 

The Final Word
Compuware is aggressively pursuing a strategy directed at “Mainstreaming the Mainframe.” Their strategy recognizes and is dedicated to overcoming structural and operational issues that make mainframe utilization and COBOL code maintenance a complex, slow and intimidating task, especially for those new to the mainframe.

They do so by delivering “big step” IT tools that introduce the latest new-to-the-mainframe capabilities, such as automated unit testing. But, they also extend and enhance existing solutions by automating functions or processes, providing interesting product integrations and extending APIs to simplify or ease time-consuming mainframe tasks that annoy admin and operations staffs. To accomplish this, Compuware has employed and made contributions in visualization, code analysis, behavior auditing, automated unit testing, operations management, etc.  

Topaz for Total Test provides positive proof of Compuware’s success as it benefits both IT and production staffs. IT staff benefit from access to familiar, modern tools and more efficient processes. IT productivity and performance benefit from increased automation. Faster collection of test data by exploiting Compuware’s Xpediter is one example. The extensive use of automation in test creation (such as collecting test data) and execution improves the quality and depth of testing. Integrations with Jenkins, SonarQube, and ISPW further empower less experienced mainframe developers to work on multi-tier apps. The overall result is that program updates, changes and improvements move more quickly through the DevOps process to get higher quality code into the production environment.

Operations benefits as both customers and users see improvements. Users benefit from better quality code with few problems and faster introduction of changes to meet business and operational needs. Customer satisfaction improves when they get the benefits of updated and modified code with fewer problems.

This is the 11th consecutive quarter that Compuware has delivered on its “Mainstreaming the Mainframe” commitment to improve and make more attractive the Mainframe ecosystem. Figure 2 at right summarizes their path to this point.



Figure 2 Compuware Delivery Record to Date
Image courtesy of Compuware

Their performance to-date has been impressive by any measure. And, from what we’ve been told and heard from them, they fully expect to continue to deliver at an equivalent pace and scale for the foreseeable future.

We congratulate Compuware on their success so far, as well as their commitment to the future. Compuware’s efforts have positively impacted the mainframe market, to the benefit of everyone involved in that market whether partner, customer, service provider or vendor. Look at what they’ve done; see if you don’t agree.




[2] For more details on these and other topics see: http://www.ptakassociates.com/content/
[3] Read about the full range of Splunk products here: https://www.splunk.com/

Tuesday, May 23, 2017

IBM and Hitachi collaborate to make a historic deal!


IBM and Hitachi have announced today that IBM will supply customized IBM z system mainframe technology running the Hitachi VOS3 operating system. Hitachi will supply the system to customers in Japan. Existing IBM services such as IBM Cloud Blockchain (available on Bluemix) will be available. Clearly, it is no exaggeration to call this a “historic agreement”. Here’s why.

This agreement benefits both companies as it marks a historic first for IBM in sharing its mainframe technology. For Hitachi, the agreement means that the company will be able to offer the latest in large system technology with additional features and specifications specifically targeted to its customers. In turn, IBM acquires a strong technology partner to collaborate with for future mainframe development.

Hitachi customers benefit because they have assured access to world-class technology without having to undertake any conversions. Continuing use of the Hitachi operating system preserves existing investments (typically quite significant for large system customers) in applications software, staff training, and experience with Hitachi’s operating system.


The economics for both companies are quite compelling. Hitachi is spared making the massive investments needed to go it alone in the large systems market. IBM strengthens and broadens its commitment to Open innovation on the mainframe by extending their existing open source software and cloud standard support. It also broadens the market for z Systems technology already in place and supporting the world’s top 10 insurance firms, 44 of the top 50 global banks, 18 of the top 25 retailers and 90 per cent of the largest airlines. This promises to be a win-win for all involved parties. 

Kudos to Hitachi and IBM for this well thought out collaboration. 

BMC’s Mainframe Research Survey – Open until June 4th for your input!

The world of mainframes is radically different from what it was just a few years ago. This is due to some significant changes in the mainframe itself as well as to an explosion in the number and type of products and tools used to manage and operate it.

Automation of tools, along with simplification and speeding up of processes for operations and maintenance, plays a big role. These have made the power and unique advantages of the mainframe both simpler and easier to access, to the benefit of experienced mainframe users and those new to the platform.

Especially important in expanding access to the mainframe to a whole new generation of potential users were the efforts of some forward thinking, very smart vendors who made available numerous mainstream DevOps, open source and emerging technologies that attracted new users even as the products facilitated changes to well-established patterns of management and usage for the better.  

Expanding interest in and application of machine intelligence, Big Data, mobile computing, security, Blockchain technology and more has increased the need for such mainframe features as high reliability, built-in security, and high availability, as well as the ability to handle heavy processing loads against large databases. The combination of these has significant impact, influencing and changing where and how the mainframe is used in the enterprise.  

All this activity is of interest to the entire mainframe community. For the past 12 years, BMC has conducted an annual mainframe survey to examine the state of the mainframe. As is typical, each survey provides insight even as it raises additional questions. As a result, every year's survey has new questions to explore new topics and drill down for more details in specific areas. 

BMC is at it again, looking for your input and opinions on the state of the mainframe within your organization, as well as personal viewpoints, concerns, etc. On May 18th, BMC launched its 12th Annual Mainframe Research Survey. This survey is one of the largest in the industry, reporting on some of the most important mainframe usage trends. Several new questions and topics have been included this year. This year they want to know more about the types of workloads you run on the mainframe, what work is growing and what is shrinking, and what new workloads and methods are employed to manage them. You have until June 4th to complete the survey.  


Monday, May 22, 2017

Red Hat on Cloud Migration and DevOps

By Bill Moran and Rich Ptak

In the recent Red Hat Stair Step to the Cloud[1] webcast, Red Hat Senior Manager, James Labocki discussed issues in Cloud migration and software development. As usual for Red Hat webcasts, he presented good information about the changes and problems (for DevOps) driving migrations, as well as describing how Red Hat can assist customers interested in migrating to a Cloud. We’ll start with James’ review of IT business performance issues including some that should concern anyone involved in IT.

What’s driving organizations to become software companies?

Today, it is a competitive necessity for enterprises to use the internet as a sales and information channel to customers. Proliferating smartphones and mobile devices make it critical for organizations to compete and service customers using internet communications and mobile apps. Software’s role and influence in business operations drives organizations to act as software companies. In turn, this raises the question of just how well are IT departments meeting the demands of various lines of business (LoBs). Are they effectively delivering the software that meets business needs?

Based on a 2012 McKinsey study[2] report, Red Hat reveals that IT is not performing well. The average IT project is both way over budget (45%) and delivered late (7%). IT fails to deliver expected value – yielding significantly (48%) less than forecast. Abandoned, uncompleted projects mean that resources poured into them are wasted. Thus, actual losses may be worse than these statistics indicate.

Separating software from non-software projects, McKinsey reveals software-based projects perform much worse on certain metrics. On average, software projects are 66% over budget and 33% are late. Even more startling, some 15% of large IT projects (greater than $15M in the initial budget) went so far off track that they threatened the very existence of the sponsoring company.

What’s behind these challenges? How might one deal with these problems?

Before discussing Red Hat’s solutions, we want to highlight a major management problem. We recommend reading McKinsey’s[3] report for additional insight. The report’s indication is that in many companies, senior managers are neither knowledgeable about nor involved in IT related projects. Therefore, they avoid participating in project decision-making. This is very risky, especially considering the following:
  1. A percentage of IT projects will go disastrously wrong (i.e. black swans in McKinsey terminology),
  2. Aberrant projects can potentially bankrupt a company, and
  3. Chances of a disaster increase dramatically with project length.
The obvious recommendation is:
Senior management must not only be aware of but also involved in major IT projects, with a special focus on long-running projects. Regular checkpoints and on-going management attention[4] are mandatory, starting prior to launching any serious project.

This is NOT to say senior management should make technical decisions. They MUST be involved and provide oversight to avoid serious problems in process, administration, impact assessments, tradeoffs, etc., and issues that call for overall managerial expertise. This deserves more attention and discussion than is possible within the scope of this overview.

Red Hat Cloud solutions and services

Red Hat’s overview of IT operational challenges and problem areas underscores the need for an IT project partner. They underscored that conclusion with examples of customer success stories detailing how Red Hat products and services provided substantial assistance leading to a successful Cloud migration. The webcast focused on two Red Hat Cloud solution offerings, Red Hat Cloud Infrastructure and the Red Hat Cloud Suite of component products.

An overview of the Red Hat Cloud Infrastructure includes:
Figure 1 Red Hat Cloud complete infrastructure and tools (Chart courtesy of Red Hat)
Red Hat’s Cloud infrastructure includes a complete set of tools and infrastructure to meet enterprise needs. This includes platforms for data center virtualization and private on-premise cloud, as well as management of those and of leading public cloud providers. This allows customers to seamlessly run applications on virtual machines across a hybrid cloud. Whether for a private cloud, public cloud or hybrid configuration, Red Hat appears to have the experience and assets needed for customer success.

The Red Hat Cloud Suite:   
Figure 2 Red Hat Stand-alone tools and solution suite (Chart courtesy of Red Hat)
The Red Hat Cloud Suite builds upon the Red Hat Cloud Infrastructure by including the OpenShift Container Platform. This allows organizations to develop, run, and manage container-based applications in a hybrid cloud. We will provide an overview of Infrastructure problems along with selected Red Hat solutions. A detailed product review exceeds the scope of this paper.

Problems with Infrastructure and cloud migration

Red Hat divides infrastructure issues into two categories: those associated with the existing in-place cloud (or physical servers), and those of a new cloud environment. For existing infrastructure, Red Hat identifies two major problems:
  1. Slow delivery of applications
  2. The sheer complexity of managing dynamic virtual machines (VMs) and systems environment.
In today’s hyper-active, highly competitive business environment, LOB management demands apps be frequently updated. New apps must be deployed into production on an ever more rapid schedule. An existing environment with a multiplicity of VMs (let alone one consisting of physical servers) becomes increasingly difficult to manage.


Red Hat Solutions

Red Hat software tools address these problems by:
  1.  Helping to speed up slow application delivery by automating the creation of environments for developers,  
  2. Reducing infrastructure complexity by adding a management platform able to optimize the environment.
When developers request a new test environment, they need it ASAP. It is untenable for IT Operations to take days to satisfy system requests. Red Hat’s catalog-based automation allows developers to select from pre-defined, pre-configured environments. This saves time, sometimes cutting system delivery times from days to minutes. Of course, other bottlenecks may exist that slow down application delivery. Red Hat has, in-hand and in development, a rich set of tools to help identify the causes and aid in resolving them.
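
As a purely illustrative sketch of what catalog-based automation looks like from the developer's side (the endpoint, template name, and fields below are invented, not the CloudForms API), self-service provisioning typically reduces to an authenticated request that orders a pre-defined environment template:

```python
# Hypothetical illustration of ordering a pre-defined environment from a
# self-service catalog over REST. URL, template name, and fields are invented;
# consult the actual management platform's API documentation.
import requests

CATALOG_URL = "https://cloud-mgmt.example.com/api/catalog/orders"  # invented

def order_environment(template_name, requester, ttl_hours=72):
    """Request a pre-configured dev/test environment by catalog template name."""
    payload = {
        "template": template_name,   # e.g. "rhel7-tomcat-postgres-dev"
        "requested_by": requester,
        "ttl_hours": ttl_hours,      # retire the environment automatically later
    }
    resp = requests.post(CATALOG_URL, json=payload, timeout=30,
                         auth=("api-user", "api-token"))
    resp.raise_for_status()
    return resp.json()["order_id"]

if __name__ == "__main__":
    print(order_environment("rhel7-tomcat-postgres-dev", "jane.developer"))
```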

Complexity problems with existing infrastructure can frequently be solved through appropriate management. For example, a shop can have multiple VMware VM’s on premise, along with VMs in private and/or public clouds. Red Hat’s Cloud Management Platform, CloudForms, was designed to manage the mixed environment. Now, we look at some challenges of a new cloud environment.

Cloud Environment Challenges

Red Hat identified two problems frequently facing IT departments. One, mentioned earlier, is IT being unable to respond quickly to LoB requests. The cause may be in resource allocation or even just getting solutions out the door. IT development and operations are not agile, hampered by processes that are out-of-date or even non-existent.

Another serious problem occurs if IT must operate with infrastructure that is incapable of handling or changing in response to a changing workload. The infrastructure, processors, storage, network, etc. are unable to scale to meet demand.

Red Hat’s Solutions

Such cases call for more extensive and far-reaching change. First, we describe what Red Hat recommends, then discuss additional concerns. The recommendations for:
  1. Non-responsive, not agile apps – IT needs to modernize development processes to replace monolithic apps so that logical portions of the app can be independently updated,
  2. Infrastructure unable to adapt to changing workloads – IT needs to change from a scale-up environment to scale-out environment.
As mentioned, these require extensive changes. If the existing design mode is for monolithic applications, clearly Red Hat’s solution to #1 is worth investigating. Otherwise, we add a word of caution.

What Red Hat is recommending requires much more than installing some software. It amounts to a fundamental change in development methodology. It involves creating cross-functional teams to assure that applications turned over from development to operations will be robust enough for quick release to production. All this requires close coupling of operations (and their needs) with and across development team activities.

Red Hat does indicate their recommended change has implications that will ripple across the enterprise environment. The most obvious are budgetary and headcount impacts. There may well be resistance from existing developer (and operations) staff to such change. We are not saying Red Hat is wrong. (We think they are correct.) We are saying it is necessary to understand and evaluate the effort and cost to implement this recommendation. 

With respect to switching from a scale-up to a scale-out environment, while correct in many cases, it isn't a one-size-fits-all solution. Again, it depends on the environment. For very large systems, the cheapest, more logical solution may be to increase capacity through better load management or scheduling. In other cases, scale-up may be too expensive to undertake. We'd recommend a study to identify and evaluate alternatives; companies facing this choice should investigate carefully. 

Summary

Red Hat is a very experienced, successful open-source company. They recommend careful consideration when choosing an open-source versus proprietary solution (that may cause lock-in problems in the future). They believe most customers would prefer and benefit more from open-source solutions.

We agree with Red Hat that companies should use open-source when it meets their needs. Red Hat, correctly, points out integrating multiple open-source products can be challenging. In such cases, companies may decide to use proprietary solutions if no integrated open-source solution is available.
As decades-long promoters of open-source, we believe that it makes good sense to evaluate Red Hat and other open-source solutions before rushing to a proprietary solution.

Red Hat’s discussion of migrating to the Cloud provides a very helpful review of problems that companies may encounter. Red Hat has a fine reputation for their products. Companies will not go wrong to consider their products and solutions. We look forward to hearing more from them. In this work, our goal was to provide some background to assist companies in making their evaluations.




[4] The referenced McKinsey papers contain many other suggestions. Our recommendation is the first step to avoid the worst disasters. Organizations need to develop their own C-suite executive best practices. 

Friday, May 19, 2017

IBM PowerAI on OpenPOWER system eases and speeds access to AI, Deep and Machine Learning

By Rich Ptak

Image courtesy of IBM, Inc.
IBM launched the latest additions to its PowerAI Platform[1] for AI, Deep and Machine Learning with an analyst briefing session (which we attended) along with presentations at NVidia’s GPU Technology Conference (GTC)[2] in Silicon Valley.

PowerAI is an integrated software distribution of open-source deep learning frameworks (TensorFlow, Caffe, Torch, etc. – see graphic at right) running on an IBM® Power System™ server. It is targeted at data scientists, both experienced and just getting started, as they face some serious entry roadblocks due to the amounts and variety of raw data they work with and existing modeling processes. These roadblocks are addressed in IBM’s new release. Here’s what we learned, along with our opinions about the product.

The difficulty to be overcome

Three significant tasks have frustrated Data Scientists working in Deep and Machine Learning. One was the effort required for data extraction, transformation and loading (ETL). The second was the time- and effort-intensive manual process as models are refined, trained and tuned for optimal performance. Massive effort and time were spent transforming and loading diverse data types into Data Lakes and Data Stores able to feed into existing analytic and modeling tools. Finally, there were the manual processes to train, revise and optimize standardized industry-focused models to fit the specific operational model the Data Scientist was building.

PowerAI was initially announced in November 2016. IBM set out to make PowerAI the leading open-source, end-to-end cognitive application platform for Data Scientists and developers. The three goals set at PowerAI’s very beginning were that it should be:
  1. Fast and easy to install and deploy (in hours) using common frameworks;
  2. Deliver optimal performance, so it was designed with frameworks and libraries that can be tuned to achieve peak throughput;
  3. Tuned for unique performance and scaling by using the latest and emerging hardware architectures and technologies, e.g. GPU, CPU, interconnection, etc.

Thus, simplification, ease of use and adaptability were key design goals. The plan is to use automation and integration to deliver a set of tools needed by data scientists and experts in basic tasks such as data transformation, model building and optimization. Becoming a leader in deep learning required more than focused product activity by one vendor. IBM recognized the need for an outstanding system platform, an integrated open source stack and a dispersed open-sourced ecosystem of partners, suppliers and innovators. IBM also participated in efforts to pioneer and support many of the best practices used in deep learning today.

The hardware underpinning the software stack is the OpenPOWER system announced last fall at the first OpenPOWER European Summit[3] (Barcelona, Spain). It is the IBM Power S822LC for High Performance Computing (HPC) system. Joined with the NVidia-pioneered CPU-to-GPU NVLink, the server is currently the best-in-breed system for deep learning projects.

The OpenPOWER Platform system has proven to be popular with developers and users around the world. We describe the system and chip specifics in earlier articles. See our discussion on the system at: “The newest IBM Power Systems – more of everything for the hottest environments![4]”, and the chip at the heart of it all: “Acceleration, Collaboration, Innovation - IBM's roadmap for its POWER architecture[5]”. Also key to building the success of PowerAI are existing partnerships, and more to come, with industry leaders like Julia, DL4J (DeepLearning4J), Apache Spark, Anaconda, OpenBLAS, ATLAS, NumPy, Docker, etc.

Additions to PowerAI

Figure 1 The PowerAI Software Stack (Image courtesy of IBM, Inc.)
Figure 1 illustrates the PowerAI software stack. The new additions include data preparation and transformation (ETL) tools that automate and speed deep learning; new cluster orchestration, virtualization and distribution capabilities using Apache Spark; and tools to speed and automate the development process for data scientists. Together, these add up to faster training times (for model building and validation) by distributing deep learning processing across a cluster.
The result is a multi-tenant, enterprise-ready Deep Learning platform that includes:
  1. AI Vision – custom application development for use with Computer Vision workloads;
  2. Data ETL accomplished using Apache Spark;
  3. DL (Deep Learning) Insight (Automated Model Tuning) – automatically tunes hyper-parameters for models using input data coming from Spark-based distributed computing, with intuitive GUI-based developer tools that provide continuous feedback to speed creation and optimization of deep learning models (a toy illustration of the manual sweep this replaces follows this list);
  4. Distributed Deep Learning – HPC cluster-enabled distributed deep learning frameworks that automatically share processing tasks to speed results while also accelerating training (of models) with auto-distribution using Spark and HPC technology from TensorFlow and Caffe.
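
To make concrete the kind of manual work that automated tuning such as DL Insight is meant to remove, here is a toy, TensorFlow 1.x-style sketch of a hand-rolled learning-rate sweep. The data and settings are invented; this illustrates the process being automated, not PowerAI code.

```python
# Toy TensorFlow 1.x example of a manual hyper-parameter sweep; this is the
# kind of repetitive loop that automated tuning is meant to replace.
import numpy as np
import tensorflow as tf

def train_once(learning_rate, x_data, y_data, steps=200):
    """Train a tiny linear model and return its final mean-squared error."""
    tf.reset_default_graph()
    x = tf.placeholder(tf.float32, [None, 1])
    y = tf.placeholder(tf.float32, [None, 1])
    w = tf.Variable(tf.zeros([1, 1]))
    b = tf.Variable(tf.zeros([1]))
    loss = tf.reduce_mean(tf.square(tf.matmul(x, w) + b - y))
    train_op = tf.train.GradientDescentOptimizer(learning_rate).minimize(loss)
    with tf.Session() as sess:
        sess.run(tf.global_variables_initializer())
        for _ in range(steps):
            sess.run(train_op, feed_dict={x: x_data, y: y_data})
        return sess.run(loss, feed_dict={x: x_data, y: y_data})

if __name__ == "__main__":
    rng = np.random.RandomState(0)
    x_data = rng.rand(256, 1).astype(np.float32)
    y_data = (3.0 * x_data + 0.5 + 0.01 * rng.randn(256, 1)).astype(np.float32)

    # Manual sweep: train once per candidate learning rate, keep the best.
    results = {lr: train_once(lr, x_data, y_data) for lr in (0.01, 0.1, 0.5)}
    best_lr = min(results, key=results.get)
    print("losses by learning rate:", results)
    print("best learning rate:", best_lr)
```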

In summary, PowerAI uses automation, integration and AI methodology to speed and simplify the whole process of model building and testing. It provides Apache Spark-based data extraction, transformation and preparation tools for Data Scientists with extensive experience in Deep Learning. It provides automated, distributed model tuning and testing to speed the overall process by eliminating tedious manual comparison and analysis. Experienced and entry level data scientists will benefit significantly from these tools that simplify data preparation, model building, testing and tuning.
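
For the ETL side, the pattern is the familiar Spark one sketched below. This is a generic PySpark example with invented file paths and column names, shown only to indicate the kind of preparation step the tooling automates, not the PowerAI implementation.

```python
# Generic PySpark ETL sketch: read raw records, clean and reshape them into a
# feature table for model training. Paths and column names are invented.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("feature-prep").getOrCreate()

raw = spark.read.csv("hdfs:///data/raw/transactions.csv",
                     header=True, inferSchema=True)

features = (
    raw.filter(F.col("amount").isNotNull())                 # drop incomplete rows
       .withColumn("amount_log", F.log1p(F.col("amount")))  # simple transformation
       .groupBy("customer_id")
       .agg(F.count("*").alias("txn_count"),
            F.avg("amount_log").alias("avg_amount_log"))
)

features.write.mode("overwrite").parquet("hdfs:///data/features/customers.parquet")
spark.stop()
```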

IBM offers PowerAI on-premise today, and it will eventually be available as an IBM Cloud service. They expect that for a variety of compliance, security and capacity reasons, most users will opt for the on-premise solution. Basic PowerAI capabilities are available for free. Enterprise extensions are for-fee, with support and consulting services available.

The Final Word

With this release, IBM has convincingly speeded up and simplified major tasks associated with data ETL and manual processes for model training, tuning and optimization. The results benefit both experienced and entry-level Data Scientists working in Deep and Machine Learning.

From the beginning, IBM has worked to build and maintain ties with the larger open source community. They continue to expand the size of the community, cooperating with major players to integrate new technologies and capabilities.  

With the introduction of the PowerAI Platform and OpenPOWER server, IBM stands at the forefront in providing an integrated toolkit and platform as a comprehensive entry way to AI development for data scientists from mid-range to full-enterprise sized organizations. At the heart of the needed ecosystem is the open source deep learning community and association with such open source communities as that surrounding OpenPOWER. We encourage you to look more deeply into what PowerAI is offering, both now and in the future.