Monday, December 19, 2016

IBM Systems - year-end review

By Rich Ptak

It’s been a busy year for IBM: transitioning the business, proliferating use cases for Cognitive Solutions, the rapid buildup of its Cloud platform ecosystem, announcements of a series of innovative industry-specific solutions and projects, LinuxONE mainframe activities, the OpenCAPI initiative, and more. You might expect the company to slow down a bit.

Nothing even resembling that appears to be in the cards. IBM CEO Ginni Rometty led off the week by announcing that IBM will hire 25,000 new US employees while investing $1 billion in employee training and development over the next four years. And, she promises, the focus will be on “new collar” hires, i.e. skills-based, rather than simply on “higher education” requirements. The needed skills include cloud computing technicians and service-delivery specialists trained in vocational programs or on the job. IBM collaborated on the curriculum design and helped implement the schools that do the training. You can learn more about these Pathways in Technology Early College High Schools (P-TECH) here[1]. This spending is in addition to the $6 billion IBM spends annually on R&D. This is hardly the behavior of a company unsure about its future. But, I digress.

In the same week, Tom Rosamilia, Senior Vice President, IBM Systems, held his annual review on the status of IBM Systems. Taking place prior to year-end results reporting, it was light on financial details. But the broad strokes presented a company seeing both progress and positive returns. In a turbulent and rapidly evolving business environment, IBM had embarked upon a bet-the-business strategy of transformation and innovation in technology, business models and skills to address the challenges of the evolving era of cognitive computing.

IBM committed to the cognitive era before it was fully formed and clearly established as a viable market. Early hallmarks of the changing environment were the growth of cloud-based and Infrastructure-as-a-Service enterprise computing. Rosamilia quoted industry research estimating that by 2022, 40% of all compute capacity would be provided through service providers, and some 70% would be in hybrid configurations (a combination of on- and off-premise infrastructure). Such configurations meant IBM’s systems infrastructure-based revenue streams would need altered delivery and service models to grow.

In response, IBM Systems altered its operating model to focus on three main areas:
  1. Cognitive Solutions – through partnerships and initiatives that grow the ecosystem and increase use cases, scaling up performance and making it easier to leverage the technology.
  2. Cloud platform – building out the ecosystem, increasing accessibility by building out available services and tools and expanding utility with easy access to the newest technologies.
  3. Industry Solutions – focus on providing servers, services, platforms along with innovative solutions optimized and targeted to resolve industry-specific challenges.

In 2016, the systems product portfolio was divided into 3 areas: 
  1. The Power chip-based systems targeting the market with Open Computing (e.g. the Open Compute Project), Power LC servers with NVIDIA NVLink™ for specific workload models, the OpenCAPI[2] Consortium (dedicated to developing a standard that allows cross-vendor access to CAPI acceleration), and PowerAI, an IBM partnership with NVIDIA that enables simplified access and installation of platforms for deep learning and HPC efforts;
  2. the zSystems with LinuxONE for hybrid clouds, high security z13s with encryption for hybrid clouds, Apache Spark support on z/OS and (the blockbuster) secure blockchain services on LinuxONE available via Bluemix or on-premise; and finally,
  3. IBM lit up the Storage/SDI (software-defined infrastructure) markets with all-flash arrays available across their complete portfolio and a complete suite of software-defined storage solutions. There is plenty more coming with the IBM Cloud Object Storage solution, cloud support for IBM Spectrum Virtualize and DeepFlash ESS. We don’t cover these areas, so we won’t comment further.

IBM will continue to stir things up as they expand and enhance deliverables in these areas in 2017. There is a special focus on Cognitive Computing, where speed in data access and computational power are critical. Power systems with CAPI are specifically designed for high-speed, computationally dense computing, and they benefit from partner-developed accelerators. Cost is a major issue in high-performance analytics and computing; Power systems with CAPI and accelerators offer significant price/performance advantages over general-purpose systems.

Blockchain has been a major news item this past year. Use cases are proliferating as understanding of the technology and accessibility grow, both benefiting significantly from IBM’s easy, flexible access options to the technology, as well as training, some of it free. Financial, healthcare[3] and business use cases for blockchain in secure networks are proliferating. IBM is, and will continue to be, a major booster of and contributor to the spread of this technology. IBM offers secure blockchain cloud services on-premise or via the Bluemix cloud, on either LinuxONE or Power LC systems.

Rosamilia discussed a number of activities underway with clients and customers expected to deliver in 2017. These include a collaboration with Google and Rackspace on a Zaius server running the (yet-to-be generally available) POWER9 chip and OpenCAPI to deliver 10x faster server performance on an “advanced, accelerated computing platform…(delivering an)…open platform, open interfaces, open compute”. There is the blockchain-based High Security Business Network (HSBN), with existing and potential application across a range of business functions, including securities transactions, supply chain, retail banking, syndicated loans, digital property management, etc.
Tom Rosamilia described how Walmart uses blockchain to guarantee the integrity of food from farm to consumer. Sensors packaged with farm products are tracked from farm to final consumer purchase to assure environmental conditions (e.g. temperature exposure) have been maintained. More examples are available (see our recent blog[4] on blockchain).
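The farm-to-consumer tracking Rosamilia described rests on blockchain’s core mechanism: each sensor reading is committed to a hash-linked ledger entry, so an earlier temperature record cannot be quietly altered without invalidating everything after it. A minimal single-node sketch in Python (stages and readings are invented; production systems use a replicated, permissioned ledger such as Hyperledger Fabric):

```python
import hashlib
import json

def make_block(reading, prev_hash):
    """Create a ledger entry whose hash covers both the sensor reading
    and the previous entry's hash, linking the records into a chain."""
    body = {"reading": reading, "prev_hash": prev_hash}
    digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    return {**body, "hash": digest}

def chain_valid(chain):
    """Recompute every hash and check each link back to its predecessor."""
    prev = "0" * 64
    for block in chain:
        body = {"reading": block["reading"], "prev_hash": block["prev_hash"]}
        recomputed = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()).hexdigest()
        if block["prev_hash"] != prev or block["hash"] != recomputed:
            return False
        prev = block["hash"]
    return True

# Hypothetical cold-chain readings from farm to store shelf.
readings = [
    {"stage": "farm",      "temp_c": 3.1},
    {"stage": "truck",     "temp_c": 4.0},
    {"stage": "warehouse", "temp_c": 3.6},
]
chain = []
prev = "0" * 64
for r in readings:
    block = make_block(r, prev)
    chain.append(block)
    prev = block["hash"]

assert chain_valid(chain)
chain[0]["reading"]["temp_c"] = 9.9   # tamper with the farm record
assert not chain_valid(chain)         # the chain now fails verification
```

The distributed version adds replication and consensus across parties, but the tamper-evidence shown here is what lets a retailer trust a grower’s temperature history.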

The session concluded with the list of technologies where IBM is investing. These include POWER9, LinuxONE, zNext (the next generation mainframe!), all-flash systems, next generation computing (beyond silicon), open ecosystems, blockchain and (presumably LARGE) object storage. We believe that their bets were well-placed. There is still much to be done, but it appears to us that Ginni Rometty and IBM will keep every one of those 25,000 new hires very, very busy.

[2] CAPI (Coherent Accelerator Processor Interface) was developed by IBM for, and is initially only available on, Power systems. It resolves increasingly serious data access and transfer bottlenecks, improving system performance by as much as an order of magnitude.

Friday, December 9, 2016

OpenPOWER, Blockchain and more on IBM’s workload-driven infrastructure to “Outthink the status quo!”

By Rich Ptak

At and after IBM’s 2016 Edge event (which had over 5,500 attendees), IBM has been spreading the word about, and providing the experience of, how IBM’s app-, service- and workload-driven infrastructure enables users to “Outthink the status quo” and drive their success. Combining agile infrastructure with enterprise digitization means pushing (if not demolishing) operational boundaries and creating business models that yield previously inconceivable solutions and capabilities to overcome challenges previously viewed as intractable or unresolvable.

Enterprises ranging from the very largest to start-ups are succeeding in innovative application of IBM and IBM-partner provided infrastructure that fully leverages cloud, cognitive and system technologies. Edge 2016 was a head-spinning event with lots of technology and technical detail, but also with benefits and operational advantages presented in terms highlighting positive enterprise impact. IBM used the event to first impress, then inspire customers to act to exceed their own expectations of what was possible. Here’s some of what impressed us.
It’s a platform view
Infrastructure remains vitally important as a means for accessing and leveraging the power of technology. Meeting the performance requirements of cutting-edge solutions (e.g. autonomous vehicles), as well as day-to-day apps (e.g. 3-D printing of medical prosthetics), is no easy task. Doing so requires infrastructure that functions as an integrated platform combining elements from multiple sources, all of which must work together to transparently deliver the data storage capacity, access (CAPI accelerators) and processing speeds to match operational and computing demands. New capabilities in DevOps are transforming the developer’s ability to access, exploit and manage the infrastructure in innovative ways, even as they change how this is done by allowing educated users to define custom services themselves.
A major message from the event detailed how enterprises, research, market-driven, education, small and large and even individuals are accomplishing things that were previously unimagined, even unimaginable. This is possible, not simply because of the power of the technology, but also because of the increased, often cloud-based accessibility to the technology along with UIs that simplify (relatively) app creation.
Now, infrastructure provides a platform, on-premise or in a cloud, that uses elements from multiple suppliers working transparently to the developer/user and, if necessary, across multiple platforms (mobile, cloud, server, etc.) to create a product, deliver a service or perform a function. The developer, researcher or other user faces a constantly evolving, highly competitive world. A successful product/service must be able to quickly take advantage of emerging technology changes; infrastructure platforms allow that to happen. Typically, a service, app or product available across multiple infrastructure platforms also has a competitive advantage. IBM is committed to delivering, as partners and customers substantiate, the products, whether hardware (mainframe, Power Systems, etc.), software or services, with the required flexibility.
Taking on big challenges, succeeding by outthinking the status quo
IBM noted a trend and called out a challenge to attendees. A major theme in keynote speeches, presentations and on the show floor was an emphasis on large enterprises, as well as smaller companies and individuals, tackling big challenges. There was still plenty of tech talk and technology detail, but the focus was on the potential of today’s systems, applications and services. Recognition of that potential inspired users (enterprise and individual) to take on significant challenges in personal life, society, medicine, scientific research, etc.

These can be about on-line dating/matchmaking (PlentyOfFish) or gaming (Pokemon), which just happened to double Nintendo’s capitalization to $42B in 10 days, or about radically expanding access to mobile banking services for a previously grossly underserved market of transnational workers across East and West Africa.

Or, they can be a REALLY big problem, i.e. solving world hunger, curing cancer, guiding autonomous cars or solving the digital trust problem with a radically secure, peer-to-peer distributed ledger (Hyperledger). Hyperledger is an open source project from the Linux Foundation designed to enable the next generation of transactional applications by automating trust, accountability and transparency using blockchain technology. IBM makes that technology freely and easily accessible and usable to developers, and offers it as a for-fee, highly secure turnkey service via its Bluemix cloud platform. (See IBM Blockchain.) Blockchain promises to have major impact in multiple markets, from contract management (outsourcing) to financial transactions (cross-bank, foreign-exchange Letters of Credit) to provenance documentation for anything from agricultural products (farm-to-fork) to drugs and medical devices.

IBM Blockchain activities have exploded since its early 2016 announcement. Today, they provide access to Blockchain development services and support at centers worldwide. Blockchain Bluemix garages are in New York, London, Singapore, Toronto and Tokyo while Bluemix technology incubators are in Nice, Melbourne and San Francisco. Each week, it seems that market segments and use cases for Blockchain emerge, even as vendor offerings and services expand. IBM has clearly positioned itself to benefit as a result.

The net is that IBM is betting its business on providing broad access to the infrastructure products and services that will drive the next generation of innovation and technology-powered advancement. They are focused on infrastructure in terms of cloud, mobile, IoT and cognitive computing, and on targeting markets with solutions. But they are also investing in accessing and applying new and emerging technologies, building communities and ecosystems for cooperative innovation, and providing the services and fabric that make it easier for these communities to leverage each other and the technologies. IBM challenged all attendees to stretch their imaginations and outthink the status quo in applying technology in both their professional and personal lives. They invited those that did to return to Las Vegas to tell their stories at Edge 2017.
The Products
Edge without products just wouldn’t be right. IBM titillated the chip community last summer with hints about POWER9, which is due in late 2017. To satisfy immediate demand, IBM announced a new line of OpenPOWER LC models optimized for specific market segments and styles of computing. Models included:
  1. An entry level model, the IBM Power System S812LC for customers just starting with Big Data.
  2.  IBM Power System S821LC, a 1U form factor with 2 Power8 processors.
  3. IBM Power System S822LC for Big Data. 
  4. IBM Power System S822LC for Commercial Computing.
  5. IBM Power System S822LC for High Performance Computing, featuring the latest POWER8 with NVLink, a high-speed link between the CPU and onboard GPUs.

IBM’s Power Systems strategy continues to focus on chip advancements combined with design for specific computing models/markets and use of partner-developed accelerators and devices for additional performance enhancements.

We discuss these in more detail in our blog. IBM also discussed a range of new applications of Watson and Watson Analytics, along with programs to provide easy, affordable access to these capabilities for developers, students and researchers. Multiple plans are available (starting at $30/month), as well as a free introductory offer that packages access to databases and analytics. See IBM for details.

The mainframe continues to make its mark as IBM comes up with new ways to provide more power, speed and versatility without raising hardware prices.
Before we finish
There is a view that the age of the small, upstart entrepreneur is over. This was the focus of the September 17th-23rd issue of The Economist. A special report about the world’s most powerful companies explains “why the age of entrepreneurialism ushered in by…Thatcher and Reagan…is giving way to an age of corporate consolidation.” The article asserts:
  1. Large corporations dominate growth, market capitalization, revenues and profits globally. They (alone) have the cash, talent and savvy to maintain these positions.
  2. Entrepreneurial endeavors are declining.
  3. Emerging entrepreneurs opt for quick buy-outs by “superstar firms” over IPOs. 
  4. Despite a growing backlash, the superstar firms successfully lobby EU and national politicians for favorable treatment (corporatism).
  5. Technology and infrastructure trends, e.g. IOT favor the superstars.
Statistics, charts and considerable consulting firm cogitation back up these opinions.
We believe the conclusions are too pessimistic. A significant number of roadblocks to progress and to investment in start-ups and small firms have been in place, and significantly added to, over the last decade. These range from the economic (protectionism, unstable markets, inflation) to escalating governmental regulation (excessive mandates, quixotic regulation, direct interference) to a lack of investor confidence. But the environment is changing.
This is partially due to shifts in attitudes and partially to tectonic shifts in both the political and societal environments currently underway in multiple countries. It is also due to the rapid evolution of new technologies, the creative application of which is easier and more widespread than ever before. There is an increasing emphasis by existing and emerging technology leaders on making it easier to leverage and constructively apply technology. IBM has been a leader by providing easy, low-cost access for operational, entrepreneurial and sandbox activities with Blockchain, Cognitive Computing, Watson Analytics, OpenPOWER Systems, Bluemix and cloud technologies.
What’s the message?  
As described by Tom Rosamilia, IBM Systems SVP, what was once a discussion about technology is now a conversation about business. More precisely, the discussion is about solving a problem, whether business, financial, design, discovery or implementing a previously inconceivable or unimaginable service.

IBM’s message emphasized unleashing the deep, underlying competitive drive unique to humans: directing efforts to outthink the competition (in whatever form), to pursue solutions to the really difficult problems, to innovate in the definition and delivery of new services, and to identify and pursue radical opportunity.

IBM’s customers are pursuing all of these. In the process, they clearly demonstrated that IBM’s goods and services are playing a leading role in unleashing a veritable tsunami of innovation and creativity to resolve all sorts of business, enterprise and even societal problems. The stories told and exhibits shown at Edge 2016 did a lot to confirm our impressions. We think Edge 2017 is going to be even more enlightening.

Thursday, December 1, 2016

HPE IoT Solutions ease management, reduce risk and cut operating costs

By Bill Moran and Rich Ptak

HPE has invested heavily to deliver IoT solutions. They are well aware of the challenges faced by customers attempting large-scale IoT deployments. Thus, our interest in commenting on the November 30th London Discover announcements of new enhancements to their IoT portfolio. First, a quick overview of the IoT world.

IoT is driven by the belief that enhancing the intelligence gathering (and frequently processing) capabilities of new and existing devices, coupled with centralized control, will yield personal and/or business benefits. The amount of intelligence (for decision-making, etc.) necessary at the edge of the network (in the device) versus the amount retained centrally (server, controller, etc.) varies widely based on the use case. For example, a device monitoring what's in front of an automobile that detects a potential crash should make the decision to slam on the brakes locally (in the car); the delay inherent in communicating with a remote controller (in the cloud) would be intolerable.
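The edge-versus-cloud tradeoff in the automobile example comes down to a latency budget. A minimal sketch (all numbers invented for illustration) of how a device might decide whether a decision can wait for a cloud round trip:

```python
def plan_action(obstacle_distance_m, speed_mps, round_trip_ms):
    """Decide where a 'brake or not' decision can safely be made.

    Time to impact sets the latency budget; if a cloud round trip
    would consume it, the decision must stay at the edge (in the car).
    """
    time_to_impact_ms = obstacle_distance_m / speed_mps * 1000
    if round_trip_ms >= time_to_impact_ms:
        return "decide-at-edge"      # no time to consult the cloud
    return "cloud-assisted"         # latency budget permits remote help

# Obstacle 3 m ahead at 27 m/s (~60 mph) with a 150 ms round trip:
# time to impact is ~111 ms, so the car must act locally.
print(plan_action(3, 27, 150))      # decide-at-edge
# A route-planning decision 500 m out can tolerate the round trip.
print(plan_action(500, 27, 150))    # cloud-assisted
```

The utility-meter case at the other extreme has latency budgets of hours, which is why nearly all of its intelligence can live centrally.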

The complexity inherent in IoT becomes even more complicated when multiple and different devices are involved. Each device type communicates in its own way. Use cases vary by task and industry. The automotive situation discussed is very different from that of a power utility monitoring thousands of meters for billing or consumption-tracking purposes.

Many more scenarios exist, and depending on scale and distribution, some will be very difficult and expensive to deploy. HPE set out to provide solutions that reduce the cost and effort required to deploy, connect and manage this wide variety of distributed devices. Here’s the result.

HPE’s IoT portfolio enhancements are in three categories. First is the new HPE Mobile Virtual Network Enabler (MVNE), intended for mobile virtual network operators (MVNOs). It simplifies the device-management complexity of SIM card[1] deployment and management. HPE MVNE allows the MVNOs themselves to set up the SIM delivery and billing service instead of buying the service from a carrier. The resulting reduction in complexity and long-term billing management reduces operator costs.

The second enhancement is the HPE Universal IoT (UIoT) Platform. The number of IoT use cases implemented over Low Power WANs (Wide Area Networks) is rapidly increasing, each with its own infrastructure, management solutions and standards. No single vendor has a management solution that works across all of them, which meant that businesses, e.g. those delivering smart-city IoT deployments, had to use multiple management systems. The HPE UIoT software platform will support multiple networks (cellular, WiFi, LoRa, etc.). Using the Lightweight M2M protocol, it provides a consistent way to manage the variety of networks, as well as data models and formats.
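A platform that spans networks with differing data models is, at its core, normalizing device payloads onto one schema. A toy sketch of that idea (the field names and payloads are invented, not HPE's actual data model):

```python
# Hypothetical raw uplinks as two different network stacks might deliver them.
RAW = [
    {"net": "lora",     "devEUI": "a1b2", "payload": {"t": 21.5}},
    {"net": "cellular", "imsi": "310150123456789", "data": {"temp_c": 22.0}},
]

def normalize(msg):
    """Map each network's device identity and temperature field onto a
    common data model, so applications see devices uniformly regardless
    of the transport that carried the reading."""
    if msg["net"] == "lora":
        return {"device_id": msg["devEUI"],
                "temp_c": msg["payload"]["t"],
                "transport": "lora"}
    if msg["net"] == "cellular":
        return {"device_id": msg["imsi"],
                "temp_c": msg["data"]["temp_c"],
                "transport": "cellular"}
    raise ValueError(f"unknown network: {msg['net']}")

unified = [normalize(m) for m in RAW]
# Every record now has the same keys: device_id, temp_c, transport.
```

A smart-city application written against the unified model needs no changes when a deployment swaps LoRa sensors for cellular ones, which is the operational benefit HPE is claiming.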

Third are IoT-related enhancements to HPE Aruba[2] (LAN) networks. HPE introduced the Aruba ClearPass Universal Profiler, a standalone software package that permits customers to quickly see and monitor all devices connecting to wired and wireless networks, helping to satisfy network audit, security and visibility requirements. It is the ONLY such application purpose-built to identify, classify and view all devices.

Finally, the latest ArubaOS-Switch software release enhances the security features of most of the Aruba access switch family. Features include automatic tunnel creation to isolate network traffic for security devices, smart lighting and HVAC control units, and the ability to set network access control policies for IoT devices.

With these IoT offering enhancements, HPE has made significant strides toward reducing the complexity and costs facing customers developing and deploying IoT solutions. These are unique tools to help customers economically create and operate an IoT-enabled world. HPE has also demonstrated a clear understanding of the difficulties its customers are encountering.

It appears to us, that HPE is succeeding in its efforts to provide solutions that reduce the cost and effort required to deploy, connect and manage the wide variety of distributed devices available today. We suspect that there are many who will agree. 

[1] SIM cards provide the interface between a device and a cellular connection.
[2] Acquired for its wireless, campus switching and network access control solutions.

Friday, November 4, 2016

BMC Survey on the Status of the Mainframe

By Bill Moran and Rich Ptak
BMC has released the results of their 11th annual mainframe survey.  BMC partners with multiple other parties to collect data and to release the results (e.g. IBM Systems Magazine[1]). This assures they have input from a variety of sources, including non-BMC customers. The resulting expanded range of opinions increases the value of the data.

We review key results of the survey here. We may revisit the topic as additional information is made available. We commend BMC for conducting the survey. It performs a real service for the industry. Studying the results can provide significant insight into what is happening in the mainframe market.[2]

Key results

In our opinion, key conclusions from the survey are:

  1. If the “death of the mainframe” needed more debunking, this survey certainly does so.  It shows there will be no funeral services held for the mainframe anytime soon. Last year 90% of the survey respondents indicated they saw a long-term future for the mainframe. This year that number declined all the way to 89%. Not a statistically significant difference!
  2. The general population of companies, on average, keep more than 50% of their data on the mainframe.  70% of large organizations see their mainframe capacity increasing in the next 24 months.
  3. In large (and other) enterprises, Digital business appears to be driving higher mainframe workload growth.
  4. Smaller organizations are more likely to forecast declining use of the mainframe. 
  5. In contrast, those companies that are increasing their mainframe usage take a long term view of the mainframe and its value. They tend to be more effective at leveraging the platform. They want to provide a superior customer experience, hence they modernize operations, add capacity and increase workloads. They view mainframe security and high availability as critically important differentiators in today's market marked by escalating transaction rates, data growth and rapid response times. 
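The "not statistically significant" point in item 1 is easy to check with a standard two-proportion z-test. The per-year sample size below is an assumption for illustration (the survey's actual respondent count is not given here):

```python
from math import sqrt

def two_proportion_z(p1, n1, p2, n2):
    """Pooled two-proportion z statistic for comparing survey shares."""
    p = (p1 * n1 + p2 * n2) / (n1 + n2)          # pooled proportion
    se = sqrt(p * (1 - p) * (1 / n1 + 1 / n2))   # pooled standard error
    return (p1 - p2) / se

# Assumed 1,200 respondents each year; 90% vs 89% seeing a
# long-term future for the mainframe.
z = two_proportion_z(0.90, 1200, 0.89, 1200)
significant = abs(z) > 1.96   # 5% two-sided threshold
print(round(z, 2), significant)   # z ≈ 0.8, not significant
```

Even with samples this large, a one-point swing is well inside sampling noise, which supports the article's "no funeral services" reading.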

Other interesting insights

Linux usage on the mainframe broke through the 50% point this year. Its use has been growing steadily ever since a Linux initiative was launched when Lou Gerstner was IBM CEO. Last year, 48% of the survey respondents said that they had Linux in production; this year the percentage rose to 52%.

BMC divides organizations into three groups:

  1. The first group representing 58% of those surveyed say that mainframe usage in their organization is increasing.
  2. The second group (23%) say that usage is steady.
  3. The third group (19%) say that usage is reducing.
We did not do an exhaustive analysis of the differences between the increasing, steady and reducing groups. However, it is worthwhile to sketch a view of some differences between the reducing usage group and the increasing usage group.

In the first place, many reducers indicate that their management believes the mainframe is outdated. This results in pressures to abandon the platform. Thus, their focus is removing workloads. This group is also more concerned about a mainframe skills shortage. Their solution to that problem is, again, to remove workloads thus reducing mainframe platform dependencies.

In contrast, managers of the group that is increasing usage do not appear to believe the mainframe is obsolete. Therefore, there is no pressure to move off the platform. In fact, they actively seek to move new work onto the mainframe. While also concerned about a mainframe skills shortage, their response is to provide internal training and invest in automation wherever possible. Neither outsourcing nor moving workloads off the platform are viewed as viable solutions to a skills shortage.
Figure 1 Top Mainframe Priorities – Chart courtesy of BMC
Next, of interest were respondent priorities. The top priorities for 2016 as identified in the survey include:
  1. Cost reduction/optimization – 65%
  2. Data privacy/compliance/security – 50%
  3. Application availability – 49%
  4. Application modernization – 41%.

Number 5 on the list “Becoming more responsive to our business”, is not given a percentage. We estimate (see Figure 1) it at 38%. We found this somewhat surprising. With all the focus on the digital enterprise and business, we would have thought that this would be at least #3 on the list. Like we said, interesting data comes from the study.

Future Possible Questions

As a quick aside, there are many possible questions to explore. We encourage mainframers to participate in the survey. Going a step further, put your suggestions in the ‘comments’ to this blog, or tweet them with the hashtag #PtakAssocMFQ. We will track the results and share them with BMC before the next annual survey.

To start things off, we have a few suggestions for future survey topics:

  1. How many projects were undertaken to move work off the mainframe? What were the results? What were the factors contributing to the project’s success or failure?
  2.  Is (and how much is) the mainframe integrated into the overall datacenter operations? Or, is it an isolated island with mostly batch methods of integration?  
  3. How many organizations are using IBM’s z/PDT, which simulates a mainframe on a PC or X86 system for development?
  4. What is the progress of DevOps modernization? This might connect to the previous point as z/PDT is Linux based and many developers prefer to use Linux tools, but also need access to mainframe data to test their applications.
Of course, we understand that there are many logistical problems in putting together a survey of this type. For example, there is a practical limit on the number of questions that one can ask. However, the answers to these would be enlightening.


The survey provides significant value to the mainframe community with insights useful to mainframe users, vendors and service suppliers. It can help any mainframe-based organization to plan and optimize for the future. It highlights ongoing community problems even as it corrects conventional “wisdom”, i.e. the mainframe is alive and well. 

Finally, the survey is a valuable tool for understanding the state of the mainframe, its user concerns, needs and priorities. BMC may want to consider extending the reach of the survey to include other organizations, i.e. user groups such as Share. In the meantime, we suggest that you visit the BMC website to discover insights that you can successfully leverage and apply in your operations. 

[1] Note this is not corporate IBM.
[2] Study details available at

Thursday, November 3, 2016

Compuware delivers again! Solution innovation for eight consecutive quarters!

By Rich Ptak

With its October launch, Compuware once again successfully met its self-imposed goal of a quarterly delivery of brand new or significantly enhanced, mainframe solutions. This makes 8 consecutive quarters they have done so. And, for each quarter, the result has been significant, ground-breaking extensions or enhancement of capabilities or accessibility in areas that include mainframe DevOps, risk reduction, app development, systems management and resolution of significant challenges to smooth mainframe operations. The current announcement continues the pattern. Congratulations and kudos to Compuware. Here’s what we found interesting.

Service-based Acquisition
We commented earlier on Compuware’s acquisition of ISPW product technology and its integration with Compuware Topaz. ISPW provides comprehensive, modern functionality for Source Code Management (SCM), Release Automation (RA) and Application Deployment (AD) for both mainframe and distributed platforms as a single, integrated solution. We were enthusiastic about the move and the success of Compuware’s integration. As it turns out, we weren’t the only ones.

The prospect of having a single solution where three separate products were previously required was very attractive to over-stretched IT staffs. Combine that with a tight integration to Topaz and you have a solution that is practically irresistible. Customer demand for help in moving to ISPW was so high that it motivated Compuware to make its second business acquisition in 10 months. Compuware purchased the total SCM practice, including implementation services, experienced staff and proven methodologies, from Information Technology Service Management (ITSM) firm Itegrations[1]. Compuware’s SCM Migration Services[2] simplify, speed and reduce the risk of migrating from existing vendor-supplied and homegrown systems to ISPW SCM.

Topaz additions, enhancements, extensions and integrations
In keeping with Compuware’s theme of Mainstreaming the Mainframe, the announcement included the new Compuware Topaz Connect[3] (formerly Itegrations NxBridge), which automates and simplifies cross-platform connectivity. Customers can automatically connect Compuware ISPW to various ITSM solutions including ServiceNow, BMC Remedy and Tivoli. This reduces manual processes, time and effort while making the mainframe more accessible, improving the customer experience and improving performance metrics. Recognizing that enterprises may not be able to migrate to agile ISPW SCM immediately, Topaz Connect enables CA Endevor users to access required Endevor functionality via Compuware Topaz Workbench[4], a modern Eclipse-based IDE. Through this integration, developers can perform critical activities such as adding and moving elements in the lifecycle; generating (compiling) elements; creating packages; and moving groups in the lifecycle.

In another major step toward increasing mainframe utilization, improving access to modern tools and making the move to DevOps faster and easier, Compuware is providing REST APIs, in effect "building blocks," that can be used to control and manage application deployment in both mainframe and distributed environments. The APIs for ISPW enable users to create, promote, deploy and check the status of code releases using popular Agile/DevOps tools, including Jenkins, XebiaLabs XL Release, Slack and Atlassian HipChat, with webhook notification.
Additional, broader-scope APIs will be available in the coming months. These will be built to leverage, work with and support open standards and open-standards-based tools. Complementing the APIs, Compuware plans to add support for a number of popular tools.
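To make the idea concrete, here is a minimal sketch of how a DevOps pipeline might drive a release promotion over REST. The host name, endpoint path, and field names below are illustrative assumptions, not Compuware's documented ISPW API; consult the actual API reference before building against it.

```python
# Hypothetical sketch of promoting a code release via a REST API.
# Endpoint paths and JSON field names are assumptions for illustration.
import json
import urllib.request

BASE_URL = "https://ces.example.com/ispw/api"  # assumed server, not a real host


def build_promote_request(base_url, srid, assignment_id, level):
    """Compose the URL and JSON body for promoting an assignment.

    Pure function so the request can be inspected or logged before sending.
    """
    url = f"{base_url}/{srid}/assignments/{assignment_id}/tasks/promote"
    body = {
        "runtimeConfiguration": "default",  # assumed default configuration name
        "level": level,                     # target lifecycle level, e.g. "QA"
    }
    return url, body


def promote(base_url, srid, assignment_id, level, token):
    """POST the promote request and return the decoded JSON response."""
    url, body = build_promote_request(base_url, srid, assignment_id, level)
    req = urllib.request.Request(
        url,
        data=json.dumps(body).encode("utf-8"),
        headers={"Authorization": token, "Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())
```

A Jenkins job could call `promote()` after a successful build, then poll a status endpoint until the deployment completes; the webhook notifications mentioned above would replace the polling step.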

JCL has long been a hurdle for those looking to develop for mainframes, even more so for millennials. Compuware tackles the challenge with plug-ins for Topaz Workbench. Integrations with Software Engineering of America (SEA) technology include the JCLplus Plugin for Topaz Workbench, which automatically verifies standards, checks syntax and performs runtime simulation of JCL. In addition, the SAVRS Plugin for Topaz Workbench allows easy viewing and interpretation of Joblog and SYSOUT reports.

Terse error logs and messages long made fault analysis a mainframe frustration. Adding to the problem, mainframe groups operated in informational and data isolation, siloed away from the rest of the enterprise. As a result, the mainframe became a "black box," sidelined and not recognized as part of the enterprise operations team.

As a start to resolving those issues, Compuware partnered with Syncsort to change that. Integration with Syncsort Ironstream allows diagnostic data from the Abend-AID application fault discovery and analysis solution, together with mainframe logs, security and environmental data, to be fed in machine-readable form to Splunk, which combines that data with data from multiple other sources (security, compliance, behavioral, operational, etc.) across the organization. The combined data can then be analyzed, correlated and evaluated to yield operational intelligence. This makes the mainframe's impact and influence visible in the context of total operations and establishes its importance to overall operations.
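The mechanics of "feeding machine-readable data to Splunk" can be sketched with Splunk's HTTP Event Collector (HEC). The fault-record fields below are invented for illustration; Ironstream and Abend-AID define their own formats and forwarding paths, so treat this only as a picture of the general pattern.

```python
# Illustrative sketch: forwarding a mainframe fault record to Splunk's
# HTTP Event Collector. Field names in the event body are assumptions.
import json
import urllib.request


def build_hec_event(fault_record):
    """Wrap a fault record in the envelope that Splunk HEC expects."""
    return {
        "sourcetype": "mainframe:abend",  # assumed sourcetype name
        "source": "abend-aid",            # assumed source label
        "event": fault_record,            # the machine-readable diagnostic data
    }


def send_to_splunk(hec_url, token, fault_record):
    """POST one event to the HEC endpoint and return Splunk's JSON reply.

    hec_url is typically https://<host>:8088/services/collector/event.
    """
    payload = build_hec_event(fault_record)
    req = urllib.request.Request(
        hec_url,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Authorization": f"Splunk {token}"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())
```

Once events land in Splunk, they can be searched and correlated alongside distributed-systems data, which is exactly the "operational intelligence" payoff described above.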

There is much more in the announcement. Our recommendation is that you follow up with Compuware to see how your customers and your enterprise development and operations can benefit from their efforts.

Compuware’s overall ambition is to expose and confirm the importance of the mainframe to the enterprise. They do so by removing the tool-based barriers that have traditionally inhibited its use by broader DevOps teams and by resolving significant shortcomings, most significantly by dismantling operational silos. A significant part of the solution is modernizing mainframe solutions, tools and capabilities so that developers, operations and business analysts can function consistently and transparently across mainframe, distributed and mobile platforms.

A Final word on Compuware’s Vision
If you haven’t noticed already, Compuware operates with a very customer-focused vision to drive its quarter-to-quarter deliveries. That doesn’t mean they react reflexively with little forethought; they have an established, consistent product/solution plan with a roadmap of future deliverables.

The basic, bedrock principle is to develop solutions based on what they believe are the critical and most pressing problems confronting their customers NOW. They are driven by the belief that a number of identifiable and curable challenges act as immediate roadblocks keeping the mainframe out of the mainstream. Their goal is to eliminate those roadblocks and speed the mainframe into mainstream operations. To do that, they maintain a prioritized yet flexible list of which challenges they will take on, and when.

Compuware’s plans are neither static, nor inhibiting of creative, responsive innovation. For example, late last year the team conceived of, built out and delivered Runtime Visualizer, a new feature in Topaz for Program Analysis[5], in just 84 days. This year they acquired ISPW and rapidly integrated it with Topaz[6]. On the heels of that, they acquired and delivered Compuware ISPW SCM Migration Services. Yet focused attention on customer feedback and the rapidly evolving world of enterprise IT isn’t unique, and it isn’t sufficient to maintain leadership. Compuware, its partners, employees and executives hold themselves to an exceptionally rapid rate of development and delivery. They are aided by a great deal of flexibility in implementation, due in significant part to their own products and organizational vision. They are driven to produce extraordinary results that demonstrate their own agility, as well as that of mainframe solutions and operations. As they promise, Compuware’s employees and executives are delivering “Agility Without Compromise…simple, elegant solutions that enable a blended development ecosystem.”

We’re impressed with what they are doing. We recommend that you investigate to see if you agree.

Tuesday, November 1, 2016

First European OpenPOWER Summit – confirms growing popularity of the platform

By Rich Ptak

The first ever OpenPOWER European Summit opened in Barcelona this week. With over 270 members worldwide (60 in Europe), the OpenPOWER Foundation continues to grow and win over developers and users for the OpenPOWER Platform.

With the demise of Moore’s Law[1], IBM believes that the performance demands of computing with dense databases, extensive virtualization, container environments, etc., can best be met with workload-tuned systems. In that vein, IBM announced three new POWER8-based systems in September. Sessions at IBM’s Edge 2016 event, and an early peek last summer at the next-gen POWER9 chips, indicate a platform attracting considerable attention along with increasing support and use.

The systems are already used by a number of OpenPOWER Foundation members. For example, the IBM Power S822LC for High Performance Computing (HPC) is used by Turkey’s SC3 Electronics to create the largest HPC cluster in the Middle East and Africa. For Germany’s Human Brain Project, the Power S822LC for HPC is the pilot system in a Pre-Commercial Procurement process for the JURON supercomputer at the Jülich Supercomputing Centre.

Why OpenPOWER?
What’s behind OpenPOWER’s popularity? In a nutshell, and in sharp distinction from other vendors, IBM and OpenPOWER Foundation members believe that workload- and task-specific systems, fully optimized for dominant computing tasks such as Big Data handling and processing, High Performance Computing and compute-intensive commercial computing, provide the best way to meet today’s requirements of IT developers and operations staffs. IBM also recognized the power of open, collaborative communities to rapidly develop and deliver leading-edge products.

IBM and the OpenPOWER community form a unique combination that facilitates and speeds innovation. Together they deliver the rapid calculation of sophisticated analytics and the manipulation of massive amounts of data needed for practical application of the compute-intensive machine learning, deep learning, autonomous device and cutting-edge research and development efforts underway today.

The OpenPOWER Community believes the widest range of enhancements, extensions and advancements will result from extensive collaboration around a fully open architecture and standards-driven technology. The OpenPOWER platform and the OpenPOWER Foundation are joined by the recently (October) announced OpenCAPI Consortium[4]. The consortium’s goal is to fully standardize CAPI[5] as an open interface architecture that allows any microprocessor to attach to accelerators, advanced memory, networking and storage devices. It is faster, easier and less expensive than traditional approaches.

OpenPOWER in Europe
The European summit detailed member-driven innovation and extensions with a range of special-purpose accelerators and enhancements using the OpenPOWER platform. In addition to those mentioned earlier, Spain’s Barcelona Supercomputing Center (BSC) is collaborating with IBM in the IBM-BSC Deep Learning Center to improve and expand algorithms for deep learning systems. See Figure 1 (below) for more OpenPOWER platform activities in Europe and the UK.

Figure 1 European OpenPOWER ecosystem growing
Key to the success of the OpenPOWER platform is acceptance in the Dev/Ops world. A partial list of OpenPOWER developer resources and events from around the world includes:
  • A European developer cloud – a collaborative effort with the Department of Informatics at the Technical University of Munich; plans are to launch Supervessel, a European R&D cloud, by the end of 2016. Similar to an earlier Chinese effort, Supervessel is a cloud built on POWER’s open architecture and technologies to provide open remote access to ecosystem developers and university students.
  •  CAPI SNAP Framework – a collaborative effort by North American and European OpenPOWER Foundation members to make FPGA acceleration technology easier to implement and more accessible to the worldwide developer community. A beta version is available now.
  •  OpenPOWER READY FPGA Accelerator Boards – Alpha Data showcased its low latency, low power, OpenPOWER READY compliant FPGA accelerator boards for applications requiring high-throughput processing and software acceleration.
  •  OpenPOWER Developer Challenge winners – some 300 developers competed in the first OpenPOWER Developer Challenge, from which four Grand Prize winners were announced.

The OpenPOWER ecosystem continues to grow and add members in the UK, Europe and around the globe. Europe’s 60 members today could double in the next 12 months as leading edge companies (of all sizes) have access to the right platform with the right capabilities for rapid innovation in operations, accelerators, networking, storage and software.

The Final Word
While generic servers aren’t going to disappear any time soon, the OpenPOWER platform and associated technologies clearly address previously unmet needs of developer and operations communities working at the leading edge of technology and its application. These teams are tackling some of the biggest, most demanding problems and challenging applications being addressed today. And more and more of them are finding the OpenPOWER ecosystem supportive of the innovative thinking they demand.
One more observation: OpenPOWER system architecture isn’t just for R&D and cutting-edge teams. Numerous companies use the technology in a range of situations. The basic strategy is to extend system capabilities and lower technological barriers to access, using creative collaboration to make innovation and leveraging emerging technologies easier. We think they, along with their partners, are succeeding.

We encourage you to follow the links provided to find out more about what is happening with OpenPOWER Systems[6], the OpenPOWER Foundation[7] and the OpenPOWER ecosystem as it expands around the world. The platform and its associated enhancements are finding considerable application in research and operations.