Wednesday, October 31, 2018

IBM Storage innovation for a multi-cloud world

By Rich Ptak



We’ve commented[1] positively on IBM’s storage and data management portfolio before. Of course, portfolio announcements, plans and strategies are all well and good. But success depends on execution and results. Has IBM delivered? Let’s examine the results.

IBM is acknowledged as #1 in several storage-related areas, including Archiving, Storage Software apps (management, operations, collection, etc.), Mainframe Storage and branded tape. It is also #1 in Analytics. IBM states that 87% of the Fortune Global 100 companies use IBM Spectrum Storage. Overall, IBM has laid claim to being the world’s #2 storage company.

Not resting on past laurels, IBM storage continues to expand and innovate its data management and storage portfolio to resolve pressing challenges facing enterprise operations. Today’s enterprise world of multi-cloud environments, coupled with the on-going explosion in the amount and variety of data, poses special problems that significantly impact IT operating infrastructure, apps and services. These include escalating pressures for protection and security, speedy implementation and application of new technologies, enhancing and extending existing products, and leveraging all this change for competitive advantage. These are challenges that must be addressed by both vendors and their enterprise customers.

IBM plans to maintain its success in a rapidly evolving market with aggressive, fast-paced innovation aimed at new product creation, as well as enhancement and extension of the existing portfolio. To that end, IBM recently announced new products, along with current and planned enhancements to existing products and services. First, we examine the current enterprise operating environment. Here’s what’s going on.

What’s the issue?

Today’s enterprise must compete in a globally dispersed, data-driven, rapidly evolving and fiercely competitive environment. This means success for today’s enterprises depends upon data, lots and lots of data. From frontline sales to DevOps, solution and service delivery, IT infrastructure and business operations, it’s the collecting, supporting, processing and reporting of data that’s on everyone’s mind. More precisely, it’s the insight and innovation based on data analysis and manipulation that, in the end, delivers lasting, significant competitive advantage.

In addition, many of today’s successful enterprises have reached a consensus that the preferred infrastructure and operating environment to achieve the desired pace of agile development and delivery is a hybrid mix of private cloud (on- or off-premise) plus multiple public clouds (also on- or off-premise). See Figure 1.

(Image courtesy of IBM, Inc.)
Figure 1 Data-driven, all cloud, any cloud, anywhere
Enterprises today require an operations environment and infrastructure that is fast, secure, extensible, able to work transparently across multiple cloud configurations, and adaptable enough to leverage next-generation applications and emerging technologies, all available in an extremely cost-effective manner and built on, and adaptable to, the latest technology.

In addition, operations and applications supporting data collection, management and processing services must mesh and function in a multi-cloud mix.

IBM has plans, as well as new and enhanced products and services to address these needs. Here’s what we think.

Creation to Archive – Innovation drives action

IBM’s announcement included new products (IBM Spectrum Discover, the NVMe-enabled Storwize V7000 Gen3 and new tape drives), as well as significant enhancements to storage systems infrastructure (FlashSystem, Cloud Object Storage, Storwize), storage services, and software products for management, scaling, virtualization and cluster virtualization. Figure 2 provides a view of the breadth and depth of the products affected. Clearly, there is far too much to cover in a single document. We’ll summarize details on selected pieces that we think stand out.

(Image courtesy of IBM, Inc.)
Figure 2 Map of new and enhanced storage products
Specifically, we will discuss end-to-end portfolio support of NVMe[2] and NVMe-oF[3], the new Storwize V7000 Gen3, FlashCore enhancements, the new IBM Spectrum Discover product and Spectrum enhancements.

Infrastructure – faster, more capacity, end-to-end NVMe

IBM is no stranger to flash storage and networking solutions. They’ve consistently pushed the boundaries of speed, capacity, data protection and security. They do so again to deliver end-to-end NVMe support across the IBM storage portfolio. They are also increasing transfer rates, reducing latency and increasing data volume capacity. We provide some specifics below. But first, why is NVMe support critical?

As mentioned, the task of data storage, management and processing is an enormous part of enterprise IT operations and critical to business success. We mentioned the embrace of multi-cloud environments. As a result, rapid movement and manipulation of massive amounts of data across this mixed cloud environment must be extremely quick, easy, transparent, secure and consistent.

This is the role of NVMe and NVMe-oF, which connect systems and servers with the full range of networked peripheral, network fabric and bus-connected devices. To provide an idea of the scale of improvement, devices using traditional data transmission protocols can only handle 32 commands per queue (SATA) or a somewhat faster 256 commands per queue (SAS). NVMe can handle 64,000 commands per queue and supports a maximum of 65,535 input/output queues.
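To put those queue figures in perspective, here is a quick back-of-the-envelope calculation (ours, purely illustrative) multiplying queues by queue depth for each protocol:

```python
# Illustrative comparison of maximum outstanding commands per device,
# using the queue figures cited above (a single queue assumed for SATA/SAS).
protocols = {
    # name: (queues, commands per queue)
    "SATA": (1, 32),
    "SAS":  (1, 256),
    "NVMe": (65_535, 64_000),
}

for name, (queues, depth) in protocols.items():
    print(f"{name:>4}: {queues:>6} queue(s) x {depth:>6} commands/queue = "
          f"{queues * depth:>13,} outstanding commands")
```

That jump of several orders of magnitude in parallelism is what lets NVMe devices keep modern multi-core servers and flash media fully busy.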

Without such capacities, applications and services leveraging AI, Machine Learning (ML) and the analysis of massive amounts of data would be severely limited, if possible at all.

IBM Storwize V7000 Gen3 is built to address the challenge. The Storwize V7000 Gen3 system uses enhanced compression technologies and advanced AI-based management to make optimal use of NVMe internal to the Storwize array and of NVMe-oF technologies. NVMe dramatically speeds data movement between systems and storage devices with much-reduced latency (which remains constant at all capacity levels). It increases both the speed of transfer (16Gb NVMe over Fibre Channel interfaces) and the volume of data moved, both important in today’s data-heavy environments. IBM reports throughput increases of 2.7x, with 25% more available capacity in the control enclosure than current products. That translates to a maximum capacity per control enclosure of 461TB and a maximum capacity in a 4-way clustered system of 32PB. This exceeds anything currently available elsewhere.

IBM FlashCore technology has also been optimized for NVMe, resulting in improved performance which translates to business advantage at reduced cost. For instance, AI-driven optimization of storage management and data placement maximizes use of time, money and resources. Knowledge of data usage patterns allows Easy Tier AI to automatically move data to the most appropriate media tier. Hardware-implemented encryption and compression means there is no negative impact on performance. FIPS 140-2 certified security with end-to-end NVMe on FlashCore Modules + SAN infrastructure meets enterprise security requirements. 
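As a rough illustration of what usage-pattern-driven placement means in practice, the toy Python sketch below migrates frequently accessed extents to flash. To be clear, this is our simplification of the general heat-based tiering idea, not IBM Easy Tier’s actual algorithm:

```python
from collections import Counter

# Hypothetical access log: storage extent IDs touched during a sampling window.
accesses = ["e1", "e7", "e1", "e3", "e1", "e7", "e9", "e1"]

heat = Counter(accesses)   # I/O count per extent = its "heat"
HOT_THRESHOLD = 2          # tunable policy knob (our invention)

# Hot extents go to the flash tier; cold extents to slower capacity media.
placement = {extent: "flash-tier" if count >= HOT_THRESHOLD else "capacity-tier"
             for extent, count in heat.items()}
print(placement)
# {'e1': 'flash-tier', 'e7': 'flash-tier', 'e3': 'capacity-tier', 'e9': 'capacity-tier'}
```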

In a significant change from past operations, starting October 23, 2018, all Storwize products will be ordered through and delivered by IBM’s global network of Business Partners. This applies to the new NVMe-based Storwize V7000 Gen3, as well as the earlier Storwize V7000F, V7000, V5030F, V5020 and V5010. It also applies to VersaStack systems.

IBM is not decreasing its marketing, sales and technical support of its storage line. In fact, IBM is adding Storwize sales resources to work with its channel partner community on the Storwize family; IBM marketing, sales and technical support for the Storwize family remain unchanged. Business Partners and customers will have local support and services, with the opportunity to realize greater advantages and benefits.


Infrastructure – Flash

The big news here is near-pervasive support for NVMe and NVMe-oF, which significantly speeds the transfer of much larger amounts of data, improving data access and accelerating application performance. NVMe-oF support is implemented via a simple, non-disruptive, no-charge IBM Spectrum Virtualize software update. This applies to FlashSystem 9100 and other members of the IBM Spectrum Virtualize family. (Some restrictions do apply, so check eligibility.)

Plans for 2019 include:
  • IBM Spectrum Virtualize support for NVMe over Ethernet
  • IBM Spectrum Accelerate (FlashSystem A9000/R) support for NVMe over Ethernet
  • IBM Cloud Object Storage software to support NVMe flash drives in software-defined configurations
 
Performance and efficiency enhancements to the IBM FlashSystem 900 appear in reduced write latency (85µs) and a high-speed (16Gb) Fibre Channel NVMe SAN interface. A new double-capacity MicroLatency Module (44TB with in-line compression) doubles the effective capacity to 440TB. Increasing capacity without changing the system footprint means there is no increase in power load or cooling requirements, leading to cost savings.
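The capacity arithmetic is easy to check; the sketch below reproduces it, assuming a ten-module configuration (our inference from 440TB ÷ 44TB, not an IBM-stated slot count):

```python
# Back-of-the-envelope check of the FlashSystem 900 capacity figures.
# The ten-module count is our inference (440 TB / 44 TB per module),
# not an IBM-published configuration detail.
modules = 10
new_tb_per_module = 44   # effective TB, with in-line compression
old_tb_per_module = 22   # half that, per the "double capacity" claim

print(f"new effective capacity: {modules * new_tb_per_module} TB")  # 440 TB
print(f"old effective capacity: {modules * old_tb_per_module} TB")  # 220 TB
```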



The IBM DS8880 all-flash data system maintains the same footprint but uses a new 15.36TB custom flash module delivering up to a 2x boost in supported flash capacity, to an 8PB maximum (varies by model). Of course, the Easy Tier AI-based benefits of intelligent, automated data placement are also available.



A quick word on security and data protection, both increasingly critical today. Between governmental mandates, such as GDPR, HIPAA and FIPS, and sophisticated global hacking, protection and security techniques must constantly evolve. IBM addresses this at multiple levels. IBM Cloud Object Storage provides cost-effective solutions built around policy-based WORM and lockable vaults. Cybercrime, ransomware and related threats make secure archive and backup more critical than ever. Physically isolated tape media provide “air gap” protection for backup and archiving. IBM’s new high-capacity (20TB) tape drive, the TS1160, offers improved performance and is compatible with existing IBM TS3500 and TS4500 libraries.


(Image courtesy of IBM, Inc.)
Figure 3  What’s new and enhanced in the Spectrum portfolio
Software and Solutions

IBM announced IBM Spectrum Discover as well as enhancements to multiple IBM Spectrum products. See Figure 3. IBM Spectrum Discover allows fully automated, fast review and characterization of billions of files of unstructured data. This allows potentially useful data to be identified for further processing.

The amount of unstructured data available today is staggeringly large. To accurately classify, characterize, sort and index unstructured data for analysis has been an unbelievably time-consuming, difficult and error-prone activity even with high-powered computers and dedicated data scientists. 

IBM Research provided the technology and techniques that allow IBM Spectrum Discover to automatically process metadata to classify and index billions of unstructured files and objects. It identifies data that can be used by AI, ML and analytics to yield insight and information. IBM Spectrum Discover works with unstructured data in IBM Cloud Object Storage and IBM Spectrum Scale, with support for Dell-EMC Isilon planned for 2019. 
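To make the metadata-only approach concrete, here is a minimal Python sketch of the general technique: walk a file tree, harvest metadata (never file contents), and tag files against simple policy rules. The rules, tags and paths are our own invention for illustration; this is not Spectrum Discover’s implementation:

```python
import time
from pathlib import Path

# Hypothetical policy rules: map a file's metadata to a business tag.
RULES = [
    (lambda m: m["ext"] in {".dcm", ".nii"}, "medical-imaging"),
    (lambda m: m["size"] > 1 << 30, "large-object"),
    (lambda m: time.time() - m["mtime"] > 5 * 365 * 86400, "archive-candidate"),
]

def classify(root: str) -> list[dict]:
    """Walk a tree, harvesting only metadata -- file contents are never read."""
    catalog = []
    for path in Path(root).rglob("*"):
        if not path.is_file():
            continue
        st = path.stat()
        meta = {"path": str(path), "ext": path.suffix.lower(),
                "size": st.st_size, "mtime": st.st_mtime}
        meta["tags"] = [tag for rule, tag in RULES if rule(meta)]
        catalog.append(meta)
    return catalog

if __name__ == "__main__":
    for entry in classify("/data")[:10]:    # "/data" is a placeholder root
        print(entry["path"], entry["tags"])
```

Because only metadata is touched, a scan like this can cover billions of objects far faster than any content-reading pipeline, which is the core of Spectrum Discover’s value.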

IBM also announced a number of storage and data management solutions, existing and planned, which include:

  • IBM Storage Solution for SAP – for data protection combining IBM storage, IBM Spectrum Protect and IBM Spectrum Copy Data Management – includes SAP TDI certification of FlashSystem 9100, the new IBM Storwize V7000 and Elastic Storage Server
  • IBM FlashSystem 9100 – received certification from Epic for electronic healthcare records (note Meditech certification is coming in 2019)
  • Support for blockchain – planned for 2019
  • IBM Spectrum Virtualize for Public Cloud – Amazon AWS availability is planned for 2019

We’ve only covered a portion of the announcement. This should give you a good flavor of the depth and breadth of IBM’s commitment and aggressive advance in this market.

Conclusion

IBM describes its strategy as focused on “Innovating from Creation to Archive” as it expands and enhances an already extensive family of product, service and infrastructure offerings. On first hearing, that appears to be a bit of bragging. After the briefing, and after reviewing the deliverables, planned portfolio additions and direction, we conclude it is not a hollow brag. There is solid substance to what they have delivered and are planning.

We must admit that we came away impressed with what IBM has delivered. They did not just maintain, but significantly improved their portfolio. Congratulations to them.



Finally, there is much more in IBM’s announcement, including usage-based pricing, improved scale-out capabilities, improved, cost-effective archiving, as well as new high-availability and disaster-recovery entry configurations for lower CAPEX. We highly recommend a conversation with IBM to any enterprise with high-volume, large-capacity, fast-response data and storage needs.



[1] https://ptakassociates.blogspot.com/2017/11/ibm-spectrum-with-cluster.html
[2] NVMe = Non-volatile Memory Express
[3] NVMe-oF = Non-Volatile Memory Express over Fabrics


Monday, October 15, 2018

zAdviser: A New Chapter in Compuware’s Mainstreaming the Mainframe Story

By Rich Ptak

Compuware zAdviser Process (Courtesy of Compuware, Inc.)
Sixteen quarters ago, following a fast-tracked Waterfall-to-Agile transformation, Compuware committed to an aggressive quarterly delivery schedule to provide customers with net new capabilities and enhancements to classic offerings that would enable IT teams to “mainstream the mainframe.” That is, make the platform an integrated, integral part of today’s enterprise computing environment. Since then Compuware has doggedly adhered to that schedule, releasing new innovations each quarter like clockwork. 

In addition to maintaining this impressive quarterly cadence, Compuware has consistently been integrating its tools with solutions from several leading DevOps tools providers such as XebiaLabs, SonarSource and Electric Cloud, making it easier for enterprises to include the mainframe in their DevOps toolchains. They’ve made five product acquisitions, all of which have been thoroughly integrated into their portfolio. And, through a steady stream of pro-mainframe thought leadership—including publication of several Agile and DevOps transformation best practices guides—Compuware has become the industry’s most outspoken proponent for modernizing mainframe software development and delivery processes.

Based on their own experiences in transformation, Compuware recognized that enterprises must be diligent in continuously measuring and improving mainframe DevOps processes and development outcomes. To that end, they recently introduced zAdviser, a new service, free to maintenance-current customers, that leverages emergent machine learning capabilities to improve and speed mainframe development and operations management. (zAdviser is the latest evolution in the company’s Value Improvement Program (VIP), which enables customers to qualify, quantify and increase the value derived from Compuware products.)

Before we get into the specifics of zAdviser, let’s touch on some of the dynamics that make this service invaluable today and show how it fits neatly into Compuware’s strategy to mainstream the mainframe.

The Mainframe is Thriving
The rumored demise of the mainframe is just that—a rumor. Market changes are driving an increasing demand for the mainframe’s unique capabilities. BMC’s 2018 Mainframe Survey[1] documents this change as the mainframe is increasingly integrated into overall enterprise IT infrastructure operations. The study reveals executives, development, and operations staffs’ interest in the mainframe continues to grow. Some 93% of executives with a mainframe believe in its long-term viability. Key workload measurements of transaction volume, data volume, workload volatility/unpredictability and number of databases supported show significant year-over-year increases. Some 60% of respondents have a positive expectation of MIPS growth. IBM’s introduction of pervasive encryption, dramatic capacity increases, and footprint reduction is attracting even more users to the platform as hardware and software innovations continue to roll out.

All this bodes well for the mainframe; however, issues remain.

Challenges, old and new
Enterprise IT has never been, and never will be, free of challenges. Many enterprises are plagued with a myriad of cultural, procedural, operational and organizational issues stemming from traditional siloed mainframe operations. These issues severely hamper cross-enterprise cooperation and coordination, and are further complicated by a retiring mainframe workforce, minimal efforts to fill vacant positions, non-agile tools that don’t support innovation, and more.

Launching enterprise modernization brings its own problems. Agile and DevOps practices, typically well-developed for distributed IT, are totally foreign to mainframe staff. Efficiency, productivity and morale suffer as mainframe staff play catch-up, slowing both development and modernization efforts. Problems can also arise with new-to-the-mainframe staff who bring different operational expectations. They must learn new skills using familiar, as well as not-so-familiar, tools on the new platform.

Additionally, introducing a new tool can lead to the often overlooked problem of “implement and forget.” For example, a common scenario occurs when a new tool to improve code quality is introduced. As staff learn the tool, code quality improves. However, productivity slows, prompting a management reaction. In the typical overloaded shop, the temptation (for some) is to forget the new tool and default to old habits. Thus, the tool is “implemented and forgotten,” and its benefits are lost. Absent a formal way to track tool usage, there is no easy way to be alerted to the problem, and, ultimately, quality suffers.

Mainstream the mainframe
Mainstreaming the mainframe aims to fully integrate mainframes into the enterprise as just another platform by ensuring the mainframe is included in cross-platform DevOps work processes. As the responsibility for the mainframe must ultimately be transferred to mainframe-inexperienced developers (as well as other staff), enterprises must equip these new professionals with modernized tools and solutions that enable them to confidently work on any application, even those that are old, complex and inadequately documented. 

One such solution is Compuware Topaz, a suite of integrated products all leveraging a modern Eclipse-based IDE, designed to enable developers with a variety of technical backgrounds to be productive in a mainframe environment. The user-friendly interface allows developers to move easily between development and testing as they work on both mainframe and non-mainframe applications.

Mainstreaming the mainframe also requires an integration-friendly solution set. Compuware tools integrate with an expanding array of leading DevOps tools and APIs, empowering developers of every stripe to perform and improve the processes necessary to fulfill each phase of the DevOps lifecycle.

Managing for continuous improvement
CIOs have awakened to the fact that mainframe agility is a fundamental requirement for business agility. It’s no longer enough to embrace new processes and tools that speed mainframe development and enable new developers to innovate on the mainframe. To be successful, enterprises must continuously measure and improve mainframe DevOps processes and development outcomes. To accomplish this, they require a robust set of key performance indicators (KPIs) that measure mainframe DevOps quality, velocity and efficiency. Newly available Compuware zAdviser uses machine learning to find correlations between developer behaviors and DevOps metrics, giving teams both the intelligence and the KPIs needed to understand what they’re doing well in addition to identifying what they could do better.

zAdviser for evidence-based decision making
It’s practically a cliché to observe that you can’t manage what you can’t measure. In our case, it can be updated to say you can’t compete in today’s digital economy if you aren’t continuously measuring and continuously improving development processes and behaviors. Today, IT development and operations have available to them more and different types of data than ever before with more being added almost continuously.

zAdviser presents results via data-rich dashboards…(and) customers work with Compuware staff to explore data to understand correlation between developer behaviors and KPIs.

This is the realm of Compuware zAdviser, which applies advanced machine learning algorithms to metrics captured on the usage of Compuware mainframe tools and popular third-party solutions such as Atlassian Jira and ServiceNow, as well as many source code management (SCM) systems. zAdviser continuously measures and analyzes data and presents the results via data-rich dashboards, built on Elastic’s Elastic Cloud service and its Kibana data visualization technology. Customers then work with Compuware to explore their data to understand the correlation between developer behaviors and KPIs. DevOps leaders can use the actionable insights zAdviser uncovers to make informed decisions about what they need to improve. 
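For readers who want a feel for what sits behind such dashboards, here is a minimal sketch of an Elasticsearch aggregation query of the kind a Kibana panel might chart: tool-usage events bucketed by team over the last 30 days, sent to the standard `_search` REST endpoint. The deployment URL, index, field names and credentials are placeholders of our own; zAdviser’s actual data model is not public:

```python
import requests

# Placeholder Elastic Cloud deployment and index -- illustrative only.
ES_URL = "https://example-deployment.es.us-east-1.aws.found.io:9243"
INDEX = "tool-usage-metrics"

# Count usage events per team over the last 30 days.
query = {
    "size": 0,  # no raw hits, aggregations only
    "query": {"range": {"@timestamp": {"gte": "now-30d/d"}}},
    "aggs": {"by_team": {"terms": {"field": "team.keyword", "size": 20}}},
}

resp = requests.post(f"{ES_URL}/{INDEX}/_search",
                     json=query,
                     auth=("reader", "secret"),  # placeholder credentials
                     timeout=30)
resp.raise_for_status()
for bucket in resp.json()["aggregations"]["by_team"]["buckets"]:
    print(bucket["key"], bucket["doc_count"])
```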

The KPIs


zAdviser calculates and reports on specific KPIs that have been proven to relate to and impact mainframe DevOps and development processes. (Compuware has some 15 years of experience in analyzing data provided by their customers to optimize the ROI of their tools.) Compuware identified velocity, quality, efficiency and (employee) engagement as the significant KPIs to track. These are defined as follows (see the sketch after the list):
  1. Velocity – volume of work completed in a fixed amount of time (lead time, cycle time);
  2. Quality – measure of good vs. bad results (e.g., ratio of escaped abends to trapped abends, and percent of programmers using code coverage);
  3. Efficiency – ability to execute without waste (MTTR, number of features developed, MTTD);
  4. Engagement – interest and involvement of employees in organizational success (anonymized employee survey responses).
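Here is the sketch promised above: a toy Python calculation of the first two KPIs from raw development events. The record layout, field names and sample numbers are our own illustration, not zAdviser’s data model:

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class WorkItem:
    opened: datetime   # when the request entered the backlog
    closed: datetime   # when it was delivered

def velocity(items: list[WorkItem], window: timedelta) -> float:
    """Velocity: work items completed per window (e.g., per week)."""
    if not items:
        return 0.0
    span = max(i.closed for i in items) - min(i.opened for i in items)
    windows = max(span / window, 1)   # timedelta / timedelta -> float
    return len(items) / windows

def quality(escaped_abends: int, trapped_abends: int) -> float:
    """Quality: share of abends trapped before escaping to production."""
    total = escaped_abends + trapped_abends
    return trapped_abends / total if total else 1.0

# Two hypothetical work items and some hypothetical abend counts.
items = [WorkItem(datetime(2018, 9, 3), datetime(2018, 9, 10)),
         WorkItem(datetime(2018, 9, 5), datetime(2018, 9, 21))]
print(f"velocity: {velocity(items, timedelta(weeks=1)):.2f} items/week")
print(f"quality:  {quality(escaped_abends=2, trapped_abends=18):.0%}")
```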
Secure Streaming of Data
Data can be streamed into zAdviser via Amazon Web Services (AWS). Data is securely delivered via HTTPS, with encryption in transit and at rest, for processing and analysis. Alternatively, enterprises can periodically extract the usage data and use either FTP or the zAdviser File Transfer app for delivery to zAdviser.
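As a sketch of the HTTPS path only, an extracted usage file could be delivered over TLS as below. The endpoint, token and file name are placeholders we invented; Compuware’s actual ingestion API is not public:

```python
import requests

# Placeholder endpoint and credentials -- illustrative only.
INGEST_URL = "https://zadviser-ingest.example.com/v1/usage"
API_TOKEN = "replace-with-issued-token"

with open("usage-extract.json", "rb") as payload:
    resp = requests.post(
        INGEST_URL,
        data=payload,   # streamed over TLS: encryption in transit
        headers={"Authorization": f"Bearer {API_TOKEN}",
                 "Content-Type": "application/json"},
        timeout=60,
    )
resp.raise_for_status()
print("accepted:", resp.status_code)
```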

zAdviser has been third-party (VeraSafe) certified as GDPR compliant in three critical areas: data storage (encryption), data removal (right to be forgotten), and data masking (of user IDs).

The Final Word
We’ve elected to focus on zAdviser as an unparalleled tool for providing insight to track performance and plan effective action that improves the critical metrics. Demos of the latest version of zAdviser were very impressive, especially the depth and richness of the currently implemented analyses and dashboards. The options already available to the client are impressive, and we expect future capabilities will also excite and delight customers. Whether you are just embarking on a transformation project, have already launched one, or have even completed a transformation, we think there is significant value to be gained in examining what zAdviser can do for you.

Finally, we are looking forward to next quarter’s release from Compuware. In the meantime, congratulations to the Compuware team. We think current customers will be pleased. Another important item to keep in mind—zAdviser and all its functionality are available at no additional cost to maintenance-current customers!

You’ll find more of what we’ve written about zAdviser and Compuware’s Topaz family of products here[2] and on the Ptak Associates Tech Blog[3].