
Wednesday, November 1, 2017

Busting Mainframe Myths - BMC’s 12th Annual Survey

By Bill Moran and Rich Ptak


BMC surprised us during the review of the results of their annual mainframe survey. Frankly, we were concerned it would be somewhat boring. After 12 years of surveys, expectations were low for anything new, much less exciting. The results, when presented, changed all that.

BMC began by listing 5 popular mainframe myths. For this paper, we've reordered and reworded the list slightly to make them more forceful. Here they are, with our comments in italics:

1)    The mainframe is in maintenance mode (i.e. an old, dead platform) that no one invests in anymore. Many in the industry believe this.
2)    Executives are planning to replace their mainframes. As the trade press (and some analysts) have been saying for years.
3)    Organizations have already fully optimized the mainframes for maximum availability. No surprise here. They have had a lifetime to do so.
4)    Only elderly, ready-to-retire Cobol types work on the mainframe today. Sun Micro at one point had a video that showed some of them.
5)    If any young professionals work on the mainframe, they cannot expect much of a career.
We admit that our list exaggerates a bit, but it does so to make a valid point. Many non-mainframe people believe item number 1 is undeniably true. This is the root of the remaining 4 points. Despite efforts by IBM, BMC, Compuware, and others working for years to update, improve and mainstream the mainframe, the perception persists.

This BMC survey provides a giant step toward finally putting these myths to rest.

Before presenting our conclusions and comments, some background. Survey details and logistics are covered in the Results e-Book[1]. The survey captures input from over 1,000 executives and professionals, all working with the mainframe in enterprises down to mid-range shops. Now, for the survey results as they expose the myths.

For myth #1, a full 91% of the respondents view the mainframe as a long-term, viable platform. 75% of respondents are using Java on the mainframe, indicating their companies have made the investment to hire or train people in Java. 42% identify application modernization as a priority, the specific reason for modernizing being to take advantage of new technology. These results provide convincing proof that customers are modernizing their mainframes. Also, far from being dead, mainframes are very active platforms. Myth #1 deposed.
On to myth #2. 47% of the executives interviewed state that the mainframe will grow and attract more workloads, 43% see it stabilizing, and only 9% say their organizations will replace the platform. Myth #2 destroyed.

On to myth #3. The claim is that mainframe users have already squeezed the last drop of availability out of the platform. Mainframes have always delivered very high levels of availability, yet a full 66% say business requirements continue to force a focus on further reducing maintenance windows. Simply said, they must increase platform availability. Myth #3 shattered.

Consider myth #4, that mainframe users are mainly elderly, ready-to-retire types. This year, BMC added demographic questions to the survey. They found 53% of the respondents are under the age of 50 and only 4% over 65. 20% are female, of whom the majority (55%) are between 30 and 49. (Interesting side note: the latest figures say women hold only 11% of STEM positions worldwide.) Myth #4 deflated.

Finally, myth #5: no career path for younger professionals. In actuality, a full 70% of the surveyed millennials (under age 30 with less than 10 years' experience) are convinced that the mainframe will grow and attract new workloads industry-wide. 54% believe that the mainframe will grow within their own organization, a sure indication they see career opportunities with the mainframe. Myth #5 is laid to rest.

This survey should logically help to kill off some of these common mainframe myths. Still, some people will believe what they want to believe, and others are vested in maintaining the myths. Typically, neither group will let the facts alter their beliefs. We, however, want as many people as possible to be aware of these facts.

We encourage you to investigate BMC’s results for more information and insight. You will likely find the results to be interesting and, possibly, unexpected.

BMC announced these results on November first. For even more of the details and your own copy of the survey, go to BMC’s Mainframe Survey Resources web page here[2].   And, you can read more of our commentary on IT topics in our Tech Blogs[3]. We think you will find that the mainframe has a significant future!


  


ignio: Artificial Intelligence for IT Ops

By Bill Moran and Rich Ptak



Figure 1 Artificial Intelligence for IT Ops   Courtesy of Digitate
Indian multi-national Tata Consultancy Services (TCS) created Digitate in 2015 to develop and deliver products based on the ignio™ Cognitive Automation platform. Today (November 2017), these include ignio for IT Operations, ignio for Batch, and ignio for SAP ERP. We think these offer significant value and benefits to IT. Here's why.


An IT dilemma

IT departments face a dilemma. Their budgets are under severe pressure to deliver more with fewer resources. Yet, they must also manage and undergo a costly digital transformation that CEOs are relying on to deliver new business opportunities. The dilemma is sharpened, and risk increased, because many of IT's best people are unavailable, tied up firefighting to maintain the SLAs that keep existing customers happy.

IT benefits greatly when such people and resources can be freed to focus on these challenges. This is where Digitate's ignio products offer substantive assistance[1]. Over time, they "learn"[2] IT operations, allowing routine tasks to be automated and problem detection and solution to be speeded and facilitated. As its knowledge builds, ignio more fully automates problem "find and fix" activities. In the meantime, it greatly assists staff with problem resolution.

Determining problems in a complex environment is difficult and time-consuming. ignio can help but most IT shops will wisely choose to selectively implement the more advanced ignio capabilities. A careful plan, as we discuss later, will deliver many advantages by reducing risks and speeding the process.


ignio products

              
ignio for Batch and ignio for SAP ERP target the applications their names identify. ignio for IT Operations is designed to deliver value across the whole range of data center operations. Each product can integrate with other installed monitors. Data sheets for each product are available on the Digitate web site[3]. Figure 2 shows the ignio platform architecture.

Figure 2 ignio Platform Architecture        Courtesy of Digitate    

Key to ignio's value is the amount of out-of-the-box knowledge it has about the data center. It knows what a server is, what storage is, and has considerable knowledge about commonly installed operating systems. Inherent in ignio is more than 30 years of IT infrastructure experience, including common knowledge about data center operations and IT infrastructures.

The process by which ignio addresses IT challenges has been carefully designed. Through Blueprinting, ignio first learns the environment to identify what is there and determine "normal" behavior. Once ignio identifies "normal," it can identify deviations. Then, it moves to analysis, determining probable causes of the deviant behavior. Finally, ignio recommends fixes or, depending on the installation parameters, applies them automatically.
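
To make the Blueprinting idea concrete, here is a deliberately simple sketch of the "learn normal, then flag deviations" pattern. It is our own toy illustration in Python, not Digitate's algorithm; ignio's actual models incorporate topology, business context and much more.

```python
import statistics

def learn_baseline(samples):
    """Learn a simple baseline (mean and spread) from historical metric values."""
    return statistics.mean(samples), statistics.pstdev(samples)

def deviations(baseline, new_samples, threshold=3.0):
    """Flag values more than `threshold` standard deviations from the learned mean."""
    mean, stdev = baseline
    return [x for x in new_samples if stdev and abs(x - mean) / stdev > threshold]

# Toy data: % CPU observed during "normal" operation, then some new readings.
cpu_history = [22, 25, 24, 23, 26, 24, 25, 23]
baseline = learn_baseline(cpu_history)
print(deviations(baseline, [24, 26, 91]))   # -> [91] flagged as abnormal
```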

During operation, ignio products follow a continuous cycle of Learn, Resolve, Prevent. The result is that operational models and the knowledge base are continually updated to reflect changes in the environment and operations of the data center.

In addition to being able to "Resolve" issues in the data center, ignio can automate routine tasks that used to take a significant amount of time. IT resources are stretched in most companies; ignio can help address typical employee requests quickly while allowing IT to tackle other, more critical challenges.

In its "Prevent" phase, ignio uses the knowledge it has acquired of system operations to predict likely problems before they happen, as well as to model the effect of proposed system changes. Very significantly and attractively, ignio does not use scripts, so staff do not have to deal with brittle scripts that are a nightmare to manage.

Suggested Action Plan

We recommend beginning with a study and evaluation of ignio. We found a wealth of helpful material on Digitate's web site[4] for understanding Digitate's product offerings and their potential application in the enterprise, and for deciding whether to investigate ignio products further.

After deciding to move forward with ignio, the next step requires creation of a business case and plan. Senior management will judge success by the amount of business value that a technology delivers. You can expect to deliver value in a reasonably short timeline; what is "reasonable" depends on the organization.

The planner needs to understand the organization's significant problems and identify where the possibilities are for tangible organizational benefit. Too often, new technology projects fail for lack of a properly documented business case with a well-defined use case that includes specific benefits, enumerated and quantified. Review potential targets to identify which will benefit most from ignio. Avoid a project with a high risk of visible, disruptive failure. Effective application of AI is leading edge, so set modest goals to start. Establish readily identifiable payback and quantifiable benefits.

Finally, identify potential pitfalls, setbacks, and difficulties. Then, determine how to address these. How will you recover if the original objective cannot be achieved?  This is a possibility, especially with new technology. Should you consider having Digitate Consulting work with existing staff on the initial deployment and training? Where are problems most likely to crop up? Who is affected by this? Where will objections/blockages occur? How can these be avoided/minimized? 

How long will the install take?

Digitate estimates that it generally takes 6 weeks for ignio to learn and become effective in normal operations. This can vary widely by customer[5]. Many installations operate with a variety of "normals". Day-time processing differs from nighttime. Weekdays differ from weekends. End-of-month, -quarter and -year have unique patterns. Some have periods when operations dramatically differ. For example, tax season stresses auditing firm IT systems; fourth quarter stresses retail IT. ignio continuously learns the business context during each period to build a complete model able to detect deviations. Select the initial project timeline accordingly; it may make sense to avoid a critical business period so that a setback does not become catastrophic.

ignio - Be aware

Currently, ignio does have some limitations. For instance, ignio has limited mainframe support. ignio for Batch can analyze data from the mainframe, but it is not designed as a batch scheduler to execute mainframe batch jobs. That said, ignio for Batch can be very useful in certain environments. In our opinion, any shop running hundreds or thousands of batch jobs would be well served to take a close look at ignio's products.

Note that current operating system support includes: Windows, Linux, AIX, and Solaris. There is no support, currently or planned, for z/OS or any other mainframe OS.  We expect UNIX versions, like HP-UX, will be added over time.

The Final Word

ignio delivers a valuable, beneficial application of AI technology to IT data center operations. It will deliver worthwhile results to organizations that follow a careful plan for its implementation. Its products merit careful examination. It is new technology and should be handled as such, i.e. with careful management and planning.  

There will be many products using AI technology. Similar offerings on the market apply AI, robotics, and machine learning to cognitive automation in different ways. Offerings for process automation and optimization are available from companies such as Automation Anywhere, Blue Prism, IBM, UIPath, WorkFusion, etc. Business and industry press, consultants and analysts discussing applications of AI and cognitive technologies will only increase management pressure for in-house AI projects.

ignio appeals to us because it offers key advantages to IT. Among the most significant is that their current products can be used in projects totally contained within IT where risk can be best managed. This allows IT to build knowledge and experience to respond to management questions about AI. A project to investigate and apply ignio products to IT operations appears to us to be a very good move.

TCS has a worldwide presence, deep pockets and highly regarded expertise in IT consulting. Digitate benefits as they leverage these in development and delivery activities. Successful, continued innovation in leading-edge technologies requires substantial on-going investment. Stable technical and financial backing benefits both Digitate and its customers.


[1] There is an excellent video, an interview with Dr. Harrick Vin, the CEO of Digitate, on the design of ignio. See https://www.digitate.com/resource/interview-harrick-vin-birth-ignio/ There are other videos as well.
[2] We realize that we are using words that imply that machine learning is identical to human learning. This can be debated but we will use these words without prejudging the results of the debate.
[3] Find these and many more informative resources at: https://www.digitate.com/resources/
[5] External events may also have to be considered. A disaster, natural or otherwise, can dramatically affect data center operations. 

Tuesday, October 10, 2017

Compuware Delivers Topaz on AWS to Mainstream the Mainframe

By Rich Ptak




Figure 1 – Topaz on AWS      Image Courtesy of Compuware, Inc.
It’s time for Compuware’s quarterly mainframe product announcements. This time Compuware kicks off its 12th quarter (3 years) of new and enhanced product releases by partnering with Amazon. The duo upends mainframe DevOps and mainframe IT by combining efforts to deliver web access to the Topaz DevOps software suite on AWS. See Figure 1.

In an industry first, Compuware provides cloud access to modern mainframe development via Topaz. Developers can enjoy the same user experience on the cloud as if Topaz was locally installed while fully leveraging all the security, performance, flexibility, reliability, scalability and accessibility features of the AWS platform.

Topaz on AWS leverages Amazon AppStream[1] 2.0 technology, a fully managed, secure application streaming service that allows applications to be streamed from AWS to devices running a web browser. Now, all of Topaz's rich capabilities and more are accessible anywhere through the most popular web browsers, including IE/Edge, Chrome and Firefox. The power of Topaz is accessible regardless of the device used, be it a Windows, Mac or Chromebook desktop system.

Compuware’s patent-pending technology provides an intuitive, streamlined configuration menu that leverages AWS best practices, and makes it easy for systems administrators to quickly and easily configure their Topaz on AWS infrastructure, customized to their specific needs, in a few simple steps.

Enterprises can scale the number of development environments up or down depending on their needs. Developers have fast access to new features and functionality that Compuware makes available every 90 days without administrators having to distribute, load, recompile, modify and test multiple individual systems or installations. Efforts to modernize mainframe operations and capabilities happen faster with fewer delays and without requiring the involvement of critical IT staff.

Compuware and Amazon have created a highly performant, secure and fluid developer experience. Once developers launch Topaz on AWS, they can access datasets and data files, analyze applications, make code changes and manage other mainframe tasks using the Topaz suite of tools as if the user environment was locally installed.

Some architectural benefits of AWS

An important feature is that this implementation fully leverages all of the unique enterprise product strengths of the AWS cloud architecture. These include:

·         Security – individual secure deployments applied and managed on a per-account basis, with built-in automated security management services to review policies and monitor compliance with security best practices.
·         Cost optimization – automated optimization assures least cost to the user and the most cost-effective resource management. Periodic reviews and auto-scaling combine to optimize the operating environment as workload volumes and capacity requirements fluctuate.
·         Reliability – AWS management services work to ensure systems are architected to meet operational thresholds, to avoid failures when possible, and to recover quickly from inevitable failures so as to meet business and customer service demands.
·         Operational excellence – Amazon maintains cloud centers located around the world to assure service response and support.
·         Performance efficiency – system services are optimized for maximum performance with available resources, enabling optimal utilization of IT staff and computing resources through automation, cloud-based services and management.


The pricing is right

All of this comes with no new charges from Compuware. The standard Compuware Topaz licensing charge covers the use of Topaz on-premise, in the cloud or in a mixed environment. If you already have a Topaz license, all you need to do is add an Amazon account with AWS cloud services designed to meet the requirements of your specific enterprise operating environment and workload. As mentioned earlier, Amazon AppStream 2.0 services include an automated function to help users find the right optimized pricing model and configuration for their workload and resource needs. These include highly flexible on-demand pricing, spot-discrete fixed price and reserved instances for dedicated predictable workloads, and combinations. We recommend you review the details with an AWS advisor or check here[2] for more information.

What else is new?

Compuware's quarterly announcements are never about just one thing. This time is no exception. In addition to the major announcement, Compuware has further product enhancements and new capabilities to deliver.

First up is a collaboration with CloudBees Jenkins Enterprise. Leveraging Compuware ISPW and/or Compuware Topaz for Total Test in conjunction with CloudBees Jenkins Enterprise, large enterprises can streamline DevOps on their mainframes and orchestrate DevOps across all platforms. Compuware will co-host a webcast with CloudBees on October 25, which will identify opportunities and help educate users on the latest in mainframe DevOps processes.


   
Figure 2 Webhook Notifications  (Courtesy Compuware, Inc.)
Next is an addition that extends ISPW so that it can stream information and notifications to web apps through webhook notifications. Webhooks are user-defined HTTP callbacks that let third parties, such as developers and apps, be notified by a web API when events occur. See Figure 2. This is how ISPW can communicate with Jenkins and other CI services to trigger actions. In effect, it allows ISPW to integrate with other deployment tools and drive continuous integration processes. Activities can be communicated to DevOps teams as they happen, in real time, through tools such as Slack and HipChat.
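
To illustrate the general pattern (not Compuware's actual ISPW payload or API, which we do not reproduce here), the sketch below shows a minimal webhook receiver: a small HTTP endpoint that accepts a JSON notification and could forward a summary to a chat tool or trigger a CI job. The field names are hypothetical.

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

class WebhookHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # Read and parse the JSON body of the incoming webhook notification.
        length = int(self.headers.get("Content-Length", 0))
        event = json.loads(self.rfile.read(length) or b"{}")
        # "setId" and "level" are illustrative, hypothetical field names.
        print(f"Deployment event: {event.get('setId', '?')} -> {event.get('level', '?')}")
        # A real integration might call the Jenkins REST API here, or post a
        # summary to a Slack/HipChat incoming-webhook URL.
        self.send_response(204)
        self.end_headers()

if __name__ == "__main__":
    HTTPServer(("", 8080), WebhookHandler).serve_forever()
```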

A bit of risk?

Choosing public cloud delivery of the Topaz suite may appear risky or even premature to some potential users. Considerations that come to mind include security, reliability, privacy, infrastructure control and, unfortunately, a growing number of government-imposed legal and legislative constraints and mandates. Most of these issues have been, and will continue to be, hashed over and argued about in the press. They remain, and should remain, issues of, at a minimum, keen awareness. Our conversations with Compuware have convinced us that they are working in lockstep with Amazon to reduce the risks and vulnerabilities as much as possible.

Potential customers should identify potential issues and resolve what needs to be done before making the move to the cloud. Others may find a cloud-based model that is maintained and operated in-house, on-premise, to be the right solution.

The first step to be taken is to perform due diligence to identify and assess potential risks and vulnerabilities. Then, these can be balanced against the significant potential benefits in the form of client/customer satisfaction, staff satisfaction and cost savings that can result from improved operations, increased efficiencies and simplified infrastructure management. Examine what Compuware and Amazon have done to mitigate the risks. We believe that many will find the decision to move this development activity to the cloud makes sense.  

The Final Word

Compuware continues to deliver solutions aimed at “Mainstreaming the Mainframe.” Their strategy depends upon their ability to identify and overcome structural and operational issues that make mainframe utilization and COBOL code maintenance a complex, slow and intimidating task, especially for those new to the mainframe.

Compuware has delivered significant, game-changing products each quarter for the last 3 years. They have not only improved, simplified and sped up mainframe operations and management, but have also introduced capabilities that were never thought possible or that are radically changing mainframe operations. They appear to us to be on track to continue that success. Congratulations to them. Good luck as they move forward. We recommend examining their latest offering.


Monday, October 9, 2017

Launching a Secure Environment: Applying IBM’s LinuxONE Encryption

By Bill Moran and Rich Ptak

Courtesy of IBM


The other day we attended an excellent presentation by Dr. Rheinhardt Buengden of IBM Germany on applying the encryption in LinuxONE[1]. He provided extensive technical detail on installing and implementing a secure IBM LinuxONE Emperor II system (or one of the other IBM Linux mainframe systems). It was a highly informative session.

First, nothing that we learned contradicts our earlier blog[2] on IBM's announcement. We continue to believe that LinuxONE, combined with its associated hardware, represents the best commercial alternative for security in the Linux market. But we did gain some greater insight into implementing a high-security system.

We now have a much better appreciation of the level of effort necessary to achieve a secure operating environment. As one might expect, much of the work revolves around having to choose among the many options in Linux. But it also requires effort to fit the new system into the way business is currently organized and done. To accomplish this requires significant skills in Linux and security methods, as well as a detailed knowledge of the company's current processes.

We provide some specifics here; there are certain to be others. First, consider the interactions between security key management and the existing disaster recovery mechanism. Some types of keys are system specific and will not work on another system. Careful planning is necessary to identify and handle inconsistencies and conflicts[3]. The LinuxONE system can automatically recover from an abnormal situation, but only if the preparation work has been done. Similarly, backup and archive policies will need a review for similar inconsistencies. The whole issue of key management will need careful study, with decisions to be made in choosing among the various types of keys that can be implemented; each type has its own properties, advantages, etc.

There are choices to be made over how to handle the encryption applied to files, file systems and disks. Understanding the relative advantages and choosing the best one requires knowledge of the Linux facilities and their interactions with the security facilities. Failure here could result in an intruder being able to access the most sensitive information in the clear, fatally compromising all system security.

The last topic concerns the Linux kernel. The Linux kernel includes crypto APIs that invoke certain software functions, and LinuxONE hardware can speed these functions up. For this to work, the Linux kernel must be updated with code that supports the LinuxONE hardware. IBM has submitted a fix for inclusion in a future Linux kernel release.
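
For readers who want to see which implementations their kernel is actually using, Linux lists its registered crypto algorithms in /proc/crypto, including a driver name and a priority (the kernel selects the highest-priority implementation). The short sketch below simply parses that file; the assumption that hardware-assisted drivers on LinuxONE carry "s390" in their names and register with higher priorities should be verified against your own kernel level.

```python
#!/usr/bin/env python3
"""List AES implementations registered with the running Linux kernel."""

def parse_proc_crypto(path="/proc/crypto"):
    entries, current = [], {}
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line:                 # a blank line ends one algorithm entry
                if current:
                    entries.append(current)
                    current = {}
                continue
            key, _, value = line.partition(":")
            current[key.strip()] = value.strip()
    if current:
        entries.append(current)
    return entries

if __name__ == "__main__":
    for e in parse_proc_crypto():
        if "aes" in e.get("name", ""):
            print(f"{e['name']:<20} driver={e.get('driver', '?'):<25} "
                  f"priority={e.get('priority', '?')}")
```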

This points to a bigger, more significant problem. LinuxONE relies on some open source modules, such as OpenSSL; all such dependencies need to be monitored and updated or modified as necessary if security is to be maintained. We mention this point because the Equifax security breach has been tied to a lack of maintenance of an open source module. The lesson is that maintenance for all modules in the system must be carefully monitored and applied. Open source code updates cannot and should not be ignored.

In sum, we think that anyone planning an installation of a LinuxONE system should understand the magnitude of the task they are undertaking and plan accordingly.

For a security project of this scope, seriously consider establishing a security subcommittee of the Board of Directors. This group needs to learn enough to ask the hard questions and supervise security audits of the organization’s activities.

A review of the presentation would benefit any group interested in security. And, be most helpful for groups considering purchase of the new LinuxONE system.  However, nothing will substitute for a knowledgeable and active staff handling the installation and operation of a LinuxONE system. Senior management support is critical. We hope our notes here make that clear.



[1] Here is the URL for the presentation: http://www.vm.ibm.com/education/lvc/LVC0927.mp4
[3] Details on this topic are beyond our current scope. See Dr. Buengden’s discussion on the topic 

Monday, October 2, 2017

IBM LinuxONE Emperor II™, IBM's Newest Mainframe Linux Solution

By Bill Moran and Rich Ptak

IBM LinuxONE Emperor II

Introduction

On September 12th, IBM announced the IBM LinuxONE Emperor II™, a new, dedicated Linux mainframe with significant upgrades from its z13-based predecessor, IBM LinuxONE Emperor. IBM positions Emperor II as “the world’s premier Linux system for highly secured data serving, engineered for performance and scale.” IBM chose the LinuxONE Emperor II “to anchor IBM’s Blockchain Platform cloud service.” We discuss features and provide some thoughts on evaluating the system for your own environment.


Performance Features

Emperor II is a z14-based Linux-only mainframe system designed as a highly reliable and scalable platform for secure data-driven workloads. Key performance improvements include:
·         A 2-3x performance boost over the z13-based Emperor.
·         IBM described a 2.6x performance advantage over comparable x86 systems for Java work, a result of IBM moving some CPU-intensive Java operations into hardware.
·         Powerful I/O processing capability with up to 640 cores devoted to I/O operations, a benefit for I/O limited applications.
·         Emperor II can operate at near 100% utilization with very low performance degradation. Typical competing systems can achieve 50% or 60% utilization before experiencing significant performance degradation.
IBM’s LinuxONE Emperor II is an impressive, powerful, high performance system. Do keep in mind that all performance numbers are application/environment dependent. Therefore, if performance is critical, do your own testing. Vendor numbers can only provide broad guidelines to potential performance improvement.


Security Features

IBM LinuxONE Emperor I enjoyed significant market acceptance for a variety of workloads. Recognizing the escalating interests in security and high-volume data computing, IBM initiated a large engineering effort to enhance and extend already legendary mainframe system security. The z14-based Emperor II takes security to a completely new level.
IBM states that the system represents the most advanced level of security commercially available today. We believe there exists some justification for the claim. Here’s why.
·         A major block to large-scale encryption has been the extraordinary time and effort needed for encryption/decryption. IBM dramatically[1] decreased both by using an on-chip cryptographic processor (CPACF). This allows users to implement pervasive, end-to-end encryption of all data throughout (and beyond) the system. If a hacker breaks in anywhere in the chain, they only get access to encrypted data, useless without the ability to decrypt. 
·         Hardware protected decryption keys. A hardware-assist feature assures keys are never available in memory in the clear. There is no way for a user, hacker or even an administrator to unlock or make the keys visible and useable.
·         All data can be automatically encrypted and remain so, at-rest, in-motion and during processing – end-to-end – from system to user.
·         Encryption security is implemented with no application changes. Security solutions that require application changes or actions by developers, users or programmers have been a stumbling block for encryption (and other) security approaches.
·         Finally, IBM has a new architecture called Secure Service Containers. These containers protect the firmware and the boot process, as well as the data and the software, from any unauthorized change. A traditional weakness has been the potential for system admins to exploit their elevated system credentials, or for those credentials to be exposed to internal or external threats and then used to gain access to locally running application code and data. With Secure Service Containers, the only access is via the web or an API granted specifically to those with access to this environment. This closes a hole long used by hackers to gain access to critical and private data.


Other key features

Emperor II delivers enhanced vertical scalability (scale up) possibilities, i.e. it allows a collection of tightly coupled multiprocessors to communicate at very high speed using shared memory. This architecture provides a distinct advantage for applications doing sequential updates to a relational database over scale-out systems, such as most x86 systems.
A typical example would be a banking application handling customer accounts. To maintain a correct account balance, all debits and deposits must be processed sequentially, that is, in the order they were performed, e.g. earliest date and time first. An account can be "locked" to ensure accuracy; having shared memory minimizes the latency and associated delay that result from such lock management. Attempting this via a scale-out collection of independent systems can result in a very complicated software environment and may also cause performance problems, whereas IBM's Emperor II would have neither problem.
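
As a toy illustration of the serialization logic described above (our own sketch, not IBM's implementation), the snippet below applies debits and deposits to an account strictly in timestamp order under a lock. On a shared-memory scale-up system the lock hand-off is cheap; spreading the same accounts across independent scale-out nodes pushes that coordination into distributed software, which is the source of the complexity and latency noted above.

```python
import threading
from datetime import datetime

class Account:
    """Toy account that applies transactions strictly in timestamp order,
    guarded by a lock so only one update touches the balance at a time."""

    def __init__(self, balance=0.0):
        self.balance = balance
        self._lock = threading.Lock()

    def apply(self, transactions):
        # Sort by timestamp so the earliest transaction is applied first.
        for when, amount in sorted(transactions, key=lambda t: t[0]):
            with self._lock:              # serialize updates per account
                self.balance += amount

acct = Account(100.0)
acct.apply([
    (datetime(2017, 10, 2, 9, 30), -40.0),   # debit
    (datetime(2017, 10, 2, 9, 5), 25.0),     # earlier deposit, applied first
])
print(acct.balance)   # 85.0
```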


IBM Strategy

Enterprise concerns about data security have changed, increasing dramatically in priority. Previously, security was on everyone's checklist, but price and performance dominated when the final purchase decision was made. Now, security is a deciding factor, and IBM is positioning the Emperor II to win.

This signals a broader change in IBM's messaging strategy. No longer is the focus on "speeds and feeds" with its reliance on numbers, processing speed, price/performance, TCO, etc. to motivate a change of platform. IBM intends to drive the decision using a business case focused on platform design (architecture) targeting the solution of major business and operational problems, as IBM LinuxONE Emperor II does.

Of course, much depends upon the platforms being compared. In many cases, inherent mainframe security will be decisive. IBM's Emperor II, with LinuxONE security and its vertical scalability, far exceeds anything a standard x86 platform[2] has.

While we applaud this change in strategy, it can complicate the selling task. Since IBM's target is x86 systems, sales reps may find themselves competing with Windows systems as opposed to Linux x86 systems. A security discussion comparing LinuxONE to other systems will require a more knowledgeable sales force. Features and functions such as security, Blockchain technology, etc. will have to be explicitly linked to specific business requirements, problem resolution, etc.

One final word on security. The heavy emphasis on security also represents a risk, as bad guys are likely to focus on exploiting weaknesses in applications or lax security procedures as the easiest points of vulnerability. Consumers, businesses and journalists are notoriously quick to indiscriminately blame technology for a failure. A successful penetration via, for example, an app accessing an Oracle database – when the platform functioned perfectly – can quickly be blamed on the platform while the app is overlooked. IBM effectively and economically addresses a real problem area. But there exists much more to be done by the entire community.


Summary

IBM has done an excellent job in implementing security in this system. Anyone looking to achieve the highest level of security in a Linux environment should carefully examine the Emperor II system.  If they have not done so already, they also need to establish a security department to create and monitor organization-wide security policies.

It can’t be said that any system is truly impenetrable. This is true for reasons relating to the very real threat of internal compromise (e.g. carelessness, poor compliance practices, etc.), technological innovation as well as the subversive efforts of very, very sophisticated and clever people attempting to crack the system. We can say that we think that IBM has done an admirable job in creatively addressing a significant number and breadth of security vulnerabilities and problems. They have made it easier and economically affordable (in cost AND resource utilization) for enterprises of all sizes to use encryption techniques to secure systems and data.

We anticipate IBM's LinuxONE Emperor II will appeal to high-end enterprises. They are familiar with mainframes and have the staff to manage them. IBM will have to work harder to win over those with less mainframe familiarity and without experienced staff. However, recent surveys indicate that efforts to modernize mainframe management and development tools, along with the availability of Java, Linux, etc., are attracting new users to mainframes.

Finally, the security the system offers will be a powerful incentive for certain customers, and the total package of the architecture and its features creates a system that can deliver solutions many customers cannot find anywhere else. Congratulations to IBM; we'll watch and report on how this all develops.





[1] IBM did not provide performance or overhead numbers.
[2] By “standard” we mean that high end Oracle and HPE systems may have a scale up design that eliminates the problem that many x86 systems will encounter.

Monday, September 25, 2017

IBM Research on the road to commercial Quantum Computing

By Rich Ptak




Dario Gil, Vice President AI, IBM Research, and Bob Sutor, Vice President AI, Blockchain, and Quantum Solutions, IBM Research, recently provided a briefing on IBM's perspective on the state of Quantum Computing. They described three phases in the evolution of Quantum Computing, IBM's efforts and contributions, and a very recent and significant IBM Research breakthrough on the road to commercializing quantum computing.

The breakthrough is in practical Quantum Computing technology. It marks a significant advance towards commercialization of Quantum Computing. We’ll talk about why in a minute. First a few words about quantum computing. The building blocks of this technology are quantum bits, or qubits, which are the quantum informational equivalent of classical bits, the basis of contemporary computing. Bits have only two states. They are either 0 or 1, i.e. binary – from there all of computing is built.

Individual qubits can exist in much more complex states than simple 0’s and 1’s, storing information in phases and amplitudes. Additionally, the states of multiple qubits can be entangled, meaning that their states are no longer independent of each other. The fact that quantum information can be represented and manipulated in these ways allows us to approach algorithms (instructions that are used to solve problems) fundamentally differently, opening up opportunities for exponentially faster computation. A major challenge to be overcome is how to design algorithms that can make use of these properties to solve problems that are traditionally difficult for conventional machines, like efficiently simulating materials. In this case the molecules at the heart of chemistry and material science.
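
As a rough illustration of amplitudes and entanglement, here is a toy state-vector calculation in Python with NumPy. It is purely pedagogical and has nothing to do with IBM's hardware or tooling; it simply shows why two entangled qubits can no longer be described independently.

```python
import numpy as np

# Single-qubit basis states represented as vectors of complex amplitudes.
zero = np.array([1, 0], dtype=complex)   # |0>
one  = np.array([0, 1], dtype=complex)   # |1>

# A superposition: equal amplitude on |0> and |1> (relative phases could differ too).
plus = (zero + one) / np.sqrt(2)
print(np.abs(plus) ** 2)                 # [0.5 0.5] -> measurement probabilities

# Two-qubit entangled Bell state (|00> + |11>) / sqrt(2).
bell = (np.kron(zero, zero) + np.kron(one, one)) / np.sqrt(2)
print(np.abs(bell) ** 2)                 # [0.5 0.  0.  0.5]
# Measuring either qubit as 0 forces the other to 0 as well; the outcomes are
# no longer independent, which is what "entangled" means here.
```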

A cover story article in the September issue of Nature magazine details how IBM researchers demonstrated a highly efficient algorithm that simulates beryllium hydride (BeH2), and then implemented that algorithm on a real quantum computer. This demonstration was the largest molecular simulation on a quantum computer to date. You can link to the article here. Unfortunately, it is behind a paywall, but there are plenty of other highly interesting articles on Quantum technology and other topics available there. IBM’s announcement with a short explanation can be found here. Read the article for more details about the breakthrough.

What matters today to enterprises, business and more

The most significant parts of the announcement are their implications for commercial enterprises. These are exposed in the details of IBM's vision for, and focus on, the commercialization of Quantum Computing technology. They provide insightful information and structure for deciding when to begin investigating Quantum Computing and its potential to affect your enterprise or business.
Image Courtesy of IBM, Inc. 

IBM considers the initial commercialization of Quantum Computing to be within sight. It may be as much as a decade away, but can reasonably be considered to be close enough for some early enterprise movers with interest, resources, and vision to begin exploring the technology and its potential.

Let's position where Quantum Computing is today. The speakers described three phases of Quantum Computing. These are:
·         Phase 1 – development of Quantum Science – interest began in the 1920s, but it wasn't until the 1970s that the attention of computer scientists was captured. This led to a decades-long effort to discover and define the physics of quantum technology and then develop the theories and concepts to build out the science leading to Quantum Computing technology. Quantum Science underlies the entire field, and will continue as long as there is research to be done to continue to advance the technology.
·         Phase 2 – emergence of Quantum Technology – began in May 2016 when IBM provided free access to the first publicly accessible Quantum Computing prototype, the IBM Q experience, on the IBM Cloud. The opportunity to experiment on a real device led to the creation of new problem-solving tools, algorithms, and even games as real Quantum Computers became accessible to the first wave of users beyond theoretical physicists and computer theoreticians. These new users are practitioners: developers, engineers, thinkers and researchers, including scientists, chemists, mathematicians, etc. Their efforts focus on understanding and articulating problems in quantum terms. The phase will end when the now-wider quantum information community discovers the first applications where the use of quantum computing offers an advantage for solving certain classes of problems. This leads to the next phase…
·         Phase 3 – the age of Quantum Advantage – the age of full commercialization of Quantum Computing. It will be marked with the delivery of apps able to fully exploit Quantum Technology to solve commercial problems. Quantum Computing begins to compete, in some areas, with traditional computing methods by offering multiple orders of magnitude increases in processing speeds and computational complexity for certain classes of problems.

Things to keep in mind and conclusions:

Quantum Computing systems that can handle commercial-scale problems don’t exist yet. A considerable amount of research and development work needs to be done before you can begin to contemplate configuring a system of software and infrastructure. But the first serious prototype systems that lay the foundations for the more mature machines of the future do exist. Is it time to begin to develop some understanding of Quantum Computing, how it functions and how it is currently being used?
Quantum Computing will complement, not replace, traditional computing. By its nature, it is best suited to solving certain classes of problems that are traditionally difficult to solve with conventional machines. These are problems whose solution requires evaluating many alternatives to find the best one, each of which may be computationally intensive to evaluate. Today, many problems are addressed (and will remain so) with traditional computing simulation, modeling and statistical analysis, albeit while making simplifying assumptions. For many applications, solutions obtained with traditional computing techniques will be adequate. Also, despite some recent claims, Quantum Computing does not invalidate or decrease the need for recently announced advances in computing security. Such protections will remain critical to secure computing long into the future.
For other applications, computing alternatives are needed, especially in cases that require simulating quantum behaviors. These include modeling chemical compounds, which requires the ability to predict molecular-level interactions. It is believed that wherever the analysis involves evaluating an incredibly large number of combinations of items, Quantum Computing will have a distinct advantage. Some other examples of nearer-term applications of Quantum Computing include optimization and machine learning.
So, what’s the conclusion? First, as we said, commercialized Quantum Computing is still in the future. It is not ready to address short- or medium-term issues. But, that day is coming. At this stage, most can ignore this technology. But, there also are some that should allocate a portion of their resources (time, budget) to get educated about Quantum Computing. Quantum Computing will realize its biggest advantages when users can define problems in its terms. That requires an understanding of the technology.
Clearly, the level of recommended activity varies with the potential impact. You need to get a realistic idea of that potential. One approach would be to take advantage of IBM's offer of free access to its Quantum Computing prototype[1]. Another approach would be to fund a sandbox project, or an off-hours task, to learn more about and explore quantum technology, and to start thinking about problems in Quantum terms. IBM is making a considerable amount of resources available to do so, much of which is free, some not.
In summary, our advice is to concentrate on:
·         Understanding the basics of the Quantum Computing approach to determine its potential to impact you and your business. We expect most will find its potential optimization benefits too attractive to resist.
·         Learning about and understanding how Quantum Computing will change how problems are viewed, articulated and programmed for solutions.
·         Considering encouragement of “sandbox” or “off-hours” efforts to learn more about Quantum Computing; formal or informal depending on organizational resources and culture.
·         If the potential impact is significant (and we think it is for many), assign a senior executive the responsibility to keep current on the status of Quantum Computing. 
Finally, there exists no single standard for comparing Quantum Computing status today. The metric of the number of qubits available in an array (that makes up a system) is insufficient. For a time, conventional "wisdom" posited it as a 'horse race,' with more qubits being better.
However, the number of qubits alone doesn't matter if there isn't time to execute an algorithm (application) before a qubit array 'ages' into classical bits and loses the data. A way needs to be found to control/correct such error rates. There are three issues: 1) the life of the qubit array, 2) the time for an algorithm to execute, 3) error correction/avoidance.
Researchers are working on these but no single metric yet exists to measure and relate progress. More about these efforts and other issues appear in IEEE Spectrum and Nature magazine, mentioned earlier. 




Publication Date: September 25, 2017
This document is subject to copyright.  No part of this publication may be reproduced by any method whatsoever without the prior written consent of Ptak Associates LLC. 

To obtain reprint rights contact associates@ptakassociates.com

All trademarks are the property of their respective owners.

While every care has been taken during the preparation of this document to ensure accurate information, the publishers cannot accept responsibility for any errors or omissions.  Hyperlinks included in this paper were available at publication time. 

About Ptak Associates LLC
We cover a breadth of areas to bring you a complete picture of technology trends across the industry. Whether it's Cloud, Mobile, Analytics, Big Data, DevOps, IoT, Cognitive Computing or another emerging trend, we cover these trends with a uniquely deep and broad perspective.

Our clients include industry leaders and dynamic newcomers. We help IT organizations understand and prioritize their needs within the context of present and near-future IT trends, enabling them to apply IT technology to enterprise challenges. We help technology vendors refine strategies, and provide them with both market insight and deliverables that communicate the enterprise values of their services. We support clients with our understanding of how their competitors play in their market space, and deliver actionable recommendations.