
Thursday, December 1, 2016

HPE IoT Solutions ease management, reduce risk and cut operating costs

By Bill Moran and Rich Ptak

HPE has invested heavily to deliver IoT solutions. They are well aware of the challenges faced by customers attempting large-scale IoT deployments. Thus, our interest in commenting on the November 30th London Discover announcements of new enhancements to their IoT portfolio. First, a quick overview of the IoT world.

IoT is driven by the belief that enhancing the intelligence gathering (and frequently processing) capabilities of new and existing devices, coupled with centralized control, will yield personal and/or business benefits. The amount of intelligence (for decision-making, etc.) necessary at the edge of the network (in the device) versus the amount retained centrally (server, controller, etc.) varies widely based on the use case. For example, a device monitoring what's in front of an automobile that detects a potential crash situation should make the decision to slam on the brakes locally (in the car). The delay inherent in communicating with a remote controller (in the cloud) would be intolerable.

The complexity inherent in IoT becomes even more complicated when multiple and different devices are involved. Each device type communicates in its own way. Use cases vary by task and industry. The automotive situation discussed is very different from that of a power utility monitoring thousands of meters for billing or consumption-tracking purposes.

Many more scenarios exist, and depending on scale and distribution, some will be very difficult and expensive to deploy. HPE set out to provide solutions that reduce the cost and effort required to deploy, connect and manage this wide variety of distributed devices. Here’s the result.

HPE’s IoT portfolio enhancements fall into three categories. First is the new HPE Mobile Virtual Network Enabler (MVNE), intended for mobile virtual network operators (MVNOs). It simplifies the complexity of SIM card[1] deployment and management. HPE MVNE allows MVNOs themselves to set up the SIM delivery and billing service instead of buying the service from a carrier. The resulting reduction in complexity and long-term billing management reduces operator costs.

The second enhancement is the HPE Universal IoT (UIoT) Platform. The number of IoT use cases implemented over Low Power WANs (Wide Area Networks) is rapidly increasing, each with its own infrastructure, management solutions and standards. No single vendor has a management solution that works across all of them. This meant that businesses, e.g. those delivering smart city IoT deployments, had to use multiple management systems. The HPE UIoT software platform will support multiple networks (cellular, WiFi, LoRa, etc.). Using the Lightweight M2M (LwM2M) protocol, it provides a consistent way to manage the variety of networks, as well as data models and formats.
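To make the idea concrete, here is a minimal sketch (not HPE UIoT code; device and field names are invented for illustration) of the kind of normalization a universal IoT platform performs: mapping payloads arriving over different network types into one common device-data model.

```python
# Illustrative sketch only: normalize heterogeneous IoT payloads into one
# common model so devices on different networks can be managed uniformly.
# All field names and device identifiers below are invented assumptions.

def normalize_lora(payload: dict) -> dict:
    """Map a hypothetical LoRa uplink record to a common model."""
    return {
        "device_id": payload["devEUI"],
        "network": "lora",
        "value": payload["data"]["temperature"],
        "unit": "C",
    }

def normalize_cellular(payload: dict) -> dict:
    """Map a hypothetical cellular/LwM2M-style record to the same model."""
    return {
        "device_id": payload["endpoint"],
        "network": "cellular",
        # /3303/0/5700 is the IPSO temperature sensor value resource
        "value": payload["resources"]["/3303/0/5700"],
        "unit": "C",
    }

readings = [
    normalize_lora({"devEUI": "70B3D5A1", "data": {"temperature": 21.5}}),
    normalize_cellular({"endpoint": "urn:imei:3540",
                        "resources": {"/3303/0/5700": 22.1}}),
]
# Both devices now share one schema and can be queried and managed alike.
```

The value of a platform like UIoT is that this translation layer is maintained once, centrally, rather than per deployment.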

Third are IoT-related enhancements to HPE Aruba[2] (LAN) networks. HPE introduced the Aruba ClearPass Universal Profiler, a standalone software package that permits customers to quickly see and monitor all devices connecting to wired and wireless networks, helping to satisfy network audit, security and visibility requirements. It is the only such application purpose-built to identify, classify and view all devices.

Finally, the latest ArubaOS-Switch software release enhances the security features of most of the Aruba access switch family. Features include automatic tunnel creation to isolate network traffic for security devices, smart lighting and HVAC control units, and the ability to set network access control policies for IoT devices.

With these enhancements to its IoT offerings, HPE has made significant strides toward reducing the complexity and costs facing customers developing and deploying IoT solutions. These are unique tools to help customers economically create and operate an IoT-enabled world. HPE also demonstrates a clear understanding of the difficulties that customers are encountering.

It appears to us that HPE is succeeding in its efforts to provide solutions that reduce the cost and effort required to deploy, connect and manage the wide variety of distributed devices available today. We suspect that there are many who will agree.




[1] SIM cards provide the interface between a device and a cellular connection.
[2] Acquired for its wireless, campus switching and network access control solutions.

Friday, November 4, 2016

BMC Survey on the Status of the Mainframe

By Bill Moran and Rich Ptak
BMC has released the results of their 11th annual mainframe survey.  BMC partners with multiple other parties to collect data and to release the results (e.g. IBM Systems Magazine[1]). This assures they have input from a variety of sources, including non-BMC customers. The resulting expanded range of opinions increases the value of the data.

We review key results of the survey here. We may revisit the topic as additional information is made available. We commend BMC for conducting the survey. It performs a real service for the industry. Studying the results can provide significant insight into what is happening in the mainframe market.[2]

Key results

In our opinion, key conclusions from the survey are:

  1. If the “death of the mainframe” needed more debunking, this survey certainly does so.  It shows there will be no funeral services held for the mainframe anytime soon. Last year 90% of the survey respondents indicated they saw a long-term future for the mainframe. This year that number declined all the way to 89%. Not a statistically significant difference!
  2. The general population of companies, on average, keeps more than 50% of their data on the mainframe. 70% of large organizations see their mainframe capacity increasing in the next 24 months.
  3. In large (and other) enterprises, digital business appears to be driving higher mainframe workload growth.
  4. Smaller organizations are more likely to forecast declining use of the mainframe.
  5. In contrast, companies that are increasing their mainframe usage take a long-term view of the mainframe and its value. They tend to be more effective at leveraging the platform. They want to provide a superior customer experience, so they modernize operations, add capacity and increase workloads. They view mainframe security and high availability as critically important differentiators in today's market, marked by escalating transaction rates, data growth and demands for rapid response times.
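The "not statistically significant" point is easy to verify with a standard two-proportion z-test. The survey's exact respondent count isn't given here, so the sample sizes below are an assumption purely for illustration:

```python
import math

# Two-proportion z-test: 90% (last year) vs. 89% (this year) seeing a
# long-term future for the mainframe. Sample sizes are assumed, not
# taken from the survey, purely to illustrate the calculation.
n1, n2 = 1000, 1000          # assumed respondents per year
p1, p2 = 0.90, 0.89

p_pool = (p1 * n1 + p2 * n2) / (n1 + n2)
se = math.sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
z = (p1 - p2) / se

print(round(z, 2))           # well below the 1.96 needed for p < 0.05
```

Even with a thousand respondents per year, a one-point drop is comfortably inside sampling noise, which supports the authors' reading.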

Other interesting insights

Linux usage on the mainframe broke through the 50% point this year. Its use has been growing steadily ever since a Linux initiative was launched when Lou Gerstner was IBM CEO. Last year, 48% of the survey respondents said that they had Linux in production; this year the percentage rose to 52%.

BMC divides organizations into three groups:

  1. The first group representing 58% of those surveyed say that mainframe usage in their organization is increasing.
  2. The second group (23%) say that usage is steady.
  3. The third group (19%) say that usage is reducing.
We did not do an exhaustive analysis of the differences between the increasing, steady and reducing groups. However, it is worthwhile to sketch a view of some differences between the reducing usage group and the increasing usage group.

In the first place, many reducers indicate that their management believes the mainframe is outdated. This results in pressures to abandon the platform. Thus, their focus is removing workloads. This group is also more concerned about a mainframe skills shortage. Their solution to that problem is, again, to remove workloads thus reducing mainframe platform dependencies.

In contrast, managers of the group that is increasing usage do not appear to believe the mainframe is obsolete. Therefore, there is no pressure to move off the platform. In fact, they actively seek to move new work onto the mainframe. While also concerned about a mainframe skills shortage, their response is to provide internal training and invest in automation wherever possible. Neither outsourcing nor moving workloads off the platform are viewed as viable solutions to a skills shortage.
Figure 1 Top Mainframe Priorities – Chart courtesy of BMC
Next, of interest were respondent priorities. The top priorities for 2016 as identified in the survey include:
  1. Cost reduction/optimization – 65%
  2. Data privacy/compliance/security – 50%
  3. Application availability – 49%
  4. Application modernization – 41%.

Number 5 on the list, “Becoming more responsive to our business”, is not given a percentage. We estimate (see Figure 1) it at 38%. We found this somewhat surprising. With all the focus on the digital enterprise and business, we would have thought that this would be at least #3 on the list. As we said, interesting data comes from the study.

Possible Future Questions

As a quick aside, there are many possible questions to explore. We encourage mainframers to participate in the survey. Going a step further, put your suggestions in ‘comments’ to this blog. Or, tweet them with the hashtag #PtakAssocMFQ. We will track the results and share them with BMC before the next annual survey.

To start things off, we have a few suggestions for future survey topics:

  1. How many projects were undertaken to move work off the mainframe? What were the results? What were the factors contributing to the project’s success or failure?
  2.  Is (and how much is) the mainframe integrated into the overall datacenter operations? Or, is it an isolated island with mostly batch methods of integration?  
  3. How many organizations are using IBM’s z/PDT, which simulates a mainframe on a PC or x86 system, for development?
  4. What is the progress of DevOps modernization? This might connect to the previous point as z/PDT is Linux based and many developers prefer to use Linux tools, but also need access to mainframe data to test their applications.
Of course, we understand that there are many logistical problems in putting together a survey of this type. For example, there is a practical limit on the number of questions that one can ask. However, the answers to these would be enlightening.

Summary

The survey provides significant value to the mainframe community with insights useful to mainframe users, vendors and service suppliers. It can help any mainframe-based organization to plan and optimize for the future. It highlights ongoing community problems even as it corrects the conventional “wisdom”: the mainframe is alive and well.

Finally, the survey is a valuable tool for understanding the state of the mainframe, its user concerns, needs and priorities. BMC may want to consider extending the reach of the survey to include other organizations, i.e. user groups such as Share. In the meantime, we suggest that you visit the BMC website to discover insights that you can successfully leverage and apply in your operations. 




[1] Note this is not corporate IBM.
[2] Study details available at www.bmc.com/mainframesurvey

Thursday, November 3, 2016

Compuware delivers again! Solution innovation for eight consecutive quarters!

By Rich Ptak

With its October launch, Compuware once again successfully met its self-imposed goal of quarterly delivery of brand-new or significantly enhanced mainframe solutions. This makes eight consecutive quarters they have done so. And, for each quarter, the result has been significant, ground-breaking extension or enhancement of capabilities or accessibility in areas that include mainframe DevOps, risk reduction, app development, systems management and resolution of significant challenges to smooth mainframe operations. The current announcement continues the pattern. Congratulations and kudos to Compuware. Here’s what we found interesting.

Service-based Acquisition
We commented earlier on Compuware’s acquisition of ISPW product technology and its integration with Compuware Topaz. ISPW provides comprehensive, modern functionality for Source Code Management (SCM), Release Automation (RA) and Application Deployment (AD) for both mainframe and distributed platforms as a single, integrated solution. We were enthusiastic about the move and the success of Compuware’s integration. As it turns out, we weren’t the only ones.

The prospect of having a single solution where three separate products were previously required was very attractive to over-stretched IT staffs. Combine that with tight integration to Topaz and you have a solution that is practically irresistible. Customer demand for help in moving to ISPW was so high that it motivated Compuware to make its second business acquisition in 10 months. Compuware purchased the total SCM practice, including implementation services, experienced staff and proven methodologies, from Information Technology Service Management (ITSM) firm Itegrations[1]. Compuware’s SCM Migration Services[2] simplify, speed and reduce the risk of migrating from existing vendor-supplied and homegrown systems to ISPW SCM.

Topaz additions, enhancements, extensions and integrations
In keeping with Compuware’s theme of Mainstreaming the Mainframe, the announcement included the new Compuware Topaz Connect[3] (formerly Itegrations NxBridge), which automates and simplifies cross-platform connectivity. Customers can automatically connect Compuware ISPW to various ITSM solutions including ServiceNow, BMC Remedy and Tivoli. This reduces manual processes, time and effort while making the mainframe more accessible, improving the customer experience and improving performance metrics. Recognizing that enterprises may not be able to migrate to agile ISPW SCM immediately, Topaz Connect enables CA Endevor users to access required Endevor functionality via Compuware Topaz Workbench[4], a modern Eclipse-based IDE. Through this integration, developers can perform critical activities such as add and move elements in the lifecycle; generate (compile) elements; create packages; and move groups in the lifecycle.

In another major step towards increasing mainframe utilization, raising the accessibility to modern tools and making the move to DevOps faster and easier, Compuware is providing REST APIs, in effect “building blocks,” to be used to control and manage application deployment in both mainframe and distributed environments. The APIs for ISPW enable users to create, promote, deploy and check the status of code releases using popular Agile/DevOps tools including Jenkins, XebiaLabs XL Release, Slack and Atlassian HipChat with Webhook notification.
Additional, broader-scope APIs will be available in coming months. These will be built to leverage, work with and support open standards and open standards-based tools. For example, complementing the APIs, Compuware plans to add support for a number of popular tools.
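The announcement does not document the API itself, so purely as a sketch (the endpoint path, field names and "promote" operation below are our assumptions, not published ISPW API details), a CI tool such as Jenkins might drive a release promotion with a REST call along these lines:

```python
import json

def build_promote_request(host: str, assignment_id: str, level: str) -> tuple:
    """Build the URL and JSON body for a hypothetical ISPW 'promote' call.

    The path and fields are illustrative assumptions, not Compuware's
    documented API; a real pipeline would also supply credentials.
    """
    url = f"https://{host}/ispw/api/assignments/{assignment_id}/tasks/promote"
    body = json.dumps({"level": level, "runtimeConfiguration": "default"})
    return url, body

url, body = build_promote_request("ces.example.com", "PLAY000123", "QA")
# A Jenkins pipeline step would POST `body` to `url`, then poll a status
# endpoint (or receive a Webhook notification) for completion.
```

The point is less the specific call than the pattern: mainframe release actions become ordinary HTTP operations that any modern DevOps tool can script.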

JCL has been a longtime hurdle for those looking to develop for mainframes, even more so for millennials. Compuware tackles the challenge with plug-ins for Topaz Workbench. Integrations with Software Engineering of America (SEA) technology include the JCLplus Plugin for Topaz Workbench which will automatically verify standards, check syntax and do runtime simulation of JCL. In addition, there is the SAVRS Plugin for Topaz Workbench, which allows easy viewing and interpretation of Joblog and SYSOUT reports.  

Terse error logs and messages made fault analysis a mainframe frustration for a long time. Adding to the problem, mainframe groups operated in informational and data isolation, siloed away from the rest of the enterprise. As a result, separated and off by itself, the mainframe became a “black box”, sidelined and not recognized as part of the enterprise operations team.

As a start toward resolving those issues, Compuware partnered with Syncsort to change that. Integration with Syncsort Ironstream allows diagnostic data from the Abend-AID application fault discovery and analysis solution, together with mainframe logs, security and environmental data, to be fed in machine-readable form to Splunk. Splunk combines that data with data from multiple different sources (security, compliance, behavioral, operations, etc.) across the organization. The combination can then be analyzed, correlated and evaluated to yield operational intelligence. The mainframe's impact and influence in the context of total operations is made visible, and the importance of the mainframe to overall operations is established.
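Splunk's HTTP Event Collector (HEC) gives a feel for what "machine-readable form" means here. The event envelope and endpoint below follow Splunk's HEC conventions; the mainframe field names are invented for illustration, since Ironstream's actual record layout is not shown in the announcement:

```python
import json

# Shape an invented Abend-AID-style fault record as a Splunk HEC event.
# The envelope ("event", "sourcetype", "source") and the /services/collector
# endpoint follow Splunk HEC conventions; the fault fields are illustrative.
fault = {
    "job": "PAYROLL1",
    "abend_code": "S0C7",          # data exception
    "program": "PAYCALC",
    "lpar": "PRODA",
}

hec_event = {
    "event": fault,
    "sourcetype": "mainframe:abend",
    "source": "ironstream",
}

url = "https://splunk.example.com:8088/services/collector/event"
headers = {"Authorization": "Splunk <hec-token>"}
payload = json.dumps(hec_event)
# An HTTP POST of `payload` to `url` with `headers` would land the fault
# in Splunk, where it can be correlated with distributed-systems data.
```

Once faults arrive as structured events rather than terse console messages, the mainframe stops being a "black box" to the rest of the operations team.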

There is much more contained in the announcement. Our recommendation is that you follow up with Compuware to see how your customers and enterprise development and operations can benefit from their efforts.

Compuware’s overall ambition is to expose and confirm the importance of the mainframe to the enterprise. They do so by removing tool-based barriers that have traditionally inhibited its use by broader DevOps teams and by resolving significant shortcomings, including, most significantly, dismantling operational silos. A significant part of the solution is the modernization of mainframe solutions, tools and capabilities so that developers, operations and business analysts can function consistently and transparently across mainframe, distributed and mobile platforms.

A Final word on Compuware’s Vision
If you haven’t noticed already, Compuware operates with a very customer-focused vision to drive its quarter-to-quarter deliveries. It isn’t that they are driven to reflexively react with little forethought. They have an established, consistent product/solution plan with a roadmap of future deliverables.

The basic, bedrock principle is to develop solutions based on what they believe are the critical and most-pressing problems confronting their customers NOW. They are driven by the belief that there are a number of identifiable and curable challenges that act as immediate roadblocks keeping the mainframe out of the mainstream. Their goal is to eliminate those roadblocks and speed the mainframe into mainstream operations. To do that, they have a prioritized, yet flexible list of which challenge they will take on and when.

Compuware’s plans are neither static, nor inhibiting of creative, responsive innovation. For example, late last year the team conceived of, built out and delivered Runtime Visualizer, a new feature in Topaz for Program Analysis[5], in just 84 days. This year they acquired ISPW and rapidly integrated it with Topaz[6]. On the heels of that, they acquired and delivered Compuware ISPW SCM Migration Services. Yet focused attention on customer feedback and the rapidly evolving world of enterprise IT isn’t completely unique – and it isn’t sufficient to maintain leadership. Compuware, partners, employees and executives hold themselves to an exceptionally rapid rate of development and delivery. They are aided by a great deal of flexibility in implementation due in a significant part to their own products and organizational vision. They are driven to produce extraordinary results that demonstrate their own agility, as well as that of mainframe solutions and operations. As they promise, Compuware’s employees and executives are delivering “Agility Without Compromise…simple, elegant solutions that enable a blended development ecosystem.”

We’re impressed with what they are doing. We recommend that you investigate to see if you agree.

Tuesday, November 1, 2016

First European OpenPOWER Summit – confirms growing popularity of the platform

By Rich Ptak

The first ever OpenPOWER European Summit opened in Barcelona this week. With over 270 members worldwide (60 in Europe), the OpenPOWER Foundation continues to grow and win over developers and users for the OpenPOWER Platform.

With the demise of Moore’s Law[1], IBM believes that the performance demands of computing with dense databases, extensive virtualization, container environments, etc., can best be met with workload-tuned systems. In that vein, in September, IBM announced three new POWER8-based systems. Sessions at IBM’s Edge 2016 event, and an early peek last summer at the next-gen POWER9 chips, indicate a platform attracting considerable attention with increasing support and use.

The systems are used by a number of OpenPOWER Foundation members. For example, IBM Power S822LC for High Performance Computing (HPC) is used by Turkey’s SC3 Electronics to create the largest HPC cluster in the Middle East and Africa. For Germany’s Human Brain Project, the Power S822LC for HPC is the pilot system, as part of a Pre-Commercial Procurement process, for the JURON supercomputer at the Jülich Supercomputing Centre.


Why OpenPOWER?
What’s behind OpenPOWER popularity? In a nutshell, in sharp distinction from other vendors, IBM and OpenPOWER Foundation members believe that workload- and task-specific systems, fully optimized for dominant computing tasks such as handling and processing Big Data, High Performance Computing and compute-intensive commercial computing, provide the best way to meet today’s requirements of IT developers and operations staffs. IBM also recognized the power of open, collaborating communities to rapidly develop and deliver leading-edge products.

IBM and the OpenPOWER community form a unique combination that facilitates and speeds innovation. Together they deliver the rapid calculation, sophisticated analytics and manipulation of massive amounts of data needed for practical application of the compute-intensive machine learning, deep learning, autonomous device, and cutting-edge research and development efforts underway today.

The OpenPOWER Community believes the widest range of enhancements, extensions and advancements will result from extensive collaboration around a fully open architecture and standards-driven technology. The OpenPOWER platform and the OpenPOWER Foundation are joined by the recently (October) announced OpenCAPI Consortium[4]. The goal of the consortium is to fully standardize CAPI[5] as an Open Interface Architecture that allows any microprocessor to attach to accelerators, advanced memory, networking and storage devices. It is faster, easier and less expensive than traditional approaches.

OpenPOWER in Europe
The European summit detailed member-driven innovation and extensions with a range of special-purpose accelerators and enhancements using the OpenPOWER platform. In addition to those mentioned earlier, Spain’s Barcelona Supercomputing Center (BSC) is collaborating with IBM in the IBM-BSC Deep Learning Center to improve and expand algorithms for deep learning systems. See Figure 1 (below) for more OpenPOWER platform activities in Europe and the UK.


Figure 1 European OpenPOWER ecosystem growing
Key to the success of the OpenPOWER platform is acceptance in the Dev/Ops world. A partial list of OpenPOWER developer resources and events from around the world includes:
  • A European developer cloud – a collaborative effort with the Technical University of Munich’s Department of Informatics; plans are to launch SuperVessel, a European R&D cloud, by the end of 2016. Similar to an earlier Chinese effort, SuperVessel is a cloud built on POWER’s open architecture and technologies to provide open remote access to ecosystem developers and university students.
  • CAPI SNAP Framework – a collaborative effort by North American and European OpenPOWER Foundation members to make FPGA acceleration technology easier to implement and more accessible to the worldwide developer community. A beta version is available now.
  • OpenPOWER READY FPGA Accelerator Boards – Alpha Data showcased its low-latency, low-power, OpenPOWER READY compliant FPGA accelerator boards for applications requiring high-throughput processing and software acceleration.
  • OpenPOWER Developer Challenge winners – some 300 developers competed in the first OpenPOWER Developer Challenge; four Grand Prize winners were announced.

The OpenPOWER ecosystem continues to grow and add members in the UK, Europe and around the globe. Europe’s 60 members today could double in the next 12 months as leading edge companies (of all sizes) have access to the right platform with the right capabilities for rapid innovation in operations, accelerators, networking, storage and software.

The Final Word
While generic servers aren’t going to disappear any time soon, the OpenPOWER platform and associated technologies clearly address previously unmet needs of developer and operations communities operating at the leading edge of technology and its application. These teams are tackling some of the biggest, most demanding problems and challenging applications being addressed today. And more and more of them are finding the OpenPOWER ecosystem supportive of the innovative thinking they demand.
One more observation: OpenPOWER system architecture isn’t just for R&D and cutting-edge teams. Numerous companies use the technology in a range of situations. The basic strategy is to extend system capabilities and lower technological barriers to access, using creative collaboration to make innovation and leveraging emerging technologies easier. We think they, along with their partners, are succeeding.

We encourage you to follow the links provided to find out more about what is happening with OpenPOWER Systems[6], the OpenPOWER Foundation[7] and the OpenPOWER ecosystem as it expands around the world.

Wednesday, October 12, 2016

BMC Engage 2016 – guiding Enterprises in digital transformation

By Rich Ptak

BMC’s annual Engage event held at the Aria Resort and Casino in Las Vegas attracted over 2500 customers, executives, staff and analysts. In 300+ technical sessions, 90+ customer presentations and multiple keynotes, BMC, clients and 170+ ecosystem partners discussed and demonstrated solutions targeting enterprise digital transformation. Here’s what we took away from the event. 

Multiple speakers detailed the emerging challenges facing enterprises, as well as society, as they undertake the transformation and transition to digitized operations. Multiple commentators label this “The Fourth Industrial Revolution.” We (and others) think this shortchanges the depth and extent of the changes taking place. We believe it merely hints at the extent of the impact. 

Setting the scene

BMC Chairman and CEO, Bob Beauchamp began the conference with a concise summary of the growth and performance of their digital business. Privately owned, BMC doesn’t reveal specific numbers. However, trends in a number of performance metrics point to strong customer acceptance. For FY16 (April ’15 to March ’16), these include: 

·         900 net-new customers
·         30% year-over-year growth in new bookings with each quarter exceeding the previous one
·         24% sales pipeline growth
·         Selection by Forbes as one of America’s Best Employers  

All of this provides convincing evidence that privatization has been good for BMC customers, partners and employees. Let’s see what’s behind all this. 

First is the extraordinary rapidity of results in app-driven, digitized markets. One example is the disruptive speed of new business models, seen in banks having their “Uber moment” and in Uber itself confronting disruption as autonomous vehicles enter the market. Second is the extraordinarily rapid revenue impact of a successful product. It took only 10 days after the introduction of Pokémon GO for Nintendo’s market cap to leap from $21B to $42B. A phenomenal increase for any product, let alone a video game. In addition to market impact, transformation is driven by an extraordinary number of technologies entering the market. We’ll talk about them in the next section.

These few examples of digitization-driven impact dramatically illustrate why BMC believes their customers must “Go Digital! Or DIE!”  Okay, BMC states it a little less dramatically as, “Go Digital or Go Extinct!” – either way, disruptive, existential threats that require action do exist. Enterprises, of all sizes, are realizing they need help to define, plan and execute to move forward. 

The technology drivers

Executives acknowledge that undertaking the journey to become a digital enterprise is inevitable. Successfully navigating the way to digital requires significant new ways of thinking, as well as quick adoption and use of disruptive technologies. These include technologies recognized and in use today (e.g. the mobile Internet, cloud technology, the Internet of Things, virtual reality, Big Data and analytics), along with rapid advancements in base technologies, such as artificial intelligence and natural language exploitation, in combination with newly commercially viable solutions in areas such as advanced robotics, bots, blockchain and autonomous vehicles. The sheer volume creates an unprecedented number of disruptive changes occurring at remarkable speed across every market segment.

BMC Digital Enterprise Management (DEM) for the transformation

With last year’s introduction of its DEM initiatives, BMC positioned itself as a capable, willing partner to help enterprises undertake the transformation. Robin Purohit, BMC’s Group President of Enterprise Solutions Organization stated it this way: “Our mission is to equip our worldwide customers with innovations and solutions they need to start the digital transformation journey, stay on course, and be successful in digital business.”   

A large ambition. One that will be welcome news to numerous C-level executives and IT staff who realize: “The digital imperative is clear: go digital or go extinct.” We’ve heard repeatedly from these teams that they are looking for a partner to help them advance down a path to digitization. The question is: “Can BMC deliver what they need?”  

BMC’s overview tends to indicate they can, as seen in the initiatives designed to aid customers in seven strategic areas. Three are integrated solutions targeting the following:
1.    Digital Workplace – BMW provides a faster, better dealer support experience
2.    Secure Operations (SecOps) – Aegon/Transamerica benefits with better security
3.    Service Management Excellence – Wegmans improves services with data analytics 

Then, customers documented successes achieved with innovative BMC solutions for:
4.    Agile Application Delivery – Target described their experience in speeding app improvements and development
5.    Big data – Malwarebytes detailed improving customer services with faster analysis of greater volumes and kinds of customer data
6.    IT Optimization – Swiss Re talked about optimizing IT operations
7.    Multi-Sourced cloud operations – a Ministry of Defense representative described how they simplified operations involving multiple, different cloud environments 

BMC DEM Solutions and Products for customer success

Customers ranging from the largest Fortune 100 to mid-size and entrepreneurs provided further evidence of how BMC services and products help fuel successful transformations. After sampling the over 80 customer and partner presentations and demos available, we’ve come to the conclusion that BMC definitely delivers results.

They do so with operational integration efforts involving their own products and applications to facilitate communications and cooperation between developers and LOB staff. The overall goal is to enable “service management excellence.” One example integrates BMC BladeLogic and BMC Remedy for simplified and improved automated change management. Integration details are provided on the BMC website[1] along with customer stories. As we’ve mentioned before, customer results will vary. However, it is always worth investigating the successes (as well as the mistakes) of others.

Proven in BMC’s own transformation

According to McKinsey & Company research[2], “less than a quarter of organizational-redesign efforts succeed. Forty-four percent run out of steam after getting under way, while a third fail to meet objectives or improve performance after implementation.” That’s one reason we were excited when BMC presented results of the five-step process they followed in their own internal Digital Transformation:

1.    Organizing for Digital – organizational and operational changes support digital transformation.

2.    Delivering with speed and agility – increase work environment use of technology and automation (Data Center consolidation, Global Command Center, Unified Communications) for cost savings.

3.    Optimizing workloads – give people meaningful work, automate the rest.

4.    Communicating value through Technology Business Management (TBM) – measure and report Digital Service Management (DSM) progress in easily understood terms.

5.    Managing software assets and risks – optimize costs through (pro-)active management. 

As a result, BMC went from 36 data centers/labs occupying 62,000 sq. ft., drawing 1.6 MW of power, with a $6.8M operating expense, to 4 data centers occupying 7,500 sq. ft., drawing 640 KW, with operating expenses of $2.4M. BMC is sharing both its expertise in applying products and its experience with implementation services to help customers determine what they can achieve.
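The figures above translate into substantial percentage reductions. A quick back-of-the-envelope check (our arithmetic, using only the numbers BMC published):

```python
# Back-of-the-envelope reductions from BMC's published consolidation figures.
before = {"sq_ft": 62_000, "power_kw": 1_600, "opex_musd": 6.8}
after = {"sq_ft": 7_500, "power_kw": 640, "opex_musd": 2.4}

for metric in before:
    pct = (1 - after[metric] / before[metric]) * 100
    print(f"{metric}: {pct:.0f}% reduction")
```

That works out to roughly an 88% reduction in floor space, 60% in power draw, and 65% in operating expense.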

One Last Thing

Engage 2016 included much more of interest, including the announcement of an Innovation Suite to address escalating app development interest among management and business-analyst types. Due in November, it uses an array of the latest development tools (Slack, JIRA, Bamboo, Docker, GitHub, Jenkins, Chef, etc.) linked to existing BMC products to allow what is essentially ‘drag ‘n drop’ app creation. Intriguing when you consider the potential to accelerate the move from conception to product delivery! And, we haven’t even touched on the announcements around mainframe products and solutions. We’ll cover all that in a separate piece, with more added in future pieces.

BMC as a private company is proving its ability to act aggressively and effectively to address the most pressing challenges facing its clients, including Digital Transformation. BMC distinguishes itself with its comprehensive, understandable vision of a digital future. They developed and offered DEM as a blueprint for implementation. Uniquely, BMC also raised the issue of the wider societal implications of Digital Transformation and how these will impact the future of the Enterprise and IT in the Enterprise. We will be writing more about that topic in the future.



[2] Steven Aronowitz, Aaron De Smet, Deirdre McGinty, “Getting organizational redesign right,” McKinsey Quarterly, n.d. (accessed October 10, 2016) - https://tinyurl.com/j7za63s

Tuesday, September 27, 2016

The newest IBM Power Systems – more of everything for the hottest environments!

By Bill Moran and Rich Ptak

IBM recently introduced three new Linux-based (LC) Power Systems targeting the hottest workload environments. These POWER8-based, enhanced models are configured to meet the cost, performance and processing demands of Big Data, cognitive, GPU-accelerated, dense-computing, and memory-intensive, high-throughput workloads. When compared to a Dell system, the newly announced IBM S822LC for HPC achieved 2.5 times the performance with hardware and maintenance costs 52% lower! Let’s review the details.

IBM’s LC family servers are designed and cost-optimized for “scale-out” multi-server cloud and cluster configurations, satisfying customer preferences for clouds over expanding on-premises data centers.

IBM’s new lineup of LC models includes:

  1. The S822LC for Big Data
  2. The S822LC for Commercial Computing
  3. The S822LC for High Performance, with a new version of the POWER8 chip and a very high-speed link between the CPU and onboard GPUs.

Other family members include:

  1.  An “entry level” S812LC targeting customers with new memory-intensive, Big Data workloads.
  2.  An S821LC with two POWER8 sockets (processors) in a 1U form factor for dense database, virtualization and container environments.

We created this table to highlight key features of the different models:

Model                       # CPUs   # Sockets   Max Cores   # GPUs   Max Threads
S812LC                         1         1        8 or 10      --         80
S821LC                         2         2       16 or 20       1        160
S822LC for Big Data            2         2          20          2        160
S822LC for Commercial          2         2          20          1        160
S822LC for High Perf.          2         2          20          4        160

Ten-core systems have a 2.92 GHz version of POWER8, while the 8-core systems have a 3.32 GHz chip. All include what IBM calls a 9x5, 3-year warranty with next-day service.
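The “Max threads” column in the table follows directly from POWER8’s support for up to eight hardware threads per core (SMT8). A minimal sketch of the arithmetic:

```python
# POWER8 supports up to 8 hardware threads per core (SMT8).
def max_threads(sockets: int, cores_per_socket: int, smt: int = 8) -> int:
    """Maximum hardware threads for a given configuration."""
    return sockets * cores_per_socket * smt

# The table's "Max Threads" column follows directly:
print(max_threads(1, 10))  # S812LC, 10-core config -> 80
print(max_threads(2, 10))  # two-socket LC models  -> 160
```

This is why every two-socket, 20-core model in the table tops out at 160 threads.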

IBM’s website[1] has additional details on other system characteristics that may be important to existing or planned applications.

Some Key Considerations

Complementing the scale-out systems are scale-up systems, IBM E870 and IBM E880. These may be more appropriate for some applications. We do not discuss those here.

The S822LC for High Performance has characteristics worth mentioning. A water-cooling option allows its turbo high-speed mode to be used extensively. It also uses a new version of the POWER8 chip with a special link to the system’s GPUs, significantly speeding up the connection between the GPU and the CPU. IBM reports the old GPU-to-CPU connection via a PCIe link ran at 32 GB/sec; the new NVLink runs at 80 GB/sec. This leads us to a discussion of system performance.

Performance Background

IBM is very clear it believes x86 has hit a barrier regarding Moore’s law predictions of future performance improvements. Moore’s law describes technology-driven performance enhancements over time; see this note.[2] As long as the law held, price/performance improvements were possible. However, physics is invalidating the law for some existing technology. IBM (and others) believe system architecture changes, not raw hardware speeds, are the more likely source of future performance improvements[3].

Building on this philosophy, IBM is making changes and adding interfaces to Power Systems to drive greater performance. A recent example is CAPI, which we have written about elsewhere; it drives large improvements in applications using in-memory databases. Support for many more threads per core than comparable x86 systems allows more to be done, faster. Adding GPUs with NVLink technology to the S822LC for High Performance is another example.

Of course, such improvements can only benefit applications able to take advantage of them. IBM has identified such applications (emerging and existing) and aims to gain market advantage by providing systems optimized for them in cost, price and performance. The strategy is to design and optimize systems for significant market segments.

Performance Data

IBM has released performance and price performance data matching the latest Power Systems to comparable Intel Systems. Details appear at this URL.[4]

Summarizing IBM’s results, the best-performing POWER8 system, the IBM S822LC for HPC, achieved 2.5 times the performance of a comparable Dell system with 52% lower hardware and maintenance costs. The S822LC for Big Data managed 40% better performance than a comparable HP system with 31% lower hardware and maintenance costs. It appears that with comparable hardware and number of cores, POWER8 systems will outperform Intel-based systems and also hold a price/performance advantage.
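Combining the two ratios IBM cites for the HPC comparison gives a sense of the implied price/performance gap. This is our own arithmetic, not a figure IBM published:

```python
# Rough price/performance ratio implied by IBM's numbers
# (our arithmetic, not an IBM-published figure).
perf_ratio = 2.5       # S822LC for HPC vs. comparable Dell system
cost_ratio = 1 - 0.52  # 52% lower hardware + maintenance cost

price_perf_advantage = perf_ratio / cost_ratio
print(f"~{price_perf_advantage:.1f}x implied price/performance advantage")
```

On those numbers, the implied advantage is roughly 5x, though real-world results will depend heavily on workload and configuration.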

There are caveats about these results. The benchmarks are not industry standard; they are not supported by the TPC or SPEC. IBM has made an effort to be transparent by documenting what it did. In the past, when we investigated IBM benchmarks of this type, we found them honest and accurate. We believe someone could repeat the benchmarks and get the same results. Having said that, any vendor-run benchmark will remain suspect in the minds of some.

The IBM results are useful for making prospective purchasers aware of significant advantages of Power Systems. We recommend potential customers examine Power Systems to determine the benefit available in their own environments.

Other Considerations

Intel holds the dominant position in the generic server market. We believe customers benefit from competition in an open market. We therefore support other options whether ARM-based or from AMD.

POWER8 provides a realistic alternative. We hope it flourishes. We find the growth of the OpenPOWER Foundation to over 260 companies encouraging. Note, we are not saying to blindly choose a non-Intel alternative. We do believe sensible customers should carefully evaluate all options to determine the best architecture for their business. 

IBM Power Systems possess significant advantages for specific application types and for leveraging the new technologies where customers are now investing, e.g. Big Data, analytics, AI/Cognitive Computing (Watson), etc. Take a look, and decide for yourselves.



[2] An interesting article about Moore’s law (actually more an observation than a law) and its current state is in Wikipedia. Our opinion is that it supports IBM’s position. See https://en.wikipedia.org/wiki/Moore%27s_law.