Monday, December 14, 2015


By Audrey Rasmussen, Bill Moran and Rich Ptak

Dell recently announced a definitive agreement to acquire EMC, including its approximate 80% ownership in VMware. Shortly after, EMC and VMware announced the creation of a joint venture in Cloud computing, named Virtustream. Does this mega-merger make sense – for the companies, the industry, customers and shareholders? This report focuses on the potential impact of this acquisition on the companies, the industry (including potential competitors), and customers. We need to qualify this document by saying that this merger is an evolving situation; developments after the completion of this document will affect many of our conclusions.

Read the report at:

Monday, December 7, 2015

POWER8, Linux, and CAPI provide micro-second information processing to Algo-Logic’s Tick-to-Trade (T2T) clients

By Rich Ptak and Bill Moran

Rapid processing of data improves decision-making in trading, research, and operations, benefiting enterprises and consumers. Computer servers accelerated with Field Programmable Gate Arrays[1] (FPGAs) operate at the greatest speeds to collect, analyze, and act on data. As data volumes skyrocket, processing speed becomes critically important.

Algo-Logic[2] leverages the speed of FPGAs to achieve the lowest possible trading latency. Their clients have access to data in 1.5 millionths of a second, enabling them to make better trades. Algo-Logic Systems’ CAPI-enabled Order Book is a part of a complete Tick-to-Trade (T2T) System[3] for market makers, hedge funds, and latency-sensitive trading firms. The exchange data feed is instantly processed by an FPGA. The results go to the shared memory of an IBM POWER8 server equipped with the IBM CAPI[4] card and specialized FPGA technology. Then, in less than 1.5 microseconds, it updates an order book of transactions (buy/sell/quantity).

Stock trading generates an enormous data flow about the price and number of shares available. Regulated exchanges, such as NASDAQ, provide a real-time feed of market data to trading systems so that humans and automated trading systems can place competitive bids to buy and sell equities.  By monitoring level 3 tick data and generating a level 2 order book, traders[5] can precisely track the number of shares available at each price level. Firms using Algo-Logic’s CAPI-enabled Order Book benefit from the split-second differences in understanding and interpreting the data[6] from the stock exchange feed.
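The level-3-to-level-2 aggregation described above can be sketched in software. The toy Python below is purely illustrative (the tick format and field names are invented; Algo-Logic performs this work in FPGA hardware at microsecond latency):

```python
from collections import defaultdict

def build_level2_book(ticks):
    """Aggregate level-3 ticks (individual order events) into a level-2
    order book: total shares available at each price level, per side."""
    book = {"bid": defaultdict(int), "ask": defaultdict(int)}
    for side, price, qty in ticks:
        book[side][price] += qty          # add or remove shares at this price
        if book[side][price] <= 0:
            del book[side][price]         # price level fully consumed
    return book

ticks = [
    ("bid", 100.25, 300),   # new buy order: 300 shares @ 100.25
    ("bid", 100.25, 200),   # another buy at the same price level
    ("ask", 100.50, 400),   # sell order: 400 shares @ 100.50
    ("bid", 100.25, -100),  # partial cancel/fill at 100.25
]
book = build_level2_book(ticks)
print(dict(book["bid"]))  # {100.25: 400}
print(dict(book["ask"]))  # {100.5: 400}
```

A software loop like this runs in microseconds at best; the point of the CAPI-attached FPGA is to do the equivalent update deterministically in under 1.5 microseconds, feed-to-book.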

Algo-Logic released their CAPI-enabled Order Book in March 2015. Multiple customers now use it in projects that include accelerated network protocol parsing, financial surveillance systems, and algorithmic trading, with many proof-of-concept projects underway.

Algo-Logic found success with Linux, POWER8, and CAPI. We expect to write more about Algo-Logic and other OpenPOWER Foundation[7] partners as they continue to develop solutions and POWER8-Linux systems demonstrate their ability to handle big data at the speeds developers, architects, and users need.

[2] Located in Silicon Valley; see:
[3] See also: “CAPI Enabled Order Book Running on IBM® POWER8™ Server” at:
[5] We oversimplify stock market operations for clarity. For more details visit the footnotes.
[6] This is High-Frequency Trading (HFT); for more information, see:

Tuesday, December 1, 2015

IBM Watson + Power Systems mainstream Cognitive Computing

By Rich Ptak

Five years ago, powered by IBM POWER7 servers, a master-bedroom-sized Watson broke into public consciousness, making headlines as an undefeated champion against past Jeopardy winners. Since then, IBM has put "Watson to work" with the latest POWER8 technology, OpenPOWER Foundation partners and multiple support centers. Read on for what IBM is "mainstreaming" and our take on it.

Thursday, November 19, 2015

OpenPOWER's Order-of-Magnitude Performance Improvements

By Rich Ptak and Bill Moran

Performance improvements come in different sizes. Often vendors announce a 20% or 30% performance improvement along with an increase in the price/performance of their product or technology. Much more rarely, a vendor delivers an order-of-magnitude improvement. An order-of-magnitude improvement equates to a performance increase of a factor of 10. Improvements on this scale underlie recent[1] technology acceleration announcements[2] by IBM and other OpenPOWER Foundation members.

Why are tenfold performance improvements especially important? Consider this transportation example of what an order-of-magnitude change means. Let’s say a runner can sustain a pace of 10 miles per hour. An order-of-magnitude change raises that to 100 miles per hour; many cars can achieve and maintain that speed. (We aren’t recommending that!) Another order-of-magnitude improvement in speed moves us to a jet airplane at 1,000 miles per hour. One more increase of this magnitude reaches a rocket at 10,000 mph.

Notice that each magnitude change not only increases speed but dramatically transforms the whole landscape. Moving from the jet to the rocket allows escape from earth’s atmosphere to go to the moon. This demonstrates the potential importance of order-of-magnitude improvements. The OpenPOWER announcements detail multiple such improvements; let’s examine a few.

One example comes from Baylor College of Medicine and Rice University announcing breakthrough research in DNA structuring[3]. The discoveries were made possible by an order-of-magnitude improvement in processor performance. As reported by Erez Lieberman Aiden, senior author of the research paper, “the discoveries were possible, in part, because of Rice’s new PowerOmics supercomputer, which allowed his team to analyze more 3-D folding data than was previously possible.” A high-performance computer, an IBM POWER8 system customized with a cluster of NVIDIA graphical processing units “allowed Aiden’s group to run analyses in a few hours that would previously have taken several days or even weeks.”

Another example involves IBM’s Watson and NVIDIA’s Tesla K80 GPU system[4]. Watson[5], of course, is IBM’s leading cognitive computing offering, which runs on IBM OpenPOWER servers. NVIDIA’s new system allows Watson’s Retrieve and Rank API to work at 1.7x its normal speed. Wait a minute, you might say: where is the order-of-magnitude change here? 1.7x is impressive, but it’s no order-of-magnitude change.

Almost as an afterthought, IBM mentions that the GPU acceleration also increases Watson’s processing power to 10x its former maximum. So there we have another tenfold improvement in performance arrived at by marrying other technologies to Power.

Finally, Louisiana State University published a white paper[6] stating that Delta, its OpenPOWER-based supercomputer, accelerates Genomics Analysis by increasing performance over their previous Intel-based servers by 7.5x to 9x. Not quite an order of magnitude, but close.

The announcement includes more examples demonstrating the potential of the OpenPOWER philosophy, the OpenPOWER Foundation and Power Systems to achieve dramatic results across multiple industries. The fundamentals of the POWER architecture lead us to anticipate continued improvements in Big Data processing. Such developments will accelerate the growth of the Internet of Things. They will also drive fundamental changes in the types of processing possible, like those happening with Cognitive Computing.

Tuesday, November 17, 2015

Do Oracle's latest SPARC comparisons reveal more than intended?

by Bill Moran and Rich Ptak

At Open World 2015, Oracle announced its latest version of the SPARC microprocessor. In this blog, we focus on Oracle's performance claims versus others'. We know that all vendors like to highlight their system’s performance advantages over competitors. Oracle is no different. Typically, claims are based on benchmarks either tailored to a specific workload or standardized. Standardized benchmarks have more or less rigidly enforced guidelines. The Oracle announcement claims advantages based on standardized benchmarks. Oracle (like any vendor) makes every effort to make their system look as good as possible. That is to be expected. We found no evidence of cheating. We do think their results call for commentary.

A few words about benchmark testing. Some years ago, there was a benchmark expert named Jack; he held a PhD in mathematics. He wanted to bet $100 that he could write a benchmark proving any system better than any other system. It didn’t matter which system was faster nor how different they were. He could ‘fix’ the winner. We didn’t doubt he could do that and didn’t bet. The point is that if one completely controls the benchmark, one controls the result. That is why industry-standard benchmarks, e.g. SPEC[1] and TPC[2], exist. However, some have more restrictions; e.g. TPC requires audits by certifiers and dictates how price/performance is calculated. This makes them very expensive and less likely to be run. In between TPC and Jack’s creation, SPEC’s less onerous rules make a good compromise. Care still needs to be taken when interpreting results.

Benchmark 1 SPECjEnterprise:
Oracle’s first performance point is based on results from the SPECjEnterprise2010[3] test. Table 1 is what the Oracle press release[4] presents. We added the last column.
System Tested | Date of Test | SPECjEnterprise2010 EjOPS | Status
[The table’s values did not survive conversion; its rows covered an Oracle SPARC system, the IBM Power S824, and the IBM x3650 M5.]
Table 1 SPECjEnterprise2010 results
Oracle did include the two best “IBM” results. However, the test date shows that the IBM Power result is 16 months old. Does this make any difference? We don’t know. But it is quite conceivable that if the test were run with a newer system, the results would be better. The IBM x3650 result is newer, but that system line was sold to Lenovo, making the comparison irrelevant.

Other points to consider when evaluating the data include:
  1. SPEC benchmarks have no rules controlling calculation of price/performance, nor are system prices provided. Therefore, it is impossible to calculate the systems’ price/performance. Comparing a $100K system with a $500K one makes no sense without knowing the relative costs.
  2. For a generic benchmark like SPEC, it isn’t known how closely it reproduces or reflects real workload performance. There is no guarantee that the advantages hold in production environments. A benchmark with system A running faster than system B does not assure A outperforms B running a real workload.
  3. The “Status” column reflects brand-new Oracle security features announced at Open World and described in the press release (footnote 5). Ellison also discussed them in his Open World kickoff talk[6]. Oracle claims these new security features are low cost. The results include runs with the features turned on (“secure”) and turned off (“unsecure”). Somewhat arbitrarily, the IBM/Lenovo systems are labeled “unsecure”. It isn’t surprising that IBM hasn’t implemented security features Oracle just announced, but that is no indication the systems are insecure. We disagree with labeling them as such.
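To make point 1 concrete, here is a hypothetical price/performance calculation in Python; the prices and EjOPS figures below are invented for illustration, precisely because SPEC publishes neither:

```python
def price_performance(price_usd, ejops):
    """Dollars per EjOPS: lower is better."""
    return price_usd / ejops

# Hypothetical numbers -- SPEC reports neither system prices nor
# price/performance, which is exactly the gap discussed above.
system_a = price_performance(100_000, 20_000)   # $5 per EjOPS
system_b = price_performance(500_000, 25_000)   # $20 per EjOPS

# The "faster" system B loses badly on price/performance.
print(system_a, system_b)
```

In other words, a raw EjOPS win tells a buyer nothing until the cost side of the ratio is known.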

One final observation: browsing SPECjEnterprise benchmark results, one could conclude that Oracle’s performance has degraded over the past several years. Why? The most recent SPECjEnterprise2010 result in Table 1 is 25,818 EjOPS, yet on March 26, 2013 Oracle reported 57,422 EjOPS! Conclusion: performance degraded by some 50%! That makes no sense to us, but it is what happens when context is ignored and benchmark results are taken literally. We’ll leave it to Oracle to explain this one.
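The arithmetic behind that apparent "degradation" is easy to check, using the two published EjOPS figures:

```python
ejops_2013 = 57_422    # Oracle result reported March 26, 2013
ejops_latest = 25_818  # most recent Oracle result cited above

# Naive, context-free comparison of the two submissions
drop = 1 - ejops_latest / ejops_2013
print(f"{drop:.0%}")  # 55%
```

The two submissions almost certainly used different configurations, which is exactly why such context-free comparisons mislead.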

Another Benchmark: Hadoop Performance

Table 2 shows another Oracle benchmark in the press release.
System Tested | Configuration | Throughput
Oracle SPARC T7-4 | 4 processors (security off) | 32.5 GB/min per chip
Oracle SPARC T7-4 | 4 processors (security on) | 29.1 GB/min per chip
IBM POWER8 S822L | 8-node cluster, 3.5 GHz, 6-core | 7.5 GB/min per chip
Table 2 Hadoop Terasort Benchmark
The Hadoop Terasort benchmark accompanies the Apache Hadoop distribution[7]. An examination of the results reveals both good news and bad news for Oracle. The good news is that the result seems to show Oracle outperforming IBM by a factor of more than 4. But no date is given for this result. Were both tests run at the same time? Or is the IBM result, once again, older? As discussed, it makes a difference. Other context data is missing as well. Without system costs, there is no way to judge how realistic the comparisons are. The results have a “gee whiz” factor but lack substance.

The bad news is a bit more subtle. Elsewhere, Oracle claims that implementing its security features carries very low cost. This result raises questions, as performance appears to degrade by about 10% with security turned on. Finally, the critique about labeling the IBM system “unsecure” still holds.
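A quick Python check of the per-chip figures in the press release illustrates both points (pairing the two Oracle runs with security off/on is our reading of the results):

```python
oracle_secure_off = 32.5   # GB/min per chip, Oracle SPARC T7-4
oracle_secure_on = 29.1    # GB/min per chip, same system, security enabled
ibm_power8 = 7.5           # GB/min per chip, IBM POWER8 S822L

# Headline per-chip speedup Oracle claims over the IBM result
speedup = oracle_secure_off / ibm_power8
# Cost of turning Oracle's security features on
overhead = 1 - oracle_secure_on / oracle_secure_off

print(f"speedup ~{speedup:.1f}x, security overhead ~{overhead:.0%}")
```

So the same table that delivers the headline "4x" also quietly prices the "low-cost" security features at roughly a tenth of throughput.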

Another Benchmark: SAP performance

Perhaps the most useful commercial benchmark is the SAP benchmark. Oracle submitted a result for this benchmark as recently as last month (October). Table 3[8] shows results for the latest Oracle and IBM submissions.
[Table 3 compared the latest Oracle (Solaris 11) and IBM (AIX 7.1) submissions; the SAPS values did not survive conversion.]
Table 3 SAP Benchmark Results
SAPS is the key performance metric; that it is closely related to a real SAP workload[9] adds further credibility. We can’t claim it proves that IBM does a better job than Oracle running all SAP workloads. However, it is an additional data point. More data, as described earlier, would provide better context for a decision.

One more comment: during his Open World keynote talk[10], Larry Ellison strongly emphasized that Oracle never sees SAP or IBM in competition for business in the Cloud. He repeated this multiple times. The Oracle PR department needs to know about this. The Wall Street Journal of 11/5/2015 carried a front-page ad[11] by Oracle detailing performance advantages versus SAP (in the cloud). The claim is that, in the cloud, the Oracle database runs twice as fast as SAP HANA. (NOTE: the ad appears only in the print version of the WSJ.) If Larry is accurate about never seeing SAP in competitive cloud situations, the ad wastes money.

However, Oracle has written a white paper[12] to document this benchmark. Note the legal disclaimer at the top of the white paper. Oracle claims in the document that SAP has tried to conceal HANA performance, so Oracle is running the benchmark to clear up the issue. We think this situation is a minor version of the “benchmark wars” of the past. Frankly, we have neither the time nor the space to sort the whole issue out here. However, it does reinforce our point about the care needed in interpreting benchmark results.

The Final Word

We’ve pointed out some concerns with Oracle’s claims, including highlighting some contradictory claims regarding their competitors and competition. In fairness, Oracle usually does just present a benchmark result, letting readers draw their own conclusions. (Okay, it does nudge them toward a conclusion.)

We’ve tried to present a bit more context around Oracle’s benchmark results. We’ve also pointed out that benchmark data must be treated with care. Clearly, benchmarks using real production workloads (or a subset) running on multiple systems, with configuration and cost details included, are most credible. Other comparisons can be significantly cheaper but should be trusted less. Be wary of unsubstantiated, poorly documented claims, whatever the source. Better decisions will result.

One final word: we recently received (November 16, 2015) a press release that included product information and performance claims. It discusses OpenPOWER Foundation member activities with IBM’s Power Systems. It has great information. It also provides great examples germane to this paper. For instance, the last sub-paragraph describes an OpenPOWER server providing Louisiana State University a 7.5x to 9x performance increase over a competitor’s server doing Genomics Analysis. It is footnoted with server details and a link to an LSU white paper with additional details about the systems and benchmarking. We think you’ll appreciate the difference.

[5] Oracle rates just-announced ‘security’ features available only on their systems. Obviously, a 6-month-old IBM or Lenovo system wouldn’t include these features. See comments later in the text.
[10] You will find Ellison’s talk at
[11] Oracle provides a URL in the ad – corrected to the following: The copy of the WSJ ad appears on the right side of the page.

Monday, November 2, 2015

Compuware’s Topaz™ Runtime Visualizer – on the way to DevOps Nirvana!

By Rich Ptak

Compuware recently announced new features for its Topaz for Program Analysis product. Agile processes, hard work and commitment allowed a key component, the Topaz Runtime Visualizer, to go from conception to product reality in just 84 days! It demonstrates the mainframe’s agility, relevance and ability to move at the speed of a digitized market.

Read what we have to say about this latest release from Compuware on our webpage at:

Friday, October 30, 2015

BMC’s 2015 mainframe survey: Growth Opportunities for the Mainframe

By Bill Moran and Rich Ptak

The recently published results of BMC’s 2015 mainframe survey[1] provide data supporting our views on existing mainframe (MF) growth opportunities. Over the 10 years BMC has conducted this survey, changes in markets, strategies and products meant some questions were dropped while others were added. The inconsistency prevents a fully detailed history of market evolution. However, the survey still provides key insights into an evolving market. We encourage a review of the complete survey. Here’s a quick sample of its results.

Figure 1 (BMC Chart 3) identifies the reasons customers give for growing MF usage. (These results only include customers increasing their usage.) The largest group (56%) gave security strengths as the key factor for increasing usage. Second most important (55%) were the MF’s availability advantages. Taken together, these represent an important advantage and market opportunity.

Figure 1 Top Reasons for Capacity Growth

Security is hugely important. Target, Home Depot and Sony, as well as others, have suffered intrusions. The results have been disastrous. Target’s CEO lost his job. Sony and its executives were profoundly embarrassed when hackers published emails with comments crudely critical of some stars. We believe that if the boards of directors of these companies had known they could have avoided this situation by moving key applications to a mainframe, they would have jumped at the chance. The risks rise dramatically as more users access mainframe-backed accounts and files via mobile devices. As this practice expands across industries, platform availability is critical. Customers won't tolerate long response times or unavailable servers; a company must respond quickly, or users will abandon them.

Next, two more reasons identified for increasing usage were the mainframe’s superior performance as a centralized data server (48%) and its transaction throughput capabilities (45%). Anyone considering creating a centralized data server should evaluate the mainframe. Respondents view the mainframe as a heavy-duty data server as well as the platform to handle very high transaction rates.

Figure 2 More Reasons for Capacity Growth

Figure 2 (BMC Chart 5) identifies more advantages. Some 24% of respondents find the cost of mainframe alternatives too high. While not true for every workload, for the right ones the mainframe proves to be very cost-effective. Mainframe vendors should do more to promote customer experiences that refute stories of overly expensive mainframes by identifying specific mainframe-friendly workloads.

Figure 3 Most Important Business/IT Alignment Issue

Figure 3 (BMC Chart 12) documents more opportunities in Business/IT alignment, a widely felt need. Some 48% say that IT must respond faster to business requests. As IT becomes more integrated into enterprise activities, business unit demands for speedier service grow. Vendors helping IT to speed service creation and delivery have a ready market waiting.

Figure 4 New Java Program Development Driving Demand

Vendors offering mainframe Java development tools have a growing market. Figure 4 (BMC Chart 15) reveals that some 79% of respondents identify new development of Java apps as driving mainframe growth. COBOL’s mainframe dominance appears over, though the existing inventory of programs won’t disappear quickly.

Figure 5 Linux used in Production Environments

Figure 5 (BMC Chart 17) shows that zLinux usage has more than doubled since 2006. Nearly half of respondents (48%) run Linux in production. IBM’s Linux strategy appears to have paid off. These figures don't include the impact of IBM's recently announced LinuxONE mainframes; hopefully, next year's survey will.


BMC, like many others, uses the survey to gain insight into the mainframe market as well as into the issues, problems, attitudes and concerns of customers. BMC's decision to publicly share a significant amount of the survey results is appreciated. We have a few suggestions for BMC's next survey: it would be interesting to include questions about other mainframe initiatives and products. For instance: What do Linux users think of IBM’s zLinux push? Are developers using specialized Linux mainframe clouds? And what is the prevailing opinion of the zOS mainframe development environment (IBM’s zPDT[2]) running on Linux x86 servers?

We have only hit a few of the highlights of BMC’s survey. Again, we recommend review of the complete study[3]. Anyone interested in the mainframe marketplace will find many of the results to be quite fascinating. Congratulations to BMC for its continuing investment in this survey. Other mainframe vendors would do well to study the survey carefully.