Tuesday, October 28, 2014

Apica Systems - DevOps: Building Performance-Ready Hybrid Applications

Today, users expect applications to be fresh, fast, and facile. “Fresh” requires more frequent software updates and refreshed content. “Fast” means that applications have to perform well or users move on to other applications/providers that do. And “facile” translates to easy and user-friendly, including minimizing frustration from underperforming or non-functional applications.
These demanding user expectations place a bigger burden on DevOps teams, which must ensure that applications are operational and performance-ready when released. As a result, DevOps teams are integrating performance testing into their development processes.[1] Following are two brief case studies of DevOps teams (from Activision and Virgin America), describing how they used performance testing while developing their software.

Case Study: Activision

To ensure performance of the application on launch day, Activision engaged with Apica to help design a custom SDLC platform and additional testing agents for iOS, Android, and web browsers. Apica also helped Activision perform regular load tests and performance reviews on the companion app during development.

The Results:
Regular load testing caught performance issues while the apps were being developed, which saved on troubleshooting time and provided crucial insight into many code-based errors and other bugs.
As a result, application developers were able to correct performance issues during the development cycle, instead of waiting until the test phase or after deploying into production.
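The kind of repeatable, in-development load check these results describe can be sketched in a few lines of Python. This is only an illustration of the idea (the user counts, latency budget, and stand-in request function are hypothetical); commercial tools such as Apica's drive far larger, geographically distributed loads:

```python
import time
import statistics
from concurrent.futures import ThreadPoolExecutor

def load_test(request_fn, users=20, requests_per_user=10):
    """Call request_fn from many concurrent 'users' and report latency stats."""
    def one_user(_):
        latencies = []
        for _ in range(requests_per_user):
            start = time.perf_counter()
            request_fn()                      # e.g. an HTTP GET against the app
            latencies.append(time.perf_counter() - start)
        return latencies

    with ThreadPoolExecutor(max_workers=users) as pool:
        samples = sorted(t for batch in pool.map(one_user, range(users))
                         for t in batch)

    return {
        "requests": len(samples),
        "median_s": statistics.median(samples),
        "p95_s": samples[int(len(samples) * 0.95) - 1],
    }

# Stand-in for a real request (swap in urllib.request.urlopen on a test URL).
report = load_test(lambda: time.sleep(0.001))
assert report["p95_s"] < 0.5, "95th-percentile latency budget exceeded"
print(report["requests"])
```

Run on every build, a check like this flags performance regressions during development rather than in the test phase or in production, which is exactly the practice the case study describes.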

Case Study: Virgin America

Virgin America, a travel innovator, needed a new, high-performance website to meet (and exceed) customer demand. When designing their new web storefront, Virgin developers wanted to create a faster, smoother, more intuitive, and more enjoyable online booking experience. Virgin America needed a performance testing solution that could ensure performance lived up to their new and improved look, no matter the load.
Apica worked with Virgin throughout the development cycle to pinpoint performance bottlenecks, stress test the booking purchase flow, and provide proactive capacity planning. After working with Apica, Virgin was able to increase overall site performance by as much as 10 times.

The Final Word

As teams rush new hybrid applications to their customers, performance testing is not the place to cut corners; there is too much at stake. Understanding the performance characteristics of each type of environment is crucial, as is understanding what happens to the application when it reaches capacity, so that outage-avoidance measures can be taken preemptively.

To read the full paper, click on this link:

Monday, October 20, 2014

IBM exits chip business to concentrate on systems and solutions

Surprising few and satisfying many, IBM has announced the signing of a definitive agreement with GLOBALFOUNDRIES for the latter to acquire IBM's global commercial semiconductor technology business, including its ASIC and specialty foundry businesses, manufacturing, and related operations and sales. The deal includes all intellectual property, world-class technologists, and technologies related to IBM Microelectronics. The move has, in fact, been anticipated and rumored for some time.

Subject to completion and approval of all applicable regulatory reviews, GLOBALFOUNDRIES will become IBM's exclusive provider for 22nm, 14nm and 10nm semiconductors for the next 10 years. The deal calls for IBM to pay GLOBALFOUNDRIES a cash consideration of $1.5B over the next 3 years.

IBM employees and facilities at Fishkill, NY and Essex Junction, VT will transition to GLOBALFOUNDRIES. Semiconductor server group employees, as well as those performing semiconductor system assembly, test, and repair at facilities in Albany, NY and Bromont, Canada, will remain with IBM.

The agreement is structured to provide maximum benefit to the employees and business functioning of both companies. For example, executives in both companies as well as state and local politicians have been working together to protect jobs and investments in the region. GLOBALFOUNDRIES plans close to $10B in capital expenditures in 2014-2015, primarily in New York state.

This marks the final step in IBM's gradual exit from what it recognizes as low-margin, high-volume businesses. IBM recognizes its business strength to be in value-added, service-intensive businesses. The same recognition motivated the earlier sales of the System x and PC businesses to Lenovo. Arguably, these moves could have been implemented earlier, but IBM deserves credit for decisively moving forward to make a clean exit.

Without access to the details of the underlying financials, I suspect this decision was made somewhat easier once internal analysis revealed that end-to-end supply chain integration was neither financially viable nor operationally mandatory. IBM identified a weakness in its operations, evaluated alternatives, and selected what it saw as the best way forward.

IBM will be able to concentrate on the high-end and mid-range systems markets with mainframes and Power-based servers and solutions. Freed from day-to-day responsibility for semiconductor manufacturing, IBM will focus where it has proven ability to drive profitable business. There are some questions, though: Isn’t there a risk to the future of their servers if they cannot control the basic chip technology? Future platform capabilities and innovation are closely tied to the underlying chip. Won’t IBM be at a disadvantage if they don’t control the chip?

First, IBM will still develop its POWER and System z processor chips; GLOBALFOUNDRIES will manufacture those chips for IBM. IBM thus maintains control of chip design, which means it will continue to be able to optimize the chip architectures for its systems.

IBM competitors like HP and Oracle have not manufactured their own chips for a long time. In fact, today HP is almost completely reliant on Intel-sourced commodity chips for its servers, meaning it has almost no input into the design of those chips. At the very least, it’s clear that control over manufacturing isn’t all that critical. IBM has a nearly decade-long close relationship with GLOBALFOUNDRIES, which continues even after GLOBALFOUNDRIES becomes the largest semiconductor manufacturing employer in the Northeast.

IBM confronts other aspects of this challenge in multiple ways. There are the earlier-mentioned plans for continued investment in basic research, and plans for continuing the close collaboration with GLOBALFOUNDRIES as a partner in chip supply and design. In addition, IBM and GLOBALFOUNDRIES remain active partners in several collaborative semiconductor research efforts with the Colleges of Nanoscale Science and Engineering (CNSE) at SUNY (State University of New York) Polytechnic Institute in Albany, NY. Thus, IBM has ensured that it remains close to, and influential in, the basic platforms its servers and mainframes depend upon.

IBM will continue to influence semiconductor chip technology through its ongoing R&D investment in semiconductors ($3B over 5 years), which complements ongoing leading-edge research in cloud, mobile, big data analytics, and secure transaction-optimized systems. Silicon will remain at the heart of basic chip technology for at least a decade, possibly more. IBM has publicly outlined multiple areas of research for the post-silicon world (discussed in our soon-to-be-published blog about IBM’s Enterprise 2014 event).

It’s our opinion that IBM has made a difficult but necessary decision. This business lost $700 million last year. Often a decision must be made among alternatives that are not clearly bad, good, better, or best; choices can be all good, all bad, or a combination. In the worst case, the only decision available is to choose the least damaging from a collection of bad options and then live with the consequences. The ability to make that decision and live with it separates the true leader from imitators. I don’t think this was one of those worst-case decisions. Congratulations to IBM’s management for making a decision, as IBM continues to reshape itself to grow in the changing IT landscape.

Thursday, October 9, 2014

BMC: Automating/Optimizing Mainframe Operations and License Management

One of the real strengths of applied computer technology lies in its ability to automate repetitive, intensively detailed, multi-factor operations. This is especially true when the tasks follow a highly structured set of rules that, while involving many factors, are applied in a consistent manner.
Software licensing costs have long been a major bugaboo of mainframe operations. For a variety of aging but understandable reasons, mining the costs of running software on mainframes for savings has been a risky task: not just because it is incredibly involved, error-prone, and time-consuming, but also because past efforts have sometimes significantly upset clients (when service performance degraded) or failed to realize savings of sufficient magnitude. As a result, the task has frequently been avoided or given only cursory attention.
There have been multiple attempts at manual processes to manage costs and optimize operations. These were typically resource-intensive and, while effective, too often tied up scarce staff whose efforts could be more profitably applied to other tasks. BMC, IBM, CA, and other vendors have offered a variety of products that poked at the challenge of managing MLC (IBM’s Monthly License Charge) and associated mainframe software licensing costs.
We’ve examined various solutions offered by BMC and found them quite interesting. Successful earlier products (such as BMC Application Accelerator for IMS™[1], BMC TrueSight Capacity Optimization for Mainframes[2], and BMC Cost Analyzer for zEnterprise®[3]) lower the cost of software and raise and optimize the level of mainframe and LPAR utilization in the data center while protecting SLAs and critical services. BMC describes potential cost and operations savings ranging from 2% to 30% from using these products.
That is why we are very pleased to hear about BMC’s most recent solution offerings, which complement these existing products and represent a comprehensive, integrated approach to mainframe software cost and operations management. Building on today’s operating environment, with more data, abundant processing power, and an enhanced understanding of and sophisticated capabilities for integrated, end-to-end data analytics in policy-based operations, BMC is offering more sophisticated, automated programs to tackle the problem of MLC management and mainframe systems optimization.
BMC recently announced two products: one (BMC Intelligent Capping for zEnterprise®[4]) directed at dynamic, intelligent capacity management for workloads that protects critical service SLAs; the second (BMC Subsystem Optimizer for zEnterprise®[5]) optimizes LPAR subsystem operations by removing constraints on the placement of DB2, IMS, and CICS. Here’s a thumbnail of each product’s unique benefits.
BMC Intelligent Capping (iCap) uniquely:
  • Monitors and dynamically manages defined capacity settings across LPARs and Capacity Groups
  • Uses customer policies and workload priorities for decision making 
  • Recaptures unused capacity – matching increases in caps with decreases
  • Helps customers implement changes that are identified in BMC Cost Analyzer reports.
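The core balancing idea in that list — raise a constrained LPAR's cap only by recapturing an equal amount of headroom elsewhere, so total defined capacity (and the licensing exposure tied to it) never grows — can be sketched in a few lines of Python. This is our own toy illustration, not BMC's algorithm; the LPAR names, MSU figures, and headroom threshold are hypothetical, and real iCap decisions also weigh customer policies and workload priorities:

```python
def rebalance_caps(lpars, headroom=2):
    """Shift defined-capacity MSUs from the most under-used LPAR to the most
    constrained one, keeping each donor at least `headroom` MSUs of slack.
    lpars: dicts with 'name', 'cap', and 'used' in MSUs (all hypothetical)."""
    constrained = min(lpars, key=lambda l: l["cap"] - l["used"])
    donor = max(lpars, key=lambda l: l["cap"] - l["used"])
    need = max(0, headroom - (constrained["cap"] - constrained["used"]))
    spare = max(0, (donor["cap"] - donor["used"]) - headroom)
    transfer = min(need, spare)           # match the increase with a decrease
    donor["cap"] -= transfer
    constrained["cap"] += transfer
    return transfer

lpars = [
    {"name": "PROD", "cap": 100, "used": 99},   # running close to its cap
    {"name": "TEST", "cap": 100, "used": 40},   # lots of unused headroom
]
moved = rebalance_caps(lpars)
print(moved, sum(l["cap"] for l in lpars))      # group total is unchanged
```

Because the increase is always matched by a decrease, the sum of the caps stays constant — which is what keeps the peak capacity that drives MLC billing from creeping upward.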
BMC Subsystem Optimizer for zEnterprise (Subzero) uniquely:
  • Overcomes the technical requirement that subsystems must reside on the same LPAR
  • Requires no application changes
  • Operates transparently with DB2, IMS, CICS, and applications
  • Provides failover capability for DB2 and IMS (nonexistent today)
  • Provides logical next step for taking action based on BMC Cost Analyzer analysis.
Today’s competitive pressures tend to squeeze margins, increasing the value of efforts that optimize costs. Add incredibly aggressive and innovative competitors willing to buy entry into new markets, and a managerial (financial and operational) intolerance for over-engineered, under-utilized infrastructure, and IT is heavily squeezed. These solutions deserve the attention of mainframe data center operations managers and financial managers who are seriously interested in improving their operational bottom line. You can also see them in action October 13-16 at BMC’s Engage 2014[6] event in Orlando, FL.


Monday, October 6, 2014

HP splits the company

HP splits

HP announced that by the end of 2015 it plans to split the company in two. One of the companies, “Hewlett-Packard Enterprise,” will focus on business requirements and will contain the server business, HP’s cloud offering (Helion), HP’s enterprise services, and HP Financial Services. The other company, “HP Inc.,” will contain the printer and PC businesses. HP Financial Services will continue to provide financing to both companies.

Several key points

HP ran a conference call to discuss this announcement and answer questions, and several nuances became clearer during the call. One of the reasons that Meg Whitman, HP’s CEO, had decided against splitting off the PC business several years ago was that the cost advantage from HP’s unified supply chain would be greatly reduced. That argument would seem to apply to the current split; after all, one of the justifications IBM gave for selling its x86 server business was that the Lenovo supply chain would give it a cost advantage. Is HP throwing away the advantage of its unified supply chain? Whitman says she expects the two companies to arrange joint agreements to cover this issue. In other words, the server folks would be able to leverage the component purchases of the PC company to get the most favorable prices.

The Wall Street Journal broke the story only yesterday (Sunday, 10/5), although rumors were rife toward the end of last week. This points to very good management discipline on HP’s part, since they were able to keep this blockbuster story secret until almost the last minute. The dead story about HP’s negotiations with EMC did leak; perhaps that was part of the management plan to conceal the real story? So far, it appears that HP has planned properly.

Meg Whitman stated that there would be three transition offices set up, one in HP corporate and one for each of the two new companies, keeping the transition teams separate from the team that is running the day-to-day business. HP wants to ensure that the 2015 results are good so that the two new companies get off to a good start.

Finally, HP noted that it has suspended its stock repurchase plan because it is in possession of non-public information that requires the suspension. When questioned, executives said that M&A activities were involved. It seems they believe the issue will be resolved by the end of the year, but it is intriguing to speculate on what this “non-public” information might be.

Customer Impacts

We do not see any immediate customer impacts from this announcement. However, it is ironic that HP has pointed out the potential disruption for customers from IBM’s sale of its x86 server business to Lenovo; now HP will have to justify the far larger transition it will go through in splitting itself in two. In 2016, customers who were buying products across HP will have to adjust to dealing with two companies. Both will still be named HP, but as time goes by they will become different companies.

HP management thinks that because the two companies will have the same name (although the consumer company will carry the HP logo), the branding issue is solved. Branding would have been a huge cost if HP had spun off the PC business several years ago into a new company with a different name. We are not so sure the issue is resolved: in the long run, having two companies with essentially the same name may cause some confusion in the marketplace.

Competitive Factors

It was not mentioned in the call, but we think HP has decided it needs to position itself to compete with the new Lenovo. Meg Whitman emphasized the value of agility in a fast-changing marketplace. We think HP considered how it will be competitively positioned against Lenovo going forward, and evidently concluded that two companies competing with Lenovo instead of one would be an advantage. Time will tell, but it’s clearly important for HP to make the right call here.

We have heard from some people in IBM that HP and IBM are not going to be competitors going forward. We think that is a serious mistake by IBM. The HP enterprise company is going to double down on the Cloud and other areas such as enterprise services where they will be directly competing with IBM for the customer’s business. It would be an error for IBM to underrate them.


HP has announced a major restructuring of the company. We think HP management carefully considered this change; it’s too early to judge how successful the new structure will be. However, Meg Whitman has built a track record of successful change at HP, and we would not bet against her. It was notable that the company’s CFO announced during the briefing that they are going forward with plans to eliminate 14,000 more jobs: they have hit their earlier target of 36,000 and have now identified the additional positions. They plan to reinvest the savings in sales and R&D, and believe these investments will strengthen the two new companies as they go forward.

Wednesday, October 1, 2014

HP's new Gen9 x86 servers

HP's Gen9 servers

As HP announced their newest line of ProLiant servers, they made the point that they created the x86 server business 25 years ago. In the same way, this announcement sets the stage for the next 25 years. An ambitious goal, which for most other companies one might dismiss as pure marketing hype. However, as the leading supplier of x86 servers today one must take HP’s claim seriously. Clearly, HP will be moving in this new direction for many years to come. We waited a bit to do this write-up to give HP time to get its product offerings in order and to update its web site.[1]

It is true that server architectures need updating. In connection with its last Moonshot announcement, HP described future server performance requirements: even merely linear growth in server capability would demand unsustainable power and space using current architectures. Moonshot represents the first step at addressing these issues (and a good one, in our view). We expect customers welcomed HP's redefined server.

The next evolutionary step is the Gen9. Here’s how to get more detailed information.
Searching the HP enterprise (not consumer) web site for “Gen9” servers eventually leads to the ProLiant Server page; here’s a link[2]. Down and to the right is a tab for “Products and Services”; select it to see a list of current products, such as Blades, Rack servers, and Tower servers. Selecting “HP ProLiant Rack servers” displays “Shop for Rack Servers”; clicking on it takes you to a page[3] that lists the rack servers HP is currently offering.

There are four servers marked as “New”; all are Gen9 servers. Other servers on the same page are Gen8. Checking the Compare box (below each server) allows you to see the differences between any of the Gen9 and Gen8 servers.

Picking one of the Gen9 servers and clicking “Learn More” takes you to a page with additional models. We picked the DL380 Gen9 and then clicked on the “Select a Model” tab, which shows 4 sub-models of the DL380.[4] Selecting one of the sub-models allows you to configure it, get more details, and explore benchmarks. We picked the most expensive, the “HP ProLiant DL380 Gen9 E5-2650v3 2P 32GB-R P440ar 8SFF 2x10Gb 2x800W Perf Server,” with a base price of $8,469. Model documentation is accessible for comparing it with others, and a very useful section on benchmarks appears down the page.[5] We did not explore all of the benchmarks, but expect that most of them relate to Gen8 servers right now; over time the Gen9 results will be added.[6]

Here are a few of the key points gathered on our trip through HP’s web site.

First, HP obviously remains in a transition state. They are still selling Gen8 servers in addition to the new Gen9. They decided a gradual changeover is better than attempting a very likely disruptive wholesale change. In the meantime, a Gen8 system might be the best choice for some customers. We agree and think that the way HP is managing this situation is best for them and for their customers as well. Customer choice is generally a good thing. For example, if one needs a special feature that is either not available or supported on Gen9, it remains available on a Gen8 system.

Second, it is well worth exploring the HP support options. Return to the web page referenced in footnote 1 above, and you will see what we mean. A specific example, Microsoft ends support for Windows Server 2003 in 2015. Customers have to move to Windows Server 2012 if they want continued support. HP offers a comprehensive set of options for the move. All are described on this page.[7]

We are not saying that HP’s offerings are the best for any individual customer. We are recommending that anyone planning a Windows migration be aware of HP’s offerings and investigate them as part of the migration.

Final point: elements of the Gen9 systems remain a work in progress. For example, at the OneView web site[8], you find that this key software component does not yet support Gen9; support is promised by year’s end. Other items are in this category. That is to be expected any time a company like HP makes a major technology transition; they need a reasonable amount of time to complete it.

We recommend that customers evaluating x86 servers definitely include HP’s Gen9 offerings in their appraisal. While it is true that HP still needs to fill out the line, it provides enough detail and insight into the future to whet our appetite for more.

[1] We've learned that what a company offers as order-able products on the web site (including the base price) might be different from the announcement. Our interest is in what customers can actually order. We are not accusing HP of doing this.
[6] We did not try all combinations (There are many!) but we were not able to get any results for Gen9. 