
Thursday, August 15, 2013

The Continuing Evolution of Enterprise Computing

Guest blog by Dave Stein of Suvola Corporation, www.suvola.com 

 

As the process of evolution is both continuous and slow, it’s frequently difficult to identify important developments when they first occur.  Thus, even though the Internet had already been gradually evolving for 25 years, few recognized its importance prior to 1993, when the advent of the browser suddenly gave rise to the World Wide Web.  Yet few would now argue with the contention that the Internet has revolutionized enterprise computing as it’s known today.

Similarly, integrated circuits have been steadily evolving for more than half a century, and the concept of a “computer-on-a-chip” has been around for at least 40 years.  Yet the semiconductor industry is just now entering a new phase of evolution tentatively named “hyperscale integration,” which promises to make a “computer-on-a-chip” a physical reality for the first time – albeit under the rubric “microserver.”  And, just as with the Internet, the advent of microserver systems incorporating hundreds to thousands of processors may well revolutionize enterprise computing.

Given the previous 60 years of evolution in enterprise computing – encompassing technology transitions from vacuum tubes to discrete semiconductors and on through small-scale, medium-scale, large-scale and very large-scale integrated circuits, none of which had any great effect on enterprise computing other than to increase its cost-effectiveness – one might reasonably question how and why the advent of microservers would revolutionize it.  The answer is that the vast majority of enterprise computing to date has made only limited use of multiprocessing.  Thus, much of the enterprise software now in use will have to be redesigned to cost-effectively exploit the massive parallelism inherent in microserver systems.
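To make that concrete, consider a minimal sketch – ours, not drawn from any particular product, and assuming nothing more than an embarrassingly parallel workload – of the structural difference between software written for one processor and software written to exploit however many processors a system provides:

```go
// A minimal sketch (not from the original post) of the redesign involved:
// the same summation written first as a serial loop, then restructured to
// fan the work out across however many processors the machine provides.
package main

import (
	"fmt"
	"runtime"
	"sync"
)

// serialSum processes the data on a single processor – the pattern most
// enterprise software has historically assumed.
func serialSum(data []int) int {
	total := 0
	for _, v := range data {
		total += v
	}
	return total
}

// parallelSum splits the same work across one goroutine per available
// processor; the structure scales with the processor count rather than
// assuming one or a few processors.
func parallelSum(data []int) int {
	workers := runtime.NumCPU() // hundreds to thousands on a microserver
	partial := make([]int, workers)
	chunk := (len(data) + workers - 1) / workers
	var wg sync.WaitGroup
	for w := 0; w < workers; w++ {
		lo, hi := w*chunk, (w+1)*chunk
		if lo >= len(data) {
			break
		}
		if hi > len(data) {
			hi = len(data)
		}
		wg.Add(1)
		go func(w, lo, hi int) {
			defer wg.Done()
			for _, v := range data[lo:hi] {
				partial[w] += v // each worker writes its own slot: no locks
			}
		}(w, lo, hi)
	}
	wg.Wait()
	total := 0
	for _, p := range partial {
		total += p
	}
	return total
}

func main() {
	data := make([]int, 1000000)
	for i := range data {
		data[i] = i % 10
	}
	fmt.Println(serialSum(data), parallelSum(data)) // both print 4500000
}
```

The details are trivial, but the structural point is not: the parallel version’s throughput is bounded by the processor count it discovers at run time, not by assumptions baked into a serial loop – and retrofitting that structure onto existing enterprise software is the costly part.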

The most-important exception to this generalization has been the development over the last decade of software for so-called Big Data applications.  But even after the expenditure of billions of dollars and ten years of pioneering, most Big Data systems are still unable to cost-effectively exploit hundreds, not to mention thousands, of processors.

At first blush, a costly and time-consuming transition from software designed to run on one or at most a few processors to software designed to cost-effectively exploit hundreds to thousands of processors may not appear to be a revolution, but it’s certainly going to usher in a new era of enterprise computing.  Why, then, is the need for this sea change in the design and development of enterprise software so little recognized?

Alas, for fear of negatively impacting sales of current products, computer hardware and software vendors alike find no advantage in telegraphing changes in future products.  Invariably, this adds to confusion as industry consultancies and trade publications are denied access to advance information on future product developments.  Worse, many computer hardware and software vendors, lacking adequate technological forecasting capabilities of their own, become dependent upon those same consultancies and publications for their assessments of future product developments and technological trends.  This begins a degenerative cycle in which bona fide technological forecasting is replaced by uninformed speculation.


The current situation regarding the transition from servers to microservers – i.e., the transition from very large-scale to hyperscale semiconductor integration – is a case in point.  Perusal of recent trade publications and industry consultant reports reveals an unusual amount of confusion regarding the definition of a microserver and the role it will play in the evolution of enterprise computing.  Much of the confusion can be attributed to the following factors.

1.  Confusion of the economics associated with the development of embedded software for consumer electronics applications sold in quantities of millions to billions of units with the fundamentally-different economics associated with the development of enterprise software for industrial applications sold in quantities of thousands to hundreds of thousands of units;

2.  Confusion of converged-infrastructure computers based on conventional circuits and packaging technologies with converged-infrastructure computers based on hyperscale integration at the chip level – when, in actuality, the former is a mere baby step in the direction of the latter;

3.  Failure to recognize that the parallelization needed by embedded software developed for consumer electronics applications must accommodate only limited numbers of heterogeneous processors, whereas the parallelization needed by enterprise software developed for industrial applications must accommodate virtually unlimited numbers of homogeneous processors – a fundamentally different problem with a fundamentally different solution (see the sketch following this list);

4.  A tendency by the uninitiated to interpret “microserver” literally as “a smaller server with smaller HVAC, power and space requirements,” with little or no understanding of the impact on both hardware and software architectures of the unprecedented economic availability of hundreds to thousands of processors in a single system;

5.  The inability of most of today’s individual computer systems vendors to provision and support all the hardware and software required to run end-user applications;

6.  The popular creed that “all software should be free” – whose devout believers fail to comprehend that the vast majority of the enterprise-software market – i.e., all but “innovators” and “early adopters” – simply can’t cope with the complexities and vagaries of multivendor installations of constantly-evolving, inherently-unstable and nonstandard open-source software; and last but not least

7.  The fact that some, but not all, technological transitions in computer systems architectures have required the development of whole new software ecologies – as was the case first with supercomputers and then with minicomputers and personal computers.
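Factor 3 is worth a concrete illustration.  The sketch below – again ours, hypothetical and deliberately toy-sized – contrasts the two problems: the heterogeneous case hard-codes a small, fixed set of role-specific workers, while the homogeneous case is a uniform worker pool whose size is simply a runtime parameter:

```go
// A hypothetical sketch contrasting the two parallelization problems
// named in factor 3. Neither resembles a real embedded or enterprise
// codebase; both merely illustrate the structural difference.
package main

import (
	"fmt"
	"runtime"
	"sync"
)

// Heterogeneous style (consumer electronics): a fixed, small set of
// role-specific workers, each written for one kind of processor.
// Adding processors means rewriting the structure.
func heterogeneous(jobs []int) {
	done := make(chan bool, 2)
	go func() { // e.g., code targeting a DSP-like unit
		for _, j := range jobs[:len(jobs)/2] {
			_ = j * j
		}
		done <- true
	}()
	go func() { // e.g., code targeting a general-purpose core
		for _, j := range jobs[len(jobs)/2:] {
			_ = j + j
		}
		done <- true
	}()
	<-done
	<-done
}

// Homogeneous style (enterprise/microserver): n identical workers
// draining one queue. n is just a number; the same code runs
// unchanged on 4 processors or 4,000.
func homogeneous(jobs []int, n int) {
	queue := make(chan int, len(jobs))
	for _, j := range jobs {
		queue <- j
	}
	close(queue)
	var wg sync.WaitGroup
	for i := 0; i < n; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for j := range queue {
				_ = j * j // identical work on every processor
			}
		}()
	}
	wg.Wait()
}

func main() {
	jobs := make([]int, 10000)
	heterogeneous(jobs)
	homogeneous(jobs, runtime.NumCPU())
	fmt.Println("both styles completed")
}
```

The heterogeneous version must be restructured every time a processor is added or changed; the homogeneous version runs unchanged whether n is 4 or 4,000 – which is precisely the form of parallelism the argument above suggests enterprise software on microservers would need.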


A corollary to factors 5 and 6 is that enterprise computing systems are moving to fully-supported “appliances” which provision all the hardware and software required to run end-user applications – as demonstrated by recent IBM and Oracle product announcements and by IBM’s success in selling 6,000+ PureFlex systems over the last two years.  Such appliances are the most-effective way to bridge the “chasm” (as defined in CROSSING THE CHASM by Geoffrey Moore) separating “innovators” and “early adopters” from the remainder of the enterprise-software market.

Indeed, one can make a strong argument that microserver systems and Big Data systems, both of which depend on evolving software stacks, will not make it across the chasm in the enterprise-software market unless and until they can be delivered as fully-supported hardware and software appliances.  Ideally, such appliances would be supported by a single vendor to provide single-vendor accountability, but that likely won’t be possible for Big Data systems until de facto standards emerge – which, in turn, won’t happen until considerable industry consolidation has taken place.

However, with the exception of Big Data applications, there is no such obstacle blocking adoption of microserver appliances.  As has been demonstrated by companies like Barracuda Networks, appliances for smaller applications such as Web servers can rapidly achieve market share and de facto standardization in the absence of industry consolidation.


It remains to be seen how drastic the hardware and software architectural changes needed to accommodate the transition from minimal multiprocessing to massively-parallel multiprocessing will be, but it’s a good bet they will be substantial.  Thus, the jury is still out on whether or not a whole new software ecology will evolve around microservers, per se.

David L. R. (Dave) Stein is a computer-industry and start-up veteran (10 high-technology firms).  He is currently VP and co-founder of Suvola Corporation (www.suvola.com), a leading enterprise software company in the emerging ARM-based microserver market.  His background in the computer industry spans a variety of roles, including industry analyst, engineering, sales, marketing and general management.  He received a B.S. degree with distinction in Mathematics and Physics and did graduate work in Mathematics at the University of Minnesota Institute of Technology.