Guest blog by Dave Stein of Suvola Corporation, www.suvola.com
Because the process of evolution is both continuous and slow, it’s frequently difficult
to identify important developments when they first occur. Thus, even though the Internet had already
been gradually evolving for 25 years, few recognized its importance
prior to 1993, when the advent of the browser suddenly gave rise to the
World Wide Web. Yet few would now argue
with the contention that the Internet has revolutionized enterprise computing
as it’s known today.
Similarly, integrated circuits have been steadily evolving for more than half a century,
and the concept of a “computer-on-a-chip” has been around for at least 40 years. Yet the semiconductor industry is just now entering
a new phase of evolution tentatively named “hyperscale integration,” which promises
to make a “computer-on-a-chip” a physical reality for the first time – albeit
under the rubric “microserver.” And, just
as with the Internet, the advent of microserver systems incorporating hundreds to
thousands of processors may well revolutionize enterprise computing.
Given the previous 60 years of evolution in enterprise computing – encompassing as it
did technology transitions ranging from vacuum tubes to discrete semiconductors
through small-scale, medium-scale, large-scale and very large-scale integrated
circuits, none of which had any great effect on enterprise computing other
than to increase its cost-effectiveness – one might reasonably question how and
why the advent of microservers would revolutionize enterprise computing. The answer is that the vast majority of enterprise
computing has made limited use of multiprocessing to date. Thus, much of the enterprise software developed
to date will have to be redesigned to cost-effectively exploit the massive
parallelism inherent in microserver systems.
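The redesign described above is essentially a shift from serial to data-parallel execution. As a minimal, hypothetical sketch (the `score` function and record format are invented for illustration, not taken from this post), Python’s standard `multiprocessing` module can map the same work function across a pool of workers – on a microserver-class system, such a pool could in principle span hundreds of homogeneous cores:

```python
# Minimal sketch: the same per-record computation run serially and in
# parallel. "score" is a stand-in for any independent per-record task.
from multiprocessing import Pool

def score(record):
    # Hypothetical per-record computation (e.g., a risk score).
    return sum(ord(c) for c in record) % 100

def run_serial(records):
    # Bound to a single processor.
    return [score(r) for r in records]

def run_parallel(records):
    # The same function mapped across a pool of worker processes;
    # the pool size defaults to the number of available CPUs.
    with Pool() as pool:
        return pool.map(score, records)

if __name__ == "__main__":
    records = ["acct-%04d" % i for i in range(1000)]
    assert run_parallel(records) == run_serial(records)
```

The catch, of course, is that only independent, embarrassingly parallel work scales this easily; enterprise software with shared state and inter-record dependencies is precisely what makes the redesign costly and time-consuming.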
The most important exception to this generalization has been the development over
the last decade of software for so-called Big Data applications. But even after the expenditure of billions
of dollars and ten years of pioneering, most Big Data systems are still unable
to cost-effectively exploit hundreds, not to mention thousands, of processors.
At first blush, a costly and time-consuming transition from software designed to
run on one or at most a few processors to software designed to cost-effectively
exploit hundreds to thousands of processors may not appear to be a revolution,
but it’s certainly going to usher in a new era of enterprise computing. What then accounts for the evident lack of
realization of the need for a sea change in the design and development of
enterprise software?
For fear of negatively impacting sales of current products, computer hardware
and software vendors alike find no advantage in telegraphing changes in future products. Invariably, this adds to industry confusion,
as industry consultancies and trade publications are denied access to advance
information on future product developments.
What is worse is that many computer hardware and software vendors,
lacking adequate technological forecasting capabilities of their own, become
dependent upon industry consultancies and trade publications for their own
assessments of future product developments and technological trends. This then begins a degenerative cycle in
which bona fide technological forecasting is replaced by uninformed speculation.
The current situation regarding the transition from servers to microservers – i.e., the transition from very large-scale to hyperscale semiconductor integration – is a case in point. Perusal of recent trade publications and industry consultant reports reflects an unusual amount of confusion regarding the definition of a microserver and the role it will play in the evolution of enterprise computing. Much of the confusion can be attributed to the following factors.
Confusion of the economics associated with the development of embedded software for consumer electronics applications sold in quantities of millions to billions of units with the fundamentally-different economics associated with development of enterprise software for industrial applications sold in quantities of thousands to hundreds of thousands of units;
Confusion of converged-infrastructure computers based on conventional circuits and packaging technologies with converged-infrastructure computers based on hyperscale integration at the chip level – when, in actuality, the former is a mere baby step in the direction of the latter;
Failure to recognize that the parallelization needed by embedded software developed for consumer electronics applications must accommodate only limited numbers of heterogeneous processors, whereas the parallelization needed by enterprise software developed for industrial applications must accommodate virtually unlimited numbers of homogeneous processors – a fundamentally different problem with a fundamentally different solution;
A tendency by the uninitiated to interpret “microserver” literally as “a smaller server with smaller HVAC, power and space requirements” with little or no understanding of the impact on both hardware and software architectures of the unprecedented economic availability of hundreds to thousands of processors in a single system;
The inability of most of today’s individual computer systems vendors to provision and support all the hardware and software required to run end-user applications;
The popular creed of devout believers that “all software should be free” – a notion which fails to comprehend that the vast majority of the enterprise-software market – i.e., all but “innovators” and “early adopters” – simply can’t cope with the complexities and vagaries of multivendor installations of constantly-evolving, inherently-unstable and nonstandard open-source software; and last but not least
The fact that some, but not all, technological transitions in computer systems architectures have required the development of whole new software ecologies – as was the case first with supercomputers and then with minicomputers and personal computers.