If you work in the information technology field for more than a few months, you will likely run into some form of operating system virtualization. In an age of doing more with less to reduce cost, one of the fastest ways to gain economies of scale is to virtualize systems.
Virtualization allows a company to run multiple instances of an operating system on one physical computer system. The basic concept dates back decades to IBM's mainframe systems, but in more recent years, advances in virtualization software such as VMware and Microsoft Virtual Server, and in hardware such as multi-core processors, have brought the capability into mainstream operations; many companies now run numerous instances of an operating system on a single server.
Additionally, companies on restricted budgets that ordinarily could not afford highly available systems are now able to achieve them with less hardware expense, increasing customer satisfaction and business uptime. This is accomplished through product offerings such as Novell's PlateSpin.
The interest in virtualization becomes clear when you consider that projections for fiscal year 2011 predict well over $11 billion in industry revenues.
So while the recent advances in virtualization have made the technology practical in many cases, the question becomes “what next?”
Perhaps the answer lies in what researchers at Northwestern University, Sandia National Labs, and the University of New Mexico have recently been pursuing: the virtualization of a supercomputer. Sandia's "Red Storm" computer system was virtualized across 4,096 compute nodes on a shared computing infrastructure.
Originally, the more-than-38,000-processor parallel computer system was designed for modeling nuclear weapons stockpiles. But now the goal has a much nobler cause: furthering democracy in the world of operating systems.
The team used a lightweight host operating system, known as "Kitten," to run multiple guest operating systems and make them available on demand.
Sandia's lead researcher, Kevin Pedretti, said, "Our observation is that no single operating system will satisfy the needs of all potential users so we are attempting to leverage the virtualization hardware in modern processors to allow users to select the operating system best for them to use at run-time."
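The hardware support Pedretti refers to is present in most modern processors and is easy to spot on an ordinary Linux machine. As a minimal illustrative sketch (not part of the Sandia project), the snippet below scans text in the format of /proc/cpuinfo for the Intel VT-x ("vmx") and AMD-V ("svm") CPU flags that hypervisors rely on:

```python
# Illustrative sketch: detect hardware virtualization extensions
# (Intel VT-x appears as the "vmx" flag, AMD-V as "svm") in text
# formatted like Linux's /proc/cpuinfo.

def virtualization_extensions(cpuinfo_text):
    """Return the set of virtualization-related CPU flags found."""
    found = set()
    for line in cpuinfo_text.splitlines():
        if line.startswith("flags"):
            # The "flags" line holds a colon, then space-separated flags.
            flags = line.split(":", 1)[1].split()
            found.update(f for f in flags if f in ("vmx", "svm"))
    return found

if __name__ == "__main__":
    # On a real system you would read /proc/cpuinfo; here we use a
    # shortened sample line for illustration.
    sample = "flags\t\t: fpu msr pae vmx sse2 ht"
    print(virtualization_extensions(sample))  # -> {'vmx'}
```

If neither flag turns up, the processor lacks the hardware assistance that makes this style of run-time operating system selection practical.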
Such advances will not only help break software giants' hold on licensing costs and make newer, more progressive technologies available to companies that thought they were locked into a past purchase; they will also make large computing infrastructures such as these university supercomputers more readily available to companies and organizations for research, testing, and computer simulations, since massive reconfiguration will no longer be required to accommodate the host operating system.
As we move through the new decade, many advances in computing and gadgetry are likely to evolve, and recent interest in the stability of tech stocks on Wall Street, in light of a floundering economy, will only further these advances. One may wonder whether Wall Street is already virtualizing its servers too.