“Hyper-converged systems” is a phrase that keeps gaining traction in the cloud market, and for good reason. We’ve taken a deep look at why hyper-converged systems are the latest trend in cloud computing.
In any enterprise, delivering the goods or services the business specializes in requires a whole slew of ancillary functions that are not directly related to the specialty but are critical to the business operation. Traditionally, there are two ways of providing those services: purchasing a product and/or hiring an employee to run it, or contracting with an outside vendor. When applied to IT infrastructure, neither option is optimal in many cases: purchasing systems is expensive and requires a skilled administrator, while outside vendors may not offer the response time or availability the business needs. Additionally, IT infrastructure involves multiple technologies, each of which has to be purchased, configured, and deployed. Providing IT is complicated and expensive.
But does it have to be?
One of the most exciting IT developments in recent years has been the move to consolidate the various aspects of IT infrastructure into unified devices, converging their functions behind a single administration interface. First, hypervisors consolidated host servers into virtual machines that can be configured, deployed, replicated, and managed with the click of a button. Next came virtualized networking, where layer 2 and layer 3 functions are virtualized within the hypervisor. The latest advance took the storage area network (SAN) required for the performance, fault tolerance, and high availability of a modern hypervisor architecture and converged it into the same equipment and interface as well, and the hyper-converged system was born.
What is hyper-convergence?
Hyper-convergence refers to the merging of logical and physical components of IT systems that provide computing and storage resources into a unified product or service. The promise of hyper-convergence and hyper-converged systems is the reduction of hardware capital expenditure as well as cost and complexity of operation.
Why the Hyper-Convergence boom now?
The promise of hyper-convergence and hyper-converged systems has taken a long time to match reality. Until very recently, available offerings were very expensive, very difficult to deploy, or very buggy. Or, in most cases, all three. The distributed storage algorithms that make a converged SAN possible were available as production-quality code only in costly, complex distributed file systems such as Lustre and GPFS. In the last few years, the emergence of stable, production-quality, lower-cost solutions such as Nutanix and open source projects such as Gluster and Ceph has changed the landscape, making it possible to merge the compute and storage functions into a cluster of hardware nodes and gain performance, fault tolerance, and high availability from the synergy. The cluster nodes are commodity hardware because the software stack does all the heavy lifting, giving the customer greater flexibility in hardware choices and lowering costs by eliminating the need for custom solutions.
But wait, there’s more!
The convergence of compute and storage into unified cluster nodes has the additional benefit of seamless scalability. Since cluster member nodes are generic, scaling the cluster for either additional storage OR compute resources is as simple as adding another node.
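As a rough sketch of what that looks like in practice, here is how a node’s storage might be added to a Gluster-backed cluster. The hostname, volume name, brick path, and replica count are all illustrative, not taken from any particular deployment:

```shell
# Hypothetical example: growing a GlusterFS volume with a new node.
# node4.example.com, "datavol", and /bricks/datavol are placeholders.

# Join the new node to the trusted storage pool.
gluster peer probe node4.example.com

# Contribute the new node's local storage (a "brick") to the volume.
gluster volume add-brick datavol replica 3 node4.example.com:/bricks/datavol

# Redistribute existing data so the new brick shares the load.
gluster volume rebalance datavol start
```

The same node would simultaneously be registered with the hypervisor layer as an extra compute host, which is the essence of the hyper-converged model: one physical addition grows both resource pools at once.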
Hyper-converged systems, when they deliver on their promise, become the holy grail of computing infrastructure: fault tolerant, inexpensive, and simple to grow. It is no wonder their applications run the gamut of computing use cases, from cloud networking to Infrastructure-as-a-Service (IaaS), Platform-as-a-Service (PaaS), Software-as-a-Service (SaaS), and much more.