
Before turning to the history of load balancers, let's look at the role they play in modern computer systems.

A balancer's job is to make efficient use of server computing resources and speed up the processing of requests from user applications. It does this by distributing load (traffic) evenly across servers, so that no single server is overloaded while the rest sit idle, consuming electricity to no purpose.

The load balancer itself is specialized software (hardware balancers exist as well, but we will not cover them in this article) that accepts incoming requests and redirects them to available servers for processing.

The early nineties: the first balancers

As we mentioned in one of our previous articles, in the era of mainframes – large general-purpose servers – companies used time-sharing to divide the computing resources of a single system among several users. For decades this approach worked fine.

As networks expanded to hundreds of computers and server capacity grew, time-sharing mechanisms were no longer enough. Tools were needed to manage the load across many machines at once.

One of the first mechanisms for balancing the load of such systems was Round Robin DNS. Round Robin distributes tasks evenly among all the nodes (servers) in the network, without taking their characteristics or priority into account. The node that receives the first request is chosen at random; the remaining servers are then used in order, and once the list of machines is exhausted, the queue starts over from the beginning. Hence the name Round Robin, or "carousel."
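To make the carousel concrete, here is a minimal sketch in Go of how a balancer might walk through a list of backends in round-robin order. The server addresses are hypothetical, and real implementations (DNS-based or otherwise) layer health checks and record rotation on top of this basic idea.

```go
package main

import (
	"fmt"
	"sync/atomic"
)

// Hypothetical backend addresses; a real deployment would load these
// from configuration or service discovery.
var backends = []string{
	"10.0.0.1:80",
	"10.0.0.2:80",
	"10.0.0.3:80",
}

var next uint64 // monotonically increasing request counter

// pick returns the next backend in carousel order: 0, 1, 2, 0, 1, 2, ...
func pick() string {
	n := atomic.AddUint64(&next, 1)
	return backends[(n-1)%uint64(len(backends))]
}

func main() {
	for i := 0; i < 7; i++ {
		fmt.Println("request", i, "->", pick())
	}
}
```

In Round Robin DNS the same rotation happens at name resolution time: the name server returns the list of addresses for a name in a rotating order, so successive clients land on different servers.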

In terms of scalability, the solution worked just fine and made it possible to manage an almost unlimited number of servers. However, the technology had several serious drawbacks.

The approach does not take into account the current load of the servers it distributes tasks to. A situation can arise in which one machine works harder than the others, for example because it keeps receiving requests that take longer than usual to process. The algorithm also does not check whether a server is even functioning, so it may send a request to the IP address of a machine that is switched off or missing entirely (taken away for maintenance, say).

There are ways to work around these limitations, for example modified DNS servers (such as lbnamed) that regularly poll the servers to check their availability and load. Thanks to such workarounds, Round Robin DNS balancing is still in use, for example in the popular DNS servers BIND and PowerDNS.
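Below is a rough Go sketch of the "check before you send" idea behind such modified DNS servers: it probes backends with a short TCP dial and skips the ones that do not answer. The addresses are the same hypothetical ones as in the previous sketch, and this illustrates the principle rather than how lbnamed itself is implemented.

```go
package main

import (
	"fmt"
	"net"
	"time"
)

// The same hypothetical backend list as in the previous sketch.
var backends = []string{"10.0.0.1:80", "10.0.0.2:80", "10.0.0.3:80"}

// alive reports whether a backend currently accepts TCP connections.
func alive(addr string) bool {
	conn, err := net.DialTimeout("tcp", addr, 500*time.Millisecond)
	if err != nil {
		return false
	}
	conn.Close()
	return true
}

// pickAlive walks the carousel starting at position n and returns the
// first backend that responds, so switched-off machines are skipped.
func pickAlive(n int) (string, bool) {
	for i := 0; i < len(backends); i++ {
		addr := backends[(n+i)%len(backends)]
		if alive(addr) {
			return addr, true
		}
	}
	return "", false // every backend is unreachable
}

func main() {
	if addr, ok := pickAlive(0); ok {
		fmt.Println("send request to", addr)
	} else {
		fmt.Println("no backend available")
	}
}
```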

The late nineties: when something more powerful was needed

The number of servers and the amount of data they had to process kept growing. The IT community needed more powerful software balancers that were easy to configure on large networks. One of the first "new generation" tools was Linux Virtual Server (LVS), released in 1998.

It could distribute traffic flows according to the load on the servers. The balancer was built for Linux: the system administrator only had to enable the necessary mechanisms in the system's kernel and define the rules for distributing tasks.

Later another open-source product appeared and quickly became popular – HAProxy. It was created in 2000 by Willy Tarreau, one of the Linux kernel developers. Here the balancer's job is to spread the load across a group of backend servers that handle user requests. The system administrator can assign each server a "weight" that affects how often the balancer chooses it: a high weight for powerful machines, a low one for weaker ones.
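As an illustration, a minimal HAProxy backend section with weights might look like the fragment below; the server names and addresses are hypothetical, and a real configuration would also define frontends, timeouts and health-check parameters.

```
backend web_servers
    balance roundrobin
    server big    10.0.0.1:80 weight 3 check   # powerful machine
    server small  10.0.0.2:80 weight 1 check   # weaker machine
```

With these weights, the "big" server receives roughly three requests for every one sent to "small."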

The system has effectively become an industry standard: users were won over by its speed, flexibility and quality of documentation. Today large IT companies such as Twitter, Instagram, Stack Overflow and GitHub use HAProxy.

The mid-2000s: moving to the clouds

In the 2000s, cloud technology began to gain popularity. In 2000 the developers of the FreeBSD operating system introduced a virtualization mechanism called jails. It made it possible to run several isolated, functionally independent operating environments on a single computer. The technology was revolutionary for its time and is still widely used.

Virtualization prompted a rethink of infrastructure solutions and gave rise to cloud-based traffic balancing systems. IT giants such as Microsoft and Google, as well as IaaS providers (for example, Azure, Google Cloud, Mail.Ru Cloud Solutions), offered their own traffic-balancing services. The creators of OpenStack, a cloud stack integrated with HAProxy, also released a full-fledged balancer of their own, Octavia.

The present day: new concepts

Today the world of cloud technology is changing again: fog computing is gaining popularity. In such systems, data and user requests are processed not by central network nodes but by peripheral devices – personal computers, household appliances, smartphones, drones and other Internet of Things gadgets.

For this reason, we can expect a wave of new load balancers able to work with large-scale distributed computing systems. Such solutions are already being developed. For example, the Traefik balancer, written in the Go programming language, is gaining popularity largely because it is tailored to working with IoT devices.

There are other developments as well: Netflix released the Ribbon balancer, and Google, IBM and Lyft introduced Istio, a solution designed specifically to manage load in distributed networks.

In the near future, billions more connected Internet of Things devices will come to market – according to forecasts, 75 billion by 2025. We can expect new load balancing systems to emerge alongside them, responsible for distributing traffic in the IoT era.