Load Balancing | Brief Introduction
When we develop an application for a large audience, we should first estimate the number of users who will use the system. Since this estimate is only an average, it is wise to add a margin of around 25% extra users on top of it. With that figure in hand, we can choose the technologies used to build the application and then plan for load balancing. Without such a plan, the system can go down when it fails to handle a large crowd. A well-known example is Sri Lanka's National Fuel Pass system.
Load balancing is the process of distributing network traffic across multiple servers. This ensures no single server bears too much demand. By spreading the work evenly, load balancing improves application responsiveness. It also increases the availability of applications and websites for users. Most modern high-traffic applications depend on some form of load balancing.
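To make the idea concrete, here is a minimal sketch of a software load balancer in Python: it listens on one port and forwards each incoming GET request to the next server in a small pool. The backend addresses (127.0.0.1:9001 and 127.0.0.1:9002) and the listening port 8080 are placeholders chosen for illustration, not part of any real deployment.

```python
# Minimal load balancer sketch: forward each GET request to the next backend
# in a fixed pool (round robin). Error handling and header copying are
# omitted for brevity; the backend addresses below are hypothetical.
import itertools
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

BACKENDS = ["http://127.0.0.1:9001", "http://127.0.0.1:9002"]  # hypothetical pool
next_backend = itertools.cycle(BACKENDS)

class LoadBalancer(BaseHTTPRequestHandler):
    def do_GET(self):
        backend = next(next_backend)  # pick the next server in rotation
        with urllib.request.urlopen(backend + self.path) as resp:
            body = resp.read()
            self.send_response(resp.status)
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8080), LoadBalancer).serve_forever()
```

In practice this role is usually filled by dedicated software such as NGINX or HAProxy, or by a hardware appliance, rather than hand-written code; the sketch only shows the core idea of spreading requests over a pool of servers.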
There are a variety of load balancing methods, each using an algorithm suited to a particular situation; the most common ones are listed below, with a short code sketch after the list.
- Least Connection Method — directs traffic to the server with the fewest active connections. Most useful when traffic includes a large number of persistent connections that are unevenly distributed across the servers.
- Least Response Time Method — directs traffic to the server with the fewest active connections and the lowest average response time.
- Round Robin Method — rotates servers by directing traffic to the first available server and then moves that server to the bottom of the queue. Most useful when servers are of equal specification and there are not many persistent connections.
- IP Hash — the IP address of the client determines which server receives the request, so a given client is consistently routed to the same server.
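The selection rules above can be written as plain functions. This is a hypothetical sketch assuming an in-memory server pool with illustrative connection counts and response times; real load balancers track these metrics continuously rather than in static dictionaries.

```python
# Sketches of the selection methods above. The server list, connection counts,
# and response times are made-up illustrative data, not any real balancer's API.
import hashlib

servers = ["10.0.0.1", "10.0.0.2", "10.0.0.3"]            # hypothetical pool
active_connections = {"10.0.0.1": 12, "10.0.0.2": 4, "10.0.0.3": 9}
avg_response_ms = {"10.0.0.1": 80, "10.0.0.2": 35, "10.0.0.3": 50}

def least_connection():
    # Fewest active connections wins.
    return min(servers, key=lambda s: active_connections[s])

def least_response_time():
    # One simple interpretation: fewest connections, then lowest average response time.
    return min(servers, key=lambda s: (active_connections[s], avg_response_ms[s]))

def round_robin(counter=[0]):
    # Rotate through the pool in order; the mutable default keeps the position.
    server = servers[counter[0] % len(servers)]
    counter[0] += 1
    return server

def ip_hash(client_ip):
    # Hash the client IP so the same client always lands on the same server.
    digest = hashlib.md5(client_ip.encode()).hexdigest()
    return servers[int(digest, 16) % len(servers)]

print(least_connection())            # 10.0.0.2 (only 4 active connections)
print(round_robin(), round_robin())  # 10.0.0.1 10.0.0.2
print(ip_hash("203.0.113.7"))        # always the same server for this client
```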
Load balancers run as hardware appliances or are software-defined. Hardware appliances often run proprietary software optimized for custom processors. As traffic increases, more appliances are simply added to handle the volume.
Source: Internet