What is Load Balancing?

Load balancing is built on the existing network structure. It provides a cheap, effective and transparent way to expand the bandwidth of network devices and servers, increase throughput, strengthen network data processing capabilities, and improve network flexibility and availability.

Load balancing (Load Balance) means distributing the load (work tasks) across multiple operating units, such as FTP servers, Web servers, enterprise core application servers, and other primary task servers, so that they complete the work collaboratively.
Software/hardware load balancing
There are three deployment modes for load balancing: routing mode, bridge mode, and direct server return (DSR) mode. Routing mode, the recommended mode, is flexible to deploy, and roughly 60% of users deploy this way; bridge mode leaves the existing network architecture unchanged; direct server return is better suited to high-throughput network applications, especially content distribution, and roughly 30% of users adopt this mode.
There are several common software load balancing technologies:
1. DNS load balancing. The earliest load balancing technique was implemented through DNS: the same name is configured with multiple addresses in the DNS server, so a client querying that name receives one of the addresses, and different clients end up on different servers, which achieves load balancing. DNS load balancing is a simple and effective method, but it cannot distinguish between servers and cannot reflect a server's current running state.
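The effect is easy to see from the client side. Below is a minimal Python sketch (the hostname and port are placeholders, not taken from this article) that collects every address DNS publishes for a name and cycles through them, which is essentially all the balancing that plain DNS round-robin provides:

```python
# Minimal sketch: resolve a name that has several A records and spread
# connections across the returned addresses. "www.example.com" is a
# placeholder hostname, not one mentioned in the article.
import itertools
import socket

def resolve_all(hostname, port=80):
    """Return every IPv4 address DNS publishes for the name, in order."""
    infos = socket.getaddrinfo(hostname, port, socket.AF_INET, socket.SOCK_STREAM)
    seen, addresses = set(), []
    for *_, sockaddr in infos:          # last element is (ip, port)
        ip = sockaddr[0]
        if ip not in seen:              # getaddrinfo may repeat addresses
            seen.add(ip)
            addresses.append(ip)
    return addresses

addresses = resolve_all("www.example.com")
round_robin = itertools.cycle(addresses)

# Each "request" simply uses the next address in turn. A real DNS setup
# relies on the resolver handing different clients different records;
# this loop only imitates that from a single client.
for _ in range(4):
    print("would connect to", next(round_robin))
```

Because the choice happens at name-resolution time, the client has no idea whether the server it picked is overloaded or even alive, which is the weakness noted above.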
2. Proxy server load balancing. A proxy server forwards requests to internal servers; this acceleration mode can noticeably improve the access speed of static web pages. The same mechanism can also be used for balancing: a proxy server that evenly forwards requests across multiple servers achieves load balancing.
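As a hedged illustration of just the "evenly forward" step, the sketch below keeps a rotating pointer into a list of made-up backend addresses and shows that consecutive requests spread evenly:

```python
# Toy sketch of even distribution: the proxy rotates through its backend
# list, so consecutive requests land on consecutive servers.
# The backend IPs are illustrative only.
from collections import Counter
from itertools import cycle

backends = ["10.0.0.11", "10.0.0.12", "10.0.0.13"]
rotation = cycle(backends)

def choose_backend():
    """Pick the internal server that should handle the next proxied request."""
    return next(rotation)

# Simulate nine incoming requests and count where they went.
hits = Counter(choose_backend() for _ in range(9))
print(hits)   # each backend receives exactly three requests
```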
3. Address translation gateway load balancing. An address translation gateway that supports load balancing can map one external IP address to multiple internal IP addresses and dynamically use one of the internal addresses for each TCP connection request, thereby achieving load balancing.
4. Load balancing supported within a protocol. In addition to these three methods, some protocols have built-in features related to load balancing, such as the redirection capability of the HTTP protocol. HTTP runs at the highest layer of the TCP connection.
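A minimal sketch of redirect-based balancing, assuming two hypothetical internal application servers: the front server answers every request with a 302 that points the client at one of the real servers.

```python
# Minimal sketch of protocol-level balancing via HTTP redirects: the front
# server replies 302 with a Location header naming one of the real servers,
# chosen round-robin. Hostnames and ports are placeholders.
from http.server import BaseHTTPRequestHandler, HTTPServer
from itertools import cycle

backends = cycle(["http://app1.internal.example:8080",
                  "http://app2.internal.example:8080"])

class RedirectBalancer(BaseHTTPRequestHandler):
    def do_GET(self):
        target = next(backends) + self.path
        self.send_response(302)                 # tell the client where to go
        self.send_header("Location", target)
        self.end_headers()

if __name__ == "__main__":
    # The client then issues a second request to the chosen server, so every
    # request pays an extra round trip; that is the usual cost of redirection.
    HTTPServer(("0.0.0.0", 8000), RedirectBalancer).serve_forever()
```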
5. NAT load balancing. NAT (Network Address Translation) simply translates one IP address into another; it is generally used to convert between unregistered internal addresses and legal, registered Internet IP addresses. It suits situations where Internet IP addresses are scarce and you do not want the outside world to know the internal network structure.
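Real NAT is performed in the kernel or on a dedicated device, so the following is only a conceptual Python sketch with invented addresses: each new connection arriving at the public address is mapped to one internal server, and the mapping is remembered so later traffic on the same connection reaches the same server.

```python
# Conceptual sketch of NAT-based balancing (not real packet rewriting):
# a new (client_ip, client_port) pair gets the next internal server, and
# the mapping is cached so the connection stays on that server.
from itertools import cycle

internal = cycle(["192.168.1.11", "192.168.1.12", "192.168.1.13"])
nat_table = {}   # (client_ip, client_port) -> internal server ip

def translate(client_ip, client_port):
    """Return the internal address this client's connection maps to."""
    key = (client_ip, client_port)
    if key not in nat_table:
        nat_table[key] = next(internal)   # new connection: pick a server
    return nat_table[key]

print(translate("198.51.100.7", 40001))   # first connection -> first server
print(translate("198.51.100.7", 40001))   # same connection -> same server
print(translate("198.51.100.8", 51312))   # next connection -> next server
```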
6. Reverse proxy load balancing. An ordinary proxy forwards connection requests from internal network users to servers on the Internet: the client must specify the proxy server and send requests that would otherwise go directly to an Internet server to the proxy instead. A reverse proxy works the other way around: it accepts connection requests from the Internet, forwards them to servers on the internal network, and returns the results obtained from those servers to the clients on the Internet that requested the connection, so to the outside the proxy itself appears to be the server. Reverse proxy load balancing dynamically forwards incoming Internet connection requests to multiple internal servers in this manner, achieving load balancing.
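A minimal sketch of the idea, assuming two hypothetical internal backends; a production deployment would use a dedicated reverse proxy (nginx, HAProxy, and the like) and would stream bodies and handle errors, which this sketch omits:

```python
# Minimal reverse-proxy sketch: accept the request, forward it to one of
# the internal servers, and relay the answer, so the proxy is the only
# address the outside ever sees. Backend URLs are placeholders.
from http.server import BaseHTTPRequestHandler, HTTPServer
from itertools import cycle
from urllib.request import urlopen

backends = cycle(["http://192.168.1.11:8080", "http://192.168.1.12:8080"])

class ReverseProxy(BaseHTTPRequestHandler):
    def do_GET(self):
        upstream = next(backends)                    # pick an internal server
        with urlopen(upstream + self.path) as resp:  # forward the request
            body = resp.read()
            self.send_response(resp.status)
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)                   # relay the reply

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8000), ReverseProxy).serve_forever()
```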
7. Hybrid load balancing. In some large networks, the server farms differ in hardware, scale, and the services they provide, so the most appropriate load balancing method can be chosen for each farm individually, and then load balancing or clustering is applied across the farms themselves so that they are presented to the outside world as a single whole (that is, the multiple farms are treated as one new server farm). This achieves the best overall performance and is called hybrid load balancing. The same approach is also sometimes used when a single balancing device cannot handle a large number of connection requests. [3]
