Load Balancer

A Load Balancer is one way to add HA (High Availability) to a system, that is, to reduce the chance that a service becomes unavailable. It works by increasing the number of service nodes and using the Load Balancer to distribute load across those nodes. The practice of adding or removing service nodes to handle changes in load volume is called horizontal scaling.

A Load Balancer normally distributes load according to some logic, such as round robin, which cycles through the nodes in order, or selecting the node currently carrying the least load. To evaluate this logic, the Load Balancer itself must be built around a compute module, which means you could also create your own Load Balancer from a server or Compute Instance.
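The two distribution strategies mentioned above can be sketched in a few lines. This is an illustrative sketch only; the node names are hypothetical, and the actual selection logic of a managed Load Balancer is internal to the service.

```python
from itertools import cycle

# Hypothetical service nodes behind the load balancer.
nodes = ["node-1", "node-2", "node-3"]

# Round robin: cycle through the nodes in a fixed order.
rr = cycle(nodes)

def pick_round_robin():
    return next(rr)

# Least connections: track active connections per node and
# pick the node with the fewest.
active = {n: 0 for n in nodes}

def pick_least_connections():
    node = min(active, key=active.get)
    active[node] += 1          # a connection was opened
    return node

def release(node):
    active[node] -= 1          # a connection was closed
```

Round robin spreads requests evenly regardless of how long each request takes, while least connections adapts when some requests are much slower than others.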

Load Balancer Usage

How Load Balancers Work

A Load Balancer receives incoming traffic, classifies it into different groups, and distributes it to a predefined set of backends. During creation, users must select specifications that suit different workloads, categorized by Purpose and Topology.

Purpose:

  • Development: Suitable for workloads that can tolerate slight performance variations or for test environments. Only available with a Standalone topology.

  • Production: Suitable for workloads requiring stable, consistent performance. Available in both Standalone and High Availability topologies.

Topology: Specifies the number of compute machines acting as the load balancer.

  • Standalone: A single compute machine. If it fails, a self-healing process will bring a new one online within 5 minutes.

  • High Availability: Two compute machines in an active/standby configuration. If the active machine fails, it can switch to the standby machine within 5 seconds.

Types of Load Balancing

NIPA Cloud Space offers two types of load balancing, and a single Load Balancer resource can perform both, depending on the protocol chosen for its Backend Groups and Listeners.

  • Network Load Balancing

  • Application Load Balancing

| Load Balancing \ Component | Backend Group | Listener |
| --- | --- | --- |
| Application Load Balancing | HTTP | HTTP, HTTPS |
| Network Load Balancing | TCP, UDP | TCP, UDP |

Choosing Between Network Load Balancing and Application Load Balancing

Below is a comparison of the capabilities of Network Load Balancing and Application Load Balancing as provided by NIPA Cloud Space.

| Factors | Network Load Balancing | Application Load Balancing |
| --- | --- | --- |
| Performance | Up to 69,000 TPS* | Up to 62,000 TPS* |
| Categorize load by protocol and port number | Yes | Yes |
| Allowed Backend | Backend Group | Backend Group, Redirect to URL, Redirect to Prefix, Reject |
| Terminate HTTPS | No | Yes |
| Forwarding Policy | No | Yes |

*TPS (Transactions Per Second) is the number of transactions that can be handled in one second.

Network Load Balancing

This type of load balancing categorizes and balances traffic at the transport layer (Layer 4) of the OSI model. It is highly efficient and performs basic categorization using only the protocol and port number.
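Layer 4 classification can be pictured as a lookup keyed only by protocol and port, with no inspection of the payload. The listener and backend group names below are hypothetical examples, not part of the NIPA Cloud Space API.

```python
# Sketch of Layer 4 classification: a listener matches purely on
# protocol and port; the packet payload is never inspected.
listeners = {
    ("TCP", 443): "backend-group-web",   # hypothetical backend groups
    ("TCP", 22):  "backend-group-ssh",
    ("UDP", 53):  "backend-group-dns",
}

def classify(protocol, port):
    # Returns the matching backend group, or None if no listener
    # is configured for this protocol/port pair.
    return listeners.get((protocol, port))
```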

TCP/UDP Load Balancing

The protocols used for Network Load Balancing are TCP or UDP. The traffic maintains the same protocol throughout its entire path—from the client to the load balancer, on to the backend member, and on the response path back to the client.

TCP Load Balancing Structure

Application Load Balancing

This type of load balancing categorizes and balances traffic at the application layer (Layer 7). It offers more advanced features but at the cost of lower performance compared to Network Load Balancing.

HTTP Load Balancing

This uses an HTTP Listener paired with an HTTP Backend Group. The traffic remains HTTP throughout the entire path. It supports Layer 7 policies.

HTTP Load Balancing Structure
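A Layer 7 forwarding policy routes on request attributes such as host and path. The sketch below illustrates the backend actions listed in the comparison table above (Backend Group, Redirect to Prefix, Reject); the hostnames, paths, and group names are hypothetical.

```python
# Sketch of a Layer 7 forwarding policy. Each rule maps request
# attributes to one of the backend actions available to an
# Application Load Balancer.
def route(host, path):
    if host == "api.example.com":                  # hypothetical hostname
        return ("backend_group", "api-backends")
    if path.startswith("/old/"):
        return ("redirect_prefix", "/new/")        # rewrite the path prefix
    if path.startswith("/blocked"):
        return ("reject", 403)                     # refuse the request
    return ("backend_group", "default-backends")   # fallback rule
```

This kind of inspection is only possible because the listener understands HTTP, which is also why Network Load Balancing cannot offer forwarding policies.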

HTTPS Load Balancing

This uses an HTTPS Listener paired with an HTTP Backend Group. For incoming HTTPS traffic, the TLS encryption is terminated at the listener using the SSL certificate, and the traffic is then forwarded to the backend members as unencrypted HTTP. Responses are re-encrypted to HTTPS before being sent back to the client. It also supports Layer 7 policies.

HTTPS Load Balancing Structure

SSL Certificate

For HTTPS Load Balancing, an SSL Certificate is essential for encrypting and decrypting traffic. You can import your certificates to NIPA Cloud Space and register them with an HTTPS listener. The imported certificate must be in .pem format and include:

  • The certificate itself

  • The private key

  • The certificate chain (optional, depending on the provider)

A single SSL Certificate can be used with multiple listeners across multiple load balancers.
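The combined .pem file follows the usual PEM layout, with the three parts concatenated in order. The labels in parentheses below are placeholders for the base64-encoded contents; your provider's chain may contain more than one intermediate certificate.

```
-----BEGIN CERTIFICATE-----
(server certificate)
-----END CERTIFICATE-----
-----BEGIN PRIVATE KEY-----
(private key)
-----END PRIVATE KEY-----
-----BEGIN CERTIFICATE-----
(intermediate/chain certificate, optional)
-----END CERTIFICATE-----
```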

Load Balancer Status

The status of a Load Balancer is divided into two types:

  • Operating Status: Indicates the health of the load balancer.

    • Healthy: All components are working normally.

    • Degraded: Some backend members are down.

    • Draining: Members cannot accept new traffic.

    • Unhealthy: All backend members are down.

    • No Monitor: No health check is configured for the backend group.

  • Provisioning Status: Indicates the management state of the load balancer resource.

    • (no status): The load balancer is stable with no pending changes.

    • Creating: Displayed during the creation process.

    • Updating: Displayed when changes are being made (e.g., creating/editing/deleting a Listener or Backend Group).

    • Deleting: Displayed during the deletion process.

    • Error: Displayed if a change fails. It is recommended to contact Customer Support or delete and recreate the load balancer.
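The mapping from backend member health to Operating Status can be summarized as a small function. This is a sketch of the states described above, not the actual NIPA Cloud Space implementation, and it omits the Draining state, which depends on member configuration rather than health.

```python
def operating_status(member_health, has_monitor):
    # Derive an Operating Status from a list of backend member
    # health values ("up" or "down") and whether a health check
    # (monitor) is configured for the backend group.
    if not has_monitor:
        return "No Monitor"
    up = sum(1 for h in member_health if h == "up")
    if up == len(member_health):
        return "Healthy"
    if up == 0:
        return "Unhealthy"
    return "Degraded"
```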
