
This is part 3 of my Nginx series. If you haven't read the first two, please read them before starting this one; they should give you an idea about nginx if you are not already familiar with it. Part-1, Part-2.

Let's get into the topic. Nginx can work as both a layer-7 and a layer-4 load balancer. Here I will mostly focus on layer 7, but first let's see how to configure nginx as a layer-4 load balancer, which is not the kind of full-featured load balancer we get when nginx works at layer 7.

At OSI layer 4 nginx deals with TCP connections, where it can only see IP addresses and ports. As an example, if we want to load-balance traffic coming to port 80, we can use the following configuration.

stream {
    upstream myservers {
        server 127.0.0.1:2222;
        server 127.0.0.1:3333;
    }
    server {
        listen 80;
        proxy_pass myservers;
    }
}
events { } # events block

If you have a basic understanding of nginx, you may have noticed that here we do not have the main http config block we had in the previous examples. That is simply because http works at layer 7, and now we are trying to load-balance traffic at layer 4. Port 80 is the standard HTTP port, but at layer 4 nginx sees it as just a port. So any packet coming to port 80 will be routed to the two services running on ports 2222 and 3333 in round-robin fashion.

To test this, you will have to open raw TCP connections (for example, with telnet or netcat); balancing happens per connection, so normal browser traffic, which reuses a single connection, won't appear to be load-balanced. Also, at layer 4 we cannot load-balance traffic coming to a specific route.
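For completeness, the same stream upstream can be tuned with per-server parameters such as weight, max_fails, and fail_timeout. This is just a sketch; the ports are the same illustrative ones used above and the parameter values are arbitrary assumptions:

```nginx
stream {
    upstream myservers {
        # weight=2: this server receives roughly twice as many connections
        server 127.0.0.1:2222 weight=2;
        # after 3 failed attempts, consider this server down for 30 seconds
        server 127.0.0.1:3333 max_fails=3 fail_timeout=30s;
    }
    server {
        listen 80;
        proxy_pass myservers;
    }
}
events { }
```

These parameters give the layer-4 balancer a simple form of failover without any application-level knowledge.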

Layer-7 Load Balancing Using Nginx

As I understand it, this is the most important part of nginx. We can do much smarter load balancing with this option, from simple round-robin balancing across multiple back-end services to caching, certificate handling, and more. What I understand and explain here may be only a small fraction of it.

http {
    upstream myfirst {
        server 127.0.0.1:7070;
        server 127.0.0.1:7171;
    }
    server {
        listen 8000;
        location / {
            proxy_pass http://myfirst;
        }
    }
} # http block
events { } # events block

The above configuration will load-balance our service listening on port 8000.

By default, nginx will use the round-robin algorithm to route traffic. HTTP traffic coming to the root of our service will be sent to the two services running on ports 7070 and 7171.
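Round-robin can also be weighted when one back-end is more powerful than the other. As a sketch (the ports are the same illustrative ones as above, and the weights are arbitrary assumptions):

```nginx
http {
    upstream myfirst {
        # weight=3: gets roughly 3 out of every 4 requests
        server 127.0.0.1:7070 weight=3;
        # default weight is 1
        server 127.0.0.1:7171;
    }
    server {
        listen 8000;
        location / {
            proxy_pass http://myfirst;
        }
    }
} # http block
events { } # events block
```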

If we want one client to always be directed to one particular server, especially when we use server-side caching, we have to use ip_hash, so the config will look like this.

http {
    upstream myfirst {
        ip_hash;
        server 127.0.0.1:7070;
        server 127.0.0.1:7171;
    }
    server {
        listen 8000;
        location / {
            proxy_pass http://myfirst;
        }
    }
} # http block
events { } # events block
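ip_hash is not the only alternative to round-robin. If the goal is simply to spread load more evenly rather than to pin clients to servers, nginx also ships a least_conn method that sends each new request to the server with the fewest active connections. A sketch, using the same illustrative ports as above:

```nginx
http {
    upstream myfirst {
        # route each request to the back-end with the fewest active connections
        least_conn;
        server 127.0.0.1:7070;
        server 127.0.0.1:7171;
    }
    server {
        listen 8000;
        location / {
            proxy_pass http://myfirst;
        }
    }
} # http block
events { } # events block
```

least_conn helps when requests take very different amounts of time to serve, which can leave round-robin with an uneven load.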

Using Certificates with Nginx

This was a real issue one of my colleagues at my workplace faced recently. The client requirement was to secure two sites with the same certificate, and both sites were running on the same server.

We had to get some help from the community, and following their suggestion he was able to solve it.

server {
    listen 192.168.1.2:443 ssl;
    server_name 192.168.1.2;
    ssl_certificate "/path/to/nginx.crt";
    ssl_certificate_key "/path/to/private/nginx.key";
    location /mysite1/ {
        proxy_pass http://192.168.1.2:8000/;
    }
    location /mysite2 {
        proxy_pass http://192.168.1.2:9000/mysite2;
    }
}
Because of nginx's tiny footprint, it is widely used as a load-balancing solution; Kubernetes, for example, uses it behind the scenes in its ingress controller. At layer 7 we can also use caching to improve the user experience.
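As a sketch of that caching idea, a layer-7 proxy can cache upstream responses with proxy_cache. The cache path, zone name, sizes, and timings below are illustrative assumptions, not values from the original setup:

```nginx
http {
    # define a cache: on-disk location, shared-memory zone for keys, size limits
    proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=mycache:10m
                     max_size=1g inactive=60m;
    upstream myfirst {
        server 127.0.0.1:7070;
        server 127.0.0.1:7171;
    }
    server {
        listen 8000;
        location / {
            proxy_cache mycache;           # serve repeat requests from the cache
            proxy_cache_valid 200 10m;     # keep successful responses for 10 minutes
            proxy_pass http://myfirst;
        }
    }
} # http block
events { } # events block
```

Cached responses are served directly by nginx without touching the back-ends, which reduces both latency and upstream load.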

I have covered only a tiny bit of nginx here; its true power is yet to be explored by me. There are lots of good tutorials and the official documentation. If you see a mistake here, please let me know.

A DevOps engineer who is willing to learn and try new things.