Deploying and Optimizing an nginx High-Availability Cluster

I. Deploying the nginx high-availability cluster

1. Install nginx

# Install build dependencies
yum install -y gcc pcre-devel openssl-devel zlib-devel

# Download and unpack the source
wget https://nginx.org/download/nginx-1.17.3.tar.gz 
tar zxvf nginx-1.17.3.tar.gz 
cd nginx-1.17.3 

# Configure the build (note the line continuations)
./configure \
--prefix=/usr/local/nginx \
--with-http_stub_status_module \
--with-http_ssl_module

# Compile
make

# Install
make install
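Once `make install` finishes, it is worth confirming that the binary was really built with the modules requested at configure time. A minimal sketch; the `v_out` string below is illustrative, and on a real host you would capture it with `/usr/local/nginx/sbin/nginx -V 2>&1` instead:

```shell
# Illustrative `nginx -V` configure-arguments line; on a real host:
#   v_out="$(/usr/local/nginx/sbin/nginx -V 2>&1)"
v_out="configure arguments: --prefix=/usr/local/nginx --with-http_stub_status_module --with-http_ssl_module"

# Return success if a given configure flag is present in the output
has_module() {
    echo "$v_out" | grep -q -- "$1"
}

has_module "--with-http_ssl_module" && echo "ssl module: ok"
has_module "--with-http_stub_status_module" && echo "stub_status module: ok"
```

If a flag is missing, re-run ./configure with it and rebuild; modules cannot be enabled after the fact in a static build.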

2. Configure the nginx cluster

# Assume two machines, 192.168.0.1 and 192.168.0.2, each with nginx installed

# Edit nginx.conf
http {
    upstream myapp1 {
        server 192.168.0.1:8080 weight=5;
        server 192.168.0.2:8080;
    }

    server {
        listen       80;
        server_name  myapp.com;

        location / {
            proxy_pass  http://myapp1;
        }
    }
}
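With weight=5 on the first server, roughly five of every six requests go to 192.168.0.1. The selection logic can be sketched as a small simulation of nginx's smooth weighted round-robin; the labels A and B stand for the two upstream servers above:

```shell
# Smooth weighted round-robin for two peers, as nginx implements it:
# each round, add each peer's weight to its running score, pick the
# peer with the highest score, then subtract the total weight from it.
w1=5; w2=1          # weights of 192.168.0.1 (A) and 192.168.0.2 (B)
total=$((w1 + w2))
c1=0; c2=0          # current scores
picks=""
i=0
while [ $i -lt $total ]; do
    c1=$((c1 + w1)); c2=$((c2 + w2))
    if [ $c1 -ge $c2 ]; then
        picks="$picks A"; c1=$((c1 - total))
    else
        picks="$picks B"; c2=$((c2 - total))
    fi
    i=$((i + 1))
done
echo "$picks"    # five As and one B per cycle of six
```

Over each cycle of six picks, A is chosen five times, matching the 5:1 weights; the "smooth" variant spreads the single B pick inside the cycle rather than bunching consecutive requests on one server.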

3. Use Keepalived for failover

# Install keepalived
yum install -y keepalived 

# Edit keepalived.conf
global_defs {
   notification_email {
     acassen@firewall.loc
     failover@firewall.loc
     sysadmin@firewall.loc
   }
   notification_email_from Alexandre.Cassen@firewall.loc
   smtp_server 192.168.200.1
   smtp_connect_timeout 30
   router_id nginx_lvs1
}

vrrp_instance VI_1 {
    state MASTER    # MASTER on the primary node, BACKUP on the standby
    interface eth0  # NIC used for VRRP heartbeats
    virtual_router_id 51 # must be identical on both nodes
    priority 101  # the primary's priority; the backup typically uses 100
    advert_int 1 # send a VRRP advertisement every second
    authentication {
        auth_type PASS
        auth_pass 1111 # must be identical on both nodes
    }
    virtual_ipaddress {
        192.168.10.55/24 dev eth0 label eth0:1 # the VIP; must be on the same subnet as both nodes
    }
}

# Enable keepalived at boot and start it
systemctl enable keepalived 
systemctl start keepalived 
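Two things are easy to miss here. The keepalived.conf on the backup node must use the same virtual_router_id and auth_pass but a different state and priority; and plain VRRP only reacts when the whole node dies, not when the nginx process does, so a tracking script is usually added. A sketch, assuming a pidof-based check (the chk_nginx name and the weight value are illustrative):

```
# keepalived.conf on the backup node (192.168.0.2)
vrrp_script chk_nginx {
    script "/usr/bin/pidof nginx"   # non-zero exit = nginx is down
    interval 2
    weight -5                       # lower priority so the peer takes over
}

vrrp_instance VI_1 {
    state BACKUP
    interface eth0
    virtual_router_id 51   # same as the master
    priority 100           # lower than the master's 101
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111     # same as the master
    }
    track_script {
        chk_nginx
    }
    virtual_ipaddress {
        192.168.10.55/24 dev eth0 label eth0:1
    }
}
```

The same vrrp_script and track_script blocks belong in the master's configuration too: with weight -5, a dead nginx drops the master's priority from 101 to 96, below the backup's 100, and the VIP moves over.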

II. nginx high availability

1. nginx's asynchronous, event-driven I/O model

nginx uses an asynchronous, event-driven model; compared with the traditional blocking model, it dispatches requests to backend servers far more efficiently. Its process model is equally robust: the master process only manages the worker processes, while the workers handle the actual request processing. When a request arrives, nginx assigns it to one worker and the others keep serving traffic, so even if a worker crashes, the service as a whole keeps running. nginx also supports hot deployment: the configuration and even the binary can be updated without stopping the service, which greatly improves availability.
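This model is visible directly in nginx.conf: the worker count and the per-worker event loop are set with a few top-level directives. A minimal sketch, with illustrative connection counts:

```
# nginx.conf, top level
worker_processes  auto;          # one worker per CPU core

events {
    use epoll;                   # event notification mechanism on Linux
    worker_connections  10240;   # concurrent connections per worker
}
```

Hot deployment is driven by signals to the master process: `nginx -s reload` re-reads the configuration and gracefully replaces the workers, and sending USR2 to the master starts a new binary alongside the old one, all without dropping established connections.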

2. nginx health checks

nginx can health-check its upstream servers to keep the service stable. There are two common approaches: passive checks and active checks.

Passive checking is what open-source nginx does out of the box: as it forwards live traffic, it counts connection and response failures per upstream server. A server that fails too often is temporarily removed from the pool, so new requests are no longer sent to it.

Active checking means periodically sending a dedicated probe (for example an HTTP request) to each backend; a server that times out or returns an unexpected result is likewise removed from the pool. In open-source nginx this requires a third-party module such as nginx_upstream_check_module; the built-in health_check directive is an nginx Plus feature.
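In open-source nginx, passive checks are configured per server inside the upstream block. A sketch for the pool used earlier, with illustrative thresholds and a hypothetical third server as backup:

```
upstream myapp1 {
    # After 3 failed attempts within 30s, take the server out of
    # rotation for 30s, then try it again.
    server 192.168.0.1:8080 weight=5 max_fails=3 fail_timeout=30s;
    server 192.168.0.2:8080 max_fails=3 fail_timeout=30s;

    # Only receives traffic when all primary servers are marked down
    server 192.168.0.3:8080 backup;
}
```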

III. Linux high-availability clusters

1. Build a Linux HA cluster with Pacemaker and Corosync

# Install the packages
yum install -y pacemaker pcs corosync

# Authenticate the nodes and bring up the cluster
pcs cluster auth node1 node2 -u hacluster -p password
pcs cluster setup --name my_cluster node1 node2
pcs cluster start --all

# Create the nginx resource
pcs resource create nginx ocf:heartbeat:nginx \
configfile=/etc/nginx/nginx.conf \
statusurl=http://localhost:8080/nginx_status \
force_stop=true

# Create the VIP resource
pcs resource create vip ocf:heartbeat:IPaddr2 \
ip=192.168.0.100 cidr_netmask=24 \
op monitor interval=30s

# Constrain placement: prefer node1, and keep the VIP on the
# same node as nginx, started after it
pcs constraint location nginx prefers node1=100
pcs constraint location vip prefers node1=100
pcs constraint colocation add vip with nginx INFINITY
pcs constraint order nginx then vip

2. Build a Linux HA cluster with Heartbeat and DRBD

# Install the heartbeat and drbd packages
yum install -y heartbeat drbd84-utils

# Configure heartbeat authentication
cp /usr/share/doc/heartbeat-3.0.5/authkeys /etc/ha.d/
chmod 600 /etc/ha.d/authkeys
echo "auth 1" >> /etc/ha.d/authkeys
echo "1 sha1 password" >> /etc/ha.d/authkeys

# Configure ha.cf
echo "logfile /var/log/ha-log" >> /etc/ha.d/ha.cf
echo "logfacility local0" >> /etc/ha.d/ha.cf
echo "keepalive 2" >> /etc/ha.d/ha.cf
echo "deadtime 15" >> /etc/ha.d/ha.cf
echo "bcast eth0" >> /etc/ha.d/ha.cf
echo "ucast eth1 192.168.1.2" >> /etc/ha.d/ha.cf  # peer address: use this line on drbd2
echo "ucast eth1 192.168.1.3" >> /etc/ha.d/ha.cf  # peer address: use this line on drbd1
echo "node drbd1 drbd2" >> /etc/ha.d/ha.cf

# Configure haresources
echo "drbd1 IPaddr::192.168.1.100/24/eth0/192.168.1.255 nginx" >> /etc/ha.d/haresources

# Enable heartbeat at boot and start it
systemctl enable heartbeat
systemctl start heartbeat

# Configure DRBD: define resource r0 once, in its own .res file
# (declaring "resource r0" in two files would make drbdadm refuse to start)
cat > /etc/drbd.d/r0.res <<'EOF'
resource r0 {
  protocol C;
  startup { wfc-timeout 15; degr-wfc-timeout 60; }
  disk {
    on-io-error detach;
  }
  net {
    cram-hmac-alg sha1;
    shared-secret "password";
    after-sb-0pri discard-zero-changes;
    after-sb-1pri discard-secondary;
    after-sb-2pri call-pri-lost-after-sb;
  }
  device /dev/drbd1;
  disk /dev/sdb1;
  meta-disk internal;
  on drbd1 {
    address 192.168.1.2:7788;
  }
  on drbd2 {
    address 192.168.1.3:7788;
  }
}
EOF

# Initialize the metadata (on both nodes), then start DRBD
drbdadm create-md r0
systemctl enable drbd
systemctl start drbd

# On the primary node only: promote r0, create the filesystem, and
# mount it. Do not put the DRBD device in /etc/fstab -- only the
# current primary may mount it, so leave mounting to the failover
# scripts (or do it by hand after a takeover).
drbdadm primary --force r0   # needed for the first promotion only
mkfs.xfs /dev/drbd1
mkdir /mnt/drbd1
mount /dev/drbd1 /mnt/drbd1
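Since only the current primary may mount the DRBD device, the cleaner setup is to let heartbeat promote DRBD and mount the filesystem during failover. A sketch of a fuller haresources line (drbddisk and Filesystem are resource scripts shipped with drbd-utils and heartbeat):

```
# /etc/ha.d/haresources -- on takeover, heartbeat runs these left to
# right: promote r0, mount it, bring up the VIP, then start nginx
drbd1 drbddisk::r0 Filesystem::/dev/drbd1::/mnt/drbd1::xfs IPaddr::192.168.1.100/24/eth0 nginx
```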

IV. nginx high-availability sessions

1. Share sessions with memcached

# Install the memcached service and the PHP extension
yum install -y memcached 
yum install -y php-pecl-memcached

# Edit php.ini
session.save_handler = memcached 
session.save_path = "127.0.0.1:11211"
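Note that a single memcached on 127.0.0.1 is itself a single point of failure. The php-memcached session handler accepts a comma-separated server list over which session data is distributed, so a more HA-minded setting would be (addresses illustrative):

```
session.save_handler = memcached
session.save_path = "192.168.0.1:11211,192.168.0.2:11211"
```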

# Edit nginx.conf
http {
    upstream myapp1 {
        server 192.168.0.1:8080 weight=5;
        server 192.168.0.2:8080;
    }

    server {
        listen       80;
        server_name  myapp.com;

        location / {
            proxy_pass  http://myapp1;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;

            # Sessions are stored in memcached via the php.ini settings
            # above, so any backend can serve any request; no sticky
            # sessions or cache-key tricks are needed in nginx.
        }
    }
}

2. Share sessions with Redis

# Install the Redis service and the PHP extension
yum install -y redis 
yum install -y php-pecl-redis

# Edit php.ini
session.save_handler = redis 
session.save_path = "tcp://127.0.0.1:6379"
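The phpredis session handler also accepts query parameters in save_path, for example for authentication and a connect timeout (values illustrative); for true high availability the Redis instance itself should be replicated, e.g. with Redis Sentinel:

```
session.save_handler = redis
session.save_path = "tcp://192.168.0.3:6379?auth=secret&timeout=2.5"
```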

# Edit nginx.conf
http {
    upstream myapp1 {
        server 192.168.0.1:8080 weight=5;
        server 192.168.0.2:8080;
    }

    server {
        listen       80;
        server_name  myapp.com;

        location / {
            proxy_pass  http://myapp1;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;

            # Sessions are stored in Redis via the php.ini settings
            # above, so any backend can serve any request; no sticky
            # sessions or cache-key tricks are needed in nginx.
        }
    }
}

Both approaches keep nginx as a plain reverse proxy and move session state into a shared, distributed cache, so any backend can serve any request and sessions survive the failure of an individual web server.