haproxy,nginx,jumpserver,tomcat
I. haproxy + nginx: layer-4 and layer-7 client IP passthrough
Configuring haproxy as a layer-4 proxy
How layer 4 works (in the pure model):
- The client initiates an HTTP request toward the proxy server
- The client completes a TCP connection with the proxy; based on its configured scheduling rules, the proxy hands the client off to the IP of a chosen web server
- The client then issues its HTTP request to that web server, establishes a connection with it, and the two communicate
- Once the client's request has been dispatched, the proxy does not replay the request to the chosen web server; in other words, no separate connection is built between the proxy and that web server.
Note: haproxy is a pseudo-layer-4 proxy, so even in layer-4 mode haproxy does open its own connection to the backend web server.
Characteristics: a layer-4 proxy is addressed as IP:port. The scheduling rules may steer a request to different web server IPs, but because every web server holds identical content, the client notices no difference.
Layer-4 topology:
- haproxy needs a Lua environment, so build Lua first.
Build Lua:
yum install -y gcc readline-devel
wget http://www.lua.org/ftp/lua-5.3.5.tar.gz
tar xf lua-5.3.5.tar.gz -C /usr/local
cd /usr/local/lua-5.3.5/
make linux test
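If the build succeeds, the interpreter sits in src/ (haproxy only needs the headers and library there); an optional quick check:
/usr/local/lua-5.3.5/src/lua -v   # should print: Lua 5.3.5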
Build haproxy:
yum -y install gcc openssl-devel pcre-devel systemd-devel
wget https://src.fedoraproject.org/repo/pkgs/haproxy/haproxy-2.2.6.tar.gz/sha512/b9afa4a4112dccaf192fce07b1cdbb1547060d998801595147a41674042741b62852f65a65aa9b2d033db8808697fd3a522494097710a19071fbb0c604544de5/haproxy-2.2.6.tar.gz
tar xf haproxy-2.2.6.tar.gz
cd haproxy-2.2.6
make ARCH=x86_64 TARGET=linux-glibc USE_PCRE=1 USE_OPENSSL=1 USE_ZLIB=1 USE_SYSTEMD=1 USE_LUA=1 LUA_INC=/usr/local/lua-5.3.5/src/ LUA_LIB=/usr/local/lua-5.3.5/src/
make install PREFIX=/haproxy
Add haproxy to the PATH:
echo 'PATH=/haproxy/sbin:$PATH' > /etc/profile.d/haproxy.sh
source /etc/profile.d/haproxy.sh
mkdir /haproxy/log
mkdir /haproxy/pid
mkdir /haproxy/etc
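The binary should now be on the PATH; a quick hedged check that Lua support really got compiled in:
haproxy -v                    # version banner confirms the binary runs
haproxy -vv | grep -i lua     # should show: Built with Lua version : Lua 5.3.5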
Add a systemd service:
vi /usr/lib/systemd/system/haproxy.service
[Unit]
Description=HAProxy Load Balancer
After=syslog.target network.target
[Service]
ExecStartPre=/haproxy/sbin/haproxy -f /haproxy/etc/haproxy.cfg -c -q
ExecStart=/haproxy/sbin/haproxy -Ws -f /haproxy/etc/haproxy.cfg -p /haproxy/pid/haproxy.pid
ExecReload=/bin/kill -USR2 $MAINPID
[Install]
WantedBy=multi-user.target
vi /haproxy/etc/haproxy.cfg   # create the configuration file
global
maxconn 100000
chroot /haproxy
stats socket /haproxy/haproxy.sock mode 600 level admin
#uid 99
#gid 99
user haproxy
group haproxy
daemon
#nbproc 4
#cpu-map 1 0
#cpu-map 2 1
#cpu-map 3 2
#cpu-map 4 3
pidfile /haproxy/pid/haproxy.pid
log 127.0.0.1 local2 info
defaults
option http-keep-alive
option forwardfor
maxconn 100000
mode http
timeout connect 300000ms
timeout client 300000ms
timeout server 300000ms
listen stats   # status page
mode http
bind 0.0.0.0:9999
stats enable
log global
stats uri /haproxy-status
stats auth admin:admin
useradd -r -s /sbin/nologin -d /haproxy/ haproxy   # create the haproxy account
chown -R haproxy:haproxy /haproxy
systemctl daemon-reload
systemctl start haproxy
Enable logging (haproxy works together with rsyslog):
vi /etc/rsyslog.conf
#### MODULES ####   enable these in the modules section
$ModLoad imudp
$UDPServerRun 514
$ModLoad imtcp
$InputTCPServerRun 514
#### RULES ####   define the log file path
local2.* /haproxy/log/haproxy.log
systemctl restart rsyslog   # restart the service
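A quick way to confirm rsyslog now routes local2 into the new file (logger just emits a test message):
logger -p local2.info "haproxy log path test"
tail -n1 /haproxy/log/haproxy.log   # the test line should appear here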
Visit http://192.168.116.132:9999/haproxy-status to check the status page and confirm haproxy was built and is running correctly.
Log in with user admin, password admin.
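The same check from a shell, using the credentials configured above:
curl -u admin:admin http://192.168.116.132:9999/haproxy-status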
- Build and install nginx:
wget http://nginx.org/download/nginx-1.18.0.tar.gz
yum install -y gcc pcre-devel openssl-devel zlib-devel   # install build dependencies
tar xf nginx-1.18.0.tar.gz
cd nginx-1.18.0
useradd -r -s /sbin/nologin nginx
Configure and build:
./configure --prefix=/nginx \
--user=nginx \
--group=nginx \
--with-http_ssl_module \
--with-http_v2_module \
--with-http_realip_module \
--with-http_stub_status_module \
--with-http_gzip_static_module \
--with-pcre \
--with-stream \
--with-stream_ssl_module \
--with-stream_realip_module
make && make install
echo 'PATH=/nginx/sbin:$PATH' >/etc/profile.d/nginx.sh
source /etc/profile.d/nginx.sh
mkdir /nginx/run
vi /nginx/conf/nginx.conf
pid /nginx/run/nginx.pid;   # change the pid line to this
chown -R nginx:nginx /nginx
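Optionally confirm the modules relied on later were compiled in:
nginx -V   # the "configure arguments:" line should list --with-stream and --with-http_realip_module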
Configure the service:
vi /usr/lib/systemd/system/nginx.service
[Unit]
Description=nginx - high performance web server
Documentation=http://nginx.org/en/docs/
After=network-online.target remote-fs.target nss-lookup.target
Wants=network-online.target
[Service]
Type=forking
PIDFile=/nginx/run/nginx.pid
ExecStart=/nginx/sbin/nginx -c /nginx/conf/nginx.conf
ExecReload=/bin/kill -s HUP $MAINPID
ExecStop=/bin/kill -s TERM $MAINPID
[Install]
WantedBy=multi-user.target
systemctl daemon-reload   # load the service file
- nginx1 configuration
hostnamectl set-hostname nginx1
vi /nginx/conf/nginx.conf   # delete the existing content and replace it with the following
user nginx;
worker_processes auto;
error_log /nginx/logs/error.log;
pid /nginx/run/nginx.pid;   # must match PIDFile= in the systemd unit
include /usr/share/nginx/modules/*.conf;
events {
worker_connections 1024;
}
http {
log_format main '$remote_addr - $remote_user [$time_local] "$request" '
'$status $body_bytes_sent "$http_referer" '
'"$http_user_agent" "$http_x_forwarded_for" "$proxy_protocol_addr" ';
# $proxy_protocol_addr logs the real client IP carried by the PROXY protocol
access_log /nginx/logs/access.log main;
sendfile on;
tcp_nopush on;
tcp_nodelay on;
keepalive_timeout 65;
types_hash_max_size 4096;
default_type application/octet-stream;
server {
listen 80 proxy_protocol; # accept the PROXY protocol header sent by the layer-4 proxy
listen [::]:80;
server_name _;
error_page 404 /404.html;
location = /404.html {
}
error_page 500 502 503 504 /50x.html;
location = /50x.html {
}
location / {
root /nginx/html;
index index.html;
}
}
}
echo nginx1-192.168.116.133 >/nginx/html/index.html
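A syntax check before the restart can save a failed unit start:
nginx -t -c /nginx/conf/nginx.conf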
systemctl restart nginx
- nginx2 configuration
hostnamectl set-hostname nginx2
vi /nginx/conf/nginx.conf   # delete the existing content and replace it with the following
user nginx;
worker_processes auto;
error_log /nginx/logs/error.log;
pid /nginx/run/nginx.pid;   # must match PIDFile= in the systemd unit
include /usr/share/nginx/modules/*.conf;
events {
worker_connections 1024;
}
http {
log_format main '$remote_addr - $remote_user [$time_local] "$request" '
'$status $body_bytes_sent "$http_referer" '
'"$http_user_agent" "$http_x_forwarded_for" "$proxy_protocol_addr" ';
# $proxy_protocol_addr logs the real client IP carried by the PROXY protocol
access_log /nginx/logs/access.log main;
sendfile on;
tcp_nopush on;
tcp_nodelay on;
keepalive_timeout 65;
types_hash_max_size 4096;
default_type application/octet-stream;
server {
listen 80 proxy_protocol; # accept the PROXY protocol header sent by the layer-4 proxy
listen [::]:80;
server_name _;
error_page 404 /404.html;
location = /404.html {
}
error_page 500 502 503 504 /50x.html;
location = /50x.html {
}
location / {
root /nginx/html;
index index.html;
}
}
}
echo nginx2-192.168.116.134 >/nginx/html/index.html
systemctl restart nginx
- haproxy: layer-4 proxying with real client IP forwarding
hostnamectl set-hostname haproxy
vi /haproxy/etc/haproxy.cfg   # append at the bottom of the file
listen webs
bind 192.168.116.132:80
mode tcp
balance static-rr
server web1 192.168.116.133 send-proxy
server web2 192.168.116.134 send-proxy
# send-proxy makes haproxy prepend the PROXY protocol header, forwarding the client IP at layer 4
systemctl restart haproxy
Test:
From the client, request the site repeatedly:
for i in {1..10};do curl http://192.168.116.132;done
Check the access log on the nginx servers:
tail /nginx/logs/access.log   # the passed-through client IP is now visible on the nginx servers
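To see what send-proxy actually puts on the wire, you can hand-craft a PROXY protocol v1 header and talk to one backend directly. A sketch: the spoofed client IP 10.0.0.99 is made up, and nc here comes from the nmap-ncat package:
printf 'PROXY TCP4 10.0.0.99 192.168.116.133 51234 80\r\nGET / HTTP/1.0\r\n\r\n' | nc 192.168.116.133 80
tail -n1 /nginx/logs/access.log   # $proxy_protocol_addr should now show 10.0.0.99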
Configuring haproxy as a layer-7 proxy
Layer-7 IP passthrough
How layer 7 works:
- The client initiates an HTTP request to the proxy server
- The client connects to the proxy; the proxy modifies the request message according to its scheduling rules and other configuration
- The proxy sends the modified request to the backend web server over its own connection
- The proxy receives the response from the web server and returns it to the client
Characteristics: a layer-7 proxy is an upgraded form of the layer-4 proxy. Access is based on IP:port/URI, so the proxy can schedule requests to different web servers depending on the URL, again without the client noticing.
After building and installing haproxy and nginx as above,
write the layer-7 haproxy configuration:
vi /haproxy/etc/haproxy.cfg
global
maxconn 100000
chroot /haproxy
stats socket /haproxy/haproxy.sock mode 600 level admin
#uid 99
#gid 99
user haproxy
group haproxy
daemon
#nbproc 4
#cpu-map 1 0
#cpu-map 2 1
#cpu-map 3 2
#cpu-map 4 3
pidfile /haproxy/pid/haproxy.pid
log 127.0.0.1 local2 info
defaults
option http-keep-alive
option forwardfor
maxconn 100000
mode http
timeout connect 300000ms
timeout client 300000ms
timeout server 300000ms
frontend webs   # define 2 ACLs, one per backend web server
bind 192.168.116.132:80
mode http
log global
acl acl_nginx1 url_dir nginx1.html
acl acl_nginx2 url_dir nginx2.html
use_backend server_nginx1 if acl_nginx1
use_backend server_nginx2 if acl_nginx2
backend server_nginx1
server web1 192.168.116.133
backend server_nginx2
server web2 192.168.116.134
systemctl restart haproxy
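One hedged note, not in the original config: a request that matches neither ACL has no backend, and haproxy answers it with a 503. If that is unwanted, a fallback can be named inside frontend webs, e.g.:
default_backend server_nginx1   # hypothetical fallback for unmatched URLs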
Configuration of the two nginx servers:
vi /nginx/conf/nginx.conf   # the file is identical on both nginx servers
user nginx;
worker_processes auto;
error_log /nginx/logs/error.log;
pid /nginx/run/nginx.pid;   # must match PIDFile= in the systemd unit
include /usr/share/nginx/modules/*.conf;
events {
worker_connections 1024;
}
http {
log_format main '$remote_addr - $remote_user [$time_local] "$request" '
'$status $body_bytes_sent "$http_referer" '
'"$http_user_agent" "$http_x_forwarded_for"';
access_log /nginx/logs/access.log main;
sendfile on;
tcp_nopush on;
tcp_nodelay on;
keepalive_timeout 65;
types_hash_max_size 4096;
default_type application/octet-stream;
server {
listen 80;
listen [::]:80;
server_name _;
error_page 404 /404.html;
location = /404.html {
}
error_page 500 502 503 504 /50x.html;
location = /50x.html {
}
location / {
root /nginx/html;
index index.html;
}
}
}
Create the page files on nginx1:
echo nginx1-test-192.168.116.133>/nginx/html/nginx1.html
echo nginx1-test-192.168.116.133>/nginx/html/nginx2.html
systemctl restart nginx
Create the page files on nginx2:
echo nginx2-test-192.168.116.134>/nginx/html/nginx1.html
echo nginx2-test-192.168.116.134>/nginx/html/nginx2.html
systemctl restart nginx
Client test:
Although both nginx servers hold the same pair of files, haproxy has already handled and modified the request, so each URL lands on the web server its ACL rule points at:
curl http://192.168.116.132/nginx1.html
curl http://192.168.116.132/nginx2.html
Check the layer-7 IP passthrough on nginx:
tail /nginx/logs/access.log
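At layer 7 the client IP travels in the X-Forwarded-For header that option forwardfor makes haproxy insert. As a sketch, assuming tcpdump is installed on a backend, you can watch the header arrive:
tcpdump -l -A -i any 'tcp port 80' | grep -i 'x-forwarded-for'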
II. Dynamically taking haproxy backend servers offline
Topology:
- Backend web1 configuration:
(For building httpd from source as part of LAMP, see the LAMP notes.)
Install httpd with yum as follows:
hostnamectl set-hostname web1
yum install -y httpd
echo http-192.168.116.133>/var/www/html/index.html
systemctl start httpd
- Backend web2 configuration:
hostnamectl set-hostname web2
yum install -y httpd
echo http-192.168.116.134>/var/www/html/index.html
systemctl start httpd
- With haproxy built as above
haproxy configuration:
yum -y install socat   # socat lets us change haproxy state at runtime via its admin socket
vi /haproxy/etc/haproxy.cfg
global
maxconn 100000
chroot /haproxy
stats socket /haproxy/haproxy.sock mode 600 level admin
#uid 99
#gid 99
user haproxy
group haproxy
daemon
#nbproc 4
#cpu-map 1 0
#cpu-map 2 1
#cpu-map 3 2
#cpu-map 4 3
pidfile /haproxy/pid/haproxy.pid
log 127.0.0.1 local2 info
defaults
option http-keep-alive
option forwardfor
maxconn 100000
mode http
timeout connect 300000ms
timeout client 300000ms
timeout server 300000ms
listen webs
bind 192.168.116.132:80
log global
mode http
server web1 192.168.116.133
server web2 192.168.116.134
systemctl restart haproxy
- Test
Keep requesting the site from the client:
while true;do curl http://192.168.116.132;sleep 1;done   # keep hitting the site
The site responds normally.
On the haproxy host, disable proxying to web2 (192.168.116.134) with a command; once web2 is disabled, only web1 answers the client's requests:
echo "disable server webs/web2" | socat stdio /haproxy/haproxy.sock
webs is the name defined by "listen webs" in haproxy.cfg
web2 corresponds to the web2 in "server web2 192.168.116.134"
Check on the haproxy host whether the configuration file changed:
tail -n 10 /haproxy/etc/haproxy.cfg
The file is unchanged, which shows socat modifies haproxy's state in memory only; when the haproxy service restarts, the configuration reverts.
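The same socket can bring the server back and report its state:
echo "enable server webs/web2" | socat stdio /haproxy/haproxy.sock    # put web2 back in rotation
echo "show servers state webs" | socat stdio /haproxy/haproxy.sock   # inspect the current backend state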
III. Using jumpserver to manage assets and MySQL (three groups — ops, dev, and test — each authorized for different assets and users)
Installing jumpserver with docker containers
Notes:
- Hardware requirements for jumpserver: at least 2 CPUs, 4G of RAM, and 50G of disk or more
- The mysql version must be newer than 5.6
1. Initialize the environment:
cd /etc/yum.repos.d/
yum install -y wget
mv CentOS-Base.repo CentOS-Base.repo.bak
wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo
wget http://mirrors.aliyun.com/repo/epel-7.repo
sed -i -e '/mirrors.cloud.aliyuncs.com/d' -e '/mirrors.aliyuncs.com/d' /etc/yum.repos.d/CentOS-Base.repo
yum clean all
yum makecache
systemctl stop firewalld
systemctl disable firewalld
sed -i 's/SELINUX=enforcing$/SELINUX=disabled/g' /etc/selinux/config
setenforce 0
2. Install docker
wget -O /etc/yum.repos.d/docker-ce.repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
yum install docker-ce-19.03.12-3.el7 docker-ce-cli-19.03.12-3.el7 -y
systemctl start docker
systemctl enable docker
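A quick sanity check on the pinned version:
docker version --format '{{.Server.Version}}'   # expect 19.03.12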
3. Create the mysql container
Create the configuration files:
mkdir -p /etc/mysql/mysql.conf.d/
mkdir -p /etc/mysql/conf.d/
vi /etc/mysql/mysql.conf.d/mysqld.cnf
[mysqld]
pid-file= /var/run/mysqld/mysqld.pid
socket= /var/run/mysqld/mysqld.sock
datadir= /var/lib/mysql
symbolic-links=0
character-set-server=utf8
vi /etc/mysql/conf.d/mysql.cnf
[mysql]
default-character-set=utf8
# create the mysql container
docker run -d -p 3306:3306 --name mysql --restart always \
-e MYSQL_ROOT_PASSWORD=123456 \
-e MYSQL_DATABASE=jumpserver \
-e MYSQL_USER=jumpserver \
-e MYSQL_PASSWORD=123456 \
-v /data/mysql:/var/lib/mysql \
-v /etc/mysql/mysql.conf.d/mysqld.cnf:/etc/mysql/mysql.conf.d/mysqld.cnf \
-v /etc/mysql/conf.d/mysql.cnf:/etc/mysql/conf.d/mysql.cnf \
mysql:5.7   # the image name was missing from the original command; any MySQL newer than 5.6 should work
yum install -y mysql   # install the mysql client
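Optionally confirm the database is reachable with the credentials jms_all will use (host IP as used throughout this article):
mysql -h192.168.116.130 -P3306 -uroot -p123456 -e 'SHOW DATABASES;' | grep jumpserver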
4. Create the redis container
docker run -d -p 6379:6379 --name redis --restart always redis:5.0.9
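Optionally confirm redis answers before wiring jumpserver to it:
docker exec redis redis-cli ping   # expect: PONG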
5. Create the jumpserver container
Generate two random strings to serve as jumpserver's secret key and bootstrap token, and keep both safe:
echo SECRET_KEY=`cat /dev/urandom | tr -dc A-Za-z0-9 | head -c 50` >jumpserver_key.txt
echo BOOTSTRAP_TOKEN=`cat /dev/urandom | tr -dc A-Za-z0-9 | head -c 16` >>jumpserver_key.txt
cat jumpserver_key.txt   # view the key and token
SECRET_KEY=mu9ghjm7ogBbNgO6S2cPC8I1qkcs2k6YPWdbxsAobnjaw90ORW
BOOTSTRAP_TOKEN=XKG61a9dslNI09p5
Fill in the key and token generated above plus the database credentials, then create the container:
docker run --name jms_all -d \
-v /opt/jumpserver/data:/opt/jumpserver/data \
-p 80:80 \
-p 2222:2222 \
--restart always \
-e SECRET_KEY=mu9ghjm7ogBbNgO6S2cPC8I1qkcs2k6YPWdbxsAobnjaw90ORW \
-e BOOTSTRAP_TOKEN=XKG61a9dslNI09p5 \
-e DB_HOST=192.168.116.130 \
-e DB_PORT=3306 \
-e DB_USER=root \
-e DB_PASSWORD=123456 \
-e DB_NAME=jumpserver \
-e REDIS_HOST=127.0.0.1 \
-e REDIS_PORT=6379 \
-e REDIS_PASSWORD='' \
--privileged=true \
jumpserver/jms_all:v2.4.4
6. Check the container status
docker ps -a
docker logs -f 7bc68f42982f   # follow the logs by container ID
When the log prints the hint "docker exec -it jms_all /bin/bash", the container is running normally.
7. Open the web console; the default user is admin, password admin
http://192.168.116.130/core/auth/login/
7.1 Create users
Create three users: yunwei1, kaifa1, test1
7.2 Create user groups
Create three groups (ops, dev, test) and bind the users yunwei1, kaifa1, test1 to them
7.3 Create assets
Prepare 3 virtual machines as the asset list.
On every VM create an admin account with the same username and password. It does not have to be root, but it must have full privileges; root is used here as the example.
On the jumpserver web page, add this admin account; jumpserver uses it to collect device information.
Add the 3 VMs to the asset list.
7.4 Create a system user for asset management
On the web page create a system user. This account (not root) is the one used to log in to the systems; it can be auto-pushed to the devices already in the asset list, where the account is then created automatically.
Note: if auto-push is not selected, the account must be created manually on every device in the asset list.
7.5 Test connectivity between the admin account and the assets
Test whether the admin account can reach the virtual machines.
A window pops up; if every item shows ok, the admin account can communicate with all devices in the asset list.
7.6 Test connectivity between the system user and the assets
Test the system user's connectivity to the VMs in the asset list.
Pushing is automatic but may lag; you can push manually and then re-test connectivity.
Push dialog
Test dialog
7.7 Grant the assets to the corresponding accounts
Under permission management, grant the 3 VMs in the asset list to yunwei1, kaifa1, and test1 respectively.
Note: do not select a whole node when assigning, or every VM under that node will be granted to the account.
7.8 Log in and verify the account-to-asset mapping
Log in to the jumpserver web page as yunwei1, kaifa1, and test1 to confirm the assignments succeeded.
Under command execution, run "id putong" to verify each account can manage only its assigned virtual machines.
IV. A script for one-click installation of Tomcat 8.5
vi tomcat8.5.sh
#!/bin/bash
. /etc/init.d/functions
INSTALL_JAVA_DIR=/usr/local
INSTALL_TOMCAT_DIR=/usr/local
java_install(){
#java8u202_install
wget https://repo.huaweicloud.com/java/jdk/8u202-b08/jdk-8u202-linux-x64.tar.gz --no-check-certificate &>/dev/null || {
action "JDK tarball download failed" false;exit; }
tar xf jdk-8u202-linux-x64.tar.gz -C ${INSTALL_JAVA_DIR}
cd ${INSTALL_JAVA_DIR}
ln -s ${INSTALL_JAVA_DIR}/jdk1.8.0_202/ ${INSTALL_JAVA_DIR}/jdk
cat > /etc/profile.d/java.sh <<EOF
export JAVA_HOME=${INSTALL_JAVA_DIR}/jdk
export PATH=\$PATH:${INSTALL_JAVA_DIR}/jdk/bin
export JRE_HOME=${INSTALL_JAVA_DIR}/jdk/jre
export CLASSPATH=${INSTALL_JAVA_DIR}/jdk/lib/:${INSTALL_JAVA_DIR}/jdk/lib
EOF
source /etc/profile.d/java.sh
if java -version &>/dev/null;then
action "java installed successfully"
else
action "java install failed" false
fi
}
tomcat_install() {
#tomcat8.5.73_install
wget https://mirrors.bfsu.edu.cn/apache/tomcat/tomcat-8/v8.5.73/bin/apache-tomcat-8.5.73.tar.gz --no-check-certificate &>/dev/null || {
action "tomcat tarball download failed" false;exit; }
tar xf apache-tomcat-8.5.73.tar.gz -C ${INSTALL_TOMCAT_DIR}
cd ${INSTALL_TOMCAT_DIR}
ln -s ${INSTALL_TOMCAT_DIR}/apache-tomcat-8.5.73/ tomcat
cat > /etc/profile.d/tomcat.sh <<EOF
PATH=${INSTALL_TOMCAT_DIR}/tomcat/bin:\$PATH
EOF
source /etc/profile.d/tomcat.sh
if id tomcat &>/dev/null;then
action "tomcat user already exists" false
else
useradd -r -s /sbin/nologin tomcat
action "tomcat user created"
fi
echo "JAVA_HOME=${INSTALL_JAVA_DIR}/jdk"> ${INSTALL_TOMCAT_DIR}/tomcat/conf/tomcat.conf
chown -R tomcat:tomcat ${INSTALL_TOMCAT_DIR}
cat > /lib/systemd/system/tomcat.service <<EOF
[Unit]
Description=Tomcat
#After=syslog.target network.target remote-fs.target nss-lookup.target
After=syslog.target network.target
[Service]
Type=forking
EnvironmentFile=${INSTALL_TOMCAT_DIR}/tomcat/conf/tomcat.conf
ExecStart=${INSTALL_TOMCAT_DIR}/tomcat/bin/startup.sh
ExecStop=${INSTALL_TOMCAT_DIR}/tomcat/bin/shutdown.sh
PrivateTmp=true
User=tomcat
Group=tomcat
[Install]
WantedBy=multi-user.target
EOF
systemctl daemon-reload
systemctl enable tomcat
systemctl start tomcat &> /dev/null && action "TOMCAT installed successfully" || {
action "TOMCAT install failed" false ; exit; }
}
java_install
tomcat_install
bash tomcat8.5.sh   # run the script
Access test:
http://192.168.116.130:8080/
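A few hedged post-install checks (paths and port as set up by the script):
systemctl status tomcat --no-pager
ss -ntlp | grep 8080              # tomcat's default HTTP connector
curl -I http://127.0.0.1:8080/    # expect an HTTP 200 from the tomcat landing page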