HAProxy performance tuning?
We are trying to find the best tuning options for haproxy for GET and POST requests that come from a client (not the users-browsing-the-web type of deal).
Running a jmeter test with 30k threads that consists of 5 calls to the servers, 1 user registration, and a few update calls. These push JSON data through the pipeline.
Here is our current config for haproxy:
global
    log /dev/log local0 #notice
    maxconn 14000
    tune.bufsize 128000
    user netcom
    group netcom
    pidfile /tmp/haproxy.pid
    daemon
    nbproc 7
    #debug
    #quiet

defaults
    log global
    mode http
    ### Options ###
    option httplog
    #option logasap
    option dontlog-normal
    #option dontlognull
    option redispatch
    option httpchk GET /?method=echo HTTP/1.1
    option tcp-smart-accept
    option tcp-smart-connect
    option http-server-close
    #option httpclose
    #option forceclose
    ### load balance strategy ###
    balance leastconn
    #balance roundrobin
    ### Other ###
    retries 5
    maxconn 14000
    backlog 100000
    ### Timeouts ###
    #timeout client 25s
    timeout client 60s
    #timeout connect 5s
    timeout connect 60s
    #timeout server 25s
    timeout server 60s
    timeout tunnel 3600s
    timeout http-keep-alive 1s
    #timeout http-request 15s
    timeout http-request 60s
    #timeout queue 30s
    timeout queue 30s
    timeout tarpit 60s

listen stats *:1212
    stats enable
    stats show-node
    stats show-desc xxxxProxy
    stats realm xxxxProxy\ Statistics
    stats auth xxxx:xxxx
    stats refresh 5s
    stats uri /

frontend http-in
    bind *:1111
    bind *:2222 ssl crt /home/netcom/nas/haproxy/xxxx.co.pem verify optional
    acl user_request url_reg method=user.register
    use_backend user_group if user_request
    default_backend other_group

backend user_group
    server n15 xxxx:8080 maxconn 3500 check port 8097 inter 2000
    server n2 xxxx:8080 maxconn 3500 check port 8097 inter 2000
    server n9 xxxx:8080 maxconn 3500 check port 8097 inter 2000
    server n14 xxxx:8080 maxconn 3500 check port 8097 inter 2000
    server n22 xxxx:8080 maxconn 3500 check port 8097 inter 2000
    server n24 xxxx:8080 maxconn 3500 check port 8097 inter 2000
    server n25 xxxx:8080 maxconn 3500 check port 8097 inter 2000
And here is our sysctl on CentOS 6:
net.ipv4.tcp_syncookies = 1
net.ipv4.tcp_synack_retries = 2
net.ipv4.ip_local_port_range = 1024 65535
net.ipv4.tcp_tw_recycle = 1
net.core.wmem_max = 12582912
net.core.rmem_max = 12582912
net.ipv4.tcp_rmem = 20480 174760 25165824
net.ipv4.tcp_wmem = 20480 174760 25165824
net.ipv4.tcp_window_scaling = 1
net.ipv4.tcp_timestamps = 1
net.ipv4.tcp_sack = 1
net.ipv4.tcp_no_metrics_save = 1
net.core.netdev_max_backlog = 10000
# Syn flood
net.ipv4.tcp_max_syn_backlog = 8096
net.core.somaxconn = 8096
Can anyone point out any glaring issues they can see off the top of their head? Unfortunately I do not have much expertise in haproxy, so I'm looking for help from the community.
What I also probably need to figure out is how to find the max connections the box can handle; it's on a 1 gig network and all the backends are on 1 gig as well. Here is a screenshot from the haproxy admin: http://grab.by/r12c. Note we are running it with more than one process, so this is a snapshot of just one of them, since the web admin, as far as I can tell, can't show everything. Any idea how to get the max connections haproxy is handling from the command line?
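For reference, the usual way to get those numbers from the command line seems to be haproxy's admin socket, assuming a stats socket is configured in the global section (there is none in the config above; the socket path here is just an example):

stats socket /var/run/haproxy.sock mode 600 level admin    # add to the global section, then reload haproxy
echo "show info" | socat stdio /var/run/haproxy.sock | grep -E 'Maxconn|CurrConns'

"show stat" on the same socket dumps per-frontend/backend session counters. Note that with nbproc each process keeps its own counters, so a socket only reports the process that owns it.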
Anyhow, just working through this, and I hope someone can give some tips or pointers.
Well, the first thing is that it doesn't seem like you should be running multiple haproxy processes. Typically you won't want to do that, especially while you are busy testing and trying to see the maxconn numbers. On a single core haproxy can far outperform the maxconn setting you have anyway.
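As a rough sketch of that suggestion (the socket path is my assumption, not part of the original config), the global section could drop nbproc and gain an admin socket so all the counters live in one process:

global
    log /dev/log local0
    maxconn 14000
    tune.bufsize 128000
    user netcom
    group netcom
    pidfile /tmp/haproxy.pid
    daemon
    # nbproc removed: one process means one set of stats and one maxconn budget to watch
    stats socket /var/run/haproxy.sock mode 600 level admin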
I went through Snapt's sysctls and you have most of them; I noticed it also adds these:
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_fin_timeout = 30
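If you want to test those two on the box, they can be applied on the fly and persisted the standard CentOS 6 way (nothing here is specific to this setup):

sysctl -w net.ipv4.tcp_tw_reuse=1
sysctl -w net.ipv4.tcp_fin_timeout=30
# to keep them across reboots, add the same two lines to /etc/sysctl.conf and run: sysctl -p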
Also, leastconn is not going to be worthwhile; I would suggest roundrobin, because you are doing HTTP traffic which consists of many small requests (though I guess that depends, to be honest). These are such minor things though.
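If you do try it, it's just a matter of swapping the two balance lines in the defaults section, e.g.:

    balance roundrobin
    #balance leastconn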