Xitrum Study Notes 19

Xitrum can be run directly:

Browser ------ Xitrum instance

Or behind a load balancer such as HAProxy, or a reverse proxy such as Apache or Nginx:

Browser ------ Load balancer/Reverse proxy -+---- Xitrum instance1
                                            +---- Xitrum instance2

Package directory

Run sbt/sbt xitrum-package to create the contents of the target/xitrum directory, ready to be deployed to a production server:

target/xitrum
  config
    [config files]
  public
    [static public files]
  lib
    [dependencies and packaged project file]
  script
    runner
    runner.bat
    scalive
    scalive.jar
    scalive.bat

Customize xitrum-package

By default, the sbt/sbt xitrum-package command is configured to copy the config, public, and script directories to target/xitrum.

To copy additional directories and files, modify build.sbt:

XitrumPackage.copy("config", "public", "script", "doc/README.txt", "etc.")

For more information, see https://github.com/xitrum-framework/xitrum-package

Connect a Scala console to a running JVM process

For live debugging in a production environment without any prior setup, you can use Scalive to connect a Scala console to a running JVM process.

Run scalive in the script directory:

script
  runner
  runner.bat
  scalive
  scalive.jar
  scalive.bat
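A session might look like the sketch below; the PID shown is hypothetical, and you should check Scalive's README for the exact invocation on your version:

```shell
cd target/xitrum/script

# Run with no arguments to list running JVM processes and their PIDs
./scalive

# Attach a Scala console to a JVM process by PID (1234 is hypothetical)
./scalive 1234
```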

Start Xitrum in production mode when the system starts

script/runner (for *nix) and script/runner.bat (for Windows) are scripts for running an object that has a main method.

Use it to start the web server in the production environment:

script/runner quickstart.Boot

You can modify the runner script to tune JVM settings.

To start Xitrum in the background when a Linux system boots, a simple way is to add a line like this to /etc/rc.local:

su - user_foo_bar -c /path/to/the/runner/script/above &

daemontools is another solution. To install it on CentOS, consult the daemontools documentation: http://cr.yp.to/daemontools.html
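With daemontools, each supervised service is a directory containing an executable run script. A minimal sketch (the service directory name, user, and paths below are assumptions; adapt them to your layout):

```shell
#!/bin/sh
# /service/xitrum/run -- supervised by daemontools' svscan
# Redirect stderr to stdout so a logger (e.g. multilog) can capture everything
exec 2>&1
# setuidgid (shipped with daemontools) drops privileges to the given user
exec setuidgid user_foo_bar /path/to/target/xitrum/script/runner quickstart.Boot
```

Make the script executable (chmod +x run); daemontools will start the service and automatically restart it if the process dies.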

Or use Supervisord (a process control system; see http://supervisord.org/ for details). Example /etc/supervisord.conf:

[program:my_app]
directory=/path/to/my_app
command=/path/to/my_app/script/runner quickstart.Boot
autostart=true
autorestart=true
startsecs=3
user=my_user
redirect_stderr=true
stdout_logfile=/path/to/my_app/log/stdout.log
stdout_logfile_maxbytes=10MB
stdout_logfile_backups=7
stdout_capture_maxbytes=1MB

stdout_events_enabled=false
environment=PATH=/usr/local/bin:/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/sbin:/opt/aws/bin:~/bin

Other options include runit and upstart.

Set up ports

By default, Xitrum listens on ports 8000 and 4430. You can change the ports in config/xitrum.conf.

To use port 80 instead of 8000 and 443 instead of 4430, update /etc/sysconfig/iptables with the following commands:

sudo su - root
chmod 700 /etc/sysconfig/iptables
iptables-restore < /etc/sysconfig/iptables
iptables -A PREROUTING -t nat -i eth0 -p tcp --dport 80 -j REDIRECT --to-port 8000
iptables -A PREROUTING -t nat -i eth0 -p tcp --dport 443 -j REDIRECT --to-port 4430
iptables -t nat -I OUTPUT -p tcp -d 127.0.0.1 --dport 80 -j REDIRECT --to-ports 8000
iptables -t nat -I OUTPUT -p tcp -d 127.0.0.1 --dport 443 -j REDIRECT --to-ports 4430
iptables-save -c > /etc/sysconfig/iptables
chmod 644 /etc/sysconfig/iptables
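To verify that the NAT redirect rules are in place (requires root; the output format varies slightly between iptables versions):

```shell
sudo iptables -t nat -L PREROUTING -n --line-numbers
sudo iptables -t nat -L OUTPUT -n --line-numbers
```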

If other processes are already listening on ports 80 and 443, stop them first:

sudo /etc/init.d/httpd stop
sudo chkconfig httpd off

Tune Linux for a large number of connections

References:

https://docs.basho.com/riak/kv/2.2.3/using/performance/ 

https://docs.basho.com/riak/kv/2.2.3/using/performance/amazon-web-services/

https://www.frozentux.net/ipsysctl-tutorial/chunkyhtml/

https://www.frozentux.net/ipsysctl-tutorial/chunkyhtml/tcpvariables.html

Increase the open file limit

Linux counts each connection as an open file. The default maximum number of open files is 1024. To increase this limit, edit /etc/security/limits.conf:

* soft nofile 1024000
* hard nofile 1024000

Log out and log in again for the configuration above to take effect. Run ulimit -n to check whether the change is in effect.
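A quick sanity check after logging back in; the soft and hard limits are reported separately:

```shell
# Soft limit: the value currently enforced for new processes
ulimit -Sn
# Hard limit: the ceiling the soft limit can be raised to without root
ulimit -Hn
```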

Tune the kernel

Following the article "A Million-user Comet Application with Mochiweb", edit /etc/sysctl.conf:

# General gigabit tuning
net.core.rmem_max = 16777216
net.core.wmem_max = 16777216
net.ipv4.tcp_rmem = 4096 87380 16777216
net.ipv4.tcp_wmem = 4096 65536 16777216
# This gives the kernel more memory for TCP
# which you need with many (100k+) open socket connections
net.ipv4.tcp_mem = 50576 64768 98152
# Backlog
net.core.netdev_max_backlog = 2048
net.core.somaxconn = 1024
net.ipv4.tcp_max_syn_backlog = 2048
net.ipv4.tcp_syncookies = 1
# If you run clients
net.ipv4.ip_local_port_range = 1024 65535
net.ipv4.tcp_tw_recycle = 1
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_fin_timeout = 10

Run sudo sysctl -p to apply the settings. No reboot is needed; the kernel should now be able to handle many more connections.

Note about backlog

TCP performs a 3-way handshake to establish a connection. When a remote client connects to the server, it sends a SYN packet, the server OS replies with a SYN-ACK packet, then the remote client sends an ACK packet and the connection is established. Xitrum gets the connection only when it is completely established.

According to the article Socket backlog tuning for Apache, connection timeouts happen because of SYN packet loss, which in turn happens because the web server's backlog queue is filled up with connections still sending SYN-ACK to slow clients.

According to the FreeBSD Handbook, the default value of 128 is typically too low for robust handling of new connections in a heavily loaded web server environment. For such environments, it is recommended to increase this value to 1024 or higher. Large listen queues also do a better job of avoiding Denial of Service (DoS) attacks.

Xitrum sets its backlog size to 1024 (memcached also uses this value), but you also need to tune the kernel as described above.
To check the backlog config:

cat /proc/sys/net/core/somaxconn

Or:

sysctl net.core.somaxconn

To change it temporarily:

sudo sysctl -w net.core.somaxconn=1024

HAProxy tips

To configure HAProxy for SockJS, see this example:

defaults
    mode http
    timeout connect 10s
    timeout client 10h # Set to a long time to avoid idle WebSocket connections being closed
    timeout server 10h # Set to long time to avoid ERR_INCOMPLETE_CHUNKED_ENCODING on Chrome
frontend xitrum_with_discourse
    bind 0.0.0.0:80
    option forwardfor
    acl is_discourse path_beg /forum
    use_backend discourse if is_discourse
    default_backend xitrum
backend xitrum
    server srv_xitrum 127.0.0.1:8000
backend discourse
    server srv_discourse 127.0.0.1:3000

To reload the HAProxy configuration file without restarting, see:

https://serverfault.com/questions/165883/is-there-a-way-to-add-more-backend-server-to-haproxy-without-restarting-haproxy
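One common approach is HAProxy's -sf flag, which starts a new process and tells the old one to finish serving its existing connections before exiting. A sketch; the config and PID file paths are assumptions for your system:

```shell
sudo haproxy -f /etc/haproxy/haproxy.cfg -sf $(cat /var/run/haproxy.pid)
```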

HAProxy is simpler to use than Nginx. Xitrum serves static files quickly, so you don't need Nginx's static file feature.

Nginx tips

If you use the WebSocket or SockJS features of Xitrum and want to run Xitrum behind Nginx 1.2, you must install a module such as nginx_tcp_proxy_module.

Nginx 1.3+ supports WebSocket natively.

By default, Nginx uses HTTP/1.0 for reverse proxying. If the backend server returns chunked responses, you need to tell Nginx to use HTTP/1.1:

location / {
  proxy_http_version 1.1;
  proxy_set_header Connection "";
  proxy_pass http://127.0.0.1:8000;
}

The documentation on keeping persistent HTTP connections between the browser and the server also says to set:

proxy_set_header Connection "";
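For keep-alive connections between Nginx and the backend specifically, the Nginx docs pair proxy_http_version 1.1 and an empty Connection header with a keepalive pool in the upstream block. A sketch; the upstream name and pool size here are arbitrary:

```nginx
upstream xitrum_backend {
  server 127.0.0.1:8000;
  keepalive 32;  # keep up to 32 idle connections to the backend per worker
}

server {
  listen 80;

  location / {
    proxy_http_version 1.1;
    proxy_set_header Connection "";
    proxy_pass http://xitrum_backend;
  }
}
```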