How to avoid the server becoming unresponsive when a PHP script fails

Question:

I can't find an answer for this; I'm probably not searching with the right words.

When a PHP script fails (an error in the script, an infinite loop, endless SQL calls, etc.) the whole server goes "busy". It's as if all the resources are taken up trying to execute that script. Restarting Apache/nginx brings the server back to normal.

How can this be avoided? Setting a timeout for the script won't solve it, because even with a 10-second timeout the server will be unresponsive to everybody for those 10 seconds.

Is there any way to avoid this happening? Yes, fixing the script issues will stop the problem, but I'm sure there is a way to protect against this on the server side.

An example just came to mind. This script used to get the country code by calling a service on that website. For some reason the website would randomly stop responding to us, so the script would wait forever to receive the file contents.

$getcountry = file_get_contents('http://ip-api.com/php/' . getUserIP());

Thanks


Setting a timeout for the script won't solve it, because even with a 10-second timeout the server will be unresponsive to everybody for those 10 seconds.

Not if your server is set up properly. Setting the PHP timeout is the correct way to handle runaway scripts (aside from fixing the actual cause of the problem). Fatal script errors will terminate immediately and not take up server resources.
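As a rough sketch of what that looks like in practice (the 10-second value is just an example), the execution limit can be set in php.ini or from within the script:

ini_set('max_execution_time', '10');  // php.ini equivalent: max_execution_time = 10
set_time_limit(10);                   // same limit, applied per script

Note that on Unix this limit only counts time spent executing the script itself; time spent waiting on sockets or database calls is not included, which is why slow external services also need their own timeouts (see the end of this answer).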

Your post seems to assume there is only one instance of a PHP script running at any time. This should not be the case. Your web server should be starting new processes or threads, up to some configured limit, for each web request handled by PHP. Or, if you are using PHP-FPM, each request is handed off to a pool of PHP processes. In no case should your whole server be locked up by one single request.

Now if your code or server has serious issues, then all requests may take too long and hang up further requests. The only solution is to fix the root of the problem and keep a reasonable timeout as a failsafe.

For Apache these are some of the settings you will want to check: ServerLimit, StartServers, MaxClients. For PHP-FPM you would start with the max_children setting.
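As a very rough illustration (the numbers below are arbitrary and must be tuned to your memory and traffic, not copied), the relevant directives look like this:

# Apache with mpm_prefork
StartServers      5
ServerLimit       150
MaxClients        150

; PHP-FPM pool config (e.g. www.conf)
pm = dynamic
pm.max_children = 20
pm.start_servers = 5
pm.min_spare_servers = 5
pm.max_spare_servers = 10

With limits like these, a few hung requests only tie up the workers they occupy; the remaining workers keep serving other requests.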

This script used to get the country code by calling a service on that website. For some reason the website would randomly stop responding to us, so the script would wait forever to receive the file contents.

This is a very common problem and must be accounted for in your code. You can never trust that a third-party service will respond promptly. Offloading requests like this to background processes (e.g. cron jobs) is best. But if it needs to happen as part of a normal page request, you should always set a reasonable timeout. For your specific example using file_get_contents, reduce the socket timeout:

ini_set('default_socket_timeout', 10);  // 10 seconds
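If you prefer not to change the global default, a per-request timeout via a stream context does the same thing for just this call (a sketch based on the line from your question; getUserIP() is assumed to be your own helper):

$context = stream_context_create([
    'http' => ['timeout' => 10],  // give up on the remote service after 10 seconds
]);
$getcountry = file_get_contents('http://ip-api.com/php/' . getUserIP(), false, $context);

if ($getcountry === false) {
    // the lookup timed out or failed; degrade gracefully instead of hanging the page
    $getcountry = null;
}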