Simulating multi-threaded web scraping with curl (curl_multi_*)

  When fetching multiple URLs with curl I used to just loop over them one by one. Recently I found that the curl_multi_* family of functions can simulate multi-threading. Comparing the two, if only a few URLs are requested the two approaches take roughly the same time, but once the URL list gets longer the gap becomes very noticeable.

  First, the plain for-loop approach:

<?php
// Baseline: plain for loop, one request after another
$start = microtime(true);

header('Content-type:text/html;charset=utf-8');

$arrs = [
    'https://www.yahoo.com/',
    'http://www.jtthink.com/',
    'https://www.hao123.com/',
    'http://www.cnblogs.com/loveyouyou616/',
    'http://www.qq.com/',
    'http://www.sina.com.cn/',
    'http://www.163.com/',
    'https://www.yahoo.com/',
    'http://www.jtthink.com/',
    'https://www.hao123.com/',
    'http://www.cnblogs.com/loveyouyou616/',
    'http://www.qq.com/',
    'http://www.sina.com.cn/',
    'http://www.163.com/',
    'https://www.yahoo.com/',
    'http://www.jtthink.com/',
    'https://www.hao123.com/',
    'http://www.cnblogs.com/loveyouyou616/',
    'http://www.qq.com/',
    'http://www.sina.com.cn/',
    'http://www.163.com/'
];

$headers = array(
    'User-Agent:Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/32.0.1700.107 Safari/537.36',
);

foreach ($arrs as $i => $url) {
    $ch = curl_init($url);
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
    curl_setopt($ch, CURLOPT_HTTPHEADER, $headers);
    curl_setopt($ch, CURLOPT_HEADER, 0);
    curl_setopt($ch, CURLOPT_TIMEOUT, 20);

    // strpos() returns 0 (falsy) for a match at the start of the string,
    // so the comparison has to be explicit.
    if (strpos($url, 'https') === 0) {
        curl_setopt($ch, CURLOPT_SSL_VERIFYPEER, false);
        curl_setopt($ch, CURLOPT_SSL_VERIFYHOST, 2);
    }

    $con = curl_exec($ch);
    curl_close($ch);
    var_dump($con);
}

$end = microtime(true) - $start;

echo '<br/>';
echo $end;  // about 19.002983093262 s on average

  Now the same requests sent with curl_multi_*, so all URLs go out at once:

<?php
// This model fires all the URL requests at once, but its drawback is that every
// request has to finish before the data can be processed, one handle at a time.
$start = microtime(true);

header('Content-type:text/html;charset=utf-8');

$arrs = [
    'https://www.yahoo.com/',
    'http://www.jtthink.com/',
    'https://www.hao123.com/',
    'http://www.cnblogs.com/loveyouyou616/',
    'http://www.qq.com/',
    'http://www.sina.com.cn/',
    'http://www.163.com/',
    'https://www.yahoo.com/',
    'http://www.jtthink.com/',
    'https://www.hao123.com/',
    'http://www.cnblogs.com/loveyouyou616/',
    'http://www.qq.com/',
    'http://www.sina.com.cn/',
    'http://www.163.com/',
    'https://www.yahoo.com/',
    'http://www.jtthink.com/',
    'https://www.hao123.com/',
    'http://www.cnblogs.com/loveyouyou616/',
    'http://www.qq.com/',
    'http://www.sina.com.cn/',
    'http://www.163.com/'
];

$headers = array(
    'User-Agent:Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/32.0.1700.107 Safari/537.36',
);

$mh = curl_multi_init();

foreach ($arrs as $i => $url) {
    $conn[$i] = curl_init($url);
    curl_setopt($conn[$i], CURLOPT_RETURNTRANSFER, 1);
    curl_setopt($conn[$i], CURLOPT_HTTPHEADER, $headers);
    curl_setopt($conn[$i], CURLOPT_HEADER, 0);
    curl_setopt($conn[$i], CURLOPT_TIMEOUT, 20);

    if (strpos($url, 'https') === 0) {
        curl_setopt($conn[$i], CURLOPT_SSL_VERIFYPEER, false);
        curl_setopt($conn[$i], CURLOPT_SSL_VERIFYHOST, 2);
    }
    curl_multi_add_handle($mh, $conn[$i]);
}

$active = null;
/*
 * Written like this, the loop easily drives CPU usage to 100%:
 *
 * do {
 *     $n = curl_multi_exec($mh, $active);
 * } while ($active);
 */

// Rewritten version
/*
do {
    $mrc = curl_multi_exec($mh, $active);
} while ($mrc == CURLM_CALL_MULTI_PERFORM);

while ($active and $mrc == CURLM_OK) {
    if (curl_multi_select($mh) != -1) {
        do {
            $mrc = curl_multi_exec($mh, $active);
        } while ($mrc == CURLM_CALL_MULTI_PERFORM);
    }
}
*/

// Simplest scheme: curl_multi_select() blocks until there is activity,
// so the loop does not spin at 100% CPU.
do {
    curl_multi_exec($mh, $running);
    curl_multi_select($mh);
} while ($running > 0);

// Collect the contents
foreach ($arrs as $i => $url) {
    $res[$i] = curl_multi_getcontent($conn[$i]);
    var_dump($res[$i]);
    curl_multi_remove_handle($mh, $conn[$i]);
    curl_close($conn[$i]);
    // All HTTP requests have finished by now; append each response to the file in turn.
    file_put_contents('curl_multi.log', $res[$i] . "\n\n\n", FILE_APPEND);
}

curl_multi_close($mh);

$end = microtime(true) - $start;

echo '<br/>';
echo $end; // about 10.091157913208 s on average

  Running the two scripts shows that the curl_multi_* approach is clearly more efficient.

  The model above still has a drawback, though: the total time is bound by the slowest request, and nothing can be processed until every HTTP request has finished; only then are the responses handled one by one. One way around this is to read finished transfers off the multi handle as they complete, as sketched below.
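
  A minimal sketch of that idea (not part of the two scripts above): curl_multi_info_read() reports each transfer the moment it finishes, so a response can be handled while the slower requests are still in flight. The short URL list and the curl_multi_stream.log file name are just placeholders for illustration.

<?php
// Sketch: process each response as soon as its transfer completes,
// instead of waiting for the whole batch to come back.
$urls = [
    'https://www.hao123.com/',
    'http://www.qq.com/',
    'http://www.163.com/',
];

$mh = curl_multi_init();

foreach ($urls as $url) {
    $ch = curl_init($url);
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
    curl_setopt($ch, CURLOPT_TIMEOUT, 20);
    curl_multi_add_handle($mh, $ch);
}

do {
    curl_multi_exec($mh, $running);
    curl_multi_select($mh, 1.0); // block until there is activity, at most 1 s

    // Drain the message queue: each entry is a transfer that just finished
    // and can be processed right away.
    while ($info = curl_multi_info_read($mh)) {
        $ch  = $info['handle'];
        $url = curl_getinfo($ch, CURLINFO_EFFECTIVE_URL);
        if ($info['result'] === CURLE_OK) {
            // Placeholder handling: append the body to a log file.
            file_put_contents('curl_multi_stream.log', $url . "\n" . curl_multi_getcontent($ch) . "\n\n", FILE_APPEND);
        } else {
            echo 'failed: ' . $url . ' (' . curl_error($ch) . ')<br/>';
        }
        curl_multi_remove_handle($mh, $ch);
        curl_close($ch);
    }
} while ($running > 0);

curl_multi_close($mh);

  Compared with the "collect everything at the end" loop above, the only structural change is draining curl_multi_info_read() inside the driving loop, so per-response work (parsing, writing files) overlaps with the downloads that are still running.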