How to Download Large Files in Python?

I was using a Python script to download a batch of files, but a few of them kept failing. I added some error handling that skipped the failures and saved the failing URLs to a file for later.
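For context, the loop involved looks roughly like the sketch below. The file names and overall structure here are assumptions for illustration, not the exact script:

import requests

# Hypothetical sketch: read a list of URLs, download each one,
# and collect the links that fail ('url_list.txt' and
# 'failed_urls.txt' are placeholder names).
failed = []
with open('url_list.txt') as f:
    urls = [line.strip() for line in f if line.strip()]

for url in urls:
    print("Downloading")
    print(url)
    try:
        r = requests.get(url)
        with open(url.split('/')[-1], 'wb') as out:
            out.write(r.content)
    except Exception as e:
        print("Error happened:", e)
        failed.append(url)

with open('failed_urls.txt', 'w') as f:
    f.write('\n'.join(failed))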

The next day, I sat down to figure out what had actually gone wrong.


One of the errors looked like this:

PS C:\temp> python .\DownloadFromList.py
Downloading
https://github.com/Unity-Technologies/ScriptableRenderPipeline/archive/master.zip
Traceback (most recent call last):
  File ".\DownloadFromList.py", line 20, in <module>
    r = requests.get(url)
  File "C:\Users\Administrator\AppData\Local\Programs\Python\Python37-32\lib\site-packages\requests\api.py", line 72, in get
    return request('get', url, params=params, **kwargs)
  File "C:\Users\Administrator\AppData\Local\Programs\Python\Python37-32\lib\site-packages\requests\api.py", line 58, in request
    return session.request(method=method, url=url, **kwargs)
  File "C:\Users\Administrator\AppData\Local\Programs\Python\Python37-32\lib\site-packages\requests\sessions.py", line 512, in request
    resp = self.send(prep, **send_kwargs)
  File "C:\Users\Administrator\AppData\Local\Programs\Python\Python37-32\lib\site-packages\requests\sessions.py", line 644, in send
    history = [resp for resp in gen] if allow_redirects else []
  File "C:\Users\Administrator\AppData\Local\Programs\Python\Python37-32\lib\site-packages\requests\sessions.py", line 644, in <listcomp>
    history = [resp for resp in gen] if allow_redirects else []
  File "C:\Users\Administrator\AppData\Local\Programs\Python\Python37-32\lib\site-packages\requests\sessions.py", line 222, in resolve_redirects
    **adapter_kwargs
  File "C:\Users\Administrator\AppData\Local\Programs\Python\Python37-32\lib\site-packages\requests\sessions.py", line 662, in send
    r.content
  File "C:\Users\Administrator\AppData\Local\Programs\Python\Python37-32\lib\site-packages\requests\models.py", line 827, in content
    self._content = b''.join(self.iter_content(CONTENT_CHUNK_SIZE)) or b''
MemoryError

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File ".\DownloadFromList.py", line 28, in <module>
    print("Error happened:", e.message)
AttributeError: 'MemoryError' object has no attribute 'message'
PS C:\temp>
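Two separate things went wrong here. First, requests.get(url) without stream=True pulls the whole response body into memory (that is the r.content line in the traceback), and on a 32-bit Python 3.7 a sufficiently large zip is enough to exhaust the address space and raise MemoryError. Second, the error handler itself then crashed: Python 3 exceptions have no .message attribute, so the print line blew up. The handler part is a one-line fix (a sketch, assuming the original except block looked like this):

try:
    r = requests.get(url)
except Exception as e:
    # In Python 3, print the exception itself (or str(e)); there is no e.message
    print("Error happened:", e)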


A quick search online turned up a solution.

To guard against the reference page disappearing one day (it has happened plenty of times before), I am copying the code here for safekeeping (plagiarism, sure, but with the source cited you can copy with a clear conscience).


Using requests

import requests

def download_file(url):
    local_filename = url.split('/')[-1]
    # NOTE the stream=True parameter: the body is fetched lazily,
    # chunk by chunk, instead of being loaded into memory at once
    r = requests.get(url, stream=True)
    with open(local_filename, 'wb') as f:
        for chunk in r.iter_content(chunk_size=1024):
            if chunk:  # filter out keep-alive new chunks
                f.write(chunk)
                f.flush()
    return local_filename
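As a side note, current versions of requests also let you write the same pattern with a context manager plus raise_for_status(), which releases the connection when done and surfaces HTTP errors early. This variant is my own, not from the reference:

import requests

def download_file(url):
    local_filename = url.split('/')[-1]
    # stream=True again keeps the body out of memory; the with-block
    # makes sure the connection is released when we are done
    with requests.get(url, stream=True) as r:
        r.raise_for_status()  # turn HTTP 4xx/5xx into an exception
        with open(local_filename, 'wb') as f:
            for chunk in r.iter_content(chunk_size=8192):
                f.write(chunk)
    return local_filename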


Using urllib2

import urllib2

response = urllib2.urlopen(url)  # url is the link to download
with open('filename', 'wb') as f:  # binary mode, so the bytes are written unmangled
    while True:
        tmp = response.read(1024)  # read 1 KB at a time
        if not tmp:
            break
        f.write(tmp)
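One caveat: urllib2 exists only on Python 2, while the traceback above is from Python 3.7, where the same functionality lives in urllib.request. A rough Python 3 equivalent of the chunked copy (my own sketch):

import shutil
import urllib.request

url = 'https://github.com/Unity-Technologies/ScriptableRenderPipeline/archive/master.zip'
with urllib.request.urlopen(url) as response, open('master.zip', 'wb') as f:
    # copyfileobj streams in fixed-size chunks instead of reading everything at once
    shutil.copyfileobj(response, f, length=1024)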


References

==================

https://ox0spy.github.io/post/python/python-download-large-file-without-out-of-memory/

The code quoted in that reference originally comes from these two links:

http://stackoverflow.com/questions/16694907/how-to-download-large-file-in-python-with-requests-py

http://stackoverflow.com/questions/27053028/how-to-download-large-file-without-memoryerror-in-python