Iterating over objects in an AWS S3 bucket
The script I have is working, but the bucket I am scanning over is massive and times out after a while. What can I do to make this more efficient or start from a specific location?
import boto3

s3 = boto3.resource('s3')
b = s3.Bucket('my_bucket')
for obj in b.objects.all():
    # Open the file, run some RegEx to find some data. If it's found, output to a log file
    pass
The first problem I have is the size of the bucket: it's about 1.5 million objects. My code opens each text file, runs some RegEx against it, and if there's a match it outputs the object name and what was found.
After running the script for about an hour, it gets through about 40k objects before throwing an error:
requests.exceptions.ConnectionError: ('Connection aborted.', BadStatusLine("''",))
or
object at 0x109e82d50>: Failed to establish a new connection: [Errno 60] Operation timed out',))
The keys it's searching through are in alphabetical order, so let's say it makes it through the "E" section and then times out. I want to start with objects beginning with "F".
If you have a large number of objects in your Amazon S3 bucket, then objects.all() is not an efficient iteration method, since it tries to load them all into memory simultaneously.
Instead, use list_objects_v2() to page through the objects in groups of 1000, then call it again with the ContinuationToken that was returned.
You will effectively need an outer loop calling list_objects_v2(), and within that another for loop that iterates through each object.
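Here is a minimal sketch of that structure, using the bucket name from the question; the regex and the way matches are reported are hypothetical placeholders. Note that list_objects_v2() also accepts a StartAfter parameter, which covers the second half of your question: passing StartAfter='F' makes the listing begin at the first key that sorts after "F".

import re

import boto3

s3 = boto3.client('s3')

# Hypothetical pattern; 'my_bucket' comes from the question.
pattern = re.compile(r'find-me')

def scan_bucket(bucket, start_after=None):
    kwargs = {'Bucket': bucket}
    if start_after:
        kwargs['StartAfter'] = start_after  # resume point, e.g. 'F'
    while True:
        # Each call returns up to 1000 keys.
        response = s3.list_objects_v2(**kwargs)
        for obj in response.get('Contents', []):
            body = s3.get_object(Bucket=bucket, Key=obj['Key'])['Body'].read()
            match = pattern.search(body.decode('utf-8', errors='replace'))
            if match:
                print(obj['Key'], match.group(0))
        # Keep paging until the listing is no longer truncated.
        if not response.get('IsTruncated'):
            break
        kwargs['ContinuationToken'] = response['NextContinuationToken']
        kwargs.pop('StartAfter', None)  # only needed on the first call

# Resume with keys that sort after "F", per the question.
scan_bucket('my_bucket', start_after='F')

As an alternative, boto3's built-in paginator (s3.get_paginator('list_objects_v2')) will handle the ContinuationToken bookkeeping for you.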