Reading Large Files in Python

Posted by weixin_33807284 on 2018-07-04

When working with small text files we usually just use the .read(), .readline() and .readlines() methods, but once a file is 2 GB, 5 GB or even larger, these methods will simply blow up memory.

For ordinary files: if the file is small, reading it in one go with read() is the most convenient; if you cannot be sure how large the file is, calling read(size) repeatedly is safer; if it is a configuration file, readlines() is the most convenient.
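For reference, a minimal sketch of those three small-file idioms ('small.cfg' is only a placeholder name):

# Small-file idioms; 'small.cfg' is a placeholder path.
with open('small.cfg') as f:
    whole_text = f.read()        # read(): slurp the entire file at once

with open('small.cfg') as f:
    first_block = f.read(4096)   # read(size): read a bounded amount per call

with open('small.cfg') as f:
    lines = f.readlines()        # readlines(): a list of lines, all in memory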

Methods for reading large files:

1. Read In Chunks

Read the large file in small chunks instead of all at once:

def read_in_chunks(filePath, chunk_size=1024 * 1024):
    """
    Lazy function (generator) to read a file piece by piece.
    Default chunk size: 1 MB.
    You can set your own chunk size.
    """
    # The with block guarantees the file is closed once the generator
    # is exhausted or garbage-collected.
    with open(filePath) as file_object:
        while True:
            chunk_data = file_object.read(chunk_size)
            if not chunk_data:
                break
            yield chunk_data

if __name__ == "__main__":
    filePath = './path/filename'
    for chunk in read_in_chunks(filePath):
        process(chunk)  # <do something with chunk>
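As a usage sketch, the generator above could be driven like this, simply totalling how much text was read ('big.log' is a placeholder path and process() stands for your own handler):

# Hedged usage sketch of read_in_chunks(); 'big.log' is a placeholder path.
total_chars = 0
for chunk in read_in_chunks('big.log', chunk_size=4 * 1024 * 1024):
    total_chars += len(chunk)
print('read %d characters in total' % total_chars)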

2. Using with open()

The with statement opens and closes the file for you, even when an exception is thrown inside the block. for line in f treats the file object f as an iterator, which automatically uses buffered I/O and memory management, so you do not have to worry about large files.

# If the file is line-based
with open(...) as f:
    for line in f:
        process(line) # <do something with line>
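For example, a hedged sketch of line-based streaming: count the lines of a log that contain a keyword ('app.log' and the 'ERROR' marker are illustrative assumptions, not from the original post):

# Stream a log line by line and count matches; names are illustrative.
error_count = 0
with open('app.log') as f:
    for line in f:
        if 'ERROR' in line:
            error_count += 1
print('found %d matching lines' % error_count)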

3. Processing with fileinput

import fileinput

for line in fileinput.input(['sum.log']):
    print(line)
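fileinput can also chain several files and report where each line came from; a small sketch, assuming two placeholder log files:

import fileinput

# Sketch: iterate over several files in sequence; the file names are placeholders.
with fileinput.input(files=['sum.log', 'err.log']) as f:
    for line in f:
        print(fileinput.filename(), fileinput.filelineno(), line.rstrip())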

References:
http://www.zhidaow.com/post/python-read-big-file
https://www.cnblogs.com/wulaa/p/7852592.html

A quick summary of the different read patterns:

# Read the whole file into memory at once (only suitable for small files)
f = open(filename, 'r')
f.read()

#1: read fixed-size blocks with read(size)
while True:
    block = f.read(1024)
    if not block:
        break


#2: read one line at a time with readline()
while True:
    line = f.readline()
    if not line:
        break

#3: readlines() loads every line into a list, all in memory
for line in f.readlines():
    pass


#4: iterate over the file object directly (best choice for large files)
with open(filename,'r') as file:
    for line in file:
        pass

#5: fetch a specific line (here the second) with linecache
import linecache
txt = linecache.getline(filename,2)
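One caveat: linecache reads and caches the whole file, so for a truly large file a lazier alternative (my own addition, not from the referenced posts) is itertools.islice:

# Alternative sketch: fetch the second line lazily with itertools.islice.
from itertools import islice

with open(filename) as f:
    second_line = next(islice(f, 1, 2), None)  # islice positions are 0-based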
