I got bitten by multithreading recently: I had not realized that class variables are shared across threads, and I had also overlooked a memory-release problem, so the process kept growing larger and larger.
1. Under multithreading, a Python class variable is shared by all threads (a short sketch follows this list).
2. Under multithreading, the memory held by a class variable is not fully released when its contents are dropped.
3. The part of the memory that is not released can be reused by later allocations.
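A minimal sketch of point 1 (the Shared class and writer function are made-up names for illustration, not part of the demo below): every thread writes into the same dict object bound to the class, so one thread's updates are visible to every other thread and to the main thread.

import threading


class Shared:
    # class variable: a single dict object shared by all threads and instances
    data = {}


def writer(name):
    # each thread writes into the very same dict
    Shared.data[name] = threading.get_ident()


threads = [threading.Thread(target=writer, args=('t%d' % i,)) for i in range(3)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# all three entries show up: the class variable was shared, not copied per thread
print(Shared.data)

The full program below exercises all three points: it stores large lists into a class variable from ten threads, releases them, and then stores smaller lists again.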
import threading
import time


class Test:
    # class variable: a single dict shared by every thread
    cache = {}

    @classmethod
    def get_value(cls, key):
        value = cls.cache.get(key, [])
        return len(value)

    @classmethod
    def store_value(cls, key, value):
        # the first writer for a key creates the list, later writers extend the same one
        if key not in cls.cache:
            cls.cache[key] = list(range(value))
        else:
            cls.cache[key].extend(range(value))
        return len(cls.cache[key])

    @classmethod
    def release_value(cls, key):
        if key in cls.cache:
            cls.cache.pop(key)
            return True
        return False

    @classmethod
    def print_cache(cls):
        print('print_cache:')
        for key in cls.cache:
            print('key: %d, value: %d' % (key, len(cls.cache[key])))


def worker(number, value):
    # ten threads map onto five keys, so pairs of threads share an entry
    key = number % 5
    print('threading: %d, store_value: %d' % (number, Test.store_value(key, value)))
    time.sleep(10)
    print('threading: %d, release_value: %s' % (number, Test.release_value(key)))


if __name__ == '__main__':
    thread_num = 10

    # first round: large values, watch the process memory grow
    thread_pool = []
    for i in range(thread_num):
        th = threading.Thread(target=worker, args=[i, 1000000])
        thread_pool.append(th)
        thread_pool[i].start()
    for thread in thread_pool:
        thread.join()
    Test.print_cache()
    time.sleep(10)   # the cache is empty here, yet the process does not shrink back fully

    # second round: smaller values, the retained memory gets reused
    thread_pool = []
    for i in range(thread_num):
        th = threading.Thread(target=worker, args=[i, 100000])
        thread_pool.append(th)
        thread_pool[i].start()
    for thread in thread_pool:
        thread.join()
    Test.print_cache()
    time.sleep(10)
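Points 2 and 3 can also be watched directly through the process's resident memory while a class variable is filled, emptied, and filled again. The sketch below is my own illustration and assumes a Linux system, since it reads VmRSS from /proc/self/status; the Cache class and rss_kb helper are hypothetical names, not part of the demo above.

import os


def rss_kb():
    # Linux-only: report the process's resident set size in kB
    with open('/proc/self/status') as f:
        for line in f:
            if line.startswith('VmRSS:'):
                return int(line.split()[1])
    return -1


class Cache:
    data = {}


print('start: %d kB' % rss_kb())

Cache.data['k'] = list(range(10000000))   # allocate a large list of ints
print('after store: %d kB' % rss_kb())

Cache.data.pop('k')                       # drop the only reference
print('after release: %d kB' % rss_kb())

Cache.data['k'] = list(range(10000000))   # allocate again
print('after re-store: %d kB' % rss_kb())

How much memory is handed back after the release step varies with the CPython version and the platform's C allocator, so treat the numbers as a rough signal rather than a guarantee. The pattern the author describes is that the resident size does not fall all the way back after the release, while the second store grows it little or not at all because the retained memory is reused.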
Summary

Unless it is read-only, do not keep shared data in a class member variable: it will be shared across threads, and it is hard to release.
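A hedged sketch of the alternative (my own example; the worker signature here is deliberately different from the demo above): keep the large temporary data in local variables so it is dropped when each thread's function returns, and pass any genuinely shared container in explicitly, guarding writes with a lock so the sharing is visible at the call site.

import threading


def worker(number, value, results, lock):
    # the big list lives in a local variable and is freed when the function returns
    data = list(range(value))
    with lock:
        # publish only the small summary, not the large object itself
        results[number] = len(data)


if __name__ == '__main__':
    results = {}
    lock = threading.Lock()
    threads = [threading.Thread(target=worker, args=[i, 100000, results, lock])
               for i in range(10)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    print(results)

Read-only data is still fine as a class member; the trouble starts when class-level state is mutated and expected to go away on its own.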