Translator's Note
Tornado 4.3 was released on November 6, 2015. This version officially supports Python 3.5's async and await keywords, and Tornado compiled with older CPython versions can likewise use these two keywords, which is undoubtedly a step forward. It is also the last version to support Python 2.6 and Python 3.2; compatibility with them will be removed in subsequent releases. There is currently no Chinese documentation for Tornado 4.3 online, so to help more people discover and learn it I started this translation project. Anyone interested is welcome to join in: the project lives at tornado-zh on Github, and the translated documentation can be read directly on Read the Docs. Issues or PRs are welcome.
Example – a concurrent web spider
Tornado's tornado.queues module implements an asynchronous producer/consumer pattern for coroutines, analogous to the pattern implemented for threads by the Python standard library's queue module.
A coroutine that yields Queue.get pauses until there is an item in the queue. If the queue has a maximum size set, a coroutine that yields Queue.put pauses until there is room for another item.
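For instance, here is a minimal sketch of both pauses (the names example and consumer are made up for illustration and are not part of the spider below):

from tornado import gen, ioloop, queues

@gen.coroutine
def example():
    q = queues.Queue(maxsize=2)   # bounded queue: put pauses once two items are waiting

    @gen.coroutine
    def consumer():
        while True:
            item = yield q.get()  # pauses until the queue has an item
            print('consumed %r' % item)
            yield gen.sleep(0.01) # pretend to do some asynchronous work

    consumer()                    # start the consumer; it blocks on get() right away
    for i in range(5):
        yield q.put(i)            # pauses whenever the queue is already full

ioloop.IOLoop.current().run_sync(example)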
A Queue maintains a count of unfinished tasks, which begins at zero. Queue.put increments the count; Queue.task_done decrements it.
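As a rough illustration of the counter (again a made-up snippet, not part of the spider below), yield q.join() resumes only after task_done has been called once for every put:

from tornado import gen, ioloop, queues

@gen.coroutine
def example():
    q = queues.Queue()
    for i in range(3):
        yield q.put(i)       # each put raises the unfinished-task count: 1, 2, 3

    @gen.coroutine
    def drain():
        while True:
            yield q.get()
            q.task_done()    # each task_done lowers the count: 2, 1, 0

    drain()
    yield q.join()           # resumes once the count is back at zero

ioloop.IOLoop.current().run_sync(example)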
In the web-spider example here, the queue begins containing only base_url. When a worker fetches a page it parses the links and puts new ones in the queue, then calls Queue.task_done to decrement the counter once. Eventually, a worker fetches a page whose URLs have all been seen before, and there is also no work left in the queue. Thus that worker's call to Queue.task_done decrements the counter to zero. The main coroutine, which is waiting for Queue.join, is unpaused and finishes.
import time
from datetime import timedelta

try:
    from HTMLParser import HTMLParser
    from urlparse import urljoin, urldefrag
except ImportError:
    from html.parser import HTMLParser
    from urllib.parse import urljoin, urldefrag

from tornado import httpclient, gen, ioloop, queues

base_url = 'http://www.tornadoweb.org/en/stable/'
concurrency = 10


@gen.coroutine
def get_links_from_url(url):
    """Download the page at `url` and parse it for links.

    Returned links have had the fragment after `#` removed, and have been made
    absolute so, e.g. the URL `gen.html#tornado.gen.coroutine` becomes
    `http://www.tornadoweb.org/en/stable/gen.html`.
    """
    try:
        response = yield httpclient.AsyncHTTPClient().fetch(url)
        print('fetched %s' % url)

        html = response.body if isinstance(response.body, str) \
            else response.body.decode()
        urls = [urljoin(url, remove_fragment(new_url))
                for new_url in get_links(html)]
    except Exception as e:
        print('Exception: %s %s' % (e, url))
        raise gen.Return([])

    raise gen.Return(urls)


def remove_fragment(url):
    pure_url, frag = urldefrag(url)
    return pure_url


def get_links(html):
    class URLSeeker(HTMLParser):
        def __init__(self):
            HTMLParser.__init__(self)
            self.urls = []

        def handle_starttag(self, tag, attrs):
            href = dict(attrs).get('href')
            if href and tag == 'a':
                self.urls.append(href)

    url_seeker = URLSeeker()
    url_seeker.feed(html)
    return url_seeker.urls


@gen.coroutine
def main():
    q = queues.Queue()
    start = time.time()
    fetching, fetched = set(), set()

    @gen.coroutine
    def fetch_url():
        current_url = yield q.get()
        try:
            if current_url in fetching:
                return

            print('fetching %s' % current_url)
            fetching.add(current_url)
            urls = yield get_links_from_url(current_url)
            fetched.add(current_url)

            for new_url in urls:
                # Only follow links beneath the base URL
                if new_url.startswith(base_url):
                    yield q.put(new_url)

        finally:
            q.task_done()

    @gen.coroutine
    def worker():
        while True:
            yield fetch_url()

    q.put(base_url)

    # Start workers, then wait for the work queue to be empty.
    for _ in range(concurrency):
        worker()
    yield q.join(timeout=timedelta(seconds=300))
    assert fetching == fetched
    print('Done in %d seconds, fetched %s URLs.' % (
        time.time() - start, len(fetched)))


if __name__ == '__main__':
    import logging
    logging.basicConfig()
    io_loop = ioloop.IOLoop.current()
    io_loop.run_sync(main)
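To try the spider, one way (the file name is only a suggestion) is to save the script as webspider.py and run it with Python; it crawls the Tornado documentation site starting from base_url with up to 10 concurrent workers and prints each URL as it is fetched.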