Introduction
This article collects notes on how to use Colly, its overall code and architecture design, and a number of usage examples.
Colly is a crawler framework written in Go. It is not a finished end-to-end product; instead it offers expressiveness and flexibility comparable to its Python counterparts such as BeautifulSoup or Scrapy.
The name Colly is short for Collector, and the Collector is also the core of Colly.
The Colly Official Docs are fairly thin and no longer very current; most of the activity happens on GitHub.
Concepts
Architecture
Conceptually, Colly's design is split into two layers: a core layer and a parsing layer.
- Collector: the core implementation of Colly. This component handles network communication and invokes the registered event callbacks while a Collector job runs.
- Parser: this layer is more of an abstraction and is not explicitly described in the official docs. It is provided by goquery and htmlquery, which parse the fetched pages into jQuery-like objects, so the HTML can be queried with both XPath and CSS selectors.
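As a rough illustration of this parsing layer, the sketch below reads the goquery selection that Colly exposes through HTMLElement.DOM inside an OnHTML callback; the URL and the div.article / h1 selectors are placeholders, not taken from any real site.
package main

import (
    "fmt"

    "github.com/gocolly/colly"
)

func main() {
    c := colly.NewCollector()

    // The CSS selector is resolved by the parsing layer (goquery)
    c.OnHTML("div.article", func(e *colly.HTMLElement) {
        // e.DOM is a *goquery.Selection, so jQuery-style traversal is available
        title := e.DOM.Find("h1").First().Text()
        fmt.Println("Title:", title)
    })

    // Placeholder URL for illustration only
    c.Visit("https://example.com/")
}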
In general, the workflow lifecycle of a crawler roughly consists of:
- Building the client
- Sending the request
- Receiving the response
- Parsing the response data
- Processing the extracted data
- Persisting the results
Colly wraps these concepts: an event is registered for each step, and the data is cleaned and processed through those events. Abstractly speaking, Colly is oriented around the process rather than the object.
Event
From the concepts above, you can see that Colly is an event-based crawler: developers register event callbacks to drive the whole pipeline.
Colly provides the following event handlers:
- OnRequest: called before a request is made
- OnError: called when an error occurs during the request
- OnResponseHeaders: called after the response headers are received
- OnResponse: called after the response is received
- OnHTML: called right after OnResponse if the received content is HTML
- OnXML: called right after OnHTML if the received content is HTML or XML
- OnScraped: called after the OnXML callbacks
- OnHTMLDetach: unregisters an OnHTML callback; once detached, callbacks that have not yet run will no longer be executed
- OnXMLDetach: unregisters an OnXML callback; once detached, callbacks that have not yet run will no longer be executed
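A minimal sketch of how several of these handlers fire, in order, for a single request; https://example.com/ is only used here as a stand-in target.
package main

import (
    "fmt"

    "github.com/gocolly/colly"
)

func main() {
    c := colly.NewCollector()

    c.OnRequest(func(r *colly.Request) {
        fmt.Println("1. OnRequest:", r.URL)
    })
    // OnError only fires if the request fails
    c.OnError(func(r *colly.Response, err error) {
        fmt.Println("OnError:", err)
    })
    c.OnResponse(func(r *colly.Response) {
        fmt.Println("2. OnResponse:", r.StatusCode)
    })
    c.OnHTML("title", func(e *colly.HTMLElement) {
        fmt.Println("3. OnHTML:", e.Text)
    })
    c.OnScraped(func(r *colly.Response) {
        fmt.Println("4. OnScraped:", r.Request.URL)
    })

    c.Visit("https://example.com/")
}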
Utilities
Basic usage
package main

import (
    "fmt"

    "github.com/gocolly/colly"
)

func main() {
    // Instantiate default collector
    c := colly.NewCollector(
        // Visit only domains: hackerspaces.org, wiki.hackerspaces.org
        colly.AllowedDomains("hackerspaces.org", "wiki.hackerspaces.org"),
    )

    // On every a element which has href attribute call callback
    c.OnHTML("a[href]", func(e *colly.HTMLElement) {
        link := e.Attr("href")
        // Print link
        fmt.Printf("Link found: %q -> %s\n", e.Text, link)
        // Visit link found on page
        // Only those links are visited which are in AllowedDomains
        c.Visit(e.Request.AbsoluteURL(link))
    })

    // Before making a request print "Visiting ..."
    c.OnRequest(func(r *colly.Request) {
        fmt.Println("Visiting", r.URL.String())
    })

    // Start scraping on https://hackerspaces.org
    c.Visit("https://hackerspaces.org/")
}
Error handling
package main

import (
    "fmt"

    "github.com/gocolly/colly"
)

func main() {
    // Create a collector
    c := colly.NewCollector()

    // Set HTML callback
    // Won't be called if error occurs
    c.OnHTML("*", func(e *colly.HTMLElement) {
        fmt.Println(e)
    })

    // Set error handler
    c.OnError(func(r *colly.Response, err error) {
        fmt.Println("Request URL:", r.Request.URL, "failed with response:", r, "\nError:", err)
    })

    // Start scraping
    c.Visit("https://definitely-not-a.website/")
}
Working with local files
words.html
<!DOCTYPE html>
<html lang="en">
<head>
    <meta charset="UTF-8">
    <title>Document title</title>
</head>
<body>
    <p>List of words</p>
    <ul>
        <li>dark</li>
        <li>smart</li>
        <li>war</li>
        <li>cloud</li>
        <li>park</li>
        <li>cup</li>
        <li>worm</li>
        <li>water</li>
        <li>rock</li>
        <li>warm</li>
    </ul>
    <footer>footer for words</footer>
</body>
</html>
package main

import (
    "fmt"
    "net/http"

    "github.com/gocolly/colly/v2"
)

func main() {
    t := &http.Transport{}
    t.RegisterProtocol("file", http.NewFileTransport(http.Dir(".")))

    c := colly.NewCollector()
    c.WithTransport(t)

    words := []string{}

    c.OnHTML("li", func(e *colly.HTMLElement) {
        words = append(words, e.Text)
    })

    c.Visit("file://./words.html")

    for _, p := range words {
        fmt.Printf("%s\n", p)
    }
}
Using a proxy switcher
With a ProxySwitcher, requests can be routed through a pool of proxy IPs. Only a round-robin switcher ships with Colly, though; if you need a different balancing algorithm you have to implement it yourself, as shown in the sketch after the example below.
package main

import (
    "bytes"
    "log"

    "github.com/gocolly/colly"
    "github.com/gocolly/colly/proxy"
)

func main() {
    // Instantiate default collector
    c := colly.NewCollector(colly.AllowURLRevisit())

    // Rotate two socks5 proxies
    rp, err := proxy.RoundRobinProxySwitcher("socks5://127.0.0.1:1337", "socks5://127.0.0.1:1338")
    if err != nil {
        log.Fatal(err)
    }
    c.SetProxyFunc(rp)

    // Print the response
    c.OnResponse(func(r *colly.Response) {
        log.Printf("Proxy Address: %s\n", r.Request.ProxyURL)
        log.Printf("%s\n", bytes.Replace(r.Body, []byte("\n"), nil, -1))
    })

    // Fetch httpbin.org/ip five times
    for i := 0; i < 5; i++ {
        c.Visit("https://httpbin.org/ip")
    }
}
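If round-robin is not enough, a custom strategy can be plugged in through SetProxyFunc, which accepts any colly.ProxyFunc. The sketch below defines a hypothetical randomProxySwitcher that picks one of the configured proxies at random; the proxy addresses are the same placeholders used above.
package main

import (
    "math/rand"
    "net/http"
    "net/url"

    "github.com/gocolly/colly"
)

// randomProxySwitcher returns a colly.ProxyFunc that picks a proxy at random
// instead of rotating through them round-robin.
func randomProxySwitcher(proxies ...*url.URL) colly.ProxyFunc {
    return func(_ *http.Request) (*url.URL, error) {
        return proxies[rand.Intn(len(proxies))], nil
    }
}

func main() {
    p1, _ := url.Parse("socks5://127.0.0.1:1337")
    p2, _ := url.Parse("socks5://127.0.0.1:1338")

    c := colly.NewCollector(colly.AllowURLRevisit())
    // Plug the custom selection strategy into the collector
    c.SetProxyFunc(randomProxySwitcher(p1, p2))

    c.Visit("https://httpbin.org/ip")
}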
Random delay
Rate limits and random delays let you shape the crawler's traffic pattern so it is less likely to be detected and blocked by anti-bot mechanisms.
package main

import (
    "fmt"
    "time"

    "github.com/gocolly/colly"
    "github.com/gocolly/colly/debug"
)

func main() {
    url := "https://httpbin.org/delay/2"

    // Instantiate default collector
    c := colly.NewCollector(
        // Attach a debugger to the collector
        colly.Debugger(&debug.LogDebugger{}),
        colly.Async(true),
    )

    // Limit the number of threads started by colly to two
    // when visiting links which domains' matches "*httpbin.*" glob
    c.Limit(&colly.LimitRule{
        DomainGlob:  "*httpbin.*",
        Parallelism: 2,
        RandomDelay: 5 * time.Second,
    })

    // Start scraping in four threads on https://httpbin.org/delay/2
    for i := 0; i < 4; i++ {
        c.Visit(fmt.Sprintf("%s?n=%d", url, i))
    }
    // Start scraping on https://httpbin.org/delay/2
    c.Visit(url)
    // Wait until threads are finished
    c.Wait()
}
Multi-threaded request queue
package main

import (
    "fmt"

    "github.com/gocolly/colly"
    "github.com/gocolly/colly/queue"
)

func main() {
    url := "https://httpbin.org/delay/1"

    // Instantiate default collector
    c := colly.NewCollector(colly.AllowURLRevisit())

    // create a request queue with 2 consumer threads
    q, _ := queue.New(
        2, // Number of consumer threads
        &queue.InMemoryQueueStorage{MaxSize: 10000}, // Use default queue storage
    )

    c.OnRequest(func(r *colly.Request) {
        fmt.Println("visiting", r.URL)
        if r.ID < 15 {
            r2, err := r.New("GET", fmt.Sprintf("%s?x=%v", url, r.ID), nil)
            if err == nil {
                q.AddRequest(r2)
            }
        }
    })

    for i := 0; i < 5; i++ {
        // Add URLs to the queue
        q.AddURL(fmt.Sprintf("%s?n=%d", url, i))
    }
    // Consume URLs
    q.Run(c)
}
Async
By default, Colly works synchronously. Asynchronous mode can be enabled with the Async option. In async mode, we have to call Wait to wait for the Collector to finish its work.
package main

import (
    "fmt"

    "github.com/gocolly/colly/v2"
)

func main() {
    urls := []string{
        "http://webcode.me",
        "https://example.com",
        "http://httpbin.org",
        "https://www.perl.org",
        "https://www.php.net",
        "https://www.python.org",
        "https://code.visualstudio.com",
        "https://clojure.org",
    }

    c := colly.NewCollector(
        colly.Async(),
    )

    c.OnHTML("title", func(e *colly.HTMLElement) {
        fmt.Println(e.Text)
    })

    for _, url := range urls {
        c.Visit(url)
    }

    c.Wait()
}
Maximum depth
Depth controls how many levels of links, counted from the entry URL, are followed when the visited pages themselves contain further links. The default is 0, which means no depth limit.
package main

import (
    "fmt"

    "github.com/gocolly/colly"
)

func main() {
    // Instantiate default collector
    c := colly.NewCollector(
        // MaxDepth is 1, so only the links on the scraped page
        // are visited, and no further links are followed
        colly.MaxDepth(1),
    )

    // On every a element which has href attribute call callback
    c.OnHTML("a[href]", func(e *colly.HTMLElement) {
        link := e.Attr("href")
        // Print link
        fmt.Println(link)
        // Visit link found on page
        e.Request.Visit(link)
    })

    // Start scraping on https://en.wikipedia.org
    c.Visit("https://en.wikipedia.org/")
}