Crawling NetEase News Pages with Jsoup + HtmlUnit

Liam_Fang_, published 2019-01-14

1. First, why use HtmlUnit at all? Isn't Jsoup alone enough?

With Jsoup alone, fetching a page is a one-liner, about as simple as it gets:

Document docu1 = Jsoup.connect(url).get();

However, this only works for static pages: Jsoup fetches the raw HTML and never executes any JavaScript, so on a dynamically rendered page the content you want simply never shows up in the result. That is why I turned to HtmlUnit.
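To see the limitation concretely: Jsoup only parses the markup the server delivers. A minimal sketch (the HTML string here is made up for illustration) shows that content a page would inject via a script is simply absent from the parsed document:

```java
import org.jsoup.Jsoup;
import org.jsoup.nodes.Document;

public class StaticVsDynamic {
    // Parse server-delivered markup and count the news items Jsoup can see
    static int visibleNewsItems(String html) {
        Document doc = Jsoup.parse(html);
        return doc.select("#news li").size();
    }

    public static void main(String[] args) {
        // The list is empty in the raw HTML; a script would only fill it
        // in at runtime, in the browser -- and Jsoup never runs scripts.
        String html = "<html><body>"
                + "<ul id='news'></ul>"
                + "<script>/* fills #news li client-side */</script>"
                + "</body></html>";
        System.out.println(visibleNewsItems(html)); // prints 0
    }
}
```

HtmlUnit closes exactly this gap: it behaves like a headless browser, running the page's JavaScript before handing the rendered markup over.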

The full code is below. The method getHtmlFromUrl(String url) returns a Document object, from which the desired content can then be extracted with Jsoup's usual selection methods.

For a more detailed explanation, see the reference article linked at the end of this post.

import org.jsoup.Jsoup;
import org.jsoup.nodes.Document;

import com.gargoylesoftware.htmlunit.WebClient;
import com.gargoylesoftware.htmlunit.html.HtmlPage;

public class HtmlUnitUtil {
    public static Document getHtmlFromUrl(String url) throws Exception {
        WebClient webClient = new WebClient();
        // Run the page's JavaScript so dynamically rendered content is present
        webClient.getOptions().setJavaScriptEnabled(true);
        // CSS and ActiveX are not needed for scraping
        webClient.getOptions().setCssEnabled(false);
        webClient.getOptions().setActiveXNative(false);
        // Don't abort on script errors or non-200 status codes
        webClient.getOptions().setThrowExceptionOnScriptError(false);
        webClient.getOptions().setThrowExceptionOnFailingStatusCode(false);
        webClient.getOptions().setTimeout(10000);
        try {
            HtmlPage htmlPage = webClient.getPage(url);
            // Give background JavaScript up to 10 s to finish
            webClient.waitForBackgroundJavaScript(10000);
            // Hand the rendered markup to Jsoup for convenient selection
            return Jsoup.parse(htmlPage.asXml());
        } finally {
            webClient.close();
        }
    }
}

In the snippet below, url is the address of each page you want to crawl.

for (String url : ls) {
    try {
        Document docu1 = HtmlUnitUtil.getHtmlFromUrl(url);
        // Each news entry on the list page carries the class "hot_text"
        Elements lis = docu1.getElementsByClass("hot_text");
        // The section (module) name shown in the page header
        Elements first_span = docu1.select("#list_wrap > div.list_content > div.area.baby_list_title > h2 > a");
        for (Element e : lis) {
            if (e.getElementsByTag("a").isEmpty()) {
                continue;
            }
            Element e_a = e.getElementsByTag("a").get(0);
            // News title and link; the href is protocol-relative, so prepend "http:"
            String title = e_a.text();
            String newsUrl = "http:" + e_a.attr("href");
            count++;
            String moduleName = first_span.get(0).text();
            System.out.println(title + "(" + moduleName + "):" + newsUrl);
        }
    } catch (Exception e1) {
        e1.printStackTrace();
    }
}

The code above performs the crawl, printing one line per article: the title, its section name, and the article URL.

The Maven dependencies are as follows:
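The loop assumes that `ls` (the list of page URLs to crawl) and the counter `count` already exist. A minimal driver might look like the following sketch; the URLs are placeholders, not real NetEase list pages:

```java
import java.util.Arrays;
import java.util.List;

public class CrawlerDriver {
    // Placeholder list-page URLs; substitute real NetEase section pages here
    static List<String> buildSeedUrls() {
        return Arrays.asList(
                "https://example.com/section-a",
                "https://example.com/section-b");
    }

    public static void main(String[] args) {
        List<String> ls = buildSeedUrls();
        int count = 0; // incremented inside the loop for each article printed
        // ... run the for (String url : ls) crawl loop from above here ...
        System.out.println("Seed pages: " + ls.size() + ", articles so far: " + count);
    }
}
```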

<dependency>
    <groupId>net.sourceforge.htmlunit</groupId>
    <artifactId>htmlunit</artifactId>
    <version>2.18</version>
</dependency>
<dependency>
    <groupId>net.sourceforge.htmlunit</groupId>
    <artifactId>htmlunit-core-js</artifactId>
    <version>2.9</version>
    <scope>compile</scope>
</dependency>
<dependency>
    <groupId>commons-logging</groupId>
    <artifactId>commons-logging-api</artifactId>
    <version>1.1</version>
</dependency>
<!-- https://mvnrepository.com/artifact/commons-collections/commons-collections -->
<dependency>
    <groupId>commons-collections</groupId>
    <artifactId>commons-collections</artifactId>
    <version>3.2</version>
</dependency>
<!-- https://mvnrepository.com/artifact/commons-io/commons-io -->
<dependency>
    <groupId>commons-io</groupId>
    <artifactId>commons-io</artifactId>
    <version>2.5</version>
</dependency>
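Note that the list above omits Jsoup itself, even though both snippets use it. A matching dependency would be something like the following; the version number is an assumption (any release current at the time of writing should work):

```xml
<dependency>
    <groupId>org.jsoup</groupId>
    <artifactId>jsoup</artifactId>
    <version>1.11.3</version>
</dependency>
```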


Give it a try if you're interested.

Reference: https://blog.csdn.net/gx304419380/article/details/80619043
