Web Crawlers for Data Mining - Basics

心意乱 2018-10-21


Add the following Maven dependencies to the project:

<!-- HTML parsing -->
<dependency>
    <groupId>org.jsoup</groupId>
    <artifactId>jsoup</artifactId>
    <version>1.8.3</version>
</dependency>

<!-- HTTP requests -->
<dependency>
    <groupId>org.apache.httpcomponents</groupId>
    <artifactId>httpclient</artifactId>
    <version>4.5.6</version>
</dependency>

Sending a request with HttpClient to fetch page data

HttpGet httpGet = new HttpGet("https://www.baidu.com/");

HttpHost proxy = new HttpHost("125.70.13.77", 8080); // send the request through a proxy server
httpGet.setConfig(RequestConfig.custom()
                  .setSocketTimeout(30000)  // socket (read) timeout
                  .setConnectTimeout(30000) // connection timeout
                  .setProxy(proxy)          // route the request through the proxy
                  .build());

CloseableHttpClient httpClient = HttpClientBuilder.create().build();
HttpClientContext context = HttpClientContext.create();

CloseableHttpResponse response = httpClient.execute(httpGet, context);

HttpEntity entity = response.getEntity();
String html = EntityUtils.toString(entity, "utf-8");

System.out.println(html);
  1. The request is sent through a proxy server rather than directly from the local machine so that our own IP does not get blacklisted by sites with anti-crawler measures.
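One thing the snippet above never does is release the connection. A minimal sketch of the same request using try-with-resources, plus a status-code check before parsing the body (the proxy address is just a placeholder), might look like this:

```java
import org.apache.http.HttpHost;
import org.apache.http.client.config.RequestConfig;
import org.apache.http.client.methods.CloseableHttpResponse;
import org.apache.http.client.methods.HttpGet;
import org.apache.http.impl.client.CloseableHttpClient;
import org.apache.http.impl.client.HttpClientBuilder;
import org.apache.http.util.EntityUtils;

public class ProxyFetch {
    public static void main(String[] args) throws Exception {
        HttpGet httpGet = new HttpGet("https://www.baidu.com/");
        HttpHost proxy = new HttpHost("125.70.13.77", 8080); // placeholder proxy
        httpGet.setConfig(RequestConfig.custom()
                .setSocketTimeout(30000)   // socket (read) timeout
                .setConnectTimeout(30000)  // connection timeout
                .setProxy(proxy)
                .build());

        // try-with-resources closes both the client and the response automatically
        try (CloseableHttpClient httpClient = HttpClientBuilder.create().build();
             CloseableHttpResponse response = httpClient.execute(httpGet)) {
            // only parse the body on a 200 OK
            if (response.getStatusLine().getStatusCode() == 200) {
                System.out.println(EntityUtils.toString(response.getEntity(), "utf-8"));
            }
        }
    }
}
```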

Sending a request and parsing page data with Jsoup

// Fetch the page and get a Document object
Document document = Jsoup.connect("https://www.baidu.com/").get();

// Match elements (here, all <a> tags) with a CSS selector
Elements select = document.select("a");

for (Element element : select) {
    System.out.println(element);
}
  1. The Document object exposes DOM-style traversal methods (much like the JavaScript DOM) as well as CSS selectors.
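To illustrate a few selector patterns, here is a small self-contained sketch run against an inline HTML string (the markup, ids, and class names are made up for the example):

```java
import org.jsoup.Jsoup;
import org.jsoup.nodes.Document;
import org.jsoup.nodes.Element;

public class SelectorDemo {
    public static void main(String[] args) {
        String html = "<div id='list'>"
                    + "<a class='item' href='/a'>First</a>"
                    + "<a class='item' href='/b'>Second</a>"
                    + "</div>";
        Document doc = Jsoup.parse(html);

        // select by tag + class, then read an attribute and the text
        for (Element a : doc.select("a.item")) {
            System.out.println(a.attr("href") + " -> " + a.text());
        }

        // select by id, then a descendant tag
        Element first = doc.select("#list a").first();
        System.out.println(first.text());
    }
}
```

Running this prints `/a -> First`, `/b -> Second`, and `First`; the same `select` calls work identically on a Document fetched over the network.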

In practice the two are combined: HttpClient sends the request and Jsoup parses the returned HTML.

HttpGet httpGet = new HttpGet("https://www.baidu.com/");

HttpHost proxy = new HttpHost("125.70.13.77", 8080);
httpGet.setConfig(RequestConfig.custom()
                  .setSocketTimeout(30000)
                  .setConnectTimeout(30000)
                  .setProxy(proxy)
                  .build());

CloseableHttpClient httpClient = HttpClientBuilder.create().build();
HttpClientContext context = HttpClientContext.create();

CloseableHttpResponse response = httpClient.execute(httpGet, context);

HttpEntity entity = response.getEntity();
String htmlData = EntityUtils.toString(entity, "utf-8");

// Hand the HTML string fetched by HttpClient to Jsoup for parsing
Document document = Jsoup.parse(htmlData);

Elements select = document.select("a");

for (Element element : select) {
    System.out.println(element);
}
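Besides routing through a proxy, many anti-crawler sites also inspect the request headers. A common precaution is to set a browser-like User-Agent on the HttpGet before executing it (the UA string below is only an example):

```java
import org.apache.http.client.methods.HttpGet;

public class UserAgentDemo {
    public static void main(String[] args) {
        HttpGet httpGet = new HttpGet("https://www.baidu.com/");
        // pretend to be a regular desktop browser (example UA string)
        httpGet.setHeader("User-Agent",
                "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36");
        System.out.println(httpGet.getFirstHeader("User-Agent").getValue());
    }
}
```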