huangkq / Python-100-Days · Commit 2315b0ce
Authored May 28, 2018 by jackfrued. Commit message: 更新了爬虫第2天文档 (Updated the Day 2 crawler document).
Parent: e86dece2

Showing 3 changed files with 125 additions and 44 deletions:

- Day66-75/02.数据采集和解析.md (+65 / -0)
- Day66-75/code/example01.py (+26 / -17)
- Day66-75/code/example02.py (+34 / -27)

Day66-75/02.数据采集和解析.md
## Data Collection and Parsing

In the previous chapter we looked at the work involved in building a crawler and the common problems that come with it. Here we can give a short summary of that work and the techniques it relies on. Some of the libraries below may be new to you, but don't worry: all of them will be covered later.
1. Downloading data - urllib / requests / aiohttp.
2. Parsing data - re / lxml / beautifulsoup4 (bs4) / pyquery.
3. Persistence - pymysql / redis / sqlalchemy / pymongo.
4. Scheduling - processes / threads / coroutines (a minimal sketch of the first two stages follows this list).
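
As a quick orientation, here is a minimal, hedged sketch of how the downloading and parsing stages fit together, using requests to fetch a page and re to pull data out of it. The URL and the pattern are placeholders chosen for illustration, not code from the course, and persistence/scheduling are only hinted at in comments.

```Python
import re

import requests


def fetch(url):
    """Download stage: fetch the page and return its decoded text."""
    resp = requests.get(url, timeout=10)
    resp.raise_for_status()
    return resp.text


def parse_titles(html):
    """Parse stage: pull out <h1>-<h3> headings with a regular expression."""
    return re.findall(r'<h[1-3][^>]*>(.*?)</h[1-3]>', html, re.S)


if __name__ == '__main__':
    # 'https://example.com' is a placeholder URL; in a real crawler the results
    # would be persisted (pymysql / pymongo / redis) and the fetches scheduled
    # across processes, threads or coroutines.
    for title in parse_titles(fetch('https://example.com')):
        print(title)
```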
### HTML Page Analysis

```HTML
<!DOCTYPE html>
<html lang="en">
    <head>
        <meta charset="UTF-8">
        <title>首页</title>
    </head>
    <body>
        <h1>Hello, world!</h1>
        <p>这是一个神奇的网站!</p>
        <hr>
        <div>
            <h2>这是一个例子程序</h2>
            <p>静夜思</p>
            <p class="foo">床前明月光</p>
            <p id="bar">疑似地上霜</p>
            <p class="foo">举头望明月</p>
            <div><a href="http://www.baidu.com"><p>低头思故乡</p></a></div>
        </div>
        <a class="foo" href="http://www.qq.com">腾讯网</a>
        <img src="./img/pretty-girl.png" alt="美女">
        <img src="./img/hellokitty.png" alt="凯蒂猫">
        <img src="/static/img/pretty-girl.png" alt="美女">
        <table>
            <tr>
                <th>姓名</th>
                <th>上场时间</th>
                <th>得分</th>
                <th>篮板</th>
                <th>助攻</th>
            </tr>
        </table>
    </body>
</html>
```
If the code above looks familiar, then you already know that an HTML page is usually made up of three parts: tags that carry the content, CSS (Cascading Style Sheets) that renders the page, and JavaScript that drives its interactive behavior. Normally we can use the "View Page Source" item in the browser's right-click menu to grab a page's source and understand its structure; we can also learn a lot more about a page through the browser's developer tools.
#### Fetching Pages with requests

1. GET and POST requests.
2. URL parameters and request headers.
3. Complex POST requests (file uploads).
4. Working with cookies (see the sketch below).
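
A hedged sketch of how each of these items maps onto the requests API. The URLs point at httpbin.org purely for illustration and `avatar.png` is a hypothetical local file; neither comes from the course material.

```Python
import requests

# GET request with URL parameters and a custom request header
resp = requests.get('https://httpbin.org/get',
                    params={'wd': 'python'},
                    headers={'User-Agent': 'Mozilla/5.0'})
print(resp.status_code, resp.json())

# Plain POST request with form data
resp = requests.post('https://httpbin.org/post', data={'name': 'jackfrued'})

# Complex POST request: uploading a file
with open('avatar.png', 'rb') as file_obj:  # avatar.png is a placeholder file
    resp = requests.post('https://httpbin.org/post',
                         files={'avatar': file_obj})

# Working with cookies: a Session carries cookies across requests
session = requests.Session()
session.get('https://httpbin.org/cookies/set/sessionid/abc123')
print(session.cookies.get('sessionid'))
```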
### Three Scraping Approaches

#### Comparing the three approaches

| Method | Speed | Difficulty | Notes |
| ------------------- | ------------------------------- | ---------- | ------------------------------------------------------------ |
| Regular expressions | Fast | Hard | Common regular expressions<br>Online regex testers |
| lxml | Fast | Moderate | Requires C-language dependency libraries<br>The only parser here that also supports XML |
| BeautifulSoup | Fast / slow (depends on parser) | Easy | |

> Note: the parsers BeautifulSoup can use include Python's standard library (html.parser), lxml's HTML parser, lxml's XML parser, and html5lib.
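
For comparison with the table, a minimal lxml/XPath sketch; the HTML fragment is lifted from the sample page above and inlined so the example runs on its own.

```Python
from lxml import etree

# A fragment of the sample page, inlined so this sketch is self-contained
html_doc = """
<div>
    <p class="foo">床前明月光</p>
    <p id="bar">疑似地上霜</p>
    <a class="foo" href="http://www.qq.com">腾讯网</a>
</div>
"""

tree = etree.HTML(html_doc)
# XPath: the text of every <p> whose class is "foo"
print(tree.xpath('//p[@class="foo"]/text()'))
# XPath: the href attribute of every <a>
print(tree.xpath('//a/@href'))
```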
#### Using BeautifulSoup

1. Traversing the document tree.
2. Five kinds of filters: string, regular expression, list, True, and method (see the sketch below).
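
A hedged sketch of the five filter types accepted by find_all, run against a fragment of the sample page (inlined here so the example is self-contained).

```Python
import re

from bs4 import BeautifulSoup

html_doc = """
<div>
    <h2>这是一个例子程序</h2>
    <p class="foo">床前明月光</p>
    <p id="bar">疑似地上霜</p>
    <a class="foo" href="http://www.qq.com">腾讯网</a>
</div>
"""
soup = BeautifulSoup(html_doc, 'lxml')

print(soup.find_all('p'))                                # string: every <p> tag
print(soup.find_all(re.compile(r'^h')))                  # regular expression: tag names starting with "h"
print(soup.find_all(['a', 'h2']))                        # list: <a> or <h2> tags
print(soup.find_all(True))                               # True: every tag in the document
print(soup.find_all(lambda tag: tag.has_attr('class')))  # method: tags that define a class attribute
```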
Day66-75/code/example01.py

```diff
@@ -8,7 +8,8 @@ import ssl
 from pymysql import Error
 
 
-def decode_page(page_bytes, charsets=('utf-8', )):
+# 通过指定的字符集对页面进行解码(不是每个网站都将字符集设置为utf-8)
+def decode_page(page_bytes, charsets=('utf-8',)):
     page_html = None
     for charset in charsets:
         try:
@@ -20,7 +21,8 @@ def decode_page(page_bytes, charsets=('utf-8', )):
     return page_html
 
 
-def get_page_html(seed_url, *, retry_times=3, charsets=('utf-8', )):
+# 获取页面的HTML代码(通过递归实现指定次数的重试操作)
+def get_page_html(seed_url, *, retry_times=3, charsets=('utf-8',)):
     page_html = None
     try:
         page_html = decode_page(urlopen(seed_url).read(), charsets)
@@ -32,25 +34,31 @@ def get_page_html(seed_url, *, retry_times=3, charsets=('utf-8', )):
     return page_html
 
 
+# 从页面中提取需要的部分(通常是链接也可以通过正则表达式进行指定)
 def get_matched_parts(page_html, pattern_str, pattern_ignore_case=re.I):
     pattern_regex = re.compile(pattern_str, pattern_ignore_case)
     return pattern_regex.findall(page_html) if page_html else []
 
 
-def start_crawl(seed_url, match_pattern):
+# 开始执行爬虫程序并对指定的数据进行持久化操作
+def start_crawl(seed_url, match_pattern, *, max_depth=-1):
     conn = pymysql.connect(host='localhost', port=3306,
                            database='crawler', user='root',
                            password='123456', charset='utf8')
     try:
         with conn.cursor() as cursor:
             url_list = [seed_url]
+            visited_url_list = {seed_url: 0}
             while url_list:
                 current_url = url_list.pop(0)
-                page_html = get_page_html(current_url, charsets=('utf-8', 'gbk', 'gb2312'))
-                links_list = get_matched_parts(page_html, match_pattern)
-                url_list += links_list
-                param_list = []
-                for link in links_list:
-                    page_html = get_page_html(link, charsets=('utf-8', 'gbk', 'gb2312'))
-                    headings = get_matched_parts(page_html, r'<h1>(.*)<span')
-                    if headings:
+                depth = visited_url_list[current_url]
+                if depth != max_depth:
+                    page_html = get_page_html(current_url, charsets=('utf-8', 'gbk', 'gb2312'))
+                    links_list = get_matched_parts(page_html, match_pattern)
+                    param_list = []
+                    for link in links_list:
+                        if link not in visited_url_list:
+                            visited_url_list[link] = depth + 1
+                            page_html = get_page_html(link, charsets=('utf-8', 'gbk', 'gb2312'))
+                            headings = get_matched_parts(page_html, r'<h1>(.*)<span')
+                            if headings:
@@ -68,7 +76,8 @@ def start_crawl(seed_url, match_pattern):
 def main():
     ssl._create_default_https_context = ssl._create_unverified_context
     start_crawl('http://sports.sohu.com/nba_a.shtml',
-                r'<a[^>]+test=a\s[^>]*href=["\'](.*?)["\']')
+                r'<a[^>]+test=a\s[^>]*href=["\'](.*?)["\']',
+                max_depth=2)
 
 
 if __name__ == '__main__':
```
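
The substance of this change is the new `max_depth` keyword argument and the `visited_url_list` dictionary that records the depth at which each URL was discovered. A stripped-down sketch of just that bookkeeping, with page fetching and the database code replaced by a placeholder `get_links` callable (our name, not the repository's), might look like this; note that re-queueing newly discovered links happens in a part of the diff that is collapsed above, so its exact placement here is an assumption.

```Python
def crawl(seed_url, get_links, *, max_depth=-1):
    """Breadth-first crawl that stops expanding pages once max_depth is reached.

    get_links is a placeholder callable returning the links found on a page;
    in example01.py that role is played by get_page_html + get_matched_parts.
    """
    url_list = [seed_url]
    visited_url_list = {seed_url: 0}   # URL -> depth at which it was first seen
    while url_list:
        current_url = url_list.pop(0)
        depth = visited_url_list[current_url]
        if depth != max_depth:         # max_depth=-1 means "no limit"
            for link in get_links(current_url):
                if link not in visited_url_list:
                    visited_url_list[link] = depth + 1
                    url_list.append(link)   # assumption: queueing sits in the collapsed lines
    return visited_url_list

# Usage sketch: crawl('http://sports.sohu.com/nba_a.shtml', my_link_extractor, max_depth=2)
```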
Day66-75/code/example02.py

```diff
@@ -13,7 +13,7 @@ def main():
         </head>
         <body>
             <h1>Hello, world!</h1>
-            <p>Good!!!</p>
+            <p>这是一个神奇的网站!</p>
             <hr>
             <div>
                 <h2>这是一个例子程序</h2>
@@ -26,19 +26,26 @@ def main():
             <a class="foo" href="http://www.qq.com">腾讯网</a>
             <img src="./img/pretty-girl.png" alt="美女">
             <img src="./img/hellokitty.png" alt="凯蒂猫">
-            <img src="./static/img/pretty-girl.png" alt="美女">
-            <goup>Hello, Goup!</goup>
+            <img src="/static/img/pretty-girl.png" alt="美女">
+            <table>
+                <tr>
+                    <th>姓名</th>
+                    <th>上场时间</th>
+                    <th>得分</th>
+                    <th>篮板</th>
+                    <th>助攻</th>
+                </tr>
+            </table>
         </body>
     </html>
     """
+    # resp = requests.get('http://sports.sohu.com/nba_a.shtml')
+    # html = resp.content.decode('gbk')
     soup = BeautifulSoup(html, 'lxml')
+    # JavaScript - document.title
     print(soup.title)
-    # JavaScript: document.body.h1
-    # JavaScript: document.forms[0]
+    # JavaScript - document.body.h1
     print(soup.body.h1)
-    print(soup.find_all(re.compile(r'p$')))
+    print(soup.find_all(re.compile(r'^h')))
+    print(soup.find_all(re.compile(r'r$')))
     print(soup.find_all('img', {'src': re.compile(r'\./img/\w+.png')}))
     print(soup.find_all(lambda x: len(x.attrs) == 2))
     print(soup.find_all('p', {'class': 'foo'}))
```