huangkq / Python-100-Days
Commit 79f297a4 authored May 30, 2018 by jackfrued
hello crawler
parent b8725569
Showing 1 changed file with 5 additions and 3 deletions
Day66-75/code/example06.py  +5 -3
@@ -17,20 +17,22 @@ def main():
     seed_url = urljoin(base_url, 'explore')
     # Create a Redis client
     client = Redis(host='1.2.3.4', port=6379, password='1qaz2wsx')
-    # Set the user agent
+    # Set the user agent (otherwise requests will be rejected)
     headers = {'user-agent': 'Baiduspider'}
     # Send a GET request via the requests module with the specified user agent
     resp = requests.get(seed_url, headers=headers)
     # Create a BeautifulSoup object, using lxml as the parser
     soup = BeautifulSoup(resp.text, 'lxml')
     href_regex = re.compile(r'^/question')
+    # Turn URLs into SHA1 digests (fixed length, more compact)
+    hasher_proto = sha1()
     # Find all <a> tags whose href attribute starts with /question
     for a_tag in soup.find_all('a', {'href': href_regex}):
         # Get the href value of the <a> tag and assemble the full URL
         href = a_tag.attrs['href']
         full_url = urljoin(base_url, href)
-        # Turn the URL into a SHA1 digest (fixed length, more compact)
-        hasher = sha1()
+        # Feed in the URL to generate its SHA1 digest
+        hasher = hasher_proto.copy()
         hasher.update(full_url.encode('utf-8'))
         field_key = hasher.hexdigest()
         # If the hash at Redis key 'zhihu' does not contain this URL's digest, visit the page and cache it
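For context, the link-extraction step in the hunk filters anchors by passing a compiled regex as an attribute value to find_all. A minimal standalone sketch of that pattern, using an inline HTML snippet and an assumed base URL of https://www.zhihu.com/ (the value of base_url is not shown in this hunk):

import re
from urllib.parse import urljoin

from bs4 import BeautifulSoup

# Inline sample markup; the real code parses the fetched page instead.
html = '<a href="/question/1">q</a><a href="/people/2">p</a>'
soup = BeautifulSoup(html, 'lxml')
href_regex = re.compile(r'^/question')
# A compiled regex as the attribute value makes find_all keep only
# <a> tags whose href matches the pattern.
for a_tag in soup.find_all('a', {'href': href_regex}):
    # Assumed base URL for illustration only.
    print(urljoin('https://www.zhihu.com/', a_tag.attrs['href']))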
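The substance of the commit is the hashlib copy() pattern: instead of constructing a fresh sha1() object on every loop iteration, one empty prototype is built up front and cloned per URL. A minimal sketch of the pattern in isolation (url_digest is an illustrative helper, not a name from the repo):

from hashlib import sha1

# One prototype hasher built once; each URL gets a cheap clone of it.
hasher_proto = sha1()

def url_digest(url):
    hasher = hasher_proto.copy()        # clone the empty prototype
    hasher.update(url.encode('utf-8'))  # feed in the URL bytes
    return hasher.hexdigest()           # fixed-length 40-char hex digest

print(url_digest('https://www.zhihu.com/question/1'))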
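The hunk ends at the comment describing the dedup step, so the code that follows it is not shown here. A hedged sketch of what such a check can look like with redis-py, assuming the hash at key 'zhihu' maps URL digests to cached page sources (fetch_if_new and its body are illustrative, not taken from the diff):

from hashlib import sha1

import requests
from redis import Redis

def fetch_if_new(client, full_url, headers):
    # Fixed-length digest used as the field name inside the 'zhihu' hash.
    field_key = sha1(full_url.encode('utf-8')).hexdigest()
    # Only fetch and cache pages whose digest is not stored yet.
    if not client.hexists('zhihu', field_key):
        html_page = requests.get(full_url, headers=headers).text
        client.hset('zhihu', field_key, html_page)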