Python Requests Library: Getting Started and Practice

Study notes from the course "Python Web Crawling and Information Extraction" by Beijing Institute of Technology, on China University MOOC.

Basic code

```python
import requests


def getHTMLText(url):
    try:
        r = requests.get(url, timeout=30)
        # https://2.python-requests.org/en/master/user/quickstart/#response-status-codes
        r.raise_for_status()
        # https://2.python-requests.org/en/master/api/#requests.Response.apparent_encoding
        r.encoding = r.apparent_encoding
        # https://2.python-requests.org/en/master/api/#requests.Response.text
        return r.text
    except Exception:
        return "An exception occurred"


if __name__ == "__main__":
    url = "http://www.baidu.com"
    print(getHTMLText(url))
```

A simple way to understand Python's if __name__ == "__main__":

Here, r.raise_for_status() checks r.status_code internally and raises an exception for any unsuccessful response, so no extra if statement is needed; this makes it convenient to handle failures with try/except. r.apparent_encoding is the encoding inferred from the response content itself (a fallback), while r.encoding is the encoding guessed from the HTTP headers.

In the example above, r.apparent_encoding is utf-8 while r.encoding is ISO-8859-1; without the line r.encoding = r.apparent_encoding, the returned Baidu homepage would come back with garbled text instead of readable Chinese.

https://2.python-requests.org/en/master/api/#requests.Response.text

r.text: Content of the response, in unicode.

If Response.encoding is None, encoding will be guessed using chardet.

The encoding of the response content is determined based solely on HTTP headers, following RFC 2616 to the letter. If you can take advantage of non-HTTP knowledge to make a better guess at the encoding, you should set r.encoding appropriately before accessing this property.
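Both behaviors can be observed offline with a throwaway server from the standard library; this is a sketch for illustration only (the host, port, and Chinese payload below are made up, and the server deliberately omits the charset from its Content-Type header):

```python
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

import requests


class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/ok":
            body = "你好,世界。Requests 编码检测演示。".encode("utf-8")
            self.send_response(200)
            # No charset in the header, so requests falls back to ISO-8859-1
            self.send_header("Content-Type", "text/html")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_error(404)

    def log_message(self, *args):  # silence per-request logging
        pass


server = HTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()
base = "http://127.0.0.1:%d" % server.server_port

r = requests.get(base + "/ok", timeout=5)
print(r.encoding)           # ISO-8859-1, guessed from the HTTP headers
print(r.apparent_encoding)  # inferred from the bytes of the body
r.encoding = r.apparent_encoding
print(r.text)               # now decodes as readable Chinese

try:
    requests.get(base + "/missing", timeout=5).raise_for_status()
except requests.HTTPError as exc:
    print("caught:", exc.response.status_code)
```

Without the r.encoding = r.apparent_encoding line, r.text would decode the UTF-8 bytes as ISO-8859-1 and produce mojibake, which is exactly the Baidu symptom described above.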

The Robots protocol

Purpose: lets a website tell web crawlers which pages may be crawled and which may not.

Form: JD.com's Robots protocol:

```
User-agent: * 
Disallow: /?*
Disallow: /pop/*.html
Disallow: /pinpai/*.html?*
User-agent: EtaoSpider
Disallow: /
User-agent: HuihuiSpider
Disallow: /
User-agent: GwdangSpider
Disallow: /
User-agent: WochachaSpider
Disallow: /
```
```
# A comment; * matches all crawlers, / means the site root
User-agent: *
Disallow: /
```
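The standard library can evaluate such rules programmatically. A sketch that feeds JD's rules into urllib.robotparser, parsing from a string instead of fetching robots.txt over the network (note that the stdlib parser matches Disallow paths as plain prefixes, so the * wildcards inside paths are not interpreted):

```python
from urllib.robotparser import RobotFileParser

# A subset of JD's rules from above
rules = """\
User-agent: *
Disallow: /?*
Disallow: /pop/*.html
User-agent: EtaoSpider
Disallow: /
"""

rp = RobotFileParser()
rp.parse(rules.splitlines())

# An unknown crawler falls under the wildcard group
print(rp.can_fetch("MyCrawler", "https://www.jd.com/index.html"))   # True
# EtaoSpider is banned from the whole site
print(rp.can_fetch("EtaoSpider", "https://www.jd.com/index.html"))  # False
```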

Requests in practice

Submitting search keywords to Baidu and 360

```python
import requests

keyword = "Python"
try:
    kv = {'wd': keyword}
    r = requests.get("http://www.baidu.com/s", params=kv)
    print(r.request.url)  # http://www.baidu.com/s?wd=Python
    r.raise_for_status()
    print(len(r.text))
except Exception:
    print("Crawl failed")
```

Baidu keyword interface: https://www.baidu.com/s?wd=keyword (360's equivalent is https://www.so.com/s?q=keyword).
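The URL that requests builds from the params dict can also be inspected without sending anything, via a PreparedRequest (the keyword here is an arbitrary example):

```python
import requests

# Prepare the request without sending it, to see how the params dict
# is percent-encoded into the query string
req = requests.Request(
    "GET", "http://www.baidu.com/s", params={"wd": "python 爬虫"}
).prepare()
print(req.url)  # http://www.baidu.com/s?wd=python+%E7%88%AC%E8%99%AB
```

Spaces become + and non-ASCII characters are UTF-8 percent-encoded, which is why Chinese keywords can be passed as plain strings in the params dict.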

Crawling and saving images from the web

I'm still not comfortable with file operations…

```python
import os

import requests

url = "http://image.ngchina.com.cn/2018/0926/20180926031035591.jpg"
root = "D://pics//"
path = root + url.split('/')[-1]
try:
    if not os.path.exists(root):
        os.mkdir(root)
    if not os.path.exists(path):
        r = requests.get(url)
        with open(path, 'wb') as f:  # the with block closes the file for us
            f.write(r.content)
        print("File saved")
    else:
        print("File already exists")
except Exception:
    print("Crawl failed")
```
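For larger files, the same download can be written with os.makedirs and streaming so the whole image never sits in memory at once. A sketch under those assumptions (the 8 KiB chunk size is an arbitrary choice):

```python
import os

import requests


def download_image(url, root):
    """Stream url to a file under root, skipping files that already exist."""
    os.makedirs(root, exist_ok=True)  # replaces the exists()/mkdir pair
    path = os.path.join(root, url.split("/")[-1])
    if os.path.exists(path):
        return path
    # stream=True downloads the body lazily, chunk by chunk
    with requests.get(url, stream=True, timeout=30) as r:
        r.raise_for_status()
        with open(path, "wb") as f:
            for chunk in r.iter_content(chunk_size=8192):
                f.write(chunk)
    return path
```

Compared with r.content, iter_content keeps memory use bounded regardless of file size, and os.path.join avoids hard-coding the path separator.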