Batch-Scraping Images: Baidu Tieba, Qiushibaike, Jandan

Published 2019-05-30 20:48:56



Batch-scraping Tieba images
from urllib import request
import re

# kw is the URL-encoded board name "摄影吧" (photography board);
# "%e5%9b%be%e7%89%87" is the encoding of "图片" (image)
url = "http://tieba.baidu.com/f?kw=%E6%91%84%E5%BD%B1%E5%90%A7"

headers = {
    "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/70.0.3538.110 Safari/537.36"
}

req = request.Request(url=url, headers=headers)
response = request.urlopen(req)
html = response.read().decode("utf-8")

# Pull every <img src="..."> value out of the page
img_links = re.findall(r'<img src="(.*?)"', html)

for link in img_links:
    # Keep only absolute links; the page also contains relative paths
    if link.startswith("http"):
        print("Downloading: %s" % link)
        # Use the last 10 characters of the URL as the filename
        request.urlretrieve(url=link, filename='../images/' + link[-10:])
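The percent-encoded strings in the comments and the URL above can be produced (and decoded) with the standard library rather than copied from the browser. A minimal sketch using urllib.parse:

```python
from urllib import parse

# Percent-encode the board name the same way the browser does;
# "摄影吧" becomes the kw value used in the Tieba URL above.
kw = parse.quote("摄影吧")
url = "http://tieba.baidu.com/f?kw=" + kw
print(kw)   # %E6%91%84%E5%BD%B1%E5%90%A7

# The reverse direction decodes the comment from the script above:
print(parse.unquote("%e5%9b%be%e7%89%87"))  # 图片
```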


Qiushibaike (糗事百科)
from urllib import request
import re

# %s is filled in with the page number below
url = "https://www.qiushibaike.com/pic/page/%s/"
headers = {
    "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/70.0.3538.110 Safari/537.36",
}

# Crawl pages 1 through 10
for i in range(1, 11):
    req = request.Request(url=url % i, headers=headers)
    response = request.urlopen(req)
    html = response.read().decode("utf-8")
    img_links = re.findall(r'<img src="(.*?)"', html)
    for link in img_links:
        # Image links are protocol-relative and start with //pic
        if link.startswith("//pic"):
            links = "http:" + link
            try:
                request.urlretrieve(url=links, filename="./images/" + links[-10:])
            except Exception:
                # Skip images that fail to download
                pass
        else:
            print(link, "unexpected image path, skipping")
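Both scripts so far assume the target directory (../images or ./images) already exists; urlretrieve raises FileNotFoundError if it does not. They also name files by the last 10 characters of the URL, which can truncate extensions or collide. A minimal sketch of a safer download helper (the names safe_filename and download are hypothetical, not from the original):

```python
import os
from urllib import request


def safe_filename(link):
    # Use the URL's last path segment instead of link[-10:],
    # so the extension survives and collisions are less likely.
    return link.rsplit("/", 1)[-1]


def download(link, save_dir="./images"):
    # Create the directory on first use; urlretrieve does not.
    os.makedirs(save_dir, exist_ok=True)
    path = os.path.join(save_dir, safe_filename(link))
    request.urlretrieve(link, path)
    return path
```

Calling download("http://pic.qiushibaike.com/system/pictures/12171/abc.jpg") would then save to ./images/abc.jpg.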


Jandan (煎蛋网)
from urllib import request
import re

url = "http://jandan.net/pic/page-127"
headers = {
    "Accept": "text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8",
    "Accept-Language": "zh-CN,zh;q=0.9",
    "Cache-Control": "no-cache",
    "Connection": "keep-alive",
    "Cookie": "_ga=GA1.2.1246691030.1543560067; _gid=GA1.2.1877436102.1559200246",
    "Host": "jandan.net",
    "Pragma": "no-cache",
    "Upgrade-Insecure-Requests": "1",
    "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/70.0.3538.110 Safari/537.36",
}

req = request.Request(url=url, headers=headers)
response = request.urlopen(req)
html = response.read().decode("utf-8")

# Jandan lazy-loads images: the real URL is in org_src, not src.
# Non-greedy .*? keeps the match from spanning multiple <img> tags.
img_links = re.findall(r'<img.*?org_src="(.*?)"', html)
for link in img_links:
    full_link = "http:" + link
    print("Downloading: %s" % full_link)
    request.urlretrieve(full_link, "./image/" + full_link[-10:])
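Regexes over HTML break whenever attribute order or quoting changes. A minimal sketch of the same org_src extraction using the standard library's html.parser instead (ImgSrcParser is a hypothetical name, not part of the original scripts):

```python
from html.parser import HTMLParser


class ImgSrcParser(HTMLParser):
    """Collect org_src (falling back to src) from <img> tags.

    Unlike a regex, this is insensitive to attribute order and
    to other attributes sitting between the tag name and org_src.
    """

    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "img":
            d = dict(attrs)
            link = d.get("org_src") or d.get("src")
            if link:
                self.links.append(link)


parser = ImgSrcParser()
parser.feed('<img id="x" org_src="//img.jandan.net/a.jpg" src="//cdn/lazy.gif">')
print(parser.links)  # ['//img.jandan.net/a.jpg']
```

The same parser.feed(html) call would replace the re.findall line in the Jandan script above.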






Source: https://www.cnblogs.com/wyf2019/p/10951937.html
