Python Crawler: A Simple Hands-On Example (Part 3)

2021/4/10 20:41:36

This post is a short hands-on Python crawler example (part 3). It should be a useful reference if you are solving a similar problem; follow along to see how it works.

It demonstrates one way of parsing data with BeautifulSoup, a module from the bs4 package, by scraping the novel Romance of the Three Kingdoms from shicimingju.com (诗词名句).
1. Import the libraries

import requests
from bs4 import BeautifulSoup
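
BeautifulSoup ships in the beautifulsoup4 package, and the 'lxml' parser used below is a separate dependency; if they are not installed yet, both (plus requests) can be installed with pip install requests beautifulsoup4 lxml.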

2. Send the request

url = 'https://www.shicimingju.com/book/sanguoyanyi.html'
headers = {
    'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/89.0.4389.90 Safari/537.36 Edg/89.0.774.63'
}
resp = requests.get(url=url, headers=headers)
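
Before parsing, it is worth checking that the request actually succeeded. A minimal check (my own addition, not in the original post) using standard requests attributes:

# Sanity check on the response (not part of the original script)
print(resp.status_code)   # 200 means the table-of-contents page was fetched successfully
resp.raise_for_status()   # raise an HTTPError for 4xx/5xx responses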

3. Parse the data with BeautifulSoup

# Instantiate a BeautifulSoup object from the page source
soup = BeautifulSoup(resp.text, 'lxml')
# Parse out the chapter titles and the detail-page URLs
li_list = soup.select('.book-mulu > ul > li')
for li in li_list:
    title = li.a.string
    detail_url = 'https://www.shicimingju.com' + li.a['href']
    # Request each chapter's detail page and extract its text
    detail_resp = requests.get(url=detail_url, headers=headers)
    soup_detail = BeautifulSoup(detail_resp.text, 'lxml')
    content = soup_detail.find('div', class_='chapter_content').text
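
To make the parsing calls above clearer, here is a tiny self-contained sketch (my own example, not from the original post) showing how select, .string and ['href'] behave on a small HTML snippet shaped like the site's table of contents:

from bs4 import BeautifulSoup

html = '<div class="book-mulu"><ul><li><a href="/book/sanguoyanyi/1.html">Chapter 1</a></li></ul></div>'
demo = BeautifulSoup(html, 'lxml')
li = demo.select('.book-mulu > ul > li')[0]   # CSS selector; returns a list of matching Tag objects
print(li.a.string)    # Chapter 1                -> the text inside the <a> tag
print(li.a['href'])   # /book/sanguoyanyi/1.html -> the value of the href attribute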

4. The first run printed garbled characters (mojibake): requests decoded the pages with the wrong character encoding.
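
Why this happens (my own explanation, not from the original post): when a server does not declare a charset in its Content-Type header, requests falls back to a Latin-1 style default when building .text, which mangles Chinese characters. The mismatch is easy to inspect and fix:

print(resp.encoding)           # encoding taken from the response headers
print(resp.apparent_encoding)  # encoding detected from the page bytes, e.g. 'utf-8'
resp.encoding = 'utf-8'        # tell requests how to decode .text

With the encoding handled, the full corrected script looks like this: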

import requests
from bs4 import BeautifulSoup

url = 'https://www.shicimingju.com/book/sanguoyanyi.html'
headers = {
    'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/89.0.4389.90 Safari/537.36 Edg/89.0.774.63'
}
resp = requests.get(url=url, headers=headers)
# Fix the garbled characters: decode the table-of-contents page as UTF-8
resp.encoding = 'utf-8'
page_text = resp.text
# Instantiate a BeautifulSoup object
soup = BeautifulSoup(page_text, 'lxml')
# Parse out the chapter titles and the detail-page URLs
li_list = soup.select('.book-mulu > ul > li')
for li in li_list:
    title = li.a.string
    detail_url = 'https://www.shicimingju.com' + li.a['href']
    detail_resp = requests.get(url=detail_url, headers=headers)
    detail_resp.encoding = 'utf-8'  # fix the encoding of each chapter page as well
    detail_text = detail_resp.text
    soup_detail = BeautifulSoup(detail_text, 'lxml')
    content = soup_detail.find('div', class_='chapter_content').text
    print(title, content, '\n')
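
The loop above only prints each chapter. To keep the novel locally, a small extension (my own sketch, not part of the original post) could append every chapter to a text file at the end of the loop body:

# Sketch: persist each chapter (append inside the for loop, after content is extracted)
with open('sanguoyanyi.txt', 'a', encoding='utf-8') as fp:
    fp.write(title + '\n')
    fp.write(content + '\n\n')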

5. Summary
This was a simple use of BeautifulSoup; more will be added in later posts.


