This article demonstrates four ways to collect every link on the current page with Python. It is shared for your reference; the details are as follows:
```python
'''
Collect every link on the current page, using four different approaches.
'''
import re

import requests
from bs4 import BeautifulSoup
from lxml import etree
from selenium import webdriver
from selenium.webdriver.common.by import By

url = 'http://www.testweb.com'
r = requests.get(url)
r.encoding = 'gb2312'

# 1. re (brute force, quick and dirty!)
matches = re.findall(r"(?<=href=\").+?(?=\")|(?<=href=\').+?(?=\')", r.text)
for link in matches:
    print(link)
print()

# 2. BeautifulSoup4 (walks the DOM tree)
soup = BeautifulSoup(r.text, 'lxml')
for a in soup.find_all('a'):
    link = a.get('href')  # .get() avoids a KeyError on <a> tags without href
    print(link)
print()

# 3. lxml.etree (XPath)
tree = etree.HTML(r.text)
for link in tree.xpath("//@href"):
    print(link)
print()

# 4. Selenium (opens a real browser!)
driver = webdriver.Firefox()
driver.get(url)
# Selenium 4 API; older versions used driver.find_elements_by_tag_name("a")
for link in driver.find_elements(By.TAG_NAME, "a"):
    print(link.get_attribute("href"))
driver.close()
```
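All four approaches return the href values exactly as they appear in the markup, so relative paths such as `/about` or `../index.html` come back unresolved. Below is a minimal sketch of how they could be normalized with the standard library's `urllib.parse.urljoin`; the helper name `collect_absolute_links` and its variable names are illustrative only, not part of the original code.

```python
from urllib.parse import urljoin

def collect_absolute_links(base_url, hrefs):
    """Resolve relative hrefs against the page URL and de-duplicate them."""
    absolute = set()
    for href in hrefs:
        if not href or href.startswith(('javascript:', 'mailto:', '#')):
            continue  # skip non-navigable pseudo-links
        absolute.add(urljoin(base_url, href))  # '/about' -> 'http://www.testweb.com/about'
    return absolute

# Example with the BeautifulSoup result from above:
# links = collect_absolute_links(url, (a.get('href') for a in soup.find_all('a')))
```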
Note: if the page contains an iframe, the tags inside the page that the iframe embeds cannot be obtained with any of the four methods above! In that case you need to:
```python
# Also open every iframe and collect the <a> tags inside it
all_urls = set()
for iframe in soup.find_all('iframe'):
    url_ifr = iframe['src']  # src attribute of the current iframe
    rr = requests.get(url_ifr)
    rr.encoding = 'gb2312'
    soup_ifr = BeautifulSoup(rr.text, 'lxml')
    for a in soup_ifr.find_all('a'):
        link = a.get('href', '')
        m = re.match(r'http:\/\/.*?(?=\/)', link)
        # print(link)
        if m:
            all_urls.add(m.group(0))  # keep only the scheme + host part of the link
```
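If you are already using Selenium, the iframe content can also be reached without extra requests calls by switching the driver into each frame. Here is a rough sketch, assuming the same `url` and Firefox setup as in the example above (nested iframes would need recursion); `driver.switch_to.frame` and `driver.switch_to.default_content` are the standard Selenium calls for this.

```python
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Firefox()
driver.get(url)

# Links on the top-level page
all_links = {a.get_attribute("href") for a in driver.find_elements(By.TAG_NAME, "a")}

# Links inside each iframe: switch into the frame, collect, switch back out
for frame in driver.find_elements(By.TAG_NAME, "iframe"):
    driver.switch_to.frame(frame)
    all_links.update(a.get_attribute("href") for a in driver.find_elements(By.TAG_NAME, "a"))
    driver.switch_to.default_content()

driver.close()
print(all_links)
```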
That concludes this comparison of the four Python methods for collecting all the links on the current page. For more, see the other related articles on Gxl网!