
Counting the Gender Ratio of a School BBS with a Python Crawler: the Multi-threaded Crawler (Part 2)


Continuing on from Part 1.

I. Data classification

  • Valid data: the id, the gender, and the last-activity time are all present.

Stored in file1 = 'ruisi\\correct%s-%s.txt' % (startNum, endNum)

Record format: 293001 男 2015-5-1 19:17 (男 = male)

  • Missing time: the id and the gender are present, but there is no activity time.

Stored in file2 = 'ruisi\\errTime%s-%s.txt' % (startNum, endNum)

Record format: 2566 女 notime (女 = female)

  • Nonexistent user: no user corresponds to this id.

Stored in file3 = 'ruisi\\notexist%s-%s.txt' % (startNum, endNum)

Record format: 29005 notexist

  • Unknown gender: the id exists, but the gender cannot be read from the page (on inspection, these users have no activity time either).

Stored in file4 = 'ruisi\\unkownsex%s-%s.txt' % (startNum, endNum)

Record format: 221794 unkownsex

  • Network error: the connection dropped or the server failed; these ids need to be checked again.

Stored in file5 = 'ruisi\\httperror%s-%s.txt' % (startNum, endNum)

Record format: 271004 httperror

How to keep the crawler running without interruption

  • One design goal of this project is uninterrupted crawling: if the program simply exited whenever the network dropped or the BBS server failed, we would have to resume from the break point, or worse, start over from scratch.
  • So the approach taken here is: when a request fails, record the offending id, and only after a full pass re-crawl the gender for those recorded ids.
  • Part 1 of this series defined getInfo(myurl, seWord), which fetches information from a given link using a given regular expression.
  • That function can be used to read both the gender and the last-activity time.
  • On top of it we define a "safe" fetch function that never aborts the program, using try/except exception handling.
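The re-crawl pass itself is not shown in the post. As a sketch (the helper name and the example file path are illustrative, not from the original code), the recorded ids could be read back, de-duplicated, and fed into searchWeb again:

```python
def load_error_ids(path):
    """Collect the distinct ids recorded as 'httperror' during the first pass.

    Each line in the file has the format '<id> httperror'; the same id may
    appear several times, so duplicates are dropped while preserving order.
    """
    ids = []
    seen = set()
    with open(path) as f:
        for line in f:
            parts = line.split()
            if len(parts) == 2 and parts[1] == 'httperror':
                uid = int(parts[0])
                if uid not in seen:
                    seen.add(uid)
                    ids.append(uid)
    return ids

# After the first full pass, the re-crawl could then be:
# searchWeb(load_error_ids('ruisi\\httperror1-300000.txt'))
```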

The code below tries getInfo(myurl, seWord) twice; if the second attempt also raises an exception, the id is appended to file5.
If the information can be fetched, it is returned.

file5 = 'ruisi\\httperror%s-%s.txt' % (startNum, endNum)

def safeGet(myid, myurl, seWord):
    try:
        return getInfo(myurl, seWord)
    except:
        try:
            return getInfo(myurl, seWord)
        except:
            httperrorfile = open(file5, 'a')
            info = '%d %s\n' % (myid, 'httperror')
            httperrorfile.write(info)
            httperrorfile.close()
            return 'httperror'

Iterating over the ids in [1, 300,000] and fetching each user's information

We define a function with the following logic: fetch sex and time; if sex is present, go on to check whether time is present; if sex is absent, determine whether the user does not exist or the gender simply cannot be read.

It also has to handle the case where the network or the BBS server goes down.

url1 = 'http://rs.xidian.edu.cn/home.php?mod=space&uid=%s'
url2 = 'http://rs.xidian.edu.cn/home.php?mod=space&uid=%s&do=profile'

def searchWeb(idArr):
    for id in idArr:
        sexUrl = url1 % (id)   # substitute the id for %s
        timeUrl = url2 % (id)
        sex = safeGet(id, sexUrl, sexRe)
        if not sex:  # if the gender is not found on sexUrl, try timeUrl as well
            sex = safeGet(id, timeUrl, sexRe)
        time = safeGet(id, timeUrl, timeRe)
        # on an httperror the id has already been recorded for a later re-crawl
        if (sex == 'httperror') or (time == 'httperror'):
            pass
        else:
            if sex:
                info = '%d %s' % (id, sex)
                if time:
                    info = '%s %s\n' % (info, time)
                    wfile = open(file1, 'a')
                    wfile.write(info)
                    wfile.close()
                else:
                    info = '%s %s\n' % (info, 'notime')
                    errtimefile = open(file2, 'a')
                    errtimefile.write(info)
                    errtimefile.close()
            else:
                # sex is None here, so check whether the user exists at all
                # (when the network is down, this adds a 4th duplicate httperror line)
                # the gender may simply be unknown because the user never filled it in
                notexist = safeGet(id, sexUrl, notexistRe)
                if notexist == 'httperror':
                    pass
                else:
                    if notexist:
                        notexistfile = open(file3, 'a')
                        info = '%d %s\n' % (id, 'notexist')
                        notexistfile.write(info)
                        notexistfile.close()
                    else:
                        unkownsexfile = open(file4, 'a')
                        info = '%d %s\n' % (id, 'unkownsex')
                        unkownsexfile.write(info)
                        unkownsexfile.close()

A problem turned up here during a later review:

sex = safeGet(id, sexUrl, sexRe)
if not sex:
    sex = safeGet(id, timeUrl, sexRe)
time = safeGet(id, timeUrl, timeRe)

When the network is down, this code calls safeGet up to three times, and each failing call appends another httperror line for the same id:

251538 httperror
251538 httperror
251538 httperror
251538 httperror
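One illustrative fix (not in the original post) is to stop querying as soon as the first lookup reports a network failure, so an id is written at most once per pass. The wrapper below assumes the same contract as safeGet: it returns the matched string, None for no match, or the sentinel 'httperror'.

```python
def lookup_user(uid, safe_get, sex_url, time_url, sex_re, time_re):
    """Return (sex, time) for uid, failing fast on the first network error.

    safe_get is a callable with the safeGet signature from the post; on the
    first 'httperror' we return immediately instead of querying twice more.
    """
    sex = safe_get(uid, sex_url, sex_re)
    if sex == 'httperror':          # fail fast: do not hit the site again
        return 'httperror', 'httperror'
    if not sex:
        sex = safe_get(uid, time_url, sex_re)
        if sex == 'httperror':
            return 'httperror', 'httperror'
    time_ = safe_get(uid, time_url, time_re)
    return sex, time_
```

With this shape, a failing id produces exactly one httperror record instead of three or four.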

Crawling the information with multiple workers

The crawl can be parallelized, because each worker writes to its own, independent set of text files.

1. Popen basics

Popen lets you customize standard input, standard output, and standard error. When I interned at SAP, the project team used Popen on Linux all the time, probably because it makes redirecting output convenient.

The code below borrows that team's approach; Popen invokes a system command. The three consecutive communicate() calls mean: wait until all three workers have finished.

A puzzle: in my experiments, the three communicate() calls had to sit right next to each other for the three workers to run concurrently and all be waited on at the end. (In fact, each Popen starts its child process immediately, and communicate() only waits for that one child to exit; what matters is that all three Popen calls come before the first communicate().)

p1 = Popen(['python', 'ruisi.py', str(s0), str(s1)], bufsize=10000, stdout=subprocess.PIPE)
p2 = Popen(['python', 'ruisi.py', str(s1), str(s2)], bufsize=10000, stdout=subprocess.PIPE)
p3 = Popen(['python', 'ruisi.py', str(s2), str(s3)], bufsize=10000, stdout=subprocess.PIPE)
p1.communicate()
p2.communicate()
p3.communicate()
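On the puzzle above: Popen launches its child as soon as it is constructed, and communicate() merely waits for that child to exit, so concurrency does not depend on the calls being adjacent. A small timing experiment (with sleeping Python children standing in for ruisi.py) makes this visible:

```python
import subprocess
import sys
import time

# a child process that just sleeps for one second
CHILD = [sys.executable, '-c', 'import time; time.sleep(1)']

start = time.time()
procs = [subprocess.Popen(CHILD) for _ in range(3)]  # all three children start here
for p in procs:
    p.communicate()   # wait for each in turn; the children run in parallel meanwhile
elapsed = time.time() - start
print('elapsed: %.1f s' % elapsed)  # about 1 s in total, not 3 s
```

If the three children really ran one after another, the loop would take about three seconds; because they overlap, the total is roughly the duration of the slowest child.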

2. A single-threaded crawler

Usage: python ruisi.py startNum endNum

This program crawls the information for ids in [startNum, endNum) and writes it to the corresponding text files. It is single-threaded; multi-threading is implemented by the outer script that invokes it.

# ruisi.py
# coding=utf-8
import urllib2, re, sys, threading, time, thread

# myurl: the link to fetch
# seWord: the regular expression, as a unicode string
# returns the text matched by the regular expression, or None
def getInfo(myurl, seWord):
    headers = {
        'User-Agent': 'Mozilla/5.0 (Windows; U; Windows NT 6.1; en-US; rv:1.9.1.6) Gecko/20091201 Firefox/3.5.6'
    }
    req = urllib2.Request(
        url=myurl,
        headers=headers
    )
    time.sleep(0.3)
    response = urllib2.urlopen(req)
    html = response.read()
    html = unicode(html, 'utf-8')
    timeMatch = seWord.search(html)
    if timeMatch:
        s = timeMatch.groups()
        return s[0]
    else:
        return None

# try getInfo() twice;
# after the second failure, mark the id as httperror
def safeGet(myid, myurl, seWord):
    try:
        return getInfo(myurl, seWord)
    except:
        try:
            return getInfo(myurl, seWord)
        except:
            httperrorfile = open(file5, 'a')
            info = '%d %s\n' % (myid, 'httperror')
            httperrorfile.write(info)
            httperrorfile.close()
            return 'httperror'

# process a range of ids, e.g. [1, 1001)
def searchWeb(idArr):
    for id in idArr:
        sexUrl = url1 % (id)
        timeUrl = url2 % (id)
        sex = safeGet(id, sexUrl, sexRe)
        if not sex:
            sex = safeGet(id, timeUrl, sexRe)
        time = safeGet(id, timeUrl, timeRe)
        if (sex == 'httperror') or (time == 'httperror'):
            pass
        else:
            if sex:
                info = '%d %s' % (id, sex)
                if time:
                    info = '%s %s\n' % (info, time)
                    wfile = open(file1, 'a')
                    wfile.write(info)
                    wfile.close()
                else:
                    info = '%s %s\n' % (info, 'notime')
                    errtimefile = open(file2, 'a')
                    errtimefile.write(info)
                    errtimefile.close()
            else:
                notexist = safeGet(id, sexUrl, notexistRe)
                if notexist == 'httperror':
                    pass
                else:
                    if notexist:
                        notexistfile = open(file3, 'a')
                        info = '%d %s\n' % (id, 'notexist')
                        notexistfile.write(info)
                        notexistfile.close()
                    else:
                        unkownsexfile = open(file4, 'a')
                        info = '%d %s\n' % (id, 'unkownsex')
                        unkownsexfile.write(info)
                        unkownsexfile.close()

def main():
    reload(sys)
    sys.setdefaultencoding('utf-8')
    if len(sys.argv) != 3:
        print 'usage: python ruisi.py startNum endNum'
        sys.exit(-1)
    global sexRe, timeRe, notexistRe, url1, url2, file1, file2, file3, file4, startNum, endNum, file5
    startNum = int(sys.argv[1])
    endNum = int(sys.argv[2])
    sexRe = re.compile(u'em>\u6027\u522b</em><em>(.*?)</em>')
    timeRe = re.compile(u'em>\u4e0a\u6b21\u6d3b\u52a8\u65f6\u95f4</em><em>(.*?)</em>')
    notexistRe = re.compile(u'(\u62b1\u6b49\uff0c\u60a8\u6307\u5b9a\u7684\u7528\u6237\u7a7a\u95f4\u4e0d\u5b58\u5728<)')
    url1 = 'http://rs.xidian.edu.cn/home.php?mod=space&uid=%s'
    url2 = 'http://rs.xidian.edu.cn/home.php?mod=space&uid=%s&do=profile'
    file1 = '..\\newRuisi\\correct%s-%s.txt' % (startNum, endNum)
    file2 = '..\\newRuisi\\errTime%s-%s.txt' % (startNum, endNum)
    file3 = '..\\newRuisi\\notexist%s-%s.txt' % (startNum, endNum)
    file4 = '..\\newRuisi\\unkownsex%s-%s.txt' % (startNum, endNum)
    file5 = '..\\newRuisi\\httperror%s-%s.txt' % (startNum, endNum)
    searchWeb(xrange(startNum, endNum))
    # numThread = 10
    # searchWeb(xrange(endNum))
    # total = 0
    # for i in xrange(numThread):
    #     data = xrange(1 + i, endNum, numThread)
    #     total += len(data)
    #     t = threading.Thread(target=searchWeb, args=(data,))
    #     t.start()
    # print total

main()

The multi-threaded crawler

Code

# coding=utf-8
from subprocess import Popen
import subprocess
import threading, time

startn = 1
endn = 300001
step = 1000
total = (endn - startn + 1) / step
ISOTIMEFORMAT = '%Y-%m-%d %X'

# hardcode 3 workers
# (I did not investigate whether 3 workers are better than 4 or more)
for i in xrange(0, total, 3):
    startNumber = startn + step * i
    startTime = time.clock()
    s0 = startNumber
    s1 = startNumber + step
    s2 = startNumber + step * 2
    s3 = startNumber + step * 3
    p1 = Popen(['python', 'ruisi.py', str(s0), str(s1)], bufsize=10000, stdout=subprocess.PIPE)
    p2 = Popen(['python', 'ruisi.py', str(s1), str(s2)], bufsize=10000, stdout=subprocess.PIPE)
    p3 = Popen(['python', 'ruisi.py', str(s2), str(s3)], bufsize=10000, stdout=subprocess.PIPE)
    # print a formatted timestamp (year-month-day hour:minute:second)
    startftime = '[ ' + time.strftime(ISOTIMEFORMAT, time.localtime()) + ' ] '
    print startftime + '%s - %s download start... ' % (s0, s1)
    print startftime + '%s - %s download start... ' % (s1, s2)
    print startftime + '%s - %s download start... ' % (s2, s3)
    p1.communicate()
    p2.communicate()
    p3.communicate()
    endftime = '[ ' + time.strftime(ISOTIMEFORMAT, time.localtime()) + ' ] '
    print endftime + '%s - %s download end !!! ' % (s0, s1)
    print endftime + '%s - %s download end !!! ' % (s1, s2)
    print endftime + '%s - %s download end !!! ' % (s2, s3)
    # print how long this round took, in seconds
    endTime = time.clock()
    print "cost time " + str(endTime - startTime) + " s"
    time.sleep(5)
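The three hard-coded Popen calls generalize naturally to any number of workers. A sketch under the same assumptions as the post (a local ruisi.py that takes startNum and endNum command-line arguments):

```python
import subprocess
import sys

def launch_batch(script, start, step, workers):
    """Build the command lines for one round of [lo, hi) ranges.

    Returns the argv lists; the caller passes each to subprocess.Popen and
    then communicate()s with all of them, exactly as in the post.
    """
    cmds = []
    for w in range(workers):
        lo = start + step * w
        hi = lo + step
        cmds.append([sys.executable, script, str(lo), str(hi)])
    return cmds

def run_batch(cmds):
    procs = [subprocess.Popen(c) for c in cmds]   # start all workers at once
    for p in procs:
        p.communicate()                           # then wait for every one
```

For example, launch_batch('ruisi.py', 1, 1000, 3) builds the same 1-1001 / 1001-2001 / 2001-3001 ranges as the hard-coded version.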

Here is the log with timestamps:

"D:\Program Files\Python27\python.exe" E:/pythonProject/webCrawler/sum.py
[ 2015-11-23 11:31:15 ] 1 - 1001 download start...
[ 2015-11-23 11:31:15 ] 1001 - 2001 download start...
[ 2015-11-23 11:31:15 ] 2001 - 3001 download start...
[ 2015-11-23 11:53:44 ] 1 - 1001 download end !!!
[ 2015-11-23 11:53:44 ] 1001 - 2001 download end !!!
[ 2015-11-23 11:53:44 ] 2001 - 3001 download end !!!
cost time 1348.99480677 s
[ 2015-11-23 11:53:50 ] 3001 - 4001 download start...
[ 2015-11-23 11:53:50 ] 4001 - 5001 download start...
[ 2015-11-23 11:53:50 ] 5001 - 6001 download start...
[ 2015-11-23 12:16:56 ] 3001 - 4001 download end !!!
[ 2015-11-23 12:16:56 ] 4001 - 5001 download end !!!
[ 2015-11-23 12:16:56 ] 5001 - 6001 download end !!!
cost time 1386.06407734 s
[ 2015-11-23 12:17:01 ] 6001 - 7001 download start...
[ 2015-11-23 12:17:01 ] 7001 - 8001 download start...
[ 2015-11-23 12:17:01 ] 8001 - 9001 download start...
The multi-threaded log above shows that 1000 users take about 500 s of wall-clock time on average, i.e. roughly 0.5 s per id. 500 × 300 / 3600 = 41.67 hours, so the full run needs about two days.
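The back-of-the-envelope estimate can be checked with a couple of lines:

```python
seconds_per_1000 = 500           # wall-clock seconds per 1000 users, 3 workers running
batches = 300                    # 300,000 ids crawled in blocks of 1000
hours = seconds_per_1000 * batches / 3600.0
print(round(hours, 2))           # 41.67 hours -- roughly two days
```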
For comparison, here is the timing of a single-threaded run:

"D:\Program Files\Python27\python.exe" E:/pythonProject/webCrawler/sum.py
1 - 1001 download start...
1 - 1001 download end !!!
cost time 1583.65911889 s
1001 - 2001 download start...
1001 - 2001 download end !!!
cost time 1342.46874278 s
2001 - 3001 download start...
2001 - 3001 download end !!!
cost time 1327.10885725 s
3001 - 4001 download start...
A single thread also needs about 1500 s to crawl 1000 users, while the multi-threaded program handles 3 × 1000 users in the same 1500 s.

So multi-threading really does save a lot of time.

Note:
getInfo(myurl, seWord) contains time.sleep(0.3), which throttles the requests so the BBS does not reject us for hitting it too frequently. That 0.3 s is included in both the single-threaded and multi-threaded timings above (by itself it accounts for 0.3 × 1000 = 300 s per 1000 users).
Finally, here is the original log without timestamps. (The timestamps were added later so we can tell when each crawl started, in case a worker hangs.)

"D:\Program Files\Python27\python.exe" E:/pythonProject/webCrawler/sum.py
1 - 1001 download start...
1001 - 2001 download start...
2001 - 3001 download start...
1 - 1001 download end !!!
1001 - 2001 download end !!!
2001 - 3001 download end !!!
cost time 1532.74102812 s
3001 - 4001 download start...
4001 - 5001 download start...
5001 - 6001 download start...
3001 - 4001 download end !!!
4001 - 5001 download end !!!
5001 - 6001 download end !!!
cost time 2652.01624951 s
6001 - 7001 download start...
7001 - 8001 download start...
8001 - 9001 download start...
6001 - 7001 download end !!!
7001 - 8001 download end !!!
8001 - 9001 download end !!!
cost time 1880.61513696 s
9001 - 10001 download start...
10001 - 11001 download start...
11001 - 12001 download start...
9001 - 10001 download end !!!
10001 - 11001 download end !!!
11001 - 12001 download end !!!
cost time 1634.40575553 s
12001 - 13001 download start...
13001 - 14001 download start...
14001 - 15001 download start...
12001 - 13001 download end !!!
13001 - 14001 download end !!!
14001 - 15001 download end !!!
cost time 1403.62795496 s
15001 - 16001 download start...
16001 - 17001 download start...
17001 - 18001 download start...
15001 - 16001 download end !!!
16001 - 17001 download end !!!
17001 - 18001 download end !!!
cost time 1271.42177906 s
18001 - 19001 download start...
19001 - 20001 download start...
20001 - 21001 download start...
18001 - 19001 download end !!!
19001 - 20001 download end !!!
20001 - 21001 download end !!!
cost time 1476.04122024 s
21001 - 22001 download start...
22001 - 23001 download start...
23001 - 24001 download start...
21001 - 22001 download end !!!
22001 - 23001 download end !!!
23001 - 24001 download end !!!
cost time 1431.37074164 s
24001 - 25001 download start...
25001 - 26001 download start...
26001 - 27001 download start...
24001 - 25001 download end !!!
25001 - 26001 download end !!!
26001 - 27001 download end !!!
cost time 1411.45186874 s
27001 - 28001 download start...
28001 - 29001 download start...
29001 - 30001 download start...
27001 - 28001 download end !!!
28001 - 29001 download end !!!
29001 - 30001 download end !!!
cost time 1396.88837788 s
30001 - 31001 download start...
31001 - 32001 download start...
32001 - 33001 download start...
30001 - 31001 download end !!!
31001 - 32001 download end !!!
32001 - 33001 download end !!!
cost time 1389.01316718 s
33001 - 34001 download start...
34001 - 35001 download start...
35001 - 36001 download start...
33001 - 34001 download end !!!
34001 - 35001 download end !!!
35001 - 36001 download end !!!
cost time 1318.16040825 s
36001 - 37001 download start...
37001 - 38001 download start...
38001 - 39001 download start...
36001 - 37001 download end !!!
37001 - 38001 download end !!!
38001 - 39001 download end !!!
cost time 1362.59222822 s
39001 - 40001 download start...
40001 - 41001 download start...
41001 - 42001 download start...
39001 - 40001 download end !!!
40001 - 41001 download end !!!
41001 - 42001 download end !!!
cost time 1253.62498539 s
42001 - 43001 download start...
43001 - 44001 download start...
44001 - 45001 download start...
42001 - 43001 download end !!!
43001 - 44001 download end !!!
44001 - 45001 download end !!!
cost time 1313.50461988 s
45001 - 46001 download start...
46001 - 47001 download start...
47001 - 48001 download start...
45001 - 46001 download end !!!
46001 - 47001 download end !!!
47001 - 48001 download end !!!
cost time 1322.32317331 s
48001 - 49001 download start...
49001 - 50001 download start...
50001 - 51001 download start...
48001 - 49001 download end !!!
49001 - 50001 download end !!!
50001 - 51001 download end !!!
cost time 1381.58027296 s
51001 - 52001 download start...
52001 - 53001 download start...
53001 - 54001 download start...
51001 - 52001 download end !!!
52001 - 53001 download end !!!
53001 - 54001 download end !!!
cost time 1357.78699459 s
54001 - 55001 download start...
55001 - 56001 download start...
56001 - 57001 download start...
54001 - 55001 download end !!!
55001 - 56001 download end !!!
56001 - 57001 download end !!!
cost time 1359.76377246 s
57001 - 58001 download start...
58001 - 59001 download start...
59001 - 60001 download start...
57001 - 58001 download end !!!
58001 - 59001 download end !!!
59001 - 60001 download end !!!
cost time 1335.47829775 s
60001 - 61001 download start...
61001 - 62001 download start...
62001 - 63001 download start...
60001 - 61001 download end !!!
61001 - 62001 download end !!!
62001 - 63001 download end !!!
cost time 1354.82727645 s
63001 - 64001 download start...
64001 - 65001 download start...
65001 - 66001 download start...
63001 - 64001 download end !!!
64001 - 65001 download end !!!
65001 - 66001 download end !!!
cost time 1260.54731607 s
66001 - 67001 download start...
67001 - 68001 download start...
68001 - 69001 download start...
66001 - 67001 download end !!!
67001 - 68001 download end !!!
68001 - 69001 download end !!!
cost time 1363.58255686 s
69001 - 70001 download start...
70001 - 71001 download start...
71001 - 72001 download start...
69001 - 70001 download end !!!
70001 - 71001 download end !!!
71001 - 72001 download end !!!
cost time 1354.17163074 s
72001 - 73001 download start...
73001 - 74001 download start...
74001 - 75001 download start...
72001 - 73001 download end !!!
73001 - 74001 download end !!!
74001 - 75001 download end !!!
cost time 1335.00425259 s
75001 - 76001 download start...
76001 - 77001 download start...
77001 - 78001 download start...
75001 - 76001 download end !!!
76001 - 77001 download end !!!
77001 - 78001 download end !!!
cost time 1360.44054978 s
78001 - 79001 download start...
79001 - 80001 download start...
80001 - 81001 download start...
78001 - 79001 download end !!!
79001 - 80001 download end !!!
80001 - 81001 download end !!!
cost time 1369.72662457 s
81001 - 82001 download start...
82001 - 83001 download start...
83001 - 84001 download start...
81001 - 82001 download end !!!
82001 - 83001 download end !!!
83001 - 84001 download end !!!
cost time 1369.95550676 s
84001 - 85001 download start...
85001 - 86001 download start...
86001 - 87001 download start...
84001 - 85001 download end !!!
85001 - 86001 download end !!!
86001 - 87001 download end !!!
cost time 1482.53886433 s
87001 - 88001 download start...
88001 - 89001 download start...
89001 - 90001 download start...

That concludes Part 2 of counting a school BBS's gender ratio with a Python crawler, focusing on the multi-threaded crawler. I hope it helps with your studies.
