
A Simple HTML Table Parser in Python

Posted: 2021-07-01 10:21:17

This article walks through a worked example of simple HTML table parsing in Python. It is shared here for your reference; the details are as follows:

The code depends on libxml2dom, so make sure it is installed first! Import it into your script and call the parse_tables() function. The function takes three arguments:

1. source = a string containing the HTML source code. You can pass in just the table or the entire page.

2. headers = a list of ints OR a list of strings.
If the headers are ints, this is for tables with no header row: list the 0-based indexes of the columns from which you want to extract data.
If the headers are strings, this is for tables with header cells (marked up with <th> tags): the function will pull the information from the columns whose header text matches.

3. table_index = the 0-based index of the table in the source code. If there are multiple tables and the table you want to parse is the third table in the code, then pass in the number 2 here.

It returns a list of lists; each inner list contains the parsed information from one row.

The code is as follows:

#The goal of the table parser is to get specific information from specific
#columns in a table.
#Input: source code from a typical website
#Arguments: a list of headers the user wants to return
#Output: a list of lists of the data in each row
import libxml2dom

def parse_tables(source, headers, table_index):
    """parse_tables(string source, list headers, table_index)
    headers may be a list of strings if the table has headers defined, or
    headers may be a list of ints if no headers are defined; in that case
    the data is taken from the columns at those indexes.
    This method returns a list of lists.
    """
    #Determine whether the headers list holds ints or strings and
    #route to the correct function.
    if isinstance(headers[0], int):
        #no header row: select columns by index
        return no_header(source, headers, table_index)
    elif isinstance(headers[0], str):
        #header row present: select columns by matching <th> text
        return header_given(source, headers, table_index)
    else:
        #return None if the headers aren't of a supported type
        return None

#This function takes in the source code of the whole page, a string list of
#headers, and the index number of the table on the page. It returns a list
#of lists with the scraped information.
def header_given(source, headers, table_index):
    #initiate a list to hold the return list
    return_list = []
    #initiate a list to hold the index numbers of the wanted columns
    header_index = []
    #get a document object out of the source code
    doc = libxml2dom.parseString(source, html=1)
    #get the tables from the document
    tables = doc.getElementsByTagName('table')
    try:
        #try to get focus on the desired table
        main_table = tables[table_index]
    except IndexError:
        #if the table doesn't exist then return an error
        return ['The table index was not found']
    #get a list of headers in the table
    table_headers = main_table.getElementsByTagName('th')
    #loop through each header looking for matches
    for position, header in enumerate(table_headers):
        #if the header is in the desired headers list,
        #record its column position
        if header.textContent in headers:
            header_index.append(position)
    #get the rows from the table
    rows = main_table.getElementsByTagName('tr')
    #loop through the rows in the table, skipping the header row
    for row in rows[1:]:
        #get all cells from the current row
        cells = row.getElementsByTagName('td')
        #initiate a list to append into the return_list
        cell_list = []
        #iterate through all of the wanted column indexes
        for i in header_index:
            #append the cell's text content to the cell_list
            cell_list.append(cells[i].textContent)
        #append the cell_list to the return_list
        return_list.append(cell_list)
    #return the return_list
    return return_list

#This function takes in the source code of the whole page, an int list of
#headers indicating the index numbers of the needed columns, and the index
#number of the table on the page. It returns a list of lists with the
#scraped info.
def no_header(source, headers, table_index):
    #initiate a list to hold the return list
    return_list = []
    #get a document object out of the source code
    doc = libxml2dom.parseString(source, html=1)
    #get the tables from the document
    tables = doc.getElementsByTagName('table')
    try:
        #try to get focus on the desired table
        main_table = tables[table_index]
    except IndexError:
        #if the table doesn't exist then return an error
        return ['The table index was not found']
    #get all of the rows out of the main_table
    rows = main_table.getElementsByTagName('tr')
    #loop through each row
    for row in rows:
        #get all cells from the current row
        cells = row.getElementsByTagName('td')
        #initiate a list to append into the return_list
        cell_list = []
        #loop through the list of desired column indexes
        for i in headers:
            try:
                #try to add text from the cell into the cell_list
                cell_list.append(cells[i].textContent)
            except IndexError:
                #a short row is missing this column; just continue
                continue
        #append the data scraped into the return_list
        return_list.append(cell_list)
    #return the return list
    return return_list
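Note that libxml2dom is an old, Python-2-era library that can be hard to install today. As a rough sketch of the same idea — collect the <th> texts, then pull the <td> cells from the matching columns — here is an equivalent built only on the standard library's html.parser. The class and function names below are my own for illustration, not from the original code, and the sketch assumes a single flat table (no nested tables):

```python
from html.parser import HTMLParser

class SimpleTableParser(HTMLParser):
    """Collects the text of <th> and <td> cells, row by row."""
    def __init__(self):
        super().__init__()
        self.rows = []        # list of rows; each row is a list of cell strings
        self.headers = []     # text of <th> cells, in document order
        self._cell = None     # buffer for the cell currently being read

    def handle_starttag(self, tag, attrs):
        if tag == 'tr':
            self.rows.append([])
        elif tag in ('td', 'th'):
            self._cell = ''

    def handle_data(self, data):
        # Only accumulate text while inside a cell.
        if self._cell is not None:
            self._cell += data

    def handle_endtag(self, tag):
        if tag == 'td' and self._cell is not None:
            self.rows[-1].append(self._cell.strip())
            self._cell = None
        elif tag == 'th' and self._cell is not None:
            self.headers.append(self._cell.strip())
            self._cell = None

def parse_table_stdlib(source, wanted_headers):
    """Return the cells of the columns whose <th> text is in wanted_headers."""
    p = SimpleTableParser()
    p.feed(source)
    # Column positions whose header text matched.
    idx = [i for i, h in enumerate(p.headers) if h in wanted_headers]
    # Skip empty rows (e.g. the header row, whose cells were <th>, not <td>).
    return [[row[i] for i in idx if i < len(row)] for row in p.rows if row]

html = """<table>
  <tr><th>Name</th><th>Price</th><th>Qty</th></tr>
  <tr><td>apple</td><td>1.50</td><td>3</td></tr>
  <tr><td>pear</td><td>2.00</td><td>5</td></tr>
</table>"""
print(parse_table_stdlib(html, ['Name', 'Qty']))
# [['apple', '3'], ['pear', '5']]
```

Unlike the libxml2dom version, this sketch ignores the table_index argument and parses the first (only) table it is fed; extending it to count <table> tags would follow the same callback pattern.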

Hopefully this article is of some help to you in your Python programming.
