Hello - I hope you have a good day. A happy weekend should be a happy coding day :smile:
OK, today I will not move the scripting forward; instead I'd like to modify the previous script. The script below is from #2:
import re
import urllib2
from bs4 import BeautifulSoup

# url is the page to scrape (set earlier, in #2's script)
scraper = [
    ["hatenablog.com", "div", "class", "entry-content"],
    ["qiita.com", "section", "itemprop", "articleBody"]
]

# count up c until the domain that matches url is found
c = 0
for domain in scraper:
    print url, domain[0]
    if re.search(domain[0], url):
        break
    c += 1

response = urllib2.urlopen(url)
html = response.read()
soup = BeautifulSoup(html, "lxml")
soup.original_encoding
tag = soup.find(scraper[c][1], {scraper[c][2]: scraper[c][3]})

# strip the tags with a regular expression and keep the text
text = ""
for con in tag.contents:
    p = re.compile(r'<.*?>')
    text += p.sub('', con.encode('utf8'))
Yes, it works, but I want to (1) use BeautifulSoup instead of a regular expression and (2) use a hash list instead of counting inside the for loop.
(1) BeautifulSoup
soup = BeautifulSoup(html, "lxml")
soup.original_encoding
tag = soup.find(scraper[c][1], {scraper[c][2]: scraper[c][3]})
text = ""
for con in tag.contents:
    p = re.compile(r'<.*?>')
    text += p.sub('', con.encode('utf8'))
Regular expressions are a strong tool, but I have to learn Beautiful Soup more. Beautiful Soup uses its own type (NavigableString) for strings, and the user's guide shows how to work with it. I modified the code as below.
soup = BeautifulSoup(html, "lxml")
soup.original_encoding
tag = soup.find(scraper[c][1], {scraper[c][2]: scraper[c][3]})
# re-soup the extracted tag and join every string inside it
soup2 = BeautifulSoup(tag.encode('utf8'), "lxml")
print "".join([string.encode('utf8') for string in soup2.strings])
Looks smarter? :satisfied: You get another soup just for extracting the strings. Which one do you like?
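By the way, the re-souping step might not even be needed: as far as I can tell from the documentation, a Tag object exposes the same .strings generator as the soup itself, so the shorter sketch below (untested, reusing the tag variable from above) should give the same output without building soup2.

# a minimal sketch, assuming tag is the element found above;
# Tag objects expose .strings just like the soup object does
print "".join(s.encode('utf8') for s in tag.strings)
# tag.get_text() is a related one-call shortcut that joins the strings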
(2) A hash list for selecting the splitter strings. Look at this part:
scraper = [
    ["hatenablog.com", "div", "class", "entry-content"],
    ["qiita.com", "section", "itemprop", "articleBody"]
]
c = 0
for domain in scraper:
    print url, domain[0]
    if re.search(domain[0], url):
        break
    c += 1
To get the splitter strings for each website, I used c as a count-up integer. That's not cool. So I modified it as below.
scraper = [
    ["hatenablog.com", "div", "class", "entry-content"],
    ["qiita.com", "section", "itemprop", "articleBody"]
]

# hash list: map each domain to its index in scraper
numHash = {}
for i in range(len(scraper)):
    numHash[scraper[i][0]] = i

for domain in scraper:
    print url, domain[0]
    if re.search(domain[0], url):
        c = numHash[domain[0]]
        break
Yes, it becomes longer, but I think it's much better than the previous version, isn't it?
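If I wanted it even shorter, a dict comprehension could build numHash in one line, and going one step further, scraper itself could be a hash keyed by domain, so that no index c is needed at all. This is only a rough sketch of that idea (my own variation, reusing the url and soup variables from above):

# one-line version of the numHash build
numHash = {entry[0]: i for i, entry in enumerate(scraper)}

# going further: key the table by domain and drop the index entirely
scraperByDomain = {
    "hatenablog.com": ("div", "class", "entry-content"),
    "qiita.com": ("section", "itemprop", "articleBody"),
}
for domain, (name, attr, value) in scraperByDomain.items():
    if re.search(domain, url):
        tag = soup.find(name, {attr: value})
        break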
Great, next time I hope I can proceed to the next step: scraping the links and tag lists that will be the basis for the learning part. When will we get to machine learning?... It's about to be called a fraud.