Rietveld Code Review Tool

Delta Between Two Patch Sets: sitescripts/crawler/bin/extract_crawler_sites.py

Issue 8327353: Crawler backend (Closed)
Left Patch Set: README fix Created Sept. 14, 2012, 2:42 p.m.
Right Patch Set: Created Sept. 27, 2012, 2:15 p.m.
The delta between the two patch sets, shown as a unified diff ("-" marks lines from the left patch set, "+" lines from the right, unmarked lines are unchanged). Review comments are indented with ">" below the lines they refer to; comments truncated on the review page are left truncated here.

 # coding: utf-8

 # This Source Code is subject to the terms of the Mozilla Public License
 # version 2.0 (the "License"). You can obtain a copy of the License at
 # http://mozilla.org/MPL/2.0/.

-import os, re, subprocess
+import MySQLdb, os, re, subprocess
 from sitescripts.utils import get_config

 def hg(args):
   return subprocess.Popen(["hg"] + args, stdout = subprocess.PIPE)

 def extract_urls(filter_list_dir):
   os.chdir(filter_list_dir)
   process = hg(["log", "--template", "{desc}\n"])
   urls = set([])

-  while True:
-    line = process.stdout.readline()
-    if line == "":
   > Wladimir Palant 2012/09/14 17:24:18: Is this really a good break condition? An empty co…
   > Felix Dahlke 2012/09/26 15:20:30: Done. An empty commit line would be "\n", but you'…
-      break
-
-    matches = re.match(r"[A-Z]:.*(https?://.*)", line)
   > Wladimir Palant 2012/09/14 17:24:18: What if we have some additional text following the…
   > Felix Dahlke 2012/09/26 15:20:30: Done.
-    if not matches:
+  for line in process.stdout:
+    match = re.search(r"\b(https?://\S*)", line)
+    if not match:
       continue

-    url = matches.group(1).strip()
+    url = match.group(1).strip()
     urls.add(url)

   return urls

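The revised extract_urls iterates over the hg log output directly and lets re.search find a URL anywhere in the commit message, with \S* stopping at the first whitespace. A minimal sketch of that extraction logic (Python 3 here, unlike the Python 2 script; the sample commit messages are invented for illustration):

```python
import re

# Invented sample commit messages, shaped like `hg log --template "{desc}\n"` output.
lines = [
    "A: http://example.com/ads.txt blocks too much\n",
    "Commit message without any URL\n",
    "B: see https://example.org/list.txt (false positive)\n",
]

urls = set()
for line in lines:
    # re.search finds the URL anywhere in the line, and \S* stops at the
    # first whitespace, so text following the URL is not captured.
    match = re.search(r"\b(https?://\S*)", line)
    if not match:
        continue
    urls.add(match.group(1).strip())

print(sorted(urls))
# -> ['http://example.com/ads.txt', 'https://example.org/list.txt']
```

Iterating over the stream (`for line in process.stdout`) also removes the hand-written EOF check from the first patch set: the loop simply ends when the stream is exhausted.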
 def print_statements(urls):
   for url in urls:
-    print "INSERT INTO crawler_sites (url) VALUES ('" + url + "');"
   > Wladimir Palant 2012/09/14 17:24:18: While this might not be a big issue here, failing…
   > Felix Dahlke 2012/09/26 15:20:30: Done. I'd rather stick to generating a file with s…
+    escaped_url = MySQLdb.escape_string(url)
+    print "INSERT INTO crawler_sites (url) VALUES ('" + escaped_url + "');"

 if __name__ == "__main__":
   filter_list_dir = get_config().get("crawler", "filter_list_repository")
   urls = extract_urls(filter_list_dir)
   print_statements(urls)
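The escaping thread above is why the second patch set runs each URL through MySQLdb.escape_string before building the INSERT statement: a URL containing a quote would otherwise break the generated SQL. As a rough illustration of what that escaping guards against, here is a hypothetical escape_sql_string helper — not the real MySQLdb function and no substitute for it, covering only the common special characters:

```python
def escape_sql_string(value):
    # Hypothetical stand-in for MySQLdb.escape_string, for illustration only:
    # it handles the common special characters but ignores charset subtleties.
    replacements = [
        ("\\", "\\\\"),  # backslash first, so later escapes are not doubled
        ("'", "\\'"),
        ('"', '\\"'),
        ("\n", "\\n"),
        ("\r", "\\r"),
        ("\x00", "\\0"),
    ]
    for char, escaped in replacements:
        value = value.replace(char, escaped)
    return value

url = "http://example.com/it's-a-trap"
print("INSERT INTO crawler_sites (url) VALUES ('%s');" % escape_sql_string(url))
# -> INSERT INTO crawler_sites (url) VALUES ('http://example.com/it\'s-a-trap');
```

When the statements are executed against a live connection instead of being written to a file, parameterized queries (e.g. cursor.execute("INSERT INTO crawler_sites (url) VALUES (%s)", (url,))) sidestep manual escaping entirely.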
