I am trying to scrape this URL: https://www.uvic.ca/BAN1P/bwckschd.p_disp_dyn_sched

There are three pages in total: the first page selects the term, the second page selects the subject, and the third page holds the actual course information.

The problem I am running into is that when courses() is called, the HTML of response.body that gets written to the file is the HTML of the subject page instead of the course page. How can I verify that I am sending the correct form data, so that I receive the correct response? It looks like some form data may be missing from subject().

# term(): 
# Selects the school term to use. Clicks submit 

def term(self, response): 
    return scrapy.FormRequest.from_response(
        response,
        formxpath="/html/body/div[3]/form",
        formdata={"p_term": "201705"},
        clickdata={"type": "submit"},
        callback=self.subject
    )

# subject(): 
# Selects the subject to query. Clicks submit 

def subject(self, response): 
    return scrapy.FormRequest.from_response(
        response,
        formxpath="/html/body/div[3]/form",
        formdata={"sel_subj": "ART"},
        clickdata={"type": "submit"},
        callback=self.courses
    )

# courses(): 
# Currently just saves all the html on the page. 

def courses(self, response): 
    page = response.url.split("/")[-1]
    filename = 'uvic-%s.html' % page
    with open(filename, 'wb') as f:
        f.write(response.body)
    self.log('Saved file %s' % filename)
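For what it's worth, I assume the body that was actually submitted is available inside the callback as response.request.body (Scrapy attaches the request that produced a response to it). A minimal sketch of logging it from courses():

def courses(self, response): 
    # response.request is the FormRequest that produced this response;
    # its body is the URL-encoded form data that was actually sent
    self.log('Submitted form body: %s' % response.request.body)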

Debug output:

2017-04-02 01:15:28 [scrapy.utils.log] INFO: Scrapy 1.3.3 started (bot: scrapy4uvic) 
2017-04-02 01:15:28 [scrapy.utils.log] INFO: Overridden settings: {'NEWSPIDER_MODULE': 'scrapy4uvic.spiders', 'SPIDER_MODULES': ['scrapy4uvic.spiders'], 'ROBOTSTXT_OBEY': True, 'BOT_NAME': 'scrapy4uvic'} 
2017-04-02 01:15:28 [scrapy.middleware] INFO: Enabled extensions: 
['scrapy.extensions.logstats.LogStats', 
'scrapy.extensions.telnet.TelnetConsole', 
'scrapy.extensions.corestats.CoreStats'] 
2017-04-02 01:15:28 [scrapy.middleware] INFO: Enabled downloader middlewares: 
['scrapy.downloadermiddlewares.robotstxt.RobotsTxtMiddleware', 
'scrapy.downloadermiddlewares.httpauth.HttpAuthMiddleware', 
'scrapy.downloadermiddlewares.downloadtimeout.DownloadTimeoutMiddleware', 
'scrapy.downloadermiddlewares.defaultheaders.DefaultHeadersMiddleware', 
'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware', 
'scrapy.downloadermiddlewares.retry.RetryMiddleware', 
'scrapy.downloadermiddlewares.redirect.MetaRefreshMiddleware', 
'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware', 
'scrapy.downloadermiddlewares.redirect.RedirectMiddleware', 
'scrapy.downloadermiddlewares.cookies.CookiesMiddleware', 
'scrapy.downloadermiddlewares.stats.DownloaderStats'] 
2017-04-02 01:15:28 [scrapy.middleware] INFO: Enabled spider middlewares: 
['scrapy.spidermiddlewares.httperror.HttpErrorMiddleware', 
'scrapy.spidermiddlewares.offsite.OffsiteMiddleware', 
'scrapy.spidermiddlewares.referer.RefererMiddleware', 
'scrapy.spidermiddlewares.urllength.UrlLengthMiddleware', 
'scrapy.spidermiddlewares.depth.DepthMiddleware'] 
2017-04-02 01:15:28 [scrapy.middleware] INFO: Enabled item pipelines: 
[] 
2017-04-02 01:15:28 [scrapy.core.engine] INFO: Spider opened 
2017-04-02 01:15:28 [scrapy.extensions.logstats] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min) 
2017-04-02 01:15:28 [scrapy.extensions.telnet] DEBUG: Telnet console listening on 127.0.0.1:6023 
2017-04-02 01:15:29 [scrapy.core.engine] DEBUG: Crawled (200) <GET https://www.uvic.ca/robots.txt> (referer: None) 
2017-04-02 01:15:29 [scrapy.core.engine] DEBUG: Crawled (200) <GET https://www.uvic.ca/BAN1P/bwckschd.p_disp_dyn_sched> (referer: None) 
2017-04-02 01:15:29 [scrapy.core.engine] DEBUG: Crawled (200) <POST https://www.uvic.ca/BAN1P/bwckgens.p_proc_term_date> (referer: https://www.uvic.ca/BAN1P/bwckschd.p_disp_dyn_sched) 
2017-04-02 01:15:29 [scrapy.core.engine] DEBUG: Crawled (200) <POST https://www.uvic.ca/BAN1P/bwckschd.p_get_crse_unsec> (referer: https://www.uvic.ca/BAN1P/bwckgens.p_proc_term_date) 
2017-04-02 01:15:30 [uvic] DEBUG: Saved file uvic-bwckschd.p_get_crse_unsec.html 
2017-04-02 01:15:30 [scrapy.core.engine] INFO: Closing spider (finished) 
2017-04-02 01:15:30 [scrapy.statscollectors] INFO: Dumping Scrapy stats: 
{'downloader/request_bytes': 2335, 
'downloader/request_count': 4, 
'downloader/request_method_count/GET': 2, 
'downloader/request_method_count/POST': 2, 
'downloader/response_bytes': 105499, 
'downloader/response_count': 4, 
'downloader/response_status_count/200': 4, 
'finish_reason': 'finished', 
'finish_time': datetime.datetime(2017, 4, 2, 8, 15, 30, 103536), 
'log_count/DEBUG': 6, 
'log_count/INFO': 7, 
'request_depth_max': 2, 
'response_received_count': 4, 
'scheduler/dequeued': 3, 
'scheduler/dequeued/memory': 3, 
'scheduler/enqueued': 3, 
'scheduler/enqueued/memory': 3, 
'start_time': datetime.datetime(2017, 4, 2, 8, 15, 28, 987034)} 
2017-04-02 01:15:30 [scrapy.core.engine] INFO: Spider closed (finished) 

Answer


I was able to get this working. It looks like the fix is:

formdata={ 
    "sel_subj": ["dummy", "ART"], 
} 
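Applied to the subject() method from the question, that change would look roughly like this (a sketch; only the sel_subj value differs, passing the hidden "dummy" entry and the real subject together as a list so the field is sent twice):

def subject(self, response): 
    return scrapy.FormRequest.from_response(
        response,
        formxpath="/html/body/div[3]/form",
        # a list value makes the field repeat: sel_subj=dummy&sel_subj=ART
        formdata={"sel_subj": ["dummy", "ART"]},
        clickdata={"type": "submit"},
        callback=self.courses
    )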

Here is how I went about debugging.

First, instead of saving the page to a file, you can call inspect_response while crawling:

def courses(self, response): 
    from scrapy.shell import inspect_response 
    inspect_response(response, self) 

This will open a shell with the response and request objects available, and you can call view(response) to open the HTML in your browser. It will also use an ipython or bpython shell if one is installed; the example below uses ipython for its convenient formatting.

Second, I checked in the browser (Firefox) what form data it sends when the button is clicked, copied that request body into the variable bar in the shell, and compared it against the body scrapy sent:

bar = '''term_in=201705&sel_subj=dummy&sel_day=dummy&sel_schd=dummy&sel_insm=dummy& 
     sel_camp=dummy&sel_levl=dummy              
     &sel_sess=dummy&sel_instr=dummy&sel_ptrm=dummy&sel_attr=dummy&sel_subj=ART&sel_crse 
     =&sel_title=&sel_schd                
     =%25&sel_insm=%25&sel_from_cred=&sel_to_cred=&sel_camp=%25&sel_levl=%25&sel_ptrm=%2 
     5&sel_instr=%25&begin_hh               
     =0&begin_mi=0&begin_ap=a&end_hh=0&end_mi=0&end_ap=a''' 
# split into arguments 
bar = sorted(bar.split('&')) 
# do the same with the request body that was sent by scrapy 
foo = sorted(request.body.split('&')) 
# now join these together and find the differences! 
zip(foo, bar) 
[('begin_ap=a', 'begin_ap=a'), 
('begin_hh=0', 'begin_hh\n=0'), 
('begin_mi=0', 'begin_mi=0'), 
('end_ap=a', 'end_ap=a'), 
('end_hh=0', 'end_hh=0'), 
('end_mi=0', 'end_mi=0'), 
('sel_attr=dummy', 'sel_attr=dummy'), 
('sel_camp=%25', 'sel_camp=%25'), 
('sel_camp=dummy', 'sel_camp=dummy'), 
('sel_crse=', 'sel_crse='), 
('sel_day=dummy', 'sel_day=dummy'), 
('sel_from_cred=', 'sel_from_cred='), 
('sel_insm=%25', 'sel_insm=%25'), 
('sel_insm=dummy', 'sel_insm=dummy'), 
('sel_instr=%25', 'sel_instr=%25'), 
('sel_instr=dummy', 'sel_instr=dummy'), 
('sel_levl=%25', 'sel_levl=%25'), 
('sel_levl=dummy', 'sel_levl=dummy\n'), 
('sel_ptrm=%25', 'sel_ptrm=%25'), 
('sel_ptrm=dummy', 'sel_ptrm=dummy'), 
('sel_schd=%25', 'sel_schd\n=%25'), 
('sel_schd=dummy', 'sel_schd=dummy'), 
('sel_sess=dummy', 'sel_sess=dummy'), 
('sel_subj=ART', 'sel_subj=ART'), 
('sel_title=', 'sel_subj=dummy'), 
('sel_to_cred=', 'sel_title='), 
('term_in=201705', 'sel_to_cred=')] 
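As an aside, zip() drifts out of alignment once the two lists have different lengths, which is why the last few pairs above look shuffled. A symmetric set difference shows only the mismatched entries directly (same foo and bar as above):

# entries present in only one of the two bodies; whitespace noise
# from the pasted multi-line string shows up here as well
sorted(set(foo) ^ set(bar))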

Looking at the pairs, you can see that "dummy" is missing from sel_subj in the request scrapy sent. (term_in should not be in there either, but it does not seem to have any effect.)