While the crawler is running, the log repeatedly shows `2019-04-03 12:25:25 [scrapy.extensions.logstats] INFO: Crawled 18852 pages (at 0 pages/min), scraped 1762 items (at 0 items/min)`, and it takes roughly one to two minutes before the crawler pulls URLs from the Redis queue again.
I've run into this problem as well. Did you ever solve it? @python-D
This happens because scrapy-redis spiders depend on the idle signal (`spider_idle`) to resume crawling: new URLs are only scheduled from Redis when the spider reports itself idle.
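The idle-driven fetch pattern described above can be sketched without Scrapy itself. The snippet below is a minimal, self-contained simulation (the `deque` stands in for the Redis list, `BATCH_SIZE` for a batch-size setting, and the function names are illustrative, not the real scrapy-redis API): requests are only scheduled in batches when the idle callback fires, so between batches the crawler sits at 0 pages/min until the next idle event.

```python
from collections import deque

# Hypothetical in-memory stand-in for the Redis list that scrapy-redis polls.
redis_queue = deque(f"url-{i}" for i in range(10))

BATCH_SIZE = 4  # illustrative stand-in for a batch-size setting

def fetch_batch():
    """Pop up to BATCH_SIZE urls from the queue, mimicking a batched fetch."""
    batch = []
    while redis_queue and len(batch) < BATCH_SIZE:
        batch.append(redis_queue.popleft())
    return batch

def on_spider_idle():
    # In scrapy-redis, the spider reacts to the idle signal by scheduling
    # the next batch of requests; until this callback runs, nothing new
    # is crawled, which shows up in the logs as 0 pages/min.
    return fetch_batch()

batches = []
while True:
    batch = on_spider_idle()
    if not batch:
        break  # queue drained; a real spider would wait for the next idle signal
    batches.append(batch)

print(batches)
```

In a real deployment the gap between batches is the time until the next idle signal fires, which is why the crawl rate drops to zero for a while even though the Redis queue still holds URLs.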
@Cehae I'd suggest using scrapy-frontera instead; it doesn't have this problem because it uses Kafka and HBase as its backend.