ayhoung · Posted May 18, 2013

Hi guys and experts! I was hoping to pick your brains for good methods of speeding up scraping and avoiding redundant information. Say a site is continuously updating with new information and has pages and pages of it. Two questions:

1. Can you scrape every individual page in a separate thread to increase scraping speed?
2. Are there any good methods for avoiding pages that have already been scraped? What methods do you use?

Hoping for interesting insight!

Thanks,
Allen
AutomationNinja · Posted May 18, 2013

With the Pro and Dev versions you can multi-thread. You could record the URLs you have visited, add them to a list, and then compare new pages against that list.
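The pattern described here is language-agnostic: keep a persistent record of every URL already scraped, and skip anything that appears in it. A minimal Python sketch of the idea (not uBot syntax; the visited_urls.txt file name and the scrape_page callback are placeholders for illustration):

```python
# Skip pages that were already scraped by keeping a persistent
# record of visited URLs on disk.
import os

VISITED_FILE = "visited_urls.txt"  # placeholder path

def load_visited():
    """Return the set of URLs recorded by previous runs."""
    if not os.path.exists(VISITED_FILE):
        return set()
    with open(VISITED_FILE) as f:
        return {line.strip() for line in f if line.strip()}

def mark_visited(url):
    """Append a newly scraped URL to the on-disk record."""
    with open(VISITED_FILE, "a") as f:
        f.write(url + "\n")

def scrape_if_new(url, visited, scrape_page):
    """Run scrape_page(url) only if the URL has not been seen before."""
    if url in visited:
        return False  # already scraped, skip
    scrape_page(url)  # the actual scraping routine
    visited.add(url)
    mark_visited(url)
    return True
```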
ayhoung · Posted May 18, 2013

"With the pro and dev version you can multi-thread"

As far as I understand, multi-threading does the same thing in multiple threads, like creating an account. Can you pass variables into different threads, such as page numbers, so that each thread scrapes a different page?
AutomationNinja · Posted May 18, 2013

Yes, you can.
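Setting uBot's own threading commands aside, the general pattern is to hand each thread its own page number as an argument. A minimal Python sketch (the URL pattern and the scrape_page body are made up for illustration):

```python
# Each thread scrapes a different page by receiving its own page number.
import threading

BASE_URL = "http://example.com/listing?page={}"  # placeholder URL pattern

def scrape_page(page_number):
    """Placeholder for the real per-page scraping logic."""
    url = BASE_URL.format(page_number)
    print("scraping", url)

threads = []
for page in range(1, 11):  # one thread per page, pages 1-10
    t = threading.Thread(target=scrape_page, args=(page,))
    t.start()
    threads.append(t)

for t in threads:
    t.join()  # wait until every page is done
```

In practice you would cap the number of simultaneous threads with a worker pool rather than spawning one per page, so you don't hammer the target site.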
Code Docta (Nick C.) · Posted May 18, 2013

You can try this: it basically compares the two lists, subtracts what has already been posted/done, and leaves you with what has not been posted/done.

http://wiki.ubotstudio.com/wiki/$subtract_lists

HTH
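$subtract_lists is uBot's own function; the same idea in any language is a list difference, removing every already-done item from the full list. A rough Python equivalent (assuming, as a sketch, that order should be preserved; the example URLs are made up):

```python
# Subtract the "done" list from the full list, keeping order.
all_urls = [
    "http://example.com/page1",
    "http://example.com/page2",
    "http://example.com/page3",
]
done_urls = ["http://example.com/page2"]

done = set(done_urls)
remaining = [url for url in all_urls if url not in done]
print(remaining)  # ['http://example.com/page1', 'http://example.com/page3']
```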