UBot Underground

Good methods for avoiding redundant data scraping?



Hi guys and experts!

 

I was just hoping to pick your brains here for good methods for speeding up scraping and avoiding redundant information.

 

So let's say a site is continuously updating with new information and has pages and pages of it.

 

Two questions:

1. Can you scrape every individual page as a separate thread to increase speed for scraping?

2. Any good methods of avoiding pages that have already been scraped? What methods do you use?

 

Hoping for interesting insight!

 

Thanks,

Allen


"With the pro and dev version you can multi-thread"
 

As far as I understand, multi-threading runs the same task in multiple threads, like creating an account. Can you pass variables such as page numbers into the different threads so that each one scrapes a different page?
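In general terms, yes, that is exactly how threaded scraping works: each worker gets its own page number as an argument. A minimal sketch in Python (the site URL and page range here are hypothetical placeholders, not a real target):

```python
# Sketch: each worker thread scrapes a different page number.
# The URL pattern and page count are made-up placeholders.
from concurrent.futures import ThreadPoolExecutor

def scrape_page(page_number):
    # A real bot would fetch and parse the page here;
    # this just returns a label standing in for the scraped data.
    url = f"http://example.com/listing?page={page_number}"
    return f"scraped {url}"

# map() hands each thread its own page number.
with ThreadPoolExecutor(max_workers=5) as pool:
    results = list(pool.map(scrape_page, range(1, 11)))

for row in results:
    print(row)
```

The key point is that the threads differ only in their input, so no two workers fetch the same page.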


You can try this: it compares the two lists and subtracts what has already been posted/done, leaving you with what has not been posted/done.

http://wiki.ubotstudio.com/wiki/$subtract_lists

 

HTH too.
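For anyone wanting to see the idea outside UBot, the same list subtraction is a one-liner in most languages. A rough Python equivalent (the URLs are placeholders): keep a persistent list of scraped pages, then filter it out of the full list before each run.

```python
# Sketch of the "subtract lists" idea: drop everything
# already scraped from the full list of page URLs.
# All URLs below are made-up placeholders.
all_pages = [
    "http://example.com/page/1",
    "http://example.com/page/2",
    "http://example.com/page/3",
]
already_scraped = ["http://example.com/page/1"]

# Set membership keeps the lookup fast; the list
# comprehension preserves the original order.
done = set(already_scraped)
remaining = [url for url in all_pages if url not in done]

print(remaining)  # only pages 2 and 3 are left to scrape
```

In practice you would save `already_scraped` to a file after each run so the subtraction survives restarts, which is the usual way to avoid redundant work between sessions.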

