UBot Underground

How would you go about doing this?



OK, this is what I'm trying to do:

 

Without UBot, I would normally just right-click the link on this page: http://links2rss.com/convert.php (the one titled "link") and then click "Save link as", which downloads the web page. How would you do that with UBot?

 

And please, no theory. I've been at this for hours; I need a working solution.

 

NOTE: I don't just want to scrape the URL to a file. I want to actually download or save the page itself. So please don't tell me how to save the link; I need to know how to download or save the page.


Never mind, I got it to work. But what I still don't get is: why can't I just save the files to a folder instead of a pre-existing file?

 

I don't want to have to create 250 files FIRST, ya know. Isn't there a way to just download continuously and have everything go into its own folder, like normal downloads do?


If you are saving to a file, the file does not have to exist. You can set the file name to a variable, and then add another, incremented variable that gives the file name a number.

So if you want to create a file named blue.xml, and you want to add a number to every other .xml file named blue, just set the word blue to a variable, set the number 0 or 1 to another variable, and increment the number variable each time you save.

When you're saving to the file, decide where you want the files saved. It would look something like this, for example:

 

Save to file > $document folder\{variable "blue"}{variable "number"}.xml

which produces blue1.xml, blue2.xml, and so on.
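For what it's worth, the same incremented-file-name logic looks like this outside UBot, as a minimal Python sketch. The base name "blue", the starting counter, and the Documents folder are just the example values from above, not anything UBot-specific:

    from pathlib import Path

    # Example values from the post above: base name "blue", counter starting at 1.
    base_name = "blue"
    counter = 1
    save_folder = Path.home() / "Documents"

    def next_filename() -> Path:
        # Builds blue1.xml, blue2.xml, ... and bumps the counter on each call.
        global counter
        path = save_folder / f"{base_name}{counter}.xml"
        counter += 1
        return path

    # Saving does not require the file to exist first; write_bytes creates it.
    for content in (b"<rss/>", b"<rss/>", b"<rss/>"):
        next_filename().write_bytes(content)  # blue1.xml, blue2.xml, blue3.xml

The point is the same as in UBot: the files are created at save time, so nothing has to exist beforehand.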

 



I posted this exact response on your other thread (which is virtually the same question):

 

Here you go man, a working example that does the following:

 

1. Visits the page where you need to save all the feeds
2. Scrapes that page's XML URLs and adds them to a list
3. Downloads each XML file from the list of URLs AND dynamically creates the file names for them

 

If you have any questions, let me know.

 

P.S. - It will save the files to "My Documents", with the first one named 1.xml.

 

If there were 3 URLs to download, they would be saved as:

 

1.xml

2.xml

3.xml

 

etc, etc, etc.

 

It uses an incremented variable as the file name when saving. You'll see when you download the example and check out the source code.
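For anyone following along without UBot, here is a rough Python equivalent of what the example does. The page URL is the one from the original question, and the assumption that the feed links end in .xml is mine, so adjust the filter to whatever the real links look like:

    import urllib.request
    from html.parser import HTMLParser
    from pathlib import Path
    from urllib.parse import urljoin

    PAGE = "http://links2rss.com/convert.php"
    SAVE_DIR = Path.home() / "Documents"

    class LinkScraper(HTMLParser):
        # Collects every href on the page, resolved against the page URL.
        def __init__(self):
            super().__init__()
            self.links = []
        def handle_starttag(self, tag, attrs):
            if tag == "a":
                for name, value in attrs:
                    if name == "href" and value:
                        self.links.append(urljoin(PAGE, value))

    # 1. Visit the page where the feeds are listed.
    html = urllib.request.urlopen(PAGE).read().decode("utf-8", "replace")

    # 2. Scrape the XML URLs into a list (assumption: the feed links end in .xml).
    scraper = LinkScraper()
    scraper.feed(html)
    xml_urls = [u for u in scraper.links if u.lower().endswith(".xml")]

    # 3. Download each file, using an incremented counter as the file name,
    #    exactly like the example: 1.xml, 2.xml, 3.xml, ...
    SAVE_DIR.mkdir(exist_ok=True)
    counter = 1
    for url in xml_urls:
        data = urllib.request.urlopen(url).read()
        (SAVE_DIR / f"{counter}.xml").write_bytes(data)
        counter += 1

The counter plays the same role as the incremented variable in the .ubot file: it is the entire file name, so none of the files need to exist before the loop runs.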

Example.ubot

