fakankun (November 6, 2010): I want to download files offline to a folder, not to a file. I'm not creating a list, I'm creating a folder of XML files. How would I go about doing this?
IRobot (November 6, 2010): Where are you 'downloading' the files offline from?
fakankun (November 6, 2010): Well, it's saving a "link as": it downloads the page behind the link. It's online, not offline, sorry.
IRobot (November 6, 2010): If each link is a URL for a specific file, why not use the 'download file' command?
fakankun (November 6, 2010): I am. But the thing is, in the download file node I have to choose a file to save the download to. I don't understand why, but I can't just save the download to a folder instead. When I try to, I get an error when running the script saying it couldn't download.
IRobot (November 6, 2010): If it shows a cannot download error, it's likely that an incorrect URL has been selected for the download file command. If you need help selecting the URL, post a partial screenshot or the .ubot file.
fakankun (November 6, 2010): No, I've successfully downloaded the page before, so that command is correct. The problem is it only works when I save the file as a file that already existed. The download file node asks for the URL and the file, and I have to select a file to save the download to for some reason. When I do this it works fine, but I don't want to have to do that 250 times a day. I want to download the file into a folder, but it won't even let me pick a folder. I have to pick a file and then erase the file name so only the folder is showing, like this: user/documents/ubots/blank.xml becomes user/documents/ubots/. When I try the second one I get an error that it can't download, and I know it won't download because it doesn't have a pre-picked file to save to.
IRobot (November 6, 2010): Quoting fakankun: "The problem is it only works when I save the file as a file that already existed." The 'download file' command works even if the specified file does not physically exist. Even when you download a file from IE, it asks you to specify a file, not a folder. It may help if you post a screenshot of your ubot code.
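For readers following along without ubot, here is a minimal sketch in Python of the same behavior: a download must target a file path, but that file need not exist before the download runs. The URL and paths below are hypothetical stand-ins, not anything from the thread.

```python
# Sketch: a download target must be a *file* path; the file itself
# does not need to exist yet, but a bare folder path is not valid.
import urllib.request

url = "http://example.com/feed.xml"                    # hypothetical feed URL
target = r"C:\Users\me\Documents\ubots\new_feed.xml"   # file does not exist yet

urllib.request.urlretrieve(url, target)                # creates new_feed.xml

# A bare folder, by contrast, fails much like ubot's 'download file' does,
# because a directory is not a writable file target:
# urllib.request.urlretrieve(url, r"C:\Users\me\Documents\ubots\")
```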
fakankun (November 6, 2010): Here is a screenshot: http://prpackets.com/working.jpg It works this way, but if I remove the jj.xml from the end it won't work. When I make the download command, the file part asks me to save something, but there is nothing to save.
IRobot (November 6, 2010): In the above scenario, 'download file' is unlikely to be the best option. A better way to do the above is to add the scraped attributes to a list, and then loop through the list and save each item to file.
fakankun (November 6, 2010): Right, I thought about that, but this isn't a zip file or something that downloads automatically, it's an XML page. If I grab the link, it would be like grabbing google.com. How would I be able to download that?
IRobot (November 6, 2010): Quoting fakankun: "Right, I thought about that, but this isn't a zip file or something that downloads automatically, it's an XML page. How would I be able to download that?" Not sure what you mean - your code above implies you are scraping a list of .xml URLs, not a zip file. After you've chosen by attribute, use the $add to list command. This list will contain the list of .xml file URLs.
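A rough Python sketch of that 'choose by attribute' plus '$add to list' step, for readers without ubot: collect every link pointing at an .xml file into a list. The page URL is the one given later in the thread; the assumption that the feed links can be recognized by an href ending in ".xml" is mine.

```python
# Sketch: gather the .xml links from the page into a Python list,
# which plays the role of ubot's list.
import urllib.request
from html.parser import HTMLParser
from urllib.parse import urljoin

PAGE = "http://links2rss.com/convert.php"    # page from later in the thread

class XmlLinkCollector(HTMLParser):
    def __init__(self, base_url):
        super().__init__()
        self.base_url = base_url
        self.links = []                      # the scraped list of URLs

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            href = dict(attrs).get("href") or ""
            if href.endswith(".xml"):        # assumed attribute pattern
                self.links.append(urljoin(self.base_url, href))

html = urllib.request.urlopen(PAGE).read().decode("utf-8", "replace")
collector = XmlLinkCollector(PAGE)
collector.feed(html)
print(collector.links)                       # the list of .xml file URLs
```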
fakankun (November 6, 2010): I don't want the URLs though, I want the downloadable page.
IRobot (November 6, 2010): I think you mean xml files - which is what you'll be getting. If you don't understand, I'd suggest going through the scraping and variable tutorials (again).
fakankun (November 6, 2010): Lol, OK, this is the page I'm stuck on: http://links2rss.com/convert.php It tells me to right-click the "link 1" and save the page. If I just scrape the "link 1" URL to a file, I won't be able to go back later and save the page.
IRobot (November 6, 2010): It may help if you read what was written in post #12. You won't need to go back to 'save the page', because the URL of the xml file will be in a list.
fakankun (November 6, 2010): I want to download the entire page as a .xml file and upload it to my own domain. You know, like when you go into your browser, click "save page as" and download a Firefox page... I can only do this from the "link 1" part. If I scrape the URL, I'll only be able to open the page, not download it, and that won't do me any good. You see what I'm saying? If I collect the URL, how will I then "save page as"? But if I right-click and "save link as", it's just like going into my browser and clicking "save page as". If I scrape the URL it will just be a URL to a page, but a page I can no longer download.
IRobot (November 6, 2010): If you wish to save the current web page, then use:

download file
    URL: Document Constants > $url
    File: myfilename.ext
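As a small Python analogue of that $url trick: fetch whatever URL the browser is currently on and write it straight to a file. Here current_url and the file name are stand-ins; in ubot the URL would come from Document Constants > $url.

```python
# Sketch: save the "current page" by re-fetching its URL to a file.
import urllib.request

current_url = "http://links2rss.com/some_feed.xml"   # hypothetical current page
with urllib.request.urlopen(current_url) as resp, \
        open("myfilename.xml", "wb") as out:
    out.write(resp.read())
```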
fakankun (November 6, 2010): So I scrape the URLs first? Then open the page and save it?
fakankun (November 6, 2010): How about you tell me how you would download the page, and I'll try it and tell you if it works.
crazyflx (November 6, 2010): Here you go man, a working example that does the following:

- Visits the page where you need to save all the feeds
- Scrapes that page's XML URLs and adds them to a list
- Downloads each XML file from the list of URLs AND dynamically creates the file names for them

If you have any questions, let me know.

P.S. - It will save the files to "My Documents" with the file name 1.xml. If there were 3 URLs to download, they would be saved as 1.xml, 2.xml, 3.xml, etc. It uses an incremented variable as the file name to save; you'll see when you download the example and check out the source code.

Attachment: Example.ubot
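The .ubot attachment can't be shown inline, but a Python sketch of the logic the post describes might look like this. The xml_urls list stands in for the scraped list from the earlier step (the entries shown are hypothetical); the save location and names follow the post: My Documents, with an incremented counter as the file name.

```python
# Sketch of the described Example.ubot logic: loop over the scraped
# .xml URLs and save each one under an incremented file name.
import os
import urllib.request

xml_urls = [                                  # stand-in for the scraped list
    "http://links2rss.com/feeds/1.xml",       # hypothetical entries
    "http://links2rss.com/feeds/2.xml",
]

docs = os.path.join(os.path.expanduser("~"), "Documents")

for i, url in enumerate(xml_urls, start=1):   # i is the incremented variable
    target = os.path.join(docs, f"{i}.xml")   # 1.xml, 2.xml, 3.xml, ...
    urllib.request.urlretrieve(url, target)
    print(f"saved {url} -> {target}")
```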