UBot Underground

chris weber

Members
  • Content Count

    16
  • Joined

  • Last visited

Community Reputation

0 Neutral

About chris weber

  • Rank
    Member


  1. Never mind, I got it to work. I just have it replacing the JavaScript error lines with a dummy URL that goes to google.com, so it doesn't error out on those lines. Thanks again for all the help, Lily :)
  2. THANK YOU!!! That works great for what I need to do. My next question: after I save those URLs, I have another script that loads the file and navigates to each URL. However, when the JavaScript line gets removed from the list it leaves a blank line behind, so that script stops and errors when it reaches the blank line, because it doesn't know how to navigate to a blank site. Is there any way to remove that line entirely so the list is just a continuous flow of URLs? I also attached my scrubbed list of URLs. Thanks again for your amazing help :) business links.txt
  3. Hi Lily, Thanks for that great info. I have tried logging in to my Google account through the script and then scraping the links, but I am still having the JavaScript problems. I attached the txt file that has the line of JavaScript code in it. If you know how I could search the text file and remove those lines, that would be great. Thanks again :) business links.txt
  4. Hey Lily, Here is my Google Maps scraper as of right now. You may need to change the file paths in it for it to work. The problem I'm having is in the link-scraping script. I have a sub named "delete bad links" that I'm trying to get to load the file, search through it, delete the bad lines, and then save the scrubbed list of URLs back out to the file. To run it, just enter a city like "los angeles" and a business type like "dentist"; it will search Google Maps and get the links, but when it saves them to a file you will see the file has at least
  5. Hi everyone, I am having a bit of a problem with my bot again while trying to use the $replace command. I have a list of URLs that I have scraped. For some reason a few of the scraped lines contain "javascript:void(0)" instead of a website URL. What I want to do is load in this file of URLs and use the $replace command to replace every line that says "javascript:void(0)" with $nothing. I'd then like to save the scrubbed URL list back out to a file. If anyone knows how I can accomplish this, please let me know. Thanks again, Chris
  6. AWESOME. That is just what I was looking for. Thank you, Thank you, Thank you:)
  7. Hey everyone, I am working on a bot right now, and as part of it I need to go to Google Maps, enter a search term like "los angeles dentists", and then scrape the data from, say, the first 5 pages or so. I really only need the business name, address, and phone number, but if I have to scrape it all that would be ok too. My question is how to do this. If I try to scrape the way I normally would, selecting a row and right-clicking it, I don't get the <TR> option to scrape like I would if I were just scraping on google.com. So if anyone kn
  8. WOW....THANK YOU SOOOOOOOOOOOOOOOOOOOOOOOOOOOO much, you are a god. It works AMAZINGLY. You just saved me from having to stare at my computer screen all day. Thank you so much:)
  9. IRobot, did the file download for you all right?
  10. I have tried using all kinds of threads and delays in different spots, but nothing seems to work for the second save dialog button.
  11. I have that option enabled, yes. For me it only clicks the first save dialog, but it won't click the second one when the "Save As" window pops up. So it automatically went through the whole thing and downloaded the file completely without you having to do anything? Thanks again for the help, by the way :) I really appreciate it.
  12. All right, here is my bot so far. Basically it goes to a proxy site, chooses the proxy options, enters my specific code, and then filters the results for the proxies I wanted. The site also gives me an option to download all of the filtered proxies to a txt file. I have gotten the bot to click the download button and then click the save button on the first popup (the one that asks you to Open, Save, Close). However, the problem comes when the "Save As" dialog comes up. The preferred file name is already entered into the dialog field, so all I want to do is cl
  13. Hey everyone, I am new here and new to ubot as well. I am working on a script right now that logs into a bunch of accounts on a website that has a bunch of stats. I was wondering if it is possible to use ubot, once it logs into an account to take a screenshot of the current page it is at and save the screenshot out as an image. I would very much appreciate any help that could be provided.
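The list scrubbing chris keeps coming back to in posts 1–5 (replacing "javascript:void(0)" with $nothing leaves blank lines that then break the navigation script) is really a filtering job: drop the bad lines instead of blanking them. Here is a minimal Python sketch of that idea, not uBot code; the `scrub_urls`/`scrub_file` names and the "business links.txt" path are illustrative assumptions based on the attachment mentioned above:

```python
# The line that shows up in the scraped list instead of a real URL.
BAD_LINE = "javascript:void(0)"

def scrub_urls(lines):
    """Strip whitespace, then drop blank lines and javascript:void(0) entries."""
    return [line for line in (raw.strip() for raw in lines)
            if line and line != BAD_LINE]

def scrub_file(path):
    """Rewrite the URL file in place, keeping only the good lines."""
    with open(path, encoding="utf-8") as f:
        cleaned = scrub_urls(f)
    with open(path, "w", encoding="utf-8") as f:
        f.write("\n".join(cleaned) + "\n")
```

Calling `scrub_file("business links.txt")` rewrites the attached file in place with no gaps left behind. The same filter-then-save approach can be mirrored in uBot by looping over the scraped list and only adding non-blank, non-javascript:void(0) entries to a new list before saving it.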