UBot Underground

APTS

Members
  • Content Count: 64
  • Joined
  • Last visited
  • Days Won: 1

Everything posted by APTS

  1. I am trying to scrape some real estate data from this page: http://v3.torontomls.net/Live/Pages/Public/Link.aspx?Key=6106f46dc223411685c459310be3c8c0&App=TREB This page lists 29 separate properties, each identified by a unique MLS#. There is no problem scraping all of the table data at the top of the page, but I am having difficulty scraping the more detailed information that appears below the table. For example, the first piece of data that I am trying to scrape is the "Sold:" value, which appears in the top right-hand corner of each property record (see the scraping sketch after this list). This is the HTML for the "Sold:"
  2. This worked for me too! Thanks for posting your solution.
  3. Thank you Docta, for taking away my code pain. Your advice worked perfectly. Thanks for pointing me in the right direction.
  4. I am trying to scrape all the photos from this page: http://v3.torontomls.net/Live/Pages/Public/Link.aspx?Key=2091fd55c22344748e1f3a9ef24ff150&App=TREB As you can see, this page contains a list of homes for sale, with multiple photos for each home. I am having difficulty figuring out how to scrape each of these photos (see the scraping sketch after this list). All of the individual links to the photos are inside this: <img src="http://v3.torontomls.net/Live/photos/FULL/1/620/N3257620.jpg?20150710120456" onerror="this.className += ' imgerror'; this.parentNode.className += ' hasimgerror';" class="formitem imageset multi-p
  5. I am trying to connect to my gmail account, loop through the emails, and scrape certain links from each email. Everything seems to be working fine up until the line where it deletes the email after the "add list to list" command. I am getting an error that says: "Script Error Error: Socket not ready for send/recv" "Source: > torontomls > connect to mail server > loop > if > then > delete mail" When I step through the code and look at the debugger I can see that the "add list to list" command actually works, so I know the code is making it that far. I can also see that
  6. Thanks Dan. That Advanced Ubot 2 plugin looks interesting. I will buy that and play with it to see if it can solve my issue.
  7. Hello fellow Ubotters, I would like to get some advice on best practices for automating a bot that has a large number of pages to scrape. First I will give a little bit of background, and then hopefully someone can give me a few good ideas to implement. There is one site that I would like to scrape, and I need to pass a series of unique URLs to the site. With each loop, I write the unique URL into a separate table so that I can keep track of which ones have been done and which ones still need to be done (a progress-tracking sketch follows this list). Perhaps an example will help to demonstrate my situation
  8. Nice! Thanks Copper. I felt like I was in handcuffs for a while. I'm free!
  9. Thanks for your suggestion. We have tried your code and unfortunately we are getting both columns in the URL. We only want column 1 in the URL and column 2 as the file name. Are you sure we should be using "add list to list"? Should we be using "add item to list" instead? (See the add item to list sketch after this list.)
  10. Hello All, Newbie here, asking what will hopefully be a simple question for someone to answer. I am currently using the following two lines of code to navigate to a URL and then save the browser image: navigate("http://mysite.ca/s/v/49_James_St_Toronto_Ontario_Canada", "Wait") save browser image("C:\\mypath\\browserimage.jpg") The above works fine and saves a single image of the static address passed through the URL, but the next step is to incorporate this into a loop (see the looping sketch after this list). I want to be able to loop through the list of addresses in Column 1 of the attached CSV file, and
  11. Thanks so much Gogetta. Much appreciated. I have another question to ask, but I think it would be best if I start a new thread since it is not related to this one.
  12. I built a page that displays a Google Street View based on an address passed through a URL parameter. What I would like to do is loop through my list of URLs, display the resulting Street View on the page, and then capture the Street View image in some way. I understand that the actual Street View control is not an image, but I figure there must be some way to capture what is displayed on the screen, perhaps through a screen snapshot. The Street View is the only thing on the page, so I want to capture the entire page (the looping sketch after this list covers this with save browser image). Can someone point me in the right direction as to how I can g
  13. Hi Praney - Can you tell me the name of the Ubot 4.x solution you are referring to? (Thanks for all your great work, by the way!)
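
For the scraping questions in posts 1 and 4, the usual UBot pattern is $scrape attribute wrapped in add list to list. The sketch below is only a starting point: the <class="sold-value"> selector is a guess (the real HTML is cut off in post 1), and the folder paths and variable names are placeholders, so adjust them to whatever the element browser shows for the actual page.

    comment("Post 4: collect every image URL on the page, then download each file")
    clear list(%photos)
    add list to list(%photos, $scrape attribute(<tagname="img">, "src"), "Delete", "Global")
    set(#i, 0, "Global")
    loop($list total(%photos)) {
        download file($list item(%photos, #i), "C:\\mypath\\photos\\photo{#i}.jpg")
        increment(#i)
    }
    comment("Post 1: same idea for a single value - the class selector here is hypothetical")
    set(#sold, $scrape attribute(<class="sold-value">, "innertext"), "Global")

If the page mixes listing photos with interface graphics, filtering %photos for the "/photos/FULL/" part of the URL before the download loop keeps only the property shots.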
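
For the batch question in post 7, one simple way to make a long run restartable is to keep a plain-text log of finished URLs and skip anything already logged, so the bot can be stopped and relaunched without redoing work. A minimal sketch, assuming the URLs live in urls.txt and the log in done.txt (both paths, and the variable names, are placeholders):

    comment("Read the log of URLs that have already been processed")
    set(#done, $read file("C:\\mypath\\done.txt"), "Global")
    clear list(%urls)
    add list to list(%urls, $list from file("C:\\mypath\\urls.txt"), "Delete", "Global")
    set(#i, 0, "Global")
    loop($list total(%urls)) {
        set(#url, $list item(%urls, #i), "Global")
        if($contains(#done, #url)) {
            then {
                comment("Already done on a previous run - skip it")
            }
            else {
                navigate(#url, "Wait")
                comment("... scrape and save the page data here ...")
                append to file("C:\\mypath\\done.txt", "{#url}{$new line}", "End")
            }
        }
        increment(#i)
    }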
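
On the question in post 9: add list to list appends an entire list in one go (for example, everything a $scrape attribute call returns), while add item to list appends a single value. If only one table cell should end up as the file name, add item to list is the one to use. Two contrasting lines, with placeholder names:

    comment("Appends one value: the cell in column 2 (index 1) of the current row")
    add item to list(%filenames, $table cell(&data, #row, 1), "Don't Delete", "Global")
    comment("Appends many values at once: every src attribute scraped from the page")
    add list to list(%urls, $scrape attribute(<tagname="img">, "src"), "Delete", "Global")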
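
For the loop in post 10 (and the Street View capture in post 12, which follows the same pattern), here is a minimal UBot sketch. It assumes the CSV has the address slug in column 1 and the desired file name in column 2; the table name, row counter, file paths, and the five-second wait are all placeholders to adjust.

    comment("Column 1 (index 0) holds the address slug, column 2 (index 1) the output file name")
    create table from file("C:\\mypath\\addresses.csv", &addresses)
    set(#row, 0, "Global")
    loop($table total rows(&addresses)) {
        navigate("http://mysite.ca/s/v/{$table cell(&addresses, #row, 0)}", "Wait")
        comment("Give slow content such as Street View a few extra seconds to render")
        wait(5)
        save browser image("C:\\mypath\\{$table cell(&addresses, #row, 1)}.jpg")
        increment(#row)
    }

Because save browser image captures the whole visible page, the same loop covers the Street View case in post 12: navigate to each parameterised URL, wait for the control to render, and save the snapshot.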