UBot Underground

mdc101

Fellow UBotter
  • Content Count: 119
  • Joined
  • Last visited
  • Days Won: 2

Everything posted by mdc101

  1. Hi John, thanks for the great video. That really helped a lot and I learnt a great deal. Got it working. Thanks and regards, Matt
  2. Hi Guys, I am trying to get the following to work, but for some reason it is not scraping the Google competing-pages results. Could you have a look at what I am doing wrong please? set(#CBP_phrase, $page scrape("About ", " results (*.* seconds)"), "Global") I added the *.* as wildcards but am not sure whether they work here. I am after the number between "About" and "results". Any help will be appreciated. Thanks, Matt
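     For reference, the matching logic is easy to prototype outside UBot. A minimal TypeScript sketch, where the sample string and the scrapeResultCount helper are hypothetical stand-ins for the page text and the $page scrape call:

     ```ts
     // Sample of the line Google prints above the results (an assumption).
     const sample = "About 1,230,000 results (0.45 seconds)";

     // Capture the digits (with separators) between "About " and " results",
     // the same left/right anchoring that $page scrape uses.
     function scrapeResultCount(pageText: string): string | null {
       const match = pageText.match(/About ([\d,.]+) results/);
       return match ? match[1] : null;
     }

     console.log(scrapeResultCount(sample)); // "1,230,000"
     ```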
  3. Thanks DarkAngel, that did the trick. Appreciate your help. Regards, Matt
  4. Is there a way to have all check boxes checked on a page using UBot 4? I am working with a webpage that provides a different number of check boxes depending on the query made. All I want is to select all the checkboxes that appear on the page. I have asked the developers to add a check-all box, but it's just not getting done, so is there a way we can populate these checkboxes? The checkbox markup is <input type="checkbox" value="1" name="syns[852619]" id="syns_852619"> and the wildcards are name="syns[*]" id="syns_*". Do you use a loop to check each one, or is there a JavaScript trick that can be used?
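     One way to sidestep the wildcard names entirely is a small piece of browser-side JavaScript that ticks every checkbox on the page, which UBot 4 can inject via its run javascript command (check the exact node name in your toolbox). A sketch, written as TypeScript that compiles to plain DOM code:

     ```ts
     // Select on the input type rather than the syns[*] name/id wildcards,
     // so it works however many checkboxes the query produced.
     const boxes = document.querySelectorAll<HTMLInputElement>('input[type="checkbox"]');
     boxes.forEach((box) => {
       box.checked = true;
     });
     ```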
  5. Hi Guys, I was wondering if someone could assist me. I have just installed UBot 4, have noticed all the changes, and have gotten lost; it seems I have to start from scratch learning to build the bots. What I want to do is run a simple app that scrapes data from a page if a condition is met. The bot runs in a loop every 30 seconds to check if auctions are closed. The page is made up of div tags and each row is between <li> tags. I need to find "Closed" in <span class="bid_time">: <div id="tCounter_1259059" class="col6"> <span class="bid_time">Closed</span> </div>
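     A sketch of the condition check in TypeScript, using selectors implied by the HTML quoted above. The setInterval wiring stands in for UBot's loop-and-wait, and reporting the bare auction ids is just one choice; a real bot would also reload the page between checks:

     ```ts
     // Collect the ids of rows whose bid timer reads "Closed".
     function findClosedAuctionIds(): string[] {
       const closed: string[] = [];
       document.querySelectorAll<HTMLDivElement>('div[id^="tCounter_"]').forEach((row) => {
         const timer = row.querySelector(".bid_time");
         if (timer && timer.textContent?.trim() === "Closed") {
           closed.push(row.id.replace("tCounter_", ""));
         }
       });
       return closed;
     }

     // Poll every 30 seconds, mirroring the loop described above.
     setInterval(() => {
       console.log("Closed auctions:", findClosedAuctionIds());
     }, 30_000);
     ```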
  6. Is it possible to scrape PDF content via the browser? This would be a great feature.
  7. Hi Folks, I am scraping results and have noticed that I keep getting the same unwanted URLs from the website. The first 27 rows are always the same, and I want to delete them from the text file before I use the file to get the information I need. What is the best approach? The clean-up process I want to achieve:
     - delete the top 27 rows
     - delete any blank rows
     - save the file
     Thanks, Matt
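     That clean-up is a few lines in most languages. A TypeScript sketch run under Node, where urls.txt is a placeholder filename:

     ```ts
     import { readFileSync, writeFileSync } from "node:fs";

     const path = "urls.txt"; // placeholder; use your scraped-results file

     const cleaned = readFileSync(path, "utf8")
       .split(/\r?\n/)
       .slice(27)                             // delete the top 27 rows
       .filter((line) => line.trim() !== ""); // delete any blank rows

     writeFileSync(path, cleaned.join("\n")); // save the file
     ```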