UBot Underground

gabel

Fellow UBotter
  • Content Count: 284
  • Joined
  • Last visited
  • Days Won: 2

Everything posted by gabel

  1. Or you can try this:

     clear list(%bing results)
     ui text box("Search", #search)
     navigate("http://www.bing.com/", "Wait")
     wait for element(<for="nofilt">, "", "Appear")
     type text(<name="q">, #search, "Standard")
     click(<name="go">, "Left Click", "No")
     wait(5)
     add list to list(%bing results, $find regular expression($scrape attribute(<id="results_container">, "outerhtml"), "(?<=<h3><a\\ href=\").*?(?=\"\\ h=\"ID=SERP)"), "Delete", "Global")
  2. That's really weird. Make sure you have the latest Flash version; maybe that's the issue.
  3. Here's a quick demo bot; it includes an Avatar folder where you should place your images: http://botswiz.com/free/TwitterProfileUpload.zip
  4. Hey, use this piece of code for the captcha part:

     type text(<id="recaptcha_response_field">, $solve captcha(<src=w"https://www.google.com/recaptcha/api/image?c=*">), "Standard")
  5. I've been using the same thing for quite some time, and last night I tried the code P0s3id0n posted; it works just fine.
  6. Try this code, it works just fine on my end:

     navigate("http://www.yellowbook.com/yellow-pages/?what=auto+insurance&where=albuquerque%2C+nm", "Wait")
     add list to list(%companyname, $scrape attribute(<title=w"View more information about*">, "innertext"), "Delete", "Global")
     set list position(%companyname, 0)
     loop($list total(%companyname)) {
         set(#addressscrape, $scrape attribute(<outerhtml=w"*id=\"divInAreaSummary*{$next list item(%companyname)}</a>*<div class=\"quick-info-details\">*</li>">, "outerhtml"), "Global")
         set(#cleanaddress, $find regular expression(#
  7. This is what I came up with quickly:

     clear list(%googleresults)
     ui text box("Keyword", #keyword)
     ui stat monitor("Links", $list total(%googleresults))
     navigate("http://www.google.co.uk/#hl=en&sclient=psy-ab&q={#keyword}", "Wait")
     set(#scrapepages, $replace regular expression($scrape attribute(<onmousedown=w"return *">, "href"), "http://webcache\\.googleusercontent\\.com.*", ""), "Global")
     add list to list(%googleresults, $list from text(#scrapepages, $new line), "Delete", "Global")
  8. Not really, as different sites have the text formatted differently, so you'll have to come up with code for each one (or a single script for the sites that are on the same platform); see the sketch below.
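     A minimal sketch of the idea, with made-up site URLs and element selectors (site-a.example.com, site-b.example.com, and both attribute selectors are placeholders); the point is that the same piece of data lives in differently named elements on each site, so each needs its own scrape command:

     comment("Site A (hypothetical): the text sits in a class=\"article-title\" element")
     navigate("http://site-a.example.com/page", "Wait")
     set(#scrapedtext, $scrape attribute(<class="article-title">, "innertext"), "Global")
     comment("Site B (hypothetical): the same text sits in an id=\"headline\" element")
     navigate("http://site-b.example.com/page", "Wait")
     set(#scrapedtext, $scrape attribute(<id="headline">, "innertext"), "Global")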
  9. Just wanted to say thanks to the UBot team for finally adding a way to spoof our browser info. Keep it up!
  10. Are you sure you pressed Run and not Run Node?
  11. This should do it:

      add list to list(%profileurs, $find regular expression($scrape attribute(<outerhtml=w"<a href=\"/profile/view?id=*">, "fullhref"), "http.*?(?=&)"), "Delete", "Global")
  12. Had a quick look and came up with this:

      navigate("http://www.linkedin.com/home?trk=guest_home#orderBy=Time&typeFilter=ALL", "Wait")
      add list to list(%likes, $scrape attribute(<innertext=w"Like*">, "href"), "Delete", "Global")
      set list position(%likes, 0)
      loop($list total(%likes)) {
          set(#clicknow, $next list item(%likes), "Global")
          wait(3)
          click(<href=#clicknow>, "Left Click", "No")
      }
  13. You can do that by creating a custom UI HTML panel, as I see you have the Dev edition. Here's an example: http://www.ubotstudio.com/forum/index.php?/topic/11064-run-command-in-dev-edition/#entry57869 (and a small sketch below).
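      As a rough illustration, a minimal sketch of such a panel, assuming the Dev edition's ui html panel command and its ubot.runScript JavaScript bridge; the button label and the wait(5) command it triggers are just placeholders:

      comment("A one-button panel, 100px tall, that runs a UBot command when clicked")
      ui html panel("<button onclick=\"ubot.runScript('wait(5)')\">Click me</button>", 100)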
  14. My pleasure, mate. If you have any other questions, don't hesitate to ask :-)
  15. Guess that's the captcha from Rediff. Adding a 2-second delay before getting the captcha always works for me. Example:

      navigate("http://register.rediff.com/register/register.php?FormName=user_details", "Wait")
      wait(2)
      type text(<class="captcha">, $solve captcha(<src=w"/register/tb135/tb_getimage.php?uid=*">), "Standard")
  16. Hey data, here's the code to make it scrape all the pages. I've added a small stat monitor:

      clear list(%googleplus)
      set(#resultsnumber, $find regular expression($scrape attribute(<id="resultStats">, "innertext"), "(?<=about\\ ).*results"), "Global")
      ui stat monitor("Scraped ", "{$list total(%googleplus)} from {#resultsnumber}")
      navigate("http://www.google.co.uk/search?q=dentist&hl=en&biw=1041&bih=900&noj=1&prmd=imvnsl&source=lnms&tbm=plcs&sa=X&ei=JXjsT8ihK6mk0QWQlYD6DA&ved=0CHMQ_AUoAQ&prmdo=1&changed_loc=1", "Wait")
      type text(<id="lc-input">, "London", "Standard")
  17. Just did a quick test and this works just fine (replace the add to list inside the loop with this one); it gets all 10 results from the page:

      add list to list(%googleplus, $find regular expression($scrape attribute(<innerhtml=w"*href=\"/url?url=https://plus.google.com/*</span>">, "innerhtml"), "https://plus\\.google\\.com/.*(?=/about)"), "Delete", "Global")
  18. Hey kev, this is how I would do it in UBot:

      navigate("http://www.google.co.uk/search?q=dentist&hl=en&biw=1041&bih=900&noj=1&prmd=imvnsl&source=lnms&tbm=plcs&sa=X&ei=JXjsT8ihK6mk0QWQlYD6DA&ved=0CHMQ_AUoAQ&prmdo=1&changed_loc=1", "Wait")
      type text(<id="lc-input">, "London", "Standard")
      click(<class="ksb mini">, "Left Click", "No")
      wait for browser event("Page Loaded", 30)
      loop(10) {
          add list to list(%googleplus, $find regular expression($scrape attribute(<innertext="Google+ page">, "innerhtml"), "https://plus\\.google\\.com/.*(?=/about)"), "Delete", "Global")
      }
  19. I haven't used 3.5 in quite some time, and from what I remember there were a few problems with those kinds of proxies in that version. I would advise you to move to v4, as they are supported there, and in my opinion v4 is a lot better than 3.5.
  20. It works just fine; many of my clients use that type of proxy and they have no problems whatsoever. You must've read some old threads from when there were problems with this type of proxy. BTW, what version are you using? (A rough sketch of the v4 usage follows below.)
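      For reference, a minimal sketch of what that looks like in v4; the IP, port, and credentials are placeholders, and the combined IP:Port:Username:Password string for authenticated proxies is an assumption:

      comment("Placeholder proxy details; format assumed to be IP:Port:Username:Password")
      change proxy("123.45.67.89:8080:myuser:mypass")
      navigate("http://www.bing.com/", "Wait")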