UBot Underground

stever

Fellow UBotter
  • Content Count: 115
  • Joined
  • Last visited
  • Days Won: 4

Everything posted by stever

  1. Hi, I am trying to enter an email address on the website https://haveibeenpwned.com/ and then scrape the result. The problem is that once the page is loaded everything stops: I can't select an element to enter the text, nor click any link or the submit button. Obviously it works in a normal browser. I tried setting a header, and changing from Chrome 49 to Chrome 21 (even worse - just an error page with Chrome 21). There are several ways to get what I'm looking for from this site, but every approach produces an unresponsive page. Any ideas? Thank you and Happy New Year!
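     For context, the shape of what I've been trying is roughly this (a minimal sketch - the header name and value here are only placeholders, not the exact ones I tried):
     clear cookies
     comment("Chrome 49 and Chrome 21 are the user agent presets I switched between")
     set user agent("Chrome 49")
     comment("placeholder header - I experimented with different headers here")
     set header("Accept","text/html")
     navigate("https://haveibeenpwned.com/","Wait")
     wait for browser event("Everything Loaded","")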
  2. Yep - that worked - thank you, great job!
  3. I am trying to build a bot to log in to Reddit. The problem is that the login button won't click, although it works when you click it manually in the browser. (In fact, you can get it to work by setting the User Agent to Android, but this has other bad consequences later on by triggering a Captcha that doesn't happen with a normal desktop-type UA.) I've tried changing the UA later on, but that logs you out. I've tried different selectors and although the button highlights, it won't click. Anyway - here's the code that doesn't work: set header("User-Agent","Mozilla/5.0 (Windows NT 10.0; WOW64
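     The general pattern I'm following is below (just a sketch - the UA value is a placeholder for the full desktop string, and the selector is illustrative rather than the real one):
     clear cookies
     comment("placeholder - the real script pastes a full desktop Chrome UA string here")
     set header("User-Agent","FULL-DESKTOP-UA-STRING")
     navigate("https://www.reddit.com/login","Wait")
     wait for browser event("Everything Loaded","")
     comment("illustrative selector only - the real script targets the login button's actual attributes")
     click(<type="submit">,"Left Click","No")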
  4. Yes, I have had to resort to leaving a small amount of the window showing when compiling, and then it works.
  5. Thank you for the replies. It seems the issue was with the element I selected for the login click: a silly mistake on my part. Thanks again: I learned something about headers and user agents, so I will use that from now on.
  6. I am trying to create a bot to log in to the Hatena Japanese social bookmarking website. This is the script:
     clear cookies
     set user agent("Firefox 6")
     navigate("https://www.hatena.ne.jp/login?location=http%3A%2F%2Fh.hatena.ne.jp%2F","Wait")
     wait for element(<full name field>,"","Appear")
     wait(3)
     type text(<full name field>,"testerUBOT","Standard")
     wait(3)
     type text(<password field>,"ubotter1","Standard")
     wait(3)
     click(<type="submit">,"Left Click","No")
     The login name is testerUBOT and the p/w is ubotter1. The problem is that the site seems to know that the login
  7. Thank you for all the input. I have an arrangement with someone who compiles my bots for me in the dev version at a nominal fee, so that's cheaper for now than forking out on the dev version. But is there an explanation of the set headerless browser function anywhere? I can't find anything on the wiki. Is this approach, for example, different from compiling with the browser hidden? Thanks again for the help. Steve
  8. When I compile my bot (which visits and scrapes a certain site) it works fine when the browser is visible, and even when the browser window is just a few pixels high, but if the bot is compiled with the browser hidden, then the bot fails. I would prefer the site not to be visible when the bot is run - are there any common solutions to this kind of problem? Thanks
  9. Thanks - that did the trick. Lesson learned: try mobile user agents if Chrome and Firefox don't work!
  10. Hi - I have a simple problem: I can't click buttons on Reddit when trying to log in and post. Here's the login bit - the click in the last line doesn't work.
     clear cookies
     navigate("https://www.reddit.com/login","Wait")
     wait for browser event("Page Loaded","")
     wait(3)
     type text($element offset(<class="c-form-control">,4),$list item(%LoginNames,#LoopCounter),"Standard")
     wait for browser event("Everything Loaded","")
     wait(3)
     type text($element offset(<class="c-form-control">,5),$list item(%LoginPW,#LoopCounter),"Standard")
     wait(3)
     click(<class=
  11. Hi - I wonder if anyone can see what I'm doing wrong - or whether I'm missing something. I am trying to upload an image file to the eXifer - an online image metadata editor - edit the exif fields and then download. I want to repeat the process with different metadata tags without having to upload the image every time. Everything is fine until the downloading. I've tried two methods, and neither works. The code for the first method is below: it uploads and edits fine, then scrapes the location of the edited image and downloads - again all OK. The problem is that this is hosing up the JavaScript on the webpage
  12. I'll answer my own question: setting the user agent to Chrome fixes it.
  13. Thanks for the help - I appreciate that wasn't much information to go on, but I wondered if this is a common problem with a quick fix. Here's a fuller description of the problem. The task is about setting metatags for jpg images. It uses eXifer.net to do this. The task is to upload an image to eXifer.net, tag it and then download it - and repeat for the same image with different tags. The video is here: http://sendvid.com/n86jn0hi The first bit shows that the button is working when the script is stopped. Then I run the node and it executes properly. Then I run the script and it doesn't work. But after sto
  14. I am struggling to get Ubot to click on a button. When I open the page in the browser, I can click on the button manually. When I use the click function, and run the node, the button clicks correctly. So the selector must be OK. But when the script is running, the link doesn't click. I'm being careful that the element has loaded before clicking - what could be going wrong? (Is there a way to slow down the click speed: perhaps that's my problem?)
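     In case it helps with diagnosis, the pattern I'm using is roughly this (a sketch, with a made-up selector standing in for the real button):
     comment("made-up selector - the real script uses the button's actual attributes")
     wait for element(<type="submit">,"","Appear")
     wait(3)
     click(<type="submit">,"Left Click","No")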
  15. I'll answer my own question: works perfectly with Chrome 21 but not 49. Strange.
  16. Thanks - this works. I'm finding file upload/download the hardest part of Ubot so far. Here's another example of a site where I can't get Ubot to upload anything: https://www.thexifer.net/#upload - how do I upload an image file there? Nothing works with any of the selectors I'm using. And when I click on the element in the Ubot browser, an error message comes up, which doesn't happen when you do the same thing outside of Ubot.
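     From what I've read, the usual approach is to point a change file field command at the page's file input - something like this sketch (the selector and path are only examples, and I may have the exact usage wrong):
     comment("example selector and path only - not verified against this site")
     change file field(<type="file">,"C:\images\test.jpg")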
  17. I know this is a topic some people struggle with at first - I'm no exception. I'm trying to save an image file, but nothing I've tried seems to work when the script runs. Video here: https://sendvid.com/1nst957d I tried an alternative way which was to open the image file in the browser and save it from URL. This works a treat, but unfortunately when you compile the bot, if the browser window isn't visible, the image doesn't save properly. Only found this out after compiling. Since I want this to run without the browser, it's not an option. Unless someone knows another workaround.
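     For the from-URL route, one way to do the save is with download file - roughly this shape (a sketch: the URL and save path are placeholders, and the real URL would be scraped from the page first):
     comment("placeholder URL and path - not the real values")
     download file("https://www.example.com/image.jpg","C:\images\saved.jpg")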
  18. Thanks - that did work. The issue was waiting long enough for the page to load. Even though the navigate command was set to wait until loaded, it needed another 5 seconds before the script worked. A bit weird, but there you are.
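     Concretely, the fix was just an extra fixed wait after the load events, along these lines (a sketch - the URL is a placeholder for the page in question):
     comment("placeholder URL")
     navigate("https://www.example.com/","Wait")
     wait for browser event("Everything Loaded","")
     comment("the extra 5 seconds that made the difference")
     wait(5)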
  19. This should be simple but not for me. How do I click the "Choose File" button and upload a file on this page: https://www.imgonline.com.ua/eng/exif-editor.php ? The basics don't seem to be working. Thank you for your patience people.
  20. Just splashed out on the Standard version thinking this would solve my compiling problems, but have now discovered I can't hide the browser, which was my original objective. Doh! Any ideas here about how to prevent a user who is running a compiled bot from seeing what the browser is doing? (Apart from spending another chunk of change on the Pro version...) I don't need the bot to be invisible - just the browser window. S
  21. Hi Darryl - I presume you have to have Ubot Studio Standard version or higher for these to work? I have Community Edition with no compiler. Steve
  22. @deliter - yes, you guessed right! Sorry, such a rookie mistake. Anyway - that fixed it perfectly. Brilliant forum - thank you for all your help!
  23. I am trying to scrape a table of URLs on expireddomains.net. The table provides a list of URLs, each with a host of parameters like trust flow and backlinks arranged in columns. I have found the elements I'm interested in, and am scraping the table. The problem is that it's hit and miss: I can scrape each of the URLs reliably, but some of the contents of the other cells sometimes get missed for no obvious reason. So if there are 10 URLs (1 per row) in a table, I might get 8 of the associated backlinks and 7 of the other parameters. The items that get missed out are not i
  24. OK - just to finish this off: I put a browser wait command between the two scraping operations and that fixed it. That could explain why I've been having problems on other sites. It seems that when there's a long list of items to add to a list, followed immediately by another long list to add to a list, the browser is vulnerable to crashing, or the scraping fails, or both. Thanks everyone!
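     For anyone landing here later, the working shape is roughly this (a sketch - the selectors and list names are placeholders, and add list to list with $scrape attribute is just how I happen to collect the cells):
     comment("placeholder selector and list name for the first column")
     add list to list(%URLs,$scrape attribute(<class="domain-cell">,"innertext"),"Don't Delete","Global")
     comment("the fix: let the browser settle before the second scrape")
     wait for browser event("Everything Loaded","")
     wait(3)
     comment("placeholder selector and list name for the second column")
     add list to list(%Backlinks,$scrape attribute(<class="backlinks-cell">,"innertext"),"Don't Delete","Global")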