UBot Underground

VaultBoss

Fellow UBotter
  • Content Count: 790
  • Joined
  • Last visited
  • Days Won: 34

Everything posted by VaultBoss

  1. set(#hour, $substring(#time, 0, $find index(#time, ":")), "Global")
     set(#time, $replace(#time, "{#hour}:", $nothing), "Global")
     set(#min, $substring(#time, 0, $find index(#time, ":")), "Global")
     set(#seg, $replace(#time, "{#min}:", $nothing), "Global")

     ^^^ The code above will work just the same whether your scraped string is set(#time, "7:02:25", "Global") or set(#time, "11:02:25", "Global"). However, since the source already gives you the hours part correctly but the minutes/seconds with leading zeros, you may want to replace those in the results too, something…
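The logic of the UBot snippet above can be sketched in Python (a rough illustrative equivalent, not UBot code; the function and variable names are my own): peel off each field up to the next ":" and strip it from the remainder, so the hour's width never matters.

```python
# Rough Python equivalent of the UBot snippet above:
# take everything up to the first ":" as one field, then remove
# that field (plus the ":") from the string and repeat.
def split_time(time_str):
    hour = time_str[:time_str.find(":")]        # $substring up to $find index of ":"
    rest = time_str.replace(hour + ":", "", 1)  # $replace "{#hour}:" with $nothing
    minute = rest[:rest.find(":")]
    second = rest.replace(minute + ":", "", 1)
    return hour, minute, second

print(split_time("7:02:25"))   # handles 1-digit hours
print(split_time("11:02:25"))  # and 2-digit hours alike
```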
  2. Ahh... Another way would be to pad an extra (blank) character at the beginning of the string, inside an IF (checking whether its length equals 7 characters), and then apply the exact same code no matter what the hour is; the only difference is that for the results you would add an extra $trim to the resulting #hour variable. Hope this helps
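The padding trick can also be sketched in Python (again an illustrative equivalent, not UBot code): pad short strings so every time shares one fixed layout, slice at fixed positions, and trim the hour.

```python
# Illustrative sketch of the padding approach described above:
# a 7-char time like "7:02:25" is padded to 8 chars so the same
# fixed slice positions work for any hour; only the hour needs a trim.
def split_time_fixed(time_str):
    if len(time_str) == 7:          # 1-digit hour, e.g. "7:02:25"
        time_str = " " + time_str   # pad a blank at the beginning
    hour = time_str[0:2].strip()    # the extra $trim on #hour
    minute = time_str[3:5]
    second = time_str[6:8]
    return hour, minute, second
```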
  3. Your code 'assumes' that your scraped string has a fixed length, while in fact, because of 2-digit hours such as 10, 11 or 12, the string's length varies! You can do it almost as you coded it, but you'll need extra code to determine the length and adjust the substrings' starting/ending points accordingly. However, $list from text using ":" as the delimiter looks much simpler to implement. Another way might be to extract the substrings using a $find index, looking for the ":" character as the delimiter... etc...
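The $list from text suggestion boils down to a plain split on the delimiter; in Python terms (illustrative only, not UBot code):

```python
# Splitting on ":" handles both 1- and 2-digit hours with no
# length bookkeeping at all (the $list from text approach).
hour, minute, second = "11:02:25".split(":")
print(hour, minute, second)  # 11 02 25
```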
  4. divider
     comment("Check a Local Folder to Grab the File Names")
     add list to list(%lst_DataFolderFiles, $get files(#var_SpecificFolder, "No"), "Delete", "Global")
     divider

     ^^^ That is how I do it. (The "No" means I'm only grabbing the file names within the given folder, which is stored ahead of time in the #var_SpecificFolder variable; if you want the full path instead, use "Yes".) The next step is to use the newly created list to randomly pick file names from it and pass each file name to your custom command/function that runs on a separate thread. At the same time, remove the used file
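The same workflow can be sketched in Python (an illustrative equivalent, not UBot code; the function name is my own): build the file-name list once, then hand out random entries and remove each one as it is used, so no file is processed twice.

```python
# Hedged Python sketch of the list-then-dispense workflow above:
# gather the file names in a folder (names only, like $get files(..., "No")),
# then yield them in random order, removing each used name from the list.
import os
import random

def random_file_dispenser(folder):
    names = os.listdir(folder)   # file names only, no path
    random.shuffle(names)        # random pick order
    while names:
        yield names.pop()        # remove the used file from the list
```

Each name yielded by the generator would then be passed to the per-thread command, mirroring the "use it, then delete it from the list" step described above.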
  5. Quick Q... how do I scrape the actual URL from the browser, after a redirect? In other words, I navigate to a URL I give UBot, but that website has an internal redirect, and instead of the URL I gave it in the beginning, I end up on a different URL. How do I find out what that URL is? Any idea? Thanks in advance, Steve
  6. Ok, I found I could simply navigate to this link, which is the Inbox: http://sn125w.snt125.mail.live.com/default.aspx?rru=inbox Now, the subdomains could be different for different users, I don't know that yet, but it looks like the Inbox page is invariably at: ~/default.aspx?rru=inbox Hope this helps others too
  7. I am working on a bot that looks for a certain email in the Inbox (for confirmation purposes) and I have a similar issue: it looks like I cannot 'click' the Inbox link to view the received emails after I sign in to the account. If this happens to others too, has anyone found a workaround, or maybe a method to make the click work? Thanks in advance!
  8. Hi UBot-ers... I'm new here and I'm just playing around with UBot v4 (Standard License), trying to get some nasty data out of a table on a page. (The Standard License doesn't have the $scrape table command.) I tried various solutions, but with each of them I got stuck at some point. A) In one of the ways I tried, I ended up adding each row of the table to a list, in order to later assemble my own table/file from the scraped data. The issue is that one of the cell elements I scraped from the original HTML table comes up with the scraped text spread over multiple rows, like this: SomeEmail@Gmail.c