UBot Underground

Siegfried88

Fellow UBotter
  • Content Count

    28
  • Joined

  • Last visited

Community Reputation

3 Neutral

About Siegfried88

  • Rank
    Member

Profile Information

  • Gender
    Not Telling

System Specs

  • OS
    Windows 8
  • Total Memory
    8 GB
  • Framework
    v4.0
  • License
    Developer Edition

Recent Profile Visitors

3254 profile views
  1. Thanks botsimmer, much appreciated. Unfortunately, money is tight right now and I noticed that the most important tutorials are paid-only, so I'll have to find another solution. I am also concerned that attributes may still have some problems when it comes to programming, specifically from looking at this thread... http://www.ubotstudio.com/forum/index.php?/topic/14290-parent-child-and-sibling-learninghelping-search-bot/ Although that was back in 2013, so things may have changed since then.
  2. Hey all, I am at my wit's end on this one. For almost a year I have had limited success using the "$scrape attribute" function in UBot, and I have almost always fallen back on "$find regular expression" after wasting hours with the former. However, I know I am holding myself back. I have finally put together some examples below to illustrate my thought process (see also the sketch after this post list). I am hoping that some savvy UBotters here can tell me what I am actually telling UBot to do, why I am thinking about this the wrong way, and what the code should look like to get the results I am expecting.
  3. Apologies, as an update I realized the code above was HORRIBLY customized to my personal bot. Here is a slightly simplified version for anyone (a sketch of the list handling follows after this post list). Keep in mind... #List1 and #List2 are strings that are separated by commas. #List1 is the stored list; #List2 is compared against #List1. Note, what was confusing to me at first was that the returned string did NOT have anything from #List2 to identify which word was being compared, so you will have to take this into account. So, if #List1 has 3 values and #List2 has 4 values, then you will receive a string sent to "#Results" that is separated by semico
  4. Hey all, a question: can the images cached while a UBot program is running be saved? I ask because I am having trouble downloading the images for an online web book. I have managed to find the four to five images that are loaded in the background and save them to a list (not labeled with a file extension, but they are JPGs, e.g. http://coursesmartpf1.bvdep.com/imagepage/F5C6EF72B2DA969D866455ED9A78C559C5B25128491910097D473598B7CD430C822FD31F60E622B308ABC123 , see code below). There are two problems (a sketch for this one follows after this post list). First, if I traverse the pages too quickly, I found that I am sent to an "about:blank" in UBot directl
  5. Thanks bestmacros and Traffik Cop. I tried the basic connection with my private Gmail account using the wizard (UBot 5) and it fails to even connect. The other issue is that the Gmail account I am trying to send emails from is my company's Google account (bought for us). Even the server is different (i.e. m.google.com instead of smtp.google.com). I have had trouble connecting to it on my Android phone due to the policies set on the server side; I ended up having to root the phone and install customized apps just to get it working the way I wanted. Anyway, this is a long way of
  6. UPDATE: Just found this link, which at least gives information on how to construct an email with the subject, to, bcc, and body areas ( http://stackoverflow.com/questions/6548570/url-to-compose-a-message-in-gmail-with-full-gmail-interface-and-specified-to-b ). Now all I need to figure out is how to pass HTML/formatted text without messing up the URL. I tried using HTML codes (http://www.ascii.cl/htmlcodes.htm), but that did not work. (A sketch of building the compose URL follows after this post list.) If you have a Gmail account and are logged in, try clicking this... https://mail.google.com/mail/?view=cm&fs=1&to=someone@example.com&su=SUBJECT&am
  7. Hey, just wanted to see if this is possible: can we replace part of the DOM in an HTML page on the fly (i.e. while it is loaded)? I am looking to generate Gmail emails (from drafts) and am having a lot of trouble keeping the formatting of certain text (bold, italics, URL links, etc.). Copy and paste is just buggy, and generating everything with $type text() (along with clicking/unclicking the bold, italic, link buttons, etc.) just seems like an exceptional waste of time. (A sketch of two DOM-rewriting options follows after this post list.) That does not even get into the fact that I would like these emails to be updatable by someone other than me later on (i.e.
  8. (Lots of typos above. I have been on 5 hours of sleep for a few days, so please forgive me.)
  9. FINALLY got it working! So, as a gift, I present this updated code. I've gotten a LOT of help over the past few months, so this is my way of giving something more than a simple "thank you". You can read the (ton of) comments if you like; otherwise, just 'use it'. A few quick things (the overall pipeline is sketched after this post list)... It takes two test strings (as #List1 and #List2, separated by ","); you may want to use "$text from list" with "," as the delimiter if you start with %Lists. It creates a new browser window, scrapes that window, saves to a (local) file, scrapes that local file into a table, deletes the local text file, then fi
  10. Thanks UBotDev and blumi40. I tried creating an entirely new account on Adobe Connect and do NOT seem to be having the same issues when pulling the XML, which means it is probably my company. UBotDev: I have NEVER played around with that before; it looks much easier to try than messing with my cookies, so I'll have to check it out. blumi40: That has even more interesting implications for another project. Despite being retired, my company is still using a "webpage.hta" method to access some of the data on their servers. If memory serves correctly, I spent many a day agonizing over how to bypass the r
  11. OK, I am at it again with my fuzzy-matching JavaScript algorithm. The original thread can be found here: http://www.ubotstudio.com/forum/index.php?/topic/14573-adapting-a-javascript-prototype-object-to-a-ubot-script/ . The new script has some additional features, namely (1) it returns the exact name that was matched and (2) it can check the name against a list, rather than the 1:1 setup from before. Eventually, I'd like to adapt it to take an entire string and return the index number of where it found the fuzzy match (see the end of this post, and the sketch after this post list). Anyway, I have a new script I downloaded from the following l
  12. Hey all, just a quick question: is there any way to give UBot the necessary credentials to read a file that sits behind a password-protected site? (A possible workaround is sketched after this post list.) It may be something with the site itself (I admit), but the problem happens every time I try to load XML data from a URL (behind www.acrobat.com, using Adobe Connect). The meeting reports in XML format give more data than a regular report. However, every time I try to use $read file, I get an error that I don't have the credentials (when I have clearly logged in). I am not running the data collection in a new browser/window/thread/socket, etc., so I can't fi
  13. Found a workaround, which incidentally works similar to the way "Element Child" should work, I suppose. Basically, I used "Scrape Attribute" scraping "innerhtml". I was originally using "innertext", which obviously didn't work for grabbing all the data I needed. The example below throws it into a list. Ignore the exact "session_#", as I am working with many pages, but it is a quick fix to the problem above (a wildcard variation is sketched after this post list). add list to list(%Temp List, $scrape attribute(<class="session_1008533106">, "innerhtml"), "Delete", "Global") Thanks for your help, Dan and Edward_2.
  14. Hello all, I've been having a bit of a problem with UBot 5: I am unable to match/capture a new line. I first tried to pull the data with "Scrape Attribute" using "child element" (or whatever it's called). Anyway, here's the HTML so you can see what I am trying to do (a regex approach that spans newlines is sketched after this post list). <tr class="session_1008488717" style="display: table-row; "> <td class="Present"> <input id="1184475242" onchange="$(this).addClass('dirty');" type="checkbox"> </td> <td>John Doe</td> <td>JohnDoe@google.com</td> <td>Active<
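
Code sketches for the posts above (illustrative only; URLs, selectors, paths and example values are placeholders, not the author's actual code)

For post 2: a minimal sketch of the usual $scrape attribute pattern next to the regex route it keeps getting swapped for. The URL and the <tagname="a"> selector are invented; the point is that $scrape attribute takes an element selector plus an attribute name and returns every match on the page, while $find regular expression works on the raw page source.

    comment("hypothetical target page - replace the URL and selector with your own")
    navigate("http://www.example.com/results", "Wait")
    wait for browser event("Page Loaded", "")
    comment("element selector + attribute name; every matching element lands in the list")
    add list to list(%hrefs, $scrape attribute(<tagname="a">, "href"), "Delete", "Global")
    comment("the regex route does the same job against the raw source, for comparison")
    add list to list(%hrefs via regex, $find regular expression($document text, "href=[^ >]+"), "Delete", "Global")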
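
For post 3: a small sketch of the list handling the post describes, assuming #List1 and #List2 really are comma-separated strings; the values are invented. It also shows one way to tag each piece of the output with the word from #List2 it belongs to, since the post notes the returned string does not identify that on its own. The fuzzy score itself is left as placeholder text.

    comment("placeholder data - #List1 is the stored list, #List2 is compared against it")
    set(#List1, "apple,banana,cherry", "Global")
    set(#List2, "aple,banan,grape,cherry", "Global")
    comment("turn the comma-separated strings into real lists")
    add list to list(%stored, $list from text(#List1, ","), "Delete", "Global")
    add list to list(%candidates, $list from text(#List2, ","), "Delete", "Global")
    set(#Results, $nothing, "Global")
    loop($list total(%candidates)) {
        set(#word, $next list item(%candidates), "Global")
        comment("prepend the word itself so the semicolon-separated result identifies it;")
        comment("replace the literal text score with the real fuzzy-match value")
        set(#Results, "{#Results}{#word}=score;", "Global")
    }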
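
For post 4: instead of trying to rescue files from the browser cache, a sketch of downloading each scraped image URL straight to disk with download file (assuming your build includes that command), with a pause between requests so the book site is less likely to bounce the session to about:blank. The folder path, the five-second wait and the %image urls list name are assumptions.

    comment("%image urls is assumed to already hold the scraped background image URLs")
    set(#page number, 1, "Global")
    loop($list total(%image urls)) {
        comment("fetch the URL directly and write it out with a .jpg extension")
        download file($next list item(%image urls), "C:\book\page{#page number}.jpg")
        comment("slow down so the site does not redirect the session to about:blank")
        wait(5)
        increment(#page number)
    }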
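
For post 6: a sketch of assembling the Gmail compose URL from variables before navigating to it. The address and text are placeholders, and only spaces are percent-encoded here (via $replace) as a stand-in for full URL encoding. As far as I know the body parameter is treated as plain text by Gmail, which would explain why the HTML codes never came through formatted.

    set(#to, "someone@example.com", "Global")
    set(#subject, $replace("Hello from UBot", " ", "%20"), "Global")
    set(#body, $replace("First line of the message", " ", "%20"), "Global")
    comment("view=cm opens the compose window, fs=1 keeps the full Gmail interface")
    set(#compose url, "https://mail.google.com/mail/?view=cm&fs=1&to={#to}&su={#subject}&body={#body}", "Global")
    navigate(#compose url, "Wait")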
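
For post 7: two ways UBot can usually rewrite part of an already-loaded page without retyping it, either by pointing change attribute at the element's innerhtml or by running JavaScript against the DOM. The <id="message_body"> selector and the replacement HTML are made up, and whether Gmail's compose editor keeps formatting applied this way is untested.

    comment("option 1: swap the element's innerhtml through change attribute")
    change attribute(<id="message_body">, "innerhtml", "<b>Bold</b> and <i>italic</i> text")
    comment("option 2: the same swap through the page's own DOM")
    run javascript("document.getElementById('message_body').innerHTML = '<b>Bold</b> and <i>italic</i> text';")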
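
For post 9: the shape of the pipeline the post describes (new browser window, scrape it, save to a local file, read the file back in as a table, then clean up), with a placeholder URL, path and selector. It only shows the plumbing, not the fuzzy-match logic the shared code actually performs.

    in new browser {
        navigate("http://www.example.com/data", "Wait")
        wait for browser event("Page Loaded", "")
        comment("scrape whatever the matcher needs and park it in a temporary file")
        add list to list(%scraped, $scrape attribute(<tagname="td">, "innertext"), "Delete", "Global")
        save to file("C:\temp\scrape.txt", %scraped)
    }
    comment("pull the temporary file back in as a table, then delete it")
    create table from file("C:\temp\scrape.txt", &results)
    delete file("C:\temp\scrape.txt")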
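
For post 11: one way to get a value back out of browser-side JavaScript without relying on an eval-style function, by having run javascript drop its answer into a throwaway element that is scraped afterwards. The element id is invented, and indexOf stands in for the real fuzzy-score call; it returns the position of the match inside the full string (or -1 for no match), which is the index-number behaviour the post is after.

    comment("give the script somewhere to leave its answer (hypothetical id)")
    run javascript("var d = document.createElement('div'); d.id = 'ubot_result'; document.body.appendChild(d);")
    comment("indexOf stands in for the fuzzy matcher; swap in the real scoring call")
    run javascript("var haystack = 'John Doe registered yesterday'; document.getElementById('ubot_result').innerText = haystack.indexOf('John');")
    set(#match index, $scrape attribute(<id="ubot_result">, "innertext"), "Global")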
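
For post 12: $read file pointed at a URL most likely fetches it outside the logged-in browser session, which would explain the credentials error. A sketch of pulling the XML through the browser instead, so the existing Adobe Connect login is reused; the report URL and the save path are placeholders.

    comment("hypothetical report URL behind the Adobe Connect login")
    navigate("https://www.acrobat.com/example-meeting-report.xml", "Wait")
    wait for browser event("Page Loaded", "")
    comment("the browser is already authenticated, so the page source should be the XML")
    set(#report xml, $document text, "Global")
    save to file("C:\reports\meeting.xml", #report xml)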
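
For post 13: if your UBot build supports wildcard attribute matching (the w"..." syntax), the posted scrape can usually be written once instead of hard-coding session_1008533106, since the wildcard lets the class match any session number. The selector is the only change from the posted line.

    comment("the wildcard pattern lets the class match any session number")
    add list to list(%Temp List, $scrape attribute(<class=w"session_*">, "innerhtml"), "Delete", "Global")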
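
For post 14: if the regex route is still wanted, the reason a new line never matches is that the dot does not cross line breaks by default; prefixing the pattern with (?s) (single-line mode) lets one match span the whole <tr> block. The pattern below is only a rough fit to the snippet in the post.

    comment("(?s) makes the dot match newlines, so one match covers the whole table row")
    add list to list(%rows, $find regular expression($document text, "(?s)<tr[^>]*session_\d+[^>]*>.*?</tr>"), "Delete", "Global")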