UBot Underground

Learjet

Fellow UBotter
  • Content Count: 147
  • Joined
  • Last visited
  • Days Won: 3

Everything posted by Learjet

  1. I'm trying to figure out a way to do a data-merge (Spintax with a table) in Ubot and I'm kind of stumped! So far, while tedious, I've got a killer app going and this is one of the last pieces. Here's the Spintax example:

     {{I saw|I found|My wife noticed} {a great piece|an article|a writeup} on a [Year] [Car], it was {a beautiful|an amazing|a brilliant} [Color]{ and| &|,} [Car-Company] was {doing everything right|on-point} {at the time,|then,} {check it out|here's the article}: [Article]}

     Here's what the table would look like:

     Year,Car,Color,Car-Company,Article
     1967,Mustang,Candy Apple Red
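The merge-then-spin logic described above can be sketched outside of uBot. This is a minimal Python illustration, not uBot syntax: `merge_fields` and `spin` are made-up helper names, and the `ExampleCo`/`example.com` values are hypothetical fillers for the fields the sample row doesn't show.

```python
import random
import re

def merge_fields(template, row):
    """Replace each [Field] placeholder with its value from a table row."""
    for key, value in row.items():
        template = template.replace("[" + key + "]", value)
    return template

def spin(text, rng=random):
    """Resolve innermost {a|b|c} spintax groups by picking one option at a time."""
    pattern = re.compile(r"\{([^{}]*)\}")
    while True:
        match = pattern.search(text)
        if match is None:
            return text
        choice = rng.choice(match.group(1).split("|"))
        text = text[:match.start()] + choice + text[match.end():]

# Hypothetical row; the sample table above only shows the first three fields.
row = {"Year": "1967", "Car": "Mustang", "Color": "Candy Apple Red",
       "Car-Company": "ExampleCo", "Article": "http://example.com/article"}
template = "{I saw|I found} {a great piece|an article} on a [Year] [Car] in [Color]: [Article]"
print(spin(merge_fields(template, row)))
```

In uBot itself the same idea would map to looping over the table rows and replacing the [Field] markers into the spintax string before spinning it.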
  2. <div class="ProfileHeaderCardEditing-editableField rich-editor u-borderUserColorLight notie" contenteditable="true" tabindex="2" role="textbox" data-placeholder="Bio" aria-multiline="true" spellcheck="true" name="user[description]" dir="ltr"> YOUR PROFILE TEXT HERE <br> </div> You should be able to use the "ProfileHeaderCardEditing-editableField" class as something to work with either via REGEX or innertext, no? Peace, LJ
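For anyone trying the regex route suggested above, keying on that class looks roughly like this in Python (illustrative only: `extract_editable` is a made-up helper name, and uBot's own regex node would take a similar pattern):

```python
import re

def extract_editable(html):
    """Pull the inner text of the div carrying the editable-field class."""
    match = re.search(
        r'class="ProfileHeaderCardEditing-editableField[^"]*"[^>]*>(.*?)</div>',
        html, re.DOTALL)
    if match is None:
        return None
    # Strip any leftover tags such as <br> and trim whitespace.
    return re.sub(r"<[^>]+>", "", match.group(1)).strip()

sample = ('<div class="ProfileHeaderCardEditing-editableField rich-editor" '
          'contenteditable="true"> YOUR PROFILE TEXT HERE <br> </div>')
print(extract_editable(sample))  # YOUR PROFILE TEXT HERE
```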
  3. Thanks Brutal, admittedly my knowledge is pretty limited right now. Moving from front-end development to programming is taking a bit of time, but I will get there :-) There's nothing front-end-wise that I cannot do, and it got very boring. This, however, is very fun; I'm still having a blast, frustrating at times but still fun. I got pretty good at fixing PHP scripts and customizing them, but starting from scratch is another issue. It's starting to come together slowly! Seriously, I can't thank you guys enough for your patience and willingness to share and help out! Many thanks! Peace, LJ
  4. Pash, Thanks so much, I see what you did and more importantly I understand why you did it! Can't thank you enough, thanks for your patience while I'm learning :-) Respectfully, LJ
  5. Thanks for the response, Brutal and Pash. Here's what I used to scrape the list:

     navigate("http://www.theassemblyshow.com/index.php/attend/exhibitor-list","Wait")
     add list to list(%companies,$find regular expression($read file("http://www.theassemblyshow.com/index.php/attend/exhibitor-list"),"(?<=\">(<strong>|)).*(?=(</strong>|)</a></td>)"),"Delete","Global")

     Thanks again!
  6. I scraped a table of companies and now the information is in a list, however there are some bold tags and image tags that I need to filter out. I just need the company name and that's it (please see attached image). I've looked on the forum and can't find any wisdom regarding how to filter the list. If you could provide some wisdom to point me in the right direction I would be very grateful! Thanks for your help! Here's the image: Peace, LJ
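The filtering asked about here amounts to stripping the leftover tags from each list item. A minimal Python sketch of that logic (`clean_item` is a hypothetical name; in uBot the same substitution pattern would go through its regex-replace node):

```python
import re

def clean_item(item):
    """Drop HTML tags (e.g. <strong>, <img ...>) and collapse whitespace."""
    text = re.sub(r"<[^>]+>", "", item)
    return re.sub(r"\s+", " ", text).strip()

# Hypothetical scraped items mimicking the bold/image clutter described above.
scraped = ["<strong>Acme Corp</strong>",
           "Widget <img src='logo.png'> Works",
           "Plain Co"]
companies = [clean_item(i) for i in scraped]
print(companies)  # ['Acme Corp', 'Widget Works', 'Plain Co']
```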
  7. Thanks for your tips! I'm going to set up OneDrive as soon as I get it running; I got my parts this afternoon and I'm working on getting everything set up. 1) A new Western Digital Blue drive, 1TB (got it for 45% off ($54); I'm sure it will be more reliable than the cheap Hitachi that I had). 2) A cheap graphics card to run multiple monitors ($69); I researched the best low-budget graphics cards on Tom's Hardware and ended up with a GeForce GT 730. Thanks for all your wisdom, I really appreciate it! Peace, EJ
  8. Been smelling something 'electronic' burning for the last few days and thought that it was just the new TV that I put in my room making the smell. I was 'sure' that it wasn't my computer! Well it was, my graphics card was full of dust which caused the fan to die, which caused the system to overheat and shutdown, which corrupted a file in Windows. While plugging my hard disk into another machine to format it, the little 'L' piece that the SATA cable connects to on the drive snapped off, grrrrr! So now I'm expecting a new hard disk and graphics card tomorrow. By the only stroke of lu
  9. If I could suggest anything, it would be the option to use Tor as the browser. It works great when you are trying to run multiple accounts on sites that monitor your IPs (like Tumblr). Additionally, it's open source, so there would be no licensing snafus. Thanks for listening :-)
  10. I keep getting an "Index was outside of the array" error when I run this; can any of you see an obvious problem with the code?

      clear cookies
      wait(3)
      ui text box("Proxy: (proxy:port)",#Proxy)
      change proxy("(#Proxy)")
      alert("Your proxy and port is: {#Proxy}")
      navigate("http://whatismyipaddress.com/","Wait")
      wait(10)
      navigate("http://www.cnet.com/","Wait")

      It's basically a field where you can insert a proxy and port, which will set the proxy for you. I need to set the proxy info in a variable because it will change with every loop of the program. Thanks for your tips! Peace, LJ
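One hedged observation on the snippet above: in uBot, variables inside quoted strings are normally substituted with braces, as in the alert line, so change proxy("(#Proxy)") may be passing the literal text (#Proxy) rather than the value. Separately, validating the host:port string before using it can be sketched in Python (`parse_proxy` is a hypothetical helper, not a uBot function):

```python
import re

def parse_proxy(proxy):
    """Split 'host:port' and sanity-check it; returns (host, port) or None."""
    match = re.fullmatch(r"([^:\s]+):(\d{1,5})", proxy.strip())
    if match is None:
        return None
    host, port = match.group(1), int(match.group(2))
    if not (0 < port < 65536):
        return None  # port out of the valid TCP range
    return host, port

print(parse_proxy("127.0.0.1:8080"))  # ('127.0.0.1', 8080)
print(parse_proxy("not a proxy"))     # None
```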
  11. I'm having an issue where I have a list which contains spun files and another list that has news links that go in the spun files. Inside the spun files I have a marker: **VAR** What I need to do is replace the **VAR** marker with links in the news links list and I'm at a loss on how to do it. Thanks for any suggestions! Peace, LJ PS: Love the pin feature, works great!
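Pairing the two lists by position and substituting the marker is the core of what's being asked here. A small Python sketch of that logic (`fill_markers` is a made-up name; in uBot this would map to looping an index over both lists and doing a replace):

```python
def fill_markers(spun_texts, links):
    """Pair each spun text with a link by position and substitute **VAR**."""
    return [text.replace("**VAR**", link)
            for text, link in zip(spun_texts, links)]

# Hypothetical sample data shaped like the two lists described above.
spun_files = ["Check this out: **VAR**", "Big story today: **VAR**"]
news_links = ["http://example.com/a", "http://example.com/b"]
print(fill_markers(spun_files, news_links))
```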
  12. [Scraping Rss] ds062692, Thanks so much, perfect! Thanks to you too, pash! My head is swimming; there's so much to learn that it seems a bit overwhelming right now, but I'm getting it slowly... Peace, EJ
  13. [Scraping Rss] Hi Pash, I got the file downloaded and the regex figured out, but I'm having a hard time figuring out how to scrape from the .txt file that I created with the RSS code in it. Here's the URL for Google News RSS in case someone needs it: https://news.google.com/news?cf=all&hl=en&ned=us&q=KEYWORD&output=rss or https://news.google.com/news?cf=all&hl=en&ned=us&q=YOUR+KEYWORD&output=rss Here's the regex to extract the links: (?<=\&url=).*?(?=<\/link>) Thanks in advance for your help! Peace, Z
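Once the RSS is saved to a .txt file, applying that same lookaround regex to the file contents is the remaining step. A Python illustration with a made-up two-link sample (in Python the & and / don't need the backslash escapes; reading the real file would be `open("rss.txt").read()`):

```python
import re

# Hypothetical stand-in for the saved RSS file contents.
rss_text = (
    "<link>http://news.google.com/news/url?sa=t&cid=1"
    "&url=http://www.buffalonews.com/some-article</link>"
    "<link>http://news.google.com/news/url?sa=t&cid=2"
    "&url=http://example.com/another-story</link>"
)

# Same idea as the regex in the post: grab everything between &url= and </link>.
links = re.findall(r"(?<=&url=).*?(?=</link>)", rss_text)
print(links)
```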
  14. Is that double back slash in your code causing the issue? Sorry, I'm new but maybe that will help? download file(#myURL,"{$special folder("Application")}\my_file.pdf") Try that :-) Peace, EJ
  15. [Scraping Rss] Been trying to get this for about 6 hours without any luck. I'm trying to get the links from Google RSS here: https://news.google.com/news?cf=all&hl=en&ned=us&q=Red+Skelton&output=rss The links look like this:

      <link>http://news.google.com/news/url?sa=t&fd=R&ct2=us&usg=AFQjCNGdVPVljckRj3DjAnoe4B1Bs8I6Ow&clid=c3a7d30bb8a4878e06b80cf16b898331&cid=52779022175028&ei=u9yJVvi_MerrwQGb67GQBA&url=http://www.buffalonews.com/life-arts/book-reviews/book-review-limping-on-water-by-phil-beuth-with-kc-schulberg-20160103</link>

      However, I just need
  16. Greetings, I'm a brand new user that's been waiting a while, doing a ton of research, and saving money before purchasing. I'm a full-time web developer already, so catching on to the software has been pretty easy for me; I've had the software a few days and have been going through the tutorials. I just wanted to say thanks to Frank and all of you for taking the time to make all of the great tutorials; you have all been wonderful for sharing your feedback and wisdom. Special thanks to Frank for the 'official tutorials' I watched hours and hours of them prior to purchasing and they made a h
  17. Thanks pftg4! Got it working with page scrape, great feature! I use this kind of function all the time in Scrapebox and it's very handy. Love being able to put it all together in uBot, saves me a lot of time! Peace, EJ
  18. Greetings, first post on the forum :-) I'm following Frank's scraping tutorial but Google has removed the class from the links on their pages making things a bit more tricky. I can do this easily with Scrapebox but I'm having trouble figuring out how to do it with Ubot. What I need to do is scrape the content between one point and another: <a href="https://www.allaboutbirds.org/guide/" onmousedown="return rwt(this,'','','','1','AFQjCNEtCVsjUVy4rwZhRkep3d599ciT0g','','0ahUKEwid9s640IXKAhWM6yYKHWfEAf0QFggcMAA','','',event)">Bird Guide - All About Birds</a> Basically I want to s
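With the class gone from Google's links, capturing whatever sits between the tag's closing > and </a> still works as a "scrape between two points" approach. A hedged Python sketch (`anchor_texts` is a hypothetical helper; the same pattern idea would feed uBot's regex node):

```python
import re

def anchor_texts(page):
    """Capture the visible text between an <a ...href=...> open tag and </a>."""
    return re.findall(r'<a [^>]*href="[^"]*"[^>]*>(.*?)</a>', page, re.DOTALL)

# Shortened stand-in for the Google result link shown above.
html = ('<a href="https://www.allaboutbirds.org/guide/" '
        'onmousedown="return rwt(this)">Bird Guide - All About Birds</a>')
print(anchor_texts(html))  # ['Bird Guide - All About Birds']
```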