UBot Underground

Biks

Fellow UBotter
  • Content Count

    217
  • Joined

  • Last visited

  • Days Won

    1

Biks last won the day on January 28 2011

Biks had the most liked content!

Community Reputation

9 Neutral

About Biks

  • Rank
    Advanced Member

Profile Information

  • Gender
    Male

System Specs

  • OS
    Windows 10
  • Total Memory
    More than 9 GB
  • Framework
    v4.0
  • License
    Professional Edition


  1. I'm trying to automatically download .acsm (EPUB) files from the OverDrive library network. I just can't click the last button to actually START the download. This script logs you into a temporary account # and gets you to the account loan page where I want to download the file. When I run the click in NODE mode it works, but not in the full sequence. I've also tried moving the mouse over the button (works in NODE mode, not in the sequence). Scraping the URL and then downloading doesn't seem to work either. What am I doing wrong? Note: this is using a temporary library card # - good for 2 weeks
  2. I'm also trying to download a file behind a paywall and having the same problem: I can't start a file download - run node works, but running the entire box doesn't. How did you solve it? I'm trying to download an ebook via OverDrive: https://bpl.overdrive.com/bpl-visitor/content/media/1438507 Obviously you need a library card and to be logged in to initiate the download/borrow. I can download the .acsm URL directly - that works, but the file doesn't open within Digital Editions. It adds something about my subscription when I click it through the online interface. I mean, we're talking about having
  3. I'm trying to automate a simple Google Alert. It looks like it should be easy, but the standard Ubot choices don't work. You can't drag anything over, and the drop-down menu choices aren't the classic <option value=ITEM>. Google Alerts creation page: https://www.google.com/alerts (obviously log in with a Google account) The HTML for one of the drop-down menus looks like this: <div class="goog-flat-menu-button jfk-select volume_select goog-inline-block" tabindex="0" role="listbox" aria-activedescendant=":6" aria-expanded="false" aria-haspopup="true" style="-webkit-user-select: none;"><div class
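Since these widgets aren't standard <select> elements, one workaround is to drive the page with JavaScript instead of UBot's drop-down nodes. A minimal sketch (CODE-view syntax), assuming UBot 4's run javascript command; the volume_select class comes from the HTML above, but the goog-menuitem class and the option text are assumptions - inspect the live DOM to confirm them:

```
comment("Open the custom drop-down by clicking its container div")
run javascript("document.querySelector(\"div.volume_select\").click();")
wait(1)
comment("ASSUMPTION: menu entries use the goog-menuitem class; the option text is a placeholder")
run javascript("var items = document.querySelectorAll(\"div.goog-menuitem\");
for (var i = 0; i < items.length; i++) \{
  if (items[i].textContent.trim() == \"Only the best results\") \{ items[i].click(); break; \}
\}")
```

If the click fires but the menu never opens, the widget may need a mousedown event instead of click - these Closure-style controls vary.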
  4. This should be simple, but I've never done it before. Let's say I scraped a 300-character block of text and it's sitting in a variable. I want to shorten it to the first 200 characters. (I know how to FIND the first 200 characters, but I want to delete everything outside of my find)
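Keeping only the first 200 characters doesn't need a find-and-delete step - it can be done by overwriting the variable with a substring of itself. A minimal sketch, assuming UBot 4's $substring function (parameters: text, start position, length - check whether your version indexes from 0 or 1):

```
comment("Overwrite #scraped_text with just its first 200 characters")
set(#scraped_text, $substring(#scraped_text, 0, 200), "Global")
```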
  5. Does Ubot require the Microsoft C++ redistributable to run? Then I'm assuming any compiled bots would also require it... I don't have the Developer Edition of Ubot - if I DID have it, would all these additional files also be required for compiled bots that go out to others? Thanks everyone for helping me out on this.
  6. re: Microsoft C++ redistributable - what's the deal with that one? Did I forget that I (probably) installed it years ago? It's not just suggested, but mandatory, right? (I'm trying to keep this as simple as possible)
  7. Simple question: do compiled bots need the .NET Framework to run? I'm giving someone (a non-programmer) one of my compiled bots on Windows 10. He can't get it to run, and I'm assuming this is the issue (in addition to needing to add an exclusion in Windows Security). Any other tips I should know when giving out bots?
  8. I've done this a million times; now I can't. I'm just trying to scrape emails. In an earlier version of Ubot, this regex worked for scraping emails (NODE VIEW): (\([A-Z0-9._%-])+@([A-Z0-9.-]+)\.([A-Z]{2,4})(\ But in CODE VIEW I see this: (\\([A-Z0-9._%-])+@([A-Z0-9.-]+)\\.([A-Z]\{2,4\})(\\ It's adding more slashes. What's going on? When someone says USE THIS REGEX, do I paste it into NODE or CODE view? This regex is supposed to scrape all variations of emails: [a-zA-Z0-9\._\-]{3,}(@|AT|\s(at|AT)\s|\s*[\[\(\{]\s*(at|AT)\s*[\]\}\)]\s*)[a-zA-Z]{3,}(\.|DOT|\s(dot|DOT)\s|\s*[\[\(\{]\
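The two versions above appear to be the same expression: CODE view shows UBot script source, where backslashes and braces inside string literals are escaped (\ becomes \\, { becomes \{), so regexes found on the web are normally pasted into the NODE-view field and the extra slashes in CODE view are just serialization. A minimal sketch in CODE-view syntax, assuming UBot 4's $find regular expression function and a hypothetical #page variable already holding the scraped page text:

```
comment("NODE-view form of this pattern: [A-Z0-9._%-]+@[A-Z0-9.-]+\.[A-Z]{2,4}")
comment("CODE view escapes the backslash and braces, as below")
add list to list(%emails, $find regular expression(#page, "[A-Z0-9._%-]+@[A-Z0-9.-]+\\.[A-Z]\{2,4\}"), "Delete", "Global")
```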
  9. So what other software can I learn/use to do this (that won't crash)? Or I would really love to have this scraped: https://soundcloud.com/harperaudio_us/followers And maybe this too: https://soundcloud.com/audible/followers Anyone willing to run my code on their machine? Does anyone know of anyone who could/would do this? How much do you/they need?
  10. 
     clear list(%followers)
     navigate("https://soundcloud.com/random-house-audio/followers","Wait")
     wait for browser event("Everything Loaded","")
     loop(9999) {
         add list to list(%followers,$scrape attribute(<class="userBadgeListItem__heading sc-type-small sc-link-dark sc-truncate">,"href"),"Delete","Global")
         run javascript("window.setTimeout(function() \{ window.scrollTo(0, document.body.scrollHeight) \}, 500)")
         wait(3)
     }
     save to file("C:\\Users\\Public\\Ubot\\Soundcloud\\SCRAPED USERS.txt",%followers)
     I'm basically doing this. Each JavaScript page load gives me 25 new profiles at the
  11. From what I can see, these deal with data once you've acquired it. The problem is I need to hold 2.5 million entries in memory (before the scrape) before I can do anything with it. I can't parse the INPUT into manageable smaller sections. Giganut, how many Twitter followers can you scrape at one time?
  12. I have never gotten Ubot to scrape beyond a certain point. Once I hit around 42,000 entries, the whole thing collapses. I just had this happen twice on the same site, and I'm guessing I'm running out of memory. At this point I'm using 16 GB - will doubling my memory help? I've recently been grabbing followers on a few websites that require you to keep loading a new batch of users as you scroll down the page (using the JavaScript load command). There's no way of stopping, saving and continuing beyond a certain point; it's just an endless list. As an example: The Spotify Twitte
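Rather than holding every entry in one list until the end, one workaround is to flush the list to disk every so many batches and then clear it, so memory use stays roughly flat no matter how long the scroll runs. A minimal sketch based on the loop in post 10, assuming UBot 4's append to file, increment, and $comparison commands (the file path and the 50-batch flush threshold are arbitrary choices):

```
clear list(%followers)
set(#batches, 0, "Global")
loop(9999) {
    add list to list(%followers, $scrape attribute(<class="userBadgeListItem__heading sc-type-small sc-link-dark sc-truncate">, "href"), "Delete", "Global")
    run javascript("window.scrollTo(0, document.body.scrollHeight)")
    wait(3)
    increment(#batches)
    if($comparison(#batches, ">", 50)) {
        then {
            comment("Flush ~50 batches of rows to disk, then free the memory")
            append to file("C:\\Users\\Public\\Ubot\\Soundcloud\\SCRAPED USERS.txt", %followers, "End")
            clear list(%followers)
            set(#batches, 0, "Global")
        }
        else {
        }
    }
}
```

One caveat: clearing the list means the "Delete" duplicate filter only applies within each chunk, so the output file may need a de-duplication pass afterwards.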
  13. Ver 1.1.4.4 installed. Why can't I find the clipboard functions? Searching for CLIPBOARD brings up nothing.
  14. I had the same problem - that seemed to do the trick. (Chrome 21)