UBot Underground

MrGeezer

Fellow UBotter
  • Content Count: 52
  • Joined
  • Last visited

Everything posted by MrGeezer

  1. Hi Guys, Just an update: turns out it wasn't just me. They were able to isolate the speed issue and are looking to get it back up to speed. http://tracker.ubotstudio.com/issues/356
  2. Sounds like it's just me that's experiencing this. Thanks guys, I will lodge a ticket.
  3. Hi quite_interesting. Just tried that, takes about 2 seconds (i.e. enough time for me to alt-tab into the debugger and wait to see it turn from blank to the text within the scraped attribute).
  4. Ah, makes sense to me, thank you for your help! Do you think it is better (performance-wise) to append each line to a CSV file, or to write every 500 rows or so? (See the batched-write sketch after this list.)
  5. Hi Guys, Today I decided to work on converting my existing bots to v5 (there were minor bugs that were previously causing the bot to freeze when compiling, which I have since resolved). However, I have noticed that my bots are now running extremely slowly. Whilst running within UBot Studio and looking at the debugger, I notice that scrape attribute to a variable is running extremely slowly. I created a very simple HTML file with my own custom tags to test it out (the webpage is less than 1 KB), e.g. <customtag1>text</customtag1><customtag2>text2</customtag2> etc. …
  6. Hi guys! I am scraping about 5000 rows of data to a table, then writing that information to a CSV sheet. However, I am noticing that there are performance issues when I get to around 2000 rows, and sometimes it fails to even store the data at all. Wondering what the best practice is for storing large amounts of data locally, as from what I have read on the forums, tables are not ideal. Cheers
  7. Hi Abbas, Sorry for the late reply, I have not had a chance to test this out today as it was not on my machine! Thank you very much for your advice, it worked a treat!
  8. Hi guys, Having an issue with one of my bots, and to diagnose it I have simplified the script completely. Basically, all it does on load is log into a proxy, then connect to the HTTPS Google web address to run a web search. On my computer it works completely fine, however on another it pulls up an error: SSL Protocol Error 107. Any idea how to fix this? My guess is it is not related to UBot, but either the proxy server is rejecting the new computer or it is something to do with how the PC is configured?
  9. Guys, thanks for your tips, very interesting! What is the optimal way of doing this if we want to ensure that we keep cookies intact? I.e. scraping from a membership-based website but still freeing up resources?
  10. V5.01 would open my bot but fail to compile; now v5.02 completely freezes when I try to open the .ubot file.
  11. dipswitch, can you compile this code? I have narrowed it down on my system to this:
      on load("Bot Loaded") {
          load html("jjjjjjjjjjj")
      }
  12. I am also getting a boost error!
      ---------------------------
      Microsoft Visual C++ Runtime Library
      ---------------------------
      Assertion failed!
      Program: ...
      File: C:\boost\boost\boost/smart_ptr/shared_ptr.hpp
      Line: 653
      Expression: px != 0
      For information on how your program can cause an assertion failure, see the Visual C++ documentation on asserts.
      (Press Retry to debug the application - JIT must be enabled)
      ---------------------------
      Abort   Retry   Ignore
  13. Tested with .NET 4 and .NET 4.5; both failed to work!
  14. I have an issue with DeathByCaptcha also where it will not auto-solve. It seems to only happen on the Windows 7 machines that I have tested on.
  15. Hi guys, Thanks for your help. I adjusted my code as suggested so it only picks up the image, however I am still having the same problem.
      define solveGoogleCaptcha {
          loop while($search page("Our systems have detected unusual traffic from your computer network")) {
              then {
                  type text(<name="captcha">, $solve captcha(<outerhtml=w"<img src=\"/sorry/image?id=*\">">), "Standard")
                  click(<name="submit">, "Left Click", "No")
                  wait for browser event("Everything Loaded", "")
                  wait(3)
              }
          }
      }
      There are a few things…
  16. Thanks for the tip, quite_interesting! I do not believe the element is changing, however the wait time could be the issue. I am currently testing this out now and will post back on how it goes. Thank you very much for your input!
  17. Sorry, forgot to attach the captcha popup!
  18. Also note, when the window pops up, it does not solve the captcha!
  19. Hi guys, I have integrated DeathByCaptcha into my compiled software and it usually runs fine without an issue. It essentially runs a search query on Google and scrapes the first result. I have disabled the browser capability of the bot and it usually solves any captchas in the background without a hitch. But sometimes it does not work at all and will show the captcha window instead of silently running it in the background. I am running the bot on two computers, and have seen it happen on both machines. I do not believe it is an issue with connectivity with DBC because I have run the bot…
  20. Hi guys, I am opening a CSV file using UBot. However, when I try to save back to the CSV file, it strips away all quotation marks. This is not usually an issue, however one of the particular cells uses code so links are clickable: =hyperlink("http://www.somewebsite.com"). This makes a CSV link clickable from within Excel. But because it strips the quotation marks away, leaving =hyperlink(http://www.somewebsite.com), it is no longer a clickable link. Any ideas on getting around this? Is this a bug that should be reported? (See the CSV quoting sketch after this list.)
  21. Thanks for the tip, Vaultboss! Went with your suggestion and it got it to go. Although I am curious as to how Regular Expressions work from within Scrape Attribute? I have searched ubotstudio but cannot seem to find a guide.
  22. Thanks Anonym, unfortunately in this case scrape attribute is not going to work, as it was pulling out attributes that were irrelevant. I used scrape attribute to isolate the code above and need to regex it to refine it further. The other alternative I thought of was to regex-replace the entire <a*> tag and then run another regex, but this is probably not the most ideal way to do it.
  23. Hi Blumi, I appreciate your help. Not sure if I am implementing it incorrectly, but it did not seem to work on my end? Basically, I need to locate the h3 tag and the a tag, but ignore anything inside the href, as this differs from link to link, then scrape the content between the a tags.
  24. Hi guys, I have encountered a problem with a wildcard. Basically, I wish to match content between an <a></a> tag. The problem is that I need to add a wildcard when matching the tag.
      <h3><a class="name" href="/user/id=31266009&authToken=TRVU&locale=en_US&srchid=4852501813&srchindex=1">Name I want to scrape</a>
      e.g. (?<=</h3><a class="name" href="WILDCARDGOESHERE">)(.?*)(?=</a>)
      Is this possible? I tried inserting [a-zA-Z0-9\t\n .\/<>?;:"'`,!@#$%^&*()\[\]{}_+=|\\-] where WILDCARDGOESHERE exists, however this… (See the regex sketch after this list.)
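
A note on the batched-write question in post 4 (and the 5000-row table in post 6): rather than holding every row in a table and writing it once at the end, or hitting the disk on every single row, a middle ground is to buffer rows and flush them in batches. The sketch below is plain Python rather than UBot script, purely to illustrate the pattern; the batch size of 500, the file name, and the column layout are assumptions.

    import csv

    BATCH_SIZE = 500              # assumed flush interval
    OUTPUT_FILE = "scraped.csv"   # hypothetical output path

    def append_rows_in_batches(rows, path=OUTPUT_FILE, batch_size=BATCH_SIZE):
        """Append rows to a CSV file, flushing to disk once per batch."""
        buffer = []
        with open(path, "a", newline="", encoding="utf-8") as f:
            writer = csv.writer(f)
            for row in rows:
                buffer.append(row)
                if len(buffer) >= batch_size:
                    writer.writerows(buffer)   # one write call per batch
                    buffer.clear()
            if buffer:
                writer.writerows(buffer)       # flush the remainder

    # Example: append_rows_in_batches([["name", "url"], ["example", "http://example.com"]])

The same idea applies in any language: batching keeps memory use bounded while avoiding one disk write per row.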
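
On the stripped quotation marks in post 20: by the CSV convention Excel expects, a cell that itself contains quotes has to be wrapped in quotes with the inner quotes doubled; if the quotes are removed altogether, =hyperlink(http://...) is no longer a valid formula. Whatever UBot does internally when saving, this plain-Python sketch (file name assumed) shows the quoting that needs to survive the save:

    import csv

    # A cell whose value is an Excel formula containing quotation marks.
    cell = '=hyperlink("http://www.somewebsite.com")'

    with open("links.csv", "w", newline="", encoding="utf-8") as f:
        # QUOTE_ALL wraps every field in quotes; the inner quotes are doubled,
        # so the saved cell reads: "=hyperlink(""http://www.somewebsite.com"")"
        writer = csv.writer(f, quoting=csv.QUOTE_ALL)
        writer.writerow([cell])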
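
On the wildcard regex in posts 22-24: a variable-length wildcard inside a lookbehind is rejected by many regex engines, and even where it is allowed it is awkward here. Matching the changing href with [^"]* and capturing the anchor text with a non-greedy group is usually simpler. A plain-Python sketch, assuming the HTML looks like the sample in post 24:

    import re

    html = ('<h3><a class="name" href="/user/id=31266009&authToken=TRVU'
            '&locale=en_US&srchid=4852501813&srchindex=1">Name I want to scrape</a>')

    # [^"]* stands in for the per-link href; (.*?) captures the anchor text non-greedily.
    pattern = r'<h3><a class="name" href="[^"]*">(.*?)</a>'

    match = re.search(pattern, html)
    if match:
        print(match.group(1))   # -> Name I want to scrape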