UBot Underground

Search the Community

Showing results for tags 'scraping'.



  1. Hello, I'm having a hard time scraping some values from a keyword research tool. In total I have 3 different problems, which I describe in more detail in the following two short videos: Problem 1: How can I remove the " signs from the scraped data? Problem 2: How can I get all the scraped elements onto one line, separated by commas? https://www.loom.com/share/c46ad50de064427c8b6c06a18c8bca8b Problem 3: How can I scrape only one specific element, without scraping all similar elements? https://www.loom.com/share/e005e88c283246d28ec9870702c202d1 Would
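     A minimal sketch for problems 1 and 2, assuming the values are first scraped into a list. The <class="keyword-cell"> selector and the $text from list function are assumptions; swap in whatever matches your page and your UBot build:

        clear list(%keywords)
        add list to list(%keywords, $scrape attribute(<class="keyword-cell">, "innertext"), "Delete", "Global")
        comment("Problem 1: strip the quote marks; Problem 2: join the list with commas")
        set(#one line, $replace($text from list(%keywords, ","), "\"", ""), "Global")

     If your build writes the quote character differently inside strings, run the $replace over each list item in a loop instead.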
  2. Not really sure if this can be done, but how do you scrape tables to an Excel file? Let's say Season 1 on: https://en.wikipedia.org/wiki/List_of_The_Legend_of_Korra_episodes I have looked up tutorials, but most seem to be regex or list based examples. I want to extract this data and put it into Excel in rows 1, 2, 3 so it looks like the page, but in Excel. Is this possible?
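     One possible approach (a sketch, not a tested recipe): scrape each column of the episode table into its own list, pour the lists into a table column by column, and save the table as a .csv that Excel opens directly. The <class="summary"> selector and the output path are assumptions; inspect the Wikipedia table and adjust:

        navigate("https://en.wikipedia.org/wiki/List_of_The_Legend_of_Korra_episodes", "Wait")
        wait for browser event("Page Loaded", "")
        clear list(%titles)
        add list to list(%titles, $scrape attribute(<class="summary">, "innertext"), "Don't Delete", "Global")
        add list to table as column(&episodes, 0, 0, %titles)
        comment("repeat the scrape for the other columns (air date, director, ...) into columns 1, 2, ...")
        save to file("C:\\korra season 1.csv", &episodes)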
  3. Hi guys, I've been tinkering around with uBot more and more and I'm trying to develop a scraping bot for myself, but I'm running into trouble. When I create a command that scrapes a certain piece of information, it's all under the same div class, so it scrapes a bulk of information like the following example (FYI: I tried using offsets, wildcards, etc., and either it doesn't work or it scrapes everything on the page): Fruits: Bananas, Apples, Watermelon Color: Yellow, Red, Yellow, Green Flavor: Sweet, Sour Shape: Long, Round, Oval So once I got all this information saved as a variable
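     One way to work with a bulk scrape like that (a sketch; the <class="product-details"> selector and the line position are assumptions): split the blob on line breaks and pick out just the line you need:

        set(#blob, $scrape attribute(<class="product-details">, "innertext"), "Global")
        clear list(%lines)
        add list to list(%lines, $list from text(#blob, $new line), "Don't Delete", "Global")
        comment("list positions start at 0, so item 1 is the second line, e.g. the Color line")
        set(#colors, $list item(%lines, 1), "Global")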
  4. Hello. I already read the Offset tutorial, but I still can't get it. I want to scrape the href link under all of the <li> tags and add them to a new list. <li><a href="/alfaromeo3455534/">....</li> So the whole code looks like this: <body><div id="main"><div class="content"><div class="c-1 endless_page_template"><ul class="list"><li> <ahref> </li><li> <ahref> </li><li> <ahref> </li><li> <ahref> </li><li> <ahref> </li>...</ul>
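     A minimal sketch: grab the absolute URL of every anchor on the page, then filter the list afterwards if the page carries links you don't want:

        clear list(%item links)
        add list to list(%item links, $scrape attribute(<tagname="a">, "fullhref"), "Delete", "Global")

     If the listing links all share a recognisable pattern (like /alfaromeo.../), loop over %item links and keep only the entries that contain it.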
  5. I am trying to scrape the search engine pages, and once I have a page scraped, I want to extract all the bolded keywords and put them in a list. I've been trying to make this happen and I just can't seem to put my finger on the right code to get it to work. If you could help me out with this, it would be greatly appreciated. So again: I want to scrape the search engines for a specific keyword. Once I have the page scraped with all the listings (this includes the title, URL, and the description shown in the search engine), I want to extract the bolded keywords from that scraped page. I then
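     Most engines wrap the matched terms in <em> or <b> tags, so scraping the inner text of those elements is one way in (a sketch; check the result page's source and adjust the tag name):

        clear list(%bold keywords)
        add list to list(%bold keywords, $scrape attribute(<tagname="em">, "innertext"), "Delete", "Global")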
  6. How can I scrape using socket commands? For example, if I want to go to Craigslist and scrape the emails or phone numbers using sockets.
  7. I am very new to all of this and actually came across UBot by pure luck after years of searching for something like this. I am trying to create a very simple bot that will go to a website and search for something specific. I have made it that far and it works successfully. When UBot clicks the "search" button, the webpage will sometimes return buttons with options for what I was looking for. I am trying to find the best way to set an alert if the webpage comes back with a result, and I don't have a clue. I know this is probably beyond elementary for you guys, but if anyone could point
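     One simple way to raise that alert (a sketch; <class="option-button"> is an assumption, use whatever attribute identifies the result buttons on your page):

        if($exists(<class="option-button">)) {
            then {
                alert("The search came back with options - take a look.")
            }
        }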
  8. Does anyone have a regex that could grab the IPs and ports from most of the pages that offer free proxies? Or do I have to scrape the proxies and add them to a table, do the same with the ports, and finally create a .txt file with both tables as IP:PORT? Thanks
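     A sketch that works when a site prints the address and port together as ip:port somewhere in the page text (many proxy lists put them in separate table cells instead, in which case scrape the two columns into lists and glue them together). The output path is just an example:

        clear list(%proxies)
        add list to list(%proxies, $find regular expression($scrape attribute(<tagname="body">, "innertext"), "\b(?:\d{1,3}\.){3}\d{1,3}:\d{2,5}\b"), "Delete", "Global")
        save to file("C:\\proxies.txt", %proxies)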
  9. Hey boys and girls, I'm writing a bot and having issues with multithreading and also with the scraping side: the scraped info is not being added to the lists/tables. For no other reason than that I don't know any better (right or wrong, I'm not sure), I'm using: thread spawn("","") { } Could someone set me straight on how to do this properly, or point me to up-to-date UBot tutorials? Thanks!
  10. Hello, I am looking to buy 500+ different kinds of software. If you are selling any priced between $1 and $20, let me know. The software I'm looking for can fall under any category: SEO, SMM, Facebook, YouTube, Google, Wordpress, etc. Also, 6 months of support for the software you are selling is mandatory. P.S. I do not want the source code; just the exe is fine. Thanks Imran
  11. Hello everyone, does anybody know how to scrape toolbars? Maybe with the "http post" plugin or something. Thanks for your help.
  12. I am having issues scraping IDs from a gig site. I made a short video showing my issue; the link is below. Some help with this would be appreciated. Video Link Click Here
  13. Hi, I'm new here, so sorry for just asking for help in the first instance; hopefully I'll be able to offer advice myself once I get into the software. I am trying to scrape the following data from this URL, move to the next page if it exists, and place it into a CSV: http://www.the-saleroom.com/en-gb/auction-catalogues/1818-auctioneers/catalogue-id-sr1810075/search-filter?page=1&pageSize=240 Lot number Title URL I then need to move to the next page if one exists so I am able to grab all the details for the one sale. So far what I am able to do is scrape all the data from the one page
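     One way to walk the pagination (a sketch; the page count, the <class="lot-title"> selector and the output path are assumptions, and only the lot URL column is shown, so repeat the scrape line for lot number and title):

        clear list(%lot urls)
        set(#page, 1, "Global")
        loop(10) {
            navigate("http://www.the-saleroom.com/en-gb/auction-catalogues/1818-auctioneers/catalogue-id-sr1810075/search-filter?page={#page}&pageSize=240", "Wait")
            wait for browser event("Page Loaded", "")
            add list to list(%lot urls, $scrape attribute(<class="lot-title">, "fullhref"), "Delete", "Global")
            increment(#page)
        }
        save to file("C:\\lots.csv", %lot urls)

     Replace the fixed loop(10) with however many pages the sale has, or check for a Next link before each pass.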
  14. Hello community, this is my first post and I am new to the forums. I have studied uBot for a little while, but I'm still fairly new. I'm having trouble scraping emails from a webpage... I have tried most of the regex codes in EditPad, and the only one that seemed to work and highlight the email addresses was this one: \b[A-Z0-9._%-]+@[A-Z0-9.-]+\.[A-Z]{2,4}\b But when I use this code in uBot I don't seem to be able to scrape anything... I also tried it with lookarounds like: (?<=.)\b[A-Z0-9._%-]+@[A-Z0-9.-]+\.[A-Z]{2,4}\b(?=.) and also tried it in brackets like this: (\([A-Z0-9._%-]
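     A likely culprit is case sensitivity: the [A-Z] classes only match uppercase letters in UBot's regex engine unless case-insensitive matching is switched on, whereas EditPad may have been ignoring case. A sketch with the (?i) flag added:

        clear list(%emails)
        add list to list(%emails, $find regular expression($scrape attribute(<tagname="body">, "innertext"), "(?i)\b[A-Z0-9._%+-]+@[A-Z0-9.-]+\.[A-Z]{2,}\b"), "Delete", "Global")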
  15. Dear Ubotters, I have a list of about 1,000 URLs I am looking to scrape for phone numbers. To be clear, I'm just talking about the home page and then, if necessary, the about / contact us page. I just need the one main number for each URL, but if there happens to be more than one on the page and we can grab them, that would be good, though not a 100% requirement. I don't want a bot; I have a copy of UBot, but it's just a little too complicated and technical for me. I just want you to return the spreadsheet to me with the phone numbers filled in. If the first batch goes well we can do future batches
  16. Guys, I have this block of text. What I am trying to do is find all the links inside it using some sort of universal regex for finding URLs, and then replace them with the text "mariners". I guess I need help with the logic; can anyone help? My code so far: set(#description,$replace($scrape attribute(<id="eow-description">,"innertext"),$new line," "),"Global") set(#description,$replace(#description,$find regular expression(#description,),"konjine"),"Global") set(#watchcount,$scrape attribute(<class="watch-view-count">,"innertext"),"Global") set(#video title,$scrape attribute(<id="eow-t
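     One way to fill in the missing piece (a sketch; it assumes $replace regular expression is available in your build and that an http/https matcher is "universal" enough for your text):

        comment("swap every http/https URL in the description for the replacement word")
        set(#description, $replace regular expression(#description, "https?://[^\s<>]+", "mariners"), "Global")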
  17. Hello fellow Ubotters, I would like to get some advice on best practices for automating a bot that has a large number of pages to scrape. First I will give a little background, and then hopefully someone can give me a few good ideas to implement. There is one site that I would like to scrape, and I need to pass a series of unique URLs to the site. With each loop, I write the unique URL into a separate table so that I can keep track of which ones have been done and which ones still need to be done. Perhaps an example will help to demonstrate my situation
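     For reference, the loop-with-a-tracking-table pattern described above might look roughly like this (a sketch; the list, table and file names are assumptions, and the real bot would put its scraping commands where the comment sits):

        set(#row, 0, "Global")
        loop($list total(%unique urls)) {
            set(#current url, $list item(%unique urls, #row), "Global")
            navigate(#current url, "Wait")
            wait for browser event("Page Loaded", "")
            comment("... scrape the page here ...")
            set table cell(&done, #row, 0, #current url)
            save to file("C:\\done.csv", &done)
            increment(#row)
        }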
  18. Guys, here is my issue in UBot when scraping. I noticed that when you scrape addresses, let's say one address is St. 34, Alesandra Avenue, scraping is fine and it will fill one column in Excel, and I am OK with that. But what about long strings? Let's say the address is Street 34, Allesandra Avenue, New York Department, Ministry of Defense, local directory... bla, bla, bla. In these cases UBot automatically goes onto a new line and ruins my data. My question is this: is there a way to limit how much information the script scrapes from a particular string? Is there a way to force UBot to keep ev
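     If the culprit is a line break embedded in the scraped address, collapsing it before the value goes into the table usually keeps one address in one cell (a sketch; the <class="address"> selector is an assumption):

        set(#address, $replace($scrape attribute(<class="address">, "innertext"), $new line, ", "), "Global")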
  19. Guys, I have a strange thing. I am trying to scrape phone numbers from this Spanish site, and my goal is this: I have to scrape only mobile numbers, so every number which starts with "9" should be excluded from the scraping results. I managed to do the hard part, but I have this weird issue. Check the image attached. on load("Bot Loaded") { navigate("http://www.milanuncios.com/ofertas-de-empleo/","Wait") } wait for browser event("Everything Loaded","") wait for browser event("DOM Ready","") set(#loop,0,"Global") loop(#loop) { click($element offset(<tagname="b">,#loop),"Left Click","No")
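     A regex-only alternative worth trying (a sketch; it assumes the numbers appear as plain nine-digit strings in the page text, and leans on the fact that Spanish mobile numbers start with 6 or 7, which amounts to excluding the 9-prefixed landlines):

        clear list(%mobiles)
        add list to list(%mobiles, $find regular expression($scrape attribute(<tagname="body">, "innertext"), "\b[67]\d{8}\b"), "Delete", "Global")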
  20. Guys, I have the weirdest issue. I was updating my old bot; the data looks solid and well formatted in the UBot debugger (check the picture attached). Issue 1: but when I open it with Excel my data looks jumbled. This never happened to me before. The only things I changed are that I added a few plugins, "Advanced Ubot" by Pash and "HMA Commands" from T.J., but I don't think either of those is the issue; they have nothing to do with this error. I think something changed inside Excel, I don't know what, sadly! I added the Details.csv that UBot outputs; try to import it into Excel. I always did this normally
  21. http://posting.albany.backpage.com/online/classifieds/PostAdPPI.html/alb/albany.backpage.com/?u=alb&serverName=albany.backpage.com&category=5416&section=4374 Guys, when I make a script to navigate to each of the pages I have (over 400 of them), I want to post to each biz-op page for an actual job that I am offering for sales reps. Anyway, I go to the page you see above and now have to scrape each one and open each in a new navigation. Blah blah, I can't scrape the hrefs and it's got me going crazy. Here is the code: loop($list total(%urls)) { navigate($list it
  22. Hi guys, how do I go about scraping just the values from the div below? I want to end up with: peterjeffry86 jeffrypeter456 pjeffry27 Obviously the suggestions are going to change, so I think I need a regex? Though I just can't figure out how to get going with regex in UBot. <div id="username-suggestions" class="username-suggestions" style="display: block;">Available: <a href="">peterjeffry86</a><a href="">jeffrypeter456</a><a href="">pjeffry27</a></div> Any help appreciated.
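     A sketch that grabs the inner HTML of that div and keeps whatever text sits just before each closing </a> (the "innerhtml" attribute name is an assumption; adjust if your build labels it differently):

        set(#suggestions, $scrape attribute(<id="username-suggestions">, "innerhtml"), "Global")
        clear list(%usernames)
        add list to list(%usernames, $find regular expression(#suggestions, "[^<>]+(?=</a>)"), "Delete", "Global")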
  23. Hello UBotters, it's been a while since I've picked up UBot. I have a client that wants to scrape the URLs of the Amazon best sellers, and this is the code that I've come up with: clear list(%urls) navigate("http://www.amazon.com/gp/bestsellers/electronics/ref=sv_e_1", "Wait") wait for browser event("Page Loaded", "") add list to list(%urls, $scrape attribute(<class="zg_itemImmersion">, "href"), "Delete", "Global") The page source for one of the items looks like this: <div class="zg_itemImmersion"><div class="zg_rankDiv"><span class="zg_rankNumber">5.</span><
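     The zg_itemImmersion element is a div, so it carries no href for $scrape attribute to return; the link lives on an anchor inside it. One workaround (a sketch; the "/dp/" test is an assumption about Amazon's product URLs): scrape every anchor's absolute URL and keep only the product links:

        clear list(%all links)
        clear list(%urls)
        add list to list(%all links, $scrape attribute(<tagname="a">, "fullhref"), "Delete", "Global")
        loop($list total(%all links)) {
            set(#link, $next list item(%all links), "Global")
            if($contains(#link, "/dp/")) {
                then {
                    add item to list(%urls, #link, "Delete", "Global")
                }
            }
        }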
  24. So I'm trying to create a bot that goes to a webpage, scrapes a certain email address and saves it to a .txt file. The bot then navigates to another webpage, scrapes another email address, and saves it to that same .txt file. It then keeps repeating this process a set number of times. I'm able to get it to navigate to the first website on my list and scrape the email address to a .txt file fine. The problem is when it navigates to the second website on the list and scrapes that email address. It scrapes the email like it's supposed to, but instead of adding it to the list with the previous
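     One way around the overwriting is to collect everything in a single list across the loop and only write the file once at the end (a sketch; %sites, the email regex and the output path are assumptions):

        clear list(%emails)
        loop($list total(%sites)) {
            navigate($next list item(%sites), "Wait")
            wait for browser event("Page Loaded", "")
            add list to list(%emails, $find regular expression($scrape attribute(<tagname="body">, "innertext"), "(?i)\b[A-Z0-9._%+-]+@[A-Z0-9.-]+\.[A-Z]{2,}\b"), "Delete", "Global")
        }
        comment("write everything out once, after the loop, so earlier results are not overwritten")
        save to file("C:\\emails.txt", %emails)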
  25. http://videosniper.s3.amazonaws.com/img/Salespage1.jpg YouTube Video Demo $7 ONLY (dimesale price) http://tntsofthouse.com/wp-content/uploads/2013/09/order-button-5.jpg Software features: - Fast scraping with sockets and the YouTube gdata API - Grab a video and check competitors with a single click - Grab video competitor data (tag, keyword, description, etc.) with a single click - 15 day money back guarantee if you're not happy with our software - Newbie friendly - Simple and easy to use - Created with UBot and developed by me, so just contact me if you have any problem http://tntsofthouse.com/wp-con