UBot Underground

Showing results for tags 'Scrape'.

  1. I am trying to scrape all the photos from this page: http://v3.torontomls.net/Live/Pages/Public/Link.aspx?Key=2091fd55c22344748e1f3a9ef24ff150&App=TREB As you can see, this page contains a list of homes for sale, with multiple photos for each home. I am having difficulty figuring out how to scrape each of these photos. All of the individual links to the photos are inside this:
     <img src="http://v3.torontomls.net/Live/photos/FULL/1/620/N3257620.jpg?20150710120456" onerror="this.className += ' imgerror'; this.parentNode.className += ' hasimgerror';" class="formitem imageset multi-p…
  2. Hi, I would like to scrape a table from this page: http://odmiana.net/odmiana-przez-przypadki-imienia-Marcin. I'm using the scrape table command, but it doesn't work for me. I would like to scrape only part of this row: Marcin, Marcina, Marcinowi, Marcina, z Marcinem, o Marcinie, Marcinie! I'll be grateful for your help.
  3. Hi, I need some help. I am trying to scrape the home year from Zillow:
     navigate("http://www.zillow.com/homes/3804 Emerson Dr%0960176_rb/","Wait")
     wait for element(<class="zsg-content-header addr">,"","Appear")
     wait(3)
     add list to list(%home year,$scrape attribute(<innertext=r"Built in ">,"innertext"),"Delete","Global")
     I've tried every combination of scrape attribute with no results, so I think it can only be done with regex, but I don't know regex very well; I'm just starting to learn it.
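     The "Built in" year can be pulled out with a short regex once the surrounding text is scraped. A minimal Python sketch of the pattern (the sample strings are made up for illustration, not actual Zillow markup):

     ```python
     import re

     def extract_built_year(text):
         """Return the four-digit year after 'Built in', or None if absent."""
         m = re.search(r"Built in (\d{4})", text)
         return int(m.group(1)) if m else None
     ```

     For example, extract_built_year("Single family - Built in 1962 - 2 beds") returns 1962; the same pattern works in any regex-capable replace or scrape function.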
  4. I'm trying to scrape usernames off of YouTube, add them to a list, and save each name as a .html file. The problem is, sometimes people have lots of spaces in their names, which messes up the html file. Is there a way to scrape their names using regex and replace the spaces with a special character, such as a dash? E.g. - Any help is appreciated.
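     For the spaces problem, the usual regex move is to collapse every run of whitespace into a single dash before building the filename. A Python sketch of the idea (illustrative only, not UBot syntax; the sample name is hypothetical):

     ```python
     import re

     def filename_safe(username):
         """Collapse runs of whitespace into a single dash so the
         scraped name can be used in a .html filename."""
         return re.sub(r"\s+", "-", username.strip())
     ```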
  5. Hi, I need some help. I'm working on a scrape project and cannot get the script working. My code is simple:
     clear cookies
     clear all data
     navigate("http://www.tripadvisor.ca/Restaurants-g155032-Montreal_Quebec.html","Wait")
     wait(1)
     add list to list(%company name,$scrape attribute(<class="property_title ">,"innertext"),"Delete","Global")
     add list to table as column(&restaurant,0,0,%company name)
     add list to list(%company url,$scrape attribute(<class="property_title ">,"fullhref"),"Delete","Global")
     loop($list total(%company url)) {
     navigate($next list item(%company url),"Wait")…
  6. Guys, does anyone have some regex code or something to get links from a Bing search?
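     One common approach is to scrape the page source and pull the href attributes out of anchor tags, then filter out Bing's own navigation links. A rough Python illustration (the sample HTML and the "bing.com" filter term are assumptions, not Bing's actual markup):

     ```python
     import re

     def external_links(html):
         """Collect absolute hrefs from anchor tags, dropping links
         that point back into bing.com itself."""
         hrefs = re.findall(r'<a[^>]*href="(https?://[^"]+)"', html)
         return [h for h in hrefs if "bing.com" not in h]
     ```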
  7. So I am wondering, imagine this scenario: I am scraping data from 3 different sites into the same table, and every site runs on its own thread. When saving data I usually use a variable to increment the row position. What I want to know is what logic you guys use to save data into the same table when multithreading. There can be issues: what if the data is overwritten by another variable from another thread? I mean, what if data from site 1 is in the table and then it gets overwritten by data from site 2? I am just exploring possibilities here to learn what the best way is. Any help i…
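     The usual answer to the overwrite worry is to never let each thread compute its own row number: serialize the "append a row" step behind a lock, so each thread atomically takes the next free row. The UBot plumbing differs, but the underlying logic can be sketched in Python (the site names and row counts are placeholders):

     ```python
     import threading

     table = []                      # shared results table (list of rows)
     table_lock = threading.Lock()   # guards every append

     def save_row(row):
         """Only one thread may append at a time, so rows are never
         overwritten or interleaved mid-write."""
         with table_lock:
             table.append(row)

     def scrape_site(site):
         # stand-in for real scraping: each site contributes three rows
         for n in range(3):
             save_row([site, n])

     threads = [threading.Thread(target=scrape_site, args=(s,))
                for s in ("site1", "site2", "site3")]
     for t in threads:
         t.start()
     for t in threads:
         t.join()
     ```

     After the joins, the table holds all nine rows and none have been clobbered by a competing thread.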
  8. Hello fellow Ubotters, I would like some advice on best practices for automating a bot which has a large number of pages to scrape. First I will give a little background, and then hopefully someone can give me a few good ideas to implement. There is one site that I would like to scrape, and I need to pass a series of unique URLs to the site. With each loop, I write the unique URL into a separate table so that I can keep track of which ones have been done and which ones still need to be done. Perhaps an example will help to demonstrate my situati…
  9. http://i.imgur.com/nyyUt32.jpg You all know how hard it is when you start working in UBot and have to learn all the new tricks and quirks of this automation hassle. Well, since this community has been so helpful to me, I decided to give something back as a thank you to all those people who helped me greatly with my work. I will name a few whom I harassed until their hair fell off, but this doesn't mean I don't love all you guys. So big thanks to: arunner26 >> A man with incredible patience when it comes to newcomers. pash >> A plugin beast that never sleeps! T.J >>…
  10. Hi everyone, I just purchased UBot Studio and I have a basic question. I have a project to create a bot for myself like this:
      1. Scrape urls from a specific website that contain articles matching my keywords
      2. Store the urls to a specified folder
      4. Grab articles from the urls I saved
      5. Save each article in a specified folder as: article 1, article 2, article 3 and so on
      Help with a sample script you made would be very appreciated. Thanks so much, and please see the image: http://oi58.tinypic.com/34p04mv.jpg
  11. Let me know if you can swing this one:
      1: Go to Indigogo.com
      2: Select EXPLORE
      3: Select a category - I will type the name of the category on the UBot UI
      4: Go to each and every listing. "Note that there is a load more button that keeps adding to the list"
      5: Once in a listing, click the twitter link "bottom of listing page" [Find This Campaign On]
      6: Once on their Twitter page, select their followers link
      7: Scrape all followers: collect Name & @twitter handle.
      8: Click +follow
      9: This can only be done 1800 times or so due to limits
      10: I will wait a few days, re-launch the bot, use a check…
  12. Hey guys, I am facing a small problem. I am scraping only specific links. This is the idea: I use a keyword to search YouTube, then put all the links from the search into a list, then visit them one by one. Now comes the tricky part: I have to click on "Show More" to check the youtube description. In most cases the videos I am looking for have a lot of links in their description; check out the example https://www.youtube.com/watch?v=hpqbzPj92HU So if a link starts with www.something.com, scrape it into a table; if not, just skip to the next youtube URL from the list. I was wondering is there…
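      The "keep only www links" filter is essentially one regex once the description text is scraped. A Python sketch of the matching idea (the sample description is invented):

      ```python
      import re

      def www_links(description):
          """Return only the links in a description that start with 'www.'."""
          return re.findall(r'\bwww\.[\w-]+(?:\.[\w-]+)+\S*', description)
      ```

      Descriptions whose links lack the www. prefix simply yield an empty list, which maps onto "skip to the next URL".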
  13. I have this issue with Manta.com. I am building something, and the weirdest thing is I can't seem to find an HREF so I can visit link by link. The only way I found is looping through offsets, which is very sloppy. I would appreciate any help. This is the URL: http://www.manta.com/search?search_source=nav&pt=34.0396%2C-118.2661&search_location=Los+Angeles+CA&search=handyman&location=Near+Los+Angeles%2C+CA An image is attached to show what I am talking about!
  14. I've noticed that there is a view source and a generated source in UBot when I right-click on the page. I am trying to scrape information off of a page, but I want it to scrape from the normal source and not the generated source. When I use scrape attribute, it scrapes the generated source. How can I make it scrape the normal source? Is it possible to just scrape the normal source code? Thanks!
  15. Hello there, I am the new guy here, so I would like some assistance with a few things. I usually work in iMacros, but that language lacks some things, like conditional statements. Although I am used to it by now, it's hard for me to look at UBot with brand new eyes, and these two work very differently, so I wanted to ask a few questions to help me start making bots. Firstly I would like to thank Seth Turin for creating this amazing program; I am using the Developer version and I have a lot to learn. 1 - I saw that almost all tutorials on the tutorial page are for UBot 4, so my question is, which version d…
  16. Hi, I'm new to UBot, but went through the training videos and have a good background in html and using xpath for scraping. I'm trying to scrape google urls, but for some reason I'm not getting any classes or ids that will work. The only thing that repeats on each listing is an onmouseclick javascript code, but it's different for every one. Really appreciate any help here. Thanks, Mike
  17. Hey people, I just joined this great community yesterday; I'm excited! I want to take all the elite proxies from us-proxy.org. I saw that you guys helped someone with this before: http://www.ubotstudio.com/forum/index.php?/topic/11109-help-me-scrape-proxies-from-hidemyass/ So I thought you might help me out too. Here is what I got:
      navigate("http://us-proxy.org/", "Wait")
      change dropdown($element offset(<tagname="select">, 4), "elite proxy")
      change dropdown(<name="proxylisttable_length">, 80)
      clear list(%usproxy)
      add list to list(%usproxy, $scrape attribute(<tagname="td">, "innertext"), "D…
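      Scraping every td innertext gives one flat list in which, assuming us-proxy.org's usual column order, each IP address cell is immediately followed by its port cell. Pairing them back up can be sketched in Python (the cell layout and sample addresses are assumptions, not verified against the live page):

      ```python
      import re

      IPV4 = re.compile(r'^\d{1,3}(?:\.\d{1,3}){3}$')

      def pair_proxies(cells):
          """Walk a flat list of <td> texts; each cell that looks like an
          IPv4 address is joined with the cell right after it as ip:port."""
          proxies = []
          for i in range(len(cells) - 1):
              if IPV4.match(cells[i]):
                  proxies.append(cells[i] + ":" + cells[i + 1])
          return proxies
      ```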
  18. Hi, I just purchased UBot (Standard), so I'm pretty new to the software. I'm trying to scrape the URL links for all products on the page below, but I can't figure out how to do it. I can scrape each item's Title and Price and save them in a list using add list to list with $scrape attribute and the appropriate CLASS, but I can't figure out how to get the URL. Everything I try doesn't seem to work. Can anyone offer any advice? http://www.argos.co.uk/static/Browse/ID72/33012746/c_1/1|category_root|Technology|33006169/c_2/2|33006169|Home+audio|33008502/c_3/3|cat_33008502|Clock+radios|330127…
  19. Hello friends, I'm asking for help with this code that I cannot get to work, and I don't know where the error is. Could you help me?
      navigate("http://proxylist.hidemyass.com/", "Wait")
      wait for browser event("Everything Loaded", "")
      change checkbox(<name="ac">, "Unchecked")
      change dropdown(<name="c[]">, "United States ")
      change checkbox($element offset(<name="sp[]">, 0), "Unchecked")
      change checkbox($element offset(<name="ct[]">, 0), "Unchecked")
      wait for browser event("Everything Loaded", 30)
      click(<innertext="UPDATE RESULTS">, "Left Click", "No")
      wait for browser event("Everyth…
  20. It has been a long time since I messed around with UBot, and I am working on getting back into it. My question is: how can I scrape a forum? I want to click on a topic, scrape the whole page of that topic, then go to the next topic and do the same thing again. I have been trying scrape attribute with inner text, outer text, etc., and I am not having any luck. Some advice on this would be appreciated.
  21. I really need to be able to scrape javascript alert boxes in UBot. In a normal browser they pop up, but they don't in UBot. I'm talking about these kinds of popups: http://i.imgur.com/mW0yKYS.jpg http://i.imgur.com/70bgNqm.jpg How can I scrape the text on them?
  22. Hi there, I need assistance for a customer. The goal is to scrape a range of URLs for table entries and store their rows in a CSV (Step 1), or even better, insert them directly into some mysql tables (Step 2). Example result page to scrape: http://www.fn-neon.de/Turniere/60344/Ergebnisse/1069537/ErgebnisseEinerPruefung.html The first number in the path corresponds to the event to scrape, the second number to the part of the event. The events and their entries can be determined by searching from 16.9.2012 up to today in http://www.fn-neon.de/Turniersuche/index.html Is someone interested in doing this job…
  23. N6CNH

    Twitter Scrape

    Hi guys, firstly can I apologise for my poor coding terminology; that is one of many things I hope to improve through this forum and this project that I am hoping you can help me with. I have done masses of research on this subject and now it's just time to ask for help! I want to scrape Twitter, and I have managed to create a .py script that does this for me when the shell is executed; however this is currently done manually, and the results are stored in an Excel file and replaced every time the shell is run. What I would like is the following: to create some c…
  24. Hey guys, I have a few textarea tags on a page:
      <textarea name="1"></textarea><textarea name="2"></textarea><textarea name="3"></textarea>
      I am scraping their values using add list to list:
      add list to list(%textarea, $list from text($scrape attribute(<tagname="textarea">, "value"), ""), "Don\'t Delete", "Global")
      If I keep the delimiter blank, it adds the values from all the textareas to list item 0, and if I use the $newline delimiter, it adds every new line as a new list item. Can you tell me what delimiter I should set to get the value of each textarea, which might…
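      The delimiter trouble comes from the fact that a textarea's value can itself contain newlines, so no single character splits the joined text cleanly. One alternative is to parse the page source and capture each tag's content as its own item, which sidesteps delimiters entirely. A Python sketch of that idea (not UBot syntax; the sample markup is hypothetical):

      ```python
      import re

      def textarea_values(html):
          """Capture each <textarea>...</textarea> body as a separate item,
          even when a value spans several lines."""
          return re.findall(r'<textarea[^>]*>(.*?)</textarea>', html, re.DOTALL)
      ```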
  25. I have a bot that needs to scrape the seconds (#s), minutes (#m) or hours (#h) listed on the first 15 search results. It just won't save the times properly at all; I can't get them separated into separate rows or do a replace to add in commas or anything. I'm very stuck; what am I doing wrong? Here is my code:
      clear list(%search term)
      clear list(%times)
      navigate("https://twitter.com/search-home", "Wait")
      type text(<id="search-home-input">, "puppy", "Standard")
      click(<class="button btn primary-btn submit selected search-btn">, "Left Click", "No")
      wait for element(<titl…
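      Separating those timestamps is easier when they are matched directly with a pattern for "digits followed by s, m, or h", which gives one list item per result instead of one blob to split. A Python sketch of the regex idea (the sample text is invented, not Twitter's real markup):

      ```python
      import re

      def relative_times(text):
          """Find Twitter-style relative timestamps such as 42s, 7m, 3h."""
          return re.findall(r'\b\d+[smh]\b', text)
      ```

      Each match can then be written to its own row, with no replace step needed to insert commas.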