UBot Underground

Showing results for tags 'scrape'.

  1. Has anyone successfully scraped Flickr images? Basically I want to search Flickr and download the pictures on the results page in medium resolution. Has anyone done something similar?
  2. I am looking to create a desktop or web-based application that will let me extract products from Amazon USA and import them to Mercadolibre.cl. Mercadolibre is a similar service based in Latin America. The software needs to extract: titles, photos, product description, price, inventory, and any other pertinent information. After extraction, the software should import all the data into my Mercadolibre account as a new product. There should be no limit on the number of products that can be imported, perhaps even entire categories. The idea is to create entire Mercadolibre web stores filled with Amazon product
  3. Hello, I need to scrape these questions and cannot find a way. Image: https://imgur.com/a/uiPxj URL: https://www.rpgwatch.com/forums/register.php?do=register Any ideas? Thanks for the help.
  4. Hi there, I have found a bug in the UBot software where the browser function $document text doesn't refresh after a page scroll. I'm trying to scroll down and scrape a page, but it always scrapes only the first 22 results even though I can see the page being scrolled down and all the other results appearing. In the loop, UBot scrapes just the first 22 results because it scrapes from $document text, which doesn't refresh itself after the scroll. When I manually check the source code, it shows the unrefreshed content, the same as $document text, and when I check the view generated sourc
  5. We need your help, expert UBot users. We would like to scrape the URLs of all 821 companies shown on this site: http://launchpoint.marketo.com/?show=all As seen in the screenshots, it requires clicking on a company and scrolling down a bit; off to the right is the "Website" link, which, if clicked, takes you to the company's website. What would be the best way to scrape all 821 company websites off this page? Thank you for any help; we greatly appreciate your time.
  6. Hello friends, I would like your help scraping YouTube keywords: <meta name="keywords" content="correo electrónico, cuenta gmail, cuenta google, tutorial gmail, Gmail (Website), Ilimitada, google, Correo gmail, celular"> I tried add list to list(%keywormetas, $scrape attribute(<meta name="keywords" content=">, "innertext"), "Delete", "Global") but I can't find a way. Thanks for the help.
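A meta tag has no inner text, so scraping "innertext" comes back empty; the keywords live in the tag's content attribute. As a language-neutral illustration of the idea (Python here, not UBot script), a regex over the page source can pull that attribute value out; the HTML string below is the example from the post:

```python
import re

# Pull the content attribute of the <meta name="keywords"> tag out of the
# page source, then split it into a keyword list.
html = ('<meta name="keywords" content="correo electrónico, cuenta gmail, '
        'cuenta google, tutorial gmail, Gmail (Website), Ilimitada, google, '
        'Correo gmail, celular">')

match = re.search(r'<meta\s+name="keywords"\s+content="([^"]*)"', html)
keywords = [k.strip() for k in match.group(1).split(",")] if match else []
print(keywords)
```

In UBot terms the equivalent fix is to scrape the tag's content attribute rather than its inner text.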
  7. Hi, I'm trying to scrape only numbers, but unfortunately the number is in the same class as the rest of the text: <span class="stats-row">15 Items Avaible</span> I need this value for a comparison. Also, I want to remove the brackets from this scraped innertext: <a href="#reviews" class="gig-ratings-count js-gtm-event-auto" data-gtm-category="gig-page-triple" data-gtm-action="click" data-gtm-label="buyer-reviews">(205)</a> Thanks for the help in advance!
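Both cleanups are one-liners once the text is scraped: a digits-only regex for the first, and stripping parentheses for the second. A minimal Python sketch of the idea (a UBot bot would do the equivalent with its regex/replace functions), using the strings from the post:

```python
import re

stock_text = "15 Items Avaible"   # innertext of span.stats-row
ratings_text = "(205)"            # innertext of the ratings link

# First run of digits in the text -> the stock count.
stock = int(re.search(r"\d+", stock_text).group())
# Strip the surrounding parentheses -> the ratings count.
ratings = int(ratings_text.strip("()"))
print(stock, ratings)
```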
  8. Hi guys, I've been tinkering around with UBot more and more, and I'm trying to develop a scraping bot for myself, but I'm running into trouble. When I create a command to scrape a certain piece of information, it's all under the same div class, so it scrapes the information in bulk, like the following example (FYI: I tried using offsets, wildcards, etc., and either it doesn't work or it scrapes everything on the page): Fruits: Bananas, Apples, Watermelon Color: Yellow, Red, Yellow, Green Flavor: Sweet, Sour Shape: Long, Round, Oval So once I've got all this information saved as a variable
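Once the bulk text is in a variable, splitting each line on its first colon separates the labels from the values, which avoids fighting with offsets and wildcards entirely. A minimal Python sketch of that post-processing step (the sample data is the poster's example):

```python
# Bulk text as it comes back from the scrape, one "Label: values" per line.
bulk = """Fruits: Bananas, Apples, Watermelon
Color: Yellow, Red, Yellow, Green
Flavor: Sweet, Sour
Shape: Long, Round, Oval"""

fields = {}
for line in bulk.splitlines():
    # partition splits on the FIRST colon only, so values may contain commas.
    label, _, values = line.partition(":")
    fields[label.strip()] = [v.strip() for v in values.split(",")]

print(fields["Flavor"])
```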
  9. Why doesn't my UI button scrape the link? When I run set(#HotmailConfirmEmail, $page scrape("<", "<https://instagram.com"), "Global") on its own in UBot 4.2 run mode it works, but the same line inside the button does not: ui button("IG Master Scrape") { click(<title="Inbox">, "Left Click", "No") click(<onclick="onClkRdMsg(this, 'IPM.Note', 0, 0);">, "Left Click", "No") click(<id="lnkHdrreply">, "Left Click", "No") set(#HotmailConfirmEmail, $page scrape("<", "<https://instagram.com"), "Global") }
  10. 1: Pick a random business in a selected city and scrape a Yelp review. 2: Use the Spinrewriter API. 3: Log into a Google account (with an associated proxy). 4: Submit the spun review. Rinse, flush, repeat. I need to be able to use 20-50 accounts with 2-5 posts a day, for 2-3 months...
  11. I'm trying to scrape Twitter followers, but popular Twitter profiles have over 1 million followers, so I need to keep scrolling down for a very long time to get all of them. I want to: 1. Change the CSS of the page so the follower box is smaller, so I can scroll down faster. 2. Progressively delete followers as I scroll to keep the page more lightweight. I need someone to do a screen share with me to show me how to do it. How much per hour? Show me the experience you've had with this, and your email.
  12. Hi, I want to know if selective scraping of content is possible with UBot. For example, say I find a page of 100 Amazon products, but I want to scrape content from only 20 of those items. I mean some kind of checkbox system we could implement. If yes, how? Your help is appreciated.
  13. I tried this code; it works, but it scrapes everything: Remote Host xxxx IP Address xxxxx How do I get the IP only? http://www.lagado.com/proxy-test set(#captureIP, $scrape attribute($element offset(<tagname="p">, 7), "innertext"), "Global")
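One option is to keep scraping the whole paragraph and then extract just the address with an IPv4 pattern as a second step. A Python sketch of that regex step (the address below is a made-up example, since the post masks the real values):

```python
import re

# Example innertext of the scraped <p>; the real IP is masked in the post,
# so 203.0.113.42 (a documentation address) stands in for it here.
scraped = "Remote Host xxxx IP Address 203.0.113.42"

# Four dot-separated groups of 1-3 digits -> the IPv4 address only.
ip = re.search(r"\b(?:\d{1,3}\.){3}\d{1,3}\b", scraped)
print(ip.group() if ip else "no IP found")
```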
  14. Hello everyone, I just wrote an entry on a blog explaining how you can download all images from a profile, with pagination, using one line of bash code. This can be executed on any OS, including Windows if you set up the curl module. What it does is use curl to mass-scrape and paginate a tumblr profile: you get a list of URLs that are processed with a while loop inside the curl call, and the images are saved to the folder you run the command from. But first… you might need to install cURL on your server; don't worry, it's easy: sudo apt-get install c
  15. Help, I am trying to scrape specific data from multiple pages of the same website. I am able to get this to work, but on several different sites it only pulls data for the first 18-20 URLs, then stops pulling data while continuing to cycle through all of the remaining pages. I am also running into an issue where I need the blank lines either kept as nothing or replaced with a placeholder; otherwise, when the data is written to the table, it won't line up with the existing data in the table. I am really new to this, and any help would be appreciated. clear table(&data) clear list(%newp
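For the blank-lines problem, padding each scraped row to a fixed column count before writing it keeps the table aligned even when some fields come back empty. A Python sketch of the padding idea (the row data and the "N/A" placeholder are invented for illustration):

```python
# Rows as they might come back from the scrape; some fields are empty and
# one row is short a column.
rows = [
    ["Acme", "acme.com", "NY"],
    ["Beta", "", ""],           # fields the scrape came back empty on
    ["Gamma", "gamma.io"],      # a short row
]

NUM_COLS = 3
# Replace empty cells with a placeholder and pad short rows to NUM_COLS,
# so every row lines up when written to the table.
padded = [
    [(cell if cell.strip() else "N/A") for cell in row]
    + ["N/A"] * (NUM_COLS - len(row))
    for row in rows
]
print(padded)
```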
  16. Hi guys, first of all, let me introduce myself: my name is Cassio Lacerda, and this is my first post. I am a new UBot Studio user and I hope to find a new family here. I would like some simple help. I made a new bot to scrape jobs from a job portal; the bot is attached. I have some questions that I would be very happy if someone could help me with. 1) How do I extract exactly the URL of the job and add it to the list %job_url? Currently the HTML is coming along with it. 2) The job description list always has one item fewer than the other lists. It happens because there is no title f
  17. Hi, I want to scrape 4 fields of a webpage, but I want to show the scraped data in the UI section of the bot so that I can copy it easily. Now the issue is how this can be done, if possible. If it's not possible, is it possible to have a single text file like the example below? And is there any way I can have all 4 fields' scraped data in a single UI box, with the data organized by field? For example, if the webpage has 10 parts, 1 to 10, and each part has 4 fields (name, age, address, city), is it possible to have the data in this manner in a UI box or a text file? Example: Line 1 of text f
  18. Hello guys, what is the best way to "clean" a scraped innerHTML attribute? Basically I'm scraping the innerHTML of an empty, inline-styled div, from which I need to extract the inline-styled background-image URL: <div class="blablah" style="height:120px;background-image:url(http://somewhere.com/image.jpeg)"></div> I scraped the innerHTML of the parent of the blablah div because otherwise I didn't get what I needed, but now I need to clean it up a bit. Any tip is welcome! Thanks a lot, cheers,
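Since the URL sits inside the style attribute, a regex over the scraped innerHTML can pull it out directly, with no other cleanup needed. A Python sketch of the idea, using the div from the post:

```python
import re

# The scraped innerHTML from the post, verbatim.
inner_html = ('<div class="blablah" style="height:120px;'
              'background-image:url(http://somewhere.com/image.jpeg)"></div>')

# Capture whatever sits between "url(" and the closing ")".
match = re.search(r"background-image:\s*url\(([^)]+)\)", inner_html)
print(match.group(1) if match else "not found")
```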
  19. Hello UBot community! I've been using UBot for about 3 weeks now, and I'm more excited every day. So far, I've managed to put together some little bots that are actually helping me, and I'm thrilled about that and about what I think I'll be able to do in the future. Now I'm starting my first scraping venture and I feel a little lost. I've been searching/googling/youtubing, but I can't figure out which commands, or set thereof, I need to be using. If someone could guide me in the right direction, I'd really appreciate it. I'm not expecting anyone to spell it all out for me, just point me in the
  20. I just bought the HTTP Post plugin and am trying to update my old script used for scraping emails. In the old script everything works fine, but with HTTP Post there are some problems; one of them is JavaScript-hidden emails. add item to list(%http second url,$plugin function("HTTP post.dll", "$http get", "http://www.jc-design.com/contact-us.html", $plugin function("HTTP post.dll", "$http useragent string", "Random"), "http://google.com", "", 10),"Delete","Global") set(#http second url,%http second url,"Global") add list to list(%emails,$find regular expression(#http second url,"(?i)\\b[!#$%&\'*+./0-9
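The post's regex is cut off in the preview, so purely as an illustration, here is a simple (deliberately not RFC-complete) email pattern applied to a fetched page string, with order-preserving deduplication; the sample text and addresses below are made up:

```python
import re

# Hypothetical page text standing in for the fetched contact-us HTML.
page = ("Contact: sales@jc-design.com or support@jc-design.com, "
        "sales@jc-design.com")

# Simple email pattern: local part, "@", domain, dot, 2+ letter TLD.
emails = re.findall(r"[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}", page)
# dict.fromkeys deduplicates while preserving first-seen order.
unique = list(dict.fromkeys(emails))
print(unique)
```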
  21. Just finished plowing through Tutorial 5 on the UBot site, but unfortunately that video is horribly outdated and the source code of Google has changed significantly. There is no <class="l"> tag anymore, only the <class="r"> tag, which gives you a bunch of www.google.com/ssl= results for your list items. If all of the above means nothing to you, I guess my question is: what is an updated and efficient way of scraping Google results, using either the $scrape attribute or $page scrape parameters?
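One class-name-agnostic approach is to collect every anchor's href and filter out Google's own domains, rather than anchoring on a class that changes with each redesign. A Python sketch of that idea using the standard-library HTML parser (the sample markup below is invented):

```python
from html.parser import HTMLParser

# Collect external result links, skipping Google's own URLs, without
# depending on any particular class name.
class LinkCollector(HTMLParser):
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            href = dict(attrs).get("href", "")
            if href.startswith("http") and "google.com" not in href:
                self.links.append(href)

parser = LinkCollector()
parser.feed('<a href="https://example.com/page">Result</a>'
            '<a href="https://www.google.com/ssl=x">skip</a>')
print(parser.links)
```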
  22. Hi, I need some help with my code. I want to do a simple operation: add list to list(%all url,$scrape attribute(<href=r"{'#clean main url}">,"fullhref"),"Delete","Global") and it works in Code View, but after I switch to Node View the code is automatically changed to add list to list(%all url,$scrape attribute(<href=r#clean main url>,"fullhref"),"Delete","Global") and it doesn't work.
  23. Hello, I need a bot (original code to copy into my UBot Studio Professional license; I don't have time to develop it). The objective is to scrape results and property details on dubizzle.com. The bot will scrape all property details from a results page like https://dubai.dubizzle.com/property-for-sale/residential/apartment/?added__gte=7&bedrooms__gte=0&bedrooms__lte=12&has_photos=1&keywords=owner&listed_by=LA&places__id__in=90,63,193,194,74,&price__gte=500000&price__lte=5000000 Property details page type: https://dubai.dubizzle.com/property-for-sale/res
  24. Does anybody know how to scrape information from a Chrome extension? Or is it even possible?