semenoffalex
-
Content Count: 6
Rank: Newbie
-
System Specs
- OS: Windows 8
- Total Memory: 4 GB
- Framework: v4.0
- License: Standard Edition
Scraping elements from JavaScript/AJAX scrolled page
semenoffalex replied to semenoffalex's topic in Scripting
I'm not sure where to copy/paste that, so I've done it this way (see attachment). And it still doesn't work. Maybe I've been following the JavaScript advice the wrong way all along?
- 2 replies
-
Tagged with: JavaScript, AJAX (and 1 more)
-
I can also suggest CodeSchool's free courses on JavaScript as an easy starting point: http://www.codeschool.com/paths/javascript
-
Hi, all. I need to scrape results from a page that loads the necessary elements on scrolling. I know the id of the element that appears at the bottom of the page after several "scrolls", so it seems to me that I should use "run javascript", "wait for an element" or "focus". However, I've tried several recipes from this forum, and none of them worked. In particular, I tried the script window.scrollTo(0,document.body.scrollHeight); which has been suggested several times on this forum (here, here and here). It didn't help: I saw neither the scrolling nor any parsed elements. More sophisticated tricks…
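For what it's worth, a single window.scrollTo call usually fires before the page's AJAX handler has appended anything, so the scroll generally has to be repeated until the target element actually exists. Below is a minimal sketch of that polling idea in plain JavaScript; the makeLazyPage/scrollUntilFound names and the simulated page are my own illustration, not UBot Studio's API or this site's behavior:

```javascript
// Sketch: repeatedly "scroll" and poll until a target element exists.
// makeLazyPage simulates AJAX lazy-loading: each scroll appends one
// batch of results, and the target only appears after several batches.
// In a real browser you would replace scroll() with
// window.scrollTo(0, document.body.scrollHeight) and found() with
// document.getElementById("someId") !== null.

function makeLazyPage(batches, targetBatch) {
  let loaded = 0;
  return {
    scroll() { if (loaded < batches) loaded += 1; }, // one AJAX batch per scroll
    found() { return loaded >= targetBatch; },       // is the target present yet?
  };
}

// Poll until check() is true, scrolling between attempts; give up after maxTries.
async function scrollUntilFound(page, check, maxTries = 20, delayMs = 10) {
  for (let i = 0; i < maxTries; i++) {
    if (check()) return true;
    page.scroll();
    await new Promise((r) => setTimeout(r, delayMs)); // let the "AJAX" settle
  }
  return false;
}

const page = makeLazyPage(10, 5); // target appears after 5 scrolls
scrollUntilFound(page, () => page.found()).then((ok) => {
  console.log(ok ? "target element loaded" : "gave up"); // prints "target element loaded"
});
```

The key design point is the scroll-then-wait loop with a bounded retry count: a one-shot scroll races the AJAX request, while an unbounded loop hangs forever if the element never appears.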
-
Script collects all the data only after several runs
semenoffalex replied to semenoffalex's topic in Scripting
Thanks, Frank! That works perfectly.
-
Hi! I've written a simple script to get results from http://people.yandex.ru, but when I run it for the first time after launching UBot Studio it scrapes only the URLs from the first page. On the second run it gets data from the 1st and 2nd pages, and so on. Once it reaches the necessary number of pages/runs, it starts working correctly and scrapes all the links in one run. The workflow of the script is really simple: navigate -> type text -> click (search button) -> wait for browser event (page loaded) -> clear list (%urls) -> loop while (exists -> wait for browser event (page loaded) -
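The symptom described above (each run appearing to add one more page of results) is what state leaking between runs looks like, which is why the "clear list (%urls)" step belongs before the scrape loop. A small sketch of the clear-before-loop pattern in plain JavaScript; fetchPage and scrapeAll are my own illustrative stand-ins, not UBot Studio commands:

```javascript
// Sketch of why the result list must be cleared before every run:
// if results persist between runs, each run only *appears* to add
// one more page. fetchPage() stands in for one page of search
// results; the names here are illustrative only.

function fetchPage(n) {
  // pretend each results page yields three links
  return [1, 2, 3].map((i) => `http://example.com/p${n}/link${i}`);
}

const urls = []; // plays the role of %urls

function scrapeAll(pages) {
  urls.length = 0; // "clear list (%urls)" -- reset state before every run
  for (let n = 1; n <= pages; n++) {
    urls.push(...fetchPage(n));
  }
  return urls.length;
}

console.log(scrapeAll(4)); // 4 pages x 3 links: prints 12
console.log(scrapeAll(4)); // same count on a second run: no leftovers
```

If the reset line were removed, the second call would report 24 instead of 12, which is exactly the run-over-run accumulation described in the post.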