UBot Underground

a-harvey

Members
  • Content Count: 53
  • Joined

  • Last visited

Everything posted by a-harvey

  1. Thanks HelloInsomnia, I will get them to try that if the full install doesn't work. What is the process for becoming a known publisher, and where do I go to do that?
  2. I have asked the client to try the installer version; once he confirms the situation, I will let you know if that solved it.
  3. I have just used the installer on my machine to test that, and if that is the answer I will send it to the client. It does say unknown publisher? How do I fix this? Do I need to register somewhere as a publisher? I cannot see where in Ubot to add a name etc. Many thanks Andrew
  4. Hi, I am having some issues with a bot I developed for a client. It's the first one I have sent to a machine other than mine. They keep getting a "you are not authorised to use this" message, or something like that. I just compiled it with some settings but didn't use the installer function; is this the issue? I can run it fine on the machine it was produced on. The client has tried running as admin with no difference. Many thanks Andrew
  5. Hi Helloinsomnia, Yes, that's what I am currently doing: downloading the sheet as CSV and then pulling it into Ubot as a table. I just wondered if Ubot could create a table directly from a Google sheet, but obviously not; I don't see it as a command. Maybe it's something that could be added to the software at some stage, or as an addon. Thanks, I think the answer then is no, not possible. I will stick with the way it currently works, which is fine; I was just trying to take out another step in the process. Andrew
  6. Hi, Some advice if possible. I have the following script, which does exactly what I want: import a number of image URLs with names into a table, download the images, and rename them. Perfect.

     clear table(&images)
     create table from file("C:\\Users\\Andrew\\Desktop\\silverim.csv",&images)
     set(#row,0,"Global")
     create folder($special folder("Desktop"),"Silverstone Images")
     set(#path,"{$special folder("Desktop")}/Silverstone Images/","Global")
     loop($table total rows(&images)) {
         download file($table cell(&images,#row,1),"{#path}{$table cell(&images,#row,0)}.jpg")
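For readers following along outside Ubot, the rename logic in that loop can be sketched in plain Python (the sample rows and folder name below are hypothetical; in the real script, table column 0 holds the name and column 1 the image URL):

```python
import csv
import io

def build_download_jobs(csv_text, dest_dir):
    """Pair each image URL (column 1) with a renamed .jpg destination
    built from the name in column 0, mirroring the Ubot loop above."""
    jobs = []
    for name, url in csv.reader(io.StringIO(csv_text)):
        jobs.append((url, f"{dest_dir}/{name}.jpg"))
    return jobs

# Hypothetical rows standing in for silverim.csv
sample = "lot-001,http://example.com/a.jpg\nlot-002,http://example.com/b.jpg"
jobs = build_download_jobs(sample, "Desktop/Silverstone Images")
# jobs[0] -> ("http://example.com/a.jpg", "Desktop/Silverstone Images/lot-001.jpg")
```

A real bot would then fetch each `url` into its destination path, which is exactly what Ubot's `download file` does in the loop.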
  7. Thank you both. I did think the bulk URL download and then accessing each one was maybe the only way, and you have both confirmed my thoughts. I will try the next list item option, as I have not used that previously. Many thanks Andrew
  8. Hi, Can someone give me some idea how to tackle the following. I need to scrape the details from each of the lots listed on this page: http://www.silverstoneauctions.com/silverstone-classic-race-car-sale-2018/view_lots I currently scrape all the list URLs, but I need to scrape all the details on each page accessed via clicking the further details icon. The problem is that they don't have a next lot button to cycle through the lots, as most of my clients' sites do, so you need to go back to the list and move down each lot on the above page. How would that be coded, i.e. click the first one and then sc
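Without a next-lot button, the usual pattern is: scrape every detail-page URL from the list page once, then loop over that list and navigate to each URL in turn. A minimal Python sketch of the link-collection step (the `/lot-details/` href pattern and the sample markup are assumptions, not the real site's HTML):

```python
from html.parser import HTMLParser

class LotLinkParser(HTMLParser):
    """Collect hrefs of the 'further details' anchors on the lot list."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            attrs = dict(attrs)
            if "/lot-details/" in attrs.get("href", ""):  # assumed pattern
                self.links.append(attrs["href"])

# Hypothetical snippet standing in for the real list page
page = ('<a href="/lot-details/1">Further details</a>'
        '<a href="/lot-details/2">Further details</a>')
p = LotLinkParser()
p.feed(page)
for url in p.links:
    pass  # in the bot: navigate to each detail page and scrape it
```

In Ubot terms this is the "scrape all URLs into a list, then loop with next list item and navigate" approach the replies suggested.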
  9. This is what it pulls, all into row 0 in the CSV. The title is pulled first, then a comma, then the item URL and the image URL, so the item URL and image URL end up on the same line:

     ABNC Archives c.1860-70s Jose Galves, Peru Banknotes Intaglio PROOF Vignette Unc ,https://www.ebay.com/itm/ABNC-Archives-c-1860-70s-Jose-Galves-Peru-Banknotes-Intaglio-PROOF-Vignette-Unc/391986008221?hash=item5b442fd09d:g:KFkAAOSwn9VajbbX,https://i.ebayimg.com/thumbs/images/g/KFkAAOSwn9VajbbX/s-l225.jpg
     Franklin Banknote Co. Proof Intaglio Vignette Michigan State Arms c.1880 XF+
  10. Hi HelloInsomnia, or someone. I have used the code as above. When I scrape the data, Ubot shows 3 columns (debugger), but when it saves to the CSV it puts everything into one? Any reason it's doing that? Also, if you run it, I need to then move to the next page; the issue is the > arrow always stays, so if you use the click function it will move through but then locks in a cycle? Sorry, any help would be greatly appreciated. Andrew

      navigate("https://www.ebay.com/sch/m.html?item=350996870504&_ssn=archivesonline&_ipg=200&rt=nc","Wait")
      wait(4)
      clear list(%contai
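On the columns question: a CSV only gets three columns if each row is written as three values. The values below are hypothetical, but the pattern is the usual fix — write the three scraped lists row by row with `zip` so titles, item URLs and image URLs stay in separate columns:

```python
import csv
import io

# Three scraped lists of equal length (hypothetical values)
titles = ["Lot A", "Lot B"]
item_urls = ["https://example.com/a", "https://example.com/b"]
image_urls = ["https://example.com/a.jpg", "https://example.com/b.jpg"]

buf = io.StringIO()
writer = csv.writer(buf)
for row in zip(titles, item_urls, image_urls):
    writer.writerow(row)  # one row per item, three columns each
csv_text = buf.getvalue()
```

If everything lands in one column instead, the usual culprit is saving the three lists as one concatenated list (or one long string) rather than as per-row triples.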
  11. I think it's almost there, which is really great. OK, so if I ignore the error it does seem to pull all the data into table &data. I presume I just need to save &data to CSV and I will get what I need. Where or how do I download the image files? I can see it's pulling the URLs.
  12. OK, so I finally got the plugin, added it, and ran the code above with the client's ebay listing URL at the top. It runs but throws an error: "The given key was not present in the dictionary. Source: untitled bot -> Untitled Script -> run command"? I don't know the issue; my Ubot experience is fairly limited to scraping Auction Houses' URLs for their auctions.
  13. OK, will try with offset. I always thought we shouldn't use them, but maybe I'm wrong.
  14. Hi, I have been playing about with a client's ebay listings trying to get the following, but I seem to not be doing too well. https://www.ebay.com/sch/m.html?item=350996870504&_ssn=archivesonline&_ipg=400&rt=nc I have set their account above at 400-deep listings, as they will not have more than that running at any one time. I am trying to scrape from each listing: 1) Title 2) Listing URL 3) The main image, or images if multiple, once each is clicked into. Having issues: when scraping the above page, I can get the URLs, but the title using the class pulls in every t
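One way around a title selector that pulls in every title on the page is to capture each title and its URL from the same element, so the two lists can never drift out of step. A sketch with Python's built-in parser (the `lvtitle` class name and sample markup are assumptions, not eBay's actual HTML):

```python
from html.parser import HTMLParser

class ListingParser(HTMLParser):
    """Collect (title, url) pairs from anchors marked as listing titles."""
    def __init__(self):
        super().__init__()
        self.items = []
        self._in_title = False
        self._href = ""

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "a" and "lvtitle" in attrs.get("class", ""):  # assumed class
            self._in_title = True
            self._href = attrs.get("href", "")

    def handle_data(self, data):
        if self._in_title and data.strip():
            self.items.append((data.strip(), self._href))
            self._in_title = False

# Hypothetical snippet: two listings plus an unrelated anchor
page = ('<a class="lvtitle" href="https://ebay.com/itm/1">First listing</a>'
        '<a class="other" href="#">Sponsored</a>'
        '<a class="lvtitle" href="https://ebay.com/itm/2">Second listing</a>')
p = ListingParser()
p.feed(page)
```

Scoping the title scrape to the same element (or container) as the link is the equivalent of narrowing the attribute match in Ubot, rather than matching every title class on the page.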
  15. Many thanks. For some reason I didn't get a notification that you had responded, so sorry for the late thank you. I will do as explained, and great tip on the URL, as a lot of my client sites have an initial URL that then pushes to an SEO one, which is the best one to scrape. Kind regards Andrew
  16. Hi, I have a client site where I am trying to scrape the lot number, title and URL for each lot. The site only seems to load more lots as you scroll down the page. https://www.forumauctions.co.uk/Important-Books-Western-Manuscripts-and-Works-on-Paper-Day-One/15-11-2016?gridtype=listview What action in Ubot will enable this scroll effect so it loads more data? At present it will only load and scrape the data of the lots that have already loaded, so I have to manually scroll down the page to the end and then activate the bot. Also, the URLs it's scraping from, say, Full Details for lot 1 are: https
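For scroll-to-load pages, the usual bot pattern is: scroll to the bottom, wait, re-count the lots, and repeat until the count stops growing. The control flow, sketched in Python with a mock loader standing in for the browser (the page sizes here are invented):

```python
def scroll_until_stable(load_page, max_rounds=50):
    """Keep triggering 'scroll to bottom' until no new lots appear."""
    seen = 0
    items = []
    for _ in range(max_rounds):
        items = load_page()      # in a real bot: scroll, wait, re-scrape
        if len(items) == seen:   # nothing new loaded -> all lots present
            return items
        seen = len(items)
    return items

# Mock page that adds 10 lots per "scroll", up to 35 in total
state = {"n": 0}
def fake_load():
    state["n"] = min(state["n"] + 10, 35)
    return list(range(state["n"]))

lots = scroll_until_stable(fake_load)
```

In Ubot the scroll itself is typically done by running a small piece of JavaScript against the page, with a wait between rounds so the new lots have time to load before re-scraping.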
  17. OK, managed to get it to work by just setting the loop to run x times depending on the number of additional pages (i.e. 9 if there are 9) and getting it to click the next button. Works great. Again I needed to slow it down, as it was running faster than the page would load and so missing data. All is good. Many thanks for your help; it focused my mind on how it could work, so I have a better understanding now. Andrew
  18. Bill, Managed to get this to add the code to each line now; I needed to take the code from here, make it plain text, and then add it in code view. However, it just says there is an error in the code, fix before you switch to node view, so I'm not sure what the issue is.
  19. It's always the case... hope it solves the issue. There is a new version, so I would download it.
  20. No, it's still not pulling the next batch of titles in. Really strange, as the URLs and the lot numbers are fine, so I see no reason why it's not pulling them.
  21. I crashed Ubot and lost the file, so I need to rebuild it. I think it could be just not loading that data in time, so I will try a slow-down function and see, as there is no reason for it not to collect the additional data. Many thanks for all your help; it's really helped me bend my mind around it. I will let you know. Kind regards Andrew
  22. Hi Bill, I don't seem to be able to paste your code in; it posts as a single line. Do you have this as a bot file I can open? I have had a Ubot crash and the last one I did above has vanished..ahahhah so I will recreate it. Maybe the delay until fully loaded is the issue; maybe because it's going so quickly the titles are not loading in time. I will try to slow down my version, but if you have it working and have it in code form I can open, that would be great. Thanks both of you for your help, really appreciate this. Also Bill, your version enables me to see another way of doing it, which is also g
  23. Many thanks Stanf, that really helped. I knew the software was able to do what I wanted, hence why I purchased it; I'm just trying to bend my mind around it. This is what I ended up with. Something seems odd! It's pulling 348 lot numbers and URLs but only adding the first 237 descriptions (this is when I set it to 250 per page), so it's only pulling the first page of descriptions. This seems strange; any idea why it's doing that, as the lot numbers and URLs are all being pulled? The mylist.txt is holding all the URLs; is this correct? I am using a dedicated server with 16gb of memory and a pip
  24. Set the loop at 23, as this is most likely the highest number of loops it will need; when not set, it didn't loop.
  25. Managed to get it to loop through all the pages - great. However, it's only pulling the first set of page data, so I must be doing something wrong after it loops.

      clear list(%scape)
      clear list(%title)
      clear list(%Lot number)
      navigate("http://www.the-saleroom.com/en-gb/auction-catalogues/1818-auctioneers/catalogue-id-sr1810075/search-filter?page=1&pageSize=240","Wait")
      clear table(&saleroomscrape)
      add list to list(%Lot number,$scrape attribute(<innertext=w"Lot *">,"innertext"),"Delete","Global")
      add list to table as column(&saleroomscrape,0,0,%Lot number)
      add list to list(%tit
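A common cause of "only the first page of data" is clearing or rebuilding the lists inside the page loop instead of once before it, so each iteration wipes out the previous page's results. The accumulation pattern, sketched in Python with mock pages (in Ubot the `clear list` commands would sit before the loop, with only the navigate and scrape steps inside it):

```python
def scrape_all_pages(fetch_page, page_count):
    """Append each page's rows to one running list instead of
    overwriting it, so later pages are not lost."""
    lot_numbers, titles = [], []
    for page in range(1, page_count + 1):
        rows = fetch_page(page)  # in a real bot: navigate + scrape
        for lot, title in rows:
            lot_numbers.append(lot)
            titles.append(title)
    return lot_numbers, titles

# Mock catalogue: two pages of two lots each (hypothetical values)
pages = {1: [("Lot 1", "Desc 1"), ("Lot 2", "Desc 2")],
         2: [("Lot 3", "Desc 3"), ("Lot 4", "Desc 4")]}
lots, titles = scrape_all_pages(pages.get, 2)
```

The "Delete" (de-duplicate) option on `add list to list` can also silently drop later rows if any values repeat across pages, which is worth checking alongside the clear-list placement.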