Everything posted by allcapone1912
-
I just bought the HTTP POST plugin and am trying to update my old script used for scraping emails. In the old script everything works fine, but with HTTP POST there are some problems; one of them is emails hidden by JavaScript.

add item to list(%http second url, $plugin function("HTTP post.dll", "$http get", "http://www.jc-design.com/contact-us.html", $plugin function("HTTP post.dll", "$http useragent string", "Random"), "http://google.com", "", 10), "Delete", "Global")
set(#http second url, %http second url, "Global")
add list to list(%emails, $find regular expression(#http second url, "(?i)\\b[!#$%&\'*+./0-9
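The email-scraping step above (fetch a page over HTTP, then run a regular expression over the response body) can be sketched in Python. The email pattern here is a deliberately simplified stand-in for the plugin's full regex, and the sample HTML is illustrative; a real run would fetch the page first:

```python
import re

# Simplified email pattern; the plugin's full pattern (truncated above) is longer.
EMAIL_RE = re.compile(r"(?i)\b[a-z0-9._%+-]+@[a-z0-9.-]+\.[a-z]{2,}\b")

def scrape_emails(html):
    """Return a de-duplicated list of email addresses found in html."""
    seen = []
    for match in EMAIL_RE.findall(html):
        if match not in seen:
            seen.append(match)
    return seen

# A real run would first fetch the page, e.g. with urllib.request.urlopen(url).
sample = '<a href="mailto:info@example.com">info@example.com</a>'
print(scrape_emails(sample))  # ['info@example.com']
```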
-
Hi, I currently have a problem scraping Flash sites. My code scrapes all URLs from a website:

navigate("http://sutroarchitects.com/", "Wait")
wait(5)
add list to list(%all url, $scrape attribute(<href=r"">, "fullhref"), "Delete", "Global")

but it is not working for a site like http://sutroarchitects.com/. So, how can I scrape it? If I check the website page source, it shows a lot of URLs.
-
The problem is that I don't need the general <href=#clean main url>; I need the regular-expression form <href=r#clean main url>. With your example, I need something like this:

navigate("http://www.ubotstudio.com/resources", "Wait")
wait for browser event("Everything Loaded", "")
wait(1)
set(#clean main url, "ubotstudio.com", "Global")
add list to list(%all url, $scrape attribute(<href=r#clean main url>, "fullhref"), "Delete", "Global")
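The effect of the <href=r#clean main url> selector, keeping only links whose href matches a domain regex, can be sketched in Python. The HTML snippet and the "ubotstudio.com" pattern are illustrative:

```python
import re

def scrape_hrefs(html, domain_pattern):
    """Collect href values whose URL matches domain_pattern (a regex)."""
    hrefs = re.findall(r'href="([^"]+)"', html)
    wanted = re.compile(domain_pattern)
    return [h for h in hrefs if wanted.search(h)]

html = ('<a href="http://www.ubotstudio.com/resources">a</a>'
        '<a href="http://other.example.net/page">b</a>')
print(scrape_hrefs(html, "ubotstudio.com"))
# ['http://www.ubotstudio.com/resources']
```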
-
Hi, I need some help with my code. I want to run a simple operation:

add list to list(%all url, $scrape attribute(<href=r"{'#clean main url}">, "fullhref"), "Delete", "Global")

It works in Code View, but after I switch to Node View the code is automatically changed to:

add list to list(%all url, $scrape attribute(<href=r#clean main url>, "fullhref"), "Delete", "Global")

and it doesn't work.
-
When I bought UBot the first time it was too difficult for me, and they refunded my money without any questions in the second week of use. So the problem is not with UBot but on your side; I assume UBot didn't refund your money because of fraud/illegal activity.
-
New UBot Version Creates Too Many Temp Files/Folders
allcapone1912 replied to allcapone1912's topic in General Discussion
In both.
-
New UBot Version Creates Too Many Temp Files/Folders
allcapone1912 posted a topic in General Discussion
After updating to the new version I get an unusual problem: when scripts are running, a lot of temp files/folders are created. One script ran for 10 hours, and the temp folder held up to 50 GB of new files/folders. I've tried with Chrome 21 and 39. Can someone give me an idea why this happens?
-
After Upgrade Change Proxy And User Agent Not Working
allcapone1912 replied to allcapone1912's topic in General Discussion
It works only until you save and close the file; the next time you open it, the same problem appears.
-
After Upgrade Change Proxy And User Agent Not Working
allcapone1912 replied to allcapone1912's topic in General Discussion
Chrome 39 is not working at all; I set 39, but after I close it, it resets automatically to 21. Also, very strange things happen with Change Proxy: if I write the code from Node View it works fine, but if I save/close it or go to Code View, then Change Proxy and User Agent don't work.
-
Hi, after updating to 5.9 my scripts don't work:

set user agent($random list item($list from file("D:\\UBot Studio\\List\\UserName.txt")))
change proxy($random list item($list from file("D:\\UBot Studio\\List\\Proxy.txt")))

It pops up an error each time I try to change the proxy or user agent; on the old version everything works fine. Can someone help me with this, or should I download the old version?
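The two calls above each load a text file as a list and pick one random line from it. A Python sketch of that behavior (the file path in the usage comment is the one from the post and is assumed to exist):

```python
import random

def random_list_item(path):
    """Mimic $random list item($list from file(path)):
    pick a random non-empty line from the file."""
    with open(path, encoding="utf-8") as f:
        lines = [line.strip() for line in f if line.strip()]
    return random.choice(lines)

# Usage (assumes the file exists):
# proxy = random_list_item(r"D:\UBot Studio\List\Proxy.txt")
```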
-
Connect To MySQL From Another PC In LAN
allcapone1912 replied to allcapone1912's topic in General Discussion
I just found out why it wasn't working: add a new user to SQL with access granted for any hostname, and it works fine now.
-
Connect To MySQL From Another PC In LAN
allcapone1912 replied to allcapone1912's topic in General Discussion
That's the problem. I've changed the host name, and from PC2, if I type "server" in a browser it loads my localhost page from PC1, but with UBot it's not working; mysql:server=\'server\' does not work.
-
I currently have a bot that works directly with my localhost, and I want to use it from another PC on the LAN:

plugin command("DatabaseCommands.dll", "connect to database", "mysql:server=\'localhost\';uid=\'***\'; pwd=\'***\'; database=\'***\'; port=\'3306\'; pooling=false;Convert Zero Datetime=True")

What should I replace "localhost" with to make it work from another PC on the LAN? The server is installed on 192.168.1.100.
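A sketch of the fix in Python: the connection string stays the same except that "localhost" is replaced by the LAN IP of the machine running MySQL (192.168.1.100 in the post). The MySQL user must also be created with a wildcard host ('user'@'%') so remote connections are accepted. The credentials here are placeholders:

```python
def build_dsn(host, uid, pwd, database, port=3306):
    """Build the DSN in the same format the DatabaseCommands plugin expects."""
    return ("mysql:server='{0}';uid='{1}'; pwd='{2}'; database='{3}'; "
            "port='{4}'; pooling=false;Convert Zero Datetime=True"
            .format(host, uid, pwd, database, port))

# From another PC on the LAN, point at the server's IP instead of localhost:
print(build_dsn("192.168.1.100", "user", "secret", "mydb"))
```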
-
Thanks for your help, I will try.
-
I just tested a simple demo to get the total number of rows and columns:

ui open file("file path", #uifile)
plugin command("Bigtable.dll", "Large Table From file", #uifile, "tablefromfile")
alert("rows:{$plugin function("Bigtable.dll", "Large Table Total Rows", "tablefromfile")}cols:{$plugin function("Bigtable.dll", "Large Table Total Columns", "tablefromfile")}")

I have a powerful PC but still cannot use the plugin for my file: http://s8.postimg.org/fgkmncepx/error.png
-
Does this plugin have some maximum limit on rows and columns? I have a CSV file with 14 million rows that I would like to split into multiple files, but it's not working.
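Splitting a 14-million-row CSV does not strictly need a table plugin: the file can be streamed and split chunk by chunk without loading it into memory. A minimal Python sketch (the output file prefix and chunk size are illustrative):

```python
import csv

def split_csv(src, rows_per_file, prefix="part"):
    """Stream src and write chunks of rows_per_file data rows each,
    repeating the header in every part. Returns the number of parts written."""
    with open(src, newline="", encoding="utf-8") as f:
        reader = csv.reader(f)
        header = next(reader)
        part, out, writer, count = 0, None, None, 0
        for row in reader:
            if count % rows_per_file == 0:
                if out:
                    out.close()
                part += 1
                out = open("{0}{1}.csv".format(prefix, part), "w",
                           newline="", encoding="utf-8")
                writer = csv.writer(out)
                writer.writerow(header)
            writer.writerow(row)
            count += 1
        if out:
            out.close()
    return part

# Usage: split_csv("big.csv", 1000000) writes part1.csv, part2.csv, ...
```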
-
Scrape With Multiple Threads And Update To SQL
allcapone1912 replied to allcapone1912's topic in General Discussion
Thanks for this example. I've never used the Large SQL plugin, so I'll need some time to understand it.
-
Scrape With Multiple Threads And Update To SQL
allcapone1912 replied to allcapone1912's topic in General Discussion
Thanks for your reply; I will make a new script following your steps.
-
Hi, I currently have code that scrapes info from hotfrog.sg:

plugin command("DatabaseCommands.dll", "connect to database", "mysql:server=\'***\';uid=\'***\'; pwd=\'***\'; database=\'***\'; port=\'3306\'; pooling=false;Convert Zero Datetime=True")
{
    plugin command("DatabaseCommands.dll", "query with results", "SELECT * FROM hotfrog WHERE submit = \'\' ORDER BY id ASC LIMIT 1000", &url)
    add list to list(%url, $plugin function("TableCommands.dll", "$list from table", &url, "Column", 1), "Delete", "Global")
}
thread {
    increment(#threads)
    in new browser {
        set(#url1, $next
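The pattern in the snippet above (pull a batch of unprocessed rows, then scrape each URL in its own thread and write the result back) can be sketched with Python's queue and threading modules. The database step is replaced by a plain results list here, and scrape() is a placeholder for the real page fetch:

```python
import queue
import threading

def scrape(url):
    # Placeholder for the real fetch/parse step.
    return "scraped:" + url

def worker(jobs, results, lock):
    # Drain the shared job queue until it is empty.
    while True:
        try:
            url = jobs.get_nowait()
        except queue.Empty:
            return
        data = scrape(url)
        with lock:                # serialize shared-state updates
            results.append(data)  # real code would UPDATE the SQL row here
        jobs.task_done()

urls = ["http://example.com/1", "http://example.com/2", "http://example.com/3"]
jobs = queue.Queue()
for u in urls:
    jobs.put(u)
results = []
lock = threading.Lock()
threads = [threading.Thread(target=worker, args=(jobs, results, lock))
           for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(sorted(results))
```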
-
Scrape Website URL From Tripadvisor.co.uk
allcapone1912 replied to allcapone1912's topic in General Discussion
Thanks for your code; good idea to go mobile.
-
Scrape Website URL From Tripadvisor.co.uk
allcapone1912 replied to allcapone1912's topic in General Discussion
I've used this method in the past and didn't always get the right URL; sometimes the email has a free domain (Yahoo, Gmail, or another domain that doesn't match the URL).
-
Scrape Website URL From Tripadvisor.co.uk
allcapone1912 replied to allcapone1912's topic in General Discussion
I don't want to get the current URL; I need to get the company website: http://s28.postimg.org/4x7zzt2gt/Untitled.png
-
Hi, I cannot get the website URL from tripadvisor.co.uk. For example: http://www.tripadvisor.co.uk/Restaurant_Review-g1016887-d1847312-Reviews-Wild_Boar_Restaurant_at_Sitwell_Arms_Hotel-Renishaw_Derbyshire_England.html
The result should be "http://www.sitwellarms.com/restaurant.html#_=_". I don't see the URL in the site code.