UBot Underground

HTTP Get Bug: uBot Request Cache? Rotating IP Service



OK, I am un-deleting this since there seems to be no solution. This links back to an issue I found before, which now no longer works either. Here is my old post: http://network.ubotstudio.com/forum/index.php/topic/22259-stormproxies-and-http-post/

 

So basically uBot seems to have some sort of cache when you use either the HTTP Post plugin's GET or Heopas' HTTP Get; GET requests from both plugins have the same problem. What happens is that if you use a back-connect rotating proxy service, you constantly get the same cached source code returned to you.

 

Here is the setup. I have a single loop that runs 50 times. It sets #source to the HTTP GET response. The HTTP GET uses a proxy that hands out a different IP address each time you connect to it. I then add #source to a list to keep track of it. The page I am requesting is "http://lumtest.com/myip.json", which reports back JSON about your IP for that request.

 

loop 50 

set #source to http get (using the single back connect proxy)

add #source to list

 

If I run this, all 50 items in the list will be exactly the same: every one of the 50 entries reports the same IP address.

 

Now here is the next issue. If I use this exact same code with no change other than adding a wait of 120 seconds, the problem is fixed. A 5-second wait does not help, nor does 10 or 30 seconds; it only works if I wait 120 seconds. With a 120-second wait, each list item from the loop is a unique IP, just as the service advertises.

 

loop 50 

set #source to http get (using the single back connect proxy)

add #source to list

wait 120

 

So let's test the service as well. If I just make curl requests from the terminal, it gives a unique IP every time. If I use their service with Python and scrape, it works fine. It is ONLY with uBot that each request is not unique. There is some sort of cache being kept per request per proxy address, because if we use normal proxies, of course every request works fine.
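For comparison, here is a minimal sketch of the kind of outside-of-uBot check I mean, assuming Python's requests library. The gateway address and credentials below are placeholders; substitute whatever your back-connect service gives you.

import requests

PROXY = "http://user:pass@gateway.example.com:22225"  # hypothetical rotating gateway, not a real endpoint
proxies = {"http": PROXY, "https": PROXY}

ips = []
for _ in range(10):
    # Every request goes out through the rotating gateway, so the service
    # should hand back a different exit IP each time.
    r = requests.get("http://lumtest.com/myip.json", proxies=proxies, timeout=30)
    ips.append(r.json()["ip"])

print(ips)
print(len(set(ips)), "unique IPs out of", len(ips))

With rotation working as the service describes, the number of unique IPs should be close to the number of requests.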

 

I have also tested this with two other back-connect services; it is always the same cached-response error.

 

So there is some sort of cache nonsense on each of these requests: it assumes you will not request the same webpage through the same proxy address (back connect, remember) and somehow expect a new response.

 

Edit: I will mention that if I use the same loop above but instead use ExBrowser with a new Chrome instance on each loop, it gets a new proxy IP like it should, with no problems. Open Chrome, get the page, close Chrome; it works with no delay.


Use Python or create your own plugin. I already wrote that there are problems in the plugins.

http://network.ubotstudio.com/forum/index.php/topic/22498-heopas-http-get-vs-aymen-http-get/ (I want to warn against losing a lot of time)

There are bugs everywhere, and I make mistakes too. But when you write your own plugin/code, you have control, and you can see the logs and fix the situation.


It's not a bug. You are simply not clearing the data at the beginning of your loop. The way you have it now, when your script loops, the response data from the previous request is sent along with the next GET request. Either clear the cookies and headers or use the thread command to run a new instance on each loop. See the code below:

ui text box("Proxy:", #Proxy)
clear list(%ips)
loop(20) {
    set(#running, "true", "Global")
    thread {
        testProxy()
        set(#running, "false", "Global")
    }
    loop while($comparison(#running, "=", "true")) {
        wait(0.2)
    }
}
define testProxy {
    plugin command("HTTP post.dll", "http auto redirect", "Yes")
    plugin command("HTTP post.dll", "http max redirects", 5)
    set(#soup, $plugin function("HTTP post.dll", "$http get", "https://www.whatsmyip.com/", "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/70.0.3538.77 Safari/537.36", "", #Proxy, ""), "Local")
    add item to list(%ips, $plugin function("File Management.dll", "$Find Regex First", #soup, "(?<=id=\"shownIpv4\">).*?(?=<\\/p>)"), "Don\'t Delete", "Global")
    set(#soup, $nothing, "Local")
}

I don't have Storm Proxies but it should work when you run it.



This is not the problem. I have tested it with clearing the local variables as well, even though that should not matter to begin with, because each new GET should overwrite the last one and each one is a local variable inside its own define, like you showed. But yes, I did try that too. I also used clear objects and clear headers with HTTP Post. I tried making the GETs more dynamic by appending random variables to the URLs and using randomly generated referrers as well. Nothing has worked. Also, if the problem were not clearing the variable each time, why would it work only with a 120-second wait between requests?

 

What I have done now is use Python within uBot, with the Python requests library. With this, the whole thing works as it should and each request gets its own IP; no 120-second wait is needed. I made no other changes to any testing scripts.
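If anyone wants to go the same route, here is a rough sketch of the kind of helper I mean, again assuming the requests library. The function name, gateway address, and credentials are placeholders, and how you call it from uBot (Python plugin, external script, etc.) is up to you.

import requests

def get_via_proxy(url, proxy):
    # Hypothetical helper: a fresh requests session per call, so every GET
    # opens a new connection through the rotating gateway and nothing is
    # reused between loop iterations.
    with requests.Session() as session:
        session.proxies = {"http": proxy, "https": proxy}
        response = session.get(url, timeout=30)
        return response.text

# Example call; swap in the gateway details from your own service.
# body = get_via_proxy("http://lumtest.com/myip.json", "http://user:pass@gateway.example.com:22225")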

 

The https://luminati.io proxy service, which is back-connect rotating, has a free trial. You can test all of this yourself and see if you can find a solution.

