OkHttp may aggressively repeat your requests on a slow or unreliable connection until one succeeds. This applies to GET, POST and every other type of request (except requests whose body is a StreamedRequestBody, which implements UnrepeatableRequestBody). You will not be notified about these silent retries, so the server can receive multiple requests even though you made only one. In the end you simply get a response after several retries, or the call fails — which doesn’t rule out that the request still made it to the server.
Think about the consequences
Imagine someone using your chat application on a flaky network connection. They send a message, which executes a POST request over Retrofit (using OkHttp as its HTTP client). OkHttp silently retries, the server receives three POST requests, and the recipient suddenly gets three “Hi!” messages.
Is this a bug or a problem in OkHttp? No. OkHttp is making its best effort to deliver the message. Your API should be able to determine whether a request was already delivered and handle it accordingly — e.g. by returning an error, or by confirming the delivery without performing any changes (since the operation was already executed).
A look into the source code
The retry logic is implemented in RetryAndFollowUpInterceptor.java. The method isRecoverable() determines whether a failed request can be recovered, and it is called from a while(true) loop inside the interceptor’s intercept() method.
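To make the shape of that loop concrete, here is a simplified, standalone paraphrase — this is not the actual OkHttp source, and the names (Attempt, executeWithRetries, the maxAttempts cap) are stand-ins invented for the sketch:

```java
import java.io.IOException;
import java.io.UncheckedIOException;

// Simplified sketch of the retry pattern described above: a while (true)
// loop that silently repeats the request whenever isRecoverable(...) says
// the failure is safe to retry. Not the real OkHttp implementation.
public class RetryLoopSketch {

    interface Attempt {
        String send() throws IOException;
    }

    static String executeWithRetries(Attempt attempt, int maxAttempts) {
        int tries = 0;
        while (true) { // the "while (true) cycle" from the interceptor
            try {
                return attempt.send(); // may silently repeat the request
            } catch (IOException e) {
                tries++;
                if (!isRecoverable(e) || tries >= maxAttempts) {
                    throw new UncheckedIOException(e); // give up, surface the failure
                }
                // otherwise: loop around and retry silently
            }
        }
    }

    // Stand-in for RetryAndFollowUpInterceptor.isRecoverable();
    // the sketch treats every IOException as recoverable.
    static boolean isRecoverable(IOException e) {
        return true;
    }
}
```

The caller only ever sees the final outcome — exactly why the duplicates described above go unnoticed in application code.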
Can or should I disable request retrying?
There was a long discussion about this in issue #1043 and some related issues on Github. This was later resolved by merging pull request #1259 and there was also a change in some retry behaviour recently in pull request #2479 (Change: Don’t recover if we encounter a read timeout after sending the request, but do recover if we encounter a timeout building a connection). This fix is part of OkHttp 3.3.0 and decreases (but probably doesn’t eliminate) the chances of an unnecessary request retry.
You can only disable request retrying globally, for a whole OkHttpClient instance. This is done by using OkHttpClient.Builder and setting retryOnConnectionFailure to false. The documentation of that setting says:
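With the OkHttp 3 API this looks as follows (a configuration fragment, assuming the okhttp3 dependency is on the classpath):

```java
import okhttp3.OkHttpClient;

// A client that will NOT silently retry on connection failures.
OkHttpClient client = new OkHttpClient.Builder()
        .retryOnConnectionFailure(false)
        .build();
```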
Configure this client to retry or not when a connectivity problem is encountered. By default, this client silently recovers from the following problems:
- Unreachable IP addresses. If the URL’s host has multiple IP addresses, failure to reach any individual IP address doesn’t fail the overall request. This can increase availability of multi-homed services.
- Stale pooled connections. The ConnectionPool reuses sockets to decrease request latency, but these connections will occasionally time out.
- Unreachable proxy servers. A ProxySelector can be used to attempt multiple proxy servers in sequence, eventually falling back to a direct connection.
Set this to false to avoid retrying requests when doing so is destructive. In this case the calling application should do its own recovery of connectivity failures.
I think the main cause of retries on slow and unreliable connections is the second item — stale pooled connections.
You shouldn’t disable retries for all requests. Instead, derive a second client from your OkHttpClient instance — via clone() in OkHttp 2, or newBuilder() in OkHttp 3 — which is supported and quite “cheap”. Then use this derived, modified OkHttpClient instance only for that specific call.
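In OkHttp 3 this derivation is done with newBuilder(), which shares the connection pool and dispatcher with the original client (again assuming the okhttp3 dependency):

```java
import okhttp3.OkHttpClient;

OkHttpClient defaultClient = new OkHttpClient();

// Derived client: shares pools with defaultClient, but retries are off.
// Use it only for calls where a silent retry would be destructive.
OkHttpClient noRetryClient = defaultClient.newBuilder()
        .retryOnConnectionFailure(false)
        .build();
```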
Fix the real problem
Disabling the silent retries may help in some cases, but it is basically hiding the real problem. Imagine your application sends a request to the server, the server processes it, makes some changes in its database, and sends a response back — but the internet connection cuts off right before the response arrives. You get an exception or a failure callback in your code, and you cannot determine whether the request made it to the server or not. Eventually your application notifies the user that the request failed, and you retry it. If your API is not ready for this, it will create duplicates or otherwise end up in a wrong state.
GET requests should be idempotent, so retrying them shouldn’t cause any problems. POST requests need to be checked on the server side or made idempotent too. You could detect duplicates based on a unique GUID included by the client application, so that the server does not execute the operation multiple times for the same request. Delivering something exactly once is more complicated than most developers think.
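The server-side check can be sketched like this, assuming the client attaches a unique request ID (the GUID mentioned above) to every POST — all names here are hypothetical:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical server-side handler that makes a POST idempotent by
// remembering which request IDs it has already processed. A retried
// request with the same ID gets the stored response back instead of
// executing the operation a second time.
public class IdempotentHandler {
    private final Map<String, String> processed = new ConcurrentHashMap<>();

    public String handlePost(String requestId, String message) {
        // computeIfAbsent runs the operation only for the first delivery;
        // silent retries with the same requestId return the cached result.
        return processed.computeIfAbsent(requestId, id -> deliver(message));
    }

    // Stand-in for the real side effect (e.g. storing a chat message).
    private String deliver(String message) {
        return "delivered:" + message;
    }

    public int deliveredCount() {
        return processed.size();
    }
}
```

With this scheme, the three retried “Hi!” POSTs from the chat example all carry the same GUID, so the message is stored once and the retries are harmless.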