Follow-up to c4e6968127
When a new transfer is created, as a result of an acknowledged push,
that transfer needs a download buffer allocated.
Closes #5590
This avoids using a pair of TCP ports to provide wakeup functionality
for every multi instance on Windows, where socketpair() is emulated
using a TCP socket on loopback which could in turn lead to socket
resource exhaustion.
Reviewed-by: Gergely Nagy
Reviewed-by: Marc Hörsken
Closes #5397
Now that all functions in select.[ch] take timediff_t instead
of the limited int or long, we can remove type conversions
and related preprocessor checks to silence compiler warnings.
Avoiding conversions from time_t was already done in 842f73de.
Based upon #5262
Supersedes #5214, #5220 and #5221
Follow-up to #5343 and #5479
Closes #5490
... and free it as soon as the transfer is done. It removes the extra
alloc when a new size is set with setopt() and reduces memory for unused
easy handles.
In addition: the closure_handle now doesn't use an allocated buffer at
all, but instead a stack-based one of the smallest supported size.
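For reference, the setopt() in question is presumably CURLOPT_BUFFERSIZE; a
minimal sketch of an application asking for a bigger download buffer, which
with this change only gets allocated once the transfer actually needs it:

    #include <curl/curl.h>

    int main(void)
    {
      CURL *curl = curl_easy_init();
      if(curl) {
        curl_easy_setopt(curl, CURLOPT_URL, "https://example.com/");
        /* ask for a ~100KB receive buffer; the buffer itself is now
           allocated when the transfer starts and freed when it is done */
        curl_easy_setopt(curl, CURLOPT_BUFFERSIZE, 102400L);
        curl_easy_perform(curl);
        curl_easy_cleanup(curl);
      }
      return 0;
    }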
Closes #5472
A common set of functions instead of many separate implementations for
creating buffers that can grow when appending data to them. Existing
functionality has been ported over.
In my early basic testing, the total number of allocations seems to be
roughly the same as before, possibly a few less.
See docs/DYNBUF.md for a description of the API.
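A minimal sketch of the intended call pattern, going by the function names
listed in docs/DYNBUF.md (this is library-internal API, not for
applications; see that document for the authoritative description):

    #include "dynbuf.h"  /* library-internal header */

    static CURLcode example_append(struct dynbuf *buf, const char *host)
    {
      CURLcode result;
      /* the second argument caps how large the buffer may ever grow */
      Curl_dyn_init(buf, 8192);
      result = Curl_dyn_add(buf, "Host: ");
      if(!result)
        result = Curl_dyn_addf(buf, "%s\r\n", host);
      if(result)
        return result;  /* allocation failed while appending */
      /* Curl_dyn_ptr()/Curl_dyn_len() give access to the data and
         Curl_dyn_free() releases it when done */
      return CURLE_OK;
    }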
Closes #5300
More connection cache accesses are protected by locks.
CONNCACHE_* is a better prefix for the connection cache lock macros.
Curl_attach_connnection: now called as soon as there's a connection
struct available and before the connection is added to the connection
cache.
Curl_disconnect: now assumes that the connection is already removed from
the connection cache.
Ref: #4915
Closes #5009
- If an easy handle is owned by a multi different from the one specified
then return CURLM_BAD_EASY_HANDLE.
Prior to this change, I assume such user error could cause corruption.
Closes https://github.com/curl/curl/pull/5116
- Don't check errno on wakeup socket if sread returned 0 since sread
doesn't set errno in that case.
This is a follow-up to cf7760a from several days ago which fixed
Curl_multi_wait to stop busy looping sread on the non-blocking wakeup
socket if it was closed (i.e. sread returns 0). Due to a logic error it
was still possible to busy loop in that case if errno == EINTR.
Closes https://github.com/curl/curl/pull/5047
- Do not say that conn->data is "cleared" by multi_done().
If the connection is in use then multi_done assigns to conn->data another
easy handle that is still using the connection, so in that case it is not
cleared.
Closes https://github.com/curl/curl/pull/4901
... since the current transfer is being killed. Setting it to NULL is
wrong and leaving it pointing to 'data' is also wrong, since that handle
might be about to get freed.
Fixes #4845
Closes #4858
Reported-by: dmitrmax on github
Previously it was stored in a global state which contributed to
curl_global_init's thread unsafety. This boolean is now instead figured
out in curl_multi_init() and stored in the multi handle. Less efficient,
but thread safe.
Closes #4851
A regression made the code use 'multiplexed' as a boolean instead of the
counter it is intended to be. This made curl try to "over-populate"
connections with new streams.
This regression came with 41fcdf71a1, shipped in curl 7.65.0.
Also, respect the CURLMOPT_MAX_CONCURRENT_STREAMS value in the same
check.
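The value checked here is the one an application can set with
curl_multi_setopt(); a minimal sketch (50 is an arbitrary example value):

    #include <curl/curl.h>

    int main(void)
    {
      CURLM *multi = curl_multi_init();
      /* cap how many streams curl will multiplex over one connection
         before it considers the connection "full" */
      curl_multi_setopt(multi, CURLMOPT_MAX_CONCURRENT_STREAMS, 50L);
      /* ... add easy handles and drive the transfers ... */
      curl_multi_cleanup(multi);
      return 0;
    }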
Reported-by: Kunal Ekawde
Fixes #4779
Closes #4784
It could accidentally let the connection get used by more than one
thread, leading to double-free and more.
Reported-by: Christopher Reid
Fixes #4544
Closes #4557
This commit adds curl_multi_wakeup() which was previously in the TODO
list under the curl_multi_unblock name.
On some platforms and with some configurations this feature might not be
available or can fail, in these cases a new error code
(CURLM_WAKEUP_FAILURE) is returned from curl_multi_wakeup().
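A minimal sketch of the intended use: one thread sits in curl_multi_poll()
while another thread calls curl_multi_wakeup() to make it return early
(the function name below is illustrative):

    #include <curl/curl.h>

    /* called from a different thread than the one blocked in
       curl_multi_poll() on the same multi handle */
    static void wake_the_poller(CURLM *multi)
    {
      CURLMcode mc = curl_multi_wakeup(multi);
      if(mc == CURLM_WAKEUP_FAILURE) {
        /* wakeup unavailable or it failed; the blocked call will
           still return when its own timeout expires */
      }
    }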
Fixes #4418
Closes #4608
Prior to this change:
The check for whether an extra wait is necessary was based on the
extra_fds pointer, not on the number of extra fds.
If a non-null pointer was given in extra_fds but extra_nfds
was zero, the extra wait was skipped even though poll() had not
been called.
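For context, the extra descriptors come in through curl_multi_wait()'s
extra_fds/extra_nfds arguments; a minimal sketch where the count, not the
pointer, is what matters:

    #include <curl/curl.h>

    static CURLMcode wait_with_own_fd(CURLM *multi, curl_socket_t own_fd)
    {
      struct curl_waitfd extra;
      int numfds = 0;
      extra.fd = own_fd;
      extra.events = CURL_WAIT_POLLIN;
      extra.revents = 0;
      /* extra_nfds is 1 here; passing a non-null pointer with a zero
         count is the case the fix above is about */
      return curl_multi_wait(multi, &extra, 1, 1000, &numfds);
    }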
Closes https://github.com/curl/curl/pull/4610
Since 59041f0, a new timer might be set in multi_done() so the clearing
of the timers needs to happen afterwards!
Reported-by: Max Kellermann
Fixes #4575
Closes #4583
Repeatedly we see problems where using curl_multi_wait() is difficult or
just awkward because if it has no file descriptor to wait for
internally, it returns immediately and leaves it to the caller to wait
for a small amount of time in order to avoid occasional busy-looping.
This is often missed or misunderstood, leading to underperforming
applications.
This change introduces curl_multi_poll() as a replacement drop-in
function that accepts the exact same set of arguments. This function
works identically to curl_multi_wait() - EXCEPT - for the case when
there's nothing to wait for internally, as then this function will by
itself wait for a "suitable" short time before it returns. This
effectively avoids all risks of busy-looping and should also make it less
likely that apps "over-wait".
This also changes the curl tool to use this function internally when
doing parallel transfers and changes curl_easy_perform() to use it
internally.
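A minimal sketch of the kind of transfer loop this enables, with
curl_multi_poll() used as the drop-in replacement described above:

    #include <curl/curl.h>

    static void drive_transfers(CURLM *multi)
    {
      int still_running = 0;
      do {
        int numfds = 0;
        CURLMcode mc = curl_multi_perform(multi, &still_running);
        if(!mc && still_running)
          /* same arguments as curl_multi_wait(), but waits a short
             while by itself when there is nothing to wait for */
          mc = curl_multi_poll(multi, NULL, 0, 1000, &numfds);
        if(mc)
          break;
      } while(still_running);
    }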
Closes #4163
It was used (intended) to pass in the size of the 'socks' array that is
also passed to these functions, but was rarely actually checked/used and
the array is defined to a fixed size of MAX_SOCKSPEREASYHANDLE entries
that should be used instead.
Closes #4169
Curl_disconnect bails out if conn->easyq is not empty, so detach_connection
needs to be called first to remove the current easy handle from the queue.
Fixes #4144
Closes #4151
- The transfer hashes weren't using the correct keys so removing entries
failed.
- Simplified the iteration logic over transfers sharing the same socket:
they are now simply set to expire and thus get handled in the "regular"
timer loop instead.
Reported-by: Tom van der Woerdt
Fixes #4012
Closes #4014
Since more than one socket can be used by each transfer at a given time,
each sockhash entry now has its own hash table with transfers using that
socket.
In addition, the sockhash entry can now be marked 'blocked = TRUE'
which then makes the delete function just set 'removed = TRUE' instead
of removing it "for real", as a way to not rip out the carpet under the
feet of a parent function that iterates over the transfers of that same
sockhash entry.
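The blocked/removed pair is the usual deferred-deletion trick; a purely
illustrative sketch of the idea, not curl's actual types or names:

    #include <stdbool.h>

    struct hash_entry {
      void *transfers;  /* per-socket hash of transfers (illustrative) */
      bool blocked;     /* set while a caller iterates over 'transfers' */
      bool removed;     /* deletion was requested while blocked */
    };

    static void delete_entry(struct hash_entry *e)
    {
      if(e->blocked)
        e->removed = true;  /* defer; the iterating caller removes it later */
      else {
        /* safe to remove for real here */
      }
    }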
Reported-by: Tom van der Woerdt
Fixes #3961
Fixes #3986
Fixes #3995
Fixes #4004
Closes #3997
... so that timeouts or other state machine actions get going again
after a changing pause state. For example, if the last delivery was
paused there's no pending socket activity.
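The application-side trigger for this scenario is curl_easy_pause(); a
minimal sketch of resuming a paused transfer, after which the expire change
above keeps the state machine moving even without socket activity:

    #include <curl/curl.h>

    /* resume delivery on a transfer that earlier returned
       CURL_WRITEFUNC_PAUSE from its write callback */
    static CURLcode resume_delivery(CURL *easy)
    {
      return curl_easy_pause(easy, CURLPAUSE_CONT);
    }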
Reported-by: sstruchtrup on github
Fixes #3994
Closes #4001
An inner loop within the singlesocket() function wrongly re-used the
variable for the outer loop which then could cause an infinite
loop. Change to using a separate variable!
Reported-by: Eric Wu
Fixes #3970
Closes #3973
They need to be removed from the socket hash linked list with more care.
When sh_delentry() is called to remove a sockethash entry, remove all
individual transfers from the list first. To enable this, each Curl_easy struct
now stores a pointer to the sockethash entry to know how to remove itself.
Reported-by: Tom van der Woerdt and Kunal Ekawde
Fixes #3952
Fixes #3904
Closes #3953
They serve very little purpose and mostly just add noise. Most of them
have been around for a very long time. I read them all before removing
or rephrasing them.
Ref: #3876
Closes #3883
Codacy/CppCheck warns about this. Consistently use parentheses as we
already do in some places to silence the warning.
Closes https://github.com/curl/curl/pull/3866