sends the 220 response or otherwise is dead slow, libcurl will not
acknowledge the connection timeout during that phase but only the "real"
timeout - which may surprise users, as most people probably consider that
wait to be part of the connect phase. Brought up (and misunderstood) in:
http://curl.haxx.se/bug/view.cgi?id=2844077
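For reference, a minimal sketch (hypothetical URL, example values) of the
two timeouts involved - CURLOPT_CONNECTTIMEOUT only bounds the TCP connect,
while the wait for the 220 greeting is only bounded by the overall
CURLOPT_TIMEOUT:

  #include <curl/curl.h>

  int main(void)
  {
    CURL *curl = curl_easy_init();
    if(curl) {
      curl_easy_setopt(curl, CURLOPT_URL, "ftp://example.com/file");
      /* bounds the TCP connect only - NOT the wait for the 220 greeting */
      curl_easy_setopt(curl, CURLOPT_CONNECTTIMEOUT, 10L);
      /* the "real" timeout that bounds the entire operation */
      curl_easy_setopt(curl, CURLOPT_TIMEOUT, 60L);
      curl_easy_perform(curl);
      curl_easy_cleanup(curl);
    }
    return 0;
  }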
Fix SIGSEGV on free'd easy_conn when pipe unexpectedly breaks
Fix data corruption issue with re-connected transfers
Fix use after free if we're completed but easy_conn not NULL
something beyond ASCII but currently libcurl will only pass on the verbatim
string the app provides. There are several browsers that already do this
encoding. The key seems to be the updated draft to RFC2231:
http://tools.ietf.org/html/draft-reschke-rfc2231-in-http-02
not in the mood to fight this now.
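To illustrate what such encoding would look like (a hypothetical file name,
following the draft's rules), a non-ASCII name would be sent percent-encoded
with an explicit charset in a filename* parameter rather than verbatim:

  Content-Disposition: attachment; filename*=UTF-8''na%C3%AFve%20file.txt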
65. When doing FTP over a SOCKS proxy or CONNECT through an HTTP proxy and the
multi interface is used, libcurl will fail if the (passive) TCP connection
for the data transfer isn't more or less instant as the code does not
properly wait for the connect to be confirmed. See test case 564 for a
first shot at reproducing this.
If the CURLOPT_PORT option is used on an FTP URL like
"ftp://example.com/file;type=A" the ";type=A" is stripped off.
I added test case 562 to verify, only to find out that I couldn't repeat
this bug so I hereby consider it not a bug anymore!
for any further requests or transfers. The work-around is then to close that
handle with curl_easy_cleanup() and create a new one. Some more details:
http://curl.haxx.se/mail/lib-2009-04/0300.html
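A sketch of that work-around: the broken handle cannot be salvaged, so free
it and make a fresh one (error handling omitted for brevity):

  #include <curl/curl.h>

  /* the old handle hit the error described above and is useless now */
  static CURL *replace_handle(CURL *broken)
  {
    curl_easy_cleanup(broken);  /* close and free the stuck handle */
    return curl_easy_init();    /* brand new handle, set options again */
  }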
getaddrinfo() sorts the response list
This isn't a libcurl bug since this is how getaddrinfo() is *supposed* to work!
Apparently you deal with this using the /etc/gai.conf file.
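For example (assuming a glibc system), the commonly cited gai.conf tweak to
make getaddrinfo() prefer IPv4-mapped addresses in its sorting looks like:

  # /etc/gai.conf
  precedence ::ffff:0:0/96  100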
make CURLOPT_PROXYUSERPWD sort of deprecated. The primary motive for adding
these new options is that they do not suffer from the colon separator
problem that the CURLOPT_PROXYUSERPWD option has.
curl_easy_setopt: CURLOPT_USERNAME and CURLOPT_PASSWORD that sort of
deprecates the good old CURLOPT_USERPWD since they allow applications to set
the user name and password independently and perhaps more importantly allow
both to contain colon(s) which CURLOPT_USERPWD doesn't fully support.
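A short sketch of the new options in use (hypothetical credentials); note
how the colons in the passwords would have been ambiguous in the old
"user:password" strings:

  #include <curl/curl.h>

  int main(void)
  {
    CURL *curl = curl_easy_init();
    if(curl) {
      curl_easy_setopt(curl, CURLOPT_URL, "http://example.com/");
      /* set independently - colons in the password are no problem */
      curl_easy_setopt(curl, CURLOPT_USERNAME, "daniel");
      curl_easy_setopt(curl, CURLOPT_PASSWORD, "pa:ss:word");
      /* the proxy counterparts work the same way */
      curl_easy_setopt(curl, CURLOPT_PROXYUSERNAME, "proxyuser");
      curl_easy_setopt(curl, CURLOPT_PROXYPASSWORD, "pr:oxy");
      curl_easy_perform(curl);
      curl_easy_cleanup(curl);
    }
    return 0;
  }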
Server with the correct content-length. Sending a file with 511 or less
bytes, content-length 512 is used. Sending a file with 513 - 1023 bytes,
content-length 1024 is used. Files with a length of a multiple of 512 bytes
show the correct content-length. Only these files work for upload.
http://curl.haxx.se/bug/view.cgi?id=2057858
incorrectly--the host name is treated as part of the user name and the
port number becomes the password. This can be observed in test 279
(was KNOWN_ISSUE #54).
server using the multi interface, the commands are not being sent correctly
and instead the connection is "cancelled" (the operation is considered done)
prematurely. There is a half-baked (busy-looping) patch provided in the bug
report but it cannot be accepted as-is. See
http://curl.haxx.se/bug/view.cgi?id=2006544
due to KfW's library header files exporting symbols/macros that should be
kept private to the KfW library. See ticket #5601 at http://krbdev.mit.edu/rt/
51. Kevin Reed's reported problem with a proxy that, when doing CONNECT,
wants NTLM and closes the connection after the initial CONNECT response:
http://curl.haxx.se/bug/view.cgi?id=1879375
a response that was larger than 16KB is now improved slightly so that now
the restriction at 16KB is for the headers only and it should be a rare
situation where the response-headers exceed 16KB. Thus, I consider #47 fixed
and the header limitation is now known as known bug #48.
(http://curl.haxx.se/bug/view.cgi?id=1705802), which was filed by Daniel
Black identifying several FTP-SSL test cases fail when we build libcurl with
NSS for TLS/SSL. Listed as #42 in KNOWN_BUGS.
the multi interface and connection re-use that could make a
curl_multi_remove_handle() ruin a pointer in another handle.
The second problem was less of an actual problem and more of a minor quirk:
the re-using of connections wasn't properly checking if the connection was
marked for closure.
(http://curl.haxx.se/bug/view.cgi?id=1603712) (known bug #36) --limit-rate
(CURLOPT_MAX_SEND_SPEED_LARGE and CURLOPT_MAX_RECV_SPEED_LARGE) are broken
on Windows (since 7.16.0, but that's when they were introduced as previous
to that the limiting logic was made in the application only and not in the
library). It was actually also broken on select()-based systems (as opposed
to poll()) but we haven't had any such reports. We now use select(), Sleep()
or delay() properly to sleep a while without waiting for any input or
output when the rate limiting is activated with the easy interface.
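The sleep technique boils down to a select() call with no descriptors and
only a timeout; roughly like this on POSIX systems (the helper name and the
millisecond granularity are just illustrative):

  #include <sys/select.h>

  /* sleep roughly 'ms' milliseconds without watching any input or output */
  static void wait_ms(int ms)
  {
    struct timeval timeout;
    timeout.tv_sec = ms / 1000;
    timeout.tv_usec = (ms % 1000) * 1000;
    select(0, NULL, NULL, NULL, &timeout);
  }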
authentication (which performs multiple "passes" and authenticates a
connection rather than an HTTP request), and particularly when using the
multi interface, there's a risk that libcurl will re-use a wrong connection
when doing the different passes in the NTLM negotiation and thus fail to
negotiate (in seemingly mysterious ways).
36. --limit-rate (CURLOPT_MAX_SEND_SPEED_LARGE and
CURLOPT_MAX_RECV_SPEED_LARGE) are broken on Windows (since 7.16.0, but
that's when they were introduced as previous to that the limiting logic was
made in the application only and not in the library). This problem is easily
repeated and it takes a Windows person to fire up his/her debugger in order
to fix it. http://curl.haxx.se/bug/view.cgi?id=1603712
KNOWN_BUGS #25, which happens when a proxy closes the connection when
libcurl has sent CONNECT, as part of an authentication negotiation. Starting
now, libcurl will re-connect accordingly and continue the authentication as
it should.
while not fixing things very nicely, it does make the SOCKS5 proxy
connection slightly better as it now acknowledges the connection timeout
and it no longer segfaults in the case when SOCKS requires authentication
and you did not specify username:password.
CURLOPT_MAX_RECV_SPEED_LARGE that limit the maximum rate at which libcurl is
allowed to send or receive data. This kind of adds the command line tool's
option --limit-rate to the library.
The rate limiting logic in the curl app is now removed and is instead
provided by libcurl itself. Transfer rate limiting will now also work for -d
and -F, which it didn't before.
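A sketch of the new options in use (the 10 KB/sec caps are just example
values); note that both options take curl_off_t arguments:

  #include <curl/curl.h>

  int main(void)
  {
    CURL *curl = curl_easy_init();
    if(curl) {
      curl_off_t limit = 10240; /* bytes per second */
      curl_easy_setopt(curl, CURLOPT_URL, "http://example.com/big");
      curl_easy_setopt(curl, CURLOPT_MAX_SEND_SPEED_LARGE, limit);
      curl_easy_setopt(curl, CURLOPT_MAX_RECV_SPEED_LARGE, limit);
      curl_easy_perform(curl);
      curl_easy_cleanup(curl);
    }
    return 0;
  }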
This happens because the multi-pass code abuses the redirect following code
for doing multiple requests, and when we follow redirects to an absolute
URL we must use the newly specified port and not the one specified in the
original URL. A proper fix to this would need to separate the negotiation
"redirect" from an actual redirect.
server configured in the system, the ares_init() call fails and thus
curl_easy_init() fails as well. This causes weird effects for people who use
numerical IP addresses only.
Curl_ossl_connect(), the Curl_gtls_connect() function does not send the user
certificate to the peer. In fact, it ignores the conn->data->set.cert field
completely, it always uses the anonymous credentials. See
http://curl.haxx.se/bug/view.cgi?id=1348930
least it should no longer cause a compiler error. However, it does not have
AI_NUMERICHOST so we cannot getaddrinfo() any numerical addresses with it (we
use that for FTP PORT/EPRT)! So, I modified the configure check that verifies
that getaddrinfo() works to use AI_NUMERICHOST, since it will then fail on
AIX 4.3 and it will automatically build with IPv6 support disabled.
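The modified check boils down to compiling and running a test program along
these lines (a sketch - the actual configure test may differ in detail),
which fails on AIX 4.3 since AI_NUMERICHOST is missing there:

  #include <string.h>
  #include <sys/types.h>
  #include <sys/socket.h>
  #include <netdb.h>

  int main(void)
  {
    struct addrinfo hints, *ai = NULL;
    memset(&hints, 0, sizeof(hints));
    hints.ai_flags = AI_NUMERICHOST;   /* not defined on AIX 4.3 */
    hints.ai_family = AF_UNSPEC;
    hints.ai_socktype = SOCK_STREAM;
    /* zero exit status only where numerical lookups work */
    return getaddrinfo("127.0.0.1", "80", &hints, &ai) != 0;
  }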
the correct dynamic library names by default, and provides override switches
--with-ldap-lib, --with-lber-lib and --without-lber-lib. Added
CURL_DISABLE_LDAP to platform-specific config files to disable LDAP
support on those platforms that probably don't have dynamic OpenLDAP
libraries available to avoid compile errors.
many third-party libs and tools have problems. When curl is built without
--disable-shared, the testing is done with a front-end script which makes the
valgrind testing include (ba)sh as well and that often causes valgrind
errors. Either we improve the valgrind error scanner a lot to better identify
(lib)curl errors only, or we disable valgrind checking by default.
curl_easy_perform() invokes. It was previously unlocked at disconnect, which
could mean that it remained locked between multiple transfers. The DNS cache
may not live as long as the connection cache does, as they are separate.
To deal with the lack of DNS (host address) data availability in re-used
connections, libcurl now keeps a copy of the IP address as a string, to be able
to show it even on subsequent requests on the same connection.
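Assuming the stored string is the one exposed through CURLINFO_PRIMARY_IP,
an application can read it back even after a transfer that re-used an
existing connection - a sketch:

  #include <stdio.h>
  #include <curl/curl.h>

  /* call after a completed curl_easy_perform() on 'curl' */
  static void show_used_ip(CURL *curl)
  {
    char *ip = NULL;
    if(curl_easy_getinfo(curl, CURLINFO_PRIMARY_IP, &ip) == CURLE_OK && ip)
      printf("connected to %s\n", ip);
  }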
contains %0a or %0d in the user, password or CWD parts. (A future fix would
include doing it for %00 as well - see KNOWN_BUGS for details.) Test case 225
and 226 were added to verify this.
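To see why this matters, consider a hypothetical malicious URL such as:

  ftp://example.com/a%0d%0aDELE%20victim-file/target

Without the fix, the decoded CRLF would terminate the CWD command on the FTP
control channel and smuggle in an extra DELE command.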