When a connection is no longer used, it is kept in the cache. If the
cache is full, the oldest idle connection is closed. If no connection is
idle, the current one is closed instead.
Tidied up code from commit 6b6bdc83bd. Updated a few instances where
the pop3c struct variable was accessed via the longer conndata struct
rather than matching what other code in pop3_authenticate() used.
Fixed an issue where, when (lib)curl is compiled without support for a
challenge-response based SASL authentication mechanism such as CRAM-MD5
or NTLM, and the server doesn't support the LOGIN or PLAIN mechanisms,
(lib)curl wouldn't fall back to Clear Text authentication.
Note: In order to fall back to Clear Text authentication properly, this
fix adds support for the LOGINDISABLED server capability.
imap: Fixed no known authentication mechanism when fallback is required
Fixed an issue where, when (lib)curl is compiled without support for a
challenge-response based SASL authentication mechanism such as CRAM-MD5
or NTLM, and the server doesn't support the LOGIN or PLAIN mechanisms,
(lib)curl wouldn't fall back to Clear Text authentication.
Note: In order to fall back to Clear Text authentication properly, this
fix adds support for the LOGINDISABLED server capability.
Related bug: http://curl.haxx.se/mail/lib-2013-02/0004.html
Reported by: Stanislav Ivochkin
Fixed an issue where, when (lib)curl is compiled without support for a
challenge-response based SASL authentication mechanism such as CRAM-MD5
or NTLM, and the server doesn't support the LOGIN or PLAIN mechanisms,
(lib)curl wouldn't fall back to APOP or Clear Text authentication.
Bug: http://curl.haxx.se/mail/lib-2013-02/0004.html
Reported by: Stanislav Ivochkin
Remove timeout argument that's never used.
Make the actual connection get detected in a single spot to reduce code
duplication.
Store the IPv6 state already when the connection is attempted.
There was a bug where, if SSLWrite() returned errSSLWouldBlock but did
succeed in transmitting at least something, then we'd incorrectly
resend the packet. Now we never take errSSLWouldBlock as a sign that
nothing was transferred to/from the server.
Bug: http://curl.haxx.se/mail/lib-2013-01/0295.html
Reported by: Bruno de Carvalho
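A minimal sketch of the corrected idea (not the actual darwinssl code):
treat errSSLWouldBlock with a non-zero 'processed' count as a partial
write rather than as nothing sent.

#include <Security/SecureTransport.h>
#include <sys/types.h>

static ssize_t send_some(SSLContextRef ctx, const void *buf, size_t len)
{
  size_t processed = 0;
  OSStatus err = SSLWrite(ctx, buf, len, &processed);
  if(err == noErr || err == errSSLWouldBlock)
    return (ssize_t)processed;  /* anything from 0 up to len was sent */
  return -1;                    /* a genuine error */
}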
Minor code tidy-up to add comments similar to those used in the pop3
and imap end-of-resp functions, in order to assist anyone reading the
code and highlight the similarities between each of these protocols.
smtp_state_upgrade_tls() would incorrectly attempt to complete the
upgrade to smtps and start the EHLO command if
Curl_ssl_connect_nonblocking() returned a failure code yet ssldone
was set to TRUE. This would only happen when a non-blocking API hadn't
been provided by the SSL implementation and curlssl_connect() was
called underneath.
pop3_state_upgrade_tls() would incorrectly attempt to complete the
upgrade to pop3s and start the CAPA command if
Curl_ssl_connect_nonblocking() returned a failure code yet ssldone
was set to TRUE. This would only happen when a non-blocking API hadn't
been provided by the SSL implementation and curlssl_connect() was
called underneath.
imap_state_upgrade_tls() would incorrectly attempt to complete the
upgrade to imaps and start the CAPABILITY command if
Curl_ssl_connect_nonblocking() returned a failure code yet ssldone
was set to TRUE. This would only happen when a non-blocking API hadn't
been provided by the SSL implementation and curlssl_connect() was
called underneath.
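For illustration, a sketch (not the literal libcurl code) of the
corrected control flow shared by all three fixes: the upgrade only
completes when the connect call succeeded and the handshake is
reported done; state_after_upgrade() stands in for the protocol's
EHLO/CAPA/CAPABILITY step and is purely hypothetical.

static CURLcode state_upgrade_tls(struct connectdata *conn)
{
  bool ssldone = FALSE;
  CURLcode result = Curl_ssl_connect_nonblocking(conn, FIRSTSOCKET,
                                                 &ssldone);
  /* only proceed when the call succeeded AND the handshake finished */
  if(!result && ssldone)
    result = state_after_upgrade(conn); /* hypothetical next step */
  return result;
}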
- document that the double-quote and backslash need to be escaped when
  quoting.
- libcurl formdata now escapes double-quotes in filenames with a
  backslash.
- curl formparse can now parse filenames that contain '"' as well as
  ',' or ';'.
- curl can now upload files with ',' or ';' in the filename.
Bug: http://curl.haxx.se/bug/view.cgi?id=1171
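As a hedged usage example (the file name is purely illustrative),
posting a file whose name contains ',' or ';' now works because
libcurl escapes the generated double-quotes:

#include <curl/curl.h>

static void post_file_with_comma(CURL *easy)
{
  struct curl_httppost *post = NULL, *last = NULL;
  curl_formadd(&post, &last,
               CURLFORM_COPYNAME, "upload",
               CURLFORM_FILE, "report,2013;final.csv",
               CURLFORM_END);
  curl_easy_setopt(easy, CURLOPT_HTTPPOST, post);
  curl_easy_perform(easy);
  curl_formfree(post);
}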
Fixed an issue where Curl_ssl_connect_nonblocking() wouldn't complete
correctly and the ssldone flag wouldn't be set to true for pop3s based
connections.
Bug introduced in commit: 4ffb8a6398.
Remove the internally separate behavior of the easy vs multi interface.
curl_easy_perform() now uses the multi interface itself.
Several minor multi interface quirks and bugs have been fixed in the
process.
Much help with debugging this has been provided by: Yang Tse
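A conceptual sketch, not the actual implementation, of what driving an
easy handle through the multi interface looks like:

#include <curl/curl.h>
#include <sys/select.h>

static CURLcode perform_via_multi(CURL *easy)
{
  CURLM *multi = curl_multi_init();
  CURLcode result = CURLE_OK;
  int still_running = 0;

  if(!multi)
    return CURLE_OUT_OF_MEMORY;
  curl_multi_add_handle(multi, easy);

  do {
    fd_set rd, wr, ex;
    int maxfd = -1;
    struct timeval tv = { 1, 0 };
    curl_multi_perform(multi, &still_running);
    if(!still_running)
      break;
    FD_ZERO(&rd); FD_ZERO(&wr); FD_ZERO(&ex);
    curl_multi_fdset(multi, &rd, &wr, &ex, &maxfd);
    if(maxfd >= 0)
      select(maxfd + 1, &rd, &wr, &ex, &tv); /* wait for activity */
  } while(still_running);

  {
    int msgs_left = 0;
    CURLMsg *msg = curl_multi_info_read(multi, &msgs_left);
    if(msg && msg->msg == CURLMSG_DONE)
      result = msg->data.result;  /* the transfer's result code */
  }
  curl_multi_remove_handle(multi, easy);
  curl_multi_cleanup(multi);
  return result;
}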
Fixes the initial proxy response being processed by the tunneled
protocol handler instead of the HTTP wrapper handler. This issue would
trigger upon a delayed CONNECT response from the proxy.
Additionally fixes a multi interface code-path in which connections
would not time out properly.
This does not fix known bug #39.
URL: http://curl.haxx.se/mail/lib-2013-01/0191.html
This commit fixes a regression introduced in 052a08ff.
NSS caches certs/keys returned by the SSL_GetClientAuthDataHook callback
and if we connect second time to the same server, the cached cert/key
pair is used. If we use multiple client certificates for different
paths on the same server, we need to clear the session cache to force
NSS to call the hook again. The commit 052a08ff prevented the session
cache from being cleared if a client certificate from file was used.
The condition is now fixed to cover both cases: connssl->client_nickname
is not NULL if a client certificate from the NSS database is used and
connssl->obj_clicert is not NULL if a client certificate from file is
used.
Review by: Kai Engert
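In sketch form, the fixed check looks something like this (only the two
client-certificate fields are named in the text above; the
session-cache call and the 'handle' field are shown for illustration):

if(connssl->client_nickname != NULL || connssl->obj_clicert != NULL)
  /* a client certificate is in use: invalidate cached session data so
     NSS consults SSL_GetClientAuthDataHook again on the next connect */
  SSL_InvalidateSession(connssl->handle);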
This commit renames lib/setup.h to lib/curl_setup.h and
renames lib/setup_once.h to lib/curl_setup_once.h.
Removes the need and usage of a header inclusion guard foreign
to libcurl. [1]
Removes the need and presence of an alarming notice we carried
in old setup_once.h [2]
----------------------------------------
1 - lib/setup_once.h used __SETUP_ONCE_H macro as header inclusion guard
up to commit ec691ca3 which changed this to HEADER_CURL_SETUP_ONCE_H,
this single inclusion guard is enough to ensure that inclusion of
lib/setup_once.h done from lib/setup.h is only done once.
Additionally, lib/setup.h has always used the __SETUP_ONCE_H macro to
protect inclusion of setup_once.h even after commit ec691ca3; this
was to avoid a circular header inclusion triggered when building a
c-ares enabled version with c-ares sources available, which also has
a setup_once.h header. Commit ec691ca3 exposes the real nature of
the __SETUP_ONCE_H usage in lib/setup.h: it is a header inclusion
guard foreign to libcurl, belonging to c-ares's setup_once.h.
The renaming this commit does fixes the circular header inclusion
and, as such, removes the need and usage of a header inclusion guard
foreign to libcurl. The __SETUP_ONCE_H macro is no longer used in
libcurl.
2 - Due to the circular interdependency of old lib/setup_once.h and the
c-ares setup_once.h header, old file lib/setup_once.h has carried,
from 2006 up to the present day, an alarming and prominent notice
about the need to keep libcurl's and c-ares's setup_once.h in sync.
Given that this commit fixes the circular interdependency, the need
and presence of the mentioned notice are removed.
All mentioned interdependencies date back to the old days when the
c-ares project lived inside a curl subdirectory. This commit removes
the last traces of that arrangement.
This reverts renaming and usage of lib/*.h header files done
28-12-2012, reverting 2 commits:
f871de0... build: make use of 76 lib/*.h renamed files
ffd8e12... build: rename 76 lib/*.h files
This also reverts removal of redundant include guard (redundant thanks
to changes in above commits) done 2-12-2013, reverting 1 commit:
c087374... curl_setup.h: remove redundant include guard
This also reverts renaming and usage of lib/*.c source files done
3-12-2013, reverting 3 commits:
13606bb... build: make use of 93 lib/*.c renamed files
5b6e792... build: rename 93 lib/*.c files
7d83dff... build: commit 13606bbfde follow-up 1
Start of related discussion thread:
http://curl.haxx.se/mail/lib-2013-01/0012.html
Asking for confirmation on pushing this reversion commit:
http://curl.haxx.se/mail/lib-2013-01/0048.html
Confirmation summary:
http://curl.haxx.se/mail/lib-2013-01/0079.html
NOTICE: The list of 2 files that have been modified by other
intermixed commits, while renamed, and also by at least one
of the 6 commits this one reverts follows below. These 2 files
will exhibit a hole in history unless git's '--follow' option
is used when viewing logs.
lib/curl_imap.h
lib/curl_smtp.h
1. When the downloaded data file from Mozilla is current, but the output
bundle does not exist: continue processing to create the bundle. The
goal is to have the output file - not just download the latest input.
2. Added a -f option to force re-processing of the file. Useful for
debugging/testing the process.
3. Added support for output to '-' (stdout), allowing the output to be
piped.
4. All progress and error messages go to STDERR rather than STDOUT (3)
5. The script opened and closed the output file many times
unnecessarily. It now opens it once, does the output and closes it.
6. Backup of the input files happens after successful processing, not
before.
7. The output is written to a temporary file, and renamed to the
requested name after backup - this greatly reduces the window where the
file can be seen partially written.
8. All die calls have a \n at the end to suppress perl's traceback - the
traceback isn't useful to end users.
Patch: http://curl.haxx.se/mail/lib-2013-01/0045.html
lib/objnames.inc provides definition of curl_10char_object_name() shell
function. The intended purpose of this function is to transliterate a
(*.c) source file name that may be longer than 10 characters, or not,
into a string with at most 10 characters which may be used as an OS/400
object name.
Test case 1221 does unit testing of this function and also verifies
that it is possible to generate distinct short object names for all
curl and libcurl *.c source file names.
lib/objnames-test.sh is the shell script used for test case 1221.
tests/runtests.pl modified to accept shell script test cases.
More details inside lib/objnames.inc and lib/objnames-test.sh
* Changing the order of the state machine to represent the order in
which commands are sent to the server.
* Reworking the imap_endofresp() function, as the FETCH response doesn't
include the command id and so shouldn't be part of the length comparison
that takes the id string into account.
Fixed a problem with the state machine when attempting to log in with
invalid credentials. The server would report login failure but libcurl
would not read the response due to inappropriate IMAP_STOP states being
set after the login was sent.
Applied some of the comment and layout changes that had already been
applied to the pop3 and smtp code over the last 6 to 9 months.
This is in preparation of adding SASL based authentication.
... on Snow Leopard and Lion
Snow Leopard introduced the SSLSetSessionOption() function, but it
doesn't disable peer verification as expected on Snow Leopard or
Lion (it works as expected in Mountain Lion). So we now use sysctl()
to detect whether or not the user is using Snow Leopard or Lion,
and if that's the case, then we now use the deprecated
SSLSetEnableCertVerify() function instead to disable peer verification.
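A sketch of how such a runtime check might look (the mechanism in the
actual code may differ); Darwin 10.x corresponds to Snow Leopard and
11.x to Lion:

#include <sys/types.h>
#include <sys/sysctl.h>
#include <string.h>

static int running_snow_leopard_or_lion(void)
{
  char osrel[64];
  size_t len = sizeof(osrel);
  if(sysctlbyname("kern.osrelease", osrel, &len, NULL, 0) != 0)
    return 0;
  /* Darwin 10.x == Snow Leopard, 11.x == Lion */
  return !strncmp(osrel, "10.", 3) || !strncmp(osrel, "11.", 3);
}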
... it also clobbered the 'result' return value so that it wouldn't
return the error back to the parent function properly, which broke test
809 when run with 'multi-always'.
When prefixing a path with /~/ it is supposed to be used relative to the
user's home directory but it didn't work. Now we cut off the entire
three-byte sequence "/~/", which seems to be how OpenSSH does it.
Bug: http://curl.haxx.se/bug/view.cgi?id=1173
Reported by: Balaji Parasuram
Issue: When building a 32-bit target with large file support, the HP-UX
<sys/socket.h> header file may simultaneously provide two different
sets of declarations for the sendfile and sendpath functions, one with
static and another with external linkage. Given that we do not use
the mentioned functions we really don't care which linkage is the
appropriate one, but on the other hand, the double declaration emits
warnings when using the HP-UX compiler and errors when using modern
gcc versions, resulting in fatal compilation errors.
The mentioned issue is now fixed, given that we don't use the sendfile
or sendpath functions.
A bundle is a list of all persistent connections to the same host.
The connection cache consists of a hash of bundles, with the
hostname as the key.
The benefits may not be obvious, but they are two:
1) Faster search for connections to reuse, since the hash
lookup only finds connections to the host in question.
2) It lays out the groundwork for an upcoming patch,
which will introduce multiple HTTP pipelines.
This patch also removes the awkward list of "closure handles",
which were needed to send QUIT commands to the FTP server
when closing a connection.
Now we allocate a separate closure handle and use that
one to close all connections.
This has been tested in a live system for a few weeks, and of
course passes the test suite.
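Conceptually (a simplified sketch; the real internal structures differ),
the cache becomes a hash keyed on the host name, each entry holding a
bundle of connections:

#include <stddef.h>

struct connectdata;            /* one connection (libcurl-internal) */

/* a bundle groups all persistent connections to one host */
struct connectbundle {
  struct connectdata **conns;  /* connections to this host */
  size_t num_connections;
};

/* the cache maps "hostname" -> struct connectbundle via a hash, so one
   lookup finds every candidate connection for re-use:
     bundle = hash_lookup(conncache, hostname);
   instead of scanning every cached connection for a matching host. */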
BLANK_AT_MAKETIME may be used in our Makefile.am files to blank the
LIBS variable used in the generated makefile at makefile processing
time. Doing this functionally prevents LIBS from being used for
all link targets in the given makefile.
This handling already works with the easy-interface code. When a request
is sent on a re-used connection that gets closed by the server at the
same time as the request is sent, it may happen that we can send the
request and only discover the broken connection as a RECV_ERROR in the
PERFORM state, in which case the request needs to be retried on a fresh
connection. Test 64 broke with 'multi-always-internally'.
Although it is not explicitly stated in the documentation, NSS uses
*pRetCert and *pRetKey even if the client authentication hook returns
a failure. Namely, if we destroy *pRetCert without clearing *pRetCert
afterwards, NSS destroys the certificate once again, which causes a
double free.
Reported by: Bob Relyea
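In sketch form (assuming the standard NSS client-auth hook signature),
the failure path now looks something like this; the body is
illustrative only:

#include <prio.h>
#include <ssl.h>
#include <cert.h>
#include <keyhi.h>

static SECStatus client_auth_hook(void *arg, PRFileDesc *sock,
                                  CERTDistNames *caNames,
                                  CERTCertificate **pRetCert,
                                  SECKEYPrivateKey **pRetKey)
{
  /* ... attempt to find a matching certificate and key ... */

  /* failure path: NSS may still look at the returned pointers, so
     don't leave freed values behind */
  *pRetCert = NULL;
  *pRetKey = NULL;
  return SECFailure;
}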
.. that are sent when auth-negotiating before a chunked
upload or when setting the 'Transfer-Encoding: chunked'
header and intentionally sending no content.
Adjust test565 and test1333 accordingly.
DNS cache entries populated with CURLOPT_RESOLVE were not properly freed
again when done using the multi interface.
Test case 1502 added to verify.
Bug: http://curl.haxx.se/bug/view.cgi?id=3575448
Reported by: Alex Gruz
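For reference, a hedged example of the usage involved (host name and
address are placeholders); entries added this way are now freed
correctly when the multi handle is done with them:

#include <curl/curl.h>

/* 'easy' and 'multi' are assumed to be already-initialized handles */
static void preresolve(CURL *easy, CURLM *multi)
{
  struct curl_slist *dns =
    curl_slist_append(NULL, "example.com:80:127.0.0.1");

  curl_easy_setopt(easy, CURLOPT_RESOLVE, dns);
  curl_multi_add_handle(multi, easy);
  /* ... drive the transfer with curl_multi_perform() ... */
  curl_multi_remove_handle(multi, easy);
  curl_slist_free_all(dns);
}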
If we use memory functions (malloc, free, strdup etc) in C sources in
libcurl and we fail to include curl_memory.h or memdebug.h we either
fail to properly support user-provided memory callbacks or the memory
leak system of the test suite fails.
After Ajit's report of a failure in the first category in http_proxy.c,
I spotted a few in the second category as well. These problems are now
tested for by test 1132 which runs a perl program that scans for and
attempts to check that we use the correct include files if a memory
related function is used in the source code.
Reported by: Ajit Dhumale
Bug: http://curl.haxx.se/mail/lib-2012-11/0125.html
When using only 1 second precision, curl doesn't create new cnonce
values quickly enough for all uses.
For example, issuing the following command multiple times to a recent
Tomcat causes authentication failures:
curl --digest -utest:test http://tomcat.test.com:8080/manager/list
This is because curl uses the same cnonce for several seconds, but
doesn't increment the nonce counter. Tomcat correctly interprets
this as a replay attack and rejects the request.
When microsecond-precision is available, this commit causes curl to
change cnonce values much more frequently.
With microsecond resolution, the cnonce length used in the headers was
also increased to 32 to further reduce the risk of duplication.
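A simplified sketch of the idea (not the exact code, which may mix in
other entropy): seed the cnonce from microsecond-resolution time so
that back-to-back requests within the same second get distinct values.

#include <stdio.h>
#include <stdlib.h>
#include <sys/time.h>

/* fill 'cnonce' (at least 33 bytes) with a 32-character hex string */
static void make_cnonce(char *cnonce, size_t size)
{
  struct timeval now;
  gettimeofday(&now, NULL);
  snprintf(cnonce, size, "%08x%08x%08x%08x",
           (unsigned int)now.tv_sec, (unsigned int)now.tv_usec,
           (unsigned int)rand(), (unsigned int)rand());
}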
axTLS:
This will make the axTLS backend perform the RFC2818 checks, honoring
the VERIFYHOST setting similar to the OpenSSL backend.
Generic for OpenSSL and axTLS:
Move the hostcheck and cert_hostcheck functions from lib/ssluse.c
to make them generically available to the OpenSSL, axTLS and other
SSL backends. They are now in the new lib/hostcheck.c file.
CyaSSL:
CyaSSL now also has the RFC2818 checks enabled by default. There is a
limitation in that verifyhost cannot be enabled exclusively for the
Subject CN field comparison. This SSL backend will thus behave like the
NSS and GnuTLS backends (meaning: RFC2818 ok, or bust). In other words:
setting verifyhost to 0 or 1 will disable the Subject Alt Names checks
too.
Schannel:
Updated the schannel information messages: Split the IP address usage
message from the verifyhost setting and changed the message about
disabling SNI (Server Name Indication, used in HTTP virtual hosting)
into a message stating that the Subject Alternative Names checks are
being disabled when verifyhost is set to 0 or 1. As a side effect of
switching off the RFC2818 related servername checks with
SCH_CRED_NO_SERVERNAME_CHECK
(http://msdn.microsoft.com/en-us/library/aa923430.aspx) the SNI feature
is being disabled. This effect is not documented in MSDN, but Wireshark
output clearly shows the effect (details on the libcurl maillist).
PolarSSL:
Fix the prototype change in PolarSSL of ssl_set_session() and the move
of the peer_cert from the ssl_context to the ssl_session. Found this
change in the PolarSSL SVN between r1316 and r1317 where the
POLARSSL_VERSION_NUMBER was at 0x01010100. But to accommodate the Ubuntu
PolarSSL version 1.1.4, the check discriminates between versions lower
than PolarSSL 1.2.0 and versions 1.2.0 and higher. Note: The PolarSSL
SVN trunk jumped from version 1.1.1 to 1.2.0.
Generic:
All the SSL backends are fixed and checked to work with the
ssl.verifyhost as a boolean, which is an internal API change.
The text "additional stuff not fine" text was added for debug purposes a
while ago, but it isn't really helping anyone and for some reason some
Linux distributions provide their libcurls built with debug info still
present and thus (far too many) users get to read this info.
The logic previously checked for a started NTLM negotiation only for
host and not also with proxy, leading to problems doing POSTs over a
proxy NTLM that are larger than 2000 bytes. Now it includes proxy in the
check.
Bug: http://curl.haxx.se/bug/view.cgi?id=3582321
Reported by: John Suprock
The existing logic only cut off the fragment from the separate 'path'
buffer which is used when sending HTTP to hosts. The buffer that held
the full URL used for proxies was not dealt with. It is now.
Test case 5 was updated to use a fragment on a URL over a proxy.
Bug: http://curl.haxx.se/bug/view.cgi?id=3579813
As a handle can be re-used after having done HTTP auth in a previous
request, it must make sure to clear out the HTTP types that aren't
wanted in this new request.
This reverts commit ce8311c7e4.
The commit made test 2024 work but caused a regression with repeated
Digest authentication. We need to fix this differently.
After a research team wrote a document[1] that found several live source
codes out in the wild that misused the CURLOPT_SSL_VERIFYHOST option,
thinking it was a boolean, this change now bans 1 as a value and makes
libcurl return an error for it.
1 was never a sensible value to use in production but was introduced
back in the days to help debugging. It was always documented clearly
this way.
1 was never supported by all SSL backends in libcurl, so this cleanup
makes the treatment of it unified.
The report's list of mistakes for this option was all PHP code and
while there's a binding layer between libcurl and PHP, the PHP team has
decided to keep as thin a layer as possible on top of libcurl, so they
will not alter or specifically filter a 'TRUE' value for this
particular option. I sympathize with that position.
[1] = http://daniel.haxx.se/blog/2012/10/25/libcurl-claimed-to-be-dangerous/
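For reference, the intended application usage is unchanged; a minimal
example ('curl' being an already-initialized easy handle):

curl_easy_setopt(curl, CURLOPT_SSL_VERIFYPEER, 1L);
/* 2 = verify that the certificate matches the host name in the URL;
   passing 1 is now rejected with an error instead of being accepted */
curl_easy_setopt(curl, CURLOPT_SSL_VERIFYHOST, 2L);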
Since automake 1.12.4, the warnings are issued on running automake:
warning: 'INCLUDES' is the old name for 'AM_CPPFLAGS' (or '*_CPPFLAGS')
Avoid INCLUDES and roll these flags into AM_CPPFLAGS.
Compile tested on:
Ubuntu 10.04 (automake 1:1.11.1-1)
Ubuntu 12.04 (automake 1:1.11.3-1ubuntu2)
Arch Linux (automake 1.12.4)
As pointed out in Bug report #3579064, curl_multi_perform() would
wrongly use a blocking mechanism internally for some commands, which
could lead to, for example, a very long block if the LIST response never
showed up.
The solution was to make sure to properly continue to use the multi
interface non-blocking state machine.
The new test 1501 verifies the fix.
Bug: http://curl.haxx.se/bug/view.cgi?id=3579064
Reported by: Guido Berhoerster
When the string was given as 'srp' it didn't work; it required 'SRP'.
Starting now, the check disregards casing.
Bug: http://curl.haxx.se/bug/view.cgi?id=3578418
Reported by: Jeff Connelly
Previously the Metalink code used Apple's CommonCrypto library only if
curl was built using the --with-darwinssl option. Now we use CommonCrypto
on all Apple operating systems including Tiger or later, or iOS 5 or
later, so you don't need to build --with-darwinssl anymore. Also rolled
out this change to libcurl's md5 code.
The iOS build was broken by a reference to a function that only existed
under OS X; fixed. Also fixed a hard-to-reproduce problem where, if the
server disconnected before libcurl got the chance to hang up first and
SecureTransport was in use, then we'd raise an error instead of failing
gracefully.
This is a minor change in behavior after having been pointed out by Mark
Tully and discussed on the list. Initially this case would internally
call poll() with no sockets and a timeout which would equal a sleep for
that specified time.
Bug: http://curl.haxx.se/mail/lib-2012-10/0076.html
Reported by: Mark Tully
Since there are servers that seem to return very big encrypted
data packages, we need to be able to handle those without having
an internal size limit. To avoid the buffer growing too fast too
early, the initial size was decreased and the minimum free space
in the buffer was decreased as well.
During the periods of rate limitation, the speedcheck function wasn't
called and thus the values weren't updated accordingly and it would then
easily trigger wrongly once data got transferred again.
Also, the progress callback's return code was not acknowledged in this
state, so it could cause an "abort" return code to be ignored and not
have the documented effect of aborting an ongoing transfer.
Bug: http://curl.haxx.se/mail/lib-2012-09/0081.html
Reported by: Jie He
The Curl_reconnect_request() function could end up returning a pointer
to a free()d struct when Curl_done() failed inside. Clearing the pointer
unconditionally after Curl_done() avoids this risk.
Reported by: Ho-chi Chen
Bug: http://curl.haxx.se/mail/lib-2012-09/0188.html
Selecting a socks proxy in Google's Chrome browser results in the
following environment variables:
NO_PROXY=localhost,127.0.0.0/8
ALL_PROXY=socks://localhost:1080/
all_proxy=socks://localhost:1080/
no_proxy=localhost,127.0.0.0/8
... and libcurl didn't treat 'socks://' as socks but instead picked an
HTTP proxy.
Reported by: Scott Bailey
Bug: http://curl.haxx.se/bug/view.cgi?id=3566860
Each certificate section of the input certdata.txt file has a trust
section following it with details.
This script failed to detect the start of the trust for at least one
cert[*], which made the script continue past that section into the next
one, where it found an 'untrusted' marker, and as a result that
certificate was not included in the output.
[*] = "Hellenic Academic and Research Institutions RootCA 2011"
Bug: http://curl.haxx.se/mail/lib-2012-09/0019.html
The SMTP client will now send the SIZE parameter in the MAIL FROM
command only if the server supports it. Without this patch the server
might say "504 Command parameter not implemented" and reject the
message.
Bug: http://curl.haxx.se/bug/view.cgi?id=3564114
/*
* Name: curl_multi_wait()
*
* Desc: Poll on all fds within a CURLM set as well as any
* additional fds passed to the function.
*
* Returns: CURLMcode type, general multi error code.
*/
CURL_EXTERN CURLMcode curl_multi_wait(CURLM *multi_handle,
                                      struct curl_waitfd extra_fds[],
                                      unsigned int extra_nfds,
                                      int timeout_ms);
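A usage sketch based on the prototype above ('multi_handle' and
'app_socket' are assumed to exist in the application):

struct curl_waitfd extra;
extra.fd = app_socket;           /* hypothetical application socket */
extra.events = CURL_WAIT_POLLIN;
extra.revents = 0;

/* wait at most 1000 ms for activity on the multi handle's sockets or
   on the extra descriptor */
CURLMcode mc = curl_multi_wait(multi_handle, &extra, 1, 1000);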
In Mountain Lion, Apple added TLS 1.1 and 1.2, and deprecated a number
of SecureTransport functions, some of which we were using. We now check
to see if the replacement functions are present, and if so, we use them
instead. The old functions are still present for users of older
cats. Also fixed a build warning that started to appear under Mountain
Lion.
Commit b91d29a28e170c16d65d956db79f2cd3a82372d2 introduced a bug and
broke the Curl_closesocket function. The sock_accepted flag for the
second socket should be tagged as TRUE before the sockopt callback is
called, because if the callback returns an error, the Curl_closesocket
function is going to call the fclosesocket callback for the accept()ed
socket.
For active FTP connections, applications may need to set socket options
after the accept() call returns successfully. This fix gives a call to
the callback registered with the CURLOPT_SOCKOPTFUNCTION option. Also,
a new sock type - CURLSOCKTYPE_ACCEPT - is added. This type is passed
to application callbacks in the 'purpose' parameter. Applications may
use this parameter to distinguish between socket types.
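A hedged sketch of how an application callback might use the new
purpose value:

#include <curl/curl.h>
#include <sys/socket.h>

static int sockopt_cb(void *clientp, curl_socket_t fd,
                      curlsocktype purpose)
{
  if(purpose == CURLSOCKTYPE_ACCEPT) {
    /* socket returned by accept() for an active FTP data connection */
    int keepalive = 1;
    setsockopt(fd, SOL_SOCKET, SO_KEEPALIVE,
               (void *)&keepalive, sizeof(keepalive));
  }
  return CURL_SOCKOPT_OK;
}

/* registered with:
   curl_easy_setopt(easy, CURLOPT_SOCKOPTFUNCTION, sockopt_cb); */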
Commit e351972bc8 brought in the ssh agent support but some uses of
the libssh2 agent API were done unconditionally, which wasn't good
enough since that API hasn't always been present.
By reading the ->head pointer and using that instead of the ->size
number to figure out if there's a list remaining we avoid the (false
positive) clang-analyzer warning that we might dereference a null
pointer.
I suspect this is a regression introduced in commit 207cf150, included
since 7.24.0.
Avoid showing '(nil)' as hostname in verbose output by making sure the
hostname fixup function is called early enough to set the pointers that
are used for this. The name data is set again for each request even for
re-used connections to handle multiple hostnames over the same
connection (like with proxy) or that the casing etc of the host name is
changed between requests (which has proven to be important at least once
in the past).
Test 1011 was modified to use a redirect with a re-used connection
since it then showed the bug and now no longer does. There's currently
no easy way to have the test suite detect 'nil' texts in verbose
outputs, so no tests will detect if this problem gets reintroduced.
Bug: http://curl.haxx.se/mail/lib-2012-07/0111.html
Reported by: Gisle Vanem
We found a problem with ftp transfer using libcurl (7.23 and 7.25)
inside an application which is receiving unix signals (SIGUSR1,
SIGUSR2...) almost continuously. (Linux 2.4, PowerPC, HAVE_POLL_FINE
defined).
Curl_socket_check() uses poll() to wait for the socket, and retries it
when a signal is received (EINTR). However, if a signal is received and
it also happens that the timeout has been reached, Curl_socket_check()
returns -1 instead of 0 (indicating an error instead of a timeout).
In our case, the result is an aborted connection even before the ftp
banner is received from the server, and a return value of
CURLE_OUT_OF_MEMORY from curl_easy_perform() (Curl_pp_multi_statemach(),
in pingpong.c, actually returns OOM if Curl_socket_check() fails :-)
Funny to debug on a system on which OOM is a possible cause).
Bug: http://curl.haxx.se/mail/lib-2012-07/0122.html
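A self-contained sketch of the intended behaviour (not curl's actual
code): keep retrying poll() after EINTR, but once the overall timeout
has elapsed, report a timeout (0) rather than an error (-1).

#include <poll.h>
#include <errno.h>
#include <time.h>

static long elapsed_ms(const struct timespec *start)
{
  struct timespec now;
  clock_gettime(CLOCK_MONOTONIC, &now);
  return (now.tv_sec - start->tv_sec) * 1000 +
         (now.tv_nsec - start->tv_nsec) / 1000000;
}

int wait_readable(int fd, long timeout_ms)
{
  struct timespec start;
  struct pollfd pfd;
  long left;
  int rc;

  pfd.fd = fd;
  pfd.events = POLLIN;
  pfd.revents = 0;
  clock_gettime(CLOCK_MONOTONIC, &start);

  for(;;) {
    left = timeout_ms - elapsed_ms(&start);
    if(left <= 0)
      return 0;                /* timeout reached: not an error */
    rc = poll(&pfd, 1, (int)left);
    if(rc >= 0)
      return rc;               /* 0 == timeout, >0 == readable */
    if(errno != EINTR)
      return -1;               /* a real error */
    /* interrupted by a signal: loop and retry with the time left */
  }
}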
Due to WSAPoll bugs, libcurl does not work as intended. When the cURL
library is used to set up a connection to an incorrect port, normally the
result is CURLE_COULDNT_CONNECT, /* 7 */, but due to the bug in WSAPoll,
the result now is CURLE_OPERATION_TIMEDOUT, /* 28 - the timeout time was
reached */.
On August 1, Jan Koen Annot opened a case for this to Microsoft Premier
Online (https://premier.microsoft.com/). The support engineer handling
the case wrote that the case description is quite clear. He will try to
reproduce the issue and then proceed with troubleshooting it.
Reported by: Jan Koen Annot
Bug: http://curl.haxx.se/mail/lib-2012-07/0310.html
When figuring out if the data stream needs to be rewound when the
request is to be resent, we must not access the HTTP struct unless the
protocol used is indeed HTTP...
Bug: http://curl.haxx.se/bug/view.cgi?id=3544688