In the old line number 290, CC and CURL_CC had the same value. After
that, /DCURL_STATICLIB was added to CC but not CURL_CC (intended?).
This gets rid of the CC variable entirely. It is a first step toward making
it possible to manually set a CC variable in order to change the compiler.
All compilers used by CMake on Windows should support large files.
- Add test SIZEOF_OFF_T
- Remove outdated test SIZEOF_CURL_OFF_T
- Turn on USE_WIN32_LARGE_FILES in Windows
- Check for 'Largefile' during the features output
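As an aside (not part of the change itself), the same capability can be
checked at run time through the public curl_version_info() API, whose
CURL_VERSION_LARGEFILE bit corresponds to the "Largefile" feature string:

  #include <stdio.h>
  #include <curl/curl.h>

  int main(void)
  {
    curl_version_info_data *info = curl_version_info(CURLVERSION_NOW);
    printf("Largefile: %s\n",
           (info->features & CURL_VERSION_LARGEFILE) ? "yes" : "no");
    return 0;
  }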
Since the server can at any time send an HTTP/2 frame to us, we need to
wait for the socket to be readable during all transfers so that we can
act on incoming frames even when uploading etc.
Reminded-by: Tatsuhiro Tsujikawa
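A generic sketch of the idea (not libcurl's internal code): keep waiting
for readability even while busy sending, so frames the server pushes at any
time (SETTINGS, PING, WINDOW_UPDATE, ...) are noticed promptly:

  #include <poll.h>

  /* always include POLLIN so unsolicited frames are seen, even mid-upload */
  static int wait_on_socket(int sockfd, int uploading, int timeout_ms)
  {
    struct pollfd pfd;
    pfd.fd = sockfd;
    pfd.events = POLLIN;
    if(uploading)
      pfd.events |= POLLOUT;
    return poll(&pfd, 1, timeout_ms);
  }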
In order to make MBEDTLS_DEBUG work, the debug threshold must be non-zero.
This patch also adds a comment on how mbedTLS must be compiled for
debugging to work, and explains the possible debug levels.
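A minimal sketch of the threshold part, assuming an mbedTLS built with
MBEDTLS_DEBUG_C defined (mbedtls_debug_set_threshold() is mbedTLS' own API):

  #include <mbedtls/debug.h>

  /* 0 = no output, 1 = errors, 2 = state changes, 3 = informational,
     4 = verbose; the value must be non-zero to see anything at all */
  static void enable_mbedtls_debug_output(void)
  {
    mbedtls_debug_set_threshold(3);
  }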
After a few wasted hours hunting down the reason for slowness during a
TLS handshake that turned out to be because of TCP_NODELAY not being
set, I think we have enough motivation to toggle the default for this
option. We now enable TCP_NODELAY by default and allow applications to
switch it off.
This also makes --tcp-nodelay unnecessary, but --no-tcp-nodelay can be
used to disable it.
Thanks-to: Tim Rühsen
Bug: https://curl.haxx.se/mail/lib-2016-06/0143.html
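For applications that want the old Nagle-enabled behaviour back, a short
sketch using the existing CURLOPT_TCP_NODELAY option (URL is a placeholder):

  #include <curl/curl.h>

  static void fetch_with_nagle(void)
  {
    CURL *curl = curl_easy_init();
    if(curl) {
      curl_easy_setopt(curl, CURLOPT_URL, "https://example.com/");
      /* TCP_NODELAY is now on by default; 0L switches it off again */
      curl_easy_setopt(curl, CURLOPT_TCP_NODELAY, 0L);
      curl_easy_perform(curl);
      curl_easy_cleanup(curl);
    }
  }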
When curl's input stream is stdin and that stream is not a file but is
generated by a script, curl can truncate the data transfer at an arbitrary
point, since TFTP treats a partial packet as the end of the transfer.
Fixes #857
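An illustrative sketch of the fix idea (not the actual TFTP code): keep
reading from the pipe until the block is full or a real EOF arrives, so a
short read is not mistaken for the end of the transfer:

  #include <stdio.h>

  static size_t fill_block(char *buf, size_t len, FILE *in)
  {
    size_t got = 0;
    while(got < len) {
      size_t n = fread(buf + got, 1, len - got, in);
      if(n == 0)
        break; /* genuine EOF or read error */
      got += n;
    }
    return got;
  }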
Previously, passing a timeout of zero to Curl_expire() was a magic value
for clearing all timeouts for the handle. That is now instead done with the
new Curl_expire_clear() function, so a 0 timeout is fine to set and will
trigger a timeout ASAP.
This will help remove short delays, which are particularly noticeable when
doing HTTP/2.
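A sketch of the new internal semantics (internal API, not public; exact
signatures may differ):

  Curl_expire(data, 0);      /* now: expire as soon as possible */
  Curl_expire_clear(data);   /* new: wipe all timeouts for this handle */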
Regression added in 790d6de485. That limit was added to avoid one
particular transfer from starving out others. But when aborting because the
maxcount was reached, the connection must be marked to be read from again
without first doing a select, as for some protocols (like SFTP/SCP) the
data may already have been read off the socket.
Reported-by: Dan Donahue
Bug: https://curl.haxx.se/mail/lib-2016-07/0057.html
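A generic illustration of why (not curl's code, using OpenSSL's
SSL_pending() as an example of a layered protocol): data may already sit
decrypted in the library's buffer, in which case going back to select() on
the socket would stall:

  #include <openssl/ssl.h>

  /* if the layer has buffered data, read again right away instead of
     waiting on the socket first */
  static int more_buffered_data(SSL *ssl)
  {
    return SSL_pending(ssl) > 0;
  }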
If a call to GetSystemDirectory fails, the `path` pointer that was
previously allocated would be leaked. This makes sure that `path` is
always freed.
Closes #938
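An illustrative sketch of the fix pattern (hypothetical helper, not the
exact libcurl code):

  #include <windows.h>
  #include <stdlib.h>

  static char *system_dir(void)
  {
    char *path = malloc(MAX_PATH);
    if(!path)
      return NULL;
    if(!GetSystemDirectoryA(path, MAX_PATH)) {
      free(path);  /* previously leaked on this failure path */
      return NULL;
    }
    return path;
  }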
Many applications assume the actual contents of the public types and use
that to, for example, make forward declarations (saving them from including
our public header), which then breaks when we switch from void * to a
struct *.
I'm not convinced we were wrong, but since this practice seems widespread
enough I'm willing to (partly) step down.
Now libcurl uses the struct itself when it is built and it allows
applications to use the struct type if CURL_STRICTER is defined at the
time of the #include.
Reported-by: Peter Frühberger
Fixes#926