Since Curl_pgrsDone() itself calls Curl_pgrsUpdate(), which may return an
abort instruction or similar, we need to return that info back and then
properly handle the return code from Curl_pgrsDone() wherever it is
used.
(Spotted by a Coverity scan)
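A minimal sketch of the call-site pattern this implies, assuming
Curl_pgrsDone() now returns non-zero when the progress callback wants to
abort (the declaration below is only a stand-in for the real one in
progress.h):

  #include <curl/curl.h>

  struct connectdata;                        /* opaque in this sketch */
  int Curl_pgrsDone(struct connectdata *conn);

  static CURLcode transfer_done(struct connectdata *conn)
  {
    if(Curl_pgrsDone(conn))
      return CURLE_ABORTED_BY_CALLBACK;      /* pass the abort on */
    return CURLE_OK;
  }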
Some functions using getaddrinfo and gethostbyname were still
mistakenly being used/linked even if c-ares was selected as the resolver
backend.
Reported by: Arthur Murray
Bug: http://curl.haxx.se/mail/lib-2012-01/0160.html
By setting PROTOPT_NOURLQUERY in the protocol handler struct, the
protocol will get the "query part" of the URL cut off before the data is
handled by the protocol-specific code. This makes libcurl adhere to
RFC3986 section 2.2.
Test 1220 is added to verify a file:// URL with query-part.
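A simplified illustration of the idea, not the real Curl_handler layout:
a per-handler flag lets the generic code strip the query part before the
protocol-specific code ever sees the path (the bit value and struct here
are assumptions for illustration).

  #include <string.h>

  #define PROTOPT_NOURLQUERY (1<<0)     /* bit value is an assumption */

  struct proto_handler {                /* stand-in for Curl_handler */
    const char *scheme;
    unsigned int flags;
  };

  static const struct proto_handler file_handler = { "file", PROTOPT_NOURLQUERY };

  /* generic URL handling: cut off "?query" when the handler asks for it */
  static void prepare_path(const struct proto_handler *h, char *path)
  {
    if(h->flags & PROTOPT_NOURLQUERY) {
      char *q = strchr(path, '?');
      if(q)
        *q = '\0';
    }
  }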
After a PORT has been issued, and the multi handle would switch to the
CURLM_STATE_DO_MORE state (which is unique for FTP), libcurl would
return the wrong fdset to wait for when curl_multi_fdset() is
called. The code would blindly assume that it was waiting for a connect
of the second connection, while that isn't true immediately after the
PORT command.
Also, the function multi.c:domore_getsock() was highly FTP-centric and
therefore ugly to keep in protocol-agnostic code. I solved this problem
by introducing a new function pointer in the Curl_handler struct called
domore_getsock() which is only called during the DOMORE state for
protocols that set that pointer.
The new ftp.c:ftp_domore_getsock() function now returns fdset info about
the control connection's command/response handling while such a state is
in use, and goes over to waiting for a writable second connection first
once the commands are done.
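A simplified sketch of what such a callback reports, using stand-in
structs and bitmask helpers modeled on libcurl's internal GETSOCK_*
macros; the real ftp_domore_getsock() in ftp.c is more involved.

  #include <curl/curl.h>                        /* curl_socket_t */

  #define SK_READSOCK(x)  (1 << (x))            /* modeled on GETSOCK_READSOCK */
  #define SK_WRITESOCK(x) (1 << ((x) + 16))     /* modeled on GETSOCK_WRITESOCK */

  struct domore_sketch {
    curl_socket_t ctrl_sock;    /* FTP control connection */
    curl_socket_t data_sock;    /* the second (data) connection */
    int commands_pending;       /* still doing command/response traffic */
  };

  static int domore_getsock_sketch(const struct domore_sketch *c,
                                   curl_socket_t *socks)
  {
    if(c->commands_pending) {
      socks[0] = c->ctrl_sock;
      return SK_READSOCK(0);    /* wait for the server's response */
    }
    socks[0] = c->data_sock;
    return SK_WRITESOCK(0);     /* then wait for a writable second connection */
  }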
The original problem could be seen by running test 525 and checking the
time stamps in the FTP server log. I have verified that this fix
addresses at least that problem.
Bug: http://curl.haxx.se/mail/lib-2011-10/0250.html
Reported by: Gokhan Sengun
Set the ACK timeout to 5 seconds.
If we are waiting for block X and receive block Y, and Y is the expected
block, we should send an ACK and increase X (which is already implemented).
Otherwise, drop the packet and don't increase the retry counter.
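A minimal sketch of that rule; the struct, the field names and send_ack()
are stand-ins for illustration only.

  void send_ack(unsigned short block);   /* stand-in for the real ACK send */

  struct tftp_rx_sketch {
    unsigned short block;   /* the block number we are waiting for */
    int retries;            /* retransmit counter */
  };

  static void handle_data_block(struct tftp_rx_sketch *s, unsigned short rblock)
  {
    if(rblock == s->block) {
      send_ack(rblock);                    /* ACK the expected block */
      s->block = (s->block + 1) & 0xffff;  /* advance with 16-bit wrap */
      s->retries = 0;
    }
    /* any other block: drop the packet and leave the retry counter alone */
  }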
The PROT_* set of internal defines for the protocols is no longer
used. We now use the same bits internally as we have defined in the
public header using the CURLPROTO_ prefix. This is for simplicity and
because the PROT_* prefix was already duplicated internally for a set of
KRB4 values.
The PROTOPT_* defines were moved up to just below the struct definition
within which they are used.
The protocol handler struct got a 'flags' field for special information
and characteristics of the given protocol.
This now enables us to move central protocol information such as
CLOSEACTION and DUALCHANNEL away from single defines in a central place
and out to each protocol's definition. It also made us stop abusing the
protocol field for other info than the protocol, and we could start
cleaning up other protocol-specific things by adding flag bits to set in
the handler struct.
The "protocol" field connectdata struct was removed as well and the code
now refers directly to the conn->handler->protocol field instead. To
make things work properly, the code now always store a conn->given
pointer that points out the original handler struct so that the code can
learn details from the original protocol even if conn->handler is
modified along the way - for example when switching to go over a HTTP
proxy.
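A small illustration of the conn->given idea with stand-in structs; the
real connectdata and Curl_handler have many more members.

  #include <curl/curl.h>                 /* CURLPROTO_* bits */

  struct handler_sketch { unsigned int protocol; };

  struct conn_sketch {
    const struct handler_sketch *handler; /* may be swapped, e.g. for a proxy */
    const struct handler_sketch *given;   /* always the original handler */
  };

  static int originally_ftp(const struct conn_sketch *conn)
  {
    /* ask the original handler, even if conn->handler changed later */
    return (conn->given->protocol & CURLPROTO_FTP) != 0;
  }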
It helps to prevent a hangup with some FTP servers when the idle session
timeout has been exceeded. It may also be useful for other protocols
that send a quit message on disconnect. Currently used by FTP, POP3,
IMAP and SMTP.
tftpd-hpa has a bug where it will send an incorrect ack when the block
counter wraps and tftp options have been sent. Work around that by
accepting an ack for 65535 when we're expecting one for 0.
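A sketch of the acceptance check, assuming 'block' is the expected block
number and 'rblock' is the one in the received ack:

  static int ack_is_acceptable(unsigned short block, unsigned short rblock)
  {
    if(rblock == block)
      return 1;
    /* tftpd-hpa workaround: after the counter wraps (with options sent)
       it may ack 65535 when we expect block 0 */
    if(block == 0 && rblock == 65535)
      return 1;
    return 0;
  }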
Older unixes want an 'int' instead of 'size_t' as the 3rd
argument, so before this change it would cause warnings such as:
There is an implicit conversion from "unsigned long" to "int";
rounding, sign extension, or loss of accuracy may result.
Eric Mertens posted bug #3003705: when we made TFTP send the timeout
option to the server correctly (fixed May 18th 2010), it became obvious
that libcurl used invalid timeout values (300 by default, while the RFC
allows nothing above 255). Of course, since TFTP has worked thus far
without being able to set the timeout at all, simply removing the
setting wouldn't have made any difference in behavior. I decided to keep
it (but fix the problem) as it now actually allows for easier (future)
customization of the timeout.
(http://curl.haxx.se/bug/view.cgi?id=3003705)
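A minimal sketch of keeping the advertised timeout inside the range the
RFC permits (1-255 seconds); the function name is an assumption, not the
actual tftp.c code.

  static int clamp_tftp_timeout(int seconds)
  {
    if(seconds < 1)
      return 1;        /* the option must be at least 1 second */
    if(seconds > 255)
      return 255;      /* and never above 255 */
    return seconds;
  }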
In a normal expression, doing [unsigned short] + 1 will not wrap
at 16 bits, so the comparisons and outputs were done wrong. I
added a macro to make sure it gets done right.
Douglas Kilpatrick filed bug report #3004787 about it:
http://curl.haxx.se/bug/view.cgi?id=3004787
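A self-contained demonstration of the promotion problem and the masking
fix; NEXT_BLOCKNUM is modeled on the macro added to tftp.c, so treat the
exact name as an assumption.

  #include <stdio.h>

  #define NEXT_BLOCKNUM(x) (((x) + 1) & 0xffff)

  int main(void)
  {
    unsigned short block = 65535;

    /* integer promotion: the sum is an int, so it never wraps to 0 */
    printf("plain + 1:     %d\n", block + 1);              /* prints 65536 */
    printf("NEXT_BLOCKNUM: %d\n", NEXT_BLOCKNUM(block));   /* prints 0 */
    return 0;
  }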
Eric Mertens posted bug report #3003005 pointing out that the
libcurl TFTP code was not sending the timeout option properly to
the server, and suggested a fix.
(http://curl.haxx.se/bug/view.cgi?id=3003005)
Error codes were not properly returned to the main curl code (and on to apps
using libcurl).
TFTP was bailing out when tsize == 0 on upload, but I see no reason to
fail an upload just because the remote file is zero-length. Ignore the
tsize option on upload.
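A sketch of the changed check with assumed names: only treat a zero
tsize as an error when downloading, and ignore it entirely on upload.

  /* 'uploading' and 'tsize' are stand-ins for the real state fields */
  static int tsize_is_bad(int uploading, unsigned long tsize)
  {
    if(!uploading && tsize == 0)
      return 1;    /* the old zero-size check only applies to downloads */
    return 0;      /* on upload, the tsize option is ignored */
  }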
Rework patch that now integrates TFTP properly into libcurl so that it
can be used non-blocking with the multi interface and more. BLKSIZE also
works. The --tftp-blksize option was added to allow setting the TFTP
BLKSIZE from the command line.
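For example, the corresponding library option (CURLOPT_TFTP_BLKSIZE) can
be used like this; the URL and the chosen block size are placeholders.

  #include <curl/curl.h>

  int main(void)
  {
    CURL *curl = curl_easy_init();
    CURLcode res = CURLE_OK;

    if(curl) {
      curl_easy_setopt(curl, CURLOPT_URL, "tftp://example.com/file.bin");
      curl_easy_setopt(curl, CURLOPT_TFTP_BLKSIZE, 1400L);  /* default is 512 */
      res = curl_easy_perform(curl);
      curl_easy_cleanup(curl);
    }
    return (int)res;
  }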
sending of the TSIZE option. I don't like fixing bugs just hours before
a release, but since it was broken and the patch fixes this for him I decided
to get it in anyway.