commit eb6a14fe10
parent 2912537533

    updated

 docs/TODO | 27 +++++++--------------------
 1 file changed, 7 insertions(+), 20 deletions(-)
--- a/docs/TODO
+++ b/docs/TODO
@@ -15,7 +15,8 @@ TODO
  * Introduce an interface to libcurl that allows applications to easier get to
    know what cookies that are received. Pushing interface that calls a
    callback on each received cookie? Querying interface that asks about
-   existing cookies? We probably need both.
+   existing cookies? We probably need both. Enable applications to modify
+   existing cookies as well.

  * Make content encoding/decoding internally be made using a filter system.

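The cookie wish above was later addressed by the CURLINFO_COOKIELIST and CURLOPT_COOKIELIST options (libcurl 7.14.1); they did not exist when this entry was written. A minimal sketch of the querying and modifying side, assuming a libcurl new enough to have them:

    #include <stdio.h>
    #include <curl/curl.h>

    /* Sketch: list the cookies an easy handle knows about and add/overwrite
     * one.  CURLINFO_COOKIELIST and CURLOPT_COOKIELIST arrived in libcurl
     * 7.14.1, i.e. after this TODO entry was written.  The cookie engine
     * must be enabled first (e.g. CURLOPT_COOKIEFILE set to ""). */
    static void dump_and_modify_cookies(CURL *curl)
    {
      struct curl_slist *cookies = NULL;
      struct curl_slist *item;

      /* querying interface: all known cookies as Netscape-format lines */
      if(curl_easy_getinfo(curl, CURLINFO_COOKIELIST, &cookies) == CURLE_OK) {
        for(item = cookies; item; item = item->next)
          printf("cookie: %s\n", item->data);
        curl_slist_free_all(cookies);
      }

      /* modifying interface: add or overwrite a single cookie */
      curl_easy_setopt(curl, CURLOPT_COOKIELIST,
                       "example.com\tFALSE\t/\tFALSE\t0\tname\tvalue");
    }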
@@ -23,13 +24,6 @@ TODO
    less copy of data and thus a faster operation.
    [http://curl.haxx.se/dev/no_copy_callbacks.txt]

- * Run-time querying about library characterics. What protocols do this
-   running libcurl support? What is the version number of the running libcurl
-   (returning the well-defined version-#define). This could possibly be made
-   by allowing curl_easy_getinfo() work with a NULL pointer for global info,
-   but perhaps better would be to introduce a new curl_getinfo() (or similar)
-   function for global info reading.
-
  * Add asynchronous name resolving (http://daniel.haxx.se/resolver/). This
    should be made to work on most of the supported platforms, or otherwise it
    isn't really interesting.
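The run-time querying item removed here later materialized as curl_version_info() (libcurl 7.10). A minimal sketch of answering the questions the entry raises, assuming that call is available:

    #include <stdio.h>
    #include <curl/curl.h>

    /* Sketch: ask the running libcurl what it is and what it supports.
     * curl_version_info() postdates this TODO entry. */
    int main(void)
    {
      curl_version_info_data *info = curl_version_info(CURLVERSION_NOW);
      const char * const *proto;

      printf("libcurl %s (0x%06x)\n", info->version, info->version_num);
      for(proto = info->protocols; *proto; proto++)
        printf("supports: %s\n", *proto);
      return 0;
    }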
@@ -51,12 +45,9 @@ TODO
    >4GB all over. Bug reports (and source reviews) indicate that it doesn't
    currently work properly.

- * Make the built-in progress meter use its own dedicated output stream, and
-   make it possible to set it. Use stderr by default.
-
  * CURLOPT_MAXFILESIZE. Prevent downloads that are larger than the specified
    size. CURLE_FILESIZE_EXCEEDED would then be returned. Gautam Mani
-   requested. That is, the download should even begin but be aborted
+   requested. That is, the download should not even begin but be aborted
    immediately.

  * Allow the http_proxy (and other) environment variables to contain user and
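CURLOPT_MAXFILESIZE and CURLE_FILESIZE_EXCEEDED were added in a later release (7.10.8). A minimal sketch of the behaviour the entry describes, with a placeholder URL and a 1 MB limit:

    #include <stdio.h>
    #include <curl/curl.h>

    /* Sketch: refuse the download up front if the remote file is too big.
     * Only effective when the remote size is known before the transfer
     * (e.g. from Content-Length or FTP SIZE); the URL is a placeholder. */
    int main(void)
    {
      CURL *curl = curl_easy_init();
      CURLcode res;

      curl_easy_setopt(curl, CURLOPT_URL, "http://example.com/big.bin");
      curl_easy_setopt(curl, CURLOPT_MAXFILESIZE, 1024L * 1024L);

      res = curl_easy_perform(curl);
      if(res == CURLE_FILESIZE_EXCEEDED)
        fprintf(stderr, "refused: remote file exceeds the size limit\n");

      curl_easy_cleanup(curl);
      return 0;
    }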
@@ -66,8 +57,7 @@ TODO
  LIBCURL - multi interface

  * Make sure we don't ever loop because of non-blocking sockets return
-   EWOULDBLOCK or similar. This concerns the HTTP request sending (and
-   especially regular HTTP POST), the FTP command sending etc.
+   EWOULDBLOCK or similar. This FTP command sending etc.

  * Make uploads treated better. We need a way to tell libcurl we have data to
    write, as the current system expects us to upload data each time the socket
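The looping concern above is about how an application drives the multi interface: when a socket would block, the right move is to wait in select() on the descriptors libcurl hands back, not to call curl_multi_perform() in a tight loop. A minimal sketch using only calls the multi interface already provided:

    #include <sys/select.h>
    #include <curl/curl.h>

    /* Sketch: run one prepared easy handle through the multi interface
     * without busy looping.  When curl_multi_perform() has no more work
     * right now, block in select() until a socket becomes ready. */
    static void run_one(CURL *easy)
    {
      CURLM *multi = curl_multi_init();
      int running = 1;

      curl_multi_add_handle(multi, easy);

      while(running) {
        fd_set rd, wr, ex;
        int maxfd = -1;
        struct timeval timeout = { 1, 0 };  /* wake up at least once a second */

        while(curl_multi_perform(multi, &running) == CURLM_CALL_MULTI_PERFORM)
          ;  /* libcurl has more to do immediately */

        FD_ZERO(&rd); FD_ZERO(&wr); FD_ZERO(&ex);
        curl_multi_fdset(multi, &rd, &wr, &ex, &maxfd);
        if(running && maxfd >= 0)
          select(maxfd + 1, &rd, &wr, &ex, &timeout);
      }

      curl_multi_remove_handle(multi, easy);
      curl_multi_cleanup(multi);
    }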
@@ -86,6 +76,9 @@ TODO
    receiver will convert the data from the standard form to his own internal
    form."

+ * Since USERPWD always override the user and password specified in URLs, we
+   might need another way to specify user+password for anonymous ftp logins.
+
  * An option to only download remote FTP files if they're newer than the local
    one is a good idea, and it would fit right into the same syntax as the
    already working http dito works. It of course requires that 'MDTM' works,
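The added USERPWD item documents an existing precedence rule: CURLOPT_USERPWD, when set, wins over credentials embedded in the URL. A minimal sketch of that interaction, with placeholder host and credentials:

    #include <curl/curl.h>

    /* Sketch of the precedence the added item describes: the USERPWD option
     * overrides the user:password part of the URL.  Host and credentials
     * below are placeholders. */
    static void ftp_login_example(void)
    {
      CURL *curl = curl_easy_init();

      /* credentials in the URL... */
      curl_easy_setopt(curl, CURLOPT_URL,
                       "ftp://anonymous:curl@ftp.example.com/file.txt");

      /* ...are overridden by CURLOPT_USERPWD when it is also set */
      curl_easy_setopt(curl, CURLOPT_USERPWD, "someuser:secret");

      curl_easy_perform(curl);
      curl_easy_cleanup(curl);
    }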
@@ -103,12 +96,6 @@ TODO
    also prevents the authentication info from getting sent when following
    locations to legitimate other host names.

- * "Content-Encoding: compress/gzip/zlib" HTTP 1.1 clearly defines how to get
-   and decode compressed documents. There is the zlib that is pretty good at
-   decompressing stuff. This work was started in October 1999 but halted again
-   since it proved more work than we thought. It is still a good idea to
-   implement though. This requires the filter system mentioned above.
-
  * Authentication: NTLM. Support for that MS crap called NTLM
    authentication. MS proxies and servers sometime require that. Since that
    protocol is a proprietary one, it involves reverse engineering and network
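The removed Content-Encoding entry is about decoding gzip bodies with zlib; the heart of such a decoding filter is inflate() opened in gzip mode. A minimal sketch independent of libcurl, assuming zlib 1.2 or newer (windowBits of MAX_WBITS + 16 selects gzip framing), with error handling kept deliberately thin:

    #include <string.h>
    #include <zlib.h>

    /* Sketch of the decompression half of the filter idea: decode one
     * complete gzip stream from 'in' into 'out'.  Returns the number of
     * decoded bytes, or -1 on error (including 'out' being too small). */
    static int gunzip_buffer(const unsigned char *in, size_t inlen,
                             unsigned char *out, size_t outlen)
    {
      z_stream z;
      int decoded;

      memset(&z, 0, sizeof(z));
      /* MAX_WBITS + 16 tells zlib to expect a gzip header and trailer */
      if(inflateInit2(&z, MAX_WBITS + 16) != Z_OK)
        return -1;

      z.next_in = (unsigned char *)in;
      z.avail_in = (uInt)inlen;
      z.next_out = out;
      z.avail_out = (uInt)outlen;

      if(inflate(&z, Z_FINISH) != Z_STREAM_END) {
        inflateEnd(&z);
        return -1;
      }
      decoded = (int)(outlen - z.avail_out);
      inflateEnd(&z);
      return decoded;
    }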