
TODO: 1.8 Modified buffer size approach

Thoughts around buffer sizes and what might be possible to do...
Daniel Stenberg 2013-06-23 22:48:39 +02:00
parent ad47d8e263
commit d23745f7c9


@@ -18,6 +18,7 @@
1.5 get rid of PATH_MAX
1.6 progress callback without doubles
1.7 Happy Eyeball dual stack connect
1.8 Modified buffer size approach
2. libcurl - multi interface
2.1 More non-blocking
@@ -178,6 +179,28 @@
http://tools.ietf.org/html/rfc6555
1.8 Modified buffer size approach
Current libcurl allocates a fixed 16K buffer for download and an additional
16K for upload. They are always unconditionally part of the easy handle. If
CRLF translations are requested, an additional 32K "scratch buffer" is
allocated, for a total of 64K of transfer buffers in the worst case.
First, these buffers could be freed while the handles are not actually in
use, so that lingering handles that are just kept around in queues waste less
memory.
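
A hedged sketch of that first idea, again with made-up names: allocate the
buffers only when a transfer starts and release them when it ends, so an idle
handle holds no buffer memory at all:

  #include <stdlib.h>

  struct xfer_bufs {            /* hypothetical holder, not libcurl's */
    char *download_buf;
    char *upload_buf;
    size_t bufsize;
  };

  int transfer_begin(struct xfer_bufs *b, size_t size)
  {
    b->download_buf = malloc(size);
    b->upload_buf = malloc(size);
    if(!b->download_buf || !b->upload_buf) {
      free(b->download_buf);
      free(b->upload_buf);
      b->download_buf = b->upload_buf = NULL;
      return 1;                 /* out of memory */
    }
    b->bufsize = size;
    return 0;
  }

  void transfer_done(struct xfer_bufs *b)
  {
    /* handles lingering in queues now carry only NULL pointers */
    free(b->download_buf);
    free(b->upload_buf);
    b->download_buf = b->upload_buf = NULL;
    b->bufsize = 0;
  }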
Secondly, SFTP is a protocol that needs to handle many ~30K blocks at once,
since each block needs to be individually acked, and libssh2 must therefore
be allowed to send (or receive) many separate blocks in parallel to achieve
high transfer speeds. A current libcurl build with a 16K buffer makes that
impossible, but one with a 512K buffer will reach MUCH faster transfers. But
allocating 512K unconditionally for all handles, just in case they would like
to do fast SFTP transfers at some point, is not a good solution either.
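
For reference, the buffer handed to libssh2 is what bounds how many SFTP
blocks can be in flight: libssh2_sftp_read() pipelines read requests within
the buffer it is given, so a simplified receive loop like this (assuming an
already-opened SFTP handle) goes much faster with a 512K buffer than a 16K
one:

  #include <libssh2.h>
  #include <libssh2_sftp.h>

  /* with a large 'len', libssh2 can keep many ~30K blocks outstanding;
     with 16K it is limited to roughly one block at a time */
  void drain(LIBSSH2_SFTP_HANDLE *sftp, char *buf, size_t len)
  {
    ssize_t n;
    while((n = libssh2_sftp_read(sftp, buf, len)) > 0) {
      /* hand the n received bytes to the application ... */
    }
  }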
Dynamically allocate the buffer size depending on the protocol in use, in
combination with freeing it after each individual transfer? Other
suggestions?
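
One possible shape of the combined idea, sketched with assumed names and
sizes rather than a decided design: pick the buffer size from the protocol
about to be used, then free the buffer after the transfer as in the sketch
further up:

  #include <stddef.h>

  enum protocol { PROTO_HTTP, PROTO_FTP, PROTO_SFTP };  /* illustrative */

  size_t choose_bufsize(enum protocol p)
  {
    switch(p) {
    case PROTO_SFTP:
      return 512 * 1024;  /* room for many parallel ~30K SFTP blocks */
    default:
      return 16 * 1024;   /* the current fixed size suffices elsewhere */
    }
  }

  /* e.g. transfer_begin(&bufs, choose_bufsize(PROTO_SFTP));
     ... run the transfer ...
     transfer_done(&bufs);    -- reusing the earlier sketch */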
2. libcurl - multi interface
2.1 More non-blocking