Similar to what we did in edd9ed6a, disconnect our stack-allocated error
buffer from the curl handle. Just as an FTP
connection might have some network chatter on teardown causing the
progress callback to be triggered, we might also hit an error condition
that causes curl to write to our (now out of scope) error buffer.
I'm unable to reproduce FS#26327, but I have a suspicion that this
should fix it.
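A minimal sketch of the idea (the wrapper name is hypothetical, not the
exact pacman code): once the transfer is done, point curl away from the
stack buffer so a late error cannot write into memory that has gone out
of scope.

    #include <curl/curl.h>

    static void fetch_with_scoped_errbuf(CURL *curl)
    {
        char error_buffer[CURL_ERROR_SIZE] = {0};
        curl_easy_setopt(curl, CURLOPT_ERRORBUFFER, error_buffer);

        if(curl_easy_perform(curl) != CURLE_OK) {
            /* error_buffer is still in scope here and safe to read */
        }

        /* detach the buffer before it goes out of scope so a late write
         * (e.g. during connection teardown) cannot hit dead stack memory */
        curl_easy_setopt(curl, CURLOPT_ERRORBUFFER, NULL);
    }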
Signed-off-by: Dave Reisner <dreisner@archlinux.org>
Signed-off-by: Dan McGee <dan@archlinux.org>
This is a poor place for it, and it will likely move again in the
future, but it's better to have it here than as a static variable.
Initialization of this variable is no longer necessary as it is zeroed
on creation of the payload struct.
Signed-off-by: Dave Reisner <dreisner@archlinux.org>
Signed-off-by: Dan McGee <dan@archlinux.org>
This was done to squash a memory leak in the sync database download
code. When we downloaded a database and then reused the payload struct,
we could find ourselves calling get_fullpath() for the signatures and
overwriting non-freed values we had left over from the database
download.
Refactor the payload_free function into a payload_reset function that
does NOT free the payload itself, so we can reuse payload
structs. This also allows us to move the payload to the stack in some
call paths, relieving us of the need to alloc space.
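A rough sketch of the split (the member list is illustrative, not the
full struct): payload_reset() frees and zeroes the members so the struct
can be reused or live on the stack, while payload_free() additionally
frees the struct itself.

    #include <stdlib.h>
    #include <string.h>

    /* hypothetical payload shape, for illustration only */
    struct dload_payload {
        char *remote_name;
        char *destfile_name;
        char *tempfile_name;
    };

    static void payload_reset(struct dload_payload *payload)
    {
        /* free members but leave the struct intact for reuse */
        free(payload->remote_name);
        free(payload->destfile_name);
        free(payload->tempfile_name);
        memset(payload, 0, sizeof(*payload));
    }

    static void payload_free(struct dload_payload *payload)
    {
        if(payload) {
            payload_reset(payload);
            free(payload);
        }
    }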
Signed-off-by: Dan McGee <dan@archlinux.org>
Rather than always initializing it on any handle creation. There are
several frontend operations (search, info, etc.) that never need the
download code, so spending time initializing this every single time is a
bit silly. This makes it a bit more like the GPGME code init path.
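A hedged sketch of the lazy-init pattern (flag and function names are
illustrative, not the exact code that landed):

    #include <curl/curl.h>

    static int curl_initialized = 0;

    /* initialize curl only when a download is actually about to happen,
     * rather than unconditionally at handle creation */
    static int lazy_curl_init(void)
    {
        if(!curl_initialized) {
            if(curl_global_init(CURL_GLOBAL_SSL) != 0) {
                return -1;
            }
            curl_initialized = 1;
        }
        return 0;
    }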
Signed-off-by: Dan McGee <dan@archlinux.org>
In the sync code, we explicitly allocated a string for this field, while
in the dload code itself it was filled in with a pointer to another
string. This led to a memory leak in the sync download case.
Make remote_name non-const and always explicitly allocate it. This patch
also switches to malloc + snprintf (rather than calloc) in several
codepaths, and eliminates the only use of PATH_MAX in the
download code.
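The allocation pattern is roughly the following (a sketch of the shape
of get_fullpath, not necessarily the exact call sites): compute the
exact length, malloc it, and snprintf into it instead of relying on a
PATH_MAX-sized buffer.

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    /* join path, filename and suffix into a freshly allocated string */
    static char *get_fullpath(const char *path, const char *filename,
            const char *suffix)
    {
        size_t len = strlen(path) + strlen(filename) + strlen(suffix) + 1;
        char *filepath = malloc(len);
        if(filepath) {
            snprintf(filepath, len, "%s%s%s", path, filename, suffix);
        }
        return filepath;
    }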
Signed-off-by: Dan McGee <dan@archlinux.org>
This function doesn't exist on OSX. Since there aren't any other
candidates in alpm for which this function would make sense to use,
simply replace the function call with a loop that does the equivalent.
Signed-off-by: Dave Reisner <dreisner@archlinux.org>
If ~/.netrc exists and has credentials for the hostname requested in a
download, they will be provided in an HTTP auth request. This can still
be overridden by explicitly declaring user:pass in the URL.
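The curl option involved looks roughly like this (a sketch, not the full
option setup; the wrapper name is hypothetical):

    #include <curl/curl.h>

    static void enable_netrc(CURL *curl)
    {
        /* let curl read credentials from ~/.netrc when the URL has none;
         * an explicit user:pass in the URL still takes precedence */
        curl_easy_setopt(curl, CURLOPT_NETRC, CURL_NETRC_OPTIONAL);
    }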
Signed-off-by: Dave Reisner <dreisner@archlinux.org>
Signed-off-by: Dan McGee <dan@archlinux.org>
This gives us some room to grow in case we ever find another reason to
return an error from the progress callback.
Signed-off-by: Dave Reisner <dreisner@archlinux.org>
We lost some of this output in the fetch->curl conversion, but I also
noticed in FS#25852 that we just lack some of this useful information
along the way.
Signed-off-by: Dave Reisner <dreisner@archlinux.org>
After commit 2e7d002315, we use off_t rather than long variables.
Use the _LARGE variants of the methods to indicate we are passing off_t
sized variables, and cast using (curl_off_t) accordingly.
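For example (a sketch assuming a hypothetical off_t resume offset):

    #include <curl/curl.h>
    #include <sys/types.h>

    static void set_resume_point(CURL *curl, off_t existing_size)
    {
        /* the _LARGE variant takes a curl_off_t, so the full off_t range
         * survives even where long is only 32 bits */
        curl_easy_setopt(curl, CURLOPT_RESUME_FROM_LARGE,
                (curl_off_t)existing_size);
    }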
Signed-off-by: Dan McGee <dan@archlinux.org>
This handles the no Content-Length header problem as stated in the
comments of FS#23413. We add a quick check to the callback that will
force an abort if the downloaded data exceeds the payload size, and then
check for this error in the post-download cleanup code.
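The shape of the check is roughly this (a sketch with hypothetical
names, registered via CURLOPT_PROGRESSFUNCTION; a non-zero return from
the callback makes curl abort the transfer):

    #include <sys/types.h>

    /* hypothetical payload carrying the size we expect from the sync DB */
    struct payload_sketch {
        off_t max_size;
    };

    static int progress_cb(void *file, double dltotal, double dlnow,
            double ultotal, double ulnow)
    {
        struct payload_sketch *payload = file;
        (void)dltotal; (void)ultotal; (void)ulnow;

        /* with no Content-Length header dltotal stays 0, so compare what
         * has actually arrived against the size we already know */
        if(payload->max_size && (off_t)dlnow > payload->max_size) {
            return 1;
        }
        return 0;
    }

On the cleanup side, curl_easy_perform() then reports
CURLE_ABORTED_BY_CALLBACK, which can be translated into a "file is
larger than expected" error.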
Signed-off-by: Dan McGee <dan@archlinux.org>
Beautiful of libcurl to use floating point types for what are never
fractional values. We can do better, and we usually want these values in
their integer form anyway.
Signed-off-by: Dan McGee <dan@archlinux.org>
Since we store this directly in the download function, just rework
mask_signal() to take a pointer to a location to store the original.
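A sketch of the reworked helpers (signatures approximate, not the exact
dload.c code):

    #include <signal.h>

    /* install a handler for the duration of the download, handing the
     * previous disposition back to the caller instead of a static copy */
    static void mask_signal(int sig, void (*handler)(int),
            struct sigaction *origaction)
    {
        struct sigaction newaction;
        newaction.sa_handler = handler;
        sigemptyset(&newaction.sa_mask);
        newaction.sa_flags = 0;
        sigaction(sig, &newaction, origaction);
    }

    static void unmask_signal(int sig, struct sigaction *origaction)
    {
        sigaction(sig, origaction, NULL);
    }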
Signed-off-by: Dan McGee <dan@archlinux.org>
This is a precursor to a following patch which will move the setting of
options to a separate function. With the open mode as part of the
struct, we can avoid modifying stack allocated variables.
Signed-off-by: Dave Reisner <dreisner@archlinux.org>
Signed-off-by: Dan McGee <dan@archlinux.org>
This is more in line with the menagerie of file name members that we now
have on the payload struct.
Signed-off-by: Dave Reisner <dreisner@archlinux.org>
Signed-off-by: Dan McGee <dan@archlinux.org>
These are private to the download operation already, so glob them onto
the struct. This is an ugly rename patch, with the only logical change
being that destfile and tempfile are now freed by the payload_free
function.
Signed-off-by: Dave Reisner <dreisner@archlinux.org>
Signed-off-by: Dan McGee <dan@archlinux.org>
This was a vestige left over from the libfetch days of yore.
Signed-off-by: Dave Reisner <dreisner@archlinux.org>
Signed-off-by: Dan McGee <dan@archlinux.org>
In the case of a non-operation (e.g. DNS resolver error), delete the
leftover 0 byte .part file.
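Roughly (a sketch with a hypothetical helper name): if nothing was ever
written there is no partial data worth keeping, so remove the empty
.part file on the error path.

    #include <sys/stat.h>
    #include <unistd.h>

    /* on failure, drop a .part file that never received any data */
    static void unlink_if_empty(const char *tempfile)
    {
        struct stat st;
        if(tempfile && stat(tempfile, &st) == 0 && st.st_size == 0) {
            unlink(tempfile);
        }
    }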
Signed-off-by: Dave Reisner <dreisner@archlinux.org>
Signed-off-by: Dan McGee <dan@archlinux.org>
This is a far more accurate description, since it's more than likely
not really a filename at all, but rather the component after the final
slash of a URL.
Signed-off-by: Dave Reisner <dreisner@archlinux.org>
Signed-off-by: Dan McGee <dan@archlinux.org>
FTP and HTTP both define response codes >= 400 as "something bad happened".
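In curl terms the check looks something like this (a sketch;
CURLINFO_RESPONSE_CODE covers both protocols):

    #include <curl/curl.h>

    static int transfer_failed(CURL *curl)
    {
        long response_code = 0;
        curl_easy_getinfo(curl, CURLINFO_RESPONSE_CODE, &response_code);
        /* both FTP and HTTP use >= 400 to mean "something bad happened" */
        return response_code >= 400;
    }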
Signed-off-by: Dave Reisner <dreisner@archlinux.org>
Signed-off-by: Dan McGee <dan@archlinux.org>
Let callers of _alpm_download state whether we should delete on fail,
rather than inferring it from context. We still override this decision
and always unlink when a temp file is used.
Signed-off-by: Dave Reisner <dreisner@archlinux.org>
Signed-off-by: Dan McGee <dan@archlinux.org>
Use the STRDUP macro instead of strdup() for the sake of better error
handling on memory allocation failures.
Signed-off-by: Lukas Fleischer <archlinux@cryptocrack.de>
Signed-off-by: Dan McGee <dan@archlinux.org>
Return with ALPM_ERR_WRONG_ARGS instead of causing a potential segfault
if alpm_fetch_pkgurl() is invoked with a NULL URL.
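The guard is essentially the following (a simplified sketch; the real
entry point uses alpm's error macros, and fetch_single_url is a
hypothetical helper standing in for the downloader):

    char *alpm_fetch_pkgurl(alpm_handle_t *handle, const char *url)
    {
        /* bail out early instead of dereferencing a NULL URL later on */
        if(url == NULL || *url == '\0') {
            handle->pm_errno = ALPM_ERR_WRONG_ARGS;
            return NULL;
        }

        /* ... hand the URL off to the downloader ... */
        return fetch_single_url(handle, url);
    }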
Signed-off-by: Lukas Fleischer <archlinux@cryptocrack.de>
Signed-off-by: Dan McGee <dan@archlinux.org>
This moves all the delete-on-fail logic under the cleanup label. This
also implies should_unlink when a payload is received that doesn't allow
resuming.
Fixes .db.sig.part files leftover in the sync dir.
Signed-off-by: Dave Reisner <dreisner@archlinux.org>
Signed-off-by: Dan McGee <dan@archlinux.org>
This leverages earlier work that avoids a rename when destfile is unset.
Signed-off-by: Dave Reisner <dreisner@archlinux.org>
Signed-off-by: Dan McGee <dan@archlinux.org>
stat()'s behaviour is undefined if the first argument is NULL and may
well segfault. Add an additional check to skip the stat()
invocation if no destfile is used.
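The fix amounts to short-circuiting before the stat() call (sketch, with
destfile_name standing in for the payload member):

    #include <sys/stat.h>

    static int dest_exists(const char *destfile_name)
    {
        struct stat st;
        /* skip the stat() entirely when no destination file is in use;
         * passing NULL to stat() is not safe */
        return destfile_name && stat(destfile_name, &st) == 0;
    }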
Signed-off-by: Lukas Fleischer <archlinux@cryptocrack.de>
Signed-off-by: Dan McGee <dan@archlinux.org>
Avoid a potential segfault that may occur if we use a temporary file and
fail to build the destination file name from the effective URL.
Signed-off-by: Lukas Fleischer <archlinux@cryptocrack.de>
Signed-off-by: Dan McGee <dan@archlinux.org>
This reverts some hacky behavior from 5fc3ec and resets the handle's
pm_errno where it should be reset -- prior to each download. This
prevents a transaction with a download from being aborted when a package
is successfully grabbed from a secondary server.
Signed-off-by: Dave Reisner <dreisner@archlinux.org>
Signed-off-by: Dan McGee <dan@archlinux.org>
Take this opportunity to refactor the if/then/else logic into a
switch/case, which is likely going to be needed to fine-tune more
exceptions in the future.
Fixes FS#25531
Signed-off-by: Dave Reisner <dreisner@archlinux.org>
Signed-off-by: Dan McGee <dan@archlinux.org>
This prevents possible null dereferences in FTP transfers when the
progress callback is called during connection teardown.
http://curl.haxx.se/mail/lib-2011-08/0128.html
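The defensive check in the progress callback looks roughly like this
(sketch; curl can invoke the callback with zeroed totals during FTP
connection teardown):

    static int dload_progress_cb(void *file, double dltotal, double dlnow,
            double ultotal, double ulnow)
    {
        (void)ultotal; (void)ulnow;

        /* curl can fire this during FTP connection teardown with nothing
         * useful attached; return early rather than touch stale data */
        if(!file || dltotal <= 0.0 || dlnow < 0.0) {
            return 0;
        }

        /* ... normal progress reporting on the payload behind `file` ... */
        return 0;
    }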
Signed-off-by: Dave Reisner <dreisner@archlinux.org>
Signed-off-by: Dan McGee <dan@archlinux.org>
Noticed in my PowerPC Linux VM:
cc1: warnings being treated as errors
dload.c:45: error: 'get_filename' defined but not used
make[3]: *** [dload.lo] Error 1
Signed-off-by: Dan McGee <dan@archlinux.org>
We did a good job checking this in add.c, but not necessarily anywhere
else. Fix this up by adding checks into dload.c, remove.c, and conf.c in
the frontend. Also add loggers where appropriate and make the message
syntax more consistent.
Signed-off-by: Dan McGee <dan@archlinux.org>
Restore some sanity to the number of arguments passed to _alpm_download
and curl_download_internal.
Signed-off-by: Dave Reisner <dreisner@archlinux.org>
This means creating a new struct which can pass more descriptive data
from the back end sync functions to the downloader. In particular, we're
interested in the download size read from the sync DB. When the remote
server reports a size larger than this (via a content-length header),
abort the transfer.
In cases where the size is unknown, we set a hard upper limit of:
* 25MiB for a sync DB
* 16KiB for a signature
For reference, 25MiB is more than twice the size of all of the current
binary repos (with files) combined, and 16KiB is a truly gargantuan
signature.
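One way to express the check (a sketch, not necessarily the exact
mechanism used): ask curl for the reported Content-Length and compare it
against the expected size, where max_size is the DB-recorded size or the
fallback cap above.

    #include <curl/curl.h>
    #include <sys/types.h>

    /* refuse a file the server claims is larger than we expect */
    static int remote_size_acceptable(CURL *curl, off_t max_size)
    {
        double remote_size = 0.0;
        curl_easy_getinfo(curl, CURLINFO_CONTENT_LENGTH_DOWNLOAD,
                &remote_size);
        if(max_size && remote_size > 0.0 && (off_t)remote_size > max_size) {
            return 0;
        }
        return 1;
    }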
Signed-off-by: Dave Reisner <dreisner@archlinux.org>
URLs might end with a slash and follow redirects, or could be
generated by a script such as /getpkg.php?id=12345. In both cases, we
may have a better filename that we can write to, taken from either
content-disposition header, or the effective URL.
Specific to the first case, we write to a temporary file of the format
'alpmtmp.XXXXXX', where XXXXXX is randomized by mkstemp(3). Since this
is a randomly generated file, we cannot support resuming and the file is
unlinked in the event of an interrupt.
We also run into the possibility of changing out the filename from under
alpm on a -U operation, so callers of _alpm_download can optionally pass
a pointer to a char* to be filled in by curl_download_internal with the
actual filename we wrote to. Any sync operation will pass a NULL pointer
here, as we rely on specific names for packages from a mirror.
Fixes FS#22645.
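The temporary-file path looks roughly like this (a sketch; the helper
name is hypothetical and localpath is assumed to end in a slash):

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <unistd.h>

    /* create 'alpmtmp.XXXXXX' under localpath; the random name means
     * resuming cannot be supported */
    static FILE *create_tempfile(const char *localpath, char **tempfile_name)
    {
        size_t len = strlen(localpath) + strlen("alpmtmp.XXXXXX") + 1;
        char *name = malloc(len);
        FILE *fp;
        int fd;

        if(!name) {
            return NULL;
        }
        snprintf(name, len, "%salpmtmp.XXXXXX", localpath);

        fd = mkstemp(name);          /* replaces XXXXXX in place */
        if(fd < 0) {
            free(name);
            return NULL;
        }
        fp = fdopen(fd, "wb");
        if(!fp) {
            close(fd);
            unlink(name);
            free(name);
            return NULL;
        }
        *tempfile_name = name;       /* hand the generated path back */
        return fp;
    }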
Signed-off-by: Dave Reisner <d@falconindy.com>
This gives us more granularity than the former Never/Optional/Always
trifecta. The frontend still uses these values temporarily but that will
be changed in a future patch.
* Use 'siglevel' consistently in method names, 'level' as variable name
* The level becomes an enum bitmask value for flexibility (sketched below)
* Signature check methods now return an array of status codes rather than
a simple integer success/failure value. This allows callers to
determine whether things such as an unknown signature are valid.
* Specific signature error codes mostly disappear in favor of the above
returned status code; pm_errno is now set only to PKG_INVALID_SIG or
DB_INVALID_SIG as appropriate.
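As a sketch of the bitmask idea (illustrative names and values only, not
the exact enum that landed):

    /* hypothetical siglevel bitmask: orthogonal bits compose into one value */
    typedef enum _sketch_siglevel_t {
        SKETCH_SIG_PACKAGE           = (1 << 0),
        SKETCH_SIG_PACKAGE_OPTIONAL  = (1 << 1),
        SKETCH_SIG_DATABASE          = (1 << 2),
        SKETCH_SIG_DATABASE_OPTIONAL = (1 << 3),
    } sketch_siglevel_t;

    /* e.g. "check both, but don't fail on a missing database signature" */
    int level = SKETCH_SIG_PACKAGE | SKETCH_SIG_DATABASE |
            SKETCH_SIG_DATABASE_OPTIONAL;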
Signed-off-by: Dan McGee <dan@archlinux.org>