Extend the return values of compute_download_size to allow callers to
know that a .part file exists for the package.
This extra value isn't currently used, but it'll be needed later on.
Signed-off-by: Dave Reisner <dreisner@archlinux.org>
Signed-off-by: Dan McGee <dan@archlinux.org>
This adds a logger to the CURLE_OK case so we always know the return
code: log it if it was >= 400, and debug-log it regardless. Also adjust
another logger to use the cURL error message directly, and use fstat()
when we have an open file handle rather than stat().
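The fstat() part amounts to something like this; a sketch only, with a
made-up helper name, not the actual dload.c code:

    #include <stdio.h>
    #include <sys/types.h>
    #include <sys/stat.h>

    /* With the file already open, size it via its descriptor instead of
     * doing a second path lookup with stat(). */
    static off_t open_file_size(FILE *fp)
    {
        struct stat st;
        if(fstat(fileno(fp), &st) != 0) {
            return (off_t)-1;
        }
        return st.st_size;
    }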
Signed-off-by: Dan McGee <dan@archlinux.org>
Create a new static function called 'download_single_file' which
iterates over the servers for each payload.
Signed-off-by: Dave Reisner <dreisner@archlinux.org>
Signed-off-by: Dan McGee <dan@archlinux.org>
This is done by both the delta and regular file code, so we can extract
a little helper method. Done mostly to satisfy my "why are we repeating
code here" itch.
Signed-off-by: Dan McGee <dan@archlinux.org>
Break out the logic of finding payloads into a separate static function
to avoid nesting mayhem. After gathering all the records, download them
all at once.
Signed-off-by: Dave Reisner <dreisner@archlinux.org>
Signed-off-by: Dan McGee <dan@archlinux.org>
The absolutely terrible part about this is the failure on GPGME's part
to distinguish between "key not found" and "keyserver timeout". Instead,
it returns the same silly GPG_ERR_EOF in both cases (why isn't
GPG_ERR_TIMEOUT being used?), leaving us helpless to tell them apart.
Spit out a generic enough error message that covers both cases;
unfortunately we can't provide much guidance to the user because we
aren't sure what actually happened.
Signed-off-by: Dan McGee <dan@archlinux.org>
This logic is reused in both diskspace and downloadspace check
functions, so pull it out into its own static method.
Signed-off-by: Dave Reisner <dreisner@archlinux.org>
Signed-off-by: Dan McGee <dan@archlinux.org>
This function determines if the given cachedir has at least the given
amount of free space on it. This will be later used in the sync code to
preemptively halt downloads.
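Roughly what such a check looks like with statvfs(); the helper name
and signature here are illustrative assumptions, not the committed
code:

    #include <sys/types.h>
    #include <sys/statvfs.h>

    /* Return 1 if 'cachedir' has at least 'needed' bytes free, 0 if it
     * does not, -1 on error. Free space is the block count available to
     * unprivileged users (f_bavail) times the fragment size (f_frsize). */
    static int cachedir_has_space(const char *cachedir, off_t needed)
    {
        struct statvfs fsinfo;
        if(statvfs(cachedir, &fsinfo) != 0) {
            return -1;
        }
        return (off_t)fsinfo.f_bavail * (off_t)fsinfo.f_frsize >= needed;
    }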
Signed-off-by: Dave Reisner <dreisner@archlinux.org>
Signed-off-by: Dan McGee <dan@archlinux.org>
We can take a large shortcut here that saves us a lot of time,
especially when upgrading packages with lots of directories. Obviously
iterating the full file list of every single package to determine if
this directory was present in any other package can take quite some time
on a system with many packages installed. We don't need to remove a
directory at all if we are upgrading a package and the version we are
moving to still had the directory.
Also make a small optimization on the package comparison- we really only
care about equality here, not the result of the compare, so we can
shortcut using our name_hash.
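The shortcut looks roughly like the following; the stand-in struct only
mirrors the two fields described above and is not the real package
struct:

    #include <string.h>

    /* Illustrative stand-in for the fields we care about. */
    struct pkg {
        unsigned long name_hash;  /* precomputed hash of 'name' */
        char *name;
    };

    /* Differing hashes can never be the same name; only fall back to
     * strcmp() when the hashes collide. */
    static int pkg_same_name(const struct pkg *a, const struct pkg *b)
    {
        if(a->name_hash != b->name_hash) {
            return 0;
        }
        return strcmp(a->name, b->name) == 0;
    }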
What kind of benefit does this give us? Oh, only a reduction from 295.7
million to 1.4 million strcmp() calls (99.5% fewer) during a
`pacman -S linux libreoffice-common` operation.
Signed-off-by: Dan McGee <dan@archlinux.org>
This is in the realm of "probably not going to happen", but if someone
were to translate "disk" to a string longer than 256 characters, we
would have a smashed/corrupted stack due to our unchecked strcpy() call.
Rework the function to always length-check the value we copy into the
hostname buffer, and do it with memcpy rather than the more cumbersome
and unnecessary snprintf.
Finally, move the magic 256 value into a constant and pass it into the
function which is going to get inlined anyway.
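The length-checked copy amounts to something like this; the names and
constant here are illustrative only:

    #include <string.h>

    #define HOSTNAME_SIZE 256

    /* Never write more than the buffer holds and always NUL-terminate;
     * an over-long (e.g. translated) value is truncated rather than
     * smashing the stack. */
    static void copy_hostname(char hostname[HOSTNAME_SIZE], const char *value)
    {
        size_t len = strlen(value);
        if(len > HOSTNAME_SIZE - 1) {
            len = HOSTNAME_SIZE - 1;
        }
        memcpy(hostname, value, len);
        hostname[len] = '\0';
    }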
Signed-off-by: Dan McGee <dan@archlinux.org>
This one is pretty darn useless. Just dereference the ->data attribute
since the type is public anyway and save yourself the function call.
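In other words (illustrative helper, assuming the accessor in question
is the list data getter):

    #include <stddef.h>
    #include <alpm_list.h>

    /* The accessor and the dereference are equivalent; skip the call. */
    static void *first_data(alpm_list_t *list)
    {
        /* was: return alpm_list_getdata(list); */
        return list ? list->data : NULL;
    }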
Signed-off-by: Dan McGee <dan@archlinux.org>
This will be useful when extending disk space checks to free space
checking before we download package files.
Signed-off-by: Dan McGee <dan@archlinux.org>
Instead of iterating character by character, use memchr() calls to
hopefully speed up the search. A newline is the most likely culprit, so
search for that first followed by a NULL byte if there was no newline in
the buffer.
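The search order described above, as a sketch (not the function as
committed):

    #include <string.h>

    /* Locate the end of the current line in a buffer of 'remaining'
     * bytes: '\n' is the common case, an embedded NUL the fallback;
     * NULL means neither was found. */
    static const char *find_eol(const char *buf, size_t remaining)
    {
        const char *eol = memchr(buf, '\n', remaining);
        if(eol == NULL) {
            eol = memchr(buf, '\0', remaining);
        }
        return eol;
    }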
Signed-off-by: Dan McGee <dan@archlinux.org>
In the default configuration, we can enter the signing code but still
have nothing to do with GPGME- for example, if database signatures are
optional but none are present. Delay initialization of GPGME until we
know there is a signature file present or we were passed base64-encoded
data.
This also makes debugging with valgrind a lot easier as you don't have
to deal with all the GPGME error noise because their code leaks like a
sieve.
Signed-off-by: Dan McGee <dan@archlinux.org>
If you need zero-filled allocations, call CALLOC() instead.
This was from the original definition of these macros in commit
cc754bc6e3be0f3; hopefully our code is in the shape it needs to be to
switch this behavior.
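For reference, the distinction after this change is roughly the
following; a sketch, not the verbatim util.h definitions:

    #include <stdlib.h>

    /* MALLOC no longer zero-fills; CALLOC still does. */
    #define MALLOC(p, s, action) do { p = malloc(s); if(p == NULL) { action; } } while(0)
    #define CALLOC(p, l, s, action) do { p = calloc(l, s); if(p == NULL) { action; } } while(0)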
Signed-off-by: Dan McGee <dan@archlinux.org>
These backup-related paths in package extraction are used on relatively
few files during the install process, so bump them off the stack and
into the heap. This removes the artificial PATH_MAX limitation on their
length as well.
Signed-off-by: Dan McGee <dan@archlinux.org>
This will always be a 64-bit signed integer rather than the variable length
time_t type. Dates beyond 2038 should be fully supported in the library; the
frontend still lags behind because 32-bit platforms provide no localtime64()
or equivalent function to convert from an epoch value to a broken down time
structure.
Signed-off-by: Dan McGee <dan@archlinux.org>
This prepares the function to handle values past year 2038. The return type
is still limited to 32-bits on 32-bit systems; this will be adjusted in a
future patch.
Signed-off-by: Dan McGee <dan@archlinux.org>
We have a few incomplete translations, but these should be addressable
before the 4.0.1 maint release that is surely not that far in the
future.
Signed-off-by: Dan McGee <dan@archlinux.org>
Similar to what we did in edd9ed6a, disconnect the relationship with our
stack allocated error buffer from the curl handle. Just as an FTP
connection might have some network chatter on teardown causing the
progress callback to be triggered, we might also hit an error condition
that causes curl to write to our (now out of scope) error buffer.
I'm unable to reproduce FS#26327, but I have a suspicion that this
should fix it.
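The pattern is essentially this (illustrative, not the dload.c diff):

    #include <curl/curl.h>

    /* Keep the stack-allocated error buffer wired to the handle only for
     * the duration of this function, and detach it before returning so
     * later activity on the handle cannot scribble on out-of-scope
     * memory. */
    static CURLcode perform_with_local_errbuf(CURL *curl)
    {
        char errbuf[CURL_ERROR_SIZE] = {0};
        CURLcode ret;

        curl_easy_setopt(curl, CURLOPT_ERRORBUFFER, errbuf);
        ret = curl_easy_perform(curl);
        curl_easy_setopt(curl, CURLOPT_ERRORBUFFER, NULL);
        return ret;
    }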
Signed-off-by: Dave Reisner <dreisner@archlinux.org>
Signed-off-by: Dan McGee <dan@archlinux.org>
Expose the current static get_pkgpath() function internally to the rest
of the library as _alpm_local_db_pkgpath(). This allows use of this
convenience function in add.c and remove.c when forming the path to the
scriptlet location.
Signed-off-by: Dan McGee <dan@archlinux.org>
Add an is_archive parameter to reduce the amount of black magic going
on. Rework to use fewer PATH_MAX sized local variables, and simplify
some of the logic where appropriate in both this function and in the
callers where duplicate calls can be replaced by some conditional
parameter code.
Signed-off-by: Dan McGee <dan@archlinux.org>
This is a poor place for it, and it will likely move again in the
future, but it's better to have it here than as a static variable.
Initialization of this variable is no longer necessary as it's zeroed
on creation of the payload struct.
Signed-off-by: Dave Reisner <dreisner@archlinux.org>
Signed-off-by: Dan McGee <dan@archlinux.org>
This was done to squash a memory leak in the sync database download
code. When we downloaded a database and then reused the payload struct,
we could find ourselves calling get_fullpath() for the signatures and
overwriting non-freed values we had left over from the database
download.
Refactor the payload_free function into a payload_reset function that we
can call that does NOT free the payload itself, so we can reuse payload
structs. This also allows us to move the payload to the stack in some
call paths, relieving us of the need to alloc space.
Signed-off-by: Dan McGee <dan@archlinux.org>
Rather than always initializing it on any handle creation. There are
several frontend operations (search, info, etc.) that never need the
download code, so spending time initializing this every single time is a
bit silly. This makes it a bit more like the GPGME code init path.
Signed-off-by: Dan McGee <dan@archlinux.org>
Rather than free them right away, keep the list on the transaction as
we already do with add and remove lists. This is necessary because we
may be manipulating pointers the frontend needs to refer to packages,
and we are breaking our contract as stated in the alpm_add_pkg()
documentation of only freeing packages at the end of a transaction.
This fixes an issue found when refactoring the package list display
code.
Signed-off-by: Dan McGee <dan@archlinux.org>
This is definitely not in the normal hot path, so we can afford to do
some temporary heap allocation here.
Signed-off-by: Dan McGee <dan@archlinux.org>
No wonder these were slower than expected. We were only reading 4
(32-bit) or 8 (64-bit) bytes at a time and feeding it to the hash
functions. Define a buffer size constant and use it correctly so we feed
8K at a time into the hashing algorithm.
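The fixed loop is essentially the textbook one; sketched here with the
OpenSSL MD5 interface for illustration:

    #include <stdio.h>
    #include <openssl/md5.h>

    #define BUFFER_SIZE (8 * 1024)

    /* Feed a full buffer per fread(); the original bug was the classic
     * sizeof-a-pointer mistake, which fed only 4 or 8 bytes per pass. */
    static int md5_file(const char *path, unsigned char out[MD5_DIGEST_LENGTH])
    {
        char buf[BUFFER_SIZE];
        size_t n;
        MD5_CTX ctx;
        FILE *fp = fopen(path, "rb");

        if(fp == NULL) {
            return -1;
        }
        MD5_Init(&ctx);
        while((n = fread(buf, 1, BUFFER_SIZE, fp)) > 0) {
            MD5_Update(&ctx, buf, n);
        }
        MD5_Final(out, &ctx);
        fclose(fp);
        return 0;
    }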
This cut one larger `-Sw --noconfirm` operation, with nothing to
actually download so only the integrity checking was timed, from 3.3s
to 1.7s.
This has been broken since the original commit eba521913d introducing
OpenSSL usage for crypto hash functions. Boy do I feel stupid.
Signed-off-by: Dan McGee <dan@archlinux.org>
In the sync code, we explicitly allocated a string for this field, while
in the dload code itself it was filled in with a pointer to another
string. This led to a memory leak in the sync download case.
Make remote_name non-const and always explicitly allocate it. This patch
ensures this as well as uses malloc + snprintf (rather than calloc) in
several codepaths, and eliminates the only use of PATH_MAX in the
download code.
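The allocation pattern in question, roughly (illustrative helper):

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    /* Exact-size malloc plus snprintf; calloc's zero-fill buys nothing
     * here since snprintf NUL-terminates the result. */
    static char *join_path(const char *dir, const char *file)
    {
        size_t len = strlen(dir) + strlen(file) + 2; /* '/' and NUL */
        char *path = malloc(len);
        if(path) {
            snprintf(path, len, "%s/%s", dir, file);
        }
        return path;
    }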
Signed-off-by: Dan McGee <dan@archlinux.org>
This commit was made with the intent of displaying "correctly" sorted
package lists to users. Here are some reasons I think this is incorrect:
* It is done in the wrong place. If a frontend application wants to show
a different order of packages dependent on locale, it should do that
on its own.
* Even if one wants a locale-specific order, almost all package names
are all ASCII and language agnostic, so this different comparison
makes little sense and may serve only to confuse people.
* _alpm_pkg_cmp was unlike any other comparator function. None of the
rest had any dependency on anything but the content of the structs
being compared (e.g., they only used strcmp() or other basic
comparison operators).
This reverts commit 3e4d2c3aa6.
Signed-off-by: Dan McGee <dan@archlinux.org>
There was only one simple-to-handle case where we left a field
uninitialized; set it to NULL and use malloc() instead.
Signed-off-by: Dan McGee <dan@archlinux.org>
In every case we were calling calloc, the struct we allocated (or the
memory to be used) is fully specified later in the method.
For alpm_list_t allocations, we always set all of data, next, and prev.
For list copying and transforming to an array, we always copy the entire
data element, so no need to zero it first.
Signed-off-by: Dan McGee <dan@archlinux.org>
I'm really good at breaking this on a regular basis. If only we had some
sort of automated testing for this...
Signed-off-by: Dan McGee <dan@archlinux.org>
In the sync download code, we added an early check in 6731d0a940 for
sync download server existence so we wouldn't show the same error over
and over for each file to be downloaded. Move this check into the
download block so we only run it if there are actually files that need
to be downloaded for this repository.
Signed-off-by: Dan McGee <dan@archlinux.org>
When we switched to a file object and not just a simple string, we missed an
update along the way here in target-target conflicts. This patch looks
large, but it really comes down to one errant (char *) cast from before
that has been reworked to explicitly point to the alpm_file_t object.
The rest is simply code cleanup.
Signed-off-by: Dan McGee <dan@archlinux.org>
A few parameters were outdated or wrongly named, and a few things were
explicitly linked that Doxygen wasn't able to resolve.
Signed-off-by: Dan McGee <dan@archlinux.org>
We returned the right error code but never set the flags accordingly.
Also, now that we can bail early, ensure we set the error code.
Signed-off-by: Dan McGee <dan@archlinux.org>
This adds calls to gpgme_op_import_result() which we were not looking at
before to ensure the key was actually imported. Additionally, we do some
preemptive checks to ensure the keyring is even writable if we are going
to prompt the user to add things to it.
Signed-off-by: Dan McGee <dan@archlinux.org>
This also fixes a segfault found by Dave when key_search is
unsuccessful; the key_search return code documentation has also been
updated to reflect reality.
Signed-off-by: Dan McGee <dan@archlinux.org>
Because we aren't using gpgv and a dedicated keyring that is known to be
all safe, we should honor this flag when it is set on a given key in the
keyring and decline to use that key. This prevents a key the user does
not want used from being reimported; instead of deleting such a key, one
should mark it as disabled.
Signed-off-by: Dan McGee <dan@archlinux.org>
Add two new static methods, key_search() and key_import(), to our
growing list of signing code.
If we come across a key we do not have, attempt to look it up remotely
and ask the user if they wish to import said key. If they do, flag the
validation process as a potential 'retry', meaning it might succeed the
next time it is run.
These depend on you having a 'keyserver hkp://foo.example.com' line in
your gpg.conf file in your gnupg home directory to function.
Signed-off-by: Dan McGee <dan@archlinux.org>
This moves the result processing out of the validation check loop itself
and into a new loop. Errors will be presented to the user one-by-one
after we fully complete the validation loop, so they no longer overlap
the progress bar.
Unlike the database validation, we may have several errors to process in
sequence here, so we use a function-scoped struct to track all the
necessary information between seeing an error and asking the user about
it.
The older prompt_to_delete() callback logic is still kept, but only for
checksum failures. It is debatable whether we should do this at all or
just delegate said actions to the user.
Signed-off-by: Dan McGee <dan@archlinux.org>
This is for eventual use by the PGP key import code. Breaking this into
a separate commit now makes the following patches a bit easier to
understand.
Signed-off-by: Dan McGee <dan@archlinux.org>
This allows a frontend program to query, at runtime, what the library
supports. This can be useful for sanity checking during config-
requiring a downloader or disallowing signature settings, for example.
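A frontend could then do something like the following; the capability
flag name is an assumption about how the bits might be spelled:

    #include <stdio.h>
    #include <alpm.h>

    /* Bail early if the library was built without signature support. */
    static int require_signature_support(void)
    {
        if(!(alpm_capabilities() & ALPM_CAPABILITY_SIGNATURES)) {
            fprintf(stderr, "signature verification not available\n");
            return -1;
        }
        return 0;
    }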
Signed-off-by: Dan McGee <dan@archlinux.org>
This takes the library's hidden default out of the equation: hidden in
the sense that we can't even find out what it is until we create a
handle. This is a chicken-and-egg problem where we have probably already
parsed the config, so it is hard to get the bitmask value right.
Move it to the frontend so the caller can do whatever the heck they
want. This also exposes a shortcoming where the frontend doesn't know if
the library even supports signatures, so we should probably add an
alpm_capabilities() method which exposes things like HAS_DOWNLOADER,
HAS_SIGNATURES, etc.
Signed-off-by: Dan McGee <dan@archlinux.org>
This allows us to do all delta verification up front, followed by
whatever needs to be done with any found errors. In this case, we call
prompt_to_delete() for each error.
Add back the missing EVENT(ALPM_EVENT_DELTA_INTEGRITY_DONE) that
accidentally got removed in commit 062c391919.
Remove use of *data; we never even look at the stuff in this array for
the error code we were returning and this would be much better handled
by one callback per error anyway, or at least some strongly typed return
values.
Signed-off-by: Dan McGee <dan@archlinux.org>
If siglist->results wasn't a NULL pointer, we would try to free it
anyway, even if siglist->count was zero. Only attempt to free this
pointer if we had results and the pointer is valid.
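The guard is simply this, in illustrative form (not the exact
signing.c code):

    #include <stdlib.h>
    #include <alpm.h>

    /* Only touch the results array when we actually had results and the
     * pointer is valid. */
    static void siglist_free_results(alpm_siglist_t *siglist)
    {
        if(siglist->count > 0 && siglist->results != NULL) {
            /* per-result cleanup elided */
            free(siglist->results);
        }
        siglist->results = NULL;
        siglist->count = 0;
    }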
Signed-off-by: Dan McGee <dan@archlinux.org>
This one can be overwhelming when reading debug output from a very large
package. We already have the output of each extracted file so we
probably can do without this in 99.9% of cases.
Signed-off-by: Dan McGee <dan@archlinux.org>
This adds two new static methods, check_validity() and load_packages(),
to sync.c which are simply code fragments pulled out of our
do-everything sync commit code.
Signed-off-by: Dan McGee <dan@archlinux.org>
In reality, there is no retrying that happens as of now because we don't
have any import or changing of the keyring going on, but the code is set
up so we can drop this in our new _alpm_process_siglist() function. Wire
up the basics to the sync database validation code, so we see something
like the following:
$ pacman -Ss unknowntrust
error: core: signature from "Dan McGee <dpmcgee@gmail.com>" is unknown trust
error: core: signature from "Dan McGee <dpmcgee@gmail.com>" is unknown trust
error: database 'core' is not valid (invalid or corrupted database (PGP signature))
$ pacman -Ss missingsig
error: core: missing required signature
error: core: missing required signature
error: database 'core' is not valid (invalid or corrupted database (PGP signature))
Yes, there is some double output, but this should be fixable in the
future.
Signed-off-by: Dan McGee <dan@archlinux.org>
This adds some new callback event and progress codes for package
loading, which was formerly bundled in with package validation.
The main sync.c loop where loading occurred is now two loops running
sequentially. The behavior should not change with this patch outside of
progress and event display; more changes will come in following patches.
Signed-off-by: Dan McGee <dan@archlinux.org>
_alpm_pkg_load_internal() was becoming a monster. Extract the top bit of
the method that dealt with checksum and signature validation into a
separate method that should be called before one loads a package to
ensure it is valid.
Signed-off-by: Dan McGee <dan@archlinux.org>
This is always true at the end since we return early if we couldn't
create the tmpdir, so it is totally unnecessary.
Signed-off-by: Dan McGee <dan@archlinux.org>
We shouldn't be going through the accessor that does a bunch of
unnecessary legwork, including potentially loading the pkgcache right
before we free it.
Signed-off-by: Dan McGee <dan@archlinux.org>
Rather than using a string-based path, we can restore the working
directory via a file descriptor and use of fchdir().
From the getcwd manpage:
Opening the current directory (".") and calling fchdir(2) to
return is usually a faster and more reliable alternative when
sufficiently many file descriptors are available.
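As a sketch, the pattern looks like this:

    #include <fcntl.h>
    #include <unistd.h>

    /* Remember the current directory as a file descriptor, do work that
     * chdir()s elsewhere, then restore with fchdir(). */
    static int run_in_dir(const char *dir)
    {
        int ret = -1;
        int cwdfd = open(".", O_RDONLY);
        if(cwdfd < 0) {
            return -1;
        }
        if(chdir(dir) == 0) {
            /* ... do work relative to 'dir' ... */
            ret = 0;
        }
        if(fchdir(cwdfd) != 0) {
            ret = -1;
        }
        close(cwdfd);
        return ret;
    }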
Signed-off-by: Dan McGee <dan@archlinux.org>
We did a lot of both malloc-ing and stack printing to form some paths in
this code. Attempt to unify it all into the one get_pkgpath() method by
adding an optional third "filename" parameter, and form the necessary
path string all in one go.
Signed-off-by: Dan McGee <dan@archlinux.org>
1. Don't run it if something failed in package removal- this mirrors
what we already do in sync transactions.
2. Don't run it if we are invoking it for the replaces removal bit of a
sync transaction- it doesn't make sense to run ldconfig halfway through
a sync install; we should only run it once at the end.
Signed-off-by: Dan McGee <dan@archlinux.org>
We add them to this list with the root path not appended; we should be
searching for them this way as well.
Signed-off-by: Dan McGee <dan@archlinux.org>
We checked the (fgets == NULL and !feof) case, but never actually bailed
out of the loop if we were at the end of the file, causing infinite
looping.
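The loop shape after the fix, illustratively:

    #include <stdio.h>

    /* A NULL from fgets() at end-of-file ends the loop; a NULL anywhere
     * else is a read error. */
    static int read_lines(FILE *fp)
    {
        char line[512];
        while(1) {
            if(fgets(line, sizeof(line), fp) == NULL) {
                if(feof(fp)) {
                    break;      /* normal termination */
                }
                return -1;      /* read error */
            }
            /* ... process 'line' ... */
        }
        return 0;
    }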
Signed-off-by: Dan McGee <dan@archlinux.org>
This shouldn't really be declared with const, and causes a compile error
when -Wcast-qual is used. Remove the const specifier from the function
specification and all implementations.
Also fix one other trivial -Wcast-qual warning in _alpm_db_cmp().
Signed-off-by: Dan McGee <dan@archlinux.org>
We never ended up using or really needing this; kill it for now knowing
it is in git history if ever needed again.
Signed-off-by: Dan McGee <dan@archlinux.org>
This function doesn't exist on OSX. Since there aren't any other
candidates in alpm for which this function would make sense to use,
simply replace the function call with a loop that does the equivalent.
Signed-off-by: Dave Reisner <dreisner@archlinux.org>
Similar to an earlier commit which accounts for .part files for full
packages, calculate the download_size for deltas keeping in mind the
possibility of a partial transfer.
Signed-off-by: Dave Reisner <dreisner@archlinux.org>
Signed-off-by: Dan McGee <dan@archlinux.org>
Check for the existence of a partial download of a package file before
jumping to delta calculations. Currently, if there were 10MiB remaining
in a 100MiB download, the values passed to the front end do not reflect
this.
Refactored from an old patch originally by Dan.
Signed-off-by: Dave Reisner <dreisner@archlinux.org>
Signed-off-by: Dan McGee <dan@archlinux.org>
If ~/.netrc exists and has credentials for the hostname requested in a
download, they will be provided in an HTTP auth request. This can still
be overridden by explicitly declaring user:pass in the URL.
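On the curl side this boils down to a single option; sketched here with
error handling elided:

    #include <curl/curl.h>

    static void setup_netrc(CURL *curl)
    {
        /* consult ~/.netrc for matching hosts, but let user:pass embedded
         * in the URL override whatever the file provides */
        curl_easy_setopt(curl, CURLOPT_NETRC, (long)CURL_NETRC_OPTIONAL);
    }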
Signed-off-by: Dave Reisner <dreisner@archlinux.org>
Signed-off-by: Dan McGee <dan@archlinux.org>
This adjusts type usage to match POSIX provided types from
<sys/types.h> rather than assuming everything will fit in a long or
unsigned long. Use fsblkcnt_t (unsigned) and blkcnt_t (signed) as
appropriate. These are affected the same way off_t is on 32-bit
platforms, where the types are extended to 64 bits if large file support
is enabled.
Because most numbers here are block counts, this isn't nearly as
pressing as using a 32-bit variable for file sizes, where anything over
2GiB can burn you; with block counts we can likely support files at
least 512 and more typically 4096 times larger.
Signed-off-by: Dan McGee <dan@archlinux.org>
Changes in commit dc3336c277 caused this to stop working as expected for
sync packages, due to the way the logic is structured. Ensure we always
enter the signature code if the bitflag is flipped on to check
signatures for packages. Rename 'use_sig' to 'has_sig' for clarity.
Signed-off-by: Dan McGee <dan@archlinux.org>
This gives us some amount of room to grow in case we ever find another
reason that we might return with an error from the progress callback.
Signed-off-by: Dave Reisner <dreisner@archlinux.org>
We lost some of this output in the fetch->curl conversion, but I also
noticed in FS#25852 that we just lack some of this useful information
along the way.
Signed-off-by: Dave Reisner <dreisner@archlinux.org>
_alpm_filecache_setup() destroys the list of cachedirs when it finds no
writeable directories in the config. This put us in an awkward situation
where _alpm_filecache_find() would locate a downloaded file in a r/o
cachedir, but then fail to install it after _alpm_filecache_setup() is
called (with a NULL argument). Change this behavior to merely prepend
the temporary directory to the list of available cachedirs.
Dan exposed it in e07547ee4e, as now a package can be found in a
directory we may not be able to actually store packages in.
Reported-by: Rémy Oudompheng <remy@archlinux.org>
Signed-off-by: Dave Reisner <dreisner@archlinux.org>
Signed-off-by: Dan McGee <dan@archlinux.org>
Neither deltas nor filename attributes are ever present in the local
database, so we can remove all of the indirection for accessing these
attributes.
Signed-off-by: Dan McGee <dan@archlinux.org>