When doing a bare -U operation on a local package that doesn't pull in
any dependencies from the sync databases, we can get away with missing
database files. This makes the check conditional on there being no sync
targets in the target list. This is not the prettiest code, so a bit of
hackish behavior is required to straighten out both the behavior and the
nonsensical error message.
Addresses FS#26899.
Signed-off-by: Dan McGee <dan@archlinux.org>
Bump the version, update the translation template files, and fill in
NEWS with relevant commits and changes since 4.0.0.
Signed-off-by: Dan McGee <dan@archlinux.org>
The point of this early compare-to-NUL-byte check was so we could bail
out early and skip the strcmp() call. Given we weren't doing the check
right, this never exited early. Fix it to work as intended.
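Roughly, the kind of short-circuit intended looks like this (a generic
illustration only, not the actual libalpm code):

    #include <string.h>

    /* A single-byte comparison lets us skip the strcmp() call in the
     * common non-matching (or empty string) case. */
    static int names_equal(const char *a, const char *b)
    {
        if(a[0] != b[0]) {
            return 0;   /* first bytes differ; no need for strcmp() */
        }
        return strcmp(a, b) == 0;
    }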
Noticed-by: Pepe Juárez <trulustapa@gmail.com>
Signed-off-by: Dan McGee <dan@archlinux.org>
Moving the strdup() to after the first NULL check ensures that we
continue with payload->remote_name defined.
Signed-off-by: Dave Reisner <dreisner@archlinux.org>
Dan: fix mask calculation, add it to the success/fail block instead.
Signed-off-by: Dave Reisner <dreisner@archlinux.org>
Signed-off-by: Dan McGee <dan@archlinux.org>
This takes the place of three previously used constants:
ARCHIVE_DEFAULT_BYTES_PER_BLOCK, BUFFER_SIZE, and CPBUFSIZE.
In libarchive 3.0, the first constant will be no more, so we can ensure
we are forward-compatible by removing our usage of it now. The rest are
unified for consistency.
By default, we will use the value of BUFSIZ provided by <stdio.h>, which
is 8192 on Linux. If that is undefined, a default value is provided.
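The definition described above has roughly this shape (the macro name
here is an assumption):

    #include <stdio.h>

    /* Use the stdio block size if the platform provides one, otherwise
     * fall back to a sane 8K default. */
    #ifdef BUFSIZ
    #define ALPM_BUFFER_SIZE BUFSIZ
    #else
    #define ALPM_BUFFER_SIZE 8192
    #endif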
Signed-off-by: Dan McGee <dan@archlinux.org>
There are two separate issues in the same block of file conflict
checking code here:
1) If realpath errored, such as when a symlink was broken, we would call
'continue' rather than simply exit this particular method of
resolution. This was likely just a copy-paste mistake as the previous
resolving steps all use loops where continue makes sense. Refactor
the check so we only proceed if realpath is successful, and continue
with the rest of the checks either way.
2) The real problem this code was trying to solve was canonicalizing
path component (e.g., directory) symlinks. The final component, if
not a directory, should not be handled at all in this loop. Add a
!S_ISLNK() condition to the loop so we only call this for real files,
as sketched below.
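A sketch of the reworked step, with hypothetical names standing in for
the real conflict-checking code:

    #include <limits.h>
    #include <stdlib.h>
    #include <string.h>
    #include <sys/stat.h>

    /* Returns 1 if the canonicalized path matches the expected path.
     * Point 2: symlink components are skipped entirely (!S_ISLNK()).
     * Point 1: a failed realpath() simply means "no match found here";
     * the caller then continues with its remaining resolution checks
     * instead of bailing out of them with 'continue'. */
    static int canonical_path_matches(const char *path, const char *expected,
            const struct stat *lsbuf)
    {
        char rpath[PATH_MAX];
        if(S_ISLNK(lsbuf->st_mode)) {
            return 0;
        }
        if(realpath(path, rpath) == NULL) {
            return 0;
        }
        return strcmp(rpath, expected) == 0;
    }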
There are a few other small cleanups to the debug messages that I made
while debugging this problem: we don't need to keep printing the file
name, and every block that sets resolved_conflict to true now prints a
debug message so we know how it was resolved.
This fixes the expected failures from symlink010.py and symlink011.py,
while still ensuring the fix for fileconflict007.py works.
Signed-off-by: Dan McGee <dan@archlinux.org>
There is some peculiar behavior going on here when a package is loaded
that has no files, as is very common in our test suite. When we enter
the realloc/sort code, a package without files will call the following:
files = realloc(NULL, 0);
One would assume this is a no-op, returning a NULL pointer, but that is
not the case and valgrind later reports we are leaking memory. Fix the
whole thing by skipping the reallocation and sort steps if the pointer
is NULL, as we have nothing to do.
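A sketch of the guard, with a hypothetical stand-in for alpm_file_t and
its comparator:

    #include <stdlib.h>
    #include <string.h>

    struct file_entry { char *name; };

    static int file_cmp(const void *a, const void *b)
    {
        const struct file_entry *fa = a, *fb = b;
        return strcmp(fa->name, fb->name);
    }

    /* Skip both the shrinking realloc and the sort when nothing was
     * loaded, so we never perform the realloc(NULL, 0) allocation. */
    static struct file_entry *shrink_and_sort(struct file_entry *files,
            size_t count)
    {
        if(files == NULL || count == 0) {
            return files;
        }
        struct file_entry *shrunk = realloc(files, count * sizeof(*files));
        if(shrunk != NULL) {
            files = shrunk;
        }
        qsort(files, count, sizeof(*files), file_cmp);
        return files;
    }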
Note that the package still gets marked as 'files loaded', because
although there were none, we tried and were successful.
Signed-off-by: Dan McGee <dan@archlinux.org>
Extend the return values of compute_download_size to allow callers to
know that a .part file exists for the package.
This extra value isn't currently used, but it'll be needed later on.
Signed-off-by: Dave Reisner <dreisner@archlinux.org>
Signed-off-by: Dan McGee <dan@archlinux.org>
This adds a logger to the CURLE_OK case so we can always know the return
code if it was >= 400, and debug log it regardless. Also adjust another
logger to use the cURL error message directly, as well as use fstat()
when we have an open file handle rather than stat().
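The stat() to fstat() change is the usual pattern of querying the handle
we already have open instead of doing a second path lookup; a minimal
sketch:

    #include <stdio.h>
    #include <sys/stat.h>

    /* Ask about the already-open stream rather than re-resolving the
     * path with stat(). */
    static long open_file_size(FILE *localf)
    {
        struct stat st;
        if(localf == NULL || fstat(fileno(localf), &st) != 0) {
            return -1;
        }
        return (long)st.st_size;
    }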
Signed-off-by: Dan McGee <dan@archlinux.org>
The absolutely terrible part about this is the failure on GPGME's part
to distinguish between "key not found" and "keyserver timeout". Instead,
it returns the same silly GPG_ERR_EOF in both cases (why isn't
GPG_ERR_TIMEOUT being used?), leaving us helpless to tell them apart.
Spit out a generic enough error message that covers both cases;
unfortunately we can't provide much guidance to the user because we
aren't sure what actually happened.
Signed-off-by: Dan McGee <dan@archlinux.org>
This is in the realm of "probably not going to happen", but if someone
were to translate "disk" to a string longer than 256 characters, we
would have a smashed/corrupted stack due to our unchecked strcpy() call.
Rework the function to always length-check the value we copy into the
hostname buffer, and do it with memcpy rather than the more cumbersome
and unnecessary snprintf.
Finally, move the magic 256 value into a constant and pass it into the
function which is going to get inlined anyway.
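A sketch of the bounded copy (function and constant names here are
assumptions):

    #include <string.h>

    #define HOSTNAME_SIZE 256

    /* Copy at most HOSTNAME_SIZE - 1 bytes and always NUL-terminate, so
     * an overly long translation can no longer run past the buffer. */
    static void copy_hostname(char *hostname, const char *name)
    {
        size_t len = strlen(name);
        if(len > HOSTNAME_SIZE - 1) {
            len = HOSTNAME_SIZE - 1;
        }
        memcpy(hostname, name, len);
        hostname[len] = '\0';
    }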
Signed-off-by: Dan McGee <dan@archlinux.org>
In the default configuration, we can enter the signing code but still
have nothing to do with GPGME- for example, if database signatures are
optional but none are present. Delay initialization of GPGME until we
know there is a signature file present or we were passed base64-encoded
data.
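The shape of the change, with stand-ins for the init helper and its
inputs (none of these names are the real libalpm ones):

    #include <stddef.h>

    /* Stand-in for the real (and expensive) GPGME initialization. */
    static int init_gpgme(void) { return 0; }

    /* Only pay the GPGME setup cost once we know there is a detached
     * signature file or base64-encoded signature data to verify. */
    static int verify_if_needed(int sigfile_exists, const char *base64_data)
    {
        if(!sigfile_exists && base64_data == NULL) {
            return 0;   /* nothing to verify; GPGME is never touched */
        }
        if(init_gpgme() != 0) {
            return -1;
        }
        /* proceed with the actual signature verification */
        return 0;
    }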
This also makes debugging with valgrind a lot easier as you don't have
to deal with all the GPGME error noise because their code leaks like a
sieve.
Signed-off-by: Dan McGee <dan@archlinux.org>
We have a few incomplete translations, but these should be addressable
before the 4.0.1 maint release that is surely not that far in the
future.
Signed-off-by: Dan McGee <dan@archlinux.org>
Similar to what we did in edd9ed6a, disconnect our stack-allocated
error buffer from the curl handle. Just as an FTP
connection might have some network chatter on teardown causing the
progress callback to be triggered, we might also hit an error condition
that causes curl to write to our (now out of scope) error buffer.
I'm unable to reproduce FS#26327, but I have a suspicion that this
should fix it.
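If it follows the earlier FTP fix, the change likely boils down to
detaching the buffer before it goes out of scope:

    #include <curl/curl.h>

    /* Detach the stack-allocated error buffer from the handle before it
     * leaves scope, so a late error can't scribble on dead stack memory. */
    static void detach_error_buffer(CURL *curl)
    {
        curl_easy_setopt(curl, CURLOPT_ERRORBUFFER, NULL);
    }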
Signed-off-by: Dave Reisner <dreisner@archlinux.org>
Signed-off-by: Dan McGee <dan@archlinux.org>
Expose the current static get_pkgpath() function internally to the rest
of the library as _alpm_local_db_pkgpath(). This allows use of this
convenience function in add.c and remove.c when forming the path to the
scriptlet location.
Signed-off-by: Dan McGee <dan@archlinux.org>
Add an is_archive parameter to reduce the amount of black magic going
on. Rework to use fewer PATH_MAX sized local variables, and simplify
some of the logic where appropriate in both this function and in the
callers where duplicate calls can be replaced by some conditional
parameter code.
Signed-off-by: Dan McGee <dan@archlinux.org>
This is a poor place for it, and it will likely move again in the
future, but it's better to have it here than as a static variable.
Initialization of this variable is no longer necessary, as it is zeroed
on creation of the payload struct.
Signed-off-by: Dave Reisner <dreisner@archlinux.org>
Signed-off-by: Dan McGee <dan@archlinux.org>
This was done to squash a memory leak in the sync database download
code. When we downloaded a database and then reused the payload struct,
we could find ourselves calling get_fullpath() for the signatures and
overwriting non-freed values we had left over from the database
download.
Refactor the payload_free function into a payload_reset function that
does NOT free the payload itself, so we can reuse payload structs. This
also allows us to move the payload to the stack in some
call paths, relieving us of the need to alloc space.
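The split likely looks something like this (struct and field names are
assumptions):

    #include <stdlib.h>

    struct payload {
        char *remote_name;
        char *destfile_name;
        char *tempfile_name;
    };

    /* Free only the members, leaving the struct itself reusable (and
     * therefore usable on the stack). */
    static void payload_reset(struct payload *p)
    {
        free(p->remote_name);
        free(p->destfile_name);
        free(p->tempfile_name);
        p->remote_name = p->destfile_name = p->tempfile_name = NULL;
    }

    /* Free the whole payload; only for heap-allocated payloads. */
    static void payload_free(struct payload *p)
    {
        if(p) {
            payload_reset(p);
            free(p);
        }
    }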
Signed-off-by: Dan McGee <dan@archlinux.org>
Rather than always initializing it on any handle creation. There are
several frontend operations (search, info, etc.) that never need the
download code, so spending time initializing this every single time is a
bit silly. This makes it a bit more like the GPGME code init path.
Signed-off-by: Dan McGee <dan@archlinux.org>
Rather than free them right away, keep the list on the transaction as
we already do with add and remove lists. This is necessary because we
may be manipulating pointers that the frontend needs in order to refer to packages,
and we are breaking our contract as stated in the alpm_add_pkg()
documentation of only freeing packages at the end of a transaction.
This fixes an issue found when refactoring the package list display
code.
Signed-off-by: Dan McGee <dan@archlinux.org>
This is definitely not in the normal hot path, so we can afford to do
some temporary heap allocation here.
Signed-off-by: Dan McGee <dan@archlinux.org>
No wonder these were slower than expected. We were only reading 4
(32-bit) or 8 (64-bit) bytes at a time and feeding it to the hash
functions. Define a buffer size constant and use it correctly so we feed
8K at a time into the hashing algorithm.
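Reading only 4 or 8 bytes at a time suggests the classic
sizeof-on-a-pointer mistake; an illustrative before/after, assuming an
8K constant like the one mentioned:

    #include <stdio.h>
    #include <stdlib.h>

    #define HASH_BUFFER_SIZE 8192   /* assumed name for the new constant */

    /* update_hash() stands in for the OpenSSL update call. The broken
     * code passed sizeof(buf), i.e. the size of the pointer (4 or 8
     * bytes), instead of the real buffer size. */
    static void hash_file(FILE *f, void (*update_hash)(const void *, size_t))
    {
        char *buf = malloc(HASH_BUFFER_SIZE);
        size_t n;
        if(buf == NULL) {
            return;
        }
        /* before: n = fread(buf, 1, sizeof(buf), f); */
        while((n = fread(buf, 1, HASH_BUFFER_SIZE, f)) > 0) {
            update_hash(buf, n);   /* now feeds 8K per iteration */
        }
        free(buf);
    }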
This cut one larger `-Sw --noconfirm` operation, with nothing to
actually download so we were only timing the integrity checking, from
3.3s to 1.7s.
This has been broken since the original commit eba521913d introducing
OpenSSL usage for crypto hash functions. Boy do I feel stupid.
Signed-off-by: Dan McGee <dan@archlinux.org>
In the sync code, we explicitly allocated a string for this field, while
in the dload code itself it was filled in with a pointer to another
string. This led to a memory leak in the sync download case.
Make remote_name non-const and always explicitly allocate it. This patch
ensures this as well as uses malloc + snprintf (rather than calloc) in
several codepaths, and eliminates the only use of PATH_MAX in the
download code.
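The allocation pattern mentioned, sketched with hypothetical inputs:

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    /* Allocate exactly what is needed and fill it in, rather than using
     * a zeroed, PATH_MAX-sized buffer. */
    static char *join_path(const char *dir, const char *file)
    {
        size_t len = strlen(dir) + strlen(file) + 2;   /* '/' plus '\0' */
        char *path = malloc(len);
        if(path != NULL) {
            snprintf(path, len, "%s/%s", dir, file);
        }
        return path;
    }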
Signed-off-by: Dan McGee <dan@archlinux.org>
This commit was made with the intent of displaying "correctly" sorted
package lists to users. Here are some reasons I think this is incorrect:
* It is done in the wrong place. If a frontend application wants to show
a different order of packages dependent on locale, it should do that
on its own.
* Even if one wants a locale-specific order, almost all package names
are all-ASCII and language agnostic, so this different comparison
makes little sense and may serve only to confuse people.
* _alpm_pkg_cmp was unlike any other comparator function. None of the
rest had any dependency on anything but the content of the structs
being compared (e.g., they only used strcmp() or other basic
comparison operators).
This reverts commit 3e4d2c3aa6.
Signed-off-by: Dan McGee <dan@archlinux.org>
There was only one simple-to-handle case where we left a field
uninitialized; set it to NULL and use malloc() instead.
Signed-off-by: Dan McGee <dan@archlinux.org>
In every case we were calling calloc, the struct we allocated (or the
memory to be used) is fully specified later in the method.
For alpm_list_t allocations, we always set all of data, next, and prev.
For list copying and transforming to an array, we always copy the entire
data element, so no need to zero it first.
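For example, a list node has every member assigned right away, so the
zero-fill from calloc() is wasted work (sketched with a hypothetical
node type):

    #include <stdlib.h>

    struct list_node {
        void *data;
        struct list_node *prev;
        struct list_node *next;
    };

    /* Every member is written before the node is used, so malloc() is
     * sufficient here. */
    static struct list_node *node_new(void *data)
    {
        struct list_node *node = malloc(sizeof(*node));
        if(node != NULL) {
            node->data = data;
            node->prev = node;   /* a single-element list points at itself */
            node->next = NULL;
        }
        return node;
    }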
Signed-off-by: Dan McGee <dan@archlinux.org>
I'm really good at breaking this on a regular basis. If only we had some
sort of automated testing for this...
Signed-off-by: Dan McGee <dan@archlinux.org>
In the sync download code, we added an early check in 6731d0a940 for
sync download server existence so we wouldn't show the same error over
and over for each file to be downloaded. Move this check into the
download block so we only run it if there are actually files that need
to be downloaded for this repository.
Signed-off-by: Dan McGee <dan@archlinux.org>
When we switched to a file object and not just a simple string, we missed an
update along the way here in target-target conflicts. This patch looks
large, but it really comes down to one errant (char *) cast that has
now been reworked to explicitly point to the alpm_file_t object. The
rest is
simply code cleanup.
Signed-off-by: Dan McGee <dan@archlinux.org>
A few parameters were outdated or wrongly named, and a few things were
explicitly linked that Doxygen wasn't able to resolve.
Signed-off-by: Dan McGee <dan@archlinux.org>
We returned the right error code but never set the flags accordingly.
Also, now that we can bail early, ensure we set the error code.
Signed-off-by: Dan McGee <dan@archlinux.org>
This adds calls to gpgme_op_import_result() which we were not looking at
before to ensure the key was actually imported. Additionally, we do some
preemptive checks to ensure the keyring is even writable if we are going
to prompt the user to add things to it.
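The result check presumably looks something like this; the GPGME calls
are real API, the surrounding function is not:

    #include <gpgme.h>

    /* After asking GPGME to import a fetched key, confirm something was
     * actually imported rather than assuming success. */
    static int import_succeeded(gpgme_ctx_t ctx)
    {
        gpgme_import_result_t result = gpgme_op_import_result(ctx);
        return result != NULL && result->imported > 0;
    }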
Signed-off-by: Dan McGee <dan@archlinux.org>
This also fixes a segfault found by Dave when key_search is
unsuccessful; the key_search return code documentation has also been
updated to reflect reality.
Signed-off-by: Dan McGee <dan@archlinux.org>
Because we aren't using gpgv and a dedicated keyring that is known to
be all safe, we should honor this flag when it is set on a given key in
the keyring and know not to use that key. This prevents a key a user
does not want used from being reimported; instead of deleting the key,
one should mark it as disabled.
Signed-off-by: Dan McGee <dan@archlinux.org>
Add two new static methods, key_search() and key_import(), to our
growing list of signing code.
If we come across a key we do not have, attempt to look it up remotely
and ask the user if they wish to import said key. If they do, flag the
validation process as a potential 'retry', meaning it might succeed the
next time it is run.
To function, these depend on a 'keyserver hkp://foo.example.com' line
being present in the gpg.conf file in your gnupg home directory.
Signed-off-by: Dan McGee <dan@archlinux.org>