When checking if a package owns a directory, it is important to check
not only that all the files in the directory are part of the package,
but also whether the directory itself is part of a package. This catches
empty subdirectories during conflict checking for directory-to-file/symlink
replacements.
Signed-off-by: Allan McRae <allan@archlinux.org>
Signed-off-by: Dan McGee <dan@archlinux.org>
When two packages own an empty directory, pacman finds no conflict when
one of those packages wants to replace the directory with a file or a
symlink. When it comes to actually extracting the new file/symlink,
pacman sees the directory is still there (we do not remove empty
directories if they are owned by a package) and refuses to extract.
Detect this potential conflict early and bail. Note that it is a
_potential_ conflict and not a guaranteed one as the other package owning
the directory could be updated or removed first which would remove
the conflict. However, pacman currently cannot sort package installation
order to ensure this, so this conflict requires manual upgrade ordering.
Signed-off-by: Allan McRae <allan@archlinux.org>
Signed-off-by: Dan McGee <dan@archlinux.org>
Only load filesystem details for the mount points that we're actually
going to write to. This reduces our syscall count considerably. In the
case of installation, we would actually stat every mountpoint twice (an
extra round for download diskspace) which means (on my system) a total
of 60 syscalls to write to 3 partitions when installing the kernel
package. This change reduces the 60 syscalls down to the expected 3.
A slight debug output change is added here to distinguish between a
mountpoint being added to our linked list and the fs info actually being
loaded.
Signed-off-by: Dave Reisner <dreisner@archlinux.org>
Signed-off-by: Dan McGee <dan@archlinux.org>
add mount_point_load_fsinfo() for platforms using getmntent().
Dan: move the #ifdef slightly so we don't have unused functions on
certain platforms (e.g., OS X).
Signed-off-by: Dave Reisner <dreisner@archlinux.org>
Signed-off-by: Dan McGee <dan@archlinux.org>
Similar to the case for makedepends, it is useful to be able to
access this information without parsing a PKGBUILD.
Signed-off-by: Allan McRae <allan@archlinux.org>
If known, callers can pass the line size to this function in order to
avoid an strlen call. Otherwise, they simply pass 0 and
_alpm_strip_newline will do the call instead.
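As a rough illustration only (the real _alpm_strip_newline lives in
libalpm's util code and its exact signature may differ), the optional
length idea looks roughly like this:

    #include <stddef.h>
    #include <string.h>

    /* Sketch: callers that already know the line length pass it in; callers
     * that do not pass 0 and the function computes it itself. Returns the
     * stripped length. */
    static size_t strip_newline(char *str, size_t len)
    {
        if(len == 0) {
            len = strlen(str);
        }
        while(len > 0 && str[len - 1] == '\n') {
            len--;
        }
        str[len] = '\0';
        return len;
    }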
Signed-off-by: Dave Reisner <dreisner@archlinux.org>
Signed-off-by: Dan McGee <dan@archlinux.org>
We inevitably call strlen() or similar on the line returned from
_alpm_archive_fgets(), so include the line size of the interesting line
in the struct.
Signed-off-by: Dave Reisner <dreisner@archlinux.org>
Signed-off-by: Dan McGee <dan@archlinux.org>
With lazy loading in place, it's now quite obvious that we aren't
necessarily checking the right mountpoint for necessary download space.
Signed-off-by: Dave Reisner <dreisner@archlinux.org>
This is useful for tools that automatically rebuild packages and
thus need to generate a build order. These entries are skipped
by pacman.
Signed-off-by: Allan McRae <allan@archlinux.org>
Signed-off-by: Dan McGee <dan@archlinux.org>
Apparently gcc 4.7 has decided that -Wshadow warnings aren't worth
reporting anymore even with the flag enabled. These were found on
an Ubuntu 10.04 install.
Signed-off-by: Dave Reisner <dreisner@archlinux.org>
Signed-off-by: Dan McGee <dan@archlinux.org>
This path is rarely (read: never) taken in any normal run of the code,
so injecting the fprintf() call everywhere with the macro is a bit
overkill. Instead, add a lightweight _alpm_alloc_fail() function that
gets called instead.
This does have a reasonable effect on the size of the generated code;
most places using the macros provided by util.c have their code size
reduced.
Signed-off-by: Dan McGee <dan@archlinux.org>
Increment the strlen() provided value by 1 for the NULL byte so we use
the right value in all three places we later reference it.
Signed-off-by: Dan McGee <dan@archlinux.org>
There is little reason here to grab 4K from the heap only to return it a
few lines later. Instead, just use the stack to hold the returned value
saving ourselves the malloc/free cycle.
Signed-off-by: Dan McGee <dan@archlinux.org>
No one seems to do this "correctly", but for the sake of having an easy
method of detecting the presence and version of libalpm on a given
system, we provide a straightforward .pc file.
Signed-off-by: Dave Reisner <dreisner@archlinux.org>
There isn't a whole lot of reason other than code clarity for this, but
it makes it a bit more obvious where multivalued attributes start.
Signed-off-by: Dan McGee <dan@archlinux.org>
Detected by clang scan-build static code analyzer.
* Don't attempt to free an uninitialized gpgme key variable
* Initialize answer variable before asking frontend a question
* Pass by reference instead of value if uninitialized fields are
possible in download signal handler code
* Ensure we never call strlen() on NULL payload->remote_name value
Signed-off-by: Dan McGee <dan@archlinux.org>
Not sure why this one wasn't showing up on x86_64, but this fixes the
compile on i686.
diskspace.c: In function 'calculate_removed_size':
diskspace.c:247:4: error: assuming signed overflow does not occur when negating a division [-Werror=strict-overflow]
cc1: all warnings being treated as errors
Signed-off-by: Dan McGee <dan@archlinux.org>
This fixes a bunch of small issues in order to enable a clean
successful build with a crazy number of GCC warning flags. A lot of
these changes are covered by -Wshadow, -Wformat-security, and
-Wstrict-overflow=5.
Signed-off-by: Dan McGee <dan@archlinux.org>
Continue the trend of not touching the environment CFLAGS, ensuring that
the user always has the final say.
Signed-off-by: Dave Reisner <dreisner@archlinux.org>
- handle gpgme libs and cflags separately rather than appending to
CFLAGS and LDFLAGS
- be consistent in AC_LINK_IFELSE check for gpgme 1.3.0 (though this is
  irrelevant since we don't actually run it)
- be consistent with usage of "have" and "with" variables (this
actually ends up reducing SLOC)
- when voluntary detection fails, unset GPGME_CFLAGS and GPGME_LIBS
- when requested support fails the version check, complain about the min
version.
Signed-off-by: Dave Reisner <dreisner@archlinux.org>
Failure isn't always due to the package file location not existing;
permission issues can also play a part on something like a FUSE-based
filesystem inaccessible to root.
Signed-off-by: Dan McGee <dan@archlinux.org>
The initial patch to implement this achieved nothing apart from
adding a configure option. This patch makes that configure option
do what it advertises.
Note that specifying any shell apart from /bin/sh causes testsuite
failures as /bin/sh is the only shell in the testing environment.
Bug-found-by: Dave Reisner <dreisner@archlinux.org>
Signed-off-by: Allan McRae <allan@archlinux.org>
Signed-off-by: Dan McGee <dan@archlinux.org>
Dan was right. This should have been FREE(), not free().
Signed-off-by: Dave Reisner <dreisner@archlinux.org>
Signed-off-by: Dan McGee <dan@archlinux.org>
Pull updates from Transifex, run update-po on all files, fix a few
errors, and push them back to Transifex.
Signed-off-by: Dan McGee <dan@archlinux.org>
For key searches only, gpg2 will fail to look up any and all keys that
are not prefixed with 0x.
Signed-off-by: Dave Reisner <dreisner@archlinux.org>
Signed-off-by: Dan McGee <dan@archlinux.org>
Using fputs should be faster as no format string parsing is required. It
also prevents silly errors related to unescaped '%' signs, and removes
the need to double them up in a lot of places.
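As a purely illustrative example (not code from this change), the '%'
hazard looks like this:

    #include <stdio.h>

    int main(void)
    {
        const char *msg = "download is 100% complete\n";

        fputs(msg, stdout);   /* no format parsing, '%' printed literally */
        printf("%s", msg);    /* safe alternative, but still parses "%s" */
        /* printf(msg); */    /* risky: '%' is treated as a conversion spec */
        return 0;
    }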
Signed-off-by: Dan McGee <dan@archlinux.org>
This removes a call to _alpm_local_db_pkgpath() as well as an access()
call when reading the local database. This appears to be code from 2006
that has stuck around. We don't need it because:
1) We never use this path except to check it via access(); however, we
are already in a readdir() loop so it exists, or at least did at the
time of the call.
2) The fopen() and other calls will fail on accessing the database files
anyway, and we need to check those for errors.
Signed-off-by: Dan McGee <dan@archlinux.org>
In case we have a mirror failure, unlink_on_fail would remain set,
causing an interrupt in a successive download attempt to be wrongly
unlinked.
This also fixes a memory leak in the url member, as we would allocate
over the previous, unfreed URL.
Signed-off-by: Dave Reisner <dreisner@archlinux.org>
To avoid conflicts on reusing a payload after a failed download, ensure
that we reset the filename hints in the payload struct prior to the
download operation.
Signed-off-by: Dave Reisner <dreisner@archlinux.org>
Signed-off-by: Dan McGee <dan@archlinux.org>
We had one stubbed out so we didn't require a translation update, and
the other is more a code style issue.
Signed-off-by: Dan McGee <dan@archlinux.org>
Unify the output for local and sync packages by only printing a
list of possible validation types for sync packages. This also
has the advantage of not printing the very long sha256 checksum
which line-wrapped on a standard-width terminal.
Signed-off-by: Allan McRae <allan@archlinux.org>
When installing a package, store information on which validation
method was used and output this on "pacman -Qi" operations.
e.g.
Validated By : SHA256 Sum
Possible values are Unknown, None, MD5 Sum, SHA256 Sum, Signature.
Dan: just a few very minor tweaks.
Signed-off-by: Allan McRae <allan@archlinux.org>
Signed-off-by: Dan McGee <dan@archlinux.org>
No new behaviour introduced, everything should work exactly as before.
Dan: refactored to use the single alpm_depend_t structure.
Signed-off-by: Benedikt Morbach <benedikt.morbach@googlemail.com>
Signed-off-by: Dan McGee <dan@archlinux.org>
This is the first step in parsing and handling optdepends. There is no
behavior change introduced in this commit; however, depends that contain
a ": " string will now be parsed as having a description and it will be
stored in the depend structure. Later patches will utilize this new
field as appropriate.
This is heavily based on the work of Benedikt, who did something similar
but introduced a new type for this rather than only a new field to the
existing type.
Heavily-influenced-by: Benedikt Morbach <benedikt.morbach@googlemail.com>
Signed-off-by: Dan McGee <dan@archlinux.org>
* updates to all translations
* minor fr, pt_BR, de, lt, sk and uk updates
* add new strings in pacman translation catalog
Signed-off-by: Dan McGee <dan@archlinux.org>
If we begin to create a file list when loading a package, but abort
because of an error to one of our goto labels, the memory used to create
the file list will leak. This is because we use a set of local variables
to hold the data, and thus _alpm_pkg_free() cannot clean up for us.
Use the file list struct on the package object as much as possible to
keep state when building the file list, thus allowing _alpm_pkg_free()
to clean up any partially built data.
Signed-off-by: Dan McGee <dan@archlinux.org>
This is easily triggered via a `pacman -Sc` operation when it attempts
to open a delta file as a package- we end up leaking loads of memory
due to us never freeing the archive object. When you have upwards of
1200 delta files in your sync database directory, this results in a
memory leak of nearly 1.5 MiB.
Also fix another memory leak noticed at the same time- we need to call
the internal _alpm_pkg_free() function, as without the origin data being
set the public free function will do nothing.
Signed-off-by: Dan McGee <dan@archlinux.org>
It seems that if we pass the permissions that we want the created
directory to have, then we should probably use it...
Signed-off-by: Allan McRae <allan@archlinux.org>
Signed-off-by: Dan McGee <dan@archlinux.org>
Add 2012 to the copyright range for all libalpm and pacman source files.
Signed-off-by: Allan McRae <allan@archlinux.org>
Signed-off-by: Dan McGee <dan@archlinux.org>
Simplify the implementation:
- allocate and manipulate a copy of the passed in path rather than
building out a path as the while loop progresses
- use simple pointer arithmetic to skip uninteresting cases
- use mkdir(3)'s return value and errno to detect failure
Signed-off-by: Dave Reisner <dreisner@archlinux.org>
Signed-off-by: Dan McGee <dan@archlinux.org>
It can happen that the to-be-downloaded file cannot be created in cachedir.
For example, I am an -Sup user, and it is convenient to set --cachedir to
/mnt/pendrive, which is a FAT filesystem, so files like
capseo-1:0.3-2-i686.pkg.tar.xz cannot be downloaded there.
Before this patch, pacman didn't give clear output about what happened when
the download code could not create the necessary file. This can be confusing
with -Su. An example output:
***
$ sudo pacman -S capseo bochs --cachedir /c/TEMP
resolving dependencies...
looking for inter-conflicts...
Targets (2): bochs-2.4.6-1 capseo-1:0.3-2
Total Download Size: 0.61 MiB
Total Installed Size: 2.61 MiB
Proceed with installation? [Y/n]
:: Retrieving packages from extra...
warning: failed to retrieve some files from extra
bochs-2.4.6-1-i686 611.5 KiB 118K/s 00:05 [------------------] 97%
error: failed to commit transaction (unexpected error)
Errors occurred, no packages were upgraded.
***
After the patch, pacman will give a more informative error message (and
pm_errno is set properly):
***
error: could not open file '/c/TEMP/capseo-1:0.3-2-i686.pkg.tar.xz.part': Invalid argument
error: failed to commit transaction (failed to retrieve some files)
***
Unfortunately, the "could not open file" error message is printed for
every mirror (that can be dozens of lines), which is ugly, but at least
informative... Without modifying the download logic (for example, by
introducing -2 return value for _alpm_download() to indicate giving up),
this ugliness cannot be eliminated.
Signed-off-by: Nagy Gabor <ngaba@bibl.u-szeged.hu>
Signed-off-by: Dan McGee <dan@archlinux.org>
Mostly a waste of time. Sure, we no longer make sure your pacman
database partition has enough space, but if you are using this option
you better know what you are doing anyway.
Signed-off-by: Dan McGee <dan@archlinux.org>
(cherry picked from commit ee96900605)
If one had a mountpoint at '/e' (don't ask), a file being installed to
'/etc' would map to it incorrectly. Ensure we do more than just prefix
matching on paths by doing some more sanity checks once the simple
strncmp() call succeeds.
Signed-off-by: Dan McGee <dan@archlinux.org>
This reverts commit f3fa77bcf1 along with
making other necessary changes to fully back this (mis)feature out until
we can do it correctly.
The quick summary here is this was not implemented correctly; provides
are not fully taken into account in this logic, and making that happen
exposes a lot of other flaws in this code that are covered up later on
in the dependency resolving process by several other pieces of
convoluted and conditional logic.
Tests have been adjusted accordingly. Some test EXISTS conditions have
been removed as we already know the package is installed locally, and we
also are checking the VERSION condition anyway.
With these two related revert commits, we do have some changes in test
pass/fail results:
* upgrade078.py: does not pass, this is due to --recursive getting
removed for -U/-S operations after this commit.
* sync302.py: the version checks have been disabled, so this test
continues to pass but has been scaled back in scope.
* sync303.py: now passes, was failing before.
* sync304.py: still failing, was failing before.
* sync305.py: now passes, was failing before.
* sync306.py: still passes, was passing before.
Signed-off-by: Dan McGee <dan@archlinux.org>
Set errno to 0 at the start of _alpm_open_archive as it is not set when
archive_read_open_fd fails. This can result in _alpm_pkg_load_internal
thinking errno == ENOENT and setting the wrong pm_errno. e.g.
Before:
> testpkg pacman-4.0.1-4-i686.pkg.tar.gz.sig
error: could not open file pacman-4.0.1-4-i686.pkg.tar.gz.sig: Unrecognized archive format
Cannot find the given file.
After:
> testpkg pacman-4.0.1-4-i686.pkg.tar.gz.sig
error: could not open file pacman-4.0.1-4-i686.pkg.tar.gz.sig: Unrecognized archive format
Cannot open the given file.
Signed-off-by: Allan McRae <allan@archlinux.org>
Signed-off-by: Dan McGee <dan@archlinux.org>
Some distributions insist on using bash-specific commands in their
install scripts under the assumption that "sh" is a symlink to bash.
This can cause issues if (e.g.) their users want to change sh to
point at another shell, such as dash, that does not support these
features. Add a configure option to explicitly set the shell being
used to run install scripts.
Signed-off-by: Allan McRae <allan@archlinux.org>
Signed-off-by: Dan McGee <dan@archlinux.org>
pacman -U <pkg> returns a bogus "could not find or read package" if the
file is on a fuse file system that doesn't allow root access. Debug
output isn't very helpful here either so we should log why the access
check failed.
The other 2 checks already log something when failing so logging a more
specific error won't hurt either.
Signed-off-by: Florian Pritz <bluewind@xinu.at>
Signed-off-by: Dan McGee <dan@archlinux.org>
The max filesize for a delta download must be the full size of the delta
file, not just what's remaining.
Fixes FS#28345
Signed-off-by: Dave Reisner <dreisner@archlinux.org>
Signed-off-by: Dan McGee <dan@archlinux.org>
This is after some manual massaging to fix issues with newlines in some
translations of the script catalogs.
Signed-off-by: Dan McGee <dan@archlinux.org>
This makes several small adjustments to our exposed method names, and in
one case, parameters. The justification here is to make methods less odd
in their naming convention. If a method takes an alpm_db_t argument, the
method should be named 'alpm_db_*', but perhaps more importantly, if it
doesn't take a database as the first parameter, it should not.
Summary of changes:
alpm_db_register_sync -> alpm_register_syncdb
alpm_db_unregister_all -> alpm_unregister_all_syncdbs
alpm_option_get_localdb -> alpm_get_localdb
alpm_option_get_syncdbs -> alpm_get_syncdbs
alpm_db_readgroup -> alpm_db_get_group
alpm_db_set_pkgreason -> alpm_pkg_set_reason
All methods keep the same argument list except for alpm_pkg_set_reason;
there we drop the 'handle' argument as it can be retrieved from the
passed in package object.
Signed-off-by: Dan McGee <dan@archlinux.org>
Don't use trailing commas in enums if people really want to use a strict
C89 compiler, and document why on earth one particular enum uses bitmask
values when it doesn't seem necessary.
With comments, shoot for more consistency. When something is a
one-liner, keep it that way and move the whole /** sequence */ to one
line. When it needs more than one line, ensure we format most of them in
a similar fashion.
Two minor function signature adjustments are made that don't change
anything other than matching the parameter name (name -> filename)
and fitting in with our coding style (type* var -> type *var).
Signed-off-by: Dan McGee <dan@archlinux.org>
The pacman-scripts catalog is omitted here due to various newline errors
I don't have the time to fix right now.
Signed-off-by: Dan McGee <dan@archlinux.org>
Very rarely a segfault would occur when removing a number of packages
due to a corrupted list for the local database (FS#27805, FS#28195).
This was caused by the alpm_list_msort function not correctly dealing
with the two new head nodes' prev values.
Signed-off-by: Allan McRae <allan@archlinux.org>
Signed-off-by: Dan McGee <dan@archlinux.org>
This moves the code for removal of local database entries right into
be_local.c, which was the last user of the rmrf() function we had in our
utility source file. We can simplify the implementation and make it
non-recursive as we know the structure of the local database entries.
Signed-off-by: Dan McGee <dan@archlinux.org>
This is particularly important in the case of FTP control connections,
which may be closed by rogue NAT/firewall devices detecting idle
connections on larger transfers which may take 5-10+ minutes.
Signed-off-by: Dave Reisner <dreisner@archlinux.org>
Signed-off-by: Dan McGee <dan@archlinux.org>
Was able to get my hands on one of these boxes today, so add yet another
new way of doing this. I'm glad these calls are so standardized. This
was compile tested on Linux and Illumos and seems to still be working in
both places.
Signed-off-by: Dan McGee <dan@archlinux.org>
Rework the frontend and backend to allow passing a ratio value in for
UseDelta rather than having a hardcoded #define-d 0.7 value always used.
This is useful for those with fast connections, who would likely benefit
from tuning this ratio to lower values; it is also useful for general
testing purposes.
The libalpm API changes for this, but we do support the old config file
format with a no-value 'UseDelta' option; in this case we simply use the
old default of 0.7.
We clamp the ratio values to a sane range between 0.0 and 2.0, allowing
ratios above 1.0 for testing purposes.
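A minimal sketch of the parsing and clamping described above; the
function name and structure are illustrative, not the actual pacman
option-handling code:

    #include <stdlib.h>

    static double parse_usedelta(const char *value)
    {
        double ratio;
        /* no value given: keep the old hardcoded default */
        if(value == NULL || *value == '\0') {
            return 0.7;
        }
        ratio = atof(value);
        /* clamp to the sane range, allowing >1.0 for testing */
        if(ratio < 0.0) {
            ratio = 0.0;
        } else if(ratio > 2.0) {
            ratio = 2.0;
        }
        return ratio;
    }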
Signed-off-by: Dan McGee <dan@archlinux.org>
The entry's name is only used when not "." or ".." so only print the
string then.
Signed-off-by: Olivier Brunel <i.am.jack.mail@gmail.com>
Signed-off-by: Dan McGee <dan@archlinux.org>
We lost this logic somewhere between the libfetch and libcurl
transition, as it existed in the internal downloader, but was pulled
back only into the sync workflow. Add a helper function that will let us
check for existence in the filecache prior to calling the downloader.
Signed-off-by: Dave Reisner <dreisner@archlinux.org>
Signed-off-by: Dan McGee <dan@archlinux.org>
We don't need to open the data to be checked if we don't have a
signature to check against, so postpone that open until we know we have
either the base64_data or a valid signature file.
Signed-off-by: Dan McGee <dan@archlinux.org>
"invalid" in this case simply means files that may or may not be
archives. Discovered via a `pacman -Sc` operation with delta files in
the package cache directory, but can be triggered if any file is passed
to `pacman -Ql` that isn't an archive, for instance, or if the sync
database file is not an archive.
Fix it up so we are more careful about calling archive_read_finish()
only on archives that are valid and have not already been closed, and
teach our archive open function to set the returned archive to NULL if
we aren't going to be returning something valid anyway.
Signed-off-by: Dan McGee <dan@archlinux.org>
In both cases we can go with the slightly leaner <stdint.h> header
include since we aren't using the print macros.
Signed-off-by: Dan McGee <dan@archlinux.org>
A look at what this does on 64 bit systems since we were using the
unnecessarily large 'unsigned long' type before even though it was 64
bits wide:
$ ~/bin/bloat-o-meter libalpm.so.old lib/libalpm/.libs/libalpm.so
add/remove: 0/0 grow/shrink: 0/4 up/down: 0/-10412 (-10412)
function old new delta
md5_finish 370 356 -14
sha2_finish 547 531 -16
md5_process 3762 2643 -1119
sha2_process 20356 11093 -9263
The code size is nearly halved in the sha2 case (44% smaller code size),
and md5 gets a nice size reduction (27% smaller) as well.
We also move base64 code to <stdint.h> types as well; we can use
'uint32_t' rather than 'unsigned long' for at least two variables in the
decode function. This doesn't net the same size benefit as the hash code
case, but it is more proper.
Signed-off-by: Dan McGee <dan@archlinux.org>
PGP keyservers are pieces of sh** when it comes to searching for
subkeys, and only allow it if you submit an 8-character fingerprint
rather than the recommended, less collision-prone 16-character
fingerprint.
Add a second remote lookup for the 8-character version of a key ID if we
don't find anything the first time we look up the key. This fixes
FS#27612 and the deficiency has been sent upstream to the GnuPG users
mailing list as well.
Signed-off-by: Dan McGee <dan@archlinux.org>
This patch changes a variety of small things related to our pkghash
implementation with an eye toward performance, especially on native
32-bit systems.
* Use `unsigned int` rather than `size_t` for hash sizes. We already
return ERANGE for any attempted creation of a hash greater than 1
million elements, so unsigned int is more than large enough for our
purposes. Switching to this type allows 32 bit systems to do native
math without helper functions from libgcc.
* _alpm_pkghash_create() now internally adds extra padding for
additional array elements, rather than that being the responsibility of
the caller.
* #define values are moved into static const values in pkghash.c; a new
`stride` value is also extracted (but remains set at 1).
* Division and modulus operators are removed from the normal find and
add paths if possible. We store the upper limit of the number of
elements in the hash so we no longer need to calculate this every
element addition. When doing wraparound position calculations, we only
apply the modulus operator if the value is greater than the number of
buckets.
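A sketch of the wraparound in the last point, using illustrative names
rather than the real pkghash.c internals:

    /* Advance a probe position without a modulus in the common case; with a
     * stride of 1 a single subtraction is enough to wrap around. */
    static unsigned int next_position(unsigned int position,
            unsigned int stride, unsigned int buckets)
    {
        position += stride;
        if(position >= buckets) {
            position -= buckets;
        }
        return position;
    }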
Signed-off-by: Dan McGee <dan@archlinux.org>
be_package.c: In function 'parse_descfile':
be_package.c:181:28: error: comparison between signed and unsigned
integer expressions [-Werror=sign-compare]
ptr - key + 2 is guaranteed to be > 0 so we can cast to size_t
Signed-off-by: Allan McRae <allan@archlinux.org>
Signed-off-by: Dan McGee <dan@archlinux.org>
Now that filelist arrays know their own size, we don't need to do the
bookkeeping we used to do when they were linked lists. Remove some of
the counter variables and use math instead.
Signed-off-by: Dan McGee <dan@archlinux.org>
This reduces the number of functions we call by log(n) in this function,
and the inlined version is trivial and barely increases the size of the
function.
Signed-off-by: Dan McGee <dan@archlinux.org>
This reduces the number of regcomp() calls when parsing delta entries in
the database from once per entry to once for the entire context handle
by storing the compiled regex data on the handle itself. Just as we do
with the cURL handle, we initialize it the first time it is needed and
free it when releasing the handle.
A few other small tweaks to the parsing function also take place,
including using the stack to store the transient and short file size
string while parsing it.
When parsing a sync database with 1378 delta entries, this reduces the
time of a `pacman -Sl deltas` operation by 50% from 0.22s to 0.12s.
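The idea, sketched with illustrative struct members and a placeholder
pattern (the real handle layout and delta regex differ):

    #include <regex.h>

    struct handle {
        regex_t delta_regex;
        int delta_regex_compiled;
    };

    /* Compile the regex the first time it is needed and reuse it afterwards;
     * regfree(&h->delta_regex) happens when the handle is released. */
    static regex_t *get_delta_regex(struct handle *h)
    {
        if(!h->delta_regex_compiled) {
            if(regcomp(&h->delta_regex, "^([^ ]+) ([0-9]+)", REG_EXTENDED) != 0) {
                return NULL;
            }
            h->delta_regex_compiled = 1;
        }
        return &h->delta_regex;
    }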
Signed-off-by: Dan McGee <dan@archlinux.org>
In commit 4c5e7af32f, we changed this code to use the regex-gathered
substrings. However, we failed to correctly store the delta file name
(leaking memory) and failed to free the temporary string used to hold
the file size string.
Signed-off-by: Dan McGee <dan@archlinux.org>
More than likely the compiler will do the three operation breakdown we
had here before (2 shifts + subtraction), but let the compiler do the
optimizations and make the actual operation more obvious. This actually
slightly shrinks the function binary size, likely due to instruction
reordering or something.
Signed-off-by: Dan McGee <dan@archlinux.org>
This eliminates the need for strtrim() usage completely, instead relying
on the fact that the only allowed delimiter between key and value is the
" = " string.
Signed-off-by: Dan McGee <dan@archlinux.org>
Extract the actual unlinking of files into a new method, which
eliminates a goto used for flow control. Also fix up a few small issues
in the code:
* Unnecessary (unsigned long) cast, use '%zd' instead
* Total up errors returned from unlink_file calls and return to caller
* Be consistent with scriptlets- we run pre_remove on dbonly, so we
should also run post_remove. Both can be disabled by way of the
--noscriptlet argument.
* Don't pass an invalid pointer to oldpkg to the event callbacks;
instead call the callback before we free the object.
Signed-off-by: Dan McGee <dan@archlinux.org>
Used in alpm_compute_md5sum() and alpm_compute_sha256sum().
Signed-off-by: Diogo Sousa <diogogsousa@gmail.com>
Signed-off-by: Dan McGee <dan@archlinux.org>
Ensures that config.h is always ordered correctly (first) in the
includes. Also means that new source files get this for free without
having to remember to add it.
We opt for -imacros over -include as it's more portable, and the
added constraint by -imacros doesn't bother us for config.h.
This also touches the HACKING file to remove the explicit mention of
config.h as part of the includes.
Signed-off-by: Dave Reisner <dreisner@archlinux.org>
Signed-off-by: Dan McGee <dan@archlinux.org>
Mostly a waste of time. Sure, we no longer make sure your pacman
database partition has enough space, but if you are using this option
you better know what you are doing anyway.
Signed-off-by: Dan McGee <dan@archlinux.org>
It is quite easy to hoist this potentially repeated computation out of
the loop; even if we don't end up using it, it is super cheap to do it
only once.
Signed-off-by: Dan McGee <dan@archlinux.org>
The return should probably be checked to ensure it's not longer than
PATH_MAX, but I have no idea what the correct behavior is when that
happens.
Signed-off-by: Dave Reisner <dreisner@archlinux.org>
Signed-off-by: Dan McGee <dan@archlinux.org>
As per HACKING file, we use 'CTRL(' rather than 'CTRL ('
Signed-off-by: Dave Reisner <dreisner@archlinux.org>
Signed-off-by: Dan McGee <dan@archlinux.org>
Since we know the length of the line, we can use this all the way
through and do a cheaper operation than strdup() by just invoking malloc
and memcpy directly.
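A sketch of the cheaper copy, assuming the length is already known; the
helper name is illustrative:

    #include <stdlib.h>
    #include <string.h>

    /* Duplicate a line of known length without strdup()'s extra strlen() pass. */
    static char *dup_line(const char *line, size_t len)
    {
        char *copy = malloc(len + 1);
        if(copy == NULL) {
            return NULL;
        }
        memcpy(copy, line, len);
        copy[len] = '\0';
        return copy;
    }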
Signed-off-by: Dan McGee <dan@archlinux.org>
We were using the size of a pointer, not the size of the whole
archive_read_buffer struct. Thanks to Clang/LLVM 3.0 and Allan/Dave in
IRC for finding this one.
Signed-off-by: Dan McGee <dan@archlinux.org>
We had a 16 KiB limit on database signatures, we should do the same here
too to have a slight sanity check, even if we can't do so for the
package itself yet.
Signed-off-by: Dan McGee <dan@archlinux.org>
We do this in several of the package duplication steps; add a helper
function for doing so to reduce some of the repetitive code.
Also add a free_deplist function for our repeated depend list free calls
of both the data and the list.
Signed-off-by: Dan McGee <dan@archlinux.org>
Made existing documentation more consistent and added
documentation where there was none. One function still
needs documentation and is marked with 'TODO'.
Signed-off-by: Andrew Gregory <andrew.gregory.8@gmail.com>
Signed-off-by: Dan McGee <dan@archlinux.org>
This moves the common setup code of about 5 different callers into one
method. Error messages will now be common and shared in all places;
several paths did not have any messages at all before.
In addition, we now pick an ideal block size for the archive read based
off the larger value of our default buffer size or the st.st_blksize
field. For a filesystem such as NFS, this is often much larger than the
default 8192- values such as 32768 and 131072 are common.
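A sketch of the block size selection, with an illustrative default
constant rather than the real one:

    #include <stddef.h>
    #include <sys/stat.h>

    #define DEFAULT_BLOCK_SIZE 8192  /* illustrative name and value */

    /* Prefer the filesystem's reported optimal I/O size when it is larger
     * than our default (e.g. 32768 or 131072 on NFS). */
    static size_t choose_block_size(int fd)
    {
        struct stat st;
        size_t block_size = DEFAULT_BLOCK_SIZE;
        if(fstat(fd, &st) == 0 && (size_t)st.st_blksize > block_size) {
            block_size = (size_t)st.st_blksize;
        }
        return block_size;
    }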
Signed-off-by: Dan McGee <dan@archlinux.org>
When doing a bare -U operation on a local package that doesn't pull in
any dependencies from the sync databases, we can get away with missing
database files. This makes the check conditional on no sync targets
found in the target list. This is not the prettiest code here so we have
a bit of hackish behavior required to straighten both the behavior and
the nonsensical error message out.
Addresses FS#26899.
Signed-off-by: Dan McGee <dan@archlinux.org>
This is consistent with the other enums and structs, and should be
slightly more readable.
Signed-off-by: Jonathan Conder <jonno.conder@gmail.com>
Signed-off-by: Dan McGee <dan@archlinux.org>
Bump the version, update the translation template files, and fill in
NEWS with relevant commits and changes since 4.0.0.
Signed-off-by: Dan McGee <dan@archlinux.org>
This is work originally provided by Sascha Kruse on FS#20360 with only
minor adjustments to the implementation. It's been expanded to cover:
NoUpgrade, NoExtract, IgnorePkg, IgnoreGroup.
Adds tests ignore008, sync139, sync502, and sync503.
Also satisfies FS#18988.
Original-work-by: Sascha Kruse <knopwob@googlemail.com>
Signed-off-by: Dave Reisner <dreisner@archlinux.org>
This is a simple change that allows comparisons to be more in line with
how other checks are done. It will be necessary for ensuing patchwork
that implements fnmatch for comparing and assumes a specific argument
ordering.
Signed-off-by: Dave Reisner <dreisner@archlinux.org>
The point of this early compare to NULL byte check was so we could bail
early and skip the strcmp() call. Given we weren't doing the check
right, this never exited early. Fix it to work as intended.
Noticed-by: Pepe Juárez <trulustapa@gmail.com>
Signed-off-by: Dan McGee <dan@archlinux.org>
This is a trivial operation that doesn't require calling a function over
and over- just do some math and indexing into a character array.
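Assuming the task here is turning raw digest bytes into a hex string (a
guess based on context), the "math and indexing" approach looks roughly
like:

    #include <stddef.h>

    /* Convert len bytes to lowercase hex; out must hold 2*len + 1 chars. */
    static void to_hex(const unsigned char *bytes, size_t len, char *out)
    {
        static const char hex_digits[] = "0123456789abcdef";
        size_t i;
        for(i = 0; i < len; i++) {
            out[2 * i] = hex_digits[bytes[i] >> 4];
            out[2 * i + 1] = hex_digits[bytes[i] & 0x0f];
        }
        out[2 * len] = '\0';
    }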
Signed-off-by: Dan McGee <dan@archlinux.org>
This gives us a bit more control over the archive reading process,
and a bit less is done behind the scenes. It also allows us to use
fstat() in preference to stat(), which should avoid some potential race
conditions.
Some reorganization is necessary to move the stat calls after the open()
calls. Error handling and cleanup in general is also improved, as we had
several potential memory and file handle leaks before in some error
paths.
Signed-off-by: Dan McGee <dan@archlinux.org>
This removes an unnecessary level of buffering. We are not doing
line-based I/O here, so we can read in blocks of 8K at a time directly
from the file.
Signed-off-by: Dan McGee <dan@archlinux.org>
Placing the strdup after the first NULL check ensures that we
continue with payload->remote_name defined.
Signed-off-by: Dave Reisner <dreisner@archlinux.org>
Dan: fix mask calculation, add it to the success/fail block instead.
Signed-off-by: Dave Reisner <dreisner@archlinux.org>
Signed-off-by: Dan McGee <dan@archlinux.org>
This takes the place of three previously used constants:
ARCHIVE_DEFAULT_BYTES_PER_BLOCK, BUFFER_SIZE, and CPBUFSIZE.
In libarchive 3.0, the first constant will be no more, so we can ensure
we are forward-compatible by removing our usage of it now. The rest are
unified for consistency.
By default, we will use the value of BUFSIZ provided by <stdio.h>, which
is 8192 on Linux. If that is undefined, a default value is provided.
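The described fallback reduces to a small preprocessor sketch (the
constant name here is illustrative):

    #include <stdio.h>

    /* Use the C library's BUFSIZ (8192 on glibc/Linux) when available,
     * otherwise fall back to a fixed default. */
    #ifdef BUFSIZ
    #define ALPM_BUFFER_SIZE BUFSIZ
    #else
    #define ALPM_BUFFER_SIZE 8192
    #endif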
Signed-off-by: Dan McGee <dan@archlinux.org>
There are two separate issues in the same block of file conflict
checking code here:
1) If realpath errored, such as when a symlink was broken, we would call
'continue' rather than simply exit this particular method of
resolution. This was likely just a copy-paste mistake as the previous
resolving steps all use loops where continue makes sense. Refactor
the check so we only proceed if realpath is successful, and continue
with the rest of the checks either way.
2) The real problem this code was trying to solve was canonicalizing
path component (e.g., directory) symlinks. The final component, if
not a directory, should not be handled at all in this loop. Add a
!S_ISLNK() condition to the loop so we only call this for real files.
There are a few other small cleanups to the debug messages that I made
while debugging this problem- we don't need to keep printing the file
name, and ensure every block that sets resolved_conflict to true prints
a debug message so we know how it was resolved.
This fixes the expected failures from symlink010.py and symlink011.py,
while still ensuring the fix for fileconflict007.py works.
Signed-off-by: Dan McGee <dan@archlinux.org>
There is some peculiar behavior going on here when a package is loaded
that has no files, as is very common in our test suite. When we enter
the realloc/sort code, a package without files will call the following:
files = realloc(NULL, 0);
One would assume this is a no-op, returning a NULL pointer, but that is
not the case and valgrind later reports we are leaking memory. Fix the
whole thing by skipping the reallocation and sort steps if the pointer
is NULL, as we have nothing to do.
Note that the package still gets marked as 'files loaded', because
although there were none, we tried and were successful.
Signed-off-by: Dan McGee <dan@archlinux.org>
First, use fstat() in preference to stat() since we already have an open
file handle. This also removes the need to check for a symlink as that
is not possible when a file is opened.
Next, use archive_entry_mode() rather than archive_entry_stat() as we
only use the mode portion of the stat struct and the call is much
cheaper. Also delay it until it is necessary.
Signed-off-by: Dan McGee <dan@archlinux.org>
Extend the return values of compute_download_size to allow callers to
know that a .part file exists for the package.
This extra value isn't currently used, but it'll be needed later on.
Signed-off-by: Dave Reisner <dreisner@archlinux.org>
Signed-off-by: Dan McGee <dan@archlinux.org>
This adds a logger to the CURLE_OK case so we can always know the return
code if it was >= 400, and debug log it regardless. Also adjust another
logger to use the cURL error message directly, as well as use fstat()
when we have an open file handle rather than stat().
Signed-off-by: Dan McGee <dan@archlinux.org>
Create a new static function called 'download_single_file' which
iterates over the servers for each payload.
Signed-off-by: Dave Reisner <dreisner@archlinux.org>
Signed-off-by: Dan McGee <dan@archlinux.org>
This is done by both the delta and regular file code, so we can extract
a little helper method. Done mostly to satisfy my "why are we repeating
code here" itch.
Signed-off-by: Dan McGee <dan@archlinux.org>
Break out the logic of finding payloads into a separate static function
to avoid nesting mayhem. After gathering all the records, download them
all at once.
Signed-off-by: Dave Reisner <dreisner@archlinux.org>
Signed-off-by: Dan McGee <dan@archlinux.org>
The absolutely terrible part about this is the failure on GPGME's part
to distinguish between "key not found" and "keyserver timeout". Instead,
it returns the same silly GPG_ERR_EOF in both cases (why isn't
GPG_ERR_TIMEOUT being used?), leaving us helpless to tell them apart.
Spit out a generic enough error message that covers both cases;
unfortunately we can't provide much guidance to the user because we
aren't sure what actually happened.
Signed-off-by: Dan McGee <dan@archlinux.org>
This logic is reused in both diskspace and downloadspace check
functions, so pull it out into its own static method.
Signed-off-by: Dave Reisner <dreisner@archlinux.org>
Signed-off-by: Dan McGee <dan@archlinux.org>
This function determines if the given cachedir has at least the given
amount of free space on it. This will be later used in the sync code to
preemptively halt downloads.
Signed-off-by: Dave Reisner <dreisner@archlinux.org>
Signed-off-by: Dan McGee <dan@archlinux.org>
We can take a large shortcut here that saves us a lot of time,
especially when upgrading packages with lots of directories. Obviously
iterating the full file list of every single package to determine if
this directory was present in any other package can take quite some time
on a system with many packages installed. We don't need to remove a
directory at all if we are upgrading a package and the version we are
moving to still had the directory.
Also make a small optimization on the package comparison- we really only
care about equality here, not the result of the compare, so we can
shortcut using our name_hash.
What kind of benefit does this give us? Oh, only a reduction from 295.7
million to 1.4 million strcmp() calls (99.5% fewer) during a
`pacman -S linux libreoffice-common` operation.
Signed-off-by: Dan McGee <dan@archlinux.org>
This is in the realm of "probably not going to happen", but if someone
were to translate "disk" to a string longer than 256 characters, we
would have a smashed/corrupted stack due to our unchecked strcpy() call.
Rework the function to always length-check the value we copy into the
hostname buffer, and do it with memcpy rather than the more cumbersome
and unnecessary snprintf.
Finally, move the magic 256 value into a constant and pass it into the
function which is going to get inlined anyway.
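A sketch of the length-checked copy, with an illustrative constant name
standing in for the magic 256:

    #include <string.h>

    #define HOSTNAME_SIZE 256

    /* Copy at most HOSTNAME_SIZE-1 bytes so a long (e.g. translated) label
     * can never overrun the fixed-size buffer. */
    static void copy_hostname(char dest[HOSTNAME_SIZE], const char *label)
    {
        size_t len = strlen(label);
        if(len > HOSTNAME_SIZE - 1) {
            len = HOSTNAME_SIZE - 1;
        }
        memcpy(dest, label, len);
        dest[len] = '\0';
    }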
Signed-off-by: Dan McGee <dan@archlinux.org>
This one is pretty darn useless. Just dereference the ->data attribute
since the type is public anyway and save yourself the function call.
Signed-off-by: Dan McGee <dan@archlinux.org>
This will be useful when extending disk space checks to free space
checking before we download package files.
Signed-off-by: Dan McGee <dan@archlinux.org>
Instead of iterating character by character, use memchr() calls to
hopefully speed up the search. A newline is the most likely culprit, so
search for that first followed by a NULL byte if there was no newline in
the buffer.
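Roughly, and only as an illustration of the search order (the real
buffer handling in libalpm differs):

    #include <string.h>

    /* Find the end of the current line: a newline is the most likely
     * terminator, so look for it first and only then for a NUL byte. */
    static size_t line_length(const char *buf, size_t bufsize)
    {
        const char *end = memchr(buf, '\n', bufsize);
        if(end == NULL) {
            end = memchr(buf, '\0', bufsize);
        }
        return end ? (size_t)(end - buf) : bufsize;
    }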
Signed-off-by: Dan McGee <dan@archlinux.org>
In the default configuration, we can enter the signing code but still
have nothing to do with GPGME- for example, if database signatures are
optional but none are present. Delay initialization of GPGME until we
know there is a signature file present or we were passed base64-encoded
data.
This also makes debugging with valgrind a lot easier as you don't have
to deal with all the GPGME error noise because their code leaks like a
sieve.
Signed-off-by: Dan McGee <dan@archlinux.org>
If you need zero-filled allocations, call CALLOC() instead.
This was from the original definition of these macros in commit
cc754bc6e3be0f3; hopefully our code is in the shape it needs to be to
switch this behavior.
Signed-off-by: Dan McGee <dan@archlinux.org>
These backup-related paths in package extraction are used on relatively
few files during the install process, so bump them off the stack and
into the heap. This removes the artificial PATH_MAX limitation on their
length as well.
Signed-off-by: Dan McGee <dan@archlinux.org>
This will always be a 64-bit signed integer rather than the variable length
time_t type. Dates beyond 2038 should be fully supported in the library; the
frontend still lags behind because 32-bit platforms provide no localtime64()
or equivalent function to convert from an epoch value to a broken down time
structure.
Signed-off-by: Dan McGee <dan@archlinux.org>
This prepares the function to handle values past year 2038. The return type
is still limited to 32-bits on 32-bit systems; this will be adjusted in a
future patch.
Signed-off-by: Dan McGee <dan@archlinux.org>
We have a few incomplete translations, but these should be addressable
before the 4.0.1 maint release that is surely not that far in the
future.
Signed-off-by: Dan McGee <dan@archlinux.org>
Similar to what we did in edd9ed6a, disconnect the relationship with our
stack allocated error buffer from the curl handle. Just as an FTP
connection might have some network chatter on teardown causing the
progress callback to be triggered, we might also hit an error condition
that causes curl to write to our (now out of scope) error buffer.
I'm unable to reproduce FS#26327, but I have a suspicion that this
should fix it.
Signed-off-by: Dave Reisner <dreisner@archlinux.org>
Signed-off-by: Dan McGee <dan@archlinux.org>
Expose the current static get_pkgpath() function internally to the rest
of the library as _alpm_local_db_pkgpath(). This allows use of this
convenience function in add.c and remove.c when forming the path to the
scriptlet location.
Signed-off-by: Dan McGee <dan@archlinux.org>
Add an is_archive parameter to reduce the amount of black magic going
on. Rework to use fewer PATH_MAX sized local variables, and simplify
some of the logic where appropriate in both this function and in the
callers where duplicate calls can be replaced by some conditional
parameter code.
Signed-off-by: Dan McGee <dan@archlinux.org>
This is a poor place for it, and it will likely move again in the
future, but it's better to have it here than as a static variable.
Initialization of this variable is now no longer necessary as it's
zeroed on creation of the payload struct.
Signed-off-by: Dave Reisner <dreisner@archlinux.org>
Signed-off-by: Dan McGee <dan@archlinux.org>
This was done to squash a memory leak in the sync database download
code. When we downloaded a database and then reused the payload struct,
we could find ourselves calling get_fullpath() for the signatures and
overwriting non-freed values we had left over from the database
download.
Refactor the payload_free function into a payload_reset function that we
can call that does NOT free the payload itself, so we can reuse payload
structs. This also allows us to move the payload to the stack in some
call paths, relieving us of the need to alloc space.
Signed-off-by: Dan McGee <dan@archlinux.org>
Rather than always initializing it on any handle creation. There are
several frontend operations (search, info, etc.) that never need the
download code, so spending time initializing this every single time is a
bit silly. This makes it a bit more like the GPGME code init path.
Signed-off-by: Dan McGee <dan@archlinux.org>
Rather than free them right away, keep the list on the transaction as
we already do with add and remove lists. This is necessary because we
may be manipulating pointers the frontend needs to refer to packages,
and we are breaking our contract as stated in the alpm_add_pkg()
documentation of only freeing packages at the end of a transaction.
This fixes an issue found when refactoring the package list display
code.
Signed-off-by: Dan McGee <dan@archlinux.org>
This is definitely not in the normal hot path, so we can afford to do
some temporary heap allocation here.
Signed-off-by: Dan McGee <dan@archlinux.org>
No wonder these were slower than expected. We were only reading 4
(32-bit) or 8 (64-bit) bytes at a time and feeding it to the hash
functions. Define a buffer size constant and use it correctly so we feed
8K at a time into the hashing algorithm.
This cut one larger `-Sw --noconfirm` operation, with nothing to
actually download (so only the integrity checking was being timed), from
3.3s to 1.7s.
This has been broken since the original commit eba521913d introducing
OpenSSL usage for crypto hash functions. Boy do I feel stupid.
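The fixed read loop amounts to something like the following sketch,
where hash_update() stands in for the real update call and the constant
name is illustrative:

    #include <stdio.h>

    #define HASH_CHUNK_SIZE 8192

    /* Feed the file to the hash in full 8 KiB chunks rather than passing a
     * pointer-sized length by mistake. */
    static void hash_file(FILE *fp, void (*hash_update)(const void *, size_t))
    {
        char buf[HASH_CHUNK_SIZE];
        size_t n;
        while((n = fread(buf, 1, sizeof(buf), fp)) > 0) {
            hash_update(buf, n);
        }
    }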
Signed-off-by: Dan McGee <dan@archlinux.org>
In the sync code, we explicitly allocated a string for this field, while
in the dload code itself it was filled in with a pointer to another
string. This led to a memory leak in the sync download case.
Make remote_name non-const and always explicitly allocate it. This patch
ensures this as well as uses malloc + snprintf (rather than calloc) in
several codepaths, and eliminates the only use of PATH_MAX in the
download code.
Signed-off-by: Dan McGee <dan@archlinux.org>
This commit was made with the intent of displaying "correctly" sorted
package lists to users. Here are some reasons I think this is incorrect:
* It is done in the wrong place. If a frontend application wants to show
a different order of packages dependent on locale, it should do that
on its own.
* Even if one wants a locale-specific order, almost all package names
are all ASCII and language agnostic, so this different comparison
makes little sense and may serve only to confuse people.
* _alpm_pkg_cmp was unlike any other comparator function. None of the
rest had any dependency on anything but the content of the structs
being compared (e.g., they only used strcmp() or other basic
comparison operators).
This reverts commit 3e4d2c3aa6.
Signed-off-by: Dan McGee <dan@archlinux.org>
There was only one simple to handle case where we left a field
uninitialized; set it to NULL and use malloc() instead.
Signed-off-by: Dan McGee <dan@archlinux.org>
In every case we were calling calloc, the struct we allocated (or the
memory to be used) is fully specified later in the method.
For alpm_list_t allocations, we always set all of data, next, and prev.
For list copying and transforming to an array, we always copy the entire
data element, so no need to zero it first.
Signed-off-by: Dan McGee <dan@archlinux.org>
I'm really good at breaking this on a regular basis. If only we had some
sort of automated testing for this...
Signed-off-by: Dan McGee <dan@archlinux.org>
In the sync download code, we added an early check in 6731d0a940 for
sync download server existence so we wouldn't show the same error over
and over for each file to be downloaded. Move this check into the
download block so we only run it if there are actually files that need
to be downloaded for this repository.
Signed-off-by: Dan McGee <dan@archlinux.org>
When we switched to a file object and not just a simple string, we missed an
update along the way here in target-target conflicts. This patch looks
large, but it really comes down to one errant (char *) cast before that has
been reworked to explicitly point to the alpm_file_t object. The rest is
simply code cleanup.
Signed-off-by: Dan McGee <dan@archlinux.org>
A few parameters were outdated or wrongly named, and a few things were
explicitly linked that Doxygen wasn't able to resolve.
Signed-off-by: Dan McGee <dan@archlinux.org>
We returned the right error code but never set the flags accordingly.
Also, now that we can bail early, ensure we set the error code.
Signed-off-by: Dan McGee <dan@archlinux.org>
This adds calls to gpgme_op_import_result() which we were not looking at
before to ensure the key was actually imported. Additionally, we do some
preemptive checks to ensure the keyring is even writable if we are going
to prompt the user to add things to it.
Signed-off-by: Dan McGee <dan@archlinux.org>
This also fixes a segfault found by dave when key_search is
unsuccessful; the key_search return code documentation has also been
updated to reflect reality.
Signed-off-by: Dan McGee <dan@archlinux.org>
Because we aren't using gpgv and a dedicated keyring that is known to be
all safe, we should honor this flag when it is set on a given key in the
keyring and refuse to use that key. This prevents a key a user does not
want to be used from being reimported- instead of deleting it, one
should mark it as disabled.
Signed-off-by: Dan McGee <dan@archlinux.org>
Add two new static methods, key_search() and key_import(), to our
growing list of signing code.
If we come across a key we do not have, attempt to look it up remotely
and ask the user if they wish to import said key. If they do, flag the
validation process as a potential 'retry', meaning it might succeed the
next time it is run.
These depend on you having a 'keyserver hkp://foo.example.com' line in
your gpg.conf file in your gnupg home directory to function.
Signed-off-by: Dan McGee <dan@archlinux.org>
This moves the result processing out of the validation check loop itself
and into a new loop. Errors will be presented to the user one-by-one
after we fully complete the validation loop, so they no longer overlap
the progress bar.
Unlike the database validation, we may have several errors to process in
sequence here, so we use a function-scoped struct to track all the
necessary information between seeing an error and asking the user about
it.
The older prompt_to_delete() callback logic is still kept, but only for
checksum failures. It is debatable whether we should do this at all or
just delegate said actions to the user.
Signed-off-by: Dan McGee <dan@archlinux.org>
This is for eventual use by the PGP key import code. Breaking this into
a separate commit now makes the following patches a bit easier to
understand.
Signed-off-by: Dan McGee <dan@archlinux.org>
This allows a frontend program to query, at runtime, what the library
supports. This can be useful for sanity checking during config parsing,
for example requiring a downloader or disallowing signature settings.
Signed-off-by: Dan McGee <dan@archlinux.org>
This takes the library's hidden default out of the equation: hidden in
the sense that we can't even find out what it is until we create a
handle. This is a chicken-and-egg problem where we have probably already
parsed the config, so it is hard to get the bitmask value right.
Move it to the frontend so the caller can do whatever the heck they
want. This also exposes a shortcoming where the frontend doesn't know if
the library even supports signatures, so we should probably add an
alpm_capabilities() method which exposes things like HAS_DOWNLOADER,
HAS_SIGNATURES, etc.
Signed-off-by: Dan McGee <dan@archlinux.org>
This allows us to do all delta verification up front, followed by
whatever needs to be done with any found errors. In this case, we call
prompt_to_delete() for each error.
Add back the missing EVENT(ALPM_EVENT_DELTA_INTEGRITY_DONE) that
accidentally got removed in commit 062c391919.
Remove use of *data; we never even look at the stuff in this array for
the error code we were returning and this would be much better handled
by one callback per error anyway, or at least some strongly typed return
values.
Signed-off-by: Dan McGee <dan@archlinux.org>
If siglist->results wasn't a NULL pointer, we would try to free it
anyway, even if siglist->count was zero. Only attempt to free this
pointer if we had results and the pointer is valid.
Signed-off-by: Dan McGee <dan@archlinux.org>
This one can be overwhelming when reading debug output from a very large
package. We already have the output of each extracted file so we
probably can do without this in 99.9% of cases.
Signed-off-by: Dan McGee <dan@archlinux.org>
This adds two new static methods, check_validity() and load_packages(),
to sync.c which are simply code fragments pulled out of our
do-everything sync commit code.
Signed-off-by: Dan McGee <dan@archlinux.org>
In reality, there is no retrying that happens as of now because we don't
have any import or changing of the keyring going on, but the code is set
up so we can drop this in our new _alpm_process_siglist() function. Wire
up the basics to the sync database validation code, so we see something
like the following:
$ pacman -Ss unknowntrust
error: core: signature from "Dan McGee <dpmcgee@gmail.com>" is unknown trust
error: core: signature from "Dan McGee <dpmcgee@gmail.com>" is unknown trust
error: database 'core' is not valid (invalid or corrupted database (PGP signature))
$ pacman -Ss missingsig
error: core: missing required signature
error: core: missing required signature
error: database 'core' is not valid (invalid or corrupted database (PGP signature))
Yes, there is some double output, but this should be fixable in the
future.
Signed-off-by: Dan McGee <dan@archlinux.org>
This adds some new callback event and progress codes for package
loading, which was formerly bundled in with package validation before.
The main sync.c loop where loading occurred is now two loops running
sequentially. The behavior should not change with this patch outside of
progress and event display; more changes will come in following patches.
Signed-off-by: Dan McGee <dan@archlinux.org>
_alpm_pkg_load_internal() was becoming a monster. Extract the top bit of
the method that dealt with checksum and signature validation into a
separate method that should be called before one loads a package to
ensure it is valid.
Signed-off-by: Dan McGee <dan@archlinux.org>
This is always true at the end since we return early if we couldn't
create the tmpdir, so it is totally unnecessary.
Signed-off-by: Dan McGee <dan@archlinux.org>
We shouldn't be going through the accessor that does a bunch of
unnecessary legwork, including potentially loading the pkgcache right
before we free it.
Signed-off-by: Dan McGee <dan@archlinux.org>
Rather than using a string-based path, we can restore the working
directory via a file descriptor and use of fchdir().
From the getcwd manpage:
Opening the current directory (".") and calling fchdir(2) to
return is usually a faster and more reliable alternative when
sufficiently many file descriptors are available.
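A minimal sketch of the descriptor-based save/restore (error handling
trimmed, names illustrative):

    #include <fcntl.h>
    #include <unistd.h>

    static int run_in_dir(const char *dir)
    {
        /* remember the current directory as a descriptor, not a path */
        int cwdfd = open(".", O_RDONLY);
        if(cwdfd < 0 || chdir(dir) != 0) {
            if(cwdfd >= 0) {
                close(cwdfd);
            }
            return -1;
        }
        /* ... do work inside 'dir' ... */
        if(fchdir(cwdfd) != 0) {
            /* could not restore the original working directory */
        }
        close(cwdfd);
        return 0;
    }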
Signed-off-by: Dan McGee <dan@archlinux.org>
We did a lot of both malloc-ing and stack printing to form some paths in
this code. Attempt to unify it all into the one get_pkgpath() method by
adding an optional third "filename" parameter, and form the necessary
path string all in one go.
Signed-off-by: Dan McGee <dan@archlinux.org>
1. Don't run it if something failed in package removal- this mirrors
what we already do in sync transactions.
2. Don't run it if we are invoking it for the replaces removal bit of a
sync transaction- it doesn't make sense to run ldconfig halfway through
a sync install; we should only run it once at the end.
Signed-off-by: Dan McGee <dan@archlinux.org>
We add them to this list with the root path not appended; we should be
searching for them this way as well.
Signed-off-by: Dan McGee <dan@archlinux.org>
We checked the (fgets == NULL and !feof) case, but never actually bailed
out of the loop if we were at the end of the file, causing infinite
looping.
Signed-off-by: Dan McGee <dan@archlinux.org>
This shouldn't really be declared with const, and causes a compile error
when -Wcast-qual is used. Remove the const specifier from the function
specification and all implementations.
Also fix one other trivial -Wcast-qual warning in _alpm_db_cmp().
Signed-off-by: Dan McGee <dan@archlinux.org>
We never ended up using or really needing this; kill it for now knowing
it is in git history if ever needed again.
Signed-off-by: Dan McGee <dan@archlinux.org>
This function doesn't exist on OSX. Since there aren't any other
candidates in alpm for which this function would make sense to use,
simply replace the function call with a loop that does the equivalent.
Signed-off-by: Dave Reisner <dreisner@archlinux.org>
Similar to an earlier commit which accounts for .part files for full
packages, calculate the download_size for deltas keeping in mind the
possibility of a partial transfer.
Signed-off-by: Dave Reisner <dreisner@archlinux.org>
Signed-off-by: Dan McGee <dan@archlinux.org>
Check for the existence of a partial download of a package file before
jumping to delta calculations. Currently, if there were 10MiB remaining
in a 100MiB download, the values passed to the front end would not
reflect this.
Refactored from an old patch originally by Dan.
Signed-off-by: Dave Reisner <dreisner@archlinux.org>
Signed-off-by: Dan McGee <dan@archlinux.org>
If ~/.netrc exists and has credentials for the hostname requested in a
download, they will be provided in an HTTP auth request. This can still
be overridden by explicitly declaring user:pass in the URL.
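In libcurl terms this is a per-handle option; a minimal sketch, not the
actual dload.c code:

    #include <curl/curl.h>

    /* With CURL_NETRC_OPTIONAL, ~/.netrc credentials are used only when the
     * URL itself carries none, so user:pass in the URL still wins. */
    static void enable_netrc(CURL *curl)
    {
        curl_easy_setopt(curl, CURLOPT_NETRC, (long)CURL_NETRC_OPTIONAL);
    }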
Signed-off-by: Dave Reisner <dreisner@archlinux.org>
Signed-off-by: Dan McGee <dan@archlinux.org>
This adjusts type usage to match POSIX provided types from
<sys/types.h> rather than assuming everything will fit in a long or
unsigned long. Use fsblkcnt_t (unsigned) and blkcnt_t (signed) as
appropriate. These are affected the same way off_t is on 32-bit
platforms, where the types are extended to 64 bits if large file support
is enabled.
Because most numbers here are block counts, this isn't nearly as
pressing as using a 32-bit variable for file sizes, where anything over
2GiB can burn you; we can likely support files at least 512 (but
typically 4096) times larger.
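For illustration, a statvfs()-based free-space check using the POSIX
types mentioned above (not the actual diskspace.c code):

    #include <sys/statvfs.h>

    /* Block counts come back as fsblkcnt_t, which widens to 64 bits when
     * large file support is enabled, just like off_t. */
    static int free_bytes(const char *path, unsigned long long *out)
    {
        struct statvfs buf;
        fsblkcnt_t avail;
        if(statvfs(path, &buf) != 0) {
            return -1;
        }
        avail = buf.f_bavail;  /* blocks available to unprivileged users */
        *out = (unsigned long long)avail * buf.f_frsize;
        return 0;
    }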
Signed-off-by: Dan McGee <dan@archlinux.org>