* tail: fix race condition (fix #3765)
A race condition (RC) can occur if changes to a path
happen after the initial print loop in `uu_tail()` but before the
path is added to the notify `Watcher` thread in `follow()`.
To minimize the window in which the RC can occur, this change moves starting
the `Watcher` thread and adding paths to it from `follow()` into the initial
print loop in `uu_tail()`.
Additionally, to make sure the RC cannot happen in
"gnu/tests/tail-2/F-headers.sh", the error message that is used as a trigger
in this test is delayed until the path has been added to the `Watcher` thread.
* build-gnu: remove workarounds for tail
Remove workarounds for "tests/tail-2/F-headers.sh" which are
(presumably) no longer needed because of the race condition fix.
* tail: refactor to minimize chances of RC
Move "adding paths to Watcher thread" to its own loop and run this loop
before the initial tail-print-loop in order to minimize the window for
race conditions.
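The resulting ordering can be sketched roughly as follows, using the `notify` crate (version 5 API assumed). The paths and the printed headers are placeholders, not the actual `uu_tail` code; the point is only that every path is registered with the `Watcher` before the initial print loop runs.
```
use notify::{recommended_watcher, RecursiveMode, Watcher};
use std::path::Path;

fn main() -> notify::Result<()> {
    // Placeholder paths; in tail these come from the command line.
    let paths = [Path::new("a.log"), Path::new("b.log")];

    // Start the watcher and register every path *before* the initial
    // print loop, so changes made while printing are still observed.
    let mut watcher = recommended_watcher(|res| match res {
        Ok(event) => println!("event: {event:?}"),
        Err(e) => eprintln!("watch error: {e:?}"),
    })?;
    for &path in &paths {
        watcher.watch(path, RecursiveMode::NonRecursive)?;
    }

    // The initial print loop runs only after the paths are being watched;
    // the real tail would then block in the follow loop handling events.
    for path in &paths {
        println!("==> {} <==", path.display());
    }
    Ok(())
}
```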
* cp: Refactor `reflink`/`sparse` handling to enable `--sparse` flag
`--sparse` and `--reflink` options have a lot of similarities:
- They have similar options (`always`, `never`, `auto`)
- Both need OS specific handling
- They can be mutually exclusive
Prior to this change, `sparse` was defined as a `CopyMode`, but `reflink`
wasn't. Given the similarities, it makes sense to handle them similarly.
The idea behind this change is to move all OS-specific file copy
handling into the `copy_on_write_*` functions. Those functions then
dispatch to the correct logic depending on the arguments (at the moment,
the tuple `(reflink, sparse)`).
Also, move the handling of `--reflink=never` from `copy_file` to the
`copy_on_write_*` functions, at the cost of a bit of code duplication,
to allow `copy_on_write_*` to handle all cases (and later handle
`--reflink=never` with `--sparse`).
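A rough sketch of what such a dispatch can look like. The enum and helper names below are made up for illustration and do not match the actual `uu_cp` types; the stubs only model the shape of the fallback logic.
```
use std::io;
use std::path::Path;

// Illustrative mode enums mirroring the `--reflink` / `--sparse` values;
// the real types in uu_cp may differ.
#[allow(dead_code)]
#[derive(Clone, Copy)]
enum ReflinkMode { Always, Auto, Never }
#[allow(dead_code)]
#[derive(Clone, Copy)]
enum SparseMode { Always, Auto, Never }

// Stand-in for the block-by-block sparse copy helper.
fn sparse_copy(source: &Path, dest: &Path) -> io::Result<()> {
    std::fs::copy(source, dest).map(|_| ())
}

// Stand-in for a clone-based copy; this stub only models the `auto`
// fallback to a plain copy when cloning is not possible.
fn reflink_copy(source: &Path, dest: &Path, fallback: bool) -> io::Result<()> {
    if fallback {
        std::fs::copy(source, dest).map(|_| ())
    } else {
        Err(io::Error::new(io::ErrorKind::Unsupported, "reflink unsupported"))
    }
}

// One platform-specific entry point handles every (reflink, sparse)
// combination, including `--reflink=never`, so `copy_file` does not need to.
fn copy_on_write_linux(
    source: &Path,
    dest: &Path,
    reflink: ReflinkMode,
    sparse: SparseMode,
) -> io::Result<()> {
    match (reflink, sparse) {
        (ReflinkMode::Never, SparseMode::Always) => sparse_copy(source, dest),
        (ReflinkMode::Never, _) => std::fs::copy(source, dest).map(|_| ()),
        (ReflinkMode::Always, _) => reflink_copy(source, dest, false),
        (ReflinkMode::Auto, _) => reflink_copy(source, dest, true),
    }
}

fn main() -> io::Result<()> {
    // Placeholder paths.
    copy_on_write_linux(Path::new("src.bin"), Path::new("dst.bin"),
                        ReflinkMode::Auto, SparseMode::Never)
}
```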
* cp: Implement `--sparse` flag
This begins to address #3362
At the moment, only the `--sparse=always` logic matches the requirement
from the GNU cp info page, i.e. always make holes in the destination when
possible.
Sparse copy is done by copying the source to the destination block by
block (using the destination filesystem's block size). If a block holds
only NUL bytes, we don't write it to the destination.
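A minimal sketch of that loop (not the exact code in this change); `block_size` is assumed to come from the destination filesystem, and the final `set_len` makes sure a file ending in a hole still gets the right length.
```
use std::fs::File;
use std::io::{self, Read, Seek, SeekFrom, Write};
use std::path::Path;

fn sparse_copy(source: &Path, dest: &Path, block_size: usize) -> io::Result<()> {
    let mut reader = File::open(source)?;
    let mut writer = File::create(dest)?;
    let mut buf = vec![0u8; block_size];
    let mut len = 0u64;

    loop {
        let n = reader.read(&mut buf)?;
        if n == 0 {
            break;
        }
        if buf[..n].iter().all(|&b| b == 0) {
            // Block is all NUL bytes: seek past it instead of writing,
            // leaving a hole in the destination.
            writer.seek(SeekFrom::Current(n as i64))?;
        } else {
            writer.write_all(&buf[..n])?;
        }
        len += n as u64;
    }
    // Extend the file in case the last blocks were holes.
    writer.set_len(len)?;
    Ok(())
}

fn main() -> io::Result<()> {
    // 4096 is a typical filesystem block size; paths are placeholders.
    sparse_copy(Path::new("input.img"), Path::new("output.img"), 4096)
}
```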
About `--sparse=auto`: according to the GNU cp info page, the destination
file will be made sparse if the source file is sparse as well. The next
step is likely to use `lseek` with `SEEK_HOLE` to detect whether the source
file has holes. Currently, this has the same behaviour as
`--sparse=never`. This `SEEK_HOLE` logic can also be applied to
`--sparse=always` to improve performance when copying sparse files.
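One possible way to do that hole detection on Linux, sketched with the `libc` crate (this is an assumption about a future step, not code in this change):
```
use std::fs::File;
use std::io;
use std::os::unix::io::AsRawFd;
use std::path::Path;

// Returns true if `path` contains at least one hole. Every file has a
// virtual hole at EOF, so SEEK_HOLE reporting an offset before EOF means
// there is a real hole.
fn has_hole(path: &Path) -> io::Result<bool> {
    let file = File::open(path)?;
    let len = file.metadata()?.len() as i64;
    let offset = unsafe { libc::lseek(file.as_raw_fd(), 0, libc::SEEK_HOLE) };
    if offset < 0 {
        return Err(io::Error::last_os_error());
    }
    Ok(offset < len)
}

fn main() -> io::Result<()> {
    // Placeholder path.
    println!("sparse source: {}", has_hole(Path::new("input.img"))?);
    Ok(())
}
```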
About `--sparse=never`: from my understanding, it is not guaranteed that
Rust's `fs::copy` will always produce a file with no holes, as
["platform-specific behavior may change in the
future"](https://doc.rust-lang.org/std/fs/fn.copy.html#platform-specific-behavior)
About other platforms:
- `macos`: The solution may be to use `fcntl` command `F_PUNCHHOLE`.
- `windows`: I only see `FSCTL_SET_SPARSE`.
This should pass the following GNU tests:
- `tests/cp/sparse.sh`
- `tests/cp/sparse-2.sh`
- `tests/cp/sparse-extents.sh`
- `tests/cp/sparse-extents-2.sh`
`sparse-perf.sh` needs `--sparse=auto`, and in particular a way to skip
holes in the source file.
Co-authored-by: Sylvestre Ledru <sylvestre@debian.org>
* ls: Implement --zero flag. (#2929)
This flag can be used to provide easily machine-parseable output from
ls, as discussed in the GNU bug report
https://debbugs.gnu.org/cgi/bugreport.cgi?bug=49716.
There are some peculiarities with this flag:
- The current GNU ls implementation of the `--zero` flag implies some
other flags. Those can be overridden by setting those flags after
`--zero` on the command line.
- This flag is not compatible with `--dired`. This patch is not 100%
compliant with GNU ls: GNU ls `--zero` will fail if `--dired` and
`-l` are set, while with this patch only `--dired` is needed for the
command to fail.
We also add the `--dired` flag to the parser, with no additional behaviour
change.
Testing done:
```
$ bash util/build-gnu.sh
[...]
$ bash util/run-gnu-test.sh tests/ls/zero-option.sh
[...]
PASS: tests/ls/zero-option.sh
============================================================================
Testsuite summary for GNU coreutils 9.1.36-8ec11
============================================================================
# TOTAL: 1
# PASS: 1
# SKIP: 0
# XFAIL: 0
# FAIL: 0
# XPASS: 0
# ERROR: 0
============================================================================
```
* Use the US way to spell Behavior
* Fix formatting with cargo fmt -- tests/by-util/test_ls.rs
* Simplify --zero flag overriding logic by using index_of
Also, allow multiple --zero flags, as this is possible with the GNU ls
command. Only the last one is taken into account.
Co-authored-by: Sylvestre Ledru <sledru@mozilla.com>
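The actual change relies on clap's `ArgMatches::index_of`; below is a dependency-free sketch of the same "last flag wins" idea, with `-l` standing in for one of the flags implied by `--zero`.
```
// Compare the positions of the last occurrences of two flags: a flag given
// after `--zero` overrides what `--zero` implies, and repeated `--zero`
// flags are fine because only the last position matters.
fn last_position(args: &[String], flag: &str) -> Option<usize> {
    args.iter().rposition(|a| a == flag)
}

fn main() {
    let args: Vec<String> = std::env::args().collect();
    let zero = last_position(&args, "--zero");
    let long = last_position(&args, "-l");
    let zero_wins = match (zero, long) {
        (Some(z), Some(l)) => z > l,
        (Some(_), None) => true,
        _ => false,
    };
    println!("--zero overrides -l: {zero_wins}");
}
```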
* Speed up sum by using reasonable read buffer sizes.
Use a 4K read buffer for each of the checksum functions, which seems
reasonable. This improves the performance of BSD checksums on
odyssey1024.txt from 399ms to 325ms on my laptop, and of SysV
checksums from 242ms to 67ms.
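A sketch of the shape of that change for the BSD checksum: reading in fixed 4 KiB chunks rather than byte by byte. The buffer size is the point here; the rest is a simplified stand-in for the uu_sum code.
```
use std::io::{self, Read};

fn bsd_checksum(mut reader: impl Read) -> io::Result<(u16, usize)> {
    let mut checksum: u16 = 0;
    let mut bytes_read = 0usize;
    let mut buf = [0u8; 4096]; // 4 KiB read buffer: far fewer read() calls
    loop {
        let n = reader.read(&mut buf)?;
        if n == 0 {
            break;
        }
        bytes_read += n;
        for &byte in &buf[..n] {
            // BSD sum: rotate the 16-bit checksum right by one bit, then add.
            checksum = checksum.rotate_right(1).wrapping_add(u16::from(byte));
        }
    }
    Ok((checksum, bytes_read))
}

fn main() -> io::Result<()> {
    let stdin = io::stdin();
    let (checksum, len) = bsd_checksum(stdin.lock())?;
    println!("checksum: {checksum}, bytes: {len}");
    Ok(())
}
```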
* Add BENCHMARKING.md for `sum`.
* Add comment regarding block sizes.
* Improve portability of BENCHMARKING.md
* Make `div_ceil` const and enhance comment.
Byte, character, and line counting can all be done on the raw bytes
of the incoming stream without decoding the Unicode characters. This
fact was previously exploited in specific fast paths for counting
characters and counting lines. This change unifies those fast paths into
a single shared fast path, using const generics to specialize the
function for each use case. This ensures that all combinations of these
Unicode-oblivious fast paths benefit from the same optimizations.
On my laptop, this speeds up `wc -clm odyssey1024.txt` from 840ms to
120ms. I experimented with using a filter loop for line counting, but
continuing to use the bytecount crate came out ahead by a significant
margin.
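A simplified sketch of what such a const-generic fast path can look like (the real `uu_wc` code differs, but it likewise relies on the `bytecount` crate for the line count):
```
use std::io::{self, Read};

// One shared loop over the raw bytes; the const parameters let the compiler
// generate a specialized version for each combination of requested counts.
fn count_raw<const LINES: bool, const CHARS: bool, R: Read>(
    mut reader: R,
) -> io::Result<(usize, usize, usize)> {
    let (mut bytes, mut chars, mut lines) = (0, 0, 0);
    let mut buf = [0u8; 16 * 1024];
    loop {
        let n = reader.read(&mut buf)?;
        if n == 0 {
            break;
        }
        bytes += n;
        if LINES {
            lines += bytecount::count(&buf[..n], b'\n');
        }
        if CHARS {
            // In UTF-8, every byte except continuation bytes (0b10xxxxxx)
            // starts a character, so no decoding is needed to count them.
            chars += buf[..n].iter().filter(|&&b| (b & 0xC0) != 0x80).count();
        }
    }
    Ok((bytes, chars, lines))
}

fn main() -> io::Result<()> {
    // Roughly `wc -clm`: lines, characters and bytes in a single pass.
    let stdin = io::stdin();
    let (bytes, chars, lines) = count_raw::<true, true, _>(stdin.lock())?;
    println!("{lines} {chars} {bytes}");
    Ok(())
}
```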