While writing the completion script for openocd, I found the need to
complete paths on the command line as if they were relative to a
path other than the current `$PWD`. Given that `$PWD` is currently
global in fish (i.e. there is no side-effect-free `cd` within a
subshell), this is probably good to have for other completions too.
This also fixes a bug in the support for explicitly supplying the
description for completions via an `$argv` parameter: the code prefixed
the description with `\t` (which is correct), but did so in a local
scope inside an `if` statement, so the change never took effect and,
in the output, the description was concatenated directly to the
completions instead of being separated by a tab.
I incorrectly assumed that pandoc uses `XDG_CONFIG_HOME`; it turns out
the path is hard-coded as `$HOME/.pandoc` unless explicitly specified
otherwise on the command line.
Limit the `fish_wcstod` fast path to ASCII digits only, to fix the problem
observed in the discussion for a700acadfa,
where `LANG=de_DE.UTF-8` would cause `test` to interpret commas instead of
periods inside floating-point values.
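A minimal sketch of the gate, under a simplified signature (this is not fish's exact code): only input consisting purely of ASCII digits takes the fast path, since without a radix character the locale cannot affect the parse; everything else is parsed with the classic "C" locale pinned, so `1,5` is never read as 1.5.

```cpp
#include <algorithm>
#include <cwchar>
#include <locale>
#include <sstream>
#include <string>

double fish_wcstod_sketch(const std::wstring &str) {
    bool ascii_digits_only = std::all_of(
        str.begin(), str.end(), [](wchar_t c) { return c >= L'0' && c <= L'9'; });
    if (ascii_digits_only) {
        // Fast path: no radix character can appear, so the current locale
        // cannot change the result.
        return std::wcstod(str.c_str(), nullptr);
    }
    // Slow path: pin the classic "C" locale so '.' is always the decimal
    // separator. fish uses a wcstod_l()-style call here; a stream imbued
    // with std::locale::classic() has the same effect for this sketch.
    std::wistringstream in(str);
    in.imbue(std::locale::classic());
    double value = 0.0;
    in >> value;
    return value;
}
```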
This merges a stack that removes the WAIT_BY_PROCESS and NESTED job flags.
Instead, jobs are taught about their parents, and the parents are
interrogated to determine whether they are fully constructed, and therefore
whether it is safe to call waitpid().
Jobs are now aware of their parent jobs and can interrogate them to
determine whether every job in the chain is fully constructed.
Remove the flags and the static stacks that manipulated them.
The parent of a job is the parent pipeline that executed the function or
block corresponding to this job. This will help simplify
process_mark_finished_children().
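Roughly, the shape of the change (a sketch with fields simplified; fish's real job type carries much more state):

```cpp
#include <memory>

// Each job records the job that was executing when it was created;
// walking that chain replaces the removed WAIT_BY_PROCESS/NESTED flags.
struct job_t {
    std::shared_ptr<job_t> parent;  // pipeline that ran our function/block, or null
    bool constructed = false;       // true once all processes are launched
};

// Waiting on the job's pgrp is only safe once every job in the chain is
// fully constructed; otherwise a process could be reaped before a sibling
// has joined the process group.
bool job_chain_is_fully_constructed(const job_t *j) {
    for (; j != nullptr; j = j->parent.get()) {
        if (!j->constructed) return false;
    }
    return true;
}
```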
Prior to this fix, `cd`ing into a symlink and then completing `..` would
complete from the physical directory instead of the logical directory,
producing paths that could not actually be `cd`'d to. Teach `cd` completion
to use the logical directory.
To speed things up, don't attempt to complete against package names if the
user is trying to enter a switch.
Also work around #5267 by not wrapping unfiltered `all-the-package-names`
calls in a function.
`select_try()` returned `IO_ERROR` to indicate that there are no file
descriptors from which to read. Name this return value properly.
Also migrate the type into `proc.cpp`, since it's not used outside that
file.
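Something along these lines (identifiers are illustrative, not necessarily fish's exact ones):

```cpp
// The old IO_ERROR value never signaled an I/O error; it signaled that no
// file descriptors remained to read from. The rename makes that explicit.
enum class select_try_t {
    DATA_READY,      // at least one fd has data available
    TIMEOUT,         // select() timed out without data
    NO_FDS_TO_READ,  // formerly "IO_ERROR": nothing left to read from
};
```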
This is the opposite of the usual "pipe into grep-the-function" case: here
my `pbpaste` emitted enough content to exceed the OS pipe buffer. The
`block_on_fg` condition was just `send_sigcont` in the original job-control
rewrite, and it was incorrect to substitute `WAIT_BY_PROCESS` for it on its
own.
However, this requires always blocking when `select_try()` returns an
interrupted/incomplete read; otherwise fish doesn't block and keeps running
in a tight loop in the background (and, under higher debug levels,
incorrectly writes to a terminal it doesn't own). I *think* that is OK.
Instantiate the `std::locale` used by the character-comparison callback
outside the lambda and take a reference to it, instead of creating the
locale object for each character in the sequence.
This is part of a very tight loop with lots of inputs during the
evaluation of fuzzy string matches for completions/autosuggestions, and it
is worth optimizing.
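A before/after sketch of the hoist (the comparison callback itself is illustrative, not fish's exact code):

```cpp
#include <algorithm>
#include <locale>
#include <string>

// Before: a fresh std::locale is constructed for every character pair,
// which dominates a tight fuzzy-match loop.
bool iequals_slow(const std::wstring &a, const std::wstring &b) {
    return a.size() == b.size() &&
           std::equal(a.begin(), a.end(), b.begin(), [](wchar_t x, wchar_t y) {
               std::locale loc;  // constructed on every call
               return std::tolower(x, loc) == std::tolower(y, loc);
           });
}

// After: construct the locale once and capture it by reference.
bool iequals_fast(const std::wstring &a, const std::wstring &b) {
    const std::locale loc;  // one instance for the whole sequence
    return a.size() == b.size() &&
           std::equal(a.begin(), a.end(), b.begin(), [&loc](wchar_t x, wchar_t y) {
               return std::tolower(x, loc) == std::tolower(y, loc);
           });
}
```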
This was introduced in 1b1bc28c0a but did
not cause any problems until the job control refactor, which caused it
to attempt to signal the calling `exec` builtin's own (invalid) pgrp
with SIGHUP.
Also improve debugging for `j->signal()` failures by printing the signal
we tried to send when an error occurs, rename the function to
`hup_background_jobs`, and move it from `reader.h`/`reader.cpp` to
`proc.h`/`proc.cpp`.
When exec_job encounters a function, a new context is created for its
execution from the ground up, new job and all, ultimately resulting in a
recursive call to exec_job from the same (main) thread.
Since exec_job creates a new pgrp and gives it control of the terminal
(tcsetpgrp & co.) each time it encounters a new job with external commands
that needs terminal control, this effectively takes control away from the
previously spawned external commands, which may be (and likely are)
expecting to still have terminal access.
This commit attempts to detect when such a situation arises: on a
recursive call to exec_job (which can only happen if the pipeline included
a function), it borrows the pgrp from the (necessarily still-active)
parent job and spawns the new external commands into it.
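A simplified sketch of the idea (types and fields are illustrative): on the nested call, reuse the parent job's pgrp rather than creating a new one and calling tcsetpgrp() again, which would yank the terminal away from the external commands the parent pipeline already spawned.

```cpp
#include <sys/types.h>

struct job_t {
    job_t *parent = nullptr;  // non-null when exec_job recursed through a function
    pid_t pgid = -1;          // -1: the first external process founds the pgrp
};

void assign_pgrp(job_t *job) {
    if (job->parent != nullptr && job->parent->pgid != -1) {
        // Nested case: borrow the (necessarily still-active) parent's
        // process group; terminal ownership is left untouched.
        job->pgid = job->parent->pgid;
    }
    // Otherwise pgid stays -1; the first spawned process creates a new
    // pgrp and receives terminal control, as before.
}
```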
When a parent job spawns new jobs due to the evaluation of a new
function (which shouldn't be the case in the first place), we end up
with two distinct jobs sharing one pgrp (to fix #3952). This can lead to
early termination of the pgrp if the parent job's finished children are
reaped before future processes, in either the parent or the child jobs,
can join it.
While the parent job is under construction, require that waitpid(2)
calls for the child job be made by process ID and not by job pgrp.
Closes #3952.
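A sketch of the constraint (names are illustrative, not fish's exact code): while the parent job is under construction, a wait on the shared pgrp could reap one of the parent's own processes, so each child must be waited on by its specific pid.

```cpp
#include <sys/types.h>
#include <sys/wait.h>

pid_t wait_for_child(pid_t child_pid, pid_t shared_pgid,
                     bool parent_under_construction, int *status) {
    if (parent_under_construction) {
        // Narrow wait: exactly this child, nothing else in the pgrp.
        return waitpid(child_pid, status, WUNTRACED | WNOHANG);
    }
    // Once construction finishes, a broad wait on the whole process
    // group is safe again.
    return waitpid(-shared_pgid, status, WUNTRACED | WNOHANG);
}
```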
Use SIGCHLD to determine whether waitpid(2) calls can be elided,
but only with extreme caution. If we receive SIGCHLD but are not able to
reap all jobs, we need to iterate through them again.
For this to work, we need to make sure that we reap all the children we
can after a SIGCHLD; it is not OK to reap only the first and return, or
else we can never clear the dirty-state flag.
In all cases, as expensive as a call to waitpid() may be, if a child
process is available for reaping it is always cheaper to wait on and reap
it than to call select_try() and end up timing out.
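A sketch of the dirty-flag protocol (names are illustrative): the handler only sets a flag; the reaper clears the flag *before* reaping, so a SIGCHLD arriving mid-loop re-dirties it, and it keeps calling waitpid() until nothing is immediately reapable.

```cpp
#include <csignal>
#include <sys/types.h>
#include <sys/wait.h>

static volatile sig_atomic_t s_sigchld_pending = 0;

static void handle_sigchld(int) { s_sigchld_pending = 1; }

void reap_children_if_needed() {
    if (!s_sigchld_pending) return;  // no SIGCHLD since last pass: elide waitpid()
    s_sigchld_pending = 0;           // clear first: a new SIGCHLD re-dirties it
    int status = 0;
    pid_t pid = 0;
    // Keep reaping until nothing is immediately available; stopping after
    // the first child would clear the flag with children left unreaped.
    while ((pid = waitpid(-1, &status, WNOHANG | WUNTRACED)) > 0) {
        // ... mark the job/process with this pid as exited or stopped ...
    }
}
```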
The old code was rather haphazard with regard to error control, and
would mutate state before operations that could fail, leaving no viable
way to handle errors.