Introduce pgroup_provenance_t, a type which captures "where the pgroup
comes from." This centralizes some logic around how pgroups are
assigned, and it anticipates concurrent execution.
This PR is aimed at improving how job ids are assigned. In particular,
prior to this commit, a job id would be consumed by functions (and
thus aliases). Since it's common to use functions as command wrappers,
this results in awkward job id assignments.
For example, if the user is like me and just made the jump from vim to neovim,
they might create the following alias:
```
alias vim=nvim
```
Prior to this commit, if the user ran `vim` after setting up this
alias, backgrounded it (^Z), and ran `jobs`, then the output might be:
```
Job Group State Command
2 60267 stopped nvim $argv
```
If the user subsequently opened another vim (nvim) session, backgrounded it,
and ran `jobs`, they might see:
```
Job Group State Command
4 70542 stopped nvim $argv
2 60267 stopped nvim $argv
```
These job ids feel unnatural, especially when transitioning away from
e.g. bash where job ids are sequentially incremented (and aliases/functions
don't consume a job id).
See #6053 for more details.
As @ridiculousfish pointed out in
https://github.com/fish-shell/fish-shell/issues/6053#issuecomment-559899400,
we want to elide a job's job id if it corresponds to a single function in the
foreground. This translates to the following prerequisites:
- A job must correspond to a single process (i.e. the job continuation
must be empty)
- A job must be in the foreground (i.e. `&` wasn't appended)
- The job's single process must resolve to a function invocation
If all of these conditions are true then we should mark a job as
"internal" and somehow remove it from consideration when any
infrastructure tries to interact with jobs / job ids.
I saw two paths to implement these requirements:
- At the time of job creation, calculate whether or not a job is
  "internal" and use a separate list of job ids to track their ids.
  Additionally, introduce a new flag denoting that a job is internal so
  that e.g. `jobs` doesn't list internal jobs.
  - I started implementing this route but quickly realized I was
    computing the same information that would be computed later on (e.g.
    "is this job a single process" and "is this job's statement a
    function"). Specifically, I was computing data that populate_job_process
    would end up computing later anyway. Additionally, this added some
    weird complexities to the job system (after the change there were two
    job id lists AND an additional flag that had to be taken into
    consideration).
- Once a function is about to be executed, release the current job's
  job id if the prerequisites are satisfied (which at this point have
  been fully computed).
  - I opted for this solution since it seems cleaner. In this
    implementation, "releasing a job id" is done both by calling
    `release_job_id` and by setting the internal job_id member variable to
    -1. The former operation allows subsequent child jobs to reuse that
    same job id (so e.g. the situation described in Motivation doesn't
    occur), and the latter ensures that no other job / job id
    infrastructure will interact with these jobs, because valid jobs have
    positive job ids. The second operation causes job_id to become
    non-const, which leads to the list of code changes outside of `exec.c`
    (i.e. a codemod from `job_t::job_id` -> `job_t::job_id()` and moving the
    old member variable to a non-const private `job_t::job_id_`). A rough
    sketch of the result appears below.
Note: it's very possible I missed something and setting the job id to -1
will break some other infrastructure; please let me know if so!
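For illustration, here is a minimal sketch of the accessor change described
above (the member and helper names are taken from this description and may
not match the fish source exactly):
```
// Illustrative sketch only; names follow the description above.
using job_id_t = int;
void release_job_id(job_id_t id);  // returns the id to the free pool

class job_t {
   public:
    // The job id, or -1 once an "internal" job has released it.
    job_id_t job_id() const { return job_id_; }

    // Called when the job turns out to be a single foreground function:
    // give the id back so child jobs can reuse it, and opt this job out of
    // any job-id infrastructure (valid jobs have positive ids).
    void release_own_job_id() {
        release_job_id(job_id_);
        job_id_ = -1;
    }

   private:
    job_id_t job_id_{-1};  // no longer const, hence the job_id() accessor codemod
};
```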
I tried to run `make/ninja lint`, but a bunch of unrelated issues
appeared (e.g. `fatal error: 'config.h' file not found`). I did
successfully clang-format (`git clang-format -f`) and run the tests, though.
This PR closes #6053.
Prior to this fix, a job would hold onto any IO redirections from its
parent. For example:
begin
echo a
end < file.txt
The "echo a" job would hold a reference to the I/O redirection.
The problem is that jobs then extend the life of pipes until the job is
cleaned up. This can prevent pipes from closing, leading to hangs.
Fix this by not storing the block IO; this ensures that jobs do not
prolong the life of pipes.
Fixes #6397
Currently a job needs to know three things about its "parents:"
1. Any IO redirections for the block or function containing this job
2. The pgid for the parent job
3. Whether the parent job has been fully constructed (to defer self-disown)
These are all tracked in somewhat separate awkward ways. Collapse them
into a single new type job_lineage_t.
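As an illustrative sketch (member names are assumptions based on the list
above, not necessarily fish's exact API), job_lineage_t might look roughly
like:
```
#include <memory>
#include <sys/types.h>

// Illustrative sketch only: collapse the three pieces of parent state listed
// above into one type.
struct io_chain_t;  // fish's list of IO redirections, opaque for this sketch

struct job_lineage_t {
    // 1. IO redirections from the enclosing block or function, if any.
    std::shared_ptr<io_chain_t> block_io{};

    // 2. The pgid of the parent job; -1 if none has been assigned yet.
    pid_t parent_pgid{-1};

    // 3. Flipped to true once the parent job is fully constructed, so a child
    //    job knows when it may safely disown itself.
    std::shared_ptr<const bool> parent_constructed{};
};
```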
We used to have a global notion of "is the shell interactive" but soon we
will want to have multiple independent execution threads, only some of
which may be interactive. Start tracking this data per-parser.
This makes the following changes:
1. Events in background threads are executed in those threads, instead of
being silently dropped
2. Blocked events are now per-parser instead of global
3. Events are posted in builtin_set instead of within the environment stack
The last one means that we no longer support event handlers for implicit
sets like (for example) argv. Instead, only the `set` builtin (and also `cd`)
posts variable-change events.
Events from universal variable changes are still not fully rationalized.
This widens the remaining ones that don't take a char
anywhere.
The rest either use a char _variable_ or __FUNCTION__, which from my
reading is narrow and needs to be widened manually. I've been unable
to test it, though.
See #5900.
Now that our interactive signal handlers are a strict superset of
non-interactive ones, there is no reason to "reset" signals or take action
when becoming non-interactive. Clean up how signal handlers get installed.
This runs build_tools/style.fish, which runs clang-format on C++, fish_indent on fish and (new) black on python.
If anything is wrong with the formatting, we should fix the tools, but automated formatting is worth it.
Prior to this change, fish used a global flag to decide if we should check
for changes to universal variables. This flag was then checked at arbitrary
locations, potentially triggering variable updates and event handlers for
those updates; this was very hard to reason about.
Switch to triggering a universal variable update at a fixed location,
after running an external command. The common case is that the variable
file has not changed, which we can identify with just a stat() call, so
this is pretty cheap.
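For illustration only (a hypothetical helper, not the fish source), the cheap
"has the file changed" check can compare the result of a single stat() against
the previously recorded identity of the variable file:
```
#include <sys/stat.h>

// Illustrative sketch: detect whether the universal-variable file has changed
// since we last looked, using only a stat() call. The cached fields and the
// path are assumptions for the example.
struct file_id_t {
    dev_t device{};
    ino_t inode{};
    off_t size{};
    time_t mtime{};
};

static bool uvar_file_maybe_changed(const char *path, file_id_t *cached) {
    struct stat buf{};
    if (stat(path, &buf) != 0) return true;  // treat errors as "changed"
    bool changed = buf.st_dev != cached->device || buf.st_ino != cached->inode ||
                   buf.st_size != cached->size || buf.st_mtime != cached->mtime;
    if (changed) *cached = {buf.st_dev, buf.st_ino, buf.st_size, buf.st_mtime};
    return changed;
}
```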
`eval` has always been implemented as a function, which was always a bit
of a hack that caused some issues such as triggering the creation of a
new scope. This turns `eval` into a decorator.
The scoping issues with eval prevented it from being usable to actually
implement other shell components in fish script, such as the problems
described in #4442, which should now no longer be the case.
Closes #4443.
Followup to 394623b.
Doing it in the parser meant only top-level jobs would be reaped after
being `disown`ed, as subjobs aren't directly handled by the parser.
This is also much cleaner, as now job removal is centralized in
`process_clean_after_marking()`.
Closes #5803.
Prior to this fix, in every call to job_continue, fish would reclaim the
foreground pgrp. This would cause other jobs in the pipeline (which may
have another pgrp) to receive SIGTTIN / SIGTTOU.
Only reclaim the foreground pgrp if it was held at the point of job_continue.
This partially addresses #5765.
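A minimal sketch of the idea, using the POSIX calls directly and simplifying
away the surrounding job logic:
```
#include <unistd.h>

// Illustrative sketch: only reclaim the terminal for fish if fish's pgroup
// actually held it when job_continue was entered, instead of unconditionally
// grabbing it back and hitting other pgroups in the pipeline with
// SIGTTIN / SIGTTOU.
static void maybe_reclaim_foreground(pid_t fish_pgrp, pid_t pgrp_held_at_entry) {
    if (pgrp_held_at_entry != fish_pgrp) {
        // We did not hold the terminal at the point of job_continue, so do
        // not take it away from whichever pgrp has it now.
        return;
    }
    tcsetpgrp(STDIN_FILENO, fish_pgrp);  // safe: we were the owner before
}
```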
Directly access the job list without the intermediate job_iterator_t,
and remove functions that are ripe for abuse, in favor of modifying a local
enumeration (copy) of the same list instead of operating on the iterators
directly. (For example, proc.cpp iterates jobs and, mid-iteration, calls
parser::job_remove(j) with the job rather than the iterator to the job,
invisibly invalidating the pre-existing local iterators.) A small sketch of
the pattern follows.
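Sketch of that pattern, with assumed types (not the fish source): iterate a
local snapshot of the job list so removals cannot invalidate the loop:
```
#include <memory>
#include <vector>

// Illustrative sketch: walk a snapshot of the job list so that calling a
// removal function mid-iteration cannot invalidate the iterators we hold.
struct job_t;  // opaque for this example
using job_list_t = std::vector<std::shared_ptr<job_t>>;

static void for_each_job_safely(job_list_t &jobs,
                                void (*visit)(const std::shared_ptr<job_t> &)) {
    const job_list_t snapshot = jobs;  // local enumeration of the same list
    for (const auto &j : snapshot) {
        visit(j);  // 'visit' may remove j from 'jobs' without breaking the loop
    }
}
```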
C++11 provides std::min/std::max which we're using all over,
obviating the need for our own templates for this.
util.h now only provides two things: get_time and wcsfilecmp.
This commit removes everything that includes it which doesn't
use either; most because they no longer need mini or maxi from
it but some others were #including it unnecessarily.
Prior to this fix, the wait command used waitpid() directly. Switch it to
calling process_mark_finished_children() along with the rest of the job
machinery. This centralizes the waitpid call to a single location.
In fish we play fast and loose with status codes as set directly (e.g. on
failed redirections), vs status codes returned from waitpid(), versus the
value $status. Introduce a new value type proc_status_t to encapsulate
this logic.
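As a rough sketch of the kind of encapsulation meant here (constructor and
accessor names are assumptions), a proc_status_t wraps both directly assigned
statuses and raw waitpid() statuses behind one interface:
```
#include <sys/wait.h>

// Illustrative sketch of a proc_status_t value type: one place that knows how
// to interpret a raw waitpid() status versus a status fish assigned directly.
class proc_status_t {
    int status_{};  // stored in the waitpid() encoding

    explicit proc_status_t(int status) : status_(status) {}

   public:
    // Wrap a raw status as returned by waitpid().
    static proc_status_t from_waitpid(int status) { return proc_status_t(status); }

    // Wrap an exit code fish decided on itself (e.g. a failed redirection),
    // re-encoded as if the process had exited normally with that code.
    static proc_status_t from_exit_code(int code) {
        return proc_status_t((code & 0xFF) << 8);  // glibc/BSD WEXITSTATUS layout
    }

    bool normally_exited() const { return WIFEXITED(status_); }
    bool signal_exited() const { return WIFSIGNALED(status_); }
    int exit_code() const { return WEXITSTATUS(status_); }
    int signal_code() const { return WTERMSIG(status_); }
};
```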
Prior to this fix, an "event" was used as both a predicate on which events
to match, and also as the event itself. Re-express these concepts
distinctly: an event is something that happened, an event_handler is the
predicate and name of the function to execute.
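Schematically (field names assumed for illustration, not fish's exact types):
```
#include <string>
#include <vector>

// Illustrative sketch of separating "something that happened" from "what to
// run when it happens".
struct event_t {
    int type{};                          // e.g. variable change, signal, exit
    std::vector<std::string> arguments;  // payload handed to the handler
};

struct event_handler_t {
    int match_type{};           // predicate: which kind of events this matches
    std::string match_param;    // e.g. the variable name or signal to match
    std::string function_name;  // the fish function to execute on a match
};
```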
Prior to this fix, fish had a signal_list_t that accumulated signals.
Signals were added to an array of integers, with an overflow flag.
The event machinery would attempt to atomically "swap in" the other list.
After this fix, there is a single list of pending signal events, as an array
of atomic booleans. The signal handler sets the boolean corresponding to its
signal.
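A minimal sketch of that shape (simplified; kSignalCount and the function
names are assumptions for the example):
```
#include <array>
#include <atomic>

// Illustrative sketch: one atomic flag per signal number. The signal handler
// only ever sets a flag; the main loop later clears flags and acts on them.
static constexpr int kSignalCount = 64;  // assumed upper bound for the sketch
static std::array<std::atomic<bool>, kSignalCount> s_pending{};

extern "C" void record_signal(int sig) {  // installed as the signal handler
    if (sig >= 0 && sig < kSignalCount)
        s_pending[sig].store(true, std::memory_order_relaxed);
}

static void service_pending_signals(void (*handle)(int sig)) {
    for (int sig = 0; sig < kSignalCount; sig++) {
        if (s_pending[sig].exchange(false, std::memory_order_relaxed)) handle(sig);
    }
}
```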
This uses the new internal process mechanism to write output for builtins.
After this the only reason fish ever forks is to execute external processes.
This introduces "internal processes" which are backed by a pthread instead
of a normal process. Internal processes are reaped using the topic
machinery, plugging in neatly alongside the sigchld topic; this means that
process_mark_finished_children() can wait for internal and external
processes simultaneously.
Initially internal processes replace the forked process that fish uses to
write out the output of blocks and functions.
The sigchld generation expresses the idea that, if we receive a sigchld
signal, the generation will be different than when we last recorded it. A
process cannot exit before it has launched, so check the generation count
before process launch. This is an optimization that reduces failing
waitpid calls.
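Roughly, as an illustrative sketch with assumed names rather than the actual
fish code:
```
#include <atomic>

// Illustrative sketch: a generation counter bumped from the SIGCHLD handler.
// A process that has not been launched yet cannot have exited, so the
// generation recorded at launch lets us skip a waitpid() that must fail.
static std::atomic<unsigned> s_sigchld_generation{0};

extern "C" void on_sigchld(int) {
    s_sigchld_generation.fetch_add(1, std::memory_order_relaxed);
}

struct process_sketch_t {
    unsigned gen_at_launch{};  // generation recorded just before fork/exec
};

static bool worth_calling_waitpid(const process_sketch_t &p) {
    // If no SIGCHLD has arrived since the process launched, it cannot have
    // exited yet, so polling it with waitpid() would only fail.
    return s_sigchld_generation.load(std::memory_order_relaxed) != p.gen_at_launch;
}
```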
This is a big change to how process reaping works, reimplementing it using
topics. The idea is to simplify the logic in
process_mark_finished_children around blocking, and also prepare for
"internal processes" which do not correspond to real processes.
Before this change, fish would use waitpid() to wait for a process group,
OR would individually poll processes if the process group leader was
unreapable.
After this change, fish no longer ever calls blocking waitpid(). Instead
fish uses the topic mechanism. For each reapable process, fish checks if
it has received a SIGCHLD since last poll; if not it waits until the next
SIGCHLD, and then polls them all.
This is a large change to how io_buffers are filled. The essential problem
comes about with code like (example):
echo ( /bin/pwd )
The output of /bin/pwd must go to fish, not the tty. To arrange for this,
fish does the following:
1. Invoke pipe() to create a pipe.
2. Add an io_bufferfill_t redirection that owns the write end of the pipe.
3. After fork (or equiv), call dup2() to replace pwd's stdout with this pipe.
Now when /bin/pwd writes, the data goes into the pipe and becomes available
at its read end. But who reads it?
Prior to this fix, fish would do the following in a loop:
1. select() on the pipe with a 10 msec timeout
2. waitpid(WNOHANG) on the pwd proc
This polling is ugly and confusing and is what is replaced here.
With this new change, fish now reads from the pipe via a background thread:
1. Spawn a background pthread, which select()s on the pipe's read end with
a long (100 msec) timeout.
2. In the foreground, waitpid() (allowing hanging) on the pwd proc.
The big win here is a major simplification of job_t::continue_job() since
it no longer has to worry about filling buffers. This will make things
easier for concurrent execution.
It may not be obvious why the background thread still needs a poll (100 msec).
The answer is for cases where the write end of the fd escapes, in particular
background processes invoked inside command substitutions. psub is perhaps
the only important case of this (other shells typically just hang here).
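Sketching the new fill loop (simplified, with assumed names; the real code
goes through fish's own thread and buffer machinery):
```
#include <sys/select.h>
#include <unistd.h>
#include <vector>

// Illustrative sketch of the background fill thread: select() on the pipe's
// read end with a long timeout, appending whatever arrives until EOF.
static void run_fill_thread(int read_fd, std::vector<char> *out) {
    for (;;) {
        fd_set fds;
        FD_ZERO(&fds);
        FD_SET(read_fd, &fds);
        struct timeval timeout = {0, 100 * 1000};  // the 100 msec poll
        int ready = select(read_fd + 1, &fds, nullptr, nullptr, &timeout);
        if (ready < 0) break;      // error; real code would inspect errno
        if (ready == 0) continue;  // timed out: re-check in case the fd escaped
        char chunk[4096];
        ssize_t amt = read(read_fd, chunk, sizeof chunk);
        if (amt <= 0) break;  // EOF (all write ends closed) or error
        out->insert(out->end(), chunk, chunk + amt);
    }
}
```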
This makes some significant architectural improvements to io_pipe_t and
io_buffer_t.
Prior to this fix, io_buffer_t subclassed io_pipe_t. io_buffer_t is now
replaced with a class io_bufferfill_t, which does not subclass pipe.
io_pipe_t no longer remembers both fds. Instead it has an autoclose_fd_t,
so that the file descriptor ownership is clear.
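For illustration, an autoclose_fd_t is essentially a move-only RAII wrapper
around a file descriptor; this sketch is simplified relative to the real
class:
```
#include <unistd.h>

// Illustrative sketch of an RAII fd wrapper: exactly one owner, closed on
// destruction, so it is always clear who is responsible for the descriptor.
class autoclose_fd_t {
    int fd_{-1};

   public:
    autoclose_fd_t() = default;
    explicit autoclose_fd_t(int fd) : fd_(fd) {}
    autoclose_fd_t(autoclose_fd_t &&rhs) noexcept : fd_(rhs.fd_) { rhs.fd_ = -1; }
    autoclose_fd_t(const autoclose_fd_t &) = delete;
    autoclose_fd_t &operator=(const autoclose_fd_t &) = delete;
    ~autoclose_fd_t() { close(); }

    int fd() const { return fd_; }
    bool valid() const { return fd_ >= 0; }

    void close() {
        if (fd_ >= 0) ::close(fd_);
        fd_ = -1;
    }
};
```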
By exclusively waiting by pgrp, we can fail to reap processes that
change their own pgrp and then either crash or close their fds. If we wind
up in a situation where `waitpid(2)` returns 0 or ECHILD even though we
did not specify `WNOHANG`, but we still have unreaped child processes,
wait on them by pid.
Closes #5596.
This is effectively a pick of 2ebdcf82ee
and the subsequent fixup. However, we also avoid setting WNOHANG unless
waitpid() indicates a process was reaped.
Fixes #5438
This is the more correct fix for #5447, as regardless of which process
in the job (be it the first or the last) finished first, once we have
waited on a process without ~WNOHANG we don't do that for any subsequent
processes in the job.
It is also a waste to call into the kernel to wait for a process we
already know is completed!
This fixes#5438 by having fish block while waiting on a foreground job
via its individual processes by enumerating the procs in reverse order,
such that we hang waiting for the last job in the IO chain to terminate,
rather than the first.
The function `add_disowned_pgid` adds process *group* ids and not
process ids. It multiplies the value by negative 1 to indicate a wait
on a process group, so the original value must be positive.
If a job is disowned that, for some reason, has a pgid that is special
to waitpid, like 0 (process with pgid of the calling process), -1 (any
process), or our actual pgid, that would lead to us waiting for too
many processes when we later try to reap the disowned processes (to
stop zombies from appearing).
And that means we'd snag away the processes we actually do want to
wait for, which would end with us in a waiting loop.
This is tough to reproduce; the easiest way I've found is
fish -ic 'sleep 5 &; disown; set -g __fish_git_prompt_showupstream auto; __fish_git_prompt'
in a git repo.
What we do is to not allow special pgids in the disowned_pids list.
That means we might leave a zombie around (though we probably wait on
0 somewhere), but that's preferable to infinitely looping.
See #5426.
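An illustrative sketch of that guard (the list and function names follow the
text above and are simplified):
```
#include <sys/types.h>
#include <unistd.h>
#include <vector>

// Illustrative sketch: refuse to remember pgids that are special to waitpid()
// when a job is disowned.
static std::vector<pid_t> disowned_pids;

static void add_disowned_pgid(pid_t pgid) {
    // 0 means "my own process group", -1 means "any child", and our own pgid
    // behaves like 0; waiting on any of these could reap jobs we care about.
    if (pgid > 0 && pgid != getpgrp()) {
        // Negate so a later waitpid(-pgid, ...) waits on the whole group.
        disowned_pids.push_back(-pgid);
    }
}
```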
This (a pgid of 0) happens in firejail, and it means that we can't use it as
an argument to most pgid-taking functions.
E.g. `waitpid()` with a pgid of 0 means to wait for the _current_ process
group, `tcsetpgrp(0)` doesn't work, etc.
So we just stop doing this stuff and hope it works.
Fixes#5295.
This reverts commit 1cb8b2a87b.
argv[0] has the full path in it when a user executes fish from outside of
$PATH. This is really annoying in the title, which uses $_.
Also check if that is actually defined, not the cur_term proxy.
In #5371, we figured out that there are terminfo entries without this
capability, so this would do a NULL-dereference.
... rather than hard-code it to "fish". This affects
what is found in $_ and improves the errors:
For example, if fish was run as ./fish, instead of
something like:
fish: Expected 3 surprises, only got 2 surprises
we'll see:
./fish: Expected 3 surprises, only got 2 surprises
like most other shell utilities. It's just a tiny bit
of detail that can avoid confusion.
Now jobs are aware of their parent jobs, and can interrogate those jobs,
to determine if every job in the chain is fully constructed.
Remove flags and the static stacks that manipulated them.
The parent of a job is the parent pipeline that executed the function or
block corresponding to this job. This will help simplify
process_mark_finished_children().
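Schematically (a simplified sketch with assumed names, not the fish source):
```
#include <memory>

// Illustrative sketch: walk up the parent chain to decide whether every job
// in it has finished construction.
struct job_sketch_t {
    bool constructed{false};
    std::shared_ptr<job_sketch_t> parent{};

    bool chain_is_fully_constructed() const {
        for (const job_sketch_t *j = this; j != nullptr; j = j->parent.get()) {
            if (!j->constructed) return false;
        }
        return true;
    }
};
```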
select_try() returned IO_ERROR to indicate that there are no file descriptors
from which to read. Name this return value properly.
Also migrate this type into proc.cpp since it's not used outside of the
header.
This is an opposite case from the usual "pipe into grep-the-function"
where my `pbpaste` emitted a lot of content exceeding the OS pipe
buffer. The `block_on_fg` condition was just `send_sigcont` in the
original job control rewrite, and it was incorrect to sub it for
WAIT_BY_PROCESS on its own.
However, this requires always blocking when select_try returns an
interrupted/incomplete read; otherwise fish doesn't block and stays running
in a tight loop in the background (and, under higher debug levels, incorrectly
writes to a terminal it doesn't own). I *think* that is OK.
This was introduced in 1b1bc28c0a but did
not cause any problems until the job control refactor, which caused it
to attempt to signal the calling `exec` builtin's own (invalid) pgrp
with SIGHUP.
Also improve debugging for `j->signal()` failures by printing the
signal we tried to send in case of error, rename the function to
`hup_background_jobs`, and move it from `reader.h`/`reader.cpp` to
`proc.h`/`proc.cpp`.
When a function is encountered by exec_job, a new context is created for
its execution from the ground up, with a new job and all, ultimately
resulting in a recursive call to exec_job from the same (main) thread.
Since each time exec_job encounters a new job with external commands
that needs terminal control it creates a new pgrp and gives it control
of the terminal (tcsetpgrp & co), this effectively takes control away
from the previously spawned external commands which may be (and likely
are) expecting to still have terminal access.
This commit attempts to detect when such a situation arises by handling
recursive calls to exec_job (which can only happen if the pipeline
included a function) by borrowing the pgrp from the (necessarily still
active) parent job and spawning new external commands into it.
When a parent job spawns new jobs due to the evaluation of a new
function (which shouldn't be the case in the first place), we end up
with two distinct jobs sharing one pgrp (to fix #3952). This can lead to
early termination of a pgrp if finished parent job children are reaped
before future processes in either the parent or future child jobs can
join it.
While the parent job is under construction, require that waitpid(2)
calls for the child job be done by process id and not job pgrp.
Closes #3952.
Use SIGCHLD to determine whether or not waitpid(2) calls can be elided,
but only with extreme caution. If we receive SIGCHLD but are not able to
reap all jobs, we need to iterate through them again.
For this to work, we need to make sure that we reap all children that we
can reap after a SIGCHLD, i.e. it's not OK to just reap the first and
return or else we can never clear the dirty state flag.
In all cases, as expensive as a call to waitpid() may be, if a child
process is available for reaping it is always cheaper to wait on it then
reap it than to call select_try() and end up timing out.
The old code was rather haphazard with regards to error control, and
would make mutable changes before operations that could fail without any
viable error handling options.
Convert `select_try()` to return a well-defined enum describing its
state, and handle each of the three possible cases with clear reasons
why we are blocking or not blocking in each subsequent call to
`process_mark_finished_children()`.
* Use the newly-introduced signal_block_t RAII wrapper
* Remove EINTR loops as all signals are blocked
* Clean up control flow thanks to RAII wrappers
* Rename parameter to clarify what it does and update docs accordingly
* Update outdated comments referencing SIGSTOP code that was removed a
long time ago.
* Remove no-op CHECK_BLOCK() call
* Convert JOB_* enums to scoped enums
* Convert standalone job_is_* functions to member functions
* Convert standalone job_{promote, signal, continue} to member functions
* Convert standalone job_get{,_from_pid} to `job_t` static functions
* Reduce usage of JOB_* enums outside of proc.cpp by using new
`job_t::is_foo()` const helper methods instead.
This patch is only a refactor and should not change any functionality or
behavior (both observed and unobserved).
* Debug level 3: describe all commands being executed (this is, after all,
a shell and one can argue that this is the most important debug
information available)
* Debug level 4: details of execution, mainly fork vs no-fork and io
handling
Also introduced j->preview() to print a short descriptor of the job
based on the head of the first process so we don't overwhelm with
needless repetition, but also so that we don't have to rely on
distinguishing between repeated, non-unique/non-monotonic job ids that
are often recycled within a single "execution cycle" (pressing enter
once).
Per @ridiculousfish's suggestions in #5219,
`process_mark_finished_children()` has been updated to work in an easier-
to-follow manner. Its behavior is now straightforward: it always checks
for finished processes but only blocks if `block_on_fg` is true.
We're not using the SIGCHLD count in s_sigchld_generation_cnt for
anything any more, as it's not actually a reliable metric since we can
experience one SIGCHLD as a result of two processes exiting (see #1768),
but only reap one of them if the other is in a not-fully-constructed job
(see #5219), a state we cannot possibly detect without calling
`waitpid()` on all child processes, which we are explicitly avoiding.