Assigning the tty is really a function of a job group, not an individual
job. Reflect that in terminal_maybe_give_to_job_group and also
terminal_return_from_job_group.
When fish receives a "cancellation inducing" signal (SIGINT in particular)
it has to unwind execution, stopping while loops or whatever else is
currently executing. There are two ways this may come about:
1. The fish process received the signal
2. A child process received the signal
An example of the second case is:
some_command | some_function
Here `some_command` is the tty owner and so will receive control-C, but
then fish has to cancel function execution.
Prior to this change, these were handled uniformly: both would just set a
cancellation signal inside the parser. However in the future we will have
multiple parsers and it may not be obvious which one to set the flag in.
So instead distinguish these cases: if a process receives SIGINT we mark
the signal in its job group, and if fish receives it we set a global
variable.
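A minimal sketch of the two paths, using illustrative names rather than fish's real identifiers:

```
#include <atomic>

// Global flag for case 1: fish itself received the signal.
static std::atomic<int> s_cancellation_signal{0};

struct job_group_t {
    // Per-group flag for case 2: a child in this group received the signal.
    std::atomic<int> cancel_signal{0};
};

// Case 1: fish's own signal handler observed SIGINT.
void fish_received_signal(int sig) { s_cancellation_signal = sig; }

// Case 2: a process in this job's group received SIGINT; only the
// execution associated with that group needs to unwind.
void child_received_signal(job_group_t &group, int sig) {
    group.cancel_signal = sig;
}
```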
Initially I wanted to pick a different name to avoid confusion with
process groups, but really job trees *are* process groups. So name them
to reflect that fact.
Also rename "placeholder" to "internal" which is clearer.
Prior to this, jobs all had a pgid, and fish had to work hard to ensure
that pgids were inherited properly for nested jobs. But now the job tree
is the source of truth and there is only one location for the pgid.
Job trees come in two flavors: "placeholders" for jobs which are only fish
functions, and non-placeholders which need to track a pgid. This adds
logic to allow a job to decide whether its parent's job tree is
appropriate, and to allocate a new tree if not.
job_tree represents the data that should be shared between a job and any
jobs that may be spawned by functions or eval run as part of that job. It
reifies shared data that was previously handled piecemeal.
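A rough sketch of the resulting shape, combining the "placeholder" flavor described above with the single pgid location; types and names are simplified relative to the real implementation:

```
#include <memory>
#include <sys/types.h>

struct job_tree_t {
    // "Placeholder" trees are for jobs that are only fish functions and
    // never need a pgid of their own.
    const bool is_placeholder;
    // The single source of truth for the pgid; unset (-1) until a real
    // process is spawned into the tree.
    pid_t pgid{-1};

    explicit job_tree_t(bool placeholder) : is_placeholder(placeholder) {}
};
using job_tree_ref_t = std::shared_ptr<job_tree_t>;

// A new job may share its parent's tree only if the flavors agree;
// otherwise it allocates its own.
job_tree_ref_t tree_for_job(const job_tree_ref_t &parent, bool wants_placeholder) {
    if (parent && parent->is_placeholder == wants_placeholder) return parent;
    return std::make_shared<job_tree_t>(wants_placeholder);
}
```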
The `function --on-job-exit caller` feature allows a command substitution
to observe when the parent job exits. This has never worked very well - in
particular it is based on job IDs, so a function that observes this will
run multiple times. Implement it properly.
Do this by introducing a never-recycled "internal job id".
This is only used by psub, but ensure it works properly nonetheless.
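A minimal sketch of such an id, assuming a simple monotonic counter (names illustrative):

```
#include <atomic>
#include <cstdint>

// Unlike user-visible job ids, internal job ids are never reused, so an
// event handler keyed on one cannot fire again for an unrelated later job.
using internal_job_id_t = uint64_t;
static std::atomic<internal_job_id_t> s_next_internal_job_id{1};

internal_job_id_t acquire_internal_job_id() {
    return s_next_internal_job_id.fetch_add(1, std::memory_order_relaxed);
}
```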
fish has some unprincipled code that attempts to tcsetpgrp() to own the
terminal before running a builtin; this was added because 'read' might
want to read from the terminal. I added this code before fully
understanding how process groups and terminals work. A better fix would
be to ensure that fish is marked as the pgroup leader in the job when
the builtin is the first process in the job, and we do that now.
Courageously back out the changes to grab the terminal; see #5147 and
also #5133.
Introduce pgroup_provenance_t, a type which captures "where the pgroup
comes from." This centralizes some logic around how pgroups are
assigned, and it anticipates concurrent execution.
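A sketch of what such a type might look like; the value names are illustrative:

```
enum class pgroup_provenance_t {
    unassigned,           // not yet decided
    fish_itself,          // the job runs in fish's own pgroup (e.g. builtins)
    first_external_proc,  // the first external process becomes the leader
    lineage,              // inherited from the parent job's lineage
};
```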
In particular, this allows `true && time true` and `true; and time true`,
as well as both `time not true` and `not time true` (like bash).
time is valid only as a job _prefix_, so `true | time true` could call
`/bin/time` (same as in bash).
See discussion in #6442
This PR is aimed at improving how job ids are assigned. In particular,
prior to this commit, a job id would be consumed by functions (and
thus aliases). Since it's common to use functions as command wrappers,
this results in awkward job id assignments.
For example, if the user is like me and has just made the jump from vim to
neovim, then the user might create the following alias:
```
alias vim=nvim
```
Prior to this commit, if the user ran `vim` after setting up this
alias, backgrounded it (^Z), and ran `jobs`, then the output might be:
```
Job Group State Command
2 60267 stopped nvim $argv
```
If the user subsequently opened another vim (nvim) session, backgrounded
it, and ran `jobs` again, then they might see the following:
```
Job Group State Command
4 70542 stopped nvim $argv
2 60267 stopped nvim $argv
```
These job ids feel unnatural, especially when transitioning from
e.g. bash, where job ids are sequentially incremented (and aliases/functions
don't consume a job id).
See #6053 for more details.
As @ridiculousfish pointed out in
https://github.com/fish-shell/fish-shell/issues/6053#issuecomment-559899400,
we want to elide a job's job id if it corresponds to a single function in the
foreground. This translates to the following prerequisites:
- A job must correspond to a single process (i.e. the job continuation
must be empty)
- A job must be in the foreground (i.e. `&` wasn't appended)
- The job's single process must resolve to a function invocation
If all of these conditions are true then we should mark the job as
"internal" and remove it from consideration when any infrastructure
tries to interact with jobs / job ids (a sketch of this check follows).
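A minimal sketch of that check, against hypothetical job_t fields (the real fish structures differ):

```
#include <memory>
#include <vector>

enum class process_type_t { external, builtin, function };
struct process_t { process_type_t type; };

struct job_t {
    std::vector<std::shared_ptr<process_t>> processes;
    bool foreground = true;
};

// A job qualifies as "internal" only if all three prerequisites hold.
bool job_should_be_internal(const job_t &job) {
    // 1. A single process: the job continuation is empty.
    if (job.processes.size() != 1) return false;
    // 2. Foreground: no trailing '&'.
    if (!job.foreground) return false;
    // 3. The single process resolves to a function invocation.
    return job.processes.front()->type == process_type_t::function;
}
```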
I saw two paths to implement these requirements:
- At the time of job creation calculate whether or not a job is
"internal" and use a separate list of job ids to track their ids.
Additionally introduce a new flag denoting that a job is internal so
that e.g. `jobs` doesn't list internal jobs
- I started implementing this route but quickly realized I was
computing the same information that would be computed later on (e.g.
"is this job a single process" and "is this jobs statement a
function"). Specifically I was computing data that populate_job_process
would end up computing later anyway. Additionally this added some
weird complexities to the job system (after the change there were two
job id lists AND an additional flag that had to be taken into
consideration)
- Once a function is about to be executed we release the current job's
job id if the prerequisites are satisfied (which at this point have
been fully computed).
- I opted for this solution since it seems cleaner. In this
implementation "releasing a job id" is done both by calling
`release_job_id` and by setting the internal job_id member variable to
-1. The former operation allows subsequent child jobs to reuse that
same job id (so e.g. the situation described in Motivation doesn't
occur), and the latter ensures that no other job / job id
infrastructure will interact with these jobs because valid jobs have
positive job ids. The second operation causes job_id to become
non-const which leads to the list of code changes outside of `exec.c`
(i.e. a codemod from `job_t::job_id` -> `job_t::job_id()` and moving the
old member variable to a non-const private `job_t::job_id_`)
Note: It's very possible I missed something and setting the job id to -1
will break some other infrastructure; please let me know if so!
I tried to run `make/ninja lint`, but a bunch of unrelated issues
appeared (e.g. `fatal error: 'config.h' file not found`). I did
successfully run clang-format (`git clang-format -f`) and the tests, though.
This PR closes #6053.
Previously, if the user control-C'd out of a process, we would set a
bogus exit status in the process, but this was difficult to observe
because we would be cancelling anyway. Now set it properly.
Prior to this change, a process, after being constructed by
parse_execution but before being executed, was given a list of
io_data_t redirections. The problem is that redirections have a
sensitive ownership policy because they hold onto fds. This made it
rather hard to reason about fd lifetimes.
Change these to redirection_spec_t. This is a textual description
of a redirection after expansion. It does not represent an open file and
so its lifetime is no longer important.
This enables files to be held only on the stack, so they are no longer
owned by a process of indeterminate lifetime.
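A sketch of the spirit of redirection_spec_t (simplified; the real type has more modes and helpers):

```
#include <string>

enum class redirection_mode_t { overwrite, append, input, fd };

// A purely textual description of one redirection, after expansion.
// Copying it is harmless; an fd is opened from it only at spawn time.
struct redirection_spec_t {
    int fd;                   // the fd being redirected, e.g. 1 for '>'
    redirection_mode_t mode;  // how the target should be opened
    std::wstring target;      // a path, or an fd number as text for '>&'
};
```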
Prior to this fix, a job would hold onto any IO redirections from its
parent. For example:
```
begin
    echo a
end < file.txt
```
The "echo a" job would hold a reference to the I/O redirection.
The problem is that jobs then extend the life of pipes until the job is
cleaned up. This can prevent pipes from closing, leading to hangs.
Fix this by not storing the block IO; this ensures that jobs do not
prolong the life of pipes.
Fixes #6397
Currently a job needs to know three things about its "parents":
1. Any IO redirections for the block or function containing this job
2. The pgid for the parent job
3. Whether the parent job has been fully constructed (to defer self-disown)
These are all tracked in somewhat separate awkward ways. Collapse them
into a single new type job_lineage_t.
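A rough sketch of such a type, with simplified field types (io_data_t left opaque):

```
#include <atomic>
#include <memory>
#include <sys/types.h>
#include <vector>

struct io_data_t;  // an IO redirection, as in fish's io machinery

struct job_lineage_t {
    // 1. IO redirections from the enclosing block or function.
    std::vector<std::shared_ptr<io_data_t>> block_io;
    // 2. The pgid of the parent job, or -1 if there is none.
    pid_t parent_pgid{-1};
    // 3. Flipped to true once the parent job is fully constructed, so a
    //    child job knows when it may safely disown itself.
    std::shared_ptr<std::atomic<bool>> parent_constructed;
};
```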
This adds initial support for statements with prefixed variable assignments.
Statements like this are supported:
a=1 b=$a echo $b # outputs 1
Just like in other shells, the left-hand side of each assignment must
be a valid variable identifier (no quoting/escaping). Array indexing
(PATH[1]=/bin ls $PATH) is *not* yet supported, but can be added fairly
easily.
The right hand side may be any valid string token, like a command
substitution, or a brace expansion.
Since `a=* foo` is equivalent to `begin; set -lx a *; foo; end`,
the assignment, like `set`, uses nullglob behavior, e.g. the command
below can safely be used to check if a directory is empty:
x=/nothing/{,.}* test (count $x) -eq 0
Generic file completion is done after the equal sign, so for example
pressing tab after something like `HOME=/` completes files in the
root directory.
Subcommand completion works, so something like
`GIT_DIR=repo.git and command git ` correctly calls git completions
(but the git completion does not use the variable as of now).
The variable assignment is highlighted like an argument.
Closes #6048
This adds support for `fish_trace`, a new variable intended to serve the
same purpose as `set -x` in bash. Setting this variable to anything
non-empty causes execution to be traced. In the future we may give more
specific meaning to the value of the variable.
The user's prompt is not traced unless you run it explicitly. Events are
also not traced because they are noisy; however, autoloading is.
Fixes #3427
We used to have a global notion of "is the shell interactive" but soon we
will want to have multiple independent execution threads, only some of
which may be interactive. Start tracking this data per-parser.
This runs build_tools/style.fish, which runs clang-format on C++, fish_indent on fish, and (new) black on Python.
If anything is wrong with the formatting, we should fix the tools, but automated formatting is worth it.
This was added in 04a96f6 but is not strictly required to fix #5803
(verified). The intention was to hide invisible background jobs
(created by invoking a function within a pipeline) from the user, but
it also prevented intentionally created jobs from displaying.
I'm thinking it can't be done without keeping track of caller context vs
job context.
Closes #5824.
Prior to this change, fish used a global flag to decide if we should check
for changes to universal variables. This flag was then checked at arbitrary
locations, potentially triggering variable updates and event handlers for
those updates; this was very hard to reason about.
Switch to triggering a universal variable update at a fixed location,
after running an external command. The common case is that the variable
file has not changed, which we can identify with just a stat() call, so
this is pretty cheap.
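A minimal sketch of the fixed-location check, with hypothetical names and the reload itself elided:

```
#include <sys/stat.h>
#include <string>

// Called from one fixed location: after running an external command.
void maybe_sync_universal_variables(const std::string &uvar_path) {
    static time_t s_last_mtime = 0;
    struct stat buf {};
    if (stat(uvar_path.c_str(), &buf) != 0) return;
    // Common case: the file is unchanged, and a single stat() is all we pay.
    if (buf.st_mtime == s_last_mtime) return;
    s_last_mtime = buf.st_mtime;
    // The file changed: reload it and fire change event handlers (elided).
}
```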
I did not realize builtins could safely call into the parser and inject
jobs during execution. This is much cleaner than hacking around the
required shape of a plain_statement.
While `eval` is still a function, this paves the way for changing that
in the future, and lets the proc/exec functions detect when an eval is
used to allow/disallow certain behaviors and optimizations.
Followup to 394623b.
Doing it in the parser meant only top-level jobs would be reaped after
being `disown`ed, as subjobs aren't directly handled by the parser.
This is also much cleaner, as now job removal is centralized in
`process_clean_after_marking()`.
Closes #5803.
This prevents the `disown` builtin from directly removing jobs from
the jobs list, to avoid consistency issues: `disown` may be called within
the context of a subjob (e.g. in a function or block), in which case the
parent job might not yet be done with its reference to the child job.
Instead, a flag is set and the parser removes the job from the list only
after the entire execution chain has completed.
Closes #5720.
Prior to this fix, in every call to job_continue, fish would reclaim the
foreground pgrp. This would cause other jobs in the pipeline (which may
have another pgrp) to receive SIGTTIN / SIGTTOU.
Only reclaim the foreground pgrp if it was held at the point of job_continue.
This partially addresses #5765
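A simplified sketch of the idea (the real job_continue does much more):

```
#include <unistd.h>

// Record at entry whether fish's pgroup owns the terminal, and reclaim it
// afterwards only in that case, so other pgrps in the pipeline are not
// hit with SIGTTIN / SIGTTOU.
void continue_job_sketch(pid_t job_pgid, pid_t fish_pgid) {
    bool fish_held_tty = (tcgetpgrp(STDIN_FILENO) == fish_pgid);
    (void)tcsetpgrp(STDIN_FILENO, job_pgid);  // hand the tty to the job
    // ... wait for the job to stop or complete ...
    if (fish_held_tty) {
        (void)tcsetpgrp(STDIN_FILENO, fish_pgid);  // reclaim only if held
    }
}
```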
Directly access the job list without the intermediate job_iterator_t,
and remove functions that are ripe for abuse, operating on a local
enumeration of the same list instead of on the iterators directly.
(For example, proc.cpp iterated jobs and, mid-iteration, called
parser::job_remove(j) with the job, and not the iterator to the job,
causing an invisible invalidation of the pre-existing local iterators.)
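A minimal sketch of the safe pattern, with a simplified job type:

```
#include <iterator>
#include <list>
#include <memory>

struct job_t {
    bool is_completed = false;
};
using job_list_t = std::list<std::shared_ptr<job_t>>;

// Erasing through the list's own iterator keeps the enumeration valid;
// removing a job by value from inside the loop would silently invalidate
// the iterator currently pointing at it.
void remove_completed_jobs(job_list_t &jobs) {
    for (auto it = jobs.begin(); it != jobs.end();) {
        it = (*it)->is_completed ? jobs.erase(it) : std::next(it);
    }
}
```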
Prior to this fix, the wait command used waitpid() directly. Switch it to
calling process_mark_finished_children() along with the rest of the job
machinery. This centralizes the waitpid call to a single location.
In fish we play fast and loose with status codes set directly (e.g. on
failed redirections), versus status codes returned from waitpid(), versus
the value of $status. Introduce a new value type proc_status_t to
encapsulate this logic.
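A sketch of the spirit of such a type (the real proc_status_t has more constructors and accessors):

```
#include <sys/wait.h>

class proc_status_t {
    int status_;  // in the format produced by waitpid()
    explicit proc_status_t(int status) : status_(status) {}

   public:
    static proc_status_t from_waitpid(int status) { return proc_status_t(status); }
    // For statuses fish sets directly, e.g. on failed redirections;
    // pack the code the way waitpid() would report it.
    static proc_status_t from_exit_code(int code) {
        return proc_status_t((code & 0xFF) << 8);
    }
    bool signal_exited() const { return WIFSIGNALED(status_); }
    // The value to surface as $status.
    int status_value() const {
        return signal_exited() ? 128 + WTERMSIG(status_) : WEXITSTATUS(status_);
    }
};
```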
This introduces "internal processes" which are backed by a pthread instead
of a normal process. Internal processes are reaped using the topic
machinery, plugging in neatly alongside the sigchld topic; this means that
process_mark_finished_children() can wait for internal and external
processes simultaneously.
Initially internal processes replace the forked process that fish uses to
write out the output of blocks and functions.
The sigchld generation expresses the idea that, if we receive a sigchld
signal, the generation will be different than when we last recorded it. A
process cannot exit before it has launched, so check the generation count
before process launch. This is an optimization that reduces failing
waitpid calls.
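A minimal sketch of the generation count, with illustrative names:

```
#include <atomic>
#include <cstdint>

static std::atomic<uint64_t> s_sigchld_generation{0};

// Bumped from the SIGCHLD handler (lock-free atomics are signal-safe).
void note_sigchld() {
    s_sigchld_generation.fetch_add(1, std::memory_order_relaxed);
}

struct process_t {
    // Recorded just before launch; the process cannot have exited while
    // the generation still had this value.
    uint64_t gen_at_launch =
        s_sigchld_generation.load(std::memory_order_relaxed);
};

// If no SIGCHLD has arrived since before launch, skip the waitpid() call.
bool possibly_exited(const process_t &p) {
    return s_sigchld_generation.load(std::memory_order_relaxed) > p.gen_at_launch;
}
```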
This is a big change to how process reaping works, reimplementing it
using topics. The idea is to simplify the logic in
process_mark_finished_children around blocking, and also to prepare for
"internal processes" which do not correspond to real processes.
Before this change, fish would use waitpid() to wait for a process group,
OR would individually poll processes if the process group leader was
unreapable.
After this change, fish no longer ever calls blocking waitpid(). Instead
fish uses the topic mechanism. For each reapable process, fish checks if
it has received a SIGCHLD since the last poll; if not, it waits until the
next SIGCHLD, and then polls them all.
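A simplified sketch of the resulting loop, with the topic-monitor wait elided and names illustrative:

```
#include <sys/wait.h>
#include <atomic>
#include <cstdint>
#include <vector>

static std::atomic<uint64_t> s_sigchld_gen{0};  // bumped by the SIGCHLD handler

void mark_finished_children(std::vector<pid_t> &pids, uint64_t &last_gen,
                            bool allow_blocking) {
    uint64_t gen = s_sigchld_gen.load();
    if (gen == last_gen && allow_blocking) {
        // No SIGCHLD since the last poll: sleep until the next one arrives
        // (via the topic monitor; elided) instead of a blocking waitpid().
    }
    last_gen = gen;
    // Now poll every reapable process without blocking.
    for (pid_t pid : pids) {
        int status = 0;
        (void)waitpid(pid, &status, WNOHANG | WUNTRACED);
    }
}
```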