We need special handling when reporting backtraces for commands run
during startup, i.e. config.fish. Previously this was tracked in a global
variable; make it local to the parser to eliminate the global.
No functional change here.
Cancellation groups were meant to reflect the following idea: if you ran a
simple block:
begin
cmd1
cmd2
end
then under job control, cmd1 and cmd2 would get separate groups; however, if
either exited due to SIGINT or SIGQUIT, we wanted to propagate that to the
outer block. So the outermost block and its interior jobs would share a
cancellation group. However, this is more complex than necessary; it's
sufficient for the execution context to just store an int internally.
This ought not to affect anything user-visible.
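With made-up names (the real fields in parse_execution_context_t differ), the simplified approach is roughly:
#include <csignal>
// Sketch only: the execution context remembers a cancel signal directly,
// instead of sharing a cancellation group object.
struct execution_context_sketch {
    int cancel_signal{0};  // 0 means "not cancelled"
    // After running a job, record SIGINT/SIGQUIT so the enclosing block
    // stops launching new jobs.
    void note_exit_signal(int sig) {
        if (sig == SIGINT || sig == SIGQUIT) cancel_signal = sig;
    }
    bool should_cancel() const { return cancel_signal != 0; }
};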
This disables job control inside command substitutions. Prior to this
change, a cmdsub might get its own process group. This caused fish to fail
to cancel loops properly. For example:
while true ; echo (sleep 5) ; end
could not be control-C cancelled, because the signal would go to sleep,
and so the loop would continue on. The simplest way to fix this is to
match other shells and not use job control in cmdsubs.
Related: #1362
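Purely as an illustration of the idea, not the actual decision code in fish:
// Illustration only: command substitutions never claim their own process
// group, so a control-C delivered to the terminal's group still reaches
// everything the loop runs.
struct exec_settings_sketch {
    bool job_control_enabled;
    bool in_command_substitution;
    bool wants_own_pgroup() const {
        return job_control_enabled && !in_command_substitution;
    }
};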
is_block is a field which supports 'status is-block', and also controls
whether notifications get posted. However there is no reason to store
this as a distinct field since it is trivially computed from the block
list. Stop storing it. No functional changes in this commit.
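A minimal sketch of the idea (names invented; the real predicate over the block list may be more involved):
#include <vector>
struct block_sketch { /* one entry per active block */ };
struct parser_sketch {
    std::vector<block_sketch> block_list;
    // is_block is no longer a stored field; it is derived on demand.
    bool is_block() const { return !block_list.empty(); }
};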
In preparation for using wait handles in --on-process-exit events, factor
wait handles into their own wait handle store. Also switch them to
per-process instead of per-job, which is a simplification.
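A sketch of the per-process store, with invented names:
#include <sys/types.h>
#include <memory>
#include <unordered_map>
struct wait_handle_sketch {
    pid_t pid{};
    bool completed{false};
    int status{0};
};
// Handles are keyed by pid rather than owned by a job, so both `wait` and
// --on-process-exit handlers can look up a process after its job is gone.
struct wait_handle_store_sketch {
    std::unordered_map<pid_t, std::shared_ptr<wait_handle_sketch>> handles;
    void add(std::shared_ptr<wait_handle_sketch> wh) { handles[wh->pid] = wh; }
    std::shared_ptr<wait_handle_sketch> get(pid_t pid) const {
        auto it = handles.find(pid);
        return it == handles.end() ? nullptr : it->second;
    }
};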
This is preparing to address the problem where fish cannot wait on a
reaped job, because it only looks at the active job list. Introduce the
idea of a "wait handle," which is a thing that `wait` can use to check if
a job is finished. A job may produce its wait handle on demand, and
parser_t will save the wait handle from wait-able jobs at the point they
are reaped.
This change merely introduces the idea; the next change makes builtin_wait
start using it.
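A sketch of the concept, with invented names:
#include <memory>
// A wait handle outlives the job that produced it, so `wait` can still
// observe the result after the job has been reaped.
struct wait_handle_sketch {
    bool completed{false};  // set when the job is reaped
    int status{0};          // exit status recorded at reap time
};
struct job_sketch {
    std::shared_ptr<wait_handle_sketch> wait_handle;
    // Produce the handle on demand; parser_t saves it when reaping the job.
    std::shared_ptr<wait_handle_sketch> get_wait_handle() {
        if (!wait_handle) wait_handle = std::make_shared<wait_handle_sketch>();
        return wait_handle;
    }
};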
Startup profiling now goes to a separate file, because that makes option
parsing easier and allows profiling startup and the normal run at the same
time.
The "normal" profile now contains only the profile data of the actual
run, which is much more useful - you can now profile a function by
running
fish -C 'source /path/to/thing' --profile /tmp/thefunction.prof -c 'thefunction'
and won't need to filter out extraneous information.
This concerns how "internal job groups" know to stop executing when an
external command receives a "cancel signal" (SIGINT or SIGQUIT). For
example:
while true
sleep 1
end
The intent is that if any 'sleep' exits due to a cancel signal, then so does
the while loop. This is why you can hit control-C to end the loop even
if the SIGINT is delivered to sleep and not to fish.
Here the 'while' loop is considered an "internal job group" (no separate
pgid, bash would not fork) while each 'sleep' is a separate external
command with its own job group, pgroup, etc. Prior to this change, after
running each 'sleep', parse_execution_context_t would check to see if its
exit status was a cancel signal, and if so, stash it into an int that the
cancel checker would check. But this became unwieldy: now there were three
sources of cancellation signals (that int, the job group, and fish itself).
Introduce the notion of a "cancellation group" which is a set of job
groups that should cancel together. Even though the while loop and sleep
are in different job groups, they are in the same cancellation group. When
any job gets a SIGINT or SIGQUIT, it marks that signal in its cancellation
group, which prevents running new jobs in that group.
This reduces the number of signals to check from 3 to 2; eventually we can
teach cancellation groups how to check fish's own signals and then it will
just be 1.
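With invented names, the shape of the idea is roughly:
#include <csignal>
#include <memory>
// A cancellation group is shared by every job group that should stop
// together: here, the while loop's internal group and each sleep's group.
struct cancellation_group_sketch {
    volatile sig_atomic_t cancel_signal{0};
    void cancel_with_signal(int sig) {
        if (sig == SIGINT || sig == SIGQUIT) cancel_signal = sig;
    }
    // Checked before launching another job in the group.
    bool should_cancel() const { return cancel_signal != 0; }
};
struct job_group_sketch {
    std::shared_ptr<cancellation_group_sketch> cancel_group;
};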
This concerns code like the following:
while true ; sleep 100 ; end
Here 'while' is a "simple block execution" and does not create a new job
or get a pgid. Each 'sleep', however, is an external command execution and
is treated as a distinct job (bash behaves the same way). So `while` and
`sleep` are always in different job groups.
The problem comes about if 'sleep' is cancelled through SIGINT or SIGQUIT.
Prior to 2a4c545b21, if *any* process got a SIGINT or SIGQUIT, then fish
would mark a global "stop executing" variable. This obviously prevents
background execution of fish functions.
In 2a4c545b21, this was changed so only the job's group gets marked as
cancelled. However in the case of one job group spawning another, we
weren't propagating the signal.
This adds a signal to parse_execution_context which the parser checks after
execution. It's not ideal since now we have three different places where
signals can be recorded. However it fixes this regression which is too
important to leave unfixed for long.
Fixes #7259
This can be used to determine whether the previous command produced a real
status, or just carried over the status from the command before it.
Backgrounded commands and variable assignments will not increment
status_generation; all other commands will.
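A sketch of the rule with illustrative names, not fish's actual internals:
#include <cstdint>
struct status_tracker_sketch {
    uint64_t status_generation{0};
    // Bump the generation only when a command really produces a new $status.
    void on_command_finished(bool backgrounded, bool assignment_only) {
        if (!backgrounded && !assignment_only) status_generation++;
    }
};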
This is a set of miscellaneous cleanups for profiling.
An errant newline, which got introduced with the new ast, has been removed
from 'if' statement output.
Switch from storing unique_ptrs to a deque, which allocates less.
Collapse "parse" and "exec" times into just a single value "duration". The
"parse" time no longer makes sense, as we now parse ahead of time.
When fish receives a "cancellation inducing" signal (SIGINT in particular)
it has to unwind execution: for example, while loops or whatever else is
executing. There are two ways this may come about:
1. The fish process received the signal
2. A child process received the signal
An example of the second case is:
some_command | some_function
Here `some_command` is the tty owner and so will receive control-C, but
then fish has to cancel function execution.
Prior to this change, these were handled uniformly: both would just set a
cancellation signal inside the parser. However in the future we will have
multiple parsers and it may not be obvious which one to set the flag in.
So instead distinguish these cases: if a process receives SIGINT we mark
the signal in its job group, and if fish receives it we set a global
variable.
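Roughly, with invented names:
#include <csignal>
// Case 1: fish itself received the signal; a process-wide flag is set, and
// every parser checks it.
volatile sig_atomic_t fish_cancel_signal = 0;
void handle_cancel_signal(int sig) { fish_cancel_signal = sig; }
// Case 2: a child received the signal; when the child is reaped, the cancel
// signal is recorded in that job's group rather than globally, roughly:
//   if (WIFSIGNALED(status)) job_group->cancel_with_signal(WTERMSIG(status));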
Initially I wanted to pick a different name to avoid confusion with
process groups, but really job trees *are* process groups. So name them
to reflect that fact.
Also rename "placeholder" to "internal", which is clearer.
Prior to this, jobs all had a pgid, and fish had to work hard to ensure
that pgids were inherited properly for nested jobs. But now the job tree
is the source of truth and there is only one location for the pgid.
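With invented names, the single-location idea is roughly:
#include <sys/types.h>
// The pgid lives in exactly one place: the job tree shared by nested jobs.
struct job_tree_sketch {
    pid_t pgid{-1};  // -1 until the first external process is spawned
    void set_pgid_if_unset(pid_t pid) {
        if (pgid == -1) pgid = pid;
    }
};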
job_lineage was used to track "where jobs came from," but the job tree idea
is a better abstraction. It groups jobs together much as a process group
would in other shells. Begin to remove the notion of lineage.
Job trees come in two flavors: "placeholders" for jobs that are only fish
functions, and non-placeholders, which need to track a pgid. This adds
logic to let a job decide whether its parent's job tree is appropriate,
allocating a new tree if not.
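A sketch of that decision, under the simplifying assumption that a job shares its parent's tree exactly when the flavors match (names invented):
#include <memory>
struct job_tree_sketch {
    bool placeholder;  // true for jobs that are only fish functions
};
std::shared_ptr<job_tree_sketch> tree_for_job(
    const std::shared_ptr<job_tree_sketch> &parent_tree, bool needs_pgid) {
    bool want_placeholder = !needs_pgid;
    // Reuse the parent's tree when its flavor is appropriate...
    if (parent_tree && parent_tree->placeholder == want_placeholder)
        return parent_tree;
    // ...otherwise allocate a fresh tree of the right flavor.
    auto tree = std::make_shared<job_tree_sketch>();
    tree->placeholder = want_placeholder;
    return tree;
}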
Give string expansion an (optional) parent pgroup. This is threaded all
the way into eval(). This ensures that in a mixed pipeline like:
cmd | begin ; something (cmd2) ; end
cmd2 and cmd have the same pgroup.
Add a test to ensure that command substitutions inherit pgroups
properly.
Fixes #6624
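Illustrative signatures only (the real ones in fish differ), showing the pgroup being threaded from expansion into eval():
#include <memory>
#include <string>
#include <vector>
struct job_group_sketch {};  // stands in for the pipeline's pgroup
int eval_sketch(const std::string &src,
                const std::shared_ptr<job_group_sketch> &parent_pgroup) {
    (void)src;
    (void)parent_pgroup;  // new jobs for the cmdsub join this pgroup
    return 0;
}
int expand_sketch(const std::string &input, std::vector<std::string> *out,
                  const std::shared_ptr<job_group_sketch> &parent_pgroup) {
    (void)out;
    // On hitting a command substitution, hand the pgroup straight to eval().
    return eval_sketch(input, parent_pgroup);
}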
Prior to this fix, fish was rather inconsistent about when $status got set
in response to an error. For example, a failed expansion like "$foo["
would not modify $status.
This makes the following inter-related changes:
1. String expansion now directly returns the value to set for $status on
error. The value is always used.
2. parser_t::eval() now directly returns the proc_status_t, which cleans
up a lot of call sites.
3. We expose a new function exec_subshell_for_expand() which ignores
$status but returns errors specifically related to subshell expansion.
4. We reify the notion of "expansion breaking" errors. These include
command-not-found, expand syntax errors, and others.
The upshot is we are more consistent about always setting $status on
errors.
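A sketch of the resulting shapes; proc_status_t is the real type name, the rest is illustrative:
// eval() returns the status directly, so every caller can apply it.
struct proc_status_sketch {
    int status_value{0};
};
// String expansion reports the $status to set when it fails, plus whether
// the error is an "expansion breaking" one (cmd-not-found, syntax error, ...).
struct expand_result_sketch {
    bool ok{true};
    bool expansion_breaking{false};
    int status_on_error{0};  // always applied to $status on failure
};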
This commit recognizes an existing pattern: many operations need some
combination of a set of variables, a way to detect cancellation, and
sometimes a parser. For example, tab completion needs a parser to execute
custom completions and the variable set, and should cancel on SIGINT. Background
autosuggestions don't need a parser, but they do need the variables and
should cancel if the user types something new. Etc.
This introduces a new triple operation_context_t that wraps these concepts
up. This simplifies many method signatures and argument passing.
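A sketch of the triple, with hypothetical member names:
#include <functional>
#include <memory>
struct parser_sketch;       // stands in for parser_t
struct environment_sketch;  // stands in for the variable set
struct operation_context_sketch {
    std::shared_ptr<parser_sketch> parser;  // may be empty, e.g. autosuggestions
    const environment_sketch *vars = nullptr;
    std::function<bool()> cancel_checker;   // e.g. "was SIGINT seen?"
    bool check_cancel() const { return cancel_checker && cancel_checker(); }
};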