When expanding a string, you may or may not want to generate
descriptions alongside the expanded string. Usually you don't want
them, but descriptions were opt-out. This commit makes them opt-in.
Previously, when a command wasn't found, fish would emit the
"fish_command_not_found" *event*.
This was annoying: it was hard to override (the code ended up
checking for a function called `__fish_command_not_found_handler`
anyway!), the setup was ugly,
and it was useless - there is no use case for multiple command-not-found handlers.
Instead, let's just call a function `fish_command_not_found` if it
exists, or print the default message otherwise.
The event is completely removed, but because a missing event is not an error
(MEISNAE in C++-speak) this isn't an issue.
Note that, for backwards-compatibility, we still keep the default
handler function around even though the new one is hard-coded in C++.
Also, if we detect a previous handler, the new handler just calls it.
This way, the backwards-compatible way to install a custom handler is:
```fish
function __fish_command_not_found_handler --on-event fish_command_not_found
# do a little dance, make a little love, get down tonight
end
```
and the new hotness is:
```fish
function fish_command_not_found
# do the thing
end
```
Fixes #7293.
This concerns how "internal job groups" know to stop executing when an
external command receives a "cancel signal" (SIGINT or SIGQUIT). For
example:
```fish
while true
    sleep 1
end
```
The intent is that if any 'sleep' exits due to a cancel signal, the while
loop should end as well. This is why you can hit control-C to end the loop even
if the SIGINT is delivered to sleep and not fish.
Here the 'while' loop is considered an "internal job group" (no separate
pgid, bash would not fork) while each 'sleep' is a separate external
command with its own job group, pgroup, etc. Prior to this change, after
running each 'sleep', parse_execution_context_t would check to see if its
exit status was a cancel signal, and if so, stash it into an int that the
cancel checker would check. But this became unwieldy: now there were three
sources of cancellation signals (that int, the job group, and fish itself).
Introduce the notion of a "cancellation group" which is a set of job
groups that should cancel together. Even though the while loop and sleep
are in different job groups, they are in the same cancellation group. When
any job gets a SIGINT or SIGQUIT, it marks that signal in its cancellation
group, which prevents running new jobs in that group.
This reduces the number of signals to check from 3 to 2; eventually we can
teach cancellation groups how to check fish's own signals and then it will
just be 1.
The 'time' prefix may come about either because the job itself is marked
with time, or because of the "inside out" weirdness of 'not time...'.
Factor this logic together and precompute it for a job.
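For illustration, both of the following end up with the 'time' prefix on the job (the command being timed is arbitrary):

```fish
# Job marked with 'time' directly.
time sleep 1
# The "inside out" case: 'time' appears after 'not' but still times the whole job.
not time sleep 1
```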
This adds a new type 'exit_state_t' which encapsulates where fish is in
the process of exiting. This makes it explicit when fish wants to cancel
"ordinary" fish script but still run exit handlers.
There should be no user-visible behavior change here; this is just
refactoring in preparation for the next commit.
Prior to this fix, the `exit` command would set a global variable in the
reader, which parse_execution would check. However in concurrent mode you
may have multiple scripts being sourced at once, and 'exit' should only
apply to the current script.
Switch to using a variable in the parser's libdata instead.
This concerns code like the following:
```fish
while true; sleep 100; end
```
Here 'while' is a "simple block execution" and does not create a new job,
or get a pgid. Each 'sleep' however is an external command execution, and
is treated as a distinct job. (bash is the same way). So `while` and
`sleep` are always in different job groups.
The problem comes about if 'sleep' is cancelled through SIGINT or SIGQUIT.
Prior to 2a4c545b21, if *any* process got a SIGINT or SIGQUIT, then fish
would mark a global "stop executing" variable. This obviously prevents
background execution of fish functions.
In 2a4c545b21, this was changed so only the job's group gets marked as
cancelled. However in the case of one job group spawning another, we
weren't propagating the signal.
This adds a signal to parse_execution_context which the parser checks after
execution. It's not ideal since now we have three different places where
signals can be recorded. However, it fixes this regression, which is too
important to leave unfixed for long.
Fixes#7259
This can be used to determine whether the previous command produced a real status, or just carried over the status from the command before it. Backgrounded commands and variable assignments will not increment status_generation; all other commands will.
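A minimal interactive sketch of the distinction:

```fish
false      # produces a real status (1) and increments $status_generation
sleep 5 &  # backgrounded: $status still reads 1, but the generation is unchanged,
           # so a prompt comparing $status_generation across prompts can tell
           # the 1 is carried over rather than freshly produced
```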
Builtins output to stdout and stderr via io_streams_t. Prior to this fix, it
contained an output_stream_t which just wrapped a buffer. So all builtin output
went to this buffer (except for eval).
Switch output_stream_t to become a new abstract class which can output to a
buffer, file descriptor, or nowhere. This allows for example `string` to stream
its output as it is produced, instead of buffering it.
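For example, a pipeline like this (log path illustrative) only becomes useful with streaming, since a buffering `string` would print nothing until its input ends:

```fish
# Matching lines now appear as they arrive instead of only after EOF.
tail -f /var/log/syslog | string match '*error*'
```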
This is a set of miscellaneous cleanup for profiling.
An errant newline, introduced with the new ast, has been removed from 'if'
statement output.
Switch profile items from being stored as unique_ptrs to a deque, which allocates less.
Collapse "parse" and "exec" times into just a single value "duration". The
"parse" time no longer makes sense, as we now parse ahead of time.
Assigning the tty is really a function of a job group, not an individual
job. Reflect that in terminal_maybe_give_to_job_group and also
terminal_return_from_job_group.
When fish receives a "cancellation inducing" signal (SIGINT in particular)
it has to unwind execution - for example while loops or whatever else that
is executing. There are two ways this may come about:
1. The fish process received the signal
2. A child process received the signal
An example of the second case is:
```fish
some_command | some_function
```
Here `some_command` is the tty owner and so will receive control-C, but
then fish has to cancel function execution.
Prior to this change, these were handled uniformly: both would just set a
cancellation signal inside the parser. However in the future we will have
multiple parsers and it may not be obvious which one to set the flag in.
So instead distinguish these cases: if a process receives SIGINT we mark
the signal in its job group, and if fish receives it we set a global
variable.
Initially I wanted to pick a different name to avoid confusion with
process groups, but really job trees *are* process groups. So name them
to reflect that fact.
Also rename "placeholder" to "internal" which is clearer.
job_lineage was used to track "where jobs came from" but the job tree idea is
a better abstraction. It groups jobs together similar to how a process group
would in other shells. Begin to remove the notion of lineage.
Job trees come in two flavors: "placeholders" for jobs which are only fish
functions, and non-placeholders which need to track a pgid. This adds
logic to allow a job to decide whether its parent's job tree is appropriate,
and to allocate a new tree if not.
No-execute mode used to error out when a command wasn't known, even when
it was a function that would only be discovered via autoloading.
Now we just accept that a command doesn't exist when no-execute is
given - we're not going to execute it anyway.
Also, in the same breath stop counting empty commands after expansion
and empty wildcard expansions as errors - these depend on runtime
values, so we can't verify them without executing.
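For example (script and function names here are hypothetical):

```fish
# my_script.fish calls my_helper, which only exists as an autoloadable
# function on $fish_function_path at runtime.
fish --no-execute my_script.fish   # now passes instead of reporting an unknown command
```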
Fixes #977.
(note that it still executes "time", but that's another commit)
The `function --on-job-exit caller` feature allows a command substitution
to observe when the parent job exits. This has never worked very well - in
particular it is based on job IDs, so a function that observes this will
run multiple times. Implement it properly.
Do this by introducing a non-recycled "internal job id".
This is only used by psub, but ensure it works properly nonetheless.
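As a sketch of the psub-style pattern (function and variable names here are illustrative), a function defined inside a command substitution can observe the exit of the job that ran the substitution:

```fish
# Run inside a command substitution such as `diff (foo | psub) (bar | psub)`;
# the handler fires when the calling job (diff) exits, at which point the
# temporary file can safely be removed.
function __cleanup_tmpfile --on-job-exit caller --inherit-variable tmpfile
    command rm -f $tmpfile
end
```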
Introduce pgroup_provenance_t, a type which captures "where the pgroup
comes from." This centralizes some logic around how pgroups are
assigned, and it anticipates concurrent execution.
Prior to this fix, fish was rather inconsistent in when $status gets set
in response to an error. For example, a failed expansion like "$foo["
would not modify $status.
This makes the following inter-related changes:
1. String expansion now directly returns the value to set for $status on
error. The value is always used.
2. parser_t::eval() now directly returns the proc_status_t, which cleans
up a lot of call sites.
3. We expose a new function exec_subshell_for_expand() which ignores
$status but returns errors specifically related to subshell expansion.
4. We reify the notion of "expansion breaking" errors. These include
command-not-found, expand syntax errors, and others.
The upshot is we are more consistent about always setting $status on
errors.
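For example, run interactively (the exact numeric status is not specified here):

```fish
echo "$foo["   # failed expansion: unclosed index bracket
echo $status   # now reliably nonzero; previously the old status was left in place
```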