This used to construct a vector, which was then passed down and filled
with a new event_t each time around the loop. That's wasteful: we only fire
one event here, and it's simply the variable `event`.
This reduces the overhead of a for-loop by ~10%:
```fish
for i in (seq 100000)
    true
end
```
runs in about 90% of the time now.
This disables job control inside command substitutions. Prior to this
change, a cmdsub might get its own process group, which meant loops
containing one could not be cancelled properly. For example:
while true ; echo (sleep 5) ; end
could not be cancelled with control-C, because the signal would go to sleep
rather than to fish, and so the loop would continue. The simplest way to fix this is to
match other shells and not use job control in cmdsubs.
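With this change, the cmdsub shares fish's process group, so control-C also reaches fish and the loop is cancelled:
```fish
# Previously, (sleep 5) could end up in its own process group: control-C hit
# only sleep, and the loop kept going. Now the loop as a whole can be cancelled.
while true
    echo (sleep 5)
end
```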
Related: #1362
for PWD in foo; true; end
prints:
>..src/parse_execution.cpp:461: end_execution_reason_t parse_execution_context_t::run_for_statement(const ast::for_header_t&, const ast::job_list_t&): Assertion `retval == ENV_OK' failed.
because this code used the wrong check to see whether a variable is read-only.
is_block is a field which supports 'status is-block', and also controls
whether notifications get posted. However, there is no reason to store
this as a distinct field since it is trivially computed from the block
list. Stop storing it. No functional changes in this commit.
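(For reference, `status is-block` is the user-visible command this field backs; its behavior is unchanged:)
```fish
# Inside a block - here a for loop - `status is-block` succeeds:
for i in 1
    status is-block; and echo "inside a block"
end
# At the top level, outside any block, it fails:
status is-block; or echo "not inside a block"
```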
The user may write for example:
echo foo >&5
and fish would try to output to file descriptor 5, within the fish process
itself. This has unpredictable effects and isn't useful. Make this an
error.
Note that the reverse is "allowed" but ignored:
echo foo 5>&1
This conceptually dup2s stdout onto fd 5, but since no builtin writes to
fd 5, we ignore it.
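A quick sketch of both cases as they behave after this change (exact error text omitted):
```fish
echo foo >&5    # now an error: fd 5 would be a descriptor inside the fish process itself
echo foo 5>&1   # accepted but has no effect; prints "foo", the dup onto fd 5 is ignored
```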
When expanding a string, you may or may not want to generate
descriptions alongside the expanded string. Usually you don't want to,
but descriptions were opt-out. This commit makes them opt-in.
Previously, when a command wasn't found, fish would emit the
"fish_command_not_found" *event*.
This was annoying: it was hard to override (the code ended up
checking for a function called `__fish_command_not_found_handler`
anyway!), the setup was ugly,
and it was needless - there is no use case for multiple command-not-found handlers.
Instead, let's just call a function `fish_command_not_found` if it
exists, or print the default message otherwise.
The event is completely removed, but because a missing event is not an error
(MEISNAE in C++-speak) this isn't an issue.
Note that, for backwards-compatibility, we still keep the default
handler function around even though the new one is hard-coded in C++.
Also, if we detect a previous handler, the new handler just calls it.
This way, the backwards-compatible way to install a custom handler is:
```fish
function __fish_command_not_found_handler --on-event fish_command_not_found
# do a little dance, make a little love, get down tonight
end
```
and the new hotness is:
```fish
function fish_command_not_found
# do the thing
end
```
Fixes #7293.
This concerns how "internal job groups" know to stop executing when an
external command receives a "cancel signal" (SIGINT or SIGQUIT). For
example:
while true
sleep 1
end
The intent is that if any 'sleep' exits due to a cancel signal, then so should
the while loop. This is why you can hit control-C to end the loop even
if the SIGINT is delivered to sleep and not fish.
Here the 'while' loop is considered an "internal job group" (no separate
pgid, bash would not fork) while each 'sleep' is a separate external
command with its own job group, pgroup, etc. Prior to this change, after
running each 'sleep', parse_execution_context_t would check to see if its
exit status was a cancel signal, and if so, stash it into an int that the
cancel checker would check. But this became unwieldy: now there were three
sources of cancellation signals (that int, the job group, and fish itself).
Introduce the notion of a "cancellation group" which is a set of job
groups that should cancel together. Even though the while loop and sleep
are in different job groups, they are in the same cancellation group. When
any job gets a SIGINT or SIGQUIT, it marks that signal in its cancellation
group, which prevents running new jobs in that group.
This reduces the number of places to check for cancellation signals from three
to two; eventually we can teach cancellation groups to check fish's own signals
as well, and then it will be just one.
The 'time' prefix may come about either because the job itself is marked
with time, or because of the "inside out" weirdness of 'not time...'.
Factor this logic together and precompute it for a job.
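For reference, the two ways a job can pick up the 'time' prefix, in fish script:
```fish
time sleep 1        # the job itself is marked with 'time'
not time sleep 1    # 'time' appears inside 'not', but the timing still applies to the whole job
```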
This adds a new type 'exit_state_t' which encapsulates where fish is in
the process of exiting. This makes it explicit when fish wants to cancel
"ordinary" fish script but still run exit handlers.
There should be no user-visible behavior change here; this is just
refactoring in preparation for the next commit.
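(Here "exit handlers" means functions registered on the fish_exit event, e.g.:)
```fish
function report_exit --on-event fish_exit
    # still runs while fish is exiting, even though ordinary script execution is cancelled
    echo "fish is exiting"
end
```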
Prior to this fix, the `exit` command would set a global variable in the
reader, which parse_execution would check. However in concurrent mode you
may have multiple scripts being sourced at once, and 'exit' should only
apply to the current script.
Switch to using a variable in the parser's libdata instead.
This concerns code like the following:
while true ; sleep 100; end
Here 'while' is a "simple block execution" and does not create a new job
or get a pgid. Each 'sleep', however, is an external command execution and
is treated as a distinct job (bash behaves the same way). So `while` and
`sleep` are always in different job groups.
The problem comes about if 'sleep' is cancelled through SIGINT or SIGQUIT.
Prior to 2a4c545b21, if *any* process got a SIGINT or SIGQUIT, then fish
would mark a global "stop executing" variable. This obviously prevents
background execution of fish functions.
In 2a4c545b21, this was changed so that only the job's group gets marked as
cancelled. However, in the case of one job group spawning another, we
weren't propagating the signal.
This adds a signal to parse_execution_context which the parser checks after
execution. It's not ideal, since now we have three different places where
signals can be recorded, but it fixes a regression that is too
important to leave unfixed for long.
Fixes#7259
This can be used to determine whether the previous command produced a real status, or just carried over the status from the command before it. Backgrounded commands and variable assignments will not increment status_generation; all other commands will.
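For example, a prompt can use it to report $status only when the last command actually produced one (the variable name __last_status_generation is just illustrative):
```fish
function fish_prompt
    set -l last_status $status
    # Only mention the status if its generation changed since the last prompt.
    if test "$status_generation" != "$__last_status_generation"
        echo -n "[$last_status] "
    end
    set -g __last_status_generation $status_generation
    echo -n '> '
end
```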
Builtins output to stdout and stderr via io_streams_t. Prior to this fix, it
contained an output_stream_t which just wraps a buffer, so all builtin output
went to this buffer (except for eval).
Switch output_stream_t to become a new abstract class which can output to a
buffer, file descriptor, or nowhere. This allows, for example, `string` to stream
its output as it is produced, instead of buffering it.
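A user-visible way to see the difference (illustrative; any long-running pipeline through `string` will do):
```fish
# Matching lines are now written as they are produced, instead of being
# buffered in full before anything appears downstream.
seq 1 5000000 | string match --regex '00000$' | head -n 3
```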
This is a set of miscellaneous cleanups for profiling.
An errant newline, introduced with the new ast, has been removed from 'if'
statement output.
Switch from storing unique_ptrs to a deque, which allocates less.
Collapse the "parse" and "exec" times into a single "duration" value. The
"parse" time no longer makes sense, as we now parse ahead of time.
Assigning the tty is really a function of a job group, not an individual
job. Reflect that in terminal_maybe_give_to_job_group and also
terminal_return_from_job_group.
When fish receives a "cancellation inducing" signal (SIGINT in particular)
it has to unwind execution - for example, while loops or whatever else
is executing. There are two ways this may come about:
1. The fish process received the signal
2. A child process received the signal
An example of the second case is:
some_command | some_function
Here `some_command` is the tty owner and so will receive control-C, but
then fish has to cancel function execution.
Prior to this change, these were handled uniformly: both would just set a
cancellation signal inside the parser. However, in the future we will have
multiple parsers and it may not be obvious which one to set the flag in.
So instead distinguish these cases: if a process receives SIGINT we mark
the signal in its job group, and if fish receives it we set a global
variable.
Initially I wanted to pick a different name to avoid confusion with
process groups, but really job trees *are* process groups. So name them
to reflect that fact.
Also rename "placeholder" to "internal", which is clearer.
job_lineage was used to track "where jobs came from", but the job tree idea is
a better abstraction. It groups jobs together similarly to how a process group
would in other shells. Begin to remove the notion of lineage.
Job trees come in two flavors: "placeholders" for jobs which are only fish
functions, and non-placeholders which need to track a pgid. This adds
logic to allow a job to decide whether its parent's job tree is appropriate,
and to allocate a new tree if not.
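A rough fish-level illustration of the two flavors described above:
```fish
function greet
    echo hello
end

# A job that is only a fish function can live in a placeholder job tree;
# no pgid of its own is needed:
greet

# A job running an external command needs a job tree that tracks a real pgid:
command sleep 1
```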