When there are multiple event handlers for a single event, we would print
the same log statement once per handler. Let's add the function name to make
this less confusing.
Let's hope this doesn't cause build failures for e.g. musl: I just
know it's good on macOS and our Linux CI.
It's been a long time.
One fix this brings: I discovered we #include assert.h or cassert
in a lot of places. If those ever happen to be in a file that doesn't
include common.h, or appear before common.h gets included, we're
unwittingly working with the system 'assert' macro again, which
may be compiled out depending on build flags, or at least behaves
differently on a failed assertion. We undef 'assert' and redefine it in common.h.
Those includes were all eliminated, except in one catch-22 spot for
maybe.h: it can't include common.h. A fix might be to
make a fish_assert.h that common.h would normally re-export.
ESCAPE_ALL is not really a helpful name. It is also the most common flag.
Let's make it the default so we can remove this unhelpful name.
While we're at it, let's add a default value for the flags argument, which helps
most callers.
The absence of ESCAPE_ALL makes it only escape nonprintable characters
(with some exceptions). We use this for displaying strings in the completion
pager as well as for the human-readable output of "set", "set -S", "bind"
and "functions".
No functional change.
In b0084c3fc4, we refactored how event handlers get removed. But this
also caused us to remove "one-shot" handlers even if they had not yet
fired. Fix this.
s_observed_signals is used to inform the signal handler which signals may
have --on-signal functions attached to them, as an optimization. Prior to
this change it was latched: once we started observing a signal we assumed we
would keep observing that signal. Make it properly increment and decrement,
in preparation for making trap work non-interactively.
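At the user level (the handler and signal here are just for illustration), the count should now go back down when the last handler for a signal is erased:
```fish
# Registering an --on-signal function marks SIGUSR1 as observed...
function usr1_handler --on-signal SIGUSR1
    echo "got SIGUSR1"
end

# ...and erasing the only handler should decrement the count again,
# so SIGUSR1 stops being treated as observed (previously this latched).
functions --erase usr1_handler
```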
This concerns what happens if one event handler removes another, when
both are responding to the same event. Previously we had a "double lock"
where we would traverse the list twice. Now track directly in the
handler when it is removed; this simplifies the code a lot. No
functional changes expected here.
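A fish-script sketch of the scenario this covers (event and function names are illustrative):
```fish
function first --on-event my_event
    # Removes another handler for the same event while that event
    # is still being dispatched.
    functions --erase second
end

function second --on-event my_event
    echo second
end

emit my_event
```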
Watching for exit events is rare, so check if we have any exit events
before actually emitting them. This saves about 2% of time in the
external_cmds benchmark.
The clang-tidy warning for pending_signals_t was about the operator=
return type being wrong (misc-unconventional-assign-operator).
Signed-off-by: Rosen Penev <rosenp@gmail.com>
The names in the implementation differed from those in the header, but
the header names were definitely better (because they correlated across
function calls).
The sort routine was using the address of the **function pointer**
`signal(int signal)` rather than the union payload of the same name.
Perhaps one of the two should be renamed.
Prior to this change, a function with an on-job-exit event handler had to be
added with the pgid of the job. But sometimes the pgid of the job is fish
itself (if job control is disabled) and the previous commit made last_pid
an actual pid from the job, instead of its pgroup.
Switch on-job-exit to accept any pid from the job (except fish itself).
This allows it to be used directly with $last_pid, and it now
works even if job control is off. This is implemented by "resolving" the pid to
the internal job id at the point the event handler is added.
Also switch to passing the last pid of the job, rather than its pgroup.
This aligns better with $last_pid.
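A minimal sketch of the intended usage (the backgrounded command is arbitrary); this now also behaves correctly when job control is off, e.g. in scripts:
```fish
sleep 10 &
function announce --on-job-exit $last_pid
    echo "background job finished"
end
```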
It is possible to run a function when a process exits via `function
--on-process-exit`, or when a job exits via `function --on-job-exit`.
Internally these were distinguished by the pid in the event: if it was
positive, then it was a process exit. If negative, it represented a pgid
and was a job exit. If zero, it fired for both jobs and processes, which is
pretty weird.
Switch to tracking these explicitly. Separate out the --on-process-exit
and --on-job-exit event types into separate types. Stop negating pgids as
well.
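To illustrate the two now-separate event types (handler names are illustrative):
```fish
sleep 5 &

# Fires when that specific process exits.
function on_proc --on-process-exit $last_pid
    echo "process exited"
end

# Fires when the job containing that pid exits.
function on_job --on-job-exit $last_pid
    echo "job exited"
end
```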
This could have been off by one iteration, e.g.
```fish
function on-winch --on-signal winch
echo $LINES
end
```
Resize the terminal and it'll print e.g.
24
Then run `echo $LINES` interactively, and it might give a different answer.
This isn't beautiful, but it works. A better solution might be to make
the termsize vars electric and just always update them on read?
While pid values may be reused, it is logical to assume that fish event
handlers coded against a particular job or process id mean just the job
that is currently referred to by any given pid/pgrp rather than in
perpetuity.
This trims the list of registered event handlers nice and early, and as
a bonus avoids the issue described in #7721.
The cleanup song-and-dance is extremely ugly due to the repeated locking
and unlocking of the event handler list.
Closes #7221.
These are events that have been queued but not yet fired. There's no
reason to modify the events after creating them. Mark them as const
to ensure that doesn't happen.
When fish receives a "cancellation inducing" signal (SIGINT in particular)
it has to unwind execution - for example, while loops or whatever else
is executing. There are two ways this may come about:
1. The fish process received the signal
2. A child process received the signal
An example of the second case is:
some_command | some_function
Here `some_command` is the tty owner and so will receive control-C, but
then fish has to cancel function execution.
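Sketched as a runnable snippet (some_command stands for any external command that reads the terminal):
```fish
function some_function
    while read -l line
        echo $line
    end
end

# some_command owns the tty, so pressing control-C delivers SIGINT to it;
# fish may not receive the signal itself, but it still has to unwind
# some_function and the while loop running inside it.
some_command | some_function
```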
Prior to this change, these were handled uniformly: both would just set a
cancellation signal inside the parser. However in the future we will have
multiple parsers and it may not be obvious which one to set the flag in.
So instead distinguish these cases: if a process receives SIGINT we mark
the signal in its job group, and if fish receives it we set a global
variable.
f8ba0ac5bf introduced a bug where INT handlers would themselves be
cancelled, due to the signal. Defer processing handlers until the
parser is ready to execute more fish script.
Fixes the interactive case of #6649.
The `function --on-job-exit caller` feature allows a command substitution
to observe when the parent job exits. This has never worked very well - in
particular it is based on job IDs, which get recycled, so a function that
observes this may run multiple times. Implement it properly.
Do this by having a not-recycled "internal job id".
This is only used by psub, but ensure it works properly nonetheless.
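The pattern looks roughly like this (names are illustrative, and $tmpfile is assumed to have been set just before, inside the substitution); the function must be defined inside a command substitution, and the handler then fires when the job that performed the substitution exits, which is how psub arranges to delete its temporary file:
```fish
function __cleanup_tmpfile --on-job-exit caller --inherit-variable tmpfile
    # Fires when the job that invoked the enclosing command substitution exits.
    rm -f $tmpfile
end
```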
"job_exit" events, despite their name, can only be created via
the '--on-job-exit caller' misfeature of function. Rename it to make it
clear that this event type is specifically for caller-exit.
This PR is aimed at improving how job ids are assigned. In particular,
previous to this commit, a job id would be consumed by functions (and
thus aliases). Since it's usual to use functions as command wrappers
this results in awkward job id assignments.
For example if the user is like me and just made the jump from vim -> neovim
then the user might create the following alias:
```
alias vim=nvim
```
Previous to this commit, if the user ran `vim` after setting up this
alias, backgrounded it (^Z), and ran `jobs`, then the output might be:
```
Job Group State Command
2 60267 stopped nvim $argv
```
If the user subsequently opened another vim (nvim) session, backgrounded it,
and ran `jobs`, then they might see the following:
```
Job Group State Command
4 70542 stopped nvim $argv
2 60267 stopped nvim $argv
```
These job ids feel unnatural, especially when transitioning away from
e.g. bash where job ids are sequentially incremented (and aliases/functions
don't consume a job id).
See #6053 for more details.
As @ridiculousfish pointed out in
https://github.com/fish-shell/fish-shell/issues/6053#issuecomment-559899400,
we want to elide a job's job id if it corresponds to a single function in the
foreground. This translates to the following prerequisites:
- A job must correspond to a single process (i.e. the job continuation
must be empty)
- A job must be in the foreground (i.e. `&` wasn't appended)
- The job's single process must resolve to a function invocation
If all of these conditions are true then we should mark a job as
"internal" and somehow remove it from consideration when any
infrastructure tries to interact with jobs / job ids.
I saw two paths to implement these requirements:
- At the time of job creation, calculate whether or not a job is
  "internal" and use a separate list of job ids to track their ids.
  Additionally introduce a new flag denoting that a job is internal so
  that e.g. `jobs` doesn't list internal jobs.
  - I started implementing this route but quickly realized I was
    computing the same information that would be computed later on (e.g.
    "is this job a single process" and "is this job's statement a
    function"). Specifically I was computing data that populate_job_process
    would end up computing later anyway. Additionally this added some
    weird complexities to the job system (after the change there were two
    job id lists AND an additional flag that had to be taken into
    consideration).
- Once a function is about to be executed, we release the current job's
  job id if the prerequisites are satisfied (which at this point have
  been fully computed).
  - I opted for this solution since it seems cleaner. In this
    implementation "releasing a job id" is done both by calling
    `release_job_id` and by setting the internal job_id member variable to
    -1. The former operation allows subsequent child jobs to reuse that
    same job id (so e.g. the situation described in Motivation doesn't
    occur), and the latter ensures that no other job / job id
    infrastructure will interact with these jobs, because valid jobs have
    positive job ids. The second operation causes job_id to become
    non-const, which leads to the list of code changes outside of `exec.c`
    (i.e. a codemod from `job_t::job_id` -> `job_t::job_id()` and moving the
    old member variable to a non-const private `job_t::job_id_`).
Note: It's very possible I missed something and setting the job id to -1
will break some other infrastructure, please let me know if so!
I tried to run `make/ninja lint`, but a bunch of unrelated issues
appeared (e.g. `fatal error: 'config.h' file not found`). I did
successfully clang-format (`git clang-format -f`) and run tests, though.
This PR closes #6053.
This adds support for `fish_trace`, a new variable intended to serve the
same purpose as `set -x` as in bash. Setting this variable to anything
non-empty causes execution to be traced. In the future we may give more
specific meaning to the value of the variable.
The user's prompt is not traced unless you run it explicitly. Events are
also not traced because they are noisy; autoloading, however, is.
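For example, tracing can be toggled around a block of interest:
```fish
set -g fish_trace 1      # any non-empty value turns tracing on
for file in *.txt
    wc -l $file
end
set -e fish_trace        # turn tracing back off
```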
Fixes #3427