Assigning the tty is really a function of a job group, not an individual
job. Reflect that in terminal_maybe_give_to_job_group and also
terminal_return_from_job_group.
This reverts commit 3a5585df95.
This reverts a change that removed a lock. It's indeed true that in master,
fish script is bound to the main thread. But I'm working to remove that
limitation, and these locks will be important in that future.
The owning locks were added after the original code and decorated with
comments indicating they are thread-safe, even though they're only ever
used from the main thread. Presuming the intent was to make future
manipulation of the code safer rather than to actually make use of any
thread safety guarantees, these have been wrapped in a new
`thread_exclusive` type which always calls ASSERT_IS_MAIN_THREAD.
The benefit is that this does not perform a syscall to lock a mutex
each time the variables are accessed.
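A minimal sketch of such a wrapper, using a plain assert in place of fish's
ASSERT_IS_MAIN_THREAD (details of the real type may differ):

```
#include <cassert>
#include <thread>

// Captured at startup; assumed to run on the main thread.
static const std::thread::id g_main_thread = std::this_thread::get_id();

// Grants mutex-like access scoping for a value that must only be touched
// from the main thread. No lock is taken, hence no syscall: just an assert.
template <typename T>
class thread_exclusive {
    T value_{};

   public:
    T &acquire() {
        assert(std::this_thread::get_id() == g_main_thread &&
               "thread_exclusive value accessed off the main thread");
        return value_;
    }
};
```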
When fish receives a "cancellation inducing" signal (SIGINT in particular)
it has to unwind execution - for example, while loops or whatever else
is executing. There are two ways this may come about:
1. The fish process received the signal
2. A child process received the signal
An example of the second case is:
some_command | some_function
Here `some_command` is the tty owner and so will receive control-C, but
then fish has to cancel function execution.
Prior to this change, these were handled uniformly: both would just set a
cancellation signal inside the parser. However in the future we will have
multiple parsers and it may not be obvious which one to set the flag in.
So instead distinguish these cases: if a process receives SIGINT we mark
the signal in its job group, and if fish receives it we set a global
variable.
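A rough sketch of the two paths (names and types are illustrative):

```
#include <csignal>

// Case 1: fish itself received the signal: set a process-global flag.
static volatile sig_atomic_t s_cancellation_signal = 0;
void on_fish_sigint(int sig) { s_cancellation_signal = sig; }

// Case 2: a child received the signal (observed when reaping it): record
// the signal on the job group so only that group's execution unwinds.
struct job_group_t {
    volatile sig_atomic_t cancel_signal = 0;
};
void on_child_sigint(job_group_t &group, int sig) { group.cancel_signal = sig; }
```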
After profiling bottlenecks in job execution, the calls to `tcgetpgrp`
were identified as taking a significant share of the execution time. Collecting
metrics on which branches were taken revealed that in all "normal"
cases, there is no benefit to calling `tcgetpgrp` before calling
`tcsetpgrp` as it can instead be called only in the error case to
determine what sort of error handling behavior should be applied.
This makes the best-case scenario of a single syscall much more likely
than in the previous situation.
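A sketch of the resulting pattern, assuming failure is the rare case
(function name and error handling are illustrative):

```
#include <cerrno>
#include <unistd.h>

// Hand the tty to `pgid` directly; in the common case this is one syscall.
// Only on failure do we call tcgetpgrp() to decide how to react.
int give_terminal_to(int tty_fd, pid_t pgid) {
    if (tcsetpgrp(tty_fd, pgid) == 0) return 0;  // fast path
    int err = errno;
    if (tcgetpgrp(tty_fd) == pgid) {
        // Lost a benign race: the pgroup already owns the terminal.
        return 0;
    }
    return err;  // genuine failure; the caller picks the error behavior
}
```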
Initially I wanted to pick a different name to avoid confusion with
process groups, but really job trees *are* process groups. So name them
to reflect that fact.
Also rename "placeholder" to "internal" which is clearer.
Prior to this, jobs all had a pgid, and fish had to work hard to ensure
that pgids were inherited properly for nested jobs. But now the job tree
is the source of truth and there is only one location for the pgid.
Job trees come in two flavors: "placeholders" for jobs which contain only
fish functions, and non-placeholders which need to track a pgid. This adds
logic to let a job decide whether its parent's job tree is appropriate,
allocating a new tree if not.
job_tree represents the data that should be shared between a job and any
jobs that may be spawned by functions or eval run as part of that job. It
reifies shared data that was previously handled piecemeal.
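A simplified sketch of the shared state such a tree might carry (fields are
illustrative):

```
#include <sys/types.h>

// Shared between a job and any jobs spawned by functions or eval it runs.
struct job_tree_t {
    pid_t pgid = -1;    // the single authoritative pgid, or -1 if unassigned
    bool placeholder;   // true if the tree will only ever hold fish functions
    // ...other reified state (e.g. job control flags) would live here.
};
```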
When sending SIGCONT to a stopped job, this now behaves a bit more like
a job that was continued by the bg builtin; bg uses job_t::continue_job,
which seems like overkill here.
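A sketch of that lighter-weight resume, assuming the job's pgid is at hand:

```
#include <csignal>
#include <sys/types.h>

// Resume a stopped job by signaling its process group directly, much like
// bg ultimately does, without the full job_t::continue_job machinery.
void resume_job(pid_t pgid) {
    killpg(pgid, SIGCONT);  // the job reports itself running on the next reap
}
```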
The default implementation will not print any output in that case, but this provides users with additional flexibility when it comes to customising the shell's behaviour.
This allows users to customise the behaviour of the shell by redefining
the function, similar to how fish_title or fish_greeting behave, where the
default implementation can be easily overridden.
The function receives as arguments the job id, command line, signal name,
and signal description.
Prior to this fix, if job control is enabled but stdin is not a tty, we
would return an error from terminal_maybe_give_to_job which would cause us
to avoid waiting for the job. Instead just return notneeded.
Fixes #6573.
Fixes #6830
For some reason, with this change, typing "vi", Control-Z, and 2 x Control-D
results in the cursor not moving correctly, but this only seems to happen
when starting fish from a fish that doesn't have this fix. I hope that is
a temporary glitch.
55e3270 introduced a regression where we would remove all completed
jobs. But jobs that want to print a status message get skipped, so
the status message (and associated event handlers) might not get run.
Fix this by making it explicit which jobs are safe to process, and which
should be skipped.
Fixes #6679.
Mostly line breaks, one instance of tabs!
For some reason clang-format insists on two spaces before a same-line comment?
(I continue to be unimpressed with super-strict line length limits,
but I continue to believe in automatic styling, so it is what it is)
[ci skip]
The `function --on-job-exit caller` feature allows a command substitution
to observe when the parent job exits. This has never worked very well - in
particular it is based on job IDs, so a function that observes this will
run multiple times. Implement it properly.
Do this by having a not-recycled "internal job id".
This is only used by psub, but ensure it works properly nonetheless.
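A sketch of what a non-recycled id can look like (the real implementation
may differ):

```
#include <atomic>
#include <cstdint>

// User-visible job ids are small and recycled; internal job ids are never
// reused, so an observer can reliably tell one job from another.
using internal_job_id_t = uint64_t;

internal_job_id_t acquire_internal_job_id() {
    static std::atomic<internal_job_id_t> s_next{0};
    return ++s_next;  // monotonically increasing, never recycled
}
```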
"job_exit" events, despite their name, can only be created via
the '--on-job-exit caller' misfeature of function. Rename it to make it
clear that this event type is specifically for caller-exit.
fish has some unprincipled code that attempts to tcsetpgrp() to own the
terminal before running a builtin; this was added because 'read' might
want to read from the terminal. I added this code before fully
understanding how process groups and terminals work. A better fix would
be to ensure that fish is marked as the pgroup leader in the job when
the builtin is the first process in the job, and we do that now.
Courageously back out the changes to grab the terminal; see #5147 and
also #5133.
Introduce pgroup_provenance_t, a type which captures "where the pgroup
comes from." This centralizes some logic around how pgroups are
assigned, and it anticipates concurrent execution.
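For illustration, such a type might enumerate cases like these (the variants
are assumed, not copied from the source):

```
// Where a job's pgroup comes from; centralizing this replaces scattered
// ad-hoc checks at job launch.
enum class pgroup_provenance_t {
    unassigned,           // not yet decided
    fish_itself,          // the job runs in fish's own process group
    first_external_proc,  // the first external process becomes the leader
    lineage,              // inherited from the parent job's pgroup
};
```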
This PR is aimed at improving how job ids are assigned. In particular,
prior to this commit a job id would be consumed by functions (and
thus aliases). Since it's usual to use functions as command wrappers
this results in awkward job id assignments.
For example, if the user is like me and just made the jump from vim -> neovim,
then they might create the following alias:
```
alias vim=nvim
```
Prior to this commit, if the user ran `vim` after setting up this
alias, backgrounded it (^Z), and ran `jobs`, then the output might be:
```
Job  Group  State    Command
2    60267  stopped  nvim $argv
```
If the user subsequently opened another vim (nvim) session, backgrounded it,
and ran `jobs`, then they might see the following:
```
Job  Group  State    Command
4    70542  stopped  nvim $argv
2    60267  stopped  nvim $argv
```
These job ids feel unnatural, especially when transitioning away from
e.g. bash where job ids are sequentially incremented (and aliases/functions
don't consume a job id).
See #6053 for more details.
As @ridiculousfish pointed out in
https://github.com/fish-shell/fish-shell/issues/6053#issuecomment-559899400,
we want to elide a job's job id if it corresponds to a single function in the
foreground. This translates to the following prerequisites:
- A job must correspond to a single process (i.e. the job continuation
must be empty)
- A job must be in the foreground (i.e. `&` wasn't appended)
- The job's single process must resolve to a function invocation
If all of these conditions are true then we should mark a job as
"internal" and somehow remove it from consideration when any
infrastructure tries to interact with jobs / job ids.
I saw two paths to implement these requirements:
- At the time of job creation calculate whether or not a job is
"internal" and use a separate list of job ids to track their ids.
Additionally introduce a new flag denoting that a job is internal so
that e.g. `jobs` doesn't list internal jobs
- I started implementing this route but quickly realized I was
computing the same information that would be computed later on (e.g.
"is this job a single process" and "is this jobs statement a
function"). Specifically I was computing data that populate_job_process
would end up computing later anyway. Additionally this added some
weird complexities to the job system (after the change there were two
job id lists AND an additional flag that had to be taken into
consideration)
- Once a function is about to be executed we release the current job's
job id if the prerequisites are satisfied (which at this point have
been fully computed).
- I opted for this solution since it seems cleaner. In this
implementation "releasing a job id" is done by both calling
`release_job_id` and by marking the internal job_id member variable to
-1. The former operation allows subsequent child jobs to reuse that
same job id (so e.g. the situation described in Motivation doesn't
occur), and the latter ensures that no other job / job id
infrastructure will interact with these jobs because valid jobs have
positive job ids. The second operation causes job_id to become
non-const which leads to the list of code changes outside of `exec.c`
(i.e. a codemod from `job_t::job_id` -> `job_t::job_id()` and moving the
old member variable to a non-const private `job_t::job_id_`)
Note: It's very possible I missed something and setting the job id to -1
will break some other infrastructure, please let me know if so!
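A heavily simplified sketch of the release step (names and fields here are
illustrative, not the actual ones):

```
// Stand-in for fish's real job type.
struct job_t {
    int job_id_ = 0;  // -1 once released; valid jobs have positive ids
    bool foreground = false;
    bool single_function_process = false;  // stands in for the real checks
    int job_id() const { return job_id_; }
};

// Returns the id to the free pool so child jobs can reuse it (assumed).
void release_job_id(int id);

void maybe_release_job_id(job_t &j) {
    if (j.foreground && j.single_function_process && j.job_id() > 0) {
        release_job_id(j.job_id());
        j.job_id_ = -1;  // now invisible to `jobs` and the job-id machinery
    }
}
```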
I tried to run `make/ninja lint`, but a bunch of non-relevant issues
appeared (e.g. `fatal error: 'config.h' file not found`). I did
successfully clang-format (`git clang-format -f`) and run tests, though.
This PR closes #6053.
Prior to this fix, a job would hold onto any IO redirections from its
parent. For example:
    begin
        echo a
    end < file.txt
The "echo a" job would hold a reference to the I/O redirection.
The problem is that jobs then extend the life of pipes until the job is
cleaned up. This can prevent pipes from closing, leading to hangs.
Fix this by not storing the block IO; this ensures that jobs do not
prolong the life of pipes.
Fixes #6397
Currently a job needs to know three things about its "parents":
1. Any IO redirections for the block or function containing this job
2. The pgid for the parent job
3. Whether the parent job has been fully constructed (to defer self-disown)
These are all tracked in somewhat separate awkward ways. Collapse them
into a single new type job_lineage_t.
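A sketch of the collapsed type, with fish's IO types stood in by standard
containers:

```
#include <memory>
#include <optional>
#include <sys/types.h>
#include <vector>

struct io_data_t {};  // stand-in for fish's redirection type
using io_chain_t = std::vector<std::shared_ptr<io_data_t>>;

// Everything a job needs to know about its "parents", in one place.
struct job_lineage_t {
    io_chain_t block_io;               // 1. redirections of the enclosing block
    std::optional<pid_t> parent_pgid;  // 2. pgid of the parent job, if any
    // 3. set once the parent job is fully constructed, so the child can
    //    defer disowning itself until then.
    std::shared_ptr<const bool> parent_constructed;
};
```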
We used to have a global notion of "is the shell interactive" but soon we
will want to have multiple independent execution threads, only some of
which may be interactive. Start tracking this data per-parser.
This makes the following changes:
1. Events in background threads are executed in those threads, instead of
being silently dropped
2. Blocked events are now per-parser instead of global
3. Events are posted in builtin_set instead of within the environment stack
The last one means that we no longer support event handlers for implicit
sets (for example, of `argv`). Instead only the `set` builtin (and also `cd`)
post variable-change events.
Events from universal variable changes are still not fully rationalized.
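For illustration, the third change amounts to something like this (types and
names are stand-ins; fish's real event machinery differs):

```
#include <string>

struct parser_t;                        // stand-in for fish's parser
struct event_t { std::wstring name; };  // stand-in variable-change event

void event_fire(parser_t &parser, const event_t &evt);  // assumed dispatcher

// The `set` builtin mutates the variable and then posts the event itself;
// the environment stack no longer fires events implicitly.
void builtin_set_notify(parser_t &parser, const std::wstring &var_name) {
    event_fire(parser, event_t{var_name});
}
```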
This widens the remaining ones that don't take a char
anywhere.
The rest either use a char _variable_ or __FUNCTION__, which from my
reading is narrow and needs to be widened manually. I've been unable
to test it, though.
See #5900.
Now that our interactive signal handlers are a strict superset of
non-interactive ones, there is no reason to "reset" signals or take action
when becoming non-interactive. Clean up how signal handlers get installed.
This runs build_tools/style.fish, which runs clang-format on C++, fish_indent on fish, and (new) black on Python.
If anything is wrong with the formatting, we should fix the tools, but automated formatting is worth it.