Also check if the capability itself is actually defined, not just the
cur_term proxy. In #5371, we figured out that there are terminfo entries
without this capability, so this would have caused a NULL dereference.
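A minimal sketch of the guard, using enter_standout_mode as a stand-in
for the capability in question:

    #include <curses.h>
    #include <stdio.h>
    #include <term.h>

    // Sketch only: cur_term being non-NULL does not imply a given terminfo
    // capability exists, so test the capability pointer itself.
    static void enter_standout_if_supported(void) {
        if (cur_term == NULL) return;             // no terminal at all
        if (enter_standout_mode == NULL) return;  // entry lacks the capability
        tputs(enter_standout_mode, 1, putchar);
    }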
... rather than hard-coding it to "fish". This affects
what is found in $_ and improves the error messages:
For example, if fish was run as ./fish, instead of
something like:
fish: Expected 3 surprises, only got 2 surprises
we'll see:
./fish: Expected 3 surprises, only got 2 surprises
like most other shell utilities. It's just a tiny bit
of detail that can avoid confusion.
Now jobs are aware of their parent jobs, and can interrogate those jobs,
to determine if every job in the chain is fully constructed.
Remove flags and the static stacks that manipulated them.
The parent of a job is the parent pipeline that executed the function or
block corresponding to this job. This will help simplify
process_mark_finished_children().
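A sketch of the parent-chain interrogation, with hypothetical member
names (the real job_t carries far more state):

    #include <memory>

    struct job_t {
        bool constructed = false;       // true once every process is launched
        std::shared_ptr<job_t> parent;  // pipeline that ran our function/block

        // Walk up the chain: the job is usable only if every ancestor is
        // fully constructed.
        bool chain_is_fully_constructed() const {
            for (const job_t *j = this; j != nullptr; j = j->parent.get()) {
                if (!j->constructed) return false;
            }
            return true;
        }
    };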
select_try() returned IO_ERROR to indicate that there are no file
descriptors from which to read. Name this return value properly.
Also migrate this type into proc.cpp since it's not used outside of
that file.
This is the opposite case from the usual "pipe into grep-the-function":
here my `pbpaste` emitted a lot of content, exceeding the OS pipe
buffer. The `block_on_fg` condition was just `send_sigcont` in the
original job control rewrite, and it was incorrect to substitute
WAIT_BY_PROCESS for it on its own.
However, this requires always blocking when select_try returns an
interrupted/incomplete read, or else fish doesn't block and stays running
in a tight loop in the background (and, under higher debug levels,
incorrectly writes to a terminal it doesn't own), which I *think* is OK.
This was introduced in 1b1bc28c0a but did
not cause any problems until the job control refactor, which caused it
to attempt to signal the calling `exec` builtin's own (invalid) pgrp
with SIGHUP.
Also improve debugging for `j->signal()` failures by printing the
signal we tried sending in case of error, rename the function to
`hup_background_jobs`, and move it from `reader.h`/`reader.cpp` to
`proc.h`/`proc.cpp`.
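A hedged sketch of the reworked pieces (names from this message, bodies
illustrative):

    #include <errno.h>
    #include <signal.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/types.h>
    #include <vector>

    struct job_t {
        pid_t pgid;
        bool foreground;
        // Improved diagnostics: report which signal failed to send.
        void signal(int sig) const {
            if (killpg(pgid, sig) == -1) {
                fprintf(stderr, "killpg(%d, %s): %s\n", (int)pgid,
                        strsignal(sig), strerror(errno));
            }
        }
    };

    // Now lives in proc.cpp: HUP every job that isn't in the foreground.
    void hup_background_jobs(const std::vector<job_t> &jobs) {
        for (const job_t &j : jobs)
            if (!j.foreground) j.signal(SIGHUP);
    }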
When a function is encountered by exec_job, a new context is created for
its execution from the ground up, with a new job and all, ultimately
resulting in a recursive call to exec_job from the same (main) thread.
Since each time exec_job encounters a new job with external commands
that needs terminal control, it creates a new pgrp and gives it control
of the terminal (tcsetpgrp & co), this effectively takes control away
from the previously spawned external commands which may be (and likely
are) expecting to still have terminal access.
This commit attempts to detect when such a situation arises by handling
recursive calls to exec_job (which can only happen if the pipeline
included a function) by borrowing the pgrp from the (necessarily still
active) parent job and spawning new external commands into it.
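A sketch of the borrow decision (parent pointer and sentinel value are
assumptions):

    #include <sys/types.h>

    struct job_t {
        pid_t pgid = -1;          // -1: no process group assigned yet
        job_t *parent = nullptr;  // set when exec_job recursed via a function
    };

    // Reuse the parent pipeline's process group instead of creating a new
    // one and taking the terminal away from it.
    pid_t pgroup_for_job(const job_t *j) {
        if (j->parent != nullptr && j->parent->pgid != -1)
            return j->parent->pgid;  // borrow: parent is necessarily alive
        return -1;  // first external process becomes the group leader
    }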
When a parent job spawns new jobs due to the evaluation of a new
function (which shouldn't be the case in the first place), we end up
with two distinct jobs sharing one pgrp (to fix #3952). This can lead to
early termination of a pgrp if finished parent job children are reaped
before future processes in either the parent or future child jobs can
join it.
While the parent job is under construction, require that waitpid(2)
calls for the child job be done by process id and not job pgrp.
Closes #3952.
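Roughly, the waitpid(2) restriction looks like this (a sketch; flags and
names are assumptions):

    #include <sys/types.h>
    #include <sys/wait.h>

    // While the parent job is under construction, its pgrp may still gain
    // members; waiting on the group could reap a finished sibling and let
    // the pgrp die early, so wait on the specific pid instead.
    pid_t wait_for(pid_t child_pid, pid_t pgid, bool parent_constructed) {
        int status;
        if (!parent_constructed)
            return waitpid(child_pid, &status, WUNTRACED);  // by process id
        return waitpid(-pgid, &status, WUNTRACED);          // by process group
    }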
Use SIGCHLD to determine whether or not waitpid(2) calls can be elided,
but only with extreme caution. If we receive SIGCHLD but are not able to
reap all jobs, we need to iterate through them again.
For this to work, we need to make sure that we reap all children that we
can reap after a SIGCHLD, i.e. it's not OK to just reap the first and
return or else we can never clear the dirty state flag.
In all cases, as expensive as a call to waitpid() may be, if a child
process is available for reaping it is always cheaper to wait on and
reap it than to call select_try() and end up timing out.
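A sketch of the elision, assuming a dirty flag set by the handler and
ignoring the extra wrinkle that children of unconstructed jobs must be
skipped:

    #include <signal.h>
    #include <sys/wait.h>

    static volatile sig_atomic_t s_sigchld_dirty = 0;

    static void handle_sigchld(int sig) { (void)sig; s_sigchld_dirty = 1; }

    void reap_finished_children(void) {
        if (!s_sigchld_dirty) return;  // no SIGCHLD since the last full pass
        s_sigchld_dirty = 0;           // clear first; a new SIGCHLD re-dirties
        int status;
        // Reap everything reapable: stopping after the first child could
        // strand others behind an already-cleared flag.
        while (waitpid(-1, &status, WNOHANG | WUNTRACED) > 0) {
            // ...mark the matching process as completed...
        }
    }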
The old code was rather haphazard with regards to error control, and
would make mutable changes before operations that could fail without any
viable error handling options.
Convert `select_try()` to return a well-defined enum describing its
state, and handle each of the three possible cases with clear reasons
why we are blocking or not blocking in each subsequent call to
`process_mark_finished_children()`.
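A sketch of the resulting shape (enumerator names are illustrative):

    // select_try() now reports a well-defined state instead of IO_ERROR.
    enum class select_try_t {
        DATA_READY,     // an fd is readable: service it, don't block
        TIMEOUT,        // nothing readable yet: safe to block in waitpid()
        IOCHAIN_EMPTY,  // no fds to read from at all: wait by process instead
    };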
* Use the newly-introduced signal_block_t RAII wrapper
* Remove EINTR loops as all signals are blocked
* Clean up control flow thanks to RAII wrappers
* Rename parameter to clarify what it does and update docs accordingly
* Update outdated comments referencing SIGSTOP code that was removed a
long time ago.
* Remove no-op CHECK_BLOCK() call
* Convert JOB_* enums to scoped enums
* Convert standalone job_is_* functions to member functions
* Convert standalone job_{promote, signal, continue} to member functions
* Convert standalone job_get{,_from_pid} to `job_t` static functions
* Reduce usage of JOB_* enums outside of proc.cpp by using new
`job_t::is_foo()` const helper methods instead.
This patch is only a refactor and should not change any functionality or
behavior (both observed and unobserved).
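In miniature, the pattern is (flag storage hypothetical):

    #include <sys/types.h>

    struct job_t {
        // Scoped enum replaces the old global JOB_* constants.
        enum class flag_t { FOREGROUND, CONSTRUCTED, NOTIFIED };

        bool is_foreground() const { return get_flag(flag_t::FOREGROUND); }
        bool is_constructed() const { return get_flag(flag_t::CONSTRUCTED); }

        static job_t *from_pid(pid_t pid);  // was standalone job_get_from_pid()

      private:
        bool get_flag(flag_t f) const;  // storage details omitted
    };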
* Debug level 3: describe all commands being executed (this is, after all,
a shell and one can argue that this is the most important debug
information available)
* Debug level 4: details of execution, mainly fork vs no-fork and io
handling
Also introduced j->preview() to print a short descriptor of the job
based on the head of the first process so we don't overwhelm with
needless repetition, but also so that we don't have to rely on
distinguishing between repeated, non-unique/non-monotonic job ids that
are often recycled within a single "execution cycle" (pressing enter
once).
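A sketch of the preview() idea (member names are illustrative):

    #include <string>

    struct process_t { std::wstring argv0; };

    struct job_t {
        const process_t *first_process = nullptr;
        // Identify the job by the head of its first process rather than by
        // a recycled, non-monotonic job id.
        std::wstring preview() const {
            return first_process ? first_process->argv0 + L" ..." : L"<empty>";
        }
    };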
Per @ridiculousfish's suggestions in #5219,
`process_mark_finished_children()` has been updated to work in an
easier-to-follow manner. Its behavior is now straightforward: it always
checks for finished processes but only blocks if `block_on_fg` is true.
We're not using the SIGCHLD count in s_sigchld_generation_cnt for
anything any more, as it's not actually a reliable metric since we can
experience one SIGCHLD as a result of two processes exiting (see #1768),
but only reap one of them if the other is in a not-fully-constructed job
(see #5219), a state we cannot possibly detect without calling
`waitpid()` on all child processes, which we are explicitly avoiding.
* Instead of reaping all child processes when we receive a SIGCHLD, try
reaping only processes belonging to process groups from fully-constructed
jobs, which should eliminate the need for the keepalive process
entirely (WSL's lack of zombies notwithstanding) as now
completed processes are not reaped until the job has been fully
constructed (i.e. all processes launched), which means their process
group should still be around for new processes to join.
* When `tcgetpgrp()` calls return 0, attempt to `tcsetpgrp()` before
invoking failure handling code.
* When forking a builtin and not running interactively, do not bail if
unable to set/restore terminal attributes.
Fixes #4178. Fixes #3805. Fixes #5210.
Fix#5133 changed builtins to acquire the terminal, but this regressed
caused fish to be stopped when running in background via `sudo fish`.
Fix this by only acquiring the terminal if the terminal was owned by the
builtin's pgroup.
Fixes #5147
When running a builtin, if we are an interactive shell and stdin is a tty,
then acquire ownership of the terminal via tcsetpgrp() before running the
builtin, and set it back after.
Fixes #4540
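A sketch combining this behavior with the #5147 fix above (function name
and exact checks are assumptions; the real code also consults whether
the shell is interactive):

    #include <sys/types.h>
    #include <unistd.h>

    // Only take the terminal when the builtin's pgroup currently owns it;
    // grabbing it while backgrounded (e.g. under `sudo fish`) gets the
    // shell stopped.
    void run_builtin_owning_tty(pid_t job_pgid, void (*run_builtin)(void)) {
        bool transfer = isatty(STDIN_FILENO) &&
                        tcgetpgrp(STDIN_FILENO) == job_pgid;
        if (transfer) tcsetpgrp(STDIN_FILENO, getpgrp());  // acquire
        run_builtin();
        if (transfer) tcsetpgrp(STDIN_FILENO, job_pgid);   // hand it back
    }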
While supported by gcc and clang, \e is a gcc-specific extension and not
formally defined in the C or C++ standards.
See [0] for a list of valid escapes.
[0]: https://stackoverflow.com/a/10220539/17027
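For instance:

    // Portable spellings of ESC: '\x1b' or '\033'; '\e' is GNU-only.
    static const char clear_to_eol[] = "\x1b[K";  // was "\e[K"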
Turns out the process-exit event is only ever used in conjunction with
`%self`. Make that explicit by just adding a new "fish_exit" event,
and deprecate the general process-exit machinery.
Fixes #4700.
This concerns block nodes with redirections, like
begin ... end | grep ...
Prior to this fix, we passed in a pointer to the node. Switch to passing
in the tnode and parsed source ref. This improves type safety and better
aligns with the function-node plans.
There were several issues with the way that the include tests for curses.h
were being done that were ultimately causing fish to use the headers from
ncurses but link against curses on platforms that provide an actual
libcurses.so that isn't just a symlink to libncurses.so.
In particular, the old code was first testing for curses' curses.h and then
falling back to libncurses's implementation of the same - but that logic was
reversed when it came to including term.h, in which case it was testing for
the ncurses term.h and falling back to the curses.h header. Long story short,
cmake will link against libcurses.so if both libcurses.so and libncurses.so
are present (unless CURSES_NEED_NCURSES evaluates to TRUE, but that makes
ncurses a hard requirement), yet we were bringing in some of the defines
from the ncurses headers, causing SIGSEGV panics when fish ultimately
tried to access variables that weren't exported or were mapped to undefined
areas of memory in the other library.
Additionally it is an error to include termios.h prior to including the plain
Jane curses.h (not ncurses/curses.h), causing errors about unimplemented types
SGTTY/chtype. So far as I can tell, both curses.h and ncurses/curses.h pull in
termios.h themselves so it shouldn't even be necessary to manually include it,
but I have just moved its #include below that of curses.h
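The resulting include order, sketched (the HAVE_* guards stand in for
whatever the build system defines):

    // curses.h must precede term.h, and termios.h (if it is needed at all)
    // must come after curses.h, or plain libcurses trips over SGTTY/chtype.
    #if defined(HAVE_NCURSES_CURSES_H)
    #include <ncurses/curses.h>
    #else
    #include <curses.h>
    #endif
    #include <term.h>
    #include <termios.h>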
No longer using RAII wrappers around pthread_mutex_t and pthread_cond_t
in favor of the C++11 std::mutex, std::recursive_mutex, and
std::condition_variable data types.
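For example, a wrapped mutex/condition pair reduces to:

    #include <condition_variable>
    #include <mutex>

    std::mutex lock;               // was pthread_mutex_t plus RAII wrapper
    std::condition_variable cond;  // was pthread_cond_t plus RAII wrapper

    void signal_ready(bool &ready) {
        {
            std::lock_guard<std::mutex> guard(lock);  // unlocks at scope exit
            ready = true;
        }
        cond.notify_one();
    }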
There is no more race condition between parent and child with
regards to setting the process groups. Each child sets it for themselves
and then blocks indefinitely until the parent does what it needs to for
them (having waited for them to set their process groups). They are not
SIGCONT'd until the next process in the chain (if any) starts so that
that process can join their process group and open the pipes.
Setting the process group in a fork/exec scenario is a well-documented
race condition in pretty much any job control mechanism [0] [1]. The
Wikipedia article contradicts the glibc article and suggests that the
best approach is for the parent to wait for the child to become the
process group leader, while the glibc article suggests that both should
make it so (which is what fish did previously). However, I'm running
into cases where tcsetpgrp is causing an EPERM error, which it isn't
documented to do except if the session id for the calling process
differs from that of the target process group (which is never the case
in fish since they are all part of the same session), which should cause
a _different_ error (SIGTTOU to be sent to all members of the calling
process' group).
In all cases, this is easily remedied by checking if the process group
in question is already in control of the terminal. There's still the
off-chance that in the time between we check that and the time that the
command completes that situation may have changed, but the parent
process is supposed to ignore the result of this call if it errors out.
[0]: https://en.wikipedia.org/wiki/Process_group
[1]: https://www.gnu.org/software/libc/manual/html_node/Launching-Jobs.html
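The remedy, sketched (hypothetical name, modeled on the description
above):

    #include <sys/types.h>
    #include <unistd.h>

    // Returns true on success or when the group already owns the terminal.
    // A failure after the check may just mean we lost the race again; the
    // caller is expected to ignore it in that case.
    bool give_terminal_to(pid_t pgid, int tty_fd) {
        if (tcgetpgrp(tty_fd) == pgid) return true;  // already in control
        return tcsetpgrp(tty_fd, pgid) == 0;
    }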
PR #3691 made most calls to `signal_block()` and `signal_unblock()`
no-ops unless a magic env var is set when fish starts running. It's
been seven months since that change was made and no problems have been
reported. This finishes that work by removing those no-op function calls
and support for the magic env var in our next major release (which won't
happen till at least six months from now).
This implements `status is-breakpoint` that returns true if the current
shell prompt is displayed in the context of a `breakpoint` command.
This also fixes several bugs. Most notably, it makes `breakpoint` a no-op
if the shell isn't interactive. Also, typing `breakpoint` at an interactive
prompt should be an error rather than creating a new nested debugging
context.
Partial fix for #1310
This came up in the context of issue #4068. This change makes it more
likely that the correct translation from English to another language
will be done for the "Job ... has {ended,stopped}" message.
0 is not a good default PGID, because it's possible for a kernel process
to have the PGID of 0 under Linux.
This meant that job_get_from_pid could return incorrect jobs, as the PGID
for internal, non-forked jobs was the same as kernel processes.
Avoid this by using an invalid PGID as the initial PGID.
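Sketched, with a hypothetical constant:

    #include <sys/types.h>

    // Unlike 0, which kernel processes on Linux can carry, no real process
    // group can have this id.
    constexpr pid_t INVALID_PGID = -2;

    struct job_t {
        pid_t pgid = INVALID_PGID;  // was 0, which job_get_from_pid could match
    };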
It is possible for fish to not be the process group leader; avoid
signalling the process group containing the current process by checking
with getpgrp() rather than assuming that getpid() is enough.
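That is, the guard compares against the group, not the pid:

    #include <sys/types.h>
    #include <unistd.h>

    // fish may not be its group's leader, so getpid() is the wrong test.
    bool pgid_is_our_own(pid_t pgid) {
        return pgid == getpgrp();  // not: pgid == getpid()
    }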
This is the next step in determining whether we can disable blocking
signals without a good reason to do so. This makes not blocking signals
the default behavior. If someone finds a problem they can add this to
their ~/.config/fish/config.fish file:
set FISH_NO_SIGNAL_BLOCK 0
Alternatively set that env var before starting fish. I won't be surprised
if people report problems. Till now we have relied on people opting in
to this behavior to tell us whether it causes problems. This makes the
experimental behavior the default that has to be opted out of. This will
give us a lot more confidence this change doesn't cause problems before
the next minor release.
Note that there are still a few places where we force blocking of
signals. Primarily to keep SIGTSTP from interfering with the shell in
response to manipulating the controlling tty. Bash is more selective
in the signals it blocks around the problematic syscalls (c.f., its
`give_terminal_to()` function). However, I don't see any value in that
refinement.
I recently upgraded the software on my macOS server and was dismayed to
see that cppcheck reported a huge number of format string errors due to
mismatches between the format string and its arguments from calls to
`assert()`. It turns out they are due to the macOS header using `%lu`
for the line number, which is obviously wrong since it is using the C
preprocessor `__LINE__` symbol, which evaluates to a signed int.
I also noticed that the macOS implementation writes to stdout, rather
than stderr. It also uses `printf()` which can be a problem on some
platforms if the stream is already in wide mode which is the normal case
for fish.
So implement our own version of `assert()`. This also eliminates
double-negative warnings that we get from some of our calls to
`assert()` on some platforms by oclint.
Also reimplement the `DIE()` macro in terms of our internal
implementation.
Rewrite `assert(0 && msg)` statements to `DIE(msg)` for clarity and to
eliminate oclint warnings about constant expressions.
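A sketch of the pair (exact formatting and macro names are illustrative):

    #include <stdio.h>
    #include <stdlib.h>
    #include <wchar.h>

    // Always write to stderr, always via the wide functions, and use %d for
    // __LINE__, which is a signed int.
    #define DIE(msg)                                                      \
        do {                                                              \
            fwprintf(stderr, L"%s:%d: fatal error: %s\n", __FILE__,       \
                     __LINE__, (msg));                                    \
            abort();                                                      \
        } while (0)

    #define FISH_ASSERT(cond) \
        do {                  \
            if (!(cond)) DIE("assertion failed: " #cond); \
        } while (0)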
Fixes #3276, albeit not in the fashion I originally envisioned.
Commit ab189a75 introduced a regression where we no longer break out
of loops in response to a child death via a signal. Fix that regression.
Also introduce a test to help ensure we don't regress in the future.
Fixes #3780
The shell was doing a lot of signal blocking/unblocking that hurts
performance and can be avoided. This reduced the elapsed time for a
simple benchmark by 25%.
Partial fix for #2007
We should never use stdio functions that use stdout implicitly. Saving a
few characters isn't worth the inconsistency. Too, using the forms such
as `fwprintf()` which take an explicit stream makes it easier to find
the places we write to stdout versus stderr.
Fixes #3728
On some platforms, notably GNU libc, you cannot mix narrow and wide
stdio functions on a stream like stdout or stderr. Doing so will drop
the output of one or the other. This change makes all output to the
stderr stream consistently use the wide forms.
This change also converts some fprintf(stderr,...) calls to debug()
calls where appropriate.
Fixes #3692
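In practice, output goes through the wide, explicit-stream forms
everywhere:

    #include <stdio.h>
    #include <wchar.h>

    // Once stderr's orientation is wide, a narrow fprintf() to the same
    // stream may be silently dropped (notably on GNU libc).
    void report_error(const wchar_t *msg) {
        fwprintf(stderr, L"fish: %ls\n", msg);  // not fprintf(stderr, ...)
    }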
Using `\e` is clearer and shorter than `\x1b`. It's also consistent with how
we write related control chars; e.g., we don't write `\x0a`, we write `\n`.
If an interactive shell has its tty invalidated attempts to write to
stdout or stderr can trigger this bug:
https://sourceware.org/bugzilla/show_bug.cgi?id=20632
Avoid that by reopening the stdio streams on /dev/null if we're getting
an ENOTTY error when trying to do things like give or take ownership of
the tty.
This includes some unrelated style cleanups but including them seems
reasonable.
Fixes #3644
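Sketch of the recovery path (helper name is an assumption):

    #include <stdio.h>
    #include <unistd.h>

    // If the tty has been invalidated, point the stdio streams at /dev/null
    // so later writes can't trip glibc bug 20632.
    static void redirect_tty_to_devnull(void) {
        if (!isatty(STDIN_FILENO)) freopen("/dev/null", "r", stdin);
        if (!isatty(STDOUT_FILENO)) freopen("/dev/null", "w", stdout);
        if (!isatty(STDERR_FILENO)) freopen("/dev/null", "w", stderr);
    }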
If an interactive job is started, and it is reaped within fish's
exit handler, we may attempt to print its status message after
cur_term has been set to NULL. This results in a crash.
This change makes fish only print the status message if cur_term is
not NULL.
Fixes #3222
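The guard is as simple as it sounds (sketch; message formatting omitted):

    #include <curses.h>
    #include <stdio.h>
    #include <term.h>

    void print_job_status(const char *msg) {
        if (cur_term == NULL) return;  // tty torn down during exit handling
        fputs(msg, stdout);
    }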