Prior to this fix, an "event" served both as a predicate describing which
events to match and as the event itself. Re-express these concepts
distinctly: an event is something that happened; an event_handler holds the
predicate and the name of the function to execute.
Prior to this fix, fish had a signal_list_t that accumulated signals.
Signals were added to an array of integers, with an overflow flag.
The event machinery would attempt to atomically "swap in" a second list.
After this fix, there is a single list of pending signal events, as an array
of atomic booleans. The signal handler sets the boolean corresponding to its
signal.
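For illustration, the pending-signal list now has roughly this shape (a
sketch with assumed names, not fish's actual identifiers; NSIG is the
common non-standard signal-count macro):

    #include <atomic>
    #include <csignal>

    // One flag per signal; set by the handler, consumed by the main loop.
    static std::atomic<bool> s_pending_signals[NSIG];

    extern "C" void fish_signal_handler(int sig) {
        // Async-signal-safe so long as the atomics are lock-free: the
        // handler does nothing but store into a flag.
        s_pending_signals[sig].store(true, std::memory_order_relaxed);
    }

    // Called outside handler context, e.g. from the main loop.
    void dispatch_pending_signal_events() {
        for (int sig = 0; sig < NSIG; sig++) {
            if (s_pending_signals[sig].exchange(false, std::memory_order_relaxed)) {
                // ...run every event_handler whose predicate matches sig...
            }
        }
    }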
This uses the new internal process mechanism to write output for builtins.
After this, the only reason fish ever forks is to execute external processes.
This introduces "internal processes" which are backed by a pthread instead
of a normal process. Internal processes are reaped using the topic
machinery, plugging in neatly alongside the sigchld topic; this means that
process_mark_finished_children() can wait for internal and external
processes simultaneously.
Initially internal processes replace the forked process that fish uses to
write out the output of blocks and functions.
The sigchld generation expresses the idea that, if we receive a sigchld
signal, the generation will be different than when we last recorded it. A
process cannot exit before it has launched, so check the generation count
before process launch. This is an optimization that reduces failing
waitpid calls.
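A sketch of the generation check (names assumed; the real code ties into
the topic machinery):

    #include <atomic>
    #include <cstdint>

    static std::atomic<uint64_t> s_sigchld_generation{0};

    // Bumped on the SIGCHLD path.
    void note_sigchld() {
        s_sigchld_generation.fetch_add(1, std::memory_order_relaxed);
    }

    struct reaper_t {
        uint64_t gen_at_last_poll = 0;

        // True if a SIGCHLD may have arrived since the last poll. If false,
        // every waitpid(WNOHANG) call is guaranteed to come back empty:
        // in particular for a process that has not even launched yet.
        bool should_poll() {
            uint64_t gen = s_sigchld_generation.load(std::memory_order_relaxed);
            if (gen == gen_at_last_poll) return false;
            gen_at_last_poll = gen;
            return true;
        }
    };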
This is a big change to how process reaping works, reimplementing it using
topics. The idea is to simplify the logic in
process_mark_finished_children around blocking, and also prepare for
"internal processes" which do not correspond to real processes.
Before this change, fish would use waitpid() to wait for a process group,
OR would individually poll processes if the process group leader was
unreapable.
After this change, fish no longer ever calls blocking waitpid(). Instead
fish uses the topic mechanism. For each reapable process, fish checks if
it has received a SIGCHLD since last poll; if not it waits until the next
SIGCHLD, and then polls them all.
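A toy model of the mechanism (names assumed; fish's real topic monitor must
also be async-signal-safe, which this model glosses over by posting from
outside handler context):

    #include <condition_variable>
    #include <cstdint>
    #include <mutex>

    class sigchld_topic_t {
        std::mutex lock_;
        std::condition_variable cond_;
        uint64_t generation_ = 0;

       public:
        // Posted (from outside handler context) after a SIGCHLD arrives.
        void post() {
            { std::lock_guard<std::mutex> g(lock_); ++generation_; }
            cond_.notify_all();
        }

        uint64_t generation() {
            std::lock_guard<std::mutex> g(lock_);
            return generation_;
        }

        // Block until the generation moves past `gen`; return the new value.
        uint64_t wait_past(uint64_t gen) {
            std::unique_lock<std::mutex> g(lock_);
            cond_.wait(g, [&] { return generation_ != gen; });
            return generation_;
        }
    };

The reaping loop then polls every reapable process with waitpid(WNOHANG);
if nothing was reaped and blocking is allowed, it calls wait_past() on the
generation it recorded before polling, and polls again.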
This is a large change to how io_buffers are filled. The essential problem
comes about with code like (example):
echo ( /bin/pwd )
The output of /bin/pwd must go to fish, not the tty. To arrange for this,
fish does the following:
1. Invoke pipe() to create a pipe.
2. Add an io_bufferfill_t redirection that owns the write end of the pipe.
3. After fork (or equiv), call dup2() to replace pwd's stdout with this pipe.
Now when /bin/pwd writes, its output arrives at the read end of the pipe.
But who reads it?
Prior to this fix, fish would do the following in a loop:
1. select() on the pipe with a 10 msec timeout
2. waitpid(WNOHANG) on the pwd proc
This polling is ugly and confusing and is what is replaced here.
With this new change, fish now reads from the pipe via a background thread:
1. Spawn a background pthread, which select()s on the pipe's read end with
a long (100 msec) timeout.
2. In the foreground, waitpid() (allowing hanging) on the pwd proc.
The big win here is a major simplification of job_t::continue_job() since
it no longer has to worry about filling buffers. This will make things
easier for concurrent execution.
It may not be obvious why the background thread still needs a poll (100 msec).
The answer is for cases where the write end of the pipe escapes, in particular
background processes invoked inside command substitutions. psub is perhaps
the only important case of this (other shells typically just hang here).
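A simplified sketch of the fill thread (names assumed; the real
io_bufferfill_t hooks into fish's io chain and thread machinery):

    #include <sys/select.h>
    #include <unistd.h>
    #include <atomic>
    #include <string>

    // Owns the read end; the write end was dup2()'d over the child's stdout.
    struct buffer_fill_t {
        int read_fd;
        std::atomic<bool> shutdown{false};  // set by the foreground when done
        std::string contents;
    };

    // Body of the background pthread.
    void run_fill_thread(buffer_fill_t *bf) {
        for (;;) {
            fd_set fds;
            FD_ZERO(&fds);
            FD_SET(bf->read_fd, &fds);
            // Long (100 msec) timeout: if the write end escaped to a
            // background process (the psub case), EOF may never come and we
            // must not hang forever.
            struct timeval tv = {0, 100 * 1000};
            int ret = select(bf->read_fd + 1, &fds, nullptr, nullptr, &tv);
            if (ret <= 0) {
                if (bf->shutdown.load()) break;  // give up on an escaped fd
                continue;
            }
            char buf[4096];
            ssize_t amt = read(bf->read_fd, buf, sizeof buf);
            if (amt <= 0) break;  // EOF: every write end has been closed
            bf->contents.append(buf, amt);
        }
        close(bf->read_fd);
    }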
This makes some significant architectural improvements to io_pipe_t and
io_buffer_t.
Prior to this fix, io_buffer_t subclassed io_pipe_t. io_buffer_t is now
replaced with a class io_bufferfill_t, which does not subclass pipe.
io_pipe_t no longer remembers both fds. Instead it has an autoclose_fd_t,
so that the file descriptor ownership is clear.
By exclusively waiting by pgrp, we can fail to reap processes that
change their own pgrp and then either crash or close their fds. If we wind
up in a situation where `waitpid(2)` returns 0 or ECHILD even though we
did not specify `WNOHANG` but we still have unreaped child processes,
wait on them by pid.
Closes #5596.
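A hedged sketch of the fallback, with minimal stand-in job/process types:

    #include <sys/types.h>
    #include <sys/wait.h>
    #include <cerrno>
    #include <vector>

    struct process_t { pid_t pid; bool completed = false; };
    struct job_t { pid_t pgid; std::vector<process_t *> processes; };

    void reap_job(job_t *j) {
        int status;
        pid_t ret = waitpid(-j->pgid, &status, 0);  // blocking, whole pgrp
        if (ret > 0) { /* mark the matching process completed */ return; }
        if (ret == 0 || errno == ECHILD) {
            // The pgrp no longer matches our children (one changed its own
            // pgrp, then crashed or closed its fds): wait by pid instead.
            for (process_t *p : j->processes) {
                if (!p->completed && waitpid(p->pid, &status, WNOHANG) > 0) {
                    p->completed = true;
                }
            }
        }
    }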
This is effectively a pick of 2ebdcf82ee
and the subsequent fixup. However we also avoid setting WNOHANG unless
waitpid() indicates a process was reaped.
Fixes #5438
This is the more correct fix for #5447: regardless of which process
in the job (be it the first or the last) finished first, once we have
waited on one process without WNOHANG we don't do that for any subsequent
processes in the job.
It is also a waste to call into the kernel to wait for a process we
already know is completed!
This fixes #5438 by having fish block while waiting on a foreground job
via its individual processes, enumerating the procs in reverse order so
that we hang waiting for the last process in the IO chain to terminate,
rather than the first.
The function `add_disowned_pgid` adds process *group* ids and not
process ids. It multiplies the value by negative 1 to indicate a wait
on a process group, so the original value must be positive.
If a job is disowned that, for some reason, has a pgid that is special
to waitpid, like 0 (process with pgid of the calling process), -1 (any
process), or our actual pgid, that would lead to us waiting for too
many processes when we later try to reap the disowned processes (to
stop zombies from appearing).
And that means we'd snag away the processes we actually do want to
wait for, which would end with us in a waiting loop.
This is tough to reproduce, the easiest I've found was
fish -ic 'sleep 5 &; disown; set -g __fish_git_prompt_showupstream auto; __fish_git_prompt'
in a git repo.
What we do is to not allow special pgids in the disowned_pids list.
That means we might leave a zombie around (though we probably wait on
0 somewhere), but that's preferable to infinitely looping.
See #5426.
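A sketch of the guard (the function name is from this commit; the rest is
illustrative):

    #include <sys/types.h>
    #include <unistd.h>
    #include <vector>

    static std::vector<pid_t> disowned_pids;

    void add_disowned_pgid(pid_t pgid) {
        // 0 (our own pgrp), -1 (any process) and our actual pgrp are
        // special to waitpid(); storing them would later reap processes we
        // still want to wait for.
        if (pgid > 0 && pgid != getpgrp()) {
            // Negated so a later waitpid(-pgid, ...) waits on the whole group.
            disowned_pids.push_back(pgid * -1);
        }
    }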
A pgid of 0 happens in firejail, and it means that we can't use it as an
argument to most pgid-taking functions.
E.g. `wait(0)` means to wait for the _current_ process group,
`tcsetpgrp(0)` doesn't work, etc.
So we just stop doing this stuff and hope it works.
Fixes #5295.
This reverts commit 1cb8b2a87b.
argv[0] contains the full path when a user executes fish out of $PATH.
This is really annoying in the title, which uses $_.
Also check that the capability is actually defined, not just the cur_term
proxy. In #5371, we figured out that there are terminfo entries without this
capability, so this would do a NULL-dereference.
... rather than hard code it to "fish". This affects
what is found in $_ and improves the errors:
For example, if fish was run with ./fish, instead of
something like:
fish: Expected 3 surprises, only got 2 surprises
we'll see:
./fish: Expected 3 surprises, only got 2 surprises
like most other shell utilities. It's just a tiny bit
of detail that can avoid confusion.
Now jobs are aware of their parent jobs and can interrogate them to
determine whether every job in the chain is fully constructed.
Remove flags and the static stacks that manipulated them.
The parent of a job is the parent pipeline that executed the function or
block corresponding to this job. This will help simplify
process_mark_finished_children().
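A sketch of the parent pointer and the chain check it enables (field and
method names assumed):

    #include <memory>

    struct job_t {
        // The parent pipeline that ran the function or block producing
        // this job.
        std::shared_ptr<job_t> parent;
        bool constructed = false;  // true once all processes are launched

        // Reaping is allowed only when every job up the chain is constructed.
        bool chain_is_fully_constructed() const {
            for (const job_t *j = this; j != nullptr; j = j->parent.get()) {
                if (!j->constructed) return false;
            }
            return true;
        }
    };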
select_try() returned IO_ERROR to indicate that there's no file descriptors
from which to read. Name this return value properly.
Also migrate this type into proc.cpp since it's not used outside of the
header.
This is an opposite case from the usual "pipe into grep-the-function"
where my `pbpaste` emitted a lot of content exceeding the OS pipe
buffer. The `block_on_fg` condition was just `send_sigcont` in the
original job control rewrite, and it was incorrect to substitute
WAIT_BY_PROCESS for it on its own.
However, this requires always blocking when select_try returns an
interrupted/incomplete read or else fish doesn't block and stays running
in a tight loop in the background (and incorrectly writing to a terminal
it doesn't own under higher debug levels), which I *think* is OK.
This was introduced in 1b1bc28c0a but did
not cause any problems until the job control refactor, which caused it
to attempt to signal the calling `exec` builtin's own (invalid) pgrp
with SIGHUP.
Also improve debugging for `j->signal()` failures by printing the
signal we tried to send in case of error, rename the function to
`hup_background_jobs`, and move it from `reader.h`/`reader.cpp` to
`proc.h`/`proc.cpp`.
When a function is encountered by exec_job, a new context is created for
its execution from the ground up, with a new job and all, ultimately
resulting in a recursive call to exec_job from the same (main) thread.
Since each time exec_job encounters a new job with external commands
that needs terminal control, it creates a new pgrp and gives it control
of the terminal (tcsetpgrp & co), this effectively takes control away
from the previously spawned external commands which may be (and likely
are) expecting to still have terminal access.
This commit attempts to detect when such a situation arises by handling
recursive calls to exec_job (which can only happen if the pipeline
included a function) by borrowing the pgrp from the (necessarily still
active) parent job and spawning new external commands into it.
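The shape of the borrowing logic, as an illustrative sketch (names
assumed):

    #include <sys/types.h>
    #include <vector>

    struct job_t { pid_t pgid = -1; };

    // Jobs currently executing on the main thread; a recursive exec_job
    // call (triggered by a function in the pipeline) sees its caller on top.
    static std::vector<job_t *> s_jobs_in_flight;

    void exec_job(job_t *j) {
        if (!s_jobs_in_flight.empty()) {
            // Borrow the pgrp of the (necessarily still active) parent job
            // so we don't seize the terminal from its external processes.
            j->pgid = s_jobs_in_flight.back()->pgid;
        }
        s_jobs_in_flight.push_back(j);
        // ...spawn j's processes into j->pgid; transfer the terminal only
        // when we created a fresh pgrp rather than borrowing one...
        s_jobs_in_flight.pop_back();
    }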
When a parent job spawns new jobs due to the evaluation of a new
function (which shouldn't be the case in the first place), we end up
with two distinct jobs sharing one pgrp (to fix #3952). This can lead to
early termination of a pgrp if finished parent job children are reaped
before future processes in either the parent or future child jobs can
join it.
While the parent job is under construction, require that waitpid(2)
calls for the child job be done by process id and not job pgrp.
Closes #3952.
Use SIGCHLD to determine whether or not waitpid(2) calls can be elided,
but only with extreme caution. If we receive SIGCHLD but are not able to
reap all jobs, we need to iterate through them again.
For this to work, we need to make sure that we reap all children that we
can reap after a SIGCHLD, i.e. it's not OK to just reap the first and
return or else we can never clear the dirty state flag.
In all cases, as expensive as a call to waitpid() may be, if a child
process is available for reaping it is always cheaper to wait on and
reap it than to call select_try() and end up timing out.
The old code was rather haphazard with regards to error control, and
would make mutable changes before operations that could fail without any
viable error handling options.
Convert `select_try()` to return a well-defined enum describing its
state, and handle each of the three possible cases with clear reasons
why we are blocking or not blocking in each subsequent call to
`process_mark_finished_children()`.
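For instance, the result could look like this (enumerator names assumed):

    // Tri-state result of polling the job's pipes for readable data.
    enum class select_try_t {
        DATA_READY,     // something is readable right now: go read it
        TIMEOUT,        // nothing readable before the timeout: safe to
                        // block in waitpid()
        IOCHAIN_EMPTY,  // no file descriptors to select() on at all
                        // (the value formerly misnamed IO_ERROR)
    };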
* Use the newly-introduced signal_block_t RAII wrapper
* Remove EINTR loops as all signals are blocked
* Clean up control flow thanks to RAII wrappers
* Rename parameter to clarify what it does and update docs accordingly
* Update outdated comments referencing SIGSTOP code that was removed a
long time ago.
* Remove no-op CHECK_BLOCK() call
* Convert JOB_* enums to scoped enums
* Convert standalone job_is_* functions to member functions
* Convert standalone job_{promote, signal, continue} to member functions
* Convert standalone job_get{,_from_pid} to `job_t` static functions
* Reduce usage of JOB_* enums outside of proc.cpp by using new
`job_t::is_foo()` const helper methods instead.
This patch is only a refactor and should not change any functionality or
behavior (both observed and unobserved).
* Debug level 3: describe all commands being executed (this is, after all,
a shell and one can argue that this is the most important debug
information available)
* Debug level 4: details of execution, mainly fork vs no-fork and io
handling
Also introduced j->preview() to print a short descriptor of the job
based on the head of the first process so we don't overwhelm with
needless repetition, but also so that we don't have to rely on
distinguishing between repeated, non-unique/non-monotonic job ids that
are often recycled within a single "execution cycle" (pressing enter
once).
Per @ridiculousfish's suggestions in #5219,
`process_mark_finished_children()` has been updated to work in an easier-
to-follow manner. Its behavior is now straightforward: it always checks
for finished processes but only blocks if `block_on_fg` is true.
We're not using the SIGCHLD count in s_sigchld_generation_cnt for
anything any more, as it's not actually a reliable metric since we can
experience one SIGCHLD as a result of two processes exiting (see #1768),
but only reap one of them if the other is in a not-fully-constructed job
(see #5219), a state we cannot possibly detect without calling
`waitpid()` on all child processes, which we are explicitly avoiding.
* Instead of reaping all child processes when we receive a SIGCHLD, try
reaping only processes belonging to process groups from fully-
constructed jobs, which should eliminate the need for the keepalive
process entirely (WSL's lack of zombies notwithstanding) as now
completed processes are not reaped until the job has been fully
constructed (i.e. all processes launched), which means their process
group should still be around for new processes to join.
* When `tcgetpgrp()` calls return 0, attempt to `tcsetpgrp()` before
invoking failure handling code.
* When forking a builtin and not running interactively, do not bail if
unable to set/restore terminal attributes.
Fixes#4178. Fixes#3805. Fixes#5210.
Fix #5133 changed builtins to acquire the terminal, but this regressed:
it caused fish to be stopped when running in the background via `sudo
fish`. Fix this by only acquiring the terminal if the terminal was owned
by the builtin's pgroup.
Fixes #5147
When running a builtin, if we are an interactive shell and stdin is a tty,
then acquire ownership of the terminal via tcgetpgrp() before running the
builtin, and set it back after.
Fixes #4540
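A hedged sketch combining this commit with the #5147 guard above (flow and
names assumed):

    #include <sys/types.h>
    #include <unistd.h>

    struct job_t { pid_t pgid; };

    void run_builtin_with_tty(job_t *j, void (*run_builtin)(void)) {
        pid_t owner = tcgetpgrp(STDIN_FILENO);
        // Only take the terminal when the builtin's own job pgroup holds
        // it; if another pgroup owns it (e.g. running in the background
        // under `sudo fish`), touching the tty would stop us with SIGTTOU.
        bool transfer = (owner >= 0 && owner == j->pgid && owner != getpgrp());
        if (transfer) tcsetpgrp(STDIN_FILENO, getpgrp());
        run_builtin();
        if (transfer) tcsetpgrp(STDIN_FILENO, owner);  // hand it back
    }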
While supported by gcc and clang, \e is a GNU extension and not
formally defined in the C or C++ standards.
See [0] for a list of valid escapes.
[0]: https://stackoverflow.com/a/10220539/17027
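E.g., written portably (variable name illustrative):

    // Portable: spell the byte out; \e is the GNU extension.
    const wchar_t *k_clear_to_eol = L"\x1b[K";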
Turns out the process-exit event is only ever used in conjunction with
`%self`. Make that explicit by just adding a new "fish_exit" event,
and deprecate the general process-exit machinery.
Fixes #4700.
This concerns block nodes with redirections, like
begin ... end | grep ...
Prior to this fix, we passed in a pointer to the node. Switch to passing
in the tnode and parsed source ref. This improves type safety and better
aligns with the function-node plans.
There were several issues with the way that the include tests for curses.h
were being done that were ultimately causing fish to use the headers from
ncurses but link against curses on platforms that provide an actual
libcurses.so that isn't just a symlink to libncurses.so
In particular, the old code first tested for curses's curses.h and then
fell back to libncurses's implementation of the same - but that logic was
reversed when it came to including term.h, where it tested for the ncurses
term.h and fell back to the curses header. Long story short: cmake will
link against libcurses.so if both libcurses.so and libncurses.so are
present (unless CURSES_NEED_NCURSES evaluates to TRUE, but that makes
ncurses a hard requirement), yet we were bringing in some of the defines
from the ncurses headers, causing SIGSEGV panics when fish ultimately
tried to access variables that weren't exported or were mapped to
undefined areas of memory in the other library.
Additionally it is an error to include termios.h prior to including the plain
Jane curses.h (not ncurses/curses.h), causing errors about unimplemented types
SGTTY/chtype. So far as I can tell, both curses.h and ncurses/curses.h pull in
termios.h themselves so it shouldn't even be necessary to manually include it,
but I have just moved its #include below that of curses.h
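The resulting include order is roughly this (guard macros follow CMake's
FindCurses conventions; treat the exact names as an assumption):

    #if defined(CURSES_HAVE_NCURSES_H)
    #include <ncurses.h>
    #elif defined(CURSES_HAVE_NCURSES_CURSES_H)
    #include <ncurses/curses.h>
    #else
    #include <curses.h>
    #endif
    // Only after curses.h, else SGTTY/chtype errors; curses.h normally
    // pulls it in anyway.
    #include <termios.h>
    #include <term.h>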
No longer using RAII wrappers around pthread_mutex_t and pthread_cond_t
in favor of the C++11 std::mutex, std::recursive_mutex, and
std::condition_variable data types.
There is no more race condition between parent and child with
regards to setting the process groups. Each child sets it for themselves
and then blocks indefinitely until the parent does what it needs to for
them (having waited for them to set their process groups). They are not
SIGCONT'd until the next process in the chain (if any) starts so that
that process can join their process group and open the pipes.
Setting the process group in a fork/exec scenario is a well-documented
race condition in pretty much any job control mechanism [0] [1]. The
Wikipedia article contradicts the glibc article and suggests that the
best approach is for the parent to wait for the child to become the
process group leader, while the glibc article suggests that both should
make it so (which is what fish did previously). However, I'm running
into cases where tcsetpgrp is causing an EPERM error, which it isn't
documented to do except if the session id for the calling process
differs from that of the target process group (which is never the case
in fish since they are all part of the same session), which should cause
a _different_ error (SIGTTOU to be sent to all members of the calling
process' group).
In all cases, this is easily remedied by checking whether the process group
in question is already in control of the terminal. There's still the
off-chance that in the time between we check that and the time that the
command completes that situation may have changed, but the parent
process is supposed to ignore the result of this call if it errors out.
[0]: https://en.wikipedia.org/wiki/Process_group
[1]: https://www.gnu.org/software/libc/manual/html_node/Launching-Jobs.html
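A sketch of the remedy (function name assumed):

    #include <cerrno>
    #include <sys/types.h>
    #include <unistd.h>

    // Returns 0 on success; callers are expected to tolerate failure here.
    int give_terminal_to(pid_t pgid) {
        if (tcgetpgrp(STDIN_FILENO) == pgid) return 0;  // already the owner
        while (tcsetpgrp(STDIN_FILENO, pgid) < 0) {
            if (errno == EINTR) continue;  // retry on interruption
            // Undocumented EPERM race: the target already won the terminal.
            if (errno == EPERM && tcgetpgrp(STDIN_FILENO) == pgid) return 0;
            return -1;
        }
        return 0;
    }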
PR #3691 made most calls to `signal_block()` and `signal_unblock()`
no-ops unless a magic env var is set when fish starts running. It's
been seven months since that change was made and no problems have been
reported. This finishes that work by removing those no-op function calls
and support for the magic env var in our next major release (which won't
happen till at least six months from now).
This implements `status is-breakpoint` that returns true if the current
shell prompt is displayed in the context of a `breakpoint` command.
This also fixes several bugs. Most notably making `breakpoint` a no-op if
the shell isn't interactive. Also, typing `breakpoint` at an interactive
prompt should be an error rather than creating a new nested debugging
context.
Partial fix for #1310
This came up in the context of issue #4068. This change makes it more
likely that the correct translation from English to another language
will be done for the "Job ... has {ended,stopped}" message.
0 is not a good default PGID, because it's possible for a kernel process
to have the PGID of 0 under Linux.
This meant that job_get_from_pid could return incorrect jobs, as the PGID
for internal, non-forked jobs was the same as kernel processes.
Avoid this by using an invalid PGID as the initial PGID.
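E.g. (the exact sentinel value here is an assumption):

    #include <sys/types.h>

    // 0 can be a real kernel pgid on Linux, and -1 means "any process" to
    // waitpid(); pick a sentinel that matches neither.
    constexpr pid_t INVALID_PGID = -2;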
It is possible for fish to not be the process group leader; avoid
signalling the process group containing the current process by checking
with getpgrp() rather than assuming that getpid() is enough.
This is the next step in determining whether we can disable blocking
signals without a good reason to do so. This makes not blocking signals
the default behavior. If someone finds a problem they can add this to
their ~/.config/fish/config.fish file:
set FISH_NO_SIGNAL_BLOCK 0
Alternatively set that env var before starting fish. I won't be surprised
if people report problems. Till now we have relied on people opting in
to this behavior to tell us whether it causes problems. This makes the
experimental behavior the default that has to be opted out of. This will
give us a lot more confidence this change doesn't cause problems before
the next minor release.
Note that there are still a few places where we force blocking of
signals. Primarily to keep SIGTSTP from interfering with the shell in
response to manipulating the controlling tty. Bash is more selective
in the signals it blocks around the problematic syscalls (c.f., its
`git_terminal_to()` function). However, I don't see any value in that
refinement.
I recently upgraded the software on my macOS server and was dismayed to
see that cppcheck reported a huge number of format string errors due to
mismatches between the format string and its arguments in calls to
`assert()`. It turns out they are due to the macOS header using `%lu`
for the line number, which is obviously wrong since it is printing the C
preprocessor `__LINE__` symbol, which evaluates to a signed int.
I also noticed that the macOS implementation writes to stdout, rather
than stderr. It also uses `printf()` which can be a problem on some
platforms if the stream is already in wide mode which is the normal case
for fish.
So implement our own `assert()` implementation. This also eliminates
the double-negative warnings that oclint emits for some of our calls to
`assert()` on some platforms.
Also reimplement the `DIE()` macro in terms of our internal
implementation.
Rewrite `assert(0 && msg)` statements to `DIE(msg)` for clarity and to
eliminate oclint warnings about constant expressions.
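A minimal sketch of such an assert built on wide stderr output (macro
names assumed; fish's real macros differ in detail):

    #include <cstdio>
    #include <cstdlib>
    #include <cwchar>

    // Wide output (see the stream-orientation commit below) and abort() so
    // a debugger still gets a useful stack. __LINE__ is an int, so %d is
    // the correct conversion, unlike macOS's %lu.
    #define DIE(msg)                                                      \
        do {                                                              \
            std::fwprintf(stderr, L"%s:%d: fatal error: %s\n", __FILE__,  \
                          __LINE__, msg);                                 \
            std::abort();                                                 \
        } while (0)

    #define FISH_ASSERT(e)                              \
        do {                                            \
            if (!(e)) DIE("assertion failed: " #e);     \
        } while (0)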
Fixes #3276, albeit not in the fashion I originally envisioned.
Commit ab189a75 introduced a regression where we stop breaking out
of loops in response to a child death via a signal. Fix that regression.
Also introduce a test to help ensure we don't regress in the future.
Fixes #3780
The shell was doing a lot of signal blocking/unblocking that hurts
performance and can be avoided. This reduced the elapsed time for a
simple benchmark by 25%.
Partial fix for #2007
We should never use stdio functions that use stdout implicitly. Saving a
few characters isn't worth the inconsistency. Too, using the forms such
as `fwprintf()` which take an explicit stream makes it easier to find
the places we write to stdout versus stderr.
Fixes #3728
On some platforms, notably GNU libc, you cannot mix narrow and wide
stdio functions on a stream like stdout or stderr. Doing so will drop
the output of one or the other. This change makes all output to the
stderr stream consistently use the wide forms.
This change also converts some fprintf(stderr,...) calls to debug()
calls where appropriate.
Fixes #3692
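For example (illustrative):

    #include <cstdio>
    #include <cwchar>

    void report_error(const wchar_t *msg) {
        // Consistently wide: on GNU libc the first call on a stream fixes
        // its orientation, and output in the other width is then dropped.
        std::fwprintf(stderr, L"fish: %ls\n", msg);
    }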
Using `\e` is clearer and shorter than `\x1b`. It's also consistent with how
we write related control chars; e.g., we don't write `\x0a`, we write `\n`.
If an interactive shell has its tty invalidated attempts to write to
stdout or stderr can trigger this bug:
https://sourceware.org/bugzilla/show_bug.cgi?id=20632
Avoid that by reopening the stdio streams on /dev/null if we're getting
an ENOTTY error when trying to do things like give or take ownership of
the tty.
This includes some unrelated style cleanups but including them seems
reasonable.
Fixes #3644
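A sketch of the /dev/null fallback (function shape assumed; the trigger
check is simplified here):

    #include <cerrno>
    #include <fcntl.h>
    #include <termios.h>
    #include <unistd.h>

    // If the tty has been invalidated, point the stdio fds at /dev/null so
    // later writes can't trip the glibc bug referenced above.
    void redirect_tty_output() {
        struct termios t;
        for (int fd : {STDIN_FILENO, STDOUT_FILENO, STDERR_FILENO}) {
            if (tcgetattr(fd, &t) == -1 && errno == ENOTTY) {
                close(fd);
                // open() returns the lowest free descriptor, i.e. fd again.
                open("/dev/null", O_RDWR);
            }
        }
    }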
If an interactive job is started, and it is reaped within fish's
exit handler, we may attempt to print its status message after
cur_term has been set to NULL. This results in a crash.
This change makes fish only print the status message if cur_term is
not NULL.
Fixes #3222
Just use static_cast directly instead of the inscrutable "shortcut"
macro.
It was not always used and doesn't seem to do much besides scramble
things up; encountering CAST_INIT() in the code seems likely to lead
to head scratching due to the transformation taking place.
It was added to save folks typing the type twice; now, with 100
columns available, let's roll that convenience macro back.
sockaddr_dl:
Perform the conversion via reinterpret_cast<sockaddr_dl *>. The cast
affects alignment and looks fishy to a compiler (but it's fine). Ditch the
C-style cast and communicate that we're doing it on purpose.
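E.g. (BSD/macOS headers; the helper is illustrative):

    #include <ifaddrs.h>
    #include <net/if_dl.h>  // sockaddr_dl lives here on BSD/macOS

    const sockaddr_dl *as_link_addr(const ifaddrs *ifa) {
        // Deliberate reinterpretation; a C-style cast would hide the intent.
        return reinterpret_cast<const sockaddr_dl *>(ifa->ifa_addr);
    }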
This only eliminates errors reported by `make lint`. It shouldn't cause any
functional changes.
This change does remove several functions that are unused. It also removes the
`desc_arr` variable which is both unused and out of date with reality.
I'm doing this as part of fixing issue #2980. The code for managing tty modes
and job control is a horrible mess. This is a very tiny step towards improving
the situation.
Remove the "make iwyu" build target. Move the functionality into the
recently introduced lint.fish script. Fix a lot, but not all, of the
include-what-you-use errors. Specifically, it fixes all of the IWYU errors
on my OS X server but only removes some of them on my Ubuntu 14.04 server.
Fixes #2957
Cppcheck has identified a lot of unused functions. This removes funcs that
are unlikely to ever be used. Others that might be useful for debugging I've
commented out with "#if 0".
This was used to cache a narrow string representation
of commands, so that if certain system calls returned errors
after fork, we could output error messages without allocating
memory. But in practice these errors are very uncommon, as are
commands that have wide characters. It is simpler to do a best-effort
output of the wide string, instead of caching a narrow string
unconditionally.
This change moves source files into a src/ directory,
and puts object files into an obj/ directory. The Makefile
and xcode project are updated accordingly.
Fixes #1866