Previously, if the user control-C'd out of a process, we would set a
bogus exit status in the process, but this was difficult to observe
because we would be cancelling anyway. Now set it properly.
Prior to this change, a process was given a list of io_data_t
redirections after it was constructed by parse_execution but before it
was executed. The problem is that redirections have a sensitive
ownership policy because they hold onto fds, which made it rather hard
to reason about fd lifetimes.
Change these to redirection_spec_t. This is a textual description
of a redirection after expansion. It does not represent an open file and
so its lifetime is no longer important.
This enables open files to be held only on the stack, so they are no
longer owned by a process of indeterminate lifetime.
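As a rough sketch (the member names below are illustrative assumptions,
not fish's actual definition), such a textual description needs little
more than an fd, a mode, and the expanded target string:

    // Hypothetical sketch of a redirection described as plain data: it
    // names an fd and a target but owns no open file, so it can be freely
    // copied and its lifetime does not matter.
    #include <string>

    enum class redirection_mode_t { overwrite, append, input, fd };

    struct redirection_spec_t {
        int fd;                   // the fd being redirected, e.g. 1 for stdout
        redirection_mode_t mode;  // how the target should be opened
        std::wstring target;      // expanded target path (or fd number as text)
    };

    // Only at execution time is the spec resolved into an actual open file,
    // whose fd can then live on the stack for the duration of the spawn.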
Prior to this fix, a job would hold onto any IO redirections from its
parent. For example:
    begin
        echo a
    end < file.txt
The "echo a" job would hold a reference to the I/O redirection.
The problem is that jobs then extend the life of pipes until the job is
cleaned up. This can prevent pipes from closing, leading to hangs.
Fix this by not storing the block IO; this ensures that jobs do not
prolong the life of pipes.
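The hazard can be reproduced outside fish with a small self-contained
C++ sketch (the types below are illustrative, not fish source): as long
as any owner of a pipe's write end survives, the reader never sees EOF.

    #include <cstdio>
    #include <memory>
    #include <unistd.h>

    // Hypothetical fd wrapper, for illustration only.
    struct autoclose_fd_t {
        int fd;
        explicit autoclose_fd_t(int f) : fd(f) {}
        autoclose_fd_t(const autoclose_fd_t &) = delete;
        ~autoclose_fd_t() {
            if (fd >= 0) close(fd);
        }
    };

    int main() {
        int fds[2];
        if (pipe(fds) != 0) return 1;

        auto write_end = std::make_shared<autoclose_fd_t>(fds[1]);
        auto held_by_job = write_end;  // an unrelated job holding block IO

        write_end.reset();  // the actual writer is finished...
        // Calling read(fds[0], ...) here would block forever: the write end
        // is still open because held_by_job keeps it alive.

        held_by_job.reset();  // ...and only when the last reference dies
        char buf[16];
        ssize_t n = read(fds[0], buf, sizeof buf);  // does EOF (0) arrive.
        printf("read returned %zd\n", n);
        close(fds[0]);
        return 0;
    }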
Fixes #6397
Currently a job needs to know three things about its "parents":
1. Any IO redirections for the block or function containing this job
2. The pgid for the parent job
3. Whether the parent job has been fully constructed (to defer self-disown)
These are all tracked in separate, somewhat awkward ways. Collapse them
into a single new type, job_lineage_t.
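A minimal sketch of the shape of that type (member names and types
below are assumptions for illustration, not fish's actual
job_lineage_t):

    #include <memory>
    #include <sys/types.h>
    #include <vector>

    struct io_data_t;  // stand-in for whatever represents a block redirection
    using io_chain_t = std::vector<std::shared_ptr<io_data_t>>;

    struct job_lineage_t {
        // 1. IO redirections for the block or function containing this job.
        io_chain_t block_io;
        // 2. The pgid of the parent job (-1 meaning "none").
        pid_t parent_pgid{-1};
        // 3. Becomes true once the root job is fully constructed, so a child
        //    knows when it may safely disown itself.
        std::shared_ptr<bool> root_constructed;
    };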
This adds initial support for statements with prefixed variable assignments.
Statements like this are supported:
    a=1 b=$a echo $b # outputs 1
Just like in other shells, the left-hand side of each assignment must
be a valid variable identifier (no quoting/escaping). Array indexing
(PATH[1]=/bin ls $PATH) is *not* yet supported, but can be added fairly
easily.
The right-hand side may be any valid string token, such as a command
substitution or a brace expansion.
Since `a=* foo` is equivalent to `begin set -lx a *; foo; end`,
the assignment, like `set`, uses nullglob behavior; e.g. the command
below can safely be used to check whether a directory is empty.
    x=/nothing/{,.}* test (count $x) -eq 0
Generic file completion is done after the equal sign, so, for example,
pressing tab after something like `HOME=/` completes files in the
root directory.
Subcommand completion works, so something like
`GIT_DIR=repo.git and command git ` correctly calls git completions
(though the git completion does not yet use the variable).
The variable assignment is highlighted like an argument.
Closes #6048
This adds support for `fish_trace`, a new variable intended to serve the
same purpose as `set -x` in bash. Setting this variable to anything
non-empty causes execution to be traced. In the future we may give more
specific meaning to the value of the variable.
The user's prompt is not traced unless you run it explicitly. Events are
also not traced because that is noisy; autoloading, however, is.
Fixes #3427
We used to have a global notion of "is the shell interactive" but soon we
will want to have multiple independent execution threads, only some of
which may be interactive. Start tracking this data per-parser.
This runs build_tools/style.fish, which runs clang-format on C++, fish_indent on fish scripts, and (newly) black on Python.
If anything is wrong with the formatting, we should fix the tools, but automated formatting is worth it.
This was added in 04a96f6 but is not strictly required to fix #5803
(verified). The intention was to hide invisible background jobs
(created by invoking a function within a pipeline) from the user, but
it also prevented intentionally created jobs from being displayed.
I'm thinking it can't be done without keeping track of caller context vs
job context.
Closes #5824.
Prior to this change, fish used a global flag to decide whether to check
for changes to universal variables. This flag was then checked at arbitrary
locations, potentially triggering variable updates and event handlers for
those updates; this was very hard to reason about.
Switch to triggering a universal variable update at a fixed location,
after running an external command. The common case is that the variable
file has not changed, which we can identify with just a stat() call, so
this is pretty cheap.
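A minimal sketch of that check, assuming a hypothetical file_id_t and
helper (this is not fish's implementation, just the idea): record the
stat() identity seen at the last sync and only re-read the file when it
differs.

    #include <sys/stat.h>
    #include <ctime>

    struct file_id_t {
        dev_t device{0};
        ino_t inode{0};
        off_t size{0};
        time_t mtime{0};

        bool operator==(const file_id_t &rhs) const {
            return device == rhs.device && inode == rhs.inode &&
                   size == rhs.size && mtime == rhs.mtime;
        }
    };

    static file_id_t file_id_for_path(const char *path) {
        struct stat buf{};
        file_id_t result{};
        if (stat(path, &buf) == 0) {
            result.device = buf.st_dev;
            result.inode = buf.st_ino;
            result.size = buf.st_size;
            result.mtime = buf.st_mtime;
        }
        return result;
    }

    // Called at the fixed point, i.e. after running an external command.
    bool uvars_maybe_changed(const char *path, file_id_t *last_seen) {
        file_id_t current = file_id_for_path(path);
        if (current == *last_seen) return false;  // common case: no change
        *last_seen = current;
        return true;  // caller re-reads the file and fires change events
    }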
I did not realize builtins could safely call into the parser and inject
jobs during execution. This is much cleaner than hacking around the
required shape of a plain_statement.
While `eval` is still a function, this paves the way for changing that
in the future, and lets the proc/exec functions detect when an eval is
used to allow/disallow certain behaviors and optimizations.
Follow-up to 394623b.
Doing it in the parser meant only top-level jobs would be reaped after
being `disown`ed, as subjobs aren't directly handled by the parser.
This is also much cleaner, as now job removal is centralized in
`process_clean_after_marking()`.
Closes #5803.
This stops the `disown` builtin from directly removing jobs from the
jobs list, which could cause consistency issues: `disown` may be called
within the context of a subjob (e.g. in a function or block), in which
case the parent job might not yet be done with its reference to the
child job. Instead, a flag is set and the parser removes the job from
the list only after the entire execution chain has completed.
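A generic sketch of that "mark now, sweep later" pattern (not fish
source; names are illustrative):

    #include <list>
    #include <memory>

    struct job_t {
        bool flagged_for_removal{false};
    };
    using job_list_t = std::list<std::shared_ptr<job_t>>;

    // Called by the builtin: safe even while callers still hold the job.
    void mark_job_for_removal(job_t &job) { job.flagged_for_removal = true; }

    // Called by the parser once the execution chain has completed.
    void remove_marked_jobs(job_list_t &jobs) {
        jobs.remove_if([](const std::shared_ptr<job_t> &j) {
            return j->flagged_for_removal;
        });
    }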
Closes #5720.
Prior to this fix, in every call to job_continue, fish would reclaim the
foreground pgrp. This would cause other jobs in the pipeline (which may
have a different pgrp) to receive SIGTTIN / SIGTTOU.
Only reclaim the foreground pgrp if it was held at the point of job_continue.
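A rough sketch of the intended logic, assuming hypothetical names (this
is not fish's actual job_continue): remember whether our pgrp owned the
terminal when the call was made, and only hand it off and reclaim it in
that case.

    #include <unistd.h>

    void continue_job_sketch(pid_t job_pgid, bool foreground) {
        // Did the shell's process group own the terminal at the point of
        // the call?
        bool terminal_was_ours = tcgetpgrp(STDIN_FILENO) == getpgrp();

        if (foreground && terminal_was_ours) {
            tcsetpgrp(STDIN_FILENO, job_pgid);  // hand the terminal to the job
        }

        // ... continue the job and wait for it ...

        if (foreground && terminal_was_ours) {
            // Reclaim only because we held the terminal to begin with;
            // otherwise another pgrp in the pipeline would get
            // SIGTTIN / SIGTTOU.
            tcsetpgrp(STDIN_FILENO, getpgrp());
        }
    }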
This partially addresses #5765.
Directly access the job list without the intermediate job_iterator_t,
and remove functions that were ripe for abuse because they modified a
local enumeration of the same list instead of operating on the
iterators directly (e.g. proc.cpp iterates jobs and, mid-iteration,
calls parser::job_remove(j) with the job (and not the iterator to the
job), causing an invisible invalidation of the pre-existing local
iterators).
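A generic illustration of the hazard and the safer pattern (plain C++,
not fish source): erase through the iterator you are actually advancing
instead of removing the element by value out from under an existing
loop.

    #include <cstdio>
    #include <list>

    int main() {
        std::list<int> jobs{1, 2, 3, 4};

        // Safe in-place removal: erase() returns the next valid iterator.
        for (auto it = jobs.begin(); it != jobs.end();) {
            bool job_is_complete = (*it % 2 == 0);  // stand-in condition
            if (job_is_complete) {
                // NOT jobs.remove(*it): that would erase the node `it`
                // points to and invalidate it out from under us.
                it = jobs.erase(it);
            } else {
                ++it;
            }
        }

        for (int j : jobs) printf("%d ", j);  // prints: 1 3
        printf("\n");
        return 0;
    }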
Prior to this fix, the wait command used waitpid() directly. Switch it to
calling process_mark_finished_children() along with the rest of the job
machinery. This centralizes the waitpid call in a single location.
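A sketch of what such a centralized reaper looks like, assuming a
hypothetical function name (this is not fish's
process_mark_finished_children): one non-blocking waitpid loop that
both the job machinery and the wait builtin call.

    #include <cstdio>
    #include <sys/wait.h>
    #include <unistd.h>

    // Reap every child that has already exited, without blocking.
    // Returns the number of children reaped.
    int mark_finished_children_sketch() {
        int reaped = 0;
        for (;;) {
            int status = 0;
            pid_t pid = waitpid(-1, &status, WNOHANG | WUNTRACED);
            if (pid <= 0) break;  // no more finished children (or none at all)
            // Here the matching process record would be marked complete and
            // its status stored for later consumers such as `wait`.
            printf("child %d finished with status 0x%x\n", (int)pid, status);
            reaped++;
        }
        return reaped;
    }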