Fixes #6830
For some reason, with this change, typing "vi", Control-Z, and then Control-D twice
results in the cursor not moving correctly, but this only
seems to happen when starting fish from a fish that doesn't have this fix.
I hope that is a temporary glitch.
The prefix 'haha' is short enough (and phonetic enough) that it could collide with an existing user on the system where the tests are running, causing the test to fail.
I kinda hate how fussy clang-format is. It reflows text
constantly (line limit), forces things onto one line *except* when
they're too long, and wants to turn this:
```c++
return true;;
```
into this:
```c++
return true;
;
```
instead of, you know, eliminating the second semicolon?
Anyway, it is what it is and we use it. I'll just look into getting some
more slack.
This allows code of the form `if jobs -q $some_pid` in scripts to check whether a previously started job is still running. Previously this would return the correct value, but also print an error message.
The invalid argument errors will still be printed.
Added test cases for both.
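A minimal sketch of the intended usage (assumes $some_pid holds the pid of a previously started background job):
```fish
if jobs -q $some_pid
    echo "job $some_pid is still running"
end
```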
Currently we do not add such command lines to the history, so there
won't be a suggestion from history anyway.
Fixes #6763, which occurs because Midnight Commander feeds fish commands
like this one (note the leading space):
` cd (printf '%b' '\0057home\0057johannes\0057git\0057fish\0055shell\0057build')`
Things like
```fish
\
echo foo
```
or
```fish
echo foo; \
echo bar
```
are a formatting blunder and should be handled.
This makes it so the escaped newline is removed, and the
semicolon/token_type_end handling will then put the statements on
different lines.
One case this doesn't handle brilliantly is an escaped newline after a
pipe:
```fish
echo foo | \
cat
```
is turned into
```fish
echo foo | cat
```
which here works great, but in long pipelines can cause issues.
Pipes at the end of the line cause fish to continue parsing on the
next line, so this can just be written as
```fish
echo foo |
cat
```
for now.
1. When the wall time and CPU time rows have different units,
e.g. when running on multiple cores.
2. When the duration is around 1E3 or 1E6 microseconds:
printf("%6.2F", 999.995) gives 1000.00, which is 7 digits.
If given a prompt that includes a non-ASCII character and a C locale, fish
currently fails to properly display it.
So if you set `function fish_prompt; echo 😃; end`, it shows empty
space.
While the underlying cause is obviously using a C locale and non-C
characters to begin with, this is an unacceptable failure mode.
Apparently I misunderstood wcstombs, so I inadvertently broke this in
2b0b3d3 while trying to fix 5134949's crash.
Just return the offending bit to pre-5134949 levels, so instead of an
infinite recursion we just call a lame function a couple of times.
This tries to see if quotes guard some expansion from happening. If it
detects a "weird" character, it'll leave the quotes in place, even in
some cases where the expansion might not actually trigger.
So
```fish
for i in 'c' 'color'
```
turns into
```fish
for i in c color
```
The rationale here is that these quotes are useless, wasting
space (and line length), but more importantly that they are
superstitions. They don't do anything, but look like they do.
The counter argument is that they can be kept in case of later
changes, or that they make the intent clear - "this is supposed to be
a string we pass".
This teaches the reader fast-path to use self-insert-notfirst, allowing
it to handle spaces. This greatly increases the performance of paste by
reducing redraws.
Fixes #6603. Somewhat improves #6704.
This adds basic support for self-insert-notfirst. When we see a
self-insert-nonempty char event, we kick it back to the outer loop,
which only inserts the character if the cursor is not at the beginning.
This adds a new readline command self-insert-notfirst, which is
analogous to self-insert, except that it does nothing if the cursor
is at the beginning. This will serve as a higher-performance implementation
for stripping leading spaces on paste.
debounce_t will be used to limit thread creation from background highlighting
and autosuggestion scenarios. This is a one-element queue backed by a
single thread. New requests displace any existing queued request; this
reflects the fact that autosuggestions and highlighting only care about
the most recent result.
A timeout allows for abandoning hung threads, which may happen if you
attempt to e.g. access a dead hard-mounted NFS server. We don't want
this to defeat autosuggestions and highlighting permanently, so allow
spawning a new thread after the timeout (here 500 ms).
bbc3fecbe introduced a regression where 256-color support was not
detected on xterm-like terminals that did not define the TERM_PROGRAM
env variable. Almost no terminal on Linux defines this variable.
55e3270 introduced a regression where we would remove all completed
jobs. But jobs that want to print a status message get skipped, so
the status message (and associated event handlers) might not get run.
Fix this by making it explicit which jobs are safe to process, and which
should be skipped.
Fixes #6679.
f8ba0ac5bf introduced a bug where INT handlers would themselves be
cancelled, due to the signal. Defer processing handlers until the
parser is ready to execute more fish script.
Fixes the interactive case of #6649.
This was written before local-exported variables did anything useful.
Passing these vars as local-exports removes the need to define the
validation function with `--no-scope-shadowing` which is quite the
hack.
If a background process runs a fish function which launches another
background process, ensure that these background procs get different
pgroups. Add a test for it.
This happened when starting the selection at the end of the commandline.
In this case, selections still interact weirdly with autosuggestions (the
first character of the suggestion appears to be part of the selection
when it's not).
Fixes #6680
Prior to this commit, when executing a builtin, we mark the job as not
foreground. After this commit we no longer modify the foreground state
of the job just for the builtin.
There was the following comment:
// Since this may be the foreground job, and since a builtin may execute another
// foreground job, we need to pretend to suspend this job while running the
// builtin, in order to avoid a situation where two jobs are running at once.
The concern seemed to be in the `bg` and `fg` builtins, which might attempt
to foreground or background the jobs associated with `bg` and `fg` themselves.
But the builtins run before the job is marked constructed, so it cannot
actually happen.
Bravely remove this code.
Mostly line breaks, one instance of tabs!
For some reason clang-format insists on two spaces before a same-line comment?
(I continue to be unimpressed with super-strict line length limits,
but I continue to believe in automatic styling, so it is what it is)
[ci skip]
It used to error out when a command wasn't known, even when it was a
function that would only be discovered via autoloading.
Now we just accept that a command doesn't exist when no-execute is
given - we're not gonna execute it anyway.
Also, in the same breath stop counting empty commands after expansion
and empty wildcard expansions as errors - these depend on runtime
values, so we can't verify them without executing.
Fixes #977.
(note that it still executes "time", but that's another commit)
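For illustration, the no-execute mode in question is fish's syntax-check flag; the script name here is hypothetical:
```fish
# Syntax-check a script that calls an autoloaded function; this no longer
# errors out just because the command isn't known yet.
fish --no-execute my_script.fish
```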
Appending to an fd doesn't really make sense, but we allowed the
syntax previously and it was actually used.
It's not too harmful to allow it, so let's just do that again.
For the record: Zsh also allows it, bash doesn't.
Fixes #6614
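For reference, this is the kind of previously-working usage being restored (appending to stderr's fd; illustrative):
```fish
# An append-redirect to fd 2; behaves the same as >&2 here.
echo "something went wrong" >>&2
```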
When building fish-shell with the macOS 10.12 SDK, <sys/proc.h> does not
include <sys/time.h> but references `struct itimerval`. This causes a
compilation failure if we don't import <sys/time.h> ourselves.
This was previously masked by an import of <sys/sysctl.h>, which was
removed in fc0c39b6fd.
Glob ordering is used in a variety of places, including figuring out
the order of conf.d files, and really needs to be stable.
Other ordering, like completions, is really just cosmetic and can
change if it makes for a nicer experience.
So we uncouple it by copying the wcsfilecmp from 3.0.2, which will
return the ordering to what it was in that release.
Fixes #6593
The `function --on-job-exit caller` feature allows a command substitution
to observe when the parent job exits. This has never worked very well - in
particular it is based on job IDs, so a function that observes this will
run multiple times. Implement it properly.
Do this by having a not-recycled "internal job id".
This is only used by psub, but ensure it works properly nonetheless.
"job_exit" events, despite their name, can only be created via
the '--on-job-exit caller' misfeature of function. Rename it to make it
clear that this event type is specifically for caller-exit.
Add the input function undo which is bound to `\c_` (control + / on
some terminals). Redoing the most recent chain of undos is supported,
redo is bound to `\e/` for now.
Closes #1367.
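Like other readline commands, these can be bound to whatever keys you prefer; the keys below are arbitrary examples, not the defaults:
```fish
# Illustrative rebinding of the new undo/redo readline commands.
bind \cz undo
bind \ey redo
```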
This approach should not have the issues discussed in #5897.
Every single modification to the commandline can be undone individually,
except for adjacent single-character inserts, which are coalesced,
so they can be reverted with a single undo. Coalescing is not done for
space characters, so each word can be undone separately.
When moving between history search entries, only the current history
search entry is reachable via the undo history. This allows going back
to the original search string with a single undo, or by pressing the
escape key.
Similarly, when moving between pager entries, only the most recent
selection in the pager can be undone.
This switches bufferfills from using an exclusively owned thread to
sharing an fd_monitor. This allows multiple bufferfills to all use the same
thread.
fd_monitor is a new class which can monitor a set of fds, waiting for them
to become readable. When an fd becomes readable, a callback is invoked.
Timeouts are also supported.
This is intended to replace the "bufferfill" threads. Rather than one
thread per bufferfill, we will have a single fd_monitor which can service
multiple bufferfills. This helps today with nested command substitutions,
and will help in the future with concurrent execution.
This makes two changes:
1. Remove the 'brace_text_start' idea. The idea of 'brace_text_start' was
to prevent emitting `BRACE_SPACE` at the beginning or end of an item. But
we later strip these off anyways, so there is no apparent benefit. If we
are not doing brace expansion, this prevented emitting whitespace at the
beginning or end of an item, leading to #6564.
2. When performing brace expansion, only stomp the space character with
`BRACE_SPACE`; do not stomp newlines and tabs. This is because if the
character being stomped came from a newline or tab literal, we would have effectively
replaced a newline or tab with a space, so this is important for #6564 as
well. Moreover, it is not easy to place a literal newline or tab inside a
brace expansion, and users who do probably do not mean for it to be
stripped, so I believe this is a good change in general.
Fixes #6564
fish has some unprincipled code that attempts to tcsetpgrp() to own the
terminal before running a builtin; this was added because 'read' might
want to read from the terminal. I added this code before fully
understanding how process groups and terminals work. A better fix would
be to ensure that fish is marked as the pgroup leader in the job when
the builtin is the first process in the job, and we do that now.
Courageously back out the changes to grab the terminal; see #5147 and
also #5133.
Introduce pgroup_provenance_t, a type which captures "where the pgroup
comes from." This centralizes some logic around how pgroups are
assigned, and it anticipates concurrent execution.
In some cases on some platforms this could clobber errno, so doing something like
```c++
aThingThatFailsWithErrno();
FLOG(category, "Some message");
wperror("something");
```
would print the wrong error (presumably if that category was enabled).
In our case it was our (very) old friend RHEL6 returning ESPIPE instead of EISDIR.
Fixes #6545.
Prior to this fix, the cancellation C++ test would mark the parser as
interactive in an effort to install interactive signal handling (so that,
for example, SIGINT would stop the job and return control to the user).
However this flag would also cause fish to attempt to save and restore tty modes
across the job. This would fail since there is no tty, and so the job would fail
with an unexpected error code.
We don't need to mark the parser as interactive, so we can just remove that line.
Fixes #6539.
Use some more move semantics to reduce allocations.
Correctly handle the case where the completion is empty. For example, if
you type:
ls<tab>
we get an empty completion (since ls is already a valid command), but we
still want to show its description.
Remove some unsafe statics - these are unsafe today in weird cases where
completions might invoke complete recursively, and also will soon be
unsafe with concurrent execution.
Prior to this fix, fish was rather inconsistent in when $status gets set
in response to an error. For example, a failed expansion like "$foo["
would not modify $status.
This makes the following inter-related changes:
1. String expansion now directly returns the value to set for $status on
error. The value is always used.
2. parser_t::eval() now directly returns the proc_status_t, which cleans
up a lot of call sites.
3. We expose a new function exec_subshell_for_expand() which ignores
$status but returns errors specifically related to subshell expansion.
4. We reify the notion of "expansion breaking" errors. These include
command-not-found, expand syntax errors, and others.
The upshot is we are more consistent about always setting $status on
errors.
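As a quick illustration of the now-consistent behavior, command-not-found is one of the "expansion breaking" errors, and fish reports it with status 127:
```fish
# An unknown command reliably sets $status.
this-command-does-not-exist
echo $status # 127
```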
Sometimes we must spawn a new thread, to avoid the risk of deadlock.
Ensure we always spawn a thread in those cases. In particular this
includes the fillthread.
complete -C'echo $HOM ' would complete $HOM instead of a new token.
Fixes another regression introduced in
6fb7f9b6b - Fix completion for builtins with subcommands
64 is too low (it's actually reachable), and every sensible system should have a limit above
this.
On OpenBSD and FreeBSD it's ULONG_MAX; on my Linux system it's 61990.
Plus we currently fail by hanging if our limit is reached, so this
should improve things regardless.
On my Linux system, _POSIX_THREAD_THREADS_MAX works out to 64 here,
which is just too low, even though the system can handle more.
Fixes #6503 harder.
This commit recognizes an existing pattern: many operations need some
combination of a set of variables, a way to detect cancellation, and
sometimes a parser. For example, tab completion needs a parser to execute
custom completions, the variable set, should cancel on SIGINT. Background
autosuggestions don't need a parser, but they do need the variables and
should cancel if the user types something new. Etc.
This introduces a new triple operation_context_t that wraps these concepts
up. This simplifies many method signatures and argument passing.
When executing a buffered block or builtin, the usual approach is to
execute, collect output in a string, and then output that string to
stdout or whatever the redirections say. Similarly for stderr.
If we get no output, then we can elide the output step, which means
skipping the background thread. In this case we just mark the process as
finished immediately.
We do this in multiple locations which is confusing. Factor them all
together into a new function run_internal_process_or_short_circuit.
for-loops that were not inside a function could overwrite global
and universal variables with the loop variable. Avoid this by making
for-loop-variables local variables in their enclosing scope.
This means that if someone does:
```fish
set a global
for a in local; end
echo $a
```
The local $a will shadow the global one (but not be visible in child
scopes). This is surprising, but less dangerous than the previous
behavior.
The detection of whether the loop is running inside a function was failing
inside command substitutions. Remove this special handling of functions
altogether; it's not needed anymore.
Fixes #6480
'fish_test_helper print_pid_then_sleep' tried to sleep for .5 seconds,
but instead it divided by .5 so it actually slept for 2 seconds.
This exceeds the maximum value on NetBSD so it wasn't sleeping at all
there.
Fixes #6476
Empty items are used as sentinels to indicate that we've reached the end of
history, so they should not be added as actual items. Enforce this.
Fixes #6032
This reduces the syscall count for `fish -c exit` from 651 to 566.
We don't attempt to *cache* the pgrp or anything; we just call it once
when we're about to execute the job, to see if we are in the foreground and
to assign it to the job, instead of once for checking foreground and
once to give it to the job.
Caching it with a simple `static` would get the count down to 480, but
it's possible for fish to have its pgroup changed.
Store the entire function declaration, not just its job list.
This allows us to extract the body of the function complete with any
leading comments and indents.
Fixes #5285
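A sketch of what this enables (function name and body are illustrative):
```fish
function greet
    # This leading comment and the indentation are part of the stored
    # declaration, so printing the function preserves them.
    echo hello
end
functions greet # prints the definition including the comment above
```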
In particular, this allows `true && time true`, or `true; and time true`,
and both `time not true` as well as `not time true` (like bash).
time is valid only as a job _prefix_, so `true | time true` could call
`/bin/time` (same as in bash).
See discussion in #6442
Extend the commit 8e17d29e04 to block processes, for example:
```fish
begin ; stuff ; end
```
or if/while blocks as well.
Note there's an existing optimization where we do not create a job for a
block if it has no redirections.
job_promote attempts to bring the most recently "touched" job to the front
of the job list. It did this via:
```c++
std::rotate(begin, job, end)
```
However this has the effect of pushing job-1 to the end. That is,
promoting '2' in [1, 2, 3] would result in [2, 3, 1].
Correct this by replacing it with:
```c++
std::rotate(begin, job, job+1);
```
now we get the desired [2, 1, 3].
Also add a test.
This PR is aimed at improving how job ids are assigned. In particular,
previous to this commit, a job id would be consumed by functions (and
thus aliases). Since it's usual to use functions as command wrappers
this results in awkward job id assignments.
For example if the user is like me and just made the jump from vim -> neovim
then the user might create the following alias:
```
alias vim=nvim
```
Previous to this commit if the user ran `vim` after setting up this
alias, backgrounded (^Z) and ran `jobs` then the output might be:
```
Job Group State Command
2 60267 stopped nvim $argv
```
If the user subsequently opened another vim (nvim) session, backgrounded
and ran jobs then they might see what follows:
```
Job Group State Command
4 70542 stopped nvim $argv
2 60267 stopped nvim $argv
```
These job ids feel unnatural, especially when transitioning away from
e.g. bash where job ids are sequentially incremented (and aliases/functions
don't consume a job id).
See #6053 for more details.
As @ridiculousfish pointed out in
https://github.com/fish-shell/fish-shell/issues/6053#issuecomment-559899400,
we want to elide a job's job id if it corresponds to a single function in the
foreground. This translates to the following prerequisites:
- A job must correspond to a single process (i.e. the job continuation
must be empty)
- A job must be in the foreground (i.e. `&` wasn't appended)
- The job's single process must resolve to a function invocation
If all of these conditions are true then we should mark a job as
"internal" and somehow remove it from consideration when any
infrastructure tries to interact with jobs / job ids.
I saw two paths to implement these requirements:
- At the time of job creation calculate whether or not a job is
"internal" and use a separate list of job ids to track their ids.
Additionally introduce a new flag denoting that a job is internal so
that e.g. `jobs` doesn't list internal jobs
- I started implementing this route but quickly realized I was
computing the same information that would be computed later on (e.g.
"is this job a single process" and "is this jobs statement a
function"). Specifically I was computing data that populate_job_process
would end up computing later anyway. Additionally this added some
weird complexities to the job system (after the change there were two
job id lists AND an additional flag that had to be taken into
consideration)
- Once a function is about to be executed we release the current jobs
job id if the prerequisites are satisfied (which at this point have
been fully computed).
- I opted for this solution since it seems cleaner. In this
implementation "releasing a job id" is done by both calling
`release_job_id` and by marking the internal job_id member variable to
-1. The former operation allows subsequent child jobs to reuse that
same job id (so e.g. the situation described in Motivation doesn't
occur), and the latter ensures that no other job / job id
infrastructure will interact with these jobs because valid jobs have
positive job ids. The second operation causes job_id to become
non-const which leads to the list of code changes outside of `exec.c`
(i.e. a codemod from `job_t::job_id` -> `job_t::job_id()` and moving the
old member variable to a non-const private `job_t::job_id_`)
Note: It's very possible I missed something and setting the job id to -1
will break some other infrastructure; please let me know if so!
I tried to run `make/ninja lint`, but a bunch of non-relevant issues
appeared (e.g. `fatal error: 'config.h' file not found`). I did
successfully clang-format (`git clang-format -f`) and run tests, though.
This PR closes #6053.
It looks like the last status already contains the signal that cancelled
execution.
Also make `fish -c something` always return the last exit status of
"something", instead of hardcoded 127 if exited or signalled.
Fixes #6444
Previously, the block stack was a true stack. However in most cases, you
want to traverse the stack from the topmost frame down. This is awkward
to do with range-based for loops.
Switch it to pushing new blocks to the front of the block list.
This simplifies some traversals.
This was previously required so that, if there was a redirection to a
file, we would fork a process to create the file even if there was no
output. For example `echo -n >/tmp/file.txt` would have to create
file.txt even though it would be empty.
However now we open the file before fork, so we no longer need special
logic around this.
Do this only when splitting on IFS characters, which usually contain
whitespace characters --- read --delimiter is unchanged; it still
consumes no more than one delimiter per variable. This seems better,
because it allows arbitrary delimiters in the last field.
Fixes #6406
user_supplied was used to distinguish IO redirections which were
explicit vs. those that came about through "transmogrification." But
transmogrification is no more. Remove the flag.
As of GCC 7.4 (at least under macOS 10.10), the previous workaround of
casting a must-use result to `(void)` to avoid warnings about unused
code no longer works.
This workaround is uglier but it quiets these warnings.
The C++ spec (as of C++17/n4713) does not specify the sign of `wchar_t`,
saying only (in section 6.7.1: Fundamental Types)
> Type wchar_t shall have the same size, signedness, and alignment
> requirements (6.6.5) as one of the other integral types, called its
> underlying type.
On most *nix platforms on AMD64 architecture, `wchar_t` is a signed type
and can be compared with `int32_t` without incident, but on at least
some platforms (tested: clang under FreeBSD 12.1 on AARCH64), `wchar_t`
appears to be unsigned, leading to sign-comparison warnings:
```
../src/widecharwidth/widechar_width.h:512:48: warning: comparison of
integers of different signs: 'const wchar_t' and 'int32_t' (aka 'int')
[-Wsign-compare]
return where != std::end(arr) && where->lo <= c;
```
This patch forces the use of wchar_t for the range start/end values in
`widechar_range` and the associated comparison values.
Previously, if the user control-C'd out of a process, we would set a
bogus exit status in the process, but it was difficult to observe this
because we would be cancelling anyway. Now set it properly.
If a Control-C is received while expanding a command substitution, we
may execute the job anyway, because we do not check for cancellation
after the expansion. Ensure that does not happen.
This should fix sporadic test failures in the cancellation unit test.
parser_t::eval indicates whether there was a parse error. It can be
easily confused with the status of the execution. Use a real type to
make it more clear.
Looking up a variable by a string literal implicitly constructs a wcstring.
By avoiding that, we get a noticeable reduction of temporary allocations.
```
$ HOME=. heaptrack ./fish -c true
heaptrack stats: # baseline
  allocations:            7635
  leaked allocations:     3277
  temporary allocations:   602
heaptrack stats: # new
  allocations:            7565
  leaked allocations:     3267
  temporary allocations:   530
```
This adds a test for the obscure case where an fd is redirected to
itself. This is tricky because dup2 of an fd to itself will not clear the CLOEXEC bit.
So clear it manually; also posix_spawn can't be used in this case.
The IO cleanup left file redirections open in the child. For example,
/bin/cmd < file.txt would redirect stdin but also leave the file open.
Ensure these get closed properly.
Prior to this fix, a file redirection was turned into an io_file_t. This is
annoying because every place where we want to apply the redirection, we
might fail due to open() failing. Switch to opening the file at the point
we resolve the redirection spec. This will simplify a lot of code.
Prior to this change, a process, after being constructed by
parse_execution but before being executed, was given a list of
io_data_t redirections. The problem is that redirections have a
sensitive ownership policy because they hold onto fds. This made it
rather hard to reason about fd lifetime.
Change these to redirection_spec_t. This is a textual description
of a redirection after expansion. It does not represent an open file and
so its lifetime is no longer important.
This enables files to be held only on the stack, so they are no longer owned
by a process of indeterminate lifetime.
fish has to ensure that the pipes it creates do not conflict with any
explicit fds named in redirections. Switch this code to using
autoclose_fd_t to make the ownership logic more explicit, and also
introduce fd_set_t to reduce the dependence on io_chain_t.
Prior to this fix, a job would hold onto any IO redirections from its
parent. For example:
```fish
begin
    echo a
end < file.txt
```
The "echo a" job would hold a reference to the I/O redirection.
The problem is that jobs then extend the life of pipes until the job is
cleaned up. This can prevent pipes from closing, leading to hangs.
Fix this by not storing the block IO; this ensures that jobs do not
prolong the life of pipes.
Fixes #6397
Currently a job needs to know three things about its "parents:"
1. Any IO redirections for the block or function containing this job
2. The pgid for the parent job
3. Whether the parent job has been fully constructed (to defer self-disown)
These are all tracked in somewhat separate awkward ways. Collapse them
into a single new type job_lineage_t.
In preparation for concurrent execution, invert the control of function and
block execution. Allow a process to return an std::function that performs
the execution. This can be run on either the main thread or a background thread
(eventually).
This splits a string into variables according to the shell's
tokenization rules, considering quoting, escaping etc.
This runs an automatic `unescape` on the string so it's presented like
it would be passed to the command. E.g.
printf '%s\n' a\ b
returns the tokens
printf
%s\n
a b
It might be useful to add another mode "--tokenize-raw" that doesn't
do that, but this seems to be the more useful of the two.
Fixes #3823.
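A sketch of the behavior, assuming this lands as read's --tokenize flag combined with --list (the variable name is illustrative):
```fish
echo 'printf "%s\n" a\ b' | read --tokenize --list parts
printf '<%s>\n' $parts
# <printf>
# <%s\n>
# <a b>
```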
Background fillthreads are used when we want to populate a buffer from an
external command. The most common case is command substitution.
Prior to this commit, fish would spin up a fillthread whenever required.
This ended up being quite expensive.
Switch to using the iothread pool instead. This enables reusing the same
thread(s), which prevents needing to spawn new threads. This shows a big
perf win on the alias benchmark (766 -> 378 ms).
This reintroduces commits 22230a1a0d
and 9d7d70c204, now with the bug fixed.
The problem occurred when there was one thread waiting in the pool. We enqueue
an item onto the pool and attempt to wake up the thread. But before the
thread runs, we enqueue another item - this second enqueue will see the
thread waiting and attempt to wake it up as well. If the two work items
were dependent (reader/writer) then we would have a deadlock.
The fix is to check if the number of waiting threads is at least as large
as the queue. If the number of enqueued items exceeds the number of waiting
threads, then spawn a new thread always.
This added the function offset *again*, but it's already included in
the line for the current file.
And yes, I have explicitly tested a function file with a function
defined at a later line.
Fixes #6350
Since #6287, bare variable assignments do not parse, which broke
the "Unsupported use of '='" error message.
This commit catches parse errors that occur on bare variable assignments.
When a statement node fails to parse, we check whether there is at least one
prefixed variable assignment. If so, we emit the old error message.
See also #6347
This adds initial support for statements with prefixed variable assignments.
Statements like this are supported:
```fish
a=1 b=$a echo $b # outputs 1
```
Just like in other shells, the left-hand side of each assignment must
be a valid variable identifier (no quoting/escaping). Array indexing
(PATH[1]=/bin ls $PATH) is *not* yet supported, but can be added fairly
easily.
The right hand side may be any valid string token, like a command
substitution, or a brace expansion.
Since `a=* foo` is equivalent to `begin set -lx a *; foo; end`,
the assignment, like `set`, uses nullglob behavior; e.g. the command below
can safely be used to check if a directory is empty.
```fish
x=/nothing/{,.}* test (count $x) -eq 0
```
Generic file completion is done after the equal sign, so for example
pressing tab after something like `HOME=/` completes files in the
root directory
Subcommand completion works, so something like
`GIT_DIR=repo.git and command git ` correctly calls git completions
(but the git completion does not use the variable as of now).
The variable assignment is highlighted like an argument.
Closes #6048
uClibc-ng does not expose C++11 math
functions to the std namespace, breaking
compilation. This is fine as the argument
type is double.
Signed-off-by: Rosen Penev <rosenp@gmail.com>
Improve the iothread behavior by enabling an iothread to stick around for
a while waiting for work. This reduces the amount of iothread churn, which
is useful on platforms where threads are expensive.
Also do other modernization like clean up the locking discipline and use
FLOG.
fish will react to certain variable modifications, such as "TZ." Only do
this if the main stack is modified. This has no effect now because there
is always a single stack, but will become important when concurrent
execution is supported.
This adds string-x.rst for each subcommand x of string. The main page
(string.rst) is not changed, except that examples are shown directly after
each subcommand. The subcommand sections in string.rst are created by
textual inclusion of parts of the string-x.rst files.
Subcommand man pages can be viewed with either of:
```
man string collect
man string-collect
string collect <press F1 or Alt-h>
string collect -h
```
While `string -h ...` still prints the full help.
Closes #5968
Presently the completion engine ignores builtins that are part of the
fish syntax. This can be a problem when completing a string that was
based on the output of `commandline -p`. This changes completions to
treat these builtins like any other command.
This also disables generic (filename) completion inside comments and
after strings that do not tokenize.
Additionally, comments are stripped off the output of `commandline -p`.
Fixes #5415. Fixes #2705.
The history search logic had a not very useful "fast path" which was also
buggy because it neglected to dedup. Switch the "fast path" to just a
history search type which always matches.
Fixes #6278
PATH and CDPATH have special behavior around empty elements. Express this
directly in env_stack_t::set rather than via variable dispatch; this is
cleaner.
This adds support for `fish_trace`, a new variable intended to serve the
same purpose as `set -x` in bash. Setting this variable to anything
non-empty causes execution to be traced. In the future we may give more
specific meaning to the value of the variable.
The user's prompt is not traced unless you run it explicitly. Events are
also not traced because they are noisy; however, autoloading is.
Fixes #3427
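A quick illustration (the traced commands are arbitrary examples):
```fish
set -g fish_trace 1  # any non-empty value enables tracing
set -l greeting hello
echo $greeting
set -e fish_trace    # turn tracing back off
```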
When considering an autosuggestion from history, we attempt to validate the
command to ensure that we don't suggest invalid (e.g. path-dependent)
commands. Prior to this fix, we would validate the last command in the
command line (e.g. in `cd /bin && ./stuff` we would validate "./stuff".
This doesn't really make sense; we should be validating the first command
because it has the potential to change the PWD. Switch to validating the
first command.
Also remove some helper functions that became dead through this change.
Until now, something like
`math '7 = 2'`
would complain about a "missing" operator.
Now we print an error about logical operators not being supported and
point the user towards `test`.
Fixes #6096
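For comparisons like the one above, the suggested replacement is `test`:
```fish
# Use `test` rather than `math` for comparisons.
if test 7 -eq 2
    echo equal
else
    echo not equal
end
```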
Every builtin or function shipped with fish supports flag -h or --help to
print a slightly condensed version of its manpage.
Some of those help messages are longer than a typical screen;
this commit pipes the help to a pager to make it easier to read.
As in other places in fish we assume that either $PAGER or "less" is a
valid pager and use that.
In three places (error messages for bg, break and continue) the help is
printed to stderr instead of stdout. To make sure the error message is
visible in the pager, we pass it to builtin_print_help, so every call of
that function needs to be updated.
Fixes #6227
`--entire` would enable output even though `--quiet` should
have silenced it. These two don't make any sense together, so print an
error; the user could have just left off the `-q`.
sys/sysctl.h is deprecated on glibc, so it leads to warnings.
According to fa4ec55c96, it was included for KERN_PROCARGS2 for
process expansion, but process expansion is gone, so it's unused now.
(there is another use of it in common.cpp, but that's only on FreeBSD)
Also 1f06e5f0b9 only included
tokenizer.h (present since the initial commit) if KERN_PROCARGS2
wasn't available, so it can't have been important.
This builds and passes the tests on:
- Archlinux, with glibc 2.30
- Alpine, with musl
- FreeBSD
- NetBSD
Universal exported variables (created by `set -xU`) used to show up
both as a universal and a global variable in child instances of fish.
As a result, when changing an exported universal variable, the
new value would only be visible after a new login (or deleting the
variable from global scope in each fish instance).
Additionally, something like `set -xU EDITOR vim -g` would be imported
into the global scope as a single word resulting in failures to
execute $EDITOR in fish.
We cannot simply give precedence to universal variables, because
another process might have exported the same variable. Instead, we
only skip importing a variable when it is equivalent to an exported
universal variable with the same name. We compare their values after
joining with spaces, hence skipping those imports does not change the
environment fish passes to its children. Only the representation in
fish is changed from `"vim -g"` to `vim -g`.
Closes #5258.
This eliminates the issue #5348 for universal variables.
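A sketch of the resulting behavior, using the EDITOR example from above (the child-process check is just illustrative):
```fish
set -xU EDITOR vim -g
fish -c 'count $EDITOR' # 2: the child sees a two-element list, not a single "vim -g" word
```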
Consider a group of short options, like -xzPARAM, where x and z are options and z takes an argument.
This commit enables completion of the argument to the last option (z), both within the same
token (-xzP) or in the next one (-xz P).
complete -C'-xz' will complete only parameters to z.
complete -C'-xz ' will complete only parameters to z if z requires a parameter;
otherwise, it will also complete non-option parameters.
To do so this implements a heuristic to differentiate such strings from single long options. To
detect whether our token contains some short options, we only require the first character after the
dash (here x) to be an option. Previously, all characters had to be short options. The last option
in our example is z. Everything after the last option is assumed to be a parameter to the last
option.
Assume there is also a single long option -x-foo; then complete -C'-x' will suggest both -x-foo and
-xz. However, when the single option x requires an argument, this will not suggest -x-foo.
However, I assume this will almost never happen in practice since completions very rarely mix
short and single long options.
Fixes #332
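A hypothetical completion setup matching the -x/-z example above (the command name and argument values are made up):
```fish
# 'mytool' takes short options -x (no argument) and -z (argument required).
complete -c mytool -s x
complete -c mytool -s z -x -a 'alpha beta'
# Now `mytool -xz<TAB>` and `mytool -xz <TAB>` offer alpha/beta as arguments to -z.
```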
In e167714899 we allowed recursive calls
to complete. However, some completions use infinite recursion in their
completions and rely on `complete` to silently stop as soon as it is
called recursively twice without parameter (thus completing the
current commandline). For example:
complete -c su -s -xa "(complete -C(commandline -ct))"
su -c <TAB>
Infinite recursion happens because (commandline -ct) is an empty list,
which would print an error message. This commit explicitly detects
such recursive calls where `complete` has no parameter and silently
terminates. This enables the above completion (like before raising the
recursion limit) while still allowing legitimate cases with limited
recursion.
Closes #6171
This stops reading argument names after another option appears. It does not break any previous uses and in fact fixes uses like
```fish
function foo --argument-names bar --description baz
```
* `function` command handles options after argument names (Fixes #6186)
* Removed unnecessary test
Prior to this fix, self-insert would always wait for a new character.
Track in char_event what sequence generated the readline event, and then
if the sequence is not empty, insert that sequence.
This will support implementing the space binding via pure readline
functions.
Previously, tab-completion would move the cursor to the end of the current token, even
if no completion is inserted. This commit defers moving the cursor until we insert a completion.
Fixes #4124
If interactive, `complete` commands are highlighted like they would
be if typed. Adds a little fun contrast and it's easier to read.
Moved a function out of fish_indent to highlight.h
We now print --long options for the ones I arbitrarily decided
are less likely to be remembered.
Also fixed the `--wraps` items at the end not being escaped.
Most of our completion scripts are written using the short options
anyhow, and this makes it less likely the output will span several
lines per command.
This sequence can be generated by control-spacebar. Allow it to be bound
properly.
To do this we must be sure that we never round-trip the key sequence
through a C string.
Fish completes parts of words split by the separators, so things like
`dd if=/dev/sd<TAB>` work.
This commit improves interactive completion if completion strings legitimately
contain '=' or ':'. Consider this example, where completion will suggest
a:a:1 and other files in the cwd in addition to a:1:
```fish
touch a:1; complete -C'ls a:'
```
This behavior remains unchanged, but this commit allows quoting or escaping
separators, so that e.g. `ls "a:<TAB>` and `ls a\:<TAB>` successfully complete
the filename.
This also makes the completion insert those escapes automatically unless
already quoted.
So `ls a<TAB>` will give `ls a\:1`.
Both changes match bash's behavior.
Instead of warning (debug level 1), we now emit an error (debug level 0) if a known bad version of
WSL is detected. However, `FISH_NO_WSL_CHECK` can now be defined to skip both the check and the
startup message.
fish is designed to append to the history file in most cases. However,
save_internal_via_appending was never returning success, so we were
always doing the slow rewrite path. Correctly return success.
Fixes #6042