This makes it so `complete -c foo -n test1 -n test2` registers *both*
conditions, and when it comes time to check the candidate, fish tries both,
in that order. If any condition fails, evaluation stops; if all succeed, the completion is offered.
The reason for this is that it helps with caching - we have a
condition cache, but conditions like
```fish
test (count (commandline -opc)) -ge 2; and contains -- (commandline -opc)[2] length
test (count (commandline -opc)) -ge 2; and contains -- (commandline -opc)[2] sub
```
defeat it pretty easily, because the cache only looks at the entire
script as a string - it can't tell that the first `test` is the same
in both.
So this means we separate it into
```fish
# before
complete -f -c string -n "test (count (commandline -opc)) -ge 2; and contains -- (commandline -opc)[2] length" -s V -l visible -d "Use the visible width, excluding escape sequences"
# after
complete -f -c string -n "test (count (commandline -opc)) -ge 2" -n "contains -- (commandline -opc)[2] length" -s V -l visible -d "Use the visible width, excluding escape sequences"
```
which allows the `test` to be cached.
In tests, this improves performance for the string completions by 30%
by reducing all the redundant `test` calls.
The `git` completions can also greatly benefit from this.
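For illustration, a hedged sketch with a hypothetical `mytool` command: both completions repeat the first `-n` condition verbatim, so its result can be served from the condition cache, while the second `-n` narrows each candidate.
```fish
# The shared first condition is evaluated once and cached; the second
# condition runs only if the first succeeded.
complete -c mytool -n "test (count (commandline -opc)) -ge 2" -n "contains -- (commandline -opc)[2] add" -a "--force"
complete -c mytool -n "test (count (commandline -opc)) -ge 2" -n "contains -- (commandline -opc)[2] remove" -a "--recursive"
```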
The "^" redirection that was deprecated in fish 3.0 is now gone for good.
The stderr-nocaret feature flag can no longer be turned off. If "no-stderr-nocaret" is in
$fish_features it will simply be ignored.
Note: For testing reasons, it can still be set _internally_ by running
"feature_flags_t::set". We simply shouldn't do that.
If we ever need any of these... they're in this commit:
fish_wcswidth_visible()
status_cmd_opts_t::feature_name
completion_t::is_naturally_less_than()
parser_t::set_empty_var_and_fire()
parser_t::get_block_desc()
parser_keywords_skip_arguments()
parser_keywords_is_block()
job_t::has_internal_proc()
* Implement fish_wcstod_underscores
* Add fish_wcstod_underscores unit tests
* Switch to using fish_wcstod_underscores in tinyexpr
* Add tests for math builtin underscore separator functionality
* Add documentation for underscore separators for math builtin
* Add a changelog entry for underscore numeric separators
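Roughly, the feature looks like this (the outputs are what one would expect from ignoring the separators, not copied from the test suite):
```fish
# Underscores are purely visual digit separators and are ignored when parsing
math 1_000_000 + 1    # 1000001
math 2_000 / 4        # 500
```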
This reverts commits:
2d9e51b43ed1d9f147ec346ce8081b
The box drawing support goes too because it's entangled with the rest and we don't
currently use it anywhere I know of. Nor was it gated on terminfo,
so it could have broken things, for subjectively little gain.
Fixes #8727.
The previous commit added transient commandlines when completing
commands with variable overrides. Transient commandlines require a
parser, but perform_one_completion_cd_test() asked for completions
without giving a parser, which is only okay when asking for
autosuggestions (like perform_one_autosuggestion_cd_test() does).
Let's pass a parser to fix the test.
The sole notifiers test recreated the uvar directory, so if it ran
while the universal test was running it would stop that test from
completing correctly.
This happened reasonably often on Ubuntu with TSan on GitHub Actions.
Cygwin tests are failing because cygwin has a low limit of only 64 fds in
select(). Extend select_wrapper_t to also support using poll(), controlled by
a new FISH_USE_POLL define. All systems now use poll() except for Mac.
Rename select_wrapper_t to fd_readable_set_t since now it may not wrap
select().
This allows the deep-cmdsub.fish test to pass on Cygwin.
This unit test was passing 0 instead of a pointer to indicate the end of
a varargs list; this might fail on 64-bit platforms, and indeed did fail on Cygwin. This
fixes the Cygwin expand test.
A variable may be broken across multiple lines with a backslash, for
example:
> echo $FISH_\
VERSION
Teach syntax highlighting about this line breaking. Fixes #8444
+ No functional change here, just renames and #include changes.
+ CMake can't have slashes in the target names. I'm suspicious of
that weird machinery for test, but I made it work.
+ A couple of builtins did not include their own headers, that
is no longer the case.
Now looks like
```
Error: Wrong color in test at index 8-11 in text (expected 0x6, actual 0x2):
command echo abc foo &
^^^^
```
instead of repeating the error for every character that is wrong.
This introduces a new variable, $fish_color_option, that can be used
to highlight options differently.
Options are tokens starting with `-`, but only up to (and including!)
the first `--`.
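A hedged example of the new variable and the `--` cutoff (the color choice and the `grep` invocation are only for illustration):
```fish
# Give options their own color, distinct from plain parameters
set -g fish_color_option bryellow
# -r and --color=auto (and the -- itself) are colored as options;
# -literal-dash comes after the first -- and is treated as a plain argument
grep -r --color=auto -- -literal-dash .
```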
Fixes #8292.
The default matching logic for fish_tests was prefix based, so when we
were running `history` we were also running all history tests. This
caused the test to fail for an unknown reason.
Even though we are using CMake's ctest for testing, we still define our
own `make test` target rather than use its default for many reasons:
* CMake doesn't run tests in-proc or even add each test as an
individual node in the ninja dependency tree; instead it just bundles
all tests into a target called `test` that always just shells out to
`ctest`, so there are no build-related benefits to not doing that
ourselves.
* CMake devs insist that it is appropriate for `make test` to never
depend on `make all`, i.e. running `make test` does not require any
of the binaries to be built before testing.
* The only way to have a test depend on a binary is to add a fake test
with a name like "build_fish" that executes CMake recursively to
build the `fish` target.
* It is not possible to set top-level CTest options/settings such as
CTEST_PARALLEL_LEVEL from within the CMake configuration file.
* Circling back to the point about individual tests not being actual
Makefile targets, CMake does not offer any way to execute a named
test via the `make`/`ninja`/whatever interface; the only way to
manually invoke test `foo` is to manually run `ctest` and specify
a regex matching `foo` as an argument, e.g. `ctest -R ^foo$`... which
is really crazy.
With this patch, it is now possible to execute any single test by name,
by invoking the build directly, e.g. to run the `universal.fish` check:
`cmake --build build --target universal.fish` or
`ninja -C build universal.fish`. Unfortunately, this is not integrated
into the Makefile wrapper, so `make universal.fish` won't work (although
this can potentially be hacked around).
Instead of compiling `fish_tests.cpp` dynamically with weakly-linked
symbols and asking it to print the list of all available tests, we
use a magic string `#define`'d as a no-op to allow CMake to regex search
for matching test groups. This speeds up configuration somewhat (by not
compiling anything), but more importantly, it's much less brittle and
doesn't involve any linker dark magic.
There's of course still no getting around the fact that it's really ugly.
This is opt-in through a new feature flag "ampersand-nobg-in-token".
When this flag and "qmark-noglob" are enabled, this command no longer
needs quoting:
curl https://example.com/thing?foo=bar&duran=duran
Compared to the previous approach e1570a4 ("Let '&' only separate as
the first char of a word"), this has some advantages:
1. "&&" and "&>" are no longer affected. They are still special, even
if used between tokens without spaces, like "echo bar&>foo".
Maybe this is not really *better*, but it avoids the risk of annoying
users by breaking the old variant.
2. "&" is still special if at the end of a token, like in "sleep 1&".
Word movement is not affected by the semantics change, so Alt-F and
friends still stop at every "&".
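A sketch of opting in (feature flags are read from $fish_features, so this takes effect in newly started sessions):
```fish
# Enable the new flag (and qmark-noglob) via the fish_features variable
set -Ua fish_features ampersand-nobg-in-token qmark-noglob
# In a new session, this no longer needs quoting:
curl https://example.com/thing?foo=bar&duran=duran
```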
This adds a hack to the parser. Given a command
echo "x$()y z"
we virtually insert double quotes before and after the command
substitution, so the command internally looks like
echo "x"$()"y z"
This hack allows us to reuse the existing logic for handling (recursive)
command substitutions.
This makes the quoting syntax more complex; external highlighters
should consider adding support for this if possible.
The upside (more Bash compatibility) seems worth it.
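For example, this now parses as a single double-quoted argument with the substitution's output interpolated into it (a minimal sketch):
```fish
# The command substitution's output becomes part of the surrounding string
# instead of producing separate arguments
echo "fish lives at $(command -v fish), version $FISH_VERSION"
```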
Closes #159
In preparation for using wait handles in --on-process-exit events, factor
wait handles into their own wait handle store. Also switch them to
per-process instead of per-job, which is a simplification.
Previously an instance of env_universal_t had to be created with a file
path. Switch to allowing it to be created as empty, and later initialized
with the file path. This will help simplify the case where universal
variables are not used; they may simply be not initialized and so just
appear empty.
In the named pipe notifier, notifications are broadcast by writing to the
pipe, waiting briefly, and then reading it back. When clients see the pipe
as readable, they report the uvars as potentially changed and fish will
sync against the uvar file.
Prior to this change, we synced repeatedly when the pipe was readable. But
we can do somewhat better by also checking the named pipe's timestamp (via
fstat). If the pipe has not changed, then we can skip the sync even if
there is currently data lingering on it.
With this change we should sync against the variable file less often
(typically once or twice per write); in the next change we refactor this
logic so it's easier to follow.
This concerns the problem of "injecting" fancy fish bits like job reaping
into the "common" input stuff which is also used by fish_key_reader.
Instead of providing a callback, make the input event queue a base class
with virtual functions. This allows for a richer interface and simplifies
some memory management issues.
select_wrapper_t wraps up the annoying bits of using select(): keeping
track of the max fd, passing null for boring parameters, and
constructing the timeout. Introduce a wrapper struct for this and
replace the existing uses of select() with the wrapper.
If fish launches a program and that program marks stdin as O_ASYNC, then
fish will start receiving SIGIO events on Mac. This occurs even though
the file descriptor itself does not have the O_ASYNC flag set.
SIGIO is reported as interrupting select which then breaks multiple-key
bindings, especially in vi-mode.
As the SIGIO-based universal notifier is disabled, remove it and the
SIGIO handler itself. This allows fish to properly ignore SIGIO.
Fixes #7853
Several functions including wgetopt and execve operate on null-terminated
arrays of nul-terminated strings: a list of pointers to C strings where
the last pointer is null. Prior to this change, each process_t stored its
argv in such an array. This had two problems:
1. It was awkward to work with this type, instead of using std::vector,
etc.
2. The process's arguments would be rearranged by builtins, which is
surprising.
Our null terminated arrays were built around a fancy type that would copy
input strings and also generate an array of pointers to them, in one big
allocation.
Switch to a new model where we construct an array of pointers over
existing strings. So you can supply a `vector<string>` and now
`null_terminated_array_t` will just make a list of pointers to them. Now
processes can just store their argv in a familiar wcstring_list_t.
Prior to this change, builtins would take their arguments as `wchar_t **`.
This implies that the order of the arguments may be changed (which is
true, `wgetopter` does so) but also that the strings themselves may be
changed, which no builtin should do.
Switch them all to take `const wchar_t **` instead: now the arguments may
be rearranged but their contents may no longer be modified.
- Check for special characters *before* attempting to parse
- Also ignore lines with `{` and `*`
- Also skip lines with `<<` because that might be a heredoc (or a
  `<<<` herestring)
Fixes #7874.
Previously wbasename and wdirname wrapped the system-provided basename
and dirname. But these have thread-safety issues and some surprising
error conditions on Mac. Just reimplement these per the Open Group spec.
In particular these no longer trigger a null-dereference if the input
exceeds PATH_MAX.
Add some tests too.
This fixes #7837
Creating a file called "xfoo" could break the highlight tests because
we'd suddenly get a color with valid_path set to true.
So what we do is simply compare foreground/background and forced
underline, but only check for path validity if we're expecting a valid
path.
If we're not expecting a valid path, we don't fail whether it is there
or not.
This means that we can't check for a non-valid path, but we don't
currently do that anyway and we can just burn that bridge when we get
to it.
cc @siteshwar @krobelus, who both came across this
Similar to what fish_indent does. After typing "echo \" and hitting return,
the cursor will be indented.
A possible annoyance is that when you have multiple indented lines
echo 1 \
    2 \
    3 \
    4 \
If you remove lines in the middle with Control-k, the lines below
the deleted one will start jumping around, as they are disconnected
from and reconnected to "echo".
If a variable is undefined, but it looks like it will be defined by the
current command line, assume the user knows what they are doing.
This should cover most real-world occurrences.
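A rough example of what the heuristic accepts:
```fish
# $tmpdir is undefined when highlighting starts, but this command line
# defines it before use, so it is no longer flagged as an error
set -l tmpdir (mktemp -d); and cd $tmpdir
```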
Closes #6654
In an interactive shell, typing "for x in (<RET>" would print an error:
fish: Expected end of the statement, but found a parse_token_type_t::tokenizer_error
Our tokenizer converts "(" into a special error token, hence this message.
Fix two cases by not reporting errors, but only if we allow parsing incomplete
input. I'm not really sure if this is necessary, but it's sufficient.
Fixes #7693
This may slightly improve performance by allowing the compiler greater
visibility into what is happening on top of not executing at runtime in
some hot paths, but more importantly, it gets rid of magic constants in a
few different places.
This introduces a new variable $fish_color_keyword that will be used
to highlight keywords. If it's not defined, we fall back on
$fish_color_command as before.
An issue here is that most of our keywords have this weird duality of
also being builtins *if* executed without an argument or with
`--help`.
This means that e.g.
if
is highlighted as a command until you start typing
if t
and then it turns into a keyword.
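A minimal sketch of using the new variable (the color choice is arbitrary):
```fish
# Color keywords like if, for, while and function differently from commands;
# if $fish_color_keyword is unset, $fish_color_command is used as before
set -g fish_color_keyword magenta
```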
This concerns how fish prevents its own fds from interfering with
user-defined fd redirections, like `echo hi >&5`. fish has historically
done this by tracking all user defined redirections when running a job,
and ensuring that pipes are not assigned the same fds. However this is
annoying to pass around - it means that we have to thread user-defined
redirections into pipe creation.
Take a page from zsh and just ensure that all pipes we create have fds in
the "high range," which here means at least 10. The primary way to do this
is via the F_DUPFD_CLOEXEC syscall, which also sets CLOEXEC, so we aren't
invoking additional syscalls in the common case. This will free us from
having to track which fds are in user-defined redirections.
Previously we sometimes wanted to access an io_buffer_t to append to it
directly, but that's no longer true; all we really care about is its
separated_buffer_t. Make io_bufferfill_t::finish return the
separated_buffer directly, simplifying call sites. No user visible changes
expected here.
This concerns builtins writing to an io_buffer_t. io_buffer_t is how fish
captures output, especially in command substitutions:
set STUFF (string upper stuff)
Recall that io_buffer_t fills itself by reading from an fd (typically
connected to stdout of the command). However if our command is a builtin,
then we can write to the buffer directly.
Prior to this change, when a builtin anticipated writing to an
io_buffer_t, it would first write into an internal buffer, and then after
the builtin was finished, we would copy it to the io_buffer_t. This was
because we didn't have a polymorphic receiver for builtin output: we
always buffered it and then directed it to the io_buffer_t or file
descriptor or stdout or whatever.
Now that we have polymorphic io_streams_t, we can notice ahead of time
that the builtin output is destined for an internal buffer and have it
just write directly to that buffer. This saves a buffering step, which is
a nice simplification.
This sometimes fails on GitHub Actions with ASAN. I am assuming that's
because the ctrl-c happens *before* the process has had a chance to
start.
So we do what we do and increase the delay.