A variable may be broken across multiple lines with a backslash, for
example:
> echo $FISH_\
VERSION
Teach syntax highlighting about this line breaking. Fixes #8444
+ No functional change here, just renames and #include changes.
+ CMake can't have slashes in the target names. I'm suspicious of
that weird machinery for test, but I made it work.
+ A couple of builtins did not include their own headers; that
is no longer the case.
Now looks like
```
Error: Wrong color in test at index 8-11 in text (expected 0x6, actual 0x2):
command echo abc foo &
        ^^^^
```
instead of repeating the error for every character that is wrong.
This introduces a new variable, $fish_color_option, that can be used
to highlight options differently.
Options are tokens starting with `-`, but only up to (and including!)
the first `--`.
Fixes #8292.
The default matching logic for fish_tests was prefix-based, so when we
were running `history` we were also running all history tests. This
caused the test to fail for an unknown reason.
Even though we are using CMake's ctest for testing, we still define our
own `make test` target rather than use its default for many reasons:
* CMake doesn't run tests in-proc or even add each test as an
individual node in the ninja dependency tree, instead it just bundles
all tests into a target called `test` that always just shells out to
`ctest`, so there are no build-related benefits to not doing that
ourselves.
* CMake devs insist that it is appropriate for `make test` to never
depend on `make all`, i.e. running `make test` does not require any
of the binaries to be built before testing.
* The only way to have a test depend on a binary is to add a fake test
with a name like "build_fish" that executes CMake recursively to
build the `fish` target.
* It is not possible to set top-level CTest options/settings such as
CTEST_PARALLEL_LEVEL from within the CMake configuration file.
* Circling back to the point about individual tests not being actual
Makefile targets, CMake does not offer any way to execute a named
test via the `make`/`ninja`/whatever interface; the only way to
manually invoke test `foo` is to manually run `ctest` and specify
a regex matching `foo` as an argument, e.g. `ctest -R ^foo$`... which
is really crazy.
With this patch, it is now possible to execute any single test by name,
by invoking the build directly, e.g. to run the `universal.fish` check:
`cmake --build build --target universal.fish` or
`ninja -C build universal.fish`. Unfortunately, this is not integrated
into the Makefile wrapper, so `make universal.fish` won't work (although
this can potentially be hacked around).
Instead of compiling `fish_tests.cpp` dynamically with weakly-linked
symbols and asking it to print the list of all available tests, we
use a magic string `#define`'d as a no-op to allow CMake to regex search
for matching test groups. This speeds up configuration somewhat (by not
compiling anything), but more importantly, it's much less brittle and
doesn't involve any linker dark magic.
There's of course still no getting around the fact that it's really ugly.
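As a rough illustration of the pattern (the macro name here is made up, not the actual marker fish uses), the C++ side can be as simple as:
```
// Hypothetical no-op marker macro: it expands to nothing, so it costs nothing
// at compile or run time, but CMake can regex-search fish_tests.cpp for it to
// discover the names of the available test groups.
#define TEST_GROUP(name)

TEST_GROUP("history")
static void test_history() { /* ... body of the "history" test group ... */ }
```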
This is opt-in through a new feature flag "ampersand-nobg-in-token".
When this flag and "qmark-noglob" are enabled, this command no longer
needs quoting:
curl https://example.com/thing?foo=bar&duran=duran
Compared to the previous approach e1570a4 ("Let '&' only separate as
the first char of a word"), this has some advantages:
1. "&&" and "&>" are no longer affected. They are still special, even
if used between tokens without spaces, like "echo bar&>foo".
Maybe this is not really *better*, but it avoids the risk of annoying
users by breaking the old variant.
2. "&" is still special if at the end of a token, like in "sleep 1&".
Word movement is not affected by the semantics change, so Alt-F and
friends still stop at every "&".
This adds a hack to the parser. Given a command
echo "x$()y z"
we virtually insert double quotes before and after the command
substitution, so the command internally looks like
echo "x"$()"y z"
This hack allows us to reuse the existing logic for handling (recursive)
command substitutions.
This makes the quoting syntax more complex; external highlighters
should consider adding support for it if possible.
The upside (more Bash compatibility) seems worth it.
Closes #159
In preparation for using wait handles in --on-process-exit events, factor
wait handles into their own wait handle store. Also switch them to
per-process instead of per-job, which is a simplification.
Previously an instance of env_universal_t had to be created with a file
path. Switch to allowing it to be created as empty, and later initialized
with the file path. This will help simplify the case where universal
variables are not used; they may simply be not initialized and so just
appear empty.
In the named pipe notifier, notifications are broadcast by writing to the
pipe, waiting briefly, and then reading it back. When clients see the pipe
as readable, they report the uvars as potentially changed and fish will
sync against the uvar file.
Prior to this change, we synced repeatedly when the pipe was readable. But
we can do somewhat better by also checking the named pipe's timestamp (via
fstat). If the pipe has not changed, then we can skip the sync even if
there is currently data lingering on it.
With this change we should sync against the variable file less often
(typically once or twice per write); in the next change we refactor this
logic so it's easier to follow.
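A minimal sketch of the timestamp check described above (illustrative only; the real notifier tracks more state than this):
```
#include <sys/stat.h>
#include <ctime>

// Return true if the named pipe's mtime differs from the last one we saw,
// updating *last_mtime as a side effect. On fstat() failure we report a
// change, so that we still fall back to syncing.
static bool pipe_timestamp_changed(int pipe_fd, time_t *last_mtime) {
    struct stat buf;
    if (fstat(pipe_fd, &buf) != 0) return true;
    bool changed = (buf.st_mtime != *last_mtime);
    *last_mtime = buf.st_mtime;
    return changed;
}
```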
This concerns the problem of "injecting" fancy fish bits like job reaping
into the "common" input stuff which is also used by fish_key_reader.
Instead of providing a callback, make the input event queue a base class
with virtual functions. This allows for a richer interface and simplifies
some memory management issues.
select_wrapper_t wraps up the annoying bits of using select(): keeping
track of the max fd, passing null for boring parameters, and
constructing the timeout. Introduce a wrapper struct for this and
replace the existing uses of select() with the wrapper.
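Roughly, the wrapper bundles that bookkeeping like so (a sketch with made-up names, not the actual select_wrapper_t interface):
```
#include <sys/select.h>
#include <algorithm>
#include <cstdint>

struct select_sketch_t {
    fd_set readfds_;
    int max_fd_ = -1;

    select_sketch_t() { FD_ZERO(&readfds_); }

    // Remember the highest fd so callers don't have to track it themselves.
    void add(int fd) {
        FD_SET(fd, &readfds_);
        max_fd_ = std::max(max_fd_, fd);
    }

    // Wait up to usecs microseconds for readability; the write and error sets
    // are always null because we never care about them.
    int select(uint64_t usecs) {
        struct timeval timeout;
        timeout.tv_sec = usecs / 1000000;
        timeout.tv_usec = usecs % 1000000;
        return ::select(max_fd_ + 1, &readfds_, nullptr, nullptr, &timeout);
    }
};
```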
If fish launches a program and that program marks stdin as O_ASYNC, then
fish will start receiving SIGIO events on Mac. This occurs even though
the file descriptor itself does not have the O_ASYNC flag set.
SIGIO is reported as interrupting select, which then breaks multiple-key
bindings, especially in vi-mode.
As the SIGIO-based universal notifier is disabled, remove it and the
SIGIO handler itself. This allows fish to properly ignore SIGIO.
Fixes #7853
Several functions including wgetopt and execve operate on null-terminated
arrays of nul-terminated strings: a list of pointers to C strings where
the last pointer is null. Prior to this change, each process_t stored its
argv in such an array. This had two problems:
1. It was awkward to work with this type, instead of using std::vector,
etc.
2. The process's arguments would be rearranged by builtins, which is
surprising.
Our null terminated arrays were built around a fancy type that would copy
input strings and also generate an array of pointers to them, in one big
allocation.
Switch to a new model where we construct an array of pointers over
existing strings. So you can supply a `vector<string>` and now
`null_terminated_array_t` will just make a list of pointers to them. Now
processes can just store their argv in a familiar wcstring_list_t.
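The core of the new model is just a borrowed pointer view over strings owned elsewhere; roughly (names illustrative, not the exact fish class):
```
#include <string>
#include <vector>

// Build a null-terminated array of C-string pointers over existing strings.
// The strings must outlive the returned vector, since we only borrow them.
static std::vector<const wchar_t *> make_null_terminated(const std::vector<std::wstring> &strs) {
    std::vector<const wchar_t *> ptrs;
    ptrs.reserve(strs.size() + 1);
    for (const auto &s : strs) ptrs.push_back(s.c_str());
    ptrs.push_back(nullptr);  // terminating null pointer, as execve() and friends expect
    return ptrs;
}
```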
Prior to this change, builtins would take their arguments as `wchar_t **`.
This implies that the order of the arguments may be changed (which is
true, `wgetopter` does so) but also that the strings themselves may be
changed, which no builtin should do.
Switch them all to take `const wchar_t **` instead: now the arguments may
be rearranged but their contents may no longer be modified.
- Check for special characters *before* attempting to parse
- Also ignore lines with `{` and `*`
- Also skip lines with `<<` because that might be a heredoc (or a `<<<` herestring)
Fixes #7874.
Previously wbasename and wdirname wrapped the system-provided basename
and dirname. But these have thread-safety issues and some surprising
error conditions on Mac. Just reimplement these per the OpenGroup spec.
In particular these no longer trigger a null-dereference if the input
exceeds PATH_MAX.
Add some tests too.
This fixes #7837
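For reference, the OpenGroup basename() rules boil down to roughly the following (a simplified sketch, not the exact new implementation):
```
#include <string>

static std::wstring wbasename_sketch(std::wstring path) {
    if (path.empty()) return L".";
    // Strip trailing slashes, but keep a lone "/".
    while (path.size() > 1 && path.back() == L'/') path.pop_back();
    if (path == L"/") return path;
    auto pos = path.find_last_of(L'/');
    return pos == std::wstring::npos ? path : path.substr(pos + 1);
}
```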
Creating a file called "xfoo" could break the highlight tests because
we'd suddenly get a color with valid_path set to true.
So what we do is simply compare foreground/background and forced
underline, but only check for path validity if we're expecting a valid
path.
If we're not expecting a valid path, we don't fail whether it is there
or not.
This means that we can't check for a non-valid path, but we don't
currently do that anyway and we can just burn that bridge when we get
to it.
cc @siteshwar @krobelus, who both came across this
Similar to what fish_indent does. After typing "echo \" and hitting return,
the cursor will be indented.
A possible annoyance is that when you have multiple indented lines
echo 1 \
    2 \
    3 \
    4 \
If you remove lines in the middle with Control-k, the lines below
the deleted one will start jumping around, as they are disconnected
from and reconnected to "echo".
If a variable is undefined, but it looks like it will be defined by the
current command line, assume the user knows what they are doing.
This should cover most real-world occurrences.
Closes #6654
In an interactive shell, typing "for x in (<RET>" would print an error:
fish: Expected end of the statement, but found a parse_token_type_t::tokenizer_error
Our tokenizer converts "(" into a special error token, hence this message.
Fix two cases by not reporting errors, but only if we allow parsing incomplete
input. I'm not really sure if this is necessary, but it's sufficient.
Fixes #7693
This may slightly improve performance by allowing the compiler greater
visibility into what is happening, on top of not executing at runtime in
some hot paths, but more importantly, it gets rid of magic constants in a
few different places.
This introduces a new variable $fish_color_keyword that will be used
to highlight keywords. If it's not defined, we fall back on
$fish_color_command as before.
An issue here is that most of our keywords have this weird duality of
also being builtins *if* executed without an argument or with
`--help`.
This means that e.g.
if
is highlighted as a command until you start typing
if t
and then it turns keyword.
This concerns how fish prevents its own fds from interfering with
user-defined fd redirections, like `echo hi >&5`. fish has historically
done this by tracking all user defined redirections when running a job,
and ensuring that pipes are not assigned the same fds. However this is
annoying to pass around - it means that we have to thread user-defined
redirections into pipe creation.
Take a page from zsh and just ensure that all pipes we create have fds in
the "high range," which here means at least 10. The primary way to do this
is via fcntl's F_DUPFD_CLOEXEC operation, which also sets CLOEXEC, so we aren't
invoking additional syscalls in the common case. This will free us from
having to track which fds are in user-defined redirections.
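The core trick is a single fcntl() call per pipe end, roughly (an illustrative sketch, not the actual fish code):
```
#include <fcntl.h>
#include <unistd.h>

// Move an fd into the "high" range (>= 10) so it cannot collide with
// user-visible redirections like >&5. F_DUPFD_CLOEXEC duplicates to the
// lowest free fd at or above the given minimum and sets CLOEXEC in one call.
static int promote_to_high_fd(int fd) {
    int high = fcntl(fd, F_DUPFD_CLOEXEC, 10);
    if (high < 0) return fd;  // keep the original fd if duplication fails
    close(fd);
    return high;
}
```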
Previously we sometimes wanted to access an io_buffer_t to append to it
directly, but that's no longer true; all we really care about is its
separated_buffer_t. Make io_bufferfill_t::finish return the
separated_buffer directly, simplifying call sites. No user visible changes
expected here.
This concerns builtins writing to an io_buffer_t. io_buffer_t is how fish
captures output, especially in command substitutions:
set STUFF (string upper stuff)
Recall that io_buffer_t fills itself by reading from an fd (typically
connected to stdout of the command). However if our command is a builtin,
then we can write to the buffer directly.
Prior to this change, when a builtin anticipated writing to an
io_buffer_t, it would first write into an internal buffer, and then after
the builtin was finished, we would copy it to the io_buffer_t. This was
because we didn't have a polymorphic receiver for builtin output: we
always buffered it and then directed it to the io_buffer_t or file
descriptor or stdout or whatever.
Now that we have polymorphic io_streams_t, we can notice ahead of time
that the builtin output is destined for an internal buffer and have it
just write directly to that buffer. This saves a buffering step, which is
a nice simplification.
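Conceptually the receiver looks something like this (a sketch; the real io_streams_t and separated_buffer_t types are richer than this):
```
#include <string>

// An abstract sink for builtin output. A builtin writes through this
// interface without knowing whether the bytes land in an in-memory buffer
// (command substitution) or are forwarded to a file descriptor.
struct output_sink_t {
    virtual void append(const std::string &data) = 0;
    virtual ~output_sink_t() = default;
};

// Sink used when output is captured: appends straight into the buffer,
// with no extra copy after the builtin finishes.
struct buffer_sink_t final : output_sink_t {
    std::string contents;
    void append(const std::string &data) override { contents += data; }
};
```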
This sometimes fails on github actions with ASAN. I am assuming that's
because the ctrl-c happens *before* the process has had a chance to
start.
So we do what we do and increase the delay.
Prior to this change, histories were immortal and allocated with either
unique_ptr or just leaked via new. But this can result in races in the
path detection test, as the destructor races with the pointer-captured
history. Switch to using shared_ptr.
When adding a command to history, we first expand its arguments to see
if any arguments are paths which refer to files. If so, we will only
autosuggest that command from history if the files are still valid. For
example, if the user runs `rm ./file.txt` then we will remember that
`./file.txt` referred to a file, and then only autosuggest that if the file
is present again.
Prior to this change we only performed simple expansion relative to the
working directory. This change extends it to variables and tilde
expansion. For example we will now apply the same hinting for
`rm ~/file.txt`
Fixes #7582
In preparation for fixing #7559, add a function poke_item to fd_monitor.
fd_monitor has a list of file descriptors, and invokes a callback when an
fd becomes readable. With this change, we assign each item a unique ID and
return it when the item is added; the ID may then be used to invoke the
callback explicitly.
The idea is that we can stop reading from the pipe associated with the
cmdsub when the job is finished, even if the pipe is still open.
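The ID-based bookkeeping is roughly as follows (illustrative names only, not the real fd_monitor interface):
```
#include <cstdint>
#include <functional>
#include <map>

class monitor_sketch_t {
    struct item_t {
        int fd;
        std::function<void(int)> callback;
    };
    std::map<uint64_t, item_t> items_;
    uint64_t last_id_ = 0;

   public:
    // Add an item and hand back its unique ID.
    uint64_t add(int fd, std::function<void(int)> callback) {
        uint64_t id = ++last_id_;
        items_.emplace(id, item_t{fd, std::move(callback)});
        return id;
    }

    // Invoke an item's callback explicitly, even if its fd never became
    // readable (for example because the job finished first).
    void poke_item(uint64_t id) {
        auto it = items_.find(id);
        if (it != items_.end()) it->second.callback(it->second.fd);
    }
};
```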
Commands that start with a space should not be written to the history
file. Prior to this change, that was implemented by simply not adding them
to history. Items with leading spaces were simply dropped.
With this change, we add a 'history_persistence_mode_t' to
history_item_t, which tracks how the item persists. Items with leading
spaces are now marked as "ephemeral": they can be recovered via up arrow,
until the user runs another command, or types a space and hits return.
This matches zsh's HIST_IGNORE_SPACE feature.
Fixes #1383
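The persistence mode is conceptually just a small enum attached to each item (a sketch; the real history_item_t carries more detail):
```
// How (and whether) a history item outlives the current session.
enum class history_persistence_mode_t {
    disk,       // the normal case: kept in memory and written to the history file
    ephemeral,  // leading-space commands: recallable via up-arrow only until
                // the next command runs, and never written to disk
};
```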
A mildly interesting one is the call to test_wchar2utf8 with a non-null
pointer ("u1"/"dst") but 0 length. In this case we relied on malloc(0)
returning non-null, which is not guaranteed.
```
src/fish_tests.cpp:1619:23: warning: Call to 'malloc' has an allocation
size of 0 bytes [clang-analyzer-optin.portability.UnixAPI]
    mem = (char *)malloc(dlen);
                  ^

test_wchar2utf8(w1, sizeof(w1) / sizeof(*w1), u1, 0, 0, 0,
                "invalid params, dst is not NULL");
```
Before running a command, or before importing a command from bash history,
we perform error checking. As part of error checking we expand commands
including variables and globs. If the glob is very large, like `/**`, then
we could hang expanding it.
One fix would be to limit the amount of expansion from the glob, but
instead let's just not expand command globs when performing error checking.
Fixes #7407
This adds the ability to limit how many expansions are produced. For
example if $big contains 10 items, and is Cartesian-expanded as
$big$big$big$big... 10 times, we would naively get 10^10 = 10 billion
results, which fish can't actually handle. Implement this in
completion_receiver_t, which now can return false to indicate an overflow.
The initial expansion limit 'k_default_expansion_limit' is set to 512k
items. There's no way for users to change this at present.
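A rough sketch of the overflow-aware receiver idea (simplified; the real completion_receiver_t holds full completion objects rather than plain strings):
```
#include <cstddef>
#include <string>
#include <vector>

class receiver_sketch_t {
    std::vector<std::wstring> items_;
    std::size_t limit_;

   public:
    explicit receiver_sketch_t(std::size_t limit = 512 * 1024) : limit_(limit) {}

    // Returns false on overflow, letting the caller abort the expansion
    // instead of trying to materialize billions of results.
    bool add(std::wstring item) {
        if (items_.size() >= limit_) return false;
        items_.push_back(std::move(item));
        return true;
    }
};
```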
The pager cleanup missed that the existing token could already include active (as in unescaped) expansions, and just escaped them all.
This means things like `ls ~/<TAB>` would escape the `~`, which is obviously wrong and makes it awkward to use.
This reverts commit b38a23a46d.
I fully expect that we'll try again, but there's no use in keeping master broken while that happens.
Fixes #7526.
"smartcase" performs case-insensitive matching if the input string is all
lowercase, and case-sensitive matching otherwise. When completing e.g.
files, we will now show both case sensitive and insensitive completions if
the input string does not contain uppercase characters.
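The matching rule itself is simple; roughly (an illustrative sketch of a smartcase prefix match):
```
#include <cstddef>
#include <cwctype>
#include <string>

// Compare case-insensitively only if the query contains no uppercase characters.
static bool smartcase_prefix_match(const std::wstring &query, const std::wstring &candidate) {
    bool has_upper = false;
    for (wchar_t c : query) has_upper |= (std::iswupper(c) != 0);
    if (candidate.size() < query.size()) return false;
    for (std::size_t i = 0; i < query.size(); i++) {
        wchar_t a = query[i], b = candidate[i];
        if (!has_upper) {
            a = static_cast<wchar_t>(std::towlower(a));
            b = static_cast<wchar_t>(std::towlower(b));
        }
        if (a != b) return false;
    }
    return true;
}
```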
This is a delicate fix in an interactive component with low test coverage.
It's likely something will regress here.
Fixes #3978
This is an attempt to simplify some completion logic. It mainly refactors
reader_data_t::handle_completions such that all completions have the token
prepended; this attempts to simplify the logic since now all completions
replace the token. It also changes how the pager prefix works. Previously
the pager prefix was an extra string that was prepended to all
completions. In the new model the completions already have the prefix
prepended and the prefix is used only for certain width calculations.
This is a somewhat frightening change in an interactive component with
low test coverage. It tweaks things like how long completions are
ellipsized. Buckle in!
In preparation for introducing "smart case", refactor string fuzzy
matching. Specifically split out the case folding and match type into
separate fields, so that we can introduce more case folding types without
a combinatoric explosion.
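Conceptually the refactor splits one combined enum into two independent fields, roughly (a sketch; the enumerator names are illustrative):
```
// Keeping the containment type and the case folding as separate fields means
// that adding a new case-folding mode does not multiply the number of
// enumerators.
enum class contain_type_t { exact, prefix, substring, subsequence };
enum class case_fold_t { samecase, smartcase, icase };

struct fuzzy_match_sketch_t {
    contain_type_t type;
    case_fold_t case_fold;
};
```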