Commit 29dc30711 (Insert some completions with quotes instead of
backslashes, 2024-04-13) wrongly completes
$ cat ~/space
to
$ cat '~/path with spaces'
Today completions can be either replacing or appending. We never quote
appending completions (we backslash-escape them instead), unless they
"append" to an empty token. We always quote replacing completions. The
assumption in this part of the code is that replacing completions can be
quoted without changing their meaning.
This assumption is wrong for tildes. For the backslash-escaping code
path, we take care of this edge case via a special DONT_ESCAPE_TILDES
flag. However that flag does not take effect when using quotes for
escaping. Fix that.
Unfortunately, e97a4fab7 (Escape : and = in file completions,
2024-04-19) introduced a (hopefully temporary) code clone in
escape_separators, which added an extra step to debugging here.
When built with the default "installable" feature, the data files (share/) are
included in the fish binary itself.
Run `fish --install` or `fish --install=noconfirm` (for
non-interactive use) to install fish's data files into ~/.local/share/fish/install.
To figure out if the data files are out of date, we write the current version
to a file on install, and read it on start.
CMake disables the default features, so nothing changes for that build; but this allows installing via `cargo install`,
and even making a static binary that you can then just upload and have extract itself.
We set $__fish_help_dir to empty for installable builds, because we do not have
a way to generate html docs (because we need fish_indent for highlighting).
The man pages are found via $__fish_data_dir/man
This now allows:
- Same argument (`random 5 5`)
- Swapped ends (`random 10 2`)
- One possibility (`random 0 5 4`)
This makes it easier to use with numbers generated elsewhere instead
of hard-coded, so you don't need to check as much before running it.
Fixes #10879
We don't really care if the process has a custom handler installed, we
can just set it to default.
The one we check is SIGHUP, which may be given to us via `nohup`.
This saves ~30 syscalls *per process* we spawn, so:
```fish
for f in (seq 1000)
    command true
end
```
has ~30000 fewer rt_sigaction calls. These take up about 30% of the
total time spent in syscalls according to strace.
We could also compute this set once at startup and then reuse it.
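A rough sketch of the idea, using the libc crate (this is not fish's exact child-setup code, and it assumes the savings come from skipping the per-signal query of the current handler): reset the handlers unconditionally instead of asking what is installed first.

```rust
// Sketch only: reset a set of signals to their default disposition without
// querying the current handler first. SIGHUP is the one exception fish still
// checks, since `nohup` may have set it to SIG_IGN.
fn reset_signal_handlers(signals: &[libc::c_int]) {
    for &sig in signals {
        if sig == libc::SIGHUP {
            continue; // handled separately, as described above
        }
        unsafe {
            libc::signal(sig, libc::SIG_DFL);
        }
    }
}
```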
exists_no_autoload() wrongly thinks that tombstoned functions can be
autoloaded; fix that.
While at it, replace the use of get_props() with something simpler.
Co-authored-by: Himadri Bhattacharjee
Closes #10873
The [disambiguate flag](https://sw.kovidgoyal.net/kitty/keyboard-protocol/#disambiguate) means that:
> In particular, ctrl+c will no longer generate the SIGINT signal,
> but instead be delivered as a CSI u escape code.
so cancellation only works while we have disambiguation turned off.
Today we turn it off while running external commands that want to
claim the TTY. Also we do it (only as a workaround for this issue)
while expanding wildcards or while running builtin wait.
However there are other cases where we don't have a workaround,
like in trivial infinite loops or when opening a fifo.
Before we run "while true; end", we put the terminal back in ICANON
mode. This means it's line-buffered, so we won't be able to detect
if the user pressed ctrl-c.
Commit 8164855b7 (Disable terminal protocols throughout evaluation,
2024-04-02) had the right solution: simply disable terminal protocols
whenever we do computations that might take a long time.
eval_node() covers most of that; there are a few others.
As pointed out in #10494, the logic was fairly unsophisticated then:
it toggled terminal protocols many times. The fix in 29f2da8d1
(Toggle terminal protocols lazily, 2024-05-16) went to the extreme
other end of only toggling protocols when absolutely necessary.
Back out part of that commit by toggling in eval_node() again,
fixing cancellation. Fortunately, we can keep most of the benefits
of the lazy approach from 29f2da8d1: we toggle only 2 times instead
of 8 times for an empty prompt.
There are only two places left where we call signal_check_cancel()
without necessarily disabling the disambiguate flag:
1. open_cloexec(): we assume that the files we open outside eval_node()
are never blocking fifos.
2. fire_delayed(). Judging by commit history, this check is not
relevant for interactive sessions; we'll soon end up calling
eval_node() anyway.
In future, we can leave bracketed paste, modifyOtherKeys and
application keypad mode turned on again, until we actually run an
external command. We really only want to turn off the disambiguate
flag.
Since this approach is overly complex, I plan to go with either
of these two alternatives in future:
- extend the kitty keyboard protocol to optionally support VINTR,
VSTOP and friends. Then we can drop most of these changes.
- poll stdin for ctrl-c. This promises a great simplification,
because it implies that terminal ownership (term_steal/term_donate)
will be perfectly synced with us enabling kitty keyboard protocol.
This is because polling requires us to turn off ICANON.
I started working on this change; I'm convinced it must work,
but it's not finished yet. Note that this will also want to
add stdin polling to builtin wait.
Closes #10864
Moving the "make empty ToLowercase iterator" logic to within the
`unwrap_or_else()` instead of always generating it brings most of the speedup;
unrolling the recursive call brings in the rest.
Using `c.is_uppercase()` instead of getting the iterator and checking if the
first (and only) lowercase letter of the sequence is the same as the original
input is 5-8x faster (measured via criterion against `/usr/share/dict/words`).
(Additional benefit of forcibly inlining the now iterator-based comparison not
taken into account; this necessitated changing from a closure to a local
function as the inline attribute on closures is not yet supported with the
stable compiler toolchain.)
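A minimal sketch of the hot-path change (illustrative only; fish's real comparison code differs):

```rust
// Old approach: build the ToLowercase iterator just to learn whether `c` is
// uppercase, i.e. whether its lowercase form differs from itself.
fn is_upper_via_iterator(c: char) -> bool {
    c.to_lowercase().next() != Some(c)
}

// New approach: ask directly. This is the check that measured 5-8x faster.
fn is_upper_direct(c: char) -> bool {
    c.is_uppercase()
}
```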
This is still suboptimal because we are allocating a vector of indices to be
removed (but allocation-free in the normal case of no duplicates) but
significantly better than the previous version of the code that duplicated the
strings (which are larger and spread out all over the heap).
The ideal code (similar to what we had in the C++ version, iirc) would look like
this, but it's not allowed because the borrow checker hates you:
```
fn unique_in_place_illegal(comps: &mut Vec<Completion>) {
    let mut seen = HashSet::with_capacity(comps.len());
    let mut idx = 0;
    while idx < comps.len() {
        if !seen.insert(&comps[idx].completion) {
            comps.remove(idx);
            continue;
        }
        idx += 1;
    }
}
```
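For comparison, here is a sketch of a variant the borrow checker does accept, along the lines of what this commit describes (assuming the same `Completion` type with a `completion` field as in the snippet above): record the duplicate indices first, drop the borrow, then remove back-to-front.

```rust
use std::collections::HashSet;

fn unique_in_place(comps: &mut Vec<Completion>) {
    let mut seen = HashSet::with_capacity(comps.len());
    // Stays empty (and thus allocation-free) in the common no-duplicates case.
    let mut to_remove = Vec::new();
    for (idx, comp) in comps.iter().enumerate() {
        if !seen.insert(&comp.completion) {
            to_remove.push(idx);
        }
    }
    drop(seen); // end the shared borrow of `comps` before mutating it
    // Remove back-to-front so earlier indices stay valid.
    for idx in to_remove.into_iter().rev() {
        comps.remove(idx);
    }
}
```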
If a semicolon-delimited list of CSI parameters contained an (invalid) long
sequence of ASCII numeric characters, the original code would keep multiplying
by ten and adding the most recent digit until the `params[count][subcount]`
u32 value overflowed.
This was found via automated fuzz testing of the `try_readch()` routine against
a corpus of some proper/valid CSI escapes.
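A sketch of the overflow-safe accumulation (function and parameter names are assumptions, not the actual parser code):

```rust
// Accumulate a CSI parameter digit by digit without the possibility of
// overflowing the u32 accumulator.
fn push_csi_digit(param: &mut u32, digit_char: u8) {
    debug_assert!(digit_char.is_ascii_digit());
    let digit = u32::from(digit_char - b'0');
    // The old code did `*param = *param * 10 + digit`, which overflows on an
    // absurdly long run of digits; saturating keeps the value well-defined.
    *param = param.saturating_mul(10).saturating_add(digit);
}
```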
This lets us call into the entirety of the prior `readch()` with an exhaustible
input stream without panicking on the `unreachable!()` call. The previous
functionality is kept under the old name by calling `try_readch()` with the
`blocking` parameter set to `true` (100% same behavior as before).
While the `try_readch(false)` entrypoint isn't used directly by the current fish
codebase, it is required in order to automate input reader tests without the
overhead and complexity of running the test harness in a tty emulator emulator
like pexpect or tmux, which moreover necessitates out-of-process testing – which
is incompatible with most perf-guided testing harnesses.
I hope to be able to upstream harness integrations using this entry point in the
near future.
`print_help` is a hacky-wacky function used to support the `--help` command
of `fish_key_reader` and others. The Rust version panics on an error; fix
that and make it print more useful help messages.
I'm guessing this was missed in the port because there were comments referencing
using a hash set to perform the deduplication but there was no hashset. (The
TODO was added later.)
This caught an incorrect description for process/job exit handlers for ANY_PID
(now removed) which has been replaced with a message stating the handler is for
any process exit event.
The previous approach of "treat this field as an `Option<NonZeroU32>` and
remember to check `p.has_pid()` before accessing it" was a mix of C++ and rust
conventions and led to some bugs or incorrect behaviors.
* `jobs -p` would previously print both the (correct) external pid and the
(incorrect) internal value of `0` if a backgrounded command contained a
fish function (e.g. `function foo; end; cat | foo &; jobs`)
* Updating/calculating job cpu time and usage was incorrectly including all of
fish's cpu usage/time for each function/builtin member of the job pipeline.
Closes #10832
ctrl-r ctrl-s ctrl-s
Attempts to go before the beginning and asserts out. Instead refuse to
do that.
(there's some weirdness where it can reduce the pager to the first
entry if you keep pressing, which I haven't found yet, but that's better than *crashing*)
These are another way to spell the same thing that doesn't match what
`bind` would print.
They're also not documented or thoroughly tested.
Since they are just small shortcuts and unreleased we can just remove
them.
Fixes #10845
Given "printf %18446744073709551616s", we parse the number only in
the printf crate, which tells us that we overflowed somewhere (but
not where exactly).
We were previously printing the internal `INVALID_PID` value (since removed),
which was a meaningless `-2` constant, when there was no pgid associated with a
job.
This PR changes that to `-` to indicate no pgid available, which I prefer over
something like `0` or `-1`, but will cause problems for code that is hardcoded
to convert this field to an integral value.
Just like we already fix terminal modes if a command left them broken,
having an invisible cursor makes the terminal hard to use and so we
fix it.
We can't really use cnorm/cursor_normal because that often includes
other gunk like making the cursor blink, but it turns out every
terminfo entry agrees on the sequence to make the cursor visible, so
we hardcode it.
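For reference, the sequence in question is the DECTCEM "show cursor" control (shown as a sketch; fish's output plumbing differs):

```rust
// CSI ? 25 h makes the cursor visible; cnorm usually contains this plus extra
// blink/style settings, which is why only this part is hardcoded.
const SHOW_CURSOR: &[u8] = b"\x1b[?25h";
```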
Fixes #10834
If we end up using this in more places, we can create a `Pid` newtype.
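A hypothetical sketch of what such a `Pid` newtype could look like (this is not code that exists in fish yet):

```rust
use std::num::NonZeroI32;

// Wrap a known-valid pid so "no pid" is spelled Option<Pid> rather than a
// magic constant like the removed INVALID_PID.
#[derive(Copy, Clone, PartialEq, Eq)]
struct Pid(NonZeroI32);

impl Pid {
    fn new(raw: i32) -> Option<Pid> {
        // Pids handed to us by the kernel are positive; anything else means "no pid".
        (raw > 0).then(|| Pid(NonZeroI32::new(raw).unwrap()))
    }
    fn get(self) -> i32 {
        self.0.get()
    }
}
```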
Note that while the constant is no longer used in code, its previous value of -2
is still printed by `jobs` when no pgid is associated with a job. I will open a
PR to change this to something else, likely either `0` or `-`.
If we try to memory map the history file, and we get back ENODEV meaning that
the underlying device does not support memory mapping, then treat that as a hint
that the filesystem is remote and disable history locking.
As of 04c913427 (Limit command line rendering to $LINES lines,
2024-10-25), we only render a part of the command line. This removes
valuable information from scrollback.
The reasons for the limit were
1. to enable redrawing the commandline (can't do that if part of it
is off-screen).
2. if the cursor is at the beginning of the command-line, we can't
really render the off-screen suffix (unless we can tell the terminal
to scroll back after doing that).
Fortunately these don't matter for the very last rendering of a
command line. Let's render the entire command just before executing,
fixing the scrollback for executed commands.
In future, we should fix it also for pre-execution renderings. This
needs a terminal command to clear part of the scrollback. Can't find
anything on https://invisible-island.net/xterm/ctlseqs/ctlseqs.html
There is "Erase Saved Lines" but that deletes the entire scrollback.
See the discussion in #10827
Since f89909ae3 (Also handle overflown screens if editing pager search
field, 2024-10-27), cursor_arr is never None after the loop.
Assert that by unwrapping.
qa.sh
alt-e restores the cursor position received from the editor, moving by
one character at a time. This can be super slow on large commandlines,
even on release builds. Let's fix that by setting the coordinates
directly.
Our recursive create_dir() first calls stat() to check if the directory
already exists and then mkdir() trying to create it. If another (fish)
process creates the same directory after our stat() but before our
mkdir(), then our mkdir() fails with EEXIST. This error is spurious
if there is already a directory at this path (and permissions are
correct).
Let's switch to the stdlib version, which promises to solve this issue.
They currently do it by running mkdir() first and asking stat() later.
This implies that they will return success even if we don't have
any of the rwx permissions on the directory, but that was already a problem
before this change. We silently don't write history in that case.
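For illustration, this is roughly the behavior the stdlib gives us (shown manually here; presumably fish just calls `std::fs::create_dir_all` or similar): mkdir first, and only treat "already exists" as success if a directory is actually there.

```rust
use std::fs;
use std::io::{self, ErrorKind};
use std::path::Path;

// One level of the recursive creation, done race-free: try mkdir first and
// only then look at what exists. create_dir_all() applies this idea to every
// path component.
fn create_one_dir_racefree(path: &Path) -> io::Result<()> {
    match fs::create_dir(path) {
        Ok(()) => Ok(()),
        // Another process beat us to it; that's fine as long as a directory
        // is actually there now.
        Err(e) if e.kind() == ErrorKind::AlreadyExists && path.is_dir() => Ok(()),
        Err(e) => Err(e),
    }
}
```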
Fixes #10813
All-whitespace autosuggestions are invisible, no matter the cursor
shape. We do offer such autosuggestions after typing a command name
such as "fish". Since the autosuggestion is invisible it's probably
not useful. It also does no harm except when using a binding like
bind ctrl-g '
if commandline --showing-suggestion
commandline -f accept-autosuggestion
else
up-or-search
end'
where typing "fish<ctrl-g>" surprisingly does not perform a history
search. Fix this by detecting this specific case. In future we
could probably stop showing autosuggestions whenever they only
contain whitespace.
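A minimal sketch of the detection (the actual function and field names in fish may differ):

```rust
// True when an autosuggestion exists but consists only of whitespace, in
// which case it should not count as "showing a suggestion".
fn suggestion_is_all_whitespace(suggestion: &str) -> bool {
    !suggestion.is_empty() && suggestion.chars().all(char::is_whitespace)
}
```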
Some background: fish has some files which should be updated atomically:
specifically the history file and the universal variables file. If two fish
processes modified these in-place at the same time, then that could result
in interleaved writes and corrupted files.
To prevent this, fish uses the write-to-adjacent-file-then-rename approach to
atomically swap in a new file (history is slightly more complicated than
this, for performance, but this remains true). This avoids corruption.
However if two fish processes attempt this at the same time, then one
process will win the race and the data from the other process will be lost.
To prevent this, fish attempts to take an (advisory) lock on the target
file before beginning this process. This prevents data loss because only
one fish instance can replace the target file at once. (fish checks to
ensure it's locked the right file).
However some filesystems, particularly remote file systems, may have locks
which hang for a long time, preventing the user from using their shell.
This is far more serious than data loss, which is not catastrophic: losing
a history item or variable is not a major deal. So fish just attempts to
skip locks on remote filesystems.
Unfortunately Linux does not have a good API for checking if a filesystem
is remote: the best you can do is check the file system's magic number
against a hard-coded list. Today, the list is NFS_SUPER_MAGIC,
SMB_SUPER_MAGIC, SMB2_MAGIC_NUMBER, and CIFS_MAGIC_NUMBER.
Expand it to AFS_SUPER_MAGIC, CODA_SUPER_MAGIC, NCP_SUPER_MAGIC,
NFS_SUPER_MAGIC, OCFS2_SUPER_MAGIC, SMB_SUPER_MAGIC, SMB2_MAGIC_NUMBER,
CIFS_MAGIC_NUMBER, and V9FS_MAGIC, which is believed to be exhaustive.
ALSO include FUSE_SUPER_MAGIC: if the user's home directory is some FUSE
filesystem, that's kind of sus and the fewer tricks we try to pull, the
better.
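A sketch of the check this expands (Linux-only, via the libc crate; the magic numbers are the *_SUPER_MAGIC constants from <linux/magic.h>, listed here from memory, so double-check them against the headers before relying on them):

```rust
use std::ffi::CStr;

// Returns true if the filesystem containing `path` looks remote (or FUSE).
fn fs_looks_remote(path: &CStr) -> bool {
    let mut buf: libc::statfs = unsafe { std::mem::zeroed() };
    if unsafe { libc::statfs(path.as_ptr(), &mut buf) } != 0 {
        return false; // can't tell; assume local
    }
    // Values believed to match <linux/magic.h>.
    const REMOTE_MAGICS: &[u32] = &[
        0x6969,      // NFS_SUPER_MAGIC
        0x517b,      // SMB_SUPER_MAGIC
        0xfe53_4d42, // SMB2_MAGIC_NUMBER
        0xff53_4d42, // CIFS_MAGIC_NUMBER
        0x5346_414f, // AFS_SUPER_MAGIC
        0x7375_7245, // CODA_SUPER_MAGIC
        0x564c,      // NCP_SUPER_MAGIC
        0x7461_636f, // OCFS2_SUPER_MAGIC
        0x0102_1997, // V9FS_MAGIC
        0x6573_5546, // FUSE_SUPER_MAGIC
    ];
    REMOTE_MAGICS.contains(&(buf.f_type as u32))
}
```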
As mentioned in 04c913427 (Limit command line rendering to $LINES
lines, 2024-10-25) our rendering breaks when the command line overflows
the screen and we have a pager search field.
Let's also apply the overflow logic in this case.
Note that the search field still works, it's just not visible.
In future we should maybe show a small search field (~4 lines) in
this case (removing 4 screen lines worth of command line). But again,
this is not really important.
If the first physical line in the command line overflows the screen,
the cursor will be wrong and we'll fail to clear the prompt without
a manual ctrl-l. Let's fix that, and also don't print the OSC 133
marker in this case.
Currently, when we are scrolled, the first line on the screen still
gets an indentation that would normally be filled by the prompt.
This happens even for soft-wrapped lines, so they might be
torn apart in weird ways here.
In future, we might paint the prompt here. If not, the current
behavior for soft-wrapped lines is debatable but it's not super
important to fix. The main goal is to first get rid of glitches in
these edge cases.
Render the command line buffer only until the last line we can fit
on the screen.
If the cursor pushes the viewport such that neither the prompt nor
the first line of the command line buffer are visible, then we are
"scrolled". In this case we need to make sure to erase any leftover
prompt, so add a hack to disable the "shared_prefix" optimization
that tries to minimize redraws.
Down-arrow scrolls down only when on the last line, and up-arrow always
scrolls up as much as possible. This is somewhat unconventional;
probably we should change the up-arrow behavior but I guess it's a
good idea to show the prompt whenever possible. In future we could
solve that in a different way: we could keep the prompt visible even
if we're scrolled. This would work well because at least the left
prompt lives in a different column from the command line buffer.
However this assumption breaks when the first line in the command
line buffer is soft-wrapped, so keep this approach for now.
Note that we're still broken when complete-and-search or history-pager
try to draw a pager on top of an overfull screen. Will try to fix
this later.
Closes #7296
It's a 9-char CSI and we've read 3 (`<ESC>[T`), so we need to read six more.
Verified against the previous C++ codebase and couldn't find a reason for the
change to consuming 10 chars in a `git blame` run.
Commit ba67d20b7 (Refresh TTY timestamps after nextd/prevd, 2024-10-13)
wasn't quite right because we also need to do this for arbitrary commands.
While at it, do this only when needed:
1. It seems to be only relevant for multiline prompts.
Note that we can wait until after evaluation to check if the prompt is
multiline, because repaint events go through the queue, see 5ba21cd29
(Send repaint requests through the input queue again, 2024-04-19).
2. When the binding doesn't execute any external command, we probably don't
need to fix up whatever the user printed. If they actually wanted to show
output and print another prompt, they should currently use "__fish_echo",
to properly support multiline prompts. Bindings should produce no other
output. What distinguishes external programs is that they can trigger this
issue even if they don't produce any output that remains visible in fish,
namely by using the terminal's alternate screen.
Would be nice if we could get rid of __fish_echo; I'm not yet sure how.
Fixes #10800
The test case shows that the pager rendering is not quite right. It renders
'{\', leaving out the newline. This rendering is ambiguous.
Let's fix it by rendering \n as control picture, like we do for other control
characters in the pager.
Given
$ echo {\
C
where C is the cursor.
Completions have prefix "{\\\n".
Since \n has a wcwidth of -1, this line always fails
let prefix_len = usize::try_from(fish_wcswidth(&self.prefix));
This uncovers a regression in 43e2d7b48 (Port pager.cpp, 2023-12-02),
where we end up computing comp_width=0 for all completions.
Fix this. Test in the next commit.
The C++ version added the prefix width only if the completion had a valid
width. That seems wrong, let's do it always (if the prefix width is valid).
Completion on ": {*," used to work but nowadays our attempt to wildcard-expand
it fails with a syntax error and we do nothing. This behavior probably only
makes sense for the overflow case, so do that.
On a German keyboard, with a German keymap, and this ~/.wezterm.lua
local wezterm = require 'wezterm'
local config = wezterm.config_builder()
config.enable_kitty_keyboard = true
return config
when I press shift+# (which is single quote)
WezTerm sends the CSI u encoding shift-'.
Because of this, we completely disable kitty progressive enhancements and
modifyOtherKeys on WezTerm.
It makes no sense for every single app to work around WezTerm violating the
protocol. All these workarounds just create unnecessary version dependencies.
Also our workaround is brittle; it breaks as soon as you're inside something
like SSH.
Least importantly, the workaround prevents users of English keyboard layouts
from easily using the new features.
Since it seems so easy to work around by setting "enable_kitty_keyboard = false",
and most importantly, since that's the default, it seems better to remove
the workaround to simplify the world.
See #10663
Commit 5db0bd5 (Lock history file before reading it, 2024-10-09)
rewrites the history file in place instead of using rename().
By writing to the same file (with the same inode), it corrupts
our memory-mapped snapshot; mmap(3) says:
> It is unspecified whether modifications to the underlying object done
> after the MAP_PRIVATE mapping is established are visible through the
> MAP_PRIVATE mapping.
Revert it (it was misguided anyway).
Closes #10777
Closes #10782
Commit a91bf6d88 (builtin.c: builtin_source now checks that its argument is
a file., 2005-12-16) fixed an infinite loop for commands like "source /"
where the argument is a directory.
It did so by erroring out early unless the filename argument is a regular file.
This is too restrictive; it disallows reading from special files like /dev/null
and fifos.
Today we get a sensible error without this check, so remove it.
This fixes a macOS-specific bug. See 390b40e02 (Fix regression not refreshing
TTY timestamps after external command from binding, 2024-05-29) and 8a7c3ceec
(Don't abandon line after writing control sequences, 2024-04-06).
Fixes #10779
For multi-line prompts, we start each leading line with a clr_eol. Immediately
before printing these prompt lines we emit the OSC 133 prompt start marker.
Some terminals such as tmux make clr_eol delete such markers,
hence prompt navigation is broken.
Fix this by printing the marker only after clr_eol.
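Roughly, the ordering change (a sketch; the OSC 133 "prompt start" marker is `ESC ] 133 ; A`, and fish's real code goes through its own output layer):

```rust
use std::io::{self, Write};

// The prompt-start mark is emitted once; putting it after the first line's
// clr_eol (instead of before it) keeps terminals like tmux from wiping it.
fn emit_prompt_start(out: &mut impl Write, clr_eol: &[u8]) -> io::Result<()> {
    out.write_all(clr_eol)?; // start-of-line erase for the first prompt line
    out.write_all(b"\x1b]133;A\x1b\\") // OSC 133;A = prompt start marker
}
```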
The scenario where this triggers is quite odd. I haven't looked into why
the problem doesn't exist if I remove the recursive repaint request.
See https://github.com/tmux/tmux/issues/4183
Closes #10776
Our panic handler attempts a blocking read from stdin and only exits
after the user presses Enter.
This is unconventional behavior and might cause surprise but there is a
significant upside: crashes become more visible for terminals that don't
already detect crashes (see ecdc9ce1d (Install a panic handler to avoid
dropping crash stacktraces, 2024-03-24)).
As reported in 4d0aa2b5d (Fix panic handler, 2024-08-28), the panic handler
failed to exit fish if the panic happens on background threads. It would
only exit the background thread (like autosuggestion/highlight/history-pager
performer) itself. The fix was to abort the whole process.
Aborting has the additional upside of generating a coredump.
However since abort() skips stack unwinding, 4d0aa2b5d makes us no longer
restore the terminal on panic. In particular, if the terminal supports kitty
progressive enhancements, keys like ctrl-p will no longer work in say,
a Bash parent shell. So it broke 121680147 (Use RAII for restoring term
modes, 2024-03-24).
Fix this while still aborting to create coredumps. This means we can't use
RAII (for better or worse). The bad part is that we have to deal with added
complexity; we need to make sure that we set the AT_EXIT handler only after
all its inputs (like TERMINAL_MODE_ON_STARTUP) are initialized to a safe
value, but also before any damage has been done to the terminal. I guess we
can add a bunch of assertions.
Unfortunately, if a background thread panics, I haven't yet figured out how
to tell the main thread to do the blocking read. So the trick of "Press
Enter to exit", which allows users to attach a debugger doesn't yet work for
panics in background threads. We can probably figure that out later. Maybe
use pthread_kill(3)? Of course we still create coredumps, so that's fine.
As a temporary workaround, let's sleep for a bit so the user can at least
see that there is a crash & stacktrace.
One ugly bit here is that unit tests run AT_EXIT twice but it should be
idempotent.
I don't think I really get why this newline is here. It moves the cursor
from the end of the line to the beginning of the next line. Maybe it
was added only for panics in background threads? Either way it's fine.
We don't care to check the latest value of these variables;
these should only be read on startup and are not meant to
be overridden by the user ever. Hence we don't need a parser.
If SIGTERM is delivered to a background thread, a function call to sanitize
the reader state would crash in assert_is_main_thread(). In this case we
are about to exit so there's no need to fix the reader state. Skip it on
background threads.
We use optimistic concurrency when rewriting the history file to
minimize the lock scope. Unfortunately, old.mtime == new.mtime
does not imply that the file is unchanged; we don't have guarantees
on the granularity of the modification time timestamp, see
https://stackoverflow.com/questions/14392975/timestamp-accuracy-on-ext4-sub-millsecond
So let's lock before reading any old contents and use the other
"write-to-tempfile-and-rename" code path only when locking fails.
Potentially fixes #10300
(untested) which probably happens because read_zero_padded() attempts to
read bytes that have not been flushed yet.
No functional change, since with the parent commit, we no longer treat
"DirRemoteness::local" different from "DirRemoteness::remote", but we might
do so in future, so make sure we don't give a false positive here.
Non-Linux systems have ST_LOCAL or MNT_LOCAL, so no unknowns there.
See #10434
mmap() fails with ENODEV on remote file systems. This means we always fail
to read any old history on network file systems on Linux (except on the file
systems we recognize, which are NFS, SMB and CIFS).
Untested, so I'm not sure if this works.
Fixes#10434
We no longer use RAII for enabling/disabling these, so a full object is
overkill. Additionally this object doesn't allow us to recover from the case
where we receive SIGTERM while inside terminal_protocols_{enable,disable}.
We can simply run disable another time since they're idempotent. Untested.
When I run a command with leading space, it is not added to the on-disk
history. However we still call History::save(). After 25 of such calls,
we rewrite the history file (even though nothing was written by us).
This is annoying when diagnosing #10300 where the history of the current
shell (but not other shells) is broken; because the history rewrite will
make the problem go away. Let's not save in this case, to make it easier to
run commands to inspect the state of the history file.
Given a history like
- cmd: echo OLD
when: 1726157160
\x00\x00\x00- cmd: echo leading NUL bytes
when: 1726157160
- cmd: echo NEW
when: 1726157223
offset_of_next_item() happily records 3 items even though the second item
is corrupted.
decode_item() fails, which makes the caller stop loading any older items --
we got kneecapped.
Avoid this horrible failure mode by skipping over these items already in
offset computation. For now we still lose the corrupted item itself.
In future we should probably try to delete the NUL bytes or avoid the
corruption in the first place.
See #10300 and others.
This should make the sort have a strict weak ordering, which rust
requires since 1.81 (or it will panic).
Note: This changes the order, but that's *fine* since the current
order is random weirdness anyway.
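For illustration, the shape of the requirement (types and fields here are hypothetical, not fish's):

```rust
struct Entry {
    rank: u32,
    name: String,
}

// Since Rust 1.81 the stdlib sorts may panic if the comparator is not a total
// order, e.g. one that never returns Ordering::Equal. Comparing key tuples is
// total by construction and also gives a deterministic tie-break.
fn sort_entries(entries: &mut [Entry]) {
    entries.sort_by(|a, b| (a.rank, a.name.as_str()).cmp(&(b.rank, b.name.as_str())));
}
```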
Fixes #10763
Commit b00899179 (Don't indent multi-line quoted strings; do indent inside
(), 2024-04-28) made parse_util_compute_indents() crash on `echo "$()"'x`.
After recursively indenting the command substitution, we indent the "'x
suffix. We skip the quoted part by setting "done=2". Later we wrongly
index "self.indents[done..range.start+offset+1]" (= "self.indents[2..1]").
Fix this by making sure that "start >= done", thus not setting any indents
for the quoted suffix. There is no need to do so; only the first character
in each line needs an indent.
In particular, this fixes the case
ctrl-r foo ctrl-r
where foo substring-matches no more than one page's worth of results.
The second attempt will fall back to subsequence matching, which is wrong.
In case a terminal resize[1] causes us
to repaint a multi-line prompt that changes width like
function fish_prompt
for i in 1 2 3
random choice 'aaaaaaaaaaaaaaaaaaaaaaaaaaaaa' 'bbbbbbbbbbb'
end
end
we add a clr_eol after each line[2], to make sure
that a "b" line does not have leftover "a" letters
(80aaae5b7 (Clear to end of each line in left prompt, 2020-10-25)).
Unfortunately, if a prompt line takes up all the columns, clr_eol will
wrongly clear the last column. Reproduce with
function fish_prompt
string repeat $COLUMNS -
echo "$PWD> "
end
and observe that the last "-" is missing.
Previous (reverted) attempt d3ceba107 (Clear to eol before outputting line
in multi-line prompt, 2021-05-17) found the right fix but had an off-by-one
error which reintroduced the leftover "a" letters in the "random choice"
prompt above.
Given prompt string "aa\nbb\ncc", it wrongly printed
clr_eol "aa" clr_eol "\nbb" "\ncc"
Observe that the first line is cleared twice, while the second line is
never cleared. Fix that.
[1]: or an async "commandline -f repaint" triggered by a uvar change /
async prompt update
[2]: except after the last line where we probably already emit clr_eol
elsewhere.
Alternative fix: emit both clr_eol and clr_bol *before* drawing the current
line. However, if fish and the terminal disagree on character width, that
approach might erase too much.
Closes #8164
I guess it's nice to know that these two are the same but that info is not
needed here, it just adds confusion. The user must have pressed ctrl-j if
we get here, so echo that back.
See the parent commit.
This is just too confusing; \b sounds like it would map backspace but it's
actually just ctrl-h. Backspace is a different key ("bind backspace"),
so let's move away from \b.
Reproduce by typing ctrl-h in fish_key_reader, or, for even more confusion,
use a terminal like tmux and type ctrl-backspace which also sends ctrl-h.
I've thought about changing \b (and its aliases like \ch and \x08) to mean
backspace but that seems like unnecessary breakage, since they all already
mean ctrl-h, and can usually be mapped independent of backspace.
See the discussion in #10738
Make fish-printf no longer depend on the widestring crate, as other clients
won't use it; instead this is an optional feature.
Make format strings a generic type, so that both narrow and wide strings can
serve. This removes a lot of the complexity around converting from narrow to
wide.
Add a README.md to this crate.
This gave a weird error when you did e.g. `math Foo / 6`:
"Missing Operator" and only the "F" marked.
Adding an operator here anywhere won't help, so calling this an
"Unknown function" is closer to the truth. We also get nicer markings
because we know the extent of the identifier.
Fix 7308dbc7a (fish_indent: Prevent overwriting file with identical content,
2024-07-21) in a different way by passing O_TRUNC again.
If we don't want regressions we could use code review.
We used deque in C++ because this vector may be large, and so it avoids
repeated re-allocations. But VecDeque is different in Rust - it's contiguous -
so there's no benefit. Just use Vec.
For implementation reasons, we special-case cd in several ways
1. it gets different completions (handle_as_special_cd)
2. when highlighting, we honor CDPATH
3. we discard autosuggestions from history that don't have valid path arguments
There are some third-party tools like zoxide that redefine cd ("function cd
--wraps ...; ...; end"). We can't support this in general but let's try to
make an effort.
zoxide tries to be a superset of cd, so special case 1 is still
valid but 2 and 3 are not, because zoxide accepts some paths
that cd doesn't accept.
Let's add a hack to detect when "cd" actually means something else by checking
if there is any --wraps argument.
A cleaner solution is definitely possible but more effort.
Closes #10719
In some cases we add the wildcard twice.
$ fish -c '../jj; complete -C"ls cli/*/conf/tem"'
cli/*/*/config/templates.toml
Fix that. Test in the next commit.
There seems to be another bug in 3.7.1 where we fail to apply this completion
to the command line. This appears fixed. (FWIW we might want to revert
the quoting change in completion_apply_to_command_line(), maybe that one
accidentally fixed this).
Fixes #10703
The HashMap is used to generate the __fish_describe_command integration
completions. Given the nature of the allocations and the numbers that we use, a
BTreeMap would theoretically perform better. Benchmarks show a 2-9%
improvement in completion times consistently in favor of BTreeMap.
Warnings were appearing under GCC 13.2.
(void) alone is insufficient under modern compilers; the workaround with logical
negation is taken from the GCC bug tracker.
Worth including because mold is rather popular in the rust world and because the
bug affects mold versions coincident with the development of the fish rust port.
The bug affects all currently released versions of mold from 2.30.0 (Mar 2024)
onwards under at least FreeBSD (though quite likely other platforms as well).
See https://github.com/rui314/mold/issues/1338 for reference.
The previous control flow logic wasn't sound and would leave the shell in a hung
state when `break` was encountered.
The behavior is now straightforward: the shell reads until <Enter> or <q> is
pressed, at which point it aborts.
This was based on a misunderstanding.
On musl, 64-bit time_t on 32-bit architectures was introduced in version 1.2.0,
by introducing new symbols. The old symbols still exist, to allow programs compiled against older versions
to keep running on 1.2.0+, preserving ABI-compatibility. (see musl commit 38143339646a4ccce8afe298c34467767c899f51)
Programs compiled against 1.2.0+ will get the new symbols, and will therefore think time_t is 64-bit.
Unfortunately, rust's libc crate uses its own definition of these types, and does not check for musl version.
Currently, it includes the pre-1.2.0 32-bit type.
That means:
- If you run on a 32-bit system like i686
- ... and compile against a C-library other than libc
- ... and pass it a time_t-containing struct like timespec or stat
... you need to arrange for that library to be built against musl <1.2.0.
Or, as https://github.com/ericonr/rust-time64 says:
> Therefore, for "old" 32-bit targets (riscv32 is supposed to default to time64),
> any Rust code that interacts with C code built on musl after 1.2.0,
> using types based on time_t (arguably, the main ones are struct timespec and struct stat) in their interface,
> will be completely miscompiled.
However, while fish runs on i686 and compiles against pcre2, we do not pass pcre2 a time_t.
Our only uses of time_t are confined to interactions with libc, in which case with musl we would simply use the legacy ABI.
I have compiled an i686 fish against musl to confirm and can find no issue.
This reverts commit 55196ee2a0.
This reverts commit 4992f88966.
This reverts commit 46c8ba2c9f.
This reverts commit 3a9b4149da.
This reverts commit 5f9e9cbe74.
This reverts commit 338579b78c.
This reverts commit d19e5508d7.
This reverts commit b64045dc18.
Closes #10634