Our recursive create_dir() first calls stat() to check if the directory
already exists and then mkdir() to try to create it. If another (fish)
process creates the same directory after our stat() but before our
mkdir(), then our mkdir() fails with EEXIST. This error is spurious
if there is already a directory at this path (and permissions are
correct).
Let's switch to the stdlib version, which promises to solve this issue.
It currently does this by running mkdir() first and asking stat() later.
This implies that it will return success even if we don't have any
rwx permissions on the directory, but that was already a problem
before this change: we silently don't write history in that case.
Fixes #10813
All-whitespace autosuggestions are invisible, no matter the cursor
shape. We do offer such autosuggestions after typing a command name
such as "fish". Since the autosuggestion is invisible it's probably
not useful. It also does no harm except when using a binding like
bind ctrl-g '
    if commandline --showing-suggestion
        commandline -f accept-autosuggestion
    else
        up-or-search
    end'
where typing "fish<ctrl-g>" surprisingly does not perform a history
search. Fix this by detecting this specific case. In future we
could probably stop showing autosuggestions whenever they only
contain whitespace.
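A possible shape for the check (a sketch; the function and parameter names
are illustrative, not our actual ones):

// Treat an autosuggestion as visible only if it appends at least one
// non-whitespace character beyond what the user already typed.
fn suggestion_is_visible(suggested_suffix: &str) -> bool {
    suggested_suffix.chars().any(|c| !c.is_whitespace())
}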
Some background: fish has some files which should be updated atomically:
specifically the history file and the universal variables file. If two fish
processes modified these in-place at the same time, then that could result
in interleaved writes and corrupted files.
To prevent this, fish uses the write-to-adjacent-file-then-rename approach to
atomically swap in a new file (history is slightly more complicated than
this, for performance, but this remains true). This avoids corruption.
However if two fish processes attempt this at the same time, then one
process will win the race and the data from the other process will be lost.
To prevent this, fish attempts to take an (advisory) lock on the target
file before beginning this process. This prevents data loss because only
one fish instance can replace the target file at once. (fish checks to
ensure it has locked the right file).
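Sketched in Rust, the overall shape is roughly as follows (illustrative only,
assuming the libc crate for flock; our real code differs and, as noted, the
history path is more involved):

use std::fs::{self, File, OpenOptions};
use std::io::{self, Write};
use std::os::unix::fs::MetadataExt;
use std::os::unix::io::AsRawFd;
use std::path::Path;

fn replace_atomically(target: &Path, contents: &[u8]) -> io::Result<()> {
    // Take an advisory lock so only one process replaces the target at a time.
    let lock_file = OpenOptions::new().read(true).write(true).create(true).open(target)?;
    if unsafe { libc::flock(lock_file.as_raw_fd(), libc::LOCK_EX) } != 0 {
        return Err(io::Error::last_os_error());
    }
    // Check that we locked the file that is currently at the target path;
    // another process may have renamed a new file into place before we got
    // the lock.
    let on_disk = fs::metadata(target)?;
    let locked = lock_file.metadata()?;
    if (on_disk.dev(), on_disk.ino()) != (locked.dev(), locked.ino()) {
        return Err(io::Error::new(io::ErrorKind::Other, "lost the race; retry"));
    }
    // Write the new contents to an adjacent file, then atomically rename it in.
    let tmp = target.with_extension("tmp");
    let mut out = File::create(&tmp)?;
    out.write_all(contents)?;
    out.sync_all()?;
    fs::rename(&tmp, target)
}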
However some filesystems, particularly remote file systems, may have locks
which hang for a long time, preventing the user from using their shell.
This is far more serious than data loss, which is not catastrophic: losing
a history item or variable is not a big deal. So fish just attempts to
skip locks on remote filesystems.
Unfortunately Linux does not have a good API for checking if a filesystem
is remote: the best you can do is check the file system's magic number
against a hard-coded list. Today, the list is NFS_SUPER_MAGIC,
SMB_SUPER_MAGIC, SMB2_MAGIC_NUMBER, and CIFS_MAGIC_NUMBER.
Expand it to AFS_SUPER_MAGIC, CODA_SUPER_MAGIC, NCP_SUPER_MAGIC,
NFS_SUPER_MAGIC, OCFS2_SUPER_MAGIC, SMB_SUPER_MAGIC, SMB2_MAGIC_NUMBER,
CIFS_MAGIC_NUMBER, and V9FS_MAGIC, which is believed to be exhaustive.
ALSO include FUSE_SUPER_MAGIC: if the user's home directory is on some FUSE
filesystem, that's kind of sus and the fewer tricks we try to pull, the
better.
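On Linux the check boils down to something like this (a sketch assuming the
libc crate; only a few of the magic numbers from linux/magic.h are shown, and
the names here are not our actual ones):

use std::ffi::CStr;

fn is_remote_or_fuse(path: &CStr) -> Option<bool> {
    // A subset of the filesystem magic numbers from linux/magic.h.
    const NFS_SUPER_MAGIC: u32 = 0x6969;
    const SMB_SUPER_MAGIC: u32 = 0x517B;
    const CIFS_MAGIC_NUMBER: u32 = 0xFF534D42;
    const FUSE_SUPER_MAGIC: u32 = 0x65735546;
    let mut buf: libc::statfs = unsafe { std::mem::zeroed() };
    if unsafe { libc::statfs(path.as_ptr(), &mut buf) } != 0 {
        return None; // can't tell; the caller decides how to treat "unknown"
    }
    let magic = buf.f_type as u32;
    Some(matches!(
        magic,
        NFS_SUPER_MAGIC | SMB_SUPER_MAGIC | CIFS_MAGIC_NUMBER | FUSE_SUPER_MAGIC
    ))
}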
As mentioned in 04c913427 (Limit command line rendering to $LINES
lines, 2024-10-25) our rendering breaks when the command line overflows
the screen and we have a pager search field.
Let's also apply the overflow logic in this case.
Note that the search field still works; it's just not visible.
In future we should maybe show a small search field (~4 lines) in
this case (removing 4 screen lines worth of command line). But again,
this is not really important.
If the first physical line in the command line overflows the screen,
the cursor will be wrong and we'll fail to clear the prompt without
a manual ctrl-l. Let's fix that, and also don't print the OSC 133
marker in this case.
Currently, when we are scrolled, the first line on the screen still
gets an indentation that would normally be filled by the prompt.
This happens even for soft-wrapped lines, so they might be
torn apart in weird ways here.
In future, we might paint the prompt here. If not, the current
behavior for soft-wrapped lines is debatable but it's not super
important to fix. The main goal is to first get rid of glitches in
these edge cases.
Render the command line buffer only until the last line we can fit
on the screen.
If the cursor pushes the viewport such that neither the prompt nor
the first line of the command line buffer is visible, then we are
"scrolled". In this case we need to make sure to erase any leftover
prompt, so add a hack to disable the "shared_prefix" optimization
that tries to minimize redraws.
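The "scrolled" check itself is tiny; roughly (illustrative names, not our
actual code):

// The prompt occupies the rows before (and the start of) the row that holds
// the first line of the command line buffer. We are "scrolled" once even
// that row has been pushed off the top of the viewport.
fn is_scrolled(first_visible_row: usize, first_commandline_row: usize) -> bool {
    first_visible_row > first_commandline_row
}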
Down-arrow scrolls down only when on the last line, and up-arrow always
scrolls up as much as possible. This is somewhat unconventional;
probably we should change the up-arrow behavior but I guess it's a
good idea to show the prompt whenever possible. In future we could
solve that in a different way: we could keep the prompt visible even
if we're scrolled. This would work well because at least the left
prompt lives in a different column from the command line buffer.
However this assumption breaks when the first line in the command
line buffer is soft-wrapped, so keep this approach for now.
Note that we're still broken when complete-and-search or history-pager
try to draw a pager on top of an overfull screen. Will try to fix
this later.
Closes #7296
It's a 9-char CSI and we've read 3 (`<ESC>[T`), so we need to read six more.
I checked the previous C++ codebase via `git blame` and couldn't find a reason
for the change to consuming 10 chars.
Commit ba67d20b7 (Refresh TTY timestamps after nextd/prevd, 2024-10-13)
wasn't quite right, because this also needs to be done for arbitrary commands.
While at it, do this only when needed:
1. It seems to be relevant only for multiline prompts.
Note that we can wait until after evaluation to check if the prompt is
multiline, because repaint events go through the queue, see 5ba21cd29
(Send repaint requests through the input queue again, 2024-04-19).
2. When the binding doesn't execute any external command, we probably don't
need to fix up whatever the user printed. If they actually wanted to show
output and print another prompt, they should currently use "__fish_echo"
to properly support multiline prompts. Bindings should produce no other
output. What distinguishes external programs is that they can trigger this
issue even if they don't produce any output that remains visible in fish,
namely by using the terminal's alternate screen.
Would be nice if we could get rid of __fish_echo; I'm not yet sure how.
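Putting the two conditions together, the check is roughly (illustrative
names, not our actual ones):

// Fix up the TTY (refresh timestamps / re-render) after a binding only when
// it can actually matter: the prompt spans multiple lines and the binding
// ran at least one external command.
fn binding_needs_tty_fixup(prompt_line_count: usize, ran_external_command: bool) -> bool {
    prompt_line_count > 1 && ran_external_command
}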
Fixes #10800
The test case shows that the pager rendering is not quite right. It renders
'{\', leaving out the newline. This rendering is ambiguous.
Let's fix it by rendering \n as a control picture, like we do for other control
characters in the pager.
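For reference, mapping a C0 control character to its control picture is just
an offset into U+2400..U+241F (a sketch, not the pager's actual code):

// '\n' (0x0A) becomes '␊' (U+240A), '\t' becomes '␉', and so on.
fn control_picture(c: char) -> Option<char> {
    let code = c as u32;
    if code < 0x20 {
        char::from_u32(0x2400 + code)
    } else {
        None
    }
}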
Given
$ echo {\
C
where C is the cursor.
Completions have prefix "{\\\n".
Since \n has a wcwidth of -1, this line always fails:
let prefix_len = usize::try_from(fish_wcswidth(&self.prefix));
This uncovers a regression in 43e2d7b48 (Port pager.cpp, 2023-12-02),
where we end up computing comp_width=0 for all completions.
Fix this. Test in the next commit.
The C++ version added the prefix width only if the completion had a valid
width. That seems wrong; let's always add it (if the prefix width is valid).
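The intended computation is roughly this (illustrative; the inputs stand in
for the raw fish_wcswidth()-style results, which are negative when a
character like \n has no width):

// Add each component's width when it is valid, independently of the other,
// so an invalid prefix width no longer zeroes out every completion's width.
fn width_or_zero(wcswidth_result: i32) -> usize {
    usize::try_from(wcswidth_result).unwrap_or(0)
}

fn comp_width(prefix_width: i32, completion_width: i32) -> usize {
    width_or_zero(prefix_width) + width_or_zero(completion_width)
}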
Completion on ": {*," used to work but nowadays our attempt to wildcard-expand
it fails with a syntax error and we do nothing. This behavior probably only
makes sense for the overflow case, so do that.
On a German keyboard, with a German keymap, and this ~/.wezterm.lua
local wezterm = require 'wezterm'
local config = wezterm.config_builder()
config.enable_kitty_keyboard = true
return config
when I press shift+# (which produces a single quote),
WezTerm sends the CSI u encoding of shift-'.
Because of this, we completely disable kitty progressive enhancements and
modifyOtherKeys on WezTerm.
It makes no sense for every single app to work around WezTerm violating the
protocol. All these workarounds just create unnecessary version dependencies.
Also our workaround is brittle; it breaks as soon as you're inside something
like SSH.
Least importantly, the workaround prevents users of English keyboard layouts
from easily using the new features.
Since it seems so easy to work around by setting "enable_kitty_keyboard = false",
and most importantly, since that's the default, it seems better to remove
the workaround to simplify the world.
See #10663
Commit 5db0bd5 (Lock history file before reading it, 2024-10-09)
rewrites the history file in place instead of using rename().
By writing to the same file (with the same inode), it corrupts
our memory-mapped snapshot; mmap(3) says:
> It is unspecified whether modifications to the underlying object done
> after the MAP_PRIVATE mapping is established are visible through the
> MAP_PRIVATE mapping.
Revert it (it was misguided anyway).
Closes #10777
Closes #10782
Commit a91bf6d88 (builtin.c: builtin_source now checks that its argument is
a file., 2005-12-16) fixed an infinite loop for commands like "source /"
where the argument is a directory.
It did so by erroring out early unless the filename argument is a regular file.
This is too restrictive; it disallows reading from special files like /dev/null
and fifos.
Today we get a sensible error without this check, so remove it.
This fixes a macOS-specific bug. See 390b40e02 (Fix regression not refreshing
TTY timestamps after external command from binding, 2024-05-29) and 8a7c3ceec
(Don't abandon line after writing control sequences, 2024-04-06).
Fixes #10779
For multi-line prompts, we start each leading line with a clr_eol. Immediately
before printing these prompt lines we emit the OSC 133 prompt start marker.
Some terminals such as tmux make clr_eol delete such markers,
so prompt navigation is broken.
Fix this by printing the marker only after clr_eol.
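In terms of raw output the fix is just a reordering, roughly (escape
sequences hard-coded here for illustration; fish goes through its output
layer):

use std::io::{self, Write};

// Emit clr_eol first, then the OSC 133;A "prompt start" marker, so a
// terminal that lets clr_eol wipe such marks (like tmux) keeps it.
fn write_prompt_start(out: &mut impl Write) -> io::Result<()> {
    out.write_all(b"\x1b[K")?; // clr_eol (EL)
    out.write_all(b"\x1b]133;A\x1b\\") // OSC 133;A terminated by ST
}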
The scenario where this triggers is quite odd. I haven't looked into why
the problem doesn't exist if I remove the recursive repaint request.
See https://github.com/tmux/tmux/issues/4183
Closes #10776
Our panic handler attempts a blocking read from stdin and only exits
after the user presses Enter.
This is unconventional behavior and might cause surprise but there is a
significant upside: crashes become more visible for terminals that don't
already detect crashes (see ecdc9ce1d (Install a panic handler to avoid
dropping crash stacktraces, 2024-03-24)).
As reported in 4d0aa2b5d (Fix panic handler, 2024-08-28), the panic handler
failed to exit fish if the panic happens on background threads. It would
only exit the background thread (like autosuggestion/highlight/history-pager
performer) itself. The fix was to abort the whole process.
Aborting has the additional upside of generating a coredump.
However since abort() skips stack unwinding, 4d0aa2b5d makes us no longer
restore the terminal on panic. In particular, if the terminal supports kitty
progressive enhancements, keys like ctrl-p will no longer work in, say,
a Bash parent shell. So it broke 121680147 (Use RAII for restoring term
modes, 2024-03-24).
Fix this while still aborting to create coredumps. This means we can't use
RAII (for better or worse). The bad part is that we have to deal with added
complexity; we need to make sure that we set the AT_EXIT handler only after
all its inputs (like TERMINAL_MODE_ON_STARTUP) are initialized to a safe
value, but also before any damage has been done to the terminal. I guess we
can add a bunch of assertions.
Unfortunately, if a background thread panics, I haven't yet figured out how
to tell the main thread to do the blocking read. So the trick of "Press
Enter to exit", which allows users to attach a debugger doesn't yet work for
panics in background threads. We can probably figure that out later. Maybe
use pthread_kill(3)? Of course we still create coredumps, so that's fine.
As a temporary workaround, let's sleep for a bit so the user can at least
see that there is a crash & stacktrace.
One ugly bit here is that unit tests run AT_EXIT twice, but it should be
idempotent.
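One possible shape of this, as a rough sketch (using a standard panic hook
and the libc crate; the names, and the hook-based framing itself, are
illustrative rather than our actual AT_EXIT mechanism):

use std::sync::OnceLock;

// Saved once at startup, before any terminal modes are changed.
static TERMINAL_MODE_ON_STARTUP: OnceLock<libc::termios> = OnceLock::new();

fn install_panic_handler() {
    std::panic::set_hook(Box::new(|info| {
        eprintln!("{info}");
        // Put the terminal back into the modes recorded at startup.
        // (The real code additionally has to undo things like kitty
        // progressive enhancements, which this sketch leaves out.)
        if let Some(modes) = TERMINAL_MODE_ON_STARTUP.get() {
            unsafe { libc::tcsetattr(libc::STDIN_FILENO, libc::TCSANOW, modes) };
        }
        // On a background thread we can't yet hand the blocking read to the
        // main thread; sleep briefly so the stacktrace stays visible.
        std::thread::sleep(std::time::Duration::from_secs(1));
        // Abort instead of unwinding so we still get a coredump.
        std::process::abort();
    }));
}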
I don't think I really get why this newline is here. It moves the cursor
from the end of the line to the beginning of the next line. Maybe it
was added only for panics in background threads? Either way it's fine.
We don't care to check the latest value of these variables;
they should only be read on startup and are never meant to
be overridden by the user. Hence we don't need a parser.
If SIGTERM is delivered to a background thread, a function call to sanitize
the reader state would crash in assert_is_main_thread(). In this case we
are about to exit so there's no need to fix the reader state. Skip it on
background threads.
We use optimistic concurrency when rewriting the history file to
minimize the lock scope. Unfortunately, old.mtime == new.mtime
does not imply that the file is unchanged; we don't have guarantees
on the granularity of the modification timestamp, see
https://stackoverflow.com/questions/14392975/timestamp-accuracy-on-ext4-sub-millsecond
So let's lock before reading any old contents and use the other
"write-to-tempfile-and-rename" code path only when locking fails.
Potentially fixes #10300 (untested), which probably happens because
read_zero_padded() attempts to read bytes that have not been flushed yet.
No functional change, since with the parent commit we no longer treat
"DirRemoteness::local" differently from "DirRemoteness::remote", but we might
do so in future, so make sure we don't give a false positive here.
Non-Linux systems have ST_LOCAL or MNT_LOCAL, so no unknowns there.
See #10434
mmap() fails with ENODEV on remote file systems. This means we always fail
to read any old history on network file systems on Linux (except on the file
systems we recognize, which are NFS, SMB, and CIFS).
Untested, so I'm not sure if this works.
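The fallback has roughly this shape (a sketch assuming the libc crate; the
real code keeps the mapping alive rather than copying it out):

use std::fs::File;
use std::io::{self, Read};
use std::os::unix::io::AsRawFd;

fn load_history_bytes(mut file: File, len: usize) -> io::Result<Vec<u8>> {
    let ptr = unsafe {
        libc::mmap(
            std::ptr::null_mut(),
            len,
            libc::PROT_READ,
            libc::MAP_PRIVATE,
            file.as_raw_fd(),
            0,
        )
    };
    if ptr != libc::MAP_FAILED {
        // Copy out for simplicity in this sketch, then unmap.
        let bytes = unsafe { std::slice::from_raw_parts(ptr as *const u8, len) }.to_vec();
        unsafe { libc::munmap(ptr, len) };
        return Ok(bytes);
    }
    // mmap() was refused (e.g. ENODEV): fall back to an ordinary read.
    let mut buf = Vec::with_capacity(len);
    file.read_to_end(&mut buf)?;
    Ok(buf)
}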
Fixes #10434
We no longer use RAII for enabling/disabling these, so a full object is
overkill. Additionally this object doesn't allow us to recover from the case
where we receive SIGTERM while inside terminal_protocols_{enable,disable}.
We can simply run disable another time since they're idempotent. Untested.
When I run a command with a leading space, it is not added to the on-disk
history. However, we still call History::save(). After 25 such calls,
we rewrite the history file (even though nothing was written by us).
This is annoying when diagnosing #10300, where the history of the current
shell (but not other shells) is broken, because the history rewrite will
make the problem go away. Let's not save in this case, to make it easier to
run commands to inspect the state of the history file.