refactor: Prefer plain trait definitions over macros for impl_intern_value_trivial
`impl_intern_value_trivial` can be defined with a trait directly, so prefer that over a macro definition.
This partially reverts #17350, based on the feedback in #17397.
If we don't have an autofix, it's more annoying to highlight the whole line.
This heuristic fixes the diagnostic overwhelming the user during startup.
Allow rust-project.json to include arbitrary shell commands for runnables
This is a follow-up on #16135, resolving the feedback raised :)
Allow rust-project.json to include shell runnables, of the form:
```
{
"build_info": {
"label": "//project/foo:my-crate",
"target_kind": "bin",
"shell_runnables": [
{
"kind": "run",
"program": "buck2",
"args": ["run", "//project/foo:my-crate"]
},
{
"kind": "test_one",
"program": "test_runner",
"args": ["--name=$$TEST_NAME$$"]
}
]
}
}
```
If these runnable configs are present for the current crate in rust-project.json, offer them as runnables in VS Code.
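For illustration, here is a rough sketch of how the JSON above could be modelled on the Rust side; the struct and field names simply mirror the JSON shape and are not the actual rust-analyzer definitions:
```rust
use serde::Deserialize;

// Hypothetical types mirroring the JSON above; not the real project-model code.
#[derive(Deserialize)]
struct BuildInfo {
    label: String,
    target_kind: String,
    #[serde(default)]
    shell_runnables: Vec<ShellRunnable>,
}

#[derive(Deserialize)]
struct ShellRunnable {
    kind: ShellRunnableKind,
    program: String,
    args: Vec<String>,
}

#[derive(Deserialize)]
#[serde(rename_all = "snake_case")]
enum ShellRunnableKind {
    Run,
    TestOne,
}
```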
This PR required some boring changes to APIs that previously only handled cargo situations. I've split out these changes as commits labelled 'refactor', so it's easy to see the interesting changes.
feat: show type bounds from containers when hovering on functions
fix #12917.
### Changes
1. Added support for displaying the container and the type bounds from it when hovering over functions with generic types.
2. Added a user config to determine whether to display container bounds (enabled by default).
3. Added regression tests.
4. Simplified and refactored `hir/display.rs` to improve readability.
fix: ensure that the parent of a SourceRoot cannot be itself
fix #17378.
In `FileSetConfig.map`, different roots might be mapped to the same `root_id` due to deduplication in `ProjectFolders::new`:
```rust
// Example from rustup
/Users/roife/code/rustup/target/debug/build/rustup-863a063426b56c51/out
/Users/roife/code/rustup
```
In `source_root_parent_map`, r-a might encounter paths whose `SourceRootId` (i.e. `root_id`) is identical, yet one of them is the parent of the other. This situation can cause the `root_id` to be its own parent, potentially leading to an infinite loop.
This PR resolves such cases by adding a check.
feat: TOML based config for rust-analyzer
> Important
>
> We don't promise _**any**_ stability with this feature yet, any configs exposed may be removed again, the grouping may change etc.
# TOML Based Config for RA
This PR ( addresses #13529 and this is a follow-up PR on #16639 ) makes rust-analyzer configurable by configuration files called `rust-analyzer.toml`. Files **must** be named `rust-analyzer.toml`. There is no strict rule regarding where the files should be placed, but it is recommended to put them near a file that triggers the server to start (i.e., `Cargo.{toml,lock}`, `rust-project.json`).
## Configuration Types
Previous configuration keys are now split into three different classes.
1. Client keys: These keys only make sense when set by the client (e.g., by setting them in `settings.json` in VSCode). They are but a small portion of this list. One such example is `rust_analyzer.files_watcher`, based on which either the client or the server will be responsible for watching for changes made to project files.
2. Global keys: These keys apply to the entire workspace and can only be set on the very top layers of the hierarchy. The next section gives instructions on which layers these are.
3. Local keys: Keys that can be changed for each crate if desired.
### How Am I Supposed To Know If A Config Is Gl/Loc/Cl ?
#17101
## Configuration Hierarchy
There are 5 levels in the configuration hierarchy. When a key is searched for, it is searched in a bottom-up depth-first fashion.
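To illustrate the lookup order, here is a minimal sketch (hypothetical types, not the actual implementation): starting from the layer closest to the file, each layer is consulted in turn and the first one that defines the key wins, falling back to the default.
```rust
use std::collections::HashMap;

// Hypothetical layered lookup: `parent` points towards the root of the hierarchy.
struct ConfigLayer {
    values: HashMap<String, String>,
    parent: Option<Box<ConfigLayer>>,
}

impl ConfigLayer {
    fn get<'a>(&'a self, key: &str, default: &'a str) -> &'a str {
        match self.values.get(key) {
            Some(value) => value.as_str(),
            None => match &self.parent {
                Some(parent) => parent.get(key, default),
                None => default,
            },
        }
    }
}
```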
### Default Configuration
**Scope**: Global, Local, and Client
This is a hard-coded set of configurations. If a configuration key cannot be found in any other layer, its default value applies.
### User configuration
**Scope**: Global, Local
If you want your configurations to apply to **every** project you have, you can do so by setting them in your `$CONFIG_DIR/rust-analyzer/rust-analyzer.toml` file, where `$CONFIG_DIR` is :
| Platform | Value | Example |
| ------- | ------------------------------------- | ---------------------------------------- |
| Linux | `$XDG_CONFIG_HOME` or `$HOME`/.config | /home/alice/.config |
| macOS | `$HOME`/Library/Application Support | /Users/Alice/Library/Application Support |
| Windows | `{FOLDERID_RoamingAppData}` | C:\Users\Alice\AppData\Roaming |
### Client configuration
**Scope**: Global, Local, and Client
Previously, the only way to configure rust-analyzer was to configure it from the settings of the Client you are using. This level corresponds to that.
> With this PR, you don't need to port anything to benefit from new features. You can continue to use your old settings as they are.
### Workspace Root Configuration
**Scope**: Global, Local
Rust-analyzer already uses the path of the workspace you opened in your Client. With this information, a configuration file placed at the workspace root can define global-level configurations without affecting your other projects.
### Local Configuration
**Scope**: Local
You can also configure rust-analyzer on a crate level. Although it is not an error to define global ( or client ) level keys in such files, they won't be taken into consideration by the server. Defined local keys will affect the crate in which they are defined and that crate's descendants. Internally, a Rust project is split into what we call `SourceRoot`s. This, although with exceptions, is equal to splitting a project into crates.
> You may choose to have more than one `rust-analyzer.toml` files within a `SourceRoot`, but among them, the one closer to the project root will be
Add preference modifier for workspace-local crates when using auto import.
`@joshka` pointed out some odd behavior of auto import ordering. The current heuristics didn't seem to apply any sort of precedence to imports from the workspace, so I've gone ahead and added that.
I hope to get some feedback on the modifier numbers here. I just went with something that felt like it balanced giving more power to workspace crates without completely ignoring relative path distance.
closes https://github.com/rust-lang/rust-analyzer/issues/17303
fix: do not resolve prelude within block modules
fix #17338 (continuing from #17251).
In #17251, we injected preludes into non-top-level modules, which led r-a to resolve names directly in the preludes of block modules. This PR fixes that by checking whether the module is a pseudo-module introduced by a block (similar to what we do for extern preludes).
Changed package.json so vscode extension settings have submenus
There are a lot of options in rust-analyzer, and sometimes it can be hard to find the one you are looking for. To fix this I have put all configurations into categories based on their names. I have also changed the schema in `crates/rust-analyzer/src/config.rs` to reflect this.
Currently the title is redeclared for each generated entry; this does work, but I am prepared to change it if it is a problem.
Currently, rust-analyzer highlights the entire region when a `cfg` is
inactive (e.g. `#[cfg(windows)]` on a Linux machine). However,
unlinked files only highlight the first three characters of the file.
This was introduced in #8444, but users have repeatedly found
themselves with no rust-analyzer support for a file and unsure
why (see e.g. #13226 and the intentionally prominent pop-up added in
PR #14366).
(Anecdotally, we see this issue bite our users regularly, particularly
people new to Rust.)
Instead, highlight the entire inactive file, but mark it all as
unused. This allows users to hover and run the quickfix from any line.
Whilst this is marginally more prominent, it's less invasive than a
pop-up, and users do want to know why they're getting no rust-analyzer
support in certain files.
Feat: hide double underscored symbols from symbol search
Fixes #17272 by changing the default behavior of the query to skip results that start with `__` (two underscores).
Not sure if this has any far reaching implications - a review would help to understand if this is the right place to do the filtering, and if it's fine to do it by default on the query.
If you type `__` as your search, then we'll show the matching double unders, just in case you actually need the symbol.
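A tiny sketch of the intended rule (hypothetical helper, not the actual query code):
```rust
// Hide `__`-prefixed symbols unless the query itself starts with `__`.
fn keep_symbol(query: &str, symbol_name: &str) -> bool {
    query.starts_with("__") || !symbol_name.starts_with("__")
}

fn main() {
    assert!(!keep_symbol("alloc", "__rust_alloc"));
    assert!(keep_symbol("__", "__rust_alloc"));
    assert!(keep_symbol("alloc", "alloc_zeroed"));
}
```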
Don't mark `#[rustc_deprecated_safe_2024]` functions as unsafe
`std::env::set_var` will be unsafe in edition 2024, but not before it. I couldn't quite figure out how to check for the span properly, so for now we just turn the false positives into false negatives, which are less bad.
Add `Function::fn_ptr_type(…)` for obtaining name-erased function type
The use case of this function is being able to group functions by their function pointer type.
cc `@flodiebold`
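As a plain-Rust illustration of the idea (this is not the new hir API itself): two different functions share the same name-erased function pointer type, so they can be grouped under it.
```rust
fn double(x: i32) -> i32 { x * 2 }
fn negate(x: i32) -> i32 { -x }

fn main() {
    // Both functions erase to the same pointer type `fn(i32) -> i32`.
    let group: Vec<fn(i32) -> i32> = vec![double, negate];
    let results: Vec<i32> = group.iter().map(|f| f(3)).collect();
    assert_eq!(results, vec![6, -3]);
}
```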
fix: Only generate snippets for `extract_expressions_from_format_string` if snippets are supported
Part of #17332
Fixes `extract_expressions_from_format_string` so that it doesn't generate snippets if the client doesn't support it.
fix diagnostics clearing when flychecks run per-workspace
This might be the cause of #17300, or it might be a different bug in the same functionality.
I wonder if the decision to clear diagnostics should stay in the main loop or maybe the flycheck itself should track it and tell the mainloop?
I have used a hash map, but we could just as well use a vector, since the IDs are `usize`s in some given range starting at 0. It would probably be faster, but this just felt a bit cleaner, and it allows us to change the ID to a newtype later and use a hasher that returns the underlying integer.
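As a sketch of that last idea (names are illustrative): a hasher that just returns the underlying integer, so a hash map over such IDs costs about as much as direct indexing.
```rust
use std::collections::HashMap;
use std::hash::{BuildHasherDefault, Hasher};

// "Hash" an integer ID by returning it unchanged.
#[derive(Default)]
struct IdHasher(u64);

impl Hasher for IdHasher {
    fn finish(&self) -> u64 {
        self.0
    }
    fn write(&mut self, _bytes: &[u8]) {
        unreachable!("IdHasher is only used with integer keys");
    }
    fn write_usize(&mut self, id: usize) {
        self.0 = id as u64;
    }
}

type IdMap<V> = HashMap<usize, V, BuildHasherDefault<IdHasher>>;
```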
Add `toggle_async_sugar` assist code action
Implement code action for sugaring and de-sugaring asynchronous functions.
This code action does not import the `Future` trait when de-sugaring and does not touch the function body; I guess this can be implemented later if needed. This action also does not take other bounds into consideration because IMO it's usually "let me try to use the sugared version here".
Feel free to request changes, this is my first code action implementation 😄 Closes #17010
Relates to #16195
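For reference, these are the two forms the assist toggles between (illustrative code):
```rust
use std::future::Future;

// Sugared form.
async fn load_sugared() -> u8 {
    0
}

// Desugared form: the body is wrapped in an async block.
fn load_desugared() -> impl Future<Output = u8> {
    async { 0 }
}
```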
Implement assist to switch between doc and normal comments
Hey first PR to rust-analyzer to get my feet wet with the code base. It's an assist to switch a normal comment to a doc comment and back, something I've found myself doing by hand a couple of times.
I shamelessly stole `relevant_line_comments` from `convert_comment_block`, because I didn't see any inter-assist imports happening in the files I peeked at so I thought this would be preferable.
Fix `data_constructor` ignoring generics for structs
Previously this didn't work for structs with generics, due to `field.ty()` having placeholders in the type.
_Enums were handled correctly already._
Also renamed `type_constructor -> data_constructor`, as this is a more correct name for it.
Allow sysroots to only consist of the source root dir
Fixes https://github.com/rust-lang/rust-analyzer/issues/17159
This PR encodes the `None` case of an optional sysroot into `Sysroot` itself. This simplifies a lot of things and allows us to have sysroots that consist of nothing, only standard library sources, everything but the standard library sources or everything. This makes things a lot more flexible. Additionally, this removes the workspace status bar info again, as it turns out that that can be too much information for the status bar to handle (this is better rendered somewhere else, like in the status view).
Fix: infer type of async block with tail return expr
Fixes #17106
The `infer_async_block` method calls the `infer_block` method internally, which returns the never type without coercion when `tail_expr` is `None` and `ctx.diverges` is `Diverges::Always`. This is the reason for the bug in this issue.
cfce2bb46d/crates/hir-ty/src/infer/expr.rs (L1411-L1413)
This PR solves the bug by adding a coercion step after calling the `infer_block` method.
This code passes all the tests, including the tests I added for this issue; however, I am not sure if this solution is right. I think it is an ad hoc solution, so I would appreciate your review.
I apologize if I'm off the mark, but perhaps the `infer_async_block` method should be rewritten to share code with the inference of `expr::Closure` instead of using the `infer_block` method. That way it would be closer to rustc's inference process.
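A minimal example of the affected pattern, as I understand the issue (hypothetical code): an async block whose body ends in a `return` expression should have its output type inferred from that expression rather than falling back to the never type.
```rust
use std::future::Future;

fn affected() -> impl Future<Output = i32> {
    async {
        // The tail `return` should make the block's output type `i32`.
        return 1;
    }
}
```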
handle {self} when removing unused imports
Fixes #17139
On master
```rs
mod inner {
pub struct X();
pub struct Y();
}
mod z {
use super::inner::{self, X}$0;
fn f() {
let y = inner::Y();
}
}
```
becomes
```rs
mod inner {
pub struct X();
pub struct Y();
}
mod z {
use super::inner:self;
fn f() {
let y = inner::Y();
}
}
```
with this fix it instead becomes
```rs
mod inner {
pub struct X();
pub struct Y();
}
mod z {
use super::inner;
fn f() {
let y = inner::Y();
}
}
```
fix: ensure implied bounds from associated types are considered in autocomplete
closes: #16989
rust-analyzer needs to consider implied bounds from associated types in order to get all the method suggestions people expect. A pretty easy way to do that is to keep the `candidate_trait_id`'s receiver if it matches `TyFingerprint::Unnameable`. When benchmarking this change, I didn't notice a meaningful difference in autocomplete latency.
(`TyFingerprint::Unnameable` corresponds to `TyKind::AssociatedType`, `TyKind::OpaqueType`, `TyKind::FnDef`, `TyKind::Closure`, `TyKind::Coroutine`, and `TyKind::CoroutineWitness`.)
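A small example of the kind of completion this enables (illustrative code): the declared bound `Item: Clone` is implied wherever a value of type `T::Item` shows up, so `clone` should be suggested.
```rust
trait Container {
    type Item: Clone;
    fn first(&self) -> Self::Item;
}

fn demo<T: Container>(container: T) {
    let item = container.first();
    // Completing on `item.` should offer `clone()` thanks to the implied bound.
    let _copy = item.clone();
}
```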
Clear diagnostics only after new ones were received
Closes #15934
This adds a flag inside the global state which controls when old diagnostics are cleared. Now, old diagnostics should be cleared only after at least one new diagnostic is available.
feat: More callable info
With this PR we retain more info about callables other than functions, allowing for closure parameter type inlay hints to be linkable as well as better signature help around closures and `Fn*` implementors.
Fix OOM caused by term search
The issue came from the multi-Cartesian product for exprs with many (25+) arguments, each having multiple options.
The solution is twofold:
### Avoid blowing up in Cartesian product
**Before the logic was:**
1. Find expressions for each argument/param - there may be many
2. Take the Cartesian product (which blows up in some cases)
3. If there are more than 2 options, throw them away by squashing them to `Many`
**Now the logic is:**
1. Find expressions for each argument/param and squash them to `Many` if there are more than 2, as otherwise we are guaranteed to also have more than 2 after taking the product, which means squashing them anyway.
2. Take the Cartesian product on an iterator
3. Start consuming it one by one
4. If there are more than 2 options, throw them away by squashing them to `Many` (same as before)
This is also why I had to update some tests, as the expressions get squashed to `Many` more eagerly.
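A simplified sketch of steps 2-4 (hypothetical types, not the real term-search code): candidates are consumed lazily and the search bails out to `Many` as soon as more than two have been produced, instead of materializing the whole product first.
```rust
enum Exprs {
    Few(Vec<String>),
    Many,
}

fn squash(candidates: impl Iterator<Item = String>) -> Exprs {
    let mut found = Vec::new();
    for candidate in candidates {
        found.push(candidate);
        if found.len() > 2 {
            // More than two options: stop consuming the product early.
            return Exprs::Many;
        }
    }
    Exprs::Few(found)
}
```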
### Use fuel to avoid long search times and high memory usage
Now all the tactics use `should_continue: Fn() -> bool` to check if they should keep iterating _(similarly to chalk)_.
This reduces the search times by an order of magnitude, for example from ~139ms/hole to ~14ms/hole for the `ripgrep` crate.
There are slightly fewer expressions found, but I think the speed gain is worth it for usability.
Also note that syntactic hits decrease more because of squashing, so you simply need to run the search multiple times to get full terms.
Also, the worst case (for example the `nalgebra` crate, as it has tons of generics) now has search times mostly under 200ms.
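A rough sketch of the fuel mechanism (names are illustrative): each tactic polls `should_continue` and stops expanding candidates once the fuel is exhausted.
```rust
use std::cell::Cell;

fn search_with_fuel(fuel: u32) -> u32 {
    let remaining = Cell::new(fuel);
    let should_continue = || {
        if remaining.get() == 0 {
            return false;
        }
        remaining.set(remaining.get() - 1);
        true
    };

    let mut explored = 0;
    while should_continue() {
        // ...expand one more candidate expression here...
        explored += 1;
    }
    explored
}
```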
Benchmarks on `ripgrep` crate
Before:
```
Tail Expr syntactic hits: 291/1692 (17%)
Tail Exprs found: 1253/1692 (74%)
Term search avg time: 139ms
```
After:
```
Tail Expr syntactic hits: 239/1692 (14%)
Tail Exprs found: 1226/1692 (72%)
Term search avg time: 14ms
```
fix: keep parentheses when the precedence of inner expr is lower than the outer one
fix #17185
Additionally, this PR simplifies some code in `apply_demorgan`.
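A hypothetical illustration of why the parentheses matter: `||` binds more loosely than `==`, so when `!(a && b)` inside a comparison is rewritten, the result has to stay parenthesized.
```rust
fn main() {
    let (a, b, c) = (true, false, true);

    let before = !(a && b) == c;
    // De Morgan rewrite of the operand; dropping the parentheses would
    // reparse as `!a || (!b == c)`, which is a different expression.
    let after = (!a || !b) == c;

    assert_eq!(before, after);
}
```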
If rust-analyzer receives a malformed LSP request, the IO thread
terminates with a meaningful error, but then closes the channel.
Once the channel has closed, the main_loop also terminates, but it
only has RecvError and can't show a meaningful error. As a result,
rust-analyzer would incorrectly claim that the client forgot to
shutdown.
```
$ buggy_lsp_client | rust-analyzer
Error: client exited without proper shutdown sequence
```
Instead, include both error messages when the server shuts down.
Fix source_range for INT_NUMBER in completion
fix #17179.
Previously r-a used `TextRange::empty(self.position.offset)` as the `source_range` for `INT_NUMBER`, so the `text_edit` would always be an insertion, which resulted in #17179.
This PR changes it to use the `text_range` of the `original_token` (same as for `IDENT`).
The documentation for `lens.run.enable` states that it only applies
when `lens.enable` is set. However, the config setting controlling whether to show
the Run lens did not check `lens.enable`, so the Run lens would show
even though lenses were disabled.
In the stabilization attempt of `#[unix_sigpipe = "sig_dfl"]`, a concern
was raised related to using a language attribute for the feature: Long
term, we want `fn lang_start()` to be definable by any crate, not just
libstd. Having a special language attribute in that case becomes
awkward.
So as a first step towards the next stabilization attempt, this
PR changes the `#[unix_sigpipe = "..."]` attribute to a compiler flag
`-Zon-broken-pipe=...` to remove that concern, since now the language
is not "contaminated" by this feature.
Another point was also raised, namely that the UI should not leak
**how** it does things, but rather what the **end effect** is. The new
flag uses the proposed naming. This is of course something that can be
iterated on further before stabilization.
fix: Tracing span names should match function names
When viewing traces, it's slightly confusing when the span name doesn't match the function name. Ensure the names are consistent.
(It might be worth moving most of these to use `#[tracing::instrument]` so the name can never go stale. `@davidbarsky` suggested that is marginally slower, so I've just done the simple change here.)
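For illustration (hypothetical function names): a manually named span has to be kept in sync by hand, whereas `#[tracing::instrument]` derives the span name from the function.
```rust
use tracing::{info_span, instrument};

// The span name is written out by hand and must match the function name.
fn load_workspace() {
    let _span = info_span!("load_workspace").entered();
    // ...
}

// The span name is derived from the function name, so it cannot go stale.
#[instrument]
fn load_sysroot() {
    // ...
}
```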
Before this commit `UseTree::remove_unnecessary_braces` removed the braces
around `{self}` in `use x::y::{self};` but `use x::y::self;` is not valid
rust.
feature: Make generate function assist generate a function as a constructor if the generated function has the name "new" and is an associated function.
close #17050
This PR makes `generate function assist` generate a function as a constructor if the generated function has the name "new" and is an associated function.
If the associated type is a record struct, it generates the constructor like this.
```rust
impl Foo {
fn new() -> Self {
Self { field_1: todo!(), field_2: todo!() }
}
}
```
If the associated type is a tuple struct, it generates the constructor like this.
```rust
impl Foo {
fn new() -> Self {
Self(todo!(), todo!())
}
}
```
If the associated type is a unit struct, it generates the constructor like this.
```rust
impl Foo {
fn new() -> Self {
Self
}
}
```
If the associated type is another ADT, it generates the constructor like this.
```rust
impl Foo {
fn new() -> Self {
todo!()
}
}
```
Support hovering limits for adts
Fix #17009
1. Currently, r-a supports limiting the number of struct fields displayed when hovering. This PR extends it to support enum variants and union fields. Since the display of these three (ADTs) is similar, this PR extends `hover_show_structFields` to `hover_show_adtFieldsOrVariants`.
2. This PR also resolves the problem that the layout of an ADT was not restricted by the display limit when hovering on the Self type.
3. Additionally, this PR changes the default value of display limitations to `10` (instead of the original `null`), which helps users discover this feature.
Make `cargo run` always available for binaries
Previously, items for `cargo test` and `cargo check` would appear in
the `Select Runnable` quick pick that appears when running
`rust-analyzer: Run`, but `run` would only appear as a runnable if a
`main` function was selected in the editor. This change adds `cargo
run` as an always-available runnable command for binary packages.
This makes it easier to develop CLI/TUI applications, as now users can
run the application from anywhere in their codebase.
chore: add some `tracing` to project loading
I wanted to see what's happening during project loading and if it could be parallelized. I'm thinking maybe, but it's not this PR :)
Try to generate more meaningful names in json converter
I just found out about the rust-analyzer JSON converter, but I think it would be more convenient if the names were more useful, e.g. derived from the names of the keys.
Let's look at some realistic arbitrary JSON:
```json
{
"user": {
"address": {
"street": "Main St",
"house": 3
},
"email": "example@example.com"
}
}
```
I think the newly generated code is much easier to read and to edit than the old:
```rust
// Old
struct Struct1{ house: i64, street: String }
struct Struct2{ address: Struct1, email: String }
struct Struct3{ user: Struct2 }
// New
struct Address1{ house: i64, street: String }
struct User1{ address: Address1, email: String }
struct Root1{ user: User1 }
```
Ideally, if we drop the numbers, I can see it being usable just as is (maybe rename the root)
```rust
struct Address{ house: i64, street: String }
struct User{ address: Address, email: String }
struct Root{ user: User }
```
Sadly, we can't just drop them, because there can be multiple (recursive) fields with the same name, and we can't easily add numbers retroactively when a name has 2 instances, due to parsing being single-pass.
We could omit the `1` and add a number only if it's > 1, but I will leave this open to discussion; for now I went with the simpler way.
In sum, even with the numbers, I think this PR still helps readability.
* Added config `runnables.extraTestBinaryArgs` to control the args.
* The default is `--show-output` rather than `--nocapture` to prevent
unreadable output when 2 or more tests fail or print output at once.
* Renamed variables in `CargoTargetSpec::runnable_args()` for clarity.
Fixes <https://github.com/rust-lang/rust-analyzer/issues/12737>.
Instead of using `core::fmt::format` to format panic messages, which may in turn
panic too and cause recursive panics and other messy things, redirect
`panic_fmt` to `const_panic_fmt` like CTFE, which in turn goes to
`panic_display` and does the things normally. See the tests for the full
call stack.
As of #6246, rust-analyzer follows symlinks. This can introduce an
infinite loop if symlinks point to parent directories.
Considering that #6246 was added in 2020 without many bug reports,
this is clearly a rare occurrence. However, I am observing
rust-analyzer hang on projects that have symlinks of the form:
```
test/a_symlink -> ../../
```
Ignore symlinks that only point to the parent directories, as this is
more robust but still allows typical symlink usage patterns.
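A sketch of the check described above (hypothetical helper, not the actual VFS code): a symlink is skipped when its resolved target is an ancestor of the directory that contains the symlink.
```rust
use std::path::Path;

fn points_to_ancestor(symlink_dir: &Path, resolved_target: &Path) -> bool {
    symlink_dir.starts_with(resolved_target)
}

fn main() {
    // `test/a_symlink -> ../../` resolves to an ancestor of the symlink's directory.
    assert!(points_to_ancestor(Path::new("/repo/test"), Path::new("/")));
    // A link into an unrelated directory is still followed.
    assert!(!points_to_ancestor(Path::new("/repo/test"), Path::new("/repo/target/data")));
}
```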
fix: Replace Just the variable name in Unused Variable Diagnostic Fix
Changes Unused Variable diagnostic to just look at the variable name, not the entire syntax range.
Also added a test for an unused variable in an array destructure.
Closes #17053
It is a bit-set semantically --- many categorical things can be true about
a reference at the same time.
In particular, a reference can be a "test" and a "write" at the same
time.
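A minimal sketch of what a bit-set representation makes possible (illustrative, not the actual type): multiple categories can be set on one reference at once.
```rust
#[derive(Clone, Copy)]
struct Category(u8);

impl Category {
    const READ: Category = Category(1 << 0);
    const WRITE: Category = Category(1 << 1);
    const TEST: Category = Category(1 << 2);

    fn contains(self, other: Category) -> bool {
        (self.0 & other.0) == other.0
    }
}

impl std::ops::BitOr for Category {
    type Output = Category;
    fn bitor(self, rhs: Category) -> Category {
        Category(self.0 | rhs.0)
    }
}

fn main() {
    // A reference can be both a "test" and a "write" at once.
    let category = Category::TEST | Category::WRITE;
    assert!(category.contains(Category::TEST));
    assert!(category.contains(Category::WRITE));
    assert!(!category.contains(Category::READ));
}
```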
internal: redesign rust-analyzer::config
This PR aims to cover the infrastructural requirements for the `rust-analyzer.toml` (#13529) issue. This means that:
1. We no longer have a single config base. The once-single `ConfigData` has been divided into 4: a tree of `.ratoml` files; a set of configs coming from the client (this is what was previously called the `CrateData`, except that now values do not default to anything when they are not defined); a set of configs that reflects the contents of a `ratoml` file defined in the user's config directory (e.g. `~/.config/rust-analyzer/.rust-analyzer.toml`); and finally a tree root that is populated by default values only.
2. Configs have also been divided into 3 different blocks: `global`, `local`, `client`. The current status of a config may change until #13529 is merged.
Once again many thanks to `@cormacrelf` for doing all the serde work.
internal: improve `TokenSet` implementation and add reserved keywords
The current `TokenSet` type represents "A bit-set of `SyntaxKind`s" as a newtype `u128`.
Internally, the flag for each `SyntaxKind` variant in the bit-set is set as the n-th LSB (least significant bit) via a bit-wise left shift operation, where n is the discriminant.
Edit: This is problematic because there's currently ~121 token `SyntaxKind`s, so adding new token kinds for missing reserved keywords increases the number of token `SyntaxKind`s above 128, thus making this ["mask"](7a8374c162/crates/parser/src/token_set.rs (L31-L33)) operation overflow.
~~This is problematic because there's currently 266 SyntaxKinds, so this ["mask"](7a8374c162/crates/parser/src/token_set.rs (L31-L33)) operation silently overflows in release mode.~~
~~This leads to a single flag/bit in the bit-set being shared by multiple `SyntaxKind`s~~.
This PR:
- Changes the wrapped type for `TokenSet` from `u128` to `[u64; 3]` ~~`[u*; N]` (currently `[u16; 17]`) where `u*` can be any desirable unsigned integer type and `N` is the minimum array length needed to represent all token `SyntaxKind`s without any collisions~~.
- Edit: Add assertion that `TokenSet`s only include token `SyntaxKind`s
- Edit: Add ~7 missing [reserved keywords](https://doc.rust-lang.org/stable/reference/keywords.html#reserved-keywords)
- ~~Moves the definition of the `TokenSet` type to grammar codegen in xtask, so that `N` is adjusted automatically (depending on the chosen `u*` "base" type) when new `SyntaxKind`s are added~~.
- ~~Updates the `token_set_works_for_tokens` unit test to include the `__LAST` `SyntaxKind` as a way of catching overflows in tests.~~
~~Currently `u16` is arbitrarily chosen as the `u*` "base" type mostly because it strikes a good balance (IMO) between unused bits and readability of the generated `TokenSet` code (especially the [`union` method](7a8374c162/crates/parser/src/token_set.rs (L26-L28))), but I'm open to other suggestions or a better methodology for choosing `u*` type.~~
~~I considered using a third-party crate for the bit-set, but a direct implementation seems simple enough without adding any new dependencies. I'm not strongly opposed to using a third-party crate though, if that's preferred.~~
~~Finally, I haven't had the chance to review issues, to figure out if there are any parser issues caused by collisions due the current implementation that may be fixed by this PR - I just stumbled upon the issue while adding "new" keywords to solve #16858~~
Edit: fixes #16858
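A condensed sketch of the new representation (simplified, with `u16` standing in for `SyntaxKind`): the bit for kind `n` lives in word `n / 64` at position `n % 64`, so more than 128 kinds fit without overflow.
```rust
#[derive(Clone, Copy)]
struct TokenSet([u64; 3]);

impl TokenSet {
    const fn new(kinds: &[u16]) -> TokenSet {
        let mut bits = [0u64; 3];
        let mut i = 0;
        while i < kinds.len() {
            let kind = kinds[i] as usize;
            bits[kind / 64] |= 1u64 << (kind % 64);
            i += 1;
        }
        TokenSet(bits)
    }

    const fn union(self, other: TokenSet) -> TokenSet {
        TokenSet([
            self.0[0] | other.0[0],
            self.0[1] | other.0[1],
            self.0[2] | other.0[2],
        ])
    }

    const fn contains(&self, kind: u16) -> bool {
        let kind = kind as usize;
        (self.0[kind / 64] & (1u64 << (kind % 64))) != 0
    }
}
```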
Better inline preview for postfix completion
Better inline preview for postfix completion, a proper implementation of c5686c8941.
Here editors may filter completion items using the text within the `delete_range`, so we need to include the receiver text in the `lookup` (aka `filterText` in the LSP spec) for editors to find the completion item. (See https://github.com/rust-lang/rust-analyzer/issues/17036#issuecomment-2056614180, thanks to [pascalkuthe](https://github.com/pascalkuthe))
fix: Fix inlay hint resolution being broken
So, things broke because we now store a hash (u64) in the resolution payload, but JavaScript, and hence JSON, only supports integers of up to 53 bits (anything beyond gets truncated in various ways), which caused almost all hashes to differ when resolving them. This masks the hash to 53 bits to work around that.
Fixes https://github.com/rust-lang/rust-analyzer/issues/16962
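Concretely, the workaround amounts to something like this (illustrative):
```rust
fn main() {
    // Keep only the low 53 bits, the widest integers that survive a
    // round-trip through JavaScript/JSON numbers unchanged.
    let hash: u64 = 0xDEAD_BEEF_DEAD_BEEF;
    let masked = hash & ((1u64 << 53) - 1);
    assert!(masked < (1u64 << 53));
}
```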
fix: VFS should not confuse paths with source roots that have the same prefix
Previously, the VFS would assign paths to the source root that had the longest string prefix match. This would break when we had source roots in subdirectories:
```
/foo
/foo/bar
```
Given a file `/foo/bar_baz.rs`, we would attribute it to the `/foo/bar` source root, which is wrong.
As a result, we would attribute paths to the wrong crate when a crate was in a subdirectory of another one. This is more common in larger monorepos, but could occur in any Rust project.
Fix this in the VFS, and add a test.
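A small demonstration of the difference (illustrative): string-prefix matching claims `/foo/bar_baz.rs` for `/foo/bar`, while component-wise matching does not.
```rust
use std::path::Path;

fn main() {
    let root = "/foo/bar";
    let file = "/foo/bar_baz.rs";

    // Longest string prefix: wrongly matches.
    assert!(file.starts_with(root));
    // Component-wise prefix: correctly rejects.
    assert!(!Path::new(file).starts_with(root));
}
```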
internal: make function builder create ast directly
I am working on #17050.
In the process, I noticed a place in the code that could be refactored.
Currently, the `function builder` creates the `ast` through the `function template`, but those two processes can be combined into one function.
I thought I should work on this first and created a PR.
Do not accept the following
```rust
macro_rules! lexes {($($_:tt)*) => {}}
lexes!(🐛"foo");
```
Before, invalid emoji identifiers were gated during parsing instead of lexing in all cases, but this didn't account for macro expansion of literal prefixes.
Fix #123696.
Fix off-by-one error converting to LSP UTF8 offsets with multi-byte char
On this file,
```rust
fn main() {
let 된장 = 1;
}
```
when using `"positionEncodings":["utf-16"]` I get an "unused variable" diagnostic on the variable
name (codepoint offset range `8..10`). So far so good.
When using `positionEncodings":["utf-8"]`, I expect to get the equivalent range in bytes (LSP:
"Character offsets count UTF-8 code units (e.g bytes)."), which is `8..14`, because both
characters are 3 bytes in UTF-8. However I actually get `10..14`.
Looks like this is because we accidentally treat a 1-based index as an offset value: when
converting from our internal char-indices to LSP byte offsets, we look at one character too many.
This causes wrong results if the extra character is a multi-byte one, such as when computing
the start coordinate of 된장.
Fix that by actually passing an offset. While at it, fix the variable name of the line number,
which is not an offset (yet).
Originally reported at https://github.com/kakoune-lsp/kakoune-lsp/issues/740
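For illustration, the conversion in question boils down to something like this (simplified; the real code works on line/column pairs):
```rust
// Map a character offset within a line to a UTF-8 byte offset by summing the
// byte lengths of the characters *before* that position (and not one extra).
fn char_offset_to_utf8_offset(line: &str, char_offset: usize) -> usize {
    line.chars().take(char_offset).map(char::len_utf8).sum()
}

fn main() {
    let line = "    let 된장 = 1;";
    // Codepoint range 8..10 of the variable name corresponds to byte
    // range 8..14, each Hangul syllable being 3 bytes in UTF-8.
    assert_eq!(char_offset_to_utf8_offset(line, 8), 8);
    assert_eq!(char_offset_to_utf8_offset(line, 10), 14);
}
```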