Try to generate more meaningful names in json converter
I just found out about the rust-analyzer JSON converter, but I think it would be more convenient if the names were more useful, like using the names of the keys.
Let's look at some realistic, arbitrary JSON:
```json
{
    "user": {
        "address": {
            "street": "Main St",
            "house": 3
        },
        "email": "example@example.com"
    }
}
```
I think the new generated code is much easier to read and edit than the old:
```rust
// Old
struct Struct1 { house: i64, street: String }
struct Struct2 { address: Struct1, email: String }
struct Struct3 { user: Struct2 }
// New
struct Address1 { house: i64, street: String }
struct User1 { address: Address1, email: String }
struct Root1 { user: User1 }
```
Ideally, if we drop the numbers, I can see it being usable just as is (maybe rename `Root`):
```rust
struct Address { house: i64, street: String }
struct User { address: Address, email: String }
struct Root { user: User }
```
Sadly, we can't just drop the numbers: there can be multiple (recursively nested) fields with the same name, and we can't easily retroactively add numbers once a name turns out to have two instances, due to parsing being single-pass.
We could omit the `1` and add a number only when there is more than one instance, but I'll leave this open to discussion; for now I've gone with the simpler way.
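To make the collision concrete: here's a hypothetical input (not from the PR) where two different objects share the key `address`, so the suffixes are what keep the generated structs apart:
```rust
// Hypothetical JSON input, sketched in a comment:
//   { "home": { "address": { "street": "...", "house": 1 } },
//     "work": { "address": { "street": "...", "zip": "..." } } }
// Both nested objects are keyed `address`, so the generated structs rely
// on the numeric suffix to stay distinct:
struct Address1 { house: i64, street: String }
struct Address2 { street: String, zip: String }
struct Home1 { address: Address1 }
struct Work1 { address: Address2 }
struct Root1 { home: Home1, work: Work1 }
```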
In sum, even with the numbers, I think this PR still helps readability.
* Added config `runnables.extraTestBinaryArgs` to control the args.
* The default is `--show-output` rather than `--nocapture` to prevent
unreadable output when 2 or more tests fail or print output at once.
* Renamed variables in `CargoTargetSpec::runnable_args()` for clarity.
Fixes <https://github.com/rust-lang/rust-analyzer/issues/12737>.
As of #6246, rust-analyzer follows symlinks. This can introduce an
infinite loop if symlinks point to parent directories.
Considering that #6246 was added in 2020 without many bug reports,
this is clearly a rare occurrence. However, I am observing
rust-analyzer hang on projects that have symlinks of the form:
```
test/a_symlink -> ../../
```
Ignore symlinks that point only to parent directories, as this is
more robust while still allowing typical symlink usage patterns.
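A minimal sketch of the check (a hypothetical helper, not the actual loader code), assuming we skip a symlink whose canonical target is the directory it lives in or an ancestor of it:
```rust
use std::path::Path;

/// Returns true if following `symlink` would re-enter a directory we came
/// from, i.e. its resolved target contains the symlink itself.
fn points_to_parent(symlink: &Path) -> bool {
    let Ok(target) = std::fs::canonicalize(symlink) else { return false };
    let Ok(dir) = std::fs::canonicalize(symlink.parent().unwrap_or(Path::new(""))) else {
        return false;
    };
    // For `test/a_symlink -> ../../`, `target` is an ancestor of `dir`,
    // so recursing into the link would loop forever.
    dir.ancestors().any(|ancestor| ancestor == target)
}
```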
fix: Replace just the variable name in the unused-variable diagnostic fix
Changes the unused-variable diagnostic fix to look at just the variable name, not the entire syntax range.
Also added a test for an unused variable in an array destructure.
Closes #17053
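For illustration, a hypothetical before/after mirroring the new array-destructure test:
```rust
fn main() {
    let [a, b] = [1, 2]; // warning: unused variable: `a`
    let _ = b;
    // The quick fix used to replace a wider syntax range; it now renames
    // only the binding itself:
    //     let [_a, b] = [1, 2];
}
```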
It is a bitset semantically: many categorical things can be true about
a reference at the same time.
In particular, a reference can be a "test" and a "write" at the same
time.
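A minimal sketch of the bitset representation (the names are assumptions, not the actual rust-analyzer types):
```rust
#[derive(Clone, Copy, PartialEq, Eq)]
struct ReferenceCategory(u32);

impl ReferenceCategory {
    const READ: Self = Self(1 << 0);
    const WRITE: Self = Self(1 << 1);
    const TEST: Self = Self(1 << 2);

    fn union(self, other: Self) -> Self {
        Self(self.0 | other.0)
    }
    fn contains(self, other: Self) -> bool {
        self.0 & other.0 == other.0
    }
}

fn main() {
    // A reference that is written to inside a test: both bits are set.
    let cat = ReferenceCategory::WRITE.union(ReferenceCategory::TEST);
    assert!(cat.contains(ReferenceCategory::WRITE));
    assert!(cat.contains(ReferenceCategory::TEST));
}
```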
internal: redesign rust-analyzer::config
This PR aims to cover the infrastructural requirements of the `rust-analyzer.toml` issue (#13529). This means that:
1. We no longer have a single config base. The once-single `ConfigData` has been divided into four: a tree of `.ratoml` files; a set of configs coming from the client (this is what was previously called the `CrateData`, except that values no longer default to anything when they are not defined); a set of configs reflecting the contents of a `ratoml` file in the user's config directory (e.g. `~/.config/rust-analyzer/.rust-analyzer.toml`); and finally a tree root that is populated by default values only.
2. Configs have also been divided into 3 different blocks: `global`, `local`, `client`. The current status of a config may change until #13529 is merged.
Once again many thanks to `@cormacrelf` for doing all the serde work.
internal: improve `TokenSet` implementation and add reserved keywords
The current `TokenSet` type represents "A bit-set of `SyntaxKind`s" as a newtype `u128`.
Internally, the flag for each `SyntaxKind` variant in the bit-set is set as the n-th LSB (least significant bit) via a bit-wise left shift operation, where n is the discriminant.
Edit: This is problematic because there's currently ~121 token `SyntaxKind`s, so adding new token kinds for missing reserved keywords increases the number of token `SyntaxKind`s above 128, thus making this ["mask"](7a8374c162/crates/parser/src/token_set.rs (L31-L33)) operation overflow.
~~This is problematic because there's currently 266 SyntaxKinds, so this ["mask"](7a8374c162/crates/parser/src/token_set.rs (L31-L33)) operation silently overflows in release mode.~~
~~This leads to a single flag/bit in the bit-set being shared by multiple `SyntaxKind`s~~.
This PR:
- Changes the wrapped type for `TokenSet` from `u128` to `[u64; 3]` ~~`[u*; N]` (currently `[u16; 17]`) where `u*` can be any desirable unsigned integer type and `N` is the minimum array length needed to represent all token `SyntaxKind`s without any collisions~~.
- Edit: Add assertion that `TokenSet`s only include token `SyntaxKind`s
- Edit: Add ~7 missing [reserved keywords](https://doc.rust-lang.org/stable/reference/keywords.html#reserved-keywords)
- ~~Moves the definition of the `TokenSet` type to grammar codegen in xtask, so that `N` is adjusted automatically (depending on the chosen `u*` "base" type) when new `SyntaxKind`s are added~~.
- ~~Updates the `token_set_works_for_tokens` unit test to include the `__LAST` `SyntaxKind` as a way of catching overflows in tests.~~
~~Currently `u16` is arbitrarily chosen as the `u*` "base" type mostly because it strikes a good balance (IMO) between unused bits and readability of the generated `TokenSet` code (especially the [`union` method](7a8374c162/crates/parser/src/token_set.rs (L26-L28))), but I'm open to other suggestions or a better methodology for choosing `u*` type.~~
~~I considered using a third-party crate for the bit-set, but a direct implementation seems simple enough without adding any new dependencies. I'm not strongly opposed to using a third-party crate though, if that's preferred.~~
~~Finally, I haven't had the chance to review issues, to figure out if there are any parser issues caused by collisions due the current implementation that may be fixed by this PR - I just stumbled upon the issue while adding "new" keywords to solve #16858~~
Edit: fixes #16858
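A minimal sketch of the widened bit-set described above (assumed shape, not the exact implementation):
```rust
// Bit n of the set lives in word n / 64, at position n % 64, so any number
// of SyntaxKinds fits by growing the array; [u64; 3] covers up to 192.
#[derive(Clone, Copy)]
struct TokenSet([u64; 3]);

impl TokenSet {
    const fn new(kinds: &[u16]) -> TokenSet {
        let mut words = [0u64; 3];
        let mut i = 0;
        while i < kinds.len() {
            let kind = kinds[i] as usize;
            words[kind / 64] |= 1u64 << (kind % 64);
            i += 1;
        }
        TokenSet(words)
    }

    const fn union(self, other: TokenSet) -> TokenSet {
        TokenSet([
            self.0[0] | other.0[0],
            self.0[1] | other.0[1],
            self.0[2] | other.0[2],
        ])
    }

    const fn contains(&self, kind: u16) -> bool {
        let kind = kind as usize;
        self.0[kind / 64] & (1u64 << (kind % 64)) != 0
    }
}
```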
Better inline preview for postfix completion
Better inline preview for postfix completion, a proper implementation of c5686c8941.
Here, editors may filter completion items using the text within the `delete_range`, so we need to include the receiver text in the `lookup` (aka `filterText` in the LSP spec) for editors to find the completion item. (See https://github.com/rust-lang/rust-analyzer/issues/17036#issuecomment-2056614180, thanks to [pascalkuthe](https://github.com/pascalkuthe).)
fix: Fix inlay hint resolution being broken
So, things broke because we now store a hash (u64) in the resolution payload, but JavaScript, and hence JSON, only supports integers of up to 53 bits (anything beyond gets truncated in various ways), which caused almost all hashes to differ when resolving them. This masks the hash to 53 bits to work around that.
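A minimal sketch of the workaround (the helper name is an assumption):
```rust
// JSON numbers round-trip through JavaScript as IEEE-754 doubles, which
// have a 53-bit mantissa, so keep only the low 53 bits of the hash.
fn truncate_to_js_safe(hash: u64) -> u64 {
    hash & ((1u64 << 53) - 1)
}

fn main() {
    let h = 0xDEAD_BEEF_CAFE_F00D_u64;
    assert!(truncate_to_js_safe(h) < (1u64 << 53));
}
```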
Fixes https://github.com/rust-lang/rust-analyzer/issues/16962
fix: VFS should not confuse paths with source roots that have the same prefix
Previously, the VFS would assign paths to the source root that had the longest string prefix match. This would break when we had source roots in subdirectories:
```
/foo
/foo/bar
```
Given a file `/foo/bar_baz.rs`, we would attribute it to the `/foo/bar` source root, which is wrong.
As a result, we would attribute paths to the wrong crate when a crate was in a subdirectory of another one. This is more common in larger monorepos, but could occur in any Rust project.
Fix this in the VFS, and add a test.
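A minimal sketch of the difference (hypothetical helpers, not the vfs API):
```rust
use std::path::Path;

// Matching by *string* prefix: `/foo/bar` wrongly claims `/foo/bar_baz.rs`.
fn string_prefix_match(root: &str, file: &str) -> bool {
    file.starts_with(root)
}

// Matching by path *components*: `Path::starts_with` only matches whole
// components, so `/foo/bar` no longer claims `/foo/bar_baz.rs`.
fn component_prefix_match(root: &Path, file: &Path) -> bool {
    file.starts_with(root)
}

fn main() {
    assert!(string_prefix_match("/foo/bar", "/foo/bar_baz.rs")); // the bug
    assert!(!component_prefix_match(Path::new("/foo/bar"), Path::new("/foo/bar_baz.rs")));
    assert!(component_prefix_match(Path::new("/foo"), Path::new("/foo/bar_baz.rs")));
}
```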
internal: make function builder create ast directly
I am working on #17050.
In the process, I noticed a place in the code that could be refactored.
Currently, the function builder creates the AST through the function template, but those two processes can be combined into one function.
I thought I should work on this first and created a PR.
Do not accept the following
```rust
macro_rules! lexes {($($_:tt)*) => {}}
lexes!(🐛"foo");
```
Before, invalid emoji identifiers were gated during parsing instead of lexing in all cases, but this didn't account for macro expansion of literal prefixes.
Fix #123696.
Fix off-by-one error converting to LSP UTF8 offsets with multi-byte char
On this file,
```rust
fn main() {
    let 된장 = 1;
}
```
when using `"positionEncodings":["utf-16"]` I get an "unused variable" diagnostic on the variable
name (codepoint offset range `8..10`). So far so good.
When using `"positionEncodings":["utf-8"]`, I expect to get the equivalent range in bytes (LSP:
"Character offsets count UTF-8 code units (e.g. bytes)."), which is `8..14`, because both
characters are 3 bytes in UTF-8. However, I actually get `10..14`.
Looks like this is because we accidentally treat a 1-based index as an offset value: when
converting from our internal char indices to LSP byte offsets, we look at one character too many.
This causes wrong results if the extra character is a multi-byte one, such as when computing
the start coordinate of 된장.
Fix that by actually passing an offset. While at it, fix the variable name of the line number,
which is not an offset (yet).
Originally reported at https://github.com/kakoune-lsp/kakoune-lsp/issues/740
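A minimal sketch of the conversion (a hypothetical helper, not the actual line-index code):
```rust
// Converting a char offset within a line to a UTF-8 byte offset must sum
// the byte lengths of the characters *before* the offset; summing one char
// too many only shows up when that extra char is multi-byte.
fn char_offset_to_utf8(line: &str, char_offset: usize) -> usize {
    line.chars().take(char_offset).map(char::len_utf8).sum()
}

fn main() {
    let line = "    let 된장 = 1;";
    // The binding starts at char offset 8; both Hangul chars take 3 bytes.
    assert_eq!(char_offset_to_utf8(line, 8), 8); // the buggy version gave 10
    assert_eq!(char_offset_to_utf8(line, 10), 14);
}
```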
Changed the completion item `source_range` to match the replaced text.
In VS Code this may not be disturbing because the snippet is previewed
in a box, but in the Helix editor it's previewed by applying the main
text edit.
pattern analysis: Use contiguous indices for enum variants
The main blocker to using the in-tree version of the `pattern_analysis` crate is that rustc requires enum indices to be contiguous because it uses `IndexVec`/`BitSet` for performance. Currently we swap these out for `FxHashMap`/`FxHashSet` when the `rustc` feature is off, but we can't do that if we use the in-tree crate.
This PR solves the problem by using contiguous indices on the r-a side too.
Fix crate IDs when multiple workspaces are loaded
Previously, we assumed that the crate numbers in a `rust-project.json` always matched the `CrateId` values in the crate graph. This isn't true when there are multiple workspaces, because the crate graphs are merged and the `CrateId` values in the merged graph are different.
This broke flycheck (see the first commit): we were unable to find the workspace when a file changed, so we ran every single flycheck, producing duplicate compilation errors.
Instead, use the crate root module path to look up the relevant flycheck. This makes `ProjectWorkspace::Json` consistent with `ProjectWorkspace::Cargo`.
Also, define a separate JSON crate number type, to prevent bugs like this happening again.
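A minimal sketch of the newtype idea (all names here are assumptions):
```rust
use std::collections::HashMap;

/// A crate number as written in a single rust-project.json.
#[derive(Clone, Copy, PartialEq, Eq, Hash, Debug)]
struct JsonCrateNum(u32);

/// An id in the merged crate graph; not interchangeable with the above.
#[derive(Clone, Copy, PartialEq, Eq, Hash, Debug)]
struct CrateId(u32);

// Merging keeps a per-workspace mapping, so a JSON-local number can never
// be silently reused as a graph-wide id.
fn merge(workspaces: &[Vec<JsonCrateNum>]) -> HashMap<(usize, JsonCrateNum), CrateId> {
    let mut map = HashMap::new();
    let mut next = 0u32;
    for (ws, crates) in workspaces.iter().enumerate() {
        for &n in crates {
            map.insert((ws, n), CrateId(next));
            next += 1;
        }
    }
    map
}
```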
feat: Add `rust-analyzer.cargo.allTargets` to configure passing `--all-targets` to cargo invocations
Closes #16859
## Unresolved question:
Should this be a setting for build scripts only? All the other `--all-targets` uses I found were already covered by `checkOnSave.allTargets`.
Previously, items for `cargo test` and `cargo check` would appear in
the `Select Runnable` quick pick that appears when running
`rust-analyzer: Run`, but `run` would only appear as a runnable if a
`main` function was selected in the editor. This change adds `cargo
run` as an always-available runnable command for binary packages.
This makes it easier to develop CLI/TUI applications, as users can now
run the application from anywhere in their codebase.
Handle panicking like rustc CTFE does
Instead of using `core::fmt::format` to format panic messages, which may in turn panic too and cause recursive panics and other messy things, redirect `panic_fmt` to `const_panic_fmt` like CTFE, which in turn goes to `panic_display` and does the things normally. See the tests for the full call stack.
The tests don't work yet, I probably missed something in minicore.
Fixes #16907 in my local testing; I also need to add a test for it.
fix: Prevent stack overflow in recursive const types
When evaluating const values of recursive types, certain declarations could cause an endless call loop within the interpreter (`hir-ty`'s `create_memory_map`), which would lead to a stack overflow.
This commit adds a check that prevents values that contain an address (such as `TyKind::Ref`) from being allocated at the very address they contain.
The commit also adds a test for this edge case.
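A hypothetical trigger (my own illustration, not the PR's actual test): a self-referential static whose value contains its own address:
```rust
struct Node {
    next: &'static Node,
}

// Evaluating `SELF` requires laying out a `&'static Node` that points back
// at the very allocation being created; without a guard, an interpreter
// naively chasing that reference recurses forever.
static SELF: Node = Node { next: &SELF };

fn main() {
    assert!(std::ptr::eq(SELF.next, &SELF));
}
```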
fix: Some file watching related vfs fixes
Fixes https://github.com/rust-lang/rust-analyzer/issues/15554. Additionally, it seems that client-side file watching was broken on Windows this entire time; this PR switches `DidChangeWatchedFilesRegistrationOptions` to relative glob patterns, which do work on Windows in VS Code.
Have the derive attribute share a token tree with its proc macros.
The goal of this PR is to stop creating a token tree for each derive proc macro.
This is done by giving the derive proc macros an id referring to their parent derive element.
Running `analysis-stats` on the rust-analyzer project, I did see a small memory decrease.
Before:
```
Inference: 42.80s, 362ginstr, 591mb
MIR lowering: 8.67s, 67ginstr, 291mb
Mir failed bodies: 18 (0%)
Data layouts: 85.81ms, 609minstr, 8mb
Failed data layouts: 135 (6%)
Const evaluation: 440.57ms, 5235minstr, 13mb
Failed const evals: 1 (0%)
Total: 64.16s, 552ginstr, 1731mb
```
After:
```
Inference: 40.32s, 340ginstr, 593mb
MIR lowering: 7.95s, 62ginstr, 292mb
Mir failed bodies: 18 (0%)
Data layouts: 87.97ms, 591minstr, 8mb
Failed data layouts: 135 (6%)
Const evaluation: 433.38ms, 5226minstr, 14mb
Failed const evals: 1 (0%)
Total: 60.49s, 523ginstr, 1680mb
```
Currently this breaks the expansion for the actual derive attribute.
## TODO
- [x] Pick a better name for the function `smart_macro_arg`
fix: Fix projects depending on `rustc_private` hanging
If loading the root fails, we'll hang in this loop, as we never inserted the entry that asserts we already visited a package. This fixes that.
Fixes https://github.com/rust-lang/rust-analyzer/issues/16902
internal: Enforce utf8 paths
Cargo already requires this, and I highly doubt r-a works with non-utf8 paths generally either. This just makes dealing with paths a lot easier.
Add fuel to match checking
Exhaustiveness checking is NP-hard, hence it can take extremely long to check some specific matches. This PR makes exhaustiveness checking bail after a set number of steps. I chose a bound that takes ~100ms on my machine, which should be more than enough for normal matches.
I'd like someone with less recent hardware to run the test to see if that limit is low enough for them. Also curious if the r-a team thinks this is a good ballpark or if we should go lower/higher. I don't have much data on how complex real-life matches get, but we can definitely go lower than `500 000` steps.
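A minimal sketch of the fuel mechanism (names are assumptions; only the step count is from this PR):
```rust
const MAX_STEPS: usize = 500_000;

struct TooComplex;

struct MatchCheckCtx {
    steps_left: usize,
}

impl MatchCheckCtx {
    fn new() -> Self {
        MatchCheckCtx { steps_left: MAX_STEPS }
    }

    // Called once per unit of work in the exhaustiveness algorithm; when
    // the budget runs out, callers propagate the error upward and simply
    // skip reporting non-exhaustiveness instead of guessing.
    fn tick(&mut self) -> Result<(), TooComplex> {
        if self.steps_left == 0 {
            return Err(TooComplex);
        }
        self.steps_left -= 1;
        Ok(())
    }
}
```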
The second commit is a drive-by soundness fix which doesn't matter much today but will matter once `min_exhaustive_patterns` is stabilized.
Fixes https://github.com/rust-lang/rust-analyzer/issues/9528 cc `@matklad`
fix: Skip problematic cyclic dev-dependencies
Implements a workaround for https://github.com/rust-lang/rust-analyzer/issues/14167; notably, it does not implement the ideas surfaced in the issue, but takes a simpler-to-implement approach (and one that is more consistent).
Effectively, all this does is discard dev-dependency edges that go from a workspace library target to another workspace library target. This means that using a dev-dependency to another workspace member inside unit tests will now always fail to resolve for r-a (instead of being order-dependent and causing problems elsewhere), while things will work out fine in integration tests, benches, examples, etc. This effectively acknowledges package cycles to be okay, but crate graph cycles to be invalid:
Quoting https://github.com/rust-lang/rust-analyzer/issues/14167#issuecomment-1864145772
> Though, if you have “package cycle” in integration tests, you’d have “crate cycle” in unit test.
We disallow the latter here, while continuing to support the former.
(What's missing is to suppress diagnostics for such unit tests, though not doing so might be a good deterrent, making devs avoid the pattern altogether.)
feat: Implement ATPIT
Resolves #16584
Note: This implementation only works for ATPIT, not for TAIT.
The main hindrance that blocks the latter is that the defining sites of a TAIT can be inner blocks, as in:
```rust
type X = impl Default;
mod foo {
    fn bar() -> super::X {
        ()
    }
}
```
So, to figure out whether we are defining it or not, we would have to recursively probe nested modules and bodies.
For ATPIT, we can just look into the current body, because `error[E0401]: can't use 'Self' from outer item` prevents such nested structures:
```rust
trait Foo {
    type Item;
    fn foo() -> Self::Item;
}
struct Bar;
impl Foo for Bar {
    type Item = impl Default;
    fn foo() -> Self::Item {
        fn bar() -> Self::Item {
                    ^^^^^^^^^^
                    |
                    use of `Self` from outer item
                    refer to the type directly here instead
        }
        bar()
    }
}
```
But this implementation does not check for unification of the same ATPIT between different bodies, monomorphization, or layout, for similar reasons. (These could be done lazily if we could utilize something like "mutation of an interned value" with `db`. I couldn't find such a thing, but I'd appreciate it if it exists and you could let me know 😅)
feat: Syntax highlighting improvements
Specifically
- Adds a new `constant` modifier, attached to keyword `const` (except for `*const ()` and `&raw const ()`), `const` items and `const` functions
- Adds (or rather reveals) `associated` modifier for associated items
- Fixes usage of the standard `static` modifier; it now acts like `associated` except that it is omitted for methods.
- Splits `SymbolKind::Function` into `Function` and `Method`. We already split other things like that (notably the self param from params), so the split makes sense in general, as there are a lot of special cases around it anyway.
fix: handle attributes when typing curly bracket
fix #16848.
When inserting a `{`, if the leading part of the expression is an attribute, we treat it as inserting `{}` around the entire expression (excluding the attribute part).
Bump dependencies and use in-tree `rustc_pattern_analysis`
One last `pattern_analysis` API change. I don't have any more planned! So we can now use the in-tree version when available.
fix: Ignore some warnings if they originate from within macro expansions
These tend to be annoying noise as we can't handle `allow`s for them properly for the time being.
fix: incorrect handling of `use` and panic issue in `extract_module`.
fix #16826
This PR includes the following changes:
1. Simplify the implementation partially, removing many unnecessary loops and `clone()`.
2. When it is found that the top level of the selection contains a `use` statement, a copy of the `use` will be reinserted before extraction. (#16826)
3. Fixed an issue during `extract_module` where, if the top level of the selected part contained `A` and `use A::B`, it caused a duplication of `use A`.
fix: Fix wrong where clause rendering on hover
We were not accounting for proper newline indentation in some places, making the hover look weird (or just straight-up wrong for type aliases).
internal: Compress file text using LZ4
I haven't tested properly, but this roughly looks like:
```
1246 MB
59mb 4899 FileTextQuery
1008 MB
20mb 4899 CompressedFileTextQuery
555kb 1790 FileTextQuery
```
We might want to test on something more interesting, like `bevy`.
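A minimal sketch of the approach (using the `lz4_flex` crate; the type and method names are assumptions, not the actual query definitions):
```rust
use std::sync::Arc;

struct CompressedFileText(Arc<[u8]>);

impl CompressedFileText {
    fn new(text: &str) -> Self {
        // `compress_prepend_size` stores the uncompressed length up front,
        // so decompression doesn't have to guess a buffer size.
        CompressedFileText(lz4_flex::compress_prepend_size(text.as_bytes()).into())
    }

    // Decompressed on demand; callers that rarely need the text pay a
    // small CPU cost instead of keeping the full text resident.
    fn text(&self) -> Arc<str> {
        let bytes = lz4_flex::decompress_size_prepended(&self.0).unwrap();
        String::from_utf8(bytes).unwrap().into()
    }
}
```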
Stop eagerly resolving inlay hint text edits for VSCode
Send less json over the wire.
After https://github.com/microsoft/vscode/issues/193124 was fixed, this change is not needed anymore.
VSCode 1.86.0 now supports double click for unresolved hint data too.
Remove unnecessary check for macro call
Since `macro_rules` is a contextual keyword, it is an `IDENT` token and thus `is_path_start` already identifies it correctly. You can tell the previous check is unnecessary because the relevant tests still pass.
internal: Improve readability of the parser code
The code is basically equivalent to the previous version, but it improves readability by being much simpler and more concise.
fix: Don't invalidate body query results when generating desugared names
The hack remains until we get hygiene, but with this the generated names are stable across bodies.
fix: Remove accidental dependency between `parse_macro_expansion` and `parse`
Turns out my idea from https://github.com/rust-lang/rust-analyzer/pull/15251 causes all builtin derive expansions to obviously rely on the main parse, meaning the entire `macro_arg` layer becomes kind of pointless. So this reverts that PR again.
internal: Implement parent-child relation for `SourceRoot`s
This commit adds the said relation by keeping a map of type `FxHashMap<SourceRootId,Option<SourceRootId>>` inside the `GlobalState`. Its primary use case is reading `rust-analyzer.toml`(#13529) files that can be placed in every local source root. As a config will be found by traversing this "tree" we need the parent information for every local source root. This commit omits defining this relation for library source roots entirely.
fix: panic when using float numbers without dots in chain calls
Fix #16278.
This PR fixes the panic caused by using floating-point numbers without a dot (such as `1e2`) in chain calls.
-------------
Although this syntax is very odd 🤣, r-a should not panic.
fix: keep attributes in assist 'generate_delegate_trait'
fix #15198.
This PR addresses the issue that the `impl` generated by `generate_delegate_trait` doesn't keep attributes.
From `impl Into<DiagnosticMessage>` to `impl Into<Cow<'static, str>>`,
because these functions don't produce user-facing output and we don't
want their strings to be translated.
internal: Compute syntax validation errors on demand
The LRU cache causes us to re-parse trees quite often, yet we don't use the validation errors at all. With this, we push calculating them off to the callers that are interested in them.
Add more methods for resolving definitions from AST to their corresponding HIR types
In order to be able to add these methods with consistent naming I had to also rename two existing methods that would otherwise be conflicting/confusing:
- Before: `Semantics::to_module_def(&self, file: FileId) -> Option<Module>`
  After: `Semantics::file_to_module_def(&self, file: FileId) -> Option<Module>`
- Before: `Semantics::to_module_defs(&self, file: FileId) -> impl Iterator<Item = Module>`
  After: `Semantics::file_to_module_defs(&self, file: FileId) -> impl Iterator<Item = Module>`
(the PR is motivated by an outside use of the `ra_ap_hir` crate that would benefit from being able to walk a `hir::Function`'s AST, resolving its exprs/stmts/items to their HIR equivalents)
fix: use 4 spaces for indentation in macro expansion
Partial fix for #16471.
In the previous code, the indentation produced by macro expansion was set to 2 spaces. This PR modifies it to 4 spaces for the sake of consistency.
fix: autocomplete constants inside format strings
Hi! This PR adds autocompletion for constants (including statics) inside format strings, and closes #16608.
I'm not sure about adding the `constants` field to the `CompletionContext`. It kinda makes sense, since it's in line with the `locals` field, and this way everything looks a bit cleaner, but at the same time does it really need to be there?
Anyway, let me know if anything should/can be changed. :)
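For illustration, a hypothetical snippet showing what the completion now offers:
```rust
const MAX_RETRIES: u32 = 3;
static APP_NAME: &str = "demo";

fn main() {
    // Typing `MAX_` inside the format string now suggests `MAX_RETRIES`,
    // just like completing a local variable would.
    println!("{APP_NAME}: retrying up to {MAX_RETRIES} times");
}
```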
Export `SemanticsImpl` from `ra_ap_hir` crate, since it's already exposed via `Semantics.deref()`
The `SemanticsImpl` type is already de-facto exposed via `<Semantics as Deref>::Target`.
By not being part of the public crate interface it however doesn't get included in the documentation, resulting in a massive blind spot when it comes to `ra_ap_hir`'s type resolution APIs.
Add public function for resolving callable AST exprs to their HIR equivalents
(the PR is motivated by an outside use of the `ra_ap_hir` crate that would benefit from being able to walk a `hir::Function`'s AST, resolving callable exprs within to their HIR equivalents)
Derive `PartialEq`, `Eq` & `Hash` for `hir::Param`
Since `hir::SelfParam`, as well as all members of `hir::Param` already implement `PartialEq`, `Eq` & `Hash` it seems reasonable to also make `hir::Param` implement those.
(the change is motivated by an outside use of the `ra_ap_hir` crate that would benefit from being able to collect params in a `HashSet`)
feature: Add `destructure_struct_binding`
Adds an assist for destructuring a struct in a binding (#8673). I saw that #13997 has been abandoned for a while, so I thought I'd give it a go.
## Example
```rust
let foo = Foo { bar: 1, baz: 2 };
let bar2 = foo.bar;
let baz2 = foo.baz;
let foo2 = foo;
let fizz = Fizz(1, 2);
let buzz = fizz.0;
```
becomes
```rust
let Foo { bar, baz } = Foo { bar: 1, baz: 2 };
let bar2 = bar;
let baz2 = baz;
let foo2 = todo!();
let Fizz(_0, _1) = Fizz(1, 2);
let buzz = _0;
```
More examples in the tests.
## What is included?
- [x] Destructure record, tuple, and unit struct bindings
- [x] Edit field usages
- [x] Non-exhaustive structs in foreign crates and private fields get hidden behind `..`
- [x] Nested bindings
- [x] Carry over `mut` and `ref mut` in nested bindings to fields, i.e. `let Foo { ref mut bar } = ...` becomes `let Foo { bar: Bar { baz: ref mut baz } } = ...`
- [x] Attempt to resolve collisions with other names in the scope
- [x] If the binding is to a reference, field usages are dereferenced if required
- [x] Use shorthand notation if possible
## Known limitations
- `let foo = Foo { bar: 1 }; foo;` currently results in `let Foo { bar } = Foo { bar: 1 }; todo!();` instead of reassembling the struct. This requires user intervention.
- Unused fields are not currently omitted. I thought that this is more ergonomic, as there already is a quick fix action for adding `: _` to unused field patterns.
- Separate into create and apply edit
- Rename usages
- Hacky name map
- Add more tests
- Handle non-exhaustive
- Add some more TODOs
- Private fields
- Use todo
- Nesting
- Improve rest token generation
- Cleanup
- Doc -> regular comment
- Support mut
fix: Wrong closure kind deduction for closures with predicates
Completes #16472, fixes #16421
The changed closure kind deduction is mostly similar to `rustc_hir_typeck/src/closure.rs`.
Porting the closure signature deduction from it seems possible too, and I'm considering doing that in another PR.
feat: Add "make tuple" tactic to term search
Follow up to https://github.com/rust-lang/rust-analyzer/pull/16092
Now term search also supports tuples.
```rust
let a: i32 = 1;
let b: f64 = 0.0;
let c: (i32, (f64, i32)) = todo!(); // Finds (a, (b, a))
```
In addition to the new tactic that handles tuples, I changed how generics are handled. Previously it tried all possible options from the types we had in scope, but now it only tries useful ones that take us directly towards the goal, or at least towards calling some other function.
This changes O(2^n) to O(n^2), where n is the number of rounds, which in practice allows using types that take generics for multiple rounds (previously limited to 1). The average case, which also used to be exponential, is now roughly linear.
This means that deeply nested generics also work.
```rust
// Finds all valid combos, including `Some(Some(Some(...)))`
let a: Option<Option<Option<bool>>> = todo!();
```
_Note that although the complexity is smaller, allowing more types with generics slows the search down considerably overall. I hope that's fine, as the autocomplete is disabled by default, and for code actions it's not super slow. Might have to tweak the depth hyperparameter though._
This resulted in a huge increase in the number of results found (benchmarks on the `ripgrep` crate):
Before
```
Tail Expr syntactic hits: 149/1692 (8%)
Tail Exprs found: 749/1692 (44%)
Term search avg time: 18ms
```
After
```
Tail Expr syntactic hits: 291/1692 (17%)
Tail Exprs found: 1253/1692 (74%)
Term search avg time: 139ms
```
Most changes are local to term search except some tuple related stuff on `hir::Type`.
performance: Speed up Method Completions By Taking Advantage of Orphan Rules
(Continues https://github.com/rust-lang/rust-analyzer/pull/16498)
This PR speeds up method completions by doing two things without regressing `analysis-stats`[^1]:
- Filter candidate traits prior to calling `iterate_path_candidates` by relying on orphan rules (see below for a slightly more in-depth explanation). When generating completions [on `slog::Logger`](5e9e59c312/common/src/ledger.rs (L78)) in `oxidecomputer/omicron` as a test, this PR halved my completion times—it's now 454ms cold and 281ms warm. Before this PR, it was 808ms cold and 579ms warm.
- Inline some of the method candidate checks into `is_valid_method_candidate` and remove some unnecessary visibility checks. This was suggested by `@Veykril` in [this comment](https://github.com/rust-lang/rust-analyzer/pull/16498#issuecomment-1929864427).
We filter candidate traits by taking advantage of orphan rules. For additional details, I'll rely on `@WaffleLapkin's` explanation [from Zulip](https://rust-lang.zulipchat.com/#narrow/stream/185405-t-compiler.2Frust-analyzer/topic/Trait.20Checking/near/420942417):
> A type `A` can only implement traits which
> 1. Have a blanket implementation (`impl<T> Trait for T {}`)
> 2. Have implementation for `A` (`impl Trait for A {}`)
>
> Blanket implementation can only exist in `Trait`'s crate. Implementation for `A` can only exist in `A`'s or `Trait`'s crate.
Big thanks to Waffle for their keen observation!
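A minimal sketch of the resulting filter (illustrative names, not the actual `hir-ty` API):
```rust
#[derive(Clone, Copy, PartialEq, Eq)]
struct CrateId(u32);

struct TraitCandidate {
    /// Crate the trait is defined in.
    trait_crate: CrateId,
    /// Whether the trait has a blanket impl (`impl<T> Trait for T`).
    has_blanket_impl: bool,
    /// Crates known to contain an `impl Trait for A` for our self type;
    /// by the orphan rules these can only be A's crate or the trait's.
    impls_for_self_ty_in: Vec<CrateId>,
}

// Discard traits that can't possibly apply before running the expensive
// `iterate_path_candidates` machinery.
fn worth_checking(self_ty_crate: CrateId, t: &TraitCandidate) -> bool {
    t.has_blanket_impl
        || t.impls_for_self_ty_in.contains(&self_ty_crate)
        || t.impls_for_self_ty_in.contains(&t.trait_crate)
}
```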
---
I think some additional improvements are possible:
- `for_trait_and_self_ty` seemingly does not distinguish between `&T`, `&mut T`, or `T`, resulting in seemingly irrelevant traits like `tokio::io::AsyncWrite` being included for, e.g., `&slog::Logger`. I don't know if they're being considered due to the [autoref/autoderef behavior](a02a219773/crates/hir-ty/src/method_resolution.rs (L945-L962)), but I wonder if it'd make sense to filter by mutability earlier and not consider trait implementations that require `&mut T` when we only have a `&T`.
- The method completions [spend a _lot_ of time in unification](https://rust-lang.zulipchat.com/#narrow/stream/185405-t-compiler.2Frust-analyzer/topic/Trait.20Checking/near/421072356), and while there might be low-hanging fruit there, it might make more sense to wait for the new trait solver in `rustc`. I dunno.
[^1]: The filtering occurs outside of typechecking, after all.
fix: Don't panic on synthetic syntax in inference diagnostics
Temporary fix for https://github.com/rust-lang/rust-analyzer/issues/16682
We ought to rethink how we attach diagnostics to things, as IDs don't work for `format_args` like that!
fix: panic when inlining callsites inside macros' parameters
Closes #16660, #12429, #10695.
When `inline_into_callers` encounters call sites inside macros' parameters, it can lead to panics. Since there is no perfect way to handle macros, this PR simply filters out these cases.