Fix off-by-one error converting to LSP UTF8 offsets with multi-byte char
On this file,
```rust
fn main() {
    let 된장 = 1;
}
```
when using `"positionEncodings":["utf-16"]` I get an "unused variable" diagnostic on the variable
name (codepoint offset range `8..10`). So far so good.
When using `"positionEncodings":["utf-8"]`, I expect to get the equivalent range in bytes (LSP:
"Character offsets count UTF-8 code units (e.g. bytes)."), which is `8..14`, because both
characters are 3 bytes in UTF-8. However I actually get `10..14`.
Looks like this is because we accidentally treat a 1-based index as an offset value: when
converting from our internal char indices to LSP byte offsets, we look at one character too
many. This causes wrong results when the extra character is a multi-byte one, such as when
computing the start coordinate of 된장.
Fix that by actually passing an offset. While at it, fix the variable name of the line number,
which is not an offset (yet).
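For illustration, a minimal sketch of the char-offset-to-byte-offset conversion in question (not the actual kakoune-lsp code); taking one character too many in this loop is exactly the off-by-one described above:
```rust
// Convert a 0-based character (char) offset on a line to a UTF-8 byte offset.
fn char_offset_to_utf8_offset(line: &str, char_offset: usize) -> usize {
    line.chars()
        .take(char_offset) // taking `char_offset + 1` chars here reproduces the bug
        .map(|c| c.len_utf8())
        .sum()
}

fn main() {
    let line = "    let 된장 = 1;";
    // The variable name spans char offsets 8..10; everything before it is ASCII,
    // and both Hangul characters are 3 bytes in UTF-8.
    assert_eq!(char_offset_to_utf8_offset(line, 8), 8);
    assert_eq!(char_offset_to_utf8_offset(line, 10), 14);
}
```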
Originally reported at https://github.com/kakoune-lsp/kakoune-lsp/issues/740
Changed the completion item source_range to match
the replaced text. In VS Code this may not be
noticeable because the snippet is previewed in a
box, but in the Helix editor it is previewed by
applying the main text edit.
pattern analysis: Use contiguous indices for enum variants
The main blocker to using the in-tree version of the `pattern_analysis` crate is that rustc requires enum indices to be contiguous because it uses `IndexVec`/`BitSet` for performance. Currently we swap these out for `FxHashMap`/`FxHashSet` when the `rustc` feature is off, but we can't do that if we use the in-tree crate.
This PR solves the problem by using contiguous indices on the r-a side too.
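A rough sketch of what contiguous indices buy (illustrative types, not the actual crate API): with dense variant indices, a plain bit-vector can stand in for a hash set.
```rust
/// Dense, contiguous index of a variant within its enum (illustrative name).
#[derive(Clone, Copy)]
struct VariantIdx(usize);

/// Stand-in for rustc's `BitSet`: contiguity is what makes direct indexing valid.
struct SeenVariants(Vec<bool>);

impl SeenVariants {
    fn new(variant_count: usize) -> Self {
        SeenVariants(vec![false; variant_count])
    }
    fn insert(&mut self, idx: VariantIdx) {
        self.0[idx.0] = true;
    }
    fn contains(&self, idx: VariantIdx) -> bool {
        self.0[idx.0]
    }
}

fn main() {
    let mut seen = SeenVariants::new(3);
    seen.insert(VariantIdx(1));
    assert!(seen.contains(VariantIdx(1)));
}
```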
Fix crate IDs when multiple workspaces are loaded
Previously, we assumed that the crate numbers in a `rust-project.json` always matched the `CrateId` values in the crate graph. This isn't true when there are multiple workspaces, because the crate graphs are merged and the `CrateId` values in the merged graph are different.
This broke flycheck (see first commit), because we were unable to find the workspace when a file changed, so we ran every single flycheck, producing duplicate compilation errors.
Instead, use the crate root module path to look up the relevant flycheck. This makes `ProjectWorkspace::Json` consistent with `ProjectWorkspace::Cargo`.
Also, define a separate JSON crate number type, to prevent bugs like this happening again.
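The separate ID type is essentially a newtype over the index into the `rust-project.json` crates array; a minimal sketch (name is illustrative):
```rust
/// Index into the `crates` array of a `rust-project.json`, kept distinct from the
/// `CrateId` of the merged crate graph so the two can never be mixed up again.
#[derive(Clone, Copy, PartialEq, Eq, Hash, Debug)]
struct JsonCrateNum(u32);

fn main() {
    let first = JsonCrateNum(0);
    println!("{first:?}");
}
```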
feat: Add `rust-analyzer.cargo.allTargets` to configure passing `--all-targets` to cargo invocations
Closes#16859
## Unresolved question:
Should this be a setting for build scripts only? All the other `--all-targets` uses I found were already covered by `checkOnSave.allTargets`.
Previously, items for `cargo test` and `cargo check` would appear in the `Select Runnable`
quick pick that appears when running `rust-analyzer: Run`, but `run` would only appear as a
runnable if a `main` function was selected in the editor. This change adds `cargo run` as an
always-available runnable command for binary packages.
This makes it easier to develop CLI/TUI applications, as users can now run the application
from anywhere in their codebase.
Handle panicking like rustc CTFE does
Instead of using `core::fmt::format` to format panic messages, which may in turn panic too and cause recursive panics and other messy things, redirect `panic_fmt` to `const_panic_fmt` like CTFE, which in turn goes to `panic_display` and does the things normally. See the tests for the full call stack.
The tests don't work yet, I probably missed something in minicore.
Fixes#16907 in my local testing; I also need to add a test for it.
fix: Prevent stack overflow in recursive const types
In the evaluation of const values of recursive types certain declarations could cause an endless call-loop within the interpreter (hir-ty’s create_memory_map), which would lead to a stack overflow.
This commit adds a check that prevents values that contain an address in their value (such as TyKind::Ref) from being allocated at the address they contain.
The commit also adds a test for this edge case.
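For illustration, the rough shape of a self-referential constant value that could trigger the loop (a sketch, not the exact reproducer from the added test):
```rust
struct Node {
    next: &'static Node,
}

// A value that contains its own address: `TIED.next` points back at `TIED` itself.
static TIED: Node = Node { next: &TIED };

fn main() {
    let p: *const Node = TIED.next;
    println!("{p:?}");
}
```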
fix: Some file watching related vfs fixes
Fixes https://github.com/rust-lang/rust-analyzer/issues/15554. Additionally, it seems that client-side file watching has been broken on Windows this entire time; this PR switches `DidChangeWatchedFilesRegistrationOptions` to use relative glob patterns, which do work on Windows in VSCode.
Have Derive Attribute share a token tree with its proc macros.
The goal of this PR is to stop creating a token tree for each derive proc macro.
This is done by giving the derive proc macros an id to its parent derive element.
From running `analysis-stats` on the rust-analyzer project I saw a small memory decrease.
Before the change:
```
Inference: 42.80s, 362ginstr, 591mb
MIR lowering: 8.67s, 67ginstr, 291mb
Mir failed bodies: 18 (0%)
Data layouts: 85.81ms, 609minstr, 8mb
Failed data layouts: 135 (6%)
Const evaluation: 440.57ms, 5235minstr, 13mb
Failed const evals: 1 (0%)
Total: 64.16s, 552ginstr, 1731mb
```
After the change:
```
Inference: 40.32s, 340ginstr, 593mb
MIR lowering: 7.95s, 62ginstr, 292mb
Mir failed bodies: 18 (0%)
Data layouts: 87.97ms, 591minstr, 8mb
Failed data layouts: 135 (6%)
Const evaluation: 433.38ms, 5226minstr, 14mb
Failed const evals: 1 (0%)
Total: 60.49s, 523ginstr, 1680mb
```
Currently this breaks the expansion for the actual derive attribute.
## TODO
- [x] Pick a better name for the function `smart_macro_arg`
fix: Fix projects depending on `rustc_private` hanging
If loading the root fails, we'll hang in this loop, as we never inserted the entry that asserts we already visited a package. This fixes that.
Fixes https://github.com/rust-lang/rust-analyzer/issues/16902
internal: Enforce utf8 paths
Cargo already requires this, and I highly doubt r-a works with non-utf8 paths generally either. This just makes dealing with paths a lot easier.
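For reference, a small sketch of what UTF-8 paths buy, assuming the `camino` crate (which Cargo itself uses); the exact types r-a ends up with may differ:
```rust
use camino::{Utf8Path, Utf8PathBuf};

fn manifest_dir(manifest: &Utf8Path) -> &Utf8Path {
    // Utf8Path is str-backed, so there is no lossy conversion when displaying or slicing.
    manifest.parent().unwrap_or(manifest)
}

fn main() {
    let manifest = Utf8PathBuf::from("/home/user/project/Cargo.toml");
    println!("{}", manifest_dir(&manifest));
}
```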
Add fuel to match checking
Exhaustiveness checking is NP-hard, hence it can take extremely long on some specific matches. This PR makes exhaustiveness checking bail after a set number of steps. I chose a bound that takes ~100ms on my machine, which should be more than enough for normal matches.
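A minimal sketch of the fuel idea (illustrative names, not the actual `rustc_pattern_analysis` API):
```rust
use std::cell::Cell;

struct ExhaustivenessCtx {
    /// `None` means unlimited (what rustc wants); r-a passes a finite budget.
    complexity_limit: Option<usize>,
    steps: Cell<usize>,
}

struct ExcessiveComplexity;

impl ExhaustivenessCtx {
    /// Called once per unit of work; tells the caller to bail instead of hanging.
    fn bump_complexity(&self) -> Result<(), ExcessiveComplexity> {
        let n = self.steps.get() + 1;
        self.steps.set(n);
        match self.complexity_limit {
            Some(limit) if n > limit => Err(ExcessiveComplexity),
            _ => Ok(()),
        }
    }
}

fn main() {
    let ctx = ExhaustivenessCtx { complexity_limit: Some(500_000), steps: Cell::new(0) };
    assert!(ctx.bump_complexity().is_ok());
}
```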
I'd like someone with less recent hardware to run the test to see if that limit is low enough for them. Also curious if the r-a team thinks this is a good ballpark or if we should go lower/higher. I don't have much data on how complex real-life matches get, but we can definitely go lower than `500 000` steps.
The second commit is a drive-by soundness fix which doesn't matter much today but will matter once `min_exhaustive_patterns` is stabilized.
Fixes https://github.com/rust-lang/rust-analyzer/issues/9528 cc `@matklad`
fix: Skip problematic cyclic dev-dependencies
Implements a workaround for https://github.com/rust-lang/rust-analyzer/issues/14167, notably it does not implement the ideas surfaced in the issue, but takes a simpler to implement approach (and one that is more consistent).
Effectively, all this does is discard dev-dependency edges that go from a workspace library target to another workspace library target. This means that using a dev-dependency to another workspace member inside unit tests will now always fail to resolve for r-a (instead of being order-dependent and causing problems elsewhere), while things will work out fine in integration tests, benches, examples etc. This effectively acknowledges package cycles to be okay, but crate graph cycles to be invalid:
Quoting https://github.com/rust-lang/rust-analyzer/issues/14167#issuecomment-1864145772
> Though, if you have “package cycle” in integration tests, you’d have “crate cycle” in unit test.
We disallow the latter here, while continuing to support the former
(What's missing is to suppress diagnostics for such unit tests, though not doing so might be a good deterrent, making devs avoid the pattern altogether.)
feat: Implement ATPIT
Resolves#16584
Note: This implementation only works for ATPIT, not for TAIT.
The main hindrance that blocks the latter is that the defining sites of a TAIT can be inner blocks, as in:
```rust
type X = impl Default;
mod foo {
    fn bar() -> super::X {
        ()
    }
}
```
So, to figure out whether we are defining it or not, we would have to recursively probe nested modules and bodies.
For ATPIT, we can just look into the current body, because `error[E0401]: can't use 'Self' from outer item` prevents such nested structures:
```rust
trait Foo {
    type Item;
    fn foo() -> Self::Item;
}
struct Bar;
impl Foo for Bar {
    type Item = impl Default;
    fn foo() -> Self::Item {
        fn bar() -> Self::Item {
                    ^^^^^^^^^^
                    |
                    use of `Self` from outer item
                    refer to the type directly here instead
        }
        bar()
    }
}
```
But this implementation does not check for unification of the same ATPIT between different bodies, monomorphization, or layout, for similar reasons. (But these could be done lazily if we can utilize something like "mutation of interned value" with `db`. I couldn't find such a thing, but I would appreciate it if it exists and you could let me know 😅)
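For reference, the shape of ATPIT this implementation handles (a sketch; requires the nightly `impl_trait_in_assoc_type` feature):
```rust
#![feature(impl_trait_in_assoc_type)]

trait Make {
    type Output;
    fn make() -> Self::Output;
}

struct Unit;

impl Make for Unit {
    // Associated type position impl Trait: the hidden type (here `u32`)
    // is inferred from the bodies inside this impl.
    type Output = impl Default;
    fn make() -> Self::Output {
        0u32
    }
}

fn main() {
    let _x = Unit::make();
}
```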
feat: Syntax highlighting improvements
Specifically
- Adds a new `constant` modifier, attached to keyword `const` (except for `*const ()` and `&raw const ()`), `const` items and `const` functions
- Adds (or rather reveals) `associated` modifier for associated items
- Fixes usage of the standard `static` modifier; now it acts like `associated` except that it is omitted for methods.
- Splits `SymbolKind::Function` into `Function` and `Method`. We already split other things like that (notably the self param from params), so the split makes sense in general, as there are a lot of special cases around it anyway.
fix: handle attributes when typing curly bracket
fix#16848.
When inserting a `{`, if the leading part of the `expr` is identified as an `attr`, we treat it as inserting `{}` around the entire `expr` (excluding the attr part).
Bump dependencies and use in-tree `rustc_pattern_analysis`
One last `pattern_analysis` API change. I don't have any more planned! So we can now use the in-tree version when available.
fix: Ignore some warnings if they originate from within macro expansions
These tend to be annoying noise as we can't handle `allow`s for them properly for the time being.
fix: incorrect handling of `use` and panic issue in `extract_module`.
fix#16826
This PR includes the following changes:
1. Simplify the implementation partially, removing many unnecessary loops and `clone()`.
2. When it is found that the top level of the selection contains a `use` statement, a copy of the `use` will be reinserted before extraction. (#16826)
3. Fixed an issue during `extract_module`, where if the top level of the selected part contains `A` and `use A::B`, it caused a duplication of `use A`.
fix: Fix wrong where clause rendering on hover
We were not accounting for proper newline indentation in some places making the hover look weird (or just straight up wrong for type aliases)
internal: Compress file text using LZ4
I haven't tested properly, but this roughly looks like:
```
1246 MB
59mb 4899 FileTextQuery
1008 MB
20mb 4899 CompressedFileTextQuery
555kb 1790 FileTextQuery
```
We might want to test on something more interesting, like `bevy`.
Stop eagerly resolving inlay hint text edits for VSCode
Send less json over the wire.
After https://github.com/microsoft/vscode/issues/193124 was fixed, this change is not needed anymore.
VSCode 1.86.0 now supports double click for unresolved hint data too.
Remove unncessary check for macro call
Since `macro_rules` is a contextual keyword, it is an `IDENT` token and thus `is_path_start` already identifies it correctly. You can tell the previous check is unnecessary because the relevant tests still pass.
internal: Improve readability of the parser code
The code is basically equivalent to the previous version, but it improves readability by being much simpler and more concise.
fix: Don't invalidate body query results when generating desugared names
The hack remains until we get hygiene, but with this the generated names are stable across bodies
fix: Remove accidental dependency between `parse_macro_expansion` and `parse`
Turns out my idea from https://github.com/rust-lang/rust-analyzer/pull/15251 causes all builtin derive expansions to obviously rely on the main parse, meaning the entire `macro_arg` layer becomes kind of pointless. So this reverts that PR again.
internal: Implement parent-child relation for `SourceRoot`s
This commit adds the said relation by keeping a map of type `FxHashMap<SourceRootId,Option<SourceRootId>>` inside the `GlobalState`. Its primary use case is reading `rust-analyzer.toml`(#13529) files that can be placed in every local source root. As a config will be found by traversing this "tree" we need the parent information for every local source root. This commit omits defining this relation for library source roots entirely.
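A minimal sketch of how such a parent map can be used to find the nearest `rust-analyzer.toml` (illustrative; the real map lives in `GlobalState`):
```rust
use std::collections::HashMap;

#[derive(Clone, Copy, PartialEq, Eq, Hash)]
struct SourceRootId(u32);

/// Walk up the parent chain until a source root with a loaded config is found.
fn nearest_config<'a>(
    parents: &HashMap<SourceRootId, Option<SourceRootId>>,
    configs: &'a HashMap<SourceRootId, String>, // source root -> its rust-analyzer.toml text
    mut root: SourceRootId,
) -> Option<&'a String> {
    loop {
        if let Some(cfg) = configs.get(&root) {
            return Some(cfg);
        }
        // Library source roots have no entry at all, so the walk simply stops there.
        root = (*parents.get(&root)?)?;
    }
}

fn main() {
    let parents = HashMap::from([(SourceRootId(1), Some(SourceRootId(0))), (SourceRootId(0), None)]);
    let configs = HashMap::from([(SourceRootId(0), "assist.emitMustUse = true".to_owned())]);
    assert!(nearest_config(&parents, &configs, SourceRootId(1)).is_some());
}
```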
fix: panic when using float numbers without dots in chain calls
Fix#16278.
This PR fixes the panic caused by using floating-point numbers without a dot (such as `1e2`) in chain calls.
-------------
Although this syntax is very odd 🤣, r-a should not panic.
fix: keep attributes in assist 'generate_delegate_trait'
fix#15198.
This PR addresses the issue that the `impl` generated by `generate_delegate_trait` doesn't keep attributes.
Change from `impl Into<DiagnosticMessage>` to `impl Into<Cow<'static, str>>`, because these
functions don't produce user-facing output and we don't want their strings to be translated.
internal: Compute syntax validation errors on demand
The LRU cache causes us to re-parse trees quite often, yet we don't use the validation errors at all. With this we push calculating them off to the caller who is interested in them.
Add more methods for resolving definitions from AST to their corresponding HIR types
In order to be able to add these methods with consistent naming I had to also rename two existing methods that would otherwise be conflicting/confusing:
`Semantics::to_module_def(&self, file: FileId) -> Option<Module>` (before)
`Semantics::file_to_module_def(&self, file: FileId) -> Option<Module>` (after)
`Semantics::to_module_defs(&self, file: FileId) -> impl Iterator<Item = Module>` (before)
`Semantics::file_to_module_defs(&self, file: FileId) -> impl Iterator<Item = Module>` (after)
(the PR is motivated by an outside use of the `ra_ap_hir` crate that would benefit from being able to walk a `hir::Function`'s AST, resolving its exprs/stmts/items to their HIR equivalents)
fix: use 4 spaces for indentation in macro expansion
Partial fix for #16471.
In the previous code, the indentation produced by macro expansion was set to 2 spaces. This PR modifies it to 4 spaces for the sake of consistency.
fix: autocomplete constants inside format strings
Hi! This PR adds autocompletion for constants (including statics) inside format strings and closes#16608.
I'm not sure about adding the `constants` field to the `CompletionContext`. It kinda makes sense, since it's in line with the `locals` field, and this way everything looks a bit cleaner, but at the same time does it really need to be there?
Anyway, let me know if anything should/can be changed. :)
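The kind of completion this enables, for illustration:
```rust
const MAX_RETRIES: u32 = 3;
static APP_NAME: &str = "demo";

fn main() {
    // With the cursor inside the braces, `MAX_RETRIES` and `APP_NAME` are now offered.
    println!("{APP_NAME} gives up after {MAX_RETRIES} retries");
}
```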
Export `SemanticsImpl` from `ra_ap_hir` crate, since it's already exposed via `Semantics.deref()`
The `SemanticsImpl` type is already de-facto exposed via `<Semantics as Deref>::Target`.
By not being part of the public crate interface it however doesn't get included in the documentation, resulting in a massive blind spot when it comes to `ra_ap_hir`'s type resolution APIs.
Add public function for resolving callable AST exprs to their HIR equivalents
(the PR is motivated by an outside use of the `ra_ap_hir` crate that would benefit from being able to walk a `hir::Function`'s AST, resolving callable exprs within to their HIR equivalents)
Derive `PartialEq`, `Eq` & `Hash` for `hir::Param`
Since `hir::SelfParam`, as well as all members of `hir::Param` already implement `PartialEq`, `Eq` & `Hash` it seems reasonable to also make `hir::Param` implement those.
(the change is motivated by an outside use of the `ra_ap_hir` crate that would benefit from being able to collect params in a `HashSet`)
feature: Add `destructure_struct_binding`
Adds an assist for destructuring a struct in a binding (#8673). I saw that #13997 has been abandoned for a while, so I thought I'd give it a go.
## Example
```rust
let foo = Foo { bar: 1, baz: 2 };
let bar2 = foo.bar;
let baz2 = foo.baz;
let foo2 = foo;
let fizz = Fizz(1, 2);
let buzz = fizz.0;
```
becomes
```rust
let Foo { bar, baz } = Foo { bar: 1, baz: 2 };
let bar2 = bar;
let baz2 = baz;
let foo2 = todo!();
let Fizz(_0, _1) = Fizz(1, 2);
let buzz = _0;
```
More examples in the tests.
## What is included?
- [x] Destructure record, tuple, and unit struct bindings
- [x] Edit field usages
- [x] Non-exhaustive structs in foreign crates and private fields get hidden behind `..`
- [x] Nested bindings
- [x] Carry over `mut` and `ref mut` in nested bindings to fields, i.e. `let Foo { ref mut bar } = ...` becomes `let Foo { bar: Bar { baz: ref mut baz } } = ...`
- [x] Attempt to resolve collisions with other names in the scope
- [x] If the binding is to a reference, field usages are dereferenced if required
- [x] Use shorthand notation if possible
## Known limitations
- `let foo = Foo { bar: 1 }; foo;` currently results in `let Foo { bar } = Foo { bar: 1 }; todo!();` instead of reassembling the struct. This requires user intervention.
- Unused fields are not currently omitted. I thought that this is more ergonomic, as there already is a quick fix action for adding `: _` to unused field patterns.
Separate into create and apply edit
Rename usages
Hacky name map
Add more tests
Handle non-exhaustive
Add some more TODOs
Private fields
Use todo
Nesting
Improve rest token generation
Cleanup
Doc -> regular comment
Support mut
fix: Wrong closure kind deduction for closures with predicates
Completes #16472, fixes#16421
The changed closure kind deduction is mostly similar to `rustc_hir_typeck/src/closure.rs`.
Porting the closure signature deduction from it seems possible too, and I'm considering doing that in another PR.
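For illustration, the rough shape of the affected case: the closure's expected kind comes from a trait predicate on a generic parameter rather than from a concrete `Fn` type (a sketch, not taken from the linked issue):
```rust
fn call_twice<F>(f: F) -> u32
where
    F: Fn() -> u32, // the predicate the closure kind must be deduced from
{
    f() + f()
}

fn main() {
    let x = 20;
    // The closure has to be deduced as `Fn` (callable twice), not `FnOnce`.
    assert_eq!(call_twice(move || x + 1), 42);
}
```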
feat: Add "make tuple" tactic to term search
Follow up to https://github.com/rust-lang/rust-analyzer/pull/16092
Now term search also supports tuples.
```rust
let a: i32 = 1;
let b: f64 = 0.0;
let c: (i32, (f64, i32)) = todo!(); // Finds (a, (b, a))
```
In addition to the new tactic that handles tuples, I changed how generics are handled.
Previously it tried all possible options from the types we had in scope, but now it only tries useful ones that help us directly towards the goal, or at least towards calling some other function.
This changes O(2^n) to O(n^2), where n is the number of rounds, which in practice allows using types that take generics for multiple rounds (previously limited to 1). The average case, which also used to be exponential, is now roughly linear.
This means that deeply nested generics also work.
```rust
// Finds all valid combos, including `Some(Some(Some(...)))`
let a: Option<Option<Option<bool>>> = todo!();
```
_Note that although the complexity is smaller, allowing more types with generics slows the search down considerably overall. I hope that's fine though, as the autocomplete is disabled by default, and for code actions it's not super slow. Might have to tweak the depth hyperparameter though._
This resulted in a huge increase in results found (benchmarks on the `ripgrep` crate):
Before
```
Tail Expr syntactic hits: 149/1692 (8%)
Tail Exprs found: 749/1692 (44%)
Term search avg time: 18ms
```
After
```
Tail Expr syntactic hits: 291/1692 (17%)
Tail Exprs found: 1253/1692 (74%)
Term search avg time: 139ms
```
Most changes are local to term search except some tuple related stuff on `hir::Type`.
performance: Speed up Method Completions By Taking Advantage of Orphan Rules
(Continues https://github.com/rust-lang/rust-analyzer/pull/16498)
This PR speeds up method completions by doing two things without regressing `analysis-stats`[^1]:
- Filter candidate traits prior to calling `iterate_path_candidates` by relying on orphan rules (see below for a slightly more in-depth explanation). When generating completions [on `slog::Logger`](5e9e59c312/common/src/ledger.rs (L78)) in `oxidecomputer/omicron` as a test, this PR halved my completion times—it's now 454ms cold and 281ms warm. Before this PR, it was 808ms cold and 579ms warm.
- Inline some of the method candidate checks into `is_valid_method_candidate` and remove some unnecessary visibility checks. This was suggested by `@Veykril` in [this comment](https://github.com/rust-lang/rust-analyzer/pull/16498#issuecomment-1929864427).
We filter candidate traits by taking advantage of orphan rules. For additional details, I'll rely on `@WaffleLapkin's` explanation [from Zulip](https://rust-lang.zulipchat.com/#narrow/stream/185405-t-compiler.2Frust-analyzer/topic/Trait.20Checking/near/420942417):
> A type `A` can only implements traits which
> 1. Have a blanket implementation (`impl<T> Trait for T {}`)
> 2. Have implementation for `A` (`impl Trait for A {}`)
>
> Blanket implementation can only exist in `Trait`'s crate. Implementation for `A` can only exist in `A`'s or `Trait`'s crate.
Big thanks to Waffle for their keen observation!
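A toy model of that filter, just to make the orphan-rule reasoning concrete (names and shapes are illustrative, not the actual `hir-ty` API):
```rust
#[derive(Clone, Copy, PartialEq)]
struct CrateId(u32);

struct TraitInfo {
    /// Crate that defines the trait: the only place a blanket impl can live.
    defining_crate: CrateId,
    /// Crates known to contain any impl of this trait.
    crates_with_impls: Vec<CrateId>,
}

/// By the orphan rules, an impl of this trait for our receiver type can only live in
/// the trait's own crate (which also covers blanket impls) or in the receiver type's
/// crate, so traits with impls in neither crate can be skipped up front.
fn may_have_impl_for(receiver_crate: CrateId, tr: &TraitInfo) -> bool {
    tr.crates_with_impls.contains(&tr.defining_crate)
        || tr.crates_with_impls.contains(&receiver_crate)
}

fn main() {
    let tr = TraitInfo { defining_crate: CrateId(0), crates_with_impls: vec![CrateId(0)] };
    assert!(may_have_impl_for(CrateId(7), &tr));
}
```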
---
I think some additional improvements are possible:
- `for_trait_and_self_ty` seemingly does not distinguish between `&T`, `&mut T`, or `T`, resulting in seemingly irrelevant traits like `tokio::io::AsyncWrite` being included for, e.g., `&slog::Logger`. I don't know if they're being considered due to the [autoref/autoderef behavior](a02a219773/crates/hir-ty/src/method_resolution.rs (L945-L962)), but I wonder if it'd make sense to filter by mutability earlier and not consider trait implementations that require `&mut T` when we only have a `&T`.
- The method completions [spend a _lot_ of time in unification](https://rust-lang.zulipchat.com/#narrow/stream/185405-t-compiler.2Frust-analyzer/topic/Trait.20Checking/near/421072356), and while there might be low-hanging fruit there, it might make more sense to wait for the new trait solver in `rustc`. I dunno.
[^1]: The filtering occurs outside of typechecking, after all.
fix: Don't panic on synthetic syntax in inference diagnostics
Temporary fix for https://github.com/rust-lang/rust-analyzer/issues/16682
We ought to rethink how we attach diagnostics to things, as IDs don't work for `format_args` like that!
fix: panic when inlining callsites inside macros' parameters
Close#16660, #12429, #10695.
When `inline_into_callers` encounters call sites in macro parameters, it can lead to panics. Since there is no perfect way to handle macros, this PR directly filters out these cases.
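For illustration, the shape of call site that used to trigger the panic (sketch):
```rust
fn add_one(x: i32) -> i32 {
    x + 1
}

fn main() {
    // Running `inline_into_callers` on `add_one` now skips this call site,
    // which sits inside a macro's arguments, instead of panicking.
    println!("{}", add_one(41));
}
```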
`get_path_for_executable` will now first check `$CARGO_HOME` before falling back to searching `$PATH`.
rustup is the recommended way to manage Rust toolchains, and should therefore be picked before the
system toolchain.
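A minimal sketch of that lookup order (illustrative, not the actual `get_path_for_executable` code):
```rust
use std::{env, path::PathBuf};

fn cargo_path() -> Option<PathBuf> {
    // 1. Prefer $CARGO_HOME/bin/cargo, where rustup-managed toolchains live...
    if let Ok(home) = env::var("CARGO_HOME") {
        let candidate = PathBuf::from(home).join("bin").join("cargo");
        if candidate.exists() {
            return Some(candidate);
        }
    }
    // 2. ...then fall back to whatever `cargo` resolves to on $PATH.
    env::var_os("PATH").and_then(|paths| {
        env::split_paths(&paths)
            .map(|dir| dir.join("cargo"))
            .find(|p| p.exists())
    })
}

fn main() {
    println!("{:?}", cargo_path());
}
```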
fix: Recompiles due to RUSTC_BOOTSTRAP
Some packages (e.g. thiserror) force a recompile if the value of the `RUSTC_BOOTSTRAP` env var changes. RA sets the variable to 1 in order to enable rustc / cargo unstable options. This causes flapping recompiles when building outside of RA.
Fixes#15057
internal: Optimize salsa memory usage
Reduces memory on self by ~20mb for me; there are a few more mb to save here if we made LRU caching opt-in, as currently every entry in a memoized query stores an `AtomicUsize` for the LRU.
`cargo rustc -- <args>` first builds dependencies then calls `rustc <args>` for the current package. Here, we don't want to build dependencies, we just want to call `rustc --print`. An unstable `cargo rustc` `--print` command bypasses building dependencies first. This speeds up execution of this code path and ensures RA doesn't recompile dependencies with the `RUSTC_BOOTSTRAP=1` env var flag set.
Note that we must pass `-Z unstable-options` twice, first to enable the `cargo` unstable `--print` flag, then later to enable the unstable `rustc` `target-spec-json` print request.
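A sketch of the resulting invocation as described above (flag spelling and ordering are assumptions based on this description, not copied from the real code path):
```rust
use std::process::Command;

fn print_target_spec_json() -> std::io::Result<std::process::Output> {
    Command::new("cargo")
        .env("RUSTC_BOOTSTRAP", "1") // allow -Z flags on a stable toolchain
        .args([
            "rustc",
            "-Z", "unstable-options",      // first: enable cargo's unstable `--print`
            "--print", "target-spec-json", // forwarded to rustc as a print request
            "--",
            "-Z", "unstable-options",      // second: enable rustc's unstable print request
        ])
        .output()
}

fn main() {
    if let Ok(out) = print_target_spec_json() {
        println!("{}", String::from_utf8_lossy(&out.stdout));
    }
}
```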
Some packages (e.g. thiserror) force a recompile if the value of the `RUSTC_BOOTSTRAP` env var changes. RA sets the variable to 1 in order to enable rustc / cargo unstable options it uses. This causes flapping recompiles when building outside of RA.
As of Cargo 1.75 the `--keep-going` flag is stable. This change uses the flag without `RUSTC_BOOTSTRAP` if the Cargo version is >= 1.75, and drops `--keep-going` otherwise. This fixes build script recompilation.
internal: Attempt to add a timeout to rustc-tests
Looks like some new test is stuck; this might help with figuring that out, though it unfortunately won't if it's a chalk hang (which is the most likely).
fix: server hanging up on build script task
This should fix https://github.com/rust-lang/rust-analyzer/issues/16614, can't say for certain since it might be not 100% reproducible... We really need to replace the current workspace fetching logic, it is completely unreadable and incredibly difficult to follow. I don't really understand how the server even got to hang here honestly (I would expect it to loop re-fetching build scripts, but not hang).
Setup infra for handling auto trait bounds disabled due to perf problems
This patch updates some of the partially-implemented functions of `ChalkContext as RustIrDatabase`, namely `adt_datum()` and `impl_provided_for()`. With those, we can now correctly work with auto trait bounds and distinguish methods based on them.
Resolves#7856 (the second code sample; the first one is resolved by #13074)
**IMPORTANT**: I don't think we want to merge this until #7637 is resolved. Currently this patch introduces A LOT of unknown types and type mismatches as shown below. This is because we cannot resolve items like `hashbrown::HashMap` in `std` modules, leaving auto trait bounds on them and their dependents unprovable.
|crate (from `rustc-perf@c52ee6` except for r-a)|e3dc5a588f07d6f1d3a0f33051d4af26190abe9e|HEAD of this branch|
|---|---|---|
|rust-analyzer @ e3dc5a588f |exprs: 417528, ??ty: 907 (0%), ?ty: 114 (0%), !ty: 1|exprs: 417528, ??ty: 1704 (0%), ?ty: 403 (0%), !ty: 20|
|ripgrep|exprs: 62120, ??ty: 2 (0%), ?ty: 0 (0%), !ty: 0|exprs: 62120, ??ty: 132 (0%), ?ty: 58 (0%), !ty: 11|
|webrender/webrender|exprs: 94355, ??ty: 49 (0%), ?ty: 16 (0%), !ty: 2|exprs: 94355, ??ty: 429 (0%), ?ty: 130 (0%), !ty: 7|
|diesel|exprs: 132591, ??ty: 401 (0%), ?ty: 5129 (3%), !ty: 31|exprs: 132591, ??ty: 401 (0%), ?ty: 5129 (3%), !ty: 31|
feat!: create alias when renaming an import.
![gif](https://github.com/rust-lang/rust-analyzer/assets/57047985/c593d9a8-b8a0-4e13-9e50-a69c7d0d8749)
Closes#15858
Implemented:
- [x] - Prevent using `reserved` keywords (e.g self) and `_`.
- [x] - Rename other modules that might be referencing the import.
- [x] - Fix "broken" tests.
- [ ] - Rename **only** "direct" references.
- [ ] - Test more cases.
Future possibilities:
1. Also support `extern crate <name>` syntax.
2. Allow aliasing `self` when it is inside an `UseTreeList`.
~3. If import path already has an alias, "rename" the alias.~
~[4. Create alias even if path is not the last path segment.](https://github.com/rust-lang/rust-analyzer/pull/16489#issuecomment-1930541697)~
feat: add non-exhaustive-let diagnostic
I want this to have a quickfix to add an else branch but I couldn't figure out how to do that, so here's the diagnostic on its own. It pretends a `let` is a match with one arm, and asks the match checking whether that match would be exhaustive.
Previously the pattern was checked based on its own type, but that was causing a panic in `match_check` (while processing e.g. `crates/hir/src/lib.rs`) so I changed it to use the initialiser's type instead, to align with the checking of actual match expressions. I think the panic can still happen, but I hear that `match_check` is going to be updated to a new version from rustc, so I'm posting this now in the hopes that the panic will magically go away when that happens.
test: include `rename_path_inside_use_tree`.
Keeps track of the progress of the changes. Three other tests broke with these changes.
feat: rename all other usages within the current file.
feat: fix most of the implementation problems.
test: `rename_path_inside_use_tree` tests a more complicated scenario.
Commit 6a06f6f72 (Deduplicate reference search results, 2022-11-07) deduplicates references
within each definition.
There is an edge case when requesting references of a macro argument. Apparently, our
descend_into_macros() stanza in references.rs produces a cartesian product of
- references inside the macro, times
- references outside the macro.
Since the above deduplication only applies to the references within a single definition, we
return them all, leading to many redundant references.
Work around this by deduplicating definitions as well. Perhaps there is a better fix to not
produce this cartesian product in the first place; but I think at least for definitions the
problem would remain; a macro can contain multiple definitions of the same name, but since the
navigation target will be the unresolved location, it's the same for all of them.
We can't use unique() because we don't want to drop references that don't have a declaration
(though I don't have an example for this case).
I discovered this working with the "bitflags" macro from the crate of the same name.
Fixes#16357
Tracking import use types for more accurate redundant import checking
fixes#117448
By tracking import use types we can check whether an import is a scope use or something else, such as a module-relative use, which allows more accurate redundant import checking.
For example, unnecessary imports of items already in `std::prelude` can be eliminated:
```rust
use std::option::Option::Some;//~ WARNING the item `Some` is imported redundantly
use std::option::Option::None; //~ WARNING the item `None` is imported redundantly
```
This mostly works well, and eliminates a couple of delayed bugs.
One annoying thing is that we should really also add an
`ErrorGuaranteed` to `proc_macro::bridge::LitKind::Err`. But that's
difficult because `proc_macro` doesn't have access to `ErrorGuaranteed`,
so we have to fake it.
Add completions to show only traits in trait `impl` statement
This is a prerequisite PR for adding the assist mentioned in #12500.
P.S: If wanted, I will add the implementation of the assist in this PR as well.
Implement `literal_from_str` for proc macro server
Closes#16233
Todos and unanswered questions:
- [x] Is this the correct approach? Can both the legacy and `rust_analyzer_span` servers depend on the `syntax` crate?
- [ ] How should we handle suffixes for string literals? It doesn't seem like `rust-analyzer` preserves suffix information after parsing.
- [x] Why are the `expect` tests failing? Specifically `test_fn_like_macro_clone_literals`
Substitute $saved_file in custom check commands
If the custom command has a $saved_file placeholder, and we know the file being saved, replace the placeholder and run a check command.
If there's a placeholder and we don't know the saved file, do nothing.
This is a simplified version of #15381, which I hope is easier to review.
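A minimal sketch of the substitution rule described above (illustrative):
```rust
fn substitute_saved_file(command: &str, saved_file: Option<&str>) -> Option<String> {
    const PLACEHOLDER: &str = "$saved_file";
    if command.contains(PLACEHOLDER) {
        // Only run the command when we actually know which file was saved.
        saved_file.map(|file| command.replace(PLACEHOLDER, file))
    } else {
        Some(command.to_owned())
    }
}

fn main() {
    let cmd = substitute_saved_file("my-linter $saved_file", Some("src/lib.rs"));
    assert_eq!(cmd.as_deref(), Some("my-linter src/lib.rs"));
    assert_eq!(substitute_saved_file("my-linter $saved_file", None), None);
}
```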
feat: Introduce term search to rust-analyzer
# Introduce term search to `rust-analyzer`
_I've marked this as draft as there might be some shortcomings, please point them out so I can fix them. Otherwise I think it is kind of ready as I think I'll rather introduce extra functionality in follow up PRs._
Term search (or I guess expression search for rust) is a technique to generate code by basically making the types match.
Consider the following program
```rust
fn wrap(arg: i32) -> Option<i32> {
    todo!();
}
```
From the types of values in scope and constructors of `Option`, we can produce the expected result of wrapping the argument in `Option`
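For illustration, the expression term search is expected to produce for the `todo!()` above:
```rust
fn wrap(arg: i32) -> Option<i32> {
    Some(arg) // built from the `Option::Some` constructor and the local `arg`
}
```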
Dependently typed languages such as `Idris2` and `Agda` have similar tools to help with proofs, but this can also be used in everyday development as a kind of "auto-complete".
# Demo videos
https://github.com/rust-lang/rust-analyzer/assets/19900308/7b68a1b7-7dba-4e31-9221-6c7485e77d88
https://github.com/rust-lang/rust-analyzer/assets/19900308/0fae530a-aabb-4b28-af71-e19f8d3d64b2
# What does it currently do
- It works well with locals, free functions, type constructors and non-static impl methods that take items by value.
- Works with functions/methods that take shared references, but not with unique references (very conservative).
- Can handle projections to struct fields (e.g. `foo.bar.baz`), but this might be more conservative than it has to be in order to avoid conflicting with the borrow checker.
- Should create only valid programs (no type / borrow checking errors). Tested with `rust-analyzer analysis-stats /path/to/ripgrep/Cargo.toml --run-term-search --validate-term-search` (basically running `cargo check` on all of the generated programs; the only error seems to be due to type inference, which is more of an issue with the testing method).
# Performance / fitness
```txt
ripgrep (latest)
Tail Expr syntactic hits: 130/1692 (7%)
Tail Exprs found: 523/1692 (30%)
Term search avg time: 9ms
Term search: 15.64s, 97ginstr, 8mb
rust-analyzer (on this branch)
Tail Expr syntactic hits: 804/13860 (5%)
Tail Exprs found: 6757/13860 (48%)
Term search avg time: 78ms
Term search: 1088.23s, 6765ginstr, 98mb
```
Highly generic code seems to blow up the search space, so currently the number of generics allowed in functions/methods is limited to 0 (1 didn't give much improvement and 2 already means 0.5+s search times).
# Plans for the future (not in this PR)
- ~~Add impl methods that do not take `self` type (should be quite straightforward)~~ Done
- Be smarter (aka less restrictive) about borrow checking - this seems quite hard, but since the current approach is rather naive I think some easy improvements are available.
- ~~See if it works as an autocomplete while typing~~ Done
_Feel free to ask questions / point out shortcomings either here or on Zulip, I'll be happy to address them. I'm doing this as part of my MSc thesis so I'll be working on it till summer anyway 😄_
feat: ignored and disabled macro expansion
Supersedes #15117, I was having some conflicts after a rebase and since I didn't remember much of it I started clean instead.
The end result is pretty much the same as the linked PR, but instead of proc macro lookups, I marked the expanders that explicitly cannot be expanded and we shouldn't even attempt to do so.
## Unresolved questions
- [ ] I introduced a `DISABLED_ID` next to `DUMMY_ID` in `hir-expand`'s `ProcMacroExpander`, that is effectively exactly the same thing with slightly different semantics, dummy macros are not (yet) expanded probably due to errors, while not expanding disabled macros is part of the usual flow. I'm not sure if it's the right way to handle this, I also thought of just adding a flag instead of replacing the macro ID, so that the disabled macro can still be expanded for any reason if needed.
internal: tool discovery prefers sysroot tools
Fixes https://github.com/rust-lang/rust-analyzer/issues/15927, Fixes https://github.com/rust-lang/rust-analyzer/issues/16523
After this PR we will look for `cargo` and `rustc` in the sysroot if it was successfully loaded, instead of using the current lookup scheme. This should be more correct than the current approach, as that relies on the working directory of the server binary or the loaded workspace, meaning it can behave a bit oddly wrt overrides.
Additionally, rust-project.json projects now get the target data layout set so there should be better const eval support now.
Abstract more over ItemTreeLoc-like structs
Allows reducing some code duplication by using functions generic over said structs. The diff isn't negative due to me adding some additional impls for completeness.
Enable some minor lints that we should tackles
This enables the following lint rules, which were commented as ones we should tackle at some point:
- non_canonical_clone_impl
- non_canonical_partial_ord_impl
- self_named_constructors
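For example, `non_canonical_clone_impl` flags a `Clone` impl for a `Copy` type that doesn't simply return `*self` (illustrative snippet):
```rust
#[derive(Copy)]
struct Point(i32, i32);

impl Clone for Point {
    fn clone(&self) -> Self {
        Point(self.0, self.1) // non-canonical: for a `Copy` type this should be `*self`
    }
}

fn main() {
    let p = Point(1, 2);
    let _q = p.clone();
}
```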
feature: Create `UnindexedProject` notification to be sent to the client
(Note that this branch contains commits from https://github.com/rust-lang/rust-analyzer/pull/15830, which I'll rebase atop of as needed.)
Based on the discussion in https://github.com/rust-lang/rust-analyzer/issues/15837, I've added a notification and off-by-default toggle to send that notification from `handle_did_open_text_document`. I'm happy to rename/tweak this as needed.
I've been using this for a little bit, and it does seem to cause a little bit more indexing/work in rust-analyzer, but it's something that I'll profile as needed, I think.