Add preference modifier for workspace-local crates when using auto import.
`@joshka` pointed out some odd behavior in auto import ordering. The current heuristics didn't seem to apply any precedence to imports from the workspace, so I've gone ahead and added that.
I'd appreciate feedback on the modifier numbers here. I went with something that felt like it balanced giving more weight to workspace crates without completely ignoring relative path distance.
closes https://github.com/rust-lang/rust-analyzer/issues/17303
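To illustrate the idea, here is a hypothetical sketch with invented names and modifier numbers (not the actual rust-analyzer relevance code): workspace-local crates get a flat bonus while relative path distance still lowers the score.
```rust
// Hypothetical sketch: the function name and the modifier numbers are invented.
fn import_relevance(is_workspace_crate: bool, path_segments: usize) -> i32 {
    // A flat bonus for workspace crates, minus a small penalty per path segment,
    // so workspace imports tend to win without path distance being ignored entirely.
    let workspace_bonus = if is_workspace_crate { 10 } else { 0 };
    workspace_bonus - path_segments as i32
}

fn main() {
    // A workspace crate three segments away still outranks an external crate one segment away.
    assert!(import_relevance(true, 3) > import_relevance(false, 1));
}
```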
Add `toggle_async_sugar` assist code action
Implement code action for sugaring and de-sugaring asynchronous functions.
This code action does not import the `Future` trait when de-sugaring and does not touch the function body; I guess this can be implemented later if needed. It also does not take other bounds into consideration, because IMO the usual intent is "let me try to use the sugared version here".
Feel free to request changes, this is my first code action implementation 😄
Closes #17010
Relates to #16195
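For illustration, here is a pair of equivalent functions showing what sugaring and de-sugaring mean here. This is my own example rather than the assist's literal output (the assist leaves the body untouched), and the de-sugared form spells out `std::future::Future` because the assist does not add the import:
```rust
// Sugared form.
async fn fetch_value() -> u32 {
    42
}

// De-sugared form (illustrative): the signature exposes the `Future` explicitly.
fn fetch_value_desugared() -> impl std::future::Future<Output = u32> {
    async { 42 }
}
```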
Implement assist to switch between doc and normal comments
Hey, this is my first PR to rust-analyzer, to get my feet wet with the code base. It's an assist to switch a normal comment to a doc comment and back, something I've found myself doing by hand a couple of times.
I shamelessly stole `relevant_line_comments` from `convert_comment_block`, because I didn't see any inter-assist imports happening in the files I peeked at, so I thought this would be preferable.
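A small illustrative example (not taken from the PR) of what the toggle does:
```rust
// Before the assist: a regular line comment above the item.
// Adds two numbers together.
fn add(a: i32, b: i32) -> i32 {
    a + b
}

/// After the assist: the same text as a doc comment (the assist can also switch it back).
fn add_documented(a: i32, b: i32) -> i32 {
    a + b
}
```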
handle {self} when removing unused imports
Fixes #17139
On master
```rs
mod inner {
    pub struct X();
    pub struct Y();
}
mod z {
    use super::inner::{self, X}$0;
    fn f() {
        let y = inner::Y();
    }
}
```
becomes
```rs
mod inner {
    pub struct X();
    pub struct Y();
}
mod z {
    use super::inner::self;
    fn f() {
        let y = inner::Y();
    }
}
```
with this fix it instead becomes
```rs
mod inner {
    pub struct X();
    pub struct Y();
}
mod z {
    use super::inner;
    fn f() {
        let y = inner::Y();
    }
}
```
feat: More callable info
With this PR we retain more info about callables other than functions, allowing for closure parameter type inlay hints to be linkable as well as better signature help around closures and `Fn*` implementors.
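As an illustration (my own example, not taken from the PR) of the kinds of places this improves:
```rust
// Illustrative example.
fn apply(f: impl Fn(i32, i32) -> i32) -> i32 {
    // Signature help here can now describe `f`'s callable signature, not just named fns.
    f(1, 2)
}

fn main() {
    // The closure's parameter types are inferred; their inlay hints can now be
    // navigable links to the underlying types.
    let add = |a, b| a + b;
    let _sum: i32 = apply(add);
}
```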
Fix OOM caused by term search
The issue came from the multi Cartesian product for expressions with many (25+) arguments, each having multiple options.
The solution is twofold:
### Avoid blowing up in Cartesian product
**Before, the logic was:**
1. Find expressions for each argument/param - there may be many
2. Take the Cartesian product (which blows up in some cases)
3. If there are more than 2 options, throw them away by squashing them to `Many`
**Now the logic is:**
1. Find expressions for each argument/param and squash them to `Many` if there are more than 2, since otherwise we are guaranteed to also have more than 2 after taking the product, which means squashing them anyway
2. Take the Cartesian product lazily, as an iterator
3. Start consuming it one by one
4. If there are more than 2 options, throw them away by squashing them to `Many` (same as before)
This is also why I had to update some tests, as the expressions get squashed to `Many` more eagerly.
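A simplified sketch of the new ordering, with toy types and the `itertools` crate standing in for the real term-search internals:
```rust
use itertools::Itertools;

#[derive(Clone, Debug)]
enum Expr {
    Concrete(String),
    Many, // placeholder meaning "too many options to enumerate"
}

fn squash(options: Vec<Expr>) -> Vec<Expr> {
    if options.len() > 2 { vec![Expr::Many] } else { options }
}

fn argument_combinations(per_arg_options: Vec<Vec<Expr>>) -> Vec<Vec<Expr>> {
    let combos: Vec<Vec<Expr>> = per_arg_options
        .into_iter()
        .map(squash) // 1. squash each argument's options *before* taking the product
        .multi_cartesian_product() // 2. lazy product over the (now small) factors
        .take(3) // 3. consume one by one, never materialising more than we need
        .collect();
    if combos.len() > 2 {
        vec![vec![Expr::Many]] // 4. more than 2 combinations: squash to `Many` (as before)
    } else {
        combos
    }
}

fn main() {
    let concrete = |s: &str| Expr::Concrete(s.to_string());
    let per_arg = vec![
        vec![concrete("a")],
        vec![concrete("b1"), concrete("b2"), concrete("b3")], // squashed to `Many` up front
        vec![concrete("c")],
    ];
    let combos = argument_combinations(per_arg);
    println!("{combos:?}"); // [[Concrete("a"), Many, Concrete("c")]]
}
```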
### Use fuel to avoid long search times and high memory usage
Now all the tactics use `should_continue: Fn() -> bool` to check whether they should keep iterating _(similarly to Chalk)_.
This reduces the search times by an order of magnitude, for example from ~139ms/hole to ~14ms/hole for the `ripgrep` crate.
Slightly fewer expressions are found, but I think the speed gain is worth it for usability.
Also note that the syntactic hit count decreases more because of squashing, so you simply need to run the search multiple times to get full terms.
The worst-case search times (for example the `nalgebra` crate, because it has tons of generics) are now mostly under 200ms.
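A rough, self-contained sketch of the fuel idea (simplified, not the actual term-search API):
```rust
use std::cell::Cell;

fn main() {
    // A small fuel budget shared by all "tactics"; `should_continue` matches the
    // `Fn() -> bool` shape mentioned above and returns false once the fuel runs out.
    let fuel = Cell::new(2u32);
    let should_continue = || {
        let left = fuel.get();
        if left == 0 {
            false
        } else {
            fuel.set(left - 1);
            true
        }
    };

    let mut found = Vec::new();
    for candidate in [1, 2, 3, 4] {
        if !should_continue() {
            break; // out of fuel: stop exploring instead of burning time and memory
        }
        found.push(candidate);
    }
    assert_eq!(found, vec![1, 2]);
}
```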
Benchmarks on the `ripgrep` crate:
Before:
```
Tail Expr syntactic hits: 291/1692 (17%)
Tail Exprs found: 1253/1692 (74%)
Term search avg time: 139ms
```
After:
```
Tail Expr syntactic hits: 239/1692 (14%)
Tail Exprs found: 1226/1692 (72%)
Term search avg time: 14ms
```
Before this commit `UseTree::remove_unnecessary_braces` removed the braces
around `{self}` in `use x::y::{self};`, but `use x::y::self;` is not valid
Rust.
feature: Make the generate function assist generate a constructor if the generated function is named `new` and is an associated function.
Closes #17050
This PR makes the `generate function` assist generate a constructor if the generated function is named `new` and is an associated function.
If the `Self` type is a record struct, it generates the constructor like this:
```rust
impl Foo {
    fn new() -> Self {
        Self { field_1: todo!(), field_2: todo!() }
    }
}
```
If the `Self` type is a tuple struct, it generates the constructor like this:
```rust
impl Foo {
    fn new() -> Self {
        Self(todo!(), todo!())
    }
}
```
If the `Self` type is a unit struct, it generates the constructor like this:
```rust
impl Foo {
    fn new() -> Self {
        Self
    }
}
```
If the `Self` type is some other ADT, it generates the constructor like this:
```rust
impl Foo {
    fn new() -> Self {
        todo!()
    }
}
```
It is a bitset semantically --- many categorical things can be true about
a reference at the same time.
In particular, a reference can be a "test" and a "write" at the same
time.
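A minimal sketch of what "bitset semantically" means here (hypothetical definition, not the actual rust-analyzer type): categories are combinable flags rather than mutually exclusive variants.
```rust
// Hypothetical sketch of a flag-style reference category.
#[derive(Clone, Copy)]
struct ReferenceCategory(u8);

impl ReferenceCategory {
    const WRITE: ReferenceCategory = ReferenceCategory(1 << 0);
    const TEST: ReferenceCategory = ReferenceCategory(1 << 1);

    fn contains(self, other: ReferenceCategory) -> bool {
        self.0 & other.0 == other.0
    }
}

impl std::ops::BitOr for ReferenceCategory {
    type Output = ReferenceCategory;
    fn bitor(self, rhs: ReferenceCategory) -> ReferenceCategory {
        ReferenceCategory(self.0 | rhs.0)
    }
}

fn main() {
    // A reference inside a test that also writes to its target: both bits set at once.
    let cat = ReferenceCategory::WRITE | ReferenceCategory::TEST;
    assert!(cat.contains(ReferenceCategory::WRITE) && cat.contains(ReferenceCategory::TEST));
}
```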
internal: improve `TokenSet` implementation and add reserved keywords
The current `TokenSet` type represents "A bit-set of `SyntaxKind`s" as a newtype `u128`.
Internally, the flag for each `SyntaxKind` variant in the bit-set is set as the n-th LSB (least significant bit) via a bit-wise left shift operation, where n is the discriminant.
Edit: This is problematic because there are currently ~121 token `SyntaxKind`s, so adding new token kinds for the missing reserved keywords pushes the number of token `SyntaxKind`s above 128, making this ["mask"](7a8374c162/crates/parser/src/token_set.rs (L31-L33)) operation overflow.
~~This is problematic because there's currently 266 SyntaxKinds, so this ["mask"](7a8374c162/crates/parser/src/token_set.rs (L31-L33)) operation silently overflows in release mode.~~
~~This leads to a single flag/bit in the bit-set being shared by multiple `SyntaxKind`s~~.
This PR:
- Changes the wrapped type for `TokenSet` from `u128` to `[u64; 3]` ~~`[u*; N]` (currently `[u16; 17]`) where `u*` can be any desirable unsigned integer type and `N` is the minimum array length needed to represent all token `SyntaxKind`s without any collisions~~.
- Edit: Add assertion that `TokenSet`s only include token `SyntaxKind`s
- Edit: Add ~7 missing [reserved keywords](https://doc.rust-lang.org/stable/reference/keywords.html#reserved-keywords)
- ~~Moves the definition of the `TokenSet` type to grammar codegen in xtask, so that `N` is adjusted automatically (depending on the chosen `u*` "base" type) when new `SyntaxKind`s are added~~.
- ~~Updates the `token_set_works_for_tokens` unit test to include the `__LAST` `SyntaxKind` as a way of catching overflows in tests.~~
~~Currently `u16` is arbitrarily chosen as the `u*` "base" type mostly because it strikes a good balance (IMO) between unused bits and readability of the generated `TokenSet` code (especially the [`union` method](7a8374c162/crates/parser/src/token_set.rs (L26-L28))), but I'm open to other suggestions or a better methodology for choosing `u*` type.~~
~~I considered using a third-party crate for the bit-set, but a direct implementation seems simple enough without adding any new dependencies. I'm not strongly opposed to using a third-party crate though, if that's preferred.~~
~~Finally, I haven't had the chance to review issues, to figure out if there are any parser issues caused by collisions due the current implementation that may be fixed by this PR - I just stumbled upon the issue while adding "new" keywords to solve #16858~~
Edit: fixes #16858
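For illustration, a minimal sketch of the widened representation (simplified, with `u16` standing in for the `SyntaxKind` discriminant; not the exact code in this PR):
```rust
// Simplified sketch: a bit-set over up to 192 token kinds backed by `[u64; 3]`.
#[derive(Clone, Copy)]
pub struct TokenSet([u64; 3]);

impl TokenSet {
    pub const fn new(kinds: &[u16]) -> TokenSet {
        let mut words = [0u64; 3];
        let mut i = 0;
        while i < kinds.len() {
            let kind = kinds[i] as usize;
            // The flag for a kind lives in word `kind / 64`, bit `kind % 64`,
            // so discriminants above 127 no longer overflow a single integer.
            words[kind / 64] |= 1u64 << (kind % 64);
            i += 1;
        }
        TokenSet(words)
    }

    pub const fn union(self, other: TokenSet) -> TokenSet {
        TokenSet([
            self.0[0] | other.0[0],
            self.0[1] | other.0[1],
            self.0[2] | other.0[2],
        ])
    }

    pub const fn contains(&self, kind: u16) -> bool {
        let kind = kind as usize;
        (self.0[kind / 64] & (1u64 << (kind % 64))) != 0
    }
}

fn main() {
    // A kind with discriminant 130 would have overflowed the old `u128` mask.
    const KEYWORDS: TokenSet = TokenSet::new(&[5, 130]);
    assert!(KEYWORDS.contains(130) && !KEYWORDS.contains(6));
}
```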
Add more methods for resolving definitions from AST to their corresponding HIR types
In order to be able to add these methods with consistent naming I had to also rename two existing methods that would otherwise be conflicting/confusing:
- `Semantics::to_module_def(&self, file: FileId) -> Option<Module>` (before)
  `Semantics::file_to_module_def(&self, file: FileId) -> Option<Module>` (after)
- `Semantics::to_module_defs(&self, file: FileId) -> impl Iterator<Item = Module>` (before)
  `Semantics::file_to_module_defs(&self, file: FileId) -> impl Iterator<Item = Module>` (after)
(the PR is motivated by an outside use of the `ra_ap_hir` crate that would benefit from being able to walk a `hir::Function`'s AST, resolving its exprs/stmts/items to their HIR equivalents)
fix: use 4 spaces for indentation in macro expansion
Partial fix for #16471.
In the previous code, the indentation produced by macro expansion was set to 2 spaces. This PR modifies it to 4 spaces for the sake of consistency.
Separate into create and apply edit
Rename usages
Hacky name map
Add more tests
Handle non-exhaustive
Add some more TODOs
Private fields
Use todo
Nesting
Improve rest token generation
Cleanup
Doc -> regular comment
Support mut
feat: Add "make tuple" tactic to term search
Follow up to https://github.com/rust-lang/rust-analyzer/pull/16092
Now term search also supports tuples.
```rust
let a: i32 = 1;
let b: f64 = 0.0;
let c: (i32, (f64, i32)) = todo!(); // Finds (a, (b, a))
```
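A toy sketch of the tactic's shape, with invented helper names (the real tactic works on `hir` types and bounded candidate sets): to build a term of a tuple type, find a term for each element type and combine them.
```rust
// Toy sketch: if any element type has no term, the tuple has none either.
fn terms_for_tuple(
    element_types: &[&str],
    terms_for: impl Fn(&str) -> Vec<String>,
) -> Vec<String> {
    let per_element: Vec<Vec<String>> = element_types.iter().map(|ty| terms_for(ty)).collect();
    if per_element.iter().any(|opts| opts.is_empty()) {
        return Vec::new();
    }
    // For brevity only the first candidate per element is combined here; the real
    // tactic enumerates (bounded) combinations, as described for term search above.
    let fields: Vec<&str> = per_element.iter().map(|opts| opts[0].as_str()).collect();
    vec![format!("({})", fields.join(", "))]
}

fn main() {
    let terms = terms_for_tuple(&["i32", "f64"], |ty| match ty {
        "i32" => vec!["a".to_string()],
        "f64" => vec!["b".to_string()],
        _ => Vec::new(),
    });
    assert_eq!(terms, vec!["(a, b)".to_string()]);
}
```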
In addition to the new tactic that handles tuples, I changed how generics are handled.
Previously it tried all possible options from the types we had in scope, but now it only tries useful ones that take us directly towards the goal, or at least towards calling some other function.
This changes O(2^n) to O(n^2), where n is the number of rounds, which in practice allows using types that take generics for multiple rounds (previously limited to 1). The average case, which also used to be exponential, is now roughly linear.
This means that deeply nested generics also work.
```rust
// Finds all valid combos, including `Some(Some(Some(...)))`
let a: Option<Option<Option<bool>>> = todo!();
```
_Note that although the complexity is smaller, allowing more types with generics slows the overall search down considerably. I hope that's fine, as autocomplete is disabled by default and code actions aren't overly slow. We might have to tweak the depth hyperparameter, though._
This resulted in a huge increase in results found (benchmarks on the `ripgrep` crate):
Before
```
Tail Expr syntactic hits: 149/1692 (8%)
Tail Exprs found: 749/1692 (44%)
Term search avg time: 18ms
```
After
```
Tail Expr syntactic hits: 291/1692 (17%)
Tail Exprs found: 1253/1692 (74%)
Term search avg time: 139ms
```
Most changes are local to term search except some tuple related stuff on `hir::Type`.