* There are a few borrows that don't seem to be needed. I even did a quick assembly comparison and posted a question to Stack Overflow about it. See [here](https://stackoverflow.com/questions/74910196/advantages-of-pass-by-ref-val-with-impl-intoiteratoritem-impl-asrefstr)
* removed several `let _ = ...` that didn't look necessary (including a few that clippy did not suggest)
* there were a few `then(|| ctor{})` that clippy suggested replacing with `then_some(ctor{})` -- seems reasonable?
* removed some unneeded assignment-plus-return to keep the code a bit leaner
* a few `writeln!` instead of `write!`, or even consolidated `write!` calls
* a nice optimization to use `ch.is_ascii_digit()` instead of `ch.is_digit(10)` (illustrated in the sketch below)
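A minimal, hypothetical sketch (the function and names are made up, not from this diff) of what the `then_some`/`is_ascii_digit` suggestions look like:
```rust
// Hypothetical example; the real changes touch existing rust-analyzer code.
fn digit_label(ch: char, enabled: bool) -> Option<&'static str> {
    // clippy: `then_some(..)` instead of `then(|| ..)` when no laziness is needed,
    // and `is_ascii_digit()` instead of `is_digit(10)`.
    (enabled && ch.is_ascii_digit()).then_some("digit")
}
```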
This makes the code more readable and concise by
moving all format arguments like `format!("{}", foo)`
into the more compact `format!("{foo}")` form.
The change was created automatically with the command below, so there is far less chance
of an accidental typo.
```
cargo clippy --fix -- -A clippy::all -W clippy::uninlined_format_args
```
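For illustration only (the variable and strings are made up), this is the shape of the rewrite the lint performs:
```rust
fn main() {
    let name = "world";
    let before = format!("hello, {}!", name); // what clippy flags
    let after = format!("hello, {name}!");    // the inlined form it suggests
    assert_eq!(before, after);
}
```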
11877: fix: splitting path of a glob import wrongly adds `self` r=Veykril a=iDawer
Close #11703
`ast::UseTree::split_prefix` handles globs now.
Removed an extra branch for globs in `ide_db::imports::merge_imports::recursive_merge` (superseded by `split_prefix`).
Co-authored-by: iDawer <ilnur.iskhakov.oss@outlook.com>
Adds a label / lifetime parameter to `ide_assists::handlers::extract_function::FlowKind::{Break, Continue}`, adds support for emitting labels to `syntax::ast::make::{expr_break, expr_continue}`, and implements the required machinery to let `extract_function` make use of them.
This does modify the external API of the `syntax` crate, but the changes there are simple, not used outside `ide_assists`, and, well, we should probably support emitting `break` and `continue` labels through `syntax` anyways, they're part of the language spec.
Closes #11413.
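A hypothetical selection (marked with comments, not taken from the PR's tests) that can now be extracted because the labeled flow is supported:
```rust
fn process() {
    'outer: for x in 0..10 {
        for y in 0..10 {
            // --- extract from here ---
            if x * y > 25 {
                break 'outer; // labeled break/continue now survives extraction
            }
            // --- to here ---
        }
    }
}
```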
11598: feat: Parse destructuring assignment r=Veykril a=ChayimFriedman2
Part of #11532.
Lowering is not as easy, and may not even be feasible right now, as it requires generating identifiers: `(a, b) = (b, a)` is desugared into
```rust
{
    let (<gensym_a>, <gensym_b>) = (b, a);
    a = <gensym_a>;
    b = <gensym_b>;
}
```
rustc uses hygiene to implement that, but we don't support hygiene yet.
However, I think parsing was the main problem, as lowering will only affect type inference, and while `{unknown}` is not nice it's much better than a syntax error.
I'm still looking for the best way to do lowering, though.
Fixes #11454.
Co-authored-by: Chayim Refael Friedman <chayimfr@gmail.com>
11322: Extract function also extracts comments r=Vannevelj a=Vannevelj
Fixes #9011
The difficulty I came across is that the original assist works from the concept of an `ast::StmtList`, a node, but that does not allow me to (easily) represent comments, which are tokens. To combat this, I do a whole bunch of roundtrips: from the `ast::StmtList` I retrieve the `NodeOrToken`s it encompasses.
I then cast all `Node` ones back to a `Stmt` so I can apply indentation to them, after which each is parsed again as a `NodeOrToken`.
Lastly, I add a new `make::` API that accepts `NodeOrToken` rather than `StmtList` so we can write the comment tokens.
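A rough sketch of the first roundtrip, assuming the usual `syntax`/rowan APIs (`children_with_tokens`, `AstNode::cast`); the indentation editing and the new `make::` constructor from this PR are not shown:
```rust
use syntax::{ast, AstNode, SyntaxElement, SyntaxKind};

// Sketch: walk the statement list's children, keeping comment tokens and
// statement nodes as a flat list of elements.
fn stmts_and_comments(list: &ast::StmtList) -> Vec<SyntaxElement> {
    list.syntax()
        .children_with_tokens()
        .filter(|el| match el {
            SyntaxElement::Node(node) => ast::Stmt::cast(node.clone()).is_some(),
            SyntaxElement::Token(tok) => tok.kind() == SyntaxKind::COMMENT,
        })
        .collect()
}
```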
Co-authored-by: Jeroen Vannevel <jer_vannevel@outlook.com>
11107: Fix generic type substitution in impl trait with assoc type r=pnevyk a=pnevyk
Fixes#11045
The path transform now detects if a type parameter that is being substituted has an associated type. In that case it is necessary (or at least safer in the general case) to fully qualify the substitution with the trait the associated type belongs to.
This PR also fixes the previously wrong behavior of the substitution, which could create an invalid tree `PATH_TYPE -> PATH_TYPE -> ...`.
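An illustrative, made-up example of the qualification this refers to:
```rust
trait Tr {
    type Assoc;
    fn make() -> Self::Assoc;
}

struct S;

impl Tr for S {
    type Assoc = u32;
    fn make() -> u32 {
        0
    }
}

// When `T` gets substituted by `S` in something like `T::Assoc`, plain
// `S::Assoc` can be ambiguous if `S` implements several traits that all
// have an `Assoc` type, so the transform emits the fully qualified form:
type Qualified = <S as Tr>::Assoc;
```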
Co-authored-by: Petr Nevyhoštěný <petr.nevyhosteny@gmail.com>
11145: feat: add config to use reasonable default expression instead of todo! when filling missing fields r=Veykril a=bnjjj
Use `Default::default()` in struct fields when we ask to fill them, instead of putting `todo!()` for every field
before:
```rust
pub enum Other {
    One,
    Two,
}

pub struct Test {
    text: String,
    num: usize,
    other: Other,
}

fn t_test() {
    let test = Test {<|>};
}
```
after:
```rust
pub enum Other {
    One,
    Two,
}

pub struct Test {
    text: String,
    num: usize,
    other: Other,
}

fn t_test() {
    let test = Test {
        text: String::new(),
        num: 0,
        other: todo!(),
    };
}
```
Co-authored-by: Benjamin Coenen <5719034+bnjjj@users.noreply.github.com>
Co-authored-by: Coenen Benjamin <benjamin.coenen@hotmail.com>
This has to re-introduce the `sink` pattern, because doing this purely
with iterators is awkward :( Maaaybe the event vector was a false start?
But, anyway, I like the current factoring more -- it's sort-of obvious
that we do want to keep the ws-attachment business in the parser, and that
we also don't want that to depend on the particular tree structure. I
think the `shortcuts` module achieves that.
The general theme of this is to make the parser a better independent
library.
The specific thing we do here is replacing the callback-based TreeSink with
a data structure. That is, rather than calling user-provided tree
construction methods, the parser now spits out a very bare-bones tree,
effectively a log of a DFS traversal.
This makes the parser usable without any *specific* tree sink, and allows
us to, e.g., move tests into this crate.
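A minimal sketch, using made-up types (not the actual `parser` crate API), of what such a log of a DFS traversal could look like:
```rust
// A flat event list that any consumer can replay into its own tree.
enum TreeEvent {
    StartNode { kind: &'static str },
    Token { kind: &'static str, text: &'static str },
    FinishNode,
}

// `fn foo() {}` could come out roughly as:
fn example_events() -> Vec<TreeEvent> {
    use TreeEvent::*;
    vec![
        StartNode { kind: "FN" },
        Token { kind: "fn_kw", text: "fn" },
        Token { kind: "ident", text: "foo" },
        StartNode { kind: "PARAM_LIST" },
        Token { kind: "l_paren", text: "(" },
        Token { kind: "r_paren", text: ")" },
        FinishNode,
        StartNode { kind: "BLOCK_EXPR" },
        Token { kind: "l_curly", text: "{" },
        Token { kind: "r_curly", text: "}" },
        FinishNode,
        FinishNode,
    ]
}
```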
Now, it's also true that this is a distinction without a difference, as
the old and the new interface are equivalent in expressiveness. Still,
this new thing seems somewhat simpler. But yeah, I admit I don't have a
suuper strong motivation here, just a hunch that this is better.
Revert "Fix `impl_trait` function to emit correct ast"
This reverts commit 55a4813151.
Fix `impl_def_from_trait`
It now generates the correct `ast::Impl` using
`generate_trait_impl_text` and parses it to form the right node (copied
from the private fn `make::ast_from_text`).
10689: Handle pub tuple fields in tuple structs r=Veykril a=adamrk
The current implementation will throw a parser error for tuple structs
that contain a pub tuple field. For example,
```rust
struct Foo(pub (u32, u32));
```
is valid Rust, but rust-analyzer will throw a parser error. This is
because the parens after `pub` are treated as a visibility context.
Allowing a tuple type to follow `pub` in the special case where we are
defining fields in a tuple struct fixes the issue.
I guess this is a really minor case because there's not much reason
for having a tuple type within a tuple struct, but it is valid Rust syntax...
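A small made-up snippet showing why the parser expects a visibility there:
```rust
mod m {
    // After `pub`, an opening paren normally introduces a restricted
    // visibility such as `pub(crate)` or `pub(super)`...
    pub(crate) struct A;
    pub(super) struct B;

    // ...so the tuple *type* in this field used to be misparsed as one.
    pub struct Foo(pub (u32, u32));
}
```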
Co-authored-by: Adam Bratschi-Kaye <ark.email@gmail.com>
10546: feat: Implement promote_local_to_const assist r=Veykril a=Veykril
Fixes #7692; that is, one can now invoke the `extract_variable` assist on something and then follow that up with this assist to turn it into a const.
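Roughly what the two-step flow looks like on made-up source (not actual assist output):
```rust
fn before() {
    let bytes = 1024 * 1024; // result of `extract_variable`
    let _buf = vec![0u8; bytes];
}

fn after() {
    const BYTES: usize = 1024 * 1024; // after `promote_local_to_const`
    let _buf = vec![0u8; BYTES];
}
```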
bors r+
Co-authored-by: Lukas Wirth <lukastw97@gmail.com>
10440: Fix Clippy warnings and replace some `if let`s with `match` r=Veykril a=arzg
I decided to try fixing a bunch of Clippy warnings. I am aware of this project’s opinion of Clippy (I have read both [rust-lang/clippy#5537](https://github.com/rust-lang/rust-clippy/issues/5537) and [rust-analyzer/rowan#57 (comment)](https://github.com/rust-analyzer/rowan/pull/57#discussion_r415676159)), so I totally understand if part of or the entirety of this PR is rejected. In particular, I can see how the semicolons and `if let` vs `match` commits provide comparatively little benefit when compared to the ensuing churn.
I tried to separate each kind of change into its own commit to make it easier to discard certain changes. I also only applied Clippy suggestions where I thought they provided a definite improvement to the code (apart from semicolons, which is IMO more of a formatting/consistency question than a linting question). In the end I accumulated a list of 28 Clippy lints I ignored entirely.
Sidenote: I should really have asked about this on Zulip before going through all 1,555 `if let`s in the codebase to decide which ones definitely look better as `match` :P
Co-authored-by: Aramis Razzaghipour <aramisnoah@gmail.com>
Consider these examples

    { 92 }
    async { 92 }
    'a: { 92 }
    #[a] { 92 }
Previously the trees for them were

    BLOCK_EXPR
      { ... }

    EFFECT_EXPR
      async
      BLOCK_EXPR
        { ... }

    EFFECT_EXPR
      'a:
      BLOCK_EXPR
        { ... }

    BLOCK_EXPR
      #[a]
      { ... }
As you see, it gets progressively worse :) The last two items are
especially odd. The last one even violates the balanced curlies
invariant we have (#10357). The new approach is to say that the stuff in
`{}` is a stmt_list, and the block is stmt_list + optional modifiers:
    BLOCK_EXPR
      STMT_LIST
        { ... }

    BLOCK_EXPR
      async
      STMT_LIST
        { ... }

    BLOCK_EXPR
      'a:
      STMT_LIST
        { ... }

    BLOCK_EXPR
      #[a]
      STMT_LIST
        { ... }
Originally we tried to maintain the invariant that `{}` always match.
That is, that in the parse tree the pair of corresponding `{}` is always
the first and last tokens of some node.
We had the code to validate that, but apparently it's been broken for
**years** since we introduced the tokens/nodes split. Fixing it now makes
some tests fail.
It's unclear if we want to keep this invariant: there's a strong
motivation for breaking it in the following case:
```
use std::{ // unclosed brace
fn main() {
}
} // we don't actually want to pair this up with the one from `use`
```
So let's fix the code, but disable it for the time being.
The code here is intentionally dense and does exactly what is written.
Explaining semantic difference between Rust 2015 and 2018 doesn't help
with understanding syntax. Better to just add more targeted tests.
Group related stuff together, use only one path for parsing extern blocks
(they actually have modifiers).
Perhaps we should get rid of items_without_modifiers altogether? Better
to handle these kinds of diagnostics in the validation layer...
9944: internal: introduce in-place indenting API r=matklad a=iDawer
Introduce `edit_in_place::Indent` that uses the mutable tree API and is intended to replace `edit::AstNodeEdit`.
Closes #9903
Co-authored-by: Dawer <7803845+iDawer@users.noreply.github.com>
9814: Generate default impl when converting `#[derive(Debug)]` to manual impl r=yoshuawuyts a=yoshuawuyts
This patch makes it so when you convert `#[derive(Debug)]` to a manual impl, a default body is provided that's equivalent to the original output of `#[derive(Debug)]`. This should make it drastically easier to write custom `Debug` impls, especially when all you want to do is quickly omit a single field which is `!Debug`.
This is implemented for enums, record structs, tuple structs, empty structs - and it sets us up to implement variations on this in the future for other traits (like `PartialEq` and `Hash`).
Thanks!
## Codegen diff
This is the difference in codegen for record structs with this patch:
```diff
 struct Foo {
     bar: String,
 }
 impl fmt::Debug for Foo {
     fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
-        todo!();
+        f.debug_struct("Foo").field("bar", &self.bar).finish()
     }
 }
```
Co-authored-by: Irina Shestak <shestak.irina@gmail.com>
Co-authored-by: Yoshua Wuyts <yoshuawuyts@gmail.com>
Co-authored-by: Yoshua Wuyts <yoshuawuyts+github@gmail.com>
* Keep codegen adjacent to the relevant crates.
* Remove codegen deps from xtask, speeding up from-source installation.
This regresses the release process a bit, as it now needs to run the
tests (and, by extension, compile the code).
9455: feat: Handle not let if expressions in replace_if_let_with_match r=Veykril a=Veykril
Transforms bare `if cond {}` into `_ if cond` guard patterns in the match, as long as at least one `if let` is in the if chain; otherwise the assist won't be applicable.
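An illustrative before/after (the functions and names are made up, not taken from the assist's tests):
```rust
fn before(opt: Option<i32>, flag: bool) -> i32 {
    if let Some(x) = opt {
        x
    } else if flag {
        1
    } else {
        0
    }
}

fn after(opt: Option<i32>, flag: bool) -> i32 {
    match opt {
        Some(x) => x,
        _ if flag => 1, // bare `if cond` becomes a `_ if cond` guard
        _ => 0,
    }
}
```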
bors r+
Co-authored-by: Lukas Wirth <lukastw97@gmail.com>
This story begins in #8384, where we added a smart test for our syntax
highlighting, which runs the algorithm on synthetic files of varying length
in order to guesstimate if the complexity is O(N^2) or O(N)-ish.
The test turned out to be pretty effective, and flagged #9031 as a
change that makes syntax highlighting accidentally quadratic. There was
much rejoicing, for the time being.
Then, lnicola asked an ominous question[1]: "Are we sure that the time
is linear right now?"
Of course it turned out that our sophisticated non-linearity detector
*was* broken, and that our syntax highlighting *was* quadratic.
Investigating that, many brave hearts dug deeper and deeper into the
guts of rust-analyzer, only to get lost in a maze of traits delegating
to traits delegating to macros.
Eventually, matklad managed to peel off all layers of abstraction one by
one, until almost nothing was left. In fact, the issue was discovered in
the very foundation of the rust-analyzer -- in the syntax trees.
Worse, it was not a new problem, but rather a well-known, well-understood
and even (almost) well-fixed (!) performance bug.
The problem lies within the `SyntaxNodePtr` type -- a light-weight "address"
of a node in a syntax tree [3]. Such pointers are used by rust-analyzer all
over the place to record relationships between IR nodes and the
original syntax.
Internally, the pointer to a syntax node is represented by node's range.
To "dereference" the pointer, you traverse the syntax tree from the
root, looking for the node with the right range. The inner loop of this
search is finding a node's child whose range contains the specified
range. This inner loop was implemented by naive linear search over all
the children. For wide trees, dereferencing a single `SyntaxNodePtr` was
linear. The problem with wide trees though is that they contain a lot of
nodes! And dereferencing pointers to all the nodes is quadratic in the
size of the file!
The solution to this problem is to speed up the children search --
rather than doing a linear lookup, we can use binary search to locate
the child with the desired interval.
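A minimal standalone sketch of that lookup, assuming children are kept in source order with non-overlapping ranges (this is not rowan's actual API):
```rust
use std::ops::Range;

struct Child {
    range: Range<u32>,
}

// Find the index of the child whose range contains `target`, using binary
// search over the children's end offsets instead of a linear scan.
fn child_containing(children: &[Child], target: &Range<u32>) -> Option<usize> {
    let idx = children.partition_point(|c| c.range.end <= target.start);
    let c = children.get(idx)?;
    (c.range.start <= target.start && target.end <= c.range.end).then_some(idx)
}
```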
Doing this optimization was one of the motivations (or rather, side
effects) of #6857. That's why `rowan` grew the useful
`child_or_token_at_range` method which does exactly this binary search.
But it looks like we never actually switched to this method? Oops.
Lesson learned: do not leave broken windows in the fundamental infra.
Otherwise, you'll have to repeatedly re-investigate the issue, by
digging from the top of the Everest down to the foundation!
[1]: https://rust-lang.zulipchat.com/#narrow/stream/185405-t-compiler.2Frust-analyzer/topic/.60syntax_highlighting_not_quadratic.60.20failure/near/240811501
[2]: https://rust-lang.zulipchat.com/#narrow/stream/185405-t-compiler.2Frust-analyzer/topic/Syntax.20highlighting.20is.20quadratic
[3]: https://rust-lang.zulipchat.com/#narrow/stream/185405-t-compiler.2Frust-analyzer/topic/Syntax.20highlighting.20is.20quadratic/near/243412392