Fix Assist "replace named generic type with impl trait"
This is a follow-up PR to the assist "replace named generic type with impl trait" described in #14626, filtering out invalid param types. It integrates the feedback given in PR #14816.
The change updates the logic that determines when a function parameter's type param is safe to replace with its trait implementation. Some parameter definitions are invalid and should not be replaced by their traits; in those cases the assist is skipped entirely.
First, all usages of the generic type under the cursor are collected. These usage references are checked to see whether they occur outside the function parameter list. If an outside reference is found, e.g. in the body, return type, or where clause, the assist is skipped. All remaining usages then appear only in the function param list, and for each usage the param type is further inspected to see whether it is valid. The logic that determines whether a function parameter is valid follows a heuristic and may not cover all possible parameter definitions.
With this change the following param types (as given in [this comment](https://github.com/rust-lang/rust-analyzer/pull/14816#discussion_r1206834603)) are not replaced & therefore skip the assist.
```rust
fn foo<P: Trait>(
    _: <P as Trait>::Assoc,          // within path type qualifier
    _: <() as OtherTrait<P>>::Assoc, // same as above
    _: P::Assoc,                     // associated type shorthand
    _: impl OtherTrait<P>,           // generic arg in impl trait (note that associated type bindings are fine)
    _: &dyn Fn(P),                   // param type and/or return type for Fn* traits
) {}
```
The change updates the logic to determine if a function parameter is
valid for replacing the type param with the trait implementation.
First, all usages are determined to check whether they occur outside the
function parameter list. If an outside reference is found, e.g. in the body,
return type or where clause, the assist is skipped. All remaining usages
appear only in the function param list. For each usage the param type is
checked to see if it's valid.
**Please note** the logic currently follows a heuristic and may not cover
all existing parameter declarations.
* determine valid usage references by checking ancestors (on AST level)
* split test into separate ones
fix: Fix nav target calculation discarding file ids from differing macro upmapping
Fixes https://github.com/rust-lang/rust-analyzer/issues/14792
It turns out there was an assumption that upmapping from a macro will always end up in the same root file, which is no longer the case thanks to `include!`.
Add signature help for tuple patterns and expressions
~~These are somewhat wonky since their signature changes as you type depending on context, but they help out nevertheless.~~ Should be less wonky now with the added parser and lowering recoveries.
fix: Don't duplicate sysroot crates in rustc workspace
Since we handle `library` as the sysroot source directly in the rustc workspace, we ended up duplicating the crates there, once as sysroot crates and once as plain workspace crates. This causes a variety of issues for `vec!` and similar macros that emit `$crate` tokens across crates.
Prioritize threads affected by user typing
To this end I’ve introduced a new custom thread pool type which can spawn threads using each QoS class. This way we can run latency-sensitive requests under one QoS class and everything else under another QoS class. The implementation is very similar to that of the `threadpool` crate (which is currently used by rust-analyzer) but with unused functionality stripped out.
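For illustration, a minimal sketch of the idea (the `QoSClass` enum and pool API here are hypothetical, and the platform-specific QoS calls are stubbed out; the pool in this PR differs in its details):

```rust
use std::sync::{mpsc, Arc, Mutex};
use std::thread;

// Hypothetical QoS classes, mirroring the semantic levels discussed below.
#[derive(Clone, Copy, Debug)]
enum QoSClass {
    UserInitiated,
    Utility,
}

type Job = Box<dyn FnOnce() + Send + 'static>;

struct Pool {
    sender: mpsc::Sender<Job>,
}

impl Pool {
    // Spawns `threads` workers, all tagged with the same QoS class.
    fn new(threads: usize, qos: QoSClass) -> Pool {
        let (sender, receiver) = mpsc::channel::<Job>();
        let receiver = Arc::new(Mutex::new(receiver));
        for _ in 0..threads {
            let receiver = Arc::clone(&receiver);
            thread::spawn(move || {
                // A real implementation would call the platform QoS API here
                // (e.g. pthread's QoS functions on macOS); this sketch only
                // records the intent.
                let _qos = qos;
                loop {
                    let job = receiver.lock().unwrap().recv();
                    match job {
                        Ok(job) => job(),
                        Err(_) => break, // all senders dropped: shut down
                    }
                }
            });
        }
        Pool { sender }
    }

    fn spawn(&self, job: impl FnOnce() + Send + 'static) {
        self.sender.send(Box::new(job)).unwrap();
    }
}

fn main() {
    // Latency-sensitive requests go to one pool, everything else to another.
    let interactive = Pool::new(1, QoSClass::UserInitiated);
    let background = Pool::new(4, QoSClass::Utility);
    interactive.spawn(|| println!("handle completion request"));
    background.spawn(|| println!("prime caches"));
    thread::sleep(std::time::Duration::from_millis(50));
}
```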
I’ll have to rebase on master once #14859 is merged but I think everything else is alright :D
Fix edits for `convert_named_struct_to_tuple_struct`
Two fixes:
- When replacing syntax nodes, macro files weren't taken into account. Edits were simply made for `node.syntax().text_range()`, which would be the wrong range when `node` is inside a macro file.
- We do ancestor node traversal for every struct name reference to find record expressions/patterns to edit, but we didn't verify that expressions/patterns do actually refer to the struct we're operating on.
Best reviewed one commit at a time.
Fixes #13780
Fixes #14927
Previously we didn't verify that the record expressions/patterns we found
actually pointed to the struct we're operating on. Moreover, when those
record expressions/patterns had missing child nodes, we would continue
traversing their ancestor nodes.
This code replaces the thread pool implementation we were using
previously (from the `threadpool` crate). By making the thread pool
aware of QoS, each job spawned on the thread pool can have a different
QoS class.
This commit also replaces every QoS class used previously with Default
as a temporary measure so that each usage can be chosen deliberately.
Specify thread types using Quality of Service API
<details>
<summary>Some background (in case you haven’t heard of QoS before)</summary>
Heterogeneous multi-core CPUs are increasingly found in laptops and desktops (e.g. Alder Lake, Snapdragon 8cx Gen 3, M1). To maximize efficiency on this kind of hardware, it is important to provide the operating system with more information so threads can be scheduled on different core types appropriately.
The approach that XNU (the kernel of macOS, iOS, etc) and Windows have taken is to provide a high-level semantic API – quality of service, or QoS – which informs the OS of the program’s intent. For instance, you might specify that a thread is running a render loop for a game. This makes the OS provide this thread with as large a share of the system’s resources as possible. Specifying a thread is running an unimportant background task, on the other hand, is cause for it to be scheduled exclusively on high-efficiency cores instead of high-performance cores.
QoS APIs allow for easy configuration of many different parameters at once; for instance, setting QoS on XNU affects scheduling, timer latency, I/O priorities, and of course what core type the thread in question should run on. I don’t know any details on how QoS works on Windows, but I would guess it’s similar.
Hypothetically, taking advantage of these APIs would improve power consumption, thermals, battery life if applicable, etc.
</details>
# Relevance to rust-analyzer
From what I can tell the philosophy behind both the XNU and Windows QoS APIs is that _user interfaces should never stutter under any circumstances._ You can see this in the array of QoS classes which are available: the highest QoS class in both APIs is one intended explicitly for UI render loops.
Imagine rust-analyzer is performing CPU-intensive background work – maybe you just invoked Find Usages on `usize` or opened a large project – in this scenario the editor’s render loop should absolutely get higher priority than rust-analyzer, no matter what. You could view it in terms of “realtime-ness”: flight control software is hard realtime, audio software is soft realtime, GUIs are softer realtime, and rust-analyzer is not realtime at all. Of course, maximizing responsiveness is important, but respecting the rest of the system is more important.
# Implementation
I’ve tried my best to unify thread creation in `stdx`, where the new API I’ve introduced _requires_ specifying a QoS class. Different points along the performance/efficiency curve can make a great difference; the M1’s e-cores use around three times less power than the p-cores, so putting in this effort is worthwhile IMO.
It’s worth mentioning that Linux does not [yet](https://youtu.be/RfgPWpTwTQo) have a QoS API. Maybe translating QoS into regular thread priorities would be acceptable? From what I can tell the only scheduling-related code in rust-analyzer is Windows-specific, so ignoring QoS entirely on Linux shouldn’t cause any new issues. Also, I haven’t implemented support for the Windows QoS APIs because I don’t have a Windows machine to test on, and because I’m completely unfamiliar with Windows APIs :)
I noticed that rust-analyzer handles some requests on the main thread (using `.on_sync()`) and others on a threadpool (using `.on()`). I think it would make sense to run the main thread at the User Initiated QoS and the threadpool at Utility, but only if all requests caused by typing use `.on_sync()` and all others use `.on()`. I don’t understand how the `.on_sync()`/`.on()` split that’s currently present was chosen, so I’ve let this code be for the moment. Let me know if changing this to what I proposed makes any sense.
To avoid having to change everything back in case I’ve misunderstood something, I’ve left all threads at the Utility QoS for now. Of course, this isn’t what I hope the code will look like in the end, but I figured I have to start somewhere :P
# References
<ul>
<li><a href="https://developer.apple.com/library/archive/documentation/Performance/Conceptual/power_efficiency_guidelines_osx/PrioritizeWorkAtTheTaskLevel.html">Apple documentation related to QoS</a></li>
<li><a href="67e155c940/include/pthread/qos.h">pthread API for setting QoS on XNU</a></li>
<li><a href="https://learn.microsoft.com/en-us/windows/win32/procthread/quality-of-service">Windows’s QoS classes</a></li>
<li>
<details>
<summary>Full documentation of XNU QoS classes. This documentation is only available as a huge not-very-readable comment in a header file, so I’ve reformatted it and put it here for reference.</summary>
<ul>
<li><p><strong><code>QOS_CLASS_USER_INTERACTIVE</code>: A QOS class which indicates work performed by this thread is interactive with the user.</strong></p><p>Such work is requested to run at high priority relative to other work on the system. Specifying this QOS class is a request to run with nearly all available system CPU and I/O bandwidth even under contention. This is not an energy-efficient QOS class to use for large tasks. The use of this QOS class should be limited to critical interaction with the user such as handling events on the main event loop, view drawing, animation, etc.</p></li>
<li><p><strong><code>QOS_CLASS_USER_INITIATED</code>: A QOS class which indicates work performed by this thread was initiated by the user and that the user is likely waiting for the results.</strong></p><p>Such work is requested to run at a priority below critical user-interactive work, but relatively higher than other work on the system. This is not an energy-efficient QOS class to use for large tasks. Its use should be limited to operations of short enough duration that the user is unlikely to switch tasks while waiting for the results. Typical user-initiated work will have progress indicated by the display of placeholder content or modal user interface.</p></li>
<li><p><strong><code>QOS_CLASS_DEFAULT</code>: A default QOS class used by the system in cases where more specific QOS class information is not available.</strong></p><p>Such work is requested to run at a priority below critical user-interactive and user-initiated work, but relatively higher than utility and background tasks. Threads created by <code>pthread_create()</code> without an attribute specifying a QOS class will default to <code>QOS_CLASS_DEFAULT</code>. This QOS class value is not intended to be used as a work classification, it should only be set when propagating or restoring QOS class values provided by the system.</p></li>
<li><p><strong><code>QOS_CLASS_UTILITY</code>: A QOS class which indicates work performed by this thread may or may not be initiated by the user and that the user is unlikely to be immediately waiting for the results.</strong></p><p>Such work is requested to run at a priority below critical user-interactive and user-initiated work, but relatively higher than low-level system maintenance tasks. The use of this QOS class indicates the work should be run in an energy and thermally-efficient manner. The progress of utility work may or may not be indicated to the user, but the effect of such work is user-visible.</p></li>
<li><p><strong><code>QOS_CLASS_BACKGROUND</code>: A QOS class which indicates work performed by this thread was not initiated by the user and that the user may be unaware of the results.</strong></p><p>Such work is requested to run at a priority below other work. The use of this QOS class indicates the work should be run in the most energy and thermally-efficient manner.</p></li>
<li><p><strong><code>QOS_CLASS_UNSPECIFIED</code>: A QOS class value which indicates the absence or removal of QOS class information.</strong></p><p>As an API return value, may indicate that threads or pthread attributes were configured with legacy API incompatible or in conflict with the QOS class system.</p></li>
</ul>
</details>
</li>
</ul>
feat: Assist to replace generic with impl trait
This adds a new assist named "Replace named generic with impl". It is the inverse operation to the existing "Replace impl trait with generic" assist.
It allows refactoring the following statement:
```rust
// 👇 cursor
fn new<T$0: ToString>(input: T) -> Self {}
```
to be transformed into:
```rust
fn new(input: impl ToString) -> Self {}
```
* adds new helper function `impl_trait_type` to create AST node
* add method to remove an existing generic param type from param list
Closes #14626
This removes an existing generic param from the `GenericParamList`. It
also takes care of removing the extra colon & whitespace up to the
previous sibling.
* change order to get all param types first and mark them as mutable
before the first edit happens
* add helper function to remove a generic parameter
* fix test output
This adds a new assist named "replace named generic with impl" to move
the generic param type from the generic param list into the function
signature.
```rust
fn new<T: ToString>(input: T) -> Self {}
```
becomes
```rust
fn new(input: impl ToString) -> Self {}
```
The first step is to determine whether the assist can be applied: there
has to be a match between the generic trait param & function parameter types.
* replace function parameter type(s) with impl
* add new `impl_trait_type` function to generate the new trait bounds with `impl` keyword for use in the
function signature
fix: assists no longer break indentation
Fixes https://github.com/rust-lang/rust-analyzer/issues/14674
These are _ad hoc_ patches for a number of assists that can produce incorrectly indented code, namely:
- generate_derive
- add_missing_impl_members
- add_missing_default_members
Some general solution is required in future, as the same problem arises in many other assists, e.g.
- replace_derive_with...
- generate_default_from_enum...
- generate_default_from_new
- generate_delegate_methods
(the list is incomplete)
Fix: a TODO and some clippy fixes
- fix(todo): implement IntoIterator for ArenaMap<IDX, V>
- chore: remove unused method
- fix: remove useless `return`s
- fix: various clippy lints
- fix: simplify boolean test to a single negation
fix: introduce new type var when expectation for ref pat is not ref
Fixes #14840
When we infer the type of a ref pattern, its expected type may not be a reference type: 1) the expected type is an unresolved inference variable, or 2) the expected type is erroneously some other kind of type. In either case, we should produce a reference type with a new type variable rather than an error type, so that we can continue inferring the inner patterns without further errors because of the (possible) type mismatch of this pattern.
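An illustrative snippet (not the issue's reproducer) of case 1, where the expectation is still an unresolved inference variable when the ref pattern is checked:

```rust
fn main() {
    let f = |x| {
        // While the closure body is being inferred, the type of `x` is still
        // an unresolved inference variable, so the expected type of `ref y` is
        // not a reference type yet. Inference should still give `y` a reference
        // to a fresh type variable rather than an error type.
        let ref y = x;
        let _: &i32 = y;
        x
    };
    let _: i32 = f(1);
}
```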
fix: consider all tokens in macro expr when analyzing locals
Fixes #14687
2 fixes for `extract_function` assist (related closely enough that I squashed into one commit):
- Locals in macro expressions have been analyzed only when they are in the top-level token tree the macro call wraps. We should consider all descendant tokens.
- `self` in macro expressions haven't been analyzed.
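An illustrative snippet (hypothetical, not the issue's exact reproducer) for the first point:

```rust
macro_rules! m {
    ($e:expr) => { $e };
}

fn f() {
    let n = 1;
    // `n` lives in a nested token tree of the macro call (inside the inner
    // parentheses), not in the top-level token tree the call wraps, so it used
    // to be missed when extracting this expression into a function.
    let _ = m!((n + 1));
}
```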
Fix `preorder_expr` skipping the `else` block of let-else statements
Fixes exit/yield points not getting highlighted in such blocks for `highlight_related` (#14813; and possibly other bugs in features that use `preorder_expr`).
MIR episode 5
This PR inits drop support (it is very broken at this stage: some things are dropped multiple times, drop scopes are wrong, ...) and adds stdout support for interpreting (`println!` doesn't work since its expansion is dummy, but `stdout().write(b"hello world\n")` works if you use `RA_SYSROOT_HACK`). There is no useful unit test that it can interpret yet, but it is a good sign that it hasn't hit a major roadblock yet.
In MIR lowering, it adds support for slice patterns and anonymous const blocks, and some fixes so that we can evaluate `SmolStr::new_inline` in const eval. With these changes, 57 failing MIR bodies remain.
fix: Diagnose non-value return and break type mismatches
Could definitely deserve more polished diagnostics, but this at least brings the message across for now.
fix(analysis-stats): divided by zero error
## What does this PR try to resolve?
2023-05-15 rust-analyzer suffers from
```
thread 'main' panicked at 'attempt to divide by zero', crates/rust-analyzer/src/cli/analysis_stats.rs:230:56
```
This commit <51e8b8ff14> might be the culprit.
This PR uses the `percentage` function to avoid the classic “division by zero” bug.
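For illustration, a minimal sketch of the kind of guard this relies on (the actual helper in `analysis_stats.rs` may differ):

```rust
// Returns `n` as a percentage of `total`, treating an empty total as fully
// processed instead of dividing by zero.
fn percentage(n: u64, total: u64) -> u64 {
    if total == 0 {
        100
    } else {
        n * 100 / total
    }
}
```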
## Reproducer
```console
cargo new ra-test
pushd ra-test
echo "pub type Foo = u32;" >> src/lib.rs
rust-analyzer analysis-stats .
```
Support `#[macro_use(name, ...)]`
This PR adds support for another form of the `macro_use` attribute: `#[macro_use(name, ...)]` ([reference]).
Note that this form of the attribute is only applicable to extern crate decls, not to mod decls.
[reference]: https://doc.rust-lang.org/reference/macros-by-example.html#the-macro_use-attribute
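For illustration, a fixture in the style of r-a's tests (crate and macro names are made up):

```rust
//- /main.rs crate:main deps:lib
#[macro_use(foo)] // only `foo` is brought into scope; `bar` is not
extern crate lib;

fn main() {
    foo!();
    // bar!(); // would be an unresolved macro call
}

//- /lib.rs crate:lib
#[macro_export]
macro_rules! foo { () => {} }
#[macro_export]
macro_rules! bar { () => {} }
```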
Parse associated return type bounds
This PR implements parser support for associated return type bounds: `T: Foo<bar(): Send>`. This PR does not implement associated return types (`T::bar(): Send`) because it's not implemented even in rustc, and also removes `(..)`-style return type notation because it has been removed in rust-lang/rust#110203 (effectively reverting #14465).
I don't plan to proactively follow this unstable feature unless an RFC is accepted; my main motivation here is to remove the no-longer-valid `(..)` syntax from our parser, while nevertheless adding minimal parser support so anyone interested (as can be seen in #14465) can experiment with it without rust-analyzer reporting syntax errors.
Expand more single ident macro calls upon item collection
Addresses https://github.com/rust-lang/rust-analyzer/pull/14781#issuecomment-1546201022
I believe this (almost) brings the number of unresolved names back to pre-#14781:
|r-a version|`analysis-stats compiler/rustc` (rust-lang/rust@69fef92ab2) |
|---|---|
|pre-#14781 (b069eb720b) | exprs: 2747778, ??ty: 122236 (4%), ?ty: 107826 (3%), !ty: 728 |
| #14781 (a7944a93a1) | exprs: 2713080, ??ty: 139651 (5%), ?ty: 114444 (4%), !ty: 730 |
| with this fix | exprs: 2747871, ??ty: 122237 (4%), ?ty: 108171 (3%), !ty: 676 |
(I haven't investigated the increase in some numbers, but hopefully it's not too much of a problem)
This is only a temporary solution. The core problem is that we haven't fully implemented the textual scope of legacy macros. For example, we *have been* failing to resolve `foo` in the following snippet, even before #14781 or after this patch. As noted in a FIXME, we need a way to resolve names in textual scope without eager expansion during item collection.
```rust
//- /main.rs crate:main deps:lib
lib::mk_foo!();
const A: i32 = foo!();
//^^^^^^ unresolved-macro-call
//- /lib.rs crate:lib
#[macro_export]
macro_rules! mk_foo {
() => {
macro_rules! foo { () => { 42 } }
}
}
```
Introduce macro sub-namespaces and `macro_use` prelude
This PR implements two mechanisms needed for correct macro name resolution: macro sub-namespace and `macro_use` prelude.
- [macro sub-namespaces][subns-ref]
Macros have two sub-namespaces: one for function-like macros and the other for those used in attributes (including custom derive macros). When we're resolving a macro name for a function-like macro, we should ignore non-function-like macros, and vice versa.
This helps resolve single-segment macro names because we can (and should, as rustc does) fall back to names in preludes when the name in the current module scope is in a different sub-namespace.
- [`macro_use` prelude][prelude-ref]
`#[macro_use]`'d extern crate declarations (including the standard library) bring their macros into scope, but they should not be prioritized over local macros (those defined in place and those explicitly imported).
We have been bringing them into legacy (textual) macro scope, which has the highest precedence in name resolution. This PR introduces the `macro_use` prelude in crate-level `DefMap`s, whose precedence is lower than local macros but higher than the standard library prelude.
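For illustration, a minimal fixture (not from the PR's tests) sketching the intended precedence:

```rust
//- /main.rs crate:main deps:lib
#[macro_use]
extern crate lib;

macro_rules! m { () => { 0i32 } }

fn main() {
    // The locally defined `m!` must win; `lib`'s `m!` now sits in the
    // lower-precedence macro_use prelude instead of the textual macro scope.
    let _: i32 = m!();
}

//- /lib.rs crate:lib
#[macro_export]
macro_rules! m { () => { "from lib" } }
```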
The first 3 commits are drive-by fixes/refactors.
Fixes #8828 (prelude)
Fixes #12505 (prelude)
Fixes #12734 (prelude)
Fixes #13683 (prelude)
Fixes #13821 (prelude)
Fixes #13974 (prelude)
Fixes #14254 (namespace)
[subns-ref]: https://doc.rust-lang.org/reference/names/namespaces.html#sub-namespaces
[prelude-ref]: https://doc.rust-lang.org/reference/names/preludes.html#macro_use-prelude
We've already removed non-sysroot proc macro server, which effectively
removed support for Rust <1.64.0, so this removal of fallback path
shouldn't be a problem at this point.
This function/lang_item was introduced in #104321 as a temporary workaround of future lowering.
The usage and need for it went away in #104833.
After a bootstrap update, the function itself can be removed from `std`.
Make line-index a lib, use nohash_hasher
These seem like they are not specific to rust-analyzer and could be pulled out to their own libraries. So I did.
https://github.com/azdavis/millet/issues/31
Provide links to locally built documentation for `experimental/externalDocs`
This pull request addresses issue #12867, which requested the ability to provide links to locally built documentation when using the "Open docs for symbol" feature. Previously, rust-analyzer always used docs.rs for this purpose. With these changes, the feature will provide both web (docs.rs) and local documentation links without verifying their existence.
Changes in this PR:
- Added support for local documentation links alongside web documentation links.
- Added `target_dir` path argument for external_docs and other related methods.
- Added `sysroot` argument for external_docs.
- Added `target_directory` path to `CargoWorkspace`.
API Changes:
- Added an experimental client capability `{ "localDocs": boolean }`. If this capability is set, the `Open External Documentation` request returned from the server will include both web and local documentation links in the `ExternalDocsResponse` object.
Here's the `ExternalDocsResponse` interface:
```typescript
interface ExternalDocsResponse {
web?: string;
local?: string;
}
```
By providing links to both web-based and locally built documentation, this update improves the developer experience for those using different versions of crates, git dependencies, or local crates not available on docs.rs. Rust-analyzer will now provide both web (docs.rs) and local documentation links, leaving it to the client to open the desired link. Please note that this update does not perform any checks to ensure the validity of the provided links.
Refactor symbol index
Closes https://github.com/rust-lang/rust-analyzer/issues/14677
Instead of eagerly fetching the source data in the symbol index, we now do it lazily. This shouldn't make it much more expensive, as we had to parse the source most of the time anyway, even after fetching.
fix: ide: exclude sized in go-to actions in hover
Fixes #13163
I opted to simply omit `Sized` entirely from go-to actions, as opposed to including it even if someone writes an explicit `T: Sized`, as I think a go-to on `Sized` is of dubious practical value.
feat: Highlight closure captures when cursor is on pipe or move keyword
This runs into the same issue on vscode as exit points for `->`, where highlights are only triggered on identifiers, https://github.com/rust-lang/rust-analyzer/issues/9395
Though putting the cursor on `move` should at least work.
chore: rust-analyzer: refactor notification handlers
Fixes the FIXME in `on_notification`.
```rust
// FIXME: Move these implementations out into a module similar to on_request
```
No code has changed, this just moves stuff around.
More core::fmt::rt cleanup.
- Removes the `V1` suffix from the `Argument` and `Flag` types.
- Moves more of the format_args lang items into the `core::fmt::rt` module. (The only remaining lang item in `core::fmt` is `Arguments` itself, which is a public type.)
Part of https://github.com/rust-lang/rust/issues/99012
Follow-up to https://github.com/rust-lang/rust/pull/110616
Deduplicate crates when extending crate graphs
This is quadratic in runtime per deduplication attempt, but I don't think that'll be a problem for the workload here. Don't be scared of the diff; the actual change is +42 -22, the rest is tests and test data.
Fixes https://github.com/rust-lang/rust-analyzer/issues/14476
Handle dev-dependency cycles
cc https://github.com/rust-lang/rust-analyzer/issues/14167
This should mostly fix cycle errors (it fixes the one on rome/tools at least, but not on rustc, though there it might just be because the rustc workspace is the rustc workspace). Unfortunately this will effectively duplicate all crates currently, since if we want to be completely correct we'd need to set the test cfg for all dev dependencies and their transitive dependencies, something I worry we should try to avoid.
fix: Fix pat fragment handling in 2021 edition
Fixes https://github.com/rust-lang/rust-analyzer/issues/9055
The fix isn't that great, but we are kind of forced to do it the quick and hacky way right now, since std has changed the `matches` macro to make use of this. And for a proper fix we need to track hygiene for identifiers, which is a long way off anyway.
Register obligations during path inference
Fixes #14635
When we infer path expressions that resolve to some generic item, we need to consider their generic bounds. For example, when we resolve a path `Into::into` to `fn into<?0, ?1>` (note that `?0` is the self type of trait ref), we should register an obligation `?0: Into<?1>` or else their relationship would be lost.
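For illustration (this is not the issue's reproducer), the kind of path resolution this affects:

```rust
fn main() {
    // Resolving `Into::into` yields `fn into<?0, ?1>`; the obligation
    // `?0: Into<?1>` must be registered so the trait relationship between
    // the argument and the annotated result type isn't lost.
    let convert = Into::into;
    let n: u64 = convert(1u32);
    assert_eq!(n, 1);
}
```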
Relevant part in rustc is [`add_required_obligations_with_code()`] that's called in [`instantiate_value_path()`].
[`instantiate_value_path()`]: 3462f79e94/compiler/rustc_hir_typeck/src/fn_ctxt/_impl.rs (L1052)
[`add_required_obligations_with_code()`]: 3462f79e94/compiler/rustc_hir_typeck/src/fn_ctxt/_impl.rs (L1411)
doc(alias)-based completion round 2
Follow-up on #14433
We can now complete fields, functions and some use/mods.
Flyimports don't behave; I don't really have the time to understand the structure there either.
While reading the flyimport code, I removed one method only used there; the closure-tree was a bit confusing. I can revert that if you want.
Report allocation errors as panics
OOM is now reported as a panic but with a custom payload type (`AllocErrorPanicPayload`) which holds the layout that was passed to `handle_alloc_error`.
This should be reviewed one commit at a time:
- The first commit adds `AllocErrorPanicPayload` and changes allocation errors to always be reported as panics.
- The second commit removes `#[alloc_error_handler]` and the `alloc_error_hook` API.
ACP: https://github.com/rust-lang/libs-team/issues/192
Closes #51540
Closes #51245
Add syntax::make::ty_alias
Until now there was no function that returns a `TypeAlias`. This commit introduces a function that is fully compliant with the Rust Reference. I had problems working with `Ident`, so for now the function uses simple string manipulation until `ast_from_text` is called. I am however open to any ideas that could replace the `ident` param in such a way that it accepts `syntax::ast::Ident`.
fix: Resolve `$crate` in derive paths
Paths in a derive meta item list may be any kind of path, including those that start with `$crate` generated by macros. We need to take hygiene into account when we lower paths in the list.
This issue was identified while investigating #14607, though this patch doesn't fix the broken trait resolution.
Simple fix for make::impl_trait
This is my first PR in this project. I made this PR because I needed this function to work properly for the main PR I am working on (#14386). This is a small amendment to what it was before. We still need to improve this in order for it to fully comply with its syntactic definition as stated [here](https://doc.rust-lang.org/reference/items/implementations.html).
Added byte position range for `proc_macro::Span`
Currently, the [`Debug`](https://doc.rust-lang.org/beta/proc_macro/struct.Span.html#impl-Debug-for-Span) implementation for [`proc_macro::Span`](https://doc.rust-lang.org/beta/proc_macro/struct.Span.html#) calls the debug function implemented in the trait implementation of `server::Span` for the type `Rustc` in the `rustc-expand` crate.
The current implementation of the referenced function looks something like this:
```rust
fn debug(&mut self, span: Self::Span) -> String {
    if self.ecx.ecfg.span_debug {
        format!("{:?}", span)
    } else {
        format!("{:?} bytes({}..{})", span.ctxt(), span.lo().0, span.hi().0)
    }
}
```
It returns the byte position of the [`Span`](https://doc.rust-lang.org/beta/proc_macro/struct.Span.html#) as an interpolated string.
Because this is currently the only way to get a span's position in the file, it might lead someone who is interested in this information to parse this interpolated string back into a range of bytes, which I think is a very non-rusty way.
The proposed `position()` method, implemented in this PR, gives the ability to get this info directly.
It returns a [`std::ops::Range`](https://doc.rust-lang.org/std/ops/struct.Range.html#) wrapping the lowest and highest byte of the [`Span`](https://doc.rust-lang.org/beta/proc_macro/struct.Span.html#).
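A hedged usage sketch, assuming the method shape described above (`Span::position()` returning `Range<usize>`); the final name or signature may differ:

```rust
#![feature(proc_macro_span)]
extern crate proc_macro;
use proc_macro::TokenStream;

#[proc_macro]
pub fn show_byte_positions(input: TokenStream) -> TokenStream {
    for tree in input {
        // `position()` yields the span's byte offsets as a `Range<usize>`
        // directly, instead of parsing them out of the `Debug` output.
        let bytes: std::ops::Range<usize> = tree.span().position();
        eprintln!("token at bytes {}..{}", bytes.start, bytes.end);
    }
    TokenStream::new()
}
```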
I put it behind the `proc_macro_span` feature flag because many of the other functions that have a similar footprint are also annotated with it; I don't actually know if this is right.
It would be great if somebody could take a look at this, thank you very much in advance.
Detect sysroot dependencies using symlink copy
cc #7637
It is currently in a proof of concept stage, and it doesn't generate a copy. You need to provide your own sysroot copy in `/tmp/ra-sysroot-hack` in a way that `/tmp/ra-sysroot-hack/library/std/lib.rs` exists and `/tmp/ra-sysroot-hack/Cargo.toml` is [the one from this comment](https://github.com/rust-lang/rust-analyzer/issues/7637#issuecomment-1495008329). I will add the symlink code if we decide that this approach is not a dead end.
It seems to somehow work on my system. Go to definition into std dependencies works, type checking can look through fields if I make them public and `cfg_if` appears to work (I tested it by hovering both sides and seeing that the correct one is enabled). Though finding layout of `HashMap` didn't work.
Please try it and let me know if I should go forward in this direction or not.
Restrict "sort items" assist for traits & impls
This restricts the "sort items alphabetically" assist when the selection is inside an `Impl` or `Trait` node and intersects with one of the associated items.
It re-orders the conditional checks of AST nodes in the `sort_items` function to check for more specific nodes first before checking `Trait` or `Impl` nodes. The `AssistContext` is passed into the `add_sort_methods_assist` function to check if the selection intersects with any inner items, e.g. an associated const, type alias, or function. In that case the assist does not apply.
Fixes: #14516
This fixes the applicability of the "sort items alphabetically" assist
when the selection is inside a `Trait` or `Impl`. It now tests whether the
selection is inside or overlaps with an inner node, e.g. an associated
const, type alias, or function.
internal: Report macro definition errors on the definition
We still report them on the call site as well for the time being, and the diagnostic doesn't know where the error in the definition comes from, but that can be done later on.
Fix explicit deref problems in closure capture
Fixes the `need-mut` part of #14562.
Perhaps surprisingly, it wasn't a unique immutable borrow. The code still doesn't emit any of them, and I think those won't happen in edition 2021 (which is currently the only thing implemented), since we always capture `&mut *x` instead of `&mut x`. But I'm not very sure about it.
internal: Warn when loading sysroot fails to find the core library
Should help a bit more with user experience; before, we only logged this, now we show it in the status.
Closes https://github.com/rust-lang/rust-analyzer/issues/11606
Don't suggest unstable items on stable toolchain
Closes #3020
This PR implements stability check in `ide-completion` so that unstable items are only suggested if you're on nightly toolchain.
It's a bit unfortunate `CompletionContext::check_stability()` is spammed all over the crate, but we should call it before building `CompletionItem` as you cannot get attributes on the item it's completing from that struct. I looked up every callsite of `Builder::add_to()`, `Completions::add[_opt]()`, and `Completions::add_all()` and inserted the check wherever necessary.
The tests are admittedly incomplete in that I didn't add tests for every kind of item as I thought that would be too big and not worthwhile. I copy-pasted some existing basic tests in every test module and adjusted them.
Drop support for non-sysroot proc macro ABIs
This makes some bigger changes to how we handle the proc-macro-srv things. For one, it is now an empty crate if built without the `sysroot-abi` feature; this simplifies some things, dropping the need to put the feature cfg in various places. The CLI wrapper now actually depends on the server, instead of being part of the server that is just exported; that way we can have a true dummy server that just errors on each request if no sysroot support was specified.
minor: Fix some simple FIXMEs
Each FIXME fix has been split into its own commit, since they're all pretty independent changes.
(Forgot to open a PR for this a few days ago, oops)
internal: Implement Structured API for snippets
Fixes #11638 (including moving the cursor before the generated type parameter)
Adds `add_tabstop_{before,after}` for inserting tabstop snippets before & after nodes, and `add_placeholder_snippet` for wrapping nodes inside placeholder nodes.
Currently, the snippets are inserted into the syntax tree in `SourceChange::commit` so that snippet bits won't interfere with syntax lookups before completing a `SourceChange`.
It would be preferable if snippet rendering was deferred to afterwards so that rendering can work directly with text ranges, but I have left that for a future PR (it would also make it easier to finely specify which text edits have snippets in them).
Another possible snippet variation to support would be a group of placeholders (i.e. placeholders with the same tabstop number) so that a generated item and its uses can be renamed right as it's generated, which is something that is technically supported by the current snippet hack in VSCode, though it's not clear if that's a thing that is officially supported.
Add doc-alias based completion
Closes #14406.
I adapted the parsing code from the CfgExpr parsing code; maybe there's a better abstraction for both, or for attribute parsing in general. It also includes `doc(hidden)` parsing, which means it could replace the other function.
There are a few tests for parsing.
`process_all_names` changed the most, I added some docs there to explain what happens.
Many call sites just pass an empty vec to `add_path_resolution`'s `doc_aliases`, since either it doesn't make sense to pass anything (e.g. visibility completion) or I don't know where to get them from. Shouldn't really matter, as it will just not show aliases if the vec is empty, and we can extend alias completion in these cases later.
I added two tests in `special.rs` for struct name completion (which was the main thing I wanted). I also tried function and field names, but these don't work yet. I want to add those in a follow-up PR.
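For reference, a minimal example of the attribute this completion keys off of (the names are made up):

```rust
// Typing `Blob` at a path position can now complete to `BinaryObject`,
// because the doc alias is taken into account when matching completion input.
#[doc(alias = "Blob")]
struct BinaryObject {
    data: Vec<u8>,
}
```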
Normalize associated types in paths in expressions
Part of #14393
When we resolve paths in expressions (either path expressions or paths in struct expressions), there's a need for projection normalization, which `TyLoweringContext` cannot do on its own. We've been properly applying normalization for paths in struct expressions without a type anchor, but not for others:
```rust
enum E {
    S { v: i32 },
    Empty,
}

impl Foo for Bar {
    type Assoc = E;
    fn foo() {
        let _ = Self::Assoc::S { v: 42 };   // path in struct expr without type anchor; we already support this
        let _ = <Self>::Assoc::S { v: 42 }; // path in struct expr with type anchor; resolves with this PR
        let _ = Self::Assoc::Empty;         // path expr; resolves with this PR
    }
}
```
With this PR we correctly resolve the whole path, but we need some more tweaks in HIR and/or IDE layers to properly resolve a qualifier (prefix) of such paths and provide IDE features that are pointed out in #14393 to be currently broken.
Limited syntax support for return type notations (RTN)
Experimental RTN bound support was recently merged into rustc (https://github.com/rust-lang/rust/issues/109417), the goal of this PR is to allow experimentation without syntax errors everywhere.
The parsing implemented currently aligns with the state of the tracking issue; it only supports the form `T<foo(..): Bounds>`. The parser always checks for the presence of `..` to disambiguate from `Fn*()` types. This is not ideal, but I didn't want to spend too much time on it as it is an experimental feature.
internal: Add config to specify lru capacities for all queries
Might help figuring out what queries should be limited by LRU by default, as currently we only limit `parse`, `parse_macro_expansion` and `macro_expand`.
fix: allow new, subsequent `rust-project.json`-based workspaces to get proc macro expansion
As detailed in https://github.com/rust-lang/rust-analyzer/issues/14417#issuecomment-1485336174, `rust-project.json` workspaces added after the initial `rust-project.json`-based workspace was already indexed by rust-analyzer would not receive procedural macro expansion despite `config.expand_proc_macros` returning true. To fix this issue:
1. I changed `reload.rs` to check which workspaces are newly added.
2. Spawned new procedural macro expansion servers based on the _new_ workspaces.
1. This is to prevent spawning duplicate procedural macro expansion servers for already existing workspaces. While the overall memory usage of duplicate procedural macro servers is minimal, this is more about the _principle_ of not leaking processes 😅.
3. Launched procedural macro expansion if any workspaces are `rust-project.json`-based _or_ `same_workspaces` is true. `same_workspaces` being true (and reachable) indicates that build scripts have finished building (in Cargo-based projects), while the build scripts in `rust-project.json`-based projects have _already been built_ by the build system that produced the `rust-project.json`.
I couldn't really think of structuring this code in a better way without engaging with https://github.com/rust-lang/rust-analyzer/issues/7444.
fix: Properly handle local trait impls
Before, we only handled trait impls that came from the block of either the trait or the target type. We now handle them correctly by tracking the block we are currently inferring from, then walking it up to collect all block trait impls.
internal: Only intern blocks that declare items
We only used `BlockId` for the block defmap, so this is wasted memory. Lowering for non-item-declaring blocks is also cheaper now, as we no longer have to fully lower a block that defines no items.
Remove client side proc-macro version check
The server already verifies versions due to ABI picking now, so there shouldn't be a need for the client-side check anymore.
internal: Coalesce adjacent Indels
Originally part of working on a structured snippet API (since sometimes the `$` bit of snippets would be broken off and would lead to it not being recognized), though since this is a pretty separate change, I thought it would make sense to put it into its own PR.
The implementation is relatively straightforward and not overly optimized, though it's pretty low-hanging fruit to optimize it when the need arises.
MIR episode 3
This PR adds lowering for the try operator and overloaded dereference, and adds evaluating support for function pointers and trait objects. It also adds a flag to `analysis-stats` to show the percentage of functions for which it fails to emit MIR, which is currently `20%` (which is somewhat misleading, as most of the supported `80%` are tests). The biggest offenders are closures (1975 items) and overloaded index (415 items). I will try to add overloaded index before Monday to have it in this PR, and tackle closures in the next episode.
feat: show only missing variant suggestion for enums in patterns completion and bump them in list too
Fixes #12438
### Points to help in review:
- This PR can be reviewed commit-wise: the first commit bumps enum variant completions up in the list of completions, and the second commit is about only showing enum variants that are missing
- I am calculating missing variants in analysis.rs by first locating the enum and then comparing each of its variants' names, checking whether the arm string already contains that name. This is kinda hacky, but I didn't want to implement the complete missing_arms assist here as that would have been too bulky to run on each completion cycle (if we can improve this somehow, I would appreciate some input on it)
### Output:
https://user-images.githubusercontent.com/49019259/208245540-57d7321b-b275-477e-bef0-b3a1ff8b7040.mov
Relevant Zulip Discussion: https://rust-lang.zulipchat.com/#narrow/stream/185405-t-compiler.2Frust-analyzer/topic/Issue.20.2312438
fix: Do not retry inlay hint requests
Should close https://github.com/rust-lang/rust-analyzer/issues/13372. Retrying the way it's currently implemented is not ideal, as we do not adjust offsets in the requests, but doing that is a major PITA, so this should at least work around one of the more annoying issues stemming from it.
fix: don't replace `SyntaxToken` with `SyntaxNode`
Fixes #14339
When we inline method calls, we replace the `self` parameter with a local variable `this`. We have been replacing the `self` **tokens** with `NameRef` **nodes**, which makes the AST malformed. This leads to a crash when we apply the path transformation after the replacement (which only takes place when the method is generic, a scenario that was not tested).
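A rough before/after sketch of the replacement described above (the assist's exact output may differ):

```rust
struct S(i32);

impl S {
    fn get(&self) -> i32 {
        self.0
    }
}

fn f(s: S) -> i32 {
    // Before inlining:
    //     s.get()
    // After inlining, the `self` parameter becomes a local `this`, and the
    // `self` tokens in the body are replaced by `this` name references:
    {
        let this = &s;
        this.0
    }
}
```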
Add Cargo-style project discovery for Buck and Bazel Users
This feature requires the user to add a command that generates a `rust-project.json` from a set of files. Project discovery can be invoked in two ways:
1. At extension activation time, which includes the generated `rust-project.json` as part of the linkedProjects argument in `InitializeParams`.
2. Through a new command titled "rust-analyzer: Add current file to workspace", which makes use of a new, rust-analyzer-specific LSP request that adds the workspace without erasing any existing workspaces. Note that there is no mechanism to _remove_ workspaces other than "quit the rust-analyzer server".
Few notes:
- I think that the command-running functionality _could_ merit being placed into its own extension (and expose it via extension contribution points) to provide build-system idiomatic progress reporting and status handling, but I haven't (yet) made an extension that does this nor does Buck expose this sort of functionality.
- This approach would _just work_ for Bazel. I'll try and get the tool that's responsible for Buck integration open-sourced soon.
- On the testing side of things, I've used this in around my employer's Buck-powered monorepo and it's a nice experience. That being said, I can't think of an open-source repository where this can be tested in public, so you might need to trust me on this one.
I'd love to get feedback on:
- Naming of LSP extensions/new commands. I'm not too pleased with how "rust-analyzer: Add current file to workspace" is named, in that it's creating a _new_ workspace. I think that this command being added should be gated on `rust-analyzer.discoverProjectCommand` being set, so I can add this in subsequent commits.
- My Typescript. It's not particularly good.
- Suggestions on handling folders with _both_ Cargo and non-Cargo build systems, and whether I should make activation a bit better.
(I previously tried to add this functionality entirely within rust-analyzer-the-LSP server itself, but matklad was right—an extension side approach is much, much easier.)
internal: add `as_slice` to `hir::Type`
~`remove_slice`~ `as_slice` is the same as `remove_ref` but for slices.
Though there is `as_array`, which I believe was named that because it also gets the length of the array. I am still shaky on the names, feel free to suggest corrections.
feat: add `is_float` & `is_char` to `hir::Type`
Some useful functions we didn't have on `Type` (they were present on `BuiltinType`).
Also, I am considering exposing `TyKind` via `get_kind`; let me know if that's a better idea than implementing these API extensions incrementally.
Add path of workspace root folders to status output
Hi folks! Just a quick addition to the status output. There are some colleagues of mine who use a mix of Buck and Cargo. A person spent a bit of time this past week trying to figure out where the `rust-project.json` was coming from and pointed out that `rust-analyzer: Status` could be a good place to put this information. rust-analyzer doesn't seem to record the full path of the `Cargo.toml` or the `rust-project.json`, just the root directory. While not perfect, this should be enough for people to unblock themselves on. Here's an example of `rust-analyzer: Status` on the rust-analyzer repo:
```
Workspaces:
Loaded 192 packages across 1 workspace.
Workspace roots: [AbsPath("/Users/dbarsky/Developer/rust-analyzer")]
Analysis:
57mb of files
0b of index symbols (0)
2514 trees, 128 preserved
29535 trees, 128 preserved (Macros)
0b in total
File info:
Crate: rust_analyzer(CrateId(131))
Dependencies: proc_macro=CrateId(5), core=CrateId(2), alloc=CrateId(0), std=CrateId(7), test=CrateId(9), always_assert=CrateId(12), anyhow=CrateId(13), cfg=CrateId(25), crossbeam_channel=CrateId(35), dissimilar=CrateId(41), expect_test=CrateId(46), flycheck=CrateId(50), hir=CrateId(56), hir_def=CrateId(57), hir_ty=CrateId(59), ide=CrateId(63), ide_db=CrateId(66), ide_ssr=CrateId(68), itertools=CrateId(73), jod_thread=CrateId(75), lsp_server=CrateId(83), lsp_types=CrateId(85), mbe=CrateId(87), num_cpus=CrateId(96), oorandom=CrateId(99), parking_lot=CrateId(102), proc_macro_api=CrateId(110), proc_macro_srv=CrateId(111), profile=CrateId(118), project_model=CrateId(119), rayon=CrateId(125), rustc_hash=CrateId(136), scip=CrateId(141), serde=CrateId(145), serde_json=CrateId(147), sourcegen=CrateId(153), stdx=CrateId(155), syntax=CrateId(158), test_utils=CrateId(159), threadpool=CrateId(165), toolchain=CrateId(170), tracing=CrateId(171), tracing_log=CrateId(174), tracing_subscriber=CrateId(175), tracing_tree=CrateId(176), tt=CrateId(177), vfs=CrateId(188), vfs_notify=CrateId(189), xflags=CrateId(192), xshell=CrateId(194)
```
This feature requires the user to add a command that generates a
`rust-project.json` from a set of files. Project discovery can be invoked
in two ways:
1. At extension activation time, which includes the generated
`rust-project.json` as part of the linkedProjects argument in
InitializeParams
2. Through a new command titled "Add current file to workspace", which
makes use of a new, rust-analyzer specific LSP request that adds
the workspace without erasing any existing workspaces.
I think that the command-running functionality _could_ merit being
placed into its own extension (and expose it via extension contribution
points), if only to provide build-system idiomatic progress reporting and
status handling, but I haven't (yet) made an extension that does this.
internal: Rename `hir::diagnostics::MissingMatchArms.match_expr` field
`hir::diagnostics::MissingMatchArms.match_expr` had a confusing name: it points to the scrutinee expression. Renamed it to `scrutinee_expr` and used a better-fitting type for it.
Also small refactorings/cleanup.
fix: Watch both stdout and stderr in flycheck
Fixes#14217
This isn't great because it un-mixes the messages from the two streams, but maybe it's not such a big problem?
Load proc-macros for rustc_private crates
If the client supports our server status notification, there is no need to show the pop-up for workspace fetching failures, since that's already going to be shown in the status.
cc https://github.com/rust-lang/rust-analyzer/issues/14193
fix: show diagnostic for } token followed by else in let else statement
Fixes #14221
My thinking is to check if the `expr` after `=` is block-like when parsing `let ... else`, and if so, emit an error.
MIR episode 2
This PR adds:
1. `need-mut` and `unused-mut` diagnostics
2. `View mir` command which shows MIR for the body under cursor, useful for debugging
3. MIR lowering for or-patterns and for-loops
internal: Re-use the resolver in `InferenceContext` instead of rebuilding it whenever needed
This reduced inference time on my local build by roughly ~1 sec (out of like 60)
internal: Handle fields called as method calls as the fields they resolve to
Confusing PR title tbf, but this makes it so that `bar` in `foo.bar()` resolves to the field if it exists and no method with the same name exists. Improves UX slightly when incorrectly calling a field.
rust-analyzer used the token at the cursor after macro expansion
to decide whether to replace the token at the cursor before macro
expansion. In most cases these two are the same but in some cases these
can mismatch which can lead to incorrect replacements.
For example, if an ident/expr macro argument is missing, rust-analyzer
generates a "missing" identifier as a placeholder, even though there is
only a brace at the cursor. Therefore, rust-analyzer would incorrectly
replace the macro brace with the completion in that case, leading to #14246.
Using the expanded token type was intentional. However, this doesn't
seem to ever be desirable (this is supported by the fact that there
were no tests that relied on this behavior) since the type of edit to
perform should always be determined by the token it's actually applied
to.
Handle trait alias definitions
Part of #2773
This PR adds a bunch of structs and enum variants for trait aliases. Trait aliases should be handled as an independent item because they are semantically distinct from traits.
I basically started by adding `TraitAlias{Id, Loc}` to `hir_def::item_tree` and iterated adding the necessary pieces until the compiler stopped complaining about what's missing. Let me know if there's still anything I need to add.
I'm opening up this PR for early review and stuff. I'm planning to add tests for IDE functionalities in this PR, but not type-related support, for which I put FIXME notes.
Fix associated item visibility in block-local impls
Fixes#14046
When we're resolving visibility of block-local items...
> `self` normally refers to the containing non-block module, and `super` to its parent (etc.). However, visibilities must only refer to a module in the DefMap they're written in, so we restrict them when that happens. ([link])
...unless we're resolving visibility of associated items in block-local impls, because that impl is semantically "hoisted" to the nearest (non-block) module. With this PR, we skip the adjustment for such items.
Since visibility representation of those items is modified, this PR also adjusts visibility rendering in `HirDisplay`.
[link]: a6603fc21d/crates/hir-def/src/nameres/path_resolution.rs (L101-L103)
Fix: Run doctests for structs with lifetime parameters from IDE
Fixes#14142: Doctests can't be triggered for structs with lifetimes
This MR adds lifetime parameters to the struct's path for runnables so that they can be triggered from an IDE as well.
This is my first MR for rust-analyzer; please let me know if I should change something, either in the code or in the description here.
Beginning of MIR
This pull request introduces the initial implementation of MIR lowering and interpreting in Rust Analyzer.
The implementation of MIR has potential to bring several benefits:
- Executing a unit test without compiling it: This is my main goal. It can be useful for quickly testing code changes and print-debugging unit tests without the need for a full compilation (ideally in almost zero time, similar to languages like Python and JS). There is a chance that this goes nowhere: it might become slower than rustc, it might need an unreasonable amount of memory, or we may fail to support a common pattern/function that makes it unusable for most code.
- Constant evaluation: MIR allows for easier and more correct constant evaluation, on par with rustc. If r-a wants to fully support the type system, it needs full const eval, which means arbitrary code execution, which needs MIR or something similar.
- Supporting more diagnostics: MIR can be used to detect errors, most famously borrow checker and lifetime errors, but also mutability errors and uninitialized variables, which can be difficult/impossible to detect in HIR.
- Lowering closures: With MIR we can find out closure capture modes, which is useful in detecting if a closure implements the `FnMut` or `Fn` traits, and calculating its size and data layout.
But the current PR implements no diagnostics and doesn't support closures. About const eval, I removed the old const eval code and it now uses the MIR interpreter. Everything that is supported in stable rustc is either implemented or is super easy to implement. About interpreting unit tests, I added an experimental config, disabled by default, that shows a `pass` or `fail` on hover of unit tests (ideally it should be a button similar to the `Run test` button, but I haven't figured out how to add one). Currently, no real-world test works, due to missing features including closures, heap allocation, `dyn Trait` and ..., so at this point it is only useful for me to select what to implement next.
The implementation of MIR is based on the design of rustc, the data structures are almost copy paste (so it should be easy to migrate it to a possible future stable-mir), but the lowering and interpreting code is from me.
add: clean api to get `raw_ptr` type
There doesn't seem to be an API to fetch the type of `raw_ptr`, which is helpful for a project I work on.
Notes:
- I am unsure about the function name, do let me know if I should use something else.
- Also unsure about where to add tests for hir changes. Will fix it as needed.
fix: add a case in which `remaining` is `None` when resolving types while resolving an HIR path
Fixes #14030: The variable type is being determined incorrectly
This PR fixes a problem in which `go to definition` jumps to the incorrect position because it fails to resolve the type when it is defined in a module while resolving HIR.
In addition, I added a test for this issue and refactored the related code.
This is my first PR and I am using a translation tool to write this text. Let me know if you have any problems.
add openDocs command to context menu in VS Code extension
This adds the `openDocs` command to the VS Code context menu. I believe there are probably many users who are unaware of this command existing in the rust-analyzer extension, and that this should enhance the discoverability of the command. Additionally, even if people are aware of this capability, it's helpful to have this in the context menu anyway; for example, one might forget the name of the command, or the keybinding they have assigned to it. I think that opening docs is a common enough action to warrant the extra line added to the context menu.
This makes a few other small changes as well. There are two minor style changes to increase style consistency. First, it changes the titles of the two commands that the rust analyzer extension will contribute to the context menu to title case. All standard VS Code commands that appear in the context menu are in title case. Second, it shortens the title of the `openDocs` command from `Open docs under cursor` to `Open Docs`. The implicit assumption in the standard VS Code context menu command titles is that the action applies to the symbol under the cursor: `Go to Definition`, `Find All References`, etc. Note that since these are changes to the command titles, rather than the command names themselves, these changes will not break any users' existing keybindings for these commands.
Second, this adds further restrictions to the `where` clauses of the two commands that the rust analyzer extension will contribute to the context menu, so that the two commands will appear in the context menu only when in a Rust project **and** within a Rust file. Say you have a Python or bash script inside your Rust project. Having these commands appear in the context menu when you right click a symbol in such a non-Rust file is extraneous and potentially confusing.
![demonstration](https://user-images.githubusercontent.com/6609145/219976062-b46ab21b-5753-48f5-a1da-562566cae71c.gif)
(This is a large commit. The changes to
`compiler/rustc_middle/src/ty/context.rs` are the most important ones.)
The current naming scheme is a mess, with a mix of `_intern_`, `intern_`
and `mk_` prefixes, with little consistency. In particular, in many
cases it's easy to use an iterator interner when a (preferable) slice
interner is available.
The guiding principles of the new naming system:
- No `_intern_` prefixes.
- The `intern_` prefix is for internal operations.
- The `mk_` prefix is for external operations.
- For cases where there is a slice interner and an iterator interner,
the former is `mk_foo` and the latter is `mk_foo_from_iter`.
Also, `slice_interners!` and `direct_interners!` can now be `pub` or
non-`pub`, which helps enforce the internal/external operations
division.
It's not perfect, but I think it's a clear improvement.
The following lists show everything that was renamed.
slice_interners
- const_list
- mk_const_list -> mk_const_list_from_iter
- intern_const_list -> mk_const_list
- substs
- mk_substs -> mk_substs_from_iter
- intern_substs -> mk_substs
- check_substs -> check_and_mk_substs (this is a weird one)
- canonical_var_infos
- intern_canonical_var_infos -> mk_canonical_var_infos
- poly_existential_predicates
- mk_poly_existential_predicates -> mk_poly_existential_predicates_from_iter
- intern_poly_existential_predicates -> mk_poly_existential_predicates
- _intern_poly_existential_predicates -> intern_poly_existential_predicates
- predicates
- mk_predicates -> mk_predicates_from_iter
- intern_predicates -> mk_predicates
- _intern_predicates -> intern_predicates
- projs
- intern_projs -> mk_projs
- place_elems
- mk_place_elems -> mk_place_elems_from_iter
- intern_place_elems -> mk_place_elems
- bound_variable_kinds
- mk_bound_variable_kinds -> mk_bound_variable_kinds_from_iter
- intern_bound_variable_kinds -> mk_bound_variable_kinds
direct_interners
- region
- intern_region (unchanged)
- const
- mk_const_internal -> intern_const
- const_allocation
- intern_const_alloc -> mk_const_alloc
- layout
- intern_layout -> mk_layout
- adt_def
- intern_adt_def -> mk_adt_def_from_data (unusual case, hard to avoid)
- alloc_adt_def(!) -> mk_adt_def
- external_constraints
- intern_external_constraints -> mk_external_constraints
Other
- type_list
- mk_type_list -> mk_type_list_from_iter
- intern_type_list -> mk_type_list
- tup
- mk_tup -> mk_tup_from_iter
- intern_tup -> mk_tup
fix: Search raw identifiers without prefix
When we find references/usages of a raw identifier, we should disregard the `r#` prefix because there are keywords one can use without the prefix in earlier editions (see #13034; this bug is actually fallout from that PR). `name`, the text we're searching for, has already been stripped of the prefix, but the text of the nodes we compare it to hasn't been.
The second commit is strictly refactoring, I can remove it if it's not much of value.
fix: Don't expand macros in the same expansion tree after overflow
This patch fixes 2 bugs:
- In `Expander::enter_expand_id()` (and in code paths it's called), we never check whether we've reached the recursion limit. Although it hasn't been reported as far as I'm aware, this may cause hangs or stack overflows if some malformed attribute macro is used on associated items.
- We keep expansion even when recursion limit is reached. Take the following for example:
```rust
macro_rules! foo { () => {{ foo!(); foo!(); }} }
fn main() { foo!(); }
```
We keep expanding the first `foo!()` in each expansion and would reach the limit at some point, *after which* we would try expanding the second `foo!()` in each expansion until it hits the limit again. This will (by default) lead to ~2^128 expansions.
This is essentially what's happening in #14074. Unlike rustc, we don't just stop expanding macros when we fail as long as it produces some tokens so that we can provide completions and other services in incomplete macro calls.
This patch provides a method that takes care of recursion depths (`Expander::within_limit()`) and stops macro expansions in the whole macro expansion tree once it detects recursion depth overflow. To be honest, I'm not really satisfied with this fix because it can still be used in unintended ways to bypass overflow checks, and I'm still seeking ways such that misuses are caught by the compiler by leveraging types or something.
Fixes #14074