Prefer standard library paths over shorter extern deps re-exports
This should generally speed up path finding for std items as we no longer bother looking through all external dependencies. It also makes more sense to prefer importing std items from the std dependencies directly.
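As a rough illustration (the dependency name is made up), the idea is that the std path is preferred even when a dependency re-exports the same item:
```rust
// Hypothetical: `some_dep` re-exports `Arc` from std.
// Previously the shorter re-export path could be suggested:
// use some_dep::Arc;
// Now the item is imported from std directly:
use std::sync::Arc;

fn main() {
    let _shared = Arc::new(1);
}
```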
Fixes https://github.com/rust-lang/rust-analyzer/issues/17540
Fix path resolution for child mods of those expanded by `include!`
Child modules wouldn't use the correct candidate paths due to a branch that doesn't seem to be doing what it's intended to do. Removing the branch fixes the problem and all existing test cases pass.
Having no knowledge of how any of this works, I believe this fixes #17645. Using another test that writes the included mod directly into `lib.rs` instead, I found the difference can be traced to the candidate files we use to look up mods. A separate branch for the case where the file comes from an `include!` macro doesn't take into account the original mod we're contained within:
```rust
None if file_id.macro_file().map_or(false, |it| it.is_include_macro(db.upcast())) => {
    candidate_files.push(format!("{}.rs", name.display(db.upcast())));
    candidate_files.push(format!("{}/mod.rs", name.display(db.upcast())));
}
```
I'm not sure why this branch exists. Tracing the branch back takes us to 3bb9efb, but that commit doesn't say *why* the branch was added. The test case added in that commit passes with the branch removed, so I think it's just superfluous at this point.
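As a rough sketch of the kind of layout involved (file names are illustrative, not taken from the issue):
```rust
// src/lib.rs
mod parent;

// src/parent.rs
include!("included.rs");

// src/included.rs -- expanded into `parent` by `include!`
mod child; // the candidate file paths for `child` ignored the containing mod,
           // so modules declared like this (and their children) could fail to resolve
```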
Add an option to use "::" for the external crate prefix.
Fixes #11823.
Hi, I'm very new to rust-analyzer and not sure how the review process works. Can somebody take a look at this PR? Thanks!
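For illustration (exact config name aside), the new option makes inserted paths use the explicit global-path form with a leading `::`:
```rust
// With the option enabled, generated imports use the global-path prefix:
use ::std::collections::HashMap;
// rather than the plain form:
// use std::collections::HashMap;

fn main() {
    let _map: HashMap<String, u32> = HashMap::new();
}
```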
refactor: Prefer plain trait definitions over macros for impl_intern_value_trivial
`impl_intern_value_trivial` can be defined with a trait directly, so prefer that over a macro definition.
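A rough sketch of the pattern (illustrative names and signatures, not the actual rust-analyzer definitions):
```rust
use std::hash::Hash;

// Illustrative stand-in for the interning trait.
trait InternValue {
    type Key;
    fn into_key(self) -> Self::Key;
}

// Before: a macro stamped out the same trivial impl for each listed type.
// macro_rules! impl_intern_value_trivial {
//     ($($t:ty),*) => {
//         $(impl InternValue for $t { type Key = $t; fn into_key(self) -> $t { self } })*
//     };
// }
// impl_intern_value_trivial!(String, u32);

// After: a plain marker trait plus a blanket impl replaces the macro.
trait InternValueTrivial: Eq + Hash {}

impl<T: InternValueTrivial> InternValue for T {
    type Key = T;
    fn into_key(self) -> T {
        self
    }
}

impl InternValueTrivial for String {}
impl InternValueTrivial for u32 {}
```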
fix: do not resolve prelude within block modules
Fixes #17338 (continuing from #17251).
In #17251, we injected preludes into non-top-level modules, which led r-a to resolve names in preludes directly within block modules. This PR fixes it by checking whether the module is a pseudo-module introduced by a block (similar to what we do for extern preludes).
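For context, a "block module" here is the pseudo-module rust-analyzer creates for a block expression that defines items, roughly like this (illustrative only):
```rust
fn outer() {
    // Because this block defines an item, rust-analyzer gives it its own
    // pseudo-module (a block DefMap). Prelude names used inside it should be
    // resolved via the surrounding module, not directly within the block's scope.
    struct Local;
    let _x: Option<Local> = None;
}
```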
`std::env::set_var` will be unsafe in edition 2024, but not in earlier editions.
I couldn't quite figure out how to check for the span properly, so for now
we just turn the false positives into false negatives, which are less bad.
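For reference, this is the edition difference involved (on edition 2021 and earlier the same call compiles without `unsafe`):
```rust
fn main() {
    // Edition 2024: std::env::set_var is an unsafe fn, so it must be called
    // in an unsafe block; the missing-unsafe diagnostic should only fire here.
    unsafe {
        std::env::set_var("MY_VAR", "value");
    }
}
```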
feat: More callable info
With this PR we retain more info about callables other than functions, allowing for closure parameter type inlay hints to be linkable as well as better signature help around closures and `Fn*` implementors.
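A rough illustration of the kind of code that benefits (not taken from the PR):
```rust
fn main() {
    // The parameter types of `x` and `y` are only known from inference (`i32`
    // here); their inlay hints can now be navigated like other type hints, and
    // signature help works when calling `add` even though it is a closure.
    let add = |x, y| x + y;
    let _sum: i32 = add(1, 2);
}
```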
In the stabilization attempt of `#[unix_sigpipe = "sig_dfl"]`, a concern
was raised related to using a language attribute for the feature: Long
term, we want `fn lang_start()` to be definable by any crate, not just
libstd. Having a special language attribute in that case becomes
awkward.
So as a first step towards the next stabilization attempt, this
PR changes the `#[unix_sigpipe = "..."]` attribute to a compiler flag
`-Zon-broken-pipe=...` to remove that concern, since now the language
is not "contaminated" by this feature.
Another point was also raised, namely that the UI should not leak
**how** it does things, but rather what the **end effect** is. The new
flag uses the proposed naming. This is of course something that can be
iterated on further before stabilization.
Have Derive Attribute share a token tree with its proc macros.
The goal of this PR is to stop creating a token tree for each derive proc macro.
This is done by giving each derive proc macro an id pointing to its parent derive element.
Running `analysis-stats` on the rust-analyzer project, I saw a small memory decrease.
Before the change:
```
Inference: 42.80s, 362ginstr, 591mb
MIR lowering: 8.67s, 67ginstr, 291mb
Mir failed bodies: 18 (0%)
Data layouts: 85.81ms, 609minstr, 8mb
Failed data layouts: 135 (6%)
Const evaluation: 440.57ms, 5235minstr, 13mb
Failed const evals: 1 (0%)
Total: 64.16s, 552ginstr, 1731mb
```
After the change:
```
Inference: 40.32s, 340ginstr, 593mb
MIR lowering: 7.95s, 62ginstr, 292mb
Mir failed bodies: 18 (0%)
Data layouts: 87.97ms, 591minstr, 8mb
Failed data layouts: 135 (6%)
Const evaluation: 433.38ms, 5226minstr, 14mb
Failed const evals: 1 (0%)
Total: 60.49s, 523ginstr, 1680mb
```
Currently this breaks the expansion for the actual derive attribute.
## TODO
- [x] Pick a better name for the function `smart_macro_arg`
internal: Compress file text using LZ4
I haven't tested properly, but this roughly looks like:
```
1246 MB
59mb 4899 FileTextQuery
1008 MB
20mb 4899 CompressedFileTextQuery
555kb 1790 FileTextQuery
```
We might want to test on something more interesting, like `bevy`.
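A minimal sketch of the idea, assuming the `lz4_flex` crate is used for compression (illustrative, not necessarily the exact code in the PR):
```rust
use lz4_flex::{compress_prepend_size, decompress_size_prepended};

/// Hypothetical wrapper: store file text compressed, decompress on access.
struct CompressedFileText(Box<[u8]>);

impl CompressedFileText {
    fn new(text: &str) -> Self {
        CompressedFileText(compress_prepend_size(text.as_bytes()).into_boxed_slice())
    }

    fn text(&self) -> String {
        let bytes = decompress_size_prepended(&self.0).expect("valid LZ4 data");
        String::from_utf8(bytes).expect("file text was valid UTF-8")
    }
}

fn main() {
    let compressed = CompressedFileText::new("fn main() {}\n");
    assert_eq!(compressed.text(), "fn main() {}\n");
}
```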
Set up infra for handling auto trait bounds (disabled due to perf problems)
This patch updates some of the partially-implemented functions of `ChalkContext as RustIrDatabase`, namely `adt_datum()` and `impl_provided_for()`. With those, we can now correctly work with auto trait bounds and distinguish methods based on them.
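A rough illustration (not from the PR) of what distinguishing methods based on auto trait bounds means:
```rust
struct Wrapper<T>(T);

impl<T: Send> Wrapper<T> {
    // This method only exists when T: Send (an auto trait bound).
    fn only_if_send(&self) {}
}

struct NotSend(*const u8); // raw pointers are not Send

fn demo(a: Wrapper<u32>, _b: Wrapper<NotSend>) {
    a.only_if_send(); // resolves: u32 is Send
    // _b.only_if_send(); // does not resolve: NotSend is not Send
}
```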
Resolves #7856 (the second code snippet; the first one is resolved by #13074)
**IMPORTANT**: I don't think we want to merge this until #7637 is resolved. Currently this patch introduces A LOT of unknown types and type mismatches as shown below. This is because we cannot resolve items like `hashbrown::HashMap` in `std` modules, leaving auto trait bounds on them and their dependents unprovable.
|crate (from `rustc-perf@c52ee6` except for r-a)|e3dc5a588f07d6f1d3a0f33051d4af26190abe9e|HEAD of this branch|
|---|---|---|
|rust-analyzer @ e3dc5a588f |exprs: 417528, ??ty: 907 (0%), ?ty: 114 (0%), !ty: 1|exprs: 417528, ??ty: 1704 (0%), ?ty: 403 (0%), !ty: 20|
|ripgrep|exprs: 62120, ??ty: 2 (0%), ?ty: 0 (0%), !ty: 0|exprs: 62120, ??ty: 132 (0%), ?ty: 58 (0%), !ty: 11|
|webrender/webrender|exprs: 94355, ??ty: 49 (0%), ?ty: 16 (0%), !ty: 2|exprs: 94355, ??ty: 429 (0%), ?ty: 130 (0%), !ty: 7|
|diesel|exprs: 132591, ??ty: 401 (0%), ?ty: 5129 (3%), !ty: 31|exprs: 132591, ??ty: 401 (0%), ?ty: 5129 (3%), !ty: 31|
feat: Introduce term search to rust-analyzer
# Introduce term search to `rust-analyzer`
_I've marked this as a draft as there might be some shortcomings; please point them out so I can fix them. Otherwise I think it is mostly ready, as I'd rather introduce extra functionality in follow-up PRs._
Term search (or I guess expression search for Rust) is a technique to generate code by basically making the types match.
Consider the following program:
```rust
fn wrap(arg: i32) -> Option<i32> {
    todo!();
}
```
From the types of values in scope and the constructors of `Option`, we can produce the expected result of wrapping the argument in `Option`.
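For the snippet above, term search would fill the hole roughly like this:
```rust
fn wrap(arg: i32) -> Option<i32> {
    Some(arg)
}
```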
Dependently typed languages such as `Idris2` and `Agda` have similar tools to help with proofs, but this can also be used in everyday development as an "auto-complete".
# Demo videos
https://github.com/rust-lang/rust-analyzer/assets/19900308/7b68a1b7-7dba-4e31-9221-6c7485e77d88
https://github.com/rust-lang/rust-analyzer/assets/19900308/0fae530a-aabb-4b28-af71-e19f8d3d64b2
# What does it currently do
- It works well with locals, free functions, type constructors and non-static impl methods that take items by value.
- Works with functions/methods that take shared references, but not with unique references (very conservative).
- Can handle projections to struct fields (e.g. `foo.bar.baz`), but this might be more conservative than it has to be in order to avoid conflicting with the borrow checker
- Should create only valid programs (no type / borrow checking errors). Tested with `rust-analyzer analysis-stats /path/to/ripgrep/Cargo.toml --run-term-search --validate-term-search` (basically running `cargo check` on all of the generated programs); the only errors seem to come from type inference, which is more an issue with the testing method than with term search itself.
# Performance / fitness
```txt
ripgrep (latest)
Tail Expr syntactic hits: 130/1692 (7%)
Tail Exprs found: 523/1692 (30%)
Term search avg time: 9ms
Term search: 15.64s, 97ginstr, 8mb
rust-analyzer (on this branch)
Tail Expr syntactic hits: 804/13860 (5%)
Tail Exprs found: 6757/13860 (48%)
Term search avg time: 78ms
Term search: 1088.23s, 6765ginstr, 98mb
```
Highly generic code seems to blow up the search space, so currently the number of generic parameters allowed in functions/methods is limited to 0 (1 didn't give much improvement, and 2 already pushes the search time to 0.5+ s).
# Plans for the future (not in this PR)
- ~~Add impl methods that do not take the `self` type (should be quite straightforward)~~ Done
- Be smarter (aka less restrictive) about borrow checking; this seems quite hard, but since the current approach is rather naive, I think some easy improvements are available.
- ~~See if it works as an autocomplete while typing~~ Done
_Feel free to ask questions / point out shortcomings either here or on Zulip, I'll be happy to address them. I'm doing this as part of my MSc thesis, so I'll be working on it till summer anyway 😄_
feat: ignored and disabled macro expansion
Supersedes #15117; I was having some conflicts after a rebase, and since I didn't remember much of it, I started clean instead.
The end result is pretty much the same as the linked PR, but instead of proc macro lookups, I marked the expanders that explicitly cannot be expanded and we shouldn't even attempt to do so.
## Unresolved questions
- [ ] I introduced a `DISABLED_ID` next to `DUMMY_ID` in `hir-expand`'s `ProcMacroExpander`, which is effectively the same thing with slightly different semantics: dummy macros are not (yet) expanded, probably due to errors, while not expanding disabled macros is part of the usual flow. I'm not sure if it's the right way to handle this; I also thought of just adding a flag instead of replacing the macro ID, so that the disabled macro can still be expanded for any reason if needed.
Abstract more over ItemTreeLoc-like structs
Allows reducing some code duplication by using functions generic over said structs. The diff isn't net negative because I added some additional impls for completeness.
Swap Subtree::token_trees from Vec to boxed slice
Performs one of the optimizations suggested in #16325, and goes a little bit further. Boxed slices guarantee the effect of `shrink_to_fit` and also save a pointer width, since no capacity has to be stored.
Most of the diff is:
- Changing `vec![]` to `Box::new([])`
- Changing initialize-then-fill patterns into fill-then-`into_boxed_slice`
- Working around the lack of an owned iterator or automatic iteration over a `Box<[T]>`
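A minimal illustration of the savings (not from the PR):
```rust
fn main() {
    // Vec<T> stores pointer + capacity + length (3 usizes);
    // Box<[T]> stores only pointer + length (2 usizes).
    assert_eq!(std::mem::size_of::<Vec<u32>>(), 3 * std::mem::size_of::<usize>());
    assert_eq!(std::mem::size_of::<Box<[u32]>>(), 2 * std::mem::size_of::<usize>());

    // Filling a Vec and then converting drops any excess capacity.
    let v: Vec<u32> = (0..10).collect();
    let boxed: Box<[u32]> = v.into_boxed_slice();
    assert_eq!(boxed.len(), 10);
}
```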
I would like to use my own crate, [small-fixed-array](https://lib.rs/small-fixed-array), although I understand if it isn't mature enough for this. If I'm given the go-ahead, I can rework this PR to use it instead.
internal: even more `tracing`
As part of profiling completions, I added some additional spans and moved `TyBuilder::subst_for_def` closer to its usage site (the latter had a small impact on completion performance; thanks for the tip, Lukas!).
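For reference, the span instrumentation in question looks roughly like this (the function name is illustrative):
```rust
fn compute_completions() {
    // Enter a span for the duration of this scope so the time spent here
    // shows up attributed in traces/profiles.
    let _p = tracing::info_span!("compute_completions").entered();
    // ... actual work ...
}
```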