2837: Accidentally quadratic r=matklad a=matklad
Our syntax highlighting is accidentally quadratic. The current state of the PR fixes it in a pretty crude way; it looks like the proper fix requires redoing how source-analyzer works.
**NB:** don't be scared by the diff stats; that's mostly a test-data file
Co-authored-by: Aleksey Kladov <aleksey.kladov@gmail.com>
2803: Fix various names, e.g. Iterator not resolving in core prelude r=matklad a=flodiebold
Basically, `Iterator` is re-exported via several steps, which happened not to be
resolved yet when we got to the prelude import; but since the name did resolve to
the re-export from `core::iter` (just not to any actual items), we gave up trying
to resolve it further.
Maybe part of the problem is that we can have
`PartialResolvedImport::Unresolved` or `PartialResolvedImport::Indeterminate`
with `None` in all namespaces, and handle them differently.
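As a rough sketch (simplified types, not the actual rust-analyzer definitions or the PR's fix), the ambiguity looks like this: both variants below can carry "nothing resolved in any namespace", and one way out is to funnel that case into a single state so the collector keeps retrying:
```rust
#[derive(Clone, Copy, Default, PartialEq, Eq, Debug)]
struct PerNs {
    types: Option<u32>, // placeholders for real item ids
    values: Option<u32>,
    macros: Option<u32>,
}

impl PerNs {
    fn is_none(&self) -> bool {
        self.types.is_none() && self.values.is_none() && self.macros.is_none()
    }
}

#[derive(Debug, PartialEq, Eq)]
enum PartialResolvedImport {
    /// None of the namespaces is resolved.
    Unresolved,
    /// Some namespaces are resolved, but more may still turn up later
    /// (e.g. once further re-exports are collected).
    Indeterminate(PerNs),
    /// All namespaces are resolved.
    Resolved(PerNs),
}

fn classify(per_ns: PerNs, fully_resolved: bool) -> PartialResolvedImport {
    if per_ns.is_none() {
        // Resolved "to no actual items": keep treating it as unresolved so
        // the collector retries once more items appear.
        PartialResolvedImport::Unresolved
    } else if fully_resolved {
        PartialResolvedImport::Resolved(per_ns)
    } else {
        PartialResolvedImport::Indeterminate(per_ns)
    }
}

fn main() {
    let empty = PerNs::default();
    assert_eq!(classify(empty, false), PartialResolvedImport::Unresolved);
}
```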
Fixes #2683.
Co-authored-by: Florian Diebold <flodiebold@gmail.com>
This change:
- introduces a `compute_crate_def_map` query and renames
`CrateDefMap::crate_def_map_query` for consistency,
- annotates `crate_def_map` as `salsa::transparent` and adds a
top-level `crate_def_map` wrapper function around it that starts the
profiler and immediately calls into the `compute_crate_def_map` query.
This allows us to better understand where the time is spent; in
particular, how much is spent in the recomputation itself and how much
in salsa.
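Schematically, the wrapper pattern looks roughly like this (a sketch based on the description above, not the exact rust-analyzer code; the surrounding crate wiring is omitted and the salsa attribute syntax depends on the salsa version in use):
```rust
use std::sync::Arc;

#[salsa::query_group(DefDatabaseStorage)]
pub trait DefDatabase: salsa::Database {
    /// Thin pass-through: `salsa::transparent` means salsa does not memoize
    /// or track this query itself, so its only job is to start the profiler.
    #[salsa::invoke(crate_def_map_wait)]
    #[salsa::transparent]
    fn crate_def_map(&self, krate: CrateId) -> Arc<CrateDefMap>;

    /// The real, memoized query that does the actual work.
    #[salsa::invoke(CrateDefMap::crate_def_map_query)]
    fn compute_crate_def_map(&self, krate: CrateId) -> Arc<CrateDefMap>;
}

fn crate_def_map_wait(db: &impl DefDatabase, krate: CrateId) -> Arc<CrateDefMap> {
    // Time measured here covers both recomputation (if any) and salsa's own
    // validation/bookkeeping; comparing it with the inner query shows where
    // the time actually goes.
    let _p = profile("crate_def_map");
    db.compute_crate_def_map(krate)
}
```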
Example output (where we don't actually re-compute anything, but the
query still takes a non-trivial amount of time):
```
211ms - handle_inlay_hints
    150ms - get_inlay_hints
        150ms - SourceAnalyzer::new
            65ms - def_with_body_from_child_node
                65ms - analyze_container
                    65ms - analyze_container
                        65ms - Module::from_definition
                            65ms - Module::from_file
                                65ms - crate_def_map
                                    1ms - parse_macro_query (6 calls)
                                    0ms - raw_items_query (1 calls)
                                    64ms - ???
```
Signed-off-by: Michal Terepeta <michal.terepeta@gmail.com>
2623: Add support for macros in impl blocks r=matklad a=edwin0cheng
This PR adds support for macros in impl blocks, reusing `Expander` for macro expansion.
see also: #2459
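For example (a made-up snippet, not one of the PR's tests), a macro invoked inside an impl block can now be expanded, so the generated method is known to the IDE:
```rust
// The macro expands to an associated function inside the impl block.
macro_rules! getter {
    ($field:ident: $ty:ty) => {
        fn $field(&self) -> $ty {
            self.$field
        }
    };
}

struct Foo {
    x: u32,
}

impl Foo {
    // With macro expansion in impl blocks, rust-analyzer knows `Foo::x()`
    // exists and returns u32.
    getter!(x: u32);
}

fn main() {
    let foo = Foo { x: 1 };
    println!("{}", foo.x());
}
```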
Co-authored-by: Edwin Cheng <edwin0cheng@gmail.com>
2592: Add std::ops::Index support for inference r=edwin0cheng a=edwin0cheng
see also #2534
Seems like this can't fix #2534 for this case:
```rust
fn foo3(bar: [usize; 2]) {
    let baz = bar[1]; // <--- baz is still unknown ?
    println!("{}", baz);
}
```
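For contrast, a rough illustration (not a test from the PR) of the case the new `Index` support does handle: indexing a user type that implements `std::ops::Index` now yields a concrete element type:
```rust
use std::ops::Index;

struct Grid {
    cells: Vec<u32>,
}

impl Index<usize> for Grid {
    type Output = u32;
    fn index(&self, i: usize) -> &u32 {
        &self.cells[i]
    }
}

fn foo(grid: Grid) {
    // With Index-aware inference, `cell` is known to be u32.
    let cell = grid[0];
    let _ = cell + 1;
}

fn main() {
    foo(Grid { cells: vec![1, 2, 3] });
}
```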
Co-authored-by: Edwin Cheng <edwin0cheng@gmail.com>
2500: Fix format_args expansion & go to definition r=matklad a=flodiebold
The expansion of format_args wasn't yet correct enough to type-check. Also make macros in statement position expand to expressions for now, since that case isn't handled correctly in HIR lowering yet. This finally fixes go to definition within print macros, I think 🙂
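Concretely (a trivial illustration, not one of the PR's tests), go to definition on a variable used inside a print macro should now resolve:
```rust
fn main() {
    let answer = 42;
    // Go to definition / hover on `answer` inside the macro call now works,
    // because the expanded format_args! code type-checks.
    println!("the answer is {}", answer);
}
```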
2505: Remove more dead code r=matklad a=matklad
2506: Remove one more Ty r=matklad a=matklad
Co-authored-by: Florian Diebold <flodiebold@gmail.com>
Co-authored-by: Aleksey Kladov <aleksey.kladov@gmail.com>
2501: Fix coercion from &Foo to an inference variable in a reference r=matklad a=flodiebold
We didn't try to unify within the reference, but we should.
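A made-up repro of the kind of situation described (not the PR's actual test): the element type of the vector starts out as `&?T` with `?T` an inference variable, and pushing `&Foo` has to unify inside the reference so that `?T` becomes `Foo`:
```rust
struct Foo;

fn main() {
    let foo = Foo;
    // `v` starts as `Vec<&?T>`; unifying within the reference when `&foo`
    // is pushed pins `?T` to `Foo`.
    let mut v = Vec::new();
    v.push(&foo);
    let _first: &Foo = v[0];
}
```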
2502: Delay legacy macro expansion r=matklad a=edwin0cheng
This PR makes the following changes:
* Delay legacy macro expansion so that all item-collecting macro expansion is concentrated in one place.
* Add `MacroDirective` to replace 3-tuples.
* After this refactoring, no macro is expanded recursively, so we can remove `MacroStackMonitor` and enforce the expansion limit via the fix-point loop count (see the sketch after this list).
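A rough sketch of the fix-point idea (placeholder types and limit, not the real def-collector): macro calls are queued as `MacroDirective`s, and the number of whole-loop rounds bounds expansion instead of a per-macro recursion monitor:
```rust
// Placeholder work item; the real `MacroDirective` in rust-analyzer carries
// different fields (module, AST id, resolved macro, ...).
#[derive(Clone)]
struct MacroDirective {
    module_id: u32,
    macro_call_id: u32,
}

const EXPANSION_LIMIT: usize = 128; // assumed limit, for illustration only

fn collect(mut queue: Vec<MacroDirective>) {
    let mut rounds = 0;
    while !queue.is_empty() && rounds < EXPANSION_LIMIT {
        // Each round expands every queued macro once; expansions may push
        // new directives (macros produced by macros) for the next round.
        let current = std::mem::take(&mut queue);
        for directive in current {
            queue.extend(expand(&directive));
        }
        rounds += 1;
    }
}

fn expand(_directive: &MacroDirective) -> Vec<MacroDirective> {
    Vec::new() // stub: a real expander would return any nested macro calls
}

fn main() {
    collect(vec![MacroDirective { module_id: 0, macro_call_id: 0 }]);
}
```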
2503: Code: check whether the LSP binary is in PATH r=matklad a=lnicola
I'm not really sure about the TS changes. I just made a couple of functions async and it seems to work.
Co-authored-by: Florian Diebold <flodiebold@gmail.com>
Co-authored-by: Edwin Cheng <edwin0cheng@gmail.com>
Co-authored-by: Laurențiu Nicola <lnicola@dend.ro>
2466: Handle partial resolve cases r=matklad a=edwin0cheng
Another try to fix #2443:
We resolve all imports every time in the `DefCollector::collect` loop, even if they were resolved previously.
This is because other unresolved imports and macros can bring in another `PerNs`, so we can only assume that an import has been partially resolved.
Co-authored-by: Edwin Cheng <edwin0cheng@gmail.com>
2484: DynMap r=matklad a=matklad
Implement `DynMap`, a semi-dynamic, semi-static map which helps to thread heterogeneously typed info in a uniform way. Totally inspired by df3bee3038/compiler/frontend/src/org/jetbrains/kotlin/resolve/BindingContext.java.
@flodiebold wdyt? Seems like a potentially useful pattern for various source-map-like things.
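A minimal sketch of the idea (not rust-analyzer's actual `DynMap` API): statically typed keys over a dynamically typed store, so heterogeneous values can be threaded through one map while use sites stay type-safe:
```rust
use std::any::{Any, TypeId};
use std::collections::HashMap;
use std::marker::PhantomData;

// Statically typed key: the phantom parameter records which value type the
// key maps to, while a zero-sized "tag" type gives the key its identity.
struct Key<V> {
    id: TypeId,
    _v: PhantomData<fn() -> V>,
}

impl<V: 'static> Key<V> {
    fn new<Tag: 'static>() -> Self {
        Key { id: TypeId::of::<Tag>(), _v: PhantomData }
    }
}

// Dynamically typed store: values are erased to `dyn Any`, and the typed
// keys recover the concrete type on access.
#[derive(Default)]
struct DynMap {
    map: HashMap<TypeId, Box<dyn Any>>,
}

impl DynMap {
    fn insert<V: 'static>(&mut self, key: &Key<V>, value: V) {
        self.map.insert(key.id, Box::new(value));
    }
    fn get<V: 'static>(&self, key: &Key<V>) -> Option<&V> {
        self.map.get(&key.id)?.downcast_ref::<V>()
    }
}

fn main() {
    struct FieldCount; // tag type identifying this particular key
    let field_count: Key<u32> = Key::new::<FieldCount>();

    let mut map = DynMap::default();
    map.insert(&field_count, 3);
    assert_eq!(map.get(&field_count), Some(&3));
}
```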
Co-authored-by: Aleksey Kladov <aleksey.kladov@gmail.com>
2479: Add expansion infrastructure for derive macros r=matklad a=flodiebold
I thought I'd experiment a bit with attribute macro/derive expansion, and here's what I've got so far. It has dummy implementations of the Copy / Clone derives, to show that the approach works; it doesn't add any attribute macro support, but I think that fits into the architecture.
Basically, during raw item collection, we look at the attributes and generate macro calls for them if necessary. Currently I only do this for derives, and just add the derive macro calls as separate calls next to the item. I think for derives, it's important that they don't obscure the actual item, since they can't actually change it (e.g. sending the item token tree through macro expansion unnecessarily might make completion within it more complicated).
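For instance (a plain example, not one of the PR's fixtures), an item like this yields the struct itself plus two separate derive-macro calls during collection, and the dummy Copy/Clone expansions are what make the uses below check:
```rust
#[derive(Copy, Clone)]
struct Point {
    x: i32,
    y: i32,
}

fn main() {
    let p = Point { x: 1, y: 2 };
    let q = p; // Copy: `p` stays usable after this
    let r = p.clone(); // Clone comes from the derive expansion
    println!("{} {}", q.x, r.y);
}
```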
Attribute macros would have to be recognized at that stage and replace the item (i.e., the raw item collector will just emit an attribute macro call, and not the item). I think when we implement this, we should try to recognize known inert attributes, so that we don't do macro expansion unnecessarily; anything that isn't known needs to be treated as a possible attribute macro call (since the raw item collector can't resolve the macro yet).
There's basically no name resolution for attribute macros implemented, I just hardcoded the built-in derives. In the future, the built-ins should work within the normal name resolution infrastructure; the problem there is that the builtin stubs in `std` use macros 2.0, which we don't support yet (and adding support is outside the scope of this).
One aspect that I don't really have a solution for, but I don't know how important it is, is removing the attribute itself from its input. I'm pretty sure rustc leaves out the attribute macro from the input, but to do that, we'd have to create a completely new syntax node. I guess we could do it when / after converting to a token tree.
Co-authored-by: Florian Diebold <flodiebold@gmail.com>
2455: Add BuiltinShadowMode r=flodiebold a=edwin0cheng
This PR tries to fix #1905 by introducing a `BuiltinShadowMode` in the name-resolution functions.
cc @flodiebold
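An illustration of the kind of ambiguity involved (details assumed, not the exact case from the issue): `str` names both a builtin type and, here, a user module, and resolution has to prefer the module when `str` is a path qualifier but the builtin when it appears in type position:
```rust
// A user module that shares its name with a builtin type.
mod str {
    pub fn greet() -> &'static str {
        "hi"
    }
}

fn main() {
    // `str::` as a path qualifier resolves to the module above, while
    // `&str` in type position still resolves to the builtin type.
    let s: &str = str::greet();
    println!("{}", s);
}
```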
Co-authored-by: Edwin Cheng <edwin0cheng@gmail.com>
2418: Hide MacroCallLoc outside hir_expand r=matklad a=edwin0cheng
This PR refactors `MacroCallLoc` so that it is hidden and becomes an implementation detail of hir_expand.
Co-authored-by: Edwin Cheng <edwin0cheng@gmail.com>
2396: Switch to variant-granularity field type inference r=flodiebold a=matklad
r? @flodiebold
Previously, we had a `ty` query for each field. This PR switches to a query per struct, which returns an `ArenaMap` with `Ty`s.
I don't know which approach is better. What is bugging me about the original approach is that, if we do all queries on the "leaf" defs, in practice we get a ton of queries which repeatedly reach into the parent definition to compute module, resolver, etc. This *seems* wasteful (but I don't think this is really what causes any perf problems for us).
At the same time, I've been looking at Kotlin, and they seem to use the general pattern of analyzing the *parent* definition, and storing info about children into a `BindingContext`.
I don't really know which way is preferable. I think I want to try this approach, where query granularity generally mirrors the data granularity. The primary motivation for me here is probably just the hope that we can avoid adding a ton of helpers to `StructField`, and maybe in general avoid the need to switch to a global `StructField`, using `LocalStructFieldId` most of the time internally.
For the external API (i.e., for `ra_ide_api`), I think we should continue with the fine-grained `StructField::ty` approach, which internally fetches the table for the whole struct and indexes into it.
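A rough sketch of that shape (names simplified; `HashMap`, `u32`, and a string-backed `Ty` stand in for the real `ArenaMap`, `LocalStructFieldId`, and `Ty`): one query computes the types of all fields of a variant, and the fine-grained per-field accessor just indexes into that table:
```rust
use std::collections::HashMap;
use std::sync::Arc;

type LocalStructFieldId = u32;

#[derive(Clone, Debug)]
struct Ty(String);

// Variant-granularity "query": computed once per struct/variant.
fn field_types(struct_name: &str) -> Arc<HashMap<LocalStructFieldId, Ty>> {
    let mut map = HashMap::new();
    if struct_name == "Point" {
        map.insert(0, Ty("i32".into()));
        map.insert(1, Ty("i32".into()));
    }
    Arc::new(map)
}

// Fine-grained external API: fetches the whole table and indexes into it.
fn field_ty(struct_name: &str, field: LocalStructFieldId) -> Option<Ty> {
    field_types(struct_name).get(&field).cloned()
}

fn main() {
    println!("{:?}", field_ty("Point", 1));
}
```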
In terms of actual memory savings, the results are as follows:
```
This PR:
142kb FieldTypesQuery (deps)
38kb FieldTypesQuery
Status Quo:
208kb TypeForFieldQuery (deps)
18kb TypeForFieldQuery
```
Note how the table itself occupies more than twice as much space! I don't have an explanation for this: a plausible hypothesis is that single-field structs are very common and for them the table is a pessimisation.
There's a noticeable wallclock time difference.
Co-authored-by: Aleksey Kladov <aleksey.kladov@gmail.com>
2348: Add support for stringify! builtin macro r=matklad a=piotr-szpetkowski
Refs #2212
First time ever contributing here, hopefully it's ok.
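For reference, what the builtin does (an ordinary usage example, not one of the PR's tests):
```rust
fn main() {
    // stringify! turns its argument tokens into a string literal at compile time.
    let s = stringify!(1 + 1);
    assert_eq!(s, "1 + 1");
}
```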
2352: Move TypeAlias to hir_def r=matklad a=matklad
Co-authored-by: Piotr Szpetkowski <piotr.szpetkowski@pyquest.space>
Co-authored-by: Aleksey Kladov <aleksey.kladov@gmail.com>
The current system with AstIds has two primary drawbacks:
* It is possible to manufacture IDs out of thin air.
For example, it's possible to create IDs for items which are not
considered in CrateDefMap due to cfg. Or it is possible to mix up
structs and unions, because they share an ID space.
* Getting the ID of a parent requires a secondary index.
Instead, the plan is to pursue the more traditional approach, where
each item stores the ID of the parent declaration. This makes
`FromSource` more awkward, but also more correct: now, to get from an
AST node to HIR, we first do this recursively for the parent item, and
then just search the children of the parent for the matching def.
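A conceptual sketch of that lookup direction (placeholder types, not the real hir_def API): resolve the parent container first, then search only that parent's children for the node:
```rust
#[derive(Clone, Copy, PartialEq, Eq, Debug)]
struct AstId(u32);

struct AstItem {
    id: AstId,
    parent: Option<AstId>, // syntactic parent item, if any
}

struct DefTree {
    // def index -> (AST id, children def indices); index 0 is the crate root
    defs: Vec<(AstId, Vec<usize>)>,
}

fn from_source(tree: &DefTree, item: &AstItem, items: &[AstItem]) -> Option<usize> {
    let parent_def = match item.parent {
        // Recurse on the syntactic parent first.
        Some(parent_id) => {
            let parent_item = items.iter().find(|it| it.id == parent_id)?;
            from_source(tree, parent_item, items)?
        }
        None => 0, // top-level items hang off the crate root
    };
    // Then look only among that parent's children for the matching def.
    tree.defs[parent_def]
        .1
        .iter()
        .copied()
        .find(|&child| tree.defs[child].0 == item.id)
}

fn main() {
    // Hypothetical layout: `mod m` at the crate root containing `struct S`.
    let items = [
        AstItem { id: AstId(10), parent: None },            // mod m
        AstItem { id: AstId(11), parent: Some(AstId(10)) }, // struct S
    ];
    let tree = DefTree {
        defs: vec![
            (AstId(0), vec![1]),  // def 0: crate root, child: mod m
            (AstId(10), vec![2]), // def 1: mod m, child: struct S
            (AstId(11), vec![]),  // def 2: struct S
        ],
    };
    assert_eq!(from_source(&tree, &items[1], &items), Some(2));
}
```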