feat: Package Windows release artifacts as ZIP and add symbols file
Closes #13872, closes #7747
CC #10371
This allows us to ship a format that's easier to handle on Windows. As a bonus, we can also include the PDB, to get useful stack traces. Unfortunately, it adds a couple of dependencies to `xtask`, increasing the debug build times from 1.28 s to 1.58 s (release: from 1.60 s to 2.20 s) on my system.
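As a rough illustration, here is a minimal sketch of the packaging step, assuming the `zip` and `anyhow` crates (the function name and paths are illustrative, not the actual xtask code):

```rust
use std::{fs::File, io::copy, path::Path};

use zip::{write::FileOptions, CompressionMethod, ZipWriter};

// Bundle the executable and its PDB into a single ZIP archive.
fn zip_windows_artifacts(exe: &Path, pdb: &Path, dest: &Path) -> anyhow::Result<()> {
    let mut writer = ZipWriter::new(File::create(dest)?);
    let options = FileOptions::default().compression_method(CompressionMethod::Deflated);

    for src in [exe, pdb] {
        let name = src.file_name().unwrap().to_string_lossy().into_owned();
        writer.start_file(name, options)?;
        copy(&mut File::open(src)?, &mut writer)?;
    }
    writer.finish()?;
    Ok(())
}
```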
Compute data layout of types
cc #4091
Things that aren't working:
* Closures
* Generators (so no support for `Future` I think)
* Opaque types
* Type aliases and associated types, which may need normalization
Things that show wrong result:
* ~Enums with explicit discriminant~
* SIMD types
* ~`NonZero*` and similar standard library items which control layout with special attributes~
I didn't put much work into the user-facing side, since I wasn't confident about the best way to present this information. Currently it shows size and alignment for ADTs, and size, alignment, and offset for struct fields, in the hover, similar to clangd. I used it for a few days and liked it, but we may consider it too noisy and move it to an assist or command.
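To illustrate the kind of information shown (the exact hover wording here is hypothetical; with `#[repr(C)]` the numbers below are fixed by the C layout rules):

```rust
#[repr(C)]
struct Foo {
    a: u8,  // hover: size = 1, align = 1, offset = 0
    b: u32, // hover: size = 4, align = 4, offset = 4 (3 bytes of padding before it)
}
// hover on `Foo` itself: size = 8, align = 4
```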
Bump chalk
There's a bug in current chalk that prevents us from properly supporting GATs, which is supposed to be fixed in v0.86. Note the following:
- v0.86 is only going to be released next Sunday, so I'll keep this PR as a draft until then.
- This doesn't compile without https://github.com/rust-lang/chalk/pull/779, which I hope will be included in v0.86. I confirmed this compiles with it locally.
Two breaking changes from v0.84:
- `TypeFolder` has been split into `TypeFolder` and `FallibleTypeFolder` (https://github.com/rust-lang/chalk/pull/772)
- `ProjectionTy::self_type_parameter()` has been removed (https://github.com/rust-lang/chalk/pull/778)
The main change here should be that flags are not inherited, so

    $ rust-analyzer analysis-stats . -v -v

would do what it should do. We also no longer Don't
Add new configuration settings to set env vars when running cargo, rustc, etc. commands: `cargo.extraEnv` and `checkOnSave.extraEnv`
It can be extremely useful to be able to set environment variables when rust-analyzer is running various cargo or rustc commands (such as `cargo check`, `cargo --print cfg` or `cargo metadata`): users may want to set custom `RUSTFLAGS`, change `PATH` to use a custom toolchain or set a different `CARGO_HOME`.
There is the existing `server.extraEnv` setting that allows env vars to be set when the rust-analyzer server is launched, but using this as the recommended mechanism to also configure cargo/rust has some drawbacks:
- It conflates configuring the rust-analyzer server with configuring cargo/rustc (one may want to change the `PATH` for cargo/rustc without affecting the rust-analyzer server).
- The name `server.extraEnv` doesn't indicate that cargo/rustc will be affected, but renaming it to `cargo.extraEnv` wouldn't indicate that the rust-analyzer server would be affected.
- To make the setting useful, it needs to be dynamically reloaded without requiring that the entire extension is reloaded. It might be possible to do this, but it would require the client communicating to the server what the overwritten env vars were at first launch, which isn't easy to do.
This change adds two new configuration settings: `cargo.extraEnv` and `checkOnSave.extraEnv` that can be used to change the environment for the rust-analyzer server after launch (thus affecting any process that rust-analyzer invokes) and the `cargo check` command respectively. `cargo.extraEnv` supports dynamic changes by keeping track of the pre-change values of environment variables, thus it can undo changes made previously before applying the new configuration (and then requesting a workspace reload).
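A minimal sketch of that undo mechanism (the type and method names are illustrative, not the actual rust-analyzer code): remember what each variable was before we overrode it, and restore that before applying a new configuration.

```rust
use std::collections::HashMap;

#[derive(Default)]
struct ExtraEnvState {
    /// Each overridden variable mapped to its pre-override value
    /// (`None` if it was previously unset).
    saved: HashMap<String, Option<String>>,
}

impl ExtraEnvState {
    fn apply(&mut self, extra_env: &HashMap<String, String>) {
        // Undo the overrides from the previous configuration first.
        for (var, old) in self.saved.drain() {
            match old {
                Some(value) => std::env::set_var(&var, value),
                None => std::env::remove_var(&var),
            }
        }
        // Apply the new overrides, remembering what they replace.
        for (var, value) in extra_env {
            self.saved.insert(var.clone(), std::env::var(var).ok());
            std::env::set_var(var, value);
        }
    }
}
```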
Distinguish between
- there is no build data (for some reason?)
- there is build data, but the cargo package didn't build a proc macro dylib
- there is a proc macro dylib, but it didn't contain the proc macro we expected
- the name did not resolve to any macro (this is now an
unresolved_macro_call even for attributes)
I changed the handling of disabled attribute macro expansion to
immediately ignore the macro and report an unresolved_proc_macro,
because otherwise they would now result in loud unresolved_macro_call
errors. I hope this doesn't break anything.
Also try to improve error ranges for unresolved_macro_call / macro_error
by reusing the code for unresolved_proc_macro. It's not perfect but
probably better than before.
11598: feat: Parse destructuring assignment r=Veykril a=ChayimFriedman2
Part of #11532.
Lowering is not as easy and may not even be feasible right now as it requires generating identifiers: `(a, b) = (b, a)` is desugared into
```rust
{
let (<gensym_a>, <gensym_b>) = (b, a);
a = <gensym_a>;
b = <gensym_b>;
}
```
rustc uses hygiene to implement that, but we don't support hygiene yet.
However, I think parsing was the main problem as lowering will just affect type inference, and while `{unknown}` is not nice it's much better than a syntax error.
I'm still looking for the best way to do lowering, though.
Fixes #11454.
Co-authored-by: Chayim Refael Friedman <chayimfr@gmail.com>
11444: feat: Fix up syntax errors in attribute macro inputs to make completion work more often r=flodiebold a=flodiebold
This implements the "fix up syntax nodes" workaround mentioned in #11014. It isn't much more than a proof of concept; I have only implemented a few cases, but it already helps quite a bit.
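As a toy, string-level illustration of the idea (the real implementation works on syntax trees and token maps, and the placeholder name here is made up): insert a placeholder so the macro sees valid syntax, and remember what was inserted so it can be stripped from the expansion again.

```rust
// If the input ends in a dangling field access like `x.`, append a dummy
// field name and report what was inserted so the caller can undo it later.
fn fixup(input: &str) -> (String, Option<&'static str>) {
    if input.trim_end().ends_with('.') {
        (format!("{}__placeholder", input), Some("__placeholder"))
    } else {
        (input.to_string(), None)
    }
}
```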
Some notes:
- I'm not super happy about how much the fixup procedure needs to interact with the syntax node -> token tree conversion code (e.g. needing to share the token map). This could maybe be simplified with some refactoring of that code.
- It would maybe be nice to have the fixup procedure reuse or share information with the parser, though I'm not really sure how much that would actually help.
Co-authored-by: Florian Diebold <flodiebold@gmail.com>
11281: ide: parallel prime caches r=jonas-schievink a=jhgg
cache priming goes brrrr... the successor to #10149
---
this PR implements a parallel cache priming strategy that uses a topological work queue to feed a pool of worker threads the crates to index in parallel.
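A minimal sketch of such a topological work queue, assuming the `crossbeam-channel` crate (names and structure are illustrative, not the PR's actual code): a crate is handed to the worker pool only once all of its dependencies have been indexed.

```rust
use std::collections::HashMap;
use std::thread;

use crossbeam_channel::unbounded;

type CrateId = u32;

fn prime_caches(deps: HashMap<CrateId, Vec<CrateId>>, workers: usize) {
    // Count unfinished dependencies per crate and build the reverse edges.
    let mut pending: HashMap<CrateId, usize> =
        deps.iter().map(|(&c, ds)| (c, ds.len())).collect();
    let mut dependents: HashMap<CrateId, Vec<CrateId>> = HashMap::new();
    for (&c, ds) in &deps {
        for &d in ds {
            dependents.entry(d).or_default().push(c);
        }
    }

    let (work_tx, work_rx) = unbounded::<CrateId>();
    let (done_tx, done_rx) = unbounded::<CrateId>();

    // Worker pool: each worker pulls ready crates and "indexes" them.
    let handles: Vec<_> = (0..workers)
        .map(|_| {
            let work_rx = work_rx.clone();
            let done_tx = done_tx.clone();
            thread::spawn(move || {
                for krate in work_rx {
                    // ... index the crate here (def maps, symbols, ...) ...
                    done_tx.send(krate).unwrap();
                }
            })
        })
        .collect();
    drop(done_tx);

    // Seed the queue with crates that have no dependencies.
    for (&c, &n) in &pending {
        if n == 0 {
            work_tx.send(c).unwrap();
        }
    }

    // As crates finish, release the crates that depend on them.
    let mut remaining = pending.len();
    while remaining > 0 {
        let finished = done_rx.recv().unwrap();
        remaining -= 1;
        for &dep in dependents.get(&finished).into_iter().flatten() {
            let count = pending.get_mut(&dep).unwrap();
            *count -= 1;
            if *count == 0 {
                work_tx.send(dep).unwrap();
            }
        }
    }
    drop(work_tx); // close the channel so the workers exit
    for handle in handles {
        handle.join().unwrap();
    }
}
```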
## todo
- [x] should we keep the old prime caches?
- [x] we should use num_cpus to detect how many cpus to use to prime caches. should we also expose a config for # of worker CPU threads to use?
- [x] something is wonky with cancellation, need to figure it out before this can merge.
Co-authored-by: Jake Heinz <jh@discordapp.com>
10956: minor: Bump deps r=Veykril a=lnicola
bors r+
10986: fix: Fix lint completions not working for unclosed attributes r=Veykril a=Veykril
Fixes#10682
Uses keywords and nested `TokenTree`s as a heuristic to figure out when to stop parsing in case the attribute is unclosed, which should work pretty well, as attributes are usually followed by either of those.
bors r+
Co-authored-by: Laurențiu Nicola <lnicola@dend.ro>
Co-authored-by: Lukas Wirth <lukastw97@gmail.com>
10877: feat: make highlighting linear r=matklad a=matklad
In https://youtu.be/qvIZZf5dmTE, we've noticed that AstIdMap does a
linear lookup when going from SyntaxNode to Id. This leads to
accidentally quadratic overall performance. Replace the linear lookup with an
O(1) hashmap lookup.
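A minimal, self-contained sketch of the change (the types below are simplified stand-ins, not rust-analyzer's actual `SyntaxNodePtr`/`ErasedFileAstId`): build a reverse hashmap once instead of scanning the arena on every lookup.

```rust
use std::collections::HashMap;

#[derive(Clone, PartialEq, Eq, Hash)]
struct NodePtr(u64); // stand-in for SyntaxNodePtr

#[derive(Clone, Copy, PartialEq, Eq)]
struct AstId(u32); // stand-in for ErasedFileAstId

struct AstIdMap {
    arena: Vec<NodePtr>,              // id -> node pointer, as before
    reverse: HashMap<NodePtr, AstId>, // node pointer -> id, new O(1) lookup
}

impl AstIdMap {
    fn new(arena: Vec<NodePtr>) -> Self {
        let reverse = arena
            .iter()
            .enumerate()
            .map(|(i, ptr)| (ptr.clone(), AstId(i as u32)))
            .collect();
        AstIdMap { arena, reverse }
    }

    // Previously the equivalent of arena.iter().position(|p| p == ptr),
    // i.e. O(n) per call and accidentally quadratic over a whole file.
    fn ast_id(&self, ptr: &NodePtr) -> AstId {
        self.reverse[ptr]
    }
}
```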
Future work: don't duplicate `SyntaxNodePtr` in `AstIdMap` and switch to
"call site dependency injection" style storage (eg, store a
`HashSet<ErasedFileAstId>`).
See the explanation of the work here on YouTube :-)
As you can see from the benchmark results, this doesn't actually make analysis-stats faster. I am a bit mystified as to why this is happening, to be honest.
Baseline
```
Database loaded: 598.40ms, 304minstr, 118mb (metadata 390.57ms, 21minstr, 841kb; build 111.31ms, 8764kinstr, -214kb)
crates: 39, mods: 824, decls: 18647, fns: 13910
Item Collection: 9.70s, 75ginstr, 377mb
exprs: 382426, ??ty: 387 (0%), ?ty: 285 (0%), !ty: 145
Inference: 43.16s, 342ginstr, 641mb
Total: 52.86s, 417ginstr, 1018mb
```
This PR:
```
Database loaded: 626.34ms, 304minstr, 118mb (metadata 416.26ms, 21minstr, 841kb; build 113.67ms, 8750kinstr, -209kb)
crates: 39, mods: 824, decls: 18647, fns: 13910
Item Collection: 10.16s, 75ginstr, 389mb
exprs: 382426, ??ty: 387 (0%), ?ty: 285 (0%), !ty: 145
Inference: 44.51s, 342ginstr, 644mb
Total: 54.67s, 417ginstr, 1034mb
```
I think we probably should merge the first commit here, but not the second.
Co-authored-by: Aleksey Kladov <aleksey.kladov@gmail.com>
10387: Move `IdxRange` into la-arena r=Veykril a=arzg
Currently, `IdxRange` (named `IdRange`) is located in `hir_def::item_tree`, when really it isn’t specific to `hir_def` and could become part of la-arena. The rename from `IdRange` to `IdxRange` is to maintain consistency with the naming convention used throughout la-arena (`Idx` instead of `Id`, `RawIdx` instead of `RawId`). This PR also adds a few new APIs to la-arena on top of `IdxRange` for convenience, namely:
- indexing into an `Arena` by an `IdxRange` and getting a slice of values back
- creating an `IdxRange` from an inclusive range
Currently this PR also exposes a new `Arena::next_idx` method to make constructing inclusive `IdxRange`s using `IdxRange::new` easier; however, it would in my opinion be better to remove this as it allows for easy creation of out-of-bounds `Idx`s, when `IdxRange::new_inclusive` mostly covers the same use-case while being less error-prone.
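A hypothetical usage sketch of the additions described above (the method names are taken from this description and are not checked against the final la-arena API):

```rust
use la_arena::{Arena, IdxRange};

fn demo() {
    let mut arena: Arena<u32> = Arena::default();
    let first = arena.alloc(1);
    arena.alloc(2);
    let last = arena.alloc(3);

    // an IdxRange built from an inclusive range of indices
    let range = IdxRange::new_inclusive(first..=last);

    // indexing an Arena by an IdxRange yields a slice of the values
    assert_eq!(&arena[range], &[1, 2, 3]);
}
```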
I decided to bump the la-arena version to 0.3.0 from 0.2.0 because adding a new `Index` impl for `Arena` turned out to be a breaking change: I had to add a type hint in `crates/hir_def/src/body/scope.rs` when one wasn’t necessary before, since rustc couldn’t work out the type of a closure parameter now that there are multiple `Index` impls. I’m not sure whether this is the right decision, though.
Co-authored-by: Aramis Razzaghipour <aramisnoah@gmail.com>
This addresses
https://github.com/rust-analyzer/rust-analyzer/issues/10464.
This patch picks up `lsp-types` 0.90.1, which serialises the
`SignatureInformation` and `ParameterInformation` with the right casing.
It also adds the `activeSignature` field as part of the top-level signature
response. It keeps `activeParameter` at the top level for backwards
compatibility.
10181: Beginning of lsif r=HKalbasi a=HKalbasi
This PR adds an `lsif` command to the CLI, which can be used as `rust-analyzer lsif /path/to/project > dump.lsif`. It currently generates a valid but pretty useless LSIF (it only supports folding ranges). The purpose of this PR is to discuss the structure of the LSIF generator before starting anything serious.
cc `@matklad` #8696 #3098
Co-authored-by: hamidreza kalbasi <hamidrezakalbasi@protonmail.com>
Consider these examples

          { 92 }
    async { 92 }
    'a:   { 92 }
    #[a]  { 92 }
Previously the trees for them were

    BLOCK_EXPR
      { ... }

    EFFECT_EXPR
      async
      BLOCK_EXPR
        { ... }

    EFFECT_EXPR
      'a:
      BLOCK_EXPR
        { ... }

    BLOCK_EXPR
      #[a]
      { ... }
As you see, it gets progressively worse :) The last two items are
especially odd. The last one even violates the balanced curly braces
invariant we have (#10357). The new approach is to say that the stuff in
`{}` is a stmt_list, and the block is a stmt_list plus optional modifiers:
    BLOCK_EXPR
      STMT_LIST
        { ... }

    BLOCK_EXPR
      async
      STMT_LIST
        { ... }

    BLOCK_EXPR
      'a:
      STMT_LIST
        { ... }

    BLOCK_EXPR
      #[a]
      STMT_LIST
        { ... }
FragmentKind played two roles:
* entry point to the parser
* syntactic category of a macro call
These are different use-cases and warrant different types. For example,
a macro can't expand to a visibility, but we have such a fragment today.
This PR introduces the `ExpandsTo` enum to separate these two use-cases.
I suspect we might further split `FragmentKind` into `$x:specifier` enum
specific to MBE, and a general parser entry point, but that's for
another PR!
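A plausible shape for the new enum (illustrative; the actual variants in the PR may differ slightly): the syntactic category a macro call expands into, as opposed to the parser entry point used to parse the expansion.

```rust
// Where the result of a macro expansion ends up syntactically.
enum ExpandsTo {
    Statements,
    Items,
    Pattern,
    Type,
    Expr,
}
```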
10066: internal: improve compile times a bit r=matklad a=matklad
I wanted to *quickly* remove `smol_str = {features = "serde"}`, and figured out that the simplest way to do that is to replace our straightforward proc macro serialization with something significantly more obscure.
Co-authored-by: Aleksey Kladov <aleksey.kladov@gmail.com>
This pulls in https://github.com/rust-analyzer/rowan/pull/111, which
fixes a bug in green node hash, making it more efficient.
On analysis stats, total memory goes from 1271mb to 1244mb, instructions
from 358ginstr to 353ginstr (not 100% clear on this one -- for some
reason instruction counts are not stable for me anymore).
The counts are (before, then after):

                                          total   max_live       live
    rowan::green::node::GreenNode    11_490_596  2_357_063  2_233_347
    rowan::green::token::GreenToken   5_010_401    994_281    991_920

    rowan::green::node::GreenNode     9_738_085  1_988_164  1_890_549
    rowan::green::token::GreenToken   3_353_409    687_333    685_831