This commit changes how the expected type is calculated when working
with Fn pointers, so the parentheses no longer vanish when completing
the function name.
I've been bugged by the behaviour of parenthesis completion for a long
while now. R-a assumes that the `LetStmt` type is the same as the type of the
function I've just written. Worse, all parentheses vanish,
even for functions that have completely different signatures. It will
now verify that the signatures actually match.
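To illustrate the scenario with a minimal, made-up example (fixture-style, with `$0` marking the cursor; none of these names come from the actual change):
```rust
fn add_one(x: i32) -> i32 { x + 1 }
fn concat(a: &str, b: &str) -> String { format!("{a}{b}") }

fn main() {
    // The expected type of the initializer is `fn(i32) -> i32`.
    // Completing `add_one` here can reasonably drop the call parens,
    // but `concat` has a different signature, so its parens should stay.
    let f: fn(i32) -> i32 = $0;
}
```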
While working on this, I noticed that record fields behave the same way, so
I also made it prioritize the field type instead of the current
expression when possible, but I'm unsure whether this is OK, so input is
appreciated.
`impl Trait` return types will still behave weirdly because lowering is
disallowed at the point where the function types are resolved.
fix: make callable fields not complete in method access no parens case
Follow-up PR to #15879
Fixes callable field completions appearing in the method-access (no parens) case.
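Roughly the kind of situation involved (a hypothetical sketch, not taken from the PR; `$0` marks the cursor):
```rust
struct S {
    // A field whose type is callable.
    callback: fn(),
}

fn main() {
    let s = S { callback: || () };
    // When completing `s.cal$0` as a method access without inserting call
    // parens, the callable field should no longer be offered as a method.
    let _cb = s.callback;
}
```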
Complete exported macros in `#[macro_use($0)]`
Closes https://github.com/rust-lang/rust-analyzer/issues/15657.
Originally added a test case for incomplete input:
```rust
#[test]
fn completes_incomplete_syntax() {
    check(
        r#"
//- /dep.rs crate:dep
#[macro_export]
macro_rules! foo {
    () => {};
}
//- /main.rs crate:main deps:dep
#[macro_use($0
extern crate dep;
"#,
        expect![[r#"
            ma foo
        "#]],
    )
}
```
but I couldn't make it pass and removed it 😅 Our current recovery logic doesn't work for token trees, and for this code:
```rust
#[macro_use(
extern crate lazy_static;
fn main() {}
```
we ended up with this syntax tree:
```
SOURCE_FILE@0..53
  ATTR@0..52
    POUND@0..1 "#"
    L_BRACK@1..2 "["
    META@2..52
      PATH@2..11
        PATH_SEGMENT@2..11
          NAME_REF@2..11
            IDENT@2..11 "macro_use"
      TOKEN_TREE@11..52
        L_PAREN@11..12 "("
        WHITESPACE@12..13 "\n"
        EXTERN_KW@13..19 "extern"
        WHITESPACE@19..20 " "
        CRATE_KW@20..25 "crate"
        WHITESPACE@25..26 " "
        IDENT@26..37 "lazy_static"
        SEMICOLON@37..38 ";"
        WHITESPACE@38..40 "\n\n"
        FN_KW@40..42 "fn"
        WHITESPACE@42..43 " "
        IDENT@43..47 "main"
        TOKEN_TREE@47..49
          L_PAREN@47..48 "("
          R_PAREN@48..49 ")"
        WHITESPACE@49..50 " "
        TOKEN_TREE@50..52
          L_CURLY@50..51 "{"
          R_CURLY@51..52 "}"
  WHITESPACE@52..53 "\n"
```
Maybe we can try to parse the token tree in `crates/ide-completion/src/context/analysis.rs`, but I'm not sure what the best way forward is.
feat: Prioritize import suggestions based on the expected type
Hi, this is a draft PR to solve #15384. `Adt` types work and now I have a few questions :)
1. What other types make sense in this context? Looking at [ModuleDef](05666441ba/crates/hir/src/lib.rs (L275)) I am thinking everything except Modules.
2. Is there an existing way of converting between `ModuleDef` and `hir::Type` in the rust-analyzer codebase?
3. Does this approach seem sound to you?
Oops: upon writing this I just realised that the enum test is invalid, as there are no enum variants and thus no variant is passed as a function argument.
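For context, a made-up example of the kind of scenario being targeted (names and module layout are assumptions; `$0` marks the cursor):
```rust
mod palette {
    pub struct Color {
        pub rgb: u32,
    }
}

fn paint(color: palette::Color) {}

fn main() {
    // The expected type of the argument is `palette::Color`, so an import
    // suggestion for `Color` should be ranked above unrelated items that
    // merely share a similar name.
    paint(Col$0);
}
```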
Before
Private functions have a `RawVisibility` of module, but this was
missed because `take_types` returned `None` early. After `resolve_visibility`
returned `None`, `Visibility::Public` was set instead, and private functions
ended up being offered in autocompletion.
Choosing such a function results in an immediate error diagnostic
about using a private function.
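A minimal sketch of the kind of code affected (assumed, not copied from the linked issue; `$0` marks the cursor):
```rust
mod inner {
    // Only visible within `inner`.
    fn private_helper() {}

    pub fn public_helper() {}
}

fn main() {
    // Before the fix, `private_helper` could show up in completions here,
    // and picking it immediately triggered a "function is private" error.
    // Only `public_helper` should be offered.
    inner::$0
}
```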
After
When `take_types` returns `None`, pattern match on that case and
query the module-level visibility from the `original_module` instead.
Fixes #15134 - tested with a unit test and a manual end-to-end
test of building rust-analyzer from my branch and opening
the reproduction repository.
REVIEW
Refactored to move `scope_def_applicable` and to check function visibility
from a module.
Please let me know the best way to add a unit test to
nameres, which is where the root cause was.
fix: Insert fn call parens only if parens are inserted around the field name
Fixes #16014.
Sorry I missed it in the previous PR. I've added a test as well to prevent regressions from happening again.
Any suggestions to improve the test are welcome.
TokenMap -> SpanMap rewrite
Opening this early so I can get an overview of the full diff more easily; it's still very unfinished with lots of work to be done.
The gist of what this PR does is move away from assigning IDs to tokens in arguments and expansions, and instead give the subtrees the text ranges they are sourced from (made relative to some item for incrementality). This means we now only have a single map per expansion, as opposed to one map for the expansion and one for the arguments.
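Roughly the idea, as a simplified sketch with made-up names (the actual types and fields in this PR differ):
```rust
// Hypothetical, simplified shapes; not the real rust-analyzer definitions.
struct FileId(u32);
struct ErasedAstId(u32);
struct TextRange { start: u32, end: u32 }

/// The item (file + AST id) that a range is measured from, so that edits
/// elsewhere in the file do not invalidate spans inside this item.
struct SpanAnchor {
    file_id: FileId,
    ast_id: ErasedAstId,
}

/// Attached directly to tokens/subtrees in macro inputs and expansions,
/// carrying the source range itself instead of an ID that has to be
/// looked up in a per-expansion token map.
struct Span {
    range: TextRange, // relative to `anchor`, for incrementality
    anchor: SpanAnchor,
}
```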
A few of the things that are not done yet (in arbitrary order):
- [x] generally clean up the current mess
- [x] proc-macros, have been completely ignored so far
- [x] syntax fixups, which have been commented out for the time being; they need to be rewritten on top of some marker SyntaxContextId
- [x] macro invocation syntax contexts are not properly passed around yet, so $crate hygiene does not work in all cases (but it does in most)
- [x] builtin macros do not set spans properly; $crate basically does not work with them right now (and we use it)
~~- [ ] remove all uses of dummy spans (or if that does not work, change the dummy entries for dummy spans so that tests will not silently pass due to having a file id for the dummy file)~~
- [x] de-queryfy `macro_expand`, the sole caller of it is `parse_macro_expansion`, and both of these are lru-cached with the same limit so having it be a query is pointless
- [x] docs and more docs
- [x] fix eager macro spans and other stuff
- [x] simplify include! handling
- [x] Figure out how to undo the sudden `()` expression wrapping in expansions / alternatively prioritize getting invisible delimiters working again
- [x] Simplify InFile stuff and HirFileId extensions
~~- [ ] span crate containing all the file ids, span stuff, ast ids. Then remove the dependency injection generics from tt and mbe~~
Fixes https://github.com/rust-lang/rust-analyzer/issues/10300
Fixes https://github.com/rust-lang/rust-analyzer/issues/15685