fix: ensure implied bounds from associated types are considered in autocomplete
closes: #16989
rust-analyzer needs to consider implied bounds from associated types in order to produce all the method suggestions people expect. A fairly easy way to do that is to keep the `candidate_trait_id`'s receiver if it matches `TyFingerprint::Unnameable`. When benchmarking this change, I didn't notice a meaningful difference in autocomplete latency.
(`TyFingerprint::Unnameable` corresponds to `TyKind::AssociatedType`, `TyKind::OpaqueType`, `TyKind::FnDef`, `TyKind::Closure`, `TyKind::Coroutine`, and `TyKind::CoroutineWitness`.)
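To illustrate the kind of completion this enables (a hypothetical example, not code from the PR): methods implied by the bounds on an associated type should now be suggested on values of that associated type.

```rust
trait Repository {
    // The `Clone` bound here is an implied bound on the associated type.
    type Item: Clone;
    fn first(&self) -> Self::Item;
}

fn demo<R: Repository>(repo: R) {
    let item = repo.first();
    // Completing after `item.` should now suggest methods coming from the
    // `Clone` bound on `R::Item`, e.g. `clone()`.
    let _copy = item.clone();
}
```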
Clear diagnostics only after new ones have been received
Closes #15934
This adds a flag to the global state that controls when old diagnostics are cleared. Old diagnostics are now cleared only after at least one new diagnostic has been received.
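A minimal sketch of the idea, using hypothetical names rather than the actual rust-analyzer fields: the previous run's diagnostics are kept around until the first new diagnostic arrives, and only then cleared.

```rust
struct DiagnosticsState {
    old: Vec<String>,   // diagnostics kept from the previous check run
    new: Vec<String>,   // diagnostics received in the current run
    received_new: bool, // flipped when the first new diagnostic arrives
}

impl DiagnosticsState {
    fn on_new_diagnostic(&mut self, diag: String) {
        if !self.received_new {
            // Only clear the stale diagnostics once we know fresh ones exist,
            // so the editor never shows an empty diagnostics list in between.
            self.old.clear();
            self.received_new = true;
        }
        self.new.push(diag);
    }
}
```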
feat: More callable info
With this PR we retain more info about callables other than functions, allowing for closure parameter type inlay hints to be linkable as well as better signature help around closures and `Fn*` implementors.
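An illustrative example of the situations this improves (the names below are made up for the example):

```rust
// A higher-order function taking an `Fn` implementor.
fn call_with<F: Fn(&str, usize) -> bool>(f: F) -> bool {
    f("hello", 5)
}

fn demo() {
    // The inlay hints for the closure parameters `s` (`&str`) and `n`
    // (`usize`) can now link to their types, and signature help is available
    // when calling closures and other `Fn*` implementors.
    call_with(|s, n| s.len() == n);
}
```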
Fix OOM caused by term search
The issue came from taking the multi Cartesian product of expressions with many (25+) arguments, each of which has multiple options.
The solution is twofold:
### Avoid blowing up in Cartesian product
**Before, the logic was:**
1. Find expressions for each argument/param (there may be many).
2. Take the Cartesian product (which blows up in some cases).
3. If there are more than 2 options, throw them away by squashing them to `Many`.
**Now, the logic is:**
1. Find expressions for each argument/param and squash them to `Many` if there are more than 2, since in that case we are guaranteed to end up with more than 2 combinations after taking the product, which would mean squashing them anyway.
2. Take the Cartesian product lazily, as an iterator.
3. Start consuming it one by one.
4. If there are more than 2 options, throw them away by squashing them to `Many` (same as before).
This is also why I had to update some tests, as expressions now get squashed to `Many` more eagerly.
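A rough sketch of the new flow, using simplified stand-in types and `itertools` for the product (none of this is the actual term-search code):

```rust
use itertools::Itertools;

#[derive(Clone, Debug)]
enum Expr {
    Concrete(String),
    Many, // placeholder meaning "too many options to track individually"
}

fn argument_combinations(per_arg_options: Vec<Vec<Expr>>) -> Vec<Vec<Expr>> {
    const LIMIT: usize = 2;

    // Step 1: squash each argument's options *before* the product, so the
    // product itself can never blow up.
    let squashed: Vec<Vec<Expr>> = per_arg_options
        .into_iter()
        .map(|opts| if opts.len() > LIMIT { vec![Expr::Many] } else { opts })
        .collect();

    // Steps 2-3: take the product lazily and consume it one combination at a
    // time, stopping as soon as the limit is exceeded.
    let mut combos: Vec<Vec<Expr>> = squashed
        .into_iter()
        .multi_cartesian_product()
        .take(LIMIT + 1)
        .collect();

    // Step 4: if more than `LIMIT` combinations exist, squash them to `Many`.
    if combos.len() > LIMIT {
        combos = vec![vec![Expr::Many]];
    }
    combos
}
```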
### Use fuel to avoid long search times and high memory usage
Now all the tactics use a `should_continue: Fn() -> bool` callback to check whether they should keep iterating _(similar to Chalk)_.
This reduces the search time by roughly an order of magnitude, for example from ~139ms/hole to ~14ms/hole for the `ripgrep` crate.
Slightly fewer expressions are found, but I think the speed gain is worth it for usability.
Also note that syntactic hits decrease more because of the squashing, so you simply need to run the search multiple times to get full terms.
The worst-case search times (for example on the `nalgebra` crate, because it has tons of generics) are now mostly under 200ms.
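A minimal sketch of the fuel mechanism, with hypothetical names (the actual tactics thread such a callback through their search loops):

```rust
use std::cell::Cell;

fn run_term_search(fuel: u64) {
    let remaining = Cell::new(fuel);

    // Every tactic checks this before expanding further candidates; once the
    // budget is exhausted the whole search stops, bounding time and memory.
    let should_continue = || {
        let left = remaining.get();
        remaining.set(left.saturating_sub(1));
        left > 0
    };

    while should_continue() {
        // ... expand one more candidate expression ...
    }
}
```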
Benchmarks on the `ripgrep` crate:
Before:
```
Tail Expr syntactic hits: 291/1692 (17%)
Tail Exprs found: 1253/1692 (74%)
Term search avg time: 139ms
```
After:
```
Tail Expr syntactic hits: 239/1692 (14%)
Tail Exprs found: 1226/1692 (72%)
Term search avg time: 14ms
```
fix: keep parentheses when the precedence of the inner expr is lower than that of the outer one
fix #17185
Additionally, this PR simplifies some code in `apply_demorgan`.
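An illustrative case of why the parentheses matter (made up for this example, not necessarily the exact expression from the issue):

```rust
fn demo(a: bool, b: bool, c: bool) -> bool {
    // Applying De Morgan's law here negates each operand: the result should be
    // `!(a || b) || !c`. The inner `a || b` has lower precedence than the
    // negation applied to it, so its parentheses must be kept; dropping them
    // would produce `!a || b || !c`, which means something different.
    !((a || b) && c)
}
```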