Expand lint tables && make clippy happy 🎉
This PR expands the lint tables in `./Cargo.toml` and thereby makes `cargo clippy` exit successfully! 🎉 Fixes #15918
## How?
At the top there are some warnings for rustc.
Next, and most importantly, there is the clippy lint table. It has a few sections:
1. the lint groups;
2. the lints that are permanently allowed, each with the reasoning for why it is allowed;
3. a huge list of temporarily allowed lints: these should be removed in the mid-term, but fixing them incurs a substantial amount of work, so they are allowed for now and can be worked on bit by bit;
4. the lints that should warn.
Additionally, there are a few `allow` attributes in the code for lints that should be permanently allowed in that specific place, but not across the whole code base.
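For illustration, a site-specific allow looks roughly like this (the lint and function below are hypothetical examples, not taken from this PR):

```rust
// Hypothetical example of a site-specific allow: the lint is silenced only for
// this one function instead of being allowed for the whole workspace.
#[allow(clippy::too_many_arguments)]
fn apply_settings(_a: u8, _b: u8, _c: u8, _d: u8, _e: u8, _f: u8, _g: u8, _h: u8) {
    // ...
}
```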
## Follow up work
- [ ] Run clippy in CI
- [ ] Remove the tidy test (at least that is what `@Veykril` wrote in #15017)
- [ ] Work on temporarily allowed lints
internal: Record FnAbi
This unfortunately breaks our lub coercions, so I will need to look into fixing that first, though I am not sure what is going wrong where...
Stubbed some stuff out for the time being.
`cargo clippy --fix`
This PR is the result of running `cargo clippy --fix && cargo fmt` in the root of the repository. I did not manually review all the changes, but just skimmed through a few of them. The tests still pass, so it seems fine.
Optimize large array creation in const-eval
This changes the repeated memcpys into a single memset for the case where we're propagating a single byte into a region of memory. It also optimizes the element-by-element copies to use a tighter loop; I'm pretty sure the old code was doing a multiply within each loop iteration.
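A rough sketch of the idea (the helper below is hypothetical and not the actual const-eval interpreter code):

```rust
/// Hypothetical sketch: fill `dest` with repeated copies of `elem`.
/// Not the actual const-eval interpreter code.
fn fill_with_element(dest: &mut [u8], elem: &[u8]) {
    assert!(!elem.is_empty());
    assert_eq!(dest.len() % elem.len(), 0);
    if let [byte] = *elem {
        // Single-byte element: one memset over the whole region instead of
        // one memcpy per element.
        dest.fill(byte);
    } else {
        // Element-by-element copy, advancing by the element size rather than
        // recomputing `i * elem.len()` on every iteration.
        for chunk in dest.chunks_exact_mut(elem.len()) {
            chunk.copy_from_slice(elem);
        }
    }
}
```

For `[0u8; 1 << 33]` the element is a single zero byte, so in this sketch the whole region is handled by the `fill` branch.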
For an 8 GB array (`static SLICE: [u8; SIZE] = [0u8; 1 << 33];`), this takes us from ~23 seconds to ~6 seconds locally. That time is spent roughly 50/50 in (a) the memset to zero and (b) the memcpy of the original place into a new place when popping the stack frame. The latter seems hard to avoid, but it is a single big memcpy (we're copying the type rather than initializing a region, so it's pretty fast), and the former is as good as it's going to get without special-casing constant-valued arrays.
Closes https://github.com/rust-lang/rust/issues/55795. (That issue's references to lint checking no longer appear accurate, but I think this pretty much closes that case as far as *time* goes: an 8 GB array taking only 6 seconds seems reasonable enough not to merit further tracking.)
Get rid of the hir_owner query.
This query was meant as a firewall between `hir_owner_nodes`, which is expected to change often, and the queries that only depend on the item signature. That firewall was inefficient, as it leaked the contents of the HIR body through `HirId`s.
`hir_owner` also incurs a significant cost, since we need to hash the HIR twice, in multiple modes. This PR proposes to remove it and to simplify the hashing scheme.
Going forward, `def_kind`, `def_span`... are much more efficient for incremental decoupling, and should be preferred.
[rustc_data_structures] Use partition_point to find slice range end.
This PR uses the approach introduced in https://github.com/rust-lang/rust/pull/114152 to find the end of the range. It is much easier to understand and to reason about the invariants of such an implementation.
Technically, it is possible to make the implementation even shorter by returning `&[start..end]` unconditionally, because even if the searched item is not present in the slice, `start` and `end` would point at the same index, so the range would be empty. I decided not to use this shorter implementation because it would involve more comparisons when the slice contains no elements with a key equal to `key`.
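For reference, a minimal sketch of the technique (the `equal_range` name and signature are hypothetical, not the actual `rustc_data_structures` code):

```rust
/// Hypothetical sketch: return the sub-slice of `data` whose keys equal `key`,
/// assuming `data` is sorted by `key_fn`. Not the actual rustc_data_structures code.
fn equal_range<'a, E, K: Ord>(data: &'a [E], key_fn: impl Fn(&E) -> K, key: &K) -> &'a [E] {
    // First index whose key is not less than `key`.
    let start = data.partition_point(|e| key_fn(e) < *key);
    // Within the remaining tail, first index whose key is greater than `key`.
    let end = start + data[start..].partition_point(|e| key_fn(e) == *key);
    // Unconditional variant: if `key` is absent, `start == end` and the range is empty.
    &data[start..end]
}
```

With input `[1, 2, 2, 2, 5]`, an identity key function, and key `2`, this returns the sub-slice `[2, 2, 2]`. Note that this sketch is the shorter, unconditional variant described above, which the PR deliberately avoids in order to save comparisons when `key` is absent.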
Not that it matters much, but this implementation also improves performance according to the benchmark below:
https://gist.github.com/ttsugriy/63c0ed39ae132b131931fa1f8a3dea55
The results on my M1 MacBook Air are:
```
Running benches/bin_search_slice_benchmark.rs (target/release/deps/bin_search_slice_benchmark-90fa6d68c3bd1298)
Benchmarking multiply add/binary_search_slice: Collecting 100 samples in estimated 5.0002 s (1
multiply add/binary_search_slice
time: [44.719 ns 44.918 ns 45.158 ns]
No change in performance detected.
Found 3 outliers among 100 measurements (3.00%)
1 (1.00%) high mild
2 (2.00%) high severe
Benchmarking multiply add/binary_search_slice_new: Collecting 100 samples in estimated 5.0001
multiply add/binary_search_slice_new
time: [36.955 ns 37.060 ns 37.221 ns]
No change in performance detected.
Found 7 outliers among 100 measurements (7.00%)
3 (3.00%) high mild
4 (4.00%) high severe
```
Rollup of 8 pull requests
Successful merges:
- #119172 (Detect `NulInCStr` error earlier.)
- #119833 (Make tcx optional from StableMIR run macro and extend it to accept closures)
- #119967 (Add `PatKind::Err` to AST/HIR)
- #119978 (Move async closure parameters into the resultant closure's future eagerly)
- #120021 (don't store const var origins for known vars)
- #120038 (Don't create a separate "basename" when naming and opening a MIR dump file)
- #120057 (Don't ICE when deducing future output if other errors already occurred)
- #120073 (Remove spastorino from users_on_vacation)
r? `@ghost`
`@rustbot` modify labels: rollup