Fix inconsistent rounding of 0.5 when formatted to 0 decimal places
As described in #70336, when displaying values to zero decimal places, the value of 0.5 is rounded to 1, which is inconsistent with the display of other half-integer values, which round to even.
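A minimal illustration of the inconsistency (assuming the behaviour described in #70336; outputs shown as comments):
```rust
fn main() {
    println!("{:.0}", 0.5_f64); // currently prints "1"
    println!("{:.0}", 1.5_f64); // prints "2" (round half to even)
    println!("{:.0}", 2.5_f64); // prints "2" (round half to even)
    // With this change, 0.5 also rounds to even and prints "0".
}
```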
From testing the flt2dec implementation, it looks like this comes down to the condition in the fixed-width Dragon implementation where an empty buffer is treated as a case to apply rounding up. I believe the change below fixes it and updates only the relevant tests.
Nevertheless, I am aware this is very much a core piece of functionality, so please take a very careful look to make sure I haven't missed anything. I hope this change does not break anything in the wider ecosystem, as consistent rounding behaviour in floating-point formatting is, in my opinion, a useful feature to have.
Resolves #70336
interpret: support for per-byte provenance
Also factors the provenance map into its own module.
The third commit does the same for the init mask. I can move it to a separate PR if you prefer.
Fixes https://github.com/rust-lang/miri/issues/2181
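Roughly the kind of code this is about (a sketch based on the linked Miri issue, not its exact reproducer): copying a pointer's bytes individually and using the reassembled pointer, which requires the interpreter to track provenance per byte rather than per pointer-sized chunk.
```rust
use std::mem::{size_of, MaybeUninit};

fn main() {
    let x = 42usize;
    let ptr = &x as *const usize;

    unsafe {
        // View the pointer's own bytes and copy them one at a time, as a
        // bytewise memcpy would. Each byte must carry its share of the
        // original pointer's provenance for the final read to be valid.
        let src = &ptr as *const *const usize as *const MaybeUninit<u8>;
        let mut dst = [MaybeUninit::<u8>::uninit(); size_of::<*const usize>()];
        for i in 0..dst.len() {
            dst[i] = *src.add(i);
        }
        // Reassemble the pointer from the copied bytes and dereference it.
        let ptr2 = (dst.as_ptr() as *const *const usize).read_unaligned();
        assert_eq!(*ptr2, 42);
    }
}
```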
r? `@oli-obk`
Add new MIR constant propagation based on dataflow analysis
The current constant propagation in `rustc_mir_transform/src/const_prop.rs` fails to handle many cases that would be expected from a constant propagation optimization. For example:
```rust
let x = if true { 0 } else { 0 };
```
This pull request adds a new constant propagation MIR optimization pass based on the existing dataflow analysis framework. Since most of the analysis is not unique to constant propagation, a generic framework has been extracted. It works on top of the existing framework and could be reused for other optimizations.
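For illustration, a case the dataflow-based pass is meant to handle (my own example, not taken from the PR), relying on its tracking of scalar values through projections such as tuple fields:
```rust
fn main() {
    let a = (1, 2);
    // Both operands are known constants at this point, so `y` can be
    // folded to `3` by the new pass.
    let y = a.0 + a.1;
    println!("{y}");
}
```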
Closes #80038. Closes #81605.
## Todo
### Essential
- [x] [Writes to inactive enum variants](https://github.com/rust-lang/rust/pull/101168#pullrequestreview-1089493974). Resolved by rejecting the registration of places with downcast projections for now. Could be improved by flooding other variants if mutable access to a variant is observed.
- [x] Handle [`StatementKind::CopyNonOverlapping`](https://github.com/rust-lang/rust/pull/101168#discussion_r957774914). Resolved by flooding the destination.
- [x] Handle `UnsafeCell` / `!Freeze` correctly.
- [x] Overflow propagation of `CheckedBinaryOp`: decided to not propagate if the overflow flag is `true` (`false` will still be propagated).
- [x] More documentation in general.
- [x] Arguments for correctness, documentation of necessary assumptions.
- [x] Better performance, or alternatively, require `-Zmir-opt-level=3` for now.
### Extra
- [x] Add explicit unreachability, i.e. upgrading the lattice from $\mathbb{P} \to \mathbb{V}$ to $\{\bot\} \cup (\mathbb{P} \to \mathbb{V})$.
- [x] Use storage statements to improve precision.
- [ ] Consider opening an issue for duplicate diagnostics: https://github.com/rust-lang/rust/pull/101168#issuecomment-1276609950
- [ ] Flood moved-from places with $\bot$ (requires some changes for places with tracked projections).
- [ ] Add downcast projections back in.
- [ ] [Algebraic simplifications](https://github.com/rust-lang/rust/pull/101168#discussion_r957967878) (possibly with a shared API; done by old const prop).
- [ ] Propagation through slices / arrays.
- [ ] Find other optimizations that are done by old `const_prop.rs`, but not by this one.
Wrap bundled static libraries into object files
Fixes #103044 (not sure, I couldn't test locally)
Bundled static libraries should be wrapped into object files, just as is done for the metadata file.
r? `@petrochenkov`
Change the way libunwind is linked for *-windows-gnullvm targets
I have no idea why the previous approach works for `x86_64-fortanix-unknown-sgx` (assuming it actually works...) but not for `gnullvm`. It fails when linking libtest during the Rust build (unless somebody adds `RUSTFLAGS='-Clink-arg=-lunwind'`).
Also fixes exception handling on AArch64.
Merge crossbeam-channel into `std::sync::mpsc`
This PR imports the [`crossbeam-channel`](https://github.com/crossbeam-rs/crossbeam/tree/master/crossbeam-channel#crossbeam-channel) crate into the standard library as a private module, `sync::mpmc`. `sync::mpsc` is now implemented as a thin wrapper around `sync::mpmc`. The primary purpose of this PR is to resolve https://github.com/rust-lang/rust/issues/39364. The public API intentionally remains the same.
The reason https://github.com/rust-lang/rust/issues/39364 has not been fixed in over 5 years is that the current channel is *incredibly* complex. It was written many years ago and has sat mostly untouched since. `crossbeam-channel` has become the most popular alternative on crates.io, amassing over 30 million downloads. While crossbeam's channel is also complex, like all fast concurrent data structures, it avoids some of the major issues with the current implementation around dynamic flavor upgrades. The new implementation decides on the data structure to be used when the channel is created, and the channel retains that structure until it is dropped.
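For reference, nothing changes at the usage level; the example below uses only the standard `mpsc` API and keeps working as before, while the comment summarises the internal difference described above:
```rust
use std::sync::mpsc;

fn main() {
    // Old internals: cloning the sender could force a runtime "flavor"
    // upgrade of the underlying structure. New internals: the data
    // structure is chosen once when the channel is created and kept.
    let (tx, rx) = mpsc::channel();
    let tx2 = tx.clone();
    tx.send(1).unwrap();
    tx2.send(2).unwrap();
    assert_eq!(rx.iter().take(2).sum::<i32>(), 3);
}
```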
Replacing `sync::mpsc` with a simpler, less performant implementation has been discussed as an alternative. However, Rust touts itself as enabling *fearless concurrency*, and having the standard library feature a subpar implementation of a core concurrency primitive doesn't feel right. The argument is that slower is better than broken, but this PR shows that we can do better.
As mentioned before, the primary purpose of this PR is to fix https://github.com/rust-lang/rust/issues/39364, and so the public API intentionally remains the same. *After* that problem is fixed, the fact that `sync::mpmc` now exists makes it easier to fix the primary limitation of `mpsc`, the fact that it only supports a single consumer. spmc and mpmc are two other common concurrency patterns, and this change enables a path to deprecating `mpsc` and exposing a general `sync::channel` module that supports multiple consumers. It also implements other useful methods such as `send_timeout`. That said, exposing MPMC and other new functionality is mostly out of scope for this PR, and it would be helpful if discussion stays on topic :)
For what it's worth, the new implementation has also been shown to be more performant in [some basic benchmarks](https://github.com/crossbeam-rs/crossbeam/tree/master/crossbeam-channel/benchmarks#results).
cc `@taiki-e`
r? rust-lang/libs
Improve performance of `rem_euclid()` for signed integers
This code is copied from https://github.com/rust-lang/rust/blob/master/library/std/src/f32.rs and https://github.com/rust-lang/rust/blob/master/library/std/src/f64.rs. Using `r + rhs.abs()` is faster than computing the same result with an if clause. Bench results:
```
$ cargo bench
Compiling div-euclid v0.1.0 (/me/div-euclid)
Finished bench [optimized] target(s) in 1.01s
Running unittests src/lib.rs (target/release/deps/div_euclid-7a4530ca7817d1ef)
running 7 tests
test tests::it_works ... ignored
test tests::bench_aaabs ... bench: 10,498,793 ns/iter (+/- 104,360)
test tests::bench_aadefault ... bench: 11,061,862 ns/iter (+/- 94,107)
test tests::bench_abs ... bench: 10,477,193 ns/iter (+/- 81,942)
test tests::bench_default ... bench: 10,622,983 ns/iter (+/- 25,119)
test tests::bench_zzabs ... bench: 10,481,971 ns/iter (+/- 43,787)
test tests::bench_zzdefault ... bench: 11,074,976 ns/iter (+/- 29,633)
test result: ok. 0 passed; 0 failed; 1 ignored; 6 measured; 0 filtered out; finished in 19.35s
```
It seems the default `rem_euclid` benefits from branch prediction, which is why `bench_default` is faster than `bench_aadefault` and `bench_zzdefault`, which shuffle the order of the calculations. But all of them are slower than the `abs`-based approach already used by `f32`'s and `f64`'s `rem_euclid`, hence this PR.
bench code:
```rust
#![feature(test)]
extern crate test;
fn rem_euclid(a: i32, rhs: i32) -> i32 {
    let r = a % rhs;
    if r < 0 { r + rhs.abs() } else { r }
}

#[cfg(test)]
mod tests {
    use super::*;
    use test::Bencher;
    use rand::prelude::*;
    use rand::rngs::SmallRng;

    const N: i32 = 1000;

    #[test]
    fn it_works() {
        let a: i32 = 7; // or any other integer type
        let b = 4;
        let d: Vec<i32> = (-N..=N).collect();
        let n: Vec<i32> = (-N..0).chain(1..=N).collect();
        for i in &d {
            for j in &n {
                assert_eq!(i.rem_euclid(*j), rem_euclid(*i, *j));
            }
        }
        assert_eq!(rem_euclid(a, b), 3);
        assert_eq!(rem_euclid(-a, b), 1);
        assert_eq!(rem_euclid(a, -b), 3);
        assert_eq!(rem_euclid(-a, -b), 1);
    }

    #[bench]
    fn bench_aaabs(b: &mut Bencher) {
        let mut d: Vec<i32> = (-N..=N).collect();
        let mut n: Vec<i32> = (-N..0).chain(1..=N).collect();
        let mut rng = SmallRng::from_seed([1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 21]);
        n.shuffle(&mut rng);
        d.shuffle(&mut rng);
        n.shuffle(&mut rng);
        b.iter(|| {
            let mut res = 0;
            for i in &d {
                for j in &n {
                    res += rem_euclid(*i, *j);
                }
            }
            res
        });
    }

    #[bench]
    fn bench_aadefault(b: &mut Bencher) {
        let mut d: Vec<i32> = (-N..=N).collect();
        let mut n: Vec<i32> = (-N..0).chain(1..=N).collect();
        let mut rng = SmallRng::from_seed([1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 21]);
        n.shuffle(&mut rng);
        d.shuffle(&mut rng);
        n.shuffle(&mut rng);
        b.iter(|| {
            let mut res = 0;
            for i in &d {
                for j in &n {
                    res += i.rem_euclid(*j);
                }
            }
            res
        });
    }

    #[bench]
    fn bench_abs(b: &mut Bencher) {
        let d: Vec<i32> = (-N..=N).collect();
        let n: Vec<i32> = (-N..0).chain(1..=N).collect();
        b.iter(|| {
            let mut res = 0;
            for i in &d {
                for j in &n {
                    res += rem_euclid(*i, *j);
                }
            }
            res
        });
    }

    #[bench]
    fn bench_default(b: &mut Bencher) {
        let d: Vec<i32> = (-N..=N).collect();
        let n: Vec<i32> = (-N..0).chain(1..=N).collect();
        b.iter(|| {
            let mut res = 0;
            for i in &d {
                for j in &n {
                    res += i.rem_euclid(*j);
                }
            }
            res
        });
    }

    #[bench]
    fn bench_zzabs(b: &mut Bencher) {
        let mut d: Vec<i32> = (-N..=N).collect();
        let mut n: Vec<i32> = (-N..0).chain(1..=N).collect();
        let mut rng = SmallRng::from_seed([1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 21]);
        d.shuffle(&mut rng);
        n.shuffle(&mut rng);
        d.shuffle(&mut rng);
        b.iter(|| {
            let mut res = 0;
            for i in &d {
                for j in &n {
                    res += rem_euclid(*i, *j);
                }
            }
            res
        });
    }

    #[bench]
    fn bench_zzdefault(b: &mut Bencher) {
        let mut d: Vec<i32> = (-N..=N).collect();
        let mut n: Vec<i32> = (-N..0).chain(1..=N).collect();
        let mut rng = SmallRng::from_seed([1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 21]);
        d.shuffle(&mut rng);
        n.shuffle(&mut rng);
        d.shuffle(&mut rng);
        b.iter(|| {
            let mut res = 0;
            for i in &d {
                for j in &n {
                    res += i.rem_euclid(*j);
                }
            }
            res
        });
    }
}
```
Resolve lifetimes independently for each item-like.
Now that the heavy lifting is done on the AST and during lowering, we no longer need to perform HIR lifetime resolution on a full item at once. Instead, we can treat each item-like independently, and only look at the parent's `generics_of` in the exceptional case of associated items.
Remove lock wrappers in `sys_common`
This moves the lazy allocation to `sys` (SGX and UNIX). While this leads to a bit more verbosity, it will simplify future improvements by making room in `sys_common` for platform-independent implementations.
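As a rough sketch of what "lazy allocation" means here (hypothetical `RawMutex` stand-in and simplified orderings; the actual `sys` implementations differ in detail):
```rust
use std::ptr;
use std::sync::atomic::{AtomicPtr, Ordering};

// Stand-in for an OS lock that needs a stable address once it is in use.
struct RawMutex { /* platform state */ }

struct LazyMutex {
    ptr: AtomicPtr<RawMutex>,
}

impl LazyMutex {
    const fn new() -> Self {
        LazyMutex { ptr: AtomicPtr::new(ptr::null_mut()) }
    }

    // Allocate the OS lock on first use; racing threads agree on one box.
    fn get(&self) -> *mut RawMutex {
        let p = self.ptr.load(Ordering::Acquire);
        if !p.is_null() {
            return p;
        }
        let new = Box::into_raw(Box::new(RawMutex { /* init */ }));
        match self.ptr.compare_exchange(ptr::null_mut(), new, Ordering::AcqRel, Ordering::Acquire) {
            Ok(_) => new,
            // Lost the race: free our allocation and use the winner's.
            Err(existing) => unsafe {
                drop(Box::from_raw(new));
                existing
            },
        }
    }
}

fn main() {
    static LOCK: LazyMutex = LazyMutex::new();
    assert!(!LOCK.get().is_null());
}
```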
This also removes the condvar check on SGX as it is not necessary for soundness and will be removed anyway once mutex has been made movable.
For simplicity's sake, `libunwind` also uses lazy allocation now on SGX. This will require an update to the C definitions before merging this (cc `@raoulstrackx`).
r? `@m-ou-se`
Retry failed macro matching for diagnostics
When a declarative macro fails to match, retry the matching to collect diagnostic info instead of collecting it on the fly in the hot path. Split out of #103439.
You made a bunch of changes to declarative macro matching, so
r? `@nnethercote`
This change should produce a few small perf wins: https://github.com/rust-lang/rust/pull/103439#issuecomment-1294249602
Recover wrong-cased keywords that start items
(_this pr was inspired by [this tweet](https://twitter.com/Azumanga/status/1552982326409367561)_)
r? `@estebank`
We've talked a bit about this recovery, but I just wanted to make sure that this is the right approach :)
For now I've only added the case-insensitive recovery to `use`s, since most other items like `impl` blocks, modules, and functions can start with multiple keywords, which complicates the matter.
Use 64 bits for incremental cache in-file positions
We currently use a 32-bit integer to encode byte positions into the incremental cache.
This is not enough when the query cache file is larger than 4 GB.
As the overflow check was a `debug_assert`, it was removed in released compilers, making compilation succeed silently.
At the next compilation, cache decoding would try to read unrelated data because of the garbled file position, triggering an ICE.
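A tiny illustration of the failure mode (not the actual cache encoder): offsets past 4 GiB silently wrap once truncated to 32 bits.
```rust
fn main() {
    let pos: u64 = 5 << 30; // an offset 5 GiB into the query cache file
    let truncated = pos as u32; // wraps to 1 GiB; the debug_assert that would catch this is compiled out
    assert_ne!(pos, u64::from(truncated));
    println!("{pos} -> {truncated}");
}
```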
Fixes https://github.com/rust-lang/rust/issues/79786
(I'm closing that bug since the original report and the subsequent questions are probably different instances. A new bug should be opened for any new instances of that ICE.)
Rollup of 9 pull requests
Successful merges:
- #102763 (Some diagnostic-related nits)
- #103443 (Parser: Recover from using colon as path separator in imports)
- #103675 (remove redundent "<>" for ty::Slice with reference type)
- #104046 (bootstrap: add support for running Miri on a file)
- #104115 (Migrate crate-search element to CSS variables)
- #104190 (Ignore "Change InferCtxtBuilder from enter to build" in git blame)
- #104201 (Add check in GUI test for file loading failure)
- #104211 (⬆️ rust-analyzer)
- #104231 (Update mailmap)
Failed merges:
- #104169 (Migrate `:target` rules to use CSS variables)
r? `@ghost`
`@rustbot` modify labels: rollup
Add check in GUI test for file loading failure
Since https://github.com/rust-lang/rust/pull/101702, some resource locations need to be updated when their content changes, because their hash changes too. This will prevent errors like https://github.com/rust-lang/rust/pull/104114 from happening again.
The second commit is to prevent CORS errors: when a file is linked from a file that is itself imported, the web browser considers it to come from a different domain and therefore triggers the error. The option tells the web browser to ignore this case.
cc ```@jsha```
r? ```@notriddle```
Ignore "Change InferCtxtBuilder from enter to build" in git blame
Because it changed the indentation of many things, this commit produced a large diff with no functional changes, so we should ignore it in git blame.
r? ```@compiler-errors``` as you've complained about this before
The relevant commit: 283abbf0e7
bootstrap: add support for running Miri on a file
This enables:
```
./x.py run src/tools/miri --stage 0 --args src/tools/miri/tests/pass/hello.rs
```
That can be super helpful for debugging.
Also avoid sharing the Miri sysroot dir with a system-wide (rustup-managed) installation of Miri.
Fixes https://github.com/rust-lang/rust/issues/76666
Parser: Recover from using colon as path separator in imports
I don't know if this is the right approach, any feedback is welcome.
r? ```@compiler-errors```
Fixes #103269
Some diagnostic-related nits
1. Use `&mut Diagnostic` instead of `&mut DiagnosticBuilder<'_, T>`
2. Make `diag.span_suggestions` take an `IntoIterator` instead of an `Iterator`, just to remove some `.into_iter` calls at the call sites (see the sketch below).
idk if I should add a lint to make sure people use `&mut Diagnostic` instead of `&mut DiagnosticBuilder<'_, T>` in cases where we're just, e.g., adding subdiagnostics to the diagnostic... maybe a followup.
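A general sketch of the ergonomic difference in point 2 (generic example, not rustc's exact signatures):
```rust
fn takes_iterator(suggestions: impl Iterator<Item = String>) -> usize {
    suggestions.count()
}

fn takes_into_iterator(suggestions: impl IntoIterator<Item = String>) -> usize {
    suggestions.into_iter().count()
}

fn main() {
    let candidates = vec!["a".to_string(), "b".to_string()];
    // With an `Iterator` bound, every caller has to write `.into_iter()`.
    assert_eq!(takes_iterator(candidates.clone().into_iter()), 2);
    // With `IntoIterator`, the collection can be passed directly.
    assert_eq!(takes_into_iterator(candidates), 2);
}
```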
Const Compare for Tuples
Makes the impls for tuples of `~const PartialEq` types also `~const PartialEq`, the impls for tuples of `~const PartialOrd` types also `~const PartialOrd`, and the impls for tuples of `~const Ord` types also `~const Ord`, all behind the `#![feature(const_cmp)]` gate.
~~Do not merge before #104113 is merged because I want to use this feature to clean up the new test that I added there.~~
r? ``@fee1-dead``
Don't intra linkcheck reference
This removes the reference from the intra-doc link checks. The current setup causes problems: whenever any of the reference content needs to change, the linkchecker breaks. The reference has its own broken-link check (https://github.com/rust-lang/reference/tree/master/style-check), which uses pulldown-cmark on the source to find actual broken links (instead of false positives like this regex does).
I think the intra-doc link check could potentially be removed completely, since I think rustdoc is now checking for them well enough. However, it may serve as a decent regression check.
Refactor build-manifest to minimize the number of changes needed to add a new component
- Add all components to `PkgType`
- Automate functionality wherever possible, so functions often don't have to be manually edited
- Where that's not possible, use exhaustive matches on `PkgType` instead of adding individual strings.
- Add documentation for how to add a component. Improve the existing documentation for how to test changes.
I tested locally that this generates an identical manifest before and after my change, as follows:
```sh
git checkout d44e14225ab00e164aa9ea9e8d9e1bee40f96b3e
cargo +nightly run --manifest-path src/tools/build-manifest/Cargo.toml build/dist build/manifest-before 1970-01-01 http://example.com nightly
git checkout refactor-build-manifest
cargo +nightly run --manifest-path src/tools/build-manifest/Cargo.toml build/dist build/manifest-after 1970-01-01 http://example.com nightly
sort -u build/manifest-before/channel-rust-nightly.toml | diff - <(sort -u build/manifest-after/channel-rust-nightly.toml)
```
I then verified by hand that the differences before sorting are inconsequential (mostly targets being slightly reordered).
The only change in behavior is that `llvm-tools` is now properly renamed to `llvm-tools-preview`:
```
; sort -u build/manifest-before/channel-rust-nightly.toml | diff - <(sort -u build/manifest-after/channel-rust-nightly.toml)
784a785
> [renames.llvm-tools]
894a896
> to = "llvm-tools-preview"
```
This is based on https://github.com/rust-lang/rust/pull/102241 and should not be merged before it.
Remove #![allow(rustc::potential_query_instability)] from rustc_infer
Related to #84447
This PR probably needs to be benchmarked to check for regressions.
internal: Migrate `ide_assists::utils` and `ide_assists::handlers` to use format arg captures (part 1)
This not only makes the future migration to mutable syntax trees easier, it also surfaces what needs to be migrated in the first place.
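The mechanical change applied throughout (illustrative strings, not taken from the listed handlers):
```rust
fn main() {
    let name = "len";
    let params = "&self";
    let ret = "usize";
    // Before: positional format arguments.
    let before = format!("fn {}({}) -> {} {{ todo!() }}", name, params, ret);
    // After: format argument captures, the style this migration moves to.
    let after = format!("fn {name}({params}) -> {ret} {{ todo!() }}");
    assert_eq!(before, after);
}
```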
~~Aside from the first commit, subsequent commits are structured to only deal with one file/handler at a time.~~
This is the first of 3 PRs, migrating:
Utils:
- `gen_trait_fn_body`
- `render_snippet`
- `ReferenceConversion`
- `convert_type`
- `getter`
Handlers:
- `add_explicit_type`
- `add_return_type`
- `add_turbo_fish`
- `apply_demorgan`
- `auto_import`
- `convert_comment_block`
- `convert_integer_literal`
- `convert_into_to_from`
- `convert_iter_for_each_to_for`
- `convert_let_else_to_match`
- `convert_tuple_struct_to_named_struct`
- `convert_two_arm_bool_match_to_matches_macro`
- `destructure_tuple_binding`
- `extract_function`
- `extract_module`
- `extract_struct_from_enum_variant`
- `extract_type_alias`
- `extract_variable`
- `fix_visibility`