* add normalized orthographic projection
* custom scale for ScaledOrthographicProjection
* allow choosing base axis for ScaledOrthographicProjection
* cargo fmt
* add general (scaled) orthographic camera bundle
FIXME: does the same "far" trick from Camera2DBundle make any sense here?
* fixes
* camera bundles: rename and new ortho constructors
* unify orthographic projections
* give PerspectiveCameraBundle constructors like those of OrthographicCameraBundle
* update examples with new camera bundle syntax (see the usage sketch after this list)
* rename CameraUiBundle to UiCameraBundle
* update examples
* ScalingMode::None
* remove extra blank lines
* sane default bounds for orthographic projection
* fix alien_cake_addict example
* reorder ScalingMode enum variants
* ios example fix
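Taken together, these commits converge on a single orthographic projection configured through `ScalingMode`, plus renamed camera bundles. A minimal usage sketch, assuming the Bevy 0.5-era names the commits lead to (`OrthographicCameraBundle`, `UiCameraBundle`, `ScalingMode`); not a verbatim excerpt from the examples:

```rust
use bevy::prelude::*;
use bevy::render::camera::ScalingMode;

fn setup(mut commands: Commands) {
    // Default 2D orthographic camera.
    commands.spawn_bundle(OrthographicCameraBundle::new_2d());

    // UI camera (renamed from CameraUiBundle).
    commands.spawn_bundle(UiCameraBundle::default());

    // 3D orthographic camera with a custom scale and an explicit scaling mode.
    let mut camera = OrthographicCameraBundle::new_3d();
    camera.orthographic_projection.scale = 3.0;
    camera.orthographic_projection.scaling_mode = ScalingMode::FixedVertical;
    commands.spawn_bundle(camera);
}
```

Here `ScalingMode::FixedVertical` keeps the vertical extent fixed and derives the horizontal extent from the aspect ratio, while `ScalingMode::None` leaves the projection bounds entirely to the user.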
* Remove AHashExt
There is little benefit to Hash*::new() over Hash*::default(), but it requires extra code that has to be duplicated for every Hash* type in bevy_utils, and it may also slightly increase compile times.
* Add StableHash* to bevy_utils
* Use StableHashMap instead of HashMap + BTreeSet for diagnostics
This is a significant reduction in the release-mode compile times of bevy_diagnostic.

Before (HashMap + BTreeSet):
```
Benchmark #1: touch crates/bevy_diagnostic/src/lib.rs && cargo build --release -p bevy_diagnostic -j1
Time (mean ± σ): 3.645 s ± 0.009 s [User: 3.551 s, System: 0.094 s]
Range (min … max): 3.632 s … 3.658 s 20 runs
```

After (StableHashMap):
```
Benchmark #1: touch crates/bevy_diagnostic/src/lib.rs && cargo build --release -p bevy_diagnostic -j1
Time (mean ± σ): 2.938 s ± 0.012 s [User: 2.850 s, System: 0.090 s]
Range (min … max): 2.919 s … 2.969 s 20 runs
```
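To illustrate the idea (a hypothetical sketch, not bevy_utils' actual definition): a "stable" hash map is just a HashMap with a fixed, deterministic hasher, so iteration order is reproducible across runs for the same insertion order and the extra sorted BTreeSet of keys becomes unnecessary. fxhash is used here only as an example of a deterministic hasher.

```rust
use std::collections::HashMap;
use std::hash::BuildHasherDefault;

use fxhash::FxHasher;

// Hypothetical alias: any deterministic BuildHasher gives the same
// iteration order on every run for the same insertion order.
type StableHashMap<K, V> = HashMap<K, V, BuildHasherDefault<FxHasher>>;

fn main() {
    let mut diagnostics: StableHashMap<&'static str, f64> = StableHashMap::default();
    diagnostics.insert("frame_time", 16.6);
    diagnostics.insert("fps", 60.0);

    // No separate sorted key set is needed to get a reproducible order.
    for (name, value) in &diagnostics {
        println!("{}: {}", name, value);
    }
}
```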
Relying on TypeId being a hash internally isn't future-proof, because there is no guarantee about the internal layout or structure of TypeId. I benchmarked a TypeId noop hasher against fxhash and found very little difference. fxhash is also likely to be better supported, since it is widely used in rustc itself.
[Benchmarks of hashers](https://github.com/bevyengine/bevy/issues/1097)
[Engine wide benchmarks](https://github.com/bevyengine/bevy/pull/1119#issuecomment-751361215)
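For concreteness, a hedged sketch of the fxhash side of that comparison; the `TypeIdMap` alias and the example are hypothetical, not Bevy's actual code:

```rust
use std::any::TypeId;
use std::collections::HashMap;
use std::hash::BuildHasherDefault;

use fxhash::FxHasher;

// Hash TypeId with fxhash like any other key. The rejected alternative, a
// pass-through "noop" hasher that forwards the value TypeId already holds,
// benchmarks about the same but depends on TypeId's unspecified internals.
type TypeIdMap<V> = HashMap<TypeId, V, BuildHasherDefault<FxHasher>>;

fn main() {
    let mut map: TypeIdMap<&'static str> = TypeIdMap::default();
    map.insert(TypeId::of::<u32>(), "u32");
    map.insert(TypeId::of::<String>(), "String");
    assert_eq!(map.get(&TypeId::of::<u32>()), Some(&"u32"));
}
```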
Previously, if the actual value of LeftStickX was e.g. 0.034 and fluctuated a little
bit (less than the threshold), it would repeatedly send out events,
because the raw value was compared to the *filtered* old one - 0.0 - which is
more than `0.01` (the threshold) away.
This is fixed by first applying the deadzone and then comparing to the old
value.
Another possible solution would be to store both the actual old value
and the filtered one, but that would add complexity.
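A minimal sketch of the chosen approach: filter the raw value through the deadzone first, then compare the filtered value to the stored old value. All names and the deadzone constant here are hypothetical; only the `0.01` threshold comes from the description above.

```rust
const DEAD_ZONE: f32 = 0.05; // hypothetical deadzone
const THRESHOLD: f32 = 0.01; // change threshold from the description above

// Snap values inside the deadzone to 0.0.
fn filter(raw: f32) -> f32 {
    if raw.abs() <= DEAD_ZONE { 0.0 } else { raw }
}

/// Returns Some(new_value) if an event should be sent and the stored value updated.
fn axis_changed(old_filtered: f32, raw: f32) -> Option<f32> {
    let new_filtered = filter(raw);
    if (new_filtered - old_filtered).abs() > THRESHOLD {
        Some(new_filtered)
    } else {
        None
    }
}

fn main() {
    // Stick rests near 0.034 and jitters slightly: it filters to 0.0 every
    // time, so no events fire (previously the raw 0.034 was compared against
    // the filtered 0.0 and an event fired repeatedly).
    assert_eq!(axis_changed(0.0, 0.034), None);
    assert_eq!(axis_changed(0.0, 0.038), None);
    // A real movement past the deadzone still produces an event.
    assert_eq!(axis_changed(0.0, 0.4), Some(0.4));
}
```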