bevy/tools/compile_fail_utils
BD103 bdb4899978
Move compile fail tests (#13196)
# Objective

- Follow-up of #13184 :)
- We use `ui_test` to test compiler errors for our custom macros.
- There are four crates related to compile fail tests
  - `bevy_ecs_compile_fail_tests`, `bevy_macros_compile_fail_tests`, and
    `bevy_reflect_compile_fail_tests`, which actually test the macros.
  - [`bevy_compile_test_utils`](64c1c65783/crates/bevy_compile_test_utils),
    which provides helpers and common patterns for these tests.
- All of these crates reside within the `crates` directory.
- This can be confusing, especially for newcomers. All of the other
folders in `crates` are actual published libraries, except for these 4.

## Solution

- Move all compile fail tests to a `compile_fail` folder under their
  corresponding crate.
  - E.g. `crates/bevy_ecs_compile_fail_tests` would be moved to
    `crates/bevy_ecs/compile_fail`.
- Move `bevy_compile_test_utils` to `tools/compile_fail_utils`.

There are a few benefits to this approach:

1. An internal testing detail is less intrusive (and confusing) for
those who just want to browse the public Bevy interface.
2. Follows a pre-existing approach of organizing related crates inside a
larger crate's folder.
   - See `bevy_gizmos/macros` for an example.
3. Makes the terms `compile_test`, `compile_fail`, and `compile_fail_test`
consistent in code. It's all just `compile_fail` now, because we are
specifically testing the error messages on compiler failures.
   - To be clear, these terms can still be used in comments and speech; only
     the crate names and the CI command are now consistent.

## Testing

Run the compile fail CI command:

```shell
cargo run -p ci -- compile-fail
```

If it still passes, then my refactor was successful.

# Helpers for compile fail tests

This crate contains everything needed to set up compile fail tests for the Bevy repo. Like all Bevy compile fail test crates, it is excluded from the Bevy workspace; this is done so that it does not cause Bevy's crater tests to fail. The CI workflow executes these tests on the stable Rust toolchain (see `tools/ci`).

## Writing new test cases

Test cases are annotated `.rs` files. The annotations define how the test case should be run and which errors we expect it to produce. Please see https://github.com/oli-obk/ui_test/blob/main/README.md for more information.

Annotations can roughly be split into two kinds: global annotations, which are prefixed with `//@` and define how tests should be run, and error annotations, which are prefixed with `//~` and mark where we expect errors to happen. Of the global annotations, the only one you are likely to care about is `//@check-pass`, which makes any compile error in the test trigger a test failure.
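For illustration, here is a minimal sketch of a `//@check-pass` test case. The `Health` component is a placeholder invented for this example (not taken from an actual Bevy test), and the sketch assumes the compile fail crate has `bevy_ecs` as a dependency:

```rust
//@check-pass
// This file is expected to compile cleanly; any compiler error fails the test.
use bevy_ecs::prelude::*;

// Deriving `Component` on a plain struct should always be accepted.
// (Placeholder type for this sketch; allow dead_code to keep the output quiet.)
#[allow(dead_code)]
#[derive(Component)]
struct Health(f32);

fn main() {}
```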

Error annotations are composed of two parts. The first is an optional location specifier:

  • `^` The error happens on the line above.
  • `v` The error happens on the line below.
  • `|` The error annotation is connected to another one.
  • If the location specifier is missing, the error is assumed to happen on the same line as the annotation.

The second is an error matcher:

  • `E####` The error we're expecting has the `####` rustc error code, e.g. `E0499`.
  • `<lint_name>` The given compiler lint is triggered, e.g. `dead_code`.
  • `LEVEL: <substring>` A compiler message of the given level (valid levels are ERROR, HELP, WARN, or NOTE) will be raised, and it will contain the substring. Substrings can contain spaces.
  • `LEVEL: /<regex>/` Same as above, but a regex is used to match the message.

An example of an error annotation would be `//~v ERROR: missing trait`. This annotation will match any error occurring on the line below whose message contains the substring `missing trait`.
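Putting the pieces together, here is a small (non-Bevy) sketch of an annotated test case written for this README. The expected messages mirror current rustc wording; if that wording changes, the annotations need updating:

```rust
// A deliberately broken program demonstrating both matcher styles described
// above: a substring matcher on the same line, and an error-code matcher
// using the `^` location specifier.

fn main() {
    // The string literal cannot be assigned to a `u32`, so rustc reports
    // "mismatched types" (E0308) on this line.
    let _number: u32 = "not a number"; //~ ERROR: mismatched types

    // Taking a second mutable borrow while the first is still in use is
    // rejected with error code E0499 on the line above the annotation.
    let mut value = 0;
    let first = &mut value;
    let second = &mut value;
    //~^ E0499
    *first += 1;
    *second += 1;
}
```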

## Adding compile fail tests for a crate that doesn't have them

This will be a rather involved process. You'll have to:

  • Create a subdirectory named `compile_fail` within the crate you are testing. (E.g. `bevy_ecs/compile_fail` for `bevy_ecs`.)
  • Add `compile_fail_utils` as a dev-dependency.
  • Create a folder called `tests` within the new crate.
  • Add a test runner file to this folder. The file should contain a `main` function calling one of the test functions defined in this crate (see the sketch after this list).
  • Add a `[[test]]` table to the `Cargo.toml`. This table will need to contain `harness = false` and `name = <name of the test runner file you defined>`.
  • Add the path of the new crate under `[workspace].exclude` in the root `Cargo.toml`.
  • Modify the CI tool to run `cargo test` on this crate.
  • And finally, write your compile fail tests.
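As a rough sketch of the test runner step: the file name, the `tests/ui` case directory, and the `compile_fail_utils::test` helper with its signature below are all assumptions made for illustration; check `tools/compile_fail_utils/src` for the functions the crate actually exports.

```rust
// tests/compile_tests.rs (hypothetical name; it must match the `name` field of
// the `[[test]]` table in Cargo.toml, which also needs `harness = false`).
//
// NOTE: `compile_fail_utils::test` and its signature are assumptions for this
// sketch; use whichever test function the crate actually provides.

fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Run every annotated .rs file under the (hypothetical) tests/ui directory
    // as a compile fail test case.
    compile_fail_utils::test("my_crate compile fail", "tests/ui")?;
    Ok(())
}
```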

If you have any questions, don't be scared to ask for help.

## A note about .stderr files

We're capable of generating `.stderr` files for all our compile fail tests. These files contain the error output produced by the test. To create or regenerate them yourself, run the tests with the `BLESS` environment variable set to any value (e.g. `BLESS="some symbolic value"`). We currently have to ignore mismatches between these files and the actual stderr output of the corresponding test due to issues with file paths: we attempt to sanitize file paths, but for proc macros the compiler error messages contain paths into the current toolchain's copy of the standard library. If we knew of a way to construct a path to the current toolchain's folder, we could fix this.
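Since `BLESS` only needs to be set to any value at all, a runner can check for it along these lines. This is a sketch of the idea, not the crate's actual implementation:

```rust
use std::env;

// Any value of BLESS (even an empty one) counts as a request to regenerate the
// .stderr snapshot files instead of comparing against them.
fn bless_requested() -> bool {
    env::var_os("BLESS").is_some()
}

fn main() {
    if bless_requested() {
        println!("BLESS is set: regenerating .stderr files");
    } else {
        println!("BLESS is not set: comparing against existing .stderr files");
    }
}
```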