Quality-of-life updates for running CI locally (#12242)
# Objective

- Make running CI locally less painful by allowing continuation after failure
- Fix #12206

## Solution

Previously, `ci` would halt after encountering a failure (or shortly thereafter). For instance, if you ran `cargo run -p ci` and a single test within a CI check failed somewhere in the middle, you would never even reach many of the other tests within that check, and certainly none of the CI checks that came afterward. This was annoying: if I am fixing issues with CI tests, I want to see all of the issues up front instead of having to rerun CI after every fix just to see the next error. Furthermore, it is not uncommon (because of subtle configuration/system differences) to encounter spurious CI failures locally; previously, these made running CI locally largely pointless, since you would have to push your branch anyway to see your actual CI failures from the automated testing service.

Now, when running `ci` locally, we have a couple of new tools to ameliorate these problems and provide a smoother experience:

- Firstly, there is a new optional flag `--keep-going` that can be passed while calling `ci` (e.g. `cargo run -p ci -- doc --keep-going`). It has the following effects:
  - It causes the `--keep-going` flag to be passed to the script's `cargo doc` and `cargo check` invocations, so that they do not stop when they encounter an error in a single module; instead, they keep going (!) and surface subsequent errors.
  - It causes the `--no-fail-fast` flag to be passed to the script's `cargo test` invocations, to similar effect.
  - Finally, it causes `ci` itself not to abort after a single check fails; instead, it runs every check that was invoked. Thus, for instance, `cargo run -p ci -- --keep-going` will run every CI check even if it encounters intermediate failures, and every such check will itself be run to completion.
- Secondly, multiple ordinary arguments may now be passed to `ci`. For instance, `cargo run -p ci -- doc test` executes both the 'doc' checks and the 'test' checks. This lets the caller run the tests they care about with fewer invocations of `ci`.

As of this PR, automated testing remains unchanged.

---

## Changelog

- `tools/ci/src/main.rs` refactored into staging and execution steps, since the previous control flow did not naturally support continuing after failure.
- Added a `--keep-going` flag to `ci`.
- Added support for invoking `ci` with multiple arguments.

---

## Discussion

### Design considerations

I had originally split this into separate flags that controlled:

1. whether `--keep-going`/`--no-fail-fast` would be passed to the constituent commands
2. whether `ci` would continue after a component test failed

However, I decided to merge these two behaviors, since I think that if you're in the business of seeing all the errors, you'll probably want to actually see all of them.

One slightly annoying thing about the new `--keep-going` behavior is that you will sometimes wish the script would pause, since it tends to fill the screen with a lot of junk. I have found that sending stdout to /dev/null helps quite a bit, but I don't think `cargo fmt` or `cargo clippy` actually write to stderr, so you need to be cognizant of that (and perhaps invoke the lints separately).
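For reference, the kinds of local invocations described above look like this (illustrative commands only; the redirect is optional and, as noted, it may hide `cargo fmt`/`cargo clippy` output since they don't appear to write to stderr):

```sh
# Run just the 'doc' and 'test' batteries in one invocation
cargo run -p ci -- doc test

# Run every check, continuing past intermediate failures instead of halting
cargo run -p ci -- --keep-going

# Same, but send stdout to /dev/null to cut down on the noise
cargo run -p ci -- --keep-going > /dev/null
```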
~~Next, I'll mention that I feel somewhat strongly that the current behavior of `ci` for automated testing should remain the same, since its job is more like detecting that errors exist rather than finding all of them.~~ (I was convinced this could have value.) Orthogonally, there is the question of whether or not we might want to make this (or something similar) actually the default behavior and make the automated test script invoke some optional flags — it doesn't have to type with its hands, after all. I'm not against that, but I don't really want to rock the boat much more with this PR, since anyone who looks at the diff might already be a little incensed.
parent e8583d132c
commit bb02e6ac43

1 changed file with 289 additions and 90 deletions
@@ -1,11 +1,12 @@
 //! CI script used for Bevy.
 
-use xshell::{cmd, Shell};
-
 use bitflags::bitflags;
+use core::panic;
+use std::collections::BTreeMap;
+use xshell::{cmd, Cmd, Shell};
 
 bitflags! {
-    #[derive(Clone, Copy, Debug, PartialEq, Eq)]
+    #[derive(Clone, Copy, Debug, PartialEq, Eq, PartialOrd, Ord)]
     struct Check: u32 {
         const FORMAT = 0b00000001;
         const CLIPPY = 0b00000010;
@@ -20,6 +21,29 @@
     }
 }
 
+bitflags! {
+    #[derive(Clone, Copy, Debug, PartialEq, Eq)]
+    struct Flag: u32 {
+        const KEEP_GOING = 0b00000001;
+    }
+}
+
+// None of the CI tests require any information at runtime other than the options that have been set,
+// which is why all of these are 'static; we could easily update this to use more flexible types.
+struct CITest<'a> {
+    /// The command to execute
+    command: Cmd<'a>,
+
+    /// The message to display if the test command fails
+    failure_message: &'static str,
+
+    /// The subdirectory path to run the test command within
+    subdir: Option<&'static str>,
+
+    /// Environment variables that need to be set before the test runs
+    env_vars: Vec<(&'static str, &'static str)>,
+}
+
 fn main() {
     // When run locally, results may differ from actual CI runs triggered by
     // .github/workflows/ci.yml
@@ -44,125 +68,300 @@
         ("doc-test", Check::DOC_TEST),
     ];
 
-    let what_to_run = if let Some(arg) = std::env::args().nth(1).as_deref() {
-        if let Some((_, check)) = arguments.iter().find(|(str, _)| *str == arg) {
-            *check
-        } else {
-            println!(
-                "Invalid argument: {arg:?}.\nEnter one of: {}.",
-                arguments[1..]
-                    .iter()
-                    .map(|(s, _)| s)
-                    .fold(arguments[0].0.to_owned(), |c, v| c + ", " + v)
-            );
-            return;
+    let flag_arguments = [("--keep-going", Flag::KEEP_GOING)];
+
+    // Parameters are parsed disregarding their order. Note that the first arg is generally the name of
+    // the executable, so it is ignored. Any parameter may either be a flag or the name of a battery of tests
+    // to include.
+    let (mut checks, mut flags) = (Check::empty(), Flag::empty());
+    for arg in std::env::args().skip(1) {
+        if let Some((_, flag)) = flag_arguments.iter().find(|(flag_arg, _)| *flag_arg == arg) {
+            flags.insert(*flag);
+            continue;
         }
-    } else {
-        Check::all()
-    };
+        if let Some((_, check)) = arguments.iter().find(|(check_arg, _)| *check_arg == arg) {
+            // Note that this actually adds all of the constituent checks to the test suite.
+            checks.insert(*check);
+            continue;
+        }
+        // We encountered an invalid parameter:
+        println!(
+            "Invalid argument: {arg:?}.\n\
+            Valid parameters: {}.",
+            arguments[1..]
+                .iter()
+                .map(|(s, _)| s)
+                .chain(flag_arguments.iter().map(|(s, _)| s))
+                .fold(arguments[0].0.to_owned(), |c, v| c + ", " + v)
+        );
+        return;
+    }
+
+    // If no checks are specified, we run every check
+    if checks.is_empty() {
+        checks = Check::all();
+    }
 
     let sh = Shell::new().unwrap();
 
-    if what_to_run.contains(Check::FORMAT) {
+    // Each check contains a 'battery' (vector) that can include more than one command, but almost all of them
+    // just contain a single command.
+    let mut test_suite: BTreeMap<Check, Vec<CITest>> = BTreeMap::new();
+
+    if checks.contains(Check::FORMAT) {
         // See if any code needs to be formatted
-        cmd!(sh, "cargo fmt --all -- --check")
-            .run()
-            .expect("Please run 'cargo fmt --all' to format your code.");
+        test_suite.insert(
+            Check::FORMAT,
+            vec![CITest {
+                command: cmd!(sh, "cargo fmt --all -- --check"),
+                failure_message: "Please run 'cargo fmt --all' to format your code.",
+                subdir: None,
+                env_vars: vec![],
+            }],
+        );
     }
 
-    if what_to_run.contains(Check::CLIPPY) {
+    if checks.contains(Check::CLIPPY) {
         // See if clippy has any complaints.
         // - Type complexity must be ignored because we use huge templates for queries
-        cmd!(
-            sh,
-            "cargo clippy --workspace --all-targets --all-features -- -Dwarnings"
-        )
-        .run()
-        .expect("Please fix clippy errors in output above.");
+        test_suite.insert(
+            Check::CLIPPY,
+            vec![CITest {
+                command: cmd!(
+                    sh,
+                    "cargo clippy --workspace --all-targets --all-features -- -Dwarnings"
+                ),
+                failure_message: "Please fix clippy errors in output above.",
+                subdir: None,
+                env_vars: vec![],
+            }],
+        );
     }
 
-    if what_to_run.contains(Check::COMPILE_FAIL) {
-        {
-            // ECS Compile Fail Tests
-            // Run UI tests (they do not get executed with the workspace tests)
-            // - See crates/bevy_ecs_compile_fail_tests/README.md
-            let _subdir = sh.push_dir("crates/bevy_ecs_compile_fail_tests");
-            cmd!(sh, "cargo test --target-dir ../../target")
-                .run()
-                .expect("Compiler errors of the ECS compile fail tests seem to be different than expected! Check locally and compare rust versions.");
-        }
-        {
-            // Reflect Compile Fail Tests
-            // Run tests (they do not get executed with the workspace tests)
-            // - See crates/bevy_reflect_compile_fail_tests/README.md
-            let _subdir = sh.push_dir("crates/bevy_reflect_compile_fail_tests");
-            cmd!(sh, "cargo test --target-dir ../../target")
-                .run()
-                .expect("Compiler errors of the Reflect compile fail tests seem to be different than expected! Check locally and compare rust versions.");
-        }
-        {
-            // Macro Compile Fail Tests
-            // Run tests (they do not get executed with the workspace tests)
-            // - See crates/bevy_macros_compile_fail_tests/README.md
-            let _subdir = sh.push_dir("crates/bevy_macros_compile_fail_tests");
-            cmd!(sh, "cargo test --target-dir ../../target")
-                .run()
-                .expect("Compiler errors of the macros compile fail tests seem to be different than expected! Check locally and compare rust versions.");
+    if checks.contains(Check::COMPILE_FAIL) {
+        let mut args = vec!["--target-dir", "../../target"];
+        if flags.contains(Flag::KEEP_GOING) {
+            args.push("--no-fail-fast");
         }
+
+        // ECS Compile Fail Tests
+        // Run UI tests (they do not get executed with the workspace tests)
+        // - See crates/bevy_ecs_compile_fail_tests/README.md
+
+        // (These must be cloned because of move semantics in `cmd!`)
+        let args_clone = args.clone();
+
+        test_suite.insert(Check::COMPILE_FAIL, vec![CITest {
+            command: cmd!(sh, "cargo test {args_clone...}"),
+            failure_message: "Compiler errors of the ECS compile fail tests seem to be different than expected! Check locally and compare rust versions.",
+            subdir: Some("crates/bevy_ecs_compile_fail_tests"),
+            env_vars: vec![],
+        }]);
+
+        // Reflect Compile Fail Tests
+        // Run tests (they do not get executed with the workspace tests)
+        // - See crates/bevy_reflect_compile_fail_tests/README.md
+        let args_clone = args.clone();
+
+        test_suite.entry(Check::COMPILE_FAIL).and_modify(|tests| tests.push( CITest {
+            command: cmd!(sh, "cargo test {args_clone...}"),
+            failure_message: "Compiler errors of the Reflect compile fail tests seem to be different than expected! Check locally and compare rust versions.",
+            subdir: Some("crates/bevy_reflect_compile_fail_tests"),
+            env_vars: vec![],
+        }));
+
+        // Macro Compile Fail Tests
+        // Run tests (they do not get executed with the workspace tests)
+        // - See crates/bevy_macros_compile_fail_tests/README.md
+        let args_clone = args.clone();
+
+        test_suite.entry(Check::COMPILE_FAIL).and_modify(|tests| tests.push( CITest {
+            command: cmd!(sh, "cargo test {args_clone...}"),
+            failure_message: "Compiler errors of the macros compile fail tests seem to be different than expected! Check locally and compare rust versions.",
+            subdir: Some("crates/bevy_macros_compile_fail_tests"),
+            env_vars: vec![],
+        }));
     }
 
-    if what_to_run.contains(Check::TEST) {
+    if checks.contains(Check::TEST) {
         // Run tests (except doc tests and without building examples)
-        cmd!(sh, "cargo test --workspace --lib --bins --tests --benches")
-            .run()
-            .expect("Please fix failing tests in output above.");
+        let mut args = vec!["--workspace", "--lib", "--bins", "--tests", "--benches"];
+        if flags.contains(Flag::KEEP_GOING) {
+            args.push("--no-fail-fast");
+        }
+
+        test_suite.insert(
+            Check::TEST,
+            vec![CITest {
+                command: cmd!(sh, "cargo test {args...}"),
+                failure_message: "Please fix failing tests in output above.",
+                subdir: None,
+                env_vars: vec![],
+            }],
+        );
     }
 
-    if what_to_run.contains(Check::DOC_TEST) {
+    if checks.contains(Check::DOC_TEST) {
         // Run doc tests
-        cmd!(sh, "cargo test --workspace --doc")
-            .run()
-            .expect("Please fix failing doc-tests in output above.");
+        let mut args = vec!["--workspace", "--doc"];
+        if flags.contains(Flag::KEEP_GOING) {
+            args.push("--no-fail-fast");
+        }
+
+        test_suite.insert(
+            Check::DOC_TEST,
+            vec![CITest {
+                command: cmd!(sh, "cargo test {args...}"),
+                failure_message: "Please fix failing doc-tests in output above.",
+                subdir: None,
+                env_vars: vec![],
+            }],
+        );
     }
 
-    if what_to_run.contains(Check::DOC_CHECK) {
+    if checks.contains(Check::DOC_CHECK) {
         // Check that building docs work and does not emit warnings
-        std::env::set_var("RUSTDOCFLAGS", "-D warnings");
-        cmd!(
-            sh,
-            "cargo doc --workspace --all-features --no-deps --document-private-items"
-        )
-        .run()
-        .expect("Please fix doc warnings in output above.");
+        let mut args = vec![
+            "--workspace",
+            "--all-features",
+            "--no-deps",
+            "--document-private-items",
+        ];
+        if flags.contains(Flag::KEEP_GOING) {
+            args.push("--keep-going");
+        }
+
+        test_suite.insert(
+            Check::DOC_CHECK,
+            vec![CITest {
+                command: cmd!(sh, "cargo doc {args...}"),
+                failure_message: "Please fix doc warnings in output above.",
+                subdir: None,
+                env_vars: vec![("RUSTDOCFLAGS", "-D warnings")],
+            }],
+        );
     }
 
-    if what_to_run.contains(Check::BENCH_CHECK) {
-        let _subdir = sh.push_dir("benches");
+    if checks.contains(Check::BENCH_CHECK) {
         // Check that benches are building
-        cmd!(sh, "cargo check --benches --target-dir ../target")
-            .run()
-            .expect("Failed to check the benches.");
+        let mut args = vec!["--benches", "--target-dir", "../target"];
+        if flags.contains(Flag::KEEP_GOING) {
+            args.push("--keep-going");
+        }
+
+        test_suite.insert(
+            Check::BENCH_CHECK,
+            vec![CITest {
+                command: cmd!(sh, "cargo check {args...}"),
+                failure_message: "Failed to check the benches.",
+                subdir: Some("benches"),
+                env_vars: vec![],
+            }],
+        );
     }
 
-    if what_to_run.contains(Check::EXAMPLE_CHECK) {
+    if checks.contains(Check::EXAMPLE_CHECK) {
         // Build examples and check they compile
-        cmd!(sh, "cargo check --workspace --examples")
-            .run()
-            .expect("Please fix compiler errors for examples in output above.");
+        let mut args = vec!["--workspace", "--examples"];
+        if flags.contains(Flag::KEEP_GOING) {
+            args.push("--keep-going");
+        }
+
+        test_suite.insert(
+            Check::EXAMPLE_CHECK,
+            vec![CITest {
+                command: cmd!(sh, "cargo check {args...}"),
+                failure_message: "Please fix compiler errors for examples in output above.",
+                subdir: None,
+                env_vars: vec![],
+            }],
+        );
     }
 
-    if what_to_run.contains(Check::COMPILE_CHECK) {
+    if checks.contains(Check::COMPILE_CHECK) {
         // Build bevy and check that it compiles
-        cmd!(sh, "cargo check --workspace")
-            .run()
-            .expect("Please fix compiler errors in output above.");
+        let mut args = vec!["--workspace"];
+        if flags.contains(Flag::KEEP_GOING) {
+            args.push("--keep-going");
+        }
+
+        test_suite.insert(
+            Check::COMPILE_CHECK,
+            vec![CITest {
+                command: cmd!(sh, "cargo check {args...}"),
+                failure_message: "Please fix compiler errors in output above.",
+                subdir: None,
+                env_vars: vec![],
+            }],
+        );
     }
 
-    if what_to_run.contains(Check::CFG_CHECK) {
+    if checks.contains(Check::CFG_CHECK) {
         // Check cfg and imports
-        std::env::set_var("RUSTFLAGS", "-D warnings");
-        cmd!(sh, "cargo +nightly check -Zcheck-cfg --workspace")
-            .run()
-            .expect("Please fix failing cfg checks in output above.");
+        let mut args = vec!["-Zcheck-cfg", "--workspace"];
+        if flags.contains(Flag::KEEP_GOING) {
+            args.push("--keep-going");
+        }
+
+        test_suite.insert(
+            Check::CFG_CHECK,
+            vec![CITest {
+                command: cmd!(sh, "cargo +nightly check {args...}"),
+                failure_message: "Please fix failing cfg checks in output above.",
+                subdir: None,
+                env_vars: vec![("RUSTFLAGS", "-D warnings")],
+            }],
+        );
     }
+
+    // Actually run the tests:
+
+    let mut failed_checks: Check = Check::empty();
+    let mut failure_message: String = String::new();
+
+    // In KEEP_GOING-mode, we save all errors until the end; otherwise, we just
+    // panic with the given message for test failure.
+    fn fail(
+        current_check: Check,
+        failure_message: &'static str,
+        failed_checks: &mut Check,
+        existing_fail_message: &mut String,
+        flags: &Flag,
+    ) {
+        if flags.contains(Flag::KEEP_GOING) {
+            failed_checks.insert(current_check);
+            if !existing_fail_message.is_empty() {
+                existing_fail_message.push('\n');
+            }
+            existing_fail_message.push_str(failure_message);
+        } else {
+            panic!("{failure_message}");
+        }
+    }
+
+    for (check, battery) in test_suite {
+        for ci_test in battery {
+            // If the CI test is to be executed in a subdirectory, we move there before running the command
+            let _subdir_hook = ci_test.subdir.map(|path| sh.push_dir(path));
+
+            // Actually run the test, setting environment variables temporarily
+            if ci_test.command.envs(ci_test.env_vars).run().is_err() {
+                fail(
+                    check,
+                    ci_test.failure_message,
+                    &mut failed_checks,
+                    &mut failure_message,
+                    &flags,
+                );
+            }
+            // ^ This must run while `_subdir_hook` is in scope; it is dropped on loop iteration.
+        }
+    }
+
+    if !failed_checks.is_empty() {
+        panic!(
+            "One or more CI checks failed.\n\
+            {failure_message}"
+        );
+    }
 }