Commit b0c4b776b5 -- https://github.com/rust-lang/rust-analyzer
Our project model code is rather complicated -- the logic for lowering from `cargo metadata` to `CrateGraph` is fiddly and full of special cases. So far, we have survived without testing it at all, but that increasingly seems like a poor option. So this PR introduces simple tests just to detect the most obvious failures. The idea here is that, although we rely on external processes (cargo & rustc), we are actually using their stable interfaces, so we can just mock out their outputs. Long term, I would like to try to virtualize IO here, so as to do such mocking in a more principled way, but let's start simple.

Should we forgo the mocking and just call `cargo metadata` directly? Tough question -- I personally feel that fast, in-process tests are more important in this case than any extra assurance we would get from running the real thing.

Super-long term, we would probably want to extend our heavy tests to cover more use cases, but we should figure out a way to do that without slowing the tests down for everyone. Perhaps we need a two-tiered bors system, where we pull from `master` into a `release` branch only when an additional set of tests passes?
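To make the mocking idea concrete, here is a minimal sketch: instead of shelling out to `cargo metadata`, a test parses a canned copy of its JSON output and asserts on what is derived from it. The `package_names` helper and the trimmed fixture are hypothetical illustrations, not rust-analyzer's actual API; a real test would feed the parsed metadata into the `cargo metadata`-to-`CrateGraph` lowering instead.

```rust
use serde_json::Value;

/// Hypothetical helper: pull package names out of `cargo metadata` JSON.
/// In a real test, this stands in for the lowering from `cargo metadata`
/// output to `CrateGraph`.
fn package_names(metadata: &Value) -> Vec<String> {
    metadata["packages"]
        .as_array()
        .map(|pkgs| {
            pkgs.iter()
                .filter_map(|p| p["name"].as_str().map(String::from))
                .collect()
        })
        .unwrap_or_default()
}

#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn lowers_mocked_cargo_metadata() {
        // Canned `cargo metadata --format-version 1` output, trimmed to the
        // fields this sketch needs; a real fixture would be a full capture
        // of the command's stdout.
        let fixture = r#"{
            "packages": [
                { "name": "my-crate", "id": "my-crate 0.1.0" },
                { "name": "serde", "id": "serde 1.0.0" }
            ],
            "workspace_members": ["my-crate 0.1.0"]
        }"#;
        let metadata: Value = serde_json::from_str(fixture).unwrap();
        assert_eq!(package_names(&metadata), vec!["my-crate", "serde"]);
    }
}
```

Because the fixture is just a string, the test runs entirely in-process, with no cargo or rustc invocation, which is the trade-off argued for above.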