Our project model code is rather complicated -- the logic for lowering
from `cargo metadata` to `CrateGraph` is fiddly and full of special
cases. So far, we have survived without testing this at all, but that
increasingly seems like a poor option.
So this PR introduces a simple test just to detect the most obvious
failures. The idea here is that, although we rely on external processes
(cargo & rustc), we are actually using their stable interfaces, so we
can just mock out their outputs.
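To make that idea concrete, here is a minimal sketch of such a test. It assumes the `cargo_metadata` and `serde_json` crates and a checked-in JSON fixture (the fixture path is made up for the example); the parsed `Metadata` would then be handed to the lowering code in place of real `cargo metadata` output.

```rust
// Minimal sketch: deserialize a canned `cargo metadata` blob instead of
// spawning the real process (the fixture path is hypothetical).
#[cfg(test)]
mod tests {
    #[test]
    fn parses_canned_cargo_metadata() {
        // Captured once with `cargo metadata --format-version 1` on a tiny
        // test workspace and committed next to the test.
        let json = include_str!("fixtures/hello_world_metadata.json");
        let meta: cargo_metadata::Metadata =
            serde_json::from_str(json).expect("fixture should deserialize");
        assert!(!meta.packages.is_empty());
        // The parsed `Metadata` would then be fed to the `CrateGraph`
        // lowering and the resulting graph asserted on.
    }
}
```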
Long term, I would like to try to virtualize IO here, so as to do such
mocking in a more principled way, but let's start simple.
Should we forgo the mocking and just call `cargo metadata` directly
perhaps? Tough question -- I personally feel that fast, in-process tests
are more important in this case than any extra assurance we get from
running the real thing.
Super-long term, we would probably want to extend our heavy tests to
cover more use-cases, but we should figure out a way to do that without
slowing the tests down for everyone.
Perhaps we need a two-tiered bors system, where we pull from `master`
into the `release` branch only when an additional set of tests passes?
cfg completion will look at the enabled cfg keys when performing
completion. It will also look at crate features when completing a
`feature` cfg option. A fixed list of known values is provided for
some cfg options.
For unknown keys, it will look at the enabled values for that cfg key,
which means that completion will only show values that are currently
enabled for those keys.
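As an illustration of that lookup order, here is a hedged sketch; the `CfgContext` type and `cfg_value_completions` function are invented for the example and are not rust-analyzer's actual API.

```rust
use std::collections::HashMap;

/// Illustrative stand-in for the data the completion would consult.
struct CfgContext {
    /// Enabled `key = "value"` cfg pairs for the current crate.
    enabled: HashMap<String, Vec<String>>,
    /// Features declared in the crate's Cargo.toml.
    crate_features: Vec<String>,
}

fn cfg_value_completions(ctx: &CfgContext, key: &str) -> Vec<String> {
    match key {
        // Completing `feature = "..."` pulls from the crate's features.
        "feature" => ctx.crate_features.clone(),
        // A fixed list of well-known values for some builtin keys.
        "target_os" => ["linux", "windows", "macos"]
            .iter()
            .map(|s| s.to_string())
            .collect(),
        // For unknown keys, only the currently enabled values are offered.
        _ => ctx.enabled.get(key).cloned().unwrap_or_default(),
    }
}
```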
If you open libcore from a rust-lang/rust checkout, as opposed to e.g.
reaching it via goto definition from some other crate (which would use
the sysroot instance of libcore), its `#![cfg(not(test))]` would
previously have excluded all the code from the module tree, breaking
the editor experience.
This puts in a slight hack that checks for the crate name "core" and
turns off the `test` cfg.
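A minimal sketch of what such a hack could look like, using a plain list of cfg atoms as a stand-in for the real cfg options type; the function name and surrounding shape are invented for illustration.

```rust
// Illustrative only: drop the `test` atom for the crate named "core" so
// that its `#![cfg(not(test))]` no longer excludes the whole crate.
fn cfg_atoms_for_crate(crate_name: &str, mut atoms: Vec<String>) -> Vec<String> {
    if crate_name == "core" {
        atoms.retain(|atom| atom != "test");
    }
    atoms
}
```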