# Contributing Quick Start
Rust Analyzer is just an ordinary Rust project, which is organized as a Cargo workspace, builds on stable and doesn't depend on C libraries. So

```
$ cargo test
```

should be enough to get you started!
To learn more about how rust-analyzer works, see the ./architecture.md document.
We also publish rustdoc docs to pages:
https://rust-analyzer.github.io/rust-analyzer/ra_ide/
Various organizational and process issues are discussed in this document.
## Getting in Touch
Rust Analyzer is part of the RLS-2.0 working group. Discussion happens in this Zulip stream:
https://rust-lang.zulipchat.com/#narrow/stream/185405-t-compiler.2Fwg-rls-2.2E0
## Issue Labels
- `good-first-issue` marks good issues for getting into the project.
- `E-mentor` issues have links to the code in question and to tests.
- `E-easy`, `E-medium`, `E-hard` labels are estimates of how hard it would be to write a fix.
- `fun` is for cool, but probably hard, stuff.
## CI
We use GitHub Actions for CI. Most things, including formatting, are checked by `cargo test`, so if `cargo test` passes locally, that's a good sign that CI will be green as well. The only exception is that some long-running tests are skipped locally by default. Use `env RUN_SLOW_TESTS=1 cargo test` to run the full suite.
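Gating slow tests on an environment variable is a simple pattern. Here is a minimal sketch of how such a guard could look; the helper name `skip_slow_tests` is illustrative, not rust-analyzer's actual API:

```rust
use std::env;

/// Slow tests run only when RUN_SLOW_TESTS is set in the environment;
/// the value of the variable itself does not matter, only its presence.
fn skip_slow_tests(run_slow_tests: Option<&str>) -> bool {
    run_slow_tests.is_none()
}

#[test]
fn very_slow_integration_test() {
    let run_slow = env::var("RUN_SLOW_TESTS").ok();
    if skip_slow_tests(run_slow.as_deref()) {
        eprintln!("skipping slow test; use `env RUN_SLOW_TESTS=1 cargo test` to run it");
        return;
    }
    // ... expensive end-to-end work would go here ...
}
```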
We use bors-ng to enforce the not rocket science rule.

You can run `cargo xtask install-pre-commit-hook` to install a git hook that runs rustfmt on commit.
## Code organization
All Rust code lives in the `crates` top-level directory, and is organized as a single Cargo workspace. The `editors` top-level directory contains code for integrating with editors. Currently, it contains the plugin for VS Code (in TypeScript). The `docs` top-level directory contains both developer and user documentation.

We have some automation infra in Rust in the `xtask` package. It contains stuff like formatting checking, code generation and powers `cargo xtask install`. The latter syntax is achieved with the help of cargo aliases (see the `.cargo` directory).
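The alias mechanism itself is ordinary Cargo configuration. As a sketch (the exact contents of the repository's `.cargo/config` may differ), an alias like this makes `cargo xtask <args>` expand to `cargo run --package xtask --bin xtask -- <args>`:

```toml
# .cargo/config (illustrative)
[alias]
xtask = "run --package xtask --bin xtask --"
```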
## Launching rust-analyzer
Debugging the language server can be tricky: LSP is rather chatty, so driving it from the command line is not really feasible, and driving it via VS Code requires interacting with two processes.

For this reason, the best way to see how rust-analyzer works is to find a relevant test and execute it (VS Code includes an action for running a single test).

However, launching a VS Code instance with a locally built language server is possible. There's a "Run Extension (Dev Server)" launch configuration for this.
In general, I use one of the following workflows for fixing bugs and implementing features.
If the problem concerns only internal parts of rust-analyzer (i.e., I don't need to touch the `ra_lsp_server` crate or TypeScript code), there is a unit-test for it. So, I use the `Rust Analyzer: Run` action in VS Code to run this single test, and then just do printf-driven development/debugging. After I'm done, I use `cargo xtask install --server` and the `Reload Window` action in VS Code to sanity check that the thing works as I expect.
If the problem concerns only the VS Code extension, I use the `Run Extension` launch configuration from `launch.json`. Notably, this uses the usual `ra_lsp_server` binary from `PATH`. For this it is important to have the following in the `settings.json` file:

```json
{
    "rust-analyzer.raLspServerPath": "ra_lsp_server"
}
```
After I am done with the fix, I use `cargo xtask install --client-code` to try the new extension for real.
If I need to fix something in the `ra_lsp_server` crate, I feel sad because it's on the boundary between the two processes, and working there is slow. I usually just `cargo xtask install --server` and poke changes from my live environment. Note that this uses `--release`, which is usually faster overall, because loading the stdlib into a debug version of rust-analyzer takes a lot of time. To speed things up, sometimes I open a temporary hello-world project which has `"rust-analyzer.withSysroot": false`
in `.vscode/settings.json`. This flag causes rust-analyzer to skip loading the sysroot, which greatly reduces the amount of things rust-analyzer needs to do, and makes printf's more useful. Note that you should only use the `eprint!` family of macros for debugging: stdout is used for LSP communication, and `print!` would break it.
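The stdout/stderr split can be illustrated with a minimal sketch: LSP messages are framed with a `Content-Length` header and written to stdout, so any stray `print!` would corrupt the stream, while `eprintln!` is always safe. The framing function here is an illustration, not rust-analyzer's actual I/O layer:

```rust
use std::io::{self, Write};

/// Writes one LSP message to `out` with the Content-Length framing the
/// protocol requires; anything else written to the same stream would
/// desynchronize the client's parser.
fn write_lsp_message(out: &mut impl Write, json_payload: &str) -> io::Result<()> {
    write!(out, "Content-Length: {}\r\n\r\n{}", json_payload.len(), json_payload)
}

fn main() -> io::Result<()> {
    // Debug output goes to stderr, which VS Code shows in the Output panel.
    eprintln!("about to send a message");
    // Protocol output goes to stdout, and only protocol output.
    let stdout = io::stdout();
    write_lsp_message(&mut stdout.lock(), r#"{"jsonrpc":"2.0","method":"exit"}"#)
}
```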
If I need to fix something simultaneously in the server and in the client, I feel even more sad. I don't have a specific workflow for this case.
Additionally, I use `cargo run --release -p ra_cli -- analysis-stats path/to/some/rust/crate` to run a batch analysis. This is primarily useful for performance optimizations, or for bug minimization.
## Logging
Logging is done by both rust-analyzer and VS Code, so it might be tricky to figure out where logs go.
Inside rust-analyzer, we use the standard `log` crate for logging, and `env_logger` as the logging frontend. By default, the log goes to stderr, but stderr itself is processed by VS Code.

To see stderr in the running VS Code instance, go to the "Output" tab of the panel and select `rust-analyzer`. This shows `eprintln!` output as well. Note that `stdout` is used for the actual protocol, so `println!` will break things.
To log all communication between the server and the client, there are two choices:

- You can log on the server side, by running something like `env RUST_LOG=gen_lsp_server=trace code .`.
- You can log on the client side, by enabling the `"rust-analyzer.trace.server": "verbose"` workspace setting. These logs are shown in a separate tab in the output and could be used with the LSP inspector. Kudos to @DJMcNab for setting this awesome infra up!
There are also two VS Code commands which might be of interest:

- `Rust Analyzer: Status` shows some memory-usage statistics. To take full advantage of it, you need to compile rust-analyzer with jemalloc support:

  ```
  $ cargo install --path crates/ra_lsp_server --force --features jemalloc
  ```

  There's an alias for this: `cargo xtask install --server --jemalloc`.

- `Rust Analyzer: Syntax Tree` shows the syntax tree of the current file/selection.
## Profiling
We have a built-in hierarchical profiler; you can enable it by using the `RA_PROFILE` env var:

```
RA_PROFILE=*             // dump everything
RA_PROFILE=foo|bar|baz   // enable only selected entries
RA_PROFILE=*@3>10        // dump everything, up to depth 3, if it takes more than 10 ms
```

In particular, I have `export RA_PROFILE='*>10'` in my shell profile.
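A hierarchical profiler of this kind can be sketched with an RAII guard: entering a scope records the start time, and dropping the guard prints the label and elapsed time if a threshold is exceeded. This is a simplified illustration, not rust-analyzer's actual profiler, which also handles the depth and filter syntax shown above:

```rust
use std::time::{Duration, Instant};

/// RAII guard: records the start time on creation and, on drop, prints
/// the label and elapsed time to stderr if it exceeded the threshold.
struct ProfileScope {
    label: &'static str,
    threshold: Duration,
    start: Instant,
}

fn profile(label: &'static str, threshold: Duration) -> ProfileScope {
    ProfileScope { label, threshold, start: Instant::now() }
}

impl Drop for ProfileScope {
    fn drop(&mut self) {
        let elapsed = self.start.elapsed();
        if elapsed > self.threshold {
            eprintln!("{:?} - {}", elapsed, self.label);
        }
    }
}

fn expensive_analysis() {
    // The guard is dropped at the end of the scope, printing the timing
    // only if the whole scope took more than 10 ms.
    let _p = profile("expensive_analysis", Duration::from_millis(10));
    std::thread::sleep(Duration::from_millis(20));
}
```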
To measure time for from-scratch analysis, use something like this:

```
$ cargo run --release -p ra_cli -- analysis-stats ../chalk/
```

For measuring time of incremental analysis, use either of these:

```
$ cargo run --release -p ra_cli -- analysis-bench ../chalk/ --highlight ../chalk/chalk-engine/src/logic.rs
$ cargo run --release -p ra_cli -- analysis-bench ../chalk/ --complete ../chalk/chalk-engine/src/logic.rs:94:0
```