reorg docs

Aleksey Kladov 2020-08-02 14:37:27 +02:00
parent e4d0f19b01
commit edee52fa57
2 changed files with 263 additions and 268 deletions


@@ -50,277 +50,85 @@ We use bors-ng to enforce the [not rocket science](https://graydon2.dreamwidth.o
You can run `cargo xtask install-pre-commit-hook` to install git-hook to run rustfmt on commit.
# Code organization
All Rust code lives in the `crates` top-level directory, and is organized as a
single Cargo workspace. The `editors` top-level directory contains code for
integrating with editors. Currently, it contains the plugin for VS Code (in
TypeScript). The `docs` top-level directory contains both developer and user
documentation.
We have some automation infra in Rust in the `xtask` package. It contains
stuff like formatting checking, code generation and powers `cargo xtask install`.
The latter syntax is achieved with the help of cargo aliases (see `.cargo`
directory).
# Launching rust-analyzer # Launching rust-analyzer
Debugging the language server can be tricky: LSP is rather chatty, so driving it Debugging the language server can be tricky.
from the command line is not really feasible, driving it via VS Code requires LSP is rather chatty, so driving it from the command line is not really feasible, driving it via VS Code requires interacting with two processes.
interacting with two processes.
For this reason, the best way to see how rust-analyzer works is to find a For this reason, the best way to see how rust-analyzer works is to find a relevant test and execute it.
relevant test and execute it (VS Code includes an action for running a single VS Code & Emacs include an action for running a single test.
test).
However, launching a VS Code instance with a locally built language server is Launching a VS Code instance with a locally built language server is also possible.
possible. There's **"Run Extension (Debug Build)"** launch configuration for this. There's **"Run Extension (Debug Build)"** launch configuration for this in VS Code.
In general, I use one of the following workflows for fixing bugs and In general, I use one of the following workflows for fixing bugs and implementing features:
implementing features.
If the problem concerns only internal parts of rust-analyzer (i.e. I don't need If the problem concerns only internal parts of rust-analyzer (i.e. I don't need to touch the `rust-analyzer` crate or TypeScript code), there is a unit-test for it.
to touch the `rust-analyzer` crate or TypeScript code), there is a unit-test for it. So, I use **Rust Analyzer: Run** action in VS Code to run this single test, and then just do printf-driven development/debugging.
So, I use **Rust Analyzer: Run** action in VS Code to run this single test, and As a sanity check after I'm done, I use `cargo xtask install --server` and **Reload Window** action in VS Code to verify that the thing works as I expect.
then just do printf-driven development/debugging. As a sanity check after I'm
done, I use `cargo xtask install --server` and **Reload Window** action in VS
Code to sanity check that the thing works as I expect.
If the problem concerns only the VS Code extension, I use **Run Installed Extension** If the problem concerns only the VS Code extension, I use **Run Installed Extension** launch configuration from `launch.json`.
launch configuration from `launch.json`. Notably, this uses the usual Notably, this uses the usual `rust-analyzer` binary from `PATH`.
`rust-analyzer` binary from `PATH`. For this, it is important to have the following For this, it is important to have the following in your `settings.json` file:
in your `settings.json` file:
```json ```json
{ {
"rust-analyzer.serverPath": "rust-analyzer" "rust-analyzer.serverPath": "rust-analyzer"
} }
``` ```
After I am done with the fix, I use `cargo After I am done with the fix, I use `cargo xtask install --client-code` to try the new extension for real.
xtask install --client-code` to try the new extension for real.
If I need to fix something in the `rust-analyzer` crate, I feel sad because it's If I need to fix something in the `rust-analyzer` crate, I feel sad because it's on the boundary between the two processes, and working there is slow.
on the boundary between the two processes, and working there is slow. I usually I usually just `cargo xtask install --server` and poke changes from my live environment.
just `cargo xtask install --server` and poke changes from my live environment. Note that this uses `--release`, which is usually faster overall, because loading stdlib into debug version of rust-analyzer takes a lot of time.
Note that this uses `--release`, which is usually faster overall, because To speed things up, sometimes I open a temporary hello-world project which has `"rust-analyzer.withSysroot": false` in `.code/settings.json`.
loading stdlib into debug version of rust-analyzer takes a lot of time. To speed This flag causes rust-analyzer to skip loading the sysroot, which greatly reduces the amount of things rust-analyzer needs to do, and makes printf's more useful.
things up, sometimes I open a temporary hello-world project which has Note that you should only use the `eprint!` family of macros for debugging: stdout is used for LSP communication, and `print!` would break it.
`"rust-analyzer.withSysroot": false` in `.code/settings.json`. This flag causes
rust-analyzer to skip loading the sysroot, which greatly reduces the amount of
things rust-analyzer needs to do, and makes printf's more useful. Note that you
should only use the `eprint!` family of macros for debugging: stdout is used for LSP
communication, and `print!` would break it.
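
As a quick illustration (the handler function below is made up for this example, not an actual rust-analyzer API):

```rust
// Debug output must go to stderr: the editor reads LSP JSON-RPC messages from
// stdout, so a stray `println!` would corrupt the protocol stream.
fn handle_hover_request() {
    eprintln!("handling hover request"); // fine: stderr is free for debug output
    // println!("handling hover request"); // not fine: would break LSP framing
}

fn main() {
    handle_hover_request();
}
```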

If I need to fix something simultaneously in the server and in the client, I feel even more sad.
I don't have a specific workflow for this case.

Additionally, I use `cargo run --release -p rust-analyzer -- analysis-stats path/to/some/rust/crate` to run a batch analysis.
This is primarily useful for performance optimizations, or for bug minimization.

## Parser Tests

Tests for the parser (`ra_parser`) live in the `ra_syntax` crate (see the `test_data` directory).
There are two kinds of tests:

* Manually written test cases in `parser/ok` and `parser/err`
* "Inline" tests in `parser/inline` (these are generated from comments in the `ra_parser` crate)

The purpose of inline tests is not to achieve full coverage by test cases, but to explain to the reader of the code what each particular `if` and `match` is responsible for.
If you are tempted to add a large inline test, it might be a good idea to leave only the simplest example in place, and move the test to a manual `parser/ok` test.
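
To make this concrete, here is a rough sketch of what an inline test looks like in the parser source (the test name, snippet, and function below are invented for illustration; the exact comment format is whatever the code generator expects):

```rust
// An inline test is a specially formatted comment placed right above the
// parsing function it documents. Code generation extracts the commented-out
// snippet into a file under `parser/inline`, where it runs like a manual test.

// test block_expr_example
// fn foo() {
//     let _x = { 92 };
// }
fn block_expr() {
    // ... the actual parsing logic for block expressions would live here ...
}

fn main() {
    block_expr();
}
```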

To update the test data, run with the `UPDATE_EXPECT` variable:

```bash
env UPDATE_EXPECT=1 cargo qt
```

After adding a new inline test you need to run `cargo xtask codegen` and also update the test data as described above.

## TypeScript Tests

If you change files under `editors/code` and would like to run the tests and linter, install npm and run:

```bash
cd editors/code
npm ci
npm run lint
```

# Code organization

All Rust code lives in the `crates` top-level directory, and is organized as a single Cargo workspace.
The `editors` top-level directory contains code for integrating with editors.
Currently, it contains the plugin for VS Code (in TypeScript).
The `docs` top-level directory contains both developer and user documentation.

We have some automation infra in Rust in the `xtask` package.
It contains stuff like formatting checks and code generation, and powers `cargo xtask install`.
The latter syntax is achieved with the help of cargo aliases (see the `.cargo` directory).

# Architecture Invariants

@@ -355,35 +163,11 @@ The main IDE crate (`ra_ide`) uses "Plain Old Data" for the API.
Rather than talking in definitions and references, it talks in Strings and textual offsets.
In general, the API is centered around UI concerns -- the result of the call is what the user sees in the editor, and not what the compiler sees underneath.
The results are 100% Rust specific though.
Shout outs to LSP developers for popularizing the idea that "UI" is a good place to draw a boundary at.

# Code Style & Review Process

See [./style.md](./style.md).

# Logging

docs/dev/style.md Normal file

@@ -0,0 +1,211 @@
Our approach to "clean code" is two-fold:
* We generally don't block PRs on style changes.
* At the same time, all code in rust-analyzer is constantly refactored.
It is explicitly OK for a reviewer to flag only some nits in the PR, and then send a follow-up cleanup PR for things which are easier to explain by example, cc-ing the original author.
Sending small cleanup PRs (like renaming a single local variable) is encouraged.
# Scale of Changes
Everyone knows that it's better to send small & focused pull requests.
The problem is, sometimes you *have* to, e.g., rewrite the whole compiler, and that just doesn't fit into a set of isolated PRs.
The main things to keep an eye on are the boundaries between various components.
There are three kinds of changes:
1. Internals of a single component are changed.
Specifically, you don't change any `pub` items.
A good example here would be an addition of a new assist.
2. API of a component is expanded.
Specifically, you add a new `pub` function which wasn't there before.
A good example here would be expansion of assist API, for example, to implement lazy assists or assists groups.
3. A new dependency between components is introduced.
Specifically, you add a `pub use` reexport from another crate or you add a new line to the `[dependencies]` section of `Cargo.toml`.
A good example here would be adding reference search capability to the assists crates.
For the first group, the change is generally merged as long as:
* it works for the happy case,
* it has tests,
* it doesn't panic for the unhappy case.
For the second group, the change would be subjected to quite a bit of scrutiny and iteration.
The new API needs to be right (or at least easy to change later).
The actual implementation doesn't matter that much.
It's very important to minimize the amount of changed lines of code for changes of the second kind.
Often, you start doing a change of the first kind, only to realise that you need to elevate to a change of the second kind.
In this case, we'll probably ask you to split API changes into a separate PR.
Changes of the third group should be pretty rare, so we don't specify any specific process for them.
That said, adding an innocent-looking `pub use` is a very simple way to break encapsulation; keep an eye on it!
Note: if you enjoyed this abstract hand-waving about boundaries, you might appreciate
https://www.tedinski.com/2018/02/06/system-boundaries.html
# Crates.io Dependencies
We try to be very conservative with usage of crates.io dependencies.
Don't use small "helper" crates (exception: `itertools` is allowed).
If there's some general reusable bit of code you need, consider adding it to the `stdx` crate.
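
For instance, a small string helper like the sketch below (hypothetical; a similar utility may or may not already live in `stdx`) belongs in `stdx` rather than in a new crates.io dependency:

```rust
// A hypothetical example of the kind of small, generally useful helper that
// fits better in `stdx` than behind a new external dependency.
pub fn to_lower_snake_case(s: &str) -> String {
    let mut buf = String::with_capacity(s.len());
    for (i, c) in s.chars().enumerate() {
        if c.is_uppercase() {
            if i > 0 {
                buf.push('_');
            }
            buf.extend(c.to_lowercase());
        } else {
            buf.push(c);
        }
    }
    buf
}

fn main() {
    assert_eq!(to_lower_snake_case("StructDef"), "struct_def");
}
```
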
# Minimal Tests
Most tests in rust-analyzer start with a snippet of Rust code.
These snippets should be minimal -- if you copy-paste a snippet of real code into the tests, make sure to remove everything which could be removed.
There are many benefits to this:
* less to read or to scroll past
* easier to understand what exactly is tested
* less stuff printed during printf-debugging
* less time to run the test
It also makes sense to format snippets more compactly (for example, by placing enum definitions like `enum E { Foo, Bar }` on a single line),
as long as they are still readable.
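
As a hedged sketch of both points (the `check` helper below is hypothetical and only stands in for running the real analysis on the fixture string):

```rust
// The fixture passed to `check` is the embedded Rust snippet a test analyzes;
// keep it minimal and compactly formatted.
fn check(fixture: &str) {
    assert!(!fixture.is_empty()); // stand-in for the real test helper
}

#[test]
fn fixture_is_minimal_and_compact() {
    // Good: only what the test needs, compactly formatted.
    check("enum E { Foo, Bar } fn f(e: E) {}");

    // Not as good: a copy-pasted real-world snippet with unused imports,
    // comments, and extra items buries the part the test actually exercises.
}
```
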
## Order of Imports
We separate import groups with blank lines
```rust
mod x;
mod y;
// First std.
use std::{ ... }
// Second, external crates (both crates.io crates and other rust-analyzer crates).
use crate_foo::{ ... }
use crate_bar::{ ... }
// Then current crate.
use crate::{}
// Finally, parent and child modules, but prefer `use crate::`.
use super::{}
```
Module declarations come before the imports.
Order them in "suggested reading order" for a person new to the code base.
## Import Style
Items from `hir` and `ast` should be used qualified:
```rust
// Good
use ra_syntax::ast;
fn frobnicate(func: hir::Function, strukt: ast::StructDef) {}
// Not as good
use hir::Function;
use ra_syntax::ast::StructDef;
fn frobnicate(func: Function, strukt: StructDef) {}
```
Avoid local `use MyEnum::*` imports.
Prefer `use crate::foo::bar` to `use super::bar`.
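
A small sketch of the `use MyEnum::*` point, using a `std` enum so it stands alone: qualifying the variants keeps it obvious where the names come from.

```rust
use std::cmp::Ordering;

// Good: variants stay qualified at the use site.
fn describe(ord: Ordering) -> &'static str {
    match ord {
        Ordering::Less => "less",
        Ordering::Equal => "equal",
        Ordering::Greater => "greater",
    }
}

// Not as good: a local glob import hides where `Less` and friends come from.
fn describe_glob(ord: Ordering) -> &'static str {
    use std::cmp::Ordering::*;
    match ord {
        Less => "less",
        Equal => "equal",
        Greater => "greater",
    }
}

fn main() {
    assert_eq!(describe(Ordering::Less), describe_glob(Ordering::Less));
}
```
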
## Order of Items
Optimize for the reader who sees the file for the first time, and wants to get a general idea about what's going on.
People read things from top to bottom, so place most important things first.
Specifically, if all items except one are private, always put the non-private item on top.
Put `struct`s and `enum`s first, functions and impls last.
Do
```rust
// Good
struct Foo {
    bars: Vec<Bar>,
}

struct Bar;
```
rather than
```rust
// Not as good
struct Bar;

struct Foo {
    bars: Vec<Bar>,
}
```
## Variable Naming
We generally use boring and long names for local variables ([yay code completion](https://github.com/rust-analyzer/rust-analyzer/pull/4162#discussion_r417130973)).
The default name is a lowercased name of the type: `global_state: GlobalState`.
Avoid ad-hoc acronyms and contractions, but use the ones that exist consistently (`db`, `ctx`, `acc`).
The default name for "result of the function" local variable is `res`.
The default name for "I don't really care about the name" variable is `it`.
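
A brief sketch of these conventions in context (the types and function are invented for illustration):

```rust
// Hypothetical types, just to give the names something to bind to.
struct AnalysisDatabase;
struct GlobalState;

fn word_count(db: &AnalysisDatabase, global_state: &GlobalState, text: &str) -> usize {
    // `db` is an accepted contraction; `global_state` is the boring lowercased
    // type name; `res` is the conventional name for the function's result.
    let _ = (db, global_state);
    let res = text.split_whitespace().count();
    res
}

fn main() {
    let db = AnalysisDatabase;
    let global_state = GlobalState;
    assert_eq!(word_count(&db, &global_state, "hello world"), 2);
}
```
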
## Collection types
We prefer `rustc_hash::FxHashMap` and `rustc_hash::FxHashSet` instead of the ones in `std::collections`.
They use a hasher that's slightly faster and using them consistently will reduce code size by some small amount.
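
A minimal sketch, assuming `rustc-hash` is already available as a workspace dependency:

```rust
// FxHashMap/FxHashSet are drop-in replacements for the std collections,
// just with a faster (non-DoS-resistant) hasher.
use rustc_hash::{FxHashMap, FxHashSet};

fn main() {
    let mut counts: FxHashMap<&str, u32> = FxHashMap::default();
    *counts.entry("word").or_insert(0) += 1;

    let mut seen: FxHashSet<&str> = FxHashSet::default();
    seen.insert("word");

    assert_eq!(counts["word"], 1);
    assert!(seen.contains("word"));
}
```
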
## Preconditions
Function preconditions should generally be expressed in types and provided by the caller (rather than checked by callee):
```rust
// Good
fn frobnicate(walrus: Walrus) {
    ...
}

// Not as good
fn frobnicate(walrus: Option<Walrus>) {
    let walrus = match walrus {
        Some(it) => it,
        None => return,
    };
    ...
}
```
## Premature Pessimization
Avoid writing code which is slower than it needs to be.
Don't allocate a `Vec` where an iterator would do, don't allocate strings needlessly.
```rust
// Good
use itertools::Itertools;
let (first_word, second_word) = match text.split_ascii_whitespace().collect_tuple() {
    Some(it) => it,
    None => return,
};

// Not as good
let words = text.split_ascii_whitespace().collect::<Vec<_>>();
if words.len() != 2 {
    return;
}
```
## Documentation
For `.md` and `.adoc` files, prefer a sentence-per-line format, don't wrap lines.
If the line is too long, you want to split the sentence in two :-)
## Commit Style
We don't have specific rules around git history hygiene.
Maintaining clean git history is encouraged, but not enforced.
We use rebase workflow, it's OK to rewrite history during PR review process.
Avoid @mentioning people in commit messages and pull request descriptions (they are added to the commit message by bors).
Such messages create a lot of duplicate notification traffic during rebases.