Fix typos in ARCHITECTURE.md and a number of crates

specifically: gen_lsp_server, ra_arena, ra_cli, ra_db, ra_hir
Marcus Klaas de Vries 2019-01-09 00:47:12 +01:00
parent f8261d611a
commit 0b8fbb4fad
23 changed files with 150 additions and 91 deletions

View file

@ -1,6 +1,6 @@
# Architecture
This document describes high-level architecture of rust-analyzer.
This document describes the high-level architecture of rust-analyzer.
If you want to familiarize yourself with the code base, you are just
in the right place!
@ -12,10 +12,10 @@ On the highest level, rust-analyzer is a thing which accepts input source code
from the client and produces a structured semantic model of the code.
More specifically, input data consists of a set of test files (`(PathBuf,
String)` pairs) and an information about project structure, the so called
`CrateGraph`. Crate graph specifies which files are crate roots, which cfg flags
are specified for each crate (TODO: actually implement this) and what are
dependencies between the crates. The analyzer keeps all these input data in
String)` pairs) and information about project structure, captured in the so called
`CrateGraph`. The crate graph specifies which files are crate roots, which cfg
flags are specified for each crate (TODO: actually implement this) and what
dependencies exist between the crates. The analyzer keeps all this input data in
memory and never does any IO. Because the input data is source code, which
typically measures in tens of megabytes at most, keeping all input data in
memory is OK.
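
To make that input concrete, here is a minimal sketch in plain Rust (the type and field names are illustrative, not the actual `ra_db` definitions): the whole input is in-memory text plus a description of the crates, and nothing is read from disk afterwards.

```rust
use std::path::PathBuf;

/// Illustrative only: the analyzer's whole input is a set of in-memory files
/// plus a description of how those files form crates. No IO happens past this point.
struct AnalysisInput {
    /// All source text, already read into memory by the client.
    files: Vec<(PathBuf, String)>,
    /// Which files are crate roots and how the crates depend on each other
    /// (a stand-in for `ra_db`'s `CrateGraph`).
    crate_roots: Vec<usize>,
    dependencies: Vec<(usize, usize)>,
}

fn main() {
    let input = AnalysisInput {
        files: vec![(PathBuf::from("src/lib.rs"), "pub fn f() {}".to_string())],
        crate_roots: vec![0],
        dependencies: vec![],
    };
    // A few tens of megabytes of text at most, so keeping it all in memory is fine.
    let total_bytes: usize = input.files.iter().map(|(_, text)| text.len()).sum();
    println!("{} files, {} bytes", input.files.len(), total_bytes);
}
```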
@ -28,8 +28,8 @@ declarations, etc.
The client can submit a small delta of input data (typically, a change to a
single file) and get a fresh code model which accounts for changes.
Underlying engine makes sure that model is computed lazily (on-demand) and can
be quickly updated for small modifications.
The underlying engine makes sure that model is computed lazily (on-demand) and
can be quickly updated for small modifications.
## Code generation
@ -37,7 +37,7 @@ be quickly updated for small modifications.
Some of the components of this repository are generated through automatic
processes. These are outlined below:
- `gen-syntax`: The kinds of tokens are reused in several places, so a generator
- `gen-syntax`: The kinds of tokens that are reused in several places, so a generator
is used. We use tera templates to generate the files listed below, based on
the grammar described in [grammar.ron]:
- [ast/generated.rs][ast generated] in `ra_syntax` based on
@ -58,17 +58,16 @@ processes. These are outlined below:
### `crates/ra_syntax`
Rust syntax tree structure and parser. See
[RFC](https://github.com/rust-lang/rfcs/pull/2256) for some design
notes.
[RFC](https://github.com/rust-lang/rfcs/pull/2256) for some design notes.
- [rowan](https://github.com/rust-analyzer/rowan) library is used for constructing syntax trees.
- `grammar` module is the actual parser. It is a hand-written recursive descent parsers, which
produces a sequence of events like "start node X", "finish not Y". It works similarly to [kotlin parser](https://github.com/JetBrains/kotlin/blob/4d951de616b20feca92f3e9cc9679b2de9e65195/compiler/frontend/src/org/jetbrains/kotlin/parsing/KotlinParsing.java),
which is a good source for inspiration for dealing with syntax errors and incomplete input. Original [libsyntax parser](https://github.com/rust-lang/rust/blob/6b99adeb11313197f409b4f7c4083c2ceca8a4fe/src/libsyntax/parse/parser.rs)
- `grammar` module is the actual parser. It is a hand-written recursive descent parser, which
produces a sequence of events like "start node X", "finish not Y". It works similarly to [kotlin's parser](https://github.com/JetBrains/kotlin/blob/4d951de616b20feca92f3e9cc9679b2de9e65195/compiler/frontend/src/org/jetbrains/kotlin/parsing/KotlinParsing.java),
which is a good source of inspiration for dealing with syntax errors and incomplete input. Original [libsyntax parser](https://github.com/rust-lang/rust/blob/6b99adeb11313197f409b4f7c4083c2ceca8a4fe/src/libsyntax/parse/parser.rs)
is what we use for the definition of the Rust language.
- `parser_api/parser_impl` bridges the tree-agnostic parser from `grammar` with `rowan` trees.
This is the thing that turns a flat list of events into a tree (see `EventProcessor`)
- `ast` a type safe API on top of the raw `rowan` tree.
- `ast` provides a type safe API on top of the raw `rowan` tree.
- `grammar.ron` RON description of the grammar, which is used to
generate `syntax_kinds` and `ast` modules, using `cargo gen-syntax` command.
- `algo`: generic tree algorithms, including `walk` for O(1) stack
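
The "start node X" / "finish node Y" events mentioned for `grammar` and `parser_impl` above can be pictured with a small self-contained sketch (a toy version of what `EventProcessor` does, not the real implementation):

```rust
// Illustrative sketch of the event-based parsing idea: the grammar emits a
// flat stream of events, and a separate pass folds them into a tree.
#[derive(Debug)]
enum Event {
    StartNode { kind: &'static str },
    Token { text: String },
    FinishNode,
}

#[derive(Debug)]
struct Node {
    kind: &'static str,
    children: Vec<Child>,
}

#[derive(Debug)]
enum Child {
    Node(Node),
    Token(String),
}

/// Folds a flat event stream into a tree (a toy version of `EventProcessor`).
fn process(events: Vec<Event>) -> Node {
    let mut stack: Vec<Node> = Vec::new();
    for event in events {
        match event {
            Event::StartNode { kind } => stack.push(Node { kind, children: Vec::new() }),
            Event::Token { text } => stack.last_mut().unwrap().children.push(Child::Token(text)),
            Event::FinishNode => {
                let node = stack.pop().unwrap();
                match stack.last_mut() {
                    Some(parent) => parent.children.push(Child::Node(node)),
                    None => return node,
                }
            }
        }
    }
    unreachable!("unbalanced events")
}

fn main() {
    // "start node X", a few tokens, "finish node X"
    let events = vec![
        Event::StartNode { kind: "BIN_EXPR" },
        Event::Token { text: "1".into() },
        Event::Token { text: "+".into() },
        Event::Token { text: "2".into() },
        Event::FinishNode,
    ];
    println!("{:?}", process(events));
}
```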
@ -90,7 +89,7 @@ fixes a bug in the grammar.
We use the [salsa](https://github.com/salsa-rs/salsa) crate for incremental and
on-demand computation. Roughly, you can think of salsa as a key-value store, but
it also can compute derived values using specified functions. The `ra_db` crate
provides a basic infrastructure for interacting with salsa. Crucially, it
provides basic infrastructure for interacting with salsa. Crucially, it
defines most of the "input" queries: facts supplied by the client of the
analyzer. Reading the docs of the `ra_db::input` module should be useful:
everything else is strictly derived from those inputs.
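
As a rough mental model of the salsa idea (a hand-rolled illustration, not the actual salsa API): inputs are set directly, while derived values are pure functions of the inputs and can therefore be cached and invalidated.

```rust
use std::collections::HashMap;

/// A toy "database": input facts are set directly, derived values are
/// computed from inputs on demand and memoized. Real salsa also tracks
/// dependencies so it can invalidate only what a change actually affects.
struct ToyDatabase {
    // Input query: set by the client (file contents, in this sketch).
    file_text: HashMap<u32, String>,
    // Cache for a derived query (line count per file, in this sketch).
    line_count_cache: HashMap<u32, usize>,
}

impl ToyDatabase {
    fn set_file_text(&mut self, file_id: u32, text: String) {
        self.file_text.insert(file_id, text);
        // A change to an input invalidates derived values that depend on it.
        self.line_count_cache.remove(&file_id);
    }

    /// A derived query: strictly a function of the inputs, so it can be cached.
    fn line_count(&mut self, file_id: u32) -> usize {
        if let Some(&cached) = self.line_count_cache.get(&file_id) {
            return cached;
        }
        let computed = self.file_text[&file_id].lines().count();
        self.line_count_cache.insert(file_id, computed);
        computed
    }
}

fn main() {
    let mut db = ToyDatabase { file_text: HashMap::new(), line_count_cache: HashMap::new() };
    db.set_file_text(0, "fn main() {\n}\n".to_string());
    assert_eq!(db.line_count(0), 2); // computed
    assert_eq!(db.line_count(0), 2); // served from the cache
}
```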
@ -102,7 +101,7 @@ HIR provides high-level "object oriented" access to Rust code.
The principal difference between HIR and syntax trees is that HIR is bound to a
particular crate instance. That is, it has cfg flags and features applied (in
theory, in practice this is to be implemented). So, the relation between
syntax and HIR is many-to-one. The `source_binder` modules is responsible for
syntax and HIR is many-to-one. The `source_binder` module is responsible for
guessing a HIR for a particular source position.
Underneath, HIR works on top of salsa, using a `HirDatabase` trait.
@ -111,12 +110,12 @@ Underneath, HIR works on top of salsa, using a `HirDatabase` trait.
A stateful library for analyzing many Rust files as they change. `AnalysisHost`
is a mutable entity (clojure's atom) which holds the current state, incorporates
changes and handles out `Analysis` --- an immutable and consistent snapshot of
world state at a point in time, which actually powers analysis.
changes and hands out `Analysis` --- an immutable and consistent snapshot of
the world state at a point in time, which actually powers analysis.
One interesting aspect of analysis is its support for cancellation. When a
change is applied to `AnalysisHost`, first all currently active snapshots are
cancelled. Only after all snapshots are dropped the change actually affects the
canceled. Only after all snapshots are dropped the change actually affects the
database.
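
The host/snapshot split and the cancellation behaviour can be sketched with toy types (this is not the real `ra_ide_api` API, just the shape of the pattern): the host owns the state, snapshots share it immutably, and applying a change first raises a "canceled" flag that long-running reads are expected to check.

```rust
use std::sync::atomic::{AtomicBool, Ordering};
use std::sync::Arc;

/// Toy illustration of the AnalysisHost / Analysis split. The real
/// implementation snapshots a salsa database instead of cloning text.
struct ToyHost {
    text: Arc<String>,
    canceled: Arc<AtomicBool>,
}

/// An immutable, consistent view of the world at one point in time.
struct ToySnapshot {
    text: Arc<String>,
    canceled: Arc<AtomicBool>,
}

impl ToyHost {
    fn snapshot(&self) -> ToySnapshot {
        ToySnapshot { text: Arc::clone(&self.text), canceled: Arc::clone(&self.canceled) }
    }

    /// Applying a change first cancels outstanding snapshots, then mutates state.
    fn apply_change(&mut self, new_text: String) {
        self.canceled.store(true, Ordering::SeqCst);
        self.canceled = Arc::new(AtomicBool::new(false));
        self.text = Arc::new(new_text);
    }
}

impl ToySnapshot {
    /// Long-running queries periodically check for cancellation and bail out.
    fn char_count(&self) -> Result<usize, &'static str> {
        if self.canceled.load(Ordering::SeqCst) {
            return Err("canceled");
        }
        Ok(self.text.chars().count())
    }
}

fn main() {
    let mut host = ToyHost {
        text: Arc::new("fn main() {}".to_string()),
        canceled: Arc::new(AtomicBool::new(false)),
    };
    let snapshot = host.snapshot();
    host.apply_change("fn main() { println!(); }".to_string());
    // The old snapshot now reports cancellation instead of stale results.
    assert_eq!(snapshot.char_count(), Err("canceled"));
}
```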
APIs in this crate are IDE centric: they take text offsets as input and produce
@ -142,7 +141,7 @@ An LSP implementation which wraps `ra_ide_api` into a langauge server protocol.
### `crates/ra_vfs`
Although `hir` and `ra_ide_api` don't do any io, we need to be able to read
Although `hir` and `ra_ide_api` don't do any IO, we need to be able to read
files from disk at the end of the day. This is what `ra_vfs` does. It also
manages overlays: "dirty" files in the editor, whose "true" contents is
different from data on disk.
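
The overlay mechanism amounts to "check the in-memory buffers first, fall back to disk"; a minimal sketch, with types that are illustrative rather than the real `ra_vfs` ones:

```rust
use std::collections::HashMap;
use std::fs;
use std::io;
use std::path::{Path, PathBuf};

/// Illustrative only: "dirty" editor buffers shadow the on-disk contents.
struct ToyVfs {
    overlays: HashMap<PathBuf, String>,
}

impl ToyVfs {
    /// The editor pushes unsaved ("dirty") contents here.
    fn set_overlay(&mut self, path: PathBuf, text: String) {
        self.overlays.insert(path, text);
    }

    /// Reads see the overlay if present, the file on disk otherwise.
    fn read(&self, path: &Path) -> io::Result<String> {
        match self.overlays.get(path) {
            Some(text) => Ok(text.clone()),
            None => fs::read_to_string(path),
        }
    }
}

fn main() -> io::Result<()> {
    let mut vfs = ToyVfs { overlays: HashMap::new() };
    vfs.set_overlay(PathBuf::from("src/lib.rs"), "// unsaved changes".to_string());
    println!("{}", vfs.read(Path::new("src/lib.rs"))?);
    Ok(())
}
```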
@ -175,16 +174,16 @@ VS Code plugin
## Common workflows
To try out VS Code extensions, run `cargo install-code`. This installs both the
`ra_lsp_server` binary and VS Code extension. To install only the binary, use
`ra_lsp_server` binary and the VS Code extension. To install only the binary, use
`cargo install --path crates/ra_lsp_server --force`
To see logs from the language server, set `RUST_LOG=info` env variable. To see
all communication between the server and the client, use
`RUST_LOG=gen_lsp_server=debug` (will print quite a bit of stuff).
`RUST_LOG=gen_lsp_server=debug` (this will print quite a bit of stuff).
To run tests, just `cargo test`.
To work on VS Code extension, launch code inside `editors/code` and use `F5` to
To work on the VS Code extension, launch code inside `editors/code` and use `F5` to
launch/debug. To automatically apply formatter and linter suggestions, use `npm
run fix`.

View file

@ -78,10 +78,10 @@ pub use crate::{
};
/// Main entry point: runs the server from initialization to shutdown.
/// To attach server to standard input/output streams, use `stdio_transport`
/// To attach server to standard input/output streams, use the `stdio_transport`
/// function to create corresponding `sender` and `receiver` pair.
///
///`server` should use `handle_shutdown` function to handle the `Shutdown`
/// `server` should use the `handle_shutdown` function to handle the `Shutdown`
/// request.
pub fn run_server(
caps: ServerCapabilities,
@ -104,7 +104,7 @@ pub fn run_server(
Ok(())
}
/// if `req` is `Shutdown`, respond to it and return `None`, otherwise return `Some(req)`
/// If `req` is `Shutdown`, respond to it and return `None`, otherwise return `Some(req)`
pub fn handle_shutdown(req: RawRequest, sender: &Sender<RawMessage>) -> Option<RawRequest> {
match req.cast::<Shutdown>() {
Ok((id, ())) => {

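For context, the shutdown handling that `handle_shutdown` encapsulates looks roughly like this in a toy message loop (the types below are stand-ins, not the `gen_lsp_server` API; only the behaviour, answering the `Shutdown` request and then stopping on `Exit`, is the point):

```rust
// A toy message loop illustrating the shutdown handling that `handle_shutdown`
// encapsulates; the types here are stand-ins, not the gen_lsp_server API.
#[derive(Debug)]
enum Message {
    Request { id: u64, method: String },
    Notification { method: String },
}

#[derive(Debug)]
enum Response {
    Ok { id: u64 },
}

fn main() {
    let incoming = vec![
        Message::Request { id: 1, method: "textDocument/hover".to_string() },
        Message::Request { id: 2, method: "shutdown".to_string() },
        Message::Notification { method: "exit".to_string() },
    ];

    for msg in incoming {
        match msg {
            // `handle_shutdown` plays this role: answer the Shutdown request
            // (and return None so the server stops dispatching it further).
            Message::Request { id, method } if method == "shutdown" => {
                println!("reply: {:?}", Response::Ok { id });
            }
            Message::Request { id, method } => {
                println!("dispatch {} (id {})", method, id);
            }
            // The Exit notification ends the loop.
            Message::Notification { method } if method == "exit" => break,
            Message::Notification { method } => println!("notification {}", method),
        }
    }
}
```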
View file

@ -54,7 +54,7 @@ pub enum ErrorCode {
ServerErrorEnd = -32000,
ServerNotInitialized = -32002,
UnknownErrorCode = -32001,
RequestCancelled = -32800,
RequestCanceled = -32800,
ContentModified = -32801,
}

View file

@ -8,11 +8,11 @@
//! * user types next character, while syntax highlighting *is still in
//! progress*.
//!
//! In this situation, we want to react to modification as quckly as possible.
//! In this situation, we want to react to modification as quickly as possible.
//! At the same time, in-progress results are not very interesting, because they
//! are invalidated by the edit anyway. So, we first cancel all in-flight
//! requests, and then apply modification knowing that it won't intrfere with
//! any background processing (this bit is handled by salsa, see
//! requests, and then apply modification knowing that it won't interfere with
//! any background processing (this bit is handled by salsa, see the
//! `BaseDatabase::check_canceled` method).
/// An "error" signifying that the operation was canceled.
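
The pattern behind `Canceled`/`Cancelable` can be sketched as follows (illustrative definitions; the real ones live in this module, and `check_canceled` is provided by the database):

```rust
use std::sync::atomic::{AtomicBool, Ordering};

/// Sketch of the cancellation pattern: long computations return a `Result`
/// whose error case just means "a pending change made this work obsolete".
#[derive(Debug, PartialEq)]
struct Canceled;

type Cancelable<T> = Result<T, Canceled>;

struct Db {
    canceled_flag: AtomicBool,
    text: String,
}

impl Db {
    /// The analogue of `check_canceled`: cheap to call from hot loops.
    fn check_canceled(&self) -> Cancelable<()> {
        if self.canceled_flag.load(Ordering::SeqCst) {
            Err(Canceled)
        } else {
            Ok(())
        }
    }

    fn expensive_query(&self) -> Cancelable<usize> {
        let mut total = 0;
        for line in self.text.lines() {
            // Bail out early instead of finishing work nobody will read.
            self.check_canceled()?;
            total += line.len();
        }
        Ok(total)
    }
}

fn main() {
    let db = Db { canceled_flag: AtomicBool::new(false), text: "fn main() {}\n".to_string() };
    assert_eq!(db.expensive_query(), Ok(12));
}
```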

View file

@ -1,9 +1,9 @@
/// This modules specifies the input to rust-analyzer. In some sense, this is
/// This module specifies the input to rust-analyzer. In some sense, this is
/// **the** most important module, because all other fancy stuff is strictly
/// derived from this input.
///
/// Note that neither this module, nor any other part of the analyzer's core do
/// actual IO. See `vfs` and `project_model` in `ra_lsp_server` crate for how
/// actual IO. See `vfs` and `project_model` in the `ra_lsp_server` crate for how
/// actual IO is done and lowered to input.
use std::sync::Arc;
@ -17,17 +17,17 @@ use rustc_hash::FxHashSet;
/// `FileId` is an integer which uniquely identifies a file. File paths are
/// messy and system-dependent, so most of the code should work directly with
/// `FileId`, without inspecting the path. The mapping between `FileId` and path
/// and `SourceRoot` is constant. File rename is represented as a pair of
/// and `SourceRoot` is constant. A file rename is represented as a pair of
/// deletion/creation.
#[derive(Debug, Clone, Copy, PartialEq, Eq, PartialOrd, Ord, Hash)]
pub struct FileId(pub u32);
/// Files are grouped into source roots. A source root is a directory on the
/// file systems which is watched for changes. Typically it corresponds to a
/// Cargo package. Source roots *might* be nested: in this case, file belongs to
/// the nearest enclosing source root. Path to files are always relative to a
/// source root, and analyzer does not know the root path of the source root at
/// all. So, a file from one source root can't refere a file in another source
/// Rust crate. Source roots *might* be nested: in this case, a file belongs to
/// the nearest enclosing source root. Paths to files are always relative to a
/// source root, and the analyzer does not know the root path of the source root at
/// all. So, a file from one source root can't refer to a file in another source
/// root by path.
#[derive(Debug, Clone, Copy, PartialEq, Eq, Hash, PartialOrd, Ord)]
pub struct SourceRootId(pub u32);
@ -38,15 +38,15 @@ pub struct SourceRoot {
}
/// `CrateGraph` is a bit of information which turns a set of text files into a
/// number of Rust crates. Each Crate is the `FileId` of it's root module, the
/// set of cfg flags (not yet implemented) and the set of dependencies. Note
/// number of Rust crates. Each crate is defined by the `FileId` of its root module,
/// the set of cfg flags (not yet implemented) and the set of dependencies. Note
/// that, due to cfg's, there might be several crates for a single `FileId`! As
/// in the rust-lang proper, a crate does not have a name. Instead, names are
/// specified on dependency edges. That is, a crate might be known under
/// different names in different dependant crates.
/// different names in different dependent crates.
///
/// Note that `CrateGraph` is build-system agnostic: it's a concept of the Rust
/// langauge proper, not a concept of the build system. In practice, we get
/// language proper, not a concept of the build system. In practice, we get
/// `CrateGraph` by lowering `cargo metadata` output.
#[derive(Debug, Clone, Default, PartialEq, Eq)]
pub struct CrateGraph {
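A toy illustration of the "names live on the dependency edges" point (the types and methods below are stand-ins, not the real `CrateGraph` API):

```rust
use std::collections::HashMap;

/// Toy crate graph: crates are anonymous, identified only by their root file;
/// a crate's name exists only from the point of view of its dependents.
#[derive(Debug, Clone, Copy, PartialEq, Eq, Hash)]
struct CrateId(u32);

#[derive(Debug, Default)]
struct ToyCrateGraph {
    /// Root file (as a `FileId`-like integer) of each crate.
    roots: HashMap<CrateId, u32>,
    /// Edges: (dependent, name used by the dependent, dependency).
    deps: Vec<(CrateId, String, CrateId)>,
}

impl ToyCrateGraph {
    fn add_crate(&mut self, id: CrateId, root_file: u32) {
        self.roots.insert(id, root_file);
    }

    /// The same dependency can be known under different names in different crates.
    fn add_dep(&mut self, from: CrateId, name: &str, to: CrateId) {
        self.deps.push((from, name.to_string(), to));
    }
}

fn main() {
    let mut graph = ToyCrateGraph::default();
    let (lib, app_a, app_b) = (CrateId(0), CrateId(1), CrateId(2));
    graph.add_crate(lib, 0);
    graph.add_crate(app_a, 1);
    graph.add_crate(app_b, 2);
    // `lib` is "util" to one dependent and "helpers" to another.
    graph.add_dep(app_a, "util", lib);
    graph.add_dep(app_b, "helpers", lib);
    println!("{:?}", graph);
}
```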

View file

@ -1,5 +1,5 @@
//! ra_db defines basic database traits. Concrete DB is defined by ra_ide_api.
mod cancelation;
//! ra_db defines basic database traits. The concrete DB is defined by ra_ide_api.
mod cancellation;
mod syntax_ptr;
mod input;
mod loc2id;
@ -8,7 +8,7 @@ pub mod mock;
use ra_syntax::{TextUnit, TextRange, SourceFile, TreePtr};
pub use crate::{
cancelation::{Canceled, Cancelable},
cancellation::{Canceled, Cancelable},
syntax_ptr::LocalSyntaxPtr,
input::{
FilesDatabase, FileId, CrateId, SourceRoot, SourceRootId, CrateGraph, Dependency,

View file

@ -5,7 +5,7 @@ use rustc_hash::FxHashMap;
use ra_arena::{Arena, ArenaId};
/// There are two principle ways to refer to things:
/// - by their locatinon (module in foo/bar/baz.rs at line 42)
/// - by their location (module in foo/bar/baz.rs at line 42)
/// - by their numeric id (module `ModuleId(42)`)
///
/// The first one is more powerful (you can actually find the thing in question
@ -13,7 +13,7 @@ use ra_arena::{Arena, ArenaId};
///
/// `Loc2IdMap` allows us to have a cake an eat it as well: by maintaining a
/// bidirectional mapping between positional and numeric ids, we can use compact
/// representation wich still allows us to get the actual item
/// representation which still allows us to get the actual item.
#[derive(Debug)]
struct Loc2IdMap<LOC, ID>
where

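A sketch of the bidirectional mapping described above (the real `Loc2IdMap` is generic over arena ids and uses different storage; this is just the idea):

```rust
use std::collections::HashMap;
use std::hash::Hash;

/// Sketch of a location interner: hand out small numeric ids for bulky,
/// position-based keys, while still being able to recover the original key.
struct ToyLoc2Id<Loc> {
    loc_to_id: HashMap<Loc, u32>,
    id_to_loc: Vec<Loc>,
}

impl<Loc: Clone + Eq + Hash> ToyLoc2Id<Loc> {
    fn new() -> Self {
        ToyLoc2Id { loc_to_id: HashMap::new(), id_to_loc: Vec::new() }
    }

    /// The same location always gets the same compact id (interning).
    fn intern(&mut self, loc: Loc) -> u32 {
        if let Some(&id) = self.loc_to_id.get(&loc) {
            return id;
        }
        let id = self.id_to_loc.len() as u32;
        self.id_to_loc.push(loc.clone());
        self.loc_to_id.insert(loc, id);
        id
    }

    /// Going back from the compact id to the actual item.
    fn lookup(&self, id: u32) -> &Loc {
        &self.id_to_loc[id as usize]
    }
}

fn main() {
    let mut map = ToyLoc2Id::new();
    let a = map.intern(("foo/bar/baz.rs", 42u32));
    let b = map.intern(("foo/bar/baz.rs", 42u32));
    assert_eq!(a, b); // stable, compact id
    assert_eq!(*map.lookup(a), ("foo/bar/baz.rs", 42u32));
}
```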
View file

@ -13,8 +13,8 @@ use crate::{
ty::InferenceResult,
};
/// hir::Crate describes a single crate. It's the main inteface with which
/// crate's dependencies interact. Mostly, it should be just a proxy for the
/// hir::Crate describes a single crate. It's the main interface with which
/// a crate's dependencies interact. Mostly, it should be just a proxy for the
/// root module.
#[derive(Debug, Clone, PartialEq, Eq, Hash)]
pub struct Crate {
@ -78,6 +78,7 @@ impl Module {
pub fn definition_source(&self, db: &impl HirDatabase) -> Cancelable<(FileId, ModuleSource)> {
self.definition_source_impl(db)
}
/// Returns a node which declares this module, either a `mod foo;` or a `mod foo {}`.
/// `None` for the crate root.
pub fn declaration_source(
@ -91,20 +92,24 @@ impl Module {
pub fn krate(&self, db: &impl HirDatabase) -> Cancelable<Option<Crate>> {
self.krate_impl(db)
}
/// Topmost parent of this module. Every module has a `crate_root`, but some
/// might miss `krate`. This can happen if a module's file is not included
/// into any module tree of any target from Cargo.toml.
/// might be missing `krate`. This can happen if a module's file is not included
/// in the module tree of any target in Cargo.toml.
pub fn crate_root(&self, db: &impl HirDatabase) -> Cancelable<Module> {
self.crate_root_impl(db)
}
/// Finds a child module with the specified name.
pub fn child(&self, db: &impl HirDatabase, name: &Name) -> Cancelable<Option<Module>> {
self.child_impl(db, name)
}
/// Finds a parent module.
pub fn parent(&self, db: &impl HirDatabase) -> Cancelable<Option<Module>> {
self.parent_impl(db)
}
pub fn path_to_root(&self, db: &impl HirDatabase) -> Cancelable<Vec<Module>> {
let mut res = vec![self.clone()];
let mut curr = self.clone();
@ -114,13 +119,16 @@ impl Module {
}
Ok(res)
}
/// Returns a `ModuleScope`: a set of items, visible in this module.
pub fn scope(&self, db: &impl HirDatabase) -> Cancelable<ModuleScope> {
self.scope_impl(db)
}
pub fn resolve_path(&self, db: &impl HirDatabase, path: &Path) -> Cancelable<PerNs<DefId>> {
self.resolve_path_impl(db, path)
}
pub fn problems(
&self,
db: &impl HirDatabase,
@ -140,6 +148,7 @@ impl StructField {
pub fn name(&self) -> &Name {
&self.name
}
pub fn type_ref(&self) -> &TypeRef {
&self.type_ref
}
@ -160,18 +169,21 @@ impl VariantData {
_ => &[],
}
}
pub fn is_struct(&self) -> bool {
match self {
VariantData::Struct(..) => true,
_ => false,
}
}
pub fn is_tuple(&self) -> bool {
match self {
VariantData::Tuple(..) => true,
_ => false,
}
}
pub fn is_unit(&self) -> bool {
match self {
VariantData::Unit => true,

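A short usage sketch built from the `Module` signatures shown above; the import paths and the helper function are hypothetical:

```rust
// Hypothetical helper using the Module methods shown above; import paths are
// approximate and may not match the crate layout exactly.
use ra_db::Cancelable;
use ra_hir::{Crate, HirDatabase, Module, Name};

/// Walks from an arbitrary module up to its crate root, then back down to a
/// named child, returning the crate that child ultimately belongs to.
fn crate_of_sibling(
    db: &impl HirDatabase,
    module: Module,
    child_name: &Name,
) -> Cancelable<Option<Crate>> {
    // Every module has a crate root, even if it is not attached to a crate.
    let root = module.crate_root(db)?;
    // Child modules are addressed by name relative to their parent.
    if let Some(child) = root.child(db, child_name)? {
        // `krate` can still be None if the file is not part of any target.
        return child.krate(db);
    }
    Ok(None)
}
```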
View file

@ -26,6 +26,7 @@ pub trait HirDatabase: SyntaxDatabase
type HirSourceFileQuery;
use fn HirFileId::hir_source_file;
}
fn expand_macro_invocation(invoc: MacroCallId) -> Option<Arc<MacroExpansion>> {
type ExpandMacroCallQuery;
use fn crate::macros::expand_macro_invocation;
@ -80,10 +81,12 @@ pub trait HirDatabase: SyntaxDatabase
type InputModuleItemsQuery;
use fn query_definitions::input_module_items;
}
fn item_map(source_root_id: SourceRootId) -> Cancelable<Arc<ItemMap>> {
type ItemMapQuery;
use fn query_definitions::item_map;
}
fn module_tree(source_root_id: SourceRootId) -> Cancelable<Arc<ModuleTree>> {
type ModuleTreeQuery;
use fn crate::module_tree::ModuleTree::module_tree_query;

View file

@ -33,8 +33,7 @@ pub struct Body {
/// IDs. This is needed to go from e.g. a position in a file to the HIR
/// expression containing it; but for type inference etc., we want to operate on
/// a structure that is agnostic to the actual positions of expressions in the
/// file, so that we don't recompute the type inference whenever some whitespace
/// is typed.
/// file, so that we don't recompute types whenever some whitespace is typed.
#[derive(Debug, Eq, PartialEq)]
pub struct BodySyntaxMapping {
body: Arc<Body>,
@ -74,20 +73,25 @@ impl BodySyntaxMapping {
pub fn expr_syntax(&self, expr: ExprId) -> Option<LocalSyntaxPtr> {
self.expr_syntax_mapping_back.get(expr).cloned()
}
pub fn syntax_expr(&self, ptr: LocalSyntaxPtr) -> Option<ExprId> {
self.expr_syntax_mapping.get(&ptr).cloned()
}
pub fn node_expr(&self, node: &ast::Expr) -> Option<ExprId> {
self.expr_syntax_mapping
.get(&LocalSyntaxPtr::new(node.syntax()))
.cloned()
}
pub fn pat_syntax(&self, pat: PatId) -> Option<LocalSyntaxPtr> {
self.pat_syntax_mapping_back.get(pat).cloned()
}
pub fn syntax_pat(&self, ptr: LocalSyntaxPtr) -> Option<PatId> {
self.pat_syntax_mapping.get(&ptr).cloned()
}
pub fn node_pat(&self, node: &ast::Pat) -> Option<PatId> {
self.pat_syntax_mapping
.get(&LocalSyntaxPtr::new(node.syntax()))

View file

@ -9,25 +9,25 @@ use crate::{
use crate::code_model_api::Module;
/// hir makes a heavy use of ids: integer (u32) handlers to various things. You
/// hir makes heavy use of ids: integer (u32) handlers to various things. You
/// can think of id as a pointer (but without a lifetime) or a file descriptor
/// (but for hir objects).
///
/// This module defines a bunch of ids we are using. The most important ones are
/// probably `HirFileId` and `DefId`.
/// Input to the analyzer is a set of file, where each file is indetified by
/// Input to the analyzer is a set of files, where each file is identified by
/// `FileId` and contains source code. However, another source of source code in
/// Rust are macros: each macro can be thought of as producing a "temporary
/// file". To assign id to such file, we use the id of a macro call that
/// file". To assign an id to such a file, we use the id of the macro call that
/// produced the file. So, a `HirFileId` is either a `FileId` (source code
/// written by user), or a `MacroCallId` (source code produced by macro).
///
/// What is a `MacroCallId`? Simplifying, it's a `HirFileId` of a file containin
/// the call plus the offset of the macro call in the file. Note that this is a
/// recursive definition! Nethetheless, size_of of `HirFileId` is finite
/// recursive definition! However, the size_of of `HirFileId` is finite
/// (because everything bottoms out at the real `FileId`) and small
/// (`MacroCallId` uses location interner).
/// (`MacroCallId` uses the location interner).
#[derive(Debug, Clone, Copy, PartialEq, Eq, Hash)]
pub struct HirFileId(HirFileIdRepr);
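The recursive-but-finite shape described above can be sketched like this (the variant and field names are assumptions; only `HirFileId(HirFileIdRepr)` is taken from the source):

```rust
// Illustrative sketch of the representation behind `HirFileId`; variant and
// field names are assumptions, only the overall shape comes from the doc
// comment above.
#[derive(Debug, Clone, Copy, PartialEq, Eq, Hash)]
struct FileId(u32);

#[derive(Debug, Clone, Copy, PartialEq, Eq, Hash)]
struct MacroCallId(u32); // in reality an interned (HirFileId, offset) location

#[derive(Debug, Clone, Copy, PartialEq, Eq, Hash)]
enum HirFileIdRepr {
    /// Source code written by the user.
    File(FileId),
    /// A "temporary file" produced by expanding the macro call with this id.
    Macro(MacroCallId),
}

fn main() {
    // The recursion bottoms out at a real file: a macro call lives in some
    // HirFileId, which may itself be a macro expansion, and so on, but the
    // chain always ends at a `File`.
    let real_file = HirFileIdRepr::File(FileId(0));
    let expansion = HirFileIdRepr::Macro(MacroCallId(7));
    println!("{:?} {:?}", real_file, expansion);
}
```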
@ -235,7 +235,7 @@ pub struct SourceItemId {
pub(crate) item_id: Option<SourceFileItemId>,
}
/// Maps item's `SyntaxNode`s to `SourceFileItemId` and back.
/// Maps items' `SyntaxNode`s to `SourceFileItemId`s and back.
#[derive(Debug, PartialEq, Eq)]
pub struct SourceFileItems {
file_id: HirFileId,

View file

@ -128,13 +128,13 @@ impl ImplItem {
pub struct ImplId(pub RawId);
impl_arena_id!(ImplId);
/// Collection of impl blocks is a two-step process: First we collect the blocks
/// per-module; then we build an index of all impl blocks in the crate. This
/// way, we avoid having to do this process for the whole crate whenever someone
/// types in any file; as long as the impl blocks in the file don't change, we
/// don't need to do the second step again.
/// The collection of impl blocks is a two-step process: first we collect the
/// blocks per-module; then we build an index of all impl blocks in the crate.
/// This way, we avoid having to do this process for the whole crate whenever
/// a file is changed; as long as the impl blocks in the file don't change,
/// we don't need to do the second step again.
///
/// (The second step does not yet exist currently.)
/// (The second step does not yet exist.)
#[derive(Debug, PartialEq, Eq)]
pub struct ModuleImplBlocks {
impls: Arena<ImplId, ImplData>,

View file

@ -1,9 +1,9 @@
//! HIR (previsouly known as descriptors) provides a high-level OO acess to Rust
//! code.
//! HIR (previously known as descriptors) provides a high-level object oriented
//! access to Rust code.
//!
//! The principal difference between HIR and syntax trees is that HIR is bound
//! to a particular crate instance. That is, it has cfg flags and features
//! applied. So, there relation between syntax and HIR is many-to-one.
//! applied. So, the relation between syntax and HIR is many-to-one.
macro_rules! ctry {
($expr:expr) => {

View file

@ -4,9 +4,9 @@
/// that is produced after expansion. See `HirFileId` and `MacroCallId` for how
/// do we do that.
///
/// When file-management question is resolved, all that is left is a token tree
/// to token tree transformation plus hygent. We don't have either of thouse
/// yet, so all macros are string based at the moment!
/// When the file-management question is resolved, all that is left is a
/// token-tree-to-token-tree transformation plus hygiene. We don't have either of
/// those yet, so all macros are string based at the moment!
use std::sync::Arc;
use ra_db::LocalSyntaxPtr;

View file

@ -85,9 +85,9 @@ impl_arena_id!(LinkId);
/// Physically, rust source is organized as a set of files, but logically it is
/// organized as a tree of modules. Usually, a single file corresponds to a
/// single module, but it is not nessary the case.
/// single module, but this is not necessarily the case.
///
/// Module encapsulate the logic of transitioning from the fuzzy world of files
/// `ModuleTree` encapsulates the logic of transitioning from the fuzzy world of files
/// (which can have multiple parents) to the precise world of modules (which
/// always have one parent).
#[derive(Default, Debug, PartialEq, Eq)]

View file

@ -3,7 +3,7 @@ use std::fmt;
use ra_syntax::{ast, SmolStr};
/// `Name` is a wrapper around string, which is used in hir for both references
/// and declarations. In theory, names should also carry hygene info, but we are
/// and declarations. In theory, names should also carry hygiene info, but we are
/// not there yet!
#[derive(Clone, PartialEq, Eq, Hash)]
pub struct Name {

View file

@ -1,18 +1,18 @@
//! Name resolution algorithm. The end result of the algorithm is `ItemMap`: a
//! map with maps each module to it's scope: the set of items, visible in the
//! Name resolution algorithm. The end result of the algorithm is an `ItemMap`:
//! a map which maps each module to its scope: the set of items visible in the
//! module. That is, we only resolve imports here, name resolution of item
//! bodies will be done in a separate step.
//!
//! Like Rustc, we use an interative per-crate algorithm: we start with scopes
//! Like Rustc, we use an iterative per-crate algorithm: we start with scopes
//! containing only directly defined items, and then iteratively resolve
//! imports.
//!
//! To make this work nicely in the IDE scenarios, we place `InputModuleItems`
//! To make this work nicely in the IDE scenario, we place `InputModuleItems`
//! in between raw syntax and name resolution. `InputModuleItems` are computed
//! using only the module's syntax, and it is all directly defined items plus
//! imports. The plain is to make `InputModuleItems` independent of local
//! modifications (that is, typing inside a function shold not change IMIs),
//! such that the results of name resolution can be preserved unless the module
//! imports. The plan is to make `InputModuleItems` independent of local
//! modifications (that is, typing inside a function should not change IMIs),
//! so that the results of name resolution can be preserved unless the module
//! structure itself is modified.
use std::sync::Arc;
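The iterative, fixed-point nature of the algorithm can be illustrated with a toy version (plain maps and sets instead of `InputModuleItems` and `ItemMap`):

```rust
use std::collections::{HashMap, HashSet};

/// Toy fixed-point name resolution: each module starts with its directly
/// defined items, then glob-like imports are applied repeatedly until
/// nothing changes any more.
fn resolve(
    items: &HashMap<&'static str, HashSet<&'static str>>,
    imports: &[(&'static str, &'static str)], // (importing module, imported-from module)
) -> HashMap<&'static str, HashSet<&'static str>> {
    let mut scopes = items.clone();
    loop {
        let mut changed = false;
        for &(to, from) in imports {
            let visible: HashSet<_> = scopes.get(from).cloned().unwrap_or_default();
            let scope = scopes.entry(to).or_default();
            for item in visible {
                changed |= scope.insert(item);
            }
        }
        if !changed {
            return scopes; // fixed point reached
        }
    }
}

fn main() {
    let mut items = HashMap::new();
    items.insert("crate::a", HashSet::from(["S"]));
    items.insert("crate::b", HashSet::new());
    items.insert("crate::c", HashSet::new());
    // b re-exports everything from a, and c re-exports everything from b;
    // with this processing order, `c` only sees `S` on the second pass,
    // which is why the loop runs to a fixed point.
    let imports = [("crate::c", "crate::b"), ("crate::b", "crate::a")];
    let scopes = resolve(&items, &imports);
    assert!(scopes["crate::c"].contains("S"));
}
```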
@ -34,7 +34,7 @@ use crate::{
module_tree::{ModuleId, ModuleTree},
};
/// Item map is the result of the name resolution. Item map contains, for each
/// `ItemMap` is the result of name resolution. It contains, for each
/// module, the set of visible items.
// FIXME: currenty we compute item map per source-root. We should do it per crate instead.
#[derive(Default, Debug, PartialEq, Eq)]
@ -59,9 +59,9 @@ impl ModuleScope {
/// A set of items and imports declared inside a module, without relation to
/// other modules.
///
/// This stands in-between raw syntax and name resolution and alow us to avoid
/// recomputing name res: if `InputModuleItems` are the same, we can avoid
/// running name resolution.
/// This sits in-between raw syntax and name resolution and allows us to avoid
/// recomputing name res: if two instances of `InputModuleItems` are the same, we
/// can avoid redoing name resolution.
#[derive(Debug, Default, PartialEq, Eq)]
pub struct InputModuleItems {
pub(crate) items: Vec<ModuleItem>,
@ -114,7 +114,7 @@ enum ImportKind {
Named(NamedImport),
}
/// Resolution is basically `DefId` atm, but it should account for stuff like
/// `Resolution` is basically `DefId` atm, but it should account for stuff like
/// multiple namespaces, ambiguity and errors.
#[derive(Debug, Clone, PartialEq, Eq)]
pub struct Resolution {

View file

@ -1,4 +1,4 @@
/// Lookup hir elements using position in the source code. This is a lossy
/// Lookup hir elements using positions in the source code. This is a lossy
/// transformation: in general, a single source might correspond to several
/// modules, functions, etc, due to macros, cfgs and `#[path=]` attributes on
/// modules.

View file

@ -144,7 +144,7 @@ pub enum Ty {
Bool,
/// The primitive character type; holds a Unicode scalar value
/// (a non-surrogate code point). Written as `char`.
/// (a non-surrogate code point). Written as `char`.
Char,
/// A primitive signed integer type. For example, `i32`.
@ -204,7 +204,7 @@ pub enum Ty {
// `|a| yield a`.
// Generator(DefId, GeneratorSubsts<'tcx>, hir::GeneratorMovability),
// A type representin the types stored inside a generator.
// A type representing the types stored inside a generator.
// This should only appear in GeneratorInteriors.
// GeneratorWitness(Binder<&'tcx List<Ty<'tcx>>>),
/// The never type `!`.

View file

@ -1,4 +1,4 @@
//! In certain situations, rust automatically inserts derefs as necessary: For
//! In certain situations, rust automatically inserts derefs as necessary: for
//! example, field accesses `foo.bar` still work when `foo` is actually a
//! reference to a type with the field `bar`. This is an approximation of the
//! logic in rustc (which lives in librustc_typeck/check/autoderef.rs).
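For reference, the language behaviour being approximated, in plain Rust:

```rust
struct Point {
    x: i32,
}

fn main() {
    let p = Point { x: 1 };
    let r: &&Point = &&p;
    // Field access inserts derefs automatically: this is really `(**r).x`.
    println!("{}", r.x);
}
```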

View file

@ -15,7 +15,7 @@ use crate::{
};
// These tests compare the inference results for all expressions in a file
// against snapshots of the current results. If you change something and these
// against snapshots of the expected results. If you change something and these
// tests fail expectedly, you can update the comparison files by deleting them
// and running the tests again. Similarly, to add a new test, just write the
// test here in the same pattern and it will automatically write the snapshot.

View file

@ -121,9 +121,11 @@ impl AnalysisChange {
pub fn new() -> AnalysisChange {
AnalysisChange::default()
}
pub fn add_root(&mut self, root_id: SourceRootId, is_local: bool) {
self.new_roots.push((root_id, is_local));
}
pub fn add_file(
&mut self,
root_id: SourceRootId,
@ -142,9 +144,11 @@ impl AnalysisChange {
.added
.push(file);
}
pub fn change_file(&mut self, file_id: FileId, new_text: Arc<String>) {
self.files_changed.push((file_id, new_text))
}
pub fn remove_file(&mut self, root_id: SourceRootId, file_id: FileId, path: RelativePathBuf) {
let file = RemoveFile { file_id, path };
self.roots_changed
@ -153,9 +157,11 @@ impl AnalysisChange {
.removed
.push(file);
}
pub fn add_library(&mut self, data: LibraryData) {
self.libraries_added.push(data)
}
pub fn set_crate_graph(&mut self, graph: CrateGraph) {
self.crate_graph = Some(graph);
}
@ -218,15 +224,19 @@ impl Query {
limit: usize::max_value(),
}
}
pub fn only_types(&mut self) {
self.only_types = true;
}
pub fn libs(&mut self) {
self.libs = true;
}
pub fn exact(&mut self) {
self.exact = true;
}
pub fn limit(&mut self, limit: usize) {
self.limit = limit
}
@ -257,15 +267,19 @@ impl NavigationTarget {
ptr: Some(symbol.ptr.clone()),
}
}
pub fn name(&self) -> &SmolStr {
&self.name
}
pub fn kind(&self) -> SyntaxKind {
self.kind
}
pub fn file_id(&self) -> FileId {
self.file_id
}
pub fn range(&self) -> TextRange {
self.range
}
@ -305,6 +319,7 @@ impl AnalysisHost {
db: self.db.snapshot(),
}
}
/// Applies changes to the current state of the world. If there are
/// outstanding snapshots, they will be canceled.
pub fn apply_change(&mut self, change: AnalysisChange) {
@ -326,30 +341,36 @@ impl Analysis {
pub fn file_text(&self, file_id: FileId) -> Arc<String> {
self.db.file_text(file_id)
}
/// Gets the syntax tree of the file.
pub fn file_syntax(&self, file_id: FileId) -> TreePtr<SourceFile> {
self.db.source_file(file_id).clone()
}
/// Gets the file's `LineIndex`: data structure to convert between absolute
/// offsets and line/column representation.
pub fn file_line_index(&self, file_id: FileId) -> Arc<LineIndex> {
self.db.line_index(file_id)
}
/// Selects the next syntactic nodes encompassing the range.
pub fn extend_selection(&self, frange: FileRange) -> TextRange {
extend_selection::extend_selection(&self.db, frange)
}
/// Returns the position of the matching brace (all types of braces are
/// supported).
pub fn matching_brace(&self, file: &SourceFile, offset: TextUnit) -> Option<TextUnit> {
ra_ide_api_light::matching_brace(file, offset)
}
/// Returns a syntax tree represented as `String`, for debug purposes.
// FIXME: use a better name here.
pub fn syntax_tree(&self, file_id: FileId) -> String {
let file = self.db.source_file(file_id);
ra_ide_api_light::syntax_tree(&file)
}
/// Returns an edit to remove all newlines in the range, cleaning up minor
/// stuff like trailing commas.
pub fn join_lines(&self, frange: FileRange) -> SourceChange {
@ -359,6 +380,7 @@ impl Analysis {
ra_ide_api_light::join_lines(&file, frange.range),
)
}
/// Returns an edit which should be applied when opening a new line, fixing
/// up minor stuff like continuing the comment.
pub fn on_enter(&self, position: FilePosition) -> Option<SourceChange> {
@ -366,6 +388,7 @@ impl Analysis {
let edit = ra_ide_api_light::on_enter(&file, position.offset)?;
Some(SourceChange::from_local_edit(position.file_id, edit))
}
/// Returns an edit which should be applied after `=` was typed. Primarily,
/// this works when adding `let =`.
// FIXME: use a snippet completion instead of this hack here.
@ -374,23 +397,27 @@ impl Analysis {
let edit = ra_ide_api_light::on_eq_typed(&file, position.offset)?;
Some(SourceChange::from_local_edit(position.file_id, edit))
}
/// Returns an edit which should be applied when a dot ('.') is typed on a blank line, indenting the line appropriately.
pub fn on_dot_typed(&self, position: FilePosition) -> Option<SourceChange> {
let file = self.db.source_file(position.file_id);
let edit = ra_ide_api_light::on_dot_typed(&file, position.offset)?;
Some(SourceChange::from_local_edit(position.file_id, edit))
}
/// Returns a tree representation of symbols in the file. Useful to draw a
/// file outline.
pub fn file_structure(&self, file_id: FileId) -> Vec<StructureNode> {
let file = self.db.source_file(file_id);
ra_ide_api_light::file_structure(&file)
}
/// Returns the set of folding ranges.
pub fn folding_ranges(&self, file_id: FileId) -> Vec<Fold> {
let file = self.db.source_file(file_id);
ra_ide_api_light::folding_ranges(&file)
}
/// Fuzzy searches for a symbol.
pub fn symbol_search(&self, query: Query) -> Cancelable<Vec<NavigationTarget>> {
let res = symbol_index::world_symbols(&*self.db, query)?
@ -399,62 +426,76 @@ impl Analysis {
.collect();
Ok(res)
}
pub fn goto_definition(
&self,
position: FilePosition,
) -> Cancelable<Option<Vec<NavigationTarget>>> {
goto_definition::goto_definition(&*self.db, position)
}
/// Finds all usages of the reference at point.
pub fn find_all_refs(&self, position: FilePosition) -> Cancelable<Vec<(FileId, TextRange)>> {
self.db.find_all_refs(position)
}
/// Returns a short text describing the element at the given position.
pub fn hover(&self, position: FilePosition) -> Cancelable<Option<RangeInfo<String>>> {
hover::hover(&*self.db, position)
}
/// Computes parameter information for the given call expression.
pub fn call_info(&self, position: FilePosition) -> Cancelable<Option<CallInfo>> {
call_info::call_info(&*self.db, position)
}
/// Returns a `mod name;` declaration which created the current module.
pub fn parent_module(&self, position: FilePosition) -> Cancelable<Vec<NavigationTarget>> {
self.db.parent_module(position)
}
/// Returns the crates this file belongs to.
pub fn crate_for(&self, file_id: FileId) -> Cancelable<Vec<CrateId>> {
self.db.crate_for(file_id)
}
/// Returns the root file of the given crate.
pub fn crate_root(&self, crate_id: CrateId) -> Cancelable<FileId> {
Ok(self.db.crate_graph().crate_root(crate_id))
}
/// Returns the set of possible targets to run for the current file.
pub fn runnables(&self, file_id: FileId) -> Cancelable<Vec<Runnable>> {
runnables::runnables(&*self.db, file_id)
}
/// Computes syntax highlighting for the given file.
pub fn highlight(&self, file_id: FileId) -> Cancelable<Vec<HighlightedRange>> {
syntax_highlighting::highlight(&*self.db, file_id)
}
/// Computes completions at the given position.
pub fn completions(&self, position: FilePosition) -> Cancelable<Option<Vec<CompletionItem>>> {
let completions = completion::completions(&self.db, position)?;
Ok(completions.map(|it| it.into()))
}
/// Computes assists (aka code actions aka intentions) for the given
/// position.
pub fn assists(&self, frange: FileRange) -> Cancelable<Vec<SourceChange>> {
Ok(self.db.assists(frange))
}
/// Computes the set of diagnostics for the given file.
pub fn diagnostics(&self, file_id: FileId) -> Cancelable<Vec<Diagnostic>> {
self.db.diagnostics(file_id)
}
/// Computes the type of the expression at the given position.
pub fn type_of(&self, frange: FileRange) -> Cancelable<Option<String>> {
hover::type_of(&*self.db, frange)
}
/// Returns the edit required to rename the reference at the position to the new
/// name.
pub fn rename(

View file

@ -326,7 +326,7 @@ fn on_notification(
if pending_requests.remove(&id) {
let response = RawResponse::err(
id,
ErrorCode::RequestCancelled as i32,
ErrorCode::RequestCanceled as i32,
"canceled by client".to_string(),
);
msg_sender.send(RawMessage::Response(response)).unwrap()