Fix typos in ARCHITECTURE.md and a number of crates

specifically: gen_lsp_server, ra_arena, ra_cli, ra_db, ra_hir
This commit is contained in:
Marcus Klaas de Vries 2019-01-09 00:47:12 +01:00
parent f8261d611a
commit 0b8fbb4fad
23 changed files with 150 additions and 91 deletions

View file

@@ -1,6 +1,6 @@
 # Architecture

-This document describes high-level architecture of rust-analyzer.
+This document describes the high-level architecture of rust-analyzer.
 If you want to familiarize yourself with the code base, you are just
 in the right place!
@@ -12,10 +12,10 @@ On the highest level, rust-analyzer is a thing which accepts input source code
 from the client and produces a structured semantic model of the code.
 More specifically, input data consists of a set of test files (`(PathBuf,
-String)` pairs) and an information about project structure, the so called
-`CrateGraph`. Crate graph specifies which files are crate roots, which cfg flags
-are specified for each crate (TODO: actually implement this) and what are
-dependencies between the crates. The analyzer keeps all these input data in
+String)` pairs) and information about project structure, captured in the so called
+`CrateGraph`. The crate graph specifies which files are crate roots, which cfg
+flags are specified for each crate (TODO: actually implement this) and what
+dependencies exist between the crates. The analyzer keeps all this input data in
 memory and never does any IO. Because the input data is source code, which
 typically measures in tens of megabytes at most, keeping all input data in
 memory is OK.
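The input model this hunk describes (in-memory file contents plus a crate graph) can be sketched roughly as below. All names here are illustrative stand-ins, not rust-analyzer's actual types:

```rust
use std::collections::HashMap;
use std::path::PathBuf;

// Illustrative input model: all source text lives in memory, and the crate
// graph links crate roots by dependency. No IO happens past this point.
#[derive(Clone, Copy, PartialEq, Eq, Hash, Debug)]
pub struct CrateId(pub u32);

#[derive(Default, Debug)]
pub struct CrateGraph {
    // Each crate is identified by the path of its root file.
    pub roots: HashMap<CrateId, PathBuf>,
    // Edges: a crate depends on a set of other crates.
    pub deps: HashMap<CrateId, Vec<CrateId>>,
}

#[derive(Default, Debug)]
pub struct AnalysisInput {
    // The whole input: file contents plus the crate graph.
    pub files: HashMap<PathBuf, String>,
    pub crate_graph: CrateGraph,
}

impl CrateGraph {
    pub fn add_crate(&mut self, id: CrateId, root: PathBuf) {
        self.roots.insert(id, root);
    }
    pub fn add_dep(&mut self, from: CrateId, to: CrateId) {
        self.deps.entry(from).or_default().push(to);
    }
}
```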
@@ -28,8 +28,8 @@ declarations, etc.
 The client can submit a small delta of input data (typically, a change to a
 single file) and get a fresh code model which accounts for changes.
-Underlying engine makes sure that model is computed lazily (on-demand) and can
-be quickly updated for small modifications.
+The underlying engine makes sure that model is computed lazily (on-demand) and
+can be quickly updated for small modifications.

 ## Code generation
@@ -37,7 +37,7 @@ be quickly updated for small modifications.
 Some of the components of this repository are generated through automatic
 processes. These are outlined below:
-- `gen-syntax`: The kinds of tokens are reused in several places, so a generator
+- `gen-syntax`: The kinds of tokens that are reused in several places, so a generator
   is used. We use tera templates to generate the files listed below, based on
   the grammar described in [grammar.ron]:
   - [ast/generated.rs][ast generated] in `ra_syntax` based on
@@ -58,17 +58,16 @@ processes. These are outlined below:
 ### `crates/ra_syntax`

 Rust syntax tree structure and parser. See
-[RFC](https://github.com/rust-lang/rfcs/pull/2256) for some design
-notes.
+[RFC](https://github.com/rust-lang/rfcs/pull/2256) for some design notes.

 - [rowan](https://github.com/rust-analyzer/rowan) library is used for constructing syntax trees.
-- `grammar` module is the actual parser. It is a hand-written recursive descent parsers, which
-  produces a sequence of events like "start node X", "finish not Y". It works similarly to [kotlin parser](https://github.com/JetBrains/kotlin/blob/4d951de616b20feca92f3e9cc9679b2de9e65195/compiler/frontend/src/org/jetbrains/kotlin/parsing/KotlinParsing.java),
-  which is a good source for inspiration for dealing with syntax errors and incomplete input. Original [libsyntax parser](https://github.com/rust-lang/rust/blob/6b99adeb11313197f409b4f7c4083c2ceca8a4fe/src/libsyntax/parse/parser.rs)
+- `grammar` module is the actual parser. It is a hand-written recursive descent parser, which
+  produces a sequence of events like "start node X", "finish not Y". It works similarly to [kotlin's parser](https://github.com/JetBrains/kotlin/blob/4d951de616b20feca92f3e9cc9679b2de9e65195/compiler/frontend/src/org/jetbrains/kotlin/parsing/KotlinParsing.java),
+  which is a good source of inspiration for dealing with syntax errors and incomplete input. Original [libsyntax parser](https://github.com/rust-lang/rust/blob/6b99adeb11313197f409b4f7c4083c2ceca8a4fe/src/libsyntax/parse/parser.rs)
   is what we use for the definition of the Rust language.
 - `parser_api/parser_impl` bridges the tree-agnostic parser from `grammar` with `rowan` trees.
   This is the thing that turns a flat list of events into a tree (see `EventProcessor`)
-- `ast` a type safe API on top of the raw `rowan` tree.
+- `ast` provides a type safe API on top of the raw `rowan` tree.
 - `grammar.ron` RON description of the grammar, which is used to
   generate `syntax_kinds` and `ast` modules, using `cargo gen-syntax` command.
 - `algo`: generic tree algorithms, including `walk` for O(1) stack
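The event-based approach the hunk above describes (a flat stream of "start node" / token / "finish node" events, folded into a tree by a post-pass) can be sketched like this. The names are illustrative, not the actual `EventProcessor` API:

```rust
// Minimal sketch of event-based tree building: the parser emits a flat
// event stream; a post-pass folds it into a nested tree via a stack.
#[derive(Debug)]
enum Event {
    Start(&'static str), // "start node X"
    Token(&'static str), // a leaf token
    Finish,              // "finish node"
}

#[derive(Debug, PartialEq)]
enum Node {
    Branch(&'static str, Vec<Node>),
    Leaf(&'static str),
}

fn process(events: &[Event]) -> Node {
    // Stack of partially built branches; `Finish` pops one level.
    let mut stack: Vec<(&'static str, Vec<Node>)> = Vec::new();
    let mut result = None;
    for ev in events {
        match ev {
            Event::Start(kind) => stack.push((kind, Vec::new())),
            Event::Token(text) => stack.last_mut().unwrap().1.push(Node::Leaf(text)),
            Event::Finish => {
                let (kind, children) = stack.pop().unwrap();
                let node = Node::Branch(kind, children);
                match stack.last_mut() {
                    Some(parent) => parent.1.push(node),
                    None => result = Some(node),
                }
            }
        }
    }
    result.expect("unbalanced events")
}
```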
@@ -90,7 +89,7 @@ fixes a bug in the grammar.
 We use the [salsa](https://github.com/salsa-rs/salsa) crate for incremental and
 on-demand computation. Roughly, you can think of salsa as a key-value store, but
 it also can compute derived values using specified functions. The `ra_db` crate
-provides a basic infrastructure for interacting with salsa. Crucially, it
+provides basic infrastructure for interacting with salsa. Crucially, it
 defines most of the "input" queries: facts supplied by the client of the
 analyzer. Reading the docs of the `ra_db::input` module should be useful:
 everything else is strictly derived from those inputs.
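The "key-value store that can also compute derived values" idea can be sketched as a memoized query over inputs. This is a toy illustration of the concept, not salsa's actual API (salsa tracks dependencies and invalidates far more precisely):

```rust
use std::collections::HashMap;

// Toy "database": input facts set by the client, plus a cache of derived
// values computed on demand.
struct Db {
    inputs: HashMap<&'static str, String>,     // "input" queries
    line_counts: HashMap<&'static str, usize>, // derived values
}

impl Db {
    fn new() -> Db {
        Db { inputs: HashMap::new(), line_counts: HashMap::new() }
    }
    fn set_file_text(&mut self, file: &'static str, text: &str) {
        self.inputs.insert(file, text.to_string());
        // An input changed: invalidate derived values. (Salsa does this
        // per-dependency instead of wholesale.)
        self.line_counts.clear();
    }
    fn line_count(&mut self, file: &'static str) -> usize {
        if let Some(&n) = self.line_counts.get(file) {
            return n; // cached: no recomputation
        }
        let n = self.inputs[file].lines().count();
        self.line_counts.insert(file, n);
        n
    }
}
```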
@@ -102,7 +101,7 @@ HIR provides high-level "object oriented" access to Rust code.
 The principal difference between HIR and syntax trees is that HIR is bound to a
 particular crate instance. That is, it has cfg flags and features applied (in
 theory, in practice this is to be implemented). So, the relation between
-syntax and HIR is many-to-one. The `source_binder` modules is responsible for
+syntax and HIR is many-to-one. The `source_binder` module is responsible for
 guessing a HIR for a particular source position.

 Underneath, HIR works on top of salsa, using a `HirDatabase` trait.
@@ -111,12 +110,12 @@ Underneath, HIR works on top of salsa, using a `HirDatabase` trait.
 A stateful library for analyzing many Rust files as they change. `AnalysisHost`
 is a mutable entity (clojure's atom) which holds the current state, incorporates
-changes and handles out `Analysis` --- an immutable and consistent snapshot of
-world state at a point in time, which actually powers analysis.
+changes and hands out `Analysis` --- an immutable and consistent snapshot of
+the world state at a point in time, which actually powers analysis.

 One interesting aspect of analysis is its support for cancellation. When a
 change is applied to `AnalysisHost`, first all currently active snapshots are
-cancelled. Only after all snapshots are dropped the change actually affects the
+canceled. Only after all snapshots are dropped the change actually affects the
 database.

 APIs in this crate are IDE centric: they take text offsets as input and produce
@@ -142,7 +141,7 @@ An LSP implementation which wraps `ra_ide_api` into a langauge server protocol.
 ### `crates/ra_vfs`

-Although `hir` and `ra_ide_api` don't do any io, we need to be able to read
+Although `hir` and `ra_ide_api` don't do any IO, we need to be able to read
 files from disk at the end of the day. This is what `ra_vfs` does. It also
 manages overlays: "dirty" files in the editor, whose "true" contents is
 different from data on disk.
@@ -175,16 +174,16 @@ VS Code plugin
 ## Common workflows

 To try out VS Code extensions, run `cargo install-code`. This installs both the
-`ra_lsp_server` binary and VS Code extension. To install only the binary, use
+`ra_lsp_server` binary and the VS Code extension. To install only the binary, use
 `cargo install --path crates/ra_lsp_server --force`

 To see logs from the language server, set `RUST_LOG=info` env variable. To see
 all communication between the server and the client, use
-`RUST_LOG=gen_lsp_server=debug` (will print quite a bit of stuff).
+`RUST_LOG=gen_lsp_server=debug` (this will print quite a bit of stuff).

 To run tests, just `cargo test`.

-To work on VS Code extension, launch code inside `editors/code` and use `F5` to
+To work on the VS Code extension, launch code inside `editors/code` and use `F5` to
 launch/debug. To automatically apply formatter and linter suggestions, use `npm
 run fix`.

View file

@@ -78,10 +78,10 @@ pub use crate::{
 };

 /// Main entry point: runs the server from initialization to shutdown.
-/// To attach server to standard input/output streams, use `stdio_transport`
+/// To attach server to standard input/output streams, use the `stdio_transport`
 /// function to create corresponding `sender` and `receiver` pair.
 ///
-///`server` should use `handle_shutdown` function to handle the `Shutdown`
+/// `server` should use the `handle_shutdown` function to handle the `Shutdown`
 /// request.
 pub fn run_server(
     caps: ServerCapabilities,
@@ -104,7 +104,7 @@ pub fn run_server(
     Ok(())
 }

-/// if `req` is `Shutdown`, respond to it and return `None`, otherwise return `Some(req)`
+/// If `req` is `Shutdown`, respond to it and return `None`, otherwise return `Some(req)`
 pub fn handle_shutdown(req: RawRequest, sender: &Sender<RawMessage>) -> Option<RawRequest> {
     match req.cast::<Shutdown>() {
         Ok((id, ())) => {

View file

@@ -54,7 +54,7 @@ pub enum ErrorCode {
     ServerErrorEnd = -32000,
     ServerNotInitialized = -32002,
     UnknownErrorCode = -32001,
-    RequestCancelled = -32800,
+    RequestCanceled = -32800,
     ContentModified = -32801,
 }

View file

@@ -8,11 +8,11 @@
 //! * user types next character, while syntax highlighting *is still in
 //!   progress*.
 //!
-//! In this situation, we want to react to modification as quckly as possible.
+//! In this situation, we want to react to modification as quickly as possible.
 //! At the same time, in-progress results are not very interesting, because they
 //! are invalidated by the edit anyway. So, we first cancel all in-flight
-//! requests, and then apply modification knowing that it won't intrfere with
-//! any background processing (this bit is handled by salsa, see
+//! requests, and then apply modification knowing that it won't interfere with
+//! any background processing (this bit is handled by salsa, see the
 //! `BaseDatabase::check_canceled` method).

 /// An "error" signifing that the operation was canceled.
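The cooperative-cancellation scheme this module documents (long-running work periodically calls `check_canceled`; applying an edit flips a flag first) can be sketched like so. Names mirror the doc comment above, but the types are an illustrative simplification, not the crate's real ones:

```rust
use std::sync::atomic::{AtomicBool, Ordering};
use std::sync::Arc;

// Cooperative cancellation sketch: analysis periodically checks a shared
// flag, and applying an edit sets the flag before touching the database.
#[derive(Debug, PartialEq)]
struct Canceled;

type Cancelable<T> = Result<T, Canceled>;

#[derive(Clone)]
struct Snapshot {
    canceled: Arc<AtomicBool>,
}

impl Snapshot {
    fn check_canceled(&self) -> Cancelable<()> {
        if self.canceled.load(Ordering::SeqCst) {
            Err(Canceled)
        } else {
            Ok(())
        }
    }
    fn long_computation(&self) -> Cancelable<u32> {
        let mut acc = 0;
        for i in 0..1000 {
            self.check_canceled()?; // bail out promptly once an edit arrives
            acc += i % 7;
        }
        Ok(acc)
    }
}
```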

View file

@@ -1,9 +1,9 @@
-/// This modules specifies the input to rust-analyzer. In some sense, this is
+/// This module specifies the input to rust-analyzer. In some sense, this is
 /// **the** most important module, because all other fancy stuff is strictly
 /// derived from this input.
 ///
 /// Note that neither this module, nor any other part of the analyzer's core do
-/// actual IO. See `vfs` and `project_model` in `ra_lsp_server` crate for how
+/// actual IO. See `vfs` and `project_model` in the `ra_lsp_server` crate for how
 /// actual IO is done and lowered to input.
 use std::sync::Arc;
@@ -17,17 +17,17 @@ use rustc_hash::FxHashSet;
 /// `FileId` is an integer which uniquely identifies a file. File paths are
 /// messy and system-dependent, so most of the code should work directly with
 /// `FileId`, without inspecting the path. The mapping between `FileId` and path
-/// and `SourceRoot` is constant. File rename is represented as a pair of
+/// and `SourceRoot` is constant. A file rename is represented as a pair of
 /// deletion/creation.
 #[derive(Debug, Clone, Copy, PartialEq, Eq, PartialOrd, Ord, Hash)]
 pub struct FileId(pub u32);

 /// Files are grouped into source roots. A source root is a directory on the
 /// file systems which is watched for changes. Typically it corresponds to a
-/// Cargo package. Source roots *might* be nested: in this case, file belongs to
-/// the nearest enclosing source root. Path to files are always relative to a
-/// source root, and analyzer does not know the root path of the source root at
-/// all. So, a file from one source root can't refere a file in another source
+/// Rust crate. Source roots *might* be nested: in this case, a file belongs to
+/// the nearest enclosing source root. Paths to files are always relative to a
+/// source root, and the analyzer does not know the root path of the source root at
+/// all. So, a file from one source root can't refer to a file in another source
 /// root by path.
 #[derive(Debug, Clone, Copy, PartialEq, Eq, Hash, PartialOrd, Ord)]
 pub struct SourceRootId(pub u32);
@@ -38,15 +38,15 @@ pub struct SourceRoot {
 }

 /// `CrateGraph` is a bit of information which turns a set of text files into a
-/// number of Rust crates. Each Crate is the `FileId` of it's root module, the
-/// set of cfg flags (not yet implemented) and the set of dependencies. Note
+/// number of Rust crates. Each crate is defined by the `FileId` of its root module,
+/// the set of cfg flags (not yet implemented) and the set of dependencies. Note
 /// that, due to cfg's, there might be several crates for a single `FileId`! As
 /// in the rust-lang proper, a crate does not have a name. Instead, names are
 /// specified on dependency edges. That is, a crate might be known under
-/// different names in different dependant crates.
+/// different names in different dependent crates.
 ///
 /// Note that `CrateGraph` is build-system agnostic: it's a concept of the Rust
-/// langauge proper, not a concept of the build system. In practice, we get
+/// language proper, not a concept of the build system. In practice, we get
 /// `CrateGraph` by lowering `cargo metadata` output.
 #[derive(Debug, Clone, Default, PartialEq, Eq)]
 pub struct CrateGraph {

View file

@@ -1,5 +1,5 @@
-//! ra_db defines basic database traits. Concrete DB is defined by ra_ide_api.
-mod cancelation;
+//! ra_db defines basic database traits. The concrete DB is defined by ra_ide_api.
+mod cancellation;
 mod syntax_ptr;
 mod input;
 mod loc2id;
@@ -8,7 +8,7 @@ pub mod mock;
 use ra_syntax::{TextUnit, TextRange, SourceFile, TreePtr};

 pub use crate::{
-    cancelation::{Canceled, Cancelable},
+    cancellation::{Canceled, Cancelable},
     syntax_ptr::LocalSyntaxPtr,
     input::{
         FilesDatabase, FileId, CrateId, SourceRoot, SourceRootId, CrateGraph, Dependency,

View file

@@ -5,7 +5,7 @@ use rustc_hash::FxHashMap;
 use ra_arena::{Arena, ArenaId};

 /// There are two principle ways to refer to things:
-///   - by their locatinon (module in foo/bar/baz.rs at line 42)
+///   - by their location (module in foo/bar/baz.rs at line 42)
 ///   - by their numeric id (module `ModuleId(42)`)
 ///
 /// The first one is more powerful (you can actually find the thing in question
@@ -13,7 +13,7 @@ use ra_arena::{Arena, ArenaId};
 ///
 /// `Loc2IdMap` allows us to have a cake an eat it as well: by maintaining a
 /// bidirectional mapping between positional and numeric ids, we can use compact
-/// representation wich still allows us to get the actual item
+/// representation which still allows us to get the actual item.
 #[derive(Debug)]
 struct Loc2IdMap<LOC, ID>
 where
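The bidirectional location-to-id mapping described in the doc comment above boils down to an interning scheme, roughly like the sketch below (illustrative, not the actual `Loc2IdMap`, which is generic over the id type as well):

```rust
use std::collections::HashMap;
use std::hash::Hash;

// Interning sketch: a hash map gives compact numeric ids in one direction,
// and a vector of locations gives the way back.
struct Loc2IdMap<LOC: Clone + Eq + Hash> {
    loc2id: HashMap<LOC, u32>,
    id2loc: Vec<LOC>,
}

impl<LOC: Clone + Eq + Hash> Loc2IdMap<LOC> {
    fn new() -> Self {
        Loc2IdMap { loc2id: HashMap::new(), id2loc: Vec::new() }
    }
    // Intern a location: the same location always gets the same id.
    fn loc2id(&mut self, loc: &LOC) -> u32 {
        if let Some(&id) = self.loc2id.get(loc) {
            return id;
        }
        let id = self.id2loc.len() as u32;
        self.id2loc.push(loc.clone());
        self.loc2id.insert(loc.clone(), id);
        id
    }
    // Recover the actual item from its compact id.
    fn id2loc(&self, id: u32) -> &LOC {
        &self.id2loc[id as usize]
    }
}
```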

View file

@@ -13,8 +13,8 @@ use crate::{
     ty::InferenceResult,
 };

-/// hir::Crate describes a single crate. It's the main inteface with which
-/// crate's dependencies interact. Mostly, it should be just a proxy for the
+/// hir::Crate describes a single crate. It's the main interface with which
+/// a crate's dependencies interact. Mostly, it should be just a proxy for the
 /// root module.
 #[derive(Debug, Clone, PartialEq, Eq, Hash)]
 pub struct Crate {
@@ -78,6 +78,7 @@ impl Module {
     pub fn definition_source(&self, db: &impl HirDatabase) -> Cancelable<(FileId, ModuleSource)> {
         self.definition_source_impl(db)
     }
+
     /// Returns a node which declares this module, either a `mod foo;` or a `mod foo {}`.
     /// `None` for the crate root.
     pub fn declaration_source(
@@ -91,20 +92,24 @@ impl Module {
     pub fn krate(&self, db: &impl HirDatabase) -> Cancelable<Option<Crate>> {
         self.krate_impl(db)
     }
+
     /// Topmost parent of this module. Every module has a `crate_root`, but some
-    /// might miss `krate`. This can happen if a module's file is not included
-    /// into any module tree of any target from Cargo.toml.
+    /// might be missing `krate`. This can happen if a module's file is not included
+    /// in the module tree of any target in Cargo.toml.
     pub fn crate_root(&self, db: &impl HirDatabase) -> Cancelable<Module> {
         self.crate_root_impl(db)
     }
+
     /// Finds a child module with the specified name.
     pub fn child(&self, db: &impl HirDatabase, name: &Name) -> Cancelable<Option<Module>> {
         self.child_impl(db, name)
     }
+
     /// Finds a parent module.
     pub fn parent(&self, db: &impl HirDatabase) -> Cancelable<Option<Module>> {
         self.parent_impl(db)
     }
+
     pub fn path_to_root(&self, db: &impl HirDatabase) -> Cancelable<Vec<Module>> {
         let mut res = vec![self.clone()];
         let mut curr = self.clone();
@@ -114,13 +119,16 @@ impl Module {
         }
         Ok(res)
     }
+
     /// Returns a `ModuleScope`: a set of items, visible in this module.
     pub fn scope(&self, db: &impl HirDatabase) -> Cancelable<ModuleScope> {
         self.scope_impl(db)
     }
+
     pub fn resolve_path(&self, db: &impl HirDatabase, path: &Path) -> Cancelable<PerNs<DefId>> {
         self.resolve_path_impl(db, path)
     }
+
     pub fn problems(
         &self,
         db: &impl HirDatabase,
@@ -140,6 +148,7 @@ impl StructField {
     pub fn name(&self) -> &Name {
         &self.name
     }
+
     pub fn type_ref(&self) -> &TypeRef {
         &self.type_ref
     }
@@ -160,18 +169,21 @@ impl VariantData {
             _ => &[],
         }
     }
+
     pub fn is_struct(&self) -> bool {
         match self {
             VariantData::Struct(..) => true,
             _ => false,
         }
     }
+
     pub fn is_tuple(&self) -> bool {
         match self {
             VariantData::Tuple(..) => true,
             _ => false,
         }
     }
+
     pub fn is_unit(&self) -> bool {
         match self {
             VariantData::Unit => true,

View file

@@ -26,6 +26,7 @@ pub trait HirDatabase: SyntaxDatabase
         type HirSourceFileQuery;
         use fn HirFileId::hir_source_file;
     }
+
     fn expand_macro_invocation(invoc: MacroCallId) -> Option<Arc<MacroExpansion>> {
         type ExpandMacroCallQuery;
         use fn crate::macros::expand_macro_invocation;
@@ -80,10 +81,12 @@ pub trait HirDatabase: SyntaxDatabase
         type InputModuleItemsQuery;
         use fn query_definitions::input_module_items;
     }
+
     fn item_map(source_root_id: SourceRootId) -> Cancelable<Arc<ItemMap>> {
         type ItemMapQuery;
         use fn query_definitions::item_map;
     }
+
     fn module_tree(source_root_id: SourceRootId) -> Cancelable<Arc<ModuleTree>> {
         type ModuleTreeQuery;
         use fn crate::module_tree::ModuleTree::module_tree_query;

View file

@@ -33,8 +33,7 @@ pub struct Body {
 /// IDs. This is needed to go from e.g. a position in a file to the HIR
 /// expression containing it; but for type inference etc., we want to operate on
 /// a structure that is agnostic to the actual positions of expressions in the
-/// file, so that we don't recompute the type inference whenever some whitespace
-/// is typed.
+/// file, so that we don't recompute types whenever some whitespace is typed.
 #[derive(Debug, Eq, PartialEq)]
 pub struct BodySyntaxMapping {
     body: Arc<Body>,
@@ -74,20 +73,25 @@ impl BodySyntaxMapping {
     pub fn expr_syntax(&self, expr: ExprId) -> Option<LocalSyntaxPtr> {
         self.expr_syntax_mapping_back.get(expr).cloned()
     }
+
     pub fn syntax_expr(&self, ptr: LocalSyntaxPtr) -> Option<ExprId> {
         self.expr_syntax_mapping.get(&ptr).cloned()
     }
+
     pub fn node_expr(&self, node: &ast::Expr) -> Option<ExprId> {
         self.expr_syntax_mapping
             .get(&LocalSyntaxPtr::new(node.syntax()))
             .cloned()
     }
+
     pub fn pat_syntax(&self, pat: PatId) -> Option<LocalSyntaxPtr> {
         self.pat_syntax_mapping_back.get(pat).cloned()
     }
+
     pub fn syntax_pat(&self, ptr: LocalSyntaxPtr) -> Option<PatId> {
         self.pat_syntax_mapping.get(&ptr).cloned()
     }
+
     pub fn node_pat(&self, node: &ast::Pat) -> Option<PatId> {
         self.pat_syntax_mapping
             .get(&LocalSyntaxPtr::new(node.syntax()))

View file

@@ -9,25 +9,25 @@ use crate::{
 use crate::code_model_api::Module;

-/// hir makes a heavy use of ids: integer (u32) handlers to various things. You
+/// hir makes heavy use of ids: integer (u32) handlers to various things. You
 /// can think of id as a pointer (but without a lifetime) or a file descriptor
 /// (but for hir objects).
 ///
 /// This module defines a bunch of ids we are using. The most important ones are
 /// probably `HirFileId` and `DefId`.

-/// Input to the analyzer is a set of file, where each file is indetified by
+/// Input to the analyzer is a set of files, where each file is indentified by
 /// `FileId` and contains source code. However, another source of source code in
 /// Rust are macros: each macro can be thought of as producing a "temporary
-/// file". To assign id to such file, we use the id of a macro call that
+/// file". To assign an id to such a file, we use the id of the macro call that
 /// produced the file. So, a `HirFileId` is either a `FileId` (source code
 /// written by user), or a `MacroCallId` (source code produced by macro).
 ///
 /// What is a `MacroCallId`? Simplifying, it's a `HirFileId` of a file containin
 /// the call plus the offset of the macro call in the file. Note that this is a
-/// recursive definition! Nethetheless, size_of of `HirFileId` is finite
+/// recursive definition! However, the size_of of `HirFileId` is finite
 /// (because everything bottoms out at the real `FileId`) and small
-/// (`MacroCallId` uses location interner).
+/// (`MacroCallId` uses the location interner).
 #[derive(Debug, Clone, Copy, PartialEq, Eq, Hash)]
 pub struct HirFileId(HirFileIdRepr);
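The recursive-but-finite structure described in the doc comment above (a file id is either a real file or a macro call, and macro calls bottom out at real files) can be sketched like this. The representation and the `call_site` lookup are illustrative simplifications, not the actual `HirFileIdRepr`:

```rust
// Sketch of the two-variant file id: either a real file on disk, or the
// output of a macro call (identified by the call site).
#[derive(Debug, Clone, Copy, PartialEq, Eq, Hash)]
struct FileId(u32);

#[derive(Debug, Clone, Copy, PartialEq, Eq, Hash)]
struct MacroCallId(u32);

#[derive(Debug, Clone, Copy, PartialEq, Eq, Hash)]
enum HirFileId {
    File(FileId),
    Macro(MacroCallId),
}

impl HirFileId {
    // Everything bottoms out at a real file: `call_site` (here a closure
    // standing in for a database lookup) maps a macro call back to the
    // file containing the call, and we recurse until we hit a `FileId`.
    fn original_file(self, call_site: impl Fn(MacroCallId) -> HirFileId + Copy) -> FileId {
        match self {
            HirFileId::File(f) => f,
            HirFileId::Macro(m) => call_site(m).original_file(call_site),
        }
    }
}
```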
@@ -235,7 +235,7 @@ pub struct SourceItemId {
     pub(crate) item_id: Option<SourceFileItemId>,
 }

-/// Maps item's `SyntaxNode`s to `SourceFileItemId` and back.
+/// Maps items' `SyntaxNode`s to `SourceFileItemId`s and back.
 #[derive(Debug, PartialEq, Eq)]
 pub struct SourceFileItems {
     file_id: HirFileId,

View file

@@ -128,13 +128,13 @@ impl ImplItem {
pub struct ImplId(pub RawId);
impl_arena_id!(ImplId);

/// The collection of impl blocks is a two-step process: first we collect the
/// blocks per-module; then we build an index of all impl blocks in the crate.
/// This way, we avoid having to do this process for the whole crate whenever
/// a file is changed; as long as the impl blocks in the file don't change,
/// we don't need to do the second step again.
///
/// (The second step does not yet exist.)
#[derive(Debug, PartialEq, Eq)]
pub struct ModuleImplBlocks {
    impls: Arena<ImplId, ImplData>,
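The two-step idea can be sketched with made-up, simplified types (the real code uses arenas and salsa queries, not plain maps):

```rust
use std::collections::HashMap;

// Hypothetical stand-ins for the real ids and data.
type ModuleId = u32;

#[derive(Debug, Clone, PartialEq)]
struct ImplData {
    target_type: String, // the self type the impl block is for
}

// Step one: impl blocks collected per module. This is recomputed only when
// the owning module's file changes.
fn collect_module_impls(items: &[(&str, &str)]) -> Vec<ImplData> {
    items
        .iter()
        .map(|(ty, _body)| ImplData { target_type: ty.to_string() })
        .collect()
}

// Step two (not yet implemented in the real crate): a crate-wide index from
// type name to every module that has an impl for it, built from step one.
fn build_crate_index(
    per_module: &HashMap<ModuleId, Vec<ImplData>>,
) -> HashMap<String, Vec<ModuleId>> {
    let mut index: HashMap<String, Vec<ModuleId>> = HashMap::new();
    for (&module, impls) in per_module {
        for imp in impls {
            index.entry(imp.target_type.clone()).or_default().push(module);
        }
    }
    index
}

fn main() {
    let mut per_module = HashMap::new();
    per_module.insert(0, collect_module_impls(&[("Foo", "fn a() {}")]));
    per_module.insert(1, collect_module_impls(&[("Foo", "fn b() {}"), ("Bar", "")]));
    let index = build_crate_index(&per_module);
    assert_eq!(index["Foo"].len(), 2);
    assert_eq!(index["Bar"], vec![1]);
}
```

Editing one file invalidates only its entry in `per_module`; the index is rebuilt from cached per-module results rather than by re-walking every file.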


@@ -1,9 +1,9 @@
//! HIR (previously known as descriptors) provides a high-level object oriented
//! access to Rust code.
//!
//! The principal difference between HIR and syntax trees is that HIR is bound
//! to a particular crate instance. That is, it has cfg flags and features
//! applied. So, the relation between syntax and HIR is many-to-one.

macro_rules! ctry {
    ($expr:expr) => {


@@ -4,9 +4,9 @@
/// that is produced after expansion. See `HirFileId` and `MacroCallId` for how
/// we do that.
///
/// When the file-management question is resolved, all that is left is a
/// token-tree-to-token-tree transformation plus hygiene. We don't have either of
/// those yet, so all macros are string based at the moment!
use std::sync::Arc;

use ra_db::LocalSyntaxPtr;
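A toy illustration of what "string based" expansion means; the macro name and shape here are invented for the example, and the real token-tree transformation would replace this string-in/string-out step:

```rust
/// Given the text of a hypothetical macro call like
/// `vec_of_strings![ "a", "b" ]`, produce the text of the "file" it
/// notionally generates. No token trees, no hygiene: just strings.
fn expand_macro(call_text: &str) -> Option<String> {
    let body = call_text.strip_prefix("vec_of_strings![")?.strip_suffix("]")?;
    let items: Vec<String> = body
        .split(',')
        .map(|s| format!("String::from({})", s.trim()))
        .collect();
    Some(format!("vec![{}]", items.join(", ")))
}

fn main() {
    let expanded = expand_macro(r#"vec_of_strings![ "a", "b" ]"#).unwrap();
    assert_eq!(expanded, r#"vec![String::from("a"), String::from("b")]"#);
}
```

The output string is then parsed as a fresh source file, which is exactly the "macro-produced file" that gets a `MacroCallId`-based `HirFileId`.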


@@ -85,9 +85,9 @@ impl_arena_id!(LinkId);
/// Physically, rust source is organized as a set of files, but logically it is
/// organized as a tree of modules. Usually, a single file corresponds to a
/// single module, but it is not necessarily always the case.
///
/// `ModuleTree` encapsulates the logic of transitioning from the fuzzy world of files
/// (which can have multiple parents) to the precise world of modules (which
/// always have one parent).
#[derive(Default, Debug, PartialEq, Eq)]


@@ -3,7 +3,7 @@ use std::fmt;
use ra_syntax::{ast, SmolStr};

/// `Name` is a wrapper around string, which is used in hir for both references
/// and declarations. In theory, names should also carry hygiene info, but we are
/// not there yet!
#[derive(Clone, PartialEq, Eq, Hash)]
pub struct Name {


@@ -1,18 +1,18 @@
//! Name resolution algorithm. The end result of the algorithm is an `ItemMap`:
//! a map which maps each module to its scope: the set of items visible in the
//! module. That is, we only resolve imports here; name resolution of item
//! bodies will be done in a separate step.
//!
//! Like Rustc, we use an iterative per-crate algorithm: we start with scopes
//! containing only directly defined items, and then iteratively resolve
//! imports.
//!
//! To make this work nicely in the IDE scenario, we place `InputModuleItems`
//! in between raw syntax and name resolution. `InputModuleItems` are computed
//! using only the module's syntax, and it is all directly defined items plus
//! imports. The plan is to make `InputModuleItems` independent of local
//! modifications (that is, typing inside a function should not change IMIs),
//! so that the results of name resolution can be preserved unless the module
//! structure itself is modified.
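The iterative algorithm can be sketched as a fixed-point loop over imports (simplified, hypothetical types; the real implementation tracks namespaces, globs, and much more):

```rust
use std::collections::{HashMap, HashSet};

type ModuleId = u32;

// An import pulls a name out of another module's scope under an alias.
#[derive(Clone, Debug)]
struct Import {
    from: ModuleId, // module the path starts in
    name: String,   // name to look up there
    alias: String,  // name it gets in the importing module
}

/// Seed every scope with its directly defined items, then keep re-running
/// unresolved imports until a full pass makes no progress (the fixed point).
fn resolve(
    defined: &HashMap<ModuleId, HashSet<String>>,
    imports: &[(ModuleId, Import)],
) -> HashMap<ModuleId, HashSet<String>> {
    let mut scopes = defined.clone();
    loop {
        let mut changed = false;
        for (module, import) in imports {
            let visible = scopes
                .get(&import.from)
                .map_or(false, |s| s.contains(&import.name));
            if visible && scopes.entry(*module).or_default().insert(import.alias.clone()) {
                changed = true;
            }
        }
        if !changed {
            return scopes;
        }
    }
}

fn main() {
    let mut defined = HashMap::new();
    defined.insert(0, ["Foo".to_string()].into_iter().collect());
    defined.insert(1, HashSet::new());
    defined.insert(2, HashSet::new());
    let import = |from, name: &str| Import { from, name: name.into(), alias: name.into() };
    // Deliberately ordered so that a single pass is not enough: module 2
    // re-exports from module 1, which only gets `Foo` later in the pass.
    let imports = vec![(2, import(1, "Foo")), (1, import(0, "Foo"))];
    let scopes = resolve(&defined, &imports);
    assert!(scopes[&2].contains("Foo"));
}
```

Multiple passes are what make chained re-exports work regardless of the order imports are visited in.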
use std::sync::Arc;
@@ -34,7 +34,7 @@ use crate::{
    module_tree::{ModuleId, ModuleTree},
};

/// `ItemMap` is the result of name resolution. It contains, for each
/// module, the set of visible items.
// FIXME: currently we compute the item map per source-root. We should do it per crate instead.
#[derive(Default, Debug, PartialEq, Eq)]
@@ -59,9 +59,9 @@ impl ModuleScope {
/// A set of items and imports declared inside a module, without relation to
/// other modules.
///
/// This sits in-between raw syntax and name resolution and allows us to avoid
/// recomputing name res: if two instances of `InputModuleItems` are the same, we
/// can avoid redoing name resolution.
#[derive(Debug, Default, PartialEq, Eq)]
pub struct InputModuleItems {
    pub(crate) items: Vec<ModuleItem>,
@@ -114,7 +114,7 @@ enum ImportKind {
    Named(NamedImport),
}

/// `Resolution` is basically `DefId` atm, but it should account for stuff like
/// multiple namespaces, ambiguity and errors.
#[derive(Debug, Clone, PartialEq, Eq)]
pub struct Resolution {


@@ -1,4 +1,4 @@
/// Lookup hir elements using positions in the source code. This is a lossy
/// transformation: in general, a single source might correspond to several
/// modules, functions, etc, due to macros, cfgs and `#[path=]` attributes on
/// modules.


@@ -144,7 +144,7 @@ pub enum Ty {
    Bool,
    /// The primitive character type; holds a Unicode scalar value
    /// (a non-surrogate code point). Written as `char`.
    Char,
    /// A primitive signed integer type. For example, `i32`.
@@ -204,7 +204,7 @@ pub enum Ty {
    // `|a| yield a`.
    // Generator(DefId, GeneratorSubsts<'tcx>, hir::GeneratorMovability),
    // A type representing the types stored inside a generator.
    // This should only appear in GeneratorInteriors.
    // GeneratorWitness(Binder<&'tcx List<Ty<'tcx>>>),
    /// The never type `!`.


@@ -1,4 +1,4 @@
//! In certain situations, rust automatically inserts derefs as necessary: for
//! example, field accesses `foo.bar` still work when `foo` is actually a
//! reference to a type with the field `bar`. This is an approximation of the
//! logic in rustc (which lives in librustc_typeck/check/autoderef.rs).
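A minimal model of the auto-deref logic (the `Ty` here is an invented stand-in, not the real HIR type):

```rust
// Illustrative-only model: a type is either a named struct with fields,
// or a reference to another type.
#[derive(Debug, Clone, PartialEq)]
enum Ty {
    Adt { name: String, fields: Vec<String> },
    Ref(Box<Ty>),
}

/// Walk the autoderef chain: yield the type itself, then the target of
/// each reference, until a non-reference type is reached.
fn autoderef(mut ty: &Ty) -> Vec<&Ty> {
    let mut chain = vec![ty];
    while let Ty::Ref(inner) = ty {
        ty = &**inner;
        chain.push(ty);
    }
    chain
}

/// Resolve `expr.field` the way the comment describes: try each type in
/// the deref chain until one actually has the field.
fn field_access<'a>(ty: &'a Ty, field: &str) -> Option<&'a Ty> {
    autoderef(ty).into_iter().find(|t| match t {
        Ty::Adt { fields, .. } => fields.iter().any(|f| f == field),
        _ => false,
    })
}

fn main() {
    let foo = Ty::Adt { name: "Foo".into(), fields: vec!["bar".into()] };
    let ref_ref_foo = Ty::Ref(Box::new(Ty::Ref(Box::new(foo.clone()))));
    // `x.bar` works even when `x: &&Foo`.
    assert_eq!(field_access(&ref_ref_foo, "bar"), Some(&foo));
}
```

The real rustc/rust-analyzer version also follows user `Deref` impls, which is what makes this a trait-solving problem rather than a syntactic one.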


@@ -15,7 +15,7 @@ use crate::{
};

// These tests compare the inference results for all expressions in a file
// against snapshots of the expected results. If you change something and these
// tests fail in an expected way, you can update the comparison files by
// deleting them and running the tests again. Similarly, to add a new test, just
// write the test here in the same pattern and it will automatically write the
// snapshot.


@@ -121,9 +121,11 @@ impl AnalysisChange {
    pub fn new() -> AnalysisChange {
        AnalysisChange::default()
    }

    pub fn add_root(&mut self, root_id: SourceRootId, is_local: bool) {
        self.new_roots.push((root_id, is_local));
    }

    pub fn add_file(
        &mut self,
        root_id: SourceRootId,
@@ -142,9 +144,11 @@ impl AnalysisChange {
            .added
            .push(file);
    }

    pub fn change_file(&mut self, file_id: FileId, new_text: Arc<String>) {
        self.files_changed.push((file_id, new_text))
    }

    pub fn remove_file(&mut self, root_id: SourceRootId, file_id: FileId, path: RelativePathBuf) {
        let file = RemoveFile { file_id, path };
        self.roots_changed
@@ -153,9 +157,11 @@ impl AnalysisChange {
            .removed
            .push(file);
    }

    pub fn add_library(&mut self, data: LibraryData) {
        self.libraries_added.push(data)
    }

    pub fn set_crate_graph(&mut self, graph: CrateGraph) {
        self.crate_graph = Some(graph);
    }
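In simplified form, this accumulate-then-apply pattern looks like the following (stand-in types; the real `AnalysisChange` carries more kinds of edits and is consumed by `AnalysisHost::apply_change`):

```rust
use std::sync::Arc;

// Simplified stand-ins for the real ids.
type FileId = u32;
type SourceRootId = u32;

/// A simplified model of `AnalysisChange`: an accumulator of edits that the
/// host later applies to the analysis state in one atomic step.
#[derive(Default, Debug)]
struct AnalysisChange {
    files_changed: Vec<(FileId, Arc<String>)>,
    new_roots: Vec<(SourceRootId, bool)>,
}

impl AnalysisChange {
    fn new() -> AnalysisChange {
        AnalysisChange::default()
    }

    fn add_root(&mut self, root_id: SourceRootId, is_local: bool) {
        self.new_roots.push((root_id, is_local));
    }

    fn change_file(&mut self, file_id: FileId, new_text: Arc<String>) {
        self.files_changed.push((file_id, new_text));
    }
}

fn main() {
    // The client batches everything it learned (new roots, edits)...
    let mut change = AnalysisChange::new();
    change.add_root(0, true);
    change.change_file(1, Arc::new("fn main() {}".to_string()));
    // ...and the host would then apply it atomically, e.g.
    // host.apply_change(change);
    assert_eq!(change.files_changed.len(), 1);
    assert_eq!(change.new_roots, vec![(0, true)]);
}
```

Batching keeps the "never does any IO" invariant: the analyzer only ever sees complete, in-memory snapshots of the world.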
@@ -218,15 +224,19 @@ impl Query {
            limit: usize::max_value(),
        }
    }

    pub fn only_types(&mut self) {
        self.only_types = true;
    }

    pub fn libs(&mut self) {
        self.libs = true;
    }

    pub fn exact(&mut self) {
        self.exact = true;
    }

    pub fn limit(&mut self, limit: usize) {
        self.limit = limit
    }
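A toy version of this builder-style `Query` run against a flat symbol list (illustrative only; the real query executes over a precomputed symbol index):

```rust
// Setters narrow the search, then the query is run over an index.
// Names and the (name, is_type) symbol shape are invented for the example.
#[derive(Debug)]
struct Query {
    query: String,
    only_types: bool,
    limit: usize,
}

impl Query {
    fn new(query: &str) -> Query {
        Query { query: query.to_string(), only_types: false, limit: usize::MAX }
    }

    fn only_types(&mut self) {
        self.only_types = true;
    }

    fn limit(&mut self, limit: usize) {
        self.limit = limit;
    }

    /// Apply all filters, stopping once `limit` hits are collected.
    fn search<'a>(&self, symbols: &'a [(&'a str, bool)]) -> Vec<&'a str> {
        symbols
            .iter()
            .filter(|(name, is_type)| name.contains(&self.query) && (!self.only_types || *is_type))
            .map(|(name, _)| *name)
            .take(self.limit)
            .collect()
    }
}

fn main() {
    let symbols = [("Foo", true), ("foo_fn", false), ("FooBar", true)];
    let mut q = Query::new("Foo");
    q.only_types();
    q.limit(1);
    assert_eq!(q.search(&symbols), vec!["Foo"]);
}
```

The mutable-setter style (rather than a chaining builder) matches the snippet above, where callers configure the query in place before handing it to `symbol_search`.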
@@ -257,15 +267,19 @@ impl NavigationTarget {
            ptr: Some(symbol.ptr.clone()),
        }
    }

    pub fn name(&self) -> &SmolStr {
        &self.name
    }

    pub fn kind(&self) -> SyntaxKind {
        self.kind
    }

    pub fn file_id(&self) -> FileId {
        self.file_id
    }

    pub fn range(&self) -> TextRange {
        self.range
    }
@@ -305,6 +319,7 @@ impl AnalysisHost {
            db: self.db.snapshot(),
        }
    }

    /// Applies changes to the current state of the world. If there are
    /// outstanding snapshots, they will be canceled.
    pub fn apply_change(&mut self, change: AnalysisChange) {
@@ -326,30 +341,36 @@ impl Analysis {
    pub fn file_text(&self, file_id: FileId) -> Arc<String> {
        self.db.file_text(file_id)
    }

    /// Gets the syntax tree of the file.
    pub fn file_syntax(&self, file_id: FileId) -> TreePtr<SourceFile> {
        self.db.source_file(file_id).clone()
    }

    /// Gets the file's `LineIndex`: data structure to convert between absolute
    /// offsets and line/column representation.
    pub fn file_line_index(&self, file_id: FileId) -> Arc<LineIndex> {
        self.db.line_index(file_id)
    }

    /// Selects the next syntactic nodes encompassing the range.
    pub fn extend_selection(&self, frange: FileRange) -> TextRange {
        extend_selection::extend_selection(&self.db, frange)
    }

    /// Returns the position of the matching brace (all types of braces are
    /// supported).
    pub fn matching_brace(&self, file: &SourceFile, offset: TextUnit) -> Option<TextUnit> {
        ra_ide_api_light::matching_brace(file, offset)
    }

    /// Returns a syntax tree represented as `String`, for debug purposes.
    // FIXME: use a better name here.
    pub fn syntax_tree(&self, file_id: FileId) -> String {
        let file = self.db.source_file(file_id);
        ra_ide_api_light::syntax_tree(&file)
    }

    /// Returns an edit to remove all newlines in the range, cleaning up minor
    /// stuff like trailing commas.
    pub fn join_lines(&self, frange: FileRange) -> SourceChange {
@@ -359,6 +380,7 @@ impl Analysis {
            ra_ide_api_light::join_lines(&file, frange.range),
        )
    }

    /// Returns an edit which should be applied when opening a new line, fixing
    /// up minor stuff like continuing the comment.
    pub fn on_enter(&self, position: FilePosition) -> Option<SourceChange> {
@@ -366,6 +388,7 @@ impl Analysis {
        let edit = ra_ide_api_light::on_enter(&file, position.offset)?;
        Some(SourceChange::from_local_edit(position.file_id, edit))
    }

    /// Returns an edit which should be applied after `=` was typed. Primarily,
    /// this works when adding `let =`.
    // FIXME: use a snippet completion instead of this hack here.
@@ -374,23 +397,27 @@ impl Analysis {
        let edit = ra_ide_api_light::on_eq_typed(&file, position.offset)?;
        Some(SourceChange::from_local_edit(position.file_id, edit))
    }

    /// Returns an edit which should be applied when a dot ('.') is typed on a blank line, indenting the line appropriately.
    pub fn on_dot_typed(&self, position: FilePosition) -> Option<SourceChange> {
        let file = self.db.source_file(position.file_id);
        let edit = ra_ide_api_light::on_dot_typed(&file, position.offset)?;
        Some(SourceChange::from_local_edit(position.file_id, edit))
    }

    /// Returns a tree representation of symbols in the file. Useful to draw a
    /// file outline.
    pub fn file_structure(&self, file_id: FileId) -> Vec<StructureNode> {
        let file = self.db.source_file(file_id);
        ra_ide_api_light::file_structure(&file)
    }

    /// Returns the set of folding ranges.
    pub fn folding_ranges(&self, file_id: FileId) -> Vec<Fold> {
        let file = self.db.source_file(file_id);
        ra_ide_api_light::folding_ranges(&file)
    }

    /// Fuzzy searches for a symbol.
    pub fn symbol_search(&self, query: Query) -> Cancelable<Vec<NavigationTarget>> {
        let res = symbol_index::world_symbols(&*self.db, query)?
@@ -399,62 +426,76 @@ impl Analysis {
            .collect();
        Ok(res)
    }

    pub fn goto_definition(
        &self,
        position: FilePosition,
    ) -> Cancelable<Option<Vec<NavigationTarget>>> {
        goto_definition::goto_definition(&*self.db, position)
    }

    /// Finds all usages of the reference at point.
    pub fn find_all_refs(&self, position: FilePosition) -> Cancelable<Vec<(FileId, TextRange)>> {
        self.db.find_all_refs(position)
    }

    /// Returns a short text describing the element at position.
    pub fn hover(&self, position: FilePosition) -> Cancelable<Option<RangeInfo<String>>> {
        hover::hover(&*self.db, position)
    }

    /// Computes parameter information for the given call expression.
    pub fn call_info(&self, position: FilePosition) -> Cancelable<Option<CallInfo>> {
        call_info::call_info(&*self.db, position)
    }

    /// Returns a `mod name;` declaration which created the current module.
    pub fn parent_module(&self, position: FilePosition) -> Cancelable<Vec<NavigationTarget>> {
        self.db.parent_module(position)
    }

    /// Returns the crates this file belongs to.
    pub fn crate_for(&self, file_id: FileId) -> Cancelable<Vec<CrateId>> {
        self.db.crate_for(file_id)
    }

    /// Returns the root file of the given crate.
    pub fn crate_root(&self, crate_id: CrateId) -> Cancelable<FileId> {
        Ok(self.db.crate_graph().crate_root(crate_id))
    }

    /// Returns the set of possible targets to run for the current file.
    pub fn runnables(&self, file_id: FileId) -> Cancelable<Vec<Runnable>> {
        runnables::runnables(&*self.db, file_id)
    }

    /// Computes syntax highlighting for the given file.
    pub fn highlight(&self, file_id: FileId) -> Cancelable<Vec<HighlightedRange>> {
        syntax_highlighting::highlight(&*self.db, file_id)
    }

    /// Computes completions at the given position.
    pub fn completions(&self, position: FilePosition) -> Cancelable<Option<Vec<CompletionItem>>> {
        let completions = completion::completions(&self.db, position)?;
        Ok(completions.map(|it| it.into()))
    }

    /// Computes assists (aka code actions, aka intentions) for the given
    /// position.
    pub fn assists(&self, frange: FileRange) -> Cancelable<Vec<SourceChange>> {
        Ok(self.db.assists(frange))
    }

    /// Computes the set of diagnostics for the given file.
    pub fn diagnostics(&self, file_id: FileId) -> Cancelable<Vec<Diagnostic>> {
        self.db.diagnostics(file_id)
    }

    /// Computes the type of the expression at the given position.
    pub fn type_of(&self, frange: FileRange) -> Cancelable<Option<String>> {
        hover::type_of(&*self.db, frange)
    }

    /// Returns the edit required to rename the reference at the position to
    /// the new name.
    pub fn rename(


@@ -326,7 +326,7 @@ fn on_notification(
        if pending_requests.remove(&id) {
            let response = RawResponse::err(
                id,
                ErrorCode::RequestCanceled as i32,
                "canceled by client".to_string(),
            );
            msg_sender.send(RawMessage::Response(response)).unwrap()