// nushell/crates/nu-errors/src/lib.rs

use bigdecimal::BigDecimal;
use codespan_reporting::diagnostic::{Diagnostic, Label};
use derive_new::new;
use getset::Getters;
use nu_ansi_term::Color;
use nu_source::{
DbgDocBldr, DebugDocBuilder, HasFallibleSpan, PrettyDebug, Span, Spanned, SpannedItem,
};
use num_bigint::BigInt;
use num_traits::ToPrimitive;
use serde::{Deserialize, Serialize};
use std::fmt;
use std::ops::Range;
/// A structured reason for a ParseError. Note that parsing in nu is more like macro expansion in
/// other languages, so the kinds of errors that can occur during parsing are more contextual than
/// you might expect.
#[derive(Debug, Clone, PartialEq, PartialOrd, Eq, Ord, Hash, Serialize, Deserialize)]
pub enum ParseErrorReason {
/// The parser encountered an EOF rather than what it was expecting
Eof { expected: String, span: Span },
/// The parser expected to see the end of a token stream (possibly the token
/// stream from inside a delimited token node), but found something else.
ExtraTokens { actual: Spanned<String> },
/// The parser encountered something other than what it was expecting
Mismatch {
expected: String,
actual: Spanned<String>,
},
/// An unexpected internal error has occurred
InternalError { message: Spanned<String> },
/// The parser tried to parse an argument for a command, but it failed for
/// some reason
ArgumentError {
command: Spanned<String>,
error: ArgumentError,
},
}
/// A newtype for `ParseErrorReason`
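///
/// Illustrative sketch of how these constructors are typically used (the
/// expected shape `"block"` and the bare word `foo` are made-up inputs; the
/// span helpers come from `nu_source`):
///
/// ```ignore
/// use nu_errors::{ParseError, ShellError};
/// use nu_source::{Span, SpannedItem};
///
/// // The parser wanted a block but found the bare word `foo`.
/// let actual = "foo".to_string().spanned(Span::unknown());
/// let parse_error = ParseError::mismatch("block", actual);
///
/// // Parse errors are surfaced to the user by converting them into shell errors.
/// let shell_error: ShellError = parse_error.into();
/// ```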
#[derive(Debug, Clone, Getters, PartialEq, PartialOrd, Eq, Ord, Hash, Serialize, Deserialize)]
pub struct ParseError {
#[get = "pub"]
reason: ParseErrorReason,
}
impl ParseError {
/// Construct a [ParseErrorReason::Eof](ParseErrorReason::Eof)
pub fn unexpected_eof(expected: impl Into<String>, span: Span) -> ParseError {
ParseError {
reason: ParseErrorReason::Eof {
expected: expected.into(),
span,
},
}
}
/// Construct a [ParseErrorReason::ExtraTokens](ParseErrorReason::ExtraTokens)
pub fn extra_tokens(actual: Spanned<impl Into<String>>) -> ParseError {
let Spanned { span, item } = actual;
ParseError {
reason: ParseErrorReason::ExtraTokens {
actual: item.into().spanned(span),
},
}
}
/// Construct a [ParseErrorReason::Mismatch](ParseErrorReason::Mismatch)
pub fn mismatch(expected: impl Into<String>, actual: Spanned<impl Into<String>>) -> ParseError {
let Spanned { span, item } = actual;
ParseError {
reason: ParseErrorReason::Mismatch {
expected: expected.into(),
actual: item.into().spanned(span),
},
}
}
/// Construct a [ParseErrorReason::InternalError](ParseErrorReason::InternalError)
pub fn internal_error(message: Spanned<impl Into<String>>) -> ParseError {
ParseError {
reason: ParseErrorReason::InternalError {
message: message.item.into().spanned(message.span),
},
}
}
/// Construct a [ParseErrorReason::ArgumentError](ParseErrorReason::ArgumentError)
pub fn argument_error(command: Spanned<impl Into<String>>, kind: ArgumentError) -> ParseError {
ParseError {
reason: ParseErrorReason::ArgumentError {
command: command.item.into().spanned(command.span),
error: kind,
},
}
}
}
/// Convert a [ParseError](ParseError) into a [ShellError](ShellError)
impl From<ParseError> for ShellError {
fn from(error: ParseError) -> ShellError {
match error.reason {
ParseErrorReason::Eof { expected, span } => ShellError::unexpected_eof(expected, span),
ParseErrorReason::ExtraTokens { actual } => ShellError::type_error("nothing", actual),
ParseErrorReason::Mismatch { actual, expected } => {
ShellError::type_error(expected, actual)
}
ParseErrorReason::InternalError { message } => ShellError::labeled_error(
format!("Internal error: {}", message.item),
&message.item,
&message.span,
),
ParseErrorReason::ArgumentError { command, error } => {
ShellError::argument_error(command, error)
}
}
}
}
/// ArgumentError describes various ways that the parser could fail because of unexpected arguments.
/// Nu commands are like a combination of functions and macros, and these errors correspond to
/// problems that could be identified during expansion based on the syntactic signature of a
/// command.
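///
/// A minimal sketch of how a variant reaches the user (the `where` command and
/// its `condition` positional are made-up examples; `ShellError::argument_error`
/// and `into_diagnostic` are defined later in this module):
///
/// ```ignore
/// use nu_errors::{ArgumentError, ShellError};
/// use nu_source::{Span, SpannedItem};
///
/// // `where` was called without its mandatory positional argument.
/// let command = "where".to_string().spanned(Span::unknown());
/// let error = ShellError::argument_error(
///     command,
///     ArgumentError::MissingMandatoryPositional("condition".into()),
/// );
///
/// // `into_diagnostic` renders the error as a diagnostic with source-span labels.
/// assert!(error.into_diagnostic().is_some());
/// ```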
#[derive(Debug, Eq, PartialEq, Clone, Ord, Hash, PartialOrd, Serialize, Deserialize)]
pub enum ArgumentError {
/// The command specified a mandatory flag, but it was missing.
MissingMandatoryFlag(String),
/// The command specified a mandatory positional argument, but it was missing.
MissingMandatoryPositional(String),
/// A flag was found, and it should have been followed by a value, but no value was found
MissingValueForName(String),
/// An argument was found, but the command does not recognize it
UnexpectedArgument(Spanned<String>),
/// A flag was found, but the command does not recognize it
UnexpectedFlag(Spanned<String>),
/// A sequence of characters was found that was not syntactically valid (but would have
/// been valid if the command were an external command)
InvalidExternalWord,
}
impl PrettyDebug for ArgumentError {
fn pretty(&self) -> DebugDocBuilder {
match self {
ArgumentError::MissingMandatoryFlag(flag) => {
DbgDocBldr::description("missing `")
+ DbgDocBldr::description(flag)
+ DbgDocBldr::description("` as mandatory flag")
}
ArgumentError::UnexpectedArgument(name) => {
DbgDocBldr::description("unexpected `")
+ DbgDocBldr::description(&name.item)
+ DbgDocBldr::description("` is not supported")
}
ArgumentError::UnexpectedFlag(name) => {
DbgDocBldr::description("unexpected `")
+ DbgDocBldr::description(&name.item)
+ DbgDocBldr::description("` is not supported")
}
ArgumentError::MissingMandatoryPositional(pos) => {
DbgDocBldr::description("missing `")
+ DbgDocBldr::description(pos)
+ DbgDocBldr::description("` as mandatory positional argument")
}
ArgumentError::MissingValueForName(name) => {
DbgDocBldr::description("missing value for flag `")
+ DbgDocBldr::description(name)
+ DbgDocBldr::description("`")
}
ArgumentError::InvalidExternalWord => DbgDocBldr::description("invalid word"),
}
}
}
/// A `ShellError` is a proximate error and a possible cause, which could have its own cause,
/// creating a cause chain.
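///
/// Illustrative sketch of a cause chain (the messages are made up; both fields
/// are public, so callers can attach a cause directly):
///
/// ```ignore
/// use nu_errors::ShellError;
///
/// let proximate = ShellError::untagged_runtime_error("failed to load the config");
/// let cause = ShellError::untagged_runtime_error("could not open the config file");
///
/// let chained = ShellError {
///     cause: Some(Box::new(cause)),
///     ..proximate
/// };
/// assert!(chained.cause.is_some());
/// ```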
#[derive(Debug, Eq, PartialEq, Ord, PartialOrd, Clone, Serialize, Deserialize, Hash)]
pub struct ShellError {
pub error: ProximateShellError,
pub cause: Option<Box<ShellError>>,
}
/// `PrettyDebug` is for internal debugging. For user-facing debugging, [into_diagnostic](ShellError::into_diagnostic)
/// is used, which prints an error, highlighting spans.
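///
/// A rough sketch of the two renderings (the missing `]` is a made-up example):
///
/// ```ignore
/// use nu_errors::ShellError;
/// use nu_source::Span;
///
/// let err = ShellError::unexpected_eof("]", Span::unknown());
///
/// // Internal, single-line rendering (this is what `Display` below relies on).
/// let internal = format!("{}", err);
///
/// // User-facing rendering: a `codespan_reporting` diagnostic with span labels.
/// let user_facing = err.into_diagnostic();
/// ```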
impl PrettyDebug for ShellError {
fn pretty(&self) -> DebugDocBuilder {
match &self.error {
ProximateShellError::SyntaxError { problem } => {
DbgDocBldr::error("Syntax Error")
+ DbgDocBldr::space()
+ DbgDocBldr::delimit("(", DbgDocBldr::description(&problem.item), ")")
}
ProximateShellError::UnexpectedEof { .. } => DbgDocBldr::error("Unexpected end"),
ProximateShellError::TypeError { expected, actual } => {
DbgDocBldr::error("Type Error")
+ DbgDocBldr::space()
+ DbgDocBldr::delimit(
"(",
DbgDocBldr::description("expected:")
+ DbgDocBldr::space()
+ DbgDocBldr::description(expected)
+ DbgDocBldr::description(",")
+ DbgDocBldr::space()
+ DbgDocBldr::description("actual:")
+ DbgDocBldr::space()
+ DbgDocBldr::option(actual.item.as_ref().map(DbgDocBldr::description)),
")",
)
}
ProximateShellError::MissingProperty { subpath, expr } => {
DbgDocBldr::error("Missing Property")
+ DbgDocBldr::space()
+ DbgDocBldr::delimit(
"(",
DbgDocBldr::description("expr:")
+ DbgDocBldr::space()
+ DbgDocBldr::description(&expr.item)
+ DbgDocBldr::description(",")
+ DbgDocBldr::space()
+ DbgDocBldr::description("subpath:")
+ DbgDocBldr::space()
+ DbgDocBldr::description(&subpath.item),
")",
)
}
ProximateShellError::InvalidIntegerIndex { subpath, .. } => {
DbgDocBldr::error("Invalid integer index")
+ DbgDocBldr::space()
+ DbgDocBldr::delimit(
"(",
DbgDocBldr::description("subpath:")
+ DbgDocBldr::space()
+ DbgDocBldr::description(&subpath.item),
")",
)
}
ProximateShellError::MissingValue { reason, .. } => {
DbgDocBldr::error("Missing Value")
+ DbgDocBldr::space()
+ DbgDocBldr::delimit(
"(",
DbgDocBldr::description("reason:")
+ DbgDocBldr::space()
+ DbgDocBldr::description(reason),
")",
)
}
ProximateShellError::ArgumentError { command, error } => {
DbgDocBldr::error("Argument Error")
+ DbgDocBldr::space()
+ DbgDocBldr::delimit(
"(",
DbgDocBldr::description("command:")
+ DbgDocBldr::space()
+ DbgDocBldr::description(&command.item)
+ DbgDocBldr::description(",")
+ DbgDocBldr::space()
+ DbgDocBldr::description("error:")
+ DbgDocBldr::space()
+ error.pretty(),
")",
)
}
ProximateShellError::RangeError {
kind,
actual_kind,
operation,
} => {
DbgDocBldr::error("Range Error")
+ DbgDocBldr::space()
+ DbgDocBldr::delimit(
"(",
DbgDocBldr::description("expected:")
+ DbgDocBldr::space()
+ kind.pretty()
+ DbgDocBldr::description(",")
+ DbgDocBldr::space()
+ DbgDocBldr::description("actual:")
+ DbgDocBldr::space()
+ DbgDocBldr::description(&actual_kind.item)
+ DbgDocBldr::description(",")
+ DbgDocBldr::space()
+ DbgDocBldr::description("operation:")
+ DbgDocBldr::space()
+ DbgDocBldr::description(operation),
")",
)
}
ProximateShellError::Diagnostic(_) => DbgDocBldr::error("diagnostic"),
ProximateShellError::CoerceError { left, right } => {
DbgDocBldr::error("Coercion Error")
+ DbgDocBldr::space()
+ DbgDocBldr::delimit(
"(",
DbgDocBldr::description("left:")
+ DbgDocBldr::space()
+ DbgDocBldr::description(&left.item)
+ DbgDocBldr::description(",")
+ DbgDocBldr::space()
+ DbgDocBldr::description("right:")
+ DbgDocBldr::space()
+ DbgDocBldr::description(&right.item),
")",
)
}
ProximateShellError::UntaggedRuntimeError { reason } => {
DbgDocBldr::error("Unknown Error")
+ DbgDocBldr::delimit("(", DbgDocBldr::description(reason), ")")
}
ProximateShellError::Unimplemented { reason } => {
DbgDocBldr::error("Unimplemented")
+ DbgDocBldr::delimit("(", DbgDocBldr::description(reason), ")")
}
ProximateShellError::ExternalPlaceholderError => {
DbgDocBldr::error("non-zero external exit code")
}
}
}
}
impl std::fmt::Display for ShellError {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
write!(f, "{}", self.pretty().display())
}
}
impl serde::de::Error for ShellError {
fn custom<T>(msg: T) -> Self
where
T: std::fmt::Display,
{
ShellError::untagged_runtime_error(msg.to_string())
}
}
impl ShellError {
/// An error that describes a mismatch between the given type and the expected type
pub fn type_error(
expected: impl Into<String>,
actual: Spanned<impl Into<String>>,
) -> ShellError {
ProximateShellError::TypeError {
expected: expected.into(),
actual: actual.map(|i| Some(i.into())),
}
.start()
}
pub fn missing_property(
subpath: Spanned<impl Into<String>>,
expr: Spanned<impl Into<String>>,
) -> ShellError {
ProximateShellError::MissingProperty {
subpath: subpath.map(|s| s.into()),
expr: expr.map(|e| e.into()),
}
.start()
}
pub fn missing_value(span: impl Into<Option<Span>>, reason: impl Into<String>) -> ShellError {
ProximateShellError::MissingValue {
span: span.into(),
reason: reason.into(),
}
.start()
}
pub fn invalid_integer_index(
subpath: Spanned<impl Into<String>>,
integer: impl Into<Span>,
) -> ShellError {
ProximateShellError::InvalidIntegerIndex {
subpath: subpath.map(|s| s.into()),
integer: integer.into(),
}
.start()
}
pub fn untagged_runtime_error(error: impl Into<String>) -> ShellError {
ProximateShellError::UntaggedRuntimeError {
reason: error.into(),
}
.start()
}
pub fn unexpected_eof(expected: impl Into<String>, span: impl Into<Span>) -> ShellError {
ProximateShellError::UnexpectedEof {
expected: expected.into(),
span: span.into(),
}
.start()
}
pub fn range_error(
expected: impl Into<ExpectedRange>,
actual: &Spanned<impl fmt::Debug>,
operation: impl Into<String>,
) -> ShellError {
ProximateShellError::RangeError {
kind: expected.into(),
actual_kind: format!("{:?}", actual.item).spanned(actual.span),
operation: operation.into(),
}
.start()
}
pub fn syntax_error(problem: Spanned<impl Into<String>>) -> ShellError {
ProximateShellError::SyntaxError {
problem: problem.map(|p| p.into()),
}
.start()
}
pub fn coerce_error(
left: Spanned<impl Into<String>>,
right: Spanned<impl Into<String>>,
) -> ShellError {
ProximateShellError::CoerceError {
left: left.map(|l| l.into()),
right: right.map(|r| r.into()),
}
.start()
}
pub fn argument_error(command: Spanned<impl Into<String>>, kind: ArgumentError) -> ShellError {
ProximateShellError::ArgumentError {
command: command.map(|c| c.into()),
error: kind,
}
.start()
}
pub fn diagnostic(diagnostic: Diagnostic<usize>) -> ShellError {
ProximateShellError::Diagnostic(ShellDiagnostic { diagnostic }).start()
}
pub fn external_non_zero() -> ShellError {
ProximateShellError::ExternalPlaceholderError.start()
}
pub fn into_diagnostic(self) -> Option<Diagnostic<usize>> {
match self.error {
ProximateShellError::MissingValue { span, reason } => {
let mut d = Diagnostic::bug().with_message(format!("Internal Error (missing value) :: {}", reason));
if let Some(span) = span {
d = d.with_labels(vec![Label::primary(0, span)]);
}
Some(d)
}
ProximateShellError::ArgumentError {
command,
error,
} => Some(match error {
ArgumentError::InvalidExternalWord => Diagnostic::error().with_message("Invalid bare word for Nu command (did you intend to invoke an external command?)")
.with_labels(vec![Label::primary(0, command.span)]),
ArgumentError::UnexpectedArgument(argument) => Diagnostic::error().with_message(
format!(
"{} unexpected {}",
Color::Cyan.paint(&command.item),
Color::Green.bold().paint(&argument.item)
)
)
.with_labels(
vec![Label::primary(0, argument.span).with_message(
format!("unexpected argument (try {} -h)", &command.item))]
),
ArgumentError::UnexpectedFlag(flag) => Diagnostic::error().with_message(
format!(
"{} unexpected {}",
Color::Cyan.paint(&command.item),
Color::Green.bold().paint(&flag.item)
),
)
.with_labels(vec![
Label::primary(0, flag.span).with_message(
format!("unexpected flag (try {} -h)", &command.item))
]),
ArgumentError::MissingMandatoryFlag(name) => Diagnostic::error().with_message( format!(
"{} requires {}{}",
Color::Cyan.paint(&command.item),
Color::Green.bold().paint("--"),
Color::Green.bold().paint(name)
),
)
.with_labels(vec![Label::primary(0, command.span)]),
ArgumentError::MissingMandatoryPositional(name) => Diagnostic::error().with_message(
format!(
"{} requires {} parameter",
Color::Cyan.paint(&command.item),
Color::Green.bold().paint(name.clone())
),
)
.with_labels(
vec![Label::primary(0, command.span).with_message(format!("requires {} parameter", name))],
),
ArgumentError::MissingValueForName(name) => Diagnostic::error().with_message(
format!(
"{} is missing value for flag {}{}",
Color::Cyan.paint(&command.item),
Color::Green.bold().paint("--"),
Color::Green.bold().paint(name)
),
)
.with_labels(vec![Label::primary(0, command.span)]),
}),
ProximateShellError::TypeError {
expected,
actual:
Spanned {
item: Some(actual),
span,
},
} => Some(Diagnostic::error().with_message("Type Error").with_labels(
vec![Label::primary(0, span)
.with_message(format!("Expected {}, found {}", expected, actual))]),
),
ProximateShellError::TypeError {
expected,
actual:
Spanned {
item: None,
span
},
} => Some(Diagnostic::error().with_message("Type Error")
.with_labels(vec![Label::primary(0, span).with_message(expected)])),
ProximateShellError::UnexpectedEof {
expected, span
} => Some(Diagnostic::error().with_message("Unexpected end of input")
.with_labels(vec![Label::primary(0, span).with_message(format!("Expected {}", expected))])),
ProximateShellError::RangeError {
kind,
operation,
actual_kind:
Spanned {
item,
span
},
} => Some(Diagnostic::error().with_message("Range Error").with_labels(
vec![Label::primary(0, span).with_message(format!(
"Expected to convert {} to {} while {}, but it was out of range",
item,
kind.display(),
operation
))]),
),
ProximateShellError::SyntaxError {
problem:
Spanned {
span,
item
},
} => Some(Diagnostic::error().with_message("Syntax Error")
.with_labels(vec![Label::primary(0, span).with_message(item)])),
ProximateShellError::MissingProperty { subpath, expr, .. } => {
let mut diag = Diagnostic::error().with_message("Missing property");
if subpath.span == Span::unknown() {
diag.message = format!("Missing property (for {})", subpath.item);
} else {
let subpath = Label::primary(0, subpath.span).with_message(subpath.item);
let mut labels = vec![subpath];
if expr.span != Span::unknown() {
let expr = Label::primary(0, expr.span).with_message(expr.item);
labels.push(expr);
}
diag = diag.with_labels(labels);
}
Some(diag)
}
ProximateShellError::InvalidIntegerIndex { subpath, integer } => {
let mut diag = Diagnostic::error().with_message("Invalid integer property");
let mut labels = vec![];
if subpath.span == Span::unknown() {
diag.message = format!("Invalid integer property (for {})", subpath.item)
} else {
let label = Label::primary(0, subpath.span).with_message(subpath.item);
labels.push(label);
}
labels.push(Label::secondary(0, integer).with_message("integer"));
diag = diag.with_labels(labels);
Some(diag)
}
ProximateShellError::Diagnostic(diag) => Some(diag.diagnostic),
ProximateShellError::CoerceError { left, right } => {
Some(Diagnostic::error().with_message("Coercion error")
.with_labels(vec![Label::primary(0, left.span).with_message(left.item),
Label::secondary(0, right.span).with_message(right.item)]))
}
ProximateShellError::UntaggedRuntimeError { reason } => Some(Diagnostic::error().with_message(format!("Error: {}", reason))),
ProximateShellError::Unimplemented { reason } => Some(Diagnostic::error().with_message(format!("Unimplemented: {}", reason))),
ProximateShellError::ExternalPlaceholderError => None,
}
}
pub fn labeled_error(
msg: impl Into<String>,
label: impl Into<String>,
span: impl Into<Span>,
) -> ShellError {
ShellError::diagnostic(
Diagnostic::error()
.with_message(msg.into())
.with_labels(vec![
Label::primary(0, span.into()).with_message(label.into())
]),
)
}
pub fn labeled_error_with_secondary(
msg: impl Into<String>,
primary_label: impl Into<String>,
primary_span: impl Into<Span>,
secondary_label: impl Into<String>,
secondary_span: impl Into<Span>,
) -> ShellError {
ShellError::diagnostic(
Diagnostic::error()
.with_message(msg.into())
.with_labels(vec![
Label::primary(0, primary_span.into()).with_message(primary_label.into()),
Label::secondary(0, secondary_span.into()).with_message(secondary_label.into()),
]),
)
}
pub fn unimplemented(title: impl Into<String>) -> ShellError {
ShellError::untagged_runtime_error(&format!("Unimplemented: {}", title.into()))
}
pub fn unexpected(title: impl Into<String>) -> ShellError {
ShellError::untagged_runtime_error(&format!("Unexpected: {}", title.into()))
}
pub fn is_unimplemented(&self) -> bool {
matches!(self.error, ProximateShellError::Unimplemented { .. })
}
}
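// Illustrative sketch (not part of the original source): how a caller might use the
// `labeled_error` constructor above. The message text and the use of `Span::unknown()`
// are made up for the example.
#[cfg(test)]
mod labeled_error_sketch {
    use super::ShellError;
    use nu_source::Span;

    #[test]
    fn builds_a_labeled_diagnostic() {
        let err = ShellError::labeled_error(
            "Type mismatch",
            "expected an integer",
            Span::unknown(),
        );

        // `ShellError` implements `Debug` and `Display` (required by its
        // `std::error::Error` impl), so the error can always be rendered as text.
        assert!(!format!("{:?}", err).is_empty());
    }
}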
/// `ExpectedRange` describes a range of values that was expected by a command. In addition
/// to typical ranges, this enum allows an error to specify that the range of allowed values
/// corresponds to a particular numeric type (which is a dominant use-case for the
/// [RangeError](ProximateShellError::RangeError) error type).
#[derive(Debug, Eq, PartialEq, Ord, PartialOrd, Hash, Clone, Serialize, Deserialize)]
pub enum ExpectedRange {
I8,
I16,
I32,
I64,
I128,
U8,
U16,
U32,
U64,
U128,
F32,
F64,
Usize,
Size,
BigInt,
BigDecimal,
Range { start: usize, end: usize },
}
/// Convert a Rust range into an [ExpectedRange](ExpectedRange).
impl From<Range<usize>> for ExpectedRange {
fn from(range: Range<usize>) -> Self {
ExpectedRange::Range {
start: range.start,
end: range.end,
}
}
}
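// Illustrative sketch (not part of the original source): the `From` impl above turns a
// plain Rust `Range<usize>` into the structured `Range` variant.
#[cfg(test)]
mod expected_range_from_range_sketch {
    use super::ExpectedRange;

    #[test]
    fn converts_a_std_range() {
        assert_eq!(
            ExpectedRange::from(0..3),
            ExpectedRange::Range { start: 0, end: 3 }
        );
    }
}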
impl PrettyDebug for ExpectedRange {
fn pretty(&self) -> DebugDocBuilder {
DbgDocBldr::description(match self {
ExpectedRange::I8 => "an 8-bit signed integer",
ExpectedRange::I16 => "a 16-bit signed integer",
ExpectedRange::I32 => "a 32-bit signed integer",
ExpectedRange::I64 => "a 64-bit signed integer",
ExpectedRange::I128 => "a 128-bit signed integer",
ExpectedRange::U8 => "an 8-bit unsigned integer",
ExpectedRange::U16 => "a 16-bit unsigned integer",
ExpectedRange::U32 => "a 32-bit unsigned integer",
ExpectedRange::U64 => "a 64-bit unsigned integer",
ExpectedRange::U128 => "a 128-bit unsigned integer",
ExpectedRange::F32 => "a 32-bit float",
ExpectedRange::F64 => "a 64-bit float",
ExpectedRange::Usize => "a list index",
ExpectedRange::Size => "a list offset",
ExpectedRange::BigDecimal => "a decimal",
ExpectedRange::BigInt => "an integer",
ExpectedRange::Range { start, end } => {
return DbgDocBldr::description(format!("{} to {}", start, end))
}
})
}
}
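// Illustrative sketch (not part of the original source): `RangeError` diagnostics render
// the expected range through `display()`, the string-rendering helper that nu_source's
// `PrettyDebug` trait provides and that `to_diagnostic` above already uses. The exact
// rendered text is not asserted here, only that a description is produced.
#[cfg(test)]
mod expected_range_description_sketch {
    use super::ExpectedRange;
    use nu_source::PrettyDebug;

    #[test]
    fn produces_a_description() {
        let rendered = format!("{}", ExpectedRange::U8.display());
        assert!(!rendered.is_empty());
    }
}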
#[derive(Debug, Eq, PartialEq, Clone, Ord, PartialOrd, Serialize, Deserialize, Hash)]
pub enum ProximateShellError {
SyntaxError {
problem: Spanned<String>,
},
UnexpectedEof {
expected: String,
span: Span,
},
TypeError {
expected: String,
actual: Spanned<Option<String>>,
},
MissingProperty {
subpath: Spanned<String>,
expr: Spanned<String>,
},
InvalidIntegerIndex {
subpath: Spanned<String>,
integer: Span,
},
MissingValue {
span: Option<Span>,
reason: String,
},
ArgumentError {
command: Spanned<String>,
error: ArgumentError,
},
RangeError {
kind: ExpectedRange,
actual_kind: Spanned<String>,
operation: String,
},
Diagnostic(ShellDiagnostic),
CoerceError {
left: Spanned<String>,
right: Spanned<String>,
},
UntaggedRuntimeError {
reason: String,
},
Unimplemented {
reason: String,
},
ExternalPlaceholderError,
}
impl ProximateShellError {
fn start(self) -> ShellError {
ShellError {
cause: None,
error: self,
}
}
}
impl HasFallibleSpan for ShellError {
fn maybe_span(&self) -> Option<Span> {
self.error.maybe_span()
}
}
impl HasFallibleSpan for ProximateShellError {
fn maybe_span(&self) -> Option<Span> {
Some(match self {
ProximateShellError::SyntaxError { problem } => problem.span,
ProximateShellError::UnexpectedEof { span, .. } => *span,
ProximateShellError::TypeError { actual, .. } => actual.span,
ProximateShellError::MissingProperty { subpath, .. } => subpath.span,
ProximateShellError::InvalidIntegerIndex { subpath, .. } => subpath.span,
ProximateShellError::MissingValue { span, .. } => return *span,
ProximateShellError::ArgumentError { command, .. } => command.span,
ProximateShellError::RangeError { actual_kind, .. } => actual_kind.span,
ProximateShellError::Diagnostic(_) => return None,
ProximateShellError::CoerceError { left, right } => left.span.until(right.span),
ProximateShellError::UntaggedRuntimeError { .. } => return None,
ProximateShellError::Unimplemented { .. } => return None,
ProximateShellError::ExternalPlaceholderError => return None,
})
}
}
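// Illustrative sketch (not part of the original source): errors that carry a span report
// it through `HasFallibleSpan`. The error text and span here are made up for the example.
#[cfg(test)]
mod maybe_span_sketch {
    use super::ProximateShellError;
    use nu_source::{HasFallibleSpan, Span, SpannedItem};

    #[test]
    fn syntax_errors_report_their_span() {
        let err = ProximateShellError::SyntaxError {
            problem: "unexpected token".to_string().spanned(Span::unknown()),
        }
        .start();

        assert_eq!(err.maybe_span(), Some(Span::unknown()));
    }
}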
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct ShellDiagnostic {
pub diagnostic: Diagnostic<usize>,
}
impl std::hash::Hash for ShellDiagnostic {
fn hash<H: std::hash::Hasher>(&self, state: &mut H) {
self.diagnostic.severity.hash(state);
self.diagnostic.code.hash(state);
self.diagnostic.message.hash(state);
for label in &self.diagnostic.labels {
label.range.hash(state);
label.message.hash(state);
match label.style {
codespan_reporting::diagnostic::LabelStyle::Primary => 0.hash(state),
codespan_reporting::diagnostic::LabelStyle::Secondary => 1.hash(state),
}
}
}
}
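// The comparison impls below are trivial by construction: two `ShellDiagnostic` values
// never compare equal, and ordering always yields `Less`. This satisfies the trait
// bounds required by the surrounding error types without implying a meaningful ordering
// of diagnostics.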
impl PartialEq for ShellDiagnostic {
fn eq(&self, _other: &ShellDiagnostic) -> bool {
false
}
}
impl Eq for ShellDiagnostic {}
impl std::cmp::PartialOrd for ShellDiagnostic {
fn partial_cmp(&self, _other: &Self) -> Option<std::cmp::Ordering> {
Some(std::cmp::Ordering::Less)
}
}
impl std::cmp::Ord for ShellDiagnostic {
fn cmp(&self, _other: &Self) -> std::cmp::Ordering {
std::cmp::Ordering::Less
}
}
#[derive(Debug, Ord, PartialOrd, Eq, PartialEq, new, Clone, Serialize, Deserialize)]
pub struct StringError {
title: String,
error: String,
}
impl std::error::Error for ShellError {}
impl std::convert::From<Box<dyn std::error::Error>> for ShellError {
fn from(input: Box<dyn std::error::Error>) -> ShellError {
ShellError::untagged_runtime_error(format!("{}", input))
}
}
impl std::convert::From<std::io::Error> for ShellError {
fn from(input: std::io::Error) -> ShellError {
ShellError::untagged_runtime_error(format!("{}", input))
}
}
impl std::convert::From<std::string::FromUtf8Error> for ShellError {
fn from(input: std::string::FromUtf8Error) -> ShellError {
ShellError::untagged_runtime_error(format!("{}", input))
}
}
impl std::convert::From<std::str::Utf8Error> for ShellError {
fn from(input: std::str::Utf8Error) -> ShellError {
ShellError::untagged_runtime_error(format!("{}", input))
}
}
impl std::convert::From<serde_yaml::Error> for ShellError {
fn from(input: serde_yaml::Error) -> ShellError {
ShellError::untagged_runtime_error(format!("{:?}", input))
}
}
impl std::convert::From<toml::ser::Error> for ShellError {
fn from(input: toml::ser::Error) -> ShellError {
ShellError::untagged_runtime_error(format!("{:?}", input))
}
}
impl std::convert::From<serde_json::Error> for ShellError {
fn from(input: serde_json::Error) -> ShellError {
ShellError::untagged_runtime_error(format!("{:?}", input))
}
}
impl std::convert::From<Box<dyn std::error::Error + Send + Sync>> for ShellError {
fn from(input: Box<dyn std::error::Error + Send + Sync>) -> ShellError {
ShellError::untagged_runtime_error(format!("{:?}", input))
}
}
impl std::convert::From<glob::PatternError> for ShellError {
fn from(input: glob::PatternError) -> ShellError {
ShellError::untagged_runtime_error(format!("{:?}", input))
}
}
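// Illustrative sketch (not part of the original source): the `From` conversions above let
// functions returning `Result<_, ShellError>` use `?` directly on standard-library and
// third-party errors. The file path here is made up for the example.
#[cfg(test)]
mod from_conversion_sketch {
    use super::ShellError;

    fn read_config(path: &str) -> Result<String, ShellError> {
        // `std::io::Error` converts automatically through the `From` impl above.
        Ok(std::fs::read_to_string(path)?)
    }

    #[test]
    fn io_errors_become_shell_errors() {
        assert!(read_config("this/path/should/not/exist").is_err());
    }
}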
pub trait CoerceInto<U> {
fn coerce_into(self, operation: impl Into<String>) -> Result<U, ShellError>;
}
trait ToExpectedRange {
fn to_expected_range() -> ExpectedRange;
}
macro_rules! ranged_int {
($ty:tt -> $op:tt -> $variant:tt) => {
impl ToExpectedRange for $ty {
fn to_expected_range() -> ExpectedRange {
ExpectedRange::$variant
}
}
impl CoerceInto<$ty> for nu_source::Tagged<BigInt> {
fn coerce_into(self, operation: impl Into<String>) -> Result<$ty, ShellError> {
self.$op().ok_or_else(|| {
ShellError::range_error(
$ty::to_expected_range(),
&self.item.spanned(self.tag.span),
operation.into(),
)
})
}
}
impl CoerceInto<$ty> for nu_source::Tagged<&BigInt> {
fn coerce_into(self, operation: impl Into<String>) -> Result<$ty, ShellError> {
self.$op().ok_or_else(|| {
ShellError::range_error(
$ty::to_expected_range(),
&self.item.spanned(self.tag.span),
operation.into(),
)
})
}
}
};
}
ranged_int!(u8 -> to_u8 -> U8);
ranged_int!(u16 -> to_u16 -> U16);
ranged_int!(u32 -> to_u32 -> U32);
ranged_int!(u64 -> to_u64 -> U64);
ranged_int!(i8 -> to_i8 -> I8);
ranged_int!(i16 -> to_i16 -> I16);
ranged_int!(i32 -> to_i32 -> I32);
ranged_int!(i64 -> to_i64 -> I64);
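// Illustrative sketch (not part of the original source): the invocations above generate
// `CoerceInto` impls for `Tagged<BigInt>`, so a tagged big integer can be narrowed to a
// primitive, with a `RangeError` describing the failure when the value does not fit.
// `tagged_unknown()` is assumed to come from nu_source's `TaggedItem` helper trait.
#[cfg(test)]
mod coerce_bigint_sketch {
    use super::CoerceInto;
    use nu_source::TaggedItem;
    use num_bigint::BigInt;

    #[test]
    fn narrows_values_that_fit() {
        let value = BigInt::from(200).tagged_unknown();
        let byte: Result<u8, _> = value.coerce_into("converting to u8");
        assert!(matches!(byte, Ok(200)));
    }

    #[test]
    fn rejects_values_that_do_not_fit() {
        let value = BigInt::from(400).tagged_unknown();
        let byte: Result<u8, _> = value.coerce_into("converting to u8");
        assert!(byte.is_err());
    }
}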
macro_rules! ranged_decimal {
($ty:tt -> $op:tt -> $variant:tt) => {
impl ToExpectedRange for $ty {
fn to_expected_range() -> ExpectedRange {
ExpectedRange::$variant
}
}
impl CoerceInto<$ty> for nu_source::Tagged<BigDecimal> {
fn coerce_into(self, operation: impl Into<String>) -> Result<$ty, ShellError> {
self.$op().ok_or_else(|| {
ShellError::range_error(
$ty::to_expected_range(),
&self.item.spanned(self.tag.span),
operation.into(),
)
})
}
}
impl CoerceInto<$ty> for nu_source::Tagged<&BigDecimal> {
fn coerce_into(self, operation: impl Into<String>) -> Result<$ty, ShellError> {
self.$op().ok_or_else(|| {
ShellError::range_error(
$ty::to_expected_range(),
&self.item.spanned(self.tag.span),
operation.into(),
)
})
}
}
};
}
ranged_decimal!(f32 -> to_f32 -> F32);
ranged_decimal!(f64 -> to_f64 -> F64);
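// Illustrative sketch (not part of the original source): the decimal variant works the
// same way, narrowing a `Tagged<BigDecimal>` to a primitive float through `coerce_into`.
// As above, `tagged_unknown()` is assumed to come from nu_source's `TaggedItem` trait.
#[cfg(test)]
mod coerce_bigdecimal_sketch {
    use super::CoerceInto;
    use bigdecimal::BigDecimal;
    use nu_source::TaggedItem;

    #[test]
    fn narrows_a_decimal_to_f64() {
        let value = BigDecimal::from(42u64).tagged_unknown();
        let float: Result<f64, _> = value.coerce_into("converting to f64");
        assert!(float.is_ok());
    }
}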