This pattern is extremely repetitive and can be abstracted:

```rs
let args = args.evaluate_once(registry)?;
let tag = args.name_tag();
let name_span = tag.span;
let input = args.input;

let stream = async_stream! {
    // Drain the pipeline so every value can be inspected.
    let values: Vec<Value> = input.values.collect().await;

    let mut concat_string = String::new();
    let mut latest_tag: Option<Tag> = None;

    for value in values {
        let value_tag = value.tag.clone();
        latest_tag = Some(value_tag.clone());
        let value_span = value.tag.span;

        // Only string values can be concatenated; anything else is an error.
        match &value.value {
            UntaggedValue::Primitive(Primitive::String(s)) => {
                concat_string.push_str(&s);
                concat_string.push_str("\n");
            }
            _ => yield Err(ShellError::labeled_error_with_secondary(
                "Expected a string from pipeline",
                "requires string input",
                name_span,
                "value originates from here",
                value_span,
            )),
        }
    }
};
```
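
One possible abstraction is a helper that drains the pipeline and either returns the concatenated string (plus the tag of the last value) or the labeled error, so each command body shrinks to a single call. The sketch below is only illustrative: `collect_string_values` is a made-up name, and the imports assume the nu-errors / nu-protocol / nu-source crate layout rather than a finished design.

```rs
use nu_errors::ShellError;
use nu_protocol::{Primitive, UntaggedValue, Value};
use nu_source::{Span, Tag};

/// Hypothetical helper: concatenate every string value from the pipeline,
/// or point at the first value that is not a string.
fn collect_string_values(
    values: Vec<Value>,
    name_span: Span,
) -> Result<(String, Option<Tag>), ShellError> {
    let mut concat_string = String::new();
    let mut latest_tag: Option<Tag> = None;

    for value in values {
        latest_tag = Some(value.tag.clone());

        match &value.value {
            UntaggedValue::Primitive(Primitive::String(s)) => {
                concat_string.push_str(s);
                concat_string.push('\n');
            }
            _ => {
                return Err(ShellError::labeled_error_with_secondary(
                    "Expected a string from pipeline",
                    "requires string input",
                    name_span,
                    "value originates from here",
                    value.tag.span,
                ))
            }
        }
    }

    Ok((concat_string, latest_tag))
}
```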
Mandatory and Optional in parse_command

trace_remaining?

select_fields and reject_fields take an unnecessary Tag

Value#value should be Value#untagged

Unify dictionary building, probably around a macro
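
For the dictionary-building item above, one rough direction is a `macro_rules!` wrapper over the existing builder. This is a sketch only: the `dict!` name is invented here, and it assumes `TaggedDictBuilder` exposes `new`, `insert_untagged`, and `into_value`.

```rs
use nu_protocol::{TaggedDictBuilder, UntaggedValue};

// Hypothetical dict! macro over the existing TaggedDictBuilder.
macro_rules! dict {
    ($tag:expr, { $($key:expr => $value:expr),* $(,)? }) => {{
        let mut builder = TaggedDictBuilder::new($tag);
        $(
            builder.insert_untagged($key, $value);
        )*
        builder.into_value()
    }};
}

// A command that currently spells out each builder call could then write:
//
//     let row = dict!(tag.clone(), {
//         "name" => UntaggedValue::string(file_name),
//         "type" => UntaggedValue::string("File"),
//     });
```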
sys plugin in own crate

textview in own crate

Combine atomic and atomic_parse in parser

at_end_possible_ws needs to be comment and separator sensitive

Eliminate unnecessary `nodes` parser

#[derive(HasSpan)]
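
The `#[derive(HasSpan)]` note presumably means generating the hand-written `HasSpan` impls with a derive macro. Assuming the trait has the usual `fn span(&self) -> Span` shape from nu-source, the boilerplate being targeted looks roughly like this (the struct is made up for illustration):

```rs
use nu_source::{HasSpan, Span};

// Made-up example node; only the impl below matters. A #[derive(HasSpan)]
// proc macro would generate that impl from the `span` field automatically.
#[derive(Debug)]
struct ExampleNode {
    kind: u32,
    span: Span,
}

impl HasSpan for ExampleNode {
    fn span(&self) -> Span {
        self.span
    }
}
```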
Figure out a solution for the duplication in stuff like NumberShape vs. NumberExpressionShape

use `struct Expander` from signature.rs