nushell/TODO.md

This pattern is extremely repetitive and can be abstracted:
```rs
let args = args.evaluate_once(registry)?;
let tag = args.name_tag();
let input = args.input;

let stream = async_stream! {
    let values: Vec<Value> = input.values.collect().await;

    // Concatenate every string value from the pipeline, remembering the most
    // recent tag so errors can point back at the offending value.
    let mut concat_string = String::new();
    let mut latest_tag: Option<Tag> = None;

    for value in values {
        latest_tag = Some(value.tag.clone());
        let value_span = value.tag.span;

        match &value.value {
            UntaggedValue::Primitive(Primitive::String(s)) => {
                concat_string.push_str(&s);
                concat_string.push_str("\n");
            }
            _ => yield Err(ShellError::labeled_error_with_secondary(
                "Expected a string from pipeline",
                "requires string input",
                tag.span,
                "value originates from here",
                value_span,
            )),
        }
    }
};
```
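
One possible shape for that abstraction, as a minimal sketch: a helper that folds the collected values into a single string. The helper name `collect_string` and its exact signature are hypothetical; the types (`Value`, `UntaggedValue`, `Primitive`, `Tag`, `Span`, `ShellError`) and the crate paths in the imports are the ones the snippet above already relies on.

```rs
// Minimal sketch only, not an existing nushell API. The helper name
// `collect_string` is hypothetical; crate paths assume the current layout.
use nu_errors::ShellError;
use nu_protocol::{Primitive, UntaggedValue, Value};
use nu_source::{Span, Tag};

fn collect_string(
    values: Vec<Value>,
    name_span: Span,
) -> Result<(String, Option<Tag>), ShellError> {
    let mut concat_string = String::new();
    let mut latest_tag: Option<Tag> = None;

    for value in values {
        latest_tag = Some(value.tag.clone());
        let value_span = value.tag.span;

        match &value.value {
            UntaggedValue::Primitive(Primitive::String(s)) => {
                concat_string.push_str(s);
                concat_string.push('\n');
            }
            // Any non-string value becomes an error that points at both the
            // command name and the offending value.
            _ => {
                return Err(ShellError::labeled_error_with_secondary(
                    "Expected a string from pipeline",
                    "requires string input",
                    name_span,
                    "value originates from here",
                    value_span,
                ))
            }
        }
    }

    Ok((concat_string, latest_tag))
}
```

A command body would then only collect the stream into `values` and call the helper, yielding the error it returns instead of repeating the match in every command.
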
- Mandatory and Optional in `parse_command`
- `trace_remaining`?
- `select_fields` and `select_fields` take an unnecessary `Tag`
- `Value#value` should be `Value#untagged`
- Unify dictionary building, probably around a macro (see the sketch after this list)
- sys plugin in own crate
- textview in own crate
- Combine `atomic` and `atomic_parse` in the parser
- `at_end_possible_ws` needs to be comment- and separator-sensitive
- Eliminate the unnecessary `nodes` parser
- `#[derive(HasSpan)]`
- Figure out a solution for the duplication in stuff like `NumberShape` vs. `NumberExpressionShape`
- Use `struct Expander` from `signature.rs`
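
For the dictionary-building item above, one possible direction is a small `macro_rules!` wrapper around the existing `TaggedDictBuilder`. This is only a sketch under assumptions: the macro name `dict!` and its syntax are hypothetical, while the builder calls (`new`, `insert_untagged`, `into_value`) and crate paths mirror the current API.

```rs
// Hypothetical sketch of a unified dictionary-building macro; the macro name
// and shape are made up, but the builder calls mirror the existing API.
use nu_protocol::{TaggedDictBuilder, UntaggedValue, Value};
use nu_source::Tag;

macro_rules! dict {
    ($tag:expr, { $($key:expr => $value:expr),* $(,)? }) => {{
        let mut builder = TaggedDictBuilder::new($tag);
        $(builder.insert_untagged($key, $value);)*
        builder.into_value()
    }};
}

// Usage sketch: build a row with a couple of columns in one expression.
fn example_row(tag: Tag) -> Value {
    dict!(tag, {
        "name" => UntaggedValue::string("nu"),
        "jobs" => UntaggedValue::int(12),
    })
}
```
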