* Added attributes to from-xml command
* Added attributes as their own rows
* Removed unnecessary lifetime declarations
* from-xml now has children and attributes side by side
* Fixed tests and linting
* Fixed lint-problem
* Switch to using `shell`
Switch to using the shell for subprocesses, to enable more natural shelling out.
* Update external.rs
* This is a test with .shell() for external
* El pollo loco's PR
* co co co
* Attempt to fix windows
* Fmt
* Less is more?
Co-authored-by: Andrés N. Robalino <andres@androbtech.com>
Restructure and streamline token expansion
The purpose of this commit is to streamline the token expansion code, by
removing aspects of the code that are no longer relevant, removing
pointless duplication, and eliminating the need to pass the same
arguments to `expand_syntax`.
The first big-picture change in this commit is that instead of a handful
of `expand_` functions, which take a TokensIterator and ExpandContext, a
smaller number of methods on the `TokensIterator` do the same job.
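As a rough sketch (types reduced to stubs, names approximate, not the actual implementation), the change moves the entry points onto the iterator so the same context no longer has to be threaded through every call:

```rust
// Sketch only: stub types standing in for the real parser structures.
struct ExpandContext;
struct Token;

struct TokensIterator<'content> {
    tokens: &'content [Token],
    position: usize,
    context: &'content ExpandContext, // carried once, not passed per call
}

trait ExpandSyntax {
    type Output;
    fn expand(&self, iterator: &mut TokensIterator<'_>) -> Self::Output;
}

impl<'content> TokensIterator<'content> {
    // Before: expand_syntax(&shape, &mut iterator, &context)
    // After:  iterator.expand_syntax(shape)
    fn expand_syntax<T: ExpandSyntax>(&mut self, shape: T) -> T::Output {
        shape.expand(self)
    }
}
```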
The second big-picture change in this commit is fully eliminating the
coloring traits, making coloring a responsibility of the base expansion
implementations. This also means that the coloring tracer is merged into
the expansion tracer, so you can follow a single expansion and see how
the expansion process produced colored tokens.
One side effect of this change is that the expander itself is marginally
more error-correcting. The error correction works by switching from
structured expansion to `BackoffColoringMode` when an unexpected token
is found, which guarantees that all spans of the source are colored, but
may not be the most optimal error recovery strategy.
That said, because `BackoffColoringMode` only extends as far as a
closing delimiter (`)`, `]`, `}`) or pipe (`|`), it results in a
fairly granular correction strategy.
The current code still produces an `Err` (plus a complete list of
colored shapes) from the parsing process if any errors are encountered,
but this could easily be addressed now that the underlying expansion is
error-correcting.
This commit also colors any spans that are syntax errors in red, and
causes the parser to include some additional information about what
tokens were expected at any given point where an error was encountered,
so that completions and hinting could be more robust in the future.
Co-authored-by: Jonathan Turner <jonathandturner@users.noreply.github.com>
Co-authored-by: Andrés N. Robalino <andres@androbtech.com>
Also, this commit makes `ls` a per-item command.
A command that processes things item by item may still take some time to stream
out the results from a single item. For example, `ls` on a directory with a lot
of files could be interrupted in the middle of showing all of these files.
This commit changes the way we shell out externals when using the `"$it"` argument. It also pipes each row to an external's stdin if no `"$it"` argument is present for external commands.
Further separation of logic (preparing the external's command arguments, getting the data for piping, emitting values, spawning processes) will give us a better handle on the lower-level details of external commands, until we can find the right abstractions for making them more generic and unifying them with the pipeline-calling logic of Nu's internals and externals.
* Put a sample_data.ods file for testing
This is a copy of the sample_data.xlsx file but in ods format
* Add the from-ods command
Most of the work was running `rg xlsx` and then copying/pasting with light editing
* Add tests for the from-ods command
* Fix failing test
The problem was improper filename sorting in the test `prepares_and_decorates_filesystem_source_files`
* Clippy fixes
* Finish converting to use clippy
* fix warnings in new master
* fix windows
* fix windows
Co-authored-by: Artem Vorotnikov <artem@vorotnikov.me>
* start playing with ways to use the uniq command
* WIP
* Got uniq working, but still need to figure out args issue and add tests
* Add some tests for uniq
* fmt
* remove commented out code
* Add documentation and some additional tests showing uniq values and rows. Also removed args TODO
* add changes that didn't get committed
* whoops, I didn't save the docs correctly...
* fmt
* Add a test for uniq with nested json
* Add another test
* Fix unique-ness when json keys are out of order and make the test json more complicated
Add tests for `~` (tilde) expansion:
- test that "~" is expanded (no more "~" in output)
- ensure that "1~1" is not expanded to "1/home/user1" as it was
before
Fixes #972
Note: the first test does not check the literal expansion because
the path on Windows is expanded as a Linux path, but the correct
expansion may come for free once `shellexpand` will use the `dirs`
crate too (https://github.com/netvl/shellexpand/issues/3).
This commit contains two improvements:
- Support for a Range syntax (and a corresponding Range value)
- Work towards a signature syntax
Implementing the Range syntax resulted in cleaning up how operators in
the core syntax work. There are now two kinds of infix operators:
- tight operators (`.` and `..`)
- loose operators
Tight operators may not be interspersed (`$it.left..$it.right` is a
syntax error). Loose operators require whitespace on both sides of the
operator, and can be arbitrarily interspersed. Precedence is left to
right in the core syntax (so `1 + 2 * 3` groups as `(1 + 2) * 3`).
Note that delimited syntax (like `( ... )` or `[ ... ]`) is a single
token node in the core syntax. A single token node can be parsed from
beginning to end in a context-free manner.
The rule for `.` is `<token node>.<member>`. The rule for `..` is
`<token node>..<token node>`.
Loose operators all have the same syntactic rule: `<token
node><space><loose op><space><token node>`.
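To make the rules concrete, here is a toy classification (invented names; the real parser does far more than this, so treat it purely as an illustration of the two kinds):

```rust
// Toy model of the two operator kinds described above.
#[derive(Debug, PartialEq)]
enum InfixKind {
    Tight, // `.` and `..`: bind directly to adjacent token nodes
    Loose, // everything else: whitespace required on both sides
}

fn infix_kind(op: &str) -> InfixKind {
    match op {
        "." | ".." => InfixKind::Tight,
        _ => InfixKind::Loose,
    }
}

fn main() {
    // "$it.cpu"             tight `.`  : <token node>.<member>
    // "1..10"               tight `..` : <token node>..<token node>
    // "$it.left..$it.right" error: tight operators may not be interspersed
    // "cpu > 10"            loose `>`  : <token node> <op> <token node>
    assert_eq!(infix_kind(".."), InfixKind::Tight);
    assert_eq!(infix_kind(">"), InfixKind::Loose);
}
```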
The second aspect of this pull request is the beginning of support for a
signature syntax. Before implementing signatures, a necessary
prerequisite is for the core syntax to support multi-line programs.
That work establishes a few things:
- `;` and newlines are handled in the core grammar, and both count as
"separators"
- line comments begin with `#` and continue until the end of the line
In this commit, multi-token productions in the core grammar can use
separators interchangeably with spaces. However, I think we will
ultimately want a different rule preventing separators from occurring
before an infix operator, so that the end of a line is always
unambiguous. This would avoid gratuitous differences between modules and
repl usage.
We already effectively have this rule, because otherwise `x<newline> |
y` would be a single pipeline, but of course that wouldn't work.
`left =~ right` returns true if left contains right, using Rust's
`String::contains`. `!~` is the negated version.
A new `apply_operator` function is added which decouples evaluation from
`Value::compare`. This returns a `Value` and opens the door to
implementing `+` for example, though it wouldn't be useful immediately.
The `operator!` macro had to be changed slightly as it would choke on
`~` in arguments.
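A minimal sketch of the idea (with `Value` and `Operator` pared down to just what is needed here, so names and shapes are assumptions): `apply_operator` returns a `Value` rather than a bare comparison result, which is what leaves the door open for operators like `+`:

```rust
// Sketch: evaluation produces a Value, not only a boolean comparison.
#[derive(Debug)]
enum Value {
    String(String),
    Boolean(bool),
}

enum Operator {
    Contains,    // =~
    NotContains, // !~
}

fn apply_operator(op: &Operator, left: &Value, right: &Value) -> Option<Value> {
    match (op, left, right) {
        (Operator::Contains, Value::String(l), Value::String(r)) => {
            Some(Value::Boolean(l.contains(r.as_str())))
        }
        (Operator::NotContains, Value::String(l), Value::String(r)) => {
            Some(Value::Boolean(!l.contains(r.as_str())))
        }
        _ => None, // type mismatch: left to the caller to report
    }
}

fn main() {
    let l = Value::String("nushell".into());
    let r = Value::String("shell".into());
    println!("{:?}", apply_operator(&Operator::Contains, &l, &r)); // Boolean(true)
}
```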
After the previous commit, nushell uses PrettyDebug and
PrettyDebugWithSource for our pretty-printed display output.
PrettyDebug produces a structured `pretty.rs` document rather than
writing directly into a fmt::Formatter, and types that implement
`PrettyDebug` have a convenience `display` method that produces a string
(to be used in situations where `Display` is needed for compatibility
with other traits, or where simple rendering is appropriate).
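In sketch form (the document type here is heavily simplified; the real one lives in `pretty.rs`), the shape is roughly:

```rust
// Sketch: a stand-in for the structured document built by `pretty.rs`.
enum DebugDoc {
    Text(String),
    Group(Vec<DebugDoc>),
}

trait PrettyDebug {
    fn pretty(&self) -> DebugDoc;

    // Convenience rendering for contexts that need a plain string.
    fn display(&self) -> String {
        fn render(doc: &DebugDoc, out: &mut String) {
            match doc {
                DebugDoc::Text(t) => out.push_str(t),
                DebugDoc::Group(children) => {
                    for c in children {
                        render(c, out);
                    }
                }
            }
        }
        let mut out = String::new();
        render(&self.pretty(), &mut out);
        out
    }
}
```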
Fixes #969, admittedly without a `--delimiter` alias.
Moves from_structured_data.rs to from_delimited_data.rs to better
identify its scope, and adds to_delimited_data.rs. Now csv and tsv both
use the same code; tsv passes in a fixed `'\t'` argument where csv passes
in the value of `--separator`.
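The sharing works along these lines (signatures simplified for illustration; the real code goes through Nu values rather than raw strings):

```rust
// Both csv and tsv funnel into one delimited-text helper.
fn from_delimited_string(contents: &str, delimiter: char) -> Vec<Vec<String>> {
    contents
        .lines()
        .map(|line| line.split(delimiter).map(str::to_string).collect())
        .collect()
}

fn from_csv(contents: &str, separator: char) -> Vec<Vec<String>> {
    from_delimited_string(contents, separator) // value of --separator
}

fn from_tsv(contents: &str) -> Vec<Vec<String>> {
    from_delimited_string(contents, '\t') // fixed tab delimiter
}

fn main() {
    assert_eq!(from_tsv("a\tb"), vec![vec!["a".to_string(), "b".to_string()]]);
}
```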
With the proposed changes, these tests now become invalid. If the first line is
to be counted as data, then converting the headers to ints will fail. Removing
the headers and instead treating the first line as data, however, reflects the
new, desired mode of operation.
Nu loads data into tables and is able to work with them for data
processing & viewing purposes. As one of the ways to process said
tables, we are able to view a histogram of a given column.
As usage matures, we may find certain core commands that could
be used ergonomically when working with tables in Nu.
Working with tables in Nu is a joy. Fundamentally we embrace functional
programming principles for transforming the dataset from any format
picked up by Nu. These table-processing "primitive" commands will build
up and make pipelines composable with data processing capabilities,
allowing us to evaluate, reduce, and map the tables, as far as even
composing them declaratively.
In this regard, `split-by` expects some table with grouped data and we
can use it further in interesting ways (e.g. collecting labels for
visualizing the data in charts and/or suiting it for a particular chart
of our interest).
This commit should finish the `coloring_in_tokens` feature, which moves
the shape accumulator into the token stream. This allows rollbacks of
the token stream to also roll back any shapes that were added.
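A toy model of the accumulator (field names invented for illustration) shows why rollback now covers shapes too:

```rust
// Sketch: a checkpoint records both the token position and how many shapes
// had been emitted, so rolling back also discards shapes added afterwards.
struct TokensIterator {
    position: usize,
    shapes: Vec<String>, // the shape accumulator now lives in the stream
}

struct Checkpoint {
    position: usize,
    shapes_len: usize,
}

impl TokensIterator {
    fn checkpoint(&self) -> Checkpoint {
        Checkpoint {
            position: self.position,
            shapes_len: self.shapes.len(),
        }
    }

    fn rollback(&mut self, cp: Checkpoint) {
        self.position = cp.position;
        self.shapes.truncate(cp.shapes_len); // shapes roll back with the tokens
    }
}
```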
This commit also adds a much nicer syntax highlighter trace, which shows
all of the paths the highlighter took to arrive at a particular coloring
output. This change is fairly substantial, but really improves the
understandability of the flow. I intend to update the normal parser with
a similar tracing view.
In general, this change also fleshes out the concept of "atomic" token
stream operations.
A good next step would be to try to make the parser more
error-correcting, using the coloring infrastructure. A follow-up step
would involve merging the parser and highlighter shapes themselves.
Previously, we would build a command that looked something like this:
<ex_cmd> "$it" "&&" "<ex_cmd>" "$it"
So that the "&&" and "<ex_cmd>" would also be arguments to the command,
instead of a chained command. This commit builds up a command string
that can be passed to an external shell.
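Schematically (the helper below is invented for illustration), the fix builds one string and lets the shell parse the chaining:

```rust
// Sketch: instead of handing "&&" and the second command to the external as
// arguments, interpolate $it per row and join into one shell command line.
fn build_command_line(template: &str, items: &[&str]) -> String {
    items
        .iter()
        .map(|&it| template.replace("$it", it))
        .collect::<Vec<_>>()
        .join(" && ")
}

fn main() {
    let cmd = build_command_line("echo $it", &["a", "b"]);
    assert_eq!(cmd, "echo a && echo b");
    // The result can then be handed to a shell, e.g.:
    // std::process::Command::new("sh").arg("-c").arg(&cmd)
}
```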
* Moves off of draining between filters. Instead, the sink will pull on the stream, and will drain element-wise. This moves the whole stream to being lazy.
* Adds ctrl-c support and connects it into some of the key points where we pull on the stream. If a ctrl-c is detected, we immediately halt pulling on the stream and return to the prompt (see the sketch after this list).
* Moves away from having a SourceMap where anchor locations are stored. Now AnchorLocation is kept directly in the Tag.
* To make this possible, split tag and span. Span is largely used in the parser and is copyable. Tag is now no longer copyable.
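The interrupt pattern at those pull points looks roughly like this (simplified; the real code threads the flag through the stream machinery):

```rust
use std::sync::atomic::{AtomicBool, Ordering};
use std::sync::Arc;

// Sketch: each point that pulls on the stream checks a shared flag and
// stops early once ctrl-c has been seen.
fn drain_stream(stream: impl Iterator<Item = String>, ctrl_c: Arc<AtomicBool>) {
    for item in stream {
        if ctrl_c.load(Ordering::SeqCst) {
            return; // halt pulling and return to the prompt
        }
        println!("{}", item);
    }
}

fn main() {
    let ctrl_c = Arc::new(AtomicBool::new(false));
    drain_stream(vec!["a".to_string(), "b".to_string()].into_iter(), ctrl_c);
}
```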
The main thrust of this (very large) commit is an overhaul of the
expansion system.
The parsing pipeline is:
- Lightly parse the source file for atoms, basic delimiters and pipeline
structure into a token tree
- Expand the token tree into a HIR (high-level intermediate
representation) based upon the baseline syntax rules for expressions
and the syntactic shape of commands.
Somewhat non-traditionally, nu doesn't have an AST at all. It goes
straight from the token tree, which doesn't represent many important
distinctions (like the difference between `hello` and `5KB`), into a
high-level representation that doesn't have a direct correspondence to
the source code.
At a high level, nu commands work like macros, in the sense that the
syntactic shape of the invocation of a command depends on the
definition of a command.
However, commands do not have the ability to perform unrestricted
expansions of the token tree. Instead, they describe their arguments in
terms of syntactic shapes, and the expander expands the token tree into
HIR based upon that definition.
For example, the `where` command says that it takes a block as its first
required argument, and the description of the block syntactic shape
expands the syntax `cpu > 10` into HIR that represents
`{ $it.cpu > 10 }`.
This commit overhauls that system so that the syntactic shapes are
described in terms of a few new traits (`ExpandSyntax` and
`ExpandExpression` are the primary ones) that are more composable than
the previous system.
The first big win of this new system is the addition of the `ColumnPath`
shape, which looks like `cpu."max ghz"` or `package.version`.
Previously, while a variable path could look like `$it.cpu."max ghz"`,
the tail of a variable path could not be easily reused in other
contexts. Now, that tail is its own syntactic shape, and it can be used
as part of a command's signature.
This cleans up commands like `inc`, `add` and `edit` as well as
shorthand blocks, which can now look like `| where cpu."max ghz" > 10`
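A toy model (the type layout is invented, purely illustrative) of what the `ColumnPath` shape captures for `cpu."max ghz"`:

```rust
// Sketch: a column path is a reusable tail of members, whether it follows a
// variable ($it.cpu."max ghz") or stands alone as an argument (cpu."max ghz").
#[derive(Debug)]
enum Member {
    Bare(String),   // cpu
    Quoted(String), // "max ghz"
}

#[derive(Debug)]
struct ColumnPath {
    members: Vec<Member>,
}

fn main() {
    let path = ColumnPath {
        members: vec![
            Member::Bare("cpu".into()),
            Member::Quoted("max ghz".into()),
        ],
    };
    println!("{:?}", path);
}
```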
You can now use the `help` command to explore and list all commands
available.
Enter will also try to see if the location to be entered is an existing
Nu command; if it is, it will let you inspect the command under `help`.
This provides the baseline needed so we can iterate on it.
The original intent of this patch was to remove more unwraps to reduce
panics. I then lost a ton of time to the fact that the playground isn't
in a temp directory (because of permissions issues on Windows).
This commit improves the test facilities to:
- use a tempdir for the playground
- change the playground API so you instantiate it with a block that
encloses the lifetime of the tempdir
- the block is called with a `dirs` argument that has `dirs.test()` and
other important directories that we were computing by hand all the time
- the block is also called with a `playground` argument that you can use
to construct files (it's the same `Playground` as before)
- change the `nu!` and `nu_error!` macros to produce output instead of
taking a variable binding
- change the `nu!` and `nu_error!` macros to do the `cwd()` transformation
internally
- change the `nu!` and `nu_error!` macros to take varargs at the end that
get interpolated into the running command
I didn't manage to finish porting all of the tests, so a bunch of tests
are currently commented out. That will need to change before we land
this patch.
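Putting the pieces together, a ported test reads roughly like this (a sketch based on the description above; exact helper names like `with_files` and `EmptyFile` are assumptions and may differ in the landed code):

```rust
#[test]
fn ls_lists_files() {
    Playground::setup("ls_lists_files", |dirs, playground| {
        // `playground` constructs files; `dirs.test()` lives in the tempdir.
        playground.with_files(vec![EmptyFile("andres.txt"), EmptyFile("jonathan.txt")]);

        // The macro now does the cwd() transformation internally,
        // interpolates trailing arguments, and produces the output directly.
        let output = nu!(
            cwd: dirs.test(),
            "ls | count | echo $it"
        );

        assert_eq!(output, "2");
    });
}
```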
This ended up being a bit of a yak shave. The basic idea in this commit is to
expand `~` in paths, but only in paths.
The way this is accomplished is by doing the expansion inside of the code that
parses literal syntax for `SyntaxType::Path`.
As a quick refresher: every command is entitled to expand its arguments in a
custom way. While this could in theory be used for general-purpose macros,
today the expansion facility is limited to syntactic hints.
For example, the syntax `where cpu > 0` expands under the hood to
`where { $it.cpu > 0 }`. This happens because the first argument to `where`
is defined as a `SyntaxType::Block`, and the parser coerces binary expressions
whose left-hand-side looks like a member into a block when the command is
expecting one.
This is mildly more magical than what most programming languages would do,
but we believe that it makes sense to allow commands to fine-tune the syntax
because of the domain nushell is in (command-line shells).
The syntactic expansions supported by this facility are relatively limited.
For example, we don't allow `$it` to become a bare word, simply because the
command asks for a string in the relevant position. That would quickly
become more confusing than it's worth.
This PR adds a new `SyntaxType` rule: `SyntaxType::Path`. When a command
declares a parameter as a `SyntaxType::Path`, string literals and bare
words passed as an argument to that parameter are processed using the
path expansion rules. Right now, that only means that `~` is expanded into
the home directory, but additional rules are possible in the future.
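As a sketch, the Path rule's expansion step could be as small as this (using the `shellexpand` crate mentioned in the tilde-expansion notes above; the wiring into the parser is omitted, and the function name is invented):

```rust
// Sketch: only literals bound to a SyntaxType::Path parameter pass through
// this step; `~` is only special at the start, so "1~1" is left untouched.
fn expand_path_literal(literal: &str) -> String {
    shellexpand::tilde(literal).into_owned()
}

fn main() {
    println!("{}", expand_path_literal("~/notes.txt")); // home-relative
    println!("{}", expand_path_literal("1~1")); // unchanged
}
```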
By restricting this expansion to a syntactic expansion when passed as an
argument to a command expecting a path, we avoid making `~` a generally
reserved character. This will also allow us to give good tab completion
for paths with `~` characters in them when a command is expecting a path.
In order to accomplish the above, this commit changes the parsing functions
to take a `Context` instead of just a `CommandRegistry`. From the perspective
of macro expansion, you can think of the `CommandRegistry` as a dictionary
of in-scope macros, and the `Context` as the compile-time state used in
expansion. This could gain additional functionality over time as we find
more uses for the expansion system.