2023-02-27 21:58:56 +00:00
|
|
|
use itertools::Itertools;
|
Debugger experiments (#11441)
<!--
if this PR closes one or more issues, you can automatically link the PR
with
them by using one of the [*linking
keywords*](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue#linking-a-pull-request-to-an-issue-using-a-keyword),
e.g.
- this PR should close #xxxx
- fixes #xxxx
you can also mention related issues, PRs or discussions!
-->
# Description
<!--
Thank you for improving Nushell. Please, check our [contributing
guide](../CONTRIBUTING.md) and talk to the core team before making major
changes.
Description of your pull request goes here. **Provide examples and/or
screenshots** if your changes affect the user experience.
-->
This PR adds a new evaluator path with callbacks to a mutable trait
object implementing a Debugger trait. The trait object can do anything,
e.g., profiling, code coverage, step debugging. Currently,
entering/leaving a block and a pipeline element are marked with
callbacks, but more callbacks can be added as necessary. Not all
callbacks need to be used by all debuggers; unused ones are simply empty
calls. A simple profiler is implemented as a proof of concept.
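For illustration only, here is a minimal sketch of what such a trait with empty default callbacks could look like (the method names and the profiler struct below are assumptions for the sketch, not the actual API in `debugger_trait.rs`):
```rust
// Hypothetical sketch: names and signatures are assumptions, not the real API.
pub trait Debugger: Send {
    // Default implementations are empty, so a debugger only overrides the
    // callbacks it cares about; unused callbacks stay as no-ops.
    fn enter_block(&mut self) {}
    fn leave_block(&mut self) {}
    fn enter_element(&mut self) {}
    fn leave_element(&mut self) {}
}

// A profiler implements the trait and records data only in the callbacks it needs.
pub struct SimpleProfiler {
    depth: usize,
}

impl Debugger for SimpleProfiler {
    fn enter_block(&mut self) {
        self.depth += 1;
    }
    fn leave_block(&mut self) {
        self.depth -= 1;
    }
}
```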
The debugging support is implemented by making the `eval_xxx()` functions
generic over whether we're debugging or not. This has zero
computational overhead, but makes the binary slightly larger (see
benchmarks below). `eval_xxx()` variants called from commands (like
`eval_block_with_early_return()` in `each`) are chosen via dynamic
dispatch for two reasons: to avoid growing the binary size by duplicating
the code of many commands, and because making them generic there would
make the `Command` trait not object-safe.
In the future, I hope it will be possible to allow plugin callbacks so
that users can implement their own profiler plugins instead of
having to recompile Nushell.
[DAP](https://microsoft.github.io/debug-adapter-protocol/) would also be
interesting to explore.
Try `help debug profile`.
## Screenshots
Basic output:
![profiler_new](https://github.com/nushell/nushell/assets/25571562/418b9df0-b659-4dcb-b023-2d5fcef2c865)
To profile with more granularity, increase the profiler depth (you'll
see that repeated `is-windows` calls take a large chunk of total time,
making it a good candidate for optimizing):
![profiler_new_m3](https://github.com/nushell/nushell/assets/25571562/636d756d-5d56-460c-a372-14716f65f37f)
## Benchmarks
### Binary size
Binary size increase vs. main: **+40360 bytes**. _(Both built with
`--release --features=extra,dataframe`.)_
### Time
```nushell
# bench_debug.nu
use std bench
let test = {
    1..100
    | each {
        ls | each {|row| $row.name | str length }
    }
    | flatten
    | math avg
}
print 'debug:'
let res2 = bench { debug profile $test } --pretty
print $res2
```
```nushell
# bench_nodebug.nu
use std bench
let test = {
    1..100
    | each {
        ls | each {|row| $row.name | str length }
    }
    | flatten
    | math avg
}
print 'no debug:'
let res1 = bench { do $test } --pretty
print $res1
```
`cargo run --release -- bench_debug.nu` is consistently 1-2 ms slower
than `cargo run --release -- bench_nodebug.nu` due to the collection
overhead and gathering the report. This is expected; when gathering more
data, the overhead grows accordingly.
Comparing `cargo run --release -- bench_nodebug.nu` with `nu bench_nodebug.nu`, I
didn't measure any difference. Both benchmarks report times between 97
and 103 ms, without one being consistently higher than the
other. This suggests that, at least in this particular case, when not
running any debugger, there is no runtime overhead.
## API changes
This PR adds a generic parameter to all `eval_xxx` functions that forces
you to specify whether you use the debugger. You can resolve it in two
ways (see the sketch after this list):
* Use a provided helper that will figure it out for you. If you wanted
to use `eval_block(&engine_state, ...)`, call `let eval_block =
get_eval_block(&engine_state); eval_block(&engine_state, ...)`
* If you know you're in an evaluation path that doesn't need debugger
support, call `eval_block::<WithoutDebug>(&engine_state, ...)` (this is
the case of hooks, for example).
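The pattern can be illustrated with a self-contained sketch. This is not the actual nushell API (the real functions take the engine state, stack, block, and input); it only shows the generic-marker-plus-helper idea under assumed names:
```rust
// Marker trait selecting the evaluator variant at compile time.
trait DebugContext {
    const ACTIVE: bool;
}
struct WithDebug;
struct WithoutDebug;
impl DebugContext for WithDebug {
    const ACTIVE: bool = true;
}
impl DebugContext for WithoutDebug {
    const ACTIVE: bool = false;
}

// Monomorphized per debug context, so the non-debug path has zero overhead.
fn eval_block<D: DebugContext>(input: i64) -> i64 {
    if D::ACTIVE {
        println!("entering block");
    }
    input + 1
}

// Helper that resolves the generic parameter once and hands back a plain
// function pointer, so command code isn't duplicated per variant.
fn get_eval_block(debugging: bool) -> fn(i64) -> i64 {
    if debugging {
        eval_block::<WithDebug>
    } else {
        eval_block::<WithoutDebug>
    }
}

fn main() {
    let eval_block = get_eval_block(false);
    assert_eq!(eval_block(1), 2);
    // In a path known not to need the debugger, instantiate directly:
    assert_eq!(eval_block::<WithoutDebug>(1), 2);
}
```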
I tried to add more explanation in the docstring of `debugger_trait.rs`.
## TODO
- [x] Better profiler output to reduce spam of iterative commands like
`each`
- [x] Resolve `TODO: DEBUG` comments
- [x] Resolve unwraps
- [x] Add doc comments
- [x] Add usage and extra usage for `debug profile`, explaining all
columns
# User-Facing Changes
<!-- List of all changes that impact the user experience here. This
helps us keep track of breaking changes. -->
Hopefully none.
# Tests + Formatting
<!--
Don't forget to add tests that cover your changes.
Make sure you've run and fixed any issues with these commands:
- `cargo fmt --all -- --check` to check standard code formatting (`cargo
fmt --all` applies these changes)
- `cargo clippy --workspace -- -D warnings -D clippy::unwrap_used` to
check that you're using the standard code style
- `cargo test --workspace` to check that all tests pass (on Windows make
sure to [enable developer
mode](https://learn.microsoft.com/en-us/windows/apps/get-started/developer-mode-features-and-debugging))
- `cargo run -- -c "use std testing; testing run-tests --path
crates/nu-std"` to run the tests for the standard library
> **Note**
> from `nushell` you can also use the `toolkit` as follows
> ```bash
> use toolkit.nu # or use an `env_change` hook to activate it automatically
> toolkit check pr
> ```
-->
# After Submitting
<!-- If your PR had any user-facing changes, update [the
documentation](https://github.com/nushell/nushell.github.io) after the
PR is merged, if necessary. This will help us keep the docs up to date.
-->
2024-03-08 18:21:35 +00:00
|
|
|
use nu_protocol::debugger::WithoutDebug;
|
2023-02-27 21:58:56 +00:00
|
|
|
use nu_protocol::{
|
2023-10-23 14:12:11 +00:00
|
|
|
ast::{Block, RangeInclusion},
|
2023-02-27 21:58:56 +00:00
|
|
|
engine::{EngineState, Stack, StateDelta, StateWorkingSet},
|
|
|
|
Example, PipelineData, Signature, Span, Type, Value,
|
|
|
|
};
|
revert: move to ahash (#9464)
This PR reverts https://github.com/nushell/nushell/pull/9391
We try not to revert PRs like this, though after discussion with the
Nushell team, we decided to revert this one.
The main reason is that Nushell, as a codebase, isn't ready for these
kinds of optimisations. It's in the part of the development cycle where
our main focus should be on improving the algorithms inside of Nushell
itself. Once we have matured our algorithms, then we can look for
opportunities to switch out technologies we're using for alternate
forms.
Much of Nushell still has lots of opportunities for tuning the codebase,
paying down technical debt, and making the codebase generally cleaner
and more robust. This should be the focus. Performance improvements
should flow out of that work.
Said another way, optimisation that isn't part of tuning the codebase is
premature at this stage. We need to focus on doing the hard work of
making the engine, parser, etc. better.
# User-Facing Changes
Reverts the HashMap -> ahash change.
cc @FilipAndersson245
2023-06-18 03:27:57 +00:00
|
|
|
use std::collections::HashSet;
|
Make EngineState clone cheaper with Arc on all of the heavy objects (#12229)
# Description
This makes many of the larger objects in `EngineState` into `Arc`, and
uses `Arc::make_mut` to do clone-on-write if the reference is not
unique. This is generally very cheap, giving us the best of both worlds
- allowing us to mutate without cloning if we have an exclusive
reference, and cloning if we don't.
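The clone-on-write behaviour of the standard library's `Arc::make_mut` is easy to see in a small standalone example:
```rust
use std::sync::Arc;

fn main() {
    let mut a = Arc::new(vec![1, 2, 3]);

    // Unique reference: make_mut hands out &mut Vec without cloning.
    Arc::make_mut(&mut a).push(4);

    // Shared reference: make_mut clones the data first, so `b` is unaffected.
    let b = Arc::clone(&a);
    Arc::make_mut(&mut a).push(5);

    assert_eq!(*a, vec![1, 2, 3, 4, 5]);
    assert_eq!(*b, vec![1, 2, 3, 4]);
}
```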
This started as more of a curiosity for me after remembering that
`Arc::make_mut` exists and can make using `Arc` for mostly immutable
data that sometimes needs to be changed very convenient, and also after
hearing someone complain about memory usage on Discord - this is a
somewhat significant win for that.
The exact objects that were wrapped in `Arc`:
- `files`, `file_contents` - the strings and byte buffers
- `decls` - the whole `Vec`, but mostly to avoid lots of individual
`malloc()` calls on Clone rather than for memory usage
- `blocks` - the blocks themselves, rather than the outer Vec
- `modules` - the modules themselves, rather than the outer Vec
- `env_vars`, `previous_env_vars` - the entire maps
- `config`
The changes required were relatively minimal, but this is a breaking API
change. In particular, blocks are added as Arcs, to allow the parser
cache functionality to work.
With my normal nu config, running on Linux, this saves me about 15 MiB
of process memory usage when running interactively (65 MiB → 50 MiB).
This also makes quick command executions cheaper, particularly since
every REPL loop now involves a clone of the engine state so that we can
recover from a panic. It also reduces memory usage where engine state
needs to be cloned and sent to another thread or kept within an
iterator.
# User-Facing Changes
Shouldn't be any, since it's all internal stuff, but it does change some
public interfaces, so it's a breaking change.
2024-03-19 18:07:00 +00:00
|
|
|
use std::sync::Arc;
|
2023-02-27 21:58:56 +00:00
|
|
|
|
|
|
|
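/// Evaluate the example (without its terminal expression) and check that the observed
/// input/output types match the command's declared signature, returning the set of
/// declared (input, output) type pairs that the examples actually exercised.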
pub fn check_example_input_and_output_types_match_command_signature(
|
|
|
|
example: &Example,
|
|
|
|
cwd: &std::path::Path,
|
|
|
|
engine_state: &mut Box<EngineState>,
|
2023-09-12 03:38:20 +00:00
|
|
|
signature_input_output_types: &[(Type, Type)],
|
2023-02-27 21:58:56 +00:00
|
|
|
signature_operates_on_cell_paths: bool,
|
|
|
|
) -> HashSet<(Type, Type)> {
|
|
|
|
let mut witnessed_type_transformations = HashSet::<(Type, Type)>::new();
|
|
|
|
|
|
|
|
// Skip tests that don't have results to compare to
|
|
|
|
if let Some(example_output) = example.result.as_ref() {
|
|
|
|
if let Some(example_input_type) =
|
|
|
|
eval_pipeline_without_terminal_expression(example.example, cwd, engine_state)
|
|
|
|
{
|
|
|
|
let example_input_type = example_input_type.get_type();
|
|
|
|
let example_output_type = example_output.get_type();
|
|
|
|
|
|
|
|
let example_matches_signature =
|
|
|
|
signature_input_output_types
|
|
|
|
.iter()
|
|
|
|
.any(|(sig_in_type, sig_out_type)| {
|
|
|
|
example_input_type.is_subtype(sig_in_type)
|
|
|
|
&& example_output_type.is_subtype(sig_out_type)
|
|
|
|
&& {
|
|
|
|
witnessed_type_transformations
|
|
|
|
.insert((sig_in_type.clone(), sig_out_type.clone()));
|
|
|
|
true
|
|
|
|
}
|
|
|
|
});
|
|
|
|
|
|
|
|
// The example type checks as a cell path operation if both:
|
|
|
|
// 1. The command is declared to operate on cell paths.
|
|
|
|
// 2. The example_input_type is list or record or table, and the example
|
|
|
|
// output shape is the same as the input shape.
|
|
|
|
let example_matches_signature_via_cell_path_operation = signature_operates_on_cell_paths
|
|
|
|
&& example_input_type.accepts_cell_paths()
|
|
|
|
// TODO: This is too permissive; it should make use of the signature.input_output_types at least.
|
|
|
|
&& example_output_type.to_shape() == example_input_type.to_shape();
|
|
|
|
|
2023-07-26 21:34:43 +00:00
|
|
|
if !(example_matches_signature || example_matches_signature_via_cell_path_operation) {
|
2023-02-27 21:58:56 +00:00
|
|
|
panic!(
|
2023-07-26 21:34:43 +00:00
|
|
|
"The example `{}` demonstrates a transformation of type {:?} -> {:?}. \
|
2023-02-27 21:58:56 +00:00
|
|
|
However, this does not match the declared signature: {:?}.{} \
|
2023-07-26 21:34:43 +00:00
|
|
|
For this command `operates_on_cell_paths()` is {}.",
|
|
|
|
example.example,
|
|
|
|
example_input_type,
|
|
|
|
example_output_type,
|
|
|
|
signature_input_output_types,
|
|
|
|
if signature_input_output_types.is_empty() {
|
|
|
|
" (Did you forget to declare the input and output types for the command?)"
|
|
|
|
} else {
|
|
|
|
""
|
|
|
|
},
|
|
|
|
signature_operates_on_cell_paths
|
|
|
|
);
|
2023-02-27 21:58:56 +00:00
|
|
|
};
|
|
|
|
};
|
|
|
|
}
|
|
|
|
witnessed_type_transformations
|
|
|
|
}
|
|
|
|
|
|
|
|
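/// Parse and evaluate the source with its last pipeline element removed, so the value
/// piped into the terminal expression can be inspected. Returns None if the source
/// contains more than one pipeline (e.g. semicolon-separated statements).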
fn eval_pipeline_without_terminal_expression(
|
|
|
|
src: &str,
|
|
|
|
cwd: &std::path::Path,
|
|
|
|
engine_state: &mut Box<EngineState>,
|
|
|
|
) -> Option<Value> {
|
|
|
|
let (mut block, delta) = parse(src, engine_state);
|
|
|
|
if block.pipelines.len() == 1 {
|
|
|
|
let n_expressions = block.pipelines[0].elements.len();
|
Make EngineState clone cheaper with Arc on all of the heavy objects (#12229)
2024-03-19 18:07:00 +00:00
|
|
|
Arc::make_mut(&mut block).pipelines[0]
|
|
|
|
.elements
|
|
|
|
.truncate(n_expressions - 1);
|
2023-02-27 21:58:56 +00:00
|
|
|
|
|
|
|
if !block.pipelines[0].elements.is_empty() {
|
|
|
|
let empty_input = PipelineData::empty();
|
|
|
|
Some(eval_block(block, empty_input, cwd, engine_state, delta))
|
|
|
|
} else {
|
|
|
|
Some(Value::nothing(Span::test_data()))
|
|
|
|
}
|
|
|
|
} else {
|
|
|
|
// E.g. multiple semicolon-separated statements
|
|
|
|
None
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
Make EngineState clone cheaper with Arc on all of the heavy objects (#12229)
2024-03-19 18:07:00 +00:00
|
|
|
pub fn parse(contents: &str, engine_state: &EngineState) -> (Arc<Block>, StateDelta) {
|
2023-02-27 21:58:56 +00:00
|
|
|
let mut working_set = StateWorkingSet::new(engine_state);
|
2023-04-07 18:09:38 +00:00
|
|
|
let output = nu_parser::parse(&mut working_set, None, contents.as_bytes(), false);
|
2023-02-27 21:58:56 +00:00
|
|
|
|
2023-04-07 00:35:45 +00:00
|
|
|
if let Some(err) = working_set.parse_errors.first() {
|
2023-02-27 21:58:56 +00:00
|
|
|
panic!("test parse error in `{contents}`: {err:?}")
|
|
|
|
}
|
|
|
|
|
|
|
|
(output, working_set.render())
|
|
|
|
}
|
|
|
|
|
|
|
|
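/// Merge the parser delta into the engine state, then evaluate the block in a fresh
/// stack with `PWD` set to `cwd`, panicking on any evaluation error.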
pub fn eval_block(
|
Make EngineState clone cheaper with Arc on all of the heavy objects (#12229)
2024-03-19 18:07:00 +00:00
|
|
|
block: Arc<Block>,
|
2023-02-27 21:58:56 +00:00
|
|
|
input: PipelineData,
|
|
|
|
cwd: &std::path::Path,
|
|
|
|
engine_state: &mut Box<EngineState>,
|
|
|
|
delta: StateDelta,
|
|
|
|
) -> Value {
|
|
|
|
engine_state
|
|
|
|
.merge_delta(delta)
|
|
|
|
.expect("Error merging delta");
|
|
|
|
|
IO and redirection overhaul (#11934)
# Description
The PR overhauls how IO redirection is handled, allowing more explicit
and fine-grain control over `stdout` and `stderr` output as well as more
efficient IO and piping.
To summarize the changes in this PR:
- Added a new `IoStream` type to indicate the intended destination for a
pipeline element's `stdout` and `stderr`.
- The `stdout` and `stderr` `IoStream`s are stored in the `Stack` to
avoid adding 6 additional arguments to every eval function and
`Command::run`. The `stdout` and `stderr` streams can be temporarily
overwritten through functions on `Stack`; these functions return
a guard that restores the original `stdout` and `stderr` when dropped
(see the sketch after this list).
- In the AST, redirections are now directly part of a `PipelineElement`
as a `Option<Redirection>` field instead of having multiple different
`PipelineElement` enum variants for each kind of redirection. This
required changes to the parser, mainly in `lite_parser.rs`.
- `Command`s can also set an `IoStream` override/redirection which will
apply to the previous command in the pipeline. This is used, for
example, in `ignore` to allow the previous external command to have its
stdout redirected to `Stdio::null()` at spawn time. In contrast, the
current implementation has to create an OS pipe and manually consume the
output on nushell's side. File and pipe redirections (`o>`, `e>`, `e>|`,
etc.) take precedence over overrides from commands.
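As a rough sketch of the guard idea (purely illustrative; the types and method names below are assumptions, not the actual `Stack` API):
```rust
// Hypothetical stand-in for the stack's stdout destination.
#[derive(Clone, Debug, PartialEq)]
enum OutDest {
    Inherit,
    Null,
}

struct Stack {
    stdout: OutDest,
}

// Guard returned by the override; restores the previous destination on drop.
struct RedirectGuard<'a> {
    stack: &'a mut Stack,
    saved: OutDest,
}

impl Stack {
    fn push_stdout(&mut self, dest: OutDest) -> RedirectGuard<'_> {
        let saved = std::mem::replace(&mut self.stdout, dest);
        RedirectGuard { stack: self, saved }
    }
}

impl Drop for RedirectGuard<'_> {
    fn drop(&mut self) {
        self.stack.stdout = self.saved.clone();
    }
}

fn main() {
    let mut stack = Stack { stdout: OutDest::Inherit };
    {
        let _guard = stack.push_stdout(OutDest::Null);
        // ...run the command with its stdout redirected to null...
    } // guard dropped here: original stdout restored
    assert_eq!(stack.stdout, OutDest::Inherit);
}
```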
This PR improves piping and IO speed, partially addressing #10763. Using
the `throughput` command from that issue, this PR gives the following
speedup on my setup for the commands below:
| Command | Before (MB/s) | After (MB/s) | Bash (MB/s) |
| --------------------------- | -------------:| ------------:| -----------:|
| `throughput o> /dev/null` | 1169 | 52938 | 54305 |
| `throughput \| ignore` | 840 | 55438 | N/A |
| `throughput \| null` | Error | 53617 | N/A |
| `throughput \| rg 'x'` | 1165 | 3049 | 3736 |
| `(throughput) \| rg 'x'` | 810 | 3085 | 3815 |
(Numbers above are the median samples for throughput)
This PR also paves the way to refactor our `ExternalStream` handling in
the various commands. For example, this PR already fixes the following
code:
```nushell
^sh -c 'echo -n "hello "; sleep 0; echo "world"' | find "hello world"
```
This returns an empty list on 0.90.1 and returns a highlighted "hello
world" on this PR.
Since the `stdout` and `stderr` `IoStream`s are available to commands
when they are run, this unlocks the potential for more convenient
behavior. E.g., the `find` command can disable its ansi highlighting if
it detects that the output `IoStream` is not the terminal. Knowing the
output streams will also allow background job output to be redirected
more easily and efficiently.
# User-Facing Changes
- External commands returned from closures will be collected (in most
cases):
```nushell
1..2 | each {|_| nu -c "print a" }
```
This gives `["a", "a"]` on this PR, whereas this used to print "a\na\n"
and then return an empty list.
```nushell
1..2 | each {|_| nu -c "print -e a" }
```
This gives `["", ""]` and prints "a\na\n" to stderr, whereas this used
to return an empty list and print "a\na\n" to stderr.
- Trailing newlines are always trimmed for external commands when
piping into internal commands or collecting the output as a value. (Failure to
decode the output as utf-8 will keep the trailing newline for the last
binary value.) In the current nushell version, the following three code
snippets differ only in parenthesis placement, but they all have
different outputs:
1. `1..2 | each { ^echo a }`
```
a
a
╭────────────╮
│ empty list │
╰────────────╯
```
2. `1..2 | each { (^echo a) }`
```
╭───┬───╮
│ 0 │ a │
│ 1 │ a │
╰───┴───╯
```
3. `1..2 | (each { ^echo a })`
```
╭───┬───╮
│ 0 │ a │
│ │ │
│ 1 │ a │
│ │ │
╰───┴───╯
```
But in this PR, the above snippets will all have the same output:
```
╭───┬───╮
│ 0 │ a │
│ 1 │ a │
╰───┴───╯
```
- All existing flags on `run-external` are now deprecated.
- File redirections now apply to all commands inside a code block:
```nushell
(nu -c "print -e a"; nu -c "print -e b") e> test.out
```
This gives "a\nb\n" in `test.out` and prints nothing. The same result
would happen when printing to stdout and using an `o>` file redirection.
- External command output will (almost) never be ignored, and ignoring
output must be explicit now:
```nushell
(^echo a; ^echo b)
```
This prints "a\nb\n", whereas this used to print only "b\n". This only
applies to external commands; values and internal commands not in return
position will not print anything (e.g., `(echo a; echo b)` still only
prints "b").
- `complete` now always captures stderr (`do` is not necessary).
# After Submitting
The language guide and other documentation will need to be updated.
2024-03-14 20:51:55 +00:00
|
|
|
let mut stack = Stack::new().capture();
|
2023-02-27 21:58:56 +00:00
|
|
|
|
|
|
|
stack.add_env_var("PWD".to_string(), Value::test_string(cwd.to_string_lossy()));
|
|
|
|
|
IO and redirection overhaul (#11934)
2024-03-14 20:51:55 +00:00
|
|
|
match nu_engine::eval_block::<WithoutDebug>(engine_state, &mut stack, &block, input) {
|
2023-02-27 21:58:56 +00:00
|
|
|
Err(err) => panic!("test eval error in `{}`: {:?}", "TODO", err),
|
|
|
|
Ok(result) => result.into_value(Span::test_data()),
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
|
|
|
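/// Evaluate the example and assert that the result equals the example's expected
/// output (if the example declares one).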
pub fn check_example_evaluates_to_expected_output(
|
|
|
|
example: &Example,
|
|
|
|
cwd: &std::path::Path,
|
|
|
|
engine_state: &mut Box<EngineState>,
|
|
|
|
) {
|
IO and redirection overhaul (#11934)
2024-03-14 20:51:55 +00:00
|
|
|
let mut stack = Stack::new().capture();
|
2023-02-27 21:58:56 +00:00
|
|
|
|
|
|
|
// Set up PWD
|
|
|
|
stack.add_env_var("PWD".to_string(), Value::test_string(cwd.to_string_lossy()));
|
|
|
|
|
|
|
|
engine_state
|
2023-06-04 19:04:28 +00:00
|
|
|
.merge_env(&mut stack, cwd)
|
2023-02-27 21:58:56 +00:00
|
|
|
.expect("Error merging environment");
|
|
|
|
|
|
|
|
let empty_input = PipelineData::empty();
|
|
|
|
let result = eval(example.example, empty_input, cwd, engine_state);
|
|
|
|
|
|
|
|
// Note. Value implements PartialEq for Bool, Int, Float, String and Block
|
|
|
|
// If the command you are testing requires to compare another case, then
|
|
|
|
// you need to define its equality in the Value struct
|
|
|
|
if let Some(expected) = example.result.as_ref() {
|
|
|
|
assert_eq!(
|
2023-10-23 14:12:11 +00:00
|
|
|
DebuggableValue(&result),
|
|
|
|
DebuggableValue(expected),
|
2023-02-27 21:58:56 +00:00
|
|
|
"The example result differs from the expected value",
|
|
|
|
)
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
|
|
|
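/// Check that every (input, output) entry declared in the signature was witnessed by
/// at least one example, unless the command opts out via `allow_variants_without_examples`.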
pub fn check_all_signature_input_output_types_entries_have_examples(
|
|
|
|
signature: Signature,
|
|
|
|
witnessed_type_transformations: HashSet<(Type, Type)>,
|
|
|
|
) {
|
2023-07-11 22:00:31 +00:00
|
|
|
let declared_type_transformations = HashSet::from_iter(signature.input_output_types);
|
2023-02-27 21:58:56 +00:00
|
|
|
assert!(
|
|
|
|
witnessed_type_transformations.is_subset(&declared_type_transformations),
|
|
|
|
"This should not be possible (bug in test): the type transformations \
|
|
|
|
collected in the course of matching examples to the signature type map \
|
|
|
|
contain type transformations not present in the signature type map."
|
|
|
|
);
|
|
|
|
|
|
|
|
if !signature.allow_variants_without_examples {
|
|
|
|
assert_eq!(
|
|
|
|
witnessed_type_transformations,
|
|
|
|
declared_type_transformations,
|
|
|
|
"There are entries in the signature type map which do not correspond to any example: \
|
|
|
|
{:?}",
|
|
|
|
declared_type_transformations
|
|
|
|
.difference(&witnessed_type_transformations)
|
|
|
|
.map(|(s1, s2)| format!("{s1} -> {s2}"))
|
|
|
|
.join(", ")
|
|
|
|
);
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
|
|
|
fn eval(
|
|
|
|
contents: &str,
|
|
|
|
input: PipelineData,
|
|
|
|
cwd: &std::path::Path,
|
|
|
|
engine_state: &mut Box<EngineState>,
|
|
|
|
) -> Value {
|
|
|
|
let (block, delta) = parse(contents, engine_state);
|
|
|
|
eval_block(block, input, cwd, engine_state, delta)
|
|
|
|
}
|
2023-10-23 14:12:11 +00:00
|
|
|
|
|
|
|
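/// Thin wrapper around `Value` that provides a compact `Debug` representation for
/// nicer assertion failure messages in the example tests.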
pub struct DebuggableValue<'a>(pub &'a Value);
|
|
|
|
|
|
|
|
impl PartialEq for DebuggableValue<'_> {
|
|
|
|
fn eq(&self, other: &Self) -> bool {
|
|
|
|
self.0 == other.0
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
|
|
|
impl<'a> std::fmt::Debug for DebuggableValue<'a> {
|
|
|
|
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
|
|
|
|
match self.0 {
|
|
|
|
Value::Bool { val, .. } => {
|
|
|
|
write!(f, "{:?}", val)
|
|
|
|
}
|
|
|
|
Value::Int { val, .. } => {
|
|
|
|
write!(f, "{:?}", val)
|
|
|
|
}
|
|
|
|
Value::Float { val, .. } => {
|
|
|
|
write!(f, "{:?}f", val)
|
|
|
|
}
|
|
|
|
Value::Filesize { val, .. } => {
|
|
|
|
write!(f, "Filesize({:?})", val)
|
|
|
|
}
|
|
|
|
Value::Duration { val, .. } => {
|
|
|
|
let duration = std::time::Duration::from_nanos(*val as u64);
|
|
|
|
write!(f, "Duration({:?})", duration)
|
|
|
|
}
|
|
|
|
Value::Date { val, .. } => {
|
|
|
|
write!(f, "Date({:?})", val)
|
|
|
|
}
|
|
|
|
Value::Range { val, .. } => match val.inclusion {
|
|
|
|
RangeInclusion::Inclusive => write!(
|
|
|
|
f,
|
|
|
|
"Range({:?}..{:?}, step: {:?})",
|
|
|
|
val.from, val.to, val.incr
|
|
|
|
),
|
|
|
|
RangeInclusion::RightExclusive => write!(
|
|
|
|
f,
|
|
|
|
"Range({:?}..<{:?}, step: {:?})",
|
|
|
|
val.from, val.to, val.incr
|
|
|
|
),
|
|
|
|
},
|
`open`, `rm`, `umv`, `cp`, `rm` and `du`: Don't glob if inputs are variables or string interpolation (#11886)
# Description
This is a follow up to
https://github.com/nushell/nushell/pull/11621#issuecomment-1937484322
Also Fixes: #11838
## About the code change
It applys the same logic when we pass variables to external commands:
https://github.com/nushell/nushell/blob/0487e9ffcbc57c2d5feca606e10c3f8221ff5e00/crates/nu-command/src/system/run_external.rs#L162-L170
That is: if the user inputs dynamic things (like variables, sub-expressions, or
string interpolation), it returns a quoted `NuPath`, so the user input
won't be globbed.
# User-Facing Changes
Given two input files: `a*c.txt`, `abc.txt`
* `let f = "a*c.txt"; rm $f` will remove one file: `a*c.txt`.
~* `let f = "a*c.txt"; rm --glob $f` will remove `a*c.txt` and
`abc.txt`~
* `let f: glob = "a*c.txt"; rm $f` will remove `a*c.txt` and `abc.txt`
## Rules about globbing with *variable*
Given two files: `a*c.txt`, `abc.txt`
| Cmd Type | example | Result |
| -------- | ------- | ------ |
| builtin | let f = "a*c.txt"; rm $f | remove `a*c.txt` |
| builtin | let f: glob = "a*c.txt"; rm $f | remove `a*c.txt` and `abc.txt` |
| builtin | let f = "a*c.txt"; rm ($f \| into glob) | remove `a*c.txt` and `abc.txt` |
| custom | def crm [f: glob] { rm $f }; let f = "a*c.txt"; crm $f | remove `a*c.txt` and `abc.txt` |
| custom | def crm [f: glob] { rm ($f \| into string) }; let f = "a*c.txt"; crm $f | remove `a*c.txt` |
| custom | def crm [f: string] { rm $f }; let f = "a*c.txt"; crm $f | remove `a*c.txt` |
| custom | def crm [f: string] { rm $f }; let f = "a*c.txt"; crm ($f \| into glob) | remove `a*c.txt` and `abc.txt` |
In general, if a variable is annotated with the `glob` type, nushell will
expand the glob pattern. Otherwise, we need to use `| into glob` to expand the
glob pattern.
# Tests + Formatting
Done
# After Submitting
I think the `str glob-escape` command will no longer be required. We can
remove it.
2024-02-23 01:17:09 +00:00
|
|
|
Value::String { val, .. } | Value::Glob { val, .. } => {
|
2023-10-23 14:12:11 +00:00
|
|
|
write!(f, "{:?}", val)
|
|
|
|
}
|
|
|
|
Value::Record { val, .. } => {
|
|
|
|
write!(f, "{{")?;
|
2023-11-22 22:48:48 +00:00
|
|
|
let mut first = true;
|
2024-03-26 15:17:44 +00:00
|
|
|
for (col, value) in (&**val).into_iter() {
|
2023-11-22 22:48:48 +00:00
|
|
|
if !first {
|
2023-10-23 14:12:11 +00:00
|
|
|
write!(f, ", ")?;
|
|
|
|
}
|
2023-11-22 22:48:48 +00:00
|
|
|
first = false;
|
2023-10-23 14:12:11 +00:00
|
|
|
write!(f, "{:?}: {:?}", col, DebuggableValue(value))?;
|
|
|
|
}
|
|
|
|
write!(f, "}}")
|
|
|
|
}
|
|
|
|
Value::List { vals, .. } => {
|
|
|
|
write!(f, "[")?;
|
|
|
|
for (i, value) in vals.iter().enumerate() {
|
|
|
|
if i > 0 {
|
|
|
|
write!(f, ", ")?;
|
|
|
|
}
|
|
|
|
write!(f, "{:?}", DebuggableValue(value))?;
|
|
|
|
}
|
|
|
|
write!(f, "]")
|
|
|
|
}
|
|
|
|
Value::Block { val, .. } => {
|
|
|
|
write!(f, "Block({:?})", val)
|
|
|
|
}
|
|
|
|
Value::Closure { val, .. } => {
|
|
|
|
write!(f, "Closure({:?})", val)
|
|
|
|
}
|
|
|
|
Value::Nothing { .. } => {
|
|
|
|
write!(f, "Nothing")
|
|
|
|
}
|
|
|
|
Value::Error { error, .. } => {
|
|
|
|
write!(f, "Error({:?})", error)
|
|
|
|
}
|
|
|
|
Value::Binary { val, .. } => {
|
|
|
|
write!(f, "Binary({:?})", val)
|
|
|
|
}
|
|
|
|
Value::CellPath { val, .. } => {
|
2023-11-10 20:12:51 +00:00
|
|
|
write!(f, "CellPath({:?})", val.to_string())
|
2023-10-23 14:12:11 +00:00
|
|
|
}
|
|
|
|
Value::CustomValue { val, .. } => {
|
|
|
|
write!(f, "CustomValue({:?})", val)
|
|
|
|
}
|
|
|
|
Value::LazyRecord { val, .. } => {
|
|
|
|
let rec = val.collect().map_err(|_| std::fmt::Error)?;
|
|
|
|
write!(f, "LazyRecord({:?})", DebuggableValue(&rec))
|
|
|
|
}
|
|
|
|
}
|
|
|
|
}
|
|
|
|
}
|