2023-08-29 21:46:50 +00:00
|
|
|
use nu_cmd_base::hook::eval_hook;
|
2023-02-01 23:02:27 +00:00
|
|
|
use nu_engine::{eval_block, eval_block_with_early_return};
|
2022-04-30 18:23:05 +00:00
|
|
|
use nu_parser::{escape_quote_string, lex, parse, unescape_unquote_string, Token, TokenContents};
|
Debugger experiments (#11441)
<!--
if this PR closes one or more issues, you can automatically link the PR
with
them by using one of the [*linking
keywords*](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue#linking-a-pull-request-to-an-issue-using-a-keyword),
e.g.
- this PR should close #xxxx
- fixes #xxxx
you can also mention related issues, PRs or discussions!
-->
# Description
<!--
Thank you for improving Nushell. Please, check our [contributing
guide](../CONTRIBUTING.md) and talk to the core team before making major
changes.
Description of your pull request goes here. **Provide examples and/or
screenshots** if your changes affect the user experience.
-->
This PR adds a new evaluator path with callbacks to a mutable trait
object implementing a Debugger trait. The trait object can do anything,
e.g., profiling, code coverage, step debugging. Currently,
entering/leaving a block and a pipeline element are marked with
callbacks, but more callbacks can be added as necessary. Not all
callbacks need to be used by all debuggers; unused ones are simply empty
calls. A simple profiler is implemented as a proof of concept.
The debugging support is implemented by making `eval_xxx()` functions
generic over whether we're debugging or not. This has zero
computational overhead, but makes the binary slightly larger (see
benchmarks below). `eval_xxx()` variants called from commands (like
`eval_block_with_early_return()` in `each`) are chosen via dynamic
dispatch for two reasons: to avoid growing the binary size by
duplicating the code of many commands, and because a generic parameter
there would make the Command trait object-unsafe.
In the future, I hope it will be possible to allow plugin callbacks so
that users can implement profilers as plugins instead of having to
recompile Nushell.
[DAP](https://microsoft.github.io/debug-adapter-protocol/) would also be
interesting to explore.
Try `help debug profile`.
## Screenshots
Basic output:
![profiler_new](https://github.com/nushell/nushell/assets/25571562/418b9df0-b659-4dcb-b023-2d5fcef2c865)
To profile with more granularity, increase the profiler depth (you'll
see that repeated `is-windows` calls take a large chunk of total time,
making it a good candidate for optimizing):
![profiler_new_m3](https://github.com/nushell/nushell/assets/25571562/636d756d-5d56-460c-a372-14716f65f37f)
## Benchmarks
### Binary size
Binary size increase vs. main: **+40360 bytes**. _(Both built with
`--release --features=extra,dataframe`.)_
### Time
```nushell
# bench_debug.nu
use std bench
let test = {
1..100
| each {
ls | each {|row| $row.name | str length }
}
| flatten
| math avg
}
print 'debug:'
let res2 = bench { debug profile $test } --pretty
print $res2
```
```nushell
# bench_nodebug.nu
use std bench
let test = {
1..100
| each {
ls | each {|row| $row.name | str length }
}
| flatten
| math avg
}
print 'no debug:'
let res1 = bench { do $test } --pretty
print $res1
```
`cargo run --release -- bench_debug.nu` is consistently 1-2 ms slower
than `cargo run --release -- bench_nodebug.nu` due to the collection
overhead plus gathering the report. This is expected; when gathering
more data, the overhead is correspondingly higher.
Between `cargo run --release -- bench_nodebug.nu` and `nu
bench_nodebug.nu`, I didn't measure any difference. Both benchmarks
report times between 97 and 103 ms randomly, without one being
consistently higher than the other. This suggests that, at least in
this particular case, there is no runtime overhead when no debugger is
running.
## API changes
This PR adds a generic parameter to all `eval_xxx` functions that forces
you to specify whether you use the debugger. You can resolve it in two
ways:
* Use a provided helper that will figure it out for you. If you wanted
to use `eval_block(&engine_state, ...)`, call `let eval_block =
get_eval_block(&engine_state); eval_block(&engine_state, ...)`
* If you know you're in an evaluation path that doesn't need debugger
support, call `eval_block::<WithoutDebug>(&engine_state, ...)` (this is
the case of hooks, for example).
I tried to add more explanation in the docstring of `debugger_trait.rs`.
## TODO
- [x] Better profiler output to reduce spam of iterative commands like
`each`
- [x] Resolve `TODO: DEBUG` comments
- [x] Resolve unwraps
- [x] Add doc comments
- [x] Add usage and extra usage for `debug profile`, explaining all
columns
# User-Facing Changes
<!-- List of all changes that impact the user experience here. This
helps us keep track of breaking changes. -->
Hopefully none.
# Tests + Formatting
<!--
Don't forget to add tests that cover your changes.
Make sure you've run and fixed any issues with these commands:
- `cargo fmt --all -- --check` to check standard code formatting (`cargo
fmt --all` applies these changes)
- `cargo clippy --workspace -- -D warnings -D clippy::unwrap_used` to
check that you're using the standard code style
- `cargo test --workspace` to check that all tests pass (on Windows make
sure to [enable developer
mode](https://learn.microsoft.com/en-us/windows/apps/get-started/developer-mode-features-and-debugging))
- `cargo run -- -c "use std testing; testing run-tests --path
crates/nu-std"` to run the tests for the standard library
> **Note**
> from `nushell` you can also use the `toolkit` as follows
> ```bash
> use toolkit.nu # or use an `env_change` hook to activate it
automatically
> toolkit check pr
> ```
-->
# After Submitting
<!-- If your PR had any user-facing changes, update [the
documentation](https://github.com/nushell/nushell.github.io) after the
PR is merged, if necessary. This will help us keep the docs up to date.
-->
2024-03-08 18:21:35 +00:00
|
|
|
use nu_protocol::debugger::WithoutDebug;
|
2022-03-16 18:17:06 +00:00
|
|
|
use nu_protocol::engine::StateWorkingSet;
|
2022-02-18 18:43:34 +00:00
|
|
|
use nu_protocol::{
|
|
|
|
engine::{EngineState, Stack},
|
2022-11-06 00:46:40 +00:00
|
|
|
print_if_stream, PipelineData, ShellError, Span, Value,
|
2022-02-18 18:43:34 +00:00
|
|
|
};
|
2023-04-08 11:53:43 +00:00
|
|
|
use nu_protocol::{report_error, report_error_new};
|
2022-03-16 22:21:06 +00:00
|
|
|
#[cfg(windows)]
|
|
|
|
use nu_utils::enable_vt_processing;
|
2023-01-24 20:28:59 +00:00
|
|
|
use nu_utils::utils::perf;
|
2023-03-20 04:05:22 +00:00
|
|
|
use std::path::Path;
|
2022-02-18 18:43:34 +00:00
|
|
|
|
2022-03-16 18:17:06 +00:00
|
|
|
// This collects environment variables from std::env and adds them to the engine state.
|
|
|
|
//
|
|
|
|
// In order to ensure the values have spans, it first creates a dummy file, writes the collected
|
|
|
|
// env vars into it (in a "NAME"="value" format, quite similar to the output of the Unix 'env'
|
|
|
|
// tool), then uses the file to get the spans. The file stays in memory, no filesystem IO is done.
|
2022-06-10 18:01:08 +00:00
|
|
|
//
|
|
|
|
// The "PWD" env value will be forced to `init_cwd`.
|
|
|
|
// The reason to use `init_cwd`:
|
|
|
|
//
|
|
|
|
// While gathering parent env vars, the parent `PWD` may not be the same as `current working directory`.
|
|
|
|
// Consider the following command as an example (assume we run it inside `/tmp`):
|
|
|
|
//
|
|
|
|
// tmux split-window -v -c "#{pane_current_path}"
|
|
|
|
//
|
|
|
|
// Here nu executes the external command `tmux`, and tmux starts a new `nushell` with `init_cwd` set to "#{pane_current_path}".
|
|
|
|
// But at the same time, the inherited `PWD` still remains `/tmp`.
|
|
|
|
//
|
|
|
|
// In this scenario, the new `nushell`'s `PWD` should be "#{pane_current_path}" (i.e., `init_cwd`) rather than the inherited `/tmp`.
|
|
|
|
pub fn gather_parent_env_vars(engine_state: &mut EngineState, init_cwd: &Path) {
|
|
|
|
gather_env_vars(std::env::vars(), engine_state, init_cwd);
|
2022-04-15 22:38:27 +00:00
|
|
|
}
|
|
|
|
|
2022-06-10 18:01:08 +00:00
|
|
|
fn gather_env_vars(
|
|
|
|
vars: impl Iterator<Item = (String, String)>,
|
|
|
|
engine_state: &mut EngineState,
|
|
|
|
init_cwd: &Path,
|
|
|
|
) {
|
2022-03-16 18:17:06 +00:00
|
|
|
fn report_capture_error(engine_state: &EngineState, env_str: &str, msg: &str) {
|
|
|
|
let working_set = StateWorkingSet::new(engine_state);
|
|
|
|
report_error(
|
|
|
|
&working_set,
|
2023-12-06 23:40:03 +00:00
|
|
|
&ShellError::GenericError {
|
|
|
|
error: format!("Environment variable was not captured: {env_str}"),
|
|
|
|
msg: "".into(),
|
|
|
|
span: None,
|
|
|
|
help: Some(msg.into()),
|
|
|
|
inner: vec![],
|
|
|
|
},
|
2022-03-16 18:17:06 +00:00
|
|
|
);
|
|
|
|
}
|
|
|
|
|
2022-03-25 20:14:48 +00:00
|
|
|
fn put_env_to_fake_file(name: &str, val: &str, fake_env_file: &mut String) {
|
2022-04-30 18:23:05 +00:00
|
|
|
fake_env_file.push_str(&escape_quote_string(name));
|
2022-03-16 18:17:06 +00:00
|
|
|
fake_env_file.push('=');
|
2022-04-30 18:23:05 +00:00
|
|
|
fake_env_file.push_str(&escape_quote_string(val));
|
2022-03-16 18:17:06 +00:00
|
|
|
fake_env_file.push('\n');
|
|
|
|
}
|
|
|
|
|
|
|
|
let mut fake_env_file = String::new();
|
2022-04-15 22:38:27 +00:00
|
|
|
// Write all the env vars into a fake file
|
|
|
|
for (name, val) in vars {
|
|
|
|
put_env_to_fake_file(&name, &val, &mut fake_env_file);
|
|
|
|
}
|
|
|
|
|
2022-06-10 18:01:08 +00:00
|
|
|
match init_cwd.to_str() {
|
|
|
|
Some(cwd) => {
|
|
|
|
put_env_to_fake_file("PWD", cwd, &mut fake_env_file);
|
|
|
|
}
|
|
|
|
None => {
|
|
|
|
// Could not capture current working directory
|
|
|
|
let working_set = StateWorkingSet::new(engine_state);
|
|
|
|
report_error(
|
|
|
|
&working_set,
|
2023-12-06 23:40:03 +00:00
|
|
|
&ShellError::GenericError {
|
|
|
|
error: "Current directory is not a valid utf-8 path".into(),
|
|
|
|
msg: "".into(),
|
|
|
|
span: None,
|
|
|
|
help: Some(format!(
|
2023-01-30 01:37:54 +00:00
|
|
|
"Retrieving current directory failed: {init_cwd:?} not a valid utf-8 path"
|
2022-06-10 18:01:08 +00:00
|
|
|
)),
|
2023-12-06 23:40:03 +00:00
|
|
|
inner: vec![],
|
|
|
|
},
|
2022-06-10 18:01:08 +00:00
|
|
|
);
|
2022-03-16 18:17:06 +00:00
|
|
|
}
|
|
|
|
}
|
|
|
|
|
|
|
|
// Lex the fake file, assign spans to all environment variables and add them
|
|
|
|
// to the engine state
|
|
|
|
let span_offset = engine_state.next_span_start();
|
|
|
|
|
|
|
|
engine_state.add_file(
|
|
|
|
"Host Environment Variables".to_string(),
|
|
|
|
fake_env_file.as_bytes().to_vec(),
|
|
|
|
);
|
|
|
|
|
|
|
|
let (tokens, _) = lex(fake_env_file.as_bytes(), span_offset, &[], &[], true);
|
|
|
|
|
|
|
|
for token in tokens {
|
|
|
|
if let Token {
|
|
|
|
contents: TokenContents::Item,
|
|
|
|
span: full_span,
|
|
|
|
} = token
|
|
|
|
{
|
2023-07-31 19:47:46 +00:00
|
|
|
let contents = engine_state.get_span_contents(full_span);
|
2022-03-16 18:17:06 +00:00
|
|
|
let (parts, _) = lex(contents, full_span.start, &[], &[b'='], true);
|
|
|
|
|
|
|
|
let name = if let Some(Token {
|
|
|
|
contents: TokenContents::Item,
|
|
|
|
span,
|
2023-11-17 15:15:55 +00:00
|
|
|
}) = parts.first()
|
2022-03-16 18:17:06 +00:00
|
|
|
{
|
2023-04-07 00:35:45 +00:00
|
|
|
let mut working_set = StateWorkingSet::new(engine_state);
|
|
|
|
let bytes = working_set.get_span_contents(*span);
|
2022-03-16 18:17:06 +00:00
|
|
|
|
|
|
|
if bytes.len() < 2 {
|
|
|
|
report_capture_error(
|
|
|
|
engine_state,
|
|
|
|
&String::from_utf8_lossy(contents),
|
|
|
|
"Got empty name.",
|
|
|
|
);
|
|
|
|
|
|
|
|
continue;
|
|
|
|
}
|
|
|
|
|
2023-04-07 00:35:45 +00:00
|
|
|
let (bytes, err) = unescape_unquote_string(bytes, *span);
|
|
|
|
if let Some(err) = err {
|
|
|
|
working_set.error(err);
|
|
|
|
}
|
2022-04-15 22:38:27 +00:00
|
|
|
|
2023-04-07 00:35:45 +00:00
|
|
|
if working_set.parse_errors.first().is_some() {
|
2022-04-15 22:38:27 +00:00
|
|
|
report_capture_error(
|
|
|
|
engine_state,
|
|
|
|
&String::from_utf8_lossy(contents),
|
|
|
|
"Got unparsable name.",
|
|
|
|
);
|
|
|
|
|
|
|
|
continue;
|
|
|
|
}
|
|
|
|
|
|
|
|
bytes
|
2022-03-16 18:17:06 +00:00
|
|
|
} else {
|
|
|
|
report_capture_error(
|
|
|
|
engine_state,
|
|
|
|
&String::from_utf8_lossy(contents),
|
|
|
|
"Got empty name.",
|
|
|
|
);
|
|
|
|
|
|
|
|
continue;
|
|
|
|
};
|
|
|
|
|
|
|
|
let value = if let Some(Token {
|
|
|
|
contents: TokenContents::Item,
|
|
|
|
span,
|
|
|
|
}) = parts.get(2)
|
|
|
|
{
|
2023-04-07 00:35:45 +00:00
|
|
|
let mut working_set = StateWorkingSet::new(engine_state);
|
|
|
|
let bytes = working_set.get_span_contents(*span);
|
2022-03-16 18:17:06 +00:00
|
|
|
|
|
|
|
if bytes.len() < 2 {
|
|
|
|
report_capture_error(
|
|
|
|
engine_state,
|
|
|
|
&String::from_utf8_lossy(contents),
|
|
|
|
"Got empty value.",
|
|
|
|
);
|
|
|
|
|
|
|
|
continue;
|
|
|
|
}
|
|
|
|
|
2023-04-07 00:35:45 +00:00
|
|
|
let (bytes, err) = unescape_unquote_string(bytes, *span);
|
|
|
|
if let Some(err) = err {
|
|
|
|
working_set.error(err);
|
|
|
|
}
|
2022-04-15 22:38:27 +00:00
|
|
|
|
2023-04-07 00:35:45 +00:00
|
|
|
if working_set.parse_errors.first().is_some() {
|
2022-04-15 22:38:27 +00:00
|
|
|
report_capture_error(
|
|
|
|
engine_state,
|
|
|
|
&String::from_utf8_lossy(contents),
|
|
|
|
"Got unparsable value.",
|
|
|
|
);
|
|
|
|
|
|
|
|
continue;
|
|
|
|
}
|
2022-03-16 18:17:06 +00:00
|
|
|
|
2023-09-03 14:27:29 +00:00
|
|
|
Value::string(bytes, *span)
|
2022-03-16 18:17:06 +00:00
|
|
|
} else {
|
|
|
|
report_capture_error(
|
|
|
|
engine_state,
|
|
|
|
&String::from_utf8_lossy(contents),
|
|
|
|
"Got empty value.",
|
|
|
|
);
|
|
|
|
|
|
|
|
continue;
|
|
|
|
};
|
|
|
|
|
|
|
|
// stack.add_env_var(name, value);
|
2022-05-07 19:39:22 +00:00
|
|
|
engine_state.add_env_var(name, value);
|
2022-03-16 18:17:06 +00:00
|
|
|
}
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
|
|
|
pub fn eval_source(
|
|
|
|
engine_state: &mut EngineState,
|
|
|
|
stack: &mut Stack,
|
|
|
|
source: &[u8],
|
|
|
|
fname: &str,
|
|
|
|
input: PipelineData,
|
2023-02-01 23:02:27 +00:00
|
|
|
allow_return: bool,
|
2022-03-16 18:17:06 +00:00
|
|
|
) -> bool {
|
2023-01-24 20:28:59 +00:00
|
|
|
let start_time = std::time::Instant::now();
|
|
|
|
|
2022-03-16 18:17:06 +00:00
|
|
|
let (block, delta) = {
|
|
|
|
let mut working_set = StateWorkingSet::new(engine_state);
|
2023-04-07 00:35:45 +00:00
|
|
|
let output = parse(
|
2022-03-16 18:17:06 +00:00
|
|
|
&mut working_set,
|
|
|
|
Some(fname), // format!("entry #{}", entry_num)
|
|
|
|
source,
|
|
|
|
false,
|
|
|
|
);
|
Deprecate `--flag: bool` in custom command (#11365)
# Description
Since #11057 was merged, it's hard to tell the difference between
`--flag: bool` and `--flag`, which makes custom commands' signatures
hard to read and the commands hard to use correctly.
After discussion, I think we can deprecate `--flag: bool` usage, and
encourage using `--flag` instead.
# User-Facing Changes
The following code raises a warning message but doesn't stop
execution.
```nushell
❯ def florb [--dry-run: bool, --another-flag] { "aaa" }; florb
Error: × Deprecated: --flag: bool
╭─[entry #7:1:1]
1 │ def florb [--dry-run: bool, --another-flag] { "aaa" }; florb
· ──┬─
· ╰── `--flag: bool` is deprecated. Please use `--flag` instead, more info: https://www.nushell.sh/book/custom_commands.html
╰────
aaa
```
cc @kubouch
# Tests + Formatting
Done
# After Submitting
- [ ] Add more information under
https://www.nushell.sh/book/custom_commands.html to indicate `--dry-run:
bool` is not allowed,
- [ ] remove `: bool` from custom commands between 0.89 and 0.90
---------
Co-authored-by: Antoine Stevan <44101798+amtoine@users.noreply.github.com>
2023-12-21 09:07:08 +00:00
|
|
|
if let Some(warning) = working_set.parse_warnings.first() {
|
|
|
|
report_error(&working_set, warning);
|
|
|
|
}
|
|
|
|
|
2023-04-07 00:35:45 +00:00
|
|
|
if let Some(err) = working_set.parse_errors.first() {
|
2022-04-04 11:11:27 +00:00
|
|
|
set_last_exit_code(stack, 1);
|
2023-04-07 00:35:45 +00:00
|
|
|
report_error(&working_set, err);
|
2022-03-16 18:17:06 +00:00
|
|
|
return false;
|
|
|
|
}
|
|
|
|
|
|
|
|
(output, working_set.render())
|
|
|
|
};
|
|
|
|
|
2022-07-14 14:09:27 +00:00
|
|
|
if let Err(err) = engine_state.merge_delta(delta) {
|
|
|
|
set_last_exit_code(stack, 1);
|
|
|
|
report_error_new(engine_state, &err);
|
|
|
|
return false;
|
|
|
|
}
|
2022-03-16 18:17:06 +00:00
|
|
|
|
2023-02-01 23:02:27 +00:00
|
|
|
let b = if allow_return {
|
Debugger experiments (#11441)
2024-03-08 18:21:35 +00:00
|
|
|
eval_block_with_early_return::<WithoutDebug>(
|
|
|
|
engine_state,
|
|
|
|
stack,
|
|
|
|
&block,
|
|
|
|
input,
|
|
|
|
false,
|
|
|
|
false,
|
|
|
|
)
|
2023-02-01 23:02:27 +00:00
|
|
|
} else {
|
Debugger experiments (#11441)
2024-03-08 18:21:35 +00:00
|
|
|
eval_block::<WithoutDebug>(engine_state, stack, &block, input, false, false)
|
2023-02-01 23:02:27 +00:00
|
|
|
};
|
|
|
|
|
|
|
|
match b {
|
2022-10-10 12:32:55 +00:00
|
|
|
Ok(pipeline_data) => {
|
2022-11-06 00:46:40 +00:00
|
|
|
let config = engine_state.get_config();
|
|
|
|
let result;
|
|
|
|
if let PipelineData::ExternalStream {
|
|
|
|
stdout: stream,
|
|
|
|
stderr: stderr_stream,
|
|
|
|
exit_code,
|
|
|
|
..
|
|
|
|
} = pipeline_data
|
|
|
|
{
|
|
|
|
result = print_if_stream(stream, stderr_stream, false, exit_code);
|
|
|
|
} else if let Some(hook) = config.hooks.display_output.clone() {
|
2023-08-27 11:55:20 +00:00
|
|
|
match eval_hook(
|
|
|
|
engine_state,
|
|
|
|
stack,
|
|
|
|
Some(pipeline_data),
|
|
|
|
vec![],
|
|
|
|
&hook,
|
|
|
|
"display_output",
|
|
|
|
) {
|
2022-11-06 00:46:40 +00:00
|
|
|
Err(err) => {
|
|
|
|
result = Err(err);
|
|
|
|
}
|
|
|
|
Ok(val) => {
|
|
|
|
result = val.print(engine_state, stack, false, false);
|
|
|
|
}
|
|
|
|
}
|
|
|
|
} else {
|
2022-12-14 03:45:37 +00:00
|
|
|
result = pipeline_data.print(engine_state, stack, true, false);
|
2022-11-06 00:46:40 +00:00
|
|
|
}
|
|
|
|
|
|
|
|
match result {
|
2022-10-10 12:32:55 +00:00
|
|
|
Err(err) => {
|
|
|
|
let working_set = StateWorkingSet::new(engine_state);
|
2022-03-16 18:17:06 +00:00
|
|
|
|
2022-10-10 12:32:55 +00:00
|
|
|
report_error(&working_set, &err);
|
2022-03-16 18:17:06 +00:00
|
|
|
|
2022-10-10 12:32:55 +00:00
|
|
|
return false;
|
|
|
|
}
|
|
|
|
Ok(exit_code) => {
|
|
|
|
set_last_exit_code(stack, exit_code);
|
|
|
|
}
|
2022-03-16 18:17:06 +00:00
|
|
|
}
|
|
|
|
|
|
|
|
// Reset vt processing (i.e., ANSI support), because ill-behaved externals can break it
|
|
|
|
#[cfg(windows)]
|
|
|
|
{
|
|
|
|
let _ = enable_vt_processing();
|
|
|
|
}
|
|
|
|
}
|
|
|
|
Err(err) => {
|
2022-04-04 11:11:27 +00:00
|
|
|
set_last_exit_code(stack, 1);
|
2022-03-16 18:17:06 +00:00
|
|
|
|
|
|
|
let working_set = StateWorkingSet::new(engine_state);
|
|
|
|
|
|
|
|
report_error(&working_set, &err);
|
|
|
|
|
|
|
|
return false;
|
|
|
|
}
|
|
|
|
}
|
2023-01-24 20:28:59 +00:00
|
|
|
perf(
|
|
|
|
&format!("eval_source {}", &fname),
|
|
|
|
start_time,
|
|
|
|
file!(),
|
|
|
|
line!(),
|
|
|
|
column!(),
|
2023-02-01 23:03:05 +00:00
|
|
|
engine_state.get_config().use_ansi_coloring,
|
2023-01-24 20:28:59 +00:00
|
|
|
);
|
2022-03-16 18:17:06 +00:00
|
|
|
|
|
|
|
true
|
|
|
|
}
|
|
|
|
|
2022-04-04 11:11:27 +00:00
|
|
|
fn set_last_exit_code(stack: &mut Stack, exit_code: i64) {
|
|
|
|
stack.add_env_var(
|
|
|
|
"LAST_EXIT_CODE".to_string(),
|
Reduced LOC by replacing several instances of `Value::Int {}`, `Value::Float{}`, `Value::Bool {}`, and `Value::String {}` with `Value::int()`, `Value::float()`, `Value::boolean()` and `Value::string()` (#7412)
# Description
While perusing Value.rs, I noticed the `Value::int()`, `Value::float()`,
`Value::boolean()` and `Value::string()` constructors, which seem
designed to make it easier to construct various Values, but which aren't
used often at all in the codebase. So, using a few find-replaces
regexes, I increased their usage. This reduces overall LOC because
structures like this:
```
Value::Int {
val: a,
span: head
}
```
are changed into
```
Value::int(a, head)
```
and are respected as such by the project's formatter.
There are little readability concerns because the second argument to all
of these is `span`, and it's almost always extremely obvious which is
the span at every callsite.
# User-Facing Changes
None.
# Tests + Formatting
Don't forget to add tests that cover your changes.
Make sure you've run and fixed any issues with these commands:
- `cargo fmt --all -- --check` to check standard code formatting (`cargo
fmt --all` applies these changes)
- `cargo clippy --workspace -- -D warnings -D clippy::unwrap_used -A
clippy::needless_collect` to check that you're using the standard code
style
- `cargo test --workspace` to check that all tests pass
# After Submitting
If your PR had any user-facing changes, update [the
documentation](https://github.com/nushell/nushell.github.io) after the
PR is merged, if necessary. This will help us keep the docs up to date.
2022-12-09 16:37:51 +00:00
|
|
|
Value::int(exit_code, Span::unknown()),
|
2022-04-04 11:11:27 +00:00
|
|
|
);
|
|
|
|
}
|
|
|
|
|
2022-04-15 22:38:27 +00:00
|
|
|
#[cfg(test)]
|
|
|
|
mod test {
|
|
|
|
use super::*;
|
|
|
|
|
|
|
|
#[test]
|
|
|
|
fn test_gather_env_vars() {
|
|
|
|
let mut engine_state = EngineState::new();
|
|
|
|
let symbols = r##" !"#$%&'()*+,-./:;<=>?@[\]^_`{|}~"##;
|
|
|
|
|
|
|
|
gather_env_vars(
|
|
|
|
[
|
|
|
|
("FOO".into(), "foo".into()),
|
|
|
|
("SYMBOLS".into(), symbols.into()),
|
|
|
|
(symbols.into(), "symbols".into()),
|
|
|
|
]
|
|
|
|
.into_iter(),
|
|
|
|
&mut engine_state,
|
2022-06-10 18:01:08 +00:00
|
|
|
Path::new("t"),
|
2022-04-15 22:38:27 +00:00
|
|
|
);
|
|
|
|
|
2022-05-07 19:39:22 +00:00
|
|
|
let env = engine_state.render_env_vars();
|
2022-04-15 22:38:27 +00:00
|
|
|
|
2022-05-07 19:39:22 +00:00
|
|
|
assert!(
|
|
|
|
matches!(env.get(&"FOO".to_string()), Some(&Value::String { val, .. }) if val == "foo")
|
|
|
|
);
|
|
|
|
assert!(
|
|
|
|
matches!(env.get(&"SYMBOLS".to_string()), Some(&Value::String { val, .. }) if val == symbols)
|
|
|
|
);
|
|
|
|
assert!(
|
|
|
|
matches!(env.get(&symbols.to_string()), Some(&Value::String { val, .. }) if val == "symbols")
|
|
|
|
);
|
|
|
|
assert!(env.get(&"PWD".to_string()).is_some());
|
2022-04-15 22:38:27 +00:00
|
|
|
assert_eq!(env.len(), 4);
|
|
|
|
}
|
|
|
|
}
|