# Description
This speeds up writing messages to the plugin: previously, every
individual piece of a message (not even the whole message) was written
with its own syscall, leading to a lot of back and forth with the
kernel.
I learned this by running `strace` to debug something and saw a ton of
`write()` calls.
```nushell
# Before
> 1..10 | each { timeit { example seq 1 10000 | example sum } } | math avg
269ms 779µs 149ns
# After
> 1..10 | each { timeit { example seq 1 10000 | example sum } } | math avg
39ms 636µs 643ns
```
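The fix itself lives in the plugin interface code, but the general idea is easy to sketch in isolation: buffer the small writes in memory and flush once per complete message. (A minimal, standalone sketch; the message bytes below are made up.)
```rust
use std::io::{BufWriter, Write};

fn main() -> std::io::Result<()> {
    // Stand-in for the pipe to the plugin; anything implementing `Write` works.
    let sink: Vec<u8> = Vec::new();

    // Small writes now land in an in-memory buffer instead of each becoming a syscall.
    let mut out = BufWriter::new(sink);

    // Several small pieces of a single (made-up) message...
    out.write_all(b"{\"Data\":")?;
    out.write_all(b"42")?;
    out.write_all(b"}\n")?;

    // ...and one flush once the whole message has been written.
    out.flush()?;
    Ok(())
}
```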
# User-Facing Changes
- Performance improvement
# Tests + Formatting
- 🟢 `toolkit fmt`
- 🟢 `toolkit clippy`
- 🟢 `toolkit test`
- 🟢 `toolkit test stdlib`
# Description
Requested by @ayax79. This makes the custom value behavior more correct,
by calling the methods on the plugin to handle the custom values in
examples rather than the methods on the custom values themselves. This
helps for handle-type custom values (like what he's doing with
dataframes).
- Equality checking in `PluginTest::test_examples()` changed to use
`PluginInterface::custom_value_partial_cmp()`
- Base value rendering for `PluginSignature` changed to use
`Plugin::custom_value_to_base_value()`
  - This had to be moved closer to `serve_plugin` for that reason, so the
    test for writing signatures containing custom values was removed
  - That behavior should still be covered to some degree: if custom values
    aren't handled, signatures will fail to parse, so none of the other
    tests would work
# User-Facing Changes
- `Record::sort_cols()` method added to share functionality required by
`PartialCmp`, and it might also be slightly faster
- Otherwise, everything should mostly be the same but better. Plugins
that don't implement special handling for custom values will still work
the same way, because the default implementation is just a pass-through
to the `CustomValue` methods.
# Tests + Formatting
- 🟢 `toolkit fmt`
- 🟢 `toolkit clippy`
- 🟢 `toolkit test`
- 🟢 `toolkit test stdlib`
# Description
Because the plugin interface reader thread can be responsible for
sending a drop notification, it's possible for it to end up in a
deadlock where it's waiting for the response to the drop notification
call.
I decided that the best way to address this is to just discard the
response and not wait for it. It's not really important to synchronize
with the response to `Dropped`, so this is probably faster anyway.
cc @ayax79, this is your issue where polars is getting stuck
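To illustrate the shape of the fix (purely a sketch with made-up types, not the actual interface code): the reader thread registers the `Dropped` call with a target that simply discards the response, so it never blocks on a reply it would have to deliver to itself.
```rust
use std::sync::mpsc;

/// How a response should be routed when it arrives (illustrative only).
enum ResponseTarget {
    /// A caller is waiting on this channel for the response.
    Deliver(mpsc::Sender<String>),
    /// Nobody waits; the response is thrown away, so the sender can't deadlock on it.
    Discard,
}

fn route_response(target: &ResponseTarget, response: String) {
    match target {
        ResponseTarget::Deliver(tx) => {
            let _ = tx.send(response);
        }
        ResponseTarget::Discard => {
            // Intentionally ignore the response, e.g. for `Dropped`.
        }
    }
}

fn main() {
    let (tx, rx) = mpsc::channel();
    route_response(&ResponseTarget::Deliver(tx), "normal call response".into());
    assert_eq!(rx.recv().unwrap(), "normal call response");

    // The drop notification's response is registered as Discard up front.
    route_response(&ResponseTarget::Discard, "response to Dropped".into());
}
```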
# User-Facing Changes
- A bug fix
- Custom value plugin: `custom-value handle update` command
# Tests + Formatting
I tried to reproduce the deadlock by adding a test with a long pipeline
containing a lot of drops and running it over and over.
- 🟢 `toolkit fmt`
- 🟢 `toolkit clippy`
- 🟢 `toolkit test`
- 🟢 `toolkit test stdlib`
# Description
This keeps plugin custom values that have requested drop notification
around during the lifetime of a plugin call / stream by sending them to
a channel that gets persisted during the lifetime of the call.
Before this change, it was very likely that the drop notification would
be sent before the plugin ever had a chance to handle the value it
received.
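A toy model of the mechanism (not the real interface types): values get parked in a channel owned by the call, and only get dropped, and hence only send their notifications, when the call itself ends.
```rust
use std::sync::mpsc;

/// Stand-in for a custom value that sends a drop notification when dropped.
struct NotifyOnDrop(&'static str);

impl Drop for NotifyOnDrop {
    fn drop(&mut self) {
        println!("drop notification sent for {}", self.0);
    }
}

fn main() {
    // The sender goes wherever values are deserialized during the call;
    // the receiver is owned by the call itself and lives until it finishes.
    let (keep_alive, held_by_call) = mpsc::channel::<NotifyOnDrop>();

    // Park the value instead of letting it drop immediately, so the plugin
    // has a chance to actually use the handle it was sent.
    keep_alive.send(NotifyOnDrop("handle #1")).unwrap();
    println!("plugin call still running; no notification yet");

    // When the call ends, everything that was parked finally gets dropped.
    drop(keep_alive);
    for value in held_by_call {
        drop(value);
    }
    println!("plugin call finished");
}
```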
Tests have been added to make sure this works - see the `custom_values`
plugin.
cc @ayax79
# User-Facing Changes
This is basically just a bugfix, albeit a slightly big one.
However, I did add an `as_mut_any()` function for custom values to avoid
having to clone them, which is a breaking change.
- [x] `cargo hack` feature flag compatibility run
- [x] reedline released and pinned
- [x] `nu-plugin-test-support` added to release script
- [x] dependency tree checked
- [x] release notes
# Description
This changes the interface for plugins to always represent errors as
`LabeledError`s. This is good for altlang plugins, as it would suck for
them to have to implement and track `ShellError`. We save a lot of
generated code from the `ShellError` serde impl too, so `nu` and plugins
get to have a smaller binary size.
Reduces the release binary size by 1.2 MiB on my build configuration.
# User-Facing Changes
- Changes plugin protocol. `ShellError` no longer serialized.
- `ShellError` serialize output is different
- `ShellError` no longer deserializes to exactly the same value as
serialized
# Tests + Formatting
- 🟢 `toolkit fmt`
- 🟢 `toolkit clippy`
- 🟢 `toolkit test`
- 🟢 `toolkit test stdlib`
# After Submitting
- [ ] Document in plugin protocol reference
# Description
The second `Value` is redundant and will consume five extra bytes on
each transmission of a custom value to/from a plugin.
# User-Facing Changes
This is a breaking change to the plugin protocol.
The [example in the protocol
reference](https://www.nushell.sh/contributor-book/plugin_protocol_reference.html#value)
becomes
```json
{
  "Custom": {
    "val": {
      "type": "PluginCustomValue",
      "name": "database",
      "data": [36, 190, 127, 40, 12, 3, 46, 83],
      "notify_on_drop": true
    },
    "span": {
      "start": 320,
      "end": 340
    }
  }
}
```
instead of
```json
{
  "CustomValue": {
    ...
  }
}
```
# After Submitting
Update plugin protocol reference
# Description
This is something that was discussed in the core team meeting last
Wednesday. @ayax79 is building `nu-plugin-polars` with all of the
dataframe commands into a plugin, and there are a lot of them, so it
would help to make the API more similar. At the same time, I think the
`Command` API is just better anyway. I don't think the difference is
justified, and the types for core commands have the benefit of requiring
less `.into()` because they often don't own their data
- Broke `signature()` up into `name()`, `usage()`, `extra_usage()`,
`search_terms()`, `examples()`
- `signature()` returns `nu_protocol::Signature`
- `examples()` returns `Vec<nu_protocol::Example>`
- `PluginSignature` and `PluginExample` no longer need to be used by
plugin developers
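A self-contained paraphrase of what that surface looks like, using stand-in types so it stays independent of the real crates; the method names come from the list above, but the actual trait definition in `nu-plugin` will differ:
```rust
// Stand-ins for nu_protocol::Signature and nu_protocol::Example.
struct Signature;
struct Example;

// Not the real nu-plugin trait; just the shape described above.
trait PluginCommandSketch {
    // Broken out of the old all-in-one PluginSignature:
    fn name(&self) -> &str;
    fn usage(&self) -> &str;
    fn extra_usage(&self) -> &str {
        ""
    }
    fn search_terms(&self) -> Vec<&str> {
        Vec::new()
    }
    // These now return the same types core commands use:
    fn signature(&self) -> Signature;
    fn examples(&self) -> Vec<Example> {
        Vec::new()
    }
}
```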
# User-Facing Changes
Breaking API for plugins yet again 😄
# Description
When implementing a `Command`, one must also import all the types
present in the function signatures for `Command`. This makes it so that
we often import the same set of types in each command implementation
file. E.g., something like this:
```rust
use nu_protocol::ast::Call;
use nu_protocol::engine::{Command, EngineState, Stack};
use nu_protocol::{
    record, Category, Example, IntoInterruptiblePipelineData, IntoPipelineData, PipelineData,
    ShellError, Signature, Span, Type, Value,
};
```
This PR adds the `nu_engine::command_prelude` module which contains the
necessary and commonly used types to implement a `Command`:
```rust
// command_prelude.rs
pub use crate::CallExt;
pub use nu_protocol::{
    ast::{Call, CellPath},
    engine::{Command, EngineState, Stack},
    record, Category, Example, IntoInterruptiblePipelineData, IntoPipelineData, IntoSpanned,
    PipelineData, Record, ShellError, Signature, Span, Spanned, SyntaxShape, Type, Value,
};
```
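Presumably a command file can then replace its whole import block with a single glob import (a sketch of intended usage, not necessarily the exact style used in-tree):
```rust
// One line instead of the import block shown above.
use nu_engine::command_prelude::*;
```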
This should reduce the boilerplate needed to implement a command and
also gives us a place to track the breadth of the `Command` API. I tried
to be conservative with what went into the prelude modules, since it
might be hard/annoying to remove items from the prelude in the future.
Let me know if something should be included or excluded.
# Description
@WindSoilder [reported on
Discord](https://discord.com/channels/601130461678272522/855947301380947968/1221233630093901834)
that some plugin stream tests have been failing on the CI. It seems to
just be a timing thing (probably due to busy CI), so this extends the
amount of time that we can wait for a condition to be true.
# User-Facing Changes
None
# Tests + Formatting
- 🟢 `toolkit fmt`
- 🟢 `toolkit clippy`
- 🟢 `toolkit test`
- 🟢 `toolkit test stdlib`
# Description
In the past, there wasn't really a good way to implement a
command-group-style command (e.g. `from`, `query`, etc.) that just
returns its help text even when `--help` is not passed. This adds a new
engine call that does exactly that.
This is actually something I ran into before when developing the dbus
plugin, so it's nice to fix it.
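From the plugin side, usage presumably looks something like the sketch below; the `get_help()` method name and its return type are assumptions based on the `GetHelp` engine call described above.
```rust
use nu_plugin::EngineInterface;
use nu_protocol::ShellError;

// Hedged sketch: a bare group command (like `from` or `query`) can just ask
// the engine for its own rendered help text and return that as its output.
fn help_for_group_command(engine: &EngineInterface) -> Result<String, ShellError> {
    engine.get_help() // method name is an assumption
}
```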
# User-Facing Changes
# Tests + Formatting
- 🟢 `toolkit fmt`
- 🟢 `toolkit clippy`
- 🟢 `toolkit test`
- 🟢 `toolkit test stdlib`
# After Submitting
- [ ] Document `GetHelp` engine call in proto
# Description
Adds a `nu-plugin-test-support` crate with an interface that supports
testing plugins.
Unlike in a real setup, these plugins run in the same process, on
separate threads. This makes it easy to test aspects of the plugin's
internal state and to handle serialized plugin custom values.
Custom values are still serialized, and all of the engine-to-plugin
logic is still in play, so from a logical perspective this should still
expose any bugs that layer would have caused.
The only differences are that the plugin doesn't run in a separate
process, and nothing is serialized to the final wire format for
stdin/stdout.
TODO still:
- [x] Clean up warnings about private types exposed in trait definition
- [x] Automatically deserialize plugin custom values in the result so
they can be inspected
- [x] Automatic plugin examples test function
- [x] Write a bit more documentation
- [x] More tests
- [x] Add MIT License file to new crate
# User-Facing Changes
Plugin developers get a nice way to test their plugins.
# Tests + Formatting
Run the tests with `cargo test -p nu-plugin-test-support --
--show-output` to see some examples of what the failing test output for
examples can look like. I used the `difference` crate (MIT licensed) to
make it look nice.
- 🟢 `toolkit fmt`
- 🟢 `toolkit clippy`
- 🟢 `toolkit test`
- 🟢 `toolkit test stdlib`
# After Submitting
- [ ] Add a section to the book about testing
- [ ] Test some of the example plugins this way
- [ ] Add example tests to nu_plugin_template so plugin developers have
something to start with
# Description
Just a bunch of miscellaneous fixes to the Rust documentation that I
found recently while doing
a pass on some things.
# User-Facing Changes
None
# Description
This makes `LabeledError` much more capable of representing close to
everything a `miette::Diagnostic` can, including `ShellError`, and
allows plugins to generate multiple error spans, codes, help, etc.
`LabeledError` is now embeddable within `ShellError` as a transparent
variant.
This could also be used to improve `error make` and `try/catch` to
reflect `LabeledError` exactly in the future.
Also cleaned up some errors in existing plugins.
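For a rough idea of what this enables on the plugin side (a hedged sketch; the builder methods shown follow the pattern this PR introduces, but treat the exact set as approximate):
```rust
use nu_protocol::{LabeledError, Span};

// A plugin error can now carry a labeled span, an error code, and help text,
// much like a miette::Diagnostic / ShellError would.
fn negative_argument_error(span: Span) -> LabeledError {
    LabeledError::new("Invalid argument")
        .with_label("this argument can't be negative", span)
        .with_code("my_plugin::invalid_argument") // hypothetical code string
        .with_help("pass a value of zero or more")
}
```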
# User-Facing Changes
Breaking change for plugins. Nicer errors for users.
# Description
It was a bit ugly that when new `EngineCall`s or response types were
added, they needed to be added to multiple places with redundant code
just to change the types, even if they didn't have any stream content.
This fixes that and locates all of that logic in one place.
# User-Facing Changes
None
# Tests + Formatting
- 🟢 `toolkit fmt`
- 🟢 `toolkit clippy`
- 🟢 `toolkit test`
- 🟢 `toolkit test stdlib`
[Context on
Discord](https://discord.com/channels/601130461678272522/855947301380947968/1219425984990806207)
# Description
- Rename `CustomValue::value_string()` to `type_name()` to reflect its
usage better.
- Change print behavior to always call `to_base_value()` first, to give
the custom value better control over the output.
- Change `describe --detailed` to show the type name as the subtype,
rather than trying to describe the base value.
- Change custom `Type` to use `type_name()` rather than `typetag_name()`
to make things like `PluginCustomValue` more transparent
One question: should `describe --detailed` still include a description
of the base value somewhere? I'm torn on it, it seems possibly useful
for some things (maybe sqlite databases?), but having `describe -d` not
include the custom type name anywhere felt weird. Another option would
be to add another method to `CustomValue` for info to be displayed in
`describe`, so that it can be more type-specific?
# User-Facing Changes
Everything above has implications for printing and `describe` on custom
values
# Tests + Formatting
- 🟢 `toolkit fmt`
- 🟢 `toolkit clippy`
- 🟢 `toolkit test`
- 🟢 `toolkit test stdlib`
# Description
@ayax79 says that the dataframe commands all have dataframe custom
values in their examples, and they're used for tests.
Rather than sending the custom values to the engine, if they're in
examples, this change just renders them using `to_base_value()` first.
That way we avoid potentially having to hold onto custom values in
`plugins.nu` that might not be valid indefinitely - as will be the case
for dataframes in particular - while still not forcing plugin writers
to avoid custom values in their examples.
# User-Facing Changes
- Custom values usable in plugin examples
# Tests + Formatting
- 🟢 `toolkit fmt`
- 🟢 `toolkit clippy`
- 🟢 `toolkit test`
- 🟢 `toolkit test stdlib`
# After Submitting
# Description
We do a lot of visiting contained values in the serialization / validity
functions within `PluginCustomValue` utils. This adds
`Value::recurse_mut()` which wraps up most of that logic into something
that can be reused.
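To make the idea concrete, here is an illustration with a toy value type; this is only a paraphrase of the concept, not `nu_protocol::Value::recurse_mut()`'s actual signature.
```rust
// Toy value type, standing in for nu_protocol::Value.
enum Value {
    Int(i64),
    List(Vec<Value>),
    Record(Vec<(String, Value)>),
}

impl Value {
    /// Apply `f` to this value and to every value nested inside it.
    fn recurse_mut(&mut self, f: &mut impl FnMut(&mut Value)) {
        f(self);
        match self {
            Value::Int(_) => {}
            Value::List(items) => {
                for item in items {
                    item.recurse_mut(f);
                }
            }
            Value::Record(fields) => {
                for (_, value) in fields {
                    value.recurse_mut(f);
                }
            }
        }
    }
}

fn main() {
    let mut value = Value::List(vec![
        Value::Int(1),
        Value::Record(vec![("x".into(), Value::Int(2))]),
    ]);

    // The serialization and validation passes can now share one traversal.
    let mut ints_seen = 0;
    value.recurse_mut(&mut |v| {
        if let Value::Int(_) = v {
            ints_seen += 1;
        }
    });
    assert_eq!(ints_seen, 2);
}
```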
# Description
Adds the `AddEnvVar` plugin call, which allows plugins to set
environment variables in the caller's scope. This is the first engine
call that mutates the caller's stack, and opens the door to more
operations like this if needed.
This also comes with an extra benefit: in doing this, I needed to
refactor how context was handled, and I was able to avoid cloning
`EngineInterface` / `Stack` / `Call` in most cases that plugin calls are
used. They now only need to be cloned if the plugin call returns a
stream. The performance increase is welcome (5.5x faster on `inc`!):
```nushell
# Before
> timeit { 1..100 | each { |i| $"2.0.($i)" | inc -p } }
405ms 941µs 952ns
# After
> timeit { 1..100 | each { |i| $"2.0.($i)" | inc -p } }
73ms 68µs 749ns
```
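For plugin authors, using the new call might look roughly like this (hedged sketch; it assumes the engine call is surfaced as `EngineInterface::add_env_var(name, value)`, matching the `add_env_var()` name listed below):
```rust
use nu_plugin::EngineInterface;
use nu_protocol::{ShellError, Span, Value};

// Sets $env.LAST_PROCESSED in the *caller's* scope, not just inside the
// plugin process (method signature is an assumption).
fn remember_last_processed(engine: &EngineInterface, path: &str) -> Result<(), ShellError> {
    engine.add_env_var("LAST_PROCESSED", Value::string(path, Span::unknown()))
}
```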
# User-Facing Changes
- New engine call: `add_env_var()`
- Performance enhancement for plugin calls
# Tests + Formatting
- 🟢 `toolkit fmt`
- 🟢 `toolkit clippy`
- 🟢 `toolkit test`
- 🟢 `toolkit test stdlib`
# After Submitting
- [x] Document env manipulation in plugins guide
- [x] Document `AddEnvVar` in plugin protocol
[Context on
Discord](https://discord.com/channels/601130461678272522/855947301380947968/1216517833312309419)
# Description
This is a significant breaking change to the plugin API, but one I think
is worthwhile. @ayax79 mentioned on Discord that while trying to start
on a dataframes plugin, he was a little disappointed that more wasn't
provided in terms of code organization for commands, particularly since
there are *a lot* of `dfr` commands.
This change treats plugins more like miniatures of the engine, with
dispatch of the command name being handled inherently, each command
being its own type, and each having its own signature within the trait
impl for the command type, rather than having to find a way to
centralize it all into one `Vec`.
For the example plugins that have multiple commands, I definitely like
how this looks a lot better. This encourages doing code organization the
right way and feels very good.
For the plugins that have only one command, it's just a little bit more
boilerplate - but still worth it, in my opinion.
The `Box<dyn PluginCommand<Plugin = Self>>` type in `commands()` is a
little bit hairy, particularly for Rust beginners, but ultimately not so
bad, and it gives the desired flexibility for shared state for a whole
plugin + the individual commands.
# User-Facing Changes
Pretty big breaking change to plugin API, but probably one that's worth
making.
```rust
use nu_plugin::*;
use nu_protocol::{PluginSignature, PipelineData, Type, Value};

struct LowercasePlugin;
struct Lowercase;

// Plugins can now have multiple commands
impl PluginCommand for Lowercase {
    type Plugin = LowercasePlugin;

    // The signature lives with the command
    fn signature(&self) -> PluginSignature {
        PluginSignature::build("lowercase")
            .usage("Convert each string in a stream to lowercase")
            .input_output_type(Type::List(Type::String.into()), Type::List(Type::String.into()))
    }

    // We also provide SimplePluginCommand which operates on Value like before
    fn run(
        &self,
        plugin: &LowercasePlugin,
        engine: &EngineInterface,
        call: &EvaluatedCall,
        input: PipelineData,
    ) -> Result<PipelineData, LabeledError> {
        let span = call.head;
        Ok(input.map(move |value| {
            value.as_str()
                .map(|string| Value::string(string.to_lowercase(), span))
                // Errors in a stream should be returned as values.
                .unwrap_or_else(|err| Value::error(err, span))
        }, None)?)
    }
}

// Plugin now just has a list of commands, and the custom value op stuff still goes here
impl Plugin for LowercasePlugin {
    fn commands(&self) -> Vec<Box<dyn PluginCommand<Plugin = Self>>> {
        vec![Box::new(Lowercase)]
    }
}

fn main() {
    serve_plugin(&LowercasePlugin{}, MsgPackSerializer)
}
```
Time this however you like - we're already breaking stuff for 0.92, so
it might be good to do it now, but if it feels like a lot all at once,
it could wait.
# Tests + Formatting
- 🟢 `toolkit fmt`
- 🟢 `toolkit clippy`
- 🟢 `toolkit test`
- 🟢 `toolkit test stdlib`
# After Submitting
- [ ] Update examples in the book
- [x] Fix #12088 to match - this change would actually simplify it a
lot, because the methods are currently just duplicated between `Plugin`
and `StreamingPlugin`, but they only need to be on `Plugin` with this
change
# Description
The PR overhauls how IO redirection is handled, allowing more explicit
and fine-grain control over `stdout` and `stderr` output as well as more
efficient IO and piping.
To summarize the changes in this PR:
- Added a new `IoStream` type to indicate the intended destination for a
pipeline element's `stdout` and `stderr`.
- The `stdout` and `stderr` `IoStream`s are stored in the `Stack` to
avoid adding 6 additional arguments to every eval function and
`Command::run`. The `stdout` and `stderr` streams can be temporarily
overwritten through functions on `Stack`, and these functions return
a guard that restores the original `stdout` and `stderr` when dropped.
- In the AST, redirections are now directly part of a `PipelineElement`
as an `Option<Redirection>` field instead of having multiple different
`PipelineElement` enum variants for each kind of redirection. This
required changes to the parser, mainly in `lite_parser.rs`.
- `Command`s can also set an `IoStream` override/redirection which will
apply to the previous command in the pipeline. This is used, for
example, in `ignore` to allow the previous external command to have its
stdout redirected to `Stdio::null()` at spawn time. In contrast, the
current implementation has to create an OS pipe and manually consume the
output on nushell's side. File and pipe redirections (`o>`, `e>`, `e>|`,
etc.) have precedence over overrides from commands.
This PR improves piping and IO speed, partially addressing #10763. Using
the `throughput` command from that issue, this PR gives the following
speedup on my setup for the commands below:
| Command                     | Before (MB/s) | After (MB/s) | Bash (MB/s) |
| --------------------------- | ------------: | -----------: | ----------: |
| `throughput o> /dev/null`   | 1169          | 52938        | 54305       |
| `throughput \| ignore`      | 840           | 55438        | N/A         |
| `throughput \| null`        | Error         | 53617        | N/A         |
| `throughput \| rg 'x'`      | 1165          | 3049         | 3736        |
| `(throughput) \| rg 'x'`    | 810           | 3085         | 3815        |
(Numbers above are the median samples for throughput)
This PR also paves the way to refactor our `ExternalStream` handling in
the various commands. For example, this PR already fixes the following
code:
```nushell
^sh -c 'echo -n "hello "; sleep 0; echo "world"' | find "hello world"
```
This returns an empty list on 0.90.1 and returns a highlighted "hello
world" on this PR.
Since the `stdout` and `stderr` `IoStream`s are available to commands
when they are run, this unlocks the potential for more convenient
behavior. E.g., the `find` command can disable its ansi highlighting if
it detects that the output `IoStream` is not the terminal. Knowing the
output streams will also allow background job output to be redirected
more easily and efficiently.
# User-Facing Changes
- External commands returned from closures will be collected (in most
cases):
```nushell
1..2 | each {|_| nu -c "print a" }
```
This gives `["a", "a"]` on this PR, whereas this used to print "a\na\n"
and then return an empty list.
```nushell
1..2 | each {|_| nu -c "print -e a" }
```
This gives `["", ""]` and prints "a\na\n" to stderr, whereas this used
to return an empty list and print "a\na\n" to stderr.
- Trailing newlines are always trimmed for external commands when
piping into internal commands or collecting the output as a value. (Failure to
decode the output as utf-8 will keep the trailing newline for the last
binary value.) In the current nushell version, the following three code
snippets differ only in parenthesis placement, but they all also have
different outputs:
1. `1..2 | each { ^echo a }`
```
a
a
╭────────────╮
│ empty list │
╰────────────╯
```
2. `1..2 | each { (^echo a) }`
```
╭───┬───╮
│ 0 │ a │
│ 1 │ a │
╰───┴───╯
```
3. `1..2 | (each { ^echo a })`
```
╭───┬───╮
│ 0 │ a │
│ │ │
│ 1 │ a │
│ │ │
╰───┴───╯
```
But in this PR, the above snippets will all have the same output:
```
╭───┬───╮
│ 0 │ a │
│ 1 │ a │
╰───┴───╯
```
- All existing flags on `run-external` are now deprecated.
- File redirections now apply to all commands inside a code block:
```nushell
(nu -c "print -e a"; nu -c "print -e b") e> test.out
```
This gives "a\nb\n" in `test.out` and prints nothing. The same result
would happen when printing to stdout and using a `o>` file redirection.
- External command output will (almost) never be ignored, and ignoring
output must be explicit now:
```nushell
(^echo a; ^echo b)
```
This prints "a\nb\n", whereas this used to print only "b\n". This only
applies to external commands; values and internal commands not in return
position will not print anything (e.g., `(echo a; echo b)` still only
prints "b").
- `complete` now always captures stderr (`do` is not necessary).
# After Submitting
The language guide and other documentation will need to be updated.
# Description
I get a warning message when running tests:
```
warning: unused import: `Feature`
--> crates/nu-plugin/src/protocol/mod.rs:21:25
|
21 | pub use protocol_info::{Feature, Protocol};
| ^^^^^^^
|
= note: `#[warn(unused_imports)]` on by default
```
I think it's unused and can be removed.
# Description
`rmp_serde` has two kinds of errors that contain I/O errors, and an EOF
can occur inside either of them, but we were only treating an EOF inside
an `InvalidMarkerRead` as an EOF, which would make sense for the
beginning of a message.
However, we should also treat an incomplete message + EOF as an EOF.
There isn't really any point in reporting that an EOF was received
mid-message.
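A hedged sketch of the kind of check involved, treating an unexpected EOF as end-of-input whether it happens at a message boundary or mid-message (the two `rmp_serde` error variants are matched on their wrapped I/O errors):
```rust
use std::io::ErrorKind;

fn is_eof(err: &rmp_serde::decode::Error) -> bool {
    use rmp_serde::decode::Error;
    match err {
        // EOF before the message's first marker was read (a clean end)...
        Error::InvalidMarkerRead(io_err)
        // ...or EOF partway through a message; either way the stream is done.
        | Error::InvalidDataRead(io_err) => io_err.kind() == ErrorKind::UnexpectedEof,
        _ => false,
    }
}
```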
This should fix the issue where the
`seq_describe_no_collect_succeeds_without_error` test would sometimes
fail, as doing a `describe --no-collect` followed by nushell exiting
could (but was not guaranteed to) cause this exact scenario.
# User-Facing Changes
This will probably remove spurious `read error` messages from plugins
after `nu` exits.
# Tests + Formatting
- 🟢 `toolkit fmt`
- 🟢 `toolkit clippy`
- 🟢 `toolkit test`
- 🟢 `toolkit test stdlib`
# After Submitting
# Description
There were two problems in `PersistentPlugin` which could cause a
deadlock:
1. There were two mutexes being used, and `get()` could potentially hold
both simultaneously if it had to spawn. This won't necessarily cause a
deadlock on its own, but it does mean that lock order is sensitive
2. `set_gc_config()` called `flush()` while still holding the lock,
meaning that the GC thread had to proceed before the lock was released.
However, waiting for the GC thread to proceed could mean waiting for the
GC thread to call `stop()`, which itself would try to lock the mutex.
So, it's not safe to wait for the GC thread while the lock is held. This
is fixed now.
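The shape of the fix for the second problem, as an illustrative sketch with made-up types: update the shared state under the lock, then release the lock before waiting on the GC thread, since that thread may itself need the same mutex.
```rust
use std::sync::{Arc, Mutex};

struct PluginState {
    gc_enabled: bool,
}

fn set_gc_config(state: &Arc<Mutex<PluginState>>, enabled: bool, flush: impl FnOnce()) {
    {
        // Hold the lock only long enough to update the config...
        let mut guard = state.lock().expect("mutex poisoned");
        guard.gc_enabled = enabled;
    } // ...and release it here, before any cross-thread waiting.

    // Now it's safe to wait for the GC thread to catch up: if it needs to
    // lock the same mutex (e.g. to call stop()), nothing is blocking it.
    flush();
}

fn main() {
    let state = Arc::new(Mutex::new(PluginState { gc_enabled: true }));
    set_gc_config(&state, false, || println!("gc thread caught up"));
}
```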
I've also reverted #12177, as @IanManske reported that this was also
happening for him on Linux, and it seems to be this problem which should
not be platform-specific at all. I believe this solves it.
# User-Facing Changes
None
# Tests + Formatting
- 🟢 `toolkit fmt`
- 🟢 `toolkit clippy`
- 🟢 `toolkit test`
- 🟢 `toolkit test stdlib`
# After Submitting
# Description
This adds three engine calls: `GetEnvVar`, `GetEnvVars`, for getting
environment variables from the plugin command context, and
`GetCurrentDir` for getting the current working directory.
Plugins are now launched in the directory of their executable, to make
improper use of the current directory (without first setting it) more
obvious. Plugins previously launched in whatever the current directory
of the engine was at the time the plugin command was run, but switching
to persistent plugins broke this, because they stay in whatever
directory they launched in initially.
This also fixes the `gstat` plugin to use `get_current_dir()` to
determine its repo location, which was directly affected by this
problem.
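From a plugin's perspective, the pattern presumably looks like the following sketch; the `get_current_dir()` method name and return type are assumptions based on the `GetCurrentDir` engine call above.
```rust
use std::path::PathBuf;

use nu_plugin::EngineInterface;
use nu_protocol::ShellError;

// The plugin process no longer starts in the caller's directory, so ask the
// engine for it instead of relying on std::env::current_dir().
fn caller_cwd(engine: &EngineInterface) -> Result<PathBuf, ShellError> {
    let cwd = engine.get_current_dir()?; // assumed to return Result<String, ShellError>
    Ok(PathBuf::from(cwd))
}
```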
# User-Facing Changes
- Adds new engine calls (`GetEnvVar`, `GetEnvVars`, `GetCurrentDir`)
- Runs plugins in a different directory from before, in order to catch
bugs
- Plugins will have to use the new engine calls if they do filesystem
stuff to work properly
# Tests + Formatting
- 🟢 `toolkit fmt`
- 🟢 `toolkit clippy`
- 🟢 `toolkit test`
- 🟢 `toolkit test stdlib`
# After Submitting
- [ ] Document the working directory behavior on plugin launch
- [ ] Document the new engine calls + response type (`ValueMap`)
# Description
This causes `PersistentPlugin` to wait for the plugin garbage collector
to actually receive and process its config when setting it in
`set_gc_config`.
The motivation behind doing this is to make setting GC config in scripts
more deterministic. Before this change we couldn't really guarantee that
the GC could see your config before you started doing other things.
There is a slight cost to performance to doing this - we set config
before each plugin call because we don't necessarily know that it
reflects what's in `$env.config`, and now to do that we have to
synchronize with the GC thread.
This was probably the cause of spuriously failing tests as mentioned by
@sholderbach. Hopefully this fixes it. It might be the case that
launching threads on some platforms (or just on a really busy test
runner) sometimes takes a significant amount of time.
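The synchronization itself is simple to sketch with made-up types: deliver the config to the GC thread along with an acknowledgement channel, and block until the acknowledgement comes back.
```rust
use std::sync::mpsc;
use std::thread;

fn main() {
    // Each config update carries a one-shot acknowledgement channel.
    let (cfg_tx, cfg_rx) = mpsc::channel::<(bool, mpsc::Sender<()>)>();

    let gc_thread = thread::spawn(move || {
        for (enabled, ack) in cfg_rx {
            println!("gc thread: applied config, enabled = {enabled}");
            let _ = ack.send(()); // acknowledge only after actually applying it
        }
    });

    // set_gc_config(): send the new config, then wait for the acknowledgement,
    // so later steps in a script see consistent GC behavior.
    let (ack_tx, ack_rx) = mpsc::channel();
    cfg_tx.send((false, ack_tx)).unwrap();
    ack_rx.recv().unwrap();
    println!("engine: GC thread has definitely seen the new config");

    drop(cfg_tx);
    gc_thread.join().unwrap();
}
```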
# User-Facing Changes
- possibly slightly worse performance for plugin calls
# Description
Adds support for the following operations on plugin custom values, in
addition to `to_base_value` which was already present:
- `follow_path_int()`
- `follow_path_string()`
- `partial_cmp()`
- `operation()`
- `Drop` (notification, if opted into with
`CustomValue::notify_plugin_on_drop`)
There are additionally customizable methods within the `Plugin` and
`StreamingPlugin` traits for implementing these operations in a way that
requires access to plugin state, as a registered-handle model (such as
might be used in a dataframes plugin) would.
`Value::append` was also changed to handle custom values correctly.
# User-Facing Changes
- Signature of `CustomValue::follow_path_string` and
`CustomValue::follow_path_int` changed to give access to the span of the
custom value itself, useful for some errors.
- Plugins using custom values have to be recompiled because the engine
will try to do custom value operations that aren't supported
- Plugins can do more things 🎉
# Tests + Formatting
Tests were added for all of the new custom values functionality.
- 🟢 `toolkit fmt`
- 🟢 `toolkit clippy`
- 🟢 `toolkit test`
- 🟢 `toolkit test stdlib`
# After Submitting
- [ ] Document protocol reference `CustomValueOp` variants:
- [ ] `FollowPathInt`
- [ ] `FollowPathString`
- [ ] `PartialCmp`
- [ ] `Operation`
- [ ] `Dropped`
- [ ] Document `notify_on_drop` optional field in `PluginCustomValue`
# Description
This PR uses the new plugin protocol to intelligently keep plugin
processes running in the background for further plugin calls.
Running plugins can be seen by running the new `plugin list` command,
and stopped by running the new `plugin stop` command.
This is an enhancement for the performance of plugins, as starting new
plugin processes has overhead, especially for plugins in languages that
take a significant amount of time on startup. It also enables plugins
that have persistent state between commands, making the migration of
features like dataframes and `stor` to plugins possible.
Plugins are automatically stopped by the new plugin garbage collector,
configurable with `$env.config.plugin_gc`:
```nushell
$env.config.plugin_gc = {
    # Configuration for plugin garbage collection
    default: {
        enabled: true # true to enable stopping of inactive plugins
        stop_after: 10sec # how long to wait after a plugin is inactive to stop it
    }
    plugins: {
        # alternate configuration for specific plugins, by name, for example:
        #
        # gstat: {
        #     enabled: false
        # }
    }
}
```
If garbage collection is enabled, plugins will be stopped once
`stop_after` has passed since they were last active. Plugins are counted
as inactive if they have no running plugin calls. Reading the stream
from the response of a plugin call is still considered activity, but if
a plugin holds on to a stream and the call ends without an active
streaming response, it is not counted as active even if the plugin is
still reading it. Plugins can explicitly disable the GC as appropriate
with `engine.set_gc_disabled(true)`.
The `version` command now lists plugin names rather than plugin
commands. The list of plugin commands is accessible via `plugin list`.
Recommend doing this together with #12029, because it will likely force
plugin developers to do the right thing with mutability and lead to less
unexpected behavior when running plugins nested / in parallel.
# User-Facing Changes
- new command: `plugin list`
- new command: `plugin stop`
- changed command: `version` (now lists plugin names, rather than
commands)
- new config: `$env.config.plugin_gc`
- Plugins will keep running and be reused, at least for the configured
GC period
- Plugins that used mutable state in weird ways like `inc` did might
misbehave until fixed
- Plugins can disable GC if they need to
- Had to change plugin signature to accept `&EngineInterface` so that
the GC disable feature works. #12029 does this anyway, and I'm expecting
(resolvable) conflicts with that
# Tests + Formatting
- 🟢 `toolkit fmt`
- 🟢 `toolkit clippy`
- 🟢 `toolkit test`
- 🟢 `toolkit test stdlib`
Because there is some specific OS behavior required for plugins to not
respond to Ctrl-C directly, I've developed against and tested on both
Linux and Windows to ensure that works properly.
# After Submitting
I think this probably needs to be in the book somewhere
# Description
This allows plugins to make calls back to the engine to get config,
evaluate closures, and do other things that must be done within the
engine process.
Engine calls can both produce and consume streams as necessary. Closures
passed to plugins can both accept stream input and produce stream output
sent back to the plugin.
Engine calls referring to a plugin call's context can be processed as
long as either the response hasn't been received yet, or the response
created streams that haven't ended yet.
This is a breaking API change for plugins. There are some pretty major
changes to the interface that plugins must implement, including:
1. Plugins now run with `&self` and must be `Sync`. Executing multiple
plugin calls in parallel is supported, and there's a chance that a
closure passed to a plugin could invoke the same plugin. Supporting
state across plugin invocations is left up to the plugin author to do in
whichever way they feel best, but the plugin object itself is still
shared. Even though the engine doesn't run multiple plugin calls through
the same process yet, I still considered it important to break the API
in this way at this stage. We might want to consider an optional
threadpool feature for performance.
2. Plugins take a reference to `EngineInterface`, which can be cloned.
This interface allows plugins to make calls back to the engine,
including for getting config and running closures.
3. Plugins no longer take the `config` parameter. This can be accessed
from the interface via the `.get_plugin_config()` engine call.
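As a sketch of point 3 (the method name comes from the list above; the exact signature is an assumption):
```rust
use nu_plugin::EngineInterface;
use nu_protocol::{ShellError, Value};

// Returns whatever the user put under $env.config.plugins.<plugin name>, if anything.
fn plugin_config(engine: &EngineInterface) -> Result<Option<Value>, ShellError> {
    engine.get_plugin_config()
}
```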
# User-Facing Changes
Not only does this have plugin protocol changes, it will require plugins
to make some code changes before they will work again. But on the plus
side, the engine call feature is extensible, and we can add more things
to it as needed.
Plugin maintainers will have to change the trait signature at the very
least. If they were using `config`, they will have to call
`engine.get_plugin_config()` instead.
If they were using the mutable reference to the plugin, they will have
to come up with some strategy to work around it (for example, for `Inc`
I just cloned it). This shouldn't be such a big deal at the moment as
it's not like plugins have ever run as daemons with persistent state in
the past, and they don't in this PR either. But I thought it was
important to make the change before we support plugins as daemons, as an
exclusive mutable reference is not compatible with parallel plugin
calls.
I suggest this gets merged sometime *after* the current pending release,
so that we have some time to adjust to the previous plugin protocol
changes that don't require code changes before making ones that do.
# Tests + Formatting
- 🟢 `toolkit fmt`
- 🟢 `toolkit clippy`
- 🟢 `toolkit test`
- 🟢 `toolkit test stdlib`
# After Submitting
I will document the additional protocol features (`EngineCall`,
`EngineCallResponse`), and constraints on plugin call processing if
engine calls are used - basically, to be aware that an engine call could
result in a nested plugin call, so the plugin should be able to handle
that.
# Description
This PR adds a new evaluator path with callbacks to a mutable trait
object implementing a Debugger trait. The trait object can do anything,
e.g., profiling, code coverage, step debugging. Currently,
entering/leaving a block and a pipeline element is marked with
callbacks, but more callbacks can be added as necessary. Not all
callbacks need to be used by all debuggers; unused ones are simply empty
calls. A simple profiler is implemented as a proof of concept.
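To illustrate the callback idea in isolation (this is a self-contained paraphrase, not Nushell's actual `Debugger` trait, which is wired in through a generic parameter as described below):
```rust
// Callbacks fire at well-defined evaluation points; unused ones default to no-ops.
trait Debugger {
    fn enter_block(&mut self) {}
    fn leave_block(&mut self) {}
    fn enter_element(&mut self, _source: &str) {}
    fn leave_element(&mut self, _source: &str) {}
}

// A do-nothing debugger for normal evaluation.
struct NoopDebugger;
impl Debugger for NoopDebugger {}

// A toy profiler that only cares about one callback.
struct Profiler {
    elements_seen: usize,
}
impl Debugger for Profiler {
    fn enter_element(&mut self, _source: &str) {
        self.elements_seen += 1;
    }
}

fn eval_pipeline(elements: &[&str], debugger: &mut dyn Debugger) {
    debugger.enter_block();
    for element in elements {
        debugger.enter_element(element);
        // ...evaluate the element here...
        debugger.leave_element(element);
    }
    debugger.leave_block();
}

fn main() {
    let mut profiler = Profiler { elements_seen: 0 };
    eval_pipeline(&["ls", "each {|row| $row.name }"], &mut profiler);
    assert_eq!(profiler.elements_seen, 2);
}
```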
The debugging support is implemented by making `eval_xxx()` functions
generic over whether we're debugging or not. This has zero computational
overhead, but makes the binary slightly larger (see benchmarks below).
`eval_xxx()` variants called from commands (like
`eval_block_with_early_return()` in `each`) are chosen with dynamic
dispatch for two reasons: to avoid growing the binary size by
duplicating the code of many commands, and because generics there aren't
possible anyway, since they would make `Command` trait objects
object-unsafe.
In the future, I hope it will be possible to allow plugin callbacks such
that users would be able to implement their profiler plugins instead of
having to recompile Nushell.
[DAP](https://microsoft.github.io/debug-adapter-protocol/) would also be
interesting to explore.
Try `help debug profile`.
## Screenshots
Basic output:
![profiler_new](https://github.com/nushell/nushell/assets/25571562/418b9df0-b659-4dcb-b023-2d5fcef2c865)
To profile with more granularity, increase the profiler depth (you'll
see that repeated `is-windows` calls take a large chunk of total time,
making it a good candidate for optimizing):
![profiler_new_m3](https://github.com/nushell/nushell/assets/25571562/636d756d-5d56-460c-a372-14716f65f37f)
## Benchmarks
### Binary size
Binary size increase vs. main: **+40360 bytes**. _(Both built with
`--release --features=extra,dataframe`.)_
### Time
```nushell
# bench_debug.nu
use std bench
let test = {
    1..100
    | each {
        ls | each {|row| $row.name | str length }
    }
    | flatten
    | math avg
}
print 'debug:'
let res2 = bench { debug profile $test } --pretty
print $res2
```
```nushell
# bench_nodebug.nu
use std bench
let test = {
    1..100
    | each {
        ls | each {|row| $row.name | str length }
    }
    | flatten
    | math avg
}
print 'no debug:'
let res1 = bench { do $test } --pretty
print $res1
```
`cargo run --release -- bench_debug.nu` is consistently 1-2 ms slower
than `cargo run --release -- bench_nodebug.nu` due to the collection
overhead plus gathering the report. This is expected. When gathering
more stuff, the overhead is obviously higher.
Comparing `cargo run --release -- bench_nodebug.nu` with
`nu bench_nodebug.nu`, I didn't measure any difference. Both benchmarks
report times between 97 and 103 ms randomly, without one being
consistently higher than the other. This suggests that, at least in this
particular case, there is no runtime overhead when not running any
debugger.
## API changes
This PR adds a generic parameter to all `eval_xxx` functions that forces
you to specify whether you use the debugger. You can resolve it in two
ways:
* Use a provided helper that will figure it out for you. If you wanted
to use `eval_block(&engine_state, ...)`, call `let eval_block =
get_eval_block(&engine_state); eval_block(&engine_state, ...)`
* If you know you're in an evaluation path that doesn't need debugger
support, call `eval_block::<WithoutDebug>(&engine_state, ...)` (this is
the case of hooks, for example).
I tried to add more explanation in the docstring of `debugger_trait.rs`.
## TODO
- [x] Better profiler output to reduce spam of iterative commands like
`each`
- [x] Resolve `TODO: DEBUG` comments
- [x] Resolve unwraps
- [x] Add doc comments
- [x] Add usage and extra usage for `debug profile`, explaining all
columns
# User-Facing Changes
Hopefully none.
# Tests + Formatting
# After Submitting
# Description
Previously, the plugin itself would also print error messages about
mismatched versions, and there could be many of them while parsing a
`register` command which would be hard to follow. This removes that
behavior so that the error message is easier to read, and also makes the
error message on the engine side mention the plugin name so that it's
easier to tell which plugin needs to be updated.
The python plugin has also been modified to make testing this behavior
easier. Just change `NUSHELL_VERSION` in the script file to something
incompatible.
# User-Facing Changes
- Better error message
# Tests + Formatting
- 🟢 `toolkit fmt`
- 🟢 `toolkit clippy`
- 🟢 `toolkit test`
- 🟢 `toolkit test stdlib`
# Description
This PR introduces [workspace
dependencies](https://doc.rust-lang.org/cargo/reference/workspaces.html#the-dependencies-table).
The advantages are:
- a single place where dependency versions are declared
- reduces the number of files to change when upgrading a dependency
- reduces the risk of accidentally depending on 2 different versions of
the same dependency
I've only done a few so far. If this PR is accepted, I might continue
and progressively do the rest.
# User-Facing Changes
N/A
# Tests + Formatting
- 🟢 `toolkit fmt`
- 🟢 `toolkit clippy`
- 🟢 `toolkit test`
- 🟢 `toolkit test stdlib`
# After Submitting
N/A
# Description
Replace panics with errors in thread spawning.
Also adds `IntoSpanned` trait for easily constructing `Spanned`, and an
implementation of `From<Spanned<std::io::Error>>` for `ShellError`,
which is used to provide context for the error wherever there was a span
conveniently available. In general this should make it more convenient
to do the right thing with `std::io::Error` and always add a span to it
when it's possible to do so.
# User-Facing Changes
Fewer panics!
# Tests + Formatting
- 🟢 `toolkit fmt`
- 🟢 `toolkit clippy`
- 🟢 `toolkit test`
- 🟢 `toolkit test stdlib`
# Description
This patches `StreamReader`'s iterator implementation to not return any
values after an I/O error has been encountered.
Without this, it's possible for a protocol error to cause the channel to
disconnect, in which case every call to `recv()` returns an error, which
causes the iterator to produce error values infinitely. There are some
commands that don't immediately stop after receiving an error, so it's
possible that they just get stuck in an infinite error loop. This fixes
that so the error is only produced once, and then the stream ends
artificially.
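The behavior is easy to picture with a small stand-alone iterator (an illustration of the idea, not the actual `StreamReader` code):
```rust
/// Wrap a fallible iterator so that, after the first error, the stream just ends.
struct StopAfterError<I> {
    inner: I,
    failed: bool,
}

impl<I, T, E> Iterator for StopAfterError<I>
where
    I: Iterator<Item = Result<T, E>>,
{
    type Item = Result<T, E>;

    fn next(&mut self) -> Option<Self::Item> {
        if self.failed {
            // The stream ended artificially after the first error.
            return None;
        }
        let item = self.inner.next();
        if let Some(Err(_)) = &item {
            self.failed = true;
        }
        item
    }
}

fn main() {
    let source = vec![Ok(1), Err("protocol error"), Ok(2)].into_iter();
    let out: Vec<_> = StopAfterError { inner: source, failed: false }.collect();
    // The error is yielded exactly once, and the trailing Ok(2) never appears.
    assert_eq!(out, vec![Ok(1), Err("protocol error")]);
}
```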
# Description
This fixes a race condition where all interfaces to a plugin might have
been dropped, but both sides are still expecting input, and the
`PluginInterfaceManager` doesn't get a chance to see that the interfaces
have been dropped and stop trying to consume input.
As the manager needs to hold on to a writer, we can't automatically
close the stream, but we also can't interrupt it if it's waiting to
read. So the best solution is to send a message to the plugin that we
are no longer going to be sending it any plugin calls, so that it knows
that it can exit when it's done.
This race condition is a little bit tricky to trigger as-is, but can be
more noticeable when running plugins in a tight loop. If too many plugin
processes are spawned at one time, Nushell can start to encounter "too
many open files" errors, and not be very useful.
# User-Facing Changes
# Tests + Formatting
- 🟢 `toolkit fmt`
- 🟢 `toolkit clippy`
- 🟢 `toolkit test`
- 🟢 `toolkit test stdlib`
# After Submitting
I will need to add `Goodbye` to the protocol docs
# Description
Bump nushell version to the dev version of 0.90.2
# User-Facing Changes
# Tests + Formatting
# After Submitting
Merge after https://github.com/nushell/nushell/pull/11786
# Description
# User-Facing Changes
# Tests + Formatting
# After Submitting
# Description
# User-Facing Changes
# Tests + Formatting
# After Submitting
# Description
#11492 fixed flags for builtin commands, but I missed that plugins don't
use the same `has_flag` that builtins do. This PR addresses that.
Unfortunately this means that the return value of `has_flag` needs to
change from `bool` to `Result<bool, ShellError>` to produce an error
when an explicit value is not a boolean (just like `has_flag` for
builtin commands). It is not possible to check this in
`EvaluatedCall::try_from_call` because
# User-Facing Changes
Passing explicit values to flags of plugin commands (like `--flag=true`
or `--flag=false`) should work now.
BREAKING: changed return value of `EvaluatedCall::has_flag` method from
`bool` to `Result<bool, ShellError>`
# Tests + Formatting
Added tests and updated documentation and examples
# Description
When nushell calls a plugin it now sends a configuration `Value` from
the nushell config under `$env.config.plugins.PLUGIN_SHORT_NAME`. This
allows plugin authors to read configuration provided by plugin users.
The `PLUGIN_SHORT_NAME` must match the registered filename after
`nu_plugin_`. If you register `target/debug/nu_plugin_config`, the
`PLUGIN_SHORT_NAME` will be `config`, and the nushell config will look like:
```nushell
$env.config = {
    # ...
    plugins: {
        config: [
            some
            values
        ]
    }
}
```
Configuration may also use a closure which allows passing values from
`$env` to a plugin:
```nushell
$env.config = {
    # ...
    plugins: {
        config: {||
            $env.some_value
        }
    }
}
```
This is a breaking change for the plugin API as the `Plugin::run()`
function now accepts a new configuration argument which is an
`&Option<Value>`. If no configuration was supplied the value is `None`.
Plugins compiled after this change should work with older nushell, and
will behave as if the configuration was not set.
Initially discussed in #10867
# User-Facing Changes
* Plugins can read configuration data stored in `$env.config.plugins`
* The plugin `CallInfo` now includes a `config` entry, existing plugins
will require updates
# Tests + Formatting
- 🟢 `toolkit fmt`
- 🟢 `toolkit clippy`
- 🟢 `toolkit test`
- 🟢 `toolkit test stdlib`
# After Submitting
- [ ] Update [Creating a plugin (in
Rust)](https://www.nushell.sh/contributor-book/plugins.html#creating-a-plugin-in-rust)
[source](https://github.com/nushell/nushell.github.io/blob/main/contributor-book/plugins.md)
- [ ] Add "Configuration" section to [Plugins
documentation](https://www.nushell.sh/contributor-book/plugins.html)