Fix bundling/path errors, sidecar support, iOS/Android simulator support, asset hotreload fixes, serverfn hotreload, native tailwind support (#2779)

* wip: fix manganis import path
* upgrade tauri bundler, convert todo!() to unimplemented!()
* feat: mobile hotreloading + ios asset configuration
* remove oid demo and its associated env vars
* nuke all the things that cause cache thrashing
* swap to Asset type
* add some more logs
* display impl for attrvalue
* dont panic on collect failure
* cut down web deps to 150
* clean up deps in a lot of places, simplify build scripts
* clean up asset resolution and cli-dev profile
* wire up absolute paths for manganis asset in a particular mode
* move document related things around to shorten compile times
* move most things to `document::Item` and then decompose the cli-config
* switch link to stylesheet for clarity in examples
* move manganis workspace example to examples folder
* decompose manganis-cli-support
* dont need mobile demo anymore
* remove gloo dialogs
* html doesnt need document
* rename hotreload to devtools
* really clean up the html crate
* fix weird rsx spacing
* clean up desktop to use callback
* clean up document a bit
* re-wire up devsocket
* fix utf css
* yeeeet that js out of here
* synthetic web system works
* web crate almost done being cleaned up
* desktop mostly cleaned up too
* clean up cli a bit
* more cli cleanup
* cli builds again
* clean up cli, inline structs, cut down number of unique types where possible
* hotreload works again for desktop and this time, mobile too
* cfg out tungstenite
* devserver err
* more refactor to cli builder
* switch to unbounded_send
* new structure is much cleaner
* add http serve subcommand
* bundled hotreload
* kick stylesheets
* clean up a bit more, split up eventloop
* amazingly serve seems done
* change from pub to pub(crate) in cli
* remove tools
* bit more polish to cli
* fix issue with join
* gracefully handle fullstack without a server
* fullstack mobile demo
* fix launch function, move projects into example-projects folder
* hoist examples
* add "run" command
* clean up launch
* remove old manual websocket receiver
* doctor command
* allow desktop to scroll
* cut apart router crate
* dont put launch in prelude
* use dioxus::launch where possible
* rename rsx, cut out hotreload tests
* remove liveview project
* bump native
* fix compile for renderers
* move sync event response out of interpreter
* move render in serve
* rollback settings change
* cli compiles, huzzah!
* change uris for asset
* fix asset
* new tui screen
* new cargo-like tui works
* very very very close
* it works! very small bug with incorrect grapheme calc
* Clean up devserver a bit
* status system
* tidy up debug filters
* clean up logging situation
* Fix a number of bugs with log printing
* new printing system is more reliable
* wire up more stuff
* things working but fullstack is having issues
* fullstack works again!
* hotreloading bundled assets works again
* bundled hotreload and beginnings of macos bundling
* Hotreload desktop
* combined server builds
* add build handle
* fix fullstack assets
* make open async, add some hooks for ios
* migrate filemap to runner
* wip global crate system
* fixup bundles + organize asset
* fix asset location bug
* all the bundled reloading!
* open ios simulator!
* full hotreload support for mobile + serverfn
* basic cleanups
* clean up dx
* Move filemap
* fix cutting of newlines
* assets working, some android
* hoist wry/tao
* use sync locks and headers to fix issues with android
* desktop -> mac/win/linux with alias
* better logging
* feat: workspace (entire computer!) hotreload
* should rebuild toggle, ios simulator bootup
* proper mobile support in launch
* more robust handling of assets
* fix cargo
* bring back some of tauri bundle
* make warnings go away, clippy happy on cli
* some final clippy cleanups
* fmt
* move manganis to its own folder
* upgrade bundle to stable
* drastically slim down manganis, prepping for merge
* typos, failing test, docsrs config
* remove static gen test
* nix static gen test
* we use --platform web instead of --platform fullstack now
* only bind dev urls in desktop/mobile
* install gtk
* nix static gen
* split build dir by app name
Jonathan Kelley 2024-10-25 17:23:45 -07:00 committed by GitHub
parent ef436e4ed0
commit 7ec3453ca3
156 changed files with 11545 additions and 9006 deletions

View file

@ -1,9 +1,13 @@
[profile]
[profile.dioxus-client]
[profile.dioxus-wasm]
inherits = "dev"
opt-level = 2
[profile.dioxus-server]
inherits = "dev"
opt-level = 2
[profile.dioxus-android]
inherits = "dev"
opt-level = 2

View file

@ -39,7 +39,7 @@ env:
CARGO_TERM_COLOR: always
CARGO_INCREMENTAL: "0"
RUST_BACKTRACE: 1
rust_nightly: nightly-2024-07-01
rust_nightly: nightly-2024-10-20
jobs:
check-msrv:
@ -142,8 +142,10 @@ jobs:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- run: sudo apt-get update
- run: sudo apt install libwebkit2gtk-4.1-dev libgtk-3-dev libayatana-appindicator3-dev libxdo-dev
- name: Install Rust ${{ env.rust_nightly }}
uses: dtolnay/rust-toolchain@stable
uses: dtolnay/rust-toolchain@nightly
with:
toolchain: ${{ env.rust_nightly }}
- uses: Swatinem/rust-cache@v2
@ -153,11 +155,12 @@ jobs:
- name: "doc --lib --all-features"
run: |
cargo doc --workspace --no-deps --all-features --document-private-items
env:
RUSTFLAGS: --cfg docsrs
RUSTDOCFLAGS: --cfg docsrs
# todo: re-enable warnings
# env:
# RUSTFLAGS: --cfg docsrs
# RUSTDOCFLAGS: --cfg docsrs
# todo: re-enable warnings, private items
# RUSTDOCFLAGS: --cfg docsrs -Dwarnings
# --document-private-items
# Check for invalid links in the repository
link-check:

Cargo.lock (generated): 5410 changes (file diff suppressed because it is too large)

View file

@ -1,6 +1,26 @@
[workspace]
resolver = "2"
members = [
"packages/dioxus",
"packages/dioxus-lib",
"packages/core",
"packages/core-types",
"packages/cli",
"packages/core-types",
"packages/core-macro",
"packages/config-macro",
"packages/router-macro",
"packages/extension",
"packages/router",
"packages/html",
"packages/html-internal-macro",
"packages/hooks",
"packages/web",
"packages/ssr",
"packages/desktop",
"packages/mobile",
"packages/interpreter",
"packages/liveview",
"packages/autofmt",
"packages/check",
"packages/cli-config",
@ -34,7 +54,30 @@ members = [
"packages/signals",
"packages/ssr",
"packages/static-generation",
"packages/web",
"packages/lazy-js-bundle",
"packages/cli-config",
"packages/devtools",
"packages/devtools-types",
"packages/isrg",
"packages/rsx-hotreload",
# Static generation examples
# "packages/static-generation/examples/simple",
# "packages/static-generation/examples/router",
# "packages/static-generation/examples/github-pages",
# Playwright tests
"packages/playwright-tests/liveview",
"packages/playwright-tests/web",
"packages/playwright-tests/fullstack",
"packages/playwright-tests/suspense-carousel",
"packages/playwright-tests/nested-suspense",
# manganis
"packages/manganis/manganis",
"packages/manganis/manganis-macro",
"packages/manganis/manganis-core",
# Full project examples
"example-projects/fullstack-hackernews",
@ -57,12 +100,10 @@ members = [
# Playwright tests
"packages/playwright-tests/liveview",
"packages/playwright-tests/web",
"packages/playwright-tests/static-generation",
"packages/playwright-tests/fullstack",
"packages/playwright-tests/suspense-carousel",
"packages/playwright-tests/nested-suspense",
]
exclude = ["examples/mobile_demo", "examples/openid_connect_demo"]
[workspace.package]
version = "0.6.0-alpha.3"
@ -75,16 +116,15 @@ dioxus-core = { path = "packages/core", version = "0.6.0-alpha.3" }
dioxus-core-types = { path = "packages/core-types", version = "0.6.0-alpha.3" }
dioxus-core-macro = { path = "packages/core-macro", version = "0.6.0-alpha.3" }
dioxus-config-macro = { path = "packages/config-macro", version = "0.6.0-alpha.3" }
dioxus-document = { path = "packages/document", version = "0.6.0-alpha.3" }
dioxus-router = { path = "packages/router", version = "0.6.0-alpha.3" }
dioxus-router-macro = { path = "packages/router-macro", version = "0.6.0-alpha.3" }
dioxus-document = { path = "packages/document", version = "0.6.0-alpha.3", default-features = false }
dioxus-html = { path = "packages/html", version = "0.6.0-alpha.3", default-features = false }
dioxus-html-internal-macro = { path = "packages/html-internal-macro", version = "0.6.0-alpha.3" }
dioxus-hooks = { path = "packages/hooks", version = "0.6.0-alpha.3" }
dioxus-web = { path = "packages/web", version = "0.6.0-alpha.3", default-features = false }
dioxus-isrg = { path = "packages/isrg", version = "0.6.0-alpha.3" }
dioxus-ssr = { path = "packages/ssr", version = "0.6.0-alpha.3", default-features = false }
dioxus-desktop = { path = "packages/desktop", version = "0.6.0-alpha.3", default-features = false }
dioxus-desktop = { path = "packages/desktop", version = "0.6.0-alpha.3" }
dioxus-mobile = { path = "packages/mobile", version = "0.6.0-alpha.3" }
dioxus-interpreter-js = { path = "packages/interpreter", version = "0.6.0-alpha.3" }
dioxus-liveview = { path = "packages/liveview", version = "0.6.0-alpha.3" }
@ -94,23 +134,26 @@ dioxus-rsx = { path = "packages/rsx", version = "0.6.0-alpha.3" }
dioxus-rsx-hotreload = { path = "packages/rsx-hotreload", version = "0.6.0-alpha.3" }
dioxus-rsx-rosetta = { path = "packages/rsx-rosetta", version = "0.6.0-alpha.3" }
dioxus-signals = { path = "packages/signals", version = "0.6.0-alpha.3" }
dioxus-cli-config = { path = "packages/cli-config", version = "0.6.0-alpha.3" }
generational-box = { path = "packages/generational-box", version = "0.6.0-alpha.3" }
dioxus-devtools = { path = "packages/devtools", version = "0.6.0-alpha.3" }
dioxus-devtools-types = { path = "packages/devtools-types", version = "0.6.0-alpha.3" }
dioxus-fullstack = { path = "packages/fullstack", version = "0.6.0-alpha.3" }
dioxus-fullstack = { path = "packages/fullstack", version = "0.6.0-alpha.3", default-features = false }
dioxus-static-site-generation = { path = "packages/static-generation", version = "0.6.0-alpha.3" }
dioxus_server_macro = { path = "packages/server-macro", version = "0.6.0-alpha.3", default-features = false }
dioxus-isrg = { path = "packages/isrg", version = "0.6.0-alpha.3" }
lazy-js-bundle = { path = "packages/lazy-js-bundle", version = "0.6.0-alpha.3" }
dioxus-cli-config = { path = "packages/cli-config", version = "0.6.0-alpha.3" }
generational-box = { path = "packages/generational-box", version = "0.6.0-alpha.3" }
manganis = { path = "packages/manganis/manganis", version = "0.6.0-alpha.3" }
manganis-macro = { path = "packages/manganis/manganis-macro", version = "0.6.0-alpha.3" }
manganis-core = { path = "packages/manganis/manganis-core", version = "0.6.0-alpha.3" }
manganis-cli-support = { version = "0.3.0-alpha.3", features = ["html"] }
manganis = { version = "0.3.0-alpha.3", default-features = false, features = ["html", "macro"]}
warnings = { version = "0.2.0" }
# a fork of pretty please for tests
# a fork of pretty please for tests - let's get off of this if we can!
prettier-please = { version = "0.3.0", features = ["verbatim"]}
clap = { version = "4.5.7" }
askama_escape = "0.10.3"
tracing = "0.1.37"
tracing-futures = "0.2.5"
@ -128,8 +171,8 @@ thiserror = "1.0.40"
prettyplease = { version = "0.2.20", features = ["verbatim"] }
const_format = "0.2.32"
cargo_toml = { version = "0.20.3" }
tauri-utils = { version = "=1.5.*" }
tauri-bundler = { version = "=1.4.*" }
tauri-utils = { version = "=2.0.2" }
tauri-bundler = { version = "=2.0.4" }
lru = "0.12.2"
async-trait = "0.1.77"
axum = "0.7.0"
@ -178,7 +221,7 @@ criterion = { version = "0.5" }
walrus = "*"
# desktop
wry = { version = "0.43.0", default-features = false }
wry = { version = "0.45.0", default-features = false }
tao = { version = "0.30.0", features = ["rwh_05"] }
webbrowser = "1.0.1"
infer = "0.16.0"
@ -192,13 +235,11 @@ core-foundation = "0.10.0"
objc = { version = "0.2.7", features = ["exception"] }
objc_id = "0.1.1"
[profile.dev.package.dioxus-core-macro]
opt-level = 3
# wasm bindgen is slooooooow, but it's because we're actually processing the wasm
# so, lets just bump up walrus to make it faster, no need for any special profiles
[profile.dev.package.walrus]
opt-level = 3
# our release profile should be fast to compile and fast to run
# when we ship our CI builds, we turn on LTO, which wins back the perf we leave on the table by turning on incremental
[profile.release]
incremental = true
debug = 0
# Disable debug assertions to check the release path of core and other packages, but build without optimizations to keep build times quick
[profile.release-unoptimized]
@ -219,15 +260,13 @@ documentation = "https://dioxuslabs.com"
keywords = ["dom", "ui", "gui", "react", "wasm"]
rust-version = "1.79.0"
publish = false
version = "0.6.0-alpha.2"
version = "0.6.0-alpha.3"
[dependencies]
manganis = { workspace = true, optional = true }
reqwest = { workspace = true, features = ["json"], optional = true }
ciborium = { workspace = true, optional = true }
base64 = { workspace = true, optional = true }
http-range = { version = "0.1.5", optional = true }
ciborium = { version = "0.2.1", optional = true }
base64 = { version = "0.21.0", optional = true }
tracing-subscriber = "0.3.17"
[dev-dependencies]
dioxus = { workspace = true, features = ["router"] }
@ -264,7 +303,6 @@ fullstack = ["dioxus/fullstack"]
axum = ["dioxus/axum"]
server = ["dioxus/axum"]
web = ["dioxus/web"]
collect-assets = ["dep:manganis"]
http = ["dep:reqwest", "dep:http-range"]
[[example]]

View file

@ -6,7 +6,7 @@
use dioxus::prelude::*;
use std::{collections::VecDeque, fmt::Debug, rc::Rc};
const STYLE: &str = asset!("./examples/assets/events.css");
const STYLE: Asset = asset!("/examples/assets/events.css");
fn main() {
dioxus::launch(app);

View file

@ -12,7 +12,7 @@ use dioxus::events::*;
use dioxus::html::input_data::keyboard_types::Key;
use dioxus::prelude::*;
const STYLE: &str = asset!("./examples/assets/calculator.css");
const STYLE: Asset = asset!("/examples/assets/calculator.css");
fn main() {
dioxus::LaunchBuilder::desktop()

View file

@ -29,7 +29,10 @@ fn app() -> Element {
let mut state = use_signal(Calculator::new);
rsx! {
document::Link { rel: "stylesheet", href: asset!("./examples/assets/calculator.css") }
document::Link {
rel: "stylesheet",
href: asset!("./examples/assets/calculator.css"),
}
div { id: "wrapper",
div { class: "app",
div {
@ -39,15 +42,37 @@ fn app() -> Element {
div { class: "calculator-keypad",
div { class: "input-keys",
div { class: "function-keys",
CalculatorKey { name: "key-clear", onclick: move |_| state.write().clear_display(),
if state.read().display_value == "0" { "C" } else { "AC" }
CalculatorKey {
name: "key-clear",
onclick: move |_| state.write().clear_display(),
if state.read().display_value == "0" {
"C"
} else {
"AC"
}
}
CalculatorKey {
name: "key-sign",
onclick: move |_| state.write().toggle_sign(),
"±"
}
CalculatorKey {
name: "key-percent",
onclick: move |_| state.write().toggle_percent(),
"%"
}
CalculatorKey { name: "key-sign", onclick: move |_| state.write().toggle_sign(), "±" }
CalculatorKey { name: "key-percent", onclick: move |_| state.write().toggle_percent(), "%" }
}
div { class: "digit-keys",
CalculatorKey { name: "key-0", onclick: move |_| state.write().input_digit(0), "0" }
CalculatorKey { name: "key-dot", onclick: move |_| state.write().input_dot(), "" }
CalculatorKey {
name: "key-0",
onclick: move |_| state.write().input_digit(0),
"0"
}
CalculatorKey {
name: "key-dot",
onclick: move |_| state.write().input_dot(),
""
}
for k in 1..10 {
CalculatorKey {
key: "{k}",
@ -74,8 +99,16 @@ fn app() -> Element {
onclick: move |_| state.write().set_operator(Operator::Sub),
""
}
CalculatorKey { name: "key-add", onclick: move |_| state.write().set_operator(Operator::Add), "+" }
CalculatorKey { name: "key-equals", onclick: move |_| state.write().perform_operation(), "=" }
CalculatorKey {
name: "key-add",
onclick: move |_| state.write().set_operator(Operator::Add),
"+"
}
CalculatorKey {
name: "key-equals",
onclick: move |_| state.write().perform_operation(),
"="
}
}
}
}

View file

@ -5,7 +5,7 @@ use async_std::task::sleep;
use dioxus::prelude::*;
use web_time::Instant;
const STYLE: &str = asset!("./examples/assets/clock.css");
const STYLE: Asset = asset!("/examples/assets/clock.css");
fn main() {
dioxus::launch(app);

View file

@ -8,7 +8,7 @@ use std::rc::Rc;
use async_std::task::sleep;
use dioxus::prelude::*;
const STYLE: &str = asset!("./examples/assets/roulette.css");
const STYLE: Asset = asset!("/examples/assets/roulette.css");
fn main() {
dioxus::launch(app);

View file

@ -2,7 +2,7 @@
use dioxus::prelude::*;
const STYLE: &str = asset!("./examples/assets/counter.css");
const STYLE: Asset = asset!("/examples/assets/counter.css");
fn main() {
dioxus::launch(app);

View file

@ -22,11 +22,18 @@ fn main() {
rsx! {
document::Link {
rel: "stylesheet",
href: asset!("https://unpkg.com/purecss@2.0.6/build/pure-min.css"),
href: "https://unpkg.com/purecss@2.0.6/build/pure-min.css",
integrity: "sha384-Uu6IeWbM+gzNVXJcM9XV3SohHtmWE+3VGi496jvgX1jyvDTXfdK+rfZc8C1Aehk5",
crossorigin: "anonymous"
crossorigin: "anonymous",
}
document::Link {
rel: "stylesheet",
href: asset!("/examples/assets/crm.css"),
}
document::Link {
rel: "stylesheet",
href: asset!("./examples/assets/crm.css"),
}
document::Link { rel: "stylesheet", href: asset!("./examples/assets/crm.css") }
h1 { "Dioxus CRM Example" }
Router::<Route> {}
}
@ -117,7 +124,7 @@ fn New() -> Element {
placeholder: "Last Name…",
required: true,
value: "{last_name}",
oninput: move |e| last_name.set(e.value())
oninput: move |e| last_name.set(e.value()),
}
}
@ -127,13 +134,21 @@ fn New() -> Element {
id: "description",
placeholder: "Description…",
value: "{description}",
oninput: move |e| description.set(e.value())
oninput: move |e| description.set(e.value()),
}
}
div { class: "pure-controls",
button { r#type: "submit", class: "pure-button pure-button-primary", "Save" }
Link { to: Route::List, class: "pure-button pure-button-primary red", "Cancel" }
button {
r#type: "submit",
class: "pure-button pure-button-primary",
"Save"
}
Link {
to: Route::List,
class: "pure-button pure-button-primary red",
"Cancel"
}
}
}
}

View file

@ -1,17 +1,12 @@
//! A simple example on how to use assets loading from the filesystem.
//!
//! If the feature "collect-assets" is enabled, the assets will be collected via the dioxus CLI and embedded into the
//! final bundle. This lets you do various useful things like minify, compress, and optimize your assets.
//!
//! We can still use assets without the CLI middleware, but generally larger apps will benefit from it.
//! Dioxus provides the asset!() macro which is a convenient way to load assets from the filesystem.
//! This ensures the asset makes it into the bundle through dependencies and is accessible in environments
//! like web and android where assets are lazily loaded using platform-specific APIs.
use dioxus::prelude::*;
#[cfg(not(feature = "collect-assets"))]
static ASSET_PATH: &str = "examples/assets/logo.png";
#[cfg(feature = "collect-assets")]
static ASSET_PATH: &str = asset!("examples/assets/logo.png".format(ImageType::Avif));
static ASSET_PATH: Asset = asset!("/examples/assets/logo.png");
fn main() {
dioxus::launch(app);
@ -21,7 +16,7 @@ fn app() -> Element {
rsx! {
div {
h1 { "This should show an image:" }
img { src: ASSET_PATH.to_string() }
img { src: ASSET_PATH }
}
}
}
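
For reference, a minimal sketch (not part of this diff) of the before/after usage pattern the doc comment above describes, using an illustrative stylesheet path. The pattern is taken from the example diffs in this commit:

```rust
use dioxus::prelude::*;

// before this change: a relative path that produced a plain &str
// const STYLE: &str = asset!("./examples/assets/counter.css");

// after this change: an absolute (crate-root-relative) path that produces a typed Asset
const STYLE: Asset = asset!("/examples/assets/counter.css");

fn app() -> Element {
    rsx! {
        // stylesheets are now attached through document::Link rather than raw strings
        document::Link { rel: "stylesheet", href: STYLE }
        h1 { "This page is styled by a bundled asset" }
    }
}

fn main() {
    dioxus::launch(app);
}
```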

View file

@ -7,7 +7,7 @@
use dioxus::desktop::{use_asset_handler, wry::http::Response};
use dioxus::prelude::*;
const STYLE: &str = asset!("./examples/assets/custom_assets.css");
const STYLE: Asset = asset!("/examples/assets/custom_assets.css");
fn main() {
dioxus::LaunchBuilder::desktop().launch(app);

View file

@ -8,7 +8,7 @@ use std::sync::Arc;
use dioxus::prelude::*;
use dioxus::{html::HasFileData, prelude::dioxus_elements::FileEngine};
const STYLE: &str = asset!("./examples/assets/file_upload.css");
const STYLE: Asset = asset!("/examples/assets/file_upload.css");
fn main() {
dioxus::launch(app);

View file

@ -9,7 +9,7 @@
use dioxus::prelude::*;
const STYLE: &str = asset!("./examples/assets/flat_router.css");
const STYLE: Asset = asset!("/examples/assets/flat_router.css");
fn main() {
dioxus::launch(|| {

View file

@ -4,6 +4,7 @@ version = "0.1.0"
edition = "2021"
publish = false
# See more keys and their definitions at https://doc.rust-lang.org/cargo/reference/manifest.html
[dependencies]

View file

@ -17,5 +17,5 @@ reqwest = { workspace = true }
[features]
default = []
server = ["dioxus/axum"]
server = ["dioxus/server"]
web = ["dioxus/web"]

View file

@ -0,0 +1,3 @@
body {
background-color: rgb(108, 104, 104);
}

View file

@ -1,7 +1,7 @@
//! Run with:
//!
//! ```sh
//! dx serve --platform fullstack
//! dx serve --platform web
//! ```
#![allow(non_snake_case, unused)]
@ -14,6 +14,7 @@ fn app() -> Element {
let server_future = use_server_future(get_server_data)?;
rsx! {
document::Link { href: asset!("/assets/hello.css"), rel: "stylesheet" }
h1 { "High-Five counter: {count}" }
button { onclick: move |_| count += 1, "Up high!" }
button { onclick: move |_| count -= 1, "Down low!" }

View file

@ -1,7 +1,7 @@
//! Run with:
//!
//! ```sh
//! dx serve --platform fullstack
//! dx serve --platform web
//! ```
use dioxus::prelude::*;

View file

@ -7,7 +7,7 @@
use dioxus::prelude::*;
const STYLE: &str = asset!("./examples/assets/counter.css");
const STYLE: Asset = asset!("/examples/assets/counter.css");
static COUNT: GlobalSignal<i32> = Signal::global(|| 0);
static DOUBLED_COUNT: GlobalMemo<i32> = Memo::global(|| COUNT() * 2);

View file

@ -8,7 +8,7 @@
use dioxus::prelude::*;
const STYLE: &str = asset!("./examples/assets/links.css");
const STYLE: Asset = asset!("/examples/assets/links.css");
fn main() {
dioxus::launch(app);

View file

@ -3,7 +3,6 @@
use dioxus::prelude::*;
fn main() {
tracing_subscriber::fmt::init();
dioxus::launch(app);
}
@ -12,25 +11,10 @@ fn app() -> Element {
// You can use the Meta component to render a meta tag into the head of the page
// Meta tags are useful to provide information about the page to search engines and social media sites
// This example sets up meta tags for the open graph protocol for social media previews
document::Meta {
property: "og:title",
content: "My Site",
}
document::Meta {
property: "og:type",
content: "website",
}
document::Meta {
property: "og:url",
content: "https://www.example.com",
}
document::Meta {
property: "og:image",
content: "https://example.com/image.jpg",
}
document::Meta {
name: "description",
content: "My Site is a site",
}
document::Meta { property: "og:title", content: "My Site" }
document::Meta { property: "og:type", content: "website" }
document::Meta { property: "og:url", content: "https://www.example.com" }
document::Meta { property: "og:image", content: "https://example.com/image.jpg" }
document::Meta { name: "description", content: "My Site is a site" }
}
}

View file

@ -24,7 +24,7 @@ fn app() -> Element {
rsx! {
document::Link {
rel: "stylesheet",
href: asset!("./examples/assets/overlay.css"),
href: asset!("/examples/assets/overlay.css"),
}
if show_overlay() {
div {

View file

@ -27,8 +27,11 @@ fn app() -> Element {
}
};
rsx!(
document::Link { rel: "stylesheet", href: asset!("./examples/assets/read_size.css") }
rsx! {
document::Link {
rel: "stylesheet",
href: asset!("./examples/assets/read_size.css"),
}
div {
width: "50%",
height: "50%",
@ -38,5 +41,5 @@ fn app() -> Element {
}
button { onclick: read_dims, "Read dimensions" }
)
}
}

View file

@ -7,7 +7,7 @@
use dioxus::prelude::*;
const STYLE: &str = asset!("./examples/assets/radio.css");
const STYLE: Asset = asset!("/examples/assets/radio.css");
fn main() {
dioxus::launch(app);

View file

@ -15,7 +15,10 @@ fn app() -> Element {
let mut dimensions = use_signal(Size2D::zero);
rsx!(
document::Link { rel: "stylesheet", href: asset!("./examples/assets/read_size.css") }
document::Link {
rel: "stylesheet",
href: asset!("./examples/assets/read_size.css"),
}
div {
width: "50%",
height: "50%",

View file

@ -8,7 +8,7 @@
use dioxus::prelude::*;
const STYLE: &str = asset!("./examples/assets/router.css");
const STYLE: Asset = asset!("/examples/assets/router.css");
fn main() {
dioxus::launch(|| {

View file

@ -2,22 +2,19 @@
use dioxus::prelude::*;
const _STYLE: &str = asset!("public/tailwind.css");
fn main() {
dioxus::launch(app);
}
pub fn app() -> Element {
let grey_background = true;
rsx!(
rsx! (
document::Link { rel: "stylesheet", href: asset!("/public/tailwind.css") }
div {
header {
class: "text-gray-400 body-font",
// you can use optional attributes to optionally apply a tailwind class
class: if grey_background {
"bg-gray-900"
},
class: if grey_background { "bg-gray-900" },
div { class: "container mx-auto flex flex-wrap p-5 flex-col md:flex-row items-center",
a { class: "flex title-font font-medium items-center text-white mb-4 md:mb-0",
StacksIcon {}
@ -63,7 +60,7 @@ pub fn app() -> Element {
class: "object-cover object-center rounded",
src: "https://i.imgur.com/oK6BLtw.png",
referrerpolicy: "no-referrer",
alt: "hero"
alt: "hero",
}
}
}

View file

@ -3,7 +3,6 @@
use dioxus::prelude::*;
fn main() {
tracing_subscriber::fmt::init();
dioxus::launch(app);
}

View file

@ -3,7 +3,7 @@
use dioxus::prelude::*;
use std::collections::HashMap;
const STYLE: &str = asset!("./examples/assets/todomvc.css");
const STYLE: Asset = asset!("/examples/assets/todomvc.css");
fn main() {
dioxus::launch(app);

View file

@ -28,7 +28,7 @@ impl<'a, 'b> MacroCollector<'a, 'b> {
}
}
impl<'a, 'b> Visit<'b> for MacroCollector<'a, 'b> {
impl<'b> Visit<'b> for MacroCollector<'_, 'b> {
fn visit_macro(&mut self, i: &'b Macro) {
// Visit the regular stuff - this will also ensure paths/attributes are visited
syn::visit::visit_macro(self, i);

View file

@ -12,11 +12,6 @@ pub const ASSET_ROOT_ENV: &str = "DIOXUS_ASSET_ROOT";
pub const APP_TITLE_ENV: &str = "DIOXUS_APP_TITLE";
pub const OUT_DIR: &str = "DIOXUS_OUT_DIR";
/// todo: this is not implemented but we're going to reserve this
///
/// technically this is only passed on "launch" so if you close the app, this will be lost
pub const IOS_DEVSERVER_ADDR_ENV: &str = "SIMCTL_CHILD_DIOXUS_DEVSERVER_ADDR";
/// Get the address of the devserver for use over a raw socket
///
/// This is not a websocket! There's no protocol!

View file

@ -10,19 +10,25 @@ keywords = ["react", "gui", "cli", "dioxus", "wasm"]
rust-version = "1.79.0"
[dependencies]
# cli core
clap = { version = "4.2", features = ["derive", "cargo"] }
dioxus-autofmt = { workspace = true }
dioxus-check = { workspace = true }
dioxus-rsx-rosetta = { workspace = true }
dioxus-rsx = { workspace = true }
dioxus-rsx-hotreload = { workspace = true }
dioxus-html = { workspace = true, features = ["hot-reload-context"] }
dioxus-core = { workspace = true, features = ["serialize"] }
dioxus-core-types = { workspace = true }
dioxus-devtools-types = { workspace = true }
dioxus-cli-config = { workspace = true }
dioxus-fullstack = { workspace = true }
clap = { workspace = true, features = ["derive", "cargo"] }
thiserror = { workspace = true }
wasm-bindgen-cli-support = "0.2"
wasm-bindgen-shared = "0.2"
colored = "2.0.0"
dioxus-cli-config = { workspace = true }
# features
log = "0.4.14"
fern = { version = "0.6.0", features = ["colored"] }
serde = { version = "1.0.136", features = ["derive"] }
serde_json = "1.0.79"
uuid = { version = "1.3.0", features = ["v4"] }
serde = { workspace = true, features = ["derive"] }
serde_json = { workspace = true }
toml = { workspace = true }
fs_extra = "1.2.0"
cargo_toml = { workspace = true }
@ -30,7 +36,7 @@ futures-util = { workspace = true, features = ["async-await-macro"] }
notify = { workspace = true, features = ["serde"] }
html_parser = { workspace = true }
cargo_metadata = "0.18.1"
tokio = { workspace = true, features = ["fs", "sync", "rt", "macros", "process", "rt-multi-thread"] }
tokio = { workspace = true, features = ["full"] }
tokio-stream = "0.1.15"
chrono = "0.4.19"
anyhow = "1"
@ -38,12 +44,12 @@ hyper = { workspace = true }
hyper-util = "0.1.3"
hyper-rustls = { workspace = true }
rustls = { workspace = true }
subprocess = "0.2.9"
rayon = "1.8.0"
console = "0.15.8"
ctrlc = "3.2.3"
futures-channel = { workspace = true }
krates = { version = "0.17.0" }
cargo-config2 = { workspace = true, optional = true }
regex = "1.10.6"
axum = { workspace = true, features = ["ws"] }
@ -53,7 +59,7 @@ tower-http = { workspace = true, features = ["full"] }
proc-macro2 = { workspace = true, features = ["span-locations"] }
syn = { workspace = true, features = ["full", "extra-traits", "visit", "visit-mut"] }
headers = "0.3.7"
headers = "0.4.0"
walkdir = "2"
# tools download
@ -75,42 +81,39 @@ open = "5.0.1"
cargo-generate = "=0.21.3"
toml_edit = "0.22.20"
# bundling
tauri-bundler = { workspace = true }
tauri-utils = { workspace = true }
# formatting
# syn = { workspace = true }
prettyplease = { workspace = true }
# Assets
manganis-cli-support = { workspace = true, features = ["html"] }
brotli = "6.0.0"
dioxus-autofmt = { workspace = true }
dioxus-check = { workspace = true }
dioxus-rsx-rosetta = { workspace = true }
dioxus-rsx = { workspace = true }
dioxus-rsx-hotreload = { workspace = true }
dioxus-html = { workspace = true, features = ["hot-reload-context"] }
dioxus-core = { workspace = true, features = ["serialize"] }
dioxus-core-types = { workspace = true }
dioxus-devtools = { workspace = true }
ignore = "0.4.22"
env_logger = "0.11.3"
env_logger = { workspace = true }
tracing-subscriber = { version = "0.3.18", features = ["std", "env-filter"] }
console-subscriber = { version = "0.3.0", optional = true }
tracing = { workspace = true }
wasm-opt = { version = "0.116.1", optional = true }
ratatui = { version = "0.27.0", features = ["crossterm", "unstable"] }
crossterm = { version = "0.27.0", features = ["event-stream"] }
ansi-to-tui = "=5.0.0-rc.1"
crossterm = { version = "0.28.0", features = ["event-stream"] }
ansi-to-tui = "6.0"
ansi-to-html = "0.2.1"
ratatui = { version = "0.28.0", features = ["crossterm", "unstable"] }
# on macos, we need to specify the vendored feature on ssl when cross compiling
# [target.'cfg(target_os = "macos")'.dependencies]
# openssl = { version = "0.10", features = ["vendored"] }
# link intercept
tempfile = "3.3"
manganis-core = { workspace = true }
# Extracting data from an executable
object = {version="0.36.0", features=["wasm"]}
tokio-util = { version = "0.7.11", features = ["full"] }
itertools = "0.13.0"
throbber-widgets-tui = "0.7.0"
unicode-segmentation = "1.12.0"
handlebars = "6.1.0"
strum = { version = "0.26.3", features = ["derive"] }
tauri-utils = { workspace = true }
tauri-bundler = { workspace = true }
[build-dependencies]
built = { version = "=0.7.4", features = ["git2"] }
@ -119,24 +122,27 @@ built = { version = "=0.7.4", features = ["git2"] }
default = []
plugin = []
tokio-console = ["dep:console-subscriber"]
optimizations = ["dep:wasm-opt"]
bundle = []
# when releasing dioxus, we want to enable wasm-opt
# (and maybe also when developing it).
# making this optional cuts workspace deps down from 1000 to 500, so it's very nice for workspace dev
optimizations = ["wasm-opt", "asset-opt"]
asset-opt = []
wasm-opt = ["dep:wasm-opt"]
[[bin]]
path = "src/main.rs"
name = "dx"
[dev-dependencies]
tempfile = "3.3"
[package.metadata.binstall]
# temporarily, we're going to use the 0.5.0 download page for all binaries
pkg-url = "{ repo }/releases/download/v{ version }/dx-{ target }-v{ version }{ archive-suffix }"
# the old one...
# pkg-url = "{ repo }/releases/download/v0.5.0/dx-{ target }-v{ version }{ archive-suffix }"
# pkg-url = "{ repo }/releases/download/v{ version }/dx-{ target }{ archive-suffix }"
pkg-fmt = "tgz"
[package.metadata.binstall.overrides.x86_64-pc-windows-msvc]
pkg-fmt = "zip"
[package.metadata.docs.rs]
all-features = false
rustc-args = [ "--cfg", "docsrs" ]
rustdoc-args = [ "--cfg", "docsrs" ]

View file

@ -0,0 +1 @@
app-dir/

View file

@ -0,0 +1,25 @@
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple Computer//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
<key>CFBundleIdentifier</key>
<string>com.dioxuslabs</string>
<key>CFBundleDisplayName</key>
<string>DioxusApp</string>
<key>CFBundleName</key>
<string>DioxusApp</string>
<key>CFBundleExecutable</key>
<string>DioxusApp</string>
<key>CFBundleVersion</key>
<string>0.1.0</string>
<key>CFBundleShortVersionString</key>
<string>0.1.0</string>
<key>CFBundleDevelopmentRegion</key>
<string>en_US</string>
<key>UILaunchStoryboardName</key>
<string></string>
<key>LSRequiresIPhoneOS</key>
<true/>
</dict>
</plist>

View file

@ -0,0 +1,38 @@
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
<key>CFBundleDisplayName</key>
<string>DioxusApp</string>
<key>CFBundleExecutable</key>
<string>DioxusApp</string>
<key>CFBundleIconFile</key>
<string>icon.icns</string>
<key>CFBundleIdentifier</key>
<string>org.dioxuslabs.dioxus-desktop</string>
<key>CFBundleInfoDictionaryVersion</key>
<string>6.0</string>
<key>CFBundleName</key>
<string>App</string>
<key>CFBundlePackageType</key>
<string>APPL</string>
<key>CFBundleShortVersionString</key>
<string>7.4.0</string>
<key>CFBundleVersion</key>
<string>1</string>
<key>LSApplicationCategoryType</key>
<string>public.app-category.social-networking</string>
<key>LSMinimumSystemVersion</key>
<string>10.15</string>
</dict>
</plist>

View file

View file

@ -1,209 +1,113 @@
use crate::builder::{BuildRequest, Stage, UpdateBuildProgress, UpdateStage};
use crate::Result;
use crate::TraceSrc;
use anyhow::Context;
use brotli::enc::BrotliEncoderParams;
use futures_channel::mpsc::UnboundedSender;
use manganis_cli_support::{process_file, AssetManifest, AssetManifestExt, AssetType};
use rayon::iter::{IntoParallelRefIterator, ParallelIterator};
use std::fs;
use manganis_core::{LinkSection, ResourceAsset};
use object::{read::archive::ArchiveFile, File as ObjectFile, Object, ObjectSection};
use serde::{Deserialize, Serialize};
use std::path::Path;
use std::sync::atomic::AtomicUsize;
use std::sync::Arc;
use std::{ffi::OsString, path::PathBuf};
use std::{fs::File, io::Write};
use walkdir::WalkDir;
use std::{collections::HashMap, path::PathBuf};
/// The temp file name for passing manganis json from linker to current exec.
pub const MG_JSON_OUT: &str = "mg-out";
pub fn asset_manifest(build: &BuildRequest) -> Option<AssetManifest> {
let file_path = build.target_out_dir().join(MG_JSON_OUT);
let read = fs::read_to_string(&file_path).ok()?;
_ = fs::remove_file(file_path);
let json: Vec<String> = serde_json::from_str(&read).unwrap();
Some(AssetManifest::load(json))
/// A manifest of all assets collected from dependencies
///
/// This will be filled in primarily by incremental compilation artifacts.
#[derive(Debug, PartialEq, Default, Clone, Serialize, Deserialize)]
pub(crate) struct AssetManifest {
/// Map of bundled asset name to the asset itself
pub(crate) assets: HashMap<PathBuf, ResourceAsset>,
}
/// Create a head file that contains all of the imports for assets that the user project uses
pub fn create_assets_head(build: &BuildRequest, manifest: &AssetManifest) -> Result<()> {
let out_dir = build.target_out_dir();
std::fs::create_dir_all(&out_dir)?;
let mut file = File::create(out_dir.join("__assets_head.html"))?;
file.write_all(manifest.head().as_bytes())?;
Ok(())
}
/// Process any assets collected from the binary
pub(crate) fn process_assets(
build: &BuildRequest,
manifest: &AssetManifest,
progress: &mut UnboundedSender<UpdateBuildProgress>,
) -> anyhow::Result<()> {
let static_asset_output_dir = build.target_out_dir();
std::fs::create_dir_all(&static_asset_output_dir)
.context("Failed to create static asset output directory")?;
let assets_finished = Arc::new(AtomicUsize::new(0));
let assets = manifest.assets();
let asset_count = assets.len();
assets.par_iter().try_for_each_init(
|| progress.clone(),
move |progress, asset| {
if let AssetType::File(file_asset) = asset {
match process_file(file_asset, &static_asset_output_dir) {
Ok(_) => {
// Update the progress
tracing::info!(dx_src = ?TraceSrc::Build, "Optimized static asset {file_asset}");
let assets_finished =
assets_finished.fetch_add(1, std::sync::atomic::Ordering::SeqCst);
_ = progress.start_send(UpdateBuildProgress {
stage: Stage::OptimizingAssets,
update: UpdateStage::SetProgress(
assets_finished as f64 / asset_count as f64,
),
});
}
Err(err) => {
tracing::error!(dx_src = ?TraceSrc::Build, "Failed to copy static asset: {}", err);
return Err(err);
}
}
}
Ok::<(), anyhow::Error>(())
},
)?;
Ok(())
}
/// A guard that sets up the environment for the web renderer to compile in. This guard sets the location that assets will be served from
pub(crate) struct AssetConfigDropGuard;
impl AssetConfigDropGuard {
pub fn new(base_path: Option<&str>) -> Self {
// Set up the collect asset config
let base = match base_path {
Some(base) => format!("/{}/", base.trim_matches('/')),
None => "/".to_string(),
};
manganis_cli_support::Config::default()
.with_assets_serve_location(base)
.save();
Self {}
}
}
impl Drop for AssetConfigDropGuard {
fn drop(&mut self) {
// Reset the config
manganis_cli_support::Config::default().save();
}
}
pub(crate) fn copy_dir_to(
src_dir: PathBuf,
dest_dir: PathBuf,
pre_compress: bool,
) -> std::io::Result<()> {
let entries = std::fs::read_dir(&src_dir)?;
let mut children: Vec<std::thread::JoinHandle<std::io::Result<()>>> = Vec::new();
for entry in entries.flatten() {
let entry_path = entry.path();
let path_relative_to_src = entry_path.strip_prefix(&src_dir).unwrap();
let output_file_location = dest_dir.join(path_relative_to_src);
children.push(std::thread::spawn(move || {
if entry.file_type()?.is_dir() {
// If the file is a directory, recursively copy it into the output directory
if let Err(err) =
copy_dir_to(entry_path.clone(), output_file_location, pre_compress)
{
tracing::error!(
dx_src = ?TraceSrc::Build,
"Failed to pre-compress directory {}: {}",
entry_path.display(),
err
);
}
} else {
// Make sure the directory exists
std::fs::create_dir_all(output_file_location.parent().unwrap())?;
// Copy the file to the output directory
std::fs::copy(&entry_path, &output_file_location)?;
// Then pre-compress the file if needed
if pre_compress {
if let Err(err) = pre_compress_file(&output_file_location) {
tracing::error!(
dx_src = ?TraceSrc::Build,
"Failed to pre-compress static assets {}: {}",
output_file_location.display(),
err
);
}
// If pre-compression isn't enabled, we should remove the old compressed file if it exists
} else if let Some(compressed_path) = compressed_path(&output_file_location) {
_ = std::fs::remove_file(compressed_path);
}
impl AssetManifest {
pub(crate) fn load_from_file(path: &Path) -> anyhow::Result<Self> {
let src = std::fs::read_to_string(path)
.context("Failed to read asset manifest from filesystem")?;
serde_json::from_str(&src)
.with_context(|| format!("Failed to parse asset manifest from {path:?}\n{src}"))
}
Ok(())
}));
}
for child in children {
child.join().unwrap()?;
}
Ok(())
}
/// Get the path to the compressed version of a file
fn compressed_path(path: &Path) -> Option<PathBuf> {
let new_extension = match path.extension() {
Some(ext) => {
if ext.to_string_lossy().to_lowercase().ends_with("br") {
return None;
}
let mut ext = ext.to_os_string();
ext.push(".br");
ext
}
None => OsString::from("br"),
};
Some(path.with_extension(new_extension))
}
/// pre-compress a file with brotli
pub(crate) fn pre_compress_file(path: &Path) -> std::io::Result<()> {
let Some(compressed_path) = compressed_path(path) else {
/// Fill this manifest from object/rlib files, typically extracted from the linker intercept
pub(crate) fn add_from_object_path(&mut self, path: PathBuf) -> anyhow::Result<()> {
let Some(ext) = path.extension() else {
return Ok(());
};
let file = std::fs::File::open(path)?;
let mut stream = std::io::BufReader::new(file);
let mut buffer = std::fs::File::create(compressed_path)?;
let params = BrotliEncoderParams::default();
brotli::BrotliCompress(&mut stream, &mut buffer, &params)?;
Ok(())
}
/// pre-compress all files in a folder
pub(crate) fn pre_compress_folder(path: &Path, pre_compress: bool) -> std::io::Result<()> {
let walk_dir = WalkDir::new(path);
for entry in walk_dir.into_iter().filter_map(|e| e.ok()) {
let entry_path = entry.path();
if entry_path.is_file() {
if pre_compress {
if let Err(err) = pre_compress_file(entry_path) {
tracing::error!(dx_src = ?TraceSrc::Build, "Failed to pre-compress file {entry_path:?}: {err}");
let Some(ext) = ext.to_str() else {
return Ok(());
};
let data = std::fs::read(path.clone())?;
match ext {
// Parse an unarchived object file
"o" => {
if let Ok(object) = object::File::parse(&*data) {
self.add_from_object_file(&object)?;
}
}
// If pre-compression isn't enabled, we should remove the old compressed file if it exists
else if let Some(compressed_path) = compressed_path(entry_path) {
_ = std::fs::remove_file(compressed_path);
// Parse an rlib as a collection of objects
"rlib" => {
if let Ok(archive) = object::read::archive::ArchiveFile::parse(&*data) {
self.add_from_archive_file(&archive, &data)?;
}
}
_ => {}
}
Ok(())
}
/// Fill this manifest from an rlib / ar file that contains many object files and their entries
fn add_from_archive_file(&mut self, archive: &ArchiveFile, data: &[u8]) -> object::Result<()> {
// Look through each archive member for object files.
// Read the archive member's binary data (we know it's an object file)
// And parse it with the normal `object::File::parse` to find the manganis string.
for member in archive.members() {
let member = member?;
let name = String::from_utf8_lossy(member.name()).to_string();
// Check if the archive member is an object file and parse it.
if name.ends_with(".o") {
let data = member.data(data)?;
let object = object::File::parse(data)?;
_ = self.add_from_object_file(&object);
}
}
Ok(())
}
/// Fill this manifest with whatever tables might come from the object file
fn add_from_object_file(&mut self, obj: &ObjectFile) -> anyhow::Result<()> {
for section in obj.sections() {
let Ok(section_name) = section.name() else {
continue;
};
// Check if the link section matches the asset section for one of the platforms we support. This may not be the current platform if the user is cross compiling
let matches = LinkSection::ALL
.iter()
.any(|x| x.link_section == section_name);
if !matches {
continue;
}
let bytes = section
.uncompressed_data()
.context("Could not read uncompressed data from object file")?;
let as_str = std::str::from_utf8(&bytes)
.context("object file contained non utf8 encoding")?
.chars()
.filter(|c| !c.is_control())
.collect::<String>();
let assets = serde_json::Deserializer::from_str(&as_str).into_iter::<ResourceAsset>();
for as_resource in assets.flatten() {
// Some platforms (e.g. macOS) start the manganis section with a null byte, we need to filter that out before we deserialize the JSON
self.assets
.insert(as_resource.absolute.clone(), as_resource);
}
}
Ok(())
}
}
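
As a usage note, here is a hypothetical sketch (not code from this diff) of filling the manifest directly by walking a directory of build artifacts. The CLI's actual flow, shown in the next file, intercepts the linker instead so rustc hands it exactly the right objects; only `AssetManifest` and `add_from_object_path` come from the code above, the rest is assumed:

```rust
use std::path::{Path, PathBuf};

// Hypothetical helper for illustration only: feed every .o / .rlib under `dir`
// into the AssetManifest defined above.
fn collect_from_dir(dir: &Path) -> anyhow::Result<AssetManifest> {
    let mut manifest = AssetManifest::default();
    for entry in walkdir::WalkDir::new(dir).into_iter().filter_map(Result::ok) {
        let path: PathBuf = entry.path().to_path_buf();
        let is_object = matches!(path.extension().and_then(|e| e.to_str()), Some("o" | "rlib"));
        if is_object {
            // Ignore parse failures - most artifacts won't contain manganis sections.
            _ = manifest.add_from_object_path(path);
        }
    }
    Ok(manifest)
}
```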

View file

@ -0,0 +1,404 @@
use super::{progress::ProgressTx, BuildArtifacts};
use crate::dioxus_crate::DioxusCrate;
use crate::Result;
use crate::{assets::AssetManifest, TraceSrc};
use crate::{build::BuildArgs, link::LinkAction};
use crate::{AppBundle, Platform};
use anyhow::Context;
use serde::Deserialize;
use std::{path::PathBuf, process::Stdio};
use tokio::{io::AsyncBufReadExt, process::Command};
#[derive(Clone, Debug)]
pub(crate) struct BuildRequest {
/// The configuration for the crate we are building
pub(crate) krate: DioxusCrate,
/// The arguments for the build
pub(crate) build: BuildArgs,
/// Status channel to send our progress updates to
pub(crate) progress: ProgressTx,
/// The target directory for the build
pub(crate) custom_target_dir: Option<PathBuf>,
}
impl BuildRequest {
pub fn new(krate: DioxusCrate, build: BuildArgs, progress: ProgressTx) -> Self {
Self {
build,
krate,
progress,
custom_target_dir: None,
}
}
/// Run the build command with a pretty loader, returning the executable output location
///
/// This will also run the fullstack build. Note that fullstack is handled separately within this
/// code flow rather than outside of it.
pub(crate) async fn build_all(self) -> Result<AppBundle> {
tracing::debug!("Running build command...");
let (app, server) =
futures_util::future::try_join(self.build_app(), self.build_server()).await?;
AppBundle::new(self, app, server).await
}
pub(crate) async fn build_app(&self) -> Result<BuildArtifacts> {
tracing::debug!("Building app...");
let exe = self.build_cargo().await?;
let assets = self.collect_assets().await?;
Ok(BuildArtifacts { exe, assets })
}
pub(crate) async fn build_server(&self) -> Result<Option<BuildArtifacts>> {
tracing::debug!("Building server...");
if !self.build.fullstack {
return Ok(None);
}
let mut cloned = self.clone();
cloned.build.platform = Some(Platform::Server);
Ok(Some(cloned.build_app().await?))
}
/// Run `cargo`, returning the location of the final executable
///
/// todo: add some stats here, like timing reports, crate-graph optimizations, etc
pub(crate) async fn build_cargo(&self) -> Result<PathBuf> {
tracing::debug!("Executing cargo...");
// Extract the unit count of the crate graph so build_cargo has more accurate data
let crate_count = self.get_unit_count_estimate().await;
// Update the status to show that we're starting the build and how many crates we expect to build
self.status_starting_build(crate_count);
let mut cmd = Command::new("cargo");
cmd.arg("rustc")
.current_dir(self.krate.crate_dir())
.arg("--message-format")
.arg("json-diagnostic-rendered-ansi")
.args(self.build_arguments())
.env("RUSTFLAGS", self.rust_flags());
if let Some(target_dir) = self.custom_target_dir.as_ref() {
cmd.env("CARGO_TARGET_DIR", target_dir);
}
// Android needs a special linker since the linker is actually tied to the android toolchain.
// For the sake of simplicity, we're going to pass the linker here using ourselves as the linker,
// but in reality we could simply use the android toolchain's linker as the path.
//
// We don't want to overwrite the user's .cargo/config.toml since that gets committed to git
// and we want everyone's install to be the same.
if self.build.platform() == Platform::Android {
cmd.env(
LinkAction::ENV_VAR_NAME,
LinkAction::LinkAndroid {
linker: "/Users/jonkelley/Library/Android/sdk/ndk/25.2.9519653/toolchains/llvm/prebuilt/darwin-x86_64/bin/aarch64-linux-android24-clang".into(),
extra_flags: vec![],
}
.to_json(),
);
}
tracing::trace!(dx_src = ?TraceSrc::Build, "Rust cargo args: {:?}", cmd);
let mut child = cmd
.stdout(Stdio::piped())
.stderr(Stdio::piped())
.spawn()
.context("Failed to spawn cargo build")?;
let stdout = tokio::io::BufReader::new(child.stdout.take().unwrap());
let stderr = tokio::io::BufReader::new(child.stderr.take().unwrap());
let mut output_location = None;
let mut stdout = stdout.lines();
let mut stderr = stderr.lines();
let mut units_compiled = 0;
loop {
use cargo_metadata::Message;
let line = tokio::select! {
Ok(Some(line)) = stdout.next_line() => line,
Ok(Some(line)) = stderr.next_line() => line,
else => break,
};
let mut deserializer = serde_json::Deserializer::from_str(line.trim());
deserializer.disable_recursion_limit();
let message =
Message::deserialize(&mut deserializer).unwrap_or(Message::TextLine(line));
match message {
Message::BuildScriptExecuted(_) => units_compiled += 1,
Message::TextLine(line) => self.status_build_message(line),
Message::CompilerMessage(msg) => self.status_build_diagnostic(msg),
Message::CompilerArtifact(artifact) => {
units_compiled += 1;
match artifact.executable {
Some(executable) => output_location = Some(executable.into()),
None => self.status_build_progress(
units_compiled,
crate_count,
artifact.target.name,
self.build.platform(),
),
}
}
Message::BuildFinished(finished) => {
if !finished.success {
return Err(anyhow::anyhow!(
"Cargo build failed, signaled by the compiler"
)
.into());
}
}
_ => {}
}
}
if output_location.is_none() {
tracing::error!("Cargo build failed - no output location");
}
let out_location = output_location.context("Build did not return an executable")?;
tracing::debug!(
"Build completed successfully - output location: {:?}",
out_location
);
Ok(out_location)
}
/// Run the linker intercept and then fill in our AssetManifest from the incremental artifacts
///
/// This will execute `dx` with an env var set to force `dx` to operate as a linker, and then
/// traverse the .o and .rlib files rustc passes that new `dx` instance, collecting the link
/// tables marked by manganis and parsing them as a ResourceAsset.
pub(crate) async fn collect_assets(&self) -> anyhow::Result<AssetManifest> {
tracing::debug!("Collecting assets ...");
// If assets are skipped, we don't need to collect them
if self.build.skip_assets {
return Ok(AssetManifest::default());
}
// Create a temp file to put the output of the args
// We need to do this since rustc won't actually print the link args to stdout, so we need to
// give `dx` a file to dump its env::args into
let tmp_file = tempfile::NamedTempFile::new()?;
// Run `cargo rustc` again, but this time with a custom linker (dx) and an env var to force
// `dx` to act as a linker
//
// This will force `dx` to look through the incremental cache and find the assets from the previous build
Command::new("cargo")
// .env("RUSTFLAGS", self.rust_flags())
.arg("rustc")
.args(self.build_arguments())
.arg("--offline") /* don't use the network, should already be resolved */
.arg("--")
.arg(format!(
"-Clinker={}",
std::env::current_exe()
.unwrap()
.canonicalize()
.unwrap()
.display()
))
.env(
LinkAction::ENV_VAR_NAME,
LinkAction::BuildAssetManifest {
destination: tmp_file.path().to_path_buf(),
}
.to_json(),
)
.stdout(Stdio::piped())
.stderr(Stdio::piped())
.output()
.await?;
// The linker wrote the manifest to the temp file, let's load it!
AssetManifest::load_from_file(tmp_file.path())
}
/// Create a list of arguments for cargo builds
pub(crate) fn build_arguments(&self) -> Vec<String> {
let mut cargo_args = Vec::new();
// Set the target, profile and features that vary between the app and server builds
if self.build.platform() == Platform::Server {
cargo_args.push("--profile".to_string());
cargo_args.push(self.build.server_profile.to_string());
} else {
if let Some(custom_profile) = &self.build.profile {
cargo_args.push("--profile".to_string());
cargo_args.push(custom_profile.to_string());
}
// todo: use the right arch based on the current arch
let custom_target = match self.build.platform() {
Platform::Web => Some("wasm32-unknown-unknown"),
Platform::Ios => match self.build.target_args.device {
Some(true) => Some("aarch64-apple-ios"),
_ => Some("aarch64-apple-ios-sim"),
},
Platform::Android => Some("aarch64-linux-android"),
Platform::Server => None,
// we're assuming we're building for the native platform for now... if you're cross-compiling
// the targets here might be different
Platform::MacOS => None,
Platform::Windows => None,
Platform::Linux => None,
Platform::Liveview => None,
};
if let Some(target) = custom_target.or(self.build.target_args.target.as_deref()) {
cargo_args.push("--target".to_string());
cargo_args.push(target.to_string());
}
}
if self.build.release {
cargo_args.push("--release".to_string());
}
if self.build.verbose {
cargo_args.push("--verbose".to_string());
} else {
cargo_args.push("--quiet".to_string());
}
let features = self.target_features();
if !features.is_empty() {
cargo_args.push("--features".to_string());
cargo_args.push(features.join(" "));
}
if let Some(ref package) = self.build.target_args.package {
cargo_args.push(String::from("-p"));
cargo_args.push(package.clone());
}
cargo_args.append(&mut self.build.cargo_args.clone());
match self.krate.executable_type() {
krates::cm::TargetKind::Bin => cargo_args.push("--bin".to_string()),
krates::cm::TargetKind::Lib => cargo_args.push("--lib".to_string()),
krates::cm::TargetKind::Example => cargo_args.push("--example".to_string()),
_ => {}
};
cargo_args.push(self.krate.executable_name().to_string());
tracing::debug!(dx_src = ?TraceSrc::Build, "cargo args: {:?}", cargo_args);
cargo_args
}
pub(crate) fn rust_flags(&self) -> String {
let mut rust_flags = std::env::var("RUSTFLAGS").unwrap_or_default();
if self.build.platform() == Platform::Android {
let cur_exe = std::env::current_exe().unwrap();
rust_flags.push_str(format!(" -Clinker={}", cur_exe.display()).as_str());
rust_flags.push_str(" -Clink-arg=-landroid");
rust_flags.push_str(" -Clink-arg=-llog");
rust_flags.push_str(" -Clink-arg=-lOpenSLES");
rust_flags.push_str(" -Clink-arg=-Wl,--export-dynamic");
}
rust_flags
}
/// Create the list of features we need to pass to cargo to build the app by merging together
/// either the client or server features depending on if we're building a server or not.
pub(crate) fn target_features(&self) -> Vec<String> {
let mut features = self.build.target_args.features.clone();
if self.build.platform() == Platform::Server {
features.extend(self.build.target_args.server_features.clone());
} else {
features.extend(self.build.target_args.client_features.clone());
}
features
}
pub(crate) fn all_target_features(&self) -> Vec<String> {
let mut features = self.target_features();
if !self.build.target_args.no_default_features {
features.extend(
self.krate
.package()
.features
.get("default")
.cloned()
.unwrap_or_default(),
);
}
features.dedup();
features
}
/// Try to get the unit graph for the crate. This is a nightly only feature which may not be available with the current version of rustc the user has installed.
pub(crate) async fn get_unit_count(&self) -> crate::Result<usize> {
#[derive(Debug, Deserialize)]
struct UnitGraph {
units: Vec<serde_json::Value>,
}
let output = tokio::process::Command::new("cargo")
.arg("+nightly")
.arg("build")
.arg("--unit-graph")
.arg("-Z")
.arg("unstable-options")
.args(self.build_arguments())
.stdout(Stdio::piped())
.stderr(Stdio::piped())
.output()
.await?;
if !output.status.success() {
return Err(anyhow::anyhow!("Failed to get unit count").into());
}
let output_text = String::from_utf8(output.stdout).context("Failed to get unit count")?;
let graph: UnitGraph =
serde_json::from_str(&output_text).context("Failed to get unit count")?;
Ok(graph.units.len())
}
/// Get an estimate of the number of units in the crate. If nightly rustc is not available, this will return an estimate of the number of units in the crate based on cargo metadata.
/// TODO: always use https://doc.rust-lang.org/nightly/cargo/reference/unstable.html#unit-graph once it is stable
pub(crate) async fn get_unit_count_estimate(&self) -> usize {
// Try to get it from nightly
self.get_unit_count().await.unwrap_or_else(|_| {
// Otherwise, use cargo metadata
(self
.krate
.krates
.krates_filtered(krates::DepKind::Dev)
.iter()
.map(|k| k.targets.len())
.sum::<usize>() as f64
/ 3.5) as usize
})
}
}
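
For context, a rough sketch of the other half of the linker-intercept handshake that `collect_assets` above describes. This is not part of this diff; the real logic lives in the CLI's `link` module, and only `LinkAction` and `AssetManifest::add_from_object_path` are taken from this commit - the argument handling and serialization details are assumed:

```rust
use std::path::PathBuf;

// Sketch: when `dx` is re-invoked by cargo as the "linker", it doesn't link anything.
// It walks the .o/.rlib arguments rustc passed it, pulls the manganis link sections
// out of them, and writes the resulting manifest to the destination file that
// collect_assets() above is waiting on.
fn run_as_linker(action: LinkAction) -> anyhow::Result<()> {
    if let LinkAction::BuildAssetManifest { destination } = action {
        let mut manifest = AssetManifest::default();
        for arg in std::env::args().skip(1) {
            let path = PathBuf::from(&arg);
            if matches!(path.extension().and_then(|e| e.to_str()), Some("o" | "rlib")) {
                // Non-object arguments and unparsable files are simply skipped.
                _ = manifest.add_from_object_path(path);
            }
        }
        // Assumed to match the JSON format AssetManifest::load_from_file expects.
        std::fs::write(&destination, serde_json::to_string(&manifest)?)?;
    }
    Ok(())
}
```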

View file

@ -0,0 +1,722 @@
use crate::Result;
use crate::{assets::AssetManifest, TraceSrc};
use crate::{BuildRequest, Platform};
use anyhow::Context;
use rayon::prelude::{IntoParallelRefIterator, ParallelIterator};
use std::sync::atomic::AtomicUsize;
use std::{
fs::create_dir_all,
path::{Path, PathBuf},
};
use wasm_bindgen_cli_support::Bindgen;
/// The end result of a build.
///
/// Contains the final asset manifest, the executables, and the workdir.
///
/// Every dioxus app can have an optional server executable which will influence the final bundle.
/// This is built in parallel with the app executable during the `build` phase and the progress/status
/// of the build is aggregated.
///
/// The server will *always* be dropped into the `web` folder since it is considered "web" in nature,
/// and will likely need to be combined with the public dir to be useful.
///
/// We do our best to assemble ready-to-go bundles here, such that the "bundle" step for each platform
/// can just use the build dir
///
/// When we write the AppBundle to a folder, it'll contain each bundle for each platform under the app's name:
/// ```
/// dog-app/
/// build/
/// web/
/// server.exe
/// assets/
/// some-secret-asset.txt (a server-side asset)
/// public/
/// index.html
/// assets/
/// logo.png
/// desktop/
/// App.app
/// App.appimage
/// App.exe
/// server/
/// server
/// assets/
/// some-secret-asset.txt (a server-side asset)
/// ios/
/// App.app
/// App.ipa
/// android/
/// App.apk
/// bundle/
/// build.json
/// Desktop.app
/// Mobile_x64.ipa
/// Mobile_arm64.ipa
/// Mobile_rosetta.ipa
/// web.appimage
/// web/
/// server.exe
/// assets/
/// some-secret-asset.txt
/// public/
/// index.html
/// assets/
/// logo.png
/// style.css
/// ```
///
/// When deploying, the build.json file will provide all the metadata that dx-deploy will use to
/// push the app to stores, set up infra, manage versions, etc.
///
/// Each bundle is named after the app plus some metadata, such that the metadata can easily be
/// trimmed off when distributing.
///
/// The idea here is that we can run any of the programs in the same way that they're deployed.
///
///
/// ## Bundle structure links
/// - apple: https://developer.apple.com/documentation/bundleresources/placing_content_in_a_bundle
/// - appimage: https://docs.appimage.org/packaging-guide/manual.html#ref-manual
///
/// ## Extra links
/// - xbuild: https://github.com/rust-mobile/xbuild/blob/master/xbuild/src/command/build.rs
#[derive(Debug)]
pub(crate) struct AppBundle {
pub(crate) build: BuildRequest,
pub(crate) app: BuildArtifacts,
pub(crate) server: Option<BuildArtifacts>,
}
#[derive(Debug)]
pub struct BuildArtifacts {
pub(crate) exe: PathBuf,
pub(crate) assets: AssetManifest,
}
impl AppBundle {
/// ## Web:
/// Create a folder that is somewhat similar to an app-image (exe + asset)
/// The server is dropped into the `web` folder, even if there's no `public` folder.
/// If there's no server (SPA/static-gen), we still use the `web` folder, but it only contains the
/// public folder.
/// ```
/// web/
/// server
/// assets/
/// public/
/// index.html
/// wasm/
/// app.wasm
/// glue.js
/// snippets/
/// ...
/// assets/
/// logo.png
/// ```
///
/// ## Linux:
/// https://docs.appimage.org/reference/appdir.html#ref-appdir
/// current_exe.join("Assets")
/// ```
/// app.appimage/
/// AppRun
/// app.desktop
/// package.json
/// assets/
/// logo.png
/// ```
///
/// ## macOS
/// We simply use the macOS bundle format, where binaries live in `Contents/MacOS` and assets in `Contents/Resources`.
/// We put assets in an assets dir such that it generally matches every other platform and we can
/// output `/assets/blah` from manganis.
/// ```
/// App.app/
/// Contents/
/// Info.plist
/// MacOS/
/// Frameworks/
/// Resources/
/// assets/
/// blah.icns
/// blah.png
/// CodeResources
/// _CodeSignature/
/// ```
///
/// ## iOS
/// Not the same as macOS! iOS apps are a bit "flattened" in comparison: a simpler format, presumably
/// because most iOS apps don't ship frameworks/plugins and such.
///
/// todo(jon): include the signing and entitlements in this format diagram.
/// ```
/// App.app/
/// main
/// assets/
/// ```
///
/// ## Android:
///
/// Currently we need to generate a `src` type structure, not a pre-packaged apk structure, since
/// we need to compile kotlin and java. This pushes us into using gradle and following a structure
/// similar to that of cargo mobile2. Eventually I'd like to slim this down (drop buildSrc) and
/// drive the kotlin build ourselves. This would let us drop gradle (yay! no plugins!) but requires
/// us to manage dependencies (like kotlinc) ourselves (yuck!).
///
/// https://github.com/WanghongLin/miscellaneous/blob/master/tools/build-apk-manually.sh
///
/// Unfortunately, it seems that while we can drop the `android` build plugin, we will still need
/// Gradle, since Kotlin is basically Gradle-only.
///
/// Pre-build:
/// ```
/// app.apk/
/// .gradle
/// app/
/// src/
/// main/
/// assets/
/// jniLibs/
/// java/
/// kotlin/
/// res/
/// AndroidManifest.xml
/// build.gradle.kts
/// proguard-rules.pro
/// buildSrc/
/// build.gradle.kts
/// src/
/// main/
/// kotlin/
/// BuildTask.kt
/// build.gradle.kts
/// gradle.properties
/// gradlew
/// gradlew.bat
/// settings.gradle
/// ```
///
/// Final build:
/// ```
/// app.apk/
/// AndroidManifest.xml
/// classes.dex
/// assets/
/// logo.png
/// lib/
/// armeabi-v7a/
/// libmyapp.so
/// arm64-v8a/
/// libmyapp.so
/// ```
/// Notice that we *could* feasibly build this ourselves :)
///
/// ## Windows:
/// https://superuser.com/questions/749447/creating-a-single-file-executable-from-a-directory-in-windows
/// Windows does not provide an AppImage format, so instead we're going to build the same folder
/// structure as an AppImage, but when distributing, we'll create a .exe that embeds the resources
/// as an embedded .zip file. When the app runs, it will implicitly unzip its resources into the
/// Program Files folder. Any subsequent launches of the parent .exe will simply call the AppRun.exe
/// entrypoint in the associated Program Files folder.
///
/// This is, in essence, the same as an installer, so we might eventually just support something like msi/msix
/// which functionally do the same thing but with a sleeker UI.
///
/// This means no installers are required and we can bake an updater into the host exe.
///
/// ## Handling asset lookups:
/// current_exe.join("assets")
/// ```
/// app.appimage/
/// main.exe
/// main.desktop
/// package.json
/// assets/
/// logo.png
/// ```
///
/// Since we support just a few locations, we could just search for the first that exists
/// - usr
/// - ../Resources
/// - assets
/// - Assets
/// - $cwd/assets
///
/// ```
/// assets::root() ->
/// mac -> ../Resources/
/// ios -> ../Resources/
/// android -> assets/
/// server -> assets/
/// liveview -> assets/
/// web -> /assets/
/// root().join(bundled)
/// ```
pub(crate) async fn new(
request: BuildRequest,
app: BuildArtifacts,
server: Option<BuildArtifacts>,
) -> Result<Self> {
let bundle = Self {
app,
server,
build: request,
};
tracing::debug!("Assembling app bundle");
bundle.build.status_start_bundle();
bundle.prepare_build_dir()?;
bundle.write_main_executable().await?;
bundle.write_server_executable().await?;
bundle.write_assets().await?;
bundle.write_metadata().await?;
bundle.optimize().await?;
Ok(bundle)
}
/// We only really currently care about:
///
/// - app dir (.app, .exe, .apk, etc)
/// - assets dir
/// - exe dir (.exe, .app, .apk, etc)
/// - extra scaffolding
///
/// Note that these directories aren't guaranteed to be distinct from one another on every platform
fn prepare_build_dir(&self) -> Result<()> {
create_dir_all(self.app_dir())?;
create_dir_all(self.exe_dir())?;
create_dir_all(self.asset_dir())?;
// we could download the templates from somewhere (github?) but after having banged my head against
// cargo-mobile2 for ages, I give up with that. We're literally just going to hardcode the templates
// by writing them here.
if let Platform::Android = self.build.build.platform() {}
Ok(())
}
/// Take the output of rustc and make it into the main exe of the bundle
///
/// For wasm, we'll want to run `wasm-bindgen` to make it a wasm binary along with some other optimizations
/// Other platforms we might do some stripping or other optimizations
/// Move the executable to the workdir
async fn write_main_executable(&self) -> Result<()> {
match self.build.build.platform() {
// Run wasm-bindgen on the wasm binary and set its output to be in the bundle folder
// Also run wasm-opt on the wasm binary, and set up the index.html since that's also the "executable".
//
// The wasm stuff will be in a folder called "wasm" in the workdir.
//
// Final output format:
// ```
// dx/
// app/
// web/
// bundle/
// build/
// public/
// index.html
// wasm/
// app.wasm
// glue.js
// snippets/
// ...
// assets/
// logo.png
// ```
Platform::Web => {
// Run wasm-bindgen and drop its output into the `wasm` folder of the bundle
self.build.status_wasm_bindgen_start();
self.run_wasm_bindgen(&self.app.exe.with_extension("wasm"), &self.exe_dir())
.await?;
// Only run wasm-opt if the feature is enabled
// Wasm-opt has an expensive build script that makes it annoying to keep enabled for iterative dev
// We put it behind the "wasm-opt" feature flag so that it can be disabled when iterating on the cli
self.build.status_wasm_opt_start();
self.run_wasm_opt(&self.exe_dir())?;
// Write the index.html file with the pre-configured contents we got from pre-rendering
std::fs::write(
self.app_dir().join("index.html"),
self.build.prepare_html()?,
)?;
}
// this will require some extra oomf to get the multi architecture builds...
// for now, we just copy the exe into the current arch (which, sorry, is hardcoded for my m1)
// we'll want to do multi-arch builds in the future, so there won't be *one* exe dir to worry about
// eventually `exe_dir` and `main_exe` will need to take in an arch and return the right exe path
//
// todo(jon): maybe just symlink this rather than copy it?
Platform::Android => {
// https://github.com/rust-mobile/xbuild/blob/master/xbuild/template/lib.rs
// https://github.com/rust-mobile/xbuild/blob/master/apk/src/lib.rs#L19
std::fs::copy(&self.app.exe, self.main_exe())?;
}
// These are all super simple, just copy the exe into the folder
// eventually, perhaps, maybe strip + encrypt the exe?
Platform::MacOS
| Platform::Windows
| Platform::Linux
| Platform::Ios
| Platform::Liveview
| Platform::Server => {
std::fs::copy(&self.app.exe, self.main_exe())?;
}
}
Ok(())
}
/// Copy the assets out of the manifest and into the target location
///
/// Should be the same on all platforms - just copy over the assets from the manifest into the output directory
async fn write_assets(&self) -> Result<()> {
// Server doesn't need assets - web will provide them
if self.build.build.platform() == Platform::Server {
return Ok(());
}
let asset_dir = self.asset_dir();
// First, clear the asset dir
// todo(jon): cache the asset dir, removing old files and only copying new ones that changed since the last build
_ = std::fs::remove_dir_all(&asset_dir);
_ = create_dir_all(&asset_dir);
// todo(jon): we also want to eventually include options for each asset's optimization and compression, which we currently aren't
let mut assets_to_transfer = vec![];
// Queue the bundled assets
for bundled in self.app.assets.assets.values() {
let from = bundled.absolute.clone();
let to = asset_dir.join(&bundled.bundled);
tracing::debug!("Copying asset {from:?} to {to:?}");
assets_to_transfer.push((from, to));
}
// And then queue the legacy assets
// ideally, one day, we can just check the rsx!{} calls for references to assets
for from in self.build.krate.legacy_asset_dir_files() {
let to = asset_dir.join(from.file_name().unwrap());
tracing::debug!("Copying legacy asset {from:?} to {to:?}");
assets_to_transfer.push((from, to));
}
let asset_count = assets_to_transfer.len();
let assets_finished = AtomicUsize::new(0);
// Copy the assets over in parallel, keeping track of progress with an atomic counter
// todo: we want to use the fastfs variant that knows how to parallelize folders, too
assets_to_transfer.par_iter().try_for_each(|(from, to)| {
self.build.status_copying_asset(
assets_finished.fetch_add(0, std::sync::atomic::Ordering::SeqCst),
asset_count,
from.clone(),
);
// todo(jon): implement optimize + pre_compress on the asset type
let res = crate::fastfs::copy_asset(from, to);
if let Err(err) = res.as_ref() {
tracing::error!("Failed to copy asset {from:?}: {err}");
}
self.build.status_copying_asset(
assets_finished.fetch_add(1, std::sync::atomic::Ordering::SeqCst) + 1,
asset_count,
from.clone(),
);
res.map(|_| ())
})?;
Ok(())
}
/// The directory in which we'll put the main exe
///
/// Mac, Android, Web are a little weird
/// - mac wants to be in Contents/MacOS
/// - android wants to be in jniLibs/arm64-v8a (or others, depending on the platform / architecture)
/// - web wants to be in wasm (which we don't really need; we could just drop the wasm into public and it would work)
///
/// I think all others are just in the root folder
///
/// todo(jon): investigate if we need to put .wasm in `wasm`. It kinda leaks implementation details, which ideally we don't want to do.
pub fn exe_dir(&self) -> PathBuf {
match self.build.build.platform() {
Platform::MacOS => self.app_dir().join("Contents").join("MacOS"),
Platform::Android => self.app_dir().join("jniLibs").join("arm64-v8a"),
Platform::Web => self.app_dir().join("wasm"),
// these are all the same, I think?
Platform::Windows
| Platform::Linux
| Platform::Ios
| Platform::Server
| Platform::Liveview => self.app_dir(),
}
}
/// The item that we'll try to run directly if we need to.
///
/// todo(jon): we should name the app properly instead of making up the exe name. It's kinda okay for dev mode, but def not okay for prod
pub fn main_exe(&self) -> PathBuf {
// todo(jon): this could just be named `App` or the name of the app like `Raycast` in `Raycast.app`
match self.build.build.platform() {
Platform::MacOS => self.exe_dir().join("DioxusApp"),
Platform::Ios => self.exe_dir().join("DioxusApp"),
Platform::Server => self.exe_dir().join("server"),
Platform::Liveview => self.exe_dir().join("server"),
Platform::Windows => self.exe_dir().join("app.exe"),
Platform::Linux => self.exe_dir().join("AppRun"), // from the appimage spec, the root exe needs to be named `AppRun`
Platform::Android => self.exe_dir().join("libdioxusapp.so"), // from the apk spec, the root exe will actually be a shared library
Platform::Web => unimplemented!("there's no main exe on web"), // this will be wrong, I think, but not important?
}
}
pub fn asset_dir(&self) -> PathBuf {
match self.build.build.platform() {
// macos why are you weird
Platform::MacOS => self
.app_dir()
.join("Contents")
.join("Resources")
.join("assets"),
// everyone else is soooo normal, just app/assets :)
Platform::Web
| Platform::Ios
| Platform::Windows
| Platform::Linux
| Platform::Android
| Platform::Server
| Platform::Liveview => self.app_dir().join("assets"),
}
}
/// We always put the server in the `web` folder!
/// Only the `web` target will generate a `public` folder though
async fn write_server_executable(&self) -> Result<()> {
if let Some(server) = &self.server {
let to = self
.server_exe()
.expect("server should be set if we're building a server");
std::fs::create_dir_all(self.server_exe().unwrap().parent().unwrap())?;
tracing::debug!("Copying server executable from {server:?} to {to:?}");
// Remove the old server executable if it exists, since copying might corrupt it :(
// todo(jon): do this in more places, I think
_ = std::fs::remove_file(&to);
std::fs::copy(&server.exe, to)?;
}
Ok(())
}
/// todo(jon): use handlebars templates instead of these prebaked templates
async fn write_metadata(&self) -> Result<()> {
// write the Info.plist file
match self.build.build.platform() {
Platform::MacOS => {
let src = include_str!("../../assets/macos/mac.plist");
let dest = self.app_dir().join("Contents").join("Info.plist");
std::fs::write(dest, src)?;
}
Platform::Ios => {
let src = include_str!("../../assets/ios/ios.plist");
let dest = self.app_dir().join("Info.plist");
std::fs::write(dest, src)?;
}
// AndroidManifest.xml
// er.... maybe even all the kotlin/java/gradle stuff?
Platform::Android => {}
// Probably some custom format or a plist file (haha)
// When we do the proper bundle, we'll need to do something with wix templates, I think?
Platform::Windows => {}
// eventually we'll create the .appimage file, I guess?
Platform::Linux => {}
// These are served as folders, not appimages, so we don't need to do anything special (I think?)
// Eventually maybe write some secrets/.env files for the server?
// We could also distribute them as a deb/rpm for linux and msi for windows
Platform::Web => {}
Platform::Server => {}
Platform::Liveview => {}
}
Ok(())
}
/// Run the optimizers, obfuscators, minimizers, signers, etc
pub(crate) async fn optimize(&self) -> Result<()> {
match self.build.build.platform() {
Platform::Web => {
// Compress the asset dir
// If pre-compressing is enabled, we can pre_compress the wasm-bindgen output
let pre_compress = self
.build
.krate
.should_pre_compress_web_assets(self.build.build.release);
let bindgen_dir = self.exe_dir();
tokio::task::spawn_blocking(move || {
crate::fastfs::pre_compress_folder(&bindgen_dir, pre_compress)
})
.await
.unwrap()?;
}
Platform::MacOS => {}
Platform::Windows => {}
Platform::Linux => {}
Platform::Ios => {}
Platform::Android => {}
Platform::Server => {}
Platform::Liveview => {}
}
Ok(())
}
pub(crate) fn server_exe(&self) -> Option<PathBuf> {
if let Some(_server) = &self.server {
return Some(
self.build
.krate
.build_dir(Platform::Server, self.build.build.release)
.join("server"),
);
}
None
}
/// returns the path to .app/.apk/.appimage folder
///
/// we only add an extension to the folders where it sorta matters that it's named with the extension.
/// for example, on mac, the `.app` indicates we can `open` it and it pulls in icons, dylibs, etc.
///
/// for our simulator-based platforms, this is less important since they need to be zipped up anyways
/// to run in the simulator.
///
/// For windows/linux, it's also not important since we're just running the exe directly out of the folder
pub(crate) fn app_dir(&self) -> PathBuf {
let platform_dir = self
.build
.krate
.build_dir(self.build.build.platform(), self.build.build.release);
match self.build.build.platform() {
Platform::Web => platform_dir.join("public"),
Platform::Server => platform_dir.clone(), // ends up *next* to the public folder
// These might not actually need to be called `.app` but it does let us run these with `open`
Platform::MacOS => platform_dir.join("DioxusApp.app"),
Platform::Ios => platform_dir.join("DioxusApp.app"),
// in theory, these all could end up in the build dir
Platform::Linux => platform_dir.join("app"), // .appimage (after bundling)
Platform::Windows => platform_dir.join("app"), // .exe (after bundling)
Platform::Android => platform_dir.join("app"), // .apk (after bundling)
Platform::Liveview => platform_dir.join("app"), // .exe (after bundling)
}
}
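// Worked example (paths assumed from `asset_dir`/`app_dir` above, not from the original source):
// a bundled `logo.png` ends up at
// - macOS:              DioxusApp.app/Contents/Resources/assets/logo.png
// - web:                public/assets/logo.png
// - windows/linux/etc:  app/assets/logo.png
// each relative to the platform's build dir, which matches the `assets::root()` lookup sketched
// in the doc comment near the top of this file.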
pub(crate) async fn run_wasm_bindgen(
&self,
input_path: &Path,
bindgen_outdir: &Path,
) -> anyhow::Result<()> {
tracing::debug!(dx_src = ?TraceSrc::Bundle, "Running wasm-bindgen");
let input_path = input_path.to_path_buf();
let bindgen_outdir = bindgen_outdir.to_path_buf();
let name = self.build.krate.executable_name().to_string();
let keep_debug = self.build.krate.config.web.wasm_opt.debug || (!self.build.build.release);
let start = std::time::Instant::now();
tokio::task::spawn_blocking(move || {
Bindgen::new()
.input_path(&input_path)
.web(true)
.unwrap()
.debug(keep_debug)
.demangle(keep_debug)
.keep_debug(keep_debug)
.reference_types(true)
.remove_name_section(!keep_debug)
.remove_producers_section(!keep_debug)
.out_name(&name)
.generate(&bindgen_outdir)
})
.await
.context("Wasm-bindgen crashed while optimizing the wasm binary")?
.context("Failed to generate wasm-bindgen bindings")?;
tracing::debug!(dx_src = ?TraceSrc::Bundle, "wasm-bindgen complete in {:?}", start.elapsed());
Ok(())
}
#[allow(unused)]
pub(crate) fn run_wasm_opt(&self, bindgen_outdir: &std::path::Path) -> Result<()> {
if !self.build.build.release {
return Ok(());
};
self.build.status_optimizing_wasm();
#[cfg(feature = "optimizations")]
{
use crate::config::WasmOptLevel;
tracing::info!(dx_src = ?TraceSrc::Build, "Running optimization with wasm-opt...");
let mut options = match self.build.krate.config.web.wasm_opt.level {
WasmOptLevel::Z => {
wasm_opt::OptimizationOptions::new_optimize_for_size_aggressively()
}
WasmOptLevel::S => wasm_opt::OptimizationOptions::new_optimize_for_size(),
WasmOptLevel::Zero => wasm_opt::OptimizationOptions::new_opt_level_0(),
WasmOptLevel::One => wasm_opt::OptimizationOptions::new_opt_level_1(),
WasmOptLevel::Two => wasm_opt::OptimizationOptions::new_opt_level_2(),
WasmOptLevel::Three => wasm_opt::OptimizationOptions::new_opt_level_3(),
WasmOptLevel::Four => wasm_opt::OptimizationOptions::new_opt_level_4(),
};
let wasm_file =
bindgen_outdir.join(format!("{}_bg.wasm", self.build.krate.executable_name()));
let old_size = wasm_file.metadata()?.len();
options
// WASM bindgen relies on reference types
.enable_feature(wasm_opt::Feature::ReferenceTypes)
.debug_info(self.build.krate.config.web.wasm_opt.debug)
.run(&wasm_file, &wasm_file)
.map_err(|err| crate::Error::Other(anyhow::anyhow!(err)))?;
let new_size = wasm_file.metadata()?.len();
tracing::debug!(
dx_src = ?TraceSrc::Build,
"wasm-opt reduced WASM size from {} to {} ({:2}%)",
old_size,
new_size,
(new_size as f64 - old_size as f64) / old_size as f64 * 100.0
);
}
Ok(())
}
}


@@ -1,272 +0,0 @@
use super::web::install_web_build_tooling;
use super::BuildRequest;
use super::BuildResult;
use super::TargetPlatform;
use crate::assets::copy_dir_to;
use crate::assets::create_assets_head;
use crate::assets::{asset_manifest, process_assets, AssetConfigDropGuard};
use crate::builder::progress::build_cargo;
use crate::builder::progress::CargoBuildResult;
use crate::builder::progress::Stage;
use crate::builder::progress::UpdateBuildProgress;
use crate::builder::progress::UpdateStage;
use crate::config::Platform;
use crate::link::LinkCommand;
use crate::Result;
use crate::TraceSrc;
use anyhow::Context;
use futures_channel::mpsc::UnboundedSender;
use manganis_cli_support::AssetManifest;
use manganis_cli_support::ManganisSupportGuard;
use std::fs::create_dir_all;
use std::path::PathBuf;
use tracing::error;
impl BuildRequest {
/// Create a list of arguments for cargo builds
pub(crate) fn build_arguments(&self) -> Vec<String> {
let mut cargo_args = Vec::new();
if self.build_arguments.release {
cargo_args.push("--release".to_string());
}
if self.build_arguments.verbose {
cargo_args.push("--verbose".to_string());
} else {
cargo_args.push("--quiet".to_string());
}
if let Some(custom_profile) = &self.build_arguments.profile {
cargo_args.push("--profile".to_string());
cargo_args.push(custom_profile.to_string());
}
if !self.build_arguments.target_args.features.is_empty() {
let features_str = self.build_arguments.target_args.features.join(" ");
cargo_args.push("--features".to_string());
cargo_args.push(features_str);
}
if let Some(target) = self
.targeting_web()
.then_some("wasm32-unknown-unknown")
.or(self.build_arguments.target_args.target.as_deref())
{
cargo_args.push("--target".to_string());
cargo_args.push(target.to_string());
}
if let Some(ref platform) = self.build_arguments.target_args.package {
cargo_args.push(String::from("-p"));
cargo_args.push(platform.clone());
}
cargo_args.append(&mut self.build_arguments.cargo_args.clone());
match self.dioxus_crate.executable_type() {
krates::cm::TargetKind::Bin => {
cargo_args.push("--bin".to_string());
}
krates::cm::TargetKind::Lib => {
cargo_args.push("--lib".to_string());
}
krates::cm::TargetKind::Example => {
cargo_args.push("--example".to_string());
}
_ => {}
};
cargo_args.push(self.dioxus_crate.executable_name().to_string());
cargo_args
}
/// Create a build command for cargo
fn prepare_build_command(&self) -> Result<(tokio::process::Command, Vec<String>)> {
let mut cmd = tokio::process::Command::new("cargo");
cmd.arg("rustc");
if let Some(target_dir) = &self.target_dir {
cmd.env("CARGO_TARGET_DIR", target_dir);
}
cmd.current_dir(self.dioxus_crate.crate_dir())
.arg("--message-format")
.arg("json-diagnostic-rendered-ansi");
let cargo_args = self.build_arguments();
cmd.args(&cargo_args);
cmd.arg("--").args(self.rust_flags.clone());
Ok((cmd, cargo_args))
}
pub(crate) async fn build(
&self,
mut progress: UnboundedSender<UpdateBuildProgress>,
) -> Result<BuildResult> {
tracing::info!(
dx_src = ?TraceSrc::Build,
"Running build [{}] command...",
self.target_platform,
);
// Set up runtime guards
let mut dioxus_version = crate::dx_build_info::PKG_VERSION.to_string();
if let Some(hash) = crate::dx_build_info::GIT_COMMIT_HASH_SHORT {
let hash = &hash.trim_start_matches('g')[..4];
dioxus_version.push_str(&format!("-{hash}"));
}
let _manganis_support = ManganisSupportGuard::default();
let _asset_guard =
AssetConfigDropGuard::new(self.dioxus_crate.dioxus_config.web.app.base_path.as_deref());
// If this is a web, build make sure we have the web build tooling set up
if self.targeting_web() {
install_web_build_tooling(&mut progress).await?;
}
// Create the build command
let (cmd, cargo_args) = self.prepare_build_command()?;
// Run the build command with a pretty loader
let crate_count = self.get_unit_count_estimate().await;
let cargo_result = build_cargo(crate_count, cmd, &mut progress).await?;
// Post process the build result
let build_result = self
.post_process_build(cargo_args, &cargo_result, &mut progress)
.await
.context("Failed to post process build")?;
tracing::info!(
dx_src = ?TraceSrc::Build,
"Build completed: [{}]",
self.dioxus_crate.out_dir().display(),
);
_ = progress.start_send(UpdateBuildProgress {
stage: Stage::Finished,
update: UpdateStage::Start,
});
Ok(build_result)
}
async fn post_process_build(
&self,
cargo_args: Vec<String>,
cargo_build_result: &CargoBuildResult,
progress: &mut UnboundedSender<UpdateBuildProgress>,
) -> Result<BuildResult> {
_ = progress.start_send(UpdateBuildProgress {
stage: Stage::OptimizingAssets,
update: UpdateStage::Start,
});
let assets = self.collect_assets(cargo_args, progress).await?;
let file_name = self.dioxus_crate.executable_name();
// Move the final output executable into the dist folder
let out_dir = self.target_out_dir();
if !out_dir.is_dir() {
create_dir_all(&out_dir)?;
}
let mut output_path = out_dir.join(file_name);
if self.targeting_web() {
output_path.set_extension("wasm");
} else if cfg!(windows) {
output_path.set_extension("exe");
}
if let Some(res_path) = &cargo_build_result.output_location {
std::fs::copy(res_path, &output_path)?;
}
self.copy_assets_dir()?;
// Create the build result
let build_result = BuildResult {
executable: output_path,
target_platform: self.target_platform,
};
// If this is a web build, run web post processing steps
if self.targeting_web() {
self.post_process_web_build(&build_result, assets.as_ref(), progress)
.await?;
}
Ok(build_result)
}
async fn collect_assets(
&self,
cargo_args: Vec<String>,
progress: &mut UnboundedSender<UpdateBuildProgress>,
) -> anyhow::Result<Option<AssetManifest>> {
// If this is the server build, the client build already copied any assets we need
if self.target_platform == TargetPlatform::Server {
return Ok(None);
}
// If assets are skipped, we don't need to collect them
if self.build_arguments.skip_assets {
return Ok(None);
}
// Start Manganis linker intercept.
let linker_args = vec![format!("{}", self.target_out_dir().display())];
// Don't block the main thread - manganis should not be running its own std process but it's
// fine to wrap it here at the top
let build = self.clone();
let mut progress = progress.clone();
tokio::task::spawn_blocking(move || {
manganis_cli_support::start_linker_intercept(
&LinkCommand::command_name(),
cargo_args,
Some(linker_args),
)?;
let Some(assets) = asset_manifest(&build) else {
error!(dx_src = ?TraceSrc::Build, "the asset manifest was not provided by manganis and we were not able to collect assets");
return Err(anyhow::anyhow!("asset manifest was not provided by manganis"));
};
// Collect assets from the asset manifest the linker intercept created
process_assets(&build, &assets, &mut progress)?;
// Create the __assets_head.html file for bundling
create_assets_head(&build, &assets)?;
Ok(Some(assets))
})
.await
.map_err(|e| anyhow::anyhow!(e))?
}
pub fn copy_assets_dir(&self) -> anyhow::Result<()> {
tracing::info!(dx_src = ?TraceSrc::Build, "Copying public assets to the output directory...");
let out_dir = self.target_out_dir();
let asset_dir = self.dioxus_crate.asset_dir();
if asset_dir.is_dir() {
// Only pre-compress the assets from the web build. Desktop assets are not served, so they don't need to be pre_compressed
let pre_compress = self.targeting_web()
&& self
.dioxus_crate
.should_pre_compress_web_assets(self.build_arguments.release);
copy_dir_to(asset_dir, out_dir, pre_compress)?;
}
Ok(())
}
/// Get the output directory for a specific built target
pub fn target_out_dir(&self) -> PathBuf {
let out_dir = self.dioxus_crate.out_dir();
match self.build_arguments.platform {
Some(Platform::Fullstack | Platform::StaticGeneration) => match self.target_platform {
TargetPlatform::Web => out_dir.join("public"),
TargetPlatform::Desktop => out_dir.join("desktop"),
_ => out_dir,
},
_ => out_dir,
}
}
}


@@ -1,128 +0,0 @@
use toml_edit::Item;
use crate::builder::Build;
use crate::dioxus_crate::DioxusCrate;
use crate::builder::BuildRequest;
use std::io::Write;
use super::TargetPlatform;
static CLIENT_PROFILE: &str = "dioxus-client";
static SERVER_PROFILE: &str = "dioxus-server";
// The `opt-level=2` increases build times, but can noticeably decrease time
// between saving changes and being able to interact with an app. The "overall"
// time difference (between having and not having the optimization) can be
// almost imperceptible (~1 s) but also can be very noticeable (~6 s) — depends
// on setup (hardware, OS, browser, idle load).
// Find or create the client and server profiles in the .cargo/config.toml file
fn initialize_profiles(config: &DioxusCrate) -> crate::Result<()> {
let config_path = config.workspace_dir().join(".cargo/config.toml");
let mut config = match std::fs::read_to_string(&config_path) {
Ok(config) => config.parse::<toml_edit::DocumentMut>().map_err(|e| {
crate::Error::Other(anyhow::anyhow!("Failed to parse .cargo/config.toml: {}", e))
})?,
Err(_) => Default::default(),
};
if let Item::Table(table) = config
.as_table_mut()
.entry("profile")
.or_insert(Item::Table(Default::default()))
{
if let toml_edit::Entry::Vacant(entry) = table.entry(CLIENT_PROFILE) {
let mut client = toml_edit::Table::new();
client.insert("inherits", Item::Value("dev".into()));
client.insert("opt-level", Item::Value(2.into()));
entry.insert(Item::Table(client));
}
if let toml_edit::Entry::Vacant(entry) = table.entry(SERVER_PROFILE) {
let mut server = toml_edit::Table::new();
server.insert("inherits", Item::Value("dev".into()));
server.insert("opt-level", Item::Value(2.into()));
entry.insert(Item::Table(server));
}
}
// Write the config back to the file
if let Some(parent) = config_path.parent() {
std::fs::create_dir_all(parent)?;
}
let file = std::fs::File::create(config_path)?;
let mut buf_writer = std::io::BufWriter::new(file);
write!(buf_writer, "{}", config)?;
Ok(())
}
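// For reference (a sketch, assuming the config started out empty), the generated
// .cargo/config.toml ends up containing roughly:
//
//     [profile.dioxus-client]
//     inherits = "dev"
//     opt-level = 2
//
//     [profile.dioxus-server]
//     inherits = "dev"
//     opt-level = 2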
impl BuildRequest {
pub(crate) fn new_fullstack(
config: DioxusCrate,
build_arguments: Build,
serve: bool,
) -> Result<Vec<Self>, crate::Error> {
initialize_profiles(&config)?;
Ok(vec![
Self::new_client(serve, &config, &build_arguments),
Self::new_server(serve, &config, &build_arguments),
])
}
fn new_with_target_directory_rust_flags_and_features(
serve: bool,
config: &DioxusCrate,
build: &Build,
feature: Option<String>,
target_platform: TargetPlatform,
) -> Self {
let config = config.clone();
let mut build = build.clone();
// Add the server feature to the features we pass to the build
if let Some(feature) = feature {
build.target_args.features.push(feature);
}
// Add the server flags to the build arguments
Self {
serve,
build_arguments: build.clone(),
dioxus_crate: config,
rust_flags: Default::default(),
target_dir: None,
target_platform,
}
}
fn new_server(serve: bool, config: &DioxusCrate, build: &Build) -> Self {
let mut build = build.clone();
if !build.release && build.profile.is_none() {
build.profile = Some(CLIENT_PROFILE.to_string());
}
let client_feature = build.auto_detect_server_feature(config);
Self::new_with_target_directory_rust_flags_and_features(
serve,
config,
&build,
build.target_args.server_feature.clone().or(client_feature),
TargetPlatform::Server,
)
}
fn new_client(serve: bool, config: &DioxusCrate, build: &Build) -> Self {
let mut build = build.clone();
if !build.release && build.profile.is_none() {
build.profile = Some(SERVER_PROFILE.to_string());
}
let (client_feature, client_platform) = build.auto_detect_client_platform(config);
Self::new_with_target_directory_rust_flags_and_features(
serve,
config,
&build,
build.target_args.client_feature.clone().or(client_feature),
client_platform,
)
}
}


@@ -1,272 +1,18 @@
use crate::cli::serve::ServeArguments;
use crate::config::Platform;
use crate::dioxus_crate::DioxusCrate;
use crate::Result;
use crate::{build::Build, TraceSrc};
use futures_util::stream::select_all;
use futures_util::StreamExt;
use std::net::SocketAddr;
use std::str::FromStr;
use std::{path::PathBuf, process::Stdio};
use tokio::process::{Child, Command};
//! The primary entrypoint for our build + optimize + bundle engine
//!
//! Handles multiple ongoing tasks and allows you to queue up builds from interactive and non-interactive contexts
//!
//! Uses a request -> response architecture that allows you to monitor the progress with an optional message
//! receiver.
mod cargo;
mod fullstack;
mod prepare_html;
mod build;
mod bundle;
mod progress;
mod runner;
mod verify;
mod web;
pub use progress::{Stage, UpdateBuildProgress, UpdateStage};
/// The target platform for the build
/// This is very similar to the Platform enum, but we need to be able to differentiate between the
/// server and web targets for the fullstack platform
#[derive(Debug, Clone, Copy, PartialEq, Eq, Hash)]
pub enum TargetPlatform {
Web,
Desktop,
Server,
Liveview,
}
impl FromStr for TargetPlatform {
type Err = ();
fn from_str(s: &str) -> Result<Self, Self::Err> {
match s {
"web" => Ok(Self::Web),
"desktop" => Ok(Self::Desktop),
"axum" | "server" => Ok(Self::Server),
"liveview" => Ok(Self::Liveview),
_ => Err(()),
}
}
}
impl std::fmt::Display for TargetPlatform {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
match self {
TargetPlatform::Web => write!(f, "web"),
TargetPlatform::Desktop => write!(f, "desktop"),
TargetPlatform::Server => write!(f, "server"),
TargetPlatform::Liveview => write!(f, "liveview"),
}
}
}
/// A request for a project to be built
#[derive(Clone)]
pub struct BuildRequest {
/// Whether the build is for serving the application
pub serve: bool,
/// The configuration for the crate we are building
pub dioxus_crate: DioxusCrate,
/// The target platform for the build
pub target_platform: TargetPlatform,
/// The arguments for the build
pub build_arguments: Build,
/// The rustc flags to pass to the build
pub rust_flags: Vec<String>,
/// The target directory for the build
pub target_dir: Option<PathBuf>,
}
impl BuildRequest {
pub fn create(
serve: bool,
dioxus_crate: &DioxusCrate,
build_arguments: impl Into<Build>,
) -> crate::Result<Vec<Self>> {
let build_arguments = build_arguments.into();
let platform = build_arguments.platform();
let single_platform = |platform| {
let dioxus_crate = dioxus_crate.clone();
vec![Self {
serve,
dioxus_crate,
build_arguments: build_arguments.clone(),
target_platform: platform,
rust_flags: Default::default(),
target_dir: Default::default(),
}]
};
Ok(match platform {
Platform::Liveview => single_platform(TargetPlatform::Liveview),
Platform::Web => single_platform(TargetPlatform::Web),
Platform::Desktop => single_platform(TargetPlatform::Desktop),
Platform::StaticGeneration | Platform::Fullstack => {
Self::new_fullstack(dioxus_crate.clone(), build_arguments, serve)?
}
})
}
pub(crate) async fn build_all_parallel(
build_requests: Vec<BuildRequest>,
) -> Result<Vec<BuildResult>> {
let multi_platform_build = build_requests.len() > 1;
let mut build_progress = Vec::new();
let mut set = tokio::task::JoinSet::new();
for build_request in build_requests {
let (tx, rx) = futures_channel::mpsc::unbounded();
build_progress.push((build_request.build_arguments.platform(), rx));
set.spawn(async move { build_request.build(tx).await });
}
// Watch the build progress as it comes in
loop {
let mut next = select_all(
build_progress
.iter_mut()
.map(|(platform, rx)| rx.map(move |update| (*platform, update))),
);
match next.next().await {
Some((platform, update)) => {
if multi_platform_build {
print!("{platform} build: ");
update.to_std_out();
} else {
update.to_std_out();
}
}
None => {
break;
}
}
}
let mut all_results = Vec::new();
while let Some(result) = set.join_next().await {
let result = result
.map_err(|_| crate::Error::Unique("Failed to build project".to_owned()))??;
all_results.push(result);
}
Ok(all_results)
}
/// Check if the build is targeting the web platform
pub fn targeting_web(&self) -> bool {
self.target_platform == TargetPlatform::Web
}
}
#[derive(Debug, Clone, Default)]
pub(crate) struct OpenArguments {
fullstack_address: Option<SocketAddr>,
devserver_addr: Option<SocketAddr>,
always_on_top: Option<bool>,
workspace: PathBuf,
asset_root: PathBuf,
app_title: String,
out_dir: PathBuf,
serve: bool,
}
impl OpenArguments {
#[allow(clippy::too_many_arguments)]
pub(crate) fn new(
serve: &ServeArguments,
fullstack_address: Option<SocketAddr>,
dioxus_crate: &DioxusCrate,
) -> Self {
Self {
devserver_addr: Some(serve.address.address()),
always_on_top: Some(serve.always_on_top.unwrap_or(true)),
serve: true,
fullstack_address,
workspace: dioxus_crate.workspace_dir().to_path_buf(),
asset_root: dioxus_crate.asset_dir().to_path_buf(),
app_title: dioxus_crate.dioxus_config.application.name.clone(),
out_dir: dioxus_crate.out_dir().to_path_buf(),
}
}
#[allow(clippy::too_many_arguments)]
pub(crate) fn new_for_static_generation_build(dioxus_crate: &DioxusCrate) -> Self {
Self {
workspace: dioxus_crate.workspace_dir().to_path_buf(),
asset_root: dioxus_crate.asset_dir().to_path_buf(),
app_title: dioxus_crate.dioxus_config.application.name.clone(),
out_dir: dioxus_crate.out_dir().to_path_buf(),
..Default::default()
}
}
}
#[derive(Debug, Clone)]
pub(crate) struct BuildResult {
pub executable: PathBuf,
pub target_platform: TargetPlatform,
}
impl BuildResult {
/// Open the executable if this is a native build
pub fn open(&self, arguments: OpenArguments) -> std::io::Result<Option<Child>> {
match self.target_platform {
TargetPlatform::Web => {
if let Some(address) = arguments.fullstack_address {
tracing::info!(dx_src = ?TraceSrc::Dev, "Serving web app on http://{} 🎉", address);
}
return Ok(None);
}
TargetPlatform::Desktop => {
tracing::info!(dx_src = ?TraceSrc::Dev, "Launching desktop app at {} 🎉", self.executable.display());
}
TargetPlatform::Server => {
// shut this up for now - the web app will take priority in logging
}
TargetPlatform::Liveview => {
if let Some(fullstack_address) = arguments.fullstack_address {
tracing::info!(
dx_src = ?TraceSrc::Dev,
"Launching liveview server on http://{:?} 🎉",
fullstack_address
);
}
}
}
if arguments.serve {
tracing::info!(dx_src = ?TraceSrc::Dev, "Press [o] to open the app manually.");
}
let executable = self.executable.canonicalize()?;
let mut cmd = Command::new(executable);
// Set the env vars that the clients will expect
// These need to be stable within a release version (ie 0.6.0)
cmd.env(dioxus_cli_config::CLI_ENABLED_ENV, "true");
if let Some(addr) = arguments.fullstack_address {
cmd.env(dioxus_cli_config::SERVER_IP_ENV, addr.ip().to_string());
cmd.env(dioxus_cli_config::SERVER_PORT_ENV, addr.port().to_string());
}
if let Some(always_on_top) = arguments.always_on_top {
cmd.env(
dioxus_cli_config::ALWAYS_ON_TOP_ENV,
always_on_top.to_string(),
);
}
cmd.env(
dioxus_cli_config::ASSET_ROOT_ENV,
arguments.asset_root.display().to_string(),
);
if let Some(devserver_addr) = arguments.devserver_addr {
cmd.env(
dioxus_cli_config::DEVSERVER_RAW_ADDR_ENV,
devserver_addr.to_string(),
);
}
cmd.env(dioxus_cli_config::APP_TITLE_ENV, arguments.app_title);
cmd.env(
dioxus_cli_config::OUT_DIR,
arguments.out_dir.display().to_string(),
);
cmd.stderr(Stdio::piped())
.stdout(Stdio::piped())
.kill_on_drop(true)
.current_dir(arguments.workspace);
Ok(Some(cmd.spawn()?))
}
}
pub(crate) use build::*;
pub(crate) use bundle::*;
pub(crate) use progress::*;
pub(crate) use runner::*;


@@ -1,205 +0,0 @@
//! Build the HTML file to load a web application. The index.html file may be created from scratch or modified from the `index.html` file in the crate root.
use super::{BuildRequest, UpdateBuildProgress};
use crate::Result;
use crate::TraceSrc;
use futures_channel::mpsc::UnboundedSender;
use manganis_cli_support::AssetManifest;
use std::fmt::Write;
use std::path::{Path, PathBuf};
const DEFAULT_HTML: &str = include_str!("../../assets/index.html");
const TOAST_HTML: &str = include_str!("../../assets/toast.html");
impl BuildRequest {
pub(crate) fn prepare_html(
&self,
assets: Option<&AssetManifest>,
_progress: &mut UnboundedSender<UpdateBuildProgress>,
) -> Result<String> {
let mut html = html_or_default(&self.dioxus_crate.crate_dir());
// Inject any resources from the config into the html
self.inject_resources(&mut html, assets)?;
// Inject loading scripts if they are not already present
self.inject_loading_scripts(&mut html);
// Replace any special placeholders in the HTML with resolved values
self.replace_template_placeholders(&mut html);
let title = self.dioxus_crate.dioxus_config.web.app.title.clone();
replace_or_insert_before("{app_title}", "</title", &title, &mut html);
Ok(html)
}
// Inject any resources from the config into the html
fn inject_resources(&self, html: &mut String, assets: Option<&AssetManifest>) -> Result<()> {
// Collect all resources into a list of styles and scripts
let resources = &self.dioxus_crate.dioxus_config.web.resource;
let mut style_list = resources.style.clone().unwrap_or_default();
let mut script_list = resources.script.clone().unwrap_or_default();
if self.serve {
style_list.extend(resources.dev.style.iter().cloned());
script_list.extend(resources.dev.script.iter().cloned());
}
let mut head_resources = String::new();
// Add all styles to the head
for style in &style_list {
writeln!(
&mut head_resources,
"<link rel=\"stylesheet\" href=\"{}\">",
&style.to_str().unwrap(),
)?;
}
if !style_list.is_empty() {
self.send_resource_deprecation_warning(style_list, ResourceType::Style);
}
// Add all scripts to the head
for script in &script_list {
writeln!(
&mut head_resources,
"<script src=\"{}\"></script>",
&script.to_str().unwrap(),
)?;
}
if !script_list.is_empty() {
self.send_resource_deprecation_warning(script_list, ResourceType::Script);
}
// Inject any resources from manganis into the head
if let Some(assets) = assets {
head_resources.push_str(&assets.head());
}
replace_or_insert_before("{style_include}", "</head", &head_resources, html);
Ok(())
}
/// Inject loading scripts if they are not already present
fn inject_loading_scripts(&self, html: &mut String) {
// If it looks like we are already loading wasm or the current build opted out of injecting loading scripts, don't inject anything
if !self.build_arguments.inject_loading_scripts || html.contains("__wbindgen_start") {
return;
}
// If not, insert the script
*html = html.replace(
"</body",
r#"<script>
// We can't use a module script here because we need to start the script immediately when streaming
import("/{base_path}/assets/dioxus/{app_name}.js").then(
({ default: init }) => {
init("/{base_path}/assets/dioxus/{app_name}_bg.wasm").then((wasm) => {
if (wasm.__wbindgen_start == undefined) {
wasm.main();
}
});
}
);
</script>
{DX_TOAST_UTILITIES}
</body"#,
);
*html = match self.serve && !self.build_arguments.release {
true => html.replace("{DX_TOAST_UTILITIES}", TOAST_HTML),
false => html.replace("{DX_TOAST_UTILITIES}", ""),
};
// And try to insert preload links for the wasm and js files
*html = html.replace(
"</head",
r#"<link rel="preload" href="/{base_path}/assets/dioxus/{app_name}_bg.wasm" as="fetch" type="application/wasm" crossorigin="">
<link rel="preload" href="/{base_path}/assets/dioxus/{app_name}.js" as="script">
</head"#);
}
/// Replace any special placeholders in the HTML with resolved values
fn replace_template_placeholders(&self, html: &mut String) {
let base_path = self.dioxus_crate.dioxus_config.web.app.base_path();
*html = html.replace("{base_path}", base_path);
let app_name = &self.dioxus_crate.dioxus_config.application.name;
*html = html.replace("{app_name}", app_name);
}
fn send_resource_deprecation_warning(&self, paths: Vec<PathBuf>, variant: ResourceType) {
const RESOURCE_DEPRECATION_MESSAGE: &str = r#"The `web.resource` config has been deprecated in favor of head components and will be removed in a future release."#;
let replacement_components = paths
.iter()
.map(|path| {
let path = if path.exists() {
path.to_path_buf()
} else {
// If the path is absolute, make it relative to the current directory before we join it
// The path is actually a web path which is relative to the root of the website
let path = path.strip_prefix("/").unwrap_or(path);
let asset_dir_path = self.dioxus_crate.asset_dir().join(path);
if let Ok(absolute_path) = asset_dir_path.canonicalize() {
let absolute_crate_root =
self.dioxus_crate.crate_dir().canonicalize().unwrap();
PathBuf::from("./")
.join(absolute_path.strip_prefix(absolute_crate_root).unwrap())
} else {
path.to_path_buf()
}
};
match variant {
ResourceType::Style => format!(
" document::Link {{ rel: \"stylesheet\", href: asset!(css(\"{}\")) }}",
path.display()
),
ResourceType::Script => {
format!(" Script {{ src: asset!(file(\"{}\")) }}", path.display())
}
}
})
.collect::<Vec<_>>();
let replacement_components = format!("rsx! {{\n{}\n}}", replacement_components.join("\n"));
let section_name = match variant {
ResourceType::Style => "web.resource.style",
ResourceType::Script => "web.resource.script",
};
let message = format!(
"{RESOURCE_DEPRECATION_MESSAGE}\nTo migrate to head components, remove `{section_name}` and include the following rsx in your root component:\n```rust\n{replacement_components}\n```"
);
tracing::warn!(dx_src = ?TraceSrc::Build, "{}", message);
}
}
enum ResourceType {
Style,
Script,
}
/// Read the html file from the crate root or use the default html file
fn html_or_default(crate_root: &Path) -> String {
let custom_html_file = crate_root.join("index.html");
std::fs::read_to_string(custom_html_file).unwrap_or_else(|_| String::from(DEFAULT_HTML))
}
/// Replace a string or insert the new contents before a marker
fn replace_or_insert_before(
replace: &str,
or_insert_before: &str,
with: &str,
content: &mut String,
) {
if content.contains(replace) {
*content = content.replace(replace, with);
} else if let Some(pos) = content.find(or_insert_before) {
content.insert_str(pos, with);
}
}

View file

@@ -1,225 +1,124 @@
//! Report progress about the build to the user. We use channels to report progress back to the CLI.
use crate::TraceSrc;
use super::BuildRequest;
use anyhow::Context;
use cargo_metadata::Message;
use futures_channel::mpsc::UnboundedSender;
use serde::Deserialize;
use std::ops::Deref;
use crate::{AppBundle, BuildRequest, Platform, TraceSrc};
use cargo_metadata::CompilerMessage;
use futures_channel::mpsc::{UnboundedReceiver, UnboundedSender};
use std::path::PathBuf;
use std::process::Stdio;
use tokio::io::AsyncBufReadExt;
#[derive(Default, Debug, PartialOrd, Ord, PartialEq, Eq, Clone, Copy)]
pub enum Stage {
#[default]
Initializing = 0,
InstallingWasmTooling = 1,
Compiling = 2,
OptimizingWasm = 3,
OptimizingAssets = 4,
Finished = 5,
pub(crate) type ProgressTx = UnboundedSender<BuildUpdate>;
pub(crate) type ProgressRx = UnboundedReceiver<BuildUpdate>;
#[derive(Debug)]
#[allow(clippy::large_enum_variant)]
pub(crate) enum BuildUpdate {
Progress { stage: BuildStage },
CompilerMessage { message: CompilerMessage },
BuildReady { bundle: AppBundle },
BuildFailed { err: crate::Error },
}
impl Deref for Stage {
type Target = str;
fn deref(&self) -> &Self::Target {
match self {
Stage::Initializing => "Initializing",
Stage::InstallingWasmTooling => "Installing Wasm Tooling",
Stage::Compiling => "Compiling",
Stage::OptimizingWasm => "Optimizing Wasm",
Stage::OptimizingAssets => "Optimizing Assets",
Stage::Finished => "Finished",
}
}
}
impl std::fmt::Display for Stage {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
write!(f, "{}", self.deref())
}
}
#[derive(Debug, Clone)]
pub struct UpdateBuildProgress {
pub stage: Stage,
pub update: UpdateStage,
}
impl UpdateBuildProgress {
pub fn to_std_out(&self) {
match &self.update {
UpdateStage::Start => println!("--- {} ---", self.stage),
UpdateStage::SetProgress(progress) => {
println!("Build progress {:0.0}%", progress * 100.0);
}
UpdateStage::Failed(message) => {
println!("Build failed: {}", message);
}
}
}
}
#[derive(Debug, Clone, PartialEq)]
pub enum UpdateStage {
Start,
SetProgress(f64),
Failed(String),
}
pub(crate) async fn build_cargo(
#[non_exhaustive]
#[derive(Debug, Clone, PartialEq, Eq, PartialOrd, Ord)]
pub enum BuildStage {
Initializing,
Starting {
platform: Platform,
crate_count: usize,
mut cmd: tokio::process::Command,
progress: &mut UnboundedSender<UpdateBuildProgress>,
) -> anyhow::Result<CargoBuildResult> {
_ = progress.start_send(UpdateBuildProgress {
stage: Stage::Compiling,
update: UpdateStage::Start,
});
let mut child = cmd
.stdout(Stdio::piped())
.stderr(Stdio::piped())
.spawn()
.context("Failed to spawn cargo build")?;
let stdout = child.stdout.take().unwrap();
let stderr = child.stderr.take().unwrap();
let stdout = tokio::io::BufReader::new(stdout);
let stderr = tokio::io::BufReader::new(stderr);
let mut output_location = None;
let mut stdout = stdout.lines();
let mut stderr = stderr.lines();
let mut units_compiled = 0;
let mut errors = Vec::new();
loop {
let line = tokio::select! {
line = stdout.next_line() => {
line
}
line = stderr.next_line() => {
line
}
};
let Some(line) = line? else {
break;
};
let mut deserializer = serde_json::Deserializer::from_str(line.trim());
deserializer.disable_recursion_limit();
let message = Message::deserialize(&mut deserializer).unwrap_or(Message::TextLine(line));
match message {
Message::CompilerMessage(msg) => {
let message = msg.message;
tracing::info!(dx_src = ?TraceSrc::Cargo, dx_no_fmt = true, "{}", message.to_string());
const WARNING_LEVELS: &[cargo_metadata::diagnostic::DiagnosticLevel] = &[
cargo_metadata::diagnostic::DiagnosticLevel::Help,
cargo_metadata::diagnostic::DiagnosticLevel::Note,
cargo_metadata::diagnostic::DiagnosticLevel::Warning,
cargo_metadata::diagnostic::DiagnosticLevel::Error,
cargo_metadata::diagnostic::DiagnosticLevel::FailureNote,
cargo_metadata::diagnostic::DiagnosticLevel::Ice,
];
const FATAL_LEVELS: &[cargo_metadata::diagnostic::DiagnosticLevel] = &[
cargo_metadata::diagnostic::DiagnosticLevel::Error,
cargo_metadata::diagnostic::DiagnosticLevel::FailureNote,
cargo_metadata::diagnostic::DiagnosticLevel::Ice,
];
if WARNING_LEVELS.contains(&message.level) {
if let Some(rendered) = message.rendered {
errors.push(rendered);
}
}
if FATAL_LEVELS.contains(&message.level) {
return Err(anyhow::anyhow!(errors.join("\n")));
}
}
Message::CompilerArtifact(artifact) => {
units_compiled += 1;
if let Some(executable) = artifact.executable {
output_location = Some(executable.into());
} else {
let build_progress = units_compiled as f64 / crate_count as f64;
_ = progress.start_send(UpdateBuildProgress {
stage: Stage::Compiling,
update: UpdateStage::SetProgress((build_progress).clamp(0.0, 1.00)),
});
}
}
Message::BuildScriptExecuted(_) => {
units_compiled += 1;
}
Message::BuildFinished(finished) => {
if !finished.success {
return Err(anyhow::anyhow!("Build failed"));
}
}
Message::TextLine(line) => {
tracing::info!(dx_src = ?TraceSrc::Cargo, dx_no_fmt = true, "{}", line);
}
_ => {
// Unknown message
}
}
}
Ok(CargoBuildResult { output_location })
}
pub(crate) struct CargoBuildResult {
pub(crate) output_location: Option<PathBuf>,
},
InstallingTooling {},
Compiling {
platform: Platform,
current: usize,
total: usize,
krate: String,
},
Bundling {},
RunningBindgen {},
OptimizingWasm {},
CopyingAssets {
current: usize,
total: usize,
path: PathBuf,
},
Success,
Failed,
Aborted,
Restarting,
}
impl BuildRequest {
/// Try to get the unit graph for the crate. This is a nightly only feature which may not be available with the current version of rustc the user has installed.
async fn get_unit_count(&self) -> Option<usize> {
#[derive(Debug, Deserialize)]
struct UnitGraph {
units: Vec<serde_json::Value>,
pub(crate) fn status_wasm_bindgen_start(&self) {
_ = self.progress.unbounded_send(BuildUpdate::Progress {
stage: BuildStage::RunningBindgen {},
});
}
pub(crate) fn status_wasm_opt_start(&self) {
_ = self.progress.unbounded_send(BuildUpdate::Progress {
stage: BuildStage::RunningBindgen {},
});
}
let mut cmd = tokio::process::Command::new("cargo");
cmd.arg("+nightly");
cmd.arg("build");
cmd.arg("--unit-graph");
cmd.arg("-Z").arg("unstable-options");
cmd.args(self.build_arguments());
let output = cmd
.stdout(Stdio::piped())
.stderr(Stdio::piped())
.output()
.await
.ok()?;
if !output.status.success() {
return None;
pub(crate) fn status_start_bundle(&self) {
_ = self.progress.unbounded_send(BuildUpdate::Progress {
stage: BuildStage::Bundling {},
});
}
let output_text = String::from_utf8(output.stdout).ok()?;
let graph: UnitGraph = serde_json::from_str(&output_text).ok()?;
Some(graph.units.len())
pub(crate) fn status_build_diagnostic(&self, message: CompilerMessage) {
_ = self
.progress
.unbounded_send(BuildUpdate::CompilerMessage { message });
}
/// Get an estimate of the number of units in the crate. If nightly rustc is not available, this will return an estimate of the number of units in the crate based on cargo metadata.
/// TODO: always use https://doc.rust-lang.org/nightly/cargo/reference/unstable.html#unit-graph once it is stable
pub(crate) async fn get_unit_count_estimate(&self) -> usize {
// Try to get it from nightly
self.get_unit_count().await.unwrap_or_else(|| {
// Otherwise, use cargo metadata
(self
.dioxus_crate
.krates
.krates_filtered(krates::DepKind::Dev)
.iter()
.map(|k| k.targets.len())
.sum::<usize>() as f64
/ 3.5) as usize
})
pub(crate) fn status_build_message(&self, line: String) {
tracing::trace!(dx_src = ?TraceSrc::Cargo, "{line}");
}
pub(crate) fn status_build_progress(
&self,
count: usize,
total: usize,
name: String,
platform: Platform,
) {
_ = self.progress.unbounded_send(BuildUpdate::Progress {
stage: BuildStage::Compiling {
current: count,
total,
krate: name,
platform,
},
});
}
pub(crate) fn status_starting_build(&self, crate_count: usize) {
_ = self.progress.unbounded_send(BuildUpdate::Progress {
stage: BuildStage::Starting {
platform: self.build.platform(),
crate_count,
},
});
}
pub(crate) fn status_copying_asset(&self, current: usize, total: usize, path: PathBuf) {
tracing::trace!("Status copying asset {current}/{total} from {path:?}");
_ = self.progress.unbounded_send(BuildUpdate::Progress {
stage: BuildStage::CopyingAssets {
current,
total,
path,
},
});
}
pub(crate) fn status_optimizing_wasm(&self) {
_ = self.progress.unbounded_send(BuildUpdate::Progress {
stage: BuildStage::OptimizingWasm {},
});
}
pub(crate) fn status_installing_tooling(&self) {
_ = self.progress.unbounded_send(BuildUpdate::Progress {
stage: BuildStage::InstallingTooling {},
});
}
}


@@ -0,0 +1,287 @@
use crate::{
AppBundle, BuildArgs, BuildRequest, BuildStage, BuildUpdate, DioxusCrate, Platform, ProgressRx,
ProgressTx, Result,
};
use std::time::{Duration, Instant};
/// The component of the serve engine that watches ongoing builds and manages their state, handle,
/// and progress.
///
/// Previously, the builder allowed multiple apps to be built simultaneously, but this newer design
/// simplifies the code and allows only one app and its server to be built at a time.
///
/// Here, we track the number of crates being compiled, assets copied, the times of these events, and
/// other metadata that gives us useful indicators for the UI.
pub(crate) struct Builder {
// Components of the build
pub krate: DioxusCrate,
pub request: BuildRequest,
pub build: tokio::task::JoinHandle<Result<AppBundle>>,
pub tx: ProgressTx,
pub rx: ProgressRx,
// Metadata about the build that needs to be managed by watching build updates
// used to render the TUI
pub stage: BuildStage,
pub compiled_crates: usize,
pub compiled_crates_server: usize,
pub expected_crates: usize,
pub expected_crates_server: usize,
pub bundling_progress: f64,
pub compile_start: Option<Instant>,
pub compile_end: Option<Instant>,
pub compile_end_server: Option<Instant>,
pub bundle_start: Option<Instant>,
pub bundle_end: Option<Instant>,
}
impl Builder {
/// Create a new builder and immediately start a build
pub(crate) fn start(krate: &DioxusCrate, args: BuildArgs) -> Result<Self> {
let (tx, rx) = futures_channel::mpsc::unbounded();
let request = BuildRequest::new(krate.clone(), args, tx.clone());
Ok(Self {
krate: krate.clone(),
request: request.clone(),
stage: BuildStage::Initializing,
build: tokio::spawn(async move {
// On the first build, we want to verify the tooling
// We won't bother verifying on subsequent builds
request.verify_tooling().await?;
let res = request.build_all().await;
// The first launch gets some extra logging :)
if res.is_ok() {
tracing::info!("Build completed successfully, launching app! 💫")
}
res
}),
tx,
rx,
compiled_crates: 0,
expected_crates: 1,
expected_crates_server: 1,
compiled_crates_server: 0,
bundling_progress: 0.0,
compile_start: Some(Instant::now()),
compile_end: None,
compile_end_server: None,
bundle_start: None,
bundle_end: None,
})
}
/// Wait for any new updates to the builder - either it completed or gave us a message etc
pub(crate) async fn wait(&mut self) -> BuildUpdate {
use futures_util::StreamExt;
// Wait for the build to finish or for it to emit a status message
let update = tokio::select! {
Some(progress) = self.rx.next() => progress,
bundle = (&mut self.build) => {
// Replace the build with an infinitely pending task so we can select it again without worrying about deadlocks/spins
self.build = tokio::task::spawn(std::future::pending());
match bundle {
Ok(Ok(bundle)) => BuildUpdate::BuildReady { bundle },
Ok(Err(err)) => BuildUpdate::BuildFailed { err },
Err(err) => BuildUpdate::BuildFailed { err: crate::Error::Runtime(format!("Build panicked! {:?}", err)) },
}
},
};
tracing::trace!("Build update: {update:?}");
// Update the internal stage of the build so the UI can render it
match &update {
BuildUpdate::Progress { stage } => {
// Prevent updates from flowing in after the build has already finished
if !self.is_finished() {
self.stage = stage.clone();
}
match stage {
BuildStage::Initializing => {
self.compiled_crates = 0;
self.compiled_crates_server = 0;
self.bundling_progress = 0.0;
}
BuildStage::Starting {
crate_count,
platform,
} => {
if *platform == Platform::Server {
self.expected_crates_server = *crate_count;
} else {
self.expected_crates = *crate_count;
}
}
BuildStage::InstallingTooling {} => {}
BuildStage::Compiling {
current,
total,
platform,
..
} => {
if *platform == Platform::Server {
self.compiled_crates_server = *current;
self.expected_crates_server = *total;
} else {
self.compiled_crates = *current;
self.expected_crates = *total;
}
if self.compile_start.is_none() {
self.compile_start = Some(Instant::now());
}
}
BuildStage::Bundling {} => {
self.complete_compile();
self.bundling_progress = 0.0;
self.bundle_start = Some(Instant::now());
}
BuildStage::OptimizingWasm {} => {}
BuildStage::CopyingAssets { current, total, .. } => {
self.bundling_progress = *current as f64 / *total as f64;
}
BuildStage::Success => {
self.compiled_crates = self.expected_crates;
self.compiled_crates_server = self.expected_crates_server;
self.bundling_progress = 1.0;
}
BuildStage::Failed => {
self.compiled_crates = self.expected_crates;
self.compiled_crates_server = self.expected_crates_server;
self.bundling_progress = 1.0;
}
BuildStage::Aborted => {}
BuildStage::Restarting => {
self.compiled_crates = 0;
self.compiled_crates_server = 0;
self.expected_crates = 1;
self.bundling_progress = 0.0;
}
BuildStage::RunningBindgen {} => {
self.bundling_progress = 0.5;
}
}
}
BuildUpdate::CompilerMessage { .. } => {}
BuildUpdate::BuildReady { .. } => {
self.compiled_crates = self.expected_crates;
self.compiled_crates_server = self.expected_crates_server;
self.bundling_progress = 1.0;
self.stage = BuildStage::Success;
self.complete_compile();
self.bundle_end = Some(Instant::now());
}
BuildUpdate::BuildFailed { .. } => {
tracing::debug!("Setting builder to failed state");
self.stage = BuildStage::Failed;
}
}
update
}
/// Restart this builder with new build arguments.
pub(crate) fn rebuild(&mut self, args: BuildArgs) {
// Abort all the ongoing builds, cleaning up any loose artifacts and waiting to cleanly exit
self.abort_all();
// And then start a new build, resetting our progress/stage to the beginning and replacing the old tokio task
let request = BuildRequest::new(self.krate.clone(), args, self.tx.clone());
self.request = request.clone();
self.stage = BuildStage::Restarting;
// This build doesn't have any extra special logging - rebuilds would get pretty noisy
self.build = tokio::spawn(async move { request.build_all().await });
}
/// Shutdown the current build process
///
/// todo: might want to use a cancellation token here to allow cleaner shutdowns
pub(crate) fn abort_all(&mut self) {
self.build.abort();
self.stage = BuildStage::Aborted;
self.compiled_crates = 0;
self.compiled_crates_server = 0;
self.expected_crates = 1;
self.bundling_progress = 0.0;
self.compile_start = None;
self.bundle_start = None;
self.bundle_end = None;
self.compile_end = None;
}
/// Wait for the build to finish, returning the final bundle
/// Should only be used by code that's not interested in the intermediate updates and only cares about the final bundle
///
/// todo(jon): maybe we want to do some logging here? The build/bundle/run screens could be made to
/// use the TUI output for prettier outputs.
pub(crate) async fn finish(&mut self) -> Result<AppBundle> {
loop {
match self.wait().await {
BuildUpdate::BuildReady { bundle } => return Ok(bundle),
BuildUpdate::BuildFailed { err } => return Err(err),
BuildUpdate::Progress { .. } => {}
BuildUpdate::CompilerMessage { .. } => {}
}
}
}
fn complete_compile(&mut self) {
if self.compile_end.is_none() {
self.compiled_crates = self.expected_crates;
self.compile_end = Some(Instant::now());
self.compile_end_server = Some(Instant::now());
}
}
/// Get the total duration of the build, if all stages have completed
pub(crate) fn total_build_time(&self) -> Option<Duration> {
Some(self.compile_duration()? + self.bundle_duration()?)
}
pub(crate) fn compile_duration(&self) -> Option<Duration> {
Some(
self.compile_end
.unwrap_or_else(Instant::now)
.duration_since(self.compile_start?),
)
}
pub(crate) fn bundle_duration(&self) -> Option<Duration> {
Some(
self.bundle_end
.unwrap_or_else(Instant::now)
.duration_since(self.bundle_start?),
)
}
/// Return a number between 0 and 1 representing the progress of the app build
pub(crate) fn compile_progress(&self) -> f64 {
self.compiled_crates as f64 / self.expected_crates as f64
}
/// Return a number between 0 and 1 representing the progress of the server build
pub(crate) fn server_compile_progress(&self) -> f64 {
self.compiled_crates_server as f64 / self.expected_crates_server as f64
}
pub(crate) fn bundle_progress(&self) -> f64 {
self.bundling_progress
}
fn is_finished(&self) -> bool {
match self.stage {
BuildStage::Success => true,
BuildStage::Failed => true,
BuildStage::Aborted => true,
BuildStage::Restarting => false,
_ => false,
}
}
}
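For reference, a self-contained sketch of the select-loop pattern `wait` relies on above: once the build's `JoinHandle` resolves, it is swapped for a forever-pending task so later iterations can keep selecting on it without polling a completed handle. The channel and task here are illustrative stand-ins, not the real CLI types:

```rust
// Hypothetical standalone illustration of the Builder::wait pattern.
use futures_util::StreamExt;

#[tokio::main]
async fn main() {
    let (tx, mut rx) = futures_channel::mpsc::unbounded::<u32>();

    // A fake "build" task that reports progress and then finishes.
    let mut build = tokio::spawn(async move {
        for step in 0..3 {
            tx.unbounded_send(step).ok();
        }
        "bundle"
    });

    loop {
        tokio::select! {
            msg = rx.next() => match msg {
                Some(step) => println!("progress: {step}"),
                None => break, // channel closes once the task (and tx) is done
            },
            result = &mut build => {
                println!("build finished: {result:?}");
                // Swap in a forever-pending task so the next loop iteration can
                // still select on `build` without polling a finished JoinHandle.
                build = tokio::spawn(std::future::pending());
            }
        }
    }
}
```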


@ -0,0 +1,148 @@
use std::process::Stdio;
use crate::{BuildRequest, Platform, Result, RustupShow};
use anyhow::Context;
use tokio::process::Command;
impl BuildRequest {
/// Install any tooling that might be required for this build.
///
/// This should generally be only called on the first build since it takes time to verify the tooling
/// is in place, and we don't want to slow down subsequent builds.
pub(crate) async fn verify_tooling(&self) -> Result<()> {
tracing::debug!("Verifying tooling...");
self.status_installing_tooling();
self.krate
.initialize_profiles()
.context("Failed to initialize profiles - dioxus can't build without them. You might need to initialize them yourself.")?;
let rustup = match RustupShow::from_cli().await {
Ok(out) => out,
Err(err) => {
tracing::error!("Failed to verify tooling: {err}\ndx will proceed, but you might run into errors later.");
return Ok(());
}
};
match self.build.platform() {
Platform::Web => self.verify_web_tooling(rustup).await?,
Platform::Ios => self.verify_ios_tooling(rustup).await?,
Platform::Android => self.verify_android_tooling(rustup).await?,
Platform::Linux => self.verify_linux_tooling(rustup).await?,
Platform::MacOS => {}
Platform::Windows => {}
Platform::Server => {}
Platform::Liveview => {}
}
Ok(())
}
pub(crate) async fn verify_web_tooling(&self, rustup: RustupShow) -> Result<()> {
if !rustup.has_wasm32_unknown_unknown() {
tracing::info!(
"Web platform requires wasm32-unknown-unknown to be installed. Installing..."
);
let _ = Command::new("rustup")
.args(["target", "add", "wasm32-unknown-unknown"])
.output()
.await?;
}
match self.krate.wasm_bindgen_version() {
Some(version) if version == wasm_bindgen_shared::SCHEMA_VERSION => {
tracing::debug!("wasm-bindgen version {version} is compatible with dioxus-cli ✅");
},
Some(version) => {
tracing::warn!(
"wasm-bindgen version {version} is not compatible with the cli crate. Attempting to upgrade the target wasm-bindgen crate manually..."
);
let output = Command::new("cargo")
.args([
"update",
"-p",
"wasm-bindgen",
"--precise",
&wasm_bindgen_shared::version(),
])
.stderr(Stdio::piped())
.stdout(Stdio::piped())
.output()
.await;
match output {
Ok(output) if output.status.success() => tracing::info!("✅ wasm-bindgen updated successfully"),
Ok(output) => tracing::error!("Failed to update wasm-bindgen: {:?}", output),
Err(err) => tracing::error!("Failed to update wasm-bindgen: {err}"),
}
}
None => tracing::debug!("User is attempting a web build without wasm-bindgen detected. This is probably a bug in the dioxus-cli."),
}
Ok(())
}
/// Currently does nothing, but eventually we need to check that the mobile tooling is installed.
///
/// For ios, this would be just aarch64-apple-ios + aarch64-apple-ios-sim, as well as xcrun and xcode-select
///
/// We don't auto-install these yet since we're not doing an architecture check. We assume most users
/// are running on an Apple Silicon Mac, but it would be confusing if we installed these when we actually
/// should be installing the x86 versions.
pub(crate) async fn verify_ios_tooling(&self, _rustup: RustupShow) -> Result<()> {
// open the simulator
_ = tokio::process::Command::new("open")
.arg("/Applications/Xcode.app/Contents/Developer/Applications/Simulator.app")
.stderr(Stdio::piped())
.stdout(Stdio::piped())
.status()
.await;
// Now xcrun to open the device
// todo: we should try and query the device list and/or parse it rather than hardcode this simulator
_ = tokio::process::Command::new("xcrun")
.args(["simctl", "boot", "83AE3067-987F-4F85-AE3D-7079EF48C967"])
.stderr(Stdio::piped())
.stdout(Stdio::piped())
.status()
.await;
// if !rustup
// .installed_toolchains
// .contains(&"aarch64-apple-ios".to_string())
// {
// tracing::error!("You need to install aarch64-apple-ios to build for ios. Run `rustup target add aarch64-apple-ios` to install it.");
// }
// if !rustup
// .installed_toolchains
// .contains(&"aarch64-apple-ios-sim".to_string())
// {
// tracing::error!("You need to install aarch64-apple-ios to build for ios. Run `rustup target add aarch64-apple-ios` to install it.");
// }
Ok(())
}
/// Check if the Android tooling is installed
///
/// Looks for the Android SDK + NDK
///
/// Will do its best to fill in the missing bits by exploring the SDK structure,
/// i.e., it will attempt to use the Java installed from Android Studio if possible.
pub(crate) async fn verify_android_tooling(&self, _rustup: RustupShow) -> Result<()> {
Ok(())
}
/// Ensure the right dependencies are installed for linux apps.
/// This varies by distro, so we just do nothing for now.
///
/// Eventually, we want to check for the prereqs for wry/tao as outlined by tauri:
/// https://tauri.app/start/prerequisites/
pub(crate) async fn verify_linux_tooling(&self, _rustup: crate::RustupShow) -> Result<()> {
Ok(())
}
}
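`RustupShow::from_cli` and `has_wasm32_unknown_unknown` aren't shown in this hunk; conceptually they boil down to the same check the older `install_web_build_tooling` (in the next hunk) performed inline: run `rustup show` and look for the target in its output. A rough sketch under that assumption (the real `RustupShow` presumably parses the output into structured data rather than string-searching it):

```rust
use tokio::process::Command;

// Hypothetical reduction of the rustup target check.
async fn has_wasm32_unknown_unknown() -> bool {
    match Command::new("rustup").arg("show").output().await {
        Ok(out) => String::from_utf8_lossy(&out.stdout).contains("wasm32-unknown-unknown"),
        // If rustup isn't available we can't verify, so optimistically assume the target exists.
        Err(_) => true,
    }
}
```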


@ -1,193 +1,196 @@
use super::BuildRequest;
use super::BuildResult;
use crate::assets::pre_compress_folder;
use crate::builder::progress::Stage;
use crate::builder::progress::UpdateBuildProgress;
use crate::builder::progress::UpdateStage;
use crate::error::{Error, Result};
use crate::TraceSrc;
use futures_channel::mpsc::UnboundedSender;
use manganis_cli_support::AssetManifest;
use std::path::Path;
use tokio::process::Command;
use wasm_bindgen_cli_support::Bindgen;
use crate::error::Result;
use crate::BuildRequest;
use std::fmt::Write;
use std::path::{Path, PathBuf};
// Attempt to automatically recover from a bindgen failure by updating the wasm-bindgen version
async fn update_wasm_bindgen_version() -> Result<()> {
let cli_bindgen_version = wasm_bindgen_shared::version();
tracing::info!(dx_src = ?TraceSrc::Build, "Attempting to recover from bindgen failure by setting the wasm-bindgen version to {cli_bindgen_version}...");
let output = Command::new("cargo")
.args([
"update",
"-p",
"wasm-bindgen",
"--precise",
&cli_bindgen_version,
])
.output()
.await;
let mut error_message = None;
if let Ok(output) = output {
if output.status.success() {
tracing::info!(dx_src = ?TraceSrc::Dev, "Successfully updated wasm-bindgen to {cli_bindgen_version}");
return Ok(());
} else {
error_message = Some(output);
}
}
if let Some(output) = error_message {
tracing::error!(dx_src = ?TraceSrc::Dev, "Failed to update wasm-bindgen: {:#?}", output);
}
Err(Error::BuildFailed(format!("WASM bindgen build failed!\nThis is probably due to the Bindgen version, dioxus-cli is using `{cli_bindgen_version}` which is not compatible with your crate.\nPlease reinstall the dioxus cli to fix this issue.\nYou can reinstall the dioxus cli by running `cargo install dioxus-cli --force` and then rebuild your project")))
}
/// Check if the wasm32-unknown-unknown target is installed and try to install it if not
pub(crate) async fn install_web_build_tooling(
progress: &mut UnboundedSender<UpdateBuildProgress>,
) -> Result<()> {
// If the user has rustup, we can check if the wasm32-unknown-unknown target is installed
// Otherwise we can just assume it is installed - which is not great...
// Eventually we can poke at the errors and let the user know they need to install the target
if let Ok(wasm_check_command) = Command::new("rustup").args(["show"]).output().await {
let wasm_check_output = String::from_utf8(wasm_check_command.stdout).unwrap();
if !wasm_check_output.contains("wasm32-unknown-unknown") {
_ = progress.start_send(UpdateBuildProgress {
stage: Stage::InstallingWasmTooling,
update: UpdateStage::Start,
});
tracing::info!(dx_src = ?TraceSrc::Build, "`wasm32-unknown-unknown` target not detected, installing..");
let _ = Command::new("rustup")
.args(["target", "add", "wasm32-unknown-unknown"])
.output()
.await?;
}
}
Ok(())
}
const DEFAULT_HTML: &str = include_str!("../../assets/web/index.html");
const TOAST_HTML: &str = include_str!("../../assets/web/toast.html");
impl BuildRequest {
async fn run_wasm_bindgen(&self, input_path: &Path, bindgen_outdir: &Path) -> Result<()> {
tracing::info!(dx_src = ?TraceSrc::Build, "Running wasm-bindgen");
let input_path = input_path.to_path_buf();
let bindgen_outdir = bindgen_outdir.to_path_buf();
let keep_debug =
self.dioxus_crate.dioxus_config.web.wasm_opt.debug || (!self.build_arguments.release);
let name = self.dioxus_crate.dioxus_config.application.name.clone();
let run_wasm_bindgen = move || {
// [3] Bindgen the final binary for easy linking
let mut bindgen_builder = Bindgen::new();
bindgen_builder
.input_path(&input_path)
.web(true)
.unwrap()
.debug(keep_debug)
.demangle(keep_debug)
.keep_debug(keep_debug)
.reference_types(true)
.remove_name_section(!keep_debug)
.remove_producers_section(!keep_debug)
.out_name(&name)
.generate(&bindgen_outdir)
.unwrap();
pub(crate) fn prepare_html(&self) -> Result<String> {
let mut html = {
let crate_root: &Path = &self.krate.crate_dir();
let custom_html_file = crate_root.join("index.html");
std::fs::read_to_string(custom_html_file).unwrap_or_else(|_| String::from(DEFAULT_HTML))
};
let bindgen_result = tokio::task::spawn_blocking(run_wasm_bindgen.clone()).await;
// WASM bindgen requires the exact version of the bindgen schema to match the version the CLI was built with
// If we get an error, we can try to recover by pinning the user's wasm-bindgen version to the version we used
if let Err(err) = bindgen_result {
tracing::error!(dx_src = ?TraceSrc::Build, "Bindgen build failed: {:?}", err);
update_wasm_bindgen_version().await?;
run_wasm_bindgen();
// Inject any resources from the config into the html
self.inject_resources(&mut html)?;
// Inject loading scripts if they are not already present
self.inject_loading_scripts(&mut html);
// Replace any special placeholders in the HTML with resolved values
self.replace_template_placeholders(&mut html);
let title = self.krate.config.web.app.title.clone();
replace_or_insert_before("{app_title}", "</title", &title, &mut html);
Ok(html)
}
fn is_dev_build(&self) -> bool {
!self.build.release
}
// Inject any resources from the config into the html
fn inject_resources(&self, html: &mut String) -> Result<()> {
// Collect all resources into a list of styles and scripts
let resources = &self.krate.config.web.resource;
let mut style_list = resources.style.clone().unwrap_or_default();
let mut script_list = resources.script.clone().unwrap_or_default();
if self.is_dev_build() {
style_list.extend(resources.dev.style.iter().cloned());
script_list.extend(resources.dev.script.iter().cloned());
}
let mut head_resources = String::new();
// Add all styles to the head
for style in &style_list {
writeln!(
&mut head_resources,
"<link rel=\"stylesheet\" href=\"{}\">",
&style.to_str().unwrap(),
)?;
}
// Add all scripts to the head
for script in &script_list {
writeln!(
&mut head_resources,
"<script src=\"{}\"></script>",
&script.to_str().unwrap(),
)?;
}
if !style_list.is_empty() {
self.send_resource_deprecation_warning(style_list, ResourceType::Style);
}
if !script_list.is_empty() {
self.send_resource_deprecation_warning(script_list, ResourceType::Script);
}
// Inject any resources from manganis into the head
// if let Some(assets) = assets {
// head_resources.push_str(&assets.head());
// }
replace_or_insert_before("{style_include}", "</head", &head_resources, html);
Ok(())
}
/// Post process the WASM build artifacts
pub(crate) async fn post_process_web_build(
&self,
build_result: &BuildResult,
assets: Option<&AssetManifest>,
progress: &mut UnboundedSender<UpdateBuildProgress>,
) -> Result<()> {
_ = progress.start_send(UpdateBuildProgress {
stage: Stage::OptimizingWasm,
update: UpdateStage::Start,
});
// Find the wasm file
let output_location = build_result.executable.clone();
let input_path = output_location.with_extension("wasm");
// Create the directory where the bindgen output will be placed
let bindgen_outdir = self.target_out_dir().join("assets").join("dioxus");
// Run wasm-bindgen
self.run_wasm_bindgen(&input_path, &bindgen_outdir).await?;
// Only run wasm-opt if the feature is enabled
// Wasm-opt has an expensive build script that makes it annoying to keep enabled for iterative dev
#[cfg(feature = "optimizations")]
{
// Run wasm-opt if this is a release build
if self.build_arguments.release {
use crate::config::WasmOptLevel;
tracing::info!(dx_src = ?TraceSrc::Build, "Running optimization with wasm-opt...");
let mut options = match self.dioxus_crate.dioxus_config.web.wasm_opt.level {
WasmOptLevel::Z => {
wasm_opt::OptimizationOptions::new_optimize_for_size_aggressively()
/// Inject loading scripts if they are not already present
fn inject_loading_scripts(&self, html: &mut String) {
// If it looks like we are already loading wasm or the current build opted out of injecting loading scripts, don't inject anything
if !self.build.inject_loading_scripts || html.contains("__wbindgen_start") {
return;
}
WasmOptLevel::S => wasm_opt::OptimizationOptions::new_optimize_for_size(),
WasmOptLevel::Zero => wasm_opt::OptimizationOptions::new_opt_level_0(),
WasmOptLevel::One => wasm_opt::OptimizationOptions::new_opt_level_1(),
WasmOptLevel::Two => wasm_opt::OptimizationOptions::new_opt_level_2(),
WasmOptLevel::Three => wasm_opt::OptimizationOptions::new_opt_level_3(),
WasmOptLevel::Four => wasm_opt::OptimizationOptions::new_opt_level_4(),
// If not, insert the script
*html = html.replace(
"</body",
r#"<script>
// We can't use a module script here because we need to start the script immediately when streaming
import("/{base_path}/wasm/{app_name}.js").then(
({ default: init }) => {
init("/{base_path}/wasm/{app_name}_bg.wasm").then((wasm) => {
if (wasm.__wbindgen_start == undefined) {
wasm.main();
}
});
}
);
</script>
{DX_TOAST_UTILITIES}
</body"#,
);
// Trim out the toasts if we're in release, or add them if we're serving
*html = match self.is_dev_build() {
true => html.replace("{DX_TOAST_UTILITIES}", TOAST_HTML),
false => html.replace("{DX_TOAST_UTILITIES}", ""),
};
let wasm_file = bindgen_outdir.join(format!(
"{}_bg.wasm",
self.dioxus_crate.dioxus_config.application.name
));
let old_size = wasm_file.metadata()?.len();
options
// WASM bindgen relies on reference types
.enable_feature(wasm_opt::Feature::ReferenceTypes)
.debug_info(self.dioxus_crate.dioxus_config.web.wasm_opt.debug)
.run(&wasm_file, &wasm_file)
.map_err(|err| Error::Other(anyhow::anyhow!(err)))?;
let new_size = wasm_file.metadata()?.len();
tracing::info!(
dx_src = ?TraceSrc::Build,
"wasm-opt reduced WASM size from {} to {} ({:2}%)",
old_size,
new_size,
(new_size as f64 - old_size as f64) / old_size as f64 * 100.0
// And try to insert preload links for the wasm and js files
*html = html.replace(
"</head",
r#"<link rel="preload" href="/{base_path}/wasm/{app_name}_bg.wasm" as="fetch" type="application/wasm" crossorigin="">
<link rel="preload" href="/{base_path}/wasm/{app_name}.js" as="script">
</head"#
);
}
/// Replace any special placeholders in the HTML with resolved values
fn replace_template_placeholders(&self, html: &mut String) {
let base_path = self.krate.config.web.app.base_path();
*html = html.replace("{base_path}", base_path);
let app_name = &self.krate.executable_name();
*html = html.replace("{app_name}", app_name);
}
// If pre-compressing is enabled, we can pre_compress the wasm-bindgen output
let pre_compress = self
.dioxus_crate
.should_pre_compress_web_assets(self.build_arguments.release);
tokio::task::spawn_blocking(move || pre_compress_folder(&bindgen_outdir, pre_compress))
.await
.unwrap()?;
fn send_resource_deprecation_warning(&self, paths: Vec<PathBuf>, variant: ResourceType) {
const RESOURCE_DEPRECATION_MESSAGE: &str = r#"The `web.resource` config has been deprecated in favor of head components and will be removed in a future release. Instead of including assets in the config, you can include assets with the `asset!` macro and add them to the head with `document::Link` and `Script` components."#;
// Create the index.html file
// Note that we do this last since the webserver will attempt to serve the index.html file
// If we do this too early, the wasm won't be ready but the index.html will be served, leading
// to test failures and broken pages.
let html = self.prepare_html(assets, progress)?;
let html_path = self.target_out_dir().join("index.html");
std::fs::write(html_path, html)?;
let replacement_components = paths
.iter()
.map(|path| {
let path = if path.exists() {
path.to_path_buf()
} else {
// If the path is absolute, make it relative to the current directory before we join it
// The path is actually a web path which is relative to the root of the website
let path = path.strip_prefix("/").unwrap_or(path);
let asset_dir_path = self.krate.legacy_asset_dir().join(path);
if let Ok(absolute_path) = asset_dir_path.canonicalize() {
let absolute_crate_root = self.krate.crate_dir().canonicalize().unwrap();
PathBuf::from("./")
.join(absolute_path.strip_prefix(absolute_crate_root).unwrap())
} else {
path.to_path_buf()
}
};
match variant {
ResourceType::Style => {
format!(" Stylesheet {{ href: asset!(\"{}\") }}", path.display())
}
ResourceType::Script => {
format!(" Script {{ src: asset!(\"{}\") }}", path.display())
}
}
})
.collect::<Vec<_>>();
let replacement_components = format!("rsx! {{\n{}\n}}", replacement_components.join("\n"));
let section_name = match variant {
ResourceType::Style => "web.resource.style",
ResourceType::Script => "web.resource.script",
};
Ok(())
tracing::debug!(
"{RESOURCE_DEPRECATION_MESSAGE}\nTo migrate to head components, remove `{section_name}` and include the following rsx in your root component:\n```rust\n{replacement_components}\n```"
);
}
}
enum ResourceType {
Style,
Script,
}
/// Replace a string or insert the new contents before a marker
fn replace_or_insert_before(
replace: &str,
or_insert_before: &str,
with: &str,
content: &mut String,
) {
if content.contains(replace) {
*content = content.replace(replace, with);
} else if let Some(pos) = content.find(or_insert_before) {
content.insert_str(pos, with);
}
}
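A quick illustration of how `replace_or_insert_before` handles the `{app_title}` placeholder used in `prepare_html` above (the helper is copied from the hunk so the example runs standalone; the title value is made up):

```rust
// Copied from the hunk above so this example is runnable on its own.
fn replace_or_insert_before(replace: &str, or_insert_before: &str, with: &str, content: &mut String) {
    if content.contains(replace) {
        *content = content.replace(replace, with);
    } else if let Some(pos) = content.find(or_insert_before) {
        content.insert_str(pos, with);
    }
}

fn main() {
    // Placeholder present: it is replaced in place.
    let mut html = String::from("<title>{app_title}</title>");
    replace_or_insert_before("{app_title}", "</title", "My App", &mut html);
    assert_eq!(html, "<title>My App</title>");

    // Placeholder missing: the value is inserted just before the closing marker.
    let mut html = String::from("<title></title>");
    replace_or_insert_before("{app_title}", "</title", "My App", &mut html);
    assert_eq!(html, "<title>My App</title>");
}
```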


@ -0,0 +1,196 @@
use crate::{
config::BundleConfig, CustomSignCommandSettings, DebianSettings, MacOsSettings,
NSISInstallerMode, NsisSettings, PackageType, WebviewInstallMode, WindowsSettings, WixSettings,
};
pub(crate) fn make_tauri_bundler_settings(
bundle_config: BundleConfig,
) -> tauri_bundler::BundleSettings {
bundle_config.into()
}
impl From<NsisSettings> for tauri_bundler::NsisSettings {
fn from(val: NsisSettings) -> Self {
tauri_bundler::NsisSettings {
header_image: val.header_image,
sidebar_image: val.sidebar_image,
installer_icon: val.installer_icon,
install_mode: val.install_mode.into(),
languages: val.languages,
display_language_selector: val.display_language_selector,
custom_language_files: None,
template: None,
compression: tauri_utils::config::NsisCompression::None,
start_menu_folder: val.start_menu_folder,
installer_hooks: val.installer_hooks,
minimum_webview2_version: val.minimum_webview2_version,
}
}
}
impl From<BundleConfig> for tauri_bundler::BundleSettings {
fn from(val: BundleConfig) -> Self {
tauri_bundler::BundleSettings {
identifier: val.identifier,
publisher: val.publisher,
icon: val.icon,
resources: val.resources,
copyright: val.copyright,
category: val.category.and_then(|c| c.parse().ok()),
short_description: val.short_description,
long_description: val.long_description,
external_bin: val.external_bin,
deb: val.deb.map(Into::into).unwrap_or_default(),
macos: val.macos.map(Into::into).unwrap_or_default(),
windows: val.windows.map(Into::into).unwrap_or_default(),
..Default::default()
}
}
}
impl From<DebianSettings> for tauri_bundler::DebianSettings {
fn from(val: DebianSettings) -> Self {
tauri_bundler::DebianSettings {
depends: val.depends,
files: val.files,
desktop_template: val.desktop_template,
provides: val.provides,
conflicts: val.conflicts,
replaces: val.replaces,
section: val.section,
priority: val.priority,
changelog: val.changelog,
pre_install_script: val.pre_install_script,
post_install_script: val.post_install_script,
pre_remove_script: val.pre_remove_script,
post_remove_script: val.post_remove_script,
}
}
}
impl From<WixSettings> for tauri_bundler::WixSettings {
fn from(val: WixSettings) -> Self {
tauri_bundler::WixSettings {
language: tauri_bundler::bundle::WixLanguage({
let mut languages: Vec<_> = val
.language
.iter()
.map(|l| {
(
l.0.clone(),
tauri_bundler::bundle::WixLanguageConfig {
locale_path: l.1.clone(),
},
)
})
.collect();
if languages.is_empty() {
languages.push(("en-US".into(), Default::default()));
}
languages
}),
template: val.template,
fragment_paths: val.fragment_paths,
component_group_refs: val.component_group_refs,
component_refs: val.component_refs,
feature_group_refs: val.feature_group_refs,
feature_refs: val.feature_refs,
merge_refs: val.merge_refs,
enable_elevated_update_task: val.enable_elevated_update_task,
banner_path: val.banner_path,
dialog_image_path: val.dialog_image_path,
fips_compliant: val.fips_compliant,
version: val.version,
upgrade_code: val.upgrade_code,
}
}
}
impl From<MacOsSettings> for tauri_bundler::MacOsSettings {
fn from(val: MacOsSettings) -> Self {
tauri_bundler::MacOsSettings {
frameworks: val.frameworks,
minimum_system_version: val.minimum_system_version,
exception_domain: val.exception_domain,
signing_identity: val.signing_identity,
provider_short_name: val.provider_short_name,
entitlements: val.entitlements,
info_plist_path: val.info_plist_path,
files: val.files,
hardened_runtime: val.hardened_runtime,
}
}
}
#[allow(deprecated)]
impl From<WindowsSettings> for tauri_bundler::WindowsSettings {
fn from(val: WindowsSettings) -> Self {
tauri_bundler::WindowsSettings {
digest_algorithm: val.digest_algorithm,
certificate_thumbprint: val.certificate_thumbprint,
timestamp_url: val.timestamp_url,
tsp: val.tsp,
wix: val.wix.map(Into::into),
webview_install_mode: val.webview_install_mode.into(),
allow_downgrades: val.allow_downgrades,
nsis: val.nsis.map(Into::into),
sign_command: val.sign_command.map(Into::into),
icon_path: val.icon_path.unwrap_or("./icons/icon.ico".into()),
}
}
}
impl From<NSISInstallerMode> for tauri_utils::config::NSISInstallerMode {
fn from(val: NSISInstallerMode) -> Self {
match val {
NSISInstallerMode::CurrentUser => tauri_utils::config::NSISInstallerMode::CurrentUser,
NSISInstallerMode::PerMachine => tauri_utils::config::NSISInstallerMode::PerMachine,
NSISInstallerMode::Both => tauri_utils::config::NSISInstallerMode::Both,
}
}
}
impl From<PackageType> for tauri_bundler::PackageType {
fn from(value: PackageType) -> Self {
match value {
PackageType::MacOsBundle => Self::MacOsBundle,
PackageType::IosBundle => Self::IosBundle,
PackageType::WindowsMsi => Self::WindowsMsi,
PackageType::Deb => Self::Deb,
PackageType::Rpm => Self::Rpm,
PackageType::AppImage => Self::AppImage,
PackageType::Dmg => Self::Dmg,
PackageType::Updater => Self::Updater,
}
}
}
impl WebviewInstallMode {
fn into(self) -> tauri_utils::config::WebviewInstallMode {
match self {
Self::Skip => tauri_utils::config::WebviewInstallMode::Skip,
Self::DownloadBootstrapper { silent } => {
tauri_utils::config::WebviewInstallMode::DownloadBootstrapper { silent }
}
Self::EmbedBootstrapper { silent } => {
tauri_utils::config::WebviewInstallMode::EmbedBootstrapper { silent }
}
Self::OfflineInstaller { silent } => {
tauri_utils::config::WebviewInstallMode::OfflineInstaller { silent }
}
Self::FixedRuntime { path } => {
tauri_utils::config::WebviewInstallMode::FixedRuntime { path }
}
}
}
}
impl From<CustomSignCommandSettings> for tauri_bundler::CustomSignCommandSettings {
fn from(val: CustomSignCommandSettings) -> Self {
tauri_bundler::CustomSignCommandSettings {
cmd: val.cmd,
args: val.args,
}
}
}


@ -1,6 +1,5 @@
use super::*;
use crate::DioxusCrate;
use build::TargetArgs;
use dioxus_autofmt::{IndentOptions, IndentType};
use rayon::prelude::*;
use std::{borrow::Cow, fs, path::Path, process::exit};
@ -10,35 +9,35 @@ use std::{borrow::Cow, fs, path::Path, process::exit};
/// Format some rsx
#[derive(Clone, Debug, Parser)]
pub struct Autoformat {
pub(crate) struct Autoformat {
/// Format rust code before formatting the rsx macros
#[clap(long)]
pub all_code: bool,
pub(crate) all_code: bool,
/// Run in 'check' mode. Exits with 0 if input is formatted correctly. Exits
/// with 1 and prints a diff if formatting is required.
#[clap(short, long)]
pub check: bool,
pub(crate) check: bool,
/// Input rsx (selection)
#[clap(short, long)]
pub raw: Option<String>,
pub(crate) raw: Option<String>,
/// Input file
#[clap(short, long)]
pub file: Option<String>,
pub(crate) file: Option<String>,
/// Split attributes in lines or not
#[clap(short, long, default_value = "false")]
pub split_line_attributes: bool,
pub(crate) split_line_attributes: bool,
/// The package to build
#[clap(short, long)]
pub package: Option<String>,
pub(crate) package: Option<String>,
}
impl Autoformat {
pub fn autoformat(self) -> Result<()> {
pub(crate) fn autoformat(self) -> Result<()> {
let Autoformat {
check,
raw,
@ -158,7 +157,7 @@ fn format_file(
let mut if_write = false;
if format_rust_code {
let formatted = format_rust(&contents)
.map_err(|err| Error::ParseError(format!("Syntax Error:\n{}", err)))?;
.map_err(|err| Error::Parse(format!("Syntax Error:\n{}", err)))?;
if contents != formatted {
if_write = true;
contents = formatted;
@ -166,9 +165,9 @@ fn format_file(
}
let parsed = syn::parse_file(&contents)
.map_err(|err| Error::ParseError(format!("Failed to parse file: {}", err)))?;
.map_err(|err| Error::Parse(format!("Failed to parse file: {}", err)))?;
let edits = dioxus_autofmt::try_fmt_file(&contents, &parsed, indent)
.map_err(|err| Error::ParseError(format!("Failed to format file: {}", err)))?;
.map_err(|err| Error::Parse(format!("Failed to format file: {}", err)))?;
let len = edits.len();
if !edits.is_empty() {
@ -240,8 +239,11 @@ fn indentation_for(
.stdout(std::process::Stdio::piped())
.stderr(std::process::Stdio::inherit())
.output()?;
if !out.status.success() {
return Err(Error::CargoError("cargo fmt failed".into()));
return Err(Error::Runtime(format!(
"cargo fmt failed with status: {out:?}"
)));
}
let config = String::from_utf8_lossy(&out.stdout);
@ -252,18 +254,16 @@ fn indentation_for(
.and_then(|line| line.split_once('='))
.map(|(_, value)| value.trim() == "true")
.ok_or_else(|| {
Error::RuntimeError("Could not find hard_tabs option in rustfmt config".into())
Error::Runtime("Could not find hard_tabs option in rustfmt config".into())
})?;
let tab_spaces = config
.lines()
.find(|line| line.starts_with("tab_spaces "))
.and_then(|line| line.split_once('='))
.map(|(_, value)| value.trim().parse::<usize>())
.ok_or_else(|| {
Error::RuntimeError("Could not find tab_spaces option in rustfmt config".into())
})?
.ok_or_else(|| Error::Runtime("Could not find tab_spaces option in rustfmt config".into()))?
.map_err(|_| {
Error::RuntimeError("Could not parse tab_spaces option in rustfmt config".into())
Error::Runtime("Could not parse tab_spaces option in rustfmt config".into())
})?;
Ok(IndentOptions::new(
@ -288,7 +288,7 @@ fn format_syn_error(err: syn::Error) -> Error {
let start = err.span().start();
let line = start.line;
let column = start.column;
Error::ParseError(format!(
Error::Parse(format!(
"Syntax Error in line {} column {}:\n{}",
line, column, err
))


@ -1,224 +1,175 @@
use std::str::FromStr;
use crate::{builder::OpenArguments, config::Platform};
use anyhow::Context;
use crate::{
builder::{BuildRequest, TargetPlatform},
dioxus_crate::DioxusCrate,
};
use super::*;
/// Information about the target to build
#[derive(Clone, Debug, Default, Deserialize, Parser)]
pub struct TargetArgs {
/// Build for nightly [default: false]
#[clap(long)]
pub nightly: bool,
/// Build an example [default: ""]
#[clap(long)]
pub example: Option<String>,
/// Build a binary [default: ""]
#[clap(long)]
pub bin: Option<String>,
/// The package to build
#[clap(short, long)]
pub package: Option<String>,
/// Space separated list of features to activate
#[clap(long)]
pub features: Vec<String>,
/// The feature to use for the client in a fullstack app [default: "web"]
#[clap(long)]
pub client_feature: Option<String>,
/// The feature to use for the server in a fullstack app [default: "server"]
#[clap(long)]
pub server_feature: Option<String>,
/// Rustc platform triple
#[clap(long)]
pub target: Option<String>,
}
use crate::{AppBundle, Builder, DioxusCrate, Platform, PROFILE_SERVER};
/// Build the Rust Dioxus app and all of its assets.
///
/// Produces a final output bundle designed to be run on the target platform.
#[derive(Clone, Debug, Default, Deserialize, Parser)]
#[clap(name = "build")]
pub struct Build {
pub(crate) struct BuildArgs {
/// Build in release mode [default: false]
#[clap(long, short)]
#[serde(default)]
pub release: bool,
/// This flag only applies to fullstack builds. By default fullstack builds will run with something in between debug and release mode. This flag will force the build to run in debug mode. [default: false]
#[clap(long)]
#[serde(default)]
pub force_debug: bool,
pub(crate) release: bool,
/// This flag only applies to fullstack builds. By default fullstack builds will run the server and client builds in parallel. This flag will force the build to run the server build first, then the client build. [default: false]
#[clap(long)]
#[serde(default)]
pub force_sequential: bool,
pub(crate) force_sequential: bool,
// Use verbose output [default: false]
/// Use verbose output [default: false]
#[clap(long)]
#[serde(default)]
pub verbose: bool,
pub(crate) verbose: bool,
/// Build with custom profile
/// Use trace output [default: false]
#[clap(long)]
pub profile: Option<String>,
#[serde(default)]
pub(crate) trace: bool,
/// Pass -Awarnings to the cargo build
#[clap(long)]
#[serde(default)]
pub(crate) silent: bool,
/// Build the app with a custom profile
#[clap(long)]
pub(crate) profile: Option<String>,
/// Build with custom profile for the fullstack server
#[clap(long, default_value_t = PROFILE_SERVER.to_string())]
pub(crate) server_profile: String,
/// Build platform: supports Web & Desktop [default: "default_platform"]
#[clap(long, value_enum)]
pub platform: Option<Platform>,
pub(crate) platform: Option<Platform>,
/// Build the fullstack variant of this app, using that as the fileserver and backend
///
/// This defaults to `false` but will be overridden to true if the `fullstack` feature is enabled.
#[clap(long)]
pub(crate) fullstack: bool,
/// Run the ssg config of the app and generate the files
#[clap(long)]
pub(crate) ssg: bool,
/// Skip collecting assets from dependencies [default: false]
#[clap(long)]
#[serde(default)]
pub skip_assets: bool,
pub(crate) skip_assets: bool,
/// Extra arguments passed to cargo build
#[clap(last = true)]
pub cargo_args: Vec<String>,
pub(crate) cargo_args: Vec<String>,
/// Inject scripts to load the wasm and js files for your dioxus app if they are not already present [default: true]
#[clap(long, default_value_t = true)]
pub inject_loading_scripts: bool,
pub(crate) inject_loading_scripts: bool,
/// Information about the target to build
#[clap(flatten)]
pub target_args: TargetArgs,
pub(crate) target_args: TargetArgs,
}
impl Build {
pub fn resolve(&mut self, dioxus_crate: &mut DioxusCrate) -> Result<()> {
// Inherit the platform from the defaults
let platform = self
.platform
.unwrap_or_else(|| self.auto_detect_platform(dioxus_crate));
self.platform = Some(platform);
impl BuildArgs {
pub async fn build_it(&mut self) -> Result<()> {
self.build().await?;
Ok(())
}
// Add any features required to turn on the platform we are building for
pub(crate) async fn build(&mut self) -> Result<AppBundle> {
let krate =
DioxusCrate::new(&self.target_args).context("Failed to load Dioxus workspace")?;
self.resolve(&krate)?;
let bundle = Builder::start(&krate, self.clone())?.finish().await?;
println!(
"Successfully built! 💫\nBundle at {}",
bundle.app_dir().display()
);
Ok(bundle)
}
/// Update the arguments of the CLI by inspecting the DioxusCrate itself and learning about how
/// the user has configured their app.
///
/// i.e., if they've specified "fullstack" as a feature on `dioxus`, then we want to build the
/// fullstack variant even if they omitted the `--fullstack` flag.
pub(crate) fn resolve(&mut self, krate: &DioxusCrate) -> Result<()> {
let default_platform = krate.default_platform();
let auto_platform = krate.autodetect_platform();
// The user passed --platform XYZ but already has `default = ["ABC"]` in their Cargo.toml
// We want to strip out the default platform and use the one they passed, setting no-default-features
if self.platform.is_some() && default_platform.is_some() {
self.target_args.no_default_features = true;
self.target_args
.features
.extend(dioxus_crate.features_for_platform(platform));
.extend(krate.platformless_features());
}
// Inherit the platform from the args, or auto-detect it
if self.platform.is_none() {
let (platform, _feature) = auto_platform.ok_or_else(|| {
anyhow::anyhow!("No platform was specified and could not be auto-detected. Please specify a platform with `--platform <platform>` or set a default platform using a cargo feature.")
})?;
self.platform = Some(platform);
}
let platform = self
.platform
.expect("Platform to be set after autodetection");
// Add any features required to turn on the client
self.target_args
.client_features
.extend(krate.feature_for_platform(platform));
// Add any features required to turn on the server
// This won't take effect if the server is not built, so it's fine to just set it here even if it's not used
self.target_args
.server_features
.extend(krate.feature_for_platform(Platform::Server));
// Make sure we set the fullstack platform so we actually build the fullstack variant
// Users need to enable "fullstack" in their default feature set.
// todo(jon): fullstack *could* be a feature of the app, but right now we're assuming it's always enabled
self.fullstack = self.fullstack || krate.has_dioxus_feature("fullstack");
// Make sure we have a server feature if we're building a fullstack app
//
// todo(jon): eventually we want to let users pass a `--server <crate>` flag to specify a package to use as the server
// however, it'll take some time to support that and we don't have a great RPC binding layer between the two yet
if self.fullstack && self.target_args.server_features.is_empty() {
return Err(anyhow::anyhow!("Fullstack builds require a server feature on the target crate. Add a `server` feature to the crate and try again.").into());
}
// Set the profile of the build if it's not already set
// We do this for android/wasm/server since they each use a dedicated default profile
if self.profile.is_none() {
match self.platform {
Some(Platform::Android) => {
self.profile = Some(crate::dioxus_crate::PROFILE_ANDROID.to_string());
}
Some(Platform::Web) => {
self.profile = Some(crate::dioxus_crate::PROFILE_WASM.to_string());
}
Some(Platform::Server) => {
self.profile = Some(crate::dioxus_crate::PROFILE_SERVER.to_string());
}
_ => {}
}
}
Ok(())
}
pub async fn build(&mut self, dioxus_crate: &mut DioxusCrate) -> Result<()> {
self.resolve(dioxus_crate)?;
let build_requests = BuildRequest::create(false, dioxus_crate, self.clone())?;
let builds = BuildRequest::build_all_parallel(build_requests).await?;
// If this is a static generation build, building involves running the server to generate static files
if self.platform.unwrap() == Platform::StaticGeneration {
println!("Building static site...");
for build in builds {
if let Some(mut result) =
build.open(OpenArguments::new_for_static_generation_build(dioxus_crate))?
{
result.wait().await?;
}
}
println!("Static site built!");
}
Ok(())
}
pub async fn run(&mut self) -> anyhow::Result<()> {
let mut dioxus_crate =
DioxusCrate::new(&self.target_args).context("Failed to load Dioxus workspace")?;
self.build(&mut dioxus_crate).await?;
Ok(())
}
pub(crate) fn auto_detect_client_platform(
&self,
resolved: &DioxusCrate,
) -> (Option<String>, TargetPlatform) {
self.find_dioxus_feature(resolved, |platform| {
matches!(platform, TargetPlatform::Web | TargetPlatform::Desktop)
})
.unwrap_or_else(|| (Some("web".to_string()), TargetPlatform::Web))
}
pub(crate) fn auto_detect_server_feature(&self, resolved: &DioxusCrate) -> Option<String> {
self.find_dioxus_feature(resolved, |platform| {
matches!(platform, TargetPlatform::Server)
})
.map(|(feature, _)| feature)
.unwrap_or_else(|| Some("server".to_string()))
}
fn auto_detect_platform(&self, resolved: &DioxusCrate) -> Platform {
self.auto_detect_platform_with_filter(resolved, |_| true).1
}
fn auto_detect_platform_with_filter(
&self,
resolved: &DioxusCrate,
filter_platform: fn(&Platform) -> bool,
) -> (Option<String>, Platform) {
self.find_dioxus_feature(resolved, filter_platform)
.unwrap_or_else(|| {
let default_platform = resolved.dioxus_config.application.default_platform;
(Some(default_platform.to_string()), default_platform)
})
}
fn find_dioxus_feature<P: FromStr>(
&self,
resolved: &DioxusCrate,
filter_platform: fn(&P) -> bool,
) -> Option<(Option<String>, P)> {
// First check the enabled features for any renderer enabled
for dioxus in resolved.krates.krates_by_name("dioxus") {
let Some(features) = resolved.krates.get_enabled_features(dioxus.kid) else {
continue;
};
if let Some(platform) = features
.iter()
.find_map(|platform| platform.parse::<P>().ok())
.filter(filter_platform)
{
return Some((None, platform));
}
}
// Then check the features that might get enabled
if let Some(platform) = resolved
.package()
.features
.iter()
.find_map(|(feature, enables)| {
enables
.iter()
.find_map(|f| {
f.strip_prefix("dioxus/")
.or_else(|| feature.strip_prefix("dep:dioxus/"))
.and_then(|f| f.parse::<P>().ok())
.filter(filter_platform)
})
.map(|platform| (Some(feature.clone()), platform))
})
{
return Some(platform);
}
None
}
/// Get the platform from the build arguments
pub fn platform(&self) -> Platform {
self.platform.unwrap_or_default()
pub(crate) fn platform(&self) -> Platform {
self.platform.expect("Platform was not set")
}
}


@ -1,11 +1,10 @@
use crate::build::Build;
use crate::bundle_utils::make_tauri_bundler_settings;
use crate::DioxusCrate;
use crate::{build::BuildArgs, PackageType};
use anyhow::Context;
use std::env::current_dir;
use std::fs::create_dir_all;
use std::ops::Deref;
use std::str::FromStr;
use tauri_bundler::{BundleSettings, PackageSettings, SettingsBuilder};
use tauri_bundler::{PackageSettings, SettingsBuilder};
use super::*;
@ -16,77 +15,26 @@ pub struct Bundle {
/// The package types to bundle
#[clap(long)]
pub packages: Option<Vec<PackageType>>,
/// The arguments for the dioxus build
#[clap(flatten)]
pub build_arguments: Build,
}
impl Deref for Bundle {
type Target = Build;
fn deref(&self) -> &Self::Target {
&self.build_arguments
}
}
#[derive(Clone, Copy, Debug)]
pub enum PackageType {
MacOsBundle,
IosBundle,
WindowsMsi,
Deb,
Rpm,
AppImage,
Dmg,
Updater,
}
impl FromStr for PackageType {
type Err = String;
fn from_str(s: &str) -> Result<Self, Self::Err> {
match s {
"macos" => Ok(PackageType::MacOsBundle),
"ios" => Ok(PackageType::IosBundle),
"msi" => Ok(PackageType::WindowsMsi),
"deb" => Ok(PackageType::Deb),
"rpm" => Ok(PackageType::Rpm),
"appimage" => Ok(PackageType::AppImage),
"dmg" => Ok(PackageType::Dmg),
_ => Err(format!("{} is not a valid package type", s)),
}
}
}
impl From<PackageType> for tauri_bundler::PackageType {
fn from(val: PackageType) -> Self {
match val {
PackageType::MacOsBundle => tauri_bundler::PackageType::MacOsBundle,
PackageType::IosBundle => tauri_bundler::PackageType::IosBundle,
PackageType::WindowsMsi => tauri_bundler::PackageType::WindowsMsi,
PackageType::Deb => tauri_bundler::PackageType::Deb,
PackageType::Rpm => tauri_bundler::PackageType::Rpm,
PackageType::AppImage => tauri_bundler::PackageType::AppImage,
PackageType::Dmg => tauri_bundler::PackageType::Dmg,
PackageType::Updater => tauri_bundler::PackageType::Updater,
}
}
pub(crate) build_arguments: BuildArgs,
}
impl Bundle {
pub async fn bundle(mut self) -> anyhow::Result<()> {
let mut dioxus_crate = DioxusCrate::new(&self.build_arguments.target_args)
pub(crate) async fn bundle(mut self) -> anyhow::Result<()> {
let krate = DioxusCrate::new(&self.build_arguments.target_args)
.context("Failed to load Dioxus workspace")?;
self.build_arguments.resolve(&mut dioxus_crate)?;
self.build_arguments.resolve(&krate)?;
// Build the app
self.build_arguments.build(&mut dioxus_crate).await?;
let bundle = self.build_arguments.build().await?;
// copy the binary to the out dir
let package = dioxus_crate.package();
let package = krate.package();
let mut name: PathBuf = dioxus_crate.executable_name().into();
let mut name: PathBuf = krate.executable_name().into();
if cfg!(windows) {
name.set_extension("exe");
}
@ -94,56 +42,32 @@ impl Bundle {
// bundle the app
let binaries = vec![
tauri_bundler::BundleBinary::new(name.display().to_string(), true)
.set_src_path(Some(dioxus_crate.workspace_dir().display().to_string())),
.set_src_path(Some(krate.workspace_dir().display().to_string())),
];
let mut bundle_settings: BundleSettings = dioxus_crate.dioxus_config.bundle.clone().into();
let bundle_config = krate.config.bundle.clone();
let mut bundle_settings = make_tauri_bundler_settings(bundle_config);
if cfg!(windows) {
let windows_icon_override = dioxus_crate
.dioxus_config
.bundle
.windows
.as_ref()
.map(|w| &w.icon_path);
let windows_icon_override = krate.config.bundle.windows.as_ref().map(|w| &w.icon_path);
if windows_icon_override.is_none() {
let icon_path = bundle_settings
.icon
.as_ref()
.and_then(|icons| icons.first());
let icon_path = if let Some(icon_path) = icon_path {
icon_path.into()
} else {
let path = PathBuf::from("./icons/icon.ico");
// create the icon if it doesn't exist
if !path.exists() {
create_dir_all(path.parent().unwrap()).unwrap();
let mut file = File::create(&path).unwrap();
file.write_all(include_bytes!("../../assets/icon.ico"))
.unwrap();
}
path
if let Some(icon_path) = icon_path {
bundle_settings.icon = Some(vec![icon_path.into()]);
};
bundle_settings.windows.icon_path = icon_path;
}
}
// Copy the assets in the dist directory to the bundle
let static_asset_output_dir = &dioxus_crate.dioxus_config.application.out_dir;
// Make sure the dist directory is relative to the crate directory
let static_asset_output_dir = static_asset_output_dir
.strip_prefix(dioxus_crate.workspace_dir())
.unwrap_or(static_asset_output_dir);
let static_asset_output_dir = static_asset_output_dir.display().to_string();
println!("Adding assets from {} to bundle", static_asset_output_dir);
// Don't copy the executable or the old bundle directory
let ignored_files = [
dioxus_crate.out_dir().join("bundle"),
dioxus_crate.out_dir().join(name),
];
let ignored_files = [krate
.bundle_dir(self.build_arguments.platform())
.join("bundle")];
for entry in std::fs::read_dir(&static_asset_output_dir)?.flatten() {
for entry in std::fs::read_dir(bundle.asset_dir())?.flatten() {
let path = entry.path().canonicalize()?;
if ignored_files.iter().any(|f| path.starts_with(f)) {
continue;
@ -173,14 +97,14 @@ impl Bundle {
}
let mut settings = SettingsBuilder::new()
.project_out_directory(dioxus_crate.out_dir())
.project_out_directory(krate.bundle_dir(self.build_arguments.platform()))
.package_settings(PackageSettings {
product_name: dioxus_crate.dioxus_config.application.name.clone(),
product_name: krate.executable_name().to_string(),
version: package.version.to_string(),
description: package.description.clone().unwrap_or_default(),
homepage: Some(package.homepage.clone().unwrap_or_default()),
authors: Some(package.authors.clone()),
default_run: Some(dioxus_crate.dioxus_config.application.name.clone()),
default_run: Some(krate.executable_name().to_string()),
})
.binaries(binaries)
.bundle_settings(bundle_settings);
@ -188,7 +112,7 @@ impl Bundle {
settings = settings.package_types(packages.iter().map(|p| (*p).into()).collect());
}
if let Some(target) = &self.target_args.target {
if let Some(target) = self.build_arguments.target_args.target.as_ref() {
settings = settings.target(target.to_string());
}
@ -198,7 +122,7 @@ impl Bundle {
#[cfg(target_os = "macos")]
std::env::set_var("CI", "true");
tauri_bundler::bundle::bundle_project(settings.unwrap()).unwrap_or_else(|err|{
tauri_bundler::bundle::bundle_project(&settings.unwrap()).unwrap_or_else(|err|{
#[cfg(target_os = "macos")]
panic!("Failed to bundle project: {:#?}\nMake sure you have automation enabled in your terminal (https://github.com/tauri-apps/tauri/issues/3055#issuecomment-1624389208) and full disk access enabled for your terminal (https://github.com/tauri-apps/tauri/issues/3055#issuecomment-1624389208)", err);
#[cfg(not(target_os = "macos"))]
@ -208,3 +132,21 @@ impl Bundle {
Ok(())
}
}
impl FromStr for PackageType {
type Err = String;
fn from_str(s: &str) -> Result<Self, Self::Err> {
match s {
"macos" => Ok(PackageType::MacOsBundle),
"ios" => Ok(PackageType::IosBundle),
"msi" => Ok(PackageType::WindowsMsi),
"deb" => Ok(PackageType::Deb),
"rpm" => Ok(PackageType::Rpm),
"appimage" => Ok(PackageType::AppImage),
"dmg" => Ok(PackageType::Dmg),
"updater" => Ok(PackageType::Updater),
_ => Err(format!("{} is not a valid package type", s)),
}
}
}


@ -1,29 +1,26 @@
use crate::build::TargetArgs;
use super::*;
use crate::DioxusCrate;
use futures_util::{stream::FuturesUnordered, StreamExt};
use std::{path::Path, process::exit};
use crate::DioxusCrate;
use super::*;
// For reference, the rustfmt main.rs file
// https://github.com/rust-lang/rustfmt/blob/master/src/bin/main.rs
/// Check the Rust files in the project for issues.
#[derive(Clone, Debug, Parser)]
pub struct Check {
pub(crate) struct Check {
/// Input file
#[clap(short, long)]
pub file: Option<PathBuf>,
pub(crate) file: Option<PathBuf>,
/// Information about the target to check
#[clap(flatten)]
pub target_args: TargetArgs,
pub(crate) target_args: TargetArgs,
}
impl Check {
// Todo: check the entire crate
pub async fn check(self) -> Result<()> {
pub(crate) async fn check(self) -> Result<()> {
match self.file {
// Default to checking the project
None => {


@ -1,19 +1,15 @@
use crate::DioxusCrate;
use anyhow::Context;
use build::TargetArgs;
use super::*;
/// Clean build artifacts.
///
/// Simply runs `cargo clean`
#[derive(Clone, Debug, Parser)]
#[clap(name = "clean")]
pub struct Clean {}
pub(crate) struct Clean {}
impl Clean {
pub fn clean(self) -> anyhow::Result<()> {
let dioxus_crate =
DioxusCrate::new(&TargetArgs::default()).context("Failed to load Dioxus workspace")?;
/// todo(jon): we should add a config option that just wipes target/dx and target/dioxus-client instead of doing a full clean
pub(crate) fn clean(self) -> anyhow::Result<()> {
let output = Command::new("cargo")
.arg("clean")
.stdout(Stdio::piped())
@ -24,11 +20,6 @@ impl Clean {
return Err(anyhow::anyhow!("Cargo clean failed."));
}
let out_dir = &dioxus_crate.out_dir();
if out_dir.is_dir() {
remove_dir_all(out_dir)?;
}
Ok(())
}
}


@ -1,13 +1,11 @@
use crate::build::TargetArgs;
use super::*;
use crate::TraceSrc;
use crate::{metadata::crate_root, CliSettings};
use super::*;
/// Dioxus config file controls
#[derive(Clone, Debug, Deserialize, Subcommand)]
#[clap(name = "config")]
pub enum Config {
pub(crate) enum Config {
/// Init `Dioxus.toml` for project/folder.
Init {
/// Init project name
@ -22,8 +20,10 @@ pub enum Config {
#[clap(long, default_value = "web")]
platform: String,
},
/// Format print Dioxus config.
FormatPrint {},
/// Create a custom html file.
CustomHtml {},
@ -36,7 +36,7 @@ pub enum Config {
}
#[derive(Debug, Clone, Copy, Deserialize, Subcommand)]
pub enum Setting {
pub(crate) enum Setting {
/// Set the value of the always-hot-reload setting.
AlwaysHotReload { value: BoolValue },
/// Set the value of the always-open-browser setting.
@ -61,7 +61,7 @@ impl Display for Setting {
// Clap complains if we use a bool directly and I can't find much info about it.
// "Argument 'value` is positional and it must take a value but action is SetTrue"
#[derive(Debug, Clone, Copy, Deserialize, clap::ValueEnum)]
pub enum BoolValue {
pub(crate) enum BoolValue {
True,
False,
}
@ -76,7 +76,7 @@ impl From<BoolValue> for bool {
}
impl Config {
pub fn config(self) -> Result<()> {
pub(crate) fn config(self) -> Result<()> {
let crate_root = crate_root()?;
match self {
Config::Init {
@ -101,13 +101,13 @@ impl Config {
Config::FormatPrint {} => {
println!(
"{:#?}",
crate::dioxus_crate::DioxusCrate::new(&TargetArgs::default())?.dioxus_config
crate::dioxus_crate::DioxusCrate::new(&TargetArgs::default())?.config
);
}
Config::CustomHtml {} => {
let html_path = crate_root.join("index.html");
let mut file = File::create(html_path)?;
let content = include_str!("../../assets/index.html");
let content = include_str!("../../assets/web/index.html");
file.write_all(content.as_bytes())?;
tracing::info!(dx_src = ?TraceSrc::Dev, "🚩 Create custom html file done.");
}


@ -8,7 +8,7 @@ pub(crate) static DEFAULT_TEMPLATE: &str = "gh:dioxuslabs/dioxus-template";
#[derive(Clone, Debug, Default, Deserialize, Parser)]
#[clap(name = "new")]
pub struct Create {
pub(crate) struct Create {
/// Project name (required when `--yes` is used)
name: Option<String>,
@ -39,7 +39,7 @@ pub struct Create {
}
impl Create {
pub fn create(mut self) -> Result<()> {
pub(crate) fn create(mut self) -> Result<()> {
let metadata = cargo_metadata::MetadataCommand::new().exec().ok();
// If we're getting passed a `.` name, that's actually a path
@ -112,7 +112,7 @@ impl Create {
/// Post-creation actions for newly setup crates.
// Also used by `init`.
pub fn post_create(path: &Path, metadata: Option<Metadata>) -> Result<()> {
pub(crate) fn post_create(path: &Path, metadata: Option<Metadata>) -> Result<()> {
// 1. Add the new project to the workspace, if it exists.
// This must be executed first in order to run `cargo fmt` on the new project.
metadata.and_then(|metadata| {


@ -0,0 +1,10 @@
use clap::Parser;
#[derive(Clone, Debug, Parser)]
pub struct Doctor {}
impl Doctor {
pub async fn run(self) -> anyhow::Result<()> {
Ok(())
}
}


@ -4,7 +4,7 @@ use cargo_generate::{GenerateArgs, TemplatePath};
#[derive(Clone, Debug, Default, Deserialize, Parser)]
#[clap(name = "init")]
pub struct Init {
pub(crate) struct Init {
/// Template path
#[clap(default_value = DEFAULT_TEMPLATE, short, long)]
template: String,
@ -24,19 +24,21 @@ pub struct Init {
}
impl Init {
pub fn init(self) -> Result<()> {
pub(crate) fn init(self) -> Result<()> {
let metadata = cargo_metadata::MetadataCommand::new().exec().ok();
// Get directory name.
let name = std::env::current_dir()?
.file_name()
.map(|f| f.to_str().unwrap().to_string());
// https://github.com/console-rs/dialoguer/issues/294
ctrlc::set_handler(move || {
let _ = console::Term::stdout().show_cursor();
std::process::exit(0);
})
.expect("ctrlc::set_handler");
let args = GenerateArgs {
define: self.option,
init: true,


@ -1,39 +1,108 @@
use crate::{assets, error::Result};
use clap::Parser;
use std::{fs, path::PathBuf};
use serde::{Deserialize, Serialize};
use std::path::PathBuf;
#[derive(Clone, Debug, Parser)]
#[clap(name = "link", hide = true)]
pub struct LinkCommand {
// Allow us to accept any argument after `dx link`
#[clap(trailing_var_arg = true, allow_hyphen_values = true)]
pub args: Vec<String>,
use crate::assets::AssetManifest;
#[derive(Debug, Serialize, Deserialize)]
pub enum LinkAction {
BuildAssetManifest {
destination: PathBuf,
},
LinkAndroid {
linker: PathBuf,
extra_flags: Vec<String>,
},
}
impl LinkCommand {
pub fn link(self) -> Result<()> {
let Some((link_args, object_files)) = manganis_cli_support::linker_intercept(self.args)
else {
tracing::warn!("Invalid linker arguments.");
return Ok(());
};
impl LinkAction {
pub(crate) const ENV_VAR_NAME: &'static str = "dx-magic-link-file";
// Parse object files, deserialize JSON, & create a file to propagate JSON.
let json = manganis_cli_support::get_json_from_object_files(object_files);
let parsed = serde_json::to_string(&json).unwrap();
/// Should we write the input arguments to a file (aka act as a linker subprocess)?
///
/// Just check if the magic env var is set
pub(crate) fn from_env() -> Option<Self> {
std::env::var(Self::ENV_VAR_NAME)
.ok()
.map(|var| serde_json::from_str(&var).expect("Failed to parse magic env var"))
}
let out_dir = PathBuf::from(link_args.first().unwrap());
fs::create_dir_all(&out_dir).unwrap();
pub(crate) fn to_json(&self) -> String {
serde_json::to_string(self).unwrap()
}
let path = out_dir.join(assets::MG_JSON_OUT);
fs::write(path, parsed).unwrap();
/// Write the incoming linker args to a file
///
/// The file will be given by the dx-magic-link-file env var itself, so we use
/// it both for determining if we should act as a linker and for the file name itself.
///
/// This will panic if it fails
///
/// TBH I'd rather just pass the object files back and do the parsing here, but the interface
/// is nicer to just bounce back the args and let the host do the parsing/canonicalization
pub(crate) fn run(self) -> anyhow::Result<()> {
match self {
// Literally just run the android linker :)
LinkAction::LinkAndroid {
linker,
extra_flags,
} => {
let mut cmd = std::process::Command::new(linker);
cmd.args(std::env::args().skip(1));
cmd.args(extra_flags);
cmd.stderr(std::process::Stdio::piped())
.stdout(std::process::Stdio::piped())
.status()
.expect("Failed to run android linker");
}
// Assemble an asset manifest by walking the object files being passed to us
LinkAction::BuildAssetManifest { destination: dest } => {
let mut args: Vec<_> = std::env::args().collect();
let mut manifest = AssetManifest::default();
// Handle command files, usually a windows thing.
if let Some(command) = args.iter().find(|arg| arg.starts_with('@')).cloned() {
let path = command.trim().trim_start_matches('@');
let file_binary = std::fs::read(path).unwrap();
// This may be a utf-16le file. Let's try utf-8 first.
let content = String::from_utf8(file_binary.clone()).unwrap_or_else(|_| {
// Convert Vec<u8> to Vec<u16> to convert into a String
let binary_u16le: Vec<u16> = file_binary
.chunks_exact(2)
.map(|a| u16::from_le_bytes([a[0], a[1]]))
.collect();
String::from_utf16_lossy(&binary_u16le)
});
// Gather linker args, and reset the args to be just the linker args
args = content
.lines()
.map(|line| {
let line_parsed = line.to_string();
let line_parsed = line_parsed.trim_end_matches('"').to_string();
let line_parsed = line_parsed.trim_start_matches('"').to_string();
line_parsed
})
.collect();
}
// Parse through linker args for `.o` or `.rlib` files.
for item in args {
if item.ends_with(".o") || item.ends_with(".rlib") {
let path_to_item = PathBuf::from(item);
if let Ok(path) = path_to_item.canonicalize() {
_ = manifest.add_from_object_path(path);
}
}
}
let contents = serde_json::to_string(&manifest).expect("Failed to write manifest");
std::fs::write(dest, contents).expect("Failed to write output file");
}
}
Ok(())
}
/// We need to pass the subcommand name to Manganis so this
/// helps centralize where we set the subcommand "name".
pub fn command_name() -> String {
"link".to_string()
}
}
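
A minimal sketch of how this env-var handshake is presumably wired up on both ends (the call sites are not part of this file, so the function names, flags, and locations below are assumptions):

// Hypothetical: in the CLI entrypoint, before normal argument parsing.
// If the magic env var is set, dx was re-invoked by rustc as the linker.
fn try_link_action() -> Option<anyhow::Result<()>> {
    LinkAction::from_env().map(|action| action.run())
}

// Hypothetical: on the host side, when spawning the build.
fn wire_up_linker(cmd: &mut std::process::Command, manifest_out: std::path::PathBuf) {
    let action = LinkAction::BuildAssetManifest { destination: manifest_out };
    // Smuggle the action through the env var checked by `from_env`...
    cmd.env(LinkAction::ENV_VAR_NAME, action.to_json());
    // ...and point rustc's linker at the current dx executable,
    // e.g. `-Clinker=<path to dx>` (assumed, not shown in this diff).
}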

View file

@ -1,62 +1,62 @@
pub mod autoformat;
pub mod build;
pub mod bundle;
pub mod check;
pub mod clean;
pub mod config;
pub mod create;
pub mod init;
pub mod link;
pub mod serve;
pub mod translate;
pub(crate) mod autoformat;
pub(crate) mod build;
pub(crate) mod bundle;
pub(crate) mod check;
pub(crate) mod clean;
pub(crate) mod config;
pub(crate) mod create;
pub(crate) mod doctor;
pub(crate) mod init;
pub(crate) mod link;
pub(crate) mod run;
pub(crate) mod serve;
pub(crate) mod target;
pub(crate) mod translate;
use crate::{custom_error, error::Result, Error};
pub(crate) use build::*;
pub(crate) use serve::*;
pub(crate) use target::*;
use crate::{error::Result, Error};
use anyhow::Context;
use clap::{Parser, Subcommand};
use html_parser::Dom;
use once_cell::sync::Lazy;
use serde::Deserialize;
use std::{
fmt::Display,
fs::{remove_dir_all, File},
fs::File,
io::{Read, Write},
path::PathBuf,
process::{Command, Stdio},
};
pub static VERSION: Lazy<String> = Lazy::new(|| {
format!(
"{} ({})",
crate::dx_build_info::PKG_VERSION,
crate::dx_build_info::GIT_COMMIT_HASH_SHORT.unwrap_or("was built without git repository")
)
});
/// Build, Bundle & Ship Dioxus Apps.
#[derive(Parser)]
#[clap(name = "dioxus", version = VERSION.as_str())]
pub struct Cli {
pub(crate) struct Cli {
#[clap(subcommand)]
pub action: Commands,
pub(crate) action: Commands,
/// Enable verbose logging.
#[clap(short)]
pub v: bool,
pub(crate) v: bool,
/// Specify a binary target.
#[clap(global = true, long)]
pub bin: Option<String>,
pub(crate) bin: Option<String>,
}
#[derive(Parser)]
pub enum Commands {
pub(crate) enum Commands {
/// Build the Dioxus project and all of its assets.
Build(build::Build),
Build(build::BuildArgs),
/// Translate a source file into Dioxus code.
Translate(translate::Translate),
/// Build, watch & serve the Dioxus project and all of its assets.
Serve(serve::Serve),
Serve(serve::ServeArgs),
/// Create a new project for Dioxus.
New(create::Create),
@ -79,13 +79,17 @@ pub enum Commands {
#[clap(name = "check")]
Check(check::Check),
/// Run the project without any hotreloading
#[clap(name = "run")]
Run(run::RunArgs),
/// Ensure all the tooling is installed and configured correctly
#[clap(name = "doctor")]
Doctor(doctor::Doctor),
/// Dioxus config file controls.
#[clap(subcommand)]
Config(config::Config),
/// Handles parsing of linker arguments for linker-based systems
/// such as Manganis and binary patching.
Link(link::LinkCommand),
}
impl Display for Commands {
@ -101,7 +105,16 @@ impl Display for Commands {
Commands::Autoformat(_) => write!(f, "fmt"),
Commands::Check(_) => write!(f, "check"),
Commands::Bundle(_) => write!(f, "bundle"),
Commands::Link(_) => write!(f, "link"),
Commands::Run(_) => write!(f, "run"),
Commands::Doctor(_) => write!(f, "doctor"),
}
}
}
pub(crate) static VERSION: Lazy<String> = Lazy::new(|| {
format!(
"{} ({})",
crate::dx_build_info::PKG_VERSION,
crate::dx_build_info::GIT_COMMIT_HASH_SHORT.unwrap_or("was built without git repository")
)
});

View file

@ -0,0 +1,60 @@
use super::*;
use crate::{serve::ServeUpdate, BuildArgs, Builder, DioxusCrate};
/// Run the project with the given arguments
#[derive(Clone, Debug, Parser)]
pub(crate) struct RunArgs {
/// Information about the target to build
#[clap(flatten)]
pub(crate) build_args: BuildArgs,
}
impl RunArgs {
pub(crate) async fn run(mut self) -> anyhow::Result<()> {
let krate = DioxusCrate::new(&self.build_args.target_args)
.context("Failed to load Dioxus workspace")?;
self.build_args.resolve(&krate)?;
println!("Building crate krate data: {:#?}", krate);
println!("Build args: {:#?}", self.build_args);
let bundle = Builder::start(&krate, self.build_args.clone())?
.finish()
.await?;
let devserver_ip = "127.0.0.1:8080".parse().unwrap();
let fullstack_ip = "127.0.0.1:8081".parse().unwrap();
let mut runner = crate::serve::AppRunner::start(&krate);
runner
.open(bundle, devserver_ip, Some(fullstack_ip), true)
.await?;
// Run the app, but mostly ignore all the other messages
// They won't generally be emitted
loop {
match runner.wait().await {
ServeUpdate::StderrReceived { platform, msg } => println!("[{platform}]: {msg}"),
ServeUpdate::StdoutReceived { platform, msg } => println!("[{platform}]: {msg}"),
ServeUpdate::ProcessExited { platform, status } => {
runner.kill(platform);
eprintln!("[{platform}]: process exited with status: {status:?}");
break;
}
ServeUpdate::BuildUpdate { .. } => {}
ServeUpdate::TracingLog { .. } => {}
ServeUpdate::Exit { .. } => break,
ServeUpdate::NewConnection => {}
ServeUpdate::WsMessage(_) => {}
ServeUpdate::FilesChanged { .. } => {}
ServeUpdate::RequestRebuild => {}
ServeUpdate::Redraw => {}
ServeUpdate::OpenApp => {}
ServeUpdate::ToggleShouldRebuild => {}
}
}
Ok(())
}
}

View file

@ -1,120 +1,108 @@
use crate::config::AddressArguments;
use crate::{settings, tracer::CLILogControl, DioxusCrate};
use anyhow::Context;
use build::Build;
use std::ops::Deref;
use super::*;
use crate::{AddressArguments, BuildArgs, DioxusCrate, Platform};
use anyhow::Context;
/// Arguments for the serve command
#[derive(Clone, Debug, Parser, Default)]
pub struct ServeArguments {
/// Serve the project
#[derive(Clone, Debug, Default, Parser)]
#[command(group = clap::ArgGroup::new("release-incompatible").multiple(true).conflicts_with("release"))]
#[clap(name = "serve")]
pub(crate) struct ServeArgs {
/// The arguments for the address the server will run on
#[clap(flatten)]
pub address: AddressArguments,
pub(crate) address: AddressArguments,
/// Open the app in the default browser [default: true - unless cli settings are set]
#[arg(long, default_missing_value="true", num_args=0..=1)]
pub open: Option<bool>,
pub(crate) open: Option<bool>,
/// Enable full hot reloading for the app [default: true - unless cli settings are set]
#[clap(long, group = "release-incompatible")]
pub hot_reload: Option<bool>,
pub(crate) hot_reload: Option<bool>,
/// Configure always-on-top for desktop apps [default: true - unless cli settings are set]
#[clap(long, default_missing_value = "true")]
pub always_on_top: Option<bool>,
pub(crate) always_on_top: Option<bool>,
/// Set cross-origin-policy to same-origin [default: false]
#[clap(name = "cross-origin-policy")]
#[clap(long)]
pub cross_origin_policy: bool,
pub(crate) cross_origin_policy: bool,
/// Additional arguments to pass to the executable
#[clap(long)]
pub args: Vec<String>,
pub(crate) args: Vec<String>,
/// Sets the interval in seconds that the CLI will poll for file changes on WSL.
#[clap(long, default_missing_value = "2")]
pub wsl_file_poll_interval: Option<u16>,
}
/// Run the WASM project on dev-server
#[derive(Clone, Debug, Default, Parser)]
#[command(group = clap::ArgGroup::new("release-incompatible").multiple(true).conflicts_with("release"))]
#[clap(name = "serve")]
pub struct Serve {
/// Arguments for the serve command
#[clap(flatten)]
pub(crate) server_arguments: ServeArguments,
/// Arguments for the dioxus build
#[clap(flatten)]
pub(crate) build_arguments: Build,
pub(crate) wsl_file_poll_interval: Option<u16>,
/// Run the server in interactive mode
#[arg(long, default_missing_value="true", num_args=0..=1, short = 'i')]
pub interactive: Option<bool>,
pub(crate) interactive: Option<bool>,
/// Arguments for the build itself
#[clap(flatten)]
pub(crate) build_arguments: BuildArgs,
}
impl Serve {
/// Resolve the serve arguments from the arguments or the config
fn resolve(&mut self, crate_config: &mut DioxusCrate) -> Result<()> {
// Set config settings.
let settings = settings::CliSettings::load();
impl ServeArgs {
/// Start the tui, builder, etc by resolving the arguments and then running the actual top-level serve function
pub(crate) async fn serve(mut self) -> Result<()> {
let krate = DioxusCrate::new(&self.build_arguments.target_args)
.context("Failed to load Dioxus workspace")?;
// Enable hot reload.
if self.server_arguments.hot_reload.is_none() {
self.server_arguments.hot_reload = Some(settings.always_hot_reload.unwrap_or(true));
if self.hot_reload.is_none() {
self.hot_reload = Some(krate.settings.always_hot_reload.unwrap_or(true));
}
// Open browser.
if self.server_arguments.open.is_none() {
self.server_arguments.open = Some(settings.always_open_browser.unwrap_or_default());
if self.open.is_none() {
self.open = Some(krate.settings.always_open_browser.unwrap_or_default());
}
// Set WSL file poll interval.
if self.server_arguments.wsl_file_poll_interval.is_none() {
self.server_arguments.wsl_file_poll_interval =
Some(settings.wsl_file_poll_interval.unwrap_or(2));
if self.wsl_file_poll_interval.is_none() {
self.wsl_file_poll_interval = Some(krate.settings.wsl_file_poll_interval.unwrap_or(2));
}
// Set always-on-top for desktop.
if self.server_arguments.always_on_top.is_none() {
self.server_arguments.always_on_top = Some(settings.always_on_top.unwrap_or(true))
if self.always_on_top.is_none() {
self.always_on_top = Some(krate.settings.always_on_top.unwrap_or(true))
}
crate_config.dioxus_config.desktop.always_on_top =
self.server_arguments.always_on_top.unwrap_or(true);
// Resolve the build arguments
self.build_arguments.resolve(crate_config)?;
self.build_arguments.resolve(&krate)?;
// Since this is a serve, adjust the outdir to be target/dx-dist/<crate name>
let mut dist_dir = crate_config.workspace_dir().join("target").join("dx-dist");
// Give us some space before we start printing things...
println!();
if crate_config.target.is_example() {
dist_dir = dist_dir.join("examples");
crate::serve::serve_all(self, krate).await
}
crate_config.dioxus_config.application.out_dir =
dist_dir.join(crate_config.executable_name());
Ok(())
pub(crate) fn should_hotreload(&self) -> bool {
self.hot_reload.unwrap_or(true)
}
pub async fn serve(mut self, log_control: CLILogControl) -> anyhow::Result<()> {
let mut dioxus_crate = DioxusCrate::new(&self.build_arguments.target_args)
.context("Failed to load Dioxus workspace")?;
pub(crate) fn build_args(&self) -> BuildArgs {
self.build_arguments.clone()
}
self.resolve(&mut dioxus_crate)?;
pub(crate) fn is_interactive_tty(&self) -> bool {
use crossterm::tty::IsTty;
std::io::stdout().is_tty() && self.interactive.unwrap_or(true)
}
crate::serve::serve_all(self, dioxus_crate, log_control).await?;
Ok(())
pub(crate) fn should_proxy_build(&self) -> bool {
match self.build_arguments.platform() {
Platform::Server => true,
_ => self.build_arguments.fullstack,
}
}
}
impl Deref for Serve {
type Target = Build;
impl std::ops::Deref for ServeArgs {
type Target = BuildArgs;
fn deref(&self) -> &Self::Target {
&self.build_arguments

View file

@ -0,0 +1,52 @@
use super::*;
/// Information about the target to build
#[derive(Clone, Debug, Default, Deserialize, Parser)]
pub(crate) struct TargetArgs {
/// Build for nightly [default: false]
#[clap(long)]
pub(crate) nightly: bool,
/// Build an example [default: ""]
#[clap(long)]
pub(crate) example: Option<String>,
/// Build a binary [default: ""]
#[clap(long)]
pub(crate) bin: Option<String>,
/// The package to build
#[clap(short, long)]
pub(crate) package: Option<String>,
/// Space separated list of features to activate
#[clap(long)]
pub(crate) features: Vec<String>,
/// The feature to use for the client in a fullstack app [default: "web"]
#[clap(long)]
pub(crate) client_features: Vec<String>,
/// The feature to use for the server in a fullstack app [default: "server"]
#[clap(long)]
pub(crate) server_features: Vec<String>,
/// Don't include the default features in the build
#[clap(long)]
pub(crate) no_default_features: bool,
/// The architecture to build for [default: "native"]
///
/// Can either be `arm | arm64 | x86 | x86_64 | native`
#[clap(long)]
pub(crate) arch: Option<String>,
/// Are we building for a device or just the simulator?
/// If device is false, then we'll build for the simulator
#[clap(long)]
pub(crate) device: Option<bool>,
/// Rustc platform triple
#[clap(long)]
pub(crate) target: Option<String>,
}

View file

@ -1,35 +1,32 @@
use std::{io::IsTerminal as _, process::exit};
use dioxus_rsx::{BodyNode, CallBody, TemplateBody};
use crate::TraceSrc;
use super::*;
use crate::{Result, TraceSrc};
use dioxus_rsx::{BodyNode, CallBody, TemplateBody};
use std::{io::IsTerminal as _, process::exit};
/// Translate some source file into Dioxus code
#[derive(Clone, Debug, Parser)]
#[clap(name = "translate")]
pub struct Translate {
pub(crate) struct Translate {
/// Activate debug mode
// short and long flags (-d, --debug) will be deduced from the field's name
#[clap(short, long)]
pub component: bool,
pub(crate) component: bool,
/// Input file
#[clap(short, long)]
pub file: Option<String>,
pub(crate) file: Option<String>,
/// Input file
#[clap(short, long)]
pub raw: Option<String>,
pub(crate) raw: Option<String>,
/// Output file, stdout if not present
#[arg(short, long)]
pub output: Option<PathBuf>,
pub(crate) output: Option<PathBuf>,
}
impl Translate {
pub fn translate(self) -> Result<()> {
pub(crate) fn translate(self) -> Result<()> {
// Get the right input for the translation
let contents = determine_input(self.file, self.raw)?;
@ -85,7 +82,7 @@ fn write_svg_section(out: &mut String, svgs: Vec<BodyNode>) {
for (idx, icon) in svgs.into_iter().enumerate() {
let raw =
dioxus_autofmt::write_block_out(&CallBody::new(TemplateBody::new(vec![icon]))).unwrap();
out.push_str("\n\n pub fn icon_");
out.push_str("\n\n pub(crate) fn icon_");
out.push_str(&idx.to_string());
out.push_str("() -> Element {\n rsx! {");
indent_and_write(&raw, 2, out);
@ -122,7 +119,7 @@ fn determine_input(file: Option<String>, raw: Option<String>) -> Result<String>
// If neither exist, we try to read from stdin
if std::io::stdin().is_terminal() {
return custom_error!("No input file, source, or stdin to translate from.");
return Err(anyhow::anyhow!("No input file, source, or stdin to translate from.").into());
}
let mut buffer = String::new();

View file

@ -1,678 +1,13 @@
use clap::Parser;
use serde::{Deserialize, Serialize};
use std::collections::HashMap;
use std::fmt::Display;
use std::net::{IpAddr, SocketAddr};
use std::path::PathBuf;
use std::str::FromStr;
#[derive(
Copy,
Clone,
Hash,
PartialEq,
Eq,
PartialOrd,
Ord,
Serialize,
Deserialize,
Debug,
Default,
clap::ValueEnum,
)]
#[non_exhaustive]
pub enum Platform {
/// Targeting the web platform using WASM
#[clap(name = "web")]
#[serde(rename = "web")]
#[default]
Web,
/// Targeting the desktop platform using Tao/Wry-based webview
#[clap(name = "desktop")]
#[serde(rename = "desktop")]
Desktop,
/// Targeting the server platform using Axum and Dioxus-Fullstack
#[clap(name = "fullstack")]
#[serde(rename = "fullstack")]
Fullstack,
/// Targeting the static generation platform using SSR and Dioxus-Fullstack
#[clap(name = "static-generation")]
#[serde(rename = "static-generation")]
StaticGeneration,
/// Targeting the static generation platform using SSR and Dioxus-Fullstack
#[clap(name = "liveview")]
#[serde(rename = "liveview")]
Liveview,
}
/// An error that occurs when a platform is not recognized
pub struct UnknownPlatformError;
impl std::error::Error for UnknownPlatformError {}
impl std::fmt::Debug for UnknownPlatformError {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
write!(f, "Unknown platform")
}
}
impl std::fmt::Display for UnknownPlatformError {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
write!(f, "Unknown platform")
}
}
impl FromStr for Platform {
type Err = UnknownPlatformError;
fn from_str(s: &str) -> Result<Self, Self::Err> {
match s {
"web" => Ok(Self::Web),
"desktop" => Ok(Self::Desktop),
"fullstack" => Ok(Self::Fullstack),
"static-generation" => Ok(Self::StaticGeneration),
"liveview" => Ok(Self::Liveview),
_ => Err(UnknownPlatformError),
}
}
}
impl Display for Platform {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
let feature = self.feature_name();
f.write_str(feature)
}
}
impl Platform {
/// All platforms the dioxus CLI supports
pub const ALL: &'static [Self] = &[
Platform::Web,
Platform::Desktop,
Platform::Fullstack,
Platform::StaticGeneration,
];
/// Get the feature name for the platform in the dioxus crate
pub fn feature_name(&self) -> &str {
match self {
Platform::Web => "web",
Platform::Desktop => "desktop",
Platform::Fullstack => "fullstack",
Platform::StaticGeneration => "static-generation",
Platform::Liveview => "liveview",
}
}
}
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct DioxusConfig {
pub application: ApplicationConfig,
#[serde(default)]
pub web: WebConfig,
#[serde(default)]
pub desktop: DesktopConfig,
#[serde(default)]
pub bundle: BundleConfig,
}
impl Default for DioxusConfig {
fn default() -> Self {
let name = default_name();
Self {
application: ApplicationConfig {
name: name.clone(),
default_platform: default_platform(),
out_dir: out_dir_default(),
asset_dir: asset_dir_default(),
sub_package: None,
},
web: WebConfig {
app: WebAppConfig {
title: default_title(),
base_path: None,
},
proxy: vec![],
watcher: Default::default(),
resource: WebResourceConfig {
dev: WebDevResourceConfig {
style: vec![],
script: vec![],
},
style: Some(vec![]),
script: Some(vec![]),
},
https: WebHttpsConfig {
enabled: None,
mkcert: None,
key_path: None,
cert_path: None,
},
pre_compress: true,
wasm_opt: Default::default(),
},
desktop: DesktopConfig::default(),
bundle: BundleConfig {
identifier: Some(format!("io.github.{name}")),
publisher: Some(name),
..Default::default()
},
}
}
}
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct ApplicationConfig {
#[serde(default = "default_name")]
pub name: String,
#[serde(default = "default_platform")]
pub default_platform: Platform,
#[serde(default = "out_dir_default")]
pub out_dir: PathBuf,
#[serde(default = "asset_dir_default")]
pub asset_dir: PathBuf,
#[serde(default)]
pub sub_package: Option<String>,
}
fn default_name() -> String {
"my-cool-project".into()
}
fn default_platform() -> Platform {
Platform::Web
}
fn asset_dir_default() -> PathBuf {
PathBuf::from("public")
}
fn out_dir_default() -> PathBuf {
PathBuf::from("dist")
}
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct WebConfig {
#[serde(default)]
pub app: WebAppConfig,
#[serde(default)]
pub proxy: Vec<WebProxyConfig>,
#[serde(default)]
pub watcher: WebWatcherConfig,
#[serde(default)]
pub resource: WebResourceConfig,
#[serde(default)]
pub https: WebHttpsConfig,
/// Whether to enable pre-compression of assets and wasm during a web build in release mode
#[serde(default = "true_bool")]
pub pre_compress: bool,
/// The wasm-opt configuration
#[serde(default)]
pub wasm_opt: WasmOptConfig,
}
impl Default for WebConfig {
fn default() -> Self {
Self {
pre_compress: true_bool(),
app: Default::default(),
https: Default::default(),
wasm_opt: Default::default(),
proxy: Default::default(),
watcher: Default::default(),
resource: Default::default(),
}
}
}
/// Represents configuration items for the desktop platform.
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct DesktopConfig {
/// Describes whether a debug-mode desktop app should be always-on-top.
#[serde(default)]
pub always_on_top: bool,
}
impl Default for DesktopConfig {
fn default() -> Self {
Self {
always_on_top: true,
}
}
}
/// The wasm-opt configuration
#[derive(Debug, Clone, Serialize, Deserialize, Default)]
pub struct WasmOptConfig {
/// The wasm-opt level to use for release builds [default: s]
/// Options:
/// - z: optimize aggressively for size
/// - s: optimize for size
/// - 1: optimize for speed
/// - 2: optimize for more for speed
/// - 3: optimize for even more for speed
/// - 4: optimize aggressively for speed
#[serde(default)]
pub level: WasmOptLevel,
/// Keep debug symbols in the wasm file
#[serde(default = "false_bool")]
pub debug: bool,
}
/// The wasm-opt level to use for release web builds [default: 4]
#[derive(Default, Debug, Copy, Clone, Serialize, Deserialize)]
pub enum WasmOptLevel {
/// Optimize aggressively for size
#[serde(rename = "z")]
Z,
/// Optimize for size
#[serde(rename = "s")]
S,
/// Don't optimize
#[serde(rename = "0")]
Zero,
/// Optimize for speed
#[serde(rename = "1")]
One,
/// Optimize for more for speed
#[serde(rename = "2")]
Two,
/// Optimize for even more for speed
#[serde(rename = "3")]
Three,
/// Optimize aggressively for speed
#[serde(rename = "4")]
#[default]
Four,
}
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct WebAppConfig {
#[serde(default = "default_title")]
pub title: String,
pub base_path: Option<String>,
}
impl WebAppConfig {
/// Get the normalized base path for the application with `/` trimmed from both ends. If the base path is not set, this will return `.`.
pub fn base_path(&self) -> &str {
match &self.base_path {
Some(path) => path.trim_matches('/'),
None => ".",
}
}
}
impl Default for WebAppConfig {
fn default() -> Self {
Self {
title: default_title(),
base_path: None,
}
}
}
fn default_title() -> String {
"dioxus | ⛺".into()
}
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct WebProxyConfig {
pub backend: String,
}
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct WebWatcherConfig {
#[serde(default = "watch_path_default")]
pub watch_path: Vec<PathBuf>,
#[serde(default)]
pub reload_html: bool,
#[serde(default = "true_bool")]
pub index_on_404: bool,
}
impl Default for WebWatcherConfig {
fn default() -> Self {
Self {
watch_path: watch_path_default(),
reload_html: false,
index_on_404: true,
}
}
}
fn watch_path_default() -> Vec<PathBuf> {
vec![PathBuf::from("src"), PathBuf::from("examples")]
}
#[derive(Default, Debug, Clone, Serialize, Deserialize)]
pub struct WebResourceConfig {
pub dev: WebDevResourceConfig,
pub style: Option<Vec<PathBuf>>,
pub script: Option<Vec<PathBuf>>,
}
#[derive(Default, Debug, Clone, Serialize, Deserialize)]
pub struct WebDevResourceConfig {
#[serde(default)]
pub style: Vec<PathBuf>,
#[serde(default)]
pub script: Vec<PathBuf>,
}
#[derive(Debug, Default, Clone, Serialize, Deserialize)]
pub struct WebHttpsConfig {
pub enabled: Option<bool>,
pub mkcert: Option<bool>,
pub key_path: Option<String>,
pub cert_path: Option<String>,
}
fn true_bool() -> bool {
true
}
fn false_bool() -> bool {
false
}
#[derive(Debug, Clone, Serialize, Deserialize, Default)]
pub struct BundleConfig {
pub identifier: Option<String>,
pub publisher: Option<String>,
pub icon: Option<Vec<String>>,
pub resources: Option<Vec<String>>,
pub copyright: Option<String>,
pub category: Option<String>,
pub short_description: Option<String>,
pub long_description: Option<String>,
pub external_bin: Option<Vec<String>>,
pub deb: Option<DebianSettings>,
pub macos: Option<MacOsSettings>,
pub windows: Option<WindowsSettings>,
}
impl From<BundleConfig> for tauri_bundler::BundleSettings {
fn from(val: BundleConfig) -> Self {
tauri_bundler::BundleSettings {
identifier: val.identifier,
publisher: val.publisher,
icon: val.icon,
resources: val.resources,
copyright: val.copyright,
category: val.category.and_then(|c| c.parse().ok()),
short_description: val.short_description,
long_description: val.long_description,
external_bin: val.external_bin,
deb: val.deb.map(Into::into).unwrap_or_default(),
macos: val.macos.map(Into::into).unwrap_or_default(),
windows: val.windows.map(Into::into).unwrap_or_default(),
..Default::default()
}
}
}
#[derive(Debug, Clone, Serialize, Deserialize, Default)]
pub struct DebianSettings {
pub depends: Option<Vec<String>>,
pub files: HashMap<PathBuf, PathBuf>,
pub nsis: Option<NsisSettings>,
}
impl From<DebianSettings> for tauri_bundler::DebianSettings {
fn from(val: DebianSettings) -> Self {
tauri_bundler::DebianSettings {
depends: val.depends,
files: val.files,
desktop_template: None,
}
}
}
#[derive(Debug, Clone, Serialize, Deserialize, Default)]
pub struct WixSettings {
pub language: Vec<(String, Option<PathBuf>)>,
pub template: Option<PathBuf>,
pub fragment_paths: Vec<PathBuf>,
pub component_group_refs: Vec<String>,
pub component_refs: Vec<String>,
pub feature_group_refs: Vec<String>,
pub feature_refs: Vec<String>,
pub merge_refs: Vec<String>,
pub skip_webview_install: bool,
pub license: Option<PathBuf>,
pub enable_elevated_update_task: bool,
pub banner_path: Option<PathBuf>,
pub dialog_image_path: Option<PathBuf>,
pub fips_compliant: bool,
}
impl From<WixSettings> for tauri_bundler::WixSettings {
fn from(val: WixSettings) -> Self {
tauri_bundler::WixSettings {
language: tauri_bundler::bundle::WixLanguage({
let mut languages: Vec<_> = val
.language
.iter()
.map(|l| {
(
l.0.clone(),
tauri_bundler::bundle::WixLanguageConfig {
locale_path: l.1.clone(),
},
)
})
.collect();
if languages.is_empty() {
languages.push(("en-US".into(), Default::default()));
}
languages
}),
template: val.template,
fragment_paths: val.fragment_paths,
component_group_refs: val.component_group_refs,
component_refs: val.component_refs,
feature_group_refs: val.feature_group_refs,
feature_refs: val.feature_refs,
merge_refs: val.merge_refs,
skip_webview_install: val.skip_webview_install,
license: val.license,
enable_elevated_update_task: val.enable_elevated_update_task,
banner_path: val.banner_path,
dialog_image_path: val.dialog_image_path,
fips_compliant: val.fips_compliant,
}
}
}
#[derive(Debug, Clone, Serialize, Deserialize, Default)]
pub struct MacOsSettings {
pub frameworks: Option<Vec<String>>,
pub minimum_system_version: Option<String>,
pub license: Option<String>,
pub exception_domain: Option<String>,
pub signing_identity: Option<String>,
pub provider_short_name: Option<String>,
pub entitlements: Option<String>,
pub info_plist_path: Option<PathBuf>,
}
impl From<MacOsSettings> for tauri_bundler::MacOsSettings {
fn from(val: MacOsSettings) -> Self {
tauri_bundler::MacOsSettings {
frameworks: val.frameworks,
minimum_system_version: val.minimum_system_version,
license: val.license,
exception_domain: val.exception_domain,
signing_identity: val.signing_identity,
provider_short_name: val.provider_short_name,
entitlements: val.entitlements,
info_plist_path: val.info_plist_path,
}
}
}
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct WindowsSettings {
pub digest_algorithm: Option<String>,
pub certificate_thumbprint: Option<String>,
pub timestamp_url: Option<String>,
pub tsp: bool,
pub wix: Option<WixSettings>,
pub icon_path: Option<PathBuf>,
pub webview_install_mode: WebviewInstallMode,
pub webview_fixed_runtime_path: Option<PathBuf>,
pub allow_downgrades: bool,
pub nsis: Option<NsisSettings>,
}
impl From<WindowsSettings> for tauri_bundler::WindowsSettings {
fn from(val: WindowsSettings) -> Self {
tauri_bundler::WindowsSettings {
digest_algorithm: val.digest_algorithm,
certificate_thumbprint: val.certificate_thumbprint,
timestamp_url: val.timestamp_url,
tsp: val.tsp,
wix: val.wix.map(Into::into),
icon_path: val.icon_path.unwrap_or("icons/icon.ico".into()),
webview_install_mode: val.webview_install_mode.into(),
webview_fixed_runtime_path: val.webview_fixed_runtime_path,
allow_downgrades: val.allow_downgrades,
nsis: val.nsis.map(Into::into),
}
}
}
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct NsisSettings {
pub template: Option<PathBuf>,
pub license: Option<PathBuf>,
pub header_image: Option<PathBuf>,
pub sidebar_image: Option<PathBuf>,
pub installer_icon: Option<PathBuf>,
pub install_mode: NSISInstallerMode,
pub languages: Option<Vec<String>>,
pub custom_language_files: Option<HashMap<String, PathBuf>>,
pub display_language_selector: bool,
}
impl From<NsisSettings> for tauri_bundler::NsisSettings {
fn from(val: NsisSettings) -> Self {
tauri_bundler::NsisSettings {
license: val.license,
header_image: val.header_image,
sidebar_image: val.sidebar_image,
installer_icon: val.installer_icon,
install_mode: val.install_mode.into(),
languages: val.languages,
display_language_selector: val.display_language_selector,
custom_language_files: None,
template: None,
compression: None,
}
}
}
#[derive(Debug, Clone, Serialize, Deserialize)]
pub enum NSISInstallerMode {
CurrentUser,
PerMachine,
Both,
}
impl From<NSISInstallerMode> for tauri_utils::config::NSISInstallerMode {
fn from(val: NSISInstallerMode) -> Self {
match val {
NSISInstallerMode::CurrentUser => tauri_utils::config::NSISInstallerMode::CurrentUser,
NSISInstallerMode::PerMachine => tauri_utils::config::NSISInstallerMode::PerMachine,
NSISInstallerMode::Both => tauri_utils::config::NSISInstallerMode::Both,
}
}
}
#[derive(Debug, Clone, Serialize, Deserialize)]
pub enum WebviewInstallMode {
Skip,
DownloadBootstrapper { silent: bool },
EmbedBootstrapper { silent: bool },
OfflineInstaller { silent: bool },
FixedRuntime { path: PathBuf },
}
impl WebviewInstallMode {
fn into(self) -> tauri_utils::config::WebviewInstallMode {
match self {
Self::Skip => tauri_utils::config::WebviewInstallMode::Skip,
Self::DownloadBootstrapper { silent } => {
tauri_utils::config::WebviewInstallMode::DownloadBootstrapper { silent }
}
Self::EmbedBootstrapper { silent } => {
tauri_utils::config::WebviewInstallMode::EmbedBootstrapper { silent }
}
Self::OfflineInstaller { silent } => {
tauri_utils::config::WebviewInstallMode::OfflineInstaller { silent }
}
Self::FixedRuntime { path } => {
tauri_utils::config::WebviewInstallMode::FixedRuntime { path }
}
}
}
}
impl Default for WebviewInstallMode {
fn default() -> Self {
Self::OfflineInstaller { silent: false }
}
}
/// The arguments for the address the server will run on
#[derive(Clone, Debug, Parser)]
pub struct AddressArguments {
/// The port the server will run on
#[clap(long)]
#[clap(default_value_t = default_port())]
pub port: u16,
/// The address the server will run on
#[clap(long, default_value_t = default_address())]
pub addr: std::net::IpAddr,
}
impl Default for AddressArguments {
fn default() -> Self {
Self {
port: default_port(),
addr: default_address(),
}
}
}
impl AddressArguments {
/// Get the address the server should run on
pub fn address(&self) -> SocketAddr {
SocketAddr::new(self.addr, self.port)
}
}
fn default_port() -> u16 {
8080
}
fn default_address() -> IpAddr {
IpAddr::V4(std::net::Ipv4Addr::new(127, 0, 0, 1))
}
mod app;
mod bundle;
mod desktop;
mod dioxus_config;
mod serve;
mod web;
pub(crate) use app::*;
pub(crate) use bundle::*;
pub(crate) use desktop::*;
pub(crate) use dioxus_config::*;
pub(crate) use serve::*;
pub(crate) use web::*;

View file

@ -0,0 +1,23 @@
use crate::Platform;
use serde::{Deserialize, Serialize};
use std::path::PathBuf;
#[derive(Debug, Clone, Serialize, Deserialize)]
pub(crate) struct ApplicationConfig {
#[serde(default = "default_platform")]
pub(crate) default_platform: Platform,
#[serde(default = "asset_dir_default")]
pub(crate) asset_dir: PathBuf,
#[serde(default)]
pub(crate) sub_package: Option<String>,
}
pub(crate) fn default_platform() -> Platform {
Platform::Web
}
pub(crate) fn asset_dir_default() -> PathBuf {
PathBuf::from("assets")
}

View file

@ -0,0 +1,204 @@
use serde::{Deserialize, Serialize};
use std::collections::HashMap;
use std::path::PathBuf;
#[derive(Debug, Clone, Serialize, Deserialize, Default)]
pub(crate) struct BundleConfig {
pub(crate) identifier: Option<String>,
pub(crate) publisher: Option<String>,
pub(crate) icon: Option<Vec<String>>,
pub(crate) resources: Option<Vec<String>>,
pub(crate) copyright: Option<String>,
pub(crate) category: Option<String>,
pub(crate) short_description: Option<String>,
pub(crate) long_description: Option<String>,
pub(crate) external_bin: Option<Vec<String>>,
pub(crate) deb: Option<DebianSettings>,
pub(crate) macos: Option<MacOsSettings>,
pub(crate) windows: Option<WindowsSettings>,
}
#[derive(Debug, Clone, Serialize, Deserialize, Default)]
pub(crate) struct DebianSettings {
// OS-specific settings:
/// the list of debian dependencies.
pub depends: Option<Vec<String>>,
/// the list of dependencies the package provides.
pub provides: Option<Vec<String>>,
/// the list of package conflicts.
pub conflicts: Option<Vec<String>>,
/// the list of package replaces.
pub replaces: Option<Vec<String>>,
/// List of custom files to add to the deb package.
/// Maps the path on the debian package to the path of the file to include (relative to the current working directory).
pub files: HashMap<PathBuf, PathBuf>,
/// Path to a custom desktop file Handlebars template.
///
/// Available variables: `categories`, `comment` (optional), `exec`, `icon` and `name`.
pub desktop_template: Option<PathBuf>,
/// Define the section in Debian Control file. See : <https://www.debian.org/doc/debian-policy/ch-archive.html#s-subsections>
pub section: Option<String>,
/// Change the priority of the Debian Package. By default, it is set to `optional`.
/// Recognized priorities as of now are: `required`, `important`, `standard`, `optional`, `extra`
pub priority: Option<String>,
/// Path of the uncompressed Changelog file, to be stored at /usr/share/doc/package-name/changelog.gz. See
/// <https://www.debian.org/doc/debian-policy/ch-docs.html#changelog-files-and-release-notes>
pub changelog: Option<PathBuf>,
/// Path to script that will be executed before the package is unpacked. See
/// <https://www.debian.org/doc/debian-policy/ch-maintainerscripts.html>
pub pre_install_script: Option<PathBuf>,
/// Path to script that will be executed after the package is unpacked. See
/// <https://www.debian.org/doc/debian-policy/ch-maintainerscripts.html>
pub post_install_script: Option<PathBuf>,
/// Path to script that will be executed before the package is removed. See
/// <https://www.debian.org/doc/debian-policy/ch-maintainerscripts.html>
pub pre_remove_script: Option<PathBuf>,
/// Path to script that will be executed after the package is removed. See
/// <https://www.debian.org/doc/debian-policy/ch-maintainerscripts.html>
pub post_remove_script: Option<PathBuf>,
}
#[derive(Debug, Clone, Serialize, Deserialize, Default)]
pub(crate) struct WixSettings {
pub(crate) language: Vec<(String, Option<PathBuf>)>,
pub(crate) template: Option<PathBuf>,
pub(crate) fragment_paths: Vec<PathBuf>,
pub(crate) component_group_refs: Vec<String>,
pub(crate) component_refs: Vec<String>,
pub(crate) feature_group_refs: Vec<String>,
pub(crate) feature_refs: Vec<String>,
pub(crate) merge_refs: Vec<String>,
pub(crate) skip_webview_install: bool,
pub(crate) license: Option<PathBuf>,
pub(crate) enable_elevated_update_task: bool,
pub(crate) banner_path: Option<PathBuf>,
pub(crate) dialog_image_path: Option<PathBuf>,
pub(crate) fips_compliant: bool,
/// MSI installer version in the format `major.minor.patch.build` (build is optional).
///
/// Because a valid version is required for MSI installer, it will be derived from [`PackageSettings::version`] if this field is not set.
///
/// The first field is the major version and has a maximum value of 255. The second field is the minor version and has a maximum value of 255.
/// The third and fourth fields have a maximum value of 65,535.
///
/// See <https://learn.microsoft.com/en-us/windows/win32/msi/productversion> for more info.
pub version: Option<String>,
/// A GUID upgrade code for MSI installer. This code **_must stay the same across all of your updates_**,
/// otherwise, Windows will treat your update as a different app and your users will have duplicate versions of your app.
///
/// By default, tauri generates this code by generating a Uuid v5 using the string `<productName>.exe.app.x64` in the DNS namespace.
/// You can use Tauri's CLI to generate and print this code for you by running `tauri inspect wix-upgrade-code`.
///
/// It is recommended that you set this value in your tauri config file to avoid accidental changes in your upgrade code
/// whenever you want to change your product name.
pub upgrade_code: Option<uuid::Uuid>,
}
#[derive(Debug, Clone, Serialize, Deserialize, Default)]
pub(crate) struct MacOsSettings {
pub(crate) frameworks: Option<Vec<String>>,
pub(crate) minimum_system_version: Option<String>,
pub(crate) license: Option<String>,
pub(crate) exception_domain: Option<String>,
pub(crate) signing_identity: Option<String>,
pub(crate) provider_short_name: Option<String>,
pub(crate) entitlements: Option<String>,
pub(crate) info_plist_path: Option<PathBuf>,
/// List of custom files to add to the application bundle.
/// Maps the path in the Contents directory in the app to the path of the file to include (relative to the current working directory).
pub files: HashMap<PathBuf, PathBuf>,
/// Preserve the hardened runtime version flag, see <https://developer.apple.com/documentation/security/hardened_runtime>
///
/// Setting this to `false` is useful when using an ad-hoc signature, making it less strict.
pub hardened_runtime: bool,
}
#[derive(Debug, Clone, Serialize, Deserialize)]
pub(crate) struct WindowsSettings {
pub(crate) digest_algorithm: Option<String>,
pub(crate) certificate_thumbprint: Option<String>,
pub(crate) timestamp_url: Option<String>,
pub(crate) tsp: bool,
pub(crate) wix: Option<WixSettings>,
pub(crate) icon_path: Option<PathBuf>,
pub(crate) webview_install_mode: WebviewInstallMode,
pub(crate) webview_fixed_runtime_path: Option<PathBuf>,
pub(crate) allow_downgrades: bool,
pub(crate) nsis: Option<NsisSettings>,
/// Specify a custom command to sign the binaries.
/// This command needs to have a `%1` in it which is just a placeholder for the binary path,
/// which we will detect and replace before calling the command.
///
/// Example:
/// ```text
/// sign-cli --arg1 --arg2 %1
/// ```
///
/// By default we use `signtool.exe`, which can be found only on Windows, so
/// if you are on another platform and want to cross-compile and sign you will
/// need to use another tool like `osslsigncode`.
pub sign_command: Option<CustomSignCommandSettings>,
}
#[derive(Debug, Clone, Serialize, Deserialize)]
pub(crate) struct NsisSettings {
pub(crate) template: Option<PathBuf>,
pub(crate) license: Option<PathBuf>,
pub(crate) header_image: Option<PathBuf>,
pub(crate) sidebar_image: Option<PathBuf>,
pub(crate) installer_icon: Option<PathBuf>,
pub(crate) install_mode: NSISInstallerMode,
pub(crate) languages: Option<Vec<String>>,
pub(crate) custom_language_files: Option<HashMap<String, PathBuf>>,
pub(crate) display_language_selector: bool,
pub(crate) start_menu_folder: Option<String>,
pub(crate) installer_hooks: Option<PathBuf>,
/// Try to ensure that the WebView2 version is equal to or newer than this version;
/// if the user's WebView2 is older than this version,
/// the installer will try to trigger a WebView2 update.
pub minimum_webview2_version: Option<String>,
}
#[derive(Debug, Clone, Serialize, Deserialize)]
pub(crate) enum NSISInstallerMode {
CurrentUser,
PerMachine,
Both,
}
#[derive(Debug, Clone, Serialize, Deserialize)]
pub(crate) enum WebviewInstallMode {
Skip,
DownloadBootstrapper { silent: bool },
EmbedBootstrapper { silent: bool },
OfflineInstaller { silent: bool },
FixedRuntime { path: PathBuf },
}
impl Default for WebviewInstallMode {
fn default() -> Self {
Self::OfflineInstaller { silent: false }
}
}
#[derive(Clone, Copy, Debug)]
pub(crate) enum PackageType {
MacOsBundle,
IosBundle,
WindowsMsi,
Deb,
Rpm,
AppImage,
Dmg,
Updater,
}
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct CustomSignCommandSettings {
/// The command to run to sign the binary.
pub cmd: String,
/// The arguments to pass to the command.
///
/// "%1" will be replaced with the path to the binary to be signed.
pub args: Vec<String>,
}
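
A minimal sketch (an assumption, not part of this diff) of expanding the documented `%1` placeholder in a custom sign command before invoking it:

// Build a ready-to-run command from the settings, substituting `%1` with the
// path of the binary to sign.
fn expand_sign_command(
    settings: &CustomSignCommandSettings,
    binary: &std::path::Path,
) -> std::process::Command {
    let mut cmd = std::process::Command::new(&settings.cmd);
    for arg in &settings.args {
        cmd.arg(arg.replace("%1", &binary.display().to_string()));
    }
    cmd
}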

View file

@ -0,0 +1,5 @@
use serde::{Deserialize, Serialize};
/// Represents configuration items for the desktop platform.
#[derive(Debug, Default, Clone, Serialize, Deserialize)]
pub(crate) struct DesktopConfig {}

View file

@ -0,0 +1,105 @@
use super::*;
use crate::Result;
use anyhow::Context;
use krates::{Krates, NodeId};
use serde::{Deserialize, Serialize};
#[derive(Debug, Clone, Serialize, Deserialize)]
pub(crate) struct DioxusConfig {
pub(crate) application: ApplicationConfig,
#[serde(default)]
pub(crate) web: WebConfig,
#[serde(default)]
pub(crate) desktop: DesktopConfig,
#[serde(default)]
pub(crate) bundle: BundleConfig,
}
impl Default for DioxusConfig {
fn default() -> Self {
Self {
application: ApplicationConfig {
default_platform: default_platform(),
asset_dir: asset_dir_default(),
sub_package: None,
},
web: WebConfig {
app: WebAppConfig {
title: default_title(),
base_path: None,
},
proxy: vec![],
watcher: Default::default(),
resource: WebResourceConfig {
dev: WebDevResourceConfig {
style: vec![],
script: vec![],
},
style: Some(vec![]),
script: Some(vec![]),
},
https: WebHttpsConfig {
enabled: None,
mkcert: None,
key_path: None,
cert_path: None,
},
pre_compress: true,
wasm_opt: Default::default(),
},
desktop: DesktopConfig::default(),
bundle: BundleConfig::default(),
}
}
}
impl DioxusConfig {
pub fn load(krates: &Krates, package: NodeId) -> Result<Option<Self>> {
// Walk up from the cargo.toml to the root of the workspace looking for Dioxus.toml
let mut current_dir = krates[package]
.manifest_path
.parent()
.unwrap()
.as_std_path()
.to_path_buf()
.canonicalize()?;
let workspace_path = krates
.workspace_root()
.as_std_path()
.to_path_buf()
.canonicalize()?;
let mut dioxus_conf_file = None;
while current_dir.starts_with(&workspace_path) {
let config = ["Dioxus.toml", "dioxus.toml"]
.into_iter()
.map(|file| current_dir.join(file))
.find(|path| path.is_file());
// Try to find Dioxus.toml in the current directory
if let Some(new_config) = config {
dioxus_conf_file = Some(new_config.as_path().to_path_buf());
break;
}
// If we can't find it, go up a directory
current_dir = current_dir
.parent()
.context("Failed to find Dioxus.toml")?
.to_path_buf();
}
let Some(dioxus_conf_file) = dioxus_conf_file else {
return Ok(None);
};
toml::from_str::<DioxusConfig>(&std::fs::read_to_string(&dioxus_conf_file)?)
.map_err(|err| {
anyhow::anyhow!("Failed to parse Dioxus.toml at {dioxus_conf_file:?}: {err}").into()
})
.map(Some)
}
}

View file

@ -0,0 +1,34 @@
#![allow(unused)] // lots of configs...
use clap::Parser;
use std::net::{IpAddr, Ipv4Addr, SocketAddr, SocketAddrV4};
/// The arguments for the address the server will run on
#[derive(Clone, Debug, Parser)]
pub(crate) struct AddressArguments {
/// The port the server will run on
#[clap(long)]
#[clap(default_value_t = default_port())]
pub(crate) port: u16,
/// The address the server will run on
#[clap(long, default_value_t = default_address())]
pub(crate) addr: std::net::IpAddr,
}
impl Default for AddressArguments {
fn default() -> Self {
Self {
port: default_port(),
addr: default_address(),
}
}
}
fn default_port() -> u16 {
8080
}
fn default_address() -> IpAddr {
IpAddr::V4(std::net::Ipv4Addr::new(127, 0, 0, 1))
}

View file

@ -0,0 +1,180 @@
use serde::{Deserialize, Serialize};
use std::path::PathBuf;
#[derive(Debug, Clone, Serialize, Deserialize)]
pub(crate) struct WebConfig {
#[serde(default)]
pub(crate) app: WebAppConfig,
#[serde(default)]
pub(crate) proxy: Vec<WebProxyConfig>,
#[serde(default)]
pub(crate) watcher: WebWatcherConfig,
#[serde(default)]
pub(crate) resource: WebResourceConfig,
#[serde(default)]
pub(crate) https: WebHttpsConfig,
/// Whether to enable pre-compression of assets and wasm during a web build in release mode
#[serde(default = "true_bool")]
pub(crate) pre_compress: bool,
/// The wasm-opt configuration
#[serde(default)]
pub(crate) wasm_opt: WasmOptConfig,
}
impl Default for WebConfig {
fn default() -> Self {
Self {
pre_compress: true_bool(),
app: Default::default(),
https: Default::default(),
wasm_opt: Default::default(),
proxy: Default::default(),
watcher: Default::default(),
resource: Default::default(),
}
}
}
/// The wasm-opt configuration
#[derive(Debug, Clone, Serialize, Deserialize, Default)]
pub(crate) struct WasmOptConfig {
/// The wasm-opt level to use for release builds [default: s]
/// Options:
/// - z: optimize aggressively for size
/// - s: optimize for size
/// - 1: optimize for speed
/// - 2: optimize more for speed
/// - 3: optimize even more for speed
/// - 4: optimize aggressively for speed
#[serde(default)]
pub(crate) level: WasmOptLevel,
/// Keep debug symbols in the wasm file
#[serde(default = "false_bool")]
pub(crate) debug: bool,
}
/// The wasm-opt level to use for release web builds [default: 4]
#[derive(Default, Debug, Copy, Clone, Serialize, Deserialize)]
pub(crate) enum WasmOptLevel {
/// Optimize aggressively for size
#[serde(rename = "z")]
Z,
/// Optimize for size
#[serde(rename = "s")]
S,
/// Don't optimize
#[serde(rename = "0")]
Zero,
/// Optimize for speed
#[serde(rename = "1")]
One,
/// Optimize more for speed
#[serde(rename = "2")]
Two,
/// Optimize even more for speed
#[serde(rename = "3")]
Three,
/// Optimize aggressively for speed
#[serde(rename = "4")]
#[default]
Four,
}
#[derive(Debug, Clone, Serialize, Deserialize)]
pub(crate) struct WebAppConfig {
#[serde(default = "default_title")]
pub(crate) title: String,
pub(crate) base_path: Option<String>,
}
impl WebAppConfig {
/// Get the normalized base path for the application with `/` trimmed from both ends. If the base path is not set, this will return `.`.
pub(crate) fn base_path(&self) -> &str {
match &self.base_path {
Some(path) => path.trim_matches('/'),
None => ".",
}
}
}
impl Default for WebAppConfig {
fn default() -> Self {
Self {
title: default_title(),
base_path: None,
}
}
}
#[derive(Debug, Clone, Serialize, Deserialize)]
pub(crate) struct WebProxyConfig {
pub(crate) backend: String,
}
#[derive(Debug, Clone, Serialize, Deserialize)]
pub(crate) struct WebWatcherConfig {
#[serde(default = "watch_path_default")]
pub(crate) watch_path: Vec<PathBuf>,
#[serde(default)]
pub(crate) reload_html: bool,
#[serde(default = "true_bool")]
pub(crate) index_on_404: bool,
}
impl Default for WebWatcherConfig {
fn default() -> Self {
Self {
watch_path: watch_path_default(),
reload_html: false,
index_on_404: true,
}
}
}
fn watch_path_default() -> Vec<PathBuf> {
vec![PathBuf::from("src"), PathBuf::from("examples")]
}
#[derive(Default, Debug, Clone, Serialize, Deserialize)]
pub(crate) struct WebResourceConfig {
pub(crate) dev: WebDevResourceConfig,
pub(crate) style: Option<Vec<PathBuf>>,
pub(crate) script: Option<Vec<PathBuf>>,
}
#[derive(Default, Debug, Clone, Serialize, Deserialize)]
pub(crate) struct WebDevResourceConfig {
#[serde(default)]
pub(crate) style: Vec<PathBuf>,
#[serde(default)]
pub(crate) script: Vec<PathBuf>,
}
#[derive(Debug, Default, Clone, Serialize, Deserialize)]
pub(crate) struct WebHttpsConfig {
pub(crate) enabled: Option<bool>,
pub(crate) mkcert: Option<bool>,
pub(crate) key_path: Option<String>,
pub(crate) cert_path: Option<String>,
}
fn true_bool() -> bool {
true
}
fn false_bool() -> bool {
false
}
pub(crate) fn default_title() -> String {
"dioxus | ⛺".into()
}
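
A minimal sketch (not part of this diff) of how the serde defaults above behave when a partial web config table is deserialized with the `toml` crate:

// Omitted keys fall back to the `#[serde(default = ...)]` functions above.
fn parse_partial_web_config() -> Result<(), toml::de::Error> {
    let cfg: WebConfig = toml::from_str(
        r#"
        [app]
        title = "my app"

        [watcher]
        reload_html = true
        "#,
    )?;
    assert!(cfg.pre_compress); // falls back to `true_bool()`
    assert_eq!(cfg.app.base_path(), "."); // `base_path` unset
    Ok(())
}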

View file

@ -1,86 +1,567 @@
use crate::build::TargetArgs;
use crate::config::{DioxusConfig, Platform};
use crate::CliSettings;
use crate::{config::DioxusConfig, TargetArgs};
use crate::{Platform, Result};
use anyhow::Context;
use krates::{cm::Target, KrateDetails};
use krates::{cm::TargetKind, Cmd, Krates, NodeId};
use serde::{Deserialize, Serialize};
use std::path::PathBuf;
use std::sync::Arc;
use std::{
fmt::{Display, Formatter},
path::PathBuf,
};
use std::{io::Write, path::Path};
use toml_edit::Item;
use crate::metadata::CargoError;
// Contains information about the crate we are currently in and the dioxus config for that crate
#[derive(Clone)]
pub(crate) struct DioxusCrate {
pub(crate) krates: Arc<Krates>,
pub(crate) package: NodeId,
pub(crate) config: DioxusConfig,
pub(crate) target: Target,
pub(crate) settings: CliSettings,
}
/// Load the dioxus config from a path
fn load_dioxus_config(
krates: &Krates,
package: NodeId,
) -> Result<Option<DioxusConfig>, CrateConfigError> {
fn acquire_dioxus_toml(dir: &std::path::Path) -> Option<PathBuf> {
["Dioxus.toml", "dioxus.toml"]
.into_iter()
.map(|file| dir.join(file))
.find(|path| path.is_file())
pub(crate) static PROFILE_WASM: &str = "dioxus-wasm";
pub(crate) static PROFILE_ANDROID: &str = "dioxus-android";
pub(crate) static PROFILE_SERVER: &str = "dioxus-server";
impl DioxusCrate {
pub(crate) fn new(target: &TargetArgs) -> Result<Self> {
let mut cmd = Cmd::new();
cmd.features(target.features.clone());
let krates = krates::Builder::new()
.build(cmd, |_| {})
.context("Failed to run cargo metadata")?;
let package = find_main_package(&krates, target.package.clone())?;
let dioxus_config = DioxusConfig::load(&krates, package)?.unwrap_or_default();
let package_name = krates[package].name.clone();
let target_kind = if target.example.is_some() {
TargetKind::Example
} else {
TargetKind::Bin
};
let target_name = target
.example
.clone()
.or(target.bin.clone())
.unwrap_or(package_name);
let main_package = &krates[package];
let target = main_package
.targets
.iter()
.find(|target| {
target_name == target.name.as_str() && target.kind.contains(&target_kind)
})
.with_context(|| format!("Failed to find target {target_name}"))?
.clone();
let settings = CliSettings::load();
Ok(Self {
krates: Arc::new(krates),
package,
config: dioxus_config,
target,
settings,
})
}
// Walk up from the cargo.toml to the root of the workspace looking for Dioxus.toml
let mut current_dir = krates[package]
/// Compose an asset directory. Represents the typical "public" directory
/// with publicly available resources (configurable in the `Dioxus.toml`).
pub(crate) fn legacy_asset_dir(&self) -> PathBuf {
self.crate_dir().join(&self.config.application.asset_dir)
}
/// Get the list of files in the "legacy" asset directory
pub(crate) fn legacy_asset_dir_files(&self) -> Vec<PathBuf> {
let mut files = vec![];
let Ok(read_dir) = self.legacy_asset_dir().read_dir() else {
return files;
};
for entry in read_dir.flatten() {
files.push(entry.path());
}
files
}
/// Compose an out directory. Represents the typical "dist" directory that
/// is "distributed" after building an application (configurable in the
/// `Dioxus.toml`).
fn out_dir(&self) -> PathBuf {
let dir = self.workspace_dir().join("target").join("dx");
std::fs::create_dir_all(&dir).unwrap();
dir
}
/// Create a workdir for the given platform
/// This can be used as a temporary directory for the build, but in an observable way such that
/// you can see the files in the directory via `target`
///
/// target/dx/build/app/web/
/// target/dx/build/app/web/public/
/// target/dx/build/app/web/server.exe
pub(crate) fn build_dir(&self, platform: Platform, release: bool) -> PathBuf {
self.out_dir()
.join(self.executable_name())
.join(if release { "release" } else { "debug" })
.join(platform.build_folder_name())
}
/// target/dx/bundle/app/
/// target/dx/bundle/app/blah.app
/// target/dx/bundle/app/blah.exe
/// target/dx/bundle/app/public/
pub(crate) fn bundle_dir(&self, platform: Platform) -> PathBuf {
self.out_dir()
.join(self.executable_name())
.join("bundle")
.join(platform.build_folder_name())
}
/// Get the workspace directory for the crate
pub(crate) fn workspace_dir(&self) -> PathBuf {
self.krates.workspace_root().as_std_path().to_path_buf()
}
/// Get the directory of the crate
pub(crate) fn crate_dir(&self) -> PathBuf {
self.package()
.manifest_path
.parent()
.unwrap()
.as_std_path()
.to_path_buf()
.canonicalize()?;
let workspace_path = krates
.workspace_root()
.as_std_path()
.to_path_buf()
.canonicalize()?;
let mut dioxus_conf_file = None;
while current_dir.starts_with(&workspace_path) {
// Try to find Dioxus.toml in the current directory
if let Some(new_config) = acquire_dioxus_toml(&current_dir) {
dioxus_conf_file = Some(new_config.as_path().to_path_buf());
break;
}
// If we can't find it, go up a directory
current_dir = current_dir
.parent()
.ok_or(CrateConfigError::CurrentPackageNotFound)?
.to_path_buf();
}
let Some(dioxus_conf_file) = dioxus_conf_file else {
return Ok(None);
/// Get the main source file of the target
pub(crate) fn main_source_file(&self) -> PathBuf {
self.target.src_path.as_std_path().to_path_buf()
}
/// Get the package we are currently in
pub(crate) fn package(&self) -> &krates::cm::Package {
&self.krates[self.package]
}
/// Get the name of the package we are compiling
pub(crate) fn executable_name(&self) -> &str {
&self.target.name
}
/// Get the type of executable we are compiling
pub(crate) fn executable_type(&self) -> krates::cm::TargetKind {
self.target.kind[0].clone()
}
/// Try to autodetect the platform from the package by reading its features
///
/// Read the default-features list and/or the features list on dioxus to see if we can autodetect the platform
pub(crate) fn autodetect_platform(&self) -> Option<(Platform, String)> {
let krate = self.krates.krates_by_name("dioxus").next()?;
// We're going to accumulate the platforms that are enabled
// This will let us create a better warning if multiple platforms are enabled
let manually_enabled_platforms = self
.krates
.get_enabled_features(krate.kid)?
.iter()
.flat_map(|feature| {
Platform::autodetect_from_cargo_feature(feature).map(|f| (f, feature.to_string()))
})
.collect::<Vec<_>>();
if manually_enabled_platforms.len() > 1 {
tracing::error!("Multiple platforms are enabled. Please specify a platform with `--platform <platform>` or set a single default platform using a cargo feature.");
for platform in manually_enabled_platforms {
tracing::error!(" - {platform:?}");
}
return None;
}
if manually_enabled_platforms.len() == 1 {
return manually_enabled_platforms.first().cloned();
}
// Let's try and find the list of platforms from the feature list
// This lets apps that specify web + server to work without specifying the platform.
// This is because we treat `server` as a binary thing rather than a dedicated platform, so at least we can disambiguate it
let possible_platforms = self
.package()
.features
.iter()
.filter_map(|(feature, _features)| {
match Platform::autodetect_from_cargo_feature(feature) {
Some(platform) => Some((platform, feature.to_string())),
None => {
let auto_implicit = _features
.iter()
.filter_map(|f| {
if !f.starts_with("dioxus?/") && !f.starts_with("dioxus/") {
return None;
}
let rest = f
.trim_start_matches("dioxus/")
.trim_start_matches("dioxus?/");
Platform::autodetect_from_cargo_feature(rest)
})
.collect::<Vec<_>>();
if auto_implicit.len() == 1 {
Some((auto_implicit.first().copied().unwrap(), feature.to_string()))
} else {
None
}
}
}
})
.filter(|platform| platform.0 != Platform::Server)
.collect::<Vec<_>>();
if possible_platforms.len() == 1 {
return possible_platforms.first().cloned();
}
tracing::warn!("Could not autodetect platform. Platform must be explicitly specified. Pass `--platform <platform>` or set a default platform using a cargo feature.");
None
}
/// Check if dioxus is being built with a particular feature
pub(crate) fn has_dioxus_feature(&self, filter: &str) -> bool {
self.krates.krates_by_name("dioxus").any(|dioxus| {
self.krates
.get_enabled_features(dioxus.kid)
.map(|features| features.contains(filter))
.unwrap_or_default()
})
}
/// Get the features required to build for the given platform
pub(crate) fn feature_for_platform(&self, platform: Platform) -> Option<String> {
let package = self.package();
// Try to find the feature that activates the dioxus feature for the given platform
let dioxus_feature = platform.feature_name();
package.features.iter().find_map(|(key, features)| {
// if the feature is just the name of the platform, we use that
if key == dioxus_feature {
return Some(key.clone());
}
// Otherwise look for the feature that starts with dioxus/ or dioxus?/ and matches the platform
for feature in features {
if let Some((_, after_dioxus)) = feature.split_once("dioxus") {
if let Some(dioxus_feature_enabled) =
after_dioxus.trim_start_matches('?').strip_prefix('/')
{
// If that enables the feature we are looking for, return that feature
if dioxus_feature_enabled == dioxus_feature {
return Some(key.clone());
}
}
}
}
None
})
}
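    // A standalone sketch of the matching rule above, over a hypothetical feature table like the
    // one cargo exposes from a user's Cargo.toml. Here the `web` feature enables `dioxus/web`,
    // so it is the feature picked for Platform::Web (whose feature_name() is "web").
    fn feature_matching_sketch() -> Option<String> {
        let features: Vec<(&str, Vec<&str>)> =
            vec![("web", vec!["dioxus/web"]), ("desktop", vec!["dioxus/desktop"])];
        let dioxus_feature = "web";
        features.iter().find_map(|(key, feats)| {
            if *key == dioxus_feature {
                return Some(key.to_string());
            }
            for feature in feats {
                if let Some((_, after_dioxus)) = feature.split_once("dioxus") {
                    if let Some(enabled) = after_dioxus.trim_start_matches('?').strip_prefix('/') {
                        if enabled == dioxus_feature {
                            return Some(key.to_string());
                        }
                    }
                }
            }
            None
        }) // -> Some("web")
    }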
/// Check if assets should be pre_compressed. This will only be true in release mode if the user
/// has enabled pre_compress in the web config.
pub(crate) fn should_pre_compress_web_assets(&self, release: bool) -> bool {
self.config.web.pre_compress && release
}
// The `opt-level=2` increases build times, but can noticeably decrease time
// between saving changes and being able to interact with an app (for wasm/web). The "overall"
// time difference (between having and not having the optimization) can be
// almost imperceptible (~1 s) but also can be very noticeable (~6 s) — depends
// on setup (hardware, OS, browser, idle load).
//
// Find or create the client and server profiles in the .cargo/config.toml file
pub(crate) fn initialize_profiles(&self) -> crate::Result<()> {
let config_path = self.workspace_dir().join(".cargo/config.toml");
let mut config = match std::fs::read_to_string(&config_path) {
Ok(config) => config.parse::<toml_edit::DocumentMut>().map_err(|e| {
crate::Error::Other(anyhow::anyhow!("Failed to parse .cargo/config.toml: {}", e))
})?,
Err(_) => Default::default(),
};
if let Item::Table(table) = config
.as_table_mut()
.entry("profile")
.or_insert(Item::Table(Default::default()))
{
if let toml_edit::Entry::Vacant(entry) = table.entry(PROFILE_WASM) {
let mut client = toml_edit::Table::new();
client.insert("inherits", Item::Value("dev".into()));
client.insert("opt-level", Item::Value(2.into()));
entry.insert(Item::Table(client));
}
if let toml_edit::Entry::Vacant(entry) = table.entry(PROFILE_SERVER) {
let mut server = toml_edit::Table::new();
server.insert("inherits", Item::Value("dev".into()));
server.insert("opt-level", Item::Value(2.into()));
entry.insert(Item::Table(server));
}
if let toml_edit::Entry::Vacant(entry) = table.entry(PROFILE_ANDROID) {
let mut android = toml_edit::Table::new();
android.insert("inherits", Item::Value("dev".into()));
android.insert("opt-level", Item::Value(2.into()));
entry.insert(Item::Table(android));
}
}
// Write the config back to the file
if let Some(parent) = config_path.parent() {
std::fs::create_dir_all(parent)?;
}
let file = std::fs::File::create(config_path)?;
let mut buf_writer = std::io::BufWriter::new(file);
write!(buf_writer, "{}", config)?;
Ok(())
}
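    // A standalone sketch of the profile entry this writes, built with toml_edit in the same way;
    // "wasm-dev" is a stand-in for the PROFILE_WASM constant, which isn't shown here.
    fn profile_entry_sketch() -> String {
        let mut config = toml_edit::DocumentMut::new();
        if let toml_edit::Item::Table(table) = config
            .as_table_mut()
            .entry("profile")
            .or_insert(toml_edit::Item::Table(Default::default()))
        {
            let mut client = toml_edit::Table::new();
            client.insert("inherits", toml_edit::Item::Value("dev".into()));
            client.insert("opt-level", toml_edit::Item::Value(2.into()));
            table.insert("wasm-dev", toml_edit::Item::Table(client));
        }
        // Renders roughly as:
        //   [profile.wasm-dev]
        //   inherits = "dev"
        //   opt-level = 2
        config.to_string()
    }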
fn default_ignore_list(&self) -> Vec<&'static str> {
vec![
".git",
".github",
".vscode",
"target",
"node_modules",
"dist",
"*~",
".*",
"*.lock",
"*.log",
"*.rs",
]
}
/// Create a new gitignore map for this target crate
///
/// todo(jon): this is a bit expensive to build, so maybe we should cache it?
pub fn workspace_gitignore(&self) -> ignore::gitignore::Gitignore {
let crate_dir = self.crate_dir();
let mut ignore_builder = ignore::gitignore::GitignoreBuilder::new(&crate_dir);
ignore_builder.add(crate_dir.join(".gitignore"));
let workspace_dir = self.workspace_dir();
ignore_builder.add(workspace_dir.join(".gitignore"));
for path in self.default_ignore_list() {
ignore_builder
.add_line(None, path)
.expect("failed to add path to file excluder");
}
ignore_builder.build().unwrap()
}
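    // A standalone sketch of how the default ignore list behaves with the `ignore` crate
    // (the patterns are a subset of the list above; paths are hypothetical):
    fn ignore_sketch() -> ignore::gitignore::Gitignore {
        let mut builder = ignore::gitignore::GitignoreBuilder::new(".");
        for pat in [".git", "target", "*.lock", "*.log"] {
            builder.add_line(None, pat).expect("valid glob");
        }
        let gi = builder.build().unwrap();
        assert!(gi.matched("Cargo.lock", false).is_ignore());
        assert!(!gi.matched("assets/style.css", false).is_ignore());
        gi
    }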
/// Return the version of the wasm-bindgen crate if it exists
pub fn wasm_bindgen_version(&self) -> Option<String> {
self.krates
.krates_by_name("wasm-bindgen")
.next()
.map(|krate| krate.krate.version.to_string())
}
pub(crate) fn default_platform(&self) -> Option<Platform> {
let default = self.package().features.get("default")?;
// we only trace features 1 level deep..
for feature in default.iter() {
// If the user directly specified a platform we can just use that.
if feature.starts_with("dioxus/") {
let dx_feature = feature.trim_start_matches("dioxus/");
let auto = Platform::autodetect_from_cargo_feature(dx_feature);
if auto.is_some() {
return auto;
}
}
// If the user is specifying an internal feature that points to a platform, we can use that
let internal_feature = self.package().features.get(feature);
if let Some(internal_feature) = internal_feature {
for feature in internal_feature {
if feature.starts_with("dioxus/") {
let dx_feature = feature.trim_start_matches("dioxus/");
let auto = Platform::autodetect_from_cargo_feature(dx_feature);
if auto.is_some() {
return auto;
}
}
}
}
}
None
}
/// Gather the features that are enabled for the package
pub(crate) fn platformless_features(&self) -> Vec<String> {
let default = self.package().features.get("default").unwrap();
let mut kept_features = vec![];
// Only keep the top-level features in the default list that don't point to a platform directly
// IE we want to drop `web` if default = ["web"]
'top: for feature in default {
// Don't keep features that point to a platform via dioxus/blah
if feature.starts_with("dioxus/") {
let dx_feature = feature.trim_start_matches("dioxus/");
if Platform::autodetect_from_cargo_feature(dx_feature).is_some() {
continue 'top;
}
}
// Don't keep features that point to a platform via an internal feature
if let Some(internal_feature) = self.package().features.get(feature) {
for feature in internal_feature {
if feature.starts_with("dioxus/") {
let dx_feature = feature.trim_start_matches("dioxus/");
if Platform::autodetect_from_cargo_feature(dx_feature).is_some() {
continue 'top;
}
}
}
}
// Otherwise we can keep it
kept_features.push(feature.to_string());
}
kept_features
}
/// Return the list of paths that we should watch for changes.
pub(crate) fn watch_paths(&self) -> Vec<PathBuf> {
let mut watched_paths = vec![];
// Get a list of *all* the crates with Rust code that we need to watch.
// This will end up being dependencies in the workspace and non-workspace dependencies on the user's computer.
let mut watched_crates = self.local_dependencies();
watched_crates.push(self.crate_dir());
// Now, watch all the folders in the crates, but respecting their respective ignore files
for krate_root in watched_crates {
// Build the ignore builder for this crate, but with our default ignore list as well
let ignore = self.ignore_for_krate(&krate_root);
for entry in krate_root.read_dir().unwrap() {
let Ok(entry) = entry else {
continue;
};
if ignore
.matched(entry.path(), entry.path().is_dir())
.is_ignore()
{
continue;
}
watched_paths.push(entry.path().to_path_buf());
}
}
watched_paths.dedup();
watched_paths
}
fn ignore_for_krate(&self, path: &Path) -> ignore::gitignore::Gitignore {
let mut ignore_builder = ignore::gitignore::GitignoreBuilder::new(path);
for path in self.default_ignore_list() {
ignore_builder
.add_line(None, path)
.expect("failed to add path to file excluder");
}
ignore_builder.build().unwrap()
}
/// Get all the Manifest paths for dependencies that we should watch. Will not return anything
/// in the `.cargo` folder - only local dependencies will be watched.
///
/// This returns a list of manifest paths
///
/// Extend the watch path to include:
///
/// - the assets directory - this is so we can hotreload CSS and other assets by default
/// - the Cargo.toml file - this is so we can hotreload the project if the user changes dependencies
/// - the Dioxus.toml file - this is so we can hotreload the project if the user changes the Dioxus config
pub(crate) fn local_dependencies(&self) -> Vec<PathBuf> {
let mut paths = vec![];
for (dependency, _edge) in self.krates.get_deps(self.package) {
let krate = match dependency {
krates::Node::Krate { krate, .. } => krate,
krates::Node::Feature { krate_index, .. } => &self.krates[krate_index.index()],
};
if krate
.manifest_path
.components()
.any(|c| c.as_str() == ".cargo")
{
continue;
}
paths.push(
krate
.manifest_path
.parent()
.unwrap()
.to_path_buf()
.into_std_path_buf(),
);
}
paths
}
pub(crate) fn all_watched_crates(&self) -> Vec<PathBuf> {
let mut krates: Vec<PathBuf> = self
.local_dependencies()
.into_iter()
.map(|p| {
p.parent()
.expect("Local manifest to exist and have a parent")
.to_path_buf()
})
.chain(Some(self.crate_dir()))
.collect();
krates.dedup();
krates
}
}
impl std::fmt::Debug for DioxusCrate {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
f.debug_struct("DioxusCrate")
.field("package", &self.krates[self.package])
.field("dioxus_config", &self.config)
.field("target", &self.target)
.finish()
}
}
// Find the main package in the workspace
fn find_main_package(package: Option<String>, krates: &Krates) -> Result<NodeId, CrateConfigError> {
fn find_main_package(krates: &Krates, package: Option<String>) -> Result<NodeId> {
let kid = match package {
Some(package) => {
let mut workspace_members = krates.workspace_members();
@ -103,7 +584,7 @@ fn find_main_package(package: Option<String>, krates: &Krates) -> Result<NodeId,
}
}
found.ok_or_else(|| CrateConfigError::PackageNotFound(package.clone()))?
found.ok_or_else(|| anyhow::anyhow!("Failed to find package {package}"))?
}
None => {
// Otherwise find the package that is the closest parent of the current directory
@ -131,235 +612,10 @@ fn find_main_package(package: Option<String>, krates: &Krates) -> Result<NodeId,
}
closest_parent
.map(|(id, _)| id)
.ok_or(CrateConfigError::CurrentPackageNotFound)?
.context("Failed to find current package")?
}
};
let package = krates.nid_for_kid(kid).unwrap();
Ok(package)
}
// Contains information about the crate we are currently in and the dioxus config for that crate
#[derive(Clone)]
pub struct DioxusCrate {
pub krates: Arc<Krates>,
pub package: NodeId,
pub dioxus_config: DioxusConfig,
pub target: Target,
}
impl DioxusCrate {
pub fn new(target: &TargetArgs) -> Result<Self, CrateConfigError> {
let mut cmd = Cmd::new();
cmd.features(target.features.clone());
let builder = krates::Builder::new();
let krates = builder.build(cmd, |_| {})?;
let package = find_main_package(target.package.clone(), &krates)?;
let dioxus_config = load_dioxus_config(&krates, package)?.unwrap_or_default();
let package_name = krates[package].name.clone();
let target_kind = if target.example.is_some() {
TargetKind::Example
} else {
TargetKind::Bin
};
let target_name = target
.example
.clone()
.or(target.bin.clone())
.unwrap_or(package_name);
let main_package = &krates[package];
let target = main_package
.targets
.iter()
.find(|target| {
target_name == target.name.as_str() && target.kind.contains(&target_kind)
})
.ok_or(CrateConfigError::TargetNotFound(target_name))?
.clone();
Ok(Self {
krates: Arc::new(krates),
package,
dioxus_config,
target,
})
}
/// Compose an asset directory. Represents the typical "public" directory
/// with publicly available resources (configurable in the `Dioxus.toml`).
pub fn asset_dir(&self) -> PathBuf {
self.crate_dir()
.join(&self.dioxus_config.application.asset_dir)
}
/// Compose an out directory. Represents the typical "dist" directory that
/// is "distributed" after building an application (configurable in the
/// `Dioxus.toml`).
pub fn out_dir(&self) -> PathBuf {
self.workspace_dir()
.join(&self.dioxus_config.application.out_dir)
}
/// Get the workspace directory for the crate
pub fn workspace_dir(&self) -> PathBuf {
self.krates.workspace_root().as_std_path().to_path_buf()
}
/// Get the directory of the crate
pub fn crate_dir(&self) -> PathBuf {
self.package()
.manifest_path
.parent()
.unwrap()
.as_std_path()
.to_path_buf()
}
/// Get the main source file of the target
pub fn main_source_file(&self) -> PathBuf {
self.target.src_path.as_std_path().to_path_buf()
}
/// Get the package we are currently in
pub fn package(&self) -> &krates::cm::Package {
&self.krates[self.package]
}
/// Get the name of the package we are compiling
pub fn executable_name(&self) -> &str {
&self.target.name
}
/// Get the type of executable we are compiling
pub fn executable_type(&self) -> krates::cm::TargetKind {
self.target.kind[0].clone()
}
pub fn features_for_platform(&mut self, platform: Platform) -> Vec<String> {
let package = self.package();
// Try to find the feature that activates the dioxus feature for the given platform
let dioxus_feature = platform.feature_name();
let feature = package.features.iter().find_map(|(key, features)| {
// Find a feature that starts with dioxus/ or dioxus?/
for feature in features {
if let Some((_, after_dioxus)) = feature.split_once("dioxus") {
if let Some(dioxus_feature_enabled) =
after_dioxus.trim_start_matches('?').strip_prefix('/')
{
// If that enables the feature we are looking for, return that feature
if dioxus_feature_enabled == dioxus_feature {
return Some(key.clone());
}
}
}
}
None
});
feature.into_iter().collect()
}
/// Check if assets should be pre_compressed. This will only be true in release mode if the user has enabled pre_compress in the web config.
pub fn should_pre_compress_web_assets(&self, release: bool) -> bool {
self.dioxus_config.web.pre_compress && release
}
}
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct Executable {
pub name: String,
pub ty: ExecutableType,
}
#[derive(Debug, Copy, Clone, Serialize, Deserialize)]
pub enum ExecutableType {
Binary,
Lib,
Example,
}
impl ExecutableType {
/// Get the name of the executable if it is a binary or an example.
pub fn executable(&self) -> bool {
matches!(self, Self::Binary | Self::Example)
}
}
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct LoadDioxusConfigError {
location: String,
error: String,
}
impl std::fmt::Display for LoadDioxusConfigError {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
write!(f, "{} {}", self.location, self.error)
}
}
impl std::error::Error for LoadDioxusConfigError {}
#[derive(Debug)]
#[non_exhaustive]
pub enum CrateConfigError {
Cargo(CargoError),
Io(std::io::Error),
Toml(toml::de::Error),
LoadDioxusConfig(LoadDioxusConfigError),
TargetNotFound(String),
Krates(krates::Error),
PackageNotFound(String),
CurrentPackageNotFound,
}
impl From<CargoError> for CrateConfigError {
fn from(err: CargoError) -> Self {
Self::Cargo(err)
}
}
impl From<std::io::Error> for CrateConfigError {
fn from(err: std::io::Error) -> Self {
Self::Io(err)
}
}
impl From<toml::de::Error> for CrateConfigError {
fn from(err: toml::de::Error) -> Self {
Self::Toml(err)
}
}
impl From<LoadDioxusConfigError> for CrateConfigError {
fn from(err: LoadDioxusConfigError) -> Self {
Self::LoadDioxusConfig(err)
}
}
impl From<krates::Error> for CrateConfigError {
fn from(err: krates::Error) -> Self {
Self::Krates(err)
}
}
impl Display for CrateConfigError {
fn fmt(&self, f: &mut Formatter<'_>) -> std::fmt::Result {
match self {
Self::Cargo(err) => write!(f, "{}", err),
Self::Io(err) => write!(f, "{}", err),
Self::Toml(err) => write!(f, "{}", err),
Self::LoadDioxusConfig(err) => write!(f, "{}", err),
Self::TargetNotFound(target) => {
write!(f, "Failed to find target with name: {}", target)
}
Self::Krates(err) => write!(f, "{}", err),
Self::PackageNotFound(package) => write!(f, "Package not found: {}", package),
Self::CurrentPackageNotFound => write!(f, "Failed to find current package"),
}
}
}
impl std::error::Error for CrateConfigError {}


@ -1,11 +1,10 @@
use crate::metadata::CargoError;
use thiserror::Error as ThisError;
use crate::{metadata::CargoError, CrateConfigError, LoadDioxusConfigError};
pub type Result<T, E = Error> = std::result::Result<T, E>;
pub(crate) type Result<T, E = Error> = std::result::Result<T, E>;
#[derive(ThisError, Debug)]
pub enum Error {
pub(crate) enum Error {
/// Used when errors need to propagate but are too unique to be typed
#[error("{0}")]
Unique(String),
@ -14,37 +13,22 @@ pub enum Error {
IO(#[from] std::io::Error),
#[error("Format Error: {0}")]
FormatError(#[from] std::fmt::Error),
Format(#[from] std::fmt::Error),
#[error("Format failed: {0}")]
ParseError(String),
Parse(String),
#[error("Runtime Error: {0}")]
RuntimeError(String),
#[error("Failed to write error")]
FailedToWrite,
#[error("Build Failed: {0}")]
BuildFailed(String),
Runtime(String),
#[error("Cargo Error: {0}")]
CargoError(String),
#[error("Couldn't retrieve cargo metadata")]
CargoMetadata(#[source] cargo_metadata::Error),
#[error("{0}")]
CustomError(String),
Cargo(#[from] CargoError),
#[error("Invalid proxy URL: {0}")]
InvalidProxy(#[from] hyper::http::uri::InvalidUri),
#[error("Failed to establish proxy: {0}")]
ProxySetupError(String),
#[error("Error proxying request: {0}")]
ProxyRequestError(hyper::Error),
ProxySetup(String),
#[error(transparent)]
Other(#[from] anyhow::Error),
@ -64,43 +48,12 @@ impl From<String> for Error {
impl From<html_parser::Error> for Error {
fn from(e: html_parser::Error) -> Self {
Self::ParseError(e.to_string())
Self::Parse(e.to_string())
}
}
impl From<hyper::Error> for Error {
fn from(e: hyper::Error) -> Self {
Self::RuntimeError(e.to_string())
Self::Runtime(e.to_string())
}
}
impl From<LoadDioxusConfigError> for Error {
fn from(e: LoadDioxusConfigError) -> Self {
Self::RuntimeError(e.to_string())
}
}
impl From<CargoError> for Error {
fn from(e: CargoError) -> Self {
Self::CargoError(e.to_string())
}
}
impl From<CrateConfigError> for Error {
fn from(e: CrateConfigError) -> Self {
Self::RuntimeError(e.to_string())
}
}
#[macro_export]
macro_rules! custom_error {
($msg:literal $(,)?) => {
Err(Error::CustomError(format!($msg)))
};
($err:expr $(,)?) => {
Err(Error::from($err))
};
($fmt:expr, $($arg:tt)*) => {
Err(Error::CustomError(format!($fmt, $($arg)*)))
};
}

packages/cli/src/fastfs.rs

@ -0,0 +1,126 @@
//! Methods for working with the filesystem that are faster than the std fs methods
//! Uses stuff like rayon, caching, and other optimizations
//!
//! Allows configuration in case you want to do some work while copying and allows you to track progress
use std::{
ffi::OsString,
path::{Path, PathBuf},
};
use brotli::enc::BrotliEncoderParams;
use walkdir::WalkDir;
pub fn copy_asset(src: &Path, dest: &Path) -> std::io::Result<()> {
if src.is_dir() {
copy_dir_to(src, dest, false)?;
} else {
std::fs::copy(src, dest)?;
}
Ok(())
}
pub(crate) fn copy_dir_to(
src_dir: &Path,
dest_dir: &Path,
pre_compress: bool,
) -> std::io::Result<()> {
let entries = std::fs::read_dir(src_dir)?;
let mut children: Vec<std::thread::JoinHandle<std::io::Result<()>>> = Vec::new();
for entry in entries.flatten() {
let entry_path = entry.path();
let path_relative_to_src = entry_path.strip_prefix(src_dir).unwrap();
let output_file_location = dest_dir.join(path_relative_to_src);
children.push(std::thread::spawn(move || {
if entry.file_type()?.is_dir() {
// If the file is a directory, recursively copy it into the output directory
if let Err(err) = copy_dir_to(&entry_path, &output_file_location, pre_compress) {
tracing::error!(
"Failed to pre-compress directory {}: {}",
entry_path.display(),
err
);
}
} else {
// Make sure the directory exists
std::fs::create_dir_all(output_file_location.parent().unwrap())?;
// Copy the file to the output directory
std::fs::copy(&entry_path, &output_file_location)?;
// Then pre-compress the file if needed
if pre_compress {
if let Err(err) = pre_compress_file(&output_file_location) {
tracing::error!(
"Failed to pre-compress static assets {}: {}",
output_file_location.display(),
err
);
}
// If pre-compression isn't enabled, we should remove the old compressed file if it exists
} else if let Some(compressed_path) = compressed_path(&output_file_location) {
_ = std::fs::remove_file(compressed_path);
}
}
Ok(())
}));
}
for child in children {
child.join().unwrap()?;
}
Ok(())
}
/// Get the path to the compressed version of a file
fn compressed_path(path: &Path) -> Option<PathBuf> {
let new_extension = match path.extension() {
Some(ext) => {
if ext.to_string_lossy().to_lowercase().ends_with("br") {
return None;
}
let mut ext = ext.to_os_string();
ext.push(".br");
ext
}
None => OsString::from("br"),
};
Some(path.with_extension(new_extension))
}
/// pre-compress a file with brotli
pub(crate) fn pre_compress_file(path: &Path) -> std::io::Result<()> {
let Some(compressed_path) = compressed_path(path) else {
return Ok(());
};
let file = std::fs::File::open(path)?;
let mut stream = std::io::BufReader::new(file);
let mut buffer = std::fs::File::create(compressed_path)?;
let params = BrotliEncoderParams::default();
brotli::BrotliCompress(&mut stream, &mut buffer, &params)?;
Ok(())
}
/// pre-compress all files in a folder
pub(crate) fn pre_compress_folder(path: &Path, pre_compress: bool) -> std::io::Result<()> {
let walk_dir = WalkDir::new(path);
for entry in walk_dir.into_iter().filter_map(|e| e.ok()) {
let entry_path = entry.path();
if entry_path.is_file() {
if pre_compress {
if let Err(err) = pre_compress_file(entry_path) {
tracing::error!("Failed to pre-compress file {entry_path:?}: {err}");
}
}
// If pre-compression isn't enabled, we should remove the old compressed file if it exists
else if let Some(compressed_path) = compressed_path(entry_path) {
_ = std::fs::remove_file(compressed_path);
}
}
}
Ok(())
}
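// A standalone sketch of the brotli pre-compression used above; the input path is hypothetical.
fn pre_compress_sketch() -> std::io::Result<()> {
    use brotli::enc::BrotliEncoderParams;
    let file = std::fs::File::open("dist/index.html")?;
    let mut reader = std::io::BufReader::new(file);
    // Write the compressed copy next to the original so a static file server can pick it up.
    let mut out = std::fs::File::create("dist/index.html.br")?;
    brotli::BrotliCompress(&mut reader, &mut out, &BrotliEncoderParams::default())?;
    Ok(())
}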

packages/cli/src/filemap.rs

@ -0,0 +1,168 @@
use dioxus_core::internal::{
HotReloadTemplateWithLocation, HotReloadedTemplate, TemplateGlobalKey,
};
use dioxus_core_types::HotReloadingContext;
use dioxus_rsx::CallBody;
use dioxus_rsx_hotreload::{ChangedRsx, HotReloadResult};
use std::path::PathBuf;
use std::{collections::HashMap, path::Path};
use syn::spanned::Spanned;
/// A struct that stores state of rsx! files and their parsed bodies.
///
/// This keeps track of changes to rsx files and helps determine if a file can be hotreloaded or if
/// the project needs to be rebuilt.
pub(crate) struct HotreloadFilemap {
/// Map of rust files to their contents
///
/// Once this is created, we won't change the contents, to preserve the ability to hotreload
/// from the original source mapping, unless the file change results in a full rebuild.
map: HashMap<PathBuf, CachedFile>,
}
struct CachedFile {
contents: String,
most_recent: Option<String>,
templates: HashMap<TemplateGlobalKey, HotReloadedTemplate>,
}
pub enum HotreloadResult {
Rsx(Vec<HotReloadTemplateWithLocation>),
Notreloadable,
NotParseable,
}
impl HotreloadFilemap {
/// Create a new empty filemap.
///
/// Make sure to fill the filemap, either automatically with `fill_from_filesystem` or manually with `add_file`.
pub fn new() -> Self {
Self {
map: Default::default(),
}
}
/// Add a file to the filemap.
pub(crate) fn add_file(&mut self, path: PathBuf, contents: String) {
self.map.insert(
path,
CachedFile {
contents,
most_recent: None,
templates: Default::default(),
},
);
}
/// Commit the changes to the filemap, overwriting the contents of the files
///
/// Removes any cached templates and replaces the contents of the files with the most recent
///
/// todo: we should re-parse the contents so we never send a new version, ever
pub fn force_rebuild(&mut self) {
for cached_file in self.map.values_mut() {
if let Some(most_recent) = cached_file.most_recent.take() {
cached_file.contents = most_recent;
}
cached_file.templates.clear();
}
}
/// Try to update the rsx in a file, returning the templates that were hotreloaded
///
/// If the templates could not be hotreloaded, this will return an error. This error isn't fatal, per se,
/// but it does mean that we could not successfully hotreload the file in-place.
///
/// It's expected that the file path you pass in is relative the crate root. We have no way of
/// knowing if it's *not*, so we'll assume it is.
///
/// This does not do any caching on what intermediate state, like previous hotreloads, so you need
/// to do that yourself.
pub(crate) fn update_rsx<Ctx: HotReloadingContext>(
&mut self,
path: &Path,
new_contents: String,
) -> HotreloadResult {
// Get the cached file if it exists
let Some(cached_file) = self.map.get_mut(path) else {
return HotreloadResult::NotParseable;
};
// We assume we can parse the old file and the new file
// We should just ignore hotreloading files that we can't parse
// todo(jon): we could probably keep the old `File` around instead of re-parsing on every hotreload
let (Ok(old_file), Ok(new_file)) = (
syn::parse_file(&cached_file.contents),
syn::parse_file(&new_contents),
) else {
tracing::debug!("Diff rsx returned not parseable");
return HotreloadResult::NotParseable;
};
// Update the most recent version of the file, so when we force a rebuild, we keep operating on the most recent version
cached_file.most_recent = Some(new_contents);
// todo(jon): allow server-fn hotreloading
// also whyyyyyyyyy is this (new, old) instead of (old, new)? smh smh smh
let Some(changed_rsx) = dioxus_rsx_hotreload::diff_rsx(&new_file, &old_file) else {
tracing::debug!("Diff rsx returned notreladable");
return HotreloadResult::Notreloadable;
};
let mut out_templates = vec![];
for ChangedRsx { old, new } in changed_rsx {
let old_start = old.span().start();
let old_parsed = syn::parse2::<CallBody>(old.tokens);
let new_parsed = syn::parse2::<CallBody>(new.tokens);
let (Ok(old_call_body), Ok(new_call_body)) = (old_parsed, new_parsed) else {
continue;
};
// Format the template location, normalizing the path
let file_name: String = path
.components()
.map(|c| c.as_os_str().to_string_lossy())
.collect::<Vec<_>>()
.join("/");
// Returns a list of templates that are hotreloadable
let results = HotReloadResult::new::<Ctx>(
&old_call_body.body,
&new_call_body.body,
file_name.clone(),
);
// If no result is returned, we can't hotreload this file and need to keep the old file
let Some(results) = results else {
return HotreloadResult::Notreloadable;
};
// Only send down templates that have roots, and ideally ones that have changed
// todo(jon): maybe cache these and don't send them down if they're the same
for (index, template) in results.templates {
if template.roots.is_empty() {
continue;
}
// Create the key we're going to use to identify this template
let key = TemplateGlobalKey {
file: file_name.clone(),
line: old_start.line,
column: old_start.column + 1,
index,
};
// if the template is the same, don't send it
if cached_file.templates.get(&key) == Some(&template) {
continue;
};
cached_file.templates.insert(key.clone(), template.clone());
out_templates.push(HotReloadTemplateWithLocation { template, key });
}
}
HotreloadResult::Rsx(out_templates)
}
}
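// A standalone sketch of how the filemap is driven from the devserver loop. `HtmlCtx` is assumed
// to be the HotReloadingContext used for dioxus-html; the reporting here is hypothetical.
fn on_file_changed_sketch(
    filemap: &mut HotreloadFilemap,
    path: &std::path::Path,
    new_contents: String,
) {
    match filemap.update_rsx::<dioxus_html::HtmlCtx>(path, new_contents) {
        HotreloadResult::Rsx(templates) => {
            // push the changed templates to connected clients over the devsocket
            tracing::info!("hot-reloaded {} templates", templates.len());
        }
        HotreloadResult::Notreloadable => tracing::info!("change requires a full rebuild"),
        HotreloadResult::NotParseable => tracing::debug!("file does not parse; skipping"),
    }
}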


@ -1,88 +1,77 @@
#![doc = include_str!("../README.md")]
#![doc(html_logo_url = "https://avatars.githubusercontent.com/u/79236386")]
#![doc(html_favicon_url = "https://avatars.githubusercontent.com/u/79236386")]
#![cfg_attr(docsrs, feature(doc_cfg))]
pub mod assets;
pub mod builder;
pub mod cli;
pub mod config;
pub mod dioxus_crate;
pub mod dx_build_info;
pub mod error;
pub mod metadata;
pub mod serve;
pub mod settings;
pub mod tracer;
mod assets;
mod builder;
mod bundle_utils;
mod cli;
mod config;
mod dioxus_crate;
mod dx_build_info;
mod error;
mod fastfs;
mod filemap;
mod metadata;
mod platform;
mod profiles;
mod rustup;
mod serve;
mod settings;
mod tooling;
mod tracer;
pub(crate) use builder::*;
pub(crate) use cli::*;
pub(crate) use config::*;
pub(crate) use dioxus_crate::*;
pub(crate) use error::*;
pub(crate) use filemap::*;
pub(crate) use platform::*;
pub(crate) use rustup::*;
pub(crate) use settings::*;
pub(crate) use tracer::{TraceMsg, TraceSrc};
pub(crate) use tracer::*;
use anyhow::Context;
use clap::Parser;
use Commands::*;
#[tokio::main]
async fn main() -> anyhow::Result<()> {
let args = Cli::parse();
// If we're being run as a linker (likely from ourselves), we want to act as a linker instead.
if let Some(link_action) = link::LinkAction::from_env() {
return link_action.run();
}
let log_control = tracer::build_tracing();
// Start the tracer so it captures logs from the build engine before we start the builder
TraceController::initialize();
match args.action {
match Cli::parse().action {
Translate(opts) => opts
.translate()
.context(error_wrapper("Translation of HTML into RSX failed")),
.context("⛔️ Translation of HTML into RSX failed:"),
New(opts) => opts
.create()
.context(error_wrapper("Creating new project failed")),
New(opts) => opts.create().context("🚫 Creating new project failed:"),
Init(opts) => opts
.init()
.context(error_wrapper("Initializing a new project failed")),
Init(opts) => opts.init().context("🚫 Initializing a new project failed:"),
Config(opts) => opts
.config()
.context(error_wrapper("Configuring new project failed")),
Config(opts) => opts.config().context("🚫 Configuring new project failed:"),
Autoformat(opts) => opts
.autoformat()
.context(error_wrapper("Error autoformatting RSX")),
Autoformat(opts) => opts.autoformat().context("🚫 Error autoformatting RSX:"),
Check(opts) => opts
.check()
.await
.context(error_wrapper("Error checking RSX")),
Check(opts) => opts.check().await.context("🚫 Error checking RSX:"),
Link(opts) => opts
.link()
.context(error_wrapper("Error with linker passthrough")),
Clean(opts) => opts.clean().context("🚫 Cleaning project failed:"),
Build(mut opts) => opts
.run()
.await
.context(error_wrapper("Building project failed")),
Build(mut opts) => opts.build_it().await.context("🚫 Building project failed:"),
Clean(opts) => opts
.clean()
.context(error_wrapper("Cleaning project failed")),
Serve(opts) => opts.serve().await.context("🚫 Serving project failed:"),
Serve(opts) => opts
.serve(log_control)
.await
.context(error_wrapper("Serving project failed")),
Bundle(opts) => opts.bundle().await.context("🚫 Bundling project failed:"),
Bundle(opts) => opts
.bundle()
.await
.context(error_wrapper("Bundling project failed")),
Run(opts) => opts.run().await.context("🚫 Running project failed:"),
Doctor(opts) => opts.run().await.context("🚫 Checking project failed:"),
}
}
/// Simplifies error messages that use the same pattern.
fn error_wrapper(message: &str) -> String {
format!("🚫 {message}:")
}


@ -8,12 +8,12 @@ use std::{
};
#[derive(Debug, Clone)]
pub struct CargoError {
pub(crate) struct CargoError {
msg: String,
}
impl CargoError {
pub fn new(msg: String) -> Self {
pub(crate) fn new(msg: String) -> Self {
Self { msg }
}
}


@ -0,0 +1,186 @@
use serde::{Deserialize, Serialize};
use std::fmt::Display;
use std::str::FromStr;
#[derive(
Copy,
Clone,
Hash,
PartialEq,
Eq,
PartialOrd,
Ord,
Serialize,
Deserialize,
Debug,
Default,
clap::ValueEnum,
)]
#[non_exhaustive]
pub(crate) enum Platform {
/// Targeting the web platform using WASM
#[clap(name = "web")]
#[serde(rename = "web")]
#[default]
Web,
/// Targeting macos desktop
/// When running on macos, you can also use `--platform desktop` to build for the desktop
#[cfg_attr(target_os = "macos", clap(alias = "desktop"))]
#[clap(name = "macos")]
#[serde(rename = "macos")]
MacOS,
/// Targeting windows desktop
/// When running on windows, you can also use `--platform desktop` to build for the desktop
#[cfg_attr(target_os = "windows", clap(alias = "desktop"))]
#[clap(name = "windows")]
#[serde(rename = "windows")]
Windows,
/// Targeting linux desktop
/// When running on linux, you can also use `--platform desktop` to build for the desktop
#[cfg_attr(target_os = "linux", clap(alias = "desktop"))]
#[clap(name = "linux")]
#[serde(rename = "linux")]
Linux,
/// Targeting the ios platform
///
/// Can't work properly if you're not building from an Apple device.
#[clap(name = "ios")]
#[serde(rename = "ios")]
Ios,
/// Targeting the android platform
#[clap(name = "android")]
#[serde(rename = "android")]
Android,
/// Targeting the server platform using Axum and Dioxus-Fullstack
///
/// This is implicitly passed if `fullstack` is enabled as a feature. Using this variant simply
/// means you're only building the server variant without the `.wasm` to serve.
#[clap(name = "server")]
#[serde(rename = "server")]
Server,
/// Targeting the liveview platform using SSR and Dioxus-Fullstack
#[clap(name = "liveview")]
#[serde(rename = "liveview")]
Liveview,
}
/// An error that occurs when a platform is not recognized
pub(crate) struct UnknownPlatformError;
impl std::fmt::Display for UnknownPlatformError {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
write!(f, "Unknown platform")
}
}
impl FromStr for Platform {
type Err = UnknownPlatformError;
fn from_str(s: &str) -> Result<Self, Self::Err> {
match s {
"web" => Ok(Self::Web),
"macos" => Ok(Self::MacOS),
"windows" => Ok(Self::Windows),
"linux" => Ok(Self::Linux),
"liveview" => Ok(Self::Liveview),
"server" => Ok(Self::Server),
"ios" => Ok(Self::Ios),
"android" => Ok(Self::Android),
_ => Err(UnknownPlatformError),
}
}
}
impl Display for Platform {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
f.write_str(match self {
Platform::Web => "web",
Platform::MacOS => "macos",
Platform::Windows => "windows",
Platform::Linux => "linux",
Platform::Ios => "ios",
Platform::Android => "android",
Platform::Server => "server",
Platform::Liveview => "liveview",
})
}
}
impl Platform {
/// Get the feature name for the platform in the dioxus crate
pub(crate) fn feature_name(&self) -> &str {
match self {
Platform::Web => "web",
Platform::MacOS => "desktop",
Platform::Windows => "desktop",
Platform::Linux => "desktop",
Platform::Server => "server",
Platform::Liveview => "liveview",
Platform::Ios => "mobile",
Platform::Android => "mobile",
}
}
/// Get the name of the folder we need to generate for this platform
///
/// Note that web and server share the same platform folder since we'll export the web folder as a bundle on its own
pub(crate) fn build_folder_name(&self) -> &'static str {
match self {
Platform::Web => "web",
Platform::Server => "web",
Platform::Liveview => "liveview",
Platform::Ios => "ios",
Platform::Android => "android",
Platform::Windows => "windows",
Platform::Linux => "linux",
Platform::MacOS => "macos",
}
}
pub(crate) fn expected_name(&self) -> &'static str {
match self {
Platform::Web => "Web",
Platform::MacOS => "Desktop MacOS",
Platform::Windows => "Desktop Windows",
Platform::Linux => "Desktop Linux",
Platform::Ios => "Mobile iOS",
Platform::Android => "Mobile Android",
Platform::Server => "Server",
Platform::Liveview => "Liveview",
}
}
pub(crate) fn autodetect_from_cargo_feature(feature: &str) -> Option<Self> {
match feature {
"web" => Some(Platform::Web),
"desktop" => {
#[cfg(target_os = "macos")]
{
Some(Platform::MacOS)
}
#[cfg(target_os = "windows")]
{
Some(Platform::Windows)
}
#[cfg(target_os = "linux")]
{
Some(Platform::Linux)
}
}
"mobile" => {
tracing::warn!("Could not autodetect mobile platform. Mobile platforms must be explicitly specified. Pass `--platform ios` or `--platform android` instead.");
None
}
"liveview" => Some(Platform::Liveview),
"server" => Some(Platform::Server),
_ => None,
}
}
}
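// A standalone sanity-check sketch of the mappings above.
#[test]
fn platform_mapping_sketch() {
    assert_eq!(
        Platform::autodetect_from_cargo_feature("web"),
        Some(Platform::Web)
    );
    assert_eq!(
        Platform::autodetect_from_cargo_feature("server"),
        Some(Platform::Server)
    );
    // "mobile" is ambiguous between ios and android, so it cannot be autodetected
    assert_eq!(Platform::autodetect_from_cargo_feature("mobile"), None);
    assert_eq!(Platform::Ios.feature_name(), "mobile");
    // web and server share the same build folder
    assert_eq!(Platform::Web.build_folder_name(), "web");
    assert_eq!(Platform::Server.build_folder_name(), "web");
}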

packages/cli/src/rustup.rs

@ -0,0 +1,170 @@
use crate::Result;
use anyhow::Context;
use std::path::PathBuf;
use tokio::process::Command;
#[derive(Debug, Default)]
pub struct RustupShow {
pub default_host: String,
pub rustup_home: PathBuf,
pub installed_toolchains: Vec<String>,
pub installed_targets: Vec<String>,
pub active_rustc: String,
pub active_toolchain: String,
}
impl RustupShow {
/// Collect the output of `rustup show` and parse it
pub async fn from_cli() -> Result<RustupShow> {
let output = Command::new("rustup").args(["show"]).output().await?;
let stdout =
String::from_utf8(output.stdout).context("Failed to parse rustup show output")?;
Ok(RustupShow::from_stdout(stdout))
}
/// Parse the output of `rustup show`
pub fn from_stdout(output: String) -> RustupShow {
// I apologize for this hand-rolled parser
let mut result = RustupShow::default();
let mut current_section = "";
for line in output.lines() {
let line = line.trim();
if line.is_empty() {
continue;
}
if line.starts_with("Default host: ") {
result.default_host = line.strip_prefix("Default host: ").unwrap().to_string();
} else if line.starts_with("rustup home: ") {
result.rustup_home =
PathBuf::from(line.strip_prefix("rustup home: ").unwrap().trim());
} else if line == "installed toolchains" {
current_section = "toolchains";
} else if line == "installed targets for active toolchain" {
current_section = "targets";
} else if line == "active toolchain" {
current_section = "active_toolchain";
} else {
if line.starts_with("---") || line.is_empty() {
continue;
}
match current_section {
"toolchains" => result
.installed_toolchains
.push(line.trim_end_matches(" (default)").to_string()),
"targets" => result.installed_targets.push(line.to_string()),
"active_toolchain" => {
if result.active_toolchain.is_empty() {
result.active_toolchain = line.to_string();
} else if line.starts_with("rustc ") {
result.active_rustc = line.to_string();
}
}
_ => {}
}
}
}
result
}
pub fn has_wasm32_unknown_unknown(&self) -> bool {
self.installed_targets
.contains(&"wasm32-unknown-unknown".to_string())
}
}
#[test]
fn parses_rustup_show() {
let output = r#"
Default host: aarch64-apple-darwin
rustup home: /Users/jonkelley/.rustup
installed toolchains
--------------------
stable-aarch64-apple-darwin (default)
nightly-2021-07-06-aarch64-apple-darwin
nightly-2021-09-24-aarch64-apple-darwin
nightly-2022-03-10-aarch64-apple-darwin
nightly-2023-03-18-aarch64-apple-darwin
nightly-2024-01-11-aarch64-apple-darwin
nightly-aarch64-apple-darwin
1.58.1-aarch64-apple-darwin
1.60.0-aarch64-apple-darwin
1.68.2-aarch64-apple-darwin
1.69.0-aarch64-apple-darwin
1.71.1-aarch64-apple-darwin
1.72.1-aarch64-apple-darwin
1.73.0-aarch64-apple-darwin
1.74.1-aarch64-apple-darwin
1.77.2-aarch64-apple-darwin
1.78.0-aarch64-apple-darwin
1.79.0-aarch64-apple-darwin
1.49-aarch64-apple-darwin
1.55-aarch64-apple-darwin
1.56-aarch64-apple-darwin
1.57-aarch64-apple-darwin
1.66-aarch64-apple-darwin
1.69-aarch64-apple-darwin
1.70-aarch64-apple-darwin
1.74-aarch64-apple-darwin
installed targets for active toolchain
--------------------------------------
aarch64-apple-darwin
aarch64-apple-ios
aarch64-apple-ios-sim
aarch64-linux-android
aarch64-unknown-linux-gnu
armv7-linux-androideabi
i686-linux-android
thumbv6m-none-eabi
thumbv7em-none-eabihf
wasm32-unknown-unknown
x86_64-apple-darwin
x86_64-apple-ios
x86_64-linux-android
x86_64-pc-windows-msvc
x86_64-unknown-linux-gnu
active toolchain
----------------
stable-aarch64-apple-darwin (default)
rustc 1.79.0 (129f3b996 2024-06-10)
"#;
let show = RustupShow::from_stdout(output.to_string());
assert_eq!(show.default_host, "aarch64-apple-darwin");
assert_eq!(show.rustup_home, PathBuf::from("/Users/jonkelley/.rustup"));
assert_eq!(
show.active_toolchain,
"stable-aarch64-apple-darwin (default)"
);
assert_eq!(show.active_rustc, "rustc 1.79.0 (129f3b996 2024-06-10)");
assert_eq!(show.installed_toolchains.len(), 26);
assert_eq!(show.installed_targets.len(), 15);
assert_eq!(
show.installed_targets,
vec![
"aarch64-apple-darwin".to_string(),
"aarch64-apple-ios".to_string(),
"aarch64-apple-ios-sim".to_string(),
"aarch64-linux-android".to_string(),
"aarch64-unknown-linux-gnu".to_string(),
"armv7-linux-androideabi".to_string(),
"i686-linux-android".to_string(),
"thumbv6m-none-eabi".to_string(),
"thumbv7em-none-eabihf".to_string(),
"wasm32-unknown-unknown".to_string(),
"x86_64-apple-darwin".to_string(),
"x86_64-apple-ios".to_string(),
"x86_64-linux-android".to_string(),
"x86_64-pc-windows-msvc".to_string(),
"x86_64-unknown-linux-gnu".to_string(),
]
)
}


@ -0,0 +1,172 @@
use ratatui::prelude::*;
use std::fmt::{self, Display, Formatter};
/// A buffer that can be rendered to and then dumped as raw ansi codes
///
/// This is taken from a PR on the ratatui repo (https://github.com/ratatui/ratatui/pull/1065) and
/// modified to be more appropriate for our use case.
pub struct AnsiStringBuffer {
buf: Buffer,
}
// The sentinel character used to mark the end of the ansi string so when we dump it, we know where to stop
// Not sure if we actually still need this....
const SENTINEL: &str = "";
impl AnsiStringBuffer {
/// Creates a new `AnsiStringBuffer` with the given width and height.
pub(crate) fn new(width: u16, height: u16) -> Self {
Self {
buf: Buffer::empty(Rect::new(0, 0, width, height)),
}
}
/// Renders the given widget to the buffer, returning the string with the ansi codes
pub(crate) fn render(mut self, widget: impl Widget) -> String {
widget.render(self.buf.area, &mut self.buf);
self.trim_end();
self.to_string()
}
/// Marks the end of each line's content with the sentinel so trailing empty cells aren't emitted when the buffer is dumped
#[allow(deprecated)]
fn trim_end(&mut self) {
for y in 0..self.buf.area.height {
let start_x = self.buf.area.width;
let mut first_non_empty = start_x - 1;
for x in (0..start_x).rev() {
if self.buf.get(x, y) != &buffer::Cell::EMPTY {
break;
}
first_non_empty = x;
}
self.buf.get_mut(first_non_empty, y).set_symbol(SENTINEL);
}
}
fn write_fmt(&self, f: &mut Formatter<'_>) -> fmt::Result {
let mut last_style = None;
for y in 0..self.buf.area.height {
for x in 0..self.buf.area.width {
let cell = self.buf.cell((x, y)).unwrap();
if cell.symbol() == SENTINEL {
f.write_str("\n")?;
break;
}
let style = (cell.fg, cell.bg, cell.modifier);
if last_style.is_none() || last_style != Some(style) {
write_cell_style(f, cell)?;
last_style = Some(style);
}
f.write_str(cell.symbol())?;
}
}
f.write_str("\u{1b}[0m")
}
}
impl Display for AnsiStringBuffer {
fn fmt(&self, f: &mut Formatter) -> fmt::Result {
self.write_fmt(f)
}
}
fn write_cell_style(f: &mut Formatter, cell: &buffer::Cell) -> fmt::Result {
f.write_str("\u{1b}[")?;
write_modifier(f, cell.modifier)?;
write_fg(f, cell.fg)?;
write_bg(f, cell.bg)?;
f.write_str("m")
}
fn write_modifier(f: &mut Formatter, modifier: Modifier) -> fmt::Result {
if modifier.contains(Modifier::BOLD) {
f.write_str("1;")?;
}
if modifier.contains(Modifier::DIM) {
f.write_str("2;")?;
}
if modifier.contains(Modifier::ITALIC) {
f.write_str("3;")?;
}
if modifier.contains(Modifier::UNDERLINED) {
f.write_str("4;")?;
}
if modifier.contains(Modifier::SLOW_BLINK) {
f.write_str("5;")?;
}
if modifier.contains(Modifier::RAPID_BLINK) {
f.write_str("6;")?;
}
if modifier.contains(Modifier::REVERSED) {
f.write_str("7;")?;
}
if modifier.contains(Modifier::HIDDEN) {
f.write_str("8;")?;
}
if modifier.contains(Modifier::CROSSED_OUT) {
f.write_str("9;")?;
}
Ok(())
}
fn write_fg(f: &mut Formatter, color: Color) -> fmt::Result {
f.write_str(match color {
Color::Reset => "39",
Color::Black => "30",
Color::Red => "31",
Color::Green => "32",
Color::Yellow => "33",
Color::Blue => "34",
Color::Magenta => "35",
Color::Cyan => "36",
Color::Gray => "37",
Color::DarkGray => "90",
Color::LightRed => "91",
Color::LightGreen => "92",
Color::LightYellow => "93",
Color::LightBlue => "94",
Color::LightMagenta => "95",
Color::LightCyan => "96",
Color::White => "97",
_ => "",
})?;
if let Color::Rgb(red, green, blue) = color {
f.write_fmt(format_args!("38;2;{red};{green};{blue}"))?;
}
if let Color::Indexed(i) = color {
f.write_fmt(format_args!("38;5;{i}"))?;
}
f.write_str(";")
}
fn write_bg(f: &mut Formatter, color: Color) -> fmt::Result {
f.write_str(match color {
Color::Reset => "49",
Color::Black => "40",
Color::Red => "41",
Color::Green => "42",
Color::Yellow => "43",
Color::Blue => "44",
Color::Magenta => "45",
Color::Cyan => "46",
Color::Gray => "47",
Color::DarkGray => "100",
Color::LightRed => "101",
Color::LightGreen => "102",
Color::LightYellow => "103",
Color::LightBlue => "104",
Color::LightMagenta => "105",
Color::LightCyan => "106",
Color::White => "107",
_ => "",
})?;
if let Color::Rgb(red, green, blue) = color {
f.write_fmt(format_args!("48;2;{red};{green};{blue}"))?;
}
if let Color::Indexed(i) = color {
f.write_fmt(format_args!("48;5;{i}"))?;
}
Ok(())
}
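// A standalone usage sketch: render a ratatui widget into a plain ANSI string for stdout.
fn ansi_buffer_sketch() {
    use ratatui::widgets::Paragraph;
    let buf = AnsiStringBuffer::new(40, 1);
    let ansi = buf.render(
        Paragraph::new("Serving your app").style(Style::new().add_modifier(Modifier::BOLD)),
    );
    print!("{ansi}"); // the text plus the bold escape code, ending in a reset
}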


@ -1,189 +0,0 @@
use crate::builder::BuildRequest;
use crate::builder::BuildResult;
use crate::builder::TargetPlatform;
use crate::builder::UpdateBuildProgress;
use crate::dioxus_crate::DioxusCrate;
use crate::serve::next_or_pending;
use crate::serve::Serve;
use crate::Result;
use futures_channel::mpsc::UnboundedReceiver;
use futures_util::future::OptionFuture;
use futures_util::stream::select_all;
use futures_util::StreamExt;
use std::process::Stdio;
use tokio::{
process::{Child, Command},
task::JoinHandle,
};
/// A handle to ongoing builds and then the spawned tasks themselves
pub struct Builder {
/// The results of the build
build_results: Option<JoinHandle<Result<Vec<BuildResult>>>>,
/// The progress of the builds
build_progress: Vec<(TargetPlatform, UnboundedReceiver<UpdateBuildProgress>)>,
/// The application we are building
config: DioxusCrate,
/// The arguments for the build
serve: Serve,
/// The children of the build process
pub children: Vec<(TargetPlatform, Child)>,
}
impl Builder {
/// Create a new builder
pub fn new(config: &DioxusCrate, serve: &Serve) -> Self {
let serve = serve.clone();
let config = config.clone();
Self {
build_results: None,
build_progress: Vec::new(),
config: config.clone(),
serve,
children: Vec::new(),
}
}
/// Start a new build - killing the current one if it exists
pub fn build(&mut self) -> Result<()> {
self.shutdown();
let build_requests =
BuildRequest::create(true, &self.config, self.serve.build_arguments.clone())?;
let mut set = tokio::task::JoinSet::new();
for build_request in build_requests {
let (mut tx, rx) = futures_channel::mpsc::unbounded();
self.build_progress
.push((build_request.target_platform, rx));
set.spawn(async move {
let res = build_request.build(tx.clone()).await;
if let Err(err) = &res {
let _ = tx.start_send(UpdateBuildProgress {
stage: crate::builder::Stage::Finished,
update: crate::builder::UpdateStage::Failed(format!("{err}")),
});
}
res
});
}
self.build_results = Some(tokio::spawn(async move {
let mut all_results = Vec::new();
while let Some(result) = set.join_next().await {
let res = result.map_err(|err| {
crate::Error::Unique(format!("Panic while building project: {err:?}"))
})??;
all_results.push(res);
}
Ok(all_results)
}));
Ok(())
}
/// Wait for any new updates to the builder - either it completed or gave us a message etc
pub async fn wait(&mut self) -> Result<BuilderUpdate> {
// Wait for build progress
let mut next = select_all(
self.build_progress
.iter_mut()
.map(|(platform, rx)| rx.map(move |update| (*platform, update))),
);
let next = next_or_pending(next.next());
// The ongoing builds directly
let results: OptionFuture<_> = self.build_results.as_mut().into();
let results = next_or_pending(results);
// The process exits
let children_empty = self.children.is_empty();
let process_exited = self
.children
.iter_mut()
.map(|(target, child)| Box::pin(async move { (*target, child.wait().await) }));
let process_exited = async move {
if children_empty {
return futures_util::future::pending().await;
}
futures_util::future::select_all(process_exited).await
};
// Wait for the next build result
tokio::select! {
build_results = results => {
self.build_results = None;
// If we have a build result, bubble it up to the main loop
let build_results = build_results.map_err(|_| crate::Error::Unique("Build join failed".to_string()))??;
Ok(BuilderUpdate::Ready { results: build_results })
}
(platform, update) = next => {
// If we have a build progress, send it to the screen
Ok(BuilderUpdate::Progress { platform, update })
}
((target, exit_status), _, _) = process_exited => {
Ok(BuilderUpdate::ProcessExited { status: exit_status, target_platform: target })
}
}
}
/// Shutdown the current build process
pub(crate) fn shutdown(&mut self) {
for (_, mut child) in self.children.drain(..) {
// Gracefully shut down the desktop app
// It might have a receiver to do some cleanup stuff
if let Some(pid) = child.id() {
// on unix, we can send a signal to the process to shut down
#[cfg(unix)]
{
_ = Command::new("kill")
.args(["-s", "TERM", &pid.to_string()])
.stderr(Stdio::null())
.stdout(Stdio::null())
.spawn();
}
// on windows, use the `taskkill` command
#[cfg(windows)]
{
_ = Command::new("taskkill")
.args(["/F", "/PID", &pid.to_string()])
.stderr(Stdio::null())
.stdout(Stdio::null())
.spawn();
}
}
// Todo: add a timeout here to kill the process if it doesn't shut down within a reasonable time
_ = child.start_kill();
}
if let Some(tasks) = self.build_results.take() {
tasks.abort();
}
self.build_progress.clear();
}
}
pub enum BuilderUpdate {
Progress {
platform: TargetPlatform,
update: UpdateBuildProgress,
},
Ready {
results: Vec<BuildResult>,
},
ProcessExited {
target_platform: TargetPlatform,
status: Result<std::process::ExitStatus, std::io::Error>,
},
}


@ -0,0 +1,32 @@
/// Detects if `dx` is being run in a WSL environment.
///
/// We determine this based on whether the keyword `microsoft` or `wsl` is contained within the [`WSL_1`] or [`WSL_2`] files.
/// This may fail in the future as it isn't guaranteed by Microsoft.
/// See https://github.com/microsoft/WSL/issues/423#issuecomment-221627364
pub(crate) fn is_wsl() -> bool {
const WSL_1: &str = "/proc/sys/kernel/osrelease";
const WSL_2: &str = "/proc/version";
const WSL_KEYWORDS: [&str; 2] = ["microsoft", "wsl"];
// Test 1st File
if let Ok(content) = std::fs::read_to_string(WSL_1) {
let lowercase = content.to_lowercase();
for keyword in WSL_KEYWORDS {
if lowercase.contains(keyword) {
return true;
}
}
}
// Test 2nd File
if let Ok(content) = std::fs::read_to_string(WSL_2) {
let lowercase = content.to_lowercase();
for keyword in WSL_KEYWORDS {
if lowercase.contains(keyword) {
return true;
}
}
}
false
}


@ -0,0 +1,451 @@
use crate::{AppBundle, Platform, Result};
use anyhow::Context;
use std::{
net::SocketAddr,
path::{Path, PathBuf},
process::Stdio,
};
use tokio::{
io::{AsyncBufReadExt, BufReader, Lines},
process::{Child, ChildStderr, ChildStdout, Command},
};
/// A handle to a running app.
///
/// Also includes a handle to its server if it exists.
/// The actual child processes might not be present (web) or running (died/killed).
///
/// The purpose of this struct is to accumulate state about the running app and its server, like
/// any runtime information needed to hotreload the app or send it messages.
///
/// We might want to bring in websockets here too, so we know the exact channels the app is using to
/// communicate with the devserver. Currently that's a broadcast-type system, so this struct isn't super
/// duper useful.
pub(crate) struct AppHandle {
pub(crate) app: AppBundle,
// These might be None if the app died or the user did not specify a server
pub(crate) app_child: Option<Child>,
pub(crate) server_child: Option<Child>,
// stdio for the app so we can read its stdout/stderr
// we don't map stdin today (todo) but most apps don't need it
pub(crate) app_stdout: Option<Lines<BufReader<ChildStdout>>>,
pub(crate) app_stderr: Option<Lines<BufReader<ChildStderr>>>,
pub(crate) server_stdout: Option<Lines<BufReader<ChildStdout>>>,
pub(crate) server_stderr: Option<Lines<BufReader<ChildStderr>>>,
/// The virtual directory that assets will be served from
/// Used mostly for apk/ipa builds since they live in simulator
pub(crate) runtime_asst_dir: Option<PathBuf>,
}
impl AppHandle {
pub async fn new(app: AppBundle) -> Result<Self> {
Ok(AppHandle {
app,
runtime_asst_dir: None,
app_child: None,
app_stderr: None,
app_stdout: None,
server_child: None,
server_stdout: None,
server_stderr: None,
})
}
pub(crate) async fn open(
&mut self,
devserver_ip: SocketAddr,
fullstack_address: Option<SocketAddr>,
open_browser: bool,
) -> Result<()> {
if let Some(addr) = fullstack_address {
tracing::debug!("Proxying fullstack server from port {:?}", addr);
}
// Set the env vars that the clients will expect
// These need to be stable within a release version (ie 0.6.0)
let mut envs = vec![
("DIOXUS_CLI_ENABLED", "true".to_string()),
(
dioxus_cli_config::DEVSERVER_RAW_ADDR_ENV,
devserver_ip.to_string(),
),
// unset the cargo dirs in the event we're running `dx` locally
// since the child process will inherit the env vars, we don't want to confuse the downstream process
("CARGO_MANIFEST_DIR", "".to_string()),
];
if let Some(addr) = fullstack_address {
envs.push((dioxus_cli_config::SERVER_IP_ENV, addr.ip().to_string()));
envs.push((dioxus_cli_config::SERVER_PORT_ENV, addr.port().to_string()));
}
// Launch the server if we have one and consume its stdout/stderr
if let Some(server) = self.app.server_exe() {
tracing::debug!("Launching server from path: {server:?}");
let mut child = Command::new(server)
.envs(envs.clone())
.stderr(Stdio::piped())
.stdout(Stdio::piped())
.kill_on_drop(true)
.spawn()?;
let stdout = BufReader::new(child.stdout.take().unwrap());
let stderr = BufReader::new(child.stderr.take().unwrap());
self.server_stdout = Some(stdout.lines());
self.server_stderr = Some(stderr.lines());
self.server_child = Some(child);
}
// We try to use stdin/stdout to communicate with the app
let running_process = match self.app.build.build.platform() {
// Unfortunately web won't let us get a proc handle to it (to read its stdout/stderr) so instead
// we use the websocket to communicate with it. I wish we could merge the concepts here,
// like say, opening the socket as a subprocess, but alas, it's simpler to do that somewhere else.
Platform::Web => {
// Only the first build we open the web app, after that the user knows it's running
if open_browser {
self.open_web(envs, devserver_ip);
}
None
}
Platform::Ios => Some(self.open_ios_sim(envs).await?),
// https://developer.android.com/studio/run/emulator-commandline
Platform::Android => {
tracing::error!("Android is not yet supported, sorry!");
None
}
// These are all just basically running the main exe, but with slightly different resource dir paths
Platform::Server
| Platform::MacOS
| Platform::Windows
| Platform::Linux
| Platform::Liveview => Some(self.open_with_main_exe(envs)?),
};
// If we have a running process, we need to attach to it and wait for its outputs
if let Some(mut child) = running_process {
let stdout = BufReader::new(child.stdout.take().unwrap());
let stderr = BufReader::new(child.stderr.take().unwrap());
self.app_stdout = Some(stdout.lines());
self.app_stderr = Some(stderr.lines());
self.app_child = Some(child);
}
Ok(())
}
/// Hotreload an asset in the running app.
///
/// This will modify the build dir in place! Be careful! We generally assume you want all bundles
/// to reflect the latest changes, so we will modify the bundle.
///
/// However, not all platforms work like this, so we might also need to update a separate asset
/// dir that the system simulator might be providing. We know this is the case for ios simulators
/// and haven't yet checked for android.
///
/// This will return the bundled name of the asset such that we can send it to the clients letting
/// them know what to reload. It's not super important that this is robust since most clients will
/// kick all stylesheets without necessarily checking the name.
pub(crate) fn hotreload_bundled_asset(&self, changed_file: &PathBuf) -> Option<PathBuf> {
let mut bundled_name = None;
// Use the runtime asset dir as the override if one is set, otherwise fall back to the build dir.
// For iOS apps, we won't actually be using the build dir.
let asset_dir = match self.runtime_asst_dir.as_ref() {
Some(dir) => dir.to_path_buf().join("assets/"),
None => self.app.asset_dir(),
};
tracing::debug!("Hotreloading asset {changed_file:?} in target {asset_dir:?}");
// If the asset shares the same name in the bundle, reload that
let legacy_asset_dir = self.app.build.krate.legacy_asset_dir();
if changed_file.starts_with(&legacy_asset_dir) {
tracing::debug!("Hotreloading legacy asset {changed_file:?}");
let trimmed = changed_file.strip_prefix(legacy_asset_dir).unwrap();
let res = std::fs::copy(changed_file, asset_dir.join(trimmed));
bundled_name = Some(trimmed.to_path_buf());
if let Err(e) = res {
tracing::debug!("Failed to hotreload legacy asset {e}");
}
}
// The asset might've been renamed thanks to the manifest, let's attempt to reload that too
if let Some(resource) = self.app.app.assets.assets.get(changed_file).as_ref() {
let res = std::fs::copy(changed_file, asset_dir.join(&resource.bundled));
bundled_name = Some(PathBuf::from(&resource.bundled));
if let Err(e) = res {
tracing::debug!("Failed to hotreload asset {e}");
}
}
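// Illustrative example (hypothetical names): a change to "<crate>/assets/style.css" that the
// manifest bundled as "style-1a2b3c.css" is copied over the bundled copy, and bundled_name is
// set to "style-1a2b3c.css".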
// Now we can return the bundled asset name to send to the hotreload engine
bundled_name
}
/// Open the native app simply by running its main exe
///
/// Eventually, for mac, we want to run the `.app` with `open` to fix issues with `dylib` paths,
/// but for now, we just run the exe directly. Very few users should be caring about `dylib` search
/// paths right now, but they will when we start to enable things like swift integration.
///
/// Server/liveview/desktop are all basically the same, though
fn open_with_main_exe(&mut self, envs: Vec<(&str, String)>) -> Result<Child> {
let child = Command::new(self.app.main_exe())
.envs(envs)
.stderr(Stdio::piped())
.stdout(Stdio::piped())
.kill_on_drop(true)
.spawn()?;
Ok(child)
}
/// Open the web app by opening the browser to the given address.
/// Check if we need to use https or not, and if so, add the protocol.
/// Go to the basepath if that's set too.
fn open_web(&self, _envs: Vec<(&str, String)>, address: SocketAddr) {
let base_path = self.app.build.krate.config.web.app.base_path.clone();
let https = self
.app
.build
.krate
.config
.web
.https
.enabled
.unwrap_or_default();
let protocol = if https { "https" } else { "http" };
let base_path = match base_path.as_deref() {
Some(base_path) => format!("/{}", base_path.trim_matches('/')),
None => "".to_owned(),
};
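// Illustrative result (hypothetical values): with https enabled, an address of 127.0.0.1:8080,
// and base_path = Some("demo"), this opens "https://127.0.0.1:8080/demo".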
_ = open::that(format!("{protocol}://{address}{base_path}"));
}
/// Use `xcrun` to install the app to the simulator
/// With simulators, we're free to basically do anything, so we don't need to do any fancy codesigning
/// or entitlements, or anything like that.
///
/// However, if there's no simulator running, this *might* fail.
///
/// TODO(jon): we should probably check if there's a simulator running before trying to install,
/// and open the simulator if we have to.
async fn open_ios_sim(&mut self, envs: Vec<(&str, String)>) -> Result<Child> {
tracing::debug!("Installing app to simulator {:?}", self.app.app_dir());
let res = Command::new("xcrun")
.arg("simctl")
.arg("install")
.arg("booted")
.arg(self.app.app_dir())
.stderr(Stdio::piped())
.stdout(Stdio::piped())
.output()
.await?;
tracing::debug!("Installed app to simulator with exit code: {res:?}");
// Remap the envs to the correct simctl env vars
// iOS sim lets you pass env vars but they need to be in the format "SIMCTL_CHILD_XXX=XXX"
let ios_envs = envs
.iter()
.map(|(k, v)| (format!("SIMCTL_CHILD_{k}"), v.clone()));
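// e.g. a ("SERVER_PORT", "8081") pair (illustrative names/values) is forwarded to simctl as
// ("SIMCTL_CHILD_SERVER_PORT", "8081")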
let child = Command::new("xcrun")
.arg("simctl")
.arg("launch")
.arg("--console")
.arg("booted")
.arg("com.dioxuslabs")
.envs(ios_envs)
.stderr(Stdio::piped())
.stdout(Stdio::piped())
.kill_on_drop(true)
.spawn()?;
tracing::debug!("Launched app on simulator with exit code: {child:?}");
Ok(child)
}
/// We have this whole thing figured out, but we don't actually use it yet.
///
/// Launching on devices is more complicated and requires us to codesign the app, which we don't
/// currently do.
///
/// Converting these commands shouldn't be too hard, but device support would imply we need
/// better support for codesigning and entitlements.
#[allow(unused)]
async fn open_ios_device(&self) -> Result<()> {
// APP_PATH="target/aarch64-apple-ios/debug/bundle/ios/DioxusApp.app"
// # get the device id by jq-ing the json of the device list
// xcrun devicectl list devices --json-output target/deviceid.json
// DEVICE_UUID=$(jq -r '.result.devices[0].identifier' target/deviceid.json)
// xcrun devicectl device install app --device "${DEVICE_UUID}" "${APP_PATH}" --json-output target/xcrun.json
// # get the installation url by jq-ing the json of the device install
// INSTALLATION_URL=$(jq -r '.result.installedApplications[0].installationURL' target/xcrun.json)
// # launch the app
// # todo: we can just background it immediately and then pick it up for loading its logs
// xcrun devicectl device process launch --device "${DEVICE_UUID}" "${INSTALLATION_URL}"
// # # launch the app and put it in background
// # xcrun devicectl device process launch --no-activate --verbose --device "${DEVICE_UUID}" "${INSTALLATION_URL}" --json-output "${XCRUN_DEVICE_PROCESS_LAUNCH_LOG_DIR}"
// # # Extract background PID of status app
// # STATUS_PID=$(jq -r '.result.process.processIdentifier' "${XCRUN_DEVICE_PROCESS_LAUNCH_LOG_DIR}")
// # "${GIT_ROOT}/scripts/wait-for-metro-port.sh" 2>&1
// # # now that metro is ready, resume the app from background
// # xcrun devicectl device process resume --device "${DEVICE_UUID}" --pid "${STATUS_PID}" > "${XCRUN_DEVICE_PROCESS_RESUME_LOG_DIR}" 2>&1
use serde_json::Value;
let app_path = self.app.app_dir();
install_app(&app_path).await?;
// 2. Determine which device the app was installed to
let device_uuid = get_device_uuid().await?;
// 3. Get the installation URL of the app
let installation_url = get_installation_url(&device_uuid, &app_path).await?;
// 4. Launch the app into the background, paused
launch_app_paused(&device_uuid, &installation_url).await?;
// 5. Pick up the paused app and resume it
resume_app(&device_uuid).await?;
async fn install_app(app_path: &PathBuf) -> Result<()> {
let output = Command::new("xcrun")
.args(["simctl", "install", "booted"])
.arg(app_path)
.output()
.await?;
if !output.status.success() {
return Err(format!("Failed to install app: {:?}", output).into());
}
Ok(())
}
async fn get_device_uuid() -> Result<String> {
let output = Command::new("xcrun")
.args([
"devicectl",
"list",
"devices",
"--json-output",
"target/deviceid.json",
])
.output()
.await?;
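// The json-output file is expected to look roughly like this (abridged, illustrative):
//   { "result": { "devices": [ { "identifier": "<DEVICE-UUID>", ... } ] } }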
let json: Value =
serde_json::from_str(&std::fs::read_to_string("target/deviceid.json")?)
.context("Failed to parse xcrun output")?;
let device_uuid = json["result"]["devices"][0]["identifier"]
.as_str()
.ok_or("Failed to extract device UUID")?
.to_string();
Ok(device_uuid)
}
async fn get_installation_url(device_uuid: &str, app_path: &Path) -> Result<String> {
let output = Command::new("xcrun")
.args([
"devicectl",
"device",
"install",
"app",
"--device",
device_uuid,
&app_path.display().to_string(),
"--json-output",
"target/xcrun.json",
])
.output()
.await?;
if !output.status.success() {
return Err(format!("Failed to install app: {:?}", output).into());
}
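// Abridged, illustrative shape of target/xcrun.json:
//   { "result": { "installedApplications": [ { "installationURL": "file://..." } ] } }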
let json: Value = serde_json::from_str(&std::fs::read_to_string("target/xcrun.json")?)
.context("Failed to parse xcrun output")?;
let installation_url = json["result"]["installedApplications"][0]["installationURL"]
.as_str()
.ok_or("Failed to extract installation URL")?
.to_string();
Ok(installation_url)
}
async fn launch_app_paused(device_uuid: &str, installation_url: &str) -> Result<()> {
let output = Command::new("xcrun")
.args([
"devicectl",
"device",
"process",
"launch",
"--no-activate",
"--verbose",
"--device",
device_uuid,
installation_url,
"--json-output",
"target/launch.json",
])
.output()
.await?;
if !output.status.success() {
return Err(format!("Failed to launch app: {:?}", output).into());
}
Ok(())
}
async fn resume_app(device_uuid: &str) -> Result<()> {
let json: Value = serde_json::from_str(&std::fs::read_to_string("target/launch.json")?)
.context("Failed to parse xcrun output")?;
let status_pid = json["result"]["process"]["processIdentifier"]
.as_u64()
.ok_or("Failed to extract process identifier")?;
let output = Command::new("xcrun")
.args([
"devicectl",
"device",
"process",
"resume",
"--device",
device_uuid,
"--pid",
&status_pid.to_string(),
])
.output()
.await?;
if !output.status.success() {
return Err(format!("Failed to resume app: {:?}", output).into());
}
Ok(())
}
unimplemented!("dioxus-cli doesn't support ios devices yet.")
}
}

@@ -1,313 +0,0 @@
use dioxus_core::internal::{HotReloadTemplateWithLocation, HotReloadedTemplate};
use dioxus_core_types::HotReloadingContext;
use dioxus_rsx::CallBody;
use dioxus_rsx_hotreload::{diff_rsx, ChangedRsx};
use krates::cm::MetadataCommand;
use krates::Cmd;
pub use std::collections::HashMap;
use std::{ffi::OsStr, path::PathBuf};
pub use std::{fs, io, path::Path};
pub use std::{fs::File, io::Read};
use syn::spanned::Spanned;
pub struct FileMap {
pub map: HashMap<PathBuf, CachedSynFile>,
/// Any errors that occurred while building the FileMap that were not fatal
pub errors: Vec<io::Error>,
pub in_workspace: HashMap<PathBuf, Option<PathBuf>>,
}
/// A cached file that has been parsed
///
/// We store the templates found in this file
pub struct CachedSynFile {
pub raw: String,
pub templates: HashMap<String, HotReloadedTemplate>,
}
impl FileMap {
/// Create a new FileMap from a crate directory
///
/// TODO: this should be created with a gitignore filter
pub fn create<Ctx: HotReloadingContext>(path: PathBuf) -> io::Result<FileMap> {
Self::create_with_filter::<Ctx>(path, |p| {
// skip some stuff we know is large by default
p.file_name() == Some(OsStr::new("target"))
|| p.file_name() == Some(OsStr::new("node_modules"))
})
}
/// Create a new FileMap from a crate directory
///
/// Takes a filter; when it returns true, the file is filtered out (i.e. not tracked).
/// Note that this is inverted from a typical .filter() method.
pub fn create_with_filter<Ctx: HotReloadingContext>(
crate_dir: PathBuf,
mut filter: impl FnMut(&Path) -> bool,
) -> io::Result<FileMap> {
let FileMapSearchResult { map, errors } = find_rs_files(crate_dir.clone(), &mut filter);
let mut map = Self {
map,
errors,
in_workspace: HashMap::new(),
};
map.load_assets::<Ctx>(crate_dir.as_path());
Ok(map)
}
/// Start watching assets for changes
///
/// This just diffs every file against itself and populates the tracked assets as it goes
pub fn load_assets<Ctx: HotReloadingContext>(&mut self, crate_dir: &Path) {
let keys = self.map.keys().cloned().collect::<Vec<_>>();
for file in keys {
_ = self.update_rsx::<Ctx>(file.as_path(), crate_dir);
}
}
/// Insert a file into the map and force a full rebuild
fn full_rebuild(&mut self, file_path: PathBuf, src: String) -> HotreloadError {
let cached_file = CachedSynFile {
raw: src.clone(),
templates: HashMap::new(),
};
self.map.insert(file_path, cached_file);
HotreloadError::Notreloadable
}
/// Try to update the rsx in a file
pub fn update_rsx<Ctx: HotReloadingContext>(
&mut self,
file_path: &Path,
crate_dir: &Path,
) -> Result<Vec<HotReloadTemplateWithLocation>, HotreloadError> {
let src = std::fs::read_to_string(file_path)?;
// If we can't parse the contents we want to pass it off to the build system to tell the user that there's a syntax error
let syntax = syn::parse_file(&src).map_err(|_err| HotreloadError::Parse)?;
let in_workspace = self.child_in_workspace(crate_dir)?;
// Get the cached file if it exists, otherwise try to create it
let Some(old_cached) = self.map.get_mut(file_path) else {
// if this is a new file, rebuild the project
let mut map = FileMap::create::<Ctx>(crate_dir.to_path_buf())?;
if let Some(err) = map.errors.pop() {
return Err(HotreloadError::Failure(err));
}
// merge the new map into the old map
self.map.extend(map.map);
return Err(HotreloadError::Notreloadable);
};
// If the cached file is not a valid rsx file, rebuild the project, forcing errors
// TODO: in theory the error is simply in the rsx CallBody. We could attempt to parse it using partial expansion
// and collect its errors instead of giving up and doing a full rebuild
let old = syn::parse_file(&old_cached.raw).map_err(|_e| HotreloadError::Parse)?;
let instances = match diff_rsx(&syntax, &old) {
// If the changes were just some rsx, we can just update the template
//
// However... if the changes involved code in the rsx itself, this should actually be a CodeChanged
Some(rsx_calls) => rsx_calls,
// If the changes were some code, we should insert the file into the map and rebuild
// todo: not sure we even need to put the cached file into the map, but whatever
None => {
return Err(self.full_rebuild(file_path.to_path_buf(), src));
}
};
let mut out_templates = vec![];
for calls in instances.into_iter() {
let ChangedRsx { old, new } = calls;
let old_start = old.span().start();
let old_parsed = syn::parse2::<CallBody>(old.tokens);
let new_parsed = syn::parse2::<CallBody>(new.tokens);
let (Ok(old_call_body), Ok(new_call_body)) = (old_parsed, new_parsed) else {
continue;
};
// if the file!() macro is invoked in a workspace, the path is relative to the workspace root, otherwise it's relative to the crate root
// we need to check if the file is in a workspace or not and strip the prefix accordingly
let prefix = match in_workspace {
Some(ref workspace) => workspace,
_ => crate_dir,
};
let Ok(file) = file_path.strip_prefix(prefix) else {
continue;
};
let template_location = template_location(old_start, file);
// Returns a list of templates that are hotreloadable
let hotreload_result = dioxus_rsx_hotreload::HotReloadResult::new::<Ctx>(
&old_call_body.body,
&new_call_body.body,
template_location.clone(),
);
// if the template is not hotreloadable, we need to do a full rebuild
let Some(mut results) = hotreload_result else {
return Err(self.full_rebuild(file_path.to_path_buf(), src));
};
// Be careful to not send the bad templates
results.templates.retain(|idx, template| {
// dioxus cannot handle empty templates...
if template.roots.is_empty() {
return false;
}
let template_location = format_template_name(&template_location, *idx);
// if the template is the same, don't send it
if old_cached.templates.get(&template_location) == Some(&*template) {
return false;
};
// Update the most recent idea of the template
// This lets us know if the template has changed so we don't need to send it
old_cached
.templates
.insert(template_location, template.clone());
true
});
out_templates.extend(results.templates.into_iter().map(|(idx, template)| {
HotReloadTemplateWithLocation {
location: format_template_name(&template_location, idx),
template,
}
}));
}
Ok(out_templates)
}
fn child_in_workspace(&mut self, crate_dir: &Path) -> io::Result<Option<PathBuf>> {
if let Some(in_workspace) = self.in_workspace.get(crate_dir) {
return Ok(in_workspace.clone());
}
let mut cmd = Cmd::new();
let manifest_path = crate_dir.join("Cargo.toml");
cmd.manifest_path(&manifest_path);
let cmd: MetadataCommand = cmd.into();
let metadata = cmd
.exec()
.map_err(|err| io::Error::new(io::ErrorKind::Other, err))?;
let in_workspace = metadata.workspace_root != crate_dir;
let workspace_path = in_workspace.then(|| metadata.workspace_root.into());
self.in_workspace
.insert(crate_dir.to_path_buf(), workspace_path.clone());
Ok(workspace_path)
}
}
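// Example (illustrative): an rsx! call starting at line 42, column 8 of src/app.rs yields
// "src/app.rs:42:9" - columns are converted to 1-based here.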
pub fn template_location(old_start: proc_macro2::LineColumn, file: &Path) -> String {
let line = old_start.line;
let column = old_start.column + 1;
// Always ensure the path components are separated by `/`.
let path = file
.components()
.map(|c| c.as_os_str().to_string_lossy())
.collect::<Vec<_>>()
.join("/");
path + ":" + line.to_string().as_str() + ":" + column.to_string().as_str()
}
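// Example (illustrative): format_template_name("src/app.rs:42:9", 0) == "src/app.rs:42:9:0"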
pub fn format_template_name(name: &str, index: usize) -> String {
format!("{}:{}", name, index)
}
struct FileMapSearchResult {
map: HashMap<PathBuf, CachedSynFile>,
errors: Vec<io::Error>,
}
// todo: we could just steal the mod logic from rustc itself
fn find_rs_files(root: PathBuf, filter: &mut impl FnMut(&Path) -> bool) -> FileMapSearchResult {
let mut files = HashMap::new();
let mut errors = Vec::new();
if root.is_dir() {
let read_dir = match fs::read_dir(root) {
Ok(read_dir) => read_dir,
Err(err) => {
errors.push(err);
return FileMapSearchResult { map: files, errors };
}
};
for entry in read_dir.flatten() {
let path = entry.path();
if !filter(&path) {
let FileMapSearchResult {
map,
errors: child_errors,
} = find_rs_files(path, filter);
errors.extend(child_errors);
files.extend(map);
}
}
} else if root.extension().and_then(|s| s.to_str()) == Some("rs") {
if let Ok(mut file) = File::open(root.clone()) {
let mut src = String::new();
match file.read_to_string(&mut src) {
Ok(_) => {
let cached_file = CachedSynFile {
raw: src.clone(),
templates: HashMap::new(),
};
// track assets while we're here
files.insert(root, cached_file);
}
Err(err) => {
errors.push(err);
}
}
}
}
FileMapSearchResult { map: files, errors }
}
#[derive(Debug)]
pub enum HotreloadError {
Failure(io::Error),
Parse,
Notreloadable,
}
impl std::fmt::Display for HotreloadError {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
match self {
Self::Failure(err) => write!(f, "Failed to read file: {}", err),
Self::Parse => write!(f, "Failed to parse file"),
Self::Notreloadable => write!(f, "Template is not hotreloadable"),
}
}
}
impl From<io::Error> for HotreloadError {
fn from(err: io::Error) -> Self {
HotreloadError::Failure(err)
}
}

@@ -1,253 +1,258 @@
use std::future::{poll_fn, Future, IntoFuture};
use std::task::Poll;
use crate::builder::OpenArguments;
use crate::cli::serve::Serve;
use crate::dioxus_crate::DioxusCrate;
use crate::tracer::CLILogControl;
use crate::Result;
use crate::{
builder::{Stage, TargetPlatform, UpdateBuildProgress, UpdateStage},
BuildUpdate, Builder, DioxusCrate, Error, Platform, Result, ServeArgs, TraceController,
TraceSrc,
};
use futures_util::FutureExt;
use tokio::task::yield_now;
mod builder;
mod hot_reloading_file_map;
mod logs_tab;
mod ansi_buffer;
mod detect;
mod handle;
mod output;
mod proxy;
mod render;
mod runner;
mod server;
mod update;
mod watcher;
use builder::*;
use output::*;
use server::*;
use watcher::*;
pub(crate) use handle::*;
pub(crate) use output::*;
pub(crate) use runner::*;
pub(crate) use server::*;
pub(crate) use update::*;
pub(crate) use watcher::*;
/// For *all* builds the CLI spins up a dedicated webserver, file watcher, and build infrastructure to serve the project.
/// For *all* builds, the CLI spins up a dedicated webserver, file watcher, and build infrastructure to serve the project.
///
/// This includes web, desktop, mobile, fullstack, etc.
///
/// Platform specifics:
/// -------------------
/// - Web: we need to attach a filesystem server to our devtools webserver to serve the project. We
/// want to emulate GithubPages here since most folks are deploying there and expect things like
/// basepath to match.
/// - Fullstack: We spin up the same dev server but in this case the fullstack server itself needs to
/// proxy all dev requests to our dev server
/// - Desktop: We spin up the dev server but without a filesystem server.
/// - Mobile: Basically the same as desktop.
///
/// Notes:
/// - All filesystem changes are tracked here
/// - We send all updates to connected websocket connections. Even desktop connects via the websocket
/// - Right now desktop compiles tokio-tungstenite to do the connection but we could in theory reuse
/// the websocket logic from the webview for thinner builds.
/// When fullstack is enabled, we'll also build for the `server` target and then hotreload the server.
/// The "server" is special here since "fullstack" is functionally just an addition to the regular client
/// setup.
///
/// Todos(Jon):
/// - I'd love to be able to configure the CLI while it's running so we can change settingaon the fly.
/// This would require some light refactoring and potentially pulling in something like ratatui.
/// - Build a custom subscriber for logs by tools within this
/// - Handle logs from the build engine separately?
/// - Consume logs from the wasm for web/fullstack
/// - I'd love to be able to configure the CLI while it's running so we can change settings on the fly.
/// - I want us to be able to detect a `server_fn` in the project and then upgrade from a static server
/// to a dynamic one on the fly.
pub async fn serve_all(
serve: Serve,
dioxus_crate: DioxusCrate,
log_control: CLILogControl,
) -> Result<()> {
// Start the screen first so we collect build logs.
let mut screen = Output::start(&serve, log_control).expect("Failed to open terminal logger");
let mut builder = Builder::new(&dioxus_crate, &serve);
pub(crate) async fn serve_all(args: ServeArgs, krate: DioxusCrate) -> Result<()> {
let mut tracer = TraceController::redirect();
// Start the first build
builder.build()?;
// Note that starting the builder will queue up a build immediately
let mut builder = Builder::start(&krate, args.build_args())?;
let mut devserver = WebServer::start(&krate, &args)?;
let mut watcher = Watcher::start(&krate, &args);
let mut runner = AppRunner::start(&krate);
let mut screen = Output::start(&args)?;
let mut server = Server::start(&serve, &dioxus_crate);
let mut watcher = Watcher::start(&serve, &dioxus_crate);
// This is our default splash screen. We might want to make this a fancier splash screen in the future
// Also, these commands might not be the most important, but it's all we've got enabled right now
tracing::info!(
r#"Serving your Dioxus app: {} 🚀
let is_hot_reload = serve.server_arguments.hot_reload.unwrap_or(true);
- Press `ctrl+c` to exit the server
- Press `r` to rebuild the app
- Press `o` to open the app
- Press `t` to toggle cargo output
- Press `/` for more commands and shortcuts
loop {
// Make sure we don't hog the CPU: these loop { select! {} } blocks can starve the executor
yield_now().await;
Learn more at https://dioxuslabs.com/learn/0.6/getting_started"#,
krate.executable_name()
);
let err: Result<(), Error> = loop {
// Draw the state of the server to the screen
screen.render(&serve, &dioxus_crate, &builder, &server, &watcher);
screen.render(&args, &krate, &builder, &devserver, &watcher);
// And then wait for any updates before redrawing
tokio::select! {
// rebuild the project or hotreload it
_ = watcher.wait(), if is_hot_reload => {
if !watcher.pending_changes() {
continue
let msg = tokio::select! {
msg = builder.wait() => ServeUpdate::BuildUpdate(msg),
msg = watcher.wait() => msg,
msg = devserver.wait() => msg,
msg = screen.wait() => msg,
msg = runner.wait() => msg,
msg = tracer.wait() => msg,
};
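// Each select! arm maps a subsystem's event into a single ServeUpdate so the match below can
// handle builder, watcher, devserver, screen, runner, and tracer events uniformly.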
match msg {
ServeUpdate::FilesChanged { files } => {
if files.is_empty() || !args.should_hotreload() {
continue;
}
let changed_files = watcher.dequeue_changed_files(&dioxus_crate);
let changed = changed_files.first().cloned();
let file = files[0].display().to_string();
let file = file.trim_start_matches(&krate.crate_dir().display().to_string());
// if change is hotreloadable, hotreload it
// and then send that update to all connected clients
if let Some(hr) = watcher.attempt_hot_reload(&dioxus_crate, changed_files) {
if let Some(hr) = runner.attempt_hot_reload(files) {
// Only send a hotreload message for templates and assets - otherwise we'll just get a full rebuild
if hr.templates.is_empty() && hr.assets.is_empty() && hr.unknown_files.is_empty() {
continue
if hr.templates.is_empty()
&& hr.assets.is_empty()
&& hr.unknown_files.is_empty()
{
tracing::debug!(dx_src = ?TraceSrc::Dev, "Ignoring file change: {}", file);
continue;
}
if let Some(changed_path) = changed {
let path_relative = changed_path.strip_prefix(dioxus_crate.crate_dir()).map(|p| p.display().to_string()).unwrap_or_else(|_| changed_path.display().to_string());
tracing::info!(dx_src = ?TraceSrc::Dev, "Hotreloaded {}", path_relative);
}
tracing::info!(dx_src = ?TraceSrc::Dev, "Hotreloading: {}", file);
devserver.send_hotreload(hr).await;
} else if runner.should_full_rebuild {
tracing::info!(dx_src = ?TraceSrc::Dev, "Full rebuild: {}", file);
server.send_hotreload(hr).await;
} else {
// If the change is not binary patchable, rebuild the project
// We're going to kick off a new build, interrupting the current build if it's ongoing
builder.build()?;
builder.rebuild(args.build_arguments.clone());
// Clear the hot reload changes
watcher.clear_hot_reload_changes();
// Clear the hot reload changes so we don't have out-of-sync issues with changed UI
runner.clear_hot_reload_changes();
runner.file_map.force_rebuild();
// Tell the server to show a loading page for any new requests
server.start_build().await;
devserver.start_build().await;
} else {
tracing::warn!(
"Rebuild required but is currently paused - press `r` to rebuild manually"
)
}
}
// reload the page
msg = server.wait() => {
// Run the server in the background
// Waiting for updates here lets us tap into when clients are added/removed
match msg {
Some(ServerUpdate::NewConnection) => {
if let Some(msg) = watcher.applied_hot_reload_changes() {
server.send_hotreload(msg).await;
}
}
Some(ServerUpdate::Message(msg)) => {
screen.new_ws_message(TargetPlatform::Web, msg);
}
None => {}
}
ServeUpdate::NewConnection => {
devserver
.send_hotreload(runner.applied_hot_reload_changes())
.await;
runner.client_connected().await;
}
// Received a message from the devtools server - currently we only use this for
// logging, so we just forward it the tui
ServeUpdate::WsMessage(msg) => {
screen.push_ws_message(Platform::Web, msg);
}
// Handle updates from the build engine
application = builder.wait() => {
// Wait for logs from the build engine
// These will cause us to update the screen
// We also can check the status of the builds here in case we have multiple ongoing builds
match application {
Ok(BuilderUpdate::Progress { platform, update }) => {
let update_clone = update.clone();
screen.new_build_progress(platform, update_clone);
server.update_build_status(screen.build_progress.progress(), update.stage.to_string()).await;
ServeUpdate::BuildUpdate(update) => {
// Queue any logs to be printed if need be
screen.new_build_update(&update);
// And then update the websocketed clients with the new build status in case they want it
devserver.new_build_update(&update).await;
// And then open the app if it's ready
// todo: there might be more things to do here that require coordination with other pieces of the CLI
// todo: maybe we want to shuffle the runner around to send an "open" command instead of doing that
match update {
// Send rebuild start message.
UpdateBuildProgress { stage: Stage::Compiling, update: UpdateStage::Start } => server.send_reload_start().await,
// Send rebuild failed message.
UpdateBuildProgress { stage: Stage::Finished, update: UpdateStage::Failed(_) } => server.send_reload_failed().await,
_ => {},
BuildUpdate::Progress { .. } => {}
BuildUpdate::CompilerMessage { message } => {
screen.push_cargo_log(message);
}
BuildUpdate::BuildFailed { err } => {
tracing::error!("Build failed: {}", err);
}
Ok(BuilderUpdate::Ready { results }) => {
if !results.is_empty() {
builder.children.clear();
}
// If we have a build result, open it
for build_result in results.iter() {
let child = build_result.open(
OpenArguments::new(
&serve.server_arguments,
server.fullstack_address(),
&dioxus_crate
BuildUpdate::BuildReady { bundle } => {
let handle = runner
.open(
bundle,
devserver.devserver_address(),
devserver.proxied_server_address(),
args.open.unwrap_or(false),
)
.await;
match handle {
// Update the screen + devserver with the new handle info
Ok(_handle) => {
devserver.send_reload_command().await;
}
Err(e) => tracing::error!("Failed to open app: {}", e),
}
}
}
}
// If the process exited *cleanly*, we can exit
ServeUpdate::ProcessExited { status, platform } => {
if !status.success() {
tracing::error!("Application [{platform}] exited with error: {status}");
} else {
tracing::info!(
r#"Application [{platform}] exited gracefully.
- To restart the app, press `r` to rebuild or `o` to open
- To exit the server, press `ctrl+c`"#
);
match child {
Ok(Some(child_proc)) => builder.children.push((build_result.target_platform, child_proc)),
Err(e) => {
tracing::error!(dx_src = ?TraceSrc::Build, "Failed to open build result: {e}");
break;
}
runner.kill(platform);
}
ServeUpdate::StdoutReceived { platform, msg } => {
screen.push_stdio(platform, msg, tracing::Level::INFO);
}
ServeUpdate::StderrReceived { platform, msg } => {
screen.push_stdio(platform, msg, tracing::Level::ERROR);
}
ServeUpdate::TracingLog { log } => {
screen.push_log(log);
}
ServeUpdate::RequestRebuild => {
// The spacing here is important-ish: we want
// `Full rebuild:` to line up with
// `Hotreloading:` to keep the alignment during long edit sessions
tracing::info!("Full rebuild: triggered manually");
builder.rebuild(args.build_arguments.clone());
runner.file_map.force_rebuild();
devserver.start_build().await
}
ServeUpdate::OpenApp => {
runner.open_existing(&devserver).await;
}
ServeUpdate::Redraw => {
// simply returning will cause a redraw
}
ServeUpdate::ToggleShouldRebuild => {
runner.should_full_rebuild = !runner.should_full_rebuild;
tracing::info!(
"Automatic rebuilds are currently: {}",
if runner.should_full_rebuild {
"enabled"
} else {
"disabled"
}
)
}
ServeUpdate::Exit { error } => match error {
Some(err) => break Err(anyhow::anyhow!("{}", err).into()),
None => break Ok(()),
},
_ => {}
}
}
};
// Make sure we immediately capture the stdout/stderr of the executable -
// otherwise it'll clobber our terminal output
screen.new_ready_app(&mut builder, results);
// And then finally tell the server to reload
server.send_reload_command().await;
},
// If the desktop process exited *cleanly*, we can exit
Ok(BuilderUpdate::ProcessExited { status, target_platform }) => {
// Then remove the child process
builder.children.retain(|(platform, _)| *platform != target_platform);
match (target_platform, status) {
(TargetPlatform::Desktop, Ok(status)) => {
if status.success() {
break;
}
else {
tracing::error!(dx_src = ?TraceSrc::Dev, "Application exited with status: {status}");
}
},
// Ignore the static generation platform exiting
(_ , Ok(_)) => {},
(_, Err(e)) => {
tracing::error!(dx_src = ?TraceSrc::Dev, "Application exited with error: {e}");
}
}
}
Err(err) => {
server.send_build_error(err).await;
}
}
}
// Handle input from the user using our settings
res = screen.wait() => {
match res {
Ok(false) => {}
// Request a rebuild.
Ok(true) => {
builder.build()?;
server.start_build().await
},
// Shutdown the server.
Err(_) => break,
}
}
}
}
// Run our cleanup logic here - maybe printing as we go?
// todo: more printing, logging, error handling in this phase
_ = devserver.shutdown().await;
_ = screen.shutdown();
_ = server.shutdown().await;
builder.shutdown();
builder.abort_all();
tracer.shutdown();
if let Err(err) = err {
eprintln!("Exiting with error: {}", err);
}
Ok(())
}
// Grab the output of a future that returns an option or wait forever
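// Usage sketch (assumed call site, not from the original code):
//   let msg = next_or_pending(websocket_rx.next()).await; // stays pending instead of yielding None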
pub(crate) fn next_or_pending<F, T>(f: F) -> impl Future<Output = T>
where
F: IntoFuture<Output = Option<T>>,
{
let pinned = f.into_future().fuse();
let mut pinned = Box::pin(pinned);
poll_fn(move |cx| {
let next = pinned.as_mut().poll(cx);
match next {
Poll::Ready(Some(next)) => Poll::Ready(next),
_ => Poll::Pending,
}
})
.fuse()
}

File diff suppressed because it is too large.
@@ -56,13 +56,13 @@ impl ProxyClient {
/// - the exact path of the proxy config's backend URL, e.g. /api
/// - the exact path with a trailing slash, e.g. /api/
/// - any subpath of the backend URL, e.g. /api/foo/bar
pub fn add_proxy(mut router: Router, proxy: &WebProxyConfig) -> Result<Router> {
pub(crate) fn add_proxy(mut router: Router, proxy: &WebProxyConfig) -> Result<Router> {
let url: Uri = proxy.backend.parse()?;
let path = url.path().to_string();
let trimmed_path = path.trim_start_matches('/');
if trimmed_path.is_empty() {
return Err(crate::Error::ProxySetupError(format!(
return Err(crate::Error::ProxySetup(format!(
"Proxy backend URL must have a non-empty path, e.g. {}/api instead of {}",
proxy.backend.trim_end_matches('/'),
proxy.backend
@@ -142,6 +142,9 @@ pub(crate) fn proxy_to(
}
let uri = req.uri().clone();
// retry with backoff
let res = client.send(req).await.map_err(handle_error);
match res {
@@ -150,6 +153,7 @@ pub(crate) fn proxy_to(
if uri.path().starts_with("/assets")
|| uri.path().starts_with("/_dioxus")
|| uri.path().starts_with("/public")
|| uri.path().starts_with("/wasm")
{
tracing::trace!(dx_src = ?TraceSrc::Dev, "[{}] {}", res.status().as_u16(), uri);
} else {
@@ -284,7 +288,7 @@ mod test {
};
let router = super::add_proxy(Router::new(), &config);
match router.unwrap_err() {
crate::Error::ProxySetupError(e) => {
crate::Error::ProxySetup(e) => {
assert_eq!(
e,
"Proxy backend URL must have a non-empty path, e.g. http://localhost:8000/api instead of http://localhost:8000"

@@ -1,487 +0,0 @@
use super::BuildProgress;
use crate::{config::Platform, TraceMsg, TraceSrc};
use ansi_to_tui::IntoText as _;
use ratatui::{
layout::{Alignment, Constraint, Direction, Layout, Rect},
style::{Color, Style, Stylize},
text::{Line, Span, Text},
widgets::{Block, Borders, Clear, List, ListState, Paragraph, Widget, Wrap},
Frame,
};
use regex::Regex;
use std::fmt::Write as _;
use std::rc::Rc;
use tracing::Level;
pub struct TuiLayout {
/// The entire TUI body.
_body: Rc<[Rect]>,
/// The console where build logs are displayed.
console: Rc<[Rect]>,
// The filter drawer if the drawer is open.
filter_drawer: Option<Rc<[Rect]>>,
// The border that separates the console and info bars.
border_sep: Rect,
// The status bar that displays build status, platform, versions, etc.
status_bar: Rc<[Rect]>,
// Misc
filter_list_state: ListState,
}
impl TuiLayout {
pub fn new(frame_size: Rect, filter_open: bool) -> Self {
// The full layout
let body = Layout::default()
.direction(Direction::Vertical)
.constraints([
// Footer Status
Constraint::Length(1),
// Border Separator
Constraint::Length(1),
// Console
Constraint::Fill(1),
// Padding
Constraint::Length(1),
])
.split(frame_size);
let mut console_constraints = vec![Constraint::Fill(1)];
if filter_open {
console_constraints.push(Constraint::Length(1));
console_constraints.push(Constraint::Length(25));
}
// Build the console, where logs go.
let console = Layout::default()
.direction(Direction::Horizontal)
.constraints(console_constraints)
.split(body[2]);
let filter_drawer = match filter_open {
false => None,
true => Some(
Layout::default()
.direction(Direction::Horizontal)
.constraints([
Constraint::Length(1),
Constraint::Fill(1),
Constraint::Length(1),
])
.split(console[2]),
),
};
// Build the status bar.
let status_bar = Layout::default()
.direction(Direction::Horizontal)
.constraints([Constraint::Fill(1), Constraint::Fill(1)])
.split(body[0]);
// Specify borders
let border_sep_top = body[1];
Self {
_body: body,
console,
filter_drawer,
border_sep: border_sep_top,
status_bar,
filter_list_state: ListState::default(),
}
}
/// Render all decorations.
pub fn render_decor(&self, frame: &mut Frame, filter_open: bool) {
frame.render_widget(
Block::new()
.borders(Borders::TOP)
.border_style(Style::new().white()),
self.border_sep,
);
if filter_open {
frame.render_widget(
Block::new()
.borders(Borders::LEFT)
.border_style(Style::new().white()),
self.console[1],
);
}
}
/// Render the console and its logs, returning the number of lines required to render the entire log output.
pub fn render_console(
&self,
frame: &mut Frame,
scroll_position: u16,
messages: &[TraceMsg],
enabled_filters: &[String],
) -> u16 {
const LEVEL_MAX: usize = "BUILD: ".len();
let mut out_text = Text::default();
// Assemble the messages
for msg in messages.iter() {
let mut sub_line_padding = 0;
let text = msg.content.trim_end().into_text().unwrap_or_default();
for (idx, line) in text.lines.into_iter().enumerate() {
// Don't add any formatting for cargo messages.
let out_line = if msg.source != TraceSrc::Cargo {
if idx == 0 {
match msg.source {
TraceSrc::Dev => {
let mut spans = vec![Span::from(" DEV: ").light_magenta()];
for span in line.spans {
spans.push(span);
}
spans
}
TraceSrc::Build => {
let mut spans = vec![Span::from("BUILD: ").light_blue()];
for span in line.spans {
spans.push(span);
}
spans
}
_ => {
// Build level tag: `INFO: `
// We don't subtract 1 here for `:` because we still want at least 1 padding.
let padding =
build_msg_padding(LEVEL_MAX - msg.level.to_string().len() - 2);
let level = format!("{padding}{}: ", msg.level);
sub_line_padding += level.len();
let level_span = Span::from(level);
let level_span = match msg.level {
Level::TRACE => level_span.black(),
Level::DEBUG => level_span.light_magenta(),
Level::INFO => level_span.light_green(),
Level::WARN => level_span.light_yellow(),
Level::ERROR => level_span.light_red(),
};
let mut out_line = vec![level_span];
for span in line.spans {
out_line.push(span);
}
out_line
}
}
} else {
// Not the first line. Append the padding and merge into list.
let padding = build_msg_padding(sub_line_padding);
let mut out_line = vec![Span::from(padding)];
for span in line.spans {
out_line.push(span);
}
out_line
}
} else {
line.spans
};
out_text.push_line(Line::from(out_line));
}
}
// Only show messages for filters that are enabled.
let mut included_line_ids = Vec::new();
for filter in enabled_filters {
let re = Regex::new(filter);
for (index, line) in out_text.lines.iter().enumerate() {
let line_str = line.to_string();
match re {
Ok(ref re) => {
// match against the provided regex
if re.is_match(&line_str) {
included_line_ids.push(index);
}
}
Err(_) => {
// fall back to basic substring matching
if line_str.contains(filter) {
included_line_ids.push(index);
}
}
}
}
}
included_line_ids.sort_unstable();
included_line_ids.dedup();
let out_lines = out_text.lines;
let mut out_text = Text::default();
if enabled_filters.is_empty() {
for line in out_lines {
out_text.push_line(line.clone());
}
} else {
for id in included_line_ids {
if let Some(line) = out_lines.get(id) {
out_text.push_line(line.clone());
}
}
}
let (console_width, _console_height) = self.get_console_size();
let paragraph = Paragraph::new(out_text)
.left_aligned()
.wrap(Wrap { trim: false });
let num_lines_wrapping = paragraph.line_count(console_width) as u16;
paragraph
.scroll((scroll_position, 0))
.render(self.console[0], frame.buffer_mut());
num_lines_wrapping
}
/// Render the status bar.
pub fn render_status_bar(
&self,
frame: &mut Frame,
_platform: Platform,
build_progress: &BuildProgress,
more_modal_open: bool,
filter_menu_open: bool,
dx_version: &str,
) {
// left aligned text
let mut spans = vec![
Span::from("🧬 dx").white(),
Span::from(" ").white(),
Span::from(dx_version).white(),
Span::from(" | ").dark_gray(),
];
// If there is build progress, render the current status.
let is_build_progress = !build_progress.current_builds.is_empty();
if is_build_progress {
// If the build failed, show a failed status.
// Otherwise, render current status.
let build_failed = build_progress
.current_builds
.values()
.any(|b| b.failed.is_some());
if build_failed {
spans.push(Span::from("Build failed ❌").red());
} else {
// spans.push(Span::from("status: ").gray());
let build = build_progress
.current_builds
.values()
.min_by(|a, b| a.partial_cmp(b).unwrap_or(std::cmp::Ordering::Equal))
.unwrap();
spans.extend_from_slice(&build.make_spans(Rect::new(
0,
0,
build.max_layout_size(),
1,
)));
}
}
// right aligned text
let more_span = Span::from("[/] more");
let more_span = match more_modal_open {
true => more_span.light_yellow(),
false => more_span.gray(),
};
let filter_span = Span::from("[f] filter");
let filter_span = match filter_menu_open {
true => filter_span.light_yellow(),
false => filter_span.gray(),
};
// Right-aligned text
let right_line = Line::from(vec![
Span::from("[o] open").gray(),
Span::from(" | ").gray(),
Span::from("[r] rebuild").gray(),
Span::from(" | ").gray(),
filter_span,
Span::from(" | ").dark_gray(),
more_span,
]);
frame.render_widget(
Paragraph::new(Line::from(spans)).left_aligned(),
self.status_bar[0],
);
// Render the info
frame.render_widget(
Paragraph::new(right_line).right_aligned(),
self.status_bar[1],
);
}
/// Renders the "more" modal to show extra info/keybinds accessible via the more keybind.
pub fn render_more_modal(&self, frame: &mut Frame) {
let modal = Layout::default()
.direction(Direction::Vertical)
.constraints([Constraint::Fill(1), Constraint::Length(5)])
.split(self.console[0])[1];
frame.render_widget(Clear, modal);
frame.render_widget(Block::default().borders(Borders::ALL), modal);
// Render under construction message
frame.render_widget(
Paragraph::new("Under construction, please check back at a later date!")
.alignment(Alignment::Center),
modal,
);
}
/// Render the filter drawer menu.
pub fn render_filter_menu(
&mut self,
frame: &mut Frame,
filters: &[(String, bool)],
selected_filter_index: usize,
search_mode: bool,
search_input: Option<&String>,
) {
let Some(ref filter_drawer) = self.filter_drawer else {
return;
};
// Vertical layout
let container = Layout::default()
.constraints([
Constraint::Length(4),
Constraint::Fill(1),
Constraint::Length(7),
])
.direction(Direction::Vertical)
.split(filter_drawer[1]);
// Render the search section.
let top_area = Layout::default()
.constraints([
Constraint::Length(1),
Constraint::Length(1),
Constraint::Length(1),
Constraint::Length(1),
])
.direction(Direction::Vertical)
.split(container[0]);
let search_title = Line::from("Search").gray();
let search_input_block = Block::new().bg(Color::White);
let search_text = match search_input {
Some(s) => s,
None => {
if search_mode {
"..."
} else {
"[enter] to type..."
}
}
};
let search_input = Paragraph::new(Line::from(search_text))
.fg(Color::Black)
.block(search_input_block);
frame.render_widget(search_title, top_area[1]);
frame.render_widget(search_input, top_area[2]);
// Render the filters
let list_area = container[1];
let mut list_items = Vec::new();
for (filter, enabled) in filters {
let filter = Span::from(filter);
let filter = match enabled {
true => filter.light_yellow(),
false => filter.dark_gray(),
};
list_items.push(filter);
}
list_items.reverse();
let list = List::new(list_items).highlight_symbol("» ");
self.filter_list_state.select(Some(selected_filter_index));
frame.render_stateful_widget(list, list_area, &mut self.filter_list_state);
// Render the keybind list at the bottom.
let keybinds = container[2];
let lines = vec![
Line::from(""),
Line::from("[↑] Up").white(),
Line::from("[↓] Down").white(),
Line::from("[←] Remove").white(),
Line::from("[→] Toggle").white(),
Line::from("[enter] Type / Submit").white(),
];
let text = Text::from(lines);
frame.render_widget(text, keybinds);
}
/// Returns the height of the console TUI area in number of lines.
pub fn get_console_size(&self) -> (u16, u16) {
(self.console[0].width, self.console[0].height)
}
/// Render the current scroll position at the top right corner of the frame
pub(crate) fn render_current_scroll(
&self,
scroll_position: u16,
lines: u16,
console_height: u16,
frame: &mut Frame<'_>,
) {
let mut row = Layout::default()
.direction(Direction::Vertical)
.constraints([Constraint::Length(1)])
.split(self.console[0])[0];
// Hack: shove the text upwards to overlap with the border so text selection doesn't accidentally capture the number
row.y -= 1;
let max_scroll = lines.saturating_sub(console_height);
if max_scroll == 0 {
return;
}
let remaining_lines = max_scroll.saturating_sub(scroll_position);
if remaining_lines != 0 {
let text = vec![Span::from(format!(" {remaining_lines}")).dark_gray()];
frame.render_widget(
Paragraph::new(Line::from(text))
.alignment(Alignment::Right)
.block(Block::default()),
row,
);
}
}
}
/// Generate a string with a specified number of spaces.
fn build_msg_padding(padding_len: usize) -> String {
let mut padding = String::new();
for _ in 0..padding_len {
_ = write!(padding, " ");
}
padding
}

@@ -0,0 +1,312 @@
use super::{AppHandle, ServeUpdate, WebServer};
use crate::{
AppBundle, DioxusCrate, HotreloadFilemap, HotreloadResult, Platform, Result, TraceSrc,
};
use dioxus_core::internal::TemplateGlobalKey;
use dioxus_devtools_types::HotReloadMsg;
use dioxus_html::HtmlCtx;
use futures_util::{future::OptionFuture, stream::FuturesUnordered};
use ignore::gitignore::Gitignore;
use std::{
collections::{HashMap, HashSet},
net::SocketAddr,
path::PathBuf,
};
use tokio_stream::StreamExt;
pub(crate) struct AppRunner {
pub(crate) running: HashMap<Platform, AppHandle>,
pub(crate) krate: DioxusCrate,
pub(crate) file_map: HotreloadFilemap,
pub(crate) ignore: Gitignore,
pub(crate) applied_hot_reload_message: HotReloadMsg,
pub(crate) builds_opened: usize,
pub(crate) should_full_rebuild: bool,
}
impl AppRunner {
/// Create the AppRunner and then initialize the filemap with the crate directory.
pub(crate) fn start(krate: &DioxusCrate) -> Self {
let mut runner = Self {
running: Default::default(),
file_map: HotreloadFilemap::new(),
applied_hot_reload_message: Default::default(),
ignore: krate.workspace_gitignore(),
krate: krate.clone(),
builds_opened: 0,
should_full_rebuild: true,
};
// todo(jon): this might take a while so we should try and background it, or make it lazy somehow
// we could spawn a thread to search the FS and then when it returns we can fill the filemap
// in testing, if this hits a massive directory, it might take several seconds with no feedback.
for krate in krate.all_watched_crates() {
runner.fill_filemap(krate);
}
runner
}
pub(crate) async fn wait(&mut self) -> ServeUpdate {
// If there are no running apps, we can just return pending to avoid deadlocking
if self.running.is_empty() {
return futures_util::future::pending().await;
}
self.running
.iter_mut()
.map(|(platform, handle)| async {
use ServeUpdate::*;
let platform = *platform;
tokio::select! {
Some(Ok(Some(msg))) = OptionFuture::from(handle.app_stdout.as_mut().map(|f| f.next_line())) => {
StdoutReceived { platform, msg }
},
Some(Ok(Some(msg))) = OptionFuture::from(handle.app_stderr.as_mut().map(|f| f.next_line())) => {
StderrReceived { platform, msg }
},
Some(status) = OptionFuture::from(handle.app_child.as_mut().map(|f| f.wait())) => {
match status {
Ok(status) => ProcessExited { status, platform },
Err(_err) => todo!("handle error in process joining?"),
}
}
Some(Ok(Some(msg))) = OptionFuture::from(handle.server_stdout.as_mut().map(|f| f.next_line())) => {
StdoutReceived { platform: Platform::Server, msg }
},
Some(Ok(Some(msg))) = OptionFuture::from(handle.server_stderr.as_mut().map(|f| f.next_line())) => {
StderrReceived { platform: Platform::Server, msg }
},
Some(status) = OptionFuture::from(handle.server_child.as_mut().map(|f| f.wait())) => {
match status {
Ok(status) => ProcessExited { status, platform: Platform::Server },
Err(_err) => todo!("handle error in process joining?"),
}
}
else => futures_util::future::pending().await
}
})
.collect::<FuturesUnordered<_>>()
.next()
.await
.expect("Stream to pending if not empty")
}
/// Finally "bundle" this app and return a handle to it
pub(crate) async fn open(
&mut self,
app: AppBundle,
devserver_ip: SocketAddr,
fullstack_address: Option<SocketAddr>,
should_open_web: bool,
) -> Result<&AppHandle> {
let platform = app.build.build.platform();
// Drop the old handle
// todo(jon): we should instead be sending the kill signal rather than dropping the process
// This would allow a more graceful shutdown and fix bugs like desktop not retaining its size
self.kill(platform);
// wait a tiny sec for the processes to die so we don't have fullstack servers on top of each other
// todo(jon): we should allow rebinding to the same port in fullstack itself
tokio::time::sleep(std::time::Duration::from_millis(100)).await;
// Start the new app before we kill the old one to give it a little bit of time
let mut handle = AppHandle::new(app).await?;
handle
.open(
devserver_ip,
fullstack_address,
self.builds_opened == 0 && should_open_web,
)
.await?;
self.builds_opened += 1;
self.running.insert(platform, handle);
Ok(self.running.get(&platform).unwrap())
}
pub(crate) fn kill(&mut self, platform: Platform) {
self.running.remove(&platform);
}
/// Open an existing app bundle, if it exists
pub(crate) async fn open_existing(&self, devserver: &WebServer) {
if let Some(address) = devserver.server_address() {
let url = format!("http://{address}");
tracing::debug!("opening url: {url}");
_ = open::that(url);
}
}
pub(crate) fn attempt_hot_reload(
&mut self,
modified_files: Vec<PathBuf>,
) -> Option<HotReloadMsg> {
// If we have any changes to the rust files, we need to update the file map
let mut templates = vec![];
// Prepare the hotreload message we need to send
let mut edited_rust_files = Vec::new();
let mut assets = Vec::new();
for path in modified_files {
// for various assets that might be linked in, we just try to hotreload them forcefully
// That is, unless they appear in an include! macro, in which case we need to do a full rebuild.
let Some(ext) = path.extension().and_then(|v| v.to_str()) else {
continue;
};
// If it's a rust file, we want to hotreload it using the filemap
if ext == "rs" {
edited_rust_files.push(path);
continue;
}
// Otherwise, it might be an asset and we should look for it in all the running apps
for runner in self.running.values() {
if let Some(bundled_name) = runner.hotreload_bundled_asset(&path) {
// todo(jon): don't hardcode this here
let asset_relative = PathBuf::from("/assets/").join(bundled_name);
assets.push(asset_relative);
}
}
}
// Multiple runners might have queued the same asset, so dedup them
assets.dedup();
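// Illustrative example (hypothetical hash): a change to "assets/style.css" bundled as
// "style-1a2b3c.css" is pushed to clients as "/assets/style-1a2b3c.css".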
// Process the rust files
for rust_file in edited_rust_files {
// Strip the prefix before sending it to the filemap
let Ok(path) = rust_file.strip_prefix(self.krate.workspace_dir()) else {
tracing::error!(
"Hotreloading file outside of the crate directory: {:?}",
rust_file
);
continue;
};
// And grab the contents
let contents = std::fs::read_to_string(&rust_file).unwrap();
match self.file_map.update_rsx::<HtmlCtx>(path, contents) {
HotreloadResult::Rsx(new) => templates.extend(new),
// The rust file may have failed to parse, but that is most likely
// because the user is in the middle of adding new code
// We just ignore the error and let Rust analyzer warn about the problem
HotreloadResult::Notreloadable => return None,
HotreloadResult::NotParseable => {
tracing::debug!(dx_src = ?TraceSrc::Dev, "Error hotreloading file - not parseable {rust_file:?}")
}
}
}
let msg = HotReloadMsg {
templates,
assets,
unknown_files: vec![],
};
self.add_hot_reload_message(&msg);
Some(msg)
}
/// Get any hot reload changes that have been applied since the last full rebuild
pub(crate) fn applied_hot_reload_changes(&mut self) -> HotReloadMsg {
self.applied_hot_reload_message.clone()
}
/// Clear the hot reload changes. This should be called any time a new build is starting
pub(crate) fn clear_hot_reload_changes(&mut self) {
self.applied_hot_reload_message = Default::default();
}
/// Store the hot reload changes for any future clients that connect
fn add_hot_reload_message(&mut self, msg: &HotReloadMsg) {
let applied = &mut self.applied_hot_reload_message;
// Merge the assets, unknown files, and templates
// We keep the newer change if there are both an old and a new change
let mut templates: HashMap<TemplateGlobalKey, _> = std::mem::take(&mut applied.templates)
.into_iter()
.map(|template| (template.key.clone(), template))
.collect();
let mut assets: HashSet<PathBuf> =
std::mem::take(&mut applied.assets).into_iter().collect();
let mut unknown_files: HashSet<PathBuf> = std::mem::take(&mut applied.unknown_files)
.into_iter()
.collect();
for template in &msg.templates {
templates.insert(template.key.clone(), template.clone());
}
assets.extend(msg.assets.iter().cloned());
unknown_files.extend(msg.unknown_files.iter().cloned());
applied.templates = templates.into_values().collect();
applied.assets = assets.into_iter().collect();
applied.unknown_files = unknown_files.into_iter().collect();
}
pub(crate) async fn client_connected(&mut self) {
for (platform, runner) in self.running.iter_mut() {
// Assign the runtime asset dir to the runner
if *platform == Platform::Ios {
// xcrun simctl get_app_container booted com.dioxuslabs
let res = tokio::process::Command::new("xcrun")
.arg("simctl")
.arg("get_app_container")
.arg("booted")
.arg("com.dioxuslabs")
.output()
.await;
if let Ok(res) = res {
tracing::debug!("Using runtime asset dir: {:?}", res);
if let Ok(out) = String::from_utf8(res.stdout) {
let out = out.trim();
tracing::debug!("Setting Runtime asset dir: {out:?}");
runner.runtime_asst_dir = Some(PathBuf::from(out));
}
}
}
}
}
/// Fill the filemap with files from the filesystem, skipping anything matched by the workspace gitignore.
///
/// We walk the filesystem from the given path and recursively collect every `.rs` file we can read,
/// storing its contents keyed by its path relative to the workspace root.
///
/// If a file can't be read, we don't fail - we simply skip it.
pub fn fill_filemap(&mut self, path: PathBuf) {
if self.ignore.matched(&path, path.is_dir()).is_ignore() {
return;
}
// If the file is a .rs file, add it to the filemap
if path.extension().and_then(|s| s.to_str()) == Some("rs") {
if let Ok(contents) = std::fs::read_to_string(&path) {
if let Ok(path) = path.strip_prefix(self.krate.workspace_dir()) {
self.file_map.add_file(path.to_path_buf(), contents);
}
}
return;
}
// If it's not, we'll try to read the directory
if path.is_dir() {
if let Ok(read_dir) = std::fs::read_dir(&path) {
for entry in read_dir.flatten() {
self.fill_filemap(entry.path());
}
}
}
}
}

@@ -1,183 +1,183 @@
use crate::config::{Platform, WebHttpsConfig};
use crate::serve::{next_or_pending, Serve};
use crate::{dioxus_crate::DioxusCrate, TraceSrc};
use crate::{Error, Result};
use axum::extract::{Request, State};
use axum::middleware::{self, Next};
use crate::{
config::WebHttpsConfig,
serve::{ServeArgs, ServeUpdate},
BuildStage, BuildUpdate, DioxusCrate, Platform, Result, TraceSrc,
};
use anyhow::Context;
use axum::{
body::Body,
extract::{
ws::{Message, WebSocket},
WebSocketUpgrade,
Request, State, WebSocketUpgrade,
},
http::{
header::{HeaderName, HeaderValue, CACHE_CONTROL, EXPIRES, PRAGMA},
Method, Response, StatusCode,
},
middleware::{self, Next},
response::IntoResponse,
routing::{get, get_service},
Extension, Router,
};
use axum_server::tls_rustls::RustlsConfig;
use dioxus_devtools::{DevserverMsg, HotReloadMsg};
use dioxus_devtools_types::{DevserverMsg, HotReloadMsg};
use futures_channel::mpsc::{UnboundedReceiver, UnboundedSender};
use futures_util::stream;
use futures_util::{stream::FuturesUnordered, StreamExt};
use hyper::header::ACCEPT;
use futures_util::{
future,
stream::{self, FuturesUnordered},
StreamExt,
};
use hyper::HeaderMap;
use serde::{Deserialize, Serialize};
use std::net::TcpListener;
use std::path::Path;
use std::sync::Arc;
use std::sync::RwLock;
use std::{
convert::Infallible,
fs, io,
net::{IpAddr, SocketAddr},
process::Command,
net::{IpAddr, SocketAddr, TcpListener},
path::Path,
sync::Arc,
sync::RwLock,
};
use tokio::task::JoinHandle;
use tower::ServiceBuilder;
use tower_http::{
cors::{Any, CorsLayer},
cors::Any,
services::fs::{ServeDir, ServeFileSystemResponseBody},
ServiceBuilderExt,
};
pub enum ServerUpdate {
NewConnection,
Message(Message),
}
#[derive(Debug, Clone, PartialEq, Serialize, Deserialize)]
#[serde(tag = "type", content = "data")]
enum Status {
ClientInit {
application_name: String,
platform: String,
},
Building {
progress: f64,
build_message: String,
},
BuildError {
error: String,
},
Ready,
}
#[derive(Debug, Clone)]
struct SharedStatus(Arc<RwLock<Status>>);
impl SharedStatus {
fn new(status: Status) -> Self {
Self(Arc::new(RwLock::new(status)))
}
fn set(&self, status: Status) {
*self.0.write().unwrap() = status;
}
fn get(&self) -> Status {
self.0.read().unwrap().clone()
}
}
pub(crate) struct Server {
pub hot_reload_sockets: Vec<WebSocket>,
pub build_status_sockets: Vec<WebSocket>,
pub ip: SocketAddr,
pub new_hot_reload_sockets: UnboundedReceiver<WebSocket>,
pub new_build_status_sockets: UnboundedReceiver<WebSocket>,
_server_task: JoinHandle<Result<()>>,
/// We proxy (not hot reloading) fullstack requests to this port
pub fullstack_port: Option<u16>,
/// The webserver that serves static assets (if fullstack isn't already doing that) and the websocket
/// communication layer that we use to send status updates and hotreloads to the client.
///
/// todo(jon): we should merge the build status and hotreload sockets into just a "devtools" socket
/// which carries all the message types. This would make it easier for us to add more message types
/// and better tooling on the pages that we serve.
pub(crate) struct WebServer {
devserver_ip: IpAddr,
devserver_port: u16,
proxied_port: Option<u16>,
hot_reload_sockets: Vec<WebSocket>,
build_status_sockets: Vec<WebSocket>,
new_hot_reload_sockets: UnboundedReceiver<WebSocket>,
new_build_status_sockets: UnboundedReceiver<WebSocket>,
build_status: SharedStatus,
application_name: String,
platform: String,
platform: Platform,
}
impl Server {
pub fn start(serve: &Serve, cfg: &DioxusCrate) -> Self {
impl WebServer {
/// Start the development server.
/// This will set up the default http server if there's no server specified (usually via fullstack).
///
/// This will also start the websocket server that powers the devtools. If you want to communicate
/// with connected devtools clients, this is the place to do it.
pub(crate) fn start(krate: &DioxusCrate, args: &ServeArgs) -> Result<Self> {
let (hot_reload_sockets_tx, hot_reload_sockets_rx) = futures_channel::mpsc::unbounded();
let (build_status_sockets_tx, build_status_sockets_rx) = futures_channel::mpsc::unbounded();
let build_status = SharedStatus::new(Status::Building {
progress: 0.0,
build_message: "Starting the build...".to_string(),
});
let devserver_ip = args.address.addr;
let devserver_port = args.address.port;
let devserver_address = SocketAddr::new(devserver_ip, devserver_port);
let addr = serve.server_arguments.address.address();
let start_browser = serve.server_arguments.open.unwrap_or_default();
// All servers will end up behind us (the devserver) but on a different port
// This is so we can serve a loading screen as well as devtools without anything particularly fancy
let proxied_port = args
.should_proxy_build()
.then(|| get_available_port(devserver_ip))
.flatten();
// If we're serving a fullstack app, we need to find a port to proxy to
let fullstack_port = if matches!(
serve.build_arguments.platform(),
Platform::Liveview | Platform::Fullstack
) {
get_available_port(addr.ip())
} else {
None
};
let proxied_address = proxied_port.map(|port| SocketAddr::new(devserver_ip, port));
let fullstack_address = fullstack_port.map(|port| SocketAddr::new(addr.ip(), port));
let router = setup_router(
serve,
cfg,
// Set up the router with some shared state that we'll update later to reflect the current state of the build
let build_status = SharedStatus::new_with_starting_build();
let router = build_devserver_router(
args,
krate,
hot_reload_sockets_tx,
build_status_sockets_tx,
fullstack_address,
proxied_address,
build_status.clone(),
)?;
// Create the listener that we'll pass into the devserver, but save its IP here so
// we can display it to the user in the tui
let listener = std::net::TcpListener::bind(devserver_address).with_context(|| {
anyhow::anyhow!(
"Failed to bind server to: {devserver_address}, is there another devserver running?\nTo run multiple devservers, use the --port flag to specify a different port"
)
.unwrap();
})?;
// Actually just start the server, cloning in a few bits of config
let web_config = cfg.dioxus_config.web.https.clone();
let base_path = cfg.dioxus_config.web.app.base_path.clone();
let platform = serve.platform();
let _server_task = tokio::spawn(async move {
let web_config = web_config.clone();
// HTTPS
// Before console info so it can stop if mkcert isn't installed or fails
// todo: this is the only async thing here - might be nice to
let rustls: Option<RustlsConfig> = get_rustls(&web_config).await.unwrap();
// And finally, start the server mainloop
tokio::spawn(devserver_mainloop(
krate.config.web.https.clone(),
listener,
router,
));
// Open the browser
if start_browser && platform != Platform::Desktop {
open_browser(base_path, addr, rustls.is_some());
}
// Start the server with or without rustls
if let Some(rustls) = rustls {
axum_server::bind_rustls(addr, rustls)
.serve(router.into_make_service())
.await?
} else {
// Create a TCP listener bound to the address
axum::serve(
tokio::net::TcpListener::bind(&addr).await?,
router.into_make_service(),
)
.await?
}
Ok(())
});
Self {
Ok(Self {
build_status,
proxied_port,
devserver_ip,
devserver_port,
hot_reload_sockets: Default::default(),
build_status_sockets: Default::default(),
new_hot_reload_sockets: hot_reload_sockets_rx,
new_build_status_sockets: build_status_sockets_rx,
_server_task,
ip: addr,
fullstack_port,
application_name: krate.executable_name().to_string(),
platform: args.build_arguments.platform(),
})
}
build_status,
application_name: cfg.dioxus_config.application.name.clone(),
platform: serve.build_arguments.platform().to_string(),
/// Wait for new clients to be connected and then save them
pub(crate) async fn wait(&mut self) -> ServeUpdate {
let mut new_hot_reload_socket = self.new_hot_reload_sockets.next();
let mut new_build_status_socket = self.new_build_status_sockets.next();
let mut new_message = self
.hot_reload_sockets
.iter_mut()
.enumerate()
.map(|(idx, socket)| async move { (idx, socket.next().await) })
.collect::<FuturesUnordered<_>>();
tokio::select! {
new_hot_reload_socket = &mut new_hot_reload_socket => {
if let Some(new_socket) = new_hot_reload_socket {
drop(new_message);
self.hot_reload_sockets.push(new_socket);
return ServeUpdate::NewConnection;
} else {
panic!("Could not receive a socket - the devtools could not boot - the port is likely already in use");
}
}
new_build_status_socket = &mut new_build_status_socket => {
if let Some(mut new_socket) = new_build_status_socket {
drop(new_message);
// Update the socket with project info and current build status
let project_info = SharedStatus::new(Status::ClientInit { application_name: self.application_name.clone(), platform: self.platform });
if project_info.send_to(&mut new_socket).await.is_ok() {
_ = self.build_status.send_to(&mut new_socket).await;
self.build_status_sockets.push(new_socket);
}
return future::pending::<ServeUpdate>().await;
} else {
panic!("Could not receive a socket - the devtools could not boot - the port is likely already in use");
}
}
Some((idx, message)) = new_message.next() => {
match message {
Some(Ok(message)) => return ServeUpdate::WsMessage(message),
_ => {
drop(new_message);
_ = self.hot_reload_sockets.remove(idx);
}
}
}
}
future::pending().await
}
pub(crate) async fn shutdown(&mut self) {
self.send_shutdown().await;
for socket in self.hot_reload_sockets.drain(..) {
_ = socket.close().await;
}
}
@ -186,10 +186,7 @@ impl Server {
let mut i = 0;
while i < self.build_status_sockets.len() {
let socket = &mut self.build_status_sockets[i];
if send_build_status_to(&self.build_status, socket)
.await
.is_err()
{
if self.build_status.send_to(socket).await.is_err() {
self.build_status_sockets.remove(i);
} else {
i += 1;
@ -198,7 +195,7 @@ impl Server {
}
/// Sends a start build message to all clients.
pub async fn start_build(&mut self) {
pub(crate) async fn start_build(&mut self) {
self.build_status.set(Status::Building {
progress: 0.0,
build_message: "Starting the build...".to_string(),
@ -207,19 +204,54 @@ impl Server {
}
/// Sends an updated build status to all clients.
pub async fn update_build_status(&mut self, progress: f64, build_message: String) {
if !matches!(self.build_status.get(), Status::Building { .. }) {
return;
}
pub(crate) async fn new_build_update(&mut self, update: &BuildUpdate) {
match update {
BuildUpdate::Progress { stage } => {
// Todo(miles): wire up more messages into the splash screen UI
match stage {
BuildStage::Success => {}
BuildStage::Failed => self.send_reload_failed().await,
BuildStage::Restarting => self.send_reload_start().await,
BuildStage::Initializing => {}
BuildStage::InstallingTooling {} => {}
BuildStage::Compiling {
current,
total,
krate,
..
} => {
self.build_status.set(Status::Building {
progress,
build_message,
progress: (*current as f64 / *total as f64).clamp(0.0, 1.0),
build_message: format!("{krate} compiling"),
});
self.send_build_status().await;
}
BuildStage::OptimizingWasm {} => {}
BuildStage::Aborted => {}
BuildStage::CopyingAssets { .. } => {}
_ => {}
}
}
BuildUpdate::CompilerMessage { .. } => {}
BuildUpdate::BuildReady { .. } => {}
BuildUpdate::BuildFailed { err } => {
let error = err.to_string();
self.build_status.set(Status::BuildError {
error: ansi_to_html::convert(&error).unwrap_or(error),
});
self.send_build_status().await;
}
}
}
/// Sends hot reloadable changes to all clients.
pub async fn send_hotreload(&mut self, reload: HotReloadMsg) {
pub(crate) async fn send_hotreload(&mut self, reload: HotReloadMsg) {
if reload.is_empty() {
return;
}
tracing::debug!("Sending hotreload to clients {:?}", reload);
let msg = DevserverMsg::HotReload(reload);
let msg = serde_json::to_string(&msg).unwrap();
@ -235,80 +267,22 @@ impl Server {
}
}
/// Wait for new clients to be connected and then save them
pub async fn wait(&mut self) -> Option<ServerUpdate> {
let mut new_hot_reload_socket = self.new_hot_reload_sockets.next();
let mut new_build_status_socket = self.new_build_status_sockets.next();
let mut new_message = self
.hot_reload_sockets
.iter_mut()
.enumerate()
.map(|(idx, socket)| async move { (idx, socket.next().await) })
.collect::<FuturesUnordered<_>>();
let next_new_message = next_or_pending(new_message.next());
tokio::select! {
new_hot_reload_socket = &mut new_hot_reload_socket => {
if let Some(new_socket) = new_hot_reload_socket {
drop(new_message);
self.hot_reload_sockets.push(new_socket);
return Some(ServerUpdate::NewConnection);
} else {
panic!("Could not receive a socket - the devtools could not boot - the port is likely already in use");
}
}
new_build_status_socket = &mut new_build_status_socket => {
if let Some(mut new_socket) = new_build_status_socket {
drop(new_message);
// Update the socket with project info and current build status
let project_info = SharedStatus::new(Status::ClientInit { application_name: self.application_name.clone(), platform: self.platform.clone() });
if send_build_status_to(&project_info, &mut new_socket).await.is_ok() {
_ = send_build_status_to(&self.build_status, &mut new_socket).await;
self.build_status_sockets.push(new_socket);
}
return None;
} else {
panic!("Could not receive a socket - the devtools could not boot - the port is likely already in use");
}
}
(idx, message) = next_new_message => {
match message {
Some(Ok(message)) => return Some(ServerUpdate::Message(message)),
_ => {
drop(new_message);
_ = self.hot_reload_sockets.remove(idx);
}
}
}
}
None
}
/// Converts a `cargo` error to HTML and sends it to clients.
pub async fn send_build_error(&mut self, error: Error) {
let error = error.to_string();
self.build_status.set(Status::BuildError {
error: ansi_to_html::convert(&error).unwrap_or(error),
});
self.send_build_status().await;
}
/// Tells all clients that a full rebuild has started.
pub async fn send_reload_start(&mut self) {
pub(crate) async fn send_reload_start(&mut self) {
self.send_devserver_message(DevserverMsg::FullReloadStart)
.await;
}
/// Tells all clients that a full rebuild has failed.
pub async fn send_reload_failed(&mut self) {
pub(crate) async fn send_reload_failed(&mut self) {
self.send_devserver_message(DevserverMsg::FullReloadFailed)
.await;
}
/// Tells all clients to reload if possible for new changes.
pub async fn send_reload_command(&mut self) {
pub(crate) async fn send_reload_command(&mut self) {
tokio::time::sleep(std::time::Duration::from_millis(500)).await;
self.build_status.set(Status::Ready);
self.send_build_status().await;
self.send_devserver_message(DevserverMsg::FullReloadCommand)
@ -316,7 +290,7 @@ impl Server {
}
/// Send a shutdown message to all connected clients.
pub async fn send_shutdown(&mut self) {
pub(crate) async fn send_shutdown(&mut self) {
self.send_devserver_message(DevserverMsg::Shutdown).await;
}
@ -329,18 +303,48 @@ impl Server {
}
}
pub async fn shutdown(&mut self) {
self.send_shutdown().await;
for socket in self.hot_reload_sockets.drain(..) {
_ = socket.close().await;
}
/// Get the address the devserver should run on
pub fn devserver_address(&self) -> SocketAddr {
SocketAddr::new(self.devserver_ip, self.devserver_port)
}
/// Get the address the fullstack server should run on if we're serving a fullstack app
pub fn fullstack_address(&self) -> Option<SocketAddr> {
self.fullstack_port
.map(|port| SocketAddr::new(self.ip.ip(), port))
/// Get the address the server should run on if we're serving the user's server
pub fn proxied_server_address(&self) -> Option<SocketAddr> {
self.proxied_port
.map(|port| SocketAddr::new(self.devserver_ip, port))
}
pub fn server_address(&self) -> Option<SocketAddr> {
match self.platform {
Platform::Web | Platform::Server => Some(self.devserver_address()),
_ => self.proxied_server_address(),
}
}
}
async fn devserver_mainloop(
https_cfg: WebHttpsConfig,
listener: TcpListener,
router: Router,
) -> Result<()> {
// We have a native listener that we're going to give to tokio, so we need to make it non-blocking
let _ = listener.set_nonblocking(true);
// If we're not using rustls, just use regular axum
if https_cfg.enabled != Some(true) {
axum::serve(listener.try_into().unwrap(), router.into_make_service()).await?;
return Ok(());
}
// If we're using rustls, we need to get the cert/key paths and then set up rustls
let (cert_path, key_path) = get_rustls(&https_cfg).await?;
let rustls = axum_server::tls_rustls::RustlsConfig::from_pem_file(cert_path, key_path).await?;
axum_server::from_tcp_rustls(listener, rustls)
.serve(router.into_make_service())
.await?;
Ok(())
}
/// Sets up and returns a router
@ -350,44 +354,24 @@ impl Server {
/// - Setting up the proxy to the endpoint specified in the config
/// - Setting up the file serve service
/// - Setting up the websocket endpoint for devtools
fn setup_router(
serve: &Serve,
config: &DioxusCrate,
fn build_devserver_router(
args: &ServeArgs,
krate: &DioxusCrate,
hot_reload_sockets: UnboundedSender<WebSocket>,
build_status_sockets: UnboundedSender<WebSocket>,
fullstack_address: Option<SocketAddr>,
build_status: SharedStatus,
) -> Result<Router> {
let mut router = Router::new();
let platform = serve.build_arguments.platform();
// Setup proxy for the endpoint specified in the config
for proxy_config in config.dioxus_config.web.proxy.iter() {
for proxy_config in krate.config.web.proxy.iter() {
router = super::proxy::add_proxy(router, proxy_config)?;
}
// serve the dir if it's web, otherwise let the fullstack server itself handle it
match platform {
Platform::Web | Platform::StaticGeneration => {
// Route file service to output the .wasm and assets if this is a web build
let base_path = format!(
"/{}",
config
.dioxus_config
.web
.app
.base_path
.as_deref()
.unwrap_or_default()
.trim_matches('/')
);
router = router.nest_service(&base_path, build_serve_dir(serve, config, platform));
}
Platform::Liveview | Platform::Fullstack => {
// For fullstack and static generation, forward all requests to the server
if args.should_proxy_build() {
// For fullstack, liveview, and server, forward all requests to the inner server
let address = fullstack_address.unwrap();
router = router.nest_service("/",super::proxy::proxy_to(
format!("http://{address}").parse().unwrap(),
true,
@ -401,8 +385,22 @@ fn setup_router(
.unwrap()
},
));
}
_ => {}
} else {
// Otherwise, just serve the dir ourselves
// Route file service to output the .wasm and assets if this is a web build
let base_path = format!(
"/{}",
krate
.config
.web
.app
.base_path
.as_deref()
.unwrap_or_default()
.trim_matches('/')
);
router = router.nest_service(&base_path, build_serve_dir(args, krate));
}
// Setup middleware to intercept html requests if the build status is "Building"
@ -419,7 +417,7 @@ fn setup_router(
"/",
get(
|ws: WebSocketUpgrade, ext: Extension<UnboundedSender<WebSocket>>| async move {
tracing::info!("Incoming hotreload websocket request: {ws:?}");
tracing::debug!("New devtool websocket connection");
ws.on_upgrade(move |socket| async move { _ = ext.0.unbounded_send(socket) })
},
),
@ -438,7 +436,7 @@ fn setup_router(
// Setup cors
router = router.layer(
CorsLayer::new()
tower_http::cors::CorsLayer::new()
// allow `GET` and `POST` when accessing the resource
.allow_methods([Method::GET, Method::POST])
// allow requests from any origin
@ -449,11 +447,9 @@ fn setup_router(
Ok(router)
}
fn build_serve_dir(
serve: &Serve,
cfg: &DioxusCrate,
platform: Platform,
) -> axum::routing::MethodRouter {
fn build_serve_dir(args: &ServeArgs, cfg: &DioxusCrate) -> axum::routing::MethodRouter {
use tower::ServiceBuilder;
static CORS_UNSAFE: (HeaderValue, HeaderValue) = (
HeaderValue::from_static("unsafe-none"),
HeaderValue::from_static("unsafe-none"),
@ -464,17 +460,15 @@ fn build_serve_dir(
HeaderValue::from_static("same-origin"),
);
let (coep, coop) = match serve.server_arguments.cross_origin_policy {
let (coep, coop) = match args.cross_origin_policy {
true => CORS_REQUIRE.clone(),
false => CORS_UNSAFE.clone(),
};
let out_dir = match platform {
// Static generation only serves files from the public directory
Platform::StaticGeneration => cfg.out_dir().join("public"),
_ => cfg.out_dir(),
};
let index_on_404 = cfg.dioxus_config.web.watcher.index_on_404;
let out_dir = cfg
.build_dir(Platform::Web, args.build_arguments.release)
.join("public");
let index_on_404 = cfg.config.web.watcher.index_on_404;
get_service(
ServiceBuilder::new()
@ -508,18 +502,23 @@ fn no_cache(
// If there's a 404 and we're supposed to index on 404, upgrade that failed request to the index.html
// We might want to insert a header here saying we *did* that but oh well
if response.status() == StatusCode::NOT_FOUND && index_on_404 {
// First try to find a 404.html or 404/index.html file
let out_dir_404_html = out_dir.join("404.html");
let out_dir_404_index_html = out_dir.join("404").join("index.html");
let path = if out_dir_404_html.exists() {
out_dir_404_html
} else if out_dir_404_index_html.exists() {
out_dir_404_index_html
} else {
// If we can't find a 404.html or 404/index.html, just use the index.html
out_dir.join("index.html")
};
let body = Body::from(std::fs::read_to_string(path).unwrap());
let fallback = out_dir.join("index.html");
let contents = std::fs::read_to_string(fallback).unwrap_or_else(|_| {
String::from(
r#"
<!DOCTYPE html>
<html>
<head>
<title>Err 404 - dx is not serving a web app</title>
</head>
<body>
<p>Err 404 - dioxus is not currently serving a web app</p>
</body>
</html>
"#,
)
});
let body = Body::from(contents);
response = Response::builder()
.status(StatusCode::OK)
@ -528,32 +527,28 @@ fn no_cache(
};
insert_no_cache_headers(response.headers_mut());
response
}
pub fn insert_no_cache_headers(headers: &mut HeaderMap) {
pub(crate) fn insert_no_cache_headers(headers: &mut HeaderMap) {
headers.insert(CACHE_CONTROL, HeaderValue::from_static("no-cache"));
headers.insert(PRAGMA, HeaderValue::from_static("no-cache"));
headers.insert(EXPIRES, HeaderValue::from_static("0"));
}
/// Returns an enum of rustls config
pub async fn get_rustls(web_config: &WebHttpsConfig) -> Result<Option<RustlsConfig>> {
if web_config.enabled != Some(true) {
return Ok(None);
async fn get_rustls(web_config: &WebHttpsConfig) -> Result<(String, String)> {
// If we're not using mkcert, just use the cert/key paths given to us in the config
if !web_config.mkcert.unwrap_or(false) {
if let (Some(key), Some(cert)) = (web_config.key_path.clone(), web_config.cert_path.clone())
{
return Ok((cert, key));
} else {
// missing cert or key
return Err("https is enabled but cert or key path is missing".into());
}
}
let (cert_path, key_path) = match web_config.mkcert {
Some(true) => get_rustls_with_mkcert(web_config)?,
_ => get_rustls_without_mkcert(web_config)?,
};
Ok(Some(
RustlsConfig::from_pem_file(cert_path, key_path).await?,
))
}
pub fn get_rustls_with_mkcert(web_config: &WebHttpsConfig) -> Result<(String, String)> {
const DEFAULT_KEY_PATH: &str = "ssl/key.pem";
const DEFAULT_CERT_PATH: &str = "ssl/cert.pem";
@ -573,7 +568,7 @@ pub fn get_rustls_with_mkcert(web_config: &WebHttpsConfig) -> Result<(String, St
_ = fs::create_dir("ssl");
}
let cmd = Command::new("mkcert")
let cmd = tokio::process::Command::new("mkcert")
.args([
"-install",
"-key-file",
@ -599,33 +594,18 @@ pub fn get_rustls_with_mkcert(web_config: &WebHttpsConfig) -> Result<(String, St
return Err("failed to generate mkcert certificates".into());
}
Ok(mut cmd) => {
cmd.wait()?;
cmd.wait().await?;
}
}
Ok((cert_path, key_path))
}
pub fn get_rustls_without_mkcert(web_config: &WebHttpsConfig) -> Result<(String, String)> {
// get paths to cert & key
if let (Some(key), Some(cert)) = (web_config.key_path.clone(), web_config.cert_path.clone()) {
Ok((cert, key))
} else {
// missing cert or key
Err("https is enabled but cert or key path is missing".into())
}
}
/// Open the browser to the address
pub(crate) fn open_browser(base_path: Option<String>, address: SocketAddr, https: bool) {
let protocol = if https { "https" } else { "http" };
let base_path = match base_path.as_deref() {
Some(base_path) => format!("/{}", base_path.trim_matches('/')),
None => "".to_owned(),
};
_ = open::that(format!("{protocol}://{address}{base_path}"));
}
/// Bind a listener to any point and return it
/// When the listener is dropped, the socket will be closed, but we'll still have a port that we
/// can bind our proxy to.
///
/// Todo: we might want to do this on every new build in case the OS tries to bind things to this port
/// and we don't already have something bound to it. There's no great way of "reserving" a port.
fn get_available_port(address: IpAddr) -> Option<u16> {
TcpListener::bind((address, 0))
.map(|listener| listener.local_addr().unwrap().port())
@ -639,7 +619,7 @@ async fn build_status_middleware(
next: Next,
) -> axum::response::Response {
// If the request is for html, and the status is "Building", return the loading page instead of the contents of the response
let accepts = request.headers().get(ACCEPT);
let accepts = request.headers().get(hyper::header::ACCEPT);
let accepts_html = accepts
.and_then(|v| v.to_str().ok())
.map(|v| v.contains("text/html"));
@ -647,7 +627,7 @@ async fn build_status_middleware(
if let Some(true) = accepts_html {
let status = state.get();
if status != Status::Ready {
let html = include_str!("../../assets/loading.html");
let html = include_str!("../../assets/web/loading.html");
return axum::response::Response::builder()
.status(StatusCode::OK)
// Load the html loader then keep loading forever
@ -663,10 +643,48 @@ async fn build_status_middleware(
next.run(request).await
}
async fn send_build_status_to(
build_status: &SharedStatus,
socket: &mut WebSocket,
) -> Result<(), axum::Error> {
let msg = serde_json::to_string(&build_status.get()).unwrap();
socket.send(Message::Text(msg)).await
#[derive(Debug, Clone)]
struct SharedStatus(Arc<RwLock<Status>>);
#[derive(Debug, Clone, PartialEq, Serialize, Deserialize)]
#[serde(tag = "type", content = "data")]
enum Status {
ClientInit {
application_name: String,
platform: Platform,
},
Building {
progress: f64,
build_message: String,
},
BuildError {
error: String,
},
Ready,
}
impl SharedStatus {
fn new(status: Status) -> Self {
Self(Arc::new(RwLock::new(status)))
}
fn new_with_starting_build() -> Self {
Self::new(Status::Building {
progress: 0.0,
build_message: "Starting the build...".to_string(),
})
}
fn set(&self, status: Status) {
*self.0.write().unwrap() = status;
}
fn get(&self) -> Status {
self.0.read().unwrap().clone()
}
async fn send_to(&self, socket: &mut WebSocket) -> Result<(), axum::Error> {
let msg = serde_json::to_string(&self.get()).unwrap();
socket.send(Message::Text(msg)).await
}
}
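For reference, each `Status` goes over the build-status socket as a JSON text frame in serde's adjacently tagged form (`tag = "type"`, `content = "data"`). A standalone sketch of the wire format a devtools client can expect; the mirror enum below is only for illustration, since the real type is private to the CLI:

    use serde::{Deserialize, Serialize};

    #[derive(Debug, Serialize, Deserialize)]
    #[serde(tag = "type", content = "data")]
    enum Status {
        Building { progress: f64, build_message: String },
        Ready,
    }

    fn main() {
        let msg = Status::Building {
            progress: 0.25,
            build_message: "core compiling".to_string(),
        };
        // Prints: {"type":"Building","data":{"progress":0.25,"build_message":"core compiling"}}
        println!("{}", serde_json::to_string(&msg).unwrap());
    }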


@ -0,0 +1,59 @@
use crate::{BuildUpdate, Platform, TraceMsg};
use axum::extract::ws::Message as WsMessage;
use std::{path::PathBuf, process::ExitStatus};
/// One fat enum to rule them all....
///
/// Thanks to libraries like winit for the inspiration
#[allow(clippy::large_enum_variant)]
pub(crate) enum ServeUpdate {
NewConnection,
WsMessage(WsMessage),
/// A build update from the build engine
BuildUpdate(BuildUpdate),
/// A running process has received a stdout.
/// May or may not be a complete line - do not treat it as one; the payload only contains a full line when a complete line was captured.
///
/// We poll for complete lines and any remaining content on a 50ms interval
StdoutReceived {
platform: Platform,
msg: String,
},
/// A running process has received a stderr.
/// May or may not be a complete line - do not treat it as one; the payload only contains a full line when a complete line was captured.
///
/// We poll for complete lines and any remaining content on a 50ms interval
StderrReceived {
platform: Platform,
msg: String,
},
ProcessExited {
platform: Platform,
status: ExitStatus,
},
FilesChanged {
files: Vec<PathBuf>,
},
/// Open an existing app bundle, if it exists
OpenApp,
RequestRebuild,
ToggleShouldRebuild,
Redraw,
TracingLog {
log: TraceMsg,
},
Exit {
error: Option<Box<dyn std::error::Error + Send + Sync>>,
},
}
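Since every source above resolves to a `ServeUpdate`, one `select!` can drive the whole serve session. A hedged sketch of that loop; the handle names (`devserver`, `watcher`, `tracer`) are assumptions here, but `WebServer::wait`, `Watcher::wait`, and `TraceController::wait` all return `ServeUpdate` in this PR:

    async fn serve_loop(
        mut devserver: WebServer,
        mut watcher: Watcher,
        mut tracer: TraceController,
    ) {
        loop {
            let update = tokio::select! {
                msg = devserver.wait() => msg, // new clients and devtools websocket messages
                msg = watcher.wait() => msg,   // batched ServeUpdate::FilesChanged events
                msg = tracer.wait() => msg,    // ServeUpdate::TracingLog destined for the TUI
            };

            match update {
                ServeUpdate::FilesChanged { files: _ } => { /* try a hotreload, else queue a rebuild */ }
                ServeUpdate::TracingLog { log: _ } => { /* render into the TUI */ }
                ServeUpdate::Exit { .. } => break,
                _ => {}
            }
        }
    }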


@ -1,172 +1,66 @@
use std::collections::{HashMap, HashSet};
use std::{fs, path::PathBuf, time::Duration};
use super::hot_reloading_file_map::HotreloadError;
use crate::serve::hot_reloading_file_map::FileMap;
use crate::TraceSrc;
use crate::{cli::serve::Serve, dioxus_crate::DioxusCrate};
use dioxus_devtools::HotReloadMsg;
use dioxus_html::HtmlCtx;
use super::detect::is_wsl;
use super::update::ServeUpdate;
use crate::{cli::serve::ServeArgs, dioxus_crate::DioxusCrate};
use futures_channel::mpsc::{UnboundedReceiver, UnboundedSender};
use futures_util::StreamExt;
use ignore::gitignore::Gitignore;
use notify::{
event::{MetadataKind, ModifyKind},
Config, EventKind,
Config, EventKind, RecursiveMode, Watcher as NotifyWatcher,
};
use std::{path::PathBuf, time::Duration};
/// This struct stores the file watcher and the filemap for the project.
///
/// This is where we do workspace discovery and recursively listen for changes in Rust files and asset
/// directories.
pub struct Watcher {
_tx: UnboundedSender<notify::Event>,
pub(crate) struct Watcher {
rx: UnboundedReceiver<notify::Event>,
_last_update_time: i64,
_watcher: Box<dyn notify::Watcher>,
queued_events: Vec<notify::Event>,
file_map: FileMap,
ignore: Gitignore,
applied_hot_reload_message: Option<HotReloadMsg>,
krate: DioxusCrate,
_tx: UnboundedSender<notify::Event>,
watcher: Box<dyn notify::Watcher>,
}
impl Watcher {
pub fn start(serve: &Serve, config: &DioxusCrate) -> Self {
pub(crate) fn start(krate: &DioxusCrate, serve: &ServeArgs) -> Self {
let (tx, rx) = futures_channel::mpsc::unbounded();
// Extend the watch path to include:
// - the assets directory - this is so we can hotreload CSS and other assets by default
// - the Cargo.toml file - this is so we can hotreload the project if the user changes dependencies
// - the Dioxus.toml file - this is so we can hotreload the project if the user changes the Dioxus config
let mut allow_watch_path = config.dioxus_config.web.watcher.watch_path.clone();
allow_watch_path.push(config.dioxus_config.application.asset_dir.clone());
allow_watch_path.push("Cargo.toml".to_string().into());
allow_watch_path.push("Dioxus.toml".to_string().into());
allow_watch_path.dedup();
let crate_dir = config.crate_dir();
let mut builder = ignore::gitignore::GitignoreBuilder::new(&crate_dir);
builder.add(crate_dir.join(".gitignore"));
let out_dir = config.out_dir();
let out_dir_str = out_dir.display().to_string();
let excluded_paths = vec![
".git",
".github",
".vscode",
"target",
"node_modules",
"dist",
&out_dir_str,
];
for path in excluded_paths {
builder
.add_line(None, path)
.expect("failed to add path to file excluder");
}
let ignore = builder.build().unwrap();
// Build the event handler for notify.
let notify_event_handler = {
let tx = tx.clone();
move |info: notify::Result<notify::Event>| {
if let Ok(e) = info {
if is_allowed_notify_event(&e) {
_ = tx.unbounded_send(e);
}
}
}
};
// If we are in WSL, we must use Notify's poll watcher due to an event propagation issue.
let is_wsl = is_wsl();
const NOTIFY_ERROR_MSG: &str = "Failed to create file watcher.\nEnsure you have the required permissions to watch the specified directories.";
// Create the file watcher.
let mut watcher: Box<dyn notify::Watcher> = match is_wsl {
true => {
let poll_interval = Duration::from_secs(
serve.server_arguments.wsl_file_poll_interval.unwrap_or(2) as u64,
);
Box::new(
notify::PollWatcher::new(
notify_event_handler,
Config::default().with_poll_interval(poll_interval),
)
.expect(NOTIFY_ERROR_MSG),
)
}
false => {
Box::new(notify::recommended_watcher(notify_event_handler).expect(NOTIFY_ERROR_MSG))
}
};
// Watch the specified paths
// todo: make sure we don't double-watch paths if they're nested
for sub_path in allow_watch_path {
let path = &config.crate_dir().join(sub_path);
// If the path is ignored, don't watch it
if ignore.matched(path, path.is_dir()).is_ignore() {
continue;
}
let mode = notify::RecursiveMode::Recursive;
if let Err(err) = watcher.watch(path, mode) {
tracing::warn!("Failed to watch path: {}", err);
}
}
// Probe the entire project looking for our rsx calls
// Whenever we get an update from the file watcher, we'll try to hotreload against this file map
let file_map = FileMap::create_with_filter::<HtmlCtx>(config.crate_dir(), |path| {
ignore.matched(path, path.is_dir()).is_ignore()
})
.unwrap();
Self {
let mut watcher = Self {
watcher: create_notify_watcher(serve, tx.clone()),
_tx: tx,
krate: krate.clone(),
rx,
_watcher: watcher,
file_map,
ignore,
queued_events: Vec::new(),
_last_update_time: chrono::Local::now().timestamp(),
applied_hot_reload_message: None,
}
ignore: krate.workspace_gitignore(),
};
watcher.watch_filesystem();
watcher
}
/// A cancel safe handle to the file watcher
///
/// todo: this should be simpler logic?
pub async fn wait(&mut self) {
// Pull off any queued events in succession
while let Ok(Some(event)) = self.rx.try_next() {
self.queued_events.push(event);
/// Wait for changed files to be detected
pub(crate) async fn wait(&mut self) -> ServeUpdate {
// Wait for the next file to change
let mut changes: Vec<_> = self.rx.next().await.into_iter().collect();
// Dequeue in bulk if we can, we might've received a lot of events in one go
while let Some(event) = self.rx.try_next().ok().flatten() {
changes.push(event);
}
if !self.queued_events.is_empty() {
return;
}
// If there are no queued events, wait for the next event
if let Some(event) = self.rx.next().await {
self.queued_events.push(event);
}
}
/// Deques changed files from the event queue, doing the proper intelligent filtering
pub fn dequeue_changed_files(&mut self, config: &DioxusCrate) -> Vec<PathBuf> {
// Filter the changes
let mut all_mods: Vec<PathBuf> = vec![];
// Decompose the events into a list of all the files that have changed
for event in self.queued_events.drain(..) {
// We only care about certain events.
if !is_allowed_notify_event(&event) {
continue;
for event in changes.drain(..) {
// Make sure we add new folders to the watch list, provided they're not matched by the ignore list
// We'll only watch new folders that are found under the crate, and then update our watcher to watch them
// This unfortunately won't pick up new krates added "at a distance" - i.e. krates not within the workspace.
if let EventKind::Create(_create_kind) = event.kind {
// If it's a new folder, watch it
// If it's a new cargo.toml (ie dep on the fly),
// todo(jon) support new folders on the fly
}
for path in event.paths {
@ -174,18 +68,8 @@ impl Watcher {
}
}
let mut modified_files = vec![];
// For the non-rust files, we want to check if it's an asset file
// This would mean the asset lives somewhere under the /assets directory or is referenced by manganis in the linker
// todo: mg integration here
let _asset_dir = config
.dioxus_config
.application
.asset_dir
.canonicalize()
.ok();
// Collect the files that have changed
let mut files = vec![];
for path in all_mods.iter() {
if path.extension().is_none() {
continue;
@ -201,209 +85,99 @@ impl Watcher {
}
}
// If the extension is a backup file, or a hidden file, ignore it completely (no rebuilds)
if is_backup_file(path.to_path_buf()) {
tracing::trace!("Ignoring backup file: {:?}", path);
continue;
}
// If the path is ignored, don't watch it
if self.ignore.matched(path, path.is_dir()).is_ignore() {
continue;
}
modified_files.push(path.clone());
files.push(path.clone());
}
modified_files
tracing::debug!("Files changed: {files:?}");
ServeUpdate::FilesChanged { files }
}
pub fn attempt_hot_reload(
&mut self,
config: &DioxusCrate,
modified_files: Vec<PathBuf>,
) -> Option<HotReloadMsg> {
// If we have any changes to the rust files, we need to update the file map
let crate_dir = config.crate_dir();
let mut templates = vec![];
fn watch_filesystem(&mut self) {
// Watch the folders of the crates that we're interested in
for path in self.krate.watch_paths() {
tracing::debug!("Watching path {path:?}");
// Prepare the hotreload message we need to send
let mut edited_rust_files = Vec::new();
let mut assets = Vec::new();
let mut unknown_files = Vec::new();
for path in modified_files {
// for various assets that might be linked in, we just try to hotreload them forcefully
// That is, unless they appear in an include! macro, in which case we need to do a full rebuild....
let Some(ext) = path.extension().and_then(|v| v.to_str()) else {
continue;
};
match ext {
"rs" => edited_rust_files.push(path),
_ if path.starts_with("assets") => assets.push(path),
_ => unknown_files.push(path),
if let Err(err) = self.watcher.watch(&path, RecursiveMode::Recursive) {
handle_notify_error(err);
}
}
for rust_file in edited_rust_files {
match self.file_map.update_rsx::<HtmlCtx>(&rust_file, &crate_dir) {
Ok(hotreloaded_templates) => {
templates.extend(hotreloaded_templates);
}
// If the file is not reloadable, we need to rebuild
Err(HotreloadError::Notreloadable) => return None,
// The rust file may have failed to parse, but that is most likely
// because the user is in the middle of adding new code
// We just ignore the error and let Rust analyzer warn about the problem
Err(HotreloadError::Parse) => {}
// Otherwise just log the error
Err(err) => {
tracing::error!(dx_src = ?TraceSrc::Dev, "Error hotreloading file {rust_file:?}: {err}")
}
// Also watch the crates themselves, but not recursively, such that we can pick up new folders
for krate in self.krate.all_watched_crates() {
tracing::debug!("Watching path {krate:?}");
if let Err(err) = self.watcher.watch(&krate, RecursiveMode::NonRecursive) {
handle_notify_error(err);
}
}
let msg = HotReloadMsg {
templates,
assets,
unknown_files,
};
self.add_hot_reload_message(&msg);
Some(msg)
// Also watch the workspace dir, non recursively, such that we can pick up new folders there too
if let Err(err) = self
.watcher
.watch(&self.krate.workspace_dir(), RecursiveMode::NonRecursive)
{
handle_notify_error(err);
}
/// Get any hot reload changes that have been applied since the last full rebuild
pub fn applied_hot_reload_changes(&mut self) -> Option<HotReloadMsg> {
self.applied_hot_reload_message.clone()
}
/// Clear the hot reload changes. This should be called any time a new build is starting
pub fn clear_hot_reload_changes(&mut self) {
self.applied_hot_reload_message.take();
}
/// Store the hot reload changes for any future clients that connect
fn add_hot_reload_message(&mut self, msg: &HotReloadMsg) {
match &mut self.applied_hot_reload_message {
Some(applied) => {
// Merge the assets, unknown files, and templates
// We keep the newer change if there is both a old and new change
let mut templates: HashMap<String, _> = std::mem::take(&mut applied.templates)
.into_iter()
.map(|template| (template.location.clone(), template))
.collect();
let mut assets: HashSet<PathBuf> =
std::mem::take(&mut applied.assets).into_iter().collect();
let mut unknown_files: HashSet<PathBuf> =
std::mem::take(&mut applied.unknown_files)
.into_iter()
.collect();
for template in &msg.templates {
templates.insert(template.location.clone(), template.clone());
}
assets.extend(msg.assets.iter().cloned());
unknown_files.extend(msg.unknown_files.iter().cloned());
applied.templates = templates.into_values().collect();
applied.assets = assets.into_iter().collect();
applied.unknown_files = unknown_files.into_iter().collect();
}
None => {
self.applied_hot_reload_message = Some(msg.clone());
}
}
}
/// Ensure the changes we've received from the queue are actually legit changes to either assets or
/// rust code. We don't care about changes otherwise, unless we get a signal elsewhere to do a full rebuild
pub fn pending_changes(&mut self) -> bool {
!self.queued_events.is_empty()
}
}
fn is_backup_file(path: PathBuf) -> bool {
// If there's a tilde at the end of the file, it's a backup file
if let Some(name) = path.file_name() {
if let Some(name) = name.to_str() {
if name.ends_with('~') {
return true;
fn handle_notify_error(err: notify::Error) {
tracing::debug!("Failed to watch path: {}", err);
match err.kind {
notify::ErrorKind::Io(error) if error.kind() == std::io::ErrorKind::PermissionDenied => {
tracing::error!("Failed to watch path: permission denied. {:?}", err.paths)
}
notify::ErrorKind::MaxFilesWatch => {
tracing::error!("Failed to set up file watcher: too many files to watch")
}
_ => {}
}
// if the file is hidden, it's a backup file
if let Some(name) = path.file_name() {
if let Some(name) = name.to_str() {
if name.starts_with('.') {
return true;
}
}
}
false
}
/// Tests if the provided [`notify::Event`] is something we listen to so we can avoid unnecessary hot reloads.
fn is_allowed_notify_event(event: &notify::Event) -> bool {
match event.kind {
fn create_notify_watcher(
serve: &ServeArgs,
tx: UnboundedSender<notify::Event>,
) -> Box<dyn NotifyWatcher> {
// Build the event handler for notify.
let handler = move |info: notify::Result<notify::Event>| {
let Ok(event) = info else {
return;
};
let is_allowed_notify_event = match event.kind {
EventKind::Modify(ModifyKind::Data(_)) => true,
EventKind::Modify(ModifyKind::Name(_)) => true,
EventKind::Create(_) => true,
EventKind::Remove(_) => true,
// The primary modification event on WSL's poll watcher.
EventKind::Modify(ModifyKind::Metadata(MetadataKind::WriteTime)) => true,
// Catch-all for unknown event types.
EventKind::Modify(ModifyKind::Any) => true,
EventKind::Modify(ModifyKind::Any) => false,
EventKind::Modify(ModifyKind::Metadata(_)) => false,
// Don't care about anything else.
EventKind::Create(_) => true,
EventKind::Remove(_) => true,
_ => false,
};
if is_allowed_notify_event {
_ = tx.unbounded_send(event);
}
}
const WSL_1: &str = "/proc/sys/kernel/osrelease";
const WSL_2: &str = "/proc/version";
const WSL_KEYWORDS: [&str; 2] = ["microsoft", "wsl"];
/// Detects if `dx` is being run in a WSL environment.
///
/// We determine this based on whether the keyword `microsoft` or `wsl` is contained within the [`WSL_1`] or [`WSL_2`] files.
/// This may fail in the future as it isn't guaranteed by Microsoft.
/// See https://github.com/microsoft/WSL/issues/423#issuecomment-221627364
fn is_wsl() -> bool {
// Test 1st File
if let Ok(content) = fs::read_to_string(WSL_1) {
let lowercase = content.to_lowercase();
for keyword in WSL_KEYWORDS {
if lowercase.contains(keyword) {
return true;
}
}
}
// Test 2nd File
if let Ok(content) = fs::read_to_string(WSL_2) {
let lowercase = content.to_lowercase();
for keyword in WSL_KEYWORDS {
if lowercase.contains(keyword) {
return true;
}
}
}
false
}
#[test]
fn test_is_backup_file() {
assert!(is_backup_file(PathBuf::from("examples/test.rs~")));
assert!(is_backup_file(PathBuf::from("examples/.back")));
assert!(is_backup_file(PathBuf::from("test.rs~")));
assert!(is_backup_file(PathBuf::from(".back")));
assert!(!is_backup_file(PathBuf::from("val.rs")));
assert!(!is_backup_file(PathBuf::from(
"/Users/jonkelley/Development/Tinkering/basic_05_example/src/lib.rs"
)));
assert!(!is_backup_file(PathBuf::from("exmaples/val.rs")));
};
const NOTIFY_ERROR_MSG: &str = "Failed to create file watcher.\nEnsure you have the required permissions to watch the specified directories.";
if !is_wsl() {
return Box::new(notify::recommended_watcher(handler).expect(NOTIFY_ERROR_MSG));
}
let poll_interval = Duration::from_secs(serve.wsl_file_poll_interval.unwrap_or(2) as u64);
Box::new(
notify::PollWatcher::new(handler, Config::default().with_poll_interval(poll_interval))
.expect(NOTIFY_ERROR_MSG),
)
}
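A short usage sketch of the watcher, assuming we are inside the CLI crate with a `DioxusCrate` and `ServeArgs` already in hand:

    async fn watch_once(krate: &DioxusCrate, args: &ServeArgs) {
        // start() wires up notify (or the poll watcher on WSL) and begins watching the
        // crate's watch paths, the watched workspace crates, and the workspace root.
        let mut watcher = Watcher::start(krate, args);

        // wait() parks until notify reports something allowed by the event filter, then
        // drains any queued events and returns every changed path in one batch.
        if let ServeUpdate::FilesChanged { files } = watcher.wait().await {
            tracing::debug!("rebuild candidates: {files:?}");
        }
    }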


@ -1,13 +1,8 @@
use crate::{Result, TraceSrc};
use serde::{Deserialize, Serialize};
use std::{
fs,
io::{Error, ErrorKind},
path::PathBuf,
};
use std::{fs, path::PathBuf};
use tracing::{debug, error, warn};
use crate::{CrateConfigError, TraceSrc};
const GLOBAL_SETTINGS_FILE_NAME: &str = "dioxus/settings.toml";
/// Describes cli settings from project or global level.
@ -18,26 +13,26 @@ const GLOBAL_SETTINGS_FILE_NAME: &str = "dioxus/settings.toml";
///
/// This allows users to control the cli settings with ease.
#[derive(Debug, Clone, Serialize, Deserialize, Default)]
pub struct CliSettings {
pub(crate) struct CliSettings {
/// Describes whether hot reload should always be on.
pub always_hot_reload: Option<bool>,
pub(crate) always_hot_reload: Option<bool>,
/// Describes whether the CLI should always open the browser for Web targets.
pub always_open_browser: Option<bool>,
pub(crate) always_open_browser: Option<bool>,
/// Describes whether desktop apps in development will be pinned always-on-top.
pub always_on_top: Option<bool>,
pub(crate) always_on_top: Option<bool>,
/// Describes the interval in seconds that the CLI should poll for file changes on WSL.
#[serde(default = "default_wsl_file_poll_interval")]
pub wsl_file_poll_interval: Option<u16>,
pub(crate) wsl_file_poll_interval: Option<u16>,
}
impl CliSettings {
/// Load the settings from the local, global, or default config in that order
pub fn load() -> Self {
pub(crate) fn load() -> Self {
Self::from_global().unwrap_or_default()
}
/// Get the current settings structure from global.
pub fn from_global() -> Option<Self> {
pub(crate) fn from_global() -> Option<Self> {
let Some(path) = dirs::data_local_dir() else {
warn!("failed to get local data directory, some config keys may be missing");
return None;
@ -63,18 +58,15 @@ impl CliSettings {
/// Save the current structure to the global settings toml.
/// This does not save to project-level settings.
pub fn save(self) -> Result<Self, CrateConfigError> {
pub(crate) fn save(self) -> Result<Self> {
let path = Self::get_settings_path().ok_or_else(|| {
error!(dx_src = ?TraceSrc::Dev, "failed to get settings path");
CrateConfigError::Io(Error::new(
ErrorKind::NotFound,
"failed to get settings path",
))
anyhow::anyhow!("failed to get settings path")
})?;
let data = toml::to_string_pretty(&self).map_err(|e| {
error!(dx_src = ?TraceSrc::Dev, ?self, "failed to parse config into toml");
CrateConfigError::Io(Error::new(ErrorKind::Other, e.to_string()))
anyhow::anyhow!("failed to parse config into toml: {e}")
})?;
// Create the directory structure if it doesn't exist.
@ -86,21 +78,23 @@ impl CliSettings {
?path,
"failed to create directories for settings file"
);
return Err(CrateConfigError::Io(e));
return Err(
anyhow::anyhow!("failed to create directories for settings file: {e}").into(),
);
}
// Write the data.
let result = fs::write(&path, data.clone());
if let Err(e) = result {
error!(?data, ?path, "failed to save global cli settings");
return Err(CrateConfigError::Io(e));
return Err(anyhow::anyhow!("failed to save global cli settings: {e}").into());
}
Ok(self)
}
/// Get the path to the settings toml file.
pub fn get_settings_path() -> Option<PathBuf> {
pub(crate) fn get_settings_path() -> Option<PathBuf> {
let Some(path) = dirs::data_local_dir() else {
warn!("failed to get local data directory, some config keys may be missing");
return None;
@ -110,7 +104,7 @@ impl CliSettings {
}
/// Modify the settings toml file
pub fn modify_settings(with: impl FnOnce(&mut CliSettings)) -> Result<(), CrateConfigError> {
pub(crate) fn modify_settings(with: impl FnOnce(&mut CliSettings)) -> Result<()> {
let mut settings = Self::load();
with(&mut settings);
settings.save()?;
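A small sketch of the settings helper, assuming the CLI crate's `Result` alias; `modify_settings` merges the change and writes it back to `dioxus/settings.toml` under the platform's local data directory:

    fn pin_dev_window_on_top() -> Result<()> {
        // Load, mutate, and persist the global settings in one step.
        CliSettings::modify_settings(|settings| {
            settings.always_on_top = Some(true);
        })?;

        // Subsequent loads read the global file again, falling back to defaults if it is missing.
        assert_eq!(CliSettings::load().always_on_top, Some(true));
        Ok(())
    }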


@ -0,0 +1 @@


@ -14,41 +14,66 @@
//! 3. Build CLI layer for routing tracing logs to the TUI.
//! 4. Build fmt layer for non-interactive logging with a custom writer that prevents output during interactive mode.
use crate::builder::TargetPlatform;
use console::strip_ansi_codes;
use crate::{serve::ServeUpdate, Platform as TargetPlatform};
use cargo_metadata::{diagnostic::DiagnosticLevel, CompilerMessage};
use futures_channel::mpsc::{unbounded, UnboundedReceiver, UnboundedSender};
use std::fmt::Display;
use once_cell::sync::OnceCell;
use std::{
collections::HashMap,
env,
fmt::{Debug, Write as _},
fmt::{Debug, Display, Write as _},
fs,
io::{self, Write},
path::PathBuf,
sync::{
atomic::{AtomicBool, Ordering},
Arc, Mutex,
Mutex,
},
};
use tracing::Level;
use tracing::{field::Visit, Subscriber};
use tracing_subscriber::{
filter::filter_fn, fmt::format, prelude::*, registry::LookupSpan, EnvFilter, Layer,
};
use tracing::{field::Visit, Level, Subscriber};
use tracing_subscriber::{fmt::format, prelude::*, registry::LookupSpan, EnvFilter, Layer};
const LOG_ENV: &str = "DIOXUS_LOG";
const LOG_FILE_NAME: &str = "dx.log";
const DX_SRC_FLAG: &str = "dx_src";
const DX_NO_FMT_FLAG: &str = "dx_no_fmt";
pub fn log_path() -> PathBuf {
let tmp_dir = std::env::temp_dir();
tmp_dir.join(LOG_FILE_NAME)
}
/// Build tracing infrastructure.
pub fn build_tracing() -> CLILogControl {
let mut filter = EnvFilter::new("error,dx=info,dioxus-cli=info,manganis-cli-support=info");
static TUI_ENABLED: AtomicBool = AtomicBool::new(false);
static TUI_TX: OnceCell<UnboundedSender<TraceMsg>> = OnceCell::new();
pub(crate) struct TraceController {
pub(crate) tui_rx: UnboundedReceiver<TraceMsg>,
}
impl TraceController {
/// Get a handle to the trace controller.
pub fn redirect() -> Self {
let (tui_tx, tui_rx) = unbounded();
TUI_ENABLED.store(true, Ordering::SeqCst);
TUI_TX.set(tui_tx.clone()).unwrap();
Self { tui_rx }
}
/// Wait for the internal logger to send a message
pub(crate) async fn wait(&mut self) -> ServeUpdate {
use futures_util::StreamExt;
let log = self.tui_rx.next().await.expect("tracer should never die");
ServeUpdate::TracingLog { log }
}
pub(crate) fn shutdown(&self) {
TUI_ENABLED.store(false, Ordering::SeqCst);
}
/// Build tracing infrastructure.
pub fn initialize() {
let mut filter =
EnvFilter::new("error,dx=trace,dioxus-cli=debug,manganis-cli-support=debug");
if env::var(LOG_ENV).is_ok() {
filter = EnvFilter::from_env(LOG_ENV);
}
@ -64,45 +89,19 @@ pub fn build_tracing() -> CLILogControl {
}
};
// Create writer controller and custom writer.
let (output_tx, output_rx) = unbounded();
let output_enabled = Arc::new(AtomicBool::new(false));
let writer_control = CLILogControl {
output_rx,
output_enabled: output_enabled.clone(),
};
// Build CLI layer
let cli_layer = CLILayer::new(output_enabled.clone(), output_tx);
let cli_layer = CLILayer;
// Build fmt layer
let formatter = format::debug_fn(|writer, field, value| {
let fmt_layer = tracing_subscriber::fmt::layer()
.fmt_fields(
format::debug_fn(|writer, field, value| {
write!(writer, "{}", format_field(field.name(), value))
})
.delimited(" ");
// Format subscriber
let fmt_writer = Mutex::new(FmtLogWriter::new(output_enabled));
let fmt_layer = tracing_subscriber::fmt::layer()
.fmt_fields(formatter)
.with_writer(fmt_writer)
.without_time()
.with_filter(filter_fn(|meta| {
// Filter any logs with "dx_no_fmt" or is not user facing (no dx_src)
let mut fields = meta.fields().iter();
let has_src_flag = fields.any(|f| f.name() == DX_SRC_FLAG);
if !has_src_flag {
return false;
}
let has_fmt_flag = fields.any(|f| f.name() == DX_NO_FMT_FLAG);
if has_fmt_flag {
return false;
}
true
}));
.delimited(" "),
)
.with_writer(Mutex::new(FmtLogWriter {}))
.with_timer(tracing_subscriber::fmt::time::time());
let sub = tracing_subscriber::registry()
.with(filter)
@ -114,8 +113,7 @@ pub fn build_tracing() -> CLILogControl {
let sub = sub.with(console_subscriber::spawn());
sub.init();
writer_control
}
}
/// A logging layer that appends to a file.
@ -148,9 +146,7 @@ where
let mut visitor = CollectVisitor::new();
event.record(&mut visitor);
let new_line = if visitor.source == TraceSrc::Cargo
|| event.fields().any(|f| f.name() == DX_NO_FMT_FLAG)
{
let new_line = if visitor.source == TraceSrc::Cargo {
visitor.message
} else {
let meta = event.metadata();
@ -172,7 +168,7 @@ where
};
// Append logs
let new_data = strip_ansi_codes(&new_line).to_string();
let new_data = console::strip_ansi_codes(&new_line).to_string();
if let Ok(mut buf) = self.buffer.lock() {
*buf += &new_data;
@ -183,22 +179,7 @@ where
}
/// This is our "subscriber" (layer) that records structured data for the tui output.
struct CLILayer {
internal_output_enabled: Arc<AtomicBool>,
output_tx: UnboundedSender<TraceMsg>,
}
impl CLILayer {
pub fn new(
internal_output_enabled: Arc<AtomicBool>,
output_tx: UnboundedSender<TraceMsg>,
) -> Self {
Self {
internal_output_enabled,
output_tx,
}
}
}
struct CLILayer;
impl<S> Layer<S> for CLILayer
where
@ -210,21 +191,13 @@ where
event: &tracing::Event<'_>,
_ctx: tracing_subscriber::layer::Context<'_, S>,
) {
// We only care about user-facing logs.
let has_src_flag = event.fields().any(|f| f.name() == DX_SRC_FLAG);
if !has_src_flag {
return;
}
let mut visitor = CollectVisitor::new();
event.record(&mut visitor);
// If the TUI output is disabled we let fmt subscriber handle the logs
// EXCEPT for cargo logs which we just print.
if !self.internal_output_enabled.load(Ordering::SeqCst) {
if visitor.source == TraceSrc::Cargo
|| event.fields().any(|f| f.name() == DX_NO_FMT_FLAG)
{
if !TUI_ENABLED.load(Ordering::SeqCst) {
if visitor.source == TraceSrc::Cargo {
println!("{}", visitor.message);
}
return;
@ -244,8 +217,10 @@ where
visitor.source = TraceSrc::Dev;
}
self.output_tx
.unbounded_send(TraceMsg::new(visitor.source, *level, final_msg))
TUI_TX
.get()
.unwrap()
.unbounded_send(TraceMsg::text(visitor.source, *level, final_msg))
.unwrap();
}
@ -256,7 +231,6 @@ where
struct CollectVisitor {
message: String,
source: TraceSrc,
dx_user_msg: bool,
fields: HashMap<String, String>,
}
@ -265,7 +239,7 @@ impl CollectVisitor {
Self {
message: String::new(),
source: TraceSrc::Unknown,
dx_user_msg: false,
fields: HashMap::new(),
}
}
@ -285,7 +259,6 @@ impl Visit for CollectVisitor {
if name == DX_SRC_FLAG {
self.source = TraceSrc::from(value_string);
self.dx_user_msg = true;
return;
}
@ -293,51 +266,22 @@ impl Visit for CollectVisitor {
}
}
// Contains the sync primitives to control the CLIWriter.
pub struct CLILogControl {
pub output_rx: UnboundedReceiver<TraceMsg>,
pub output_enabled: Arc<AtomicBool>,
}
struct FmtLogWriter {
stdout: io::Stdout,
output_enabled: Arc<AtomicBool>,
}
impl FmtLogWriter {
pub fn new(output_enabled: Arc<AtomicBool>) -> Self {
Self {
stdout: io::stdout(),
output_enabled,
}
}
}
struct FmtLogWriter {}
impl Write for FmtLogWriter {
fn write(&mut self, buf: &[u8]) -> io::Result<usize> {
// Handle selection between TUI or Terminal output.
if !self.output_enabled.load(Ordering::SeqCst) {
self.stdout.write(buf)
} else {
Ok(buf.len())
}
}
fn flush(&mut self) -> io::Result<()> {
if !self.output_enabled.load(Ordering::SeqCst) {
self.stdout.flush()
} else {
Ok(())
}
}
}
/// Formats a tracing field and value, removing any internal fields from the final output.
fn format_field(field_name: &str, value: &dyn Debug) -> String {
let mut out = String::new();
match field_name {
DX_SRC_FLAG => write!(out, ""),
DX_NO_FMT_FLAG => write!(out, ""),
"message" => write!(out, "{:?}", value),
_ => write!(out, "{}={:?}", field_name, value),
}
@ -350,15 +294,44 @@ fn format_field(field_name: &str, value: &dyn Debug) -> String {
pub struct TraceMsg {
pub source: TraceSrc,
pub level: Level,
pub content: String,
pub content: TraceContent,
pub timestamp: chrono::DateTime<chrono::Local>,
}
#[derive(Clone, PartialEq)]
#[allow(clippy::large_enum_variant)]
pub enum TraceContent {
Cargo(CompilerMessage),
Text(String),
}
impl TraceMsg {
pub fn new(source: TraceSrc, level: Level, content: String) -> Self {
pub fn text(source: TraceSrc, level: Level, content: String) -> Self {
Self {
source,
level,
content,
content: TraceContent::Text(content),
timestamp: chrono::Local::now(),
}
}
/// Create a new trace message from a cargo compiler message
///
/// All `cargo` messages are logged at the `TRACE` level since they get *very* noisy during development
pub fn cargo(content: CompilerMessage) -> Self {
Self {
level: match content.message.level {
DiagnosticLevel::Ice => Level::ERROR,
DiagnosticLevel::Error => Level::ERROR,
DiagnosticLevel::FailureNote => Level::ERROR,
DiagnosticLevel::Warning => Level::TRACE,
DiagnosticLevel::Note => Level::TRACE,
DiagnosticLevel::Help => Level::TRACE,
_ => Level::TRACE,
},
timestamp: chrono::Local::now(),
source: TraceSrc::Cargo,
content: TraceContent::Cargo(content),
}
}
}
@ -368,9 +341,8 @@ pub enum TraceSrc {
App(TargetPlatform),
Dev,
Build,
/// Provides no formatting.
Bundle,
Cargo,
/// Avoid using this
Unknown,
}
@ -385,12 +357,13 @@ impl From<String> for TraceSrc {
fn from(value: String) -> Self {
match value.as_str() {
"dev" => Self::Dev,
"build" => Self::Build,
"bld" => Self::Build,
"cargo" => Self::Cargo,
"web" => Self::App(TargetPlatform::Web),
"desktop" => Self::App(TargetPlatform::Desktop),
"app" => Self::App(TargetPlatform::Web),
"windows" => Self::App(TargetPlatform::Windows),
"macos" => Self::App(TargetPlatform::MacOS),
"linux" => Self::App(TargetPlatform::Linux),
"server" => Self::App(TargetPlatform::Server),
"liveview" => Self::App(TargetPlatform::Liveview),
_ => Self::Unknown,
}
}
@ -401,14 +374,19 @@ impl Display for TraceSrc {
match self {
Self::App(platform) => match platform {
TargetPlatform::Web => write!(f, "web"),
TargetPlatform::Desktop => write!(f, "desktop"),
TargetPlatform::MacOS => write!(f, "macos"),
TargetPlatform::Windows => write!(f, "windows"),
TargetPlatform::Linux => write!(f, "linux"),
TargetPlatform::Server => write!(f, "server"),
TargetPlatform::Liveview => write!(f, "server"),
TargetPlatform::Ios => write!(f, "ios"),
TargetPlatform::Android => write!(f, "android"),
TargetPlatform::Liveview => write!(f, "liveview"),
},
Self::Dev => write!(f, "dev"),
Self::Build => write!(f, "build"),
Self::Cargo => write!(f, "cargo"),
Self::Unknown => write!(f, "n/a"),
Self::Bundle => write!(f, "bundle"),
}
}
}
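A hedged sketch of how a log line flows through this layer in interactive mode, assuming `TraceController::initialize()` has already installed the subscriber:

    async fn log_into_tui() {
        // redirect() flips the TUI flag and hands us the receiving end of the channel,
        // so user-facing logs surface as ServeUpdate events for the TUI to render.
        let mut tracer = TraceController::redirect();

        // Tagging the event with dx_src lets the CLI layer classify it as a TraceSrc::Dev message.
        tracing::info!(dx_src = ?TraceSrc::Dev, "serving your app on http://127.0.0.1:8080");

        if let ServeUpdate::TracingLog { log } = tracer.wait().await {
            // `log` is a TraceMsg carrying the source, level, timestamp, and content.
            let _ = log;
        }
    }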
