Mirror of https://github.com/Tonejs/Tone.js, synced 2024-12-26 03:23:11 +00:00

Commit 3c9ee76d48 (parent f6526c9aaa): Update README.md

1 changed file (README.md) with 50 additions and 50 deletions

@@ -1,13 +1,11 @@

# Tone.js

[![codecov](https://codecov.io/gh/Tonejs/Tone.js/branch/dev/graph/badge.svg)](https://codecov.io/gh/Tonejs/Tone.js)

Tone.js is a Web Audio framework for creating interactive music in the browser. The architecture of Tone.js aims to be familiar to both musicians and audio programmers creating web-based audio applications. At a high level, Tone offers common DAW (digital audio workstation) features like a global transport for synchronizing and scheduling events, as well as prebuilt synths and effects. Additionally, Tone provides high-performance building blocks to create your own synthesizers, effects, and complex control signals.

- [API](https://tonejs.github.io/docs/)
- [Examples](https://tonejs.github.io/examples/)

# Installation

@@ -21,7 +19,7 @@ npm install tone@next // Or, alternatively, use the 'next' version

Add Tone.js to a project using the JavaScript `import` syntax:

```js
import * as Tone from "tone";
```

Tone.js is also hosted at unpkg.com. It can be added directly within an HTML document, as long as it precedes any project scripts. [See the example here](https://github.com/Tonejs/Tone.js/blob/master/examples/simpleHtml.html) for more details.

@@ -50,36 +48,36 @@ synth.triggerAttackRelease("C4", "8n");

```javascript
const synth = new Tone.Synth().toDestination();
const now = Tone.now();
// trigger the attack immediately
synth.triggerAttack("C4", now);
// wait one second before triggering the release
synth.triggerRelease(now + 1);
```

### triggerAttackRelease

`triggerAttackRelease` is a combination of `triggerAttack` and `triggerRelease`.

The first argument is the note, which can be either a frequency in hertz (like `440`) or in "pitch-octave" notation (like `"D#2"`).

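Pitch-octave notation maps onto equal-tempered frequencies (A4 = 440 Hz). The sketch below shows the math behind that mapping in plain JavaScript; `noteToFrequency` is a hypothetical helper for illustration, not part of Tone.js — Tone accepts both formats directly.

```javascript
// Hypothetical helper, not Tone.js code: converts "pitch-octave" notation
// (e.g. "D#2") to a frequency in hertz, assuming A4 = 440 Hz and
// twelve-tone equal temperament.
function noteToFrequency(note) {
    const semitones = { C: 0, "C#": 1, D: 2, "D#": 3, E: 4, F: 5, "F#": 6, G: 7, "G#": 8, A: 9, "A#": 10, B: 11 };
    const [, pitch, octave] = note.match(/^([A-G]#?)(-?\d+)$/);
    // MIDI convention: C-1 is note 0, so A4 works out to note 69
    const midi = semitones[pitch] + (Number(octave) + 1) * 12;
    return 440 * Math.pow(2, (midi - 69) / 12);
}

console.log(noteToFrequency("A4")); // 440
```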
The second argument is the duration that the note is held. This value can either be in seconds, or as a [tempo-relative value](https://github.com/Tonejs/Tone.js/wiki/Time).

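Tempo-relative values tie durations to the tempo in beats per minute. A minimal sketch of the idea in plain JavaScript (`notationToSeconds` is a made-up name, not Tone.js internals), covering only simple subdivisions like `"4n"` and `"8n"`:

```javascript
// Hypothetical helper, not Tone.js code: convert a simple subdivision
// like "4n" (quarter-note) or "8n" (eighth-note) into seconds at a BPM.
function notationToSeconds(notation, bpm) {
    const quarterNote = 60 / bpm; // a quarter-note lasts 60/bpm seconds
    const subdivision = parseInt(notation, 10); // "8n" -> 8
    return quarterNote * (4 / subdivision); // a whole note is 4 quarter-notes
}

console.log(notationToSeconds("8n", 120)); // 0.25
```

At 120 BPM an `"8n"` lasts a quarter of a second; change the BPM and every tempo-relative duration rescales with it.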
The third (optional) argument of `triggerAttackRelease` is _when_ along the AudioContext time the note should play. It can be used to schedule events in the future.

```javascript
const synth = new Tone.Synth().toDestination();
const now = Tone.now();
synth.triggerAttackRelease("C4", "8n", now);
synth.triggerAttackRelease("E4", "8n", now + 0.5);
synth.triggerAttackRelease("G4", "8n", now + 1);
```

## Time

Web Audio has advanced, sample-accurate scheduling capabilities. The AudioContext time, which the Web Audio API uses to schedule events, starts at 0 when the page loads and counts up in **seconds**.

`Tone.now()` gets the current time of the AudioContext.

```javascript
setInterval(() => console.log(Tone.now()), 100);
```

@@ -91,17 +89,17 @@ Tone.js abstracts away the AudioContext time. Instead of defining all values in

# Starting Audio

**IMPORTANT**: Browsers will not play _any_ audio until a user clicks something (like a play button). Run your Tone.js code only after calling `Tone.start()` from an event listener which is triggered by a user action such as "click" or "keydown".

`Tone.start()` returns a promise; the audio will be ready only after that promise resolves. Scheduling or playing audio before the AudioContext is running will result in silence or incorrect scheduling.

```javascript
// attach a click listener to a play button
document.querySelector("button")?.addEventListener("click", async () => {
    await Tone.start();
    console.log("audio is ready");
});
```

# Scheduling

@@ -116,32 +114,32 @@ Multiple events and parts can be arranged and synchronized along the Transport.

```javascript
const synthA = new Tone.FMSynth().toDestination();
const synthB = new Tone.AMSynth().toDestination();
// play a note every quarter-note
const loopA = new Tone.Loop((time) => {
    synthA.triggerAttackRelease("C2", "8n", time);
}, "4n").start(0);
// play another note on every off quarter-note, by starting it an "8n" later
const loopB = new Tone.Loop((time) => {
    synthB.triggerAttackRelease("C4", "8n", time);
}, "4n").start("8n");
// all loops start when the Transport is started
Tone.getTransport().start();
// ramp up to 800 bpm over 10 seconds
Tone.getTransport().bpm.rampTo(800, 10);
```

Since JavaScript callbacks are **not precisely timed**, the sample-accurate time of the event is passed into the callback function. **Use this time value to schedule the events**.

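One way to see why the passed-in time matters: events sit on an exact grid, and scheduling against grid times (rather than whenever the callback happens to fire) avoids drift. An illustrative sketch in plain JavaScript — `gridTimes` is a hypothetical helper, not Tone.js API:

```javascript
// Hypothetical helper: compute exact, drift-free event times spaced
// `interval` seconds apart, the way a lookahead scheduler would.
function gridTimes(startTime, interval, count) {
    return Array.from({ length: count }, (_, i) => startTime + i * interval);
}

console.log(gridTimes(0, 0.5, 4)); // [ 0, 0.5, 1, 1.5 ]
```

Even if a callback fires a few milliseconds late, scheduling the audio event at its grid time keeps the result tight.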
# Instruments

There are numerous synths to choose from, including `Tone.FMSynth`, `Tone.AMSynth`, and `Tone.NoiseSynth`.

All of these instruments are **monophonic** (single voice), which means that they can only play one note at a time.

To create a **polyphonic** synthesizer, use `Tone.PolySynth`, which accepts a monophonic synth as its first parameter and automatically handles the note allocation so you can pass in multiple notes. The API is similar to the monophonic synths, except `triggerRelease` must be given a note or array of notes.

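A rough sketch of what note allocation involves (hypothetical code for illustration, not `Tone.PolySynth` internals): each attack claims a voice up to the voice limit, and each release frees one.

```javascript
// Hypothetical voice allocator, for illustration only.
class VoiceAllocator {
    constructor(maxVoices) {
        this.maxVoices = maxVoices;
        this.active = new Set(); // notes currently holding a voice
    }
    attack(note) {
        if (this.active.size >= this.maxVoices) return false; // no free voice
        this.active.add(note);
        return true;
    }
    release(note) {
        return this.active.delete(note); // true if the note was sounding
    }
}

const voices = new VoiceAllocator(2);
console.log(voices.attack("D4"), voices.attack("F4"), voices.attack("A4")); // true true false
```

This is why a polyphonic `triggerRelease` needs the note names: the allocator has to know which voice to free.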
```javascript
const synth = new Tone.PolySynth(Tone.Synth).toDestination();
const now = Tone.now();
synth.triggerAttack("D4", now);
synth.triggerAttack("F4", now + 0.5);
synth.triggerAttack("A4", now + 1);
```

@@ -152,30 +150,32 @@ synth.triggerRelease(["D4", "F4", "A4", "C5", "E5"], now + 4);

# Samples

Sound generation is not limited to synthesized sounds. You can also load a sample and play that back in a number of ways. `Tone.Player` is one way to load and play back an audio file.

```javascript
const player = new Tone.Player(
    "https://tonejs.github.io/audio/berklee/gong_1.mp3"
).toDestination();
Tone.loaded().then(() => {
    player.start();
});
```

`Tone.loaded()` returns a promise which resolves when _all_ audio files are loaded. It's a helpful shorthand instead of waiting on each individual audio buffer's `onload` event to resolve.

## Tone.Sampler

Multiple samples can also be combined into an instrument. If you have audio files organized by note, `Tone.Sampler` will pitch shift the samples to fill in gaps between notes. So for example, if you only have every 3rd note on a piano sampled, you could turn that into a full piano sample.

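The gap-filling works by repitching: conceptually, pick the closest sampled note and adjust the playback rate by the semitone distance. A sketch of that idea in plain JavaScript using MIDI note numbers — `repitch` is a hypothetical helper, not `Tone.Sampler` internals:

```javascript
// Hypothetical sketch: choose the nearest sampled note and the
// playback-rate ratio needed to reach the requested pitch.
// Each semitone corresponds to a factor of 2^(1/12) in playback rate.
function repitch(targetMidi, sampledMidis) {
    const nearest = sampledMidis.reduce((a, b) =>
        Math.abs(b - targetMidi) < Math.abs(a - targetMidi) ? b : a
    );
    return {
        sample: nearest,
        playbackRate: Math.pow(2, (targetMidi - nearest) / 12),
    };
}

console.log(repitch(62, [60, 63, 66]).sample); // 63
```

Requesting D4 (MIDI 62) with only C4, D#4, and F#4 sampled picks the D#4 sample and slows it down by one semitone.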
Unlike the other synths, `Tone.Sampler` is polyphonic, so it doesn't need to be passed into `Tone.PolySynth`.

```javascript
const sampler = new Tone.Sampler({
    urls: {
        C4: "C4.mp3",
        "D#4": "Ds4.mp3",
        "F#4": "Fs4.mp3",
        A4: "A4.mp3",
    },
    release: 1,
    baseUrl: "https://tonejs.github.io/audio/salamander/",
}).toDestination();
```

@@ -183,7 +183,7 @@ const sampler = new Tone.Sampler({

```javascript
Tone.loaded().then(() => {
    sampler.triggerAttackRelease(["Eb4", "G4", "Bb4"], 4);
});
```

# Effects

@@ -195,21 +195,21 @@ const player = new Tone.Player({

```javascript
const player = new Tone.Player({
    url: "https://tonejs.github.io/audio/berklee/gurgling_theremin_1.mp3",
    loop: true,
    autostart: true,
});
// create a distortion effect
const distortion = new Tone.Distortion(0.4).toDestination();
// connect a player to the distortion
player.connect(distortion);
```

The connection routing is flexible: connections can run serially or in parallel.

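Parallel routing works because Web Audio sums the signals that arrive at a shared input. A toy illustration of that summing in plain JavaScript (not actual Web Audio code — real graphs mix continuous audio streams, not arrays):

```javascript
// Toy sketch: two parallel effect outputs arriving at one destination
// simply add together, sample by sample.
function mix(channelA, channelB) {
    return channelA.map((sample, i) => sample + channelB[i]);
}

console.log(mix([1, 2, 3], [10, 20, 30])); // [ 11, 22, 33 ]
```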
```javascript
const player = new Tone.Player({
    url: "https://tonejs.github.io/audio/drum-samples/loops/ominous.mp3",
    autostart: true,
});
const filter = new Tone.Filter(400, "lowpass").toDestination();
const feedbackDelay = new Tone.FeedbackDelay(0.125, 0.5).toDestination();

// connect the player to the feedback delay and filter in parallel
```

@@ -217,13 +217,13 @@ player.connect(filter);

```javascript
player.connect(feedbackDelay);
```

Multiple nodes can be connected to the same input, enabling sources to share effects. `Tone.Gain` is a useful utility node for creating complex routing.

# Signals

Like the underlying Web Audio API, Tone.js is built with audio-rate signal control over nearly everything. This is a powerful feature which allows for sample-accurate synchronization and scheduling of parameters.

`Signal` properties have a few built-in methods for creating automation curves.

For example, the `frequency` parameter on `Oscillator` is a `Signal`, so you can create a smooth ramp from one frequency to another.

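Conceptually, a ramp computes intermediate values between the current and target value over the ramp duration, and the audio engine evaluates that curve at audio rate. A linear version as an illustrative sketch (`linearRamp` is a hypothetical helper; real automation curves can also be exponential):

```javascript
// Hypothetical linear ramp, for illustration: the value at time `t`
// while ramping from `from` to `to` over `duration` seconds.
function linearRamp(from, to, duration, t) {
    const clamped = Math.min(Math.max(t, 0), duration); // hold endpoints
    return from + (to - from) * (clamped / duration);
}

console.log(linearRamp(440, 880, 2, 1)); // 660
```

Halfway through a 2-second ramp from 440 Hz to 880 Hz the frequency passes through 660 Hz; past the end of the ramp the value holds at the target.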
@@ -247,13 +247,13 @@ To use MIDI files, you'll first need to convert them into a JSON format which To

# Performance

Tone.js makes extensive use of the native Web Audio nodes, such as `GainNode` and `WaveShaperNode`, for all signal processing, which enables Tone.js to work well on both desktop and mobile browsers.

[This wiki article](https://github.com/Tonejs/Tone.js/wiki/Performance) has some best-practice suggestions for improving performance.

# Testing

Tone.js runs an extensive test suite using [mocha](https://mochajs.org/) and [chai](http://chaijs.com/) with nearly 100% coverage. Passing builds on the `dev` branch are published on npm as `tone@next`.

# Contributing

@@ -263,9 +263,9 @@ If you have questions (or answers) that are not necessarily bugs/issues, please

# References and Inspiration

- [Many of Chris Wilson's Repositories](https://github.com/cwilso)
- [Many of Mohayonao's Repositories](https://github.com/mohayonao)
- [The Spec](http://webaudio.github.io/web-audio-api/)
- [Sound on Sound - Synth Secrets](http://www.soundonsound.com/sos/may99/articles/synthsec.htm)
- [Miller Puckette - Theory and Techniques of Electronic Music](http://msp.ucsd.edu/techniques.htm)
- [standardized-audio-context](https://github.com/chrisguttandin/standardized-audio-context)