Added new Dynamic FX Generator.

Added AudioContext polyfill.
Added start of ROADMAP.
photonstorm 2017-01-11 04:38:15 +00:00
parent 5893665a38
commit 36825cd487
8 changed files with 695 additions and 1 deletions

v3/dev-guide/ROADMAP.md (new file, 164 lines)

@@ -0,0 +1,164 @@
# Phaser 3 Development Roadmap

The following is a list of all the key areas of the Phaser 2 API, and how they'll map to the Phaser 3 API.

## Animation

V2:

* Animation Manager
* Animation Parser
* Animation Class
* FrameData
* Frame Class
* Creature run-time libs

V3:

The Texture Manager now handles all Texture parsing. It splits up Texture Atlases, creates Frame objects and handles Frame functions such as Crop.

TODO:

* Define the format and API calls that Animations will take in Phaser 3, and decide whether we require a 'central' Animation registry, rather than creating them multiple times, per Sprite instance.
* Decide if the Creature libs can still be supported.

## Camera

V2:

The Camera was essentially a Rectangle object with some special commands to allow for Camera effects (shake, flash) and the tracking of Game Objects. It could never properly handle rotation or scaling.

V3:

The Camera is now a display-level object with its own Transform, allowing you to rotate and scale it, and have it update the scene correctly.

TODO:

* Camera effects (fade, flash)
* Camera follow / target
The following areas of the API are still to be mapped out:

* Filter
* Group
* Plugins
* Scale Manager
* Signals
* Stage
* State Manager
* World
* Game Objects
    * BitmapData
    * BitmapText
    * Button
    * Creature
    * Graphics
    * Image
    * Particle
    * RenderTexture
    * RetroFont
    * Rope
    * Sprite
    * SpriteBatch
    * Text
    * TileSprite
    * Video
* Geometry
    * Circle
    * Ellipse
    * Hermite
    * Line
    * Matrix
    * Point
    * Polygon
    * Rectangle
    * RoundedRectangle
* Input
    * Input Manager
    * Keyboard + Key
    * Mouse
    * MSPointer
    * Touch
    * Pointer
    * Gamepad
* Loader
* Cache
* Math
    * Math functions
    * QuadTree
    * Random Data Generator
* Net
* Particles
    * Arcade Physics Emitter + Particle
* Physics
    * Arcade Physics
    * Ninja Physics
    * P2 Physics
* Renderer
    * Canvas
        * Graphics Primitives
        * Canvas Tint
    * WebGL
        * RenderTextures
        * Sprite Batch
        * Filters
        * Graphics Primitives
* Sound
    * Sound Manager
    * Sound
    * AudioSprite
* Tilemap
    * Tilemap class
    * Tilemap Layer
    * Tileset
    * Tile
    * ImageCollection
* Time
    * Master Time
    * Timer
    * TimerEvent
* Tween
    * Tween Manager
    * Tween + TweenData
    * Easing functions
* Utils
    * ArraySet
    * ArrayUtils
    * Canvas Utils
    * Canvas Pool
    * Color
    * Debug
    * Device
    * DOM
    * EarCut
    * LinkedList
    * RequestAnimationFrame
    * Generic Utils
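The 'central' Animation registry idea raised in the Animation TODO above could be sketched as a simple keyed store, so animation data is defined once and shared by every Sprite instance rather than duplicated per Sprite. All names below are hypothetical, not the actual Phaser 3 API:

```javascript
// Hypothetical sketch of a central animation registry: definitions are
// stored once, keyed by name, and Sprites would hold only playback state.
var AnimationRegistry = {

    anims: {},

    //  Register an animation definition once, globally
    add: function (key, frames, frameRate)
    {
        this.anims[key] = { key: key, frames: frames, frameRate: frameRate };

        return this.anims[key];
    },

    //  Every Sprite shares the same definition object
    get: function (key)
    {
        return this.anims[key];
    }

};
```

A Sprite would then resolve its animation via `AnimationRegistry.get('walk')` at play time, instead of owning its own FrameData copy.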

@@ -1,4 +1,4 @@
 var CHECKSUM = {
-    build: 'ca0e6af0-d6c6-11e6-8a0b-c583e07b191c'
+    build: 'cd5e7c40-d7b6-11e6-b751-619825ddff5c'
 };
 module.exports = CHECKSUM;

@@ -25,6 +25,8 @@ var Phaser = {
     },
+    Sound: require('./sound'),
+
     Utils: {
         Array: require('./utils/array/'),
@@ -0,0 +1,182 @@
/* Copyright 2013 Chris Wilson
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
/*
This monkeypatch library is intended to be included in projects that are
written to the proper AudioContext spec (instead of webkitAudioContext),
and that use the new naming and proper bits of the Web Audio API (e.g.
using BufferSourceNode.start() instead of BufferSourceNode.noteOn()), but may
have to run on systems that only support the deprecated bits.
This library should be harmless to include if the browser supports
unprefixed "AudioContext", and/or if it supports the new names.
The patches this library handles:
if window.AudioContext is unsupported, it will be aliased to webkitAudioContext().
if AudioBufferSourceNode.start() is unimplemented, it will be routed to noteOn() or
noteGrainOn(), depending on parameters.
The following aliases only take effect if the new names are not already in place:
AudioBufferSourceNode.stop() is aliased to noteOff()
AudioContext.createGain() is aliased to createGainNode()
AudioContext.createDelay() is aliased to createDelayNode()
AudioContext.createScriptProcessor() is aliased to createJavaScriptNode()
AudioContext.createPeriodicWave() is aliased to createWaveTable()
OscillatorNode.start() is aliased to noteOn()
OscillatorNode.stop() is aliased to noteOff()
OscillatorNode.setPeriodicWave() is aliased to setWaveTable()
AudioParam.setTargetAtTime() is aliased to setTargetValueAtTime()
This library does NOT patch the enumerated type changes, as it is
recommended in the specification that implementations support both integer
and string types for AudioPannerNode.panningModel, AudioPannerNode.distanceModel
BiquadFilterNode.type and OscillatorNode.type.
*/
(function (global, exports, perf) {
'use strict';
function fixSetTarget(param) {
if (!param) // if NYI, just return
return;
if (!param.setTargetAtTime)
param.setTargetAtTime = param.setTargetValueAtTime;
}
if (window.hasOwnProperty('webkitAudioContext') &&
!window.hasOwnProperty('AudioContext')) {
window.AudioContext = webkitAudioContext;
if (!AudioContext.prototype.hasOwnProperty('createGain'))
AudioContext.prototype.createGain = AudioContext.prototype.createGainNode;
if (!AudioContext.prototype.hasOwnProperty('createDelay'))
AudioContext.prototype.createDelay = AudioContext.prototype.createDelayNode;
if (!AudioContext.prototype.hasOwnProperty('createScriptProcessor'))
AudioContext.prototype.createScriptProcessor = AudioContext.prototype.createJavaScriptNode;
if (!AudioContext.prototype.hasOwnProperty('createPeriodicWave'))
AudioContext.prototype.createPeriodicWave = AudioContext.prototype.createWaveTable;
AudioContext.prototype.internal_createGain = AudioContext.prototype.createGain;
AudioContext.prototype.createGain = function() {
var node = this.internal_createGain();
fixSetTarget(node.gain);
return node;
};
AudioContext.prototype.internal_createDelay = AudioContext.prototype.createDelay;
AudioContext.prototype.createDelay = function(maxDelayTime) {
var node = maxDelayTime ? this.internal_createDelay(maxDelayTime) : this.internal_createDelay();
fixSetTarget(node.delayTime);
return node;
};
AudioContext.prototype.internal_createBufferSource = AudioContext.prototype.createBufferSource;
AudioContext.prototype.createBufferSource = function() {
var node = this.internal_createBufferSource();
if (!node.start) {
node.start = function ( when, offset, duration ) {
if ( offset || duration )
this.noteGrainOn( when || 0, offset, duration );
else
this.noteOn( when || 0 );
};
} else {
node.internal_start = node.start;
node.start = function( when, offset, duration ) {
if( typeof duration !== 'undefined' )
node.internal_start( when || 0, offset, duration );
else
node.internal_start( when || 0, offset || 0 );
};
}
if (!node.stop) {
node.stop = function ( when ) {
this.noteOff( when || 0 );
};
} else {
node.internal_stop = node.stop;
node.stop = function( when ) {
node.internal_stop( when || 0 );
};
}
fixSetTarget(node.playbackRate);
return node;
};
AudioContext.prototype.internal_createDynamicsCompressor = AudioContext.prototype.createDynamicsCompressor;
AudioContext.prototype.createDynamicsCompressor = function() {
var node = this.internal_createDynamicsCompressor();
fixSetTarget(node.threshold);
fixSetTarget(node.knee);
fixSetTarget(node.ratio);
fixSetTarget(node.reduction);
fixSetTarget(node.attack);
fixSetTarget(node.release);
return node;
};
AudioContext.prototype.internal_createBiquadFilter = AudioContext.prototype.createBiquadFilter;
AudioContext.prototype.createBiquadFilter = function() {
var node = this.internal_createBiquadFilter();
fixSetTarget(node.frequency);
fixSetTarget(node.detune);
fixSetTarget(node.Q);
fixSetTarget(node.gain);
return node;
};
if (AudioContext.prototype.hasOwnProperty( 'createOscillator' )) {
AudioContext.prototype.internal_createOscillator = AudioContext.prototype.createOscillator;
AudioContext.prototype.createOscillator = function() {
var node = this.internal_createOscillator();
if (!node.start) {
node.start = function ( when ) {
this.noteOn( when || 0 );
};
} else {
node.internal_start = node.start;
node.start = function ( when ) {
node.internal_start( when || 0);
};
}
if (!node.stop) {
node.stop = function ( when ) {
this.noteOff( when || 0 );
};
} else {
node.internal_stop = node.stop;
node.stop = function( when ) {
node.internal_stop( when || 0 );
};
}
if (!node.setPeriodicWave)
node.setPeriodicWave = node.setWaveTable;
fixSetTarget(node.frequency);
fixSetTarget(node.detune);
return node;
};
}
}
if (window.hasOwnProperty('webkitOfflineAudioContext') &&
!window.hasOwnProperty('OfflineAudioContext')) {
window.OfflineAudioContext = webkitOfflineAudioContext;
}
}(window));
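The alias-then-wrap pattern the patch above applies (alias the modern factory name to the legacy one, then wrap the factory so returned AudioParams also gain the modern method names) can be seen in isolation with a stand-in object. The `proto` object below is a mock of a deprecated-only implementation, not a real AudioContext:

```javascript
//  Mock of a deprecated-only implementation: only the legacy names exist
var proto = {
    createGainNode: function ()
    {
        return { gain: { setTargetValueAtTime: function () { return 'ramped'; } } };
    }
};

//  Step 1: alias the modern name to the legacy one if it's missing
if (!proto.createGain)
{
    proto.createGain = proto.createGainNode;
}

//  Step 2: wrap the factory so returned params gain the modern method too
proto.internal_createGain = proto.createGain;

proto.createGain = function ()
{
    var node = this.internal_createGain();

    if (!node.gain.setTargetAtTime)
    {
        node.gain.setTargetAtTime = node.gain.setTargetValueAtTime;
    }

    return node;
};
```

Code written to the modern spec (`createGain`, `setTargetAtTime`) then works unchanged on top of the legacy names.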

@@ -1,5 +1,6 @@
 require('./Array.forEach');
 require('./Array.isArray');
+require('./AudioContextMonkeyPatch');
 require('./console');
 require('./Function.bind');
 require('./Math.trunc');

v3/src/sound/dynamic/FX.js (new file, 331 lines)

@@ -0,0 +1,331 @@
var Between = require('../../math/Between');
var GetObjectValue = require('../../utils/GetObjectValue');
// Phaser.Sound.Dynamic.FX
// Based on Sound.js by KittyKatAttack
// https://github.com/kittykatattack/sound.js
//
// Config values:
//
// frequency - The sound's frequency pitch in Hertz
// attack - The time, in seconds, to fade the sound in
// decay - The time, in seconds, to fade the sound out
// type - The waveform type: 'sine', 'triangle', 'square' or 'sawtooth'
// volume - The sound's maximum volume
// pan - The speaker pan. left: -1, middle: 0, right: 1
// wait - The time, in seconds, to wait before playing the sound
// pitchBend - The number of Hz in which to bend the sound's pitch down
// reverse - If `reverse` is true the pitch will bend up instead
// random - A range, in Hz, within which to randomize the pitch
// dissonance - A value in Hz. It creates 2 dissonant frequencies above and below the target pitch
// echo - An object: { delay, feedback, filter } (times in seconds, filter in Hz)
// reverb - An object: { duration, decay, reverse }
// timeout - A number, in seconds, which is the maximum duration for the sound effect
var FX = function (ctx, config)
{
this.audioContext = ctx;
this.frequencyValue = GetObjectValue(config, 'frequency', 200);
this.attack = GetObjectValue(config, 'attack', 0);
this.decay = GetObjectValue(config, 'decay', 1);
this.type = GetObjectValue(config, 'type', 'sine');
this.volumeValue = GetObjectValue(config, 'volume', 1);
this.panValue = GetObjectValue(config, 'pan', 0);
this.wait = GetObjectValue(config, 'wait', 0);
this.pitchBendAmount = GetObjectValue(config, 'pitchBend', 0);
this.reverse = GetObjectValue(config, 'reverse', false);
this.randomValue = GetObjectValue(config, 'random', 0);
this.dissonance = GetObjectValue(config, 'dissonance', 0);
this.echo = GetObjectValue(config, 'echo', false);
this.echoDelay = GetObjectValue(config, 'echo.delay', 0);
this.echoFeedback = GetObjectValue(config, 'echo.feedback', 0);
this.echoFilter = GetObjectValue(config, 'echo.filter', 0);
this.reverb = GetObjectValue(config, 'reverb', false);
this.reverbDuration = GetObjectValue(config, 'reverb.duration', 0);
this.reverbDecay = GetObjectValue(config, 'reverb.decay', 0);
this.reverbReverse = GetObjectValue(config, 'reverb.reverse', false);
this.timeout = GetObjectValue(config, 'timeout', false);
this.volume = ctx.createGain();
this.pan = (!ctx.createStereoPanner) ? ctx.createPanner() : ctx.createStereoPanner();
this.volume.connect(this.pan);
this.pan.connect(ctx.destination);
// Set the values
this.volume.gain.value = this.volumeValue;
if (!ctx.createStereoPanner)
{
this.pan.setPosition(this.panValue, 0, 1 - Math.abs(this.panValue));
}
else
{
this.pan.pan.value = this.panValue;
}
// Create an oscillator, gain and pan nodes, and connect them together to the destination
var oscillator = ctx.createOscillator();
oscillator.connect(this.volume);
oscillator.type = this.type;
// Optionally randomize the pitch if `randomValue` > 0.
// A random pitch is selected that's within the range specified by `frequencyValue`.
// The random pitch will be either above or below the target frequency.
if (this.randomValue > 0)
{
oscillator.frequency.value = Between(
this.frequencyValue - this.randomValue / 2,
this.frequencyValue + this.randomValue / 2
);
}
else
{
oscillator.frequency.value = this.frequencyValue;
}
// Apply effects
if (this.attack > 0)
{
this.fadeIn(this.volume);
}
this.fadeOut(this.volume);
if (this.pitchBendAmount > 0)
{
this.pitchBend(oscillator);
}
if (this.echo)
{
this.addEcho(this.volume);
}
if (this.reverb)
{
this.addReverb(this.volume);
}
if (this.dissonance > 0)
{
this.addDissonance();
}
this.play(oscillator);
var _this = this;
oscillator.onended = function ()
{
_this.pan.disconnect();
_this.volume.disconnect();
};
};
FX.prototype = {
constructor: FX,
play: function (oscillator)
{
oscillator.start(this.audioContext.currentTime + this.wait);
//Oscillators have to be stopped otherwise they accumulate in
//memory and tax the CPU. They'll be stopped after a default
//timeout of 2 seconds, which should be enough for most sound
//effects. Pass a `timeout` config value if you need a longer sound
oscillator.stop(this.audioContext.currentTime + this.wait + (this.timeout || 2));
},
fadeIn: function (volume)
{
volume.gain.value = 0;
volume.gain.linearRampToValueAtTime(0, this.audioContext.currentTime + this.wait);
volume.gain.linearRampToValueAtTime(this.volumeValue, this.audioContext.currentTime + this.wait + this.attack);
},
fadeOut: function (volume)
{
volume.gain.linearRampToValueAtTime(this.volumeValue, this.audioContext.currentTime + this.wait + this.attack);
volume.gain.linearRampToValueAtTime(0, this.audioContext.currentTime + this.wait + this.attack + this.decay);
},
addReverb: function (volume)
{
var convolver = this.audioContext.createConvolver();
convolver.buffer = this.impulseResponse(this.reverbDuration, this.reverbDecay, this.reverbReverse);
volume.connect(convolver);
convolver.connect(this.pan);
},
addEcho: function (volume)
{
var feedback = this.audioContext.createGain();
var delay = this.audioContext.createDelay();
var filter = this.audioContext.createBiquadFilter();
// Set the node values
feedback.gain.value = this.echoFeedback;
delay.delayTime.value = this.echoDelay;
if (this.echoFilter)
{
filter.frequency.value = this.echoFilter;
}
// Create the delay feedback loop (with optional filtering)
delay.connect(feedback);
if (this.echoFilter)
{
feedback.connect(filter);
filter.connect(delay);
}
else
{
feedback.connect(delay);
}
// Connect the delay node to the oscillator volume node
volume.connect(delay);
// Connect the delay node to the main sound chains pan node,
// so that the echo effect is directed to the correct speaker
delay.connect(this.pan);
},
pitchBend: function (oscillator)
{
var frequency = oscillator.frequency.value;
if (!this.reverse)
{
// If reverse is false, make the sound drop in pitch
oscillator.frequency.linearRampToValueAtTime(frequency, this.audioContext.currentTime + this.wait);
oscillator.frequency.linearRampToValueAtTime(frequency - this.pitchBendAmount, this.audioContext.currentTime + this.wait + this.attack + this.decay);
}
else
{
// If reverse is true, make the sound rise in pitch
oscillator.frequency.linearRampToValueAtTime(frequency, this.audioContext.currentTime + this.wait);
oscillator.frequency.linearRampToValueAtTime(frequency + this.pitchBendAmount, this.audioContext.currentTime + this.wait + this.attack + this.decay);
}
},
addDissonance: function ()
{
// Create two more oscillators and gain nodes
var ctx = this.audioContext;
var d1 = ctx.createOscillator();
var d2 = ctx.createOscillator();
var d1Volume = ctx.createGain();
var d2Volume = ctx.createGain();
// Set the volume to the `volumeValue`
d1Volume.gain.value = this.volumeValue;
d2Volume.gain.value = this.volumeValue;
// Connect the oscillators to the gain and destination nodes
d1.connect(d1Volume);
d2.connect(d2Volume);
d1Volume.connect(ctx.destination);
d2Volume.connect(ctx.destination);
// Set the waveform to "sawtooth" for a harsh effect
d1.type = 'sawtooth';
d2.type = 'sawtooth';
// Make the two oscillators play at frequencies above and below the main sound's frequency.
// Use whatever value was supplied by the `dissonance` argument
d1.frequency.value = this.frequencyValue + this.dissonance;
d2.frequency.value = this.frequencyValue - this.dissonance;
// Fade in / out, pitch bend and play the oscillators to match the main sound
if (this.attack > 0)
{
this.fadeIn(d1Volume);
this.fadeIn(d2Volume);
}
if (this.decay > 0)
{
this.fadeOut(d1Volume);
this.fadeOut(d2Volume);
}
if (this.pitchBendAmount > 0)
{
this.pitchBend(d1);
this.pitchBend(d2);
}
if (this.echo)
{
this.addEcho(d1Volume);
this.addEcho(d2Volume);
}
if (this.reverb)
{
this.addReverb(d1Volume);
this.addReverb(d2Volume);
}
this.play(d1);
this.play(d2);
},
impulseResponse: function (duration, decay, reverse)
{
// The length of the buffer.
var length = this.audioContext.sampleRate * duration;
// Create an audio buffer (an empty sound container) to store the reverb effect.
var impulse = this.audioContext.createBuffer(2, length, this.audioContext.sampleRate);
// Use `getChannelData` to initialize empty arrays to store sound data for the left and right channels.
var left = impulse.getChannelData(0);
var right = impulse.getChannelData(1);
// Loop through each sample-frame and fill the channel data with random noise.
for (var i = 0; i < length; i++)
{
// Apply the reverse effect, if `reverse` is `true`.
var n = (reverse) ? length - i : i;
// Fill the left and right channels with random white noise which decays exponentially.
left[i] = (Math.random() * 2 - 1) * Math.pow(1 - n / length, decay);
right[i] = (Math.random() * 2 - 1) * Math.pow(1 - n / length, decay);
}
// Return the `impulse`.
return impulse;
}
};
module.exports = FX;
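The decaying-white-noise buffer that `impulseResponse` builds for the reverb can be isolated as a pure function over a plain array, which makes the amplitude envelope easy to verify. This is a sketch: the real method writes the same values into the two channels of an AudioBuffer:

```javascript
//  One channel of the reverb impulse: white noise shaped by the
//  envelope (1 - n / length) ^ decay, optionally reversed
function impulseChannel (length, decay, reverse)
{
    var data = new Float32Array(length);

    for (var i = 0; i < length; i++)
    {
        var n = (reverse) ? length - i : i;

        data[i] = (Math.random() * 2 - 1) * Math.pow(1 - n / length, decay);
    }

    return data;
}
```

Every sample stays inside the envelope, so with `reverse` false the noise is loudest at the start and dies away; with `reverse` true the first sample sits at the zero end of the envelope, giving a reversed-reverb swell.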

@@ -0,0 +1,7 @@
// Phaser.Sound.Dynamic
module.exports = {
FX: require('./FX')
};

v3/src/sound/index.js (new file, 7 lines)

@@ -0,0 +1,7 @@
// Phaser.Sound
module.exports = {
Dynamic: require('./dynamic')
};
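The FX constructor above leans on `GetObjectValue` to read both plain keys (`'frequency'`) and dotted keys (`'echo.delay'`) from one config object. The actual `v3/src/utils/GetObjectValue` is not part of this commit, so the helper below is a hypothetical sketch of the behaviour those calls assume:

```javascript
//  Hypothetical sketch of a GetObjectValue-style helper: returns
//  source[key] if present, walks dotted keys like 'echo.delay',
//  and falls back to defaultValue otherwise
function getObjectValue (source, key, defaultValue)
{
    if (source.hasOwnProperty(key))
    {
        return source[key];
    }

    if (key.indexOf('.') !== -1)
    {
        var keys = key.split('.');
        var parent = source;

        for (var i = 0; i < keys.length; i++)
        {
            if (parent && parent.hasOwnProperty(keys[i]))
            {
                parent = parent[keys[i]];
            }
            else
            {
                return defaultValue;
            }
        }

        return parent;
    }

    return defaultValue;
}
```

With this behaviour, a config of `{ frequency: 300, echo: { delay: 0.5 } }` yields `300` for `'frequency'`, `0.5` for `'echo.delay'`, and the supplied default for anything missing.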