An integer representing the number of channels this buffer should have. Implementations must support a minimum of 32 channels.
An integer representing the size of the buffer in sample-frames.
The sample-rate of the linear audio data in sample-frames per second. An implementation must support sample-rates in at least the range 22050 to 96000, with 44100 being the most commonly used.
The EventTarget.addEventListener() method registers the specified listener on the EventTarget it's called on. The event target may be an Element in a document, the Document itself, a Window, or any other object that supports events (such as XMLHttpRequest).
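As an illustrative sketch (the audioCtx variable name is a placeholder, not anything defined here), an AudioContext is itself an EventTarget, so a listener for its statechange event can be registered like this:

var audioCtx = new AudioContext();

audioCtx.addEventListener('statechange', function (event) {
  // Log the new state whenever it changes (e.g. 'running', 'suspended', 'closed').
  console.log('AudioContext state is now: ' + audioCtx.state);
});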
Closes the audio context, releasing any system audio resources that it uses.
Creates an AnalyserNode, which can be used to expose audio time and frequency data and, for example, to create data visualisations.
Creates a BiquadFilterNode, which represents a second order filter configurable as several different common filter types: high-pass, low-pass, band-pass, etc.
Creates a new, empty AudioBuffer object, which can then be populated by data and played via an AudioBufferSourceNode.
An integer representing the number of channels this buffer should have. Implementations must support a minimum of 32 channels.
An integer representing the size of the buffer in sample-frames.
The sample-rate of the linear audio data in sample-frames per second. An implementation must support sample-rates in at least the range 22050 to 96000.
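A minimal sketch of creating a buffer, filling it with data, and playing it through an AudioBufferSourceNode; the two-second white-noise content and variable names are purely illustrative:

var audioCtx = new AudioContext();

// A 2-channel buffer holding 2 seconds of audio at the context's sample rate.
var buffer = audioCtx.createBuffer(2, audioCtx.sampleRate * 2, audioCtx.sampleRate);

// Fill each channel with white noise (values between -1 and 1).
for (var channel = 0; channel < buffer.numberOfChannels; channel++) {
  var data = buffer.getChannelData(channel);
  for (var i = 0; i < data.length; i++) {
    data[i] = Math.random() * 2 - 1;
  }
}

// Play the buffer through an AudioBufferSourceNode.
var source = audioCtx.createBufferSource();
source.buffer = buffer;
source.connect(audioCtx.destination);
source.start();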
Creates an AudioBufferSourceNode, which can be used to play and manipulate audio data contained within an AudioBuffer object. AudioBuffers are created using AudioContext.createBuffer or returned by AudioContext.decodeAudioData when it successfully decodes an audio track.
Creates a ChannelMergerNode, which is used to combine channels from multiple audio streams into a single audio stream.
The number of channels in the input audio streams, which the output stream will contain; the default is 6 if this parameter is not specified.
Creates a ChannelSplitterNode, which is used to access the individual channels of an audio stream and process them separately.
The number of channels in the input audio stream that you want to output separately; the default is 6 if this parameter is not specified.
Creates a ConvolverNode, which can be used to apply convolution effects to your audio graph, for example a reverberation effect.
Creates a DelayNode, which is used to delay the incoming audio signal by a certain amount. This node is also useful to create feedback loops in a Web Audio API graph.
The maximum amount of time, in seconds, that the audio signal can be delayed by. The default value is 1.
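A sketch of the feedback loop mentioned above; the half-second delay time, the 0.5 feedback gain, and the source node are arbitrary example choices:

var audioCtx = new AudioContext();
var source = audioCtx.createBufferSource(); // assume its buffer is set elsewhere

// Delay node allowing delays of up to 1 second, set to 0.5 seconds.
var delay = audioCtx.createDelay(1);
delay.delayTime.value = 0.5;

// Gain below 1 so each pass around the loop gets quieter.
var feedback = audioCtx.createGain();
feedback.gain.value = 0.5;

// source -> delay -> feedback -> delay forms the loop;
// both the dry source and the delayed signal reach the destination.
source.connect(audioCtx.destination);
source.connect(delay);
delay.connect(feedback);
feedback.connect(delay);
delay.connect(audioCtx.destination);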
Creates a DynamicsCompressorNode, which can be used to apply acoustic compression to an audio signal.
Creates a GainNode, which can be used to control the overall volume of the audio graph.
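A sketch of using a GainNode as a master volume control; the oscillator source and the 0.25 gain value are only examples:

var audioCtx = new AudioContext();

var gainNode = audioCtx.createGain();
gainNode.gain.value = 0.25; // quarter volume

// Route the source through the gain node instead of connecting it directly.
var osc = audioCtx.createOscillator();
osc.connect(gainNode);
gainNode.connect(audioCtx.destination);
osc.start();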
Creates a MediaElementAudioSourceNode associated with an HTMLMediaElement. This can be used to play and manipulate audio from <video> or <audio> elements.
An HTMLMediaElement object that you want to feed into an audio processing graph to manipulate.
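A sketch assuming the page contains an <audio> element with the id "player" (that id is an assumption of this example, not part of the API):

var audioCtx = new AudioContext();
var audioElement = document.getElementById('player');

// Once the element is wired into the graph, its output flows through
// the graph rather than straight to the speakers.
var source = audioCtx.createMediaElementSource(audioElement);
source.connect(audioCtx.destination);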
Creates a MediaStreamAudioDestinationNode associated with a MediaStream representing an audio stream which may be stored in a local file or sent to another computer.
Creates a MediaStreamAudioSourceNode associated with a MediaStream representing an audio stream which may come from the local computer microphone or other sources.
A MediaStream object that you want to feed into an audio processing graph to manipulate.
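A sketch using getUserMedia to obtain a microphone MediaStream and feed it into the graph; error handling is omitted for brevity:

var audioCtx = new AudioContext();

navigator.mediaDevices.getUserMedia({ audio: true }).then(function (stream) {
  // Turn the microphone stream into a source node and route it onwards.
  var source = audioCtx.createMediaStreamSource(stream);
  source.connect(audioCtx.destination);
});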
Creates an OscillatorNode, a source representing a periodic waveform. It basically generates a tone.
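A sketch generating a 440 Hz sine tone; the frequency and waveform type are example values:

var audioCtx = new AudioContext();

var oscillator = audioCtx.createOscillator();
oscillator.type = 'sine';           // also 'square', 'sawtooth', 'triangle', 'custom'
oscillator.frequency.value = 440;   // concert A

oscillator.connect(audioCtx.destination);
oscillator.start();
// oscillator.stop(audioCtx.currentTime + 2); // e.g. stop after two seconds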
Creates a PannerNode, which is used to spatialise an incoming audio stream in 3D space.
Creates a PeriodicWave, used to define a periodic waveform that can be used to determine the output of an OscillatorNode.
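A sketch defining a simple custom waveform from Fourier coefficients and assigning it to an oscillator; the coefficient values here are arbitrary:

var audioCtx = new AudioContext();

// Real and imaginary Fourier coefficients; index 0 is the DC offset.
var real = new Float32Array([0, 1, 0.5]);
var imag = new Float32Array([0, 0, 0]);
var wave = audioCtx.createPeriodicWave(real, imag);

var osc = audioCtx.createOscillator();
osc.setPeriodicWave(wave); // sets the oscillator's type to 'custom'
osc.connect(audioCtx.destination);
osc.start();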
Creates a StereoPannerNode, which can be used to apply stereo panning to an audio source.
Creates a WaveShaperNode, which is used to implement non-linear distortion effects.
Returns a double representing an ever-increasing hardware time in seconds used for scheduling. It starts at 0 and cannot be stopped, paused or reset.
Asynchronously decodes audio file data contained in an ArrayBuffer. In this case, the ArrayBuffer is usually loaded from an XMLHttpRequest's response attribute after setting the responseType to arraybuffer. This method only works on complete files, not fragments of audio files.
An ArrayBuffer containing the audio data to be decoded, usually grabbed from an XMLHttpRequest's response attribute after setting the responseType to arraybuffer.
A callback function to be invoked when the decoding successfully finishes. The single argument to this callback is an AudioBuffer representing the decoded PCM audio data. Usually you'll want to put the decoded data into an AudioBufferSourceNode, from which it can be played and manipulated how you want.
An optional error callback, to be invoked if an error occurs when the audio data is being decoded.
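A sketch of the XMLHttpRequest pattern described above, using the success and error callbacks; the URL 'audio.ogg' is a placeholder:

var audioCtx = new AudioContext();
var request = new XMLHttpRequest();

request.open('GET', 'audio.ogg', true);
request.responseType = 'arraybuffer';

request.onload = function () {
  audioCtx.decodeAudioData(request.response, function (decodedBuffer) {
    // Success callback: play the decoded PCM data.
    var source = audioCtx.createBufferSource();
    source.buffer = decodedBuffer;
    source.connect(audioCtx.destination);
    source.start();
  }, function (err) {
    // Optional error callback.
    console.error('Decoding failed', err);
  });
};

request.send();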
Returns an AudioDestinationNode representing the final destination of all audio in the context. It can be thought of as the audio-rendering device.
Dispatches an Event at the specified EventTarget, invoking the affected EventListeners in the appropriate order. The normal event processing rules (including the capturing and optional bubbling phase) apply to events dispatched manually with dispatchEvent().
Returns the AudioListener object, used for 3D spatialization.
Removes the event listener previously registered with EventTarget.addEventListener.
Resumes the progression of time in an audio context that has previously been suspended.
Returns a float representing the sample rate (in samples per second) used by all nodes in this context. The sample-rate of an AudioContext cannot be changed.
The promise-based startRendering() method of the OfflineAudioContext Interface starts rendering the audio graph, taking into account the current connections and the current scheduled changes.
When the method is invoked, the rendering is started and a promise is returned. When the rendering is completed, the promise resolves with an AudioBuffer containing the rendered audio.
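A sketch of the promise form; offlineCtx is assumed to be an OfflineAudioContext whose graph has already been set up:

offlineCtx.startRendering().then(function (renderedBuffer) {
  // renderedBuffer is an AudioBuffer holding the rendered audio;
  // here it is simply played back through an ordinary AudioContext.
  var audioCtx = new AudioContext();
  var source = audioCtx.createBufferSource();
  source.buffer = renderedBuffer;
  source.connect(audioCtx.destination);
  source.start();
});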
Returns the current state of the AudioContext.
Suspends the progression of time in the audio context, temporarily halting audio hardware access and reducing CPU/battery usage in the process.
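A sketch of suspending and later resuming a context, for example wired to a play/pause control (not shown); both methods return promises:

var audioCtx = new AudioContext();

function togglePlayback() {
  if (audioCtx.state === 'running') {
    // Halt audio hardware access and stop currentTime from advancing.
    audioCtx.suspend().then(function () {
      console.log('Context suspended');
    });
  } else if (audioCtx.state === 'suspended') {
    audioCtx.resume().then(function () {
      console.log('Context resumed');
    });
  }
}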
An EventHandler called when processing has finished, that is, when the complete event (of type OfflineAudioCompletionEvent) is fired.
Deprecated: use the promise-based version of OfflineAudioContext.startRendering instead.
The OfflineAudioContext interface is an AudioContext interface representing an audio-processing graph built from AudioNodes linked together. In contrast with a standard AudioContext, an OfflineAudioContext doesn't render the audio to the device hardware; instead, it generates it, as fast as it can, and outputs the result to an AudioBuffer.
It is important to note that, whereas you can create a new AudioContext using the new AudioContext() constructor with no arguments, the new OfflineAudioContext() constructor requires three arguments:
new OfflineAudioContext(numOfChannels, length, sampleRate)
This works in exactly the same way as when you create a new AudioBuffer with the AudioContext.createBuffer method. For more detail, read Audio buffers: frames, samples and channels from our Basic concepts guide.
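Putting the pieces together, a sketch of rendering a 40-second stereo graph offline at 44100 Hz; the duration and the oscillator source are example choices:

// 2 channels, 40 seconds' worth of sample-frames, 44100 Hz.
var offlineCtx = new OfflineAudioContext(2, 44100 * 40, 44100);

var osc = offlineCtx.createOscillator();
osc.connect(offlineCtx.destination);
osc.start();

offlineCtx.startRendering().then(function (renderedBuffer) {
  // renderedBuffer now holds the 40 seconds of rendered audio.
  console.log('Rendering completed, length in sample-frames:', renderedBuffer.length);
});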