William Turner has over 13 years of experience as a technical trainer. He previously worked as an instructor at Ex'pression College For Digital Art in Emeryville, California, for over 10 years, is the founder of WikiAudio.org, and is currently writing his first book.
Section Introduction Transcripts
Hello World (Creating Your First Sound) In this tutorial we're going to demonstrate how to generate sound using the Web Audio API. To create a hello-world-style application we need to trigger some sound, and we can do that either by playing an audio file or by generating an oscillator. I think the easiest and most direct path to getting started is basic oscillator generation, so we're going to start there. Now if you're not familiar with what an oscillator is, here's an example. This is an oscillator waveform, and this is the sound we're going to be generating using the Web Audio API, (Tone) and in fact the browser is going to be generating the oscillator directly.
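A minimal sketch of that hello-world oscillator might look like the following; the `playTone` helper and its default values are illustrative, not the course's exact code:

```javascript
// Minimal "hello world": generate a tone with an OscillatorNode.
// Assumes a browser environment where AudioContext is available.
function playTone(context, frequency = 440, duration = 1) {
  const osc = context.createOscillator(); // defaults to a sine wave
  osc.type = 'sine';
  osc.frequency.value = frequency;        // pitch in Hz
  osc.connect(context.destination);       // wire into the node graph's output
  osc.start();
  osc.stop(context.currentTime + duration);
  return osc;
}

// In a browser you would call something like:
//   const ctx = new AudioContext();
//   playTone(ctx, 440, 1); // one second of a 440 Hz sine tone
```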
Filters and the Node Graph So now that we understand the basics of the node graph and how to generate oscillators, I want to explore the various sound-manipulating tools within the node graph and how they can be applied to the synthesis engine. One effect that is common with audio is equalization. An equalizer simply allows you to attenuate or boost a range of frequencies of a sound. Well, in order to create an equalizer, you need audio filters. The Web Audio API has a node called the BiquadFilterNode that allows you to select from a list of different filter types. We can demonstrate what these filter types look like by opening up an unrelated application called REAPER and using its equalization plugin to demonstrate.
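As a rough sketch of what creating one of those filters looks like in code (the `createFilter` helper and its defaults are mine, added for illustration):

```javascript
// Create a BiquadFilterNode configured with one of the built-in filter types.
// Valid types include 'lowpass', 'highpass', 'bandpass', 'peaking',
// 'lowshelf', 'highshelf', and 'notch'.
function createFilter(context, type = 'lowpass', frequency = 1000) {
  const filter = context.createBiquadFilter();
  filter.type = type;               // which filter curve to apply
  filter.frequency.value = frequency; // cutoff/center frequency in Hz
  return filter;
}

// In a browser, a filter sits between a source and the destination:
//   source.connect(createFilter(ctx, 'peaking', 2000)).connect(ctx.destination);
```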
Loading and Playing a Sound So in this tutorial we're going to be going over how to load and play back audio files using the Web Audio API. Now you're probably familiar with the regular HTML5 audio tag element, which has an SRC attribute and built-in audio controls. (Tone) Well, programming playback of audio files with the Web Audio API is a little bit more complicated. So here's an example of the code needed to play back an audio file with the Web Audio API, and here's an example of the code needed to play back an audio file with the audio element.
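For comparison, here is a sketch of the Web Audio API approach the transcript refers to, using `fetch` and `decodeAudioData`; the function name and the URL are hypothetical:

```javascript
// Load an audio file over the network, decode it, and play it back
// through the Web Audio API node graph. Browser-only sketch.
async function loadAndPlay(context, url) {
  const response = await fetch(url);                      // download the file
  const arrayBuffer = await response.arrayBuffer();       // raw bytes
  const audioBuffer = await context.decodeAudioData(arrayBuffer); // PCM data
  const source = context.createBufferSource();            // one-shot player node
  source.buffer = audioBuffer;
  source.connect(context.destination);
  source.start();
  return source;
}

// Contrast with the audio element, where all of this is one line of markup:
//   <audio src="sound.mp3" controls></audio>
```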
Abstracting Audio File Loading & Playback So in our last tutorial we demonstrated how to play back an audio file. In this tutorial we're going to create a small abstraction, so for the rest of our tutorials we don't need to write the same large block of code every time we want to load and play back an audio file. The end result of our abstraction will allow us to load and play back an audio file with only a few lines of code that look something like this.
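The transcript doesn't show the abstraction itself, but a minimal sketch of the idea might look like this; the `Sound` class name and its method names are my own placeholders, not the course's API:

```javascript
// A small abstraction over loading and playing audio files, so callers
// don't repeat the fetch/decode/buffer-source boilerplate each time.
class Sound {
  constructor(context, url) {
    this.context = context;
    this.url = url;      // location of the audio file (hypothetical)
    this.buffer = null;  // decoded audio, filled in by load()
  }

  async load() {
    const response = await fetch(this.url);
    const bytes = await response.arrayBuffer();
    this.buffer = await this.context.decodeAudioData(bytes);
    return this;
  }

  play() {
    // BufferSource nodes are single-use, so create a fresh one per play.
    const source = this.context.createBufferSource();
    source.buffer = this.buffer;
    source.connect(this.context.destination);
    source.start();
    return source;
  }
}

// Usage, in a browser, reduces to a few lines:
//   const snare = await new Sound(ctx, 'snare.mp3').load();
//   snare.play();
```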
Dynamics Compression In this tutorial we're going to talk about how to add dynamics compression to your Web Audio API projects, and we're going to do this by using a built-in node called the dynamics compressor node. Now before we can explain the dynamics compressor node in the Web Audio API, a good thing to do would be to explain the concept behind a regular hardware compressor, which is what this node is modeled on. So this is what a conventional hardware compressor looks like, and here's the software equivalent. The easiest way to explain what this device does is as follows: Imagine you're listening to a piece of music through an electronic device, be it your stereo, cell phone, computer, or tablet, and imagine every time there was a very loud passage the volume control was automatically attenuated just for that part of the music, and after that section of the music ended the volume was automatically returned to its previous state. Well, that's what a compressor does, but it does it at very fast intervals, anywhere from milliseconds to seconds. Now technically this kind of compression is called downward compression. There's another type of compression called upward compression that actually makes soft passages louder, but for brevity we're only going to be talking about the more common form of compression, which is downward, and from here on out when we talk about compression we're going to be talking about downward compression. Now if we open up a third-party application called REAPER, we can open up a software compressor plug-in and see this effect in action. (Tone) Here we can see the amount of compression that's being invoked. In short, this is the amount of attenuation that's taking place.
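In code, inserting the built-in compressor node into the graph might look like this sketch; the parameter values are illustrative starting points, not settings from the course:

```javascript
// Insert a DynamicsCompressorNode between a source and the destination.
// All parameter values below are example settings, chosen for illustration.
function addCompression(context, source) {
  const compressor = context.createDynamicsCompressor();
  compressor.threshold.value = -24; // dB level above which attenuation begins
  compressor.ratio.value = 4;       // 4:1 downward compression above threshold
  compressor.attack.value = 0.003;  // seconds to react to a loud passage
  compressor.release.value = 0.25;  // seconds to return to the previous level
  source.connect(compressor);
  compressor.connect(context.destination);
  return compressor;
}

// The node also exposes a read-only 'reduction' property (in dB) showing
// how much attenuation is currently being applied, like REAPER's meter.
```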
Changing Parameter Values Over Time In this tutorial I'm going to demonstrate a few built-in methods that allow us to modify parameters over time. In the examples we'll be modifying a gain value parameter; however, we could use any node's AudioParam for this demonstration. The first is the setValueAtTime method. This allows us to change a param value some time in the future. It takes two arguments. The first is the value you would like the parameter to be changed to, and the second is the time at which the change should occur, measured on the audio context's clock, so here's an example of changing a gainNode's gain value. Three seconds from when our sound is played we'll be increasing its value to two. (Tone)
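A sketch of that gain change; note that because the second argument is an absolute time on the context's clock, "three seconds from now" is expressed as `currentTime + 3`:

```javascript
// Schedule a gain value of 2 to take effect three seconds from now.
// setValueAtTime(value, startTime): startTime is measured on the
// AudioContext's clock, not as a relative duration.
function scheduleGainChange(context, gainNode) {
  gainNode.gain.setValueAtTime(2, context.currentTime + 3);
}

// In a browser:
//   const gainNode = ctx.createGain();
//   source.connect(gainNode).connect(ctx.destination);
//   scheduleGainChange(ctx, gainNode);
```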
Integrating HTML5 Audio and Video Elements Welcome to the Integrating HTML5 Audio and Video Elements module. To use the Web Audio API with the regular HTML5 audio and video elements you can use the context.createMediaElementSource method. This allows you to take the audio output of the audio or video elements and pipe it through the Web Audio API node graph. You can then apply the same processing as with any other audio source, including filters, gain node effects, reverb effects, etc. So here I've created a video example where I piped the audio output to the Web Audio API node graph and applied a five-band equalizer to the signal using a series of biquad filter nodes. (Tone) The createMediaElementSource method is easy to use. All you need to do is capture the audio or video element node and then place that node in the argument of the createMediaElementSource method, just like this. In our example I'm using a video element, but you could just as easily replace that with an audio element.
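A sketch of that routing, with a single filter standing in for the transcript's five-band equalizer (the function name and filter settings are mine, for illustration):

```javascript
// Route an HTML5 media element's audio through the Web Audio API graph.
// videoElement would come from document.querySelector('video') in a browser.
function routeMediaThroughGraph(context, videoElement) {
  const source = context.createMediaElementSource(videoElement);
  // One peaking filter as a stand-in for the five-band EQ; a real
  // five-band version would chain five BiquadFilterNodes in series.
  const filter = context.createBiquadFilter();
  filter.type = 'peaking';
  filter.frequency.value = 1000; // Hz, center of the boosted/cut band
  source.connect(filter);
  filter.connect(context.destination);
  return source;
}
```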
Adding UI Elements In this tutorial we're going to be going over some very basic interface-building tools that you can integrate into your audio applications. Mainly I'm going to be demonstrating how to integrate jQuery UI sliders, which look like this, (Tone) as well as a library called jQuery Knob that allows for input sliders that have a dial theme and look like this. (Tone) So the first thing we need to do is make sure that we have jQuery, jQuery UI, and the jQuery UI CSS file, as well as the jQuery Knob library, added to our index.html file. So first here's an example of a jQuery UI slider connected to the frequency value of an oscillator. (Tone) To do this I created two DOM elements, one with an ID of pad, which will be used to trigger our sound, and the other with an ID of slider, which will be used to change our frequency value. I declared the oscillator and val variables here so our other functions have access to them. I then placed our oscillator code inside of the mousedown event listener. If you look at the frequency value, it's assigned to the val variable. We then select our slider DOM element and assign it an object using the slider method. In our sliderParams object there's a series of parameters, but the most important thing to take notice of is the slide method, which allows us to modify the val variable and hence affect the frequency of our oscillator.
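The steps described above can be sketched roughly as follows; this assumes jQuery and jQuery UI are loaded, and the `#pad` and `#slider` IDs and the value ranges are illustrative, not necessarily the course's exact markup:

```javascript
// Wire a jQuery UI slider to an oscillator's frequency.
// Declared outside the handlers so both can access them, as in the transcript.
function wireSliderToOscillator() {
  let oscillator = null;
  let val = 440; // current frequency in Hz, shared between pad and slider

  // #pad triggers the sound on mousedown.
  $('#pad').on('mousedown', function () {
    const ctx = new AudioContext();
    oscillator = ctx.createOscillator();
    oscillator.frequency.value = val; // frequency reads from val
    oscillator.connect(ctx.destination);
    oscillator.start();
  });

  // #slider updates val (and the running oscillator) as it is dragged.
  const sliderParams = {
    min: 100,
    max: 2000,
    value: val,
    slide: function (event, ui) {
      val = ui.value;
      if (oscillator) oscillator.frequency.value = val;
    }
  };
  $('#slider').slider(sliderParams);
}
```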