
What’s Missing From Your 3D Sound Toolbox?

Audio for VR/AR is getting a lot of attention these days, now that people are realising how essential good spatial audio is for an immersive experience. But we still don’t have as many tools as are available for stereo. Not even close!

This is because Ambisonics has to be handled carefully during processing in order to preserve the correct spatial effect – even a small phase change between channels can significantly alter the spatial image – so there are very few plugins that can be used after the sound has been encoded.

To avoid this problem we can apply effects and processing before spatial encoding, but then we are restricted in what we can do and how we can place it. It is also not an option if you are using an Ambisonics microphone (such as the SoundField, Tetra Mic or AMBEO VR), because it is already encoded! We need to be able to process Ambisonics channels directly without destroying the spatial effect.

So, what is missing from your 3D sound toolbox? Is there a plugin that you would reach for in stereo that doesn’t exist for spatial audio? Maybe you want to take advantage of the additional spatial dimensions but don’t have a tool to help you do that. Whatever you need, I am interested in hearing about it. I have a number of plugins that will be available soon that will fulfil some technical and creative requirements, but there can always be more! In fact, I’ve already released the first one for free. I am particularly interested in creative tools that would be applied after encoding but before decoding.

With that in mind, I am asking what you would like to see that doesn’t exist. If you are the first person to suggest an idea (either via the form or in the comments) and I am able to make it into a plugin then you’ll get a free copy! There is plenty of work to do to get spatial audio tools to the level of stereo but, with your help, I want to make a start.


Free Ambisonics Plugin: o1Panner

I am working on some spatial audio plugins to provide some more tools for VR/AR audio and I am kicking things off with a freebie: the o1Panner. It is free to download from the Shop.

What is it?

The o1Panner is a simple first-order Ambisonics encoder with a width control.

How to use it

There are two display types: top-down and rectangular. The azimuth, elevation and width are controlled in different ways in each of these views. The views are selected by right clicking on the display.

For the top-down view, azimuth is controlled by clicking and dragging on the main display, the elevation is controlled by holding shift and dragging up/down and width is controlled by holding ctrl and dragging up/down.

For the rectangular view, azimuth and elevation correspond to the x- and y-coordinates respectively and width is controlled by holding ctrl and dragging up/down.

What does it output?

The output is AmbiX (SN3D/ACN) Ambisonics. This is the format used by Google for YouTube 360 and is quickly being adopted as the standard for Ambisonics and HOA.

What’s coming up?

I am working on several Ambisonics and HOA plugins that will be available in 2018. Some of them will do things that other plugins do, but most of them should do something new. Some of them will do something more creative and experimental. If you want to see a certain effect for spatial audio, just get in touch and let me know what you want. If you’re the first person to suggest a plugin that gets developed then you will get a free copy to say thanks!

What about HOA?

The industry is rapidly moving on from first-order Ambisonics and embracing HOA. For example, Pro Tools recently added support for up to third-order Ambisonics. Higher order tools are in the pipeline, so check back soon.

Stay Up To Date

If you want to keep up to date with upcoming plugin news and updates to the o1Panner, subscribe to the mailing list:


Ambisonics to Stereo Comparison

In my last post I detailed two methods of converting Ambisonics to stereo. Equations and graphs are all very good, but there’s nothing better than being able to listen and compare for yourself when it comes to spatial audio.

With that in mind, I’ve made a video comparing different first-order Ambisonics to stereo decoding methods. I used some (work-in-progress) VST plugins I’m working on for the encoding and decoding. I recommend watching the video with the highest quality setting to best hear the difference between the decoders.

There are 4 different decoders:

  • Cardioid – mid-side decoding with virtual cardioid microphones.
  • UHJ (IIR) – UHJ stereo decoding implemented with an infinite impulse response filter.
  • UHJ (FIR) – UHJ stereo decoding using a finite impulse response filter.
  • Binaural – Using the Google HRTF.

The cardioid decoder moves the image to, and keeps it in, the left and right channels more quickly as the source moves, while this is more gradual with the UHJ decoders. To me, the UHJ decoding is much smoother than the cardioid, making it perhaps a bit easier to get a nice left-right distribution that uses all of the space, while cardioid leads to some bunching at the extremes.

The binaural decode has more externalisation but introduces pretty significant colouration changes compared to the UHJ and cardioid decoding; on the other hand, it potentially allows some perception of height, which the others don’t.

The VSTs in the video are part of a set I’ve been working on that should be available some time in 2018. If you’re interested in getting updates about when they’re released, sign up here:


Ambisonics Over Stereo

Ambisonics, especially Higher Order Ambisonics, is great for 3D sound applications. But what if you have spent a long time mixing in a 3D audio format and want to share it with listeners who only have a stereo setup?

The first consideration is whether they will be listening on headphones or loudspeakers. If they’re using headphones then you can create a binaural mix in the usual way. If they are using loudspeakers then binaural is no longer an option (unless you want to go down the fragile transaural route). In this post we will focus on how you can decode from first-order Ambisonics to stereo using one of two common options.

Mid-Side Decoding

The first option is probably the simplest – treat the Ambisonics signal as a mid-side recorded scene by taking the W and Y channels, with W being the mid and Y being the side. Then you can make your left and right (L and R) stereo playback channels using \begin{eqnarray} L &=& 0.5(W+Y),\\ R &=& 0.5(W-Y) \end{eqnarray}

This is effectively the same as recording a sound field with two cardioid microphones pointing directly left and right. Sounds panned to 90 degrees will play only through the left loudspeaker and those at -90 degrees through the right.

The advantage of this sort of decoding is that it is conceptually very simple and, as long as your DAW can handle the routing, it is even possible to do without any dedicated plugins. It also results in pure amplitude panning, meaning that it has all of the advantages and disadvantages of standard intensity stereo. However, there is another option for stereo playback that has some advantages of its own.
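If you do want a quick offline version of this decode, here is a minimal sketch in Python (the function name and the assumption of an AmbiX-ordered 4 × N array are mine, not a fixed API):

```python
import numpy as np

def ambi_to_stereo_ms(b_format):
    """Mid-side style stereo decode of first-order Ambisonics.

    b_format: array of shape (4, num_samples) in AmbiX (ACN/SN3D)
    channel order, i.e. W, Y, Z, X.
    Returns (left, right) as two 1-D arrays.
    """
    W, Y = b_format[0], b_format[1]   # Z and X are simply discarded
    left = 0.5 * (W + Y)
    right = 0.5 * (W - Y)
    return left, right
```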

UHJ Stereo

A more complex and interesting technique is UHJ. We’re only going to cover UHJ for stereo listening, but it is worth noting that UHJ is mono compatible and that a 4-channel version exists from which the full first-order Ambisonics information can be retrieved with the correct decoding. 3-channel UHJ can give you a 2D (horizontal) decode by retrieving the W, X and Y channels. A nice property of the 3- and 4-channel versions is that they contain the stereo L and R channels as a subset. This means, importantly, that 2-channel UHJ does not require a decoder when played back over two loudspeakers. All you need to do is take the first two channels of the audio stream.

The stereo L and R channels can be calculated using the following equations: \begin{eqnarray} \Sigma &=& 0.9397W + 0.1856X \\ \Delta &=& j(-0.3430W + 0.5099X) + 0.6555Y\\ L &=& 0.5(\Sigma + \Delta)\\R &=& 0.5(\Sigma - \Delta)\end{eqnarray} where \(j\) is a 90 degree phase shift.

As you can see from these equations, converting from first-order Ambisonics to UHJ results in signals with phase differences between the L and R channels. This creates quite a different impression to the mid-side decoding mentioned above. There is obviously some room for personal taste as to whether UHJ is actually preferred to mid-side decoding. Sound sources placed to the rear of the listener are more diffuse when reproduced over a stereo arrangement than those at the front, while for mid-side decoding there is no sonic distinction between a sound panned to 30 degrees and one panned to 150 degrees.
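For the curious, here is a minimal offline sketch of the equations above in Python. It approximates the 90-degree phase shift using the analytic signal (scipy.signal.hilbert); a real-time implementation would use an FIR or IIR all-pass network instead, and the sign of the shifted term may need flipping depending on your convention:

```python
import numpy as np
from scipy.signal import hilbert

def ambi_to_uhj_stereo(b_format):
    """2-channel UHJ encode from first-order Ambisonics.

    b_format: shape (4, num_samples), AmbiX (ACN/SN3D) order W, Y, Z, X.
    """
    W, Y, X = b_format[0], b_format[1], b_format[3]

    def shift90(x):
        # Analytic signal is x + j*H{x}; the imaginary part is x shifted by
        # -90 degrees, so its negative approximates a +90 degree shift.
        return -np.imag(hilbert(x))

    sigma = 0.9397 * W + 0.1856 * X
    delta = shift90(-0.3430 * W + 0.5099 * X) + 0.6555 * Y
    left = 0.5 * (sigma + delta)
    right = 0.5 * (sigma - delta)
    return left, right
```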

Beyond front-back distinction, UHJ can actually result in some sounds appearing to originate from outside the loudspeaker pair by a small amount. This is why it is sometimes referred to as Super Stereo. In my experience, this effect is very dependent on the sound being played, both its frequency content and how transient it is.

Figure 1: The (high frequency/broadband) localisation curves for UHJ and mid-side decoding of an Ambisonic sound source over two loudspeakers at +30 and -30 degrees.

Because UHJ stereo relies on phase differences between the two channels, any post-processing or mastering applied should preserve the phase relationship between L and R, otherwise there is a very real risk that the final presentation will be phase-y and spatially blurred.

Figure 1 shows the localisation curves for a sound played back over a stereo system where the signal in the Ambisonics domain is panned fully round the listener. Obviously the sound stays to the front, but the actual trajectories between UHJ and mid-side decoding are quite different. (These localisation curves were calculated using the energy vector model of localisation, so they are most appropriate for mid/high frequencies and broadband sounds).

Which of the two stereo loudspeaker decoding strategies you’ll want to use will depend on the needs of your project. Mid-side decoding is simpler and results in pure amplitude panning. UHJ can result in images outside of the loudspeaker base, but relies on the phase information being preserved. If you want to retrieve any spatial information then UHJ is absolutely the way to go.

Tools for Stereo Decoding

I have an old Ambisonics to UHJ transcoder VST that you can download here, but it is old and I am not sure how compatible it is with newer versions of Windows and Mac OSX. To remedy that, I’ve been working on an updated version that will provide simple first-order to stereo decoding. Just select which method you want to use and pump some Ambisonics through it. Keep an eye out for when it is made available!

I’m curious to hear from anyone who has used both techniques what you prefer. Leave a comment below!


Fundamentals of Communication Acoustics

MOOCs can be a great way of following a world-class course on a topic without having to enrol in a university and pay the associated fees.

For those interested in spatial audio (and audio more generally) there is a MOOC starting on 23rd October called the Fundamentals of Communication Acoustics that looks like it covers a lot of important topics. It’s followed up by Applications of Communication Acoustics.

I’m considering auditing these, because it never hurts to refresh the basics and the course is taught by some very talented people so I’ll probably even learn plenty of new things!

The MOOC is available on the EdX platform: here.


JUCE for Spatial Audio

Several years ago I wrote some VST plugins for Ambisonics (available here) using the Steinberg VSTSDK and it definitely wasn’t particularly easy. Since then I’ve discovered JUCE – a framework that lets you get plugins up and running in no time. It handles all of the back-end stuff, meaning you just need to focus on the DSP and GUI.

I’ve already prototyped a few new plugins, just for fun, and I’m amazed at how fast it makes development. Not only does it handle the plugin “bookkeeping”, it includes all sorts of modules with common tools, making coding so much easier. And on top of all that, it allows for easy cross-platform compilation! I just need to get myself a Mac to actually take advantage of that…

I wish I had used JUCE the first time around!


Better Externalisation with Binaural

Some research that I was involved in was published last week in the Journal of the Audio Engineering Society [1]. You can download it from the JAES e-library here. The research was led by Etienne Hendrickx (currently at Université de Bretagne Occidentale) and was a follow on from other work we did together on head-tracking with dynamic binaural rendering [2, 3, 4].

The new study looked at externalisation (the perception that a sound played over headphones is emanating from the real world, not from inside the listener’s head). It specifically investigated the worst-case scenario for externalisation – sound sources directly in front of (\(0^{\circ}\)) or behind (\(180^{\circ}\)) the listener. It tested the benefit of listeners moving their head, as well as of listeners keeping their head still while the binaural source followed a “head movement-like” trajectory. Both were found to improve the perceived externalisation, with head movement providing the greater improvement.

The fact that source movements can improve externalisation is important because we don’t always have head tracking systems. Most people will experience binaural with normal headphones. This hints at a direction for some “calibration” to help the listener get immersed in the scene, improving their overall experience.

Also worth noting: the listeners in the study were all new to binaural content. This matters because many previous studies use expert listeners, but the vast majority of real-world listeners are not experts! The results of this paper are encouraging because they show that you don’t need hours of binaural listening experience to benefit from an instant, fairly easily obtained perceptual improvement.

References

[1] E. Hendrickx, P. Stitt, J. Messonnier, J.-M. Lyzwa, B. F. Katz, and C. de Boishéraud, “Improvement of Externalization by Listener and Source Movement Using a ‘Binauralized’ Microphone Array,” J. Audio Eng. Soc., vol. 65, no. 7, pp. 589–599, 2017. link

[2] E. Hendrickx, P. Stitt, J.-C. Messonnier, J.-M. Lyzwa, B. F. Katz, and C. de Boishéraud, “Influence of head tracking on the externalization of speech stimuli for non-individualized binaural synthesis,” J. Acoust. Soc. Am., vol. 141, no. 3, pp. 2011–2023, 2017. link

[3] P. Stitt, E. Hendrickx, J.-C. Messonnier, and B. F. G. Katz, “The Role of Head Tracking in Binaural Rendering,” in 29th Tonmeistertagung – VDT International Convention, 2016, pp. 1–5. link

[4] P. Stitt, E. Hendrickx, J.-C. Messonnier, and B. F. G. Katz, “The influence of head tracking latency on binaural rendering in simple and complex sound scenes,” in Audio Engineering Society Convention 140, 2016, pp. 1–8. link


What Is… Higher Order Ambisonics?

This post is part of a What Is… series that explains spatial audio techniques and terminology.

The last post was a brief introduction to Ambisonics covering some of the main concepts of first-order Ambisonics. Here I’ll give an overview of what is meant by Higher Order Ambisonics (HOA). I’ll stick to some more practical details here and leave the maths and sound field analysis for later.


Higher Order Ambisonics (HOA) is a technique for storing and reproducing a sound field at a particular point to an arbitrary degree of spatial accuracy. The degree of accuracy to which the sound field can be reproduced will depend on several elements, such as the number of loudspeakers available at the reproduction stage, how much storage space you have, computer power, download/transmission limits etc. As with most things, the more accuracy you want the more data you need to handle.

Encoding

Spherical harmonics used for third-order HOA (image by Dr Franz Zotter, https://commons.wikimedia.org/wiki/File:Spherical_Harmonics_deg3.png)

In its most basic form, HOA is used to reconstruct a plane wave by decomposing the sound field into spherical harmonics. This process is known as encoding. Encoding creates a set of signals whose relative weights depend on the direction of the sound source. The spherical harmonic functions become more and more complex as the HOA order increases; they are shown in the image up to third order. These third-order signals include, as a subset, the omnidirectional zeroth order and the first-order figure-of-eights. Depending on the source direction and the channel, a signal can also have its polarity inverted (the darker lobes).

An infinite number of spherical harmonics are needed to perfectly recreate the sound field but in practice the series is limited to a finite order \(M\). An ambisonic reconstruction of order \(M\) > 1 is referred to as Higher Order Ambisonics (HOA).

An HOA encoded sound field requires \((M+1)^{2}\) channels to represent the scene, e.g. 4 for first order, 9 for second, 16 for third, etc. We can see that we very quickly require a large number of audio channels even for relatively low orders. However, as with first-order Ambisonics, it is possible to rotate the full sound field relatively easily, allowing for integration with head-tracker information for VR/AR purposes. The number of channels remains the same no matter how many sources we include.

Decoding

The sound field generated by order 1, 3, 5 and 7 Ambisonics for a 500 Hz sine wave. The black circle in the middle is approximately the size of a listener’s head.

The encoded channels contain the spatial information of the sound sources but are not intended to be listened to directly. A decoder is required that converts the encoded signals to loudspeaker signals. The decoder has to be designed for your particular listening arrangement and takes into account the positions of the loudspeakers. As with first-order Ambisonics, regular layouts on a circle or sphere provide the best results.

The number of loudspeakers required is at least the number of HOA encoded channels coming in.

A so-called Basic decoder provides a physical reconstruction of the sound field at the centre of the array. The size of this accurately reconstructed area increases with order but shrinks with increasing frequency. Low frequencies can be reproduced physically (holophony), but eventually the well-reproduced region becomes smaller than a human head, at which point decoding is generally switched to a max rE decoder, which is designed to optimise psychoacoustic cues instead.

The (slightly trippy) animation shows orders 1, 3, 5 and 7 of a 500 Hz sine wave to demonstrate the increasing size of the well-reconstructed region at the centre of the array. All of the loudspeakers interact to recreate the exact sound field at the centre but there is some unwanted interference out of the sweet spot.

Why HOA?

Since the number of loudspeakers has to at least match the number of HOA channels, cost and practicality are often the main limiting factors. How many people can afford the 121 loudspeakers needed for a 10th-order rendering? So why bother encoding things to a high order if we are limited to lower-order playback? Two reasons: future-proofing and binaural.

First, future-proofing. One of the nice properties of HOA is that you can select a subset of channels to use for a lower-order rendering. The first four channels in a fifth-order mix are exactly the same as the four channels of a first-order mix (see the spherical harmonic images above). We can simply ignore the higher-order channels without having to do any approximate down-mixing. By encoding at a higher order than might be feasible at the minute, you stay ready for a future when loudspeakers cost the same as a cup of coffee (we can dream, right?)!
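As a minimal sketch of that truncation (assuming an ACN-ordered multichannel array, which is what makes the first channels line up like this):

```python
import numpy as np

def truncate_order(hoa, target_order):
    """Keep only the channels up to target_order.

    hoa: shape (num_channels, num_samples) in ACN channel order, where
    num_channels = (source_order + 1) ** 2.
    """
    keep = (target_order + 1) ** 2
    if hoa.shape[0] < keep:
        raise ValueError("Input order is lower than the requested order")
    return hoa[:keep]

# e.g. a fifth-order mix has 36 channels; its first 4 are the first-order part
# first_order = truncate_order(fifth_order_mix, 1)
```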

Second, binaural. If the limiting factors to HOA are cost and loudspeaker placement issues then what if we use headphones instead? A binaural rendering uses headphones to place a set of virtual loudspeakers around the listener. Now our rendering is only limited by the number of channels our PC/laptop/smartphone can handle at any one time (and the quality of the HRTF).

The Future

As first-order Ambisonics makes its way into the workflow of not just VR/AR but also music production environments, we’re already seeing companies preparing to introduce HOA. Facebook already includes a version of second-order Ambisonics in its Facebook 360 Spatial Workstation. Google have stated that they are working to expand beyond first-order for YouTube. I have worked with VideoLabs to include third-order Ambisonics in VLC Media player (scheduled for a later release).

Microphones for recording higher than first order aren’t at the stage of being accessible to everyone yet, but there are tools, like Matthias Kronlachner’s AmbiX VSTs, that will let you encode mono signals up to seventh order. There are also up-mixers like Harpex if you want to work with existing first-order recordings.

All of this means that if you can encode your work in higher orders now, you should. You do not want to have to go back to your projects to rework them in six months or a year when you can do it now.


What Is… Ambisonics?

This post is part of a What Is… series that explains spatial audio techniques and terminology.

Ambisonics is a spatial audio system that has been around since the 1970s, with lots of the pioneering work done by Michael Gerzon. Interest in Ambisonics has waxed and waned over the decades but it is finding use in virtual, mixed and augmented reality because it has a number of useful mathematical properties. In this post you’ll find a brief summary of first-order Ambisonics, without going too deeply into the maths that underpins it.

Unlike channel-based systems (stereo, VBAP, etc.) Ambisonics works in two stages: encoding and decoding. The encoding stage converts the signals into B-format (spherical harmonics), which are agnostic of the loudspeaker arrangement. The decoding stage takes these signals and converts them to the loudspeaker signals needed to recreate the scene for the listeners.

Ambisonic Encoding

First-order Ambisonic encoding to SN3D for a sound source rotating in the horizontal plane. The Z channel is always zero for these source directions.

A mono signal can be encoded to Ambisonics B-format using the following equations:

\[
W = S \\
Y = S\sin\theta\cos\phi \\
Z = S\sin\phi\\
X = S\cos\theta\cos\phi
\]

where \(S\) is the signal being encoded, \(\theta\) is the azimuthal direction of the source and \(\phi\) is the elevation angle. (These equations use the semi-normalised 3D (SN3D) scheme, as in the AmbiX format used by Google for YouTube. The channel ordering also follows the AmbiX standard.) First-order Ambisonics can also be captured using a tetrahedral microphone array.

B-format is a representation of the sound field at a particular point. Each sound source is encoded and the W, X, Y and Z channels for each source are summed to give the complete sound field. Therefore, no matter how many sound sources are in the scene, only 4 channels are required for transmission.
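Here is a minimal sketch of the encoding equations above for a static source, including the summing of several encoded sources into a single B-format stream (the function and signal names are illustrative, not a fixed API):

```python
import numpy as np

def encode_first_order(signal, azimuth_deg, elevation_deg):
    """Encode a mono signal to first-order AmbiX (ACN/SN3D): W, Y, Z, X."""
    az = np.radians(azimuth_deg)
    el = np.radians(elevation_deg)
    W = signal
    Y = signal * np.sin(az) * np.cos(el)
    Z = signal * np.sin(el)
    X = signal * np.cos(az) * np.cos(el)
    return np.stack([W, Y, Z, X])

# Two sources occupy the same four channels: just sum their B-format signals.
# scene = encode_first_order(s1, 30, 0) + encode_first_order(s2, -110, 15)
```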

This encoding can be thought of as capturing a sound source using one omnidirectional microphone (W) and 3 figure-of-eight microphones pointing along the Cartesian x-, y- and z-axes. As shown in the animation to the side, the amplitude of the W channel stays constant for all source positions while the X and Y channels change relative gain and sign (positive/negative) with source position. Comparison of the polarity of X and Y with W allows the direction of the sound source to be derived.

Ambisonic Decoding

Decoding is the process of taking the B-format signals and converting them to loudspeaker signals. Depending on the loudspeaker layout this can be relatively straightforward or really quite complex. In the simplest cases, with a perfectly regular layout, the B-format signals are sampled at the loudspeaker positions. Other methods (for example, mode-matching or energy-preserving decoders) can be used but tend to give the same results for a regular array.
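To illustrate the sampling idea, here is a deliberately simple sketch that points a virtual cardioid at each loudspeaker of a regular horizontal array; a production decoder would use optimised (and usually frequency-dependent) gains rather than this fixed pattern:

```python
import numpy as np

def simple_horizontal_decode(b_format, speaker_azimuths_deg):
    """Very basic first-order decode: sample the sound field with one
    virtual cardioid per loudspeaker of a regular horizontal array.

    b_format: shape (4, num_samples), AmbiX (ACN/SN3D) order W, Y, Z, X.
    Returns an array of shape (num_speakers, num_samples).
    """
    W, Y, X = b_format[0], b_format[1], b_format[3]
    outputs = []
    for az_deg in speaker_azimuths_deg:
        az = np.radians(az_deg)
        # Cardioid pointing at the loudspeaker: 0.5 * (omni + figure-of-eight)
        outputs.append(0.5 * (W + X * np.cos(az) + Y * np.sin(az)))
    return np.stack(outputs)

# e.g. a square layout: simple_horizontal_decode(scene, [45, 135, -135, -45])
```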

Assuming a regular array and good decoder, an Ambisonic decoder will recreate the exact sound field up to approximately 400 to 700 Hz. Below this limit frequency the reproduction error is low and the ITD cues are well recreated, meaning the system can provide good localisation. Above this frequency the recreated sound field deviates from the intended physical sound field so some psychoacoustic optimisation is applied. This is realised by using a different decoder in higher frequency ranges that focusses the energy in as small a region as possible in the loudspeaker array. This helps produce better ILD cues and a more precise image.

Ambisonics differs from VBAP because, in most cases, all loudspeakers will be active for any particular source direction. Not only will the amplitude vary, the polarity of the loudspeaker signals will also matter. Ambisonics uses all of the loudspeakers to “push” and “pull” so that the correct sound field is recreated at the centre of the loudspeaker array.

What is a Good Decoder?

A “good” Ambisonic decoder requires an appropriate loudspeaker arrangement. Ambisonics ideally uses a regularly positioned loudspeaker arrangement. For example, a horizontal-only system will place the loudspeakers at regular intervals around the centre of an array.

Any number of loudspeakers can be used to decode the sound scene, but using more than required can lead to colouration problems. The more loudspeakers are added, the stronger the low-pass filtering effect for listeners at the centre of the array. So what is the best number of loudspeakers to use for first-order Ambisonics? It is generally agreed that 4 loudspeakers placed in a square should be used for a horizontal system and 8 in a cuboid for 3D playback. This avoids too much colouration and satisfies several conditions for good (well… consistent) localisation.

There are metrics defined in the Ambisonics literature that predict the quality of the system in terms of localisation. These are the velocity and energy vectors, and they deserve their own article. For now, it’s worth noting that the velocity vector links to low-frequency ITD localisation cues, so decoders are designed to optimise it at low frequencies, while at higher frequencies they are optimised using the energy vector. The high-frequency decoder is known as a ‘max rE’ decoder, so-called because it aims to maximise the magnitude of the energy vector. This is just another way of saying that the energy is focussed in as small an area as possible.
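Both vectors are easy to compute from a set of loudspeaker gains and directions. Here is a minimal sketch for the horizontal case (names are illustrative):

```python
import numpy as np

def localisation_vectors(gains, speaker_azimuths_deg):
    """Gerzon velocity (rV) and energy (rE) vectors for a horizontal layout.

    gains: linear loudspeaker gains for one panned source direction.
    Returns the two vectors as (x, y) pairs; their direction and length
    (ideally close to 1) predict low/high frequency localisation.
    """
    g = np.asarray(gains, dtype=float)
    az = np.radians(speaker_azimuths_deg)
    unit = np.stack([np.cos(az), np.sin(az)], axis=1)   # unit vectors to speakers
    r_v = (g[:, None] * unit).sum(axis=0) / g.sum()
    r_e = ((g ** 2)[:, None] * unit).sum(axis=0) / (g ** 2).sum()
    return r_v, r_e
```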

Ambisonic Rotations

When it comes to virtual and augmented reality, efficient rotation of the full sound field to follow head movements is a big plus. Thankfully, Ambisonics has got us covered here. The full sound field can be rotated before decoding by blending the X, Y and Z channels correctly.

The advantage of rotating the Ambisonic sound field is that any number of sound sources can be encoded in just 4 channels, meaning rotating a sound field with one sound source takes as much effort as rotating one with 100 sound sources.
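As a minimal sketch, a yaw rotation (about the vertical axis) of a first-order AmbiX stream only needs to blend X and Y; I'm assuming a counter-clockwise-positive azimuth convention here, and pitch/roll would need a full rotation matrix involving Z as well:

```python
import numpy as np

def rotate_yaw(b_format, angle_deg):
    """Rotate a first-order AmbiX scene (W, Y, Z, X) about the vertical axis."""
    a = np.radians(angle_deg)
    W, Y, Z, X = b_format
    X_rot = X * np.cos(a) - Y * np.sin(a)
    Y_rot = X * np.sin(a) + Y * np.cos(a)
    return np.stack([W, Y_rot, Z, X_rot])   # W and Z are unchanged by yaw
```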


That’s the basics of Ambisonics covered. At some point we’ll look more at measures of quality of Ambisonics decoders and how well ITD and ILD are recreated. This post has only covered first-order Ambisonics, but Higher Order Ambisonics (HOA) is likely to make its way to VR platforms in a significant way in the near future, so I’ll cover that soon.

Do you have any spatial audio questions you’d like to have answered? Just leave a comment and let me know!


What Is… Stereophony?

This post is part of my What Is… series that explains spatial audio techniques and terminology.

OK, you know what stereo is. Everyone knows what stereo is. So why bother writing about it? Well, because it allows us to introduce some links between the reproduction system and spatial perception before moving on to systems which use much more than 2 loudspeakers.

Before going any further, this post will deal with amplitude panning. Time panning will be left for another day. I also won’t be covering stereo microphone recording techniques because that could fill up its own series of posts.

The Playback Setup

A standard stereo setup is two loudspeakers placed symmetrically at \(\pm30^{\circ}\) to the left and right of the listener. We will assume for now that there is only a single listener, equidistant from both loudspeakers. The loudspeaker base angle can be wider or narrower, but if it gets too wide there is a hole-in-the-middle problem; too narrow and we reduce the range of positions at which the source can be placed. Placing the loudspeakers at \(\pm30^{\circ}\) gives a good compromise, balancing sound image quality with potential soundstage width.

A standard stereo listening arrangement.
The tangent law prediction of perceived source angle for different level differences.

Placing the Sound

Amplitude panning takes a mono signal and sends copies to the two output channels with (potentially) different levels. When played back over two loudspeakers the level difference between the two channels controls the perceived direction of the sound source. With amplitude panning the perceived image will remain between the loudspeakers. If we know the level difference between the two channels then we can predict the perceived direction using a panning law. The two most famous of these are the tangent law and the sine law. The tangent law is defined as
\begin{equation}
\frac{\tan\theta}{\tan\theta_{0}} = \frac{G_{L} - G_{R}}{G_{L} + G_{R}}
\end{equation}
where \(\theta\) is the source direction, \(\theta_0\) is the angle between either loudspeaker and the front (30 degrees in the case illustrated above) and \(G_{L}\) and \(G_{R}\) are the linear gains of the left and right loudspeakers.
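Inverting the tangent law gives a simple panner: choose the target angle, compute the gain ratio, then normalise the gains (constant power is a common choice, though not the only one). A minimal sketch:

```python
import numpy as np

def tangent_law_gains(source_deg, speaker_deg=30.0):
    """Left/right gains for a target angle using the tangent law.

    source_deg: desired image direction, positive to the left, within
    +/- speaker_deg. Gains are normalised for constant power.
    """
    r = np.tan(np.radians(source_deg)) / np.tan(np.radians(speaker_deg))
    # From (gl - gr) / (gl + gr) = r, take gl = 1 + r and gr = 1 - r, then normalise.
    gl, gr = 1.0 + r, 1.0 - r
    norm = np.sqrt(gl ** 2 + gr ** 2)
    return gl / norm, gr / norm

# e.g. tangent_law_gains(0) -> (0.707, 0.707); tangent_law_gains(30) -> (1.0, 0.0)
```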

The ITD produced for a source panned with loudspeaker level differences generated by the tangent law.

How It Works

Despite being simple conceptually and very common, the psychoacoustics of stereo are actually quite complex. We’ll stick to discussing how it relates to the main spatial hearing cues.

As long as both loudspeakers are active, signals from both loudspeakers will reach both ears. Due to the layout symmetry, both ears receive signals at the same time but with different intensities corresponding to the level differences of the loudspeakers. Furthermore, since it has further to travel, the signal from the left loudspeaker will reach the right ear slightly later than the signal from the right loudspeaker; by symmetry, the opposite is true at the left ear. This time difference combined with the intensity difference gives rise to interference that generates phase differences at the ears. These phase differences are interpreted as time differences, moving the perceived sound between the loudspeakers.

The ITD (below 1400 Hz) is shown in the figure and is roughly linear with panning angle. This is pretty close to exactly what we see for a real sound source moving between these angles. This works pretty well for loudspeakers at \(\pm30^{\circ}\) or less, but once the angle gets bigger the relationship becomes slightly less linear.

These strong, predictable ITD cues mean that any sound source with a decent amount of low-frequency information will allow us to place the image pretty precisely. Content in higher frequency ranges won’t necessarily appear in the same direction as the low-frequency content, because ILD becomes the main cue there.

Even though stereo gives rise to interaural differences similar to those of a real source, that does not mean it is a physically-based spatial audio system (like HOA and WFS). The aim is to produce a psychoacoustically plausible (or at least pleasing) sound scene. Psychoacoustically-based spatial audio systems tend to use the available loudspeakers to meet some aim (a precise image, a broad source) without regard to whether the resulting sound field resembles anything a real sound source would produce.

So, there you have a quick overview of stereo from a spatial audio perspective. There are other issues that will be covered later because they relate to other spatial audio techniques. For example, what if I’m not in the sweet spot? What if the loudspeakers are to the side, or I turn my head? What if I add a third (or fourth or fifth) active loudspeaker? Why do some sounds panned to the centre sound elevated? All of these remaining and non-trivial points show just how complex the perception of even a simple spatial audio system can be.