The JAudioLibs’ AudioServer API is a Java library loosely inspired by PortAudio. It was initially designed early in the development of Praxis LIVE in order to provide a common callback-based interface for working with low-latency audio. This API has since found its way into a variety of other projects, primarily by people wanting to use the JACK Audio Connection Kit from Java (JAudioLibs’ JNAJack was developed at the same time). Using the AudioServer API provides an application the ability to switch easily between JavaSound and JACK at runtime. It can also make working just with JavaSound a little easier.
For some time I have been considering how to extend the AudioServer API to improve runtime service discovery, provide better access to features of the underlying audio libraries, and make it easier for people to contribute new implementations. A recent email from Ollie Bown, developer of the excellent Beads audio library, prompted me to spend some time over the last week trying to finish this work (the development version of Beads has been using this API for some time).
Why not JavaSound?
I always love that sceptical snort – “you use Java for live audio processing?” – and then watching their face change as you start demonstrating! 🙂 The fact is the Java platform is quite usable for live audio processing. Even the built-in JavaSound implementations can offer reasonable low-latency performance, particularly on Linux, with a few tricks (yes, they’re built into the JSAudioServer!). So, why not just provide a JavaSound Mixer implementation for JACK?
The key word is callbacks. Many low-latency audio APIs work with callbacks, calling into the application with a reference to the audio buffers to be filled. JavaSound, on the other hand, is a blocking, stream-based API. It is quite easy to write a callback API on top of a blocking one (as the JSAudioServer does), but pretty much impossible to write a blocking API on top of a callback one without introducing additional overhead and latency. Another issue concerns formats: JACK works solely with audio data as 32-bit floating point, and it is almost certain that we will want to be doing our DSP in that same format, so why add unnecessary layers of conversion?
The AudioServer API is designed to be as simple as possible. All an application needs to do is provide an implementation of the AudioClient interface, and its three methods – configure(), process() and shutdown(). This AudioClient implementation can then be passed to any AudioServer implementation and run.
The configure() method is guaranteed to be called prior to any call to process(). It provides an AudioConfiguration object with details of sample rate, buffer sizes, channels, etc. One gotcha for new users is that this configuration is not guaranteed to match the one requested when creating the server, as not all servers can guarantee this – the method can therefore veto a configuration it cannot work with.
All audio processing is done through the process() callback. This method provides a List of FloatBuffers for both input and output channels – FloatBuffer is used rather than float[] to allow direct pointers to natively allocated audio buffers. The code in this method should never block (i.e. no synchronized!) and never do anything other than necessary audio processing – it is going to be called hundreds of times a second. I highly recommend reading and following Ross Bencina’s article on real-time audio programming.
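To make the callback style concrete, here is a minimal, self-contained sketch of what a process()-style body might do – filling the first output channel with a sine wave. The class and method names here are illustrative, not part of the API; only the per-channel FloatBuffer handling mirrors the interface described above. Note that the callback allocates nothing and locks nothing.

```java
import java.nio.FloatBuffer;
import java.util.List;

// Illustrative sketch only – the surrounding AudioClient plumbing is
// omitted, and SineFill is not a class from the API.
class SineFill {

    private static final float SAMPLE_RATE = 44100f;
    private static final float FREQ = 440f;
    private double phase; // carried between callbacks

    // What a process() body typically does: write nframes samples
    // into each channel's FloatBuffer using absolute puts.
    void process(List<FloatBuffer> outputs, int nframes) {
        double delta = 2 * Math.PI * FREQ / SAMPLE_RATE;
        FloatBuffer out = outputs.get(0);
        for (int i = 0; i < nframes; i++) {
            out.put(i, (float) Math.sin(phase));
            phase += delta;
        }
        // Keep phase bounded to avoid precision loss over long runs.
        if (phase > 2 * Math.PI) {
            phase -= 2 * Math.PI;
        }
    }
}
```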
An extendible API
The API as discussed so far has existed for a number of years. However, over the last week I’ve been tidying up my thoughts on how to make the API more extendible. In particular I wanted to address –
- A mechanism for discovering and creating server implementations at run-time without requiring dependencies on every server type.
- A mechanism for querying server implementations for particular optional features.
- A mechanism for requesting optional features when creating a server.
- A mechanism for clients to receive optional features from the server.
The first of these was addressed by the creation of an abstract AudioServerProvider class. This class is designed to be registered in META-INF/services and loaded through use of ServiceLoader or equivalent, although it is also possible to directly instantiate a provider if required. All factory methods on specific server implementations have now been deprecated.
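Discovery can then be a simple ServiceLoader scan. A sketch of what that might look like, with two caveats: the package name (org.jaudiolibs.audioservers) and the provider name accessor (getLibraryName()) are assumptions based on this description – check the actual API before relying on either.

```java
import java.util.ServiceLoader;

// Package name is an assumption about where the API is published.
import org.jaudiolibs.audioservers.AudioServerProvider;

class ProviderLookup {

    // Find a provider by library name without any compile-time
    // dependency on a specific server implementation.
    static AudioServerProvider findProvider(String libName) {
        for (AudioServerProvider p : ServiceLoader.load(AudioServerProvider.class)) {
            // getLibraryName() is assumed; substitute the real accessor.
            if (libName.equals(p.getLibraryName())) {
                return p;
            }
        }
        return null; // no matching provider on the classpath
    }
}
```

An application could then ask for `findProvider("JACK")` or `findProvider("JavaSound")` and fall back gracefully when the preferred implementation is not available.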
The solution to the rest of these points was inspired by the local lookup part of the Lookup API in the NetBeans platform (also used in Praxis LIVE). This provides a way to extend objects with different features using a Class-based registry that can be queried at runtime, and unlike other solutions (e.g. a property map) it provides an element of type safety. The AudioConfiguration class has now been extended with the following two methods, also present in the AudioServerProvider class –
public <T> T find(Class<T> type)
public <T> Iterable<T> findAll(Class<T> type)
The first method will return null if a feature cannot be found, whereas the second will return an empty collection. This is much simpler than the NetBeans Lookup API in that the methods are not within a separate class and there is no mechanism to listen for changes. In the case of AudioConfiguration, features are fixed at creation time and passed into the constructor, whereas individual AudioServerProviders can provide dynamic or lazily-loaded results if they wish (there is no built-in support for this).
There are currently three extension types within the core API – ClientID, Connections and Device – and it is expected that these will be added to over time. Extension instances can be passed along in the AudioConfiguration given to the createServer() method, though none are required. ClientID and Connections are fairly simple, and primarily of use with the JACK server for now. You’ll notice that Device is actually an abstract class, and instances cannot be created directly. Instead, users of the API can request a list of devices from the AudioServerProvider.
Iterable<Device> devices = serverProvider.findAll(Device.class);
The user can then select the device(s) required and pass them along in the AudioConfiguration. Note that sometimes separate input and output devices must be chosen (e.g. JavaSound on Windows). Device itself also has find() and findAll() methods – you might use them to get the underlying Mixer object backing each JavaSound Device.
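Putting device selection together with server creation might look something like the following sketch. The Device accessor and the AudioConfiguration constructor shape shown here (sample rate, channel counts, buffer size, then extension objects) are assumptions from this description rather than a definitive reflection of the API.

```java
// Assumed package; adjust to the actual published API.
import org.jaudiolibs.audioservers.*;

class DeviceSelection {

    static AudioServer createStereoServer(AudioServerProvider provider,
                                          AudioClient client) throws Exception {
        // Pick the first device that can handle stereo output.
        Device selected = null;
        for (Device d : provider.findAll(Device.class)) {
            if (d.getMaxOutputChannels() >= 2) { // accessor name assumed
                selected = d;
                break;
            }
        }
        // Extensions (such as the chosen Device) ride along in the
        // AudioConfiguration; the constructor shape is an assumption.
        AudioConfiguration config = new AudioConfiguration(
                44100f, // sample rate
                2,      // input channels
                2,      // output channels
                256,    // buffer size
                selected);
        return provider.createServer(config, client);
    }
}
```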
Extensions do not have to be from the core API. For example, it is possible to pass along a JSTimingMode that will be used by the JavaSound server. And because the AudioConfiguration passed into the AudioClient’s configure() method is not the same object used to create the server, the server can also provide additional features back to the client. For the first time it is now possible to get access to the underlying JNAJack JackClient that backs the JACK server, and install additional callbacks, etc.
JackClient jClient = audioConfiguration.find(JackClient.class);
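In context, that lookup would typically sit inside the client’s configure() method, where the server-provided configuration first becomes available – a sketch, assuming the method signature described above:

```java
// A client's configure() method checking for the JACK-specific feature.
// The feature is simply absent (null) when running against JavaSound,
// so the same client works unchanged on both servers.
public void configure(AudioConfiguration context) throws Exception {
    JackClient jackClient = context.find(JackClient.class);
    if (jackClient != null) {
        // Running on JACK – additional JNAJack callbacks could be
        // installed here (port registration, xrun notification, etc.).
    }
}
```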
There are a few examples of usage in the JAudioLibs’ examples repository, and these will be added to over time.
- SineAudioClient – a documented example showing how to write a simple client that outputs a sine wave and attach it to a server.
- PassThroughAudioClient – a documented example showing another simple client for directly passing audio from input to output (be careful about feedback!).
- DeviceIteration – a simple example of searching for server implementations and devices (note that JACK does not currently provide any devices).
- ChorusPipe – an example that shows use of the AudioServer API along with JAudioLibs’ Pipes (routing API) and AudioOps (audio processing). These other APIs form the basis of Praxis LIVE’s audio features.
Want to use it or get involved?
The basic AudioServer API is available under an all-permissive license, with the implementations under different licenses (JavaSound under GPL2 w/CPE and JACK under LGPL). All are suitable for use in open-source or commercial projects. As well as the additional API, there have been some changes to the underlying server implementations – hopefully these are now fairly stable, but there may be further changes over the next few weeks as testing continues. Please report any bugs!
If you need support, want to add to discussions on future development, or are interested in extending existing server implementations or writing new ones, then please get in touch either on the JAudioLibs Google Group or through the Praxis LIVE contact page.
And if you know Maven better than I and would be willing to help get these libraries into the central repository, please do get in touch! 🙂