This chapter describes some issues that can arise with the Java Sound technology and suggests causes and workarounds.
Make sure that your audio system is correctly configured (sound card driver/DirectSound for Windows, ALSA for Linux, Audio Mixer for Solaris OS). In addition, ensure that your speakers are connected and that your sound card volume and mute settings are at appropriate levels. To test your sound configuration, run any native sound application and play some sound through it.
On Solaris OS and Linux, you might be unable to play sounds because an application (or sound daemon, such as artsd) has opened the audio device exclusively, thereby denying Java Sound access to the device.
Java Sound supports a set of audio file formats, for example AU, AIFF, and WAV. Most of these file formats are merely containers that can hold audio data in various compressed audio formats. The Java Sound file readers support some of these formats (uncompressed PCM, a-law, mu-law), but do not support ADPCM, MP3, and others.
Java Sound also supports plug-in file readers and writers through the service provider interface (SPI). You can use Sun, third-party, or your own plug-ins to read various audio files. In any case, you must ensure that the required plug-ins are present, for example by distributing them with your application or by requiring that they be installed in the client Java environment.
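As a minimal sketch of how reading goes through the installed file readers, the following round-trips PCM data through a WAV container entirely in memory. The class name `ReadAudioDemo` and helper `readBack` are illustrative, not part of any API; the key point is that `AudioSystem.getAudioInputStream` throws `UnsupportedAudioFileException` when no installed reader (built-in or SPI plug-in) recognizes the data.

```java
import javax.sound.sampled.*;
import java.io.*;

public class ReadAudioDemo {
    // Round-trip: write PCM data into a WAV container in memory,
    // then read it back through AudioSystem's installed file readers.
    static AudioFormat readBack()
            throws IOException, UnsupportedAudioFileException {
        // 8000 Hz, 16-bit, mono, signed, little-endian PCM
        AudioFormat fmt = new AudioFormat(8000f, 16, 1, true, false);
        byte[] pcm = new byte[1600]; // 0.1 s of silence
        AudioInputStream src = new AudioInputStream(
                new ByteArrayInputStream(pcm), fmt,
                pcm.length / fmt.getFrameSize());

        ByteArrayOutputStream wav = new ByteArrayOutputStream();
        AudioSystem.write(src, AudioFileFormat.Type.WAVE, wav);

        // Throws UnsupportedAudioFileException if no installed reader
        // (built-in or SPI plug-in) recognizes the container format.
        AudioInputStream in = AudioSystem.getAudioInputStream(
                new ByteArrayInputStream(wav.toByteArray()));
        return in.getFormat();
    }

    public static void main(String[] args) throws Exception {
        System.out.println(readBack().getEncoding());
    }
}
```

An application reading user-supplied files would catch `UnsupportedAudioFileException` and either report the problem or fall back to a bundled SPI plug-in.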
Java Sound supports various audio formats, but their availability depends on the operating system. To record or play a particular audio format, the format must be supported by your system (sound card drivers). Use widely supported formats whenever possible: PCM; 8 or 16 bits; 8000, 11025, 22050, or 44100 Hz. These formats are supported by most, if not all, current sound cards. Most sound cards support only PCM formats, and even if a driver supports mu-law, software modifications may be required. If you need to play or record mu-law data, the preferred way is to convert it to a PCM format through a format converter.
See the documentation for AudioSystem.getAudioInputStream for details about format conversion.
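The mu-law-to-PCM conversion described above can be sketched as follows; the class name `MuLawToPcm` and the silent in-memory sample data are illustrative only. `AudioSystem.getAudioInputStream(AudioFormat, AudioInputStream)` returns a stream that decodes on the fly, using whichever format converters are registered with the audio system.

```java
import javax.sound.sampled.*;
import java.io.ByteArrayInputStream;

public class MuLawToPcm {
    // Convert a mu-law stream to 16-bit signed PCM at the same
    // sample rate, using the converters registered with AudioSystem.
    static AudioInputStream toPcm(AudioInputStream ulaw) {
        AudioFormat src = ulaw.getFormat();
        AudioFormat pcm = new AudioFormat(
                AudioFormat.Encoding.PCM_SIGNED,
                src.getSampleRate(), 16, src.getChannels(),
                src.getChannels() * 2, src.getSampleRate(), false);
        return AudioSystem.getAudioInputStream(pcm, ulaw);
    }

    public static void main(String[] args) {
        // 8000 Hz, 8-bit, mono mu-law source (0.1 s of dummy samples)
        AudioFormat ulawFmt = new AudioFormat(
                AudioFormat.Encoding.ULAW, 8000f, 8, 1, 1, 8000f, false);
        byte[] data = new byte[800];
        AudioInputStream ulaw = new AudioInputStream(
                new ByteArrayInputStream(data), ulawFmt, data.length);
        System.out.println(toPcm(ulaw).getFormat());
    }
}
```

The resulting PCM stream can then be written to a `SourceDataLine` that the sound card actually supports.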
Recorded data is kept in a DataLine buffer. If you do not read from the line for a long time, an "overrun" condition occurs, and older data is replaced with newer data. This produces artifacts in the recorded audio data.
A similar situation occurs during playback. If all data in the buffer has been played and no new data has been written to the line, an "underrun" condition occurs, and silence is played until you write a new portion of audio data to the line.
The preferred way to record is to read data in a separate thread, to prevent interference from other tasks (for example, UI handling). If you use SourceDataLine for playback, writing data to the line from a separate thread is likewise preferred. If you use Clip for playback, the Clip implementation creates such a thread itself.
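The dedicated reader thread described above can be sketched as follows. So that the example runs without audio hardware, an in-memory PCM stream stands in for the line; in a real recorder the stream would wrap a TargetDataLine (for example, `new AudioInputStream(line)` after opening and starting the line). The class name `RecorderThreadDemo` is illustrative only.

```java
import javax.sound.sampled.*;
import java.io.*;

public class RecorderThreadDemo {
    // Drain an audio stream into a buffer from a dedicated thread,
    // so that slow work on other threads (UI handling, disk I/O)
    // cannot stall the reads and cause an overrun.
    static byte[] record(AudioInputStream in) throws Exception {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        Thread reader = new Thread(() -> {
            byte[] buf = new byte[1024];
            try {
                int n;
                // Keep reading until the stream ends; with a real
                // TargetDataLine this loop runs until recording stops.
                while ((n = in.read(buf)) != -1) {
                    out.write(buf, 0, n);
                }
            } catch (IOException e) {
                e.printStackTrace();
            }
        });
        reader.start();
        reader.join(); // a real app would stop the line, then join
        return out.toByteArray();
    }

    public static void main(String[] args) throws Exception {
        // 8000 Hz, 16-bit, mono PCM; 0.2 s of silence as stand-in input
        AudioFormat fmt = new AudioFormat(8000f, 16, 1, true, false);
        byte[] pcm = new byte[3200];
        AudioInputStream in = new AudioInputStream(
                new ByteArrayInputStream(pcm), fmt,
                pcm.length / fmt.getFrameSize());
        System.out.println(record(in).length);
    }
}
```

Keeping the read loop free of other work is what matters: as long as the thread returns to `read` promptly, the line's internal buffer is drained before it can overrun.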