

Linux Audio User FAQ (Frequently Asked Questions)


General

Q: Where can I ask a question?

A: There are several mailing lists. See the Resources and Lists pages at linuxaudio.org.

Q: How do I optimize my system for audio / MIDI?

A: Setting scheduling priorities appropriately is the important step here. A “real-time kernel” is also often recommended, but according to the jackd FAQ that is simply not required! See http://jackaudio.org/realtime_vs_realtime_kernel for articles and more information about real-time operation.

Q: Some of my applications sound a (half-)tone too high/low, or have the wrong pitch. Why?

A: It's likely that the sample rates of the programs you're using don't match. Decide on one sample rate and make sure all applications actually use it. When running a sound server (jack), make sure it uses the same sample rate. (For example, when running fluidsynth, set the -r parameter appropriately.)
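The size of the pitch error caused by such a mismatch can be estimated with a quick back-of-the-envelope check (a sketch; 44100 and 48000 are just the two common example rates):

```shell
# Material recorded at 44100 Hz but resampled/played as 48000 Hz is transposed up by:
# semitones = 12 * log2(48000 / 44100)
awk 'BEGIN { printf "%.2f semitones\n", 12 * log(48000 / 44100) / log(2) }'
# -> 1.47 semitones
```

That is roughly a semitone and a half, which is why the mismatch is clearly audible as a wrong pitch.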

Q: How do I remove noise? How do I restore old recordings (from vinyl/tape)?

Q: What about timers and timing?

A: Modern PCs provide good hardware timing sources. One of them is the RTC (real-time clock). Another is the HPET (high-precision event timer), which is preferable due to its higher accuracy. A software layer in the kernel makes these hardware timers available to applications through different interfaces; one example is the usual system timer. An implementation that tries to squeeze everything out of the hardware is the “HR timer” (high-resolution timer).

The higher the timer resolution, the higher the possible accuracy of audio/MIDI processing. For MIDI processing it is recommended to increase the system-timer frequency from 250 Hz (the current default) to 1000 Hz; the HR timer offers even higher resolution. To save power at high frequencies, the kernel has a so-called tickless-timer feature, which only performs timer ticks (function calls) at requested (nanosecond-resolution) times instead of at a fixed interval. For best performance, timer interrupts should be given a high scheduling priority (see: priority settings, rtirq).
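The effect of the system-timer frequency on MIDI timing can be estimated directly, since the tick period is simply 1/HZ (a sketch using the 250 Hz and 1000 Hz values mentioned above):

```shell
# Timer tick period in milliseconds = 1000 / HZ
awk 'BEGIN { printf "250 Hz tick = %.1f ms, 1000 Hz tick = %.1f ms\n", 1000/250, 1000/1000 }'
# -> 250 Hz tick = 4.0 ms, 1000 Hz tick = 1.0 ms
```

MIDI events scheduled via the system timer can only fire on tick boundaries, so going from 250 Hz to 1000 Hz reduces the worst-case scheduling jitter from 4 ms to 1 ms.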

How do I know whether an HPET (hardware) is available?

dmesg | grep -i hpet

How do I get a list of available timers?

cat /proc/asound/timers
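To check which timing source the kernel is actually using right now (a sketch; the sysfs path is standard on modern kernels, and the fallback message is only there for robustness on systems without it):

```shell
# Print the active clocksource (typically tsc, hpet or acpi_pm)
cat /sys/devices/system/clocksource/clocksource0/current_clocksource 2>/dev/null \
  || echo "clocksource sysfs not available"
```

The alternatives the kernel could switch to are listed in available_clocksource in the same directory.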

Related Kernel Options

# cat .config | grep -i hpet
CONFIG_HPET_TIMER=y
CONFIG_HPET_EMULATE_RTC=y
CONFIG_HPET=y
CONFIG_HPET_MMAP=y

# cat .config | egrep -i "hrt|hr_t"
CONFIG_SCHED_HRTICK=y
CONFIG_SCx200HR_TIMER=m
CONFIG_SND_HRTIMER=m
CONFIG_SND_SEQ_HRTIMER_DEFAULT=y

CONFIG_NO_HZ=y
CONFIG_HZ_1000=y

Related Articles: Article at kerneltrap.org


Audio

Q: How do I control the numbering (order) of soundcards when using ALSA drivers?

A: In /etc/modprobe.d/alsa-base.conf (on Debian-based distros) you can add an options line of the form:

options [sound-driver-name] index=[number]

When a driver handles more than one device, you have to specify multiple values for the index option, like this:

options snd-usb-audio index=2,3

How do I specify the order of *several* USB soundcards in alsa-base.conf?

If you have several soundcards of one type (i.e. using the same driver), you can additionally specify product IDs, as follows:

Look at the output of “lsusb” for the vendor/product IDs of the devices, then specify these IDs in the vid and/or pid options, in hexadecimal. For example, if your first USB device has IDs 0123:4567 and the second 89ab:cdef, use the line

options snd-usb-audio index=2,3 pid=0x4567,0xcdef
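Putting the pieces together, a complete /etc/modprobe.d/alsa-base.conf could then look like this (a sketch: snd-hda-intel is an assumed onboard driver, and the product IDs are the example values from above):

```
# keep the onboard card first
options snd-hda-intel index=0
# two USB cards of the same type, ordered by product ID
options snd-usb-audio index=2,3 pid=0x4567,0xcdef
```

After editing the file, the modules must be reloaded (or the machine rebooted) for the new order to take effect.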


additional information at alsa.opensrc.org - http://alsa.opensrc.org/index.php/MultipleUSBAudioDevices


Q: Is it possible to use a label (device name) instead of a number when referencing a soundcard (e.g. in qjackctl)?

A: Yes. Instead of 'hw:a,b' (e.g. hw:1,0) you can use 'hw:DEVICE_ID'. To find the device ID, check the output of:

cat /proc/asound/cards
1 [UA25EX ]: USB-Audio - UA-25EX

here you would specify: hw:UA25EX

full command for jackd: /usr/bin/jackd -P70 -u -dalsa -dhw:UA25EX -r48000 -p512 -n3 -M -Xseq

Q: How to set up the JACK audio server? (jackd)

A:

  • Use a samplerate supported by your hardware (usually 44100 (CD-quality) or 48000 (DAT-quality)) (jackd param -r)
  • Use either 2 or 3 as value for periods per buffer (depends on hardware) (jackd param -n)
  • Start with 2048 “frames per period” and halve the value until you get xruns (lower is better here). In general, values of 64 to 256 are only possible *with* a real-time kernel. Then settle on the smallest value that does not produce xruns. (jackd param -p)
  • For recording (from a microphone etc., not for MIDI) you probably want a small period size, for low-latency monitoring.
  • For mastering, mixing or editing, raise the period size to >= 1024 (many post-processing effects are very CPU-intensive, and you do not need low latency just to listen).
  • If you're using a Linux distribution dedicated to audio processing, with a real-time kernel, you can achieve lower latencies by setting the -R (realtime) and -P (rt-priority) parameters of jackd.
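The resulting buffer latency follows directly from the -p, -n and -r values; a sketch for -p 256 -n 2 -r 48000 (example values, adjust to your own settings):

```shell
# latency_ms = frames_per_period * periods / samplerate * 1000
awk 'BEGIN { printf "%.1f ms\n", 256 * 2 / 48000 * 1000 }'
# -> 10.7 ms
```

Halving -p halves this latency, which is why small period sizes are preferred for live monitoring.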

Q: What is an xrun?

A: A buffer is not filled in time for the requesting software to use it. Sound data does not arrive fast enough to keep the stream continuous, so the requesting software runs out of sound data. This happens because some part of the system cannot keep up.

MIDI

Q: What is the difference between Jack-Midi and Alsa-Midi?

The short answer:

Jack-Midi has been introduced to extend/replace alsa-midi with:

  • improved timing
  • sample-accurate midi-event alignment

They currently co-exist.

The long answer:

kindly provided by Fons Adriaensen (2008-09-23 at linux-audio-user _AT_ lists.linuxaudio.org)

To understand how all this fits together you need to know the following.

1. The bottom layer: ALSA audio and ALSA raw midi.

On most (nearly all) Linux systems these are the drivers handling audio and midi devices respectively. A program can use these directly, but then it is limited to connecting to hardware devices only.

2. Interconnecting applications and hardware: Jack and ALSA midi sequencer.

Jack can interconnect audio programs that are written to use it to each other and to an audio card. To talk to the audio card Jack uses the ALSA driver in most cases. This is what you see in the 'AUDIO' tab in qjackctl.

The ALSA midi sequencer does the same for midi. It can connect applications to each other and to physical midi interfaces (raw ALSA ports). This is what you see in the 'ALSA' tab of qjackctl.

3. Jack midi replacing ALSA midi sequencer.

For a programmer the ALSA midi sequencer can be hard to use. It has had serious problems with timing, mainly because of lack of high-resolution timer support in Linux until recent times. This has resulted in the development of midi-in-Jack, which can (almost) be used as a replacement for the ALSA midi sequencer. It has its pros and cons. This is what you see in the 'MIDI' tab in qjackctl.

Currently the two midi routing systems coexist, and this leads to some confusion.

In qjackctl you have three options for midi-in-Jack:

1. 'None' - this means that Jack will not use any ALSA midi devices. Applications that use midi-in-jack (e.g. Aeolus) will still show up in the 'MIDI' tab, but you can't connect them to any physical ALSA midi ports, only to other apps (if there are any others).

2. 'Raw' - this means that Jack will grab the raw ALSA (hardware) midi devices, and convert them into Jack midi ports with rather useless names. The same devices are still shown in the 'ALSA' tab, but you will notice you can't connect them there anymore - they usually support only one client, and Jack has already taken them. This means that applications that only support connecting to the ALSA midi sequencer can't be connected to physical midi devices anymore. In the dreams of the Jack midi developers, such apps will soon cease to exist, and if that happens the 'raw' option is the normal one to use.

3. 'Seq' - this means Jack will convert *all* ALSA sequencer ports into Jack midi ports. The original ones remain accessible in the 'ALSA' tab. The Jack-midi to ALSA-seq bridge is only included in the ALSA backend, not in the FreeBoB/FFADO firewire backend. a2jmidid or similar apps are then the solution.

Example: Aeolus has both an ALSA midi sequencer port, and a midi-in-Jack port. You will see Aeolus in both the 'MIDI' and 'ALSA' tabs. If you use 'Raw', you will have to use the Jack connection to connect Aeolus to a keyboard. If you use 'Seq' you can make the connection in either system.

Also if you use 'Seq' there will be two Jack-midi ports for Aeolus: the Jack-midi port that Aeolus creates, and a Jack-midi copy of the ALSA sequencer port of Aeolus, shown as system:playback_#, with # some number.

Don't use the latter. What happens if you do that is that the midi data from your keyboard will follow a rather long route: Raw ALSA → Jack port → Jack port → ALSA sequencer → Aeolus.

The most direct route is still the one in the 'ALSA' tab: Raw ALSA → ALSA sequencer → Aeolus.


Hardware

Q: Is my hardware / soundcard supported on Linux?

A:

  • PCI: At http://kmuto.jp/debian/hcl/ there is a rough test for compatibility of PCI devices (currently using the 2.6.30-1-686 kernel). Just execute “lspci -n” and paste the output on the site.
  • There is also the Linux on Laptops site, but it doesn't tell you whether a laptop is particularly suited for audio production.


Devices following the USB 1.1 standard are usually supported. Some require additional firmware to be loaded however.

(on debian/ubuntu: apt-get install midisport-firmware alsa-firmware-loaders)

Support for USB 2.0 devices is also increasingly common.


Software

Q: Is it possible to run VST plugins on Linux?

A: Yes, there are different solutions:

List of Windows VST hosts:

List of Linux VST hosts:

Related sites:

Note: Instead of VST, you will soon prefer to use native Linux plugin technologies :-) LADSPA, DSSI and LV2.

faq/start.1328235834.txt.gz · Last modified: 2012/02/03 03:23 by 76.10.176.11