The composer's toolbox

Unlike on Windows or macOS, where commercial packages dominate, going from idea to finished audio file on Linux is not necessarily best served by one-stop-shop software (although a few such programs exist). This approach may seem counterintuitive to those who have used software such as Propellerhead Reason or Ableton Live, which handle essentially all the tasks associated with the production of music.

Instead, Linux music production can be envisioned as an assembly line where each “worker” (in this case, a program) is tasked with a different duty.

This paradigm allows for greater flexibility. And because each program's tasks and duties are well-defined and well-confined, a project can be considered more or less finished, yet continue to work seamlessly far into the future with new devices and other parts of the assembly line.

This integration can be done in two ways:

  • Horizontally
  • Vertically

In Vertical integration, independently written programs communicate with one another through plugins. This allows one to use, say, a general synthesizer (take ZynAddSubFX, for example) with a number of composition programs, such as LMMS.

In Horizontal integration, you use what is essentially the musical equivalent of the UNIX pipe ('|'). Suppose you have a MIDI keyboard. You can route its MIDI messages to an arpeggiator, have that software output MIDI messages routed to a sequencer, which outputs not MIDI but audio to a mixer, which then mixes the audio together for a DAW.
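The pipe idea can be sketched in plain Python, treating each program in the chain as a function. The stage names, note representation, and arpeggio pattern below are invented for illustration; real routing is done through JACK or ALSA MIDI connections, not Python code.

```python
# Toy model of Horizontal integration: each "program" is a function,
# and chaining the calls plays the role of the UNIX pipe.

def keyboard():
    """Emit MIDI note numbers, as striking keys on a keyboard would."""
    return [60, 64, 67]  # a C major triad: C4, E4, G4

def arpeggiator(notes, pattern=(0, 1, 2, 1)):
    """Turn simultaneous notes into a broken-chord sequence (MIDI in, MIDI out)."""
    return [notes[i] for i in pattern]

def sequencer(notes, step=0.25):
    """Attach a start time (in beats) to each note."""
    return [(i * step, n) for i, n in enumerate(notes)]

def mixer(events, gain=0.8):
    """Apply a channel gain to every event before it reaches the DAW."""
    return [(t, n, gain) for t, n in events]

# keyboard | arpeggiator | sequencer | mixer
out = mixer(sequencer(arpeggiator(keyboard())))
print(out)  # [(0.0, 60, 0.8), (0.25, 64, 0.8), (0.5, 67, 0.8), (0.75, 64, 0.8)]
```

Because each stage only agrees on the data passing between them, any stage can be swapped out without touching the others, which is exactly the appeal of the assembly-line approach.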

These two approaches can be used together, and additional pieces (such as automation engines or effects plugins) can be inserted, or pieces removed, between striking a key on a keyboard and hearing a sound from a speaker.

There is also an entirely different paradigm for creating music: musical programming languages. Under this paradigm, a composer writes a program in a specialized audio language, which the computer then compiles to produce the audio output. Some of these languages, such as SuperCollider and Pure Data, introduce novel paradigms that permit real-time feedback. A good article showing this approach in practice is Recontextualizing Ambient Music in Csound by Kim Cascone, in which the author uses the freely available Csound to construct ambient music.
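As a rough illustration of this paradigm, the short Python program below is itself the "score": running it renders a one-second 440 Hz tone to a WAV file, using only the standard library. This is a minimal sketch of the idea, not how Csound or SuperCollider work internally; the filename, pitch, and duration are arbitrary choices.

```python
# "Music as a program": the code describes the sound, and running the
# code renders the audio file.
import math
import struct
import wave

RATE = 44100      # samples per second
FREQ = 440.0      # A4, in Hz
SECONDS = 1.0

# Synthesize one second of a sine wave as 16-bit little-endian PCM.
frames = bytearray()
for i in range(int(RATE * SECONDS)):
    sample = int(32767 * 0.5 * math.sin(2 * math.pi * FREQ * i / RATE))
    frames += struct.pack('<h', sample)

with wave.open('tone.wav', 'wb') as w:
    w.setnchannels(1)    # mono
    w.setsampwidth(2)    # 16-bit samples
    w.setframerate(RATE)
    w.writeframes(frames)
```

Changing a constant and re-running the program changes the music, which is what makes real-time-capable languages like SuperCollider so interesting for live work.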

Finally, to really make things interesting, the two paradigms can work together. There are bridges that permit both Vertical and Horizontal integration between the two ways of doing things. This can allow one to have immense control over the finest details of a composition.


Composition comprises two stages:

  1. scoring
  2. production

A composer is expected to create two distinct but related products:

  1. the score
  2. the audio


Scoring is the process by which a composer represents the music in its high-level form; it is a preliminary step that is useful in its own right. Since the score is a product used by performers, the composer is judged by the ability to capture the music in its most elegant structure (making the best possible compromise between the essence of the piece and its intricate details).


Production is the process by which the composer generates the intended output, the sound - the product the vast majority of the audience is interested in. If the composer fails to accomplish this step correctly, what would otherwise be a masterpiece could go unnoticed. Imagine Beethoven living in today's age, writing his 5th Symphony in MuseScore and releasing it on YouTube, with the audio rendered from the scorewriter's default output. We all know what that would sound like (and if you don't, try it). He probably wouldn't get many views - and of those he did get, most would be dislikes. A composer must also be a skillful producer. This has always been the case: after completing the final score of a symphony, the composer's next step was to instruct the orchestra on how to properly reproduce the opus in sound.

During the common practice era, the audio took the form of performance - which was often done by the composer himself. For example, in the early stages of his career, Beethoven's work was mainly for the piano, for which he rapidly developed a reputation as a virtuoso. In its grandest scale, performance involves the composer playing the largest instrument around, the symphony orchestra.

In the late 19th century, however, a major revolution occurred with Thomas Edison's invention of the phonograph. For the first time in history, it became possible to encapsulate audio in an object - a recording. What the printing press was to language, recording became to audio. Just as the printing press made it possible to disseminate works of language en masse, recording made it possible to offer works of audio in heaps and piles (and works of audio, as we know, predominantly took the form of music).

The storage medium for sound made its debut as the phonograph cylinder, which quickly evolved into the phonograph record, then tape, then the compact disc - and nowadays sound exists (and is distributed) in its pure information form, the soundfile.


The composer's workflow generally adheres to the order described above: first scoring, then production.

A sound server is used to connect all the tools together. A session manager is used to facilitate their setup and management.

An example of a beginner's toolset would be:

  • sound server: JACK (QjackCtl)
  • scorewriter: MuseScore
  • sampler: LinuxSampler (JSampler)
  • sample library: Sonatina Symphonic Orchestra

However, for the sample library, most Linux musicians employ:

  • sample library: ad-hoc collection

(Although it would be ideal if there were a collaborative effort to create a centralized library, for everybody to use and contribute to, in the Linux way of doing business.)

Peter Schaffter offers an example of such a toolchain in his excellent MuseScore tutorial.

A very good toolset would be:

  • sound server: JACK (QjackCtl)
  • session manager: Non Session Manager
  • scorewriter-sequencer: LilyPond (Laborejo)
  • sampler: LinuxSampler (QSampler)
  • sample library: Sonatina Symphonic Orchestra, ad-hoc collection
  • workstation: Ardour

The Non tools are popular: Non Session Manager, Non Sequencer, Non Mixer, and Non Timeline.

Nils Gey, based on his testing of SSO (Experiment 1, 2 and 3), uses:

  • sound server: JACK
  • session manager: Non Session Manager
  • scorewriter-sequencer: LilyPond (Laborejo)
  • sampler: Calfbox (LisaloQt)
  • sample library: Sonatina Symphonic Orchestra
  • reverb: zita-rev1
  • recorder: jack_capture
  • mixer: Non Mixer

Brett W. McCoy, based on his blog articles, uses:

  • sound server: JACK
  • scorewriter: LilyPond (Frescobaldi)
  • sequencer: Rosegarden
  • samplers and sample libraries: Windows machine running commercial samplers and sample libraries
  • workstations: Ardour, Mixbus
  • mastering interface: Jamin


This section discusses the philosophy behind the artist's toolbox.

Good toolset

A good toolset is:

  1. easily assembled, and easily reproducible if lost
  2. as minimal as possible (without sacrificing quality) (i.e. elegant)

Reliable tool

A reliable tool:

  1. has a website
  2. is maintained in a personal package archive
  3. is maintained in a major repository
  4. has a large userbase
  5. has a Wikipedia article

Userbase: When selecting a tool of a certain type, weight should be given to the one with the largest userbase (as the larger the userbase, the greater the probability of reliability). What are the majority of practitioners doing? What is the standard?


For a discussion about the composer's toolbox, see the corresponding forum topic.

wiki/composition.txt · Last modified: 2021/08/20 16:38 by gootz