# PREFACE

Csound is one of the best known and longest established programs in the field of audio-programming. It was developed in the mid-1980s at the Massachusetts Institute of Technology (MIT) by Barry Vercoe.

Csound's history lies deep in the roots of computer music. It is a direct descendant of the oldest family of computer programs for sound synthesis, the 'MusicN' languages by Max Mathews. Csound is free, distributed under the LGPL licence, and is tended and expanded by a core of developers with support from a wider community.

Csound has been growing for more than 25 years. There are few things related to audio that you cannot do with Csound. You can work by rendering offline, or in real-time by processing live audio and synthesizing sound on the fly. You can control Csound via MIDI, OSC, or via the Csound API (Application Programming Interface). In Csound, you will find the widest collection of tools for sound synthesis and sound modification, including special filters and tools for spectral processing.

Is Csound difficult to learn? Generally, graphical audio programming languages like Pure Data (more commonly known as Pd - see the Pure Data FLOSS Manual for further information), Max or Reaktor are easier to learn than text-based audio programming languages like Csound, SuperCollider or ChucK. In a graphical language you cannot make a typo that produces an error you do not understand. You program without being aware that you are programming. It feels like patching together different units in a studio. This is a fantastic approach. But when you deal with more complex projects, a text-based programming language is often easier to use and debug, and many people prefer to program by typing words and sentences rather than by wiring symbols together using the mouse.

Thanks to the work of Victor Lazzarini and Davis Pyon, it is also very easy to use Csound as a kind of audio engine inside Pd or Max. See the chapter Csound in Other Applications and the sections on Csound in Pd and Csound in MaxMSP for further information.

Amongst text-based audio programming languages, Csound is arguably the simplest. You do not need to know anything about objects or functions. The basics of the Csound language are a straightforward transfer of the signal flow paradigm to text.

For example, to create a 400 Hz sine oscillator with an amplitude of 0.2, this is the signal flow:

This is a possible transformation of the signal graph into Csound code:

```
instr Sine
aSig      oscils    0.2, 400, 0
          out       aSig
endin
```

The oscillator is represented by the opcode oscils and receives its input arguments, amplitude (0.2), frequency (400) and phase (0), on the right-hand side. It produces an audio signal called aSig on the left-hand side, which is in turn the input of the second opcode, out. The first and last lines enclose these connections inside an instrument called Sine. That's it.

But it is often difficult to find up-to-date resources that explain all of the things that are possible with Csound. Documentation and tutorials produced by many experienced users tend to be scattered across many different locations. This was one of the main motivations for producing this manual: to facilitate a flow between the knowledge of contemporary Csound users and those wishing to learn more about Csound.

Ten years after the milestone of Richard Boulanger's Csound Book, the Csound FLOSS Manual is intended to offer an easy-to-understand introduction and to provide a centre of up-to-date information about the many features of Csound - not as detailed and in-depth as the Csound Book, but including new information and sharing this knowledge with the wider Csound community.

Throughout this manual we will attempt a difficult balancing act: we want to provide users with nearly everything important there is to know about Csound, but we also want to keep things simple and concise to save you from drowning under the multitude of things that we could say about Csound. Frequently this manual will link to other more detailed resources like the Canonical Csound Reference Manual, the primary documentation provided by the Csound developers and associated community over the years, and the Csound Journal (edited by Steven Yi and James Hearon), a quarterly online publication with many great Csound-related articles.

Good luck and happy Csounding!

# HOW TO USE THIS MANUAL

The goal of this manual is to provide a readable introduction to Csound. In no way is it meant as a replacement for the Canonical Csound Reference Manual. It is intended as an introduction-tutorial-reference hybrid, gathering the most important information you need for working with Csound in a variety of situations. In many places, links are provided to other resources, such as the official manual, the Csound Journal, example collections, and more.

It is not necessary to read each chapter in sequence; feel free to jump to any chapter that interests you, although bear in mind that occasionally a chapter will make reference to a previous one.

If you are new to Csound, the QUICK START chapter will be the best place to go to get started. BASICS provides a general introduction to key concepts about digital sound vital to understanding how Csound deals with audio. The CSOUND LANGUAGE chapter provides greater detail about how Csound works and how to work with Csound.

SOUND SYNTHESIS introduces various methods of creating sound from scratch and SOUND MODIFICATION describes various methods of transforming sounds that already exist within Csound. SAMPLES outlines ways in which to record and play audio samples in Csound, an area that might be of particular interest to those intent on using Csound as a real-time performance instrument. The MIDI and OPEN SOUND CONTROL chapters focus on different methods of controlling Csound using external software or hardware. The final chapters introduce various front-ends that can be used to interface with the Csound engine and Csound's communication with other applications.

If you would like to know more about a topic, and in particular about the use of any opcode, refer first to the Canonical Csound Reference Manual.

All files - examples and audio files - can be downloaded at www.csound-tutorial.net. If you use CsoundQt, you can find all the examples in CsoundQt's examples menu under "Floss Manual Examples".

Like other audio tools, Csound can produce an extreme dynamic range. Be careful when you run the examples! Start with a low volume setting on your amplifier and take special care when using headphones.

You can help to improve this manual, either in reporting bugs or requests, or in joining as a writer. Just contact one of the maintainers (see the list in ON THIS RELEASE).

Thanks to Alex Hofmann, this manual can be ordered as a print-on-demand book at www.lulu.com. Just use the search utility there and look for "Csound". Note that in the printed version the links will not work.

# ON THIS RELEASE

We are happy to announce the second release of the Csound Floss Manual. It has been an exciting year for Csound, with many activities and important developments. Thanks to the long and hard work of Steven Yi, John ffitch, Tito Latini and others, a new parser has been written. This opens up many new possibilities for future language adaptations and more flexibility within the Csound syntax. In autumn 2011, the first international Csound Conference took place at HMTM Hannover, with many inspiring workshops, concerts, papers and most notably discussions between developers and users. In early 2012, Jim Aikin's Csound Power! was published; it is a very well-written introduction to Csound. In early spring, Victor Lazzarini and Steven Yi published the first release of Csound on Android devices, and all developers are currently pushing towards Csound6.

The first edition of the Csound Floss Manual has been a huge success. We are proud and glad to see it used, linked and quoted in many places. It has come to be regarded as a complement to the Csound Manual. We hope we can continue to reflect Csound's development in this manual. The core writers of the Csound Floss manual would like to extend their thanks to Richard Boulanger, John Clements and others for their support, and to all the writers for their various contributions. Thanks also are due to Adam Hyde and the team at flossmanuals.net for maintaining and developing this important platform for free libre open source software.

### What's new in this Release

• New chapters:
  • MACROS (Csound Language)
  • CABBAGE (Csound Frontends)
  • BUILDING CSOUND (Appendix)
  • METHODS OF WRITING CSOUND SCORES (Appendix)
• Chapters now completed:
  • WAVESHAPING (Sound Synthesis)
  • PHYSICAL MODELLING (Sound Synthesis)
  • CONVOLUTION (Sound Modification)
  • CSOUND VIA TERMINAL (Csound Frontends)
  • CSOUND UTILITIES
• Significant amendments and additions to the following chapters:
  • AM / RM / WAVESHAPING (Sound Modification)
  • GRANULAR SYNTHESIS (Sound Modification)
  • CSOUND IN PD (Csound in Other Applications)
  • LINKS (Appendix)
• New chapters as drafts:
  • CSOUND IN ABLETON LIVE (Csound in Other Applications)
  • CSOUND AS A VST PLUGIN (Csound in Other Applications)
  • PYTHON IN CSOUNDQT
  • LUA IN CSOUND
• Slight changes in the structure (the TERMINAL is now considered as a frontend, and THE CSOUND API chapter is now part of the section Csound and other Programming Languages)

### Still on the To-Do-List:

• More and better illustrations
• Adding examples for VBAP, Ambisonics etc in PANNING AND SPATIALIZATION (Sound Modification)
• Adding examples and explanations in METHODS OF WRITING CSOUND SCORES (Appendix)
• Update the OPCODE GUIDE (and more eyes on it in general)
• Much more should be written in the GLOSSARY
• Apart from the newly drafted chapters, PYTHON INSIDE CSOUND and EXTENDING CSOUND still remain to be written.

Last summer Alex Hofmann put a lot of work into making this manual available as a book on www.lulu.com. Just use the search utility there and look for "Csound", if you would like to obtain a printed version. This second release will be available soon.

Surround Wunderbar Studios, Berlin, 30th March, 2012

Joachim Heintz & Iain McCurdy

### Foreword on the First Release

In spring 2010 a group of Csounders decided to start this project. The chapter outline was suggested by Joachim Heintz with suggestions and improvements provided by Richard Boulanger, Oeyvind Brandtsegg, Andrés Cabrera, Alex Hofmann, Jacob Joaquin, Iain McCurdy, Rory Walsh and others. Rory also pointed us to the FLOSS Manuals platform as a possible environment for writing and publishing. Stefano Bonetti, François Pinot, Davis Pyon and Steven Yi joined later and wrote chapters.

In a volunteer project like this it is not always easy to sustain momentum, so in the spring of 2011 some members of the team met in Berlin for a 'book sprint' to achieve a level of completion and publish a first release.

With heads spinning and square eyes we are happy and proud to offer this manual to you. At the same time we realize that this is a first release with much potential for further improvement. Several chapters have yet to be written, others are not yet complete and the differences between the various authors in terms of the level at which they aim and their degree of detail are perhaps larger than they should be.

This is therefore a beginning. Everyone is invited to improve this book. You can begin to write for one of the empty chapters, contribute to an existing one or insert new examples where you feel they are of benefit. You just need to create an account at http://booki.flossmanuals.net or to let us know of your suggestions.

We hope you enjoy using this manual, we had fun writing it!

Berlin, 31st March, 2011

Joachim Heintz, Alex Hofmann, Iain McCurdy

jh at joachimheintz.de, alex at boomclicks.de, i_mccurdy at hotmail.com

You can order a printed version here:

http://www.lulu.com/product/paperback/csound---floss-manual/16265055

# DIGITAL AUDIO

At a purely physical level, sound is simply a mechanical disturbance of a medium. The medium in question may be a gas (such as air), a solid, a liquid, or a mixture of several of these. This disturbance to the medium causes molecules to move to and fro in a spring-like manner. As one molecule hits the next, the disturbance moves through the medium causing sound to travel. These so-called compressions and rarefactions in the medium can be described as sound waves. The simplest type of waveform, describing what is referred to as 'simple harmonic motion', is a sine wave.

Each time the waveform signal goes above 0 the molecules are in a state of compression meaning they are pushing towards each other. Every time the waveform signal drops below 0 the molecules are in a state of rarefaction meaning they are pulling away from each other. When a waveform shows a clear repeating pattern, as in the case above, it is said to be periodic. Periodic sounds give rise to the sensation of pitch.

## Elements of a sound wave

Periodic waves have four common parameters, and each of the four parameters affects the way we perceive sound.

• Period: This is the length of time it takes for a waveform to complete one cycle, measured in seconds. This amount of time is commonly referred to as T.

• Wavelength (λ): the distance a wave travels during one complete period. This is usually measured in meters.

• Frequency: the number of cycles or periods per second. Frequency is measured in Hertz. If a sound has a frequency of 440Hz it completes 440 cycles every second. Given a frequency, one can easily calculate the period of any sound. Mathematically, the period is the reciprocal of the frequency (and vice versa). In equation form, this is expressed as follows.

```
Frequency = 1/Period         Period = 1/Frequency
```

Therefore the frequency is the inverse of the period: a wave of 100 Hz frequency has a period of 1/100 or 0.01 secs; likewise a frequency of 256 Hz has a period of 1/256, or about 0.004 secs. To calculate the wavelength of a sound in any given medium we can use the following equation:

```
Wavelength = Velocity/Frequency
```

Humans can hear frequencies from 20Hz to 20000Hz (although this can differ dramatically from individual to individual). You can read more about frequency in the next chapter.
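The reciprocal relationship between frequency and period can be made concrete with a few lines of code. This is a plain Python sketch (not Csound; the function names are ours), just to illustrate the arithmetic:

```python
# Frequency and period are reciprocals of one another.
def period(frequency_hz):
    """Period in seconds of a wave with the given frequency in Hz."""
    return 1.0 / frequency_hz

def frequency(period_s):
    """Frequency in Hz of a wave with the given period in seconds."""
    return 1.0 / period_s

print(period(100))            # 0.01 seconds
print(round(period(256), 5))  # 0.00391 seconds, i.e. roughly 0.004
print(frequency(0.01))        # 100.0 Hz
```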

• Phase: This is the starting point of a waveform within its cycle. The starting value along the y-axis of our plotted waveform is not always 0. Phase can be expressed in degrees or in radians; a complete cycle of a waveform covers 360 degrees or 2π radians.

• Amplitude: Amplitude is represented by the y-axis of a plotted pressure wave. The strength at which the molecules pull or push away from each other will determine how far above and below 0 the wave fluctuates. The greater the y-value the greater the amplitude of our wave. The greater the compressions and rarefactions the greater the amplitude.

## Transduction

The analogue sound waves we hear in the world around us need to be converted into an electrical signal in order to be amplified or sent to a soundcard for recording. The process of converting acoustical energy in the form of pressure waves into an electrical signal is carried out by a device known as a transducer.

A transducer, which is usually found in microphones, produces a changing electrical voltage that mirrors the changing compression and rarefaction of the air molecules caused by the sound wave. The continuous variation of pressure is therefore 'transduced' into continuous variation of voltage. The greater the variation of pressure the greater the variation of voltage that is sent to the computer.

Ideally, the transduction process should be as transparent and clean as possible: i.e., whatever goes in comes out as a perfect voltage representation. In the real world however this is never the case. Noise and distortion are always incorporated into the signal. Every time sound passes through a transducer or is transmitted electrically a change in signal quality will result. When we talk of 'noise' we are talking specifically about any unwanted signal captured during the transduction process. This normally manifests itself as an unwanted 'hiss'.

## Sampling

The analogue voltage that corresponds to an acoustic signal changes continuously so that at each instant in time it will have a different value. It is not possible for a computer to receive the value of the voltage for every instant because of the physical limitations of both the computer and the data converters (remember also that there are an infinite number of instants between any two instants!).

What the soundcard can do however is to measure the level of the analogue voltage at intervals of equal duration. This is how all digital recording works and is known as 'sampling'. The result of this sampling process is a discrete or digital signal, which is no more than a sequence of numbers corresponding to the voltage at each successive sample time.

Below left is a diagram showing a sinusoidal waveform. The vertical lines that run through the diagram represent the points in time when a snapshot is taken of the signal. After the sampling has taken place we are left with what is known as a discrete signal consisting of a collection of audio samples, as illustrated in the diagram on the right hand side below. If one is recording using a typical audio editor the incoming samples will be stored in the computer RAM (Random Access Memory). In Csound one can process the incoming audio samples in real time and output a new stream of samples, or write them to disk in the form of a sound file.

It is important to remember that each sample represents the amount of voltage, positive or negative, that was present in the signal at the point in time the sample or snapshot was taken.

The same principle applies to the recording of live video. A video camera takes a sequence of still pictures of a moving subject. Most video cameras will take between 30 and 60 still pictures a second. Each picture is called a frame. When these frames are played we no longer perceive them as individual pictures. We perceive them instead as a continuous moving image.

## Analogue versus Digital

In general, analogue systems can be quite unreliable when it comes to noise and distortion. Each time something is copied or transmitted, some noise and distortion is introduced into the process. If this is done many times, the cumulative effect can deteriorate a signal quite considerably. It is because of this that the music industry has turned to digital technology, which so far offers the best solution to this problem. As we saw above, in digital systems sound is stored as numbers, so a signal can be effectively "cloned". Mathematical routines can be applied to prevent errors in transmission, which could otherwise introduce noise into the signal.

## Sample Rate and the Sampling Theorem

The sample rate describes the number of samples (pictures/snapshots) taken each second. To sample an audio signal correctly it is important to pay attention to the sampling theorem:

```
"To represent digitally a signal containing frequencies up to X Hz, it is
necessary to use a sampling rate of at least 2X samples per second"
```

According to this theorem, a soundcard or any other digital recording device will not be able to represent any frequency above half the sampling rate. Half the sampling rate is also referred to as the Nyquist frequency, after the Swedish-born engineer Harry Nyquist, who formalized the theory in the 1920s. What this means is that any signal with frequencies above the Nyquist frequency will be misrepresented: it will appear as a frequency lower than the one being sampled. When this happens it results in what is known as aliasing or foldover.

## Aliasing

Here is a graphical representation of aliasing.

The sinusoidal waveform in blue is being sampled at each arrow. The line that joins the red circles together is the captured waveform. As you can see, the captured waveform and the original waveform have different frequencies. Here is another example:

We can see that if the sample rate is 40,000 Hz there is no problem sampling a signal that is 10 kHz. On the other hand, in the second example it can be seen that a 30 kHz waveform is not going to be correctly sampled. In fact we end up with a waveform of 10 kHz, rather than 30 kHz.
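This folding behaviour can also be computed directly. The following Python sketch (our own helper function, not part of Csound) returns the frequency at which a sampled sine will actually be heard:

```python
def aliased_frequency(freq, sample_rate):
    # A sampled frequency folds around multiples of the sample rate:
    # what we hear is the distance to the nearest multiple.
    return abs(freq - sample_rate * round(freq / sample_rate))

print(aliased_frequency(10000, 40000))  # 10000: below Nyquist, reproduced correctly
print(aliased_frequency(30000, 40000))  # 10000: folded down, as described above
print(aliased_frequency(43100, 44100))  # 1000: folded down
```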

The following Csound instrument plays a 1000 Hz tone first directly, and then a second time as an aliased tone: the frequency of 43100 Hz lies 1000 Hz below the sample rate of 44100 Hz, so it folds back and sounds like 1000 Hz:

EXAMPLE 01A01.csd

```
<CsoundSynthesizer>
<CsOptions>
-odac
</CsOptions>
<CsInstruments>
;example by Joachim Heintz
sr = 44100
ksmps = 32
nchnls = 2
0dbfs = 1

instr 1
asig    oscils  .2, p4, 0
outs    asig, asig
endin

</CsInstruments>
<CsScore>
i 1 0 2 1000 ;1000 Hz tone
i 1 3 2 43100 ;43100 Hz tone sounds like 1000 Hz because of aliasing
</CsScore>
</CsoundSynthesizer>
```

The same phenomenon takes place in film and video too. You may recall having seen wagon wheels apparently move backwards in old Westerns. Let us say for example that a camera is taking 60 frames per second of a wheel in motion. If the wheel completes exactly one rotation every 1/60th of a second, then every picture looks the same - as a result the wheel appears to stand still. If the wheel then turns slightly slower than this, it will appear as if the wheel is slowly turning backwards. This is because each snapshot catches the wheel just short of a full rotation. This is the ugliest side-effect of aliasing - wrong information.

As an aside, it is worth observing that a lot of modern 'glitch' music intentionally makes a feature of the spectral distortion that aliasing induces in digital audio.

Audio-CD quality uses a sample rate of 44100 Hz (44.1 kHz). This means that CD quality can only represent frequencies up to 22050 Hz. Humans typically have an absolute upper limit of hearing of about 20 kHz, thus making 44.1 kHz a reasonable standard sampling rate.

## Bits, Bytes and Words. Understanding binary.

All digital computers represent data as a collection of bits (short for binary digit). A bit is the smallest possible unit of information. One bit can only be one of two states - off or on, 0 or 1. The meaning of the bit, which can represent almost anything, is unimportant at this point. The thing to remember is that all computer data - a text file on disk, a program in memory, a packet on a network - is ultimately a collection of bits.

Bits in groups of eight are called bytes, and one byte usually represents a single character of data in the computer. It is a little-used term, but you might be interested to know that a nibble is half a byte (usually 4 bits).

## The Binary System

All digital computers work in an environment that has only two variables, 0 and 1. All numbers in our decimal system therefore must be translated into 0s and 1s in the binary system. It helps to think of binary numbers in terms of switches: with one switch you can represent up to two different numbers.

0 (OFF) = Decimal 0
1 (ON) = Decimal 1

Thus, a single bit represents 2 numbers, two bits can represent 4 numbers, three bits represent 8 numbers, four bits represent 16 numbers, and so on up to a byte, or eight bits, which represents 256 numbers. Therefore each added bit doubles the amount of possible numbers that can be represented. Put simply, the more bits you have at your disposal the more information you can store.
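The doubling rule is easy to check in code. A minimal Python sketch (not Csound):

```python
def representable_values(bits):
    # Each added bit doubles the number of values that can be represented.
    return 2 ** bits

for bits in (1, 2, 3, 4, 8, 16):
    print(bits, "bit(s):", representable_values(bits), "values")
# 1 bit -> 2, 2 bits -> 4, ... 8 bits -> 256, 16 bits -> 65536
```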

## Bit-depth Resolution

Apart from the sample rate, another important parameter which can affect the fidelity of a digital signal is the accuracy with which each sample is known, in other words how precisely each voltage is measured. Every sample obtained is set to a specific amplitude (the measure of strength for each voltage) level. The number of levels depends on the precision of the measurement in bits, i.e., how many binary digits are used to store the samples. The number of bits that a system can use is normally referred to as the bit-depth resolution.

If the bit-depth resolution is 3 then there are 8 possible levels of amplitude that we can use for each sample. We can see this in the diagram below. At each sampling period the soundcard plots an amplitude. As we are only using a 3-bit system the resolution is not good enough to plot the correct amplitude of each sample. We can see in the diagram that some vertical lines stop above or below the real signal. This is because our bit-depth is not high enough to plot the amplitude levels with sufficient accuracy at each sampling period.

```
example here for 4, 6, 8, 12, 16 bit of a sine signal ...
... coming in the next release
```
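Until that illustration is ready, the effect of a low bit-depth can be sketched in a few lines of Python (a simplified mid-tread quantizer of our own devising, not a Csound opcode):

```python
import math

def quantize(sample, bits):
    # Map a sample in the range [-1, 1] onto the grid of levels that
    # the given bit-depth allows (simplified mid-tread quantizer).
    steps = 2 ** (bits - 1) - 1   # levels on each side of zero
    return round(sample * steps) / steps

# With 3 bits the sine collapses onto very few levels (a rough staircase);
# with 16 bits the quantized values are almost indistinguishable.
sine  = [math.sin(2 * math.pi * i / 16) for i in range(16)]
rough = [quantize(s, 3) for s in sine]
fine  = [quantize(s, 16) for s in sine]
```

Comparing `rough` and `fine` against `sine` shows the quantization error shrinking as the bit-depth grows - exactly the quantization noise discussed below.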

The standard resolution for CDs is 16 bit, which allows for 65536 different possible amplitude levels (from -32768 to +32767). Using bit depths lower than 16 is not a good idea as it will result in noise being added to the signal. This is referred to as quantization noise and is a result of amplitude values being excessively rounded up or down when being digitized. Quantization noise becomes most apparent when trying to represent low amplitude (quiet) sounds. Frequently a tiny amount of noise, known as a dither signal, will be added to digital audio before conversion back into an analogue signal. Adding this dither signal will actually reduce the more noticeable noise created by quantization. As higher bit-depth resolutions are employed in the digitizing process the need for dithering is reduced. A general rule is to use the highest bit depth available.

Many electronic musicians make use of deliberately low bit-depth quantization in order to add noise to a signal. The effect is commonly known as 'bit-crushing' and is relatively easy to do in Csound.

## ADC / DAC

The entire process, as described above, of taking an analogue signal and converting it into a digital signal is referred to as analogue to digital conversion or ADC. Of course digital to analogue conversion, DAC, is also possible. This is how we get to hear our music through our PC's headphones or speakers. For example, if one plays a sound from Media Player or iTunes the software will send a series of numbers to the computer soundcard. In fact it will most likely send 44100 numbers a second. If the audio that is playing is 16 bit then these numbers will range from -32768 to +32767.

When the sound card receives these numbers from the audio stream it will output corresponding voltages to a loudspeaker. When the voltages reach the loudspeaker they cause the loudspeaker's magnet to move inwards and outwards. This causes a disturbance in the air around the speaker, resulting in what we perceive as sound.

# FREQUENCIES

As mentioned in the previous section frequency is defined as the number of cycles or periods per second. Frequency is measured in Hertz. If a tone has a frequency of 440Hz it completes 440 cycles every second. Given a tone's frequency, one can easily calculate the period of any sound. Mathematically, the period is the reciprocal of the frequency and vice versa. In equation form, this is expressed as follows.

```
Frequency = 1/Period         Period = 1/Frequency
```

Therefore the frequency is the inverse of the period: a wave of 100 Hz frequency has a period of 1/100 or 0.01 seconds; likewise a frequency of 256 Hz has a period of 1/256, or about 0.004 seconds. To calculate the wavelength of a sound in any given medium we can use the following equation:

```
λ = Velocity/Frequency
```

For instance, a wave of 1000 Hz in air (velocity of diffusion about 340 m/s) has a length of approximately 340/1000 m = 34 cm.
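To make the numbers concrete, here is a small Python sketch (not Csound; the value for the speed of sound in air is approximate):

```python
SPEED_OF_SOUND_AIR = 340.0  # metres per second, approximate

def wavelength(freq, velocity=SPEED_OF_SOUND_AIR):
    # Wavelength in metres of a wave with the given frequency in Hz.
    return velocity / freq

print(wavelength(1000))  # 0.34 m, i.e. 34 cm, as calculated above
print(wavelength(20))    # 17.0 m: the lowest audible sounds are very long waves
```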

## Lower And Higher Borders For Hearing

The human ear can generally hear sounds in the range 20 Hz to 20,000 Hz (20 kHz). This upper limit tends to decrease with age due to a condition known as presbyacusis, or age related hearing loss. Most adults can hear to about 16 kHz while most children can hear beyond this. At the lower end of the spectrum the human ear does not respond to frequencies below 20 Hz, with 40 or 50 Hz being the lowest most people can perceive.

So, in the following example, you will not hear the first (10 Hz) tone, and probably not the last (20 kHz) one, but hopefully the other ones (100 Hz, 1000 Hz, 10000 Hz):

EXAMPLE 01B01.csd

```
<CsoundSynthesizer>
<CsOptions>
-odac -m0
</CsOptions>
<CsInstruments>
;example by joachim heintz
sr = 44100
ksmps = 32
nchnls = 2
0dbfs = 1

instr 1
prints  "Playing %d Hertz!\n", p4
asig    oscils  .2, p4, 0
outs    asig, asig
endin

</CsInstruments>
<CsScore>
i 1 0 2 10
i . + . 100
i . + . 1000
i . + . 10000
i . + . 20000
</CsScore>
</CsoundSynthesizer>
```

## Logarithms, Frequency Ratios and Intervals

A lot of basic maths is about simplification of complex equations. Shortcuts are taken all the time to make things easier to read and calculate. Multiplication can be seen as shorthand for addition, for example 5×10 = 5+5+5+5+5+5+5+5+5+5. Exponents are shorthand for multiplication: 3^5 = 3×3×3×3×3. Logarithms are shorthand for exponents and are used in many areas of science and engineering in which quantities vary over a large range. Examples of logarithmic scales include the decibel scale, the Richter scale for measuring earthquake magnitudes and the astronomical scale of stellar brightnesses. Musical frequencies also work on a logarithmic scale, more on this later.

Intervals in music describe the distance between two notes. When dealing with standard musical notation it is easy to determine an interval between two adjacent notes. For example a perfect 5th is always made up of 7 semitones. When dealing with Hz values things are different. A difference of say 100Hz does not always equate to the same musical interval. This is because musical intervals as we hear them are represented in Hz as frequency ratios. An octave for example is always 2:1. That is to say every time you double a Hz value you will jump up by a musical interval of an octave.

Consider the following. A flute can play the note A at 440 Hz. If the player plays another A an octave above it at 880 Hz the difference in Hz is 440. Now consider the piccolo, the highest pitched instrument of the orchestra. It can play a frequency of 2000 Hz but it can also play an octave above this at 4000 Hz (2 x 2000 Hz). While the difference in Hertz between the two notes on the flute is only 440 Hz, the difference between the two high pitched notes on a piccolo is 1000 Hz yet they are both only playing notes one octave apart.

What all this demonstrates is that the higher two pitches become the greater the difference in Hertz needs to be for us to recognize the difference as the same musical interval. The most common ratios found in the equal temperament scale are the unison: (1:1), the octave: (2:1), the perfect fifth (3:2), the perfect fourth (4:3), the major third (5:4) and the minor third (6:5).

The following example shows the difference between adding a certain frequency and applying a ratio. First, the frequencies of 100, 400 and 800 Hz all get an addition of 100 Hz. This sounds very different each time, though the added frequency is the same. Second, the ratio 3/2 (a perfect fifth) is applied to the same frequencies. This always sounds the same, though the frequency displacement is different each time.

EXAMPLE 01B02.csd

```
<CsoundSynthesizer>
<CsOptions>
-odac -m0
</CsOptions>
<CsInstruments>
;example by joachim heintz
sr = 44100
ksmps = 32
nchnls = 2
0dbfs = 1

instr 1
prints  "Playing %d Hertz!\n", p4
asig    oscils  .2, p4, 0
outs    asig, asig
endin

instr 2
prints  "Adding %d Hertz to %d Hertz!\n", p5, p4
asig    oscils  .2, p4+p5, 0
outs    asig, asig
endin

instr 3
prints  "Applying the ratio of %f (adding %d Hertz) to %d Hertz!\n", p5, p4*p5-p4, p4
asig    oscils  .2, p4*p5, 0
outs    asig, asig
endin

</CsInstruments>
<CsScore>
;adding a certain frequency (instr 2)
i 1 0 1 100
i 2 1 1 100 100
i 1 3 1 400
i 2 4 1 400 100
i 1 6 1 800
i 2 7 1 800 100
;applying a certain ratio (instr 3)
i 1 10 1 100
i 3 11 1 100 [3/2]
i 1 13 1 400
i 3 14 1 400 [3/2]
i 1 16 1 800
i 3 17 1 800 [3/2]
</CsScore>
</CsoundSynthesizer>
```

So what of the ratios mentioned above? As some readers will know, the current preferred method of tuning western instruments is based on equal temperament. Essentially this means that all octaves are split into 12 equal intervals. Therefore a semitone has a ratio of $2^{1/12}$, which is approximately 1.059463.

So what about the reference to logarithms in the heading above? As stated previously, logarithms are shorthand for exponents. $2^{1/12} = 1.059463$ can also be written as $\log_2(1.059463) = 1/12$. Therefore musical frequency works on a logarithmic scale.
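This relationship is easy to verify numerically; the following Python lines (not Csound code) confirm both forms:

```python
import math

semitone = 2 ** (1 / 12)    # ratio of one equal-tempered semitone
print(semitone)             # approximately 1.059463
print(math.log2(semitone))  # 1/12, the logarithmic view of the same ratio
print(semitone ** 12)       # stacking 12 semitones gives one octave (close to 2.0)
```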

## MIDI Notes

Csound can easily deal with MIDI notes and comes with functions that will convert MIDI notes to Hertz values and back again. In MIDI speak, A440 is equal to A4. You can think of A4 as being the fourth A from the lowest A we can hear (well, almost hear).

Caution: like many 'standards' there is occasional disagreement about the mapping between frequency and octave number. You may occasionally encounter A440 being described as A3.
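The standard conversion, which Csound implements in opcodes such as cpsmidinn, maps MIDI note 69 to 440 Hz, with one semitone per MIDI number. A minimal Python sketch of the formula:

```python
import math

def midi_to_hz(note):
    """MIDI note number -> frequency in Hz (A4 = note 69 = 440 Hz)."""
    return 440.0 * 2 ** ((note - 69) / 12)

def hz_to_midi(freq):
    """Frequency in Hz -> (possibly fractional) MIDI note number."""
    return 69 + 12 * math.log2(freq / 440.0)

print(midi_to_hz(69))            # 440.0 (A4)
print(round(midi_to_hz(60), 2))  # 261.63 (middle C)
print(hz_to_midi(880.0))         # 81.0 (A5, one octave above A4)
```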

# INTENSITIES

## Real World Intensities and Amplitudes

There are many ways to describe a sound physically. One of the most common is the Sound Intensity Level (SIL). It describes the amount of power passing through a certain surface, so its unit is watts per square meter ($W/m^2$). The range of human hearing is about $10^{-12}\,W/m^2$ at the threshold of hearing to $10^{0}\,W/m^2$ at the threshold of pain. To order this immense range, and to facilitate the measurement of one sound intensity based upon its ratio with another, a logarithmic scale is used. The unit Bel describes the relation of one intensity $I$ to a reference intensity $I_0$ as follows:

$\log_{10}\frac{I}{I_0}$   Sound Intensity Level in Bel

If, for instance, the ratio $\frac{I}{I_0}$ is 10, this is 1 Bel. If the ratio is 100, this is 2 Bel.

For real world sounds, it makes sense to set the reference value $I_0$ to the threshold of hearing, which has been fixed as $10^{-12}\,W/m^2$ at 1000 Hertz. So the range of hearing covers about 12 Bel. Usually 1 Bel is divided into 10 decibels, so the common formula for measuring a sound intensity is:

$10 \cdot \log_{10}\frac{I}{I_0}$   Sound Intensity Level (SIL) in decibels (dB), with $I_0 = 10^{-12}\,W/m^2$

While the sound intensity level is useful in describing how human hearing works, the measurement of sound is more closely related to the sound pressure deviations. Sound waves compress and expand the air particles, thereby increasing and decreasing the localized air pressure. These deviations are measured and transformed by a microphone. So the question arises: what is the relationship between the sound pressure deviations and the sound intensity? The answer is: sound intensity changes $I$ are proportional to the square of the sound pressure changes $P$. As a formula:

$I \propto P^2$   Relation between Sound Intensity and Sound Pressure

Let us take an example to see what this means. The sound pressure at the threshold of hearing can be fixed at $2 \cdot 10^{-5}\,Pa$. This value is the reference value of the Sound Pressure Level (SPL). If we now have a value of $2 \cdot 10^{-4}\,Pa$, the corresponding sound intensity relation can be calculated as:

$\left(\frac{2 \cdot 10^{-4}}{2 \cdot 10^{-5}}\right)^2 = 10^2 = 100$

So, a factor of 10 in the pressure relation yields a factor of 100 in the intensity relation. In general, the dB scale for the pressure $P$ related to the pressure $P_0$ is:

$10 \cdot \log_{10}\left(\frac{P}{P_0}\right)^2 = 2 \cdot 10 \cdot \log_{10}\frac{P}{P_0} = 20 \cdot \log_{10}\frac{P}{P_0}$

Sound Pressure Level (SPL) in decibels (dB), with $P_0 = 2 \cdot 10^{-5}\,Pa$

Working with digital audio basically means working with amplitudes. What a microphone delivers is a sequence of amplitudes, and any audio file is a sequence of amplitudes. What you generate in Csound, and write either to the DAC in realtime or to a sound file, is again nothing but a sequence of amplitudes. As amplitudes are directly related to the sound pressure deviations, all the relations between sound intensity and sound pressure can be transferred to relations between sound intensity and amplitudes:

$I \propto A^2$   Relation between Intensity and Amplitude

$20 \cdot \log_{10}\frac{A}{A_0}$   Decibel (dB) scale of amplitudes, with any amplitude $A$ related to another amplitude $A_0$

If you drive an oscillator with the amplitude 1, and another oscillator with the amplitude 0.5, and you want to know the difference in dB, you calculate:

$20 \cdot \log_{10}\frac{1}{0.5} = 20 \cdot \log_{10} 2 = 20 \cdot 0.30103 = 6.0206\ dB$

So, the most useful thing to keep in mind is: when you double the amplitude, you get +6 dB; when you halve the amplitude, you get -6 dB.
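These relations can be checked with a few lines of Python (plain Python used as a calculator; in Csound itself the equivalent converters are the ampdb and dbamp functions):

```python
import math

def amp_to_db(amp, ref=1.0):
    """Amplitude relative to a reference -> decibels."""
    return 20 * math.log10(amp / ref)

def db_to_amp(db):
    """Decibels -> amplitude factor (the conversion Csound's ampdb performs)."""
    return 10 ** (db / 20)

print(amp_to_db(1.0, 0.5))  # ~6.02 dB: doubling the amplitude adds about 6 dB
print(amp_to_db(0.5, 1.0))  # ~-6.02 dB: halving it subtracts about 6 dB
print(db_to_amp(-6.0206))   # ~0.5: going back from dB to an amplitude factor
```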

## What is 0 dB?

As described in the last section, any dB scale - for intensities, pressures or amplitudes - is just a way to describe a relationship. To have any sort of quantitative measurement you will need to know the reference value referred to as "0 dB". For real world sounds, it makes sense to set this level to the threshold of hearing. This is done, as we saw, by setting the SIL to $10^{-12}\,W/m^2$ and the SPL to $2 \cdot 10^{-5}\,Pa$.

But for working with digital sound in the computer, this reference does not make any sense. What you will hear from the sound you produce in the computer depends on the amplification, the speakers, and so on. It has nothing, per se, to do with the level in your audio editor or in Csound. Nevertheless, there is a rational reference level for the amplitudes: in a digital system, there is a strict limit for the maximum number you can store as amplitude. This maximum possible level is called 0 dB.

Each program connects this maximum possible amplitude with a number. Usually it is '1' which is a good choice, because you know that everything above 1 is clipping, and you have a handy relation for lower values. But actually this value is nothing but a setting, and in Csound you are free to set it to any value you like via the 0dbfs opcode. Usually you should use this statement in the orchestra header:

```0dbfs = 1
```

This means: "Set the level for zero dB as full scale to 1 as reference value." Note that for historical reasons the default value in Csound is not 1 but 32768. So you must have this 0dbfs = 1 statement in your header if you want Csound to use the reference value that virtually all other audio applications use.

## dB Scale Versus Linear Amplitude

Let us now look at some practical consequences of what we have discussed so far. One major point is this: to get smooth transitions between intensity levels, you must not make a simple linear transition of the amplitudes, but a linear transition of their dB equivalent. The following example plays a linear rise of the amplitude from 0 to 1, and then a linear rise of the dB value from -80 to 0 dB, each over 10 seconds.

EXAMPLE 01C01.csd

```<CsoundSynthesizer>
<CsOptions>
-odac
</CsOptions>
<CsInstruments>
;example by joachim heintz
sr = 44100
ksmps = 32
nchnls = 2
0dbfs = 1

instr 1 ;linear amplitude rise
kamp      line    0, p3, 1 ;amp rise 0->1
asig      oscils  1, 1000, 0 ;1000 Hz sine
aout      =       asig * kamp
outs    aout, aout
endin

instr 2 ;linear rise of dB
kdb       line    -80, p3, 0 ;dB rise -80 -> 0
asig      oscils  1, 1000, 0 ;1000 Hz sine
kamp      =       ampdb(kdb) ;transformation db -> amp
aout      =       asig * kamp
outs    aout, aout
endin

</CsInstruments>
<CsScore>
i 1 0 10
i 2 11 10
</CsScore>
</CsoundSynthesizer>
```

You will hear how quickly the sound intensity increases during the first note with the direct amplitude rise, after which it stays nearly constant. During the second note you should hear a very smooth and constant increase of intensity.
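The numbers behind what you hear can be sketched in Python (not Csound code; the -80 dB to 0 dB range matches the example above). Halfway through the fade, the linear amplitude ramp has already reached -6 dB, while the dB-linear ramp is still at -40 dB:

```python
import math

dur = 10.0  # fade duration in seconds
# sample a few time points, starting just above zero to avoid log10(0)
for t in (0.001, 2.5, 5.0, 7.5, 10.0):
    amp_lin = t / dur                    # instr 1: linear amplitude 0 -> 1
    db_of_lin = 20 * math.log10(amp_lin)
    db_ramp = -80 + (t / dur) * 80       # instr 2: linear dB -80 -> 0
    amp_of_db = 10 ** (db_ramp / 20)
    print(f"t={t:6.3f}s  linear amp {amp_lin:.4f} = {db_of_lin:8.2f} dB | "
          f"dB ramp {db_ramp:6.2f} dB = amp {amp_of_db:.5f}")
```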

## RMS Measurement

Sound intensity depends on many factors. One of the most important is the effective mean of the amplitudes in a certain time span. This is called the Root Mean Square (RMS) value. To calculate it, you (1) square the amplitudes of N successive samples, (2) divide the sum of these squares by N to get their mean, and finally (3) take the square root of this mean.

Let's see a simple example, and then have a look at how to obtain the RMS value in Csound. Assuming we have a sine wave which consists of 16 samples, we get these amplitudes:

0, 0.383, 0.707, 0.924, 1, 0.924, 0.707, 0.383, 0, -0.383, -0.707, -0.924, -1, -0.924, -0.707, -0.383

These are the squared amplitudes:

0, 0.146, 0.5, 0.854, 1, 0.854, 0.5, 0.146, 0, 0.146, 0.5, 0.854, 1, 0.854, 0.5, 0.146

The mean of these values is:

$(0+0.146+0.5+0.854+1+0.854+0.5+0.146+0+0.146+0.5+0.854+1+0.854+0.5+0.146)/16=8/16=0.5$

And the resulting RMS value is $\sqrt{0.5} \approx 0.707$.
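The same calculation as a short Python sketch (plain Python, just to verify the three steps):

```python
import math

# one period of a sine wave sampled at 16 points
samples = [math.sin(2 * math.pi * n / 16) for n in range(16)]

squares = [s * s for s in samples]   # step 1: square each sample
mean = sum(squares) / len(squares)   # step 2: mean of the squares -> 0.5
rms = math.sqrt(mean)                # step 3: square root -> ~0.707

print(round(mean, 3), round(rms, 3))
```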

The rms opcode in Csound calculates the RMS power over a certain time span, and smooths the values in time according to the ihp parameter: the higher this value (the default is 10 Hz), the snappier the measurement, and vice versa. This opcode can be used to implement a self-regulating system, in which the rms opcode prevents the system from exploding: each time the RMS value exceeds a certain threshold, the amount of feedback is reduced. This is an example¹:

EXAMPLE 01C02.csd

```<CsoundSynthesizer>
<CsOptions>
-odac
</CsOptions>
<CsInstruments>
;example by Martin Neukom, adapted by Joachim Heintz
sr = 44100
ksmps = 32
nchnls = 2
0dbfs = 1

giSine    ftgen     0, 0, 2^10, 10, 1 ;table with a sine wave

instr 1
a3        init      0
kamp      linseg    0, 1.5, 0.2, 1.5, 0 ;envelope for initial input
asnd      poscil    kamp, 440, giSine ;initial input
if p4 == 1 then ;choose between two sines ...
adel1     poscil    0.0523, 0.023, giSine
adel2     poscil    0.073, 0.023, giSine,.5
else ;or a random movement for the delay lines
adel1     randi     0.05, 0.1, 2
adel2     randi     0.08, 0.2, 2
endif
a0        delayr    1 ;delay line of 1 second
a1        deltapi   adel1 + 0.1 ;first reading
a2        deltapi   adel2 + 0.1 ;second reading
krms      rms       a3 ;rms measurement
delayw    asnd + exp(-krms) * a3 ;feedback depending on rms
a3        reson     -(a1+a2), 3000, 7000, 2 ;calculate a3
aout      linen     a1/3, 1, p3, 1 ;apply fade in and fade out
outs      aout, aout
endin
</CsInstruments>
<CsScore>
i 1 0 60 1 ;two sine movements of delay with feedback
i 1 61 . 2 ;two random movements of delay with feedback
</CsScore>
</CsoundSynthesizer>
```

## Fletcher-Munson Curves

Human hearing covers a range of roughly 20 to 20000 Hz. But within this range, hearing is not equally sensitive. The most sensitive region is around 3000 Hz. Towards the upper or lower limits of this range, you need more intensity to perceive a sound as "equally loud".

These curves of equal loudness are usually called "Fletcher-Munson curves" after the paper by H. Fletcher and W. A. Munson from 1933.

Try the following test. In the first 5 seconds you will hear a tone of 3000 Hz. Adjust the level of your amplifier to the lowest possible point at which you can still hear the tone. Then you will hear a tone whose frequency starts at 20 Hertz and ends at 20000 Hertz, over 20 seconds. Try to move the fader or knob of your amplifier exactly so that you can still hear something, but as softly as possible. The movement of your fader should roughly follow the lowest Fletcher-Munson curve: starting relatively high, going down and down until around 3000 Hertz, and then up again. (As always, this test depends on your speaker hardware. If your speakers do not reproduce the lower frequencies properly, you will not hear anything in the bass region.)

EXAMPLE 01C03.csd

```<CsoundSynthesizer>
<CsOptions>
-odac
</CsOptions>
<CsInstruments>
sr = 44100
ksmps = 32
nchnls = 2
0dbfs = 1

giSine    ftgen     0, 0, 2^10, 10, 1 ;table with a sine wave

instr 1
kfreq     expseg    p4, p3, p5
printk    1, kfreq ;prints the frequencies once a second
asin      poscil    .2, kfreq, giSine
aout      linen     asin, .01, p3, .01
outs      aout, aout
endin
</CsInstruments>
<CsScore>
i 1 0 5 3000 3000
i 1 6 20 20  20000
</CsScore>
</CsoundSynthesizer>
```

It is very important to bear in mind that perceived loudness depends greatly on frequency. Putting out a 30 Hz sine at a certain amplitude is totally different from putting out a 3000 Hz sine at the same amplitude - the latter will sound much louder.

1. cf. Martin Neukom, Signale Systeme Klangsynthese, Zürich 2003, p. 383

# MAKE CSOUND RUN

## Csound and Frontends

The core element of Csound is an audio engine for the Csound language. It has no graphical elements and it is designed to take Csound text files (like ".csd" files) and produce audio, either in realtime, or by writing to a file. It can still be used in this way, but most users nowadays prefer to use Csound via a frontend. A frontend is an application which assists you in writing code and running Csound. Beyond the functions of a simple text editor, a frontend environment will offer colour coded highlighting of language specific keywords and quick access to an integrated help system. A frontend can also expand possibilities by providing tools to build interactive interfaces as well, sometimes, as advanced compositional tools.

In 2009 the Csound developers decided to adopt QuteCsound as the standard frontend to be included with the Csound distribution, so you will already have this frontend if you have installed any of the recent pre-built versions of Csound. Conversely, if you install a frontend on its own, you will require a separate installation of Csound in order for it to function. If you experience any problems with QuteCsound, or simply prefer another frontend design, try WinXound as an alternative.

## How to Download and Install Csound

To get Csound you first need to download the package for your system from the SourceForge page: http://sourceforge.net/projects/csound/files/csound5/

There are many files here, so here are some guidelines to help you choose the appropriate version.

### Windows

Windows installers are the ones ending in .exe. Look for the latest version of Csound, and find a file which should be called something like: Csound5.17-gnu-win32-d.exe. The important thing to note is the final letter of the installer name, which can be "d" or "f". This specifies the computation precision of the Csound engine. Float precision (32-bit float) is marked with "f" and double precision (64-bit float) is marked "d". This is important to bear in mind, as a frontend which works with the "floats" version, will not run if you have the "doubles" version installed. More recent versions of the pre-built Windows installer have only been released in the 'doubles' version.

After you have downloaded the installer, just run it and follow the instructions. When you are finished, you will find a Csound folder in your start menu containing Csound utilities and the CsoundQt (QuteCsound) frontend.

### Mac OS X

The Mac OS X installers are the files ending in .dmg. Look for the latest version of Csound for your particular system, for example a Universal binary for 10.7 will be called something like: csound5.17.3-OSX10.7-Universal.dmg. When you double click the downloaded file, you will have a disk image on your desktop, with the Csound installer, CsoundQt and a readme file. Double-click the installer and follow the instructions. Csound and the basic Csound utilities will be installed. To install the CsoundQt frontend, you only need to move it to your Applications folder.

### Linux and others

Csound is available from the official package repositories for many distributions like Debian, Ubuntu, Fedora, Archlinux and Gentoo. If there are no binary packages for your platform, or you need a more recent version, you can get the source package from the SourceForge page and build from source. Some build instructions can be found in the chapter BUILDING CSOUND in the appendix, and in the Csound Wiki on SourceForge. Detailed information can also be found in the Building Csound Manual Page.

Note that the Csound repository has moved from CVS to Git. After installing Git, you can use this command to clone the Csound5 repository, if you would like access to the latest (and possibly unstable) sources:

```git clone git://csound.git.sourceforge.net/gitroot/csound/csound5
```

### Android and iOS

Recently Csound has been ported to Android and iOS. At the time of writing, it is too early to give a description here. If you are interested, have a look at http://sourceforge.net/projects/csound/files/csound5 or at the paper by Victor Lazzarini and Steven Yi from the 2012 Linux Audio Conference.

## Install Problems?

If, for any reason, you can't find the CsoundQt (formerly QuteCsound) frontend on your system after install, or if you want to install the most recent version of CsoundQt, or if you prefer another frontend altogether: see the CSOUND FRONTENDS section of this manual for further information. If you have any install problems, consider joining the Csound Mailing List to report your issues, or write a mail to one of the maintainers (see ON THIS RELEASE).

## The Csound Reference Manual

The Csound Reference Manual is an indispensable companion to Csound. It is available in various formats from the same place as the Csound installers, and it is installed with the packages for OS X and Windows. It can also be browsed online at The Csound Manual Section at Csounds.com. Many frontends will provide you with direct and easy access to it.

## How to Execute a Simple Example

### Using CsoundQt

Run CsoundQt. Go into the CsoundQt menubar and choose: Examples->Getting started...-> Basics-> HelloWorld

You will see a very basic Csound file (.csd) with a lot of comments in green.

Click on the "RUN" icon in the CsoundQt control bar to start the realtime Csound engine. You should hear a 440 Hz sine wave.

You can also run the Csound engine in the terminal from within QuteCsound. Just click on "Run in Term". A console will pop up and Csound will be executed as an independent process. The result should be the same - the 440 Hz "beep".

### Using the Terminal / Console

1. Save the following code in any plain text editor as HelloWorld.csd.

EXAMPLE 02A01.csd

```<CsoundSynthesizer>
<CsOptions>
-odac
</CsOptions>
<CsInstruments>
;Example by Alex Hofmann
instr 1
aSin      oscils    0dbfs/4, 440, 0
out       aSin
endin
</CsInstruments>
<CsScore>
i 1 0 1
</CsScore>
</CsoundSynthesizer>
```

2. Open the Terminal / Prompt / Console

3. Type: csound /full/path/HelloWorld.csd

where /full/path/HelloWorld.csd is the complete path to your file. You can also execute this file by just typing csound, then dragging the file into the terminal window, and hitting return.

You should hear a 440 Hz tone.

# CSOUND SYNTAX

## Orchestra and Score

In Csound, you must define "instruments", which are units which "do things", for instance playing a sine wave. These instruments must be called or "turned on" by a "score". The Csound "score" is a list of events which describe how the instruments are to be played in time. It can be thought of as a timeline in text.

A Csound instrument is contained within an Instrument Block, which starts with the keyword instr and ends with the keyword endin. All instruments are given a number (or a name) to identify them.

```instr 1
... instrument instructions come here...
endin
```

Score events in Csound are individual text lines, which can turn on instruments for a certain time. For example, to turn on instrument 1, at time 0, for 2 seconds you will use:

```i 1 0 2
```

## The Csound Document Structure

A Csound document is structured into three main sections:

• CsOptions: Contains the configuration options for Csound. For example using "-o dac" in this section will make Csound run in real-time instead of writing a sound file.
• CsInstruments: Contains the instrument definitions and optionally some global settings and definitions like sample rate, etc.
• CsScore: Contains the score events which trigger the instruments.

Each of these sections is opened with a <xyz> tag and closed with a </xyz> tag. Every Csound file starts with the <CsoundSynthesizer> tag, and ends with </CsoundSynthesizer>. Only the text in-between will be used by Csound.

EXAMPLE 02B01.csd

```<CsoundSynthesizer>; START OF A CSOUND FILE

<CsOptions> ; CSOUND CONFIGURATION
-odac
</CsOptions>

<CsInstruments> ; INSTRUMENT DEFINITIONS GO HERE

; Set the audio sample rate to 44100 Hz
sr = 44100

instr 1
; a 440 Hz Sine Wave
aSin      oscils    0dbfs/4, 440, 0
out       aSin
endin
</CsInstruments>

<CsScore> ; SCORE EVENTS GO HERE
i 1 0 1
</CsScore>

</CsoundSynthesizer> ; END OF THE CSOUND FILE
; Anything after is ignored by Csound
```

Comments, which are lines of text that Csound will ignore, are started with the ";" character. Multi-line comments can be made by encasing them between "/*" and  "*/".

## Opcodes

"Opcodes" or "Unit generators" are the basic building blocks of Csound. Opcodes can do many things like produce oscillating signals, filter signals, perform mathematical functions or even turn on and off instruments. Opcodes, depending on their function, will take inputs and outputs. Each input or output is called, in programming terms, an "argument". Opcodes always take input arguments on the right and output their results on the left, like this:

```output    OPCODE    input1, input2, input3, .., inputN
```

For example the oscils opcode has three inputs: amplitude, frequency and phase, and produces a sine wave signal:

```aSin      oscils    0dbfs/4, 440, 0
```

In this case, a 440 Hertz oscillation starting at phase 0 radians, with an amplitude of 0dbfs/4 (a quarter of 0 dB as full scale) will be created and its output will be stored in a container called aSin. The order of the arguments is important: the first input to oscils will always be amplitude, the second, frequency and the third, phase.

Many opcodes include optional input arguments and occasionally optional output arguments. These will always be placed after the essential arguments. In the Csound Manual documentation they are indicated using square brackets "[]". If optional input arguments are omitted they are replaced with the default values indicated in the Csound Manual. The addition of optional output arguments normally initiates a different mode of that opcode: for example, a stereo as opposed to mono version of the opcode.

## Variables

A "variable" is a named container. It is a place to store things like signals or values from where they can be recalled by using their name. In Csound there are various types of variables. The easiest way to deal with variables when getting to know Csound is to imagine them as cables.

If you want to patch this together: Oscillator->Filter->Output,

you need two cables, one going out from the oscillator into the filter and one from the filter to the output. The cables carry audio signals, which are variables beginning with the letter "a".

```aSource    buzz       0.8, 200, 10, 1
aFiltered  moogladder aSource, 400, 0.8
out        aFiltered
```

In the example above, the buzz opcode produces a complex waveform as signal aSource. This signal is fed into the moogladder opcode, which in turn produces the signal aFiltered. The out opcode takes this signal, and sends it to the output whether that be to the speakers or to a rendered file.

Other common variable types are "k" variables which store control signals, which are updated less frequently than audio signals, and "i" variables which are constants within each instrument note.

You can find more information about variable types here in this manual, or here in the Csound Journal.

## Using The Manual

The Csound Reference Manual is a comprehensive source regarding Csound's syntax and opcodes. All opcodes have their own manual entry describing their syntax and behavior, and the manual contains a detailed reference on the Csound language and options.

In CsoundQt you can find the Csound Manual in the Help menu. You can quickly jump to a particular opcode entry in the manual by placing the cursor on the opcode and pressing Shift+F1. WinXound and Blue also provide easy access to the manual.

# CONFIGURING MIDI

Csound can receive MIDI events (like MIDI notes and MIDI control changes) from an external MIDI interface or from another program via a virtual MIDI cable. This information can be used to control any aspect of synthesis or performance.

Csound receives MIDI data through MIDI Realtime Modules. These are special Csound plugins which enable MIDI input using different methods according to platform. They are enabled using the -+rtmidi command line flag in the <CsOptions> section of your .csd file, but can also be set interactively on some front-ends via the configure dialog setups.

There is the universal "portmidi" module. PortMidi is a cross-platform module for MIDI I/O and should be available on all platforms. To enable the "portmidi" module, you can use the flag:

```-+rtmidi=portmidi
```

After selecting the RT MIDI module from a front-end or the command line, you need to select the MIDI devices for input and output. These are set using the flags -M and -Q respectively followed by the number of the interface. You can usually use:

```-M999
```

This will produce a performance error, along with a listing of the available interfaces.

For the PortMidi module (and others, like ALSA), you can specify no number to use the default MIDI interface, or the 'a' character to use all devices. This will even work when no MIDI devices are present:

```-Ma
```

So if you want MIDI input using the portmidi module, using device 2 for input and device 1 for output, your <CsOptions> section should contain:

```-+rtmidi=portmidi -M2 -Q1
```

There is a special "virtual" RT MIDI module which enables MIDI input from a virtual keyboard. To enable it, you can use:

``` -+rtmidi=virtual -M0
```

## Platform Specific Modules

If the "portmidi" module is not working properly for some reason, you can try other platform specific modules.

### Linux

On Linux systems, you might also have an "alsa" module to use the alsa raw MIDI interface. This is different from the more common alsa sequencer interface and will typically require the snd-virmidi module to be loaded.

### OS X

On OS X you may have a "coremidi" module available.

### Windows

On Windows, you may have a "winmme" MIDI module.

## MIDI I/O in CsoundQt

As with Audio I/O, you can set the MIDI preferences in the configuration dialog. In it you will find a selection box for the RT MIDI module, and text boxes for MIDI input and output devices.

## How to Use a MIDI Keyboard

Once you've set up the hardware, you are ready to receive MIDI information and interpret it in Csound. By default, when a MIDI note is received, it turns on the Csound instrument corresponding to its channel number, so if a note is received on channel 3, it will turn on instrument 3, if it is received on channel 10, it will turn on instrument 10 and so on.

If you want to change this routing of MIDI channels to instruments, you can use the massign opcode. For instance, this statement routes MIDI channel 1 to instrument 10:

``` massign 1, 10
```

In the following example, a simple instrument which plays a sine wave is defined in instrument 1. There are no score note events, so no sound will be produced unless a MIDI note is received on channel 1.

EXAMPLE 02C01.csd

```<CsoundSynthesizer>
<CsOptions>
-+rtmidi=portmidi -Ma -odac
</CsOptions>
<CsInstruments>
;Example by Andrés Cabrera

sr = 44100
ksmps = 32
nchnls = 2
0dbfs = 1

massign   0, 1 ;assign all MIDI channels to instrument 1
giSine  ftgen     0,0,2^10,10,1 ;a function table with a sine wave

instr 1
iCps    cpsmidi   ;get the frequency from the key pressed
iAmp    ampmidi   0dbfs * 0.3 ;get the amplitude
aOut    poscil    iAmp, iCps, giSine ;generate a sine tone
outs      aOut, aOut ;write it to the output
endin

</CsInstruments>
<CsScore>
e 3600
</CsScore>
</CsoundSynthesizer>
```

Note that Csound has unlimited polyphony in this way: each key pressed starts a new instance of instrument 1, and you can have any number of instrument instances at the same time.

## How to Use a MIDI Controller

To receive MIDI controller events, opcodes like ctrl7 can be used. In the following example instrument 1 is turned on for 60 seconds. It will receive controller #1 (the modulation wheel) on channel 1 and convert the MIDI range (0-127) to a range between 220 and 440. This value is used to set the frequency of a simple sine oscillator.

EXAMPLE 02C02.csd

```<CsoundSynthesizer>
<CsOptions>
-+rtmidi=virtual -M1 -odac
</CsOptions>
<CsInstruments>
;Example by Andrés Cabrera

sr = 44100
ksmps = 32
nchnls = 2
0dbfs = 1

giSine ftgen 0,0,2^10,10,1

instr 1
; --- receive controller number 1 on channel 1 and scale from 220 to 440
kFreq ctrl7  1, 1, 220, 440
; --- use this value as varying frequency for a sine wave
aOut  poscil 0.2, kFreq, giSine
outs   aOut, aOut
endin
</CsInstruments>
<CsScore>
i 1 0 60
e
</CsScore>
</CsoundSynthesizer>
```
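The scaling that ctrl7 performs here is a plain linear mapping from the 7-bit MIDI range (0-127) onto the requested minimum and maximum. A Python sketch of that mapping (the function name is illustrative, not a Csound identifier):

```python
def scale_ctrl7(value, lo, hi):
    """Map a 7-bit MIDI controller value (0-127) linearly onto [lo, hi]."""
    return lo + (hi - lo) * value / 127.0

print(scale_ctrl7(0, 220, 440))             # 220.0: wheel fully down
print(scale_ctrl7(127, 220, 440))           # 440.0: wheel fully up
print(round(scale_ctrl7(64, 220, 440), 1))  # 330.9: roughly the middle
```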

## Other Types of MIDI Data

Csound can receive other types of MIDI data, like pitch bend and aftertouch, through the use of specific opcodes. Generic MIDI data can be received using the midiin opcode. The example below prints the data received via MIDI to the console.

EXAMPLE 02C03.csd

```<CsoundSynthesizer>
<CsOptions>
-+rtmidi=portmidi -Ma -odac
</CsOptions>
<CsInstruments>
;Example by Andrés Cabrera

sr = 44100
ksmps = 32
nchnls = 2
0dbfs = 1

instr 1
kStatus, kChan, kData1, kData2 midiin

if kStatus != 0 then ;print if any new MIDI message has been received
printk 0, kStatus
printk 0, kChan
printk 0, kData1
printk 0, kData2
endif

endin

</CsInstruments>
<CsScore>
i1 0 3600
e
</CsScore>
</CsoundSynthesizer>
```

# LIVE AUDIO

## Configuring Audio & Tuning Audio Performance

### Selecting Audio Devices and Drivers

Csound relates to the various inputs and outputs of sound devices installed on your computer as a numbered list. If you are using a multichannel interface then each stereo pair will most likely be assigned a different number. If you wish to send or receive audio to or from a specific audio connection you will need to know the number by which Csound knows it. If you are not sure of what that is you can trick Csound into providing you with a list of available devices by trying to run Csound using an obviously out of range device number, like this:

EXAMPLE 02D01.csd

```<CsoundSynthesizer>
<CsOptions>
-iadc999 -odac999
</CsOptions>
<CsInstruments>
;Example by Andrés Cabrera
instr 1
endin
</CsInstruments>
<CsScore>
e
</CsScore>
</CsoundSynthesizer>
```

The input and output devices will be listed separately. Specify your input device with the -iadc flag followed by the number of your input device, and your output device with the -odac flag followed by its number. For instance, if the device you want to use appears as number 2 in the input list and as number 3 in the output list, you may include something like

``` -iadc2 -odac3
```

in the <CsOptions> section of your .csd file.

The realtime (RT) audio module can be set with the -+rtaudio flag. If you do not use this flag, the PortAudio driver will be used. Other possible drivers are jack and alsa (Linux), mme (Windows) or CoreAudio (Mac). So, this sets your audio driver to mme instead of PortAudio:

```-+rtaudio=mme
```

### Tuning Performance and Latency

Live performance and latency depend mainly on the sizes of the software and the hardware buffers. They can be set in the <CsOptions> using the -B flag for the hardware buffer and the -b flag for the software buffer. For instance, this statement sets the hardware buffer size to 512 samples and the software buffer size to 128 samples:

```-B512 -b128
```

The other factor which affects Csound's live performance is the ksmps value, which is set in the header of the <CsInstruments> section. This value defines how many samples are processed in every Csound control cycle.

Try your realtime performance with -B512, -b128 and ksmps=32. With a software buffer of 128 samples, a hardware buffer of 512 and a sample rate of 44100, you will have around 12 ms latency, which is usable for live keyboard playing. If you have problems with either the latency or the performance, tweak the values as described here.
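The relation between buffer size, sample rate and latency can be checked with a quick calculation. This is just a sketch: the figures are rough estimates, and the actual latency also depends on the driver and hardware.

```python
# Rough latency estimate: the time covered by one buffer of samples.
# (A sketch; real latency also depends on driver and hardware.)

def buffer_latency_ms(buffer_samples, sample_rate):
    """Milliseconds of audio contained in one buffer."""
    return buffer_samples / sample_rate * 1000

sr = 44100
print(buffer_latency_ms(128, sr))  # software buffer (-b128): ~2.9 ms
print(buffer_latency_ms(512, sr))  # hardware buffer (-B512): ~11.6 ms
```

The hardware buffer dominates here: 512 samples at 44100 Hz is roughly the 12 ms latency quoted above.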

### CsoundQt

To define the audio hardware used for realtime performance, open the configuration dialog. In the "Run" Tab, you can choose your audio interface, and the preferred driver. You can select input and output devices from a list if you press the buttons to the right of the text boxes for input and output names. Software and hardware buffer sizes can be set at the top of this dialogue box.

## Csound Can Produce Extreme Dynamic Range!

Csound can produce extreme dynamic range, so keep an eye on the level you are sending to your output. The number which describes the level of 0 dB can be set in Csound by the 0dbfs assignment in the <CsInstruments> header. There is no limitation: if you set 0dbfs = 1 and send a value of 32000, this can damage your ears and speakers!

## Using Live Audio Input and Output

To process audio from an external source (for example a microphone), use the inch opcode to access any of the inputs of your audio input device. For the output, outch gives you all necessary flexibility. The following example takes a live audio input and transforms its sound using ring modulation. The Csound console should print the input amplitude level five times per second.

EXAMPLE 02D02.csd

```<CsoundSynthesizer>
<CsOptions>
;CHANGE YOUR INPUT AND OUTPUT DEVICE NUMBER HERE IF NECESSARY!
-iadc0 -odac0 -B512 -b128
</CsOptions>
<CsInstruments>
;Example by Joachim Heintz
sr = 44100 ;set sample rate to 44100 Hz
ksmps = 32 ;number of samples per control cycle
nchnls = 2 ;use two audio channels
0dbfs = 1 ;set maximum level as 1

giSine    ftgen     0, 0, 2^10, 10, 1 ;table with sine wave

instr 1
aIn       inch      1   ;take input from channel 1
kInLev    downsamp  aIn ;convert audio input to control signal
printk    .2, abs(kInLev)
;make modulator frequency oscillate 200 to 1000 Hz
kModFreq  poscil    400, 1/2, giSine
kModFreq  =         kModFreq+600
aMod      poscil    1, kModFreq, giSine ;modulator signal
aRM       =         aIn * aMod ;ring modulation
outch     1, aRM, 2, aRM ;output to channel 1 and 2
endin
</CsInstruments>
<CsScore>
i 1 0 3600
</CsScore>
</CsoundSynthesizer>
```

Live audio is frequently used together with live controllers such as widgets or MIDI. In CsoundQt, you can find several examples in Examples -> Getting Started -> Realtime Interaction.

# RENDERING TO FILE

## When to Render to File

Csound can also render audio straight to a sound file stored on your hard drive instead of sending it as live audio to the audio hardware. This makes it possible to hear the results of very complex processes which your computer cannot produce in realtime.

Csound can render to formats like wav, aiff or ogg (and other less popular ones), but not to mp3 due to its patent and licensing problems.

## Rendering to File

Save the following code as Render.csd:

EXAMPLE 02E01.csd

```<CsoundSynthesizer>
<CsOptions>
-o Render.wav
</CsOptions>
<CsInstruments>
;Example by Alex Hofmann
instr 1
aSin      oscils    0dbfs/4, 440, 0
out       aSin
endin
</CsInstruments>
<CsScore>
i 1 0 1
e
</CsScore>
</CsoundSynthesizer>
```

Open the Terminal / Prompt / Console and type:

```csound /path/to/Render.csd
```

Now, because you changed the -o flag in the <CsOptions> from "-o dac" to "-o filename", the audio output is no longer written in realtime to your audio device, but instead to a file. The file will be rendered to the default directory (usually the user home directory). This file can be opened and played in any audio player or editor, e.g. Audacity. (By default, csound is a non-realtime program. So if no command line options are given, it will always render the csd to a file called test.wav, and you will hear nothing in realtime.)

The -o flag can also be used to write the output file to a certain directory. Something like this for Windows ...

```<CsOptions>
-o c:/music/samples/Render.wav
</CsOptions>
```

... and this for Linux or Mac OSX:

```<CsOptions>
-o /Users/JSB/organ/tatata.wav
</CsOptions>
```

### Rendering Options

The internal rendering of audio data in Csound is done with 32-bit floating point numbers (or even with 64-bit numbers in the "double" version). Depending on your needs, you should choose the precision of your rendered output file:

• If you want to render 32-bit floats, use the option flag -f.
• If you want to render 24-bit, use the flag -3.
• If you want to render 16-bit, use the flag -s (or nothing, because this is also the default in Csound).

To make sure that the header of your soundfile is written correctly, you should use the -W flag for a WAV file, or the -A flag for an AIFF file. So these options will render the file "Wow.wav" as a WAV file with 24-bit precision:

```<CsOptions>
-o Wow.wav -W -3
</CsOptions>
```

### Realtime and Render-To-File at the Same Time

Sometimes you may want realtime output and file rendering to disk at the same time, for instance to record your live performance. This can be achieved with the fout opcode. You just have to specify your output file name. File type and format are given by a number; for instance 18 specifies "wav 24 bit" (see the manual page for more information). The following example creates a random frequency and panning movement of a sine wave, and writes it to the file "live_record.wav" (in the same directory as your .csd file):

EXAMPLE 02E02.csd

```<CsoundSynthesizer>
<CsOptions>
-odac
</CsOptions>
<CsInstruments>
;Example by Joachim Heintz
sr = 44100
ksmps = 32
nchnls = 2
0dbfs = 1

seed      0 ;each time different seed for random
giSine    ftgen     0, 0, 2^10, 10, 1 ;a sine wave

instr 1
kFreq     randomi   400, 800, 1 ;random frequency
aSig      poscil    .2, kFreq, giSine ;sine with this frequency
kPan      randomi   0, 1, 1 ;random panning
aL, aR    pan2      aSig, kPan ;stereo output signal
outs      aL, aR ;live output
fout      "live_record.wav", 18, aL, aR ;write to soundfile
endin

</CsInstruments>
<CsScore>
i 1 0 10
e
</CsScore>
</CsoundSynthesizer>
```

### CsoundQt

All the options which are described in this chapter can be handled very easily in CsoundQt:

• Rendering to file is simply done by clicking the "Render" button, or choosing "Control->Render to File" in the Menu.
• To set file-destination and file-type, you can make your own settings in "CsoundQt Configuration" under the tab "Run -> File (offline render)". The default is a 16-Bit .wav-file.
• To record a live performance, just click the "Record" button. You will find a file with the same name as your .csd file, plus a number appended for each recording, in the same folder as your .csd file.

# INITIALIZATION AND PERFORMANCE PASS

## What's The Difference

A Csound instrument is defined in the <CsInstruments> section of a .csd file. An instrument definition starts with the keyword instr (followed by a number or name to identify the instrument), and ends with the line endin. Each instrument can be called by a score event which starts with the character "i". For instance, this score line

```i 1 0 3
```

calls instrument 1, starting at time 0, for 3 seconds. It is very important to understand that such a call consists of two different stages: the initialization and the performance pass.

First, Csound initializes all the variables which begin with an i or a gi. This initialization pass is done just once.

After this, the actual performance begins. During this performance, Csound calculates all the time-varying values in the orchestra again and again. This is called the performance pass, and each of these calculations is called a control cycle (also abbreviated as k-cycle or k-loop). The duration of each control cycle depends on the ksmps constant in the orchestra header. If ksmps=10 (which is the default), each control cycle consists of 10 samples. If your sample rate is 44100, with ksmps=10 you will have 4410 control cycles per second (kr=4410), and each of them has a duration of 1/4410 = 0.000227 seconds. On each control cycle, all the variables starting with k, gk, a and ga are updated (see the next chapter about variables for more explanations).
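The numbers above follow from simple division. This small sketch in Python shows the relation between sr, ksmps, the control rate (kr) and the control-cycle duration:

```python
# Relation between sample rate, ksmps, control rate (kr)
# and control-cycle duration.
sr = 44100   # sample rate
ksmps = 10   # samples per control cycle (the old Csound default)

kr = sr / ksmps      # control cycles per second -> 4410.0
cycle_dur = 1 / kr   # duration of one control cycle in seconds -> ~0.000227

print(kr, cycle_dur)
```

Changing ksmps changes both values in opposite directions: a larger ksmps means fewer, longer control cycles.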

This is an example instrument, containing i-, k- and a-variables:

EXAMPLE 03A01.csd

```<CsoundSynthesizer>
<CsOptions>
-o dac
</CsOptions>
<CsInstruments>
;Example by Joachim Heintz
sr = 44100
ksmps = 441
nchnls = 2
0dbfs = 1
instr 1
iAmp      =       p4 ;amplitude taken from the 4th parameter of the score line
iFreq     =       p5 ;frequency taken from the 5th parameter
; --- move from 0 to 1 in the duration of this instrument call (p3)
kPan      line      0, p3, 1
aNote     oscils  iAmp, iFreq, 0 ;create an audio signal
aL, aR    pan2    aNote, kPan ;let the signal move from left to right
outs    aL, aR ;write it to the output
endin
</CsInstruments>
<CsScore>
i 1 0 3 0.2 443
</CsScore>
</CsoundSynthesizer>
```

As ksmps=441, each control cycle is 0.01 seconds long (441/44100). So this is what happens when the instrument call is performed: first, in the initialization pass, iAmp and iFreq receive their values from the score line; then, during 300 control cycles (3 seconds at 0.01 seconds each), kPan moves gradually from 0 to 1, and in each cycle a new vector of 441 samples is calculated for aNote, aL and aR.

Here is another simple example which shows the internal loop at each k-cycle. As we print out the value at each control cycle, ksmps is very high here, so that each k-pass takes 0.1 seconds. The init opcode can be used to set a k-variable to a certain value first (at the init-pass), otherwise it will have the default value of zero until it is assigned something else during the first k-cycle.

EXAMPLE 03A02.csd

```<CsoundSynthesizer>
<CsInstruments>
;Example by Joachim Heintz
sr = 44100
ksmps = 4410

instr 1
kcount    init      0; set kcount to 0 first
kcount    =         kcount + 1; increase at each k-pass
printk    0, kcount; print the value
endin

</CsInstruments>
<CsScore>
i 1 0 1
</CsScore>
</CsoundSynthesizer>
```

Your output should contain the lines:

i   1 time     0.10000:     1.00000
i   1 time     0.20000:     2.00000
i   1 time     0.30000:     3.00000
i   1 time     0.40000:     4.00000
i   1 time     0.50000:     5.00000
i   1 time     0.60000:     6.00000
i   1 time     0.70000:     7.00000
i   1 time     0.80000:     8.00000
i   1 time     0.90000:     9.00000
i   1 time     1.00000:    10.00000

Try changing the ksmps value from 4410 to 44100 and to 2205 and observe the difference.

## Reinitialization

If you try the example above with i-variables, you will have no success, because the i-variable is calculated just once:

EXAMPLE 03A03.csd

```<CsoundSynthesizer>
<CsInstruments>
;Example by Joachim Heintz
sr = 44100
ksmps = 4410

instr 1
icount    init      0          ;set icount to 0 first
icount    =         icount + 1 ;increase
print     icount     ;print the value
endin

</CsInstruments>
<CsScore>
i 1 0 1
</CsScore>
</CsoundSynthesizer>
```

The printout is:

instr 1:  icount = 1.000

Nevertheless it is possible to refresh even an i-rate variable in Csound. This is done with the reinit opcode. You must mark a section with a label (any name followed by a colon). The reinit statement then causes this section to be initialized again, refreshing the i-variable. Use rireturn to end the reinit section.

EXAMPLE 03A04.csd

```<CsoundSynthesizer>
<CsInstruments>
;Example by Joachim Heintz
sr = 44100
ksmps = 4410

instr 1
icount    init      0          ; set icount to 0 first
new:
icount    =         icount + 1 ; increase
print     icount     ; print the value
reinit    new        ; reinit the section each k-pass
rireturn
endin

</CsInstruments>
<CsScore>
i 1 0 1
</CsScore>
</CsoundSynthesizer>
```

The printout is now:

instr 1:  icount = 1.000
instr 1:  icount = 2.000
instr 1:  icount = 3.000
instr 1:  icount = 4.000
instr 1:  icount = 5.000
instr 1:  icount = 6.000
instr 1:  icount = 7.000
instr 1:  icount = 8.000
instr 1:  icount = 9.000
instr 1:  icount = 10.000
instr 1:  icount = 11.000

## Order Of Calculation

Sometimes it is very important to observe the order in which the instruments of a Csound orchestra are evaluated. This order is given by the instrument numbers. So, if you want instrument 10 to use, during the same performance pass, a value generated by another instrument, that other instrument must not have the number 11 or higher. In the following example, instrument 10 first uses a value from instrument 1, then a value from instrument 100.

EXAMPLE 03A05.csd

```<CsoundSynthesizer>
<CsInstruments>
;Example by Joachim Heintz
sr = 44100
ksmps = 4410

instr 1
gkcount   init      0 ;set gkcount to 0 first
gkcount   =         gkcount + 1 ;increase
endin

instr 10
printk    0, gkcount ;print the value
endin

instr 100
gkcount   init      0 ;set gkcount to 0 first
gkcount   =         gkcount + 1 ;increase
endin

</CsInstruments>
<CsScore>
;first i1 and i10
i 1 0 1
i 10 0 1
;then i100 and i10
i 100 1 1
i 10 1 1
</CsScore>
</CsoundSynthesizer>
```

The output shows the difference:

new alloc for instr 1:
new alloc for instr 10:
i  10 time     0.10000:     1.00000
i  10 time     0.20000:     2.00000
i  10 time     0.30000:     3.00000
i  10 time     0.40000:     4.00000
i  10 time     0.50000:     5.00000
i  10 time     0.60000:     6.00000
i  10 time     0.70000:     7.00000
i  10 time     0.80000:     8.00000
i  10 time     0.90000:     9.00000
i  10 time     1.00000:    10.00000
B  0.000 ..  1.000 T  1.000 TT  1.000 M:      0.0
new alloc for instr 100:
i  10 time     1.10000:     0.00000
i  10 time     1.20000:     1.00000
i  10 time     1.30000:     2.00000
i  10 time     1.40000:     3.00000
i  10 time     1.50000:     4.00000
i  10 time     1.60000:     5.00000
i  10 time     1.70000:     6.00000
i  10 time     1.80000:     7.00000
i  10 time     1.90000:     8.00000
i  10 time     2.00000:     9.00000
B  1.000 ..  2.000 T  2.000 TT  2.000 M:      0.0

## About "i-time" And "k-rate" Opcodes

It is often confusing for beginners that there are some opcodes which only work at "i-time" or "i-rate", and others which only work at "k-rate" or "k-time". For instance, if users want to print the value of a variable, they think: "OK - print it out." But Csound replies: "Please tell me first whether you want to print an i- or a k-variable" (see the following section about the variable types).

For instance, the print opcode just prints variables which are updated at each initialization pass ("i-time" or "i-rate"). If you want to print a variable which is updated at each control cycle ("k-rate" or "k-time"), you need its counterpart printk. (As the performance pass is usually updated some thousand times per second, printk has an additional parameter telling Csound how often you want to print out the k-values.)

So, some opcodes are just for i-rate variables, like filelen or ftgen. Others are just for k-rate variables like metro or max_k. Many opcodes have variants for either i-rate-variables or k-rate-variables, like printf_i and printf, sprintf and sprintfk, strindex and strindexk.

Most of the Csound opcodes are able to work either at i-time or at k-time or at audio-rate, but you have to think carefully about what you need, as the behaviour will be very different depending on whether you choose the i-, k- or a-variant of an opcode. For example, the random opcode can work at all three rates:

```ires      random    imin, imax : works at "i-time"
kres      random    kmin, kmax : works at "k-rate"
ares      random    kmin, kmax : works at "audio-rate"
```

If you use the i-rate random generator, you will get one value for each note. For instance, if you want to have a different pitch for each note you are generating, you will use this one.

If you use the k-rate random generator, you will get one new value on every control cycle. If your sample rate is 44100 and your ksmps=10, you will get 4410 new values per second! If you take this as the pitch value for a note, you will hear nothing but noisy jumping. If you want a moving pitch, you can use the randomi variant of the k-rate random generator, which lets you reduce the number of new values per second and interpolates between them.

If you use the a-rate random generator, you will get as many new values per second as your sample rate. If you use it in the range of your 0 dB amplitude, you produce white noise.

EXAMPLE 03A06.csd

```<CsoundSynthesizer>
<CsOptions>
-odac
</CsOptions>
<CsInstruments>
;Example by Joachim Heintz
sr = 44100
ksmps = 32
0dbfs = 1
nchnls = 2

seed      0 ;each time different seed
giSine    ftgen     0, 0, 2^10, 10, 1 ;sine table

instr 1 ;i-rate random
iPch      random    300, 600
aAmp      linseg    .5, p3, 0
aSine     poscil    aAmp, iPch, giSine
outs      aSine, aSine
endin

instr 2 ;k-rate random: noisy
kPch      random    300, 600
aAmp      linseg    .5, p3, 0
aSine     poscil    aAmp, kPch, giSine
outs      aSine, aSine
endin

instr 3 ;k-rate random with interpolation: sliding pitch
kPch      randomi   300, 600, 3
aAmp      linseg    .5, p3, 0
aSine     poscil    aAmp, kPch, giSine
outs      aSine, aSine
endin

instr 4 ;a-rate random: white noise
aNoise    random    -.1, .1
outs      aNoise, aNoise
endin

</CsInstruments>
<CsScore>
i 1 0   .5
i 1 .25 .5
i 1 .5  .5
i 1 .75 .5
i 2 2   1
i 3 4   2
i 3 5   2
i 3 6   2
i 4 9   1
</CsScore>
</CsoundSynthesizer>
```

## Timelessness And Tick Size In Csound

In a way it is confusing to speak of "i-time". For Csound, "time" actually begins with the first performance pass. The initialization pass happens at "time zero". Regardless of how much human time or CPU time is needed for the initialization pass, the Csound clock does not move at all. This is the reason why you can use any i-time opcode with a zero duration (p3) in the score:

EXAMPLE 03A07.csd

```<CsoundSynthesizer>
<CsInstruments>
;Example by Joachim Heintz
instr 1
prints "%nHello Eternity!%n%n"
endin
</CsInstruments>
<CsScore>
i 1 0 0 ;let instrument 1 play for zero seconds ...
</CsScore>
</CsoundSynthesizer>
```

Csound's clock is the control cycle. The number of samples in one control cycle - given by the ksmps value - is the smallest possible "tick" in Csound at k-rate. If your sample rate is 44100, and you have 4410 samples in one control cycle (ksmps=4410), you will not be able to start a k-event more often than every 1/10 second, because for Csound there is no k-time "between" two control cycles. Try the following example with larger and smaller ksmps values:

EXAMPLE 03A08.csd

```<CsoundSynthesizer>
<CsOptions>
</CsOptions>
<CsInstruments>
;Example by Joachim Heintz
sr = 44100
ksmps = 4410; try 44100 or 2205 instead

instr 1; prints the time once in each control cycle
kTimek   timek
kTimes   times
printks    "Number of control cycles = %d%n", 0, kTimek
printks    "Time = %f%n%n", 0, kTimes
endin
</CsInstruments>
<CsScore>
i 1 0 10
</CsScore>
</CsoundSynthesizer>
```

Consider the typical ksmps size of 32. At a sample rate of 44100, a single tick will last less than a millisecond (32/44100 ≈ 0.73 ms). This should be sufficient in most situations. If you need a finer time resolution, just decrease the ksmps value. The cost of a smaller tick size is reduced computational efficiency. So your choice depends on the situation, and usually a ksmps of 32 represents a good tradeoff.

Of course the precision of writing samples (at a-rate) is in no way affected by the size of the internal k-ticks. Samples are indeed written "in between" control cycles, because audio signals are vectors. Still, it can be necessary to use a-rate variables instead of k-rate variables in certain situations. In the following example, the ksmps value is rather high (128). If you use a k-rate variable for a fast-moving envelope, you will hear a certain roughness (instrument 1), sometimes referred to as 'zipper' noise. If you use an a-rate variable instead, you will get a much cleaner sound (instrument 2).

EXAMPLE 03A09.csd

```<CsoundSynthesizer>
<CsOptions>
-o dac
</CsOptions>
<CsInstruments>
;Example by Joachim Heintz
sr = 44100
;--- increase or decrease to hear the difference more or less evident
ksmps = 128
nchnls = 2
0dbfs = 1

instr 1 ;envelope at k-time
aSine     oscils    .5, 800, 0
kEnv      transeg   0, .1, 5, 1, .1, -5, 0
aOut      =         aSine * kEnv
outs      aOut, aOut
endin

instr 2 ;envelope at a-time
aSine     oscils    .5, 800, 0
aEnv      transeg   0, .1, 5, 1, .1, -5, 0
aOut      =         aSine * aEnv
outs      aOut, aOut
endin

</CsInstruments>
<CsScore>
r 5 ;repeat the following line 5 times
i 1 0 1
s ;end of section
r 5
i 2 0 1
e
</CsScore>
</CsoundSynthesizer>
```

# LOCAL AND GLOBAL VARIABLES

## Variable Types

In Csound, there are several types of variables. It is important to understand the differences between these types. There are

• initialization variables, which are updated at each initialization pass, i.e. at the beginning of each note or score event. They start with the character i. This group also includes the score parameter fields, which always start with a p followed by a number: p1 refers to the first parameter field in the score line, p2 to the second one, and so on.
• control variables, which are updated at each control cycle (performance pass). They start with the character k.
• audio variables, which are also updated at each control cycle, but instead of a single number (like control variables) they consist of a vector (a collection of numbers), having in this way one number for each sample. They start with the character a.
• string variables, which are updated either at i-time or at k-time (depending on the opcode which produces a string). They start with the character S.

Besides these four standard types, there are two other variable types which are used for spectral processing:

• f-variables are used for the streaming phase vocoder opcodes (all starting with the characters pvs), which are very important for doing realtime FFT (Fast Fourier Transform) in Csound. They are updated at k-time, but their values also depend on the FFT parameters such as frame size and overlap.
• w-variables are used in some older spectral processing opcodes.

The following example demonstrates all the variable types (except the w-type):

EXAMPLE 03B01.csd

```<CsoundSynthesizer>
<CsOptions>
-o dac
</CsOptions>
<CsInstruments>
;Example by Joachim Heintz
sr = 44100
ksmps = 32
0dbfs = 1
nchnls = 2

seed      0; random seed each time different

instr 1; i-time variables
iVar1     =         p2; second parameter in the score
iVar2     random    0, 10; random value between 0 and 10
iVar      =         iVar1 + iVar2; do any math at i-rate
print     iVar1, iVar2, iVar
endin

instr 2; k-time variables
kVar1     line       0, p3, 10; moves from 0 to 10 in p3
kVar2     random     0, 10; new random value each control-cycle
kVar      =          kVar1 + kVar2; do any math at k-rate
; --- print each 0.1 seconds
printks   "kVar1 = %.3f, kVar2 = %.3f, kVar = %.3f%n", 0.1, kVar1, kVar2, kVar
endin

instr 3; a-variables
aVar1     oscils     .2, 400, 0; first audio signal: sine
aVar2     rand       1; second audio signal: noise
aVar3     butbp      aVar2, 1200, 12; third audio signal: noise filtered
aVar      =          aVar1 + aVar3; audio variables can also be added
outs       aVar, aVar; write to sound card
endin

instr 4; S-variables
iMyVar    random     0, 10; one random value per note
kMyVar    random     0, 10; one random value per each control-cycle
;S-variable updated just at init-time
SMyVar1   sprintf   "This string is updated just at init-time: kMyVar = %d\n", iMyVar
printf_i  "%s", 1, SMyVar1
;S-variable updates at each control-cycle
printks   "This string is updated at k-time: kMyVar = %.3f\n", .1, kMyVar
endin

instr 5; f-variables
aSig      rand       .2; audio signal (noise)
; f-signal by FFT-analyzing the audio-signal
fSig1     pvsanal    aSig, 1024, 256, 1024, 1
; second f-signal (spectral bandpass filter)
fSig2     pvsbandp   fSig1, 350, 400, 400, 450
aOut      pvsynth    fSig2; change back to audio signal
outs       aOut*20, aOut*20
endin

</CsInstruments>
<CsScore>
; p1    p2    p3
i 1     0     0.1
i 1     0.1   0.1
i 2     1     1
i 3     2     1
i 4     3     1
i 5     4     1
</CsScore>
</CsoundSynthesizer>
```

You can think of variables as named connectors between opcodes. You can connect the output from an opcode to the input of another. The type of connector (audio, control, etc.) can be known from the first letter of its name.

For a more detailed discussion, see the article "An Overview of Csound Variable Types" by Andrés Cabrera in the Csound Journal, and the page about Types, Constants and Variables in the Canonical Csound Manual.

## Local Scope

The scope of these variables is usually the instrument in which they are defined. They are local variables. In the following example, the variables in instrument 1 and instrument 2 have the same names, but different values.

EXAMPLE 03B02.csd

```<CsoundSynthesizer>
<CsOptions>
-odac
</CsOptions>
<CsInstruments>
;Example by Joachim Heintz
sr = 44100
ksmps = 4410; very high because of printing
nchnls = 2
0dbfs = 1

instr 1
;i-variable
iMyVar    init      0
iMyVar    =         iMyVar + 1
print     iMyVar
;k-variable
kMyVar    init      0
kMyVar    =         kMyVar + 1
printk    0, kMyVar
;a-variable
aMyVar    oscils    .2, 400, 0
outs      aMyVar, aMyVar
;S-variable updated just at init-time
SMyVar1   sprintf   "This string is updated just at init-time: kMyVar = %d\n", i(kMyVar)
printf    "%s", kMyVar, SMyVar1
;S-variable updated at each control-cycle
SMyVar2   sprintfk  "This string is updated at k-time: kMyVar = %d\n", kMyVar
printf    "%s", kMyVar, SMyVar2
endin

instr 2
;i-variable
iMyVar    init      100
iMyVar    =         iMyVar + 1
print     iMyVar
;k-variable
kMyVar    init      100
kMyVar    =         kMyVar + 1
printk    0, kMyVar
;a-variable
aMyVar    oscils    .3, 600, 0
outs      aMyVar, aMyVar
;S-variable updated just at init-time
SMyVar1   sprintf   "This string is updated just at init-time: kMyVar = %d\n", i(kMyVar)
printf    "%s", kMyVar, SMyVar1
;S-variable updated at each control-cycle
SMyVar2   sprintfk  "This string is updated at k-time: kMyVar = %d\n", kMyVar
printf    "%s", kMyVar, SMyVar2
endin

</CsInstruments>
<CsScore>
i 1 0 .3
i 2 1 .3
</CsScore>
</CsoundSynthesizer>
```

This is the output (first the output at init-time by the print opcode, then at each k-cycle the output of printk and the two printf opcodes):

new alloc for instr 1:
instr 1:  iMyVar = 1.000
i   1 time     0.10000:     1.00000
This string is updated just at init-time: kMyVar = 0
This string is updated at k-time: kMyVar = 1
i   1 time     0.20000:     2.00000
This string is updated just at init-time: kMyVar = 0
This string is updated at k-time: kMyVar = 2
i   1 time     0.30000:     3.00000
This string is updated just at init-time: kMyVar = 0
This string is updated at k-time: kMyVar = 3
B  0.000 ..  1.000 T  1.000 TT  1.000 M:  0.20000  0.20000
new alloc for instr 2:
instr 2:  iMyVar = 101.000
i   2 time     1.10000:   101.00000
This string is updated just at init-time: kMyVar = 100
This string is updated at k-time: kMyVar = 101
i   2 time     1.20000:   102.00000
This string is updated just at init-time: kMyVar = 100
This string is updated at k-time: kMyVar = 102
i   2 time     1.30000:   103.00000
This string is updated just at init-time: kMyVar = 100
This string is updated at k-time: kMyVar = 103
B  1.000 ..  1.300 T  1.300 TT  1.300 M:  0.29998  0.29998

## Global Scope

If you need variables which are recognized beyond the scope of an instrument, you must define them as global. This is done by prefixing the character g before the types i, k, a or S. See the following example:

EXAMPLE 03B03.csd

```<CsoundSynthesizer>
<CsInstruments>
;Example by Joachim Heintz
sr = 44100
ksmps = 4410; very high because of printing
nchnls = 2
0dbfs = 1

;global scalar variables can now be initialized in the header
giMyVar   init      0
gkMyVar   init      0

instr 1
;global i-variable
giMyVar   =         giMyVar + 1
print     giMyVar
;global k-variable
gkMyVar   =         gkMyVar + 1
printk    0, gkMyVar
;global S-variable updated just at init-time
gSMyVar1  sprintf   "This string is updated just at init-time: gkMyVar = %d\n", i(gkMyVar)
printf    "%s", gkMyVar, gSMyVar1
;global S-variable updated at each control-cycle
gSMyVar2  sprintfk  "This string is updated at k-time: gkMyVar = %d\n", gkMyVar
printf    "%s", gkMyVar, gSMyVar2
endin

instr 2
;global i-variable, gets value from instr 1
giMyVar   =         giMyVar + 1
print     giMyVar
;global k-variable, gets value from instr 1
gkMyVar   =         gkMyVar + 1
printk    0, gkMyVar
;global S-variable updated just at init-time, gets value from instr 1
printf    "Instr 1 tells: '%s'\n", gkMyVar, gSMyVar1
;global S-variable updated at each control-cycle, gets value from instr 1
printf    "Instr 1 tells: '%s'\n\n", gkMyVar, gSMyVar2
endin

</CsInstruments>
<CsScore>
i 1 0 .3
i 2 0 .3
</CsScore>
</CsoundSynthesizer>
```

The output shows the global scope, as instrument 2 uses the values which have been changed by instrument 1 in the same control cycle:

new alloc for instr 1:
instr 1:  giMyVar = 1.000
new alloc for instr 2:
instr 2:  giMyVar = 2.000
i   1 time     0.10000:     1.00000
This string is updated just at init-time: gkMyVar = 0
This string is updated at k-time: gkMyVar = 1
i   2 time     0.10000:     2.00000
Instr 1 tells: 'This string is updated just at init-time: gkMyVar = 0'
Instr 1 tells: 'This string is updated at k-time: gkMyVar = 1'

i   1 time     0.20000:     3.00000
This string is updated just at init-time: gkMyVar = 0
This string is updated at k-time: gkMyVar = 3
i   2 time     0.20000:     4.00000
Instr 1 tells: 'This string is updated just at init-time: gkMyVar = 0'
Instr 1 tells: 'This string is updated at k-time: gkMyVar = 3'

i   1 time     0.30000:     5.00000
This string is updated just at init-time: gkMyVar = 0
This string is updated at k-time: gkMyVar = 5
i   2 time     0.30000:     6.00000
Instr 1 tells: 'This string is updated just at init-time: gkMyVar = 0'
Instr 1 tells: 'This string is updated at k-time: gkMyVar = 5'

## How To Work With Global Audio Variables

Some special considerations must be taken into account if you work with global audio variables. Actually, Csound behaves basically the same whether you work with a local or a global audio variable. But usually you use global audio variables to add several audio signals to a global signal, and that makes a difference.

The next few examples go into a bit more detail. If you just want to see the result (a global audio variable usually must be cleared), you can skip the next examples and go straight to the last one of this section.

It should first be understood that a global audio variable is treated by Csound the same as a local one if it is used like a local audio signal:

EXAMPLE 03B04.csd

```<CsoundSynthesizer>
<CsOptions>
-odac
</CsOptions>
<CsInstruments>
;Example by Joachim Heintz
sr = 44100
ksmps = 32
nchnls = 2
0dbfs = 1

instr 1; produces a 400 Hz sine
gaSig     oscils    .1, 400, 0
endin

instr 2; outputs gaSig
outs      gaSig, gaSig
endin

</CsInstruments>
<CsScore>
i 1 0 3
i 2 0 3
</CsScore>
</CsoundSynthesizer>
```

Of course, there is absolutely no need to use a global variable in this case. If you do, you risk your audio being overwritten by an instrument with a higher number that uses the same variable name. In the following example, you will hear just a 600 Hz sine tone, because the 400 Hz sine of instrument 1 is overwritten by the 600 Hz sine of instrument 2:

EXAMPLE 03B05.csd

```<CsoundSynthesizer>
<CsOptions>
-o dac
</CsOptions>
<CsInstruments>
;Example by Joachim Heintz
sr = 44100
ksmps = 32
nchnls = 2
0dbfs = 1

instr 1; produces a 400 Hz sine
gaSig     oscils    .1, 400, 0
endin

instr 2; overwrites gaSig with 600 Hz sine
gaSig     oscils    .1, 600, 0
endin

instr 3; outputs gaSig
outs      gaSig, gaSig
endin

</CsInstruments>
<CsScore>
i 1 0 3
i 2 0 3
i 3 0 3
</CsScore>
</CsoundSynthesizer>
```

In general, you will use a global audio variable like a bus to which several local audio signals can be added. It is this addition of a global audio signal to its previous state which can cause some trouble. Let's first look at a simple example with a control signal to understand what is happening:

EXAMPLE 03B06.csd

```<CsoundSynthesizer>
<CsInstruments>
;Example by Joachim Heintz
sr = 44100
ksmps = 4410; very high because of printing
nchnls = 2
0dbfs = 1

instr 1
kSum      init      0; sum is zero at init pass
kAdd      =         1; control signal to add
kSum      =         kSum + kAdd; new sum in each k-cycle
printk    0, kSum; print the sum
endin

</CsInstruments>
<CsScore>
i 1 0 1
</CsScore>
</CsoundSynthesizer>
```

In this case, the "sum bus" kSum increases by 1 in each control cycle, because it adds the kAdd signal (which is always 1) in each k-pass to its previous state. It makes no difference whether this is done by a local k-signal, as here, or by a global k-signal, as in the next example:

EXAMPLE 03B07.csd

```<CsoundSynthesizer>
<CsInstruments>
;Example by Joachim Heintz
sr = 44100
ksmps = 4410; very high because of printing
nchnls = 2
0dbfs = 1

gkSum     init      0; sum is zero at init

instr 1
gkAdd     =         1; control signal to add
endin

instr 2
gkSum     =         gkSum + gkAdd; new sum in each k-cycle
printk    0, gkSum; print the sum
endin

</CsInstruments>
<CsScore>
i 1 0 1
i 2 0 1
</CsScore>
</CsoundSynthesizer>
```

What happens when we work with audio signals instead of control signals in this way, repeatedly adding a signal to its previous state? An audio signal in Csound is a collection of numbers (a vector). The size of this vector is given by the ksmps constant. If your sample rate is 44100 and ksmps=100, you will calculate a vector of 100 numbers, indicating the amplitude of each sample, 441 times per second.
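As a sketch, this relationship can be written out in an orchestra header (the values here are just for illustration):

```
sr    = 44100  ; sample rate: 44100 amplitude values per second
ksmps = 100    ; each audio vector holds 100 samples
; number of control cycles (vectors) calculated per second:
; kr = sr / ksmps = 44100 / 100 = 441
```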

So, if you add an audio signal to its previous state, different things can happen, depending on the present and previous states of the vector. If the previous state (with ksmps=9) was [0 0.1 0.2 0.1 0 -0.1 -0.2 -0.1 0], and the present state is the same, you will get a signal which is twice as strong: [0 0.2 0.4 0.2 0 -0.2 -0.4 -0.2 0]. But if the present state is [0 -0.1 -0.2 -0.1 0 0.1 0.2 0.1 0], you will just get zeros if you add them. This is shown in the next example with a local audio variable, and then in the following example with a global audio variable.

EXAMPLE 03B08.csd

```<CsoundSynthesizer>
<CsOptions>
-o dac
</CsOptions>
<CsInstruments>
;Example by Joachim Heintz
sr = 44100
ksmps = 4410; very high because of printing
;(change to 441 to see the difference)
nchnls = 2
0dbfs = 1

instr 1
;initialize a general audio variable
aSum      init      0
;produce a sine signal (change frequency to 401 to see the difference)
aAdd      oscils    .1, 400, 0
;add it to the general audio (= the previous vector)
aSum      =         aSum + aAdd
kmax      max_k     aSum, 1, 1; calculate maximum
printk    0, kmax; print it out
outs      aSum, aSum
endin

</CsInstruments>
<CsScore>
i 1 0 1
</CsScore>
</CsoundSynthesizer>
```

EXAMPLE 03B09.csd

```<CsoundSynthesizer>
<CsOptions>
-o dac
</CsOptions>
<CsInstruments>
;Example by Joachim Heintz
sr = 44100
ksmps = 4410; very high because of printing
;(change to 441 to see the difference)
nchnls = 2
0dbfs = 1

;initialize a general audio variable
gaSum     init      0

instr 1
;produce a sine signal (change frequency to 401 to see the difference)
aAdd      oscils    .1, 400, 0
;add it to the general audio (= the previous vector)
gaSum     =         gaSum + aAdd
endin

instr 2
kmax      max_k     gaSum, 1, 1; calculate maximum
printk    0, kmax; print it out
outs      gaSum, gaSum
endin

</CsInstruments>
<CsScore>
i 1 0 1
i 2 0 1
</CsScore>
</CsoundSynthesizer>
```

In both cases, you get a signal which increases every 1/10 second, because you have 10 control cycles per second (ksmps=4410), and 400 Hz divides evenly by this, so each vector contains a whole number of sine periods. If you change the ksmps value to 441, you will get a signal which increases much faster and is out of range after 1/10 second. If you change the frequency to 401 Hz, you will get a signal which first increases and then decreases, because each audio vector now contains 40.1 cycles of the sine wave. So the phases are shifting, first reinforcing and then weakening each other. If you change the frequency to 10 Hz, and then to 15 Hz (at ksmps=44100), you cannot hear anything, but if you render to file, you can see the whole process of either reinforcement or cancellation quite clearly:

Self-reinforcing global audio signal on account of its state in one control cycle being the same as in the previous one

Partly self-erasing global audio signal because of phase inversions in two subsequent control cycles

The conclusion of all this is: if you work with global audio variables in such a way that several local audio signals are added to a global audio variable (which works like a bus), you must clear this global bus in each control cycle. As Csound calculates all instruments in ascending order, this should be done either at the beginning of the first instrument, or at the end of the last instrument. Perhaps the best approach is to declare all global audio variables in the orchestra header first, and then clear them in an "always on" instrument with the highest number of all the instruments used. This is an example of a typical situation:

EXAMPLE 03B10.csd

```<CsoundSynthesizer>
<CsOptions>
-o dac
</CsOptions>
<CsInstruments>
;Example by Joachim Heintz
sr = 44100
ksmps = 32
nchnls = 2
0dbfs = 1

;initialize the global audio variables
gaBusL    init      0
gaBusR    init      0
;make the seed for random values each time different
seed      0

instr 1; produces short signals
loop:
iDur      random    .3, 1.5
timout    0, iDur, makenote
reinit    loop
makenote:
iFreq     random    300, 1000
iVol      random    -12, -3; dB
iPan      random    0, 1; random panning for each signal
aSin      oscil3    ampdb(iVol), iFreq, 1
aEnv      transeg   1, iDur, -10, 0; env in a-rate is cleaner
aAdd      =         aSin * aEnv
aL, aR    pan2      aAdd, iPan
gaBusL    =         gaBusL + aL; add to the global audio signals
gaBusR    =         gaBusR + aR
endin

instr 2; produces short filtered noise signals (4 partials)
loop:
iDur      random    .1, .7
timout    0, iDur, makenote
reinit    loop
makenote:
iFreq     random    100, 500
iVol      random    -24, -12; dB
iPan      random    0, 1
aNois     rand      ampdb(iVol)
aFilt     reson     aNois, iFreq, iFreq/10
aRes      balance   aFilt, aNois
aEnv      transeg   1, iDur, -10, 0
aAdd      =         aRes * aEnv
aL, aR    pan2      aAdd, iPan
gaBusL    =         gaBusL + aL; add to the global audio signals
gaBusR    =         gaBusR + aR
endin

instr 3; reverb of gaBus and output
aL, aR    freeverb  gaBusL, gaBusR, .8, .5
outs      aL, aR
endin

instr 100; clear global audios at the end
clear     gaBusL, gaBusR
endin

</CsInstruments>
<CsScore>
f 1 0 1024 10 1 .5 .3 .1
i 1 0 20
i 2 0 20
i 3 0 20
i 100 0 20
</CsScore>
</CsoundSynthesizer>
```

## The chn Opcodes For Global Variables

Instead of using the traditional g-variables for values or signals which are to be transferred between several instruments, it is also possible to use the chn opcodes. An i-, k-, a- or S-value or signal can be set by chnset and received by chnget. One advantage is that channels are identified by strings, so you can choose intuitive names.

For audio variables, instead of performing an addition, you can use the chnmix opcode. For clearing an audio variable, the chnclear opcode can be used.
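As a sketch of the idea, the following two fragments do roughly the same job, the first with a global variable, the second with a named channel (the channel name "bus" is arbitrary):

```
; adding a local signal to a bus and clearing it, with a global variable:
gaBus     =         gaBus + aSig
clear     gaBus

; the same idea with a named channel:
chnmix    aSig, "bus"
chnclear  "bus"
```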

EXAMPLE 03B11.csd

```<CsoundSynthesizer>
<CsOptions>
-o dac
</CsOptions>
<CsInstruments>
;Example by Joachim Heintz
sr = 44100
ksmps = 32
nchnls = 2
0dbfs = 1

instr 1; send i-values
chnset    1, "sio"
chnset    -1, "non"
endin

instr 2; send k-values
kfreq     randomi   100, 300, 1
chnset    kfreq, "cntrfreq"
kbw       =         kfreq/10
chnset    kbw, "bandw"
endin

instr 3; send a-values
anois     rand      .1
chnset    anois, "noise"
loop:
idur      random    .3, 1.5
timout    0, idur, do
reinit    loop
do:
ifreq     random    400, 1200
iamp      random    .1, .3
asig      oscils    iamp, ifreq, 0
aenv      transeg   1, idur, -10, 0
asine     =         asig * aenv
chnset    asine, "sine"
endin

instr 11; receive some chn values and send again
ival1     chnget    "sio"
ival2     chnget    "non"
print     ival1, ival2
kcntfreq  chnget    "cntrfreq"
kbandw    chnget    "bandw"
anoise    chnget    "noise"
afilt     reson     anoise, kcntfreq, kbandw
afilt     balance   afilt, anoise
chnset    afilt, "filtered"
endin

instr 12; mix the two audio signals
amix1     chnget     "sine"
amix2     chnget     "filtered"
chnmix     amix1, "mix"
chnmix     amix2, "mix"
endin

instr 20; receive and reverb
amix      chnget     "mix"
aL, aR    freeverb   amix, amix, .8, .5
outs       aL, aR
endin

instr 100; clear
chnclear   "mix"
endin

</CsInstruments>
<CsScore>
i 1 0 20
i 2 0 20
i 3 0 20
i 11 0 20
i 12 0 20
i 20 0 20
i 100 0 20
</CsScore>
</CsoundSynthesizer>

```

# CONTROL STRUCTURES

In a way, control structures are the core of a programming language. The fundamental element in each language is the conditional if branch. Actually all other control structures like for-, until- or while-loops can be traced back to if-statements.

So, Csound mainly provides the if-statement, either in the usual if-then-else form, or in the older form of an if-goto statement. These will be covered first. Though all necessary loops can be built just with if-statements, Csound's loop facility offers a more convenient way of performing loops. It will be introduced in the Loops section of this chapter. Finally, time loops are shown, which are particularly important in audio programming languages.

## If i-Time Then Not k-Time!

The fundamental difference in Csound between i-time and k-time, which has been explained in a previous chapter, must be regarded very carefully when you work with control structures. If you make a conditional branch at i-time, the condition is tested just once per note, at the initialization pass. If you make a conditional branch at k-time, the condition is tested again and again, in each control cycle.

For instance, if you test whether a soundfile is mono or stereo, this is done at init-time. If you test whether an amplitude value is below a certain threshold, it is done at performance time (k-time). If you get user input from a scroll number widget, this is also a k-value, so you need a k-condition.

Thus, all if and loop opcodes have an "i" and a "k" descendant. In the next few sections, a general introduction to the different control tools is given, followed by examples both at i-time and at k-time for each tool.

## If - then - [elseif - then -] else

The use of the if-then-else statement is very similar to that in other programming languages. Note that in Csound, "then" must be written on the same line as "if" and the expression to be tested, and that you must close the if-block with an "endif" statement on a new line:

```if <condition> then
...
else
...
endif
```

It is also possible to have no "else" statement:

```if <condition> then
...
endif
```

Or you can have one or more "elseif-then" statements in between:

```if <condition1> then
...
elseif <condition2> then
...
else
...
endif
```

If statements can also be nested. Each level must be closed with an "endif". This is an example with three levels:

```if <condition1> then; first condition opened
if <condition2> then; second condition opened
if <condition3> then; third condition opened
...
else
...
endif; third condition closed
elseif <condition2a> then
...
endif; second condition closed
else
...
endif; first condition closed
```

### i-Rate Examples

A typical problem in Csound: you have either mono or stereo files, and want to read both with stereo output. For the real stereo files that means: use soundin (diskin / diskin2) with two output arguments. For the mono files it means: use soundin / diskin / diskin2 with one output argument, and send it to both output channels:

EXAMPLE 03C01.csd

```<CsoundSynthesizer>
<CsOptions>
-o dac
</CsOptions>
<CsInstruments>
;Example by Joachim Heintz
sr = 44100
ksmps = 32
nchnls = 2
0dbfs = 1

instr 1
Sfile     =          "/my/file.wav" ;your soundfile path here
ifilchnls filenchnls Sfile
if ifilchnls == 1 then ;mono
aL        soundin    Sfile
aR        =          aL
else   ;stereo
aL, aR    soundin    Sfile
endif
outs       aL, aR
endin

</CsInstruments>
<CsScore>
i 1 0 5
</CsScore>
</CsoundSynthesizer>
```

If you use QuteCsound, you can browse in the widget panel for the soundfile. See the corresponding example in the QuteCsound Example menu.

### k-Rate Examples

The following example establishes a moving gate between 0 and 1. If the gate is above 0.5, the gate opens and you hear a tone. If the gate is equal to or below 0.5, the gate closes, and you hear nothing.

EXAMPLE 03C02.csd

```<CsoundSynthesizer>
<CsOptions>
-odac
</CsOptions>
<CsInstruments>
;Example by Joachim Heintz
sr = 44100
ksmps = 32
nchnls = 2
0dbfs = 1

seed      0; random values each time different
giTone    ftgen     0, 0, 2^10, 10, 1, .5, .3, .1

instr 1
kGate     randomi   0, 1, 3; moves between 0 and 1 (3 new values per second)
kFreq     randomi   300, 800, 1; moves between 300 and 800 hz (1 new value per sec)
kdB       randomi   -12, 0, 5; moves between -12 and 0 dB
;(5 new values per sec)
aSig      oscil3    1, kFreq, giTone
kVol      init      0
if kGate > 0.5 then; if kGate is larger than 0.5
kVol      =         ampdb(kdB); open gate
else
kVol      =         0; otherwise close gate
endif
kVol      port      kVol, .02; smooth volume curve to avoid clicks
aOut      =         aSig * kVol
outs      aOut, aOut
endin

</CsInstruments>
<CsScore>
i 1 0 30
</CsScore>
</CsoundSynthesizer>
```

### Short Form: (a v b ? x : y)

If you need an if-statement just to give a value to an (i- or k-rate) variable, you can also use a traditional short form in parentheses: (a v b ? x : y). Here "a v b" stands for a comparison between a and b, with v being a comparison operator such as < or >. If the condition is true, the expression returns x; if not, y. For instance, the last example could be written in this way:

EXAMPLE 03C03.csd

```<CsoundSynthesizer>
<CsOptions>
-odac
</CsOptions>
<CsInstruments>
;Example by Joachim Heintz
sr = 44100
ksmps = 32
nchnls = 2
0dbfs = 1

seed      0
giTone    ftgen     0, 0, 2^10, 10, 1, .5, .3, .1

instr 1
kGate     randomi   0, 1, 3; moves between 0 and 1 (3 new values per second)
kFreq     randomi   300, 800, 1; moves between 300 and 800 hz
;(1 new value per sec)
kdB       randomi   -12, 0, 5; moves between -12 and 0 dB
;(5 new values per sec)
aSig      oscil3    1, kFreq, giTone
kVol      init      0
kVol      =         (kGate > 0.5 ? ampdb(kdB) : 0); short form of condition
kVol      port      kVol, .02; smooth volume curve to avoid clicks
aOut      =         aSig * kVol
outs      aOut, aOut
endin

</CsInstruments>
<CsScore>
i 1 0 20
</CsScore>
</CsoundSynthesizer>
```

## If - goto

An older way of performing a conditional branch - but still useful in certain cases - is an "if" statement which is followed not by a "then", but by a label name. The "else" construction follows (or doesn't follow) in the next line. Like the if-then-else statement, if-goto works either at i-time or at k-time. You must declare the rate by using either igoto or kgoto. Usually you need an additional igoto/kgoto statement to skip the "else" block if the first condition is true. This is the general syntax:

i-time

```if <condition> igoto this; same as if-then
igoto that; same as else
this: ;the label "this" ...
...
igoto continue ;skip the "that" block
that: ; ... and the label "that" must be found
...
continue: ;go on after the conditional branch
...
```

k-time

```if <condition> kgoto this; same as if-then
kgoto that; same as else
this: ;the label "this" ...
...
kgoto continue ;skip the "that" block
that: ; ... and the label "that" must be found
...
continue: ;go on after the conditional branch
...
```

### i-Rate Examples

This is the same branch as shown above with if-then-else syntax, depending on whether a file is mono or stereo. If you just want to know whether the file is mono or stereo, you can use the "pure" if-igoto statement:

EXAMPLE 03C04.csd

```<CsoundSynthesizer>
<CsInstruments>
;Example by Joachim Heintz
sr = 44100
ksmps = 32
nchnls = 2
0dbfs = 1

instr 1
Sfile     = "/Joachim/Materialien/SamplesKlangbearbeitung/Kontrabass.aif"
ifilchnls filenchnls Sfile
if ifilchnls == 1 igoto mono; condition if true
igoto stereo; else condition
mono:
prints     "The file is mono!%n"
igoto      continue
stereo:
prints     "The file is stereo!%n"
continue:
endin

</CsInstruments>
<CsScore>
i 1 0 0
</CsScore>
</CsoundSynthesizer>
```

But if you want to play the file, you must also use a k-rate if-kgoto, because you have not just an action at i-time (initializing the soundin opcode) but also at k-time (producing an audio signal). So the code in this case is much more cumbersome than with the if-then-else facility shown previously.

EXAMPLE 03C05.csd

```<CsoundSynthesizer>
<CsOptions>
-odac
</CsOptions>
<CsInstruments>
;Example by Joachim Heintz
sr = 44100
ksmps = 32
nchnls = 2
0dbfs = 1

instr 1
Sfile     =          "my/file.wav"
ifilchnls filenchnls Sfile
if ifilchnls == 1 kgoto mono
kgoto stereo
if ifilchnls == 1 igoto mono; condition if true
igoto stereo; else condition
mono:
aL        soundin    Sfile
aR        =          aL
igoto      continue
kgoto      continue
stereo:
aL, aR    soundin    Sfile
continue:
outs       aL, aR
endin

</CsInstruments>
<CsScore>
i 1 0 5
</CsScore>
</CsoundSynthesizer>
```

### k-Rate Examples

This is the same moving gate between 0 and 1 as shown above in if-then-else syntax, now written with if-kgoto:

EXAMPLE 03C06.csd

```<CsoundSynthesizer>
<CsOptions>
-odac
</CsOptions>
<CsInstruments>
;Example by Joachim Heintz
sr = 44100
ksmps = 32
nchnls = 2
0dbfs = 1

seed      0
giTone    ftgen     0, 0, 2^10, 10, 1, .5, .3, .1

instr 1
kGate     randomi   0, 1, 3; moves between 0 and 1 (3 new values per second)
kFreq     randomi   300, 800, 1; moves between 300 and 800 hz
;(1 new value per sec)
kdB       randomi   -12, 0, 5; moves between -12 and 0 dB
;(5 new values per sec)
aSig      oscil3    1, kFreq, giTone
kVol      init      0
if kGate > 0.5 kgoto open; if condition is true
kgoto close; "else" condition
open:
kVol      =         ampdb(kdB)
kgoto continue
close:
kVol      =         0
continue:
kVol      port      kVol, .02; smooth volume curve to avoid clicks
aOut      =         aSig * kVol
outs      aOut, aOut
endin

</CsInstruments>
<CsScore>
i 1 0 30
</CsScore>
</CsoundSynthesizer>
```

## Loops

Loops can be built either at i-time or at k-time just with the "if" facility. The following example shows an i-rate and a k-rate loop created using the if-i/kgoto facility:

EXAMPLE 03C07.csd

```<CsoundSynthesizer>
<CsInstruments>
;Example by Joachim Heintz

instr 1 ;i-time loop: counts from 1 until 10 has been reached
icount    =         1
count:
print     icount
icount    =         icount + 1
if icount < 11 igoto count
prints    "i-END!%n"
endin

instr 2 ;k-rate loop: counts in the 100th k-cycle from 1 to 11
kcount    init      0
ktimek    timeinstk ;counts k-cycle from the start of this instrument
if ktimek == 100 kgoto loop
kgoto noloop
loop:
printks   "k-cycle %d reached!%n", 0, ktimek
kcount    =         kcount + 1
printk2   kcount
if kcount < 11 kgoto loop
printks   "k-END!%n", 0
noloop:
endin

</CsInstruments>
<CsScore>
i 1 0 0
i 2 0 1
</CsScore>
</CsoundSynthesizer>
```

But Csound offers a slightly simpler syntax for this kind of i-rate or k-rate loop. There are four variants of the loop opcode. All four refer to a label as the starting point of the loop, an index variable as a counter, an increment or decrement, and finally a reference value (maximum or minimum) for comparison:

• loop_lt counts upwards and checks whether the index variable is lower than the reference value;
• loop_le also counts upwards and checks whether the index is lower than or equal to the reference value;
• loop_gt counts downwards and checks whether the index is greater than the reference value;
• loop_ge also counts downwards and checks whether the index is greater than or equal to the reference value.
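All four variants share the same argument layout. As a sketch for loop_lt at i-rate (the k-rate version works the same way with k-variables):

```
loop:
          ;; body of the loop
loop_lt   indx, incr, imax, loop  ; adds incr to indx, then jumps back to
                                  ; the label "loop" as long as indx < imax
```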
As always, all four opcodes can be applied either at i-time or at k-time. Here are some examples, first for i-time loops, and then for k-time loops.

### i-Rate Examples

The following .csd provides a simple example for all four loop opcodes:

EXAMPLE 03C08.csd

```<CsoundSynthesizer>
<CsInstruments>
;Example by Joachim Heintz

instr 1 ;loop_lt: counts from 1 upwards and checks if < 10
icount    =         1
loop:
print     icount
loop_lt   icount, 1, 10, loop
prints    "Instr 1 terminated!%n"
endin

instr 2 ;loop_le: counts from 1 upwards and checks if <= 10
icount    =         1
loop:
print     icount
loop_le   icount, 1, 10, loop
prints    "Instr 2 terminated!%n"
endin

instr 3 ;loop_gt: counts from 10 downwards and checks if > 0
icount    =         10
loop:
print     icount
loop_gt   icount, 1, 0, loop
prints    "Instr 3 terminated!%n"
endin

instr 4 ;loop_ge: counts from 10 downwards and checks if >= 0
icount    =         10
loop:
print     icount
loop_ge   icount, 1, 0, loop
prints    "Instr 4 terminated!%n"
endin

</CsInstruments>
<CsScore>
i 1 0 0
i 2 0 0
i 3 0 0
i 4 0 0
</CsScore>
</CsoundSynthesizer>
```

The next example produces a random string of 10 characters and prints it out:

EXAMPLE 03C09.csd

```<CsoundSynthesizer>
<CsInstruments>
;Example by Joachim Heintz

instr 1
icount    =         0
Sname     =         ""; starts with an empty string
loop:
ichar     random    65, 90.999
Schar     sprintf   "%c", int(ichar); new character
Sname     strcat    Sname, Schar; append to Sname
loop_lt   icount, 1, 10, loop; loop construction
printf_i  "My name is '%s'!\n", 1, Sname; print result
endin

</CsInstruments>
<CsScore>
; call instr 1 ten times
r 10
i 1 0 0
</CsScore>
</CsoundSynthesizer>
```

You can also use an i-rate loop to fill a function table (= buffer) with any kind of values. In the next example, a function table with 20 positions (indices) is filled with random integers between 0 and 10 by instrument 1. Nearly the same loop construction is used afterwards to read these values by instrument 2.

EXAMPLE 03C10.csd

```<CsoundSynthesizer>
<CsInstruments>
;Example by Joachim Heintz

giTable   ftgen     0, 0, -20, -2, 0; empty function table with 20 points
seed      0; each time different seed

instr 1 ; writes in the table
icount    =         0
loop:
ival      random    0, 10.999 ;random value
; --- write in giTable at first, second, third ... position
tableiw   int(ival), icount, giTable
loop_lt   icount, 1, 20, loop; loop construction
endin

instr 2; reads from the table
icount    =         0
loop:
; --- read from giTable at first, second, third ... position
ival      tablei    icount, giTable
print     ival; prints the content
loop_lt   icount, 1, 20, loop; loop construction
endin

</CsInstruments>
<CsScore>
i 1 0 0
i 2 0 0
</CsScore>
</CsoundSynthesizer>
```

### k-Rate Examples

The next example performs a loop at k-time. Once per second, every value of an existing function table is changed by a random deviation of up to 10%. Though there are special opcodes for this task, it can also be done by a k-rate loop like the one shown here:

EXAMPLE 03C11.csd

```<CsoundSynthesizer>
<CsOptions>
-odac
</CsOptions>
<CsInstruments>
;Example by Joachim Heintz
sr = 44100
ksmps = 441
nchnls = 2
0dbfs = 1

giSine    ftgen     0, 0, 256, 10, 1; sine wave
seed      0; each time different seed

instr 1
ktiminstk timeinstk ;time in control-cycles
kcount    init      1
if ktiminstk == kcount * kr then; once per second table values manipulation:
kndx      =         0
loop:
krand     random    -.1, .1;random factor for deviations
kval      table     kndx, giSine; read old value
knewval   =         kval + (kval * krand); calculate new value
tablew    knewval, kndx, giSine; write new value
loop_lt   kndx, 1, 256, loop; loop construction
kcount    =         kcount + 1; increase counter
endif
asig      poscil    .2, 400, giSine
outs      asig, asig
endin

</CsInstruments>
<CsScore>
i 1 0 10
</CsScore>
</CsoundSynthesizer>
```

## Time Loops

Until now, we have just discussed loops which are executed "as fast as possible", either at i-time or at k-time. But in an audio programming language, time loops are of particular interest and importance. A time loop means repeating an action after a certain amount of time. This amount of time can be the same as, or different from, the previous loop. The action can be, for instance: playing a tone, triggering an instrument, or calculating a new value for the movement of an envelope.

In Csound, the usual way of performing time loops is the timout facility. The use of timout is a bit intricate, so several examples are given, starting from very simple ones and moving on to more complex ones.

Another way of performing time loops is by using a measurement of time or k-cycles. This method is also discussed and similar examples to those used for the timout opcode are given so that both methods can be compared.

### timout Basics

The timout opcode refers to the fact that in the traditional way of working with Csound, each "note" (an "i" score event) has its own time. This is the duration of the note, given in the score by the duration parameter, abbreviated as "p3". A timout statement says: "I am now jumping out of this p3 duration and establishing my own time." This time will be repeated as long as the duration of the note allows it.

Let's see an example. This is a sine tone with a moving frequency, starting at 400 Hz and ending at 600 Hz. The duration of this movement is 3 seconds for the first note, and 5 seconds for the second note:

EXAMPLE 03C12.csd

```<CsoundSynthesizer>
<CsOptions>
-odac
</CsOptions>
<CsInstruments>
;Example by Joachim Heintz
sr = 44100
ksmps = 32
nchnls = 2
0dbfs = 1

giSine    ftgen     0, 0, 2^10, 10, 1

instr 1
kFreq     expseg    400, p3, 600
aTone     poscil    .2, kFreq, giSine
outs      aTone, aTone
endin

</CsInstruments>
<CsScore>
i 1 0 3
i 1 4 5
</CsScore>
</CsoundSynthesizer>
```

Now we perform a time loop with timout which is 1 second long. So, for the first note, it will be repeated three times, and for the second note five times:

EXAMPLE 03C13.csd

```<CsoundSynthesizer>
<CsOptions>
-odac
</CsOptions>
<CsInstruments>
;Example by Joachim Heintz
sr = 44100
ksmps = 32
nchnls = 2
0dbfs = 1

giSine    ftgen     0, 0, 2^10, 10, 1

instr 1
loop:
timout    0, 1, play
reinit    loop
play:
kFreq     expseg    400, 1, 600
aTone     poscil    .2, kFreq, giSine
outs      aTone, aTone
endin

</CsInstruments>
<CsScore>
i 1 0 3
i 1 4 5
</CsScore>
</CsoundSynthesizer>
```

This is the general syntax of timout:

```first_label:
timout    istart, idur, second_label
reinit    first_label
second_label:
... <any action you want to have here>
```

The first_label is an arbitrary word (followed by a colon) marking the beginning of the time loop section. The istart argument for timout tells Csound when the second_label section is to be executed. Usually istart is zero, telling Csound: execute the second_label section immediately, without any delay. The idur argument for timout defines for how many seconds the second_label section is to be executed before the time loop begins again. Note that the "reinit first_label" statement is necessary to restart the loop after idur seconds, resetting all the values. (See the explanations about reinitialization in the chapter Initialization And Performance Pass.)

As usual when you work with the reinit opcode, you can use a rireturn statement to constrain the reinit pass. In this way you can have both the time loop section and the non-time loop section in the body of an instrument:

EXAMPLE 03C14.csd

```<CsoundSynthesizer>
<CsOptions>
-odac
</CsOptions>
<CsInstruments>
;Example by Joachim Heintz
sr = 44100
ksmps = 32
nchnls = 2
0dbfs = 1

giSine    ftgen     0, 0, 2^10, 10, 1

instr 1
loop:
timout    0, 1, play
reinit    loop
play:
kFreq1    expseg    400, 1, 600
aTone1    oscil3    .2, kFreq1, giSine
rireturn  ;end of the time loop
kFreq2    expseg    400, p3, 600
aTone2    poscil    .2, kFreq2, giSine

outs      aTone1+aTone2, aTone1+aTone2
endin

</CsInstruments>
<CsScore>
i 1 0 3
i 1 4 5
</CsScore>
</CsoundSynthesizer>
```

### timout Applications

In a time loop, it is often important to be able to change the duration of the loop. This can be done either by referring to the duration of the note (p3) ...

EXAMPLE 03C15.csd

```<CsoundSynthesizer>
<CsOptions>
-odac
</CsOptions>
<CsInstruments>
;Example by Joachim Heintz
sr = 44100
ksmps = 32
nchnls = 2
0dbfs = 1

giSine    ftgen     0, 0, 2^10, 10, 1

instr 1
loop:
timout    0, p3/5, play
reinit    loop
play:
kFreq     expseg    400, p3/5, 600
aTone     poscil    .2, kFreq, giSine
outs      aTone, aTone
endin

</CsInstruments>
<CsScore>
i 1 0 3
i 1 4 5
</CsScore>
</CsoundSynthesizer>
```

... or by calculating new values for the loop duration on each reinit pass, for instance by random values:

EXAMPLE 03C16.csd

```<CsoundSynthesizer>
<CsOptions>
-odac
</CsOptions>
<CsInstruments>
;Example by Joachim Heintz
sr = 44100
ksmps = 32
nchnls = 2
0dbfs = 1

giSine    ftgen     0, 0, 2^10, 10, 1

instr 1
loop:
idur      random    .5, 3 ;new value between 0.5 and 3 seconds each time
timout    0, idur, play
reinit    loop
play:
kFreq     expseg    400, idur, 600
aTone     poscil    .2, kFreq, giSine
outs      aTone, aTone
endin

</CsInstruments>
<CsScore>
i 1 0 20
</CsScore>
</CsoundSynthesizer>
```

The applications discussed so far have the disadvantage that all the signals inside the time loop must be finished or interrupted when the next loop begins. In this way it is not possible to have any overlapping of events. To achieve overlapping, the time loop can be used just to trigger an event, which can be done with event_i or scoreline_i. In the following example, the time loop in instrument 1 triggers an instance of instrument 2 every 0.5 to 2 seconds, each with a duration of 1 to 5 seconds. So usually the previous instance of instrument 2 will still be playing when the new instance is triggered. In instrument 2, some random calculations are carried out to make each note different, though all share a descending pitch (glissando):

EXAMPLE 03C17.csd

```<CsoundSynthesizer>
<CsOptions>
-odac
</CsOptions>
<CsInstruments>
;Example by Joachim Heintz
sr = 44100
ksmps = 32
nchnls = 2
0dbfs = 1

giSine    ftgen     0, 0, 2^10, 10, 1

instr 1
loop:
idurloop  random    .5, 2 ;duration of each loop
timout    0, idurloop, play
reinit    loop
play:
idurins   random    1, 5 ;duration of the triggered instrument
event_i   "i", 2, 0, idurins ;triggers instrument 2
endin

instr 2
ifreq1    random    600, 1000 ;starting frequency
idiff     random    100, 300 ;difference to final frequency
ifreq2    =         ifreq1 - idiff ;final frequency
kFreq     expseg    ifreq1, p3, ifreq2 ;glissando
iMaxdb    random    -12, 0 ;peak randomly between -12 and 0 dB
kAmp      transeg   ampdb(iMaxdb), p3, -10, 0 ;envelope
aTone     poscil    kAmp, kFreq, giSine
outs      aTone, aTone
endin

</CsInstruments>
<CsScore>
i 1 0 30
</CsScore>
</CsoundSynthesizer>
```

The last application of a time loop with the timout opcode shown here is a randomly moving envelope. If you want to create an envelope in Csound which moves between a lower and an upper limit, with one new random value in a certain time span (for instance, once a second), a time loop with timout is one way to achieve it. A line movement must be performed in each time loop, from a given starting value to a newly evaluated final value. Then, in the next loop, the previous final value must be set as the new starting value, and so on. This is a possible solution:

EXAMPLE 03C18.csd

```<CsoundSynthesizer>
<CsOptions>
-odac
</CsOptions>
<CsInstruments>
;Example by Joachim Heintz
sr = 44100
ksmps = 32
nchnls = 2
0dbfs = 1

giSine    ftgen     0, 0, 2^10, 10, 1
seed      0

instr 1
iupper    =         0; upper and ...
ilower    =         -24; ... lower limit in dB
ival1     random    ilower, iupper; starting value
loop:
idurloop  random    .5, 2; duration of each loop
timout    0, idurloop, play
reinit    loop
play:
ival2     random    ilower, iupper; final value
kdb       linseg    ival1, idurloop, ival2
ival1     =         ival2; let ival2 be ival1 for next loop
rireturn  ;end reinit section
aTone     poscil    ampdb(kdb), 400, giSine
outs      aTone, aTone
endin

</CsInstruments>
<CsScore>
i 1 0 30
</CsScore>
</CsoundSynthesizer>
```

Note that in this case the oscillator has been put after the time loop section (which is terminated by the rireturn statement). Otherwise the oscillator would start afresh with zero phase in each time loop, thus producing clicks.

### Time Loops by using the metro Opcode

The metro opcode outputs a "1" at distinct times, otherwise it outputs a "0". The frequency of this "banging" (which is in some ways similar to the metro objects in Pd or Max) is given by the kfreq input argument. The output of metro thus offers a simple and intuitive method for controlling time loops, if you use it to trigger a separate instrument which then carries out another job. Below is a simple example of calling a subinstrument twice a second:

EXAMPLE 03C19.csd

```<CsoundSynthesizer>
<CsOptions>
-odac
</CsOptions>
<CsInstruments>
;Example by Joachim Heintz
sr = 44100
ksmps = 32
nchnls = 2
0dbfs = 1

instr 1; triggering instrument
kTrig     metro     2; outputs "1" twice a second
if kTrig == 1 then
event     "i", 2, 0, 1
endif
endin

instr 2; triggered instrument
aSig      oscils    .2, 400, 0
aEnv      transeg   1, p3, -10, 0
outs      aSig*aEnv, aSig*aEnv
endin

</CsInstruments>
<CsScore>
i 1 0 10
</CsScore>
</CsoundSynthesizer>
```

The flexible time loop by timout given above (example 03C17.csd) can be done with the metro opcode in this way:

EXAMPLE 03C20.csd

```<CsoundSynthesizer>
<CsOptions>
-odac
</CsOptions>
<CsInstruments>
;Example by Joachim Heintz
sr = 44100
ksmps = 32
nchnls = 2
0dbfs = 1

giSine    ftgen     0, 0, 2^10, 10, 1
seed      0

instr 1
kfreq     init      1; give a start value for the trigger frequency
kTrig     metro     kfreq
if kTrig == 1 then ;if trigger impulse:
kdur      random    1, 5; random duration for instr 2
event     "i", 2, 0, kdur; call instr 2
kfreq     random    .5, 2; set new value for trigger frequency
endif
endin

instr 2
ifreq1    random    600, 1000; starting frequency
idiff     random    100, 300; difference to final frequency
ifreq2    =         ifreq1 - idiff; final frequency
kFreq     expseg    ifreq1, p3, ifreq2; glissando
iMaxdb    random    -12, 0; peak randomly between -12 and 0 dB
kAmp      transeg   ampdb(iMaxdb), p3, -10, 0; envelope
aTone     poscil    kAmp, kFreq, giSine
outs      aTone, aTone
endin

</CsInstruments>
<CsScore>
i 1 0 30
</CsScore>
</CsoundSynthesizer>
```

Note the differences in working with the metro opcode compared to the timout feature:

• As metro works at k-rate, you must use the k-variants of event or scoreline to call the subinstrument. With timout you must use the i-variants (event_i and scoreline_i), because timout uses reinitialization to perform the time loops.
• You must select the one k-cycle in which the metro opcode sends a "1". This is done with an if-statement; the rest of the instrument is not affected. If you use timout, you usually must separate the reinitialized from the non-reinitialized section by a rireturn statement.

## Links

Steven Yi: Control Flow (Part I = Csound Journal Spring 2006, Part II = Csound Journal Summer 2006)

# FUNCTION TABLES

A function table is essentially the same as what other audio programming languages call a buffer, a table, a list or an array. It is a place where data can be stored in an ordered way. Each function table has a size: how much data (in Csound just numbers) can be stored in it. Each value in the table can be accessed by an index, counting from 0 to size-1. For instance, if you have a function table with a size of 10, and the numbers [1.1 2.2 3.3 5.5 8.8 13.13 21.21 34.34 55.55 89.89] in it, this is the relation of value and index:

 INDEX   0     1     2     3     4     5      6      7      8      9
 VALUE   1.1   2.2   3.3   5.5   8.8   13.13  21.21  34.34  55.55  89.89

So, if you want to retrieve the value 13.13, you must point to the value stored under index 5.
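This relation can be verified in a minimal sketch, using an f-statement and the table opcode (both are explained in detail later in this chapter):

```<CsoundSynthesizer>
<CsInstruments>
;minimal sketch: read one value from a function table by its index
instr 1
ival      table     5, 1 ;read table 1 at index 5
prints    "Value at index 5 = %f%n", ival ;should print 13.13
endin
</CsInstruments>
<CsScore>
f 1 0 -10 -2 1.1 2.2 3.3 5.5 8.8 13.13 21.21 34.34 55.55 89.89
i 1 0 0
</CsScore>
</CsoundSynthesizer>
```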

The use of function tables is manifold. A function table can contain pitch values to which you may refer using the input of a MIDI keyboard. A function table can contain a model of a waveform which is read periodically by an oscillator. You can record live audio input in a function table, and then play it back. There are many more applications, all using the fast access (because a function table is part of the RAM) and flexible use of function tables.

## How to Generate a Function Table

Each function table must be created before it can be used. Even if you want to write values later, you must first create an empty table, because you must initially reserve some space in memory for it.

Each creation of a function table in Csound is performed by one of the so-called GEN Routines. Each GEN Routine generates a function table in a particular way: GEN01 transfers audio samples from a soundfile into a table, with GEN02 we can write values in "by hand" one by one, GEN10 calculates a waveform using information determining a sum of sinusoids, GEN20 generates window functions typically used for granular synthesis, and so on. There is a good overview in the Csound Manual of all existing GEN Routines. Here we will explain the general use and give simple examples for some frequent cases.

### GEN02 And General Parameters For GEN Routines

Let's start with our example above and write the 10 numbers into a function table of the same size. For this, use of a GEN02 function table is required. A short description of GEN02 from the manual reads as follows:

```f # time size 2 v1 v2 v3 ...
```

This is the traditional way of creating a function table by an "f statement" or an "f score event" (in comparison for instance to "i score events" which call instrument instances). The input parameters after the "f" are the following:

• #: a number (as positive integer) for this function table;
• time: the time at which the function table becomes available (usually 0 = from the beginning);
• size: the size of the function table. This is a bit tricky, because in the early days of Csound just power-of-two sizes for function tables were possible (2, 4, 8, 16, ...). Nowadays nearly every GEN Routine accepts other sizes, but these non-power-of-two sizes must be declared as a negative number!
• 2: the number of the GEN Routine used to generate the table. Here is another important point to bear in mind: by default, Csound normalizes the table values. This means that the maximum is scaled to +1 if positive, and to -1 if negative. To prevent Csound from normalizing, a negative number must be given as GEN number (here -2 instead of 2).
• v1 v2 v3 ...: the values which are written into the function table.

So this is the way to put the values [1.1 2.2 3.3 5.5 8.8 13.13 21.21 34.34 55.55 89.89] in a function table with the number 1:

EXAMPLE 03D01.csd

```<CsoundSynthesizer>
<CsInstruments>
;Example by Joachim Heintz
instr 1 ;prints the values of table 1 or 2
prints    "%nFunction Table %d:%n", p4
indx      init      0
loop:
ival      table     indx, p4
prints    "Index %d = %f%n", indx, ival
loop_lt   indx, 1, 10, loop
endin
</CsInstruments>
<CsScore>
f 1 0 -10 -2 1.1 2.2 3.3 5.5 8.8 13.13 21.21 34.34 55.55 89.89; not normalized
f 2 0 -10 2 1.1 2.2 3.3 5.5 8.8 13.13 21.21 34.34 55.55 89.89; normalized
i 1 0 0 1; prints function table 1
i 1 0 0 2; prints function table 2
</CsScore>
</CsoundSynthesizer>
```

Instrument 1 just serves to print the values of the table (the table opcode will be explained later). See the difference depending on whether the table is normalized (positive GEN number) or not normalized (negative GEN number).

Using the ftgen opcode is a more modern way of creating a function table, which is in some ways preferable to the old way of writing an f-statement in the score. The syntax is explained below:

```giVar     ftgen     ifn, itime, isize, igen, iarg1 [, iarg2 [, ...]]
```
• giVar: a variable name. Each function table is stored in an i-variable. Usually you want to have access to it from every instrument, so a gi-variable (global initialization variable) is used.
• ifn: a number for the function table. If you type in 0, you give Csound the job of choosing a number, which is mostly preferable.

The other parameters (size, GEN number, individual arguments) are the same as in the f-statement in the score. As this GEN call is now a part of the orchestra, each argument is separated from the next by a comma (not by a space or tab like in the score).

So this is the same example as above, but now with the function tables being generated in the orchestra header:

EXAMPLE 03D02.csd

```<CsoundSynthesizer>
<CsInstruments>
;Example by Joachim Heintz

giFt1 ftgen 1, 0, -10, -2, 1.1, 2.2, 3.3, 5.5, 8.8, 13.13, 21.21, 34.34, 55.55, 89.89
giFt2 ftgen 2, 0, -10, 2, 1.1, 2.2, 3.3, 5.5, 8.8, 13.13, 21.21, 34.34, 55.55, 89.89

instr 1; prints the values of table 1 or 2
prints    "%nFunction Table %d:%n", p4
indx      init      0
loop:
ival      table     indx, p4
prints    "Index %d = %f%n", indx, ival
loop_lt   indx, 1, 10, loop
endin

</CsInstruments>
<CsScore>
i 1 0 0 1; prints function table 1
i 1 0 0 2; prints function table 2
</CsScore>
</CsoundSynthesizer>
```

### GEN01: Importing a Soundfile

GEN01 is used for importing soundfiles stored on disk into the computer's RAM, ready for use by a number of Csound's opcodes in the orchestra. A typical ftgen statement for this import might be the following:

```varname             ifn itime isize igen Sfilnam       iskip iformat ichn
giFile    ftgen     0,  0,    0,    1,   "myfile.wav", 0,    0,      0
```
• varname, ifn, itime: These arguments have the same meaning as explained above in reference to GEN02.
• isize: Usually you won't know the length of your soundfile in samples, and want to have a table length which includes exactly all the samples. This is done by setting isize=0. (Note that some opcodes may need a power-of-two table. In this case you cannot use this option, but must calculate the next larger power-of-two value as size for the function table.)
• igen: As explained in the previous subchapter, this is always the place for indicating the number of the GEN Routine which must be used. As always, a positive number means normalizing, which is usually convenient for audio samples.
• Sfilnam: The name of the soundfile in double quotes. Similar to other audio programming languages, Csound recognizes just the name if your .csd and the soundfile are in the same folder. Otherwise, give the full path. (You can also include the folder via the "SSDIR" variable, or add the folder via the "--env:NAME+=VALUE" option.)
• iskip: The time in seconds you want to skip at the beginning of the soundfile. 0 means reading from the beginning of the file.
• iformat: Usually 0, which means: read the sample format from the soundfile header.
• ichn: 1 = read the first channel of the soundfile into the table, 2 = read the second channel, etc. 0 means that all channels are read.

The next example plays the short sample "fox.wav". Copy the text below, save it to the same location as the "fox.wav" soundfile, and it should work. Reading the function table is done here with the poscil3 opcode which can deal with non-power-of-two tables.

EXAMPLE 03D03.csd

```<CsoundSynthesizer>
<CsOptions>
-odac
</CsOptions>
<CsInstruments>
;Example by Joachim Heintz
sr = 44100
ksmps = 32
nchnls = 2
0dbfs = 1

giSample  ftgen     0, 0, 0, 1, "fox.wav", 0, 0, 1

instr 1
itablen   =         ftlen(giSample) ;length of the table
idur      =         itablen / sr ;duration
aSamp     poscil3   .5, 1/idur, giSample
outs      aSamp, aSamp
endin

</CsInstruments>
<CsScore>
i 1 0 2.757
</CsScore>
</CsoundSynthesizer>
```

### GEN10: Creating a Waveform

The third example for generating a function table covers one classical case: building a function table which stores one cycle of a waveform. This waveform is then read by an oscillator to produce a sound.

There are many GEN Routines to achieve this. The simplest one is GEN10. It produces a waveform by adding sine waves which have the "harmonic" frequency relations 1 : 2 : 3 : 4 ... After the usual arguments for function table number, start, size and GEN routine number, which are the first four arguments in ftgen for all GEN Routines, you must specify for GEN10 the relative strengths of the harmonics. So, if you just provide one argument, you will end up with a sine wave (1st harmonic). The next argument is the strength of the 2nd harmonic, then the 3rd, and so on. In this way, you can build the standard harmonic waveforms by sums of sinusoids. This is done in the next example by instruments 1-5. Instrument 6 uses the sine wavetable twice: for generating both the sound and the envelope.

EXAMPLE 03D04.csd

```<CsoundSynthesizer>
<CsOptions>
-odac
</CsOptions>
<CsInstruments>
;Example by Joachim Heintz
sr = 44100
ksmps = 32
nchnls = 2
0dbfs = 1

giSine    ftgen     0, 0, 2^10, 10, 1
giSaw     ftgen     0, 0, 2^10, 10, 1, 1/2, 1/3, 1/4, 1/5, 1/6, 1/7, 1/8, 1/9
giSquare  ftgen     0, 0, 2^10, 10, 1, 0, 1/3, 0, 1/5, 0, 1/7, 0, 1/9
giTri     ftgen     0, 0, 2^10, 10, 1, 0, -1/9, 0, 1/25, 0, -1/49, 0, 1/81
giImp     ftgen     0, 0, 2^10, 10, 1, 1, 1, 1, 1, 1, 1, 1, 1

instr 1 ;plays the sine wavetable
aSine     poscil    .2, 400, giSine
aEnv      linen     aSine, .01, p3, .05
outs      aEnv, aEnv
endin

instr 2 ;plays the saw wavetable
aSaw      poscil    .2, 400, giSaw
aEnv      linen     aSaw, .01, p3, .05
outs      aEnv, aEnv
endin

instr 3 ;plays the square wavetable
aSqu      poscil    .2, 400, giSquare
aEnv      linen     aSqu, .01, p3, .05
outs      aEnv, aEnv
endin

instr 4 ;plays the triangular wavetable
aTri      poscil    .2, 400, giTri
aEnv      linen     aTri, .01, p3, .05
outs      aEnv, aEnv
endin

instr 5 ;plays the impulse wavetable
aImp      poscil    .2, 400, giImp
aEnv      linen     aImp, .01, p3, .05
outs      aEnv, aEnv
endin

instr 6 ;plays a sine and uses the first half of its shape as envelope
aEnv      poscil    .2, 1/6, giSine
aSine     poscil    aEnv, 400, giSine
outs      aSine, aSine
endin

</CsInstruments>
<CsScore>
i 1 0 3
i 2 4 3
i 3 8 3
i 4 12 3
i 5 16 3
i 6 20 3
</CsScore>
</CsoundSynthesizer>
```

## How to Write Values to a Function Table

As we saw, each GEN Routine generates a function table, and by doing this, it writes values into it. But in certain cases you might first want to create an empty table, and then write the values into it later. This section is about how to do this.

Actually it is not correct to speak of an "empty table". If Csound creates an "empty" table, in fact it writes zeros to the indices which are not specified. This is perhaps the easiest method of creating an "empty" table for 100 values:

```giEmpty   ftgen     0, 0, -100, 2, 0
```

The basic opcode which writes values to existing function tables is tablew, together with its i-time descendant tableiw. Note that you may have problems with some features if your table does not have a power-of-two size. In this case, you can also use tabw / tabw_i, but they lack the offset and wraparound features. As usual, you must differentiate whether your signal (variable) is i-rate, k-rate or a-rate. The usage is simple and differs just in the class of values you want to write to the table (i-, k- or a-variables):

```          tableiw   isig, indx, ifn [, ixmode] [, ixoff] [, iwgmode]
tablew    ksig, kndx, ifn [, ixmode] [, ixoff] [, iwgmode]
tablew    asig, andx, ifn [, ixmode] [, ixoff] [, iwgmode]
```
• isig, ksig, asig is the value (variable) you want to write into specified locations of the table;
• indx, kndx, andx is the location (index) where you write the value;
• ifn is the function table you want to write in;
• ixmode gives the choice to write by raw indices (counting from 0 to size-1), or by a normalized writing mode in which the start and end of the table are always referred to as 0 and 1 (regardless of the length of the table). The default is ixmode=0, which means the raw index mode. A value not equal to zero for ixmode changes to the normalized index mode.
• ixoff (default=0) gives an index offset. So, if indx=0 and ixoff=5, you will write at index 5.
• iwgmode tells what you want to do if your index is larger than the size of the table. If iwgmode=0 (default), any index larger than possible is written at the last possible index. If iwgmode=1, the indices are wrapped around. For instance, if your table size is 8, and your index is 10, in the wraparound mode the value will be written at index 2.
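The wraparound case described in the last bullet point can be sketched in a minimal example (the value 99 and the table size are arbitrary choices for illustration):

```<CsoundSynthesizer>
<CsInstruments>
giFt      ftgen     0, 0, -8, -2, 0 ;empty table of size 8

instr 1
;write 99 at index 10 with iwgmode=1: wraps around to index 2
tableiw   99, 10, giFt, 0, 0, 1
ival      table     2, giFt ;read back index 2
prints    "Value at index 2 = %f%n", ival ;should print 99
endin
</CsInstruments>
<CsScore>
i 1 0 0
</CsScore>
</CsoundSynthesizer>
```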

Here are some examples for i-, k- and a-rate values.

### i-Rate Example

The following example calculates the first 12 values of a Fibonacci series and writes them to a table. This table has been created first in the header (filled with zeros). Then instrument 1 calculates the values in an i-time loop and writes them to the table with tableiw. Instrument 2 just serves to print the values.

EXAMPLE 03D05.csd

```<CsoundSynthesizer>
<CsInstruments>
;Example by Joachim Heintz

giFt      ftgen     0, 0, -12, -2, 0

instr 1; calculates first 12 fibonacci values and writes them to giFt
istart    =         1
inext     =         2
indx      =         0
loop:
tableiw   istart, indx, giFt ;writes istart to table
istartold =         istart ;keep previous value of istart
istart    =         inext ;reset istart for next loop
inext     =         istartold + inext ;reset inext for next loop
loop_lt   indx, 1, 12, loop
endin

instr 2; prints the values of the table
prints    "%nContent of Function Table:%n"
indx      init      0
loop:
ival      table     indx, giFt
prints    "Index %d = %f%n", indx, ival
loop_lt   indx, 1, ftlen(giFt), loop
endin

</CsInstruments>
<CsScore>
i 1 0 0
i 2 0 0
</CsScore>
</CsoundSynthesizer>
```

### k-Rate Example

The next example writes a k-signal continuously into a table. This can be used to record any kind of user input, for instance by MIDI or widgets. It can also be used to record random movements of k-signals, like here:

EXAMPLE 03D06.csd

```<CsoundSynthesizer>
<CsOptions>
-odac
</CsOptions>
<CsInstruments>
;Example by Joachim Heintz
sr = 44100
ksmps = 32
nchnls = 2
0dbfs = 1

giFt      ftgen     0, 0, -5*kr, 2, 0; size for 5 seconds of recording
giWave    ftgen     0, 0, 2^10, 10, 1, .5, .3, .1; waveform for oscillator
seed      0

; - recording of a random frequency movement for 5 seconds, and playing it
instr 1
kFreq     randomi   400, 1000, 1 ;random frequency
aSnd      poscil    .2, kFreq, giWave ;play it
outs      aSnd, aSnd
;;record the k-signal
prints    "RECORDING!%n"
;create a writing pointer in the table,
;moving in 5 seconds from index 0 to the end
kindx     linseg    0, 5, ftlen(giFt)
;write the k-signal
tablew    kFreq, kindx, giFt
endin

instr 2; read the values of the table and play it again
;;read the k-signal
prints    "PLAYING!%n"
;create a reading pointer in the table,
;moving in 5 seconds from index 0 to the end
kindx     linseg    0, 5, ftlen(giFt)
;read the k-signal
kFreq     table     kindx, giFt
aSnd      oscil3    .2, kFreq, giWave; play it
outs      aSnd, aSnd
endin

</CsInstruments>
<CsScore>
i 1 0 5
i 2 6 5
</CsScore>
</CsoundSynthesizer>
```

As you see, in this typical case of writing k-values to a table you need a moving signal for the index. This can be done using the line or linseg opcode as here, or by using a phasor. The phasor always moves from 0 to 1 at a certain frequency. So, if you want the phasor to move from 0 to 1 in 5 seconds, you must set the frequency to 1/5. By setting the ixmode argument of tablew to 1, you can use the phasor output directly as writing pointer. So this is an alternative version of instrument 1 taken from the previous example:

```instr 1; recording of a random frequency movement for 5 seconds, and playing it
kFreq     randomi   400, 1000, 1; random frequency
aSnd      oscil3    .2, kFreq, giWave; play it
outs      aSnd, aSnd
;;record the k-signal with a phasor as index
prints    "RECORDING!%n"
;create a writing pointer in the table,
;moving in 5 seconds from index 0 to the end
kindx     phasor    1/5
;write the k-signal
tablew    kFreq, kindx, giFt, 1
endin
```

### a-Rate Example

Recording an audio signal is quite similar to recording a control signal. You just need an a-signal as input and also as index. The following example first shows the recording of a random audio signal. If you have live audio input, you can then record your input for 5 seconds.

EXAMPLE 03D07.csd

```<CsoundSynthesizer>
<CsOptions>
-iadc -odac
</CsOptions>
<CsInstruments>
;Example by Joachim Heintz
sr = 44100
ksmps = 32
nchnls = 2
0dbfs = 1

giFt      ftgen     0, 0, -5*sr, 2, 0; size for 5 seconds of recording audio
seed      0

instr 1 ;generating a band filtered noise for 5 seconds, and recording it
aNois     rand      .2
kCfreq    randomi   200, 2000, 3; random center frequency
aFilt     butbp     aNois, kCfreq, kCfreq/10; filtered noise
aBal      balance   aFilt, aNois, 1; balance amplitude
outs      aBal, aBal
;;record the audiosignal with a phasor as index
prints    "RECORDING FILTERED NOISE!%n"
;create a writing pointer in the table,
;moving in 5 seconds from index 0 to the end
aindx     phasor    1/5
;write the k-signal
tablew    aBal, aindx, giFt, 1
endin

instr 2 ;read the values of the table and play it
prints    "PLAYING FILTERED NOISE!%n"
aindx     phasor    1/5
aSnd      table3    aindx, giFt, 1
outs      aSnd, aSnd
endin

instr 3 ;record live input
ktim      timeinsts ; playing time of the instrument in seconds
prints    "PLEASE GIVE YOUR LIVE INPUT AFTER THE BEEP!%n"
kBeepEnv  linseg    0, 1, 0, .01, 1, .5, 1, .01, 0
aBeep     oscils    .2, 600, 0
outs      aBeep*kBeepEnv, aBeep*kBeepEnv
;;record the audiosignal after 2 seconds
if ktim > 2 then
ain       inch      1
printks   "RECORDING LIVE INPUT!%n", 10
;create a writing pointer in the table,
;moving in 5 seconds from index 0 to the end
aindx     phasor    1/5
;write the k-signal
tablew    ain, aindx, giFt, 1
endif
endin

instr 4 ;read the values from the table and play it
prints    "PLAYING LIVE INPUT!%n"
aindx     phasor    1/5
aSnd      table3    aindx, giFt, 1
outs      aSnd, aSnd
endin

</CsInstruments>
<CsScore>
i 1 0 5
i 2 6 5
i 3 12 7
i 4 20 5
</CsScore>
</CsoundSynthesizer>
```

## How to Retrieve Values from a Function Table

There are two methods of reading table values. You can either use the table / tab opcodes, which are universally usable, but need an index; or you can use an oscillator for reading a table at k-rate or a-rate.

### The table Opcode

The table opcode is quite similar in syntax to the tableiw/tablew opcode (which are explained above). It's just its counterpart in reading values from a function table (instead of writing values to it). So its output is either an i-, k- or a-signal. The main input is an index of the appropriate rate (i-index for i-output, k-index for k-output, a-index for a-output). The other arguments are as explained above for tableiw/tablew:

```ires      table    indx, ifn [, ixmode] [, ixoff] [, iwrap]
kres      table    kndx, ifn [, ixmode] [, ixoff] [, iwrap]
ares      table    andx, ifn [, ixmode] [, ixoff] [, iwrap]
```

As table reading often requires interpolation between the table values - for instance if you read k or a-values faster or slower than they have been written in the table - Csound offers two descendants of table for interpolation: tablei interpolates linearly, whilst table3 performs cubic interpolation (which is generally preferable but is computationally slightly more expensive).
Another variant is the tab_i / tab pair of opcodes, which lack some features but may be preferable in some situations. If you have any problems in reading non-power-of-two tables, give them a try. They should also be faster than the table opcode, but you must take care: they include fewer built-in protection measures than table, tablei and table3, and if they are given index values that exceed the table size Csound will stop and report a performance error.
Examples of the use of the table opcodes can be found in the earlier examples in the How-To-Write-Values... section.
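The effect of interpolation can be seen with a fractional index. A minimal sketch (the two-point table here is an arbitrary choice for illustration): table truncates the index, whilst tablei interpolates linearly between the neighbouring values.

```<CsoundSynthesizer>
<CsInstruments>
giFt      ftgen     0, 0, -2, -2, 0, 10 ;table holding the values 0 and 10

instr 1
ival      table     0.5, giFt ;no interpolation: index truncated to 0
ivali     tablei    0.5, giFt ;linear interpolation between 0 and 10
prints    "table = %f, tablei = %f%n", ival, ivali ;should print 0 and 5
endin
</CsInstruments>
<CsScore>
i 1 0 0
</CsScore>
</CsoundSynthesizer>
```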

### Oscillators

Reading table values using an oscillator is standard if you read tables which contain one cycle of a waveform at audio-rate. But actually you can read any table using an oscillator, either at a- or at k-rate. The advantage is that you needn't create an index signal. You can simply specify the frequency of the oscillator.
You should bear in mind that many of the oscillators in Csound will work only with power-of-two table sizes. The poscil/poscil3 opcodes do not have this restriction and offer a high precision, because they work with floating point indices, so in general it is recommended to use them. Below is an example that demonstrates both reading a k-rate and an a-rate signal from a buffer with poscil3 (an oscillator with a cubic interpolation):

EXAMPLE 03D08.csd

```<CsoundSynthesizer>
<CsOptions>
-iadc -odac
</CsOptions>
<CsInstruments>
;Example by Joachim Heintz
sr = 44100
ksmps = 32
nchnls = 2
0dbfs = 1
; -- size for 5 seconds of recording control data
giControl ftgen     0, 0, -5*kr, 2, 0
; -- size for 5 seconds of recording audio data
giAudio   ftgen     0, 0, -5*sr, 2, 0
giWave    ftgen     0, 0, 2^10, 10, 1, .5, .3, .1; waveform for oscillator
seed      0

; -- ;recording of a random frequency movement for 5 seconds, and playing it
instr 1
kFreq     randomi   400, 1000, 1; random frequency
aSnd      poscil    .2, kFreq, giWave; play it
outs      aSnd, aSnd
;;record the k-signal with a phasor as index
prints    "RECORDING RANDOM CONTROL SIGNAL!%n"
;create a writing pointer in the table,
;moving in 5 seconds from index 0 to the end
kindx     phasor    1/5
;write the k-signal
tablew    kFreq, kindx, giControl, 1
endin

instr 2; read the values of the table and play it with poscil
prints    "PLAYING CONTROL SIGNAL!%n"
kFreq     poscil    1, 1/5, giControl
aSnd      poscil    .2, kFreq, giWave; play it
outs      aSnd, aSnd
endin

instr 3; record live input
ktim      timeinsts ; playing time of the instrument in seconds
prints    "PLEASE GIVE YOUR LIVE INPUT AFTER THE BEEP!%n"
kBeepEnv  linseg    0, 1, 0, .01, 1, .5, 1, .01, 0
aBeep     oscils    .2, 600, 0
outs      aBeep*kBeepEnv, aBeep*kBeepEnv
;;record the audiosignal after 2 seconds
if ktim > 2 then
ain       inch      1
printks   "RECORDING LIVE INPUT!%n", 10
;create a writing pointer in the table,
;moving in 5 seconds from index 0 to the end
aindx     phasor    1/5
;write the k-signal
tablew    ain, aindx, giAudio, 1
endif
endin

instr 4; read the values from the table and play it with poscil
prints    "PLAYING LIVE INPUT!%n"
aSnd      poscil    .5, 1/5, giAudio
outs      aSnd, aSnd
endin

</CsInstruments>
<CsScore>
i 1 0 5
i 2 6 5
i 3 12 7
i 4 20 5
</CsScore>
</CsoundSynthesizer>
```

## Saving the Contents of a Function Table to a File

A function table exists just as long as you run the Csound instance which has created it. If Csound terminates, all the data is lost. If you want to save the data for later use, you must write it to a file. There are several cases, depending firstly on whether you write at i-time or at k-time, and secondly on what kind of file you want to write.

### Writing a File in Csound's ftsave Format at i-Time or k-Time

Any function table in Csound can easily be written to a file by the ftsave (i-time) or ftsavek (k-time) opcode. The use is very simple. The first argument specifies the filename (in double quotes), the second argument chooses between a text format (non zero) or a binary format (zero) to write, then you just give the number of the function table(s) to save.
For the following example you should end up with two textfiles in the same folder as your .csd: "i-time_save.txt" saves function table 1 (a sine wave) at i-time; "k-time_save.txt" saves function table 2 (a linear increment produced during the performance) at k-time.

EXAMPLE 03D09.csd

```<CsoundSynthesizer>
<CsInstruments>
;Example by Joachim Heintz
sr = 44100
ksmps = 32
nchnls = 2
0dbfs = 1

giWave    ftgen     1, 0, 2^7, 10, 1; sine with 128 points
giControl ftgen     2, 0, -kr, 2, 0; size for 1 second of recording control data
seed      0

instr 1; saving giWave at i-time
ftsave    "i-time_save.txt", 1, 1
endin

instr 2; recording of a line transition between 0 and 1 for one second
kline     linseg    0, 1, 1
tabw      kline, kline, giControl, 1
endin

instr 3; saving giWave at k-time
ftsave    "k-time_save.txt", 1, 2
endin

</CsInstruments>
<CsScore>
i 1 0 0
i 2 0 1
i 3 1 .1
</CsScore>
</CsoundSynthesizer>
```

The counterpart to ftsave/ftsavek are the opcodes ftload/ftloadk. Using them you can load the saved files into function tables.
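A minimal sketch of loading a saved file back into a table might look like this (assuming "i-time_save.txt" exists in the same folder, as written by the previous example):

```<CsoundSynthesizer>
<CsInstruments>
giWave    ftgen     1, 0, 2^7, 10, 1 ;table to be overwritten by the load

instr 1
;arguments: filename, flag (non-zero = text format), table number(s)
ftload    "i-time_save.txt", 1, 1
endin
</CsInstruments>
<CsScore>
i 1 0 0
</CsScore>
</CsoundSynthesizer>
```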

### Writing a Soundfile from a Recorded Function Table

If you have recorded your live-input to a buffer, you may want to save your buffer as a soundfile. There is no opcode in Csound which does that, but it can be done by using a k-rate loop and the fout opcode. This is shown in the next example, in instrument 2. First instrument 1 records your live input. Then instrument 2 writes the file "testwrite.wav" into the same folder as your .csd. This is done at the first k-cycle of instrument 2, by reading again and again the table values and writing them as an audio signal to disk. After this is done, the instrument is turned off by executing the turnoff statement.

EXAMPLE 03D10.csd

```<CsoundSynthesizer>
<CsOptions>
-i adc
</CsOptions>
<CsInstruments>
;Example by Joachim Heintz
sr = 44100
ksmps = 32
nchnls = 2
0dbfs = 1
; --  size for 5 seconds of recording audio data
giAudio   ftgen     0, 0, -5*sr, 2, 0

instr 1 ;record live input
ktim      timeinsts ; playing time of the instrument in seconds
prints    "PLEASE GIVE YOUR LIVE INPUT AFTER THE BEEP!%n"
kBeepEnv  linseg    0, 1, 0, .01, 1, .5, 1, .01, 0
aBeep     oscils    .2, 600, 0
outs      aBeep*kBeepEnv, aBeep*kBeepEnv
;;record the audiosignal after 2 seconds
if ktim > 2 then
ain       inch      1
printks   "RECORDING LIVE INPUT!%n", 10
;create a writing pointer in the table,
;moving in 5 seconds from index 0 to the end
aindx     phasor    1/5
;write the k-signal
tablew    ain, aindx, giAudio, 1
endif
endin

instr 2; write the giAudio table to a soundfile
Soutname  =         "testwrite.wav"; name of the output file
iformat   =         14; write as 16 bit wav file
itablen   =         ftlen(giAudio); length of the table in samples

kcnt      init      0; set the counter to 0 at start
loop:
kcnt      =         kcnt+ksmps; next value (e.g. 10 if ksmps=10)
andx      interp    kcnt-1; calculate audio index (e.g. from 0 to 9)
asig      tab       andx, giAudio; read the table values as audio signal
fout      Soutname, iformat, asig; write asig to a file
if kcnt <= itablen-ksmps kgoto loop; go back as long as there is something to do
turnoff   ; terminate the instrument
endin

</CsInstruments>
<CsScore>
i 1 0 7
i 2 7 .1
</CsScore>
</CsoundSynthesizer>
```

This code can also be transformed into a User Defined Opcode (see TableToSF, mentioned below under ftsave/ftsavek).

### Related Opcodes

ftgen: Creates a function table in the orchestra using any GEN Routine.

table / tablei / table3: Read values from a function table at any rate, either by direct indexing (table), or by linear (tablei) or cubic (table3) interpolation. These opcodes provide many options and are safe because of boundary check, but you may have problems with non-power-of-two tables.

tab_i / tab: Read values from a function table at i-rate (tab_i), or at k-rate or a-rate (tab). They offer no interpolation and fewer options than the table opcodes, but they also work for non-power-of-two tables. They do not provide a boundary check, which makes them fast, but also gives the user the responsibility of not reading any values off the table boundaries.

tableiw / tablew: Write values to a function table at i-rate (tableiw), or at k-rate or a-rate (tablew). These opcodes provide many options and are safe because of their boundary check, but you may have problems with non-power-of-two tables.

tabw_i / tabw: Write values to a function table at i-rate (tabw_i), or at k-rate or a-rate (tabw). They offer fewer options than the tableiw/tablew opcodes, but also work for non-power-of-two tables. They do not provide a boundary check, which makes them fast, but also gives the user the responsibility of not writing any values off the table boundaries.

poscil / poscil3: Precise oscillators for reading function tables at k- or a-rate, with linear (poscil) or cubic (poscil3) interpolation. They also support non-power-of-two tables, so it is usually recommended to use them instead of the older oscili/oscil3 opcodes. poscil also accepts a-rate input for amplitude and frequency, while poscil3 accepts just k-rate input.

oscili / oscil3: The standard oscillators in Csound for reading function tables at k- or a-rate, with linear (oscili) or cubic (oscil3) interpolation. They support all rates for the amplitude and frequency input, but are restricted to power-of-two tables. Particularly for long tables and low frequencies they are not as precise as the poscil/poscil3 oscillators.

ftsave / ftsavek: Save a function table as a file, at i-time (ftsave) or k-time (ftsavek). This can be a text file or a binary file, but not a soundfile. If you want to save a soundfile, use the User Defined Opcode TableToSF.

ftload / ftloadk: Load a function table which has been written by ftsave/ftsavek.

line / linseg / phasor: Can be used to create index values which are needed to read/write k- or a-signals with the table/tablew or tab/tabw opcodes.

# TRIGGERING INSTRUMENT EVENTS

The basic concept of Csound from the early days of the program is still valid and fruitful because it is a familiar musical one. You create a set of instruments and instruct them to play at various times. These calls of instrument instances, and their execution, are called "instrument events".

This scheme of instruments and events can be instigated in a number of ways. In the classical approach you think of an "orchestra" with a number of musicians playing from a "score", but you can also trigger instruments using any kind of live input: from MIDI, from OSC, from the command line, from a GUI (such as Csound's FLTK widgets or QuteCsound's widgets), from the API (also used in QuteCsound's Live Event Sheet). Or you can create a kind of "master instrument", which is always on, and triggers other instruments using opcodes designed for this task, perhaps under certain conditions: if the live audio input from a singer has been detected to have a base frequency greater than 1043 Hz, then start an instrument which plays a soundfile of broken glass...
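The "broken glass" scenario can be sketched as a master instrument like the following (a hypothetical illustration, not from the original text: the ptrack settings, the threshold handling and the instrument numbers are assumptions, and instr 2 is imagined to play the glass soundfile):

```
instr 1 ;master: watch the pitch of the live input
aIn       inch      1
kCps, kAmp ptrack   aIn, 512 ;pitch tracking, hop size 512 samples
kState    init      0 ;0 = below threshold, 1 = above
if kState == 0 && kCps > 1043 then
event     "i", 2, 0, 1 ;trigger the "broken glass" instrument
kState    =         1
elseif kState == 1 && kCps < 1043 then
kState    =         0 ;re-arm once the pitch has fallen below again
endif
endin
```

The kState flag ensures that one crossing of the threshold triggers exactly one event, instead of one event per control cycle while the voice stays above 1043 Hz.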

This chapter is about the various ways to trigger instrument events whether that be from the score, by using MIDI, by using widgets, through using conditionals or by using loops.

## Order Of Execution

Whatever you do in Csound with instrument events, you must bear in mind the order of execution that has been explained in the first chapter of this section about the Initialization and Performance Pass: instruments are executed one by one, both in the initialization pass and in each control cycle, and the order is determined by the instrument number. So if you have an instrument which triggers another instrument, it should usually have the lower number. If, for instance, instrument 10 calls instrument 20 in a certain control cycle, instrument 20 will execute the event in the same control cycle. But if instrument 20 calls instrument 10, instrument 10 will execute the event only in the next control cycle.
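This ordering can be illustrated with a small sketch (not an example from the original text; the instrument numbers follow the description above): because instr 10 has the lower number, an event it generates for instr 20 is performed in the same control cycle, whereas the reverse order would delay the event by one cycle.

```
instr 10 ;lower number: performed first in each control cycle
kCycle    timeinstk ;number of control cycles since this instrument started
if kCycle == 1 then
event     "i", 20, 0, 1 ;instr 20 starts in this same control cycle
endif
endin

instr 20 ;higher number: performed after instr 10 in each control cycle
kCycle    timeinstk
printks   "instr 20 running, cycle %d%n", 1, kCycle
endin
```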

## Instrument Events From The Score

This is the classical way of triggering instrument events: you write a list in the score section of a .csd file. Each line which begins with an "i", is an instrument event. As this is very simple, and examples can be found easily, let us focus instead on some additional features which can be useful when you work in this way. Documentation for these features can be found in the Score Statements section of the Canonical Csound Reference Manual. Here are some examples:

EXAMPLE 03E01.csd

```<CsoundSynthesizer>
<CsOptions>
-odac
</CsOptions>
<CsInstruments>
;Example by Joachim Heintz
sr = 44100
ksmps = 32
nchnls = 2
0dbfs = 1

giWav     ftgen     0, 0, 2^10, 10, 1, .5, .3, .1

instr 1
kFadout   init      1
krel      release   ;returns "1" if last k-cycle
if krel == 1 && p3 < 0 then ;if so, and negative p3:
xtratim   .5       ;give 0.5 extra seconds
kFadout   linseg    1, .5, 0 ;and make fade out
endif
kEnv      linseg    0, .01, p4, abs(p3)-.1, p4, .09, 0; normal fade out
aSig      poscil    kEnv*kFadout, p5, giWav
outs      aSig, aSig
endin

</CsInstruments>
<CsScore>
t 0 120                      ;set tempo to 120 beats per minute
i    1    0    1    .2   400 ;play instr 1 for one second
i    1    2   -10   .5   500 ;play instr 1 indefinitely (negative p3)
i   -1    5    0             ;turn it off (negative p1)
; -- turn on instance 1 of instr 1 one sec after the previous start
i    1.1  ^+1  -10  .2   600
i    1.2  ^+2  -10  .2   700 ;another instance of instr 1
i   -1.2  ^+2  0             ;turn off 1.2
; -- turn off 1.1 (dot = same value as this p-field in the line above)
i   -1.1  ^+1  .
s                            ;end of a section, so time begins from new at zero
i    1    1    1    .2   800
r 5                          ;repeats the following line (until the next "s")
i    1   .25  .25   .2   900
s
v 2                          ;lets time be double as long
i    1    0    2    .2   1000
i    1    1    1    .2   1100
s
v 0.5                        ;lets time be half as long
i    1    0    2    .2   1200
i    1    1    1    .2   1300
s                            ;time is normal now again
i    1    0    2    .2   1000
i    1    1    1    .2   900
s
; -- make a score loop (4 times) with the variable "LOOP"
{4 LOOP
i    1    [0 + 4 * \$LOOP.]    3    .2   [1200 - \$LOOP. * 100]
i    1    [1 + 4 * \$LOOP.]    2    .    [1200 - \$LOOP. * 200]
i    1    [2 + 4 * \$LOOP.]    1    .    [1200 - \$LOOP. * 300]
}
e
</CsScore>
</CsoundSynthesizer>
```

Triggering an instrument with an indefinite duration by setting p3 to any negative value, and stopping it by a negative p1 value, can be an important feature for live events. If you turn instruments off in this way you may have to add a fade out segment. One method of doing this is shown in the instrument above with a combination of the release and the xtratim opcodes. Also note that you can start and stop certain instances of an instrument with a floating point number as p1.

## Using MIDI Noteon Events

Csound has a particular feature which makes it very simple to trigger instrument events from a MIDI keyboard. Each MIDI Note-On event can trigger an instrument, and the related Note-Off event of the same key stops the related instrument instance. This is explained more in detail in the chapter Triggering Instrument Instances in the MIDI section of this manual. Here, just a small example is shown. Simply connect your MIDI keyboard and it should work.

EXAMPLE 03E02.csd

```<CsoundSynthesizer>
<CsOptions>
-Ma -odac
</CsOptions>
<CsInstruments>
;Example by Joachim Heintz
sr = 44100
ksmps = 32
nchnls = 2
0dbfs = 1

giSine    ftgen     0, 0, 2^10, 10, 1
massign   0, 1; assigns all midi channels to instr 1

instr 1
iFreq     cpsmidi   ;gets frequency of a pressed key
iAmp      ampmidi   8 ;gets amplitude and scales 0-8
iRatio    random    .9, 1.1 ;ratio randomly between 0.9 and 1.1
aTone     foscili   .1, iFreq, 1, iRatio/5, iAmp+1, giSine ;fm
aEnv      linenr    aTone, 0, .01, .01 ; avoiding clicks at the note-end
outs      aEnv, aEnv
endin

</CsInstruments>
<CsScore>
f 0 36000; play for 10 hours
e
</CsScore>
</CsoundSynthesizer>
```

## Using Widgets

If you want to trigger an instrument event in realtime with a Graphical User Interface, it is usually a "Button" widget which will do this job. We will see here a simple example; first implemented using Csound's FLTK widgets, and then using QuteCsound's widgets.

### FLTK Button

This is a very simple example demonstrating how to trigger an instrument using an FLTK button. A more extended example can be found here.

EXAMPLE 03E03.csd

```<CsoundSynthesizer>
<CsOptions>
-odac
</CsOptions>
<CsInstruments>
;Example by Joachim Heintz
sr = 44100
ksmps = 32
nchnls = 2
0dbfs = 1

; -- create a FLTK panel --
FLpanel   "Trigger By FLTK Button", 300, 100, 100, 100
; -- trigger instr 1 (equivalent to the score line "i 1 0 1")
k1, ih1   FLbutton  "Push me!", 0, 0, 1, 150, 40, 10, 25, 0, 1, 0, 1
; -- trigger instr 2
k2, ih2   FLbutton  "Quit", 0, 0, 1, 80, 40, 200, 25, 0, 2, 0, 1
FLpanelEnd; end of the FLTK panel section
FLrun     ; run FLTK
seed      0; random seed different each time

instr 1
idur      random    .5, 3; recalculate instrument duration
p3        =         idur; reset instrument duration
ioct      random    8, 11; random values between 8th and 11th octave
idb       random    -18, -6; random values between -6 and -18 dB
aSig      oscils    ampdb(idb), cpsoct(ioct), 0
aEnv      transeg   1, p3, -10, 0
outs      aSig*aEnv, aSig*aEnv
endin

instr 2
exitnow
endin

</CsInstruments>
<CsScore>
f 0 36000
e
</CsScore>
</CsoundSynthesizer>
```

Note that in this example the duration of an instrument event is recalculated when the instrument is initialized. This is done using the statement "p3 = i...". This can be a useful technique if you want the duration that an instrument plays for to be different each time it is called. In this example the duration is the result of a random function. The duration defined by the FLTK button will be overwritten by any other calculation within the instrument itself at i-time.

### QuteCsound Button

In QuteCsound, a button can be created easily from the submenu in a widget panel:

In the Properties Dialog of the button widget, make sure you have selected "event" as Type. Insert a Channel name, and at the bottom type in the event you want to trigger - as you would if writing a line in the score.

In your Csound code, you need nothing more than the instrument you want to trigger:
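For instance, if the event line typed into the button's properties is "i 1 0 1", a .csd containing nothing but this instrument would be triggered (a minimal sketch, not from the original text):

```
instr 1 ;triggered by the button's event line, e.g. "i 1 0 1"
aSig      oscils    .2, 400, 0 ;simple sine beep
outs      aSig, aSig
endin
```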

For more information about QuteCsound, read the QuteCsound chapter in the 'Frontends' section of this manual.

## Using A Realtime Score (Live Event Sheet)

### Command Line With The -L stdin Option

If you use any .csd with the option "-L stdin" (and the -odac option for realtime output), you can type any score line in realtime (sorry, this does not work for Windows). For instance, save this .csd anywhere and run it from the command line:

EXAMPLE 03E04.csd

```<CsoundSynthesizer>
<CsOptions>
-L stdin -odac
</CsOptions>
<CsInstruments>
;Example by Joachim Heintz
sr = 44100
ksmps = 32
nchnls = 2
0dbfs = 1

seed      0; random seed different each time

instr 1
idur      random    .5, 3; calculate instrument duration
p3        =         idur; reset instrument duration
ioct      random    8, 11; random values between 8th and 11th octave
idb       random    -18, -6; random values between -6 and -18 dB
aSig      oscils    ampdb(idb), cpsoct(ioct), 0
aEnv      transeg   1, p3, -10, 0
outs      aSig*aEnv, aSig*aEnv
endin

</CsInstruments>
<CsScore>
f 0 36000
e
</CsScore>
</CsoundSynthesizer>
```

If you run it by typing and returning a commandline like this ...
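Assuming the file has been saved as "03E04.csd" and the csound command is in your path, the call might look like this:

```
csound 03E04.csd
```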

... you should get a prompt at the end of the Csound messages:

If you now type the line "i 1 0 1" and press return, you should hear that instrument 1 has been executed. After three times your messages may look like this:

### QuteCsound's Live Event Sheet

In general, this is the method that QuteCsound uses and it is made available to the user in a flexible environment called the Live Event Sheet. This is just a screenshot of the current (QuteCsound 0.6.0) example of the Live Event Sheet in QuteCsound:

Have a look in the QuteCsound frontend to see more of the possibilities of "firing" live instrument events using the Live Event Sheet.

## By Conditions

We first discussed the classical method of triggering instrument events from the score section of a .csd file, then went on to look at different methods of triggering real-time events: using MIDI, using widgets, and using score lines inserted live. We will now look at the Csound orchestra itself and at some methods by which an instrument can internally trigger another instrument. The pattern of triggering could be governed by conditionals or by different kinds of loops. As this "master" instrument can itself be triggered by a realtime event, you have unlimited options available for combining the different methods.

Let's start with conditionals. If we have a realtime input, we may want to define a threshold, and trigger an event

1. if we cross the threshold from below to above;
2. if we cross the threshold from above to below.

In Csound, this could be implemented using an orchestra of three instruments. The first instrument is the master instrument. It receives the input signal and investigates whether that signal is crossing the threshold and, if it does, whether it is crossing from low to high or from high to low. If it crosses the threshold from low to high, the second instrument is triggered; if it crosses from high to low, the third instrument is triggered.

EXAMPLE 03E05.csd

```<CsoundSynthesizer>
<CsOptions>
-iadc -odac
</CsOptions>
<CsInstruments>
;Example by Joachim Heintz
sr = 44100
ksmps = 32
nchnls = 2
0dbfs = 1

seed      0; random seed different each time

instr 1; master instrument
ichoose   =         p4; 1 = real time audio, 2 = random amplitude movement
ithresh   =         -12; threshold in dB
kstat     init      1; 1 = under the threshold, 2 = over the threshold
;;CHOOSE INPUT SIGNAL
if ichoose == 1 then
ain       inch      1
else
kdB       randomi   -18, -6, 1
ain       pinkish   ampdb(kdB)
endif
;;MEASURE AMPLITUDE AND TRIGGER SUBINSTRUMENTS IF THRESHOLD IS CROSSED
afoll     follow    ain, .1; measure mean amplitude each 1/10 second
kfoll     downsamp  afoll
if kstat == 1 && dbamp(kfoll) > ithresh then; transition down->up
event     "i", 2, 0, 1; call instr 2
printks   "Amplitude = %.3f dB%n", 0, dbamp(kfoll)
kstat     =         2; change status to "up"
elseif kstat == 2 && dbamp(kfoll) < ithresh then; transition up->down
event     "i", 3, 0, 1; call instr 3
printks   "Amplitude = %.3f dB%n", 0, dbamp(kfoll)
kstat     =         1; change status to "down"
endif
endin

instr 2; triggered if threshold has been crossed from down to up
asig      oscils    .2, 500, 0
aenv      transeg   1, p3, -10, 0
outs      asig*aenv, asig*aenv
endin

instr 3; triggered if threshold has been crossed from up to down
asig      oscils    .2, 400, 0
aenv      transeg   1, p3, -10, 0
outs      asig*aenv, asig*aenv
endin

</CsInstruments>
<CsScore>
i 1 0 1000 2 ;change p4 to "1" for live input
e
</CsScore>
</CsoundSynthesizer>
```

## Using i-Rate Loops For Calculating A Pool Of Instrument Events

You can perform a number of calculations at init-time which lead to a list of instrument events. In this way you are producing a score, but inside an instrument. The score events are then executed later.

Let us take this opportunity to introduce the scoreline / scoreline_i opcode. It is quite similar to the event / event_i opcode, but has two major benefits:

• You can write more than one scoreline by using "{{" at the beginning and "}}" at the end.
• You can send a string to the subinstrument (which is not possible with the event opcode).

Let's look at a simple example for executing score events from an instrument using the scoreline opcode:

EXAMPLE 03E06.csd

```<CsoundSynthesizer>
<CsOptions>
-odac
</CsOptions>
<CsInstruments>
;Example by Joachim Heintz
sr = 44100
ksmps = 32
nchnls = 2
0dbfs = 1

seed      0; random seed different each time

instr 1 ;master instrument with event pool
scoreline_i {{i 2 0 2 7.09
i 2 2 2 8.04
i 2 4 2 8.03
i 2 6 1 8.04}}
endin

instr 2 ;plays the notes
asig      pluck     .2, cpspch(p4), cpspch(p4), 0, 1
aenv      transeg   1, p3, 0, 0
outs      asig*aenv, asig*aenv
endin

</CsInstruments>
<CsScore>
i 1 0 7
e
</CsScore>
</CsoundSynthesizer>
```

With some justification you might say: "OK, that's nice, but I can also write score lines in the score itself!" That is true, but the advantage of the scoreline_i method is that you can generate the score events in an instrument and then send them to one or more instruments to execute them. This can be done with the sprintf opcode, which produces the string for scoreline in an i-time loop (see the chapter about control structures).

EXAMPLE 03E07.csd

```<CsoundSynthesizer>
<CsOptions>
-odac
</CsOptions>
<CsInstruments>
;Example by Joachim Heintz
sr = 44100
ksmps = 32
nchnls = 2
0dbfs = 1

giPch     ftgen     0, 0, 4, -2, 7.09, 8.04, 8.03, 8.04
seed      0; random seed different each time

instr 1 ; master instrument with event pool
itimes    =         7 ;number of events to produce
icnt      =         0 ;counter
istart    =         0
Slines    =         ""
loop:               ;start of the i-time loop
idur      random    1, 2.9999 ;duration of each note:
idur      =         int(idur) ;either 1 or 2
itabndx   random    0, 3.9999 ;index for the giPch table:
itabndx   =         int(itabndx) ;0-3
ipch      table     itabndx, giPch ;random pitch value from the table
Sline     sprintf   "i 2 %d %d %.2f\n", istart, idur, ipch ;new scoreline
Slines    strcat    Slines, Sline ;append to previous scorelines
istart    =         istart + idur ;recalculate start for next scoreline
loop_lt   icnt, 1, itimes, loop ;end of the i-time loop
puts      Slines, 1 ;print the scorelines
scoreline_i Slines ;execute them
iend      =         istart + idur ;calculate the total duration
p3        =         iend ;set p3 to the sum of all durations
print     p3 ;print it
endin

instr 2 ;plays the notes
asig      pluck     .2, cpspch(p4), cpspch(p4), 0, 1
aenv      transeg   1, p3, 0, 0
outs      asig*aenv, asig*aenv
endin

</CsInstruments>
<CsScore>
i 1 0 1 ;p3 is automatically set to the total duration
e
</CsScore>
</CsoundSynthesizer>
```

In this example, seven events have been generated in an i-time loop in instrument 1. The result is stored in the string variable Slines. This string is passed at i-time to scoreline_i, which then executes the events one by one according to their starting times (p2), durations (p3) and other parameters.

If you have many score lines which are added in this way, you may run up against Csound's maximum string length. By default, it is 255 characters. It can be extended to 9999 characters by adding the option "-+max_str_len=10000" to the CsOptions tag. Instead of collecting all score lines in a single string, you can also execute them inside the i-time loop; in this way, too, all the single score lines are added to Csound's event pool. The next example shows an alternative version of the previous one, adding the instrument events one by one in the i-time loop, either with event_i (instr 1) or with scoreline_i (instr 2):

EXAMPLE 03E08.csd

```<CsoundSynthesizer>
<CsOptions>
-odac
</CsOptions>
<CsInstruments>
;Example by Joachim Heintz
sr = 44100
ksmps = 32
nchnls = 2
0dbfs = 1

giPch     ftgen     0, 0, 4, -2, 7.09, 8.04, 8.03, 8.04
seed      0; random seed different each time

instr 1; master instrument with event_i
itimes    =         7; number of events to produce
icnt      =         0; counter
istart    =         0
loop:               ;start of the i-time loop
idur      random    1, 2.9999; duration of each note:
idur      =         int(idur); either 1 or 2
itabndx   random    0, 3.9999; index for the giPch table:
itabndx   =         int(itabndx); 0-3
ipch      table     itabndx, giPch; random pitch value from the table
event_i   "i", 3, istart, idur, ipch; new instrument event
istart    =         istart + idur; recalculate start for next scoreline
loop_lt   icnt, 1, itimes, loop; end of the i-time loop
iend      =         istart + idur; calculate the total duration
p3        =         iend; set p3 to the sum of all durations
print     p3; print it
endin

instr 2; master instrument with scoreline_i
itimes    =         7; number of events to produce
icnt      =         0; counter
istart    =         0
loop:               ;start of the i-time loop
idur      random    1, 2.9999; duration of each note:
idur      =         int(idur); either 1 or 2
itabndx   random    0, 3.9999; index for the giPch table:
itabndx   =         int(itabndx); 0-3
ipch      table     itabndx, giPch; random pitch value from the table
Sline     sprintf   "i 3 %d %d %.2f", istart, idur, ipch; new scoreline
scoreline_i Sline; execute it
puts      Sline, 1; print it
istart    =         istart + idur; recalculate start for next scoreline
loop_lt   icnt, 1, itimes, loop; end of the i-time loop
iend      =         istart + idur; calculate the total duration
p3        =         iend; set p3 to the sum of all durations
print     p3; print it
endin

instr 3; plays the notes
asig      pluck     .2, cpspch(p4), cpspch(p4), 0, 1
aenv      transeg   1, p3, 0, 0
outs      asig*aenv, asig*aenv
endin

</CsInstruments>
<CsScore>
i 1 0 1
i 2 14 1
e
</CsScore>
</CsoundSynthesizer>
```

## Using Time Loops

As discussed above in the chapter about control structures, a time loop can be built in Csound either with the timout opcode or with the metro opcode. There were also simple examples for triggering instrument events using both methods. Here, a more complex example is given: a master instrument performs a time loop (choose either instr 1 for the timout method or instr 2 for the metro method) and triggers a subinstrument once per loop. The subinstrument itself (instr 10) performs an i-time loop and triggers several instances of a sub-subinstrument (instr 100). Each instance plays one partial with an independent envelope for a bell-like additive synthesis.

EXAMPLE 03E09.csd

```<CsoundSynthesizer>
<CsOptions>
-odac
</CsOptions>
<CsInstruments>
;Example by Joachim Heintz
sr = 44100
ksmps = 32
nchnls = 2
0dbfs = 1

giSine    ftgen     0, 0, 2^10, 10, 1
seed      0

instr 1; time loop with timout. events are triggered by event_i (i-rate)
loop:
idurloop  random    1, 4; duration of each loop
timout    0, idurloop, play
reinit    loop
play:
idurins   random    1, 5; duration of the triggered instrument
event_i   "i", 10, 0, idurins; triggers instrument 10
endin

instr 2; time loop with metro. events are triggered by event (k-rate)
kfreq     init      1; give a start value for the trigger frequency
kTrig     metro     kfreq
if kTrig == 1 then ;if trigger impulse:
kdur      random    1, 5; random duration for instr 10
event     "i", 10, 0, kdur; call instr 10
kfreq     random    .25, 1; set new value for trigger frequency
endif
endin

instr 10; triggers 8-13 partials
inumparts random    8, 14
inumparts =         int(inumparts); 8-13 as integer
ibasoct   random    5, 10; base pitch in octave values
ibasfreq  =         cpsoct(ibasoct)
ipan      random    .2, .8; random panning between left (0) and right (1)
icnt      =         0; counter
loop:
event_i   "i", 100, 0, p3, ibasfreq, icnt+1, inumparts, ipan
loop_lt   icnt, 1, inumparts, loop
endin

instr 100; plays one partial
ibasfreq  =         p4; base frequency of sound mixture
ipartnum  =         p5; which partial is this (1 - N)
inumparts =         p6; total number of partials
ipan      =         p7; panning
ifreqgen  =         ibasfreq * ipartnum; general frequency of this partial
ifreqdev  random    -10, 10; frequency deviation between -10% and +10%
; -- real frequency regarding deviation
ifreq     =         ifreqgen + (ifreqdev*ifreqgen)/100
ixtratim  random    0, p3; calculate additional time for this partial
p3        =         p3 + ixtratim; new duration of this partial
imaxamp   =         1/inumparts; maximum amplitude
idbdev    random    -6, 0; random deviation in dB for this partial
iamp      =   imaxamp * ampdb(idbdev-ipartnum); higher partials are softer
ipandev   random    -.1, .1; panning deviation
ipan      =         ipan + ipandev
aEnv      transeg   0, .005, 0, iamp, p3-.005, -10, 0
aSine     poscil    aEnv, ifreq, giSine
aL, aR    pan2      aSine, ipan
outs      aL, aR
prints    "ibasfreq = %d, ipartial = %d, ifreq = %d%n",
ibasfreq, ipartnum, ifreq
endin

</CsInstruments>
<CsScore>
i 1 0 300 ;try this, or the next line (or both)
;i 2 0 300
</CsScore>
</CsoundSynthesizer>

```

## Links And Related Opcodes

### Links

A great collection of interactive examples with FLTK widgets by Iain McCurdy can be found here. See particularly the "Realtime Score Generation" section. Recently, the collection has been ported to QuteCsound by René Jopi, and is part of QuteCsound's example menu.

An extended example for calculating score events at i-time can be found in the Re-Generation of Stockhausen's "Studie II" by Joachim Heintz (also included in the QuteCsound Examples menu).

### Related Opcodes

event_i / event: Generate an instrument event at i-time (event_i) or at k-time (event). Easy to use, but you cannot send a string to the subinstrument.

scoreline_i / scoreline: Generate one or more instrument events at i-time (scoreline_i) or at k-time (scoreline). Like event_i/event, but you can send lines to more than one instrument at once, and unlike event_i/event you can send strings. On the other hand, you must usually preformat your scoreline string using sprintf.

sprintf / sprintfk: Generate a formatted string at i-time (sprintf) or k-time (sprintfk), and store it as a string-variable.

-+max_str_len=10000: Option in the "CsOptions" tag of a .csd file which extends the maximum string length to 9999 characters.

massign: Assigns the incoming MIDI events to a particular instrument. It is also possible to prevent any assignment using this opcode.

cpsmidi / ampmidi: Returns the frequency / velocity of a pressed MIDI key.

release: Returns "1" if the last k-cycle of an instrument has begun.

xtratim: Adds an additional time to the duration (p3) of an instrument.

turnoff / turnoff2: Turns an instrument off; either by the instrument itself (turnoff), or from another instrument and with several options (turnoff2).

-p3 / -p1: A negative duration (p3) turns an instrument on "indefinitely"; a negative instrument number (p1) turns this instrument off. See the examples at the beginning of this chapter.

-L stdin: Option in the "CsOptions" tag of a .csd file which lets you type in realtime score events.

timout: Allows you to perform time loops at i-time with reinitialization passes.

metro: Outputs momentary 1s with a definable (and variable) frequency. Can be used to perform a time loop at k-rate.

follow: Envelope follower.

# USER DEFINED OPCODES

Opcodes are the core units of everything that Csound does. They are like little machines that do a job, and programming is akin to connecting these little machines to perform a larger job. An opcode usually has something which goes into it: the inputs or arguments, and usually it has something which comes out of it: the output which is stored in one or more variables. Opcodes are written in the programming language C (that is where the name "Csound" comes from). If you want to create a new opcode in Csound, you must write it in C. How to do this is described in the Extending Csound chapter of this manual, and is also described in the relevant chapter of the Canonical Csound Reference Manual.

There is, however, a way of writing your own opcodes in the Csound language itself. The opcodes which are written in this way are called User Defined Opcodes or "UDOs". A UDO behaves in the same way as a standard opcode: it has input arguments and usually one or more output variables. They run at i-time or at k-time. You use them as part of the Csound language after you have defined and loaded them.

User Defined Opcodes have many valuable properties. They make your instrument code clearer because they allow you to create abstractions of blocks of code. Once a UDO has been defined it can be recalled and repeated many times within an orchestra, each repetition requiring only a single line of code. UDOs allow you to build up your own library of functions you need and return to frequently in your work. In this way, you build your own Csound dialect within the Csound Language. UDOs also represent a convenient format with which to share your work in Csound with other users.

This chapter explains, initially with a very basic example, how you can build your own UDOs, and what options they offer. Following this, the practice of loading UDOs in your .csd file is shown, followed by some tips in regard to some unique capabilities of UDOs. Before the "Links And Related Opcodes" section at the end, some examples are shown for different User Defined Opcode definitions and applications.

## Transforming Csound Instrument Code To A User Defined Opcode

Writing a User Defined Opcode is actually very easy and straightforward. It mainly means to extract a portion of usual Csound instrument code, and put it in the frame of a UDO. Let's start with the instrument code:

EXAMPLE 03F01.csd

```<CsoundSynthesizer>
<CsOptions>
-odac
</CsOptions>
<CsInstruments>
;Example by Joachim Heintz
sr = 44100
ksmps = 32
nchnls = 2
0dbfs = 1

giSine    ftgen     0, 0, 2^10, 10, 1
seed      0

instr 1
aDel      init      0; initialize delay signal
iFb       =         .7; feedback multiplier
aSnd      rand      .2; white noise
kdB       randomi   -18, -6, .4; random movement between -18 and -6
aSnd      =         aSnd * ampdb(kdB); applied as dB to noise
kFiltFq   randomi   100, 1000, 1; random movement between 100 and 1000
aFilt     reson    aSnd, kFiltFq, kFiltFq/5; applied as filter center frequency
aFilt     balance   aFilt, aSnd; bring aFilt to the volume of aSnd
aDelTm    randomi   .1, .8, .2; random movement between .1 and .8 as delay time
aDel      vdelayx   aFilt + iFb*aDel, aDelTm, 1, 128; variable delay
kdbFilt   randomi   -12, 0, 1; two random movements between -12 and 0 (dB) ...
kdbDel    randomi   -12, 0, 1; ... for the filtered and the delayed signal
aOut      =         aFilt*ampdb(kdbFilt) + aDel*ampdb(kdbDel); mix it
outs      aOut, aOut
endin

</CsInstruments>
<CsScore>
i 1 0 60
</CsScore>
</CsoundSynthesizer>
```

This is filtered noise and its delay, which is fed back into the delay line at a certain ratio iFb. The filter frequency kFiltFq moves randomly between 100 and 1000 Hz. The volume of the filtered noise moves as kdB randomly between -18 dB and -6 dB. The delay time moves between 0.1 and 0.8 seconds, and both signals are then mixed together.

### Basic Example

If this signal processing unit is to be transformed into a User Defined Opcode, the first question concerns the extent of the code to be encapsulated: where will the UDO code begin and end? The first solution could be a radical, and possibly bad, approach: to transform the whole instrument into a UDO.

EXAMPLE 03F02.csd

```<CsoundSynthesizer>
<CsOptions>
</CsOptions>
<CsInstruments>
;Example by Joachim Heintz
sr = 44100
ksmps = 32
nchnls = 2
0dbfs = 1

giSine    ftgen     0, 0, 2^10, 10, 1
seed      0

opcode FiltFb, 0, 0
aDel      init      0; initialize delay signal
iFb       =         .7; feedback multiplier
aSnd      rand      .2; white noise
kdB       randomi   -18, -6, .4; random movement between -18 and -6
aSnd      =         aSnd * ampdb(kdB); applied as dB to noise
kFiltFq   randomi   100, 1000, 1; random movement between 100 and 1000
aFilt     reson    aSnd, kFiltFq, kFiltFq/5; applied as filter center frequency
aFilt     balance   aFilt, aSnd; bring aFilt to the volume of aSnd
aDelTm    randomi   .1, .8, .2; random movement between .1 and .8 as delay time
aDel      vdelayx   aFilt + iFb*aDel, aDelTm, 1, 128; variable delay
kdbFilt   randomi   -12, 0, 1; two random movements between -12 and 0 (dB) ...
kdbDel    randomi   -12, 0, 1; ... for the filtered and the delayed signal
aOut      =         aFilt*ampdb(kdbFilt) + aDel*ampdb(kdbDel); mix it
outs      aOut, aOut
endop

instr 1
FiltFb
endin

</CsInstruments>
<CsScore>
i 1 0 60
</CsScore>
</CsoundSynthesizer>
```

Before we continue the discussion about the quality of this transformation, we should first have a look at the syntax. The general syntax for a User Defined Opcode is:

```opcode name, outtypes, intypes
...
endop
```

Here, the name of the UDO is FiltFb. You are free to use any name, but it is suggested that you begin the name with a capital letter. By doing this, you avoid clashing with most of the pre-existing opcodes, which normally start with a lower case letter (the FLTK and STK opcodes are exceptions that begin with capital letters). As we have no input arguments and no output arguments for this first version of FiltFb, both outtypes and intypes are set to zero. Similar to the instr ... endin block of a normal instrument definition, for a UDO the opcode ... endop keywords begin and end the UDO definition block. In the instrument, the UDO is called like a normal opcode by using its name, and in the same line the input arguments are listed on the right and the output arguments on the left. In the previous example, FiltFb has no input and output arguments, so it is called by just using its name:

```instr 1
FiltFb
endin
```

Now - why is this UDO more or less useless? It achieves nothing when compared to the original non-UDO version, and in fact loses some of the advantages of the instrument-defined version. Firstly, it is not advisable to include this line in the UDO:

```          outs      aOut, aOut
```

This statement writes the audio signal aOut from inside the UDO to the output device. Imagine you want to change the output channels, or you want to add any signal modifier after the opcode. This would be impossible with this statement. So instead of including the 'outs' opcode, we give the FiltFb UDO an audio output:

```          xout      aOut
```

The xout statement of a UDO definition works like the "outlets" in PD or Max, sending the result(s) of an opcode back to the caller instrument.

Now let us consider the UDO's input arguments, decide which processes should be carried out within the FiltFb unit, and consider which aspects would offer greater flexibility if controllable from outside the UDO. First, the aSnd parameter should not be restricted to white noise with amplitude 0.2, but should be an input (like a "signal inlet" in PD/Max). This is implemented using the line:

```aSnd      xin
```

Both the output and the input types must be declared in the first line of the UDO definition, whether they are i-, k- or a-variables. So instead of "opcode FiltFb, 0, 0" the statement has now changed to "opcode FiltFb, a, a", because we have both input and output as a-variables.
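As a sketch, the general shape of a UDO with one audio input and one audio output then looks like this (the name MyProcess and the simple attenuation are just illustrative placeholders, not part of the example above):

```
opcode MyProcess, a, a
aIn       xin                 ;receive the audio input from the caller
aOut      =         aIn * .5  ;any audio processing would go here
          xout      aOut      ;return the result to the caller
endop
```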

The UDO is now much more flexible and logical: it takes any audio input, it performs the filtered delay and feedback processing, and returns the result as another audio signal. In the next example, instrument 1 does exactly the same as before. Instrument 2 has live input instead.

EXAMPLE 03F03.csd

```<CsoundSynthesizer>
<CsOptions>
</CsOptions>
<CsInstruments>
;Example by Joachim Heintz
sr = 44100
ksmps = 32
nchnls = 2
0dbfs = 1

giSine    ftgen     0, 0, 2^10, 10, 1
seed      0

opcode FiltFb, a, a
aSnd      xin
aDel      init      0; initialize delay signal
iFb       =         .7; feedback multiplier
kdB       randomi   -18, -6, .4; random movement between -18 and -6
aSnd      =         aSnd * ampdb(kdB); applied as dB to noise
kFiltFq   randomi   100, 1000, 1; random movement between 100 and 1000
aFilt     reson    aSnd, kFiltFq, kFiltFq/5; applied as filter center frequency
aFilt     balance   aFilt, aSnd; bring aFilt to the volume of aSnd
aDelTm    randomi   .1, .8, .2; random movement between .1 and .8 as delay time
aDel      vdelayx   aFilt + iFb*aDel, aDelTm, 1, 128; variable delay
kdbFilt   randomi   -12, 0, 1; two random movements between -12 and 0 (dB) ...
kdbDel    randomi   -12, 0, 1; ... for the filtered and the delayed signal
aOut      =         aFilt*ampdb(kdbFilt) + aDel*ampdb(kdbDel); mix it
xout      aOut
endop

instr 1; white noise input
aSnd      rand      .2
aOut      FiltFb    aSnd
outs      aOut, aOut
endin

instr 2; live audio input
aSnd      inch      1; input from channel 1
aOut      FiltFb    aSnd
outs      aOut, aOut
endin

</CsInstruments>
<CsScore>
i 1 0 60 ;change to i 2 for live audio input
</CsScore>
</CsoundSynthesizer>
```

### Is There An Optimal Design For A User Defined Opcode?

Is this now the optimal version of the FiltFb User Defined Opcode? Obviously there are other parts of the opcode definition which could be controllable from outside: the feedback multiplier iFb, the random movement of the input signal kdB, the random movement of the filter frequency kFiltFq, and the random movements of the output mix kdbFilt and kdbDel. Is it better to put them outside of the opcode definition, or is it better to leave them inside?

There is no general answer. It depends on the degree of abstraction you desire, or are prepared to relinquish. If you are working on a piece for which all of the parameter settings are already defined as required in the UDO, then control from the caller instrument may not be necessary. The advantage of minimizing the number of input and output arguments is simplicity of use. The more flexibility you require from your UDO, however, the greater the number of input arguments that will be required. Providing more control is better for later reusability, but may be unnecessarily complicated.

Perhaps the best solution is to have one abstract definition which performs one task, and to create a derivative - also as a UDO - fine tuned for the particular project you are working on. The final example demonstrates the definition of a general and more abstract UDO FiltFb, and its various applications: instrument 1 defines the specifications in the instrument itself; instrument 2 uses a second UDO Opus123_FiltFb for this purpose; instrument 3 sets the general FiltFb in a new context of two varying delay lines with a buzz sound as input signal.

EXAMPLE 03F04.csd

```<CsoundSynthesizer>
<CsOptions>
-odac
</CsOptions>
<CsInstruments>
;Example by Joachim Heintz
sr = 44100
ksmps = 32
nchnls = 2
0dbfs = 1

giSine    ftgen     0, 0, 2^10, 10, 1
seed      0

opcode FiltFb, aa, akkkia
; -- DELAY AND FEEDBACK OF A BAND FILTERED INPUT SIGNAL --
;input: aSnd = input sound
; kFb = feedback multiplier (0-1)
; kFiltFq: center frequency for the reson band filter (Hz)
; kQ = band width of reson filter as kFiltFq/kQ
; iMaxDel = maximum delay time in seconds
; aDelTm = delay time
;output: aFilt = filtered and balanced aSnd
; aDel = delay and feedback of aFilt

aSnd, kFb, kFiltFq, kQ, iMaxDel, aDelTm xin
aDel      init      0
aFilt     reson     aSnd, kFiltFq, kFiltFq/kQ
aFilt     balance   aFilt, aSnd
aDel      vdelayx   aFilt + kFb*aDel, aDelTm, iMaxDel, 128; variable delay
xout      aFilt, aDel
endop

opcode Opus123_FiltFb, a, a
;;the udo FiltFb here in my opus 123 :)
;input = aSnd
;output = filtered and delayed aSnd in different mixtures
aSnd      xin
kdB       randomi   -18, -6, .4; random movement between -18 and -6
aSnd      =         aSnd * ampdb(kdB); applied as dB to noise
kFiltFq   randomi   100, 1000, 1; random movement between 100 and 1000
iQ        =         5
iFb       =         .7; feedback multiplier
aDelTm    randomi   .1, .8, .2; random movement between .1 and .8 as delay time
aFilt, aDel FiltFb    aSnd, iFb, kFiltFq, iQ, 1, aDelTm
kdbFilt   randomi   -12, 0, 1; two random movements between -12 and 0 (dB) ...
kdbDel    randomi   -12, 0, 1; ... for the noise and the delay signal
aOut      =         aFilt*ampdb(kdbFilt) + aDel*ampdb(kdbDel); mix it
xout      aOut
endop

instr 1; well known context as instrument
aSnd      rand      .2
kdB       randomi   -18, -6, .4; random movement between -18 and -6
aSnd      =         aSnd * ampdb(kdB); applied as dB to noise
kFiltFq   randomi   100, 1000, 1; random movement between 100 and 1000
iQ        =         5
iFb       =         .7; feedback multiplier
aDelTm    randomi   .1, .8, .2; random movement between .1 and .8 as delay time
aFilt, aDel FiltFb    aSnd, iFb, kFiltFq, iQ, 1, aDelTm
kdbFilt   randomi   -12, 0, 1; two random movements between -12 and 0 (dB) ...
kdbDel    randomi   -12, 0, 1; ... for the noise and the delay signal
aOut      =         aFilt*ampdb(kdbFilt) + aDel*ampdb(kdbDel); mix it
aOut      linen     aOut, .1, p3, 3
outs      aOut, aOut
endin

instr 2; well known context UDO which embeds another UDO
aSnd      rand      .2
aOut      Opus123_FiltFb aSnd
aOut      linen     aOut, .1, p3, 3
outs      aOut, aOut
endin

instr 3; other context: two delay lines with buzz
kFreq     randomh   200, 400, .08; frequency for buzzer
aSnd      buzz      .2, kFreq, 100, giSine; buzzer as aSnd
kFiltFq   randomi   100, 1000, .2; center frequency
aDelTm1   randomi   .1, .8, .2; time for first delay line
aDelTm2   randomi   .1, .8, .2; time for second delay line
kFb1      randomi   .8, 1, .1; feedback for first delay line
kFb2      randomi   .8, 1, .1; feedback for second delay line
a0, aDel1 FiltFb    aSnd, kFb1, kFiltFq, 1, 1, aDelTm1; delay signal 1
a0, aDel2 FiltFb    aSnd, kFb2, kFiltFq, 1, 1, aDelTm2; delay signal 2
aDel1     linen     aDel1, .1, p3, 3
aDel2     linen     aDel2, .1, p3, 3
outs      aDel1, aDel2
endin

</CsInstruments>
<CsScore>
i 1 0 30
i 2 31 30
i 3 62 120
</CsScore>
</CsoundSynthesizer>
```

The good thing about these different possibilities of writing a more specialised or a more generalised UDO is that you need not decide at the beginning of your work. Just start with whichever formulation you find useful in a certain situation. If you continue and see that you should have some more parameters accessible, it should be easy to rewrite the UDO. Just be careful not to confuse the different versions you create: use names like Faulty1, Faulty2 etc. instead of overwriting Faulty. Making use of extensive commenting when you initially create the UDO will make it easier to adapt the UDO at a later time. What are the inputs (including the measurement units they use, such as Hertz or seconds)? What are the outputs? How you do this is up to you and depends on your style and preference.

## How To Use The User Defined Opcode Facility In Practice

In this section, we will address the main points of using UDOs: what you must bear in mind when loading them, what special features they offer, what restrictions you must be aware of and how you can build your own language with them.

### Loading User Defined Opcodes In The Orchestra Header

As can be seen from the examples above, User Defined Opcodes must be defined in the orchestra header (which is sometimes called "instrument 0"). Note that your opcode definitions must be the last part of all your orchestra header statements. The following usage results in an error, even though it is probably fair to regard Csound as intolerant in doing so - this intolerance may be removed in future versions of Csound.

EXAMPLE 03F05.csd

```<CsInstruments>
;Example by Joachim Heintz
sr = 44100
ksmps = 32
nchnls = 2
0dbfs = 1

opcode FiltFb, aa, akkkia
; -- DELAY AND FEEDBACK OF A BAND FILTERED INPUT SIGNAL --
;input: aSnd = input sound
; kFb = feedback multiplier (0-1)
; kFiltFq: center frequency for the reson band filter (Hz)
; kQ = band width of reson filter as kFiltFq/kQ
; iMaxDel = maximum delay time in seconds
; aDelTm = delay time
;output: aFilt = filtered and balanced aSnd
; aDel = delay and feedback of aFilt

aSnd, kFb, kFiltFq, kQ, iMaxDel, aDelTm xin
aDel      init      0
aFilt     reson     aSnd, kFiltFq, kFiltFq/kQ
aFilt     balance   aFilt, aSnd
aDel      vdelayx   aFilt + kFb*aDel, aDelTm, iMaxDel, 128; variable delay
xout      aFilt, aDel
endop

giSine    ftgen     0, 0, 2^10, 10, 1
seed      0

instr 1
...
```

Csound will complain about "misplaced opcodes", which means that the ftgen and seed statements must come before the opcode definitions.

### Loading A Set Of User Defined Opcodes

You can load as many User Defined Opcodes into a Csound orchestra as you wish. As long as they do not depend on each other, their order is arbitrary. If the UDO Opus123_FiltFb uses the UDO FiltFb in its definition (see the example above), you must first load FiltFb, and then Opus123_FiltFb. If not, you will get an error like this:

```orch compiler:
opcode  Opus123_FiltFb  a       a
error:  no legal opcode, line 25:
aFilt, aDel FiltFb    aSnd, iFb, kFiltFq, iQ, 1, aDelTm
```
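A sketch of the correct ordering, with the independent UDO loaded first (the bodies are abbreviated here):

```
opcode FiltFb, aa, akkkia
 ... ;definition of FiltFb, which depends on no other UDO
endop

opcode Opus123_FiltFb, a, a
 ... ;can safely call FiltFb here, because it is already loaded
endop
```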

### Loading By An #include File

Definitions of User Defined Opcodes can also be loaded into a .csd file by an "#include" statement. What you must do is the following:

1. Save your opcode definitions in a plain text file, for instance "MyOpcodes.txt".
2. If this file is in the same directory as your .csd file, you can just call it by the statement:
```#include "MyOpcodes.txt"
```
3. If "MyOpcodes.txt" is in a different directory, you must call it by the full path name, for instance:
```#include "/Users/me/Documents/Csound/UDO/MyOpcodes.txt"
```

As always, make sure that the "#include" statement is the last one in the orchestra header, and that the logical order is observed if one opcode depends on another.

If you work with User Defined Opcodes a lot and build up a collection of them, the #include feature allows you to easily import several, or all of them, into your .csd file.
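For instance, the end of an orchestra header might then look like this (the file names are of course just examples):

```
sr = 44100
ksmps = 32
nchnls = 2
0dbfs = 1

giSine    ftgen     0, 0, 2^10, 10, 1

#include "MyGeneralOpcodes.txt"
#include "MyProjectOpcodes.txt" ;may use opcodes from MyGeneralOpcodes.txt
```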

### The setksmps Feature

The ksmps assignment in the orchestra header cannot be changed during the performance of a .csd file. But in a User Defined Opcode you have the unique possibility of changing this value by a local assignment. If you use a setksmps statement in your UDO, you can have a locally smaller value for the number of samples per control cycle in the UDO. In the following example, the print statement in the UDO prints ten times for each time the instrument prints, because ksmps in the UDO is ten times smaller:

EXAMPLE 03F06.csd

```<CsoundSynthesizer>
<CsInstruments>
;Example by Joachim Heintz
sr = 44100
ksmps = 44100 ;very high because of printing

opcode Faster, 0, 0
setksmps 4410 ;local ksmps is 1/10 of global ksmps
printks "UDO print!%n", 0
endop

instr 1
printks "Instr print!%n", 0 ;print each control period (once per second)
Faster ;print 10 times per second because of local ksmps
endin

</CsInstruments>
<CsScore>
i 1 0 2
</CsScore>
</CsoundSynthesizer>
```

### Default Arguments

For i-time arguments, you can use a simple feature to set default values:

• "o" (instead of "i") defaults to 0
• "p" (instead of "i") defaults to 1
• "j" (instead of "i") defaults to -1

So you can omit these arguments - in this case the default values will be used. If you give an input argument instead, the default value will be overwritten:

EXAMPLE 03F07.csd

```<CsoundSynthesizer>
<CsInstruments>
;Example by Joachim Heintz

opcode Defaults, iii, opj
ia, ib, ic xin
xout ia, ib, ic
endop

instr 1
ia, ib, ic Defaults
print     ia, ib, ic
ia, ib, ic Defaults  10
print     ia, ib, ic
ia, ib, ic Defaults  10, 100
print     ia, ib, ic
ia, ib, ic Defaults  10, 100, 1000
print     ia, ib, ic
endin

</CsInstruments>
<CsScore>
i 1 0 0
</CsScore>
</CsoundSynthesizer>
```

### Recursive User Defined Opcodes

Recursion means that a function can call itself. This is a feature which can be useful in many situations. User Defined Opcodes can also be recursive. You can do many things with a recursive UDO which you cannot do in any other way, at least not in a similarly simple way. This is an example of generating eight partials by a recursive UDO. See the last example in the next section for a more musical application of a recursive UDO.

EXAMPLE 03F08.csd

```<CsoundSynthesizer>
<CsOptions>
</CsOptions>
<CsInstruments>
;Example by Joachim Heintz
sr = 44100
ksmps = 32
nchnls = 2
0dbfs = 1

opcode Recursion, a, iip
;input: frequency, number of partials, first partial (default=1)
ifreq, inparts, istart xin
iamp      =         1/inparts/istart ;decreasing amplitudes for higher partials
if istart < inparts then ;if inparts have not yet reached
acall     Recursion ifreq, inparts, istart+1 ;call another instance of this UDO
endif
aout      oscils    iamp, ifreq*istart, 0 ;execute this partial
aout      =         aout + acall ;add the audio signals
xout      aout
endop

instr 1
amix      Recursion 400, 8 ;8 partials with a base frequency of 400 Hz
aout      linen     amix, .01, p3, .1
outs      aout, aout
endin

</CsInstruments>
<CsScore>
i 1 0 1
</CsScore>
</CsoundSynthesizer>
```

## Examples

We will focus here on some examples which will hopefully show the wide range of uses for User Defined Opcodes. Some of them are adaptations of examples from previous chapters about the Csound syntax. Many more examples can be found in the User-Defined Opcode Database, edited by Steven Yi.

### Play A Mono Or Stereo Soundfile

Csound is often very strict and gives errors where other applications might 'turn a blind eye'. This is also the case if you read a soundfile using one of Csound's opcodes: soundin, diskin or diskin2. If your soundfile is mono, you must use the mono version, which has one audio signal as output. If your soundfile is stereo, you must use the stereo version, which outputs two audio signals. If you want a stereo output, but you happen to have a mono soundfile as input, you will get the error message:

```INIT ERROR in ...: number of output args inconsistent with number
of file channels
```

It may be more useful to have an opcode which works for both mono and stereo files as input. This is an ideal job for a UDO. Two versions are possible: FilePlay1 always returns one audio signal (if the file is stereo it uses just the first channel), while FilePlay2 always returns two audio signals (if the file is mono it duplicates this to both channels). We can use the default arguments to make these opcodes behave exactly like diskin2:

EXAMPLE 03F09.csd

```<CsoundSynthesizer>
<CsOptions>
-odac
</CsOptions>
<CsInstruments>
;Example by Joachim Heintz
sr = 44100
ksmps = 32
nchnls = 2
0dbfs = 1

opcode FilePlay1, a, Skoooooo
;gives mono output regardless of whether your soundfile is mono or stereo
;(if stereo, just the first channel is used)
;see diskin2 page of the csound manual for information about the input arguments
Sfil, kspeed, iskip, iloop, iformat, iwsize, ibufsize, iskipinit xin
ichn      filenchnls Sfil
if ichn == 1 then
aout      diskin2   Sfil, kspeed, iskip, iloop, iformat, iwsize, ibufsize, iskipinit
else
aout, a0  diskin2   Sfil, kspeed, iskip, iloop, iformat, iwsize, ibufsize, iskipinit
endif
xout      aout
endop

opcode FilePlay2, aa, Skoooooo
;gives stereo output regardless of whether your soundfile is mono or stereo
;see diskin2 page of the csound manual for information about the input arguments
Sfil, kspeed, iskip, iloop, iformat, iwsize, ibufsize, iskipinit xin
ichn      filenchnls Sfil
if ichn == 1 then
aL        diskin2    Sfil, kspeed, iskip, iloop, iformat, iwsize, ibufsize, iskipinit
aR        =          aL
else
aL, aR    diskin2    Sfil, kspeed, iskip, iloop, iformat, iwsize, ibufsize, iskipinit
endif
xout       aL, aR
endop

instr 1
aMono     FilePlay1  "fox.wav", 1
outs       aMono, aMono
endin

instr 2
aL, aR    FilePlay2  "fox.wav", 1
outs       aL, aR
endin

</CsInstruments>
<CsScore>
i 1 0 4
i 2 4 4
</CsScore>
</CsoundSynthesizer>
```

### Change The Content Of A Function Table

In example 03C11.csd, a function table was changed at performance time, once a second, by random deviations. This can easily be transformed into a User Defined Opcode. It takes the function table variable, a trigger signal, and the random deviation in percent as input. In each control cycle where the trigger signal is "1", the table values are read, the random deviation is applied, and the changed values are written back into the table. Here, the tab/tabw opcodes are used to make sure that non-power-of-two tables can also be used.

EXAMPLE 03F10.csd

```<CsoundSynthesizer>
<CsOptions>
-odac
</CsOptions>
<CsInstruments>
;Example by Joachim Heintz
sr = 44100
ksmps = 441
nchnls = 2
0dbfs = 1

giSine    ftgen     0, 0, 256, 10, 1; sine wave
seed      0; each time different seed

opcode TabDirtk, 0, ikk
;"dirties" a function table by applying random deviations at a k-rate trigger
;input: function table, trigger (1 = perform manipulation),
;deviation as percentage
ift, ktrig, kperc xin
if ktrig == 1 then ;just work if you get a trigger signal
kndx      =         0
loop:
krand     random    -kperc/100, kperc/100
kval      tab       kndx, ift; read old value
knewval   =         kval + (kval * krand); calculate new value
tabw      knewval, kndx, ift; write new value to the input table
loop_lt   kndx, 1, ftlen(ift), loop; loop construction
endif
endop

instr 1
kTrig     metro     1, .00001 ;trigger signal once per second
TabDirtk  giSine, kTrig, 10
aSig      poscil    .2, 400, giSine
outs      aSig, aSig
endin

</CsInstruments>
<CsScore>
i 1 0 10
</CsScore>
</CsoundSynthesizer>
```

Of course you can also change the content of a function table at init-time. The next example permutes a series of numbers randomly each time it is called. For this purpose, first the input function table iTabin is copied as iCopy. This is necessary because we do not want to change iTabin in any way. Next a random index in iCopy is created and the value at this location in iTabin is written at the beginning of iTabout, which contains the permuted results. At the end of this cycle, each value in iCopy which has a larger index than the one which has just been read, is shifted one position to the left. So now iCopy has become one position smaller - not in table size but in the number of values to read. This procedure is continued until all values from iCopy are reflected in iTabout:

EXAMPLE 03F11.csd

```<CsoundSynthesizer>
<CsInstruments>
;Example by Joachim Heintz

giVals    ftgen     0, 0, -12, -2, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12
seed      0; each time different seed

opcode TabPermRand_i, i, i
;permutes randomly the values of the input table
;and creates an output table for the result
iTabin    xin
itablen   =         ftlen(iTabin)
iTabout   ftgen     0, 0, -itablen, 2, 0 ;create empty output table
iCopy     ftgen     0, 0, -itablen, 2, 0 ;create empty copy of input table
tableicopy iCopy, iTabin ;write values of iTabin into iCopy
icplen    init      itablen ;number of values in iCopy
indxwt    init      0 ;index of writing in iTabout
loop:
indxrd    random    0, icplen - .0001; random read index in iCopy
indxrd    =         int(indxrd)
ival      tab_i     indxrd, iCopy; read the value
tabw_i    ival, indxwt, iTabout; write it to iTabout
; -- shift values in iCopy larger than indxrd one position to the left
shift:
if indxrd < icplen-1 then ;if indxrd has not been the last table value
ivalshft  tab_i     indxrd+1, iCopy ;take the value to the right ...
tabw_i    ivalshft, indxrd, iCopy ;...and write it to indxrd position
indxrd    =         indxrd + 1 ;then go to the next position
igoto     shift ;return to shift and see if there is anything left to do
endif
indxwt    =         indxwt + 1 ;increase the index of writing in iTabout
loop_gt   icplen, 1, 0, loop ;loop as long as there is a value in iCopy
ftfree    iCopy, 0 ;delete the copy table
xout      iTabout ;return the number of iTabout
endop

instr 1
iPerm     TabPermRand_i giVals ;perform permutation
;print the result
indx      =         0
Sres      =         "Result:"
print:
ival      tab_i     indx, iPerm
Sprint    sprintf   "%s %d", Sres, ival
Sres      =         Sprint
loop_lt   indx, 1, 12, print
puts      Sres, 1
endin

instr 2; the same but performed ten times
icnt      =         0
loop:
iPerm     TabPermRand_i giVals ;perform permutation
;print the result
indx      =         0
Sres      =         "Result:"
print:
ival      tab_i     indx, iPerm
Sprint    sprintf   "%s %d", Sres, ival
Sres      =         Sprint
loop_lt   indx, 1, 12, print
puts      Sres, 1
loop_lt   icnt, 1, 10, loop
endin

</CsInstruments>
<CsScore>
i 1 0 0
i 2 0 0
</CsScore>
</CsoundSynthesizer>
```

### Print The Content Of A Function Table

There is no opcode in Csound for printing the contents of a function table, but one can be created as a UDO. Again a loop is needed for checking the values and putting them into a string which can then be printed. In addition, some options can be given for the print precision and for the number of elements in a line.

EXAMPLE 03F12.csd

```<CsoundSynthesizer>
<CsOptions>
-ndm0 -+max_str_len=10000
</CsOptions>
<CsInstruments>
;Example by Joachim Heintz

gitab     ftgen     1, 0, -7, -2, 0, 1, 2, 3, 4, 5, 6
gisin     ftgen     2, 0, 128, 10, 1

opcode TableDumpSimp, 0, ijo
;prints the content of a table in a simple way
;input: function table, float precision while printing (default = 3),
;parameters per row (default = 10, maximum = 32)
ifn, iprec, ippr xin
iprec     =         (iprec == -1 ? 3 : iprec)
ippr      =         (ippr == 0 ? 10 : ippr)
iend      =         ftlen(ifn)
indx      =         0
Sformat   sprintf   "%%.%df\t", iprec
Sdump     =         ""
loop:
ival      tab_i     indx, ifn
Snew      sprintf   Sformat, ival
Sdump     strcat    Sdump, Snew
indx      =         indx + 1
imod      =         indx % ippr
if imod == 0 then
puts      Sdump, 1
Sdump     =         ""
endif
if indx < iend igoto loop
puts      Sdump, 1
endop

instr 1
TableDumpSimp p4, p5, p6
prints    "%n"
endin

</CsInstruments>
<CsScore>
;i1   st   dur   ftab   prec   ppr
i1    0    0     1      -1
i1    .    .     1       0
i1    .    .     2       3     10
i1    .    .     2       6     32
</CsScore>
</CsoundSynthesizer>
```

### A Recursive User Defined Opcode For Additive Synthesis

In the last example of the chapter about Triggering Instrument Events a number of partials were synthesized, each with a random frequency deviation of up to 10% compared to precise harmonic spectrum frequencies and a unique duration for each partial. This can also be written as a recursive UDO. Each UDO generates one partial, and calls the UDO again until the last partial is generated. Now the code can be reduced to two instruments: instrument 1 performs the time loop, calculates the basic values for one note, and triggers the event. Then instrument 11 is called which feeds the UDO with the values and passes the audio signals to the output.

EXAMPLE 03F13.csd

```<CsoundSynthesizer>
<CsOptions>
-odac
</CsOptions>
<CsInstruments>
;Example by Joachim Heintz
sr = 44100
ksmps = 32
nchnls = 2
0dbfs = 1

giSine    ftgen     0, 0, 2^10, 10, 1
seed      0

opcode PlayPartials, aa, iiipo
;plays inumparts partials with frequency deviation and own envelopes and
;durations for each partial
;input: ibasfreq = base frequency of sound mixture
; inumparts = total number of partials
; ipan = panning
; ipartnum = which partial is this (1 - N, default=1)
; ixtratim = extra time in addition to p3 needed for this partial (default=0)
ibasfreq, inumparts, ipan, ipartnum, ixtratim xin
ifreqgen  =         ibasfreq * ipartnum; general frequency of this partial
ifreqdev  random    -10, 10; frequency deviation between -10% and +10%
ifreq     =         ifreqgen + (ifreqdev*ifreqgen)/100; real frequency
ixtratim1 random    0, p3; calculate additional time for this partial
imaxamp   =         1/inumparts; maximum amplitude
idbdev    random    -6, 0; random deviation in dB for this partial
iamp      =        imaxamp * ampdb(idbdev-ipartnum); higher partials are softer
ipandev   random    -.1, .1; panning deviation
ipan      =         ipan + ipandev
aEnv      transeg   0, .005, 0, iamp, p3+ixtratim1-.005, -10, 0; envelope
aSine     poscil    aEnv, ifreq, giSine
aL1, aR1  pan2      aSine, ipan
if ixtratim1 > ixtratim then
ixtratim  =  ixtratim1 ;set ixtratim to the ixtratim1 if the latter is larger
endif
if ipartnum < inumparts then ;if this is not the last partial
; -- call the next one
aL2, aR2  PlayPartials ibasfreq, inumparts, ipan, ipartnum+1, ixtratim
else               ;if this is the last partial
p3        =         p3 + ixtratim; reset p3 to the longest ixtratim value
endif
xout      aL1+aL2, aR1+aR2
endop

instr 1; time loop with metro
kfreq     init      1; give a start value for the trigger frequency
kTrig     metro     kfreq
if kTrig == 1 then ;if trigger impulse:
kdur      random    1, 5; random duration for instr 11
knumparts random    8, 14
knumparts =         int(knumparts); 8-13 partials
kbasoct   random    5, 10; base pitch in octave values
kbasfreq  =         cpsoct(kbasoct) ;base frequency
kpan      random    .2, .8; random panning between left (0) and right (1)
event     "i", 11, 0, kdur, kbasfreq, knumparts, kpan; call instr 11
kfreq     random    .25, 1; set new value for trigger frequency
endif
endin

instr 11; plays one mixture with 8-13 partials
aL, aR    PlayPartials p4, p5, p6
outs      aL, aR
endin

</CsInstruments>
<CsScore>
i 1 0 300
</CsScore>
</CsoundSynthesizer>
```

### Using Strings as Arrays

For some situations it can be very useful to treat a string in Csound as a collection of single strings or numbers - what programming languages call a list or an array. Csound does not provide opcodes for this purpose, but you can define them as UDOs. A set of these UDOs can then be used like this:

```ilen       StrayLen     "a b c d e"
ilen -> 5
Sel        StrayGetEl   "a b c d e", 0
Sel -> "a"
inum       StrayGetNum  "1 2 3 4 5", 0
inum -> 1
ipos       StrayElMem   "a b c d e", "c"
ipos -> 2
ipos       StrayNumMem  "1 2 3 4 5", 3
ipos -> 2
Sres       StraySetEl   "a b c d e", "go", 0
Sres -> "go a b c d e"
Sres       StraySetNum  "1 2 3 4 5", 0, 0
Sres -> "0 1 2 3 4 5"
Srev       StrayRev     "a b c d e"
Srev -> "e d c b a"
Sub        StraySub     "a b c d e", 1, 3
Sub -> "b c"
Sout       StrayRmv     "a b c d e", "b d"
Sout -> "a c e"
Srem       StrayRemDup  "a b a c c d e e"
Srem -> "a b c d e"
ift,iftlen StrayNumToFt "1 2 3 4 5", 1
ift -> 1 (same as f 1 0 -5 -2 1 2 3 4 5)
iftlen -> 5
```

You can find an article about defining such a sub-language here, and the up-to-date UDO code here (or at the UDO repository).

## Links And Related Opcodes

### Links

This is the page in the Canonical Csound Reference Manual about the definition of UDOs.

The most important resource of User Defined Opcodes is the User-Defined Opcode Database, edited by Steven Yi.

Also by Steven Yi, read the second part of his article about control flow in Csound in the Csound Journal (summer 2006).

### Related Opcodes

opcode: The opcode used to begin a User Defined Opcode definition.

#include: Useful to include any loadable Csound code, in this case definitions of User Defined Opcodes.

setksmps: Lets you set a smaller ksmps value locally in a User Defined Opcode.

# MACROS

Macros within Csound provide a mechanism whereby a line or a block of text can be referenced using a macro codeword. Whenever the codeword is subsequently encountered in a Csound orchestra or score, it will be replaced by the code text contained within the macro. This mechanism can be useful in situations where a line or a block of code will be repeated many times: if a change is required in the code that will be repeated, it need only be altered once in the macro definition rather than having to be edited in each of the repetitions.

Csound utilises a subtly different mechanism for orchestra and score macros so each will be considered in turn. There are also additional features offered by the macro system such as the ability to create a macro that accepts arguments - a little like the main macro containing sub-macros that can be repeated several times within the main macro - the inclusion of a block of text contained within a completely separate file and other macro refinements.

It is important to realise that a macro can contain any text, including carriage returns, and that Csound will be ignorant of its syntax until the macro is actually used and expanded elsewhere in the orchestra or score.

## Orchestra Macros

Macros are defined using the syntax:

```#define NAME # replacement text #
```

'NAME' is the user-defined name that will be used to call the macro at some point later in the orchestra; it must begin with a letter but can then contain any combination of letters and numbers. 'replacement text', bounded by hash symbols, will be the text that replaces the macro name when it is later called. Remember that the replacement text can stretch over several lines. One syntactical aspect to note is that '#define' needs to be right at the beginning of a line, i.e. the Csound parser will be intolerant of the initial '#' being preceded by any white space, whether spaces or tabs. A macro can be defined anywhere within the <CsInstruments> </CsInstruments> sections of a .csd file.

When it is desired to use and expand the macro later in the orchestra, the macro name needs to be preceded with a '$' symbol thus:

```  $NAME
```

The following example illustrates the basic syntax needed to employ macros. The name of a sound file is referenced twice in the orchestra so it is defined as a macro just after the header statements. Instrument 1 derives the duration of the sound file and instructs instrument 2 to play a note for this duration. Instrument 2 plays the sound file. The score as defined in the <CsScore> </CsScore> section only lasts for 0.01 seconds but the event_i statement in instrument 1 will extend this for the required duration. The sound file is a mono file so you can replace it with any other mono file or use the original one.

#### EXAMPLE 03G01.csd

```<CsoundSynthesizer>

<CsOptions>
-odac
</CsOptions>

<CsInstruments>
sr      =       44100
ksmps   =       16
nchnls  =       1
0dbfs   =       1

; define the macro
#define SOUNDFILE # "loop.wav" #

instr  1
; use an expansion of the macro in deriving the duration of the sound file
idur  filelen   $SOUNDFILE
event_i   "i",2,0,idur
endin

instr  2
; use another expansion of the macro in playing the sound file
a1  diskin2  $SOUNDFILE,1
out      a1
endin

</CsInstruments>

<CsScore>
i 1 0 0.01
e
</CsScore>
</CsoundSynthesizer>
; example written by Iain McCurdy
```

In more complex situations where we require slight variations, such as different constant values or different sound files in each reuse of the macro, we can use a macro with arguments. A macro's arguments are defined as a list of sub-macro names within brackets after the name of the primary macro, each separated by an apostrophe, as shown below.

```#define NAME(Arg1'Arg2'Arg3...) # replacement text #
```

Arguments can be any text string permitted as Csound code; they should not be likened to opcode arguments, where each must conform to a certain type such as i, k or a. Macro arguments are subsequently referenced in the macro text using their names preceded by a '$' symbol. When the main macro is called later in the orchestra, its arguments are replaced with the required values or strings. The Csound Reference Manual states that up to five arguments are permitted, but this refers to an earlier implementation; in fact many more are permitted.
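Since macro arguments are nothing more than text substitution, the mechanism can be mimicked outside Csound. The following plain-Python sketch (the function name and sample strings are hypothetical, chosen to echo the macro in the next example) shows how a macro body with `$`-prefixed argument names expands before the orchestra is parsed:

```python
# Plain-Python sketch of Csound's macro-argument expansion: each $Name in
# the macro body is replaced by the text passed for that argument. This is
# purely illustrative; Csound performs this substitution internally.

def expand_macro(template, arg_names, arg_values):
    """Replace each $Name in the template with the corresponding text."""
    text = template
    for name, value in zip(arg_names, arg_values):
        text = text.replace("$" + name, value)
    return text

template = "a1  poscil  iamp, $Frq*$Ratio1, gisine"
print(expand_macro(template, ["Frq", "Ratio1"], ["p4", "3.932"]))
# a1  poscil  iamp, p4*3.932, gisine
```

Note that the arguments are inserted as raw text, not evaluated values - which is exactly why a macro argument can be a p-field name, an expression, or any other fragment of Csound code.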

In the following example a six-partial additive synthesis engine with a percussive character is defined within a macro. Its fundamental frequency and the ratios of its six partials to this fundamental frequency are prescribed as macro arguments. The macro is reused within the orchestra twice to create two different timbres; it could be reused many more times. The fundamental frequency argument is passed to the macro as p4 from the score.

#### EXAMPLE 03G02.csd

```<CsoundSynthesizer>

<CsOptions>
-odac
</CsOptions>

<CsInstruments>
sr      =       44100
ksmps   =       16
nchnls  =       1
0dbfs   =       1

gisine  ftgen  0,0,2^10,10,1

; define the macro
#define ADDITIVE_TONE(Frq'Ratio1'Ratio2'Ratio3'Ratio4'Ratio5'Ratio6) #
iamp =      0.1
aenv expseg  1,p3*(1/$Ratio1),0.001,1,0.001
a1  poscil  iamp*aenv,$Frq*$Ratio1,gisine
aenv expseg  1,p3*(1/$Ratio2),0.001,1,0.001
a2  poscil  iamp*aenv,$Frq*$Ratio2,gisine
aenv expseg  1,p3*(1/$Ratio3),0.001,1,0.001
a3  poscil  iamp*aenv,$Frq*$Ratio3,gisine
aenv expseg  1,p3*(1/$Ratio4),0.001,1,0.001
a4  poscil  iamp*aenv,$Frq*$Ratio4,gisine
aenv expseg  1,p3*(1/$Ratio5),0.001,1,0.001
a5  poscil  iamp*aenv,$Frq*$Ratio5,gisine
aenv expseg  1,p3*(1/$Ratio6),0.001,1,0.001
a6  poscil  iamp*aenv,$Frq*$Ratio6,gisine
a7  sum     a1,a2,a3,a4,a5,a6
out     a7
#

instr  1 ; xylophone
; expand the macro with partial ratios that reflect those of a xylophone
; the fundamental frequency macro argument (the first argument)
; is passed as p4 from the score
$ADDITIVE_TONE(p4'1'3.932'9.538'16.688'24.566'31.147)
endin

instr  2 ; vibraphone
$ADDITIVE_TONE(p4'1'3.997'9.469'15.566'20.863'29.440)
endin

</CsInstruments>

<CsScore>
i 1 0  1 200
i 1 1  2 150
i 1 2  4 100
i 2 3  7 800
i 2 4  4 700
i 2 5  7 600
e
</CsScore>
</CsoundSynthesizer>
; example written by Iain McCurdy

```

## Score Macros

Score macros employ a similar syntax. Macros in the score can be used in situations where a long string of p-fields is likely to be repeated or, as in the next example, to define a palette of score patterns that repeat but with some variation, such as transposition. In this example two 'riffs' are defined, each employing two macro arguments: the first defines when the riff will begin and the second defines a transposition factor in semitones. These riffs are played back using a bass guitar-like instrument built on the wgpluck2 opcode. Remember that mathematical expressions within the Csound score must be bound within square brackets [].

#### EXAMPLE 03G03.csd

```<CsoundSynthesizer>

<CsOptions>
-odac
</CsOptions>

<CsInstruments>
sr      =       44100
ksmps   =       16
nchnls  =       1
0dbfs   =       1

instr  1 ; bass guitar
a1   wgpluck2 0.98, 0.4, cpsmidinn(p4), 0.1, 0.6
aenv linseg   1,p3-0.1,1,0.1,0
out    a1*aenv
endin

</CsInstruments>

<CsScore>
; p4 = pitch as a midi note number
#define RIFF_1(Start'Trans)
#
i 1 [$Start     ]  1     [36+$Trans]
i 1 [$Start+1   ]  0.25  [43+$Trans]
i 1 [$Start+1.25]  0.25  [43+$Trans]
i 1 [$Start+1.75]  0.25  [41+$Trans]
i 1 [$Start+2.5 ]  1     [46+$Trans]
i 1 [$Start+3.25]  1     [48+$Trans]
#
#define RIFF_2(Start'Trans)
#
i 1 [$Start     ]  1     [34+$Trans]
i 1 [$Start+1.25]  0.25  [41+$Trans]
i 1 [$Start+1.5 ]  0.25  [43+$Trans]
i 1 [$Start+1.75]  0.25  [46+$Trans]
i 1 [$Start+2.25]  0.25  [43+$Trans]
i 1 [$Start+2.75]  0.25  [41+$Trans]
i 1 [$Start+3   ]  0.5   [43+$Trans]
i 1 [$Start+3.5 ]  0.25  [46+$Trans]
#
t 0 90
$RIFF_1(0 ' 0)
$RIFF_1(4 ' 0)
$RIFF_2(8 ' 0)
$RIFF_2(12'-5)
$RIFF_1(16'-5)
$RIFF_2(20'-7)
$RIFF_2(24' 0)
$RIFF_2(28' 5)
e
</CsScore>
</CsoundSynthesizer>
; example written by Iain McCurdy
```

Score macros can themselves contain macros so that, for example, the above example could be further expanded so that a verse, chorus structure could be employed where verses and choruses, defined using macros, were themselves constructed from a series of riff macros.

UDOs and macros can both be used to reduce code repetition, and there are many situations where either could be used, but each offers its own strengths. A UDO's strength lies in its ability to be used just like an opcode with inputs and outputs, the ease with which it can be shared - between Csound projects and between Csound users - its ability to operate at a different k-rate to the rest of the orchestra, and the way it facilitates recursion. The fact that macro arguments are merely blocks of text, however, opens up other possibilities, and unlike UDOs, macros can span several instruments. UDOs, of course, have no use in the Csound score, unlike macros. Macros can also simplify the creation of complex FLTK GUIs where panel sections might be repeated with variations of output variable names and location.

Csound's orchestra and score macro system offers many additional refinements and this chapter serves merely as an introduction to their basic use. To learn more it is recommended to refer to the relevant sections of the Csound Reference Manual.

# ADDITIVE SYNTHESIS

Jean Baptiste Joseph Fourier demonstrated around 1800 that any periodic function can be described as a sum of sine waves. This in fact means that you can create any periodic sound, no matter how complex, if you know which sine waves to add together.

This concept really excited the early pioneers of electronic music, who imagined that sine waves would give them the power to create any sound imaginable and previously unimagined. Unfortunately, they soon realized that while adding sine waves is easy, interesting sounds must have a large number of sine waves which are constantly varying in frequency and amplitude, which turns out to be a hugely impractical task.

However, additive synthesis can provide unusual and interesting sounds. Moreover, both the power of modern computers and the ability to manage data in a programming language offer new dimensions of working with this old tool. As with most things in Csound there are several ways to go about it. We will try to show some of them, and see how they are connected with different programming paradigms.

## What are the main parameters of Additive Synthesis?

Before going into different ways of implementing additive synthesis in Csound, we shall think about the parameters to consider. As additive synthesis is the addition of several sine generators, the parameters are on two different levels:

• For each sine, there is a frequency and an amplitude with an envelope.
  • The frequency is usually a constant value, but it can also be varied. Natural sounds usually have very slight changes of partial frequencies.
  • The amplitude must have at least a simple envelope such as the well-known ADSR, but more complex ways of continuously altering the amplitude will make the sound much more lively.
• For the sound as a whole, these are the relevant parameters:
  • The total number of sinusoids. A sound which consists of just three sinusoids is of course "poorer" than a sound which consists of 100 sinusoids.
  • The frequency ratios of the sine generators. For a classical harmonic spectrum, the multipliers of the sinusoids are 1, 2, 3, ... (if your first sine is 100 Hz, the others are 200, 300, 400, ... Hz). For an inharmonic or noisy spectrum, there are probably no simple integer ratios. This frequency ratio is mainly responsible for our perception of timbre.
  • The base frequency is the frequency of the first partial. If the partials show a harmonic ratio, this frequency (in the example given, 100 Hz) is also the overall perceived pitch.
  • The amplitude ratios of the sinusoids. This is also very important for the resulting timbre of a sound. If the higher partials are relatively strong, the sound appears more brilliant; if the higher partials are soft, the sound appears dark and soft.
  • The duration ratios of the sinusoids. In simple additive synthesis, all single sines have the same duration, but they may also differ. This usually relates to the envelopes: if the envelopes of different partials vary, some partials may die away faster than others.

It is not always the aim of additive synthesis to imitate natural sounds, but a lot can definitely be learned through the task of first analyzing and then attempting to imitate a sound using additive synthesis techniques. This is what a guitar note looks like when spectrally analyzed:

Spectral analysis of a guitar tone in time (courtesy of W. Fohl, Hamburg)

Each partial has its own movement and duration. We may or may not be able to achieve this successfully in additive synthesis. Let us begin with some simple sounds and consider ways of programming this with Csound; later we will look at some more complex sounds and advanced ways of programming this.

## Simple Additions of Sinusoids inside an Instrument

If additive synthesis amounts to adding together the outputs of sine generators, it is straightforward to create multiple oscillators in a single instrument and to add the resulting audio signals together. In the following example, instrument 1 demonstrates a harmonic spectrum, and instrument 2 an inharmonic one. Both instruments share the same amplitude multipliers - 1, 1/2, 1/3, 1/4, ... - and receive the base frequency in Csound's pitch notation (octave.semitone) and the main amplitude in dB.

#### EXAMPLE 04A01.csd

```<CsoundSynthesizer>
<CsOptions>
-o dac
</CsOptions>
<CsInstruments>
;example by Andrés Cabrera
sr = 44100
ksmps = 32
nchnls = 2
0dbfs = 1

giSine    ftgen     0, 0, 2^10, 10, 1

instr 1 ;harmonic additive synthesis
;receive general pitch and volume from the score
ibasefrq  =         cpspch(p4) ;convert pitch values to frequency
ibaseamp  =         ampdbfs(p5) ;convert dB to amplitude
;create 8 harmonic partials
aOsc1     poscil    ibaseamp, ibasefrq, giSine
aOsc2     poscil    ibaseamp/2, ibasefrq*2, giSine
aOsc3     poscil    ibaseamp/3, ibasefrq*3, giSine
aOsc4     poscil    ibaseamp/4, ibasefrq*4, giSine
aOsc5     poscil    ibaseamp/5, ibasefrq*5, giSine
aOsc6     poscil    ibaseamp/6, ibasefrq*6, giSine
aOsc7     poscil    ibaseamp/7, ibasefrq*7, giSine
aOsc8     poscil    ibaseamp/8, ibasefrq*8, giSine
;apply simple envelope
kenv      linen     1, p3/4, p3, p3/4
;add partials and write to output
aOut = aOsc1 + aOsc2 + aOsc3 + aOsc4 + aOsc5 + aOsc6 + aOsc7 + aOsc8
outs      aOut*kenv, aOut*kenv
endin

instr 2 ;inharmonic additive synthesis
ibasefrq  =         cpspch(p4)
ibaseamp  =         ampdbfs(p5)
;create 8 inharmonic partials
aOsc1     poscil    ibaseamp, ibasefrq, giSine
aOsc2     poscil    ibaseamp/2, ibasefrq*1.02, giSine
aOsc3     poscil    ibaseamp/3, ibasefrq*1.1, giSine
aOsc4     poscil    ibaseamp/4, ibasefrq*1.23, giSine
aOsc5     poscil    ibaseamp/5, ibasefrq*1.26, giSine
aOsc6     poscil    ibaseamp/6, ibasefrq*1.31, giSine
aOsc7     poscil    ibaseamp/7, ibasefrq*1.39, giSine
aOsc8     poscil    ibaseamp/8, ibasefrq*1.41, giSine
kenv      linen     1, p3/4, p3, p3/4
aOut = aOsc1 + aOsc2 + aOsc3 + aOsc4 + aOsc5 + aOsc6 + aOsc7 + aOsc8
outs aOut*kenv, aOut*kenv
endin

</CsInstruments>
<CsScore>
;          pch       amp
i 1 0 5    8.00      -10
i 1 3 5    9.00      -14
i 1 5 8    9.02      -12
i 1 6 9    7.01      -12
i 1 7 10   6.00      -10
s
i 2 0 5    8.00      -10
i 2 3 5    9.00      -14
i 2 5 8    9.02      -12
i 2 6 9    7.01      -12
i 2 7 10   6.00      -10
</CsScore>
</CsoundSynthesizer>
```

## Simple Additions of Sinusoids via the Score

A typical paradigm in programming: if you find almost identical lines in your code, consider abstracting them. In the Csound language this can mean moving parameter control to the score. In our case, the lines

```aOsc1     poscil    ibaseamp, ibasefrq, giSine
aOsc2     poscil    ibaseamp/2, ibasefrq*2, giSine
aOsc3     poscil    ibaseamp/3, ibasefrq*3, giSine
aOsc4     poscil    ibaseamp/4, ibasefrq*4, giSine
aOsc5     poscil    ibaseamp/5, ibasefrq*5, giSine
aOsc6     poscil    ibaseamp/6, ibasefrq*6, giSine
aOsc7     poscil    ibaseamp/7, ibasefrq*7, giSine
aOsc8     poscil    ibaseamp/8, ibasefrq*8, giSine
```

can be abstracted to the form

```aOsc     poscil    ibaseamp*iampfactor, ibasefrq*ifreqfactor, giSine
```

with the parameters iampfactor (the relative amplitude of a partial) and ifreqfactor (the frequency multiplier) transferred to the score.

The next version simplifies the instrument code and defines the variable values as score parameters:

#### EXAMPLE 04A02.csd

```<CsoundSynthesizer>
<CsOptions>
-o dac
</CsOptions>
<CsInstruments>
;example by Andrés Cabrera and Joachim Heintz
sr = 44100
ksmps = 32
nchnls = 2
0dbfs = 1

giSine    ftgen     0, 0, 2^10, 10, 1

instr 1
iBaseFreq =         cpspch(p4)
iFreqMult =         p5 ;frequency multiplier
iBaseAmp  =         ampdbfs(p6)
iAmpMult  =         p7 ;amplitude multiplier
iFreq     =         iBaseFreq * iFreqMult
iAmp      =         iBaseAmp * iAmpMult
kEnv      linen     iAmp, p3/4, p3, p3/4
aOsc      poscil    kEnv, iFreq, giSine
outs      aOsc, aOsc
endin

</CsInstruments>
<CsScore>
;          freq      freqmult  amp       ampmult
i 1 0 7    8.09      1         -10       1
i . . 6    .         2         .         [1/2]
i . . 5    .         3         .         [1/3]
i . . 4    .         4         .         [1/4]
i . . 3    .         5         .         [1/5]
i . . 3    .         6         .         [1/6]
i . . 3    .         7         .         [1/7]
s
i 1 0 6    8.09      1.5       -10       1
i . . 4    .         3.1       .         [1/3]
i . . 3    .         3.4       .         [1/6]
i . . 4    .         4.2       .         [1/9]
i . . 5    .         6.1       .         [1/12]
i . . 6    .         6.3       .         [1/15]
</CsScore>
</CsoundSynthesizer>
```

You might ask: okay, where is the simplification? There are even more lines than before! This is true, and it is certainly just a step on the way to better code. The main benefit now is flexibility. Our code is now capable of realizing any number of partials, with any amplitude, frequency and duration ratios. Using the Csound score abbreviations (for instance a dot for repeating the previous value in the same p-field), you can do a lot of copy-and-paste and focus on what changes from line to line.

Note also that you are now calling multiple instances of one instrument at the same time to perform additive synthesis; each instance of the instrument contributes just one partial. Calling multiple simultaneous instances of one instrument is a typical procedure in situations like this, and helps in writing clean and efficient Csound code. We will discuss later how this can be done more elegantly than in the last example.

## Creating Function Tables for Additive Synthesis

Before we continue on this road, let us go back to the first example and discuss a classical and abbreviated method of playing a number of partials. As we mentioned at the beginning, Fourier stated that any periodic oscillation can be described as a sum of simple sinusoids. If the single sinusoids are static (no individual envelope or duration), the resulting waveform will always be the same.

You see four sine generators, each with fixed frequency and amplitude relations, and mixed together. At the bottom of the illustration you see the composite waveform which repeats itself at each period. So - why not just calculate this composite waveform first, and then read it with just one oscillator?

This is what some Csound GEN routines do. They compose the resulting shape of the periodic wave, and store the values in a function table. GEN10 can be used for creating a waveform consisting of harmonically related partials. After the common GEN routine p-fields

```<table number>, <creation time>, <size in points>, <GEN number>
```

you just have to determine the relative strengths of the harmonics. GEN09 is more complex and also allows you to control the frequency multiplier and the phase (0-360°) of each partial. We are able to reproduce the first example in a shorter (and computationally faster) form:

#### EXAMPLE 04A03.csd

```<CsoundSynthesizer>
<CsOptions>
-o dac
</CsOptions>
<CsInstruments>
;example by Andrés Cabrera and Joachim Heintz
sr = 44100
ksmps = 32
nchnls = 2
0dbfs = 1

giSine    ftgen     0, 0, 2^10, 10, 1
giHarm    ftgen     1, 0, 2^12, 10, 1, 1/2, 1/3, 1/4, 1/5, 1/6, 1/7, 1/8
giNois    ftgen     2, 0, 2^12, 9, 100,1,0,  102,1/2,0,  110,1/3,0,
123,1/4,0,  126,1/5,0,  131,1/6,0,  139,1/7,0,  141,1/8,0

instr 1
iBasFreq  =         cpspch(p4)
iTabFreq  =         p7 ;base frequency of the table
iBasFreq  =         iBasFreq / iTabFreq
iBaseAmp  =         ampdb(p5)
iFtNum    =         p6
aOsc      poscil    iBaseAmp, iBasFreq, iFtNum
aEnv      linen     aOsc, p3/4, p3, p3/4
outs      aEnv, aEnv
endin

</CsInstruments>
<CsScore>
;          pch       amp       table      table base (Hz)
i 1 0 5    8.00      -10       1          1
i . 3 5    9.00      -14       .          .
i . 5 8    9.02      -12       .          .
i . 6 9    7.01      -12       .          .
i . 7 10   6.00      -10       .          .
s
i 1 0 5    8.00      -10       2          100
i . 3 5    9.00      -14       .          .
i . 5 8    9.02      -12       .          .
i . 6 9    7.01      -12       .          .
i . 7 10   6.00      -10       .          .
</CsScore>
</CsoundSynthesizer>
```

As you can see, for non-harmonically related partials, the construction of a table must be done with special care. If the frequency multipliers start at 1 and 1.02, the resulting period is actually very long: for a base frequency of 100 Hz, the frequencies 100 Hz and 102 Hz overlap each other, so you need 100 cycles of the 1.00 multiplier and 102 cycles of the 1.02 multiplier before both start again together from zero. In other words, we have to create a table which contains 100 and 102 periods respectively, instead of 1 and 1.02. The table values are then related not to 1 - as usual - but to 100. That is why we have to introduce the new parameter iTabFreq for this purpose.
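The cycle-count arithmetic above can be checked with a few lines of plain Python (not Csound). Scaling the multipliers 1.00 and 1.02 by the table base frequency of 100 gives the integer cycle counts used in the example; dividing by their greatest common divisor shows the minimal counts at which the composite waveform repeats (any common multiple, such as 100 and 102, also works):

```python
# Plain-Python sketch of the period arithmetic for an inharmonic table:
# partials with multipliers 1.00 and 1.02 only line up again after an
# integer number of cycles of each.
from math import gcd

multipliers = [1.00, 1.02]
scaled = [round(m * 100) for m in multipliers]   # cycles stored in the table
g = gcd(*scaled)                                 # common factor of the counts
minimal = [c // g for c in scaled]               # smallest repeating pattern
print(scaled, minimal)                           # [100, 102] [50, 51]
```

Because the table holds 100 periods of the first partial rather than one, the oscillator must read it at the base frequency divided by 100 - which is exactly what the iTabFreq parameter accomplishes.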

This method of composing waveforms can also be used for generating the four classic waveform shapes used in a synthesizer. An impulse wave can be created by adding a number of harmonics of equal strength. A sawtooth has the amplitude multipliers 1, 1/2, 1/3, ... for the harmonics. A square has the same multipliers, but only for the odd harmonics. A triangle can be calculated as 1 divided by the square of the odd partials, with alternating positive and negative signs. The next example creates function tables with just ten partials for each standard form.
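These four recipes can be written out as lists of amplitude multipliers. The following plain-Python sketch (illustrative only; Csound does this via GEN10) computes, for ten partials, the same multipliers that the ftgen lines in the next example pass to GEN10:

```python
# Plain-Python sketch of the partial-strength recipes for the four classic
# waveforms, ten partials each (index k = partial number, starting at 1).
n = 10
impulse  = [1.0] * n                                  # all harmonics equal
sawtooth = [1.0 / k for k in range(1, n + 1)]         # 1, 1/2, 1/3, ...
square   = [1.0 / k if k % 2 else 0.0
            for k in range(1, n + 1)]                 # odd harmonics only
triangle = [((-1) ** (k // 2)) / k**2 if k % 2 else 0.0
            for k in range(1, n + 1)]                 # 1, 0, -1/9, 0, 1/25, ...
print(triangle)
```

Comparing the printed lists with the giImp, giSaw, giSqu and giTri tables below confirms the correspondence term by term.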

#### EXAMPLE 04A04.csd

```<CsoundSynthesizer>
<CsOptions>
-o dac
</CsOptions>
<CsInstruments>
;example by Joachim Heintz
sr = 44100
ksmps = 32
nchnls = 2
0dbfs = 1

giImp  ftgen  1, 0, 4096, 10, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1
giSaw  ftgen  2, 0, 4096, 10, 1,1/2,1/3,1/4,1/5,1/6,1/7,1/8,1/9,1/10
giSqu  ftgen  3, 0, 4096, 10, 1, 0, 1/3, 0, 1/5, 0, 1/7, 0, 1/9, 0
giTri  ftgen  4, 0, 4096, 10, 1, 0, -1/9, 0, 1/25, 0, -1/49, 0, 1/81, 0

instr 1
asig   poscil .2, 457, p4
outs   asig, asig
endin

</CsInstruments>
<CsScore>
i 1 0 3 1
i 1 4 3 2
i 1 8 3 3
i 1 12 3 4
</CsScore>
</CsoundSynthesizer>
```

## Triggering Subinstruments for the Partials

Performing additive synthesis by designing partial strengths into function tables has the disadvantage that once a note has begun there is no way of varying the relative strengths of individual partials. There are various methods to circumvent the inflexibility of table-based additive synthesis such as morphing between several tables (using for example the ftmorf opcode). Next we will consider another approach: triggering one instance of a subinstrument for each partial, and exploring the possibilities of creating a spectrally dynamic sound using this technique.

Let us return to the second instrument (04A02.csd), which already made some abstractions and triggered one instrument instance for each partial. This was done in the score; but now we will trigger one complete note in one score line, not just one partial. The first step is to assign the desired number of partials via a score parameter. The next example triggers any number of partials using this one value:

#### EXAMPLE 04A05.csd

```<CsoundSynthesizer>
<CsOptions>
-o dac
</CsOptions>
<CsInstruments>
;Example by Joachim Heintz
sr = 44100
ksmps = 32
nchnls = 2
0dbfs = 1

giSine    ftgen     0, 0, 2^10, 10, 1

instr 1 ;master instrument
inumparts =         p4 ;number of partials
ibasfreq  =         200 ;base frequency
ipart     =         1 ;count variable for loop
;loop over ipart from 1 to inumparts
;and trigger inumparts instances of the subinstrument
loop:
ifreq     =         ibasfreq * ipart
iamp      =         1/ipart/inumparts
event_i   "i", 10, 0, p3, ifreq, iamp
loop_le   ipart, 1, inumparts, loop
endin

instr 10 ;subinstrument for playing one partial
ifreq     =         p4 ;frequency of this partial
iamp      =         p5 ;amplitude of this partial
aenv      transeg   0, .01, 0, iamp, p3-0.1, -10, 0
apart     poscil    aenv, ifreq, giSine
outs      apart, apart
endin

</CsInstruments>
<CsScore>
;         number of partials
i 1 0 3   10
i 1 3 3   20
i 1 6 3   2
</CsScore>
</CsoundSynthesizer>
```

This instrument can easily be transformed to be played via a midi keyboard. The next example connects the number of synthesized partials with the midi velocity. So if you play softly, the sound will have fewer partials than if a key is struck with force.

#### EXAMPLE 04A06.csd

```<CsoundSynthesizer>
<CsOptions>
-o dac
</CsOptions>
<CsInstruments>
;Example by Joachim Heintz
sr = 44100
ksmps = 32
nchnls = 2
0dbfs = 1

giSine    ftgen     0, 0, 2^10, 10, 1
massign   0, 1 ;all midi channels to instr 1

instr 1 ;master instrument
ibasfreq  cpsmidi       ;base frequency
iampmid   ampmidi   20 ;receive midi-velocity and scale 0-20
inparts   =         int(iampmid)+1 ;exclude zero
ipart     =         1 ;count variable for loop
;loop over ipart from 1 to inparts
;and trigger inparts instances of the subinstrument
loop:
ifreq     =         ibasfreq * ipart
iamp      =         1/ipart/inparts
event_i   "i", 10, 0, 1, ifreq, iamp
loop_le   ipart, 1, inparts, loop
endin

instr 10 ;subinstrument for playing one partial
ifreq     =         p4 ;frequency of this partial
iamp      =         p5 ;amplitude of this partial
aenv      transeg   0, .01, 0, iamp, p3-.01, -3, 0
apart     poscil    aenv, ifreq, giSine
outs      apart/3, apart/3
endin

</CsInstruments>
<CsScore>
f 0 3600
</CsScore>
</CsoundSynthesizer>
```

Although this instrument is rather primitive, it is useful to be able to control the timbre in this way using key velocity. Let us continue to explore other methods of creating parameter variations in additive synthesis.

## User-controlled Random Variations in Additive Synthesis

In natural sounds, there is movement and change all the time. Even the best player or singer will not be able to play a note in the exact same way twice. And within a tone, the partials have some unsteadiness all the time: slight fluctuations of the amplitudes, uneven durations, slight frequency movements. In an audio programming language like Csound, we can achieve these movements with random deviations. What matters is not so much whether we use randomness, but how. The boundaries of random deviations must be adjusted as carefully as any other parameter in electronic composition. If sounds using random deviations begin to sound like mistakes, it is probably less to do with the use of random functions themselves and more to do with poorly chosen boundaries.

Let us start with some random deviations in our subinstrument. These parameters can be affected:

• The frequency of each partial can be slightly detuned. The maximum possible detuning can be set in cents (100 cents = 1 semitone).
• The amplitude of each partial can be altered relative to its standard value. The alteration can be measured in decibels (dB).
• The duration of each partial can be shorter or longer than the standard value. Let us define this deviation as a percentage: if the expected duration is five seconds, a maximum deviation of 100% means getting a value between half the duration (2.5 sec) and double the duration (10 sec).
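The three deviation units above boil down to simple conversion formulas. The following plain-Python sketch (function names are hypothetical) mirrors the math that Csound's cent() and ampdb() value converters and the duration calculation in the next example perform:

```python
# Plain-Python sketch of the three deviation conversions used below.

def cent_to_ratio(cents):
    """Frequency ratio for a deviation in cents (1200 cents = 1 octave)."""
    return 2 ** (cents / 1200)

def db_to_ratio(db):
    """Amplitude ratio for a deviation in decibels."""
    return 10 ** (db / 20)

def dur_deviation(dur, percent):
    """Duration scaled so that +/-100% means double/half the duration."""
    return dur * 2 ** (percent / 100)

print(cent_to_ratio(100))                             # one semitone, ~1.0595
print(dur_deviation(5, 100), dur_deviation(5, -100))  # 10.0 2.5
```

Note that the duration deviation is exponential rather than linear, so positive and negative deviations of the same size scale the duration by reciprocal factors.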

The following example shows the effect of these variations. As a basis - and as a reference to its author - we take the "bell-like sound" which Jean-Claude Risset created in his Sound Catalogue.1

#### EXAMPLE 04A07.csd

```<CsoundSynthesizer>
<CsOptions>
-o dac
</CsOptions>
<CsInstruments>
;Example by Joachim Heintz
sr = 44100
ksmps = 32
nchnls = 2
0dbfs = 1

;frequency and amplitude multipliers for 11 partials of Risset's bell
giFqs     ftgen     0, 0, -11,-2,.56,.563,.92, .923,1.19,1.7,2,2.74,
3,3.74,4.07
giAmps    ftgen     0, 0, -11, -2, 1, 2/3, 1, 1.8, 8/3, 1.46, 4/3, 4/3, 1, 4/3
giSine    ftgen     0, 0, 2^10, 10, 1
seed      0

instr 1 ;master instrument
ibasfreq  =         400
ifqdev    =         p4 ;maximum freq deviation in cents
iampdev   =         p5 ;maximum amp deviation in dB
idurdev   =         p6 ;maximum duration deviation in %
indx      =         0 ;count variable for loop
loop:
ifqmult   tab_i     indx, giFqs ;get frequency multiplier from table
ifreq     =         ibasfreq * ifqmult
iampmult  tab_i     indx, giAmps ;get amp multiplier
iamp      =         iampmult / 20 ;scale
event_i   "i", 10, 0, p3, ifreq, iamp, ifqdev, iampdev, idurdev
loop_lt   indx, 1, 11, loop
endin

instr 10 ;subinstrument for playing one partial
;receive the parameters from the master instrument
ifreqnorm =         p4 ;standard frequency of this partial
iampnorm  =         p5 ;standard amplitude of this partial
ifqdev    =         p6 ;maximum freq deviation in cents
iampdev   =         p7 ;maximum amp deviation in dB
idurdev   =         p8 ;maximum duration deviation in %
;calculate frequency
icent     random    -ifqdev, ifqdev ;cent deviation
ifreq     =         ifreqnorm * cent(icent)
;calculate amplitude
idb       random    -iampdev, iampdev ;dB deviation
iamp      =         iampnorm * ampdb(idb)
;calculate duration
idurperc  random    -idurdev, idurdev ;duration deviation (%)
iptdur    =         p3 * 2^(idurperc/100)
p3        =         iptdur ;set p3 to the calculated value
;play partial
aenv      transeg   0, .01, 0, iamp, p3-.01, -10, 0
apart     poscil    aenv, ifreq, giSine
outs      apart, apart
endin

</CsInstruments>
<CsScore>
;         frequency   amplitude   duration
;         deviation   deviation   deviation
;         in cent     in dB       in %
;;unchanged sound (twice)
r 2
i 1 0 5   0           0           0
s
;;slight variations in frequency
r 4
i 1 0 5   25          0           0
;;slight variations in amplitude
r 4
i 1 0 5   0           6           0
;;slight variations in duration
r 4
i 1 0 5   0           0           30
;;slight variations combined
r 6
i 1 0 5   25          6           30
;;heavy variations
r 6
i 1 0 5   50          9           100
</CsScore>
</CsoundSynthesizer>
```

For a midi-triggered descendant of this instrument, we can - as one of many possible choices - make the amount of possible random variation depend on the key velocity. So a key pressed softly plays the bell-like sound as described by Risset, but as a key is struck with increasing force the sound produced will be increasingly altered.

#### EXAMPLE 04A08.csd

```<CsoundSynthesizer>
<CsOptions>
-o dac
</CsOptions>
<CsInstruments>
;Example by Joachim Heintz
sr = 44100
ksmps = 32
nchnls = 2
0dbfs = 1

;frequency and amplitude multipliers for 11 partials of Risset's bell
giFqs     ftgen     0, 0, -11, -2, .56,.563,.92,.923,1.19,1.7,2,2.74,3,
3.74,4.07
giAmps    ftgen     0, 0, -11, -2, 1, 2/3, 1, 1.8, 8/3, 1.46, 4/3, 4/3, 1, 4/3
giSine    ftgen     0, 0, 2^10, 10, 1
seed      0
massign   0, 1 ;all midi channels to instr 1

instr 1 ;master instrument
;;scale desired deviations for maximum velocity
;frequency (cent)
imxfqdv   =         100
;amplitude (dB)
imxampdv  =         12
;duration (%)
imxdurdv  =         100
;;get midi values
ibasfreq  cpsmidi       ;base frequency
iampmid   ampmidi   1 ;receive midi-velocity and scale 0-1
;;calculate maximum deviations depending on midi-velocity
ifqdev    =         imxfqdv * iampmid
iampdev   =         imxampdv * iampmid
idurdev   =         imxdurdv * iampmid
;;trigger subinstruments
indx      =         0 ;count variable for loop
loop:
ifqmult   tab_i     indx, giFqs ;get frequency multiplier from table
ifreq     =         ibasfreq * ifqmult
iampmult  tab_i     indx, giAmps ;get amp multiplier
iamp      =         iampmult / 20 ;scale
event_i   "i", 10, 0, 3, ifreq, iamp, ifqdev, iampdev, idurdev
loop_lt   indx, 1, 11, loop
endin

instr 10 ;subinstrument for playing one partial
;receive the parameters from the master instrument
ifreqnorm =         p4 ;standard frequency of this partial
iampnorm  =         p5 ;standard amplitude of this partial
ifqdev    =         p6 ;maximum freq deviation in cents
iampdev   =         p7 ;maximum amp deviation in dB
idurdev   =         p8 ;maximum duration deviation in %
;calculate frequency
icent     random    -ifqdev, ifqdev ;cent deviation
ifreq     =         ifreqnorm * cent(icent)
;calculate amplitude
idb       random    -iampdev, iampdev ;dB deviation
iamp      =         iampnorm * ampdb(idb)
;calculate duration
idurperc  random    -idurdev, idurdev ;duration deviation (%)
iptdur    =         p3 * 2^(idurperc/100)
p3        =         iptdur ;set p3 to the calculated value
;play partial
aenv      transeg   0, .01, 0, iamp, p3-.01, -10, 0
apart     poscil    aenv, ifreq, giSine
outs      apart, apart
endin

</CsInstruments>
<CsScore>
f 0 3600
</CsScore>
</CsoundSynthesizer>
```

It will depend on the power of your computer whether you can play examples like this in realtime. Have a look at chapter 2D (Live Audio) for tips on getting the best possible performance from your Csound orchestra.

Additive synthesis can still be an exciting way of producing sounds. Today's computational power and programming structures open the way for new discoveries and ideas. The later examples were intended to show some of this potential of additive synthesis in Csound.

1. Jean-Claude Risset, Introductory Catalogue of Computer Synthesized Sounds (1969), cited in Dodge/Jerse, Computer Music, New York / London 1985, p. 94^

# SUBTRACTIVE SYNTHESIS

## Introduction

Subtractive synthesis is, at least conceptually, the inverse of additive synthesis in that instead of building complex sound through the addition of simple cellular materials such as sine waves, subtractive synthesis begins with a complex sound source, such as white noise or a recorded sample, or a rich waveform, such as a sawtooth or pulse, and proceeds to refine that sound by removing partials or entire sections of the frequency spectrum through the use of audio filters.

The creation of dynamic spectra (an arduous task in additive synthesis) is relatively simple in subtractive synthesis, as all that is required is to modulate a few parameters of the filters being used. Working with the intricate precision that is possible with additive synthesis may not be as easy with subtractive synthesis, but sounds can be created much more instinctively than is possible with additive or FM synthesis.

## A Csound Two-Oscillator Synthesizer

The first example represents perhaps the classic idea of subtractive synthesis: a simple two oscillator synth filtered using a single resonant lowpass filter. Many of the ideas used in this example have been inspired by the design of the Minimoog synthesizer (1970) and other similar instruments.

Each oscillator can produce either a sawtooth, a PWM waveform (i.e. square, pulse etc.) or white noise, and each oscillator can be transposed in octaves or in cents with respect to a fundamental pitch. The two oscillators are mixed and then passed through a 4-pole / 24 dB-per-octave resonant lowpass filter. The opcode 'moogladder' is chosen on account of its authentic vintage character. The cutoff frequency of the filter is modulated using an ADSR-style (attack-decay-sustain-release) envelope, facilitating the creation of dynamic, evolving spectra. Finally the sound output of the filter is shaped by an ADSR amplitude envelope.

As this instrument is suggestive of a performance instrument controlled via MIDI, this has been partially implemented. Through the use of Csound's MIDI interoperability opcode, mididefault, the instrument can be operated from the score or from a MIDI keyboard. If a MIDI note is received, suitable default p-field values are substituted for the missing p-fields. MIDI controller 1 can be used to control the global cutoff frequency for the filter.

A schematic for this instrument is shown below:

#### EXAMPLE 04B01.csd

```<CsoundSynthesizer>

<CsOptions>
-odac -Ma
</CsOptions>

<CsInstruments>
sr = 44100
ksmps = 4
nchnls = 2
0dbfs = 1

initc7 1,1,0.8                 ;set initial controller position

prealloc 1, 10

instr 1
iNum   notnum                  ;read in midi note number
iCF    ctrl7        1,1,0.1,14 ;read in midi controller 1

; set up default p-field values for midi activated notes
mididefault  iNum, p4   ;pitch (note number)
mididefault  0.3, p5    ;amplitude 1
mididefault  2, p6      ;type 1
mididefault  0.5, p7    ;pulse width 1
mididefault  0, p8      ;octave disp. 1
mididefault  0, p9      ;tuning disp. 1
mididefault  0.3, p10   ;amplitude 2
mididefault  1, p11     ;type 2
mididefault  0.5, p12   ;pulse width 2
mididefault  -1, p13    ;octave displacement 2
mididefault  20, p14    ;tuning disp. 2
mididefault  iCF, p15   ;filter cutoff freq
mididefault  0.01, p16  ;filter env. attack time
mididefault  1, p17     ;filter env. decay time
mididefault  0.01, p18  ;filter env. sustain level
mididefault  0.1, p19   ;filter release time
mididefault  0.3, p20   ;filter resonance
mididefault  0.01, p21  ;amp. env. attack
mididefault  0.1, p22   ;amp. env. decay.
mididefault  1, p23     ;amp. env. sustain
mididefault  0.01, p24  ;amp. env. release

; assign p-fields to variables
iCPS   =            cpsmidinn(p4) ;convert from note number to cps
kAmp1  =            p5
iType1 =            p6
kPW1   =            p7
kOct1  =            octave(p8) ;convert from octave displacement to multiplier
kTune1 =            cent(p9)   ;convert from cents displacement to multiplier
kAmp2  =            p10
iType2 =            p11
kPW2   =            p12
kOct2  =            octave(p13)
kTune2 =            cent(p14)
iCF    =            p15
iFAtt  =            p16
iFDec  =            p17
iFSus  =            p18
iFRel  =            p19
kRes   =            p20
iAAtt  =            p21
iADec  =            p22
iASus  =            p23
iARel  =            p24

;oscillator 1
;if type is sawtooth or square...
if iType1==1||iType1==2 then
;...derive vco2 'mode' from waveform type
iMode1 = (iType1==1?0:2)
aSig1  vco2   kAmp1,iCPS*kOct1*kTune1,iMode1,kPW1;VCO audio oscillator
else                                   ;otherwise...
aSig1  noise  kAmp1, 0.5              ;...generate white noise
endif

;oscillator 2 (identical in design to oscillator 1)
if iType2==1||iType2==2 then
iMode2  =  (iType2==1?0:2)
aSig2  vco2   kAmp2,iCPS*kOct2*kTune2,iMode2,kPW2
else
aSig2 noise  kAmp2,0.5
endif

;mix oscillators
aMix       sum          aSig1,aSig2
;lowpass filter
kFiltEnv   expsegr      0.0001,iFAtt,iCPS*iCF,iFDec,iCPS*iCF*iFSus,iFRel,0.0001
aOut       moogladder   aMix, kFiltEnv, kRes

;amplitude envelope
aAmpEnv    expsegr      0.0001,iAAtt,1,iADec,iASus,iARel,0.0001
aOut       =            aOut*aAmpEnv
outs         aOut,aOut
endin
</CsInstruments>

<CsScore>
;p4  = oscillator frequency
;oscillator 1
;p5  = amplitude
;p6  = type (1=sawtooth,2=square-PWM,3=noise)
;p7  = PWM (square wave only)
;p8  = octave displacement
;p9  = tuning displacement (cents)
;oscillator 2
;p10 = amplitude
;p11 = type (1=sawtooth,2=square-PWM,3=noise)
;p12 = pwm (square wave only)
;p13 = octave displacement
;p14 = tuning displacement (cents)
;global filter envelope
;p15 = cutoff
;p16 = attack time
;p17 = decay time
;p18 = sustain level (fraction of cutoff)
;p19 = release time
;p20 = resonance
;global amplitude envelope
;p21 = attack time
;p22 = decay time
;p23 = sustain level
;p24 = release time
; p1 p2 p3  p4 p5  p6 p7   p8 p9  p10 p11 p12 p13
;p14 p15 p16  p17  p18  p19 p20 p21  p22 p23 p24
i 1  0  1   50 0   2  .5   0  -5  0   2   0.5 0   \
5   12  .01  2    .01  .1  0   .005 .01 1   .05
i 1  +  1   50 .2  2  .5   0  -5  .2  2   0.5 0   \
5   1   .01  1    .1   .1  .5  .005 .01 1   .05
i 1  +  1   50 .2  2  .5   0  -8  .2  2   0.5 0   \
8   3   .01  1    .1   .1  .5  .005 .01 1   .05
i 1  +  1   50 .2  2  .5   0  -8  .2  2   0.5 -1  \
8   7  .01   1    .1   .1  .5  .005 .01 1   .05
i 1  +  3   50 .2  1  .5   0  -10 .2  1   0.5 -2  \
10  40  .01  3    .001 .1  .5  .005 .01 1   .05
i 1  +  10  50 1   2  .01  -2 0   .2  3   0.5 0   \
0   40  5    5    .001 1.5 .1  .005 .01 1   .05

f 0 3600
e
</CsScore>

</CsoundSynthesizer>
```

## Simulation of Timbres from a Noise Source

The next example makes extensive use of bandpass filters arranged in parallel to filter white noise. The bandpass filter bandwidths are narrowed to the point where almost pure tones are audible. The crucial difference is that the noise source always induces instability in the amplitude and frequency of tones produced - it is this quality that makes this sort of subtractive synthesis sound much more organic than an additive synthesis equivalent. If the bandwidths are widened then more of the characteristic of the noise source comes through and the tone becomes 'airier' and less distinct; if the bandwidths are narrowed the resonating tones become clearer and steadier. By varying the bandwidths interesting metamorphoses of the resultant sound are possible.

22 reson filters are used for the bandpass filters on account of their ability to ring and resonate as their bandwidth narrows. Another reason for this choice is the relative CPU economy of the reson filter, a not inconsiderable concern as so many of them are used. The frequency ratios between the 22 parallel filters are derived from the analysis of a hand bell; the data can be found in the appendix of the Csound Reference Manual.
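
As a rough illustration of why narrowing the bandwidth makes a resonator ring, here is a two-pole resonator sketched in Python. The coefficient formulas follow the common two-pole resonator design; this is not Csound's exact reson implementation:

```python
import math, random

def resonator(signal, cf, bw, sr=44100):
    """Two-pole resonator: a narrow bandwidth yields a long ring at cf,
    a wide bandwidth lets more of the noisy source through."""
    r = math.exp(-math.pi * bw / sr)               # pole radius from bandwidth
    b1 = 2 * r * math.cos(2 * math.pi * cf / sr)   # feedback coefficients
    b2 = -r * r
    scale = 1 - r                                  # rough gain normalisation
    y1 = y2 = 0.0
    out = []
    for x in signal:
        y = scale * x + b1 * y1 + b2 * y2
        out.append(y)
        y2, y1 = y1, y
    return out

random.seed(1)
noise = [random.uniform(-1, 1) for _ in range(44100)]
ringing = resonator(noise, cf=440, bw=5)    # almost a pure, unsteady 440 Hz tone
airy    = resonator(noise, cf=440, bw=400)  # wider band: noisier, less distinct
```

With a bandwidth of 5 Hz the filtered noise oscillates at very nearly 440 Hz, wandering slightly in amplitude and phase, which is exactly the organic instability described above.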

In addition to the white noise as a source, noise impulses are also used as a sound source (via the 'mpulse' opcode). The instrument automatically, randomly and slowly crossfades between these two sound sources.

A lowpass and highpass filter are inserted in series before the parallel bandpass filters to shape the frequency spectrum of the source sound. Csound's Butterworth filters butlp and buthp are chosen for this task on account of their steep cutoff slopes and lack of ripple at the cutoff point.

The outputs of the reson filters are sent alternately to the left and right outputs in order to create a broad stereo effect.

This example makes extensive use of the 'rspline' opcode, a generator of random spline functions, to slowly undulate the many input parameters. The orchestra is self generative in that instrument 1 repeatedly triggers note events in instrument 2 and the extensive use of random functions means that the results will continually evolve as the orchestra is allowed to perform.

A flow diagram for this instrument is shown below:

#### EXAMPLE 04B02.csd

```<CsoundSynthesizer>

<CsOptions>
-odac
</CsOptions>

<CsInstruments>
;Example written by Iain McCurdy

sr = 44100
ksmps = 16
nchnls = 2
0dbfs = 1

instr 1 ; triggers notes in instrument 2 with randomised p-fields
krate  randomi 0.2,0.4,0.1   ;rate of note generation
ktrig  metro  krate          ;triggers used by schedkwhen
koct   random 5,12           ;fundamental pitch of synth note
kdur   random 15,30          ;duration of note
schedkwhen ktrig,0,0,2,0,kdur,cpsoct(koct) ;trigger a note in instrument 2
endin

instr 2 ; subtractive synthesis instrument
aNoise  pinkish  1                  ;a noise source sound: pink noise
kGap    rspline  0.3,0.05,0.2,2     ;time gap between impulses
aPulse  mpulse   15, kGap           ;a train of impulses
kCFade  rspline  0,1,0.1,1          ;crossfade point between noise and impulses
aInput  ntrpol   aPulse,aNoise,kCFade;implement crossfade

; cutoff frequencies for low and highpass filters
kLPF_CF  rspline  13,8,0.1,0.4
kHPF_CF  rspline  5,10,0.1,0.4
; filter input sound with low and highpass filters in series -
; - done twice per filter in order to sharpen cutoff slopes
aInput    butlp    aInput, cpsoct(kLPF_CF)
aInput    butlp    aInput, cpsoct(kLPF_CF)
aInput    buthp    aInput, cpsoct(kHPF_CF)
aInput    buthp    aInput, cpsoct(kHPF_CF)

kcf     rspline  p4*1.05,p4*0.95,0.01,0.1 ; fundamental
; bandwidth for each filter is created individually as a random spline function
kbw1    rspline  0.00001,10,0.2,1
kbw2    rspline  0.00001,10,0.2,1
kbw3    rspline  0.00001,10,0.2,1
kbw4    rspline  0.00001,10,0.2,1
kbw5    rspline  0.00001,10,0.2,1
kbw6    rspline  0.00001,10,0.2,1
kbw7    rspline  0.00001,10,0.2,1
kbw8    rspline  0.00001,10,0.2,1
kbw9    rspline  0.00001,10,0.2,1
kbw10   rspline  0.00001,10,0.2,1
kbw11   rspline  0.00001,10,0.2,1
kbw12   rspline  0.00001,10,0.2,1
kbw13   rspline  0.00001,10,0.2,1
kbw14   rspline  0.00001,10,0.2,1
kbw15   rspline  0.00001,10,0.2,1
kbw16   rspline  0.00001,10,0.2,1
kbw17   rspline  0.00001,10,0.2,1
kbw18   rspline  0.00001,10,0.2,1
kbw19   rspline  0.00001,10,0.2,1
kbw20   rspline  0.00001,10,0.2,1
kbw21   rspline  0.00001,10,0.2,1
kbw22   rspline  0.00001,10,0.2,1

imode   =        0 ; amplitude balancing method used by the reson filters
a1      reson    aInput, kcf*1,               kbw1, imode
a2      reson    aInput, kcf*1.0019054878049, kbw2, imode
a3      reson    aInput, kcf*1.7936737804878, kbw3, imode
a4      reson    aInput, kcf*1.8009908536585, kbw4, imode
a5      reson    aInput, kcf*2.5201981707317, kbw5, imode
a6      reson    aInput, kcf*2.5224085365854, kbw6, imode
a7      reson    aInput, kcf*2.9907012195122, kbw7, imode
a8      reson    aInput, kcf*2.9940548780488, kbw8, imode
a9      reson    aInput, kcf*3.7855182926829, kbw9, imode
a10     reson    aInput, kcf*3.8061737804878, kbw10,imode
a11     reson    aInput, kcf*4.5689024390244, kbw11,imode
a12     reson    aInput, kcf*4.5754573170732, kbw12,imode
a13     reson    aInput, kcf*5.0296493902439, kbw13,imode
a14     reson    aInput, kcf*5.0455030487805, kbw14,imode
a15     reson    aInput, kcf*6.0759908536585, kbw15,imode
a16     reson    aInput, kcf*5.9094512195122, kbw16,imode
a17     reson    aInput, kcf*6.4124237804878, kbw17,imode
a18     reson    aInput, kcf*6.4430640243902, kbw18,imode
a19     reson    aInput, kcf*7.0826219512195, kbw19,imode
a20     reson    aInput, kcf*7.0923780487805, kbw20,imode
a21     reson    aInput, kcf*7.3188262195122, kbw21,imode
a22     reson    aInput, kcf*7.5551829268293, kbw22,imode

; amplitude control for each filter output
kAmp1    rspline  0, 1, 0.3, 1
kAmp2    rspline  0, 1, 0.3, 1
kAmp3    rspline  0, 1, 0.3, 1
kAmp4    rspline  0, 1, 0.3, 1
kAmp5    rspline  0, 1, 0.3, 1
kAmp6    rspline  0, 1, 0.3, 1
kAmp7    rspline  0, 1, 0.3, 1
kAmp8    rspline  0, 1, 0.3, 1
kAmp9    rspline  0, 1, 0.3, 1
kAmp10   rspline  0, 1, 0.3, 1
kAmp11   rspline  0, 1, 0.3, 1
kAmp12   rspline  0, 1, 0.3, 1
kAmp13   rspline  0, 1, 0.3, 1
kAmp14   rspline  0, 1, 0.3, 1
kAmp15   rspline  0, 1, 0.3, 1
kAmp16   rspline  0, 1, 0.3, 1
kAmp17   rspline  0, 1, 0.3, 1
kAmp18   rspline  0, 1, 0.3, 1
kAmp19   rspline  0, 1, 0.3, 1
kAmp20   rspline  0, 1, 0.3, 1
kAmp21   rspline  0, 1, 0.3, 1
kAmp22   rspline  0, 1, 0.3, 1

; left and right channel mixes are created using alternate filter outputs.
; This creates a broad stereo effect.
aMixL    sum      a1*kAmp1,a3*kAmp3,a5*kAmp5,a7*kAmp7,a9*kAmp9,a11*kAmp11,\
a13*kAmp13,a15*kAmp15,a17*kAmp17,a19*kAmp19,a21*kAmp21
aMixR    sum      a2*kAmp2,a4*kAmp4,a6*kAmp6,a8*kAmp8,a10*kAmp10,a12*kAmp12,\
a14*kAmp14,a16*kAmp16,a18*kAmp18,a20*kAmp20,a22*kAmp22

kEnv     linseg   0, p3*0.5, 1,p3*0.5,0,1,0       ; global amplitude envelope
outs   (aMixL*kEnv*0.00008), (aMixR*kEnv*0.00008) ; audio sent to outputs
endin

</CsInstruments>

<CsScore>
i 1 0 3600  ; instrument 1 (note generator) plays for 1 hour
e
</CsScore>

</CsoundSynthesizer>
```

## Vowel-Sound Emulation Using Bandpass Filtering

The final example in this section uses precisely tuned bandpass filters to simulate the sound of the human voice expressing vowel sounds. Spectral resonances in this context are often referred to as 'formants'. Five formants are used to simulate the effect of the human mouth and head as a resonating (and therefore filtering) body. The filter data for simulating the vowel sounds A, E, I, O and U as expressed by a bass, tenor, counter-tenor, alto and soprano voice can be found in the appendix of the Csound Reference Manual. Bandwidth and intensity (dB) information is also needed to accurately simulate the various vowel sounds.

reson filters are again used, but butbp and others would be equally valid choices.

Data is stored in GEN07 linear breakpoint function tables; as this data is read using k-rate line functions, we can interpolate and therefore morph between different vowel sounds during a note.
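
The morphing itself is just linear interpolation between breakpoints. A minimal Python sketch, using as data the first two breakpoints of the tenor frequency tables below (vowels 'a' and 'e'):

```python
def morph(vowel_a, vowel_b, k):
    """Linearly interpolate between two 5-formant frequency sets.
    k = 0 gives vowel_a, k = 1 gives vowel_b, and values in between
    morph smoothly, just as a k-rate line reading a GEN07 table does."""
    return [fa + (fb - fa) * k for fa, fb in zip(vowel_a, vowel_b)]

# first two breakpoints of the tenor frequency tables ('a' and 'e')
TENOR_A = [650, 1080, 2650, 2900, 3250]
TENOR_E = [400, 1700, 2600, 3200, 3580]

print(morph(TENOR_A, TENOR_E, 0.5))  # formants halfway between 'a' and 'e'
```

In the instrument, kVow plays the role of k, and each of the fifteen tables per voice (five frequencies, five intensities, five bandwidths) is read in the same way.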

The source sound for the filters comes from either a pink noise generator or a pulse waveform. The pink noise source could be used if the emulation is to be of just the breath, whereas the pulse waveform provides a decent approximation of the buzzing of the human vocal cords. This instrument can, however, morph continuously between these two sources.

A flow diagram for this instrument is shown below:

#### EXAMPLE 04B03.csd

```<CsoundSynthesizer>

<CsOptions>
-odac
</CsOptions>

<CsInstruments>
;example by Iain McCurdy

sr = 44100
ksmps = 16
nchnls = 2
0dbfs = 1

instr 1
kFund    expon     p4,p3,p5               ; fundamental
kVow     line      p6,p3,p7               ; vowel select
kBW      line      p8,p3,p9               ; bandwidth factor
iVoice   =         p10                    ; voice select
kSrc     line      p11,p3,p12             ; source mix

aNoise   pinkish   3                      ; pink noise
aVCO     vco2      1.2,kFund,2,0.02       ; pulse tone
aInput   ntrpol    aVCO,aNoise,kSrc       ; input mix

; read formant cutoff frequencies from tables
kCF1     table     kVow,1+(iVoice*15),1
kCF2     table     kVow,2+(iVoice*15),1
kCF3     table     kVow,3+(iVoice*15),1
kCF4     table     kVow,4+(iVoice*15),1
kCF5     table     kVow,5+(iVoice*15),1
; read formant intensity values from tables
kDB1     table     kVow,6+(iVoice*15),1
kDB2     table     kVow,7+(iVoice*15),1
kDB3     table     kVow,8+(iVoice*15),1
kDB4     table     kVow,9+(iVoice*15),1
kDB5     table     kVow,10+(iVoice*15),1
; read formant bandwidths from tables
kBW1     table     kVow,11+(iVoice*15),1
kBW2     table     kVow,12+(iVoice*15),1
kBW3     table     kVow,13+(iVoice*15),1
kBW4     table     kVow,14+(iVoice*15),1
kBW5     table     kVow,15+(iVoice*15),1
; create resonant formants by filtering the source sound
aForm1   reson     aInput, kCF1, kBW1*kBW, 1     ; formant 1
aForm2   reson     aInput, kCF2, kBW2*kBW, 1     ; formant 2
aForm3   reson     aInput, kCF3, kBW3*kBW, 1     ; formant 3
aForm4   reson     aInput, kCF4, kBW4*kBW, 1     ; formant 4
aForm5   reson     aInput, kCF5, kBW5*kBW, 1     ; formant 5

; formants are mixed, each weighted by the intensity values read from the tables
aMix     sum       aForm1*ampdbfs(kDB1),aForm2*ampdbfs(kDB2),aForm3*ampdbfs(kDB3),aForm4*ampdbfs(kDB4),aForm5*ampdbfs(kDB5)
kEnv     linseg    0,3,1,p3-6,1,3,0     ; an amplitude envelope
outs      aMix*kEnv, aMix*kEnv ; send audio to outputs
endin

</CsInstruments>

<CsScore>
f 0 3600        ;DUMMY SCORE EVENT - PERMITS REALTIME PERFORMANCE FOR UP TO 1 HOUR

;FUNCTION TABLES STORING FORMANT DATA FOR EACH OF THE FIVE VOICE TYPES REPRESENTED
;BASS
f 1  0 32768 -7 600     10922   400     10922   250     10924   350     ;FREQ
f 2  0 32768 -7 1040    10922   1620    10922   1750    10924   600     ;FREQ
f 3  0 32768 -7 2250    10922   2400    10922   2600    10924   2400    ;FREQ
f 4  0 32768 -7 2450    10922   2800    10922   3050    10924   2675    ;FREQ
f 5  0 32768 -7 2750    10922   3100    10922   3340    10924   2950    ;FREQ
f 6  0 32768 -7 0       10922   0       10922   0       10924   0       ;dB
f 7  0 32768 -7 -7      10922   -12     10922   -30     10924   -20     ;dB
f 8  0 32768 -7 -9      10922   -9      10922   -16     10924   -32     ;dB
f 9  0 32768 -7 -9      10922   -12     10922   -22     10924   -28     ;dB
f 10 0 32768 -7 -20     10922   -18     10922   -28     10924   -36     ;dB
f 11 0 32768 -7 60      10922   40      10922   60      10924   40      ;BAND WIDTH
f 12 0 32768 -7 70      10922   80      10922   90      10924   80      ;BAND WIDTH
f 13 0 32768 -7 110     10922   100     10922   100     10924   100     ;BAND WIDTH
f 14 0 32768 -7 120     10922   120     10922   120     10924   120     ;BAND WIDTH
f 15 0 32768 -7 130     10922   120     10922   120     10924   120     ;BAND WIDTH
;TENOR
f 16 0 32768 -7 650     8192    400     8192    290     8192    400     8192    350     ;FREQ
f 17 0 32768 -7 1080    8192    1700    8192    1870    8192    800     8192    600     ;FREQ
f 18 0 32768 -7 2650    8192    2600    8192    2800    8192    2600    8192    2700    ;FREQ
f 19 0 32768 -7 2900    8192    3200    8192    3250    8192    2800    8192    2900    ;FREQ
f 20 0 32768 -7 3250    8192    3580    8192    3540    8192    3000    8192    3300    ;FREQ
f 21 0 32768 -7 0       8192    0       8192    0       8192    0       8192    0       ;dB
f 22 0 32768 -7 -6      8192    -14     8192    -15     8192    -10     8192    -20     ;dB
f 23 0 32768 -7 -7      8192    -12     8192    -18     8192    -12     8192    -17     ;dB
f 24 0 32768 -7 -8      8192    -14     8192    -20     8192    -12     8192    -14     ;dB
f 25 0 32768 -7 -22     8192    -20     8192    -30     8192    -26     8192    -26     ;dB
f 26 0 32768 -7 80      8192    70      8192    40      8192    40      8192    40      ;BAND WIDTH
f 27 0 32768 -7 90      8192    80      8192    90      8192    80      8192    60      ;BAND WIDTH
f 28 0 32768 -7 120     8192    100     8192    100     8192    100     8192    100     ;BAND WIDTH
f 29 0 32768 -7 130     8192    120     8192    120     8192    120     8192    120     ;BAND WIDTH
f 30 0 32768 -7 140     8192    120     8192    120     8192    120     8192    120     ;BAND WIDTH
;COUNTER TENOR
f 31 0 32768 -7 660     8192    440     8192    270     8192    430     8192    370     ;FREQ
f 32 0 32768 -7 1120    8192    1800    8192    1850    8192    820     8192    630     ;FREQ
f 33 0 32768 -7 2750    8192    2700    8192    2900    8192    2700    8192    2750    ;FREQ
f 34 0 32768 -7 3000    8192    3000    8192    3350    8192    3000    8192    3000    ;FREQ
f 35 0 32768 -7 3350    8192    3300    8192    3590    8192    3300    8192    3400    ;FREQ
f 36 0 32768 -7 0       8192    0       8192    0       8192    0       8192    0       ;dB
f 37 0 32768 -7 -6      8192    -14     8192    -24     8192    -10     8192    -20     ;dB
f 38 0 32768 -7 -23     8192    -18     8192    -24     8192    -26     8192    -23     ;dB
f 39 0 32768 -7 -24     8192    -20     8192    -36     8192    -22     8192    -30     ;dB
f 40 0 32768 -7 -38     8192    -20     8192    -36     8192    -34     8192    -30     ;dB
f 41 0 32768 -7 80      8192    70      8192    40      8192    40      8192    40      ;BAND WIDTH
f 42 0 32768 -7 90      8192    80      8192    90      8192    80      8192    60      ;BAND WIDTH
f 43 0 32768 -7 120     8192    100     8192    100     8192    100     8192    100     ;BAND WIDTH
f 44 0 32768 -7 130     8192    120     8192    120     8192    120     8192    120     ;BAND WIDTH
f 45 0 32768 -7 140     8192    120     8192    120     8192    120     8192    120     ;BAND WIDTH
;ALTO
f 46 0 32768 -7 800     8192    400     8192    350     8192    450     8192    325     ;FREQ
f 47 0 32768 -7 1150    8192    1600    8192    1700    8192    800     8192    700     ;FREQ
f 48 0 32768 -7 2800    8192    2700    8192    2700    8192    2830    8192    2530    ;FREQ
f 49 0 32768 -7 3500    8192    3300    8192    3700    8192    3500    8192    2500    ;FREQ
f 50 0 32768 -7 4950    8192    4950    8192    4950    8192    4950    8192    4950    ;FREQ
f 51 0 32768 -7 0       8192    0       8192    0       8192    0       8192    0       ;dB
f 52 0 32768 -7 -4      8192    -24     8192    -20     8192    -9      8192    -12     ;dB
f 53 0 32768 -7 -20     8192    -30     8192    -30     8192    -16     8192    -30     ;dB
f 54 0 32768 -7 -36     8192    -35     8192    -36     8192    -28     8192    -40     ;dB
f 55 0 32768 -7 -60     8192    -60     8192    -60     8192    -55     8192    -64     ;dB
f 56 0 32768 -7 50      8192    60      8192    50      8192    70      8192    50      ;BAND WIDTH
f 57 0 32768 -7 60      8192    80      8192    100     8192    80      8192    60      ;BAND WIDTH
f 58 0 32768 -7 170     8192    120     8192    120     8192    100     8192    170     ;BAND WIDTH
f 59 0 32768 -7 180     8192    150     8192    150     8192    130     8192    180     ;BAND WIDTH
f 60 0 32768 -7 200     8192    200     8192    200     8192    135     8192    200     ;BAND WIDTH
;SOPRANO
f 61 0 32768 -7 800     8192    350     8192    270     8192    450     8192    325     ;FREQ
f 62 0 32768 -7 1150    8192    2000    8192    2140    8192    800     8192    700     ;FREQ
f 63 0 32768 -7 2900    8192    2800    8192    2950    8192    2830    8192    2700    ;FREQ
f 64 0 32768 -7 3900    8192    3600    8192    3900    8192    3800    8192    3800    ;FREQ
f 65 0 32768 -7 4950    8192    4950    8192    4950    8192    4950    8192    4950    ;FREQ
f 66 0 32768 -7 0       8192    0       8192    0       8192    0       8192    0       ;dB
f 67 0 32768 -7 -6      8192    -20     8192    -12     8192    -11     8192    -16     ;dB
f 68 0 32768 -7 -32     8192    -15     8192    -26     8192    -22     8192    -35     ;dB
f 69 0 32768 -7 -20     8192    -40     8192    -26     8192    -22     8192    -40     ;dB
f 70 0 32768 -7 -50     8192    -56     8192    -44     8192    -50     8192    -60     ;dB
f 71 0 32768 -7 80      8192    60      8192    60      8192    70      8192    50      ;BAND WIDTH
f 72 0 32768 -7 90      8192    90      8192    90      8192    80      8192    60      ;BAND WIDTH
f 73 0 32768 -7 120     8192    100     8192    100     8192    100     8192    170     ;BAND WIDTH
f 74 0 32768 -7 130     8192    150     8192    120     8192    130     8192    180     ;BAND WIDTH
f 75 0 32768 -7 140     8192    200     8192    120     8192    135     8192    200     ;BAND WIDTH

; p4 = fundamental begin value (c.p.s.)
; p5 = fundamental end value
; p6 = vowel begin value (0 - 1 : a e i o u)
; p7 = vowel end value
; p8 = bandwidth factor begin (suggested range 0 - 2)
; p9 = bandwidth factor end
; p10 = voice (0=bass; 1=tenor; 2=counter_tenor; 3=alto; 4=soprano)
; p11 = input source begin (0 - 1 : VCO - noise)
; p12 = input source end

;         p4  p5  p6  p7  p8  p9 p10 p11  p12
i 1 0  10 50  100 0   1   2   0  0   0    0
i 1 8  .  78  77  1   0   1   0  1   0    0
i 1 16 .  150 118 0   1   1   0  2   1    1
i 1 24 .  200 220 1   0   0.2 0  3   1    0
i 1 32 .  400 800 0   1   0.2 0  4   0    1
e
</CsScore>

</CsoundSynthesizer>
```

## Conclusion

These examples have hopefully demonstrated the strengths of subtractive synthesis: its simplicity, intuitive operation and ability to create organic-sounding timbres. Further research could explore Csound's other filter opcodes, including vcomb, wguide1, wguide2 and the more esoteric phaser1, phaser2 and resony.

# AMPLITUDE AND RING MODULATION

## Introduction

Amplitude modulation (AM) means that one oscillator varies the volume/amplitude of another. If this modulation is done very slowly (1 Hz to 10 Hz) it is recognised as tremolo. Volume modulation above 10 Hz changes the timbre of the sound: so-called sidebands appear.

Example 04C01.csd

```<CsoundSynthesizer>
<CsOptions>
-o dac
</CsOptions>
<CsInstruments>

sr = 48000
ksmps = 32
nchnls = 1
0dbfs = 1

instr 1
aRaise expseg 2, 20, 100
aModSine poscil 0.5, aRaise, 1
aDCOffset = 0.5    ; we want amplitude-modulation
aCarSine poscil 0.3, 440, 1
out aCarSine*(aModSine + aDCOffset)
endin

</CsInstruments>
<CsScore>
f 1 0 1024 10 1
i 1 0 25
e
</CsScore>
</CsoundSynthesizer>
; written by Alex Hofmann (Mar. 2011)
```

## Theory, Mathematics and Sidebands

The sidebands appear on both sides of the main frequency: (freq1 - freq2) and (freq1 + freq2).

The sounding result of the following example can be calculated like this: freq1 = 440 Hz, freq2 = 40 Hz -> the result is a sound containing the frequencies [400, 440, 480] Hz.

The strength of the sidebands relative to the carrier can be controlled by the DC offset of the modulator.
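
The sideband arithmetic can be verified with a few lines of Python; this sketch tracks only the frequencies present, not the audio itself:

```python
def am_spectrum(carrier, modulator, dc_offset):
    """Frequencies present when a sine carrier is multiplied by
    (sine modulator + DC offset): the two sidebands always appear,
    while the carrier itself only appears when the offset is non-zero."""
    freqs = {carrier - modulator, carrier + modulator}
    if dc_offset != 0:
        freqs.add(carrier)
    return sorted(freqs)

print(am_spectrum(440, 40, dc_offset=0.5))  # [400, 440, 480]
print(am_spectrum(440, 40, dc_offset=0))    # ring modulation: [400, 480]
```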

Example 04C02.csd

```<CsoundSynthesizer>
<CsOptions>
-o dac
</CsOptions>
<CsInstruments>

sr = 48000
ksmps = 32
nchnls = 1
0dbfs = 1

instr 1
aOffset linseg 0, 1, 0, 5, 0.6, 3, 0
aSine1 poscil 0.3, 40 , 1
aSine2 poscil 0.3, 440, 1
out (aSine1+aOffset)*aSine2
endin

</CsInstruments>
<CsScore>
f 1 0 1024 10 1
i 1 0 10
e
</CsScore>
</CsoundSynthesizer>
; written by Alex Hofmann (Mar. 2011)
```

Ring modulation is a special case of AM without a DC offset (DC offset = 0): the modulator varies between -1 and +1 like the carrier. The audible difference from AM is that RM does not contain the carrier frequency.

(If the modulator is unipolar, i.e. oscillates between 0 and +1, the effect is called AM.)

## More Complex Synthesis using Ring Modulation and Amplitude Modulation

If the modulator itself has more harmonics, the result quickly becomes more complex.

Carrier freq: 600 Hz
Modulator freqs: 200 Hz with 3 harmonics = [200, 400, 600] Hz
Resulting freqs: [0, 200, 400, <-600->, 800, 1000, 1200] Hz

Example 04C03.csd

```<CsoundSynthesizer>
<CsOptions>
-o dac
</CsOptions>
<CsInstruments>

sr = 48000
ksmps = 32
nchnls = 1
0dbfs = 1

instr 1   ; Ring-Modulation (no DC-Offset)
aSine1 poscil 0.3, 200, 2 ; -> [200, 400, 600] Hz
aSine2 poscil 0.3, 600, 1
out aSine1*aSine2
endin

</CsInstruments>
<CsScore>
f 1 0 1024 10 1 ; sine
f 2 0 1024 10 1 1 1; 3 harmonics
i 1 0 5
e
</CsScore>
</CsoundSynthesizer>
; written by Alex Hofmann (Mar. 2011)
```

Using an inharmonic modulator frequency also makes the result sound inharmonic, and varying the DC offset makes the sound spectrum evolve over time.
Modulator freqs: [230, 460, 690] Hz
Resulting freqs: [(-)90, 140, 370, <-600->, 830, 1060, 1290] Hz
(negative frequencies are mirrored into the positive range, but phase-inverted)
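
The same frequency bookkeeping, extended to a multi-partial modulator and with negative differences folded back into the positive range, can be sketched in Python. This covers pure ring modulation only; with a DC offset the carrier frequency (marked <-600-> above) is additionally present:

```python
def rm_spectrum(carrier, partials):
    """Sum and difference frequencies for each modulator partial;
    negative differences are mirrored into the positive range."""
    freqs = set()
    for m in partials:
        freqs.add(abs(carrier - m))   # difference, folded if negative
        freqs.add(carrier + m)        # sum
    return sorted(freqs)

# harmonic modulator: carrier 600 Hz, partials [200, 400, 600] Hz
print(rm_spectrum(600, [200, 400, 600]))  # [0, 200, 400, 800, 1000, 1200]
# inharmonic modulator: [230, 460, 690] Hz; -90 Hz folds to 90 Hz
print(rm_spectrum(600, [230, 460, 690]))  # [90, 140, 370, 830, 1060, 1290]
```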

Example 04C04.csd

```<CsoundSynthesizer>
<CsOptions>
-o dac
</CsOptions>
<CsInstruments>

sr = 48000
ksmps = 32
nchnls = 1
0dbfs = 1

instr 1   ; Amplitude-Modulation
aOffset linseg 0, 1, 0, 5, 1, 3, 0
aSine1 poscil 0.3, 230, 2 ; -> [230, 460, 690] Hz
aSine2 poscil 0.3, 600, 1
out (aSine1+aOffset)*aSine2
endin

</CsInstruments>
<CsScore>
f 1 0 1024 10 1 ; sine
f 2 0 1024 10 1 1 1; 3 harmonics
i 1 0 10
e
</CsScore>
</CsoundSynthesizer>
```

# GRANULAR SYNTHESIS

## Concept Behind Granular Synthesis

Granular synthesis is a technique in which a source sound or waveform is broken into many fragments, often of very short duration, which are then restructured and rearranged according to various patterning and indeterminacy functions.

If we imagine the simplest possible granular synthesis algorithm, in which a precise fragment of sound is repeated with regularity, there are two principal attributes of this process that we are most concerned with. Firstly, the duration of each sound grain is significant: if the grain duration is very small, typically less than 0.02 seconds, then little of the character of the source sound will be evident; if the grain duration is greater than 0.02 seconds then more of the character of the source sound or waveform will be evident. Secondly, the rate at which grains are generated is significant: if grain generation is below 20 hertz, i.e. less than 20 grains per second, then the stream of grains will be perceived as a rhythmic pulsation; if the rate of grain generation increases beyond 20 Hz then individual grains become harder to distinguish and instead we begin to perceive a buzzing tone, the fundamental of which corresponds to the frequency of grain generation. Any pitch contained within the source material is not normally perceived as the fundamental of the tone when grain generation is periodic; instead the pitch of the source material or waveform is perceived as a resonance peak (sometimes referred to as a formant). Transposition of the source material therefore results in a shifting of this resonance peak.
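
These first principles can be sketched outside Csound as well. The following Python fragment assembles a stream of Hanning-enveloped sine grains into a sample buffer; all parameter values are arbitrary choices for illustration:

```python
import math

def grain_stream(sr=44100, dur=1.0, rate=100, grain_dur=0.04, src_freq=440):
    """Synchronous granular synthesis from first principles: a sine-wave
    grain of grain_dur seconds, shaped by a Hanning envelope to avoid
    clicks, is retriggered 'rate' times per second. With rate > 20 Hz the
    stream is heard as a tone at 'rate' Hz, while src_freq is heard as a
    resonance peak (formant) colouring that tone."""
    out = [0.0] * int(sr * dur)
    glen = int(sr * grain_dur)
    period = int(sr / rate)
    for start in range(0, len(out), period):
        for i in range(glen):
            if start + i >= len(out):
                break
            env = 0.5 - 0.5 * math.cos(2 * math.pi * i / glen)  # Hanning window
            out[start + i] += env * math.sin(2 * math.pi * src_freq * i / sr)
    return out

stream = grain_stream()
```

Because the grain duration (0.04 s) exceeds the grain period (0.01 s), successive grains overlap, which is why the envelope is essential: every grain starts and ends at zero amplitude.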

## Granular Synthesis Demonstrated Using First Principles

The following example demonstrates the concepts discussed above. None of Csound's built-in granular synthesis opcodes are used; instead, schedkwhen in instrument 1 is used to precisely control the triggering of grains in instrument 2. Three notes in instrument 1 are called from the score one after the other, which in turn generate three streams of grains in instrument 2. The first note demonstrates the transition from pulsation to the perception of a tone as the rate of grain generation extends beyond 20 Hz. The second note demonstrates the loss of influence of the source material as the grain duration is reduced below 0.02 seconds. The third note demonstrates how shifting the pitch of the source material for the grains results in the shifting of a resonance peak in the output tone. In each case, information regarding the rate of grain generation, grain duration and fundamental (source material pitch) is output to the terminal every half second so that the user can observe the changing parameters.

It should also be noted how the amplitude of each grain is enveloped in instrument 2. If grains were left unenveloped, they would likely produce clicks on account of discontinuities in the waveform at the beginning and end of each grain.
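
The effect of such an envelope can be sketched numerically; this hypothetical Python function mirrors the linear attack-sustain-release shape that linseg creates in instrument 2 of the example below (its defaults echo that instrument's 0.005-second fades and 0.2 peak):

```python
def grain_envelope(t, dur, attack=0.005, release=0.005, peak=0.2):
    """Amplitude at time t (seconds) for a grain of length dur seconds."""
    if t < 0 or t > dur:
        return 0.0
    if t < attack:                       # linear fade-in
        return peak * t / attack
    if t > dur - release:                # linear fade-out
        return peak * (dur - t) / release
    return peak                          # sustain portion

d = 0.05
# envelope starts and ends at zero, so no click at the grain boundaries
print(grain_envelope(0.0, d), grain_envelope(0.025, d), grain_envelope(d, d))
```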

Granular synthesis in which grain generation occurs with perceivable periodicity is referred to as synchronous granular synthesis. Granular synthesis in which this periodicity is not evident is referred to as asynchronous granular synthesis.

#### EXAMPLE 04F01.csd

```<CsoundSynthesizer>

<CsOptions>
-odac -m0
</CsOptions>

<CsInstruments>
;Example by Iain McCurdy

sr = 44100
ksmps = 1
nchnls = 1
0dbfs = 1

giSine  ftgen  0,0,4096,10,1

instr 1
kRate  expon  p4,p3,p5   ; rate of grain generation
kTrig  metro  kRate      ; a trigger to generate grains
kDur   expon  p6,p3,p7   ; grain duration
kForm  expon  p8,p3,p9   ; formant (spectral centroid)
;                      p1 p2 p3   p4
schedkwhen    kTrig,0,0,2, 0, kDur,kForm ;trigger a note(grain) in instr 2
;print data to terminal every 1/2 second
printks "Rate:%5.2F  Dur:%5.2F  Formant:%5.2F%n", 0.5, kRate, kDur, kForm
endin

instr 2
iForm =       p4
aEnv  linseg  0,0.005,0.2,p3-0.01,0.2,0.005,0
aSig  poscil  aEnv, iForm, giSine
out     aSig
endin

</CsInstruments>

<CsScore>
;p4 = rate begin
;p5 = rate end
;p6 = duration begin
;p7 = duration end
;p8 = formant begin
;p9 = formant end
; p1 p2 p3 p4 p5  p6   p7    p8  p9
i 1  0  30 1  100 0.02 0.02  400 400  ;demo of grain generation rate
i 1  31 10 10 10  0.4  0.01  400 400  ;demo of grain size
i 1  42 20 50 50  0.02 0.02  100 5000 ;demo of changing formant
e
</CsScore>

</CsoundSynthesizer>
```

## Granular Synthesis of Vowels: FOF

The principles outlined in the previous example can be extended to imitate vowel sounds produced by the human voice. This type of granular synthesis is referred to as FOF (fonction d'onde formantique) synthesis and is based on work by Xavier Rodet on his CHANT program at IRCAM. Typically five synchronous granular synthesis streams will be used to create five different resonant peaks in a fundamental tone in order to imitate different vowel sounds expressible by the human voice. The most crucial element in defining a vowel imitation is the degree to which the source material within each of the five grain streams is transposed. The bandwidth (essentially grain duration) and intensity (loudness) of each grain stream are also important in defining the resultant sound.

Csound has a number of opcodes that make working with FOF synthesis easier. We will be using fof.

Information regarding the frequency, bandwidth and intensity values that will produce various vowel sounds for different voice types can be found in the appendix of the Csound manual. These values are stored in function tables in the FOF synthesis example. GEN07, which produces linear break-point envelopes, is chosen, as we will then be able to morph continuously between vowels.
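
The morphing idea can be sketched outside Csound: reading a GEN07 table with interpolation amounts to linearly interpolating between adjacent breakpoints. This hypothetical helper does the same with a plain list; the values used are the four breakpoints of the bass first-formant table (f 1) in the score of the example that follows:

```python
def morph(breakpoints, pos):
    """Linearly interpolate a 0-1 position across a list of breakpoints,
    as reading a GEN07 break-point table with interpolation would."""
    x = pos * (len(breakpoints) - 1)
    i = min(int(x), len(breakpoints) - 2)
    frac = x - i
    return breakpoints[i] * (1 - frac) + breakpoints[i + 1] * frac

bass_formant1 = [600, 400, 250, 350]  # Hz, from table f 1 in the score below
print(morph(bass_formant1, 0.0))   # first vowel breakpoint: 600.0
print(morph(bass_formant1, 0.5))   # halfway between 400 and 250: 325.0
```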

#### EXAMPLE 04F02.csd

```<CsoundSynthesizer>

<CsOptions>
-odac
</CsOptions>

<CsInstruments>
;example by Iain McCurdy

sr = 44100
ksmps = 16
nchnls = 2
0dbfs = 1

instr 1
kFund    expon     p4,p3,p5               ; fundamental
kVow     line      p6,p3,p7               ; vowel select
kBW      line      p8,p3,p9               ; bandwidth factor
iVoice   =         p10                    ; voice select

; read formant cutoff frequencies from tables
kForm1     table     kVow,1+(iVoice*15),1
kForm2     table     kVow,2+(iVoice*15),1
kForm3     table     kVow,3+(iVoice*15),1
kForm4     table     kVow,4+(iVoice*15),1
kForm5     table     kVow,5+(iVoice*15),1
; read formant intensity values from tables
kDB1     table     kVow,6+(iVoice*15),1
kDB2     table     kVow,7+(iVoice*15),1
kDB3     table     kVow,8+(iVoice*15),1
kDB4     table     kVow,9+(iVoice*15),1
kDB5     table     kVow,10+(iVoice*15),1
; read formant bandwidths from tables
kBW1     table     kVow,11+(iVoice*15),1
kBW2     table     kVow,12+(iVoice*15),1
kBW3     table     kVow,13+(iVoice*15),1
kBW4     table     kVow,14+(iVoice*15),1
kBW5     table     kVow,15+(iVoice*15),1
; create resonant formants by filtering source sound
koct     =         1
aForm1   fof       ampdb(kDB1),kFund,kForm1,0,kBW1,0.003,0.02,0.007,\
1000,101,102,3600
aForm2   fof       ampdb(kDB2),kFund,kForm2,0,kBW2,0.003,0.02,0.007,\
1000,101,102,3600
aForm3   fof       ampdb(kDB3),kFund,kForm3,0,kBW3,0.003,0.02,0.007,\
1000,101,102,3600
aForm4   fof       ampdb(kDB4),kFund,kForm4,0,kBW4,0.003,0.02,0.007,\
1000,101,102,3600
aForm5   fof       ampdb(kDB5),kFund,kForm5,0,kBW5,0.003,0.02,0.007,\
1000,101,102,3600

; formants are mixed
aMix     sum       aForm1,aForm2,aForm3,aForm4,aForm5
kEnv     linseg    0,3,1,p3-6,1,3,0     ; an amplitude envelope
outs      aMix*kEnv*0.3, aMix*kEnv*0.3 ; send audio to outputs
endin

</CsInstruments>

<CsScore>
;FUNCTION TABLES STORING FORMANT DATA FOR EACH OF THE FIVE VOICE TYPES REPRESENTED
;BASS
f 1  0 32768 -7 600     10922   400     10922   250     10924   350     ;FREQ
f 2  0 32768 -7 1040    10922   1620    10922   1750    10924   600     ;FREQ
f 3  0 32768 -7 2250    10922   2400    10922   2600    10924   2400    ;FREQ
f 4  0 32768 -7 2450    10922   2800    10922   3050    10924   2675    ;FREQ
f 5  0 32768 -7 2750    10922   3100    10922   3340    10924   2950    ;FREQ
f 6  0 32768 -7 0       10922   0       10922   0       10924   0       ;dB
f 7  0 32768 -7 -7      10922   -12     10922   -30     10924   -20     ;dB
f 8  0 32768 -7 -9      10922   -9      10922   -16     10924   -32     ;dB
f 9  0 32768 -7 -9      10922   -12     10922   -22     10924   -28     ;dB
f 10 0 32768 -7 -20     10922   -18     10922   -28     10924   -36     ;dB
f 11 0 32768 -7 60      10922   40      10922   60      10924   40      ;BAND WIDTH
f 12 0 32768 -7 70      10922   80      10922   90      10924   80      ;BAND WIDTH
f 13 0 32768 -7 110     10922   100     10922   100     10924   100     ;BAND WIDTH
f 14 0 32768 -7 120     10922   120     10922   120     10924   120     ;BAND WIDTH
f 15 0 32768 -7 130     10922   120     10922   120     10924   120     ;BAND WIDTH
;TENOR
f 16 0 32768 -7 650     8192    400     8192    290     8192    400     8192    350     ;FREQ
f 17 0 32768 -7 1080    8192    1700    8192    1870    8192    800     8192    600     ;FREQ
f 18 0 32768 -7 2650    8192    2600    8192    2800    8192    2600    8192    2700    ;FREQ
f 19 0 32768 -7 2900    8192    3200    8192    3250    8192    2800    8192    2900    ;FREQ
f 20 0 32768 -7 3250    8192    3580    8192    3540    8192    3000    8192    3300    ;FREQ
f 21 0 32768 -7 0       8192    0       8192    0       8192    0       8192    0       ;dB
f 22 0 32768 -7 -6      8192    -14     8192    -15     8192    -10     8192    -20     ;dB
f 23 0 32768 -7 -7      8192    -12     8192    -18     8192    -12     8192    -17     ;dB
f 24 0 32768 -7 -8      8192    -14     8192    -20     8192    -12     8192    -14     ;dB
f 25 0 32768 -7 -22     8192    -20     8192    -30     8192    -26     8192    -26     ;dB
f 26 0 32768 -7 80      8192    70      8192    40      8192    40      8192    40      ;BAND WIDTH
f 27 0 32768 -7 90      8192    80      8192    90      8192    80      8192    60      ;BAND WIDTH
f 28 0 32768 -7 120     8192    100     8192    100     8192    100     8192    100     ;BAND WIDTH
f 29 0 32768 -7 130     8192    120     8192    120     8192    120     8192    120     ;BAND WIDTH
f 30 0 32768 -7 140     8192    120     8192    120     8192    120     8192    120     ;BAND WIDTH
;COUNTER TENOR
f 31 0 32768 -7 660     8192    440     8192    270     8192    430     8192    370     ;FREQ
f 32 0 32768 -7 1120    8192    1800    8192    1850    8192    820     8192    630     ;FREQ
f 33 0 32768 -7 2750    8192    2700    8192    2900    8192    2700    8192    2750    ;FREQ
f 34 0 32768 -7 3000    8192    3000    8192    3350    8192    3000    8192    3000    ;FREQ
f 35 0 32768 -7 3350    8192    3300    8192    3590    8192    3300    8192    3400    ;FREQ
f 36 0 32768 -7 0       8192    0       8192    0       8192    0       8192    0       ;dB
f 37 0 32768 -7 -6      8192    -14     8192    -24     8192    -10     8192    -20     ;dB
f 38 0 32768 -7 -23     8192    -18     8192    -24     8192    -26     8192    -23     ;dB
f 39 0 32768 -7 -24     8192    -20     8192    -36     8192    -22     8192    -30     ;dB
f 40 0 32768 -7 -38     8192    -20     8192    -36     8192    -34     8192    -30     ;dB
f 41 0 32768 -7 80      8192    70      8192    40      8192    40      8192    40      ;BAND WIDTH
f 42 0 32768 -7 90      8192    80      8192    90      8192    80      8192    60      ;BAND WIDTH
f 43 0 32768 -7 120     8192    100     8192    100     8192    100     8192    100     ;BAND WIDTH
f 44 0 32768 -7 130     8192    120     8192    120     8192    120     8192    120     ;BAND WIDTH
f 45 0 32768 -7 140     8192    120     8192    120     8192    120     8192    120     ;BAND WIDTH
;ALTO
f 46 0 32768 -7 800     8192    400     8192    350     8192    450     8192    325     ;FREQ
f 47 0 32768 -7 1150    8192    1600    8192    1700    8192    800     8192    700     ;FREQ
f 48 0 32768 -7 2800    8192    2700    8192    2700    8192    2830    8192    2530    ;FREQ
f 49 0 32768 -7 3500    8192    3300    8192    3700    8192    3500    8192    2500    ;FREQ
f 50 0 32768 -7 4950    8192    4950    8192    4950    8192    4950    8192    4950    ;FREQ
f 51 0 32768 -7 0       8192    0       8192    0       8192    0       8192    0       ;dB
f 52 0 32768 -7 -4      8192    -24     8192    -20     8192    -9      8192    -12     ;dB
f 53 0 32768 -7 -20     8192    -30     8192    -30     8192    -16     8192    -30     ;dB
f 54 0 32768 -7 -36     8192    -35     8192    -36     8192    -28     8192    -40     ;dB
f 55 0 32768 -7 -60     8192    -60     8192    -60     8192    -55     8192    -64     ;dB
f 56 0 32768 -7 50      8192    60      8192    50      8192    70      8192    50      ;BAND WIDTH
f 57 0 32768 -7 60      8192    80      8192    100     8192    80      8192    60      ;BAND WIDTH
f 58 0 32768 -7 170     8192    120     8192    120     8192    100     8192    170     ;BAND WIDTH
f 59 0 32768 -7 180     8192    150     8192    150     8192    130     8192    180     ;BAND WIDTH
f 60 0 32768 -7 200     8192    200     8192    200     8192    135     8192    200     ;BAND WIDTH
;SOPRANO
f 61 0 32768 -7 800     8192    350     8192    270     8192    450     8192    325     ;FREQ
f 62 0 32768 -7 1150    8192    2000    8192    2140    8192    800     8192    700     ;FREQ
f 63 0 32768 -7 2900    8192    2800    8192    2950    8192    2830    8192    2700    ;FREQ
f 64 0 32768 -7 3900    8192    3600    8192    3900    8192    3800    8192    3800    ;FREQ
f 65 0 32768 -7 4950    8192    4950    8192    4950    8192    4950    8192    4950    ;FREQ
f 66 0 32768 -7 0       8192    0       8192    0       8192    0       8192    0       ;dB
f 67 0 32768 -7 -6      8192    -20     8192    -12     8192    -11     8192    -16     ;dB
f 68 0 32768 -7 -32     8192    -15     8192    -26     8192    -22     8192    -35     ;dB
f 69 0 32768 -7 -20     8192    -40     8192    -26     8192    -22     8192    -40     ;dB
f 70 0 32768 -7 -50     8192    -56     8192    -44     8192    -50     8192    -60     ;dB
f 71 0 32768 -7 80      8192    60      8192    60      8192    70      8192    50      ;BAND WIDTH
f 72 0 32768 -7 90      8192    90      8192    90      8192    80      8192    60      ;BAND WIDTH
f 73 0 32768 -7 120     8192    100     8192    100     8192    100     8192    170     ;BAND WIDTH
f 74 0 32768 -7 130     8192    150     8192    120     8192    130     8192    180     ;BAND WIDTH
f 75 0 32768 -7 140     8192    200     8192    120     8192    135     8192    200     ;BAND WIDTH

f 101 0 4096 10 1                       ;SINE WAVE
;SIGMOID CURVE USED TO DEFINE THE ENVELOPE SHAPE OF FOF PULSES:
f 102 0 1024 19 0.5 0.5 270 0.5
; p4 = fundamental begin value (c.p.s.)
; p5 = fundamental end value
; p6 = vowel begin value (0 - 1 : a e i o u)
; p7 = vowel end value
; p8 = bandwidth factor begin (suggested range 0 - 2)
; p9 = bandwidth factor end
; p10 = voice (0=bass; 1=tenor; 2=counter_tenor; 3=alto; 4=soprano)

; p1 p2  p3  p4  p5  p6  p7  p8  p9 p10
i 1  0   10  50  100 0   1   2   0  0
i 1  8   .   78  77  1   0   1   0  1
i 1  16  .   150 118 0   1   1   0  2
i 1  24  .   200 220 1   0   0.2 0  3
i 1  32  .   400 800 0   1   0.2 0  4
e
</CsScore>
</CsoundSynthesizer>
```

## Asynchronous Granular Synthesis

The previous two examples have exploited psychoacoustic phenomena associated with the perception of granular textures that exhibit periodicity and patterns. If we introduce indeterminacy into some of the parameters of granular synthesis, we begin to lose the coherence of some of these harmonic structures.

The next example is based on the design of example 04F01.csd. Two streams of grains are generated. The first stream begins as a synchronous stream, but as the note progresses the periodicity of grain generation is eroded through the addition of an increasing degree of Gaussian noise: the tone can be heard to metamorphose from one characterized by steady purity to one of fuzzy airiness. The second note applies a similar process of increasing indeterminacy to the formant parameter (the frequency of the material within each grain).

Other parameters of granular synthesis such as the amplitude of each grain, grain duration, spatial location etc. can be similarly modulated with random functions to offset the psychoacoustic effects of synchronicity when using constant values.

#### EXAMPLE 04F03.csd

```<CsoundSynthesizer>

<CsOptions>
-odac
</CsOptions>

<CsInstruments>
;Example by Iain McCurdy

sr = 44100
ksmps = 1
nchnls = 1
0dbfs = 1

giWave  ftgen  0,0,2^10,10,1,1/2,1/4,1/8,1/16,1/32,1/64

instr 1 ;grain generating instrument 1
kRate         =          p4
kTrig         metro      kRate      ; a trigger to generate grains
kDur          =          p5
kForm         =          p6
;note delay time (p2) is defined using a random function -
;- beginning with no randomization but then gradually increasing
kDelayRange   transeg    0,1,0,0,  p3-1,4,0.03
kDelay        gauss      kDelayRange
;                                  p1 p2 p3   p4
schedkwhen kTrig,0,0,3, abs(kDelay), kDur,kForm ;trigger a note (grain) in instr 3
endin

instr 2 ;grain generating instrument 2
kRate          =          p4
kTrig          metro      kRate      ; a trigger to generate grains
kDur           =          p5
;formant frequency (p4) is multiplied by a random function -
;- beginning with no randomization but then gradually increasing
kForm          =          p6
kFormOSRange  transeg     0,1,0,0,  p3-1,2,12 ;range defined in semitones
kFormOS       gauss       kFormOSRange
;                                   p1 p2 p3   p4
schedkwhen  kTrig,0,0,3, 0, kDur,kForm*semitone(kFormOS)
endin

instr 3 ;grain sounding instrument
iForm =       p4
aEnv  linseg  0,0.005,0.2,p3-0.01,0.2,0.005,0
aSig  poscil  aEnv, iForm, giWave
out     aSig
endin

</CsInstruments>

<CsScore>
;p4 = rate
;p5 = duration
;p6 = formant
; p1 p2   p3 p4  p5   p6
i 1  0    12 200 0.02 400
i 2  12.5 12 200 0.02 400
e
</CsScore>

</CsoundSynthesizer>
```

## Synthesis of Dynamic Sound Spectra: grain3

The next example introduces another of Csound's built-in granular synthesis opcodes to demonstrate the range of dynamic sound spectra that are possible with granular synthesis.

Several parameters are modulated slowly using Csound's random spline generator rspline. These parameters are formant frequency, grain duration and grain density (rate of grain generation). The waveform used in generating the content for each grain is randomly chosen using a slow sample-and-hold random function - a new waveform will be selected every 10 seconds. Five waveforms are provided: a sawtooth, a square wave, a triangle wave, a pulse wave and a band-limited buzz-like waveform. Some of these waveforms, particularly the sawtooth, square and pulse waveforms, can generate very high overtones; for this reason a high sample rate is recommended to reduce the risk of aliasing (see chapter 01A).

Current values for formant (cps), grain duration, density and waveform are printed to the terminal every second. The key for waveforms is: 1:sawtooth; 2:square; 3:triangle; 4:pulse; 5:buzz.

#### EXAMPLE 04F04.csd

```<CsoundSynthesizer>

<CsOptions>
-odac
</CsOptions>

<CsInstruments>
;example by Iain McCurdy

sr = 96000
ksmps = 16
nchnls = 1
0dbfs = 1

;waveforms used for granulation
giSaw   ftgen 1,0,4096,7,0,4096,1
giSq    ftgen 2,0,4096,7,0,2046,0,0,1,2046,1
giTri   ftgen 3,0,4096,7,0,2046,1,2046,0
giPls   ftgen 4,0,4096,7,1,200,1,0,0,4096-200,0
giBuzz  ftgen 5,0,4096,11,20,1,1

;window function - used as an amplitude envelope for each grain
;(hanning window)
giWFn   ftgen 7,0,16384,20,2,1

instr 1
;random spline generates formant values in oct format
kOct    rspline 4,8,0.1,0.5
;oct format values converted to cps format
kCPS    =       cpsoct(kOct)
;phase location is left at 0 (the beginning of the waveform)
kPhs    =       0
;frequency (formant) randomization and phase randomization are not used
kFmd    =       0
kPmd    =       0
;grain duration and density (rate of grain generation)
kGDur   rspline 0.01,0.2,0.05,0.2
kDens   rspline 10,200,0.05,0.5
;maximum number of grain overlaps allowed. This is used as a CPU brake
iMaxOvr =       1000
;function table for source waveform for content of the grain
;a different waveform chosen once every 10 seconds
kFn     randomh 1,5.99,0.1
;print info. to the terminal
printks "CPS:%5.2F%TDur:%5.2F%TDensity:%5.2F%TWaveform:%1.0F%n",1,\
kCPS,kGDur,kDens,kFn
aSig    grain3  kCPS, kPhs, kFmd, kPmd, kGDur, kDens, iMaxOvr, kFn, giWFn, \
0, 0
out     aSig*0.06
endin

</CsInstruments>

<CsScore>
i 1 0 300
e
</CsScore>

</CsoundSynthesizer>

```

The final example introduces grain3's two built-in randomizing functions for phase and pitch. Phase refers to the location in the source waveform from which a grain will be read; pitch refers to the pitch of the material within grains. In this example a long note is played: initially no randomization is employed, but gradually phase randomization is increased and then reduced back to zero. The same process is applied to the pitch randomization amount parameter. This time the grain size is relatively large: 0.8 seconds, with the density correspondingly low: 20 Hz.

#### EXAMPLE 04F05.csd

```<CsoundSynthesizer>

<CsOptions>
-odac
</CsOptions>

<CsInstruments>
;example by Iain McCurdy

sr = 44100
ksmps = 16
nchnls = 1
0dbfs = 1

;waveforms used for granulation
giBuzz  ftgen 1,0,4096,11,40,1,0.9

;window function - used as an amplitude envelope for each grain
;(bartlett window)
giWFn   ftgen 2,0,16384,20,3,1

instr 1
kCPS    =       100
kPhs    =       0
kFmd    transeg 0,21,0,0, 10,4,15, 10,-4,0
kPmd    transeg 0,1,0,0,  10,4,1,  10,-4,0
kGDur   =       0.8
kDens   =       20
iMaxOvr =       1000
kFn     =       1
;print info. to the terminal
printks "Random Phase:%5.2F%TPitch Random:%5.2F%n",1,kPmd,kFmd
aSig    grain3  kCPS, kPhs, kFmd, kPmd, kGDur, kDens, iMaxOvr, kFn, giWFn, 0, 0
out     aSig*0.06
endin

</CsInstruments>

<CsScore>
i 1 0 51
e
</CsScore>

</CsoundSynthesizer>
```

## Conclusion

This chapter has introduced some of the concepts behind the synthesis of new sounds based on simple waveforms using granular synthesis techniques. Only two of Csound's built-in opcodes for granular synthesis, fof and grain3, have been used; it is beyond the scope of this work to cover all of the many opcodes for granulation that Csound provides. This chapter has focused mainly on synchronous granular synthesis; chapter 05G, which introduces granulation of recorded sound files, makes greater use of asynchronous granular synthesis for time-stretching and pitch-shifting, and will also introduce some of Csound's other opcodes for granular synthesis.

# PHYSICAL MODELLING

With physical modelling we employ a completely different approach to synthesis than we do with all other standard techniques. Unusually, the focus is not primarily on producing a sound, but on modelling a physical process; if this process exhibits certain features, such as periodic oscillation within the frequency range of 20 to 20000 Hz, it will produce sound.

Physical modelling synthesis techniques do not build sound using wave tables, oscillators and audio signal generators; instead they attempt to establish a model, as a system in itself, which can then produce sound because the function it produces varies with time. A physical model usually derives from the real physical world, but could be any time-varying system. Physical modelling is an exciting area for the production of new sounds.

Compared with the complexity of a real-world physically dynamic system, a physical model will most likely represent a brutal simplification. Nevertheless, using this technique will demand a lot of formulae, because physical models are described in terms of mathematics. Although designing a model may require considerable work, once established the results commonly exhibit, by their very design, a lively tone with time-varying partials and a "natural" difference between attack and release - features that other synthesis techniques demand more work from the end user to establish.

Csound already contains many ready-made physical models as opcodes but you can still build your own from scratch. This chapter will look at how to implement two classical models from first principles and then introduce a number of Csound's ready made physical modelling opcodes.

## The Mass-Spring Model

Many oscillating processes in nature can be modelled as connections of masses and springs. Imagine one mass-spring unit which has been set into motion. This system can be described as a sequence of states, where every new state results from the two preceding ones. Assume the first state a0 is 0 and the second state a1 is 0.5. Without the restricting force of the spring, the mass would continue moving unimpeded at a constant velocity:

As the velocity between the first two states can be described as a1-a0, the value of the third state a2 will be:

a2 = a1 + (a1 - a0) = 0.5 + 0.5 = 1

But the spring pulls the mass back with a force which increases the further the mass moves away from the point of equilibrium. Therefore the mass's movement can be described as the product of a constant factor c and the last position a1. This damps the continuous movement of the mass, so that for a factor of c = 0.4 the next position will be:

a2 = (a1 + (a1 - a0)) - c * a1 = 1 - 0.2 = 0.8
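
Before implementing this in Csound, the recursion can be verified with a short Python sketch (the function name is ours; the starting values and constant are those used above):

```python
def mass_spring(a0, a1, c, steps):
    """Iterate a2 = a1 + (a1 - a0) - c*a1 and collect the new states."""
    states = []
    for _ in range(steps):
        a2 = a1 + (a1 - a0) - c * a1
        states.append(a2)
        a0, a1 = a1, a2      # the new state becomes the last state
    return states

print([round(v, 3) for v in mass_spring(0, 0.5, 0.4, 5)])
# -> [0.8, 0.78, 0.448, -0.063, -0.549]
```

The values swing around the point of equilibrium, exactly as the Csound printout below shows.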

Csound can easily calculate these values by simply applying the formula. For the first k-cycle, they are set via the init opcode. After calculating the new state, a1 becomes a0 and a2 becomes a1 for the next k-cycle. This is a csd which prints the new values five times per second. (The states are named here as k0/k1/k2 instead of a0/a1/a2, because k-rate values are needed for printing, rather than audio samples.)

EXAMPLE 04G01.csd

```<CsoundSynthesizer>
<CsOptions>
-n ;no sound
</CsOptions>
<CsInstruments>
sr = 44100
ksmps = 8820 ;5 steps per second

instr PrintVals
;initial values
kstep init 0
k0 init 0
k1 init 0.5
kc init 0.4
;calculation of the next value
k2 = k1 + (k1 - k0) - kc * k1
printks "State=%d: k0 = %.3f, k1 = %.3f, k2 = %.3f\n", 0, kstep, k0, k1, k2
;actualize values for the next step
kstep = kstep+1
k0 = k1
k1 = k2
endin

</CsInstruments>
<CsScore>
i "PrintVals" 0 10
</CsScore>
</CsoundSynthesizer>
;example by joachim heintz
```

The output starts with:

```State=0:  k0 =  0.000,  k1 =  0.500,  k2 =  0.800
State=1:  k0 =  0.500,  k1 =  0.800,  k2 =  0.780
State=2:  k0 =  0.800,  k1 =  0.780,  k2 =  0.448
State=3:  k0 =  0.780,  k1 =  0.448,  k2 = -0.063
State=4:  k0 =  0.448,  k1 = -0.063,  k2 = -0.549
State=5:  k0 = -0.063,  k1 = -0.549,  k2 = -0.815
State=6:  k0 = -0.549,  k1 = -0.815,  k2 = -0.756
State=7:  k0 = -0.815,  k1 = -0.756,  k2 = -0.393
State=8:  k0 = -0.756,  k1 = -0.393,  k2 =  0.126
State=9:  k0 = -0.393,  k1 =  0.126,  k2 =  0.595
State=10: k0 =  0.126,  k1 =  0.595,  k2 =  0.826
State=11: k0 =  0.595,  k1 =  0.826,  k2 =  0.727
State=12: k0 =  0.826,  k1 =  0.727,  k2 =  0.337
```

So, a sine wave has been created, without the use of any of Csound's oscillators...

Here is the audible proof:

EXAMPLE 04G02.csd

```<CsoundSynthesizer>
<CsOptions>
-odac
</CsOptions>
<CsInstruments>
sr = 44100
ksmps = 1
nchnls = 2
0dbfs = 1

instr MassSpring
;initial values
a0        init      0
a1        init      0.05
ic        =         0.01 ;spring constant
;calculation of the next value
a2        =         a1+(a1-a0) - ic*a1
outs      a0, a0
;actualize values for the next step
a0        =         a1
a1        =         a2
endin
</CsInstruments>
<CsScore>
i "MassSpring" 0 10
</CsScore>
</CsoundSynthesizer>
;example by joachim heintz, after martin neukom
```

As the next sample is calculated in the next control cycle, ksmps has to be set to 1. The resulting frequency depends on the spring constant: the higher the constant, the higher the frequency. The resulting amplitude depends on both the starting value and the spring constant.

This simple model shows the basic principle of a physical modelling synthesis: creating a system which produces sound because it varies in time. Certainly it is not the goal of physical modelling synthesis to reinvent the wheel of a sine wave. But modulating the parameters of a model may lead to interesting results. The next example varies the spring constant, which is now no longer a constant:

EXAMPLE 04G03.csd

```<CsoundSynthesizer>
<CsOptions>
-odac
</CsOptions>
<CsInstruments>
sr = 44100
ksmps = 1
nchnls = 2
0dbfs = 1

instr MassSpring
;initial values
a0        init      0
a1        init      0.05
kc        randomi   .001, .05, 8, 3
;calculation of the next value
a2        =         a1+(a1-a0) - kc*a1
outs      a0, a0
;actualize values for the next step
a0        =         a1
a1        =         a2
endin
</CsInstruments>
<CsScore>
i "MassSpring" 0 10
</CsScore>
</CsoundSynthesizer>
;example by joachim heintz
```

Working with physical modelling demands thinking in more physical or mathematical terms: you might, for example, change the formula once a certain value of c has been reached, or combine more than one spring.

## The Karplus-Strong Algorithm: Plucked String

The Karplus-Strong algorithm provides another simple yet interesting example of how physical modelling can be used to synthesize sound. A buffer is filled with random values of either +1 or -1. The mean of the last two values at the end of the buffer is calculated. This value is then put back at the beginning of the buffer, and all the values in the buffer are shifted by one position.

This is what happens for a buffer of five values, for the first five steps:

```
initial state:   1    -1     1     1    -1
step 1:          0     1    -1     1     1
step 2:          1     0     1    -1     1
step 3:          0     1     0     1    -1
step 4:          0     0     1     0     1
step 5:          0.5   0     0     1     0
```
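
These buffer-averaging steps can be simulated directly in Python (an illustrative sketch, mirroring the shift-and-average behaviour of the KS opcode defined in the Csound example below):

```python
def ks_step(buf):
    """One Karplus-Strong step: average the last two values, shift
    everything one position right, write the mean at index 0."""
    mean = (buf[-1] + buf[-2]) / 2
    return [mean] + buf[:-1]

buf = [1, -1, 1, 1, -1]
for step in range(1, 6):
    buf = ks_step(buf)
    print("step", step, buf)
# after five steps the buffer holds [0.5, 0.0, 0.0, 1.0, 0.0]
```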

The next Csound example represents the content of the buffer in a function table, implements and executes the algorithm, and prints the result after every five steps, which here are referred to as one cycle:

EXAMPLE 04G04.csd

```<CsoundSynthesizer>
<CsOptions>
-n
</CsOptions>
<CsInstruments>
sr = 44100
ksmps = 32
nchnls = 1
0dbfs = 1

opcode KS, 0, ii
;performs the karplus-strong algorithm
iTab, iTbSiz xin
;calculate the mean of the last two values
iUlt      tab_i     iTbSiz-1, iTab
iPenUlt   tab_i     iTbSiz-2, iTab
iNewVal   =         (iUlt + iPenUlt) / 2
;shift values one position to the right
indx      =         iTbSiz-2
loop:
iVal      tab_i     indx, iTab
tabw_i    iVal, indx+1, iTab
loop_ge   indx, 1, 0, loop
;fill the new value at the beginning of the table
tabw_i    iNewVal, 0, iTab
endop

opcode PrintTab, 0, iiS
;prints table content, with a starting string
iTab, iTbSiz, Sout xin
indx      =         0
loop:
iVal      tab_i     indx, iTab
Snew      sprintf   "%8.3f", iVal
Sout      strcat    Sout, Snew
loop_lt   indx, 1, iTbSiz, loop
puts      Sout, 1
endop

instr ShowBuffer
;fill the function table
iTab      ftgen     0, 0, -5, -2, 1, -1, 1, 1, -1
iTbLen    tableng   iTab
;loop cycles (five states)
iCycle    =         0
cycle:
Scycle    sprintf   "Cycle %d:", iCycle
PrintTab  iTab, iTbLen, Scycle
;loop states
iState    =         0
state:
KS        iTab, iTbLen
loop_lt   iState, 1, iTbLen, state
loop_lt   iCycle, 1, 10, cycle
endin

</CsInstruments>
<CsScore>
i "ShowBuffer" 0 1
</CsScore>
</CsoundSynthesizer>
```

This is the output:

```Cycle 0:   1.000  -1.000   1.000   1.000  -1.000
Cycle 1:   0.500   0.000   0.000   1.000   0.000
Cycle 2:   0.500   0.250   0.000   0.500   0.500
Cycle 3:   0.500   0.375   0.125   0.250   0.500
Cycle 4:   0.438   0.438   0.250   0.188   0.375
Cycle 5:   0.359   0.438   0.344   0.219   0.281
Cycle 6:   0.305   0.398   0.391   0.281   0.250
Cycle 7:   0.285   0.352   0.395   0.336   0.266
Cycle 8:   0.293   0.318   0.373   0.365   0.301
Cycle 9:   0.313   0.306   0.346   0.369   0.333
```

It can be seen clearly that the values are smoothed more and more from cycle to cycle. As the buffer size is very small here, the values tend towards a constant level, in this case 0.333. But for larger buffer sizes, after some cycles the buffer content takes on the character of a waveform period which is repeated with a slight loss of amplitude. This is how it sounds if the buffer size is 1/100 second (or 441 samples at sr=44100):

EXAMPLE 04G05.csd

```<CsoundSynthesizer>
<CsOptions>
-odac
</CsOptions>
<CsInstruments>
sr = 44100
ksmps =  1
nchnls = 2
0dbfs = 1

instr 1
;delay time
iDelTm    =         0.01
;fill the delay line with either -1 or 1 randomly
kDur      timeinsts
if kDur < iDelTm then
aFill     rand      1, 2, 1, 1 ;values 0-2
aFill     =         floor(aFill)*2 - 1 ;just -1 or +1
else
aFill     =         0
endif
;delay and feedback
aUlt      init      0 ;last sample in the delay line
aUlt1     init      0 ;delayed by one sample
aMean     =         (aUlt+aUlt1)/2 ;mean of these two
aUlt      delay     aFill+aMean, iDelTm
aUlt1     delay1    aUlt
outs      aUlt, aUlt
endin

</CsInstruments>
<CsScore>
i 1 0 60
</CsScore>
</CsoundSynthesizer>
;example by joachim heintz, after martin neukom
```

This sound resembles a plucked string: at the beginning the sound is noisy, but after a short period of time it exhibits periodicity. As can be heard, unlike a natural string, the steady state is virtually endless, so for practical use it needs some fade-out. The frequency the listener perceives is related to the length of the delay line: if the delay line is 1/100 of a second, the perceived frequency is 100 Hz. Compared with a sine wave of similar frequency, the inherent periodicity can be seen, and also the rich overtone structure:
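
The pitch relation just described can be written as a one-line helper (an illustrative sketch, not part of the Csound example above):

```python
def ks_frequency(buffer_samples, sr=44100):
    """Perceived fundamental of a Karplus-Strong delay line:
    the sample rate divided by the buffer length in samples."""
    return sr / buffer_samples

print(ks_frequency(441))  # a 1/100 second delay line -> 100.0 Hz
```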

Csound also contains over forty opcodes which provide a wide variety of ready-made physical models and emulations. A small number of them will be introduced here to give a brief overview of the sort of things available.

## wgbow - A Waveguide Emulation of a Bowed String by Perry Cook

Perry Cook is a prolific author of physical models and much of his work has been converted into Csound opcodes. A number of these models - wgbow, wgflute, wgclar, wgbowedbar and wgbrass - are based on waveguides. A waveguide, in its broadest sense, is some sort of mechanism that limits the extent of oscillations, such as a vibrating string fixed at both ends or a pipe. In these sorts of physical models a delay is used to emulate these limits. One of these opcodes, wgbow, implements an emulation of a bowed string. Perhaps the most interesting aspect of many physical models is not specifically how accurately they emulate the target instrument played in a conventional way, but the facilities they provide for extending the physical limits of the instrument and how it is played - there are already vast sample libraries and software samplers for emulating conventional instruments played conventionally. wgbow offers several interesting options for experimentation, including the ability to modulate the bow pressure and the bowing position at k-rate. Varying bow pressure will change the tone of the sound produced by changing the harmonic emphasis. As bow pressure reduces, the fundamental of the tone becomes weaker and overtones become more prominent. If the bow pressure is reduced further, the ability of the system to produce a resonance at all collapses. This boundary between tone production and the inability to produce a tone can provide some interesting new sound effects. The following example explores this sound area by modulating the bow pressure parameter around this threshold. Some additional features enhance the example: seven different notes are played simultaneously, the bow pressure modulations in the right channel are delayed by a varying amount with respect to the left channel in order to create a stereo effect, and a reverb has been added.

EXAMPLE 04G06.csd

```<CsoundSynthesizer>

<CsOptions>
-odac
</CsOptions>

<CsInstruments>

sr      =       44100
ksmps   =       32
nchnls  =       2
0dbfs   =       1
seed    0

gisine  ftgen   0,0,4096,10,1

gaSendL,gaSendR init 0

instr 1 ; wgbow instrument
kamp     =        0.3
kfreq    =        p4
ipres1   =        p5
ipres2   =        p6
; kpres (bow pressure) defined using a random spline
kpres    rspline  p5,p6,0.5,2
krat     =        0.127236
kvibf    =        4.5
kvibamp  =        0
iminfreq =        20
; call the wgbow opcode
aSigL    wgbow    kamp,kfreq,kpres,krat,kvibf,kvibamp,gisine,iminfreq
; modulating delay time
kdel     rspline  0.01,0.1,0.1,0.5
; bow pressure parameter delayed by a varying time in the right channel
kpres    vdel_k   kpres,kdel,0.2,2
aSigR    wgbow    kamp,kfreq,kpres,krat,kvibf,kvibamp,gisine,iminfreq
outs     aSigL,aSigR
; send some audio to the reverb
gaSendL  =        gaSendL + aSigL/3
gaSendR  =        gaSendR + aSigR/3
endin

instr 2 ; reverb
aRvbL,aRvbR reverbsc gaSendL,gaSendR,0.9,7000
outs     aRvbL,aRvbR
clear    gaSendL,gaSendR
endin

</CsInstruments>

<CsScore>
; instr. 1
;  p4 = pitch (hz.)
;  p5 = minimum bow pressure
;  p6 = maximum bow pressure
; 7 notes played by the wgbow instrument
i 1  0 480  70 0.03 0.1
i 1  0 480  85 0.03 0.1
i 1  0 480 100 0.03 0.09
i 1  0 480 135 0.03 0.09
i 1  0 480 170 0.02 0.09
i 1  0 480 202 0.04 0.1
i 1  0 480 233 0.05 0.11
; reverb instrument
i 2 0 480
</CsScore>

</CsoundSynthesizer>
```

This time a stack of eight sustaining notes, each separated by an octave, vary their 'bowing position' randomly and independently. You will hear how different bowing positions accentuate and attenuate different partials of the bowing tone. To enhance the sound produced, some filtering with butlp and pareq is employed and some reverb is added.

EXAMPLE 04G07.csd

```<CsoundSynthesizer>

<CsOptions>
-odac
</CsOptions>

<CsInstruments>

sr      =       44100
ksmps   =       32
nchnls  =       2
0dbfs   =       1
seed    0

gisine  ftgen   0,0,4096,10,1

gaSend init 0

instr 1 ; wgbow instrument
kamp     =        0.1
kfreq    =        p4
kpres    =        0.2
krat     rspline  0.006,0.988,0.1,0.4
kvibf    =        4.5
kvibamp  =        0
iminfreq =        20
aSig     wgbow    kamp,kfreq,kpres,krat,kvibf,kvibamp,gisine,iminfreq
aSig     butlp     aSig,2000
aSig     pareq    aSig,80,6,0.707
outs     aSig,aSig
gaSend   =        gaSend + aSig/3
endin

instr 2 ; reverb
aRvbL,aRvbR reverbsc gaSend,gaSend,0.9,7000
outs     aRvbL,aRvbR
clear    gaSend
endin

</CsInstruments>

<CsScore>
; instr. 1 (wgbow instrument)
;  p4 = pitch (hertz)
; wgbow instrument
i 1  0 480  20
i 1  0 480  40
i 1  0 480  80
i 1  0 480  160
i 1  0 480  320
i 1  0 480  640
i 1  0 480  1280
i 1  0 480  2460
; reverb instrument
i 2 0 480
</CsScore>

</CsoundSynthesizer>
```

All of the wg- family of opcodes are worth exploring, and the approach taken here - exploring each input parameter in isolation whilst the others retain constant values - often sets the path to understanding a model better. Tone production with wgbrass is very much dependent upon the relationship between intended pitch and lip tension; random experimentation with this opcode is as likely to result in silence as in sound, and in this way is perhaps a reflection of the experience of learning a brass instrument, when the student spends most time pushing air silently through the instrument. With patience it is capable of some interesting sounds, however. In its case, I would recommend building a realtime GUI and exploring the interaction of its input arguments that way. wgbowedbar, like a number of physical modelling algorithms, is rather unstable. This is not necessarily a design flaw in the algorithm but perhaps an indication that it has been left quite open for our experimentation - or abuse. In these situations caution is advised in order to protect ears and loudspeakers. Positive feedback within the model can result in signals of enormous amplitude very quickly. Employing the clip opcode as a means of some protection is recommended when experimenting in realtime.

## barmodel - a Model of a Struck Metal Bar by Stefan Bilbao

barmodel can also imitate wooden bars, tubular bells, chimes and other resonant inharmonic objects. It is a model that can easily be abused to produce ear-shreddingly loud sounds, therefore precautions are advised when experimenting with it in realtime. We are presented with a wealth of input arguments, such as 'stiffness', 'strike position' and 'strike velocity', which relate in an easily understandable way to the physical process being emulated. Some parameters will evidently have a more dramatic effect on the sound produced than others, and again it is recommended to create a realtime GUI for exploration. Nonetheless, a fixed example is provided below that should offer some insight into the kinds of sounds possible.

Probably the most important parameter for us is the stiffness of the bar. This actually provides us with our pitch control, but it is not calibrated in cycles per second, so some experimentation will be required to find a desired pitch. There is a relationship between stiffness and the parameter used to define the width of the strike - when the stiffness coefficient is higher, a wider strike may be required in order for the note to sound. Strike width also impacts upon the tone produced: narrower strikes emphasize upper partials (provided a tone is still produced) whilst wider strikes tend to emphasize the fundamental.

The parameter for strike position also has some impact upon the spectral balance. This effect may be subtle and may depend upon other parameter settings; for example, when strike width is particularly wide, its effect may be imperceptible. A general rule of thumb here is that in order to achieve the greatest effect from strike position, strike width should be as low as will still produce a tone. This kind of interdependency between input parameters is the essence of working with a physical model, and can be both intriguing and frustrating.

An important parameter that will vary the impression of the bar from metal to wood is the high frequency loss (the damping parameter, ib in the example below).

An interesting feature incorporated into the model is the ability to modulate the point along the bar at which vibrations are read. This could also be described as the pick-up position. Moving this scanning location results in tonal and amplitude variations. We only have control over the frequency at which the scanning location is modulated.

EXAMPLE 04G07.csd⁴

```<CsoundSynthesizer>
<CsOptions>
-odac
</CsOptions>
<CsInstruments>
sr     = 44100
ksmps  = 32
nchnls = 2
0dbfs  = 1

instr   1
; boundary conditions 1=fixed 2=pivot 3=free
kbcL    =               1
kbcR    =               1
; stiffness
iK      =               p4
; high freq. loss (damping)
ib      =               p5
; scanning frequency
kscan   rspline         p6,p7,0.2,0.8
; time to reach 30db decay
iT30    =               p3
; strike position
ipos    random          0,1
; strike velocity
ivel    =               1000
; width of strike
iwid    =               0.1156
aSig    barmodel        kbcL,kbcR,iK,ib,kscan,iT30,ipos,ivel,iwid
kPan    rspline         0.1,0.9,0.5,2
aL,aR   pan2            aSig,kPan
outs             aL,aR
endin

</CsInstruments>

<CsScore>
;t 0 90 1 30 2 60 5 90 7 30
; p4 = stiffness (pitch)

#define gliss(dur'Kstrt'Kend'b'scan1'scan2)
#
i 1 0     20 $Kstrt $b $scan1 $scan2
i 1 ^+0.05 $dur >     $b $scan1 $scan2
i 1 ^+0.05 $dur >     $b $scan1 $scan2
i 1 ^+0.05 $dur >     $b $scan1 $scan2
i 1 ^+0.05 $dur >     $b $scan1 $scan2
i 1 ^+0.05 $dur >     $b $scan1 $scan2
i 1 ^+0.05 $dur >     $b $scan1 $scan2
i 1 ^+0.05 $dur >     $b $scan1 $scan2
i 1 ^+0.05 $dur >     $b $scan1 $scan2
i 1 ^+0.05 $dur >     $b $scan1 $scan2
i 1 ^+0.05 $dur >     $b $scan1 $scan2
i 1 ^+0.05 $dur >     $b $scan1 $scan2
i 1 ^+0.05 $dur >     $b $scan1 $scan2
i 1 ^+0.05 $dur >     $b $scan1 $scan2
i 1 ^+0.05 $dur >     $b $scan1 $scan2
i 1 ^+0.05 $dur >     $b $scan1 $scan2
i 1 ^+0.05 $dur >     $b $scan1 $scan2
i 1 ^+0.05 $dur $Kend $b $scan1 $scan2
#
$gliss(15'40'400'0.0755'0.1'2)
b 5
$gliss(2'80'800'0.755'0'0.1)
b 10
$gliss(3'10'100'0.1'0'0)
b 15
$gliss(40'40'433'0'0.2'5)
e
</CsScore>
</CsoundSynthesizer>
; example written by Iain McCurdy
```

## PhISEM - Physically Inspired Stochastic Event Modeling

The PhISEM set of models in Csound, again based on the work of Perry Cook, imitate instruments that rely on collisions between smaller sound-producing objects to create their sounds. These models include a tambourine, a set of bamboo windchimes and sleighbells. The models mimic these multiple collisions algorithmically, so that we only need to define elements such as the number of internal elements (timbrels, beans, bells etc.), internal damping and resonances. Once again the most interesting aspect of working with a model is to stretch its physical limits, so that we can hear the results from, for example, a maraca with an impossible number of beans or a tambourine with so little internal damping that it never decays. In the following example I explore tambourine, bamboo and sleighbells each in turn, first in a state that mimics the source instrument and then with some more extreme conditions.

EXAMPLE 04G08.csd

```<CsoundSynthesizer>

<CsOptions>
-odac
</CsOptions>

<CsInstruments>

sr     = 44100
ksmps  = 32
nchnls = 1
0dbfs  = 1

instr  1 ; tambourine
iAmp      =           p4
iDettack  =           0.01
iNum      =           p5
iDamp     =           p6
iMaxShake =           0
iFreq     =           p7
iFreq1    =           p8
iFreq2    =           p9
aSig      tambourine  iAmp,iDettack,iNum,iDamp,iMaxShake,iFreq,iFreq1,iFreq2
out         aSig
endin

instr  2 ; bamboo
iAmp      =           p4
iDettack  =           0.01
iNum      =           p5
iDamp     =           p6
iMaxShake =           0
iFreq     =           p7
iFreq1    =           p8
iFreq2    =           p9
aSig      bamboo      iAmp,iDettack,iNum,iDamp,iMaxShake,iFreq,iFreq1,iFreq2
out         aSig
endin

instr  3 ; sleighbells
iAmp      =           p4
iDettack  =           0.01
iNum      =           p5
iDamp     =           p6
iMaxShake =           0
iFreq     =           p7
iFreq1    =           p8
iFreq2    =           p9
aSig      sleighbells iAmp,iDettack,iNum,iDamp,iMaxShake,iFreq,iFreq1,iFreq2
out         aSig
endin

</CsInstruments>

<CsScore>
; p4 = amp.
; p5 = number of timbrels
; p6 = damping
; p7 = freq (main)
; p8 = freq 1
; p9 = freq 2

; tambourine
i 1 0 1 0.1  32 0.47 2300 5600 8100
i 1 + 1 0.1  32 0.47 2300 5600 8100
i 1 + 2 0.1  32 0.75 2300 5600 8100
i 1 + 2 0.05  2 0.75 2300 5600 8100
i 1 + 1 0.1  16 0.65 2000 4000 8000
i 1 + 1 0.1  16 0.65 1000 2000 3000
i 1 8 2 0.01  1 0.75 1257 2653 6245
i 1 8 2 0.01  1 0.75  673 3256 9102
i 1 8 2 0.01  1 0.75  314 1629 4756

b 10

; bamboo
i 2 0 1 0.4 1.25 0.0  2800 2240 3360
i 2 + 1 0.4 1.25 0.0  2800 2240 3360
i 2 + 2 0.4 1.25 0.05 2800 2240 3360
i 2 + 2 0.2   10 0.05 2800 2240 3360
i 2 + 1 0.3   16 0.01 2000 4000 8000
i 2 + 1 0.3   16 0.01 1000 2000 3000
i 2 8 2 0.1    1 0.05 1257 2653 6245
i 2 8 2 0.1    1 0.05 1073 3256 8102
i 2 8 2 0.1    1 0.05  514 6629 9756

b 20

; sleighbells
i 3 0 1 0.7 1.25 0.17 2500 5300 6500
i 3 + 1 0.7 1.25 0.17 2500 5300 6500
i 3 + 2 0.7 1.25 0.3  2500 5300 6500
i 3 + 2 0.4   10 0.3  2500 5300 6500
i 3 + 1 0.5   16 0.2  2000 4000 8000
i 3 + 1 0.5   16 0.2  1000 2000 3000
i 3 8 2 0.3    1 0.3  1257 2653 6245
i 3 8 2 0.3    1 0.3  1073 3256 8102
i 3 8 2 0.3    1 0.3   514 6629 9756
e
</CsScore>

</CsoundSynthesizer>
; example written by Iain McCurdy
```

Physical modelling can produce rich, spectrally dynamic sounds, with user manipulation usually abstracted to a small number of descriptive parameters. Csound offers a wealth of other opcodes for physical modelling which cannot all be introduced here, so the user is encouraged to explore based on the approaches exemplified here. You can find lists in the chapters Models and Emulations, Scanned Synthesis and Waveguide Physical Modeling of the Csound Manual.

1. The explanation here follows chapter 8.1.1 of Martin Neukom's Signale Systeme Klangsynthese (Bern 2003)
2. See chapter 03A INITIALIZATION AND PERFORMANCE PASS for more information about Csound's performance loops.
3. If defining this as a UDO, a local ksmps=1 could be set without affecting the general ksmps. See chapter 03F USER DEFINED OPCODES and the Csound Manual for setksmps for more information.
4. See chapter 03G MACROS about the use of macros in the score.

# WAVESHAPING

Waveshaping can in some ways be thought of as a relative of modulation techniques such as frequency or phase modulation, and it can achieve quite dramatic sound transformations through the application of a very simple process. Whereas in FM (frequency modulation) synthesis modulation occurs between two oscillators, waveshaping is implemented using a single oscillator (usually a simple sine oscillator) and a so-called 'transfer function'. The transfer function transforms and shapes the incoming amplitude values using a simple lookup process: if the incoming value is x, the outgoing value becomes y. This can be written as a table with two columns. Here is a simple example:

| Incoming (x) Value | Outgoing (y) Value |
| --- | --- |
| -0.5 or lower | -1 |
| between -0.5 and 0.5 | unchanged |
| 0.5 or higher | 1 |

Illustrating this in an x/y coordinate system results in the following image:

## Basic Implementation Model

Implementing this as Csound code is pretty straightforward. The x-axis is the amplitude of every single sample, which is in the range of -1 to +1.¹ This number has to be used as an index to a table which stores the transfer function. To create a table like the one above, you can use Csound's sub-routine GEN07.² This statement will create a table of 4096 points in the desired shape:

```giTrnsFnc ftgen 0, 0, 4096, -7, -0.5, 1024, -0.5, 2048, 0.5, 1024, 0.5
```

Now, two problems must be solved. First, the range of the function table index is not -1 to +1. Rather, it is either 0 to 4095 in raw index mode, or 0 to 1 in normalized mode. The simplest solution is to use the normalized index and rescale the incoming amplitudes, so that an amplitude of -1 becomes an index of 0, and an amplitude of 1 becomes an index of 1:

```aIndx = (aAmp + 1) / 2
```

The other problem stems from the difference in the accuracy of possible values in a sample and in a function table. Every single sample is encoded as a 32-bit floating point number in standard audio applications - or even as a 64-bit float in recent Csound.³ A table with 4096 points, on the other hand, resolves to a 12-bit number, so you will have a serious loss of accuracy (= sound quality) if you use the table values directly.⁴ Here, the solution is to use an interpolating table reader. The opcode tablei (instead of table) does this job. This opcode needs an extra point in the table for interpolating, so it is wise to use 4097 as the size instead of 4096.⁵
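The whole lookup chain - the GEN07-style table, the amplitude-to-index rescaling, and the linear interpolation that tablei performs - can be sketched in plain Python (not Csound; the table values mirror the ftgen statement shown earlier, and the helper name `waveshape` is invented for this sketch):

```python
# Build a 4096-point table plus one guard point, as described above:
# constant -0.5, a ramp from -0.5 to 0.5, constant 0.5.
SIZE = 4096
table = []
for i in range(SIZE + 1):            # +1: the extended guard point
    if i < 1024:
        table.append(-0.5)
    elif i < 3072:                   # ramp over 2048 points
        table.append(-0.5 + (i - 1024) / 2048)
    else:
        table.append(0.5)

def waveshape(amp):
    """Rescale an amplitude (-1..1) to a table position and interpolate."""
    pos = (amp + 1) / 2 * SIZE       # -1..1  ->  0..4096
    i = min(int(pos), SIZE - 1)
    frac = pos - i
    return table[i] * (1 - frac) + table[i + 1] * frac
```

The interpolation means that an amplitude falling between two table points gets a weighted mixture of both values instead of a coarse 12-bit step - which is exactly what tablei adds over table.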

This is the code for the simple waveshaping with the transfer function which has been discussed so far:

EXAMPLE 04E01.csd

```<CsoundSynthesizer>
<CsOptions>
-odac
</CsOptions>
<CsInstruments>
sr = 44100
ksmps = 32
nchnls = 2
0dbfs = 1

giTrnsFnc ftgen 0, 0, 4097, -7, -0.5, 1024, -0.5, 2048, 0.5, 1024, 0.5
giSine    ftgen 0, 0, 1024, 10, 1

instr 1
aAmp      poscil    1, 400, giSine
aIndx     =         (aAmp + 1) / 2
aWavShp   tablei    aIndx, giTrnsFnc, 1
outs      aWavShp, aWavShp
endin

</CsInstruments>
<CsScore>
i 1 0 10
</CsScore>
</CsoundSynthesizer>
```

## Chebychev Polynomials as Transfer Functions

1. Use the statement 0dbfs=1 in the orchestra header to ensure this.
2. See chapter 03D:FUNCTION TABLES to find more information about creating tables.
3. This is the 'd' in some abbreviations like Csound5.17-gnu-win32-d.exe (d = double precision floats).
4. Of course you can use an even smaller table if your goal is the degradation of the incoming sound ("distortion"). See chapter 05F for some examples.
5. A table size of a power-of-two plus one inserts the "extended guard point" as an extension of the last table value, instead of copying the first index to this location. See http://www.csounds.com/manual/html/f.html for more information.

# FREQUENCY MODULATION

## From Vibrato to the Emergence of Sidebands

A vibrato is a periodic change of pitch, normally smaller than a semitone and with a slow rate of change (around 5Hz). Frequency modulation is usually carried out with sine-wave oscillators.
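A quick arithmetic check of what "modulation width" means (plain Python; numbers taken from the first example below):

```python
# A vibrato of width w around a base frequency f sweeps between f - w and f + w.
base_freq = 440
width = 10               # modulation width in Hz
lo = base_freq - width   # lower limit of the sweep
hi = base_freq + width   # upper limit of the sweep
```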

Example 04D01.csd

```<CsoundSynthesizer>
<CsOptions>
-o dac
</CsOptions>
<CsInstruments>
sr = 48000
ksmps = 32
nchnls = 2
0dbfs = 1

instr 1
aMod poscil 10, 5 , 1  ; 5 Hz vibrato with 10 Hz modulation-width
aCar poscil 0.3, 440+aMod, 1  ; -> vibrato between 430-450 Hz
outs aCar, aCar
endin

</CsInstruments>
<CsScore>
f 1 0 1024 10 1                 ;Sine wave for table 1
i 1 0 2
</CsScore>
</CsoundSynthesizer>
; written by Alex Hofmann (Mar. 2011)
```

When the modulation width is increased, it becomes harder to perceive the base frequency, but it is still a vibrato.

Example 04D02.csd

```<CsoundSynthesizer>
<CsOptions>
-o dac
</CsOptions>
<CsInstruments>
sr = 48000
ksmps = 32
nchnls = 2
0dbfs = 1

instr 1
aMod poscil 90, 5 , 1 ; modulate 90Hz ->vibrato from 350 to 530 hz
aCar poscil 0.3, 440+aMod, 1
outs aCar, aCar
endin

</CsInstruments>
<CsScore>
f 1 0 1024 10 1                 ;Sine wave for table 1
i 1 0 2
</CsScore>
</CsoundSynthesizer>
; written by Alex Hofmann (Mar. 2011)

```

## The Simple Modulator->Carrier Pairing

Increasing the modulation rate leads to a different effect. Frequency modulation at a rate of more than 20Hz is no longer perceived as vibrato. The main oscillator frequency lies in the middle of the sound, and sidebands appear above and below it. The number of sidebands is related to the modulation amplitude; later this is controlled by the so-called modulation index.

Example 04D03.csd

```<CsoundSynthesizer>
<CsOptions>
-o dac
</CsOptions>
<CsInstruments>
sr = 48000
ksmps = 32
nchnls = 2
0dbfs = 1

instr 1
aRaise linseg 2, 10, 100    ;increase modulation from 2Hz to 100Hz
aMod poscil 10, aRaise , 1
aCar poscil 0.3, 440+aMod, 1
outs aCar, aCar
endin

</CsInstruments>
<CsScore>
f 1 0 1024 10 1                 ;Sine wave for table 1
i 1 0 12
</CsScore>
</CsoundSynthesizer>
; written by Alex Hofmann (Mar. 2011)
```

The main oscillator is called the carrier, and the oscillator changing the carrier's frequency is the modulator. The modulation index is defined as I = mod-amp/mod-freq. Changing the modulation index changes the number of overtones, but not the overall volume. This gives the possibility to produce drastic timbre changes without the risk of distortion.
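The index relation can be cross-checked numerically (plain Python; the values are illustrative, and the sideband count is the common rule of thumb that roughly I + 1 significant sideband pairs appear - the exact strengths follow Bessel functions):

```python
# Modulation index: I = mod-amp / mod-freq, so mod-amp = I * mod-freq.
mod_freq = 440.0
index = 5.0
mod_amp = index * mod_freq               # peak frequency deviation in Hz
sideband_pairs = int(index) + 1          # rule-of-thumb count of audible pairs
bandwidth = 2 * (index + 1) * mod_freq   # Carson-style bandwidth estimate
```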

When carrier and modulator frequency have integer ratios like 1:1, 2:1, 3:2 or 5:4, the sidebands build a harmonic series, which leads to a sound with a clear fundamental pitch.
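For such integer ratios, all components fall on a harmonic series whose fundamental is the greatest common divisor of carrier and modulator frequency. A small Python sketch for the 660:440 (3:2) pairing:

```python
import math

carrier, mod = 660, 440
fundamental = math.gcd(carrier, mod)   # common fundamental of the series

# carrier plus the first few sidebands; negative ones reflected at 0 Hz
partials = sorted({abs(carrier + k * mod) for k in range(-4, 5)})
```

Every partial in the resulting set is an integer multiple of 220 Hz, which is why the ear hears a clear fundamental.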

Example 04D04.csd

```<CsoundSynthesizer>
<CsOptions>
-o dac
</CsOptions>
<CsInstruments>
sr = 48000
ksmps = 32
nchnls = 2
0dbfs = 1

instr 1
kCarFreq = 660     ; 660:440 = 3:2 -> harmonic spectrum
kModFreq = 440
kIndex = 15        ; high Index.. try lower values like 1, 2, 3..
kIndexM = 0
kMaxDev = kIndex*kModFreq
kMinDev = kIndexM*kModFreq
kVarDev = kMaxDev-kMinDev
kModAmp = kMinDev+kVarDev
aModulator poscil kModAmp, kModFreq, 1
aCarrier poscil 0.3, kCarFreq+aModulator, 1
outs aCarrier, aCarrier
endin

</CsInstruments>
<CsScore>
f 1 0 1024 10 1                 ;Sine wave for table 1
i 1 0 15
</CsScore>
</CsoundSynthesizer>
; written by Alex Hofmann (Mar. 2011)
```

Otherwise the spectrum of the sound is inharmonic, which makes it sound metallic or noisy.
Raising the modulation index shifts more energy into the sidebands. The sidebands appear at: sideband frequencies = carrierFreq ± (k · modFreq) | k = {1, 2, 3, 4 ..}

This calculation can result in negative frequencies. Those are reflected at zero, but with inverted phase! So negative frequencies can cancel existing ones. Frequencies above the Nyquist frequency (half the sampling rate) "fold over" (aliasing).
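The placement of sidebands, with reflection at 0 Hz and folding at the Nyquist frequency, can be sketched in plain Python (the helper and the choice of three sideband pairs are assumptions of this sketch):

```python
sr = 48000
nyquist = sr / 2

def sidebands(car, mod, kmax=3):
    """Sideband frequencies car +/- k*mod, reflected at 0 Hz and at Nyquist."""
    freqs = []
    for k in range(1, kmax + 1):
        for f in (car + k * mod, car - k * mod):
            if f < 0:
                f = -f         # reflect at 0 Hz (with inverted phase)
            if f > nyquist:
                f = sr - f     # fold over at the Nyquist frequency
            freqs.append(f)
    return sorted(freqs)
```

For a 440 Hz carrier and a 700 Hz modulator, for example, the first lower sideband at 440 - 700 = -260 Hz reflects back to 260 Hz.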

## The John Chowning FM Model of a Trumpet

Composer and researcher John Chowning worked on the first digital implementation of FM in the 1970s.

Using envelopes to control the modulation index and the overall amplitude gives you the possibility to create evolving sounds with enormous spectral variations. Chowning showed these possibilities in his pieces, in which he lets the sounds transform. In the piece Sabelithe a drum sound morphs over time into a trumpet tone.

Example 04D05.csd

```<CsoundSynthesizer>
<CsOptions>
-o dac
</CsOptions>
<CsInstruments>
sr = 48000
ksmps = 32
nchnls = 2
0dbfs = 1

instr 1  ; simple way to generate a trumpet-like sound
kCarFreq = 440
kModFreq = 440
kIndex = 5
kIndexM = 0
kMaxDev = kIndex*kModFreq
kMinDev = kIndexM * kModFreq
kVarDev = kMaxDev-kMinDev
aEnv expseg .001, 0.2, 1, p3-0.3, 1, 0.2, 0.001
aModAmp = kMinDev+kVarDev*aEnv
aModulator poscil aModAmp, kModFreq, 1
aCarrier poscil 0.3*aEnv, kCarFreq+aModulator, 1
outs aCarrier, aCarrier
endin

</CsInstruments>
<CsScore>
f 1 0 1024 10 1                 ;Sine wave for table 1
i 1 0 2
</CsScore>
</CsoundSynthesizer>
; written by Alex Hofmann (Mar. 2011)
```

The following example uses the same instrument, with different settings to generate a bell-like sound:

Example 04D06.csd

```<CsoundSynthesizer>
<CsOptions>
-o dac
</CsOptions>
<CsInstruments>
sr = 48000
ksmps = 32
nchnls = 2
0dbfs = 1

instr 1  ; bell-like sound
kCarFreq = 200  ; 200/280 = 5:7 -> inharmonic spectrum
kModFreq = 280
kIndex = 12
kIndexM = 0
kMaxDev = kIndex*kModFreq
kMinDev = kIndexM * kModFreq
kVarDev = kMaxDev-kMinDev
aEnv expseg .001, 0.001, 1, 0.3, 0.5, 8.5, .001
aModAmp = kMinDev+kVarDev*aEnv
aModulator poscil aModAmp, kModFreq, 1
aCarrier poscil 0.3*aEnv, kCarFreq+aModulator, 1
outs aCarrier, aCarrier
endin

</CsInstruments>
<CsScore>
f 1 0 1024 10 1                 ;Sine wave for table 1
i 1 0 9
</CsScore>
</CsoundSynthesizer>
; written by Alex Hofmann (Mar. 2011)

```

## More Complex FM Algorithms

Combining more than two oscillators (operators) is called complex FM synthesis. Operators can be connected in different combinations; often 4-6 operators are used. The carrier is always the last operator in the row. Changing its pitch shifts the whole sound. All other operators are modulators; changing their pitch alters the sound spectrum.
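The two routings discussed below can be sketched numerically. This is a plain-Python stand-in (not Csound) using a per-sample phase-accumulation oscillator - an assumption of this sketch - with the frequencies used in the examples:

```python
import math

sr = 48000

def osc(amp, freq_series):
    """Sine oscillator whose frequency can change every sample."""
    out, phase = [], 0.0
    for f in freq_series:
        out.append(amp * math.sin(2 * math.pi * phase))
        phase += f / sr
    return out

n = 1024
m1 = osc(200, [700] * n)
m2 = osc(1800, [290] * n)

# parallel routing, M1 + M2 -> C: both modulators are summed into the carrier
par = osc(0.3, [440 + a + b for a, b in zip(m1, m2)])

# serial routing, M1 -> M2 -> C: M1 modulates M2, which modulates the carrier
m2s = osc(1800, [290 + a for a in m1])
ser = osc(0.3, [440 + b for b in m2s])
```

Both signals stay within the carrier's amplitude, but the two routings produce different waveforms - the serial chain first builds the complex spectrum (W) before it reaches the carrier.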

#### Two into One: M1+M2 -> C

The principle here is that (M1:C) and (M2:C) are separate modulations which are then added together.

Example 04D07.csd

```<CsoundSynthesizer>
<CsOptions>
-o dac
</CsOptions>
<CsInstruments>
sr = 48000
ksmps = 32
nchnls = 2
0dbfs = 1

instr 1
aMod1 poscil 200, 700, 1
aMod2 poscil 1800, 290, 1
aSig poscil 0.3, 440+aMod1+aMod2, 1
outs aSig, aSig
endin

</CsInstruments>
<CsScore>
f 1 0 1024 10 1                 ;Sine wave for table 1
i 1 0 3
</CsScore>
</CsoundSynthesizer>
; written by Alex Hofmann (Mar. 2011)
```

#### In series: M1->M2->C

This is much more complicated to calculate, and the sound timbre becomes harder to predict, because M1:M2 produces a complex spectrum (W), which then modulates the carrier (W:C).

Example 04D08.csd

```<CsoundSynthesizer>
<CsOptions>
-o dac
</CsOptions>
<CsInstruments>
sr = 48000
ksmps = 32
nchnls = 2
0dbfs = 1

instr 1
aMod1 poscil 200, 700, 1
aMod2 poscil 1800, 290+aMod1, 1
aSig poscil 0.3, 440+aMod2, 1
outs aSig, aSig
endin

</CsInstruments>
<CsScore>
f 1 0 1024 10 1                 ;Sine wave for table 1
i 1 0 3
</CsScore>
</CsoundSynthesizer>
; written by Alex Hofmann (Mar. 2011)
```

## Phase Modulation - the Yamaha DX7 and Feedback FM

There is a strong relation between frequency modulation and phase modulation, as both techniques influence the oscillator's pitch, and the resulting timbre modifications are the same.

If you build a feedback FM system, it can happen that the self-modulation reaches a zero point, which stops the oscillator forever. To avoid this, it is more practical to modulate the carrier's table-lookup phase instead of its pitch.

Even the most famous FM synthesizer, the Yamaha DX7, is based on the phase modulation (PM) technique, because this allows feedback. The DX7 provides 6 operators and offers 32 routing combinations (algorithms) of these. (http://yala.freeservers.com/t2synths.htm#DX7)

To build a PM synth in Csound, the tablei opcode needs to be used as the oscillator. In order to step through the f-table, a phasor outputs the necessary phase values.
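The lookup-phase idea can be sketched numerically (plain Python, not Csound; the frequencies match the example, but the index and feedback values are illustrative assumptions):

```python
import math

sr = 48000
car_freq, mod_freq, index = 200.0, 280.0, 2.0  # index here in radians

# plain PM: the modulator offsets the carrier's lookup phase, not its frequency
out, phase, mod_phase = [], 0.0, 0.0
for n in range(1024):
    mod = math.sin(2 * math.pi * mod_phase)
    out.append(math.sin(2 * math.pi * phase + index * mod))
    phase += car_freq / sr
    mod_phase += mod_freq / sr

# feedback PM: the oscillator's own last output offsets its phase,
# so even a momentary zero cannot stop the oscillator
fb, y, phase = 0.8, 0.0, 0.0
fb_out = []
for n in range(1024):
    y = math.sin(2 * math.pi * phase + fb * y)
    fb_out.append(y)
    phase += car_freq / sr
```

Because the phase keeps advancing regardless of the modulation value, the output never gets stuck - which is exactly why PM makes feedback practical.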

Example 04D09.csd

```<CsoundSynthesizer>
<CsOptions>
-o dac
</CsOptions>
<CsInstruments>
sr = 48000
ksmps = 32
nchnls = 2
0dbfs = 1

instr 1  ; simple PM-Synth
kCarFreq = 200
kModFreq = 280
kModFactor = kCarFreq/kModFreq
kIndex = 12/6.28   ;  12/2pi to convert from radians to norm. table index
aEnv expseg .001, 0.001, 1, 0.3, 0.5, 8.5, .001
aModulator poscil kIndex*aEnv, kModFreq, 1
aPhase phasor kCarFreq
aCarrier tablei aPhase+aModulator, 1, 1, 0, 1
outs (aCarrier*aEnv), (aCarrier*aEnv)
endin

</CsInstruments>
<CsScore>
f 1 0 1024 10 1                 ;Sine wave for table 1
i 1 0 9
</CsScore>
</CsoundSynthesizer>
; written by Alex Hofmann (Mar. 2011)
```

Let's use the oscillator's capability for self-modulation (feedback modulation). In the following example, the oscillator is both modulator and carrier. To control the amount of modulation, an envelope scales the feedback.

Example 04D10.csd

```<CsoundSynthesizer>
<CsOptions>
-o dac
</CsOptions>
<CsInstruments>
sr = 48000
ksmps = 32
nchnls = 2
0dbfs = 1

instr 1  ; feedback PM
kCarFreq = 200
kFeedbackAmountEnv linseg 0, 2, 0.2, 0.1, 0.3, 0.8, 0.2, 1.5, 0
aAmpEnv expseg .001, 0.001, 1, 0.3, 0.5, 8.5, .001
aPhase phasor kCarFreq
aCarrier init 0 ; init for feedback
aCarrier tablei aPhase+(aCarrier*kFeedbackAmountEnv), 1, 1, 0, 1
outs aCarrier*aAmpEnv, aCarrier*aAmpEnv
endin

</CsInstruments>
<CsScore>
f 1 0 1024 10 1                 ;Sine wave for table 1
i 1 0 9
</CsScore>
</CsoundSynthesizer>
; written by Alex Hofmann (Mar. 2011)
```

The last example features modulation of the buzz opcode, which can produce a lot of harmonic overtones; frequency modulation of the buzz opcode gives even more overtones. Four different voices play at the same time, forming strange chords that use glissando/portamento to move from one chord to the next. This .csd file is regenerative: every time you run it, it should give a different performance.

EXAMPLE 04D11.csd

```<CsoundSynthesizer>
<CsOptions>
-odac
</CsOptions>
<CsInstruments>

; By Bjørn Houdorf, April 2012

sr = 44100
ksmps = 8
nchnls = 2
0dbfs = 1

; Global initializations ("Instrument 0")
seed      0;  New pitches,
gkfreq1   init      0;  every time you
gkfreq2   init      0;  run this file
gkfreq3   init      0
gkfreq4   init      0
gimidia1  init      60; The 4 voices start
gimidib1  init      60; at different
gimidia2  init      64; MIDI frequencies
gimidib2  init      64
gimidia3  init      67
gimidib3  init      67
gimidia4  init      70
gimidib4  init      70

; Function Table
giFt1     ftgen     0, 0, 16384, 10, 1; Sine wave

instr 1; Master control pitch for instrument 2
test:
idurtest  poisson   20; Duration of each test loop
timout    0, idurtest, execute
reinit    test
execute:
gimidia1  =         gimidib1
ital1     random    -4, 4
gimidib1  =         gimidib1 + ital1
gimidia2  =         gimidib2
ital2     random    -4, 4
gimidib2  =         gimidib2 + ital2
gimidia3  =         gimidib3
ital3     random    -4, 4
gimidib3  =         gimidib3 + ital3
gimidia4  =         gimidib4
ital4     random    -4, 4
gimidib4  =         gimidib4 + ital4
idiv      poisson   4
idurx     =         0.01; Micro end segment to create
;a held final frequency value
ifreq1a   =         cpsmidinn(gimidia1)
ifreq1b   =         cpsmidinn(gimidib1)
; Portamento frequency ramp:
gkfreq1   linseg    ifreq1a, idurtest/idiv, ifreq1b, idurx, ifreq1b
ifreq2a   =         cpsmidinn(gimidia2)
ifreq2b   =         cpsmidinn(gimidib2)
gkfreq2   linseg    ifreq2a, idurtest/idiv, ifreq2b, idurx, ifreq2b
ifreq3a   =         cpsmidinn(gimidia3)
ifreq3b   =         cpsmidinn(gimidib3)
gkfreq3   linseg    ifreq3a, idurtest/idiv, ifreq3b, idurx, ifreq3b
ifreq4a   =         cpsmidinn(gimidia4)
ifreq4b   =         cpsmidinn(gimidib4)
gkfreq4   linseg    ifreq4a, idurtest/idiv, ifreq4b, idurx, ifreq4b
endin

instr 2 ; Oscillators
iamp      =         p4
irise     =         p5
idur      =         p3
idec      =         p6
kamp      =         p7
imodfrq   =         p8
iharm     =         p9 ; Number of harmonics
ky        linen     iamp, irise, idur, idec
kampfreq  =         2
kampa     oscili    kamp, kampfreq, giFt1
; Different phase for the 4 voices
klfo1     oscili    kampa, imodfrq, giFt1, 0
klfo2     oscili    kampa, imodfrq, giFt1, 0.25
klfo3     oscili    kampa, imodfrq, giFt1, 0.50
klfo4     oscili    kampa, imodfrq, giFt1, 0.75
kzfrq     =         0.1; Velocity of amplitude oscillation
kampvoice =         0.5; Amplitude of each voice
; Amplitude between -0.5 and 0.5
kx1       oscili    0.5, kzfrq, giFt1, 0
kx2       oscili    0.5, kzfrq, giFt1, 0.25
kx3       oscili    0.5, kzfrq, giFt1, 0.50
kx4       oscili    0.5, kzfrq, giFt1, 0.75
; Add 0.5 so amplitude oscillates between 0 and 1
k1        =         kx1+0.5
k2        =         kx2+0.5
k3        =         kx3+0.5
k4        =         kx4+0.5
; Minimize interference between chorus oscillators
itilf     random    -5, 5
asig11    buzz      ky*k1, (2.02*gkfreq1)+itilf+klfo1, iharm, giFt1
asig12    buzz      ky*k1, gkfreq1 +klfo1, iharm, giFt1; Voice 1
asig13    buzz      ky*k1, (1.51*gkfreq1)+itilf+klfo1, iharm, giFt1
aa1       =         asig11+asig12+asig13
asig21    buzz      ky*k2, (2.01*gkfreq2)+itilf+klfo2, iharm, giFt1
asig22    buzz      ky*k2, gkfreq2 +klfo2, iharm, giFt1; Voice 2
asig23    buzz      ky*k2, (1.51*gkfreq2)+itilf+klfo2, iharm, giFt1
aa2       =         asig21+asig22+asig23
asig31    buzz      ky*k3, (2.01*gkfreq3)+itilf+klfo3, iharm, giFt1
asig32    buzz      ky*k3, gkfreq3 +klfo3, iharm, giFt1; Voice 3
asig33    buzz      ky*k3, (1.51*gkfreq3)+itilf+klfo3, iharm, giFt1
aa3       =         asig31+asig32+asig33
asig41    buzz      ky*k4, (2.01*gkfreq4)+itilf+klfo4, iharm, giFt1
asig42    buzz      ky*k4, gkfreq4 +klfo4, iharm, giFt1; Voice 4
asig43    buzz      ky*k4, (1.51*gkfreq4)+itilf+klfo4, iharm, giFt1
aa4       =         asig41+asig42+asig43
outs      aa1+aa3, aa2+aa4
endin

</CsInstruments>

<CsScore>
; Master control instrument
; Inst start dur
i1       0   3600

; Oscillators
; inst start idur iamp irise idec kamp imodfrq iharm
i2      0    3600  0.3   4    20  0.10    7     16
</CsScore>

</CsoundSynthesizer>
```

# CSOUND UTILITIES

Csound comes bundled with a variety of additional utility applications. These are small programs that perform a single function, very often on a sound file, that might be useful just before or just after working with the main Csound program. Originally these were programs run from the command line, but many Csound front-ends now offer direct access to many of these utilities through their own utilities menus. It is still useful to have access to these programs via the command line, though, if all else fails.

The standard syntax for using these programs from the command line is to type the name of the utility, followed optionally by one or more command line flags which control various performance options of the program - all of these have usable defaults anyway - and finally the name of the sound file upon which the utility will operate.

```
utility_name [flag(s)] [file_name(s)]
```

If we require some help or information about a utility and don't want to hunt through the Csound Manual, we can just type the utility's name with no additional arguments and hit enter; the command line response will give us some information about that utility and the command line flags it offers. We can also run the utility through Csound - perhaps useful if there are problems running the utility directly - by calling csound with the -U flag. The -U flag instructs Csound to run the utility and to interpret subsequent flags as those of the utility and not its own.

```
csound -U utility_name [flag(s)] [file_name(s)]
```

## sndinfo

As an example of invoking one of these utilities from the command line we shall look at the utility 'sndinfo' (sound information), which provides the user with some information about one or more sound files. 'sndinfo' is invoked and provided with a file name thus:

```
sndinfo /Users/iainmccurdy/sounds/mysound.wav
```

If you are unsure of the file address of your sound file you can always just drag and drop it into the terminal window. The output should be something like:

```
util sndinfo:
/Users/iainmccurdy/sounds/mysound.wav:
srate 44100, stereo, 24 bit WAV, 3.335 seconds
(147078 sample frames)
```

'sndinfo' will accept a list of file names and provide information on all of them in one go, so it may prove more efficient than gleaning the same information from a GUI-based sample editor. We also have the advantage of being able to copy and paste from the terminal window into a .csd file.

## Analysis Utilities

Although many of Csound's opcodes already operate upon commonly encountered sound file formats such as 'wav' and 'aiff', a number of them require sound information in more specialised and pre-analysed formats, and for this Csound provides the sound analysis utilities atsa, cvanal, hetro, lpanal and pvanal. By far the most commonly used of these is pvanal which, although originally written to provide analysis files for pvoc and the opcodes of its generation, has now been extended to generate files in the pvoc-ex (.pvx) format for use with the newer 'pvs' streaming pvoc opcodes.

This time, as well as requiring an input sound file for analysis, we will need to provide a name (and optionally the full address) for the output file. Using pvanal's command flags we have full control over typical FFT conversion parameters such as FFT size, overlap, window type etc., as well as additional options that may prove useful, such as the ability to select a fragment of a larger sound file for the analysis. In the following illustration we shall make use of just one flag, -s, for selecting which channel of the input sound file to analyse; all other flags will assume their default values, which should work fine in most situations.

```
pvanal -s1 mysound.wav myanalysis.pvx
```

pvanal will analyse the first (left if stereo) channel of the input sound file 'mysound.wav' (and in this case, as no full address has been provided, it will need to be in either the current working directory or SSDIR); a name has been provided for the output file, 'myanalysis.pvx', which, as no full address has been given, will be placed in the current working directory. While pvanal is running it will print a running commentary and will inform us once the process is complete.

If you use CsoundQT you can have direct access to pvanal with all its options through the 'utilities' button in the toolbar. Once opened it will reveal a dialogue window looking something like this:

Especially helpful is the fact that we are also automatically provided with pvanal's manual page.

## File Conversion Utilities

The next group of utilities, het_import, het_export, pvlook, pv_export, pv_import, sdif2ad and srconv, facilitate file conversions between various types. Perhaps the most interesting of these is pvlook, which prints to the terminal a formatted text version of a pvanal file - useful for finding out exactly what is going on inside individual analysis bins, something that may be of use when working with the more advanced resynthesis opcodes such as pvadd or pvsbin. srconv can be used to convert the sample rate of a sound file.

## Miscellaneous Utilities

A final grouping gathers together various unsorted utilities: cs, csb64enc, envext, extractor, makecsd, mixer, scale and mkdb. The most interesting of these are perhaps extractor, which will extract a user-defined fragment of a sound file and write it to a new file, mixer, which mixes together any number of sound files with gain control over each file, and scale, which will scale the amplitude of an individual sound file.

It has been seen that the Csound utilities offer a wealth of useful, but often overlooked, tools to augment our work with Csound. Whilst some of these utilities may seem redundant now that most of us have access to fully featured 3rd-party sound editing software, it should be borne in mind that many of these utilities were written in the 1980s and early 90s when such tools were less readily available.

# THE CSOUND API

An application programming interface (API) is an interface provided by a computer system, library or application that allows users to access functions and routines for a particular task. It gives developers a way to harness the functionality of existing software within a host application. The Csound API can be used to control an instance of Csound through a series of different functions thus making it possible to harness all the power of Csound in one’s own applications. In other words, almost anything that can be done within Csound can be done with the API. The API is written in C, but there are interfaces to other languages as well, such as Python, C++ and Java.

To use the Csound C API, you have to include csound.h in your source file and link your code with libcsound. Here is an example of the csound command line application written using the Csound C API:

```
#include <csound/csound.h>

int main(int argc, char **argv)
{
    CSOUND *csound = csoundCreate(NULL);
    int result = csoundCompile(csound, argc, argv);
    if (result == 0) {
        result = csoundPerform(csound);
    }
    csoundDestroy(csound);
    return (result >= 0 ? 0 : result);
}
```

First we create an instance of Csound. To do this we call csoundCreate() which returns an opaque pointer that will be passed to most Csound API functions. Then we compile the orc/sco files or the csd file given as input arguments through the argv parameter of the main function. If the compilation is successful (result == 0), we call the csoundPerform() function. csoundPerform() will cause Csound to perform until the end of the score is reached. When this happens csoundPerform() returns a non-zero value and we destroy our instance before ending the program.

On a Linux system, with libcsound named libcsound64 (the double-precision version of the Csound library), and supposing that all include and library paths are set correctly, we would build the above example with the following command:

```
gcc -DUSE_DOUBLE -o csoundCommand csoundCommand.c -lcsound64
```

The C API has been wrapped in a C++ class for convenience. This gives the basic Csound C++ API. With this API, the above example becomes:

```
#include <csound/csound.hpp>

int main(int argc, char **argv)
{
    // A stack-allocated object: its destructor runs automatically
    // when main returns and releases the Csound instance.
    Csound cs;
    int result = cs.Compile(argc, argv);
    if (result == 0) {
        result = cs.Perform();
    }
    return (result >= 0 ? 0 : result);
}
```

Here we use a Csound object instead of the opaque pointer. We call methods of this object instead of C functions, and we don't need to call csoundDestroy at the end of the program, because the C++ object destruction mechanism takes care of this. On our Linux system, the example would be built with the following command:

```
g++ -DUSE_DOUBLE -o csoundCommandCpp csoundCommand.cpp -lcsound64
```

The Csound API has also been wrapped for other languages. The Csound Python API wraps the Csound API for the Python language. To use this API, you have to import the csnd module. The csnd module is normally installed in the site-packages or dist-packages directory of your Python distribution as a csnd.py file. Our csound command example becomes:

```
import sys
import csnd

def csoundCommand(args):
    csound = csnd.Csound()
    arguments = csnd.CsoundArgVList()
    for s in args:
        arguments.Append(s)
    result = csound.Compile(arguments.argc(), arguments.argv())
    if result == 0:
        result = csound.Perform()
    return result

def main():
    csoundCommand(sys.argv)

if __name__ == '__main__':
    main()
```

We use a Csound object (remember Python has OOP features). Note the use of the CsoundArgVList helper class to wrap the program input arguments into a C++-manageable object. In fact, the Csound class has syntactic sugar (thanks to method overloading) for the Compile method: if you have fewer than six string arguments to pass to this method, you can pass them directly. But here, as we don't know the number of arguments to our csound command, we use the more general mechanism of the CsoundArgVList helper class.

The Csound Java API wraps the Csound API for the Java language. To use this API, you have to import the csnd package. The csnd package is located in the csnd.jar archive, which has to be in your Java classpath. Our csound command example becomes:

```
import csnd.*;

public class CsoundCommand
{
    private Csound csound = null;
    private CsoundArgVList arguments = null;

    public CsoundCommand(String[] args) {
        csound = new Csound();
        arguments = new CsoundArgVList();
        arguments.Append("dummy");
        for (int i = 0; i < args.length; i++) {
            arguments.Append(args[i]);
        }
        int result = csound.Compile(arguments.argc(), arguments.argv());
        if (result == 0) {
            result = csound.Perform();
        }
        System.out.println(result);
    }

    public static void main(String[] args) {
        CsoundCommand csCmd = new CsoundCommand(args);
    }
}
```

Note the "dummy" string as the first argument in the arguments list. C, C++ and Python expect the first element of a program's argv input array to be the name of the calling program. This is not the case in Java: the first element of the args array contains the first command line argument, if any. So we have to add this "dummy" string in the first position of the arguments array, so that the C API function called by our csound.Compile method receives the argument list it expects.

This illustrates a fundamental point about the Csound API. Whichever API wrapper is used (C++, Python, Java, etc), it is the C API which is working under the hood. So a thorough knowledge of the Csound C API is highly recommended if you plan to use the Csound API in any of its different flavours. The main source of information about the Csound C API is the csound.h header file which is fully commented.

On our Linux system, with csnd.jar located in /usr/local/lib/csound/java, our Java program would be compiled and run with the following commands:

```javac -cp /usr/local/lib/csound/java/csnd.jar CsoundCommand.java
java -cp /usr/local/lib/csound/java/csnd.jar:. CsoundCommand
```

There also exists an extended Csound C++ API, which adds a CsoundFile class to the Csound C++ API; the CsoundAC C++ API, which provides a class hierarchy for doing algorithmic composition using Michael Gogins' concept of music graphs; and API wrappers for the Lisp, Lua and Haskell languages.

For now, this chapter will focus on the basic C/C++ API, and on the Python and Java APIs.

## Threading

Before we begin to look at how to control Csound in real time we need to look at threads. Threads are used so that a program can split itself into two or more simultaneously running tasks. Multiple threads can be executed in parallel on many computer systems. The advantage of running threads is that you do not have to wait for one part of your software to finish executing before you start another.

In order to control aspects of your instruments in real time you will need to employ threads. If you run the first example presented in this chapter you will see that, once called, csoundPerform() blocks until the performance is finished; only then does it return, and the application quits. In order to interact with Csound while it is performing you will need to run the performance in a separate thread.

When implementing threads using the Csound API, we must define a special performance thread function. We then pass the name of this function to csoundCreateThread(), thus registering our performance-thread function with Csound. When defining a Csound performance-thread routine you must declare it to have the return type uintptr_t, hence it will need to return a value when called. The thread function takes only one parameter, a pointer to void. This pointer to void is quite important, as it allows us to pass data from the main thread to the performance thread. As several variables are needed in our thread function, the best approach is to create a user-defined data structure that holds all the information your performance thread will need. For example:

```
typedef struct {
    /* result of csoundCompile() */
    int result;
    /* instance of Csound */
    CSOUND *csound;
    /* performance status */
    int PERF_STATUS;
} userData;
```

Below is a basic performance-thread routine. The void pointer data is cast to a userData pointer so that we can access its members.

```
uintptr_t csThread(void *data)
{
    userData *udata = (userData *)data;
    if (!udata->result) {
        while ((csoundPerformKsmps(udata->csound) == 0) &&
               (udata->PERF_STATUS == 1));
        csoundDestroy(udata->csound);
    }
    udata->PERF_STATUS = 0;
    return 1;
}
```

In order to start this thread we must call the csoundCreateThread() API function which is declared in csound.h as:

```
void *csoundCreateThread(uintptr_t (*threadRoutine)(void *), void *userdata);
```

If you are building a command line program you will need to use some kind of mechanism to prevent int main() from returning until after the performance has taken place. A simple while loop will suffice.

The first example presented above can now be rewritten to include a unique performance thread:

```
#include <stdio.h>
#include <stdlib.h>
#include "csound.h"

uintptr_t csThread(void *clientData);

typedef struct {
    int result;
    CSOUND *csound;
    int PERF_STATUS;
} userData;

int main(int argc, char *argv[])
{
    int finish;
    void *ThreadID;
    userData *ud;
    ud = (userData *)malloc(sizeof(userData));
    csoundInitialize(&argc, &argv, 0);
    ud->csound = csoundCreate(NULL);
    ud->result = csoundCompile(ud->csound, argc, argv);

    if (!ud->result) {
        ud->PERF_STATUS = 1;
        ThreadID = csoundCreateThread(csThread, (void *)ud);
    }
    else {
        free(ud);
        return 0;
    }

    /* keep performing until the user enters a number */
    scanf("%d", &finish);
    ud->PERF_STATUS = 0;
    /* wait for the performance thread to finish;
       it destroys the Csound instance itself */
    csoundJoinThread(ThreadID);
    free(ud);
    return 1;
}

/* performance thread function */
uintptr_t csThread(void *data)
{
    userData *udata = (userData *)data;
    if (!udata->result) {
        while ((csoundPerformKsmps(udata->csound) == 0) &&
               (udata->PERF_STATUS == 1));
        csoundDestroy(udata->csound);
    }
    udata->PERF_STATUS = 0;
    return 1;
}
```

The application above might not appear all that interesting. In fact it is almost the same as the first example presented, except that users can now stop Csound by hitting 'enter'. The real worth of threads can only be appreciated when you start to control your instrument in real time.

## Channel I/O

The big advantage of using the API is that it allows a host to control your Csound instruments in real time. There are several mechanisms provided by the API that allow us to do this. The simplest mechanism makes use of a 'software bus'.

The term bus is usually used to describe a means of communication between hardware components. Buses are used in mixing consoles to route signals out of the mixing desk into external devices. Signals get sent through the sends and are taken back into the console through the returns. The same thing happens in a software bus, only instead of sending analog signals to different hardware devices we send data to and from different software.

Using one of the software bus opcodes in Csound we can provide an interface for communication with a host application. An example of one such opcode is chnget. The chnget opcode reads data that is being sent from a host Csound API application on a particular named channel, and assigns it to an output variable. In the following example instrument 1 retrieves any data the host may be sending on a channel named "pitch":

```
instr 1
kval chnget "pitch"
a1 oscil 10000, kval, 1
out a1
endin
```

One way in which data can be sent from a host application to an instance of Csound is through the use of the csoundGetChannelPtr() API function which is defined in csound.h as:

```
int csoundGetChannelPtr(CSOUND *, MYFLT **p, const char *name, int type);
```

csoundGetChannelPtr() stores a pointer to the specified channel of the bus in p. The channel pointer p is of type MYFLT*. The argument name is the name of the channel, and the argument type is a bitwise OR of exactly one of the following values:

CSOUND_CONTROL_CHANNEL - control data (one MYFLT value)
CSOUND_AUDIO_CHANNEL - audio data (ksmps MYFLT values)
CSOUND_STRING_CHANNEL - string data (MYFLT values with enough space to store csoundGetStrVarMaxLen(CSOUND*) characters, including the NULL character at the end of the string)

and at least one of these:

CSOUND_INPUT_CHANNEL - when you need Csound to accept incoming values from a host
CSOUND_OUTPUT_CHANNEL - when you need Csound to send outgoing values to a host

If the call to csoundGetChannelPtr() is successful the function will return zero. If not, it will return a negative error code. We can now modify our previous code in order to send data from our application on a named software bus to an instance of Csound using csoundGetChannelPtr().

```
#include <stdio.h>
#include <stdlib.h>
#include "csound.h"

//performance thread function prototype
uintptr_t csThread(void *clientData);

//userData structure declaration
typedef struct {
    int result;
    CSOUND *csound;
    int PERF_STATUS;
} userData;

//-------------------------------------------------------------
// main function
//-------------------------------------------------------------
int main(int argc, char *argv[])
{
    int userInput = 200;
    void *ThreadID;
    userData *ud;
    ud = (userData *)malloc(sizeof(userData));
    MYFLT *pvalue;
    csoundInitialize(&argc, &argv, 0);
    ud->csound = csoundCreate(NULL);
    ud->result = csoundCompile(ud->csound, argc, argv);
    if (!ud->result) {
        ud->PERF_STATUS = 1;
        ThreadID = csoundCreateThread(csThread, (void *)ud);
    }
    else {
        printf("csoundCompile returned an error\n");
        free(ud);
        return 0;
    }
    printf("\nEnter a pitch in Hz (0 to exit) and press return\n");
    while (userInput != 0) {
        if (csoundGetChannelPtr(ud->csound, &pvalue, "pitch",
                CSOUND_INPUT_CHANNEL | CSOUND_CONTROL_CHANNEL) == 0) {
            *pvalue = (MYFLT)userInput;
        }
        scanf("%d", &userInput);
    }
    ud->PERF_STATUS = 0;
    /* wait for the performance thread to finish;
       it destroys the Csound instance itself */
    csoundJoinThread(ThreadID);
    free(ud);
    return 1;
}

//-------------------------------------------------------------
// definition of our performance thread function
//-------------------------------------------------------------
uintptr_t csThread(void *data)
{
    userData *udata = (userData *)data;
    if (!udata->result) {
        while ((csoundPerformKsmps(udata->csound) == 0) &&
               (udata->PERF_STATUS == 1));
        csoundDestroy(udata->csound);
    }
    udata->PERF_STATUS = 0;
    return 1;
}
```

## Score Events

Adding score events to a Csound instance is easy to do. It requires that Csound is performing in its own thread (see the section on threading above). To send a score event to Csound, one can wrap csoundInputMessage() in a function like this:

```
void myInputMessageFunction(void *data, const char *message)
{
    userData *udata = (userData *)data;
    csoundInputMessage(udata->csound, message);
}
```

Now we can call that function to insert score events into a running Csound instance. The formatting of the message should be the same as it would be in the score section of a .csd file. The example below shows the format of the message. Note that if you allow Csound to print its messages, it will warn you when you send a malformed message, which is useful for debugging. There is an example in the Csound sources that lets you type in a message, which is then sent to Csound.

```
//                    instrNum start duration p4  p5  p6  ... pN
const char *message = "i1      0     1        0.5 0.3 0.1";
myInputMessageFunction((void *)udata, message);
```

## Callbacks

## References & Links

Michael Gogins 2006, "Csound and CsoundVST API Reference Manual", http://csound.sourceforge.net/refman.pdf

Rory Walsh 2006, "Developing standalone applications using the Csound Host API and wxWidgets", Csound Journal Volume 1 Issue 4 - Summer 2006, http://www.csounds.com/journal/2006summer/wxCsound.html

Rory Walsh 2010, "Developing Audio Software with the Csound Host API",  The Audio Programming Book, DVD Chapter 35, The MIT Press

François Pinot 2011, "Real-time Coding Using the Python API: Score Events", Csound Journal Issue 14 - Winter 2011, http://www.csounds.com/journal/issue14/realtimeCsoundPython.html

# USING PYTHON INSIDE CSOUND

coming in the next release ...

For now, have a look at Andrés Cabrera, Using Python inside Csound, An introduction to the Python opcodes, Csound Journal Issue 6, Spring 2007: http://www.csounds.com/journal/issue6/pythonOpcodes.html

# EXTENDING CSOUND

coming in the next release ...

# OPCODE GUIDE: OVERVIEW

If you run Csound from the command line with the option -z, you get a list of all opcodes. Currently (Csound 5.13), the total number of opcodes is about 1500. There are already overviews of all of Csound's opcodes in the Opcodes Overview and the Opcode Quick Reference of the Canonical Csound Manual.

This chapter is another attempt to provide some orientation within Csound's wealth of opcodes. Unlike the references mentioned above, not all opcodes are listed, but the ones listed are commented upon briefly. Some opcodes appear more than once; this is intentional - for example, there are different contexts in which you might use the ftgen opcode, and the layout here reflects the multipurpose nature of a number of opcodes. This guide may also provide insights into the opcodes listed that the other sources do not.

## BASIC SIGNAL PROCESSING

### FILTERS

Compare Standard Filters and Specialized Filters overviews.

## ADVANCED SIGNAL PROCESSING

### PHYSICAL MODELS AND FM INSTRUMENTS

#### Waveguide Physical Modelling

see here and here

#### FM Instrument Models

see here

## DATA

### PRINTING AND STRINGS

#### String Manipulation And Conversion

see here and here

## REALTIME INTERACTION

### HUMAN INTERFACES

#### Widgets

FLTK overview here

## MATHS

### CONVERTERS

#### Frequency To MIDI

F2M and F2MC (UDOs)

#### Scaling

Scali, Scalk and Scala (UDOs)

# OPCODE GUIDE: BASIC SIGNAL PROCESSING

## OSCILLATORS AND PHASORS

### Standard Oscillators

oscils is a very simple sine oscillator which can be used for quick tests. It needs no function table, but accepts only i-rate input arguments.

ftgen generates a function table, which is needed by any oscillator except oscils. The GEN Routines fill the function table with any desired waveform, either a sine wave or any other curve. See the function table chapter of this manual for more information.

poscil can be recommended as the standard oscillator because it remains precise even for long tables and low frequencies. It provides linear interpolation, accepts input arguments at any rate, and also works with non-power-of-two tables. poscil3 provides cubic interpolation, but has just k-rate input. Other common oscillators are oscili and oscil3. They are less precise than poscil, but you can skip the initialization, which can be useful in certain situations. The oscil opcode does not provide any interpolation, so it should usually be avoided. More Csound oscillators can be found here.
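As a minimal sketch, an instrument combining ftgen and poscil might look like this (the table name giSine, the frequency and the envelope values are illustrative assumptions):

```
giSine    ftgen     0, 0, 4096, 10, 1      ; sine wave table

instr 1
kenv      linen     0.3, 0.05, p3, 0.5     ; simple fade in/out
asig      poscil    kenv, 440, giSine      ; sine tone at 440 Hz
          out       asig
endin
```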

### Dynamic Spectrum Oscillators

buzz and gbuzz generate a set of harmonically related sine or cosine partials, respectively.

mpulse generates a set of impulses.

vco and vco2 implement band-limited, analog modeled oscillators with different standard waveforms.

#### Phasors

phasor produces the typical moving phase values between 0 and 1. The more complex syncphasor lets you synchronize more than one phasor precisely.

## RANDOM AND NOISE GENERATORS

seed sets the seed value for the majority of the Csound random generators (a seed of 0 produces different random output on each run, while any other seed value reproduces the same random chain on each new run).

rand is the usual opcode for bipolar random values. If you give 1 as the input argument (called "amp"), you will get values between -1 and +1. randi interpolates between values which are generated at a (variable) frequency. randh holds the value until the next one is generated. You can control the seed value by an input argument (a value greater than 1 seeds from the current time), you can decide whether to use a 16-bit or a 31-bit random number, and you can add an offset.

rnd31 can be used for all rates of variables (i-rate variables are not supported by rand). It also gives the user control over the random distribution, but has no offset parameter.

random is often very convenient to use, because you give a minimum and a maximum value as input arguments, instead of an amplitude range as with rand and rnd31. It can also be used at all rates, but there is no direct seed input, and the randomi/randomh variants always start from the lower boundary, instead of anywhere between the boundaries.

pinkish produces pink noise at audio-rate (white noise is produced by rand).

There are many more random opcodes. Here is an overview. It is also possible to use some GEN Routines for generating random distributions. They can be found in the GEN Routines overview.
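The differences between the random variants can be sketched like this (the values and the sine table giSine are illustrative assumptions):

```
          seed      0             ; different random values on each run

instr 1
; three ways of generating random frequencies between 400 and 800 Hz
kfrq1     random    400, 800      ; a new value on every k-cycle
kfrq2     randomi   400, 800, 2   ; interpolates between values generated twice per second
kfrq3     randomh   400, 800, 2   ; holds each value for half a second
asig      poscil    0.2, kfrq3, giSine
          out       asig
endin
```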

## ENVELOPES

### Simple Standard Envelopes

linen applies a linear rise (fade in) and decay (fade out) to a signal. It is very easy to use, as you put the raw audio signal in and get the enveloped signal out.

linenr does the same for notes whose duration is not fixed at the outset, such as MIDI notes or other real-time events. linenr begins its fade-out exactly when the instrument is turned off, adding an extra time after this turnoff.

adsr calculates the classical attack-decay-sustain-release envelope. The result is to be multiplied with the audio signal to get the enveloped signal.

madsr does the same for a real-time note (as explained above for linenr).

Other standard envelope generators can be found in the Envelope Generators overview of the Canonical Csound Manual.
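A minimal adsr sketch (the envelope values and the sine table giSine are illustrative assumptions; note that the envelope output is multiplied with the signal):

```
instr 1
kenv      adsr      0.05, 0.1, 0.7, 0.5    ; attack, decay, sustain level, release
asig      poscil    0.3, 440, giSine
          out       asig*kenv              ; apply the envelope to the signal
endin
```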

### Envelopes By Linear And Exponential Generators

linseg creates one or more segments of lines between specified points.

expseg does the same with exponential segments. Note that zero values are illegal.

transeg is very flexible to use, because you can specify the shape of the curve for each segment (continuous transitions from convex to linear to concave).

All these opcodes have a -r variant (linsegr, expsegr, transegr) for MIDI or other live events.

More opcodes can be found in this overview.
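A transeg sketch with a concave decay (the values and the sine table giSine are illustrative assumptions):

```
instr 1
; rise linearly to 0.4 in 0.1 seconds, then decay to zero with a concave curve
kenv      transeg   0, 0.1, 0, 0.4, p3-0.1, -3, 0
asig      poscil    kenv, 440, giSine
          out       asig
endin
```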

### Envelopes By Function Tables

Any function table, or part of it, can be used as an envelope. Just create a function table with ftgen and a GEN Routine. Then read the function table, or a part of it, with an oscillator, and multiply the result with the audio signal you want to envelope.
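This technique might be sketched like this (the table shape, the values and the sine table giSine are illustrative assumptions):

```
giEnv     ftgen     0, 0, 1024, 7, 0, 100, 1, 824, 1, 100, 0 ; trapezoid shape

instr 1
kenv      poscil    1, 1/p3, giEnv         ; read the table once over the note's duration
asig      poscil    0.3*kenv, 440, giSine
          out       asig
endin
```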

## DELAYS

### Audio Delays

The vdelay family of opcodes is easy to use and implements all the features needed to work with delays:

vdelay implements a variable delay at audio rate with linear interpolation.

vdelay3 offers cubic interpolation.

vdelayx has an even higher quality interpolation (and is for this reason slower). vdelayxs lets you input and output two channels, and vdelayxq four.

vdelayw changes the position of the write tap in the delay line instead of the read tap. vdelayws is for stereo, and vdelaywq for quadro.

The delayr/delayw opcodes establish a delay line in a more complicated way. The advantage is that you can have as many taps on one delay line as you need.

delayr establishes a delay line and reads from it.

delayw writes an audio signal to the delay line.

deltap, deltapi, deltap3, deltapx and deltapxw work similarly to the relevant opcodes of the vdelay family (see above).

deltapn offers a tap delay measured in samples, not seconds.
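A sketch of one delay line with three taps (the input file name and the tap times are illustrative assumptions):

```
instr 1
asrc      diskin2   "mysound.wav", 1       ; input sound (file name assumed)
adump     delayr    1                      ; establish a 1 second delay line
atap1     deltap    0.1729                 ; three taps at different times
atap2     deltap    0.3511
atap3     deltapi   0.5073                 ; interpolating tap
          delayw    asrc                   ; write the input into the delay line
          out       asrc + (atap1+atap2+atap3)*0.3
endin
```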

### Control Delays

delk and vdel_k let you delay any k-signal by some time interval (usable for instance as a kind of wait mode).

## FILTERS

Csound has an extremely rich collection of filters, and they are well documented on the Csound Manual pages for Standard Filters and Specialized Filters. So here only the most frequently used filters are mentioned, along with some tips. Note that filters usually change the signal level, so you may need the balance opcode.

### Low Pass Filters

tone is a first order recursive low pass filter. tonex implements a series of tone filters.

butlp is a second order low pass Butterworth filter.

clfilt lets you choose between different filter types and numbers of poles.
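A butlp sketch with a moving cutoff (the values are illustrative assumptions):

```
instr 1
anoise    rand      0.3                    ; white noise
kcf       expseg    200, p3, 5000          ; cutoff glides from 200 to 5000 Hz
asig      butlp     anoise, kcf
          out       asig
endin
```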

### High Pass Filters

atone is a first order recursive high pass filter. atonex implements a series of atone filters.

buthp is a second order high pass Butterworth filter.

clfilt lets you choose between different filter types and numbers of poles.

### Band Pass And Resonant Filters

reson is a second order resonant filter. resonx implements a series of reson filters, while resony emulates a bank of second order bandpass filters in parallel. resonr and resonz are variants of reson with variable frequency response.

butbp is a second order band-pass Butterworth filter.

### Band Reject Filters

areson is the complement of the reson filter.

butbr is a second order band-reject Butterworth filter.

### Filters For Smoothing Control Signals

port and portk are very frequently used to smooth control signals received from MIDI or widgets.
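A portk sketch, assuming a host sends raw control values on a channel named "pitch" (the channel name, half-time and sine table giSine are illustrative assumptions):

```
instr 1
kfrq      chnget    "pitch"                ; raw control values, e.g. from a widget
kfrq      portk     kfrq, 0.05             ; smooth with a half-time of 50 ms
asig      poscil    0.3, kfrq, giSine
          out       asig
endin
```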

## REVERB

Note that you can easily work in Csound with convolution reverbs based on impulse response files, for instance with pconvolve.

freeverb is the implementation of Jezar's well-known free (stereo) reverb.

reverbsc is a stereo FDN reverb, based on work of Sean Costello.

reverb and nreverb are the traditional Csound reverb units.

babo is a physical model reverberator ("ball within the box").
• ## SIGNAL MEASUREMENT, DYNAMIC PROCESSING, SAMPLE LEVEL OPERATIONS

• ### Amplitude Measurement And Following

rms determines the root-mean-square amplitude of an audio signal.

balance adjusts the amplitudes of an audio signal according to the rms amplitudes of another audio signal.

follow / follow2 are envelope followers which report the average amplitude in a certain time span (follow) or according to an attack/decay rate (follow2).

peak reports the highest absolute amplitude value received.

max_k outputs the local maximum or minimum value of an incoming audio signal, checked in a certain time interval.
• ### Pitch Estimation

ptrack, pitch and pitchamdf track the pitch of an incoming audio signal, using different methods.

pvscent calculates the spectral centroid for FFT streaming signals (see below under "FFT And Spectral Processing").
• ### Tempo Estimation

tempest estimates the tempo of beat patterns in a control signal.

• ### Dynamic Processing

compress compresses, limits, expands, ducks or gates an audio signal.

dam is a dynamic compressor/expander.

clip clips an a-rate signal to a predefined limit, in a “soft” manner.
• ### Sample Level Operations

limit sets the lower and upper limits of an incoming value (all rates).

samphold performs a sample-and-hold operation on its a- or k-input.

vaget / vaset allow getting and setting certain samples of an audio vector at k-rate.
• ## SPATIALIZATION

• ### Panning

pan2 distributes a mono audio signal across two channels, with different envelope options.

pan distributes a mono audio signal amongst four channels.
• ### VBAP

vbaplsinit configures VBAP output according to loudspeaker parameters for a 2- or 3-dimensional space.

vbap4 / vbap8 / vbap16 distribute an audio signal among up to 16 channels, with k-rate control over azimuth, elevation and spread.
• ### Ambisonics

bformenc1 encodes an audio signal to the Ambisonics B format.

bformdec1 decodes Ambisonics B format signals to loudspeaker signals in different possible configurations.
• ### Binaural / HRTF

hrtfstat, hrtfmove and hrtfmove2 are opcodes for creating 3D binaural audio for headphones. hrtfer is an older implementation, using an external file.

# OPCODE GUIDE: ADVANCED SIGNAL PROCESSING

• ## MODULATION AND DISTORTION

• ### Frequency Modulation

foscil and foscili implement composite units for FM in the Chowning setup.

crossfm, crossfmi, crosspm, crosspmi, crossfmpm and crossfmpmi are different units for frequency and/or phase modulation.

• ### Distortion And Wave Shaping

distort and distort1 perform waveshaping by a function table (distort) or by modified hyperbolic tangent distortion (distort1).

powershape waveshapes a signal by raising it to a variable exponent.

polynomial efficiently evaluates a polynomial of arbitrary order.

chebyshevpoly efficiently evaluates the sum of Chebyshev polynomials of arbitrary order.

GEN03, GEN13, GEN14 and GEN15 are also used for waveshaping.
• ### Flanging, Phasing, Phase Shaping

flanger implements a user controllable flanger.

harmon analyzes an audio input and generates harmonizing voices in synchrony.

phaser1 and phaser2 implement first- or second-order allpass filters arranged in a series.

pdclip, pdhalf and pdhalfy are useful for phase distortion synthesis.
• ### Doppler Shift

doppler lets you calculate the Doppler shift depending on the positions of the sound source and the microphone.

• ## GRANULAR SYNTHESIS

partikkel is the most flexible opcode for granular synthesis. You should be able to do everything you like in this field. The only drawback is the large number of input arguments, so you may want to use other opcodes for certain purposes.

You can find a list of other relevant opcodes here.

sndwarp focuses granular synthesis on time stretching and/or pitch modifications. Compare waveset and the pvs-opcodes pvsfread, pvsdiskin, pvscale and pvshift for other implementations of time and/or pitch modifications.

• ## CONVOLUTION

pconvolve performs convolution based on a uniformly partitioned overlap-save algorithm.

ftconv is similar to pconvolve, but you can also use parts of the impulse response file, instead of reading the whole file.

dconv performs direct convolution.
• ## FFT AND SPECTRAL PROCESSING

• ### Realtime Analysis And Resynthesis

pvsanal performs a Fast Fourier Transform of an audio stream (a-signal) and stores the result in an f-variable.

pvstanal creates an f-signal directly from a sound file which is stored in a function table (usually via GEN01).

pvsynth performs an inverse FFT (takes an f-signal and returns an audio signal).

pvsadsyn is similar to pvsynth, but resynthesizes with a bank of oscillators, instead of direct IFFT.

• ### Writing FFT Data To A File And Reading From It

pvsfwrite writes an f-signal (= the FFT data) from inside Csound to a file. This file has the PVOCEX format and its name ends in .pvx.

pvanal actually does the same as the Csound Utility of the same name (a separate program which can be called in QuteCsound or via the terminal). In this case, the input is an audio file.

pvsfread reads the FFT data from an existing .pvx file. Such a file can be generated by the Csound Utility pvanal. Reading the file is done by a time pointer.

pvsdiskin is similar to pvsfread, but reading is done by a speed argument.

• ### Writing FFT Data To A Buffer And Reading From It

pvsbuffer creates a circular buffer and writes an f-signal to it.

pvsbufread reads an f-signal from a buffer which was created by pvsbuffer.

pvsftw writes amplitude and/or frequency data from an f-signal to a function table.

pvsftr transforms amplitude and/or frequency data from a function table to an f-signal.

• ### FFT Info

pvsinfo gets info either from a realtime f-signal or from a .pvx file.

pvsbin gets the amplitude and frequency values from a single bin of an f-signal.

pvscent calculates the spectral centroid of a signal.

• ### Manipulating FFT Signals

pvscale transposes the frequency components of an f-stream by simple multiplication.

pvshift changes the frequency components of an f-stream by adding a shift value, starting at a certain bin.

pvsbandp and pvsbandr apply a band pass or band reject filter to the frequency components of an f-signal.

pvsmix, pvscross, pvsfilter, pvsvoc and pvsmorph perform different methods of cross synthesis between two f-signals.

pvsfreeze freezes the amplitude and/or frequency of an f-signal according to a k-rate trigger.

pvsmaska, pvsblur, pvstencil, pvsarp and pvsmooth perform other manipulations on a stream of FFT data.

• ## PHYSICAL MODELS AND FM INSTRUMENTS

• ### Waveguide Physical Modelling

see here and here.

• ### FM Instrument Models

see here

# OPCODE GUIDE: DATA

• ## BUFFER / FUNCTION TABLES

See the chapter about function tables for more detailed information.

• ### Creating Function Tables (Buffers)

ftgen generates any function table. The GEN Routines are used to fill a function table with different kinds of data, such as soundfiles, envelopes, window functions and much more.

• ### Writing To Tables

tableiw / tablew: Write values to a function table at i-rate (tableiw), or at k-rate and a-rate (tablew). These opcodes provide many options and are safe because of a boundary check, but you may have problems with non-power-of-two tables.

tabw_i / tabw: Write values to a function table at i-rate (tabw_i), or at k-rate and a-rate (tabw). They offer fewer options than the tableiw/tablew opcodes, but also work for non-power-of-two tables. They do not provide a boundary check, which makes them fast, but also gives the user the responsibility of not writing beyond the table boundaries.

• ### Reading From Tables

table / tablei / table3: Read values from a function table at any rate, either by direct indexing (table), or by linear (tablei) or cubic (table3) interpolation. These opcodes provide many options and are safe because of a boundary check, but you may have problems with non-power-of-two tables.

tab_i / tab: Read values from a function table at i-rate (tab_i), or at k-rate and a-rate (tab). They offer no interpolation and fewer options than the table opcodes, but also work for non-power-of-two tables. They do not provide a boundary check, which makes them fast, but also gives the user the responsibility of not reading beyond the table boundaries.

• ### Saving Tables To Files

ftsave / ftsavek: Save a function table as a file, at i-time (ftsave) or k-time (ftsavek). This can be a text file or a binary file, but not a soundfile. If you want to save a soundfile, use the User Defined Opcode TableToSF.

• ### Reading Tables From Files

ftload / ftloadk: Load a function table which has been written by ftsave/ftsavek.

GEN23 transfers a text file into a function table.
• ## SIGNAL INPUT/OUTPUT, SAMPLE AND LOOP PLAYBACK, SOUNDFONTS

• ### Signal Input And Output

inch reads the audio input from any channel of your audio device. Make sure you have set the nchnls value in the orchestra header properly.

outch writes any audio signal(s) to any output channel(s). If Csound is in realtime mode (via the flag '-o dac' or the 'Render in Realtime' mode of a frontend like QuteCsound), the output channels are the channels of your output device. If Csound is in 'Render to file' mode (via the flag '-o mysoundfile.wav' or the frontend's choice), the output channels are the channels of the soundfile which is being written. Make sure you have set the nchnls value in the orchestra header properly to get the number of channels you wish.

out and outs are frequently used for mono and stereo output. They always write to channel 1 (out) or channels 1 and 2 (outs).

monitor can be used (in an instrument with the highest number) to get the sum of all audio on the different output channels.

• ### Sample Playback With Optional Looping

flooper2 is a function-table-based crossfading looper.

sndloop records input audio and plays it back in a loop with user-defined duration and crossfade time.

Note that there are also User Defined Opcodes for sample playback of buffers / function tables.

• ### Soundfonts And Fluid Opcodes

fluidEngine instantiates a FluidSynth engine.

fluidSetInterpMethod sets an interpolation method for a channel in a FluidSynth engine.

fluidLoad loads SoundFonts.

fluidProgramSelect assigns presets from a SoundFont to a FluidSynth engine's MIDI channel.

fluidNote plays a note on a FluidSynth engine's MIDI channel.

fluidCCi sends a controller message at i-time to a FluidSynth engine's MIDI channel.

fluidCCk sends a controller message at k-rate to a FluidSynth engine's MIDI channel.

fluidControl plays and controls loaded Soundfonts (using 'raw' MIDI messages).

fluidOut receives audio from a single FluidSynth engine.

fluidAllOut receives audio from all FluidSynth engines.
• ## FILE INPUT AND OUTPUT

• ### Sound File Input

soundin reads from a soundfile (up to 24 channels). Make sure that the sr value in the orchestra header matches the sample rate of your soundfile, or you will get higher or lower pitched sound.

diskin is like soundin, but can also alter the speed of reading (resulting in higher or lower pitches) and you have an option to loop the file.

diskin2 is like diskin, but automatically converts the sample rate of the soundfile if it does not match the sample rate of the orchestra, and it offers different interpolation methods for reading the soundfile at altered speed.

GEN01 reads a soundfile into a function table (buffer).

mp3in lets you play mp3 sound files.
• ### Sound File Queries

filelen returns the length of a soundfile in seconds.

filesr returns the sample rate of a soundfile.

filenchnls returns the number of channels of a soundfile.

filepeak returns the peak absolute value of a soundfile, either of one specified channel, or from all channels. Make sure you have set 0dbfs to 1; otherwise you will get values relative to Csound's default 0dbfs value of 32768.

filebit returns the bit depth of a soundfile.
• ### Sound File Output

Keep in mind that Csound always writes output to a file if you have set the '-o' flag to the name of a soundfile (or if you choose 'render to file' in a frontend like QuteCsound).

fout writes any audio signal(s) to a file, regardless of whether Csound is in realtime or render-to-file mode. So you can record your live performance with this opcode.

• ### Non-Soundfile Input And Output

readk can read data from external files (for instance a text file) and transform them into k-rate values.

GEN23 transfers a text file into a function table.

dumpk writes k-rate signals to a text file.

fprints / fprintks write any formatted string to a file. If you call this opcode several times during one performance, the strings are appended. If you write to an already existing file, the file will be overwritten.

ftsave / ftsavek: Save a function table as a binary or text file, in a specific format.

ftload / ftloadk: Load a function table which has been written by ftsave/ftsavek.

• ## CONVERTERS OF DATA TYPES

• ### i <- k

i(k) returns the value of a k-variable at init-time. This can be useful to get the value of GUI controllers, or when using the reinit feature.

• ### k <- a

downsamp converts an a-rate signal to a k-rate signal, with optional averaging.

max_k returns the maximum of an a-rate signal in a certain time span, with different options for the calculation.
• ### a <- k

upsamp converts a k-rate signal to an a-rate signal by simple repetitions. It is the same as the statement asig=ksig.

interp converts a k-rate signal to an a-rate signal by interpolation.
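The difference between repetition and interpolation can be sketched in Python (illustrative only — the variable names are invented, and this is not Csound code):

```python
# One new k-value per control cycle must become ksmps audio samples:
# upsamp simply repeats the value; interp ramps towards it linearly.

ksmps = 4                 # samples per control cycle
prev_k, new_k = 0.0, 1.0  # previous and current k-rate values

repeated = [new_k] * ksmps                                # like upsamp
interpolated = [prev_k + (new_k - prev_k) * (i + 1) / ksmps
                for i in range(ksmps)]                    # like interp

print(repeated)       # [1.0, 1.0, 1.0, 1.0]
print(interpolated)   # [0.25, 0.5, 0.75, 1.0]
```

Repetition produces a staircase (and possible zipper noise), while interpolation smooths the transition across the control period.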
• ## PRINTING AND STRINGS

• ### Simple Printing

print is a simple opcode for printing i-variables. Note that the printed numbers are rounded to 3 decimal places.

printk is its counterpart for k-variables. The itime argument specifies the time in seconds between printouts (itime=0 means one printout in each k-cycle, which is usually several thousand printouts per second).

printk2 prints a k-variable whenever it has changed.

puts prints S-variables. The ktrig argument lets you print either at i-time or at k-time.
• ### Formatted Printing

prints lets you print a format string at i-time. The format is similar to the C-style syntax. There is no %s format, therefore no string variables can be printed.

printf_i is very similar to prints. It also works at init-time. The advantage in comparison to prints is the ability to print string variables. On the other hand, you need a trigger and at least one input argument.

printks is like prints, but takes k-variables, and as with printk you must specify a time between printouts.

printf is like printf_i, but works at k-rate.
• ### String Variables

sprintf works like printf_i, but stores the output in a string variable, instead of printing it out.

sprintfk is the same for k-rate arguments.

strset links any string with a numeric value.

strget transforms a strset number back to a string.
• ### String Manipulation And Conversion

There are many opcodes for analysing, manipulating and converting strings. There is a good overview in the Canonical Csound Manual on this and that page.

# OPCODE GUIDE: REALTIME INTERACTION

• ## MIDI

• ### Opcodes For Use In MIDI-Triggered Instruments

massign assigns certain midi channels to instrument numbers. See the Triggering Instrument Instances chapter for more information.

pgmassign assigns certain program changes to instrument numbers.

notnum gets the midi number of the key which has been pressed and activated this instrument instance.

cpsmidi converts this note number to the frequency in cycles per second (Hertz).

veloc and ampmidi get the velocity of the key which has been pressed and activated this instrument instance.

midichn returns the midi channel number from which the note was activated.

pchbend gets the pitch bend information.

aftouch and polyaft get the aftertouch information.
• ### Opcodes For Use In All Instruments

ctrl7 gets the value of a usual (7-bit) controller and scales it. ctrl14 and ctrl21 can be used for high definition controllers.

initc7 or ctrlinit set the initial value of 7-bit controllers. Use initc14 and initc21 for high definition devices.

midiin gives access to all incoming midi events.

midiout writes any event to the midi out port.

• ## OPEN SOUND CONTROL AND NETWORK

• ### Open Sound Control

OSCinit initializes a port for later use of the OSClisten opcode.

OSClisten receives messages at the port which was initialized by OSCinit.

OSCsend sends messages to a port.

• ### Remote Instruments

remoteport defines the port for use with the remote system.

insremot will send note events from a source machine to one destination.

insglobal will send note events from a source machine to many destinations.

midiremot will send midi events from a source machine to one destination.

midiglobal will broadcast the midi events to all the machines involved in the remote concert.
• ### Network Audio

socksend sends audio data to other processes using the low-level UDP or TCP protocols.

sockrecv receives audio data from other processes using the low-level UDP or TCP protocols.
• ## HUMAN INTERFACES

• ### Widgets

The FLTK Widgets are integrated in Csound. Information and examples can be found here.

QuteCsound implements a more modern and easy-to-use system for widgets. The communication between the widgets and Csound is done via invalue (or chnget) and outvalue (or chnset).

• ### Keys

sensekey gets the input from your computer keyboard.

• ### Mouse

xyin can get the mouse position if your front-end does not provide this sensing otherwise.

• ### WII

wiiconnect reads data from a number of external Nintendo Wiimote controllers.

wiidata reads data fields from a number of external Nintendo Wiimote controllers.

wiirange sets scaling and range limits for certain Wiimote fields.

wiisend sends data to one of a number of external Wii controllers.
• ### P5 Glove

p5gconnect reads data from an external P5 Glove controller.

p5gdata reads data fields from an external P5 Glove controller.

# OPCODE GUIDE: INSTRUMENT CONTROL

• ## SCORE PARAMETER ACCESS

p(x) gets the value of a specified p-field. (So 'p(5)' and 'p5' both return the value of the fifth parameter in a certain score line, but in the former case you can insert a variable to specify the p-field.)

pindex actually does the same, but as an opcode instead of an expression.

pset sets p-field values in case there is no value from a scoreline.

passign assigns a range of p-fields to i-variables.

pcount returns the number of p-fields belonging to a note event.
• ## TIME AND TEMPO

• ### Time Reading

times / timek return the time in seconds (times) or in control cycles (timek) since the start of the current Csound performance.

timeinsts / timeinstk return the time in seconds (timeinsts) or in control cycles (timeinstk) since the start of the instrument in which they are defined.

date / dates return the number of seconds since 1 January 1970, using the operating system's clock; either as a number (date) or as a string (dates).

setscorepos sets the playback position of the current score performance to a given position.
• ### Tempo Reading

tempo allows the performance speed of Csound scored events to be controlled from within an orchestra.

miditempo returns the current tempo at k-rate, of either the midi file (if available) or the score.

tempoval reads the current value of the tempo.

• ### Duration Modifications

ihold causes a finite-duration note to become a 'held' note.

xtratim extends the duration of the current instrument instance.
• ### Time Signal Generators

metro outputs a metronome-like control signal at a variable frequency.

mpulse generates an impulse of one sample (as an audio signal), followed by a variable time span.
• ## CONDITIONS AND LOOPS

changed reports whether a k-variable (or at least one of several k-variables) has changed.

trigger informs whether a k-rate signal crosses a certain threshold.

if branches conditionally at initialization or during performance time.

loop_lt, loop_le, loop_gt and loop_ge perform loops either at i- or k-time.
• ## PROGRAM FLOW

init initializes a k- or a-variable (assigns a value to a k- or a-variable which is valid at i-time).

igoto jumps to a label at i-time.

kgoto jumps to a label at k-time.

timout jumps to a label for a given time. Can be used in conjunction with reinit to perform time loops (see the chapter about Control Structures for more information).

reinit / rigoto / rireturn force a certain section of code to be reinitialized (= i-rate variables are renewed).
• ## EVENT TRIGGERING

event_i / event: Generate an instrument event at i-time (event_i) or at k-time (event). Easy to use, but you cannot send a string to the subinstrument.

scoreline_i / scoreline: Generate an instrument event at i-time (scoreline_i) or at k-time (scoreline). Like event_i/event, but you can send the event to more than one instrument, and unlike event_i/event you can send strings. On the other hand, you must usually preformat your scoreline string using sprintf.

schedkwhen triggers an instrument event at k-time if a certain condition is given.

seqtime / seqtime2 can be used to generate a trigger signal according to time values in a function table.

timedseq is an event-sequencer in which time can be controlled by a time-pointer. Sequence data are stored into a table.

• ## INSTRUMENT SUPERVISION

• ### Instances And Allocation

active returns the number of active instances of an instrument.

maxalloc limits the number of allocations (instances) of an instrument.

prealloc creates space for instruments but does not run them.
• ### Turning On And Off

turnon activates an instrument for an indefinite time.

turnoff / turnoff2 enables an instrument to turn itself, or another instrument, off.

mute mutes/unmutes new instances of a given instrument.

remove removes the definition of an instrument as long as it is not in use.

exitnow exits Csound as fast as possible, with no cleaning up.
• ### Named Instruments

nstrnum returns the number of a named instrument.

• ## SIGNAL EXCHANGE AND MIXING

• ### chn opcodes

chn_k, chn_a and chn_S declare a control, audio or string channel. Note that this can be done implicitly in most cases by chnset/chnget.

chnset writes a value (i, k, S or a) to a software channel (which is identified by a string as its name).

chnget gets the value of a named software channel.

chnmix writes audio data to a named audio channel, mixing it with the previous output.

chnclear clears an audio channel of the named software bus to zero.

# OPCODE GUIDE: MATH, PYTHON/ SYSTEM, PLUGINS

• ## MATHEMATICAL CALCULATIONS

• ### Arithmetic Operations

+, -, *, /, ^, % are the usual signs for addition, subtraction, multiplication, division, raising to a power and modulo. The precedence is as in common mathematics ("*" binds more strongly than "+" etc.), but you can change this behaviour with parentheses: 2^(1/12) returns 2 raised to the power of 1/12 (the 12th root of 2), while 2^1/12 returns 2 raised to the power of 1, with the result then divided by 12.
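The same precedence behaviour can be checked in any language; in Python, where ** is the power operator:

```python
# Parentheses decide whether the division happens inside the exponent:
semitone = 2 ** (1 / 12)   # 2 raised to 1/12: the equal-tempered semitone ratio
other    = 2 ** 1 / 12     # 2 raised to 1, then the result divided by 12

print(round(semitone, 6))  # 1.059463
print(round(other, 6))     # 0.166667
```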

exp(x), log(x), log10(x) and sqrt(x) return e raised to the xth power, the natural log of x, the base 10 log of x, and the square root of x.

abs(x) returns the absolute value of a number.

int(x) and frac(x) return the integer and the fractional part of a number, respectively.

round(x), ceil(x) and floor(x) round a number to the nearest, the next higher or the next lower integer.
• ### Trigonometric Functions

sin(x), cos(x), tan(x) perform a sine, cosine or tangent function.

sinh(x), cosh(x), tanh(x) perform a hyperbolic sine, cosine or tangent function.

sininv(x), cosinv(x), taninv(x) and taninv2(x) perform the arcsine, arccosine and arctangent functions.
• ### Logic Operators

&& and || are the symbols for a logical "and" and "or" respectively. Note that you can also use parentheses here to define the precedence, for instance: if (ival1 < 10 && ival2 > 5) || (ival1 > 20 && ival2 < 0) then ...

• ## CONVERTERS

• ### MIDI To Frequency

cpsmidi converts a MIDI note number from a triggered instrument to the frequency in Hertz.

cpsmidinn does the same for any input values (i- or k-rate).

Other opcodes convert to Csound's pitch- or octave-class systems. They can be found here.

• ### Frequency To MIDI

Csound has no opcode of its own for converting a frequency to a midi note number, because this is a rather simple calculation. You can find a User Defined Opcode for rounding to the nearest possible midi note number, or for the exact translation to a midi note number with a cent value as fractional part.
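The calculation in question is short; here is a sketch in Python (the function name ftom is just illustrative, not a Csound opcode):

```python
import math

def ftom(freq):
    """Frequency in Hz to a (possibly fractional) midi note number,
    assuming the standard reference of A4 = 440 Hz = note number 69."""
    return 69 + 12 * math.log2(freq / 440.0)

print(ftom(440.0))          # 69.0  (concert A)
print(round(ftom(261.63)))  # 60    (middle C, rounded to the nearest note)
```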

• ### Cent Values To Frequency

cent converts a cent value to a multiplier. For instance, cent(1200) returns 2, cent(100) returns 1.059463. If you multiply this with the frequency you refer to, you get the frequency of the note which corresponds to the cent interval.
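The underlying formula is a multiplier of 2 raised to cents/1200; a Python sketch (the function name mirrors the Csound opcode):

```python
def cent(cents):
    """Cent value to a frequency multiplier: 1200 cents = one octave."""
    return 2 ** (cents / 1200.0)

print(cent(1200))                      # 2.0       (one octave up)
print(round(cent(100), 6))             # 1.059463  (one semitone up)
print(round(440 * cent(700), 2))       # 659.26    (a tempered fifth above A4)
```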

• ### Amplitude Converters

ampdb returns the amplitude equivalent of the dB value. ampdb(0) returns 1, ampdb(-6) returns 0.501187, and so on.

ampdbfs returns the amplitude equivalent of the dB value, according to what has been set as 0dbfs (1 is recommended; the default is 32768, the 16-bit integer maximum). So ampdbfs(-6) returns 0.501187 for 0dbfs=1, but 16422.904297 for 0dbfs=32768.

dbamp returns the decibel equivalent of the amplitude value, where an amplitude of 1 is the maximum. So dbamp(1) -> 0 and dbamp(0.5) -> -6.020600.

dbfsamp returns the decibel equivalent of the amplitude value, relative to what has been set by the 0dbfs statement. So dbfsamp(10) is 20.000002 for 0dbfs=1 but -70.308998 for 0dbfs=32768.
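These conversions follow the standard formulas amp = 10^(dB/20) and dB = 20·log10(amp), relative to an amplitude of 1; a Python sketch (the function names mirror the Csound opcodes):

```python
import math

def ampdb(db):
    """dB to amplitude, where 0 dB corresponds to an amplitude of 1."""
    return 10 ** (db / 20.0)

def dbamp(amp):
    """Amplitude to dB, where an amplitude of 1 corresponds to 0 dB."""
    return 20 * math.log10(amp)

print(ampdb(0))              # 1.0
print(round(ampdb(-6), 6))   # 0.501187
print(round(dbamp(0.5), 6))  # -6.0206
```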
• ### Scaling

Scaling of signals from an input range to an output range, like the "scale" object in Max/MSP, is not implemented in Csound, because it is a rather simple calculation. It is available as User Defined Opcode: Scali (i-rate), Scalk (k-rate) or Scala (a-rate).

• ## PYTHON OPCODES

pyinit initializes the Python interpreter.

pyrun runs a Python statement or block of statements.

pyexec executes a script from a file at k-time, i-time or if a trigger has been received.

pycall invokes the specified Python callable at k-time or i-time.

pyeval evaluates a generic Python expression and stores the result in a Csound k- or i-variable, with optional trigger.

pyassign assigns the value of the given Csound variable to a Python variable possibly destroying its previous content.

• ## SYSTEM OPCODES

getcfg returns various Csound configuration settings as a string at init time.

system / system_i call an external program via the system call.

• ## PLUGIN HOSTING

• ### LADSPA

dssiinit loads a plugin.

dssiactivate activates or deactivates a plugin if it has this facility.

dssilist lists all available plugins found in the LADSPA_PATH and DSSI_PATH global variables.

dssiaudio processes audio using a plugin.

dssictls sends control information to a plugin's control port.

• ### VST

vstinit loads a plugin.

vstaudio / vstaudiog return a plugin's output.

vstmidiout sends midi data to a plugin.

vstparamset / vstparamget sends and receives automation data to and from the plugin.

vstnote sends a midi note with a definite duration.

vstinfo outputs the parameter and program names for a plugin.

vstbankload loads an .fxb bank.

vstprogset sets the program in a .fxb bank.

vstedit opens the GUI editor for the plugin, when available.

# GLOSSARY

control cycle, control period or k-loop is a pass during the performance of an instrument, in which all k- and a-variables are renewed. The duration of one control cycle is measured in samples and determined by the ksmps constant in the orchestra header. If your sample rate is 44100 and your ksmps value is 10, the time for one control cycle is 1/4410 = 0.000227 seconds. See the chapter about Initialization And Performance Pass for more information.

control rate or k-rate (kr) is the number of control cycles per second. It can be calculated as the relationship of the sample rate sr and the number of samples in one control period ksmps. If your sample rate is 44100 and your ksmps value is 10, your control rate is 4410, so you have 4410 control cycles per second.
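These relationships can be verified with a quick calculation (Python, purely illustrative):

```python
sr    = 44100   # sample rate, as set in the orchestra header
ksmps = 10      # samples per control period

kr = sr / ksmps           # control rate: number of k-cycles per second
cycle_time = ksmps / sr   # duration of one control cycle in seconds

print(kr)                    # 4410.0
print(round(cycle_time, 6))  # 0.000227
```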

dummy f-statement see f-statement

f-statement or function table statement is a score line which starts with a "f" and generates a function table. See the chapter about function tables for more information. A dummy f-statement is a statement like "f 0 3600" which looks like a function table statement, but instead of generating any table, it serves just for running Csound for a certain time (here 3600 seconds = 1 hour).

FFT Fast Fourier Transform is a method whereby audio data is stored or represented in the frequency domain, as opposed to the more typical representation of amplitude values in the time domain. Working with FFT data facilitates transformations and manipulations that are not possible, or are at least more difficult, with audio data stored in other formats.

GEN routine a GEN (generation) routine is a mechanism within Csound used to create function tables of data that will be held in RAM for all or part of the performance. A GEN routine could generate a waveform, a stored sound sample, a list of explicitly defined numbers such as the tunings of a special musical scale, or an amplitude envelope. In the past function tables could only be created in the Csound score, but now they can also be created (and deleted and overwritten) within the orchestra.

GUI Graphical User Interface refers to a system of on-screen sliders, buttons etc. used to interact with Csound, normally in realtime.

i-time or init-time or i-rate signify the time at which all the variables starting with an "i" get their values. These values are given just once per instrument call. See the chapter about Initialization And Performance Pass for more information.

k-loop see control cycle

k-time is the time during the performance of an instrument, after the initialization. Variables starting with a "k" can alter their values in each control cycle. See the chapter about Initialization And Performance Pass for more information.

k-rate see control rate

opcode the code word of a basic building block with which Csound code is written. As well as the opcode code word, an opcode will commonly provide output arguments (variables), listed to the left of the opcode, and input arguments (variables), listed to the right of the opcode. An opcode is equivalent to a 'ugen' (unit generator) in other languages.

orchestra as in the Csound orchestra, is the section of Csound code where traditionally the instruments are written. In the past the 'orchestra' was one of two text files along with the 'score' that were needed to run Csound. Most people nowadays combine these two sections, along with other optional sections in a .csd (unified) Csound file. The orchestra will also normally contain header statements which will define global aspects of the Csound performance such as sampling rate.

p-field a 'p' (parameter) field normally refers to a value contained within the list of values after an event item within the Csound score.

performance pass see control cycle

score as in the Csound score, is the section of Csound code where note events are written that will instruct instruments within the Csound orchestra to play. The score can also contain function tables. In the past the 'score' was one of two text files along with the 'orchestra' that were needed to run Csound. Most people nowadays combine these two sections, along with other optional sections in a .csd (unified) Csound file.

time stretching can be done in various ways in Csound. See sndwarp, waveset, pvstanal and the Granular Synthesis opcodes. In the frequency domain, you can use the pvs-opcodes pvsfread, pvsdiskin, pvscale, pvshift.

widget normally refers to some sort of standard GUI element such as a slider or a button. GUI widgets normally permit some user modifications such as size, positioning, colours etc. A variety of options are available for the creation of widgets usable by Csound, from its own built-in FLTK widgets to those provided by front-ends such as CsoundQT, Cabbage and Blue.

# LINKS

## Downloads

Csound's User Defined Opcodes: http://www.csounds.com/udo/

WinXound: http://winxound.codeplex.com

Cabbage: http://code.google.com/p/cabbage

## Community

Csound's info page on Sourceforge is a good collection of links and basic information.

csounds.com is the main page for the Csound community, including news, online tutorials, forums and many links.

The Csound Journal is a major resource covering many different aspects of working with Csound.

## Mailing Lists and Bug Tracker

To subscribe to the Csound User Discussion List, send a message with "subscribe csound <your name>" in the message body to sympa@lists.bath.ac.uk. To post, send messages to csound@lists.bath.ac.uk. You can search in the list archive at nabble.com.

To subscribe to the CsoundQt User Discussion List, go to https://lists.sourceforge.net/lists/listinfo/qutecsound-users. You can browse the list archive here.

Csound Developer Discussions: https://lists.sourceforge.net/lists/listinfo/csound-devel

Please report any bug you experience in Csound at http://sourceforge.net/tracker/?group_id=81968&atid=564599, and any CsoundQt-related bug at http://sourceforge.net/tracker/?func=browse&group_id=227265&atid=1070588. Every bug report is an important contribution.

## Tutorials

A Beginning Tutorial is a short introduction from Barry Vercoe, the "father of Csound".

An Instrument Design TOOTorial by Richard Boulanger (1991) is another classic introduction, still very much worth reading.

Introduction to Sound Design in Csound also by Richard Boulanger, is the first chapter of the famous Csound Book (2000).

Virtual Sound by Alessandro Cipriani and Maurizio Giri (2000)

A Csound Tutorial by Michael Gogins (2009), one of the main Csound Developers.

## Video Tutorials

A playlist as overview by Alex Hofmann:

http://www.youtube.com/view_play_list?p=3EE3219702D17FD3

### CsoundQt (QuteCsound)

QuteCsound: Where to start?
http://www.youtube.com/watch?v=0XcQ3ReqJTM

First instrument:
http://www.youtube.com/watch?v=P5OOyFyNaCA

Using MIDI:
http://www.youtube.com/watch?v=8zszIN_N3bQ

About configuration:
http://www.youtube.com/watch?v=KgYea5s8tFs

New editing features in 0.6.0:
http://www.youtube.com/watch?v=Hk1qPlnyv88

### Csoundo (Csound and Processing)

http://csoundblog.com/2010/08/csound-processing-experiment-i/

### Open Sound Control in Csound

http://www.youtube.com/watch?v=JX1C3TqP_9Y

## The Csound Conference in Hannover (2011)

Web page with papers and program.

All videos can be found via the YouTube channel csconf2011.

## Example Collections

Csound Realtime Examples by Iain McCurdy is one of the most inspiring and up-to-date collections.

The Amsterdam Catalog by John-Philipp Gather is particularly interesting because of its adaptation of Jean-Claude Risset's famous "Introductory Catalogue of Computer Synthesized Sounds" from 1969.

## Books

The Csound Book (2000) edited by Richard Boulanger is still the compendium for anyone who really wants to go in depth with Csound.

Virtual Sound by Alessandro Cipriani and Maurizio Giri (2000)

Signale, Systeme und Klangsynthese by Martin Neukom (2003, German) has many interesting examples in Csound.

The Audio Programming Book edited by Richard Boulanger and Victor Lazzarini (2011) is a major source with many references to Csound.

Csound Power! by Jim Aikin (2012) is a perfect up-to-date introduction for beginners.

# BUILDING CSOUND

Currently (April 2012) a collection of build instructions has been started at the Csound Media Wiki at Sourceforge. Please have a look there if you have problems building Csound.

## Linux

### Debian

#### On Wheezy with an amd64 architecture.

Download a copy of the Csound sources from Sourceforge. To do so, type in the terminal:

git clone --depth 1 git://csound.git.sourceforge.net/gitroot/csound/csound5

Use aptitude to get (at least) the dependencies for a basic build, which are: libsndfile1-dev, python2.6-dev, scons. To do so, use the following command (with sudo or as root):

aptitude install libsndfile1-dev python2.6-dev scons

There are many more optional dependencies, most of which are recommended (some are already part of Debian); they are documented here. I built with the following libraries installed: libportaudiocpp0, alsa, libportmidi0, libfltk1.1, swig2.0, libfluidsynth1 and liblo7. To install them (some might already be on your system), type:

aptitude install libportaudiocpp0 alsa libportmidi0 libfltk1.1 swig2.0 libfluidsynth1 liblo7

Go into the csound5/ folder you downloaded from Sourceforge and edit build-linux-double.sh to meet your build needs; once again, read about the options in the Build Csound section of the manual.

On amd64 architectures, it is IMPORTANT to change gcc4opt=atom to gcc4opt=generic (otherwise it will build for a single processor type). I also used buildNewParser=0, since I could not get it to compile with the new parser. To finally build, run the script:

./build-linux-double.sh

If the build was successful, use the following command to install:

./install.py

Make sure that the following environment variables are set, for example by adding these lines to your shell profile (e.g. ~/.bashrc):

export OPCODEDIR64=/usr/local/lib/csound/plugins64
export CSSTRNGS=/usr/local/share/locale

If you built the Python interface, move csnd.py and _csnd.so from /usr/lib/python2.6/site-packages/ to /usr/lib/python2.6/dist-packages/ (the standard place for external Python modules since version 2.6). You can do so with the following commands:

mv /usr/lib/python2.6/site-packages/csnd.py /usr/lib/python2.6/dist-packages/

mv /usr/lib/python2.6/site-packages/_csnd.so /usr/lib/python2.6/dist-packages/

If you want to uninstall, you can do so by running the following command:

/usr/local/bin/uninstall-csound5

Good luck!

### Ubuntu

1. Download the sources: either the latest stable release from http://sourceforge.net/projects/csound/files/csound5/ or the latest (possibly unstable) sources from git (by running the command git clone git://csound.git.sourceforge.net/gitroot/csound/csound5).

2. Open a Terminal window and run the command

```
sudo apt-get build-dep csound
```

This should install all the dependencies which are needed to build Csound.

3. Change the directory to the folder you have downloaded in step 1, using the command cd.

4. Run the command scons. You can start with

```
scons -h
```

to check the configuration and choose your options. See the Build Csound section of the manual for more information about the options. If you want to build the standard configuration, just run scons without any options.

If you get an error, these are possible reasons:

• You must install bison and flex to use the new parser.
• If there is a complaint about not finding a file called custom.py, copy the file custom-linux-jpff.py and rename it as custom.py.

There are also detailed instructions by Menno Knevel at csounds.com which may help.

5. Run

```
sudo python install.py
```

You should now be able to run csound by the command /usr/local/bin/csound, or simply by the command csound.

## OSX

As mentioned above, have a look at http://sourceforge.net/apps/mediawiki/csound/index.php?title=Main_Page.

## Windows

There is a detailed description by Michael Gogins, entitled How to Build Csound on Windows, in the Csound sources. You can either download the Csound sources at http://sourceforge.net/projects/csound/files/csound5 or get the latest version from the Csound Git repository.

# METHODS OF WRITING CSOUND SCORES

Although the real-time use of Csound has become more prevalent and arguably more important, whilst the use of the score has diminished and become less important, composing using score events within the Csound score remains an important bedrock of working with Csound. There are many methods for writing Csound scores, several of which are covered here, starting with the classical method of writing scores by hand and concluding with the definition of a user-defined score language.

## Writing Score by Hand

In Csound's original incarnation the orchestra and score existed as separate text files. This arrangement existed partly in an attempt to appeal to composers who had come from a background of writing for conventional instruments, by providing a more familiar paradigm. The three unavoidable attributes of a note event - which instrument plays it, when, and for how long - were hardwired into the structure of a note event through its first three attributes or 'p-fields'. All additional attributes (p4 and beyond), for example dynamic, pitch and timbre, were left to the discretion of the composer, much as they would be when writing for conventional instruments. It is often overlooked that when writing score events in Csound we define start times and durations in 'beats'. It just so happens that 1 beat defaults to a duration of 1 second, with the consequence that many Csound users spend years thinking that they are specifying note events in terms of seconds rather than beats. This default setting can easily be modified and manipulated, as shown later on.

The most basic score event as described above might be something like this:

```
i 1 0 5
```

which would demand that instrument number 1 play a note at time zero (beats) for 5 beats. After some time constructing a score in this manner it quickly becomes apparent that certain patterns and repetitions recur. Frequently a single instrument will be called repeatedly to play the notes that form a longer phrase, so repeatedly typing the same instrument number for p1 becomes a chore. An instrument may play a long sequence of notes of the same duration, as in a phrase of running semiquavers, rendering the task of inputting the same value for p3 over and over again slightly tedious. And often a note will follow on immediately after the previous one, as in a legato phrase, suggesting that the p2 start time of that note might better be derived by the computer from the duration and start time of the previous note than figured out by the composer. Inevitably, shortcuts were added to the syntax to simplify these kinds of tasks:

```
i 1 0 1 60
i 1 1 1 61
i 1 2 1 62
i 1 3 1 63
i 1 4 1 64
```

could now be expressed as:

```
i 1 0 1 60
i . + 1 >
i . + 1 >
i . + 1 >
i . + 1 64
```

where '.' indicates that a p-field reuses the value from the previous score event, '+' (valid only in p2) indicates that the start time follows on immediately after the previous note has ended, and '>' creates a linear ramp from the first explicitly defined value (60) to the next explicitly defined value (64) in that p-field column (p4).
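
The arithmetic behind the '>' ramp can be sketched in a few lines of Python (a hypothetical helper for illustration only; Csound performs this expansion internally):

```python
# Sketch of how the '>' shorthand fills a p-field column: the rows between
# the first and last explicitly given values form a linear ramp.
def expand_ramp(first, last, n_rows):
    """Return the full p-field column for n_rows score events."""
    step = (last - first) / (n_rows - 1)
    return [first + i * step for i in range(n_rows)]

print(expand_ramp(60, 64, 5))  # [60.0, 61.0, 62.0, 63.0, 64.0]
```

The printed list corresponds to the p4 column of the example above.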

A more recent refinement of the p2 shortcut allows for staccato notes where the rhythm and timing remain unaffected. In the following example each note lasts for just 1/10 of a beat, but each one still begins one beat after the previous one.

```
i 1 0   .1 60
i . ^+1 .  >
i . ^+1 .  >
i . ^+1 .  >
i . ^+1 .  64
```

The benefits offered by these shortcuts quickly become apparent when working on longer scores. In particular, being able to edit critical values once rather than many times is soon appreciated.

Taking a step further back, a myriad of score tools, most also identified by a single letter, exist to manipulate entire sections of score. As previously mentioned, Csound defaults to giving each beat a duration of 1 second, which corresponds to this 't' statement at the beginning of a score:

```
t 0 60
```

"At time (beat) zero set tempo to 60 beats per minute"; but this could easily be anything else, or even a string of tempo change events following the format of a linsegb statement.

```
t 0 120 5 120 5 90 10 60
```

This time the tempo begins at 120 bpm and remains steady until the 5th beat, whereupon there is an immediate change to 90 bpm; thereafter the tempo declines in linear fashion until the 10th beat, when it reaches 60 bpm.

'm' statements allow us to define sections of the score that might be repeated ('s' statements marking the end of that section). 'n' statements referencing the name given to the original 'm' statement via their first parameter field will call for a repetition of that section.

```
m verse
i 1 0   1 60
i . ^+1 .  >
i . ^+1 .  >
i . ^+1 .  >
i . ^+1 . 64
s
n verse
n verse
n verse
```

Here a 'verse' section is first defined using an 'm' section (the section is also played at this stage). 's' marks the end of the section definition and 'n' recalls this section three more times.

Just a selection of the techniques and shortcuts available for hand-writing scores have been introduced here (refer to the Csound Reference Manual for a more encyclopedic overview). It has hopefully become clear, however, that with a full knowledge and implementation of these techniques the user can write and manipulate scores by hand adeptly and efficiently.

## Extension of the Score Language: bin="..."

It is possible to pass the score through a pre-processor before it is used by Csound to play notes: instead of being read directly, the score is first interpreted by a binary (application), which produces a usual Csound score as a result. This is done by the statement bin="..." in the <CsScore> tag. What happens?

1. If just a binary is specified, this binary is called and two files are passed to it:
   1. A copy of the user-written score. This file has the suffix .ext
   2. An empty file which will be read by Csound after the interpretation. This file has the usual score suffix .sco
2. If a binary and a script are specified, the binary calls the script and passes the two files to the script.

If you have Python¹ installed on your computer, you should be able to run the following examples. They actually do nothing but print the arguments (= file names).

EXAMPLE ...csd: Calling a binary without a script

```
<CsoundSynthesizer>
<CsInstruments>
instr 1
endin
</CsInstruments>
<CsScore bin="python">
from sys import argv
print "File to read = '%s'" % argv[0]
print "File to write = '%s'" % argv[1]
</CsScore>
</CsoundSynthesizer>
```

When you execute this .csd file in the terminal, your output should include something like this:

File to read = '/tmp/csound-idWDwO.ext'
File to write = '/tmp/csound-EdvgYC.sco'

And there should be a complaint because the empty .sco file has not been written:

cannot open scorefile /tmp/csound-EdvgYC.sco

EXAMPLE .... csd: Calling a binary and a script

To test this, first save this file as print.py in the same folder where your .csd examples are:

```
from sys import argv
print "Script = '%s'" % argv[0]
print "File to read = '%s'" % argv[1]
print "File to write = '%s'" % argv[2]
```

Then run the ....csd:

```
<CsoundSynthesizer>
<CsInstruments>
instr 1
endin
</CsInstruments>
<CsScore bin="python print.py">
</CsScore>
</CsoundSynthesizer>
```

The output should include these lines:

Script = 'print.py'
File to read = '/tmp/csound-jwZ9Uy.ext'
File to write = '/tmp/csound-NbMTfJ.sco'

And again a complaint about the invalid score file:

cannot open scorefile /tmp/csound-NbMTfJ.sco

### Scripting Language Examples

The following example uses a Perl script to allow seeding options in the score. A random seed can be set in a comment like ";; SEED 123". If no seed has been set, the current system clock is used. So the first three random statements will produce different values on each run, while the last two statements will always generate the same values.

```
<CsoundSynthesizer>
<CsInstruments>
;example by tito latini

instr 1
prints "amp = %f, freq = %f\n", p4, p5;
endin

</CsInstruments>
<CsScore bin="perl cs_sco_rand.pl">

i1  0  .01  rand()   [200 + rand(30)]
i1  +  .    rand()   [400 + rand(80)]
i1  +  .    rand()   [600 + rand(160)]
;; SEED 123
i1  +  .    rand()   [750 + rand(200)]
i1  +  .    rand()   [210 + rand(20)]
e

</CsScore>
</CsoundSynthesizer>

# cs_sco_rand.pl
my ($in, $out) = @ARGV;
open(EXT, "<", $in);
open(SCO, ">", $out);

while (<EXT>) {
    s/SEED\s+(\d+)/srand($1);$&/e;
    s/rand\(\d*\)/eval $&/ge;
    print SCO;
}

```

1. www.python.org

# Pure Data

Pure Data (or Pd) is a real-time graphical programming environment for audio, video, and graphical processing. Pure Data is commonly used for live music performance, VeeJaying, sound effects, composition, audio analysis, interfacing with sensors, using cameras, controlling robots or even interacting with websites.  Because all of these various media are handled as digital data within the program, many fascinating opportunities for cross-synthesis between them exist. Sound can be used to manipulate video, which could then be streamed over the internet to another computer which might analyze that video and use it to control a motor-driven installation.

Programming with Pure Data is a unique interaction that is much closer to the experience of manipulating things in the physical world.  The most basic unit of functionality is a box, and the program is formed by connecting these boxes together into diagrams that both represent the flow of data while actually performing the operations mapped out in the diagram.  The program itself is always running, there is no separation between writing the program and running the program, and each action takes effect the moment it is completed.

The community of users and programmers around Pure Data have created additional functions (called "externals" or "external libraries") which are used for a wide variety of other purposes, such as video processing, the playback and streaming of MP3s or Quicktime video, the manipulation and display of 3-dimensional objects and the modeling of virtual physical objects. There is a wide range of external libraries available which give Pure Data additional features. Just about any kind of programming is feasible using Pure Data as long as there are external libraries which provide the most basic units of functionality required.

The core of Pure Data is written and maintained by Miller S. Puckette (http://crca.ucsd.edu/~msp/) and includes the work of many developers (http://www.puredata.org/), making the whole package very much a community effort. Pd runs on GNU/Linux, Windows, and Mac OS X, as well as mobile platforms like Maemo, iPhoneOS, and Android.

# Introduction to Firefox

Our guess is that you wouldn't be reading this unless you already know what a web browser is. However, if you don't: it's the software used to visit and view web pages on the Internet.

The Internet is a giant network of computers all connected to each other. It has grown from the first four systems that were originally connected in 1969 to currently over a billion systems and growing. Some of the computers connected to the Internet are "web servers." These web servers run software that allows them to deliver web pages. The vast network of web servers on the Internet provides access to over 10 billion web pages and a continually growing and evolving set of web content and services. If you want to access these pages from your personal computer, laptop, or mobile device, you need to run a software program that knows how to do this. This is the purpose of a web browser.

## History of web browsing

Browsers have had one of the most public and interesting competitive lives of any software.

The first browser that could display images alongside text was known as Mosaic, and it really was an innovation that changed the world. In 1994, Marc Andreessen and a few of the original Mosaic developers banded together to start Netscape along with Jim Clark, a well known Silicon Valley entrepreneur. The early days of commercial browser development were marked by high energy and many innovations that continued to expand and improve the kinds of information the internet could provide. Every week it seemed that new sites popped up and new features appeared in browsers. A Wired Magazine article (http://www.wired.com/wired/archive/2.10/mosaic.html) captured some of the excitement from those early days when browsers and the web were starting to take off and grow rapidly.

However, while Mosaic and then Netscape Navigator were first to enter the game, they failed to corner the market. After a relatively short and aggressive "browser war", Microsoft's Internet Explorer (IE) took the lead. Through spending over \$100 million on the development and promotion of IE and aggressive business practices, Microsoft was able to capture around 96% market share of all browsers in use. Some of the business practices Microsoft engaged in during the "browser war" were later determined to be anti-trust violations (http://www.usdoj.gov/atr/cases/ms_index.htm). With control of the market and no perceived business case for improving the browser Microsoft began scaling back development in 2002 and 2003. In 2003 it announced there would be no more standalone versions of Internet Explorer (http://www.usdoj.gov/atr/cases/ms_index.htm).

With Microsoft out of the game, Netscape in decline, and many websites using pop-ups and untargeted advertising schemes, some consider 2001-2004 as a dark age of innovation and improvements for browsers and the web.

## Development of Firefox

A new chapter was added to this story with the debut of Firefox. Firefox is a distant descendant of Mosaic and Netscape. In 1998, Netscape set up the Mozilla Project and made its browser code freely available as an experimental strategy to gain a competitive advantage against Microsoft. This allowed programmers around the world to study the Mozilla code and follow its development. As time passed, this community of developers continued to contribute to its development. Netscape and AOL (which had acquired Netscape) remained heavily involved with Mozilla and released several Netscape and AOL products from the evolving Mozilla code. However, interest in browser development waned at AOL over time and in early 2003 AOL decided to reduce involvement, setting the project free to chart its own course.

In 2003, the Mozilla Foundation was created as an independent organization. The stage was set for the growing Mozilla community to leverage Netscape's past investment in browser code and make its own mark. This passionate community focused on the creation of a next-generation browser named Firefox. The goals for Firefox were simple: Make a light-weight browser that was fast and easy to use. It would put users back in control of their web surfing experience. It would not compromise on any part of the user experience with annoyances. It would also add an extension system that would give users the power to customize, experiment with, and tune the browser in thousands of ways.

In 2004, Firefox 1.0 was released. Since then the number of users has grown steadily. As of 2009, Firefox has about a 23% worldwide market share, with more than 300 million of the 1.2 billion Internet users around the world using Firefox as their web browser. The day Firefox 3 was released, in an event known as "Download Day", more than 8 million users downloaded the new version in a single day.

Firefox continues to innovate. With features such as an easy method for subscribing to automatically updated news headlines, home page tabs that help you get to content on the web faster, built-in pop-up blocking, and the expanding number of extensions that allow you to tailor your browser, Firefox helps you to stay in control of the way you interact with the web. Firefox has sparked renewed interest in improving web browsers.

In response to Firefox, Microsoft changed its plans, restarted its browser development and released IE7 and recently IE8. Apple and Google have also become involved in efforts to build new web browsers (Safari and Chrome, respectively). Innovation and choice are returning to browsers and the web.

Firefox runs on a wide range of operating systems and is localized in over 75 different languages. It is built by a community of developers around the world who are passionate about improving the browser and the web. Best of all, Firefox is free!

# Ardour

Ardour is a full-featured, free and open-source hard disk recorder and digital audio workstation program suitable for professional use. It features unlimited audio tracks and buses, non-destructive, non-linear editing with unlimited undo, and anything-to-anywhere signal routing. It supports standard file formats, such as BWF, WAV, WAV64, AIFF and CAF, and it can use LADSPA, LV2, VST and AudioUnit plugin formats.

Ardour runs on Linux and Mac OS X, and uses the Jack Audio Connection Kit (JACK) to interface with the computer's sound card, as well as with other audio applications running on the same system.

More information on Ardour can be found at http://ardour.org/.

About a four-hour drive east of the bustling city of Mumbai (Bombay) is the idyllic village of Khairat. The drive itself descends from a private toll highway to a one-lane "highway", then a dirt road, and then it's a trek on foot to get to Khairat. Hidden behind a small hill is a one-room school with a single teacher and thirty children. Each one of them has an OLPC XO-1 laptop. Every child comes from an illiterate farming family and has never before seen a computer. How has the XO changed their lives? Read on.

# Somebody Should Set The Title For This Chapter!

The school consists of a single classroom managed by a sole teacher. He teaches languages, mathematics, physics and biology. He has also become the role model for children in this village. The children have become adept at using the XO laptop to write, paint, record, photograph, and to use other pieces of software on the laptop to learn about concepts like the solar system and arithmetic. All this is wonderful, but first, some history.

Back in October 2007, OLPC in conjunction with Reliance ADA (an Indian conglomerate) and some volunteers began work on the Khairat pilot project. Their goals were modest. The school at Khairat had many advantages: a passionate teacher, a passionate team, and a passionate village community. The problems they faced were unreliable electricity, high humidity, irregular topography, and a few school dropouts. Would laptops make a difference? Would they even survive?

Reliance ADA, being in the telecommunications business, agreed to provide Internet access using a cell phone connection. The volunteers set up the connection and the teacher was given a short training session. Some of the volunteers even rigged up a mill and yoke to create a cow power prototype for charging laptops. Off they went, exploring their digital worlds in a small green laptop that they would tote every day from home to school and back. In November 2008, I had the opportunity to visit Khairat as part of a research project. In my observations I found that the children had done a remarkable job of learning to use the XO and to purpose it for their own work and fun. They had documented events such as Mahatma Gandhi's birthday (October 2), and a visit by a tight rope walker troupe. They were quick to open the Paint activity and whip up a local scene. They were fluent in writing their names in English, Hindi and Marathi, oblivious to the fact that hundreds of volunteers worldwide had contributed to language translations to bring such features to communities like Khairat.

The XOs are currently used in Khairat as the primary method of instruction. The blackboards gather dust with remains of lessons from a year ago. The teacher teaches mathematics by creating "fill in the blanks" problems in Marathi, the local language using the Write activity. The children use their XOs and fill in the blanks. As the children fill in the answers, the teacher sees the characters appear magically on his screen. All these players are oblivious to the fact that their computers carry some of the world's most advanced networking technologies for collaboration. Mesh networks, collaborative word processors, and other bits that first-worlders haven't seen in their classrooms as yet! The teacher simply commands his class to "go to the neighborhood and join the mesh".

These six-year-old charmers are truly standing on the shoulders of giants. We hope that many a toothless grin will address a boardroom or a swarm of voters someday. We hope that the XO along with Sugar will help these children open a window into the world we take for granted. We hope that some day they will address problems that we have not been able to. We hope that some day they will find ways to make their homes and families healthier, safer, and happier. The children and the teacher have done remarkably well in two years. They are headed towards a bright future, and we hope the worldwide OLPC and Sugar communities can help them get there.

This essay is written by Dr. Sameer Verma based on his visit to Khairat in November 2008. Dr. Verma is an Associate Professor of Information Systems at San Francisco State University in San Francisco, California. His research includes diffusion and adoption of sustainable and innovative technologies.

# Installing Audacity on Ubuntu

Software name : Audacity
Homepage : http://audacity.sourceforge.net
Software version used for this installation : 1.2.6
Operating System used for this installation : Ubuntu 9.04
Recommended Hardware : 300 MHz processor (CPU) minimum, 64 MB RAM, internet connection

If you are used to an operating system like Windows or Mac OS X, you may have installed software by downloading it from a website, double-clicking on the downloaded file, and clicking through all the licence agreements, configuration options etc. This is the 'old' way of installing software. The 'new' way is much smarter: you choose what you want to install from a list and press 'go'. The rest - finding the files, downloading the files, installing the software - is taken care of by Ubuntu itself while you go and get a cup of tea or work on something else. It can't get much easier.

However, new ways take a little getting used to, and so we will first look at the basic tool needed to install software this way, and then how to use it to install Audacity.

## Synaptic Package Manager

The Synaptic Package Manager (SPM) is used for more than just installing software. It can also upgrade your entire operating system and manage all software installed on your computer. However, most people use SPM just for installing new software. Before you embark on this process there are a few concepts that you may wish to get familiar with. It's not crucial that you understand them thoroughly, so just read the explanations and then let them soak in over time. The ideas behind SPM will become clearer with use.

### What is a repository?

SPM can automate the installation of software on your computer because it has a direct connection to one or more online software repositories. These repositories are vast archives of software that has been pre-configured for installation on your operating system. When your computer is online, SPM can connect to these archives, check what software is available for installation, and present you with a list of installable software. All you have to do is select the software from the list that you want. SPM then downloads the software from the repository and takes care of the installation process.

So SPM is actually a repository manager: it manages which repositories (there are many) you wish to access, and which software to download and install from those repositories.

SPM allows you to choose which repositories it accesses through its settings. The default repositories used by Ubuntu can be extended through the SPM settings so that you can access a wider range of software. Ubuntu calls each repository by a simple name: Main, Universe, Multiverse and Restricted. By default, Ubuntu only uses the Main online repository. If you wish to access the other repositories, you must change the repository settings in SPM.

### What is a package?

When SPM downloads software for installation, it arrives in a form known as a 'package'. This means that it is a compressed archive of the software, pre-configured so that it will install nicely on your computer. If the package has been configured nicely, and Ubuntu spends a lot of time making sure this is the case, then many of the headaches that installing software can bring are taken away - it's the Panadol of software installation. One of the biggest issues with installing software on any form of Linux (Ubuntu is one of many types of Linux) is dependencies. Dependencies are all the other pieces of software required by a particular program. If, for example, I want to install an audio editor, that audio editor may use some functionality of other software to do its job.

SPM takes the (often) dark art of dependencies away from you and manages it itself. So if you wish to install some software and it has dependencies (and the list can be long), you don't have to work this out yourself: SPM already knows what is needed, finds it, and installs it along with the software you have chosen.

So SPM, as well as managing which repositories you access, is also package management software. Hence the name - Synaptic Package Manager.

### apt

You don't really need to know about apt, so if you are on the verge of being confused then don't read this short section. If you are a geek wannabe then read on.

Ubuntu is a form of Linux derived from another form of Linux - Debian. This family of Linux has in common (amongst other things) its package/repository management system. Both Ubuntu and Debian use the apt system for managing packages. APT is an acronym for Advanced Package Tool.

SPM is actually a 'front end' (graphical interface) for controlling apt. So SPM is the nice user interface that you see, but the real work is done by apt. There are also other ways of managing apt, such as the command-line interface known as apt-get. In the world of Linux there are many varieties of Linux users, and they have their own ways of doing things. In general it's safe to say most Debian users use apt-get, and most Ubuntu users are happier using SPM.
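For the curious, here is a rough sketch of what the same job looks like with apt-get in a terminal. It mirrors the steps we will do in SPM below: refresh the package lists, then install (this assumes the package is available in your enabled repositories, and it needs administrator rights via `sudo`):

```shell
# Refresh the package lists from all enabled repositories
# (this is what the Reload button in SPM does)
sudo apt-get update

# Install Audacity; apt-get finds and installs any
# dependencies automatically, just like SPM does
sudo apt-get install audacity
```

You don't need to use this method - SPM does exactly the same thing behind the scenes.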

## Configuring Synaptic Package Manager for installing Audacity

To install Audacity on Ubuntu you will need to change the default repository settings of SPM as Audacity is not contained in the default repository. To do this you will need to open the Synaptic Package Manager and you can do this via the System menu. If you haven't changed the default Desktop of Ubuntu then the System menu can be accessed at the top left of your screen:

If all is well you will be prompted for a password.

Here you must enter your password (the same one you use to log into the system). If you don't know the password then you have a problem, and it's probably because the computer you are using is not yours. In this case you have to find the computer's owner and ask them for the password (which is usually not polite unless you know them well) or ask them to input the password while you look casually in the other direction.

Assuming the password entered is correct, you will now see SPM open in front of you. It may be that you first see the following 'Quick Introduction' (this appears if you haven't used SPM before).

Just click Close and move on. Let's look at the Synaptic Package Manager interface...

Let's not worry about the details of the interface for now. All we want to do is change the repository settings. To do this, click on the Settings menu and choose Repositories.

Now we get to where we can do some business. Make sure that Community-maintained Open Source software (universe) is checked :

Now close that window by pressing the big close button at the bottom right.

Next, you will see a warning telling you the repository has changed :

Click the Close button and move on to SPM so you can refresh the repositories as the warning suggests. To do this, do as the warning says - click on Reload :

You should then get some feedback saying the repositories are being updated and showing the progress.

## Installing Audacity with Synaptic Package Manager

Now the real business. It's pretty easy. Click on the nice big Search button:

The search window will open and now enter 'audacity' in the field:

Great. Now press Search at the bottom right of the above window. The search should not take very long and when it is complete you will see Audacity listed :

If you highlight audacity (by clicking once on the name) you will see some information about Audacity displayed :

Now you can read the information if you wish but there is nothing critical there. Best thing to do is just to double-click on audacity. By doing this you are 'marking' (choosing) the software for installation. If you do so then an additional window appears:

Now click Mark:

You have now told Synaptic Package Manager that you want to install Audacity. If all is well then the package will be highlighted:

Now press the Apply button with the nice big tick next to it :

Next SPM informs you of how much space will be taken by the installation :

Now you can just click Apply at the bottom right of the screen and the installation will start. A progress bar will be shown:

When it is completed (downloading and installing) you will have this screen :

That means all is well. Pat yourself on the back and press Close. Synaptic Package Manager will then return to its original state. Close it :

Now you can open audacity...just browse to the Applications menu, choose Sound & Video, select Audacity and release the mouse :

If all is well you will see Audacity:

Congrats!

# What is Newscoop?

Updated for Newscoop 3.5.3

Newscoop is a multi-lingual enterprise content management system for on-line newspapers and magazines, enabling scheduled publishing of multimedia. Built-in revenue generation features include support for paid or trial reader subscriptions, and geolocation-based services.

The administration interface is built with the journalist, editor and publisher in mind, based on feedback from the news organizations around the world that have deployed Sourcefabric's newsroom software since the launch of the first version in 1999.

Newscoop follows a print publishing model, so it structures websites as Publications, Issues, Sections and Articles by default. Newscoop was designed for medium-to-large online publications, but it is capable of handling nimbler sites too.

With Newscoop, you can edit articles using an advanced WYSIWYG interface, and manage articles translated into different languages. A traditional editorial process ensures quality of content: the journalist writes the article, the editor reviews the content, and then the article can be published. The Newscoop administration interface works in most modern web browsers, including Mozilla Firefox and Microsoft Internet Explorer. No browser plug-ins are necessary.

You can also create your own definition of what a particular type of article should include. For example, one article type might consist of "Intro", "Body", and "References", while another type might include only "Body" and "Author Bio". You can incorporate images, audio and video into your articles, for delivery directly in the browser window or as attachments for download. Articles can be categorized with topics, and scheduled for release at a future date, or published immediately.

Newscoop offers full control over the look and feel of your web site via a flexible HTML5 and CSS3 ready template engine. The PHP object-oriented API enables your website developers to build third-party plug-ins too. Built-in support for translation allows the administration interface to be adapted to support new languages quickly and easily.

Managers will appreciate the fine-grained access control for different types of staff users, including journalists, editors, and subscription managers. The integrated subscription features include IP-based access control for corporate and institutional accounts.

Newscoop is Open Source software released by Sourcefabric under the GNU General Public License. It incorporates the TinyMCE WYSIWYG text editor and Plupload file handler from Moxiecode Systems AB, Christophe Dolivet's EditArea as a template editor, and PhotoViewer by Joseph Nicora for thumbnail zooming. Geographical data is provided by GeoNames.

# Feature list

This list of features is provided as a guide to help you decide if Newscoop is the right content management system for your publication.

## Multi-lingual content

All of the content that you create in Newscoop can be translated:

• Articles
• Sections
• Issues
• Publications
• Topics (content categorization)
• File attachment descriptions

The Newscoop administration interface has been translated into the following languages (in alphabetical order):

1. Arabic
2. Belarusian
3. Chinese (Simplified)
4. Chinese (Traditional)
5. Croatian
6. Czech
7. Dutch
8. English
9. French
10. Georgian
11. German
12. Korean
13. Polish
14. Portuguese
15. Romanian
16. Russian
17. Serbian (Cyrillic)
18. Serbo-Croatian (Latin)
19. Spanish
20. Swedish

Further translations can be added using a tool built into Newscoop called the "Localizer".

## Revenue generation

1. Control access to your content via:
• User's IP Address (for corporate accounts)
• Login/password (for individual users)
2. Control the content your subscribers have access to:
• The entire issue
• Specific sections in each issue
• Specific sections in a particular language
3. Support for a trial subscription period, for instant access while payment is made
4. Set your own pay periods (the time between each payment made by a subscriber)
5. Geolocation and mapping features, enabling location-based services

## Editorial features

1. Online WYSIWYG editor for article editing:
• Typical style formatting: bold, italic, etc.
• Special support for linking to existing (internal) articles
• Ability to split articles into multiple pages
• Insert images into your articles
• Copy & Paste clean text from Microsoft Word or OpenOffice.org (while preserving bold and italics)
• Insert subtitles, which can be used for breaking up the article (pagination)
2. Built with multiple simultaneous users in mind
• While a journalist is editing an article, it is "locked". A warning will be displayed if anyone else tries to edit the article at the same time. This prevents one journalist from deleting the work of another
3. Group articles into sections
4. Group sections into an issue
5. Release an issue all at once
6. Allow subscribers to access only certain issues
7. Scheduled Publishing: automatically publish articles or issues at some time in the future. For articles, you can schedule the following actions:
• publish
• unpublish
• show the article on the front page
• remove the article from front page
• show the article on the section page
• remove the article from the section page
8. Topics: Categorize your content
• Define however many topics you like
• Associate any number of topics to an article
• Topics can have subtopics, subtopics can have sub-subtopics, etc.
• Topics can be translated
9. Dynamic, Flexible Article Types
• "Article Types" allow you to define your own article format - you aren't limited to just "Introduction" and "Body" fields, for instance. An Article Type consists of a series of data fields
• You can define any number and combination of the following field types:
• date field
• single-line text field
• multiple-line text field with WYSIWYG editor
• drop-down selection containing a list of topics
• Reorder how the fields are displayed in the admin interface
• Hide fields that are no longer in use
• Translate the field names
• Change the data type for a field
10. Image Manager
• View/search all of the images in the system
• Edit image metadata
• Scale images any way you want on the fly. Scaled versions are cached
11. File Attachments
• Attach files to articles
• Files can have descriptions
• You can specify whether the file should be displayed in the browser or popup a download window
12. Comments
• Readers can make comments to articles
• Comments can be linked to a forum
• Flexible implementation: allow comments from anyone, from logged-in users only, or from subscribers only
• Comments can be moderated
• Built-in CAPTCHA for spam prevention
13. Works with SSL on both the front-end and back-end

## Site Design

Newscoop has its own template language specifically made for online newspapers and magazines. It allows you to customize your site however you want.

• You have access to the following data objects:
• publications
• issues
• sections
• articles
• topics
• current user
• current language (e.g. automatically show the user an article in their language)
• Control statements such as IF and LIST
• Include other templates
• Built on the SMARTY templating engine which allows easy customization and inclusion of third party modules

## Administration

1. Fine-grained access control: you can create different user types such as:
• journalists
• editors
• photographers
• photo editors
• subscriber managers
• site administrators
2. Multiple author support with contact information, biographies and article tracking
3. All administration actions are logged
4. Security
• Login page secured against automated scripting attacks with a CAPTCHA
• Login password is encrypted when sent to the server
5. Backup
• Command-line "backup" and "restore" scripts make it easy to backup your entire site and restore it with one command
• You can also easily transfer your site from one server to another using these scripts
6. Automated Feedback and Bug Reporting
• If something goes wrong in the administration interface, a special screen will appear which allows you to submit the problem back to the Newscoop team
• You can also submit feedback directly in the administration interface, such as suggestions or feature requests

## Developers

• Completely open-source LAMP development stack (Linux, Apache, MySQL, and PHP)
• Easy to use object-oriented API to develop plugins or alternative interfaces
• Easy-to-read code
• Open development process - all planning, specs, and reviews are done in the open. Developing Newscoop is a community process

## Full commercial support

• Paid per-incident support is available from Sourcefabric (http://www.sourcefabric.org). Guaranteed support contracts ensure a 24-hour response time
• If you need additional features in Newscoop, they can be ordered from Sourcefabric at a very reasonable cost and delivered in a timely manner
• Sourcefabric has a team of full-time developers working to make Newscoop easier to use, with the features you want
• Community support is available via mailing lists and forums (http://forum.sourcefabric.org)

# Getting started

Newscoop enables you to host multiple, multi-lingual publications on the same web server. The process of setting up a new on-line publication with Newscoop can be divided into three steps:

1. Configuring the publication, and specifying the templates to be used
2. Establishing the structure of your publication, with issues and sections
3. Adding content, managing content, and publishing it

This part of the Newscoop manual is aimed at editors and journalists working their way through these three steps. It assumes that the web server you will use is already up and running with Newscoop, and that templates have been designed for your publication. If you chose to install the sample templates when following the Installation Steps chapter, you can use these templates to learn about Newscoop in advance of having your own templates designed.

If you are a system administrator setting up a Newscoop server for production use, you should also read the administration chapters, later in this manual, before you begin work on the server.

If you do not yet have your own Newscoop server running, you can follow the steps in this manual using the Newscoop demonstration server and sample templates provided by Sourcefabric.

## Logging in

The first step begins with logging in to the administration interface of your Newscoop server. This is a special interface which is only available to the staff of your publication. Readers who subscribe to your online publication will log in using the home page of your website instead.

By default, the URL you should enter into your web browser for the administration interface is the name of your website, followed by 'admin'. For example:

http://www.example.com/admin/

If you installed Newscoop yourself, you will have set a password for the 'admin' user during the installation steps. If not, your system administrator should already have provided you with a login account name and password. Below the Account name and Password fields, click the drop-down menu to select an interface language other than the default of English, if you wish. Then click the Login button.

Alternatively, the administration interface of the Newscoop demonstration server can be found at:

http://newscoop-demo.sourcefabric.org/admin/

Please remember that the demonstration server is a public site, so don't enter any private information there. A variety of guest login accounts are set up on this system, and the passwords for these accounts are shown on the login page.

# How permissions change the interface

The appearance of the Newscoop administration interface changes, depending on the permissions that a particular staff member has. Each user sees only the options that he or she has the authority to use. A typical staff user (a section editor or journalist) will only see some of the options available to a fully authorized administrator (such as the publisher, or senior manager).

When you log into the Newscoop administration interface, across the top of the page you will see the main navigation menu, containing the options available to you. Here is how two typical users would see the main menu differently. Firstly, here's how the Actions sub-menu looks when an administrator is logged in:

And this is how the same Actions sub-menu looks when a journalist is logged in:

# The Dashboard

After logging into the administration interface for the first time, you'll see a page which Newscoop calls the Dashboard. This is an area into which you can add widgets for the administration functions that you use most often. In this way, you can customize the administration interface to suit your needs. Click the Add more widgets link to open a page where you can select from more than a dozen potential widgets.

On the Widgets page, click the Add to dashboard link for each widget that you would like to start with. You can refine your choice of widgets later, as you get to know the Newscoop administration interface and its functions.

After all the widgets that you require have been added to the Dashboard, they change from black to green text to show that they are active. Click the Go to dashboard link to return to the Dashboard page.

Each widget has three small blue icons in the upper right corner. From left to right, these icons maximize the widget to the full width of your browser window, provide general information about the widget, or close it. When a widget is maximized, clicking the close icon returns the widget to normal size.

Some widgets have a blue spanner icon in the upper right corner, which enables you to adjust a setting for that particular widget. For example, the Map search widget enables setting of the default map location using the spanner icon.

# The Parts of a Command

The first word you type on a line is the command you wish to run.  In the "Getting Started" section we saw a call to the `date` command, which returned the current date and time.

## Arguments

Another command we could use is `echo`, which displays the specified information back to the user.  This isn't very useful if we don't actually specify information to display.  Fortunately, we can add more information to a command to modify its behavior; this information consists of arguments.  Luckily, the `echo` command doesn't argue back; it just repeats what we ask it:

```
$ echo foo
foo
```

In this case, the argument was foo, but there is no need to limit the number of arguments to one. Every word of the text entered, excluding the first word, will be considered an additional argument passed to the command. If we wanted `echo` to respond with multiple words, such as `foo bar`, we could give it multiple arguments:

```
$ echo foo bar
foo bar
```

Arguments are normally separated by "white space" (blanks and tabs -- things that show up white on paper).  It doesn't matter how many spaces you type, so long as there is at least one. For instance, if you type:

```
$ echo foo              bar
foo bar
```

with a lot of spaces between the two arguments, the "extra" spaces are ignored, and the output shows the two arguments separated by a single space.  To tell the command line that the spaces are part of a single argument, you have to delimit that argument in some way.  You can do this by quoting the entire content of the argument inside double-quote (`"`) characters:

```
$ echo "foo              bar"
foo              bar
```

As we'll see later, there is more than one way to quote text, and those ways may (or may not) differ in the result, depending on the content of the quoted text.
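As a small preview of that difference, here is one illustration (the variable name `name` is just an example for this demonstration): double quotes let the shell expand variables inside the quoted text, while single quotes pass the text through untouched:

```shell
$ name=world

# Double quotes: the shell replaces $name before echo sees it
$ echo "hello $name"
hello world

# Single quotes: no expansion happens, the text stays literal
$ echo 'hello $name'
hello $name
```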

## Options

Revisiting the `date` command, suppose you actually wanted the UTC date/time information displayed.  For this, `date` provides the `--utc` option.  Notice the two initial hyphens.  These indicate arguments that a command checks when it starts and that control its behavior.  The `date` command checks specially for the `--utc` option and says, "OK, I know you're asking for UTC time".  This is different from arguments we invented, as when we issued `echo` with the arguments `foo bar`.

Other than the dashes preceding the word, `--utc` is entered just like an argument:

```
$ date --utc
Tue Mar 24 18:12:44 UTC 2009
```

Usually these options have a shorter form, such as `date -u` (the short version typically has only one hyphen).  Short options are quicker to type (use them when you are typing at the shell), whereas long options are easier to read (use them in scripts).
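You can confirm that the two spellings are equivalent by running them back to back (the times shown here are illustrative; yours will differ):

```shell
$ date -u
Tue Mar 24 18:13:05 UTC 2009
$ date --utc
Tue Mar 24 18:13:09 UTC 2009
```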

Now let's say we wanted to look at yesterday's date instead of today's.  For this we would want to specify the `--date` option (or `-d` for short), which takes an argument of its own. The argument for an option is simply the word following that option. In this case, the command would be `date --date yesterday`.
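Trying that out looks like this (the exact date and time printed will of course depend on when you run it; the output shown continues the session from the examples above):

```shell
$ date --date yesterday
Mon Mar 23 18:14:02 UTC 2009
```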

Since options are just arguments, you can combine options together to create more sophisticated behaviour.  For instance, to combine the previous two options and get  yesterday's date in UTC you would type:

```
$ date --date yesterday -u
Mon Mar 23 18:16:58 UTC 2009
```

As you can see, there are options that expect to be followed by an argument (`-d`, `--date`) and others that don't take one (`-u`, `--utc`).  Passing a slightly more complex argument to the `--date` option allows you to obtain some interesting information, for example whether this year is a leap year (a year in which the last day of February is the 29th).  You need to know what day immediately precedes the 1st of March:

```
$ date --date "1march yesterday" -u
Sat Feb 28 00:00:00 UTC 2009
```

The question you posed to `date` is: if today were the 1st of March of the current year, what date would yesterday be?  So no, 2009 is not a leap year.  It may also be useful to get the weekday of a given date, say New Year's Eve 2009:
```
$ date -d 31dec +%A
Thursday
```

which is the same as:

```
$ date --date 31december2009 +%A
Thursday
```

In this case we passed to `date` the option `-d` (`--date`) followed by the New Year's Eve date, and then a special argument (one that is specific to the `date` command). Commands may once in a while have strange, esoteric arguments...  The `date` command can accept a format argument starting with a plus (`+`).  The format `%A` asks it to print the weekday name of the given date (while `%a` would have asked it to print the abbreviated weekday: try it!).  For now don't worry about these arcane details: we'll see how to obtain help from the command line in learning command details.  Let's only nibble a more savory morsel that combines the `echo` and `date` commands:

```
$ echo "This New Year's Eve falls on a $( date -d 31dec +%A )"
This New Year's Eve falls on a Thursday
```

## Repeating and editing commands

Use the Up-arrow key to retrieve a command you issued before.  You can move up and down using arrow keys to get earlier and later commands.  The Left-arrow and Right-arrow keys let you move around inside a single command.  Combined with the Backspace key, these let you change parts of the command and turn it into a new one.  Each time you press the Enter key, you submit the modified command to the terminal and it runs exactly as if you had typed it from scratch.

# Pure Data

Pure Data (or Pd) is a real-time graphical programming environment for audio, video, and graphical processing. Pure Data is commonly used for live music performance, VeeJaying, sound effects, composition, audio analysis, interfacing with sensors, using cameras, controlling robots or even interacting with websites.  Because all of these various media are handled as digital data within the program, many fascinating opportunities for cross-synthesis between them exist. Sound can be used to manipulate video, which could then be streamed over the internet to another computer which might analyze that video and use it to control a motor-driven installation.

Programming with Pure Data is a unique interaction that is much closer to the experience of manipulating things in the physical world.  The most basic unit of functionality is a box, and the program is formed by connecting these boxes together into diagrams that both represent the flow of data and actually perform the operations mapped out in the diagram.  The program itself is always running: there is no separation between writing the program and running it, and each action takes effect the moment it is completed.

The community of users and programmers around Pure Data has created additional functions (called "externals" or "external libraries") which are used for a wide variety of other purposes, such as video processing, the playback and streaming of MP3s or QuickTime video, the manipulation and display of 3-dimensional objects, and the modeling of virtual physical objects. Just about any kind of programming is feasible using Pure Data, as long as there are external libraries which provide the most basic units of functionality required.

The core of Pure Data is written and maintained by Miller S. Puckette (http://crca.ucsd.edu/~msp/) and includes the work of many developers (http://www.puredata.org/), making the whole package very much a community effort. Pd runs on GNU/Linux, Windows, and Mac OS X, as well as mobile platforms like Maemo, iPhoneOS, and Android.

# WordPress and Communities

WordPress is a good entry point into the many possible online communities. It is easy to use and it comes from a history of blogging. Bloggers were pioneers in increasing the interaction between writers on the internet. In this chapter we look at some of the things that online communities need to thrive and how you can use WordPress to achieve them.

## Low barriers to entry

Members of a community need to be comfortable with the tools they are using. They also need to be able to share knowledge about how these tools work. WordPress offers this in a number of different ways.

### Free WordPress sites are great for testing and evaluation

Unlike some other Content Management Systems (CMS) which require you to set them up on a server, you can sign up for a WordPress blog in many places for free - wordpress.com being the most popular - and it only takes a couple of minutes. This allows you to try WordPress out to see if it works for you. There is easy-to-use export and import functionality for content, so you can always migrate to your own site later if you wish.

### Intuitive and standard user interface

WordPress has an accessible user interface; the forms and tips are fairly intuitive and it certainly compares very favorably with other similar open source tools. There are more 'advanced' community website systems out there which offer more flexibility, however they bring the disadvantage that they are configured in different ways. Having a standard makes it easier to write and provide documentation for WordPress users.

### Tools to build community and interaction

WordPress draws on the traditions of bloggers and indeed the very beginnings of the web. By default, WordPress includes comments, pingbacks, blog lists, and links to other related sites. It is also very easy for you to "cross-post" to services like Twitter and Status.net. This means that a post you write can be automatically posted to other communities.

## Staying independent and in control

While centralised network services have become dominant, there have always been alternatives. WordPress offers many of the advantages of social media, without the disadvantages of the 'walled-gardens' of centralised services.

Although WordPress is built on free software, the installation of it at WordPress.com can be seen as a centralized network service. The "Open Web" is built on a decentralized approach to hosting and as such has inbuilt resilience to censorship. Reliance on huge network services as the arbiters of free speech is a very weak position. Such services come under daily pressures from authorities for disclosure. As such, they often opt for an easy life by handing over personal details and suspending accounts, websites and blogs with little in the way of an appeal process.

By setting up your own WordPress installation and publishing your content there, you have a big advantage over using a hosted service: you remove the risk involved when relying on a network service that is out of your hands. A community made up of people sharing their content from independent nodes is extremely robust.

## Why create a WordPress Network

A WordPress Network is a way of setting up your WordPress installation so that many different blogs can be hosted on it. The website at wordpress.com is one example. Rather than setting up many different sites and maintaining them separately, creating a WordPress Network allows the creation of hundreds of sites with only one set of code to keep up to date.

With only limited webhosting skills, you can offer independently hosted blogs to members of your community. Let's look at how and why you might want to set up a network, and explore some tools and tactics to help foster communities.

Whether your community is sharing technical discussions, community matters, or political opinions, every community benefits from independence and robustness. A community that is in control of its means of content production is a good thing.

Even if you are managing a small network of, for example, 20 church websites, if you choose to run it as a WordPress network instead of using WordPress.com then you are helping the resilience of the Open Web. If you run this network well, its users will be able to interact without using more centralized services like Facebook.

Part of this chapter is adapted from a chapter in a book about the Open Web, which goes into more depth on these issues: http://en.flossmanuals.net/an-open-web/

### Technical considerations of running a network

WordPress is free software: you can download it and install it on your own server. As such, you are not bound by the take-down and privacy policies of WordPress.com. If you have website creation skills, it is relatively easy to install a WordPress network. This allows you to host many blogs, install extra functionality for them, and makes it easy to keep the software updated. WordPress blogs are a great entry point into the social media maze, as they have RSS feeds, publicly vetted APIs and useful plugins to allow cross-posting. With the BuddyPress functionality you can also create a very usable social network.

You can anonymize blogs and services by not logging IP addresses. The process of not logging IP addresses on a server using Apache is relatively simple - you can use the removeip Apache module. Rather than trying to remove all logs of IP addresses, it replaces them with an arbitrary IP number.
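As a rough sketch of what this can look like in practice on a Debian-style server (the package and command names below are assumptions for that setup; check what your distribution actually provides):

```
# Assumed Debian/Ubuntu package and module names for mod_removeip
sudo apt-get install libapache2-mod-removeip
sudo a2enmod removeip
sudo service apache2 restart

# After restarting, Apache's access logs record a placeholder
# address instead of each visitor's real IP.
```

The advantage of this approach over editing the log format is that every logging directive on the server is covered at once, without having to audit each virtual host.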

### Avoid the cloud

Hosting your blogs or networks in the "cloud" may offer technical advantages, but it may come at the cost of reducing your control over your resources.2  Not all cloud computing is bad; many independent hosting companies are taking advantage of the open source Open Stack approach to cloud computing.3

If possible, choose a smaller provider who can offer more support and options. If they receive a take-down order, they may be able to discuss this with you to help develop a joint approach to the problem and may help you fight your corner.

### Be responsible

Don't over-reach yourself. If you are an individual or a small group setting up a network to help people escape from services like Facebook, try out a limited service first with a small number of essential plugins and well-tested themes. You can always add more later, when users request them.

Work with others as much as possible. If you alone set up a network of hundreds of sites, even with the best will in the world, you are in a very weak position. You may lose interest, get a full time job, go a bit mad or have new personal commitments that mean you are unable to maintain the network. If you are part of a team then others can share the workload as well as bring their specific skills to the project.

### Create your own Acceptable Usage Policy

Take time as a group to make sure you are agreed on what/who you are prepared to host or not. Then make this agreement public as your AUP (Acceptable Usage Policy). You may want to support free speech, but that does not mean that you have to support everyone. The internet is a big place, and those you turn away will find a home somewhere else.

Create a clear (and perhaps automated) process for applying for a website or blog where people have to agree to these terms. You should also develop a clear and fair way of taking down websites that you no longer feel you can support, and suggest alternative hosting options for users you have to disconnect.

### Create good help resources

You should try to ensure that when your users have signed up for their site they get a welcome message which directs them towards a place where they can get help on using the site. You can create a help page on your network which signposts people to existing help, and add other help that is specific to the set-up of your network. This can point people towards forums and email lists where their questions may have already been answered.

There is a lot of good help already out there that you can link to, and much of it is published under an open license, which means that you can adapt it for your site. There are also video tutorials for people who learn best that way. Seek them out and put them on the support pages of your network.

For an example of a help page for a network you can see one here: https://network23.org/help-and-faq

### Encourage interaction between network sites

One of the things about social networking that makes it hard to beat is the immediate response from a community that users can quickly build up. Facebook works hard to make it easy to make links with people, share information with them and comment on each others' content. This is not as intuitive in a WordPress network, but there are things that can be done.

BuddyPress is a plugin which is specifically set up to try to achieve this aim; it alters the front page of your network site to show recently posted content and a list of users. You can also enable the use of site-wide tags which create pages listing content tagged with the same term across the whole network of users.4  See later chapters on how to set up and use BuddyPress and site-wide tags.

1. API stands for "Application Programming Interface"; it is used by developers when they are writing code to extend an application.
2. http://en.flossmanuals.net/an-open-web/introduction-excisions/
3. http://openstack.org/
4. https://network23.org/tags/tag/cuts/

# Pure Data

Pure Data (or Pd) is a real-time graphical programming environment for audio, video, and graphical processing. Pure Data is commonly used for live music performance, VeeJaying, sound effects, composition, audio analysis, interfacing with sensors, using cameras, controlling robots or even interacting with websites.  Because all of these various media are handled as digital data within the program, many fascinating opportunities for cross-synthesis between them exist. Sound can be used to manipulate video, which could then be streamed over the internet to another computer which might analyze that video and use it to control a motor-driven installation.

Programming with Pure Data is a unique interaction that is much closer to the experience of manipulating things in the physical world.  The most basic unit of functionality is a box, and the program is formed by connecting these boxes together into diagrams that both represent the flow of data while actually performing the operations mapped out in the diagram.  The program itself is always running, there is no separation between writing the program and running the program, and each action takes effect the moment it is completed.

The community of users and programmers around Pure Data have created additional functions (called "externals" or "external libraries") which are used for a wide variety of other purposes, such as video processing, the playback and streaming of MP3s or Quicktime video, the manipulation and display of 3-dimensional objects and the modeling of virtual physical objects. There is a wide range of external libraries available which give Pure Data additional features. Just about any kind of programming is feasible using Pure Data as long as there are external libraries which provide the most basic units of functionality required.

The core of Pure Data is written and maintained by Miller S. Puckette (http://crca.ucsd.edu/~msp/) and includes the work of many developers (http://www.puredata.org/), making the whole package very much a community effort. Pd runs on GNU/Linux, Windows, and Mac OS X, as well as mobile platforms like Maemo, iPhoneOS, and Android.

# Audacity

The Audacity program is an example of an 'audio editor', which means Audacity can record and edit audio. Typically, one uses Audacity for recording sounds, like interviews or musical instruments. You can then use Audacity to combine these sounds and edit them to make documentaries, music, podcasts, etc.

In the old days, audio editing was done with huge machines that recorded sound to tape (similar to the tape in tape cassettes).

Audio engineers would then edit these tapes using razor blades and sticky tape. Much of the jargon used in audio editing today comes from this process. Making a "cut" meant literally cutting the audio tape at a certain point. "Multitrack" referred to recording many separate sounds onto extra wide tape to fit more 'tracks'. The recording industry still uses these terms, and more, today. Many of the fundamental techniques which formed good audio recording and editing practices then laid the foundation for recording and editing software.

While many of the terms and techniques remain the same today, computers replaced tape machines, and digital files succeeded tapes. Hence, one records audio and edits with a computer (using software like Audacity), and stores these sounds in files on a computer. This makes the process faster and requires a lot less physical storage space.

Audacity is a powerful tool for recording and editing audio on a home computer. It is a very sophisticated program and can do everything one would expect from a modern audio editor. Audacity perhaps falls short of meeting the needs of professional recording studios, but not by much.

One can install and run Audacity on Linux, Mac OS X and Windows.

# Installing Audacity on Windows

Software name : Audacity
Homepage : http://audacity.sourceforge.net
Software version used for this installation : 1.2.6
Operating System used for this installation : Microsoft Windows XP
Recommended Hardware :

| Windows version | Recommended RAM / processor speed | Minimum RAM / processor speed |
|---|---|---|
| Windows 98, ME | 128 MB / 500 MHz | 64 MB / 300 MHz |
| Windows 2000, XP | 512 MB / 1 GHz | 128 MB / 300 MHz |
| Windows Vista Home Basic | 2 GB / 1 GHz | 512 MB / 1 GHz |
| Windows Vista Home Premium/Business/Ultimate | 4 GB / 2 GHz | 1 GB / 1 GHz |

## Downloading Audacity

The latest stable version of Audacity for Microsoft Windows can be downloaded from http://audacity.sourceforge.net/download/windows.  The latest stable version at the time of writing this document is Audacity 1.2.6.

Click on the "Audacity 1.2.6 installer" link. This will take you to a download page. This page lists locations around the world where the software can be downloaded. The idea is that downloads can be faster if they come from a place near you. To begin downloading, click on the download link of the location nearest you.

Once the download is complete you should see a downloads window like this:

or something like this on your desktop:

Open this file to begin installing Audacity by double clicking on the icon or clicking on "Open" on the downloads window.

You should now see a Setup Wizard like this:

Click "Next" to proceed.

The next step asks you to read and accept the License Agreement.

You cannot continue with the installation until you have accepted the agreement, so click on the radio button labelled "I accept the agreement" and then click "Next" to continue.

You should now see an Information window like this:

This window contains information such as credits and a changelog that you may find useful to read. Click "Next" to continue.

You will now be prompted to select the folder Audacity will be installed into.

The Setup Wizard will automatically create a folder called "Audacity" in your "Program Files" folder so unless you want to install it somewhere else you can simply click "Next" to continue. If you wish to install Audacity somewhere else click "Browse".

You will now be asked to select which additional tasks you would like Setup to perform during the installation.

Click the check boxes to select or deselect the additional tasks then click "Next" to continue.

You will now see a window displaying the destination location and additional tasks.

Check that this information is all correct and, if it is, click "Install" to continue. Click "Back" if you wish to change any of the installation settings.

You should now see a window like this:

Click "Finish" to complete the installation. If the check box labelled "Launch Audacity" is ticked Audacity will open straight away.

Installation is now complete. If you didn't choose "Launch Audacity" in the options above then you can launch Audacity by double clicking the following icon in the Audacity folder:

The first time you launch Audacity you will be prompted to select which language you want it to use.

Click on the dropdown menu to select the language you want.

Once you have chosen the language you want, click on "OK" to finish launching Audacity.

The Audacity interface should look like this:

That's it! You now have Audacity up and running and can begin making and editing recordings.

You may need to install an extra library to be able to export a file as an mp3.

# Mp3 Capability Installation

Software name : Audacity
Software version : 1.3

By default Audacity can play and edit mp3 files. However, it can't export a project as an mp3 file unless you install an extra piece of software called a 'library'.

This sounds complicated but what it really involves is downloading the right file and then linking to it in Audacity.

The file we need is called the 'Lame Mp3 Library'. To download it, point your browser to the following page: http://lame.buanzo.com.ar/

You should see the following text or something similar.

You should click on the link to 'Lame x for Audacity on Windows.exe' and download that file to your computer.

Click on Save File when you see the above dialogue box, and choose where you want to save your file.

When you have downloaded the file, find it on your computer and double click it. This will start the installation process.

Click OK when asked 'Open Executable File?'

Click on Next for the following steps and then when you are asked to choose a folder to install the Lame library in keep this as the default.

Click on Install.

Click on Finish.

When you come to use Audacity and need to export a file, if you haven't already told Audacity where the Mp3 Encoder is located it should prompt you to locate it.

It's best to do this now while you remember where it is installed. So open up Audacity and, with a small test project open, click on File > Export.

Fill in some test entries for the Metadata and then click OK on the following screen.

Choose a file name and a place to save it. Then select mp3 from the drop down menu.

If you are prompted to locate the Lame Library then you can locate this in the location you saved it in - by default in Windows this would be C:\Program Files\Lame for Audacity

You may not need to do this as from version 1.3 Audacity will scan your hard drive looking for the relevant file. If it succeeds then it will save your file without asking you anything.

When you save the mp3 you have the option to change the bit rate by clicking on the Options box.

You can see more on this in other chapters about Exporting.

# Track Area

Software name : Audacity
Software version : 1.2

In Audacity, a channel of sound is represented by one mono audio track, a two channel sound by one stereo audio track.  The example below is a stereo track :

Let's look at some of the controls available to you from this interface :

| Option | Action |
|---|---|
| Name | edit the name of the track |
| Move Track Up/Down | move the track up or down in the display |
| Waveform | traditional display of audio material |
| Waveform (dB) | like Waveform, but with logarithmic instead of linear vertical units |
| Spectrum | displays the frequency spectrum of the audio over time |
| Pitch (EAC) | tries to detect the pitch of the current audio and displays that information over time |
| Mono | set playback of this single channel track on the left and right channels |
| Left Channel | set playback of this single channel track on the left channel |
| Right Channel | set playback of this single channel track on the right channel |
| Make Stereo Track | the selected track and the one beneath it are turned into one stereo track |
| Split Stereo Track | turn one stereo track into two single channel tracks |
| Set Sample Format | pick the sample format for this track |
| Set Rate | set the sample rate of this track |

## Solo and Mute Mode

In solo mode, only tracks that have the solo button activated are played.

With mute, a track is switched off without deleting it.

## Gain and Pan Controls

This slider sets the panning position of the track in the stereo field.

This slider controls the track volume, or rather the overall gain of that particular track.

# Menu Bar

Software name : Audacity
Software version : 1.2

Let's look at the basic elements of the Audacity Menu Bar :

The Menu Bar is a typical element in many applications. It will look slightly different from this if you are not using Linux; most notably, in Mac OS X the Menu Bar is not located on the application window itself but at the top of the screen, in the "Apple Menu". Let's go through the Menu Bar one item at a time.

## File

By clicking on "File" in the Menu Bar you get a drop down menu with several options to choose from. Some options may be "greyed out", meaning you can't select them; you will only be able to choose the options that appear in solid black. The options available depend on the state of Audacity at the time. For example, the following image was taken when the program had just been opened and no recording or editing had been started:

The File Menu is where you can process all the things related to the audio and project (.aup) files.

| Option | Action |
|---|---|
| New | creates a new empty project window |
| Open... | presents you with a dialog where you can choose a file to open |
| Close | closes the current project window |
| Save Project | saves the current Audacity project (AUP) file |
| Save Project As... | allows you to save the current Audacity project (AUP) file with a different name or in a new location |
| Recent Files... | gives a list of recent files you have been working on |
| Export As WAV... | exports the current Audacity project as a standard audio file format such as WAV or AIFF |
| Export Selection As WAV... | the same as Export, but it only exports the part of the project that is selected |
| Export As MP3... | exports the current Audacity project as an MP3 file |
| Export Selection As MP3... | the same as Export As MP3, but it only exports the part of the project that is selected |
| Export As OGG... | exports the current Audacity project as an Ogg Vorbis file |
| Export Selection As OGG... | the same as Export As OGG, but it only exports the part of the project that is selected |
| Export Labels... | if you have any Label Tracks, this command will export them as a text file; this feature is commonly used in speech recognition |
| Export Multiple... | allows you to do multiple exports from Audacity |
| Exit/Quit | closes all project windows and exits Audacity, asking if you want to save changes |

## Edit Menu

The Edit Menu is only accessible when you are editing an audio file.

| Option | Action |
|---|---|
| Undo | undoes the last editing operation you performed on your project |
| Redo | redoes any editing operations that were just undone |
| Cut | removes the selected audio data and places it on the clipboard |
| Copy | copies the selected audio data to the clipboard without removing it from the project |
| Paste | inserts whatever is on the clipboard at the position of the selection cursor in the project |
| Trim | deletes everything but the selection |
| Delete | removes the audio data that is currently selected without copying it to the clipboard |
| Silence | erases the audio data currently selected, replacing it with silence |
| Split | moves the selected region into its own track or tracks |
| Duplicate | makes a copy of all or part of a track or set of tracks into new tracks |
| Select... | selects part of the audio depending on the option chosen |
| Find Zero Crossings | moves the cursor or the edges of the selection to the nearest point where the audio waveform passes through zero |
| Selection Save | saves the current selection and position |
| Selection Restore | restores the selection to the project |
| Move Cursor... | quick and accurate ways to move the cursor around the project to the start and end of tracks and selections |
| Snap-To... | turns snapping of the cursor to a grid of time values on or off |
| Preferences | opens a dialog window that lets you configure Audacity |

## View

The View Menu is used to manage the display of the tracks ("channels") and various options to show and hide some interface elements :

| Name | Action |
|---|---|
| Zoom In | zooms in on the horizontal axis of the audio, displaying less time |
| Zoom Normal | zooms to the default view, which displays about one inch per second |
| Zoom Out | zooms out, displaying more time |
| Fit in Window | zooms out until the entire project just fits in the window |
| Fit Vertically | adjusts the height of all the tracks until they fit in the project window |
| Zoom to Selection | zooms in until the selected audio fills the width of the screen to show the selection in more detail |
| Set Selection Format | sets the format in which selections are measured at the bottom of the application window |
| History | brings up the History window, which shows all the actions you have performed during the current session |
| Float or Dock Control Toolbar | toggles between displaying the Tool Bar docked at the top of each project window, or in a separate floating window |
| Float or Dock Edit Toolbar | toggles between displaying the Edit Tool Bar docked at the top of each project window, or in a separate floating window |
| Float or Dock Mixer Toolbar | toggles between displaying the Mixer Tool Bar docked at the top of each project window, or in a separate floating window |
| Float or Dock Meter Toolbar | toggles between displaying the Meter Tool Bar docked at the top of each project window, or in a separate floating window |

## Project

The Project Menu is used to add / remove / align tracks in the existing project :

| Name | Action |
|---|---|
| Import Audio... | imports audio into your project |
| Import Labels... | imports Label Tracks (text files) |
| Import MIDI... | imports MIDI files |
| Import Raw Data... | tries to open a file in virtually any format, as long as it is not compressed |
| Edit ID3 Tags... | opens a dialog allowing you to edit the ID3 tags associated with a project, for MP3 exporting |
| Quick Mix | mixes all of the selected tracks down to one or two tracks |
| New Audio Track | creates a new empty audio track |
| New Stereo Track | creates a stereo version of the new audio track above |
| New Label Track | creates a new Label track |
| New Time Track | creates a special track that can be used to speed up and slow down playback over the course of the project |
| Remove Track(s) | removes the selected track or tracks from the project |
| Align Tracks | aligns tracks according to the options chosen |
| Align and move cursor | the same as "Align Tracks", but it is also followed by the "Move Cursor" command (from the Edit Menu) |
| Add Label at Selection | lets you create a new label at the current selection |
| Add Label at Playback Position | like "Add Label at Selection", but the label is added at the current position during playback |

## Generate

The Generate Menu allows you to insert various generated audio elements into a track :

The length of the generated audio is determined by the length of your selection and the position by the left boundary of your selection.  If no selection is made, the default length inserted at the cursor position is 30 seconds.

| Name | Action |
|---|---|
| Silence | inserts silence |
| Tone | inserts a wave of chosen type, frequency and amplitude |
| White Noise | inserts white noise |
| Plugins | there are too many plugins to describe here, experiment! |
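As an aside for the curious, the kind of waveform the Tone generator inserts can be sketched in a few lines of Python (this is an illustration of the underlying maths, not Audacity's own code):

```python
import math

# One second of a 440 Hz sine tone at the CD sample rate
sample_rate = 44100
frequency = 440.0
amplitude = 0.8  # linear amplitude, where 1.0 is full scale

samples = [amplitude * math.sin(2 * math.pi * frequency * n / sample_rate)
           for n in range(sample_rate)]

print(len(samples))            # 44100 samples = 1 second of audio
print(round(max(samples), 2))  # peaks near the chosen amplitude: 0.8
```

Audacity's Tone dialog exposes exactly these three choices: the wave type, the frequency, and the amplitude.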

## Effect

The Effect Menu allows you to apply effects to audio. Note : this menu cannot be accessed while any tracks are in Playback or Record mode.

| Name | Action |
|---|---|
| Amplify | increases or decreases the volume of a track or set of tracks |
| Bass Boost | a smooth filter which can amplify the lower frequencies while leaving most of the other frequencies alone |
| Change Pitch | changes the pitch/frequency of the selected audio without changing the tempo |
| Change Speed | changes the speed of the audio by resampling; making the speed higher will also increase the pitch |
| Change Tempo | changes the tempo (speed) of the audio without changing the pitch |
| Click Removal | designed to remove the annoying clicks on recordings from vinyl records without damaging the rest of the audio |
| Compressor | compresses the dynamic range of the selection so that the loud parts are softer while keeping the volume of the soft parts the same |
| Echo | repeats the audio you have selected again and again, softer each time, with a fixed time delay between each repeat |
| Equalization | boosts or reduces frequencies |
| Fade In | applies a linear fade-in to the selected audio |
| Fade Out | applies a linear fade-out to the selected audio |
| FFT Filter | you define a curve that shows how much louder or quieter each frequency in the signal should be made |
| Invert | flips the audio samples upside-down; this normally does not affect the sound of the audio at all |
| Noise Removal | ideal for removing constant background noise such as fans, tape noise, or hums; it will not work very well for removing talking or music in the background |
| Normalize | allows you to amplify such that the maximum amplitude is a fixed amount, -3 dB |
| Nyquist Prompt | allows you to express arbitrary transformations using a powerful functional programming language (for advanced users) |
| Phaser | the name "Phaser" comes from "Phase Shifter", because it works by combining phase-shifted signals with the original signal |
| Repeat | repeats the selection a certain number of times |
| Reverse | reverses the selected audio |
| Wahwah | just like that guitar sound so popular in the 1970s |
| Plugins | there are too many plugins to describe here; experiment! |

## Analyze

The Analyze Menu gives you many options for measuring your audio :

| Name | Action |
|---|---|
| Plot Spectrum | displays the Power Spectrum of the audio over a selected region |
| Envelope Tracker (Maximum Peak / Maximum RMS / Peak / RMS) | |
| Null Peak Monitor | |
| Silence Finder | marks periods of silence within a selection |

# Tool Bar

Software name : Audacity
Software version : 1.2

The Tool Bars are where you choose tools to directly work on the tracks. There are three main Tool Bars in Audacity :

• Main Tool Bar
• Mixer Tool Bar
• Edit Tool Bar

## Main Tool Bar

Let's look at each button:

| Button | Action |
|---|---|
| Selection Tool | the main tool you use to select audio |
| Envelope Tool | gives you detailed control over how tracks fade in and out |
| Time Shift Tool | allows you to change the positioning of tracks relative to one another in time |
| Zoom Tool | allows you to zoom in or out of a specific part of the audio |
| Draw Tool | enables you to draw into the actual waveforms |
| Skip to Start | places the cursor at the start of the project |
| Play | press the play button to listen to the audio in your project |
| Record | press the record button to record a new track from your computer's sound input device |
| Pause | pauses during playback or recording; press again to unpause |
| Stop | press the stop button or hit the spacebar to stop playback immediately |
| Skip to End | places the cursor at the end of the project |

## Mixer Tool Bar

These sliders control the mixer settings of the soundcard in your system. The selector on the right controls what audio input you wish to use.

### Input Selector

Pick the input source you wish to record from. All these items are exposed by the soundcard driver, so the list of options will vary with different soundcards.

### Output Slider

This is the left hand slider that lets you control the output level of your soundcard. It actually controls the output setting of the soundcard driver.

### Input Slider

This is the right hand slider that lets you control the level of the input selected in the Input Selector. It actually controls the recording level setting of the soundcard driver.

## Edit Tool Bar

All these tools perform the same functions as those accessible through the "Edit" and "View" menus. Let's look at each button individually :

| Button | Action |
|---|---|
| Cut | removes the selected audio data and places it on the clipboard |
| Copy | copies the selected audio data to the clipboard without removing it from the project |
| Paste | inserts whatever is on the clipboard at the position of the selection cursor in the project |
| Trim | deletes everything but the selection |
| Silence | erases the audio data currently selected, replacing it with silence instead of removing it completely |
| Undo | undoes the last editing operation you performed on your project |
| Redo | redoes any editing operations that were just undone |
| Zoom In | zooms in on the horizontal axis of the audio, displaying less time |
| Zoom Out | zooms out, displaying more time |
| Fit Selection | zooms in until the selected audio fills the width of the screen to show the selection in more detail |
| Fit Project | shows the entire project |

# Open a Sound File

You will need to have an audio file available to edit. If you don't have one and you are online then download an MP3 from somewhere. Make sure it's not too big; a 1 minute file is fine. Choose the 'Open' option from the File menu :

You will then be presented with a window where you can browse to the location of the audio file on your computer :

You can see in the above example there are a couple of audio files. I will click on one (06_ice_cake.mp3) :

If I now press OK the file will be imported into Audacity.

Now it's worth noting that Audacity has its own way of storing audio files. These are known as 'Audacity project files'. So when audio is imported into Audacity it is stored in the Audacity format. You cannot then go and edit these files with another audio editor unless you first export the file to another format (for example, to MP3).

Once the import has finished you will see the audio file displayed in the Audacity window :

# Recording a sound

Software name : Audacity
Software version : 1.2

Recording sound with Audacity is very straightforward: you just need a computer with a sound card that has at least a microphone (mic) or line input.

## Getting started

Before making a recording you need to make sure that what you want to record from (the "sound source") is connected to the audio input of your computer's sound card. Once you have done that you can launch Audacity.

### MacOSX

OS X has a unique way of configuring the audio hardware, which is not shared by other operating systems (Windows and Linux), so if you use OS X you will need to make sure that it is set up appropriately. To do this, first open the "Preferences" window by clicking on "Preferences" under "Audacity" in the Menu Bar :

The Preferences window will open and look something like this:

Click on "Audio I/O". "I/O" stands for "Input/Output", so "Audio I/O" means "Audio Input or Output". The Audio I/O preferences page is where you can choose the sound source (audio input) and how you play back the sounds so you can hear them (the output settings). This can turn into a jungle of terms but essentially these things are the same:

• input
• sound source
• audio input
• input device
• recording device

and these are the same :

• output
• playback device
• output device
• sound output

The way you configure the input affects how you will record sounds. The configuration of the output affects how you will play back sounds so you can hear them.

Let's start with the output settings; these are referred to within the "Playback" section. In the "Playback" section use the "Device:" dropdown menu to select the audio output you wish to use. Unless you have another sound card installed, "Built-in Audio" will be the only option available.

The input settings are chosen from the "Recording" section. In the "Recording" section use the "Device:" dropdown menu to select the audio input device you wish to use. Unless you have another sound card installed "Built-in Audio" will be the only option available.

In the "Recording" section use the "Channels" dropdown menu to select the number of channels you wish to use. A "Channel" (also known as a "track") refers to the number of audio signals you wish to use to record or playback. A mono recording uses one audio signal (1 channel), and a stereo recording records two audio signals (2 channels).

Audacity defaults to "1 (mono)" so you can leave it at this if you are recording from a mono audio input. Most microphones are only capable of producing a mono signal.  Select "2 (stereo)" if you are recording from a stereo audio input such as a cassette or mini disc player (or a stereo microphone). It is possible to select up to 16 channels but do not select more than 2 unless you have something other than a 'normal' sound card.

Below the "Playback" and "Recording" sections are three check boxes.

The first check box is not important for this exercise because we are only recording one channel. If you want to listen to the sound as you are recording it you will need to have either "Hardware Playthrough" or "Software Playthrough" ticked. "Hardware Playthrough" lets you hear the sound directly from the input source while "Software Playthrough" lets you hear the sound as it will be when the recording is played back.

Now click on "Quality" to bring up this page of preferences:

For this exercise you only need to worry about the first two settings: Default Sample Rate and Default Sample Format. Unless you really know what you are doing, use the dropdown menus to set Default Sample Rate to "44100 Hz" and Default Sample Format to "16-bit". This will give you CD quality recording.
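To get a feel for what these numbers mean for disk space, the data rate of uncompressed CD-quality audio can be worked out with simple arithmetic (a quick sketch you can run in any shell):

```shell
# CD quality: 44100 samples/sec, 16 bits (2 bytes) per sample, 2 channels.
rate=44100
bytes_per_sample=2
channels=2
bps=$((rate * bytes_per_sample * channels))
echo "$bps bytes per second"          # 176400
echo "$((bps * 60)) bytes per minute" # 10584000, about 10 MB per minute
```

A mono recording at the same settings halves these figures, which is one reason to record in mono when your source is mono anyway.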

Those are the only preferences you need to adjust before beginning to record so click "OK" to save the changes and close the Preferences Window. Audacity remembers these preferences so the next time you go to make a recording you will not have to repeat the steps above unless you wish to make changes.

### Windows and Linux

Windows and Linux use the same kind of controls. First you need to choose the input device. The Mixer Toolbar has three controls, used to set the volume levels of your audio device and choose the input source.

The leftmost slider controls the output volume, the other slider controls the recording volume, and the control on the right lets you choose the input source (such as "Microphone", "Line In", "Audio CD", etc.). You will need to choose "Mic" or "Line In" as one of the inputs. If you are using a microphone choose "Mic". If you are using another audio device (CD Player, Mini disc etc), choose "Line In".

## Testing Audio Levels

Now that you have everything set up and ready to go you can begin the recording process.

Before making the recording it is important to preview the loudest section of the source audio so that you do not end up with a distorted recording.

First you need to switch the input meter on. This can be set in the main interface :

Simply click on the bars above the microphone symbol or click on the arrow next to the microphone symbol and select "Monitor input" like so :

Now play the loudest passage of the audio you are recording and, while doing so, look at the input level meter.

At the loudest point the red bars should be at about -12. You can adjust the input level by moving the slider next to the microphone symbol.

Keep playing back the loudest passage while adjusting the input level until it peaks at about -12. Once you have done that click the "Stop" button :

## Recording

Now you are ready to make your proper recording.

Click the "Record" button,

then play the audio you wish to record. Once the sound source has finished click the "Stop" button.

Your recording is now complete so save it immediately by selecting "Save Project" from the "File" menu.

That's it! Your recording is completed and saved. You can play it back by clicking the "Play" button.

## Troubleshooting - Linux

### Linux :: Host Error?

If you are a Linux user and you see a message similar to "Error Initializing Audio: There was an error initializing the audio i/o layer. You will not be able to play or record audio. Error: Host error." then you may have to try one of the following :

Kill esd

It may be that the esd sound server is running and preventing Audacity from accessing the sound card. You can try running this in a terminal:

```
ps ax | grep esd
```

If you see an output similar to this :

```
5164 ?        Ss     0:00 /usr/bin/esd -terminate -nobeeps -as 1 -spawnfd 18
10352 pts/1    R+     0:00 grep esd
```

Then you can see from the first line that esd is running ("/usr/bin/esd"). To kill the esd sound server you need to type this in a terminal (you need to have the permissions to run the sudo command) :

```
sudo killall esd
```

You will then be prompted for a password; enter your own password, not the superuser password (also known as the "root" or "admin" password). Then try starting Audacity again; hopefully you won't get this error.
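The check-and-kill steps above can be combined into one small snippet. This is just a sketch: it assumes the `pgrep` and `killall` commands are available on your system.

```shell
# Sketch: stop esd only if it is actually running.
# pgrep -x matches the exact process name; killall may require sudo.
if pgrep -x esd > /dev/null; then
    sudo killall esd
    echo "esd stopped"
else
    echo "esd is not running"
fi
```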

Start with aRts

You could also try running Audacity through the aRts sound server ("analog Real time synthesizer"). To do this, quit Audacity if you already have it open and restart it with this command in a terminal:

```
artsdsp audacity
```

Kill aRts

Lastly, you may wish to try starting Audacity after killing the aRts sound server.  You can try this:

```
sudo killall artsd
```
Then try starting Audacity again.

# Add Another Sound File

Software name : Audacity
Software version : 1.2

Audacity enables you to mix multiple sounds together. You will need Audacity open and an audio file already loaded, and then you can add as many new files as you like.

## Adding your new track

Ok, so Audacity should be open in front of you and you will have some audio already loaded. In this example we will be working with a sound file I have opened from my computer, and so my Audacity window looks like this :

Now, we wish to add another sound file. To do this you will need to have another sound piece on your computer ready to go, and you will need to know where this file is located on your computer. Then click on Project and choose 'Import Audio...' :

When you have done this a file browser will open :

In the above example I am very lucky as the file I wish to load is in the directory shown. If the file was not here I would have to use the file browser to locate the file on my computer. To do this you would open directories by double-clicking on the directory icons, or you can go 'up a directory' by clicking on the button with the directory icon and green arrow :

In my case I will click on the 'myfile.ogg' :

I now press OK and the file will be imported. 'Importing' means that the file will be converted into a format that Audacity understands and will appear in the Audacity window as a new track. So when you press 'OK' the importing process begins :

When it is complete the new track can be seen in the Audacity window :

You will notice that in the above example there is a new stereo track added at the bottom of the window. If you don't see this then you might need to scroll down on your Audacity window.

## Note on Playback and Exporting

If you now press the 'Play' Button :

you will hear both tracks playing back at once. If you were now to export this file the tracks would be combined together into one sound file.

# Envelope Tool

The envelope tool is probably the most important tool for Audacity users. It allows you to alter the volume of the sounds in Audacity which is especially important when you are combining ('mixing') several tracks together.

## Open Audacity

You will first have to have Audacity open with more than one track. We will mix two stereo music files together using the Envelope Tool. So my Audacity initially looks like this :

## Activating the Envelope Tool

The Envelope Tool has an icon in the Audacity Tool Bar, it looks like this :

When you click on it there are two parts of the Audacity interface that change, the first is that the Envelope Tool button looks like it has been pressed :

The second is that the tracks are surrounded by a blue line. Before pressing the Envelope Tool a track looks like this :

After pressing it looks like this:

You can see the blue line around the track in the above image. This means the Envelope Tool is activated.

## Alter the volume

The thin blue line actually represents the volume of the track. You can now lower the volume on chosen sections of the track by changing the shape of this blue line. To do this click on the blue line, you will see small white squares appear where you clicked :

Now you can 'grab' the blue line at the point where these squares appear. To show you how this can change the volume of just one part of the audio click on the blue line close to where you first clicked :

Now place your mouse cursor on the top white square on the left side and, while holding the mouse button down, drag the square downwards :

You will notice the area to the left gets smaller (the volume is lowered), and the area to the right gets bigger until it reaches the second set of squares.  If you now play back the track you will hear the volume levels follow the lines you have made.

## Mixing 2 tracks together

Using the Envelope Tool is the secret to mixing two or more tracks together into one sound piece. You can now experiment with the Envelope Tool, playing back the audio so you can hear how the tracks blend together. By the end you might have altered many sections to create a single sound piece :

# Basic Editing with Audacity

Software name : Audacity
Software version : 1.2

As far as audio editing software goes, Audacity is about as easy as it gets. That's not to say it's easy; if you haven't edited audio before then the whole concept can be a little bewildering. However, with a minimum of practice you should be able to make fast work of editing.

Firstly, you will have to have some audio to edit. You can either record some using Audacity, or open an audio file from your computer.

There are some simple methods that form the basis of editing with Audacity. We will look at deleting sections of audio ('cutting') and shifting audio. With these two methods you can already do quite a lot.

## Cutting

You will of course have Audacity open in front of you with an audio file ready to edit. The process of editing requires that you first know your source file (the file you will edit). You need to know where a cut needs to be made so play the audio file and listen for where you want to make your first edit.

Let's assume you have chosen the area to be cut. You now need to select the area by clicking where the cut should start, holding down the mouse button, and dragging the mouse to the end of the area to be cut. If you do this correctly the area to be deleted will be highlighted in grey :

In the above example you can see that I have highlighted the area from one minute (1:00) to one and a half minutes (1:30), a selection of thirty seconds. To delete the selection I can now click on the Edit menu and choose 'Cut' :

When you release the mouse button you will see that the selected area has disappeared and the length of your file will have been reduced.

### Focusing on the area to be cut

If you have just opened an audio file just press the green play button to listen to the entire file :

Once you have listened to it you may wish to take some notes to help you decide which area you wish to delete ('cut'). It is also a good idea to replay the area that you will cut to make sure you are selecting the right area. To do this select the area, as described above, and then press the play button; Audacity will only play back the selected area. This will help you decide if the selected area is actually the audio you wish to delete. If it's not the right area then start again by selecting another area.

If you need to focus closer to the audio to make a 'finer' cut, then press on the magnifying glass icon :

This will enlarge the time scale shown and give you a 'closer' view of the audio. You can 'zoom out' of the audio again by pressing the magnifying glass with the minus sign in it:

## Pasting

If you wish to shift audio from one place to another then you can easily do so with Audacity. First select the area you wish to shift. I will use the same area I used in the cutting example :

Now choose cut like you did in the above example.

The audio will now be cut from the track. Now click on the audio that is left, at the point where you want this audio to be shifted to.

In the above example you can see that I have decided to insert the audio at the 4 minute mark. Now choose the Edit menu and select Paste:

The audio will now be inserted, and if you look at the Audacity window you should see the selected audio in its new place :

Now experiment with cutting and pasting audio!

# Exporting A File

Software name : Audacity
Software version : 1.2

Projects created in Audacity are always saved in Audacity's own unique file format that cannot be opened by most other software. It is therefore necessary to export projects to more common file formats in order to use them with other audio software or media players.

Audacity can export the following formats: AIFF, MP3 and Ogg Vorbis.

AIFF files provide uncompressed CD quality audio so this format should be used if you want to open your Audacity project with other music production software or CD authoring software.

MP3 and Ogg Vorbis are both compressed audio formats, so they have lower sound quality but much smaller file sizes, making them ideal for use in media players. The most important difference between these two formats is that Ogg Vorbis is completely open while MP3 is not. For this reason you will need to download and install the LAME MP3 encoder before you can export in MP3 format.
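To see how large the size difference is, compare one minute of uncompressed CD-quality audio with one minute of a compressed stream at 128 kbps, a typical MP3 or Ogg Vorbis bitrate (rough figures; actual compressed sizes vary with the encoder settings):

```shell
# One minute of uncompressed CD-quality stereo audio:
# 44100 samples/sec x 2 bytes/sample x 2 channels x 60 sec
uncompressed=$((44100 * 2 * 2 * 60))
# One minute at 128 kilobits/sec (divide by 8 for bytes):
compressed=$((128000 / 8 * 60))
echo "uncompressed: $uncompressed bytes" # 10584000
echo "compressed:   $compressed bytes"   # 960000, roughly a tenth the size
```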

To export a file from Audacity you need to have an Audacity file open.  If you do not already have a file open from a recording or editing session then you can open one by pressing the Apple and O keys together or by clicking on "Open" in the Audacity File menu.

You should now see a window like this:

Use this window to browse to the file that you wish to open. Once you have selected the file simply click "Open" to open it. You should now see something a bit like this:

To export the file click "File" then click on the format that you want to export as.

You should now see a window like this:

Use this window to edit the file name and select or create an appropriate folder into which to save the new file. Once you are happy with that simply click on "Save" to begin exporting.

You should now see a window like this:

The time it takes to export the project will depend on the length of the recording and the speed of your computer.

When exporting is complete the above window will disappear. You should now be able to see the file in the folder that you chose to save it into looking something like this:

That's it. You can now enjoy listening to the results of your Audacity project through other audio software or transfer it on to your portable media player.

# Advanced Editing

Software name : Audacity
Software version : 1.3

There are some more advanced editing steps that you can carry out with Audacity. These include adding silence, trimming audio, splitting and joining tracks, and using panning.

## Add Silence

You may want to add a silence to a track for several reasons. If we take the example of a two track project with one track as music and the other track as a voice track then we can imagine a situation where we would want to insert a silence in the voice track for several seconds to allow us to fade up the music. This would work in an advert or introduction for a radio show.

To start with your workspace should look a little like the still below.

Click in the voice track where you want to insert your silence.

When the voice track is selected you will see the whole track go a darker colour.

You are now ready to insert some silence. Do this by selecting Generate > Silence from the menus at the top of the screen.

When choosing the number of seconds of silence to insert, it is easier to over-estimate and delete the excess later than to add more silence afterwards. So I'm going to over-estimate and put in 20 seconds of silence, as shown below.

Enter the number of seconds of silence you want to insert and then click 'OK'. You'll see the silence appear as a flat line on the relevant track of your project.

What we are going to do next is to decrease the volume of the music track while the voice track contains some voice audio.

This is done using a combination of the Time Shift tool and the Envelope tool.

## Time Shift Tool

The Time Shift tool allows you to alter the time location of the audio on a particular track. This is useful when you are arranging audio on different tracks to be placed one after the other in a sequence.

In the example below we have imported two audio files into a project. They both are set to start at 0.00 seconds.

Our goal is to place the music track after the first track. To do this use the mouse to select the Time Shift tool from the toolbar at the top of the workspace.

You can select the track you want and drag it left or right to occupy a new time location.

As you click and drag the second track to the right you may see some yellow guidelines appear to show you the ending points of the other tracks. In the shot above the yellow guideline appeared when the start of the second track matched up with the end of the first one.

You should let go of the mouse when you are happy with the new time location of the track you are shifting.

## Trimming Audio

Trimming audio tracks is useful when you only want to keep one part of the track. It is also different from using the cutting function as it maintains the time location of the part of the track you want to keep.

As an example we are going to trim a music track to only include the first 30 seconds or so. To do this select only the part of the track that you want to keep, using the Selection tool.

When you have selected the part of the audio that you want to keep it should show up in a darker colour. You should then select 'Edit > Trim' .

If all works well then only the part you had selected will still be present on your track.

## Splitting Stereo Tracks

Sometimes when you are recording you may only record one channel of your track correctly. Or there may be another reason that you want to only work with either the left or right channel.

Below shows a track with one channel much louder than the other.

We are going to work on only the left channel (the top one). So we need to select 'Split Stereo Track' from the drop down menu at the left of the track.

We can then delete the Right channel signal, as it has now become a separate track. Click on the X on the left of the track area.

If we play the remaining track then we'll be able to hear that the sound is only coming through one channel. This can be seen in the green signal in the screenshot below.

To adjust this we can select 'Mono' from the drop down menu on the left of the track bar.

This track will now play the same mono audio signal through both channels. You can export it to a stereo track if you need to.

# More Help

For more help with Audacity you can try these avenues:

## Audacity Documentation

You should first look at the very good documentation at the developers' site - http://audacity.sourceforge.net

Also try the Audacity FAQ (Frequently Asked Questions) - http://audacity.sourceforge.net/help/faq

## Online Forums

You can also try searching through the forums for information.

http://audacityteam.org/forum/

The forums contain a lot of postings from users on many topics. You can use the search system to locate topics or just browse the categories. If you don't find what you want then try subscribing to the forums and posting your question to the relevant category.

There are a few things to keep in mind when asking a question in a forum or on a mailing list. First, be as clear as you can with your question and provide any information that you think might help someone to help you. You might, for example, include information about the operating system you are using, or various specifics that relate to what you are trying to achieve. Additionally, it is good practice to post back to the forum or mailing list if you manage to solve your query, with clear information on how you solved the puzzle. This is so that someone else who has the same issue can resolve it using what you have found out. If possible post back to the same thread (discussion topic) so that anyone searching through the forum can follow the discussion, including the solution.

## Mailing Lists

Mailing lists are good places to look through for answers to questions. The subscription information (the archives are also listed on each list's info page) is located here :

You can also subscribe to the mailing lists and ask a question. Please note the suggestions about posting to forums and mailing lists in the above section.

## IRC

IRC is a type of online chat. It is not the easiest system to use if you are not familiar with it, but it is a very good one. There are a variety of IRC clients for all operating systems. The IRC channel for Audacity is where a number of the developers and some 'superusers' are online, so logging into this channel can be useful, but it is very important that you know exactly what you are trying to find out before trying this route. The protocol for using the channel is just to log in and ask your question immediately. Don't try to be too chatty, as you will probably be ignored. It is also preferable to have done some research using the other methods above before trying the channel. The details for the IRC channel are:

• IRC network: freenode
• Channel: #audacity

## Web Search

Searching the web is always useful. If you are looking for help with problems arising from errors reported by the software, then try entering the error text into the search engine. Be sure to edit out any information that doesn't look generic when doing this. Some search engines also enable you to search mailing lists, online groups etc.; this can also provide good results.

# License

All chapters are copyright of the authors (see below). Unless otherwise stated, all chapters in this manual are licensed under the GNU General Public License version 2.

This documentation is free documentation; you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation; either version 2 of the License, or (at your option) any later version.

This documentation is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details.

You should have received a copy of the GNU General Public License along with this documentation; if not, write to the Free Software Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.

## Authors

ADD A NEW TRACK
© adam hyde 2007, 2008
Modifications:
Flosstest Two 2007
TWikiGuest 2010
douglas 2010

ADVANCED EDITING
© mick fuzz 2009
BASIC EDITING
© adam hyde 2007, 2008
Modifications:
TWikiGuest 2010
douglas 2010

CREDITS
© adam hyde 2006, 2007, 2008
WHAT IS DIGITAL AUDIO?
© Anthony Oetzmann 2006
Modifications:
adam hyde 2006, 2007, 2008
Aleksandar Erkalović 2008
mick fuzz 2009
Rafe DiDomenico 2008
Seth Woodworth 2008
TWikiGuest 2010
douglas 2010

CREATING FADES
© adam hyde 2007, 2008
Modifications:
TWikiGuest 2010
Tom Kleen 2008
douglas 2010

EXPORTING
© Adam Willetts 2006
Modifications:
adam hyde 2007, 2008
Peter Shanks 2007
TWikiGuest 2010
douglas 2010

ADDITIONAL HELP
© adam hyde 2006, 2007, 2008
OSX
© Adam Willetts 2006
Modifications:
adam hyde 2006, 2007, 2008
Carla Morris 2008
Maria Inmaculada de la Torre 2009
TWikiGuest 2010
douglas 2010

UBUNTU
© adam hyde 2007, 2008
Modifications:
Maria Inmaculada de la Torre 2009
TWikiGuest 2010
Tomi Toivio 2009
douglas 2010

WINDOWS
© Adam Willetts 2006
Modifications:
adam hyde 2006, 2007, 2008
Carla Morris 2008
Maria Inmaculada de la Torre 2009
mick fuzz 2009
TWikiGuest 2010
douglas 2010

INTRODUCTION
© Adam Willetts 2006
Modifications:
adam hyde 2006, 2007, 2008, 2009
Rafe DiDomenico 2008
TWikiGuest 2010
douglas 2010

MENU BAR
© Anthony Oetzmann 2006
Modifications:
adam hyde 2006, 2007, 2008
TWikiGuest 2010
douglas 2010

MP3 INSTALLATION WINDOWS
© mick fuzz 2009
OPEN (IMPORT) A FILE
© adam hyde 2007, 2008
Modifications:
TWikiGuest 2010
douglas 2010

RECORDING
© Adam Willetts 2006
Modifications:
adam hyde 2006, 2007, 2008
mick fuzz 2009
TWikiGuest 2010
douglas 2010

TOOL BAR
© Anthony Oetzmann 2006
Modifications:
adam hyde 2006, 2007, 2008
Brent Simpson 2008
TWikiGuest 2010
douglas 2010

TRACK BAR
© Anthony Oetzmann 2006
Modifications:
adam hyde 2006, 2007, 2008
TWikiGuest 2010
douglas 2010

Free manuals for free software

## General Public License

Version 2, June 1991

Copyright (C) 1989, 1991 Free Software Foundation, Inc.
51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA

Everyone is permitted to copy and distribute verbatim copies
of this license document, but changing it is not allowed.

Preamble

The licenses for most software are designed to take away your freedom to share and change it. By contrast, the GNU General Public License is intended to guarantee your freedom to share and change free software--to make sure the software is free for all its users. This General Public License applies to most of the Free Software Foundation's software and to any other program whose authors commit to using it. (Some other Free Software Foundation software is covered by the GNU Lesser General Public License instead.) You can apply it to your programs, too.

When we speak of free software, we are referring to freedom, not price. Our General Public Licenses are designed to make sure that you have the freedom to distribute copies of free software (and charge for this service if you wish), that you receive source code or can get it if you want it, that you can change the software or use pieces of it in new free programs; and that you know you can do these things.

To protect your rights, we need to make restrictions that forbid anyone to deny you these rights or to ask you to surrender the rights. These restrictions translate to certain responsibilities for you if you distribute copies of the software, or if you modify it.

For example, if you distribute copies of such a program, whether gratis or for a fee, you must give the recipients all the rights that you have. You must make sure that they, too, receive or can get the source code. And you must show them these terms so they know their rights.

We protect your rights with two steps: (1) copyright the software, and (2) offer you this license which gives you legal permission to copy, distribute and/or modify the software.

Also, for each author's protection and ours, we want to make certain that everyone understands that there is no warranty for this free software. If the software is modified by someone else and passed on, we want its recipients to know that what they have is not the original, so that any problems introduced by others will not reflect on the original authors' reputations.

Finally, any free program is threatened constantly by software patents. We wish to avoid the danger that redistributors of a free program will individually obtain patent licenses, in effect making the program proprietary. To prevent this, we have made it clear that any patent must be licensed for everyone's free use or not licensed at all.

The precise terms and conditions for copying, distribution and modification follow.

TERMS AND CONDITIONS FOR COPYING, DISTRIBUTION AND MODIFICATION

0. This License applies to any program or other work which contains a notice placed by the copyright holder saying it may be distributed under the terms of this General Public License. The "Program", below, refers to any such program or work, and a "work based on the Program" means either the Program or any derivative work under copyright law: that is to say, a work containing the Program or a portion of it, either verbatim or with modifications and/or translated into another language. (Hereinafter, translation is included without limitation in the term "modification".) Each licensee is addressed as "you".

Activities other than copying, distribution and modification are not covered by this License; they are outside its scope. The act of running the Program is not restricted, and the output from the Program is covered only if its contents constitute a work based on the Program (independent of having been made by running the Program). Whether that is true depends on what the Program does.

1. You may copy and distribute verbatim copies of the Program's source code as you receive it, in any medium, provided that you conspicuously and appropriately publish on each copy an appropriate copyright notice and disclaimer of warranty; keep intact all the notices that refer to this License and to the absence of any warranty; and give any other recipients of the Program a copy of this License along with the Program.

You may charge a fee for the physical act of transferring a copy, and you may at your option offer warranty protection in exchange for a fee.

2. You may modify your copy or copies of the Program or any portion of it, thus forming a work based on the Program, and copy and distribute such modifications or work under the terms of Section 1 above, provided that you also meet all of these conditions:

a) You must cause the modified files to carry prominent notices stating that you changed the files and the date of any change.

b) You must cause any work that you distribute or publish, that in whole or in part contains or is derived from the Program or any part thereof, to be licensed as a whole at no charge to all third parties under the terms of this License.

c) If the modified program normally reads commands interactively when run, you must cause it, when started running for such interactive use in the most ordinary way, to print or display an announcement including an appropriate copyright notice and a notice that there is no warranty (or else, saying that you provide a warranty) and that users may redistribute the program under these conditions, and telling the user how to view a copy of this License. (Exception: if the Program itself is interactive but does not normally print such an announcement, your work based on the Program is not required to print an announcement.)

These requirements apply to the modified work as a whole. If identifiable sections of that work are not derived from the Program, and can be reasonably considered independent and separate works in themselves, then this License, and its terms, do not apply to those sections when you distribute them as separate works. But when you distribute the same sections as part of a whole which is a work based on the Program, the distribution of the whole must be on the terms of this License, whose permissions for other licensees extend to the entire whole, and thus to each and every part regardless of who wrote it.

Thus, it is not the intent of this section to claim rights or contest your rights to work written entirely by you; rather, the intent is to exercise the right to control the distribution of derivative or collective works based on the Program.

In addition, mere aggregation of another work not based on the Program with the Program (or with a work based on the Program) on a volume of a storage or distribution medium does not bring the other work under the scope of this License.

3. You may copy and distribute the Program (or a work based on it, under Section 2) in object code or executable form under the terms of Sections 1 and 2 above provided that you also do one of the following:

a) Accompany it with the complete corresponding machine-readable source code, which must be distributed under the terms of Sections 1 and 2 above on a medium customarily used for software interchange; or,

b) Accompany it with a written offer, valid for at least three years, to give any third party, for a charge no more than your cost of physically performing source distribution, a complete machine-readable copy of the corresponding source code, to be distributed under the terms of Sections 1 and 2 above on a medium customarily used for software interchange; or,

c) Accompany it with the information you received as to the offer to distribute corresponding source code. (This alternative is allowed only for noncommercial distribution and only if you received the program in object code or executable form with such an offer, in accord with Subsection b above.)

The source code for a work means the preferred form of the work for making modifications to it. For an executable work, complete source code means all the source code for all modules it contains, plus any associated interface definition files, plus the scripts used to control compilation and installation of the executable. However, as a special exception, the source code distributed need not include anything that is normally distributed (in either source or binary form) with the major components (compiler, kernel, and so on) of the operating system on which the executable runs, unless that component itself accompanies the executable.

If distribution of executable or object code is made by offering access to copy from a designated place, then offering equivalent access to copy the source code from the same place counts as distribution of the source code, even though third parties are not compelled to copy the source along with the object code.

4. You may not copy, modify, sublicense, or distribute the Program except as expressly provided under this License. Any attempt otherwise to copy, modify, sublicense or distribute the Program is void, and will automatically terminate your rights under this License. However, parties who have received copies, or rights, from you under this License will not have their licenses terminated so long as such parties remain in full compliance.

5. You are not required to accept this License, since you have not signed it. However, nothing else grants you permission to modify or distribute the Program or its derivative works. These actions are prohibited by law if you do not accept this License. Therefore, by modifying or distributing the Program (or any work based on the Program), you indicate your acceptance of this License to do so, and all its terms and conditions for copying, distributing or modifying the Program or works based on it.

6. Each time you redistribute the Program (or any work based on the Program), the recipient automatically receives a license from the original licensor to copy, distribute or modify the Program subject to these terms and conditions. You may not impose any further restrictions on the recipients' exercise of the rights granted herein. You are not responsible for enforcing compliance by third parties to this License.

7. If, as a consequence of a court judgment or allegation of patent infringement or for any other reason (not limited to patent issues), conditions are imposed on you (whether by court order, agreement or otherwise) that contradict the conditions of this License, they do not excuse you from the conditions of this License. If you cannot distribute so as to satisfy simultaneously your obligations under this License and any other pertinent obligations, then as a consequence you may not distribute the Program at all. For example, if a patent license would not permit royalty-free redistribution of the Program by all those who receive copies directly or indirectly through you, then the only way you could satisfy both it and this License would be to refrain entirely from distribution of the Program.

If any portion of this section is held invalid or unenforceable under any particular circumstance, the balance of the section is intended to apply and the section as a whole is intended to apply in other circumstances.

It is not the purpose of this section to induce you to infringe any patents or other property right claims or to contest validity of any such claims; this section has the sole purpose of protecting the integrity of the free software distribution system, which is implemented by public license practices. Many people have made generous contributions to the wide range of software distributed through that system in reliance on consistent application of that system; it is up to the author/donor to decide if he or she is willing to distribute software through any other system and a licensee cannot impose that choice.

This section is intended to make thoroughly clear what is believed to be a consequence of the rest of this License.

8. If the distribution and/or use of the Program is restricted in certain countries either by patents or by copyrighted interfaces, the original copyright holder who places the Program under this License may add an explicit geographical distribution limitation excluding those countries, so that distribution is permitted only in or among countries not thus excluded. In such case, this License incorporates the limitation as if written in the body of this License.

9. The Free Software Foundation may publish revised and/or new versions of the General Public License from time to time. Such new versions will be similar in spirit to the present version, but may differ in detail to address new problems or concerns.

Each version is given a distinguishing version number. If the Program specifies a version number of this License which applies to it and "any later version", you have the option of following the terms and conditions either of that version or of any later version published by the Free Software Foundation. If the Program does not specify a version number of this License, you may choose any version ever published by the Free Software Foundation.

10. If you wish to incorporate parts of the Program into other free programs whose distribution conditions are different, write to the author to ask for permission. For software which is copyrighted by the Free Software Foundation, write to the Free Software Foundation; we sometimes make exceptions for this. Our decision will be guided by the two goals of preserving the free status of all derivatives of our free software and of promoting the sharing and reuse of software generally.

NO WARRANTY

11. BECAUSE THE PROGRAM IS LICENSED FREE OF CHARGE, THERE IS NO WARRANTY FOR THE PROGRAM, TO THE EXTENT PERMITTED BY APPLICABLE LAW. EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT HOLDERS AND/OR OTHER PARTIES PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE. THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE PROGRAM IS WITH YOU. SHOULD THE PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF ALL NECESSARY SERVICING, REPAIR OR CORRECTION.

12. IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MAY MODIFY AND/OR REDISTRIBUTE THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES, INCLUDING ANY GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING OUT OF THE USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED TO LOSS OF DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY YOU OR THIRD PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER PROGRAMS), EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGES.

END OF TERMS AND CONDITIONS

# SSH

The command line is such a useful tool that it won't be long before you need to have access to the command line on a computer that is not sitting in front of you.  In the old days, before security was a concern, people used `telnet` to get a command line on a remote computer.  For most purposes, `telnet` is no longer a good idea, because data is transmitted in a raw, unencrypted format.  The standard secure way to gain access to a command line on a remote computer is via `ssh` (secure shell).  The simplest invocation of the command is

```\$ ssh othermachine.domain.org
```

This command assumes that your username on the remote machine is the same as your username on the local machine at which you type the command.  The remote machine prompts you for your password.  If your username on the remote machine is different from your username on the local machine, use the `-l` (lower-case "L") option to indicate your username on the remote machine.

```\$ ssh -l remoteusername othermachine.domain.org
```

Alternatively, you can use email-style notation to indicate a different username.

```\$ ssh remoteusername@othermachine.domain.org
```

So far, all these commands display a command line on the remote machine from which you can then execute whatever commands that machine provides to you.  Sometimes you may want to execute a single command on a remote machine, returning afterward to the command line on your local machine. This can be achieved by placing the command to be executed by the remote machine in single quotes.

```\$ ssh remoteusername@othermachine.domain.org 'mkdir /home/myname/newdir'
```

Sometimes you need to execute time-consuming commands on a remote machine, but you aren't sure you will have enough time during your current `ssh` session.  If you close the remote connection before a command has finished executing, that command will be aborted.  To avoid losing your work, you can start a remote `screen` session via `ssh`, then detach from it and reconnect to it whenever you want.  To detach a remote `screen` session, simply close the `ssh` connection: a detached `screen` session will remain running on the remote machine.

`ssh` offers many other options, which are described on the manual page. You can also set up your favorite systems to allow you to log in or run commands without specifying your password each time. The setup is complicated but can save you a lot of typing; try doing some Web searches for "ssh-keygen", "ssh-add", and "authorized_keys".

## scp: file copying

The SSH protocol extends beyond the basic `ssh` command.  A particularly useful command based on the SSH protocol is `scp`, the secure copy command.  The following example copies a file from the current directory on your local machine to the directory /home/me/stuff on a remote machine.

```\$ scp myprog.py me@othermachine.domain.org:/home/me/stuff
```

Be warned that the command will overwrite any file that's already present with the name /home/me/stuff/myprog.py. (Or you'll get an error message if there's a file of that name and you don't have the privilege to overwrite it.) If /home/me is your home directory, the target directory can be abbreviated.

```\$ scp myprog.py me@othermachine.domain.org:stuff
```

You can just as easily copy in the other direction: from the remote machine to your local one.

```\$ scp me@othermachine.domain.org:docs/interview.txt yesterday-interview.txt
```

The file on the remote machine is interview.txt in the docs subdirectory of your home directory. The file will be copied to yesterday-interview.txt in the home directory of your local system.

`scp` can be used to copy a file from one remote machine to another.

```\$ scp user1@host1:file1 user2@host2:otherdir
```

To recursively copy all of the files and subdirectories in a directory, use the `-r` option.

```\$ scp -r user1@host1:dir1 user2@host2:dir2
```

See the `scp` man page for more options.

## rsync: automated bulk transfers and backups

`rsync` is a very useful command that keeps a remote directory in sync with a local directory. We mention it here because it's a useful command-line way to do networking, like `ssh`, and because the SSH protocol is recommended as the underlying transmission for `rsync`.

The following is a simple and useful example. It copies files from your local /home/myname/docs directory to a directory named backup/ in your home directory on the system quantum.example.edu. `rsync` actually minimizes the amount of copying necessary through various sophisticated checks.

```\$ rsync -e ssh -a /home/myname/docs me@quantum.example.edu:backup/
```

The `-e ssh` option tells `rsync` to use the SSH protocol underneath for transmission, as recommended. The `-a` option (which stands for "archive") copies everything within the specified directory. If you want files removed from the remote copy when they no longer exist on the local system, include the `--delete` option. See the `rsync` manual page for more details about `rsync`.

## Making life easier when you use SSH often

If you use SSH to connect to a lot of different servers, you will often make mistakes by mistyping usernames or even host names (imagine trying to remember 20 different username/host combinations). Thankfully, SSH offers a simple method to manage session information through a configuration file.

The configuration file is hidden in your home directory under the directory .ssh (the full path would be something like /home/jsmith/.ssh/config -- if this file does not exist you can create it). Use your favorite editor to open this file and specify hosts like this:

```Host dev
HostName example.com
User fc
```

You can set up multiple hosts like this in your configuration file, and after you have saved it, connect to the host you called "dev" by running the following command.

```\$ ssh dev
```

Remember, the more often you use these commands the more time you save.

# Piping hot commands

Pipes let programs work together by connecting the output from one to be the input for another. The term "output" has a precise meaning here:  it is what the program writes to the standard output, via C program statements such as printf or the equivalent, and normally it appears on the terminal screen. And "input" is the standard input, usually coming from the keyboard. Pipes are built using a vertical bar ("|") as the pipe symbol.

Say you help your eccentric Aunt Hortense manage her private book collection. You have a file named books containing a list of her holdings, one per line, in the format "author:title", something like this:

```\$ cat books
Carroll, Lewis:Through the Looking-Glass
Shakespeare, William:Hamlet
Bartlett, John:Familiar Quotations
Mill, John Stuart:On Nature
London, Jack:John Barleycorn
Bunyan, John:Pilgrim's Progress, The
Defoe, Daniel:Robinson Crusoe
Mill, John Stuart:System of Logic, A
Milton, John:Paradise Lost
Johnson, Samuel:Lives of the Poets
Shakespeare, William:Julius Caesar
Mill, John Stuart:On Liberty
Bunyan, John:Saved by Grace
```

This is somewhat untidy, as they are in no particular order. But we can use the `sort` command to straighten that out:

```\$ sort books
Bartlett, John:Familiar Quotations
Bunyan, John:Pilgrim's Progress, The
Bunyan, John:Saved by Grace
Carroll, Lewis:Through the Looking-Glass
Defoe, Daniel:Robinson Crusoe
Johnson, Samuel:Lives of the Poets
London, Jack:John Barleycorn
Mill, John Stuart:On Liberty
Mill, John Stuart:On Nature
Mill, John Stuart:System of Logic, A
Milton, John:Paradise Lost
Shakespeare, William:Hamlet
Shakespeare, William:Julius Caesar
```

Ah, now you have a list nicely sorted by author. How about getting a list just of authors, without titles? You can do that with the `cut` command:

```\$ cut -d: -f1 books
Carroll, Lewis
Shakespeare, William
Bartlett, John
Mill, John Stuart
London, Jack
Bunyan, John
Defoe, Daniel
Mill, John Stuart
Milton, John
Johnson, Samuel
Shakespeare, William
Mill, John Stuart
Bunyan, John
```

A little explanation here. The `-d` option chooses a colon as the delimiter (separator). This tells `cut` to break up each line wherever a delimiter appears; each separate part of the line is called a field. In our format, the author's name appears as the first field, so we have put a 1 with the `-f` option to tell `cut` that we want to see just that field.
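You can watch the field splitting happen on a single line by feeding `cut` some text with `echo` (a quick sketch using one entry from the books file):

```shell
# -d: splits at each colon; -f picks which field to keep
echo "Mill, John Stuart:On Liberty" | cut -d: -f1   # prints: Mill, John Stuart
echo "Mill, John Stuart:On Liberty" | cut -d: -f2   # prints: On Liberty
```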

But you'll notice the list is unsorted again. Pipelines to the rescue!

```\$ sort books | cut -d: -f1
Bartlett, John
Bunyan, John
Bunyan, John
Carroll, Lewis
Defoe, Daniel
Johnson, Samuel
London, Jack
Mill, John Stuart
Mill, John Stuart
Mill, John Stuart
Milton, John
Shakespeare, William
Shakespeare, William
```

Voila! You've taken the alphabetized list, which is the output of the `sort` command, and fed it as input to the `cut` command. Don't give the `cut` command a filename to use, because you want it to operate on the text that's piped out of the `sort` command.

Pipes are just that simple--text flows down the pipe from one command to the next.
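Any command that writes to standard output can feed any command that reads standard input. A tiny sketch, independent of the books file:

```shell
# tr reads from standard input; here it uppercases whatever echo sends it
echo "pipes are simple" | tr 'a-z' 'A-Z'   # prints: PIPES ARE SIMPLE
```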

How about if you wanted a sorted list of titles instead? Since the title is the second field, let's try using `-f2` with the `cut` command instead of `-f1`:

```\$ sort books | cut -d: -f2
Familiar Quotations
Pilgrim's Progress, The
Saved by Grace
Through the Looking-Glass
Robinson Crusoe
Lives of the Poets
John Barleycorn
On Liberty
On Nature
System of Logic, A
Paradise Lost
Hamlet
Julius Caesar
```

Oops. What happened? When looking at a pipeline, you need to go left-to-right. In this case, we sorted the file first before extracting the titles. So it dutifully sorted the lines starting with the author at the beginning of each line. To get the titles in the proper order, you need to do the sort after extracting them:

```\$ cut -d: -f2 books | sort
Familiar Quotations
Hamlet
John Barleycorn
Julius Caesar
Lives of the Poets
On Liberty
On Nature
Paradise Lost
Pilgrim's Progress, The
Robinson Crusoe
Saved by Grace
System of Logic, A
Through the Looking-Glass
```

Much better. Now this is all very nice, but you may be thinking you could have done these things with a spreadsheet. For simpler tasks, this is probably true. But suppose that Aunt Hortense is in the habit of asking odd questions about her collection. For example, she wants to know how many books she has from each author named John. A spreadsheet or other graphical program may have difficulty handling a request that wasn't anticipated by the program's authors. But the shell offers us many small, simple commands that can be combined in unforeseen ways to accomplish a complex task.

To find a particular string in a line of text, use the `grep` command. Now remember that when you combine commands, they need to go in the proper order. You can't run `grep` against the file first, because it will match the title "John Barleycorn" in addition to authors named John. So add it to the end of the pipeline:

```\$ cut -d: -f1 books | sort | grep "John"
Bartlett, John
Bunyan, John
Bunyan, John
Johnson, Samuel
Mill, John Stuart
Mill, John Stuart
Mill, John Stuart
Milton, John
```

This gets us close, but you don't want to get "Samuel Johnson" on the list and make Aunt Hortense angry. Often when working with `grep` you will need to refine the matching text to get exactly what you need. `grep` happens to offer a `-w` option that will let it match "John" only when "John" is a complete word, not when it's part of "Johnson". But we'll solve this particular dilemma by adding a comma and space on the front of the string to match, so it will match only when John is a first name:

```\$ cut -d: -f1 books | sort | grep ", John"
Bartlett, John
Bunyan, John
Bunyan, John
Mill, John Stuart
Mill, John Stuart
Mill, John Stuart
Milton, John
```

Ah, that's better. Now you just need to total up the number of books for each author. A little command called `uniq` will work nicely. It removes duplicate lines (duplicates must be on consecutive lines, so be sure your text is sorted first), and when used with the `-c` option also provides a count:

```\$ cut -d: -f1 books | sort | grep ", John" | uniq -c
1 Bartlett, John
2 Bunyan, John
3 Mill, John Stuart
1 Milton, John
```

And there you are! A nicely sorted list of Johns and the number of books from each. For our example set this is a simple job, one you could even do with pencil and paper. But this very same pipeline can be used to process far more data--it won't blink even if Aunt Hortense has hundreds of thousands of books stored in the barn.
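That warning about sorting first is easy to see with a few throwaway lines; `uniq` only merges duplicates that sit next to each other:

```shell
# Unsorted: the two "b" lines are not adjacent, so uniq counts them separately
printf 'b\na\nb\n' | uniq -c         # three lines of output
# Sorted first: the duplicates become adjacent and are counted together
printf 'b\na\nb\n' | sort | uniq -c  # two lines: "a" once, "b" twice
```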

System administrators often use pipelines like these to deal with log files generated by web and mail servers. Such files can grow to tens or hundreds of megabytes in size, and a command pipeline can be a quick way to generate summary statistics without trying to read through the entire log.

A nice thing about building pipelines is that you can do it one command at a time, seeing exactly what effect each one has on the output. This can help you discover when you might need to tweak options or rearrange the order of commands. For instance, to put the authors in ranking order, you can just add a `sort -nr` to the previous pipeline:

```\$ cut -d: -f1 books | sort | grep ", John" | uniq -c | sort -nr
3 Mill, John Stuart
2 Bunyan, John
1 Milton, John
1 Bartlett, John
```

Experiment!

# Basic commands

By now you have some basic knowledge about directories and files, and you can interact with the command line interface.  Now let's learn some of the commands you'll be using many times each day.

### ls

The first thing you'll likely need to know before you can start creating and changing files is: what's already there?  With a graphical interface you'd do this by opening a folder and inspecting its contents. From the command line you use the program `ls` instead to list a folder's contents.

```\$ ls
Desktop  Documents  Music  Photos
```

By default, `ls` will use a very compact output format. Many terminals show the files and subdirectories in different colors that represent different file types.  Regular files don't have special coloring applied to their names.  Some file types, like JPEG or PNG images, or tar and ZIP files, are usually colored differently, and the same is true for programs that you can run and for directories.  Try `ls` for yourself and compare the icons and emblems your graphical file manager uses with the colors that ls applies on the command line.  If the output isn't colored, you can call `ls` with the option `--color`:

```\$ ls --color
```

### man, info & apropos

You can learn about options and arguments using another program called `man` (`man` is short for manual) like this:

```\$ man ls
```

Here, `man` is being asked to bring up the manual page for `ls`. You can use the arrow keys to scroll up and down in the screen that appears and you can close it using the q key (for quit).

An alternative way to obtain comprehensive user documentation for a given program is to invoke `info` instead of `man`:

```\$ info ls
```

This is particularly effective to learn how to use complex GNU programs.  You can also browse the `info` documentation inside the editor Emacs, which greatly improves its readability.  But you should be ready to take your first step into the larger world of Emacs.  You may do so by invoking:

```\$ emacs -f info-standalone
```

This should display the Info main menu inside Emacs (if this does not work, try invoking `emacs` without arguments and then typing Alt + x info, i.e. press the Alt key, press the x key, release both keys, and finally type info followed by the Return or Enter key).  If you then type m ls, the interactive Info documentation for `ls` will be loaded inside Emacs.  In the standalone mode, the q key quits the documentation, as usual with `man` and `info`.

Ok, now you know how to learn about using programs yourself.  If you don't know what something is or how to use it, the first place to look is its `man`ual and `info`rmation pages.  If you don't know the name of what you want to do, the `apropos` command can help.  Let's say you want to rename files but you don't know what command does that.  Try `apropos` with some word that is related to what you want, like this:

```\$ apropos rename
...
mv (1)               - move (rename) files
prename (1)          - renames multiple files
rename (2)           - change the name or location of a file
...
```

Here, `apropos` searches the manual pages that `man` knows about and prints commands it thinks are related to renaming.  On your computer this command might (and probably will) display more information but it's very likely to include the entries shown.

Note how the program names include a number besides them.  That number is called their section, and most programs that you can use from the command line will be in section 1.  You can pass `apropos` an option to display results from section 1 manuals only, like this:

```\$ apropos -s 1 rename
...
mv (1)               - move (rename) files
prename (1)          - renames multiple files
...
```

At this stage, the section number isn't terribly important.  Just know that section 1 manual pages are the ones that apply to programs you use on the command line.  To see a list of the other sections, look up the manual page for man using `man man`.

### mv

Looking at the results from `apropos`, that `mv` program looks interesting.  You can use it like this:

```\$ mv oldname newname
```

Depending on your system configuration, you may not be warned when renaming a file would overwrite an existing file that happens to be named `newname`.  So, as a safeguard, always use the `-i` option when issuing `mv`, like this:

```\$ mv -i oldname newname
```

Just as the description provided by `apropos` suggests, this program moves files.  If the last argument happens to be an existing directory, `mv` will move the file to that directory instead of renaming it. Because of this, you can provide `mv` more than two arguments:

```\$ mv one_file another_file a_third_file ~/stuff
```

If ~/stuff exists, then `mv` will move the files there.  If it doesn't exist, it will produce an error message, like this:

```\$ mv one_file another_file a_third_file stuff
mv: target 'stuff' is not a directory
```

### mkdir

How do you create a directory, anyway?  Use the `mkdir` command:

```\$ mkdir ~/stuff
```

And how do you remove it?  With the `rmdir` command:

```\$ rmdir ~/stuff
```

If you wish to create a subdirectory (say the directory bar) inside another directory (say the directory foo) but you are not sure whether the latter exists, you can create the subdirectory and (if needed) its parent directory, without raising errors, by typing:

```\$ mkdir -p ~/foo/bar
```

This will work even for nested sub-sub-...-directories.

If the directory you wish to remove is not empty, `rmdir` will produce an error message and will not remove it.  If you want to remove a directory that contains files, you have to empty it first.  To see how this is done, we will need to create a directory and put some files in it first.  These files we can remove safely later.  Let's start by creating a directory called practice in your home and change the current working directory there:

```\$ mkdir ~/practice
\$ cd ~/practice
```

### cp, rm & rmdir

Now let's copy some files there using the program `cp`.  We are going to use some files that are very likely to exist on your computer, so the following commands should work for you:

```\$ cp /etc/fstab /etc/hosts /etc/issue /etc/motd .
\$ ls
fstab  hosts  issue  motd
```

Don't forget the dot at the end of the line!  Remember it means "this directory" and, being the last argument passed to `cp` after a list of files, it is the directory into which to copy them.  If that list is very long, you'd better learn to use globbing (expanding file name patterns containing wildcard characters into sets of existing file names) or some other tricks to avoid wasting time typing file names.  One trick helps when copying the entire contents of a directory: by passing `cp` the option `-R`, you can recursively copy all the files and subdirectories from a given directory to the destination:

```\$ cp -R . ~/foo
\$ ls ~/foo
bar  fstab  hosts  issue  motd
\$ cp -R . ~/foo/bar
\$ ls -R ~/
~/foo:
bar  fstab  hosts  issue  motd

~/foo/bar:
fstab  hosts  issue  motd
```

In this case the current directory has no subdirectories, so only files were copied.  As you can see, the `-R` option can also be passed to `ls` to recursively list the contents of a given directory and of its subdirectories.

Now, if you go back to your home and try to remove the directory called practice, `rmdir` will produce an error message:

```\$ cd ..
\$ rmdir practice
rmdir: failed to remove 'practice': Directory not empty
```

You can use the program `rm` to remove the files first, like this:

```\$ rm practice/fstab practice/hosts practice/issue practice/motd
```

And now you can try removing the directory again:

```\$ rmdir practice
```

And now it works, without showing any output.

But what happens if your directories have directories inside them that also have files?  You could be there for weeks making sure each folder is empty!  The `rm` command solves this problem through the amazing option `-R`, which as usual stands for "recursive".  In the following example, the command fails because foo is not a plain file:

```\$ rm ~/foo/
rm: cannot remove `~/foo/`: Is a directory
```

So maybe you try `rmdir`, but that fails because foo has something else under it:

```\$ rmdir ~/foo
rmdir: ~/foo: Directory not empty
```

So you use `rm -R`, which succeeds and does not produce a message.

```\$ rm -R ~/foo/

```

So when you have a big directory, you don't have to go and empty every subdirectory.

But be warned that `-R` is a very powerful option and you may lose data you wanted to keep!

### cat & less

You don't need an editor just to view the contents of a file; you only need to display it.  The `cat` program fits the bill here:

```\$ cat myspeech.txt
Friends, Romans, Countrymen! Lend me your ears!
```

Here, `cat` just opens the file myspeech.txt and prints the entire file to your screen, as fast as it can.  However, if the file is really long, the contents will go by very quickly, and when `cat` is done, all you will see are the last few lines of the file.  To view the contents of a long file (or any text file) one screenful at a time, you can use the `less` program:

```\$ less myspeech.txt
```

Just as with using `man`, use the arrow keys to navigate, and press q to quit.

# Scripting

If you have a collection of commands you'd like to run together, you can combine them in a script and run them all at once. You can also pass arguments to the script so that it can operate on different files or other input.

Like an actor reading a movie script, the computer runs each command in your shell script, without waiting for you to whisper the next line in its ear. A script is a handy way to:

• Save yourself typing on a group of commands you often run together.
• Remember complicated commands, so you don't have to look up, or risk forgetting, the particular syntax each time you use it.
• Use control structures, like loops and case statements, to allow your scripts to do complex jobs. Writing these structures into a script can make them more convenient to type and easier to read.
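Inside a script, bash makes the arguments available as the positional parameters $1, $2, and so on, with $# holding the count. A minimal sketch (the file name show_args.sh is just an illustration):

```shell
#!/bin/bash
# show_args.sh: print the first argument and how many arguments were passed
echo "first argument: $1"
echo "argument count: $#"
```

Running `bash show_args.sh alpha beta` would print `first argument: alpha` followed by `argument count: 2`.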

Let's say you often have collections of images (say, from a digital camera) that you would like to make thumbnails of. Instead of opening hundreds of images in your image editor, you choose to do the job quickly from the command line. And because you may need to do this same job in the future, you might write a script. This way, the job of making thumbnails takes you only two commands:

```\$ cd images/digital_camera/vacation_pictures_March_2009
\$ make_thumbnails.sh
```

The second command, `make_thumbnails.sh`, is the script that does the job. You have made it previously and it resides in a directory on your search path. It might look something like this:

```#!/bin/bash
# This makes a directory containing thumbnails of all the jpegs in the current dir.
mkdir thumbnails
cp *.jpg thumbnails
cd thumbnails
mogrify -resize 400x300 *.jpg
```

If the first line begins with #! (pronounced "shebang"), it tells the kernel which interpreter is to be used. (Since bash is usually the default, you might omit this line.) After this you should put in some comments about what the script is for and how to use it. Scripts without clear and complete usage instructions often do not "work right". For bash, comments start with the hash (#) character and may appear at the ends of executable lines.

The file includes commands conforming to the syntax of the interpreter. We've seen three of them before: `mkdir`, `cp`, and `cd`. The last command, `mogrify`, is a program that can resize images (and do a lot of other things besides). Read its manual page to learn more about it.

## Making scripts executable

To write a script like the one we've shown, open your favorite text editor and type in the commands you would like to run. For bash, you can put multiple commands on a single line so long as you put a semicolon between them, so the shell knows where a new command starts.
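For instance, these three commands run one after another, left to right, exactly as if each were on its own line (a throwaway sketch using a scratch directory under /tmp):

```shell
# Create a scratch directory, move into it, and show where we are
mkdir -p /tmp/demo; cd /tmp/demo; pwd   # prints the new working directory, /tmp/demo
```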

Save the script. One common convention for bash is to use the `.sh` extension -- for example, make_thumbnails.sh.

There is one more step before you can run the script: it has to be executable. Remember from the section on permissions that executability is one of the permissions a file can have, so you can make your script executable by granting the execute (x) permission. The following command allows any user to execute the script:

```\$ chmod +x make_thumbnails.sh
```

Because you're probably planning to use the script often, you'll find it worthwhile to check your PATH and add the script to one of the directories in it (for instance, /home/jdoe/bin is an easy choice given the PATH shown here).

```\$ echo \$PATH
/usr/bin:/usr/local/bin:/home/jdoe/bin
```

For simple testing, if you're in the directory that contains the script, you can run it like this:

```\$ ./make_thumbnails.sh
```

Why do you need the preceding ./ path? Because most users don't have the current directory in their PATH environment variable. You can add it, but some users consider that a security risk.

Finally, you can also execute a script, even without its execute bit set, by passing it as an argument to the command interpreter, like this:

```\$ bash make_thumbnails.sh
```

## More control

To provide the flexibility you want, the bash shell and many other interpreters let you make choices in a script and run things repeatedly on a variety of inputs. In that regard, the shell is actually a programming language, and a nice way to get used to using the powerful features a programming language provides. We'll get you started here and show you the kinds of control the bash shell provides through compound statements.

### if

This statement was already introduced in the section on checking for errors, but we'll review it here. `if` is more or less what you'd expect, though its syntax is quite a bit different from its use in most other languages. It follows this form:

```if [ test-condition ]
then
do-something
else
do-something-else
fi
```

You read that right: the block must be terminated with the keyword `fi`.  (It's one of the things that makes using `if` fun.) The `else` portion is optional. Make sure to leave spaces around the opening and closing brackets; otherwise `if` reports a syntax error.

For example, if you need to check to see if you can read a file, you could write a chunk like this:

```if [ -r /home/joe/secretdata.txt ]
then
echo "You can read the file"
else
echo "You can't read that file!"
fi
```

`if` accepts a wide variety of tests. You can put any set of commands as the test-condition, but most `if` statements use the tests provided by the square bracket syntax. These are actually just a synonym for a command named `test`. So the first line of the preceding example could just as well have been written as follows.

```if test -r /home/joe/secretdata.txt
```

You can find out more about tests such as `-r` in the manual page for `test`. All the test operators can be used with square brackets, as we have.

Some useful `test` operators are:

- -r: File is readable
- -x: File is executable
- -e: File exists
- -d: File exists and is a directory

There are many, many more of them, and you can even test for multiple conditions at once. See the manual page for `test`.
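For example, the -a operator joins two tests so that both must hold ("and"). This sketch creates a scratch file (an assumed path under /tmp) and checks that it both exists and is readable:

```shell
#!/bin/bash
# -a joins two tests: both must succeed for the whole test to succeed.
touch /tmp/scratch_file.txt
if [ -e /tmp/scratch_file.txt -a -r /tmp/scratch_file.txt ]
then
echo "exists and is readable"
fi
```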

### while (and until)

`while` is a loop control structure. It keeps cycling through until its test condition is no longer true. It takes the following form:

```while test-condition
do
step1
step2
...
done
```

You can also create loops that run until they are interrupted by the user. For example, this is one way (though not necessarily the best one) to look at who is logged into your system once every 30 seconds:

```while true
do
who
sleep 30
done
```

This is inelegant because the user has to press Ctrl + c or kill it in some other way. You can write a loop that ends when it encounters a condition by using the `break` command. For instance, the following script uses the `read` command (quite useful in interactive scripts) to read a line of input from the user. We store the input in a variable named userinput and check it in the next line. The script uses another compound command we've already seen, `if`, within the `while` block, which allows us to decide whether to finish the `while` block. The `break` command ends the `while` block and continues with the rest of the script (not shown here). Notice that we use two tests through `-o`, which means "or". The user can enter Q in either lowercase or uppercase to quit.

```while true
do
echo "Enter input to process (enter Q to quit)"
read userinput

if [ "\$userinput" = "q" -o "\$userinput" = "Q" ]
then
break
fi

# ... process the input here ...

done
```

`until` works exactly the same way, except that the loop runs until the test condition becomes true.
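As a sketch, here is a loop that counts to three with `until`; the body keeps running while the test fails, and stops as soon as n exceeds 3:

```shell
#!/bin/bash
# Count from 1 to 3: the loop body runs as long as the test is FALSE,
# and stops as soon as $n is greater than 3.
n=1
until [ $n -gt 3 ]
do
echo $n
n=$((n + 1))
done
```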

### case

`case` is a way for a script to respond to a set of test conditions. It works similarly to case statements in other programming languages, though it has its own peculiar syntax, which is best illustrated through an example.

```user=`whoami` # puts the username of the user executing the script
# into the \$user variable.
case \$user in
joe)
echo "Hello Joe. I know you'd like to know what time it is, so I'll show you below."
date
;;
amy)
echo "Good day, Amy. Here's your todo list."
cat /home/amy/amy-todo.txt
;;
sam|tex)
echo "Hi fella. Don't forget to watch the system load. The current system load is:"
uptime
;;
*)
echo "Welcome, whoever else you are. Get to work now, please."
;;
esac
```

Each case must be followed by the ) character, then a newline, then the list of steps to take, then a double semicolon (;;). The "*)" condition is a catchall, similar to the `default` keyword in some languages' `case` structures. If no other cases match, the shell executes this list of statements. Finally, the keyword `esac` ends the `case` statement. In the example shown, note the case that matches either the string "sam" or "tex".

### for

`for` is a useful way of iterating through items in a list. It can be any list of strings, but it's particularly useful for iterating through a file list. The following example iterates through all of the files in the directory myfiles and creates a backup file for each one. (It would choke on any directories, but let's keep the example simple and not test for whether the file is a directory.)

```for filename in myfiles/*
do
cp \$filename \$filename.bak
done
```

Note that the first line of the `for` block refers to the variable by its bare name, filename, without a dollar sign; as with any variable assignment, the dollar sign is used only when reading the variable's value.

There's another variety of `for`, similar to the C-style `for` construct used in other languages. It is used less often in shell scripting, partly because the shell's syntax for incrementing and decrementing variables is not entirely straightforward.
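For completeness, here is a sketch of that arithmetic variety. Note that it is a bash extension, so it won't run under a strictly POSIX /bin/sh:

```shell
#!/bin/bash
# C-style for loop: initialize i, loop while the condition holds,
# and increment i after each pass.
for (( i = 1; i <= 3; i++ ))
do
echo "pass $i"
done
```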

### parallel

`parallel` is a useful way of iterating through items in a list while maximizing the use of your computer by running jobs in parallel. The following example iterates through all of the files in the directory myfiles and creates a backup file for each one replacing the extension with `.bak`. (It would choke on any directories, but let's keep the example simple and not test for whether the file is a directory.)

```ls myfiles/* | parallel cp {} {.}.bak
```

`parallel` can often be used instead of `for` loops and `while read` loops, making them run faster by executing jobs in parallel, and often making the code easier to read.

`parallel` offers many more advanced features. You can watch an introductory video here: http://www.youtube.com/watch?v=OpaiGYxkSuQ

# Maintainable Scripts

You are slowly delving into programming by way of shell scripting.  Now is a good time to start learning how to be a good programmer.  Since this book is just an introduction to the command line, we will provide only a few, but nevertheless very important, hints centered around the idea of maintainability.

When programmers talk about maintainability they are talking about the ease with which a program can be modified, whether it's to correct defects, add new functionality, or improve its performance.  Unmaintainable programs are very easy to spot: they lack structure, so functionality is spread all over the place. When you push here they break way over there, a real nightmare. In general, they are very hard to read.  Consider for example this:

```#!/bin/sh
identify `find ~/Photos/Vacation/2008 -name \*.jpg` | cut -d ' ' -f 3 | sort | uniq -c
```

Use your favorite editor to save this file as foo, then:

```\$ chmod +x foo
\$ ./foo
11 2304x3072
12 3072x2304
```

What that small monster does is find files that end with ".jpg" in a certain directory, run `identify` on all of them, and report some kind of information that someone at some time must have thought very useful. If only the programmer had added some hints as to what the program does...

## Don't use long lines

The first thing you'll note is that our example of an unmaintainable program is one long line. There's really no need for that.  What if the program looked like this instead:

```#!/bin/sh
identify `find ~/Photos/Vacation/2008 -name \*.jpg` |
cut -d ' ' -f 3 |
sort |
uniq -c
```

It becomes a little bit easier to spot where each command begins and ends.  It's still the same set of piped programs, only their presentation is different.  You can break long lines at pipes and the functionality will be the same.

You can also split one command into several lines by using the \ character at the end of a line to join it with the next:

```#!/bin/sh
echo This \
is \
really \
one \
long \
command.
```

## Use descriptive names for your scripts

The second thing you might have noticed is that the script is called "foo".  It's short and convenient but it doesn't provide a clue as to what the program does.  What about this:

```\$ mv foo list_image_sizes
```

Now the name helps the user understand what the script does.  Much better, isn't it?

## Use variables

One bothersome thing about that program is its use of backticks.  Sure, it works, but it also has drawbacks.  Perhaps the biggest one is also the least evident: remember that backticks substitute the output of the command they contain in the position where they appear.  Some systems have a limit on the length of the command line they allow.  In this particular case, if the specified directory has lots and lots of pictures, the command line can become extraordinarily long, producing an obscure error when you call the program.  There are several methods that you can use to remedy this, but for the purpose of this explanation, let's try the following:

```#!/bin/sh
find ~/Photos/Vacation/2008 -name \*.jpg |
while read image ; do identify \$image ; done |
cut -d ' ' -f 3 |
sort |
uniq -c
```

Now `find` is running the same as before, but its output, the list of filenames, is piped into a while-loop.  The condition for the loop is `read image`. `read` is a command that reads one line at a time, splits its input into fields, and then assigns each field to a variable (image in this case).  Now `identify` works on one image at a time.

Notice how introducing a variable makes the program a bit easier to read: it literally says that you wish to identify an image.  Also note how the effect on future programmers wouldn't have been the same if the variable was called something like door or cdrom.  Names are important!

But there's still something bothersome about the program: that directory name is sticking out like a sore thumb.  What if we change the program like this:

```#!/bin/sh
START_DIRECTORY=~/Photos/Vacation/2008

find \$START_DIRECTORY -name \*.jpg |
while read image ; do identify \$image ; done |
cut -d ' ' -f 3 |
sort |
uniq -c
```

That's a little bit better: now you can edit your script and change the directory each time you wish to process a different one.

## Use arguments

That last bit didn't sound quite right, did it?  After all, you don't edit `ls` each time you wish to list the contents of a different directory, do you? Let's make our program just as adaptable:

```#!/bin/sh
START_DIRECTORY=\$1

find \$START_DIRECTORY -name \*.jpg |
while read image ; do identify \$image ; done |
cut -d ' ' -f 3 |
sort |
uniq -c
```

The \$1 variable is the first argument that you pass to your script (\$0 is the name of the script you're running).  Now you can call your script like this:

```\$ ./list_image_sizes ~/Photos/Vacation/2008
```

Or you can examine the 2007 pictures, if you wish:

```\$ ./list_image_sizes ~/Photos/Vacation/2007
```

## Know where you begin

Consider what happens if you run the script like this:

```\$ ./list_image_sizes
```

Maybe that's what you want, but maybe it isn't.  What happens is that \$1 is empty, so \$START_DIRECTORY is empty as well and in turn the first argument to find is also empty.  That means that find will search your current working directory.  You might wish to make that behavior explicit:

```#!/bin/sh
if test -n "\$1" ; then
START_DIRECTORY=\$1
else
START_DIRECTORY=.
fi

find \$START_DIRECTORY -name \*.jpg |
while read image ; do identify \$image ; done |
cut -d ' ' -f 3 |
sort |
uniq -c
```

The program behaves exactly as before, with the only difference that in six months, when you come back and look at the program, you won't have to wonder why it's producing results even when you don't pass it a directory as argument.

## Look before you leap

Speaking of which, what happens if you do pass an argument to the script, but that argument isn't a directory, or worse yet, doesn't even exist?  Try it.

Not pretty, eh?

What if we do this:

```#!/bin/sh
if test -n "\$1" ; then
START_DIRECTORY=\$1
else
START_DIRECTORY=.
fi

if ! test -d \$START_DIRECTORY ; then
exit
fi

find \$START_DIRECTORY -name \*.jpg |
while read image ; do identify \$image ; done |
cut -d ' ' -f 3 |
sort |
uniq -c
```

That's better.  Now the script won't even attempt to run if the argument it receives isn't a directory.  It isn't very polite, though: it silently exits with no hint of what went wrong.

## Complain if you must

That's easily fixed:

```#!/bin/sh
if test -n "\$1" ; then
START_DIRECTORY=\$1
else
START_DIRECTORY=.
fi

if ! test -d \$START_DIRECTORY ; then
echo \"\$START_DIRECTORY\" is not a directory or it does not exist.  Stop.
exit
fi

find \$START_DIRECTORY -name \*.jpg |
while read image ; do identify \$image ; done |
cut -d ' ' -f 3 |
sort |
uniq -c
```

## Mind your exit

The program now produces an error message if you don't pass it an existing directory as argument and it exits without further action.  It would be nice if you let other programs that might eventually call your script know that there was an error condition.  That is, it would be nice if your program exits with an error code.  Something like this:

```#!/bin/sh
if test -n "\$1" ; then
START_DIRECTORY=\$1
else
START_DIRECTORY=.
fi

if ! test -d \$START_DIRECTORY ; then
echo \"\$START_DIRECTORY\" is not a directory or it does not exist.  Stop.
exit 1
fi

find \$START_DIRECTORY -name \*.jpg |
while read image ; do identify \$image ; done |
cut -d ' ' -f 3 |
sort |
uniq -c
```

Now, if there's an error, your script's exit code is 1. If the program exits normally, the exit code is 0.
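On the calling side, the exit code of the last command can be read through the special variable $?. A quick sketch using the built-in true and false commands:

```shell
#!/bin/bash
# $? holds the exit status of the most recently executed command:
# 0 for success, non-zero for failure.
true
echo "true exited with: $?"
false
echo "false exited with: $?"
```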

## Use comments

Anything following a # symbol on a line will be ignored, allowing you to add notes about how your script works.  For example:

```#!/bin/sh
# This script reports the sizes of all the JPEG files found under the current
# directory (or the directory passed as an argument) and the number of photos
# of each size.

if test -n "\$1" ; then
START_DIRECTORY=\$1
else
START_DIRECTORY=.
fi

if ! test -d \$START_DIRECTORY ; then
echo \"\$START_DIRECTORY\" is not a directory or it does not exist.  Stop.
exit 1
fi

find \$START_DIRECTORY -name \*.jpg |
while read image ; do identify \$image ; done |
cut -d ' ' -f 3 |
sort |
uniq -c
```

Comments are good, but don't fall prey to writing too many of them.  Try to construct your program so that the code itself is clear.  The reason behind this is simple: next year, when someone else changes your script, that other person could well change the commands and forget about the comments, making the latter misleading.  Consider this:

```# count up to three
for n in `seq 1 4` ; do echo \$n ; done
```

Which one is it?  Three or four?  Evidently the program is counting up to four, but the comment says it's up to three.  You could adopt the position that the program is right and the comment is wrong.  But what if the person who wrote this meant to count to three and that's the reason why the comment is there?  Let's try it like this:

```# There are three little pigs
for n in `seq 1 3` ; do echo \$n ; done
```

The comment documents the reason why the program is counting up to three: it is not describing what the program does, it's describing what the program should do.  Let's consider a different approach:

```TOTAL_PIGS=3
for pig in `seq 1 \$TOTAL_PIGS` ; do echo \$pig ; done
```

Same result, slightly different program.  If you reformat your program, you can do without the comments (as a side note, the fancy word for the kind of changes we have been making is refactoring, but that goes outside the scope of this book).

## Avoid magic numbers

In our current example, there's a magic number, a number that makes the program work, but no one knows why it has to be that number.  It's magic!

```...
cut -d ' ' -f 3 |
...
```

You have two choices: write a comment and document why it has to be "3" instead of "2" or "4" or introduce a variable that explains why by way of its name.  Let's try the latter:

```#!/bin/sh
# This script reports the sizes of all the JPEG files found under the current
# directory (or the directory passed as an argument) and the number of photos
# of each size.

if test -n "\$1" ; then
START_DIRECTORY=\$1
else
START_DIRECTORY=.
fi

if ! test -d \$START_DIRECTORY ; then
echo \"\$START_DIRECTORY\" is not a directory or it does not exist.  Stop.
exit 1
fi

IMAGE_SIZE_FIELD=3

find \$START_DIRECTORY -name \*.jpg |
while read image ; do identify \$image ; done |
cut -d ' ' -f \$IMAGE_SIZE_FIELD |
sort |
uniq -c
```

It does improve things a little; at least now we know where the 3 comes from.  If ImageMagick ever changes the output format, we can update the script accordingly.

## Did it work?

Last but not least, check the exit status of the commands you run.  As it stands right now, in our example there's not much that can fail.  So let's try one last example:

```#!/bin/sh
# Copy all the HTML and image files present in the source directory to the
# specified destination directory.

SRC=\$1
DST=\$2

if test -z "\$SRC" -o -z "\$DST" ; then
cat<<EOT
Usage:

\$0 source_directory destination_directory
EOT
exit 1
fi

if ! test -d "\$SRC" ; then
echo \"\$SRC\" is not a directory or it does not exist.  Stop.
exit 1
fi

if test -e "\$DST" ; then
echo \"\$DST\" already exists.  Stop.
exit 1
fi

if ! mkdir -p "\$DST" ; then
echo Can\'t create destination directory \"\$DST\".  Stop.
exit 1
fi

# Obtain the absolute path for \$DST
cd "\$DST"
DST=`pwd`
cd -

cd "\$SRC"

find ! -type d \( -name \*.html -o -name \*.jpg -o -name \*.png \) |
while read filename ; do
dir=`dirname "\$filename"`
mkdir -p "\$DST/\$dir" && cp -a "\$filename" "\$DST/\$filename"
if test \$? -ne 0 ; then
echo Can\'t copy \"\$filename\" to \"\$DST/\$filename\"
echo Abort.
exit 1
fi
done
```

Note that this example makes use of many things you learned in this book. It does not try to be definitive; you can practice improving it!

The thing you should note now is how the program pays attention to the error conditions that the different programs might produce.  For example, instead of just calling `mkdir` to check if a program worked, it does this:

```if ! mkdir -p "\$DST" ; then
echo Can\'t create destination directory \"\$DST\".  Stop.
exit 1
fi
```

It calls `mkdir` as the condition for `if`.  If `mkdir` encounters an error, it will exit with a non-zero status, which the `if` clause interprets as a false condition. The "!" is a negation operator that inverts false to true (or vice versa). So the line as a whole basically says: "Run the `mkdir` command, turn an error into a true value with the "!" operator, and take action if it's true that there's an error." In short, if `mkdir` encounters an error, the flow will enter the body of the `if`.  This might happen, for example, if the user running the script doesn't have permission to create the requested directory.

Note also the usage of "&&" to verify error conditions:

```mkdir -p "\$DST/\$dir" && cp -a "\$filename" "\$DST/\$filename"
```

If `mkdir` fails, `cp` won't be called.  Furthermore, if either `mkdir` or `cp` fails, the exit status will be non-zero.  That condition is checked in the next line:

```if test \$? -ne 0 ; then
```

Since this might indicate something going awfully wrong (e.g., is the disk full?), we had better give up and stop the program.
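The mirror image of "&&" is "||": the command after "||" runs only when the command before it fails. A small sketch (the directory path is deliberately one that cannot be created, since its parent does not exist):

```shell
#!/bin/bash
# "||" runs its right-hand side only when the left-hand side fails.
mkdir /nonexistent/impossible 2>/dev/null || echo "mkdir failed"
```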

## Wrapping up

Writing scripts is an art. You can become a better artist by looking at what others have done before you and doing a lot yourself. In other words: read a lot of scripts and write a lot of scripts yourself.

Happy hacking!

# The Sed Text Processor

Sed (stream editor) is a utility that does transformations on a line-by-line basis. The commands you give it are run on each line of input in turn. It is useful both for processing files and in a pipe to process output from other programs, such as here:

```\$ wc -c * | sort -n | sed ...
```

## Basic Syntax and Substitution

A common use of Sed is to change words within a file. You may have used "Find and Replace" in GUI-based editors. Sed can do this much more powerfully and quickly:

``` \$ sed "s/foo/bar/g" inputfile > outputfile
```

Let's break down this simple command. First we tell the shell to run `sed`. The processing we want to do is enclosed in double quotation marks; we'll come back to that in a moment. We then tell Sed the name of the inputfile and use standard shell redirection (>) to the name of our outputfile. You can specify multiple input files if you want; Sed processes them in order and creates a single stream of output from them.

The expression looks complex but is very simple once you learn to take it apart. The initial "s" means "substitute". This is followed by the text you want to find and the replacement text, with slashes (/) as separators. Thus, here we want to find "foo" in the inputfile and put "bar" in its place. Only the output file is affected; Sed never changes its input files.

Finally, the trailing "g" stands for "global", meaning to do this for the whole line. If you leave off the "g" and "foo" appears twice on the same line, only the first "foo" is changed to "bar".

```\$ cat testfile
this has foo then bar then foo then bar
this has bar then foo then bar then foo
\$ sed "s/foo/bar/g" testfile > testchangedfile
\$ cat testchangedfile
this has bar then bar then bar then bar
this has bar then bar then bar then bar
```

Now let's try that again without the `/g` on the command and see what happens.

```\$ cat testfile
this has foo then bar then foo then bar
this has bar then foo then bar then foo
\$ sed "s/foo/bar/" testfile > testchangedfile
\$ cat testchangedfile
this has bar then bar then foo then bar
this has bar then bar then bar then foo
```

Notice that without the "g", Sed performed the substitution only the first time it finds a match on each line.

This is all well and good, but what if you wanted to change only the second occurrence of the word foo on each line of our testfile? To specify a particular occurrence to change, just put the number after the substitute command.

```\$ sed "s/foo/bar/2" inputfile > outputfile
```

You can also combine this with the `g` flag (in some versions of Sed) to leave the first occurrence alone and change from the 2nd occurrence to the end of the line.

```\$ sed "s/foo/bar/2g" inputfile > outputfile
```

## Sed Expressions Explained

Sed understands regular expressions, to which a chapter is devoted in this book. Here are some of the special characters you can use to match what you wish to substitute.

```\$ matches the end of a line
^ matches the start of a line
* matches zero or more occurrences of the previous character
[ ] any characters within the brackets will be matched
```

For example, you could change any instance of the words "cat", "can", and "car" to "dog" by using the following:

``` \$ sed "s/ca[tnr]/dog/g" inputfile > outputfile
```

In the next example, the first [0-9] ensures that at least one digit must be present to be matched. The second [0-9] may be missing or may be present any number of times, because it is followed by the * metacharacter. Finally, the digits are removed because there is nothing between the second and third slashes where you can put your replacement text.

``` \$ sed "s/[0-9][0-9]*//g" inputfile > outputfile
```

Inside an expression, if the first character is a caret (^), Sed matches the text only if it appears at the start of the line.

```\$ echo dogs cats and dogs | sed "s/^dogs/doggy/"
doggy cats and dogs
```

A dollar sign at the end of a pattern expression tells Sed to match the text only if it is at the end of the line.

```\$ echo dogs cats and cats | sed "s/cats\$/kitty/"
dogs cats and kitty
```

A line changes only if the matching string is where you require it to be; if the same text occurs elsewhere in the line, it is not modified.

## Deletion

The "d" command deletes an entire line that contains a matching pattern. Unlike the "s" (substitute) command, the "d" goes after the pattern.

```\$ cat testfile
line with a cat
line with a dog
line with another cat
\$ sed "/cat/d" testfile > newtestfile
\$ cat newtestfile
line with a dog
```

The regular expression ^\$ means "match a line that has nothing between the beginning and the end", in other words, a blank line. So you can remove all blank lines using the "d" command with that regular expression:

```\$ sed "/^\$/d" inputfile > outputfile
```

## Controlling Printing

Suppose you want to print certain lines and suppress the rest. That is, instead of specifying which lines to delete using "d", you want to specify which lines to keep.

This can be done with two features:

1. Specify the `-n` option, which means "do not print lines by default".
2. End the pattern with "p" to print each line matched by the pattern.

We'll show this with a file that contains names:

```\$ cat testfile
Mr. Jones
Mrs. Jones
Mrs. Lee
Mr. Lee
```

We've decided to standardize on "Ms" for women, so we want to change "Mrs." to "Ms". The pattern is:

``` s/Mrs\./Ms/
```

and to print only the lines we changed, enter:

```\$ sed -n "s/Mrs\./Ms/p" testfile
```
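You can try the effect of -n and "p" without creating a file by piping a couple of lines straight into Sed:

```shell
$ printf 'Mr. Jones\nMrs. Jones\n' | sed -n "s/Mrs\./Ms/p"
Ms Jones
```

Only the second input line matches the pattern, so only the changed line is printed.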

## Multiple Patterns

Sed can be passed more than one operation at a time. We can do this by specifying each pattern after an `-e` option.

```\$ echo Gnus eat grass | sed -e "s/Gnus/Penguins/" -e "s/grass/fish/"
Penguins eat fish
```

## Controlling Edits With Patterns

We can also be more specific about which lines a pattern gets applied to.  By supplying a pattern before the operation, you restrict the operation to lines that have that pattern.

```\$ cat testfile
one: number
two: number
three: number
four: number
one: number
three: number
two: number
\$ sed "/one/ s/number/1/" testfile > testchangedfile
\$ cat testchangedfile
one: 1
two: number
three: number
four: number
one: 1
three: number
two: number
```

The `sed` command in that example had two parts. The first, the pattern "one", simply controls which lines Sed changes. The second, the substitution, replaces "number" with "1" on those lines.

This works with multiple patterns as well.

```\$ cat testfile
one: number
two: number
three: number
four: number
one: number
three: number
two: number
\$ sed -e "/one/ s/number/1/" -e "/two/ s/number/2/" \
-e "/three/ s/number/3/" -e "/four/ s/number/4/" \
< testfile > testchangedfile
\$ cat testchangedfile
one: 1
two: 2
three: 3
four: 4
one: 1
three: 3
two: 2
```

## Controlling Edits With Line Numbers

Instead of specifying patterns that can operate on any line, we can specify an exact line or range of lines to edit.

```\$ cat testfile
even number
odd number
odd number
even number
\$ sed "2,3 s/number/1/" < testfile > testchangedfile
\$ cat testchangedfile
even number
odd 1
odd 1
even number
```

The comma acts as the range separator, telling Sed to work only on lines two through three.

```\$ cat testfile
even number
odd number
odd number
\$ sed -e "2,3 s/number/1/" -e "1 s/number/2/" < testfile > testchangedfile
\$ cat testchangedfile
even 2
odd 1
odd 1
```

Sometimes you might not know exactly how long a file is, but you want to go from a specified line to the end of the file. You could use `wc` or the like and count the total lines, but you can also use a dollar sign (\$) to represent the last line:

```\$ sed "25,\$ s/number/1/" < testfile > testchangedfile
```

The \$ in an address range is Sed's way of specifying, "all the way to the end of the file".
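Combined with the -n option and "p" command from earlier, the same trick prints a file from a given line to the end. For example:

```shell
$ printf 'one\ntwo\nthree\n' | sed -n "2,\$p"
two
three
```

(The backslash before the dollar sign keeps the shell from treating it as a variable inside double quotes; with single quotes you could write '2,$p' instead.)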

## Scripting SED commands

By using the `-f` argument to the `sed` command, you can feed Sed a list of commands to run. For example, if you put the following patterns in a file called sedcommands:

```s/foo/bar/g
s/dog/cat/g
s/tree/house/g
s/little/big/g
```

You can use this on a single file by entering the following:

```\$ sed -f sedcommands < inputfile > outputfile
```

Note that each command in the file must be on a separate line.

There is much more to Sed than can be written in this chapter. In fact, whole books have been written about Sed, and there are many excellent tutorials about Sed online.

# Awk

AWK is a programming language designed for processing plain text data.  It is named after its creators, Alfred Aho, Peter Weinberger, and Brian Kernighan.  AWK is quite a small language and easy to learn, making it an ideal tool for quick and easy text processing.  Its prime use is to extract data from table-like input.

Since programs written in AWK tend to be rather small, they are mostly entered directly on the command line.  Of course, saving larger scripts as text files is also possible.

In the next paragraphs, we present the basics of AWK through three simple examples.  All of them will be run on the following text file (containing the five highest scores ever achieved in the video game Donkey Kong as of March 2009):
```1050200 Billy Mitchell 2007
1049100 Steve Wiebe 2007
895400 Scott Kessler 2008
879200 Timothy Sczerby 2001
801700 Stephen Boyer 2007
```

The file is a table organized into fields.  The first field of each row contains the respective score, the second and third fields contain the name of the person who has achieved it, and the fourth and last field of each row contains the year in which the score was set.  You should copy and paste the text above into a text file and name it something like highscores.txt so that you can try out the following examples.

## Example 1

Let's say we want to print only those scores higher than 1,000,000 points. Also, we want only the first names of the persons who have achieved the scores.  By using AWK, it's easy to extract this information:

```\$ awk '\$1 > 1000000 { print \$2, \$1 }' highscores.txt
Billy 1050200
Steve 1049100
```

Try it out!

The little AWK program that we've just entered on the command line consists of two parts:

1. The part preceding the curly braces (\$1 > 1000000) says "Do this for all lines where the value of field no. 1 is greater than 1,000,000."
2. The part inside the curly braces (print \$2, \$1) says "Print field no. 2, followed by field no. 1."

What the combined program says is: "For all lines, if the value of the first field is greater than 1,000,000, print the second field of the line followed by the first field of the line."  (Note that AWK programs entered on the command line are usually enclosed in single quotation marks in order to prevent the shell from interpreting them.)

As we have seen in the previous example, the structure of an AWK statement is as follows:

```pattern { action }
```

The expression pattern specifies a condition that has to be met for action to take effect. AWK programs consist of an arbitrary number of these statements.  (The program we have discussed above contains only a single statement.)  An AWK program basically does the following:

1. It reads its input (e.g. a file or a text stream from standard input) line by line.
2. For each line, AWK carries out all statements whose condition/pattern is met.

Simple, isn't it?
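To see several statements working together, here is a small sketch (using the same highscores.txt file as above) with two separate pattern/action pairs, one picking out the 2008 record and one picking out the 2001 record:

```
$ awk '$4 == 2008 { print $2, $3 } $4 == 2001 { print $2, $3 }' highscores.txt
Scott Kessler
Timothy Sczerby
```

For each input line, AWK checks both patterns in order, so a line could in principle trigger both actions.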

## Example 2

Let's look at another example:

```
$ awk '$4 == 2007 { print "Rank", NR, "-", $3 }' highscores.txt
Rank 1 - Mitchell
Rank 2 - Wiebe
Rank 5 - Boyer
```

The program, again consisting of a single statement, may be paraphrased like this: "For each line, if the value of field no. 4 equals 2007, print the word 'Rank', followed by the value of the variable 'NR', followed by a dash ('-'), followed by field no. 3."

So what this little program does is print the surnames of all high score holders having set their record in 2007 along with their respective ranks in the high score table.

How does AWK know which ranks the individual high score holders occupy?  Since the table is sorted, the rank of each high score holder is equal to the row number of the entry.  And AWK can access the number of each row by means of the built-in variable NR (the Number of Records read so far).  AWK has quite a lot of useful built-in variables, which you can look up in its documentation.
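As a quick sketch of NR at work, a statement with no pattern at all can use it to number every record; applied to our highscores.txt, this prints each player's rank and first name:

```
$ awk '{ print NR, $2 }' highscores.txt
1 Billy
2 Steve
3 Scott
4 Timothy
5 Stephen
```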

## Example 3

The third and final example is a bit more complex than the other two, since it contains three AWK statements in total:

```
$ awk 'BEGIN {print "Together, the five best Donkey Kong players have achieved:"}\
{total += $1} END {print total, "points"}' highscores.txt
```

This will output the following:

```
Together, the five best Donkey Kong players have achieved:
4675600 points
```

Let's break up this program into its three parts/statements (which we have entered on a single command line):

### First statement

pattern: BEGIN
action: print "Together, the five best Donkey Kong players have achieved:"

### Second statement

pattern: none (= always execute action)
action: add the value of field no. 1 to the variable total

### Third statement

pattern: END
action: print the value of the variable total, followed by the string "points"

OK, now let's look at what is new in this short AWK program.

First of all, the patterns BEGIN and END have a special meaning: the action following BEGIN is executed before any input is read and the action introduced by END is executed when AWK has finished reading the input.

In the second statement, we can observe that an AWK statement does not need a pattern, only action is obligatory.  If a statement doesn't contain a pattern, the condition of the statement is always met and AWK executes the action for every single input line.

Finally, we have used our own variable for the first time, which we have called total. AWK variables do not need to be declared explicitly; you can introduce new ones by simply using them.  In our example program, the value of the variable total, starting out at 0 (zero), is increased by the value of field no. 1 for each input line. The += operator means "add the math expression on the right to the variable on the left."

So after all input lines have been read, total contains the sum of all field 1 values, that is, the sum of all high scores.  The END statement outputs the value of total followed by the string "points".
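Since NR holds the number of records read so far, combining it with total in the END action also gives us the average score; a small sketch using the same highscores.txt file:

```
$ awk '{ total += $1 } END { print "Average:", total/NR }' highscores.txt
Average: 935120
```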

## Where to go from here?

We have seen that AWK is a fun and easy to use little programming language that may be applied to a wide range of data extraction tasks.  This short introduction to AWK can of course be little more than an appetizer.  If you want to learn more, we recommend you have a look at GAWK, the GNU implementation of AWK.  It is one of the most feature-rich implementations of the language, and comes with a comprehensive and easy to read manual (see http://www.gnu.org/software/gawk/manual/).

# Python

Python is a programming language that can be used to perform tasks that would be difficult or cumbersome on the command line. Python is included by default with most GNU/Linux distributions. Just like the command line, you can either use Python by typing commands individually, or you can create a script file. If you want to type commands individually, start the Python interpreter by typing `python`.

```
$ python
>>> 10 + 10
20

```

To exit an interactive Python session, type Ctrl + d.

To write a multi-line script in Python that you can run from outside of the Python interactive console, put the commands in a file. You can use any text editor to create this file -- Emacs, Vim, Gedit, or whatever your favorite is; and call it what you like (often the filename ends with ".py" to help distinguish it in a directory listing). A script could look like this:

```
a = 1 + 2
print a
```

In this example, we create a variable, a, which stores the result of "1 + 2", and then use the print command to display the result, which should be 3. If we save this file as first.py, we can run it from the command line.

```
$ python first.py
3
```

The Python program printed out "3", just like we expected. We can add a first line specifying the Python interpreter, make the file executable, and then run it just by typing ./first.py. If we still have the first.pl file from the previous chapter, which does exactly the same thing, we can make a link to either one in order to choose the method by which we add 1 and 2.

```
$ ln -s first.py 1plus2
$ ./1plus2
3
$ ln -sf first.pl 1plus2
$ ./1plus2
3
```

Of course, we can use Python to do more useful things. For example, we can look at all the files in the current directory.

```
$ python
>>> import os
>>> os.listdir('.')
['notes.txt', 'readme.txt', 'first.py']
```

Here we import the standard library "os", which has operating system-like functions in it.  We call the listdir function to return a list of names of files in the current directory.  We pass the directory name as a string (enclosed in single quotes); the single dot refers to the current directory.

Let's try doing something with these files -- here's a way to find all of the ".py" files in a directory.

```
>>> files = os.listdir('.')
>>> files
['notes.txt', 'readme.txt', 'first.py']
>>> [file for file in files if '.py' in file]
['first.py']
```

Above we use a powerful construction called a list comprehension to produce a new list by transforming and filtering a list.  Below is a simpler but wordier way to pick out all of the files with ".txt" in them.

```
>>> for file in files:
...     if '.txt' in file:
...         print file
...
notes.txt
readme.txt
```

The indentation is required in Python.  Indentation tells the Python interpreter what to include in the for loop and what to include in the if statement.  Also, you must press an additional Enter at the last set of three dots to tell the Python interpreter that you're done.

We can also use command line code in Python by passing it to the os.system function. For example, if we wanted to delete all of the ".txt" files, we could use:

```
>>> for file in files:
...     if '.txt' in file:
...         cmd = 'rm ' + file
...         os.system(cmd)
...
```

Above, we construct a shell command cmd as a Python string by concatenating (using the "+" operator) the strings "rm " and the filename, then pass it to the os.system function.  Now we can check to see that the files have been deleted.

```
>>> os.system('ls')
first.py
```

## More information about Python

The Python web site at http://www.python.org  contains an impressive amount of information and documentation about the Python language.  If you are just getting started with programming, the book "How to Think Like a Computer Scientist" by Jeffrey Elkner, Allen B. Downey and Chris Meyers at http://openbookproject.net/thinkCSpy/index.html is a good place to start.

# Nano

Nano is a simple editor. To open it and begin creating a new text file, type the following at the command line:

```
$ nano filepath
```

where filepath is the path to the file you want to edit (or nothing). The screen is taken over by the program as shown in Figure 1.

Figure 1. Opening screen for nano

The screen is no longer a place to execute commands; it has become a text editor.

## Exiting nano

To exit `nano`, hold down the Ctrl key and press the x key (a combination we call ctrl + x in this book).  If you have created or altered some text but have not yet saved it, `nano` asks:

```
Save modified buffer (ANSWERING "No" WILL DESTROY CHANGES) ?
```

To save the changes, just type y and nano prompts for a destination filepath. To abandon your changes, type n.

To save changes without exiting, press ctrl + o. `nano` asks you for the filename in which to save the text:

```
File Name to Write:
```

Type the name of the file, and press the Enter key (or if the buffer already has the right name just press Enter).  For instance:

```
File Name to Write: textfile.txt
```

## Exploring Files

You can move around the file and view different parts using the arrow keys. This is a very fast and responsive way to explore a file.

## Help

Be sure to read the man page, because it has a lot of good hints. Help is also available within your nano session: type ctrl + g to view it, and ctrl + x to get back to your file.

# Gedit

Gedit, the default GUI editor if you use Gnome, also runs under KDE and other desktops. Most gNewSense and Ubuntu installations use Gnome by default. To start Gedit open a terminal and type

```
$ gedit &
```

You should see this:

This looks like most basic editors on any operating system. You can use Gedit through the GUI, and the commands are simple:

File->Open : Opens an existing file

File->New : Creates a new (blank) file

File->Save : Saves a file

ctrl + c : copy

ctrl + v : paste

That's all you really need to do. To add text just type!

## Line Numbers

Gedit tracks your cursor and displays the position at the bottom of the interface:

This can be handy information to know. If you keep track of the line numbers you can use these to jump quickly around the text file by using the "Go to Line" feature. This can be accessed via the interface (Search -> Go to Line) or via the shortcut ctrl + i.

# The Parts of a Command

The first word you type on a line is the command you wish to run.  In the "Getting Started" section we saw a call to the `date` command, which returned the current date and time.

## Arguments

Another command we could use is `echo`, which displays the specified information back to the user.  This isn't very useful if we don't actually specify information to display.  Fortunately, we can add more information to a command to modify its behavior; this information consists of arguments.  Luckily, the `echo` command doesn't argue back; it just repeats what we ask it:

```
$ echo foo
foo
```

In this case, the argument was foo, but there is no need to limit the number of arguments to one. Every word of the text entered, excluding the first word, will be considered an additional argument passed to the command. If we wanted `echo` to respond with multiple words, such as `foo bar`, we could give it multiple arguments:

```
$ echo foo bar
foo bar
```

Arguments are normally separated by "white space" (blanks and tabs -- things that show up white on paper).  It doesn't matter how many spaces you type, so long as there is at least one. For instance, if you type:

```
$ echo foo              bar
foo bar
```

with a lot of spaces between the two arguments, the "extra" spaces are ignored, and the output shows the two arguments separated by a single space.  To tell the command line that the spaces are part of a single argument, you have to delimit that argument in some way.  You can do it by quoting the entire content of the argument inside double-quote (`"`) characters:

```
$ echo "foo              bar"
foo              bar
```
As we'll see later, there is more than one way to quote text, and those ways may (or may not) differ in the result, depending on the content of the quoted text.
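For instance, double quotes and single quotes differ in how they treat the `$( ... )` construction (which runs a command and substitutes its output, as shown later in this chapter). A small sketch; the 1st of March 2009 fell on a Sunday:

```
$ echo "The 1st of March 2009 was a $(date -u -d 2009-03-01 +%A)"
The 1st of March 2009 was a Sunday
$ echo 'The 1st of March 2009 was a $(date -u -d 2009-03-01 +%A)'
The 1st of March 2009 was a $(date -u -d 2009-03-01 +%A)
```

Inside double quotes the command substitution is expanded; inside single quotes it is taken literally.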

## Options

Revisiting the `date` command, suppose you actually wanted the UTC date/time information displayed.  For this, `date` provides the `--utc` option.  Notice the two initial hyphens.  These indicate arguments that a command checks when it starts and that control its behavior.  The `date` command checks specially for the `--utc` option and says, "OK, I know you're asking for UTC time".  This is different from arguments we invented, as when we issued `echo` with the arguments `foo bar`.

Other than the dashes preceding the word, `--utc` is entered just like an argument:

```
$ date --utc
Tue Mar 24 18:12:44 UTC 2009
```

Usually, these options have a shorter form as well, such as `date -u` (the short version often has only one hyphen).  Short options are quicker to type (use them when you are typing at the shell), whereas long options are easier to read (use them in scripts).

Now let's say we wanted to look at yesterday's date instead of today's.  For this we would want to specify the `--date` option (or `-d` for short), which takes an argument of its own. The argument for an option is simply the word following that option. In this case, the command would be `date --date yesterday`.

Since options are just arguments, you can combine options together to create more sophisticated behaviour.  For instance, to combine the previous two options and get  yesterday's date in UTC you would type:

```
$ date --date yesterday -u
Mon Mar 23 18:16:58 UTC 2009
```

As you can see, there are options that expect to be followed by an argument (`-d`, `--date`) and others that don't take one (`-u`, `--utc`).  Passing a slightly more complex argument to the `--date` option allows you to obtain some interesting information, for example whether this year is a leap year (one in which the last day of February is the 29th).  You need to know what day immediately precedes the 1st of March:

```
$ date --date "1march yesterday" -u
Sat Feb 28 00:00:00 UTC 2009
```
The question you posed to `date` is: if today were the 1st of March of the current year, what date would yesterday be?  So no, 2009 is not a leap year.  It may also be useful to get the weekday of a given date, say New Year's Eve 2009:
```
$ date -d 31dec +%A
Thursday
```

which is the same as:

```
$ date --date 31december2009 +%A
Thursday
```

In this case we passed to `date` the option `-d` (`--date`) followed by the New Year's Eve date, and then a special argument that is specific to the `date` command.  Commands may once in a while have strange, esoteric arguments.  The `date` command can accept a format argument starting with a plus (`+`).  The format `%A` asks it to print the weekday name of the given date (while `%a` would print the abbreviated weekday: try it!).  For now don't worry about these hermetic details: we'll see later how to obtain help from the command line when learning command details.  Let's just nibble a more savory morsel that combines the `echo` and `date` commands:

```
$ echo "This New Year's Eve falls on a $( date -d 31dec +%A )"
This New Year's Eve falls on a Thursday
```

## Repeating and editing commands

Use the Up-arrow key to retrieve a command you issued before.  You can move up and down using arrow keys to get earlier and later commands.  The Left-arrow and Right-arrow keys let you move around inside a single command.  Combined with the Backspace key, these let you change parts of the command and turn it into a new one.  Each time you press the Enter key, you submit the modified command to the terminal and it runs exactly as if you had typed it from scratch.

# Getting Started

Modern computing is highly interactive, and using the command line is just another form of interaction.  Most people use the computer through its desktop or graphical interface, interacting at a rapid pace.  They click on an object, drag and drop it, double-click another to open it, alter it, etc.

Although interactions happen so fast you don't think about it, each click or keystroke is a command to the computer, which it reacts to. Using the command line is the same thing, but more deliberate.  You type a command and press the Return or Enter key.  For instance, in my terminal I type:

```
date
```

And the computer replies with:

```
Thu Mar 12 17:15:09 EDT 2009
```

That's pretty computerish.  In later chapters we'll explain how to request the date and time in a more congenial format. We'll also explain how working in different countries and with different languages changes the output.  The point is that you've just had an interaction.

## The Command Line Can Do Much Better

The date command, as seen so far, compares poorly with the alternative of glancing at a calendar or clock.  The main problem is not the unappetizing appearance of the output, mentioned already, but the inability to do anything of value with the output.  For instance, if I'm looking at the date in order to insert it into a document I'm writing or update an event on my online calendar, I have to do some retyping.  The command line can do much better.

After you learn basic commands and some nifty ways to save yourself time, you'll find out more in this book about feeding the output of commands into other commands, automating activities, and saving commands for later use.

## What Do We Mean By a Command?

At the beginning of this chapter we used the word "command" very generally to refer to any way of telling the computer what to do.  But in the context of this book, a command has a very specific meaning. It's a file on your computer that can be executed, or in some cases an action that is built into the shell program. Except for the built-in commands, the computer runs each command by finding the file that bears its name and executing that file. We'll give you more details as they become useful.

## Ways to Enter Commands

To follow along with this book, you need to open a command-line interpreter (called a shell or terminal in GNU/Linux) on your computer.  Pre-graphical computer screens presented people with this interpreter as soon as they logged in.  Nowadays almost everybody except professional system administrators uses a graphical interface, although the pre-graphical one is still easier and quicker to use for many purposes.  So we'll show you how to pull up a shell.

## Finding a Terminal

You can get a terminal interface from the desktop, but it may be easier to leave the desktop and use the original text-only terminal. To do that, use the <ctrl><alt><F1> key combination. You get a nearly blank screen with an invitation to log in. Give it your username and password. You can go to other terminals with <alt><F2> and so on, and set up sessions with different (or the same) users for whatever tasks you want to do. At any time, switch from one to another by using the <alt><F#> keystroke for the one you want. One of these, probably F7 or F8, will get you back to the desktop. In the text terminals you can use the mouse (assuming your system has gpm running) to select a word, line or range of lines.  You can then paste that somewhere else in that terminal or any other terminal.

GNU/Linux distributions come with different graphical user interfaces (GUI) offering different aesthetics and semantic metaphors.  Those running on top of the operating system are known as desktop environments.  GNOME, KDE and Xfce are among the most widely used ones.  Virtually every desktop environment provides a program that mimics the old text-only terminals that computers used to offer as interfaces.  On your desktop, try looking through the menus of applications for a program called Terminal.  Often it's on a menu named something such as "Accessories", which is not really fair because once you read this book you'll be spending a lot of time in the terminal every day.

In GNOME you choose Applications -> Accessories -> Terminal.

In KDE you choose K Menu -> System -> Terminal; in Xfce you choose Xfce Menu -> System -> Terminal.

Wherever it's located, you can almost certainly find a terminal program.

When you run the terminal program, it just shows a blank window; there's not much in the way of help.  You're expected to know what to do--and we'll show you.

The following figure shows the Terminal window opened on the desktop in GNOME.

## Running an Individual Command

Many graphical interfaces also provide a small dialog box called something like "Run command".  It presents a small text area where you can type in a command and press the Return or Enter key.

To invoke this dialog box, try typing the Alt + F2 key combination, or searching through the menus of applications.  You can use this box as a shortcut to quickly start up a terminal program, as long as you know the name of a terminal program installed on your computer.  If you are working on an unfamiliar computer and don't even know the name of the default terminal program, try typing `xterm` to start up a no-frills terminal program (no fancy menus allowing choice of color themes or fonts).  If you desperately need these fancy menus,

- in GNOME the default terminal program should be `gnome-terminal`;
- in KDE it should be `konsole`;
- in Xfce you'd try `Terminal` or a version-specific terminal name: for example, in Xfce 4 you should find `xfce4-terminal`.

## How We Show Commands and Output in This Book

There's a common convention in books about the command-line. When you start up a terminal, you see a little message indicating that the terminal is ready to accept your command. This message is called a prompt, and it may be as simple as:

```
$
```

After you type your command and press the Return or Enter key, the terminal displays the command's output (if there is any) followed by another prompt. So my earlier interaction would be shown in the book like this:

```
$ date
Thu Mar 12 17:15:09 EDT 2009
$
```

You have to know how to interpret examples like the preceding one. All you type here is date, followed by the Return key. The rest is output on the terminal.

# Moving Around

Anyone who has used a graphical interface has moved between folders. A typical view of folders appears in Figure 1, where someone has opened a home directory, then a folder named "my-stuff" under that, and a folder named "music" under that.

Figure 1 : Folders

When you use the command line, folders are called directories. That's just an older term used commonly in computing to refer to collections of things. (Try making an icon that suggests "directory"). Anything you do in a folder on the desktop is reflected in the directory when you're on the command line, and vice versa. The desktop and the command line provide different ways of viewing a directory/folder, and each has advantages and disadvantages.

Files contain your information--whether pictures, text, music, spreadsheet data, or something else--while the directories are containers for files. Directories can also store other directories. You'll be much more comfortable with the command line once you can move around directories, view them, create and remove them, and so on.

Directories are organized, in turn, into filesystems. Your hard disk has one type of filesystem, a CD-ROM or DVD has another, a USB mass storage device has yet another, and so on. That's why a CD-ROM, DVD, or USB device shows up as something special on the desktop when you insert it. Luckily, you don't have to worry much about the differences because both the desktop and the terminal can hide the differences. But sometimes in this book we'll talk about the information a filesystem has about your files.

The "first" directory is called the root and is represented by the name / (just a forward slash).  You can think of all the directories and files on the system as a tree that grows upside-down from this root (Figure 2):

Figure 2 : Root Directory

## Absolute and relative paths

Every file and directory in the system has an "address" called its absolute path or sometimes just its path.  It describes the route you have to follow starting from the root that would take you to that particular file or directory.

For example, suppose you like the vim editor that we'll introduce in a later chapter, and are told you can start it by running the command `/usr/bin/vim`. This point underlines what we said in an earlier chapter: commands are just executable files. So the vim editor is a file with the path /usr/bin/vim, and if you run that command `/usr/bin/vim` you will execute the editor. As you can see from these examples, the slash / is also used as a separator between directories.

Can you find /usr/bin/vim in Figure 2? The pathname can be interpreted as follows:

1. Start at the root (/) directory.
2. Move from / down to a directory named usr.
3. Move from usr down to a directory named bin.
4. vim is located in that directory.

You are just getting used to the command line, and it may feel odd to be typing while reading this book. If you feel any confusion in this section, try scribbling the directory tree in Figure 2 on paper. Draw arrows on the paper as you run the commands in this section, to help orient you to where you are.

Note that you can't tell whether something is a file or a directory just by looking at its path.

When you work with the command line you will always be working "in" a directory.  You can find the path of this directory using the command `pwd` (print working directory), like this:

```
$ pwd
/home/ben
```

You can see that `pwd` prints an absolute path.  If you want to switch your working directory you can use the command `cd` (change directory) followed by an argument which points to the target directory:

```
$ cd /
```

You just changed your working directory to the root of the filesystem!  If you want to return to the previous directory, you can enter the command:

```
$ cd /home/ben
```

As an alternative, you can "work your way" back to /home/ben using relative paths.  They are called that because they are specified "in relation" to your current working directory.  If you go back to the root directory, you could enter the following commands:

```
$ cd /
$ cd home
$ cd ben
$ pwd
/home/ben
```

The first command changes your current working directory to the root. The second changes to home, relative to /, making your current working directory /home.  The third command changes it to ben, relative to /home, landing you in /home/ben.

### Good to be back home

Every user in the system has a directory assigned to him or her, called the home directory.  No matter what your current working directory is, you can quickly return to your home directory like this:

```
$ cd
```

That is, enter the `cd` command without any arguments.

All your files and preferences are stored in your home directory (or its subdirectories). Every user of your system with a login account gets her own home directory. Home directories are usually named the same as users' login names, and are usually found in /home, although a few systems have them in /usr/home. When you start your terminal, it will place you in your home directory.

There's a special shortcut to refer to your home directory, namely the symbol ~ (usually called a tilde, found near the top left of most keyboards). You can use it as part of more complex path expressions, and it will always refer to your home directory. For example, ~/Desktop refers to the directory called Desktop that usually exists within your home directory.
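Continuing with the user ben from the examples above, a small sketch (it assumes a Desktop directory exists inside the home directory):

```
$ cd ~/Desktop
$ pwd
/home/ben/Desktop
$ cd ~
$ pwd
/home/ben
```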

### The . and .. directories

The entries . and .. are special, and they exist in every directory, even the root directory itself (/). The first is shorthand for "this directory," while the second is shorthand for "the parent directory of this directory."  You can use them in relative paths; try it and see what happens when you do this:

```
$ pwd
/usr/bin
$ cd .
$ pwd
/usr/bin
```

If vim is in /usr/bin, at this point you could run it by typing the relative path:

```
$ ./vim
```

Continuing from the previous example, you can do this:

```
$ cd ..
$ pwd
/usr
```

Since they are actual entries in the filesystem, you can use them as part of more complex paths, for example:

```
$ cd /usr/bin
$ pwd
/usr/bin
$ cd ../lib
$ pwd
/usr/lib
$ cd ../..
$ pwd
/
$ cd home
$ pwd
/home
$ cd ../usr/bin
$ pwd
/usr/bin
```

The parent directory of the root directory, /.., is root itself.
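You can verify this at the prompt:

```
$ cd /..
$ pwd
/
```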

Try moving around your computer on the command line and you will soon get used to it!

# Basic commands

By now you have some basic knowledge about directories and files, and you can interact with the command line interface.  Now we can learn some of the commands you'll be using many times each day.

### ls

The first thing you likely need to know before you can start creating and making changes to files is: what's already there?  With a graphical interface you'd do this by opening a folder and inspecting its contents. From the command line you use the program `ls` instead to list a folder's contents.

```
$ ls
Desktop  Documents  Music  Photos
```

By default, `ls` will use a very compact output format. Many terminals show the files and subdirectories in different colors that represent different file types.  Regular files don't have special coloring applied to their names.  Some file types, like JPEG or PNG images, or tar and ZIP files, are usually colored differently, and the same is true for programs that you can run and for directories.  Try `ls` for yourself and compare the icons and emblems your graphical file manager uses with the colors that ls applies on the command line.  If the output isn't colored, you can call `ls` with the option `--color`:

```
$ ls --color
```

### man, info & apropos

You can learn about options and arguments using another program called `man` (`man` is short for manual) like this:

```
$ man ls
```

Here, `man` is being asked to bring up the manual page for `ls`. You can use the arrow keys to scroll up and down in the screen that appears and you can close it using the q key (for quit).

An alternative to obtain a comprehensive user documentation for a given program is to invoke `info` instead of `man`:

```
$ info ls
```

This is particularly effective to learn how to use complex GNU programs.  You can also browse the `info` documentation inside the editor Emacs, which greatly improves its readability.  But you should be ready to take your first step into the larger world of Emacs.  You may do so by invoking:

```
$ emacs -f info-standalone
```
that should display the Info main menu inside Emacs (if this does not work, try invoking `emacs` without arguments and then type Alt + x info, i.e. press the Alt key, then press the x key, then release both keys and finally type info followed by the Return or Enter key).  If you then type m ls, the interactive Info documentation for `ls` will be loaded inside Emacs.  In the standalone mode, the q key will quit the documentation, as usual with `man` and `info`.

Ok, now you know how to learn about using programs yourself.  If you don't know what something is or how to use it, the first place to look is its `man`ual and `info`rmation pages.  If you don't know the name of what you want to do, the `apropos` command can help.  Let's say you want to rename files but you don't know what command does that.  Try `apropos` with some word that is related to what you want, like this:

```
$ apropos rename
...
mv (1)               - move (rename) files
prename (1)          - renames multiple files
rename (2)           - change the name or location of a file
...
```

Here, `apropos` searches the manual pages that `man` knows about and prints commands it thinks are related to renaming.  On your computer this command might (and probably will) display more information but it's very likely to include the entries shown.

Note how the program names include a number beside them.  That number is called their section, and most programs that you can use from the command line will be in section 1.  You can pass `apropos` an option to display results from section 1 manuals only, like this:

```\$ apropos -s 1 rename
...
mv (1)               - move (rename) files
prename (1)          - renames multiple files
...
```

At this stage, the section number isn't terribly important.  Just know that section 1 manual pages are the ones that apply to programs you use on the command line.  To see a list of the other sections, look up the manual page for man using `man man`.

### mv

Looking at the results from `apropos`, that `mv` program looks interesting.  You can use it like this:

```\$ mv oldname newname
```

Depending on your system configuration, you may not be warned when renaming a file will overwrite an existing file whose name happens to be `newname`.  So, as a safeguard, always use the `-i` option when issuing `mv`, like this:

```\$ mv -i oldname newname
```

Just as the description provided by `apropos` suggests, this program moves files.  If the last argument happens to be an existing directory, `mv` will move the file to that directory instead of renaming it. Because of this, you can provide `mv` more than two arguments:

```\$ mv one_file another_file a_third_file ~/stuff
```

If ~/stuff exists, then `mv` will move the files there.  If it doesn't exist, it will produce an error message, like this:

```\$ mv one_file another_file a_third_file stuff
mv: target 'stuff' is not a directory
```

### mkdir

How do you create a directory, anyway?  Use the `mkdir` command:

```\$ mkdir ~/stuff
```

And how do you remove it?  With the `rmdir` command:

```\$ rmdir ~/stuff
```

If you wish to create a subdirectory (say the directory bar) inside another directory (say the directory foo), but you are not sure whether the latter exists, you can create the subdirectory and, if needed, its parent directory without raising errors by typing:

```\$ mkdir -p ~/foo/bar
```
This will work even for nested sub-sub-...-directories.

If the directory you wish to remove is not empty, `rmdir` will produce an error message and will not remove it.  If you want to remove a directory that contains files, you have to empty it first.  To see how this is done, we will need to create a directory and put some files in it; we can safely remove these files later.  Let's start by creating a directory called practice in your home directory and changing the current working directory there:

```\$ mkdir ~/practice
\$ cd ~/practice
```

### cp, rm & rmdir

Now let's copy some files there using the program `cp`.  We are going to use some files that are very likely to exist on your computer, so the following commands should work for you:

```\$ cp /etc/fstab /etc/hosts /etc/issue /etc/motd .
\$ ls
fstab  hosts  issue  motd
```

Don't forget the dot at the end of the line!  Remember, it means "this directory", and as the last argument passed to `cp` after a list of files, it names the directory into which to copy them.  If that list is very long, you should learn about globbing (expanding file name patterns containing wildcard characters into sets of existing file names) or other tricks to avoid wasting your time typing file names.  One trick helps when copying the entire content of a directory: by passing `cp` the option `-R`, you can recursively copy all the files and subdirectories from a given directory to the destination:

```\$ cp -R . ~/foo
\$ ls ~/foo
bar  fstab  hosts  issue  motd
\$ cp -R . ~/foo/bar
\$ ls -R ~/foo
~/foo:
bar  fstab  hosts  issue  motd

~/foo/bar:
fstab  hosts  issue  motd
```

In this case the current directory has no subdirectories, so only files were copied.  As you can see, the option `-R` can also be passed to `ls`, to recursively list the content of a given directory and of its subdirectories.
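The globbing mentioned earlier can be sketched like this (the directory and file names here are invented for the demonstration):

```shell
# create a scratch directory with a few files
mkdir -p /tmp/glob_demo
cd /tmp/glob_demo
touch note1.txt note2.txt picture.jpg

# the shell expands *.txt into the matching names before the command runs
echo *.txt    # prints: note1.txt note2.txt

# clean up
cd /
rm -r /tmp/glob_demo
```

The same pattern works with `cp`: `cp *.txt ~/stuff` copies every matching file without you typing each name.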

Now, if you go back to your home and try to remove the directory called practice, `rmdir` will produce an error message:

```\$ cd ..
\$ rmdir practice
rmdir: failed to remove 'practice': Directory not empty
```

You can use the program `rm` to remove the files first, like this:

```\$ rm practice/fstab practice/hosts practice/issue practice/motd
```

And now you can try removing the directory again:

```\$ rmdir practice
```

And now it works, without showing any output.

But what happens if your directories have directories inside them that also have files?  You could be there for weeks making sure each folder is empty!  The `rm` command solves this problem through the amazing option `-R`, which as usual stands for "recursive".  In the following example, the command fails because foo is not a plain file:

```\$ rm ~/foo/
rm: cannot remove `~/foo/`: Is a directory
```

So maybe you try `rmdir`, but that fails because foo has something else under it:

```\$ rmdir ~/foo
rmdir: ~/foo: Directory not empty
```

So you use `rm -R`, which succeeds and does not produce a message.

```\$ rm -R ~/foo/

```

So when you have a big directory, you don't have to go and empty every subdirectory.

But be warned that `-R` is a very powerful argument and you may lose data you wanted to keep!

### cat & less

You don't need an editor to view the contents of a file.  What you need is just to display it.  The `cat` program fits the bill here:

```\$ cat myspeech.txt
Friends, Romans, Countrymen! Lend me your ears!
```

Here, `cat` just opens the file myspeech.txt and prints the entire file to your screen, as fast as it can.  However, if the file is really long, the contents will go by very quickly, and when `cat` is done, all you will see are the last few lines of the file.  To just view the contents of a long file (or any text file) you can use the `less` program:

```\$ less myspeech.txt
```

Just as with using `man`, use the arrow keys to navigate, and press q to quit.

# Scripting

If you have a collection of commands you'd like to run together, you can combine them in a script and run them all at once. You can also pass arguments to the script so that it can operate on different files or other input.

Like an actor reading a movie script, the computer runs each command in your shell script, without waiting for you to whisper the next line in its ear. A script is a handy way to:

• Save yourself typing on a group of commands you often run together.
• Remember complicated commands, so you don't have to look up, or risk forgetting, the particular syntax each time you use it.
• Use control structures, like loops and case statements, to allow your scripts to do complex jobs. Writing these structures into a script can make them more convenient to type and easier to read.

Let's say you often have collections of images (say, from a digital camera) that you would like to make thumbnails of. Instead of opening hundreds of images in your image editor, you choose to do the job quickly from the command line. And because you may need to do this same job in the future, you might write a script. This way, the job of making thumbnails takes you only two commands:

```\$ cd images/digital_camera/vacation_pictures_March_2009
\$ make_thumbnails.sh
```

The second command, `make_thumbnails.sh`, is the script that does the job. You have made it previously and it resides in a directory on your search path. It might look something like this:

```#!/bin/bash
# This makes a directory containing thumbnails of all the jpegs in the current dir.
mkdir thumbnails
cp *.jpg thumbnails
cd thumbnails
mogrify -resize 400x300 *.jpg
```

If the first line begins with #! (pronounced "shebang"), it tells the kernel what interpreter is to be used.  (Since bash is usually the default, you might omit this line.)  After this you should put in some comments about what the script is for and how to use it.  Scripts without clear and complete usage instructions often do not "work right".  For bash, comments start with the hash (#) character and may be on the ends of executable lines.

The file includes commands conforming to the syntax of the interpreter. We've seen three of them before: `mkdir`, `cp`, and `cd`. The last command, `mogrify`, is a program that can resize images (and do a lot of other things besides). Read its manual page to learn more about it.

## Making scripts executable

To write a script like the one we've shown, open your favorite text editor and type in the commands you would like to run. For bash, you can put multiple commands on a single line so long as you put a semi-colon after each command so the shell knows a new command is starting.
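For example, the line below runs three commands in sequence; a trivial sketch (the variable names are arbitrary):

```shell
x=1; y=2; echo $((x + y))   # three commands on one line; prints 3
```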

Save the script. One common convention for bash is to use the `.sh` extension -- for example, make_thumbnails.sh.

There is one more step before you can run the script: it has to be executable. Remember from the section on permissions that executability is one of the permissions a file can have, so you can make your script executable by granting the execute (x) permission. The following command allows any user to execute the script:

```\$ chmod +x make_thumbnails.sh
```

Because you're probably planning to use the script often, you'll find it worthwhile to check your PATH and add the script to one of the directories in it (for instance, /home/jdoe/bin is an easy choice given the PATH shown here).

```\$ echo \$PATH
/usr/bin:/usr/local/bin:/home/jdoe/bin
```

For simple testing, if you're in the directory that contains the script, you can run it like this:

```\$ ./make_thumbnails.sh
```

Why do you need the preceding ./ path? Because most users don't have the current directory in their PATH environment variable. You can add it, but some users consider that a security risk.

Finally, you can also execute a script, even without its execute bit set, by passing it as an argument to the command interpreter, like this:

```\$ bash make_thumbnails.sh
```

## More control

To provide the flexibility you want, the bash shell and many other interpreters let you make choices in a script and run things repeatedly on a variety of inputs. In that regard, the shell is actually a programming language, and a nice way to get used to using the powerful features a programming language provides. We'll get you started here and show you the kinds of control the bash shell provides through compound statements.

### if

This statement was already introduced in the section on checking for errors, but we'll review it here. `if` is more or less what you'd expect, though its syntax is quite a bit different from its use in most other languages. It follows this form:

```if [ test-condition ]
then
do-something
else
do-something-else
fi
```

You read that right: the block must be terminated with the keyword `fi`.  (It's one of the things that makes using `if` fun.) The `else` portion is optional. Make sure to leave spaces around the opening and closing brackets; otherwise `if` reports a syntax error.

For example, if you need to check to see if you can read a file, you could write a chunk like this:

```if [ -r /home/joe/secretdata.txt ]
then
echo "You can read the file"
else
echo "You can't read that file!"
fi
```

`if` accepts a wide variety of tests. You can put any set of commands as the test-condition, but most `if` statements use the tests provided by the square bracket syntax. These are actually just a synonym for a command named `test`. So the first line of the preceding example could just as well have been written as follows.

```if test -r /home/joe/secretdata.txt
```

You can find out more about tests such as `-r` in the manual page for `test`. All the test operators can be used with square brackets, as we have.

Some useful `test` operators are:

• -r  File is readable
• -x  File is executable
• -e  File exists
• -d  File exists and is a directory

There are many, many more of them, and you can even test for multiple conditions at once. See the manual page for `test`.
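For instance, the `-a` operator combines two tests with "and".  This sketch checks that a file both exists and is readable (the file name is invented):

```shell
# create a scratch file so both tests succeed
touch /tmp/demo_testfile

if [ -e /tmp/demo_testfile -a -r /tmp/demo_testfile ]
then
echo "The file exists and is readable"
fi

# clean up
rm /tmp/demo_testfile
```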

### while (and until)

`while` is a loop control structure. It keeps cycling through until its test condition is no longer true. It takes the following form:

```while test-condition
do
step1
step2
...
done
```

You can also create loops that run until they are interrupted by the user. For example, this is one way (though not necessarily the best one) to look at who is logged into your system once every 30 seconds:

```while true
do
who
sleep 30
done
```

This is inelegant because the user has to press Ctrl + c or kill it in some other way. You can write a loop that ends when it encounters a condition by using the `break` command. For instance the following script uses the `read` command (quite useful in interactive scripts) to read a line of input from the user. We store the input in a variable named userinput and check it in the next line. The script uses another compound command we've already seen, `if`, within the `while` block, which allows us to decide whether to finish the `while` block. The `break` command ends the `while` block and continues with the rest of the script (not shown here). Notice that we use two tests through `-o`, which means "or". The user can enter Q in either lowercase or uppercase to quit.

```while true
do
echo "Enter input to process (enter Q to quit)"
read userinput

if [ "\$userinput" == "q" -o "\$userinput" == "Q" ]
then
break
fi

process input...

done
```

`until` works exactly the same way, except that the loop runs until the test condition becomes true.
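A small sketch of `until` in action: this loop keeps going until the counter reaches 3.

```shell
n=0
until [ $n -ge 3 ]
do
echo "n is $n"
n=$((n + 1))
done
```

It prints "n is 0", "n is 1" and "n is 2", then stops, because at that point the test condition becomes true.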

### case

`case` is a way for a script to respond to a set of test conditions. It works similarly to case statements in other programming languages, though it has its own peculiar syntax, which is best illustrated through an example.

```user=`whoami` # puts the username of the user executing the script
# into the \$user variable.
case \$user in
joe)
echo "Hello Joe. I know you'd like to know what time it is, so I'll show you below."
date
;;
amy)
echo "Good day, Amy. Here's your todo list."
cat /home/amy/amy-todo.txt
;;
sam|tex)
echo "Hi fella. Don't forget to watch the system load. The current system load is:"
uptime
;;
*)
echo "Welcome, whoever else you are. Get to work now, please."
;;
esac
```

Each case must be followed by the ) character, then a newline, then the list of steps to take, then a double semicolon (;;). The "*)" condition is a catchall, similar to the `default` keyword in some languages' `case` structures. If no other cases match, the shell executes this list of statements. Finally, the keyword `esac` ends the `case` statement. In the example shown, note the case that matches either the string "sam" or "tex".

### for

`for` is a useful way of iterating through items in a list. It can be any list of strings, but it's particularly useful for iterating through a file list. The following example iterates through all of the files in the directory myfiles and creates a backup file for each one. (It would choke on any directories, but let's keep the example simple and not test for whether the file is a directory.)

```for filename in myfiles/*
do
cp \$filename \$filename.bak
done
```

As with any command that sets a variable, the first line of the `for` block sets the variable called filename without a dollar sign.

There's another variety of `for`, similar to the `for` construct used in C-like languages, but it is used less often in shell scripting, partly because the syntax for incrementing and decrementing variables in the shell is not entirely straightforward.
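For the curious, here is a sketch of that second variety in bash.  The (( )) arithmetic syntax is bash-specific, so the first line matters here:

```shell
#!/bin/bash
for (( i = 1; i <= 3; i++ ))
do
echo "pass number $i"
done
```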

### parallel

`parallel` is a useful way of iterating through items in a list while maximizing the use of your computer by running jobs in parallel. The following example iterates through all of the files in the directory myfiles and creates a backup file for each one replacing the extension with `.bak`. (It would choke on any directories, but let's keep the example simple and not test for whether the file is a directory.)

```ls myfiles/* | parallel cp {} {.}.bak
```

`parallel` can often be used instead of `for` loops and `while read` loops, making them run faster by executing the iterations in parallel and making the code easier to read.

`parallel` offers many more advanced features. You can watch an intro video here: http://www.youtube.com/watch?v=OpaiGYxkSuQ

# Maintainable Scripts

You are slowly delving into programming by way of shell scripting.  Now is a good time to start learning how to be a good programmer.  Since this book is just an introduction to the command line, we are only going to provide a few, but nevertheless very important, hints centered around the idea of maintainability.

When programmers talk about maintainability they are talking about the ease with which a program can be modified, whether it's to correct defects, add new functionality, or improve its performance.  Unmaintainable programs are very easy to spot: they lack structure, so functionality is spread all over the place.  When you push here, they break way over there; a real nightmare.  In general, they are very hard to read.  Consider for example this:

```#!/bin/sh
identify `find ~/Photos/Vacation/2008 -name \*.jpg` | cut -d ' ' -f 3 | sort | uniq -c
```

use your favorite editor to save this file as foo, then:

```\$ chmod +x foo
\$ ./foo
11 2304x3072
12 3072x2304
```

What that small monster does is find files that end with ".jpg" in a certain directory, run `identify` on all of them, and report some kind of information that someone at some time must have thought very useful.  If only the programmer had added some hints as to what the program does...

## Don't use long lines

The first thing you'll note is that our example of an unmaintainable program is one long line. There's really no need for that.  What if the program looked like this instead:

```#!/bin/sh
identify `find ~/Photos/Vacation/2008 -name \*.jpg` |
cut -d ' ' -f 3 |
sort |
uniq -c
```

It becomes a little bit easier to spot where each command begins and ends.  It's still the same set of piped programs, only their presentation is different.  You can break long lines at pipes and the functionality will be the same.

You can also split one command into several lines by using the \ character at the end of a line to join it with the next:

```#!/bin/sh
echo This \
is \
really \
one \
long \
command.
```

## Use descriptive names for your scripts

The second thing you might have noticed is that the script is called "foo".  It's short and convenient but it doesn't provide a clue as to what the program does.  What about this:

```\$ mv foo list_image_sizes
```

Now the name helps the user understand what the script does.  Much better, isn't it?

## Use variables

One bothersome thing about that program is its use of backticks.  Sure, it works, but it also has drawbacks.  Perhaps the biggest one is the least evident one, too: remember that backticks substitute the output of the command they contain in the position where they appear.  Some systems have a limit of the command line length they allow.  In this particular case, if the specified directory has lots and lots of pictures, the command line can become extraordinarily long, producing an obscure error when you call the program.  There are several methods that you can use to remedy this, but for the purpose of this explanation, let's try the following:

```#!/bin/sh
find ~/Photos/Vacation/2008 -name \*.jpg |
while read image ; do identify \$image ; done |
cut -d ' ' -f 3 |
sort |
uniq -c
```

Now `find` runs the same as before, but its output, the list of file names, is piped into a while-loop.  The condition for the loop is `read image`.  `read` is a shell builtin that reads one line at a time, splits its input into fields and then assigns each field to a variable, image in this case.  Now `identify` works on one image at a time.
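A minimal sketch of how `read` splits its input into fields (the words here are arbitrary): the first field lands in the first variable, and everything left over lands in the last one.

```shell
echo "alpha beta gamma" | while read first rest
do
echo "first field: $first"
echo "the rest: $rest"
done
```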

Notice how introducing a variable makes the program a bit easier to read: it literally says that you wish to identify an image.  Also note how the effect on future programmers wouldn't have been the same if the variable was called something like door or cdrom.  Names are important!

But there's still something bothersome about the program: that directory name sticks out like a sore thumb.  What if we change the program like this:

```#!/bin/sh
START_DIRECTORY=~/Photos/Vacation/2008

find \$START_DIRECTORY -name \*.jpg |
while read image ; do identify \$image ; done |
cut -d ' ' -f 3 |
sort |
uniq -c
```

That's a little bit better: now you can edit your script and change the directory each time you wish to process a different one.

## Use arguments

That last bit didn't sound quite right, did it?  After all, you don't edit `ls` each time you wish to list the contents of a different directory, do you? Let's make our program just as adaptable:

```#!/bin/sh
START_DIRECTORY=\$1

find \$START_DIRECTORY -name \*.jpg |
while read image ; do identify \$image ; done |
cut -d ' ' -f 3 |
sort |
uniq -c
```

The \$1 variable is the first argument that you pass to your script (\$0 is the name of the script you're running).  Now you can call your script like this:

```\$ ./list_image_sizes ~/Photos/Vacation/2008
```

Or you can examine the 2007 pictures, if you wish:

```\$ ./list_image_sizes ~/Photos/Vacation/2007
```

## Know where you begin

Consider what happens if you run the script like this:

```\$ ./list_image_sizes
```

Maybe that's what you want, but maybe it isn't.  What happens is that \$1 is empty, so \$START_DIRECTORY is empty as well and in turn the first argument to find is also empty.  That means that find will search your current working directory.  You might wish to make that behavior explicit:

```#!/bin/sh
if test -n "\$1" ; then
START_DIRECTORY=\$1
else
START_DIRECTORY=.
fi

find \$START_DIRECTORY -name \*.jpg |
while read image ; do identify \$image ; done |
cut -d ' ' -f 3 |
sort |
uniq -c
```

The program behaves exactly as before, with the only difference that in six months, when you come back and look at the program, you won't have to wonder why it's producing results even when you don't pass it a directory as argument.
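As an aside, the shell offers a more compact way to express the same default through parameter expansion: ${1:-.} evaluates to the first argument if one was given, and to "." otherwise.  A sketch:

```shell
#!/bin/sh
# fall back to the current directory when no argument is given
START_DIRECTORY=${1:-.}
echo "Will search $START_DIRECTORY"
```

Whether you prefer this or the explicit if/else above is a matter of taste; the if/else version is arguably easier for a newcomer to read.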

## Look before you leap

Speaking of which, what happens if you do pass an argument to the script, but that argument isn't a directory or better yet, it doesn't even exist?  Try it.

Not pretty, eh?

What if we do this:

```#!/bin/sh
if test -n "\$1" ; then
START_DIRECTORY=\$1
else
START_DIRECTORY=.
fi

if ! test -d \$START_DIRECTORY ; then
exit
fi

find \$START_DIRECTORY -name \*.jpg |
while read image ; do identify \$image ; done |
cut -d ' ' -f 3 |
sort |
uniq -c
```

That's better.  Now the script won't even attempt to run if the argument it receives isn't a directory.  It isn't very polite, though: it silently exits with no hint of what went wrong.

## Complain if you must

That's easily fixed:

```#!/bin/sh
if test -n "\$1" ; then
START_DIRECTORY=\$1
else
START_DIRECTORY=.
fi

if ! test -d \$START_DIRECTORY ; then
echo \"\$START_DIRECTORY\" is not a directory or it does not exist.  Stop.
exit
fi

find \$START_DIRECTORY -name \*.jpg |
while read image ; do identify \$image ; done |
cut -d ' ' -f 3 |
sort |
uniq -c
```

## Mind your exit

The program now produces an error message if you don't pass it an existing directory as argument and it exits without further action.  It would be nice if you let other programs that might eventually call your script know that there was an error condition.  That is, it would be nice if your program exits with an error code.  Something like this:

```#!/bin/sh
if test -n "\$1" ; then
START_DIRECTORY=\$1
else
START_DIRECTORY=.
fi

if ! test -d \$START_DIRECTORY ; then
echo \"\$START_DIRECTORY\" is not a directory or it does not exist.  Stop.
exit 1
fi

find \$START_DIRECTORY -name \*.jpg |
while read image ; do identify \$image ; done |
cut -d ' ' -f 3 |
sort |
uniq -c
```

Now, if there's an error, your script's exit code is 1. If the program exits normally, the exit code is 0.
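You can watch exit codes yourself through the special variable \$?, which holds the exit status of the most recent command:

```shell
true
echo $?           # prints 0: true always succeeds
false || echo $?  # prints 1: false always fails (the || keeps the script going)
```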

## Use comments

Anything following a # symbol on a line will be ignored, allowing you to add notes about how your script works.  For example:

```#!/bin/sh
# This script reports the sizes of all the JPEG files found under the current
# directory (or the directory passed as an argument) and the number of photos
# of each size.

if test -n "\$1" ; then
START_DIRECTORY=\$1
else
START_DIRECTORY=.
fi

if ! test -d \$START_DIRECTORY ; then
echo \"\$START_DIRECTORY\" is not a directory or it does not exist.  Stop.
exit 1
fi

find \$START_DIRECTORY -name \*.jpg |
while read image ; do identify \$image ; done |
cut -d ' ' -f 3 |
sort |
uniq -c
```

Comments are good, but don't fall prey to writing too many of them.  Try to construct your program so that the code itself is clear.  The reason behind this is simple: next year, when someone else changes your script, that other person could well change the commands and forget about the comments, making the latter misleading.  Consider this:

```# count up to three
for n in `seq 1 4` ; do echo \$n ; done
```

Which one is it?  Three or four?  Evidently the program is counting up to four, but the comment says it's up to three.  You could adopt the position that the program is right and the comment is wrong.  But what if the person who wrote this meant to count to three and that's the reason why the comment is there?  Let's try it like this:

```# There are three little pigs
for n in `seq 1 3` ; do echo \$n ; done
```

The comment documents the reason why the program is counting up to three: it is not describing what the program does, it's describing what the program should do.  Let's consider a different approach:

```TOTAL_PIGS=3
for pig in `seq 1 \$TOTAL_PIGS` ; do echo \$pig ; done
```

Same result, slightly different program.  If you reformat your program, you can do without the comments (as a side note, the fancy word for the kind of changes we have been making is refactoring, but that goes beyond the scope of this book).

## Avoid magic numbers

In our current example, there's a magic number, a number that makes the program work, but no one knows why it has to be that number.  It's magic!

```...
cut -d ' ' -f 3 |
...
```

You have two choices: write a comment and document why it has to be "3" instead of "2" or "4" or introduce a variable that explains why by way of its name.  Let's try the latter:

```#!/bin/sh
# This script reports the sizes of all the JPEG files found under the current
# directory (or the directory passed as an argument) and the number of photos
# of each size.

if test -n "\$1" ; then
START_DIRECTORY=\$1
else
START_DIRECTORY=.
fi

if ! test -d \$START_DIRECTORY ; then
echo \"\$START_DIRECTORY\" is not a directory or it does not exist.  Stop.
exit 1
fi

IMAGE_SIZE_FIELD=3

find \$START_DIRECTORY -name \*.jpg |
while read image ; do identify \$image ; done |
cut -d ' ' -f \$IMAGE_SIZE_FIELD |
sort |
uniq -c
```

It does improve things a little; at least now we know where the 3 comes from.  If ImageMagick ever changes the output format, we can update the script accordingly.

## Did it work?

Last but not least, check the exit status of the commands you run.  As it stands right now, in our example there's not much that can fail.  So let's try one last example:

```#!/bin/sh
# Copy all the HTML and image files present in the source directory to the
# specified destination directory.

SRC=\$1
DST=\$2

if test -z "\$SRC" -o -z "\$DST" ; then
cat<<EOT
Usage:

\$0 source_directory destination_directory
EOT
exit 1
fi

if ! test -d "\$SRC" ; then
echo \"\$SRC\" is not a directory or it does not exist.  Stop.
exit 1
fi

if test -e "\$DST" ; then
echo \"\$DST\" already exists.  Stop.
exit 1
fi

if ! mkdir -p "\$DST" ; then
echo Can\'t create destination directory \"\$DST\".  Stop.
exit 1
fi

# Obtain the absolute path for \$DST
cd "\$DST"
DST=`pwd`
cd -

cd "\$SRC"

find ! -type d \( -name \*.html -o -name \*.jpg -o -name \*.png \) |
while read filename ; do
dir=`dirname "\$filename"`
mkdir -p "\$DST/\$dir" && cp -a "\$filename" "\$DST/\$filename"
if test \$? -ne 0 ; then
echo Can\'t copy \"\$filename\" to \"\$DST/\$filename\"
echo Abort.
exit 1
fi
done
```

Note that this example makes use of many things you learned in this book. It does not try to be definitive; you can practice improving it!

The thing you should note now is how the program pays attention to the error conditions that the different programs might produce.  For example, instead of just calling `mkdir` and assuming it worked, it does this:

```if ! mkdir -p "\$DST" ; then
echo Can\'t create destination directory \"\$DST\".  Stop.
exit 1
fi
```

It calls `mkdir` as the condition for `if`.  If `mkdir` encounters an error, it will exit with a non-zero status and the `if` clause will interpret that as a false condition.  The "!" is a negation operator that inverts false to true (or vice versa).  So the line as a whole basically says: run the `mkdir` command, turn an error into a true value with the "!" operator, and take action if it's true that there was an error.  In short, if `mkdir` encounters an error, the flow will enter the body of the `if`.  This might happen, for example, if the user running the script doesn't have permission to create the requested directory.

Note also the usage of "&&" to verify error conditions:

```mkdir -p "\$DST/\$dir" && cp -a "\$filename" "\$DST/\$filename"
```

If `mkdir` fails, `cp` won't be called.  Furthermore, if either `mkdir` or `cp` fails, the exit status will be non-zero.  That condition is checked in the next line:

```if test \$? -ne 0 ; then
```

Since this might indicate something going awfully wrong (e.g., is the disk full?), we had better give up and stop the program.
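The mirror image of "&&" is "||": the command on its right runs only when the command on its left fails, which gives a compact way to bail out on errors.  A sketch (the directory name is invented):

```shell
#!/bin/sh
DST=/tmp/demo_destination

# if mkdir fails, the block after || runs and the script stops
mkdir -p "$DST" || { echo "Can't create $DST.  Stop." ; exit 1 ; }

echo "Created $DST"
rmdir "$DST"
```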

## Wrapping up

Writing scripts is an art. You can become a better artist by looking at what others have done before you and doing a lot yourself. In other words: read a lot of scripts and write a lot of scripts yourself.

Happy hacking!

# Python

Python is a programming language that can be used to perform tasks that would be difficult or cumbersome on the command line. Python is included by default with most GNU/Linux distributions. Just like the command line, you can either use Python by typing commands individually, or you can create a script file. If you want to type commands individually, start the Python interpreter by typing `python`.

```\$ python
>>> 10 + 10
20

```

To exit an interactive Python session, type Ctrl + d.

To write a multi-line script in Python that you can run from outside of the Python interactive console, put the commands in a file. You can use any text editor to create this file -- Emacs, Vim, Gedit, or whatever your favorite is; and call it what you like (often the filename ends with ".py" to help distinguish it in a directory listing). A script could look like this:

```a = 1 + 2
print a
```

In this example, we create a variable, a, which stores the result of "1 + 2". It then uses the print command to print out the result, which should be 3. If we save this file as first.py, we can run it from the command line.

```\$ python first.py
3
```

The Python program printed out "3", just like we expected. We can add a first line specifying the python interpreter, make it executable and then just type ./first.py to run it. If we still have the first.pl file from the previous chapter, it does exactly the same thing and we can make a link to one of them in order to choose the method by which we add 1 and 2.

```\$ ln -s first.py 1plus2
\$./1plus2
3
\$ln -sf first.pl 1plus2
\$./1plus2
3
```

Of course, we can use Python to do more useful things. For example, we can look at all the files in the current directory.

```\$ python
>>> import os
>>> os.listdir('.')
['notes.txt', 'readme.txt', 'first.py']
```

Here we import the standard library "os", which has operating system-like functions in it.  We call the listdir function to return a list of names of files in the current directory.  We pass the directory name as a string (enclosed in single quotes); the single dot refers to the current directory.

Let's try doing something with these files -- here's a way to find all of the ".py" files in a directory.

```
>>> files = os.listdir('.')
>>> files
['notes.txt', 'readme.txt', 'first.py']
>>> [file for file in files if '.py' in file]
['first.py']
```

Above we use a powerful construction called a list comprehension to produce a new list by transforming and filtering a list. Below is a simpler but wordier way to pick out all of the files with ".txt" in them.

```
>>> for file in files:
...     if '.txt' in file:
...         print file
...
notes.txt
readme.txt
```

The indentation is required in Python. Indentation tells the Python interpreter what to include in the for loop and what to include in the if statement. Also, you must press Enter an additional time at the last set of three dots to tell the Python interpreter that you're done.

We can also run command line programs from Python by passing them to the os.system function. For example, if we wanted to delete all of the ".txt" files, we could use:

```
>>> for file in files:
...     if '.txt' in file:
...         cmd = 'rm ' + file
...         os.system(cmd)
...
```

Above, we construct a shell command cmd as a Python string by concatenating (using the "+" operator) the strings "rm " and the filename, then pass it to the os.system function.  Now we can check to see that the files have been deleted.

```
>>> os.system('ls')
first.py
```
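As an aside, the standard library can also delete files directly, with no shell command at all; here is a small sketch using os.remove (the scratch.txt filename is just for illustration):

```python
import os

# Create a throwaway file, then delete it directly with os.remove
# instead of spawning "rm" through the shell.
open('scratch.txt', 'w').close()
if os.path.exists('scratch.txt'):
    os.remove('scratch.txt')

print(os.path.exists('scratch.txt'))  # prints False: the file is gone
```

Deleting with os.remove avoids building a command string, which also sidesteps problems with filenames that contain spaces or shell metacharacters.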

## More information about Python

The Python web site at http://www.python.org  contains an impressive amount of information and documentation about the Python language.  If you are just getting started with programming, the book "How to Think Like a Computer Scientist" by Jeffrey Elkner, Allen B. Downey and Chris Meyers at http://openbookproject.net/thinkCSpy/index.html is a good place to start.

# WordPress

WordPress was originally developed as blogging software: a blog (web-log) is an online journal, diary or commentary, presented as a website. Generally, one or more contributors (bloggers) add new content to the top of the website on a semi-regular basis. WordPress is now sophisticated enough to be used to create more advanced websites, and as such we should really call it a Content Management System (CMS). However, as it was not built for managing websites with large numbers of users, WordPress may sometimes require a bit of creativity to get it to do what you need.

WordPress has a reputation for being easy to use and flexible. It is particularly notable for its enormous list of plugins. These are small additions to the software that add extra features. With plugins you can turn WordPress into a social networking site, add testimonials, generate visitor statistics, and much much more.

There are two basic ways in which you can start using WordPress:

• Sign up for an account on WordPress.com or a similar site that offers free blog hosting.
• Install your own WordPress CMS on a server that you have access to.

The first option is the easiest, but if you wish to have more control over how WordPress is configured and be able to add more functionality, then you will want the second option of installing WordPress yourself. In this case, you should download the software from WordPress.org.

## What is the difference between WordPress.com and WordPress.org?

The difference between WordPress.com and WordPress.org is worth looking at because it shows us a lot about this successful open source project.

WordPress.org is home to the blogging software WordPress. Here you can download the software and get support on installing, troubleshooting and using the software. Huge numbers of people write code voluntarily to make the software work better. This is possible because the software has an open licence which allows anyone to download it, use it and improve it. There is more information about the WordPress software on the About page of WordPress.org.1 Over 25 million independently-hosted blogs use this software.

WordPress.com is run by Automattic, a company that uses and adapts the WordPress software to provide a  service for bloggers. In this way, you can use WordPress without having to know how to install software on a server. It's free to sign up and use the basic service, but you pay extra to host videos, modify the layout (by editing the CSS), and to add other premium services. This commercial venture funds coders to spend time developing the core of WordPress code which benefits the wider community. There is more information about Automattic and Wordpress.com on their About page.2 Over 20 million blogs are hosted on Wordpress.com.

### Privacy limitations of WordPress.com

WordPress.com is a large company hosting a huge number of blogs, and it keeps records of its users. It is also a US company and it complies with US law when it comes to disclosing users' details. Their privacy policy for international cases is less clear.

For the most part, WordPress.com seems to host controversial content without problem. However, there have been several cases where Automattic has come under criticism for releasing information about its users and for suspending blogs. If this is something that concerns you, then you should try to set up your own blog or use a WordPress service other than Wordpress.com.

## WordPress for Blogs and Websites

WordPress started out as blogging software but it is now being used for much more. This chapter looks at what a blog is and then compares blogs to websites. There is another chapter which looks at WordPress and online communities.

### What is a Blog?

A blog is a shortened version of the term "web-log". A "log" is a written record, such as a diary, and a "web-log" is a diary that is kept on the web. The first blogs were very much like diaries, or personal accounts of people's lives. However, blogs are now used for a wide range of activities. Blogs are used by independent journalists for publishing and they are used by companies for reporting on internal activities. Web businesses use them to inform their users about what they are doing, or they are used as a kind of online magazine. There are many, many other uses for blogs.

### Websites compared to Blogs

Websites are often slower to change compared to blogs, and have more static information. Structurally they have more of a focus on menus and sub-sections, allowing visitors to navigate to the part of the website that contains the information they need, whereas blogs often display a list of posts on a single page with the most recent entry at the top. Websites are often maintained by more than one person, so different levels of user access can be useful.

Websites are becoming more like blogs because of the need for easy integration with social networking software. At the same time, blogging software is becoming more suitable for creating websites. For WordPress this is especially true due to more advanced use of menus, different user roles, community focus and the intuitive user interface that it offers.

If you are considering spending a lot of time and potentially money (if you are paying others) on a website with other tools it is definitely worth asking the question: "Can we quickly create a site that fulfills our needs with WordPress?" If the answer is yes, keep reading this manual!

1. http://www.wordpress.org/about
2. http://wordpress.com/about

# Pure Data

Pure Data (or Pd) is a real-time graphical programming environment for audio, video, and graphical processing. Pure Data is commonly used for live music performance, VeeJaying, sound effects, composition, audio analysis, interfacing with sensors, using cameras, controlling robots or even interacting with websites.  Because all of these various media are handled as digital data within the program, many fascinating opportunities for cross-synthesis between them exist. Sound can be used to manipulate video, which could then be streamed over the internet to another computer which might analyze that video and use it to control a motor-driven installation.

Programming with Pure Data is a unique interaction that is much closer to the experience of manipulating things in the physical world.  The most basic unit of functionality is a box, and the program is formed by connecting these boxes together into diagrams that both represent the flow of data while actually performing the operations mapped out in the diagram.  The program itself is always running, there is no separation between writing the program and running the program, and each action takes effect the moment it is completed.

The community of users and programmers around Pure Data have created additional functions (called "externals" or "external libraries") which are used for a wide variety of other purposes, such as video processing, the playback and streaming of MP3s or Quicktime video, the manipulation and display of 3-dimensional objects and the modeling of virtual physical objects. There is a wide range of external libraries available which give Pure Data additional features. Just about any kind of programming is feasible using Pure Data as long as there are external libraries which provide the most basic units of functionality required.

The core of Pure Data is written and maintained by Miller S. Puckette (http://crca.ucsd.edu/~msp/) and includes the work of many developers (http://www.puredata.org/), making the whole package very much a community effort. Pd runs on GNU/Linux, Windows, and Mac OS X, as well as mobile platforms like Maemo, iPhoneOS, and Android.


# Installing on Mac OS X

1. To download Firefox, visit http://www.mozilla.com/ and click on the big green button labeled "Firefox 3.5 Free Download." The page shown below opens, and the download starts. If it does not, click the link on the page.

2. When prompted, click OK.
Once the download is complete this window appears:
3. Click and hold the Firefox.app icon, then drag it on top of the Applications icon. When it is on top of the Applications icon, release the mouse button. This starts copying the program files to the Applications directory on your computer.
4. When the installation step is finished, close the two small Firefox windows.
5. Eject the Firefox disk image. If this does not work by normal means, select the disk image icon and then, in the Finder menu, select File > Eject Firefox.
6. Now, open the Applications directory and drag the Firefox icon to the dock:

7. Click either icon (in the Dock or the Applications folder) to start Firefox. The Import Wizard dialog box appears:

8. To import your bookmarks, passwords and other data from Safari, click Continue.
9. Click Continue. Now you see the Welcome to Firefox page.

• To learn basic information about Firefox, click Getting Started.
• For assistance, click Visit Support.
• To customize your new installation using the addons wizard, click Customize Now!
• In the upper right of the Welcome page is a button labeled Know your rights. Click this button to display the following screen, which tells you about your rights under the Mozilla Public License and provides links to Mozilla's privacy policies and service terms, as well as trademark information.

10. Close the Welcome to Firefox page (click the x in the tab at the top of the page). Now you see the Firefox Start page.

Congratulations, you are now ready to use Firefox!

If you have permission problems when trying to copy Firefox from the disk image to your Applications folder, first try deleting your old Firefox copy, then proceeding.

If you're installing a beta and you want to keep your former copy of Firefox, first rename your old copy to something like "Firefox old" and then copy the beta to your Applications folder.

# Using Tabbed Browsing

Tabbed browsing enables you to open several web pages in a single Firefox window. Each page appears in a separate tab.

When you start Firefox for the first time, you see a single window with two tabs displayed. Firefox will show the "Welcome to Firefox" page and default start page.

## Setting Up Tabbed Browsing

To set up tabbed browsing, follow these steps:

1. Open the options window: click Tools > Options on Windows, Firefox > Preferences on Mac OS X, or Edit > Preferences on Linux.
2. In the Options window, click the Tabs icon.
3. Under New pages should be opened in, click a new tab.
4. Check the following options:
• Always show the tab bar
• When I open a new tab, switch to it immediately
5. Click OK.
• Note: you can always right-click a link and choose "Open Link in New Tab" or "Open Link in New Page" regardless of the above settings.

## Opening a New Tab

You can open a new tab using any of the following techniques:

• Use this menu command: File > New Tab.
• Press Ctrl+T on Windows and Linux, or CMD+T (⌘+T) on Mac OS X.
• Double-click an empty space in the Tab Bar.

New tabs will open immediately to the right of the current tab in Firefox 3.6 and later. In earlier versions of Firefox, the tab would be opened to the right of all open tabs.

## Opening a Link in a New Tab

To open a link in a new tab, do one of the following:

• If you have a mouse with a scroll wheel, click the wheel while pointing to a link.
• Drag the link and drop it on an empty space in the Tab Bar.

Note: If only one web page is open, the Tab Bar may be hidden. See Setting Up Tabbed Browsing above for information on how to turn on the Tab Bar.

• Drag and drop the link on a tab to open the link in that tab.
• Right-click the link, then select Open Link in New Tab.
• To open a link in a new tab from the Location Bar, type the URL, then press Alt+Enter.

## Moving Tabs Within a Window

Tabs appear in the order in which you open them. This may not always be what you want.

To move a tab to a different location in the Firefox tab bar, simply drag it there using your mouse. While you are dragging the tab, Firefox displays a small indicator to show where the tab will be moved. You can also use keyboard shortcuts to move tabs within a window.

## Moving Tabs Between Windows

You can move tabs between windows or create a new window with a single tab by dragging the tab off the tab bar.  Firefox will create a new window with just that tab.  You can move tabs between windows by dragging tabs from the tab bar in one window to the tab bar of another.  If you drag away the only tab in a window, Firefox will close that window.

## Bookmarking a Set of Tabs

To bookmark a set of tabs, do either of the following:

• Click the following menu command: Bookmarks > Bookmark All Tabs.
• Press Ctrl+Shift+D.

## Opening a Set of Bookmarks in Tabs

To open a set of bookmarks in tabs, follow these steps:

1. Click the Bookmarks menu, then click a bookmark folder.
2. In the sub-menu, click Open All in Tabs; or

If you have a mouse with a scroll wheel, click the wheel while pointing to the folder.

## Setting Multiple Home Pages as Tabs

Instead of using only one web page as your home page, you can set up a collection of home pages using multiple tabs.

To set up multiple tabbed  home pages, follow these steps:

1. Navigate to a page that you want to include in your collection of home pages.
2. Open a new, blank tab, then navigate to another page that you want to include.
3. Continue opening tabs and navigating to pages until you have set up the entire collection.
4. Click the following menu command: Tools > Options.
5. Click the Main icon.
6. Under Startup, click Use Current Pages.

Now, whenever you start Firefox or click the Home button on the Navigation toolbar, Firefox displays the entire collection of tabbed home pages.

## Closing Tabs

To close a single tab, do any of the following:

• Click the Close Tab button.
• Press Ctrl+W.
• If you have a mouse with a scroll wheel, click the wheel while pointing to a tab.
• Click the following menu command: File > Close Tab.

To close all tabs except the current one, right-click the tab (on Mac OS X, hold down the Ctrl key while you click it), then click Close Other Tabs.

If you close the last tab in a window, Firefox will close the window.

## Restoring Closed Tabs

Firefox tracks the tabs that you have recently closed. To restore one or all closed tabs, follow these steps:

1. Click the following menu command: History > Recently Closed Tabs.
2. Do any of the following:
* Click the name of the tab that you want to re-open.
* Click Open All in Tabs to restore all of the closed tabs.
* Press Ctrl+Shift+T to open each tab one by one in reverse order.

# About This Manual

This manual was started by the team at FLOSS Manuals, and evolved during a two-day Book Sprint at Toronto Open Source Week 2010 held at Seneca College in Toronto, Canada. The sprint was a collaborative effort by FLOSS Manuals and Mozilla Messaging.

Scott Nesbitt did the organization for the event with considerable assistance from Chris Tyler (Seneca College), Beth Agnew (Seneca College) and Adam Hyde.

Blake Winton (Thunderbird Hacker at Mozilla Messaging) also attended.

Around 20 writers, including a number of students from Seneca College's Technical Communications program, collaborated in virtual and real space to produce a book in two days! In addition to original content, material was reused from the excellent Thunderbird Support Knowledge Base.

# Installing Thunderbird on Mac OS X

Thunderbird runs on Mac OS X 10.4.x and later. Thunderbird will run on a computer with at least the following hardware:

• An Intel x86 or PowerPC G3, G4, or G5 processor
• 256 MB of memory. Mozilla recommends 512 MB of memory or more
• 200 MB hard drive space

## Download and Install Thunderbird

1. Use your web browser to visit the Thunderbird download page at http://www.mozillamessaging.com/en-US/thunderbird/. This page detects your computer's operating system and language, and it recommends the best version of Thunderbird for you to use.

If you want to use Thunderbird in a different language or on a different operating system, click the Other Systems and Languages link on the right side of the page and select the version you need.

2. Download the Thunderbird disk image. When the download is complete, the disk image may automatically open and mount a new volume called Thunderbird.
If the volume did not mount automatically, open the Download folder and double-click the disk image to mount it. A Finder window appears:

3. Drag the Thunderbird icon into your Applications folder. You've installed Thunderbird!
4. Optionally, drag the Thunderbird icon from the Applications folder into the Dock. Choosing the Thunderbird icon from the Dock lets you quickly open Thunderbird from there.

Note: When you run Thunderbird for the first time, newer versions of Mac OS X (10.5 or later) will warn you that the application Thunderbird.app was downloaded from the Internet.

If you downloaded Thunderbird from the Mozilla site, click the Open button.

# Uninstalling

Removing Thunderbird is pretty easy to do. But the process to uninstall Thunderbird varies depending on the operating system you are using. This chapter looks at the steps for uninstalling Thunderbird on Windows, Mac OS X, and Linux.

## Uninstalling Thunderbird in Windows

1. Go to the uninstall folder inside the Mozilla Thunderbird installation folder. This is most likely located at C:\Program Files\Mozilla Thunderbird\uninstall.
2. Double-click the helper.exe application file.
3. The Thunderbird Uninstall Wizard opens up. Click the Uninstall button to begin removing Thunderbird from your computer. If you want to stop the removal process, click the Cancel button.
4. Click the Finish button to complete the removal.

## Uninstalling Thunderbird in Mac OS X

1. Open a new window in the Finder.
2. In the Applications folder, right-click Thunderbird.app. A menu appears.
3. Select Move to Trash.
4. In your user Library, right-click the Thunderbird folder. A menu appears.
5. Select Move to Trash.

## Uninstalling Thunderbird in Ubuntu

### To uninstall Thunderbird using the Ubuntu Software Center

1. Click Ubuntu Software Center under the Applications menu.
2. Type "Thunderbird" in the search box and press Enter on your keyboard. The Ubuntu Software Center will find Thunderbird in its list of available software.
3. Click the Remove button.
4. After Thunderbird is uninstalled, start Nautilus and press Ctrl+H to show hidden files. Then, delete the folder .mozilla-thunderbird in your home directory.

### To uninstall Thunderbird using Synaptic Package Manager

1. Click Administration under the System menu, then click Synaptic Package Manager.
2. In the Quick search box, type "Thunderbird" and then press Enter on your keyboard. A list of software that you can install (called packages) appears.
3. Find Thunderbird in the list, right-click it, and then click Mark for complete removal from the menu that appears.
4. Click the Apply button.
5. After Thunderbird is uninstalled, start Nautilus and press Ctrl+H to show hidden files. Then, delete the folder .mozilla-thunderbird in your home directory.

# Migrate to Thunderbird

If you want to migrate your email from another email client or a web email service like Gmail or Hotmail, Thunderbird can help you do the job. However, migration is not always easy, especially if you have a large archive of sorted email and address books that you want to keep. That said, Thunderbird does quite a good job of migrating your mail from Outlook, Outlook Express, Eudora, and Apple Mail.

## Before you migrate

Before you consider migrating to Thunderbird, be prepared to do a little research on the best way to back up your email.

First, you should find out whether or not you can migrate your email from your current software. A good place to find information is the Mozilla knowledge base:
http://kb.mozillazine.org/Thunderbird_:_FAQs_:_Migration#Specific_programs

Always back up your email. How you do the backup depends on your operating system and the way that you currently manage your email. It can be as easy as finding the folder with all your email in it and copying it to another folder or, preferably, another computer or backup disk. However, things are seldom that easy.

When you have determined that you can migrate to Thunderbird and you have backed up your email, then follow the steps below.

## Import address books

Here's how to import address books from another email client into Thunderbird.

1. Go to the Tools menu and click Import.

You'll see the Import wizard window.

2. Click the Address Book button and then click the Next button.

3. Click on the name of your email client in the list and then click the Next button.

4. Click the Finish button to finish importing the address book.

5. Check your Thunderbird address book to confirm that your information was successfully moved over from your old email client.

## Transfer your e-mails

Here's how to transfer your email from another email client into Thunderbird.

1. Go to the Tools menu and click Import. You will see the Import wizard window.
2. Click the Mail button and then click the Next button.

3. Click on the name of your email client in the list and then click the Next button.

4. Click the Finish button to finish importing your email.

5. Check the Folders pane to confirm that your email was successfully imported.

You can find the imported mail in the Local Folders section of the Folders pane.

## Importing contacts from a text file

Thunderbird can import contact lists from other email applications and some web mail services, as long as the other applications can export their lists to a text file format that Thunderbird can read (for example, LDIF, tab delimited or comma separated). You can find information about how to export a contact list to a text file in the help for the other email application or web mail service.

After exporting the contact list from another application, you can import the contacts file into Thunderbird. Here's how:

1. Go to the Tools menu and click Import. In the Import window click the Address Books button and then click the Next button.

2. Specify whether you are importing a contact list stored in a text file and click Next.
3. Select the file that contains the contact list.
4. Thunderbird will display a window where you can map Thunderbird address book fields to the fields from the other application's contact list.

The column on the left shows the fields in Thunderbird's contact list. The column on the right shows the fields in the original email application's contact list. If you un-check fields they will not be imported. To change the match between the left and right columns, select an item and click Move Up or Move Down to map it to a different column.

5. When the fields are all lined up, click the OK button. Thunderbird creates a new address book with the addresses exported from the original application. The name of the address book will be the file name of the imported data file.

#### A few notes about importing contacts from a text file

• Matching Thunderbird fields to text file fields can be tricky. Using this feature is a little like putting together a puzzle. Almost every time you move a Thunderbird field it displaces a field in the text file. The displaced field in the text file may bounce another field that is correctly matched to a field in Thunderbird. You have to be patient and take your time while doing this. You want to import the contact list correctly the first time that you do it. It's no fun to realize that the import did not work properly and that you have to do it again or, even worse, that you have to manually update your contact information.
• Importing a contact list from a web mail application like Gmail can be very daunting because Google's contacts file contains many more fields than does Thunderbird's address book. You can make this a little simpler by opening the Gmail export file in a spreadsheet program, deleting the unwanted columns, and saving the file in CSV format. The smaller file should be easier to handle when importing into Thunderbird.

### Exporting Contacts

To export contacts from Thunderbird to a text file for importing into another application, open the Address Book, go to the Tools menu and click Export. You will be prompted to specify a name for the output file and an export format. The three export formats are LDIF, CSV (comma delimited text), and tab delimited text.

# Fluxa

Fluxa is an optional addition to fluxus which adds audio synthesis and sample playback. It’s also an experimental non-deterministic synth where each ‘note’ is its own synth graph.

Fluxa is a framework for constructing and sequencing sound. It uses a minimal and purely functional style which is designed for livecoding. It can be broken down into two parts, the descriptions of synthesis graphs and a set of language forms for describing procedural sequences.

(Fluxa is also a kind of primitive homage to SuperCollider – see also rsc, which is a Scheme binding to sc)

Example:
```
(require fluxus-017/fluxa)

(seq
 (lambda (time clock)
   (play time (mul (sine 440) (adsr 0 0.1 0 0)))
   0.2))
```

Plays a sine tone with a decay of 0.1 seconds every 0.2 seconds

### Non-determinism

Fluxa has decidedly non-deterministic properties – synth lifetime is bound to some global constraints:

• A fixed number of operators, which are recycled (allocation time/space constraint)
• A maximum number of synths playing simultaneously (cpu time constraint)

What this means is that synths are stopped after a variable length of time, depending on the need for new operators. Nodes making up the synth graph may also be recycled while they are in use – resulting in interesting artefacts (which is considered a feature!)

### Synthesis commands

```
(play time node-id)
```

Schedules the node id to play at the supplied time. This is for general musical use.

```
(play-now node-id)
```

Plays the node id as soon as possible – mainly for testing

### Operator nodes

All these commands create and return a node which can be played. Parameters in the synthesis graph can be other nodes or normal number values.

### Generators

```
(sine frequency)
```

A sine wave at the specified frequency

```
(saw frequency)
```

A saw wave at the specified frequency

```
(squ frequency)
```

A square wave at the specified frequency

```
(white frequency)
```

White noise

```
(pink frequency)
```

Pink noise

```
(sample sample-filename frequency)
```

Loads and plays a sample – files can be relative to specified searchpaths. Samples will be loaded asynchronously, and won’t interfere with real-time audio.

```
(adsr attack decay sustain release)
```

Generates an envelope signal

### Maths

```
(add a b)
(sub a b)
(mul a b)
(div a b)
```

Remember that parameters can be nodes or number values, so you can do things like:

```
(play time (mul (sine 440) 0.5))
```

or

```
(play time (mul (sine 440) (adsr 0 0.1 0 0)))
```

### Filters

```
(mooghp input-node cutoff resonance)
(moogbp input-node cutoff resonance)
(mooglp input-node cutoff resonance)
(formant input-node cutoff resonance)
```
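Filter inputs and parameters are nodes like any others, so filters can be chained with the generators and maths operators above. A sketch (the cutoff and resonance values here are just illustrative) that low-passes a saw wave while an envelope sweeps the cutoff:

```
(play time (mooglp (saw 110)
                   (add 0.2 (mul (adsr 0 0.1 0 0) 0.5))
                   0.4))
```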

### Global audio

```
(volume 1)
```

Does what is says on the tin

```
(eq 1 1 1)
```

Tweak bass, mid, treble

```
(max-synths 20)
```

Change the maximum concurrent synths playing – default is a measly 10

```
(searchpath path)
```

Add a path for sample loading

```
(reset)
```

The panic command – deletes all synth graphs and reinitialises all the operators – not rt safe

### Sequencing commands

Fluxa provides a set of forms for sequencing.

```
(seq (lambda (time clock) 0.1))
```

The top level sequence – there can only be one of these, and all code within the supplied procedure will be called when required. The time between calls is set by the returned value of the procedure – so you can change the global timing dynamically.

The parameters time and clock are passed to the procedure – time is the float real time value in seconds, to be passed to play commands. It’s actually a little bit ahead of real time, in order to give the network messages time to get to the server. You can also mess with the time like so:

```
(play (+ time 0.5) ...)
```

Which will offset the time half a second into the future. You can also make them happen earlier – but only a little bit. Clock is an ever increasing value, which increments by one each time the procedure given to seq is called. The value of this is not important, but you can use zmod, which is simply this predefined procedure:

```(define (zmod clock v) (zero? (modulo clock v)))
```

Which is common enough to make this shortening helpful, so:

```
(if (zmod clock 4) (play time (mul (sine 440) (adsr 0 0.1 0 0))))
```

Will play a note every 4 beats.
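Putting the sequencing pieces together, a minimal complete sketch (using only the forms shown above) looks like this:

```
(seq
 (lambda (time clock)
   (if (zmod clock 4)
       (play time (mul (sine 440) (adsr 0 0.1 0 0))))
   0.1))  ; call again in 0.1 seconds
```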

```(note 10)
```

A utility for mapping note numbers to frequencies (I think the current scale is equal temperament) [todo: sort scala loading out]

```
(seq
 (lambda (time clock)
   (clock-map
    (lambda (n)
      (play time (mul (sine (note n)) (adsr 0 0.1 0 0))))
    clock
    (list 10 12 14 15))
   0.1))
```

`clock-map` maps the list to the play command each tick of the clock – the list can be used as a primitive sequence, and can obviously be used to drive much more than just the pitch.
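For instance, the same list value can drive a filter as well as the pitch – a sketch using the `mooglp` filter from above (the cutoff mapping `(* n 100)` is an arbitrary choice):

```
(seq
 (lambda (time clock)
   (clock-map
    (lambda (n)
      (play time (mooglp (mul (saw (note n)) (adsr 0 0.1 0 0))
                         (* n 100) 0.4)))  ; note number also sets cutoff
    clock
    (list 10 12 14 15))
   0.1))
```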

```
(seq
 (lambda (time clock)
   (clock-switch clock 128
    (lambda ()
      (play time (mul (sine (note 10)) (adsr 0 0.1 0 0))))
    (lambda ()
      (play time (mul (saw (note 10)) (adsr 0 0.1 0 0)))))
   0.1))
```

This `clock-switch` switches between the procedures every 128 ticks of the clock – for higher level structure.

### Syncing

An OSC message can be sent to the client for syncing in collaborative performances. The format of the sync message is as follows:

```/sync [iiii] timestamp-seconds timestamp-fraction beats-per-bar tempo
```

When syncing, fluxa provides you with two extra global definitions:

• sync-clock : a clock which is reset when a /sync is received
• sync-tempo : the current requested tempo (you are free to modify or ignore it)

[note: there is a program which adds timestamps to /sync messages coming from a network, which makes collaborative sync work properly (as it doesn’t require clocks to be in sync too) email me if you want more info]
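As a sketch, the sync definitions can be dropped into a sequence like this (assuming a /sync source is running; whether and how you use sync-tempo is up to you):

```
(seq
 (lambda (time clock)
   (if (zmod sync-clock 4)  ; trigger on the synced clock, not the local one
       (play time (mul (sine 440) (adsr 0 0.1 0 0))))
   0.1))
```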

### Known problems/todos

• Record execution – cyclic graphs won't work
• Permanent execution of some nodes – will fix delay/reverb

# Introduction

Fluxus is an environment which allows you to quickly make live animation and audio programs, and change them constantly and flexibly. This idea of constant change (flux) is where its name comes from.

Fluxus does this with the aid of the Scheme programming language, which is designed for flexibility; and an interface which only needs to provide you with program code floating above the resulting visual output. This interface enables fluxus to be used for livecoding, the practice of programming as a performance art form. Most users of fluxus are naturally livecoders, and some write fluxus scripts in front of audiences, as well as using it to rapid prototype and design new programs for performance and art installation.

This emphasis on live coding, rapid prototyping and quick feedback also make fluxus fun for learning computer animation, graphics and programming – and it is often used in workshops exploring these themes.

This manual is vaguely organised in terms of things you need to know first being at the beginning, with more complex things later on, but it's not too strict so I'd recommend jumping around to parts that interest you.

# Fluxus in DrScheme

DrScheme is an “integrated development environment” for Scheme, and it comes as part of PLT Scheme, so you will have it installed if you are using fluxus.

You can use it instead of the built in fluxus scratchpad editor for writing scheme scripts. Reasons to want to do this include:

• The ability to profile and debug scheme code
• Better editing environment than the fluxus scratchpad editor will ever provide
• Makes fluxus useful in more ways (with the addition of widgets, multiple views etc)

I use it a lot for writing larger scheme scripts. All you need to do is add the following line to the top of your fluxus script:

```(require fluxus-[version]/drflux)
```

Where `[version]` is the current version number without the dot, eg “017”.

Load the script into DrScheme and press F5 to run it – a new window should pop up, displaying your graphics as normal. Rerunning the script in DrScheme should update the graphics window automatically.

### DrScheme in OS X

If you put the fluxus binary application into /Applications, you need to add the resources folder of fluxus.app to the collects path, so DrScheme finds the fluxus modules. Because they depend on the dynamic libraries and frameworks included in the application, you also need to set the dynamic library path. Something like this from the terminal:

```export PLTCOLLECTS=/Applications/PLT\ Scheme\ v4.2.5/collects/:/Applications/Fluxus.app/Contents/Resources/collects/
export DYLD_LIBRARY_PATH=/Applications/Fluxus.app/Contents/Frameworks/
```

Then load DrScheme:

```open /Applications/PLT\ Scheme\ v4.2.5/DrScheme.app/
```

A simple test script should work now:

```#lang scheme
(require fluxus-[version]/drflux)
(clear)
(build-cube)
```

Where `[version]` is the current version number without the dot, eg “017”.

### Known issues

Some commands are known to crash DrScheme, (show-fps) should not be used. Hardware shading probably won’t work. Also DrScheme requires a lot of memory, which can cause problems.

# Making Movies

Fluxus is designed for real-time use – mainly interactive performance or games – but you can also use the frame dump commands to save out frames which can be converted to movies. This process can be fairly complex if you want to sync visuals to audio, OSC or keyboard input.

Used alone, frame dumping will simply render frames as fast as your machine can manage and save them to disk. This is useful in some cases, but not if we want to create a movie at a fixed frame rate with the same timing as the frames were generated at – ie synced with an audio track at 25fps.

### Syncing to audio

The `(process)` command does several things: it switches the audio input from the jack source to a file, and it also makes sure that every buffer of audio is used to produce exactly one frame. Usually, in real-time operation, audio buffers will be skipped or duplicated, depending on the variable frame rate and fixed audio rate.

So, what this actually means is that if we want to produce video at 25fps, with audio at 44100 samplerate, 44100/25 = 1764 audio samples per frame. Set your (start-audio) buffer setting to this size. Then all you need to do is make sure the calls to (process) and (start-framedump) happen on the same frame, so that the first frame is at the start of the audio. As this process is not real-time, you can set your resolution as large as you want, or make the script as complex as you like.
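As a sketch, the setup described above might look like this (the filenames are hypothetical, and the exact `(start-framedump)` arguments may differ in your fluxus version – check the in-fluxus docs):

```
; 25fps from 44100Hz audio: 44100 / 25 = 1764 samples per buffer
(start-audio "alsa_pcm:capture_1" 1764 44100)
(process "soundtrack.wav")      ; read audio from file, one buffer per frame
(start-framedump "frame" "jpg") ; dump numbered frames to disk
```

Make sure the last two calls happen on the same frame, as described above.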

### Syncing with keyboard input for livecoding recordings

You can use the keypress recorder to save livecoding performances and rerender them later.

To use the key press recorder, start fluxus with -r or -p (see the in-fluxus docs for more info). It records the timing of each keypress to a file, and can then replay them correctly at different frame rates.

The keypress recorder works with the process command in the same way as the audio does (you always need an audio track, even if it’s silence). So the recorder will advance the number of seconds per frame as it renders, rather than using the real-time clock – so again, you can make the rendering as slow as you like, it will appear correct when you view the movie.

Recording OSC messages is also possible (for storing things like gamepad activity). Let me know if you want to do this.

### Syncing Problems Troubleshooting

Getting the syncing right when combining audio input can be a bit tricky. Some common problems I’ve seen with the resulting movies fall into two categories.

#### Syncing lags, and gets worse with time

The call to `(start-audio)` has the wrong buffer size. As I set this in my `.fluxus.scm` I often forget this. Set it correctly and re-render. Some lagging may happen unavoidably with really long (over 20 minutes or so) animations.

#### Syncing is offset in a constant manner

This happens when the start of the audio does not quite match the first frame. You can try adding or removing some silence at the beginning of the audio track to sort this out. I often just encode the first couple of seconds until I get it right.

# Frisbee

Frisbee is a simplified games engine. It is written in a different language to the rest of fluxus, and requires no knowledge or use of any of the other fluxus commands.

The language it uses is called 'Father Time' (FrTime), which is a functional reactive programming language available as part of PLT Scheme. Functional reactive programming (frp) is a way of programming which emphasises behaviours and events, and makes them a central part of the language.

Programming a graphics environment like a game is all about describing a scene and behaviours which modify it over time. Using normal programming languages (like the base fluxus one) you generally need to do these things separately, build the scene, then animate it. Using FRP, we can describe the scene with the behaviours built in. The idea is that this makes programs smaller and simpler to modify, thus making the process of programming more creative.

### A simple frisbee scene

This is the simplest frisbee scene:

```(require fluxus-017/frisbee)

(scene
(object))
```

`(scene)` is the main frisbee command - it is used to define a list of objects and their behaviours.

`(object)` creates a solid object, by default a cube (of course! :)

We can modify our object by using optional 'keyword parameters', they work like this:

```(scene
(object #:shape 'sphere))
```

This sets the shape of the object - there are some built in shapes:

```(object #:shape 'cube)
(object #:shape 'sphere)
(object #:shape 'torus)
(object #:shape 'cylinder)
```

Or, you can also load in .obj files to make your own shapes:

```(object #:shape "mushroom.obj")
```

These object files are relative to where you launch fluxus, or they can also live somewhere in the searchpaths for fluxus (which you can set up in your `.fluxus.scm` script using `(searchpath)`).
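For example, in your `.fluxus.scm` (the path here is hypothetical):

```
(searchpath "/home/me/models/")
```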

If we want to change the colour of our cube we can add a new parameter:

```(object
#:colour (vec3 1 0 0))
```

The vec3 specifies the rgb colour, so this makes a red cube. Note that frisbee uses (vec3) to make its vectors, rather than (vector).

Here are the other parameters you can set on an object:

```(object
#:translate (vec3 0 1 0)
#:scale (vec3 0.1 1 0.1)
#:rotate (vec3 0 45 0)
#:texture "test.png"
#:hints '(unlit wire))
```

It doesn't matter what order you specify parameters in, the results will be the same. The transform order is always translate first, then rotate, then scale.

### Animation

FrTime makes specifying movement very simple:

```(object
#:rotate (vec3 0 (integral 10) 0))
```

Makes a cube which rotates 10 degrees every second. Rather than setting the angles explicitly, integral specifies the amount the rotation changes every second. We can also do this:

```(object
#:rotate (vec3-integral 0 10 0))
```

Which is easier in some situations.

### Making things reactive

What we have made with the integral command is what is called a behaviour – its value depends on time. This is a core feature of FrTime, and there are many ways to create and manipulate behaviours. Frisbee also gives you some default behaviours which represent changing information coming from the outside world:

```(object
#:rotate (vec3 mouse-x mouse-y 0))
```

This rotates the cube according to the mouse position.

```(object
#:colour (key-press-b #\c (vec3 1 0 0) (vec3 0 1 0)))
```

This changes the colour of the cube when you press the 'c' key.

```(object
#:translate (vec3 0 (key-control-b #\q #\a 0.01) 0))
```

This moves the cube up and down as you press the 'q' and 'a' keys, by 0.01 units.

### Spawning objects

So far all the objects we have created have stayed active for the duration of the program running. Sometimes we want to control the lifetime of an object, or create new ones. This is obviously important for many games! To do this, we need to introduce events. Events are another basic part of FrTime and therefore Frisbee, and behaviours can be turned into events and vice versa. Events are things which happen at a specific time, rather than behaviours which can always be asked for their current value. For this reason events can't be directly used for driving objects in Frisbee in the same way as behaviours can - but they are used for triggering new objects into, or out of existence.

Here is a script which creates a continuous stream of cubes:

```
(scene
 (factory
  (lambda (e)
    (object #:translate (vec3-integral 0.1 0 0)))
  (metro 1) 5))
```

There are several new things happening here. Firstly, the metro command is short for metronome, which creates a stream of events happening at the rate specified (1 per second in this case). (factory) is a command that listens to a stream of events – taken as its second argument – and runs a procedure passed as its first argument on each one (passing the event in as an argument to the supplied function).

So in this case, each time an event occurs, the anonymous function is run, which creates an object moving away from the origin. Left like this frisbee would eventually slow down to a crawl, as more and more cubes are created. So the factory also takes a third parameter, which is the maximum number of things it can have alive at any time. Once 5 objects have been created it will recycle them, and remove the oldest objects.

Frisbee comes with some built in events which we can visualise with this script:

```
(scene
 (factory
  (lambda (e)
    (object #:translate (vec3-integral 0.1 0 0)))
  keyboard 5))
```

Which spawns a cube each time a key is pressed on the keyboard.

So far we have been ignoring the event which gets passed into our little cube making function, but as the events the keyboard spits out are the keys which have been pressed, we can make use of them thusly:

```
(scene
 (factory
  (lambda (e)
    (if (char=? e #\a) ; make a bigger cube if 'a' is pressed
        (object
         #:translate (vec3-integral 0.1 0 0)
         #:scale (vec3 2 2 2))
        (object #:translate (vec3-integral 0.1 0 0))))
  keyboard 5))
```

### Converting behaviours to events

You can create events when a behaviour changes:

```
(scene
 (factory
  (lambda (e)
    (object #:translate (vec3-integral 0.1 0 0)))
  (when-e (> mouse-x 200)) 5))
```

### Particles

Frisbee comes with its own particle system primitive – which makes it easy to make different particle effects.

It is created in a similar way to the solid objects:

```(scene
(particles))
```

And comes with a set of parameters you can control explicitly or via behaviours:

```(scene
(particles
#:colour (vec3 1 1 1)
#:translate (vector 0 0 0)
#:scale (vector 0.1 0.1 0.1)
#:rotate (vector 0 0 0)
#:texture "test.png"
#:rate 1
#:speed 0.1
#:spread 360
#:reverse #f))
```

# Fluxus Scratchpad and Modules

This chapter documents fluxus at a slightly lower level – it's only required if you want to hack a bit more.

Fluxus consists of two halves. One half is the window containing a script editor rendered on top of the scene display render. This is called the fluxus scratchpad, and it’s the way to use fluxus for livecoding and general playing.

The other half is the modules which provide functions for doing graphics, these can be loaded into any mzscheme interpreter, and run in any OpenGL context.

### Core modules

Fluxus’s functionality is further split between different Scheme modules. You don’t need to know any of this to simply use fluxus as is, as they are all loaded and setup for you.

#### fluxus-engine

This binary extension contains the core rendering functions, and the majority of the commands.

#### fluxus-audio

A binary extension containing a jack client and fft processor commands.

#### fluxus-osc

A binary extension with the osc server and client, and message commands.

#### fluxus-midi

A binary extension with midi event input support

### Scheme modules

There are also many scheme modules which come with fluxus. Some of these form the scratchpad interface and give you mouse/key input and camera setup, others are layers on top of the fluxus-engine to make it more convenient. This is where things like the with-* and pdata-map! macros are specified, and the import/export, for example.

### Additional modules

#### fluxus-video

A binary extension that provides functions to load in a movie file via Quicktime in OSX or GStreamer in Linux, and offers various controls to play or control the properties of the movie. The module also provides access to live cameras.

#### fluxus-artkp

A binary ARToolKitPlus module.

# Miscellaneous Important Nuggets of Information

This chapter is for things I really think are important to know, but can't find another place to put.

### Getting huge framerate speeds

By default fluxus has its framerate throttled to stop it melting your computer. To remove this, use:

```(desiredfps 1000000)
```

It won't guarantee you such a framerate, but it will stop fluxus capping its speed (which defaults to something around 50 fps). Use:

```(show-fps 1)
```

To check the fps before and after. Higher framerates are great for VJing, as it essentially reduces the latency for the results of the audio calculation getting to the visual output - it feels much more responsive.

### Unit tests

If you want to check fluxus is working ok on a new install - or if you suspect something is going wrong, try:

```(self-test #f)
```

Which will run through every single example scriptlet in the function reference documentation. If it crashes, or errors - please run:

```(self-test #t)
```

Which will save a log file – please post this to the mailing list and we'll have a go at fixing it. It's also highly recommended for developers to run this command before committing code to the source repository, so you can see if your changes have affected anything unexpected.

# Shaders

Hardware shaders allow you to have much finer control over the graphics pipeline used to render your objects. Fluxus has commands to set and control GLSL shaders from your scheme scripts, and even edit your shaders in the fluxus editor. GLSL is the OpenGL standard for shaders across various graphics card types, if your card and driver support OpenGL2, this should work for you.

```(shader vertshader fragshader)
```

Loads, compiles and binds the vertex and fragment shaders on to current state or grabbed primitive.
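For example, binding a shader to a new primitive might look like this (the shader filenames here are hypothetical):

```
(with-state
 (shader "toon.vert.glsl" "toon.frag.glsl")
 (build-sphere 20 20))
```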

```(shader-set! paramlist)
```

Sets uniform parameters for the shader in a token, value list, e.g.:

```
(list "specular" 0.5 "mycolour" (vector 1 0 0))
```

This is very simple to set up – in your GLSL shader you just need to declare a uniform value eg:

```
uniform float deformamount;
```

This is then set by calling from scheme:

```
(shader-set! (list "deformamount" 1.4))
```

The deformamount is set once per object/shader – hence it’s a uniform value across the whole object. Shaders also get given all pdata as attribute (per vertex) parameters, so you can share all this information between shaders and scripts in a similar way:

In GLSL:

```attribute vec3 testcol;
```

To pass this from scheme, first create some new pdata with a matching name:

```
(pdata-add "testcol" "v")
```

Then you can set it in the same way as any other pdata, controlling shader parameters on a per-vertex basis.
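As a sketch, filling that pdata from scheme might look like this (`myshape` is a hypothetical grabbed primitive; `pdata-map!` writes each result into the first named array):

```
(with-primitive myshape
 (pdata-add "testcol" "v")
 (pdata-map!
  (lambda (testcol p)
    p) ; colour each vertex by its position
  "testcol" "p"))
```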

### Samplers

Samplers are the hardware shading word for textures. The word sampler is used to be a little more general, in that they are used as a way of passing lots of information (which may not be visual in nature) around between shaders. Passing textures into GLSL shaders from fluxus is again fairly simple:

In your GLSL shader:

```
uniform sampler2D mytexture;
```

In scheme:

```
(texture (load-texture "mytexturefile.png"))
(shader-set! (list "mytexture" 0))
```

This just tells GLSL to use the first texture unit (0) as the sampler for mytexture. This is the texture unit that the standard (texture) command loads textures to.
To pass more than one texture, you need multitexturing turned on:

In GLSL:

```uniform sampler2D mytexture;
uniform sampler2D mysecondtexture;
```

In scheme:

```
; load to texture unit 0
(multitexture 0 (load-texture "mytexturefile.png"))
; load to texture unit 1
(multitexture 1 (load-texture "mytexturefile2.png"))
(shader-set! (list "mytexture" 0 "mysecondtexture" 1))
```

# Quick Start

When you start fluxus, you will see the welcome text and a prompt – this is called the repl (read evaluate print loop), or console. Generally fluxus scripts are written in text buffers, which can be switched to with ctrl and the number keys. Switch to the first one of these by pressing ctrl-1 (ctrl-0 switches you back to the fluxus console).

Now try entering the following command.

```(build-cube)
```

Now press F5 (or ctrl-e) – the script will be executed, and a white cube should appear in the centre of the screen. Use the mouse to move around the cube, pressing the buttons to get different movement controls.

To animate a cube using audio, try this:

```
; buffersize and samplerate need to match jack's
(start-audio "jack-port-to-read-sound-from" 256 44100)

(define (render)
  (colour (vector (gh 1) (gh 2) (gh 3)))
  (draw-cube))

(every-frame (render))
```

To briefly explain, the (every-frame) function takes a function which is called once per frame by fluxus’s internal engine. In this case it calls a function that sets the current colour using harmonics from the incoming sound with the (gh) - get harmonic function; and draws a cube. Note that this time we use (draw-cube) not (build-cube). The difference will be explained below.

If everything goes as planned, and the audio is connected with some input – the cube will flash in a colourful manner along with the sound.

Now go and have a play with the examples. Load them by pressing ctrl-l or on the commandline, by entering the examples directory and typing fluxus followed by the script filename.

# Notes on Writing Large Programs in Fluxus

When writing large programs in fluxus, I've found that there are some aspects of the language (the PLT scheme dialect to be more precise) which are essential in managing code and keeping things tidy. See the PLT documentation on structs and classes for more detail, but I'll give some fluxus related examples to work with.

For example, at first we can get along quite nicely with using lists alone to store data. Let's use as an example a program with a robot we wish to control:

```(define myrobot (list (vector 0 0 0) (vector 1 0 0) (build-cube)))
```

We use a list to store the robot's position, velocity and a primitive associated with it. We could then use:

```(list-ref myrobot 0) ; returns the position of the robot
(list-ref myrobot 1) ; returns the velocity of the robot
(list-ref myrobot 2) ; returns the root primitive of the robot
```

To get at the values of the robot and use them later. Seems pretty handy, but this has problems when we scale up – say we want to make a world to keep our robots in:

```
; build a world with three robots
(define world (list (list (vector 0 0 0) (vector 1 0 0) (build-cube))
                    (list (vector 1 0 0) (vector 1 0 0) (build-cube))
                    (list (vector 2 0 0) (vector 1 0 0) (build-cube))))
```

And then say we want to access the 2nd robot's root primitive:

```(list-ref (list-ref world 1) 2)
```

It all starts to get a little confusing, as we are indexing by number.

### Structs

Structs are simple containers that allow you to name data. This is a much saner way of dealing with containers of data than using lists alone.
Let's start again with the robots example:

```(define-struct robot (pos vel root))
```

Where “pos” is the current position of the robot, “vel” its velocity and “root” is the primitive for the robot. The (define-struct) automatically generates the following functions we can immediately use:

```
(define myrobot (make-robot (vector 0 0 0) (vector 0 0 0) (build-cube))) ; returns a new robot
(robot-pos myrobot) ; returns the position of the robot
(robot-vel myrobot) ; returns the velocity of the robot
(robot-root myrobot) ; returns the root primitive for the robot
```

This makes for very readable code, as all the data has meaning, no numbers to decipher. It also means we can insert new data and not have to rewrite a lot of code, which is very important.
The world can now become:

```(define-struct world (robots))
```

And be used like this:

```
; build a world with three robots
(define myworld (make-world (list (make-robot (vector 0 0 0) (vector 1 0 0) (build-cube))
                                  (make-robot (vector 1 0 0) (vector 1 0 0) (build-cube))
                                  (make-robot (vector 2 0 0) (vector 1 0 0) (build-cube)))))

; get the 2nd robot's root primitive:
(robot-root (list-ref (world-robots myworld) 1))
```

### Mutable state

So far we have only used gets, not sets. This is partly because setting state is seen as slightly distasteful in Scheme, so you have to use the following syntax to enable it in a struct:

```(define-struct robot ((pos #:mutable) (vel #:mutable) root))
```

This `(define-struct)` also generates the following functions:

```(set-robot-pos! robot (vector 1 0 0))
(set-robot-vel! robot (vector 0 0.1 0))
```

So you can change the values in the program. For a full example, see the file dancing-robots.scm in the examples directory.
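As a sketch, a per-frame update using these setters might look like this (`robot-update` is a hypothetical helper; `vadd` is the fluxus vector add):

```
(define (robot-update r)
  ; move the robot on by its velocity
  (set-robot-pos! r (vadd (robot-pos r) (robot-vel r)))
  ; and update its primitive's transform to match
  (with-primitive (robot-root r)
    (identity)
    (translate (robot-pos r))))
```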

# User Guide

When using the fluxus scratchpad, the idea is that you only need the one window to build scripts, or play live. F5 (or ctrl-e) is the key that runs the script when you are ready. Selecting some text (using shift) and pressing F5 will execute the selected text only. This is handy for re-evaluating functions without running the whole script each time.

## Camera control

The camera is controlled by moving the mouse and pressing mouse buttons.

• Left mouse button: Rotate
• Middle mouse button: Move
• Right mouse button: Zoom

## Workspaces

The script editor allows you to edit 9 scripts simultaneously by using workspaces. To switch workspaces, use ctrl+number key. Only one can be run at once though – hitting F5 will execute the currently active workspace script. Scripts in different workspaces can be saved to different files; press ctrl-s to save or ctrl-d to save-as and enter a new filename (the default filename is temp.scm).

## The REPL

If you press ctrl and 0, instead of getting another script workspace, you will be presented with a read evaluate print loop interpreter, or repl for short. This is really just an interactive interpreter similar to the commandline, where you can enter scheme code for immediate evaluation. This code is evaluated in the same interpreter as the other scripts, so you can use the repl to debug or inspect global variables and functions they define. This window is also where error reporting is printed, along with the terminal window you started fluxus from.
One of the important uses of the repl is to get help on fluxus commands. For instance, in order to find out about the build-cube command, try typing:

```(help "build-cube")
```

You can find out about new commands by typing

```(help "sections")
```

Which will print out a list of subsections, so to find out about maths commands you can try:

```(help "maths")
```

This will give you a list of all the maths commands available, which you can ask for further help about. You can copy the example code by moving the cursor left and then up, shift-selecting the code, and pressing ctrl-c to copy it. Then you can switch to a workspace and paste the example in with ctrl-v in order to try running it.

## Keyboard commands

ctrl-f : Full screen mode.
ctrl-w : Windowed mode.
ctrl-h : Hide/show the editor.
ctrl-l : Load a new script (navigate with cursors and return).
ctrl-s : Save current script.
ctrl-d : Save as current script (opens a filename dialogue).
ctrl-q : Clear the editor window.
ctrl-b : Blow up cursor.
ctrl-1 to 9 : Switch to selected workspace.
ctrl-0 : Switch to the REPL.
ctrl-p : Auto format the white space in your scheme script to be more pretty and readable
F3 : Resets the camera.
F4 : Execute the current highlighted s-expression
F5/ctrl-e : Execute the selected text, or all if none is selected.
F6 : Reset interpreter, and execute text
F9 : Switch scratchpad effects on/off
F10 : Make the text more transparent
F11 : Make the text less transparent

# Turtle Builder

The turtle polybuilder is an experimental way of building polygonal objects using a logo style turtle in 3D space. As you drive the turtle around you can place vertices and build shapes procedurally. The turtle can also be used to deform existing polygonal primitives, by attaching it to objects you have already created.

This script simply builds a single polygon circle, by playing the age-old turtle trick of looping a function that moves a bit, turns a bit...

```
(clear)

(define (build n)
  (turtle-reset)
  (turtle-prim 4)
  (build-loop n n)
  (turtle-build))

(define (build-loop n t)
  (turtle-turn (vector 0 (/ 360 t) 0))
  (turtle-move 1)
  (turtle-vert)
  (if (< n 1)
      0
      (build-loop (- n 1) t)))

(backfacecull 0)
(hint-unlit)
(hint-wire)
(wire-colour (vector 0 0 0))
(line-width 4)
(build 10)
```

For a more complex example, just modify the `(build-loop)` function as so:

```
(define (build-loop n t)
  (turtle-turn (vector 0 (/ 360 t) 0))
  (turtle-move 1)
  (turtle-vert)
  (if (< n 1)
      0
      (begin
        ; add another call to the recursion
        (build-loop (- n 1) t)
        (turtle-turn (vector 0 0 45)) ; twist a bit
        (build-loop (- n 1) t))))
```

# Primitive Loading and Saving

It's useful to be able to load and save primitives, this is for several reasons. You may wish to use other programs to make meshes and import them into fluxus, or to export primitives you've made in fluxus to render them in other programs. It's also useful to save the primitive as a file if the process to create it in fluxus is very slow. There are only two commands you need to know to do this:

```; load a primitive in:
(define newprim (load-primitive filename))
; save it out again:
(with-primitive newprim
(save-primitive filename))
```

At present these commands work on poly and pixel primitives, and load/save obj and png files respectively.

### COLLADA format support

Collada is a standard file format for complex 3D scenes. Collada files can be loaded, currently supported geometry is triangular data, vertex positions, normals and texture coordinates. The plan is to use collada for complex scenes containing different geometry types, including animation and physics data.

```(collada-import filename)
```

(collada-import) loads a collada scene file and returns a scene description list. Files need to contain triangulated model data - this is usually an option on the export. Note: this is slow for heavy models.

# Scheme

Scheme is a programming language invented by Gerald J. Sussman and Guy L. Steele Jr. in 1975. Scheme is based on another language – Lisp, which dates back to the fifties. It is a high-level language, which means it is biased towards human, rather than machine, understanding. The fluxus scratchpad embeds a Scheme interpreter (it can run Scheme programs) and the fluxus modules extend the Scheme language with commands for 3D computer graphics.

This chapter gives a very basic introduction to Scheme programming, and a fast path to working with fluxus – enough to get you started without prior programming experience, but I don’t explain the details very well. For general scheme learning, I heartily recommend the following books (two of which have the complete text on-line):

The Little Schemer – Daniel P. Friedman and Matthias Felleisen

How to Design Programs: An Introduction to Computing and Programming – Matthias Felleisen, Robert Bruce Findler, Matthew Flatt and Shriram Krishnamurthi. Online: http://www.htdp.org/2003-09-26/Book/

Structure and Interpretation of Computer Programs – Harold Abelson and Gerald Jay Sussman with Julie Sussman. Online: http://mitpress.mit.edu/sicp/full-text/book/book.html

We’ll start by going through some language basics, which are easiest done in the fluxus scratchpad using the console mode – launch fluxus and press ctrl 0 to switch to console mode.

## Scheme as calculator

Languages like Scheme are composed of two things – operators (things which do things) and values which operators operate upon. Operators are always specified first in Scheme, so to add 1 and 2, we do the following:

```
fluxus> (+ 1 2)
3
```

This looks pretty odd to begin with, and takes some getting used to, but it means the language has fewer rules and makes things easier later on. It also has some other benefits: to add 3 numbers we can simply do:

```
fluxus> (+ 1 2 3)
6
```

It is common to “nest” the brackets inside one another, for example:

```
fluxus> (+ 1 (* 2 3))
7
```

## Naming values

If we want to specify values and give them names we can use the Scheme command “define”:

```
fluxus> (define size 2)
fluxus> size
2
fluxus> (* size 2)
4
```

Naming is arguably the most important part of programming, and is the simplest form of what is termed “abstraction” – separating the details (e.g. the value 2) from the meaning (size). This is not important as far as the machine is concerned, but it makes all the difference to you and other people reading code you have written. In this example, we only have to specify the value of size once; after that, all uses of it in the code can refer to it by name – making the code much easier to understand and maintain.
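For example, once size is defined we can reuse it by name in several expressions – and changing the single define would change them all:

```
fluxus> (define size 2)
fluxus> (+ size size)
4
fluxus> (* size size size)
8
```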

## Naming procedures

Naming values is very useful, but we can also name operations (or collections of them) to make the code simpler for us:

```
fluxus> (define (square x) (* x x))
fluxus> (square 10)
100
fluxus> (square 2)
4
```

Look at this definition carefully – there are several things to take into account. Firstly, we can describe the procedure definition in English as: to (define (square of x) (multiply x by itself)). The “x” is called an argument to the procedure, and like the size define above, its name doesn’t matter to the machine, so:

```
fluxus> (define (square apple) (* apple apple))
```

This will perform exactly the same work. Again, it is important to name these arguments so they actually make some sort of sense, otherwise you end up very confused. Now we are abstracting operations (or behaviour), rather than values, and this can be seen as adding to the vocabulary of the Scheme language with our own words. Now that we have a square procedure, we can use it to make other procedures:

```
fluxus> (define (sum-of-squares x y)
          (+ (square x) (square y)))
fluxus> (sum-of-squares 10 2)
104
```

The newline and white space after the define above are just a text formatting convention, meaning that you can visually separate the procedure’s name and its arguments from the internals (or body) of the procedure. Scheme doesn’t care about white space in its code; again, it’s all about making it readable to us.

## Making some shapes

Now we know enough to make some shapes with fluxus. To start with, leave the console by pressing ctrl-1 – you can go back at any time by pressing ctrl-0. Fluxus is now in script editing mode. You can write a script, execute it by pressing F5, edit it further, press F5 again... this is the normal way fluxus is used.

Enter this script:

```
(define (render)
  (draw-cube))

(every-frame (render))
```

Then press F5 and you should see a cube on the screen. Drag the mouse around the fluxus window and you should be able to move the camera – left mouse button to rotate, middle to zoom, right to translate.

This script defines a procedure that draws a cube, and calls it every frame – resulting in a static cube.

You can change the colour of the cube like so:

```
(define (render)
  (colour (vector 0 0.5 1))
  (draw-cube))

(every-frame (render))
```

The colour command sets the current colour, and takes a single input – a vector. Vectors are used a lot in fluxus to represent positions and directions in 3D space, and colours – which are treated as triplets of red, green and blue values. So in this case, the cube should turn a light blue colour.

## Transforms

Add a scale command to your script:

```
(define (render)
  (scale (vector 0.5 0.5 0.5))
  (colour (vector 0 0.5 1))
  (draw-cube))

(every-frame (render))
```

Now your cube should get smaller. This might be difficult to tell, as you don’t have anything to compare it with, so we can add another cube like so:

```
(define (render)
  (colour (vector 1 0 0))
  (draw-cube)
  (translate (vector 2 0 0))
  (scale (vector 0.5 0.5 0.5))
  (colour (vector 0 0.5 1))
  (draw-cube))

(every-frame (render))
```

Now you should see two cubes, a red one, then the blue one, moved to one side (by the translate procedure) and scaled to half the size of the red one.

```
(define (render)
  (colour (vector 1 0 0))
  (draw-cube)
  (translate (vector 2 0 0))
  (scale (vector 0.5 0.5 0.5))
  (rotate (vector 0 45 0))
  (colour (vector 0 0.5 1))
  (draw-cube))

(every-frame (render))
```

For completeness, I added a rotate procedure to twist the blue cube 45 degrees.

## Recursion

To do more interesting things, we will write a procedure to draw a row of cubes. This is done by recursion, where a procedure calls itself, keeps a record of how many times it has called itself, and ends after so many iterations.

In order to stop the procedure calling itself forever, we need to make a decision – we use cond for decisions.

```
(define (draw-row count)
  (cond
    ((not (zero? count))
      (draw-cube)
      (translate (vector 1.1 0 0))
      (draw-row (- count 1)))))

(every-frame (draw-row 10))
```

Be careful with the brackets – the fluxus editor should help you by highlighting the region each bracket corresponds to. Run this script and you should see a row of 10 cubes. You can build a lot out of the concepts in this script, so take some time over this bit.

Cond is used to ask questions, and it can ask as many as you like – it checks them in order and runs the first one which is true. In the script above, we are only asking one question: (not (zero? count)). If this is true – that is, if count is anything other than zero – we draw a cube, move a bit and then call our procedure again. Importantly, the next time we call draw-row, we do so with one taken off count. If count is 0, the procedure exits without doing anything.

So to put it together: draw-row is called with count as 10 by every-frame. We enter the draw-row procedure and ask a question – is count 0? No – so carry on: draw a cube, move a bit, call draw-row again with count as 9. Enter draw-row again – is count 0? No, and so on. Eventually we call draw-row with count as 0, nothing happens, and all the other calls exit. We have drawn 10 cubes.
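Cond really can ask several questions in one expression. Here is a small console sketch (plain Scheme, nothing fluxus-specific – the sign procedure is just an illustration; else is a catch-all clause that runs when nothing above it matched):

```
fluxus> (define (sign n)
          (cond
            ((zero? n) 0)
            ((< n 0) -1)
            (else 1)))
fluxus> (sign -5)
-1
fluxus> (sign 3)
1
```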

Recursion is a very powerful idea, and it’s very well suited to visuals and concepts like self-similarity. It is also nice for quickly making very complex graphics with scripts not much bigger than this one.

## Animation

Well, now you’ve got through that part, we can quite quickly take this script and make it move.

```
(define (draw-row count)
  (cond
    ((not (zero? count))
      (draw-cube)
      (rotate (vector 0 0 (* 45 (sin (time)))))
      (translate (vector 1.1 0 0))
      (draw-row (- count 1)))))

(every-frame (draw-row 10))
```

time is a procedure which returns the time in seconds since fluxus started running. sin converts this into a sine wave, and the multiplication scales it up so the rotation ranges from -45 to +45 degrees (as sin only returns values between -1 and 1). Your row of cubes should be bending up and down. Try changing the number of cubes from 10, and the range of movement by changing the 45.
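For instance, one such variation (only the two numbers are changed, everything else is as before) gives a longer row with a gentler bend:

```
(define (draw-row count)
  (cond
    ((not (zero? count))
      (draw-cube)
      ; 15 instead of 45: the bend now ranges from -15 to +15 degrees
      (rotate (vector 0 0 (* 15 (sin (time)))))
      (translate (vector 1.1 0 0))
      (draw-row (- count 1)))))

; 20 instead of 10: twice as many cubes
(every-frame (draw-row 20))
```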

## More recursion

To give you something more visually interesting, this script’s procedure calls itself twice – which results in an animating tree shape.

```
(define (draw-row count)
  (cond
    ((not (zero? count))
      (translate (vector 2 0 0))
      (draw-cube)
      (rotate (vector (* 10 (sin (time))) 0 0))
      (with-state
        (rotate (vector 0 25 0))
        (draw-row (- count 1)))
      (with-state
        (rotate (vector 0 -25 0))
        (draw-row (- count 1))))))

(every-frame (draw-row 10))
```

For an explanation of with-state, see the next section.

## Comments

Comments in scheme are denoted by the ; character:

```
; this is a comment
```

Everything after the ; up to the end of the line is ignored by the interpreter.

Using #; you can also comment out whole expressions in Scheme easily, for example:

```
(with-state
  (colour (vector 1 0 0))
  (draw-torus))
(translate (vector 0 1 0))
#;(with-state
  (colour (vector 0 1 0))
  (draw-torus))
```

This stops the interpreter from executing the second (with-state) expression, preventing it from drawing the green torus.

## Let

Let is used to store temporary results. An example will make this clearer:

```
(define (animate)
  (with-state
    (translate (vector (sin (time)) (cos (time)) 0))
    (draw-sphere))
  (with-state
    (translate (vmul (vector (sin (time)) (cos (time)) 0) 3))
    (draw-sphere)))

(every-frame (animate))
```

This script draws two spheres orbiting the origin of the world. You may notice that there is some calculation which is being carried out twice – the (sin (time)) and the (cos (time)). It would be simpler and faster if we could calculate these once and store them for reuse. One way of doing this is as follows:

```
(define x 0)
(define y 0)

(define (animate)
  (set! x (sin (time)))
  (set! y (cos (time)))
  (with-state
    (translate (vector x y 0))
    (draw-sphere))
  (with-state
    (translate (vmul (vector x y 0) 3))
    (draw-sphere)))

(every-frame (animate))
```

This is better – but x and y are globally defined and could be used and changed somewhere else in the code, causing confusion. A better way is to use let:

```
(define (animate)
  (let ((x (sin (time)))
        (y (cos (time))))
    (with-state
      (translate (vector x y 0))
      (draw-sphere))
    (with-state
      (translate (vmul (vector x y 0) 3))
      (draw-sphere))))

(every-frame (animate))
```

This restricts the use of x and y to the area inside the outer let brackets. Lets can also be nested inside each other, and when you need to store a value which is dependent on another value, you can use let* to help you out:

```
(define (animate)
  (let* ((t (* (time) 2)) ; t is set here
         (x (sin t))      ; we can use t here
         (y (cos t)))     ; and here

    (with-state
      (translate (vector x y 0))
      (draw-sphere))

    (with-state
      (translate (vmul (vector x y 0) 3))
      (draw-sphere))))

(every-frame (animate))
```