Import Approx
Import part7-simpleosc

Package FM {
    Op(freq) {
        Op = Approx:Sin(0.5 - periodic-ramp(Audio:Clock(freq)) #7)
    }
}

Sweet! Sounds quite pure, and you can bump up the approximation to #8 or #9 if desired. Let’s compare the performance by making a reference oscillator and trying out 10 of each.

ref-osc(freq) {
    ref-osc = Crt:sin(periodic-ramp(Audio:Clock(freq)) * 2 * Pi)
}

test(osc) {
    Use Algorithm
    freqs = Expand(#10 'arg + 0.008 0.008)
    test = Reduce(Add Map(osc freqs))
}

PS > .\k2cli.exe --audio-out "test(ref-osc)" .\part12-fm.k
K2CLI 0.1 (c) 2011 Vesa Norilo
Reactive processing active... Press any key to quit
CPU: 8.7%

PS > .\k2cli.exe --audio-out "test(FM:Op)" .\part12-fm.k
K2CLI 0.1 (c) 2011 Vesa Norilo
Reactive processing active... Press any key to quit
CPU: 1.5%

Not bad, a significant performance improvement! The sounds are frankly indistinguishable in this case. Let’s build a MIDI-controllable synth with it.

Internally, Kronos handles all MIDI as OSC. MIDI input triggers OSC methods at the path ‘/midi/device_name/event_type’. MIDI events show up as doublewords packed into 32-bit integers. The system provides an unpacking helper, ‘MIDI:Unpack’, that converts such a doubleword into a tuple of integers. To listen to all MIDI events, you can listen to the OSC method ‘/midi/*’, as in this example:

test-midi() {
    test-midi = MIDI:Unpack(OSC:In("/midi/*" '0i))
}

PS > .\k2cli.exe --console "test-midi()" .\part12-fm.k
K2CLI 0.1 (c) 2011 Vesa Norilo
[OSC] Routing '/midi/Babyface Midi Port 1' to '/midi/*'
(0i 0i 0i 0i)
Reactive processing active... Press any key to quit
(0i 144i 71i 55i)
(0i 128i 71i 64i)
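To see what such an unpacking helper does, here is a Python sketch. The byte layout (high byte first: port, status, data 1, data 2) is an assumption for illustration, not the documented behaviour of ‘MIDI:Unpack’.

```python
# Hypothetical sketch of unpacking a MIDI "doubleword" (a packed 32-bit
# integer) into four bytes. The layout below is assumed for illustration.
def midi_unpack(word):
    return ((word >> 24) & 0xFF,  # port / reserved byte
            (word >> 16) & 0xFF,  # status (144 = note-on on channel 1)
            (word >> 8) & 0xFF,   # data 1 (note number)
            word & 0xFF)          # data 2 (velocity)

# A note-on for key 71 at velocity 55, packed and unpacked again:
packed = (0 << 24) | (144 << 16) | (71 << 8) | 55
print(midi_unpack(packed))  # -> (0, 144, 71, 55)
```

The two tuples printed by the transcript above are a note-on (status 144) and the matching note-off (status 128) for the same key.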

The system provides a helper function that converts MIDI data into a frequency/gate pair. The following example will assume a sampling rate of 44.1kHz;

test-synth() {
    note-on = IO:OSC:In("/midi/*/note-on" '0i)
    note-off = IO:OSC:In("/midi/*/note-off" '0i)
    (gate freq) = MIDI:To-CV(note-on note-off 440)
    test-synth = FM:Op(freq / 44100) * gate
}

It should give you an oscillator to play with any MIDI input available to your system. If there’s a lot of latency, it’s time to revisit the setup; see part 1 on how to select an audio device. WASAPI or ASIO is recommended.

Next order of business is to improve the sound. Let’s get rid of the snaps by smoothing the gate signal to create an envelope;

smooth(sig lag) {
    out = z-1('0 sig + lag * (out - sig))
    smooth = out
}

test-synth-env() {
    note-on = IO:OSC:In("/midi/*/note-on" '0i)
    note-off = IO:OSC:In("/midi/*/note-off" '0i)
    (gate freq) = MIDI:To-CV(note-on note-off 440)
    env = smooth(Audio:Clock(gate) 0.99)
    test-synth-env = FM:Op(freq / 44100) * env
}

Let’s add frequency modulation; we need a carrier operator and a modulator operator, to modulate the carrier frequency.

test-synth-fm() {
    note-on = IO:OSC:In("/midi/*/note-on" '0i)
    note-off = IO:OSC:In("/midi/*/note-off" '0i)
    (gate freq) = MIDI:To-CV(note-on note-off 440)
    amp-env = smooth(Audio:Clock(gate) 0.99)
    fm-env = smooth(Audio:Clock(gate) 0.9999)
    modulator = FM:Op(3 * freq / 44100) * fm-env * 8
    test-synth-fm = FM:Op((1 + modulator) * freq / 44100) * amp-env * 0.8
}
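For readers who want to trace the signal flow outside Kronos, here is a minimal Python sketch of the same two-operator patch, with the envelopes left out. The phase-increment formulation mirrors ‘FM:Op’: the modulator's output scales the carrier's phase increment, so the carrier frequency wobbles around ‘freq’.

```python
import math

# Illustrative two-operator FM in Python (not Kronos code): a modulator
# at 'ratio' times the carrier frequency deviates the carrier's phase
# increment by up to 'index' times the base increment.
def fm_tone(freq, ratio, index, ticks, sr=44100):
    mod_phase = car_phase = 0.0
    out = []
    for _ in range(ticks):
        modulator = math.sin(2 * math.pi * mod_phase) * index
        out.append(math.sin(2 * math.pi * car_phase))
        mod_phase = (mod_phase + ratio * freq / sr) % 1.0
        car_phase = (car_phase + (1 + modulator) * freq / sr) % 1.0
    return out

tone = fm_tone(440, 3, 8, 4410)          # 3:1 modulator, index 8, 0.1 s
print(max(abs(s) for s in tone) <= 1.0)  # -> True
```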

Nice! In upcoming parts, we will learn to make more involved envelopes.

Maybe we’d like to try the synth with different oscillators? Let’s make the oscillator a user-supplied parameter:

test-generic-fm(oscillator) {
    note-on = IO:OSC:In("/midi/*/note-on" '0i)
    note-off = IO:OSC:In("/midi/*/note-off" '0i)
    (gate freq) = MIDI:To-CV(note-on note-off 440)
    amp-env = smooth(Audio:Clock(gate) 0.99)
    fm-env = smooth(Audio:Clock(gate) 0.9999)
    modulator = Eval(oscillator 3 * freq / 44100) * fm-env * 8
    test-generic-fm = Eval(oscillator (1 + modulator) * freq / 44100) * amp-env * 0.8
}

Try the reference oscillator for CPU use comparisons:

PS > .\k2cli.exe --audio-dev 16 16 --audio-out "test-generic-fm(ref-osc)" .\part12-fm.k

Or even the geometric oscillators we made before for some rougher sounds:

PS > .\k2cli.exe --audio-dev 16 16 --audio-out "test-generic-fm(tri-osc)" .\part12-fm.k
PS > .\k2cli.exe --audio-dev 16 16 --audio-out "test-generic-fm(saw-osc)" .\part12-fm.k

In this section, we are going to build a very good approximation of a sinusoidal oscillator. In part 12 we will construct a playable FM-synth from it.

The sinusoidal oscillator is a recurring theme in DSP. Performance is important, as many applications require oodles of them. There are also surprisingly many tradeoffs that can be made to optimize an oscillator for a particular application.

Part 10 ended with a representative example of one type of optimized sinusoidal oscillator: a recursive method based on filters. This oscillator is cheap to *stream* and expensive to *modulate*; it is suitable when frequency updates are rare. The sinusoid is very pure, although some amplitude error may accumulate as it runs.

If we require an oscillator to use as an FM operator, recursive methods are rarely useful. We could use a phase accumulator and a sinusoid mapping, as in part 7. However, going to the C runtime for a precise trigonometric function is very expensive, motivating us to build a more efficient, less precise version. The technique of choice is a Taylor series of the sine function, chosen over cosine for its nice symmetry properties.

The first thing needed for the series expansion is a factorial function. Something of a ‘Hello world’ of functional programming, the function is predictably simple:

Factorial(n) {
    Factorial = #1
    Factorial = When(n > #1 n * Factorial(n - #1))
}

The distinction is that all numbers used here are prefixed with ‘#’. In Kronos, this means that they are *compiler directives*. Any computations involving directives are resolved at compile time, meaning that the result of any computation on pure directives appears as a constant in the machine code. A further distinction is that you can actually branch on directives as opposed to data. This is demonstrated by the ‘When’ clause in the factorial example above.
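For comparison, the same recursion in ordinary runtime code looks like this (Python, for illustration); Kronos performs the equivalent evaluation during compilation when the argument is a directive, so the result lands in the machine code as a constant.

```python
# Runtime counterpart of the Kronos Factorial, for illustration only.
def factorial(n):
    # Base form mirrors 'Factorial = #1'
    if n <= 1:
        return 1
    # Recursive form mirrors the When(n > #1 ...) clause
    return n * factorial(n - 1)

print(factorial(5))  # -> 120
```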

To optimize the oscillator for our periodic ramp, let us expand ‘sin(2 pi x)’ instead of ‘sin(x)’. This gives us a function with a period of 1, just like our periodic ramp oscillators.

Looking at the Taylor expansion, note that every other coefficient is zero. That is why the series can skip over every other coefficient entirely.

Here is a function to compute the n:th non-zero Taylor coefficient:

Sine-Coef(n) {
    Sine-Coef = Crt:pow(#2 * Pi n * #2 - #1) * Crt:pow(#-1 n - #1) / Factorial(n * #2 - #1)
}

It makes use of exponentiation and the ‘Factorial’ function we wrote previously. It’s probably a good idea to test it, so let’s write a small helper function to sample a function at a set of points:

Sample(func min max steps) {
    Use Algorithm
    inc = (max - min) / (steps - #1)
    xs = Expand(steps 'arg + inc min)
    Sample = Map(func xs)
}

PS > .\k2cli.exe -e "Sample('(arg Approx:Sine-Coef(arg)) #1 #10 #10)" .\approx.k
K2CLI 0.1 (c) 2011 Vesa Norilo
Sample('(arg Approx:Sine-Coef(arg)) #1 #10 #10) =>
( (#1 #6.28319) (#2 #-41.3417) (#3 #81.6052) (#4 #-76.7059) (#5 #42.0587)
  (#6 #-15.0946) (#7 #3.81995) (#8 #-0.718122) (#9 #0.104229) (#10 #-0.0120316))

Seems reasonable.
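The coefficients are easy to cross-check outside Kronos. A Python sketch of the formula behind ‘Sine-Coef’ — the n:th non-zero Taylor coefficient of sin(2πx) is (2π)^(2n-1) · (-1)^(n-1) / (2n-1)!:

```python
import math

# Cross-check of the Sine-Coef formula (illustrative Python, not Kronos).
def sine_coef(n):
    return (2 * math.pi) ** (2 * n - 1) * (-1) ** (n - 1) / math.factorial(2 * n - 1)

# These match the k2cli output above:
print(round(sine_coef(1), 5))  # -> 6.28319
print(round(sine_coef(2), 4))  # -> -41.3417
```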

One of the most efficient methods of computing a polynomial is the Horner method.

Horner-Compute(x result coefs) {
    Horner-Compute = result * x + coefs
    (c cs) = coefs
    Horner-Compute = Recur(x result * x + c cs)
}

Horner-Polynomial(x coefficients) {
    Use Algorithm
    Horner-Polynomial = Horner-Compute(x 0 coefficients)
}

‘Horner-Polynomial’ is the front end to this evaluation algorithm, with a recursive computation realized in ‘Horner-Compute’.
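For illustration, here is the same evaluation scheme as an ordinary loop in Python; each coefficient costs exactly one multiply and one add.

```python
# Horner's method: coefficients are given highest order first, and each
# step folds one coefficient into the running result.
def horner(x, coefs):
    result = 0.0
    for c in coefs:
        result = result * x + c
    return result

# Evaluating 2x^2 + 3x + 4 at x = 10:
print(horner(10, [2, 3, 4]))  # -> 234.0
```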

Since our coefficient series skips all the zeroes, the polynomial we want for our sinusoid approximation is actually in *x squared*. Finally, the entire Horner evaluation should be multiplied by *x* once more to restore the odd powers.

Sin(x order) {
    Use Algorithm
    Sin = x * Horner-Polynomial(x * x Reduce(Swap Map(Sine-Coef Expand(order 'arg + #1 #1))))
}

This little routine computes ‘order’ coefficients for the sinusoid approximation, then evaluates the resulting Taylor series at the value ‘x’. Let’s evaluate the most accurate period of this approximation, from -0.5 to 0.5, using the ‘Sample’ function we wrote earlier:

Sample('Approx:Sin(arg #5) - Crt:sin(arg * 2 * Pi) -0.5 0.5 #10) =>
(-0.00692543 -0.000447333 -1.12653e-005 5.96046e-008 -0 -2.98023e-008 -0 1.12653e-005 0.000446856 0.00692495)

Sample('Approx:Sin(arg #7) - Crt:sin(arg * 2 * Pi) -0.5 0.5 #10) =>
(-2.13067e-005 -5.96046e-007 -0 5.96046e-008 -0 -2.98023e-008 -5.96046e-008 -0 2.98023e-007 2.08298e-005)

Sample('Approx:Sin(arg #9) - Crt:sin(arg * 2 * Pi) -0.5 0.5 #10) =>
(-8.74228e-008 -5.96046e-008 -0 5.96046e-008 -0 -2.98023e-008 -5.96046e-008 -0 -2.38419e-007 -1.50996e-007)

Here we see the errors at ten points from -0.5 to 0.5. Increasing the approximation order by two seems to bump our worst-case error down by several orders of magnitude. Ok! Let’s hear it, in part 12.
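The end-to-end construction is straightforward to verify numerically. A Python sketch of the whole pipeline — coefficients, Horner evaluation on x², and the final multiply by x — reproduces the error magnitudes seen in the #7 column above:

```python
import math

# Illustrative Python version of the order-7 truncated Taylor series of
# sin(2*pi*x): Horner's method on x squared, then multiplied by x.
def approx_sin(x, order=7):
    coefs = [(2 * math.pi) ** (2 * n - 1) * (-1) ** (n - 1) / math.factorial(2 * n - 1)
             for n in range(order, 0, -1)]   # highest-order coefficient first
    result = 0.0
    for c in coefs:
        result = result * (x * x) + c
    return x * result

# Worst-case error over the period from -0.5 to 0.5:
err = max(abs(approx_sin(i / 100 - 0.5) - math.sin(2 * math.pi * (i / 100 - 0.5)))
          for i in range(101))
print(err < 1e-4)  # -> True
```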

To explore a *generic* process, let’s create a very simple filter;

Pole-Response(pole) {
    p = z-1('1 p * pole)
    Pole-Response = p
}

As the name suggests, it is a pole response, basically the impulse response of a one pole filter.

To examine its output, let’s drop back to the interactive mode of k2cli, familiar from part 1.

PS C:\Users\Vesa\code\Kronos.2.HG\Debug> .\k2cli.exe --loop 8 .\part10-generics.k
K2CLI 0.1 (c) 2011 Vesa Norilo
EXPR>Pole-Response(0.5)
Pole-Response(0.5) => 1
Pole-Response(0.5) => 0.5
Pole-Response(0.5) => 0.25
Pole-Response(0.5) => 0.125
Pole-Response(0.5) => 0.0625
Pole-Response(0.5) => 0.03125
Pole-Response(0.5) => 0.015625
Pole-Response(0.5) => 0.0078125

The ‘--loop’ command line switch causes every evaluation to be repeated a number of times, which is useful when we want to examine the output of a stateful process.

Now, suppose we would like a complex pole.

Knowing complex math, we might come up with:

Complex-Pole(pr pi) {
    (re im) = z-1('(1 0) (re * pr - im * pi re * pi + im * pr))
    Complex-Pole = (re im)
}

EXPR>Complex-Pole(0.7 0.7)
Complex-Pole(0.7 0.7) => (1 0)
Complex-Pole(0.7 0.7) => (0.7 0.7)
Complex-Pole(0.7 0.7) => (-0 0.98)
Complex-Pole(0.7 0.7) => (-0.686 0.686)

All right, so that wasn’t hard. We just provide real and imaginary parts separately, pass them both to the unit delay and spell out a piecewise complex multiplication as the recursive process.
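The recursion is easy to mirror with Python's built-in complex type, for illustration: each tick multiplies the state by the pole, so a pole of 0.7+0.7j rotates the state by 45 degrees per tick while its magnitude (about 0.9899) slowly decays.

```python
# Illustrative Python counterpart of Complex-Pole, using the built-in
# complex type so the piecewise multiplication is done for us.
def pole_response(pole, ticks):
    z = complex(1, 0)      # initial state, as in z-1('(1 0) ...)
    out = []
    for _ in range(ticks):
        out.append(z)
        z = z * pole       # one complex multiply per tick
    return out

seq = pole_response(0.7 + 0.7j, 4)
print(seq[2])  # ~0.98j, matching the third Kronos output above
```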

However, operating on complex numbers is very tedious if everything has to be written out in terms of real and imaginary parts. A custom type can be used to automate all of this.

The first thing we need is a type tag. As a best practice, I recommend naming the type tag descriptively and providing all functions related to the type in a package of the same name.

Type Complex

Package Complex {
    Cons(re im) {
        Cons = Make(Complex re im)
    }

    Real/Img(Z) {
        (Real Img) = Break(Complex Z)
    }
}

These functions provide a *constructor* and *accessors*. ‘Complex:Cons’ can be called to create a tagged complex number out of two parts. ‘Complex:Real’ and ‘Complex:Img’ can be used to break up a complex number in its constituent parts. Like so;

EXPR>Complex:Cons(1 42)
Complex:Cons(1 42) => <Complex(1 42)>
EXPR>Complex:Real(Complex:Cons(1 2))
Complex:Real(Complex:Cons(1 2)) => 1
EXPR>Complex:Img(Complex:Cons(4 99))
Complex:Img(Complex:Cons(4 99)) => 99

We will obviously want to provide arithmetics as well. This can be done by adding a form to the global functions ‘Add’ and ‘Mul’ that can be used to compute on complex numbers.

Add(a b) {
    Add = Complex:Cons(Complex:Real(a) + Complex:Real(b) Complex:Img(a) + Complex:Img(b))
}

Mul(a b) {
    Mul = Complex:Cons(
        Complex:Real(a) * Complex:Real(b) - Complex:Img(a) * Complex:Img(b)
        Complex:Real(a) * Complex:Img(b) + Complex:Img(a) * Complex:Real(b))
}

The application of these forms is governed by the use of *accessors* ’Complex:Real’ and ‘Complex:Img’. Any type that doesn’t conform to these accessors will not be processed by these forms of ‘Add’ and ‘Mul’. It works as desired;

EXPR>Complex:Cons(3 3) * Complex:Cons(0 1)
Complex:Cons(3 3) * Complex:Cons(0 1) => <Complex(-3 3)>
EXPR>Complex:Cons(1 2) + Complex:Cons(10 20)
Complex:Cons(1 2) + Complex:Cons(10 20) => <Complex(11 22)>
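The same pattern — a tagged value with a constructor, accessors, and arithmetic layered on top — can be sketched in Python for comparison. The class and method names here are my own, chosen to mirror the Kronos package:

```python
# Illustrative Python analogue of the Kronos Complex package.
class Cx:
    def __init__(self, re, im):       # plays the role of Complex:Cons
        self.re, self.im = re, im     # Complex:Real / Complex:Img

    def __add__(self, b):             # the Add form
        return Cx(self.re + b.re, self.im + b.im)

    def __mul__(self, b):             # the Mul form
        return Cx(self.re * b.re - self.im * b.im,
                  self.re * b.im + self.im * b.re)

    def __repr__(self):
        return f"<Complex({self.re} {self.im})>"

print(Cx(3, 3) * Cx(0, 1))    # -> <Complex(-3 3)>
print(Cx(1, 2) + Cx(10, 20))  # -> <Complex(11 22)>
```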

Looking back at the pole response, multiplication with the coefficient is actually the only thing ‘Pole-Response’ needs to be able to perform. Should it then work with our complex number type? Almost.

The compiler will spit out an error message, shortened here for brevity.

EXPR>Pole-Response(Complex:Cons(0.7 0.7))
Pole-Response(Complex:Cons(0.7 0.7)) =>
** Specialization Error E-9995 **
[ERROR CALL TRACE]
:Mul (f Complex) << E-9996:No valid forms >>
< ... SNIP ... >
<< E:-9977:Exception (Multiplication failed for f Complex) >>

The compiler complains that there are no forms of ‘Mul’ that can accept arguments of type ‘f’ for float and ‘Complex’. Looking at the only multiplication in the filter, ‘p * pole’, we can deduce that the type of ‘p’ must be a float since ‘pole’ is something we pass directly as a complex number.

Indeed, our unit delay is initialized with ’1, a function that returns a float. That is the source of the stray type in our program.

We could, of course, replace the expression with ‘z-1(‘Complex:Cons(1 0) p * pole)’. However, then our function would *only* accept a complex number.

We want to pick the initializer dynamically, according to the type of the coefficient. This can be accomplished by yet another function;

unity(a) {
    unity = When(Type-Of(a) == Float '1
                 Type-Of(a) == Complex 'Complex:Cons(1 0))
}

Now, we can complete the generic pole response.

Generic-Pole(p) {
    out = z-1(unity(p) p * out)
    Generic-Pole = out
}

And enjoy the results…

>.\k2cli.exe --loop 4 .\part10-generics.k
K2CLI 0.1 (c) 2011 Vesa Norilo
EXPR>Generic-Pole(0.8)
Generic-Pole(0.8) => 1
Generic-Pole(0.8) => 0.8
Generic-Pole(0.8) => 0.64
Generic-Pole(0.8) => 0.512
EXPR>Generic-Pole(Complex:Cons(0.707 0.707))
Generic-Pole(Complex:Cons(0.707 0.707)) => <Complex(1 0)>
Generic-Pole(Complex:Cons(0.707 0.707)) => <Complex(0.707 0.707)>
Generic-Pole(Complex:Cons(0.707 0.707)) => <Complex(-0 0.999698)>
Generic-Pole(Complex:Cons(0.707 0.707)) => <Complex(-0.706787 0.706787)>
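For comparison, a dynamically typed language gets a similar kind of genericity through duck typing; the sketch below (Python, illustration only) picks the unity value in the pole's own type, much as ‘unity’ does at compile time in Kronos:

```python
# Duck-typed generic pole response: the same recursion handles float and
# complex poles, with the initial state constructed in the pole's type.
def generic_pole(pole, ticks):
    state = type(pole)(1)   # "unity" in the pole's own type
    out = []
    for _ in range(ticks):
        out.append(state)
        state = state * pole
    return out

print(generic_pole(0.8, 3))  # 1.0, 0.8, then ~0.64
```

The important difference is that Python makes this choice at run time, whereas Kronos resolves it during compilation, so the generic code carries no dispatch cost.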

This may seem like more work than strictly necessary, but keep in mind the implications. If, everywhere in our code, we initialize delays with ‘unity’ and a similar ‘zero’ function, we can make our filters dynamically configurable to whatever signal and coefficients are fed to them.

In addition, we can add further types without touching old code. All that would be needed is to implement the necessary arithmetic along with the agreed-upon initializer routines, and suddenly all our previous signal processing code is able to handle the newly minted type.

As a final note, here is the sound of a pole very near the unit circle:

> .\k2cli.exe --audio-out "Audio:Clock(Complex:Real(Generic-Pole(Complex:Cons(Sqrt(0.99) 0.1))))" .\part10-generics.k

To start, we can create a resonator function and a test bed:

Noise() {
    rng = z-1('0.5d Audio:Clock(rng * 3.9999d * (1d - rng)))
    Noise = Coerce(Float rng) - 0.5
}

Reson(x0 freq reson) {
    x1 = z-1('0 x0)
    x2 = z-1('0 x1)
    y1 = z-1('0 y0)
    y2 = z-1('0 y1)
    r = Crt:pow(reson 0.125)
    y0 = x0 - x2 + y1 * 2 * r * Crt:cos(freq) - y2 * r * r
    Reson = y0 * 0.5 * (1 - r * r)
}

Filtered-Noise() {
    freq = OSC:In("/1/freq" '0.125)
    res = OSC:In("/1/res" '0)
    Filtered-Noise = Reson(Noise() freq res)
}

If you listen to ‘Filtered-Noise()’, you should get a resonant band of noise controllable by OSC methods ‘/1/freq’ and ‘/1/res’.
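A quick numerical sketch of the same signal chain in Python (an illustration, not the Kronos runtime): logistic-map noise fed through the two-pole difference equation of ‘Reson’ above.

```python
import math

# Illustrative Python version of Noise and Reson: a chaotic logistic map
# as the noise source, filtered by the resonator difference equation
#   y0 = x0 - x2 + 2*r*cos(freq)*y1 - r*r*y2, scaled by 0.5*(1 - r*r).
def filtered_noise(freq, reson, ticks):
    rng = 0.5
    x1 = x2 = y1 = y2 = 0.0
    r = reson ** 0.125
    out = []
    for _ in range(ticks):
        rng = rng * 3.9999 * (1.0 - rng)   # chaotic noise in (0, 1)
        x0 = rng - 0.5
        y0 = x0 - x2 + y1 * 2 * r * math.cos(freq) - y2 * r * r
        out.append(y0 * 0.5 * (1 - r * r))
        x2, x1 = x1, x0                    # shift the input delay line
        y2, y1 = y1, y0                    # shift the output delay line
    return out

samples = filtered_noise(0.125, 0.9, 1000)
print(max(abs(s) for s in samples) < 10)  # stable: rings but stays bounded
```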

We can co-opt some of the higher order functions we learned in part 5 to easily make a bunch of them. You may notice that if we feed the same input to all resonators, the only difference between them is the OSC method they listen to.

Let’s make a helper function that produces a resonator with proper OSC control paths when given the number of the band in question;

Import String

Filter-Band(sig number) {
    band-string = String:Concat("/" Coerce(String number) "/")
    freq = OSC:In(String:Concat(band-string "freq") '0.125)
    res = OSC:In(String:Concat(band-string "res") '0.4)
    Filter-Band = Reson(sig freq res)
}

We import ‘String.k’ to provide us with the concatenation method, ‘String:Concat’. Using it, we construct OSC methods based on the band number. Passing ‘#1′ to Filter-Band would yield “/1/freq” and “/1/res” and so on.

What remains is to construct a bank of filters;

Import Algorithm

Filter-Bank3() {
    Use Algorithm
    sig = Noise()
    Filter-Bank3 = Reduce(Add Map('Filter-Band(sig arg) #1 #2 #3))
}

This routine uses Map to turn a list of numbers ‘(#1 #2 #3)’ into three filters using the ‘Filter-Band’ function we defined. Note that we actually construct an anonymous function on the go; it passes ‘sig’ to ‘Filter-Band’ as the first argument, and its own argument (#1, #2 or #3 in this case) as the second. This is actually a closure; the anonymous function is bound to ‘sig’ which is outside its scope.

The outputs of these filters are then summed using ‘Reduce’. The ‘Use Algorithm’ directive inside the function merely indicates that functions can be found inside the ‘Algorithm’ package. You may recall that we had to use ‘Algorithm:Map’ and ‘Algorithm:Reduce’ before.
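The closure-plus-map-plus-reduce shape translates directly to other functional settings. Here is a Python sketch of the same idea, with a stub standing in for ‘Reson’ (the per-band gain is a hypothetical placeholder, just to make the summation visible):

```python
from functools import reduce

# Illustrative filter bank skeleton: a closure captures the shared input
# signal, map builds one band per number, reduce sums the outputs.
def filter_bank(sig, bands):
    def band(number):
        # Bound to 'sig' from the enclosing scope, like the anonymous
        # 'Filter-Band(sig arg) function in the Kronos version.
        return sig * (1.0 / number)   # hypothetical stand-in for Reson
    return reduce(lambda a, b: a + b, map(band, bands))

print(filter_bank(12.0, [1, 2, 4]))  # -> 21.0 (12 + 6 + 3)
```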

Not clever enough for you yet?

Filter-Bank(N) {
    Use Algorithm
    sig = Noise()
    Filter-Bank = Reduce(Add Map('Filter-Band(sig arg) Expand(N Increment #1)))
}

This filter bank function accepts a parameter specifying the number of bands you wish to create. The desired amount of OSC methods ranging from ‘/1/…’ to ‘/N/…’ will be made, and all the filters will be summed as in the three-filter example. Note that you must pass the number as a directive, such as ‘#5′. The reasons for this should become clear in part 11.

If this is still not enough for you, notice that you can address all ‘res’ methods with the OSC method ‘/*/res’. You can address the frequency of bands 3-7 with ‘/[3-7]/freq’. See the OSC specification for full address pattern matching reference.

To wrap up this tutorial, let me estimate the performance hit caused by using all this high level abstraction: 0. [1]

[1] DISCLAIMER: Compilation time may increase with growing abstract complexity.

Much like audio files, control signals are brought into your program externally, as a cooperation between IO.k and k2cli. At this stage, all control is input via OSC. k2cli listens to a UDP port, configurable via command line switches (please consult --help).

An OSC control signal is produced by the function OSC:In(*address-pattern* *initializer*). The address pattern can contain wildcards that comply with the OSC specification. The initializer function, much like in the case of delays, sets the value of this input before any OSC messages are received. All incoming OSC messages that match the address pattern *and* the type of the initializer will be routed to this spot.

Before doing anything serious, let’s verify that we have an OSC connection. To do that, let’s introduce a new reactive sink, ‘--console’. To enter the address pattern, we need a Kronos string directive with double quotes; to sidestep nastiness with command line escaping, let’s just put it in a function in a text file:

OSC-test() {
    OSC-test = OSC:In("/test" '0)
}

When you launch ‘k2cli --console "OSC-test()"’ and send OSC messages consisting of floating point values to the default port (32000), they should appear on screen.

To set up a simple, controllable oscillator, we can grab one of our oscillators from part 6 or 7 and use it with some OSC inputs:

OSC-synth() {
    OSC-synth = tri-osc(OSC:In("/freq" '0.5) * 0.25) * OSC:In("/amp" '1)
}

By sending floating point OSC control data to the addresses ‘/freq’ and ‘/amp’, the oscillator can be remotely controlled.

You may notice that changing the pitch and volume introduces discontinuities in the sound. This is because control events arrive as relatively abrupt steps rather than gradual changes. We can fix this by introducing a simple smoothing filter;

smooth-control(sig coef) {
    out = z-1('0 out + (Audio:Clock(sig) - out) * coef)
    smooth-control = out
}

This filter will let its output approach the input by the fraction ‘coef’ for every reactive tick. Reactivity is the key here; consider how we *clock* the incoming signal to audio by using Audio:Clock, as introduced in part 6. If this part were omitted, the update rate of this filter would be determined by its incoming signals, ‘sig’ and ‘coef’. It is quite likely that ‘sig’ is a control signal and ‘coef’ is a constant. Therefore, the filter would be updated at the rate of control signals. It would only cause a strange control lag, not smooth the signal in any way.
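The smoothing behaviour itself is easy to illustrate outside the reactive system. In Python, the same one-pole recursion applied to a control jump from 0 to 1 produces an exponential ease-in:

```python
# One-pole control smoothing, for illustration: every tick the output
# moves toward the input by the fraction 'coef'.
def smooth_control(values, coef):
    out, smoothed = 0.0, []
    for sig in values:
        out = out + (sig - out) * coef
        smoothed.append(out)
    return smoothed

# A step from 0 to 1, smoothed at 0.2% per tick:
steps = smooth_control([1.0] * 2000, 0.002)
print(round(steps[0], 5), round(steps[-1], 2))  # starts near 0, ends near 1
```

What the Python version cannot show is the clocking issue the paragraph above describes: here the loop runs once per sample by construction, whereas in Kronos that rate has to be established explicitly with Audio:Clock.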

OSC-synth-smooth() {
    freq = OSC:In("/freq" '0.5) * 0.25
    amp = OSC:In("/amp" '0.3)
    OSC-synth-smooth = tri-osc(smooth-control(freq 0.002)) * smooth-control(amp 0.002)
}

Here, the smoothed signal approaches the incoming control signal by 0.2% per sample. This is quite small, and you will hear an audible lag in the sound. Experimentation is always required to find the desired balance between discontinuity and responsiveness.

Since the trigonometric function is periodic, we could substitute a periodic ramp, or a *phasor*. This version maintains full numerical precision, no matter how long the oscillator keeps playing.

periodic-ramp(increment) {
    ramp = z-1('0 wrap + Audio:Clock(increment))
    wrap = ramp - Truncate(ramp)
    periodic-ramp = wrap
}

sine-osc(freq) {
    sine-osc = Crt:sin(periodic-ramp(freq) * 2 * Pi)
}

‘Truncate’ drops the fractional part of any number, with the resulting effect that ‘periodic-ramp’ is wrapped to a range between 0 and 1.
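The wrapping behaviour is easy to see in a plain Python sketch of the same phasor: because the accumulated value never grows beyond one period, precision never degrades no matter how long it runs.

```python
import math

# Illustrative phasor: accumulate, then wrap to [0, 1) by subtracting
# the truncated integer part, as 'Truncate' does in the Kronos version.
def periodic_ramp(increment, ticks):
    wrap = 0.0
    out = []
    for _ in range(ticks):
        ramp = wrap + increment
        wrap = ramp - math.trunc(ramp)   # drop the integer part
        out.append(wrap)
    return out

phase = periodic_ramp(0.3, 4)
print([round(p, 2) for p in phase])  # -> [0.3, 0.6, 0.9, 0.2]
```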

Naive geometric oscillators can be constructed directly from the periodic ramp. These are not antialiased, so they don’t sound very good, but present an opportunity to study periodic ramps and might be useful as low frequency oscillators.

saw-osc(freq) {
    saw-osc = 2 * periodic-ramp(freq) - 1
}

tri-osc(freq) {
    tri-osc = 2 * Abs(saw-osc(freq)) - 1
}

square-osc(freq) {
    square-osc = (2 & (saw-osc(freq) > 0)) - 1
}

All these examples employ the same basic principle. The triangular waveform is derived from the sawtooth by taking the absolute value, while the square wave is implemented with binary logic. The comparison operator produces either ‘TRUE’ or 0, depending on the values. Applying a bitwise and ‘&’ with 2 to this value returns 2 in the case of TRUE, and 0 otherwise. This is then centered by subtracting 1 to create a simple square wave oscillator.
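The same derivations, written out in Python for illustration (with an ordinary conditional standing in for the bitwise trick):

```python
# Naive geometric waveforms derived from a phasor in [0, 1).
def saw(phase):
    return 2.0 * phase - 1.0              # ramp from -1 to 1

def tri(phase):
    return 2.0 * abs(saw(phase)) - 1.0    # fold the ramp into a triangle

def square(phase):
    return 1.0 if saw(phase) > 0 else -1.0  # sign of the ramp

print(saw(0.75), tri(0.75), square(0.75))  # -> 0.5 0.0 1.0
```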

In part 8, we will learn to control audio synthesis with an OSC signal.


To understand audio, you need a basic idea of how *reactivity* works in Kronos.

A data path that has *reactivity* has an implicit clock or an update rate. Signals like audio are such data paths; an audio signal is meaningful only when the samples are streamed at a regular rate.

Any functions that take a *reactive* argument automatically become reactive themselves. The simplest example of a reactive streaming function is probably the following;

audiofile() {
    audiofile = Audio:File("snare.wav")
}

Audio:File is a function provided in IO.k that defines some hooks that tell k2cli to stream an audio file with the given file name. Kronos itself knows nothing about where the data is coming from; it merely relays the path along with a tag to k2cli, and is happy to receive the audio samples.

‘audiofile()’ is a reactive function; therefore its output needs to be refreshed at the audio rate. To enjoy the results, it must be connected to a *reactive sink*, an output capable of absorbing the data as it becomes available. Several such reactive sinks are defined by k2cli. The one most relevant here is ‘--audio-out’. This tells k2cli to stream the reactive function ‘audiofile()’ to the appropriate reactive sink.

PS > .\k2cli.exe --audio-out "audiofile()" .\part6-audiobasics.k
K2CLI 0.1 (c) 2011 Vesa Norilo
Reactive instance size 4 bytes
Reactive processing active... Press any key to quit
CPU: 1.1%

You should hear a sound. This is the part where you might want to do some device setup. The command line switch --setup tells k2cli to enumerate the available hardware. Subsequently you can note the device ids you want and use --audio-dev <out-id> <in-id> to select the preferred devices for playback.

Stateless processes are easy to implement for reactive audio streams. As a trivial example, let’s change the gain of the audio file by -6dB:

audiofile-6dB() {
    audiofile-6dB = Audio:File("snare.wav") * 0.5
}

As a result the audio plays softer. Let’s take a look at the multiplication. The left hand side is the reactive audio file source familiar from the first example. The right hand side is a constant — it is not reactive, as the value doesn’t vary in time.

The multiplication becomes reactive as well, being connected to a reactive source. The non-reactive constant doesn’t contest this. Therefore the entire function ‘audiofile-6dB’ becomes reactive, propagating reactivity to any computations dependent on its result. In practice, the system *knows* whether a signal path is audio or not.

As long as a program has only constants and audio inputs, the output will necessarily be audio, due to the dominance of reactivity over constants shown above. The situation becomes slightly more complicated when we introduce user interaction and different signal clocks.

Let’s implement a simple echo-delay with feedback. We can use the built-in ring buffer operator, demonstrated below:

audiofile-delay() {
    file = Audio:File("snare.wav")
    delay = rbuf('0 #4000 file + delay * 0.5)
    audiofile-delay = file + delay
}

‘#4000’ here defines the ring buffer size. But what is the unit? It is actually the number of *reactive ticks* of the signal input. In this case, the signal input comes from Audio:File, inheriting its reactivity and thus the audio sample rate of the system.

Please note that if the input to ‘rbuf’ is not reactive, the ring buffer never ticks and only outputs the initial value.
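A Python sketch of the same feedback echo shows the mechanics: a ring buffer of N samples holds the delayed signal, and half of the delayed output is fed back into the buffer. (A buffer of 4 samples is used here so the trace stays short; the Kronos example uses 4000.)

```python
from collections import deque

# Illustrative feedback echo: deque(maxlen=n) acts as the ring buffer,
# like rbuf('0 #N ...) in the Kronos version.
def echo(signal, n):
    buf = deque([0.0] * n, maxlen=n)
    out = []
    for x in signal:
        delayed = buf[0]               # oldest sample, n ticks old
        buf.append(x + delayed * 0.5)  # feed half the echo back in
        out.append(x + delayed)
    return out

# An impulse repeats every 4 samples at half the previous level:
impulse = [1.0] + [0.0] * 11
print(echo(impulse, 4))  # -> [1.0, 0, 0, 0, 1.0, 0, 0, 0, 0.5, 0, 0, 0]
```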

This becomes important once we would like to synthesize audio out of thin air.

To synthesize sound, a self-evolving signal is required. In the absence of an external audio stream, this always means recursion in Kronos. Let’s start by synthesizing a ramp signal that increases monotonically with time.

monotonic-ramp() {
    ramp = z-1('0 ramp + 1)
    monotonic-ramp = ramp
}

‘z-1’ is a unit delay, much like a ring buffer with a size of 1. The first parameter is the initializer function, called to set the contents of the unit delay at the start of the audio stream. From part 5 you may recognize it as an anonymous function that returns zero.

The input to the unit delay is ‘ramp + 1’. Incidentally, ‘ramp’ is defined as the output of the unit delay. This creates a recursion where, upon every reactive tick of the signal, ‘ramp’ is incremented by one. Please note that the only permitted recursive definitions are those that pass through a delay, as they are finitely computable.

Well and good, except for the fact that the example doesn’t work. The reason is that the recursive loop is not reactive; therefore there’s no tick. We can clock the loop by using a provided function that turns an arbitrary signal into audio by *clocking* it:

monotonic-ramp(increment) {
    ramp = z-1('0 ramp + Audio:Clock(increment))
    monotonic-ramp = ramp
}

‘Audio:Clock’ can be placed anywhere inside or upstream of the recursive loop, but the example shown above is recommended. The increment value itself is turned into an audio signal, activating the recursive loop that depends on it. This is handy when the argument to ‘monotonic-ramp’ is reactive in itself. It could well be adjustable from the user interface, subsequently reacting to slider movements. This placement makes sure that the audio clock overrides any reactivity supplied from outside the loop.

Having created a ramp, a simple sinusoidal oscillator can be constructed;

monotonic-ramp(increment) {
    ramp = z-1('0 ramp + Audio:Clock(increment))
    monotonic-ramp = ramp
}

sine-osc(freq) {
    sine-osc = Crt:sin(monotonic-ramp(freq * 2 * Pi))
}

Note the unit of ‘freq’: it is a fraction of the sample rate, so that 0 corresponds to 0Hz and 1 corresponds to the sampling rate. The highest valid frequency, the Nyquist frequency, is always at 0.5.

This simple function plays a pure tone, at least until the numerical precision of the ramp degrades. This is evident from undesired pitch changes as the oscillator plays on. It might happen sooner than you expect; therefore, in part 7, we present a better way to synthesize sines.

Basically, a higher order function is a function that expects a *function* as an argument.

In part 4, we defined a function that can add up a bunch of numbers, regardless of how many there are. It is highly likely that we would like to perform a similar operation on a data structure in the future, but with – say – multiplication. What we would like is a function that will operate on data in a certain way, but with a combiner function the caller could *plug in*.

In fact, such a function is already provided with this technology preview. It is called ‘Reduce’, and it resides in the package ‘Algorithm’. Import algorithm.k to be able to use it, like so:

PS C:\Users\Vesa\Kronos.Preview1> .\k2cli.exe .\algorithm.k
K2CLI 0.1 (c) 2011 Vesa Norilo
EXPR>Algorithm:Reduce(Add 1 2 3 4 5)
Algorithm:Reduce(Add 1 2 3 4 5) => 15
EXPR>Algorithm:Reduce(Mul 1 2 3 4 5)
Algorithm:Reduce(Mul 1 2 3 4 5) => 120

Here we perform two reductions on the set (1 2 3 4 5); first by adding the numbers, then by multiplying them. Reduce accepts a function parameter, here supplied as either ‘Add’ or ‘Mul’. Slightly less intuitively;

EXPR>Algorithm:Swap(1 2)
Algorithm:Swap(1 2) => (2 1)
EXPR>Algorithm:Reduce(Algorithm:Swap 1 2 3 4 5)
Algorithm:Reduce(Algorithm:Swap 1 2 3 4 5) => (5 4 3 2 1)

‘Reduce’ is highly general; as you learn functional programming, you will continuously come up with new things you can do with it.

Another highly general function is Map; likewise defined in algorithm.k and allowing you to perform a function repeatedly on a number of elements.

EXPR>Algorithm:Map(Sqrt 1 2 3 4 5)
Algorithm:Map(Sqrt 1 2 3 4 5) => (1 1.41421 1.73205 2 2.23607)

Here, we take a square root of each item in the argument and provide a result as a list.

Suppose we have a function of our own? That is also fair game for Map and Reduce;

PS C:\Users\Vesa\Kronos.Preview1> .\k2cli.exe .\algorithm.k .\part3-myfunc.k
K2CLI 0.1 (c) 2011 Vesa Norilo
EXPR>Algorithm:Map(Take-Square 1 2 3 4 5)
Algorithm:Map(Take-Square 1 2 3 4 5) => (1 4 9 16 25)

Here we import both algorithm.k and the very first function we implemented, and use that as the mapping function. Same goes for Reduce – however, note that the function passed to Map should operate on a single argument, while Reduce expects a *binary* or two-argument function.

It is not necessary to give a separate function definition to obtain something you can pass to Map or Reduce. Using a quote, you can denote a part of your program as an anonymous function.

EXPR>Algorithm:Map('Add(arg 100) 1 2 3 4 5)
Algorithm:Map('Add(arg 100) 1 2 3 4 5) => (101 102 103 104 105)

Here, we quote a function that adds 100 to its argument. In anonymous functions, the reserved symbol ‘arg’ is always bound to the entire argument (it may, therefore, be a list of things as well).
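Readers coming from other languages may recognize all three pieces. In Python, for comparison, ‘Reduce’ corresponds to functools.reduce, ‘Map’ to the built-in map, and the quoted anonymous function to a lambda:

```python
from functools import reduce
from operator import add, mul

# The same higher-order patterns in Python: reduce folds a binary
# function over a list, map applies a unary function to each element,
# and a lambda plays the role of the quoted anonymous function.
print(reduce(add, [1, 2, 3, 4, 5]))             # -> 15
print(reduce(mul, [1, 2, 3, 4, 5]))             # -> 120
print(list(map(lambda x: x + 100, [1, 2, 3])))  # -> [101, 102, 103]
```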

Higher order functions provide constructs that replace loops and iterations in procedural languages. They are arguably more flexible, and provide you ways to write efficient code.

Equipped with the basics, we are finally going to produce some audio in part 6.


Suppose we would like to define a function that can add up three arguments. The basic definition would be as follows;

Add-Bunch(a b c) {
    Add-Bunch = a + b + c
}

Fine and dandy, it works as expected.

PS C:\Users\Vesa\Kronos.Preview1> .\k2cli.exe .\part4-polymorph.k
K2CLI 0.1 (c) 2011 Vesa Norilo
EXPR>Add-Bunch(1 2 3)
Add-Bunch(1 2 3) => 6

Maybe we would also like to add four numbers. We can utilize polymorphism by providing a second version of the function that does just that:

Add-Bunch(a b c) { Add-Bunch = a + b + c }
Add-Bunch(a b c d) { Add-Bunch = a + b + c + d }

Or, if we’re really clever, utilize the first definition in the latter one.

Add-Bunch(a b c d) { Add-Bunch = a + Add-Bunch(b c d) }

Of course, it’s not very practical to write a separate form for every case.

Let’s think abstractly for a little bit. Adding a ‘bunch’ consists of adding its elements one at a time, until finally just one remains. I have a hunch we should start with a really stupid-seeming definition:

Add-Bunch(a) { Add-Bunch = a }
Add-Bunch(a as) { Add-Bunch = a + Add-Bunch(as) }

Huh?

Okay, so the sum of a bunch of one is the one element itself. But what about the second form? Based on our earlier knowledge, it appears it should work for the two-argument case.

What happens in the case of three arguments if we have just these two forms?

When a function is called, its arguments are bound to whatever was supplied by the caller. Let’s paraphrase this by unfolding the contents of ‘Add-Bunch’ in a hypothetical function call:

result = Add-Bunch(some-stuff)
/* is identical to ... */
(a as) = some-stuff
result = a + Add-Bunch(as)

There’s a new thing here; defining or *binding* ’(a as)’ to ‘some-stuff’. Briefly put, this attempts to split ‘some-stuff’ in two parts, defining ‘a’ to be the first part and ‘as’ to be the latter. When ‘as’ is passed to Add-Bunch, it is again split into two parts. This may seem confusing, but most of the time it *just works* - here is a brief example; more detail on the *type algebra* to be supplied later.

bunch-of-stuff = (1 2 3 4 5)
(a b c) = bunch-of-stuff
/* Result: a = 1, b = 2, c = (3 4 5) */

If you parsed this already I salute you for some serious brain power; you now realize that our odd little two-form definition can handle any number of arguments.

EXPR>Add-Bunch(3)
Add-Bunch(3) => 3
EXPR>Add-Bunch(3 5 9)
Add-Bunch(3 5 9) => 17
EXPR>Add-Bunch(3 5 9 1 2 4)
Add-Bunch(3 5 9 1 2 4) => 24
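The same two-form pattern works for any binary operation, not just addition. As a sketch, a ‘Mul-Bunch’ that multiplies any number of arguments needs only the same base case and recursive case:

```
Mul-Bunch(a) { Mul-Bunch = a }
Mul-Bunch(a as) { Mul-Bunch = a * Mul-Bunch(as) }
```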

If this feels slightly mysterious, don’t worry. Part 5 on the staples of functional programs will give you examples of how to use this abstract power in the real world.

Simple functions in Kronos are homomorphic. This means that they have a single form regardless of the context. This is likely what you would intuitively expect. Let’s revisit the simple function we defined in part 1.

Take-Square(x) { Take-Square = x * x }

The definition has *four* mandatory parts:

- Symbol, ‘Take-Square’
- Argument binding, ‘(x)’
- Scope, contents of {}
- A definition of the function symbol ‘Take-Square = …’

These four elements must be present in all function definitions.
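As a sketch, the four parts can be labeled with comments, assuming a definition may be split freely across lines:

```
/* symbol and argument binding */
Take-Square(x)
{	/* scope */
	/* definition of the function symbol */
	Take-Square = x * x
}
```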

The function definition can be placed anywhere in the source file. In case of simple functions, the placement has no effect.

At the moment, function definitions cannot be *nested*. This means that functions can’t be defined within another function definition. This restriction may be lifted at some point, though – nested functions combined with *closures* can be useful. But I digress.

You may use a function by supplying a symbol and an argument bundle together. There must be no whitespace between the symbol and argument. Like so:

/* Right, a function call */
nine? = Take-Square(3)
/* Wrong, syntax error! */
nine? = Take-Square (3)

This binds ‘nine?’ to the value of the function ‘Take-Square’ with the argument 3.

Functions can also appear without arguments. Take a look;

PS C:\Users\Vesa\Kronos.Preview1> .\k2cli.exe .\part3-myfunc.k
K2CLI 0.1 (c) 2011 Vesa Norilo
EXPR>Take-Square(3)
Take-Square(3) => 9
EXPR>Take-Square
Take-Square => ':Mul((Mul) pair(arg arg))
EXPR>

What emerges may seem like gibberish but is actually the abstract syntax tree associated with the ‘Take-Square’ function. The result of this evaluation is not any particular square, but rather a *program to take a square*.

This means that functions are *first class*. Basically, functions can appear as parts of expressions, be passed as arguments and be returned as values. Such a function can be explicitly called by using ‘Eval’, which takes a function definition and an argument.

EXPR>Eval(Take-Square 9)
Eval(Take-Square 9) => 81

This pattern will start to make sense once you study higher order functions.
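As a taste of that, here is a hypothetical sketch of passing a function value into another function and invoking it with Eval; the name ‘Apply-Twice’ is made up for illustration.

```
/* Call the function value fn on x, then on the result */
Apply-Twice(fn x) { Apply-Twice = Eval(fn Eval(fn x)) }
/* Apply-Twice(Take-Square 3) should yield 81 */
```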

In part 4, we will examine polymorphic functions.
