The Synchronous Data-flow Concept

There are many possibilities for doing real-time digital signal processing, but in computer music a data-flow model is generally preferred. Since native data-flow computers have never been commercially implemented, these principles have to be coded on conventional von Neumann computers.

Here we take the term data-flow to mean that data is "flowing" through operations, rather than the processor working through a stack of operations. The program is represented as a data-flow graph (a directed graph) in which the vertices, called actors, represent computations and the edges represent FIFO channels. These channels queue data values, encapsulated in objects called tokens, which are passed from the output of one computation to the input of another. A token can be anything meaningful, for example a sample array for time-domain operators or a spectral array for frequency-domain operators.
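
As an illustration, the following minimal C++ sketch shows how tokens, FIFO channels and actors can be modelled; the names Token, Channel and Actor are assumptions for this example only, not the library's actual classes.

    // Minimal sketch: a token holding one block of samples, a FIFO channel
    // queuing tokens, and an actor that consumes tokens from its input
    // channels and produces tokens on its output channels.
    #include <deque>
    #include <vector>

    struct Token {                       // one data value flowing on an edge,
        std::vector<float> samples;      // here: a block of time-domain samples
    };

    class Channel {                      // FIFO edge of the data-flow graph
    public:
        void push(Token t)  { fifo.push_back(std::move(t)); }
        bool empty() const  { return fifo.empty(); }
        Token pop()         { Token t = std::move(fifo.front());
                              fifo.pop_front(); return t; }
    private:
        std::deque<Token> fifo;
    };

    class Actor {                        // vertex of the graph: a computation
    public:
        std::vector<Channel*> inputs, outputs;
        virtual void fire() = 0;         // consume input, produce output tokens
        virtual ~Actor() = default;
    };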

By synchronous data-flow I mean that all data flows are synchronized, so that within one graph all data is processed at the same rate (which is not the sample rate, but usually a division of it). This makes scheduling much easier than with asynchronous computation. To handle different sample rates and similar concepts, a mechanism known as down- and up-sampling is implemented.
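
A down-sampling actor could, for instance, look like the sketch below, building on the Token/Channel/Actor classes above; again this is only an illustration, not the library's implementation.

    // A down-sampling actor for a synchronous graph: every actor fires once
    // per scheduling period; this one consumes one block of samples and emits
    // a block 'factor' times smaller, so the data rate on its output edge is
    // 1/factor of the rate on its input edge.
    class DownSample : public Actor {
    public:
        explicit DownSample(int factor) : factor(factor) {}
        void fire() override {
            Token in = inputs[0]->pop();
            Token out;
            out.samples.reserve(in.samples.size() / factor);
            for (std::size_t i = 0; i < in.samples.size(); i += factor)
                out.samples.push_back(in.samples[i]);  // keep every factor-th sample
            outputs[0]->push(std::move(out));
        }
    private:
        int factor;
    };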

In this system there is no concept of controlling the processed data with events; this is done in a separate library. Parameter control can be done through IPC concepts like shared memory.
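
As one possible illustration of such parameter control, the following sketch maps a small control structure into POSIX shared memory; the segment name /sdf_controls and the ControlBlock layout are purely hypothetical and not part of the library.

    // Hedged example: exposing a control parameter via POSIX shared memory.
    #include <fcntl.h>      // shm_open, O_* flags
    #include <sys/mman.h>   // mmap
    #include <unistd.h>     // ftruncate, close

    struct ControlBlock { volatile float gain; };  // written by a control process

    ControlBlock* open_controls()
    {
        int fd = shm_open("/sdf_controls", O_CREAT | O_RDWR, 0666);
        if (fd < 0) return nullptr;
        ftruncate(fd, sizeof(ControlBlock));
        void* mem = mmap(nullptr, sizeof(ControlBlock),
                         PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
        close(fd);                                 // mapping stays valid
        return mem == MAP_FAILED ? nullptr : static_cast<ControlBlock*>(mem);
    }
    // The signal-processing graph reads controls->gain once per block, while a
    // separate control program writes it through the same shared segment.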

So three main concerns are supported: handling the data for the data flow in the form of buffers, handling the computations in the form of operations, and handling the data-flow graphs, which do the scheduling/dispatching and form the data-flow interface of the whole library.
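
A rough sketch of these three building blocks might look as follows; Signal, Operation and Graph are assumed names for this illustration only, not the library's actual interfaces.

    // Sketch of the three building blocks: a signal buffer, an operation, and
    // a graph that schedules/dispatches the operations once per block.
    #include <vector>

    struct Signal {                        // buffer holding one block of samples
        std::vector<float> data;
    };

    class Operation {                      // one computation in the graph
    public:
        virtual void process(const Signal& in, Signal& out) = 0;
        virtual ~Operation() = default;
    };

    class Graph {                          // data-flow interface of the library
    public:
        void add(Operation* op, Signal* in, Signal* out) {
            nodes.push_back({op, in, out});
        }
        void run_block() {                 // static schedule: fire every node once
            for (auto& n : nodes) n.op->process(*n.in, *n.out);
        }
    private:
        struct Node { Operation* op; Signal* in; Signal* out; };
        std::vector<Node> nodes;
    };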

C++ was used because the required object-oriented concepts are already part of the language, and because most existing signal-processing code is written in C, so easy porting and development of signal-processing code was an additional concern.

The power of this system lies mainly in the simplicity of calling the operation stack, the efficient use of buffers (also called signals) and, probably most important of all, a code-optimized library of operations. It has also been found that keeping the code simple gives the compiler more opportunities for optimization, especially since different target CPU architectures handle this differently.
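
For example, an operation kept deliberately simple, such as the plain gain loop below, is easy for a compiler to auto-vectorize for whatever CPU architecture is targeted; the function is illustrative and not taken from the library.

    // A trivially structured inner loop that compilers can optimize well.
    #include <cstddef>

    void apply_gain(const float* in, float* out, std::size_t n, float gain)
    {
        for (std::size_t i = 0; i < n; ++i)   // simple loop, easy to vectorize
            out[i] = in[i] * gain;
    }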

Memory usage is no longer an issue on today's computers, but caches have grown as large as the whole RAM of earlier machines, so fitting all signals, storage and code into the secondary (or third-level) cache, or better into the primary cache, speeds up processing significantly. Efficient buffer handling can therefore improve computation speed.
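
One way to keep the working set small, sketched below as an assumption about the strategy rather than the library's actual code, is to reuse a small fixed pool of buffers for all signals in a graph so that they remain cache-resident.

    // Reusing a few fixed buffers instead of allocating one per signal.
    #include <array>
    #include <cstddef>
    #include <vector>

    class BufferPool {
    public:
        explicit BufferPool(std::size_t block_size) {
            for (auto& b : buffers) b.assign(block_size, 0.0f);
        }
        std::vector<float>& get(std::size_t i) {
            return buffers[i % buffers.size()];   // buffers are shared in turn
        }
    private:
        std::array<std::vector<float>, 8> buffers;
    };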
