The starting premise for the Ayyi project is that for GPL’d audio production software to exceed the capabilities of existing closed solutions, the modularity of its architecture needs to be improved. See the distributed development page for some ramblings on this subject.
Ayyi splits functionality along the natural and familiar MVC lines. In contrast to other similarly architected systems in the domain, many of which are more oriented toward synthesis, Ayyi takes as its centrepiece the Song (or Sequence, or Composition) model. The other significant model is the Mixer. Both the user interfaces and the audio/midi renderers are considered to be Views. This is in fact how many applications are divided internally (cf. ARDOUR::Session, ARDOUR::engine, ARDOUR::UI), but here they are further separated into distinct processes.
This split gives Ayyi the essential quality of being completely language- and toolkit-agnostic.
The example software on this site concentrates on the most basic level of separation - that between the “engine” and the gui (eg the audio renderer is still integrated with the Model).
The most essential characteristic of Ayyi is the interface it defines between the participating elements of the production system.
The IPC used is a combination of messaging and shared memory (shm):
The most recent implementation of the messaging system uses Dbus and appears to be working satisfactorily. Dbus is not the most efficient way of transferring data, but so far this does not seem to constitute a bottleneck in the system. The example clients do have some abstraction of the IPC mechanism, with a view to supporting Osc or Corba, but after some time spent investigating other distributed object systems, plans to support alternatives to Dbus have been shelved for the time being. Dbus offers all the required features, such as discoverability and type marshalling, without being overly complicated, and has excellent glib integration and library support.
Object listings and properties are shared between clients using SHM. Each server creates and manages one or more SHM segments. Other clients can request read-only access to the segments they are interested in.
With current implementations, all clients must be on the same machine, though the messaging systems are network-capable, and it would be relatively simple to request the shared data currently in shm via a message instead.
The use of SHM is unpopular with some developers, primarily because network clients cannot access it. Having to store application object properties in a language-agnostic (POD) manner is also a non-trivial burden, though in practice it is not especially difficult. For a c/c++ server developer, library code abstracts the allocation and destruction of public data in the shared memory segment, and the application references it as it would any other data.
It could be argued that having the interface defined in two separate places (half in SHM and half in the messaging system) is bad practice. But the performance benefit of direct access to properties in shared memory, without making asynchronous calls, can't be denied, and there is nothing in the architecture to prevent this access being duplicated in the message interface to satisfy network clients. In fact there is nothing to prevent a participating server from doing everything using RPC, and not using SHM at all.
Unlike DCOM/.Net or Cocoa, there is no library support for degenerating a remote server into an in-process one, though nothing prevents this from being done.
A server process - a service that exports data - can be referred to as an “engine”.
The job of the Song engine is to make its state available to connected audio and graphical renderers, to make updates in response to requests from connected controllers, and to broadcast changes. The same applies to the Mixer and other engines.
There is currently only one publicly available engine. The Ayyid1 package uses a slightly modified version of libardour that adds the ability for external processes to read and write some aspects of its model.
The gui functions by reading the Song Model from shared memory and presenting it to the user. In response to user actions, the gui makes asynchronous function calls, via the messaging service, to the server. The gui also registers to receive signals for the Song objects it is interested in. On receipt of a signal, the gui rereads the shm model and updates the display accordingly.
Most of the existing code uses Gtk, but it could just as well be written with Qt, OpenGL, or any other toolkit.
There are two example gui apps currently available:
Now that the project works as a proof-of-concept, the next steps are:
to add more advanced functionality to the current implementation.
to canvass opinion, leading to a stable, documented api suitable for wider use.
perhaps add library support for other languages, eg, Python.