Once in a while, people contact me (or I read messages on newsgroups or mailing lists) saying that some programs using SDL crash easily when audio is in use.

How audio works on Atari:

When you use audio DMA, you simply get a hardware interrupt; an application hooks a routine onto that vector to fill the next audio buffer to be played. As it is a hardware interrupt, this routine executes in supervisor mode and runs on the supervisor stack, which is small (2 KB?) compared to the user stack, which the application sets up at startup and which can use all available memory.
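As a rough sketch (the names and the double-buffer scheme here are illustrative, not real Atari or SDL symbols), the vector-hooked routine typically flips between two buffers and refills the one that just finished playing:

```c
#include <stdint.h>
#include <string.h>

#define BUF_LEN 256

/* Two buffers: while the DMA plays one, the interrupt refills the other. */
static int16_t buffer[2][BUF_LEN];
static int playing = 0;            /* buffer the DMA is currently reading */

/* Application-supplied routine that renders the next chunk of audio
 * (silence here, as a stand-in for real mixing). */
static void fill_buffer(int16_t *dst, int len)
{
    memset(dst, 0, len * sizeof *dst);
}

/* What the vector-hooked routine does on each end-of-buffer interrupt:
 * flip buffers and refill the one that just finished. On the Atari this
 * runs in supervisor mode, on the small supervisor stack. */
static void audio_interrupt(void)
{
    int finished = playing;
    playing ^= 1;                  /* DMA now plays the other buffer */
    fill_buffer(buffer[finished], BUF_LEN);
}
```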

But sound works in Doom:

Yep: the SDL audio callback (triggered by the audio hardware interrupt) that Doom runs is a very simple mixing routine. Most audio-related work is done outside of it, so the small supervisor stack is not a problem. There is also the fact that you don't want the audio callback routine to be entered again if the next hardware interrupt triggers while it is still running; a simple flag in SDL takes care of that.
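A minimal sketch of such a guard flag (variable names are mine, not SDL's actual implementation): if the callback is still busy when the next interrupt fires, the new interrupt returns immediately instead of nesting a second callback frame on the tiny supervisor stack.

```c
/* Re-entry guard: set while the callback runs, checked by the interrupt. */
static volatile int in_callback = 0;
static int mixed = 0;    /* buffers actually mixed */
static int skipped = 0;  /* interrupts dropped because the callback was busy */

static void audio_callback(void);

static void audio_interrupt(void)
{
    if (in_callback) {   /* previous callback not finished yet */
        skipped++;
        return;          /* drop this period rather than nest */
    }
    in_callback = 1;
    audio_callback();
    in_callback = 0;
}

static void audio_callback(void)
{
    mixed++;
    /* Simulate the next hardware interrupt arriving mid-callback:
     * the guard makes it return immediately instead of re-entering. */
    if (mixed == 1)
        audio_interrupt();
}
```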

So why does it not work in Scummvm?

Scummvm (as far as I know) executes a much more complicated routine to generate audio (MIDI emulation? sound chip emulation? maybe more, in addition to the usual mixing of all voices), and hence needs more CPU time and more stack space. As I said before, the hardware interrupt (and the SDL audio callback it runs on behalf of the application, Scummvm in this case) runs with the very small supervisor stack. When it runs out of stack space, it simply crashes. It's a wild guess, but it's the only cause I can imagine for these crashes, as they only happen when audio is enabled.

What can be done to fix it?

The only solution (from my point of view) is, from the hardware interrupt (HI), to switch back to user mode and the user stack, which gives us plenty of space to execute the audio callback routine. It's like triggering a 'JSR audio_callback' inside the application, whatever said application may currently be running.
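The idea can be illustrated with the POSIX ucontext API (purely an analogy: the real Atari fix would have to manipulate the m68k exception stack frame directly, and this sketch ignores the supervisor-mode complication discussed next). The "interrupt" switches onto a large, heap-allocated stack before running the heavy callback, then comes back:

```c
#include <stdlib.h>
#include <ucontext.h>

#define CALLBACK_STACK_SIZE (256 * 1024)   /* plenty of room, unlike ~2 KB */

static ucontext_t main_ctx, cb_ctx;
static int callback_ran = 0;

/* A callback far too stack-hungry for a 2 KB supervisor stack. */
static void heavy_audio_callback(void)
{
    volatile char scratch[48 * 1024];      /* would overflow 2 KB instantly */
    scratch[sizeof scratch - 1] = 0;
    callback_ran = 1;
}   /* returning resumes main_ctx via uc_link */

/* The "interrupt": save the current context, run the callback on a big
 * private stack, then resume -- the software analogue of the proposed
 * 'JSR audio_callback' on the user stack. */
static void run_callback_on_big_stack(void)
{
    getcontext(&cb_ctx);
    cb_ctx.uc_stack.ss_sp   = malloc(CALLBACK_STACK_SIZE);
    cb_ctx.uc_stack.ss_size = CALLBACK_STACK_SIZE;
    cb_ctx.uc_link          = &main_ctx;   /* where to go when it returns */
    makecontext(&cb_ctx, heavy_audio_callback, 0);
    swapcontext(&main_ctx, &cb_ctx);
    free(cb_ctx.uc_stack.ss_sp);
}
```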

The first problem is that the HI may have triggered while the application was already in supervisor mode, which happens easily when it calls the OS (TRAP #n on Atari). In that case, there is no point in switching back to the underlying context, as we would still be running in the same small supervisor stack space, and from there we cannot even switch to user mode. This is not acceptable if we want a solution that works in every case.

Other OSes simply run the audio-related work in a thread, which fills the audio buffer once in a while.
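Sketched with POSIX threads (an assumption about the general pattern, not any specific OS's implementation): the interrupt only signals that a buffer is needed, and a normal thread, with a normal-sized stack, does the heavy work:

```c
#include <pthread.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  need_audio = PTHREAD_COND_INITIALIZER;
static int pending = 0;   /* buffers requested by the "interrupt" */
static int filled  = 0;   /* buffers produced by the feeder thread */
static int done    = 0;   /* shutdown flag */

/* Stand-in for the hardware interrupt: just request another buffer. */
static void audio_interrupt(void)
{
    pthread_mutex_lock(&lock);
    pending++;
    pthread_cond_signal(&need_audio);
    pthread_mutex_unlock(&lock);
}

/* Feeder thread: drains requests and does the expensive mixing here,
 * where stack space is not a problem. */
static void *feeder(void *arg)
{
    (void)arg;
    pthread_mutex_lock(&lock);
    for (;;) {
        while (pending == 0 && !done)
            pthread_cond_wait(&need_audio, &lock);
        if (pending == 0 && done)
            break;                 /* all requests drained, shutting down */
        pending--;
        pthread_mutex_unlock(&lock);
        filled++;                  /* heavy mixing would happen here */
        pthread_mutex_lock(&lock);
    }
    pthread_mutex_unlock(&lock);
    return NULL;
}

static void feeder_stop(void)
{
    pthread_mutex_lock(&lock);
    done = 1;
    pthread_cond_signal(&need_audio);
    pthread_mutex_unlock(&lock);
}
```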

Could we also use threads?

On Atari, there is a port of GNU pth. It could nearly solve our problem, if it were not limited to MiNT (it requires some /dev/pipe support), and if it were preemptive (it is cooperative: the application must trigger a thread context switch once in a while for the other threads to execute).

When running under TOS, we could use either the 200 Hz timer or the VBL interrupt to call the pth library function that manages thread context switches, but as I said above, pth needs /dev/pipe support.

When running under MiNT, we simply cannot hook into either vector, because the application currently running may be different from the one whose threads we want to switch. And I won't even bring up the issue of memory protection between applications. You run MiNT because you want to multitask, don't you? In that case, MiNT should simply provide thread support for applications, and then we could patch pth to use it.