Adaptive Digital Audio Effects
This doctoral research concerns digital audio effects and their control; it deals with technologies of sound signal processing. Digital audio effects usually have controls that are constant in time or set by the user: except when the user modifies the control values, the controls do not evolve. In this study, we automate the effect controls by extracting sound features and mapping them to the controls. This is what we call adaptive effects: the controls adapt to the sound, as in the compressor/expander, which analyzes the signal's dynamics and modifies them according to nonlinear mapping laws.
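The compressor example above can be sketched in a few lines. This is a minimal illustration, not code from the thesis: the gain (the control) adapts to an extracted feature (the frame RMS envelope) through a nonlinear mapping law; the threshold, ratio, and frame size are illustrative values.

```python
import numpy as np

def adaptive_compressor(x, threshold_db=-20.0, ratio=4.0, frame=512):
    """Toy feedforward compressor: the control (gain) adapts to an
    extracted feature (the RMS envelope) via a nonlinear mapping law."""
    y = np.array(x, dtype=float, copy=True)
    for start in range(0, len(x), frame):
        seg = y[start:start + frame]
        rms = np.sqrt(np.mean(seg ** 2) + 1e-12)         # feature extraction
        level_db = 20.0 * np.log10(rms + 1e-12)
        if level_db > threshold_db:                      # nonlinear mapping law
            gain_db = (threshold_db - level_db) * (1.0 - 1.0 / ratio)
        else:
            gain_db = 0.0
        y[start:start + frame] = seg * 10.0 ** (gain_db / 20.0)
    return y
```

A loud signal is attenuated towards the threshold, while a signal below the threshold passes through unchanged; the same feature-to-control loop generalizes to any effect parameter.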
An example of what the generalization of adaptive control brings is adaptive time-scaling: a spoken or sung voice can be scaled only on vowels, thus keeping the consonants unchanged (and preserving the intelligibility of the text), contrary to what happens when we also slow down consonants (in that case, [k] becomes [g] and [t] becomes [d]).
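The adaptive part of that effect is the decision of where to stretch. The sketch below computes a per-frame stretch ratio from two crude features (energy and zero-crossing rate) used here as a stand-in vowel detector; it is not the thesis's segmentation method, and the thresholds are illustrative. The actual time-scaling of the selected frames would then be done by a technique such as a phase vocoder.

```python
import numpy as np

def vowel_adaptive_stretch_ratios(x, frame=1024, alpha=2.0,
                                  energy_thresh=0.01, zcr_thresh=0.15):
    """Per-frame time-stretch ratios for adaptive time-scaling:
    frames classified as vowel-like (high energy, low zero-crossing
    rate) get ratio `alpha`; consonant-like frames keep ratio 1.0."""
    ratios = []
    for start in range(0, len(x) - frame + 1, frame):
        seg = x[start:start + frame]
        energy = np.mean(seg ** 2)                       # loudness feature
        zcr = np.mean(np.abs(np.diff(np.sign(seg)))) / 2.0  # noisiness feature
        is_vowel = energy > energy_thresh and zcr < zcr_thresh
        ratios.append(alpha if is_vowel else 1.0)
    return np.array(ratios)
```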
Since we focus on developing new tools for electroacoustic composition, the sounds used evolve in time (in pitch, timbre, dynamics, duration, spatialization). This means that the extracted features are not only instantaneous, but also long-term and high-level features.
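The distinction between instantaneous and long-term features can be made concrete with a minimal sketch (illustrative, not the thesis's feature set): a short-term feature computed per frame, and a long-term feature obtained by smoothing it over many frames so that it tracks the slow evolution of the sound.

```python
import numpy as np

def frame_rms(x, frame=512):
    """Instantaneous (short-term) feature: RMS computed per frame."""
    n = len(x) // frame
    frames = x[:n * frame].reshape(n, frame)
    return np.sqrt(np.mean(frames ** 2, axis=1))

def long_term_mean(feature, span=20):
    """Long-term feature: running mean of a short-term feature over
    `span` frames, tracking the slow evolution of the sound."""
    kernel = np.ones(span) / span
    return np.convolve(feature, kernel, mode="valid")
```

On a sound alternating between loud and quiet frames, the frame RMS fluctuates strongly while its long-term mean varies much less, which is what makes it usable as a slowly evolving control.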
Research Domain: Acoustics, Signal Processing and Computer Science applied to Music.
We apply existing techniques, already in use in speech processing as well as in music processing, and also aim to develop new solutions to new problems.
Description of the Context
Each analysis-synthesis technique is developed and optimized for a given class of sounds. We do not focus on optimizing these techniques, but rather on controlling them. Control from sound features can be used in deferred time as well as in real time. Conversely, gestural control of the mapping is only available in real time (except for automation curves provided by the user through the graphical interface). Each implementation may need specific algorithms to allow time-varying control.
We propose a generalization of adaptive effects: features can be extracted from the sound being processed (auto-adaptive or feedforward adaptive effect), from another sound (cross-adaptive or external-adaptive effect), or from the output sound (feedback adaptive effect). We propose the use of any sound feature, since there is no a priori "good-sounding" feature; it depends on what the musician wants to do with the effect. We also propose a mapping structure that takes into account other research carried out in the team, in order to provide high-level gestural control of the adaptive effect.
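A mapping structure of this kind can be sketched as a two-stage chain. The sketch below is an illustration of the idea, not the structure defined in the thesis: each feature is normalized and warped by a nonlinear law, the warped features are combined by a weighted sum, and the result is rescaled to the range of the effect control. All function and parameter names are hypothetical.

```python
import numpy as np

def map_features_to_control(features, weights, warp=np.tanh,
                            ctrl_min=0.0, ctrl_max=1.0):
    """Two-stage mapping: normalize and warp each feature curve,
    combine them by a weighted sum, rescale to the control range."""
    warped = []
    for f in features:
        f = np.asarray(f, dtype=float)
        span = f.max() - f.min()
        norm = (f - f.min()) / span if span > 0 else np.zeros_like(f)
        warped.append(warp(norm))                 # nonlinear warping law
    combo = sum(w * v for w, v in zip(weights, warped)) / sum(weights)
    cmin, cmax = combo.min(), combo.max()
    if cmax > cmin:                               # rescale to control range
        combo = (combo - cmin) / (cmax - cmin)
    return ctrl_min + (ctrl_max - ctrl_min) * combo
```

A gesture input could enter the same chain as one more weighted curve, which is one way to layer high-level gestural control on top of the sound features.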
We use scientific software to develop, test and optimize most of the effects (Matlab, for deferred-time effects). We also use the Max/MSP environment for real-time effects and their gestural control.