22 February 2007

Ideas, just ideas

  1. detect when a song is being played or not, and use this information to switch the reverb on the singer's voice on and off.
    At first I thought of using the singer's mic as an input, but that's not enough: in practice the algorithm must tell the difference between someone talking and someone singing. Is it enough to detect whether music is playing? No, because during a cappella passages the reverb must remain switched on!
    see http://ieeexplore.ieee.org/iel5/9248/29346/01326825.pdf
  2. detect the tempo of a song and use it as the delay time parameter of a delay effect.
  3. detect the pitch/tone at which a singer is singing and (when switched on) automatically pull the pitch to the closest existing note. In practice this might only be useful for specific notes: those that are difficult for the singer to reach. For the rest it must stay off, because part of a singer's value as an instrumentalist is the ability to form notes between the existing ones (not to forget glissandos).
    Might already exist: autotune (DAFX p336)
  4. another one that is also mentioned in DAFX p336: detect the amplitude and use it as a parameter for compression.
  5. detect non-silence and let the compressor, with its high output gain, work only then. This way the noise is not amplified during silence and does not disturb the sound.
  6. maybe a compressor that only compresses the frequency bands (or even just the exact pitch) the track is currently playing in. Wait, I thought of this as a way to avoid making the noise problem worse in quiet passages, but it doesn't help with that. Better idea: the same thing, but with a noise gate.
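For idea 2, once a tempo tracker has produced a BPM estimate (the hard part, not shown here), turning it into a delay time is just arithmetic. A minimal sketch; the function name and the note-division parameter are my own assumptions, not part of any existing tempo tracker:

```python
def delay_time_seconds(bpm, division=1.0):
    """Convert a detected tempo to a delay time.

    division: 1.0 = quarter note, 0.5 = eighth note, 1.5 = dotted quarter.
    """
    if bpm <= 0:
        raise ValueError("tempo must be positive")
    # one beat lasts 60/bpm seconds; scale by the chosen subdivision
    return 60.0 / bpm * division

# a quarter-note delay at 120 BPM is half a second:
# delay_time_seconds(120)      -> 0.5
# delay_time_seconds(120, 0.5) -> 0.25 (eighth note)
```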
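Idea 3 amounts to snapping a detected fundamental frequency to the nearest note of the equal-tempered scale. A hedged sketch, assuming a pitch detector already delivers the frequency in Hz and using the standard A4 = 440 Hz reference:

```python
import math

def snap_to_nearest_note(freq_hz, reference=440.0):
    """Pull a frequency to the closest equal-tempered note."""
    if freq_hz <= 0:
        raise ValueError("frequency must be positive")
    # distance from A4 in semitones, rounded to the nearest whole semitone
    semitones = round(12 * math.log2(freq_hz / reference))
    return reference * 2 ** (semitones / 12)

# 450 Hz is closest to A4, so it gets pulled down to 440 Hz:
# snap_to_nearest_note(450) -> 440.0
```

A real autotune (as in DAFX) would then resynthesize the voice at the snapped pitch; this sketch only computes the target frequency.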
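Ideas 5 and 6 boil down to a noise gate driven by a level detector: only let the signal (and any make-up gain after it) through when a smoothed envelope of the input is above a threshold. A minimal per-sample sketch; the threshold and smoothing coefficient are assumed values that would need tuning in practice:

```python
def gate(samples, threshold=0.02, smoothing=0.9):
    """Very simple noise gate: mute the output while the smoothed
    level of the input stays below the threshold."""
    out = []
    env = 0.0
    for x in samples:
        # one-pole envelope follower on the absolute signal level
        env = smoothing * env + (1.0 - smoothing) * abs(x)
        out.append(x if env >= threshold else 0.0)
    return out

# near-silent input is muted entirely; a loud passage passes through
```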

1 comment:

ronny said...

We can also extract features from the live scene itself: lights, movement of the artists, ...
I guess this is a nice idea to show people what's possible, but not one to implement for this thesis