Dave Tosti-Lane
12-31-2004, 03:45 PM
(From another thread)
And then as the SAWStudio interface matured, I really lost interest and could no longer even see the need for the use of all these control surfaces... not one of them offered any real power and efficiency in using the program, and all that seemed to be going on was an ego satisfaction game for those that had to have physical faders.
Bob L
Hi Bob -
As always, I think you make good points for what may be most people's way of working. But let me describe one scenario in which the control surface makes sense - something I just confirmed for myself in a job using the recently posted 01V96 mct file.
I build a lot of very complex environmental soundscapes for playback in live theater work. Frequently, I start with up to 40 tracks of wind, rain, animal noises, birds, washing of surf, rivers, thunder, etc, etc. To make a soundscape work in the theater, and feel real, one of the important rules I use is taken from observation in real life - environmental sound never comes from one direction for any length of time - it shifts with a little rain patter over here, some wind over there, a cow in the distance back there, and maybe surf moving across the image from back left to right front, and at the same time getting closer and further away.
To work in a theater, there is also a pacing issue - a human pacing issue - sounds that move in a rhythm and pace that is fundamentally connected to the way we breathe and move are just more believable, or perhaps more correctly, more easily engage our willing suspension of disbelief.
So, before the control surface, my process involved getting a base mix down, then track by track, making automation moves to vary the direction and level of each sound. That meant, typically, a minimum of 10 passes through the entire piece, in real time, to get in the ballpark, with another 10 passes or more to fine tune. Yes, of course that is sped up by doing some eyeball work - making reasonable guesses about levels for specific sounds - and yes, some particular sounds are placed deliberately in a specific place and roughed in easily when dropped.
Now, the other day, I tried using the 01V96 as a controller for 16 stereo tracks, just laying down my various environmental components - and doing some pre-assigned pans so I had one track of left-favoring waves adjacent to one of the same regions set to right-favoring. I was amazed at how quickly I was able to get into the ballpark riding gain on the 16 faders in automation writing mode for a soundscape - in fact, the very first pass was pretty darn good, because I could do something I couldn't do otherwise - move multiple faders up and down in real time at the same time - really playing the control surface like an instrument. Using this method, I found I could do one or two rehearsal passes, and then get 85% to the final feel I wanted with a third pass. Then, I admit, I went back to tweaking individual tracks to get the final sound I wanted - but that ability to ride gain on multiple individual tracks, each in different directions, at different velocities, and in real time really did make it easier to build the environment I was hearing in my head before starting. At least for this one type of work, I'd argue that the external controller does make sense.
One thing that would be appreciated at some point would be a mode in which new moves would overwrite old moves - perhaps with a toggle of some kind. So, for instance, if I'm playing back an automated sequence, and grab a fader to move it, from that point on the fader ignores earlier automation and follows my new movements. Perhaps this could be toggled on and off somehow - it could be active only on one track, or a grouped set of tracks, at a time, perhaps only within a marked region if that made it easier to control the toggle?
As it works now, moving a fader in automation mode with previously written automation does take control, but only as long as the fader is actually moving. If I want to, say, lower the level as I'm listening, then when I get to the new lower level and stop moving the fader, even if I hold it in place with the mouse, the fader rapidly dances back and forth between the new position and the earlier automated position. That dance is replayed when you play back over the area in question. Of course, I realize I can modify the automation using various tools when not in playback, or I could mark a region and delete the automation ahead of my on-the-fly adjustment and be able to manually change automation in playback to achieve what I'm describing - that's the way I do it now. Perhaps what I'm asking for is a sort of "auto-delete-previous-moves" mode for making automation adjustments on selected tracks.
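Just to make the request concrete, here's a minimal sketch of the merge rule I'm imagining - this is purely my own illustration, not how SAWStudio actually stores automation. It assumes an automation lane is a list of (time, level) events; once the fader is grabbed, the previously written events from that point forward are dropped in favor of the new moves, so there's nothing left for the fader to dance back to:

```python
def latch_overwrite(existing, new_moves):
    """Merge live fader moves into an existing automation lane.

    existing:  list of (time, level) events already written, time-sorted.
    new_moves: list of (time, level) events captured from the live fader grab.

    From the moment the fader is grabbed (the first new event), every
    previously written event at or after that time is discarded, so
    playback follows the new moves instead of snapping back to the old ones.
    """
    if not new_moves:
        return list(existing)
    grab_time = new_moves[0][0]
    # Keep only the old automation that precedes the grab point.
    kept = [event for event in existing if event[0] < grab_time]
    return kept + list(new_moves)


# Example: old ride from -6 up to -3 and back; the live grab at 1.5 s
# pulls the level down, wiping the old events at 2.0 s and 4.0 s.
old = [(0.0, -6.0), (2.0, -3.0), (4.0, -6.0)]
live = [(1.5, -9.0), (3.0, -12.0)]
merged = latch_overwrite(old, live)
```

The per-track or per-group toggle I mentioned would just decide which lanes this rule applies to; lanes with the toggle off would keep today's behavior.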
Again the reason I would like to be able to do this, as opposed to making a change and then listening to it, is that in building these environments every track is relative to another, and being able to adjust while listening to the whole thing in real-time seems to work better for me - my decisions about level changes are related to things happening within a moving sequence of events, and frequently are reactive rather than proactive. That's not to say I can't work without it - I use STUDIO to produce everything that goes into the theater now - but this is a feature that I think I'd use a lot.
Sorry to run on so -
Dave Tosti-Lane