Results 51 to 56 of 56
  1. #51

Re: SAC SYSTEM BUILD QUESTIONS

    Quote Originally Posted by cgrafx View Post
    But then how do you decide when the channel is actually off or the source is just not playing anything?
    Well, I would have thought that the converter itself would send a message to the software that said: "This circuit is now closed" or "This circuit is now open". If you plugged a mic in, it would complete the circuit and if you unplugged it - it would render it open. The converter could notice that and report it. Not high tech, particularly, but useful. I presume the converter is sending events - one type could indicate "plugged in".

I'm sure the reality is more complex than that. Maybe there's a standard that was itself an extension of an older standard some hardware still uses, one that wouldn't have required it and might crash if an event type were added. Or... you know: something. I've added bags on the side of enough tired old war horses to know that you rarely get what seems obvious to expect.

  2. #52
    Join Date
    Jul 2006
    Location
    SF Bay Area
    Posts
    1,127

Re: SAC SYSTEM BUILD QUESTIONS

    Quote Originally Posted by John Ludlow View Post
    Well, I would have thought that the converter itself would send a message to the software that said: "This circuit is now closed" or "This circuit is now open". If you plugged a mic in, it would complete the circuit and if you unplugged it - it would render it open. The converter could notice that and report it. Not high tech, particularly, but useful. I presume the converter is sending events - one type could indicate "plugged in".

I'm sure the reality is more complex than that. Maybe there's a standard that was itself an extension of an older standard some hardware still uses, one that wouldn't have required it and might crash if an event type were added. Or... you know: something. I've added bags on the side of enough tired old war horses to know that you rarely get what seems obvious to expect.
No, there are no events. That isn't a thing. Audio data is just a continuous stream. A mic input doesn't have any physical switches attached to it; it doesn't know whether a connection has been made or not. The closest thing to a switched circuit would be a TRS connector, which can have switched contacts (normally open or normally closed), but those are almost always used for signal routing, not circuit signaling.

    Again, how would the system determine if there is supposed to be a signal there or not?

    Inputs don't only have microphones attached to them. An input could be a keyboard or an iPad that wouldn't be making any sound until a key is pressed or the play button is pushed.

    Microphones can also have mute switches on them.

In the rare instances when hardware has tried to be smart, it's actually caused problems, because there is a delay in turning back on.

There was a rev of the Behringer ADAs that did that, and it was problematic. Keep in mind, though, that was for output D/A signals only. There just isn't an effective way to do that on the input side of a system, because any analog input will always have some noise on it and will never produce an all-zeros data stream.

The other thing to consider is that silence is not an invalid data state, nor is it one you can make any hard-line assumptions about.
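That point can be sketched numerically. This is my illustration with made-up numbers, not measurements from any real converter: even a simulated "idle" input with a small noise floor never produces an exact all-zeros buffer, so a naive "samples == 0 means channel off" test never fires reliably.

```python
# Sketch (hypothetical noise figure): why an input never yields an all-zero stream.
import random

random.seed(0)

def noisy_silence(n_samples, noise_floor=1e-4):
    """Simulate an 'idle' input: no program material, just converter noise."""
    return [random.gauss(0.0, noise_floor) for _ in range(n_samples)]

buf = noisy_silence(1024)
all_zero = all(s == 0.0 for s in buf)
print(all_zero)  # False: the noise floor keeps samples nonzero
```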
    Last edited by cgrafx; 07-13-2019 at 02:13 PM.
    ---------------------------------------
    Philip G.

  3. #53

Re: SAC SYSTEM BUILD QUESTIONS

    Quote Originally Posted by cgrafx View Post
No, there are no events. That isn't a thing. Audio data is just a continuous stream. A mic input doesn't have any physical switches attached to it; it doesn't know whether a connection has been made or not. The closest thing to a switched circuit would be a TRS connector, which can have switched contacts (normally open or normally closed), but those are almost always used for signal routing, not circuit signaling.

    Again, how would the system determine if there is supposed to be a signal there or not?

    Inputs don't only have microphones attached to them. An input could be a keyboard or an iPad that wouldn't be making any sound until a key is pressed or the play button is pushed.

    Microphones can also have mute switches on them.

In the rare instances when hardware has tried to be smart, it's actually caused problems, because there is a delay in turning back on.

There was a rev of the Behringer ADAs that did that, and it was problematic. Keep in mind, though, that was for output D/A signals only. There just isn't an effective way to do that on the input side of a system, because any analog input will always have some noise on it and will never produce an all-zeros data stream.

The other thing to consider is that silence is not an invalid data state, nor is it one you can make any hard-line assumptions about.
    Oh - I didn't realize that's the way the data stream is structured. That's too bad.

When you plug a mic in, it completes the circuit and at least some electricity passes through on the analog side, which can be detected. Unless, as you pointed out, the switch on the mic is turned off, in which case the circuit is open. But if it is open, there is no need to monitor it until someone turns it back on.

The digital data stream is another matter, and that's probably the sticking point. If it is a dumb data stream, then everything in it is treated as digital audio and nothing further can be done with it. There's that 'old standard' holding it back. Doing it as events (the way MIDI, for instance, is done) would be smarter, in my opinion, because information besides audio data could be carried within the stream. It would also be tougher to do, because there are an awful lot of audio samples in a single second compared to, again for instance, MIDI, and the stream would have to be larger per event to carry the event baggage. But look at TCP/IP: that also carries a heavy, potentially constant, data stream, and it is done in discrete packets in which each packet declares what kind of data it represents. So it's a choice, not a limit of the technology. The way it's done now is leaner on the transport side, but it requires much more processing on the application side to accommodate it.
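As a thought experiment only (this is not any real audio transport standard, and every name below is made up), a tagged-packet framing like the one described above might look like this: each packet carries a one-byte type tag, so control events could ride the same stream as audio samples, at the cost of per-packet overhead.

```python
# Hypothetical tagged-packet framing: audio and events share one stream.
import struct

TYPE_AUDIO = 0
TYPE_EVENT = 1

def pack_audio(samples):
    # one-byte tag + 16-bit sample count + 16-bit PCM payload
    return struct.pack("<BH", TYPE_AUDIO, len(samples)) + \
           struct.pack(f"<{len(samples)}h", *samples)

def pack_event(code):
    # one-byte tag + 16-bit event code (e.g. a made-up "plugged in" code)
    return struct.pack("<BH", TYPE_EVENT, code)

def parse(packet):
    tag, n = struct.unpack_from("<BH", packet)
    if tag == TYPE_AUDIO:
        return ("audio", list(struct.unpack_from(f"<{n}h", packet, 3)))
    return ("event", n)

print(parse(pack_event(42)))  # ('event', 42)
```

The overhead here is three bytes per packet, which is exactly the "event baggage" trade-off mentioned above: negligible for large audio packets, heavy if every few samples were framed separately.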

And, as far as a perceived delay goes: I don't know the specific instance you mention, but that was surely down to the implementation. The on or off event need only be one sample (or packet-length) long, and it could be automatically removed from the stream before buffering.

All that is moot, though. The increase in hardware sophistication required to handle events, and the inter-company politics required to change the standard, aren't worth the benefit to software programmers of knowing that watching for audio to appear on a channel is pointless. Plus, it ain't broke now, and an awful lot of existing hardware would have to be scrapped to make a new standard stick. It's not going to happen.

    On the other hand - there's Dante...

  4. #54

Re: SAC SYSTEM BUILD QUESTIONS

    What about LatencyMon, Philip - do you use it or something else to determine dropped samples?

  5. #55
    Join Date
    Jul 2006
    Location
    SF Bay Area
    Posts
    1,127

Re: SAC SYSTEM BUILD QUESTIONS

    Quote Originally Posted by John Ludlow View Post
    What about LatencyMon, Philip - do you use it or something else to determine dropped samples?
I'm just looking at dropped buffers in SAC or SAW Studio. That tells me whether the system can process all the requested audio data in the time allocated by the buffer setting. The lower the buffer setting, the less time available to process data.

Conversely, the lower the buffer setting, the lower the input-to-output latency as well. So the goal is to get the buffer setting as low as possible without dropping buffers.

My current live system, running at 48 kHz and 1/64, is just a little over 6 ms input to output.

Running at 1/32 lowers the latency by about 1.5-2 ms, which would get me very close to 4 ms input to output.
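The arithmetic behind those numbers can be sketched, assuming "1/64" and "1/32" refer to 64- and 32-sample buffers (my reading of the settings, not anything from SAC's documentation), and that total in-to-out latency is a few buffer periods plus fixed converter latency:

```python
# Back-of-envelope buffer/latency trade-off at 48 kHz.
def buffer_period_ms(buffer_samples, sample_rate_hz):
    """Time to fill one buffer, in milliseconds."""
    return 1000.0 * buffer_samples / sample_rate_hz

for n in (64, 32):
    print(f"{n}-sample buffer @ 48 kHz: {buffer_period_ms(n, 48000):.3f} ms per buffer")
```

A 64-sample buffer at 48 kHz is about 1.33 ms and a 32-sample buffer about 0.67 ms, so saving a couple of buffer periods across the input and output chain is consistent with the roughly 1.5-2 ms improvement quoted above.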
    ---------------------------------------
    Philip G.

  6. #56
    Join Date
    Jul 2006
    Location
    SF Bay Area
    Posts
    1,127

Re: SAC SYSTEM BUILD QUESTIONS

    Quote Originally Posted by John Ludlow View Post
    Oh - I didn't realize that's the way the data stream is structured. That's too bad.

When you plug a mic in, it completes the circuit and at least some electricity passes through on the analog side, which can be detected. Unless, as you pointed out, the switch on the mic is turned off, in which case the circuit is open. But if it is open, there is no need to monitor it until someone turns it back on.

The digital data stream is another matter, and that's probably the sticking point. If it is a dumb data stream, then everything in it is treated as digital audio and nothing further can be done with it. There's that 'old standard' holding it back. Doing it as events (the way MIDI, for instance, is done) would be smarter, in my opinion, because information besides audio data could be carried within the stream. It would also be tougher to do, because there are an awful lot of audio samples in a single second compared to, again for instance, MIDI, and the stream would have to be larger per event to carry the event baggage. But look at TCP/IP: that also carries a heavy, potentially constant, data stream, and it is done in discrete packets in which each packet declares what kind of data it represents. So it's a choice, not a limit of the technology. The way it's done now is leaner on the transport side, but it requires much more processing on the application side to accommodate it.

And, as far as a perceived delay goes: I don't know the specific instance you mention, but that was surely down to the implementation. The on or off event need only be one sample (or packet-length) long, and it could be automatically removed from the stream before buffering.

All that is moot, though. The increase in hardware sophistication required to handle events, and the inter-company politics required to change the standard, aren't worth the benefit to software programmers of knowing that watching for audio to appear on a channel is pointless. Plus, it ain't broke now, and an awful lot of existing hardware would have to be scrapped to make a new standard stick. It's not going to happen.

    On the other hand - there's Dante...
Dante isn't going to change this process; it still has to deliver data in order and on time.

Digital audio was designed to work as a drop-in replacement for analog audio, and as such it has to live in that continuous, non-discrete environment.

TCP/IP Ethernet was specifically designed from the ground up as a discrete, packet-based protocol that can retransmit errors and handle both delayed and out-of-order packet delivery.

Real-time low-latency audio can't have any of those things. Data has to arrive in order, can't be delayed, and can't wait for retransmission of errored packets.

That's one of the reasons Ethernet can be such a difficult transport protocol for low-latency live streaming.

MIDI, by its very nature, is nothing but an event protocol. That is in fact all it does: process events. It's only marginally real-time, and it doesn't process audio.

Plugging a microphone into a preamp doesn't complete a circuit in the traditional sense. It's not a light bulb; it is a source of real-time data from a transducer responding to changes in air pressure.

The microphone preamp doesn't stop amplifying its input just because nothing is plugged in or the mic is muted. The A/D circuit doesn't care whether the signal is circuit noise or overloaded distortion; its job is simply to convert the source analog stream into its digital representation.

You also can't turn a disabled data stream back on faster than the buffer size, which will never be a single sample, because computers process audio as data packets defined by the buffer size.

If your buffer size is 32 samples and you've been force-feeding it zeros, you'll need 32 samples' worth of time before it can be filled with real data.

The buffers are filled and emptied on a regular schedule defined by the buffer size and sample rate. The whole thing only works if everything stays on schedule, with the input and output buffers filled continuously. If you miss your allotted slot, you introduce distortion into the output stream. Miss a few buffers and you probably won't hear it; miss a lot of buffers and it will sound like crap. Get your timing out of sync and you'll also get strange artifacts.

You can't just stop some things and add others. For live audio, the chain from analog input to analog output is continuous real-time data with a fixed low-latency offset. Once it's in the digital space it can take detours, as long as it doesn't deviate from the assigned input/output schedule. If it can get through the reverb detour or the compressor detour and still reach the output buffer before the train leaves, then all is good; otherwise you start dropping buffers. Drop enough buffers and you derail the train.
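The train-schedule idea can be modeled in a toy way (my sketch, nothing to do with SAC's actual internals): each processing pass must finish within one buffer period, or that buffer misses its slot and is dropped.

```python
# Toy deadline model: a buffer is dropped if processing exceeds one buffer period.
def count_dropped(processing_times_ms, buffer_samples=32, sample_rate_hz=48000):
    """Count processing passes that miss the per-buffer deadline."""
    deadline_ms = 1000.0 * buffer_samples / sample_rate_hz  # ~0.667 ms here
    return sum(1 for t in processing_times_ms if t > deadline_ms)

# Mostly fast passes, plus a few stalls (say, a heavy plug-in "detour"):
times = [0.3] * 97 + [0.9, 1.5, 0.8]
print(count_dropped(times))  # 3 of 100 buffers missed the deadline
```

The point of the sketch is that the deadline shrinks with the buffer size: at 32 samples there is only about two-thirds of a millisecond to get through every detour and back to the output buffer.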
    ---------------------------------------
    Philip G.
