Audio synch issues with CBS HD
Hello,
I've been receiving over-the-air HD signals for almost 3 years now. I've always used a set-top box from either DirecTV or Dish (depending on who I was subscribed to at the time). I have a question about the HD broadcast from CBS. I notice that on all HD broadcasts the audio is about a half to a full second behind the video. This applies to all CBS shows in HD. If it's an SD show, the audio/video is fine. CBS is the only station I notice this on. I receive about 10 other OTA digital signals, and the audio/video on their HD broadcasts is fine on all of them.

I seem to remember this problem starting when I switched from DirecTV to Dish and switched set-top receivers. I wouldn't think the receiver had anything to do with it (since it only happens on one station), but I had it replaced just in case and I still have the problem.

I contacted our local CBS affiliate (KDKA - I'm in the Pittsburgh area) and talked to their HD technician. He said he checked their equipment and everything seemed fine. He said they can occasionally get out of synch, but they tend to correct themselves (or someone there does it).

I can't imagine this is a problem with CBS nationally, or else there would be a lot of complaints. I can't think it's a problem with the local affiliate either, or else there would be some complaints, I would think (unless I'm the only person watching CBS HD over-the-air in Pittsburgh). I can't seem to pinpoint what could be causing the problem. It's very distracting to watch prime-time shows or Letterman when the audio doesn't match the video. Does anyone have any ideas, or maybe you are experiencing the same thing? Thanks.
Yes, I've noticed this too... when I visit Pgh, KDKA is off with the sound sync. I thought that it may have been the cable company, but this confirms that it has something to do with the network. Of course, I would take unsynched audio on HD now over what is offered in this ******** I currently live in. Just got a new 51" and can't get football in HD, not even OTA, as these hillbillies think they should be paid to "share" HD with the public. West Virginia sucks... it's a good thing I'm only here temporarily. Our tax dollars went towards establishing digital and HD parameters through research, etc., but we can't take advantage of them.
wrote in alt.tv.tech.hdtv:

| I contacted our local CBS affiliate (KDKA - I'm in the Pittsburgh area)
| and talked to their HD technician. He said he checked their equipment
| and everything seemed fine.

It's completely the fault of the station *if* you are not feeding the audio through any processing. If you are, that might be exaggerating a slight problem with that station. Try to see if anybody else has this problem by reading the Pittsburgh area thread in the "Local HDTV" forum at AVS Forum.

Despite what the tech said, often nobody from the station is actually monitoring the digital signal, so many things go wrong and nobody notices. "Checking the equipment" just says it is set up the way they think it should be, not that the signal is correct. The only thing you can do is complain and complain some more. Even if a station can't afford to have somebody sitting and watching the digital signal (as received, not "in house"), they could still afford to record it every night and take a look at the recording the next day. That should keep problems from repeating like they seem to.

-- Jeff Rife | http://www.nabs.net/Cartoons/OverThe...Workaholic.gif
wrote in message:

| I've tried the audio both ways - directly from the set-top receiver to
| the TV and also through my AV receiver - and both ways it is delayed.

It must be local. No synch problems in the LA area on 2 different cable systems.
I've tried the audio both ways - directly from the set-top receiver to the TV, and also through my AV receiver - and both ways it is delayed. I haven't heard of anyone having problems when I've asked before, but now that I know others have experienced it, I will try to contact the station again and ask at the AVS Forum if anyone else has noticed this or has any ideas. Thanks.
Unfortunately, in the design of current digital TV systems, the audio and video signals are independently compressed using two entirely different methods, then mixed / muxed together with time stamps to allow the receiver to get things back into synch. This bifurcation of the data can be, and often is, corrupted by DVD authoring programs, retransmission of the signal by local broadcasters, and video editing, compositing, or studio switching equipment. We are still witnessing the infancy of the HDTV broadcast industry, with a lot of glitches and problems. The local affiliate / station's chief engineer is the guy to talk to.

Smarty
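For background on how the receiver keeps the two streams aligned: in an MPEG transport stream, each audio and video access unit carries a presentation time stamp (PTS) on a 90 kHz clock, and the decoder presents a unit when the program clock reaches its stamp. A minimal sketch of the idea in Python (the function and field names here are invented for illustration, not any real decoder's API):

    PTS_CLOCK_HZ = 90_000

    def pts_to_seconds(pts: int) -> float:
        """Convert a 90 kHz PTS value to seconds."""
        return pts / PTS_CLOCK_HZ

    def due_units(units, now_pts):
        """Yield demuxed units (audio or video) whose presentation time has arrived."""
        for unit in units:
            if unit["pts"] <= now_pts:
                yield unit

    # If a station's equipment stamps the audio 45,000 ticks late, every
    # receiver downstream faithfully plays it half a second behind the picture:
    print(pts_to_seconds(45_000))  # 0.5

Which is also why a stamping error introduced at one station shows up identically on every brand of receiver.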
On Thu, 22 Sep 2005 18:23:49 -0400, Smarty wrote:
| Unfortunately, in the design of current digital TV systems, the audio
| and video signals are independently compressed using two entirely
| different methods, then mixed / muxed together with some time stamps to
| allow the receiver to get things back into synch.

I've run into this while editing HDTV captures and converting them to formats that will fit on a DVD-R (the largest storage format on my HTPC).

Some software grabs the first time stamps, calculates the difference between the audio and video, and assumes that the delay will stay the same. That's a bad assumption. It may stay the same for a long period, but it may change at any time. Once the system starts ignoring the incoming timestamps, the timing is lost for everything downstream.

-- hac
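To make the two strategies concrete, here is a sketch under the same invented data model as above (nothing here is taken from a real editing package):

    from dataclasses import dataclass

    @dataclass
    class Unit:
        """One demuxed audio or video access unit (illustrative only)."""
        pts: int            # presentation time stamp, 90 kHz ticks
        payload: bytes = b""

    def sync_by_initial_offset(audio, video):
        """The fragile shortcut: measure the A/V gap once, assume it holds forever."""
        offset = audio[0].pts - video[0].pts
        for a in audio:
            a.pts -= offset  # any later PTS discontinuity now goes uncorrected

    def sync_by_timestamps(units):
        """The robust approach: present every unit at its own stamped time."""
        return sorted(units, key=lambda u: u.pts)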
I've had the same experience. I personally feel that the decision to independently compress the video and audio and then maintain their synch was a short-sighted one. Using an "interleaved" audio/video format with inherent time synchronization - DV tape, for example - does impart a penalty in storage, but it makes the whole process of presentation much more reliable in terms of lip synch. In a world where storage costs drop by a factor of two or more each year, and transmission rates increase in much the same manner per dollar, the design choice to separate the two streams for compression gains seems unfortunate. Movie film took a similar path in the last century, until eventually the film and sound track were unified.

Smarty
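DV does lock audio to picture structurally: the samples belonging to a frame are recorded inside that frame's own data blocks. A simplified sketch of why such a layout cannot drift (the record type below is invented for illustration, not DV's actual on-tape format):

    from dataclasses import dataclass

    @dataclass
    class InterleavedFrame:
        """One stored record: a frame plus exactly its own audio (simplified)."""
        video: bytes
        audio: bytes

    def play(frames):
        for f in frames:
            # Picture and sound are consumed as one record; there is no second
            # stream whose clock can wander, so lip-synch needs no maintenance.
            yield f.video, f.audio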
On Thu, 22 Sep 2005 19:53:20 -0700 hac wrote:
| Some software grabs the first time stamps, calculates the difference
| between the audio and video, and assumes that the delay will stay the
| same.
|
| That's a bad assumption. It may stay the same for a long period. But it
| may change at any time.

Actually, there are two assumptions being made here. If the delay will not stay the same, then something is already wrong in the incoming A/V. That could be a missing section of one stream or the other, or it could be bad software doing the original digitizing. But the big problem is: how do you know whether a jump in timestamp difference is due to missing data or due to miscalculated timestamps? Given no other means to synchronize, one assumption or the other has to be made. You either assume the timestamps themselves are the authority for synchronization, or you assume the quantity (time length) is consistent. Either assumption could be inconsistent with what you get when dealing with any possible software bugs in the environment creating that A/V content.

But I did say two assumptions are being made. In addition to assuming how content is supposed to stay in sync, there is also the initial assumption of how to get it in sync in the first place. If software can be faulty and produce inconsistent timestamps for consistent-length material, then using the starting difference makes sense (until there is some missing content). However, if software can result in missing content or otherwise unexplained timestamp shifts, then assuming that the starting point itself is in sync isn't even valid. Given a world where software developers so frequently do things wrong, such as incorrect timing calculations or poor error handling that lets physical errors corrupt data streams, no assumption can cover all the possible problems.

And it doesn't help that the audio sampling rate does not yield a whole number of samples for a single frame of video in so many cases (e.g. "NTSC legacy" frame rates in North America). The methods to deal with that could be the cause of "strange software".

-- Phil Howard KA9WGN | http://linuxhomepage.com/ | http://phil.ipal.org/
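Putting numbers on that last point (the arithmetic is standard; the snippet is just an illustration):

    from fractions import Fraction

    fps = Fraction(30000, 1001)                # "NTSC legacy" rate, ~29.97 fps
    samples_per_frame = Fraction(48_000) / fps
    print(samples_per_frame)       # 8008/5, i.e. 1601.6 -- not a whole number
    print(samples_per_frame * 5)   # 8008 -- only a 5-frame group comes out even

Equipment typically copes by alternating 1602- and 1601-sample frames across each 5-frame group, and mishandling that pattern is exactly the kind of "strange software" described above.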
On Fri, 23 Sep 2005 02:06:00 -0400 Smarty wrote:
| I personally feel that the decision to independently compress the video
| and audio and then maintain their synch was a short-sighted one.

Personally, it seems to me that having to design formats (or protocols) in a certain way in order to avoid the possibility of software errors is wrong. I'd put more blame on the software developers and require them to "get it right". Unfortunately, software development costs are not really dropping much, although businesses are trying to push them lower all the time (and hence much of the problem). Still, an integrated interleaved format like DV does have attractions, including for other purposes like random frame access.

While storage and transmission costs are rapidly declining, there are some places where limitations exist. DV would not have been practical for over-the-air television in the 6 MHz of bandwidth used in North America and Japan, and a high-definition version of an interleaved DV format would be much more imposing still (at potentially 6 times the needed data rate). Getting that many more bits through 6 MHz, or even more MHz, is just not going to happen. So there is value in the kinds of compression selected by ATSC (though we have better options now).

-- Phil Howard KA9WGN | http://linuxhomepage.com/ | http://phil.ipal.org/
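Rough figures behind that bandwidth argument (DV's nominal rate and the ATSC payload are well documented; the HD multiplier is the "6 times" back-of-envelope factor from the post):

    atsc_payload_mbps = 19.39           # 8-VSB payload in one 6 MHz channel
    dv_sd_mbps        = 25.0            # DV's nominal SD video data rate
    dv_hd_mbps        = 6 * dv_sd_mbps  # ~6x SD's pixels -> ~150 Mbps

    print(dv_sd_mbps / atsc_payload_mbps)  # ~1.3x over budget even for SD
    print(dv_hd_mbps / atsc_payload_mbps)  # ~7.7x over budget for HD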
Phil,
I used DV merely as an illustration of an interleaved format; I did not intend to suggest that it made sense as a transmission format per se. Highly optimized interleaved formats could certainly have been developed for HDTV and DVD applications.

I agree with your comments, and wanted to add that the "economy" of using two disparate compression schemes for video and audio and then relying on time stamps to ensure synch is a bad judgment call IMHO, since both software bugs (as you state) and other 'unexpected' corruptions can and will cause synch to slip unpredictably. Take the simple case where a fragmented hard disk or an otherwise maxed-out CPU cannot keep up with the frame rate of the stream. It is entirely possible (and often a real problem) that MPEG editing and DVD authoring software (particularly during the program capture stage) merely runs out of resources and records a stream with dropped frames. Similarly, a noisy RF channel with multipath, phase distortion, interference, or other fading / attenuation can briefly experience dropouts.

In both cases there is no forward or backward error correction code to recover the loss. Rather, there is a "hole" in the data stream, which causes huge problems for both video and audio, since each relies on interframe (delta) transitions to reconstruct the original waveforms. At least as troublesome, the time stamps themselves may be dropped as well. Even if they aren't, the hole in the audio or video stream prevents re-synchronization.

I personally feel that the ATSC committee and the DVD consortium did a disservice to the world with their reliance on methods which ignore some of the harsh realities of satellite links, UHF propagation effects, burst error statistics in noisy channels, etc. In making their choices they exposed the entire medium to the multipath, dropouts, and synch issues which now plague HDTV and DVD delivery systems.

Smarty (KC2OZ)
Phil,
Isn't it ironic that the 6 MHz channel assignments, ostensibly used as a rationale for creating highly compressed data streams, have now given way to FCC approval of standard-definition (SD) sub-channels? Broadcasters LOVE the opportunity to sub-divide their HD bandwidth so as to have multiple "channels" on which to broadcast infomercials and other crap. So now we have the worst of both worlds IMHO... inferior signaling / transmission as a result of trying to achieve the highest possible bandwidth reductions for HD, and then... using the bandwidth to send multiple SD channels of garbage.

Smarty
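The pie being divided up looks roughly like this (the split below is purely illustrative; actual allocations vary by station):

    total_mbps = 19.39        # fixed payload of one 6 MHz ATSC channel
    hd_main    = 12.0         # HD service squeezed below the ~17-18 Mbps it could use
    sd_subs    = [3.5, 2.5]   # SD sub-channels carved from the remainder

    leftover = total_mbps - hd_main - sum(sd_subs)
    print(round(leftover, 2))  # ~1.39 Mbps left for audio, PSIP tables, etc.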
wrote in message:

| And it doesn't help that the audio sampling rate does not yield a whole
| number of samples for a single frame of video in so many cases (e.g.
| "NTSC legacy" frame rates in North America). The methods to deal with
| that could be the cause of "strange software".

Phil,

NTSC timecode frame rates and audio sampling rates are two entirely different things. 48k is 48k at 25 fps, at 29.97 fps, and at 30 fps drop or non-drop. Timecode and sample rate are not the same thing. Granted, if a 24 fps or 30 fps film production is pulled down in telecine for NTSC transfer, the sample rate has to come down as well, but in practice that entails either a sample rate conversion back to 48k or a two-way trip through a DA/AD converter. There are also some field recordists who will record double-system sound for film at 48.048 kHz, so the pulldown to NTSC results in exactly 48.000 kHz if video is the final production or presentation format. Anyway... not to be a nitpicker, but 1 second of time is 1 second of time, and 48k is 48k regardless of the timecode frame rate.

I'd be willing to bet that many of the sync issues consumers notice are due to video up-conversion delays in their STBs and televisions, which is a problem no broadcaster can address because of the wide variety of devices out there. One person's television takes 720p and makes it 1080i, another takes 1080i and converts to 720p, etc.

Charles Tomaras
Seattle, WA
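The 48.048 kHz trick checks out exactly (a quick verification, not from the post itself):

    from fractions import Fraction

    # NTSC pulldown slows 24 fps film by 1000/1001 (to ~23.976 fps); audio
    # recorded at 48.048 kHz slows by the same factor and lands on 48 kHz:
    pulldown    = Fraction(1000, 1001)
    recorded_hz = 48_048
    print(recorded_hz * pulldown)  # 48000 -- no sample rate conversion needed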
I could understand the STB being the culprit if it happened on more than one OTA signal. If the broadcast is 1080i from the broadcaster and the STB is processing it at 1080i (no down-conversion), then wouldn't other 1080i signals also have synch issues (assuming it's the STB)? But other 1080i signals do not have the synch issue, so I would think I can rule out the STB and go back to the broadcaster as the source of the problem (either CBS nationally, which I doubt, or the local affiliate, which is probably more likely).