#11
Phil,
I used DV merely as an illustration of an interleaved format and did not intend to suggest that it made sense as a transmission format per se. Highly optimized interleaved formats could certainly have been developed for HDTV and DVD applications.

I agree with your comments, and would add that the "economy" of using two disparate compression schemes for video and audio and then relying on time stamps to ensure synch was a bad judgment call IMHO, since software bugs (as you state) as well as other 'unexpected' corruptions can and will cause synch to slip unpredictably. Take the simple case where a fragmented hard disk or an otherwise maxed-out CPU cannot keep up with the frame rate of the stream. It is entirely possible (and often a real problem) that MPEG editing and DVD authoring (particularly during the program capture stage) simply run out of resources and record a stream with dropped frames. Similarly, a noisy RF channel with multipath, phase distortion, interference, or other fading / attenuation can briefly experience dropouts. In both cases there is no forward or backward error correction code to recover the loss. Rather, there is a "hole" in the data stream, which causes huge problems for both video and audio, since each relies on interframe (delta) transitions to reconstruct the original waveforms. At least as troublesome is that the time stamps themselves may be dropped as well. Even if they aren't, the hole in the audio or video stream prevents re-synchronization.

I personally feel that the ATSC committee and the DVD consortium did a disservice to the world with their reliance on methods which ignore some of the harsh realities of satellite links, UHF propagation effects, burst error statistics in noisy channels, etc. In making their choices they exposed the entire medium to the multipath, dropouts, and synch issues which now plague HDTV and DVD delivery systems.

Smarty (KC2OZ)

wrote in message ...

On Fri, 23 Sep 2005 02:06:00 -0400 Smarty wrote:

| I've had the same experience. I personally feel that the decision to
| independently compress the video and audio and then maintain their synch
| was a short-sighted one. Using an "interleaved" audio/video format with
| inherent time synchronization, like DV tape for example, does impart a
| penalty in storage, but makes the whole process of presentation so much
| more reliable in terms of lip synch. In a world where storage costs drop
| by a factor of two or more each year, and transmission rates increase in
| much the same manner per dollar, the design choice to separate the two
| streams for compression gains seems unfortunate. Movie film took a similar
| path in the last century, until eventually the film and the sound track
| were unified.

Personally, it seems to me that having to design formats (or protocols) in a certain way in order to avoid the possibility of software errors is wrong. I'd put more blame on the software developers and require them to "get it right". Unfortunately, software development costs are not really dropping much, although businesses are trying to push them lower all the time (and hence much of the problem). Still, an integrated interleaved format like DV does have its attractions, including for other purposes like random frame access.

While storage and transmission costs are rapidly declining, there are some places where limitations exist. DV would not have been practical for over-the-air television in the 6 MHz of bandwidth used in North America and Japan, and a high definition version of an interleaved DV format would certainly be much more imposing (at potentially 6 times the needed data rate). Getting that many more bits through 6 MHz, or more MHz, is just not going to happen. So there is value in the kinds of compression selected by ATSC (though we have better options now).

--
-----------------------------------------------------------------------------
| Phil Howard KA9WGN       | http://linuxhomepage.com/      http://ham.org/ |
| (first name) at ipal.net | http://phil.ipal.org/   http://ka9wgn.ham.org/ |
-----------------------------------------------------------------------------
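A rough way to see how fast a few dropped frames become the lip-synch slip described above: the sketch below is my own illustration, not anything from the posts. The 29.97 fps rate is the NTSC-derived broadcast standard; the roughly 45 ms detectability threshold is an assumption, stated in the comments.

```python
# Sketch (not from either post) of how quickly dropped video frames become
# visible lip-synch error once a decoder can no longer re-anchor on time
# stamps, e.g. when the "hole" in the stream took the stamps with it.
# Assumptions: NTSC-derived 29.97 fps video, and the rule of thumb that
# audio/video offsets beyond roughly +/-45 ms are noticeable.

FRAME_RATE = 30000 / 1001            # 29.97... frames per second
FRAME_MS = 1000 / FRAME_RATE         # ~33.37 ms of video per dropped frame
NOTICEABLE_MS = 45                   # assumed detectability threshold

for dropped in range(1, 6):
    drift_ms = dropped * FRAME_MS    # video now leads audio by this much
    note = "  <-- visible lip-synch error" if drift_ms > NOTICEABLE_MS else ""
    print(f"{dropped} dropped frame(s): {drift_ms:6.1f} ms of drift{note}")
```

With intact time stamps a decoder can jump back into step after a loss; the complaint above is precisely that a hole in the stream can swallow the stamps too, leaving nothing to re-anchor on.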
#12
Phil,
Isn't it ironic that the 6 MHz channel assignments, ostensibly used as a rationale for creating highly compressed data streams, have now given way to FCC approval of standard-definition (SD) sub-channels. Broadcasters LOVE the opportunity to sub-divide their HD bandwidth so as to have multiple "channels" on which to broadcast infomercials and other crap. So now we have the worst of both worlds IMHO: inferior signaling / transmission as a result of trying to achieve the highest possible bandwidth reductions for HD, and then using the recovered bandwidth to send multiple SD channels of garbage.

Smarty

wrote in message ...

[...] DV would not have been practical for over-the-air television in the 6 MHz of bandwidth used in North America and Japan, and a high definition version of an interleaved DV format would certainly be much more imposing (at potentially 6 times the needed data rate). Getting that many more bits through 6 MHz, or more MHz, is just not going to happen. So there is value in the kinds of compression selected by ATSC (though we have better options now).
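To put numbers on the 6 MHz argument running through both posts, here is a back-of-the-envelope sketch. The ~19.39 Mbps ATSC 8-VSB payload and the ~25 Mbps DV video rate are published figures, the 6x HD multiplier is Phil's own estimate, and the ~3 Mbps per SD sub-channel is my assumption for illustration.

```python
# Back-of-the-envelope numbers behind the 6 MHz argument in this thread.
# Published figures: ATSC 8-VSB delivers ~19.39 Mbps of MPEG-2 transport
# payload in a 6 MHz channel, and standard-definition DV runs at ~25 Mbps
# for video alone. The "6x" HD multiplier comes straight from Phil's post;
# ~3 Mbps per SD sub-channel is an assumed figure for illustration.

ATSC_PAYLOAD_MBPS = 19.39
DV_SD_MBPS = 25.0
DV_HD_MBPS = 6 * DV_SD_MBPS          # Phil's estimate for an HD DV variant

print(f"SD DV needs {DV_SD_MBPS / ATSC_PAYLOAD_MBPS:.2f}x the channel payload")
print(f"HD DV needs {DV_HD_MBPS / ATSC_PAYLOAD_MBPS:.2f}x the channel payload")

# The sub-channel complaint: the same 19.39 Mbps pipe gets carved up.
for sd_subchannels in range(4):
    hd_left = ATSC_PAYLOAD_MBPS - sd_subchannels * 3.0
    print(f"{sd_subchannels} SD sub-channel(s): {hd_left:5.2f} Mbps left for the HD stream")
```

Even standard-definition DV overshoots the channel payload by about 30 percent, which is Phil's point; and every ~3 Mbps sub-channel carved out of the pipe comes straight out of the HD stream's bit budget, which is Smarty's.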
#13
wrote in message ...

And it doesn't help that audio sampling rates are not a whole number for a single frame of video in so many cases (e.g. "NTSC legacy" frame rates in North America). The methods to deal with that could be the cause of "strange software".

Phil,

NTSC timecode frame rates and audio sampling rates are two entirely different things. 48k is 48k at 25 fps, at 29.97 fps, and at 30 fps drop or non-drop. Timecode and sample rate are not the same thing. Granted, if a 24 fps or 30 fps film production is pulled down in telecine for NTSC transfer, the sample rate has to come down as well, but in practice that entails either a sample rate conversion back to 48k or a two-way trip through a DA/AD converter. There are also some field recordists who will record double-system sound for film at 48.048k so the pulldown to NTSC will result in 48.00k if video is the final production or presentation format. Anyway, not to be a nitpicker, but 1 second of time is 1 second of time, and 48k is 48k regardless of the timecode frame rate.

I'd be willing to bet that many of the sync issues consumers notice are due to video up-conversion delays in their STBs and televisions, which is a problem no broadcaster can address because of the wide variety of devices out there. One person's television takes 720p and makes it 1080i, another takes 1080i and converts it to 720p, etc.

Charles Tomaras
Seattle, WA
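A little arithmetic makes both halves of this exchange concrete. This sketch is my own, not from either poster; the 1602/1601 five-frame pattern is the standard way 48 kHz audio samples are distributed across 29.97 fps frames.

```python
# Phil's "not a whole number" observation, made concrete: at NTSC-derived
# 29.97 fps, one video frame does not contain an integer number of 48 kHz
# audio samples. Equipment therefore doles samples out in a repeating
# five-frame cadence, while 48 kHz stays exactly 48 kHz per second of
# real time, which is Tomaras's point.

from fractions import Fraction

frame_rate = Fraction(30000, 1001)                # 29.97... fps, exactly
samples_per_frame = Fraction(48000) / frame_rate  # exactly 8008/5
print(samples_per_frame, "=", float(samples_per_frame))  # 8008/5 = 1601.6

# The standard distribution: whole samples per frame, exact total.
cadence = [1602, 1601, 1602, 1601, 1602]
assert sum(cadence) == samples_per_frame * 5 == 8008
print("five-frame cadence:", cadence, "-> total", sum(cadence), "samples per 5 frames")
```

So Phil is right that no whole number of samples fits in one frame, and Tomaras is right that the audio clock itself never changes: over real time the rate is exactly 48,000 samples per second regardless of the timecode frame rate; only the per-frame bookkeeping is fractional.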
#14
I can understand the STB being the culprit if it happened on more than one OTA signal. If the broadcast is 1080i from the broadcaster and the STB is processing it at 1080i (no down-conversion), then wouldn't other 1080i signals also have synch issues (assuming it's the STB)? But other 1080i signals do not have the synch issue, so I would think I could rule out the STB and go back to the broadcaster as the source of the problem (either CBS nationally, which I doubt, or the local affiliate, which is probably more likely).
Similar Threads

| Thread | Thread Starter | Forum | Replies | Last Post |
|--------|----------------|-------|---------|-----------|
| DVI & HDMI questions | Mark Hanson | High definition TV | 19 | January 18th 05 09:14 PM |
| advice on first HDTV & audio delay issues | Craig | High definition TV | 7 | December 22nd 04 03:23 AM |
| first HDTV purchase and audio sync issues | Craig | Home theater (general) | 4 | December 21st 04 11:35 PM |
| Hum On RCA Audio Outputs? | C. T. K. | Tivo personal television | 3 | June 28th 04 09:56 PM |
| Video & Audio Sync issues? | Chris H | Tivo personal television | 17 | June 24th 04 05:06 PM |