What's the point of Freeview?
On Fri, 10 Oct 2008 09:40:14 +0100, Andy Burns wrote:
Mark wrote: On Thu, 09 Oct 2008 22:22:17 +0200, J G Miller wrote: And the FFT parameter is finally changed from the prehistoric 2k and brought into the modern age of 8k.
What noticeable difference will this make?
Less impulse interference (mopeds, light switches, thermostats, etc.)
Ah :-)
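For context (my own arithmetic, not from the thread): the impulse-noise benefit of 8k follows from the longer OFDM symbol. A sketch of the 2k vs 8k numbers, assuming the 8 MHz-channel DVB-T elementary period T = 7/64 µs from ETSI EN 300 744:

```python
# DVB-T, 8 MHz channel: elementary period T = 7/64 microseconds.
T_US = 7 / 64  # microseconds per sample

def ofdm_params(fft_size):
    """Return (useful symbol duration in us, carrier spacing in Hz)."""
    tu_us = fft_size * T_US        # useful symbol duration Tu
    spacing_hz = 1e6 / tu_us       # carrier spacing = 1 / Tu
    return tu_us, spacing_hz

tu_2k, sp_2k = ofdm_params(2048)   # 2k mode
tu_8k, sp_8k = ofdm_params(8192)   # 8k mode
print(tu_2k, round(sp_2k))  # 224.0 us, ~4464 Hz
print(tu_8k, round(sp_8k))  # 896.0 us, ~1116 Hz
```

With 8k the useful symbol is four times longer (896 µs vs 224 µs), so a short impulse (a moped's ignition, a light switch) corrupts a much smaller fraction of each symbol's energy, which the error correction can then ride out.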
What's the point of Freeview?
On Fri, 10 Oct 2008 16:37:19 +0100, Java Jive wrote:
[skip rest of telling grandma how to suck eggs] Sorry grandma ;+) ;+) ;+)
What's the point of Freeview?
Paul Ratcliffe wrote:
On Fri, 10 Oct 2008 11:07:44 +0100, Mark Carver wrote: 2: Crop some of the sides, and some of the top and bottom, and present as 14:9 letterbox.
14L12 does not crop the top and/or the bottom.
You're quite right, thanks for the correction. I must have had in my head those mad buggers from BBC Regional Centres who used to crop their 4:3 output to 14:9 letterbox purely so that their analogue output would match the 14:9 ARC'd pictures from Network. ;-)
-- Mark Please replace invalid and invalid with gmx and net to reply.
What's the point of Freeview?
However, now I have a further question. If 270 Mb/s is the figure for SD, and 360 Mb/s for SD WS, what is the figure for HD ... 1) As recorded? 2) After downconversion to SD?
On Fri, 10 Oct 2008 09:58:53 +0100, Mark Carver wrote: Let's start with 270 Mb/s SD serial digital component video. First of all, it's not compressed (unless you count the chroma having only half the samples per line of the luminance). http://en.wikipedia.org/wiki/CCIR_601
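As a sanity check on those figures (my own arithmetic, not from the post): the 270 Mb/s rate falls straight out of the Rec. 601 4:2:2 sampling structure, i.e. luma at 13.5 MHz plus two chroma channels at half that, all at 10 bits. The 360 Mb/s widescreen variant simply used an 18 MHz luma clock:

```python
def sdi_rate_mbps(luma_mhz, bits=10):
    """Uncompressed 4:2:2 serial rate: Y sampled at luma_mhz,
    Cb and Cr each sampled at half the luma rate."""
    chroma_mhz = luma_mhz / 2
    total_msps = luma_mhz + 2 * chroma_mhz  # mega-samples per second overall
    return total_msps * bits                # megabits per second

print(sdi_rate_mbps(13.5))  # 270.0 -> standard 4:2:2 SD-SDI
print(sdi_rate_mbps(18.0))  # 360.0 -> 18 MHz 'widescreen' variant
```

So "SD" and "SD WS" differ only in the sample clock; neither figure involves any compression.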
What's the point of Freeview?
Java Jive wrote:
However, now I have a further question. If 270 Mb/s is the figure for SD, and 360 Mb/s for SD WS, what is the figure for HD ... 1) As recorded?
Forget about "as recorded" for HD or SD. The most common recording standard in use for quality SD, whether it's 4:3 or 16:9, is Digibeta, which has an off-tape bit rate of around 140-150 Mb/s. Panasonic's D5 format had a data rate of 270 Mb/s but is rarely used now. C4 were the biggest UK user, but they quietly abandoned it when Sony's Digibeta became the de facto standard. There are other lower bit rate 'lossy' compression standards for news acquisition, and server storage/playout.
The 270 Mb/s SDI standard is used the world over as the interface format for connecting professional SD video equipment together within broadcast centres and OB trucks. http://en.wikipedia.org/wiki/Serial_Digital_Interface
For HD it's 1.485 Gb/s, for the 1080i and 720p HD formats. There are moves to use Dual-Link HD-SDI of almost 3 Gb/s for 1080p. Although 1080p50 will not be used for emission for a long time, if ever, the industry is still aiming to migrate all HD production to that format, as the downconversion to any other HD or SD format, interlaced or progressive, is easy. As for recording HD on tape, one of the best is Sony HDCAM SR in HQ mode, going at 880 Mb/s. http://en.wikipedia.org/wiki/HDCAM
2) After downconversion to SD?
SD is back to 270 Mb/s.
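The 1.485 Gb/s figure can also be checked by back-of-envelope arithmetic (mine, not the poster's), assuming the 74.25 MHz HD sample clock used by SMPTE 292 single-link HD-SDI: a 10-bit Y stream and a 10-bit time-multiplexed Cb/Cr stream, both at that clock, serialised onto one link:

```python
def hdsdi_gbps(sample_mhz=74.25, bits=10):
    """Single-link HD-SDI serial rate: a 10-bit Y stream plus a 10-bit
    multiplexed Cb/Cr stream, each clocked at sample_mhz, on one wire."""
    return sample_mhz * 2 * bits / 1000  # Gb/s

print(hdsdi_gbps())      # 1.485 -> single link (1080i / 720p)
print(2 * hdsdi_gbps())  # 2.97  -> dual link, the 'almost 3 Gb/s' figure
```

Which is why doubling up two links for 1080p lands just under 3 Gb/s, as the post says.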
What's the point of Freeview?
On Oct 8, 3:30 pm, Boltar wrote:
Worse picture quality than analogue TV (lots of nasty MPEG artifacts and motion blur). Worse reception than analogue TV.
I agree, *but* since 1999 analogue TV in the UK has been degraded because of:
- MPEG2 stages in the analogue transmission chain
- Lack of horizontal samples in digital sources (720 isn't enough)
- ARC'ing to 14x9. This can only be done by deinterlacing to 50fps progressive, rescaling the 576-line signal to 504 lines, and then reinterlacing again. Deinterlacing causes extensive motion blur and unnatural movement tracking (much like MPEG2 at low bitrates does). Additionally, the resultant 14:9 image only has 630 horizontal samples. If the 14:9 letterbox image is then re-encoded into MPEG2, as it is in the BBC's RAMAN distribution system for their analogue networks, those 630 horizontal samples then have to be rescaled to 720, causing further loss. Also, the 'unnatural' movement now present in the 14:9 letterbox signal will not bring out the best in the MPEG2 encoder - MPEG2 encoders having been designed to work with unprocessed video as a source.
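The 630 and 504 figures check out arithmetically (a worked example of my own, not from the thread): cropping a 16:9 picture to 14:9 keeps 14/16 of its width, and the conventional 14:9 letterbox occupies 7/8 of the 576-line frame:

```python
FULL_W, FULL_H = 720, 576  # anamorphic Rec. 601 raster

# Cropping 16:9 down to 14:9 keeps 14/16 of the source width:
kept_samples = FULL_W * 14 / 16
print(kept_samples)  # 630.0 horizontal samples survive the crop

# The 14:9 letterbox described in the post fills 7/8 of the frame height:
active_lines = FULL_H * 7 / 8
print(active_lines)  # 504.0 active picture lines, 72 lines of black bars
```

The loss the poster describes then follows: for the RAMAN MPEG2 re-encode those 630 surviving samples must be stretched back to a 720-sample raster, so the picture has been rescaled twice before it ever reaches the analogue transmitter.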
What's the point of Freeview?
wrote:
- ARC'ing to 14x9. This can only be done by deinterlacing to 50fps progressive, rescaling the 576-line signal to 504 lines, and then reinterlacing again.
Please provide documentary evidence that this is the case. I provided you with evidence back in July that at least one major manufacturer of ARCs used in the UK does not. Here it is again:- ftp://ftp.axon.tv/Brochures/SynapseP...ns/ARC20PD.pdf
What's the point of Freeview?
On Oct 11, 11:59 am, Mark Carver wrote:
- ARC'ing to 14x9. This can only be done by deinterlacing to 50fps progressive, rescaling the 576-line signal to 504 lines, and then reinterlacing again.
Please provide documentary evidence that this is the case.
It's common sense. If you attempted to ARC to 14x9 without deinterlacing, the results would be far worse, and the artefacts far more noticeable to the viewer.
I provided you with evidence back in July that at least one major manufacturer of ARCs used in the UK does not.
This is the first I've heard. :p
Here it is again:- ftp://ftp.axon.tv/Brochures/SynapseP...ns/ARC20PD.pdf
That doesn't prove that their product doesn't use deinterlacing. In fact, it clearly states that the ARC uses a temporal 3-field Finite Impulse Response Filter; in simple terms, the unit takes 3 of the interlaced fields, and uses the information contained in each (both the pictures themselves and the difference in movement between them) to help decide how best to create one deinterlaced progressive frame with 576 lines. So for example:
Information from 1B, 2A and 2B is used to create progressive frame 2
Information from 2A, 2B and 3A is used to create progressive frame 3
etc. There are numerous deinterlacing methods - some much worse than others - but the process is always lossy, because one cannot recover information which doesn't exist in the source signal.
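To make that description concrete, here is a toy sketch (my own illustration, emphatically not Axon's actual filter) of a 3-field vertical-temporal deinterlacer: the lines the current field actually carries are woven in directly, and each missing line is estimated by blending its vertical neighbours in the current field with the co-sited line in the two adjacent fields:

```python
def deinterlace_3field(prev_f, cur_f, next_f, cur_is_top):
    """Build one progressive frame from three successive fields.

    Each field is a list of rows (lists of pixel values). cur_f supplies
    every other output line directly; the lines it lacks are estimated from
    cur_f's vertical neighbours plus the temporally adjacent fields, which
    (having opposite parity) carry those very lines, but 1/50 s away in time.
    Equal-weight averaging here is a toy choice; a real vertical-temporal
    FIR uses carefully designed, usually motion-adaptive, coefficients.
    """
    height = 2 * len(cur_f)
    frame = [None] * height
    offset = 0 if cur_is_top else 1        # parity of lines cur_f provides
    for i, row in enumerate(cur_f):        # weave the lines we actually have
        frame[offset + 2 * i] = row[:]
    for y in range(height):                # best-guess the missing lines
        if frame[y] is not None:
            continue
        sources = []
        if y - 1 >= 0:
            sources.append(frame[y - 1])   # spatial neighbour above
        if y + 1 < height:
            sources.append(frame[y + 1])   # spatial neighbour below
        sources.append(prev_f[y // 2])     # same line, previous field
        sources.append(next_f[y // 2])     # same line, next field
        frame[y] = [sum(px) / len(sources) for px in zip(*sources)]
    return frame
```

Even on a completely static scene this blend only approximates the original line wherever the vertical detail changes, which is the "always lossy, always best-guessing" point being argued: the true missing line was never transmitted, so the filter can only estimate it.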
What's the point of Freeview?
... the process is always lossy, because one cannot recover information which doesn't exist in the source signal.
Eh? How does that make it lossy? SteveT
What's the point of Freeview?
On Oct 11, 1:55 pm, "Steve Thackery" wrote:
... the process is always lossy, because one cannot recover information which doesn't exist in the source signal.
Eh? How does that make it lossy?
Because the deinterlacer always has to best-guess what each newly-created progressive frame should look like. Mistakes will always be made.
Powered by vBulletin® Version 3.6.4
Copyright ©2000 - 2021, Jelsoft Enterprises Ltd.
HomeCinemaBanter.com