A Home cinema forum. HomeCinemaBanter

1080i or 720p



 
 
  #31  
Old November 22nd 07, 02:07 AM posted to alt.tv.tech.hdtv
[email protected]
external usenet poster
 
Posts: 2,039
Default 1080i or 720p

On Wed, 21 Nov 2007 12:16:46 -0800 (PST) ninphan wrote:
|
| If the picture has no motion, you should not see any difference at all.
|
| With motion, each point in time certainly has only 540 lines to resolve
| what changed since the previous point in time (scan field). When displayed
| as 1080 lines, it can still be interpolated. By detecting that motion did
| take place, the display (conversion) can choose to make the missing lines
| (the odd ones in an even field, for example) be filled in from interpolated
| content rather than the previous field held steady. Sure, it is technically
| just 540 lines of information. But that gets blended with a _different_
| 540 lines in the next field. You might lose those very thin horizontal
| lines, but they are almost certainly not an issue when motion is involved.
|
|
|
| This is only applicable if the material is being shot in 1080i and
| most cameras used in the field that are NOT using film, are shooting
| 1080p/24 (either at 1440x1080 or more recently in the last three or
| four years at 1920x1080)

In that case, there is even more information loss. But my bet is that in
live sports, it's p60 or maybe even p120 (yes, they make such cameras now).


| Outside of consumer usage interlaced footage is rare. What I'm stating
| is that a 1080p source, delivered interlaced, will deinterlace pixel
| for pixel back to that 1080p source.

... minus what is lost in the filtering that smooths out the effects of the
interlacing (whether that be original shooting in interlace or conversion to
interlaced from progressive), plus distortions in the temporal domain for
content in motion.

--
|---------------------------------------/----------------------------------|
| Phil Howard KA9WGN (ka9wgn.ham.org) / Do not send to the address below |
| first name lower case at ipal.net / |
|------------------------------------/-------------------------------------|
  #32  
Old November 22nd 07, 05:36 AM posted to alt.tv.tech.hdtv
Tricuspidor
external usenet poster
 
Posts: 2
Default 1080i or 720p

I'll probably regret sticking my neck out in this flamefest, but the
question has to be asked.

I assume that when we watch 1080i30 content, the process started with a
camera that did 1080p60. Each second it captured 60 full 1080-line frames.
However, since that much data wouldn't fit into the allotted 6 MHz bandwidth
for an OTA channel, something had to go. Therefore, they removed all the
odd-numbered lines from the even numbered frames, and they removed all the
even-numbered lines from the odd-numbered frames (or vice versa; that part
isn't really significant). Now these frames with half their data missing are
called fields. Two consecutive fields, when put together, make up a single
frame - sort of. The problem is that when they are reassembled, you have the
odd lines from frame 1, and the even lines from frame 2. Fortunately, the
eye can't discern that much detail at that refresh rate, and it all looks
like smooth motion.
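The splitting step being described can be sketched in a few lines of toy Python (string labels standing in for scan lines; nothing here is from a real broadcast chain):

```python
# Split a progressive frame (a list of scan lines) into two interlaced
# fields: the top field takes the even-numbered lines (0, 2, 4, ...),
# the bottom field the odd-numbered lines (1, 3, 5, ...).
def split_into_fields(frame):
    top_field = frame[0::2]
    bottom_field = frame[1::2]
    return top_field, bottom_field

# A toy 6-line "frame" whose lines are just labels.
frame = ["line0", "line1", "line2", "line3", "line4", "line5"]
top, bottom = split_into_fields(frame)
print(top)     # ['line0', 'line2', 'line4']
print(bottom)  # ['line1', 'line3', 'line5']
```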

The problem arises when they try to turn it back into 1080p60. This means
taking a field (i.e. a frame with half the lines missing) and reconstructing
the missing lines, so you can have 60 complete frames. They can get a decent
approximation of what was in those missing lines by interpolating. Now
here's the part I'm confused about. Do they interpolate in time or in
space - or a combination of both? i.e. if they want to reconstruct line 5 in
frame 2, do they interpolate what was in line 5 in fields 1 and 3, or do
they interpolate what was in lines 4 and 6 in frame 2? If the former (i.e.
interpolating in time), then I can understand what is causing the
Three-Ball-Effect. If an object has moved a significant distance from one
frame to another, then when you interpolate in time, then in any given
frame, you are going to see a washed-out copy of the object in the location
where it was in the previous frame, as well as where it will be in the next
frame.
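For what it's worth, the washed-out double image described above is easy to demonstrate with a toy 1-D example (hypothetical numbers, purely to illustrate what naive temporal interpolation does to a moving object):

```python
# Demonstrate the "washed-out double image" from naive temporal
# interpolation. A bright object (value 1.0) moves along a 1-D row
# of pixels; the in-between frame is made by simply averaging the
# frames before and after it, pixel by pixel.
frame_a = [0.0] * 10
frame_b = [0.0] * 10
frame_a[2] = 1.0   # object at position 2 in the first frame
frame_b[7] = 1.0   # object has moved to position 7 in the next frame

interpolated = [(a + b) / 2 for a, b in zip(frame_a, frame_b)]
# Positions 2 and 7 both end up at 0.5: two half-brightness "ghost"
# copies of the object, one where it was and one where it will be,
# instead of a single object at an in-between position.
print(interpolated)
```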

Comments?

  #33  
Old November 22nd 07, 03:42 PM posted to alt.tv.tech.hdtv
Matthew L. Martin
external usenet poster
 
Posts: 675
Default 1080i or 720p

Tricuspidor wrote:
I'll probably regret sticking my neck out in this flamefest, but the
question has to be asked.

I assume that when we watch 1080i30 content,


Let's refer to it as 1080i60, since interlace implies fields instead of
frames and the field rate is 60 fields/s.

the process started with a camera that did 1080p60.


Highly unlikely, but possible. Capturing uncompressed 1080p60 requires
over 300MB/s data rate and storage. It is not currently feasible to edit
1080p60. Let's continue discussing 1080p30 as the source, since that is
what the industry is currently using.

Each second it captured 60 full 1080-line frames. However, since that
much data wouldn't fit into the allotted 6 MHz bandwidth for an OTA
channel, something had to go.


If the frame rate were 24 F/s, it would fit. 1080p24 contains less
information than 1080i60. 1080p60 is not part of the ATSC specification.

Therefore, they removed all the odd-numbered lines from the even
numbered frames, and they removed all the even-numbered lines from
the odd-numbered frames (or vice versa; that part isn't really
significant).


Not really how it's done. The field rate is twice the frame rate so,
with 1080p30 source, both fields are generated from the same frame.

Now these frames with half their data missing are called fields. Two
consecutive fields, when put together, make up a single frame - sort
of. The problem is that when they are reassembled, you have the odd
lines from frame 1, and the even lines from frame 2. Fortunately, the
eye can't discern that much detail at that refresh rate, and it all
looks like smooth motion.


Here is where your analysis breaks down. The issue is that both fields
are from the same frame, but are going to be displayed 1/60th of a
second apart. The artifact that would be evident on an interlaced
display is called twitter.

http://www.adamwilt.com/TechDiffs/FieldsAndFrames.html

... to get 486 TV lines vertically requires black and white lines
only a single scanline tall. You can build such a picture in any
paint program; simply alternate rows of black and white pixels. If
you send it out to video, though, what you'll get is a flickering,
strobing nightmare. With one field of all-white scanlines and the
other all black, the update rate of the white bits of the picture is
only 1/30 second: you will see flicker! In its more usual
manifestation, single-line details in otherwise-quiescent graphics
will flicker at 1/30 second, an amusingly named but extremely
annoying artifact called twitter.


In graphics, the cure for interlace twitter is to avoid single-pixel
lines, or to "deflicker" the image by blurring it slightly in the
vertical direction. The idea is to ensure that both fields contain
roughly equal brightness energy; the original field might have one
line of 100% brightness while the other field has its two adjacent
scanlines at 30% - 50% brightness. As the fields are displayed, the
single full-brightness line alternates with the two half-brightness
lines; your eye integrates these over time and doesn't detect any
flickering.

In deflickering, of course, you've lost some vertical detail. It
turns out that the best tradeoff for most pix between resolution and
twitter is around 0.7 times the active line count; for NTSC this
means you can really only resolve about 340 lines or so vertically.
The 0.7 number, which includes the effects of both interlace and
discrete scanning lines, is called the Kell factor.
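The deflicker blur that excerpt describes can be sketched as a simple vertical low-pass. The [0.25, 0.5, 0.25] kernel below is an assumption for illustration only; real hardware uses its own filter:

```python
# A minimal vertical "deflicker" low-pass: each output line is a
# weighted blend of the line above, itself, and the line below, so a
# single-line detail gets spread across both fields instead of living
# entirely in one of them.
def deflicker(lines):
    out = []
    for i in range(len(lines)):
        above = lines[i - 1] if i > 0 else lines[i]
        below = lines[i + 1] if i < len(lines) - 1 else lines[i]
        out.append(0.25 * above + 0.5 * lines[i] + 0.25 * below)
    return out

# One full-brightness scanline surrounded by black:
print(deflicker([0.0, 0.0, 1.0, 0.0, 0.0]))
# -> [0.0, 0.25, 0.5, 0.25, 0.0]: the detail now has energy in both
#    fields (the adjacent lines carry 25% each), at the cost of
#    vertical resolution -- the trade-off behind the ~0.7 Kell factor.
```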


I didn't use the more useful wiki reference because ninphan doesn't
think it is credible. He's wrong, but I think people should get used to
that.

The problem arises when they try to turn it back into 1080p60. This
means taking a field (i.e. a frame with half the lines missing) and
reconstructing the missing lines, so you can have 60 complete frames.


No, you are completely off the rails here.

They can get a decent approximation of what was in those missing
lines by interpolating. Now here's the part I'm confused about. Do
they interpolate in time or in space - or a combination of both? i.e.
if they want to reconstruct line 5 in frame 2, do they interpolate
what was in line 5 in fields 1 and 3, or do they interpolate what was
in lines 4 and 6 in frame 2? If the former (i.e. interpolating in
time), then I can understand what is causing the Three-Ball-Effect.
If an object has moved a significant distance from one frame to
another, then when you interpolate in time, then in any given frame,
you are going to see a washed-out copy of the object in the location
where it was in the previous frame, as well as where it will be in
the next frame.


Since you started from a misunderstanding of the facts about how
interlace happens, de-interlacing is really hard to explain in your
frame of reference. Needless to say, modern digital de-interlacers,
working on properly prepared digital interlaced media can do an
excellent job of producing a progressive version of the media.

Comments?


Only this: even if perfect de-interlacers exist, they cannot recover
information that has been lost in the capture of or conversion to
interlaced form.

Most professional video editing is done in 1080i or 720p. Virtually none
is done in 1080p (anything). This may change as more editors adopt
higher-compression codecs, since many high-compression HD codecs are
progressive.

Before anyone says that interlace is no longer important because all
displays are progressive, consider that some large number of people
already have interlaced displays (mostly CRT and Plasma owners). Without
twitter filters in either their displays or playback devices,
progressive content would look like crap. These devices were
designed before progressive content (only available to consumers on
HD-DVD and Blu-ray) existed, so it is *very* unlikely that they
contain the required filters.

You can still buy refurbished interlaced Plasma displays, BTW.

Matthew

--
"All you need to start an asylum is an empty room and the right kind of
people". Alexander Bullock ("My Man Godfrey" 1936):
  #34  
Old November 22nd 07, 08:44 PM posted to alt.tv.tech.hdtv
Tricuspidor
external usenet poster
 
Posts: 2
Default 1080i or 720p

OK - I'll run through it again, (hopefully) restating in my own words what
you just said.

The camera at the source captures 30 full 1080-line frames per second. For
various reasons, including the inherent flicker in CRTs and the limitations
of 1930s technology, the TV station doesn't broadcast 30 full frames per
second - as in transmit frame 1, transmit frame 2, etc.
Instead, it breaks each frame into two fields - one with the odd lines and
one with the even lines (which we shall call fields 1a and 1b for frame
1, fields 2a and 2b for frame 2, etc.). This is what I want to be sure
of. Does the camera actually capture a full frame in one pass and then break
it into two fields, or does it capture one field, and then go back and
capture the second field?
Assuming the former is true (it gets the whole frame in a single pass), this
tells me that nothing is lost from the point where the camera captures 30
successive full frames per second, to the point where the signal arrives in
the TV, except for whatever is lost in MPEG compression. All it's doing is
changing the order in which the lines are sent. Instead of sending line 1,
line 2, line 3, line 4, etc., it sends line 1, line 3, line 5, ... line 2,
line 4, line 6, etc. Now, what happens in the TV? I'm making another
assumption here - that interlacing is done partly because of bandwidth
limitations and partly because of CRT limitations, and that in LCD or plasma
sets, it only provides more headaches and more work to do for the engineers.
Therefore, all the hype about your TV being able to do 1080p (as opposed to
mere 1080i) is kind of silly because 1080p is the easy case, and they had to
do additional processing to support 1080i. Anyway, I'm guessing that the
deinterlacer in the TV, in its effort to convert the two fields of 1080i to
1080p can either pull in both fields and then display the full frame for
1/30 of a second, or it can generate interpolated frames in between these
frames, and thus output a new frame 60 times per second.
I'm making yet another guess that a sufficiently sophisticated MPEG encoder
would recognize a moving ball and create a motion vector, which would work
well when interpolating. But if they do it on the cheap and just take two
successive frames as unrelated images and interpolate them pixel by pixel,
then you would get TBE. You would also get blur, but I suppose they could do
some artificial sharpening to cover that up.

Is this getting closer to reality?

  #35  
Old November 22nd 07, 08:57 PM posted to alt.tv.tech.hdtv
ninphan
external usenet poster
 
Posts: 351
Default 1080i or 720p

On Nov 21, 5:17 pm, "Matthew L. Martin" wrote:
ninphan wrote:
If the picture has no motion, you should not see any difference at all.


With motion, each point in time certainly has only 540 lines to resolve
what changed since the previous point in time (scan field). When displayed
as 1080 lines, it can still be interpolated. By detecting that motion did
take place, the display (conversion) can choose to make the missing lines
(the odd ones in an even field, for example) be filled in from interpolated
content rather than the previous field held steady. Sure, it is technically
just 540 lines of information. But that gets blended with a _different_
540 lines in the next field. You might lose those very thin horizontal
lines, but they are almost certainly not an issue when motion is involved.


This is only applicable if the material is being shot in 1080i and
most cameras used in the field that are NOT using film, are shooting
1080p/24 (either at 1440x1080 or more recently in the last three or
four years at 1920x1080)


Outside of consumer usage interlaced footage is rare. What I'm stating
is that a 1080p source, delivered interlaced, will deinterlace pixel
for pixel back to that 1080p source.


And you are wrong. The act of filtering a progressive source into an
interlaced stream that has to be properly displayed on an interlaced
display causes information to be lost forever.

You, of course, know better, no matter how wrong you are.

Matthew

--
"All you need to start an asylum is an empty room and the right kind of
people". Alexander Bullock ("My Man Godfrey" 1936):


This isn't done anymore - there are no "proper 1080i" televisions
outside of a handful of CRTs. That doesn't even make up 1% of the
current HDTV market.
How long ago did you leave the entertainment industry? My guess
would be sometime in 2000.
Wikipedia is great for info, but to suggest that someone questioning
the reliability of a site that anyone can edit is being foolhardy, well,
that's pretty inexplicable. I can edit any page on Wikipedia, as can
you.
  #36  
Old November 22nd 07, 09:19 PM posted to alt.tv.tech.hdtv
Matthew L. Martin
external usenet poster
 
Posts: 675
Default 1080i or 720p

Tricuspidor wrote:
OK - I'll run through it again, (hopefully) restating in my own words
what you just said.

The camera at the source captures 30 full 1080-line frames per
second. For various reasons, including the inherent flicker in CRTs
and the limitations of 1930s technology, the TV station doesn't
broadcast 30 full frames per second - as in transmit frame 1,
transmit frame 2, etc. Instead, it breaks each frame into two fields
- one with the odd lines and one with the even lines (which we shall
call fields 1a and 1b for frame 1, fields 2a and 2b for frame
2, etc.).


The reason for the interlaced format has nothing to do with 1930s
technology. It is used because it permits greater apparent resolution in
less bandwidth, for any value of bandwidth.

This is what I want to be sure of.


Other than my caveat, above, you are correct.

Does the camera actually capture a full frame in one pass and then
break it into two fields, or does it capture one field, and then go
back and capture the second field?


That depends on your camera. All film cameras capture entire frames in
one exposure. Some video cameras capture interlaced video only, some are
capable of capturing both interlaced and progressive.

Assuming the former is true (it gets the whole frame in a single
pass), this tells me that nothing is lost from the point where the
camera captures 30 successive full frames per second, to the point
where the signal arrives in the TV, except for whatever is lost in
MPEG compression.


Not true. 1080 video will be processed for best results on an interlaced
display. That involves the removal of high frequency information in
order to remove the twitter artifact previously referenced.

All it's doing is changing the order in which the lines are sent.
Instead of sending line 1, line 2, line 3, line 4, etc., it sends
line 1, line 3, line 5, ... line 2, line 4, line 6, etc.


That is what happens after the vertical filtering removes the high
frequency information.

Now, what happens in the TV? I'm making another assumption here -
that interlacing is done partly because of bandwidth limitations and
partly because of CRT limitations, and that in LCD or plasma sets, it
only provides more headaches and more work to do for the engineers.


Some significant number of Plasma displays currently in homes are
interlaced displays. They use the ALiS technology (reference not from
wiki because ninphan thinks that source is not credible):

http://www.cnet.com/4520-7874_1-5107912-2.html

ALiS - ALiS (alternate lighting of surfaces) technology developed by
Fujitsu/Hitachi for plasma panel displays. On a conventional plasma
TV, all pixels are illuminated at all times. With an ALiS plasma
panel, alternate rows of pixels are illuminated so that half the
panel's pixels are illuminated at any moment, somewhat similarly to
interlaced scanning on a CRT-type TV. This allows higher native
resolution than designs with discrete pixels (typically 1,024x1,024
versus 1,024x768 for 42-inch plasmas), but ALiS has historically
suffered in other areas, including black-level performance.


Some ALiS plasma displays are still being sold, albeit as refurbished items.

Therefore, all the hype about your TV being able to do 1080p (as
opposed to mere 1080i) is kind of silly because 1080p is the easy
case, and they had to do additional processing to support 1080i.


1080p60 takes considerably more processing power inside of the display
due to the higher data rate.

Anyway, I'm guessing that the deinterlacer in the TV, in its effort
to convert the two fields of 1080i to 1080p can either pull in both
fields and then display the full frame for 1/30 of a second, or it
can generate interpolated frames in between these frames, and thus
output a new frame 60 times per second.


Yes, that is what they do. How well they do it is dependent on many
factors. Things like processor speed and number of significant digits
used in the calculations matter a lot, as does the algorithm chosen. In
any case, the information that was previously lost (either by vertical
filtering or interlaced capture) can not be recovered and is lost forever.

I'm making yet another guess
that a sufficiently sophisticated MPEG encoder would recognize a
moving ball and create a motion vector, which would work well when
interpolating. But if they do it on the cheap and just take two
successive frames as unrelated images and interpolate them pixel by
pixel, then you would get TBE. You would also get blur, but I suppose
they could do some artificial sharpening to cover that up.


IMHO, almost all artificial sharpening is evil. I'd rather have a
smoother picture than more noise.

Is this getting closer to reality?


Pretty much.

Matthew

--
"All you need to start an asylum is an empty room and the right kind of
people". Alexander Bullock ("My Man Godfrey" 1936):
  #37  
Old November 22nd 07, 09:21 PM posted to alt.tv.tech.hdtv
Matthew L. Martin
external usenet poster
 
Posts: 675
Default 1080i or 720p

ninphan wrote:
On Nov 21, 5:17 pm, "Matthew L. Martin" wrote:
ninphan wrote:
If the picture has no motion, you should not see any difference at all.
With motion, each point in time certainly has only 540 lines to resolve
what changed since the previous point in time (scan field). When displayed
as 1080 lines, it can still be interpolated. By detecting that motion did
take place, the display (conversion) can choose to make the missing lines
(the odd ones in an even field, for example) be filled in from interpolated
content rather than the previous field held steady. Sure, it is technically
just 540 lines of information. But that gets blended with a _different_
540 lines in the next field. You might lose those very thin horizontal
lines, but they are almost certainly not an issue when motion is involved.
This is only applicable if the material is being shot in 1080i and
most cameras used in the field that are NOT using film, are shooting
1080p/24 (either at 1440x1080 or more recently in the last three or
four years at 1920x1080)
Outside of consumer usage interlaced footage is rare. What I'm stating
is that a 1080p source, delivered interlaced, will deinterlace pixel
for pixel back to that 1080p source.

And you are wrong. The act of filtering a progressive source into an
interlaced stream that has to be properly displayed on an interlaced
display causes information to be lost forever.

You, of course, know better, no matter how wrong you are.

Matthew

--
"All you need to start an asylum is an empty room and the right kind of
people". Alexander Bullock ("My Man Godfrey" 1936):


This isn't done anymore - there are no "proper 1080i" televisions
outside of a handful of CRT's. That doesn't even make up 1% of the
current HDTV market.


I'm sure you have an authoritative citation to back that up, don't you?
I'd prefer something other than a wiki reference, because you don't
believe them to be credible.

Take your time, I'll wait.

Matthew

--
"All you need to start an asylum is an empty room and the right kind of
people". Alexander Bullock ("My Man Godfrey" 1936):
  #38  
Old November 22nd 07, 09:31 PM posted to alt.tv.tech.hdtv
Jan B
external usenet poster
 
Posts: 361
Default 1080i or 720p

On Thu, 22 Nov 2007 09:42:54 -0500, "Matthew L. Martin"
wrote:

Tricuspidor wrote:
I'll probably regret sticking my neck out in this flamefest, but the
question has to be asked.

I assume that when we watch 1080i30 content,


Let's refer to it as 1080i60, since interlace implies fields instead of
frames and the field rate is 60 fields/s.

the process started with a camera that did 1080p60.


Highly unlikely, but possible. Capturing uncompressed 1080p60 requires
over 300MB/s data rate and storage. It is not currently feasible to edit
1080p60. Let's continue discussing 1080p30 as the source, since that is
what the industry is currently using.


The source does not have to be 1080p/60 (or/50) to produce the
interlaced signal.

I had assumed that 1080i60 (respectively 1080i50) was the more common
variant for "TV" events.

At least when my HD box performs conversion of such material to 576i/50
for SD output, I get a truly interlaced 576i/50 signal (with motion
between fields) and not "PsF".

It could be that 1080i/50 is common in Europe and 1080p30 in "60Hz"
countries, maybe?

Interlaced CCD sensors have been common for 576i/50 generation. I
thought they had moved on to 1080i/50, which makes it difficult to
get hold of a progressive 50 (or 60) Hz source from which to produce
720p at a 50/60 Hz frame rate.

Each second it captured 60 full 1080-line frames. However, since that
much data wouldn't fit into the allotted 6 MHz bandwidth for an OTA
channel, something had to go.


If the frame rate were 24 F/s, it would fit. 1080p24 contains less
information than 1080i60. 1080p60 is not part of the ATSC specification.

Therefore, they removed all the odd-numbered lines from the even
numbered frames, and they removed all the even-numbered lines from
the odd-numbered frames (or vice versa; that part isn't really
significant).


Not really how it's done. The field rate is twice the frame rate so,
with 1080p30 source, both fields are generated from the same frame.

Now these frames with half their data missing are called fields. Two
consecutive fields, when put together, make up a single frame - sort
of. The problem is that when they are reassembled, you have the odd
lines from frame 1, and the even lines from frame 2. Fortunately, the
eye can't discern that much detail at that refresh rate, and it all
looks like smooth motion.


I don't agree with "the eye can't discern that much detail at that
refresh rate, and it all looks like smooth motion".
It depends how close you look (i.e. size/distance).
The visible artefacts are what drove the development of higher
scanning frequencies for CRT types, and of "progressive scanning"
already for SD displays. (The low frame rate of film sources has also
been addressed by motion-estimated frame-rate upsampling, also first
introduced for CRTs.)

Here is where your analysis breaks down. The issue is that both fields
are from the same frame, but are going to be displayed 1/60th of a
second apart. The artifact that would be evident on an interlaced
display is called twitter.

.....
The problem arises when they try to turn it back into 1080p60. This
means taking a field (i.e. a frame with half the lines missing) and
reconstructing the missing lines, so you can have 60 complete frames.


No, you are completely off the rails here.


Should depend on if the original source was progressive 1080p30 or
interlaced 1080i/60.

They can get a decent approximation of what was in those missing
lines by interpolating. Now here's the part I'm confused about. Do
they interpolate in time or in space - or a combination of both? i.e.
if they want to reconstruct line 5 in frame 2, do they interpolate
what was in line 5 in fields 1 and 3, or do they interpolate what was
in lines 4 and 6 in frame 2? If the former (i.e. interpolating in
time), then I can understand what is causing the Three-Ball-Effect.
If an object has moved a significant distance from one frame to
another, then when you interpolate in time, then in any given frame,
you are going to see a washed-out copy of the object in the location
where it was in the previous frame, as well as where it will be in
the next frame.


Since you started from a misunderstanding of the facts about how
interlace happens, de-interlacing is really hard to explain in your
frame of reference.


Only if the source was progressive.
My understanding is that when the source is interlaced (which
is at least true for an SD "TV production"), motion-adaptive
de-interlacing chooses when to interpolate in "space": when there
is motion between fields above a certain threshold, the
"missing" odd lines in an even field are interpolated from the even
lines.

Only when there is virtually no motion in interlaced material (or the
field pairs are from a progressive source) can the two fields be
simply merged (called weave).

It is clear from observing the normal motion between fields (e.g. with
a video editor) that this threshold is reached at rather low-speed
motion.

There are special processing variants that smooth in the time domain
by creating intermediate frames; these use more complicated modelling
with motion vectors to move groups of pixels from one position to an
estimated position in the created intermediate frames.
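The weave-vs-interpolate choice being described can be sketched per pixel like this (an illustrative toy with a made-up threshold; real de-interlacers work on 2-D neighborhoods and are far more elaborate):

```python
# Motion-adaptive de-interlacing in miniature (an illustrative sketch,
# not any vendor's algorithm). For each missing pixel we either
# "weave" (copy the previous field's pixel, best for static areas) or
# "bob" (interpolate spatially from the lines above/below in the
# current field, best where there is motion), chosen by comparing the
# spatial estimate against the previous field at the same position.

MOTION_THRESHOLD = 0.1  # assumed value, purely for illustration

def fill_missing_line(above, below, previous_line):
    out = []
    for a, b, p in zip(above, below, previous_line):
        spatial = (a + b) / 2          # "bob" candidate
        if abs(spatial - p) > MOTION_THRESHOLD:
            out.append(spatial)        # motion: interpolate in space
        else:
            out.append(p)              # static: weave in the old pixel
    return out

# Static background (previous pixel close to the spatial estimate):
print(fill_missing_line([0.5, 0.5], [0.5, 0.5], [0.52, 0.48]))
# -> [0.52, 0.48]  (weave keeps full vertical detail)
# Moving edge (previous pixel very different): bob wins.
print(fill_missing_line([1.0, 1.0], [1.0, 1.0], [0.0, 0.0]))
# -> [1.0, 1.0]
```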

snip
/Jan
  #39  
Old November 22nd 07, 09:48 PM posted to alt.tv.tech.hdtv
Matthew L. Martin
external usenet poster
 
Posts: 675
Default 1080i or 720p

Jan B wrote:
On Thu, 22 Nov 2007 09:42:54 -0500, "Matthew L. Martin"
wrote:

Tricuspidor wrote:
I'll probably regret sticking my neck out in this flamefest, but the
question has to be asked.

I assume that when we watch 1080i30 content,

Let's refer to it as 1080i60, since interlace implies fields instead of
frames and the field rate is 60 fields/s.

the process started with a camera that did 1080p60.

Highly unlikely, but possible. Capturing uncompressed 1080p60 requires
over 300MB/s data rate and storage. It is not currently feasible to edit
1080p60. Let's continue discussing 1080p30 as the source, since that is
what the industry is currently using.


The source does not have to be 1080p/60 (or/50) to produce the
interlaced signal.


True. It seemed reasonable to simplify the conversation by removing
extraneous information. Replace 30 with 25 and 60 with 50 and nothing
else changes.

I had assumed that 1080i60 (respectively 1080i50) was the more common
variant for "TV" events.


Video, which implies TV, but does not limit the application to TV.

At least when my HD box performs conversion of such material to 576i/50
for SD output, I get a truly interlaced 576i/50 signal (with motion
between fields) and not "PsF".

It could be that 1080i/50 is common in Europe and 1080p30 in "60Hz"
countries, maybe?


1080p30 isn't often used in the US.

Interlaced CCD sensors have been common for 576i/50 generation. I
thought they had moved on to 1080i/50, which makes it difficult to
get hold of a progressive 50 (or 60) Hz source from which to produce
720p at a 50/60 Hz frame rate.


I'm not sure I'm following where you are going.

Each second it captured 60 full 1080-line frames. However, since that
much data wouldn't fit into the allotted 6 MHz bandwidth for an OTA
channel, something had to go.

If the frame rate were 24 F/s, it would fit. 1080p24 contains less
information than 1080i60. 1080p60 is not part of the ATSC specification.

Therefore, they removed all the odd-numbered lines from the even
numbered frames, and they removed all the even-numbered lines from
the odd-numbered frames (or vice versa; that part isn't really
significant).

Not really how it's done. The field rate is twice the frame rate so,
with 1080p30 source, both fields are generated from the same frame.

Now these frames with half their data missing are called fields. Two
consecutive fields, when put together, make up a single frame - sort
of. The problem is that when they are reassembled, you have the odd
lines from frame 1, and the even lines from frame 2. Fortunately, the
eye can't discern that much detail at that refresh rate, and it all
looks like smooth motion.


I don't agree with "the eye can't discern that much detail at that
refresh rate, and it all looks like smooth motion".
It depends how close you look (i.e. size/distance).
The visible artefacts are what drove the development of higher
scanning frequencies for CRT types, and of "progressive scanning"
already for SD displays. (The low frame rate of film sources has also
been addressed by motion-estimated frame-rate upsampling, also first
introduced for CRTs.)

Here is where your analysis breaks down. The issue is that both fields
are from the same frame, but are going to be displayed 1/60th of a
second apart. The artifact that would be evident on an interlaced
display is called twitter.

....
The problem arises when they try to turn it back into 1080p60. This
means taking a field (i.e. a frame with half the lines missing) and
reconstructing the missing lines, so you can have 60 complete frames.

No, you are completely off the rails here.


Should depend on if the original source was progressive 1080p30 or
interlaced 1080i/60.


Not really since almost all 1080 sources are edited as interlaced. As I
said before, uncompressed 1080p60 is currently not practical for
editing. 1080i60 is barely practical.

They can get a decent approximation of what was in those missing
lines by interpolating. Now here's the part I'm confused about. Do
they interpolate in time or in space - or a combination of both? i.e.
if they want to reconstruct line 5 in frame 2, do they interpolate
what was in line 5 in fields 1 and 3, or do they interpolate what was
in lines 4 and 6 in frame 2? If the former (i.e. interpolating in
time), then I can understand what is causing the Three-Ball-Effect.
If an object has moved a significant distance from one frame to the
next, then when you interpolate in time, any given frame will show a
washed-out copy of the object in the location where it was in the
previous frame, as well as where it will be in the next frame.

Since you started from a misunderstanding of the facts about how
interlace happens, de-interlacing is really hard to explain in your
frame of reference.


Only if the source was progressive.
My understanding is that in the case the source is interlaced (which
is at least true for an SD "TV production"), motion-adaptive
de-interlacing makes choices between interpolating in "space" and in
"time": when there is motion between fields above a certain threshold,
the "missing" odd lines in an even field are interpolated from the even
lines.


Those are some of the choices that have to be made. How well the choices
are implemented makes a big difference, as well.

Only when there is virtually no motion in interlaced material (or the
field pairs are from a progressive source) can the two fields be
simply merged (called weave).
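As a toy illustration of that per-line choice (my own simplification; the field names, the threshold value, and the rows-of-grayscale model are all assumptions): weave when the same-parity field barely changed, otherwise interpolate spatially.

```python
def deinterlace(even_field, prev_odd_field, prev_even_field, threshold=8):
    """Motion-adaptive rebuild of a full frame from the current even
    field (lines 0, 2, 4, ...). Fields are lists of rows of grayscale
    pixel values. Per missing odd line: if this parity barely changed
    since the previous even field (no motion), weave in the held odd
    line; otherwise interpolate spatially from the lines above/below.
    """
    frame = []
    for i, row in enumerate(even_field):
        frame.append(row)                           # keep the real line
        below = even_field[min(i + 1, len(even_field) - 1)]
        motion = sum(abs(a - b)
                     for a, b in zip(row, prev_even_field[i])) / len(row)
        if motion < threshold:
            frame.append(prev_odd_field[i])         # weave (static case)
        else:                                       # "bob": spatial average
            frame.append([(a + b) // 2 for a, b in zip(row, below)])
    return frame
```

With no motion this degenerates to a plain weave; with motion every interpolated line comes from a single instant, which is the 540-lines-per-instant trade-off discussed earlier in the thread.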

It is clear from observing the normal motion between fields (e.g. with
a video editor) that this threshold is reached at rather low-speed
motion.


Or, in the case of a scanned display with diagonal lines (CRT) with no
motion at all.

There are special processing variants that smooth in the time
domain by creating intermediate frames, using more complicated
modelling with motion vectors to move groups of pixels from one
position to an estimated position in the created intermediate frames.
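A toy 1-D sketch of that idea (the function name and the first-writer-wins conflict rule are my own simplifications; real systems use 2-D blocks, sub-pixel vectors and occlusion handling): estimate each block's motion vector by exhaustive search, then place the block halfway along that vector in the intermediate frame.

```python
def motion_compensated_midframe(frame_a, frame_b, block=4):
    """Create one intermediate frame between two 1-D 'frames' (lists of
    pixel values) by block matching: find each block's motion vector by
    exhaustive search, then place the block halfway along that vector.
    """
    n = len(frame_a)
    mid = [None] * n
    for start in range(0, n, block):
        blk = frame_a[start:start + block]
        best_off, best_err = 0, None
        for off in range(-block, block + 1):
            s = start + off
            if 0 <= s <= n - len(blk):
                err = sum(abs(a - b)
                          for a, b in zip(blk, frame_b[s:s + len(blk)]))
                if best_err is None or err < best_err:
                    best_err, best_off = err, off
        dst = start + best_off // 2          # halfway along the vector
        for k, v in enumerate(blk):
            if mid[dst + k] is None:         # first writer wins (toy rule)
                mid[dst + k] = v
    # pixels no block landed on: fall back to blending the two frames
    return [(a + b) // 2 if m is None else m
            for m, a, b in zip(mid, frame_a, frame_b)]
```

For an object that jumps four pixels between the two source frames, the sketch places it two pixels along in the created frame, which is exactly the "estimated position" idea above.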


Matthew
--
"All you need to start an asylum is an empty room and the right kind of
people". Alexander Bullock ("My Man Godfrey" 1936):
  #40  
Old November 23rd 07, 02:36 PM posted to alt.tv.tech.hdtv
ninphan
external usenet poster
 
Posts: 351
Default 1080i or 720p

On Nov 22, 3:21 pm, "Matthew L. Martin" wrote:
ninphan wrote:
On Nov 21, 5:17 pm, "Matthew L. Martin" wrote:
ninphan wrote:
If the picture has no motion, you should not see any difference at all.
With motion, each point in time certainly has only 540 lines to resolve
what changed since the previous point in time (scan field). When displayed
as 1080 lines, it can still be interpolated. By detecting that motion did
take place, the display (conversion) can choose to make the missing lines
(the odd ones in an even field, for example) be filled in from interpolated
content rather than the previous field held steady. Sure, it is technically
just 540 lines of information. But that gets blended with a _different_
540 lines in the next field. You might lose those very thin horizontal
lines, but they are almost certainly not an issue when motion is involved.
This is only applicable if the material is being shot in 1080i and
most cameras used in the field that are NOT using film, are shooting
1080p/24 (either at 1440x1080 or more recently in the last three or
four years at 1920x1080)
Outside of consumer usage interlaced footage is rare. What I'm stating
is that a 1080p source, delivered interlaced, will deinterlace pixel
for pixel back to that 1080p source.
And you are wrong. The act of filtering a progressive source into an
interlaced stream that has to be properly displayed on an interlaced
display causes information to be lost forever.


You, of course, know better, no matter how wrong you are.


Matthew


--
"All you need to start an asylum is an empty room and the right kind of
people". Alexander Bullock ("My Man Godfrey" 1936):


This isn't done anymore - there are no "proper 1080i" televisions
outside of a handful of CRTs. Those don't even make up 1% of the
current HDTV market.


I'm sure you have an authoritative citation to back that up, don't you?
I'd prefer something other than a wiki reference, because you don't
believe them to be credible.

Take your time, I'll wait.

Matthew

--
"All you need to start an asylum is an empty room and the right kind of
people". Alexander Bullock ("My Man Godfrey" 1936):


All you need to do is look up NPD numbers for the last several years
to see this. These are numbers the companies themselves are releasing.
You can follow the various press conferences from IFA, CEDIA, CES,
etc.

Go ahead and find me a non-CRT "proper 1080i" television. That would
be one with 1080 lines of vertical resolution that does not support
progressive scan.

Most editing is not done in an interlaced format, as most material is
not shot interlaced. Take "Lost" for example: it is edited in 1080p/24.
Yes, there is 1080i/60 editing, but it's done from a progressive source
and converted back to a progressive source afterwards with no loss of
data: 1080i/60, remove 2:3, back to 1080p/24.

http://www.panasonic.com/business/pr...pp_hd_faqs.asp
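That round trip can be sketched with tagged fields (a toy model of my own; real fields are pixel data, and real cadence detectors are far more defensive about broken edits):

```python
def telecine_32(frames_24):
    """Spread 24p frames across 60i fields with a 2:3 cadence:
    frame A yields 2 fields, frame B 3, frame C 2, frame D 3, ...
    Each field is tagged (source_frame, parity) so the round trip
    is easy to verify."""
    fields, parity = [], 0
    for i, frame in enumerate(frames_24):
        for _ in range(2 if i % 2 == 0 else 3):
            fields.append((frame, parity))
            parity ^= 1                 # top/bottom fields alternate
    return fields

def inverse_telecine(fields):
    """Remove the 2:3 cadence by collapsing runs of fields from the
    same source frame. Lossless while the cadence is intact, which is
    why 1080i/60 -> remove 2:3 -> 1080p/24 loses no data."""
    out = []
    for frame, _parity in fields:
        if not out or out[-1] != frame:
            out.append(frame)
    return out

# 4 frames of 24p become 10 fields of 60i (24 fps -> 60 fields/s):
fields = telecine_32(["A", "B", "C", "D"])
assert len(fields) == 10
assert inverse_telecine(fields) == ["A", "B", "C", "D"]
```

The 2:3 pattern is why 24 fps maps cleanly onto 60 fields/s: every four film frames fill exactly ten fields, and as long as the cadence isn't disturbed by editing, pulling it back out recovers the original frames bit for bit.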
 





