#61
Jukka Aho wrote:

>> Alan Pemberton wrote:
>>> Why couldn't an lcd say display a 50i source as one field followed by the next?
>> The concepts of scanning, refreshing and 'fields' fall apart with flat screens. In order to display a range of brightnesses, bits of the screen are turned on and off, and the contents updated, ad hoc. That's one of the reasons movement looks a mess.
>
> I've been toying with the idea that interlaced scanning (and the non-ideal focusing of the electron beam in a CRT-based set, with the "hot spot" spanning multiple picture elements on the screen) could be simulated on a SED display matrix.

What would happen if a progressive display were simply to display the two fields of an interlaced source (i.e. every other line) for 1/50 second each? A sort-of progressive interlacing? Would that provide a better approximation to a CRT in a non-complex way, or just display a mess?

James
#62
James Sargent wrote:

> What would happen if a progressive display were simply to display the two fields of an interlaced source (i.e. every other line) for 1/50 second each? A sort-of progressive interlacing? Would that provide a better approximation to a CRT in a non-complex way, or just display a mess?

It's different from how a CRT would display the same image, but of course, it could work just fine.

Note, however, that the method you suggest implies the same thing as the more complicated "scanning" method: the previous field - in-between the scanlines of the field that is currently being shown on the screen - must fade away to black. If it were retained on the screen, you would start seeing combing artifacts.

Moreover, even though your method does not "scan" the image onto the screen, I think the overlap in the scanline structure would need to be simulated just the same. The "hot spot" of the electron beam is a bit on the thick side: scanlines in one field, when drawn on a CRT-based tv screen, partially overlap the locations of the scanlines in the previous field. (In other words, the gaps between the scanlines in a single field are not as thick as the scanlines themselves.)

See, for example, these mock-up single-field Amiga screens (at the bottom of the page) where I've tried to simulate that effect:

http://www.iki.fi/znark/video/modes/

You can see the original graphics data by clicking on the images. This simulation was done by interpolating the original image data vertically by a 2x factor, with the "nearest neighbour" method, and then dimming every other line. (It certainly looks more accurate and "Amiga-like" to me that way than by just making every other line completely black.)

--
znark
#63
"Jukka Aho" wrote:

>> James Sargent wrote:
>>> What would happen if a progressive display were simply to display the two fields of an interlaced source (i.e. every other line) for 1/50 second each? A sort-of progressive interlacing? ...
>
> Note, however, that the method you suggest implies the same thing as the more complicated "scanning" method: the previous field - in-between the scanlines of the field that is currently being shown on the screen - must fade away to black. If it were retained on the screen, you would start seeing combing artifacts.

Also... Putting in black lines for the field not being displayed would work with LCD and plasma screens, but that'd cut the brightness by half, even though the display would consume (almost) as much power as when fully illuminated.

--
Dave Farrance
#64
Dave Farrance wrote:

> Also... Putting in black lines for the field not being displayed would work with LCD and plasma screens, but that'd cut the brightness by half, even though the display would consume (almost) as much power as when fully illuminated.

But as explained in my previous message, those lines probably shouldn't be totally black if we're trying to emulate a CRT, since the CRT scanlines tend to be thicker than the gaps between them. If the "gap" lines were merely fainter duplicates of the lines above or below, the net result would perhaps be more like a 25% reduction in brightness.

I must say LCD panels are quite an awkward and backwards technology: you produce huge amounts of bright (back)light which you then try to suppress with liquid crystals. It's a pretty moronic system, really. :)

--
znark
#65
In message o.uk.invalid, Alan Pemberton writes:

> It's the same when you chew something crunchy while staring at a crt display. The picture breaks up quite badly.

Chocolate-coated frogs are best to observe this phenomenon. It also works with Dutch or Belgians.

"Albatross! Get your albatross!"
"What flavour are they?"
"They're bleedin' albatross flavoured!"
#66
On Sat, 18 Nov 2006, Alan Pemberton typed this:

> (snip) It's the same when you chew something crunchy while staring at a crt display. The picture breaks up quite badly.

Could that be due to little bits of your Topic bar spattering on the screen? Keeping the mouth shut during eating might have an enormous effect on picture quality.

HTH
--
Roger Hunt
#67
Bill Wright wrote:

> That's it exactly. The phosphors used in TV CRTs have such a short persistence, they might as well have zero persistence. It's persistence of vision (in the eye itself) that does the 'frame storage'. Simple proof can be obtained by photographing the TV screen using a fast shutter.

Persistence of vision varies from person to person, and I have known people find their new TV set very flickery. If you go into a shop where there's a large display of TVs, you can demonstrate to yourself that persistence of vision is lower in the peripheral areas of the visual field.

Someone told me once that cats have an extremely low persistence of vision. If so, they must think humans are mad, staring at a band of light inside that box in the corner of the room for hours on end.

--
Mark
Please replace invalid and invalid with gmx and net to reply.
#68
Mark Carver wrote:

> Someone told me once that cats have an extremely low persistence of vision. If so, they must think humans are mad, staring at a band of light inside that box in the corner of the room for hours on end.

http://schnarff.com/pics/Marbury-CatTV1.jpg
http://pets.webshots.com/photo/2481059650087862133mHbCVe
http://www.marinhumanesociety.org/Images/LeftImage/CatTV.jpg

--
znark
#69
" wrote in ups.com:

Subject: Coast - awful filmic effect
From: "
Newsgroups: uk.tech.digital-tv,uk.tech.broadcast

>> Alan Pemberton wrote:
>>> Michael Rozdoba wrote:
>>>> In that case I'd like to ask one question. For material shot interlaced, intended to be played back on a device which handles interlaced material (such as a CRT), I can see why you'd want to keep the material that way in many if not all cases. Rendering it on the display, due to the temporal blurring of the phosphors, will effectively merge the fields.
>>> NO! Nothing to do with phosphors. A crt produces a bright spot of light moving very quickly which the eye and brain interpret as a dim two dimensional moving picture. No (or very little) storage or integration gets done on the screen.
>> ...as you can see very clearly if you point a video camera at a CRT and set the shutter speed faster than 1/50th of a second!

You're forgetting that photographic equipment doesn't enjoy the wonderful contrast range of the human eye. See below.

>> There is a great explanation showing why this is a great way of reproducing moving pictures. If your eye tracks the motion on-screen, a single clear image hits the back of your eye. Compare this with LCD technology, where the "always on" nature of the image means any eye movement simply smears the image on the back of your eye. Found the link I was looking for...
>>
>> http://www.poynton.com/papers/Motion...yal/index.html
>> or http://www.poynton.com/PDFs/Motion_portrayal.pdf
>> (same content in both links - relevant section near the end)

It makes a superficially plausible story, but I suspect that's all it is - a story. Quite a lot of the second half of it doesn't stand up under close examination. Although there are two academic references to papers by Japanese researchers, it's not clear what information comes from those papers and what is speculation by Poynton, and as I could find neither paper online, unfortunately I can't check, which I think needs to be done.
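For what it's worth, the sample-and-hold argument quoted above can be reduced to a toy model. This is my own reduction of it, under strong simplifying assumptions (the eye tracks the target perfectly, the retina integrates over one frame time); the function and numbers are illustrative, not taken from Poynton's paper:

```python
# Toy retinal-integration model of the quoted smear argument: the eye
# tracks a target moving v pixels per frame, so a pixel that stays lit
# drifts across the retina for as long as it remains on.
def retinal_smear(hold_fraction, v):
    """Smear width in pixels on the retina of a tracking eye.

    hold_fraction: fraction of the frame time the display emits light
                   (~1.0 for an idealised LCD, small for a CRT flash).
    v: target speed in pixels per frame.
    """
    return v * hold_fraction

print(retinal_smear(1.0, 10.0))   # 10.0 -> full-frame hold smears 10 px
print(retinal_smear(0.05, 10.0))  # 0.5  -> a short flash barely smears
```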
Phosphor luminescence decay can be modelled as the sum of several exponential and power-law curves, and levels off into a gradual decline after an initial precipitous decay. Despite a great deal of searching, I have not been able to find, free on-line, a characteristic curve for any of the P22 (XX?) compounds widely used in CRTs, but on purely theoretical grounds I believe the graph of CRT decay shown as Figure 13 to be, at best, highly misleading: firstly, it does not have a meaningful vertical scale, and secondly, it is shown as decaying to zero, which is theoretically impossible.

However, I don't actually need the characteristic curve. The term "persistence", when applied to phosphors, has a specific, mathematical meaning: it's the time taken for the luminescence to decay to 10% of its starting value. It is NOT the time taken to fade away to nothing (that would be infinite), NOR even the time taken for the luminescence to become invisible to the eye or any other detector. I get the impression that many do not understand this fundamental point. If the initial excitation is strong enough, the substance can still be emitting light long after its persistence period has expired, and this happens in a normal CRT TV.

Here is photographic proof:

http://en.wikipedia.org/wiki/Image:Refresh_scan.jpg
http://upload.wikimedia.org/wikipedi...fresh_scan.jpg

If you consider that the human eye has considerably more contrast range than ordinary photographic equipment, and that this was shot at f/1.6 and 1/3000s, yet the *entire* picture, even the oldest part of it just below the refresh line, is discernible as two people (the very top of the right-hand one's head having just been refreshed), then there can be no doubt that even the faintest parts of the picture would have been visible to the eye.
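The distinction drawn above - persistence as time-to-10%, not time-to-invisibility - can be illustrated with a single-exponential decay model. This is a deliberate simplification (real phosphor decay, as noted, is a sum of exponential and power-law terms), and the 1 ms time constant is a hypothetical value, not a measured P22 figure:

```python
import math

# Single-exponential sketch of the phosphor "persistence" definition:
# L(t) = L0 * exp(-t / tau); persistence is the time for L to reach 10% of L0.
def persistence(tau):
    return tau * math.log(10.0)

tau = 1.0e-3          # hypothetical 1 ms time constant, for illustration only
p = persistence(tau)  # ~2.3 ms to fall to 10% of the initial luminance
# Long after the persistence time, light is still being emitted: at t = 5 * p
# the luminance is 0.1**5 = 1e-5 of L0. Tiny, but not zero, which is the point.
```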
But I fell to thinking that this was rather an unnatural situation, because the exposure is so low, and wondered whether it would be possible to perform the same sort of test closer to normal exposures. I set my still camera (Canon PowerShot S40) to manual mode, ISO 400, 1/100s (so that I could capture the field refresh process), pointed it outside, and chose an aperture of f/8 as giving a reasonably exposed daylight shot. The focus was set to infinity. This is the result:

http://tinyurl.com/y68b86 ... standing in for ...
http://i28.photobucket.com/albums/c219/JavaJive/CRT-LCD/BaselineOutdoors.jpg

I then took all the following at a distance of, and focussed to, 12", but all other settings on the camera remained the same. That implies that the exposure in every case has been proved to be suitable for a normal view for the human eye.

First, a Panasonic TX-15LT2 LCD TV. By comparison with outdoors, the picture is somewhat under-exposed and therefore must be dimmer, but this was not noticeable to me viewing it, presumably through auto adjustment of the pupil:

http://tinyurl.com/yxgmhg ... standing in for ...
http://i28.photobucket.com/albums/c2...RT-LCD/LCD.jpg

Then I took a whole series of pictures of a similarly sized CRT showing the same picture, a Sony KV-16WT1U (the only one I still have; I normally use it for CCTV). That part of the current field that was refreshed during the exposure was well exposed, as one would expect. Here is an example:

http://tinyurl.com/vf6so ... standing in for ...
http://i28.photobucket.com/albums/c2...urrentScan.jpg

Further, as with the Wikipedia shot, even the previous field is still discernible, as here:

http://tinyurl.com/y72mv9 ... standing in for ...
http://i28.photobucket.com/albums/c2...rviousScan.jpg

So we have one photo taken at very low exposure that nevertheless shows detectable light coming from the previous field, and another taken at an everyday 'human' exposure showing the same thing.
Taken all together, these photos:

1) Prove that the phosphors in two typical example CRTs emit detectable light from one field refresh at least until neighbouring lines in the next field are refreshed.

2) Strongly suggest that the level of light they emit is sufficient to be visible to the human eye throughout most if not all of one field refresh cycle.

However, what I think is genuinely debatable - and neither these photos nor speculation is going to resolve this - is how much of the sensation of seeing the picture is due to the initial very bright but very small excitation area currently being refreshed, and how much to the much greater area of gradual fading until the next excitation.

Going back to CP... I have no problems with the capture characteristics; the big problems start with the section "Capture and display interactions", which hinges entirely on his understanding of the way we perceive motion, and as I suspect this is flawed, I suspect the whole section is likewise. What I am certain of is that at least some of his interpretations not only do not make sense, but also go in part against known research, as described here:

http://tinyurl.com/rcnn3 ... standing in for ...
http://www.uca.edu/org/ccsmi/ccsmi/c...0Revisited.htm

Consider his interpretation of how we see film and the associated diagram, Figure 19. He claims that we see an interpolated second image of the object due to the second showing of each frame. Like many before him, he is confusing motion perception and flicker perception. From the article linked above:

"""
It is only with hindsight that the problem seems to divide into such clearly separable categories -- the fusing of the flickering light, called flicker fusion in the literature of perceptual psychology, and the appearance of motion, which is referred to as apparent motion. Early writers, without the benefit of hindsight, continually confused the two issues.
"""

The sole purpose of the second showing of each frame is to reduce flicker; it plays no part in motion detection:

http://en.wikipedia.org/wiki/Frame_rate

And it's a very good thing that it doesn't, as is obvious if we follow his misunderstanding to its logical conclusion. Figure 19 is an idealisation of a simple object in motion, which I analyse here in ASCII art (which needs a fixed font; if your newsreader garbles it, cut'n'paste it into Notepad or equivalent), where X marks the object and - the background, but with the important difference that I include where the eye would have to be looking (I) in order to see the object where he claims it would appear to be:

First showing of frame 1:

-----Actual---------------
-----XXXXX---------------
-----XXXXX---------------
-------I-----------------
|--------|

Second showing of frame 1, which being, as we know, actually exactly the same as the first, means that the eye would have to track to the left in order to see the second image in the position displaced to the right that Poynton claims for it:

-----Act'lPoynton-------------
-----xxxxxXXXXX---------------
-----xxxxxXXXXX---------------
--I-----------------
|--------|

First showing of frame 2:

---------------Actual--------------
---------------XXXXX---------------
---------------XXXXX---------------
-----------------I-----------------
|--------|

... etc.

So you can see that his ideas imply that, in order to see what he claims, every frame the eye would have to backtrack half a frame's worth of movement and then move forwards one and a half frames' worth!
A far simpler, more coherent, and more convincing explanation, AFAIAA compatible with research findings, would be this:

First showing of frame 1:

-----Actual--------------
-----XXXXX---------------
-----XXXXX---------------
-------I-----------------
|--------|

Second showing of frame 1:

-----Actual--------------
-----XXXXX---------------
-----XXXXX---------------
-------I-----------------
|--------|

First showing of frame 2:

---------------Actual--------------
---------------XXXXX---------------
---------------XXXXX---------------
-----------------I-----------------
|--------|

... etc.

If his interpretation of how we see film seems contrived, I don't find his interpretation of how we see video much more convincing. For one thing, the only time I have ever seen smeared video on an LCD was on a 1990s laptop, where the problem was simply that the update rate of the pixels in the display was insufficient to track movement. I have yet to see a recent LCD TV, or even a laptop, that shows any such problem, so I am not even convinced that there is any real difference between CRTs and modern LCDs to explain.

For another, this statement is misleading: "A CRT has a very short flash: the persistence of the phosphor is a negligible fraction of the frame time." As we have seen, persistence is not the same thing as luminescence: light is emitted from one field refresh right up to the next field refresh.

If someone really wants to get to the bottom of whether there is any real difference between the two technologies wrt motion tracking, then I think something like the following experiment would be required: get a representative number of clips of a simple object moving across a patterned field of view at different speeds.
Analyse the position of the object so that you know where it will be in each frame, and then display the clips on various types of display, tracking the object with a movie and/or video camera under the control of a mechanism programmed with its known position, and obtaining coherent pictures by choice of exposure settings and by linking the camera shutter to the frame mechanism of the display, e.g. the flyback signal.

If there were measurably more smearing from any one type of display technology, that would be an independent, reproducible, scientific test putting the matter beyond doubt; CP's seemingly rather speculative article with its identifiable errors, and the unscientific regurgitation of it here as though it were fact, do not.

I am sure this will bring down a storm of protest from the CRT diehards here, but unless someone comes up with *new* *evidence* that I haven't already seen and covered above, I shall ignore it all.
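Such an experiment would also need a way to score "smearing" in the tracked captures. One possible metric (my own suggestion, not specified in the post) is the width of the edge spread along the motion axis:

```python
import numpy as np

# A possible smear score for a tracked capture: the number of samples an
# edge profile spends between 10% and 90% of its range along the motion
# axis. A crisp edge scores 0; a smeared one scores higher.
def edge_spread_width(profile, lo=0.1, hi=0.9):
    p = (profile - profile.min()) / (profile.max() - profile.min())
    return int(np.count_nonzero((p > lo) & (p < hi)))

sharp = np.array([0, 0, 0, 1, 1, 1], dtype=float)  # crisp edge
smeared = np.array([0, 0.25, 0.5, 0.75, 1, 1])     # gradual (smeared) edge
print(edge_spread_width(sharp))    # 0
print(edge_spread_width(smeared))  # 3
```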
#70
Java Jive wrote:
> [long analysis about CRT refresh, phosphors, etc.]
>
> I am sure this will bring down a storm of protest from the CRT diehards here, but unless someone comes up with *new* *evidence* that I haven't already seen and covered above, I shall ignore it all.

Since this thread was originally posted both to "uk.tech.broadcast" and "uk.tech.digital-tv", but your analysis wasn't, I'm reposting it here in its entirety to both groups so that those in uk.tech.broadcast can also read and comment on it:

--- 8< --- Java Jive's post begins --- 8< ---

[Java Jive's post from #69 repeated verbatim - snipped]

--- 8< --- Java Jive's post ends --- 8< ---

--
znark