Subject: Re: color
From: "fluppeteer" <yahoo@...>
Reply-To: LIST

--- In IBM_T2X_LCD@yahoogroups.com, "sax00axe" <wrightsl@u...> wrote:
> here are a few quick statements:
>
> 1) the human visual system can detect at most a few million
> colors. The total number of possible colors is generally not
> important - what is important is which colors are actually chosen
> for display. Gamut mapping between devices is also important.

To an extent, yes. The human eye can't distinguish between, say,
0x00FF00 and 0x00FF01 - a fact Acorn used when it launched the 256
colour mode on the Archimedes in the late 80s (if anyone's
interested: six significant bits, two per colour, and the next two
bits were turned on in all three channels by a "tint" setting).
However, it can certainly tell 0x007F00 from 0x008000, even when
colour corrected. A monitor doesn't need to display 16 million
colours, but the ones it does need to distinguish don't map to RGB
in the obvious way. If someone makes an LCD capable of 8 million
HSV colours, mapped properly, it may well be superior visually -
but current display technology (arguably with the exception of
things like http://www.sunnybrooktech.com/) doesn't work like that.

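For the curious, that Archimedes scheme works out something like
this (the bit layout below is a simplification for illustration -
the real VIDC hardware interleaves the fields differently):

```python
def archimedes_256_to_rgb(index):
    """Expand an 8-bit colour index to 4-bit-per-channel RGB.

    Illustrative layout only: two significant bits per channel,
    plus two "tint" bits shared by all three channels as their
    next-most-significant bits.
    """
    tint = index & 0b11            # shared low bits for R, G and B
    r = (index >> 2) & 0b11
    g = (index >> 4) & 0b11
    b = (index >> 6) & 0b11
    # Each 4-bit channel: two significant bits, then the tint bits.
    return tuple((c << 2) | tint for c in (r, g, b))
```

Only 64 of the 4096 possible 4-bit-per-channel colours are ever
reachable per tint value, which is exactly why the scheme leans on
the eye's poor discrimination of small colour steps.
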
> 2) the human visual system is completely incapable of
> discriminating the least significant of 10 bits for either
> luminance or color for individual pixels at 200ppi. To
> understand this, look up the contrast sensitivity function for
> the human eye. One book on this subject is "Contrast Sensitivity
> of the Human Eye and Its Effects on Image Quality" by Peter
> Barten, SPIE Press 1999. ISBN 0-8194-3496-5
> Better yet, put some test patterns up on T221.

I don't doubt it. However, I *do* doubt that I'm unable to spot a
dither pattern in the LSB of *8* bits (especially greyscale) if I'm
trying to peer at close detail (meaning I put my head near the
monitor), even if all I discern is some kind of colour bleeding.
More importantly, some images which get displayed have inherent
dither patterns which will suffer bad moire effects if the monitor
is trying to do its own dithering. Several applications attempt to
produce a semi-transparent effect by a 50% transparent rectangle,
which will cause unpleasant flickering as it moves (I've seen this,
in a more pronounced way, on my laptop's 18-bit display as it tries
to emulate 24 bits); modern apps could probably use the card's
transparency support, but that's a future fix.

If the source image consists largely of areas of single colour, or
contains no high frequency dithering, the monitor's in-built
mechanism will, I'm sure, work well. For my own nefarious purposes
I'm inclined to reserve judgement, and welcome the ability to turn
it off rather than wonder why there's some high frequency noise in
the image I've just ray traced (for example). It sounds as though
turning it on and off is slightly nontrivial, unfortunately. As
I've said, it should probably be up to the graphics card to do some
dithering using (in a modern card) its own 10-bit LUT, but I've not
yet met a driver which is up to it. Perhaps I should suggest it to
someone.

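To illustrate what I mean by card-side dithering (a sketch of the
standard ordered-dither technique, not any particular driver's
implementation): quantise each 10-bit LUT output to 8 bits, and use
a position-keyed Bayer threshold to decide which pixels round up.

```python
# 2x2 Bayer thresholds for the two bits an 8-bit panel can't show.
BAYER_2X2 = [[0, 2],
             [3, 1]]

def dither_10_to_8(value10, x, y):
    """Reduce a 10-bit level to 8 bits with a 2x2 ordered dither."""
    base = value10 >> 2          # top 8 bits
    frac = value10 & 0b11        # remaining fraction, 0..3
    # Round up on this pixel if the fraction beats the threshold.
    if frac > BAYER_2X2[y % 2][x % 2]:
        base = min(base + 1, 255)
    return base
```

Averaged over any 2x2 block, the output reproduces the 10-bit level
exactly - which is the whole point, and also why a second dither
pass on top of it (in the monitor) risks the moire I keep going on
about.
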
> The discrimination of color is highly differential, not
> absolute. This is why any palette problems between two outputs
> of a graphics card are very easily seen.

Indeed - and why a friend of mine has a calibrated light bulb in
his computer room! Calibration is nice, although smooth gradations
are almost as important to me.

> 3) the human visual system can easily discriminate the most
> significant bits for luminance or color at 200ppi. Although
> people in the halftone print world understand this very well, it
> has taken a long time for some in the display world to
> appreciate this (many people still don't...) For letter jaggies,
> this is clear, but it also is important for natural image
> quality.

Just making sure I get the point: you're saying 100dpi is
insufficient resolution for highly contrasting edges (aliasing is
visible)? Concurred - and also that even a T221 benefits from a bit
of antialiasing. The print world has been responsible for some
pretty shocking halftone handling in its time, though! Also, the
print world knows about moire and screen angles, bringing me back
to my primary concern...

> 4) devoting a DVI channel to carry the LSB beyond 8-bit is not a
> good idea. It is expensive and cumbersome to devote a cable for
> this. It

I think the idea was to use the second *link* on a single head,
thus not requiring a second cable (although possibly some extra
wires internally). The delayed transmission of the extra bits which
IBM suggests is even simpler from that point of view, although it
may make the electronics horribly complicated. I may have
misinterpreted either or both, though.

> is difficult to implement digital/analog electronics to do a
> good job at the intrinsic 10-bit level (microvolt control is
> needed). The only

Quite, but 9.5 bits is still better than 8! I guess differently
calibrated TFTs would be a more important step forward than trying
to fix it in the driving hardware.

> reasonable way to implement >8-bit control of color and
> luminance is to dither the data in space and/or time, especially
> effective for high ppi displays.

The paper I looked at was suggesting that temporal dithering is
hard to calibrate; I might be a bit concerned about flicker, too
(allowing for strobing from fluorescent lighting), but I remember
an old Atari ST art package getting extra colours that way, so I
guess I shouldn't knock it too much. Obviously the higher the
resolution, the better dithering will work - although every little
helps, even at 640x256 and 16 colours (Acorns used to do it...)

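For comparison with the spatial case, here's the naive form of
temporal dithering (an assumed, simplified scheme - not what any
particular panel actually does): show adjacent 8-bit levels over a
four-frame cycle so the time-average lands on the 10-bit level.

```python
def temporal_series(value10, frames=4):
    """8-bit levels to display over `frames` frames so that their
    average reproduces the 10-bit input level."""
    base = value10 >> 2
    frac = value10 & 0b11        # frames that get the higher level
    return [min(base + 1, 255) if f < frac else base
            for f in range(frames)]
```

This naive version switches every pixel in lock-step, which is
exactly the flicker worry above; real implementations stagger the
phase per pixel so neighbours don't all blink together.
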
> Dither is what is done in nearly all implementations in displays
> or graphics cards to date.

I knew low bit-depth TFTs did it; I didn't know that graphics cards
attempted it, although I guess that makes sense and obviates some
of my other comments - I could see that if *both* attempt it then
we're back to my moire thing, albeit statically. AFAIK RAMDACs
genuinely try to support 10-bit output rather than trying to dither
to a CRT, but I guess each card may be different.

> 5) T221 supports 10-bit LUT resolution in the monitor, so there
> are 256 choices of levels for each primary R, G or B subpixel
> (16 million colors), but the precision of each level is 1 part
> in 1024. This is done using a 2x2 spatial dither block. It does
> not mean that the real display resolution has been reduced to
> 1920x1200, since the eye cannot discriminate this dither at
> 200 ppi. The term "resolution"

I'm sorry if my previous mention of using colour calibration at
1920x1200 confused anyone; what I meant was that I'd be less
bothered about using it at 1920x1200 because the opportunity for
moire effects is (almost) removed. I'm not saying that dithering in
the monitor (or elsewhere, although in the monitor is definitely
most handy for lower resolutions) isn't useful, just that it's a
technique with disadvantages, and that the result won't always be
invisible. Dithering in this way removes the ability to reproduce
all the solid colours of which the monitor is capable, for times
when having solid colours is preferable.

On the other hand, you *do* get an attempt at a calibrated colour
without the need for the graphics card to do anything complicated.
(I'm not against dithering, just against dithering without
knowledge of the underlying image.)

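My guess at the 2x2 spatial dither block described in point 5 (a
sketch of the general mechanism, not IBM's actual circuit): each
10-bit level becomes a 2x2 block of 8-bit subpixel values whose
average hits the level to 1 part in 1024.

```python
def dither_block(value10):
    """2x2 block of 8-bit values averaging to the 10-bit level."""
    base, frac = value10 >> 2, value10 & 0b11
    cells = [min(base + 1, 255)] * frac + [base] * (4 - frac)
    return [cells[:2], cells[2:]]
```

This also makes my complaint concrete: whenever `frac` is nonzero,
no subpixel in the block shows the "solid" 8-bit colour on its own.
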
> should also have a connotation of modulation - if the marketing
> people could get out of the way, perhaps someday it will. Look
> at the confusion over dpi, spots, resolvable, addressable,
> optical mag etc with printers, scanners, and cameras.

Quite. There's enough of a problem with the difference between dot
size and image size, let alone getting the hardware involved.
Photoshop has not helped this as much as I'd like to think; I've
just read a magazine letters page with someone asking whether a JPG
was scaled to the right size for their projector (the editor
responded that they should use 72dpi instead of 300, but that 1024
pixels across was fine; sigh). I'll just be glad when people stop
complaining that a monitor has "too high" a resolution for its size
(I was perfectly happy at 1600x1200 on a 17", thanks), and start
realizing that the size of the text isn't the beginning and end of
it. I remain convinced that the whole
VGA/SVGA/SVGA+/XGA/SXGA/UXGA/WUXGA/QWUXGA thing does more harm than
just giving out numbers. It's harder to find out the dot pitch on a
CRT these days, too.

> 6) 10-bit control of luminance is very important for medical
> imaging. For graphics arts, 10-bit control of color is also
> important, but must be tightly coupled to an overall color
> management approach for all relevant devices (displays, cameras,
> printers, scanners, etc.)

Concurred; however, given how subjective colours can be, I'm also
aware of introducing aliasing issues as a price to pay, and would
personally like to consider a more holistic approach which can
avoid some of the problems (handling images at higher bit depths,
conditional dithering with an eye on error-diffusion boundaries,
etc.) I'm probably not mainstream, though - just because *I* might
choose not to use it universally doesn't mean in-monitor dithering
isn't the best approach for the average (Windows, especially) user.

In conclusion: I'm niggling here, so I don't want people to think I
disapprove of the presence of the colour calibration utility (after
all, you can choose not to use it, and it will work just fine in
many cases). I'm just providing counters to some of the arguments
given, to suggest that it's not problem-free and that there's a
price for doing it this way. My original question - is the monitor
driven internally at greater than 8-bit precision, so that I should
send it uncalibrated data? - has been answered: no, it's not.
However, in the absence of dithering support in an application,
there is effectively a "dithering acceleration" mode which I can
use when I don't anticipate this dithering causing problems, when
I'm running at lower resolutions (only on this monitor is 1920x1200
a "lower resolution"), or when my source is of <= 8-bit colour
depth.

--
Fluppeteer