{ "numMessagesInTopic": 10, "nextInTime": 936, "senderId": "DKo29RbsvwOg8hp4KC9nIYNIpFPxuSMV8G6GZ-fB5aIUaffYs3rIgXZWemHjaKp7yQdRpE3p0nZD0iGfwn9z0vQ2Ir2AY0vPNgxJa63NnA", "systemMessage": false, "subject": "Re: New DisplayPort Standard for PCs, Monitors, TV Displays and Projectors", "from": ""fluppeteer" <yahoo@...>", "authorName": "fluppeteer", "msgSnippet": "[Belatedly] ... wrote: [re. DisplayPort bandwidth] ... Certainly true. I m hoping that DisplayPort doesn t impose *that* much overhead (or, at least, that the", "msgId": 935, "profile": "fluppeteer", "topicId": 914, "spamInfo": { "reason": "12", "isSpam": false }, "replyTo": "LIST", "userId": 192443393, "messageBody": "
--- In IBM_T2X_LCD@yahoogroups.com, David Evans <key-yahoo@z...> wrote:
>
>
> fluppeteer wrote:
>
> > --- In IBM_T2X_LCD@yahoogroups.com, David Evans <key-yahoo@z...> wrote:

[re. DisplayPort bandwidth]
> The reality is that for data transmission the number that is
> commonly quoted for any particular transmission medium is
> usually the maximum theoretical burst transmission rate. As
> such, the numbers commonly used are merely approximations of
> what can actually be transmitted from the system's (end user's)
> point of view.

Certainly true. I'm hoping that DisplayPort doesn't impose *that*
much overhead (or, at least, that the pixel data is the vast
majority of what's going to be transmitted, certainly if there's
a 9MP monitor in the mix) and that a digital end-to-end link is
going to get nearer to its theoretical maximum than some of the
technologies of yesteryear, but there's certainly not what I'd
describe as a lot of headroom in the suggested figures, and
there's bound to be *some* overhead. I guess we'll see when
products arrive.
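
To put rough numbers on that - a back-of-the-envelope sketch in
Python, assuming the draft's 4 lanes at 2.7Gbit/s with 8b/10b
coding (my reading of the coverage, not a quote from the spec):

    # Hypothetical link budget: 4 lanes x 2.7 Gbit/s, 8b/10b coded.
    lanes, lane_rate = 4, 2.7e9
    payload = lanes * lane_rate * 8 / 10   # 80% of raw bits are data

    width, height, bpp = 3840, 2400, 24    # the 9MP case
    bits_per_frame = width * height * bpp

    print("%.2f Gbit/s payload -> %.1f Hz at 3840x2400x24"
          % (payload / 1e9, payload / bits_per_frame))
    # 8.64 Gbit/s payload -> 39.1 Hz at 3840x2400x24

That lands just under the T221's usual 41Hz before any packet
overhead at all, which is what I mean by "not a lot of headroom".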

> It is certainly true that most bandwidth specifications are
> given in base-10 k's. There are many reasons for this, but they
> mainly come down to the fact that numbers in base-10 k's are
> larger. Everyone involved, engineering included, usually wants
> "their" item (spec, product, etc.) to compare favorably to the
> competition. "The competition will probably use base-10 k's, so
> we should also..."

Ah, now I prefer to be less cynical about marketing and more
cynical about engineers/managers being able to think in different
number bases. Besides, except where there's an obvious precedent
(as with memory), foul might get called as it did with the hard
drive sizes a few years back (not that it made any difference,
unlike the CRT "visible size" thing). That doesn't mean you're
not right, though. :-)
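
The gap is real enough, though. Taking a disk-sized example in
Python:

    # The decimal-vs-binary gap behind the hard drive fuss:
    decimal_gb = 250 * 10**9           # "250 GB" as sold
    binary_gb = decimal_gb / 2.0**30   # as an OS reports it
    print("250 decimal GB = %.1f binary GB (%.1f%% smaller)"
          % (binary_gb, 100 * (1 - binary_gb / 250)))
    # 250 decimal GB = 232.8 binary GB (6.9% smaller)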

[Blanking intervals]
> > > > I'm hoping that they've given up on trying to make it
> > > > all look like a signal for CRTs.
> > >
> > > I would hope so. However, the numbers that I used came from
> > > IBM's choice to have the T221 DG3 EDID information explicitly
> > > state that the 33.32% overhead is required for the T221.
> > > This was IBM's choice, and not a generic DVI/CRT issue.
> >
> > Hmm. Maybe there's a minimum below which they expect the drivers
> > to get confused? (I believe the timings are tighter for the DG5
> > triple stripe mode, but my memory is hazy and your point is valid
> > with *any* "blanking" interval.) I guess there still needs to be
> > a synchronization period with DVI, but that could probably be
> > bypassed in a newer protocol. I'm guessing here.
>
> Some means of indicating the end of a row and the last
> row in the picture is needed. The minimum blanking times that
> the T221 DG3 requires are actually only a very small percentage of
> the signal (0.5% at 3840x2400).
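
To illustrate the scale (the totals here are invented
near-minimum figures, *not* the DG3's actual EDID timings):

    # Blanking as a fraction of the total signal, using made-up
    # near-minimum totals around the 3840x2400 active area.
    h_active, v_active = 3840, 2400
    h_total, v_total = 3856, 2404   # hypothetical: 16 px, 4 lines

    active = float(h_active * v_active)
    overhead = 1 - active / (h_total * v_total)
    print("%.2f%% of the pixel clock spent blanking"
          % (100 * overhead))
    # 0.58% - the same ballpark as the 0.5% quoted above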

Once per frame would probably do - one could use something like a
PPM header (bit depth - possibly - plus x & y resolution) and just
stream the rest in binary data. Even once every few frames would
work, if you could handle a delay in synching up. The days of
people changing mode half way down the screen (the Amiga, the
BBC Micro implementation of Elite) seem to have gone. But as you
say, there's not that much overhead with the current system.
There's always the side band link (although I'm not sure it'd be
much good for synching) if 100% of the main data bandwidth is
needed.
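
To sketch the header idea (the format and field sizes here are
invented, purely for illustration):

    import struct

    def frame(width, height, bpp, pixels):
        # magic, x & y resolution, bit depth, then raw payload
        header = struct.pack(">4sHHB", b"FRM0", width, height, bpp)
        return header + pixels

    black = bytes(3840 * 2400 * 3)   # one 24-bit frame of zeroes
    overhead = len(frame(3840, 2400, 24, black)) - len(black)
    print("%d bytes of header per frame" % overhead)   # 9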

> Both Windows and Linux (<=2.6) get confused when the timing does
> not correspond to the GTF. In general, the drivers do not appear
> to care about using timing that is near-minimum.

Hmm. There's nothing to say that the "timings" presented to the
OS need to bear much resemblance to the transmission format, of
course. (Although if the hsync, at least, doesn't line up, there'd
be trouble.)

> Part of the problem is that it is not the display driver that
> chooses the timing to use. The driver _may_/should report the
> EDID information upward to the windowing system which will then
> make a choice based on the user's preferences.

Sort of. The display driver *is* responsible for programming
the transmitters on the card, though, and could reinterpret
values which the OS (or user) understands in such a manner
that the equivalent effect is achieved with a completely
different protocol. Not that there's necessarily a problem
with the idea of flyback intervals, but I'm just saying we're
not stuck with them.
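
i.e. something like this (names hypothetical, and grossly
simplified - the real programming is register-level, of course):

    def program_transmitter(os_mode):
        # Keep the refresh rate the OS asked for, but quietly
        # substitute near-minimum blanking for the GTF porches.
        w, h = os_mode["width"], os_mode["height"]
        h_total, v_total = w + 16, h + 4   # made-up minimal porches
        return {"h_total": h_total, "v_total": v_total,
                "pixel_clock": h_total * v_total * os_mode["refresh"]}

    print(program_transmitter(
        {"width": 3840, "height": 2400, "refresh": 41}))
    # {'h_total': 3856, 'v_total': 2404, 'pixel_clock': 380062784}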

[Multiple variants/here we go again]
> Yeah, that is what it looked like to me. Particularly given
> that they kept the _maximum_ currently defined in the
> specification as the only configuration capable of driving
> a DG5. That implies to me that the people driving the
> specification _really_ are not all that interested in
> higher resolution displays, and do not think that the
> industry will head in that direction for at least another
> 5 years.

I'm sure the main thing driving it is the content protection
(and not wanting to pay Silicon Image royalties, if I understand
the situation with DVI/HDMI properly). At least it's a bit of
a step up, and there's potential for future development, but I
think they dropped the ball a little where they could have
removed one of the hurdles to higher definition displays. I
can hope the lower specs will only apply to devices other than
PC graphics cards, but I'm not that optimistic. Given that even
nVidia don't seem serious about pushing their internal transmitters
up to 165MHz (at least from what I've seen of nv40; G70 may be
better), I suspect 9MP will stay a tickbox feature which people
want to be theoretically possible but don't actually care about
for the mass market. But it's been a long day, and I may feel
less depressed tomorrow. :-)
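
For scale (my arithmetic, assuming ~0.5% blanking as above):

    import math

    pixel_clock = 165e6                 # one single-link transmitter
    frame_pixels = 3840 * 2400 * 1.005  # active area plus blanking

    print("%.1f Hz per 165MHz link; %d links for 41Hz"
          % (pixel_clock / frame_pixels,
             math.ceil(41 * frame_pixels / pixel_clock)))
    # 17.8 Hz per 165MHz link; 3 links for 41Hz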

> > > 3D displays: Orders of magnitude higher, assuming you are
> > > not just transferring the "surface" information.
> >
> > Indeed - I remember a load of papers on compression of holograms.
> > Still, without knowing what the future scaling plans are, I
> > guess I shouldn't be quick to dismiss any small increase in
> > bandwidth, and it's not like there are a lot of holographic
> > displays out there.
> >
> True 3D opens a big can of worms which the industry is just not yet
> ready to handle for consumer level products.

*Nod*. Image-based rendering and scene reconstruction give some
cool demos, and it'll be nice to see, but it'll take more than
DisplayPort to do that kind of thing. It's probably there for
the future, but whether it'll ever pass stereoscopic views or
holography I don't know.

> Stereoscopic simulated 3D is something that is within the grasp of
> current technology.

Btw, what happened to 3DTV? There was supposed to be a launch
at SIGGRAPH 04, but AFAICT nothing happened. At least two views
will probably start turning up once high def gets popular, once
everyone gets over the fact that their shutter specs stopped
working when they chucked their CRTs...

> From the wording of the review and press release, signaling and
> bandwidth were not the main goals for the specification. The
> main goals appeared to be connector issues and getting the
> interface to "fit" better in lower-end products.

Fingers crossed they'll manage one-size-fits-all. A connector
which doesn't keep falling off the back of the card but can
be detached easily would be nice (DVI is bad at this, and worse
with a VGA converter; the 1600SW's LVDS connector is *much*
better).

> Yes, a single connector has merits. I am certainly not
> attempting to say "DVI forever", merely that they had an
> opportunity to be a bit more forward-looking than they were
> with respect to moving the industry towards having more
> bandwidth as a default. 8 different configurations
> is a nightmare for consumers.

Agreed. I wonder if the board manufacturers will be able to
work out what they've actually used this time? :-)

If only more members of this group were able to promote the
benefits of high resolution to the spec committee. I remember
someone telling me he couldn't work out the point of a 21"
CRT (which, since he was using 1280x1024, might be a justifiable
viewpoint). But then I remember when I first saw 640x512 and
thought it was really high resolution!

--
Fluppeteer