[Intel-gfx] A question about syncing the default modes for the LVDS output device

yakui_zhao yakui.zhao at intel.com
Thu Apr 23 10:58:40 CEST 2009


     Hi, Eric
    It is not ideal to add giant tables of modelines. It would be better
if the modelines were generated with the CVT/GTF algorithms.

    I then used CVT/GTF to generate the modes, and found that the modes
produced by CVT/GTF differ from the default modes in the Xserver.
    For example, the modes for 800x600@85Hz:
    GTF:  "800x600_85.00"  56.55MHz  800 840 928 1056  600 601 604 630
    CVT:  "800x600_85.00"  56.75MHz  800 848 928 1056  600 603 607 633
    default Xserver mode:  56.30MHz  800 832 896 1048  600 601 604 631
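
    For reference, here is a condensed sketch of the GTF 1.1 calculation
(margins and interlace omitted; the hpix argument is assumed to already
be a multiple of the cell granularity). The names and structure are
mine, not taken from any existing implementation, and C'/M' are the
defaults already folded together: C'=(C-J)*K/256+J=30, M'=K/256*M=300.
It reproduces the 56.55MHz GTF line above. Build with: cc gtf-sketch.c -lm

#include <stdio.h>
#include <math.h>

#define CELL_GRAN    8.0   /* character cell granularity, pixels */
#define MIN_PORCH    1.0   /* minimum front porch, lines */
#define V_SYNC_RQD   3.0   /* vsync width, lines */
#define H_SYNC_PCT   8.0   /* hsync width, percent of htotal */
#define MIN_VS_BP  550.0   /* minimum vsync + back porch, usec */
#define C_PRIME     30.0
#define M_PRIME    300.0

static void gtf_mode(double hpix, double vlines, double vfreq)
{
	double hper_est, vs_bp, vtotal, vfield_est, hper;
	double duty, hblank, htotal, clock, hsync, hfront;

	/* estimate the horizontal period (usec) for the target refresh */
	hper_est = (1.0e6 / vfreq - MIN_VS_BP) / (vlines + MIN_PORCH);

	/* lines of vsync + back porch, then the total line count */
	vs_bp = rint(MIN_VS_BP / hper_est);
	vtotal = vlines + vs_bp + MIN_PORCH;

	/* refine the period against the field rate actually achievable */
	vfield_est = 1.0e6 / (hper_est * vtotal);
	hper = hper_est / (vfreq / vfield_est);

	/* blanking from the ideal duty cycle, rounded to 2 cells */
	duty = C_PRIME - M_PRIME * hper / 1000.0;
	hblank = rint(hpix * duty / (100.0 - duty) /
		      (2.0 * CELL_GRAN)) * 2.0 * CELL_GRAN;
	htotal = hpix + hblank;
	clock = htotal / hper;		/* MHz */

	hsync = rint(H_SYNC_PCT / 100.0 * htotal / CELL_GRAN) * CELL_GRAN;
	hfront = hblank / 2.0 - hsync;

	printf("\"%.0fx%.0f_%.2f\"  %.2fMHz  %.0f %.0f %.0f %.0f  "
	       "%.0f %.0f %.0f %.0f\n",
	       hpix, vlines, vfreq, clock,
	       hpix, hpix + hfront, hpix + hfront + hsync, htotal,
	       vlines, vlines + MIN_PORCH,
	       vlines + MIN_PORCH + V_SYNC_RQD, vtotal);
}

int main(void)
{
	gtf_mode(800.0, 600.0, 85.0);	/* prints the 56.55MHz line above */
	return 0;
}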
    
    There is also a second issue. The GTF algorithm used in userspace is
implemented with floating-point arithmetic. If we want to add it to
kernel space, we will have to rewrite it with 32-bit fixed-point integer
arithmetic. I did such a test and found that the resulting mode
parameters differ between userspace and kernel space.
    For example, for 1024x768@75Hz:
    GTF, fixed-point integer (kernel):
        hsync_start = 1084, hsync_end = 1192, htotal = 1365,
        vsync_start = 769,  vsync_end = 772,  vtotal = 803,
        pixel_clock = 82210KHz
    GTF, floating point (userspace):
        hsync_start = 1080, hsync_end = 1192, htotal = 1360,
        vsync_start = 769,  vsync_end = 772,  vtotal = 802,
        pixel_clock = 81180KHz
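
    One plausible source of the one-line vtotal difference (803 vs 802)
is the rounding mode the fixed-point port picks for its integer
divisions: the GTF reference algorithm rounds the 550us minimum
vsync+back-porch to the nearest line, and truncating or rounding up
instead can land one line away, which then shifts htotal and the pixel
clock as well. A minimal sketch of the three choices (the 16623ns
period is a hypothetical intermediate value for 1024x768@75Hz; the
macros mirror the kernel's DIV_ROUND_CLOSEST and DIV_ROUND_UP):

#include <stdio.h>

#define DIV_ROUND_CLOSEST(x, d)	(((x) + (d) / 2) / (d))
#define DIV_ROUND_UP(x, d)	(((x) + (d) - 1) / (d))

int main(void)
{
	/* ~16.623us horizontal period, kept in ns so the
	 * arithmetic stays within 32 bits */
	long hper_ns = 16623;		/* hypothetical intermediate */
	long min_vs_bp_ns = 550000;	/* GTF's 550us minimum */

	printf("floor: %ld lines\n", min_vs_bp_ns / hper_ns);   /* 33 */
	printf("round: %ld lines\n",
	       DIV_ROUND_CLOSEST(min_vs_bp_ns, hper_ns));       /* 33 */
	printf("ceil : %ld lines\n",
	       DIV_ROUND_UP(min_vs_bp_ns, hper_ns));            /* 34 */
	return 0;
}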
   
Thanks.
> These giant tables of modes are insane.  Especially having a bunch of
> different refresh rates when the LVDS actually has a fixed refresh rate.
> Just generate a mode at each appropriate size using GTF or CVT.
> 
> I'm not really sold on the whole idea of the kernel generating these
> fake modes for LVDS, given that we can support any size and that the
> refresh rate is a lie since we're always using the fixed mode.  Any
> other opinions on this?
> 
