New DRM model

Jaymz Julian jaymz@artificial-stupidity.net
Tue, 10 Feb 2004 16:05:00 +1100


On Mon, Feb 09, 2004 at 12:01:46PM -0800, Jon Smirl wrote:
> My immediate goal is to get a login console up on a single screen. That means I
> need to get the mode set, make fonts work and write a mini-terminal emulator to
> the DRM API. 

in other words, "I want to reimplement fbcon with my own system, and it'll be
so much better because it's mine, damnit!"

Do you really believe that X should be replacing the console?

> A plus to this design is that the entire kernel TTY, VT, FB layers are
> by-passed. Some of that code is very ancient and it probably has SMP issues.
> Under the new model everything that doesn't actually play with the hardware is
> moved to user space. Also the old model only supported one card and the new one
> supports many.

How do I get pre-userspace boot messages under this scheme on my non-intel
system, which doesn't boot into text mode?  It sounds like you still need the
fbcon layer to exist anyhow.  My egotistical opinion is that fixing fbcon is
a far nicer path.

> My overall plan is something like this:
> 1) machine boots in VGA mode

obviously not always true, as ben has already mentioned.

> 2) DRM driver is built into the kernel

this is a horrific pain.  I should *not* have to rebuild my entire kernel just
to change video cards.  If it can't be built as a module, it doesn't exist.
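
And the bar isn't even high - the module glue is trivial.  A minimal sketch,
with a made-up driver name (none of this is Jon's code):

#include <linux/module.h>
#include <linux/init.h>

/* hypothetical DRM driver as a loadable module - swap video
 * cards with insmod/rmmod instead of rebuilding the kernel */
static int __init mydrm_init(void)
{
	printk(KERN_INFO "mydrm: loaded\n");
	return 0;	/* 0 = success, module stays resident */
}

static void __exit mydrm_exit(void)
{
	printk(KERN_INFO "mydrm: unloaded\n");
}

module_init(mydrm_init);
module_exit(mydrm_exit);
MODULE_LICENSE("GPL");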

> 3) when hotplug event happens (early boot)
> these are done in user space...
>   a) card is reset if needed
>   b) card is initialized and CP is started, optimal mode is set

I assume you mean "optimal or user selected" mode :).

>   c) a pseudo terminal is created, takeover_console is routed to the pseudo
> terminal

So you *do* need fbcon to still exist ;).
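
The pty half of that is bog-standard unix plumbing, at least - a rough
sketch of the idea (all names mine, not Jon's, and the actual DRM glyph
drawing waved away to stdout):

#define _XOPEN_SOURCE 600
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <fcntl.h>

int main(void)
{
	char buf[1024];
	ssize_t n;

	/* allocate a pty master; the console layer gets pointed
	 * at the slave side */
	int master = posix_openpt(O_RDWR | O_NOCTTY);
	if (master < 0 || grantpt(master) < 0 || unlockpt(master) < 0) {
		perror("pty setup");
		return 1;
	}
	printf("slave side is %s\n", ptsname(master));

	/* pull console output off the master and hand it to the
	 * renderer - here we just dump it to stdout instead of
	 * drawing glyphs through the DRM API */
	while ((n = read(master, buf, sizeof(buf))) > 0)
		write(STDOUT_FILENO, buf, n);
	return 0;
}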

>   d) very small user space app listens to pseudo terminal and implements
> terminal emulator using DRM API
> 4) Full user space starts,
>   a) OpenGL library can be loaded
>   b) initial app execs more complex app which implements VTs using OpenGL API

Or as standard RGBA windows, either/or...

>   c) you can run one of these for each video card -- multiuser support
> 5) xserver starts
>   a) uses OpenGL for it's API, no access to framebuffer.

I still believe that X on OpenGL is a hideously stupid idea.  Given that
you're already writing the graphics card drivers, wouldn't it be better
to have an API specifically for the job of linux video card drivers, and
the GL on top of that, rather than vice versa?  Why be stuck with an API
that you don't control, which was designed for !(this job)?
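
To make that concrete, I'm picturing a thin ops table that each card driver
fills in, with GL implemented against it rather than the other way around -
entirely hypothetical, every name here is invented:

#include <stddef.h>

/* hypothetical: the sort of thing a purpose-built linux video
 * driver API might expose, with OpenGL layered on top of *this* */
struct lvd_mode {
	int width, height, depth, refresh;
};

struct lvd_ops {
	int   (*set_mode)(void *card, const struct lvd_mode *mode);
	int   (*submit)(void *card, const void *cmds, size_t len);
	void *(*map_vram)(void *card, size_t offset, size_t len);
	int   (*wait_idle)(void *card);
};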

Additionally, it sounds like your solution would be far harder to make play
nice with !linux (*bsd, for example), which you may or may not care about.

> A plus to this design is that the entire kernel TTY, VT, FB layers are
> by-passed. Some of that code is very ancient and it probably has SMP issues.
> Under the new model everything that doesn't actually play with the hardware is
> moved to user space. Also the old model only supported one card and the new one
> supports many.

I do like supporting multiple cards (my desktop system has 3), and I like
moving as much to userspace as possible - I hate kernel code as much as the
next guy.  But moving the console into userspace just smells of bad idea(tm):
it's a great idea until everything gets fucked up, at which point we're left
with an unhelpful purple screen of doom.  That's assuming the kernel can
write to it at all - and if it can, then you've implemented the console a
third time, and a second time in kernel space, etc etc etc.
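
Because the only way the kernel gets panic output onto the screen today is
by registering a struct console with a write hook, in kernel space - roughly
this shape (a sketch only, with made-up names):

#include <linux/console.h>
#include <linux/init.h>

/* sketch: the bare minimum for the kernel to print its own
 * oopses - if this exists, the console never really left
 * kernel space */
static void mycon_write(struct console *con, const char *s, unsigned count)
{
	/* poke the characters at the hardware here */
}

static struct console mycon = {
	.name  = "mycon",
	.write = mycon_write,
	.flags = CON_PRINTBUFFER,
	.index = -1,
};

static int __init mycon_init(void)
{
	register_console(&mycon);
	return 0;
}
console_initcall(mycon_init);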

All of that said, is there somewhere I can get at your code to look at
without bitkeeper yet?

	-- jj

-- 
Jaymz Julian aka A Life in Hell / Warriors of the Wasteland / Unreal
Coder, Visionary, Fat Ass.
"Hannibal is a serial killer. He only likes to kill and eat people. 
 Very few people have `I want to be killed and eaten' on their cards, 
 so Hannibal is out of a job." - http://cards.sf.net