Ilija Hadzic's Virtual CRTCs feature discussion
daniel at ffwll.ch
Tue Aug 26 01:18:49 PDT 2014
On Mon, Aug 25, 2014 at 12:02:09PM -0700, Derek wrote:
> Hi, Daniel
> Thanks for your response!
> Are you talking about v4l's virtual GPU driver? Could you please share some
> more information? I'd like to take a look at this driver, but I can't find it
> in their git repository. Or is there an introduction post related to this
> driver? Thanks.
It isn't merged yet afaik, but Hans Verkuil has made a nice presentation
about it at LinuxCon. Unfortunately the linuxcon page doesn't even have
the slides afaics.
The driver was called vivid (there's an older one called vivi apparently).
But I don't think it's a good model for a virtual drm driver, if that's
why you want to look at it. Just mentioned it to show that there's lots of
uses for virtual drivers.
> Best wishes
> On Mon, Aug 25, 2014 at 6:12 AM, Daniel Vetter <daniel at ffwll.ch> wrote:
> > On Mon, Aug 18, 2014 at 03:35:16PM -0700, Derek wrote:
> > > Hi every one
> > > I'm currently working on VirtualMonitor in my leisure time. It allows you
> > > to use a computer/tablet/smartphone as a second monitor for your primary
> > > computer. Please refer to http://virtualmonitor.github.io for more
> > > information. Currently I have released a very basic version for
> > > Windows 2000 - Windows 7 to demonstrate that the project is feasible.
> > > When I was trying to make a further step on Windows, I realized it is
> > > difficult for an individual, as it is not open source and there is no
> > > technical support from Microsoft.
> > >
> > > Then I wanted to move to Linux, and I found Ilija Hadzic's post about
> > > Virtual CRTCs. His post is here:
> > >
> > > http://lists.freedesktop.org/archives/dri-devel/2011-November/015975.html
> > > In his implementation, the GPU driver can create an arbitrary number of
> > > CRTCs (configurable by the user) instead of only those CRTCs that
> > > represent real hardware.
> > > It is very useful not only for VirtualMonitor, but also for
> > > VNC/virtualization/USB displays etc. Based on this implementation, those
> > > applications will be able to take full advantage of the physical graphics
> > > card (3D acceleration).
> > >
> > > I want to raise Ilija's original question again: does anybody in this
> > > community think Virtual CRTCs are useful, and is anyone willing to work
> > > together to make further progress?
> > > My thought is: can we implement a driver-independent layer between DRI
> > > and the vendor-specific GPU drivers, with some general API? Maybe
> > > implement this based on one vendor's GPU first, e.g. based on Ilija's
> > > implementation for Radeon, as a daemon?
> > >
> > > GPU drivers are not my expertise. If some experts from this community
> > > think this feature is interesting and are willing to initiate a project
> > > for it, that would be great. Then people can work and discuss together.
> > >
> > > Any comments regarding Virtual CRTCs or VirtualMonitor are very welcome.
> > > Thanks.
> > I think the concept is overall sound (I haven't looked at the old patches
> > in detail). For the actual implementation I think a separate virtual drm
> > driver is now the better approach, since with dma-buf and soon native
> > fence support we can now do this properly.
> > And especially now that we have multi-gpu support in X it should integrate
> > almost seamlessly into X (and other display managers with multi-gpu
> > support), instead of requiring special support in all drivers.
> > Another thing for which iirc no one ever proposed a decent solution is
> > synchronization to consumers of the frontbuffers for virtual gpus. So I
> > guess the driver-private ioctl interface to make that magic work will be
> > key.
> > A (configurable) virtual gpu should also be really useful for automated
> > testing, e.g. of hot-plug and unplug (both drm core and userspace). The
> > v4l folks have such a driver (recently massively revamped for 3.18) and it
> > looks extremely useful.
> > For the configuration interface I guess a few simple module options to get
> > started should be enough, eventually we can switch to configfs (or
> > similar) to be able to configure and create/destroy virtual gpus at
> > runtime.
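[As a rough illustration of the "simple module options to get started" idea above, the load-time configuration could look something like the following kernel-module sketch. Everything here is hypothetical (the `vcrtc` name, the parameter name, the init body); it is not an existing driver, just the minimal shape of a module-parameter-configured virtual gpu driver:]

```c
/*
 * Hypothetical sketch only -- not an existing driver. Illustrates the
 * "simple module options" configuration idea: the number of virtual
 * CRTCs is fixed at module load time via a parameter. With configfs
 * (or similar) this could later become create/destroy at runtime.
 */
#include <linux/module.h>
#include <linux/moduleparam.h>

static int num_crtcs = 1;
module_param(num_crtcs, int, 0444);
MODULE_PARM_DESC(num_crtcs, "Number of virtual CRTCs to create at load time");

static int __init vcrtc_init(void)
{
	/* A real driver would register a drm_device here and create
	 * num_crtcs virtual CRTCs/encoders/connectors. */
	return 0;
}

static void __exit vcrtc_exit(void)
{
	/* Tear down the drm_device and its virtual outputs. */
}

module_init(vcrtc_init);
module_exit(vcrtc_exit);
MODULE_LICENSE("GPL");
```

[Usage would then be e.g. `modprobe vcrtc num_crtcs=4` to get four virtual outputs at load time.]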
> > Just my random thoughts, probably good to kickstart the discussion with
> > some quick patches and chat with people on #dri-devel on freenode irc.
> > Cheers, Daniel
> > --
> > Daniel Vetter
> > Software Engineer, Intel Corporation
> > +41 (0) 79 365 57 48 - http://blog.ffwll.ch
--
Daniel Vetter
Software Engineer, Intel Corporation
+41 (0) 79 365 57 48 - http://blog.ffwll.ch