virtual filesystem ideas

Ken Deeter ktdeeter at
Mon Sep 22 23:21:19 EEST 2003

> So, can you give an example when the observer would be used?

Well, the main distinction between when to and when not to would just be
whether you wanted to know what was going on, or whether you didn't. As
you mentioned below, when the app does want to know about what is going
on, it will have a progress bar or a throbber or something like that.
The backend observer interface would be specifically used to monitor
this kind of thing. I think it would be interesting if a program that
knows it's using http over SSL, for example, could show you more
information in a progress bar than just a percentage. The observer lets
you capture that in a backend-specific way, so we don't have to abstract
the sense of "progress" among all back ends. I think usually when apps
care what's going on at the lower level, they want more than "10%
done, 20% done, etc..", and when apps don't care, they shouldn't
have to.
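To make the idea concrete, here is a minimal sketch of what a backend-specific observer might look like. All the names here (HttpSslObserver, http_get, the callback signatures) are invented for illustration, not part of any real vfs API:

```python
class HttpSslObserver:
    """Hypothetical backend-specific observer: an HTTP-over-SSL backend
    can report more than a bare percentage."""
    def __init__(self):
        self.events = []

    def on_progress(self, bytes_done, bytes_total):
        self.events.append(("progress", bytes_done, bytes_total))

    def on_tls_handshake(self, cipher):
        # Only meaningful for this backend; a generic "progress" API
        # could never abstract this.
        self.events.append(("tls", cipher))


def http_get(url, observer=None):
    """Toy 'backend' that notifies an observer only if one was given;
    apps that don't care simply pass nothing."""
    if observer:
        observer.on_tls_handshake("AES256-SHA")
    data = b"x" * 100                     # pretend download
    for done in (50, 100):
        if observer:
            observer.on_progress(done, 100)
    return data


obs = HttpSslObserver()
http_get("https://example.org/file", obs)
print(obs.events)
# [('tls', 'AES256-SHA'), ('progress', 50, 100), ('progress', 100, 100)]
```

An app that doesn't care calls `http_get(url)` with no observer and pays nothing for the machinery.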

As for the main vfs observer, the primary use I was thinking of was a
FAM type of application. If we are going to extend our filesystem reach
to different kinds of fs's, then the alteration monitoring will surely
be different across each backend. In that case you want a central way to
capture what is going on, so that if, say, you have konqueror looking at
an ssh:// location, it might be able to pop up new files as they are
created.

But now that you mention it, maybe it would make sense for this traffic to
go between just the vfs server and the client.

But I think there still might be cases, when persistent connections go
down, for example, that maybe no app particularly cares about, but the
user might, and so there should be a way to capture that.

> In the ideal world, apps would use the vfs for all file access. Not all
> file access is of the form "user asks the app to load a file". Much of
> it is things that happen automatically, in the background, or as small
> parts of everyday program work. When loading an icon for display you
> can't have the vfs automatically popping up dialogs for you. And in the
> case you want to have status while loading, chances are that you don't
> want a standard dialog, but instead want to use some specific UI feature
> of your app (throbber, status bar in window, etc).

I agree with you. However, if the app hasn't implemented a progress bar
or throbber, or has not explicitly decided that nothing should be said,
should it just say nothing then? I think this is the traditional
assumption of I/O programming to date, and in the GUI context I
question it somewhat. I'm not saying there shouldn't be a choice of
whether something does or does not pop up, but if we want to see what is
going on, we should be able to. Maybe pop-up dialogs aren't the best way
to do it; maybe you could have a panel applet that lists current ongoing
vfs transactions or something. But even for a command line program,
I think it would be easier on the user if he had some visual indication
that something was going on, without necessarily requiring every
programmer to have to program that in. Besides, most progress bars look
kind of the same anyway; it is a huge duplication of effort.

If we clearly define an interface in which a status bar type thing
interacts with the backend, then the client API could even provide a standard
"vfs status widget" that did the right thing.

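As a toy illustration of such a reusable status widget, here is a text-mode version that only assumes the backend pushes (bytes_done, bytes_total) progress events; the class and method names are made up:

```python
class TextStatusWidget:
    """Hypothetical 'standard vfs status widget': any app could plug it
    into the progress callback instead of writing its own bar."""
    def __init__(self):
        self.line = ""

    def on_progress(self, done, total):
        pct = 100 * done // total
        filled = pct // 10
        # Render a 10-cell bar plus the percentage.
        self.line = "[" + "#" * filled + " " * (10 - filled) + "] " + str(pct) + "%"


w = TextStatusWidget()
w.on_progress(30, 100)
print(w.line)   # [###       ] 30%
```

A GUI toolkit version would implement the same callback and draw a real widget; the point is that the interface between backend and widget is fixed, so the widget is written once.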
> It is worse with dbus. For the case above the kernel switches are not
> context switches. With dbus everything will go through the dbus daemon,
> so you'll get: app -> dbus daemon -> vfs server -> dbus daemon -> app.
> That is 4 context switches as opposed to the 2 that are "needed" for
> out-of-process communication.

Well, I guess such is the price you pay for being able to abstract away
whether you are using glib or qt main loops and such. Again, since I
can't really predict the performance, and I'm not sure even where the
bottlenecks will be, just saying there are more context switches seems
unconvincing to me. What if we are talking about loading http:// or
smb:// or something? Your bottleneck then is not likely to be context
switches; your process is going to get switched out while it's waiting
for the network anyway.

> Sure, it is nice if there is some common code for doing some thing. But
> there are so many different ways an error can be handled. Sometimes the
> app expects an operation to fail, for instance when searching for a file
> through a set of paths by calling open in each path. And even when a
> failure is really a failure its almost always not interesting what the
> lowlevel error was, but instead the i/o error needs to be propagated a
> few levels up in the application where it can be either handled by doing
> something different or a dialog with a much higher level description can
> be presented to the user. 

Right, so what I'm saying is: if you don't care about what the error
was, then let the system handle it in a standard way, and it can provide
nice localized GUI messages or whatever. If you do want to know what the
error was, then you specify an observer, and you can react exactly how
you want to.

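In code, the choice might look roughly like this hypothetical sketch, where a missing observer falls back to a standard handler (all names are invented):

```python
def default_error_handler(err):
    # Stand-in for the system's standard, localized error dialog.
    return "standard dialog: " + str(err)


handled = []   # what the "standard" path produced

def vfs_open(path, on_error=None):
    """Toy open: with an observer the app gets the raw error and reacts
    however it wants; without one, the system handles it."""
    try:
        return open(path, "rb")
    except OSError as err:
        if on_error:
            on_error(err)
        else:
            handled.append(default_error_handler(err))
        return None


# An app that cares inspects the low-level error itself:
errors = []
vfs_open("/no/such/file", on_error=lambda e: errors.append(type(e).__name__))
print(errors)         # ['FileNotFoundError']

# An app that doesn't care gets the standard handling for free:
vfs_open("/no/such/file")
print(len(handled))   # 1
```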
> Yeah, maybe I was harsh on you due to this, but I just think the dbus
> direction is wrong, and will cause us problems, while not solving really
> many issues.

Well, whether it will cause us problems I don't know ;-) As for whether
it will solve issues, it seems like a pretty flexible way to me.. ;-)

> The gnome-vfs backends are in-process, but I'm moving some of them
> out-of-process, since then cached data, authentication and connections
> can be shared, and the backends get a much more well-specified
> environment to run in. I think a common vfs system has to be out-of-proc
> to have any chance to work, but that may mean bad performance for local
> files, slowing down things like the file manager...

konqueror seems to do ok for me ;-)

> > cancellation: presumably this would be part of the interface of the vfs
> > server. Since it would know about the various transactions going on at
> > the moment, it should know how to cancel one.
> This ties in with asynchronicity. There has to be a way to refer to an
> outstanding asynchronous job, so you can cancel it (or a group of jobs).
> Gnome-vfs has a whole scheduling subsystem to handle this, including I/O
> priorities and group cancellation.

Right, if this is needed (and I presume it is, otherwise you wouldn't
have put it in), then I'm all for putting it in. 
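The handle idea behind cancellation could be sketched like this. Gnome-vfs's real scheduler is much richer (I/O priorities, group cancellation); the names below are invented for illustration:

```python
import itertools

class JobTable:
    """Hypothetical job table: submitting an operation returns a handle
    that can later be used to cancel it."""
    _ids = itertools.count(1)

    def __init__(self):
        self.jobs = {}

    def submit(self, desc):
        job_id = next(self._ids)
        self.jobs[job_id] = {"desc": desc, "cancelled": False}
        return job_id                     # the handle

    def cancel(self, job_id):
        # A real implementation would interrupt the backend's I/O;
        # here we only mark the job.
        self.jobs[job_id]["cancelled"] = True


t = JobTable()
j = t.submit("copy ssh://host/a -> file:///b")
t.cancel(j)
print(t.jobs[j]["cancelled"])   # True
```

Group cancellation would just be a second mapping from group id to a set of such handles.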

> > how uri's look: since there even seems to be RFC's regarding this issue,
> > it would probably be wise to stick to the protocol:// type URI's
> That's not enough though. Both kde and gnome have extensions to the
> available RFCs.

Ok. But I guess my argument would be that if you do have a common vfs,
then your URI problems sort of go away, because to the vfs there is only
one URI format. Whether this agrees with the current gnome/kde
implementations... maybe not, but it's probably a small matter of
translation. I think it's like asking 'well, how do you specify
filenames on linux?', and the answer is 'this way, because that's the
only way it works'.

> I don't care much about how the stacking is implemented, but how is
> chaining exposed. gnome-vfs uses '#' to chain uris, libferris treats
> e.g. zips as directories.

Oh ok, well I guess this is almost a UI issue. I think there are
advantages to both. Having it like a directory might make it easier for
users in the sense that they don't have to understand the whole stacking
concept, but it might confuse them because there seems to be less
distinction between a file and a directory.

Having explicit # is much more, well, exposed, but does the user really care? I tend
to think what will matter in the end, is whether the zip file has a 'file' like icon
or a 'folder' like icon in the file manager.
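Mechanically, splitting a '#'-chained URI is trivial; the exact URI below is only an approximation of the gnome-vfs syntax, not a real example from its documentation:

```python
def split_chain(uri):
    """Split a '#'-chained URI into its stacked layers: the part before
    each '#' is the container, the part after is the method applied to
    it (approximating the gnome-vfs convention)."""
    return uri.split("#")


print(split_chain("file:///home/me/stuff.zip#zip:/doc.txt"))
# ['file:///home/me/stuff.zip', 'zip:/doc.txt']
```

Whichever surface syntax is chosen, the file manager only needs the first element to pick the icon and the rest to decide whether to descend.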

> Unfortunately you can't use unicode or any other encoding for the
> low-level filenames, since that doesn't allow you to express the set of
> all allowable filenames (and thus, you can't e.g. rename a non-utf8
> filename to a utf8 one to fix it, because the vfs doesn't allow you to
> express the current name). Furthermore its often impossible to know what
> encoding is on the disk (e.g. for a remote ftp site), so you can't
> convert to/from utf8. How this is handled differs between KDE and Gnome,
> and gnome even lets you configure it manually using env-vars. The
> solution has to be a way to ask the vfs for "display filenames", that
> can be used in the UI (being unicode or at least a well defined
> encoding).

Oh right, sorry, that is what I meant. I meant that internally, unicode
should be passed around. I think if you want to rename a non-utf8
encoded filename to some other name, then someone should translate both
utf8 strings to the encoding that is needed for that filesystem and do
the rename in the 'local' encoding. Maybe this is much easier said than
done. (Kind of like how samba 3 is supposed to negotiate unicode over
the wire and do appropriate translation on both ends.)

How one figures out the 'local' encoding has always been a bit of black
magic anyhow. You either do a bunch of conversions and see which one works,
or you just assume something, a la G_BROKEN_FILENAMES.

Probably the most sensible thing to do would be something along the
lines of G_BROKEN_FILENAMES, and say you are going to get filenames in
the locale that you run the vfs server in. But the API should also have
something to allow you to override that, and certain remote fs's may be
able to tell. If we have a central vfs server, then maybe we can even
have per-base-URI settings of encodings, so that we could say "on this
host, use this encoding", and backends would be initialized accordingly.
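The rename-through-local-encoding idea can be sketched with an in-memory "filesystem" of raw byte names. The per-base-URI encoding setting (latin-1 here) and the function name are assumptions for the example:

```python
FS_ENCODING = "latin-1"   # assumed per-base-URI setting for this backend

def vfs_rename(store, old_utf8, new_utf8):
    """Toy rename: the API speaks unicode, the backend translates both
    names into the filesystem's own encoding and renames there, so even
    a non-utf8 name can be fixed."""
    old_raw = old_utf8.encode(FS_ENCODING)
    new_raw = new_utf8.encode(FS_ENCODING)
    store[new_raw] = store.pop(old_raw)


# A file whose on-disk name is latin-1, not utf-8:
fs = {"caf\xe9".encode("latin-1"): b"data"}
vfs_rename(fs, "caf\xe9", "cafe")
print(list(fs))    # [b'cafe']
```

The hard part the text describes, figuring out FS_ENCODING in the first place, is exactly what the per-base-URI settings would configure.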


(  Ken Deeter (Kentarou SHINOHARA)             (
 )                                              )
(  "If only God were alive to see this.. "     (
 )                             -Homer Simpson   )

More information about the xdg mailing list