Proposal for a MIME mapping spec
Philip Peake
pjp at osdl.org
Thu Jul 8 17:48:53 EEST 2004
Alexander Larsson wrote:
>On Wed, 2004-07-07 at 23:46, Philip Peake wrote:
>
>>I would also like to add a plea to at least consider adding an
>>abstraction layer so that the on-disk hierarchy could be replaced by a
>>(possibly remote) database of some description (LDAP/RDBMS/etc).
>
>This is an immense change in complexity. Going from a shared file format
>specification to a common API with all the ABI stability issues, release
>schedule differences, dependency hell and language bindings problems.
>
>What exactly would this gain you? I see zero gain, only lots of pain.
>
I'm not certain I follow the API/ABI/dependency argument ... I think
life actually gets easier for the application developer who just loads a
library and makes a call to it to return the value(s) for a given MIME-type.
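
To make that concrete, the caller side could be as small as this -- every
name below is made up purely for illustration (no such library exists
today), and the stub simply fakes what a real backend would look up:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Hypothetical library call: return the desktop entry associated with
 * a MIME type, or NULL if there is no association. Caller frees it. */
static char *mime_lookup_default_app(const char *mime_type)
{
    /* Stub so the example stands alone. A real backend could read the
     * existing per-user files, or query LDAP or an RDBMS, without the
     * caller knowing or caring which. */
    if (strcmp(mime_type, "text/html") == 0)
        return strdup("firefox.desktop");
    return NULL;
}

int main(void)
{
    char *app = mime_lookup_default_app("text/html");
    printf("text/html -> %s\n", app ? app : "(no association)");
    free(app);
    return 0;
}

The application neither knows nor cares whether the answer came from a
file under the home directory, from LDAP, or from an RDBMS.
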
What would be gained would be the breaking of the dependency on the
user's home directory. Think of uses other than a single user sitting in
front of his own machine.
It would also allow optimization of the config "database".
The Oregon primary (K-12) system uses a mixture of Linux and Windows
desktops, about 10,000 desktops in all. Students use whatever machine
they sit in front of. The user data/home directory is on a Samba server
(SMB mounts, to allow for Linux or Windows). The big problem is that of
scaling; at lesson change time all the students log out, move to another
classroom and log in again -- and the file servers melt down under the load
caused by the Linux desktops starting up.
They have tried the fastest disks they can find, and different disk
configurations - the best they can manage is to achieve something like
10 KDE startups or 14 GNOME startups per drive before the disks are
saturated with random IO (seeking...). There is no problem with network
bandwidth or disk throughput; it's simply the randomness of the IO.
Enterprise deployments are going to have similar problems, although
probably not as exaggerated.
I admit it's much easier to just write a file in a fixed place and read a
file in a fixed place when looking at one item in isolation, but the
sheer number of these small accesses is the problem.
A first step could be to provide an API for adding and reading data
which initially just translates into creating exactly the same on-disk
structure as at present - in fact, this would be a requirement for
backwards compatibility. But once the API comes into common use, adding
other "backends" would allow the data to be kept in storage much better
optimized for scaling.
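
Roughly what I mean by a "backend" -- again, every name here is
hypothetical and the file backend is stubbed out, but the first real one
would read and write exactly the same per-user files as today:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

struct mime_backend {
    const char *name;
    char *(*get_app)(const char *mime_type);                 /* caller frees result */
    int (*set_app)(const char *mime_type, const char *app);  /* 0 on success */
};

/* Backwards-compatible backend: a real implementation would read and
 * write the existing on-disk hierarchy. Stubbed so the sketch compiles. */
static char *file_get_app(const char *mime_type)
{
    (void)mime_type;
    return strdup("gedit.desktop");
}

static int file_set_app(const char *mime_type, const char *app)
{
    (void)mime_type;
    (void)app;
    return 0;
}

static const struct mime_backend file_backend = {
    "file", file_get_app, file_set_app
};

int main(void)
{
    /* A site could select an LDAP or RDBMS backend here instead,
     * with no change to the calling applications. */
    const struct mime_backend *be = &file_backend;
    char *app = be->get_app("text/plain");
    printf("[%s] text/plain -> %s\n", be->name, app);
    free(app);
    return 0;
}

A deployment like the Oregon schools could then point the library at a
central backend and turn thousands of small random reads at login time
into a handful of queries.
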
Yes, it's added complication, but probably necessary to get acceptable
performance for widespread use.
Philip