<!DOCTYPE html PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN">
<html>
<head>
<meta content="text/html;charset=ISO-8859-1" http-equiv="Content-Type">
<title></title>
</head>
<body bgcolor="#ffffff" text="#000000">
Alexander Larsson wrote:<br>
<blockquote cite="mid1089266926.22236.18.camel@localhost.localdomain"
type="cite">
<pre wrap="">On Wed, 2004-07-07 at 23:46, Philip Peake wrote:
</pre>
<blockquote type="cite"><br>
<pre wrap="">
I would also like to add a plea to at least consider adding an
abstraction layer so that the on-disk hierarchy could be replaced by a
(possibly remote) database of some description (LDAP/RDBMS/etc).
</pre>
</blockquote>
<pre wrap=""><!---->
This is an immense change in complexity. Going from a shared file format
specification to a common API with all the ABI stability issues, release
schedule differences, dependency hell and language bindings problems.
What exactly would this gain you? I see zero gain, only lots of pain.
</pre>
</blockquote>
I'm not certain I follow the API/ABI/dependency argument ... I think
life actually gets easier for the application developer, who just
loads a library and makes a single call to retrieve the value(s) for
a given MIME-type.<br>
<br>
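Purely as a sketch, the application side could be as small as this -
every name below is invented for illustration (it is not an existing
freedesktop API), with a trivial stub standing in for the imagined
shared library so the example is self-contained:<br>
<pre wrap="">
/*
 * Hypothetical application view of the proposed abstraction layer.
 * mime_db_get() is an invented name; a stub stands in for the
 * library so this compiles and runs on its own.
 */
#include &lt;stdio.h&gt;
#include &lt;stdlib.h&gt;
#include &lt;string.h&gt;

/* Stub: a real library would consult whatever backend is configured. */
static char *mime_db_get(const char *mime_type, const char *key)
{
    (void)mime_type; (void)key;
    return strdup("HTML document");
}

int main(void)
{
    /* The application neither knows nor cares whether this value
     * comes from files under the home directory, LDAP or an RDBMS. */
    char *comment = mime_db_get("text/html", "comment");
    printf("text/html: %s\n", comment);
    free(comment);
    return 0;
}
</pre>
<br>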
What would be gained would be the breaking of the dependency on the
user's home directory. Think of uses other than a single user sitting
in front of his own machine.<br>
<br>
It would also allow optimization of the config "database".<br>
<br>
The Oregon primary (K-12) system uses a mixture of Linux and Windows
desktops, about 10,000 desktops in all. Students use whatever machine
they sit in front of. The user data/home directory is on a Samba server
(SMB mounts, to allow for Linux or Windows). The big problem is
scaling: at lesson change time all the students log out, move to
another classroom and log in again -- and the fileservers melt down
under the load of the Linux desktops starting up.<br>
<br>
They have tried the fastest disks they can find, and different disk
configurations - the best they can manage is something like
10 KDE startups or 14 Gnome startups per drive before the disks are
saturated with random IO (seeking...). There is no problem with
network bandwidth or disk throughput; it's simply the randomness of
the IO.<br>
<br>
Enterprise deployments are going to have similar problems, although
probably not as exaggerated.<br>
<br>
I admit it's much easier to just write a file in a fixed place and
read a file from a fixed place when looking at one item in isolation,
but the sheer number of these files is a problem.<br>
<br>
A first step could be to provide an API for adding and reading data
which actually just translates into creating exactly the same
structure as at present - in fact, this would be a requirement for
backwards compatibility. But once the API came into common use,
adding other "backends" would allow the data to be stored in ways
much better optimized for scaling.<br>
<br>
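For instance, the backend behind such an API could be nothing more
than a small table of function pointers - again, all the names here
are hypothetical, and the default backend just keeps today's on-disk
layout so nothing breaks:<br>
<pre wrap="">
/*
 * Sketch of a pluggable backend behind the API.  All names are
 * invented; the point is only that the default backend preserves
 * the existing on-disk layout.
 */
#include &lt;stdio.h&gt;
#include &lt;stdlib.h&gt;
#include &lt;string.h&gt;

typedef struct {
    const char *name;
    char *(*get)(const char *mime_type, const char *key);
} mime_backend;

/* Default backend: resolves keys against the current file tree
 * (the path is only illustrative; a real backend would parse the
 * files rather than just print where it would look). */
static char *file_get(const char *mime_type, const char *key)
{
    char path[512];
    snprintf(path, sizeof path, "/usr/share/mime/%s.xml", mime_type);
    printf("[file backend] would read %s for key '%s'\n", path, key);
    return strdup("stub value");
}

static mime_backend file_backend = { "file", file_get };

/* An LDAP or RDBMS backend would only have to fill in the same
 * two slots; applications calling through the API never notice. */
static mime_backend *active = &amp;file_backend;

int main(void)
{
    char *v = active->get("text/html", "comment");
    printf("backend '%s' returned: %s\n", active->name, v);
    free(v);
    return 0;
}
</pre>
<br>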
Yes, it's added complication, but probably necessary to get acceptable
performance for widespread use.<br>
<br>
Philip<br>
<br>
</body>
</html>