dvfs api and toolkits
apenwarr at nit.ca
Sat Apr 9 23:16:55 EEST 2005
On Sat, Apr 09, 2005 at 03:20:40PM -0400, Sean Middleditch wrote:
> If we try to emulate seeking in HTTP by re-requesting files, we make that
> impossible without adding locking. If we just tell apps to download the
> files and seek on the local cache then we will still retain the atomic
> write capability.
Note that one potential solution to this is to *start* an http download of a
file and permit seeking around only in the part that has already been
downloaded; seeks past that point need to wait for the download to continue.
This limits slowness to, say, people who want to jump directly to page 267
in a PDF file they haven't fully downloaded yet, which is certainly not the
common case (who knows up front that they want page 267??). On the other
hand, it implies that you really want to implement a cache of *some* sort;
this seek-in-partial-file operation is essentially an extra fancy form of
caching anyway.
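The seek-within-the-downloaded-prefix idea above can be sketched in a few
lines. This is purely illustrative (the class and method names are my own,
not any dvfs API): seeks into the already-downloaded region succeed, seeks
past it report that the caller must wait for the download to catch up.

```python
import io

class PartialDownload:
    """Hypothetical sketch: a file still being downloaded over HTTP.

    Seeks within the downloaded prefix work immediately; a seek past
    the download frontier returns False, meaning a real implementation
    would have to block until enough bytes have arrived.
    """

    def __init__(self, total_size):
        self.total_size = total_size
        self.buf = io.BytesIO()   # the local cache of received bytes
        self.downloaded = 0       # bytes received so far
        self.pos = 0              # current read position

    def feed(self, chunk):
        # Called as each chunk arrives from the HTTP stream.
        self.buf.seek(self.downloaded)
        self.buf.write(chunk)
        self.downloaded += len(chunk)

    def seek(self, offset):
        # True if the seek target is already downloaded; False means
        # the caller must wait (e.g. jumping to page 267 of a PDF).
        self.pos = offset
        return offset <= self.downloaded

    def read(self, n):
        # Reads only what is available past the current position.
        avail = max(0, self.downloaded - self.pos)
        self.buf.seek(self.pos)
        data = self.buf.read(min(n, avail))
        self.pos += len(data)
        return data
```

The local BytesIO buffer is exactly the cache the paragraph above says you
end up wanting anyway.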
To answer another point, "re-requesting" in HTTP is actually an extremely
cheap operation, roughly comparable to fetching separate blocks over NFS.
With HTTP/1.1, you can have multiple outstanding requests at a time, each
one doing byte ranges, without doing stupid things like reopening the TCP
socket over and over or doing a full network turnaround per request. If
your http server is pretty efficient and your client is written properly,
performance can be pretty good here.
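For what it's worth, here is a rough sketch of issuing several byte-range
requests over one persistent HTTP/1.1 connection using Python's stdlib
http.client. Note the caveat: this reuses the TCP socket (no reconnect per
request) but issues requests sequentially; genuinely overlapping pipelined
requests need lower-level socket handling than http.client exposes. Host
and path here are placeholders, not anything from the thread.

```python
from http.client import HTTPConnection

def range_header(start, end):
    # HTTP byte-range header; both offsets are inclusive.
    return {"Range": "bytes=%d-%d" % (start, end)}

def fetch_ranges(host, path, ranges):
    """Fetch several byte ranges of one resource over a single
    persistent HTTP/1.1 connection, avoiding a TCP handshake per
    request.  Requests are sequential, not truly pipelined."""
    conn = HTTPConnection(host)
    chunks = []
    for start, end in ranges:
        conn.request("GET", path, headers=range_header(start, end))
        resp = conn.getresponse()
        # A range-capable server answers 206 Partial Content.
        chunks.append(resp.read())
    conn.close()
    return chunks

# Usage (placeholder host/path):
#   fetch_ranges("example.com", "/doc.pdf", [(0, 1023), (4096, 8191)])
```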
The atomic-overwrite problem is a serious one (unlike NFS, HTTP URLs refer
to filenames, not inodes) and I don't see much solution to it if you're
doing multiple byte-range requests. However, this is really not that
important; people getting documents from an HTTP server probably are not
expecting those documents to change very often, so as long as your error
message when the ETag changes is helpful, they'll survive.
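A client can at least detect the atomic overwrite by remembering the ETag
from the first response and comparing it on every later byte-range reply;
on a mismatch, give up with a helpful error rather than serving a mix of
old and new file contents. (HTTP also offers the If-Range header for the
same purpose server-side.) A minimal sketch, with invented names:

```python
class DocumentChanged(Exception):
    """Raised when the server's ETag changes between range requests,
    i.e. the file was atomically replaced under us."""

def check_etag(expected, response_headers):
    # Compare the ETag remembered from the first request against a
    # later response; a mismatch means our cached ranges are stale.
    etag = response_headers.get("ETag")
    if expected is not None and etag is not None and etag != expected:
        raise DocumentChanged(
            "document changed on the server (ETag %s -> %s); "
            "please re-download" % (expected, etag))
    return etag or expected
```

The helpful error message is the point: the user learns the document
changed mid-transfer instead of getting silently corrupted output.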
More information about the xdg mailing list