Trash Can Question
alexl at redhat.com
Sat Aug 22 04:43:50 PDT 2009
On Sat, 2009-08-22 at 08:54 +0800, PCMan wrote:
> On Sat, Aug 22, 2009 at 3:23 AM, Andrea Francia <andrea at andreafrancia.it> wrote:
> >> Of course, we should all move towards all filenames being in UTF8, avoid
> >> creating non-UTF8 filenames, etc.
> This is not a real solution if you're going to support remote filesystems.
> On local machine, you can use any filename encoding you want.
> The remote servers, however, cannot be totally migrated to UTF-8 sometimes.
> So unless your vfs implementation can convert the encodings and only
> show UTF-8 to applications, handling non-UTF-8 will always be needed.
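The conversion idea PCMan describes can be sketched in a few lines of Python. This is a hypothetical illustration of a VFS layer that knows the remote encoding and exposes only UTF-8 to applications; it is not code from any actual vfs implementation:

```python
# Hypothetical sketch: a VFS backend converting a server's filename
# encoding so applications only ever see UTF-8.
raw = b"r\xe9sum\xe9.txt"            # bytes from a server using Latin-1

# If the backend knows (or is configured with) the remote encoding,
# the conversion is lossless:
name = raw.decode("latin-1")         # -> "résumé.txt"
utf8 = name.encode("utf-8")
assert utf8 == b"r\xc3\xa9sum\xc3\xa9.txt"
```

The hard case is when the remote encoding is unknown or inconsistent, which is why a byte-preserving fallback is still needed.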
I'm of course talking about local files here. Obviously you have to
treat remote systems differently; exactly how depends on the remote
system.
In gvfs, for instance, this is handled by treating all filenames as
byte strings in an undefined encoding, while backend-implemented ways
exist to get the "display name" of a file (and map it back) so that it
can be shown in a user interface.
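The same split between raw byte names and displayable strings can be illustrated in Python (a sketch of the idea, not gvfs code; gvfs itself does this in C via GIO):

```python
# A filename is fundamentally a byte string in undefined encoding.
raw = b"caf\xe9"                     # not valid UTF-8

# A lossy "display name" for showing in a UI:
display = raw.decode("utf-8", errors="replace")
assert display == "caf\ufffd"        # undecodable byte shown as U+FFFD

# A reversible form, so the string a UI hands back can be mapped to
# the exact original bytes (Python's surrogateescape handler):
reversible = raw.decode("utf-8", errors="surrogateescape")
assert reversible.encode("utf-8", errors="surrogateescape") == raw
```

The key point is that the display conversion and the round-trippable name are separate operations, which is why gvfs exposes them through the backend rather than guessing a single encoding.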
> Besides, even if the filesystem supports using \0 inside a filename, I
> don't believe there is a real-world file manager able to handle this.
> In theory it should be allowed, but it's an extremely rare use case
> which doesn't exist in the real world.
How could this be allowed? How would you pass such a filename to the
filesystem via e.g. the open() API? Any pathname passed via the POSIX
APIs is a C string, i.e. it ends at the first zero byte passed in.
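You can see this constraint from a high-level language too: because every path ultimately crosses the C API boundary as a NUL-terminated string, Python rejects an embedded zero byte before it ever reaches open(2):

```python
import os

# A pathname with an embedded NUL cannot be represented as a C string,
# so it can never be passed through the POSIX open() API. Python
# checks this up front and raises ValueError.
try:
    os.open("evil\0name", os.O_RDONLY)
except ValueError:
    print("rejected: NUL byte cannot appear in a pathname")
```

In plain C the result is arguably worse: open("evil\0name", ...) silently opens "evil", because the string ends at the first zero byte.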