[musl] Re: Tweaking the program name for <err.h> functions
Zack Weinberg
zack at owlfolio.org
Tue Mar 12 19:25:31 UTC 2024
On Tue, Mar 12, 2024, at 10:42 AM, Rich Felker wrote:
> On Tue, Mar 12, 2024 at 03:31:04PM +0100, Florian Weimer wrote:
>> * Zack Weinberg:
>>
>> > On Tue, Mar 12, 2024, at 9:54 AM, Florian Weimer wrote:
>> >>> Doing this would break many programs, such as:
>> >>> - most of coreutils, e.g. programs like ls, cat or head, since they
>> >>> always `close` their input and output descriptors (when they've
>> >>> written or read something) to make sure to diagnose all errors
>> >>
>> >> A slightly better way to do this is to do fflush (stdout) followed by
>> >> error checking on close (dup (fileno (stdout))).
>> >
>> > Does that actually report delayed write errors? As you have it,
>> > the close() just drops the fd created by the dup(), the OFD is
>> > still referenced by fd 1 and therefore remains open.
>>
>> I don't think the VFS close action is subject to reference counting.
>> Otherwise the current coreutils error checking wouldn't work because in
>> many cases, another process retains a reference to the OFD.
>
> It is. close only reports errors if it's the last fd referring to the
> ofd. It's an incredibly stupid design choice by NFS that mismatches
> expected fd behavior.
I do fully agree that this is a design error in NFS and in close(2) more generally [like all destructors, it should be _impossible_ for close to fail], but there is no realistic prospect of changing it, and I've been burned a few times by programs that didn't notice delayed write errors. The risk of missing an error because another process still has a ref to the OFD is real, but it comes up rarely enough that I don't think it should deter us from trying to get the "only one process has this file open" case right. (Think shell `>` redirection.)
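Concretely, here is a minimal sketch of the kind of end-of-program check I mean, assuming this process is the only holder of the stdout OFD (as with shell `>` redirection). This is my illustration, not coreutils' actual code:

#include <err.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    if (puts("important data") == EOF)
        err(1, "write to stdout");

    if (fflush(stdout) == EOF)
        err(1, "flush of stdout");

    /* Closing fd 1 itself (not a dup of it) is what surfaces delayed
       write errors such as ENOSPC or an NFS EIO, but only when this
       is the last reference to the open file description. */
    if (close(STDOUT_FILENO) == -1)
        err(1, "close of stdout");

    return 0;
}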
Also, I have written programs that expected fds 0 and/or 1 to be pipes, and closed them well before exiting as a cheap way to send one-shot events to the process on the other end of the pipe. I think that's a legitimate way to use inherited fds, and in some contexts (e.g. when the process on the other end used popen() to start things) it would be significantly harder if you had to use an fd > 2 instead.
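For anyone who hasn't run into that pattern, a minimal sketch; the fork/pipe scaffolding here is purely illustrative, not from any real program. The writer closes fd 1 as its one-shot "done" signal, and the reader treats EOF as the event:

#include <stdio.h>
#include <sys/types.h>
#include <unistd.h>

int main(void)
{
    int fds[2];
    if (pipe(fds) == -1) { perror("pipe"); return 1; }

    pid_t child = fork();
    if (child == -1) { perror("fork"); return 1; }

    if (child == 0) {
        /* Writer: make fd 1 the inherited pipe, as a parent using
           popen("...", "r") would have done for us. */
        dup2(fds[1], STDOUT_FILENO);
        close(fds[0]);
        close(fds[1]);

        /* ... produce whatever output we're going to produce ... */

        close(STDOUT_FILENO);   /* the cheap one-shot "done" event */

        /* ... keep running, possibly for a long time ... */
        _exit(0);
    }

    /* Reader: EOF on the pipe is the event. */
    close(fds[1]);
    char buf[64];
    while (read(fds[0], buf, sizeof buf) > 0)
        ;
    puts("writer closed its end");
    return 0;
}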
If there's something we can do to make it easier for programmers to do the Right Thing with closing standard fds, IMHO we should. For stdio, we perfectly well _could_ special-case fclose(), when the fileno is 0/1/2, to do the dup/close dance and leave both the FILE and the fd in a valid, stubbed state. (For stdin, open("/dev/null", O_RDONLY) is clearly an appropriate stub semantic; for stdout and stderr I am unsure whether "discard all writes silently" or "fail all writes" would be better.) I don't think it's safe to special-case close() for 0/1/2, though, not least because it's supposed to be atomic and async-signal-safe. And we should maybe give some thought to runtimes for languages other than C, which probably have their own buffering wrappers for fds 0/1/2 and might care about cross-language interop.
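To be concrete about the stdout half of that, here is a rough user-space sketch of one possible stub semantic, not a patch: the name stub_out_stdout() and the "discard writes silently" choice are mine, and a real fclose() would of course do this inside the library rather than on top of it.

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

/* Returns 0 on success, -1 if any write (including a delayed one
   reported by close) was lost.  Error handling abbreviated. */
static int stub_out_stdout(void)
{
    int status = 0;

    if (fflush(stdout) == EOF)
        status = -1;

    /* Close fd 1 itself so delayed write errors are reported;
       closing a dup of it would not, since the OFD stays open. */
    if (close(STDOUT_FILENO) == -1)
        status = -1;

    /* Park /dev/null on fd 1 with "discard writes silently" semantics,
       so the FILE and the fd number both remain usable.  Note the
       window where fd 1 is unassigned: one reason close() itself
       can't safely do any of this. */
    int nullfd = open("/dev/null", O_WRONLY);
    if (nullfd >= 0 && nullfd != STDOUT_FILENO) {
        dup2(nullfd, STDOUT_FILENO);
        close(nullfd);
    }

    return status;
}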
zw