[Libburn] Derek: Patch proposal after your recent decisions

Derek Foreman manmower at signalmarketing.com
Wed Feb 15 19:04:51 PST 2006


On Wed, 15 Feb 2006, scdbackup at gmx.net wrote:

> Hi,
>
> attached is my implementation proposal concerning
> your recent decisions about off_t, burn_source_file
> and burn_source_fd.
> It has been tested via  test/burniso  for regular file
> input as well as for piped stdin.
>
>
> Notes:
>
> -  #include <sys/types.h>  is back in libburn.h
>  I removed <stdint.h> because I believe you introduced it only
>  for the sake of the now dismissed int64_t.
>  If this is an error, I apologize. :)

Not an error.  Thanks.

> - The new class burn_source_fd is implemented in the source files
>  file.[ch]. That is because I feel unable to bring new files
>  into the binary build system of libburn.
>  Please split  file.[ch]  according to your style preferences.

Due to their similarity I'll probably leave them in the same file.

> - Both read methods of burn_source_file and of burn_source_fd
>  are now based on a generic wrapper function around read(2):
>   read_full_buffer()
>  Since this is neither a method of burn_source_file nor of
>  burn_source_fd it should probably be in a more neutral
>  source file.
>  Please move it as appropriate.

I think I'll make it static and keep it there; I don't foresee it being 
more useful in a general sense.  It can be moved later if it is.
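
Roughly speaking, such a wrapper is a retry loop around read(2) that
copes with short reads and EINTR.  A sketch of the idea (the exact
signature in the patch may differ):

    #include <unistd.h>
    #include <errno.h>

    /* Read until the buffer is full, EOF is reached or a real error
       occurs.  Returns the number of bytes read, or -1 on error. */
    static int read_full_buffer(int fd, unsigned char *buffer, int size)
    {
        int done = 0;

        while (done < size) {
            ssize_t ret = read(fd, buffer + done, size - done);

            if (ret == 0)                /* end of file or pipe */
                break;
            if (ret == -1) {
                if (errno == EINTR)      /* interrupted, just retry */
                    continue;
                return -1;               /* real error */
            }
            done += ret;
        }
        return done;
    }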

> - off_t is currently only 32 bit on conventional systems.
>  I feel unable to introduce external macro settings into the
>  binary build system of libburn. There seems to be no libburn
>  include file which is really included by all libburn sources.
>
>  It is not urgent to widen off_t to 64 bit unless you want
>  to introduce it quickly for clarity reasons. The code
>  proposal will work for CD sizes even with 32 bit off_t.
>  (My statement that one would need 64 bit to recognize files
>   >= 4GB was wrong. 32 bit fstat() returns -1 in such a case.
>   This suffices for now.)

Libburn can't currently handle more than 99 minutes in a track, since 
that's the CD limit.  That makes big files rather uninteresting at this 
point.  This issue will lie dormant as a comment in a file somewhere until 
the real need arises.
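
(For scale: 99 minutes is 99 * 60 * 75 = 445500 sectors, and at 2048
data bytes per sector that is roughly 912 MB of mode 1 data, which fits
comfortably in a 32 bit off_t.)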

>  Please remember to eventually document the need for the 64-bit
>  macros in a prominent place in the API documentation.

<nod>

> - The get_size() methods for now restrict their return value to
>  2 GB minus 800 MB.
>  This shall ensure that no return value is issued which leads
>  to an integer rollover after any reasonable CD size computation
>  is applied to it. (2GB-1 might roll over if you add 150 sectors)
>  I assume that 1.2 GB is larger than any valid CD image and that
>  no CD size computation adds more than 800 MB.
>  As soon as libburn is checked for off_t safety, this restriction
>  might get lifted. For now it seems better that way.
>
>  I considered setting an assert() on the file size. But if libburn
>  uses abort() as a regular means of communication with the calling
>  application, then it would need to offer appropriate signal handling.
>  (I could contribute some.)

Don't you dare trap SIGABRT in the front end! :)

If an assert is thrown, it's a mission-critical bug and someone needs to 
fix some code.  (Or the hardware acted in a really flaky way and we need 
to know how to work around it.  heh.)

It might be nice to have some kind of softer assert() that still reports 
line number and failed expression, but also makes sure the drive returns 
to a sane state.
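
Something along these lines, perhaps (sketch only; soft_assert_recover()
is just a stand-in for whatever would actually put the drive back into a
sane state):

    #include <stdio.h>

    void soft_assert_recover(void);  /* stand-in: return drive to sane state */

    #define SOFT_ASSERT(expr) \
        do { \
            if (!(expr)) { \
                fprintf(stderr, "libburn: check failed: %s (%s:%d)\n", \
                        #expr, __FILE__, __LINE__); \
                soft_assert_recover(); \
            } \
        } while (0)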

>  Whatever, the new get_size() implementation is in any case better
>  than the old one. I advise you to accept it for now, even if you
>  think that it is not optimal. (I think so myself, but will first
>  have to scan for the real int-off_t problems.)

All this code is in CVS now, but I hope to see a simpler get_size as soon 
as we add the right #defines to make off_t play 64 bit all the time.
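
For the record, that usually means AC_SYS_LARGEFILE in configure.ac, or
-D_FILE_OFFSET_BITS=64 in CPPFLAGS, so that off_t is 64 bit in every
translation unit.  With that in place, the fd variant could probably
shrink to little more than an fstat(), without the 2 GB minus 800 MB
cap.  Sketch only; the actual struct and function names in file.c will
differ:

    #include <sys/types.h>
    #include <sys/stat.h>

    /* Assumes a 64 bit off_t.  Returns 0 for pipes and other sources
       that have no fixed size. */
    static off_t fd_source_get_size(int fd)
    {
        struct stat stbuf;

        if (fstat(fd, &stbuf) == -1)
            return 0;
        if (!S_ISREG(stbuf.st_mode))
            return 0;                /* pipe, socket, device: unknown */
        return stbuf.st_size;
    }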

Thanks


Is the whitelist your only out-of-tree patch now?

