[Libburn] Re: Patch to enable arbitrary input fd (updated 2nd submission)

Derek Foreman manmower at signalmarketing.com
Sat Feb 11 10:42:38 PST 2006


On Sat, 11 Feb 2006, scdbackup at gmx.net wrote:

> Hi,
>
> now we got tangled asynchronously.

haha, I saw this come in just before I hit send. :)

> I just proposed avoiding off_t until we have
> cleaned up the code for it, and using a kind
> of dummy data type in the meantime:
>  typedef int burn_source_size_t;
>
> ("burn_size_t" was a typo, sigh)

I don't want to introduce a new typedef.

In general, I don't like typedefs because I feel they hide things from 
the programmer.  While it should be fairly common knowledge what an off_t 
is, with a new typedef a developer would either be running to the manuals 
or making potentially bad assumptions.
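
To make the trade-off concrete, here's what the two candidate declarations
look like side by side (a sketch only; the function names are made up, not
libburn API):

  #include <sys/types.h>   /* off_t */

  typedef int burn_source_size_t;

  burn_source_size_t source_size_a(void);  /* reader has to hunt down the
                                              typedef to learn its width
                                              and signedness */
  off_t source_size_b(void);               /* off_t is documented
                                              system-wide */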

> This would have the charming property of not
> changing the existing semantics of the code.
> It would let us clean up the (int) problems
> one by one and run tests with a new type
> definition while the stable code goes on with
> the conservative definition.
> After all, 2 GB is more than a CD can hold.

I'd like to do DVD in the near term, and 2GB isn't going to cut it there.

Also, we should really get compiler warnings about the int problems, so it 
should be a fairly quick thing to catch them all, no?
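
For what it's worth, the pattern I'd expect those warnings to catch looks
roughly like this (a sketch; whether it actually triggers depends on the
compiler and flags, e.g. gcc's -Wconversion in some versions):

  #include <sys/types.h>

  int chunk_len_bad(off_t remaining)
  {
      return remaining;         /* implicit off_t -> int truncation; this
                                   is exactly what we want warnings about */
  }

  int chunk_len_ok(off_t remaining)
  {
      if (remaining > 2048)
          remaining = 2048;
      return (int) remaining;   /* bounded first, then an explicit cast */
  }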

>> Derek:
>> Ok, I didn't realize this.  I'll take it as off_t if someone writes the
>> autoconf magic to properly handle it... :)
>> Dana:
>> I'm not sure exactly what we need: when we want defines, when we
>> don't, etc.  I don't really want to think about it now.  I'll discuss
>> it with you later, Derek, and yeah, I'll make it all go like magic.
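
For reference, I believe the magic mostly boils down to one line in
configure.ac (a sketch; untested):

  # configure.ac sketch:
  AC_SYS_LARGEFILE

Note that AC_SYS_LARGEFILE only puts _FILE_OFFSET_BITS into config.h for
our own build; it does nothing for applications that include libburn.h,
which is exactly the mixed-mode problem below.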
>
> Proper support for large files is necessary
> because those things are out there.
>
> We nevertheless have to check how the mix of a
> large-file-aware library with an ignorant
> application works. (I will provide the ignorance.)

This seems to me to be a "bug" in the usage of libburn, and therefore not 
an interesting case, no?

Don't we yet live in a world where we can just test in libburn.h that 
we've got 64-bit largefile stuff, and fail to compile everywhere else?
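
Something near the top of libburn.h like this would do it (a sketch, not
the actual header; the typedef name is made up):

  #include <sys/types.h>

  /* C89-style compile-time assertion: the array size goes negative, and
     compilation fails, wherever off_t is narrower than 64 bits. */
  typedef char libburn_requires_64bit_off_t[sizeof(off_t) >= 8 ? 1 : -1];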

> The two links I posted are the best explanations
> of the problem that I know of:
> http://www.suse.de/~aj/linux_lfs.html
> http://www.gnu.org/software/libc/manual/html_node/Feature-Test-Macros.html
>
> For my backup tool it was sufficient to define
> both macros, _FILE_OFFSET_BITS=64 and _LARGEFILE_SOURCE,
> and to use fseeko()/ftello() for random access.
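
Right; for reference, that recipe amounts to something like this (a
minimal sketch; the file name is made up):

  #define _FILE_OFFSET_BITS 64   /* must come before any system header */
  #define _LARGEFILE_SOURCE
  #include <stdio.h>
  #include <sys/types.h>

  int main(void)
  {
      FILE *f = fopen("/tmp/image.iso", "rb");
      off_t pos;

      if (f == NULL)
          return 1;
      /* seek past the 2 GB barrier; a plain fseek() with its long
         offset could not address this position on a 32-bit system */
      fseeko(f, (off_t) 3 * 1024 * 1024 * 1024, SEEK_SET);
      pos = ftello(f);
      fclose(f);
      return pos < 0;
  }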
>
>
> [pun mode on:]
>
>> double is quite frankly repulsive for this purpose. ;)
>>
>> What if I want to find out how many sectors are in the file using /?
>> What if I want to find out how far into a sector we've read using %?
>
>  (unsigned) (size/2048.0)
>  size - ((unsigned) (size/2048.0))*2048.0;
>
> Will be good for 42 bits (4 TB).
>
> [:pun mode off]

haha, yes, but we could've just used unsigned from the start then, no? :)

> No. We have enough possibilities to use an integer type.

Yes, now we just have to pick the right one.
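
And once it's a real integer type, the sector arithmetic from the pun
above is trivial (sketch):

  #include <sys/types.h>

  static off_t whole_sectors(off_t size)
  {
      return size / 2048;           /* full sectors in the source */
  }

  static int bytes_into_sector(off_t size)
  {
      return (int) (size % 2048);   /* 0..2047, always fits in an int */
  }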

I'm currently of the mind that we should disregard any system without 
largefile support, and force all users to enable it.

I'm quite interested in arguments to the contrary though...

