[Libburn] Re: Patch to enable arbitrary input fd (updated 2nd submission

scdbackup at gmx.net
Sat Feb 11 12:32:02 PST 2006


Hi,

> I don't want to introduce a new typedef.

It was just such a nice cheat. :(

(The problem becomes obvious once you ask which printf
 format specifier to use with off_t.)
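
(A minimal illustration of the dilemma and of the usual workaround,
 assuming a C99 compiler: there is no printf specifier for off_t,
 so one casts to a type that has one.)

 #include <stdio.h>
 #include <sys/types.h>

 int main(void)
 {
         off_t size = 2048;    /* some byte count */

         /* no standard printf specifier for off_t: cast explicitly */
         printf("size==%lld\n", (long long) size);
         return 0;
 }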


> I'd like to do DVD in the near term, and 2GB isn't going to cut it there.

2.5 years ago I made my first hack to use growisofs, and
two weeks later a user pointed me to the large file problem.

Will I have to make a cdrskin-ProDVD with license keys and all?
~:o)


> Also, we should really get compiler warnings about the int problems, so it 
> should be a fairly quick thing to catch them all, no?

We will see.
I confess I have not looked closely at the
output of make. It is so ... exotic.


> >
> > We have to check nevertheless how the mix of
> > large file aware library with an ignorant application
> > does work. (I will provide the ignorance.)
> 
> This seems to me to be a "bug" in the usage of libburn, and therefor not 
> an interesting case, no?

You will have to state in the API specs that
 _FILE_OFFSET_BITS=64 _LARGEFILE_SOURCE
are mandatory. Better yet, also set them in libburn.h,
like this:

#ifdef _FILE_OFFSET_BITS
#undef _FILE_OFFSET_BITS        /* override whatever the includer set */
#endif
#define _FILE_OFFSET_BITS 64    /* make off_t 64 bit */
#ifndef _LARGEFILE_SOURCE
#define _LARGEFILE_SOURCE 1     /* expose the large file interfaces */
#endif

This will give some hope for compiler or preprocessor
warnings if somebody tries to enforce incompatible
settings.
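
For example (hypothetical application code), defining a conflicting
value after the include should draw a macro redefinition warning
from gcc:

 #include <libburn/libburn.h>
 #define _FILE_OFFSET_BITS 32   /* gcc: "_FILE_OFFSET_BITS" redefined */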


> Yes, now we just have to pick the right one

int64_t or off_t.
Flip a coin and let's regret it in a few years. Not now :))

If you prefer to decide rationally:
weigh the clarity of int64_t against the older tradition
and POSIX faithfulness of off_t.

We need a decision. You are the boss. Do your duty.


> I'm currently of the frame of mind that we should disregard any system 
> without largefile support, and force all users to define it.

Reasonable. I just tested  stat.st_size  without large
file support: it returns a wrapped-around size together with
a function return value of -1.
But that return value is ignored in the current  file_size().

 $ cc -g -o t t.c
 $ t
 sizeof(off_t)==4
 stat()==-1  stat.st_size==402653184
 $ cc -g -D_FILE_OFFSET_BITS=64 -D_LARGEFILE_SOURCE -o t t.c 
 $ t
 sizeof(off_t)==8
 stat()==0  stat.st_size==4697620480

The test program t.c stats the file resulting from this command:
 $ dd bs=1M count=4480 if=/dev/zero of=/dvdbuffer/gorgo
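
(For reference, a minimal reconstruction of what t.c does; the
 program itself was not posted, so this is a sketch:)

 #include <stdio.h>
 #include <sys/types.h>
 #include <sys/stat.h>

 int main(void)
 {
         struct stat stbuf;
         int ret;

         ret = stat("/dvdbuffer/gorgo", &stbuf);
         printf("sizeof(off_t)==%d\n", (int) sizeof(off_t));
         printf("stat()==%d  stat.st_size==%lld\n",
                ret, (long long) stbuf.st_size);
         return 0;
 }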

The effect is quite unbearable, I'd say.
A 64-bit off_t is a must for a modern program.


> I'm quite interested in arguments to the contrary though...

Not from me. :))


> I was thinking a new struct burn_source_fd.  It might be that we need a 
> new read function too...

Is this really necessary?

A file descriptor is exactly the API which is already used
by burn_source_file. Everything is okay, even the close()
call (which, btw, seems not to actually happen).
The only enhancement is the possibility of a fixed size.

But of course, I would implement a new set of functions.
They would strikingly resemble those of burn_source_file.


> I think fixed_data_size isn't meaningful for all data sources, as the rest 
> of the info directly in burn_source is.  I don't think it's useful for 
> files on disc - I think we already have a way to select only part of an 
> input file in the track structures (for things like burning a single large 
> wav file as multiple tracks, etc).

It cannot do much harm, and maybe somebody will find a
good use for it with the other source types to come.

But ok, you decide.

That would be:
  struct burn_source_fd {burn_source_file minus subfd plus fixed_size};
  int fd_read(struct burn_source *s, unsigned char *buffer, int size);
  type_to_come fd_get_size(struct burn_source *s);
  void fd_free_data(struct burn_source *s);
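
(Sketched a bit more concretely; the field names are guesses at the
 surrounding conventions, not the real libburn code:)

 static int fd_read(struct burn_source *source,
                    unsigned char *buffer, int size)
 {
         struct burn_source_fd *fs = source->data;

         /* plain pass-through to the descriptor given at setup time */
         return read(fs->datafd, buffer, size);
 }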

To be implemented where? In source.c? In a new file fd.c?

Aw, Derek. Consider the ease of inserting just a little
  type_to_come fixed_data_size;
into
  struct burn_source_file
and of enhancing  file_size()  by a few lines, as sketched below.
Really. A new source class would be overkill.
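
(A sketch of those few lines; the struct fields other than subfd are
 guesses rather than the real layout, and type_to_come is resolved
 to off_t purely for illustration:)

 static off_t file_size(struct burn_source *source)
 {
         struct burn_source_file *fs = source->data;
         struct stat stbuf;

         if (fs->fixed_data_size > 0)
                 return fs->fixed_data_size;  /* predetermined size wins */
         if (fstat(fs->datafd, &stbuf) == -1)
                 return 0;                    /* stop ignoring the -1 */
         return stbuf.st_size;
 }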


> Is there a need for fixed_data_size on sources where functions like stat 
> can be used?  Does it really need to be universal?

I cannot claim any need.
I can recommend a maximum of orthogonality, though.

My limit is that I _need_ to use stdin as a source.
Anything else is negotiable.

The fixed size is not my favorite at all. But as long
as libburn needs to know the size at an early stage of
burning, it's the only way to perform my hideous stunts.
It is also the way to emulate cdrecord tsize=N, of course.
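
(For instance, with a hypothetical constructor, whose name and
 signature are invented here and exist nowhere in libburn yet,
 the equivalent of cdrecord tsize=N reading from stdin would be:)

 /* hypothetical: a source reading fd 0 with a predetermined size */
 struct burn_source *src;

 src = burn_fd_source_new(0 /* stdin */, (off_t) tsize_bytes);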

I now take care to propose nothing that would prevent
a future equivalent of  cdrecord -data -tao -  .
(My last constructor proposal would have. Grrr.)


> I think the return type should be changed to something other than int. 
> What that type should be is still negotiable as per the other thread.

type_to_come :)


> The real concern here tho, is what happens if the file is deleted after 
> the front end decides it's ok, and before we stat it.  We still need to 
> return some kind of error...

Not necessarily. I would put this accident into the
same category as truncation or alteration of the source
file in the further course of burning. The difference is
just a matter of a race condition.
The burn is spoiled, but libburn shows good ruggedness
in such a situation. I regularly use its underrun facility,
and it has never let me down so far.
I recently tested libburn's capability to burn a 0-byte CD
by mistake. Fully functional. Fully empty.

If we make a large off_t the type_to_come, then we need
not expect type-related trouble with fstat(), because
stat.st_size is off_t itself.
Let's make fstat()==-1 equivalent to stat.st_size==0. Basta.

If we make int64_t the type_to_come, we might experience
trouble when off_t some day jumps to 96 or 128 bit and
we are too old to change our code appropriately.


Ok, Derek, flip your coin. It's about time.


Have a nice day :)

Thomas


