[Libburn] Re: How to pipe stdin to CD-RW ?

scdbackup at gmx.net
Mon Dec 5 11:09:28 PST 2005


Hi,

I have meanwhile discovered the tail parameter of burn_track_define_data(),
so my proposal to introduce  read_file_and_pad()  and
 burn_source_add_padding()  is obsolete, of course.

Nevertheless, this call does not work as expected:

       burn_track_define_data(tr,0,(int) skin->padding,1,BURN_MODE1);

with  skin->padding == 52428800 .
The readable amount of data on the resulting CD is exactly the
150 MB which I set via the  fixed_size  parameter.
The time between the report of 150 MB and the end of burning
was not long enough to have written 50 MB.
(With my own source padding, 200 MB are readable:
 150 MB stream + 50 MB pad.)


I learned that my  read_file_and_pad()  is not needed because
 sector.c:get_bytes()  already does the fill-up of the data stream
for me.
The discovery of chained burn_source objects then led me to search
the track-related calls. If burn_track_define_data() added the
required amount (usually 300 kB) of readable 0s, then I could
give up my own workaround on that topic entirely.
Up to now, I can only give up  read_file_and_pad() .


In write.c I read:
/* if we're padding, we'll clear any current shortage.
   if we're not, we'll slip toc entries by a sector every time our
   shortage is more than a sector
XXX this is untested :)
*/
                if (!tar[i]->pad) {

Well, it looks like I have tested it now, and it did not pass
on the first try.


---------------------------------------------------------------
There is a stability threat if stdin is allowed as a data source
for burning. (It can be solved, I believe.)

 man 2 read  warns of partial buffer returns, which may occur
with pipes or similar data sources.
It is not specified whether objects reachable via filesystem
paths and stat(2) are allowed to return partial buffers.
Given the wide range of filesystems for Linux, one should not
make assumptions about that.

A look into sector.c:get_bytes() tells me that libburn expects
partially filled read buffers only at end of file.

Therefore a loop becomes necessary in file.c:file_read()
which collects a complete buffer from several partial reads.
I have implemented it as:

int file_read_pipesafe(struct burn_source *source,
                       unsigned char *buffer, int size)
{
        struct burn_source_file *fs = source->data;
        int ret = 0, summed_ret = 0;

        /* make safe against partial buffer returns */
        while (1) {
                ret = read(fs->datafd, buffer + summed_ret,
                           size - summed_ret);
                if (ret <= 0)
                        break;
                summed_ret += ret;
                if (summed_ret >= size)
                        break;
        }
        if (ret < 0 && summed_ret <= 0) /* error without any input */
                return ret;
        return summed_ret;
}

This will not impose much additional CPU load on full
buffer returns (3 ifs and 1 addition per >= 2048 bytes).

So I think it should replace  file_read()  entirely.


---------------------------------------------------------------
The mail riddle is not solved by this mail, which
arrived today (5 Dec 2005):

Date: Thu, 1 Dec 2005 20:07:24 -0600 (CST)
From: Derek Foreman <manmower at signalmarketing.com>
To: scdbackup at gmx.net
Cc: libburn at lists.freedesktop.org
...
Received: from S010600d0b75a99a6.wp.shawcable.net (EHLO janus)
    [24.79.91.125]
  by mx0.gmx.net (mx077) with SMTP; 05 Dec 2005 13:42:04 +0100
Received: from [192.168.2.2] (helo=gonopodium.signalmarketing.com)
        by janus.signalmarketing.com with esmtp (Exim 4.54)
        id 1Ei0KW-0006BI-VJ; Thu, 01 Dec 2005 20:07:21 -0600


Somewhere there is a Bermuda Triangle between janus.signalmarketing.com
and S010600d0b75a99a6.wp.shawcable.net.

But this is the mail which was addressed to me directly.
I still did not get the Cc: via libburn at lists.freedesktop.org.
But I do get all my own posts.

Derek, if you wrote more than two mails to me:
they did not show up in my mailbox or at
http://lists.freedesktop.org/archives/libburn/2005-December/thread.html

---------------------------------------------------------------


Have a nice day :)

Thomas


