[Mesa-dev] [PATCH shaderdb 3/3] run: shader program file created via GetProgramBinary (v3)
Dongwon Kim
dongwon.kim at intel.com
Wed Mar 14 22:52:06 UTC 2018
Thanks for the review, Ken.
I agree with most of your proposals.
I will upload another version shortly.
On Wed, Mar 14, 2018 at 03:10:25PM -0700, Kenneth Graunke wrote:
> On Monday, February 26, 2018 2:17:05 PM PDT Dongwon Kim wrote:
> > Extract the linked program binary to a file using glGetProgramBinary.
> > This file is intended to be loaded by glProgramBinary in the graphics
> > application running on the target system.
> >
> > To enable this feature, a new option, '--bin', has to be passed on
> > the command line.
> >
> > v2: 1. define MAX_LOG_LEN and use it as the size of gl log
> > 2. define MAX_PROG_SIZE and use it as the max size of extracted
> > shader_program
> > 3. out_file is now pointer allocated by strdup for the file name
> >
> > v3: 1. automatically using original shader test file's name + ".bin"
> > as a filename for program binary - better way to cover the case
> > with batch compilation of many shader test files in the same
> > directory
> > 2. remove --out=<file name> since providing a custom file name is
> > now unnecessary (due to v3-1.). Instead, add option "--bin", which
> > is basically a flag that enables getting the program binary as a file.
> > 3. Now it tries to get the length of binary by reading program's
> > GL_PROGRAM_BINARY_LENGTH_OES parameter
> >
> > Signed-off-by: Dongwon Kim <dongwon.kim at intel.com>
> > ---
> > run.c | 68 +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++----
> > 1 file changed, 64 insertions(+), 4 deletions(-)
> >
> > diff --git a/run.c b/run.c
> > index d066567..bbab5d9 100644
> > --- a/run.c
> > +++ b/run.c
> > @@ -52,6 +52,9 @@
> >
> > #define ARRAY_SIZE(x) (sizeof(x) / sizeof((x)[0]))
> >
> > +#define MAX_LOG_LEN 4096
> > +#define MAX_PROG_SIZE (10*1024*1024) /* maximum 10MB for shader program */
> > +
> > struct context_info {
> > char *extension_string;
> > int extension_string_len;
> > @@ -358,18 +361,20 @@ const struct platform platforms[] = {
> > enum
> > {
> > PCI_ID_OVERRIDE_OPTION = CHAR_MAX + 1,
> > + LOADABLE_PROGRAM_BINARY_OPTION,
> > };
> >
> > const struct option const long_options[] =
> > {
> > {"pciid", required_argument, NULL, PCI_ID_OVERRIDE_OPTION},
> > + {"bin", no_argument, NULL, LOADABLE_PROGRAM_BINARY_OPTION},
>
> This sounds like we're loading binaries. Can we call it
> GENERATE_PROGRAM_BINARY_OPTION instead?
Yeah, I will change this.
>
> > {NULL, 0, NULL, 0}
> > };
> >
> > void print_usage(const char *prog_name)
> > {
> > fprintf(stderr,
> > - "Usage: %s [-d <device>] [-j <max_threads>] [-o <driver>] [-p <platform>] [--pciid=<chip id of targetted gen arch>] <directories and *.shader_test files>\n",
> > + "Usage: %s [-d <device>] [-j <max_threads>] [-o <driver>] [-p <platform>] [--pciid=<pci id of targeted gen arch>] <directories and *.shader_test files>\n",
> > prog_name);
> > }
> >
> > @@ -450,6 +455,7 @@ main(int argc, char **argv)
> > int opt;
> > bool platf_overridden = 0;
> > bool pci_id_overridden = 0;
> > + bool enable_prog_bin = 0;
>
> Maybe generate_prog_bin here as well.
sure.
>
> >
> > max_threads = omp_get_max_threads();
> >
> > @@ -518,6 +524,9 @@ main(int argc, char **argv)
> > setenv("INTEL_DEVID_OVERRIDE", optarg, 1);
> > pci_id_overridden = 1;
> > break;
> > + case LOADABLE_PROGRAM_BINARY_OPTION:
> > + enable_prog_bin = 1;
> > + break;
> > default:
> > fprintf(stderr, "Unknown option: %x\n", opt);
> > print_usage(argv[0]);
> > @@ -858,18 +867,18 @@ main(int argc, char **argv)
> > }
> > } else if (type == TYPE_CORE || type == TYPE_COMPAT || type == TYPE_ES) {
> > GLuint prog = glCreateProgram();
> > + GLint param;
>
> So...putting this here means that you're not going to support generating
> program binaries for SSO-based programs. That seems a bit unfortunate...
>
I can consider this later.
> >
> > for (unsigned i = 0; i < num_shaders; i++) {
> > GLuint s = glCreateShader(shader[i].type);
> > glShaderSource(s, 1, &shader[i].text, &shader[i].length);
> > glCompileShader(s);
> >
> > - GLint param;
> > glGetShaderiv(s, GL_COMPILE_STATUS, &param);
> > if (unlikely(!param)) {
> > - GLchar log[4096];
> > + GLchar log[MAX_LOG_LEN];
> > GLsizei length;
> > - glGetShaderInfoLog(s, 4096, &length, log);
> > + glGetShaderInfoLog(s, sizeof(log), &length, log);
>
> It would be nice to make a helper function for getting the info log and
> printing an error, since you've now got it twice. Should probably be a
> separate patch (and include the MAX_LOG_LEN change).
>
I will work on it (as a separate patch).
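For what it's worth, one possible shape for that helper (a minimal, self-contained sketch: the GL types and the log getter are stubbed out here so it stands alone, and names like print_info_log are illustrative, not what the patch will necessarily use):

```c
#include <stdio.h>
#include <string.h>

#define MAX_LOG_LEN 4096

/* Minimal stand-ins for the GL types so the sketch compiles alone. */
typedef unsigned int GLuint;
typedef int GLsizei;
typedef char GLchar;

typedef void (*get_info_log_fn)(GLuint obj, GLsizei max_len,
                                GLsizei *length, GLchar *log);

/* Stub standing in for glGetShaderInfoLog / glGetProgramInfoLog;
 * in run.c the real GL entry points would be passed instead. */
static void fake_get_info_log(GLuint obj, GLsizei max_len,
                              GLsizei *length, GLchar *log)
{
    const char *msg = "0:1(1): error: syntax error";
    (void)obj;
    snprintf(log, (size_t)max_len, "%s", msg);
    *length = (GLsizei)strlen(log);
}

/* Shared helper: both the compile and the link failure paths call
 * this with the matching getter, the object id, and a description
 * of what failed. */
static void print_info_log(get_info_log_fn get_log, GLuint obj,
                           const char *name, const char *action)
{
    GLchar log[MAX_LOG_LEN];
    GLsizei length;

    get_log(obj, (GLsizei)sizeof(log), &length, log);
    fprintf(stderr, "ERROR: %s failed to %s:\n%s\n", name, action, log);
}
```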
> >
> > fprintf(stderr, "ERROR: %s failed to compile:\n%s\n",
> > current_shader_name, log);
> > @@ -879,6 +888,57 @@ main(int argc, char **argv)
> > }
> >
> > glLinkProgram(prog);
> > +
> > + glGetProgramiv(prog, GL_LINK_STATUS, &param);
> > + if (unlikely(!param)) {
> > + GLchar log[MAX_LOG_LEN];
> > + GLsizei length;
> > + glGetProgramInfoLog(prog, sizeof(log), &length, log);
> > +
> > + fprintf(stderr, "ERROR: failed to link program:\n%s\n",
> > + log);
> > + } else if (enable_prog_bin) { /* generate program binary if enable_prog_bin == true */
> > + char *prog_buf;
> > + GLenum format;
> > + GLsizei length = 0;
> > + FILE *fp;
> > +
> > + glGetProgramiv(prog,
> > + (type == TYPE_ES) ? GL_PROGRAM_BINARY_LENGTH_OES :
> > + GL_PROGRAM_BINARY_LENGTH, &length);
>
> GL_PROGRAM_BINARY_LENGTH_OES and GL_PROGRAM_BINARY_LENGTH are identical
> hex values (intentionally). Just use GL_PROGRAM_BINARY_LENGTH and skip
> the type == TYPE_ES check.
>
sure
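For reference, both tokens are defined with the same hex value in the GL headers (copied below from glext.h / gl2ext.h), so either spelling works for both context types:

```c
/* The two enums resolve to the same value by design, so the
 * type == TYPE_ES check adds nothing. */
#define GL_PROGRAM_BINARY_LENGTH     0x8741
#define GL_PROGRAM_BINARY_LENGTH_OES 0x8741
```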
> > +
> > + if (glGetError() != GL_NO_ERROR)
> > + length = MAX_PROG_SIZE;
>
> Surely you don't want to truncate the program if you exceed
> MAX_PROG_SIZE? It doesn't look like this is necessary now that you're
> malloc'ing an appropriately sized buffer, so we can probably just drop
> it altogether.
>
I was just thinking we might need to cover the case where the length is
not correctly retrieved while the shader program is still valid. I
actually don't think this is necessary now. I will drop it.
> > +
> > + prog_buf = (char *)malloc(length);
> > + glGetProgramBinary(prog, length, &length, &format, prog_buf);
> > +
> > + if (glGetError() != GL_NO_ERROR) {
>
> Maybe add || !prog_buf to guard against malloc failure?
>
> > + fprintf(stderr, "ERROR: failed to get Program Binary\n");
> > + } else {
> > + char *out_filename;
> > +
> > + out_filename = malloc(strlen(current_shader_name) + 5); /* ".bin" + NUL */
> > +
> > + printf("%s\n", current_shader_name);
> > + strncpy(out_filename, current_shader_name, strlen(current_shader_name) + 1);
> > + out_filename = strcat(out_filename, ".bin");
> > +
> > + fp = fopen(out_filename, "wb");
> > + fprintf(stdout, "\n");
> > + fprintf(stdout, "Binary program has been successfully generated.\n\n");
> > + fprintf(stdout, "Writing to file.....\n");
> > + fprintf(stdout, "===============================================\n");
> > + fprintf(stdout, "File Name : \"%s\"\n", out_filename);
> > + fprintf(stdout, "Format : %d\n", format);
> > + fprintf(stdout, "Size : %d Byte\n", length);
> > + fprintf(stdout, "===============================================\n");
>
> I suspect this output is going to be completely jumbled if you run with
> concurrency. If you're only supporting this with -1, that's probably
> OK...but maybe it would be better to just use a single fprintf?
>
ok will change this.
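Something along these lines, maybe (a sketch, not the final patch; the helper names are illustrative, and format/length stand in for the values returned by glGetProgramBinary). It emits the whole report with a single fprintf so concurrent threads cannot interleave individual lines, and it also sizes the filename buffer for the ".bin" suffix plus the NUL terminator:

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Build "<shader_test>.bin"; caller frees the returned string.
 * +4 bytes for ".bin" and +1 for the terminating NUL. */
static char *binary_filename(const char *shader_name)
{
    char *out = malloc(strlen(shader_name) + 5);
    if (!out)
        return NULL;
    strcpy(out, shader_name);
    strcat(out, ".bin");
    return out;
}

/* One fprintf with a single format string, so the report is written
 * as one unit even when multiple threads print concurrently. */
static void report_binary_written(FILE *stream, const char *filename,
                                  int format, int length)
{
    fprintf(stream,
            "\nBinary program has been successfully generated.\n\n"
            "Writing to file.....\n"
            "===============================================\n"
            "File Name : \"%s\"\n"
            "Format    : %d\n"
            "Size      : %d Byte\n"
            "===============================================\n",
            filename, format, length);
}
```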
> > + fwrite(prog_buf, sizeof(char), length, fp);
> > + fclose(fp);
> > + free(out_filename);
> > + }
> > + free(prog_buf);
> > + }
> > glDeleteProgram(prog);
> > } else {
> > for (unsigned i = 0; i < num_shaders; i++) {
> >
>