[cairo] glyph caching bug

TOKUNAGA Hiroyuki tkng at xem.jp
Tue Dec 28 14:57:56 PST 2004


On Tue, 28 Dec 2004 12:11:15 -0800
Keith Packard <keithp at keithp.com> wrote:

> > +void
> > +_cairo_cache_lock (cairo_cache_t *cache)
> > +{
> > +    cache->locked = 1;
> > +}
> Should this be a counting lock instead?  iow, should it read:
>   _cairo_cache_lock (cairo_cache_t *cache) { ++cache->locked; }
> This way multiple agents (possibly multiple threads) could request
> that  the cache be locked. 

Yes, it should be. I forgot about multiple agents...
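To make the counting-lock idea concrete, here is a minimal sketch. The struct layout, the `_cairo_cache_may_shrink` helper, and its name are my own invention for illustration, not cairo's actual API; only the increment-on-lock behavior comes from Keith's suggestion.

```c
#include <assert.h>

/* Illustrative sketch of a counting cache lock.  The "locked" field
 * counts how many agents currently hold the lock; the cache may evict
 * entries only when the count is zero.  Names are hypothetical. */
typedef struct {
    unsigned int locked;
} cairo_cache_t;

static void
_cairo_cache_lock (cairo_cache_t *cache)
{
    ++cache->locked;
}

static void
_cairo_cache_unlock (cairo_cache_t *cache)
{
    assert (cache->locked > 0);
    --cache->locked;
}

/* Eviction is allowed only while no agent holds the lock. */
static int
_cairo_cache_may_shrink (const cairo_cache_t *cache)
{
    return cache->locked == 0;
}
```

With a plain flag, the second agent's unlock would release the lock out from under the first; with a counter, the cache stays pinned until every agent has unlocked.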

> Alternatively, should we lock individual cache  elements?  That might
> be more complicated, but would avoid having the  cache explode.  You
> would fetch a cache element, and then at some future  point you would
> release it so that it could be destroyed.

I tentatively implemented a per-entry lock, but encountered another
problem: if all live entries are locked, _random_live_entry goes into an
infinite loop. I think locking individual cache elements is hard to
implement correctly.
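The infinite-loop hazard can be shown with a toy version of the eviction picker. This is not cairo's code; the entry struct and function name are invented, but the shape of the bug is the same: a "pick a random unlocked entry" loop never terminates once every live entry is locked, unless the candidates are counted first.

```c
#include <stddef.h>
#include <stdlib.h>

/* Hypothetical sketch of random eviction under per-entry locks. */
typedef struct {
    int live;
    int locked;
} entry_t;

static entry_t *
random_unlocked_live_entry (entry_t *entries, size_t n)
{
    size_t candidates = 0;
    for (size_t i = 0; i < n; i++)
        if (entries[i].live && !entries[i].locked)
            candidates++;

    /* Without this guard, the retry loop below spins forever when
     * every live entry is locked -- the problem described above. */
    if (candidates == 0)
        return NULL;

    for (;;) {
        entry_t *e = &entries[rand () % n];
        if (e->live && !e->locked)
            return e;
    }
}
```

Even with the guard, the caller must now handle the "nothing evictable" case, which is exactly the complication that makes per-entry locking unattractive.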

> Hmm.  Now I'm a bit confused -- it seems like the cache entries should
> be  regenerated as needed; if destroyed, they should be recreated at
> each step  in the process.  Is there some place in the system which
> assumes the cache  entries are still valid across multiple cache
> fetches?

I think it is difficult to regenerate a cache entry as needed. Knowing
'which entries are needed now' would be equivalent to locking individual
cache elements. I want to propose another model: create entries
explicitly.

In the current model, each entry is created implicitly when the lookup
function is called. IMO, the problem with this model is that locking the
cache (or a cache entry) properly is hard to implement.

If entries were created explicitly, there would be no need to call the
create_entry/destroy_entry functions inside the lookup function. We
would be able to write code like this:

  for (i = 0; i < length; i++) {
      if (lookup (keys[i], &entries[i]) == NOT_FOUND)
          entries[i] = create_entry (keys[i]);
  }

  /* Use multiple entries. */

  for (i = 0; i < length; i++)
      cache_register_entry (keys[i], entries[i]);

If we consider multiple agents, cache locking is still needed. But
locking the whole cache is enough; we don't need to consider locking
each cache entry. Since the lookup function doesn't create new entries,
the cache would not explode.
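A self-contained toy version of this explicit-create model might look like the following. The cache layout, NOT_FOUND/FOUND values, and the `key * 2` stand-in for real glyph rendering are all invented for illustration; the point is only that lookup never allocates, so creation and registration are separate, caller-controlled steps.

```c
#include <stddef.h>
#include <string.h>

/* Hypothetical explicit-create cache: lookup only searches,
 * create_entry only builds, register_entry only inserts. */
#define NOT_FOUND (-1)
#define FOUND 0
#define MAX_ENTRIES 16

typedef struct { int key; int value; int used; } entry_t;
typedef struct { entry_t slots[MAX_ENTRIES]; } cache_t;

static int
lookup (cache_t *c, int key, entry_t **out)
{
    for (size_t i = 0; i < MAX_ENTRIES; i++) {
        if (c->slots[i].used && c->slots[i].key == key) {
            *out = &c->slots[i];
            return FOUND;
        }
    }
    return NOT_FOUND;
}

static entry_t
create_entry (int key)
{
    /* key * 2 is a stand-in for real (expensive) glyph rendering. */
    entry_t e = { key, key * 2, 1 };
    return e;
}

static void
register_entry (cache_t *c, entry_t e)
{
    for (size_t i = 0; i < MAX_ENTRIES; i++) {
        if (!c->slots[i].used) {
            c->slots[i] = e;
            return;
        }
    }
    /* Cache full: a real implementation would evict here.  This is
     * safe because the caller has already finished using the entries
     * it created before registering them. */
}
```

Eviction can only happen inside register_entry, after the caller is done with its borrowed entries, so a single whole-cache lock around the lookup/use/register window is sufficient.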

What do you think about this idea?


tkng at xem.jp