[cairo] glyph caching bug
TOKUNAGA Hiroyuki
tkng at xem.jp
Wed Dec 29 14:48:31 PST 2004
Hi,
On Tue, 28 Dec 2004 15:35:29 -0800
Keith Packard <keithp at keithp.com> wrote:
> Around 7 o'clock on Dec 29, TOKUNAGA Hiroyuki wrote:
>
> > I implemented lock per entry tentatively, but encountered another
> > problem. If all live entries are locked, _random_live_entry goes
> > into an infinite loop. I think locking individual cache elements is
> > hard to implement.
>
> It seems like this can be avoided by allowing the _random_live_entry
> function to fail and have the cache reduction loop terminate at that
> point. We'll want to recode the _random_live_entry function to walk
> through the entire cache instead of the current random probe technique
> though. We could also keep track of the number of locked entries and
> stop purging the cache when that number reached the number of live
> entries.
I supposed so... but now I've run into another problem. What should we
do if _random_live_entry fails? _random_live_entry is called when we
want to destroy a cache entry to keep the cache within its size limit.
So if this function fails, we wouldn't be able to keep the cache size
bounded. If this problem is resolvable, then a lock per cache entry is
better than my previous proposal, I think.
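Just to make sure I understand the suggestion, here is a rough sketch
of a shrink loop that simply gives up when every live entry is locked
(all the names and structures here are made up for illustration, not
the actual cairo internals):

```c
#include <stddef.h>

/* Hypothetical cache types -- stand-ins, not the real cairo code. */
typedef struct entry {
    int locked;
    int live;
    size_t size;
    struct entry *next;
} entry_t;

typedef struct {
    entry_t *entries;
    size_t used;
    size_t max_size;
} cache_t;

/* Walk the whole table instead of random probing; return NULL when
 * every live entry is locked, so the caller can terminate. */
static entry_t *
_random_live_entry (cache_t *cache)
{
    entry_t *e;
    for (e = cache->entries; e != NULL; e = e->next)
	if (e->live && !e->locked)
	    return e;
    return NULL;
}

static void
_cache_shrink (cache_t *cache)
{
    while (cache->used > cache->max_size) {
	entry_t *victim = _random_live_entry (cache);
	if (victim == NULL)
	    break;	/* all live entries locked: overshoot temporarily */
	cache->used -= victim->size;
	victim->live = 0;
    }
}
```

With this shape the cache can exceed max_size while many entries are
locked, which is exactly the "can't keep cache size" problem I mean.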
> What I'm trying to avoid is forcing the whole cache to be locked
> during all rendering operations; this will cause a large spike in
> memory usage, instead of limiting memory usage to the maximum of the
> cache size and the memory necessary to perform any single operation.
A little while ago I wondered: what happens if we use more glyphs at
once than the maximum cache size allows? A cache size limit is of
course necessary, but I think the limit shouldn't restrict the number
of glyphs that can be used at one time. (Such a thing may be
vanishingly rare or never happen... Just a thought.)
> > I think it is difficult to regenerate a cache entry as needed. To
> > know 'which entries are needed now' would be equivalent to lock
> > individual cache elements. I want to propose another model, create
> > entry explicitly.
>
> Right -- because of multiple threads sharing the same cache, you have
> to lock when you want to use any entries. Except for this, you could
> assert that the last cache value fetched was always valid and make
> sure the code consistently used only a single cache value at a time.
>
> > If entry was created explicitly, there's no need to call
> > create_entry/destroy_entry function in lookup function. We'll be
> > able to write code like this.
> >
> > for (i = 0; i < length; i++) {
> >     if (lookup (keys[i], &entries[i]) == NOT_FOUND)
> >         entries[i] = create_entry (keys[i]);
> > }
>
> This doesn't appear to handle the case of multiple threads looking up
> the same element in the cache simultaneously -- they'll both miss,
> and then they'll both create the element and register it. I think we
> have to atomically look elements up in the cache, creating them if
> missing.
Yes, such a thing will happen. One entry will be registered, the other
will not be. But both created cache entries must be identical, so it
wouldn't be a problem for the consistency of the cache. From the point
of view of performance, though, it is a problem.
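To avoid the duplicated work entirely, the lookup and the create would
have to happen under the same lock. A rough pthread-based sketch (all
names are hypothetical stand-ins, not the actual cairo API):

```c
#include <pthread.h>
#include <stdlib.h>

/* Hypothetical cache node -- stand-in for a real glyph entry. */
typedef struct node {
    int key;
    int value;
    struct node *next;
} node_t;

typedef struct {
    pthread_mutex_t mutex;
    node_t *head;
} cache_t;

/* Atomic lookup-or-create: holding the mutex across both steps means
 * two threads can never both miss and both create the same entry. */
static node_t *
cache_lookup_or_create (cache_t *cache, int key)
{
    node_t *n;

    pthread_mutex_lock (&cache->mutex);

    /* Another thread may have inserted this key already. */
    for (n = cache->head; n != NULL; n = n->next)
	if (n->key == key)
	    goto done;

    /* Miss: create and insert while still holding the lock. */
    n = malloc (sizeof (node_t));
    if (n == NULL)
	goto done;	/* caller sees NULL on allocation failure */
    n->key = key;
    n->value = key * 2;	/* stand-in for real glyph rendering */
    n->next = cache->head;
    cache->head = n;

done:
    pthread_mutex_unlock (&cache->mutex);
    return n;
}
```

The downside is the one Keith points out above: the (possibly
expensive) entry creation then runs with the whole cache locked.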
Regards,
--
TOKUNAGA Hiroyuki
tkng at xem.jp
http://kodou.net/