[Bug 19605] Gabble should cache client capabilities
bugzilla-daemon at freedesktop.org
Fri Mar 26 19:16:02 CET 2010
http://bugs.freedesktop.org/show_bug.cgi?id=19605
Will Thompson <will.thompson at collabora.co.uk> changed:
  AssignedTo: will.thompson at collabora.co.uk
          ->  telepathy-bugs at lists.freedesktop.org

  URL: http://git.collabora.co.uk/?p=user/smcv/telepathy-gabble-smcv.git;a=shortlog;h=refs/heads/caps-cache
    -> http://git.collabora.co.uk/?p=user/wjt/telepathy-gabble-wjt.git;a=shortlog;h=refs/heads/caps-cache-0.9
--- Comment #10 from Will Thompson <will.thompson at collabora.co.uk> 2010-03-26 11:16:00 PST ---
(In reply to comment #8)
> Review up to 1c2bde:
>
> In fd22d6:
>
> * gabble_caps_cache_dup_share() dups the object if not NULL, but
> there's no weak pointer to set the singleton variable back to NULL
> after the last unref.
> Elsewhere in the code you use the "dup .. use .. unref" pattern, which means
> that unless a ref was leaked before, you'll have the singleton variable
> pointing to a destroyed object. That is, unless you explicitly
> take care of that somewhere in the initialisation, but I didn't
> see it anywhere.
shared_cache owns a hard ref, not a weak ref. gabble_caps_cache_free_shared()
drops that ref, and sets shared_cache to NULL; it's only called during CM
finalization to make it possible to check that no other refs are leaked.
So, I don't believe this is a bug.
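For reference, the singleton pattern being described looks roughly like this
(a minimal sketch, assuming the GabbleCapsCache GObject and its _new()
constructor from the branch; the real code may differ in detail):

    /* Sketch only, not the actual implementation. */
    static GabbleCapsCache *shared_cache = NULL;

    GabbleCapsCache *
    gabble_caps_cache_dup_shared (void)
    {
      if (shared_cache == NULL)
        shared_cache = gabble_caps_cache_new (); /* shared_cache owns a hard ref */

      return g_object_ref (shared_cache); /* caller gets its own ref */
    }

    void
    gabble_caps_cache_free_shared (void)
    {
      if (shared_cache != NULL)
        {
          g_object_unref (shared_cache); /* drop the singleton's hard ref */
          shared_cache = NULL;
        }
    }

Because shared_cache itself holds a ref, a caller's "dup .. use .. unref"
cycle can never drop the object to zero refs behind our back; only
gabble_caps_cache_free_shared() can.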
> A few SQLite-related comments for 11713f:
>
> * Since we don't care if the cache becomes corrupt due to power/system failure,
> we can safely turn off syncing and the on-disk journal. This will reduce disk
> activity while connecting (when presumably we have the highest number of writes
> to the database, since even a cache hit is a disk write to touch the
> entry). This can be done with PRAGMAs immediately after opening the db.
Okay, I've added a couple of pragmata that look right.
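Concretely, they're along these lines (a fragment, assuming <sqlite3.h> and
that the handle is in a local called db; errors are deliberately ignored,
since a broken cache can simply be rebuilt):

    /* Don't fsync() after writes; a power failure may corrupt the
     * cache, which we've decided is acceptable. */
    sqlite3_exec (db, "PRAGMA synchronous = OFF;", NULL, NULL, NULL);
    /* Keep the rollback journal in memory rather than on disk. */
    sqlite3_exec (db, "PRAGMA journal_mode = MEMORY;", NULL, NULL, NULL);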
> * In caps_cache_insert(), instead of ignoring the error from the INSERT, we can
> use INSERT OR REPLACE ... which will, as a side effect, save up-to-date
> caps for the node, and also touch the timestamp so it won't be reaped
> before its time.
We shouldn't get that far if the entry is already in the hash; that code path
only occurs when we didn't get a hit from gabble_caps_cache_lookup(), and thus
discoed the peer. gabble_caps_cache_lookup() does touch the row (UPDATE-ing it,
setting timestamp = now) when it gets a hit; maybe there's a way to combine
that with the SELECT?
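For the record, I read the suggestion as something like this (a sketch only;
caps_cache_upsert and the table/column names are made up for illustration):

    #include <time.h>
    #include <sqlite3.h>

    /* INSERT OR REPLACE refreshes both the caps and the timestamp when a
     * row for this node already exists, instead of failing the INSERT. */
    static void
    caps_cache_upsert (sqlite3 *db, const char *node, const char *caps)
    {
      sqlite3_stmt *stmt;

      sqlite3_prepare_v2 (db,
          "INSERT OR REPLACE INTO capabilities (node, caps, timestamp) "
          "VALUES (?, ?, ?)", -1, &stmt, NULL);
      sqlite3_bind_text (stmt, 1, node, -1, SQLITE_TRANSIENT);
      sqlite3_bind_text (stmt, 2, caps, -1, SQLITE_TRANSIENT);
      sqlite3_bind_int64 (stmt, 3, (sqlite3_int64) time (NULL));
      sqlite3_step (stmt);
      sqlite3_finalize (stmt);
    }

But as above, I don't think we need it: we only INSERT after a cache miss.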
I've also added a couple more patches: one fixes the part of the test which
tries to get a particular entry GCed so that it actually guarantees that
happens (which did involve a whiteboard!), and another removes a useless call
to sqlite3_column_bytes(). Re-review, anyone?