From patrick.ohly at gmx.de Thu Jun 17 11:48:27 2010
From: patrick.ohly at gmx.de (Patrick Ohly)
Date: Thu, 17 Jun 2010 13:48:27 +0200
Subject: local sync
Message-ID: <1276775305.30675.336.camel@pohly-mobl1.ikn.intel.com>

Hello!

Let me write down some thoughts on local sync. By that I mean
synchronization of databases which are accessed locally via
SyncEvolution backends. This is in contrast to a "normal" sync, where
one side is covered by a backend and the other side is a SyncML peer
(client or server).

Right now, the only way to do this is to set up syncevo-dbus-server +
syncevo-http-server on one side and "syncevolution --daemon=no" on the
other. Two peers need to be configured.

We could simplify the configuration as follows. In the normal @default
context we define the main databases to be used, for example Evolution.
No change compared to what is done so far. For those not familiar with
the concept, a "context" holds a set of peer-independent source
settings (like which backend and database are used for each source,
such as "addressbook").

In another context, say @xmlrpc, we define sources that access
different data via some other backend, like the XMLRPC interface.
Let's assume we have one source defined, like "calendar". Now we
configure one peer "xmlrpc" in the @default context:

  syncURL = local
  peerIsClient = 1

  [calendar]
  URI = calendar@xmlrpc

When we run a sync, we treat one side as the "SyncML server" and the
other as the "SyncML client". For performance reasons (more operations
are done there) it is desirable to have the truly local data used by
the server side.

In terms of logging, one option is to use one session directory like
this:

  xmlrpc-2010-06-17-13-00:
    calendar.after
    calendar.before
    calendar@xmlrpc.after
    calendar@xmlrpc.before
    syncevolution-log.html
    syncevolution-log_trm001_001_outgoing.xml
    ...
    syncevolution-log@xmlrpc_trm001_001_outgoing.xml
    status.ini
    status@xmlrpc.ini

This requires a lot of changes, both for writing the logs like this and
for accessing the information about the @xmlrpc status and backups. It
probably cannot be done without D-Bus API extensions and new command
line options. The simpler solution is to have two session directories,
one for the server (xmlrpc-2010-06-17-13-00) and one for the client
(@xmlrpc-2010-06-17-13-00).

When running a sync with the @xmlrpc calendar source as client, some
way of storing persistent state is needed. Normally this is done via
the per-peer .internal.ini file. The @xmlrpc context doesn't have
per-peer directories, but we could put the file into the @default
configuration tree as
peers/xmlrpc/sources/calendar/.internal-calendar@xmlrpc.ini, where
calendar@xmlrpc is the URI of the pseudo-peer. That allows changing
that parameter without accidentally running a sync with the wrong
change tracking state. Removing the xmlrpc@default config would also
remove this file.

Now regarding actually running the two sync sessions: one option is to
put the server into syncevo-dbus-server (as it is now) and the client
into a forked process.
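To make the two-process idea concrete, here is a minimal sketch in
Python (illustration only, not SyncEvolution code; the socketpair IPC
channel and the message strings are my own assumptions): the parent
plays the "SyncML server" side, the forked child plays the "SyncML
client" side, serialized messages travel over the socket, and the
parent can tell a crash from a clean exit via waitpid().

```python
# Illustration of the proposed two-process local sync: parent = SyncML
# server side (truly local data), forked child = SyncML client side
# (e.g. the @xmlrpc context). Messages are serialized and exchanged
# over a socketpair; names and payloads here are made up.
import os
import socket

parent_sock, child_sock = socket.socketpair()

pid = os.fork()
if pid == 0:
    # Child process: the client side.
    parent_sock.close()
    request = child_sock.recv(4096).decode()
    child_sock.sendall(("client reply to: " + request).encode())
    child_sock.close()
    os._exit(0)

# Parent process: the server side.
child_sock.close()
parent_sock.sendall(b"serialized SyncML message")
reply = parent_sock.recv(4096).decode()
print("server received:", reply)
parent_sock.close()

# Reap the child; a crash on the other side would show up here as
# termination by signal instead of a clean exit.
_, status = os.waitpid(pid, 0)
print("child exited cleanly:",
      os.WIFEXITED(status) and os.WEXITSTATUS(status) == 0)
```

The same skeleton works regardless of which side ends up in the forked
process; the point is only that the message exchange has to go through
an explicit serialization step once the two sides no longer share an
address space.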
The advantages of this solution are:

* It allows using backends with conflicting library requirements
  (think KDE and Evolution with different setups of glib/libical,
  etc.).
* A crash on one side can be detected by the other.
* There is no need to get rid of global variables (there are a few,
  related to logging and to finding the context inside Synthesis
  plugins).
* There is no need to change signal and event handling.

Drawbacks:

* We are forced to serialize messages and exchange them via IPC
  mechanisms (sockets, shared memory); inside the same address space we
  at least have the option of skipping SyncML message encoding/decoding
  if and when the libsynthesis engine gets refactored to pass changes
  in its internal format directly back and forth.
* We cannot write one common log file.

The other option is to run everything inside the same process and do a
global context switch between the two sides.

Currently I favor the idea of using two processes, mostly because it
requires less rewriting of code. Loading conflicting backends would
require further changes, because right now *all* backends are loaded in
order to register them. Splitting each backend into a general-purpose
registry library and the actual implementation would be possible, or we
could go for the more traditional method of defining backends in text
files.

I'll let this idea sit for a while, but might come back to it soon.

-- 
Best Regards, Patrick Ohly

The content of this message is my personal opinion only and although I
am an employee of Intel, the statements I make here in no way represent
Intel's position on the issue, nor am I authorized to speak on behalf
of Intel on this matter.