2006/11/28, Magnus Bergman <magnus.bergman@observer.net>:
> On Mon, 27 Nov 2006 14:57:50 -0500
> Joe Shaw <joeshaw@novell.com> wrote:
>
> > Hi,
> >
> > On Thu, 2006-11-23 at 16:26 +0100, Mikkel Kamstrup Erlandsen wrote:
> > > The situation at hand is that we have a handful of desktop search
> > > engines, all implemented as daemons, both handling searches and
> > > indexing.
> >
> > Yeah, but this situation isn't realistic. The user should never be
> > running more than one search system at a time. When they type in
> > "vacation photos" they don't care which engine is being used
> > underneath, they care about getting their vacation photos.
>
> That certainly depends on what you call realistic. Is it only realistic
> that users would want (or should be able) to do such simple searches? I
> think it's realistic to imagine that there can be different search
> engines which are good at different things. Perhaps one is good at
> finding media files by their tags, another at finding relevant
> information from fuzzy terms, and maybe a third at gathering
> information from the whole local network. Even if this isn't very
> close at hand, I don't think it's right to assume that nobody would
> want this, ever.
>
> > To ensure timely indexing of the user's data in the background, these
> > search engines really have to be started as part of the user's session
> > and not on-demand. (Although a non-indexing searcher like "grep"
> > would be fine on-demand.) I want all of my vacation photos to be
> > indexed by the time I have to search for them.
>
> As I mentioned before, there is no reason to assume that the daemon
> doing the indexing is directly involved when the searching is done
> (some search engines use the same daemon for both things, some don't).
> How and when to do the indexing can be decided by each search engine
> implementation.
>
> > > Having an extra daemon on top of that handling the query one extra
> > > time before passing it to the search subsystem seems overkill...
> > > Ideally I see the daemon/lib (or even executable) to only be used
> > > as a means of obtaining a dbus object path given a dbus interface
> > > name ("org.freedesktop.search.simple").
> >
> > I feel like I missed something in this thread thus far, but why is a
> > library or separate daemon necessary? Why would the engine not simply
> > grab the "org.freedesktop.search.engine" (or whatever) name at startup
> > if it's available? If nothing has the name, you could activate a
> > grep-based fallback or something like that on-demand.
>
> At least as I see it, there has to be some kind of daemon if there is a
> dbus interface. But some search engines don't have a daemon (involved
> in the searching, that is). In those cases the searching can be done
> in-process (and you need neither a daemon nor dbus). But a daemon could
> be created to wrap those search engines, and it could provide a dbus
> interface.

Yes, I suggested this on the WasabiDraft2 wiki page. However, these are
technical concerns that I think we need not worry about at the moment. At
some point in the future we can implement a management interface, but the
dbus interfaces to the search engine(s) do not need to reflect this. Let's
stick to what we can "implement" by API specifications only, unless there
is some obvious deep connection I have missed.
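Just to make the wrapper idea a bit more concrete, here is a rough,
untested sketch of such a wrapper daemon in Python (dbus-python). Only
the names org.freedesktop.search.engine and org.freedesktop.search.simple
are taken from the discussion above (and even those are still open); the
object path /org/freedesktop/search and the Query method with its
's' -> 'as' signature are placeholders I made up purely for illustration:

import dbus
import dbus.service
from dbus.mainloop.glib import DBusGMainLoop
from gi.repository import GLib


def backend_query(terms):
    # Stand-in for whatever in-process engine this daemon wraps.
    return ['file:///home/user/example-hit.txt']


class SimpleSearch(dbus.service.Object):
    # Exports the (hypothetical) simple search interface for one backend.

    @dbus.service.method(dbus_interface='org.freedesktop.search.simple',
                         in_signature='s', out_signature='as')
    def Query(self, terms):
        return backend_query(terms)


DBusGMainLoop(set_as_default=True)
bus = dbus.SessionBus()

# Claim the well-known name; give up if another engine already owns it.
reply = bus.request_name('org.freedesktop.search.engine',
                         dbus.bus.NAME_FLAG_DO_NOT_QUEUE)
if reply != dbus.bus.REQUEST_NAME_REPLY_PRIMARY_OWNER:
    raise SystemExit('another search engine already owns the name')

SimpleSearch(bus, '/org/freedesktop/search')
GLib.MainLoop().run()

A client would then not need to care which engine owns the name:

import dbus

bus = dbus.SessionBus()
proxy = bus.get_object('org.freedesktop.search.engine',
                       '/org/freedesktop/search')
search = dbus.Interface(proxy,
                        dbus_interface='org.freedesktop.search.simple')
print(search.Query('vacation photos'))

And if a dbus activation file is installed for
org.freedesktop.search.engine, a call like this should be able to start
a fallback engine on demand, which I believe is what Joe is suggesting.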

Cheers,
Mikkel