Cleaning up the Python D-Bus Bindings
Ray Strode
halfline at gmail.com
Thu Apr 7 11:46:41 PDT 2005
Hi,
Thanks for taking the time to write up the summary of all this. I
have a few comments.
> * Method calls
>
> * methods are called with their arguments listed first and
> may contain a number of optional keywords
>
> * method(arg1, arg2, ..., callback=None,
> error_callback=None, interface=None)
I don't think it really matters too much what we call the normal and
exception callback keywords, but "callback" is a bit general. Maybe
method_return_handler? That's a bit long, though. reply_handler or
reply_callback might be a good fit. What do you think?
> * when set to None method is
> blocking
>
> * catch errors using
> try/except blocks
So for blocking mode we'll raise exceptions and for non-blocking mode
we'll call the exception handling function passed as a keyword during
method invocation? That seems pretty reasonable to me. What do we
do about the mismatched cases of:
1) method (args..., callback=None, error_callback=my_error_handler) and
2) method (args..., callback=my_reply_handler, error_callback=None) ?
I'd propose that in the first case we synchronously call the error
handler instead of raising an exception and in the second case we
raise a "this isn't allowed" exception before sending anything over
the bus.
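To make the four combinations concrete, here's roughly the dispatch I
have in mind (just a sketch; blocking_call and async_call are made-up
stand-ins for whatever the marshalling layer ends up looking like):
def call_remote_method(proxy, args, callback=None, error_callback=None):
    if callback is None and error_callback is None:
        # plain blocking call; errors come straight back as exceptions
        return proxy.blocking_call(args)
    elif callback is None:
        # case 1: still block, but hand errors to the error handler
        # instead of raising them at the call site
        try:
            return proxy.blocking_call(args)
        except Exception as error:
            error_callback(error)
    elif error_callback is None:
        # case 2: refuse up front, before anything goes over the bus
        raise TypeError("callback requires an error_callback")
    else:
        # fully asynchronous: the reply and any error both go to handlers
        proxy.async_call(args, callback, error_callback)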
> * callback signature
> depend on the number of
> arguments sent in the
> return method.
We should probably call the user's error handling callback with an
"arguments don't match" exception if the user's reply callback
signature doesn't match the number of arguments sent back over the bus
in the reply.
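For instance, the delivery step could go something like this (sketch
only; deliver_reply and the other names are invented):
def deliver_reply(reply_handler, error_handler, reply_args):
    # reply_args is the tuple of values demarshalled from the reply message
    try:
        reply_handler(*reply_args)
    except TypeError as error:
        # the handler's signature didn't match the reply's argument count;
        # feed the mismatch to the error handler instead of letting it
        # propagate somewhere random (a real version would need to tell
        # this apart from a TypeError raised inside the handler's body)
        error_handler(error)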
> * error_callback keyword
>
> * For blocking calls this will
> throw an error if set
Okay so this is 1) above. So you're saying that raising a "this isn't
allowed" exception is a solution for this case, too. Either way
(raising a "this isn't allowed" exception or calling the error handler
synchronously) seems like reasonable behavior to me. One advantage of
calling the error handler synchronously is the generic error handling
case. If a set of methods should all be handled in the same way on
errors, it would be nice if we could avoid
try:
    remote_object.SomeMethod (some_args)
except:
    <multiple lines of generic error handling code goes here>
scattered throughout the code and instead just have
remote_object.SomeMethod (some_args, error_callback=generic_error_handler)
On the other hand, there is no reason that those multiple lines of
generic error handling code couldn't be stuffed in a function that is
invoked from the except clause anyway. Also, this is only an issue for
blocking calls which are pretty useless in gui applications.
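i.e. something like (handle_remote_error is just a made-up name):
def handle_remote_error(error):
    # <the multiple lines of generic error handling code go here>
    pass

try:
    remote_object.SomeMethod (some_args)
except Exception as error:
    handle_remote_error(error)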
> * interface keyword
>
> * using this one can override the
> default interface and call a
> method on another interface
I'm not sure that I really like the concept of a "default interface".
What makes one interface of an object more worthy of being the default
than another?
> * Interfaces
> * objects initialized with an interface will consider that
> interface the default interface
>
> * object =
> service.get_object("/org/designfu/SomeObject",
> "org.designfu.SampleInterface")
>
> * org.designfu.SampleInterface becomes the
> default interface
I'm pretty sure that D-Bus already does some sort of heuristics to
figure out which interface to use if the caller doesn't specify one.
I can't think of any good reason we should bypass those heuristics
and just say "use whatever interface the user wants to be default"
instead.
My feeling is the caller should always specify an interface, but if
the programmer is feeling particularly lazy and doesn't specify an
interface then if the method can be unambiguously determined it should
be and if it can't be then the user should expect undefined behavior.
That's how I think the d-bus heuristics work now.
So it should just be object = service.get_object
("/org/designfu/SomeObject") and have no default interface.
> * the Interface wrapper object acts just like an object
> but casts the wrapped object to another interface for
> all methods
>
> * introspect_interface = Interface(object,
> "org.freedesktop.DBus.Introspectable")
I like this and I think this is probably the right approach to having
"default" interfaces available. Although I think we could make it a
getter method on the object like we make get_object for service
objects. Something like:
object = service.get_object ("/org/designfu/SomeObject")
sample_interface = object.get_interface ("org.designfu.SampleInterface")
sample_interface.Foo(foo_arg)
sample_interface.Bar(bar_arg)
would be equivalent to:
object = service.get_object("/org/designfu/SomeObject")
object.Foo(foo_arg, interface="org.designfu.SampleInterface")
object.Bar(bar_arg, interface="org.designfu.SampleInterface")
If we do this though, then we are putting get_interface and remote
methods in the same namespace. This is actually already being done
for connect_to_signal (). This means every time we add another
method to dbus.Object we break being able to call a remote method with
that same name. Maybe we should move remote method calls to
object.remote or something similar?
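Very rough sketch of what I mean by object.remote (every name in here
is invented):
class RemoteMethodNamespace:
    # everything looked up through object.remote gets treated as a remote
    # method, so helpers like get_interface() and connect_to_signal() can
    # live on the proxy itself without shadowing the remote object's methods
    def __init__(self, proxy):
        self._proxy = proxy

    def __getattr__(self, method_name):
        # _make_remote_caller() is an invented stand-in for whatever
        # builds the callable that actually sends the message
        return self._proxy._make_remote_caller(method_name)

# object.get_interface ("org.designfu.SampleInterface")  <- local helper
# object.remote.get_interface (some_arg)                 <- remote method, no clash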
> * all parameters will be exported as
> variants
This kind of sucks but I guess there isn't much we can do because
there probably isn't any easy way to know ahead of time what the types
of the parameters are supposed to be.
> * Should we provide a way to add
> hints so that we can do type
> checking or make it easier to
> use from C? My gut says no but
> perhaps add this if some sort of
> static type checking enters
> python.
In Python it's up to the caller of a function to know what the valid
types for the function's arguments are. If the caller gets it wrong I
think functions are supposed to raise TypeErrors or something. I
guess it really shouldn't be any different just because the caller is
remote.
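Something like this, in other words (names made up; the exported method
just complains the way any local function would, and the bindings can
turn that into an error reply for the remote caller):
def set_name(record, name):
    # an ordinary function: the caller is expected to know the valid
    # types and gets a TypeError back when it guesses wrong
    if not isinstance(name, str):
        raise TypeError("name must be a string")
    record["name"] = name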
> * proposed standard annotation to the introspection format
> * org.freedesktop.DBus.Doc for document strings.
> It is easy enough for us to get from Python
> (method.__doc__) and allows users of the python
> bindings to call __doc__ on a proxy object's
> methods and get documentation
This is a general enough topic that you might want to bring it up in a
separate thread. One thing that should just work regardless of whether
this annotation is added is a caller calling
org.designfu.SampleInterface.SampleMethod.__doc__. Of course that
doesn't solve getting doc strings in the other direction, so I think
the annotation is a good idea.
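On the proxy side I'd picture something like this (sketch; only the
annotation name comes from the proposal, get_annotation() and the rest
of the plumbing are invented):
def build_proxy_method(connection, method_node, method_name):
    def proxy_method(*args, **keywords):
        # forward the call over the bus (details elided / invented)
        return connection.call_method(method_name, args, keywords)
    # copy the org.freedesktop.DBus.Doc annotation, if the service
    # exported one, onto the generated callable so __doc__ just works
    proxy_method.__doc__ = method_node.get_annotation("org.freedesktop.DBus.Doc")
    return proxy_method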
> * Signals should be registered with dbus.Object so
> that they can be exported in the introspection
> data.
>
> * Parameter format needs to be passed when
> registering
This is a bit icky. I guess it's necessary though or the
introspection data will be incomplete.
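Something along these lines is what I'd picture for the registration
step (sketch; register_signal() and parameter_format are invented names,
and imagine this living on dbus.Object rather than its own class):
class ExportedObject:
    def __init__(self):
        self._signals = {}

    def register_signal(self, name, interface, parameter_format):
        # keep the parameter format alongside the signal so the
        # introspection data can describe it without ever having
        # seen the signal emitted
        self._signals[(interface, name)] = parameter_format

obj = ExportedObject()
obj.register_signal("HelloSignal", "org.designfu.SampleInterface",
                    parameter_format="s")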
> * do we need a signal_error callback also?
I'm pretty sure signals are sent with no reply expected by default, so I
don't think it makes sense to have error reply handlers when emitting
signals.
> * The low level bindings will be stripped of
> methods not used by the higher level bindings.
I'm not even sure there should be "low level bindings" at all. We
don't have to use python versions of low level apis to implement high
level apis; we can just use the C api directly to implement the high
level api. There's going to be less overhead that way anyway.
--Ray