Cleaning up the Python D-Bus Bindings
John (J5) Palmieri
johnp at redhat.com
Thu Apr 7 12:24:33 PDT 2005
> Thanks for taking the time to write up the summary of all this. I
> have a few comments.
> > * Method calls
> >   * methods are called with their arguments listed first and
> >     may contain a number of optional keywords
> >     * method(arg1, arg2, ..., callback=None,
> >       error_callback=None, interface=None)
> I don't think it really matters too much what we call the normal and
> exception callback keywords, but "callback" is a bit general. Maybe
> method_return_handler? That's a bit long, though. reply_handler or
> reply_callback might be good. What do you think?
reply_handler and error_handler sound good to me. They fit with the
D-Bus terminology. I was notified that interface might be part of the
Python spec at some point, so interface will change to dbus_interface to
keep consistent with D-Bus terminology and be namespaced so as to avoid
naming clashes with Python proper.
> > * when set to None the method is
> >   blocking
> >   * catch errors using
> >     try/except blocks
> So for blocking mode we'll raise exceptions and for non-blocking mode
> will call the exception handling function passed as a keyword during
> method invocation? That seems pretty reasonable to me. What do we
> do about the mismatched cases of:
> 1) method (args..., callback=None, error_callback=my_error_handler)
> 2) method (args..., callback=my_reply_handler, error_callback=None) ?
Raise an exception. This is a programming mistake. For the most part,
async error handlers are for handling errors that come over the bus,
with a few exceptions.
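To make the proposed convention concrete, here is a minimal pure-Python
sketch. ProxyMethod and the fake remote call are hypothetical stand-ins,
not the real bindings; only the calling convention is the point.

```python
# Hypothetical sketch of the proposed calling convention -- not the
# actual dbus-python code.

class ProxyMethod:
    def __init__(self, func):
        self._func = func  # stand-in for the actual remote invocation

    def __call__(self, *args, reply_handler=None, error_handler=None,
                 dbus_interface=None):
        if (reply_handler is None) != (error_handler is None):
            # Passing only one of the pair is a programming mistake.
            raise TypeError(
                "reply_handler and error_handler must be given together")
        if reply_handler is None:
            # Blocking mode: errors propagate as ordinary exceptions,
            # to be caught with try/except.
            return self._func(*args)
        # Async mode: deliver the result or the error via the handlers.
        try:
            result = self._func(*args)
        except Exception as e:
            error_handler(e)
        else:
            reply_handler(result)

Echo = ProxyMethod(lambda s: s.upper())
```

With this, Echo("hi") blocks and returns the value, while
Echo("hi", reply_handler=f, error_handler=g) delivers the result (or
error) through the handlers instead.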
> > * callback signatures
> >   depend on the number of
> >   arguments sent in the
> >   return method.
> we should probably call the user's error handling callback with an
> "arguments don't match" exception if the user's reply callback
> signature doesn't match the number of arguments sent back over the bus
> in the reply.
Yes, this would be the only way to handle this error in the user's code,
since trying to call the reply_handler with the wrong number of elements
would just raise an exception in the bindings.
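A sketch of that behavior: the bindings catch the TypeError from the
arity mismatch and hand an "arguments don't match" error to the user's
error_handler. The function name here is made up for illustration.

```python
# Hypothetical delivery helper inside the bindings: call the user's
# reply_handler, and route an argument mismatch to the error_handler.

def deliver_reply(reply_args, reply_handler, error_handler):
    try:
        reply_handler(*reply_args)
    except TypeError as exc:
        # Covers an arity mismatch (and, admittedly, any TypeError
        # raised inside the handler itself).
        error_handler(exc)

results = []
deliver_reply((1, 2),
              lambda a, b: results.append(a + b),
              lambda exc: results.append("arguments don't match"))
deliver_reply((1, 2, 3),
              lambda a, b: results.append(a + b),
              lambda exc: results.append("arguments don't match"))
```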
> > * error_callback keyword
> > * For blocking calls this will
> > throw an error if set
> Okay so this is 1) above. So you're saying that raising a "this isn't
> allowed" exception is a solution for this case, too. Either way
> (raising a "this isn't allowed exception" or calling the error handler
> synchronously) seems like reasonable behavior to me. One advantage of
> calling the error handler synchronously is the generic error handling
> case. If a set of methods should all be handled in the same way on
> errors, it would be nice if we could avoid
> remote_object.SomeMethod (some_args)
> <multiple lines of generic error handling code goes here>
> scattered throughout the code and instead just have
> remote_object.SomeMethod (some_args,

Not really, as this is how Python, Java, and C++ do error handling. The
nice thing about handling errors this way is that the except block and
the try block share the same parent scope. There are times when the same
data is used in both blocks (for example, closing an open file).

I don't like this. It is necessary for async calls, but otherwise I want
to stick to the language conventions.
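A small illustration of the scope point: the try and except blocks share
the enclosing scope, so cleanup can reach the same variables from either
path.

```python
# The try/except/finally blocks all see 'f' and 'path' from the
# enclosing scope -- the shared-scope property mentioned above.
import os
import tempfile

fd, path = tempfile.mkstemp()
os.close(fd)
f = open(path, "w")
try:
    f.write("hello")
    outcome = "ok"
except OSError:
    # 'f' and 'path' from the enclosing scope are visible here too.
    outcome = "failed"
finally:
    f.close()       # the same 'f' is available for cleanup
    os.remove(path)
```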
> On the other hand, there is no reason that those multiple lines of
> generic error handling code couldn't be stuffed in a function that is
> invoked from the except clause anyway. Also, this is only an issue for
> blocking calls which are pretty useless in gui applications.
> > * interface keyword
> >   * using this one can override the
> >     default interface and call a
> >     method on another interface
> I'm not sure that I really like the concept of a "default interface".
> What makes one interface of an object more worthy of being the default
> than another?
Whichever interface the user is going to use the most. I don't think
D-Bus enforces any policy on what gets used. In fact, using the
low-level D-Bus API one can mostly ignore the interface. According to
the spec:
In the absence of an INTERFACE field, if two interfaces on the same
object have a method with the same name, it is undefined which of the
two methods will be invoked. Implementations may also choose to return
an error in this ambiguous case. However, if a method name is unique
implementations must not require an interface field.
So setting a default just specifies which interface to use if a method
is not unique.
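That dispatch rule could be sketched like this. The interfaces, method
names, and the table-based dispatcher are all made up to illustrate the
spec text quoted above.

```python
# Toy dispatch table: (interface, method name) -> implementation.
methods = {
    ("org.designfu.SampleInterface", "Frobate"): lambda: "sample",
    ("org.designfu.OtherInterface", "Frobate"): lambda: "other",
    ("org.designfu.OtherInterface", "Unique"): lambda: "unique",
}

def dispatch(name, interface=None, default_interface=None):
    if interface is not None:
        # An explicit interface always wins.
        return methods[(interface, name)]()
    matches = [key for key in methods if key[1] == name]
    if len(matches) == 1:
        # A unique method name must not require an interface field.
        return methods[matches[0]]()
    # Ambiguous name: fall back to the default interface.
    return methods[(default_interface, name)]()
```

So the default only ever matters for names like Frobate that appear on
more than one interface.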
> > * Interfaces
> >   * objects initialized with an interface will consider that
> >     interface the default interface
> >     * object = service.get_object ("/org/designfu/SomeObject",
> >       "org.designfu.SampleInterface")
> >       * org.designfu.SampleInterface becomes the
> >         default interface
> So it should just be object = service.get_object
> ("/org/designfu/SomeObject") and have no default interface.
Let's get rid of get_object and just use dbus.Interface to talk to
objects, just to make things more consistent. It could be a static
method that obtains an interface from a service, or we could just check
whether the input to the constructor is a dbus.Service or another
dbus.Interface. In the first case it would look like

    i = dbus.Interface.bind_service(service, "my.default.interface")

or just

    i = dbus.Interface(service, "my.default.interface")

The first is more descriptive but the second is cleaner.
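A sketch of the two spellings. Service and Interface here are toy
stand-ins for the real dbus classes, just to show a constructor that
accepts either a service or another interface.

```python
# Toy versions of the classes under discussion -- not the real bindings.

class Service:
    def __init__(self, name):
        self.name = name

class Interface:
    def __init__(self, target, dbus_interface):
        # Unwrap another Interface so we always hold the service itself.
        self.service = target.service if isinstance(target, Interface) else target
        self.dbus_interface = dbus_interface

    @staticmethod
    def bind_service(service, dbus_interface):
        # The more descriptive spelling of the same construction.
        return Interface(service, dbus_interface)

svc = Service("org.designfu.SampleService")
i = Interface(svc, "my.default.interface")
j = Interface.bind_service(svc, "my.default.interface")
k = Interface(i, "org.freedesktop.DBus.Introspectable")  # recast
```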
> > * the Interface wrapper object acts just like an object
> >   but casts the wrapped object to another interface for
> >   all methods
> >   * introspect_interface = Interface(object,
> >     "org.freedesktop.DBus.Introspectable")
> I like this and I think this is probably the right approach to having
> "default" interfaces available. Although I think we could make it a
> getter method on the object like we make get_object for service
> objects. Something like:
> object = service.get_object ("/org/designfu/SomeObject")
> sample_interface = object.get_interface ("org.designfu.SampleInterface")
> sample_interface.Foo(foo_arg)
> sample_interface.Bar(bar_arg)
> would be equivalent to:
> object.Foo(foo_arg, interface="org.designfu.SampleInterface")
> object.Bar(bar_arg, interface="org.designfu.SampleInterface")
I don't particularly like factory methods which create other methods.
For singletons they make sense but I think it is a bit wrong in these
cases. I would like to see what other python programmers think though.
Cleaning things up to just using Interfaces makes things simpler anyway.
> If we do this though, then we are putting get_interface and remote
> methods in the same namespace. This is actually already being done
> for connect_to_signal (). This means every time we add another
> method to dbus.Object we break being able to call a remote method with
> that same name. Maybe we should move remote method calls to
> object.remote or something similar?
I don't like exposing object.remote as it just makes more typing for the
user and is not intuitive. Why not just resolve name clashes using D-Bus
interfaces? We can specify a standard interface for local calls like
connect_to_signal, say org.freedesktop.DBus.Python.LocalInterface or
something like that. If names clash, we refer to the current interface
to decide which method to call. We can always have a constant
dbus.LOCAL_INTERFACE to reduce typing.
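A sketch of that clash resolution through a reserved local interface.
The constant's value and the dispatch rule are hypothetical, purely to
show the idea.

```python
# Reserved interface name for the bindings' own local helper methods.
LOCAL_INTERFACE = "org.freedesktop.DBus.Python.LocalInterface"

class Proxy:
    def __init__(self, current_interface):
        self.current_interface = current_interface
        self._local = {"connect_to_signal": lambda: "local call"}

    def call(self, name):
        # A clashing name resolves to the local method only when the
        # current interface is the reserved local one; otherwise it is
        # treated as a remote method on the current interface.
        if self.current_interface == LOCAL_INTERFACE and name in self._local:
            return self._local[name]()
        return "remote call: " + name

p = Proxy("org.designfu.SampleInterface")
q = Proxy(LOCAL_INTERFACE)
```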
> > * all parameters will be exported as
> >   variants
> This kind of sucks but I guess there isn't much we can do because
> there probably isn't any easy way to know ahead of time what the types
> of the parameters are supposed to be.
In reality it is just under the hood stuff so...
> > * Should we provide a way to specify type
> >   hints so that we can do type
> >   checking or make it easier to
> >   use from C? My gut says no, but
> >   perhaps add this if some
> >   static type checking enters
> >   python.
> In python it's up to the caller of a function to know what are valid
> types for the function's arguments. If the caller gets it wrong I
> think functions are supposed to raise TypeError 's or something. I
> guess it really shouldn't be any different just because the caller is
> remote.
Yep that is what we will do.
> > * proposed standard annotation to the introspection
> >   data
> >   * org.freedesktop.DBus.Doc for documentation
> >     It is easy enough for us to get from Python
> >     (method.__doc__) and allows users of the
> >     bindings to call __doc__ on a proxy object's
> >     methods and get documentation
> This is a general enough topic that you might want to bring it up in a
> separate thread. One thing that should just work regardless of whether
> this annotation is added is a caller calling
> org.designfu.SampleInterface.SampleMethod.__doc__. Of course that
> doesn't solve getting doc strings in the other direction, so I think
> the annotation is a good idea.
Introspection in D-Bus is an all or nothing affair, though it would be
nice to save some bandwidth by specifying a with_docs flag or having a
separate call. I guess this is what needs to be discussed in a separate
thread.
> > * Signals should be registered so
> >   that they can be exported in the
> >   introspection data.
> >   * Parameter format needs to be passed when
> >     registering
> This is a bit icky. I guess it's necessary though or the
> introspection data will be incomplete.
Yep, it is very icky but needs to be done.
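Registering a signal with its parameter signature could look something
like this. The decorator and the registry are illustrative, not the
real bindings.

```python
# Record each signal's D-Bus type signature so an introspection
# generator could export it later.
signal_registry = {}

def signal(dbus_interface, signature):
    def decorate(func):
        # (interface, signal name) -> D-Bus type signature, e.g. "sv".
        signal_registry[(dbus_interface, func.__name__)] = signature
        return func
    return decorate

@signal("org.designfu.SampleInterface", signature="sv")
def HelloSignal(message, value):
    pass  # emitting would go over the bus in the real bindings
```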
> > * do we need a signal_error callback also?
> I'm pretty sure signals are set no reply by default so I don't think
> it makes sense to have error reply handlers when emitting signals.
> > * The low level bindings will be stripped of
> >   methods not used by the higher level
> >   bindings.
> I'm not even sure there should be "low level bindings" at all. We
> don't have to use python versions of low level apis to implement high
> level apis; we can just use the C api directly to implement the high
> level api. There's going to be less overhead that way anyway.
The low level bindings supply a nice abstraction where we mix C code and
Pyrex code (I know you don't like Pyrex but that is an argument I don't
want to have), whereas the high level bindings are just straight Python
code. I think it provides a nice separation and allows developers to
look at the high level bindings and get a sense of how to use the
bindings. I think the split will be to put all private members in the
low level bindings and public members in the high level bindings.
John (J5) Palmieri
Associate Software Engineer
Red Hat, Inc.