Idea: Attention API
lunchtimemama at gmail.com
Sat Jan 12 14:22:29 PST 2008
Here is the problem:
I like listening to music. I listen to music while doing almost
anything else - reading, writing, coding, browsing, painting. There
are some activities which are at odds with my musical enjoyment -
watching a video, listening to a podcast or audio book, voice-chatting
with someone. In the course of my daily web-browsing, I come across
perhaps ten or more YouTube videos. On each occasion I must pause my
music player (Banshee) before watching the video and resume my
music when the video is over. I am doing the machine's job for it.
What should happen:
If I am listening to music and any of the following happens - I play
video or audio in the browser; I open video or audio from the drive; I
receive or initiate a VoIP call; I record audio via Audacity or some
similar program; I do anything else which requires my aural attention
- the music should pause. When I complete the interstitial task the
music should resume.
How this should happen:
The general idea is to have a DBus-based "attention" API. There are
several ways such a system could work, but here is one example:
compliant applications report their "attention requirements" through
the API. For example, when Banshee plays a song, it would report
requiring "passive aural" attention. When Banshee plays a podcast or
any file of the "spoken word" genre, it would report requiring "active
aural" attention. When Totem opens a video, it would report requiring
"active aural" and "active visual" attention. When OpenOffice opens a
text document, it would report requiring "passive visual" attention.
And so forth. Some logic then manages attention by culling passive
spectacles in the event of active spectacles, and restoring the culled
spectacles when appropriate. This logic could be centralized - when a
video plays, it tells Banshee to pause; when a video stops, it tells
Banshee to start again - or it could be distributed among the
compliant applications - Banshee listens for "active aural" spectacles
and pauses for their duration.
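To make the centralized variant concrete, here is a minimal sketch of the culling logic in Python. Everything here is hypothetical - the attention categories, the AttentionManager class, and its method names are illustrative, not a proposed wire format; a real implementation would sit behind DBus and signal the applications rather than print.

```python
from enum import Enum, auto

class Attention(Enum):
    """The four attention requirements from the example above."""
    PASSIVE_AURAL = auto()
    ACTIVE_AURAL = auto()
    PASSIVE_VISUAL = auto()
    ACTIVE_VISUAL = auto()

class AttentionManager:
    """Centralized variant: applications report their requirements,
    and the manager pauses passive aural spectacles whenever an
    active aural spectacle is in progress, resuming them after."""

    def __init__(self):
        self._requirements = {}  # app name -> set of Attention
        self._paused = set()

    def report(self, app, requirements):
        """An application declares what attention it now requires."""
        self._requirements[app] = set(requirements)
        self._reconcile()

    def withdraw(self, app):
        """An application is done (video ended, call hung up)."""
        self._requirements.pop(app, None)
        self._reconcile()

    def _reconcile(self):
        # Union of everything currently demanded on the desktop.
        demanded = set()
        for reqs in self._requirements.values():
            demanded |= reqs
        for app, reqs in self._requirements.items():
            conflict = (Attention.PASSIVE_AURAL in reqs and
                        Attention.ACTIVE_AURAL in demanded)
            if conflict and app not in self._paused:
                self._paused.add(app)
                self._pause(app)
            elif not conflict and app in self._paused:
                self._paused.discard(app)
                self._resume(app)

    # In a real system these would be DBus calls or signals.
    def _pause(self, app):
        print(f"pause {app}")

    def _resume(self, app):
        print(f"resume {app}")
```

A run through the music/video scenario: Banshee reports PASSIVE_AURAL; Totem then reports ACTIVE_AURAL and ACTIVE_VISUAL, so the manager pauses Banshee; when Totem withdraws, Banshee is resumed.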
Aside from the very compelling use-case of music, there are other
situations where such an API could be useful.
* Presence - Pidgin automatically changes status to "away" depending
on the state of attention. If no windows are visible and no audio is
playing, the user is very likely away from the machine and the "away"
status is appropriate after a very short period of inactivity. If an
active video is in the foreground, the user is likely at the machine.
Perhaps "watching movie" is an appropriate status to set automatically.
* Screensaver - Similar to presence example, the screensaver should
not activate if a movie is active in the foreground, no matter how
long input has been idle.
* ? - At the moment, every application is coded under the assumption
that it has the user's full attention at all times - every app is
oblivious of every other app. The rise of multi-core processors only
means more multitasking, and empowering application developers with
attention data will enable new and sophisticated UIs and behaviours
never before possible.
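The presence and screensaver cases above amount to a consumer reading the desktop-wide attention state. A small sketch of what such a consumer might look like - the string tags, the function, and the 60-second idle threshold are all assumptions for illustration, not part of any proposed API:

```python
def presence_status(demanded, idle_seconds):
    """Derive an IM presence from the attention state.

    demanded: set of attention tags currently reported anywhere on
    the desktop, e.g. {"active-aural", "active-visual"}.
    idle_seconds: time since the last keyboard/mouse input.
    """
    # A foreground movie demands both active senses: the user is
    # almost certainly at the machine, just not typing.
    if {"active-aural", "active-visual"} <= demanded:
        return "watching movie"
    # Nothing demands attention and input is idle: likely away.
    if not demanded and idle_seconds > 60:
        return "away"
    return "available"
```

A screensaver could use the same first test to inhibit itself while a movie plays, regardless of input inactivity.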
Road to an API:
This is an idea I've had for a while and I've given some thought to a
few of the technical details and problems with this idea, but first I
would like to gauge the interest of the XDG. If you think this is a
useful idea, please voice your confidence. If you think there are
potential problems with this idea, please ask and I'll do my best to
make up something. If you think this is a terrible waste of time, lay
it on me! If there's sufficient interest, we can go from there.