xfce4-mailwatch-plugin pegging the CPU usage meter.
Brian J. Tarricone
bjt23 at cornell.edu
Fri Oct 17 06:39:59 CEST 2008
On Fri, 17 Oct 2008 03:55:52 +0000 (UTC) Grant Edwards wrote:
> On 2008-10-16, Brian J. Tarricone <bjt23 at cornell.edu> wrote:
> > Grant Edwards wrote:
> >
> >> The odd thing I've noticed is that the xfce4-mailwatch-plugins
> >> completely peg the CPU usage at 100% for 8-10 seconds each time
> >> it checks the IMAP mailboxes. I'm running an AMD Athlon 64
> >> 3200+ clocked at 2GHz, and it just shouldn't take that many
> >> clock cycles to check an IMAP mailbox.
> >>
> >> AFAICT, each instance of the mailwatch plugin will suck up
> >> as much CPU as it can the whole time it's "awake".
> >>
> >> Any ideas on what might be wrong?
> >
> > Nothing. AFAIK that's normal.
>
> That's normal? What is it doing for 8-10 seconds at 2GHz?
> That's an _awful_ lot of processing for something that should
> be spending all its time blocked waiting for network I/O.
>
> > Does it on my system too (tho mine's a bit slower than yours).
> > If you really care that much, profile it with sysprof or
> > callgrind or something and tell me where it's spending too
> > much time.
>
> Something seems broken to me. It shouldn't take that much CPU
> to send a few IMAP commands and parse the replies. Even
> setting up an SSL connection only takes a fraction of a second
> of CPU time at 2GHz. To me, it looks like the program is
> busy-waiting on something when it should be blocking.
Yes and no. The network bits in mailwatch sorta block while waiting
for data, but sorta not. The problem is, if something goes wrong
(either with the plugin or the server), and mailwatch is waiting for
data that never comes, there's no way to break out of a read() until
and unless the server eventually decides to time out and drop the
connection. This is... not particularly acceptable. So, I have a
short select() timeout (on the order of a second), with a check
between each select() to see if the receive loop should bail out.
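Roughly, the receive path looks like this (simplified sketch, not the
actual plugin code; the function and flag names are made up):

#include <sys/select.h>
#include <sys/time.h>
#include <unistd.h>

/* simplified sketch of the current receive loop: select() with a
 * short timeout so the thread can notice an abort request between
 * calls instead of sitting in a read() forever */
static ssize_t
recv_with_abort_check(int sockfd, char *buf, size_t buflen,
                      volatile int *should_abort)
{
    while(!*should_abort) {
        fd_set rfds;
        struct timeval tv = { 1, 0 };   /* ~1 second timeout */
        int ret;

        FD_ZERO(&rfds);
        FD_SET(sockfd, &rfds);

        ret = select(sockfd + 1, &rfds, NULL, NULL, &tv);
        if(ret > 0)
            return read(sockfd, buf, buflen);  /* data (or EOF) ready */
        else if(ret < 0)
            return -1;                         /* select() error */
        /* ret == 0: timed out; loop and re-check the abort flag */
    }

    return -1;  /* caller asked us to bail out */
}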
It's not particularly optimal; another option would be fully async,
callback-oriented I/O using GIOChannel, but that's pretty much
a(nother) rewrite of all the network code.
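Something like this is what I mean; just a sketch of the general
shape, not anything that exists in mailwatch (the callback name and
the buffer handling are made up):

#include <glib.h>

/* hypothetical callback: the main loop calls this whenever the socket
 * has data (or hangs up), so no thread ever sits blocked in read() */
static gboolean
imap_io_ready(GIOChannel *channel, GIOCondition cond, gpointer data)
{
    gchar buf[1024];
    gsize bytes_read = 0;

    if(cond & (G_IO_HUP | G_IO_ERR))
        return FALSE;  /* connection is gone; drop the watch */

    if(g_io_channel_read_chars(channel, buf, sizeof(buf),
                               &bytes_read, NULL) == G_IO_STATUS_NORMAL)
    {
        /* feed buf/bytes_read into the IMAP response parser here */
    }

    return TRUE;  /* keep watching */
}

static void
watch_imap_socket(int sockfd)
{
    GIOChannel *channel = g_io_channel_unix_new(sockfd);
    g_io_channel_set_encoding(channel, NULL, NULL);  /* raw bytes */
    g_io_add_watch(channel, G_IO_IN | G_IO_HUP | G_IO_ERR,
                   imap_io_ready, NULL);
    g_io_channel_unref(channel);  /* the watch holds its own ref */
}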
Within the current framework, I guess what could work is: instead of
calling a function between selects to check "hey, should I bail on
this connection?", keep a pipe open between the threads, select() on
both the socket and the read end of the pipe, and set the timeout to
the full connection timeout (45 seconds or whatever).
But I'm not sure why the current approach would peg the CPU all the
time, especially considering most of the select() calls should
actually return within a couple of seconds with data ready.
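If I did go the pipe route, it'd look something along these lines
(again just a sketch, not real plugin code; the fd names are
placeholders):

#include <sys/select.h>
#include <sys/time.h>
#include <unistd.h>

/* sketch of the pipe-based wakeup: select() on both the socket and
 * the read end of a pipe shared with the controlling thread.  To
 * cancel, the other thread just write()s a byte to the pipe. */
static int
wait_for_data_or_cancel(int sockfd, int cancel_fd)
{
    fd_set rfds;
    struct timeval tv = { 45, 0 };  /* full connection timeout */
    int maxfd = (sockfd > cancel_fd ? sockfd : cancel_fd);

    FD_ZERO(&rfds);
    FD_SET(sockfd, &rfds);
    FD_SET(cancel_fd, &rfds);

    if(select(maxfd + 1, &rfds, NULL, NULL, &tv) <= 0)
        return -1;                   /* timeout or error */

    if(FD_ISSET(cancel_fd, &rfds)) {
        char dummy;
        read(cancel_fd, &dummy, 1);  /* drain the wakeup byte */
        return -1;                   /* cancelled by the other thread */
    }

    return 0;  /* socket has data; read() won't block */
}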
But what's there now works, and I have too much to work on with the
impending Xfce 4.6 release to even think about it right now. In the
meantime, I'm sure your CPU can spare 8-10 seconds' worth of cycles
every 10 minutes or so.
(I'm assuming what I outlined above is the problem; I could be
completely wrong. Hence the need for some profiling.)
-b