xfce4-mailwatch-plugin pegging the CPU usage meter.
Grant Edwards
grante at visi.com
Fri Oct 17 16:33:02 CEST 2008
On 2008-10-17, Brian J. Tarricone <bjt23 at cornell.edu> wrote:
>>> Does it on my system too (tho mine's a bit slower than yours).
>>> If you really care that much, profile it with sysprof or
>>> callgrind or something and tell me where it's spending too
>>> much time.
>>
>> That's normal? What is it doing for 8-10 seconds at 2GHz?
>> That's an _awful_ lot of processing for something that should
>> be spending all its time blocked waiting for network I/O.
>>
>> Something seems broken to me. It shouldn't take that much CPU
>> to send a few IMAP commands and parse the replies. Even
>> setting up an SSL connection only takes a fraction of a second
>> of CPU time at 2GHz. To me, it looks like the program is
>> busy-waiting on something when it should be blocking.
>
> Yes and no. The network bits in mailwatch sorta block while
> waiting for data, but sorta not. The problem is, if something
> goes wrong (either with the plugin or the server) and
> mailwatch is waiting for data that never comes, there's no way
> to break out of a read() until and unless the server
> eventually decides to time out and drop the connection. This
> is... not particularly acceptable. So, I have a short
> select() timeout (on the order of a second), with checks to
> see if it should exit the receive loop between each select().
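For reference, I read that as a loop shaped roughly like this (my
sketch of what you're describing, with a hypothetical should_exit
flag -- I haven't checked it against the actual mailwatch source):

#include <stdbool.h>
#include <sys/select.h>

extern volatile bool should_exit;   /* set from another thread */

/* Block in select() for at most one second at a time, re-checking
 * the exit flag between calls. */
static int wait_for_data(int sockfd)
{
    while (!should_exit) {
        fd_set rfds;
        struct timeval tv = { .tv_sec = 1, .tv_usec = 0 };

        FD_ZERO(&rfds);
        FD_SET(sockfd, &rfds);

        int ret = select(sockfd + 1, &rfds, NULL, NULL, &tv);
        if (ret > 0)
            return 0;      /* data ready on the socket */
        if (ret < 0)
            return -1;     /* select() error */
        /* ret == 0: one-second timeout; loop and re-check the flag */
    }
    return -1;             /* told to give up */
}
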
Right. But a select() that blocks for a second, checks a
timeout, then blocks for another second uses a negligible
amount of CPU, so that's not actually what's going on. I
suspect that when the select() wakes up, the subsequent loop
that's calling gnutls_*_recv() is spinning without ever
returning to the top of the outer loop where the select()
happens. When I have an hour or two to spare, I'm going to
take a look at it...
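Concretely, I'd expect to find something shaped like this (purely
illustrative -- my guess at the failure mode, not the actual
plugin code; it assumes the socket is non-blocking):

#include <gnutls/gnutls.h>
#include <sys/types.h>

/* On a non-blocking socket, gnutls_record_recv() returns
 * GNUTLS_E_AGAIN when no data is available yet.  Retrying it
 * immediately busy-waits at 100% CPU instead of going back out
 * to the select(). */
static ssize_t suspected_spin(gnutls_session_t session,
                              char *buf, size_t len)
{
    for (;;) {
        ssize_t n = gnutls_record_recv(session, buf, len);
        if (n >= 0)
            return n;      /* data received (or clean EOF) */
        if (n == GNUTLS_E_AGAIN || n == GNUTLS_E_INTERRUPTED)
            continue;      /* <-- spins here, burning CPU */
        return n;          /* real error */
    }
}

If that's it, the fix is just to fall back out to the select() on
GNUTLS_E_AGAIN instead of retrying immediately.
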
> It's not particularly optimal; another option would be to use
> fully async callback-oriented I/O using GIOChannel, but that's
> pretty much a(nother) rewrite of all the network code. Within
> the current framework, I guess what could work is this: instead
> of calling a function to check "hey, should I bail on this
> connection?", keep a pipe open between the threads, select() on
> both the socket and the read end of the pipe, and set the
> timeout to the full connection timeout (45 seconds or whatever).
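That's the classic self-pipe trick. I imagine it would look
something like this (hypothetical names, not actual mailwatch
code; wakeup_fd is the read end of a pipe that the main thread
writes a byte to when the connection should be abandoned):

#include <sys/select.h>
#include <unistd.h>

/* Wait up to the full connection timeout for either socket data
 * or a cancellation byte on the pipe. */
static int wait_for_data_or_cancel(int sockfd, int wakeup_fd)
{
    fd_set rfds;
    struct timeval tv = { .tv_sec = 45, .tv_usec = 0 };
    int nfds = (sockfd > wakeup_fd ? sockfd : wakeup_fd) + 1;

    FD_ZERO(&rfds);
    FD_SET(sockfd, &rfds);
    FD_SET(wakeup_fd, &rfds);

    if (select(nfds, &rfds, NULL, NULL, &tv) <= 0)
        return -1;                     /* timeout or error */

    if (FD_ISSET(wakeup_fd, &rfds)) {
        char c;
        (void)read(wakeup_fd, &c, 1);  /* drain the wakeup byte */
        return -1;                     /* asked to bail out */
    }
    return 0;                          /* data ready on sockfd */
}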
>
> But I'm not sure why that would peg the CPU all the time, esp.
> considering most of the select() calls should actually return
> in only a couple seconds with data ready.
If it were working the way you describe (blocking in a select()
call for 1 second at a time), then it wouldn't be using a
measurable amount of CPU time.
> But what's there now works, and I have too much to work on
> with the impending Xfce 4.6 release to even think about it
> right now. In the meantime, I'm sure your CPU can spare
> 8-10 seconds worth of cycles every 10 minutes or so.
It's four instances each waking up once a minute. That's
probably tolerable as well, except now that I know why my other
programs are slowing down, I'd like to fix it. I'll let you
know what I find out.
--
Grant Edwards                   grante             Yow! The PILLSBURY DOUGHBOY
                                  at               is CRYING for an END to
                               visi.com            BURT REYNOLDS movies!!