[ic] Iterations slow with mv_matchlimit incrementations
emailgrant at gmail.com
Tue Jun 2 07:22:12 UTC 2009
>> >> >> I'm baffled by this. I have no idea why increasing mv_matchlimit
>> >> >> would drastically increase the amount of time required for *each*
>> >> >> loop iteration. Please let me know if you have any ideas.
>> >> >
>> >> > Is there any way you can post the relevant piece of code? Without
>> >> > knowing what's being iterated over, it's hard to offer suggestions.
>> >> > In particular, are there any parts which are perl blocks of any
>> >> > particular flavor (calc, calcn, perl, prefix-exec, etc)? Are there
>> >> > perhaps multiple nested loop constructs?
>> >> >
>> >> > Regards,
>> >> >
>> >> > David
>> >> Here's another illustration of my problem. I set up 15 [email] tags
>> >> throughout my code so I get an email whenever certain points are
>> >> reached. With ml=10 I get maybe 20 or so emails per second. With
>> >> ml=999999 I get much less than 1 email per second. My understanding
>> >> is that the first 10 iterations should take the same amount of time
>> >> in either scenario.
>> >> Does anyone know why IC would execute a single iteration at a
>> >> drastically slower rate, just because it has more total iterations to
>> >> execute?
>> >> My installation is a year or two old. Does this sound like a problem
>> >> an upgrade could fix?
>> > e-mail does not seem to be the most useful medium for performance
>> > testing. Servers can start holding e-mails, or other external
>> > influences can cause e-mails to arrive less frequently.
>> > Why not have 15 timestamps written to a logfile, so you can look at the
>> > data and see if you can find any trends? Is it that the 15 timestamps
>> > are increasing in interval equally?
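A checkpoint logger along those lines can be sketched in an Interchange [perl] block (a hypothetical example, not the poster's code: the logfile path and checkpoint label are assumptions, and Safe-compartment restrictions on some installations may require a global perl block or a usertag for the open() call):

```
[comment] checkpoint logger sketch; /tmp/timing.log is an assumed path [/comment]
[perl]
	my $log = '/tmp/timing.log';
	open my $fh, '>>', $log or return "cannot open $log: $!";
	print $fh time(), " checkpoint-1\n";
	close $fh;
	return '';
[/perl]
```

With one such block at each of the 15 points (varying the label), the intervals between log lines show directly where the slowdown occurs; time() has one-second resolution, so Time::HiRes would give finer timings if it is available inside the Safe compartment.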
>> Exactly, that seems to be the behavior. They increase in interval
>> equally.
> Next step: you have 500 lines of code on which you try to increase the
> matchlimit. Can you reduce the amount of code to a couple of lines and
> maintain the same behavior?
> Eventually you might be able to reduce it to a certain level where it starts
> to become possible to send it to the list ... Then on guru level it can be
> traced through the Interchange core code to see if there is anything that
> can be done to solve it :)
I've developed a workaround for this problem, but I'm really not sure
if it's a workaround or a solution. Here's a summary.
Each iteration of this is very fast:
Each iteration of this is about 50 times slower:
Each iteration of this is very fast (workaround):
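The original snippets did not survive in this archive, but the pattern being compared is a [loop] search whose only difference is the matchlimit. A hypothetical illustration (the table name and search term are invented; ml is the search abbreviation for mv_matchlimit, fi for the file/table, se for the search string):

```
[comment] fast: matchlimit 10 [/comment]
[loop search="fi=products/se=widget/ml=10"]
	[loop-code]: [loop-field description]
[/loop]

[comment] slow per-iteration: same search, matchlimit 999999 [/comment]
[loop search="fi=products/se=widget/ml=999999"]
	[loop-code]: [loop-field description]
[/loop]
```

(The workaround variant is left out, since the original message does not show what it changed.)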
So it seems like IC is getting bogged down when there are too many
matches in a loop search. Should that happen? Does it indicate a
problem somewhere in my system?
I tried many times to narrow the problem down to a certain section of
my "processing" code but I always got nowhere. I have the problem in
two separate loop searches of two different tables.