[ic] Iterations slow with mv_matchlimit incrementations

Gert van der Spoel gert at 3edge.com
Tue Jun 2 12:27:22 UTC 2009


> -----Original Message-----
> From: interchange-users-bounces at icdevgroup.org
> [mailto:interchange-users-bounces at icdevgroup.org] On Behalf Of Grant
> Sent: Tuesday, June 02, 2009 10:22 AM
> To: interchange-users at icdevgroup.org
> Subject: Re: [ic] Iterations slow with mv_matchlimit incrementations
> 
> >> >> >> I'm baffled by this.  I have no idea why increasing mv_matchlimit
> >> >> >> would drastically increase the amount of time required for *each*
> >> >> >> loop iteration.  Please let me know if you have any ideas.
> >> >> >
> >> >> > Is there any way you can post the relevant piece of code?  Without
> >> >> > knowing what's being iterated over, it's hard to offer suggestions.
> >> >> > In particular, are there any parts which are perl blocks of any
> >> >> > particular flavor (calc, calcn, perl, prefix-exec, etc)?  Are there
> >> >> > perhaps multiple nested loop constructs?
> >> >> >
> >> >> > Regards,
> >> >> >
> >> >> > David
> >> >>
> >> >> Here's another illustration of my problem.  I set up 15 [email] tags
> >> >> throughout my code so I get an email whenever certain points are
> >> >> reached.  With ml=10 I get maybe 20 or so emails per second.  With
> >> >> ml=999999 I get much less than 1 email per second.  My understanding
> >> >> is that the first 10 iterations should take the same amount of time
> >> >> in either scenario.
> >> >>
> >> >> Does anyone know why IC would execute a single iteration at a
> >> >> drastically slower rate, just because it has more total iterations
> >> >> to execute?
> >> >>
> >> >> My installation is a year or two old.  Does this sound like a
> >> >> problem an upgrade could fix?
> >> >
> >> > e-mail does not seem to be the most useful medium to do performance
> >> > tests.  Servers can start holding e-mails, or other external
> >> > influences can cause e-mails to arrive less frequently.
> >> >
> >> > Why not have 15 timestamps written to a logfile, so you can look at
> >> > this data and see if you can find any trends.  Is it that the 15
> >> > timestamps are increasing in interval equally?
> >>
> >> Exactly, that seems to be the behavior.  They increase in interval
> >> equally.
> >
> > Next step: you have 500 lines of code on which you try to increase the
> > ml=xxxxx.  Can you reduce the amount of code to a couple of lines and
> > maintain the same behavior?
> >
> > Eventually you might be able to reduce it to a certain level where it
> > starts to become possible to send it to the list ... Then on guru
> > level it can be traced through the Interchange core code to see if
> > there is anything that can be done to solve it :)
> >
> > CU,
> >
> > Gert
> 
> I've developed a workaround for this problem, but I'm really not sure
> if it's a workaround or a solution.  Here's a summary.
> 
> Each iteration of this is very fast:
> 
> [loop search="fi=products/st=db/ml=10/ra=1"]
>    processing
> [/loop]


I created a test page with the above code and ran it; it instantly printed
the word 'processing' 10 times.
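
Following up on the earlier suggestion to write timestamps to a logfile
instead of relying on e-mail delivery: here is a minimal sketch of how that
could look.  This is untested and assumes your catalog allows unrestricted
(global) Perl in [calc], that Time::HiRes is installed, and the log path
/tmp/loop-timing.log is just an example:

[loop search="fi=products/st=db/ml=10/ra=1"]
   [calc]
       use Time::HiRes qw(time);
       # append one high-resolution timestamp per iteration,
       # tagged with the current loop code
       open my $fh, '>>', '/tmp/loop-timing.log' or return '';
       print $fh sprintf("%.6f %s\n", time(), q{[loop-code]});
       close $fh;
       return '';
   [/calc]
   processing
[/loop]

Comparing the deltas between consecutive timestamps with ml=10 versus a
large ml should show directly whether each individual iteration really
gets slower.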

> Each iteration of this is about 50 times slower:
> 
> [loop search="fi=products/st=db/ml=999999/ra=1"]
>    processing
> [/loop]

I added 1,000,000 SKUs to my products database in a test environment, which
was not a good idea ;) I ran into memory/swap problems and the like, so I
don't have the resources to check that case.

However I managed to add over 10,000 items and do:
[loop search="fi=products/st=db/ml=9999/ra=1"]
   processing
[/loop]

This also printed the word 'processing' 9999 times pretty much instantly ...

 
> Each iteration of this is very fast (workaround):
> 
> [loop search="fi=categories/st=db/ml=999999/ra=1"]
>    [loop prefix=inside
>          search="fi=products/st=db/ml=999999/sf=category/se=[loop-code]/op=eq/nu=0"]
>       processing
>    [/loop]
> [/loop]

I do not have a categories table to test this ... But: 1) your categories
table probably has at most a few hundred to a thousand rows, so you can put
ml=999999999999999 on it and it won't make any difference.  You then feed
each category code to the inner loop, where you probably have 100-5000
results per category, so again the 999999 match limit never actually gets
reached ...

So your fast workaround still eventually returns all products, but it
breaks the result set up into pieces ... less data to handle at once ...
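
The same "less data at once" idea could be sketched without a categories
table by paging through the product matches manually.  This is untested, and
it assumes your IC version honors fm (mv_first_match) in loop searches:

[comment] first chunk of 1000 matches [/comment]
[loop search="fi=products/st=db/ml=1000/fm=0/ra=1"]
   processing
[/loop]

[comment] next chunk [/comment]
[loop search="fi=products/st=db/ml=1000/fm=1000/ra=1"]
   processing
[/loop]

[comment] ... and so on, 1000 matches per chunk [/comment]

If the per-chunk runs stay fast while a single ml=999999 run does not, that
would point at the size of the match set, not the iteration code, as the
bottleneck.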

Anyway, if you see a huge speed difference between ml=10 and ml=10000, it
could be your IC version (I've tested on 5.7.1).  But if 10 and 10000 are
similar in speed and the problem really only appears at 999999, then
perhaps you want to monitor your environment and check what happens on the
machine when you run the query (swap usage etc.).

I also still do not understand why for you it apparently runs as:
processing <long break> processing <long break> processing <long break>

For me it 'thinks' for a while and then puts the whole blob of 'processing'
output on screen at once.

 
> So it seems like IC is getting bogged down when there are too many
> matches in a loop search.  Should that happen?  Does it indicate a
> problem somewhere in my system?
> 
> I tried many times to narrow the problem down to a certain section of
> my "processing" code but I always got nowhere.  I have the problem in
> two separate loop searches of two different tables.
> 
> - Grant
> 
> _______________________________________________
> interchange-users mailing list
> interchange-users at icdevgroup.org
> http://www.icdevgroup.org/mailman/listinfo/interchange-users



