[ic] Interchange using lots of cpu...

Mike Heins interchange-users@icdevgroup.org
Thu Dec 19 10:46:00 2002


Quoting gordon@nextrieve.com (gordon@nextrieve.com):
> Mike Heins <mike@perusion.com> replied:
> > 
> >     http://www.icdevgroup.org/i/dev/docfly.html?mv_arg=icfaq14.02
> 
> Thanks -- I'd already found that page which gave valuable pointers on
> what to do to avoid overall efficiency problems.  I'm using its
> [benchmark] code to get my benchmarks, which was very helpful.  I
> didn't want to reach its final stage of simply using [query] to
> get the results, then writing all the list output in perl.  I
> might have to do that though. :(
> 
> > Besides the FAQ I posted, you are calling at least three Perl interpreters
> > per result (the [if scratch foo eq bar] stuff). With lots of results, this
> > will result in unacceptable processor use.
> > 
> > Also, since you haven't mentioned it, one wonders at the status of the
> > indexes in your database.
> 
> All good points, but the main problem here is that the loop runs
> acceptably quickly *per iteration* with small result sets.  When the
> result set gets bigger, the loop initially runs unacceptably slowly
> *per iteration* and gradually speeds up to its previous speed as
> results are output.

Exactly the point. That is why you want to do all of your processing
in one pass for large lists, without post-processing via the [if
scratch ...] route. There *is* a lot of copying going on, and the more
Perl interpreters you spawn and the more re-interpolation you do, the
worse it gets.

Use [if-loop-param foo] go/no-go tests if you want it to perform better,
or code it as a [loop-sub foo] and [loop-exec foo] combination. That
will cut down on the copying in the post-loop reparse for ITL tags.
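
Roughly, that means letting the loop's own conditional sub-tags do the
go/no-go test instead of juggling scratch values per row. This is only
a sketch -- the table, fields, and search terms (products, on_sale,
widget) are made up, and it is untested -- but it shows the shape:

   [loop search="
           fi=products
           st=db
           se=widget
           sf=description
           rf=sku,description,on_sale
   "]
       [comment] go/no-go test handled in the loop's own pass --
                 no [if scratch ...], no extra Perl per row [/comment]
       [if-loop-param on_sale]
           [loop-param sku]  [loop-param description]
       [/if-loop-param]
   [/loop]

If the test is more involved than a simple yes/no, define it once with
[loop-sub] and call it per row with [loop-exec] rather than interpolating
a fresh [perl] or [calc] block for every result; check the tag reference
for exactly what gets passed to the sub for your list type.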

The bottom line is that large result sets need to be processed
efficiently. As I said, your code is not doing that -- believe me if
you will.
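
And if you do end up at the FAQ's final stage -- [query] the results and
write the list output in Perl -- the usual shape is a single [perl]
block that runs the query once and assembles the whole list in one pass.
Again just a sketch, with made-up table and column names:

   [perl tables=products]
       # one query, one pass over the rows, one string returned --
       # no per-row tag interpolation at all
       my $db  = $Db{products};
       my $ary = $db->query(q{
           select sku, description, price
           from   products
           where  on_sale = 1
       });
       my $out = '';
       for my $row (@$ary) {
           my ($sku, $desc, $price) = @$row;
           $out .= "<tr><td>$sku</td><td>$desc</td><td>$price</td></tr>\n";
       }
       return $out;
   [/perl]

That keeps all the work in one compiled block instead of spawning
interpreters and re-interpolating output for every row.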

-- 
Mike Heins
Perusion -- Expert Interchange Consulting    http://www.perusion.com/
phone +1.513.523.7621      <mike@perusion.com>

Fast, reliable, cheap.  Pick two and we'll talk.  -- unknown