[ic] Performance testing: timer variable?

cfm@maine.com
Fri, 9 Mar 2001 23:43:48 -0500


On Fri, Mar 09, 2001 at 03:05:43PM -0800, Dan B wrote:
> At 08:17 AM 3/9/2001 -0500, you wrote:
> >On Fri, Mar 09, 2001 at 02:33:58AM -0800, Dan B wrote:
> > > I am doing some performance testing/debugging, and would love to hear
> > > comments from others about this.
> > >
> > > A) Testing the load time (seconds) for an entire page
> > >          Is 'time wget $URL' a good indicator?
> > >
> > > B) Testing a specific block of code?
> > >          What kind of [perl] would you put in front and in back to take a
> > > before and after time sample, subtract them, and give an "x.xxxx seconds
> > > passed" result?  Or another way altogether?
> > >
> > > C) I'm also wondering how much of a difference there is between [query] and
> > > direct SQL querying (i.e. how much does the overhead of perl, DBI, IC
> > > amount to).
> >
> >It's not clear what you want to know or what you define as "good performance"
> >in your situation.  You need to define that first.
> 
> Good point.  I'm looking for ways to measure "page display times" (when 
> network isn't an issue).  As far as "good performance", here's the 
> background info:
> For hardware, we've got:
>          2-way Xeon 1 GHz web server (1 GB RAM)
>          4-way Xeon database server (PostgreSQL) with 4 GB RAM
>          EMC CLARiiON Fibre Channel SAN array
> 
> With plans to scale out the web servers and do layer-4 www clustering (and 
> eventually try the PostgreSQL clustering code).
> 
> For my application, good performance is sub-second page displays no matter 
> the concurrency load (1 concurrent connection or 1000+ concurrent).
> 
> Basically I'm worried about performance that I can't fix by throwing more 
> hardware at the problem.  I need a good way to test the performance, and 
> that's what I'm hoping you guys can help me discover. :)  So it seems that 
> it would be very valuable to test the amount of time that it takes to 
> execute a given block of code in an .html page so that I can find what areas 
> need tuning the most.
> 
> But since you've piqued my interest, what are the other metrics (or 
> dimensions) for "performance" testing on an Interchange website?  (KB 
> transmitted per HTTP response?  processor utilization maybe?  low/high 
> concurrency? etc.?).

Setting the reference platform is the first step.  Sometimes it is a 
laptop, rarely a high-end Mac workstation.  Usually it is a $1000 
off-the-shelf windoze system from Staples on a dialup, running IE, 
Netscape, or AOL.  We do everything on linux so that is covered, and 
we check with a Mac when we can get it working.  The metric is simple 
page render time.
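
A first cut at that number: fetch the page in a loop and average, 
rather than a single 'time wget'.  A rough sketch using LWP and 
Time::HiRes (both on CPAN; the URL and fetch count are placeholders 
for your own):

    #!/usr/bin/perl -w
    # Average the wall-clock time of $n fetches of one page.
    use strict;
    use LWP::UserAgent;
    use HTTP::Request;
    use Time::HiRes qw(gettimeofday tv_interval);

    my $url = shift || 'http://localhost/cgi-bin/store/index.html';
    my $n   = 10;
    my $ua  = LWP::UserAgent->new;

    my $total = 0;
    for (1 .. $n) {
        my $t0  = [gettimeofday];
        my $res = $ua->request(HTTP::Request->new(GET => $url));
        die $res->status_line, "\n" unless $res->is_success;
        $total += tv_interval($t0);
    }
    printf "%d fetches: %.4f seconds average\n", $n, $total / $n;

That times only the HTML fetch -- no images, no browser render -- so 
treat it as a floor, not what the customer sees.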

We want a 3-second render time.  Yeah, ugly.  Do we get there 
with everything?  No way.  I don't know where you will get sub-second 
page displays in any credible testing environment.

minivend is **never** the issue.  Poor performance is almost always poor 
design or poor concept; the rest of the time it's not enough RAM.
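
To find where a slow page spends its time (your B), wrap the suspect 
block with Time::HiRes right in the [perl] tag.  Untested sketch; Safe 
won't load modules for you, so you may need global=1 (or whatever your 
minivend/IC version wants) or Time::HiRes preloaded in the daemon:

    [perl global=1]
        use Time::HiRes;
        my $start = Time::HiRes::time();

        # ... the block you suspect ...

        # last expression is what lands in the page
        sprintf "block took %.4f seconds", Time::HiRes::time() - $start;
    [/perl]

You can read the timing right off the rendered HTML.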

FWIW, my experience is that performance is relatively insensitive
to the hardware you throw at it; generally our systems chug 
along under a 0.1 load average; that will spike without bound when we 
make a **mistake**.  Half the hardware or twice as much would 
not make a difference when a few robots chew into badly formed
queries all at once.
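
On (C), the cheap test is the same SQL straight through DBI, compared 
against the [query] time from a wrapped block like the one above.  A 
sketch, assuming DBD::Pg; the DSN, login, and table are made up, so 
substitute your own:

    #!/usr/bin/perl -w
    # Time one query through raw DBI, no minivend in the path.
    use strict;
    use DBI;
    use Time::HiRes qw(gettimeofday tv_interval);

    my $dbh = DBI->connect('dbi:Pg:dbname=store', 'user', 'pass',
                           { RaiseError => 1 });
    my $t0   = [gettimeofday];
    my $rows = $dbh->selectall_arrayref('select * from products');
    printf "raw DBI: %d rows in %.4f seconds\n",
           scalar @$rows, tv_interval($t0);
    $dbh->disconnect;

The gap between that number and the [query] number is roughly what 
the perl/DBI/IC overhead costs you on that query.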

cfm

-- 

Christopher F. Miller, Publisher                             cfm@maine.com
MaineStreet Communications, Inc         208 Portland Road, Gray, ME  04039
1.207.657.5078                                       http://www.maine.com/
Content management, electronic commerce, internet integration, Debian linux