Average: Mean versus Median

Written by Ingmar Verheij on July 5th, 2011. Posted in Performance testing

The average of a set of numbers is commonly used, but the definition of average is poorly understood, which raises the risk of it being manipulated.

The average of a set of numbers can be determined with the mean or the median. To better understand the difference between mean and median I will explain the definition of both and illustrate it with examples.
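As a quick illustration, here is a minimal sketch using Python's standard statistics module (the response times are made-up example numbers, not measurements from this article):

import statistics

# Response times in milliseconds; the 900 ms sample is an outlier.
response_times = [95, 100, 105, 110, 900]

print(statistics.mean(response_times))    # 262.0 - pulled up by the outlier
print(statistics.median(response_times))  # 105   - middle of the sorted list

The single outlier pulls the mean far above the typical value, while the median still reflects what most samples look like. This is exactly why the choice between the two can be used to steer a conclusion.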

The impact of Silverlight on a virtual desktop

Written by Ingmar Verheij on June 24th, 2011. Posted in Desktop Virtualization, Performance testing, XenApp (Presentation Server)

A customer has a (virtualized) Citrix XenApp farm scaled for 1500 concurrent users. The environment is based on Windows Server 2003 Standard with Citrix XenApp 5 (migration to XenApp 6.x is scheduled).

A business-critical web application has been redeveloped and now requires Microsoft Silverlight. Before implementing Silverlight, the impact should be determined. A big concern is a decrease in the capacity of the farm, i.e. the number of concurrent users the farm can handle.

To determine the impact of Silverlight I’ve done some tests, including a LoadTest.

"true" client side testing best practices

Written by Ingmar Verheij on June 9th, 2011. Posted in Performance testing

When performing a LoadTest, user actions are simulated. This means that mouse and keyboard actions are executed by a script, based on a scenario, and that the script waits for a response on the screen.

The response on the screen can be determined using APIs that give information about the windows present, or about the controls on those windows. For instance: the script waits until a window with the caption “Microsoft Word” is active.

Another way of determining whether a response has been given is by comparing the content of the screen with a bitmap. For instance: the script waits until an empty document is displayed in Microsoft Word.

The difference between the two techniques is that a window caption is present right when the application is launched (even if the application is still loading), while the content on the screen is more similar to the way users interact in a session. Looking at a screen region is therefore more accurate: it prevents assumptions (best practice #9 in loadtesting best practices) like “how much time should we wait between launching an application and clicking on a menu?”.
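To make the two techniques concrete, here is a minimal sketch in Python, assuming a Windows session: the caption-based wait calls the Win32 FindWindowW API through ctypes, and the region-based wait compares a screen region against a reference bitmap using the Pillow library. The caption, region coordinates, and reference file (empty_document.png) are placeholder examples, not part of any real test script.

import time
import ctypes
from PIL import Image, ImageGrab, ImageChops

def wait_for_caption(caption, timeout=30):
    # Poll until a top-level window with this exact caption exists.
    # This succeeds as soon as the window is created, even if the
    # application behind it is still loading.
    deadline = time.time() + timeout
    while time.time() < deadline:
        if ctypes.windll.user32.FindWindowW(None, caption) != 0:
            return True
        time.sleep(0.2)
    return False

def wait_for_region(reference_path, bbox, timeout=30):
    # Poll until the screen region matches a previously captured bitmap.
    # The reference image must have the same size as the bbox region.
    reference = Image.open(reference_path).convert("RGB")
    deadline = time.time() + timeout
    while time.time() < deadline:
        screen = ImageGrab.grab(bbox=bbox).convert("RGB")
        if ImageChops.difference(screen, reference).getbbox() is None:
            return True  # no differing pixels: the expected content is drawn
        time.sleep(0.2)
    return False

# The caption-based wait fires early; the region-based wait only fires
# once the empty document is actually visible on screen.
wait_for_caption("Microsoft Word")
wait_for_region("empty_document.png", bbox=(0, 100, 400, 300))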

In this article I will discuss some best practices about “true” client side testing (best practice #12 in loadtesting best practices).

Validating design for virtualized branch office server

Written by Ingmar Verheij on May 30th, 2011. Posted in Performance testing

Recently a large system integrator asked me to validate a design they had made for a customer. Their customer has around 100 branch offices in Europe and requested a new infrastructure that would be managed by the system integrator.

Although the design in general had been validated (the building process started months ago), the scaling was based on estimates. In fact, some assumptions were made during the design phase. With the deadline coming closer, more doubts arose about the scaling.

Assignment

We agreed to perform a loadtest to simulate the user actions, validate the design, and find bottlenecks before the implementation. A nominal load of 100 users was required.

Secondly, the impact of a System Center Configuration Manager (SCCM) deployment on the overall performance had to be determined. Would there be an impact on the file and print capabilities, and how big would that impact be?

Loadtesting best practices – Part 2

Written by Ingmar Verheij on May 25th, 2011. Posted in Performance testing

This is the second part in a series of two about loadtesting best practices.

The first part focused on the “basics” of loadtesting, most of which were about preparation. You can find the first part here.

In this second part I’ll focus on some more advanced topics which are useful in a later stage of the process.

 
