A Better Way of Reporting Performance Test Results

Good reporting of test results is one of the holy grails of our profession. If done correctly, it improves the project's quality and helps us focus on the actual issues. But if done badly, it adds confusion and reduces the value that testers bring.

Reporting the results of functional tests is relatively simple because these tests have a clear pass or fail result. Reporting the results of performance testing is much more nuanced.

Let's start with a definition: For the purpose of this guide, I use the term performance test to mean any test that takes a measurement for which a range of numerical values is considered an acceptable result. It may be a measurement of power consumption, the number of users a website serves in parallel, the rate at which data can be read from a disk, and so on; in short, any measurement of a nonfunctional requirement.

The first challenge in performance testing is deciding what is considered a "pass." This is often neglected at the requirements definition stage. I've seen many requirements that read something like, "Data extraction time from the database shall be under 10 ms," or "The rate of processing a movie file shall be at least 100 frames per second (fps)." Such requirements are incomplete, since they do not include the actual target we want to hit. We only know the worst result we are willing to tolerate and still approve the product. There are two issues here.

First, let's assume I ran a test and found that video file processing is done at a rate of 101 fps (recall that the requirement was "at least 100 fps"). Sounds great, right? But does this mean we are close to the edge (that is, the product barely meets the requirement), or is everything fine? If the requirement were well defined, it would contain both the goal and the minimum: for instance, goal: 120 fps; minimum: 100 fps. With such a requirement, a result of 101 fps clearly indicates that the product barely meets the requirement.

Second, when a test fails by a small margin (e.g., 99 fps), the product manager is under pressure to be "flexible" and accept the product as is. How often have we heard, "True, we are below the minimum, but we are so close to passing that we can call it fine"? If the full requirement were available (goal: 120 fps), it would be clear how far the result is from the target and that the product has a real problem.
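Encoding the two-level requirement makes this unambiguous. Here is a minimal Python sketch (the function name and thresholds are my own illustration, not taken from any real spec) that classifies a result against a minimum and a goal:

```python
def classify(result, minimum, goal, higher_is_better=True):
    """Classify a measurement against a minimum (the pass/fail line)
    and a goal (where we actually want to be).

    Names and thresholds here are illustrative, not from a real spec.
    """
    if not higher_is_better:
        # For "lower is better" metrics (e.g., latency), negate all
        # values so the same comparisons apply.
        result, minimum, goal = -result, -minimum, -goal
    if result < minimum:
        return "FAIL"      # below the worst result we agreed to tolerate
    if result < goal:
        return "MARGINAL"  # passes, but short of where we want to be
    return "PASS"          # meets or exceeds the goal

# 101 fps against minimum 100 and goal 120: technically passing, but marginal.
print(classify(101, minimum=100, goal=120))  # MARGINAL
print(classify(99, minimum=100, goal=120))   # FAIL
print(classify(125, minimum=100, goal=120))  # PASS
```

With both thresholds in the spec, "101 fps" stops being a debate and becomes a defined state: passing, but marginal.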

For the sake of completeness, I will mention that a nonfunctional requirement should specify not just the target and the minimum, but also the test method, because the test method influences the results. For example, when measuring CPU utilization, the results can vary significantly depending on how we perform the measurement. Do we take the maximum value measured? Over how long a period? Do we average samples? How many samples per second? What is running on the CPU in parallel to our test?
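As a sketch of why the method matters, here is a short Python example (it uses the third-party psutil library, and the sample count and interval are arbitrary assumptions, not recommendations) that reports both the average and the peak of a series of CPU utilization samples; a requirement can pass under one definition and fail under the other:

```python
import psutil  # third-party library: pip install psutil

def sample_cpu(samples=30, interval=0.5):
    """Take `samples` CPU utilization readings, each averaged over
    `interval` seconds. Both parameters are arbitrary here; a real
    requirement should pin them down, because they change the result."""
    readings = [psutil.cpu_percent(interval=interval) for _ in range(samples)]
    return sum(readings) / len(readings), max(readings)

average, peak = sample_cpu()
print(f"average: {average:.1f}%   peak: {peak:.1f}%")
# "CPU utilization shall be under 60%" can pass on the average
# and fail on the peak of the very same run.
```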

In theory, reporting performance test results should not be an issue at all: just present the results and indicate a pass or fail. But again, we do not just want to know the result; we want an idea of how the result relates to the target. Crafting a report that is not overly complex but still delivers a comprehensive picture of the status is a balancing act.

We can use a table:
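For illustration, such a table might look like this (the frame rate line uses the numbers from the example above; the other rows are made up):

Measurement                    Minimum   Goal   Result   Pass/Fail
Frame processing rate (fps)    100       120    101      Pass
Data extraction time (ms)      10        5      4.2      Pass
CPU utilization (%)            80        60     52       Pass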

But because most products have many performance requirements, we will end up with a huge table filled with numbers. It'll be hard to quickly see where there's a problem. We could use color to improve readability:

But this brings up more questions. Does it make sense that the frame processing rate and CPU utilization get the same color code? One is practically failing, while the other is well within the acceptable range. So perhaps we should color the frame processing rate red? But then what color would we use for an outright failure? And where would we draw the line between green and yellow? Not to mention the problems color coding creates for people with color blindness.

I was thinking about this issue when my doctor sent me for my yearly blood test, which I do meticulously, about every three years. Anyhow, the results from the laboratory included a list of dozens of numbers displayed in this format:
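Schematically, each line looked something like this (the names and values are made up, but the format is the point):

Glucose (mg/dL)        92     70  |--------*--------|  100
Hemoglobin (g/dL)      13.1   13  |*----------------|  17
Cholesterol (mg/dL)    212   125  |-----------------|  200   * (above range)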

Even though I'm not a physician, I could immediately tell which results were fine, which were marginal, and which were something I should discuss with my doctor.

A light bulb went on in my head: Why not use this method for reporting performance tests? I took a few data points and experimented with PowerPoint:

Notice that I still use colors, but the axis explains the choice of color and shows, in a color-independent way, where higher is better and where lower is better. The reader can clearly see the position of each measurement within the allowed range; the colors serve mainly to focus attention where there is trouble. Creating such a report takes some time, but it could be automated.
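As a hint of how the automation could look, here is a minimal text-only Python sketch (metric names and numbers are illustrative) that draws each result on an axis running from the minimum to the goal:

```python
def render_axis(name, result, minimum, goal, unit, width=40):
    """Render one measurement on a text axis from minimum to goal.

    Assumes goal != minimum. The axis always runs from minimum (left)
    to goal (right), so 'better' is always rightward, regardless of
    whether the raw metric is higher-is-better or lower-is-better.
    """
    # Position of the result along the axis, clamped to [0, 1].
    pos = (result - minimum) / (goal - minimum)
    pos = max(0.0, min(1.0, pos))
    marker = int(pos * (width - 1))
    axis = "".join("*" if i == marker else "-" for i in range(width))
    return f"{name:<26}{result:>8} {unit:<4} min |{axis}| goal"

# Illustrative data: (name, result, minimum, goal, unit).
# For extraction time, minimum=10 and goal=5, so lower results land right.
rows = [
    ("Frame processing rate", 101, 100, 120, "fps"),
    ("Data extraction time", 4.2, 10, 5, "ms"),
    ("CPU utilization", 52, 80, 60, "%"),
]
for row in rows:
    print(render_axis(*row))
```

In output along these lines, the marginal frame rate stands out immediately: its marker sits right next to the minimum, while the other markers sit at the goal end.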

I haven't yet seen this idea implemented in a real project (I am still working on that), but if you do use it, I'd be delighted to hear about your experience and the response from your organization.
