
Test Metrics or Testing Metrics

July 28, 2017

Before looking at what Test Metrics (or Testing Metrics) are, it is important to know what Software Metrics are; this gives a better understanding of Test Metrics.

What is Software Metrics

In layman's terms, in testing, or more broadly in project management, there has to be some estimation of the time needed to build a project and of the tasks and activities to be performed while building it. These estimates are derived from the software metrics of past projects, which help in estimating present and future projects and in controlling deviation from the project plan. There is a famous statement by Tom DeMarco: "You can't control what you can't measure." This shows how important software metrics are for project management.

 

Test Metrics Lifecycle

Analysis

  • Identify the metrics to use.
  • Define the identified metrics.
  • Define parameters for evaluating the identified metrics.

Communication

  • Explain the need for the metrics to stakeholders and the testing team.

Evaluation

  • Capture the data.
  • Verify the data.
  • Calculate particular metrics value using the data captured.

Reporting

  • Develop the report with an effective conclusion.
  • Distribute the report to the stakeholders and their representatives.
  • Take feedback from the stakeholders.

 

Why Test Metrics

  • Managing with metrics allows us to manage testing with facts, which helps in proper estimation.
  • Subjective, uninformed opinions are not an appropriate basis for decisions.
  • What sounds reasonable can turn out to be wrong once measured.

 

Manual Test Metrics

  • Test Case Productivity
  • Test Execution Summary
  • Defect Acceptance
  • Defect Rejection
  • Defect Leakage
  • Bad Fix Defect
  • Test Execution Productivity
  • Test Efficiency
  • Defect Severity Index

 

Common Metrics

  • Effort Variance
  • Schedule Variance
  • Scope Change

 

Manual Test Metrics

 

1. Test Case Productivity

Test Case Productivity, also known as Test Case Design Productivity, is defined as the ratio of the total number of test steps to the total effort, in hours, spent designing the test cases. The higher the test case productivity, the more test steps are prepared in less time. This information helps in future estimation of test case design effort. It is measured in test steps/hour, and the formula is given below:

Test Case Productivity = (Number of Test Steps Prepared) / (Effort spent for Test Case Preparation)            

Example

Test Case Name    No. of Test Steps
TC01              30
TC02              32
TC03              40
TC04              36
TC05              45
Total             183

 

Now, if the effort taken to write these 183 steps is 8 hours:

Test Case Productivity = 183 / 8 = 22.875 ≈ 23 steps/hour
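
As an illustration, a minimal Python sketch of this calculation, using the sample step counts from the table above, could look like this:

    # Test Case Productivity = total test steps / hours spent preparing them
    steps_per_test_case = {"TC01": 30, "TC02": 32, "TC03": 40, "TC04": 36, "TC05": 45}
    preparation_hours = 8

    total_steps = sum(steps_per_test_case.values())   # 183
    productivity = total_steps / preparation_hours    # 22.875
    print(f"Test Case Productivity: {round(productivity)} steps/hour")  # 23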

We can calculate Test Case Productivity for each month to get a Test Case Productivity Trend and see whether productivity is increasing or decreasing over a certain period of time. A sample graph showing Test Case Productivity is given below:

2. Test Execution Summary

The Test Execution Summary report contains the status of all test cases executed in a certain period of time or for a particular release. It is generally prepared weekly, as part of a weekly status report, or at the end of the release. The status of a test case may be Pass, Fail, or Not Executed, and the report may be an Excel sheet or a graph. A sample Test Execution Summary is shown below:
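
A minimal Python sketch of how such a summary could be tallied (the statuses below are hypothetical, not taken from any real report):

    from collections import Counter

    # Hypothetical execution results for a weekly status report
    results = ["Pass", "Pass", "Fail", "Not Executed", "Pass", "Fail"]

    summary = Counter(results)
    total = len(results)
    for status, count in summary.items():
        print(f"{status}: {count} ({count / total:.0%})")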

3. Defect Acceptance

When the QA team finds a defect, it is sent to the development team, which either validates or rejects it. A defect becomes a valid defect only after the development team validates it, so not all defects reported by the QA team turn out to be valid. Defect Acceptance, measured as the Defect Acceptance Ratio, denotes the percentage of valid defects and is calculated with the formula given below:

Defect Acceptance Ratio = (Number of Valid Defects Accepted by Development Team / Total Number of Defects Reported by Test Team) × 100 %

Defect Acceptance can be compared on a monthly basis to get the Defect Acceptance Trend. A sample Defect Acceptance Trend is shown below.

4. Defect Rejection

When the QA team finds a defect, it is sent to the development team, which either validates or rejects it. A defect rejected by the development team becomes an invalid defect, so not all defects reported by the QA team turn out to be valid. Defect Rejection, measured as the Defect Rejection Ratio, denotes the percentage of invalid defects and is calculated with the formula given below:

Defect Rejection Ratio = (Number of Invalid Defects Rejected by Development Team / Total Number of Defects Reported by Test Team) × 100 %

Defect Rejection can be compared on a monthly basis to get the Defect Rejection Trend. A sample Defect Rejection Trend is shown below:
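
Since the acceptance and rejection ratios are complementary, one small Python sketch can compute both; the defect counts here are hypothetical:

    # Defect Acceptance / Rejection Ratios (hypothetical counts)
    reported_defects = 200   # defects reported by the test team
    accepted_defects = 170   # defects validated by the development team

    rejected_defects = reported_defects - accepted_defects
    acceptance_ratio = accepted_defects / reported_defects * 100   # 85.0
    rejection_ratio = rejected_defects / reported_defects * 100    # 15.0
    print(f"Acceptance: {acceptance_ratio:.1f}%, Rejection: {rejection_ratio:.1f}%")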

5. Defect Leakage

Defect Leakage covers the defects that are not detected during testing and are found only after the application moves to production. It is calculated as the Defect Leakage Ratio, the formula for which is given below.

Defect Leakage Ratio = (Number of Defects Left Undetected Until Production / Number of Valid Defects Reported by Testers) × 100 %

If the customer finds 21 defects post-release and testers reported 250 valid defects, then Defect Leakage Ratio = 21 / 250 × 100 = 8.4%.
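
A quick Python check of the example above:

    # Defect Leakage Ratio, using the figures from the example above
    production_defects = 21   # defects found by the customer post-release
    tested_defects = 250      # valid defects reported by testers

    leakage_ratio = production_defects / tested_defects * 100
    print(f"Defect Leakage Ratio: {leakage_ratio:.1f}%")   # 8.4%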

6. Bad Fix Defect

Whenever a defect is found by the QA team, it is sent to the development team for fixing. After validating the defect, the development team fixes it and sends it back to the testing team for regression testing. During regression testing it is often found that, although the reported defect is fixed, new defects have appeared in related features because of a bad fix. This should be minimized. The Bad Fix Defect ratio is calculated as below:

Bad Fix Defect = (Total Number of Bad Fix Defects / Total Number of Defects Validated by Development Team) × 100 %

Bad Fix Defect can be compared on a monthly basis to get the Bad Fix Defect Trend. A sample Bad Fix Defect trend is shown below:
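
A minimal Python sketch of this calculation, with hypothetical counts:

    # Bad Fix Defect ratio (hypothetical counts)
    bad_fix_defects = 6       # new defects introduced by defect fixes
    validated_defects = 120   # defects validated by the development team

    bad_fix_ratio = bad_fix_defects / validated_defects * 100
    print(f"Bad Fix Defect: {bad_fix_ratio:.1f}%")   # 5.0%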

7. Test Execution Productivity

Test Execution Productivity is defined as the ratio of the total number of test cases executed to the total effort taken to complete test execution. It is measured in Test Cases/Person Hour or Test Cases/Person Day. Test execution effort includes the total effort for execution, including re-testing and test-result review.

Test Execution Productivity = (Total Number of Test Cases Executed) / (Total Effort Taken to Complete Test Execution)

Resource       No. of Test Cases Executed
Associate 1    20
Associate 2    10
Associate 3    15
Associate 4    15
Associate 5    20
Total          80

Effort spent per resource: 8 hours; Test Case Execution Productivity: 10 test cases/person day.

Here the total number of test cases executed is 80, the effort spent per resource is 8 hours, and the total effort is 40 hours.

So, Test Execution Productivity

= 80 / 8 = 10 Test Cases/Person Day (using the 8 hours spent per resource)

= 80 / 40 = 2 Test Cases/Person Hour (using the total effort of 40 hours)
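
A small Python sketch reproducing the two calculations above from the table data:

    # Test Execution Productivity, reproducing the article's two calculations
    cases_per_associate = {"Associate 1": 20, "Associate 2": 10,
                           "Associate 3": 15, "Associate 4": 15, "Associate 5": 20}
    hours_per_associate = 8   # effort spent per resource

    total_cases = sum(cases_per_associate.values())               # 80
    total_hours = hours_per_associate * len(cases_per_associate)  # 40

    print(total_cases / hours_per_associate, "test cases/person day")   # 10.0
    print(total_cases / total_hours, "test cases/person hour")          # 2.0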

Test Execution Productivity can be compared on a monthly basis to get the Test Execution Productivity Trend. A sample Test Execution Productivity graph is shown below:

 

8. Test Efficiency or Defect Removal Efficiency

Test Efficiency is the ratio of defects found during testing to the total number of defects found during testing plus those found during User Acceptance Testing (Test Defects + Acceptance Defects). The formula for calculating Test Efficiency is given below:

Test Efficiency = [DT / (DT + DU)] × 100 %

Where,

DT = number of valid defects identified during testing.

DU = number of valid defects identified by the end user after the release of the application.
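
A minimal Python sketch, with hypothetical defect counts:

    # Test Efficiency (Defect Removal Efficiency), hypothetical counts
    dt = 180   # valid defects found during testing
    du = 20    # valid defects found by end users after release

    test_efficiency = dt / (dt + du) * 100
    print(f"Test Efficiency: {test_efficiency:.1f}%")   # 90.0%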

Test Efficiency can be compared on a monthly basis to get the Test Efficiency Trend. A sample Test Efficiency Trend graph is shown below:

9. Defect Severity Index

Defect Severity Index measures the average severity of the defects found in the software application. The formula for calculating the Defect Severity Index is given below:

Defect Severity Index = Σ (Number of Defects at Each Severity Level × Severity Value) / (Total Number of Defects)

A unique number is assigned to each severity level, with the highest value given to the most severe defects: 4 for a Critical defect, 3 for a Major defect, 2 for a Medium defect, and 1 for a Minor defect. The higher the Defect Severity Index, the lower the quality of the application under test. The unit of measurement is a plain number.
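
A minimal Python sketch of this weighted average, using hypothetical defect counts per severity level:

    # Defect Severity Index: weighted average of severity values
    severity_weights = {"Critical": 4, "Major": 3, "Medium": 2, "Minor": 1}
    defect_counts = {"Critical": 5, "Major": 12, "Medium": 20, "Minor": 13}  # hypothetical

    weighted_sum = sum(defect_counts[s] * severity_weights[s] for s in defect_counts)
    total_defects = sum(defect_counts.values())
    print(f"Defect Severity Index: {weighted_sum / total_defects:.2f}")   # 2.18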

Defect Severity Index can be compared on a monthly basis to get the Defect Severity Index Trend. A sample Defect Severity Index Trend analysis is shown below.

 

Common Metrics for all types of testing

10. Effort Variance (EV)

Effort Variance (EV) denotes the variance of actual effort from planned (estimated) effort. It is calculated as shown below:

Effort Variance = [(Actual Effort − Estimated Effort) / Estimated Effort] × 100 %

In the sample graph, the Estimated Effort is 100 and the Actual Effort is 105; though not perfect, this is considered a good estimate. Had the actual effort been 95 instead of 105, the estimate would be considered a bad one.

11. Schedule Variance (SV)

Schedule Variance is used as an indicator of whether the project is meeting its deadlines and keeping to schedule or lagging behind. It is calculated as shown below:

Schedule Variance = [(Actual Number of Days − Estimated Number of Days) / Estimated Number of Days] × 100 %

We can also build a trend-analysis graph based on estimated days and actual days.

12. Scope Changes (SC)

Scope Change is an agreed-upon increase or decrease of scope by stakeholders such as the client, project managers, and other related people, with corresponding adjustments to cost, budget, timeline, etc.

Scope Change = [(Total Scope − Previous Scope) / Previous Scope] × 100 %
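
All three common metrics share the same (actual − baseline) / baseline shape, so a single Python helper can illustrate them; the schedule and scope figures below are hypothetical, while the effort figures come from the example above:

    def percentage_variance(actual, baseline):
        """Generic (actual - baseline) / baseline variance, expressed in percent."""
        return (actual - baseline) / baseline * 100

    print(percentage_variance(105, 100))  # Effort Variance from the example: 5.0
    print(percentage_variance(55, 50))    # Schedule Variance (hypothetical days): 10.0
    print(percentage_variance(120, 100))  # Scope Change (hypothetical scope units): 20.0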

Overall, we can say that testing metrics not only help in tracking testing status and the quality of the application but also help in estimating testing activities for future projects.

About the author

Ram Prakash Singh

Ram Prakash has worked in various domains of testing, including security, performance, and automation testing, with tools such as QTP, Selenium, LoadRunner, JMeter, VSTS Coded UI, SoapUI, and Burp Suite.
