
Monday, July 29, 2013

Lists of Objects Frequently Used in QTP & VBScript

Here are some objects that are frequently used in VBScript:

Set objEmail = CreateObject( "CDO.Message" )
Set objIE = CreateObject( "InternetExplorer.Application" )
Set objInet = CreateObject( "InetCtls.Inet.1" )
Set objHTTP = CreateObject( "WinHttp.WinHttpRequest.5.1" )
Set objExcel = CreateObject( "Excel.Application" )
Set objExcelSheet = CreateObject( "Excel.Sheet" )
Set objOutlook = CreateObject( "Outlook.Application" )
Set objPpt = CreateObject( "PowerPoint.Application" )
Set objWord = CreateObject( "Word.Application" )
Set objCal = CreateObject( "MSCAL.Calendar" )
Set objQPro = CreateObject( "QuattroPro.PerfectScript" )
Set objWP = CreateObject( "WordPerfect.PerfectScript" )
Set objConn = CreateObject( "ADODB.Connection" )
Set objRecSet = CreateObject( "ADODB.Recordset" )
Set objDic = CreateObject( "Scripting.Dictionary" )
Set objFSO = CreateObject( "Scripting.FileSystemObject" )
Set wshNetwork = CreateObject( "WScript.Network" )
Set wshShell = CreateObject( "WScript.Shell" )
Set objRandom = CreateObject( "System.Random" )
Set objArrList = CreateObject( "System.Collections.ArrayList" )
Set objSortList = CreateObject( "System.Collections.SortedList" )
Set xmlDoc = CreateObject( "Microsoft.XmlDom" )
Set xml2Doc = CreateObject( "Msxml2.DOMDocument.5.0" )
Set objiTunes = CreateObject( "iTunes.Application" )
Set objPlayer = CreateObject( "WMPlayer.OCX" )
Set objWMPlayer = CreateObject( "WMPlayer.OCX.7" )
Set objReal = CreateObject( "rmocx.RealPlayer G2 Control.1" )
Set objFSDialog = CreateObject( "SAFRCFileDlg.FileSave" )
Set objFODialog = CreateObject( "SAFRCFileDlg.FileOpen" )
Set objDialog = CreateObject( "UserAccounts.CommonDialog" )
Set SOAPClient = CreateObject( "MSSOAP.SOAPClient" )
Set objWOL = CreateObject( "UltraWOL.ctlUltraWOL" )
Set objSearcher = CreateObject( "Microsoft.Update.Searcher" )
Set objShell = CreateObject( "Shell.Application" )
Set objDeviceReplay = CreateObject( "Mercury.DeviceReplay" )

Here are some examples of how to use these objects:

Description: Creates and returns a reference to an Automation object.

Syntax: CreateObject(class)

The class argument uses the syntax servername.typename and has these parts:
servername: The name of the application providing the object.
typename: The type or class of the object to create.

Remarks: Automation servers provide at least one type of object. For example, a word-processing application may provide an application object, a document object, and a toolbar object. To create an Automation object, assign the object returned by CreateObject to an object variable:

Dim ExcelSheet
Set ExcelSheet = CreateObject("Excel.Sheet")

This code starts the application creating the object (in this case, a Microsoft Excel spreadsheet). Once an object is created, you refer to it in code using the object variable you defined. In the following example, you access properties and methods of the new object using the object variable, ExcelSheet, and other Excel objects, including the Application object and the Cells collection. For example:

' Make Excel visible through the Application object.
ExcelSheet.Application.Visible = True

' Place some text in the first cell of the sheet.
ExcelSheet.Cells(1,1).Value = "This is column A, row 1"

' Save the sheet.
ExcelSheet.SaveAs "C:\DOCS\TEST.XLS"

' Close Excel with the Quit method on the Application object.
ExcelSheet.Application.Quit

' Release the object variable.
Set ExcelSheet = Nothing

Example 2: ("Excel.Application")

For example, to create an Excel application object:

'Close all Excel processes open on your desktop
SystemUtil.CloseProcessByName "excel.exe"

'Create a new Excel application object
Set Excel = CreateObject("Excel.Application")

'Open the Excel workbook
Set SExcelSheet = Excel.Workbooks.Open("D:\Data\Compa.xls")

'Show the Excel window on your desktop
Excel.Visible = True

'Write a value (text) in the sheet (1st row, 2nd column)
SExcelSheet.Sheets(1).Cells(1,2).Value = "Sample text"

'Suppress any pop-up alert message boxes raised by Excel
Excel.DisplayAlerts = False
'To run a macro in Excel
Excel.Run "Compa"
'Save the updated file in a different location with a different name
SExcelSheet.SaveAs "D:\Elements\BaseLine.xls"

'Close the workbook
SExcelSheet.Close

'Quit the Excel application
Excel.Quit
Set Excel = Nothing

Example 3A: ("Scripting.FileSystemObject")

'Drive path where you want to create the folder
strDrive = "D:\Data\"

'Name of the folder to be created
strFolderName = "New Folder"

'Combine the path with the folder name
strPath = strDrive & strFolderName

' Create FileSystemObject.
Set objFSO = CreateObject("Scripting.FileSystemObject")

On Error Resume Next ' skip the error if the folder already exists

' Create a folder, using strPath
Set objFolder = objFSO.CreateFolder(strPath)

Example 3B:

' Get the extension name of a file
MsgBox GetAnExtension("D:\Documents and Settings\Execution Summary.htm")

Function GetAnExtension(DriveSpec)
Dim fso
Set fso = CreateObject("Scripting.FileSystemObject")
GetAnExtension = fso.GetExtensionName(DriveSpec)
End Function

Example 4: ("CDO.Message")

Dim objMessage
'Create the message object to send an email
Set objMessage = CreateObject("CDO.Message")

'Add a subject to your message
objMessage.Subject = "QTP Results - Automated Testing"

objMessage.From = "" ' Change this to your own from address

objMessage.To = "" 'Send-to email id
objMessage.CC = "" 'CC email id
'Body text of the message
objMessage.TextBody = "N.B. - Please Do Not Reply To This Message Directly."

'Include file attachments here
objMessage.AddAttachment "D:\Data\file.text"

'This section provides the configuration information for the remote SMTP server.
objMessage.Configuration.Fields.Item _
("http://schemas.microsoft.com/cdo/configuration/sendusing") = 2

'Name or IP of the remote SMTP server
objMessage.Configuration.Fields.Item _
("http://schemas.microsoft.com/cdo/configuration/smtpserver") = "" 'Set your SMTP server here

'Server port (typically 25)
objMessage.Configuration.Fields.Item _
("http://schemas.microsoft.com/cdo/configuration/smtpserverport") = 25

objMessage.Configuration.Fields.Update
'==End remote SMTP server configuration section==
'Send the email
objMessage.Send

Example 5: ("WScript.Shell")

Set WshShell = CreateObject("WScript.Shell")
Dim Response
' Display a message box with Yes and No options.
Response = MsgBox("Please select your choice as 'Yes' or 'No'." & vbCrLf & vbCrLf & "Do you want to select 'Yes' or 'No'?", vbYesNo)
' Check whether the Yes button was selected.
If Response = vbYes Then
'Message box will appear for 3 seconds
WshShell.Popup "You have selected 'Yes'. Please wait.", 3, "Your Selection" '-WshShell.Popup "message", time to wait, "message box title"
Else
' The No button was selected.
'Message box will appear for 5 seconds
WshShell.Popup "You have selected 'No'.", 5, "Your Selection"
End If

Example 6A: ("Mercury.DeviceReplay")

Here is an example of the 'Mercury.DeviceReplay' object used in QTP:

abs_x = objWebList.GetROProperty("abs_x")
abs_y = objWebList.GetROProperty("abs_y")
Set objMercuryMouse = CreateObject("Mercury.DeviceReplay")
objMercuryMouse.MouseMove abs_x, abs_y

Example 6B: ("Mercury.DeviceReplay")
We can also use 'Mercury.DeviceReplay' simply to enter data into fields. Here is a simple example. Before using this object, you need to set focus on the object where the data needs to be entered.
For i = 1 To 10
Set dr = CreateObject("Mercury.DeviceReplay")
dr.SendString "Test" & i
Set dr = Nothing
Next

Example 7: ("Scripting.Dictionary")

This function will generate a random number with a user-specified number of digits (RandomNumber is QTP's built-in utility function):

Function RanNumber(val)
Dim d, a, i, r, s
Set d = CreateObject("Scripting.Dictionary")
For i = 1 To val
r = RandomNumber(0, 9)
d.Add i, r
Next
a = d.Items 'Get the items.
For i = 0 To d.Count - 1 ' Iterate the array.
s = s & a(i) 'Create return string.
Next
RanNumber = s
End Function

Example 8A: ("ADODB.Connection") / ("ADODB.Recordset")

This function will execute a specific query against a database, using a dedicated database connection string:

Function database()

'DATABASE connection
Const adOpenStatic = 3
Const adLockOptimistic = 3
Const adUseClient = 3
Set objConnection = CreateObject("ADODB.Connection")
Set objRecordset = CreateObject("ADODB.Recordset")
objConnection.Open "DRIVER={Microsoft ODBC for Oracle};UID=" & v_MHXMLDBSchema & ";PWD=" & v_MHXMLDBPwd & ";SERVER=" & v_DBInstance & ";"
objRecordset.ActiveConnection = objConnection
objRecordset.CursorLocation = adUseClient
objRecordset.CursorType = adOpenStatic
objRecordset.LockType = adLockOptimistic
objRecordset.Source = "select SOP from MHXML.FIRMS_STG where org_id in (681915)"
objRecordset.Open 'This will execute the query
If objRecordset.RecordCount > 0 Then
Field1 = objRecordset("SOP").Value
'Field2 = objRecordset("LAST_NAME").Value
MsgBox Field1
'MsgBox Field2
End If
End Function

Example 8B: ("Database connection without the ADODB.Connection object")

This function will retrieve the database value even if the value is null or empty:

Function GetAttorneyInfo(InField, ALid)

' Creating the database connection
MHXMLconnection_string = "DRIVER={Microsoft ODBC for Oracle};UID=" & v_MHXMLDBSchema & ";PWD=" & v_MHXMLDBPwd & ";SERVER=" & v_DBInstance & ";"
isMHXMLConnected = db_connect( MHXMLConnection , MHXMLconnection_string )
If isMHXMLConnected = 0 Then ' get the data from the table
v_Exe_SQL2 = "Select length(NVL(" & InField & ",'Data Not Found')) from lbmgradmin.ilv_vw where ilisting_id = " & ALid
Set RecSet_SOPInfo_LEN = db_execute_query( MHXMLConnection , v_Exe_SQL2 )
d_SOPInfo_Length = db_get_field_value( RecSet_SOPInfo_LEN , 0 , 0 )
'MsgBox d_SOPInfo_Length
v_Exe_SQL2 = "select substr(to_char(NVL(" & InField & ",'Data Not Found')),1," & d_SOPInfo_Length & ") from lbmgradmin.ilv_vw where ilisting_id = " & ALid
Set RecSet_SOPInfo = db_execute_query( MHXMLConnection , v_Exe_SQL2 )
RowCnt = db_get_rows_count( RecSet_SOPInfo )
If RowCnt = 1 Then
d_SOPInfo = db_get_field_value( RecSet_SOPInfo , 0 , 0 )
End If
End If
'If isMHXMLConnected = 0 Then db_disconnect MHXMLConnection

End Function

‘ Database functions
Function db_connect( byRef curSession , connection_string )
Dim connection
On Error Resume Next
' Opening connection
Set connection = CreateObject("ADODB.Connection")
If Err.Number <> 0 Then
db_connect = "Error # " & CStr(Err.Number) & " " & Err.Description
Exit Function
End If
connection.Open connection_string
If Err.Number <> 0 Then
db_connect = "Error # " & CStr(Err.Number) & " " & Err.Description
Exit Function
End If
Set curSession = connection
db_connect = 0
End Function

‘ Db Disconnect – Function to disconnect the database connection
Function db_disconnect( byRef curSession )
set curSession = Nothing
End Function

‘ DB Execute Query – Function to execute the query
Function db_execute_query ( byRef curSession , SQL)
set rs = curSession.Execute( SQL )
set db_execute_query = rs
End Function

‘ DB Function to get the number of rows in the record set
Function db_get_rows_count( byRef curRS )
Dim rows
rows = 0
Do Until curRS.EOF
rows = rows + 1
curRS.MoveNext
Loop
db_get_rows_count = rows
End Function

‘ Function to fetch the records from the record set

Function db_get_field_value( curRecordSet , rowIndex , colIndex )
Dim curRow, count_fields
count_fields = curRecordSet.Fields.Count - 1
' Validate a numeric column index against the number of fields
If ( TypeName(colIndex) <> "String" ) And ( colIndex > count_fields ) Then
db_get_field_value = ""
Exit Function
End If
curRecordSet.MoveFirst
For curRow = 1 To rowIndex
curRecordSet.MoveNext
Next
db_get_field_value = curRecordSet.Fields(colIndex).Value
End Function


Example 8C: ("ADODB.Connection") / ("ADODB.Recordset")
The following code will get the data from an Excel sheet located at the following path:

Dim Get_Field
Set connectToDB = CreateObject("ADODB.Connection")
connectToDB.Provider = "Microsoft.Jet.OLEDB.4.0"
connectToDB.Properties("Extended Properties").Value = "Excel 8.0"
connectToDB.Open "D:\Documents and Settings\pauldx\Desktop\Data.xls"
strQuery = "Select Age from [Data$] WHERE Name = 'Joli'"
Set rsRecord = CreateObject("ADODB.Recordset")
rsRecord.Open strQuery, connectToDB, 1, 1
' MsgBox rsRecord.RecordCount
If rsRecord.RecordCount > 0 Then
For i = 1 To rsRecord.RecordCount
Get_Field = rsRecord.Fields("Age").Value
Print Get_Field
rsRecord.MoveNext
Next
Else
Get_Field = "Field Not Present"
End If

Example 9: ("AcroExch.App" / "AcroExch.AVDoc")

'The code below searches for the word 'software' in a PDF file

Option Explicit
Dim accapp, acavdocu
Dim pdf_path, bReset, Wrd_count
pdf_path = "C:\Program Files\Om\Om 1.1 User Manual.pdf"
'AcroExch is the Acrobat application object
Set accapp = CreateObject("AcroExch.App")

'Need to create one AVDoc object per displayed document
Set acavdocu = CreateObject("AcroExch.AVDoc")

'Opening the PDF
If acavdocu.Open(pdf_path, "") Then
bReset = 1 : Wrd_count = 0
'FindText finds the specified text, scrolls so that it is visible, and highlights it
Do While acavdocu.FindText("software", 1, 1, bReset)
bReset = 0 : Wrd_count = Wrd_count + 1
Wait 0, 200
Loop
End If

MsgBox "The word 'software' was found " & Wrd_count & " times"
Set acavdocu = Nothing : Set accapp = Nothing

(Note: you can only use this code if you have Acrobat Professional installed. If you have only the standard Adobe Reader installed, you will get this error message: "ActiveX component can't create object: 'AcroExch.PDDoc'".)

Example 10: ("DotNetFactory")
These functions convert between binary and decimal/hexadecimal.
Here we can see the use of the DotNetFactory utility to create an instance of "System.Convert".

'Binary to hexadecimal conversion (BinToHex returns the decimal value; Hex() formats it)
Print "&H" & Hex( BinToHex("00001110100111011111101000111011") )

Function BinToHex( bits )
If bits <> "" Then
BinToHex = 2 * BinToHex( Left( bits, Len( bits ) - 1 ) ) + CLng( Right( bits, 1 ) )
End If
End Function

'Decimal to binary conversion
Print DecToBin(245234235, 32)

Public Function DecToBin( decNum, bitsCount )
Dim str
str = DotNetFactory.CreateInstance( "System.Convert" ).ToString( CLng( decNum ), 2 )
DecToBin = String( bitsCount - Len( str ), "0" ) & str
End Function


Metrics in Automation



“When you can measure what you are speaking about, and can express it in numbers, you know something about it; but when you cannot measure it, when you cannot express it in numbers, your knowledge is of a meager and unsatisfactory kind.”

-- Lord Kelvin, a physicist.



As part of a successful automated testing program, it is important that goals and strategies are defined and then implemented. During implementation, progress against the goals and strategies set out at the onset of the program needs to be continuously tracked and measured. This article discusses various types of automated and general testing metrics that can be used to measure and track progress.

Based on the outcome of these various metrics, the defects remaining to be fixed in a testing cycle can be assessed, schedules can be adjusted accordingly, or goals can be reduced. For example, if a feature still has too many high-priority defects, a decision can be made to move the ship date, or to ship or even go live without that specific feature.

Success is measured based on the goal we set out to accomplish relative to the expectations of our stakeholders and customers.

If you can measure something, then you have something you can quantify. If you can quantify something, then you can explain it in more detail and know something more about it. If you can explain it, then you have a better chance of improving upon it, and so on.

Metrics can provide insight into the status of automated testing efforts.

Automation efforts can provide a larger test coverage area and increase the overall quality of the product. Automation can also reduce the time of testing and the cost of delivery. This benefit is typically realized over multiple test cycles and project cycles. Automated testing metrics can aid in making assessments as to whether progress, productivity, and quality goals are being met.

What is a Metric?

The basic definition of a metric is a standard of measurement. It can also be described as a system of related measures that facilitates the quantification of some particular characteristic. For our purposes, a metric can be looked at as a measure that can be used to display past and present performance and/or to predict future performance.

What Are Automated Testing Metrics?

Automated testing metrics are metrics used to measure the performance (e.g. past, present, future) of the implemented automated testing process.

What Makes A Good Automated Testing Metric?

As with any metrics, automated testing metrics should be tied to clearly defined goals for the automation effort. It serves no purpose to measure something for the sake of measuring. To be meaningful, a metric should relate directly to the performance of the effort.

Prior to defining the automated testing metrics, there are metric-setting fundamentals you may want to review. Before measuring anything, set goals: what is it you are trying to accomplish? Goals are important; if you do not have goals, what is it that you are measuring? It is also important to track and measure continuously, on an ongoing basis. Based on the metrics outcome, you can then decide whether deadlines, feature lists, process strategies, etc., need to be adjusted. As a step toward goal setting, there may be questions that need to be asked of the current state of affairs. Decide what questions can be asked to determine whether or not you are tracking towards the defined goals. For example:

                        How much time does it take to run the test plan?

                        How is test coverage defined (KLOC, FP, etc)?

                        How much time does it take to do data analysis?

                        How long does it take to build a scenario/driver?

                        How often do we run the test(s) selected?

                        How many permutations of the test(s) selected do we run?

                        How many people do we require to run the test(s) selected?

                        How much system time/lab time is required to run the test(s) selected?


In essence, a good automated testing metric has the following characteristics:

                        is Objective

                        is Measurable

                        is Meaningful

                        has data that is easily gathered

                        can help identify areas of test automation improvement

                        is Simple


A good metric is clear and not subjective, it can be measured, it has meaning to the project, it does not take enormous effort and/or resources to obtain the data for the metric, and it is simple to understand. A few more words about metrics being simple. Albert Einstein once said:

"Make everything as simple as possible, but not simpler."

When applying this wisdom towards software testing, you will see that:

                        Simple reduces errors

                        Simple is more effective

                        Simple is elegant

                        Simple brings focus


Percent Automatable

At the beginning of an automated testing effort, the project is either automating existing manual test procedures, starting a new automation effort from scratch, or some combination of both. Whichever the case, a percent automatable metric can be determined.

Percent automatable can be defined as: of a set of given test cases, how many are automatable? This could be represented in the following equation:

          ATC     # of test cases automatable
PA (%) = ----- = -----------------------------
          TC      # of total test cases

PA = Percent Automatable
ATC = # of test cases automatable
TC = # of total test cases
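The arithmetic can be sketched directly from the definitions above; the counts used here are invented purely for illustration:

```python
def percent_automatable(automatable_cases, total_cases):
    """PA(%) = ATC / TC, expressed as a percentage."""
    if total_cases <= 0:
        raise ValueError("total test case count must be positive")
    return 100.0 * automatable_cases / total_cases

# Hypothetical project: 360 of 450 existing test cases are judged automatable.
print(percent_automatable(360, 450))  # -> 80.0
```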

In evaluating test cases to be developed, what is to be considered automatable and what is not? Given enough ingenuity and resources, one can argue that almost anything can be automated. So where do you draw the line? Something that can be considered 'not automatable', for example, could be an application area that is still under design, not very stable, and largely in flux. In cases such as this, we should:

"evaluate whether it makes sense to automate"


We would evaluate for example, given the set of automatable test cases, which ones would provide the biggest return on investment:

"just because a test is automatable doesn't necessarily mean it should be automated"

When going through the test case development process, determine which tests can be automated AND make sense to automate. Prioritize your automation effort based on the outcome. This metric can be used to summarize, for example, the percent automatable of various projects or components within a project, and to set the automation goal.


Automation Progress

Automation Progress refers to, of the automatable test cases, how many have been automated at a given time? Basically, how well are you doing against the goal of automated testing? The goal is to automate 100% of the "automatable" test cases. This metric is useful to track during the various stages of automated testing development.

          AA      # of actual test cases automated
AP (%) = ----- = ----------------------------------
          ATC     # of test cases automatable

AP = Automation Progress
AA = # of actual test cases automated
ATC = # of test cases automatable

The Automation Progress metric is typically tracked over time, for example in weeks.
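Tracking AP over time can be sketched as follows; the weekly counts are hypothetical and assume 360 automatable cases:

```python
def automation_progress(automated_cases, automatable_cases):
    """AP(%) = AA / ATC, expressed as a percentage."""
    return 100.0 * automated_cases / automatable_cases

# Week-by-week snapshots of the number of cases automated so far.
weekly_automated = [30, 90, 180, 300, 360]
for week, aa in enumerate(weekly_automated, start=1):
    print("week %d: AP = %.1f%%" % (week, automation_progress(aa, 360)))
```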

A common metric closely associated with progress of automation, yet not exclusive to automation is Test Progress. Test progress can simply be defined as the number of test cases attempted (or completed) over time.

       TC     # of test cases (attempted or completed)
TP = ------ = ------------------------------------------
        T      time (days/weeks/months, etc.)

TP = Test Progress
TC = # of test cases (either attempted or completed)
T = some unit of time (days / weeks / months, etc.)

The purpose of this metric is to track test progress and compare it to the plan. This metric can be used to show where testing is tracking against the overall project plan. Test Progress over the period of time of a project usually follows an “S” shape. This typical “S” shape usually mirrors the testing activity during the project lifecycle. Little initial testing, followed by an increased amount of testing through the various development phases, into quality assurance, prior to release or delivery.

This is a metric to show progress over time. A more detailed analysis is needed to determine pass/fail, which can be represented in other metrics.

Percent of Automated Testing Test Coverage

Another automated software metric we want to consider is Percent of Automated Testing Test Coverage. That is a long title for a metric that determines what test coverage the automated testing is actually achieving. It is a metric that indicates the completeness of the testing. This metric is not so much measuring how much automation is being executed as how much of the product's functionality is being covered. For example, 2,000 test cases executing the same or similar data paths may take a lot of time and effort to execute, but do not equate to a large percentage of test coverage. Percent of automated testing coverage does not say anything about the effectiveness of the testing taking place; it is a metric to measure its dimension.

           AC     automation coverage
PTC (%) = ---- = ---------------------
           C      total coverage

PTC = Percent of Automated testing coverage
AC = Automation coverage
C = Total Coverage (KLOC, FP, etc.)

Size of system is usually counted as lines of code (KLOC) or function points (FP). KLOC is a common method of sizing a system; however, FP has also gained acceptance. Some argue that FPs can be used to size software applications more accurately. Function Point Analysis was developed in an attempt to overcome difficulties associated with KLOC (or just LOC) sizing. Function Points measure software size by quantifying the functionality provided to the user, based on logical design and functional specifications. There is a wealth of material available regarding the sizing or coverage of systems. A useful resource is Stephen H. Kan's book "Metrics and Models in Software Quality Engineering" (Addison Wesley, 2003).
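A small sketch of the PTC calculation, assuming coverage is counted in KLOC (the sizes below are made up for the example):

```python
def percent_automated_coverage(automation_coverage, total_coverage):
    """PTC(%) = AC / C; both values must use the same unit (KLOC, FP, etc.)."""
    return 100.0 * automation_coverage / total_coverage

# Hypothetical system of 120 KLOC, of which automated tests exercise 78 KLOC.
print(percent_automated_coverage(78, 120))  # -> 65.0
```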

The Percent Automated Test Coverage metric can be used in conjunction with the standard software testing metric called Test Coverage.

          TTP     total # of Test Procedures developed
TC (%) = ----- = --------------------------------------
          TTR     total # of defined Test Requirements

TC = Percent of Testing Coverage
TTP = Total # of Test Procedures developed
TTR = Total # of defined Test Requirements

This measurement of test coverage divides the total number of test procedures developed by the total number of defined test requirements. This metric provides the test team with a barometer to gauge the depth of test coverage. The depth of test coverage is usually based on the defined acceptance criteria. When testing a mission-critical system, such as an operational medical system, the test coverage indicator would need to be high relative to the depth of test coverage for non-mission-critical systems. The depth of test coverage for a commercial software product that will be used by millions of end users may also be high relative to a government information system with a couple of hundred end users.

Defect Density

Measuring defects is a discipline to be implemented regardless of whether the testing effort is automated or not. Josh Bloch, Chief Architect at Google, stated:

"Regardless of how talented and meticulous a developer is, bugs and security vulnerabilities will be found in any body of code, open source or commercial. Given this inevitability, it's critical that all developers take the time and measures to find and fix these errors."

Defect density is another well known metric not specific to automation. It is a measure of the total known defects divided by the size of the software entity being measured. For example, if there is a high defect density in a specific functionality, it is important to conduct a causal analysis. Is this functionality very complex, and therefore it is to be expected that the defect density is high? Is there a problem with the design/implementation of the functionality? Were the wrong (or not enough) resources assigned to the functionality, because an inaccurate risk had been assigned to it? It also could be inferred that the developer, responsible for this specific functionality, needs more training.

        D     # of known defects
DD = ----- = ----------------------
        SS    total size of system

DD = Defect Density
D = # of known defects
SS = Total Size of system

One use of defect density is to map it against software component size. In a typical defect density curve that we have experienced, small and larger sized components have a higher defect density ratio. Additionally, when evaluating defect density, the priority of the defect should be considered. For example, one application requirement may have as many as 50 low-priority defects and still pass because the acceptance criteria have been satisfied. Still, another requirement might have only one open defect that prevents the acceptance criteria from being satisfied, because it is a high priority. Higher-priority requirements are generally weighted more heavily.

One approach to utilizing the defect density metric is to track projects over time (for example, at stages in the development cycle).
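Defect density per component can be sketched as a simple division; the component names, defect counts, and sizes below are invented for illustration:

```python
def defect_density(known_defects, size_kloc):
    """DD = D / SS, here expressed as defects per KLOC."""
    return known_defects / size_kloc

# Hypothetical components: (name, known defects, size in KLOC)
components = [("parser", 12, 3.0), ("ui", 45, 30.0), ("reports", 66, 11.0)]
for name, d, ss in components:
    print("%s: %.1f defects/KLOC" % (name, defect_density(d, ss)))
```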

Another closely related metric to Defect Density is Defect Trend Analysis. Defect Trend Analysis is calculated as:


         D       # of known defects
DTA = ------- = -------------------------------
         TPE     # of test procedures executed

DTA = Defect Trend Analysis
D = # of known Defects
TPE = # of Test Procedures Executed over time

Defect Trend Analysis can help determine the trend of defects found. Is the trend improving as the testing phase winds down, or is it worsening? Defects that the test automation uncovered which manual testing did not or could not have found are an additional way to demonstrate ROI. During the testing process, we have found defect trend analysis to be one of the more useful metrics to show the health of a project. One approach to showing the trend is to plot the total number of defects along with the number of open Software Problem Reports over time.


Effective Defect Tracking Analysis can present a clear view of the status of testing throughout the project. A few additional common metrics sometimes used related to defects are as follows:


                        Cost to locate defect = cost of testing / number of defects located

                        Defects detected in testing = defects detected in testing / total system defects

                        Defects detected in production = defects detected in production / system size


Some of these metrics can be combined and used to enhance quality measurements as shown in the next section.

Actual Impact on Quality

One of the more popular metrics for tracking quality (if defect count is used as a measure of quality) through testing is Defect Removal Efficiency (DRE), not specific to automation, but very useful when used in conjunction with automation efforts. DRE is a metric used to determine the effectiveness of your defect removal efforts. It is also an indirect measurement of the quality of the product. The value of the DRE is calculated as a percentage. The higher the percentage, the higher positive impact on the quality of the product. This is because it represents the timely identification and removal of defects at any particular phase.

             DT        # of defects found during testing
DRE (%) = --------- = ---------------------------------------------------------------------------
           DT + DA     # of defects found during testing + # of defects found after delivery

DRE = Defect Removal Efficiency
DT = # of defects found during testing
DA = # of defects found after delivery

The highest attainable value of DRE is "1", which equates to "100%". In practice we have found that an efficiency rating of 100% is not likely. DRE should be measured during the different development phases. If the DRE is low during analysis and design, it may indicate that more time should be spent improving the way formal technical reviews are conducted, and so on.

This calculation can be extended for released products as a measure of the number of defects in the product that were not caught during the product development or testing phase.
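DRE follows directly from its definition; the defect counts below are hypothetical:

```python
def defect_removal_efficiency(found_in_testing, found_after_delivery):
    """DRE(%) = DT / (DT + DA), expressed as a percentage."""
    total = found_in_testing + found_after_delivery
    if total == 0:
        raise ValueError("no defects recorded")
    return 100.0 * found_in_testing / total

# 95 defects caught during testing, 5 escaped to the field after delivery.
print(defect_removal_efficiency(95, 5))  # -> 95.0
```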

Other Software Testing Metrics

Along with the metrics mentioned in the previous sections, here are a few more common test metrics. These metrics do not necessarily just apply to automation, but could be, and most often are, associated with software testing in general. These metrics are broken up into three categories:

                        Coverage: Meaningful parameters for measuring test scope and success.


                        Progress: Parameters that help identify test progress to be matched against success criteria. Progress metrics are collected iteratively over time. They can be used to graph the process itself (e.g. time to fix defects, time to test, etc).


                        Quality: Meaningful measures of excellence, worth, value, etc. of the testing product. It is difficult to measure quality directly; however, measuring the effects of quality is easier and possible.


Adapted from "Automated Software Testing" (Addison Wesley, 1999, Dustin et al.)

Metric Name: Description

Test Coverage: Total number of test procedures / total number of test requirements. The Test Coverage metric indicates planned test coverage.

System Coverage Analysis: Measures the amount of coverage at the system interface level.

Test Procedure Execution Status: Executed number of test procedures / total number of test procedures. This metric indicates the extent of the testing effort still outstanding.

Error Discovery Rate: Total number of defects found / number of test procedures executed. The Error Discovery Rate metric uses the same calculation as the defect density metric, and is used to analyze and support a rational product release decision.

Defect Aging: Date defect was opened versus date defect was fixed. The Defect Aging metric provides an indication of defect turnaround.

Defect Fix Retest: Date defect was fixed and released in a new build versus date defect was re-tested. The Defect Fix Retest metric indicates whether the testing team is re-testing fixes fast enough to keep the progress metric accurate.

Current Quality Ratio: Number of test procedures successfully executed (without defects) versus the total number of test procedures. The Current Quality Ratio metric provides an indication of the amount of functionality that has been successfully demonstrated.

Quality of Fixes: Total number of defects reopened / total number of defects fixed. This metric provides an indication of development issues. A related ratio, previously working functionality versus new errors introduced, keeps track of how often previously working functionality was adversely affected by software fixes.

Problem Reports: Number of Software Problem Reports, broken down by priority.

Test Effectiveness: Needs to be assessed statistically to determine how well the test data has exposed defects contained in the product.

Test Efficiency: Number of tests required / number of system errors.