tag:blogger.com,1999:blog-48035483885108939742024-03-13T06:54:00.720+05:30Machars BlogReal World approach to technology and testing.Jacksonhttp://www.blogger.com/profile/06729565072784015826noreply@blogger.comBlogger172125tag:blogger.com,1999:blog-4803548388510893974.post-77012610811166554982012-01-25T01:58:00.006+05:302012-01-25T02:15:13.456+05:30CNG Filling Points In Hyderabad, AP, IndiaCNG refueling points operated by BGL (Bhagyanagar Gas) in collaboration with HPCL in Hyderabad, as of Dec 2011, are<br /><br />1. BGL Mother Station, Shamirpet, Hyderabad-Karimnagar Route<br />2. COCO BGL Saroornagar, Near Saroornagar stadium, Hyderabad-Vijayawada Highway (Operates from 7 AM - 7 PM)<br />3. Autocare Centre, R.P. Road, Secunderabad - Near Bible House, Adjacent to Jirra<br />4. Chakra Filling Station, Nampally Station Road, Nampally - on the Abids to Nampally route, beside Pullareddy sweet house<br />5. Sapthagiri Filling Station, Meerpet, Near Champapet<br />6. Daraboina Filling Station, Uppal-Nagole ring road, Near Nagole bridge<br />7. Radharaman Service Station, Narayanguda - Opposite the blood bank, Narayanguda<br />8. K.V.S. Service Station, Bowenpally, at the start of the Hyd-Nizamabad-Nagpur Highway<br />9. Ramesh Fuel Point, Dhoolpet<br />10. 
Habib Fuel Station, Langarhouse, near Mehdipatnam<br /><br />and another 50 filling stations are yet to be allotted by Dec 2012, as per sources.Jacksonhttp://www.blogger.com/profile/06729565072784015826noreply@blogger.com0tag:blogger.com,1999:blog-4803548388510893974.post-29333259424096832842011-10-03T19:52:00.000+05:302011-10-03T19:53:25.899+05:30Comparing Excel sheets, workbooks'Call CompareExcelAndProvideResultInAnotherSheet("C:\Documents and Settings\Metson\Desktop\test Excel\Book2.xls", "C:\Documents and Settings\Metson\Desktop\test Excel\Book1.xls")<br />'Call Compare2ExcelsCellByCell("C:\Documents and Settings\Metson\Desktop\test Excel\Book2.xls", "C:\Documents and Settings\Metson\Desktop\test Excel\Book1.xls")<br />Call Compare2ExcelsCellByCell("C:\Jackson-Works\BIChangedTxt\Test_USAGE_090602.csv", "C:\Jackson-Works\BIChangedTxt\POM_ADHOC_090602.csv")<br />Msgbox "Done Comparing Results"<br /><br />Function CompareExcelAndProvideResultInAnotherSheet(inputFile1, inputFile2)<br />Set objExcel=CreateObject("Excel.Application")<br />objExcel.Visible=True<br />Set objWorkBook1=objExcel.Workbooks.Open(inputFile1)<br />Set objWorkBook2=objExcel.Workbooks.Open(inputFile2)<br /><br />Msgbox objWorkBook1.Worksheets.Count<br /><br />Set objWorksheet1=objWorkBook1.Worksheets(1)<br />Set objWorkSheet2=objWorkBook2.Worksheets(1)<br /><br /> For Each cell In objWorkSheet1.UsedRange<br /> c1=cell.Value <br /> Set cell2=objWorkSheet2.Range(cell.Address) 'keep the Range object so the cell can be highlighted later<br /> c2=cell2.Value <br /> Set rc=New RegExp<br /> rc.Pattern=c1<br /> rc.IgnoreCase=True<br /> rc.Global=True<br /> <br /> If rc.Test(c2) Then<br /> cell.Value="Pass"<br /> Else<br /> ' cell.Value= "Fail"<br /> cell2.Interior.ColorIndex = 3 'c2 is a plain value, so colour via the Range object instead<br /> End If<br /> Next<br /><br />Set objExcel=Nothing<br />End Function<br />Msgbox "Done"<br /><br />Function Compare2ExcelsCellByCell(inputFile1, inputFile2)<br /><br />'Compare 2 Excel sheets cell by cell and making 
the cell background red for the unmatched cell values<br />'=============================================<br />'This code opens two Excel workbooks and compares the first sheet of each, cell by cell; any mismatched cell is highlighted in red in the first workbook.<br />Set objExcel = CreateObject("Excel.Application")<br />objExcel.DisplayAlerts = False 'suppress the confirmation prompt when deleting empty worksheets<br />'objExcel.Visible = True<br />Set objWorkBook1=objExcel.Workbooks.Open(inputFile1)<br />Set objWorkBook2=objExcel.Workbooks.Open(inputFile2)<br />Msgbox objWorkBook1.Worksheets.Count<br />'delete empty worksheets, iterating backwards so the index stays valid after a delete<br />For i = objWorkBook1.Worksheets.Count To 1 Step -1<br /> If (objWorkBook1.Worksheets(i).UsedRange.Rows.Count=1) And (objWorkBook1.Worksheets(i).UsedRange.Columns.Count=1) Then<br /> objWorkBook1.Worksheets(i).Delete<br /> End If<br />Next<br />For i = objWorkBook2.Worksheets.Count To 1 Step -1<br /> If (objWorkBook2.Worksheets(i).UsedRange.Rows.Count=1) And (objWorkBook2.Worksheets(i).UsedRange.Columns.Count=1) Then<br /> objWorkBook2.Worksheets(i).Delete<br /> End If<br />Next<br /><br />Set objWorksheet1=objWorkBook1.Worksheets(1)<br />Set objWorkSheet2=objWorkBook2.Worksheets(1)<br /><br /> For Each cell In objWorksheet1.UsedRange<br /> If cell.Value <> objWorksheet2.Range(cell.Address).Value Then<br /> cell.Interior.ColorIndex = 3 'highlights the cell in red if the values differ<br /> Else<br /> cell.Interior.ColorIndex = 5<br /> End If<br /> Next<br /> objWorkBook1.Save <br /> objWorkBook2.Save <br /> objExcel.Quit<br />Set objExcel=Nothing<br />End FunctionJacksonhttp://www.blogger.com/profile/06729565072784015826noreply@blogger.com0tag:blogger.com,1999:blog-4803548388510893974.post-75233955039731439642011-10-03T19:19:00.006+05:302011-10-03T19:24:29.427+05:30Working with DotNetFactory in QTP for database retrievaldim objConDotnet, objCmdDotnet<br /><br />Set objConDotnet=DotNetFactory.CreateInstance("System.Data.Odbc.OdbcConnection","System.Data")<br />'objConDotnet.ConnectionString="Server=xxx; UID=yyy; DataBase=zzz"<br />strCon="DRIVER={Microsoft ODBC for 
Oracle};SERVER=XXX; UID=YYY; PWD=ZZZ"<br />objConDotnet.ConnectionString=strCon<br />Print objConDotnet.ConnectionString<br />Print TypeName(objConDotnet)<br />objConDotnet.ConnectionTimeOut=0<br />objConDotnet.Open()<br />Print CStr(objConDotnet.State)<br />strSql= "Select Count(*) from <systemtablename> where columnName=valuetoSearch"<br />Set objCmdDotnet=DotNetFactory.CreateInstance("System.Data.Odbc.OdbcCommand","System.Data")<br />objCmdDotnet.CommandText=strSql<br />objCmdDotnet.Connection= objConDotnet<br />strValue = objCmdDotnet.ExecuteScalar()<br />Print "The returned count is: " & CInt(strValue)<br />objConDotnet.Close()<br />Print CStr(objConDotnet.State)Jacksonhttp://www.blogger.com/profile/06729565072784015826noreply@blogger.com2tag:blogger.com,1999:blog-4803548388510893974.post-45665629414236238392011-05-09T23:08:00.002+05:302011-05-09T23:11:40.735+05:30Comparing two Excel files and finding mismatched rowsSet objExcel = CreateObject ("Excel.Application")<br />objExcel.Visible = True<br /> <br /> Set resultWb = objExcel.Workbooks.Add<br /> Set resultWs = resultWb.Worksheets("Sheet1")<br /> <br />resultrow =1<br />Set objWorkbook1= objExcel.Workbooks.Open("C:\Excel\BI_TEST\TEST_090602_1.xls")<br />Set objWorkbook2= objExcel.Workbooks.Open("C:\Excel\BI_PROD\PROD_090602_1.xls")<br /> <br />Set objWorksheet1= objWorkbook1.Worksheets(1) <br />Set objWorksheet2= objWorkbook2.Worksheets(1) <br />Const xlAscending = 1 'sort order: 1 for Ascending, 2 for Descending<br />Const xlYes = 1 <br />'Set objRange =objWorksheet1.UsedRange 'selects the range of cells that contain data<br />'Set objRange2 = objWorksheet1.Range("A1") 'select the column to sort<br /> <br />'objRange.Sort objRange2, xlAscending, , , , , , xlYes<br /> <br />'Set objRange12 =objWorksheet2.UsedRange 'selects the range of cells that contain data<br />'Set objRange22 = objWorksheet2.Range("A1") 'select the column to sort<br /> <br 
/>'objRange12.Sort objRange22, xlAscending, , , , , , xlYes<br /> <br />resultWs.Cells (resultrow, 1).Value ="Cell Address"<br />resultWs.Cells (resultrow, 2).Value ="Sheet1 Value"<br />resultWs.Cells (resultrow, 3).Value ="Sheet2 Value"<br /> <br />dim counter<br />counter = 0<br />For Each cell In objWorksheet1.UsedRange<br /> If cell.Value <> objWorksheet2.Range(cell.Address).Value Then<br /> cell.Interior.ColorIndex = 3 'highlights the cell in red if the values differ<br /> resultrow = resultrow+1<br /> resultWs.Cells (resultrow, 1).Value =cell.Address<br /> resultWs.Cells (resultrow, 2).Value =cell.Value<br /> resultWs.Cells (resultrow, 3).Value= objWorksheet2.Range(cell.Address).Value<br /> End If<br /> counter = counter +1 <br /> Next<br /><br /> resultWb.SaveAs("C:\Excel\Result\Result_090602_1.xls")Jacksonhttp://www.blogger.com/profile/06729565072784015826noreply@blogger.com0tag:blogger.com,1999:blog-4803548388510893974.post-70878683302145717832009-12-16T23:14:00.001+05:302009-12-16T23:14:50.555+05:30Working with the .NET ArrayList in QTP's VBScriptSet array1 = CreateObject( "System.Collections.ArrayList" )<br /><br />Set array2 = CreateObject( "System.Collections.ArrayList" )<br /><br /> <br /><br />Private Sub PrintInfo( ByVal title )<br /><br />dim str<br /><br />str=title & vbNewLine 'use the parameter value, not the literal string "title"<br /><br />str1= "array1 Capacity = " & array1.Capacity & vbcrlf<br /><br />str2= "array2 Capacity = " & array2.Capacity & vbcrlf<br /><br />str3= "array1 Count = " & array1.Count & vbcrlf<br /><br />str4= "array2 Count = " & array2.Count & vbcrlf<br /><br />str5= "array1 IsFixedSize = " & array1.IsFixedSize & vbcrlf<br /><br />str6= "array2 IsFixedSize = " & array2.IsFixedSize & vbcrlf<br /><br />str7= String( 50, "*" ) & vbcrlf<br /><br />Msgbox str & str1 & str2 & str3 & str4 & str5 & str6 & str7<br /><br />End Sub<br /><br />array2.Capacity = 10<br /><br />Call PrintInfo( "Before adding items." 
)<br /><br />' ** Adding an item to arrays<br /><br />array1.Add "New York" : array2.Add "New York"<br /><br />Call PrintInfo( "After adding 1 item." )<br /><br />array1.Add "Boston" : array2.Add "Boston"<br /><br />array1.Add "Dallas" : array2.Add "Dallas"<br /><br />array1.Add "Chicago" : array2.Add "Chicago"<br /><br />Call PrintInfo( "After adding 3 more items" )<br /><br />array1.Remove( "Boston" )<br /><br />Call PrintInfo( "After removing 1 item from array1" )Jacksonhttp://www.blogger.com/profile/06729565072784015826noreply@blogger.com1tag:blogger.com,1999:blog-4803548388510893974.post-17173620686419989782009-12-16T22:40:00.001+05:302009-12-16T22:43:11.863+05:30Some Basic Useful Re-usable Scripts in QTPGeneral functions which might be useful in projects:-<br /><br />Array Basics<br />Some basic info about creating and using arrays.<br />' The easiest way to create an array is to simply declare it as follows<br />Dim strCustomers()<br />' Another method is to define a variable and then set it as an array afterwards<br />Dim strStaff<br />strStaff = Array("Alan","Brian","Chris")<br />' Yet another way is to use the Split command to create and populate the array<br />Dim strProductArray<br />strProductArray = "Keyboards,Laptops,Monitors"<br />strProductArray = Split(strProductArray, ",")<br />' To iterate through the contents of an array you can use the For Each loop<br />Dim strItem<br />For Each strItem In strProductArray<br />MsgBox strItem<br />Next<br />' This will also iterate through the array<br />Dim intCount<br />For intCount = LBound(strProductArray) To UBound(strProductArray)<br />Msgbox strProductArray(intCount)<br />Next<br />' This will iterate through the array backwards<br />For intCount = UBound(strProductArray) To LBound(strProductArray) Step -1<br />Msgbox strProductArray(intCount)<br />Next<br />' To add extra data to an array use Redim Preserve<br />Redim Preserve strProductArray(3)<br />strProductArray(3) = "Mice"<br />' To store the 
contents of an array into one string, use Join<br />Msgbox Join(strProductArray, ",")<br />' To delete the contents of an array, use the Erase command<br />Erase strProductArray<br />Date Manipulation Examples<br />Some date manipulation functions.<br /><br />' show today's date<br />MsgBox Date<br /><br />' show the time<br />MsgBox Time<br /><br />' show both the date and time<br />MsgBox Now<br /><br />' calculate the minimum Date of Birth for someone who is 18 years old<br />strMinDoB = DateAdd("yyyy", -18, Date)<br />MsgBox strMinDob<br /><br />' show the number of years difference between strMinDob and today<br />MsgBox DateDiff("yyyy", strMinDob, Date)<br /><br />' show the hour portion of the time<br />MsgBox DatePart("h", Time)<br /><br />' show the day portion of the date<br />MsgBox Day(strMinDob)<br /><br />' show the month portion of the date<br />MsgBox Month(strMinDob)<br /><br />' show the year portion of the date<br />MsgBox Year(strMinDob)<br /><br />' show the weekday portion of the date<br />' Sunday=1, Monday=2, --> Saturday=7 <br />MsgBox WeekDay(strMinDob)<br /><br /><br /><br />Note: Acceptable 'Interval' parameters for DatePart, DateAdd and DateDiff...<br /><br />"yyyy" = Year <br />"q" = Quarter <br />"m" = Month <br />"y" = Day of year <br />"d" = Day <br />"w" = Weekday <br />"ww" = Week of year <br />"h" = Hour <br />"n" = Minute <br />"s" = Second <br /><br />Get Child Objects<br />Find all checkboxes on a webpage.<br />Here's a basic example that will find and tick all of the checkboxes on the QTP Helper login screen.<br />Dim objDescription<br />Dim objCheckBoxes<br />Dim iCount<br /><br />' create a description object used to locate check boxes<br />Set objDescription = Description.Create()<br /><br />' set the object properties so it looks only for web check boxes<br />objDescription("micclass").Value = "WebCheckBox"<br /><br />' check that the user isn't already logged in<br />If Browser("Title:=QTP Helper.*").Page("Title:=QTP 
Helper.*").WebButton("Name:=Logout").Exist(1) Then<br /><br />' click logout<br />Browser("Title:=QTP Helper.*").Page("Title:=QTP Helper.*").WebButton("Name:=Logout").Click <br /><br />End If<br /><br />' get the check boxes from the page<br />Set objCheckBoxes = Browser("Title:=QTP Helper.*").Page("Title:=QTP Helper.*").ChildObjects(objDescription)<br /><br />' for each check box found<br />For iCount = 0 to objCheckBoxes.Count - 1<br /><br />' tick the check box<br />Browser("Title:=QTP Helper.*").Page("Title:=QTP Helper.*").WebCheckBox(objCheckBoxes(iCount)).Set "ON"<br /><br />Next<br /><br /><br />Compare Arrays<br />Compare the contents of two arrays.<br />' Example usage <br />sA = Array("A", "B", "D")<br />sB = Array("A", "C", "B")<br /><br />MsgBox CompareArrays(sA, sB)<br />' =============================================================<br />' function: CompareArrays<br />' desc : Compares the content of two arrays and checks that<br />' they each contain the same data, even if in a <br />' different order<br />' params : arrArray1 is the base array<br />' arrArray2 is the array to compare<br />' returns : True if they contain same data, False otherwise<br />' =============================================================<br />Function CompareArrays(arrArray1, arrArray2)<br /><br />Dim intA1<br />Dim intA2<br />Dim blnMatched<br /><br />' check that the arrays are the same size<br />If UBound(arrArray1) <> UBound(arrArray2) then<br /><br />' arrays are different size, so return false and exit function<br />CompareArrays = False<br />Exit Function<br /><br />End if<br /><br />' for each element in the first array<br />For intA1 = LBound(arrArray1) to UBound(arrArray1)<br /><br />' initialise this to false<br />blnMatched = False<br /><br />' for each element in the second array<br />For intA2 = LBound(arrArray2) to UBound(arrArray2)<br /><br />' compare the content of the two arrays<br />If arrArray1 (intA1) = arrArray2 (intA2) Then<br />blnMatched = 
True<br />Exit For<br />End If<br /><br />Next ' next element in second array<br /><br />' if the element was not found in array two, return false and exit function<br />If Not blnMatched then <br />CompareArrays = False<br />Exit Function<br />End If<br /><br />Next ' next element in first array<br /><br />' if the function got this far, then the arrays contain the same data<br />CompareArrays = True<br /><br />End Function ' CompareArrays<br /><br /><br />Custom Report Entry<br />Creating a customised entry in the results.<br /><br />' Example usage<br />CustomReportEntry micFail, "Custom Report Example", "<DIV align=left>This is a <b>custom</b> report entry!</DIV>"<br /><br />' =============================================================<br />' function: CustomReportEntry<br />' desc : Creates a customised entry in the result file, you<br />' can use standard HTML tags in the message.<br />' params : strStatus is the result, micPass, micFail etc<br />' strStepName is the name of the step<br />' strMessage is the failure message, this can contain<br />' html tags<br />' returns : Void<br />' =============================================================<br />Function CustomReportEntry(strStatus, strStepName, strMessage)<br /><br />' create a dictionary object<br />Set objDict = CreateObject("Scripting.Dictionary")<br /><br />' set the object properties<br />objDict("Status") = strStatus<br />objDict("PlainTextNodeName") = strStepName<br />objDict("StepHtmlInfo") = strMessage<br />objDict("DllIconIndex") = 206<br />objDict("DllIconSelIndex") = 206<br />objDict("DllPAth") = "C:\Program Files\Mercury Interactive\QuickTest Professional\bin\ContextManager.dll"<br /><br />' report the custom entry<br />Reporter.LogEvent "User", objDict, Reporter.GetContext<br /><br />End Function 'CustomReportEntry<br /><br /><br />Creating Custom Libraries<br /> An example of how to create your own custom library.<br /> This example will show 
you how to create your own customised code <br />library, using Visual Basic 6 as an example.<br /><br />The first thing to do is open Visual Basic and create a new ActiveX DLL project... <br /> <br /><br />Before we add any code, we should give the Project and the Class Library sensible names.<br /><br />Here I've called the project "QTP"...<br /> <br /><br />For the Class Library I've simply called it "Library"... <br /> <br /><br />Now we can add a function to our Library. For this example I'm going to use a very<br />basic function which will simply display a message box with a given parameter value...<br /><br /> <br /><br />The next thing we need to do is create the DLL; this can be done from the File menu in Visual Basic...<br /> <br /><br />Note that during the development of the DLL, you can simply press F5 to run the code in Visual <br />Basic. We can then still call the function from QTP; this allows us to put break-points inside<br />the Visual Basic code and do some debugging. <br /><br />Another thing to note is that when you finish the DLL and want to use it on other machines,<br />you will need to register the DLL on the system. This can be done by simply dragging and dropping<br />the DLL onto the file "RegSvr32.exe", which can be found in your Windows\System32 folder.<br /><br />Now that we have our new library ready, we can call the functions from QTP... <br /><br /><br />Dim objDLL<br /><br />' create an object for our new library<br />Set objDLL = CreateObject("QTP.Library")<br /><br />' call the function from the library<br />objDLL.QTPHelper_Example "Easy!"<br /><br />' destroy the object<br />Set objDLL = Nothing<br /><br /><br />And here is the end result...<br /> <br /><br />Using methods like this will open up several new doors for your automation by allowing you to <br />execute code which isn't as easy to implement in VBScript as it is in other languages. 
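The VB6 screenshots for this post have not survived in this archive. Judging from the objDLL.QTPHelper_Example "Easy!" call above, the class module compiled into the DLL would have looked roughly like this sketch (the parameter name is an assumption):

```vb
' Project name: QTP, class module name: Library (ActiveX DLL)
' Minimal sketch of the function described above - it simply
' shows a message box with whatever value QTP passes in.
Public Sub QTPHelper_Example(ByVal strMessage As String)
    MsgBox strMessage
End Sub
```

Once compiled (File > Make), CreateObject("QTP.Library") resolves to this class and QTPHelper_Example becomes callable from QTP.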
<br /><br /><br /><br />Running DOS Commands<br />Running Dos Commands <br /><br />' =============================================================<br />' Sub : ExecuteDosCommand<br />' desc : Run a single-line DOS command<br />' params : Command to run<br />' returns : void<br />' =============================================================<br />Sub ExecuteDosCommand(strCommand)<br /><br />Dim objShell <br /><br />' create the shell object<br />Set objShell = CreateObject("WSCript.shell") <br /><br />' run the command<br />objShell.run strCommand<br /><br />' destroy the object<br />Set objShell = Nothing <br /><br />End Sub 'ExecuteDosCommand<br /><br /><br /><br />Export Data Sheet<br />Export a data sheet at runtime.<br />' =============================================================<br />' function: ExportDataSheet<br />' desc : Exports a data sheet<br />' params : strFile - full path to save the exported xls, note<br />' that any existing xls will be deleted<br />' strSheet - sheet to export<br />' returns : void<br />' =============================================================<br />Function ExportDataSheet(strFile, strSheet)<br />Dim objFS<br />' create a file system object<br />Set objFS = CreateObject("Scripting.FileSystemObject")<br />' check that the xls doesn't already exist<br />If objFS.FileExists(strFile) Then<br />' delete it if it exists<br />ObjFS.DeleteFile strFile<br /><br />End If<br />' export the data table<br />DataTable.ExportSheet strFile, strSheet<br />' destroy the object<br />Set objFS = Nothing<br />End Function 'ExportDataSheet<br /><br /><br />Execute a Stored Procedure<br />Some code that should help you execute a stored procedure.<br /><br />' set the parameters of your database here<br />strDatabaseName = ""<br />strUser = ""<br />strPassword = ""<br />strStoredProcedureName = ""<br /><br />' create the database object<br />Set objDB = CreateObject("ADODB.Command")<br />' set the connection string<br />objDB.ActiveConnection = 
"DRIVER={Microsoft ODBC for Oracle}; " & _<br />"SERVER=" & strDatabaseName & _<br />";User ID=" & strUser & ";Password=" & strPassword & " ;"<br /><br />' set the command type to Stored Procedures<br />objDB.CommandType = 4 <br />objDB.CommandText = strStoredProcedureName <br /><br />' define Parameters for the stored procedure<br />objDB.Parameters.Refresh<br />' set parameters for stored procedure (i.e. two parameters here)<br />objDB.Parameters(0).Value = "Param1" <br />objDB.Parameters(1).Value = "Param2" <br /><br />' execute the stored procedure<br />objDB.Execute()<br />' destroy the object<br />Set objDB = Nothing <br /><br />Execute Method In Regular Expressions<br />Executing a regular expression to find text within a string.<br /><br />MsgBox RegularExpExample("QTP.", "QTP1 QTP2 qtp3 QTP4")<br /><br />' =============================================================<br />' function: RegularExpExample<br />' desc : Example of how to use the regular expression object<br />' to find text within a string<br />' params : strPattern is the regular expression<br />' strString is the string to use the expression on<br />' returns : An example string showing the results of the search<br />' =============================================================<br />Function RegularExpExample(strPattern, strString)<br /><br />Dim objRegEx, strMatch, strMatches <br />Dim strRet<br /><br />' create regular expression object<br />Set objRegEx = New RegExp <br /><br />' set the pattern<br />objRegEx.Pattern = strPattern <br /><br />' set it be not case sensitive<br />objRegEx.IgnoreCase = True <br /><br />' set global flag so we search all of the string, instead of just searching<br />' for the first occurrence<br />objRegEx.Global = True <br /><br />' execute search<br />Set strMatches = objRegEx.Execute(strString) <br /><br />' for each match<br />For Each strMatch in strMatches <br /><br />strRet = strRet & "Match found at position '" & _<br />strMatch.FirstIndex & "' - 
Matched Value is '" & _<br />strMatch.Value & "'" & vbCRLF<br /><br />Next<br /><br />RegularExpExample = strRet<br /><br />End Function ' RegularExpExample<br /><br /><br /><br />Export Data Table<br />Export a data table at runtime.<br />' =============================================================<br />' function: ExportDataTable<br />' desc : Exports a data table<br />' params : strFile - full path to save the exported xls, note<br />' that any existing xls will be deleted<br />' returns : void<br />' =============================================================<br />Function ExportDataTable(strFile)<br />Dim objFS<br />' create a file system object<br />Set objFS = CreateObject("Scripting.FileSystemObject")<br />' check that the xls doesn't already exist<br />If objFS.FileExists(strFile) Then<br />' delete it if it exists<br />ObjFS.DeleteFile strFile<br /><br />End If<br />' export the data table<br />DataTable.Export strFile<br />' destroy the object<br />Set objFS = Nothing<br />End Function 'ExportDataTable<br /><br />Read From Excel File<br />Read all 
the data from an Excel file.<br /><br />' =============================================================<br />' function: ReadXLS<br />' desc : Reads a sheet from an XLS file and stores the content<br />' in a multi-dimensional array<br />' params : strFileName is XLS file to read, including path<br />' strSheetName is the name of the sheet to read, e.g. "Sheet1"<br />' returns : Multi-dimensional array containing all data from <br />' the XLS<br />' =============================================================<br />Function ReadXLS(strFileName,strSheetName)<br /><br />Dim strData()<br />Dim objFS, objExcel, objSheet, objRange<br />Dim intTotalRow, intTotalCol<br />Dim intRow, intCol<br /><br />' create the file system object<br />Set objFS = CreateObject("Scripting.FileSystemObject")<br /><br />' ensure that the xls file exists<br />If Not objFS.FileExists(strFileName) Then<br /><br />' issue a fail if the file wasn't found<br />Reporter.ReportEvent micFail, "Read XLS", "Unable to read XLS file, file not found: " & strFileName<br />' file wasn't found, so exit the function<br />Exit Function<br /><br />End If ' file exists<br /><br />' create the excel object <br />Set objExcel = CreateObject("Excel.Application")<br /><br />' open the file<br />objExcel.Workbooks.open strFileName<br /><br />' select the worksheet<br />Set objSheet = objExcel.ActiveWorkbook.Worksheets(strSheetName)<br /><br />' select the used range<br />Set objRange = objSheet.UsedRange<br /><br />' count the number of rows<br />intTotalRow=CInt(Split(objRange.Address, "$")(4)) - 1<br /><br />' count the number of columns<br />intTotalCol= objSheet.Range("A1").CurrentRegion.Columns.Count<br /><br />' redimension the multi-dimensional array to accommodate each row and column<br />ReDim strData(intTotalRow, intTotalCol)<br /><br />' for each row<br />For intRow = 0 to intTotalRow - 1<br /><br />' for each column<br />For intCol =0 to intTotalCol - 1<br /><br />' store the data from the cell in the 
array<br />strData(intRow, intcol) = Trim(objSheet.Cells(intRow + 2,intcol + 1).Value)<br /><br />Next ' column<br /><br />Next ' row<br /><br />' close the excel object<br />objExcel.DisplayAlerts = False<br />objExcel.Quit <br /><br />' destroy the other objects <br />Set objFS = Nothing <br />Set objExcel = Nothing<br />Set objSheet = Nothing <br /><br />' return the array containing the data<br />ReadXLS = strData<br /><br />End Function ' ReadXLS<br /><br />File Browser<br />Opens a standard dialog which allows the user to choose a file.<br />' =============================================================<br />' function : FileBrowser<br />' desc : Opens a standard Open File Dialog <br />' params : strTitle - the title to apply to the dialog<br />' strFilter - the filter to apply to the dialog<br />' returns : The selected file, including path<br />' =============================================================<br />Public Function FileBrowser(strTitle, strFilter)<br /><br />Dim objDialog<br />' create a common dialog object <br />Set objDialog = CreateObject("MSComDlg.CommonDialog")<br />' set the properties and display the dialog <br />With objDialog<br />.DialogTitle = strTitle<br />.Filter = strFilter<br />.MaxFileSize = 260<br />.ShowOpen<br />End With<br />' return the selected file <br />FileBrowser = objDialog.FileName<br />' destroy the object <br />Set objDialog = Nothing<br /><br />End Function ' FileBrowser<br />File Exists<br />Check to see if a local or network file exists.<br />' =============================================================<br />' function: CheckFileExists<br />' desc : Checks to see if a file exists<br />' params : strFile - full path of the file to find<br />' returns : True if file exists, False otherwise<br />' =============================================================<br />Function CheckFileExists(strFile)<br />Dim objFS<br />' create a file system object<br />Set objFS = CreateObject("Scripting.FileSystemObject")<br />' 
check that the source file exists<br />If objFS.FileExists(strFile) Then<br />' file exists, return true<br />CheckFileExists = True<br /><br />Else <br /><br />' file does not exist, return false<br />CheckFileExists = False<br /><br />End If<br />' destroy the object<br />Set objFS = Nothing<br /><br />End Function 'CheckFileExists<br /><br /><br />Folder Exists<br />Check to see if a local or network folder exists.<br />' =============================================================<br />' function: CheckFolderExists<br />' desc : Checks to see if a folder exists<br />' params : strFolder - full path of the folder to find<br />' returns : True if folder exists, False otherwise<br />' =============================================================<br />Function CheckFolderExists(strFolder)<br />Dim objFS<br />' create a file system object<br />Set objFS = CreateObject("Scripting.FileSystemObject")<br />' check that the folder exists<br />If objFS.FolderExists(strFolder) Then<br />' folder exists, return true<br />CheckFolderExists = True<br /><br />Else<br /><br />' folder does not exist, return false<br />CheckFolderExists = False<br /><br />End If<br />' destroy the object<br />Set objFS = Nothing<br /><br />End Function 'CheckFolderExists<br /><br /><br />Create Folder<br />Create a local or network folder.<br />' =============================================================<br />' function: FolderCreate<br />' desc : Creates a folder<br />' params : strFolderPath - the folder to create (full path)<br />' returns : void<br />' =============================================================<br />Function FolderCreate(strFolderPath)<br />Dim objFS<br />' create a file system object<br />Set objFS = CreateObject("Scripting.FileSystemObject")<br />' create the folder<br />If Not objFS.FolderExists(strFolderPath) Then<br />objFS.CreateFolder strFolderPath<br />End If<br />' destroy the object<br />Set objFS = Nothing<br /><br />End Function 'FolderCreate<br /><br /><br />Delete Folder<br />Delete a local or network folder.<br />' 
=============================================================<br />' function: FolderDelete<br />' desc : Deletes a folder and all of its contents<br />' params : strFolder - the folder to delete<br />' returns : void<br />' =============================================================<br /><br />Function FolderDelete(strFolder)<br />Dim objFS<br />' create a file system object<br />Set objFS = CreateObject("Scripting.FileSystemObject")<br />' check that the source folder exists<br />If Not objFS.FolderExists(strFolder) Then<br />' fail if the source does not exist<br />reporter.ReportEvent micFail, "Delete Folder", "Unable to Delete Folder '"& strFolder &"', It Does Not Exist"<br />Else<br />' delete the folder<br />objFS.DeleteFolder strFolder<br /><br />End If<br />' destroy the object<br />Set objFS = Nothing<br /><br />End Function 'FolderDelete<br />Move Folder<br />Move a local or network folder.<br />' =============================================================<br />' function: FolderMove<br />' desc : Moves a folder and all of its files to a new path<br />' params : strSourceFolder - the folder to move<br />' strDestinationFolder - the location to move to<br />' returns : void<br />' =============================================================<br />Function FolderMove(strSourceFolder, strDestinationFolder)<br />Dim objFS<br />' create a file system object<br />Set objFS = CreateObject("Scripting.FileSystemObject")<br />' check that the source folder exists<br />If Not objFS.FolderExists(strSourceFolder) Then<br />' fail if the source does not exist<br />reporter.ReportEvent micFail, "Move Folder", "Source Folder '"& strSourceFolder &"' Does Not Exist"<br />Else<br />' check that the destination folder doesn't already exist<br />If Not objFS.FolderExists(strDestinationFolder) Then<br /><br />' move the folder<br />objFS.MoveFolder strSourceFolder, strDestinationFolder<br />Else<br />' fail if the target folder was already in place<br
/>reporter.ReportEvent micFail, "Move Folder", "Unable to Move Folder as the Target '" & strDestinationFolder & "' Already Exists"<br /><br />End If<br /><br />End If<br />' destroy the object<br />Set objFS = Nothing<br /><br />End Function 'FolderMove<br />Copy Folder<br />Copy a local or network folder.<br />' =============================================================<br />' function: FolderCopy<br />' desc : Copys a folder and all of its files to a new path<br />' params : strSourceFolder - the folder to copy<br />' strDestinationFolder - the location to copy to<br />' returns : void<br />' =============================================================<br /><br />Function FolderCopy(strSourceFolder, strDestinationFolder)<br />Dim objFS<br />' create a file system object<br />Set objFS = CreateObject("Scripting.FileSystemObject")<br />' check that the source folder exists<br />If Not objFS.FolderExists(strSourceFolder) Then<br />' fail if the source does not exist<br />reporter.ReportEvent micFail, "Copy Folder", "Source Folder '"& strSourceFolder &"' Does Not Exist"<br />Else<br />' create the destination folder if it doesn't already exist<br />If Not objFS.FolderExists(strDestinationFolder) Then<br />objFS.CreateFolder(strDestinationFolder)<br />End If<br />' copy the folder<br />objFS.CopyFolder strSourceFolder, strDestinationFolder<br /><br />End If<br />' destroy the object<br />Set objFS = Nothing<br /><br />End Function 'FolderCopy<br />Folder Exists<br />Check to see if a local or network folder exists.<br />' =============================================================<br />' function: CheckFolderExists<br />' desc : Checks to see if a folder exists<br />' params : strFolder - full path of the folder to find<br />' returns : True if folder exists, False otherwise<br />' =============================================================<br />Function CheckFolderExists(strFile)<br />Dim objFS<br />' create a file system object<br />Set objFS = 
CreateObject("Scripting.FileSystemObject")<br />' check that the source file exists<br />If objFS.FolderExists(strFolder) Then<br />' file exists, return true<br />CheckFolderExists = True<br /><br />Else<br /><br />' file exists, return false<br />CheckFolderExists = False<br /><br />End If<br />' destroy the object<br />Set objFS = Nothing<br /><br />End Function 'CheckFolderExists<br />Read a Text File<br />Example of how to read a text file line-by-line.<br /><br />' reading a file line by line<br /><br />Const ForReading = 1<br /><br />' create file system object<br />Set objFS = CreateObject("Scripting.FileSystemObject")<br /><br />' first check that the file exists<br />If objFS.FileExists("c:\TextFile.txt") Then<br /><br />' open the text file for reading<br />Set objFile = objFS.OpenTextFile("c:\TextFile.txt", ForReading, False)<br /><br />' do until at end of file<br />Do Until objFile.AtEndOfStream<br /><br />' store the value of the current line in the file<br />strLine = objFile.ReadLine<br /><br />' show the line from the file<br />MsgBox strLine<br /><br />Loop ' next line<br /><br />' close the file<br />objFile.Close<br /><br />Set objFile = Nothing<br /><br />Else ' file doesn't exist<br /><br />' report a failure<br />Reporter.ReportEvent micFail, "Read File", "File not found"<br /><br />End if ' file exists<br /><br />' destroy the objects<br />Set objFS = Nothing<br /><br /><br />Write to a File<br />Example of how to write text to a file.<br /><br />' =============================================================<br />' function: AppendFile<br />' desc : Writes a line of text to a text file, text file is<br />' created if it doesn't already exist<br />' params : strFileName is the name of the file to write to<br />' strLine is the text to write to the file<br />' returns : void<br />' =============================================================<br />Function AppendFile(strFileName, strLine)<br /><br />Dim objFS<br /><br />Const ForAppending = 
8<br /><br />' create the file system object<br />Set objFS = CreateObject("Scripting.FileSystemObject")<br /><br />' open/create the text file<br />Set objFile = objFS.OpenTextFile(strFileName, ForAppending, True)<br /><br />' write the line<br />objFile.WriteLine strLine<br /><br />' close the file<br />objFile.Close<br /><br />' destroy the objects<br />Set objFile = Nothing<br />Set objFS = Nothing<br /><br />End Function ' AppendFile<br /><br /><br />Get Temporary File Name<br />Generate a unique temporary file name.<br />' =============================================================<br />' function: GetTemporaryFileName<br />' desc : Generates a unique file name in the windows <br />' temporary folder<br />' params : none<br />' returns : A unique temporary file, including path<br />' =============================================================<br />Function GetTemporaryFileName<br />Const TemporaryFolder = 2<br />Dim objFS<br />Dim objTempFolder<br />' create the file system object<br />Set objFS = CreateObject("Scripting.FileSystemObject")<br />' get the path to the temporary folder<br />Set objTempFolder = objFS.GetSpecialFolder(TemporaryFolder)<br />' return the path plus a unique temporary file name<br />GetTemporaryFileName = objTempFolder.Path & "\" & objFS.GetTempName <br />' destroy the objects<br />Set objFS = Nothing<br />Set objTempFolder = Nothing<br />End Function 'GetTemporaryFileName<br /><br /><br />Create Unique File Name<br />Create a unique file name.<br /><br />' =============================================================<br />' function: UniqueFileName<br />' desc : Creates a unique file name<br />' params : strType - file extension<br />' returns : unique file name of specified type<br />' =============================================================<br />Function UniqueFileName(strType)<br /><br />Dim strReturn<br /><br />' make sure there is a dot before the type<br />If Left(strType, 1) <> "." Then strType = "." 
& strType<br /><br />' build the file name from the current date & time parts (this avoids the / and : chars)<br />strReturn = day(date) & month(date) & year(date) & hour(time) & minute(time) & second(time) & strType<br /><br />' return the file name<br />UniqueFileName = strReturn<br /><br />End Function 'UniqueFileName<br /><br /><br />Compare Files<br />Compare the contents of two text files.<br /><br />' =============================================================<br />' function: CompareFiles<br />' desc : Compares two text files<br />' params : strFile1 is the first file<br />' strFile2 is the second file<br />' returns : True if they are the same, False otherwise<br />' =============================================================<br />Function CompareFiles(strFile1, strFile2)<br /><br />Dim objFS<br />Dim objFileA, objFileB<br />Dim strLineA, strLineB<br />Dim intCompareResult<br /><br />' create a file scripting object<br />Set objFS = CreateObject("Scripting.FileSystemObject")<br /><br />' open each of the files for reading<br />Set objFileA = objFS.OpenTextFile(strFile1, 1)<br />Set objFileB = objFS.OpenTextFile(strFile2, 1)<br /><br />' repeat the following while neither file is at its end<br />' (reading past the end of the shorter file would raise an error)<br />Do While ((objFileA.AtEndOfStream <> True) And (objFileB.AtEndOfStream <> True))<br /><br />' read the next line from both files<br />strLineA = objFileA.ReadLine<br />strLineB = objFileB.ReadLine<br /><br />' perform a case-sensitive comparison on the line from each file<br />intCompareResult = StrComp(strLineA, strLineB, 0)<br /><br />' if the value of the comparison is not 0, lines are different<br />If (intCompareResult <> 0) Then<br /><br />' found a difference in the files, so close them both<br />objFileA.Close<br />objFileB.Close<br /><br />' destroy the object<br />Set objFS = Nothing<br /><br />' return false<br />CompareFiles = False<br /><br />' exit the function<br />Exit Function<br /><br />End If ' if different<br /><br />Loop ' until end of file<br /><br />' all compared lines matched; the files are only equal if both ended together (same line count)<br />CompareFiles = (objFileA.AtEndOfStream And objFileB.AtEndOfStream)<br /><br />' close both files<br />objFileA.Close<br />objFileB.Close<br /><br />' destroy the object<br />Set objFS = Nothing<br /><br />End Function 'CompareFiles<br /><br />Create Desktop Shortcut<br />Create a shortcut on the desktop.<br /><br />' =============================================================<br />' function: CreateDesktopShortcut<br />' desc : Creates a shortcut on the desktop<br />' params : strTargetPath is the full path to the file you <br />' are creating the shortcut to, i.e. c:\doc\me.txt<br />' strLinkName is the name of the shortcut, as it <br />' appears on the desktop<br />' strDesc is the description to set within the shortcut<br />' returns : void<br />' =============================================================<br />Sub CreateDesktopShortcut(strTargetPath, strLinkName, strDesc)<br /><br />Dim objShell, objShortCut<br />Dim strDesktopFolder<br /><br />' ensure that the link name ends with .lnk<br />If LCase(Right(strLinkName, 4)) <> ".lnk" Then strLinkName = strLinkName & ".lnk"<br /><br />' create a shell object <br />Set objShell = CreateObject("WScript.Shell")<br /><br />' get the desktop folder<br />strDesktopFolder = objShell.SpecialFolders("AllUsersDesktop")<br /><br />' create required shortcut object on the desktop<br />Set objShortCut = objShell.CreateShortcut(strDesktopFolder & "\" & strLinkName)<br /><br />' set the path within the shortcut<br />objShortCut.TargetPath = strTargetPath<br /><br />' set the description<br />objShortCut.Description = strDesc<br /><br />' save the shortcut<br />objShortCut.Save<br /><br />' destroy the objects<br />Set objShortCut = Nothing<br />Set objShell = Nothing<br /><br />End Sub ' CreateDesktopShortcut<br /><br /><br />Read From Excel File<br />Read all the data from an Excel file.<br /><br />' =============================================================<br />' function: ReadXLS<br />' desc : Reads a sheet from an XLS file and stores the content<br />' in a multi-dimensional array<br />' params : 
strFileName is XLS file to read, including path<br />' strSheetName is the name of the sheet to read, i.e "Sheet1"<br />' returns : Multi-dimensional array containing all data from <br />' the XLS<br />' =============================================================<br />Function ReadXLS(strFileName, strSheetName)<br /><br />Dim strData()<br />Dim objFS, objExcel, objSheet, objRange<br />Dim intTotalRow, intTotalCol<br />Dim intRow, intCol<br /><br />' create the file system object<br />Set objFS = CreateObject("Scripting.FileSystemObject")<br /><br />' ensure that the xls file exists<br />If Not objFS.FileExists(strFileName) Then<br /><br />' issue a fail if the file wasn't found<br />Reporter.ReportEvent micFail, "Read XLS", "Unable to read XLS file, file not found: " & strFileName<br />' file wasn't found, so destroy the object and exit the function<br />Set objFS = Nothing<br />Exit Function<br /><br />End If ' file exists<br /><br />' create the excel object <br />Set objExcel = CreateObject("Excel.Application")<br /><br />' open the file<br />objExcel.Workbooks.Open strFileName<br /><br />' select the worksheet<br />Set objSheet = objExcel.ActiveWorkbook.Worksheets(strSheetName)<br /><br />' select the used range<br />Set objRange = objSheet.UsedRange<br /><br />' count the number of data rows (the last row of the used range, minus the header row)<br />intTotalRow = CInt(Split(objRange.Address, "$")(4)) - 1<br /><br />' count the number of columns<br />intTotalCol = objSheet.Range("A1").CurrentRegion.Columns.Count<br /><br />' redimension the multi-dimensional array to accommodate each row and column<br />ReDim strData(intTotalRow, intTotalCol)<br /><br />' for each row<br />For intRow = 0 To intTotalRow - 1<br /><br />' for each column<br />For intCol = 0 To intTotalCol - 1<br /><br />' store the data from the cell in the array (the + 2 skips the header row)<br />strData(intRow, intCol) = Trim(objSheet.Cells(intRow + 2, intCol + 1).Value)<br /><br />Next ' column<br /><br />Next ' row<br /><br />' close the excel object<br />objExcel.DisplayAlerts = False<br />objExcel.Quit <br /><br />' destroy the other objects <br />Set objFS = Nothing <br />Set objExcel = Nothing<br />Set objSheet = Nothing <br /><br />' return the array containing the data<br />ReadXLS = strData<br /><br />End Function ' ReadXLS<br /><br /><br />Get Child Objects<br />Find all checkboxes on a webpage.<br />Here's a basic example that will find and tick all of the checkboxes on the QTP Helper login screen.<br />Dim objDescription<br />Dim objCheckBoxes<br />Dim iCount<br /><br />' create a description object used to locate check boxes<br />Set objDescription = Description.Create()<br /><br />' set the object properties so it looks only for web check boxes<br />objDescription("micclass").Value = "WebCheckBox"<br /><br />' check that the user isn't already logged in<br />If Browser("Title:=QTP Helper.*").Page("Title:=QTP Helper.*").WebButton("Name:=Logout").Exist(1) Then<br /><br />' click logout<br />Browser("Title:=QTP Helper.*").Page("Title:=QTP Helper.*").WebButton("Name:=Logout").Click <br /><br />End If<br /><br />' get the check boxes from the page<br />Set objCheckBoxes = Browser("Title:=QTP Helper.*").Page("Title:=QTP Helper.*").ChildObjects(objDescription)<br /><br />' for each check box found<br />For iCount = 0 To objCheckBoxes.Count - 1<br /><br />' tick the check box (items returned by ChildObjects can be used directly)<br />objCheckBoxes(iCount).Set "ON"<br /><br />Next<br /><br /><br />Get Disk Information<br />Get information about one of your disk drives.<br /><br />Dim intSectors, intBytes, intFreeC, intTotalC, intTotal, intFreeb<br /><br />' include this windows api<br />extern.Declare micLong, "GetDiskFreeSpace", "kernel32.dll", "GetDiskFreeSpaceA", micString + micByref, micLong + micByref, micLong + micByref, micLong + micByref, micLong + micByref<br /><br />' set these values<br />intSectors = 255<br />intBytes = 255<br />intFreeC = 255<br />intTotalC = 255<br /><br />' calculate the disk space, using C: in this example<br />intSpaceAvailable = 
extern.GetDiskFreeSpace("c:\", intSectors, intBytes, intFreeC, intTotalC)<br /><br />' calculate the totals<br />intTotal = intTotalC * intSectors * intBytes<br />intFreeb = intFreeC * intSectors * intBytes<br /><br />' show the outputs<br />MsgBox intSectors<br />MsgBox intBytes<br />MsgBox intFreeC<br />MsgBox intTotalC<br />MsgBox intTotal<br />MsgBox intFreeb<br /><br /><br />Get IP Address<br />Get your machine's IP address.<br /><br />' =============================================================<br />' function: GetIPAddress<br />' desc : Returns the IP address of the PC<br />' params : Void<br />' returns : IP Address<br />' =============================================================<br />Function GetIPAddress()<br /><br />' get the ip addresses<br />Set IPConfigSet = GetObject("winmgmts:{impersonationLevel=impersonate}").ExecQuery _<br />("select IPAddress from Win32_NetworkAdapterConfiguration where IPEnabled=TRUE")<br /><br />' for each item in the collection<br />For Each IPConfig in IPConfigSet<br /><br />' if the item isn't empty<br />If Not IsNull(IPConfig.IPAddress) Then<br /><br />' loop through the addresses<br />For i = LBound(IPConfig.IPAddress) to UBound(IPConfig.IPAddress)<br /><br />' set the return value<br />ipAddr = IPConfig.IPAddress(i)<br /><br />Next<br /><br />End If<br /><br />Next<br /><br />' destroy the object<br />Set IPConfigSet = Nothing<br /><br />' return the ip<br />GetIPAddress = ipAddr <br /><br />End Function ' GetIPAddress<br /><br />Get System Information<br />Get system information like User Name and Computer Name.<br /><br />Dim objNet<br /><br />' create a network object<br />Set objNet = CreateObject("WScript.NetWork")<br /><br />' show the user name<br />MsgBox "User Name: " & objNet.UserName <br /><br />' show the computer name<br />MsgBox "Computer Name: " & objNet.ComputerName <br /><br />' show the domain name<br />MsgBox "Domain Name: " & objNet.UserDomain<br /><br />' destroy the object<br />Set objNet = Nothing <br 
/><br />Get System Variable Value<br />Get a value from a Windows System Variable.<br /><br />' for example to get the oracle home path<br />MsgBox GetSystemVariable("ORACLE_HOME")<br /><br />' =============================================================<br />' function: GetSystemVariable<br />' desc : Get the value of a system variable<br />' params : strSysVar is the variable name<br />' returns : Content of variable name<br />' =============================================================<br />Function GetSystemVariable(strSysVar)<br /><br />Dim objWshShell, objWshProcessEnv<br /><br />' create the shell object<br />Set objWshShell = CreateObject("WScript.Shell")<br />Set objWshProcessEnv = objWshShell.Environment("Process")<br /><br />' return the system variable content <br />GetSystemVariable = objWshProcessEnv(strSysVar)<br /><br />End Function ' GetSystemVariable<br /><br />Import Data Sheet<br />Import a data sheet into your test at runtime.<br />' 
=============================================================<br />' function: ImportDataSheet<br />' desc : Imports a single data sheet<br />' params : strFile - full path of the xls file with the sheet<br />' strSource - name of the sheet on the xls<br />' strTarget - name of the sheet to import it to<br />' returns : void<br />' =============================================================<br />Function ImportDataSheet(strFile, strSource, strTarget)<br />Dim objFS<br />' create a file system object<br />Set objFS = CreateObject("Scripting.FileSystemObject")<br />' check that the source file exists<br />If objFS.FileExists(strFile) Then<br />' ensure that our target sheet exists<br />DataTable.AddSheet strTarget<br />' import the sheet<br />DataTable.ImportSheet strFile, strSource, strTarget<br />Else<br />' fail if the xls was not found<br />Reporter.ReportEvent micFail, "Import Data Sheet", "Unable to Import Data Sheet From '" & strFile & "', File Does Not Exist"<br />End If<br />' destroy the object<br />Set objFS = Nothing<br />End Function 'ImportDataSheet<br /><br /><br />Import Data Table<br />Import a data table into your test at runtime.<br />' =============================================================<br />' function: ImportDataTable<br />' desc : Imports a data table<br />' params : strFile - full path of the xls file to import<br />' returns : void<br />' =============================================================<br />Function ImportDataTable(strFile)<br />Dim objFS<br />' create a file system object<br />Set objFS = CreateObject("Scripting.FileSystemObject")<br />' check that the source file exists<br />If objFS.FileExists(strFile) Then<br />' import the data table<br />DataTable.Import strFile<br />Else<br />' fail if the xls was not found<br />Reporter.ReportEvent micFail, "Import Data Table", "Unable to Import Data Table From '" & strFile & "', File Does Not Exist"<br />End If<br />' destroy the object<br />Set objFS = Nothing<br />End 
Function 'ImportDataTable<br /><br /><br />Sending Key Presses (SendKeys)<br />Examples of how to simulate key presses.<br />Dim objShell<br />' Create the shell object<br />Set objShell = CreateObject ("WSCript.shell")<br />' Various key press examples <br />objShell.SendKeys "Hello" ' Hello<br />objShell.SendKeys "{F4}" ' F4<br />objShell.SendKeys "^{F4}" ' CTRL-F4<br />objShell.SendKeys "+{F4}" ' SHIFT-F4<br />objShell.SendKeys "%{F4}" ' ALT-F4<br />' Destroy the object<br />Set objShell = Nothing<br /><br />Locate Method (Checking text within text)<br />Using Locate to determine if specific text exists within a string.<br /><br />MsgBox LocateText("www.QTPHelper.com", "QTP")<br />MsgBox LocateText("www.QTPHelper.com", "QTP.*.com")<br /><br />' =============================================================<br />' function: LocateText<br />' desc : Uses a regular expression to locate text within a string<br />' params : strString is the string to perform the search on<br />' strPattern is the regular expression<br />' returns : True if the pattern was found, False otherwise<br />' =============================================================<br />Function LocateText(strString, strPattern)<br /><br />Dim objRegEx<br /><br />' create the regular expression<br />Set objRegEx = New RegExp <br /><br />' set the pattern <br />objRegEx.Pattern = strPattern<br /><br />' ignore the casing<br />objRegEx.IgnoreCase = True<br /><br />' perform the search<br />LocateText = objRegEx.Test(strString)<br /><br />' destroy the object<br />Set objRegEx = Nothing<br /><br />End Function ' LocateText <br /><br /><br />Write to a Log File<br />Write information to a log file.<br /><br />' =============================================================<br />' function: WriteLog<br />' desc : Writes a message to a log file. 
File is created<br />' inside a Log folder of the current directory<br />' params : strCode is a code to prefix the message with<br />' strMessage is the message to add to the file<br />' returns : void<br />' =============================================================<br />Function WriteLog(strCode, strMessage)<br /><br />Dim objFS<br />Dim objFile<br />Dim objFolder<br />Dim strFileName<br /><br />' create a file system object<br />Set objFS = CreateObject("Scripting.FileSystemObject")<br /><br />' is there a log folder in the directory that we are currently working in?<br />If Not objFS.FolderExists(objFS.GetAbsolutePathName(".") & "\log") Then<br /><br />' if there is no log folder, create one<br />Set objFolder = objFS.CreateFolder(objFS.GetAbsolutePathName(".") & "\log") <br /><br />End If ' folder exists<br /><br />' set a name for the log file using year, month and day values<br />strFileName = objFS.GetAbsolutePathName(".") & "\log\" & year(date) & month(date) & day(date) & ".log"<br /><br />' open/create the log file for appending<br />Set objFile = objFS.OpenTextFile(strFileName, 8, True)<br /><br />' in case of any issues writing the file<br />On Error Resume Next<br /><br />' write the log entry, include a carriage return<br />objFile.Write Date & ", " & Time & ", " & strCode & ", " & strMessage & vbCrLf<br /><br />' disable the on error statement<br />On Error GoTo 0<br /><br />' close the log file<br />objFile.Close<br /><br />' destroy the object<br />Set objFS = Nothing<br /><br />End Function ' WriteLog<br /><br /><br />Loop Basics<br />Some basic information about various loop types.<br />' Loops allow you to run a group of statements repeatedly.<br />'<br />' There are four types of loop available, all very easy to <br />' use and understand. 
This code sample will explain how<br />' to use each type of loop.<br />'<br /><br />' Do...Loop<br /><br />' The Do...Loop will run a block of statements repeatedly <br />' while a condition is True, or until a condition becomes True<br /><br />' Check these two examples of Do...While, there is one major difference <br />' between them. In Example A the code will check the value of intCounter<br />' before it enters the loop, but in Example B the code will enter the <br />' loop regardless of the value of intCounter.<br /><br />' Example A<br />intCounter = 0<br />Do While intCounter < 5<br />intCounter = intCounter + 1<br />MsgBox intCounter<br />Loop<br /><br />' Example B<br />intCounter = 0<br />Do<br />intCounter = intCounter + 1<br />MsgBox intCounter <br />Loop While intCounter < 5<br /><br />' Here are the same examples using Do...Until<br />' Example A<br />intCounter = 0<br />Do Until intCounter = 6<br />intCounter = intCounter + 1<br />MsgBox intCounter<br />Loop<br /><br />' Example B<br />intCounter = 0<br />Do<br />intCounter = intCounter + 1<br />MsgBox intCounter <br />Loop Until intCounter = 6<br /><br /><br />' For...Next <br /><br />' For...Next loops will execute a series of statements until a specific counter value <br />' is reached.<br />For iCounter = 1 To 5<br />MsgBox iCounter<br />Next<br /><br />' You can add a Step keyword to define how much the counter should increase with each<br />' iteration of the loop<br />For iCounter = 1 To 10 Step 2<br />MsgBox iCounter<br />Next <br /><br />' The Step keyword can also be used to iterate backwards<br />For iCounter = 5 To 1 Step -1<br />MsgBox iCounter<br />Next <br /><br /><br />' For...Each<br /><br />' Another variation on the For...Next loop is the For Each loop. The For Each<br />' loop is used to execute a series of statements for each object in a collection, <br />' i.e. each element of an array. 
For example...<br />Dim strPeopleList<br />Dim strPerson<br />strPeopleList = Array("Alan", "Bob", "Craig", "Dan")<br />For Each strPerson In strPeopleList<br />MsgBox strPerson<br />Next<br /><br /><br />' While...Wend Loops<br />'<br />' This type of loop will execute a series of statements as long as <br />' a given condition is true.<br />' Note: It's advisable to avoid using this type of loop, you should<br />' use the Do...Loop instead.<br />' Here's an example anyway...<br />iCounter = 0<br />While iCounter < 5<br />iCounter = iCounter + 1<br />MsgBox iCounter<br />Wend<br /><br /><br />Minimize QTP<br />Minimize the main QTP window.<br /><br />' =============================================================<br />' function: MinimizeQTP<br />' desc : Minimize QTP window<br />' params : None<br />' returns : void<br />' =============================================================<br />Function MinimizeQTP()<br /><br />Dim objQTP<br /><br />' create a qtp object<br />Set objQTP = GetObject("", "QuickTest.Application")<br /><br />' set the window state to minimized<br />objQTP.WindowState = "Minimized"<br /><br />' destroy the object<br />Set objQTP = Nothing<br /><br />End Function 'MinimizeQTP<br /><br /><br />Move File<br />Move a file from one location to another.<br />' =============================================================<br />' function: FileMove<br />' desc : Moves a file from one location to another<br />' params : strFile - full path to the source file<br />' strTarget - the folder to move the file to<br />' returns : void<br />' =============================================================<br />Function FileMove(strFile, strTarget)<br /><br />Dim objFS<br /><br />' create a file system object<br />Set objFS = CreateObject("Scripting.FileSystemObject")<br /><br />' check that the source file exists<br />If Not objFS.FileExists(strFile) Then<br /><br />' fail if the source does not exist<br />reporter.ReportEvent micFail, "Move File", "Unable to Move the File 
'"& strFile &"', It Does Not Exist"<br /><br />Else<br /><br />' create the destination folder if it doesn't already exist<br />If Not objFS.FolderExists(strTarget) Then<br /><br />objFS.CreateFolder(strTarget)<br /><br />End If<br /><br />' move the file<br />objFS.MoveFile strFile, strTarget <br /><br />End If<br /><br />' destroy the object<br />Set objFS = Nothing<br /><br />End Function 'FileMove<br /><br /><br />Displaying Dialog Boxes<br />How to display and use various types of dialog box.<br />' display a basic message box<br />MsgBox "Hi, this is a message box", vbOkOnly, "Message Title"<br /><br /><br />' prompt the user 
with a question<br />strAnswer = InputBox("Hi, how are you today?", "Question")<br />' show the user what they just typed<br />MsgBox "You are - " & strAnswer<br /><br /><br />' ask the user to select an option<br />strAnswer = MsgBox("Do you want to proceed?", vbYesNo, "Question")<br />' show the user what they just selected<br />If strAnswer = vbNo Then<br />MsgBox "You selected No"<br />Else<br />MsgBox "You selected Yes"<br />End If<br />Note: Here are the various message types you can play with...<br />vbOKOnly<br />vbOKCancel<br />vbAbortRetryIgnore<br />vbYesNoCancel<br />vbYesNo<br />vbRetryCancel<br />vbCritical<br />vbQuestion<br />vbExclamation<br />vbInformation<br /><br /><br />Capture Screenshot<br />Capture and save a PNG of the entire screen.<br />' =============================================================<br />' function: ScreenShot<br />' desc : Creates a png of the entire screen<br />' params : n/a<br />' returns : name of saved png<br />' =============================================================<br />Function ScreenShot()<br />Dim strPNG<br />Dim objDesktop<br />' set a unique file name using the date/time<br />strPNG = "C:\Screenshot_" & day(date) & month(date) & year(date) & _<br />hour(time) & minute(time) & second(time) & ".png"<br />' desktop object<br />Set objDesktop = Desktop<br />' capture a png of the desktop<br />objDesktop.CaptureBitmap strPNG, True<br />' return the file name<br />ScreenShot = strPNG<br />' destroy the object<br />Set objDesktop = Nothing<br />End Function 'ScreenShot<br /><br /><br />Override Existing Object Method<br />Override an existing object method.<br />' override the Set method with SetWithDebug<br />RegisterUserFunc "WebEdit", "Set", "SetWithDebug"<br /><br />' =============================================================<br />' function : SetWithDebug<br />' desc : Sets the value of an edit box with additional logging<br />' =============================================================<br />Function 
SetWithDebug(objEdit, strValue)<br /><br />' your additional logging here<br />' set the text<br />SetWithDebug = objEdit.Set(strValue)<br /><br />End Function<br /><br /><br />Registering a Procedure<br />Register a procedure with an object class.<br /><br />' add GetItemsCount as a method of the WebList class<br />RegisterUserFunc "WebList", "GetItemsCount", "GetItemsCountFunction"<br /><br />' =============================================================<br />' function : GetItemsCountFunction<br />' desc : Returns the number of items from a weblist<br />' =============================================================<br />Function GetItemsCountFunction(objWebList)<br /><br />If (objWebList Is Nothing) Then<br />GetItemsCountFunction = 0<br />Else<br />GetItemsCountFunction = objWebList.GetROProperty("Items Count")<br />End If<br /><br />End Function<br /><br />Using Programmatic Descriptions<br />Using Programmatic Descriptions to interact with a web page.<br />This example illustrates how to use programmatic descriptions to interact with a web page, www.QTPHelper.com to be more exact...<br />Note that I've used a simple regular expression in the Browser and Page descriptions, just in case the titles change in the future.<br />' click the Home link<br />Browser("Title:=QTP Helper.*").Page("Title:=QTP Helper.*").Link("Text:=Home").Click <br /><br />' check that the user isn't already logged in<br />If Browser("Title:=QTP Helper.*").Page("Title:=QTP Helper.*").WebButton("Name:=Logout").Exist(1) Then<br /><br />' click logout<br />Browser("Title:=QTP Helper.*").Page("Title:=QTP Helper.*").WebButton("Name:=Logout").Click<br /><br />End If ' user logged in<br /><br />' set the username<br />Browser("Title:=QTP Helper.*").Page("Title:=QTP Helper.*").WebEdit("Name:=username").Set "User"<br /><br />' set the password<br />Browser("Title:=QTP Helper.*").Page("Title:=QTP Helper.*").WebEdit("Name:=passwd").Set "Password"<br /><br />' tick the remember-me tickbox<br />Browser("Title:=QTP 
Helper.*").Page("Title:=QTP Helper.*").WebCheckBox("Name:=remember").Set "ON"<br /><br /><br />Query a Database<br />Simple example of how to query an Access database.<br />Dim objDB <br />Dim objRS <br />' create a database and recordset objects<br />Set objDB = CreateObject("ADODB.Connection")<br />Set objRS = CreateObject("ADODB.RecordSet")<br />' configure the connection<br />objDB.Provider="Microsoft.Jet.OLEDB.4.0"<br />objDB.Open "c:\MyTestDatabase.mdb"<br />' count the number of records in the employee table<br />objRS.Open "SELECT COUNT(*) from Employee" , objDB<br />Msgbox "There are " & objRS.Fields(0).Value & " records in the employee table."<br />' close and destroy the objects<br />objRS.Close<br />objDB.Close<br />Set objDB = Nothing<br />Set objRS = Nothing<br /><br /><br />Read a Text File<br />Example of how to read a text file line-by-line.<br /><br />' reading a file line by line<br /><br />Const ForReading = 1<br /><br />' create file system object<br />Set objFS = CreateObject("Scripting.FileSystemObject")<br /><br />' first check that the file exists<br />If objFS.FileExists("c:\TextFile.txt") Then<br /><br />' open the text file for reading<br />Set objFile = objFS.OpenTextFile("c:\TextFile.txt", ForReading, False)<br /><br />' do until at end of file<br />Do Until objFile.AtEndOfStream<br /><br />' store the value of the current line in the file<br />strLine = objFile.ReadLine<br /><br />' show the line from the file<br />MsgBox strLine<br /><br />Loop ' next line<br /><br />' close the file<br />objFile.Close<br /><br />Set objFile = Nothing<br /><br />Else ' file doesn't exist<br /><br />' report a failure<br />Reporter.ReportEvent micFail, "Read File", "File not found"<br /><br />End If ' file exists<br /><br />' destroy the objects<br />Set objFS = Nothing<br /><br /><br />Read From Excel File<br />Read all the data from an Excel file.<br /><br />' =============================================================<br />' function: ReadXLS<br />' desc : Reads a sheet from an XLS 
file and stores the content<br />' in a multi-dimensional array<br />' params : strFileName is XLS file to read, including path<br />' strSheetName is the name of the sheet to read, i.e. "Sheet1"<br />' returns : Multi-dimensional array containing all data from <br />' the XLS<br />' =============================================================<br />Function ReadXLS(strFileName, strSheetName)<br /><br />Dim strData()<br />Dim objFS, objExcel, objSheet, objRange<br />Dim intTotalRow, intTotalCol<br />Dim intRow, intCol<br /><br />' create the file system object<br />Set objFS = CreateObject("Scripting.FileSystemObject")<br /><br />' ensure that the xls file exists<br />If Not objFS.FileExists(strFileName) Then<br /><br />' issue a fail if the file wasn't found<br />Reporter.ReportEvent micFail, "Read XLS", "Unable to read XLS file, file not found: " & strFileName<br />' file wasn't found, so exit the function<br />Exit Function<br /><br />End If ' file exists<br /><br />' create the excel object <br />Set objExcel = CreateObject("Excel.Application")<br /><br />' open the file<br />objExcel.Workbooks.Open strFileName<br /><br />' select the worksheet<br />Set objSheet = objExcel.ActiveWorkbook.Worksheets(strSheetName)<br /><br />' select the used range<br />Set objRange = objSheet.UsedRange<br /><br />' count the number of data rows (the used range minus one header row)<br />intTotalRow = CInt(Split(objRange.Address, "$")(4)) - 1<br /><br />' count the number of columns<br />intTotalCol = objSheet.Range("A1").CurrentRegion.Columns.Count<br /><br />' redimension the multi-dimensional array to accommodate each row and column<br />ReDim strData(intTotalRow, intTotalCol)<br /><br />' for each row<br />For intRow = 0 to intTotalRow - 1<br /><br />' for each column<br />For intCol = 0 to intTotalCol - 1<br /><br />' store the data from the cell in the array (data starts on row 2, below the header)<br />strData(intRow, intCol) = Trim(objSheet.Cells(intRow + 2, intCol + 1).Value)<br /><br />Next ' column<br /><br />Next ' row<br /><br />' close the excel object<br
/>objExcel.DisplayAlerts = False<br />objExcel.Quit <br /><br />' destroy the other objects <br />Set objFS = Nothing <br />Set objExcel = Nothing<br />Set objSheet = Nothing <br /><br />' return the array containing the data<br />ReadXLS = strData<br /><br />End Function ' ReadXLS<br /><br />Read from the Registry<br />Read a value from a key in the registry.<br /><br />' =============================================================<br />' function : RegistryRead<br />' desc : Read a value from the registry<br />' params : strRoot is the root key, i.e. "HKLM", "HKCU"<br />' strPath is the path to read, i.e. <br />' "Software\Test\Automation"<br />' returns : Value from the registry key<br />' =============================================================<br />Function RegistryRead(strRoot, strPath)<br /><br />' create the shell object<br />Set objShell = CreateObject("WScript.Shell")<br /><br />' read the key<br />strValue = objShell.RegRead(strRoot & "\" & strPath)<br /><br />' return the value<br />RegistryRead = strValue<br /><br />' destroy the object<br />Set objShell = Nothing<br /><br />End Function 'RegistryRead<br /><br />Replace Method<br />Using the replace method to find and replace text in a string.<br /><br />MsgBox ReplaceText("Automating with QTP is rubbish.", "rubbish.", 
"great!")<br /><br />MsgBox ReplaceText("QTP is a great automation tool but I can't use it","but.*","!")<br /><br />' =============================================================<br />' function: ReplaceText<br />' desc : Uses a regular expression to replace text within a string<br />' params : strString is the string to perform the replacement on<br />' strPattern is the regular expression<br />' strReplacement is the replacement string<br />' returns : The finished string<br />' =============================================================<br />Function ReplaceText(strString, strPattern, strReplacement)<br /><br />Dim objRegEx<br /><br />' create the regular expression<br />Set objRegEx = New RegExp <br /><br />' set the pattern <br />objRegEx.Pattern = strPattern<br /><br />' ignore the casing<br />objRegEx.IgnoreCase = True<br /><br />' replace all occurrences, not just the first<br />objRegEx.Global = True<br /><br />' make the replacement<br />ReplaceText = objRegEx.Replace(strString, strReplacement) <br /><br />' destroy the object<br />Set objRegEx = Nothing<br /><br />End Function ' ReplaceText <br /><br /><br />Write to the Registry<br />Write a value to the Registry.<br /><br />' =============================================================<br />' function : RegistryWrite<br />' desc : Writes a key value to the registry<br />' params : strRoot is the root key, i.e. "HKLM", "HKCU"<br />' strPath is the path to create, i.e. 
<br />' "Software\Test\Automation"<br />' strValue is the value to write in the key<br />' returns : void<br />' =============================================================<br />Function RegistryWrite(strRoot, strPath, strValue)<br /><br />' create the shell object<br />Set objShell = CreateObject("WScript.Shell")<br /><br />' write the key<br />objShell.RegWrite strRoot & "\" & strPath, strValue, "REG_SZ"<br /><br />' destroy the object<br />Set objShell = Nothing<br /><br />End Function 'RegistryWrite<br /><br /><br />Delete from the Registry<br />Delete a key from the registry.<br /><br />' =============================================================<br />' function: RegistryDelete<br />' desc : Deletes a key from the registry<br />' params : strRoot is the root key, i.e. "HKLM", "HKCU"<br />' strPath is the path to delete, i.e. 
<br />' "Software\Test\Automation"<br />' returns : void<br />' =============================================================<br />Function RegistryDelete(strRoot, strPath)<br /><br />' create the shell object<br />Set objShell = CreateObject("WScript.Shell")<br /><br />' delete the key<br />objShell.RegDelete strRoot & "\" & strPath<br /><br />' destroy the object<br />Set objShell = Nothing<br /><br />End Function 'RegistryDelete<br /><br /><br />Custom Report Entry<br />Creating a customized entry in the results.<br /><br />' Example usage<br />CustomReportEntry micFail, "Custom Report Example", "<DIV align=left>This is a <b>custom</b> report entry!</DIV>"<br /><br />' =============================================================<br />' function: CustomReportEntry<br />' desc : Creates a customized entry in the result file, you<br />' can use standard HTML tags in the message.<br />' params : strStatus is the result, micPass, micFail etc<br />' strStepName is the name of the step<br />' strMessage is the failure message, this can contain<br />' html tags<br />' returns : Void<br />' =============================================================<br />Function CustomReportEntry(strStatus, strStepName, strMessage)<br /><br />' create a dictionary object<br />Set objDict = CreateObject("Scripting.Dictionary")<br /><br />' set the object properties<br />objDict("Status") = strStatus<br />objDict("PlainTextNodeName") = strStepName<br />objDict("StepHtmlInfo") = strMessage<br />objDict("DllIconIndex") = 206<br />objDict("DllIconSelIndex") = 206<br />objDict("DllPAth") = "C:\Program Files\Mercury Interactive\QuickTest Professional\bin\ContextManager.dll"<br /><br />' report the custom entry<br />Reporter.LogEvent "User", objDict, Reporter.GetContext<br /><br />End Function 'CustomReportEntry<br /><br /><br />Write to a Log File<br />Write information to a log file.<br /><br />' =============================================================<br />' function: 
WriteLog<br />' desc : Writes a message to a log file. File is created<br />' inside a Log folder of the current directory<br />' params : strCode is a code to prefix the message with<br />' strMessage is the message to add to the file<br />' returns : void<br />' =============================================================<br />Function WriteLog(strCode, strMessage)<br /><br />Dim objFS<br />Dim objFile<br />Dim objFolder<br />Dim strFileName<br /><br />' create a file system object<br />Set objFS = CreateObject("Scripting.FileSystemObject")<br /><br />' is there a log folder in the directory in which we are currently working?<br />If Not objFS.FolderExists(objFS.GetAbsolutePathName(".") & "\log") Then<br /><br />' if there is no log folder, create one<br />Set objFolder = objFS.CreateFolder(objFS.GetAbsolutePathName(".") & "\log") <br /><br />End If ' folder exists<br /><br />' set a name for the log file using year, month and day values<br />strFileName = objFS.GetAbsolutePathName(".") & "\log\" & year(date) & month(date) & day(date) & ".log"<br /><br />' open the log file for appending (create it if it doesn't exist)<br />Set objFile = objFS.OpenTextFile(strFileName, 8, True)<br /><br />' in case of any issues writing the file<br />On Error Resume Next<br /><br />' write the log entry, include a carriage return<br />objFile.Write Date & ", " & Time & ", " & strCode & ", " & strMessage & vbcrlf<br /><br />' disable the on error statement<br />On Error GoTo 0<br /><br />' close the log file<br />objFile.Close<br /><br />' destroy the object<br />Set objFS = Nothing<br /><br />End Function ' WriteLog<br /><br /><br />Check Service is Running<br />Check to see if a windows service is running.<br /><br />' =============================================================<br />' function: CheckIfServiceIsRunning<br />' desc : Check to see if a service is running<br />' params : strServiceName is the name of the service<br />' returns : True if running, False otherwise<br />' 
=============================================================<br />Function CheckIfServiceIsRunning(strServiceName)<br /><br />Dim objShell, blnStatus <br /><br />' create the shell object<br />Set objShell = CreateObject("Shell.Application")<br />blnStatus = objShell.IsServiceRunning(strServiceName) <br /><br />' return status of service<br />CheckIfServiceIsRunning = blnStatus<br /><br />End Function 'CheckIfServiceIsRunning<br /><br /><br />Basic String Manipulation<br />Basic functions for string manipulation.<br /><br />Function: String<br />Accepts a number and a character. Returns a string created with the character that is repeated the given number of times.<br /><br />' example<br />MsgBox String(5,"A")<br /><br /><br />Function: Len<br />Returns the number of characters from a given string.<br /><br />' example<br />strMyName = "Joe Bloggs"<br />MsgBox "The Name '" & strMyName & "' is " & Len(strMyName) & " characters long"<br /><br /><br />Function: Instr<br />Accepts two strings and returns the position of the first occurrence of the second string within the first, or 0 if it is not found.<br /><br />' example<br />If Instr("Hello, welcome to www.QTPHelper.com!", "QTP")>0 Then MsgBox "Found"<br /><br /><br />Function: Left<br />Returns the given number of left-most characters from a string<br /><br />' example<br />MsgBox Left("Joe Bloggs", 3)<br /><br /><br />Function: Right<br />Returns the given number of right-most characters from a string<br /><br />' example<br />MsgBox Right("Joe Bloggs", 6)<br /><br /><br />Function: LCase<br />Returns a given string in lower-case<br /><br />' example<br />MsgBox LCase("JoE BloGGs")<br /><br /><br />Function: UCase<br />Returns a given string in upper-case<br /><br />' example<br />MsgBox UCase("joe bloggs")<br /><br /><br />Get System Information<br />Get system information like User Name and Computer Name.<br /><br />Dim objNet<br /><br />' create a network object<br />Set objNet = CreateObject("WScript.NetWork")<br /><br />' show the user name<br />MsgBox "User 
Name: " & objNet.UserName <br /><br />' show the computer name<br />MsgBox "Computer Name: " & objNet.ComputerName <br /><br />' show the domain name<br />MsgBox "Domain Name: " & objNet.UserDomain<br /><br />' destroy the object<br />Set objNet = Nothing <br /><br /><br />Get Disk Information<br />Get information about one of your disk drives.<br /><br />Dim intSectors, intBytes, intFreeC, intTotalC, intTotal, intFreeb<br /><br />' include this windows api (the root path is passed by value, the four counters by reference)<br />extern.Declare micLong, "GetDiskFreeSpace", "kernel32.dll", "GetDiskFreeSpaceA", micString, micLong+micByref, micLong+micByref, micLong+micByref, micLong+micByref<br /><br />' set these values<br />intSectors = 255<br />intBytes = 255<br />intFreeC = 255<br />intTotalC = 255<br /><br />' call the API for drive C: (the counter arguments are filled in by the call)<br />intSpaceAvailable = extern.GetDiskFreeSpace("c:\", intSectors, intBytes, intFreeC, intTotalC)<br /><br />' calculate the totals<br />intTotal = intTotalC * intSectors * intBytes<br />intFreeb = intFreeC * intSectors * intBytes<br /><br />' show the outputs<br />MsgBox "Sectors per cluster: " & intSectors<br />MsgBox "Bytes per sector: " & intBytes<br />MsgBox "Free clusters: " & intFreeC<br />MsgBox "Total clusters: " & intTotalC<br />MsgBox "Total bytes: " & intTotal<br />MsgBox "Free bytes: " & intFreeb<br /><br /><br />Get System Variable Value<br />Get a value from a Windows System Variable.<br /><br />' for example to get the oracle home path<br />MsgBox GetSystemVariable("ORACLE_HOME")<br /><br />' =============================================================<br />' function: GetSystemVariable<br />' desc : Get the value of a system variable<br />' params : strSysVar is the variable name<br />' returns : Content of variable name<br />' =============================================================<br />Function GetSystemVariable(strSysVar)<br /><br />Dim objWshShell, objWshProcessEnv<br /><br />' create the shell object<br />Set objWshShell = CreateObject("WScript.Shell")<br />Set objWshProcessEnv = objWshShell.Environment("Process")<br /><br />' return the system variable content <br
/>GetSystemVariable = objWshProcessEnv(strSysVar)<br /><br />End Function ' GetSystemVariable<br /><br />Using Description Objects<br />Using Description Objects to interact with a web page.<br />This example will illustrate how to use description objects to interact with a web page, www.QTPHelper.com to be more exact...<br />Note that for the Browser and Page I've used programmatic descriptions, but for the buttons, edits and check-boxes I've used Description Objects. Also take note of the regular expression in the Browser and Page description, just in case the titles change in the future.<br />You can add more properties to your description objects if you need to, i.e. if your web page has numerous objects of the same type with similar property values. <br />Dim objLogout<br />Dim objUser<br />Dim objPass<br />Dim objRemember<br /><br />' create description objects for each item we are dealing with<br />Set objLogout = Description.Create()<br />Set objUser = Description.Create()<br />Set objPass = Description.Create()<br />Set objRemember = Description.Create()<br /><br />' define the properties of each item<br />objLogout("Name").Value = "Logout"<br />objUser("Name").Value = "username"<br />objPass("Name").Value = "passwd"<br />objRemember("Name").Value = "remember"<br /><br />' check that the user isn't already logged in<br />If Browser("Title:=QTP Helper.*").Page("Title:=QTP Helper.*").WebButton(objLogout).Exist(1) Then<br /><br />' click logout<br />Browser("Title:=QTP Helper.*").Page("Title:=QTP Helper.*").WebButton(objLogout).Click <br /><br />End If<br /><br />' set the user name<br />Browser("Title:=QTP Helper.*").Page("Title:=QTP Helper.*").WebEdit(objUser).Set "User"<br /><br />' set the password<br />Browser("Title:=QTP Helper.*").Page("Title:=QTP Helper.*").WebEdit(objPass).Set "Password"<br /><br />' tick the remember-me tickbox<br />Browser("Title:=QTP Helper.*").Page("Title:=QTP Helper.*").WebCheckBox(objRemember).Set "ON"<br /><br /><br />What is QuickTest Automation Object Model?<br />It is a way to write scripts that automate your QuickTest operations.<br /> <br />Some places where we can use AOM<br />This is a small list of places (but not limited to) where we can use AOM. Rule of thumb: use it at any place where you find yourself doing repetitive tasks while using QTP. <br />• AOM comes in handy when you have a large number of scripts to be uploaded to QC. A simple script can save you hours of manual work! <br />• Use AOM to initialize QTP options and settings, such as add-ins. <br />• You can use AOM to call QTP from other applications. For example, you can write a macro for calling QTP from Excel. 
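As a minimal sketch of that last bullet (calling QTP from an Excel macro or a standalone .vbs file): the routine name RunQtpTest and the test path "C:\Tests\MyDemoTest" are placeholders, and this assumes QTP is installed locally so that the "QuickTest.Application" ProgID is registered.

```
' Hypothetical sketch only: a macro-style routine that can be pasted into an
' Excel VBA module or saved in a .vbs file to drive QTP from the outside.
Sub RunQtpTest()
    Dim qtApp
    Set qtApp = CreateObject("QuickTest.Application") ' create the AOM root object
    qtApp.Launch                            ' start QuickTest
    qtApp.Visible = True                    ' show the QTP window
    qtApp.Open "C:\Tests\MyDemoTest", True  ' open an existing test read-only (placeholder path)
    qtApp.Test.Run                          ' run the test with default run settings
    qtApp.Test.Close                        ' close the test
    qtApp.Quit                              ' exit QuickTest
    Set qtApp = Nothing
End Sub
```

From a standalone .vbs you would add a final RunQtpTest call and launch the file with wscript.exe or cscript.exe.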
<br />Caution: AOM should be used outside of QTP and not within a script (during playback). Though there is generally no harm in using it inside a script, some of the AOM statements might fail.<br /> <br /> How to write AOM scripts?<br />You need to understand that the very root of the QT AOM is the Application object. Every automation script begins with the creation of the QuickTest "Application" object. Creating this object does not start QuickTest. It simply provides an object from which you can access all other objects, methods and properties of the QuickTest automation object model. You can create only one instance of the Application object. You do not need to recreate the QuickTest Application object even if you start and exit QuickTest several times during your script. Once you have defined this object you can then successfully work and perform operations on the other objects given in Quick Test Pro > Documentation > QuickTest Automation Reference.<br />For example, let us connect to TD/QC using AOM and open a script "qtp_demo":<br />Dim qt_obj ' Define a QuickTest object <br />Set qt_obj = CreateObject("QuickTest.Application") ' Instantiate a QT object. This does not start QTP.<br />qt_obj.Launch ' Launch QT <br />qt_obj.Visible = True ' Make QT visible <br />' Connect to Quality Center via the TDConnection object<br />qt_obj.TDConnection.Connect "http://tdserver/tdbin", _ <br />"TEST_DOMAIN", "TEST_Project", "Ankur", "Testing", False <br />If qt_obj.TDConnection.IsConnected Then ' If connection is successful <br /> qt_obj.Open "[QualityCenter] Subject\tests\qtp_demo", False ' Open the test <br />Else <br /> MsgBox "Cannot connect to Quality Center" ' If connection is not successful, display an error message. <br />End If<br />To quickly generate an AOM script with the current QTP settings, use the Properties tab of the Test Settings dialog box (File > Settings), the General tab of the Options dialog box (Tools > Options), or the Object Identification dialog box (Tools > Object Identification). 
Each contains a "Generate Script" button. Clicking this button generates an automation script file (.vbs) containing the current settings from the corresponding dialog box. <br />You can run the generated script as is to open QuickTest with the exact configuration of the QuickTest application that generated the script, or you can copy and paste selected lines from the generated files into your own automation script. <br />Reference: Quick Test Pro > Documentation > QuickTest Automation Reference. 
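For a feel of what the button produces, here is a hedged sketch of a generated-style settings script; the exact Options properties and their values vary with the dialog and QTP version, so treat the names below as illustrative rather than definitive.

```
' Hypothetical sketch of the kind of .vbs file the "Generate Script" button
' produces; the specific properties depend on which dialog generated it.
Dim App
Set App = CreateObject("QuickTest.Application")
App.Launch
App.Visible = True
' run options roughly as they appear in a generated script (illustrative values)
App.Options.Run.ImageCaptureForTestResults = "OnError"
App.Options.Run.RunMode = "Fast"
App.Options.Run.ViewResults = False
```

Double-clicking such a .vbs opens QuickTest with those settings applied; you can also copy individual lines into your own AOM script.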
Jacksonhttp://www.blogger.com/profile/06729565072784015826noreply@blogger.com0tag:blogger.com,1999:blog-4803548388510893974.post-29668195096294189572009-09-08T11:11:00.001+05:302009-09-08T11:11:56.998+05:30The A-B-C's of software testing modelsSummary:-<br />This article provides a brief overview of testing methodologies in various software development models. <br />Theme:-<br />One doesn't have to spend much time in the software industry to become familiar with several software development models. Some of the most commonly known include waterfall, iterative, test-first or test-driven development (TFD or TDD), and Extreme Programming (XP). Interestingly, one needs to have a rather diverse set of software development experiences and needs to pay rather close attention to those experiences to realize that there are just as many models for testing software as there are for developing software -- and that the testing model a particular project follows need not be dictated by the software development model. <br />Categories of testing activities<br />To aid in this discussion, let's agree to think about software testing in terms of five general categories of activities: <br />1. Researching information to improve or enhance testing -- This information may come from specifications, use cases, technical design documentation, contracts, industry standards, competing applications, or almost anything else that is likely to improve a tester's ability to test the software deeper, faster or better. <br />2. Planning and/or designing tests -- This category would encompass such activities as writing test cases, developing test strategies, writing test plans, creating manual test scripts and preparing test data. <br />3. Scripting and/or executing tests -- Here is where tests are actually executed and/or automated. This is what most non-testers think of when they hear someone talk about software testing. 
<br />4. Analyzing test results and new information -- Not all tests produce results that clearly pass or fail. Many tests result in data that can only be understood by human judgment and analysis. Additionally, changing specifications, deadlines or project environments can make a test that had been clearly passing fail without anything changing in the software. This category is where this type of analysis occurs. <br />5. Reporting relevant information -- Reporting defects and preparing compliance reports are what come to mind first for most people, but a tester may need to report all kinds of additional information. <br />Again, these five categories are intended to be simple in order to make our discussion about testing models easier. They aren't intended to supplant your current terminology. <br />Testing, waterfall-style<br />Just like developing software using the waterfall model, testing waterfall-style is a fundamentally linear process except for a minimal feedback loop created by the need to fix some of the problems in the software that are indicated by failing tests. Visually, that feedback loop is equivalent to the small eddy current at the bottom of a real waterfall.<br /><br />Waterfall-style testing is rarely chosen voluntarily anymore. It is commonly a side effect of some logistical challenge that kept the testers from being able to interact with the application or the developers prior to the first -- and what they hope will be the only -- build of the software. Waterfall testing is occasionally appropriate for situations where it is reasonable to hope the software will "just work," such as applying a service release or a patch to a production application. <br />Testing, iterative-style<br />Iterative testing is similar to iterative development in that many of the test iterations happen to coincide with development releases. In that regard, it is like a bunch of waterfall testing cycles strung end to end. 
Testing iterations differ from development iterations in that there can be iterations prior to the first software build, and there can be multiple test iterations during a single software build. Another difference is that unlike a development iteration, a test iteration can seamlessly abort at any point during the iteration to return to a research mode. While a development iteration can also abort and restart at any time, doing so is quite likely to jeopardize the project schedule. <br />Iterative software testing is extremely common in the commercial market, though it has many variants. The V-Model, the spiral model, and Rational Unified Process (RUP)-based testing are all derivatives of an iterative testing approach. Iterative testing generally works well on projects where software is being developed in pre-planned, predictable increments and on projects where the software is being developed and released in such rapid or unpredictable cycles that it is counterproductive for testers to plan around scheduled releases. <br />Testing, agile-style<br />Agile-style testing more or less eliminates the element of pre-determined flow from the test cycle in favor of shifting among the five basic activities whenever it adds value to the project to do so. For example, while analyzing the results of a test, the tester may realize that his test was flawed and move directly back to planning and designing tests. In a waterfall or iterative flow, that test redesign would wait until after the current results were reported and preparations were being made for the next test iteration. <br />Agile-style testing can be implemented as an overall approach or as a complement to any other testing approach. For example, within an iterative test approach, a tester could be encouraged to enter a period of agile testing, side-by-side with a developer, while tracking down and resolving defects in a particular feature. 
<br />Agile-style testing is significantly more common than most people realize. As it turns out, this model is what is going on in the heads of many testers all the time, regardless of the external process they are following. Be that as it may, this approach isn't very popular with managers and process improvement specialists because it is misunderstood by many non-testers, and few testers following this process are able to express what they are doing in a manner that gives stakeholders confidence that they are actually doing organized and thoughtful testing.<br />End of documentJacksonhttp://www.blogger.com/profile/06729565072784015826noreply@blogger.com0tag:blogger.com,1999:blog-4803548388510893974.post-69363933198874470602009-09-08T11:10:00.001+05:302009-09-08T11:10:51.282+05:30The A-B-C's of software testing modelsSummary:-<br />This article provides a brief overview of testing methodologies in various software development models <br />Theme:-<br />This article provides a brief overview of testing methodologies in various software development modelsJacksonhttp://www.blogger.com/profile/06729565072784015826noreply@blogger.com0tag:blogger.com,1999:blog-4803548388510893974.post-60731087804659100862009-09-08T11:07:00.000+05:302009-09-08T11:08:20.058+05:30The Butterfly Model for Test Development (Part-2)Theme:-<br /><br />A Swarm of Testing<br />We have now examined how test analysis, test design, and test execution compose the butterflies in this test development model. In order to understand how the butterfly model monitors and modifies the software development model, we need to digress slightly and reexamine the V software development model itself.<br />In Figure 1, not only have the micro-iterations naturally present in the design cycle been included, but the major design phase segments (characterized by their outputs) have been separated into smaller arrows to clearly define the transition point from one segment to the next.
The test side of the V has been similarly separated, to demarcate the boundaries between successful formal execution of each level of testing.<br />Figure 1. Complete Expanded V Development Model View<br />No micro-iterations on the test side of the V are shown in this depiction, although there are a few to be found – mostly around the phase segment transitions, where test execution documentary artifacts are formulated and preserved. The relative lack of micro-iterations on the test side of the V is due to the fact that it represents only the formal running of tests – the legwork of analysis and design is done elsewhere. The question, therefore, is: Where?<br />The answer to this all-important question is shown in Figure 2.<br />Figure 2. Illustration of the Butterfly Test Development Model<br />At all micro-iteration termini, and at some micro-iteration geneses, exists a small test butterfly. These tiny test insects each contribute to the overall testing effort, encapsulating the test analysis and design required by whatever minor change is represented by the micro-iteration.<br />Larger, heavier butterflies spring to life on the boundaries between design phase segments. These larger specimens carry with them the more formal analyses required to transition from one segment to the next. They also answer the call for coordination between the tests designed as part of their smaller brethren. Large butterflies also appear at the transition points between test phase segments, where documentary artifacts of test execution are created in order to claim credit for the formal execution of the test. <br />A single butterfly, by itself, is of no moment – it cannot possibly have much impact on the overall quality of the application and its tests. But a swarm of butterflies can blot out the sun, effecting great improvement in the product’s quality.
The smallest insects handle the smallest changes, while the largest tie together the tests and analyses of them all.<br />The right-pointing lineage arrows, which show the roots of each test artifact in its corresponding design artifact, point to the moment in the software development model where the analysis and design of tests culminate in their formal execution. <br />Butterfly Thinking<br /> “A butterfly flutters its wings in Asia, and the weather changes in Europe.” This colloquialism offers insight into the chaotic (in the mathematical sense of the word) nature of software development. Events that appear minor and far removed from relevance can have a profound impact on the software being created. Many seemingly minor and irrelevant events are just that – minor and irrelevant. But some such events, despite their appearance, are not.<br />Identifying these deceptions is a key outcome of the successful implementation of the butterfly model. The following paragraphs contain illustrations of this concept.<br />Left Wing Thinking<br />The FADEC must assert control over the engine’s operation within 300 msec of a power-on event.<br />This requirement, or a variant of it, appears in every system specification for a FADEC. It is important because it specifies the amount of time available for a cold-start initialization in the software. <br />The time allotted is explicit. No more than three tenths of a second may elapse before the FADEC asserts itself. <br />The commencement of that time period is well defined. The nearly vertical rising edge of the FADEC power signal as it moves from zero volts (off) to the operational voltage of the hardware marks the start line.<br />But what the heck does “assert control” mean?<br />While analyzing this requirement statement, that question should jump right off the written page at the tester. In one particular instance, the FADEC asserted control by crossing a threshold voltage on a specific analog signal coming out of the box. 
Unfortunately, that wasn’t in the specification. Instead, I had to ask the senior systems engineer, who had performed similar tests hundreds of times, how to tell when the FADEC asserted itself.<br />In other words, I couldn’t create a test sketch for the requirement because I couldn’t determine what the end point of the measurement should be. The system specification assumed that the reader held this knowledge, although anyone who was learning the ropes (as I was at that point) had no reasonable chance of knowing. As far as I know, this requirement has never been elaborated.<br />As a counterpoint example, consider the mass-market application that, according to the verbally preserved requirements, had to be “compelling”. What the heck is “compelling”, and how does one test for it? <br />In this case, it didn’t matter that the requirement was ill suited for testing. In fact, the testers’ opinions on the subject weren’t even asked for. But the application succeeded, as evidenced by the number of copies purchased. Customers found the product compelling, and therefore the project was a success.<br />But doesn’t this violate the “must be testable” rule for requirements? Not really. The need to be “compelling” doesn’t constitute a functional requirement, but is instead an aesthetic requirement. Part of the tester’s analysis should weed out such differences, where they exist.<br />Right Wing Thinking<br />Returning to our power-up timing example, how can we measure the time between two voltage-based events? There are many possibilities, although most can’t handle the precision necessary for a 300 msec window. Clocks, watches, and even stopwatches would be hideously unreliable for such a measurement.<br />The test stand workstation also couldn’t be used. That would require synchronization of the command to apply power with the actual application of power. 
There was a lag in the actual application of power, caused by the software-driven switch that had to be toggled in the test stand’s circuitry. Worse yet, detection of the output voltage required the use of a digital voltmeter, which injected an even larger amount of uncertainty into the measurement.<br />But a digital oscilloscope attached to a printer would work, provided that the scope was fast enough. The oscilloscope was the measurement device (obviously). The printer was required to “prove” that the test passed. This was, after all, an application subject to FAA certification.<br />As a non-certification counter example, consider the product whose requirements included the following statement:<br />Remove unneeded code where possible and prudent.<br />In other words, “Make the dang thing smaller”. The idea behind the requirement was to shrink the size of the executable, although eliminating unnecessary code is usually a good thing in its own right. No amount of pleading was able to change this requirement into a quantifiable statement, either.<br />So how the heck can we test for this? In this case, the tester might rephrase the requirement in his or her mind to read:<br />The downloadable installer must be smaller than version X. <br />This provides a measurable goal, albeit an assumed one. More importantly, it preserves the common thread between the two statements, which is that the product needed to shrink in size.<br />Body Thinking<br />To be honest, there isn’t all that much thought involved in formally executing thoroughly prepared test cases. The main aspect of formal execution is the collection of “evidence” to prove that the tests were run and that they passed. There is, however, the need to analyze the recorded evidence as it is amassed.<br />For example, aerospace applications commonly must be unit tested. Each individual function or procedure must be exercised according to certain rules. 
The generally large number of modules involved in a certification means that the unit testing effort required is big, although each unit test itself tends to be small. Naturally, the project’s management normally tries to get the unit testing underway as soon as possible to ensure completion by the “drop-dead” date for unit test completion implied in the V model.<br />As the established date nears, the test manager must account for every modified unit. The last modification of the unit must predate the configured test procedures and results for that unit. All of the tests must have been peer reviewed prior to formal execution. And all of the tests must have passed during formal execution.<br />In other words, “dot the I’s and cross the T’s”. It is largely an exercise in bookkeeping, but that doesn’t diminish its importance.<br />The Swarm Mentality<br />To better illustrate the swarm mentality, let’s look at an unmanned rocket project that utilized the myriad butterflies of this model to overwhelm bugs that could have caused catastrophic failure. This rocket was really a new version of an existing rocket that had successfully blasted off many, many times.<br />First, because the new version was to be created as a change to the older version’s software, a complete and thorough system specification analysis was performed, comparing the system specs for both versions. This analysis found that:<br />• The old version contained a feature that didn’t apply to the new version. A special extended calculation of the horizontal bias (BH) that allowed for late-countdown (between five and ten seconds before launch) holds to be restarted within a few minutes didn’t apply to the new version of the rocket. BH was known to be meaningless after either version left the launch pad, but was calculated in the older version for up to 40 seconds after liftoff. 
<br />• The updated flight profile for the new version had not been included in the updated specification, although this omission had been agreed to by all relevant parties. That meant that discrepancies between the early trajectory profiles of the two versions were not available for examination. The contractors building the rocket didn’t want to change their agreement on this subject, so the missing trajectory profile information was marked as a risk to be targeted with extra-detailed testing.<br />Because of the fairly serious questions raised in the system requirements analysis, the test engineers decided to really attack the early trajectory operation of the new version. Because this was an aerospace application, they knew that the subsystems had to be qualified for flight prior to integration into the overall system. That meant that the inertial reference system (SRI) that provided the raw data required to calculate BH would work, at least as far as it was intended to.<br />But how could they test the interaction of the SRI and the calculation of BH? The horizontal bias was also a product of the rocket’s acceleration, so they knew that they would have to at least simulate the accelerometer inputs to the control computer (it is physically impossible to make a vibration table approach the proper values for the rocket’s acceleration). <br />If they had a sufficiently detailed SRI model, they could also simulate the inertial reference system. Without a detailed simulation, they’d have to use a three-axis dynamic vibration table. Because the cost of using the table for an extended period of time was higher than the cost of creating a detailed simulation, they decided to go with the all-simulation approach.<br />In the meantime, a detailed analysis of the software requirements for both versions revealed a previously unknown conceptual error.
Every exception raised in the Ada software automatically shut down the processor – whether the exception was caused by a hardware or software fault! <br />The thinking behind this problem was that exceptions should only address random hardware failures, where the software couldn’t hope to recover. Clearly, software exceptions were possible, even if they were improbable. So, the exception handling in the software spec was updated to differentiate between hardware-based and software-based exceptions.<br />Examining the design of the software, the test engineers were amazed to discover that the horizontal bias calculations weren’t protected for Operand Error, which is automatically raised in Ada when a floating-point real-to-integer conversion exceeds the available range of the integer container. BH was involved in just such a conversion!<br />The justification for omitting this protection was simple, at least for the older version of the rocket. The possible values of BH were physically limited in range so that the conversion couldn’t ever overflow. But the newer version couldn’t claim that fact, so the protection for Operand Error was put into the new version’s design. Despite the fact that this could put the 80% usage goal for the SRI computer at risk, the possibility that the computer could fail was simply too great.<br />Finally, after much gnashing of teeth, the test engineers convinced the powers that be to completely eliminate the prolonged calculation of horizontal bias because it was useless in the new version. The combined risks of the unknown trajectory data, the unprotected conversion to integer, and the money needed to fund the accurate SRI simulation were too much for the system’s developers.
They at last agreed that it was better to eliminate the unnecessary processing, even though it worked for the previous version.<br />As a result, the maiden demonstration flight for the Ariane 5 rocket went off without a hitch.<br />That’s right – I have been describing the findings of the inquiry board for the Ariane 5 in light of how a full and rigorous implementation of the butterfly model would have detected, mitigated, or eliminated them [LION96]. <br />Ariane 4 contained an extended operation alignment function that allowed for late-countdown holds to be handled without long delays. In fact, the 33rd flight of the Ariane 4 rocket used this feature in 1989. <br />The Ariane 5 trajectory profile was never added to the system requirements. Instead, the lower values in the Ariane 4 trajectory data were allowed to stand.<br />The SRI computers (with the deficient software) were therefore never tested against the updated trajectory telemetry. <br />The Operand Error on the horizontal bias conversion therefore never occurred during testing, so the unhandled exception that would shut down the SRI computer went undetected.<br />The flawed concept of all exceptions being caused by random hardware faults was therefore never exposed.<br />SRI 1, the first of the dual redundant components, therefore halted on an Operand Error caused by the conversion of BH in the 39th second after liftoff. SRI 2 immediately took over as the active inertial reference system.<br />But then SRI 2 failed because of the same Operand Error in the following data cycle (72 msec in duration).<br />And therefore, Ariane 5 self-destructed in the 42nd second of its maiden voyage – all for lack of a swarm of butterflies.
Instead, the butterfly test development model is a superstructure imposed atop the V model that operates semi-independently, in parallel with the development of software. <br />The main relationship between the V model and the butterfly swarm of testing activity is timing, at least on the design side of the V. Test development is driven by software development, for software is what we are testing. Therefore, the macro- and micro-iterations of software development define the points at which test development activity is both warranted and required. The individual butterflies must react to the iterative software development activity that spawned them, while the whole of the swarm helps to shape the large and small perturbations in the software design stream.<br />On the test side of the V, the relationship is largely reversed – the software milestones of the V model are the results of butterfly activity on the design side. The differences between the models give latitude to both the developer and the tester to envision the act of testing within their particular operational context. The developer is free to see testing as the culmination of their development activity. The tester is likewise free to see the formal execution of testing as the end of the line – where all of the analytical and test design effort that shepherded the software design process is transformed into the test artifacts required for progression from development to delivery.<br />But the butterfly model does not entirely fall within the bounds of the V model, either. The third issue taken with the standardized V model stated that the roots of software testing lay mainly within the boundaries of the software to be tested. But proper performance of test analysis and design requires knowledge outside the realm of the application itself.<br />Testers in the butterfly model require knowledge of testing techniques, tools, methodologies, and technologies.
Books and articles about test theory are hugely important to the successful implementation of the butterfly model. Similarly, software testing conferences and proceedings are valuable resources. <br />Testers in this test development model also need to keep abreast of technological advancements related to the application being developed. Trade journals and periodicals are valuable sources for such information.<br />In the end, the tester is required to not only know the application being tested, but also to understand (at some level) software testing, valid testing techniques, software testing tools and technologies, and even a little about human nature.<br />Next Steps<br />The butterfly model of test development is far from complete. The model as described herein is a first step toward a complete and usable model. Some of the remaining steps to finish it include:<br />• Creating a taxonomy of test butterflies that describes each type of testing activity within the context of the software development activity it accompanies.<br />• Correlating the butterfly taxonomy with a valid taxonomy of software bugs (to understand what the butterflies eat).<br />• Formally defining and elaborating the “objectives” associated with various testing activities.<br />• Creating a taxonomy of “artifacts” to better define the parameters of the model’s execution.<br />• Expanding visualization of the model to cover the spiral development model.<br />• Defining the framework necessary to achieve full implementation of the model.<br />• Identifying methods of automating significant portions of the model’s implementation.<br />Summary<br />The butterfly model for software test development is a semi-dependent model that represents the bifurcated role of software testing with respect to software development. 
The underlying realization that software development and test development are parallel processes that are separate but complementary is embodied by the butterfly model’s superposition atop the V development model. <br />Correlating the V model and butterfly model requires understanding that the standard V model is a high-level view of software development that hides the myriad micro-iterations all along the design and test legs of the V. These micro-iterations are the core of successful software development. They represent the incorporation of new knowledge, new requirements, and lessons learned – primarily during the design phase of software development, although the formation of test artifacts also includes some micro-iterative activity.<br />Tiny test butterflies occupy the termini of these micro-iterations, as well as some of their geneses. Larger, more comprehensive butterflies occupy phase segment transition points, where the nature of work is altered to reach toward the next goal of the software’s development. <br />The parts of the butterfly represent the three legs of successful software testing – test analysis, test design, and formal test execution. Of the three, formal execution is the smallest, although it is the only piece explicitly represented in the V model. Test analysis and test design, ignored in the V model, are recognized in the butterfly model as shaping forces for software development, as well as being the foundation for test execution. <br />Finally, the butterfly model is in its infancy, and there is significant work to do before it can be fully described. 
However, the visualization of a swarm of testing butterflies darkening the sky while they steer software away from error injection is satisfying – at last we have a physical phenomenon that represents the ephemeral act of software testing.<br /><br />End of documentJacksonhttp://www.blogger.com/profile/06729565072784015826noreply@blogger.com0tag:blogger.com,1999:blog-4803548388510893974.post-10456166039120649022009-09-08T11:06:00.000+05:302009-09-08T11:07:05.613+05:30The Butterfly Model for Test Development ( part-1)Summary:-<br />There is a dichotomy between the development and testing of software. This schism is illustrated by the plethora of development models employed for planning and estimating the development of software as opposed to the scarcity of valid test development models. At first glance, the same models which serve to underlay the software development process with forethought and diligence appear to be adequate for the more complex task of planning, developing, and executing adequate verification of the application.<br />Unfortunately, software development models were not intended to encapsulate the vagaries of software verification and validation, the two main goals of software testing. Indeed, software development models can be antithetical to the effective testing of software. It lies in the hands of software testing professionals, therefore, to define an effective model for software test development that complements and completes any given software development model. <br />One such test development model is the Butterfly Model, which I will explore in some detail in this paper. It should be understood that the butterfly model is neither separate from nor integrated with the development model, but instead is a monitoring and modifying factor in the completion of the development model.
While this may seem arbitrary and self-contradictory, it is my hope that the elaboration of the butterfly model presented herein will both explain and justify this statement.<br />In this paper I will present a modified view of the ubiquitous “V” software development model. On top of this modified model I will superpose the butterfly model of test development. Finally, I will reconcile the relationship between the models, clarifying the effects of each on the other and identifying the information portals germane to both, together or separately.<br /><br />Theme:-<br />The Standard V Software Development Model<br />Nearly everyone familiar with modern software development knows of the standard V development model, depicted below.<br />In this standardized image of the V development model, both the design and test phases of development are represented as linear processes that are gated according to the specific products of specific activities. On the design side, system requirements beget software requirements, which then beget a software design, which in turn begets an implementation. <br />On the test side of development, the software design begets unit tests. Similarly, software requirements beget integration tests (with a little help from the system requirements). Finally, system requirements beget system tests. Acceptance testing, being the domain of the end user of the application, is deliberately omitted from this view of the V model.<br />It should be understood that the V model is simply a more expressive rearrangement of the waterfall model, with the waterfall’s time-line component mercifully eliminated and abstraction of the system indicated by the vertical distance from the implementation. The V model is correct as far as it goes, in that it expresses most of the lineage required for the artifacts of successful software development. 
From an application development point of view, this depiction of the model is sufficient to convey the source associations of the major development cycle artifacts, including test artifacts.<br />Unfortunately, the application development viewpoint falls well short of the software test development vantage required to create and maintain effective test artifacts.<br />Rigor of Model Enforcement<br />Before launching into a discussion of the shortfalls of the V software development model, a side excursion to examine the appropriate level of rigor in enforcing the model is warranted. It needs to be recognized from the start that not all applications will choose to implement the V model in the same manner. Generally, deciding on how rigidly the model must be followed is largely a product of understanding the operational arena of the application.<br />For example, any certification requirements attached to the application will dictate the rigor of the model’s implementation. Safety-critical software in the commercial aerospace arena, for example, undergoes an in-depth certification review prior to being released for industry use. Applications in this arena therefore tailor their implementation of the V model toward fulfillment of the objectives listed for each segment of the process in RTCA/DO-178B, the Federal Aviation Administration’s (FAA’s) selected guidelines for certification.<br />Similarly, medical devices containing software that affects safety must be developed using a version of the model that fulfills the certification requirements imposed by the Food and Drug Administration (FDA). As automotive embedded controller software continues to delve into applications that directly affect occupant safety (such as actuator-based steering), it can be expected that some level of certification requirement will be instituted for that arena, as well.<br />Other arenas do not require anything approaching this level of rigor in their process.
If the application cannot directly cause injury or the loss of life, or trigger the financial demise of a company, then it can most likely follow a streamlined version of the V model.<br />Web applications generally fall into this category, as do many e-commerce and home-computing applications. In fact, more applications fall into the second category than the first. That doesn’t exempt them from the need to follow the model, however. It simply modifies the parameters of their implementation of the model.<br />Where the V Model Leaves Off<br />The main issue with the V development model is not its depiction of ancestral relationships between test artifacts and their design artifact progenitors. Instead, there are three facets of the V model that are incomplete and must be accounted for. Just as in software development, we must define the problem before we can attempt to solve it.<br />First, the V model is inherently a linear expression of a nonlinear process. The very existence of the spiral model of software development should be sufficient evidence of the nonlinearity of software development, but this point deserves further examination. <br />Software design artifacts, just like the software program they serve, are created and maintained by people. People make mistakes. The existence of software testers bears witness to this, as does the amount of buggy software that still seems to permeate the marketplace, despite the best efforts of software development and testing professionals. When mistakes are found in an artifact, the error must be corrected. The act of correction, in a small way, is another iteration of the original development of the artifact.<br />The second deficient aspect of the V model is its implication of unidirectional flow from design artifacts into test artifacts. Any seasoned software developer understands that feedback within the development cycle is an absolute necessity.
The arrows depicting the derivation of tests from the design artifacts should in reality be two-headed, although the left-pointing arrowhead would be significantly smaller than the right-pointing head.<br />While test artifacts are generally derived from their corresponding design artifacts, the fact that a test artifact must be so derived needs to be factored in when creating the design artifact in the first place. Functional requirements must be testable – they must be stated in such a manner as to be conducive to analysis, measurement, or demonstration. A vague statement of requirements is a clear indicator of trouble down the road. Likewise, software designs need to be complete and unambiguous. The implementation methodology called for in the software design must be clear enough to drive the definition of appropriate test cases for the verification and validation of that design. <br />If the implementation itself is to be part of the test ancestry, then it, too, must be concise and complete, with adequate commentary on the techniques employed in its construction but without ambiguity or self-contradiction. <br />It should be noted that this discussion of the second deficient aspect of the V model is predicated on a rigorous enforcement of the model's dictates, such as is required for most aerospace applications. For less rigorous instances of the model, the absolutes listed above may not apply. This issue will be discussed further later in this paper.<br />The third deficient aspect of the V software development model is its encapsulation of test artifact ancestry solely within the domain of the design artifacts. As stated above, test artifacts are generally derived from their corresponding design artifacts. There are a multitude of other sources that must be touched upon to ensure success in generating a "complete" battery of tests for the software being developed.
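The testability principle above can be made concrete with a small sketch. Everything in it is hypothetical – the requirement wording, the 300 msec budget, and the function names were chosen only to show how a measurable restatement of a vague requirement becomes a check that can pass or fail on its own:

```python
import time

# Hypothetical example: the vague requirement "the search must be fast" cannot
# be verified, but the measurable restatement "a search completes within
# 300 msec" can be checked by measurement or demonstration.

RESPONSE_BUDGET = 0.300  # seconds; the explicit bound stated in the requirement

def search(query):
    """Stand-in for the operation under test (simulated with a short delay)."""
    time.sleep(0.01)
    return [query.upper()]

def within_budget(operation, *args):
    """Demonstrate the requirement: time one invocation against the budget."""
    start = time.perf_counter()
    operation(*args)
    return time.perf_counter() - start <= RESPONSE_BUDGET

print(within_budget(search, "butterfly"))  # expect True for this stub
```

The point is not the timing mechanics but that the restated requirement names a number, a start event, and an end event, so the test needs no human interpretation to decide its outcome.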
<br />A Closer View<br />The first issue mentioned with regard to the V development model is its essential linearization of a nonlinear process – software development. This problem is one of perception, really, or perhaps perspective. The root cause can be found in the fact that the V software development model is a simplified visualization tool that illustrates a complex and interrelated process. A more detailed view of a segment of the design leg (which segment is immaterial) is shown below.<br />In this expanded view of the design leg of the V, the micro-iterative feedback depicted by the small black arrows within the overall gray feed-forward thrust are visible. Each micro-iteration represents the accumulation of further data, application of a lesson learned, or even the bright idea someone dreamed up while singing in the shower. The point to be made here is this: The general forward-leaning nature of the legs of the V tends to disguise the frenzied iterations in thought, specification, and development required to create a useful application.<br />There are critical points along the software development stream that must be accounted for in any valid test development model. For example, any time there is a handoff of an artifact (or part of an artifact), the transacted artifact must be analyzed with respect to its contents and any flow-down effects caused by those contents [MARI99]. In the expanded view of the V development model shown above, the left edge of the broad arrow represents the genesis of a change in the artifact under development. This edge, where new or modified information is being introduced, is the starting point for all new micro-iterations. The right edge of the broad arrow is the terminus for each micro-iteration, where the new or modified information is fully incorporated in the artifact.<br />It should be further understood that micro-iterations can be independent of each other. 
In fact, most significant software development incorporates a maelstrom of independent micro-iterations that ebb and flow both concurrently and continuously throughout the overall development cycle.<br />The spiral model of software development, which many consider to be superior to the V model, is founded on an explicit understanding of the iterative nature of software creation. Unfortunately, the spiral model tends to be expressed on a macro scale, hiding the developmental perturbations needed for the production of useful design and test artifacts.<br />The Butterfly Model<br />Now that we have rediscovered the hidden micro-iterations in a successful process based on the V model, we need to understand the source of these perturbations. Further, we need to understand the fundamental interconnectedness of it all, to borrow an existential phrase.<br />Butterflies are composed of three pieces – two wings and a body. Each part represents a piece of software testing, as described hereafter.<br />Test Analysis<br />The left wing of the butterfly represents test analysis – the investigation, quantization, and/or re-expression of a facet of the software to be tested. Analysis is both the byproduct and foundation of successful test design. In its earliest form, analysis represents the thorough pre-examination of design and test artifacts to ensure the existence of adequate testability, including checking for ambiguities, inconsistencies, and omissions. <br />Test analysis must be distinguished from software design analysis. Software design analysis is constituted by efforts to define the problem to be solved, break it down into manageable and cohesive chunks, create software that fulfills the needs of each chunk, and finally integrate the various software components into an overall program that solves the original problem. 
Test analysis, on the other hand, is concerned with validating the outputs of each software development stage or micro-iteration, as well as verifying compliance of those outputs to the (separately validated) products of previous stages.<br />Test analysis mechanisms vary according to the design artifact being examined. For an aerospace software requirement specification, the test engineer would do all of the following, as a minimum:<br />• Verify that each requirement is tagged in a manner that allows correlation of the tests for that requirement to the requirement itself. (Establish Test Traceability)<br />• Verify traceability of the software requirements to system requirements.<br />• Inspect for contradictory requirements.<br />• Inspect for ambiguous requirements. <br />• Inspect for missing requirements.<br />• Check to make sure that each requirement, as well as the specification as a whole, is understandable.<br />• Identify one or more measurement, demonstration, or analysis method that may be used to verify the requirement’s implementation (during formal testing).<br />• Create a test “sketch” that includes the tentative approach and indicates the test’s objectives.<br />Out of the items listed above, only the last two are specifically aimed at the act of creating test cases. The other items are almost mechanical in nature, where the test design engineer is simply checking the software engineer’s work. But all of the items are germane to test analysis, where any error can manifest itself as a bug in the implemented application. <br />Test analysis also serves a valid and valuable purpose within the context of software development. By digesting and restating the contents of a design artifact (whether it be requirements or design), testing analysis offers a second look – from another viewpoint – at the developer’s work. This is particularly true with regard to lower-level design artifacts like detailed design and source code. 
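The near-mechanical checks in the list above lend themselves to lightweight automation. A minimal sketch, in which the SRS-/SYS- tag convention and the record fields are invented for illustration:

```python
import re

# Assumed tag convention for this sketch; real projects define their own.
TAG_PATTERN = re.compile(r"^SRS-\d{3}$")

def check_requirements(requirements, system_tags):
    """Run the mechanical analyses: tag format, traceability, duplicates."""
    findings = {"bad_tags": [], "untraced": [], "duplicates": []}
    seen = set()
    for req in requirements:
        tag = req["tag"]
        if not TAG_PATTERN.match(tag):
            findings["bad_tags"].append(tag)    # cannot be correlated to tests
        if req["parent"] not in system_tags:
            findings["untraced"].append(tag)    # no system-level ancestor
        if tag in seen:
            findings["duplicates"].append(tag)  # possible duplicated requirement
        seen.add(tag)
    return findings

reqs = [
    {"tag": "SRS-001", "parent": "SYS-01"},
    {"tag": "SRS-002", "parent": "SYS-99"},  # SYS-99 is not a system requirement
    {"tag": "bad-tag", "parent": "SYS-01"},
]
findings = check_requirements(reqs, system_tags={"SYS-01", "SYS-02"})
```

The inspection items (contradiction, ambiguity, understandability) remain human work; only the bookkeeping is automated here.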
<br />This kind of feedback has a counterpart in human conversation. To verify one’s understanding of another person’s statements, it is useful to rephrase the statement in question using the phrase “So, what you’re saying is…”. This powerful method of confirming comprehension and eliminating miscommunication is just as important for software development – it helps to weed out misconceptions on the part of both the developer and tester, and in the process identifies potential problems in the software itself.<br />It should be clear from the above discussion that the tester’s analysis is both formal and informal. Formal analysis becomes the basis for documentary artifacts of the test side of the V. Informal analysis is used for immediate feedback to the designer in order to both verify that the artifact captures the intent of the designer and give the tester a starting point for understanding the software to be tested.<br />In the bulleted list shown above, the first two analyses are formal in nature (for an aerospace application). The verification of system requirement tags is a necessary step in the creation of a test traceability matrix. The software to system requirements traceability matrix similarly depends on the second analysis. <br />The three inspection analyses listed are more informal, aimed at ensuring that the specification being examined is of sufficient quality to drive the development of a quality implementation. The difference is in how the analytical outputs are used, not in the level of effort or attention that go into the analysis. <br />Test Design<br />Thus far, the tester has produced a lot of analytical output, some semi-formalized documentary artifacts, and several tentative approaches to testing the software. 
At this point, the tester is ready for the next step: test design.<br />The right wing of the butterfly represents the act of designing and implementing the test cases needed to verify the design artifact as replicated in the implementation. Like test analysis, it is a relatively large piece of work. Unlike test analysis, however, the focus of test design is not to assimilate information created by others, but rather to implement procedures, techniques, and data sets that achieve the test’s objective(s). <br />The outputs of the test analysis phase are the foundation for test design. Each requirement or design construct has had at least one technique (a measurement, demonstration, or analysis) identified during test analysis that will validate or verify that requirement. The tester must now put on his or her development hat and implement the intended technique.<br />Software test design, as a discipline, is an exercise in the prevention, detection, and elimination of bugs in software. Preventing bugs is the primary goal of software testing [BEIZ90]. Diligent and competent test design prevents bugs from ever reaching the implementation stage. Test design, with its attendant test analysis foundation, is therefore the premier weapon in the arsenal of developers and testers for limiting the cost associated with finding and fixing bugs.<br />Before moving ahead, it is necessary to comment on the continued analytical work performed during test design. As previously noted, tentative approaches are mapped out in the test analysis phase. During the test design phase of test development, those tentatively selected techniques and approaches must be evaluated more fully, until it is “proven” that the test’s objectives are met by the selected technique. 
If all tentatively selected approaches fail to satisfy the test’s objectives, then the tester must put his test analysis hat back on and start looking for more alternatives.<br />Test Execution<br />In the butterfly model of software test development, test execution is a separate piece of the overall approach. In fact, it is the smallest piece – the slender insect’s body – but it also provides the muscle that makes the wings work. It is important to note, however, that test execution (as defined for this model) includes only the formal running of the designed tests. Informal test execution is a normal part of test design, and in fact is also a normal part of software design and development. <br />Formal test execution marks the moment in the software development process where the developer and the tester join forces. In a way, formal execution is the moment when the developer gets to take credit for the tester’s work – by demonstrating that the software works as advertised. The tester, on the other hand, should already have proactively identified bugs (in both the software and the tests) and helped to eliminate them – well before the commencement of formal test execution!<br />Formal test execution should (almost) never reveal bugs. I hope this plain statement raises some eyebrows – although it is very much true. The only reasonable cause of unexpected failure in a formal test execution is hardware failure. The software, along with the test itself, should have been through the wringer enough to be bone-dry.<br />Note, however, that unexpected failure is singled out in the above paragraph. That implies that some software tests will have expected failures, doesn’t it? Yes, it surely does! <br />The reasons behind expected failure vary, but allow me to relate a case in point:<br />In the commercial jet engine control business, systems engineers prepare a wide variety of tests against the system (being the FADEC – or Full Authority Digital Engine Control) requirements. 
One such commonly employed test is the “flight envelope” test. The flight envelope test essentially begins with the simulated engine either off or at idle with the real controller (both hardware and software) commanding the situation. Then the engine is spooled up and taken for a simulated ride throughout its defined operational domain – varying altitude, speed, thrust, temperature, etc. in accordance with real world recorded profiles. The expected results for this test are produced by running a simulation (created and maintained independently from the application software itself) with the same input data sets.<br />Minor failures in the formal execution of this test are fairly common. Some are hard failures – repeatable on every single run of the test. Others are soft – only intermittently reaching out to bite the tester. Each and every failure is investigated, naturally – and the vast majority of flight envelope failures are caused by test stand problems. These can include issues like a voltage source being one twentieth of a volt low, or slight timing mismatches caused by the less exact timekeeping of the test stand workstation as compared to the FADEC itself. <br />Some flight envelope failures are attributed to the model used to provide expected results. In such cases, hours and days of gut-wrenching analytical work go into identifying the minuscule difference between the model and the actual software. <br />A handful of flight envelope test failures are caused by the test parameters themselves. Tolerances may be set at unrealistically tight levels, for example. Or slight operating mode mismatches between the air speed and engine fan speed may cause a fault to be intermittently annunciated.<br />In very few cases have I seen the software being tested lie at the root of the failure. 
(I did witness the bugs being fixed, by the way!)<br />The point is this – complex and complicated tests can fail due to a variety of reasons, from hardware failure, through test stand problems, to application error. Intermittent failures may even jump into the formal run, just to make life interesting. <br />But the test engineer understands the complexity of the test being run, and anticipates potential issues that may cause failures. In fact, the test is expected to fail once in a while. If it doesn’t, then it isn’t doing its job – which is to exercise the control software throughout its valid operational envelope. As in all applications, the FADEC’s boundaries of valid operation are dark corners in which bugs (or at least potential bugs) congregate.<br />It was mentioned during our initial discussion of the V development model that the model is sufficient, from a software development point of view, to express the lineage of test artifacts. This is because testing, again from the development viewpoint, is composed of only the body of the butterfly – formal test execution. We testers, having learned the hard way, know better.<br /><br />To be continued in Part-2<br /><br />Understanding Metrics in Software Testing<br />Summary:-<br />Metrics are the means by which software quality can be measured; they give you confidence in the product. You may consider them product management indicators, which can be either quantitative or qualitative. They typically provide the visibility you need.<br />Theme:-<br />Metrics<br />Metrics usually fall into two broad categories: project management (which includes process efficiency) and process improvement. People are often confused about which metrics they should be using. You may use different metrics for different purposes. 
For example, you may have a set of metrics that you use to evaluate the output of your test team. One such metric may be the project management measure of the number of bugs found. Others may be an efficiency measure of the number of test cases written, or the number of tests executed in a given period of time.<br />________________________________________<br />The goal is to choose metrics that will help you understand the state of your product.<br />________________________________________<br />Ultimately, when you consider the value of a metric, you need to ask if it provides visibility into the software product's quality. Metrics are only useful if they help you to make sound business decisions in a timely manner. If the relevancy or integrity of a metric cannot be justified, don't use it. Consider, for example, how management analysis and control makes use of financial reports such as profit/loss, cash flow, ratios, job costing, etc. These reports help you navigate your business in a timely manner. Engineering metrics are analogous, providing data to help perform analyses and control the development process. However, your engineers may not be the right people to give you the metrics you need to help in making business decisions, because they are not trained financial analysts. As an executive, you need to determine what metrics you want and tell your staff to provide them.<br />For example, coverage metrics are essential for your team. Coverage is the measure of some amount of testing. You could have requirements coverage metrics, platform coverage metrics, path coverage metrics, scenario coverage metrics, or even test plan coverage metrics, to name a few. Cem Kaner lists over 100 types of coverage measures in his paper "Negligence and Testing Coverage." Before the project starts, it is important to come to agreement on how you will measure test coverage. 
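As a concrete illustration, requirements coverage (one of the measures named above) reduces to a simple ratio. A minimal sketch, with hypothetical requirement IDs:

```python
def requirements_coverage(all_reqs, reqs_with_tests):
    """Fraction of defined requirements touched by at least one test case."""
    covered = set(all_reqs) & set(reqs_with_tests)
    return len(covered) / len(all_reqs)

coverage = requirements_coverage(
    all_reqs=["R1", "R2", "R3", "R4"],
    reqs_with_tests=["R1", "R3", "R4", "R9"],  # R9 has a test but is not a defined requirement
)
```

The same shape works for platform, path, or scenario coverage; only the definition of "item" changes.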
Obviously, the more coverage of a certain type, the less risk associated with that type.<br />The goal is to choose metrics that will help you understand the state of your product. Wisely choose a handful of metrics specific to your type of project and use them to gain visibility into how close the product is to release. The test group should be providing you with plenty of useful information through these metrics.<br />Conclusion<br />The metrics provided by testing offer a major benefit to executives: visibility into the maturity of the software product under development, its quality, and its readiness for release or production. This enables effective management of the software development process by allowing clear measurement of the quality and completeness of the product.<br /><br />End of document<br /><br />Software testing metrics for a medium-sized project<br />Summary:- This article details the metrics one should collect for a typical medium-sized software testing project and how long those metrics should be collected during the project schedule. <br />Theme:- <br />IMHO, project size doesn't change your need to know what you're doing, which is what metrics are for. And I can't think of a point in a project when it's no longer necessary to know what's going on. Failing to know key measures, including the consequences after the project supposedly is done, is a major way in which small projects turn into big projects. <br />Basically, you always need measures of two things: (1) results you are getting, and (2) the causes of those results. 
<br />Results<br />Typically, the primary measure of results is whether the project is on time and on budget, which usually says more about the effectiveness of setting budgets and schedules than about the project itself. Poorly set budgets and schedules are the biggest reasons for overruns. Other results measures include the size and quality of what has been produced. <br />Size may be measured in terms of KLOC (K for thousand, LOC for lines of code), function points, modules, objects, methods, or similar units which reliably describe the physical size of the software produced. Some people measure project size in number of requirements or pages of design. Other types of sizing measures include capacity, such as the number of users or sites served, and database and transaction volumes. Project results involving hardware are also often sized with respect to the numbers and capacities or capabilities of hardware components. A highway project ordinarily would be sized with respect to the length of the road involved. Although somewhat circular, many projects are sized by the budget and/or schedule.<br />Quality of results is typically measured in terms of defects, ordinarily as defect density, which is the number of defects relative to the physical size of the product, system or software. However, the way many folks measure defects can create as many issues as it addresses. <br />For instance, it's especially common for defect measures to include only coding errors, which reflect poorly on the developer and thereby create incentives for developers to pay more attention to avoiding accountability than to actually doing a good job. Arguing about whether something is a defect is a pretty nonproductive use of everyone's time. "Coded as designed" and "user error" argument distractions can be prevented by making sure that defects also can be categorized as requirements, design, instructions and operational defects. 
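As a toy illustration of defect density and of categorizing defects beyond coding errors, as argued above (the figures and category labels are invented):

```python
from collections import Counter

def defect_density(defects, kloc):
    """Defects per thousand lines of code."""
    return len(defects) / kloc

# Each defect carries a category, so "coded as designed" arguments are
# side-stepped: requirements and design defects count too.
defects = ["coding", "requirements", "coding", "design", "operational"]
density = defect_density(defects, kloc=10.0)
by_category = Counter(defects)
```

The category breakdown shows where defects originate, not merely how many there are.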
<br />Results value<br />In addition to these physical size and quality measures of results, it's essential to quantify results in terms of value, which is what stumps many people. Probably the simplest method used is the percentage of defined requirements that have been implemented. <br />Percentages alone don't tell the full story because all requirements are not created equal with respect to size or value, and there can be wide variations in how well a requirement has been satisfied and how adequately the requirements have been defined. That's why it's essential to use effective methods to discover the REAL business requirements -- deliverable whats that provide value when delivered (or met or satisfied). <br />Ultimately, value should be measured in money. Monetary benefits come from four sources. Cost savings mean eliminating or reducing existing expenditures (unfortunately the most common method is eliminating jobs). Cost avoidance means not having to incur an otherwise additional future expense. Revenue enhancement occurs when one sells more, charges more for what they sell, and/or collects more of what they charge. Revenue protection involves retaining existing sales, which includes compliance with laws and regulations necessary to stay operational.<br />Actually, value is a net figure, which also must take into account the investment cost of achieving the benefit return. Thus, value most often is measured as return on investment (ROI). Conventional ROI determinations are frequently unreliable because they tend to fall prey to 10 common but seldom recognized pitfalls. (See www.proveit.net for information about determining right, reliable and responsible "REAL ROI.")<br />Causes<br />In order to sustain and improve results, it's necessary to identify and measure the causes of those results. Basic causal measures are resource costs/effort and time duration of the project work. 
Size and complexity of the project, of course, are the biggest determinants of effort and duration; they also are major sources of risk, which is another causal factor to consider. <br />Usually it's helpful to measure causes and results with respect to life cycle stages, such as requirements, design, development, unit testing, integration testing, system testing, acceptance testing and production. Distinguishing new code from modified code can be helpful for understanding causes of results. <br />Similarly, causes of results can be identified with respect to factors such as development methodology, use of particular types of tools and techniques, platform and language, and staff skills and experience. <br />By measuring results associated with these various types of causal factors, it's usually possible to tell what's going well and what needs improvement. Moreover, these more granular measures give a quicker indication of how well improvements are working.<br /><br />End of document<br /><br />Measuring Defect Removal Accurately<br />Summary:<br />This article provides details on test metrics at the product, process and project levels.<br /><br />PRODUCT<br />Each metric below is listed with its definition, its purpose, and how to calculate it.<br /><br />Number of remarks<br />Definition: The total number of remarks found in a given time period, phase, or test type. A remark is a claim made by a test engineer that the application shows undesired behavior; it may or may not result in software modification or changes to documentation.<br />Purpose: One of the earliest indicators to measure once testing commences; provides initial indications about the stability of the software. 
Total number of remarks found.<br />Number of defects The total number of remarks found in a given time period/phase/test type that resulted in software or documentation modifications. A more meaningful way of assessing the stability and reliability of the software than number of remarks. Duplicate remarks have been eliminated; rejected remarks have been done. Only remarks that resulted in modifying the software or the documentation are counted.<br />Remark status The status of the defect could vary depending upon the defect-tracking tool that is used. Broadly, the following statuses are available: To be solved: Logged by the test engineers and waiting to be taken over by the software engineer. To be retested: Solved by the developer, and waiting to be retested by the test engineer. Closed: The issue was retested by the test engineer and was approved. Track the progress with respect to entering, solving and retesting the remarks. During this phase, the information is useful to know the number of remarks logged, solved, waiting to be resolved and retested. This information can normally be obtained directly from the defect tracking system based on the remark status.<br />Defect severity The severity level of a defect indicates the potential business impact for the end user (business impact = effect on the end user x frequency of occurrence). Provides indications about the quality of the product under test. High-severity defects means low product quality, and vice versa. At the end of this phase, this information is useful to make the release decision based on the number of defects and their severity levels. Every defect has severity levels attached to it. Broadly, these are Critical, Serious, Medium and Low.<br />Defect severity index An index representing the average of the severity of the defects. Provides a direct measurement of the quality of the product—specifically, reliability, fault tolerance and stability. 
Two measures are required to compute the defect severity index. A number is assigned against each severity level: 4 (Critical), 3 (Serious), 2 (Medium), 1 (Low). Multiply each remark by its severity level number and add the totals; divide this by the total number of defects to determine the defect severity index.<br />Time to find a defect The effort required to find a defect. Shows how fast the defects are being found. This metric indicates the correlation between the test effort and the number of defects found. Divide the cumulative hours spent on test execution and logging defects by the number of defects entered during the same period.<br />Time to solve a defect Effort required to resolve a defect (diagnosis and correction). Provides an indication of the maintainability of the product and can be used to estimate projected maintenance costs. Divide the number of hours spent on diagnosis and correction by the number of defects resolved during the same period. <br />Test coverage Defined as the extent to which testing covers the product’s complete functionality. This metric is an indication of the completeness of the testing. It does not indicate anything about the effectiveness of the testing. This can be used as a criterion to stop testing. Coverage could be with respect to requirements, functional topic list, business flows, use cases, etc. It can be calculated based on the number of items that were covered vs. the total number of items.<br />Test case effectiveness The extent to which test cases are able to find defects. This metric provides an indication of the effectiveness of the test cases and the stability of the software. Ratio of the number of test cases that resulted in logging remarks vs. the total number of test cases.<br />Defects/ KLOC The number of defects per 1,000 lines of code. This metric indicates the quality of the product under test. It can be used as a basis for estimating defects to be addressed in the next phase or the next version. 
Ratio of the number of defects found vs. the total number of lines of code (thousands).<br />PROJECT<br />Workload capacity ratio Ratio of the planned workload to the gross capacity for the total test project or phase. This metric helps in detecting issues related to estimation and planning. It serves as an input for estimating similar projects as well. Computation of this metric often happens at the beginning of the phase or project. Workload is determined by multiplying the number of tasks by their norm times; gross capacity is the planned working time. The ratio is the workload divided by the gross capacity.<br />Test planning performance The planned value relative to the actual value. Shows how well estimation was done. The ratio of the actual effort spent to the planned effort.<br />Test effort percentage Test effort is the amount of work spent, in hours, days or weeks. Overall project effort is divided among multiple phases of the project: requirements, design, coding, testing and such. The effort spent in testing, in relation to the effort spent in the development activities, gives an indication of the level of investment in testing. This information can also be used to estimate similar projects in the future. This metric can be computed by dividing the overall test effort by the total project effort.<br />Defect category An attribute of the defect in relation to the quality attributes of the product. Quality attributes of a product include functionality, usability, documentation, performance, installation and internationalization. This metric can provide insight into the different quality attributes of the product. It can be computed by dividing the defects that belong to a particular category by the total number of defects.<br /><br />PROCESS<br />Should be found in which phase An attribute of the defect, indicating in which phase the remark should have been found.
Are we able to find the right defects in the right phase, as described in the test strategy? Indicates the percentage of defects that migrate into subsequent test phases. Computation of this metric is done by calculating the number of defects that should have been found in previous test phases.<br />Residual defect density An estimate of the number of defects that may remain unresolved in the product after a phase. The goal is to achieve a defect level that is acceptable to the clients. We remove defects in each of the test phases so that few will remain. Estimating this is a tricky issue: previously released products provide a basis for estimation, while for new versions, industry standards coupled with project specifics form the basis for estimation.<br />Defect remark ratio Ratio of the number of remarks that resulted in software modification vs. the total number of remarks. Provides an indication of the level of understanding between the test engineers and the software engineers about the product, as well as an indirect indication of test effectiveness. The number of remarks that resulted in software modification vs. the total number of logged remarks. Valid for each test type, during and at the end of test phases.<br />Valid remark ratio Percentage of valid remarks during a certain period. Valid remarks = number of defects + duplicate remarks + number of remarks that will be resolved in the next phase or release. Indicates the efficiency of the test process. Ratio of the total number of remarks that are valid to the total number of remarks found.<br />Bad fix ratio Percentage of resolved remarks that resulted in creating new defects while resolving existing ones. Indicates the effectiveness of the defect-resolution process, plus indirect indications as to the maintainability of the software. Ratio of the total number of bad fixes to the total number of resolved defects.
This can be calculated per test type, test phase or time period.<br />Defect removal efficiency The number of defects that are removed per time unit (hours/days/weeks). Indicates the efficiency of defect-removal methods, as well as being an indirect measurement of the quality of the product. Computed by dividing the number of remarks by the combined effort spent on defect detection, defect resolution and retesting. This is calculated per test type, during and across test phases.<br />Phase yield Defined as the number of defects found during a phase of the development life cycle vs. the estimated number of defects at the start of the phase. Shows the effectiveness of the defect removal. Provides a direct measurement of product quality and can be used to determine the estimated number of defects for the next phase. Ratio of the number of defects found to the total number of estimated defects. This can be used during a phase and also at the end of the phase.<br />Backlog development The number of remarks that are yet to be resolved by the development team. Indicates how well the software engineers are coping with the testing efforts. The number of remarks that remain to be resolved.<br />Backlog testing The number of resolved remarks that are yet to be retested by the test team. Indicates how well the test engineers are coping with the development efforts. The number of resolved remarks that have not yet been retested.<br />Scope changes The number of changes that were made to the test scope. Indicates requirements stability or volatility, as well as process stability.
Ratio of the number of changed items in the test scope to the total number of items.<br /><br />End of document<br /><br />2009-09-08: Subject: Software testing metrics for a medium-sized project<br />Author: Robin F Goldsmith<br />Summary: This article details the metrics one should collect for a typical medium-sized software testing project, and how long these metrics should be collected during the project schedule.<br />Theme:<br />IMHO, project size doesn't change your need to know what you're doing, which is what metrics are for. And I can't think of a point in a project when it's no longer necessary to know what's going on. Failing to know key measures, including the consequences after the project supposedly is done, is a major way in which small projects turn into big projects. <br />Basically, you always need measures of two things: (1) the results you are getting, and (2) the causes of those results. <br />Results<br />Typically, the primary measure of results is whether the project is on time and in budget, which usually says more about the effectiveness of setting budgets and schedules than about the project itself. Poorly set budgets and schedules are the biggest reasons for overruns. Other results measures include the size and quality of what has been produced. <br />Size may be measured in terms of KLOC (K for thousand, LOC for lines of code), function points, modules, objects, methods, or similar units which reliably describe the physical size of the software produced. Some people measure project size in number of requirements or pages of design. Other types of sizing measures include capacity, such as the number of users or sites served, and database and transaction volumes.
Project results involving hardware are also often sized with respect to numbers and capacities or capabilities of hardware components. A highway project ordinarily would be sized with respect to the length of the road involved. Although somewhat circular, many projects are sized by the budget and/or schedule.<br />Quality of results is typically measured in terms of defects, ordinarily as defect density, which is the number of defects relative to the physical size of the product, system or software. However, the way many folks measure defects can create as many issues as it addresses. <br />For instance, it's especially common for defect measures to include only coding errors, which reflect poorly on the developer and thereby create incentives for developers to pay more attention to avoiding accountability than to actually doing a good job. Arguing about whether something is a defect is a pretty nonproductive use of everyone's time. "Coded as designed" and "user error" argument distractions can be prevented by making sure that defects also can be categorized as requirements, design, instructions and operational defects. <br />Results value<br />In addition to these physical size and quality measures of results, it's essential to quantify results in terms of value, which is what stumps many people. Probably the simplest method used is the percentage of defined requirements that have been implemented. <br />Percentages alone don't tell the full story, because all requirements are not created equal with respect to size or value, and there can be wide variations in how well a requirement has been satisfied and how adequately the requirements have been defined. That's why it's essential to use effective methods to discover the REAL business requirements -- deliverable "whats" that provide value when delivered (or met or satisfied). <br />Ultimately, value should be measured in money. Monetary benefits come from four sources.
Cost savings mean eliminating or reducing existing expenditures (unfortunately the most common method is eliminating jobs). Cost avoidance means not having to incur an otherwise additional future expense. Revenue enhancement occurs when one sells more, charges more for what they sell, and/or collects more of what they charge. Revenue protection involves retaining existing sales, which includes compliance with laws and regulations necessary to stay operational.<br />Actually, value is a net figure, which also must take into account the investment cost of achieving the benefit return. Thus, value most often is measured as return on investment (ROI). Conventional ROI determinations are frequently unreliable because they tend to fall prey to 10 common but seldom recognized pitfalls. (See www.proveit.net for information about determining right, reliable and responsible "REAL ROI.")<br />Causes<br />In order to sustain and improve results, it's necessary to identify and measure the causes of those results. Basic causal measures are resource costs/effort and time duration of the project work. Size and complexity of the project, of course, are the biggest determinants of effort and duration; they also are major sources of risk, which is another causal factor to consider. <br />Usually it's helpful to measure causes and results with respect to life cycle stages, such as requirements, design, development, unit testing, integration testing, system testing, acceptance testing and production. Distinguishing new code from modified code can be helpful for understanding causes of results. <br />Similarly, causes of results can be identified with respect to factors such as development methodology, use of particular types of tools and techniques, platform and language, and staff skills and experience. <br />By measuring results associated with these various types of causal factors, it's usually possible to tell what's going well and what needs improvement. 
Moreover, these more granular measures give a quicker indication of how well improvements are working.<br /><br />End of document<br /><br />2009-09-08: Software Testing Metrics - Test Case Review Effectiveness<br />Summary: This article provides a laundry list of metrics for test case review.<br />Theme:<br />Metrics are the means by which software quality can be measured; they give you confidence in the product. You may consider these product management indicators, which can be either quantitative or qualitative. They typically provide the visibility you need.<br />The goal is to choose metrics that will help you understand the state of your product.<br /><br />Metrics for Test Case Review Effectiveness:<br /><br />1. Major Defects Per Test Case Review<br />2. Minor Defects Per Test Case Review<br />3. Total Defects Per Test Case Review<br />4. Ratio of Major to Minor Defects Per Test Case Review<br />5. Total Defects Per Test Case Review Hour<br />6. Major Defects Per Test Case Review Hour<br />7. Ratio of Major to Minor Defects Per Test Case Review Hour<br />8. Number of Open Defects Per Test Case Review<br />9. Number of Closed Defects Per Test Case Review<br />10. Ratio of Closed to Open Defects Per Test Case Review<br />11. Number of Major Open Defects Per Test Case Review<br />12. Number of Major Closed Defects Per Test Case Review<br />13. Ratio of Major Closed to Open Defects Per Test Case Review<br />14. Number of Minor Open Defects Per Test Case Review<br />15. Number of Minor Closed Defects Per Test Case Review<br />16. Ratio of Minor Closed to Open Defects Per Test Case Review<br />17. Percent of Total Defects Captured Per Test Case Review<br />18. Percent of Major Defects Captured Per Test Case Review<br />19. 
Percent of Minor Defects Captured Per Test Case Review<br />20. Ratio of Percent Major to Minor Defects Captured Per Test Case Review<br />21. Percent of Total Defects Captured Per Test Case Review Hour<br />22. Percent of Major Defects Captured Per Test Case Review Hour<br />23. Percent of Minor Defects Captured Per Test Case Review Hour<br />24. Ratio of Percent Major to Minor Defects Captured Per Test Case Review Hour<br />25. Percent of Total Defect Residual Per Test Case Review<br />26. Percent of Major Defect Residual Per Test Case Review<br />27. Percent of Minor Defect Residual Per Test Case Review<br />28. Ratio of Percent Major to Minor Defect Residual Per Test Case Review<br />29. Percent of Total Defect Residual Per Test Case Review Hour<br />30. Percent of Major Defect Residual Per Test Case Review Hour<br />31. Percent of Minor Defect Residual Per Test Case Review Hour<br />32. Ratio of Percent Major to Minor Defect Residual Per Test Case Review Hour<br />33. Number of Planned Test Case Reviews<br />34. Number of Held Test Case Reviews<br />35. Ratio of Planned to Held Test Case Reviews<br />36. Number of Reviewed Test Cases<br />37. Number of Unreviewed Test Cases<br />38. Ratio of Reviewed to Unreviewed Test Cases<br />39. Number of Compliant Test Case Reviews<br />40. Number of Non-Compliant Test Case Reviews<br />41. Ratio of Compliant to Non-Compliant Test Case Reviews<br />42. Compliance of Test Case Reviews<br />43. Non-Compliance of Test Case Reviews<br />44. 
Ratio of Compliance to Non-Compliance of Test Case Reviews<br /><br />End of document<br /><br />2009-09-08: Software Testing Metrics - Metrics Used by Software Testers<br />Summary:<br />This article provides details of the various types of metrics generally used by software testers.<br />Theme:<br />A software metric is a measure of some property of a piece of software or its specifications.<br /><br />Since quantitative methods have proved so powerful in the other sciences, computer science practitioners and theoreticians have worked hard to bring similar approaches to software development. Tom DeMarco stated, “You can't control what you can't measure.”<br /><br />Product quality measures are captured in various ways; here are some examples:<br />1. Customer satisfaction index<br /><br />This index is surveyed before and after product delivery (and on an ongoing, periodic basis, using standard questionnaires). The following are analyzed:<br /><br />- Number of system enhancement requests per year<br />- Number of maintenance fix requests per year<br />- User friendliness: call volume to customer service hotline<br />- User friendliness: training time per new user<br />- Number of product recalls or fix releases (software vendors)<br />- Number of production re-runs (in-house information systems groups)<br /><br />2. Delivered defect quantities<br /><br />These are normalized per function point (or per LOC) at product delivery (first 3 months or first year of operation) or ongoing (per year of operation), by level of severity and by category or cause: requirements defect, design defect, code defect, documentation/online help defect, defect introduced by fixes, etc.<br /><br />3. 
Responsiveness (turnaround time) to users<br /><br />- Turnaround time for defect fixes, by level of severity<br />- Time for minor vs. major enhancements; actual vs. planned elapsed time<br /><br />4. Product volatility<br /><br />- Ratio of maintenance fixes (to repair the system & bring it into compliance with specifications), vs. enhancement requests (requests by users to enhance or change functionality)<br /><br />5. Defect ratios<br /><br />- Defects found after product delivery per function point.<br />- Defects found after product delivery per LOC<br />- Pre-delivery defects: annual post-delivery defects<br />- Defects per function point of the system modifications<br /><br />6. Defect removal efficiency<br /><br />- Number of post-release defects (found by clients in field operation), categorized by level of severity<br />- Ratio of defects found internally prior to release (via inspections and testing), as a percentage of all defects<br />- All defects include defects found internally plus externally (by customers) in the first year after product delivery<br /><br />7. Complexity of delivered product<br /><br />- McCabe's cyclomatic complexity counts across the system<br />- Halstead’s measure<br />- Card's design complexity measures<br />- Predicted defects and maintenance costs, based on complexity measures<br /><br />8. Test coverage<br /><br />- Breadth of functional coverage<br />- Percentage of paths, branches or conditions that were actually tested<br />- Percentage by criticality level: perceived level of risk of paths<br />- The ratio of the number of detected faults to the number of predicted faults.<br /><br />9. 
Cost of defects<br /><br />- Business losses per defect that occurs during operation<br />- Business interruption costs; costs of work-arounds<br />- Lost sales and lost goodwill<br />- Litigation costs resulting from defects<br />- Annual maintenance cost (per function point)<br />- Annual operating cost (per function point)<br />- Measurable damage to your boss's career<br /><br />10. Costs of quality activities<br /><br />- Costs of reviews, inspections and preventive measures<br />- Costs of test planning and preparation<br />- Costs of test execution, defect tracking, version and change control<br />- Costs of diagnostics, debugging and fixing<br />- Costs of tools and tool support<br />- Costs of test case library maintenance<br />- Costs of testing & QA education associated with the product<br />- Costs of monitoring and oversight by the QA organization (if separate from the development and test organizations)<br /><br />11. Re-work<br /><br />- Re-work effort (hours, as a percentage of the original coding hours)<br />- Re-worked LOC (source lines of code, as a percentage of the total delivered LOC)<br />- Re-worked software components (as a percentage of the total delivered components)<br /><br />12. Reliability<br /><br />- Availability (percentage of time a system is available, versus the time the system is needed to be available)<br />- Mean time between failures (MTBF)<br />- Mean time to repair (MTTR)<br />- Reliability ratio (MTBF / MTTR)<br />- Number of product recalls or fix releases<br />- Number of production re-runs as a ratio of production runs<br /><br />Metrics for Evaluating Application System Testing:<br /><br />Metric = Formula<br /><br />Test Coverage = Number of units (KLOC/FP) tested / total size of the system. 
(LOC represents Lines of Code)<br /><br />Number of tests per unit size = Number of test cases per KLOC/FP<br /><br />Acceptance criteria tested = Acceptance criteria tested / total acceptance criteria<br /><br />Defects per size = Defects detected / system size<br /><br />Test cost (in %) = Cost of testing / total cost * 100<br /><br />Cost to locate defect = Cost of testing / number of defects located<br /><br />Achieving Budget = Actual cost of testing / budgeted cost of testing<br /><br />Defects detected in testing = Defects detected in testing / total system defects<br /><br />Defects detected in production = Defects detected in production / system size<br /><br />Quality of Testing = No. of defects found during testing / (No. of defects found during testing + No. of acceptance defects found after delivery) * 100<br /><br />Effectiveness of testing to business = Loss due to problems / total resources processed by the system<br /><br />System complaints = Number of third-party complaints / number of transactions processed<br /><br />Scale of Ten = Assessment of testing by giving a rating on a scale of 1 to 10<br /><br />Source Code Analysis = Number of source code statements changed / total number of tests<br /><br />Effort Productivity:<br />Test Planning Productivity = No. of test cases designed / actual effort for design and documentation<br />Test Execution Productivity = No. of test cycles executed / actual effort for testing<br />End of document<br /><br />2009-09-08: Subject: Using metrics to monitor software projects<br />Author: Lawrence Oliva<br />Summary:<br />This article provides details on metrics at the project level in general.<br />Theme:<br />“Using inaccurate metrics to manage a project budget of $20,000 per day (or more) often leads to a 
significant project recovery situation.”<br />Using metrics to monitor projects is not a new concept to project managers. The builders of the pyramids used metrics to monitor and report progress (cubits of earth moved, stones placed, resources needed, etc.) Today's software project managers measure lines of code completed, bugs repaired, and engineering productivity using Web 2.0 tools. The concepts -- working to schedule and budget -- remain the same after thousands of years. <br />However, it is important for metrics to support your project and not just be a bottomless pit for data collection and weekly presentation. If currently used metrics are not driving your team's time, energy, and skill set mix towards achieving project success, it's time to select a new set of measurements. <br />One problem PMs encounter is selecting the most applicable measuring tools or standards from the vast number of potential metrics available, including earned value, cost budgets, Gantt charts with baselines, PERT charts, risk registers, network diagrams, function points, lines of code (LOC), bug counts, use case points, conditional complexity, time to repair, and time to fail. Working your way through this list can be daunting, especially if your client or management doesn't see the value of using (or paying for) metrics. Each project has its unique issues, constraints, and deliverables. Taking extra time to evaluate which metrics are best to use is a smart PM investment. <br />My advice is to select two metrics that are most relevant to your manager, two that are relevant to your team, and two that are relevant to you. Selecting a few appropriate metrics and frequently updating the data is a far more effective use of your limited time. Many PMs find metrics usually provide valuable reference information for estimating future project resources and schedules. <br />Metrics don't need to be stated on bar charts or have multiple pretty colors. But they do need to be accurate. 
Using inaccurate metrics to manage a project budget of $20,000 per day (or more) often leads to a significant project recovery situation. Many PMs don't survive a project that needs a 50% budget increase to recover from internal mistakes the PM should have caught. <br />Which metrics are most useful to software PMs? I would suggest these six: <br />For management: Earned value and Gantt charts. These metrics provide good information on overall schedule, budget, and project movement in graphical formats, clearly showing progress to date and work left to complete. <br />For the project team: LOC and bug counts. These metrics show daily or weekly progress made by team members as needed to achieve schedule milestones and how many defects are occurring "in-process." <br />For the project manager: Conditional complexity and use case points. These metrics illuminate potential areas for additional (or different) resources, programming tools, or time. Knowing when and where these problems may arise helps the PM minimize their potential impact on the project. <br />Successful software project management has always been a combination of delivering what clients said they wanted and exceeding expectations. Metrics enable a PM to achieve what is traditionally expected -- completing on time and on budget -- while providing the opportunity to impress management, promote the project team's talents, and surprise the world. Just like the pyramid builders! <br /><br />End of document<br /><br />2009-09-08: Subject: Using SBTM for exploratory testing coverage problems<br />Author: Michael Kelly<br />Summary:<br />Session-based test management (SBTM) can give test managers greater control in exploratory testing. 
In this article, we'll look at how test managers can better handle test execution, focusing on the metrics gleaned from the process and how those metrics can help us report testing status by providing increased visibility into the work.<br />Theme:<br />When I'm managing a project using session-based test management, I regularly use the following metrics: <br />• Charter velocity <br />• Level of coverage achieved <br />• Features/areas/risks covered <br />• Average session execution time <br />• Percent of test execution complete <br />Charter velocity<br />Velocity is the key metric I use to track day-to-day work and to predict when my testing will be done. On a daily basis, I look at how many charters the team is creating and how many charters a day the team is executing. These measures can be based on charters per day, per iteration, per tester, per charter priority (explained in more detail below), or per area (also explained in more detail below). <br />For a basic look at charter velocity, let's look at the following data from my first nine days of testing on a hypothetical project: <br />Day-by-day data (charters created / executed / remaining):<br />Day 1: created 10, executed 9, remaining 20<br />Day 2: created 8, executed 8, remaining 21<br />Day 3: created 9, executed 7, remaining 21<br />Day 4: created 6, executed 8, remaining 23<br />Day 5: created 5, executed 7, remaining 21<br />Day 6: created 7, executed 9, remaining 19<br />Day 7: created 4, executed 6, remaining 17<br />Day 8: created 5, executed 7, remaining 15<br />Day 9: created 3, executed 8, remaining 13<br />On this project, we start with 20 charters from our initial look at the testing, and on our first day we discover 10 new charters we hadn't thought of initially and execute nine charters out of our pool. That gives us 21 charters at the start of the second day. That process of creating new charters and running charters out of the pool continues for the next few days. <br />Two patterns typically emerge from this type of data. First, you'll likely find that over time the number of new charters you create each day starts to go down. 
Second, you'll notice that you tend to average around the same number of charters a day as a team (give or take a few depending on what else is going on within the project). <br />If you were to chart the data, it might look something like the following: <br /> <br />At this point, I might add a trendline to help me predict when I might be finished with my testing: <br />As you can see from the graph above, based on my testing to date, I might be finished with my testing as early as three days out. On large projects, with a lot of measures of charters by area, priority, or some other criteria, I've found simple charts like these to be predictive of what the team will actually do. It's normally not correct down to the exact day, but it's normally within the week (for small to medium-sized projects). <br />Level of coverage achieved<br />As I outlined in the first article in the series, when I create my charters I prioritize them into three levels: <br />• A - We need to run this charter. <br />• B - We should run this charter if we have time. <br />• C - We could run this charter, but there are likely better uses of our time. <br />I did this to allow me to easily map my charter coverage to the coverage metric James Bach outlines in his low-tech testing dashboard. In that dashboard, James provides four levels of coverage: <br />• 0 - We have no good information about this area. <br />• 1 - We've done a sanity check of major functions using simple data. <br />• 2 - We've touched all functions with common and critical data. <br />• 3 - We've looked at corner cases using strong data, state, error, or stress testing. <br />This gives me the ability to do a direct mapping of charters to coverage. When my level A charters are done, we've completed our basic sanity tests. When our level B charters are finished, we've hit all the common cases we could think of. Same for level C. 
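Assuming charters are tracked with a priority and an executed flag, the priority-to-coverage mapping just described can be sketched as follows. This is a minimal sketch; the data shape and the `coverage_level` function name are illustrative assumptions, not part of any SBTM tool.

```python
# Sketch: map charter priorities (A/B/C) to the four dashboard coverage
# levels (0-3) described above. Level 1 requires all A charters executed,
# level 2 all B charters as well, level 3 all C charters.

def coverage_level(charters):
    """charters: list of (priority, executed) pairs, priority in 'ABC'."""
    def all_done(priority):
        subset = [done for prio, done in charters if prio == priority]
        return bool(subset) and all(subset)

    level = 0
    for lvl, prio in ((1, "A"), (2, "B"), (3, "C")):
        if all_done(prio):
            level = lvl
        else:
            break
    return level

charters = [("A", True), ("A", True), ("B", True), ("B", False), ("C", False)]
print(coverage_level(charters))  # all A executed but not all B -> 1
```

Note that under this sketch an unexecuted level A charter drops the computed level to 0, so a team may want to report the last level fully achieved alongside the current figure.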
What's interesting about this is that if we're at level 2 or 3 coverage, as soon as one of the testers identifies another level A charter, we go back to level 1. It means we missed something - likely something big. <br />Features/areas/risks covered <br />As with any testing effort, with session-based test management I'm always watching what we're testing. I look at the number of charters by feature or by area of the application. I'm trying to answer questions like: <br />• Do we have at least one charter per story (or requirement, depending on your methodology)? <br />• Do the areas of the application that are historically more complicated or more error-prone have more charters than those areas that are easier or more stable? <br />• Are there certain areas or risks where we need a high level of coverage (level 3 coverage or priority C charters)? Do we have that coverage planned or executed? <br />Keeping an eye on coverage from multiple perspectives (story vs. area vs. risk) can help make sure you're getting a good balance. For example, if I'm only looking at coverage for my current stories or requirements, then while I might have great requirements coverage, I might miss areas that need regression testing. If I'm only looking at areas by feature set, then I might miss testing for performance, security, or some other quality criteria. In general, I try to get at least two different views on coverage per project.<br />When thinking about how to break up an application by area or risk, I'll often start with subsystems and work out from there. For example, if you look at an e-commerce site, you'd have something like: search, item browsing, shopping cart, checkout, email and messaging, order lookup and tracking, account administration, and help documentation. You might also include performance, security, internationalization, and usability. 
For a given iteration, you might track coverage across those areas, but then, using a separate view of the data, also look at coverage by feature or story for that iteration. <br />Average session execution time <br />One of the things I try to measure when running a project using session-based test management is how long it takes us to run our charters. If you remember from the first article in this series, each session is time-boxed (typically 45 to 60 minutes). Once you have this information, you can then sort and filter the data to better understand how much time your team is spending executing tests by functional area, by feature or story, by type of testing (functional, performance, security, etc.), or by tester. <br />Capturing this metric gives me feedback: <br />• It tells me how good we are at estimating when we do our initial chartering. With this information, I know when I need to work with the team, or individuals on the team, to help them either improve their time estimates or better manage the scope of their charter missions. <br />• It tells me how much time we're spending on specific areas or features of the application. With this information, I can better manage where we are spending our time to ensure the most important areas of the application are getting the most coverage. It can also be an indicator of which areas of the application are more difficult to test than others. That can be useful in future planning and training. <br />• It tells me how much time we're spending on specific types of testing. With this information I can better understand how much time we spend testing various quality criteria and work with the team to make sure when we charter our work we're giving proper attention to areas like usability, security, performance, or supportability - areas we might be ignoring without being aware of it. 
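Slicing session time along those dimensions can be sketched as below. This is a minimal sketch; the session records, field names, and minute values are invented for illustration, and real data would come from your session sheets or SBTM tool.

```python
# Sketch: summarize average session execution time by functional area.
from collections import defaultdict

def average_minutes_by_area(sessions):
    totals = defaultdict(lambda: [0, 0])  # area -> [total minutes, session count]
    for s in sessions:
        totals[s["area"]][0] += s["minutes"]
        totals[s["area"]][1] += 1
    return {area: total / count for area, (total, count) in totals.items()}

sessions = [
    {"area": "checkout", "minutes": 60},
    {"area": "checkout", "minutes": 45},
    {"area": "search",   "minutes": 50},
]
print(average_minutes_by_area(sessions))  # -> {'checkout': 52.5, 'search': 50.0}
```

The same grouping works for any of the slices mentioned above (feature, story, test type, or tester) by swapping the key field.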
<br />Percent of test execution complete<br />A big aspect of session-based test management is that testers have the freedom to add and remove charters as needed to be successful. That means one day you might have 20 more charters to execute until you're finished. Depending on how your testing goes, the next day you might have 25 or 15. My experience tells me that many project managers are uncomfortable with that idea. Most project managers want a predictable, always-increasing measure of percent complete.<br />The measure of percent of test execution complete is the number of charters you've executed, divided by the total number of charters you have for the interval you're measuring. While you likely won't get a nice predictable increase day after day like you might get on projects where all the test design is done upfront, there is value in measuring your percent complete by iteration or release. I don't use percent complete to predict when I'll be done (I use velocity for that), but I will use it to help me remain focused on the end goal. It's one macro-level measure of when our testing might be complete. <br />Detailed session metrics<br />Jon Bach outlines some other session metrics he commonly uses: <br />• Percentage of session time spent setting up for testing <br />• Percentage of session time spent testing <br />• Percentage of session time spent investigating problems <br />Capturing detailed session metrics like those Jon outlines is quite common in the session-based test management community. In his article, Jon details setup, testing, and problem investigation. <br />Test setup measures the time it takes to get ready to run the first test. Test execution and design measures how much time is spent thinking of test conditions and running those tests. 
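These per-session measures reduce to simple percentages of the session's time box; a minimal sketch, where the minute values are invented for illustration:

```python
# Session breakdown: share of a time box spent on setup, on test
# design/execution, and on bug investigation. Minutes are invented.
session = {"setup": 10, "testing": 35, "investigation": 15}

total = sum(session.values())  # a 60-minute session
breakdown = {k: round(100 * v / total) for k, v in session.items()}
print(breakdown)
```

A session that reports 17% setup, 58% testing, and 25% investigation tells a very different story from one that reports 60% setup.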
Bug investigation and reporting looks at the time spent researching identified issues and logging them in your defect-tracking tool.<br />These measures help tell a different story about your testing. For example, knowing how long your testers are spending on setup can be helpful in letting you know when you might need to pay more attention to automating setup tasks, focusing on making data more available, or providing training. And knowing how much testing is done per charter is useful in helping you understand how much coverage you actually got out of a session. If someone ran a 60-minute charter, but only did ten minutes of design and execution, you might take some extra time to ask if they really fulfilled their mission. Did they lose too much time during setup? Or did they get sidetracked investigating a specific issue? <br />Getting visibility into the testing project <br />Once you feel like you have the visibility you need to effectively manage the project, the next step is to figure out what you need to do to successfully integrate session-based test management into your development methodology. That often means figuring out how to use your metrics to convince others you're providing them with the data they need to make good decisions. In the next article, we look at some techniques for integrating session-based test management into some different methodologies.<br /><br />End of document<br /><br />Subject: What Metrics Can Do for You? (posted 2009-09-08)<br />Summary:<br />Measuring activities are vital to the software test process. On this site, there are more than 200 items (articles, tools, templates, etc.) classified under the topic "measurement." But what good are all the bits and pieces of data that you collect? 
In this week's column, veteran software tester Rick Craig outlines some of the practical uses for metrics.<br />Theme:<br />To manage a testing effort, test managers and testers need information that will help them make timely and informed decisions. This information is often called "metrics." I am often asked to provide a client or student with a list of metrics they need to do their job. Unfortunately, such a standard list doesn't exist because the measurement needs of each team and project are usually different. At the very least, most teams will need measures of quality, resources, time, and size to do their job. In this short article, I am going to address some of the things metrics can do for you rather than discussing which metrics or types of metrics you should collect. <br /><br />Provide a Basis for Estimating <br />Without some information to use as a basis of comparison, there can be no estimate--only a guess. Sometimes testers, test managers, and project managers make estimates based upon their experience. These are not necessarily guesses, since most of these practitioners have a reservoir of metrics on past projects stored in their heads. Estimation can often be improved, however, by recording the time, effort, and characteristics of each testing effort to provide a sounder basis for future estimates. Differences in project size and characteristics, software quality, staff skill, etc., will require normalization of the stored information to use as the basis for estimating a new testing effort. <br /><br />Provide a Means of Control/Status Reporting <br />I often joke about projects that are always 90 percent done, but without the use of metrics, the progress report will often be based upon "gut feel" (metrics in the mind?). Testing status can be measured based upon the number or percent of test cases written or executed, requirements tested, modules tested, business functions tested, and others. 
Of course, to be useful, this information will have to be reconciled against the schedule. A word of caution here: all test cases and requirements are not created equal. For example, some test cases may take a very short time to run and others may take much longer; so if you’re measuring against a schedule, these test cases may have to be weighted based on the amount of time each one takes. Similarly, some test cases test more important functions, or more code, than others; therefore, completing 50 percent of the test cases doesn’t necessarily mean that 50 percent of the system has been tested. In that case, the test cases will have to be weighted by their coverage. <br /><br />Identify Risky Areas That Require More Testing <br />Anyone who has ever been a maintenance programmer knows that when they are called upon to fix a problem (especially an emergency!), the problem is often found in a module or function that has already been fixed repeatedly in the past. Without belaboring the cause of this phenomenon in this short article, suffice it to say that identifying those parts of the application that are prone to failure can give the testers insight into areas that require greater care during testing. So by measuring the relative defect density of a module (or function, or piece of code), the tester can focus additional testing on those error-prone components. <br /><br />Provide Meters to Flag Actions <br />Metrics that flag an action are sometimes called meters. Examples include exit criteria, suspension criteria, and criteria that call for reinspection of a piece of code. These meters are established to signal an action that should occur if a threshold is met. For example, some organizations establish entry criteria into the system test group to demonstrate that the application is complete and stable enough to allow the testers to begin testing without repeatedly stopping for major bug fixes. 
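Two of the ideas above, weighting test cases by run time when reporting progress and using a meter that flags an action at a threshold, can be sketched in a few lines of Python; all names, minute values, and the threshold are invented for illustration:

```python
# Percent complete, raw vs. weighted by expected run time, plus a
# simple "meter" that flags when weighted progress falls below a
# threshold. Test cases, minutes, and the threshold are invented.
test_cases = [
    {"name": "login", "minutes": 5, "executed": True},
    {"name": "checkout", "minutes": 45, "executed": False},
    {"name": "search", "minutes": 10, "executed": True},
]

raw_pct = 100 * sum(t["executed"] for t in test_cases) / len(test_cases)
weighted_pct = 100 * sum(
    t["minutes"] for t in test_cases if t["executed"]
) / sum(t["minutes"] for t in test_cases)

print(round(raw_pct), round(weighted_pct))  # 67 25

SUSPEND_THRESHOLD = 50  # hypothetical meter value
meter_tripped = weighted_pct < SUSPEND_THRESHOLD
print("flag action" if meter_tripped else "on track")
```

Counting cases says the effort is two-thirds done; weighting by time says one quarter, which is exactly the gap the caution above warns about.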
<br /><br />Process and Buy-In <br />Metrics can also be used to identify training needs and process improvement opportunities, to establish budget and staffing goals, and to facilitate buy-in. It is almost always easier to get buy-in from staff, colleagues, and management if you can back up your requests with pertinent metrics. At the same time, it is important to ensure that the metrics you collect actually serve a purpose beyond just filling out a chart or graph. Nothing is more frustrating to practitioners than submitting data that no one uses, or worse, that will be used only as a grade card. People who are collecting measurement data and/or whose work is being measured must understand how the data will be used and who will have access to it. <br /><br />Summary <br />This brief survey gives you an idea of the variety of ways metrics can help you do a better job. There must also be vigilance against their misuse. Please add your comments if you want to share more positive uses for metrics, or if you want to share a warning about misuses you have witnessed.<br /><br />End of document<br /><br />Subject: Off-Shore software test automation: A strategic approach to cost and speed effectiveness (Part-2) (posted 2009-09-08)<br /><br />Executive Overview of Software Testing<br />Many mistakenly believe that software testing is a small extension of development, and worse, some assume that testing is performed exclusively by software engineers. Others assume that software testing is something that happens at the end of the development process - after the software is developed, it is then tested and shipped. Others equate quality assurance and software testing. All of these beliefs are incorrect.<br />Software testing is a highly strategic and specialized discipline. 
It is very different from, but related to, software development (just as sales and marketing are very different but related disciplines). At its most basic, software testing is a concerted attempt to break software so that bugs may be identified and fixed before end-users encounter them.<br />Testing uses skills, methodologies, insights, and creativity that are very different from those used by software developers.<br />Software testing is a critically important part of the overall development process.<br />Because of its strategic importance, software testing should be its own organization, separate and distinct from development, with its own budget. This setup can help to enable:<br />• Effective training<br />• Continuous skills and career development<br />• Efficient implementation of testing tools<br />• More effective communication from the testing team to development and management<br /> <br />The software testing process has several main activities. These include:<br />• Designing the software tests<br />• Running the tests<br />• Identifying problems and defects<br />• Reporting to management and development on key metrics<br /> <br />Testing is an iterative process of identifying, fixing, and retesting, as well as reacting to design and code changes. Software testing consists of both manual testing and automated testing. While manual testing should be kept to a minimum, there will always be tasks, such as usability testing, that require it.<br />Manual Testing: Productivity Objective<br />No more than 5% of tests should be executed manually.<br />Software testing is not quality assurance; it is a part of quality assurance. All steps in the development process, from requirements definition to design and development, share in quality as an objective. The role of software testing in the process is to identify defects so that they may be fixed. 
You cannot, however, test quality into a poor design (as the saying goes, “bugs may be tested out, but quality must be built in”).<br />Software testing is also not something that simply happens at the end of the development process. To be effective, software testing needs to be a strategic partner with product marketing and development from the beginning of the development process. Software testing needs to have a clear and full understanding of the goals and objectives of the software under design and development. Having such knowledge helps testers design better tests. Early visibility also helps the testing team develop its test plans and start to design test cases, and it helps to decrease testing time and costs.<br />In addition to being an activity, software testing generates products that can be viewed as strategic assets to an organization. These products include:<br />• Test cases that can consolidate the intellectual property of your team members<br />• Automated tests that can be re-used, becoming assets that reduce costs<br />Testing metrics provide visibility<br />Testing metrics must provide visibility into a software product’s quality. Metrics are only useful if they help to make sound business decisions.<br />Metrics fall into two categories:<br />1. Project management<br />2. Process improvement<br />Some metrics measure the output of your test team, such as:<br />1. Bug reports<br />2. Test cases written or test cases executed<br />Other metrics, such as coverage metrics, give visibility into how much of the software has been tested.<br />Executive Overview of Test Automation<br />Test automation can provide great benefits to the software testing process and improve the quality of the results. The reasons to automate software testing lie in the pitfalls of manual software testing. 
Manual testing:<br />• is slow and costly<br />• does not scale well<br />• is not consistent and repeatable<br />• is difficult to manage<br />While these factors may drive the desire and need to automate testing, it is important to take the right approach to test automation. There are several basic steps to automating testing:<br />• Use the Action Based Testing (ABT) methodology, a keyword-based, object-oriented approach that provides for visibility, reusability, scalability, and maintainability. Visibility, reusability, scalability, and maintainability translate into speed and cost savings.<br />• Choose the right enabling technologies that support the methodology. The tools need to support extensibility and a team-based global test automation framework, with a solid management and communication platform.<br />• Put in place the right people with the proper skills and training in methodology, tools, and domain knowledge (knowledge of the software to be tested, the industry for which the software is intended, and end-user expectations).<br />• Separate test design from test automation so that automation does not dominate test design. Action Based Testing (ABT) creates a hierarchical test development model that allows test engineers (domain experts who may not be skilled in coding) to focus on developing executable tests based on action keywords, while automation engineers (highly skilled technically but who may not be good at developing effective tests) focus on developing the low-level scripts that implement the keyword-based actions used by the test experts. ABT allows an organization to spend more time developing tests and less time actually coding the test cases. 
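The design/automation split that ABT describes can be illustrated with a toy keyword interpreter; this is a generic sketch of keyword-driven testing under invented keywords and test rows, not any vendor's actual tooling:

```python
# Toy keyword-driven test: test engineers write data rows of
# [action, arguments...]; automation engineers implement each action
# once. All keywords and test data are invented for illustration.
log = []

def open_app(name):
    log.append(f"open {name}")

def enter(field, value):
    log.append(f"enter {value} into {field}")

def check(field, expected):
    log.append(f"check {field} == {expected}")

ACTIONS = {"open": open_app, "enter": enter, "check": check}

# The "test" itself is pure data; no scripting skill is needed to write it.
test_rows = [
    ["open", "calculator"],
    ["enter", "display", "2+2"],
    ["check", "result", "4"],
]

for action, *args in test_rows:
    ACTIONS[action](*args)

print(log)
```

The test rows capture domain knowledge, while the action implementations capture automation knowledge, which is exactly the separation of roles described above.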
This speeds up the whole testing process and helps to reduce costs.<br />• Lower costs by using less expensive labor than your local team.<br />Automated Testing: Productivity Objective<br />No more than 5% of the effort surrounding testing should be expended in automating the tests.<br />Jumpstart the process with a pre-trained outsourcing partner that knows more about test automation success than you do, and that has a competent, well-trained staff of software testers, automation engineers, test engineers, test leads, and project managers.<br />The most essential element is methodology. The methodology is the foundation upon which everything else rests. The methodology drives tool selection and the rest of the automation process. The methodology also helps to drive the approach to offshoring the “appropriate” pieces of the testing process.<br /> <br />Executive Overview of Offshore Outsourcing<br />Most companies are convinced of the need to offshore some or all of software testing.<br />Offshoring offers the promise of significant cost savings. 
However, offshoring is more than simply moving existing software testing efforts to an offshore outsource partner.<br />Executives making the decision to offshore testing must understand the possible pitfalls of outsourcing and offshoring, and must plan effective strategies to combat these pitfalls.<br />Executives also must stay focused on the fact that an effective offshored testing effort is based on the strategic integration of methodology, the latest technologies, and an effective global resource strategy that supports the methodology and tools.<br /> <br />Some of the major pitfalls of offshoring include:<br />• Problematic communications due to language and cultural barriers, mismatched or miscommunicated expectations, poor metrics selection, and unresponsiveness.<br />• Insufficient or mismatched skill sets in the software testing organization, such as using entry-level development engineers as software testers, lack of knowledge of the software being tested, and lack of domain knowledge (knowledge in the category of the software being tested).<br />• Management issues due to the lack of a workable test management process and associated methodology.<br />• Vendor problems or vendor infrastructure problems such as poor data bandwidth.<br />• General offshoring risks such as security and protection of intellectual capital.<br />Some suggestions for dealing with these pitfalls include:<br />• Find a trusted partner or build trust in a partner. You need to work with a partner that you know has testing experience, an experienced staff, an understanding of current methodologies, and competent domain knowledge.<br />• Train the test organization. Provide them with knowledge of your application, your expectations, your communication/management platforms, and expected domain expertise. 
Discuss how to recognize and deal with cultural issues.<br />• Adopt a methodology, and tools that support it, to improve testing, defect tracking, automation, and communications management, focusing on excellent and correct methods, ease of distributed team communication, accessibility, and useful measures.<br />• Choose carefully what work goes offshore and what remains “at home”. Often it makes sense to keep user-focused scenario development and business process testing at home, where you have more knowledge of the domain and the user.<br />• Get someone local to manage the offshored test effort. A local lead who is part of your team, who understands the culture and communication nuances of the offshore team, can lead the project, effectively communicate progress and metrics, and help to streamline the process.<br />The most effective way to avoid the pitfalls of test automation and offshoring, and to realize the full time- and cost-saving benefits of test automation, is to implement a strategy of Global Test Automation.<br />Global Test Automation strategically integrates the latest test automation methodologies based on Action Based Testing and the latest testing technologies with an effective and balanced global resource strategy that makes use of both onshore and less expensive offshore resources, on- and offshore leads, and effective team-based management tools and methodologies.<br />The benefits of such an approach are scalability, reusability, visibility, and maintainability that ultimately will allow an organization to achieve both time and cost savings while delivering a quality software product. 
The bottom line is a higher quality product that is delivered faster and more cost-effectively.<br />As you can see in Figure 3 below, Global Test Automation is the strategic integration of:<br />• Speed achieved through an ABT test methodology and test automation technologies for distributed teams<br />• Cost control achieved by using a cost structure based on worldwide resources<br /><br />Choosing a Global Test Automation Partner<br />Selecting the right testing partner to implement this strategy is more than simply selecting a vendor to develop and run test cases. It entails selecting a partner who fosters innovation to improve test productivity, a partner who understands Global Test Automation, a partner who understands that the methodology drives the use of appropriate tools, a partner who has an effective management and communication strategy, and a partner who makes use of global resources to provide the appropriate mix of on- and offshore resources to drive down costs.<br />The process of test automation can be jump-started with the selection of a pre-trained strategic partner that knows more about test automation success than you do, and that has a competent, well-trained staff of software testers, test leads, and an in-place global resourcing strategy.<br /><br />End of document<br /><br />Subject: Off-Shore software test automation: A strategic approach to cost and speed effectiveness (Part-1) (posted 2009-09-08)<br />Author: A white paper from LogiGear<br />Summary:<br />The following white paper presents an executive overview of an innovative approach to integrating global resourcing and the latest test automation methodologies and tools.<br />This approach can effectively help any software development organization meet its goals of time to market, cost containment, and quality.<br />To provide 
background for this discussion, the paper also summarizes the software development process, the software testing process, global resourcing, and the automation of software testing based on a structured approach and methodology known as Action Based Testing (ABT).<br />Theme<br />Introduction<br />Two industry trends, automating software testing and moving software testing offshore, hold out the promise of providing both cost and time savings. Bringing software to market faster, while reducing costs and helping the bottom line, are goals that are very high on any software development organization’s wish list.<br />Unfortunately, many of these efforts yield results that fall far short of expectations. The trade and popular press are full of stories of failed offshore efforts. In addition, many test automation efforts not only fail to yield time or cost savings, but in fact result in quite the opposite! A coordinated management effort, with a good understanding of the planning required, is necessary for success in both of these areas.<br />In fact, the strategic integration of the latest test automation methodologies and technologies with global resource strategies will not only improve upon both efforts, it will allow an organization to fully capitalize on the speed and cost-saving potential of offshoring and automation.<br />The competing goals of delivering a quality software product, reducing costs, and meeting time-to-market targets often lead management to take a tactical approach, focusing entirely on one goal or approach while neglecting to consider the impact of their decisions on other parts of the process. They may simply focus on one dimension such as automating testing to improve time to market, or offshoring testing to drive down costs.<br />Taking a tactical or one-dimensional approach can very often lead to undesirable or unforeseen results. Management needs to consider the interplay between quality, cost, and time-to-market goals. 
They need to take a more strategic approach and develop an overall strategy and methodologies, and then select tools and partners.<br />An effective testing automation effort requires:<br />• A full understanding and appreciation of the software development process<br />• An understanding of software testing as a strategic effort that can provide cost benefits and speed benefits, as well as providing management with critically important visibility into the quality of their software under development<br />• An understanding that software testing is a discipline separate from development that should have its own budget and funding<br />• An understanding of the importance of a structured test automation approach that is based on a methodology known as Action-Based Testing (ABT), which creates a hierarchical test development model that can improve the quality and speed of testing while reducing the costs of testing<br />• An appreciation of the strategic importance of global resourcing strategies and best practices to further drive down costs and impact the bottom line<br />This white paper will:<br />• Provide an executive overview of the software development process, and the proper fit of the testing and automated testing processes, to help executive management achieve a better understanding so that they can make more informed and effective decisions on test automation and global resourcing.<br />• Discuss the strategic integration of the latest test automation methodologies and technologies, such as a structured approach based on a methodology known as Action-Based Testing (ABT), with global resource strategies to fully capitalize on the speed, cost advantages, and best practices in automation and global sourcing.<br />• Present the importance of selecting an experienced strategic partner such as LogiGear, who can provide an integrated testing solution based on ABT and a global resource strategy that will effectively decrease testing time 
and cut costs while meeting quality goals.<br />Executive Overview of the Software Development Process<br />At its most basic, there are three broad categories of efforts in any software product development life cycle. These are:<br />1. Specification or requirements definition<br />2. Development<br />3. Testing<br />This simple list, and the prevailing view of the process, implies a linear relationship between each of the components – the process moves from specification to development to testing, as illustrated in Figure 1.<br />Figure 1: Typical view of the development process (Specification → Development → Test)<br /><br />This is an overly simplistic view of the process of developing software that focuses on the tasks while ignoring the strategic interplay between the three parts of the process. It is, unfortunately, how the process may be implemented in many companies. There are several problems with this task-focused view of the process, including:<br />• It assumes that the specification delivered will not change<br />• It assumes that engineering will deliver software to testing on time and that there will not be any changes<br />• It involves testing far too late in the process<br />It is essential to understand that each step in the overall product development life cycle is in itself its own discipline and process. Further, each process is substantially enhanced if it is informed by, and continuously involved with, the other steps in the process. In reality, the steps in the development life cycle should both overlap each other and provide feedback to each other. 
For example:<br />• As product marketing/planning starts to develop a requirements document (hopefully with extensive customer and user input), engineering can start to investigate alternative ways of implementation, develop initial implementation estimates, identify critical paths, and start to work on translating requirements into a workable engineering plan.<br />• Engineering can also provide feedback to those developing the requirements on feasibility, risks, and time to implement certain features and functionality that may trigger alterations to the requirements specification.<br />• Testing involvement at these early stages provides visibility into what will need to be tested so that a test plan can start to be developed and test cases planned.<br />• Testing can design and automate test cases in advance so they are ready when development starts to deliver code to be tested.<br />• Testing can also provide feedback, time estimates, and risk assessments, and identify critical areas or provide feedback on the merits of different ways of implementing the requirements. They can even ask development to put hooks into the software that will make testing easier and faster.<br />• All will be able to more effectively deal with any changes or alterations, and there will be fewer project-slowing “surprises”.<br />Figure 2 illustrates the more parallel nature of the process with the invaluable feedback loops. The strategic integration of the three disciplines into a more streamlined and cooperative process helps to improve quality, improve delivery time, and reduce costs.<br />Figure 2: Overlapping phases with feedback loops (Specification, Development, and Test proceed largely in parallel, with Test starting early)<br /><br />How Quality Fits<br />Many incorrectly associate quality assurance with software testing. 
It is possible to do an excellent job of software testing and still deliver a poor quality product! Poor requirements definitions, or poor implementations, can lead to a product with unwanted or unusable features. Testing may verify that such a product is relatively reliable and works as intended, but nobody wants it or wants to use it. Such a “working” product will be seen to be of poor quality in the eyes of users and the marketplace. Delivering a product like this can have a very negative impact on an organization, driving up costs, and potentially impacting current or future revenue streams.<br />It is important to understand that quality entails efforts at every step in the development life cycle. It is also important to understand the costs of quality throughout the process, and that costs differ depending on where in the process they are incurred.<br />Quality cost, or the total cost to deliver a quality product, is simply the sum total of all quality related costs incurred at every phase of a project. There are four broad categories of quality costs:<br />1. Prevention costs – Prevention costs represent all costs spent to prevent software, documentation, and other product related errors. These costs include tasks such as staff training, requirements analysis, early prototyping, defensive programming, usability analysis, clarity of specification, and more. Prevention quality costs are the most cost-effective quality dollars that a company can spend. In essence, it costs much less to avoid errors up front than it does to identify them later, determine how to fix them, and develop new code to rectify them.<br /> <br />2. Appraisal costs – Appraisal costs represent all of the money that is spent on testing activities, which are any and all activities associated with searching for errors in software and associated materials. It includes the testing done by developers themselves, as well as the testing performed by the software testing organization. 
It is one of the largest costs associated with quality. However, it is far less expensive to find the errors early and fix the problems prior to software release.<br />3. Internal failure costs – Internal failure costs represent all of the costs associated with fixing bugs found prior to release. Again, whatever these costs, it is much less expensive and disruptive to fix problems internally prior to product release than it is after the product is released to customers.<br />4. External failure costs – External failure costs are all of those costs associated with defects and errors that are discovered after the product is released. These costs include all of the direct costs of identifying and fixing the problems, usually under high pressure, as well as all of the sales and marketing costs associated with damage control. There are also intangible costs such as loss of goodwill, low customer satisfaction, and impact on future sales. External failure costs are typically much higher than internal failure costs. External failures can also negatively impact an organization’s reputation, putting at risk current and future revenue streams. 
External failures present the very real risk of negatively impacting an organization’s bottom-line profitability.<br />The challenge to management is to do a cost-benefit analysis to achieve an optimum balance between prevention and appraisal costs to help minimize failure costs, reduce overall costs, and positively impact the bottom line.<br /> <br /><br /><br /><br />To be Continued in Part-2Jacksonhttp://www.blogger.com/profile/06729565072784015826noreply@blogger.com0tag:blogger.com,1999:blog-4803548388510893974.post-1475326335647274072009-04-06T22:51:00.001+05:302009-04-06T22:51:29.575+05:30Why the quality assurance department should be involved in testingBy John Scarpino<br /> <br /><br /> <br /><br />In this sluggish economy, corporate software users are price-conscious, of course, but they also want the best product for the lowest price. So software companies are concentrating on making sure their products excel in performance, security and longevity. This tip's advice and examples show some lessons I've learned about achieving excellence in those areas by beefing up quality assurance (QA). <br /><br />In recessions or boom times, any organization purchasing and/or implementing a new tool should involve the quality assurance department in the testing process. Doing so improves the assessment and eliminates bias from any one group. <br /><br />Tools that are purchased for the security and performance environments are usually licensed to a specific department instead of the company at large (even though other departments within the corporation can use it). This is a technicality that can be abused, because a software license gives an entity all the information it needs to install and use the tool and the power to implement the software as it sees fit. Sometimes the licensed department may operate independently of other departments that would normally be involved in the software development lifecycle (SDLC). 
When something like this happens, the QA department is officially out of the picture -- which does not bode well for the future of the product.<br /><br />Believe it or not, I've witnessed a Web application group validating and testing its own product with software testing tools and without informing QA that it was doing so. I've also seen an infrastructure group hoard performance information from QA and actually change the testing results because the sole validation and verification came from within the group. <br /><br />Why does this present a problem? There is no objectivity involved in the testing process, because it is all conducted internally with a biased group. The involvement of QA is absolutely imperative during the testing process, because only the QA department has a variety of resources needed to effectively approach different situations and evaluate them. <br /><br />Really, the issue isn't so much the fact that the Web or infrastructure groups conducted their own testing of security and performance; rather, it's that they were the only groups that conducted testing. Just because the Web and infrastructure groups are the only ones using the tool does not mean they also own the software and license, nor is it right for them to test their own definition of "quality." This increases the chance of information being withheld from other groups that need it. It's important that departments do not become "information silos," because QA is ultimately responsible for the outcome of both process and product.<br /><br />I believe that the best results come when a group tests its own product as a whole unit, and then the QA department tests it again to uphold the product's integrity through objectivity. Moreover, the QA team should use nontraditional testing approaches, such as testing "around" the product to other functionalities that may be affected by the product's implementation, just to cover all the bases. 
Then the verification and results can be centrally located with all of the other functional and nonfunctional tests for a given release or project. <br /><br />A good way to centralize testing information is to create a data library for the results, so that every test during the SDLC is documented and accessible to everyone. It should support software requirements by including functional and nonfunctional test results, as well as security, performance and infrastructure implementation or installation updates. <br /><br />Also, the principles of quality assurance must be woven throughout the project. I like to arrange the requirements and the test plans by displaying the high-level details of each document in a test management tool or shared network location, covering the functional, nonfunctional, security and performance requirements and test plans with both positive and negative approaches. Then I'll create another folder within the test management system or shared network location for verifications, which contains all phases of my testing along with each phase's results. <br /><br />Whenever a defect occurs, I note the phase during which the defect took place, and which test plan was created due to that defect. By encompassing security testing and performance testing information in the same place, everything is easy to find and navigate.Jacksonhttp://www.blogger.com/profile/06729565072784015826noreply@blogger.com0tag:blogger.com,1999:blog-4803548388510893974.post-9027769963854115242009-04-06T22:50:00.001+05:302009-04-06T22:50:34.345+05:30Improved software design with test-driven development (TDD)Theme:-<br />If it's difficult to get software developers to write unit tests, imagine how difficult it can be to get them to write the unit test cases before they write a line of code. <br />That's the principle behind test-driven development (TDD), a practice with roots in Extreme Programming (XP). 
According to advocates of TDD, the payoff can be big -- simpler and better designed code that delivers business value and has fewer defects. <br />However, TDD can also be a difficult practice to learn, and it requires a significant change in mindset on the part of the developer. As a result, adoption is not yet widespread and is primarily on an ad hoc basis, according to industry observers.<br />"I think test-driven development is an amazing practice, but I find it very uncommon," said Carey Schwaber, an analyst at Forrester Research Inc. "It's difficult to get people to do it. Unit testing is more common, but a lot of developers say they're doing unit testing but they're not really; it's either 'testing by developers' or they're exaggerating how much unit testing they do. Test-driven development is where you write the unit test case before you write the code, and writing the test case first is a lot harder. There are a lot of benefits, but it requires shifting your mindset." <br />"Thinking of what test to write will force you to improve your code," added James Shore, consultant, co-author of The Art of Agile Development and signatory number 10 to the Agile Manifesto. <br />TDD "helps define the expectations of the product, forcing engineers to do everything they commit to do and nothing more," said Christophe Louvion, CTO of Gorilla Nation, an online advertising sales representation company in Los Angeles. "For me, what it does is define better what the product is, removing a lot of the QA mentality like test later." <br />If engineering has ownership of TDD, he said, there's more time for QA to do exploratory testing and manual testing. <br />TDD for developers, not testers<br />Test-driven development, despite the "test" nomenclature, is a technique for developers, not testers, Shore said. <br />"It's used to ensure that what you're developing is what you intended to develop," he said. 
But "of all the XP practices becoming mainstream, this has a big learning curve." <br />Even for agile shops, adoption is still low, according to Schwaber.<br />"I would say a good number of agile shops are not doing test-driven development. In general, people who adopt agile pick up more practices over time. They eventually get to test-driven development, but it takes a while," she said. <br />How TDD works<br />According to Shore, TDD is a rapid cycle of very short steps, where the developer writes a small amount of test code, typically five lines or fewer, sees the unit test fail, writes a small amount of production code, sees the test pass. <br />"You're doing it to prove that what you intended to write is what you did write," he explained. <br />"There are a lot of advantages to having complete TDD coverage," Shore continued. "You can have a lot of confidence when you're making changes to code because the tests catch any mistakes." <br />Ryan Martens, CTO and founder of Rally Software, a developer of agile lifecycle management software based in Boulder, Colo., said he views TDD as a continuum, starting with writing unit tests prior to coding. The next step, he said, is acceptance test-driven development. <br />"Acceptance test-driven development focuses not on components, but it's an acceptance test for the story or piece of functionality," he said. <br />The third part is continuous integration and system testing, Martens said. Here is where a "stop-the-line" concept comes into play. If the integrated nightly build breaks, the team has to get it working again in the morning. "You're moving to a zero-defect mentality," he explained. <br />But just because a developer does unit testing or the team runs nightly builds doesn't mean they're getting the full benefits of TDD or even agile methods, Martens said. <br />"We find organizations where the developers don't write the unit tests -- the offshore teams write the unit tests. 
This is defeating the purpose [of TDD] because it doesn't allow you to think about simpler design," he said. However, "it allows you to refactor faster and get the unit test up." <br />And, Martens added, "if a highly technical team tends to be very focused on getting a nightly build process, that doesn't mean they're getting the benefits of TDD -- to think in terms of test and making a simpler design. It also doesn't mean they've picked up agile or done acceptance testing." <br />While unit testing is becoming more common if an organization has a language and tools that support it, Martens said, "it's pretty easy to write really dumb unit tests." And while most organizations say they run unit tests, "most don't have coverage above 30%. It's mostly still ad hoc; that's where the industry is," he said. <br />Getting started with TDD<br />Shore said organizations that contact him about TDD typically have an increasing testing burden as applications continue to grow more complicated, so they're looking to move to shorter cycles, such as those used in Scrum. <br />"It's risky to go for a year of development without seeing delivered software. If they're still throwing it over the wall to testers, the testers are finding the test burden going up," he said. "As the number of features increases, regression testing increases. When it becomes a problem, they call me. They want to deliver software without a large regression burden." <br />To get going with TDD, Shore recommends starting with new development and starting something small in order to get a win. <br />"TDD is for writing code, not modifying. More than anything, test-driven development gives you a rhythm, a smooth flow of writing test, code, refactoring," he said. "It should be natural, but it's hard to get there; thinking of what code to write and stepping back to how you can test it. Choose a simple problem that doesn't involve a UI or a database." 
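Shore's test-first rhythm can be sketched with a small, hypothetical example. The function `word_count` and its behavior are illustrative assumptions, not something from the article; the point is the order of work. Using Python's built-in unittest module, the test class below would be written first, run to watch it fail, and only then would just enough production code be added to make it pass:

```python
import unittest

# Hypothetical example of the red/green rhythm: word_count did not exist
# when the tests below were written. The tests were run first (and failed),
# then the function was written with just enough code to make them pass.

def word_count(text):
    """Return the number of whitespace-separated words in text."""
    return len(text.split())

class WordCountTest(unittest.TestCase):
    def test_simple_sentence(self):
        self.assertEqual(word_count("to be or not to be"), 6)

    def test_empty_string(self):
        self.assertEqual(word_count(""), 0)

if __name__ == "__main__":
    unittest.main()
```

Note that the problem deliberately involves no UI and no database, in line with Shore's advice above about choosing a simple starting point.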
<br />For Gorilla Nation's Louvion, who is in the process of changing the shop's methodology to Scrum and uses Rally's agile lifecycle management software, adopting TDD raises a number of engineering-practice problems that have to be resolved. <br />"To do TDD in my mind, you must have continuous integration," he said. "My former team [at a previous company] was very talented in doing test-driven development. When the story was too fuzzy and they wouldn't know what to test, it helped them define it faster. From my experience [TDD] is helping to better define the scope of stories, which is a driver for better quality." <br />Louvion said he is implementing TDD with just one team to start and to make the case that TDD works. "Also, it is more involved in terms of knowledge, and a more profound change. It's a more organic implementation of TDD," he said. <br />Be prepared to go slow at first when starting out with TDD and to go fast later, practitioners warn. <br />"It's certainly a change process," Martens said. "People can try to bite off too much too fast, but it doesn't have to radically slow things down. But if you're learning new stuff, it will have to take in some overhead." <br />However, TDD is critical to releasing more often and quickly, he added. "In our team we run eight-week release cycles. Over 27 releases have stopped in the middle and shipped. You never have that option if you run in waterfall, with testing on the back side." <br />For a new system, Shore said he finds TDD takes about two months and 200 tests on average. <br />"I can't say that's true for everybody, but it takes that amount of time [typically] to learn and create a test helper library and understand how to modify design to make testing easy," Shore said. "Once you've gotten over the hump, and you've got libraries and support in place, it starts being very fast. This is partly because I'm not debugging anymore, and I get into a rhythm where I constantly know where the next step is." 
<br />However, Shore added, adding tests to a legacy code base can take much longer, because it's a lot harder to do TDD on legacy systems; for that reason, he recommends learning on a green field. <br />Another recommendation is to not force TDD on developers, Martens said. <br />"If you roll out TDD, the last thing you want to do is to make it mandatory. You will get people writing stupid unit tests that are hardly tests, and you will get subversive behavior," he said. <br />Martens suggested letting the team agree to try TDD, with the ability to stop if they find it's not working. <br />"It's more about process change and testing forward in the cycle and making it the responsibility of developers," he said. "In organizations where there's a hard wall between QA and development, there's a lot of scary stuff in there [with TDD]. For organizations like that, the best approach is in an agile, lightweight fashion." <br />That means asking the team what they think and if they want training. It's more of a grassroots approach than a top-down approach, he said. <br />Forrester's Schwaber said she has seen ad hoc adoption by individual developers. <br />"Developers can do TDD even though their colleagues are not. It makes it harder to drive [through the organization], but it really is individual to the developer." <br />Question of scale<br />With any technique, there's always a question around the ability to scale for large projects, but practitioners said that's not an issue with TDD. <br />"I've never found any limit with TDD in terms of scale," Shore said. "It operates at the level of individual class, so it scales just as well as software scales -- both perfectly and not very well, depending on what you're talking about. It should be as much a part of the routine as compiling code or typing on a keyboard. Those things become slower as a system gets bigger." <br />It is the same thing with TDD, he said. 
"It's not test-driven development slowing you down; it's the overall size of the project." <br />Martens agreed: "I don't think there are any scaling issues. It's good for small projects, and it's almost required for big projects. When all teams are on the same iteration release calendars, the teams have dependencies. If there are defects you introduce that break the build, other people can't do their stuff and it slows the whole program down." <br />Practitioners are quick to point out, however, that TDD is no panacea for quality. <br />"You can create good software quality without test-driven development, and you can create bad software with test-driven development," Louvion said. <br />Shore summed it up this way: "It won't make a poor designer magically great, but it will help competent designers design better."Jacksonhttp://www.blogger.com/profile/06729565072784015826noreply@blogger.com0tag:blogger.com,1999:blog-4803548388510893974.post-6420504388067771812009-04-06T22:47:00.000+05:302009-04-06T22:49:25.219+05:30Barriers remain for test-driven developmentTheme:-<br />Test-driven development (TDD) is a little like saving for retirement -- you know your quality of life will be better in the long run if you do it, but you may feel you can't afford to put that money aside right now, or you don't have time to think about it. <br /><br />At a recent roundtable on TDD hosted by Stelligent Inc., a consulting firm in Reston, Va., attendees were surveyed on this issue, and the results were "surprising," according to Burke Cox, Stelligent CEO. Among the CTOs and IT and project managers in attendance, 84% do not currently practice TDD, and 79% said they do not measure code coverage for development projects. 
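To make the code-coverage gap from the survey concrete, here is a rough sketch of what measuring line coverage looks like. The function `classify` is a hypothetical example, not from the roundtable, and real teams would typically use a dedicated tool such as coverage.py; the standard-library trace module shown here illustrates the same idea:

```python
import trace

# Hypothetical function under test: a test suite that never passes a
# negative number will never execute the "negative" branch, and a
# coverage report makes that gap visible.
def classify(n):
    if n < 0:
        return "negative"
    return "non-negative"

tracer = trace.Trace(count=True, trace=False)
tracer.runfunc(classify, 5)   # the only "test": exercises one branch

# counts maps (filename, line_number) -> execution count; lines of
# classify() that never appear in it were not covered by the test.
executed = {line for (_, line) in tracer.results().counts}
```

Here the `return "negative"` line never shows up in the executed set, which is exactly the kind of blind spot the surveyed teams never measure.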
<br />With the practice of test-driven development, developers write automated unit tests defining the requirements of the code before they write the code itself. TDD has its roots in Extreme Programming (XP), although it is not limited to agile development, Cox said. <br />The two main barriers to TDD are cost and culture, according to Cox. <br />"First, you have to realize the value of testing and invest dollars upfront," he said. "At a micro level it costs more to write test and code than just to write code. But if you continue for five weeks, then the person writing test and code will be much more productive than the person just writing code, because you're constantly testing and refactoring. You will deliver faster and it will be cheaper [in the long run], but management has to understand that." <br />Roundtable attendee Bobby Pantall, lead technology consultant at CC Pace Systems Inc., a business and technology consulting company in Fairfax, Va., agrees. <br />"Organizations haven't been sufficiently informed of the business benefits of TDD," he said. "In our experience, there is a popular misconception that it's wasteful to spend so much time writing tests instead of pure functionality. However, over the lifetime of an application, the majority of time is spent in maintenance mode. The benefits of TDD are quickly realized as IT can respond to requirement changes and enhancements more rapidly and with predictable results." <br />Developer buy-in needed<br />Developers also have to understand the value of test, Cox said. <br />"Developers cross their arms and say, 'I write code, not test,'" he said. "Test historically is second-class citizenship; it doesn't get the kinds of responsibility and respect developers or architects get. To change this attitude, you need a change of commitment from business people, who own the line of business. They have to insist teams do it [TDD]." 
<br />Those who practice TDD cite benefits such as a growing log of regression tests, better maintainability, reduction in development time and fewer defects, simpler and more extensible system design, improved code quality before reaching the testing/QA phase, and faster feedback, according to the survey. <br />Roundtable attendee Luke Majewski, director of application architecture at Intalgent, a custom software developer in Reston, Va., said TDD forces you to think about the testability of your code beforehand. <br /> <br />Problems with test-driven development<br />At last week's Better Software Agile Development Practices conference, James Coplien, senior Agile coach, software developer and systems architect at Nordija, spoke about the problems of using test-driven development (TDD). <br /><br />The biggest issue, he said, is that TDD creates a procedural architecture rather than an object-oriented architecture. And a procedural architecture weakens an object-oriented architecture and "in time destroys usability and maintainability." <br /><br />Coplien said development groups should instead use a lightweight, upfront architecture. That will help you avoid architecture rot, reduce maintenance cost and reduce usability problems. <br /><br />He also said to remember to use a system testing program driven by use cases.<br /><br />--Reported by Michelle Davidson, Site Editor<br /><br /> <br /> <br />"Even while you're designing the architecture you get to think about, 'If I use a framework in this manner it might not be testable.' If something's not testable, it's difficult to code. And once you start writing code that's not testable, it makes it impossible to make changes in the future, and finding bugs is more difficult," he said.<br />CC Pace practices TDD extensively, Pantall said. <br />"Before we implement any functionality, we write an automated test that formalizes the contract of the requirement. 
While we currently do not use any tools to measure code coverage, we are confident that 100% of the functionality is covered by tests," he said. "TDD by itself may not guarantee performance improvement, but automated testing of any sort, TDD or otherwise, will result in significant improvements in quality. TDD tends to improve code coverage, which translates to better quality." <br />TDD also improves maintainability, Pantall said. <br />"Supporting our code with automated tests allows us to perform regression testing every time changes are made," he said. "Ideally, every time we check in our code to source control, an automated build occurs which runs through our entire test suite. If the change broke something else, we know right away." <br />Pantall also said that automated tests give developers the confidence to make large-scale refactoring or functionality changes to legacy code. <br />"We can immediately find out what functionality our changes may have affected. This does not obviate the need for dedicated system testers, but it can significantly reduce the burden on them," he said. <br />As with any methodology, developers sometimes adapt TDD to their needs. "I do not write tests first; I'm a dinosaur," said David Medinets, roundtable attendee and president of Affy Agile Advice and Coding, in Fairfax, Va. "I've been doing this for a long time; it's hard to change the way your brain thinks. I use test and development to guide how I'm writing code, so I write tests as I code." <br />According to Majewski, "By our definition of test-driven development, we write code that's testable; we write code, then we test. If something is checked into our continuous integration server, it needs to have a test, with very few exceptions." <br />While Stelligent's Cox said the best practice is to write the test before code, "we understand the challenges of companies adopting that more radical approach. At the least, have tests associated with any code that is changing. 
Whether you write tests before or after coding is a question of internal debate."<br />Medinets said that while he pushes his clients to write unit tests and builds the time to write them into his estimates, it's harder to build code coverage into the cost structure. "Typically I'd say 25% of the projects I work on do code coverage; they have budget and time constraints," he said.<br />In the end, Majewski said, seeing is believing. <br />"The common thread when we have this discussion with other developers is their companies don't do it because it doesn't make sense financially. It isn't until you complete a project fully, with good test coverage, that you really appreciate what it does," he said. "There's a level of disbelief from managers that test coverage will help. People need to give pushback to managers and give them proof that test-driven development works."Jacksonhttp://www.blogger.com/profile/06729565072784015826noreply@blogger.com0