
Thursday, September 23, 2010

When and Why to use Descriptive programming?

Below are some of the situations when Descriptive Programming can be considered useful:
1.      The objects in the application are dynamic in nature and need special handling to be identified. The best example is clicking a link whose text changes according to the logged-in user, e.g. “Logout <>”.
2.      When the Object Repository is growing too large due to the number of objects being added. If the size of the Object Repository increases too much, it degrades QTP’s performance when recognizing an object.
3.      When you don’t want to use the Object Repository at all. The first question would be: why not use the Object Repository? The following scenarios help answer that question.

Scenario 1: Suppose we have a web application that has not been developed yet. QTP needs the application to be up in order to record a script and add its objects to the repository, which would mean waiting for the application to be deployed before we could start creating QTP scripts. But if we know the descriptions of the objects that will be created, we can still start off with writing the test scripts.

Scenario 2: Suppose an application has 3 navigation buttons on every page: “Cancel”, “Back” and “Next”. Recording actions on these buttons would add 3 objects per page to the repository. For a 10-page flow this would mean 30 objects, which could have been represented by just 3 objects. So instead of adding those 30 objects to the repository, we can write 3 descriptions for the objects and use them on any page.
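Scenario 2 can be sketched in QTP’s VBScript using a Description object (the browser and page property values below are placeholders, not taken from any real application):

```vbscript
' Reusable description for the "Next" button; "Back" and "Cancel"
' are built the same way by changing the "name" property value.
Set objNextBtn = Description.Create()
objNextBtn("micclass").Value = "WebButton"
objNextBtn("name").Value = "Next"

' One description now works on every page of the 10-page flow:
Browser("title:=.*").Page("title:=.*").WebButton(objNextBtn).Click
```

Because the description matches by class and name rather than by a recorded, page-specific entry, the same three descriptions serve all 10 pages.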

4.      A modification to a test case is needed but its Object Repository is read-only or shared, i.e. changes may affect other scripts as well.
5.      When you want to take the same action on many similar objects, e.g. suppose we have 20 textboxes on a page whose names are txt_1, txt_2, txt_3 and so on. Adding all 20 to the Object Repository would not be a good programming approach.
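Point 5 can be handled with a single descriptive-programming call inside a loop — a minimal sketch, assuming the textboxes are web edit fields (browser and page properties are illustrative):

```vbscript
' Set a value in each of the textboxes txt_1 .. txt_20 without
' adding any of them to the Object Repository.
For i = 1 To 20
    Browser("title:=.*").Page("title:=.*").WebEdit("name:=txt_" & i).Set "Value " & i
Next
```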

What is Descriptive Programming

Whenever QTP records an action on an object in an application, it adds a description of how to recognize that object to a repository of objects called the Object Repository. Normally, QTP cannot take action on an object unless its description is in the Object Repository; descriptive programming provides a way to perform actions on objects that are not in the Object Repository.
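Descriptive programming can be written in two styles; a minimal sketch against a hypothetical login page (all object names and property values here are illustrative):

```vbscript
' Style 1: inline description - "property:=value" pairs are passed
' directly in the test object call.
Browser("title:=Login").Page("title:=Login").WebEdit("name:=username").Set "jdoe"

' Style 2: a Description object - useful when several properties are
' needed to identify the object uniquely.
Set objUserField = Description.Create()
objUserField("micclass").Value = "WebEdit"
objUserField("name").Value = "username"
Browser("title:=Login").Page("title:=Login").WebEdit(objUserField).Set "jdoe"
```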

What is e-Commerce


For the purposes of this paper, e-commerce (also known as e-business) is defined as the software and business processes required to allow businesses to operate solely or primarily using digital data flows.  E-commerce is often associated with web technology and is commonly transacted via web portals, but e-commerce is much more than the provision of a web page as the customer interface.
    The creation of integrated business processes (Enterprise Resource Planning), the integration of collections of disparate software applications, each designed to facilitate a different aspect of the business (Enterprise Application Integration), the extension of software and business processes to embrace transactions with suppliers’ systems (Supply Chain Management), the need for increased security for transactions over public networks, and the potential volume demand at e-commerce sites all present new and unique challenges to the e-commerce development community.  These challenges will require novel and innovative solutions, which will need thorough testing before they are allowed to go live.
Why is testing important in the e-commerce environment?
The first and primary reason is that e-commerce is, by its very nature, business critical.  In the third quarter of 1998, Dell’s e-commerce site exceeded $10 million in daily sales; the E*Trade site currently exceeds 52,000 transactions per day, giving a one-day failure a cost of around $800,000; and the travel industry in Europe will be worth $2 billion by 2002, according to Datamonitor.  The immediacy of the customer, with its implied promise of rapid delivery at competitive prices, and the sheer accessibility of the web all combine to create potentially massive demand on web sites and portals.


The second reason is that e-commerce is a massive and growing marketplace, but one which requires large up-front investment to enter successfully.  There are already 5.8 million web sites worldwide, 2.5 million of which have been created this year (1999).  The International Data Corporation (IDC) estimates that the e-commerce market will grow from over $5 billion in 1998 to $1 trillion in 2003.  The average cost of developing an e-commerce site is $1 million, says the Gartner Group, and it will increase by 25% annually over the next 2 years.
The third reason is because the history of e-commerce development has been littered with expensive failures, at least some of which could have been avoided by better testing before the site was opened to the general public.  (In e-commerce terms, ‘the site’ means the entire architecture from suppliers through back-end systems and front-end systems to the customers; it typically includes Intranet, Internet and extranet applications as well as legacy systems and third party middleware).

Ten Key Principles of Effective E-Commerce Testing


Over the decades since Information Technology (IT) became a major factor in business life, problems and challenges such as those now faced by the e-commerce community have been met and solved.  Key testing principles have emerged and these can be successfully applied to the e-commerce situation.
Principle 1.  Testing is a risk management process:  The most important lesson we have learned about software testing is that it is one of the best mechanisms we have for managing the risk to businesses of unsuccessful IT applications.  Effective testing adopts a strategy that is tailored to the type of application or service being tested, the business value of the application or service, and the risks that would accompany its failure.  The detailed planning of the testing and the design of the tests can then be shaped by the strategy into a business-focused activity that adds real business value and provides some objective assessment of risk at each stage of the development process.  Plans should include measures of risk and value and incorporate testing and other quality-related activities that ensure development is properly focused on achieving maximum value with minimum risk.  Real projects may not achieve everything that is planned, but the metrics will at least enable us to decide whether it would be wise to release an application for live use.
Principle 2.  Know the value of the applications being tested:  To manage risk effectively, we must know the business value of success as well as the cost of failure.  The business community must be involved in setting the values on which the risk assessment can be based, and committed to delivering an agreed level of quality.

Principle 3.  Set clear testing objectives and criteria for successful completion (including test coverage measures):  When testing an e-commerce site, it would be very easy for the testing to degenerate into surfing, due to the ease of searching related sites or another totally unrelated site.  This is why the test programme must be properly planned, with test scripts giving precise instructions and expected results.  There will also need to be some cross-referencing back to the requirements and objectives, so that some assessment can be made of how many of the requirements have been tested at any given time.  Criteria for successful completion are based on delivering enough business value, testing enough of the requirements to be confident of the most important behaviour of the site, and minimising the risk of a significant failure.  These criteria – which should be agreed with the business community - give us the critical evidence that we need in deciding readiness to make the site accessible to customers.
Principle 4.  Create an effective test environment:  It would be very expensive to create a completely representative test environment for e-commerce, given the variety of platforms and the use of the Internet as a communications medium.  Cross-platform testing is, naturally, an important part of testing any multi-platform software application.  In the case of e-commerce, the term ‘cross-platform’ must also extend to include ‘cross-browser’.  In order to ensure that a site loads and functions properly from all supported platforms, as much stress and load testing as possible should be performed.  As an absolute minimum, several people should be able to log into the site and access it concurrently, from a mixture of the browsers and platforms supported.  The goal of stress and load testing, however, is to subject the site to representative usage levels.  It would, therefore, be beneficial to use automated tools, such as Segue’s SilkPerformer or Mercury Interactive’s LoadRunner, for performance/load testing.
Principle 5.  Test as early as possible in the development cycle:  It is already well understood and accepted in the software engineering community that the earlier faults are detected, the cheaper the cost of rectification.  In the case of an e-commerce site, a fault found after shipping will have been detected as a failure of the site by the marketplace, which is potentially as large as the number of Internet users.  This has the added complication of loss of interest and possibly the loss of customer loyalty, as well as the immediate cost of fixing the fault.  The fact that e-commerce development is rapid and often based on changing requirements makes early testing difficult, but testing strategies have been developed by the RAD community, and these can be mobilised for support.   Perhaps the most important idea in RAD is the joint development team, allowing users to interact with the developers and validate product behaviour continuously from the beginning of the development process.  RAD utilises product prototypes, developed in a series of strictly controlled ‘timeboxes’ – fixed periods of time during which the prototype can be developed and tested – to ensure that product development does not drift from its original objectives.  This style of web development makes testing an integral part of the development process and enhances risk management throughout the development cycle.
Principle 6.  User Acceptance Testing (UAT):  The client or ultimate owner of the e-commerce site should perform field testing and acceptance testing, with involvement from the provider where needed, at the end of the development process.  Even if RAD is used with its continuous user testing approach, there are some attributes of an e-commerce site that will not be easy (or even possible, in some cases) to validate in this way.  Some form of final testing that can address issues such as performance and security needs to be included as a final confirmation that the site will perform well with typical user interactions.  Where RAD is not used, the scope of the provider’s internal testing coverage and user acceptance testing coverage should be defined early in the project development lifecycle (in the Test Plan) and revisited as the project nears completion, to assure continued alignment of goals and responsibilities. UAT, however, should not be seen as a beta-testing activity, delegated to users in the field before formal release.   E-commerce users are becoming increasingly intolerant of poor sites, and technical issues related to functionality, performance or reliability have been cited as primary reasons why customers have abandoned sites.  Early exposure of users to sites with problems increases the probability that they will find the site unacceptable, even if developers continue to improve the site during beta testing.
Principle 7.  Regression testing:  Applications that change need regression testing to confirm that changes did not have unintended effects, so this must be a major feature of any e-commerce testing strategy.  Web-based applications that reference external links need regular regression testing, even if their functionality does not change, because the environment is changing continuously. Wherever possible, regression testing should be automated, in order to minimise the impact on the test schedule.

Principle 8.  Automate as much as possible:  This is a risky principle because test automation is fraught with difficulties.  It has been said that a fool with a tool is still a fool, and that the outcome of automating an unstable process is faster chaos, and both of these are true.  Nevertheless, the chances of getting adequate testing done in the tight time scales for an e-commerce project and without automation are extremely slim.  The key is to take testing processes sufficiently seriously that you document them and control them so that automation becomes a feasible option – then you select, purchase and install the tools.  It will not be quick or cheap – but it might just avoid a very expensive failure.

Principle 9.  Capture test incidents and use them to manage risk at release time:  A test incident is any discrepancy between the expected and actual results of a test.  Only some test incidents will relate to actual faults; some will be caused by incorrect test scripts, misunderstandings or deliberate changes to system functionality.  All incidents found must be recorded via an incident management system (IMS), which can then be used to ascertain what faults are outstanding in the system and what the risks of release might be. Outstanding incidents can be one of the completion criteria that we apply, so the ability to track and evaluate the importance of incidents is crucial to the management of testing.

Principle 10.  Manage change properly to avoid undoing all the testing effort:  Things change quickly and often in an e-commerce development and management of change can be a bottleneck, but there is little point in testing one version of a software application and then shipping a different version; not only is the testing effort wasted, but the risk is not reduced either.  Configuration Management tools, such as PVCS and ClearCase, can help to minimise the overheads of change management, but the discipline is the most important thing.

What are the Testing Challenges in e-Commerce


Business Issues

A successful e-commerce application is:

  1.   Usable:  Problems with user interfaces lose clients.
  2.   Secure:  Privacy, access control, authentication, integrity and non-repudiation are big issues.
  3.   Scalable:  Success will bring increasing demand.
  4.   Reliable:  Failure is unthinkable for a business-critical system.
  5.   Maintainable:  High rates of change are fundamental to e-commerce.
  6.   Highly available:  Downtime is too expensive to tolerate.

These characteristics relate in part to the web technology that usually underlies e-commerce applications, but they are also dependent on effective integration and effective back-end applications.  E-commerce integrates high value, high risk, high performance business critical systems, and it is these characteristics that must dominate the approach to testing because it is these characteristics that determine the success of e-commerce at the business level.

Technical Issues


The development process for e-commerce has unique characteristics and some associated risks.  It is generally recognised that a ‘web year’ is about 2 months long.  In other words, a credible update strategy would need to generate e-commerce site updates roughly monthly.  For this reason, Rapid Application Development (RAD) techniques predominate in the e-commerce environment, and in some cases development is even done directly in a production environment rather than in a separate development environment.  RAD techniques are not new, and it is generally agreed that they work best where functionality is visible to the user – so web site development would seem to be an ideal application area.  Unfortunately, though, other aspects of e-commerce are at least as important as the front-end.  The end-to-end integration of business processes and the consequent severe constraints placed on intermediate processes make them less than ideal application areas for RAD.

These changes increase risk and create new challenges for testers, because time pressures militate against spending a longer time testing sites before they are released.  At the same time, the technical environment of front-end systems is changing very rapidly, so change is imposed on e-commerce sites even when the site itself is not changing.  This requires more regression testing than would be expected in a conventional application to ensure that the site continues to function acceptably after changes to browsers, search engines and portals.  New issues have also come to the fore for testers, notably security of transactions and the performance of web sites under heavy load conditions.

If we consider an e-commerce site as made up of a front end (the human-computer interface), a back end (the software applications underlying the key business processes) and some middleware (the integrating software to link all the relevant software applications), we can consider each component in isolation.

 

Front End Systems


Static Testing:  The front end of an e-commerce site is usually a web site that needs testing in its own right.   The site must be syntactically correct, which is a fairly straightforward issue, but it must also offer an acceptable level of service on one or more platforms, and have portability between chosen platforms.  It should be tested against a variety of browsers, to ensure that images seen across browsers are of the same quality.  Usability is a key issue and testing must adopt a user perspective.  For example, the functionality of buttons on a screen may be acceptable in isolation, but can a user navigate around the site easily, and does information printed from the site look good on the page?  It is also important to gain confidence in the security of the site.  Many of these tests can be automated by creating and running a file of typical user interactions – useful for regression testing and to save time in checking basic functionality.

Dynamic Testing:  Applications attached to an e-commerce site, either by CGI programming or server extensions, will need to be tested by creating scenarios that generate calls to these attached applications, for example by requiring database searches.  The services offered to customers must be systematically explored, including the turnaround time for each service and the overall server response.  This, too, must be exercised across alternative platforms, browsers and network connections.  E-commerce applications are essentially transaction-oriented, based on key business processes, and will require effective interfacing between intranet-based and extranet-based applications.
      

Back End Systems:

The back end of e-commerce systems will typically include ERP and database applications.  Back end testing, therefore, is about business application testing and does not pose any new or poorly understood problems from a business perspective, but there are potential new technical problems, such as server load balancing.  Fortunately, client-server system testing has taught the testing community many valuable lessons that can be applied in this situation.  What is essential, however, is to apply the key front end testing scenarios to the back end systems.  In other words, the back end systems should be driven by the same real transactions and data that will be used in front end testing.  The back end may well prove to be a bottleneck for user services, so performance under load and scalability are key issues to be addressed.  Security is an issue in its own right, but also has potential to impact on performance.

Middleware and Integration:


Integration is the key to e-commerce. In order to build an e-commerce application, one or more of the following components are usually integrated:

  1.  Database Server
  2.  Server-side application scripts/programs
  3.  Application server
  4.  HTML forms for user interface
  5.  Application scripts on the client
  6.  Payment server
  7.  Scripts/programs to integrate with legacy back-end systems

The process of developing an e-commerce site is significantly different from developing a web site – commerce adds extra levels of complexity.   One highly complex feature is that of integration.

If an application is being built that uses a database server, web server and payment server from different vendors, there is considerable effort involved in networking these components, understanding connectivity-related issues and integrating them into a single development (executable) environment.  If legacy code is involved, this adds a new dimension to the problem, since time will need to be invested in understanding the interfaces to the legacy code, and the likely impact of any changes.

It is also crucial to keep in mind the steep learning curve associated with cutting-edge technologies.  Keeping pace with the latest versions of the development tools and products to be integrated, their compatibility with the previous versions, and investigating all the new features for building optimal solutions for performance can be a daunting task.  Also, since e-commerce applications on the web are a relatively new phenomenon, there are unlikely to be any metrics on similar projects to help with project planning and development.

The maintenance tasks of installing and upgrading applications can also become very involved, since they demand expertise in:

  1. Database administration.
  2. Web server administration.
  3. Payment server administration.
  4. Administration of any other special tools that have been integrated into the site.

The need for ongoing technical support should also be borne in mind.

Correctly functioning back-end and front-end systems offer no guarantees of reliable overall functionality or performance.  End-to-end testing of complete integrated architectures, using realistic transactions, is an essential component. 

Tuesday, September 7, 2010

Defining user permissions

This article describes the permission functionality that exists in Quality Center.
The first thing to be aware of when customizing a Quality Center project is that permission settings are not defined in a single location but are spread across different parts of the customization sections. You might wonder why such a decision was made. Even though the location of these settings makes some “reasonable” sense, I believe it complicates the task of securing access to the project data.
Anyway, this is how it is, so let’s start examining these settings.
All the security settings are defined in the Customization (accessible through Tools > Customize…).
The different sections where you can configure user permissions are:
• Set Up Project Users
• Set Up Groups
• Customize Module Access
• Set Up Workflow
• Script Generator - Add Defect Field Customization
• Script Generator - Defect Details Field Customization
• Script Editor
Let’s first start with setting up the Group.
Set Up Groups
On a fresh project, there are always 5 default groups that are already defined:
• Developer
• Project Manager
• QATester
• TDAdmin
• Viewer
These groups cannot be customized and cannot be removed from the project. In order to tailor the group permissions to your project, you need to create new groups, whose settings you will be able to customize. When creating a new group, you need to indicate from which group you want to duplicate the initial settings. This can be useful, especially if your new group has settings similar to another group’s.
Once created, you can amend its settings by selecting Change permissions. This will bring a new window divided into tabs for each Quality Center module:
• Requirements
• Business Components (optional)
• Test Plan
• Test Lab
• Defects
• Administration (customization module permissions)
For each module you will find similar settings where you can allow (if checked) or disable (unchecked) permissions for different aspects of the module.
There are usually 3 actions, Add/Modify/Delete, which give you control on a group basis.
For the Delete action, you can specify that only the owner can delete the object (’Can be deleted by owner only’ checkbox).
For the Modify action, you can even define finer rules. For each field, you can restrict modification permissions to the owner only (’Can be modified by owner only’ checkbox) and, for fields defined by lists, you can specify transition rules (i.e. define a transition workflow). This last point is particularly interesting for workflow based transitions such as Status where the designer wants the user to follow a predefined path (for instance, a defect Status cannot be set as Fixed unless the testing team has validated it beforehand by setting its Status as Validated).
For the Test Plan, Test Lab and Defect tabs, you may have noticed a “Data-Hiding Filter” link. These are extra security settings and will bring another window with further customization settings.
First, you can set filtering conditions. By defining a filter, you limit the visibility scope a group has. As an example, imagine you have different teams who are accessing a QC project:
• Team 1: this team works with confidential technology, so its data must not be accessible to everyone.
• Team 2: these are the outsourced testers who can log defects.
To separate the defects that are confidential from those that are not, a field called “Confidentiality Grade” has been created, containing 2 values, “1-High” and “2-Low”.
If you are defining a user group for Team 2, then you set a filter on the “Confidentiality Grade” field with “2-Low” as the filter value. By doing this, any user who is only part of this group will not see any defect with grade “1-High”.
Secondly, you can also hide fields from the user. This prevents a user from seeing values he shouldn’t.

How to become a QC Project Administrator - Roles and Responsibilities

1. Log in to the QC Site Administration Tool
2. Highlight the Project where the User is to become the Project Administrator
3. Check the Project Administrator checkbox "ON" for the User
4. Save and Log Out
5. Log in to the QC Project
6. In the Customization Page, assign the user to the "TD_ADMIN_QTP_Project" Group and remove the User from the "TD Admin" Group
Note: If the User is no longer a TD Admin of any Project, that user will no longer be able to log in to the Site Administration Tool.

Site Administrator: The Site Administrator role allows the user to:
1. Become a QC Administrator of any QC Project
2. Create, Delete, Modify, Disconnect, Remove, Restore, Activate, Deactivate, Copy, Upgrade, etc. any QC Project
3. Add Users to QC
NOTE: Once the User is added to QC, the user should be added to a Project in the Project Customization Tool
4. Disconnect Users from a Login Session
5. Update the QC License
6. Configure QC
7. View License usage

Site Administrator Access:

1. From the QC Home Page, click the Site Administrator link so that the "Quality Center - Administration" login page displays
2. Log in using the same User Name and Password used to log in to a QC Project
3. The "Site Projects" tab is where a QC User can be set as a Site Administrator, which allows them to access the "Quality Center - Site Administration" Tool
4. A tree view of the Domains and the Projects within those Domains displays
5. "QualityCenter_DEMO" is a built-in demo Project that comes with QC
6. Never DELETE "QualityCenter_DEMO"
7. This Project is where users can be set as a "Project Administrator"
Selecting the Project Administrator check box for a User in ANY Project automatically makes that QC User:
a) a QC "Site" Administrator
b) a "TD Admin" for only that Project
8. There are two other Projects in QC where a User might be set to become a Site Administrator (in case the QualityCenter_DEMO project is accidentally deleted):
a) QC_MASTER_TEMPLATE
b) QC_EXAMPLE_Structure
9. Setting a User as a Project Administrator in QC Site Administration automatically assigns the User to the TD_Admin User Group for that Project
TD_Admin is a pre-defined QC User Group
User-defined Groups must be used instead of "TD Admin"
This is because QC was customized to use only User-defined Groups.
TD_ADMIN_QTP_Project is a user-defined Group most equivalent to the TD_Admin Group

10. Unassigning a User from the "TD Admin" Group will remove the user from being a Site Administrator. But if the User is still assigned as a Project Administrator (TD_Admin) on at least one Project, the user can still access the Site Administration Tool.

11. It is a good idea to have more than one user as a "Site Administrator" in order to have a backup person

12. It is not a good idea to make a user a "Project Administrator" of a Project if that is not the User's role, for many reasons (e.g. useless email notifications, accidental deletion of data, ...)

Monday, September 6, 2010

QTP Scripting GuideLines

1 Introduction
Coding conventions are suggestions designed to help write code using Microsoft Visual Basic Scripting Edition (VBScript) for QTP. This document describes the coding standards to be followed while developing automated test scripts for Oracle applications. In general, it presents a general-purpose set of coding conventions that define minimum requirements for coding with QTP.
1.1 Background

1.2 Objectives
The following are the main objectives of this document:
Define guidelines for test script development.
Define naming conventions for test scripts, data sheets, library files, reusable actions and Result files.
Outline Best practices
1.3 Purpose
A standardized process for automated test script development is important for the following reasons:
Automated regression test scripts are developed with a long-term perspective, so the scripts should be highly maintainable. Also, automated test scripts are rarely maintained for their whole life by the original author.
Scripting conventions improve the readability of the software, allowing engineers to understand new scripts more quickly and thoroughly.
Improve readability by ensuring a “common look and feel” to the scripts regardless of how many people have worked on it.
Using consistent Scripting standards and naming conventions will not only contribute significant time and resource savings for both the initial construction and long term maintenance of automated test scripts, but also improve script efficiency dramatically. Developing these standards prior to construction and adhering to them during construction is thus very important.

2 Naming Conventions
Good coding conventions result in precise, readable, and unambiguous source code that is consistent with other language conventions and is as intuitive as possible. The coding standard guideline is to standardize the way coding is done across the automation effort for Oracle applications.
2.1 Constant Naming Conventions
Earlier versions of VBScript had no mechanism for creating user-defined constants. Constants, if used, were implemented as variables and distinguished from other variables using all uppercase characters. Multiple words were separated using the underscore (_) character.

Example:
INVENTORY_ORG = “STOR”

While this is still an acceptable way to identify your constants, you can now create true constants using the Const statement. This convention uses a mixed-case format in which constant names have a “con” prefix. One declaration per line is recommended, since it encourages commenting.

Example:
Const conInventoryOrg = “STOR”
2.2 Variable Naming Conventions:
A simplified form of Hungarian notation should be used to indicate the data type and the scope of a variable.
Use a lower-case prefix for the type.
Concatenate scope and type prefixes, in that order; add an "a" prefix for an Action-level variable, e.g. aintQuantity (where "a" indicates that the variable is at action level).
To enhance readability and consistency, use the following prefixes with descriptive names for variables in your VBScript code:
Subtype        Prefix   Example
Boolean        bln      blnFound
Byte           byt      bytRasterData
Date (Time)    dtm      dtmStart
Double         dbl      dblTolerance
Error          err      errOrderNumber
Integer        int      intQuantity
Long           lng      lngTotalPrice
Object         obj      objCurrent
Single         sng      sngAverage
String         str      strFirstName
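A short sketch of these conventions in use (the variable names and values below are illustrative only):

```vbscript
Dim strFirstName      ' String  : customer first name
Dim intQuantity       ' Integer : order line quantity
Dim blnFound          ' Boolean : set to True when a matching record is located

strFirstName = "John"
intQuantity = 5
blnFound = False

' The scope prefix comes first: an Action-level Integer gets "a" + "int"
Dim aintQuantity
aintQuantity = intQuantity
```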

2.3 User Defined Functions
The function or procedure name should be descriptive of its primary purpose.
Example:
Sub RunConcurrentProgram (. . .)
In VBScript there are two kinds of procedures: the Sub procedure and the Function procedure. A Function in the script must always be used on the right side of a variable assignment or in an expression.
Example:
strStatusMessage = GetStatusbarMessage()
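As a hedged sketch, the two kinds of procedures might look as follows (the procedure bodies and the returned message are hypothetical):

```vbscript
' A Sub performs an action and does not return a value
Sub RunConcurrentProgram(strProgramName)
    ' ... steps to submit the concurrent program would go here ...
End Sub

' A Function returns a value, so it appears on the right side of an assignment
Function GetStatusbarMessage()
    GetStatusbarMessage = "Request completed normally"   ' illustrative return value
End Function

Dim strStatusMessage
RunConcurrentProgram "Order Import"
strStatusMessage = GetStatusbarMessage()
```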
2.4 Parameters Naming Conventions:
When parameters are created, give the parameter column the name of the corresponding field in the Oracle form; using the field name makes it clear which Oracle field is parameterized when maintenance is required. A prefix of "P_" will be added to each parameter thus defined.

Example:
P_Number
2.5 Object Naming Conventions:

During test script development there is often a requirement to use descriptive programming for object identification, so it is necessary to follow some recommended conventions to name the objects: suffix the object name with the object type. The following table lists some of the objects in Oracle Forms and the required suffix. The name of the object will be based on the form name as designed in Oracle Applications, in initial-capital format. So, for example, a form named "Find Orders" in Oracle will be given the name "FindOrders" for the purpose of descriptive programming.

Object Type      Suffix         Example
Window           Window/Form    SubmitRequestWindow or SubmitRequestForm
Button           Button         ApproveButton
Tabbed Region    Tab            MainTab
Check Box        CheckBox       GlobalCheckBox
In Oracle Forms, objects are recognized in a parent-child hierarchy. For example, under the Sales Order window there will be tab regions, which in turn contain editable text fields. The description of a child object must therefore be preceded by its parent's name.
So, for example, if the Oracle form is named "Sales Order Form", containing a tab region named "Main", under which there is a text field called Customer Number, then the object descriptions should follow the hierarchy as below:

Sales Order Form description: SalesOrderWindow
Main Tab Region's description: SalesOrderWindow_MainTab
Text Field's description: SalesOrderWindow_MainTab_CustomerNumberTextField
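Assuming a web front end and the QTP Web add-in (the property values below are hypothetical), the descriptions above might be stored in variables and consumed like this:

```vbscript
Dim SalesOrderWindow, SalesOrderWindow_MainTab, SalesOrderWindow_MainTab_CustomerNumberTextField

SalesOrderWindow = "title:=Sales Orders"
SalesOrderWindow_MainTab = "name:=Main"
SalesOrderWindow_MainTab_CustomerNumberTextField = "name:=Customer Number"

' The parent-child hierarchy is reproduced in the test step itself:
Browser(SalesOrderWindow).Page(SalesOrderWindow) _
    .WebEdit(SalesOrderWindow_MainTab_CustomerNumberTextField).Set "1001"
```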

2.6 Code Commenting Conventions:
The following are the commenting conventions to be used while script development.
All procedures should begin with a brief comment describing what they do. This description should not describe the implementation details (how it does it) because these often change over time, resulting in unnecessary comment maintenance work, or worse, erroneous comments.
Arguments passed to a procedure should be described when their purpose is not obvious and when the procedure expects the arguments to be in a specific range.
Return values for functions and variables that are changed by a procedure, especially through reference arguments, should also be described at the beginning of each procedure.
Every important variable declaration should include an inline comment describing the use of the variable being declared.
Variables, controls, and procedures should be named clearly to ensure that inline comments are only needed for complex implementation details.
At the beginning of the script, there should be an overview that describes the script, enumerating objects, procedures, algorithms, dialog boxes, and other system dependencies. Sometimes a piece of pseudo code describing the algorithm can be helpful.
There should be header comments included for procedures; the required section headings are listed in Section 3.
2.7 Checkpoints and Description of Checkpoints
When a checkpoint is inserted, add a comment which details the type of checkpoint, the objective of the checkpoint, and the value/object/event checked.
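For example, a checkpoint comment might read as follows (the checkpoint name and the objects it acts on are hypothetical):

```vbscript
' Checkpoint : Text checkpoint
' Objective  : Verify the order status after booking
' Checked    : The "Status" field should display the value "Booked"
Window("SalesOrderWindow").WinEdit("StatusTextField").Check CheckPoint("Status_Booked")
```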

3 Test Script/Components Header
Header information is a set of comments, usually at the beginning of a test script, reusable action, user-defined function or procedure, library file, or object definition file.
All procedures and functions should begin with a Header block describing the functional characteristics of the routine (what it does). This description should not describe the implementation details (how it does it), because these may change over time, resulting in unnecessary comment maintenance work or, worse, erroneous comments.

3.1 Procedures/Function headers

'*********************************************************
'* Function Name : Name of the function
'* Process Thread : NA
'* Object/Details : A brief description of the flow of the function
'* Input Parameters : Input variables description to the function
'* Output Parameters : Output variables description to the function
'* Pre-Conditions : Any pre-requisites for the function
'* Post-Conditions : NA
'* Reusable Actions/Functions Used : None
'* Author : The name of the person who created the function
'* Date : The date on which the function was created
'* Comments : Comments if any
'*********************************************************
3.2 Reusable Action Header

Reusable actions should contain the following header information.

'*********************************************************
'* Objective : The main objective of the reusable action
'* Action Input : Input action parameters
'* Action Output : Output action parameters
'* Pre-Conditions : The preconditions that should be satisfied before the action can be called
'* Post-Conditions : Post conditions that should be executed, if any
'* Author : The name of the person who originally created the action
'* Date : The date on which the action was created
'* Last Modified Date : The last modification date
'*********************************************************
3.3 Library File

'*********************************************************
'* Name : Name of the lib file
'* Functions : Name and description of the functions in the file
'* Classification : General or Oracle module name
'*********************************************************

3.4 Object Definition File

'*********************************************************
'* Name : Name of the objects file
'* Classification : Common or Oracle module name
'*********************************************************
3.5 Test Script

'*********************************************************
'* Project Name : Name of the project
'* Test Reference : Test case name related to the script
'* Script Title : Name of the automation script
'* Test Version : Script version
'* Test Version Date : Version date
'* Application Code Version : Application version
'* Created By : Name of the person who created the script
'* Date of Script Creation : Creation date of the script
'* Purpose/Description : A brief description of the script
'* Input Excel Sheet Name : Input data sheet name for the script
'* Results Excel Sheet Name : Result sheet name for the script
'* Initial Conditions : Any initial setup that may need to be performed before running the script
'* Dependencies/Assumptions : Describe any dependencies for the script
'* Pass/Fail Criteria : List the PASS/FAIL conditions for the script, e.g. 1. Successful generation of invoice
'*********************************************************
4 Script and Components Naming
4.1 Test Script Naming
Test script names should be prefixed with the application name, followed by an underscore, the word "SCN", and a short description of the script.
Keep test script names simple and descriptive.
Use whole words in initial-capital format; avoid acronyms and abbreviations (unless the abbreviation is much more widely used than the long form).
The script name should contain a reference to the test case or test suite.
Use the underscore "_" for separation.

Example:
ApplicationName_SCN_ScriptName

4.2 Action Naming
Actions created using QTP should have a meaningful name.
They should be in mixed case with the first letter of each word capitalized, prefixed by the letters "RU" and an underscore.
Example:
RU_Login should be the name assigned for the reusable action designed for logging into the Oracle Front End application.


4.3 Library File Naming
A library file is created as a placeholder for a group of general user-defined functions; there will also be a library file specific to each module. The name of the library file should start with the application name followed by a meaningful name or module name, using "_" as the separation character. When the module name or general name has multiple words, use mixed case with the first letter of each word capitalized.
Example:
ApplicationName_ConcurrentProgram.vbs

4.4 Data File Naming
Input data files are maintained separately. The data files are Excel spreadsheets that are called dynamically by the test script during execution. Each test script will have an input data file. The data file name should be the same as the test script name, followed by the string "Data". This helps in associating a data file with a test script and the test case.
Example:
“ScriptName”Data.xls

4.5 Object File Naming

The object file is designed based on the responsibility in Oracle Applications, and the file should contain the object descriptions of the different Oracle forms available under that responsibility. The object file must be named as the requisite responsibility name followed by an underscore and the word "Objects".
Example:
"ResponsibilityName"_Objects.vbs


Test Script Usage and Maintenance
A Test script usage & maintenance document should be prepared for each script or for a group of test sets. The document helps in future maintenance of scripts and for easy execution of scripts by anyone. The document should address the following:
Scope of test script or test set
Execution steps for the scripts
Opening the test through QTP
Reference to Test plan & Test cases
Viewing/Editing test data
Executing the test script
Viewing/Analyzing Results & log
Baseline data used for script execution
Test scripts location
Limitation of test scripts
Reference – PEs, BRDs, SRDs, etc.

Tables in QC - Test Set - Used to generate Excel Reports

TABLE: CYCLE (Test Sets)

CY_CYCLE_ID The ID of the test set record.
CY_CYCLE The test set name assigned by the creator of the test set.
CY_FOLDER_ID The CYCL_FOLD.CF_ITEM_ID of the folder containing the test set.
CY_ASSIGN_RCYC The RELEASE_CYCLES.RCYC_ID of the release cycle with which the test set is associated.


TABLE: CYCL_FOLD (The test set folder tree.)

CF_ITEM_ID The record ID.
CF_ITEM_NAME The folder name.
CF_FATHER_ID The CF_ITEM_ID of the folder containing this item. Root folders have CF_FATHER_ID = 0.
CF_ASSIGN_RCYC The RELEASE_CYCLES.RCYC_ID of the release cycle with which the folder is associated.

TABLE: TESTCYCL (Instances of design tests in test sets.)

TC_TESTCYCL_ID The record ID.
TC_CYCLE_ID The CYCLE.CY_CYCLE_ID of the test set containing this test. (test set ID)
TC_STATUS The status of the last run of the test instance. The values are from the Status custom list.
Typical values are: Not Completed, No Run, Passed, N/A, Failed.
TC_ASSIGN_RCYC The RELEASE_CYCLES.RCYC_ID of the release cycle with which the test instance is associated.
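As a hedged sketch, these tables can be queried from a QTP script through the OTA Command object; the join below assumes the column relationships described above, and the report columns chosen are illustrative:

```vbscript
' Assumes QTP is already connected to QC, so QCUtil.QCConnection is available
Dim objCommand, objRecordSet
Set objCommand = QCUtil.QCConnection.Command
objCommand.CommandText = _
    "SELECT CY.CY_CYCLE, TC.TC_STATUS " & _
    "FROM TESTCYCL TC, CYCLE CY " & _
    "WHERE TC.TC_CYCLE_ID = CY.CY_CYCLE_ID"
Set objRecordSet = objCommand.Execute   ' one row per test instance: test set name + run status
```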

How to become a QC Administrator - QC Administration Terminology

Site Administrator: a user who can access the Quality Center Site Administration tool.

TD Admin or Project Administrator: a user who can customize a specific QC project.

Quality Center Site Administration: a tool that allows a site administrator to become a TD Admin, create and modify projects, add users, and configure QC. Project-level customization is excluded from this tool.

Sunday, September 5, 2010

Requirements for Load Testing

Questions that need to be answered for coming up with test scenarios:

1. What would be the maximum number of users logged in and using the system at one time?
2. Which transactions (features) are most commonly used?
3. What would be the maximum number of users processing a transaction at one time?
4. How many of each transaction take place per day?
5. At what time of day do certain transactions take place?
6. What are the transactions' acceptable response times?
7. What connection speeds would be used, and by how many users, for each transaction?
8. Which transactions would be taking place at the same time, and how many?
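Answers to questions 1, 4, and 6 can be cross-checked with Little's Law: concurrent users are roughly the arrival rate multiplied by the time each user spends per iteration. A toy calculation, using hypothetical numbers:

```vbscript
Dim dblTransPerHour, dblRespTime, dblThinkTime, dblConcurrentUsers

dblTransPerHour = 3600    ' peak-hour transaction volume (from question 4)
dblRespTime = 3           ' acceptable response time in seconds (from question 6)
dblThinkTime = 27         ' assumed user think time in seconds

' Little's Law: users = arrival rate (per second) * time spent per iteration
dblConcurrentUsers = (dblTransPerHour / 3600) * (dblRespTime + dblThinkTime)
MsgBox dblConcurrentUsers ' 30 concurrent users for these sample numbers
```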

Thursday, September 2, 2010

Folder Structure in QTP Package

The high level explanation could be as follows:

1. The top-level test folder, named after the test.

2. Its contents:
a. Action folders: Action0, Action1, Action2, ...
b. Files inside the test folder:
1. .cfg file -
This holds the run time information like addins loaded, user information, etc.

2. .prm file -
Holds the information about the number of iterations and the parameters that are used within the script/action.

3. .usr file -
This will be generated to hold the name of the script, the objectRepository info, etc.

4. .lck file -
This is a lock file.

5. .mtr file -
(No idea)

6. .xls sheet -
The default sheet that holds the parameters used within the script.

7. .prm.bak file -
backup file.

8. .usp file -
Stores run-logic information for the script.

9. thin_usr.dat file -
Similar to the thick_usr.dat file; the exact difference is not clear.

10. thick_usr.dat file -
Stores high level info about all the above and below files indicating what set of files was created, etc.

11. .tsp file -
The test settings file (stored in binary format).

3. Each action folder:
This is a default folder that is created to hold the following:

a. Snapshots folder:
This holds all the Active Screen files (zipped) pertaining to that action.

b. Resource.mtr file -
This file holds info regarding the object repository used and the shared libraries used, and possibly maintains a track of how the script is being accessed through the code, i.e. the call chain.

c. Script.mts -
This is the actual script file.

d. .tsr -
The object repository file, if used in per-action mode.

e. The result file.

f. The action excel sheets.

The subsequent action folders hold a similar set of files.

Basic guide lines and check list for web testing

Aspects to cover: Functionality, Usability, User Interface, Server-side interface, Compatibility, Security, Performance

LINKS: Check all the links in the web site
1) All hyperlinks
2) All internal links
3) All mail links
4) Check for orphan pages
5) Check for broken links

FORMS: Check the integrity of submission of all forms
1) All field-level checks
2) All field-level validations
3) Functionality of create, modify, delete, and view
4) Handling of wrong inputs
5) Default values, if any
6) Optional vs. mandatory fields

COOKIES: Check whether cookies have to be enabled and how they expire

WEB INDEXING: Depending on how the site is designed using meta tags, frames, HTML syntax, dynamically created pages, passwords, or different languages, the site will be searchable in different ways

DATABASE: Two types of errors occur in a web application
1) Data integrity: missing or wrong data in a table
2) Output errors: errors in writing, editing, or reading operations on the table

USABILITY: How simply a customer can browse the web site

NAVIGATION:
1) Navigation describes the way a user moves within a webpage, between different user interface controls (buttons, text boxes, combo boxes, drop-down lists, etc.)
2) Application navigation is proper through tabs
3) Application navigation is proper through mouse
4) Any hot keys or control keys to access menus

CONTENT:
1) Correctness: whether the information is truthful or contains misinformation
2) Accuracy: whether the information is without grammatical or spelling errors
3) Spelling and grammar
4) Updated information (contact details, mail IDs, help reports)

GENERAL APPEARANCE: 1) Page appearance 2) Colour, font size 3) Frames 4) Consistent designs 5) Symbols and logos

SERVER SIDE INTERFACE:
1) Verify that communication is done correctly: web server to application server, application server to database server, and vice versa
2) Compatibility of server software, hardware, network connections
3) Database compatibility
4) External interfaces, if any

CLIENT SIDE COMPATIBILITY:
Platforms: XP, NT, UNIX, LINUX, Solaris, Macintosh
Browsers: IE (3.x, 4.x, 5.x), Netscape, AOL
Browser settings

Error Number Descriptions in QTP

507 An exception occurred
449 Argument not optional
17 Can't perform requested operation
430 Class doesn't support Automation
506 Class not defined
11 Division by zero
48 Error in loading DLL
5020 Expected ')' in regular expression
5019 Expected ']' in regular expression
432 File name or class name not found during Automation
92 For loop not initialized
5008 Illegal assignment
51 Internal error
505 Invalid or unqualified reference
481 Invalid picture
5 Invalid procedure call or argument
5021 Invalid range in character set
94 Invalid use of Null
448 Named argument not found
447 Object doesn't support current locale setting
445 Object doesn't support this action
438 Object doesn't support this property or method
451 Object not a collection
504 Object not safe for creating
503 Object not safe for initializing
502 Object not safe for scripting
424 Object required
91 Object variable not set
7 Out of Memory
28 Out of stack space
14 Out of string space
6 Overflow
35 Sub or function not defined
9 Subscript out of range
5017 Syntax error in regular expression
462 The remote server machine does not exist or is unavailable
10 This array is fixed or temporarily locked
13 Type mismatch
5018 Unexpected quantifier
500 Variable is undefined
458 Variable uses an Automation type not supported in VBScript
450 Wrong number of arguments or invalid property assignment
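A hedged sketch of trapping one of these errors at run time (the Reporter call assumes the code runs inside QTP; outside QTP, MsgBox could be used instead):

```vbscript
Dim intResult
On Error Resume Next
intResult = 10 / 0        ' raises error 11, "Division by zero"
If Err.Number <> 0 Then
    Reporter.ReportEvent micFail, "Division step", "Error " & Err.Number & ": " & Err.Description
    Err.Clear
End If
On Error GoTo 0
```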

What is ER Diagram

The Entity-Relationship (ER) model was originally proposed by Peter Chen in 1976 [Chen76] as a way to unify the network and relational database views.

Simply stated, the ER model is a conceptual data model that views the real world as entities and relationships. A basic component of the model is the Entity-Relationship diagram, which is used to visually represent data objects.

Since Chen wrote his paper, the model has been extended, and today it is commonly used for database design. For the database designer, the utility of the ER model is:

It maps well to the relational model: the constructs used in the ER model can easily be transformed into relational tables.
It is simple and easy to understand with a minimum of training; therefore, the model can be used by the database designer to communicate the design to the end user.

In addition, the model can be used as a design plan by the database developer to implement a data model in specific database management software.


What is ETL

ETL is short for Extract, Transform and Load. It is a data-integration function that involves extracting data from outside sources, transforming it to fit business needs, and ultimately loading it into a data warehouse.

In other words, it is the process of extracting data from operational or external data sources; transforming the data, which includes cleansing, aggregation, summarization, and integration as well as basic transformation; and loading the data into some form of data warehouse.

E: Extraction of data from homogeneous/heterogeneous sources.
T: Transforming/modifying the source data by applying transformations such as Filter, Expression, Router, and Joiner.
L: Loading the transformed data into the corresponding target tables.
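A deliberately tiny sketch of the three steps over plain text files (the paths and the transformation rules are hypothetical):

```vbscript
Dim objFSO, objIn, objOut, strLine
Set objFSO = CreateObject("Scripting.FileSystemObject")
Set objIn = objFSO.OpenTextFile("C:\source\orders.csv", 1)              ' Extract (1 = ForReading)
Set objOut = objFSO.CreateTextFile("C:\warehouse\orders_clean.csv", True)

Do While Not objIn.AtEndOfStream
    strLine = objIn.ReadLine
    If Trim(strLine) <> "" Then          ' Transform: cleansing (drop blank rows)
        objOut.WriteLine UCase(strLine)  ' Transform: standardize case, then Load into the target file
    End If
Loop

objIn.Close
objOut.Close
```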

Wednesday, September 1, 2010

What is FrameWork

It basically depends upon the type of framework being used: data-driven, modular, keyword-based, or hybrid.

Test automation Framework is a set of assumptions, concepts and practices that provide support for automated software testing.

Test script modularity Framework:

The test script modularity Framework requires the creation of small, independent scripts that represent modules, sections, and functions of the application-under-test. These small scripts are then used in a hierarchical fashion to construct larger tests, realizing a particular test case.
It is called modular since we break the scripts into modules, a few of which are compiled or reusable modules. This is the simplest of all frameworks.

Data Driven Frameworks:

Test scripts are executed and verified based on data values stored in one or more central data sources or databases. These can range from datapools, ODBC sources, CSV files, and Excel files to DAO and ADO objects.
The establishment of several interacting test scripts, together with their related data, results in the framework used for this methodology. In this framework, variables are used for both input values and output verification values; navigation through the program, reading of the data sources, and logging of test status and information are all coded in the test script.
Thus, the logic executed in the script is also dependent on the data values.
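A minimal sketch of a data-driven loop using the QTP DataTable (the column name and object descriptions below are hypothetical):

```vbscript
Dim intRow
For intRow = 1 To DataTable.GetRowCount
    DataTable.GetSheet(dtGlobalSheet).SetCurrentRow intRow
    ' The input value comes from the data source, not from the script itself
    Browser("title:=Oracle Applications").Page("title:=Oracle Applications") _
        .WebEdit("name:=Customer Number").Set DataTable("CustomerNumber", dtGlobalSheet)
Next
```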


Keyword-driven Framework:

Keyword-driven Framework separates much of the programming work from the actual test steps so that the test steps can be developed earlier and can often be maintained with only minor updates, even when the application or testing needs change significantly.
It consists of driver scripts, control scripts, etc.

A framework is a set of rules defined for developing and organizing test scripts. There are different types of frameworks available, namely modularized, library-based, data-driven, keyword-driven, and hybrid frameworks.

You can use any of these frameworks for automation; it depends on the project and its criticality.

Hybrid Framework:

The hybrid framework is a combination of the other frameworks, used to make code and scripts easier to maintain. You can use external files (say, Excel sheets or text files) for updating fields in the project/application and import the file into the script; whenever you want to update the script with a different set of values, you just need to update the external files, which in turn updates the automation script.

How to run QTP Scripts from QC

Open QC
Create a new subject in Test Plan

Then place all your QTP scripts in that subject folder
Open QTP and go to File -> Quality Center Connection
Enter the URL
Connect
Enter the QC user ID and password
Project: domain & project
Open the script you wish to run, click File -> Save As, and select the Quality Center button in the bottom-right corner
Select the relevant subject folder created before in QC
Click OK


Log in to QC; in the Test Lab, select the uploaded test scripts and click Run
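The same steps can also be driven programmatically through the QTP Automation Object Model; a hedged sketch (the server URL, credentials, and script path below are placeholders):

```vbscript
Dim qtApp
Set qtApp = CreateObject("QuickTest.Application")
qtApp.Launch
qtApp.Visible = True

' Connect(ServerURL, Domain, Project, User, Password, PasswordIsEncoded)
qtApp.TDConnection.Connect "http://qcserver/qcbin", "MY_DOMAIN", "MY_PROJECT", "qc_user", "qc_password", False

' Open a script stored in QC and run it
qtApp.Open "[QualityCenter] Subject\MyFolder\MyScript"
qtApp.Test.Run
```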