Listeners let you view the results of the Samplers in the form of tables, graphs, trees, or simple text in log files. They provide visual access to the data JMeter gathers about the test cases as each Sampler component is executed.

Listeners collect data ONLY from elements at or below their level.

Each Listener displays the response information in a specific way. For example, to view the statistical data of the response time in graph form, you may want to use an "Aggregate Graph" Listener. Likewise, to view a statistical report on the same response data in table form, you may want to add a "Summary Report" or "Aggregate Report" Listener. You can choose the form in which you would like to view the requests by selecting any of these Listeners, but they all write the same raw data to the output file with a .jtl extension.

Listeners provide means to view, save, and read saved test results.

All Listeners that save data into the same file display the same data, just in different ways.

The following list consists of all the Listeners JMeter provides:
  • Sample Result Save Configuration
  • Graph Full Results
  • Graph Results
  • Spline Visualizer
  • Assertion Results
  • View Results Tree
  • Aggregate Report
  • View Results in Table
  • Simple Data Writer
  • Monitor Results
  • Distribution Graph (alpha)
  • Aggregate Graph
  • Mailer Visualizer
  • BeanShell Listener
  • Summary Report

JMeter: Logic Controllers

Logic Controllers allow you to customize the logic that JMeter uses to decide when to send requests. For example, you can use Random Controllers to send HTTP requests to the server randomly.

Logic Controllers let you define the order of processing Samplers in a Thread by customizing the logic that JMeter uses to send requests. A Logic Controller changes the order of requests that come from its sub-elements, or child elements. The child elements of a Logic Controller may comprise Samplers, Configuration Elements, and more Logic Controllers. For these requests, JMeter may select randomly (using a Random Controller), repeat (using a Loop Controller), interleave (using an Interleave Controller), and so on.

Several Logic Controllers can be combined to achieve various results.

A Loop Controller Control Panel looks like the following figure:

The following list consists of all the Logic Controllers JMeter provides:
  • Simple Controller
  • Loop Controller
  • Once Only Controller
  • Interleave Controller
  • Random Controller
  • Random Order Controller
  • Throughput Controller
  • Runtime Controller
  • If Controller
  • While Controller
  • Switch Controller
  • ForEach Controller
  • Module Controller
  • Include Controller
  • Transaction Controller
  • Recording Controller


Samplers allow JMeter to send specific types of requests to a server. In our sample tests later, we will mostly be sending HTTP Requests, so we will use the HTTP Request Sampler to let JMeter send those requests. You may add Configuration Elements to these Samplers to customize your server requests.

Samplers allow you to define the requests that can be sent to a server. They simulate a user's request for a page from the target server. Each Sampler generates sample results that may have various attributes, such as performance, elapsed time, throughput, etc. By default, JMeter sends the requests in the order that the Samplers appear in the Test Plan tree. However, the order of processing the Samplers can be further customized using Logic Controllers. This will be further explained in the following section on "Logic Controllers".

You can customize each Sampler by setting its properties, or you can add Configuration Elements. For the purpose of this book, since we will be sending numerous HTTP Requests to the same server, we may use the HTTP Request Defaults Configuration Element, which predefines the server to which all HTTP requests will be made.

An HTTP Request Sampler Control Panel looks like the following figure:

The following is a list of all Samplers JMeter provides:
  • HTTP Request
  • FTP Request
  • JDBC Request
  • Java Request
  • SOAP/XML-RPC Request
  • WebService (SOAP) Request
  • LDAP Request
  • LDAP Extended Request
  • Access Log Sampler
  • BeanShell Sampler
  • BSF Sampler
  • TCP Sampler
  • JMS Publisher
  • JMS Subscriber
  • JMS Point-to-Point
  • JUnit Request
  • Mail Reader Sampler
  • Test Action

Thread Group

These elements are used to specify the number of running threads, a ramp-up period, and a loop count (the number of times to execute the test). Each thread simulates a user, and the ramp-up period specifies the time taken to create all the threads. For example, with 5 threads and a 10-second ramp-up time, there will be 2 seconds between each thread creation. The loop count defines the number of times the test will repeat for the thread group. The scheduler also allows you to set the start and end of the run time.

A Thread Group represents a group of users that will execute a particular test case. In its Control Panel, shown in the following figure, you can set the "number of users", how long it takes to start each "user" (or how often the users should send requests), the number of times to perform the test (or how many requests they should send), and a start and stop time for each test.

Test elements must be placed under a Thread Group, as together they define a Test Plan. A Thread Group controls the number of threads (or "users") JMeter will use to execute your test. If there are two or more Thread Groups in the same Test Plan, each Thread Group will execute completely independently of the others. Multiple Thread Groups within the same Test Plan simply simulate groups of concurrent, individual connections to your server application. The Control Panel allows us to configure each Thread Group to have its own set of specific "behaviours".

Action to be taken after a Sampler error
If any error is recorded in any Sample as the test runs, you may have the test either: Continue to the next element in the test, Stop Thread to stop the current thread, or Stop Test to stop completely, in case you want to inspect the error before continuing the run.

Number of Threads
Simulates the number of user(s) or connection(s) to your server application.

Ramp-Up Period
Defines how long it will take JMeter to get all threads running. For example, if there are 10 threads and a ramp-up period of 60 seconds, then each successive thread will be delayed by 6 seconds; in 60 seconds, all threads will be up and running. The best policy is to make your ramp-up period long enough to avoid a large workload spike as the test begins, but short enough that the last thread starts running before the first one finishes. You may initially set your ramp-up period equal to the number of threads, and adjust from there.
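As a rough sketch of the arithmetic above (a simplified model of how the starts are spaced, not JMeter's actual scheduling code):

```python
def ramp_up_delays(num_threads, ramp_up_seconds):
    """Start offset (in seconds) of each thread, assuming the
    thread starts are spaced evenly across the ramp-up period."""
    delay = ramp_up_seconds / num_threads
    return [round(i * delay, 2) for i in range(num_threads)]

# 10 threads over a 60-second ramp-up: one thread every 6 seconds.
print(ramp_up_delays(10, 60))
# [0.0, 6.0, 12.0, 18.0, 24.0, 30.0, 36.0, 42.0, 48.0, 54.0]

# 5 threads over 10 seconds (the Thread Group example): every 2 seconds.
print(ramp_up_delays(5, 10))
# [0.0, 2.0, 4.0, 6.0, 8.0]
```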

Loop Count
Defines the number of times to execute the test. By default, the test is executed once, but you can adjust this as needed. Checking the Forever checkbox causes the test to run repeatedly until stopped manually.

Scheduler Checkbox
Once selected, the Scheduler Configuration section will appear at the bottom of the control panel.

Scheduler Configuration
This feature is available in version 1.9 and later, and lets you set the start and end times of the test run. Once you start the test, it will not execute any elements until the start time is reached. After each execution cycle, the iteration continues until the loop count limit, unless the end time is reached, in which case the run is stopped. The startup delay field makes JMeter wait some time before a thread is started, and the duration field lets you define the duration of the whole test. The former overrides the start time, while the latter overrides the end time.

Feature Vs Functionality

Feature and Functionality - these two terms are commonly asked about in interviews and, yes, they sound similar. However, there is a big difference between them. Below are a few definitions describing both terms:

A Feature refers to what something can do, whereas Functionality refers to how well something works.

A Feature is a sub-system or facility that is included within a larger system, whereas a Function is an action that can be performed within the system.

"Any Functionality is enabled through a Feature."

For instance, User Administration is a feature offered in Windows. Add User, Grant Privilege to User, Delete User, List Users, etc. are Functions enabled by the User Administration feature.

Release Cycle and Build Process

In every company, there are different phases that describe how a product is prepared, developed, tested, and delivered. This whole process is referred to as the Release Cycle.

Below is a common (or rather, a categorized) flow showing a Release Cycle:

1. Requirement Analysis ( Testing and Production Management )
2. Feature Design Discussion ( Development and Testing)
3. Feature Development Complete
4. Testing: Test Case execution ( every Test Case should be tested with multiple test data )
5. Bug Reporting and Regression Testing.
6. Code Freeze ( No more changes are required in the code)
7. UAT (User Acceptance Testing)
8. Release to production with good quality.

Database Testing

Database Testing includes verifying stored procedures, table indexes, exceptions, schemas, and compatibility.

Different types of Database Testing:

Structural testing
Functional testing
Boundary testing
Stress Testing

Few Sample Scenarios:

- Creating a user account from the GUI - how would you ensure the details are stored in the table correctly?

- Executing stored procedures under different conditions, both valid and invalid.

- Varying data definitions - the data type and length for a particular attribute may vary across tables even though the semantic definitions are the same. Example: an account number declared as Number(9) in one table and as varchar2(11) in another.

- Varying data codes and values - the representation of the same attribute may vary within and across tables. Example: Yes or No may be represented as "Y", "y", "N", "n", "1", "0".

- Misuse of integrity constraints - when referential integrity constraints are misused, foreign key values may be left "dangling". Example: an employee record is deleted but its dependent records are not.

- Nulls - nulls may be ignored when joining tables or searching on the column.

- Inaccessible data - data inaccessible due to a missing or redundant unique identifier value. Example: uniqueness not enforced.

- Incorrect data values - data that is misspelled or inaccurately recorded. Example: "Indra Nagar" recorded as "Indra ngr".

- Inappropriate use of views - data is updated incorrectly through views. Example: data is properly fetched from the database, but the first or last record is not displayed.
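The "dangling foreign key" scenario above boils down to a LEFT JOIN query. A minimal sketch using Python's built-in sqlite3 module (the table and column names are invented for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE employee (emp_id INTEGER PRIMARY KEY, name TEXT)")
cur.execute("CREATE TABLE payroll (pay_id INTEGER PRIMARY KEY, emp_id INTEGER)")
cur.executemany("INSERT INTO employee VALUES (?, ?)", [(1, "Asha"), (2, "Ravi")])
# emp_id 3 has no matching employee record: a dangling reference.
cur.executemany("INSERT INTO payroll VALUES (?, ?)", [(10, 1), (11, 2), (12, 3)])

# Find child rows whose foreign key points at no parent row.
cur.execute("""
    SELECT p.pay_id, p.emp_id
    FROM payroll p
    LEFT JOIN employee e ON p.emp_id = e.emp_id
    WHERE e.emp_id IS NULL
""")
dangling = cur.fetchall()
print(dangling)  # [(12, 3)]
```

A database test for this scenario would assert that the query returns no rows.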


SEI = 'Software Engineering Institute' at Carnegie-Mellon University; initiated by the U.S. Defense Department to help improve software development processes.

CMM = 'Capability Maturity Model', now called the CMMI ('Capability Maturity Model Integration'), developed by the SEI. It's a model of 5 levels of process 'maturity' that determine effectiveness in delivering quality software. It is geared to large organizations such as large U.S. Defense Department contractors. However, many of the QA processes involved are appropriate to any organization, and if reasonably applied can be helpful. Organizations can receive CMMI ratings by undergoing assessments by qualified auditors.

* Level 1 - characterized by chaos, periodic panics, and heroic efforts required by individuals to successfully complete projects. Few if any processes in place; successes may not be repeatable.

* Level 2 - software project tracking, requirements management, realistic planning, and configuration management processes are in place; successful practices can be repeated.

* Level 3 - standard software development and maintenance processes are integrated throughout an organization; a Software Engineering Process Group is in place to oversee software processes, and training programs are used to ensure understanding and compliance.

* Level 4 - metrics are used to track productivity, processes, and products. Project performance is predictable, and quality is consistently high.

* Level 5 - the focus is on continuous process improvement. The impact of new processes and technologies can be predicted and effectively implemented when required.

ISO = 'International Organisation for Standardization' - The ISO 9001:2000 standard (which replaces the previous standard of 1994) concerns quality systems that are assessed by outside auditors, and it applies to many kinds of production and manufacturing organizations, not just software. It covers documentation, design, development, production, testing, installation, servicing, and other processes. The full set of standards consists of: (a) Q9001-2000 - Quality Management Systems: Requirements; (b) Q9000-2000 - Quality Management Systems: Fundamentals and Vocabulary; (c) Q9004-2000 - Quality Management Systems: Guidelines for Performance Improvements. To be ISO 9001 certified, a third-party auditor assesses an organization, and certification is typically good for about 3 years, after which a complete reassessment is required. Note that ISO certification does not necessarily indicate quality products - it indicates only that documented processes are followed. See the ISO web site for the latest information. In the U.S. the standards can be purchased via the ASQ web site.

IEEE = 'Institute of Electrical and Electronics Engineers' - among other things, creates standards such as 'IEEE Standard for Software Test Documentation' (IEEE/ANSI Standard 829), 'IEEE Standard of Software Unit Testing (IEEE/ANSI Standard 1008), 'IEEE Standard for Software Quality Assurance Plans' (IEEE/ANSI Standard 730), and others.

ANSI = 'American National Standards Institute', the primary industrial standards body in the U.S.; publishes some software-related standards in conjunction with the IEEE and ASQ (American Society for Quality).

Quality Planning , Quality Assurance & Quality Control

Quality Planning

Quality Planning is the most important step in Software Quality Management. Proper planning ensures that the remaining Quality processes make sense and achieve the desired results. The starting point for the Planning process is the standards followed by the Organization. This is expressed in the Quality Policy and Documentation defining the Organization-wide standards. Sometimes additional industry standards relevant to the Software Project may be referred to as needed. Using these as inputs the Standards for the specific project are decided. The Scope of the effort is also clearly defined. The inputs for the Planning are as summarized as follows:

a. Company’s Quality Policy
b. Organization Standards
c. Relevant Industry Standards
d. Regulations
e. Scope of Work
f. Project Requirements

Quality Assurance

(1) The planned systematic activities necessary to ensure that a component, module, or system conforms to established technical requirements.

(2) All actions that are taken to ensure that a development organization delivers products that meet performance requirements and adhere to standards and procedures.

(3) The policy, procedures, and systematic actions established in an enterprise for the purpose of providing and maintaining some degree of confidence in data integrity and accuracy throughout the life cycle of the data, which includes input, update, manipulation, and output.

(4) The actions, planned and performed, to provide confidence that all systems and components that influence the quality of the product are working as expected individually and collectively.

Quality Control

The Quality Control processes use various tools to study the work done. If the work is found unsatisfactory, it may be sent back to the development team for fixes. Changes to the development process may be made if necessary.

If the work meets the defined standards, it is accepted and released to the clients.

Boundary value analysis vs. Equivalence partitioning

Boundary value analysis and equivalence partitioning are both test case design strategies in black box testing.

Equivalence Partitioning:

In this method, the input domain data is divided into different equivalence data classes. This method is typically used to reduce the total number of test cases to a finite set of testable test cases while still covering maximum requirements.

In short, it is the process of taking all possible test cases and placing them into classes. One test value is picked from each class while testing.

E.g.: if you are testing an input box accepting numbers from 1 to 1000, there is no use in writing a thousand test cases for all 1000 valid input numbers, plus other test cases for invalid data.

Using the equivalence partitioning method, the above test cases can be divided into three sets of input data called classes. Each test case is a representative of its respective class.

So in the above example we can divide our test cases into three equivalence classes, covering some valid and invalid inputs.

Test cases for input box accepting numbers between 1 and 1000 using Equivalence Partitioning:

1) One input data class with all valid inputs. Pick a single value from the range 1 to 1000 as a valid test case. If you select other values between 1 and 1000, the result is going to be the same, so one test case for valid input data should be sufficient.

2) An input data class with all values below the lower limit, i.e. any value below 1, as an invalid input data test case.

3) Input data with any value greater than 1000, to represent the third invalid input class.

So using equivalence partitioning you have categorized all possible test cases into three classes. Test cases with other values from any class should give you the same result.

We have selected one representative from each input class to design our test cases. Test case values are selected so that the largest number of attributes of the equivalence class can be exercised.

Equivalence partitioning uses the fewest test cases to cover the maximum requirements.
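The three equivalence classes above can be written out as a small check; `accepts` below is a hypothetical stand-in for the input box under test:

```python
def accepts(value):
    """Hypothetical input box: accepts integers from 1 to 1000."""
    return 1 <= value <= 1000

# One representative per equivalence class.
valid_representative = 500   # class 1: values 1..1000    -> accepted
below_range = -5             # class 2: values below 1    -> rejected
above_range = 1500           # class 3: values above 1000 -> rejected

assert accepts(valid_representative) is True
assert accepts(below_range) is False
assert accepts(above_range) is False
print("all three equivalence classes behave as expected")
```

Any other value from the same class (say 7 or 950 instead of 500) should produce the same result, which is why one representative per class is enough.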

Boundary value analysis:

It’s widely recognized that input values at the extreme ends of the input domain cause more errors in the system; more application errors occur at the boundaries of the input domain. The boundary value analysis testing technique is used to identify errors at the boundaries rather than those that exist in the center of the input domain.

Boundary value analysis is the next step after equivalence partitioning for designing test cases: test cases are selected at the edges of the equivalence classes.

Test cases for input box accepting numbers between 1 and 1000 using Boundary value analysis:

1) Test cases with test data exactly as the input boundaries of input domain i.e. values 1 and 1000 in our case.

2) Test data with values just below the extreme edges of input domains i.e. values 0 and 999.

3) Test data with values just above the extreme edges of input domain i.e. values 2 and 1001.

Boundary value analysis is often considered part of stress and negative testing.

The values n, n+1, and n-1 can be used to calculate boundary values.
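The six boundary values listed above follow mechanically from applying the n-1, n, n+1 rule to each boundary. A small sketch, again with a hypothetical 1-1000 input box:

```python
def boundary_values(low, high):
    """Return n-1, n, n+1 for each boundary n of the range [low, high]."""
    return [low - 1, low, low + 1, high - 1, high, high + 1]

def accepts(value):
    """Hypothetical input box: accepts integers from 1 to 1000."""
    return 1 <= value <= 1000

for v in boundary_values(1, 1000):
    print(v, "accepted" if accepts(v) else "rejected")
# 0 and 1001 should be rejected; 1, 2, 999, and 1000 accepted.
```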

Note: there is no hard-and-fast rule to test only one value from each equivalence class you created for the input domains. You can select multiple valid and invalid values from each equivalence class according to your needs and prior judgment.

E.g. if you placed the input values 1 to 1000 in the valid data equivalence class, you can select test case values like 1, 11, 100, 950, etc. The same applies to the other test cases with invalid data classes.

The main difference between EP and BVA is that EP determines the number of test cases to be generated for a given scenario, whereas BVA determines the effectiveness of those generated test cases.

Testable & Non-Testable Requirements

In many interviews, this question is asked frequently, so I am writing the answer that I believe is correct.

Testable Requirements: A requirement which is unambiguous and clearly specifies the behavior.

Non-Testable Requirements: A requirement which is open to interpretation and does not specify the exact behavior of the software.

Examples of Testable & Non-Testable Requirements:

1. "All users are allowed to post only 5 questions per day in this forum" clearly specifies the limit allowed, whereas "A user's reply to a question is made visible to all board members as soon as possible" does not state where/how it will be made visible and gives "as soon as possible" instead of a measurable value.

2. Consider an application "X" with modules A, B, and C that make up the full application. All the requirements are mentioned in the design documents. When developers start working on module A and a build of A becomes available for testing, module A is testable but modules B and C are not; so, based on the development plan, the QA analyst will prepare a Test Strategy document giving clear details on which requirements need to be tested and which should not.

3. "The user interface must look modern and attractive", "The application must be fast", "The user interface must be very responsive", and so on.

Performance requirements, the way many people write them, are untestable (think "performance shall be fast", "shall be Google speed", "easy to use", etc.). These can be fixed by adding detailed fit criteria to clarify how you will test them.

White Box Testing

White box testing is a security testing method that can be used to validate whether code implementation follows intended design, to validate implemented security functionality, and to uncover exploitable vulnerabilities.

The purpose of any security testing method is to ensure the robustness of a system in the face of malicious attacks or regular software failures. White box testing is performed based on knowledge of how the system is implemented. It includes analyzing data flow, control flow, information flow, coding practices, and exception and error handling within the system, to test both intended and unintended software behavior.

White box testing requires access to the source code. Though white box testing can be performed any time in the life cycle after the code is developed, it is a good practice to perform white box testing during the unit testing phase.

White box testing requires knowing what makes software secure or insecure, how to think like an attacker, and how to use different testing tools and techniques. The first step in white box testing is to comprehend and analyze the available design documentation, source code, and other relevant development artifacts, so knowing what makes software secure is a fundamental requirement. Second, to create tests that exploit software, a tester must think like an attacker. Third, to perform testing effectively, testers need to know the different tools and techniques available for white box testing. These three requirements do not work in isolation, but together.


Suppose I have an application which shows data in a pie chart, and it calculates the pie chart angle wrongly.
A pie chart angle can be calculated as: Size of Angle = (value * 360) / total value.
So my application calculates the value wrongly - say, instead of multiplying by 360, it multiplies by 180.
Another case: even when the angle is calculated with the correct algorithm, the pie chart shows rounded values - if an angle is 103.56 it will show 104. This again creates a problem, because if it rounds the value for every angle, the complete total may exceed 360, perhaps reaching 365 or 370.
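Both defects described above are easy to demonstrate in a few lines (a sketch, not the actual application's code):

```python
def pie_angles(values, multiplier=360):
    """Angle of each slice: (value * multiplier) / total."""
    total = sum(values)
    return [(v * multiplier) / total for v in values]

values = [10] * 7  # seven equal slices

# Bug 1: wrong multiplier (180 instead of 360) halves every angle.
assert abs(sum(pie_angles(values, multiplier=180)) - 180) < 1e-9  # should sum to 360

# Bug 2: rounding each angle independently makes the total drift.
rounded = [round(a) for a in pie_angles(values)]  # each 360/7 ≈ 51.43 -> 51
print(sum(rounded))  # 357, not 360
```

With seven equal slices the independent rounding loses 3 degrees; other data sets can push the total above 360 instead.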

Black Box Testing

Black Box Testing is not a single type of testing; rather, it is a testing strategy which does not need any knowledge of internal design or code. As the name "black box" suggests, no knowledge of internal logic or code structure is required. The types of testing under this strategy are based entirely on testing the requirements and functionality of the work product/software application. Black box testing is sometimes also called "Opaque Testing", "Functional/Behavioral Testing", or "Closed Box Testing".

The basis of the black box testing strategy lies in selecting appropriate data as per the functionality and testing it against the functional specifications, in order to check for normal and abnormal behavior of the system. Nowadays, it is becoming common to route the testing work to a third party, as the developer of the system knows too much about its internal logic and coding, which makes the developer unfit to test the application.

In order to implement the black box testing strategy, the tester needs to be thorough with the requirement specifications of the system and, as a user, should know how the system should behave in response to a particular action.

Advantages of Black Box testing include:
•The test is unbiased because the designer and the tester are independent of each other.
•The tester does not need knowledge of any specific programming languages.
•The test is done from the point of view of the user, not the designer.
•Test cases can be designed as soon as the specifications are complete.

Disadvantages of Black Box testing include:
•The test can be redundant if the software designer has already run a test case.
•The test cases are difficult to design.
•Testing every possible input stream is unrealistic because it would take an inordinate amount of time; therefore, many program paths will go untested.

Gray Box Testing

Gray box testing is a software testing technique that uses a combination of black box testing and white box testing. Gray box testing is not black box testing, because the tester does know some of the internal workings of the software under test. In gray box testing, the tester applies a limited number of test cases to the internal workings of the software under test. In the remaining part of the gray box testing, one takes a black box approach in applying inputs to the software under test and observing the outputs.

Gray box testing is a powerful idea. The concept is simple; if one knows something about how the product works on the inside, one can test it better, even from the outside. Gray box testing is not to be confused with white box testing; i.e. a testing approach that attempts to cover the internals of the product in detail. Gray box testing is a test strategy based partly on internals. The testing approach is known as gray box testing, when one does have some knowledge, but not the full knowledge of the internals of the product one is testing.

In gray box testing, just as in black box testing, you test from the outside of a product, but you make better-informed testing choices because you know how the underlying software components operate and interact.

User Acceptance Testing

In this type of testing, the software is handed over to the user in order to find out if the software meets the user expectations and works as it is expected to.

User Acceptance Testing is often the final step before rolling out the application. Usually the end users who will be using the application test it before 'accepting' it. This type of testing gives the end users confidence that the application being delivered meets their requirements. It also helps nail down bugs related to the usability of the application. User acceptance testing is a testing methodology in which the clients/end users are involved in testing the product to validate it against their requirements. It is performed at the client's location or at the developer's site.

For industries such as medicine or aviation, contract and regulatory compliance testing and operational acceptance testing are also carried out as part of user acceptance testing.

UAT is context dependent: UAT plans are prepared based on the requirements, it is not mandatory to execute every kind of user acceptance test, and the effort is often coordinated and contributed to by the testing team.

Alpha Testing

In this type of testing, the users are invited at the development center where they use the application and the developers note every particular input or action carried out by the user. Any type of abnormal behavior of the system is noted and rectified by the developers.

Test Case

Test Case: A set of input values, execution preconditions, expected results, and execution postconditions, developed for a particular objective or test condition, such as to exercise a particular program path or to verify compliance with a specific requirement. "Test Case" is a commonly used term for a specific test; it is usually the smallest unit of testing. A test case will consist of information such as the requirements being tested, test steps, verification steps, prerequisites, outputs, test environment, etc.

Test Case Format:

Test Case ID | Test Case (Objective) | Steps | Input Data | Expected Output

Below is an example of writing test cases:

Test Cases for Registration on a Website
  • Check and verify that the server opens some other page after the user types the URL (
  • Check and verify that the server does not open the "New user registration" form on clicking the "new user" link
  • Check and verify that the server opens an invalid page on clicking the "new user" link
  • Check and verify that the server opens the "congratulation page" even after mandatory fields are left blank
  • Check and verify that the server accepts credentials (login id) already in use
  • Check and verify that the server accepts the password and confirm-password fields even when they contain different values
  • Check and verify that the server accepts a password that is not in the range of 6-32 characters
  • Check and verify that the server accepts a wrong captcha code
  • Check and verify that the server does not change the captcha code even after "Try new characters" is clicked
  • Check and verify that the server does not respond to "Need Audio Assistance" even after clicking it
  • Check and verify that the server does not create a new account even after the form is completed and "Create My Account" is clicked
  • Check and verify that the server shows the "Congratulation page" even though the registration form is not completed
  • Check and verify that the server does not open the user's home page even though the user clicks "continue" on the "Congratulation Page"
  • Check and verify that the server opens an invalid page when the user clicks "continue" on the "Congratulation Page"
  • Check and verify that the server opens an error page when the user clicks "continue" on the "Congratulation Page"
  • Check and verify that the server does not sign the user off even after the user clicks the "sign out" link
  • Check and verify that the server opens the user's home page on clicking the back button even though the user is signed out
  • Check that the server cancels the registration process after the "cancel" button is clicked
  • Check that the server opens the "Password recovery" page after "Forgot password" is selected
  • Check that the server opens the XXXXX home page after the URL is typed
  • Check that the server opens the new user registration page when the user clicks the "new user" link
  • Check that the server creates the user account when the user fills in the form and clicks the "Create My Account" button
  • Check that the server opens the user's home page after the user clicks "continue" on the "Congratulation Page"
  • Check that the server signs the user off after "sign out" is clicked
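For illustration, here is one of the positive checks above written out as a record matching the "Test Case ID | Objective | Steps | Input Data | Expected Output" format given earlier; the ID and wording are invented:

```python
from dataclasses import dataclass, field

@dataclass
class TestCase:
    test_case_id: str
    objective: str
    steps: list
    input_data: dict = field(default_factory=dict)
    expected_output: str = ""

tc = TestCase(
    test_case_id="TC_REG_001",  # illustrative ID
    objective="New user registration page opens from the 'new user' link",
    steps=[
        "Open the site home page",
        "Click the 'new user' link",
    ],
    input_data={},  # pure navigation check: no input data needed
    expected_output="Server opens the 'New user registration' form",
)
print(tc.test_case_id, "-", tc.objective)
```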

Bug Report

Hi All,

I found this good bug report example on Google. Have a look.

Disposable Cup Bug:

Summary – The disposable cup leaks when a hot drink like coffee or tea exceeding 95 degrees centigrade is allowed to remain in the cup for about 15 minutes.

Cup type: Hard Paper cup
Manufacturer: XYZ Corporation
Version: 1.0
Customer: ABC Corporation

Test Content: Hot Coffee/Tea (> 95 degree centigrade)
Test Case mapping: TC_43_HT_LQ

Steps to Reproduce

1) Take 3-4 disposable cups for testing.
2) Place them on a table and ensure the room temperature is below 40 degrees centigrade.
3) Prepare coffee and/or tea and ensure the temperature of the tea or coffee is above 95 degrees centigrade.
4) Pour the prepared hot coffee/tea into the cups on the table, ensuring each cup holds a different volume (for example, the 1st cup is half full, the 2nd cup is 3/4 full).
5) Repeat the above steps with a different sample/batch of cups.
6) Repeat the above steps with other hot liquids such as hot water and hot malt drinks.

Observed Result

1) After around 10 minutes, all cups under test, holding different volumes of hot liquid, start to leak. (The leak is from the bottom of the cup.)
2) The hot liquid leaks entirely out of the cup within one minute when the cup is filled to the brim.

Expected Result

1) The cup should be able to hold hot liquids (90-100 degrees centigrade).
2) As per specification, the cup should not leak when a hot liquid is allowed to remain in it for an hour.

Reproducibility – 100 %

Suggested temporary fix

Experiments found that if two cups are used one inside the other, the hot liquid is held and does not leak out for an hour.

Recommended permanent fix

Consider revisiting the material/thickness of the paper and the glue.

Risk of the bug

The customer will be annoyed to see that the tea/coffee has leaked and will be forced to clean the surface where the cup was placed. Important documents, clothing, money, or anything else nearby may get spoilt because of the leak.

Severity: 1
Priority: 1

Limitation of testing

1) The above scenario was reproduced using hot coffee and tea only.
2) It has not been experimented with hot water or any other liquid.
3) No testing was carried out to find out whether the glue makes the coffee/tea toxic.

JIRA Vs Bugzilla

Here's a copy of an answer I found on the net comparing JIRA and Bugzilla:
Things that Bugzilla does and JIRA does not:
1. Bugzilla is very good at performance with large bug databases; large public instances hold ~400,000 records. I'm not sure what hardware it runs on, but you will probably need a lot more for the same load on JIRA. It's just Perl vs. J2EE. But if you have fewer than 50,000 records, don't worry.
2. Flags/requests. If you use this Bugzilla feature, you won't find anything similar in JIRA. But probably there's a plug-in.
3. Authorization is different. Bugzilla has a mind-breaking feature for grouping users & issues; JIRA has something simpler (and more convenient, I think). But if you have defined a lot of security groups in Bugzilla, it may not be easy to transfer their business logic to JIRA.
4. Search in JIRA is far less powerful than Bugzilla's advanced search.
5. If your Bugzilla is patched or integrated with other systems, take a close look at that.
6. Bugzilla is free and open-source, if that does matter.
7. Bugzilla's security theoretically should be better, because of (6).
But there are also reasons to move to JIRA:
1. Web-based user interface is better.
2. JIRA supports custom fields of many types.
(LpSolit: Since Bugzilla 3.0, custom fields are also supported. In Bugzilla 3.4, it supports not less than 6 different types.)
3. JIRA (enterprise edition) supports a number of customizable workflow schemes (LpSolit: Bugzilla 3.2 let administrators customize the workflow.)
4. JIRA issues may be linked with custom link types of user-defined semantics.
5. JIRA has open architecture and a plug-in API and lots of plug-ins. If JIRA doesn't do something, you might find a plug-in for that.
6. JIRA has very good commercial support.
(Someone: The commercial Bugzilla support is excellent, too)

Non-Functional Testing

Non-Functional Testing : Testing the application against the client's performance and other non-functional requirements. Non-functional testing is done based on the requirements and test scenarios defined by the client.

1) Load Testing
2) Performance Testing
3) Compatibility Testing
4) Installation Testing
5) Usability Testing
6) User Interface Testing
7) Security Testing

Functional Testing

Functional Testing : Testing the application against business requirements. Functional testing is done using the functional specifications provided by the client, or the design specifications, such as use cases, provided by the design team.

1) Sanity Testing
2) Smoke Testing
3) Component Testing
4) Integration Testing
5) Regression Testing
6) URL Testing
7) System Testing
8) Globalization and Localization Testing
9) White Box and Black Box Testing

Usability Testing

" Testing to determine the extent to which the software product is understood, easy to learn, easy to operate and attractive to the users under specified conditions. "

In usability testing, a basic model or prototype of the product is put in front of evaluators who are representative of typical end-users. They are then set a number of standard tasks which they must complete using the product. Any difficulties or obstructions they encounter are noted by a host or observers, and design changes are made to the product to correct them. The process is then repeated with the new design to evaluate those changes.

Few Useful Selenium IDE Commands

assignId("Locator","String") Temporarily sets the "id" attribute of the specified element.
captureScreenshot("File name") Captures a PNG screenshot to the specified file.
check("Locator") Checks a toggle-button (checkbox/radio).
click("Locator") Clicks on a link, button, checkbox or radio button.
clickAt("Locator","Coordinate String") Clicks on a link, button, checkbox or radio button at the given coordinates.
close() Simulates the user clicking the "close" button in the title bar of a popup window or tab.
doubleClick("Locator") Double-clicks on a link, button, checkbox or radio button.
doubleClickAt("Locator","Coordinate String") Double-clicks on a link, button, checkbox or radio button at the given coordinates.
getAlert() Retrieves the message of a JavaScript alert generated during the previous action, or fails if there were no alerts.
getAllButtons() Returns the IDs of all buttons on the page.
getAllFields() Returns the IDs of all input fields on the page.
getAllLinks() Returns the IDs of all links on the page.
getAllWindowIds() Returns the IDs of all windows that the browser knows about.
getAllWindowNames() Returns the names of all windows that the browser knows about.
getAllWindowTitles() Returns the titles of all windows that the browser knows about.
getAttribute("Attribute Locator") Gets the value of an element attribute.
getBodyText() Gets the entire text of the page.
getConfirmation() Retrieves the message of a JavaScript confirmation dialog generated during the previous action.
getCookie() Returns all cookies of the current page under test.
getElementHeight("Locator") Retrieves the height of an element.
getElementPositionLeft("Locator") Retrieves the horizontal position of an element.
getElementPositionTop("Locator") Retrieves the vertical position of an element.
getElementWidth("Locator") Retrieves the width of an element.
getEval("JS Expression") Gets the result of evaluating the specified JavaScript snippet.
getLocation() Gets the absolute URL of the current page.
getMouseSpeed() Returns the number of pixels between "mousemove" events during dragAndDrop commands (default=10).
getPrompt() Retrieves the message of a JavaScript question prompt dialog generated during the previous action.
getSelectedId("Select Locator") Gets the option element ID for the selected option in the specified select element.
getSelectedIds("Select Locator") Gets all option element IDs for selected options in the specified select or multi-select element.
getSelectedIndex("Select Locator") Gets the option index (option number, starting at 0) for the selected option in the specified select element.
getSelectedIndexes("Select Locator") Gets all option indexes (option numbers, starting at 0) for selected options in the specified select or multi-select element.
getSelectedLabel("Select Locator") Gets the option label (visible text) for the selected option in the specified select element.
getSelectedLabels("Select Locator") Gets all option labels (visible text) for selected options in the specified select or multi-select element.
getSelectedValue("Select Locator") Gets the option value (value attribute) for the selected option in the specified select element.
getSelectedValues("Select Locator") Gets all option values (value attributes) for selected options in the specified select or multi-select element.
getSelectOptions("Select Locator") Gets all option labels in the specified select drop-down.
getSpeed() Gets the execution speed (i.e., the millisecond length of the delay following each Selenium operation).
getTable("Table Cell Address") Gets the text from a cell of a table.
getText("Locator") Gets the text of an element.
getTitle() Gets the title of the current page.
getValue("Locator") Gets the (whitespace-trimmed) value of an input field (or anything else with a value parameter).
getWhetherThisFrameMatchFrameExpression("Current Frame","Target") Determines whether the current frame plus the target identify the frame containing this running code.
getWhetherThisWindowMatchWindowExpression("Current Window","Target") Determines whether the current window string plus the target identify the window containing this running code.
goBack() Simulates the user clicking the "back" button on their browser.
highlight("Locator") Briefly changes the background colour of the specified element to yellow.
isAlertPresent() Has an alert occurred?
isChecked("Locator") Gets whether a toggle-button (checkbox/radio) is checked.
isConfirmationPresent() Has confirm() been called?
isEditable("Locator") Determines whether the specified input element is editable, i.e., hasn't been disabled.
isElementPresent("Locator") Verifies that the specified element is somewhere on the page.
isPromptPresent() Has a prompt occurred?
isSomethingSelected("Locator") Determines whether some option in a drop-down menu is selected.
isTextPresent("Pattern") Verifies that the specified text pattern appears somewhere on the rendered page shown to the user.
isVisible("Locator") Determines if the specified element is visible.
open("URL") Opens a URL in the test frame.
openWindow("URL","WindowID") Opens a popup window (if a window with that ID isn't already open).
refresh() Simulates the user clicking the "Refresh" button on their browser.
removeAllSelections("Locator") Unselects all of the selected options in a multi-select element.
removeSelection("Locator","Option Locator") Removes a selection from the set of selected options in a multi-select element using an option locator.
select("Select Locator","Option Locator") Selects an option from a drop-down using an option locator.
selectFrame("Locator") Selects a frame within the current window.
selectWindow("WindowID") Selects a popup window; once a popup window has been selected, all commands go to that window.
setSpeed("Value") Sets the execution speed (i.e., the millisecond length of the delay that will follow each Selenium operation).
setTimeout("Time") Specifies the amount of time that Selenium will wait for actions to complete.
start() Launches the browser with a new Selenium session.
stop() Ends the test session, killing the browser.
submit("Form Locator") Submits the specified form.
type("Locator","Value") Sets the value of an input field, as though you typed it in.
unCheck("Locator") Unchecks a toggle-button (checkbox/radio).
waitForCondition("JavaScript","Timeout") Runs the specified JavaScript snippet repeatedly until it evaluates to "true".
waitForFrameToLoad("Frame Address","Timeout") Waits for a new frame to load.
waitForPageToLoad("Timeout") Waits for a new page to load.
waitForPopUp("WindowID","Timeout") Waits for a popup window to appear and load.
windowFocus() Gives focus to the currently selected window.
windowMaximize() Resizes the currently selected window to take up the entire screen.
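These commands can be strung together into a Selenium IDE test case, which the IDE stores as a plain HTML table with three columns: command, target, value. A minimal sketch of a login test follows; the page path, locators, and expected text are hypothetical, not taken from any real application:

```html
<table>
  <!-- command                 target              value -->
  <tr><td>open</td>              <td>/login</td>        <td></td></tr>
  <tr><td>type</td>              <td>id=username</td>   <td>testuser</td></tr>
  <tr><td>type</td>              <td>id=password</td>   <td>secret123</td></tr>
  <tr><td>click</td>             <td>id=submit</td>     <td></td></tr>
  <tr><td>waitForPageToLoad</td> <td>30000</td>         <td></td></tr>
  <tr><td>verifyTextPresent</td> <td>Welcome</td>       <td></td></tr>
</table>
```

Note that commands taking a single argument (such as waitForPageToLoad's timeout) put it in the target column and leave the value column empty.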

Test Design Techniques

There are mainly 3 techniques for designing the test cases.

Specification-based / Black-box techniques
  • Equivalence partitioning
  • Boundary value analysis
  • Decision table testing
  • State transition testing
  • Use case testing

Structure-based / White-box techniques
  • Statement testing
  • Decision testing
  • Other structure based techniques - Condition coverage & Multi condition coverage

Experience-based techniques
  • Error guessing
  • Exploratory testing
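The two most common black-box techniques above can be made concrete with the password rule from the registration test cases earlier (6-32 characters). The sketch below uses a stand-in validator, not any real server code, to show how equivalence partitioning picks one representative per class and boundary value analysis tests on and around each boundary:

```python
def password_length_valid(password):
    """Stand-in for the server-side rule: valid length is 6..32 inclusive."""
    return 6 <= len(password) <= 32

# Equivalence partitioning: one representative per class.
assert not password_length_valid("a" * 3)    # invalid partition: too short
assert password_length_valid("a" * 10)       # valid partition
assert not password_length_valid("a" * 50)   # invalid partition: too long

# Boundary value analysis: test on and just around each boundary.
expected = {5: False, 6: True, 7: True, 31: True, 32: True, 33: False}
for length, ok in expected.items():
    assert password_length_valid("a" * length) == ok

print("all partitions and boundaries behave as specified")
```

With three partitions and six boundary values, nine test cases cover the rule instead of the thousands of possible lengths.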

Severity and Priority

Severity indicates how severe a particular defect is in the application, i.e., how badly it affects the application's functionality. Typical severity levels, from highest to lowest, are defined below:

Severity 1

1. Build fails.

2. The complete application is blocked and QA cannot proceed further.

Severity 2

1. Some functionality is blocked but a workaround is available.

2. An issue in functionality that is important from a business or demo point of view.

Severity 3

1. A functionality issue, but QA can continue with a workaround.
2. UI issues on every browser.
3. Logo missing.
4. Spelling mistakes.

Severity 4

1. UI issues on the least-used browsers.
2. Textual changes.

Severity 5

1. Errors that do not prevent or hinder functionality.
2. Suggestions from QA (improvements). If a suggestion is accepted, the priority will change accordingly.

Priority means the importance and urgency of fixing the defect, i.e., which defect should be fixed first and which can be fixed in later versions.

The severity level will be set by the testing team and the priority level by the development team.

The severity and priority levels may vary depending on the company and the defect-tracking tool it uses.

Few Examples:

1. High Severity & Low Priority : For example, an application generates banking reports weekly, monthly, quarterly and yearly by doing some calculations. A fault in calculating the yearly report is a high-severity fault but low priority, because it can be fixed in the next release as a change request.

2. High Severity & High Priority : In the same example, a fault in calculating the weekly report is high severity and high priority, because it will block the functionality of the application within a week. It should be fixed urgently.

3. Low Severity & High Priority : Suppose there is a spelling mistake or content issue on the homepage of a website that gets hundreds of thousands of hits daily. The fault does not affect the website's functionality, but considering the status and popularity of the website in a competitive market, it is a high-priority fault.

4. Low Severity & Low Priority : A spelling mistake on pages that get very few hits throughout the month can be considered low severity and low priority.
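The four combinations above can be captured in a small triage table. The sketch below paraphrases the examples from the text into a lookup; the structure is an illustration, not the format any real defect tracker uses:

```python
# Severity measures impact on functionality; priority measures urgency
# of the fix. The examples paraphrase the four cases in the text.
TRIAGE_EXAMPLES = {
    ("high", "low"):  "Fault in the yearly banking report: big impact, "
                      "but it can wait for the next release.",
    ("high", "high"): "Fault in the weekly banking report: blocks the "
                      "application within a week, fix urgently.",
    ("low", "high"):  "Spelling mistake on the homepage of a heavily "
                      "visited site: cosmetic, but fix immediately.",
    ("low", "low"):   "Spelling mistake on a rarely visited page: "
                      "cosmetic and can wait.",
}

def triage(severity, priority):
    """Look up the canonical example for a severity/priority pair."""
    return TRIAGE_EXAMPLES[(severity, priority)]

assert "yearly" in triage("high", "low")
assert "urgently" in triage("high", "high")
```

The key point the table makes visible: severity and priority are independent axes, so all four quadrants genuinely occur.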

Smoke Testing Vs Sanity Testing

This is one of the questions frequently asked during interviews.

Smoke testing originated in the hardware testing practice of turning on a new piece of hardware for the first time and considering it a success if it does not catch fire and smoke. In the software industry, smoke testing is a shallow and wide approach in which all areas of the application are tested without going too deep into any of them. A sanity test is a narrow regression test that focuses on one or a few areas of functionality; sanity testing is usually narrow and deep.
A smoke test is scripted, either as a written set of tests or as an automated test. A sanity test is usually unscripted.
A smoke test is designed to touch every part of the application in a cursory way; it is shallow and wide. A sanity test is used to determine that a small section of the application still works after a minor change.
Smoke testing is conducted to ensure that the most crucial functions of a program work, without bothering with finer details (such as build verification). Sanity testing is cursory testing performed whenever cursory testing is sufficient to prove that the application is functioning according to specifications; this level of testing is a subset of regression testing.
Smoke testing is a normal health check-up of a build before taking it into in-depth testing. Sanity testing verifies whether the requirements are met, checking all features breadth-first.
Smoke testing, performed after a build is released, checks whether the high-level functionalities work. Sanity testing checks whether the individual functionalities work properly.
Smoke testing covers the major functionality of the application without bothering with finer details. The tester conducts the sanity test to ensure the stability of the application build, i.e., to find out whether the build is stable enough for complete testing.

Verification and Validation

This is one of the questions most frequently asked in interviews, and many interviewees fail to explain the difference. What I understand is mentioned below:

Verification: Have we built the software right? (i.e., does it match the specification).

Validation: Have we built the right software? (i.e., is this what the customer wants).

The terms verification and validation are commonly used interchangeably in the industry; it is also common to see these two terms incorrectly defined. According to the IEEE Standard Glossary of Software Engineering Terminology:

Verification is the process of evaluating a system or component to determine whether the products of a given development phase satisfy the conditions imposed at the start of that phase.

Validation is the process of evaluating a system or component during or at the end of the development process to determine whether it satisfies specified requirements.

Introduction to Bugzilla

It is a Web-based general-purpose bug tracker and testing tool originally developed and used by the Mozilla project, and licensed under the Mozilla Public License. Released as open source software by Netscape Communications in 1998, it has been adopted by a variety of organizations for use as a bug tracking system and occasionally as a front-end to project management software. It is used for both free and open source software and proprietary projects and products.

Components of Bugzilla :-

  • Summary - Describe the bug in 60 characters or fewer. Be pithy, be precise, and be concise. A developer should be able to read the summary and say, "Oh, that's what the bug is about."
  • Platform and OS - These usually have no bearing on the actual bug, but it doesn't hurt to leave them specified (if you don't, they are auto-detected).
  • Component - Try to figure out what the bug is part of.
  • Severity - How severe the bug is.
  • Assign To - this field is filled automatically; don't touch it.
  • CC - this field will add people to a mailing list which notifies users when a bug has been changed.
  • URL - a specific URL for the bug, if any.

Bug Life Cycle

Bug :- A fault in a program, which causes the program to perform in an unintended or unanticipated manner.

In the software development process, a bug has a life cycle. The bug must go through the life cycle to be closed; a specific life cycle ensures that the process is standardized. The bug attains different states during its life cycle, described below.

Description of Various Stages:

1. New:  When the bug is posted for the first time, its state will be “NEW”. This means that the bug is not yet approved.

2. Open: After a tester has posted a bug, the lead of the tester approves that the bug is genuine and he changes the state as “OPEN”.

3. Assign:  Once the lead changes the state as “OPEN”, he assigns the bug to corresponding developer or developer team. The state of the bug now is changed to “ASSIGN”.

4. Test: Once the developer fixes the bug, he has to assign the bug to the testing team for next round of testing. Before he releases the software with bug fixed, he changes the state of bug to “TEST”. It specifies that the bug has been fixed and is released to testing team.

5. Deferred: A bug moved to the deferred state is expected to be fixed in a future release. There are many reasons for moving a bug to this state: the priority of the bug may be low, there may be a lack of time before the release, or the bug may not have a major effect on the software.

6. Rejected: If the developer feels that the bug is not genuine, he rejects the bug. Then the state of the bug is changed to “REJECTED”.

7. Duplicate: If the bug is reported twice, or two bugs describe the same issue, one bug's status is changed to “DUPLICATE”.

8. Verified: Once the bug is fixed and the status is changed to “TEST”, the tester tests the bug. If the bug is not present in the software, he approves that the bug is fixed and changes the status to “VERIFIED”.

9. Reopened: If the bug still exists even after the bug is fixed by the developer, the tester changes the status to “REOPENED”. The bug traverses the life cycle once again.

10. Closed: Once the bug is fixed, it is tested by the tester. If the tester feels that the bug no longer exists in the software, he changes the status of the bug to “CLOSED”. This state means that the bug is fixed, tested and approved.
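The stages above form a state machine, which can be encoded directly. The transition table below is one reading of the text (real trackers such as Bugzilla or JIRA define their own workflows), paired with a validator that checks a bug's history against it:

```python
# The bug life cycle described above, as a state machine: each state
# maps to the set of states a bug may legally move to next.
TRANSITIONS = {
    "NEW":       {"OPEN", "REJECTED", "DUPLICATE", "DEFERRED"},
    "OPEN":      {"ASSIGN"},
    "ASSIGN":    {"TEST", "DEFERRED", "REJECTED", "DUPLICATE"},
    "TEST":      {"VERIFIED", "REOPENED"},
    "DEFERRED":  {"ASSIGN"},
    "REJECTED":  set(),
    "DUPLICATE": set(),
    "VERIFIED":  {"CLOSED"},
    "REOPENED":  {"ASSIGN"},
    "CLOSED":    {"REOPENED"},
}

def walk(states):
    """Validate that a sequence of states follows the life cycle."""
    for current, nxt in zip(states, states[1:]):
        if nxt not in TRANSITIONS[current]:
            raise ValueError(f"illegal transition {current} -> {nxt}")
    return True

# Happy path from the text: posted -> approved -> fixed -> verified -> closed.
assert walk(["NEW", "OPEN", "ASSIGN", "TEST", "VERIFIED", "CLOSED"])
```

A reopened bug simply re-enters the cycle at ASSIGN, so `walk(["NEW", "OPEN", "ASSIGN", "TEST", "REOPENED", "ASSIGN", "TEST", "VERIFIED", "CLOSED"])` is also legal.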

Testing, Quality Assurance and Quality Control

Hi All,

Most people are confused about the difference between Testing, QA and QC. The definitions below show the difference between the three:

Quality Assurance (QA): A set of activities designed to ensure that the development and/or maintenance process is adequate to ensure a system will meet its objectives.

Quality Control (QC): A set of activities designed to evaluate a developed work product. These activities may include audits conducted by the QA team to assess the cost of correcting defects, documentation, etc.

Testing: The process of finding defects by executing a system/software. (Note that the "process of executing a system" includes test planning prior to the execution of the test cases.)

Test Strategy

Test strategy is “How we plan to cover the product so as to develop an adequate assessment of quality.”

A good test strategy is: Specific, Practical, Justified.

The purpose of a test strategy is to clarify the major tasks and challenges of the test project. Test Approach and Test Architecture are other terms commonly used to describe what I'm calling test strategy. It describes what kind of testing needs to be done for a project, e.g., user acceptance testing, functional testing, load testing, performance testing, etc.

Test Plan

A software project test plan is a document that describes the objectives, scope, approach, and focus of a software testing effort. The process of preparing a test plan is a useful way to think through the efforts needed to validate the acceptability of a software product. The completed document will help people outside the test group understand the 'why' and 'how' of product validation. It should be thorough enough to be useful but not so thorough that no one outside the test group will read it. The following are some of the items that might be included in a test plan, depending on the particular project.

Contents of a Test Plan:

  • Title
  • Identification of software including version/release numbers
  • Revision history of document including authors, dates, approvals
  • Table of Contents
  • Purpose of document, intended audience
  • Objective of testing effort
  • Software product overview
  • Relevant related document list, such as requirements, design documents, other test plans, etc.
  • Relevant standards or legal requirements
  • Traceability requirements
  • Relevant naming conventions and identifier conventions
  • Overall software project organization and personnel/contact-info/responsibilities
  • Test organization and personnel/contact-info/responsibilities
  • Assumptions and dependencies
  • Project risk analysis
  • Testing priorities and focus
  • Scope and limitations of testing
  • Test outline - a decomposition of the test approach by test type, feature, functionality, process, system, module, etc. as applicable
  • Test environment - hardware, operating systems, other required software, data configurations, interfaces to other systems
  • Test environment validity analysis - differences between the test and production systems and their impact on test validity.
  • Test environment setup and configuration issues
  • Software migration processes
  • Software CM processes
  • Test data setup requirements
  • Database setup requirements
  • Outline of system-logging/error-logging/other capabilities, and tools such as screen capture software, that will be used to help describe and report bugs
  • Discussion of any specialized software or hardware tools that will be used by testers to help track the cause or source of bugs
  • Test automation - justification and overview
  • Test tools to be used, including versions, patches, etc.
  • Test script/test code maintenance processes and version control
  • Problem tracking and resolution - tools and processes
  • Project test metrics to be used
  • Reporting requirements and testing deliverables
  • Software entrance and exit criteria
  • Initial sanity testing period and criteria
  • Test suspension and restart criteria
  • Personnel allocation
  • Personnel pre-training needs
  • Test site/location
  • Outside test organizations to be utilized and their purpose, responsibilities, deliverables, contact persons, and coordination issues
  • Relevant proprietary, classified, security, and licensing issues.
  • Open issues
  • Appendix - glossary, acronyms, etc.

Jmeter - Test Plan

A Test Plan defines and provides a layout of how and what to test: the web application as well as the client server application. It can be viewed as a container for running tests. It provides a framework in which it will execute a sequence of operations or tools to perform the testing. A test plan includes elements such as thread groups, logic controllers, sample-generating controllers, listeners, timers, assertions, and configuration elements. A test plan must have at least one thread group.

User Defined Variables:
Here you can define static variables that let you factor out values repeated throughout your tests, such as server names, port numbers, etc. For example, if you are testing an application on a particular server, you can define a variable called "server" whose value is that server's hostname. Any occurrence of "${server}" in the test plan will then be replaced with that value.
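Inside the saved .jmx file, a user-defined variable and a reference to it are plain XML. A sketch of the relevant fragments follows; the element names follow JMeter's JMX format, and the hostname is a placeholder rather than a value from the text:

```xml
<!-- Under the Test Plan's "User Defined Variables" arguments: -->
<elementProp name="server" elementType="Argument">
  <stringProp name="Argument.name">server</stringProp>
  <stringProp name="Argument.value">www.example.com</stringProp>
  <stringProp name="Argument.metadata">=</stringProp>
</elementProp>

<!-- Later, an HTTP Request sampler can refer to it: -->
<stringProp name="HTTPSampler.domain">${server}</stringProp>
```

Changing the single Argument.value then retargets every sampler that uses ${server}, which is the point of defining the variable once at the test-plan level.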

Functional Test Mode:
This will cause Jmeter to record the data returned from the server for each sample and write it to the file you have selected in your Listener. You may use the Configuration button on a listener to decide which fields to save. This can be useful if you are doing a small run to ensure that your server is returning the expected results. However, because this option makes Jmeter save the maximum sample information, Jmeter's performance will be reduced.

If you are doing stress-testing, do not select this option, as it will affect your results.

If checked, this feature will save all information, including the full response log data and the default items, which are: time stamp, the data type, the label, the thread name, the response time, message, code, and a success indicator.

Run each Thread Group separately:
If you have two or more Thread Groups in your Test Plan, selecting this will instruct Jmeter to run each serially. Otherwise, Jmeter will run Thread Groups simultaneously or in parallel.

Add directory or jar to class path:
This additive feature allows you to add JAR files or directories in case you have created your own extensions to the Jmeter package. However, you will need to restart Jmeter if you remove any entry. Alternatively, you can copy all the JAR files to Jmeter's lib directory.

Jmeter - GUI

The user interface has two panels. Once Jmeter runs, you will see two elements, Test Plan and WorkBench, as you see in the figure below. A Test Plan describes a series of steps Jmeter will execute once the Test Plan runs, while a WorkBench functions as a temporary workspace to store test elements. Elements in the WorkBench are not saved with the Test Plan, but can be saved independently.

Introduction to Jmeter

Jmeter is a desktop application designed to test and measure the performance and functional behavior of client/server applications, such as web applications or FTP applications. It is by far one of the most widely used open-source, freely distributed testing applications that the Net can offer. It is purely Java-based and is highly extensible through a provided API (Application Programming Interface). Jmeter works by acting as the "client side" of a "client/server" application. It measures response time and other server resources such as CPU load, memory usage, and resource usage. In this respect, Jmeter can be used effectively for functional test automation. In addition, it has tools that support regression testing of similar types of applications. Although it was originally designed for testing web applications, it has been extended to support other test functions. It was first, and still is being, developed as one of the Apache Jakarta projects, which offer a diverse set of open-source Java solutions.

Jmeter was first developed by Stefano Mazzocchi of the Apache Software Foundation. He wrote it primarily to test the performance of Apache JServ, which was later replaced by the Apache Tomcat project. Jmeter has since been developed further and expanded to load-test FTP servers, database servers, and Java servlets and objects. Today it is widely accepted as a performance-testing tool for web applications. Various companies, including AOL, have used Jmeter to load-test their websites, and SharpMind of Germany has used Jmeter for functional and regression testing of its own and its clients' applications.

Business Requirement Specification ( BRS ), Software Requirement Specification ( SRS ), Functional requirement Specification ( FRS )

Business Requirement Specification ( BRS ) : The BRS contains the customer's basic requirements that are to be developed as software, along with the project cost, schedule, and target dates. It typically expresses the broad outcomes the business requires rather than the specific functions the system may perform. Specific design elements are usually outside the scope of this document.

Software Requirement Specification ( SRS ) : The SRS is the implemented form of the BRS. The SRS is often referred to as the parent document of project-management documents such as design specifications, statements of work, software architecture specifications, testing and validation plans, and documentation plans. The basic issues an SRS addresses are:

  • Functionality: what is the software supposed to do?
  • External interfaces: how does the software interact with the user, other hardware, and other system software?
  • Performance: what are the speed of the application, its recovery time, its response time, and the availability of its various functions?
  • Attributes: what are its portability, security, correctness, etc.?
  • Design constraints: OS environments, implementation languages, database integrity, and resource limits.

The SRS contains the functional and non-functional requirements only.

Functional requirement Specification ( FRS ) : The FRS document is a more detailed and descriptive form of the SRS. It contains the technical information and data needed to design the application. The FRS defines what the software's functionality will be and how to implement it.

Software Testing Life Cycle (STLC)

Software Testing Life Cycle (STLC) refers to a comprehensive group of testing-related actions, specifying the details of every action along with the best time to perform it. There is no single standardized testing process across organizations; however, every organization involved in the software development business defines and follows some sort of testing life cycle.

STLC Phases:

1. Planning of Tests:

In this phase a senior person, such as the project manager, plans and identifies all the areas where testing efforts need to be applied, while operating within the boundaries of constraints like resources & budget. It mainly includes:

  • Scope of Testing: Defining the areas to be tested and identifying the features to be covered during testing
  • Identification of Approaches for Testing: Identification of approaches, including the types of testing
  • Defining Risks: Identification of the different types of risks involved with the decided plan
  • Identification of Resources: Identification of resources like manpower, materials & machines which need to be deployed during testing
  • Time Schedule: Scheduling the decided testing so as to deliver the end product as per the commitment made to the customer. Involvement of software testers begins in the planning phase of the software development life cycle.

2. Analysis of Tests:

This phase includes:
  • Identification of the types of testing to be performed during various stages of the Software Development Life Cycle.
  • Identification of the extent to which automation needs to be done.
  • Identification of the time at which automation is to be carried out.
  • Identification of the documentation required for automated testing.
  • Identification of the test cases best suited to automated testing.
  • Identification of the areas to be covered by performance testing and stress testing.
  • Carrying out a detailed review of documentation covering areas like Customer Requirements, Product Features & Specifications, Functional Design, etc.

3. Designing of Tests:

This phase involves the following:
  • Further polishing of various Test Cases and Test Plans.
  • Revision & finalization of the Matrix for Functional Validation.
  • Finalization of risk assessment methodologies.
  • In case automation is to be adopted, identification of the test cases suitable for automation.
  • Creation of scripts for the test cases decided for automation.
  • Preparation of test data.
  • Establishing Unit testing standards, including defining acceptance criteria.
  • Revision & finalization of the testing environment.
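The script-creation and test-data steps above can be sketched with a small automated test case. This is only an illustrative sketch: the login function, its credential rules, and the data rows are hypothetical stand-ins for a real application and its prepared test data.

```python
import unittest

# Hypothetical function under test -- a stand-in for real application code.
def login(username, password):
    """Return True only for the one valid credential pair (illustrative)."""
    return username == "admin" and password == "secret"

# Prepared test data: (username, password, expected result), as produced
# in the "Preparation of test data" step of the design phase.
TEST_DATA = [
    ("admin", "secret", True),    # valid credentials
    ("admin", "wrong", False),    # bad password
    ("guest", "secret", False),   # unknown user
    ("", "", False),              # empty input
]

class LoginTests(unittest.TestCase):
    def test_login_against_prepared_data(self):
        for username, password, expected in TEST_DATA:
            # subTest reports each data row separately on failure
            with self.subTest(username=username):
                self.assertEqual(login(username, password), expected)

# Run with: python -m unittest <this file>
```

Keeping the test data separate from the test logic, as here, makes it easy to extend coverage during the design phase without touching the script itself.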

4. Construction and verification:

This phase involves the following:
  • Finalization of test plans and test cases.
  • Completion of script creation for the test cases decided for automation.
  • Completion of test plans for Performance testing & Stress testing.
  • Providing technical support to the code developers in their efforts directed towards unit testing.
  • Bug logging in the bug repository & preparation of detailed bug reports.
  • Performing Integration testing, followed by reporting of any defects detected.

5. Execution of Testing Cycles:

This phase involves the following:

Completion of test cycles by executing all the test cases until a predefined stage is reached, or until no more errors are detected. This is an iterative process involving execution of test cases, detection of bugs, bug reporting, modification of test cases if necessary, fixing of bugs by the developers, and finally repeating the testing cycles.
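The iterative cycle described above can be sketched in a few lines of Python. The per-cycle bug findings and the cap on cycles are made-up assumptions for illustration, not part of any real test framework:

```python
# Illustrative sketch of iterative test cycles: run the tests, log the
# bugs found, let developers fix them, and repeat until no new bugs are
# detected or a predefined number of cycles is reached.
def run_test_cycles(findings_per_cycle, max_cycles=5):
    bug_log = []        # bug repository: all bugs reported so far
    cycle = 0
    for cycle, found in enumerate(findings_per_cycle, start=1):
        if cycle > max_cycles:      # predefined stage reached
            break
        bug_log.extend(found)       # bug reporting
        if not found:               # no more errors detected
            break
    return cycle, bug_log

# Cycle 1 finds two bugs, cycle 2 finds one, cycle 3 finds none -> stop.
cycles, log = run_test_cycles([["BUG-1", "BUG-2"], ["BUG-3"], []])
# cycles == 3; log == ["BUG-1", "BUG-2", "BUG-3"]
```

The stopping conditions mirror the two exits named in the text: a predefined stage (the cycle cap) or a cycle in which no new errors are detected.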

6. Performance Testing & Documentation:

This phase involves the following:
  • Execution of test cases pertaining to performance testing & stress testing.
  • Revision & finalization of test documentation
  • Performing Acceptance testing and load testing, followed by recovery testing.
  • Verification of the software application by simulating conditions of actual usage.

7. Actions after Implementation:

This phase involves the following:
  • Evaluation of the entire process of testing.
  • Documentation of TGR (Things Gone Right) & TGW (Things Gone Wrong) reports. Identification of approaches to be followed in the event of occurrence of similar defects & problems in the future.
  • Creation of comprehensive plans with a view to refine the process of Testing.
  • Identification & fixing of newly cropped-up errors on a continuous basis.

Selenese - Selenium Commands

Selenium provides a rich set of commands for fully testing your web-app in virtually any way you may imagine. The command set is often called Selenese. These commands essentially create a testing language.

In Selenese, one can test for the existence of UI elements based on their HTML tags, test for specific content and for broken links, and exercise input fields, selection-list options, form submission, and table data, among other things. In addition, Selenium commands support testing of window size, mouse position, alerts, Ajax functionality, pop-up windows, event handling, and many other web-application features.

A command is what tells Selenium what to do. The elements of Selenium commands are:
  • Actions are commands that generally manipulate the state of the application. They do things like “click this link” and “select that option”. If an Action fails, or has an error, the execution of the current test is stopped.
  • Accessors examine the state of the application and store the results in variables, e.g. “storeTitle”. They are also used to automatically generate Assertions.
  • Assertions are like Accessors, but they verify that the state of the application conforms to what is expected. Examples include “make sure the page title is X” and “verify that this checkbox is checked”.
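As a rough illustration of this three-way split, a handful of well-known Selenese commands are grouped below by element type. The lookup helper is a hypothetical aid for this discussion, not part of Selenium itself:

```python
# Illustrative grouping of a few well-known Selenese commands by element
# type (Actions, Accessors, Assertions). The element_type helper is a
# made-up convenience for discussion, not a Selenium API.
SELENESE_ELEMENTS = {
    "Actions":    ["click", "type", "select", "open"],
    "Accessors":  ["storeTitle", "storeText", "storeValue"],
    "Assertions": ["assertTitle", "verifyChecked", "waitForText"],
}

def element_type(command):
    """Return which Selenese element a command belongs to, or None."""
    for kind, commands in SELENESE_ELEMENTS.items():
        if command in commands:
            return kind
    return None

print(element_type("storeTitle"))   # Accessors
print(element_type("click"))        # Actions
```

Note that assertions come in three flavors in Selenese ("assert", "verify", and "waitFor" prefixes), which differ in how a failure affects test execution.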

Introduction to Selenium

Selenium is a robust set of tools that supports rapid development of test automation for web-based applications. Selenium provides a rich set of testing functions specifically geared to the needs of testing of a web application. These operations are highly flexible, allowing many options for locating UI elements and comparing expected test results against actual application behavior.
One of Selenium’s key features is the support for executing one’s tests on multiple browser platforms.