Tester Roles and Responsibilities

Responsibilities of a Test Manager:

− Manage the Testing Department.
− Allocate resources to projects.
− Review weekly Testers' status reports and take necessary actions.
− Escalate Testers' issues to the Sr. Management.
− Prepare estimates for testing projects.
− Enforce adherence to the company's Quality processes and procedures.
− Decide on procuring Software Testing tools for the organization.
− Coordinate between the various departments and groups.
− Provide technical support to the Testing team.
− Continuously monitor and mentor Testing team members.
− Review test plans and test cases.
− Attend weekly project meetings and provide input from the Testers' perspective.
− Immediately notify/escalate problems to the Sr. Test Manager / Senior Management.
− Ensure processes are followed as laid down.

Responsibilities of a Test Lead:

− Prepare the Software Test Plan.
− Check / Review the Test Cases documents (System Integration and User Acceptance) prepared by the test engineers.
− Analyze requirements during the requirements analysis phase of projects.
− Keep track of new requirements from the project.
− Forecast / estimate future project requirements.
− Arrange the hardware and software requirements for the test setup.
− Develop and implement test plans.
− Escalate issues about project requirements (software, hardware, resources) to the Project Manager / Test Manager.
− Escalate issues in the application to the Client.
− Assign tasks to all Testing Team members and ensure that all of them have sufficient work in the project.
− Organize meetings.
− Prepare the agenda for meetings (for example, the weekly team meeting).
− Attend the regular client calls and discuss the weekly status with the client.
− Send the Status Report (daily, weekly, etc.) to the Client.
− Hold frequent status-check meetings with the team.
− Communicate with the Client via chat / email etc. (if required).
− Act as the single point of contact between Development and Testing for iterations, testing, and deployment activities.
− Track and report on testing activities, including testing results, test case coverage, required resources, defects discovered and their status, performance baselines, etc.
− Assist in performing any applicable maintenance to the tools used in testing and resolve issues, if any.
− Ensure the content and structure of all testing documents / artifacts are documented and maintained.
− Document, implement, monitor, and enforce all testing processes and procedures as per the standards defined by the organization.
− Review the various reports prepared by the Test Engineers.
− Log project-related issues in the defect tracking tool identified for the project.
− Check for timely delivery of the different milestones.
− Identify training requirements (technical and soft skills) and forward them to the Project Manager.
− Attend weekly Team Leader meeting.
− Motivate team members.
− Organize / Conduct internal trainings on various products.

High Severity - Low Priority & Low Severity - High Priority

Severity: the impact of the defect on the application.

Priority: the importance of fixing the defect, in terms of both the application and the client.

High severity and low priority:
When the application has a critical problem but it only has to be solved after a month, the defect is high severity and low priority.

Low severity and high priority:
When the application has a trivial (low-impact) problem but it has to be solved within a day, the defect is low severity and high priority.
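The severity/priority quadrants above can be sketched in code. This is a minimal illustration, not any real defect-tracking tool's model; the `Defect` fields and the quadrant descriptions are made up for the example.

```python
from dataclasses import dataclass

# Hypothetical defect record; the field names are illustrative only.
@dataclass
class Defect:
    summary: str
    severity: str   # "high" or "low": impact on the application
    priority: str   # "high" or "low": urgency of the fix for the client

def triage(d: Defect) -> str:
    """Map a defect onto the severity/priority quadrants described above."""
    if d.severity == "high" and d.priority == "low":
        return "critical impact, but the fix can wait (e.g. solved after a month)"
    if d.severity == "low" and d.priority == "high":
        return "trivial impact, but must be fixed quickly (e.g. within a day)"
    if d.severity == "high" and d.priority == "high":
        return "fix first"
    return "fix when convenient"

print(triage(Defect("Crash in a rarely used legacy report", "high", "low")))
print(triage(Defect("Company name misspelled on the home page", "low", "high")))
```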

Software Testing Metrics

Measurement: quantifying the quality of an application.

Metric: a combination of measurements.

Some important testing metrics:

1) Schedule variance = (Actual time taken - Planned time) / Planned time * 100

2) Effort variance = (Actual effort - Planned effort) / Planned effort * 100

3) Test case coverage = (Total test cases - Requirements that cannot be mapped to test cases) / Total test cases * 100

4) Customer satisfaction = Number of complaints / Period of time

5) Test case effectiveness = the extent to which test cases are able to find defects.

6) Time to find a defect = the effort required to find a defect.

7) Defect severity = the severity level of a defect, indicating the potential business impact for the end user (business impact = effect on the end user).

8) Test coverage = the extent to which testing covers the product's complete functionality.

9) Defect severity index = an index representing the average severity of the defects found.

10) Time to solve a defect = the effort required to resolve a defect (diagnosis and correction).

11) Number of defects = the total number of defects found in a given period.

12) Defects/KLOC = the number of defects per 1000 lines of code.

13) Defect age = Fixed date - Reported date
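The formula-based metrics above are simple enough to compute directly. Below is a minimal sketch of a few of them; the example figures are made up for illustration.

```python
from datetime import date

def schedule_variance(actual_time, planned_time):
    """Schedule variance (%) = (actual - planned) / planned * 100."""
    return (actual_time - planned_time) / planned_time * 100

def effort_variance(actual_effort, planned_effort):
    """Effort variance (%) = (actual - planned) / planned * 100."""
    return (actual_effort - planned_effort) / planned_effort * 100

def defects_per_kloc(defect_count, lines_of_code):
    """Defects per 1000 lines of code."""
    return defect_count / (lines_of_code / 1000)

def defect_age_days(reported, fixed):
    """Defect age = fixed date - reported date, in days."""
    return (fixed - reported).days

print(schedule_variance(12, 10))    # 20.0 -> project ran 20% over schedule
print(defects_per_kloc(45, 30000))  # 1.5 defects per KLOC
print(defect_age_days(date(2024, 1, 1), date(2024, 1, 11)))  # 10
```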

Software Testing Process

I) Test Planning

(Primary Role: Test Lead/Team Lead)


Inputs:
a) Requirements specification
b) Test Strategy
c) Project plan
d) Use cases / design docs / prototype screens
e) Process guidelines docs

Deliverables:
- Review Report
- Test Plan

Roles:
Test Lead/Team Lead: Test planning
Test Engineers: Contribution to the test plan
BA: Clarifications on requirements

Tasks:
a) Understanding & analyzing the requirements
b) Test strategy implementation
c) Test estimations (time; resources: environmental, human, budget)
d) Risk analysis
e) Team formation
f) Configuration management plan
g) Test plan documentation
h) Defining the test environment set-up

Quality Standards ( ISO,CMM & Six Sigma)

1) ISO (International Organization for Standardization)
2) SEI-CMM/CMMI (Capability Maturity Model)
3) Six Sigma

ISO (International Organization for Standardization):
ISO 9001:2000: ISO is a generic model, applicable to all types of organizations. It contains 20 clauses, and the certification audit is like an examination: the result of the certification is pass or fail.

It is based on the "PDCA Cycle" and the "8 Quality Management Principles".

Testing Terminology

Black box testing - not based on any knowledge of internal design or code. Tests are based on requirements and functionality.

White box testing - based on knowledge of the internal logic of an application's code. Tests are based on coverage of code statements, branches, paths, conditions.

Unit testing - the most 'micro' scale of testing; to test particular functions or code modules. Typically done by the programmer and not by testers, as it requires detailed knowledge of the internal program design and code. Not always easily done unless the application has a well-designed architecture with tight code, may require developing test driver modules or test harnesses.
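A unit test at this 'micro' scale might look like the following sketch, using Python's standard unittest module; the function under test is invented purely for illustration.

```python
import unittest

# A tiny function under test; purely illustrative.
def apply_discount(price, percent):
    """Return the price after applying a percentage discount."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

class ApplyDiscountTest(unittest.TestCase):
    # Unit tests exercise one function in isolation, including
    # internal branches such as the validation check above.
    def test_normal_discount(self):
        self.assertEqual(apply_discount(200.0, 25), 150.0)

    def test_invalid_percent_rejected(self):
        with self.assertRaises(ValueError):
            apply_discount(100.0, 150)

if __name__ == "__main__":
    unittest.main()
```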

Incremental integration testing - continuous testing of an application as new functionality is added; requires that various aspects of an application's functionality be independent enough to work separately before all parts of the program are completed, or that test drivers be developed as needed; done by programmers or by testers.

Integration testing - testing of combined parts of an application to determine if they function together correctly. The 'parts' can be code modules, individual applications, client and server applications on a network, etc. This type of testing is especially relevant to client/server and distributed systems.

Functional testing – Black box type testing geared to functional requirements of an application; this type of testing should be done by testers. This doesn't mean that the programmers shouldn't check that their code works before releasing it (which of course applies to any stage of testing.)

System testing - Black box type testing that is based on overall requirements specifications; covers all combined parts of a system.

End-to-end testing - similar to system testing; the 'macro' end of the test scale; involves testing of a complete application environment in a situation that mimics real-world use, such as interacting with a database, using network communications, or interacting with other hardware, applications, or systems if appropriate.

Black Box Testing Techniques

1) User Interface Testing
2) Functional Testing
3) Non Functional Testing
4) User support Testing

User Interface Testing:

During this testing, test engineers validate the user interface of the application with respect to the following aspects:
1) Look & feel
2) Ease of use
3) Navigation & shortcut keys

Testing Methodologies ( White Box Vs Black Box Testing)

White Box Testing

In this testing, we test the internal logic of the program.
To conduct this testing, we must have knowledge of programming.
Ex: Unit Testing

Black Box Testing
Without knowing the internal logic of the program, we test the overall functionality of the application to check whether it works according to the client's requirements.
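The contrast between the two methodologies can be shown on one small function. This is an illustrative sketch: the black-box checks are derived only from a stated requirement, while the white-box checks are chosen by reading the code to cover its internal branches.

```python
# Function under test (the "internal logic" a white-box tester can see).
def grade(score):
    if score < 0 or score > 100:
        raise ValueError("score out of range")
    if score >= 60:          # internal branch: pass/fail boundary
        return "pass"
    return "fail"

# Black-box checks: based only on the requirement
# "scores of 60 and above pass; others fail; scores must be 0-100".
assert grade(75) == "pass"
assert grade(30) == "fail"

# White-box checks: chosen by reading the code, to cover every branch,
# including the boundary of the >= 60 condition and the error path.
assert grade(60) == "pass"   # exact boundary of the internal condition
assert grade(59) == "fail"
try:
    grade(101)
except ValueError:
    pass
else:
    raise AssertionError("out-of-range score was not rejected")
```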