- Overview:
- Brief introduction to the test plan, its purpose, and the system or application under test.
- Scope:
- Clearly defines the boundaries of the testing effort, specifying what is included and excluded from the testing scope.
- Inclusions:
- Details the specific items, features, or functionalities that are to be tested.
- Exclusions:
- Identifies any items, features, or functionalities that are explicitly excluded from the testing effort.
- Test Environments:
- Describes the hardware, software, and network configurations where testing will take place.
- Test Strategy:
- Outlines the overall approach to testing, including testing levels, test types, and techniques to be employed.
- Defect Reporting Procedure:
- Describes the process for logging, tracking, and managing defects identified during testing.
- Roles/Responsibilities:
- Defines the roles and responsibilities of team members involved in the testing process.
- Test Schedule:
- Provides a timeline for testing activities, including start and end dates for each testing phase.
- Test Deliverables:
- Lists the expected outputs and documents to be produced during and after the testing process.
- Entry and Exit Criteria:
- Specifies the conditions that must be met before testing can begin (entry criteria) and the conditions that must be satisfied for testing to conclude (exit criteria).
- Suspension and Resumption Criteria:
- Outlines conditions under which testing may be temporarily halted and the criteria for resuming testing.
- Tools:
- Identifies any testing tools, software, or utilities that will be used to facilitate the testing process.
- Risks and Mitigations:
- Identifies potential risks to the testing effort and outlines strategies for mitigating or managing these risks.
- Approvals:
- Specifies the individuals or groups responsible for reviewing and approving the test plan.
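The sections above can be captured as a structured template. Here is a minimal sketch as a Python dict; the field names mirror the outline, and all values are illustrative placeholders, not a standard schema:

```python
# Minimal test plan skeleton. Keys follow the outline above; values are
# placeholder examples for an online shopping application under test.
test_plan = {
    "overview": "Functional testing of the online shopping application.",
    "scope": {
        "inclusions": ["login", "product browsing", "cart", "checkout"],
        "exclusions": ["third-party payment gateway internals"],
    },
    "test_environments": ["Chrome on Windows 11", "staging server"],
    "test_strategy": {"levels": ["system"], "types": ["functional", "negative"]},
    "roles": {"test_lead": "TBD", "testers": ["TBD"]},
    "schedule": {"start": "2024-01-08", "end": "2024-01-19"},
    "entry_criteria": ["build deployed to staging", "test data loaded"],
    "exit_criteria": ["all planned test cases executed", "no open critical defects"],
    "tools": ["issue tracker", "test management tool"],
    "risks": [{"risk": "environment downtime", "mitigation": "backup environment"}],
}

# Quick completeness check: every planned section should have content.
missing = [section for section, value in test_plan.items() if not value]
print("missing sections:", missing)  # []
```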
Use Case Components:
1) Actor: The user, either an individual or a group, interacting with the system.
2) Action: The steps or activities taken to achieve the desired outcome.
3) Goal/Outcome: The successful result or expected outcome of the user's interaction with the system.
Test Scenario: A possible area to be tested (what to test).
Test Case: Step-by-step actions performed to validate the functionality of the application under test (AUT) (how to test). A test case contains test steps, an expected result, and an actual result.
Use Case: Online Shopping - Place an Order for an Item
Actor: Online Shopper
Basic Flow/Action:
1. The shopper logs into the online shopping application.
2. Navigates to the "Electronics" category.
3. Selects the "Smartphones" subcategory.
4. Chooses a specific smartphone model for details.
5. Adds the selected smartphone to the shopping cart.
6. Proceeds to checkout.
7. Enters shipping information.
8. Reviews and confirms the order.
Goal/Outcome: The system generates an order confirmation.
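The basic flow above can be exercised against a toy in-memory model. The `Shop` class and its methods below are invented for illustration; a real test would drive the actual application's UI or API:

```python
# Toy in-memory model of the "place an order" use case. Class and method
# names are illustrative, not a real shopping application's API.
class Shop:
    def __init__(self):
        self.catalog = {"Electronics": {"Smartphones": ["Phone X", "Phone Y"]}}
        self.cart = []
        self.logged_in = False
        self.shipping = None

    def login(self, user, password):
        # A real system would verify credentials; any non-empty pair works here.
        self.logged_in = bool(user and password)
        return self.logged_in

    def add_to_cart(self, category, subcategory, model):
        assert self.logged_in, "shopper must be logged in"
        assert model in self.catalog[category][subcategory]
        self.cart.append(model)

    def checkout(self, shipping_info):
        assert self.cart, "cart must not be empty"
        self.shipping = shipping_info
        return f"ORDER-{len(self.cart):03d}"  # the order confirmation (goal)

shop = Shop()
shop.login("shopper@example.com", "secret")
shop.add_to_cart("Electronics", "Smartphones", "Phone X")
confirmation = shop.checkout({"name": "A. Shopper", "address": "1 Main St"})
print(confirmation)  # ORDER-001
```

Each step of the basic flow maps to one call, and the goal/outcome is the returned confirmation.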
Test Scenarios based on Use Case:
Login Functionality:
- Verify that the shopper can successfully log into the online shopping application.
- Validate the system's response to incorrect login credentials.
Browse and Select Product:
- Confirm that the shopper can navigate to the "Electronics" category and select "Smartphones."
- Ensure that the selected smartphone details are correctly displayed.
Add to Cart:
- Check that the shopper can successfully add the selected smartphone to the shopping cart.
- Verify the system's behavior if the item is out of stock during the addition process.
Checkout Process:
- Test the flow of proceeding to checkout after adding the item to the cart.
- Validate the system's response if the shopper attempts to proceed without adding any items.
Enter Shipping Information:
- Confirm that the shopper can enter valid shipping information.
- Verify the system's handling of invalid or missing shipping details.
Review and Confirm Order:
- Check that the order details are accurately displayed for review.
- Validate the order confirmation process.
Test Cases for the above Test Scenarios:
TC1: Login Functionality:
Test Steps:
- Enter valid username and password.
- Click the "Login" button.
- Expected Result: Successfully logged into the online shopping application.
- Actual Result: [Outcome]
TC2: Browse and Select Product:
Test Steps:
- Navigate to "Electronics" > "Smartphones."
- Click on a specific smartphone model.
- Expected Result: Smartphone details page is displayed.
- Actual Result: [Outcome]
TC3: Add to Cart:
Test Steps:
- Click "Add to Cart" for a selected smartphone.
- Verify the item is added to the shopping cart.
- Expected Result: Item added to the cart successfully.
- Actual Result: [Outcome]
TC4: Checkout Process:
Test Steps:
- Click "Proceed to Checkout" from the cart.
- Expected Result: Redirected to the checkout process.
- Actual Result: [Outcome]
TC5: Enter Shipping Information:
Test Steps:
- Fill in valid shipping details.
- Click "Continue" to proceed.
- Expected Result: Shipping information is accepted and the shopper can proceed to the next step.
- Actual Result: [Outcome]
TC6: Review and Confirm Order:
Test Steps:
- Review order details.
- Click "Place Order."
- Expected Result: Order confirmation is generated.
- Actual Result: [Outcome]
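Two of the test cases above (TC1 and TC4, including their negative scenarios) can be sketched as executable checks. The `StubApp` class is a stand-in invented for this example; a real suite would target the actual application:

```python
# Executable sketches of TC1 (login) and TC4 (checkout) against a minimal
# stub application. StubApp and its methods are illustrative only.
class StubApp:
    VALID = {"shopper": "secret"}  # hypothetical valid credentials

    def __init__(self):
        self.cart = []

    def login(self, user, password):
        return self.VALID.get(user) == password

    def proceed_to_checkout(self):
        if not self.cart:
            raise ValueError("cannot check out with an empty cart")
        return "checkout-page"

def tc1_login_functionality():
    app = StubApp()
    # Steps: enter valid username and password, click "Login".
    assert app.login("shopper", "secret")       # expected: login succeeds
    assert not app.login("shopper", "wrong")    # negative: bad credentials rejected

def tc4_checkout_process():
    app = StubApp()
    # Negative scenario: proceeding without adding any items is rejected.
    try:
        app.proceed_to_checkout()
        assert False, "empty-cart checkout should have been rejected"
    except ValueError:
        pass
    app.cart.append("Phone X")
    assert app.proceed_to_checkout() == "checkout-page"

tc1_login_functionality()
tc4_checkout_process()
print("TC1 and TC4 passed")
```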
TEST CASE DOCUMENT [CONTENTS]
A test case typically consists of several components that provide detailed information about how to test a specific aspect of a software application. The contents of a test case may include:
Test Case Identifier:
- A unique identifier or name for the test case, often including a reference to the module or functionality being tested.
Test Objective:
- A brief description of the goal or objective of the test case, outlining what specific aspect of the system is being tested.
Preconditions:
- Conditions or states that must be true or exist before the test case can be executed. This may include prerequisites, such as specific data, configurations, or system states.
Test Steps:
- A detailed sequence of actions or operations that the tester needs to perform to execute the test case. Each step should be specific and include inputs, interactions, or operations on the system.
Expected Result:
- The anticipated outcome or behavior of the system after the test steps have been executed. This serves as a benchmark against which the actual result is compared.
Actual Result:
- The observed outcome or behavior of the system after executing the test steps. Testers document the actual results during the test execution to identify any deviations from the expected behavior.
Test Data:
- Specific data values or inputs used during the test case execution. This ensures that the test case is conducted with known and controlled data.
Test Environment:
- Information about the test environment, including details about the hardware, software, configurations, and any other relevant setup required for the test.
Test Execution Date:
- The date and time when the test case was executed.
Tester Information:
- The name or identifier of the tester who executed the test case.
Pass/Fail Status:
- The outcome of the test case execution, indicating whether the system behaved as expected (pass) or if there were issues or deviations (fail).
Comments/Notes:
- Additional comments or notes that provide context, explanations, or details about the test case, its execution, or any issues encountered.
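The components listed above can be modeled as a single record. One way to do this, sketched with a Python dataclass (field names are illustrative, and here pass/fail is derived by comparing actual to expected):

```python
from dataclasses import dataclass, field
from typing import List, Optional

# A test case record mirroring the components described above.
@dataclass
class TestCase:
    identifier: str
    objective: str
    preconditions: List[str]
    steps: List[str]
    expected_result: str
    actual_result: Optional[str] = None
    test_data: dict = field(default_factory=dict)
    environment: str = ""
    execution_date: str = ""
    tester: str = ""
    comments: str = ""

    @property
    def status(self) -> str:
        # Pass/fail status: compare the observed outcome to the benchmark.
        if self.actual_result is None:
            return "NOT RUN"
        return "PASS" if self.actual_result == self.expected_result else "FAIL"

tc = TestCase(
    identifier="TC1",
    objective="Verify login with valid credentials",
    preconditions=["shopper account exists"],
    steps=["Enter valid username and password", 'Click the "Login" button'],
    expected_result="Logged into the online shopping application",
)
assert tc.status == "NOT RUN"
tc.actual_result = "Logged into the online shopping application"
print(tc.identifier, tc.status)  # TC1 PASS
```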
REQUIREMENT TRACEABILITY MATRIX (RTM)
The Requirement Traceability Matrix (RTM) is a document that establishes a mapping between requirements and test cases. Its primary purpose is to ensure that all the requirements specified for a system are covered by corresponding test cases. The RTM helps in tracking the progress of testing activities and ensures that each requirement has been validated.
Key components of an RTM include:
Requirement ID:
- A unique identifier assigned to each requirement, making it easy to reference and track.
Req Description:
- A detailed description of each requirement, outlining what functionality or behavior is expected from the system.
Test Case IDs:
- A list of test cases associated with each requirement. This section establishes a clear link between the requirements and the corresponding test cases that verify or validate those requirements.
The RTM serves several purposes in the software development and testing process:
Coverage Analysis: It helps ensure that every requirement has at least one associated test case, providing a way to track the coverage of testing activities against the specified requirements.
Change Impact Analysis: When there are changes or updates to the requirements, the RTM helps identify which test cases need to be modified or added to accommodate those changes.
Validation Tracking: It aids in monitoring the progress of testing by highlighting which requirements have been tested, which test cases have been executed, and the overall status of testing against the requirements.
Traceability: It establishes a traceability link between the requirements, test cases, and sometimes, other related artifacts, providing transparency in the testing process.
Here's a simplified example of what an RTM might look like:
| Requirement ID | Req Description | Test Case IDs |
|---|---|---|
| REQ001 | User should be able to log in | TC001, TC002 |
| REQ002 | System should display product details | TC003, TC004 |
| REQ003 | User can add items to the shopping cart | TC005, TC006 |
In this example, each requirement is associated with one or more test cases, creating a clear mapping that helps ensure comprehensive testing coverage.
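Coverage analysis and change impact analysis over an RTM are simple lookups once the matrix is held as a mapping. A minimal sketch using the sample data from the table above:

```python
# RTM as a mapping from requirement ID to its test case IDs
# (data copied from the example table above).
rtm = {
    "REQ001": ["TC001", "TC002"],
    "REQ002": ["TC003", "TC004"],
    "REQ003": ["TC005", "TC006"],
}

# Coverage analysis: every requirement should have at least one test case.
uncovered = [req for req, cases in rtm.items() if not cases]
print("uncovered requirements:", uncovered)  # []

# Change impact analysis: if TC004 changes, which requirements are affected?
affected = [req for req, cases in rtm.items() if "TC004" in cases]
print("requirements impacted by TC004:", affected)  # ['REQ002']
```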