Software Testing Terminology

REGRESSION TESTING
Regression testing is the process of testing a software application to ensure that new changes (like updates, bug fixes, or new features) do not negatively impact the existing functionality of the software.

Example:
Imagine you have a mobile app that allows users to log in, view their profile, and send messages. If a developer adds a new feature, such as the ability to upload profile pictures, regression testing ensures that after this change, users can still log in, view their profiles, and send messages without any issues.

Types of Regression Testing:
  • Unit Regression Testing:
    • We test only the specific changes made by the developer.
  • Regional Regression Testing:
    • We test the part that was changed along with the connected parts. An impact analysis meeting, involving both the testing team and the developers, is conducted to figure out which parts will be affected by the change.
  • Full Regression:
    • We test the main part that was changed and also check the rest of the software to be thorough.
    • For example, if the developer made changes in many areas, instead of checking each one separately, we test everything together in one full round.
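
In practice, a regression suite is automated so the same checks can be re-run after every change. Below is a minimal pytest-style sketch for the mobile app example; the `app` module and its functions (`login`, `get_profile`, `send_message`) are hypothetical stand-ins, not a real API.

```python
# Minimal regression suite (pytest). After the profile-picture feature is
# merged, re-running this whole suite is regression testing: none of these
# tests target the new feature, they guard the existing ones.
import app  # hypothetical module under test

def test_login_still_works():
    assert app.login("alice", "correct-password") is True

def test_view_profile_still_works():
    profile = app.get_profile("alice")
    assert profile["username"] == "alice"

def test_send_message_still_works():
    assert app.send_message(sender="alice", to="bob", text="hi") is True
```
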
RE-TESTING
  • Whenever the developer fixes a bug, the tester verifies the fix; this is called re-testing.
  • The tester closes the bug if the fix works; otherwise, the bug is reopened and sent back to the developer.
  • The goal is to confirm that defects found and reported in an earlier build have actually been fixed in the current build.
Example:
     Build 1.0 was released. The test team found some defects (Defect IDs 1.0.1, 1.0.2) and reported them.
     Build 1.1 was released; testing defects 1.0.1 and 1.0.2 in this build is re-testing.
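
Automated, a re-test is simply a test case tied to the defect ID and run against the new build. A minimal sketch, reusing the hypothetical `app` module; the scenario behind defect 1.0.1 is invented here for illustration.

```python
# Re-test for defect 1.0.1 (invented scenario: login failed for email
# addresses containing "+"). Run against build 1.1 to confirm the fix.
import app  # hypothetical module under test

def test_defect_1_0_1_login_with_plus_in_email():
    # Reproduce the exact steps from the original bug report.
    assert app.login("alice+test@example.com", "correct-password") is True
```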

RE-TESTING VS REGRESSION TESTING

Re-testing:
Suppose a tester finds a bug in a specific module, say the Purchase module, and reports it to the developers. Once the developers fix that bug, the tester needs to perform re-testing. Re-testing focuses on the specific bug that was identified earlier in the Purchase module: the tester checks whether the particular problem reported in Purchase is resolved after the bug fix. It's like double-checking to make sure the issue is truly fixed.

Regression Testing:
Now, considering the Finance module depends on the Purchase module, after the bug in Purchase is fixed, the tester also needs to perform Regression Testing. Regression Testing ensures that changes made to one part of the application (Purchase module) haven't negatively affected other connected parts (Finance module). In this scenario, Regression Testing would involve testing not just the fixed Purchase module but also checking if the Finance module still works correctly after the Purchase module is modified. It helps catch unintended side effects that might have occurred due to the bug fix.

In summary, Re-testing is specifically testing the fixed bug to confirm it's resolved, while Regression Testing is making sure that the fix hasn't caused new issues or disruptions, especially in modules that are interconnected.
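
The two activities can be mirrored in how tests are selected. A sketch using pytest markers and hypothetical `purchase` and `finance` modules (the coupon scenario is invented for illustration; the markers would need to be registered in pytest.ini to avoid warnings):

```python
import pytest
import purchase  # hypothetical module where the bug was fixed
import finance   # hypothetical dependent module

@pytest.mark.retest
def test_purchase_coupon_bug_is_fixed():
    # Re-testing: the exact defect reported against Purchase.
    order = purchase.create_order(items=[("book", 2)], coupon="SAVE10")
    assert order.total == purchase.price("book") * 2 * 0.9

@pytest.mark.regression
def test_finance_invoice_still_matches_order():
    # Regression: the dependent Finance module must still agree with Purchase.
    order = purchase.create_order(items=[("book", 1)])
    invoice = finance.invoice_for(order)
    assert invoice.amount == order.total
```

Running `pytest -m retest` covers the fix itself, while `pytest -m regression` covers the ripple effects in connected modules.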

SMOKE TESTING

Smoke testing, also known as build verification testing, is a preliminary testing phase conducted on a software build to ensure that the critical functionalities are working properly. It is a quick and basic test to check if the software is stable enough for more in-depth testing. The term "smoke" comes from hardware testing, where a new device is powered on for the first time to see whether it literally smokes.

Example of Smoke Test:
Imagine you have a new mobile application. In a smoke test, you would quickly check if the app opens without crashing, basic navigation works, and essential features like logging in function correctly.
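
Smoke tests are typically a handful of fast checks run on every new build before deeper testing begins. A minimal sketch using the requests library against a hypothetical deployment; the base URL and endpoints are assumptions.

```python
import requests

BASE_URL = "https://staging.example.com"  # hypothetical staging deployment

def test_app_is_up():
    # If even the health endpoint fails, the build is rejected outright.
    assert requests.get(f"{BASE_URL}/health", timeout=5).status_code == 200

def test_login_endpoint_responds():
    resp = requests.post(f"{BASE_URL}/login",
                         json={"user": "smoke", "password": "smoke"},
                         timeout=5)
    # Any well-formed response (accept or reject) means login is reachable;
    # a 5xx server error would fail the smoke run.
    assert resp.status_code < 500
```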

SANITY TESTING

Sanity testing is a more focused and narrow form of testing. It aims to ensure that specific functionalities or areas of the software have been fixed or enhanced after changes, and that they work as intended. Sanity tests are usually performed after bug fixes or minor updates.

Example of Sanity Test:
Continuing with the mobile app example, if there was a bug reported about the app crashing when users upload profile pictures, a sanity test would specifically check if this issue has been resolved without extensively testing every other feature.
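
As a sketch, that sanity check could be a single focused test, again using a hypothetical `app.upload_profile_picture` function:

```python
import app  # hypothetical module under test

def test_profile_picture_upload_no_longer_crashes():
    # Only the changed area is exercised; the rest of the app is left
    # to the regular regression suite.
    result = app.upload_profile_picture(user="alice", path="avatar.png")
    assert result.ok
```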

SANITY TESTING VS SMOKE TESTING:

Purpose:

Smoke Testing: Verifies if the software build is stable enough for further testing.
Sanity Testing: Focuses on specific functionalities to ensure they work after changes.

Scope:

Smoke Testing: Broad, covering major features.
Sanity Testing: Narrow, focusing on specific areas.

Timing:

Smoke Testing: Done at the beginning of testing.
Sanity Testing: Usually done after bug fixes or minor changes.

Depth:

Smoke Testing: Surface-level, not detailed.
Sanity Testing: Deeper, focusing on specific aspects.

EXPLORATORY TESTING

Exploratory testing is a testing approach where testers simultaneously design and execute test cases. Testers explore the application, learn its functionalities, and make decisions on the spot about what areas to test and what test cases to execute. It is less formalized than traditional testing methods and relies on the tester's intuition, creativity, and experience.

AD-HOC TESTING

Ad-hoc testing is a type of informal testing where testers randomly test the application without any predefined test cases. Testers may explore the application in an unstructured manner, trying to identify defects without following a planned testing approach. Ad-hoc testing is often used to discover unexpected issues that might not be covered in formal test cases.

MONKEY TESTING

Monkey testing, also known as random testing, is a technique where the application is tested with random and unexpected inputs. The goal is to identify system crashes or unexpected behaviors caused by random inputs. This type of testing can be particularly useful in uncovering vulnerabilities and stability issues in the application.
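
Random-input testing is easy to automate. A self-contained sketch that hammers a stand-in `parse_message` function with random strings and only asserts that it never crashes (a controlled ValueError counts as graceful rejection):

```python
import random
import string

def parse_message(text: str) -> dict:
    # Stand-in for the real function under test.
    if not text.strip():
        raise ValueError("empty message")
    return {"body": text}

def test_random_inputs_do_not_crash():
    rng = random.Random(42)  # seeded so any failure is reproducible
    for _ in range(10_000):
        junk = "".join(rng.choice(string.printable)
                       for _ in range(rng.randint(0, 200)))
        try:
            parse_message(junk)
        except ValueError:
            pass  # expected, controlled rejection
        # Any other exception propagates and fails the test.
```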


EXPLORATORY TESTING VS AD-HOC TESTING VS MONKEY TESTING

Exploratory Testing:
  • Documentation: None
  • Plan: No formal plan
  • Testing Style: Informal
  • Tester's Knowledge: Testers don't know much about the application
  • Testing Approach: Random testing
  • Purpose: Intention is to learn or explore the functionality of the application
  • Application Type: Any application that is new to the tester
Ad-hoc Testing:
  • Documentation: None
  • Plan: No formal plan
  • Testing Style: Informal
  • Tester's Knowledge: Testers should have some knowledge of the application's functionality
  • Testing Approach: Random testing
  • Purpose: Intention is to break the application or find corner-case defects
  • Application Type: Any application
Monkey Testing:
  • Documentation: None
  • Plan: No formal plan
  • Testing Style: Informal
  • Tester's Knowledge: Testers don't know much about the application
  • Testing Approach: Random testing
  • Purpose: Intention is to break the application or find corner-case defects
  • Application Type: Typically used for gaming applications
Summary:
  • Common Characteristics: All three types involve no documentation, no formal plan, informal testing, and random testing.
  • Tester's Knowledge: In exploratory and monkey testing, testers don't know much about the application, while ad-hoc testing assumes the tester already has some knowledge of its functionality.
  • Purpose: Exploratory testing aims to learn or explore application functionality, while ad-hoc and monkey testing aim to break the application or find corner-case defects.
  • Application Type: Exploratory and ad-hoc testing can be applied to any type of application, while monkey testing is most often used for gaming applications.

In simple terms, exploratory testing is for learning about a new application, ad-hoc testing is for finding defects, and monkey testing is like playing around with an application, especially a game, to break it or find hidden issues.

POSITIVE TESTING

Positive testing focuses on ensuring that a system behaves as expected when provided with valid inputs. The goal is to confirm that the software functions correctly under normal or expected conditions. In positive testing, the tester checks if the application does what it is supposed to do when everything is right.

Examples of Positive Tests:

  • Login Test: Entering valid credentials and checking if the user can successfully log in.
  • Calculator Addition: Adding two positive numbers to ensure the calculator gives the correct sum.
  • Form Submission: Filling out a form with valid data and verifying that it is successfully submitted.
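
As an automated example, the calculator case might look like this (the `add` function is a stand-in for the code under test):

```python
import pytest

def add(a: float, b: float) -> float:
    return a + b  # stand-in for the calculator's addition

def test_addition_of_two_positive_numbers():
    # Valid inputs, expected outcome: the essence of a positive test.
    assert add(2, 3) == 5
    assert add(0.1, 0.2) == pytest.approx(0.3)
```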

NEGATIVE TESTING

Negative testing is about examining how well a system handles invalid or unexpected inputs. The goal is to identify potential weaknesses or vulnerabilities in the software by deliberately providing it with incorrect or inappropriate data.

Examples of Negative Tests:

  • Login Test (Negative): Attempting to log in with an incorrect password to check if the system rejects invalid credentials.
  • File Upload (Negative): Trying to upload a file in a format not supported by the application and verifying if the system handles it gracefully.
  • Credit Card Payment (Negative): Entering an expired credit card date during a payment process to see if the system detects and handles this scenario correctly.
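
The login case as an automated negative test, assuming a hypothetical `login` function that raises on bad credentials:

```python
import pytest

class AuthError(Exception):
    pass

def login(user: str, password: str) -> bool:
    # Stand-in for the real authentication logic.
    if password != "correct-password":
        raise AuthError("invalid credentials")
    return True

def test_login_rejects_wrong_password():
    # The system must reject invalid input cleanly, not accept it or crash.
    with pytest.raises(AuthError):
        login("alice", "wrong-password")
```
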
Summary:

Positive Testing: Checks if the system behaves correctly under normal conditions with valid inputs, ensuring it does what it's supposed to.

Negative Testing: Involves intentionally providing the system with invalid or unexpected inputs to uncover vulnerabilities or weaknesses, checking how well it handles unexpected situations.

In simple terms, positive testing ensures things work when they should, while negative testing investigates how well the system copes with unexpected or incorrect inputs.


END-TO-END TESTING

End-to-End Testing is a comprehensive testing approach that evaluates the entire application flow from start to finish. It involves testing the interactions between various components and systems to ensure they work seamlessly together. The purpose is to simulate real-world scenarios and verify that the application behaves as expected throughout its complete lifecycle.

Example of End-to-End Testing for an E-commerce Application:

Imagine testing the process of a customer making a purchase on an e-commerce platform.

Scenario: User Makes a Purchase

Steps:
  • The user logs into the e-commerce website.
  • The user browses products, adds items to the cart, and proceeds to checkout.
  • The user provides shipping and payment information.
  • The system processes the payment and generates an order confirmation.
  • The user receives an email confirmation.

End-to-End Testing Checks:
  • Confirm that users can successfully log in.
  • Ensure the shopping cart calculates the correct total.
  • Verify that the checkout process collects and processes shipping and payment information accurately.
  • Check that the system generates a valid order confirmation.
  • Confirm that the user receives the expected email confirmation.

In this example, end-to-end testing covers the entire journey of a user making a purchase, including interactions with the website interface, backend systems, payment processing, and email notifications. The goal is to ensure that all these components work harmoniously together, providing a smooth and error-free experience for the end user.
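
Driven through a public API, the scenario above might compress into one test. A sketch using the requests library; every endpoint, payload, and the email "outbox" test hook are assumptions, not a real storefront API.

```python
import requests

BASE = "https://shop.example.com/api"  # hypothetical storefront API

def test_user_can_complete_a_purchase():
    s = requests.Session()

    # 1. Log in.
    assert s.post(f"{BASE}/login",
                  json={"user": "alice", "password": "pw"}).status_code == 200

    # 2. Add items to the cart and verify the total.
    s.post(f"{BASE}/cart", json={"sku": "book-1", "qty": 2})
    cart = s.get(f"{BASE}/cart").json()
    assert cart["total"] == cart["items"][0]["price"] * 2

    # 3. Check out with shipping and payment details.
    order = s.post(f"{BASE}/checkout",
                   json={"address": "1 Main St",
                         "card": "4111-1111-1111-1111"}).json()

    # 4. A valid order confirmation was generated.
    assert order["status"] == "confirmed" and order["order_id"]

    # 5. The confirmation email was queued (hypothetical test-only hook).
    emails = s.get(f"{BASE}/test/outbox", params={"to": "alice"}).json()
    assert any(e["subject"].startswith("Order confirmation") for e in emails)
```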


GLOBALIZATION TESTING

Globalization Testing checks if an application can function seamlessly across different regions and cultures, accommodating various languages, date formats, and currencies.

LOCALIZATION TESTING

Localization Testing ensures that an application is adapted for a specific locale or target audience by verifying language translations, cultural preferences, and regional requirements.

In simple terms, globalization testing makes sure your software can work anywhere in the world, and localization testing ensures it's a good fit for the specific cultural and linguistic needs of a particular region.
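
A tiny globalization-style check can be written with the Babel library, verifying that the same amount renders according to each locale's conventions (the expected fragments are illustrative):

```python
from babel.numbers import format_currency

def test_currency_formats_per_locale():
    # en_US groups with commas and uses a period as the decimal separator...
    assert "1,099.99" in format_currency(1099.99, "USD", locale="en_US")
    # ...while de_DE swaps them.
    assert "1.099,99" in format_currency(1099.99, "EUR", locale="de_DE")
```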


