Agile Methodology

What is Agile Methodology?
  • It is an iterative and incremental approach.
  • Iterative means the same process is repeated again and again (the process keeps repeating).
  • Incremental means modules/features keep getting added on top of the existing software.
  • Agile is an iterative and incremental model in which requirements keep changing.
  • As a company, we should be flexible enough to accept requirement changes, develop, test, and finally release a piece of working software within a short span of time.
  • There is good communication between the Customer, Business Analyst, Developers & Testers.
  • The goal of the Agile model is customer satisfaction, achieved by delivering pieces of working software to the customer within a short span of time.
  • Agile Testing is a type of testing in which we follow Agile principles.
Advantages:
  • Requirement changes are allowed at any stage of development, i.e., we can accommodate requirement changes in the middle of development.
  • Releases are very fast (e.g., weekly).
  • The customer does not need to wait a long time.
  • Good communication within the team.
  • It is a very easy model to adopt.
Disadvantage:
  • Less focus on design and documentation, since we deliver software very fast.
What is Scrum?

Scrum is a framework through which we build a software product by following Agile principles.

Scrum includes a group of people called the Scrum team:
  • Product Owner
  • Scrum Master
  • Dev Team
  • QA Team
Product Owner :
  • Define the features of the product 
  • Prioritize features according to market value  
  • Adjust features and priorities every iteration, as needed
  • Accept or reject work results.  
Scrum Master:
  • The main role is facilitating and driving the agile process.
Developers and QA:
  • Develop and Test the software.
Agile Vs Scrum

Agile:
Focus: Agile is an approach to project management and product development that emphasizes flexibility and customer satisfaction.
Key Principles: It values collaboration, adaptability, and delivering small, functional pieces of a project regularly.
Benefits: Allows for changes in project requirements, encourages customer feedback, and promotes a collaborative team environment.

Scrum:
Type of Agile Framework: Scrum is one of the specific frameworks within the broader Agile methodology.
Roles: In Scrum, there are defined roles - Scrum Master, Product Owner, and Development Team.
Artifacts: It uses specific artifacts like the Product Backlog, Sprint Backlog, and Increment to manage and deliver work.
Events: Scrum includes specific events or ceremonies like Sprint Planning, Daily Standup, Sprint Review, and Sprint Retrospective.



Scrum Terminology

User Story : A feature/module in the software.

Epic : A collection of user stories.

Product backlog : Contains the list of user stories. Prepared by the product owner.

Sprint : The period of time allotted to complete the user stories, decided by the product owner and team, usually 2-4 weeks.

Sprint planning meeting : A meeting conducted with the team to define what can be delivered in the sprint and how long it will take.

Sprint backlog : The list of stories committed by Dev/QA for a specific sprint.

Scrum meeting : A 15-minute meeting conducted by the Scrum Master every day, also called the standup meeting. Each member answers three questions:
  1. What did you do yesterday?
  2. What will you do today?
  3. Are there any impediments in your way?
Sprint retrospective meeting : A review meeting held after completion of the sprint. The entire team, including both the Scrum Master and the product owner, should participate.

Story point : A rough estimate of a user story, given by Dev & QA in the form of the Fibonacci series.

Burndown chart : Shows how much work is remaining in the sprint. Maintained by the Scrum Master daily.

DoR & DoD
  • Definition of Ready (DoR):
    • Preparation: Ensures tasks are well-prepared before starting work.
    • Clarity: Describes what needs to be done, making sure everyone understands the plan.
    • Timing: Decided before starting a task or user story during planning.
    • Owner: Managed by the task planner or team lead.
    • Adjustable: Can be tweaked as needed during planning.
  • Definition of Done (DoD):
    • Completion: Declares when a task or user story is considered finished.
    • Criteria: Lists specific standards that must be met for completion.
    • Timing: Decided at the beginning of the project or sprint.
    • Shared Responsibility: Owned by the entire team, including developers and testers.
    • Consistency: Should remain constant during the sprint; changes considered for future sprints.
Agile Meetings

1) Sprint Planning:
  • Attendees: Entire team (developers, testers, product owner).
  • When: At the beginning of each sprint.
  • Duration: Typically 1-2 hours.
  • Purpose: Plan and prioritize tasks for the upcoming sprint.
2) Daily Standup (Daily Scrum):
  • Attendees: Entire team.
  • When: Daily, preferably in the morning.
  • Duration: 15 minutes or less.
  • Purpose: Share updates on work, discuss challenges, and align for the day.
3) Sprint Review:
  • Attendees: Team, stakeholders, product owner.
  • When: At the end of each sprint.
  • Duration: 2-4 hours.
  • Purpose: Showcase completed work, gather feedback, and discuss what's next.
4) Sprint Retrospective:
  • Attendees: Team members.
  • When: At the end of each sprint, after the sprint review.
  • Duration: 1-2 hours.
  • Purpose: Reflect on the sprint, discuss what went well and what could be improved, and plan for adjustments.
5) Backlog Grooming (Refinement):
  • Attendees: Product owner, Scrum Master, development team.
  • When: As needed between sprints.
  • Duration: Typically 1-2 hours.
  • Purpose: Review and refine the product backlog, ensuring items are well-defined and ready for upcoming sprints.

Story point
A story point is a unit of measure used to estimate the difficulty or complexity of a task or user story. 
Estimating a user story in Agile involves assigning it a story point value, and teams often use the Fibonacci sequence (1, 2, 3, 5, 8, 13, etc.) for these values.

Estimating a story using story point

User Story: "As a user, I want to be able to log in to the application using my email and password."

Estimation Process:
  • Understand the User Story:
    • The team discusses the user story to ensure everyone understands what's required. Logging in with email and password seems straightforward.
  • Compare Complexity:
    • The team compares this user story to a reference story. Let's say the reference story is a simple one-point story, like "displaying a welcome message."
  • Use Relative Sizing:
    • Team members discuss and agree that logging in is a bit more complex than displaying a welcome message but not significantly more complex. They decide to assign it a story point value of 2.
  • Fibonacci Sequence:
    • The team considers whether the complexity is closer to 2 or 3 in the Fibonacci sequence. After discussion, they agree that 2 is a more accurate representation.
  • Team Consensus:
    • The team discusses any differing opinions. If someone initially suggested 3, they might discuss why they thought it was more complex. After a brief discussion, the team reaches a consensus, and everyone agrees on 2 story points.
  • Record the Estimate:
    • The team records the estimate of 2 story points for the "log in" user story. This estimate will be used for planning and prioritizing in the upcoming sprint.
As a rough guide (the mapping varies from team to team):
1 Story Point: Typically takes a few hours to complete (half a day).
2 Story Points: Could take a day to a day and a half.
3 Story Points: Might take two days.
5 Story Points: Could take around three days.
8 Story Points: A larger task, likely taking a week.
13 Story Points: A significant effort, possibly spanning multiple weeks.
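
To make the rounding onto the Fibonacci scale concrete, here is a minimal Python sketch; the averaging-and-rounding rule is an illustrative assumption, not a standard Scrum mechanism (real teams converge by discussion, as described above).

```python
# A minimal sketch: snap a raw estimate (e.g., the average of planning
# poker guesses) to the nearest value on the Fibonacci story-point
# scale. The rounding rule is an illustrative assumption.

FIBONACCI_SCALE = [1, 2, 3, 5, 8, 13]

def to_story_points(raw_estimate: float) -> int:
    """Return the Fibonacci value closest to the raw estimate."""
    return min(FIBONACCI_SCALE, key=lambda point: abs(point - raw_estimate))

# Individual guesses for the "log in" story from the example above
guesses = [2, 3, 2, 2]
average = sum(guesses) / len(guesses)   # 2.25
print(to_story_points(average))         # -> 2, matching the team consensus
```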

Burn-Down Charts
There are four popularly used burndown charts in Agile.
  • Product burndown chart : A graph which shows how many Product Backlog Items (user stories) have been implemented/not implemented.
  • Sprint burndown chart : A graph which shows how much of the work committed for the current sprint has been completed/not completed.
  • Release burndown chart : A graph which shows the planned releases that are still pending.
  • Defect burndown chart : A graph which shows how many defects have been identified and fixed.
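
To make the idea concrete, here is a minimal Python sketch of the data behind a sprint burndown chart; the sprint length and daily numbers are made-up sample values.

```python
# A minimal sketch of the data behind a sprint burndown chart: remaining
# story points at the end of each day versus the ideal straight-line
# burndown. All numbers are made-up sample values.

SPRINT_DAYS = 10
COMMITTED_POINTS = 40

# Story points actually completed on each day of the sprint (sample data)
completed_per_day = [0, 5, 3, 0, 8, 4, 6, 2, 7, 5]

remaining = COMMITTED_POINTS
for day, done in enumerate(completed_per_day, start=1):
    remaining -= done
    ideal = COMMITTED_POINTS - (COMMITTED_POINTS / SPRINT_DAYS) * day
    print(f"Day {day:2}: remaining={remaining:3}  ideal={ideal:5.1f}")
```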




Guidelines To Write a Good Bug Report

 Effective bug reporting is essential for efficient communication between testers, developers, and other stakeholders. Clear and detailed bug reports can significantly speed up the debugging and resolution process. Here are some guidelines for creating effective bug reports:

Provide a Descriptive Title:

Use a concise and descriptive title that summarizes the nature of the bug. A good title helps quickly convey the issue.

Include Clear Steps to Reproduce:

Clearly outline the steps needed to reproduce the bug. This should be detailed enough that someone unfamiliar with the system can follow the steps and observe the issue.

Specify the Environment:

Mention the environment details where the bug was encountered, including the operating system, browser version, device, and any other relevant software configurations.

Include Preconditions and Test Data:

Specify any preconditions required for reproducing the bug, such as specific settings or data. Also, include the input or data used during testing.

Capture Screenshots or Recordings:

Attach screenshots or screen recordings that illustrate the bug. Visuals can provide a clear understanding of the problem and help developers identify the issue faster.

Provide Expected and Actual Results:

Clearly state what the expected behavior should be and what behavior was observed. This helps developers understand the deviation from the expected outcome.

Classify Severity and Priority:

Assign an appropriate severity level (e.g., critical, major, minor) to indicate the impact of the bug on the system. Additionally, assign a priority level (e.g., high, medium, low) based on business priorities.

Isolate the Issue:

If the bug is part of a larger system, attempt to isolate the issue to a specific module or component. This helps developers narrow down the problem area.

Include System Logs and Error Messages:

If applicable, include relevant system logs, error messages, or stack traces. These details can provide valuable information for diagnosing the root cause of the issue.

Check for Duplicates:

Before submitting a bug report, check if a similar issue has already been reported. Duplicate bug reports can lead to confusion and unnecessary efforts.

Specify Browser/Device Configuration:

If the bug is related to a web application, provide details about the browser type and version. For mobile apps, specify the device type and operating system version.

Provide User Account Information (if applicable):

If the bug is user-specific, include details about the user account, such as username or user ID. This helps in replicating the issue in a similar user context.

Include Date and Time of Occurrence:

Specify when the bug was first observed. This information can be crucial in identifying patterns or correlating the bug with specific events or changes.

Be Objective and Avoid Assumptions:

Stick to facts and avoid making assumptions or speculations. Clearly state what was observed without adding personal opinions.

Communication Etiquette:

Maintain a professional and constructive tone in bug reports. Clearly articulate the problem without using offensive language. Remember that the goal is to improve the software, not to assign blame.

Follow the Bug Reporting Template (if available):

If your organization has a standard bug reporting template, make sure to use it. Consistency in bug reports makes it easier for developers to process and prioritize issues.
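
Putting several of these guidelines together, a minimal bug report might look like the hypothetical example below; all names, IDs, and version numbers are invented for illustration.

  Bug ID: BUG-1234
  Title: Login fails with valid credentials when the email contains uppercase letters
  Environment: Windows 11, Chrome 120, staging build 2.4.1
  Preconditions: A registered, active user account exists (TestUser@example.com)
  Steps to Reproduce:
    1. Navigate to the login page.
    2. Enter the registered email with uppercase letters (TestUser@example.com).
    3. Enter the correct password and click "Log In".
  Expected Result: The user is logged in and redirected to the dashboard.
  Actual Result: An "Invalid credentials" error message is displayed.
  Severity: Major | Priority: High
  Attachments: Screenshot of the error message; browser console log.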

Keep it Concise:

While providing details is important, avoid unnecessary information. Keep the bug report concise and focused on the essential details.

Verify Bug Before Reporting:

Ensure that the issue is reproducible and not a one-time occurrence. Verify the bug on different environments if possible.

Update Bug Status:

Stay involved in the bug resolution process. If additional information is requested or if the bug is fixed, promptly update the bug status accordingly.

Continuous Learning:

Learn from the feedback and resolutions of your reported bugs. This helps improve the quality of future bug reports and your overall testing skills.

By following these guidelines, you can contribute to a more efficient and collaborative bug tracking process, ultimately leading to higher-quality software.

Test Case Best Practices - Guidelines To Follow When Writing A Good Test Case

Writing effective test cases is crucial for ensuring the quality and reliability of software. Here are some best practices and guidelines to follow when creating test cases:

Understand Requirements:

Gain a thorough understanding of the requirements before writing test cases. Clear requirements help in creating accurate and relevant test cases.

Use Clear and Concise Language:

Write test cases in simple and clear language to ensure that they are easily understandable by team members and stakeholders.

One Test Case, One Purpose:

Each test case should focus on testing a single, specific functionality or scenario. This makes it easier to identify and fix issues.

Use a Standardized Format:

Adopt a standardized format for documenting test cases, including test case ID, description, preconditions, test steps, expected results, actual results, and post-conditions.
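
For illustration, a test case in such a standardized format might look like the hypothetical example below; all IDs and data are invented.

  Test Case ID: TC-LOGIN-001
  Description: Verify that a registered user can log in with valid credentials.
  Preconditions: The user TestUser@example.com is registered and active.
  Test Steps:
    1. Open the login page.
    2. Enter the registered email and the correct password.
    3. Click "Log In".
  Expected Result: The user is redirected to the dashboard.
  Actual Result: (recorded during execution)
  Post-conditions: The user session is active.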

Provide Detailed Steps:

Clearly outline the steps to execute the test case. Make sure they are detailed enough for anyone to follow and reproduce the test.

Include Preconditions and Post-conditions:

Specify any necessary preconditions that must be met before the test case can be executed. Also, document any post-conditions that should be true after the test case has been executed.

Test Data and Environment Setup:

Clearly define the test data required for the test case and ensure that the testing environment is set up appropriately. This helps in reproducing the test conditions.

Positive and Negative Testing:

Include both positive and negative test cases. Positive test cases validate that the system behaves as expected under normal conditions, while negative test cases verify that the system handles errors correctly.
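
As a minimal sketch of this idea in code, the pytest example below pairs one positive and one negative test; the login() function is a hypothetical stand-in included only so the example is self-contained and runnable.

```python
# A minimal pytest sketch pairing a positive and a negative test case.
# The login() function is a hypothetical stand-in, not a real API.

REGISTERED_USERS = {"user@example.com": "s3cret"}

def login(email: str, password: str) -> bool:
    return REGISTERED_USERS.get(email) == password

def test_login_with_valid_credentials():
    # Positive: the system behaves as expected under normal conditions
    assert login("user@example.com", "s3cret") is True

def test_login_with_wrong_password():
    # Negative: the system rejects invalid input correctly
    assert login("user@example.com", "wrong") is False
```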

Cover Boundary Conditions:

Ensure that test cases cover boundary conditions and edge cases. This helps identify potential issues at the limits of the software's capabilities.

Reusable Test Cases:

Write test cases in a way that allows for reusability across different test scenarios. This can save time and effort in test case creation and maintenance.

Prioritize Test Cases:

Prioritize test cases based on risk, critical functionality, and business impact. This ensures that the most important areas of the application are thoroughly tested.

Review and Collaboration:

Conduct peer reviews of test cases to identify potential issues and ensure quality. Collaboration with developers and other stakeholders is crucial for comprehensive testing.

Maintainability:

Ensure that test cases are easy to maintain. If there are changes in requirements or the application, update the test cases accordingly.

Traceability:

Establish traceability between test cases and requirements to ensure that each requirement is covered by at least one test case.

Automation Considerations:

If automation is part of the testing strategy, design test cases with automation in mind. Ensure that test cases are modular and can be easily automated.

Logging and Reporting:

Include provisions for logging test execution details and generating comprehensive test reports. This facilitates tracking the progress of testing activities.

Data Independence:

Ensure that test cases are not dependent on the state of previous test cases. Each test case should be able to run independently.
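
A minimal pytest sketch of this principle uses a fixture so each test gets fresh data; the Cart class is hypothetical and exists only to make the example runnable.

```python
# A minimal sketch of data independence: a pytest fixture gives every
# test a fresh Cart, so the tests pass in any order and never share
# state. The Cart class is hypothetical.
import pytest

class Cart:
    def __init__(self):
        self.items = []

    def add(self, item):
        self.items.append(item)

@pytest.fixture
def cart():
    return Cart()  # a new, empty cart for every test

def test_cart_starts_empty(cart):
    assert cart.items == []

def test_add_single_item(cart):
    cart.add("book")
    assert cart.items == ["book"]
```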

Accessibility and Usability:

Consider including test cases that verify the accessibility and usability of the application, especially if these aspects are critical for end-users.

Regression Testing:

Consider the impact of changes on existing functionality and include relevant regression test cases to ensure that new updates do not introduce defects into previously working features.

Continuous Improvement:

Regularly review and update test cases to incorporate lessons learned, accommodate changes in requirements, and improve overall testing efficiency.


Software Testing Metrics

 Test metrics play a crucial role in software testing by providing quantitative and qualitative insights into the testing process and the quality of the software being developed. 

Why do we need Test Metrics?

Performance Measurement:

Test metrics help measure the performance and progress of the testing process. They provide data on test execution, test coverage, and defect status, allowing teams to assess their efficiency.

Quality Assessment:

Metrics such as defect density, defect leakage, and defect rejection ratio help assess the quality of the software. These metrics provide insights into the effectiveness of the testing process and identify areas that need improvement.

Resource Management:

Test metrics assist in resource management by helping teams understand how efficiently resources are utilized during testing. This includes tracking the number of executed, passed, and failed test cases, allowing teams to optimize their testing efforts.

Risk Identification:

Metrics highlight potential risks and issues in the software development and testing process. For example, a high defect density may indicate areas with a higher risk of defects, enabling teams to focus on critical areas.

Decision Making:

Test metrics provide data-driven insights for decision-making. Project managers, QA leads, and other stakeholders can make informed decisions based on metrics such as test coverage, defect trends, and test execution status.

Continuous Improvement:

Metrics are valuable for continuous improvement. By analyzing historical data, teams can identify patterns, trends, and areas for improvement in the testing process. This leads to more effective and efficient testing practices over time.

Communication and Reporting:

Metrics serve as a communication tool for various stakeholders. They provide a standardized way to communicate the status of testing efforts, allowing for transparent reporting and facilitating collaboration among team members.

Benchmarking:

Test metrics can be used for benchmarking against industry standards or best practices. By comparing metrics with established benchmarks, teams can identify areas where they excel and areas that may need improvement.

Goal Alignment:

Metrics help align testing activities with project goals and objectives. By tracking progress against predefined metrics, teams can ensure that testing efforts are aligned with the overall project and quality assurance objectives.

Efficiency Improvement:

Test metrics highlight bottlenecks, inefficiencies, or areas of improvement in the testing process. This allows teams to implement corrective actions and optimize their testing strategies for better efficiency.

SOFTWARE TESTING METRICS

  1. % of Test Cases Executed:

    • (No. of Test Cases Executed / Total No. of Test Cases Written) × 100

  2. % of Test Cases NOT Executed:

    • (No. of Test Cases NOT Executed / Total No. of Test Cases Written) × 100

  3. % Test Cases Passed:

    • (No. of Test Cases Passed / Total Test Cases Executed) × 100

  4. % Test Cases Failed:

    • (No. of Test Cases Failed / Total Test Cases Executed) × 100

  5. % Test Cases Blocked:

    • (No. of Test Cases Blocked / Total Test Cases Executed) × 100

  6. Defect Density:

    • No. of Defects Found / Size (No. of Requirements)

  7. Defect Removal Efficiency (DRE):

    • (A / (A + B)) × 100

      • A: Defects identified and fixed during testing (Fixed Defects)
      • B: Defects missed in testing and identified by the customer (Missed Defects)

  8. Defect Leakage:

    • (No. of Defects Found in UAT / No. of Defects Found in Testing) × 100

  9. Defect Rejection Ratio:

    • (No. of Defects Rejected / Total No. of Defects Raised) × 100

  10. Defect Age:

    • Fixed Date - Reported Date

  11. Customer Satisfaction:

    • Measured by the number of complaints per period of time.
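
To show how these formulas combine, here is a minimal Python sketch that computes the metrics above; all input numbers are made-up sample values used only to show the arithmetic.

```python
# A minimal sketch computing the testing metrics above. All input
# numbers are made-up sample values.

total_written    = 200   # test cases written
executed         = 180   # test cases executed
passed, failed   = 150, 20
blocked          = 10
defects_testing  = 40    # defects found (and fixed) during testing
defects_uat      = 5     # defects missed in testing, found in UAT
defects_rejected = 4
total_raised     = defects_testing + defects_rejected
requirements     = 25    # size, in number of requirements

def pct(part, whole):
    return round(part / whole * 100, 1)

print("% executed:     ", pct(executed, total_written))                         # 90.0
print("% NOT executed: ", pct(total_written - executed, total_written))         # 10.0
print("% passed:       ", pct(passed, executed))                                # 83.3
print("% failed:       ", pct(failed, executed))                                # 11.1
print("% blocked:      ", pct(blocked, executed))                               # 5.6
print("Defect density: ", round(defects_testing / requirements, 2))             # 1.6
print("DRE:            ", pct(defects_testing, defects_testing + defects_uat))  # 88.9
print("Defect leakage: ", pct(defects_uat, defects_testing))                    # 12.5
print("Rejection ratio:", pct(defects_rejected, total_raised))                  # 9.1
```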




