14 January, 2025

Questions asked in manual testing interviews

1. What are your roles and responsibilities?

Roles and Responsibilities:

  1. Requirement Analysis:

    • Analyzing functional and non-functional requirements to ensure a clear understanding of project objectives.
    • Collaborating with stakeholders, business analysts, and developers to address ambiguities in requirements.
  2. Test Case Design and Management:

    • Writing detailed, comprehensive, and well-structured test cases for both web-based and mobile applications.
    • Ensuring test cases cover positive, negative, and edge scenarios to achieve robust testing.
  3. Manual Testing:

    • Performing manual testing for various features across web and mobile platforms.
    • Executing functional, regression, smoke, and exploratory tests to identify defects early.
  4. Test Case Management in JIRA:

    • Adding and managing test cases in JIRA for traceability and easy access.
    • Executing test cases directly within JIRA and logging results for transparency.
  5. Defect Tracking and Reporting:

    • Logging and tracking defects in JIRA with detailed reproduction steps, screenshots, and severity.
    • Communicating defects and their impacts to stakeholders and development teams for prompt resolution.
  6. Collaboration with Stakeholders:

    • Regularly updating stakeholders on testing progress, challenges, and defect status.
    • Participating in sprint planning and daily stand-up meetings to align testing efforts with project timelines.
  7. Quality Assurance Advocacy:

    • Ensuring the quality of deliverables by maintaining a user-centric perspective throughout the testing process.
    • Recommending process improvements and providing feedback to enhance product quality.

2. What is end-to-end testing?

End-to-end testing is a testing approach that validates the complete functionality of a system, ensuring all its components work together as expected. It simulates real-world scenarios to test the entire application flow from start to finish.
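The "entire application flow from start to finish" idea can be pictured as a toy flow runner: a minimal Python sketch (the step names are invented for illustration) that walks a whole user journey in order and stops at the first broken link.

```python
# Illustrative only: an end-to-end test exercises the full user journey in
# order, so one broken step fails the whole scenario.
E2E_CHECKOUT_FLOW = [
    "open app", "log in", "search product", "add to cart",
    "enter payment", "confirm order", "receive confirmation email",
]

def run_e2e(flow, step_results):
    """Walk the steps in order; return (passed, first_failing_step)."""
    for step in flow:
        if not step_results.get(step, False):
            return False, step      # the chain breaks here
    return True, None
```

In a real project each step would be an actual manual check or automated action; the point is the ordering and the fail-fast behavior.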

3. How do you start testing?

Testing starts with understanding project requirements, preparing the test environment, and creating test cases. Once the groundwork is ready, testing is executed step-by-step, ensuring all functionalities work as expected, defects are identified, and results are shared with stakeholders.

4. How do you install builds (app/mobile testing)?

Steps to Install Builds for App/Mobile Testing

Obtain the Build:
  • Get the build from the development team or a shared repository (e.g., Jenkins, Firebase, App Center, or email).
  • Ensure you have the correct version of the build (APK for Android or IPA for iOS).
Check Prerequisites:

For Android:
  • Enable "Install from Unknown Sources" in the device settings if not using the Play Store.
For iOS:
  • Ensure the device is registered in the developer provisioning profile if required.
  • Install any necessary certificates or profiles.
Install the Build:

For Android:
  • Transfer the APK file to the mobile device via USB, email, or a cloud service.
  • Tap on the APK file to install it.
For iOS:
  • Use tools like TestFlight, App Store, or manual installation through Xcode.
  • Follow the prompts to install the IPA file.
Verify Installation:
  • Open the app after installation to ensure it launches correctly.
  • Check for any immediate issues like crashes or UI misalignment.
Update the Build (if needed):
  • Uninstall the previous version before installing the new build (if required).
  • Clear cache or data if needed to avoid conflicts.
Log Installation Issues:

If the build fails to install, note the error messages and report them to the development team for troubleshooting.
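For Android, the install/verify/update steps above can be scripted with adb. The sketch below is a minimal illustration, assuming `adb` is on the PATH; the APK path and package name are placeholders, not from the original post.

```python
# Sketch of automating a clean Android build install with adb.
import subprocess

APK_PATH = "builds/app-v2.3.apk"    # hypothetical build artifact
PACKAGE = "com.example.financeapp"  # hypothetical package id

def install_commands(apk_path, package):
    """Return the adb commands for a clean install:
    uninstall the old build, install the new APK, then verify it exists."""
    return [
        ["adb", "uninstall", package],                # remove the previous build
        ["adb", "install", "-r", apk_path],           # install, replacing if present
        ["adb", "shell", "pm", "list", "packages", package],  # verify install
    ]

def run_install(apk_path, package):
    for cmd in install_commands(apk_path, package):
        # check=False: uninstall fails harmlessly if the app was never installed
        subprocess.run(cmd, check=False)
```

For iOS the equivalent would go through Xcode or TestFlight rather than a command-line install like this.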

5. How do you inform the team if the build is incorrect?

When the build is incorrect, it’s important to clearly communicate the issue, provide evidence, and ensure that the team is aware of what needs to be fixed.

Here's a streamlined process with a real-time example.

Identify the Issue:
  • Check the app for issues like crashes, installation failures, or UI bugs.
  • Ensure the problem isn’t related to the environment or setup.
Document the Problem:
  • Take screenshots, logs, or screen recordings that show the issue.
  • Example: "The app crashes on launch with the error message 'App has stopped working.'"
Report the Issue:
  • Log the issue in a tracking tool like JIRA, including the build version, device details, and steps to reproduce.
  • Example: create a JIRA ticket: "Build version 2.3 fails to launch on Android 12. Issue occurs when opening the app."
Notify the Team:
  • Inform the developers via Slack or email, mentioning the JIRA ticket for easy reference.
  • Example message: "Hi Team, the latest build (version 2.3) is failing to launch on Android 12 devices. I’ve logged the issue in JIRA (Ticket #345), which includes steps to reproduce and screenshots. Please check and provide an updated build."
Follow Up:
  • Ask when a fixed build will be available and confirm any further details needed.

Real-Time Example:
Imagine you’re testing an Android app. You install build version 2.3 on an Android 12 device, and it crashes on startup. You verify the issue on a couple of different devices and see the same behavior.

You capture a screenshot of the crash screen and log the bug in JIRA: "App crashes on startup – Build 2.3 – Android 12."
You send a Slack message to the team:
"Hi Team, the app crashes on launch with Build 2.3 on Android 12. Please check the JIRA ticket #345 for more details."
The development team investigates and fixes the issue, providing a corrected build for you to test.

6. How do you raise bugs?
Imagine you're testing a mobile app, and you notice that the app crashes when you try to log in:

Reproduce the Issue:

Open the app, click on the ‘Login’ button, and the app crashes.

Bug Report in JIRA:

Title: “App crashes on login screen”
Steps to Reproduce:
  1. Open the app.
  2. Tap the ‘Login’ button.
  3. The app crashes with an error message.
Expected Behavior: “App should log in and open the home screen.”
Actual Behavior: “App crashes with error: ‘App has stopped working.’”
Device Details: “Samsung Galaxy S10, Android 12, App version 2.3”
Screenshots/Logs: Attach crash logs or screenshots of the error.
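A report like the one above is usually filed through the Jira UI, but it can also be assembled programmatically via Jira's REST API (`POST /rest/api/2/issue`). The sketch below is hypothetical: the instance URL, project key, and credentials are placeholders, and the description layout simply mirrors the fields listed above.

```python
# Hypothetical sketch: building and filing the bug report via Jira's REST API.
import json
from urllib import request

JIRA_URL = "https://example.atlassian.net"  # placeholder instance

def bug_payload(summary, steps, expected, actual, environment, project_key="APP"):
    """Assemble the issue fields, mirroring the report structure above."""
    description = (
        f"*Steps to Reproduce:*\n{steps}\n\n"
        f"*Expected:* {expected}\n"
        f"*Actual:* {actual}\n"
        f"*Environment:* {environment}"
    )
    return {
        "fields": {
            "project": {"key": project_key},
            "issuetype": {"name": "Bug"},
            "summary": summary,
            "description": description,
        }
    }

def file_bug(payload, auth_header):
    """POST the payload to Jira (requires a real instance and credentials)."""
    req = request.Request(
        f"{JIRA_URL}/rest/api/2/issue",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json", "Authorization": auth_header},
        method="POST",
    )
    request.urlopen(req)  # would create the ticket on a real Jira instance
```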

7. How do you follow up with developers for the bugs?
Steps to Follow Up with Developers for Bugs
  1. Initial Acknowledgment:

    • After raising the bug, wait for acknowledgment from the developer. Ensure they have seen the bug and are working on it.
    • Example: “Hi [Developer Name], I’ve logged a bug (Ticket #123) regarding the app crash on login. Please confirm when you get a chance.”
  2. Set a Timeline:

    • Agree on a reasonable timeline for fixing the bug. This can be discussed during daily standups or through the issue tracker.
    • Example: “Do you think the issue will be resolved by the end of the day?”
  3. Regular Check-Ins:

    • If the bug is not resolved within the agreed time, follow up to check the status.
    • Example: “Hi [Developer Name], just checking in on the status of the login crash bug (Ticket #123). Do you need any further details from my side?”
  4. Be Clear and Polite:

    • Politely ask for updates without being too pushy.
    • Example: “I wanted to follow up on the login crash issue. Can you provide an estimated time for the fix?”
  5. Escalate if Necessary:

    • If the bug is critical and the fix is delayed, escalate the issue to the team lead or project manager.
    • Example: “The crash bug (Ticket #123) is blocking further testing. Could you please assist in prioritizing the fix?”
  6. Test and Confirm Fixes:

    • Once the fix is made, retest the bug. Let the developer know if it’s resolved or if there are any new issues.
    • Example: “I’ve tested the login crash issue, and it looks resolved now. Thank you for the fix!”
  7. Close the Bug:

    • Once the bug is confirmed fixed, update the bug tracking system and notify the developer that the issue is resolved.
    • Example: “The login crash issue (Ticket #123) is fixed and works as expected now. Closing the ticket. Thanks!”

Real-Time Example:

  • You raise a bug:
    “App crashes on login screen - Ticket #123”
  • Developer acknowledges:
    “Thanks, I’ll check it and let you know.”
  • Follow-up (after 1-2 days):
    “Hi [Developer], any updates on the login crash issue? It’s blocking testing for me.”
  • Developer responds:
    “I’ve found the issue, working on the fix now. Should be ready in a couple of hours.”
  • After the fix:
    “I’ve tested the login issue, and it’s fixed. Closing the ticket now.”
8. If a bug is fixed, how would you test it, and when would you close it?

When bugs are fixed, the process of testing and closing the bug involves several steps to ensure that the issue is completely resolved and doesn’t affect other areas. Here’s how you can go about it:


Steps to Test Fixed Bugs and Close Them

  1. Verify the Fix:

    • Once the bug is marked as fixed by the developer, test the specific scenario where the bug occurred.
    • Make sure that the issue no longer appears after the fix.
    • Example: If the bug was a crash on login, try logging in again to see if the app behaves as expected.
  2. Perform Regression Testing:

    • After confirming the bug is fixed, run regression tests to ensure that the fix didn’t break anything else in the application.
    • Example: Check if other parts of the app, like user registration or navigation, are still working correctly after the fix.
  3. Re-test on Different Devices (if applicable):

    • If the bug was device-specific, test the fix on all devices or platforms where the bug was occurring (e.g., Android/iOS, different screen sizes).
    • Example: If the bug was on Android 12, check if the fix works on both Android 12 and other versions, if applicable.
  4. Confirm with Stakeholders (if needed):

    • Sometimes, especially for high-priority bugs, you may need to confirm the fix with stakeholders or product owners to ensure it aligns with expectations.
    • Example: “The issue with the login crash is fixed and works as expected on Android 12. Please confirm if you’re satisfied with the fix.”
  5. Close the Bug:

    • Once you’ve verified that the bug is fully fixed and no other issues were introduced, update the status of the bug in the defect tracking tool (like JIRA).
    • Add a comment indicating that the bug was successfully tested and resolved.
    • Example: “Tested and confirmed the fix for login crash. The issue is resolved and no further problems found. Closing the ticket.”
  6. Monitor for Recurrence (Optional):

    • After closing the bug, continue monitoring the application to ensure the issue doesn’t recur during other testing sessions or when new updates are introduced.

Real-Time Example:

  • Bug Raised:
    “App crashes on login screen - Ticket #123.”
  • Developer Fixes the Bug:
    “The issue has been resolved. Please verify.”
  • You Test the Fix:
    • Log into the app, and it successfully opens without crashing.
    • No crashes or errors are observed.
  • Regression Testing:
    • Test other features like user registration and search to ensure they work as expected.
    • Check the app on different Android devices.
  • You Confirm the Fix:
    “I’ve tested the fix, and the app no longer crashes on login. Everything seems fine now.”
  • Closing the Bug:
    • Update the JIRA ticket with the status “Resolved” and add a comment:
      “Tested and verified the fix. The login issue is resolved. Closing the ticket.”

9. When do you perform regression testing?

Regression testing is performed to ensure that new changes (such as bug fixes, new features, or updates) have not negatively impacted existing functionality. Here’s when and why you should perform regression testing:


1. After Bug Fixes

  • Reason: Bug fixes often involve changes to existing code, and these changes can sometimes break other parts of the application.
  • When to Perform: Once a bug is fixed, before closing it, perform regression testing to verify that the fix hasn't introduced new issues.
  • Example: If a login issue is fixed, make sure other areas like registration, user profile, and navigation still work fine.

2. After New Features or Functionality Are Added

  • Reason: Adding new features or functionality may affect existing parts of the application, especially when it interacts with other components.
  • When to Perform: After new features are implemented and integrated into the system, run regression tests to ensure the rest of the application still functions as expected.
  • Example: If a new payment gateway is added, test other functionalities like order placement and email confirmation to ensure they still work.

3. After Application or System Updates

  • Reason: Updates can modify code, libraries, frameworks, or third-party services, which may cause unforeseen issues.
  • When to Perform: After any updates (OS, app versions, or dependencies) are applied, test the application to ensure previous functionalities are intact.
  • Example: After upgrading the Android version or a mobile app's SDK, run regression tests to verify the app works on the new version.

4. After Merging Code from Different Branches

  • Reason: Code merging from different branches or developers can introduce integration issues, causing bugs or unintended behavior.
  • When to Perform: After merging feature branches, perform regression testing to ensure that the integration didn’t affect existing functionality.
  • Example: If a developer merges their feature branch into the main branch, test all critical features again to ensure no conflicts.

5. After Major Code Refactoring

  • Reason: Refactoring changes the structure of the code but doesn’t alter its functionality. However, it can accidentally impact how the system behaves.
  • When to Perform: After refactoring the code to improve performance, readability, or maintainability, regression testing is necessary to confirm that nothing has broken.
  • Example: After refactoring the backend code of the app, test core functionalities like login, payment processing, and navigation.

6. After Performance or Security Enhancements

  • Reason: Optimizing for performance or implementing new security measures may impact other parts of the application.
  • When to Perform: After improving performance or adding security features, perform regression tests to ensure no functionality was disrupted.
  • Example: If a new encryption method is added to protect user data, verify that features like user login and data retrieval still work as expected.

7. After User Interface (UI) Changes

  • Reason: Even small changes in the UI (e.g., layout adjustments, design tweaks, or changes in buttons) can have unintended effects on functionality.
  • When to Perform: After any UI changes are made, perform regression testing to ensure the app still works smoothly.
  • Example: After changing the placement of a login button, ensure that the button works properly and does not interfere with other UI components.

Real-Time Example of Regression Testing:

  • Scenario: A bug is fixed where the app crashes on login after updating the SDK.
  • Regression Test Performed:
    • Test login functionality to confirm the crash is resolved.
    • Test other key functionalities like user registration, password reset, and data synchronization.
    • Check the app on different devices and operating systems to ensure it works on all platforms.
    • Verify that the new SDK doesn’t impact existing features.
10. How would you perform regression testing?
Regression testing ensures that recent changes (like bug fixes or new features) don’t negatively affect existing functionality. Here's a quick breakdown:
  1. Identify Affected Areas: Focus on features that were changed or related ones (e.g., after fixing a login crash, test login, registration, etc.).

  2. Prepare Test Environment: Set up the testing environment with the right devices, OS, or browsers (e.g., Android 12, iOS 15).

  3. Select Test Cases: Choose test cases covering impacted features and critical areas (e.g., login, payment, user profile).

  4. Execute Tests: Run the selected tests, both manual and automated, to verify nothing else is broken.

  5. Log Defects: Report any issues found (e.g., UI misalignment) with clear steps in the bug-tracking system.

  6. Retest After Fixes: After fixes are applied, retest the affected areas to ensure everything works properly.

  7. Close the Bug: Once the issue is resolved, mark the bug as closed and confirm the fix.
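Steps 1 and 3 above (identifying affected areas and selecting test cases) can be sketched as a small mapping from changed features to regression cases. This is an illustrative toy, not a real tool; the case names and the area relationships are invented.

```python
# Toy regression-suite selection: pick cases for the changed area
# plus the areas known to be related to it.
REGRESSION_SUITE = {
    "login":        ["TC-01 valid login", "TC-02 invalid password", "TC-03 session timeout"],
    "registration": ["TC-10 new account", "TC-11 duplicate email"],
    "payment":      ["TC-20 card payment", "TC-21 refund"],
}

# Which areas a change tends to ripple into (invented for illustration).
RELATED = {"login": ["registration"], "payment": ["login"]}

def select_cases(changed_area):
    """Return the test cases for the changed area and its related areas."""
    areas = [changed_area] + RELATED.get(changed_area, [])
    cases = []
    for area in areas:
        cases.extend(REGRESSION_SUITE.get(area, []))
    return cases
```

In practice the mapping lives in a test management tool or the testers' heads, but the principle is the same: a change pulls in its own cases plus those of its neighbours.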

12. What is Agile?
Agile is a project management and software development approach that focuses on delivering small, incremental improvements in a flexible and collaborative way. It emphasizes continuous feedback, adaptability to changes, and delivering value to the customer in short cycles, known as sprints.

Key Points of Agile:

  • Iterative Development: Work is broken down into smaller chunks (sprints), allowing teams to deliver functional features frequently.
  • Collaboration: Regular communication between team members, stakeholders, and customers.
  • Flexibility: Agile allows teams to adapt to changes in requirements even late in the development process.
  • Customer Focus: The goal is to deliver high-quality work that meets the customer’s needs.

Real-Time Example of Agile:

Let’s say a team is developing a new mobile app for managing personal finance.

  1. Sprint 1: The team delivers a basic version of the app that allows users to create an account and add expenses.

    • Customer Feedback: Users find it helpful but suggest adding a feature to categorize expenses.
  2. Sprint 2: Based on the feedback, the team adds expense categories and improves the UI.

    • Customer Feedback: Users now want to see monthly expense reports.
  3. Sprint 3: The team adds a feature for monthly reports and refines existing functionality.

    This process continues, with small updates being delivered regularly, allowing the app to evolve based on user feedback, leading to a product that better meets the customer’s needs.


In this example, Agile is used to continuously improve the app through small, frequent updates, ensuring the end product aligns closely with user needs.

13. What are sprints, grooming sessions, discovery meetings, scrum calls, stand-ups, triage meetings and retro meetings, and when do they happen?

1. Sprints

  • Definition: A sprint is a set period of time (usually 1-4 weeks) in which a specific set of tasks or features are developed, tested, and delivered.
  • When It Happens: Sprints typically occur in regular intervals throughout the project lifecycle.

Example:

  • The development team plans a 2-week sprint to add a new feature, such as integrating a payment gateway into an e-commerce app. At the end of the 2 weeks, they deliver a fully tested feature.

2. Grooming Sessions (Backlog Refinement)

  • Definition: Grooming (or backlog refinement) is a meeting where the team reviews the product backlog to ensure items are clearly defined, prioritized, and ready for the upcoming sprints.
  • When It Happens: Typically happens mid-sprint or before the next sprint planning meeting.

Example:

  • Before starting the next sprint, the product owner and team go through the backlog, prioritize features like “user login via Google” and clarify any questions. This helps the team understand the tasks ahead.

3. Discovery Meetings

  • Definition: Discovery meetings are early discussions in the project to gather initial requirements, understand business goals, and define user stories.
  • When It Happens: These happen at the beginning of the project or when starting a new feature.

Example:

  • In the first week of a project, the team meets to discuss the requirements for a new mobile app. The stakeholders clarify the features needed, such as user authentication, profile creation, and payment integration.

4. Scrum Call (Daily Scrum)

  • Definition: A daily meeting (usually 15 minutes) where the team discusses what they worked on yesterday, what they plan to work on today, and any blockers they are facing.
  • When It Happens: Happens daily, typically in the morning.

Example:

  • A developer says: “Yesterday, I worked on the login feature. Today, I’ll integrate payment options. I need help with API access.”
  • The Scrum Master ensures that any blockers are removed and the team stays on track.

5. Stand-Up Meeting

  • Definition: Another term for the Scrum Call. It’s a quick, daily meeting where each team member shares updates on their work.
  • When It Happens: Happens daily, just like the Scrum call.

Example:

  • The development team gathers for a 10-minute stand-up to give updates. The product manager asks if anyone has roadblocks and provides clarifications on requirements if needed.

6. Triage Meeting

  • Definition: A meeting where the team reviews and prioritizes bugs, issues, or tasks that need immediate attention, often based on severity and impact.
  • When It Happens: Happens regularly (daily or weekly) depending on the project’s needs, especially when there’s a backlog of bugs or critical issues.

Example:

  • The QA team discovers several high-priority bugs in the app during the sprint. The triage meeting is called to decide which bugs to fix immediately (like a crash in the checkout process) and which can be fixed in future sprints.

7. Retro Meetings (Retrospective)

  • Definition: A meeting held at the end of a sprint where the team reflects on what went well, what didn’t, and how they can improve in the next sprint.
  • When It Happens: Happens at the end of each sprint.

Example:

  • After completing a sprint, the team meets for a retrospective. They discuss:
    • What went well: "We completed all tasks on time."
    • What didn’t go well: "The new feature took longer than expected due to unclear requirements."
    • Action items for improvement: "For the next sprint, we will have clearer user stories and more frequent check-ins."

Summary of When They Happen:

  1. Sprints: Happens regularly (1-4 weeks), with new features or tasks.
  2. Grooming Sessions: Happens mid-sprint or before sprint planning to refine the backlog.
  3. Discovery Meetings: Happens at the beginning or during feature planning.
  4. Scrum Calls/Stand-Up Meetings: Happens daily, usually in the morning.
  5. Triage Meetings: Happens regularly, usually to address bugs or critical issues.
  6. Retro Meetings: Happens at the end of each sprint to reflect on progress and improve.

Real-Time Example:

  • Project: A team is developing an e-commerce app.
  • Sprints: The team plans a 2-week sprint to implement the shopping cart feature.
  • Grooming Session: The team holds a grooming session to clarify user stories for the next sprint (like adding a wishlist feature).
  • Discovery Meeting: Before development begins, the team has a discovery meeting with the product owner to understand business goals and user needs for the checkout page.
  • Scrum Calls/Stand-Up: Every morning, the team has a 10-minute stand-up where the developer shares progress on the shopping cart, and the designer discusses UI changes.
  • Triage Meeting: The team meets to prioritize critical bugs found during the sprint, such as broken links on product pages.
  • Retro Meeting: After the sprint ends, the team reflects on how well the sprint went and discusses improvements for the next one, like more detailed user stories and earlier testing.

These meetings help maintain communication, ensure progress, and improve efficiency throughout the project.

14. What is a Beta test and why is it important?

Beta testing is the second phase of user acceptance testing (following alpha testing), where a product or feature is released to a small group of real users outside the development team to test its functionality and performance in a real-world environment. It is typically the last stage of testing before the product is made publicly available.

Why is Beta Testing Important?

  1. Real-World Feedback:

    • Beta testing allows developers to gather feedback from actual users who interact with the product in a way that developers might not have anticipated. It helps identify bugs, usability issues, and missing features.
  2. Usability Validation:

    • It helps validate whether the product is user-friendly and if it meets user expectations.
  3. Performance Under Real Conditions:

    • Beta testing reveals how the product performs in real-world scenarios (e.g., on different devices, networks, or operating systems), which might not be fully simulated in earlier stages.
  4. Identifying Hidden Issues:

    • Users may encounter bugs or edge cases that weren’t discovered in internal testing, helping to uncover issues before the product reaches a larger audience.
  5. Building Hype & Marketing:

    • In some cases, beta testing is used as a marketing tool. It creates buzz around the product, allowing users to feel involved before the official release.

Real-Time Example of Beta Testing:

Example 1: Beta Testing for a Mobile App

Suppose a company is developing a fitness tracking app with new features like workout tracking, meal logging, and social sharing.

  1. Beta Phase:

    • The app is made available to a small group of fitness enthusiasts (e.g., 500 users) for testing.
  2. Feedback from Beta Testers:

    • Users discover that the workout tracking feature doesn’t sync well with some wearable devices (e.g., Fitbit). Additionally, they suggest a new feature to share progress on social media.
    • Testers also report that the app consumes a lot of battery power during use.
  3. Importance:

    • Developers can fix the syncing issues, optimize battery usage, and implement the social sharing feature before releasing the app to the public.

Example 2: Beta Testing for a Game

A video game studio releases a beta version of a game before its official launch.

  1. Beta Phase:

    • The game is released to a limited number of users who test out the gameplay, multiplayer features, and performance on different hardware configurations.
  2. Feedback from Beta Testers:

    • Beta players notice issues like lag during multiplayer matches, missing sound effects, and bugs with character movement.
    • They also suggest improving the game’s tutorial to make it easier for beginners.
  3. Importance:

    • The studio can fix these issues, optimize game performance, and enhance the user experience before the full game release, ensuring a smoother experience for a larger audience.

15. What is A/B testing (not alpha/beta testing)?

A/B testing is a method of comparing two versions of a webpage, app, or other product to determine which one performs better. In an A/B test, two variants (A and B) are shown to different segments of users, and their behavior is measured to see which version delivers better results, such as higher conversion rates, more engagement, or better user experience.

Why is A/B Testing Important?

  • Data-Driven Decisions: It allows businesses to make decisions based on actual user data rather than assumptions or opinions.
  • Optimization: It helps optimize products, features, or marketing campaigns by improving elements that impact user behavior and performance.
  • Minimizing Risk: A/B testing helps to ensure that changes made to a website, app, or product are beneficial before implementing them fully.

Real-Time Example of A/B Testing

Example 1: E-Commerce Website A/B Testing

Scenario:
An e-commerce website wants to increase its sales conversion rate by changing the color of the "Buy Now" button.

  1. Variant A (Control): The original button is blue.
  2. Variant B (Test): The new button is green.

Steps:

  • The website randomly divides its visitors into two groups: one sees the blue button (Variant A), and the other sees the green button (Variant B).
  • Both groups are tracked to see how many users click on the "Buy Now" button and make a purchase.

Results:

  • After a week, the data shows that users who saw the green button clicked more frequently and converted at a higher rate.

Decision:

  • The e-commerce website decides to switch to the green button for all users because it resulted in better performance.

Example 2: A/B Testing for a Mobile App

Scenario:
A fitness app wants to test whether showing a motivational message after users complete a workout will increase engagement.

  1. Variant A (Control): The app shows the standard "Great Job!" message after a workout.
  2. Variant B (Test): The app shows a motivational quote like, "You're one step closer to your fitness goals!".

Steps:

  • The app randomly shows one of the two messages to different users.
  • The app tracks user behavior, such as how often users return to the app after their workout, and whether they complete more workouts.

Results:

  • The data shows that users who saw the motivational quote returned more frequently and completed more workouts than those who saw the basic "Great Job!" message.

Decision:

  • The app team decides to implement the motivational quote for all users, as it improved user retention and engagement.
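The button experiment above ultimately comes down to comparing two conversion rates and deciding whether the difference is real or just noise. A rough Python sketch of that comparison using a two-proportion z-test; the visitor and conversion counts are invented for illustration.

```python
# Two-proportion z-test: is variant B's conversion rate genuinely higher?
from math import sqrt, erf

def z_test(conv_a, n_a, conv_b, n_b):
    """Return (z statistic, two-sided p-value) for the rate difference."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)           # pooled conversion rate
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the normal CDF, via the error function.
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Hypothetical week of traffic: blue button (A) vs green button (B).
z, p = z_test(conv_a=120, n_a=5000, conv_b=160, n_b=5000)
significant = p < 0.05  # conventional cutoff; agree on thresholds beforehand
```

Real A/B platforms handle sample sizing, stopping rules, and multiple metrics, but the underlying comparison is essentially this.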
16. What should you do when an issue is found in Live?
When an issue is found in the live environment, quick action is necessary to minimize user impact. The steps include acknowledging the issue, reproducing and investigating it, notifying stakeholders, fixing the problem, testing the fix, and deploying it to production. Ongoing monitoring and communication with users are essential, and a post-incident review ensures that similar issues don’t occur in the future.

17. What should you do when you release your application live?
When releasing an application live, it’s crucial to ensure that everything is thoroughly tested, deployed carefully, and actively monitored. After the release, continuous monitoring for performance, errors, and user feedback ensures a smooth user experience. Rapid response to issues and communication with stakeholders will help keep the application stable and improve over time.

18. How would you communicate with the development team, and should you?
Yes, communication with developers is essential for effective testing and a successful release; the collaboration between QA and development teams is what makes software high-quality and bug-free. However, communication should be:
  • Clear and concise: provide all necessary details (e.g., steps to reproduce bugs, expected vs. actual behavior, screenshots).
  • Respectful: maintain professionalism and respect for their time.
  • Proactive: don’t wait to be asked; if you encounter issues or blockers, let the team know promptly.
  • Collaborative: work together to improve the product and find the best solutions.

Example of Communication in Action:

Scenario:
During your testing of a mobile banking app, you notice that the "Transfer Money" feature is not working for certain user accounts (it throws an error).

  1. Raise a Bug in Jira:

    • You create a bug ticket in Jira with the following information:
      • Title: “Transfer Money feature fails for specific user accounts”
      • Steps to Reproduce: List the steps needed to reproduce the error.
      • Expected vs. Actual Result: Expected: Transfer completes. Actual: Error displayed.
      • Screenshot/Logs: Attach relevant logs or screenshots.
  2. Slack Message:

    • You send a quick message to the developer:
      • “Hey [Developer's Name], I encountered an issue in the transfer feature when testing on user accounts with balances over $500. The transaction fails with error code XYZ. It’s logged in Jira as ticket #123. Could you take a look when you get a chance?”
  3. Stand-Up Communication:

    • During the daily stand-up, you mention:
      • “The transfer money bug is blocking further tests, and the issue has been raised in Jira. Waiting on a fix.”
  4. Post-Fix Validation:

    • Once the developer fixes the bug, you re-test the transfer feature and confirm the issue is resolved. You can then close the bug ticket in Jira and inform the dev team about the successful fix.
19. Why is communication important?
Communication is crucial in software testing and development because it fosters clarity, prevents mistakes, speeds up problem resolution, enhances collaboration, reduces risks, and ensures that the end product is of high quality. It helps ensure that everyone is on the same page, leading to better decision-making, faster delivery, and a more successful product.
20. What tools have you used for creating test cases, recording results, and raising bugs (e.g., TestRail, TestLink, Jira, Trello)?

I have used Jira only.

22. What do you do in the design phase?
During the design phase of testing, you focus on thoroughly understanding the requirements, creating a solid test strategy, identifying critical test scenarios, and preparing detailed test cases. You also ensure that the test environment is set up correctly and collaborate with developers to resolve ambiguities. This phase is crucial because it sets the stage for the actual testing phase, ensuring that all necessary areas of the application are covered.

By planning and designing tests carefully, you minimize the risk of missing bugs and contribute to the overall quality of the product.

23. How do you confirm you are performing regression testing correctly?

To confirm you're performing regression testing correctly:

  • Ensure all previously tested features are re-validated.
  • Prioritize high-risk areas and keep your regression suite up to date.
  • Run a smoke test first to validate the stability of the build.
  • Automate repetitive tests and track any defects that arise.
  • Compare results with previous builds and communicate with the dev team.

This ensures that your regression tests are thorough, efficient, and reliable, leading to higher product quality with each release.

24. When do you think that you are confident with your testing?

You can be confident in your testing when:

  • All test cases are executed and pass (or acceptable defects are reported).
  • Critical business scenarios and high-priority defects are thoroughly tested and addressed.
  • The regression suite passes successfully without breaking existing functionality.
  • Test coverage is comprehensive and includes edge cases.
  • Feedback from stakeholders and users is positive.
  • All risks are mitigated or documented.
  • Metrics show good results for quality and test execution.

Understanding Smoke and Sanity Testing

Smoke Test:
- Validates basic, critical functionalities post a new build or release.
- Offers a broad overview of the core application functionality.
- Conducted early in the testing phase, typically after a new build.
- Emphasizes high-level functionality across the application.
- Confirms build stability for uninterrupted testing, avoiding defective builds.

Sanity Test:
- Confirms specific functionalities or fixes are functioning accurately.
- Targets precise areas related to a fix or change.
- Carried out post minor code alterations or bug fixes.
- Concentrates on particular features, modules, or bug fixes.
- Ensures the correctness of changes or fixes, preventing broader application impacts.
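The smoke/sanity distinction above can be sketched in a few lines: a broad, fixed smoke list gates every new build, while a sanity pass narrows to just the changed area. The check names here are illustrative only.

```python
# Smoke = broad gate on every build; sanity = narrow re-check of one change.
SMOKE_CHECKS = ["app launches", "login works", "home screen loads", "checkout opens"]

def sanity_checks(changed_feature):
    """Narrow re-check targeting only the fixed or changed feature."""
    return [f"{changed_feature} basic flow", f"{changed_feature} error handling"]

def build_accepted(results):
    """A build is stable enough for further testing only if every
    smoke check passed; otherwise the build is rejected early."""
    return all(results.get(check, False) for check in SMOKE_CHECKS)
```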
