
16 January, 2025

Defect Lifecycle: A Real-World Example and Detailed Insights!

 The Defect Lifecycle is the journey a defect takes from identification to closure in the software development lifecycle. It outlines the various stages a defect goes through and helps in tracking, managing, and resolving defects efficiently.


Stages of Defect Lifecycle

  1. New:
    When a tester identifies a defect, it is logged into a defect-tracking tool (e.g., Jira, Bugzilla) with all relevant details (description, steps to reproduce, severity, screenshots, etc.).

    Example: A tester notices that entering invalid credentials on a login page doesn't display the correct error message. They log the defect in Jira with the following details:

    • Title: Incorrect error message for invalid login.
    • Steps to Reproduce: Enter invalid credentials and click "Login."
    • Expected Result: "Invalid username or password" should appear.
    • Actual Result: "System error" appears.

  2. Assigned:
    The defect is reviewed by the lead or project manager and assigned to a developer based on the module or expertise.

    Example: The lead assigns the defect to the backend developer responsible for the login functionality.


  3. Open:
    The developer begins analyzing the defect to identify the root cause.

    Example: The developer investigates and finds that the API responsible for error handling is returning a generic response instead of specific error codes.


  4. Fixed:
    Once the issue is resolved, the developer marks the defect as “Fixed” and updates the comments with the changes made.

    Example: The developer updates the API to return proper error messages and ensures the front-end displays them accurately. They then commit the changes to the code repository.


  5. Retest:
    The tester retests the defect to verify the fix using the steps to reproduce.

    Example: The tester verifies that entering invalid credentials now shows "Invalid username or password" as expected.


  6. Verified:
    If the defect is resolved as expected, the tester marks it as “Verified.”

    Example: After multiple tests, the tester confirms that the error message is displayed correctly across browsers and devices.


  7. Closed:
    If no further issues are found, the defect is marked as “Closed.”

    Example: The lead reviews the resolution, and the defect is officially marked as “Closed” in Jira.


  8. Reopen (if applicable):
    If the issue is found to persist during retesting or UAT, the defect is reopened and re-assigned for further investigation.

    Example: During UAT, the client finds that the error message isn't localized for certain regions. The defect is reopened for localization fixes.


Defect Lifecycle Flow Diagram

Here’s a simplified flow of the lifecycle:
New → Assigned → Open → Fixed → Retest → Verified → Closed
If required: Reopened → Assigned → Open → Fixed


Importance of Defect Lifecycle

  • Ensures transparency: Everyone involved can track the progress of a defect.
  • Improves quality: Proper management ensures all defects are resolved effectively.
  • Boosts collaboration: Facilitates communication between testers, developers, and stakeholders.

15 January, 2025

Smoke Testing: A Real-World Example for Testers

Smoke testing is a preliminary test performed on a new build to verify its basic stability. It ensures that the core functionalities of the application are working as expected before proceeding to detailed testing. Smoke testing acts as a "quick health check" for the software.


Key Characteristics:

  1. Broad and Shallow: Covers major functionalities without going into details.
  2. Executed on Every Build: Ensures the build is stable enough for further testing.
  3. Automation or Manual: Can be done manually or with automation tools.

Real-Time Example:

Scenario:

Suppose you’re testing a web-based e-commerce application, and a new build has been delivered after adding a new "wishlist" feature.

Steps:

  1. Verify the Core Features:

    • Check if the homepage loads properly.
    • Test whether the login and registration functionality works.
    • Ensure the product search feature is functional.
    • Confirm that the product details page displays correctly.
    • Validate that the cart and checkout process are accessible.
  2. Verify the New Feature (Wishlist):

    • Check if users can add items to the wishlist.
    • Confirm that items in the wishlist can be viewed later.
    • Ensure the wishlist does not impact existing features like adding items to the cart.
  3. Outcome:

    • If all core functionalities and the new feature work, the build is declared stable, and detailed testing (functional, regression, etc.) can begin.
    • If any major feature fails (e.g., the login doesn’t work), the build is rejected, and testing is halted until fixes are made.

Example in Practice:

During smoke testing, you focus on ensuring the basic stability of the application. You don’t delve into edge cases or performance details—those are handled in later stages.
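To make this concrete, here is a minimal smoke-suite sketch in Java using Selenium WebDriver and TestNG. The URL, locators, and expected values are hypothetical placeholders for an application like the one described above, not a real site.

import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;
import org.testng.Assert;
import org.testng.annotations.*;

public class SmokeTest {

    private WebDriver driver;

    @BeforeClass
    public void startBrowser() {
        driver = new ChromeDriver();
    }

    @BeforeMethod
    public void openHomePage() {
        driver.get("https://example-shop.com");   // hypothetical application URL
    }

    @Test(priority = 1)
    public void homePageLoads() {
        Assert.assertTrue(driver.getTitle().length() > 0, "Home page should have a title");
    }

    @Test(priority = 2)
    public void loginPageIsReachable() {
        driver.findElement(By.linkText("Login")).click();        // assumed login link
        Assert.assertTrue(driver.getCurrentUrl().contains("login"));
    }

    @Test(priority = 3)
    public void searchIsFunctional() {
        driver.findElement(By.name("q")).sendKeys("shoes");       // assumed search field
        driver.findElement(By.id("search-btn")).click();          // assumed search button
        Assert.assertFalse(driver.findElements(By.cssSelector(".product-card")).isEmpty());
    }

    @Test(priority = 4)
    public void wishlistIsReachable() {
        driver.findElement(By.id("wishlist-link")).click();       // assumed link to the new feature
        Assert.assertTrue(driver.getCurrentUrl().contains("wishlist"));
    }

    @AfterClass
    public void tearDown() {
        driver.quit();
    }
}

If any of these checks fails, the build is rejected before deeper functional or regression testing begins.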




Sanity Testing: A Real-World Example for Testers

Sanity testing is a type of software testing performed after receiving a software build to ensure that the critical functionalities or bug fixes are working as intended. It acts as a checkpoint to determine whether the build is stable enough for further detailed testing. Sanity testing is usually narrow and focused, covering only specific functionalities or areas impacted by recent changes.


Key Characteristics:

  1. Performed After Bug Fixes or Minor Changes: Ensures that the new code or fixes didn’t break existing functionality.
  2. Narrow and Focused: Only specific areas or modules are tested.

Real-Time Example:

Scenario:

Imagine you're working on a web-based e-commerce application, and a critical issue was reported in the checkout functionality: customers couldn't apply a discount code to their orders.

Steps:

  1. Bug Fix: The development team fixes the discount code issue and deploys a new build for testing.

  2. Sanity Testing:

    • As a tester, you focus only on the checkout page and the discount code functionality.
    • You verify:
      • If the discount code field accepts valid codes.
      • If an invalid code displays the correct error message.
      • If the discount is correctly applied to the total amount.
  3. Result:

    • If all tests pass, you confirm that the build is stable and ready for more extensive testing (regression testing, system testing, etc.).
    • If any test fails, you send the build back to the developers for further fixes.

Example in Practice:

You don’t test unrelated features like the user profile page or search functionality during sanity testing because the issue and fix were specific to the checkout module.
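For illustration, a narrowly focused sanity check of the discount-code fix might look like the sketch below (Java, Selenium WebDriver, TestNG). The checkout URL, element IDs, discount codes, amounts, and error text are assumed placeholders.

import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;
import org.testng.Assert;
import org.testng.annotations.*;

public class CheckoutDiscountSanityTest {

    private WebDriver driver;

    @BeforeMethod
    public void openCheckout() {
        driver = new ChromeDriver();
        driver.get("https://example-shop.com/checkout");   // hypothetical checkout page with a 100.00 cart
    }

    @Test
    public void validCodeIsApplied() {
        driver.findElement(By.id("discount-code")).sendKeys("SAVE10");   // assumed field and 10% code
        driver.findElement(By.id("apply-code")).click();
        String total = driver.findElement(By.id("order-total")).getText();
        Assert.assertTrue(total.contains("90.00"), "10% discount should be reflected in the total");
    }

    @Test
    public void invalidCodeShowsError() {
        driver.findElement(By.id("discount-code")).sendKeys("BADCODE");
        driver.findElement(By.id("apply-code")).click();
        String error = driver.findElement(By.cssSelector(".error-message")).getText();
        Assert.assertEquals(error, "Invalid discount code");   // assumed error message
    }

    @AfterMethod
    public void tearDown() {
        driver.quit();
    }
}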

04 January, 2025

Every QA Automation Engineer must know these 10 essential skills 😄😄

If you want to be proficient as a QA Automation Engineer, here are the most important skills you need to learn.

1. Core Testing Concepts

Learn the fundamentals of software testing, including types of testing (unit, integration, regression, performance), test levels (functional, non-functional), and testing methodologies (Agile, Waterfall).

2. Scripting Languages

Gain proficiency in scripting languages like Java, Python, or JavaScript to write test automation scripts for functional and non-functional testing.

3. Test Automation Frameworks

Learn and implement test automation frameworks such as TestNG and JUnit, along with data-driven approaches using Playwright, Selenium WebDriver, Appium, or Cypress for automating web and mobile applications.

4. Continuous Integration/Continuous Deployment (CI/CD)

Understand CI/CD practices and tools like Jenkins, GitLab CI, or CircleCI to integrate automated tests into the software development pipeline for continuous quality assurance.

5. Version Control

Use version control tools like Git to manage and track changes in your codebase and collaborate effectively with other team members. Platforms like GitHub, GitLab, or Bitbucket are essential.

6. API Testing

Master tools like Postman, RestAssured, or SoapUI to automate API testing, ensuring that web services are working as expected across different endpoints.
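As a quick illustration, here is a minimal REST Assured check written as a TestNG test. The base URI, endpoint, and response field are assumed placeholders, not a real service.

import static io.restassured.RestAssured.given;
import static org.hamcrest.Matchers.equalTo;
import org.testng.annotations.Test;

public class UserApiTest {

    @Test
    public void getUserReturnsExpectedName() {
        given()
            .baseUri("https://api.example.com")        // hypothetical base URI
            .header("Accept", "application/json")
        .when()
            .get("/users/1")                           // assumed endpoint
        .then()
            .statusCode(200)
            .body("name", equalTo("Test User"));       // assumed response field and value
    }
}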

7. Performance Testing Tools

Learn how to use performance testing tools like JMeter, LoadRunner, or Gatling to test the scalability and load capacity of your applications.

8. Database & SQL

Learn SQL and database tools like MySQL, PostgreSQL, or MongoDB to query and verify data in your automated tests and ensure data integrity.
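A small JDBC sketch like the one below is a common way to verify backend data from an automated test. The connection URL, credentials, table, and column names are assumed placeholders.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

public class OrderDbCheck {

    public static void main(String[] args) throws Exception {
        // Hypothetical MySQL database, credentials, table, and column names.
        String url = "jdbc:mysql://localhost:3306/shop";
        try (Connection conn = DriverManager.getConnection(url, "test_user", "test_pass");
             PreparedStatement ps = conn.prepareStatement(
                     "SELECT status FROM orders WHERE order_id = ?")) {
            ps.setInt(1, 1001);                        // assumed order id created by a test
            try (ResultSet rs = ps.executeQuery()) {
                if (rs.next()) {
                    System.out.println("Order status: " + rs.getString("status"));
                } else {
                    System.out.println("Order not found");
                }
            }
        }
    }
}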

9. Test Management Tools 

Familiarize yourself with test management tools such as TestRail, Zephyr, or Jira, which help you plan, organize, and track test cases, defects, and overall test execution.

10. Problem Solving and Debugging skills 

Develop strong problem-solving and debugging skills to identify issues quickly in the automation process. Use debugging tools and logs effectively to troubleshoot and fix test failures.

29 November, 2024

Sciens Technologies hiring Manual QA

👥 Company Name: Sciens Technologies

✨ Job Positions: Manual QA 

🗓️ Experience: 4+ Years

💡 Location: Hyderabad 

📅 Availability: Immediate to 30 Days

🎯 Skills: Manual Testing, SQL, API (Postman)

📧 If you are interested, please share your profile to gayathri.saggam@scienstechnologies.com with details on CTC, ECTC, and Notice Period

🗣 Join us on WhatsApp: https://chat.whatsapp.com/Lb7B5KWnGRB4hdCbr5vYO7

04 November, 2024

Digi Mantra Job Openings

Digi Mantra is looking for passionate professionals to fill the following roles:

📌 Team Lead - QA | Experience: 5+ years

📌 Team Lead - Mobile | Experience: 5+ years

📌 Associate Lead - React Native | Experience: 4+ years

📌 Next.js Developer | Experience: 2-4 years

📌 Business Development Manager (BDM) | Experience: 4+ years

📌 Business Development Intern (BD Intern) | Freshers welcome!

📌 Java Intern | Freshers welcome!

To apply or for more details, email your CV to nikita.pant@digimantra.com

🗣 Join us on WhatsApp: https://chat.whatsapp.com/GOqtcHLxC6SH2UiZ66650Q

12 September, 2024

JS TechAlliance Consulting Pvt Ltd hiring for Java Developer role

Company Name: JS TechAlliance Consulting Pvt Ltd

Role :  Java Developer

Skill Set - Core Java, Advanced Java, OOPs

Location - Indore 

Experience - 3-8 Years

Notice Period - (Immediate - 30 Days)

Job Description: -

Experienced in OO design and building components

Understanding of multi-threaded systems and the Java concurrency API

Responsible for design, development, unit testing and documentation.

Serial port coding experience

Experience in Java Development

Bonus:

ZWAVE Experience

Knowledge of Jira and GitLab (or another Git platform)

Work with sprints in an agile environment

Experience in Android development

Worked with symmetric and asymmetric encryption

Serial Communication Development

Interested candidates are invited to submit their resume outlining their relevant experience and qualifications to hr@jstechalliance.com


02 September, 2024

programming.com hiring for QA Engineer

Company Name: programming.com   

Role: QA Engineer

Experience: 5+ years

Location: Gurgaon, India

Skill Sets: QA, Appium, RestAssured

Interested candidates are invited to submit their resume outlining their relevant experience and qualifications to manpreet.kaur@programming.com


13 January, 2024

Agile basic Terminologies

1. User Story:

A user story is like a short storybook. It's a concise narrative describing a software feature from an end-user perspective, capturing the 'who,' 'what,' and 'why.'

2. Sprint:

A sprint is like a focused workout session. It's a time-boxed iteration, typically two weeks, during which the development team works on a set of prioritized user stories.

3. Scrum Master:

The Scrum Master is like a coach. They facilitate the Scrum process, removing impediments, fostering collaboration, and ensuring the team adheres to Agile principles.

4. Product Backlog:

The product backlog is like a to-do list for the project. It's an evolving, prioritized list of features, enhancements, and fixes that the team intends to work on.

5. Daily Standup:

The daily standup is like a morning huddle. It's a brief, daily meeting where team members share updates on their progress, discuss impediments, and align on the day's tasks.

6. Definition of Done (DoD):

The Definition of Done is like a checklist for completeness. It defines the criteria that a user story must meet to be considered complete, ensuring a shared understanding of 'done.'

7. Burn-down Chart:

A burn-down chart is like a progress map. It visually represents the work completed over time, helping the team track progress toward completing the planned tasks.

8. Product Owner:

The Product Owner is like a storyteller. They represent the voice of the customer, prioritize the product backlog, and make decisions on feature requirements.

9. Velocity:

Velocity is like a team's speedometer. It measures the amount of work a team can complete in a sprint, providing insights into their capacity and aiding in future planning.

10. Retrospective:

A retrospective is like a team debrief. It's a dedicated meeting at the end of a sprint where the team reflects on what went well, what could be improved, and plans for adjustments.

11. Kanban:

Kanban is like a visual task board. It's a framework for visualizing work, limiting work in progress, and maximizing efficiency in the flow of work items.

12. Increment:

An increment is like a building block. It's the sum of completed user stories and improvements at the end of a sprint, representing tangible progress in the project.

13. Backlog Grooming:

Backlog grooming is like preparing for a journey. It's the process of refining and organizing the product backlog, ensuring that upcoming user stories are well-defined and prioritized.

15 April, 2023

Overview on level of Software Testing and Examples

Unit Testing:

Unit testing is a software testing technique that focuses on testing individual units or components of the software. The objective of unit testing is to ensure that each unit or component is functioning correctly and performing the tasks it was designed to do. Defects found at this level are typically related to coding errors or logical mistakes within a specific unit of code.

Example 1: Suppose there is a function in a piece of software that is designed to calculate the total amount of a customer's purchase. A unit test could be written to pass two input values (price and quantity of items) to the function and verify that the output is the correct total amount.

Example 2: Another example of unit testing is testing a login function that checks if the user's username and password are valid. A unit test could be written to pass the function a set of valid credentials and a set of invalid credentials and verify that the function behaves correctly in both cases.
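Example 1 above could look roughly like the JUnit 5 sketch below; the PurchaseCalculator class is a hypothetical stand-in for the function under test.

import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.junit.jupiter.api.Assertions.assertThrows;
import org.junit.jupiter.api.Test;

// Hypothetical unit under test: calculates the total for a single line item.
class PurchaseCalculator {
    double total(double price, int quantity) {
        if (price < 0 || quantity < 0) {
            throw new IllegalArgumentException("price and quantity must be non-negative");
        }
        return price * quantity;
    }
}

public class PurchaseCalculatorTest {

    private final PurchaseCalculator calculator = new PurchaseCalculator();

    @Test
    void totalIsPriceTimesQuantity() {
        // 19.99 * 3 = 59.97
        assertEquals(59.97, calculator.total(19.99, 3), 0.001);
    }

    @Test
    void negativeQuantityIsRejected() {
        assertThrows(IllegalArgumentException.class, () -> calculator.total(19.99, -1));
    }
}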

Integration Testing:

Integration testing is a software testing technique that focuses on testing how different units or components of the software interact with each other. The goal of integration testing is to ensure that the software is functioning correctly as a whole and that the individual units or components are working together as intended. Defects found at this level are typically related to interfaces between the units or components.

Example 1: Consider a software system that consists of a backend database and a frontend application. An integration test could be written to verify that data entered through the frontend is correctly stored in the database and that data retrieved from the database is correctly displayed in the frontend.

Example 2: Another example of integration testing is testing the integration of a payment gateway into an e-commerce website. An integration test could be written to verify that when a customer submits an order, the payment gateway correctly processes the payment and updates the order status.
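Example 1 above might be automated along these lines, combining a REST call with a database check (TestNG, REST Assured, JDBC). The endpoint, payload, schema, and credentials are assumed placeholders, not the behavior of any specific system.

import static io.restassured.RestAssured.given;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import org.testng.Assert;
import org.testng.annotations.Test;

public class CustomerPersistenceIT {

    @Test
    public void createdCustomerIsStoredInDatabase() throws Exception {
        // Create a customer through the application's API (assumed endpoint and payload).
        given()
            .baseUri("https://api.example.com")
            .contentType("application/json")
            .body("{\"name\": \"Asha\", \"email\": \"asha@example.com\"}")
        .when()
            .post("/customers")
        .then()
            .statusCode(201);

        // Verify the record landed in the backend database (assumed schema and credentials).
        try (Connection conn = DriverManager.getConnection(
                     "jdbc:mysql://localhost:3306/shop", "test_user", "test_pass");
             PreparedStatement ps = conn.prepareStatement(
                     "SELECT name FROM customers WHERE email = ?")) {
            ps.setString(1, "asha@example.com");
            try (ResultSet rs = ps.executeQuery()) {
                Assert.assertTrue(rs.next(), "Customer row should exist in the database");
                Assert.assertEquals(rs.getString("name"), "Asha");
            }
        }
    }
}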

System Testing:

System testing is a software testing technique that focuses on testing the entire software system to ensure that it meets the specified requirements and quality standards. The objective of system testing is to identify defects that can only be found when the software is running as a complete system. Defects found at this level are typically related to the overall functionality, usability, and performance of the software system.

Example 1: Suppose a software system is designed to manage inventory for a retail store. A system test could be written to simulate a real-world scenario in which a large number of customers are purchasing items, and the system is expected to handle the load without any errors.

Example 2: Another example of system testing is testing a software system that is designed to control a manufacturing process. A system test could be written to simulate the entire process from start to finish and verify that the software correctly controls each step of the process.

Acceptance Testing:

Acceptance testing is a software testing technique that is performed to ensure that the software meets the customer's requirements and is ready for deployment. The objective of acceptance testing is to ensure that the software is user-friendly, functional, and meets the customer's needs. Defects found at this level are typically related to usability, functionality, and compliance with the customer's requirements.

Example 1: Suppose a customer has specified that a software system must be compatible with a specific operating system. An acceptance test could be written to verify that the software runs correctly on that operating system and meets all of the customer's other specified requirements.

Example 2: Another example of acceptance testing is testing a software system that is designed to generate financial reports for a business. An acceptance test could be written to verify that the reports generated by the software are accurate, easy to read, and meet the customer's requirements.

Software Testing: A Comprehensive Overview

Software testing is a critical process in the software development life cycle that ensures that software applications or systems meet the specified requirements and quality standards. The primary objective of software testing is to identify any defects or errors in the software before its release to the end-users.

Different types of software testing are employed to achieve the desired quality standards. Unit testing is used to test individual units or components of the software to ensure their proper functionality. Integration testing is employed to test how different units or components of the software interact with each other.

System testing is conducted to test the entire software system to ensure that it functions as intended and meets the required quality standards. Acceptance testing is done to ensure that the software meets the customer's requirements and is ready for deployment.

Regression testing is performed to test the software after changes have been made to it, to ensure that no new defects or errors have been introduced.

13 February, 2022

Selenium Cheat Sheet💢

Manage Driver Initialization
WebDriver driver = new ChromeDriver();
WebDriver driver = new FirefoxDriver();
WebDriver driver = new InternetExplorerDriver();
WebDriver driver = new HtmlUnitDriver();
Manage Element Locators
driver.findElement(By.id("Id Value"));
driver.findElement(By.name("Name Value"));
driver.findElement(By.className("Class Name Value"));
driver.findElement(By.linkText("Link text Value"));
driver.findElement(By.partialLinkText("Partial Text Constant
Value"));
driver.findElement(By.tagName("Tag Name Value"));
driver.findElement(By.cssSelector("CSS Value"));
driver.findElement(By.xpath("Xpath Value"));
driver.findElement(new ByAll(By.className("ElementClass
Name"), By.id("Element Id"), By.name("Element Name")))
Manage Elements Operations
WebElement element = driver.findElement(By.id("Element Id"));
element.click();
element.sendKeys("Input Text");
element.clear();
element.submit();
element.getAttribute("type");
String innerText = element.getText();
boolean enabledstatus = element.isEnabled();
boolean displayedstatus = element.isDisplayed();
boolean selectedstatus = element.isSelected();
Manage Operation on drop down
Select select = new Select(element);
select.selectByIndex(1);
select.selectByVisibleText("Text");
select.selectByValue("Value");
select.deselectAll();
select.deselectByIndex(1);
select.deselectByVisibleText("Text");
select.deselectByValue("Value");
List<WebElement> allOptions = select.getOptions();
Browser Operations
String pageTitle = driver.getTitle();
String currentURL = driver.getCurrentUrl();
String currentPageSource = driver.getPageSource();
Manage Navigation history
driver.get("https://www.facebook.com/");
driver.manage().window().maximize();
driver.navigate().to("https://www.google.com/");
driver.navigate().back();
driver.navigate().forward();
driver.navigate().refresh();
driver.close();
driver.quit();
Manage Alert
Alert alert = driver.switchTo().alert();
alert.accept();
alert.dismiss();
alert.getText();
alert.sendKeys(“Input Data");
Manage Cookies
Cookie cookie = new Cookie("cookieName", "cookieValue");
driver.manage().addCookie(cookie);
driver.manage().getCookies();
driver.manage().getCookieNamed("cookieName");
driver.manage().deleteAllCookies();
driver.manage().deleteCookieNamed("cookieName");
Manage frames
driver.switchTo().frame(0);
driver.switchTo().frame("frameName");
WebElement frameElement = driver.findElement(By.id("Frame Id"));
driver.switchTo().frame(frameElement);
driver.switchTo().defaultContent();
Manage Screenshots Capture
TakesScreenshot screenshot = ((TakesScreenshot) driver);
File srcFile = screenshot.getScreenshotAs(OutputType.FILE);
File destFile = new File("screenshot.png");
FileHandler.copy(srcFile, destFile);
Timeouts Management 
driver.manage().timeouts().implicitlyWait(10, TimeUnit.SECONDS);
WebDriverWait wait = new WebDriverWait(driver, 10);
WebElement welement = wait.until(ExpectedConditions.elementToBeClickable(By.id("Element Id")));
welement.click();
Thread.sleep(2000);
driver.manage().timeouts().pageLoadTimeout(30, TimeUnit.SECONDS);
Scroll Down or Up Web Page
JavascriptExecutor js = (JavascriptExecutor)driver;
js.executeScript("window.scrollBy(0,100)");
js.executeScript("window.scrollTo(0,
document.body.scrollHeight)");
WebElement element =
driver.FindElement(By.ElementLocator("Value of Element
Locator"));
js. executeScript("arguments[0].scrollIntoView()", element);
TestNG Annotations
@Test
@BeforeMethod
@AfterMethod
@BeforeTest
@AfterTest
@BeforeClass
@AfterClass
@Test(enabled = false)
@Test(enabled = true)
@Test(priority=2)
@Test(priority=5, dependsOnMethods={"method1","method2"})
@Test(dependsOnMethods = {"method1"}, alwaysRun=true)
@Test(groups = { "Group1", "Group2" })
@Parameters({"testparameter1", "testparameter2"})
@Listeners(packagename.ListenerClassName.class)
@Test (dataProvider = "getUserIDandPassword")
@Test (description = "Open Facebook Login Page",
timeOut=35000)
@Test (invocationCount = 3, invocationTimeOut = 20000)
@Test (invocationCount = 3, skipFailedInvocations = true)
@Test (invocationCount = 3)
@Test (invocationCount = 7, threadPoolSize = 2)
TestNG Assertions
SoftAssert softassert= new SoftAssert();
softassert.assertEquals(1, 1);
softassert.assertAll();
Assert.assertEquals(11, 11);
Assert.assertEquals(true, true, "Not Matching");

24 December, 2019

Software Testing

Manual testing interview questions and answers.
A software tester should have certain qualities that are imperative: the person should be observant, creative, innovative, speculative, patient, etc. It is important to note that when you opt for manual testing, the job can be tedious and laborious. Whether you are a fresher or experienced, there are certain questions whose answers you should know.

Q. What is Sanity Test or Build Test?
Ans. Verifying the critical (important) functionality of the software on a new build to decide whether to carry out further testing or not.
Q. What is Dynamic Testing?
Ans. It is testing done by executing the code or program with various input values and verifying the output.
Q. What is GUI Testing?
Ans. GUI or Graphical user interface testing is the process of testing software user interface against the provided requirements/mockups/HTML designs.
Q. What is Formal Testing?
Ans. Software verification carried out by following a test plan and testing procedures, with proper documentation and approval from the customer.
Q. What is Risk Based Testing?
Ans. Identifying the critical functionality in the system, deciding the order in which it should be tested, and then applying testing accordingly.
Q. What is Early Testing?
Ans. Conducting testing as early as possible in the development life cycle to find defects at early stages of the SDLC. Early testing helps reduce the cost of fixing defects found at later stages.
Q. What is Exhaustive Testing?
Ans. Testing functionality with all valid, invalid inputs and preconditions is called exhaustive testing.
Q. What is Defect Clustering?
Ans. A small number of modules or functionalities may contain most of the defects, so concentrate more testing on those areas.
Q. What is Pesticide Paradox?
Ans. If prepared test cases are not finding defects, add/revise test cases to find more defects.
Q. What is Static Testing?
Ans. Manual verification of the code without executing the program is called static testing. In this process, issues are identified by reviewing the code, requirement, and design documents.
Q. What is Positive Testing?
Ans. Testing conducted on the application to determine whether the system works with valid inputs; it is basically known as the “test to pass” approach.
Q. What is Negative Testing?
Ans. Testing the software with a negative approach (invalid inputs) to check that the system does not show an error when it is not supposed to, and does show an error when it is supposed to.
Q. What is End-to-End Testing?
Ans. Testing the overall functionality of the system, including data integration among all the modules, is called end-to-end testing.
Q. What is Exploratory Testing?
Ans. Exploring the application, understanding the functionality, adding (or) modifying existing test cases for better testing is called exploratory testing.
Q. What is Monkey Testing?
Ans. Testing conducted on an application without any plan, carried out with random tests here and there with the intention of finding a system crash or tricky defects, is called monkey testing.
Q. What is Non-functional Testing?
Ans. Validating various non-functional aspects of the system, such as user interfaces, user friendliness, security, compatibility, load, stress, and performance, is called non-functional testing.
Q. What is Usability Testing?
Ans. Checking how easily the end users are able to understand and operate the application is called Usability Testing.
Q. What is Security Testing?
Ans. Validating whether all security requirements are properly implemented in the software is called security testing.
Q. What is Performance Testing?
Ans. The process of measuring various efficiency characteristics of a system, such as response time, throughput, load, stress, transactions per minute, and transaction mix.
Q. What is Load Testing?
Ans. Analysing functional and performance behaviour of the application under various conditions is called Load Testing.
Q. What is Stress Testing?
Ans. Checking the application behavior under stress conditions, or reducing the system resources while keeping the load constant and checking how the application behaves, is called stress testing.
Q. What is Process?
Ans. A process is a set of practices performed to achieve a given purpose; it may include tools, methods, materials, and/or people.
Q. What is Software Configuration Management?
Ans. The process of identifying, organizing, and controlling changes to software during development and maintenance; a methodology to control and manage a software development project.
Q. What is Testing Process / Life Cycle?
Ans. Write Test  Plan | Test Scenarios | Test Cases | Executing Test Cases | Test Results | Defect Reporting | Defect Tracking | Defect Closing | Test Release
Q. What is full form of CMMI?
Ans. Capability Maturity Model Integration.
Q. What is Code Walk Through?
Ans. Informal analysis of the program source code to find defects and verify coding techniques.
Q. What is Unit Level Testing?
Ans. Testing of single programs, modules or unit of code.
Q. What is Integration Level Testing?
Ans. Testing of related programs, modules, or units of code, or of partitions of the system that are ready for testing with other partitions of the system.
Q. What is System Level Testing?
Ans. Testing of entire computer system across all modules.  This kind of testing can include functional and structural testing.
Q. What is Alpha Testing?
Ans. Testing of the whole computer system before rolling it out to UAT.
Q. What is User Acceptance Testing  (UAT)?
Ans. Testing of the computer system by the client to verify that it adheres to the provided requirements.
Q. What is Test Plan?
Ans. A document describing the scope, approach, resources, and schedule  of testing activities.  It identifies test items, features to be tested, testing tasks, who will do each task, and any risks requiring contingency planning.
Q. What is Test Scenario?
Ans. Identifying all the possible areas to be tested, i.e., what is to be tested.
Q. What is ECP (Equivalence Class Partition)?
Ans. It is a black-box method for deriving test cases by dividing the input data into partitions (equivalence classes) that are expected to be processed the same way, and testing one representative value from each partition.
Q. What is a Defect?
Ans. Any flaw or imperfection in a software work product, or a mismatch where the expected result does not match the actual result of the application.
Q. What is Severity?
Ans. It defines the importance of a defect from a functional point of view, i.e., how critical the defect is with respect to the application.
Q. What is Priority?
Ans. It indicates the importance or urgency of fixing a defect.
Q. What is Re-Testing?
Ans. Retesting the application to verify whether defects have been fixed or not.
Q. What is Regression Testing?
Ans. Verifying existing functional and non-functional areas after changes are made to part of the software or new features are added.
Q. What is Recovery Testing?
Ans. Checking whether the system is able to handle unexpected or unpredictable situations is called recovery testing.
Q. What is Globalization Testing?
Ans. The process of verifying whether the software can run independently of its geographical and cultural environment. It checks whether the application has features for setting and changing language, date format, and currency if it is designed for global users.
Q. What is Localization Testing?
Ans. Verifying a globalized application for a particular locale of users and its cultural and geographical conditions.
Q. What is Installation Testing?
Ans. Checking whether the software can be installed successfully as per the guidelines given in the installation document is called installation testing.
Q. What is Un-installation Testing?
Ans. Checking whether the software can be uninstalled from the system successfully is called uninstallation testing.
Q. What is Compatibility Testing?
Ans. Checking whether the application is compatible with different software and hardware environments is called compatibility testing.
Q. What is Test Strategy?
Ans. It is a part of the test plan describing how testing is carried out for the project and what types of testing need to be performed on the application.
Q. What is Test Case?
Ans. A test case is a set of preconditions, steps to be followed, input data, and expected behavior used to validate a functionality of a system.
Q. What is Business Validation Test Case?
Ans. A test case prepared to check a business condition or business requirement is called a business validation test case.
Q. What is a Good Test Case?
Ans. A test case that has a high probability of catching defects is called a good test case.
Q. What is Use Case Testing?
Ans. Validating a software to confirm whether it is developed as per the use cases or not is called use case testing.
Q. What is Defect Age?
Ans. The time gap between date of detection & date of closure of a defect.
Q. What is Showstopper Defect?
Ans. A defect that does not permit further testing to continue is called a showstopper defect.
Q. What is Test Closure?
Ans. It is the last phase of the STLC, where the management prepares various test summary reports that explain the complete statistics of the project based on the testing carried out.
Q. What is Bucket Testing?
Ans. Bucket testing is also known as A/B testing. It is mostly used to study the impact of various product designs on website metrics. Two versions are run simultaneously on a single web page or a set of web pages to measure the difference in click rates, interaction, and traffic.
Q. What is Entry Criteria and Exit Criteria in Software Testing?
Ans. Entry criteria are the conditions that must be met before testing begins, such as:
SRS | FRS | Use Case | Test Case | Test Plan
Exit criteria determine whether testing is complete and the application is ready for release, such as:
Test Summary Report | Metrics | Defect Analysis Report
Q. What is Concurrency Testing?
Ans. This is multi-user testing in which several users access the application at the same time to verify the effect on code, modules, or the database. It is mainly used to identify locking and deadlocking situations in the code.
Q. What is Web Application Testing?
Ans. Web application testing is done on a website to check load, performance, security, functionality, interface, compatibility, and other usability-related issues.
Q. What is Unit Testing?
Ans. Unit testing is done to check whether the individual modules of the source code are working properly.
Q. What is Interface Testing ?
Ans. Interface testing is done to check whether the individual modules are communicating properly as per specifications. Interface testing is mostly used to test the user interface of GUI applications.
Q. What is Gamma Testing ?
Ans. Gamma testing is done when the software is ready for release with the specified requirements; the testing is done directly, skipping all in-house testing activities.
Q. What is Test Harness?
Ans. A test harness is a configured set of tools and test data used to test an application under various conditions; it involves comparing the actual output with the expected output for correctness.

The benefits of a test harness are increased productivity due to process automation and an increase in product quality.
Q. What is Scalability Testing?
Ans. It is used to check whether the functionality and performance of a system can meet changes in volume and size as per the requirements. Scalability testing is done using load tests while varying software and hardware configurations and the testing environment.
Q. What is Fuzz Testing?
Ans. Fuzz testing is a black-box testing technique which uses random, invalid data to attack a program and check if anything breaks in the application.
Q. What is Difference between QA, QC and testing?
Ans. QA -
It is process oriented.
Aim is to prevent defects in an application
QC -
Set of activities used to evaluate a developed work product. It is product oriented.
Testing - 
Executing and verifying application with the intention of finding defects.
Q. What is Data Driven Testing?
Ans. It is an automation testing process in which the application is tested with multiple sets of data and different preconditions as input to the script.
Q. V model in manual testing ? 
Ans: V model is a framework, which describes the software development life cycle activities right from requirements specification up to software maintenance phase. Testing is integrated in each of the phases of the model. The phases of the model start with user requirements and are followed by system requirements, global design, detailed design, implementation and ends with system testing of the entire system. Each phase of model has the respective testing activity integrated in it and is carried out parallel to the development activities. The four test levels used by this model include, component testing, integration testing, system testing and acceptance testing.
Q. What are stubs and drivers in manual testing ? 
Ans: Both stubs and drivers are part of incremental testing. There are two approaches used in incremental testing, namely the bottom-up and top-down approaches. Drivers are used in bottom-up testing: they are modules that call the components to be tested and stand in for the future real calling modules. A stub is a skeletal or special-purpose implementation of a component, used to develop or test a component that calls or otherwise depends on it; it is the replacement for the called component and is used in top-down testing.
Q. What is difference between bug, error and defect ? 
Ans: Bug and defect essentially mean the same. It is the flaw in a component or system, which can cause the component or system to fail to perform its required function. If a bug or defect is encountered during the execution phase of the software development, it can cause the component or the system to fail. On the other hand, an error is a human error, which gives rise to incorrect result. You may want to know about, how to log a bug (defect), contents of a bug, bug life cycle, and bug and statuses used during a bug life cycle, which help you in understanding the terms bug and defect better.
Q. What are the phases of STLC ? 
Ans: There are different phases of the software development life cycle, there are different phases of software testing life cycle as well. Read through software testing life cycle for more explanation. 
Q.  What is a Review ? 
Ans: A review is an evaluation of a product or project status to ascertain any discrepancies from the planned results and to recommend improvements. Common examples of reviews are informal review or peer review, technical review, inspection, walkthrough, and management review. This is one of the common manual testing interview questions.
Q.  Explain equivalence class partition ?
Ans: It is a specification-based (black-box) technique in which input data is divided into partitions (equivalence classes) that are expected to be processed in the same way, so only one representative value from each partition needs to be tested. Gather more information from the article on equivalence partitioning.
Q. What is a test suite? 
Ans: A test suite is a set of several test cases designed for a component of a software or system under test, where the post condition of one test case is normally used as the precondition for the next test.
Q. What is boundary value analysis ? 
Ans: A boundary value is an input or an output value which resides on the edge of an equivalence partition. It can also be the smallest incremental distance on either side of an edge, such as the minimum or maximum value of an edge. Boundary value analysis is a black box testing technique where the tests are based on the boundary values. For example, for a field that accepts values from 1 to 100, typical boundary tests use 0, 1, 2, 99, 100, and 101.
Q  What is compatibility testing ? 
Ans: Compatibility testing is a part of non-functional tests carried out on the software component or the entire software to evaluate the compatibility of the application with the computing environment. It can be with the servers, other software, computer operating system, different web browsers or the hardware as well.
Q. What is the exact difference between debugging and testing?
Ans: When a test is run and a defect has been identified, it is the duty of the developer to first locate the defect in the code and then fix it. This process is known as debugging. In other words, debugging is the process of finding, analyzing, and removing the causes of failures in the software. Testing, on the other hand, consists of both static and dynamic life cycle activities. It helps determine that the software satisfies the specified requirements and is fit for purpose.
Q. Explain in short: sanity testing, ad-hoc testing, and smoke testing.
Ans: Sanity testing is a basic test conducted to check whether the components of the software work together without any problem. It makes sure that no conflicting or duplicate functions or global variable definitions have been made by different developers. It can also be carried out by the developers themselves. Smoke testing, on the other hand, is a testing methodology used to cover all the major functionality of the application without getting into the finer nuances of the application; it is said to be the main functionality-oriented test. Ad hoc testing is different from smoke and sanity testing. This term is used for software testing performed without any planning and/or documentation. These tests are intended to run only once; however, if a defect is found, they can be carried out again. It is also said to be a part of exploratory testing.
Q. Explain performance testing.
Ans: It is one of the non-functional types of software testing. Performance of software is the degree to which a system or a component of a system accomplishes its designated functions within given constraints on processing time and throughput rate. Therefore, performance testing is the process of testing to determine the performance of the software.
Q. What is meant by functional defects and usability defects in general? Give an example.
Ans: We will take the example of a ‘Login window’ to understand functionality and usability defects. A functionality defect: the user enters a valid user name but an invalid password and clicks the login button, and the application accepts the credentials and displays the main window when an error should have been displayed. A usability defect: the user enters a valid user name but an invalid password and clicks the login button, and the application throws up the error message “Please enter valid user name” when the message should have been “Please enter valid password”.
Q. What is pilot testing?
Ans: It is a test of a component of a software system or the entire system under real-time operating conditions. The real-time environment helps find defects in the system and prevents costly bugs from being detected later on. Normally a group of users uses the system before its complete deployment and gives feedback about the system.
Q. Explain statement coverage.
Ans: It is a structure-based or white-box technique. Test coverage measures, in a specific way, the amount of testing performed by a set of tests. One type of test coverage is statement coverage: the percentage of executable statements that have been exercised by a particular test suite. The formula used for statement coverage is: Statement Coverage = (Number of statements exercised / Total number of statements) * 100%. For example, if a test suite executes 80 of 100 executable statements, statement coverage is 80%.

28 May, 2019

Seven Testing Principles

1. Testing shows the presence of defects, not their absence
Testing can show that defects are present, but cannot prove that there are no defects. Testing reduces
the probability of undiscovered defects remaining in the software but, even if no defects are found, testing is not a proof of correctness.
2. Exhaustive testing is impossible
Testing everything (all combinations of inputs and preconditions) is not feasible except for trivial cases. Rather than attempting to test exhaustively, risk analysis, test techniques, and priorities should be used to focus test efforts.
3. Early testing saves time and money
To find defects early, both static and dynamic test activities should be started as early as possible in the software development lifecycle. Early testing is sometimes referred to as shift left. Testing early in the software development lifecycle helps reduce or eliminate costly changes (see section 3.1).
4. Defects cluster together
A small number of modules usually contains most of the defects discovered during pre-release testing, or is responsible for most of the operational failures. Predicted defect clusters, and the actual observed defect clusters in test or operation, are an important input into a risk analysis used to focus the test effort
5. Beware of the pesticide paradox
If the same tests are repeated over and over again, eventually these tests no longer find any new defects. To detect new defects, existing tests and test data may need changing, and new tests may need to be written. (Tests are no longer effective at finding defects, just as pesticides are no longer effective at killing insects after a while.) In some cases, such as automated regression testing, the pesticide paradox has a beneficial outcome, which is the relatively low number of regression defects.
6. Testing is context dependent
Testing is done differently in different contexts. For example, safety-critical industrial control software is tested differently from an e-commerce mobile app. As another example, testing in an Agile project is done differently than testing in a sequential lifecycle project (see section 2.1).
7. Absence-of-errors is a fallacy
Some organizations expect that testers can run all possible tests and find all possible defects, but
principles 2 and 1, respectively, tell us that this is impossible. Further, it is a fallacy (i.e., a mistaken belief) to expect that just finding and fixing a large number of defects will ensure the success of a system. For example, thoroughly testing all specified requirements and fixing all defects found could still produce a system that is difficult to use, that does not fulfill the users’ needs and expectations, or that is inferior compared to other competing systems.

10 April, 2018

What is Automation Testing ?

Automation testing is the process of testing the software using an automation tool to find the defects. In this process,executing the test script sand generating the results are performed automatically by automation tools. Some most popular tools to do automation testing are HP QTP/UFT, Selenium WebDriver,etc.