
Differences Between Terms & Definitions in Software Testing

October 8, 2017
Definitions in Software Testing, Software Testing Terms, Software Testing Interview Questions

In software testing, we come across many terms which sound similar but have significant differences between them, which can be confusing. Software testing interviews usually include questions about the differences between these terms. This article is an attempt to list out the differences between software testing terms that are frequently asked in interviews, in a simpler way.

Software testing interview questions on the differences between these terms are described below:

1. What are the differences between Verification and Validation?

Verification

  • Are we building the system right?
  • Reviews, Meetings, Inspections, Walkthroughs are included in Verification
  • Performed in the development phase to ensure that the specified requirements are met
  • Performed by QA team
  • Code is not executed in this phase
  • Requirement specifications, High-level design document, Low-level design document, Code, Test Cases, Test Scenarios are evaluated
  • Cost of maintenance due to errors caught in this phase is less

Validation

  • Are we building the right system?
  • Regression, System, User acceptance testing are included in Validation
  • Performed at the end of development phase to ensure that the customer expectations, requirements specifications are met
  • Performed by Testing team
  • Code is executed in this phase
  • Actual system is evaluated by testing
  • Cost of maintenance due to errors caught in this phase is high

2. What are the differences between Quality Assurance and Quality Control?

Quality Assurance

  • Activities to ensure quality in the processes followed to develop the system
  • Prevention activity – preventing bugs to enter the system by enhancing the processes followed in development and testing phases
  • Pro-active – identifying the process weaknesses
  • Focus is mainly on the process followed
  • Process-oriented
  • Verification is Quality Assurance activity
  • Developers, BAs, customers, leads, managers are responsible for Quality Assurance
  • Planning is done for process enhancements

Quality Control

  • Activities to ensure the quality of the system developed
  • Correction and Reactive activity – Identifying and correcting the bugs in the system developed
  • Product-oriented
  • Focus is mainly on identifying the bugs in actual system
  • Validation is Quality Control activity
  • Testing team is alone responsible for Quality Control
  • The plans made for process enhancements are executed

3. What are the differences between Static and Dynamic Testing?

Static Testing

  • Actual testing of the system is not performed.
  • Code, processes, requirements, and design are reviewed so that possible bugs are identified before the code is executed.
  • Reviews such as walkthroughs and code reviews are performed in order to achieve static testing goals
  • Performed at early stage in software development life cycle
  • Performed before the code is deployed
  • Cost-effective

Dynamic Testing

  • Actual testing of the system is performed by providing inputs.
  • System response to input is analyzed to ensure that it is working correctly as per the requirement specifications.
  • Functional and non-functional testing is done to ensure dynamic testing goals
  • Performed at later stage in software development life cycle
  • Performed after the code is deployed
  • Costlier

4. What are the differences between the Black box and White box Testing?

Black box Testing

  • The focus is on system response to inputs, i.e., the actual execution of the system is performed.
  • Knowledge of the internal structure of code and implementation is not required. Programming knowledge is not necessary
  • Test cases are written to cover the functionalities of the system based on the requirement specifications
  • Regression, System, User acceptance testing are performed
  • Performed by Testers
  • Known as Functional / External testing

White box Testing

  • Focus is on analysis of code, i.e., the program code is tested
  • Knowledge of the internal structure of code and implementation is required. Programming knowledge is very much necessary
  • Analysis is done for branches, loops, paths, statements in the coding, based on detailed design documents
  • Unit and Integration testing is performed
  • Performed by Developers
  • Known as Structural / Interior testing
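To make the contrast concrete, here is a minimal Python sketch of the white-box mindset: the tests are derived from the code's branches rather than from a requirements document. The `classify` function is a hypothetical example, not taken from any real system.

```python
# White-box sketch: tests are chosen by looking at the code's structure.
# The function below has two branches, so full branch coverage needs at
# least one test input per branch.
def classify(n):
    if n % 2 == 0:     # branch 1: even numbers
        return "even"
    return "odd"       # branch 2: odd numbers

# One test per branch gives 100% branch coverage for this function.
assert classify(4) == "even"
assert classify(7) == "odd"
print("both branches covered")
```

A black-box tester would instead pick the same inputs from the requirement "classify numbers as even or odd" without ever reading the function body.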

5. What are the differences between Boundary Value and Equivalence Partitioning Techniques?

Boundary Value Testing Technique

  • This involves testing the data which has range
  • The boundaries of the data range, plus one value within the range, have to be tested
  • Common testing idea: 7 possible values to test for each data range: Minimum-1, Minimum, Minimum+1, Nominal, Maximum-1, Maximum, Maximum+1
  • Example: If the field accepts data in the range 18 – 65, then testing has to be performed for the following values: 17, 18, 19, (any value between 20 and 63, say 45), 64, 65, 66
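As an illustration, the seven boundary values for the 18 – 65 example above can be generated with a small Python helper. The `accepts_age` function here is a hypothetical stand-in for the field under test.

```python
def boundary_values(minimum, maximum, nominal=None):
    """Return the seven classic boundary-value inputs for an inclusive
    [minimum, maximum] range: min-1, min, min+1, nominal, max-1, max, max+1."""
    if nominal is None:
        nominal = (minimum + maximum) // 2  # any mid-range value works
    return [minimum - 1, minimum, minimum + 1, nominal,
            maximum - 1, maximum, maximum + 1]

def accepts_age(age):
    """Hypothetical system under test: accepts ages in the range 18-65."""
    return 18 <= age <= 65

# 17 and 66 should be rejected; the other five values should be accepted.
for value in boundary_values(18, 65, nominal=45):
    print(value, accepts_age(value))
```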

Equivalence Partitioning Testing Technique

  • This involves categorizing the input data into groups, where similar types of data are put into their respective groups
  • Most suitable when the field accepts data of a specific type
  • For any value provided outside the specified type’s group, the test should fail with a proper error message
  • Common testing idea: If the test passes for one value in a group, then all the other values in the group will pass. If the test fails for one value in a group, then all the other values in the group will fail
  • Example: If the field has to accept only positive numbers then the test data will go as below:
    • 1 and above – Test should pass for all the inputs in this group. If the test passes for any value, say 3, then it passes for all the values in this group; if it fails for any value, say 3, then it fails for all the values in this group. One or two tests for the values in the group are sufficient.
    • 0 and negatives – Test should fail for all the inputs in this group. One or two tests for the values in the group are sufficient. If it passes for any of the values, then it is a bug
    • Alphabets (a-z, A-Z) – Test should fail for all the inputs in this group. One or two tests for the values in the group are sufficient. If it passes for any of the values, then it is a bug
    • Special Characters – Test should fail for all the inputs in this group. One or two tests for the values in the group are sufficient. If it passes for any of the values, then it is a bug
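A minimal Python sketch of the partitioning idea above, using a hypothetical `accepts_positive_number` field as the system under test; one or two representatives per group stand in for the whole group.

```python
def accepts_positive_number(value):
    """Hypothetical system under test: field accepts only positive numbers."""
    return isinstance(value, int) and value >= 1

# Each partition is represented by a couple of sample values and the
# expected outcome for the whole group.
partitions = {
    "positive (1 and above)": ([3, 100], True),     # should be accepted
    "zero and negatives":     ([0, -7], False),     # should be rejected
    "alphabets":              (["a", "Z"], False),  # should be rejected
    "special characters":     (["@", "#"], False),  # should be rejected
}

for name, (samples, expected) in partitions.items():
    # If the representatives behave as expected, the whole partition is
    # assumed to behave the same way.
    ok = all(accepts_positive_number(s) is expected for s in samples)
    print(name, "OK" if ok else "BUG")
```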

6. What are the differences between Smoke and Sanity Testing?

Smoke Testing

  • Testing the system for basic functionalities to work fine when the build is deployed.
  • Critical functionalities, Core business functions are expected to work fine without any blocker issues
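A toy Python sketch of a smoke check: a handful of critical functions are exercised once, and any failure blocks further testing of the build. The `login` and `search` functions are hypothetical stand-ins for real core business functions.

```python
# Hypothetical core business functions of the system under test.
def login(user, password):
    return user == "demo" and password == "demo"

def search(query):
    return ["result"] if query else []

# The smoke suite: one quick check per critical functionality.
SMOKE_CHECKS = [
    ("login works",  lambda: login("demo", "demo") is True),
    ("search works", lambda: len(search("anything")) > 0),
]

def run_smoke():
    """Return the names of failed checks; any failure is a blocker and
    the build is rejected for further testing."""
    return [name for name, check in SMOKE_CHECKS if not check()]

print(run_smoke())  # an empty list means the build passes smoke
```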

Sanity Testing

  • Testing the system for cleanliness when the build is deployed – pages load properly, alignments are proper, buttons and links are clickable
  • Navigations are tested from page to page

7. What are the differences between Unit and Integration Testing?

Unit Testing

  • Testing single component of system to ensure it works correctly alone
  • Scope of testing is narrow as it involves just a single component
  • No dependencies on factors outside the code, like databases, batch processing, etc.
  • Unit testing cannot be divided into types or categories as it involves only individual components to be tested
  • Impact of this component on other components is not considered
  • Performed by Developers
  • Performed using white-box techniques
  • Effort required is less

Integration Testing

  • Testing the integration of two or more components of system to ensure integration between them works correctly
  • Scope of testing is wide as it involves multiple components integrated
  • Data flows, logical flows, and interfaces are tested extensively after integrating the components
  • Has dependencies on factors outside the code, like databases, batch processing, interfaces, etc.
  • Classified into 2 types: Top-down Integration and Bottom-up Integration
  • The impact of the integration on other components is a major consideration
  • Performed by Testers
  • Performed using both black-box and white-box techniques
  • Effort required is more
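A small Python sketch of bottom-up integration: a hypothetical `OrderService` is tested together with the lower-level `calculate_total` component it depends on, exercising the interface and the data flow between them.

```python
def calculate_total(items):
    """Lower-level component: sum (price, quantity) line items."""
    return sum(price * qty for price, qty in items)

class OrderService:
    """Higher-level component that integrates with calculate_total."""
    def __init__(self):
        self.orders = []

    def place_order(self, items):
        total = calculate_total(items)  # the integration point under test
        order = {"items": items, "total": total}
        self.orders.append(order)
        return order

# Integration test: data flows from OrderService into calculate_total
# and back, so both the interface and the data flow are exercised,
# not just each component alone.
service = OrderService()
order = service.place_order([(10, 2), (5, 1)])
print(order["total"])  # 25
```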

8. What are the differences between Functional and Non Functional Testing?

Functional Testing

  • System’s output is tested and analyzed for wide-range of inputs
  • Actual behavior/function of the system is tested
  • Regression, System, User Acceptance testing are performed to test actual functionality of the system
  • Test cases and test scenarios support performing functional testing
  • Both manual and automation testing are used to perform functional testing
  • Functional and testable requirements are tested

Non-functional Testing

  • System’s behavior factors are tested and analyzed
  • System is tested for its performance, strengths, and weaknesses
  • Load, Stress, Volume, Performance, Reliability, Resistance testing are performed on the system
  • Specific tools are used to perform non-functional testing
  • Requires testers specialized in this kind of testing
  • Non-functional requirements are tested. These are usually defined by the load that the server can take, the volume of data that the server can handle at any point of time, response time, etc.

9. What are the differences between Adhoc and Exploratory Testing?

Adhoc

  • Random testing of the system with in-depth knowledge of it
  • Domain knowledge and a clear understanding of the system are required
  • Skilled, experienced tester should perform this activity
  • Aim is to uncover corner cases and bugs arising from work-around scenarios

Exploratory Testing

  • Random testing of the system without in-depth knowledge of it
  • Domain knowledge and a clear understanding of the system are not required
  • Even a fresher can perform this activity
  • Aim is to uncover the bugs that are not captured through test cases

10. What are the differences between System and User Acceptance Testing?

System Testing

  • Entire system is tested as a whole in the testing environment
  • Complete end-to-end flow is tested with a wide range of data
  • Performed by testers
  • It is the combination of both Functional and Non-functional testing
  • Analysis is made on how the system behaves under any test condition
  • Positive, negative tests are performed as per test cases identified
  • Real-time scenarios are tested with test data. Any bug encountered at this level has to be fixed at high priority
  • All the test cases should pass, or at least 98% – 99% of the test cases should pass
  • Medium / low severity bugs that do not affect functionality can be left open and signed off for acceptance

User Acceptance Testing

  • Entire system is tested as a whole in a prod-like or staging environment
  • Critical/real-time business scenarios are tested for end-to-end flow with specific data set
  • Performed by a few testers, customers, and different stakeholders
  • Only functional testing is performed
  • Positive tests are performed as per UAT scenarios identified
  • Real-time scenarios are tested with real users’ data. Any bug encountered at this level will result in failure of the system to go live
  • All the UAT Scenarios identified should be passed
  • No bugs to be left open and signed-off by customers / stake-holders

11. What are the differences between Load, Stress, and Performance Testing?

Load Testing

  • Testing the system with continuous and steadily increasing load on it
  • This sets a benchmark for the load that the system can handle at any point in time
  • Analysis is made on how the system’s response time degrades as the load increases
  • Supported by tools like LoadRunner
  • Aim is to identify memory management bugs, memory leaks, buffer overflows
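A toy Python sketch of the load-testing idea: fire a steadily increasing number of concurrent requests at a hypothetical, simulated `handle_request` function and watch the average response time. Real load tests use dedicated tools such as LoadRunner and run against the actual system.

```python
import time
from concurrent.futures import ThreadPoolExecutor

def handle_request(_):
    """Hypothetical stand-in for the system under test."""
    time.sleep(0.01)  # simulated fixed processing time
    return "ok"

def measure_at_load(concurrent_users):
    """Fire `concurrent_users` simultaneous requests and return the
    average wall-clock response time per request."""
    def timed_call(i):
        start = time.perf_counter()
        handle_request(i)
        return time.perf_counter() - start
    with ThreadPoolExecutor(max_workers=concurrent_users) as pool:
        times = list(pool.map(timed_call, range(concurrent_users)))
    return sum(times) / len(times)

# Steadily increase the load and watch how the response time changes;
# the point where it degrades sharply is the system's benchmark.
for users in (1, 10, 50):
    print(users, "users ->", round(measure_at_load(users), 4), "s avg")
```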

Stress Testing

  • Testing the system by overloading it beyond its specified load
  • Analysis is made on at what point of heavy load the system breaks/crashes
  • Complex data is involved in breaking the system
  • System here can be database, servers, network, memory

Performance Testing

  • Testing the system for performance of the components like memory, scalability, reliability, and speed
  • Response of the system under any condition is evaluated
  • This contains Load testing, stress testing, volume testing, reliability testing, scalability testing as part of it
  • Can be tested at different levels: Application/system level, Database level, Network level. In each case, the input is different and large.

12. What are the differences between Regression and Retesting?

Regression Testing

  • Testing the functionality or component to ensure that it is not impacted by bug fixes / adding new features or components / modifying existing features or components
  • Requires impact analysis to be done, i.e., areas that might get affected due to the bug fix have to be analyzed and tested after the bug has been fixed
  • Requires identifying which test cases are to be executed
  • Aim: To ensure previously working functionalities still work as expected when a bug is fixed, a new feature is included, or an existing feature is modified
  • Can be performed only on stable features or components
  • Automation can be used for regression testing.
  • Automation is done for the features that are expected to remain as they are when a new feature is implemented or any other existing feature is modified.

Retesting

  • Once the bug is fixed, it has to be tested by following the steps mentioned to reproduce it, and the result should match the expected-result section of the bug report
  • Only the bug fix has to be checked for its removal.
  • No need to test the surrounding areas identified as impacted by the bug fix.
  • No need to identify or execute test cases
  • Aim: to ensure that the bug is fixed and working fine
  • Automation is not preferred; retesting is a one-time activity performed when the bug is fixed

13. What are the differences between Severity and Priority?

Severity

  • Impact of the bug on the related features and the entire system/application
  • Classified as Critical, Major, Normal, Minor
  • Decided by tester who is logging the bug

Priority

  • Importance of fixing the bug (either early or late or at acceptable timeframe)
  • Classified as Urgent, High, Medium, Low
  • Decided by developer / managers / customers sometimes

The higher the severity, the sooner the bug usually needs to be fixed, but the two do not always go together. Combinations of high and low Severity and Priority are provided below with examples for better understanding:

High Priority & High Severity: Basic functionality of the system is hampered or the user cannot use the system at all. Example: Page not loading, Adding a record not working, etc.

High Priority & Low Severity: Functionality is not hampered, but the issue leaves bad remarks on the system/application. Example: Spelling mistake in the application name itself, logo not displayed fully, etc.

High Severity & Low Priority: Functionality is working incorrectly. Logic is not correctly implemented, as a result, the wrong output is produced. Example: Few fields in the records do not update upon saving changes, conversion is incorrect, conditions are not working properly, etc.

Low Priority and Low Severity: Functionality is not hampered and there are no bad remarks on the system/application either. Example: Spelling mistakes in paragraphs (descriptions), border issues, UI issues
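The four combinations above can be summarized as a small lookup table; this Python sketch simply encodes the article's own examples.

```python
# Severity/priority triage matrix, mapping (severity, priority) to an
# example bug for that combination, taken from the descriptions above.
TRIAGE_EXAMPLES = {
    ("high", "high"): "Page not loading; adding a record not working",
    ("low",  "high"): "Spelling mistake in the application name; logo cut off",
    ("high", "low"):  "Fields not updated on save; incorrect conversion",
    ("low",  "low"):  "Spelling mistake in descriptions; border/UI issues",
}

def triage(severity, priority):
    """Look up an example bug for a given severity/priority combination."""
    return TRIAGE_EXAMPLES[(severity.lower(), priority.lower())]

print(triage("High", "Low"))
```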

14. What are the differences between Test Case, Test Scenario, and User Acceptance Scenario?

Test Case

  • Documentation which captures test steps, expected results, preconditions, input data to test the feature
  • Test case is a low-level document – that means, each and every step has to be described on how to perform particular test
  • To be written in such a way that even a newcomer (fresher to the application) can execute it without any difficulty
  • Higher traceability can be achieved
  • Test coverage will be more as everything in the feature is expected to be executed
  • Reviews can be done within the team

Test Scenario

  • Documentation which captures the test at a very high level
  • Test scenario is a high-level document – that means, what has to be tested will be mentioned, but how to perform it will not be described
  • Will be written mainly for Subject Matter Experts (SMEs), highly skilled testers with domain knowledge. It will be difficult for a newcomer (fresher) to execute it
  • Lower traceability is achieved
  • Test coverage in terms of scenarios is achieved. Not all simple testable requirements will be covered
  • Reviews can be done by customers, managers, Business analysts

User Acceptance Scenario

  • Documentation which captures real-time scenarios
  • User acceptance scenario is a high-level document – that means, what has to be tested will be mentioned, but how to perform it will not be described
  • Will be written mainly for Subject Matter Experts (SMEs), highly skilled testers with domain knowledge, and also for customers. It will be difficult for a newcomer (fresher) to execute it
  • Lower traceability is achieved
  • Not all the requirements will be covered. Only business-critical scenarios, major flows will be captured
  • Scenarios are to be executed only in the User Acceptance phase, on a staging or prod-like environment, with real user data
  • Reviews can be done by customers, managers, Business analysts

15. What are the differences between Manual Testing and Automation Testing?

Manual Testing

  • System/application is tested manually by providing inputs and/or performing actions
  • No need of programming or scripting language knowledge
  • All the testable requirements are tested manually to achieve full coverage. Not even a single requirement should be left untested in the system/application
  • Testing tools are not needed
  • Anything in the system/application can be tested without dependencies

Automation Testing

  • System/application is tested with the help of tools called automation tools
  • Requires extensive programming or scripting language knowledge
  • All the testable requirements cannot be covered in automation testing as tools used may not support some areas of the system/application
  • Cannot achieve full coverage
  • Testing tools are needed
  • Only stable features/components can be tested
  • Changing features / new features cannot be the candidates for automation testing

16. What are the differences between Test Plan and Test Strategy?

Test Plan

  • Detailed document that describes each and every aspect of testing and the team
  • Requirement Specification Document is the reference for Test Plan
  • Dynamic document that keeps getting changed/updated as and when there is a change in the process being followed or a feature is added/modified – the master document
  • Usually written by Test Managers, Test Leads
  • What to test, what not to test, who will test, and how the testing is done are captured

Test Strategy

  • High-level document that describes the test approach for the project
  • Business Requirement Document is the reference for Test Strategy
  • Static document and will not change/update frequently
  • Usually written by Project Managers, Business Analysts
  • Scope, Business Criteria, Test Approach, Risk, Training are captured

17. What are the differences between Tester and SDET?

Tester

  • Only testing skills required. Development skills are not needed for tester
  • Only functional testing is performed by testers through black box approach
  • Testers without or with minimum domain knowledge can perform testing
  • Can use automation tools, but cannot develop them

SDET (Software Development Engineer in Testing)

  • Both testing and development skills are required for SDETs
  • Both manual and automation testing are done by SDETs
  • Extensive domain knowledge required
  • Can develop automation tools, frameworks, review product design documents, participate in project management activities
  • Can easily work with developers as they understand coding, standards, etc.
