
Manual Testing Notes and Interview Questions

SDLC (Software Development Life Cycle):

  • Requirement Analysis

  • Design

  • Code

  • Test

  • Deploy

  • Maintenance

STLC (Software Test Life Cycle):

  • Requirement analysis

  • Test Planning

  • Test case development

  • Test Environment setup

  • Test Execution

    • Bug Reporting

    • Retesting

    • Regression

  • Test Cycle Closure

Agile STLC:

  • User Story

  • Test Data

  • Testing Strategy

  • Testcase Design

    • Test Design technique

    • Testcases

  • Test Execution

  • Test closure

Black Box Testing:

99% of the testing that a QA team does is black-box testing.

Types:

  • Functional testing → testing the application's functions against the requirements

  • Non-functional testing → e.g. performance

  • Regression testing

White Box Testing (done by the development team):

  • Unit testing

Smoke Testing:

Performed to find out whether the build is stable, by checking that the most critical components are working.

Purpose: to save Effort and time

Other names: build acceptance testing, build verification testing, confidence testing, level zero testing.
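As an illustration, a smoke suite is often just a small set of tagged tests run against every new build. Below is a minimal pytest sketch; open_home_page() and login() are hypothetical placeholders, not a real application's API.

```python
# smoke_sketch.py -- minimal pytest smoke suite (illustrative only).
# open_home_page() and login() are hypothetical placeholders; a real
# suite would drive the actual build under test.
import pytest

pytestmark = pytest.mark.smoke  # tag every test in this module as "smoke"

def open_home_page():
    # placeholder: would request the application's home page
    return {"status": 200}

def login(user: str, password: str) -> bool:
    # placeholder: would exercise the real authentication flow
    return (user, password) == ("qa", "secret")

def test_home_page_is_reachable():
    assert open_home_page()["status"] == 200

def test_valid_user_can_log_in():
    assert login("qa", "secret")
```

Running `pytest -m smoke` executes only these critical checks (registering the marker in pytest.ini keeps pytest from warning about it), which is what makes smoke runs fast enough to gate every build.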

Sanity Testing:

Performed after a new build is received; the new build could be the result of new functionality being added or of a bug fix.

Smoke testing vs Sanity testing:

  • Purpose: Smoke → tests the crucial or critical functionality of the application, i.e. the stability of the software; Sanity → checks that the new functionality or bug fix works fine and assures the rationality of the build.

  • Performed by: both are done by testers.

  • Documentation: Smoke is a well-documented process, with test cases for these functionalities already written; Sanity is also a well-documented process, and it is best practice to write test cases for any bug that was fixed.

  • Classification: Smoke is a subset of acceptance testing, because the tester has to decide whether to accept or reject the build; Sanity is a subset of regression testing, because we are testing after a code change or bug fix.

  • Scope: Smoke tests crucial scenarios of the entire system; Sanity tests a particular component of the system.

Regression Testing:

After a bug fix, a new release, or a code change resulting from either of these, no new bugs should be introduced across the modules or the entire system (that is, existing functionality should not break).

Retesting vs Regression:

Retesting → checks whether a specific bug has been fixed or not.

Regression → end-to-end testing performed across the system to check whether existing functionality still works after a bug fix or a new release.

Adhoc Testing:

Not formal testing, i.e. not a documented process; testing is done without being planned, organised, or rehearsed. It is a kind of negative testing: performing logical and illogical scenarios to check whether the application has any bugs.

Defect vs Bug:

Defect: can be identified in any phase, including by the end user; it is typically due to a design failure or a misunderstanding of the requirement.

Bug: usually identified during the testing phase; it is typically due to a code error or a mistake in the logic.

Bug Lifecycle:

1. New → bug created by the tester

2. Assigned → assigned to the respective developer (FE or BE)

3. Open → the bug is validated (is it a duplicate? reproducible? in scope? valid?)

  • Rejected/Deferred → if it fails the above checks

4. Fixed → assigned back to the tester

5. Retest → the tester retests the bug

  • If it is not fixed, it is Reopened

6. Verified → verified by the tester

7. Closed → the bug is closed
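Viewed as a whole, the lifecycle is a small state machine. Below is a minimal Python sketch of the allowed transitions (illustrative only; real trackers such as Jira define this in their workflow configuration, and the state names simply mirror the list above):

```python
# Illustrative sketch of the bug lifecycle as allowed state transitions.
ALLOWED_TRANSITIONS = {
    "New":      {"Assigned"},
    "Assigned": {"Open"},
    "Open":     {"Fixed", "Rejected", "Deferred"},
    "Fixed":    {"Retest"},
    "Retest":   {"Verified", "Reopened"},
    "Reopened": {"Assigned"},        # goes back to the developer
    "Verified": {"Closed"},
}

def move(current: str, new: str) -> str:
    """Return the new state, rejecting transitions the workflow forbids."""
    if new not in ALLOWED_TRANSITIONS.get(current, set()):
        raise ValueError(f"illegal transition: {current} -> {new}")
    return new

state = move("New", "Assigned")   # ok
state = move(state, "Open")       # ok
# move(state, "Closed")           # would raise: Open cannot jump to Closed
```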

Severity vs priority:

Severity: a measure that indicates the impact of the bug on the application.

Types: low, minor, major, critical

low → won't affect the product flow

minor, major → intermediate levels of impact

critical → a blocker; the entire application collapses

Priority: refers to how urgently the bug needs to be fixed.

Types: low, medium, high

Combinations of severity and priority:

1. High priority, high severity:

Ex: the search box is missing on Google in the Edge browser

Ex: in policy creation, the Add Policy button is not visible for any user

2. Low priority, high severity:

Ex: the application works fine in the latest browsers, but in legacy or very old browsers some elements are not interactable

Ex: the policy creation button is not visible for one user

3. High priority, low severity:

Ex: zoom in/out affects the application in some browsers

Ex: pagination is not working

4. Low priority, low severity (does not actually stop the workflow):

Ex: element colour or text colour issues

Software Test Life Cycle (STLC):

  • Requirement analysis:

    • The testing team has to study the functional/non-functional requirements from the testing point of view

    • An automation feasibility analysis also has to be done at this stage

    • Find out what types of testing have to be done

    • Create the RTM

  • Test planning:

    • a detailed document covering schedules, required resources, test strategy, estimates, objectives, and deadlines

  • Test case development:

    • once the test plan is ready, test cases are written and reviewed.

  • Testcase Execution:

    • Actual testing begins here, including smoke testing.

    • Finding bugs and tracking them to closure

    • Retesting

    • Mapping the bugs to test cases in the RTM

  • Test closure:

    • Sign-off given by the testing team, along with QA evidence and bug reports.

Entry Criteria & Exit Criteria:

Entry: a set of conditions to be met before an activity begins

Exit: a set of conditions to be met before an activity is concluded

Ex: Entry criteria for test case development:

  • Requirements frozen: all requirements should be frozen, with no further changes; dynamic changes to requirements are not allowed (condition: equal to, target: 0 requirement changes)

Ex: Entry criteria for regression testing:

  • Pass percentage of smoke/sanity: the pass percentage of smoke and sanity runs should be 100% (condition: equal to, target: 100%)

  • No new bugs: no new bugs should be introduced during smoke/sanity (condition: equal to, target: 0)

Ex: Exit criteria for regression:

  • Pass percentage of regression: the regression pass percentage should be greater than 97% (condition: greater than, target: 97%)

  • No high-priority pending bugs: there should be zero high-priority pending bugs (condition: equal to, target: 0)
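As an illustration, an exit gate like the one above can be checked in code. A minimal Python sketch follows; the 97% and zero-bug thresholds mirror this table and are project-specific assumptions, not universal values.

```python
# Sketch of a regression exit gate based on the table above.
def regression_exit_met(passed: int, executed: int,
                        high_priority_open: int) -> bool:
    pass_pct = (passed / executed) * 100 if executed else 0.0
    return pass_pct > 97 and high_priority_open == 0

# 980/1000 passed = 98% with no high-priority bugs open -> gate passes
assert regression_exit_met(passed=980, executed=1000, high_priority_open=0)
# a 96% pass rate fails the gate even with zero open bugs
assert not regression_exit_met(passed=960, executed=1000, high_priority_open=0)
```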

Ex: Exit criteria for release:

  • No high-priority/high-severity open bugs: aim for zero high-priority, high-severity open bugs (condition: equal to, target: 0)

  • Low-priority/low-severity bugs: aim for fewer than 10 low-priority, low-severity bugs (condition: less than, target: 10)

  • Failed suites: failed suites should be less than 5% (condition: less than, target: 5%)

Ex: Exit criteria for an automation release:

  • Automation code reviewed: the automation code should be reviewed by at least 2 people (condition: greater than or equal to, target: 2 reviewers)

  • Code quality: there should be no bugs or vulnerabilities in the code (condition: equal to, target: 0)

Test Scenario:

It is a breakup of the business requirements in the form of possible use cases.

Test case:

Step-by-step instructions, written in detail with the steps to be performed and the expected result, for a given functionality.

Requirement traceability Matrix:(RTM):

A document that maps and traces the requirements to test cases. The main purpose of this document is to validate that every requirement is covered by at least one test case, so that no functionality goes unchecked during software testing.
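As a sketch of the idea, an RTM can be modelled as a mapping from requirement IDs to covering test cases, with the "at least one test case per requirement" check the definition above describes. All IDs here are invented for illustration.

```python
# Minimal RTM sketch: requirement IDs mapped to covering test case IDs.
rtm = {
    "REQ-001": ["TC-101", "TC-102"],
    "REQ-002": ["TC-103"],
    "REQ-003": [],  # no coverage yet -> flagged below
}

# The RTM's core check: every requirement needs at least one test case.
uncovered = [req for req, cases in rtm.items() if not cases]
print("Requirements without a test case:", uncovered)  # ['REQ-003']
```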

Testing Principles:

  • Testing shows the presence of defects, not their absence

    • the goal is still to deliver a flawless product to the customer

  • Absence-of-errors fallacy

    • even a 99% error-free product fails if it does not meet the user's needs

  • Early testing

    • testing should start as early as possible, well before production, to avoid unexpected events

  • Exhaustive testing is not possible

    • testing every combination of inputs and preconditions is infeasible

  • Defect clustering

    • Pareto principle: 80% of consequences come from 20% of causes

  • Pesticide paradox

    • running the same set of test cases over and over stops discovering new bugs

  • Testing is context dependent

Agile Testing Principles:

  • deliver value to the customer

  • provide continuous feedback

  • courage to do the right things

  • focus on people

  • enable face-to-face communication

  • keep it simple

  • respond to changes

  • practice continuous improvement

  • self-organise

  • enjoy

  • less documentation

Test Design Techniques:

  • Specification based:

    • Equivalence partitioning → the whole range of input data is grouped into several sets of inputs → e.g. inputs from 0 to 100 (see the sketch after this list)

    • Boundary value analysis → lower boundary, upper boundary, and on the boundary → e.g. age criteria 0 to 18, 18 to 50, 50 and above

    • Decision tables → the system's behaviour with various combinations of inputs → e.g. uploading a document

    • State transition → for a given input, how the system transitions from one state to another

    • Use case testing → actions performed by the user → how the system behaves

  • Structure based

    • statement coverage

    • decision coverage

  • Experience based

    • Exploratory testing

    • Error guessing → based on experience
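To make the first two specification-based techniques concrete, here is a minimal pytest sketch for a hypothetical input field that accepts values 0 to 100; validate_age is an assumed stand-in for the real function under test, not a real API.

```python
# Equivalence partitioning + boundary value analysis, sketched with pytest.
import pytest

def validate_age(age: int) -> bool:
    # hypothetical function under test: accepts 0..100 inclusive
    return 0 <= age <= 100

# Equivalence partitioning: one representative value per partition
@pytest.mark.parametrize("age, valid", [
    (-5, False),   # partition: below the valid range
    (50, True),    # partition: inside the valid range
    (150, False),  # partition: above the valid range
])
def test_equivalence_partitions(age, valid):
    assert validate_age(age) is valid

# Boundary value analysis: just below, on, and just above each boundary
@pytest.mark.parametrize("age, valid", [
    (-1, False), (0, True), (1, True),
    (99, True), (100, True), (101, False),
])
def test_boundary_values(age, valid):
    assert validate_age(age) is valid
```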

General Testing Concepts

  1. What is the difference between verification and validation?

    1. Verification ensures the product is built correctly (the product is built according to the requirements)

    2. Validation ensures the correct product is built (it checks that the product meets user needs)

  2. Can you explain the software testing life cycle (STLC)?

    1. Requirement analysis, test planning, test case development, test environment setup, test execution, test closure

  3. What are the different levels of testing?

    1. Unit testing, integration testing, system testing, acceptance testing

  4. How would you prioritize test cases in a testing cycle?

    1. Based on critical functionality, risk, and business impact

  5. Explain the difference between functional and non-functional testing.

    1. Functional testing → testing the application against the requirements

    2. Non-functional testing → testing qualities such as the performance of the application

  6. What is the difference between black-box testing and white-box testing?

    1. Black box → testing without knowledge of the internal code structure

    2. White box → testing with knowledge of the internal code structure

  7. How do you determine when to stop testing?

    1. Once test case execution is completed, the deadline is reached, or the remaining risk is acceptable

  8. What is regression testing, and why is it important?

    1. To ensure existing functionality works fine after a bug fix

  9. Describe the V-Model in software testing.

    1. An extension of the waterfall model: each development phase has a corresponding testing phase, which together form a V shape

Test Case Design

  1. How do you write effective test cases?

    1. Effective test cases are concise and simple, include positive and negative cases, and give step-by-step instructions with expected results

  2. What are boundary value analysis and equivalence partitioning?

    1. Boundary value analysis → tests the edges of the input range

    2. Equivalence partitioning → the whole range of inputs is grouped into several sets (e.g. valid and invalid inputs) to minimise the number of test cases

  3. Explain the concept of a test scenario. How do you derive test scenarios?

    1. A kind of high-level test case: a breakup of a business requirement in the form of possible use cases

  4. How do you ensure that your test cases cover all possible scenarios?

    1. Review with peers, use traceability matrices, and consider edge cases

Defect Management

  1. What is a defect life cycle?

    1. New, assigned, open, fixed, retest, verified, closed (see the bug lifecycle above)

  2. How do you report a defect? What information should be included?

    1. Environment, user details, steps to reproduce, expected and actual results, evidence, priority, severity

  3. Explain the difference between severity and priority in defect management.

    1. Severity: a measure that indicates the impact of the bug on the application

    2. Priority: denotes how urgently the bug needs to be fixed

  4. How would you handle a situation where a developer disagrees with the severity of a bug you reported?

    1. Provide supporting evidence and involve the project manager

Tools and Techniques

  1. What tools have you used for test management and defect tracking?

    1. Azure DevOps; Zephyr with Jira

  2. How do you ensure traceability between requirements and test cases?

    1. The RTM is a document that maps and traces user requirements to test cases. Its main purpose is to ensure that every requirement is tested by at least one test case, so that no functionality goes unchecked during software testing

  3. Have you used any tools for test case design? If so, which ones?

    1. Excel

Test Planning

  1. What key elements do you include in a test plan?

    1. Schedule, strategy, estimation, resource availability, deadlines, and entry/exit criteria

  2. How do you estimate the time required for testing?

    1. Based on experience, complexity, and resource availability

  3. What risks do you typically consider when planning your testing activities?

    1. Changing requirements

Testing Strategies and Approaches

  1. Describe a situation where you had to adapt your testing strategy. How did you approach it?

    1. If the deadline is shortened, I have to prioritise the critical test cases

  2. How do you test an application with incomplete requirements?

    1. Write test cases based on assumptions and clarify them with the stakeholders

  3. What is exploratory testing, and how does it differ from scripted testing?

    1. It involves simultaneous learning and test design; exploratory testing has no pre-written test cases, unlike scripted testing

Domain Knowledge and Scenario-Based Questions

  1. If you were testing an e-commerce website, what specific functionalities would you focus on?

    1. Login, search, pagination, filters, add to cart, payment, order tracking

  2. How would you approach testing a login page?

    1. Valid/invalid credentials, SQL injection, HTML injection (see the sketch after this list)

  3. Describe a challenging bug you found and how you handled it.

    1. Work closely with the development team to resolve the issue

  4. Can you explain how you would test a mobile application differently from a web application?

    1. We have to consider OS versions, screen resolutions, network conditions, and device compatibility
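Returning to question 2 above (testing a login page), the sketch below shows the credential and injection checks as parametrized pytest cases; authenticate() is a hypothetical stand-in for driving the real login flow.

```python
# Login-page checks sketched with pytest; authenticate() is a
# hypothetical stand-in for the real UI flow or login API.
import pytest

def authenticate(username: str, password: str) -> bool:
    # placeholder: only one valid credential pair in this sketch
    return (username, password) == ("valid_user", "correct_pass")

@pytest.mark.parametrize("user, pwd, expected", [
    ("valid_user", "correct_pass", True),        # happy path
    ("valid_user", "wrong_pass", False),         # invalid password
    ("", "", False),                             # empty credentials
    ("' OR '1'='1' --", "anything", False),      # SQL injection attempt
    ("<script>alert(1)</script>", "x", False),   # HTML injection attempt
])
def test_login(user, pwd, expected):
    assert authenticate(user, pwd) is expected
```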

Communication and Collaboration

  1. How do you communicate your test results to stakeholders?

    1. By reporting the test pass/fail rate, overall quality status, and test coverage

  2. Describe a situation where you had to work closely with developers to resolve a critical issue.

    1. During regression I found a bug, then connected with the developers to resolve the issue urgently

  3. How do you handle tight deadlines or last-minute changes to requirements?

    1. By prioritising the test cases and changing the test plan accordingly
