Wednesday, December 12, 2012

Manual Testing Questions and Answers:

What makes a good test engineer?
A good test engineer has a 'test to break' attitude, an ability to take the point of view of the customer, a strong desire for quality, and an attention to detail. Tact and diplomacy are useful in maintaining a cooperative relationship with developers, and an ability to communicate with both technical (developers) and non-technical (customers, management) people is useful. Previous software development experience can be helpful as it provides a deeper understanding of the software development process, gives the tester an appreciation for the developers' point of view, and reduces the learning curve in automated test tool programming. Judgment skills are needed to assess high-risk areas of an application on which to focus testing efforts when time is limited.

What makes a good Software QA engineer? 
The same qualities a good tester has are useful for a QA engineer. Additionally, they must be able to understand the entire software development process and how it can fit into the business approach and goals of the organization. Communication skills and the ability to understand various sides of issues are important. In organizations in the early stages of implementing QA processes, patience and diplomacy are especially needed. An ability to find problems as well as to see 'what's missing' is important for inspections and reviews.

What makes a good QA or Test manager? 
A good QA, test, or QA/Test(combined) manager should:
• be familiar with the software development process
• be able to maintain enthusiasm of their team and promote a positive atmosphere, despite what is a somewhat 'negative' process (e.g., looking for or preventing problems)
• be able to promote teamwork to increase productivity
• be able to promote cooperation between software, test, and QA engineers
• have the diplomatic skills needed to promote improvements in QA processes
• have the ability to withstand pressures and say 'no' to other managers when quality is insufficient or QA processes are not being adhered to
• have people judgement skills for hiring and keeping skilled personnel
• be able to communicate with technical and non-technical people, engineers, managers, and customers.
• be able to run meetings and keep them focused

What's the role of documentation in QA? 
Critical. (Note that documentation can be electronic, not necessarily paper.) QA practices should be documented such that they are repeatable. Specifications, designs, business rules, inspection reports, configurations, code changes, test plans, test cases, bug reports, user manuals, etc. should all be documented. There should ideally be a system for easily finding and obtaining documents and for determining which document contains a particular piece of information. Change management for documentation should be used if possible.

What's the big deal about 'requirements'?
One of the most reliable ways of ensuring problems, or failure, in a complex software project is to have poorly documented requirements specifications. Requirements are the details describing an application's externally-perceived functionality and properties. Requirements should be clear, complete, reasonably detailed, cohesive, attainable, and testable. A non-testable requirement would be, for example, 'user-friendly' (too subjective). A testable requirement would be something like 'the user must enter their previously-assigned password to access the application'. Determining and organizing requirements details in a useful and efficient way can be a difficult effort; different methods are available depending on the particular project, and many books describe various approaches to this task.
Care should be taken to involve ALL of a project's significant 'customers' in the requirements process. 'Customers' could be in-house or external personnel, and could include end-users, customer acceptance testers, customer contract officers, customer management, future software maintenance engineers, salespeople, etc. Anyone who could later derail the project if their expectations aren't met should be included if possible.
Organizations vary considerably in their handling of requirements specifications. Ideally, the requirements are spelled out in a document with statements such as 'The product shall.....'. 'Design' specifications should not be confused with 'requirements'; design specifications should be traceable back to the requirements.
In some organizations requirements may end up in high level project plans, functional specification documents, in design documents, or in other documents at various levels of detail. No matter what they are called, some type of documentation with detailed requirements will be needed by testers in order to properly plan and execute tests. Without such documentation, there will be no clear-cut way to determine if a software application is performing correctly.
'Agile' methods such as XP rely on close interaction and cooperation between programmers and customers/end-users to iteratively develop requirements. The programmer uses 'test first' development: automated unit-test code is written before the feature, and that test code essentially embodies the requirements.
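
As a rough sketch of this 'test first' idea, the password requirement quoted above could be embodied in an automated unit test written before the feature itself; the grant_access() function below is a hypothetical stand-in for the real application code, not part of any actual project.

```python
import unittest

# Hypothetical application code under test; in test-first development this
# test class would be written before grant_access() exists.
def grant_access(entered_password, assigned_password):
    """Grant access only when the previously-assigned password is entered."""
    return entered_password == assigned_password

class TestPasswordRequirement(unittest.TestCase):
    # Requirement: "the user must enter their previously-assigned password
    # to access the application"
    def test_correct_password_grants_access(self):
        self.assertTrue(grant_access("s3cret", "s3cret"))

    def test_wrong_password_denies_access(self):
        self.assertFalse(grant_access("guess", "s3cret"))

if __name__ == "__main__":
    unittest.main()
```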

What steps are needed to develop and run software tests?
The following are some of the steps to consider:
• Obtain requirements, functional design, and internal design specifications and other necessary documents
• Obtain budget and schedule requirements
• Determine project-related personnel and their responsibilities, reporting requirements, required standards and processes (such as release processes, change processes, etc.)
• Identify application's higher-risk aspects, set priorities, and determine scope and limitations of tests
• Determine test approaches and methods - unit, integration, functional, system, load, usability tests, etc.
• Determine test environment requirements (hardware, software, communications, etc.)
• Determine testware requirements (record/playback tools, coverage analyzers, test tracking, problem/bug tracking, etc.)
• Determine test input data requirements
• Identify tasks, those responsible for tasks, and labor requirements
• Set schedule estimates, timelines, milestones
• Determine input equivalence classes, boundary value analyses, error classes (see the sketch after this list)
• Prepare test plan document and have needed reviews/approvals
• Write test cases
• Have needed reviews/inspections/approvals of test cases
• Prepare test environment and testware, obtain needed user manuals/reference documents/configuration guides/installation guides, set up test tracking processes, set up logging and archiving processes, set up or obtain test input data
• Obtain and install software releases
• Perform tests
• Evaluate and report results
• Track problems/bugs and fixes
• Retest as needed
• Maintain and update test plans, test cases, test environment, and testware through life cycle
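
To make the equivalence-class and boundary-value step above concrete, here is a minimal sketch; the accepts_age() rule and the 18-65 range are hypothetical examples, not taken from any real specification.

```python
# Hypothetical validation rule: an "age" field must accept values 18-65 inclusive.
def accepts_age(age):
    return 18 <= age <= 65

# Equivalence classes: below range (invalid), in range (valid), above range (invalid).
# Boundary values: the edges of each class, plus one representative mid-range value.
test_values = {
    17: False,   # just below the lower boundary -> invalid class
    18: True,    # lower boundary -> valid class
    65: True,    # upper boundary -> valid class
    66: False,   # just above the upper boundary -> invalid class
    40: True,    # representative value from the middle of the valid class
}

for value, expected in test_values.items():
    assert accepts_age(value) == expected, f"age {value}: expected {expected}"
print("all boundary and equivalence-class checks passed")
```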

What is verification? validation?
Verification typically involves reviews and meetings to evaluate documents, plans, code, requirements, and specifications. This can be done with checklists, issues lists, walkthroughs, and inspection meetings. Validation typically involves actual testing and takes place after verifications are completed. The term 'IV & V' refers to Independent Verification and Validation.

Explain in short, sanity testing, adhoc testing and smoke testing.
Sanity testing is a basic check that the components of the software build work with each other without any problem. It is done to make sure that no conflicting functions or duplicate global variable definitions have been introduced by different developers. It can also be carried out by the developers themselves.
Smoke testing, on the other hand, is a testing approach that covers all the major functionality of the application without getting into its finer nuances. It is said to be a main-functionality-oriented test.
Ad hoc testing is different from smoke and sanity testing. The term is used for testing performed without any planning or documentation. Such tests are intended to be run only once, although they may be repeated if a defect is found. Ad hoc testing is considered part of exploratory testing.
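
As an illustration of how a smoke test stays broad but shallow, the sketch below touches each major function once without deeper checks; the functions are hypothetical stand-ins for an application's real features.

```python
import unittest

# Hypothetical stand-ins for an application's major functions; in a real
# project these would be the application's own entry points or API calls.
def start_application():
    return True

def open_main_screen():
    return "main screen"

def save_record(record):
    return record is not None

class SmokeTest(unittest.TestCase):
    """Smoke test: exercise every major function once, without finer checks."""

    def test_application_starts(self):
        self.assertTrue(start_application())

    def test_main_screen_opens(self):
        self.assertEqual(open_main_screen(), "main screen")

    def test_record_can_be_saved(self):
        self.assertTrue(save_record({"name": "test"}))

if __name__ == "__main__":
    unittest.main()
```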


Priority and Severity 
Severity defines the impact that a given defect has on the system. A severe defect may cause the system to crash or invoke the dreaded blue screen of death; no one would argue about its severity. But how did it happen? Did it take an obscure set of keystrokes, or does it happen any time the user presses "e"? Or does it happen only in the month of June, during sunspot activity? What about a spelling error, or text in a really annoying neon green? Severe? Probably not; maybe just a cosmetic issue.

Priority, on the other hand, defines the order in which we should resolve a defect. Should we fix it now, or can it wait? How difficult is it to resolve? How many resources will be tied up by the resolution? Look at our two previous examples. Issue 1 (the system crash) is definitely severe and may be difficult to resolve, but it happens only rarely. When should we fix it? Contrast that with the second issue (the spelling error): not severe, it just makes you look bad, and it should be a very easy fix: one developer, maybe 10 minutes to fix and another 10 to validate (if that). Which should get the higher priority? Which should we fix now? I'm going to recommend fixing the typo immediately and, if there is sufficient time, fixing the blue screen before the next build; otherwise it will probably wait until the next major release. It becomes... a release note!
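
One practical consequence of this distinction is that a defect report should carry severity and priority as two independent fields. Below is a minimal sketch using the two examples above; the field names and values are illustrative only, not a real tracker's schema.

```python
from dataclasses import dataclass

@dataclass
class Defect:
    summary: str
    severity: str   # impact on the system: Critical / Major / Medium / Minor
    priority: str   # order of fixing: Immediate / High / Medium / Low

# The two examples discussed above: severity and priority need not agree.
crash = Defect("Blue screen on obscure key sequence", severity="Critical", priority="Medium")
typo = Defect("Spelling error on login page", severity="Minor", priority="High")

for defect in (crash, typo):
    print(f"{defect.summary}: severity={defect.severity}, priority={defect.priority}")
```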


Software Development Life Cycle(SDLC) Vs Software Test Life Cycle(STLC)

The Software Development Life Cycle (SDLC) and the Software Testing Life Cycle (STLC) run in parallel.
SDLC is a systematic approach to developing software. The process of testing software in a well-planned and systematic way is known as the Software Testing Life Cycle (STLC). For each SDLC phase there is a corresponding set of STLC activities:
• Requirements gathering: requirements analysis is done in this phase; the software requirements are reviewed by the test team.
• Design: test planning, test analysis, and test design are done in this phase; the test team reviews the design documents and prepares the test plan.
• Coding or development: test construction and verification are done in this phase; testers write test cases and finalize the test plan.
• Testing: test execution and bug reporting are done in this phase; manual and automation testing is performed, the defects found are reported, and re-testing and regression testing are also done.
• Deployment: final testing and implementation are done in this phase, and the final test report is prepared.
• Maintenance: maintenance testing is done in this phase.

CMMI Model :

In CMMI models there are six capability levels, designated by the digits 0 through 5:

Capability Level 0: Incomplete
An incomplete process is one that is either not performed or only partially performed. One or more specific goals of the process area are not satisfied, and no generic goals exist for this level.

Capability Level 1: Performed
A performed process carries out all of the specific practices of the process area. Specific objectives such as quality, cost, and schedule may not be met consistently or well, but the work gets done and is useful. In other words, something is done, though there is no evidence yet that it will be performed reliably.

Capability Level 2: Managed
A managed process is planned, performed, monitored, and controlled for individual projects or groups, or as a stand-alone process, to achieve a given need. Both the model objectives for the process and other related objectives, such as cost, schedule, and quality, are managed. The things that need to be managed are actively managed at this level, and certain metrics are collected consistently and applied to the management approach.

Capability Level 3: Defined
A defined process is characterized as a "well-defined process": a managed process that is tailored from the organization's standard processes according to the organization's tailoring guidelines, and that contributes work products, measures, and other process-improvement information back to the organization.

Capability Level 4: Quantitatively Managed
A quantitatively managed process is a defined process that is controlled using statistical and other quantitative techniques. Quality and process performance are established and used as the major criteria in managing the process. The quality of process performance is expressed in statistical terms and is managed throughout the process life cycle.

Capability Level 5: Optimizing
An optimizing process is a quantitatively managed process that is improved based on an understanding of the common causes of variation inherent in the process. The focus is on continually improving process performance through both incremental and innovative improvements.

Bug Life Cycle:

A bug can be defined as abnormal behavior of the software. No software exists without bugs. The elimination of bugs from the software depends upon the efficiency of the testing done on it. A bug is a specific concern about the quality of the Application Under Test (AUT).

In the software development process, a bug has a life cycle. A bug should go through this life cycle before it is closed; a defined life cycle ensures that the process is standardized. The bug attains different states as it moves through the life cycle.

The different states of a bug can be summarized as follows:

1. New
2. Open
3. Assign
4. Test
5. Verified
6. Deferred
7. Reopened
8. Duplicate
9. Rejected and
10. Closed
Description of Various Stages:

1. New: When the bug is posted for the first time, its state will be “NEW”. This means that the bug is not yet approved.

2. Open: After a tester has posted a bug, the tester's lead approves that the bug is genuine and changes the state to “OPEN”.

3. Assign: Once the lead changes the state to “OPEN”, he assigns the bug to the corresponding developer or developer team. The state of the bug is now changed to “ASSIGN”.

4. Test: Once the developer fixes the bug, he has to assign it to the testing team for the next round of testing. Before releasing the software with the bug fixed, he changes the state of the bug to “TEST”. This specifies that the bug has been fixed and released to the testing team.

5. Deferred: A bug changed to the deferred state is expected to be fixed in a future release. There are several possible reasons for deferring a bug: its priority may be low, there may be a lack of time before the release, or the bug may not have a major effect on the software.

6. Rejected: If the developer feels that the bug is not genuine, he rejects the bug. Then the state of the bug is changed to “REJECTED”.

7. Duplicate: If the same bug is reported twice, or two bugs describe the same underlying problem, then one bug's status is changed to “DUPLICATE”.

8. Verified: Once the bug is fixed and the status is changed to “TEST”, the tester tests the bug. If the bug is not present in the software, he approves that the bug is fixed and changes the status to “VERIFIED”.

9. Reopened: If the bug still exists even after the bug is fixed by the developer, the tester changes the status to “REOPENED”. The bug traverses the life cycle once again.

10. Closed: Once the bug is fixed, it is tested by the tester. If the tester feels that the bug no longer exists in the software, he changes the status of the bug to “CLOSED”. This state means that the bug is fixed, tested and approved.
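
In a bug-tracking tool this life cycle behaves like a state machine. The transition table below is a minimal sketch of the states described above; the exact set of allowed transitions varies from tool to tool.

```python
# Allowed transitions between bug states, as a simple lookup table.
BUG_TRANSITIONS = {
    "NEW":       ["OPEN", "REJECTED", "DUPLICATE", "DEFERRED"],
    "OPEN":      ["ASSIGN"],
    "ASSIGN":    ["TEST", "DEFERRED"],
    "TEST":      ["VERIFIED", "REOPENED"],
    "VERIFIED":  ["CLOSED"],
    "REOPENED":  ["ASSIGN"],
    "DEFERRED":  ["ASSIGN"],
    "REJECTED":  [],
    "DUPLICATE": [],
    "CLOSED":    ["REOPENED"],
}

def change_state(current, new):
    """Move a bug to a new state, enforcing the life cycle."""
    if new not in BUG_TRANSITIONS.get(current, []):
        raise ValueError(f"illegal transition: {current} -> {new}")
    return new

# Example: the normal path of a bug that gets fixed and verified.
state = "NEW"
for step in ["OPEN", "ASSIGN", "TEST", "VERIFIED", "CLOSED"]:
    state = change_state(state, step)
print("final state:", state)
```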

While defect prevention is much more effective and efficient in reducing the number of defects, most organizations conduct defect discovery and removal. Discovering and removing defects is an expensive and inefficient process; it is much more efficient for an organization to conduct activities that prevent defects.
Guidelines on deciding the Severity of Bug:

Severity indicates the impact each defect has on testing efforts or on users and administrators of the application under test. Developers and management use this information as the basis for assigning the priority of work on defects.

A sample guideline for assigning severity levels during the product test phase:

1. Critical / Show Stopper — An item that prevents further testing of the product or function under test can be classified as a Critical bug. No workaround is possible for such bugs. Examples include a missing menu option or a missing security permission required to access a function under test.
2. Major / High — A defect that does not function as expected/designed, or that causes other functionality to fail to meet requirements, can be classified as a Major bug. A workaround may exist for such bugs. Examples include inaccurate calculations or the wrong field being updated.
3. Average / Medium — Defects that do not conform to standards and conventions can be classified as Medium bugs. Easy workarounds exist to achieve functionality objectives. Examples include matching visual and text links which lead to different end points.
4. Minor / Low — Cosmetic defects which do not affect the functionality of the system can be classified as Minor bugs.

Differences between Bug, Defect and Error?

Defect - Something went wrong that we did not expect. E.g.: the application crashes when I press the "Save" button while registering myself somewhere.

Bug - The part of the code that causes the defect (e.g., the "save" method refers to a nonexistent database).

Issue - What you raise in the issue tracking system: the set of steps needed to reproduce the defect, written up so that the bug can be found and fixed (e.g., Issue 001, Critical, "Save new user does not work").

Error - Something went wrong, but we expected it. E.g.: when filling in my name I entered only numbers, and the application shows me an error message saying I need to correct the "name" field before continuing.


What is a Traceability Matrix / Requirements Traceability Matrix?
The concept of a Traceability Matrix is to be able to trace from top-level requirements to implementation, and from top-level requirements to tests.

A traceability matrix is a table that traces a requirement to the tests that are needed to verify that the requirement is fulfilled. A good traceability matrix provides backward and forward traceability, i.e. a requirement can be traced to a test and a test to a requirement. The matrix links higher-level requirements, design specifications, test requirements, and code files. It acts as a map, providing the links necessary for determining where information is located. This is also known as a Requirements Traceability Matrix or RTM.

This is mostly used by QA to ensure that the customer gets what they requested. The traceability matrix also helps the developer find out why some code was implemented the way it was, by making it possible to go from code back to requirements. If a test fails, the traceability matrix can be used to see which requirements and code the test case relates to.

The goals of a matrix of this type are:
1. To make sure that the approved requirements are addressed/covered in all phases of development: from SRS to development to testing to delivery.
2. To make each document traceable: written test cases should be traceable to their requirement specification, and if there is a new version, the updated test cases should be traceable to it as well.
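
As a minimal sketch, an RTM can be kept as a simple mapping from requirement IDs to the test cases that verify them; the requirement and test-case IDs below are hypothetical.

```python
# Hypothetical requirement and test-case IDs.
rtm = {
    "REQ-001": ["TC-101", "TC-102"],   # login with previously-assigned password
    "REQ-002": ["TC-201"],             # password reset
    "REQ-003": [],                     # not yet covered by any test
}

# Forward traceability: which tests cover a given requirement?
for req, tests in rtm.items():
    status = ", ".join(tests) if tests else "NO COVERAGE"
    print(f"{req}: {status}")

# Backward traceability: which requirement does a failing test relate to?
failing_test = "TC-102"
related = [req for req, tests in rtm.items() if failing_test in tests]
print(f"{failing_test} traces back to: {related}")
```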





