Thursday, December 26, 2019

The Difference Between Sanity Testing and Smoke Testing

Sanity Testing
Sanity testing is the type of software testing used to confirm that a particular function or module works in accordance with the stated technical requirements. It is often used to verify a part of a program or application after changes have been made to it or to its environment, and it is usually performed manually.

Learn more about: Regression testing services

Smoke Testing
Smoke testing is a short cycle of tests performed to confirm that the application launches and its key functions work after a new or modified build.

The results of testing the most important parts of the application should give objective information about the presence or absence of defects in their operation. Based on the smoke test results, the build is either passed on for full testing or sent back for rework.



Among less experienced software developers, it is often claimed that these two types of testing are the same thing.

In our opinion, such claims are unfounded: sanity testing focuses on an in-depth check of a particular function, while smoke testing covers a large amount of functionality in the shortest possible time.

Test coverage criteria in Software Testing

Test coverage
- a criterion that reflects the quality of software testing. It characterizes how completely the tests cover the code or the requirements for it.

The main approach to assessment is forming a pool of tests. The code's test coverage depends directly on the number of checks selected for it.

The multitasking and versatility of modern software make 100% test coverage unattainable, so special techniques and tools have been developed to maximize coverage of the code under test.

Depending on the area being verified, there are three approaches to assessing test coverage and expressing it numerically: requirements coverage, code coverage, and coverage based on control flow analysis. More about each below.
Requirements Coverage
Using the traceability matrix, we assess test coverage of the requirements put forward for the program.
The formula for the assessment is as follows:

Tcov = (Lcov / Ltotal) * 100.

The result is expressed as a percentage.

Explanation of variables:

Tcov - test coverage;
Lcov - the number of requirements covered by tests;
Ltotal - the total number of stated requirements.




One way to simplify the task is to break the requirements down into subsections, which requires a thorough analysis of them. Each subsection is then linked to the tests that verify it. The set of these relationships is called the traceability matrix; it lets you track which requirements are verified by a specific test case.
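As a minimal sketch (all requirement and test IDs here are invented), the traceability matrix can be modeled as a mapping from requirements to the test cases that cover them, and the coverage formula above computed from it:

```python
# Minimal sketch of a traceability matrix: requirement ID -> linked test case IDs.
# All IDs are invented for illustration.
trace_matrix = {
    "REQ-1": ["TC-101", "TC-102"],  # covered by two tests
    "REQ-2": ["TC-103"],            # covered by one test
    "REQ-3": [],                    # no test linked: will never be verified
}

def requirements_coverage(matrix):
    """Tcov = (Lcov / Ltotal) * 100, where Lcov counts requirements
    that have at least one test case linked to them."""
    total = len(matrix)
    covered = sum(1 for tests in matrix.values() if tests)
    return covered / total * 100

print(requirements_coverage(trace_matrix))  # 2 of 3 requirements covered
```

Requirements with an empty test list, like REQ-3 above, are exactly the gaps the traceability matrix is meant to expose.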

It makes no sense to use tests without tying them to requirements. At the same time, if a requirement is not linked to a test, it will not be verified, and it will be impossible to judge how well it has been implemented in the program code. Since modern software involves a degree of standardization (common solutions for common situations), it makes sense to use standard test design techniques.


Learn more about Performance testing services

Code Coverage
Here, the areas of the program code that were not exercised by the tests are tracked.

The formula for the assessment looks like this:

Tcov = (Ltc / Lcode) * 100.
The result is expressed as a percentage.

Variables:

Tcov - test coverage;
Ltc - the number of lines of code covered by the check;
Lcode - the total number of lines of code.
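The code coverage formula can be sketched the same way, given the set of executable line numbers and the set of lines the tests actually executed (both sets are invented here; real coverage tools collect them for you):

```python
# Invented data: which lines exist and which ones the tests executed.
executable_lines = set(range(1, 21))         # Lcode = 20 lines of code
executed_lines = {1, 2, 3, 5, 6, 9, 10, 12}  # Ltc = 8 lines hit by tests

def code_coverage(executed, executable):
    """Tcov = (Ltc / Lcode) * 100."""
    return len(executed & executable) / len(executable) * 100

print(code_coverage(executed_lines, executable_lines))  # 40.0
uncovered = sorted(executable_lines - executed_lines)   # lines the tests missed
```

The `uncovered` list is the practical output of this approach: it tells you exactly which code the next tests should target.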

Standard tools exist for this routine work, for example Atlassian Clover, which tracks which parts of the code are executed during testing and reports on them.

Analyzing this data can significantly expand coverage, since it reveals duplicate checks and code sections that testing has missed.

This coverage optimization technique is used in white-box analysis and in unit, integration, and system testing strategies.

As the formula suggests, this metric requires knowledge of the code's internal structure, so it is not the best choice for black-box testing: configuration and setup will be difficult, and the product's authors will have to be involved.

Testing Control Flows
This approach is also better suited to the white-box principle. Here, the paths along which the program code executes are analyzed, and test cases are developed and run to cover those same paths. As in the previous approaches, there is a principle that makes covering the desired area easier.

Control flow graphs are used. Based on the ratio of entry points to exit points, three components are distinguished:

- a process block (one entry point and one exit point);
- a decision point (more than one exit point per entry point);
- a junction point (more than one entry point per exit point).
Validation of control flows involves several levels of coverage:

Level 0 (no name): detailed testing is left to the users; the tester performs only a shallow, routine check.
Level 1 (statement coverage): each code statement must be exercised by the tests at least once.
Level 2 (branch coverage): each branch point in the code is checked at least once.
Level 3 (condition coverage): if a condition has two outcomes (TRUE or FALSE), each outcome must be exercised at least once.
Level 4 (condition and branch coverage): every condition and every branch is checked.
Level 5 (multiple-condition coverage): levels 2 through 4 applied together.
Level 6 (infinite-path coverage): when loops make the number of paths tend to infinity, a limited set of loop iterations is verified.
Level 7 (path coverage): all paths are verified, without exception.
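To make levels 1 and 2 concrete, here is a minimal sketch (the function and numbers are invented): a single test can execute every statement while still leaving one branch outcome untested.

```python
def apply_discount(price_cents, is_member):
    # One branch point with no "else": the False branch has no statement of its own.
    if is_member:
        price_cents = price_cents * 90 // 100
    return price_cents

# A single test with is_member=True executes every statement (level 1)...
assert apply_discount(1000, True) == 900
# ...but branch coverage (level 2) also requires the False outcome,
# where the body of the "if" is skipped:
assert apply_discount(1000, False) == 1000
```

This is why 100% statement coverage alone can still hide defects that live on the untaken branch.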

Usability Testing in Software Testing

It happens that we need an application to help solve a specific problem, and software products vary both in functionality and in ease of use.

With an ill-conceived interface, even the features the software does provide can seem incomprehensible and incomplete.

Sometimes a highly functional application never becomes popular among users simply because it is difficult to use.

Usability testing should therefore be an important part of the marketing policy of any mass-market software vendor.
Usability testing
- a testing technique designed to determine how comfortable, learnable, accessible, and attractive a software solution is for the end user under the given conditions.

The test result allows you to assess the degree of comfort of the application according to several criteria:

  1. Efficiency - characterizes the time and number of sequential actions the user takes to obtain the final result.
  2. Accuracy - indicates the number of erroneous user actions while using the application.
  3. Recall - shows how much knowledge of the application the user retains a long time after last working with the product.
  4. Emotional response - assesses the feelings the user is left with after working with the software, and the likelihood of recommending it to other people.
Usability Testing at Different Testing Levels
Usability testing involves checking the application in both black-box and white-box modes. The tester takes the place of the end consumer and evaluates the product; at the code level, the convenience of using objects, classes, methods, and variables is assessed.

How easy the code is to change and extend, and how well it interacts with additional modules and systems, is studied as needed. A well-chosen interface (API) improves quality, speeds up writing and maintaining the generated code, and ultimately raises the overall quality of the application.

Obviously, usability testing should be carried out at all levels of product creation (unit, integration, system, acceptance). Each level should provide test cases for different kinds of users, from the developer to the operator who will use the application in their daily work.

How can site usability testing be improved?
First and foremost, a well-proven foolproofing mechanism should be provided. English-language resources call it fail-safe design; the Japanese term is poka-yoke. The general principle is simple: prevent invalid input caused by the end user's inattention or inexperience. One approach is to control the input data (for example, not allowing numeric values in a text-only field).
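A minimal sketch of such input control (the field rules and messages are invented): reject numeric input in a text-only field before it reaches the application logic.

```python
def validate_text_field(value):
    """Poka-yoke style check for a text-only field (e.g. a city name).
    Returns (ok, message) so the UI can explain the rejection."""
    if any(ch.isdigit() for ch in value):
        return False, "Digits are not allowed in this field."
    if not value.strip():
        return False, "The field must not be empty."
    return True, ""

print(validate_text_field("London"))  # accepted
print(validate_text_field("L0ndon"))  # rejected: contains a digit
```

The point is that the user is stopped at the moment of entry, with a message explaining why, rather than triggering an error deeper in the application.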

User feedback on the finished product should be the basis for improving the application. By interpreting reviews correctly, you can make the product far more comfortable to use than the original. A useful framework here is the Plan-Do-Check-Act chain, the so-called Deming-Shewhart cycle: a management algorithm for controlling a process and solving its tasks.

Learn more about performance testing services

Popular Misconceptions About Usability Testing
Most often, false usability test results come from a false premise: equating the convenience of the interface with the usability of the application as a whole.

Yes, the user interface is involved in both checks, but the levels of verification differ. The interface is connected to the functionality, but the program code being exercised (for example, client-server interaction) does not always have a visual component.

The second most common mistake is the belief that usability testing can be done without the services of a professional tester.


Anyone can understand the subject, but only a specialist can precisely account for the nuances, both when choosing test cases and when interpreting the results. This becomes apparent once you consider how multi-level and multi-faceted the software testing process is.

What is a test case? What fields does a typical test case consist of?

Test case

What is a test case? It is the smallest unit of test documentation: a scenario that verifies a particular condition from the requirements. One condition can be checked by several test cases (positive and negative).

Key test case fields:

  • ID
  • Summary
  • Steps
  • Expected Result
  • Pass / fail
Additional fields are possible: comments, links to the bug report, and preconditions that must be fulfilled before reproducing the steps.

Learn more about: software testing services

More details about the fields of the test case:

ID
This field holds the case number, or the number together with an abbreviation, for example 'PD_Sync_123'. It uniquely identifies the test case among the other cases.

Summary
A short description of the scenario is written here. The summary should answer the question of what is being checked and under what conditions.

Steps
Here are the steps to reproduce the scenario. The steps should be minimized as much as possible, that is, the shortest path to reproduce should be found and described, and it is very important that they remain as clear as possible for the developers.

Expected Result
This field describes the expected result after walking through the steps, or, less commonly, after specific individual steps.

Pass / fail
This field records the status of each test case run. If the expected result matches the actual one, the status is pass; otherwise, fail. There may be other statuses as well, depending on the processes and rules of the particular IT company.

Test Case Example


A simple visual example: checking a successful login to the system by the Administrator, given that the username and password are 'Login' and '12345'.

You can write the following Test Case:



ID: 1
Summary: Administrator log in (positive)
Steps:
1. Open the login page
2. Type 'Login' in the Name field
3. Type '12345' in the Password field
4. Submit
Expected Result: Administrator should be logged in successfully
Pass / fail: (the field remains empty until the test case is executed)
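The same test case can be sketched as an automated check. Everything here is invented for illustration: `login` stands in for the real authentication logic, and the credentials come from the example above.

```python
# Hypothetical stand-in for the application's login logic.
VALID_CREDENTIALS = {"Login": "12345"}

def login(name, password):
    """Returns True when the name/password pair is accepted."""
    return VALID_CREDENTIALS.get(name) == password

def test_administrator_login_positive():
    # Steps 1-4 of the test case collapse into one call here.
    assert login("Login", "12345"), "Administrator should be logged in"

def test_administrator_login_negative():
    # A companion negative case for the same condition.
    assert not login("Login", "wrong-password")

test_administrator_login_positive()
test_administrator_login_negative()
```

Note how one condition from the requirements yields both a positive and a negative test case, exactly as described above.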

What is Regression Testing in software testing?

Testers very often have to test modules that have already been tested before. This work can become monotonous and make your eyes glaze over, BUT regression testing is very important.


Regression Testing
Regression testing is the testing of a previously tested part of an application or program after a change in the code or environment, in order to make sure that a fixed bug, a new feature, or updated server or system settings did not affect the old functionality.

After you find a bug and the programmers fix it, you should check whether the bug is really fixed. This is correct, but it is not regression testing; it is retesting. If the bug is fixed and everything is in order, do not jump to conclusions: after retesting the bug, regression testing of the affected module should also be carried out, since the fix may have had an impact on that module.
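The difference can be sketched in code (the pricing function and its bug are invented): retesting re-runs only the fixed scenario, while regression re-runs the neighbouring checks the fix could have broken.

```python
def total_price(price, quantity, discount=0.0):
    """Invented function where a rounding bug was just fixed."""
    return round(price * quantity * (1 - discount), 2)

def retest_fixed_bug():
    # Retesting: re-run only the scenario from the bug report.
    assert total_price(19.99, 3, discount=0.1) == 53.97

def regression_suite():
    # Regression: re-run the previously passing checks on the same module,
    # because the fix could have affected them too.
    assert total_price(10.0, 1) == 10.0
    assert total_price(10.0, 2) == 20.0
    assert total_price(0.0, 5, discount=0.5) == 0.0

retest_fixed_bug()
regression_suite()
```

If only `retest_fixed_bug` is run, a side effect of the fix on the undiscounted path would go unnoticed; that is exactly the gap regression testing closes.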

Learn more about software testing services

Thus, regression testing is very important for keeping product quality at the proper level.

Testing Levels in Software Testing



In software testing, four typical testing levels can be distinguished:

Unit Testing
- a module (unit) is the smallest functional part of a program or application; it cannot function separately, only in combination with other modules.

Nevertheless, once such a module has been developed, we can already begin testing it and find inconsistencies with our requirements. Unit testing consists of testing this separate module as part of the program, with the understanding that it is only a module: it cannot exist independently and is part of an application.
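A minimal sketch of a unit test (the function and its requirement are invented): the module cannot run on its own, but its single function can already be checked against the requirements in isolation.

```python
# A hypothetical module-level function under unit test.
def normalize_username(raw):
    """Invented requirement: usernames are trimmed and lower-cased."""
    return raw.strip().lower()

# Unit tests exercise the module in isolation from the rest of the app.
assert normalize_username("  Alice ") == "alice"
assert normalize_username("BOB") == "bob"
```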

Learn more about software testing services

Integration Testing
- the next level of testing, carried out after unit testing. After the individual modules of our application have been tested, we should conduct integration testing to make sure that the modules function successfully in conjunction with each other. In other words, we test two or more related modules to verify that the integration works without obvious bugs.
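Continuing the sketch, an integration test wires two invented modules together and checks the result of their interaction, rather than each module alone:

```python
# Module A (unit-tested separately): normalizes usernames.
def normalize_username(raw):
    return raw.strip().lower()

# Module B (unit-tested separately): a registry keyed by username.
class UserRegistry:
    def __init__(self):
        self._users = {}

    def add(self, username):
        self._users[username] = True

    def exists(self, username):
        return username in self._users

# Integration test: the two modules must agree on the key format.
# A disagreement here is a bug neither unit test alone would catch.
registry = UserRegistry()
registry.add(normalize_username("  Alice "))
assert registry.exists("alice")
assert registry.exists(normalize_username("ALICE  "))
```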

System Testing
- the level of testing at which we test the whole system or application, fully developed and ready for a potential release. At this level we test the system as a whole: we run tests in all required browsers or operating systems (for a desktop application) and conduct all the required types of testing, such as functional, security, usability, performance, and load testing.

Acceptance Testing
- after the successful completion of system testing, the product moves to acceptance testing, which is usually carried out by the customer or other interested parties to ensure that the product looks and works as originally required and described in the product requirements. Acceptance testing may also be conducted after each of the test levels described above.

Cross-browser testing



Cross-browser testing - a type of testing aimed at ensuring that a software product displays fully and correctly in different browsers and on mobile devices, tablets, and screens of various sizes.
Cross-browser testing is an important step in the development of any web application: the appearance of the site and its correct display on any modern device play a decisive role for the customer.

Features of cross-browser testing
Cross-browser testing of a site begins with selecting browsers. The customer determines which web browsers his application must work with, but it is the developer's and tester's task to advise the client on which browser should be the main one: study the visit statistics of similar applications and determine which browsers that audience uses.
As a rule, the most popular browsers are Google Chrome, Mozilla Firefox, Internet Explorer, and Opera.
Learn more about compatibility testing services

The main points to test are layout (colors, fonts, the position of images and dynamic elements) and JavaScript.
Note that cross-browser testing should be performed only when the system is stable and all functionality has been debugged; otherwise, errors will surface that have nothing to do with cross-browser issues, which leads to unnecessary financial costs.

Convenient online services and utilities exist for cross-browser verification. Such a check produces ready-made screenshots of your site in different browsers, which you can then compare and analyze.

Advantages and disadvantages of Automated Testing

Test Automation

Testing a software product in which the main stages of verification (launch, initiation, execution, processing of results, and drawing conclusions) are carried out by automatic tools is called automated software testing.




Prerequisites for Automation
Like any highly specialized practice, software testing automation has its pros and cons. Accordingly, there are cases where automated testing is appropriate and cases where manual testing is more useful.
The undeniable advantages of automated testing include:
  • Repeatability - a guarantee that the created autotests will always follow the same algorithm and will not skip an intended check in any run.
  • Quick results - no need for the time a person would spend verifying intermediate results and confirming that requirements are met.
  • Low cost - once created, the test software takes less effort to analyze the resulting data, replacing the same volume of manual testing without loss of quality.
  • Rich reporting - ready-made results are easy to process, and the reports themselves are easy to distribute to interested parties.
  • Free hands - while the tests are running, a human tester can do other useful work that cannot be automated. Testing can also be run when the load on computing resources is lower (after hours).
Cons of automated software testing
  • Repeatability (yes, again) - uniform tests cannot catch anything other than what they were written for, whereas a person can notice minor inconsistencies and, during testing, draw conclusions about the nature of an error or make corrections.
  • Maintenance - although manual testing costs more, automated tests also need to be updated and maintained so that the checks match the application under test (as the software grows more complex, the autotest code must be updated too).
  • Development - writing, and above all debugging and testing, autotests takes a lot of time; after all, software for testing software is itself software, just with a very narrow function.
  • Cost - a licensed copy of a test automation framework can cost a decent amount. Free options exist and are widely used, but their functionality often leaves much to be desired, and a paid license should help with the maintenance problem described in point 2 of this list.
  • Minor defects - autotests may miss small defects that do not harm the functionality of the code but spoil the visual interface and inconvenience the end user (shifted windows, spelling mistakes, and the like).

Learn more about regression testing services

As you can see, the advantages and disadvantages mirror each other, so in each case you need to weigh the expected benefits against the cost of automating the testing.
If the drawbacks are insurmountable for you, one alternative remains: manual testing. But it has shortcomings of its own.

Load Testing in Software Testing

Due to the rapid development of the IT industry and hardware capabilities, software developers must ensure that their creations run smoothly under enormous loads.

There is also a purely commercial angle: a client who runs into performance problems in an application or program, faced with a wide choice of alternatives, is likely to abandon the offending product. The problem of sudden failure can be addressed by testing the software under maximum load (load testing).

A high-quality implementation of it should be the gold standard for anyone wishing to guarantee the stability of a program.

"Stress Testing". Two words that we will now try to explain for a more complete understanding of the process.

“Load Testing” - load testing in translation into English. A synonym is performance testing. The software imitates the activities of several, a certain number of users of the presented software product (Performance Testing).

To conduct load testing effectively, you need to be aware that it is a rather complicated process; it cannot be reduced to recording and running executable scripts.

Load testing, first of all, is about obtaining serious analytical data: without advanced programming knowledge, it cannot be fully automated. Secondly, given how widely the product will be used, knowledge of networks (protocols, server varieties, databases) is required.

Third, different types of load testing are used depending on the goals for the product's operation under load.

In a professional software testing environment, load testing is used to decide whether a product can be put into final operation. The essence of load testing is to evaluate the performance and speed of a web resource (or its application) under a certain artificial load on the system.

Learn more about software testing services

The main load indicators for the web resources under test may be the expected number of visitors over a specific time interval and a given number of operations performed simultaneously on the website platform.
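As a toy sketch of the idea (the request handler and user count are invented; real load tools exercise a live server instead), several concurrent "users" hit an operation and the elapsed times are collected:

```python
import time
from concurrent.futures import ThreadPoolExecutor

def handle_request(user_id):
    """Invented stand-in for one operation on the website platform."""
    start = time.perf_counter()
    time.sleep(0.01)  # simulate server-side work
    return time.perf_counter() - start

# Simulate 50 users issuing requests, 10 at a time.
with ThreadPoolExecutor(max_workers=10) as pool:
    latencies = list(pool.map(handle_request, range(50)))

avg = sum(latencies) / len(latencies)
worst = max(latencies)
print(f"avg latency: {avg * 1000:.1f} ms, worst: {worst * 1000:.1f} ms")
```

The analytical output (average and worst-case latency under a chosen concurrency level) is then compared against the system requirements discussed below.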

The most anticipated result of load testing is that the results obtained match the system requirements for the web resource's operation, requirements that were developed when the resource's functionality was defined, before the software architecture was designed.

But since these requirements are often undefined or insufficiently specific, an exploratory load testing option is used, which relies on probabilistic estimates of the expected load on the system.

Load testing is best applied to determine the performance of web resource software at an early stage of development, as it helps assess the viability of the system as a whole.