
Test Coverage – Software Testing Services

Our Solutions

When testing software, whether it’s mobile or computer-based, coverage is vitally important. All aspects of the application must be taken into consideration before sending it out to the people who will make or break your business. At Cohort Data, we have all of your bases covered. From static testing all the way to production validation, we can be with you every step of the way. Even if you have many of your bases covered yourself, we can still step in to fill a gap here or there, as needed. This section presents all of the testing coverage options we have at your beck and call.

TEST DESIGN & EXECUTION ON-DEMAND

Our On-Demand QA testing services for test design and test execution allow you to schedule our resources whenever you need to.

MOBILE/BROWSER COMPATIBILITY TESTING

We can evaluate your processes of communication, testing, and accountability for efficiency and effectiveness, then help you enhance and optimize them.

QA AUDIT & PROCESS IMPROVEMENT

We progressively empower “outside the box” thinking through resource-leveling partnerships.

AUTOMATION TESTING - QA AUTOMATION

Our QA Automation practice is full of automation experts in a wide range of both open source and enterprise testing tools.

LOAD/STRESS/PERFORMANCE TESTING

Our unique performance offerings encompass capacity planning, performance engineering, and optimization coverage.

SECURITY/PENETRATION TESTING

Our team is proficient in aggressively attacking application defenses from all possible angles to find loopholes and weaknesses.

REGRESSION TESTING FACTORY

We can execute 10,000 test cases overnight without a need for automation scripts whenever you need us.

QA E-LEARNING & CORPORATE TRAINING

More than 30 corporate onsite and remote training courses are available, along with 20 e-learning modules covering the entire QA domain.

CROWDSOURCING TESTING

Cohort Data has access to 15,000 QA Testers in over 123 countries. Our ‘crowd’ comprises testing professionals, not ordinary users.


Acceptance Testing

Commonly called User Acceptance Testing (UAT) and sometimes referred to as Beta Testing, Acceptance Testing determines if the end product is useful to the end-user: the people who will be using the live system. In functional testing, we verify that a product works correctly according to specification. In acceptance testing, we validate that the correct thing was built and that it’s what the customer actually needs. It’s possible for an application to pass all of the functional tests and yet fail acceptance testing. If a product works correctly but isn’t actually useful to the end-user, then the project will be a failure. It’s best to uncover these issues prior to a production release. Many times, this type of testing is done in cooperation with the customer or actual end-users and monitored by professional QA staff.

The typical scenarios used in acceptance testing cover the ways in which we would expect the application to be used on a daily basis. Are the colors suitable? Can the users easily navigate the screens without using a manual? Are all the necessary screens present? Can the user accomplish what they need to within a reasonable amount of time? Can the user modify the appropriate areas and access only what they should be able to? Essentially, does the product solve the problem it was designed to solve?

Acceptance testing should be done in an environment as close to production as possible, if not in production itself. If it’s possible to isolate the changes, or the section of the application under test, doing it in production is viable. Many times, though, these types of tests are done in a lab environment outside of production. Most clients use our services as a final, independent verification and validation service for acceptance testing. Having a third party who wasn’t involved with the development review requirements and validate usability is a smart step that many mature organizations take.


Alpha Testing

The final step before beta testing, alpha testing is most often done in house or by a team that is apprised of business and functional requirements and design specifications. Sometimes this testing is performed before the product is completely finalized. Most in-house QA teams perform alpha testing as a regular part of their job.
This step before the final testing phase helps to ensure that requirements are met, design specifications are complete, and that all of the significant application functions are in working order. The goal is to find as many of the defects as possible before it goes to beta test. Generally speaking, this phase of testing ensures that the product is ready for an outside team to use.
A good alpha test will have a well-defined, methodical test plan with comprehensive test cases and/or benchmarks to fully measure the product. This phase of testing normally involves several iterations, a lot of defect logging, defect fixes, and re-testing. Almost exclusively done by QA professionals, alpha testing can be a very long process.
Cohort Data’s Testing Execution On-Demand Service has a rotating team of quality assurance professionals who are ready to work on your project immediately with automated or manual tests. For the fully manual project, our Manual Test Design and Execution Service is available to work with your developers, business analysts, and managers to go over requirements, build a comprehensive test plan, design thorough test cases, rapidly execute the tests, help remediate defects, and retest until the project is satisfactorily completed.


Automation QA Testing

If automated QA testing is something you’re unfamiliar with, maybe this can help. In a nutshell, it’s a testing process that uses tools to run scripted tests on web or mobile applications in a predetermined, systematic fashion. Most tools can run the tests, report the results, and even compare results to earlier tests. Automated tests can be scheduled to run any time of the day or night, with or without an actual person monitoring.
The goal with any test automation solution is to both simplify and speed up the testing effort in order to improve software product quality and reduce time to market. Both commercial and open source automated testing tools are available, but most require some level of knowledge about the tool or automated software testing itself. This is often where independent testing groups, such as Cohort Data, come in.
While the actual running of the tests doesn’t require a person to manually initiate or monitor them, the creation of the tests does require an intimate level of knowledge about the tool. Implementing automated testing can be time-consuming, expensive, and inefficient at first; the full ROI comes later. Once automated tests are scripted – a process that takes time and in-depth knowledge – they can be used over and over again for years to come. Changes to the application under test may require some modifications to the automated tests, but if the scripts were correctly written in the first place, maintenance doesn’t have to be painful.
One of the more difficult decisions to make when embarking on automated testing is determining what tool would suit your company best. If you select the wrong one, you could have a lot of time and money invested in something that can’t give you the return you need or desire. Cohort Data’s Automation Tool Expert Services is here to mitigate that risk for you. Our QA Professional team is full of automation experts in a wide range of both open source and enterprise testing tools. This means that we can help you to select and use the correct tool for your company and project without any added overhead due to trial and error. In addition, Cohort Data’s Automation Framework Design Services uses a systematic, multi-stage approach to creating a framework for your organization. It eliminates inefficient, ad-hoc scripting and replaces it with a well-defined, reusable automation process.
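
As a concrete illustration of reusable, well-structured automation, here is a minimal page-object sketch in Python with Selenium; the URL, element IDs, and credentials are assumptions for the example, not a real system:

    from selenium import webdriver
    from selenium.webdriver.common.by import By

    class LoginPage:
        """Encapsulates one screen so tests stay readable and reusable."""
        def __init__(self, driver):
            self.driver = driver

        def open(self):
            self.driver.get("https://example.com/login")  # assumed URL

        def log_in(self, user, password):
            self.driver.find_element(By.ID, "username").send_keys(user)
            self.driver.find_element(By.ID, "password").send_keys(password)
            self.driver.find_element(By.ID, "submit").click()

    def test_valid_login():
        driver = webdriver.Chrome()
        try:
            page = LoginPage(driver)
            page.open()
            page.log_in("qa_user", "correct-horse")
            assert "Dashboard" in driver.title  # assumed post-login title
        finally:
            driver.quit()

Because the page details live in one class, a change to the login screen means updating one file rather than every test that logs in, which is what keeps automated suite maintenance from becoming painful.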

Beta Testing

As the last stage of testing, beta testing can involve in-house testing or sending the product outside the company for more real-world exposure. Sometimes these tests are done by a limited public audience, as they often are for video games. The primary goal of a beta test is to have people outside of the development team use and evaluate the software application to catch any residual defects or comment on usability.
More formal beta testing, such as ones done with Cohort Data, involves a more structured approach. After a discovery phase, a thorough beta test plan would be created that covers the objectives, schedule, and strategy for feedback.
Professional testers offer more relevant feedback, which is why using a service like Cohort Data for beta testing will give you better results. Testers who don’t know what they’re doing can cause more harm than good. Feedback obtained by professionals is often triaged by making sure it’s clear, meaningful, and actionable. Efforts are made to ensure complete test coverage and that goals are met.
If done correctly, a beta test can give your company a large amount of valuable end-user and product information. Cohort Data’s Testing Execution On-Demand Service has a rotating team of quality assurance professionals who are ready to work on your project immediately.




Black Box Testing

Black box testing essentially means that the testers don’t have access to the source code of the application. They are testing what the customer wants the application to do, not what the developer programmed it to do. Testers performing black box testing know only what information goes into the application and what it is expected to return; they do not know how that information is processed. The internal functioning is a “black box” to the tester. This kind of testing focuses on the software’s functionality instead of its internal processes. It is used to analyze requirements, high-level designs, and specifications, and is sometimes referred to as functional testing or behavioral testing.
Strategies for black box testing include validating each requirement, performing boundary tests, equivalence partitioning, and using both valid and invalid input to return correct system responses. All possible combinations of user actions are generally tested in order to accurately simulate the end-user experience. The types of defects generally found in black box testing are incorrect/missing functions, interface errors, data errors, or access errors.
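
A brief sketch of equivalence partitioning and boundary testing in pytest, using an invented “valid age is 18 to 65” rule as the requirement; the function under test is a stand-in:

    import pytest

    def accepts_age(age):
        """Stand-in for the system under test: valid ages are 18-65 inclusive."""
        return 18 <= age <= 65

    # One representative per equivalence class, plus the boundary values themselves.
    @pytest.mark.parametrize("age,expected", [
        (17, False),   # just below the lower boundary
        (18, True),    # lower boundary
        (40, True),    # middle of the valid partition
        (65, True),    # upper boundary
        (66, False),   # just above the upper boundary
        (-1, False),   # invalid partition
    ])
    def test_age_partitions(age, expected):
        assert accepts_age(age) == expected
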
Many types of testing use black-box testing methods, such as functional testing, requirements testing, security testing, and performance testing. A big bonus with black box testing is that the test cases can be written as soon as requirements are finished, giving testers a jump start on the testing process.

Database Testing

Database testing validates the layer where the business’s most valuable asset lives: the data. It covers the schema, tables, triggers, and stored procedures, and verifies data integrity and consistency as the application creates, reads, updates, and deletes records. Typical checks confirm that values entered through the user interface are stored accurately, that constraints and transactions behave correctly under concurrent use, and that migrations or patches don’t corrupt existing data. Because data defects can accumulate silently, database testing is a standard part of a thorough test coverage plan.


End To End Test

When performing an end-to-end test, the entire flow of the application from the starting point to the end point is validated. The application environment must be complete and each step in the process is checked, such as database communication and network communications. This is done during the initial testing when the application is built, and is also performed as a regression test to verify and validate any changes or patches.
The purpose of these kinds of tests is to identify system dependencies and ensure proper data flow between all components. Real-world scenarios are used for a more complete test of all integrated components. Examples of end-to-end testing could include alpha and beta testing, or system testing.
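
To make the idea concrete, an end-to-end flow can be scripted at the API level so each step in the chain is exercised in order; the staging URL, endpoints, and payloads below are assumptions for illustration:

    import requests

    BASE = "https://staging.example.com/api"  # assumed staging endpoint

    def test_order_flow_end_to_end():
        session = requests.Session()
        # Step 1: log in and keep the session cookie.
        resp = session.post(f"{BASE}/login", json={"user": "qa", "password": "secret"})
        assert resp.status_code == 200
        # Step 2: place an order (exercises app server, database, and any queues).
        resp = session.post(f"{BASE}/orders", json={"sku": "ABC-123", "qty": 1})
        assert resp.status_code == 201
        order_id = resp.json()["id"]
        # Step 3: confirm the order is retrievable at the end of the flow.
        resp = session.get(f"{BASE}/orders/{order_id}")
        assert resp.json()["status"] == "confirmed"
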
Cohort Data’s Testing Execution On-Demand Service has a rotating team of quality assurance professionals who are ready to help you with your end-to-end testing, with either automated or manual tests. For the fully manual project, our Manual Test Design and Execution Service is available to work with your developers, business analysts, and managers to go over requirements, build a comprehensive end-to-end test plan, design thorough test cases, rapidly execute the tests, log defects, and retest until the project is satisfactorily completed.

Exploratory Testing

Sometimes referred to as ad hoc testing, exploratory testing is done by performing test design and execution at the same time. This kind of testing is the opposite of scripted testing since items are not defined in advance or carried out according to any plan. Many companies use outsourced teams to perform this kind of testing once their scripted tests have been completed.
The value of exploratory testing is manifold. Testers are often given suggestions on what areas to focus on, and then allow ideas to come to them in a fluid, intellectual manner. Often, what they do next is determined by the results of the last test they ran. Exploratory testing is almost always used when trying to identify or isolate a specific issue. For instance, if a tester obtained an error but isn’t quite sure what caused it, they will be forced to go ‘off script’ to run systematic, logical tests and try to force the error again in a repeatable fashion. Exploratory testing is extremely valuable, and nearly all skilled testers employ the method to some degree.
The best example of exploratory testing is putting together a jigsaw puzzle. Trying to design a full and detailed plan for putting together the puzzle would be wasted energy. Each piece is a new test case, and each step you take next depends on the results of the last one. A concrete plan isn’t formed, merely a general idea, such as finding all of the corner and edge pieces first.
While exploratory testing is sometimes done by laypersons, more value is obtained by having experienced testers perform it. Done correctly by trained individuals, exploratory testing can prove more useful than scripted testing. Cohort Data’s Test Execution On-demand service is a perfect way to get professional testers to perform exploratory tests on your application. Either in tandem with scripted testing from your team or on their own, Cohort Data’s testers can help you achieve a higher quality product by assisting you in using multiple testing methodologies to find more defects.


Failover Test

Many high-end systems that see heavy use and carry a lot of data have a failover system in place. This can range from allocating additional resources, such as bringing more web servers online, to moving the entire operation to a back-up. Failover testing is used to verify the system’s ability to continue day-to-day operations while processing is transferred to a back-up. It can determine if a system is able to allocate additional resources when needed, or even if it’s able to recognize when the need has arisen.
One example of failover testing: you have four web servers under a heavy load and one of them crashes. Does the load balancer react in the correct manner? Can the other three web servers handle the load? Does the crashed web server restart itself or does it require manual intervention? Is there an automated notification system, and did it notify the correct people at the right time?
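
A simplified sketch of that scenario, assuming a hypothetical load-balancer health endpoint and a hypothetical administrative hook for stopping one node (in a real lab this step might be a service stop or a pulled cable):

    import time
    import requests

    LB_URL = "https://app.example.com/health"      # assumed load-balancer endpoint
    ADMIN_STOP = "http://web2.internal:9000/stop"  # assumed admin hook for one node

    def test_failover_keeps_serving():
        # Baseline: the pool answers while all four nodes are up.
        assert requests.get(LB_URL, timeout=5).status_code == 200
        # Take one web server down.
        requests.post(ADMIN_STOP, timeout=5)
        errors = 0
        for _ in range(100):  # keep requesting while the pool is degraded
            try:
                if requests.get(LB_URL, timeout=5).status_code != 200:
                    errors += 1
            except requests.RequestException:
                errors += 1
            time.sleep(0.1)
        # The remaining three nodes should absorb the load with no visible failures.
        assert errors == 0
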
By testing these things in advance, IT teams can have a benchmark for the future. They can rest easy with the knowledge that they can bring down a server for maintenance without significantly impacting production. Managers can have the confidence that in an extreme situation, the system’s redundancy is capable of handling the problem without downtime, thus keeping the company’s image intact.
Generally completed as part of a performance testing plan, failover testing is vital to verifying the readiness of a production system. Cohort Data’s Performance and Capacity Planning Services were designed to test failovers in addition to all other performance testing facets.




Functional Testing

Functional testing specifically focuses on the expected functionality of the application under test. The primary question answered is: Does the application do what it is supposed to do? Testers use the functional specifications and design documents to create a test plan. Input data is created and the system’s output is analyzed based on expectations. Each action that the system takes is validated by functional test scenarios.
Defects found during functional testing are usually related to user interface defects or communication between various components. Functional testing ensures that your application works in exactly the way it was designed in all possible scenarios. Examples of functional testing techniques include white box testing, black box testing, unit testing, and user acceptance testing.
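
As a small illustration, functional tests are often table-driven, with each row taken straight from the specification; the free-shipping rule below is an invented example standing in for a real spec:

    import pytest

    def shipping_cost(subtotal):
        """Stand-in for the function under test; the $50 free-shipping rule is assumed."""
        return 0.0 if subtotal >= 50.0 else 5.99

    # Each row corresponds to one line of the (hypothetical) functional specification.
    @pytest.mark.parametrize("subtotal,expected", [
        (10.00, 5.99),
        (49.99, 5.99),   # just below the free-shipping threshold
        (50.00, 0.00),   # exactly at the threshold
        (120.0, 0.00),
    ])
    def test_shipping_matches_spec(subtotal, expected):
        assert shipping_cost(subtotal) == expected
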
Cohort Data’s Testing Execution On-Demand Service has a rotating team of quality assurance professionals who are ready to help you with your functional testing, using either automated or manual tests. For the fully manual project, our Manual Test Design and Execution Service is available to work with your team to go over requirements, build a comprehensive functional test plan, design thorough test cases, execute the tests and log defects.

Incremental Integration Testing

Integration testing ensures that the individually unit-tested components work well together. Incremental integration testing, however, is a method often used in Agile projects where a module is tested, then integrated with another module. That integration is tested, and then another module or component is added. Instead of integrating everything at once and testing, the integration is done incrementally as additional pieces are added to the main one.
The objective with this kind of testing is to get feedback to the developers earlier and to help isolate issues. If Modules A and B worked well together but something fails when Module C is added, then that helps to indicate where the problem may be. The underlying issues can be found sooner and fixed without impacting other modules. When the defects are found early in smaller assemblies, it’s much more efficient and less expensive to fix. Both developers and testers can and do perform incremental integration testing.
As with integration testing, there are a couple of different approaches to incremental integration testing. Top-down testing starts by testing the top layers and integrating lower layers as the tester works their way down. Bottom-up testing starts by validating the lowest levels of the software, such as the drivers, then working upwards to integrate higher level components.
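
A compact sketch of the top-down approach: the order module is real, the payment module starts as a stub, and the next increment would swap the real payment gateway in and re-run the same scenario (all names here are invented for illustration):

    class PaymentStub:
        """Stands in for the not-yet-integrated payment module (top-down style)."""
        def charge(self, amount):
            return {"status": "ok", "amount": amount}

    class OrderService:
        """The module under test in this increment."""
        def __init__(self, payment):
            self.payment = payment

        def place_order(self, amount):
            result = self.payment.charge(amount)
            return "confirmed" if result["status"] == "ok" else "failed"

    def test_order_service_with_payment_stub():
        # Increment 1: OrderService is real, payment is stubbed.
        # Increment 2 would replace PaymentStub with the real gateway and re-run
        # this same scenario, so any new failure points at the integration itself.
        service = OrderService(PaymentStub())
        assert service.place_order(25.0) == "confirmed"
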
The downside to this kind of testing is that it can be time-consuming and repetitive. However, if components are required to work independently of one another as well as integrated, then this testing is a step that cannot be skipped.
It’s imperative that the people performing incremental integration testing have a solid background in the testing techniques involved so that an efficient integration testing strategy is developed and utilized. Cohort Data’s Application Architecture Inspection Services as well as our Testing Execution On-Demand Service can be used to provide you with that expertise. We can assist your team with oversight to fix flaws in your module communication with the goal to catch the errors early, reduce your costs, and lower your risks while allowing your team the freedom to work on other aspects of the project.

Software Integration Testing

While unit testing verifies each module or unit individually, integration testing validates that all of the units work well together. It tests the communication paths between modules, either in smaller aggregates or the entire system as a whole. The purpose of this kind of testing is to verify the requirements of major items, or groups of units. Here’s an easy-to-understand example of integration testing: eBay and PayPal are two independent applications, but when you make a purchase on eBay, you’re offered the option of paying with PayPal. Testing this communication between the two applications is an example of integration testing. A further example is the confirmation email you receive from one or both systems after your purchase and payment is complete.
There are different types of integration testing: big bang, where all of the modules are tested as a whole; top-down or bottom-up, where top- or low-level modules are integrated and tested one by one; or a combination of the two, called sandwich testing. Some of the integrated pieces may be in-house developed units, or they could be third party items such as libraries, web services, or a DBMS.
Defects found in this kind of testing are generally related to inter-process communication or parameter and data inputs. Two units may have passed unit testing and work well as individuals, but fail to communicate vital information with one another. By testing the units in aggregate, it’s easier to identify the source of the issues.
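
A minimal sketch of two real units tested together, using an in-memory SQLite database so the communication path between them (not either unit alone) is what gets exercised; the classes are invented stand-ins:

    import sqlite3

    class UserRepository:
        """Unit A: persistence."""
        def __init__(self, conn):
            self.conn = conn
            conn.execute("CREATE TABLE IF NOT EXISTS users (name TEXT UNIQUE)")

        def add(self, name):
            self.conn.execute("INSERT INTO users (name) VALUES (?)", (name,))

        def count(self):
            return self.conn.execute("SELECT COUNT(*) FROM users").fetchone()[0]

    class SignupService:
        """Unit B: business logic that depends on Unit A."""
        def __init__(self, repo):
            self.repo = repo

        def register(self, name):
            if not name:
                raise ValueError("name required")
            self.repo.add(name)

    def test_signup_writes_through_to_the_database():
        # Both units are real; the test verifies the path between them.
        repo = UserRepository(sqlite3.connect(":memory:"))
        SignupService(repo).register("alice")
        assert repo.count() == 1
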
Cohort Data’s Application Architecture Inspection Services as well as our Testing Execution On-Demand Service can be used to perform Integration testing either after or independent of unit testing. We can provide your team with the necessary expertise and oversight to fix flaws in your module communication, with the goal to catch the errors early, reduce your costs, and lower your risks.


Negative Testing

Negative testing verifies that the system does not do what it’s not supposed to do. While positive testing confirms that valid input is accepted and handled per the requirements, negative testing deliberately feeds the application invalid input and unexpected user actions in an attempt to force error conditions. For example, if an application is supposed to accept x, y, and z values, a negative test would attempt to input a, b, and c values and verify that the system rejects them gracefully, with a helpful error message rather than a crash or silent data corruption. Generally done in tandem with positive testing, negative testing is what proves an application is robust in the hands of real users, who will inevitably do the unexpected.


Positive Testing

Positive testing is employed to make sure that the application or system does what it is supposed to do: that helpful error messages appear when they should, that functions work as expected, and that correct input is accepted by the system. By contrast, negative testing verifies that the system does not do what it’s not supposed to do. For example, if an application is supposed to accept x, y, and z values, a negative test would attempt to input a, b, and c values in an attempt to force an error condition, while a positive test would verify that the system does in fact accept the x, y, and z values. In the most basic sense, positive testing utilizes all possible valid inputs to verify they are accepted per the requirements, and it is generally done in tandem with negative testing.
Boundary testing is the most common form of positive testing. The system should handle the boundary conditions, but they are one of the more common areas for errors: most often, it’s the data right on the edge that’s mistaken for invalid. Positive testing must be performed to validate that all possible forms of valid data are accepted by the system. Users should be able to enter anything they want within the valid input range.
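
A short sketch of paired positive and negative tests around an invented “quantities 1-99 are valid” rule; the validator is a stand-in for the system under test:

    import pytest

    def validate_quantity(qty):
        """Stand-in: the (assumed) spec allows order quantities of 1-99."""
        if not isinstance(qty, int) or not 1 <= qty <= 99:
            raise ValueError("quantity must be between 1 and 99")
        return qty

    # Positive tests: every valid input, including the boundaries, must be accepted.
    @pytest.mark.parametrize("qty", [1, 50, 99])
    def test_valid_quantities_accepted(qty):
        assert validate_quantity(qty) == qty

    # Negative tests: invalid input must be rejected with a clear error.
    @pytest.mark.parametrize("qty", [0, 100, -5, "ten"])
    def test_invalid_quantities_rejected(qty):
        with pytest.raises(ValueError):
            validate_quantity(qty)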

Regression Testing

After a patch or bug fix during a normal software development cycle, a system should be regression tested, using manual or automated regression testing, to ensure that the applied changes didn’t adversely affect other parts of the system. These issues could be functional, non-functional, or even aesthetic. Far too often, by fixing one issue the development team introduces defects somewhere else in the software. These unintentional side-effects can damage a company’s name by causing users to lose trust in the application. The purpose of incorporating a regression test process into the quality assurance process is to make sure that any modification has had only positive results and that the application still meets its requirements. Even when tests for new functionality, or for changes to existing functionality, pass, regression testing could still fail due to problems elsewhere in the application.
Software regression tests are ideal candidates for automation since they must be performed repeatedly, each and every time changes are made to the system. Done manually, this can be tedious and time consuming. Even when server patches or database upgrades are performed, full regression testing must be done to ensure the integrity of the complete system so that customers are not adversely affected. Generally, regression test suites cover the full functionality, but in a limited capacity.
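
As an illustration, a regression test is often pinned to a previously fixed defect so the old bug can never silently return; the defect number, checkout module, and apply_coupon function below are all hypothetical:

    import pytest

    # Hypothetical scenario: defect #4821 was a crash when a coupon code had
    # leading/trailing spaces. The checkout module is a stand-in, not a real library.
    from checkout import apply_coupon

    @pytest.mark.regression
    def test_issue_4821_coupon_with_spaces_is_handled():
        # Kept in the suite permanently so the fixed defect cannot quietly reappear.
        assert apply_coupon(total=100.0, code=" SAVE10 ") == 90.0

With the regression marker registered in pytest.ini, a command like pytest -m regression runs just this suite on every build, which is exactly the kind of repeatable execution that makes regression testing a natural fit for automation.
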
With Cohort Data’s Regression Testing Execution Factory Services, no job is too big or too small. No matter what changes you have made or are planning to make, our QA Experts can take the burden off your team by performing regular and consistent full regression tests. We work on your schedule, alongside your QA team, so if you have monthly releases then we schedule monthly regression tests – it’s all part of our software testing services. And with our Test Automation Framework Design Services, we can help you build a solid foundation and regression testing approach for your test suite by showing you the best ways to standardize your automation process.




Sanity Testing

Sanity tests and smoke tests are terms that are often used interchangeably. At the core, sanity tests make sure that a system is ready to test. That is the simplest definition, but it is a little more involved than that.
Sanity tests, or sanity checks, can involve two different ways of verifying the system is ready to be tested. The first part is to ensure that the system is stable and that all major components are functioning. These tests are just a small subset of the regression tests and generally touch only the major existing functionality. For instance, are all of the pages there? Is the system communicating with the web servers, media servers, mail servers, and databases? These are very quick and shallow tests to make sure that further testing is even possible. If the database communication were broken, then any other testing would be a waste of time and effort. These tests are easily automated.
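
A minimal sketch of such automated checks, assuming a hypothetical staging URL, list of key pages, and local database file:

    import sqlite3  # stand-in; a real system might check Postgres, mail, and media servers
    import requests

    BASE = "https://staging.example.com"        # assumed system under test
    PAGES = ["/", "/login", "/search", "/help"]  # assumed key pages

    def test_key_pages_respond():
        # Shallow check: every major page must at least return 200.
        for path in PAGES:
            assert requests.get(BASE + path, timeout=10).status_code == 200

    def test_database_is_reachable():
        # If this fails, deeper testing would be a waste of time, so stop here.
        conn = sqlite3.connect("app.db")  # assumed database file
        assert conn.execute("SELECT 1").fetchone() == (1,)
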
The second part of a sanity test verifies that any new functionality or bug fixes were actually applied. If your goal is to test new functionality and it wasn’t put into the build, then spending hours creating the data to test it would be a waste of time. For each defect fixed, change applied, or functionality added, a sanity test must be done. These tests are manual and ad hoc (or unscripted).
Once the stability of the application has been verified by the sanity tests, the heavy lifting can begin. This kind of testing saves a lot of time and frustration by making sure that the application is ready for the more specific tests and that it’s reasonable, or sane, to continue. If the sanity test fails, then the build is rejected.
Cohort Data’s Testing Execution On-Demand and our Manual Test Design and Execution Service is available to work with your developers, business analysts, and managers to design or perform your sanity tests. We can help you save time by identifying show-stopping issues before more thorough testing is even started.


Unit Testing

Generally speaking, a unit is the smallest testable part of an application. Therefore, unit testing involves testing each of these units individually. Commonly automated, unit testing’s objective is to validate the correct functionality of each unit under isolation. Sometimes, these types of tests can reveal dependencies that are unnecessary and can be eliminated for more efficient code.
This stage of testing lends itself well to quickly finding the root cause of a defect. For example, if you test an application when 5 units are working together and you find a defect, it will take you some time to figure out which unit is the source of the defect. However, if you test each unit while it’s isolated from the rest of the code, the defects found will only be in that unit and are much easier to identify and fix. Unit tests can also help to ensure the performance of code so that slowness doesn’t inadvertently creep in over time.
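
A minimal sketch of testing one unit in isolation: the mail server dependency is replaced with a mock, so only this unit’s own logic is exercised (the function and field names are invented for the example):

    from unittest.mock import Mock

    def send_welcome_email(user, mailer):
        """Unit under test: formats and sends one message."""
        mailer.send(to=user["email"], subject=f"Welcome, {user['name']}!")

    def test_send_welcome_email_in_isolation():
        # The real mail server is mocked out, so a failure here can only
        # come from this unit, not from its dependencies.
        mailer = Mock()
        send_welcome_email({"name": "Ada", "email": "ada@example.com"}, mailer)
        mailer.send.assert_called_once_with(to="ada@example.com", subject="Welcome, Ada!")
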
Defects found in this early stage of development cost nearly nothing to fix, which makes unit testing a great boon for development teams. Additionally, unit tests can be reused over and over, and as such they facilitate inexpensive, continuous software testing. Generally, these tests are run by the developers themselves, but the task can be outsourced. Cohort Data can assist development teams during their validation/verification phase. Our Architecture Inspection Service coincides with unit testing coverage.

System Testing

Not unlike end-to-end testing, System Testing verifies the behavior of the entire system against business, system, and functional requirements. It’s generally done after the unit testing and integration testing has been completed. A type of black box testing, system testing utilizes use cases, requirements, specifications, business rules, and other high level documentation and descriptions.
The goal of system testing is to verify that the system meets its intended purpose from the user’s point of view. In-depth knowledge of design or code shouldn’t be necessary, since testers are concentrating on finding issues with the application’s behavior against the expectations of the end user. For best results, system testing should be performed in an environment as close as possible to the projected production environment.
Since the entire system functionality is being tested for the first time, system testing can be the most time consuming testing process in the SDLC. However, it is extremely important and is often the last step before a production release, or before acceptance testing.
Cohort Data’s Testing Execution On-Demand Service and/or Manual Test Design and Execution Service has a rotating team of quality assurance professionals who are ready to help complete your system testing. We will work with your developers, business analysts, and managers to review requirements, build a comprehensive system test plan, design and execute thorough test cases, and log defects. By utilizing our services, your team can focus on fixing defects and preparing for other testing cycles or the production release.

White-box Testing

White box testing assumes that the testers involved can look at the application code. In this type of testing, testers can look for potential failure scenarios in the code itself and ensure each class is behaving the way it’s supposed to. All internal components are exercised, and tests are conducted to ensure that operations perform according to specifications. Because it exercises the code directly, this type of testing often overlaps with unit testing.
Test plans or test cases for white box testing can only be done after a stable version of the application, or application block is available. Testers will perform code reviews in order to profile code coverage, resource utilization or leaks. Internal subroutines are identified and tested for integrity and consistency.
Strategies for white box testing include segment coverage to ensure each statement is run at least once, condition coverage to test each multiple-path combination, data flow testing to track variables throughout calculations, and much more. White box testing ensures that all independent paths are exercised, all logical decisions are verified, and all loops are executed at their boundaries. The most common kinds of defects found in white box testing are logical errors, design errors, and incorrect syntax.
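
A small sketch of branch-oriented white box testing: the function below has multiple paths, and each test case is chosen, with the code in view, to force a specific one. Pairing this with a coverage tool (for example, coverage run --branch -m pytest followed by coverage report) confirms no path was missed:

    import pytest

    def classify(n):
        # Two decisions -> multiple paths; white box tests aim to execute them all.
        if n < 0:
            return "negative"
        if n % 2 == 0:
            return "even"
        return "odd"

    @pytest.mark.parametrize("n,expected", [
        (-3, "negative"),  # first branch taken
        (4, "even"),       # first branch skipped, second taken
        (7, "odd"),        # both branches skipped
    ])
    def test_every_branch_is_exercised(n, expected):
        assert classify(n) == expected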

Comparison Test

Comparison testing helps to assess how your product performs against your competition. It compares your product’s strengths and weaknesses with other similar software out there. It can be a very good indicator of what sets you apart from your competition and how useful your product will be to its users. In its simplest definition, comparison testing lets you know just how marketable your product actually is. Is your application innovative? Does it have an advantage over its competitors? Comparison testing can tell you that.
Using comparison testing can tell you the weaknesses and strengths of your application, as well as highlight the aspects of it that must be double-checked before its release. It will facilitate your understanding of the design of your competition’s products and help you make better decisions regarding pricing.
Comparison testing can include many areas of evaluation including: functionality, quality, performance, features, usability, help manuals, and security. Some or all of these are reviewed during comparison testing, depending on the criteria most important to the stakeholders. Its main purpose is to make sure your product can compete in its market. Comparison testing can give your business a significant advantage if done correctly.




Compatibility Testing

As a vital part of quality assurance, compatibility testing of devices and browsers ensures that an application performs as expected on multiple operating systems, in multiple browsers and versions, on varied mobile devices, over varying connection speeds, and even when used in conjunction with third party applications. The world of one dominating web browser and operating system is behind us. These days we have a handful of heavily used browsers, with new ones popping up frequently. The mobile market is taking the world by storm, and businesses without mobile compatibility could lose precious sales and marketing advantages. Comprehensive compatibility testing on web and mobile devices is necessary for ensuring a good user experience.

This type of quality assurance testing uses tools and methodologies to help identify problems and bottlenecks in your application, make it more efficient, and give your customers the best possible experience on your website or mobile application regardless of their personal setup. Add-ons, various mobile platforms, connection speeds: all of these and more are evaluated in compatibility testing. Most often done in a testing lab, compatibility testing can verify data integrity and visual appeal, and identify issues with third party applications such as office or help desk software. It can make sure that your application is backwards compatible and that your recent patch didn’t eliminate anyone as a potential user.

While various compatibility tests can be run, it’s best to focus on the areas specific to your needs. If your application is for mobile users, then you would focus on mobile operating systems and models. If it’s for computer use, then you might want to do operating system compatibility tests. The most common compatibility issues these days relate to the browsers used to view web pages and web applications. Regardless of the focus, browser compatibility testing ensures that all of your end users will have the same positive experience.

Cohort Data’s Lab Compatibility Services were designed for the sole purpose of testing all possible browser types and versions, OS and OS versions, add-ons, mobile platforms, and connection speed combinations. It would be difficult, expensive, and exorbitantly time consuming for a company to set up and perform these kinds of quality assurance tests on their own, and why should they? Cohort Data has the software and expertise in place, ready to start testing immediately with a team of key individuals dedicated to your project. Our lab is equipped to provide comprehensive compatibility testing services, ready to test all current mobile environments including Blackberry, Google Android, Apple iOS, Symbian, Windows, Firefox, Ubuntu, Tizen, Bada, and Brew. With our mobile testing, you can rest assured that your mobile web page will look the same to all users, and your mobile app will perform the same on all supported devices. As the fastest growing market, this is not an area to neglect. While other companies may take shortcuts, here at Cohort Data we don’t use simulators. We use the real devices, hands on, to ensure certified compatibility.
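
As a small illustration of the idea (a real lab’s matrix spans versions, devices, and operating systems far beyond this), a browser compatibility check can be parametrized so one expectation runs in every browser; the site URL and title are assumptions, and each browser’s driver must be installed locally:

    import pytest
    from selenium import webdriver

    # Assumed matrix; a full lab would add versions, devices, and OS combinations.
    BROWSERS = {
        "chrome": webdriver.Chrome,
        "firefox": webdriver.Firefox,
        "edge": webdriver.Edge,
    }

    @pytest.mark.parametrize("name", BROWSERS)
    def test_home_page_renders_everywhere(name):
        driver = BROWSERS[name]()
        try:
            driver.get("https://example.com")  # assumed site under test
            assert "Example" in driver.title   # the same expectation in every browser
        finally:
            driver.quit()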

Configuration Testing

Configuration testing involves validating an application’s behavior in various environments. This type of testing determines the immediate or long term effects of configuration changes on the system’s behavior and performance. If it’s possible that your software could be installed on many different configurations, then this kind of testing is absolutely necessary. Doing so will validate compatible configurations and alert you to incompatible ones.
Generally, the number of possible configurations is too large to test. This fact alone means that it’s vital that the planning phase of a configuration testing effort identify only those configurations that will be supported. Priorities must be established based on the expected user base and risks associated with hidden bugs in particular configurations.
Most often done in labs with multiple machines and varied hardware, configuration testing can be a time consuming process that requires a great deal of hardware and software knowledge. Cohort Data’s QA Environment Management Service can help you build the right environment for these tests. Alternatively, using our own comprehensive test lab through our Test Design and Test Execution On-demand services keeps your costs down while still providing the specific configuration testing expertise you require. Setting up your own lab can be expensive, but we already have one up and running, ready for your configuration testing needs.


Globalization Testing

Globalization testing ensures that a product operates the same in multiple international languages as it does in its native language. The most common globalization issues generally relate to non-Latin writing systems, such as Japanese or Cyrillic characters. This kind of testing makes sure that your global product can be translated effectively into any language you want, thereby cementing your standing in the global market. When you’re translating your application into another language, or multiple languages, every word, image, and button has to be checked for accuracy. Each entry field must accept multiple types of data input that could differ from a single language or locale. Installation paths have to be checked using non-ASCII characters, and even keyboard entries need to be verified using language-specific keyboards.
Defects found in this kind of testing are usually localized to the presentation layer of an application, but coding issues are not uncommon. Mistranslations, cosmetic issues, and hyperlink navigation errors are all frequent globalization hurdles. All dialogues, dropdowns, list controls, menus, and data transfer processes are thoroughly exercised in each language translation desired. In addition, any third party software utilized in the application has to be verified in each language. Sometimes overlooked but just as important are currency displays, postal codes, weights and measurements, and telephone number formats that are different depending on locale.
Testing the entire application in multiple languages can be time consuming and expensive, as well as requiring some specific skillsets and knowledge of common globalization pitfalls. Cohort Data’s Testing Execution On-Demand Service and our Manual Test Design and Execution Service can help arm your company with the tools necessary to navigate the technological challenges associated with getting your product multilingual. We also have a pool of testers from across the world, speaking multiple languages and able to verify translations as well as perform globalization tests.


Endurance Testing

Also known as soak testing, endurance testing helps to determine if a system can sustain a continuous high load. Memory utilization and performance degradation are closely monitored to detect potential leaks and ensure response times stay suitable. Generally, this kind of test is done by applying a heavy load to a system for an extended period of time. While some systems perform well under an hour of heavy load, those same systems could experience degradation after three hours of sustained use.

Using usage statistics gathered before the tests, an everyday load is determined for the site. Concurrent users and data transfer are incrementally increased to that point and held steady for a predetermined period of time. This test can help identify memory leaks or other issues that a dramatic load test may miss, and since this is the normal, everyday operating capacity of your site, it’s very important to feel confident it can maintain that consistency without error.

Endurance testing should be performed in a systematic, planned way, not ad hoc. For that reason, it’s often left to performance testing professionals who know how to create and adhere to a comprehensive test plan. Cohort Data’s Performance and Capacity Planning and Performance Engineering and Optimization Services are here to help you professionally analyze, find, and resolve performance bottlenecks that could be holding your company back. By using end-to-end performance measurement, our highly skilled performance engineers can gauge the capability of your current system, pinpoint and analyze resource consumption, and assess the system’s performance against non-functional requirements.
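
A simplified soak-test sketch along these lines, holding a steady everyday pace for hours and counting slow responses; the endpoint, duration, and acceptable response time are assumptions:

    import time
    import requests

    URL = "https://staging.example.com/api/search?q=test"  # assumed endpoint
    SOAK_HOURS = 3
    SLOW = 2.0  # seconds; assumed acceptable response time

    def soak():
        """Hold an everyday load and watch for gradual degradation."""
        deadline = time.time() + SOAK_HOURS * 3600
        slow_responses = 0
        while time.time() < deadline:
            started = time.time()
            requests.get(URL, timeout=30)
            if time.time() - started > SLOW:
                slow_responses += 1  # degradation often shows up only after hours
            time.sleep(1)  # steady, realistic pacing rather than a burst
        return slow_responses

    if __name__ == "__main__":
        print("slow responses during soak:", soak())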

Installation Testing

Installation testing is specific to distributed software that end-users must install on their computer or mobile device. These could be browser extensions, apps, server software, database applications, or independent software applications. It focuses on what customers need to do in order to successfully install, or uninstall, your application. The process involves full installations, partial installations, installation configuration options, patching processes, and uninstall processes.
If you think about the last application you installed, it likely let you choose an installation directory and offered options you could select or deselect, such as whether you wanted a shortcut placed on the desktop. In the background, registry changes were also made, and an uninstallation procedure was put in place. Each of these things is also affected by operating environments and other software already installed on the system. All of these steps are tested during installation testing and verified in as many end-user configurations as possible.
The goal of installation testing is to make sure the software is installed and working as expected after installation. Sometimes called implementation testing, it’s one of the most important tests of distributed software.
Cohort Data’s Testing Execution On-Demand Services and Lab Compatibility Services have a rotating team of quality assurance professionals who are ready to help you with your installation testing. We have a fully equipped lab that comprises all possible browser types and versions, OS and OS versions, add-ons, mobile platforms, and connection speed combinations. It would be difficult, expensive, and exorbitantly time consuming for a company to set up and perform these kinds of tests on their own, and why should they? Cohort Data has the software and expertise in place, ready to start testing immediately with a team of key individuals dedicated to your project.

Load Testing

Load testing is one of the multiple facets of performance testing as a whole to fully exercise the software and hardware and identify any weaknesses, as well as benchmark where and when the issues arise. During load testing, a normal load, heavy load, and a projected growth load is determined by using usage statistics. Starting with the lighter load, the number of concurrent users is incrementally increased until the system starts to respond more slowly. The increase is continued until the system actually stops responding at all, thereby discovering the failure threshold. This information is used to determine the load boundaries and locate the bottlenecks.
Load testing tools such as LoadRunner, Cloudtest, and Rational are commonly used and data is gathered during the test. This data is vital to help identify performance bottlenecks, point to infrastructure weaknesses, and help you to make plans for future scalability due to growth. Load testing can tell you exactly how many users or transactions your site can handle before response times increase.
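
For illustration, here is what a minimal load scenario might look like in Locust, an open source tool in the same space as the products named above; the host, paths, and task mix are assumptions:

    from locust import HttpUser, task, between

    class Shopper(HttpUser):
        host = "https://staging.example.com"  # assumed system under test
        wait_time = between(1, 3)             # think time between user actions

        @task(3)
        def browse(self):
            self.client.get("/products")      # weighted 3:1 against search

        @task(1)
        def search(self):
            self.client.get("/search?q=widget")

Run headless with, for example, locust -f loadtest.py --headless -u 500 -r 25 to ramp toward 500 concurrent users at 25 per second, then keep raising the user count between runs until response times begin to climb; that knee in the curve is the load boundary the test is looking for.
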
Generally, this test will uncover buffer overflow issues, memory leaks, or load balance problems. Performance optimization can then be completed and hardware changes or additions can be made to increase the failure thresholds as needed.
Every site has its breaking point and Cohort Data’s Performance Testing services can help you find yours so you can fix inefficiencies prior to release, add additional hardware, or be prepared for future scalability needs with our Capacity Planning Services.




Localization Testing

Localization testing is similar to globalization, but this kind of testing is adapted to individual locales. While globalization tries to ensure that a product works everywhere, localization validates that the application will operate in a specific environment. For instance, if you are interested in releasing your application in both the US and China, then you would want to do localization testing for both of those areas independently. A globalization test wouldn’t be necessary since the release is localized to just two countries. So while localization may be part of globalization testing, it can and does stand alone. Another example is postal code fields. In globalization testing you would need to verify that a postal code field allowed both numbers and letters. But in localization testing for the US, only numbers should be allowed in the postal code field.
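
A small sketch of that postal code example, parametrized per locale; the validation rules shown are simplified assumptions (real postal formats have more variants):

    import re
    import pytest

    # Assumed per-locale rules: US ZIP is 5 digits, UK postcodes are alphanumeric.
    POSTAL_RULES = {
        "en_US": re.compile(r"^\d{5}$"),
        "en_GB": re.compile(r"^[A-Z]{1,2}\d[A-Z\d]? ?\d[A-Z]{2}$"),
    }

    def is_valid_postal_code(locale, code):
        return bool(POSTAL_RULES[locale].match(code))

    @pytest.mark.parametrize("locale,code,expected", [
        ("en_US", "90210", True),
        ("en_US", "SW1A 1AA", False),  # letters must be rejected for the US locale
        ("en_GB", "SW1A 1AA", True),
        ("en_GB", "90210", False),
    ])
    def test_postal_code_rules_per_locale(locale, code, expected):
        assert is_valid_postal_code(locale, code) == expected
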
During localization testing, translations are created, applied, and verified. Preferably this is done with native speakers of the language you’re testing. The use of translators and language engineers is common for the linguistic part of localization. Correct language rules usage is ensured and the appearance and functionality of the complete product is verified.
While primarily focused on the user interface, issues with functionality are not uncommon. System variables shouldn’t be translated, and significant problems occur if they are. Spell checkers must be modified for the additional languages, and attention must be paid to even the tiniest details.


Performance Testing

Performance testing includes multiple, distinct facets to fully exercise the software and hardware and identify any weaknesses, as well as benchmark where and when the issues arise. Different tests are used to verify the performance from varied angles.
Load testing is done to find the failure threshold of a system by incrementally adding concurrent users until the system’s response slows. This information gives us the optimal load boundaries and helps to identify where the bottlenecks are. Generally, this test will uncover buffer overflow issues, memory leaks, or load balance problems.
Volume testing focuses on data volume. Similar to the concurrent users’ threshold, the amount of data processed or transferred is slowly increased. This test helps to determine the amount of data your site can handle before it starts to display errors or stop responding at all.
With stress testing, the site’s breaking point is targeted. With the information gathered from load and volume testing, the site is sent more data and users than it can handle. This type of stress is abnormal for the system, but it’s important to identify how the software responds and more importantly, how it recovers.
Reliability testing shows how well your site can maintain a normal load. An ‘everyday’ load is determined, applied, and held for a long period of time. This test can help identify memory leaks or other issues that a dramatic load test may miss.
Cohort Data’s QA Performance and Capacity Planning service, as well as our QA Performance Engineering & Optimization can help you find your site’s breaking point so you can fix inefficiencies prior to release, add additional hardware, or be prepared for future scalability needs. We will be able to tell you exactly how many users your site can handle before it starts to respond more slowly. We can point you to exactly how many transactions you can process per day, hour, or even per second before the site begins to exhibit stress. We can then work with you to fix the performance issues and make sure your site is ready for production, and ready for the growth of your company.

Mobile QA Services – Application Testing

Mobile Testing refers to the testing of mobile applications on various mobile devices. Applications are tested for functionality and usability, as well as compatibility with multiple devices and platforms. Occasionally, the testing also involves compatibility with other mobile applications. With billions of dollars of revenue at stake through mobile app purchases, mobile testing has become a significant part of software quality assurance.
Since there are thousands of mobile devices and platforms, as well as hundreds of network operators using various technologies, mobile testing has some unique challenges. To meet these challenges, a combination of testing on actual devices and emulators is often employed.
The goal of mobile testing varies from vendor to vendor. Some developers focus on specific platforms while others try to accommodate as many as they can to maximize their client base. For all, the types of mobile testing are relatively the same.

  • Functional testing that ensures the application meets requirements
  • Laboratory testing to validate the voice or data connections
  • Performance testing to check behavior under certain conditions such as low battery, bad coverage, or low available memory
  • Memory leakage testing to verify memory allocation
  • Interrupt testing to make sure the application is able to recover from interruptions such as network outages, texts, or incoming calls
  • Usability testing to verify the ease of use and learning curve for the end user
  • Installation testing to verify that the installation process succeeds without error
  • Certification testing for compliance certificates that state the application meets guidelines set by each mobile platform

Cohort Data’s Lab Compatibility Services was designed for the sole purpose of testing all of these things and more. Our lab is equipped for testing in all mobile environments without the use of simulators. We use real devices, hands-on, to ensure certified compatibility. With the software and expertise in place to start mobile testing immediately, Cohort Data, a mobile testing company, has a team of individuals ready to dedicate to your mobile project.
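
As a rough illustration only, an interrupt-style mobile check might look like the sketch below. It uses the Appium Python client’s classic desired-capabilities style (newer client versions use options objects instead), and the device name and app path are assumptions:

    from appium import webdriver  # Appium Python client

    # All values below are assumptions for illustration.
    caps = {
        "platformName": "Android",
        "deviceName": "Pixel_7",
        "app": "/builds/app-release.apk",
    }

    driver = webdriver.Remote("http://localhost:4723/wd/hub", caps)
    try:
        # Interrupt-style check: background the app for 5 seconds,
        # then verify it resumes to a foreground activity.
        driver.background_app(5)
        assert driver.current_activity
    finally:
        driver.quit()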

Recovery Testing

Recovery testing is a non-functional kind of testing that tells us how well an application can recover from crashes or hardware failures. It verifies the ability of the system to restart the operation or application after integrity is lost. The testing process involves forcing a failure of the software in multiple ways to verify that recovery is correctly achieved each time.
For example, if while transferring data the connection is interrupted and then reconnected, does the application resume the data transfer without error? If a browser has sessions and the system restarts unexpectedly, is the browser able to recover the session data?
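
A small sketch of the data-transfer example, assuming a hypothetical download URL on a server that supports HTTP range requests:

    import requests

    URL = "https://downloads.example.com/large-file.bin"  # assumed resumable asset

    def test_interrupted_transfer_can_resume():
        full = requests.get(URL, timeout=60).content
        # Simulate an interruption: pretend the first 1000 bytes already arrived,
        # then ask the server to resume from that offset.
        resumed = requests.get(URL, headers={"Range": "bytes=1000-"}, timeout=60)
        assert resumed.status_code == 206      # Partial Content = resume is supported
        assert resumed.content == full[1000:]  # no corruption across the restart
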
The goal is to prove how fast the application can recover from any kind of crash, hardware failure, or other major problem in order to ensure that normal operations can continue. Recovery testing verifies the effectiveness of the recovery operations, the backup procedures in place, and the training of the recovery personnel themselves. In the event of a disaster, recovery testing ensures that the integrity of your business can be restored without data loss, security breaches, or exceptional downtime.
Prior to testing, requirements must be documented to specify what should happen for each failure and the acceptable length of time for recovery. The time it takes to recover depends on the number of restart points, the volume of the application, and the tools available for the recovery operations.
Cohort Data’s Test Design and Test Execution Services are perfect for this kind of testing. With your requirements in hand, our seasoned staff can create a thorough recovery test plan and execute it using all possible failure and disaster scenarios.

Stress Testing

Stress testing targets a site’s breaking point. The site is given more users and data than it can handle to see how it responds. The amount of stress applied is deliberately abnormal, but it’s very important to understand how your software responds and, more importantly, how it recovers. Think of it like a cardiac stress test: the heart is benchmarked under normal conditions and then observed under extreme stress, and the data gathered can point to various possible problems. The same occurs during a software stress test.
The primary goal is to validate availability and error handling under heavy loads. While performance testing focuses on response time, stress testing pushes the software to a level where one or more processes actually fail. These failures stem from insufficient resources (such as memory or disk space), and the application is evaluated on how well it behaves in that situation. The goal is to make sure the system doesn’t completely crash, offers correct and appropriate error messages, and is able to recover from the stress in a timely fashion without significant downtime.
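As a rough illustration of the technique (not any particular load tool), the Python sketch below ramps up concurrent requests against a placeholder endpoint, with example.com standing in for the system under test, until the error rate crosses a threshold that approximates the breaking point:

    import concurrent.futures
    import urllib.error
    import urllib.request

    URL = "https://example.com/api/health"   # placeholder for the system under test

    def hit(url):
        """Return True on an HTTP 200 response, False on any failure."""
        try:
            with urllib.request.urlopen(url, timeout=5) as resp:
                return resp.status == 200
        except (urllib.error.URLError, TimeoutError):
            return False

    # Ramp up concurrency until the error rate crosses a threshold.
    for workers in (10, 50, 100, 200, 400):
        with concurrent.futures.ThreadPoolExecutor(max_workers=workers) as pool:
            results = list(pool.map(hit, [URL] * workers))
        error_rate = 1 - sum(results) / len(results)
        print(f"{workers} concurrent requests -> {error_rate:.0%} errors")
        if error_rate > 0.05:                # 5% failures marks the breaking point here
            print(f"breaking point is near {workers} concurrent users")
            break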
This type of performance testing should always be done by trained professionals who know how to understand and evaluate the results. Cohort Data’s QA Performance and Capacity Planning service, as well as our QA Performance Engineering & Optimization service, can help you find your site’s breaking point so you can fix inadequacies prior to release.

Scalability Testing

Every business hopes that its customer base will grow, but that means the software needs to grow with it. Scalability testing ensures that an application can handle the projected increases in user traffic, transaction counts and frequency, and data volume. It tests the ability of the system, network, processes, and databases to meet a growing need. If you have increased traffic, you don’t want increased wait times to go along with it. Often part of performance engineering, scalability testing lets you know at what point in your growth you will need to add hardware or potentially make software adjustments. Scalability testing can also refer to how an application scales when it is deployed on larger systems, or as more systems are added to it.
The goal of scalability testing is to identify the point during scale-up at which system performance degrades due to increases in data transfer, traffic, or workload. Databases are a good example of potential scalability issues. If your database has a caching tier, what happens if the size of that cache expands exponentially? Indexing issues often arise as well: as the database grows, so does the time it takes to perform searches. Scalability testing may point out the need to change hardware or software operations, but it may also point out the need for scheduled maintenance on certain parts of the system in order to keep it running efficiently.
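As a toy demonstration of the indexing point, this self-contained Python sketch (using the standard library’s sqlite3 module with an invented orders table, not any client system) compares a lookup that scans a large table against the same lookup after an index is added:

    import sqlite3
    import time

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE orders (id INTEGER, customer TEXT)")
    conn.executemany(
        "INSERT INTO orders VALUES (?, ?)",
        ((i, f"customer-{i % 1000}") for i in range(500_000)),
    )

    def timed_lookup():
        """Time one customer lookup against the orders table."""
        start = time.perf_counter()
        conn.execute(
            "SELECT COUNT(*) FROM orders WHERE customer = ?", ("customer-42",)
        ).fetchone()
        return time.perf_counter() - start

    before = timed_lookup()                           # full table scan
    conn.execute("CREATE INDEX idx_customer ON orders (customer)")
    after = timed_lookup()                            # index-assisted lookup
    print(f"scan: {before * 1000:.1f} ms, indexed: {after * 1000:.1f} ms")

In a real engagement, the same comparison would be made against production-scale data and your actual schema.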
A proper scalability plan and sufficient performance metrics gathered during the execution of that plan are vital for understanding what is happening to the infrastructure. These kinds of tests are best performed using tools suited for them, by people who know the finer details of this kind of testing. Cohort Data’s QA Performance and Capacity Planning service, as well as our QA Performance Engineering & Optimization service, can help you determine how well your system will scale with your imminent growth. We can pinpoint exactly how many transactions you can process per day, per hour, or even per second before the site begins to exhibit stress. We can then work with you to fix the performance issues and make sure your site is ready for production, and ready for the growth of your company.

Usability Testing

The simplest definition of usability testing is that it validates the application under test is user-friendly. Another black box testing technique, usability testing measures how comfortable users feel using the application based on the layout, navigation, flow, speed, and the validity of the content. Sometimes this kind of testing also compares the application to similar competitors, or to previous versions of the same application to validate increased functionality or the competitiveness of the product.
The goal of usability testing is to test how easy the software is to use, learn, and how convenient it is for the end user. It tries to answer questions like the following: How fast can the user accomplish their tasks? How easy is it for users to learn the basic functions? How many errors does the user encounter? How much does the user like the system?
Usability testing requires creative thinking, a solid understanding of usability issues, and sharp observation skills on the part of the testers, along with a willingness to be open to suggestions and new ideas on the part of the developers and stakeholders.
If planned and executed correctly by experienced testers, usability testing can be highly beneficial and effective in helping to fix the problems a user might face, problems that are often easily missed in other types of testing. Cohort Data’s Test Execution On-demand service is a great way to get creative, professional usability testing performed on your application and make sure that your customers will be satisfied with your product. By validating the usability of your product, your company will gain a competitive advantage in the market.

Security Testing

Security testing is a non-functional type of testing performed to check if an application or system is vulnerable to any number of potential attacks. The process is designed to determine that the system protects confidential data and still maintains its functionality. Lost information means lost business and possibly lost money. Security testing checks data encryption, firewalls, and any other possible access points used by malicious individuals.
Generally speaking, the people performing security tests try to think like a malicious user and attempt to ‘hack’ into the system using multiple methods. Common attack tests include Denial of Service (DoS), SQL injection, authentication exploits, Cross-site Scripting (XSS), privilege and function exploits, and insecure direct object references. Each of these tests can reveal a weakness in a web or mobile application that could be exploited for personal gain by dangerous individuals. The majority of web and mobile applications submitted for security verification do not pass the first time. Even a small breach could cost your company millions in lost business, loss of trust, and lawsuits.
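To make the SQL injection case concrete, here is a minimal, self-contained Python sketch (the users table and login functions are invented for illustration) of the kind of probe a security tester automates, alongside the parameterized query that defeats it:

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (name TEXT, password TEXT)")
    conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

    def login_unsafe(name, password):
        # Vulnerable: user input is concatenated straight into the SQL text.
        query = f"SELECT 1 FROM users WHERE name = '{name}' AND password = '{password}'"
        return conn.execute(query).fetchone() is not None

    def login_safe(name, password):
        # Parameterized query: input is treated as data, never as SQL.
        query = "SELECT 1 FROM users WHERE name = ? AND password = ?"
        return conn.execute(query, (name, password)).fetchone() is not None

    payload = "' OR '1'='1"                       # classic injection string
    print(login_unsafe("alice", payload))         # True  -- authentication bypassed
    print(login_safe("alice", payload))           # False -- injection defeated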
Proper security testing requires dedicated training, ongoing education, continuous practice, and top-rated tools. Since the dynamic world of software security is in constant flux, the best security testing engineers immerse themselves in the security testing community and keep up to date on the latest threats and how to avoid them. This kind of testing is highly specialized and should never be dismissed as something that any developer or tester can do without proper training.
The time to worry about security is before an attack, not after. With Cohort Data’s Security Testing Services, we can work with you to certify your site and give you and your clients the safety and security they deserve. Your customer’s sensitive information may be your prime concern, but it’s our business.

Static Testing

Many people, even some testers, don’t realize that testing can and should start before a line of code is ever written. Static testing is that process; it continues even after coding has started, but it never requires executing the code. By thoroughly reviewing requirements, design documents, design specifications, and prototypes, static testing can unearth defects early in the SDLC, where they are more cost effective to fix. Missing or incomplete requirements, poor design, and inconsistent interface specifications are among the first issues uncovered. Even code reviews are considered a form of static testing, as they can reveal inconsistencies or a lack of adherence to coding standards.
The primary objective of all static testing is to find errors as early as possible in the SDLC and thereby improve the quality of the end product at the least cost. Both formal and informal reviews of documentation, prototypes, code, or test cases are performed to check thoroughness and fitness for the end goal of the product. Studies suggest that nearly half of production defects could have been found during proper static testing cycles.

Volume Testing

Volume testing is a type of non-functional testing that refers to testing the data load capabilities of a product. For instance, if we expect certain database growth, we may want to artificially grow the database to that size and test the performance of the application when using it. System performance can degrade when large amounts of data must be searched or indexed.
Similar to the way load testing validates the concurrent-user threshold, volume testing validates the system’s performance during an increase in data processing or transfer. This kind of testing can determine the amount of data the application can handle before it starts displaying errors or stops responding altogether.
A very important part of volume testing is data generation. Data variation matters because it simulates the real-world scenarios that occur in the production environment. Often, production data is used and then additional data is randomly generated based on it. Some of the more common issues found in volume testing are insufficient disk space, buffer overflow problems, database expansion, inefficient queuing processes, timeout problems, and indexing issues.
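As a small sketch of that data-generation step (Python; the seed_rows sample is invented and stands in for sampled production records), testers commonly synthesize randomized variations around real-world patterns until the target volume is reached:

    import random
    import string

    # Invented stand-in for a handful of sampled production records.
    seed_rows = [
        {"country": "US", "plan": "pro", "name": "Ada"},
        {"country": "DE", "plan": "free", "name": "Max"},
    ]

    def generate_rows(count, seed=42):
        """Yield synthetic rows that mirror the shape of the sampled records."""
        rng = random.Random(seed)                 # seeded for reproducible test runs
        for _ in range(count):
            template = rng.choice(seed_rows)      # vary around real-world patterns
            name = "".join(rng.choices(string.ascii_lowercase, k=8)).title()
            yield {**template, "name": name}

    # Grow the dataset to the target volume before measuring performance.
    bulk = list(generate_rows(100_000))
    print(len(bulk), bulk[0])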
Cohort Data’s QA Performance and Capacity Planning service, as well as our QA Performance Engineering & Optimization service, can help you determine the volume of data your product can handle so you can be prepared. For instance, we’ll be able to point to the exact number of transactions per minute that causes your application to exhibit stress. We can then work with you to optimize your application for production release and ensure its ability to handle future growth.

Static Testing Phase

Before a single line of code is written, testing can and should begin. Static testing is the first phase in software quality assurance, yet nearly 80% of companies don’t perform this vital step. So why should you? By reviewing requirements for thoroughness as they are gathered, defects related to requirements can be eliminated before the next steps in the SDLC are taken. As designs are mapped out, static testing can identify potential usability issues or deviations from industry standards. When defects are found and fixed before they’re actually coded, they are less expensive, less time consuming, and much easier to fix. The cycle of manual testing, defect entry, defect fixes, and retesting is reduced to simply modifying documentation. Additionally, defects are much easier to identify in this phase than they are in production.
Studies have shown that nearly half of the defects found in dynamic testing could have been identified and fixed in a proper static testing phase. Projects with aggressive deadlines and tight budget constraints will notice the most benefit from this testing phase. It’s much faster and cheaper to hold a meeting and identify issues than to let those issues surface months later while executing code, or to let them slip into production, where the costs to fix them multiply. With nearly 100% coverage of project artifacts, static testing has proven to be many times more effective than dynamic testing alone.

Module & Unit Testing Phase

The module and unit testing phase of the SDLC involves testing the smallest parts of an application individually. Commonly automated, unit testing’s objective is to validate the correct functionality of each unit in isolation. When automated, unit tests can be reused over and over, facilitating inexpensive, continuous software testing that verifies functionality and eliminates unnecessary dependencies. The end result is a more efficient and effective codebase.
Testing units individually aids in finding the cause of a defect faster. If you test a group of units together, it can be difficult and time consuming to root out the source of a particular defect, but by testing them individually prior to integration, defects within the individual modules can be identified earlier and fixed more quickly. Properly engaging in this testing phase also helps to ensure the performance of the code so that slowness doesn’t inadvertently creep in over time.
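For illustration, here is a minimal unit test sketch using Python’s standard unittest module; the apply_discount function is an invented example unit, not code from any client project:

    import unittest

    def apply_discount(price, percent):
        """Invented example unit: apply a percentage discount to a price."""
        if not 0 <= percent <= 100:
            raise ValueError("percent must be between 0 and 100")
        return round(price * (1 - percent / 100), 2)

    class ApplyDiscountTest(unittest.TestCase):
        def test_typical_discount(self):
            self.assertEqual(apply_discount(200.0, 25), 150.0)

        def test_zero_discount_returns_price(self):
            self.assertEqual(apply_discount(99.99, 0), 99.99)

        def test_invalid_percent_is_rejected(self):
            with self.assertRaises(ValueError):
                apply_discount(50.0, 101)

    if __name__ == "__main__":
        unittest.main()        # the suite reruns cheaply on every build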

User Acceptance Testing Phase

User Acceptance Testing (UAT) is the final testing phase for many companies and is occasionally referred to as Beta Testing. This phase determines whether the end product is useful to the people who will ultimately be using the system: the end users. It’s very possible for an application to pass system or functional testing yet fail in UAT. If a product works correctly but isn’t actually useful to the end user, then the project will be a failure. It’s best to uncover these issues prior to an expensive production release.
User acceptance testing utilizes real-world scenarios to weed out bugs related to usability, learning curve, and convenience. It tries to answer questions like the following: How fast can the user accomplish their tasks? How easy is it for users to learn the basic functions? How many errors does the user encounter? How much does the user like the system? Is it better than the competition?

Production Verification & Acceptance Testing Phase

After all other testing phases are completed and a build is released into production, the final phase begins. Production verification testing makes sure that the live build is working as intended in its new and final environment. Occasionally a build goes wrong: pieces aren’t applied correctly or are missed altogether. Production verification seeks to identify these problems quickly, before a customer or client does. Rollback procedures are established beforehand and employed if the build is considered a failure that would take too long to fix. Generally, test accounts and data are created in the production environment and either hidden from end users or deleted after the verification is complete. This testing phase takes only a few hours and concentrates mainly on ensuring that the application is stable and in complete working order. All major components are tested, and any new functionality introduced is exercised. This phase occurs after the initial product release and after each subsequent patch or update.
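When automated, such a pass often takes the form of a short smoke test. The Python sketch below (the endpoints are placeholders, not a real deployment) walks the major components and exits non-zero on any failure so a rollback can be triggered:

    import sys
    import urllib.error
    import urllib.request

    # Placeholder endpoints for the major components of a hypothetical release.
    SMOKE_CHECKS = {
        "home page": "https://example.com/",
        "login service": "https://example.com/api/login/health",
        "search service": "https://example.com/api/search/health",
    }

    def check(url):
        """Return True if the endpoint answers with HTTP 200."""
        try:
            with urllib.request.urlopen(url, timeout=10) as resp:
                return resp.status == 200
        except urllib.error.URLError:
            return False

    failures = [name for name, url in SMOKE_CHECKS.items() if not check(url)]
    if failures:
        print("smoke test FAILED:", ", ".join(failures))
        sys.exit(1)             # a non-zero exit can trigger the rollback procedure
    print("production build verified")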

System & System Integration Testing Phase

This testing phase is the big one, and unfortunately far too often, it’s the only testing phase utilized by pressured IT teams. All of the testing prior to this phase has been to help get the best possible code base into system testing. This phase generally takes the longest, is the best planned, and requires the most resources. System testing utilizes use cases, requirements, specifications, business rules, and other high level documentation and descriptions to verify the behavior of the entire system.
Some of the types of testing done during this phase are requirements testing, functional testing, security testing, performance testing, end-to-end testing, negative testing, positive testing, boundary tests, globalization testing, database testing, and compatibility testing. Since some of the testing done in this phase is black box testing, test cases can be written as soon as requirements are finished, giving testers a jump start on the testing process.