Tried and Tested Testing Methods
You’ve spent months building a new software application and now you need to make sure that it works. Testing your application may feel like a daunting afterthought to your build, but we’ve found that with proactive planning, tiered execution, and detailed regression testing, your testing process can strengthen your build, ensure you’re delivering the strongest customer experience possible, and bring to light enhancements that could be incorporated in subsequent phases. Here’s how Kenway approaches test management to provide the most value:
Test Planning
First, take a look at your test plan holistically. When do you want to start testing? From there, work backwards to determine how far in advance you’ll need to start creating your testing materials.
Waterfall Test Planning Timeline*
Here’s a timeline for the average-sized build in a traditional Waterfall implementation:
*This timeline is for a moderately-sized release. If there are several large projects going into a single release, you’ll want to stretch out this timeline. The opposite is true for smaller efforts!
Agile Test Planning Timeline
Agile testing is iterative and should follow the sprint timeline. Test planning should go hand in hand with sprint planning sessions so that test managers can build test cases based on the scope of each sprint. Here’s a timeline for an average-sized Agile implementation:
Here are the materials you’ll need for a comprehensive test strategy:
- Test Plan: This is the hub for all of your testing information, from impacted projects to testing data, and it is designed to guide the team throughout the process. The Test Plan document should include, at a minimum, the following sections: Test Overview, Defect Management Strategy, Roles & Responsibilities, Key Dates and Deliverables, Testing Metrics, Configuration Management, Test Environment Details, and Test Tools.
- Test Cases: High-level descriptions of features that need to be tested based on the business requirements of the build.
- Test Data Specifications: Identify all of the data required to execute each test case.
- Test Scripts: Identify all of the steps needed to test one or more conditions of each test case. A standard test script template should be used (a minimal sketch follows this list), and its contents should include:
- Test conditions
- Test data requirements
- Expected results
- Actual results
- Tester information
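To make the test script structure concrete, here is a minimal sketch of how a script and its steps could be captured in code. This is purely illustrative; the class and field names are our own assumptions rather than a prescribed template, and many teams will track this in a test management tool or spreadsheet instead.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class TestScriptStep:
    """One step of a test script: a condition to verify, the data it needs,
    and what the tester expects to see versus what actually happened."""
    step_number: int
    test_condition: str                    # the condition being exercised
    test_data: dict                        # data required to run this step
    expected_result: str
    actual_result: Optional[str] = None    # filled in during execution

@dataclass
class TestScript:
    """A test script covering one or more conditions of a single test case."""
    script_id: str          # e.g. "TC-LOGIN-001" (naming convention is up to you)
    test_case: str          # the high-level test case this script supports
    tester_name: str
    tester_role: str
    steps: list = field(default_factory=list)

# Example usage: a login test case with a single step.
script = TestScript(
    script_id="TC-LOGIN-001",
    test_case="User can log in with valid credentials",
    tester_name="Jane Doe",
    tester_role="QA Analyst",
)
script.steps.append(TestScriptStep(
    step_number=1,
    test_condition="Login form accepts a valid username/password",
    test_data={"username": "demo_user", "password": "correct-horse"},
    expected_result="User is redirected to the dashboard",
))
```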
Testing Execution
At a minimum, three types of Quality Assurance testing should take place: Unit Testing, System Integration Testing (SIT), and User Acceptance Testing (UAT). Unit Testing and SIT occur in a testing environment with no external-facing interaction. Unit Testing is performed by developers on their individual components of the overall system within the development environment. For example, if a developer created a log-in screen, they would ensure that the screen renders correctly, has the appropriate entry boxes, and sends the correct commands out. They would not test whether the username and password combination was stored within the user database; that type of test would be completed in SIT. SIT tests are performed with test data in a test environment and are meant to test the general functionality of the entire application, rather than minute test cases. Once the application passes the test criteria laid out by the technology team, business users participate in UAT to confirm it meets their requirements. UAT is set up in a test environment that mimics the customer or end user experience and uses production-like data to test detailed elements of the application. Tests in UAT can access both external and internal interfaces.
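To illustrate the unit-testing scope described above, here is a minimal sketch in Python. The LoginScreen class and its authentication service are hypothetical stand-ins for the example, and the database lookup is deliberately mocked out because verifying stored credentials belongs to SIT, not the unit test.

```python
import unittest
from unittest.mock import Mock

# Hypothetical component under test: a login screen that renders its fields
# and forwards credentials to whatever authentication service it is given.
class LoginScreen:
    def __init__(self, auth_service):
        self.auth_service = auth_service
        self.fields = ["username", "password"]

    def submit(self, username, password):
        # The screen's only job is to send the correct command out; whether the
        # credentials exist in the user database is not this component's concern.
        return self.auth_service.authenticate(username=username, password=password)

class LoginScreenUnitTest(unittest.TestCase):
    def test_renders_expected_entry_boxes(self):
        screen = LoginScreen(auth_service=Mock())
        self.assertEqual(screen.fields, ["username", "password"])

    def test_sends_correct_command_to_auth_service(self):
        auth = Mock()
        screen = LoginScreen(auth_service=auth)
        screen.submit("demo_user", "secret")
        # Verify the outgoing call only; checking the stored credentials is SIT's job.
        auth.authenticate.assert_called_once_with(username="demo_user", password="secret")

if __name__ == "__main__":
    unittest.main()
```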
Each testing period should contain a few different deployments. For this example, let’s assume there are three deployments. The first round of testing cannot begin until the first deployment is complete. Testing all of the test cases in Deployment 1 ensures that defects are identified prior to Deployment 2. Ideally, defects are identified and logged early enough for development to resolve most of the issues prior to the second deployment. This process repeats itself through Deployment 3.
When testing in Deployment 1, we recommend running every script as far as testers can go until they hit blocking issues. Blocking issues are areas where tests fail not because of development defects, but because all of the required dependencies have not yet been completed. This helps you shake out test script errors, test data issues, and code defects as quickly as possible. Typically, we’ve found that about 55% of test scripts pass in Deployment 1 on the first try; we mark these Pass 1A. The next round of tests (still within Deployment 1, before Deployment 2) allows testers to correct for incorrect test data, script errors, or approved design changes. Tests that pass in this round are marked Pass 1B and usually bring coverage to about 65% of all test scripts. After Deployment 2, you can begin another round of testing; typically, 75% of the total test scripts will have passed after Pass 2A and 2B. Finally, after Deployment 3, you can complete your final round of testing, hopefully achieving 100% passed scripts.
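If your test results are exported from a test management tool or spreadsheet, a small script can roll the pass marks up into the cumulative percentages described above. The record layout and round labels here are assumptions for illustration only.

```python
from collections import Counter

# Hypothetical export: one record per test script, noting the round in which it
# first passed (None means it has not passed yet). Labels follow Pass 1A/1B/2A/2B/3A.
results = [
    {"script_id": "TC-LOGIN-001", "passed_in": "1A"},
    {"script_id": "TC-LOGIN-002", "passed_in": "1B"},
    {"script_id": "TC-PROFILE-001", "passed_in": "2A"},
    {"script_id": "TC-PROFILE-002", "passed_in": None},
]

def cumulative_pass_rates(records, rounds=("1A", "1B", "2A", "2B", "3A")):
    """Return the cumulative percentage of scripts passed after each round."""
    total = len(records)
    first_passes = Counter(r["passed_in"] for r in records if r["passed_in"])
    rates, passed_so_far = {}, 0
    for rnd in rounds:
        passed_so_far += first_passes.get(rnd, 0)
        rates[rnd] = round(100 * passed_so_far / total, 1)
    return rates

print(cumulative_pass_rates(results))
# e.g. {'1A': 25.0, '1B': 50.0, '2A': 75.0, '2B': 75.0, '3A': 75.0}
```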
Regression Testing
Regression Testing verifies that existing functionality continues to work after you’ve made a change or addition to the application. The point of regression testing is to catch any bugs that may have been introduced by a new build or release.
We recommend running regression test cases in the evening (outside of development hours) or towards the end of a deployment cycle so that any regression side effects can be fixed within the build. This reduces risk by catching most regression defects early rather than finding and fixing them at the end of the release cycle. In order to successfully perform regression testing, the team should:
- Conduct regression tests after each testing phase (Unit Test, SIT, and UAT)
- Develop a Regression Test Set, or a bank of test cases, to run as each new version of the application is deployed
- Regression Test Sets contain tests that cover core functionality, which will likely stay the same throughout the life of the application
- Regression test sets should be updated regularly to reflect changes to the application, even including features that were added earlier in the project!
- Test sets should cover the following:
- Basic user experiences / use cases. These are the most important tests because they are critical to the application domain and should always be included in the regression sets
- Tests that identified defects in the previous versions of the release
- Save the test data into a database to compare results during later testing and report on discrepancies
- Utilize a naming convention that allows you to quickly understand what each test case is testing (see the sketch after this list)
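To tie several of these recommendations together, here is a hypothetical sketch of a regression test set: the naming convention makes each case self-describing, and results are saved to a small SQLite database so that later runs can be compared against earlier ones and discrepancies reported. The table layout and test names are illustrative assumptions, not a prescribed design.

```python
import sqlite3
from datetime import datetime, timezone

# Naming convention: regression_<area>_<behavior>, so a failure is self-describing.
def regression_login_valid_credentials():
    return "dashboard"          # stand-in for the real check

def regression_checkout_applies_discount():
    return "total=90.00"        # stand-in for the real check

REGRESSION_SET = [
    regression_login_valid_credentials,
    regression_checkout_applies_discount,
]

def run_and_compare(db_path="regression_results.db"):
    """Run the regression set, save results, and report differences from the previous run."""
    conn = sqlite3.connect(db_path)
    conn.execute("""CREATE TABLE IF NOT EXISTS results
                    (run_at TEXT, test_name TEXT, outcome TEXT)""")
    run_at = datetime.now(timezone.utc).isoformat()
    for test in REGRESSION_SET:
        outcome = test()
        previous = conn.execute(
            "SELECT outcome FROM results WHERE test_name=? ORDER BY run_at DESC LIMIT 1",
            (test.__name__,)).fetchone()
        if previous and previous[0] != outcome:
            print(f"DISCREPANCY in {test.__name__}: was {previous[0]!r}, now {outcome!r}")
        conn.execute("INSERT INTO results VALUES (?, ?, ?)", (run_at, test.__name__, outcome))
    conn.commit()
    conn.close()

if __name__ == "__main__":
    run_and_compare()
```

Because every run is stored with a timestamp, the same script can also back the testing metrics in your Test Plan, such as pass rates over time.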
Testing is an integral part of any build. Not only does it ensure that everything is working, it can also help you identify the improvements that your build has made to your application. We hope that with these tips and best practices, you will be able to find and correct defects more efficiently and ensure an improved user experience. If you would like further guidance or want to learn more about testing best practices, contact us at info@kenwayconsulting.com.