Testing Automation: What You Need to Know

With a long history in Quality Assurance, Kenway has been a trusted partner in supporting the implementation of automation solutions for our clients, specifically in the IVR (Interactive Voice Response) and Contact Center spaces. Testing automation is a game changer for any company that can implement it; however, many do not consider the investments needed to ensure consistent success and meaningful ROI. Without common dependencies being proactively governed and appropriately managed, automation becomes impossible to implement effectively. Kenway works with our clients to address these extremely complex challenges while guiding them successfully through their automation adoption journey.

The Fundamentals of Testing Automation

Strategy & Process

A well-planned and thorough implementation strategy is fundamental to the success of an automation initiative. By establishing specific goals and desired outcomes, teams can prioritize the areas most in need of automation. This clarity enables efficient resource allocation and ensures that automation efforts are targeted at the areas that will yield the greatest benefits.

Well-defined testing processes, guided by comprehensive test plans, help shape effective automation certification practices. These plans outline the scope, coverage, and test cases to be automated, ensuring that all critical scenarios and functions are adequately tested. With a solid test plan in place, an organization can effectively prioritize its automation efforts and ensure comprehensive coverage, minimizing the risk of overlooking critical aspects of the application.

Furthermore, strong testing processes provide standardized procedures for test case design, execution, and result analysis. This consistency ensures that the testing automation team follows best practices, adheres to established guidelines, and maintains a high level of quality and accuracy in their testing activities. By following a systematic approach, firms can confidently rely on automation results to make informed decisions about the application's quality and readiness for deployment.

Dependency Management

There are several dependencies that require proactive mapping, with either direct resolution or meaningful mitigation, to support a healthy automated quality assurance experience. Some of the more common, high-priority dependencies are associated with data and environments. Clean and reliable data is a fundamental dependency for implementing automation successfully. It is the fuel that drives the automation engine and plays a crucial role in producing accurate and meaningful test results.

Test data configured to mimic real-world scenarios lets automated tests replicate realistic user interactions with the application under test. With compliant data, automated tests can be executed without encountering false failures or false positives. When working with flawed or unreliable data, automation scripts will likely encounter unexpected issues or fail to produce accurate results. Quality data eliminates the inconsistencies, errors, and duplicates that can hinder the effectiveness of automation.
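
To make this concrete, here is a minimal sketch of a pre-flight data audit that flags the duplicates and missing fields that typically cause false failures. The record layout and required fields are hypothetical; a real implementation would be tailored to your own data sources.

    # Illustrative pre-flight audit of test data quality; the record layout
    # and required fields are assumptions for this sketch.
    from collections import Counter

    REQUIRED_FIELDS = {"account_id", "phone", "persona"}  # hypothetical schema

    def audit_test_data(records: list[dict]) -> list[str]:
        """Return human-readable issues found in a batch of test records."""
        issues = []
        id_counts = Counter(r.get("account_id") for r in records)
        issues += [f"duplicate account_id: {k}" for k, n in id_counts.items() if n > 1]
        for i, record in enumerate(records):
            missing = REQUIRED_FIELDS - record.keys()
            if missing:
                issues.append(f"record {i} missing fields: {sorted(missing)}")
        return issues

    sample = [
        {"account_id": "A1", "phone": "555-0100", "persona": "new_customer"},
        {"account_id": "A1", "phone": "555-0101"},  # duplicate id, missing persona
    ]
    for problem in audit_test_data(sample):
        print(problem)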

Furthermore, clean and reliable data enables better analysis and troubleshooting during the testing process. When automation scripts produce results that are based on accurate data, it becomes easier to identify and isolate issues or defects within the application. This expedites the debugging and resolution process, saving valuable time and resources. Data management practices, including version control and documentation, contribute to the overall traceability and repeatability of automated tests. Teams can track changes in test data, compare results between different test runs, and reproduce specific test scenarios with confidence. This traceability fosters transparency and enables effective collaboration between testers, developers, and stakeholders involved in the automation process.

Available, stable, and release-compliant environments are also fundamental to supporting effective automation implementations. Environments that are managed effectively provide the foundation for consistent, reliable, and efficient QA processes. They help ensure that test results are accurate, defects can be reproduced, and certification processes can be optimized and, eventually, automated. Organizations should therefore strive to establish and maintain available, stable, and release-compliant environments to enhance the effectiveness of their quality assurance and automation practices.

Tool Selection and Maintenance

The process of adopting testing automation includes the critical task of selecting automation tools that align with project requirements, the technology stack, and testing objectives. The choice of tools directly impacts the efficiency and effectiveness of the automation process. By evaluating different options and identifying the tool that best suits a company’s needs, organizations can ensure a seamless and successful implementation of automation.

There is a wide range of automation tools available in the market; some of the leading choices include Cyara, Selenium, and Hammer. Kenway recently partnered with Cyara, a leading automated CX assurance platform. As Kenway is tool agnostic, we do not recommend one vendor versus another based on our own relationships or partnerships; we help our clients choose the right solution based on their unique paradigm and requirements. In this case, we worked with the client to build requirements and validate that Cyara was the right choice for their needs, and we would recommend that all organizations take the same steps to ensure the tool selected is the best fit.
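
For a sense of what these tools automate, here is a minimal sketch of a browser test using Selenium (one of the tools named above) with its Selenium 4 Python bindings. The URL and element IDs are hypothetical, and a production suite would add explicit waits, richer assertions, and teardown logic.

    # Minimal Selenium 4 sketch; the URL and element IDs are hypothetical.
    from selenium import webdriver
    from selenium.webdriver.common.by import By

    driver = webdriver.Chrome()  # assumes a local Chrome install
    try:
        driver.get("https://example.com/login")              # hypothetical app
        driver.find_element(By.ID, "username").send_keys("test_user")
        driver.find_element(By.ID, "password").send_keys("not_a_real_password")
        driver.find_element(By.ID, "submit").click()
        assert "Dashboard" in driver.title                   # expected landing page
    finally:
        driver.quit()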

Once the tools are selected, ongoing maintenance becomes essential for the long-term sustainability of the automation effort. Regular updates, patches, and enhancements keep the tools compatible with the ever-evolving technology landscape.

Maintenance activities also involve monitoring tool performance, addressing any issues or bugs, and ensuring the tools remain reliable and efficient in supporting the automated testing process. Proper tool maintenance also includes staying informed about the latest advancements in automation technology, allowing teams to leverage new features and capabilities to enhance their automated testing efforts.

Moreover, tool maintenance encompasses providing training and skill development opportunities for the testing team. Ensuring that testers are proficient in utilizing the selected automation tools is crucial for maximizing their potential. By equipping the team with the necessary skills, teams can effectively design and execute test cases, analyze test results, and fully leverage the capabilities of the testing automation tools.

Testing Automation Governance

Governance establishes and ensures the maintenance of standardized methodologies, procedures, and guidelines for testing automation, test case design, execution, and result analysis across different projects and teams. By providing a framework for standardization, governance facilitates collaboration, reduces errors, and improves the overall quality of testing – automated or not. Without a foundation in good governance, the ability to reap the benefit of automation becomes extremely difficult.

Governance plays a crucial role in risk management. A well-governed program identifies and mitigates risks associated with automated certification practices (e.g., data privacy and security, test environment stability, tool selection, regulatory requirements, and industry standards). By implementing controls and protocols, governance functions proactively address potential risks and minimize disruptions or failures in the testing process.

Governance promotes accountability and transparency in testing activities. It establishes clear roles and responsibilities for all stakeholders. By defining and enforcing these roles, governance fosters transparency, trust, and effective collaboration among team members, leading to better outcomes in automated testing. It also facilitates knowledge management by capturing and sharing best practices, lessons learned, and automation artifacts. Managers are better equipped to ensure that valuable insights, tools, and resources are documented, stored, and made accessible to relevant stakeholders. This knowledge management aspect of governance promotes learning, innovation, and efficiency in testing, especially when automated.

Change Management

A well-targeted and streamlined change management approach will help guarantee adoption of automation. Each team impacted by the enhanced processes supporting automation will need to understand its roles and responsibilities to ensure the organization achieves the highest ROI. Each stakeholder must have a clear understanding of how their role impacts the success of automation. Governance must be equipped to validate that automation processes are being consistently leveraged, with any associated risks and issues quickly resolved or mitigated.

Kenway understands that automation is an iterative process. As automation initiatives progress, it's common for organizations to revisit and refine their testing strategy, governance, dependency management, and change management practices. Continuous evaluation, adaptation, and improvement are also crucial to optimize the benefits of automation over time.

How Kenway Can Help

When engaging with a client interested in automation, Kenway will assess processes, dependencies, and governance prior to conducting an analysis on automation tools. By identifying gaps in the fundamentals early, we can create an effective plan to prioritize and address issues, ultimately supporting the delivery of a highly functional automated testing framework. We guide our clients through automation orchestration, model adoption options, and tool selection processes, ensuring that the client’s unique needs are met. Throughout this journey, we focus on the impacts to the existing teams, unique personas, and processes to ensure a smooth transition via a well-defined change management strategy. Automation is a powerful tool, but only when organizations make the proper investments to effectively integrate and nurture it. At Kenway, we pride ourselves on having helped clients maximize the return on their automation investment.

If you are struggling with manual testing, having difficulty implementing automated testing, or simply have an interest in automated testing, we would like to hear from you at [email protected].

 

Test Management Organizations: Key to Quality Delivery

In today's fast-paced and competitive business landscape, delivering high-quality products and services is paramount to success. To ensure the reliability and effectiveness of their offerings, organizations rely on robust testing processes. One key component that can elevate an organization’s testing capabilities is establishing a Test Management Organization (TMO).  

A Test Management Organization (TMO) is a dedicated group within an organization that provides governance, oversight, and standardization to the testing process. Serving as a central hub for all testing-related activities, it ensures that testing methodologies, processes, and strategies are defined, implemented, and continuously improved. The TMO establishes a framework for compliance with industry standards, policies, and procedures, promoting accountability and risk management. It also fosters effective communication and collaboration between stakeholders, aligning testing activities with business goals and user requirements.  

As the driving force behind efficient and effective testing, the TMO supports the organization in achieving its quality and delivery goals. By developing comprehensive test strategies, optimizing testing, and driving continuous improvement, the TMO helps reduce cost, speed up time-to-market, and deliver high-quality products and services. 

Test Management Organization (TMO) Roles 

Within a Test Management Organization (TMO), there are several roles and responsibilities that contribute to the successful implementation and execution of the testing process. These roles work together to establish a well-structured and efficient test management process within the organization. 

These roles include: 

TMO Manager 

The TMO Manager is responsible for overseeing the entire TMO and ensuring its effective functioning. They provide strategic direction, establish goals and objectives, and oversee the implementation of testing methodologies and processes. The TMO Manager also collaborates with stakeholders, manages resources, and drives continuous improvement within the organization. 

Test Case Managers 

Test Case Managers are responsible for defining and managing the test case management process. They ensure that test cases are created, documented, and stored in a structured manner. Test Case Managers also collaborate with business stakeholders to understand testing requirements and ensure proper test coverage. 

Test Data Managers 

Test Data Managers focus on managing the test data required for testing activities. They ensure that relevant and accurate data is available for testing purposes. Test Data Managers work closely with other teams to procure, maintain, and refresh test data as needed. 

Environment Managers 

Environment Managers are responsible for managing the testing environments and ensuring their stability, availability, and suitability for testing. They coordinate with infrastructure teams, manage environment configurations, and address any environment-related issues or dependencies. 

Defect and Coverage Managers 

Defect and Coverage Managers oversee the defect management process, ensuring that defects are identified, tracked, and resolved effectively. They also monitor test coverage to ensure that all critical areas are tested adequately and provide visibility into the status of defect resolution and coverage metrics. 

Triage Managers 

Triage Managers play a crucial role in prioritizing and assigning defects to the appropriate teams for resolution. They analyze and evaluate defects based on their severity and impact, ensuring that the most critical issues are addressed promptly. Triage Managers collaborate with development and testing teams to facilitate the defect resolution process. 
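
To picture that prioritization step, here is a minimal sketch of a severity-and-impact ordering a Triage Manager might apply. The severity scale and impact measure are assumptions for illustration, not an industry standard.

    # Illustrative defect triage: order by severity, then breadth of impact.
    # The severity ranks and fields are assumptions for this sketch.
    from dataclasses import dataclass

    SEVERITY_RANK = {"critical": 0, "high": 1, "medium": 2, "low": 3}

    @dataclass
    class Defect:
        defect_id: str
        severity: str        # one of SEVERITY_RANK's keys
        users_impacted: int  # rough count of affected end users

    def triage(defects: list[Defect]) -> list[Defect]:
        """Most urgent defects first: severity, then breadth of impact."""
        return sorted(defects, key=lambda d: (SEVERITY_RANK[d.severity], -d.users_impacted))

    queue = triage([
        Defect("D-102", "medium", 40),
        Defect("D-101", "critical", 5),
        Defect("D-103", "critical", 500),
    ])
    print([d.defect_id for d in queue])  # ['D-103', 'D-101', 'D-102']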

Test Team  

The test team, including analysts and developers, is responsible for executing test cases, identifying defects, and ensuring the quality of the software being tested. They analyze requirements, design test scenarios, write and execute test cases, and report any issues found during testing. The test team works closely with other stakeholders to validate software functionality, verify system behavior, and contribute to the overall testing effort. 

Business Stakeholders 

Business stakeholders are individuals or teams representing the business or end-users. They provide input on testing requirements, priorities, and business objectives. Business stakeholders collaborate with the TMO and other teams to ensure that testing aligns with business needs and objectives. They may participate in test planning, review test results, and provide feedback on the software being tested. 

Critical Testing Processes 

The TMO will manage the following processes and make sure they are successfully executed and continue to deliver value.  

Test Strategy Process  

The TMO is responsible for defining and implementing the test strategy process. This involves identifying the testing objectives, scope, and approach for different projects or initiatives. The TMO collaborates with stakeholders to establish guidelines, standards, and best practices to be followed during testing. See Kenway's tried and tested testing methods for more information.  

Test Case Management 

The TMO establishes and maintains the test case management process. This involves defining the guidelines and procedures for creating, documenting, and organizing test cases. The TMO ensures that the test case repository is well-maintained, easily accessible, and up to date. They also collaborate with business stakeholders to validate test cases and ensure appropriate coverage. 

Test Data Management 

The TMO oversees the management of test data required for testing activities. This includes identifying data needs, procuring relevant data, maintaining data repositories, and ensuring data privacy and security. The TMO works closely with other teams to ensure that test data is available, accurate, and suitable for testing purposes. 

Handoffs Between Teams 

The TMO defines and manages the handoff process between different teams involved in the software development lifecycle. This includes coordinating the transition of deliverables, test artifacts, and relevant information between development, testing, and deployment teams. The TMO ensures that the handoff process is smooth, well-documented, and aligned with the overall project goals. 

Stakeholder Involvement 

The TMO actively engages and involves stakeholders throughout the testing process. This includes collaborating with business stakeholders to gather requirements, conducting regular status updates and reviews, and obtaining feedback on test results. The TMO ensures effective communication and engagement to align testing efforts with business needs and expectations. 

Reporting 

Actionable reporting plays a crucial role within a TMO by providing valuable insights and data that drive informed decision-making and process improvement. By presenting clear and concise information about testing activities, results, and metrics, actionable reports empower stakeholders to make data-driven decisions and allocate resources effectively. 

Quality reports enable the identification of areas that require attention or improvement, allowing for timely corrective actions and risk mitigation. Actionable reporting serves as a foundation for process improvements by highlighting bottlenecks, inefficiencies, and recurring issues in the testing process. By analyzing the data and metrics provided in these reports, the TMO can conduct root cause analysis and implement corrective measures to enhance testing efficiency, effectiveness, and overall quality. This iterative process of data analysis and improvement ensures continuous optimization of testing efforts. 
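
As a small illustration, the sketch below rolls raw test results up into the kind of pass-rate and failure-hotspot metrics an actionable report might surface. The result fields are hypothetical.

    # Illustrative rollup of raw results into report-ready metrics.
    # The result fields ("case", "status", "area") are assumptions.
    results = [
        {"case": "TC-001", "status": "pass", "area": "billing"},
        {"case": "TC-002", "status": "fail", "area": "billing"},
        {"case": "TC-003", "status": "pass", "area": "login"},
    ]

    total = len(results)
    passed = sum(1 for r in results if r["status"] == "pass")
    failures_by_area: dict[str, int] = {}
    for r in results:
        if r["status"] == "fail":
            failures_by_area[r["area"]] = failures_by_area.get(r["area"], 0) + 1

    print(f"pass rate: {passed / total:.0%}")      # pass rate: 67%
    print(f"failures by area: {failures_by_area}") # {'billing': 1}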

Actionable reporting fosters transparency and communication within the TMO and with other stakeholders. By providing relevant and easily understandable information, reports facilitate clear communication of testing status, progress, and results. This transparency promotes trust and collaboration among team members, stakeholders, and management, leading to more effective coordination, issue resolution, and alignment of expectations. Furthermore, actionable reporting ensures that all parties have a shared understanding of testing outcomes, challenges, and opportunities.  

TMO – The Strategic Move 

Establishing a Test Management Organization is a strategic move that can significantly enhance an organization's testing capabilities and overall software quality. By centralizing testing efforts, standardizing processes, and providing dedicated resources, a TMO promotes collaboration, efficiency, and effectiveness in testing activities. It enables better communication between stakeholders, streamlines test planning and execution, and facilitates the identification and resolution of issues throughout the software development life cycle. Embracing a TMO can empower organizations to stay ahead in today's fast-paced and competitive digital landscape, driving innovation and achieving business success. 

Connect with us to see how we work with clients to understand their testing needs, establish a testing strategy, and stand up a TMO that can self-sufficiently drive value within the organization via quality testing.



 

Building Your Organization’s Testing Strategy

A test strategy is an essential component of an organization’s software development approach and outlines the overall process and objectives for product testing. It acts as a blueprint defining the scope and resources required, the testing tools to be employed, the risks involved, and the schedule for testing activities.

A well-crafted test strategy not only ensures a comprehensive evaluation of the product's functionality, reliability, and performance but also helps identify defects at an early stage. Finding defects early saves time, reduces costs, and contributes to the final product's overall quality by reducing churn in the development cycle, resulting in fewer defects pushed to production. Quality Assurance testing is not only about defining the types of testing and methodologies (e.g., automated or manual) but also about setting clear expectations, fostering early and continuous engagement, and thereby promoting a quality-focused, efficient development process.

The Impact of Delivery Methodologies

An organization’s approach to testing will depend on their delivery methodology, be it Waterfall or Agile. The choice of methodology directly impacts the testing strategy, timeline, resource allocation, and the tools and techniques employed.

The Waterfall model is a linear, sequential approach to software development, where progress flows steadily downwards, like a waterfall, through conception, initiation, analysis, design, construction, testing, deployment, and maintenance. Given its structured and sequential nature, testing in the Waterfall model occurs late in the lifecycle, after the 'construction' or 'development' phase. This implies that any bugs or issues discovered during the testing phase can lead to increased costs and time delays, as changes might require revisiting and modifying large sections of the code.

Conversely, the Agile methodology is iterative and incremental. Testing in an Agile model is continuous and integrated into the development process, often performed in every iteration or sprint. This continuous testing approach enables teams to identify and address issues promptly, enhancing software quality and reducing time to market. Agile's dynamic nature, however, means that the testing team needs to adapt quickly to changing requirements and maintain robust communication with the developers and stakeholders.

Types of Testing

There are several types of testing that should be carried out to ensure the quality and performance of the product. Understanding these tests and defining their place within the testing strategy allows teams to ensure that their testing efforts provide the desired level of coverage.

Each of these tests plays a vital role in the software development process and must be accurately defined in the testing strategy. The testing strategy should lay out when and how each test type will be conducted, the owner of each test, the resources and environments required, and the objectives of each test. The right balance and sequence of these tests within the testing strategy ensures that software is efficient, effective, user-friendly, and robust under varying conditions.

Requirements Traceability, Dependency Mapping, and Value Stream Mapping

Requirements Traceability, Dependency Mapping, and Value Stream Mapping are fundamental to ensuring requirements are met, enhancements do not disrupt existing functionality, and improvements can be properly prioritized based on ROI and impact. Organizations must set expectations and define processes to create and maintain these deliverables as part of their overarching test strategy to maximize their benefit.

These processes are critical to improving visibility and control over the software development process. Coupled together, they contribute to a well-defined testing strategy, ensuring comprehensive coverage of all requirements and dependencies and resulting in a reliable, high-quality software product.
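
In its simplest form, a requirements traceability matrix is just a mapping from requirements to the tests that cover them, which makes coverage gaps easy to spot. The sketch below is illustrative, and all IDs are hypothetical.

    # Illustrative requirements traceability matrix; all IDs are hypothetical.
    traceability = {
        "REQ-001": ["TC-001", "TC-014"],  # requirement -> covering test cases
        "REQ-002": ["TC-002"],
        "REQ-003": [],                    # gap: no covering test yet
    }

    uncovered = [req for req, tests in traceability.items() if not tests]
    print(f"requirements without coverage: {uncovered}")  # ['REQ-003']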

Test Repository

Creating a structured process for developing and maintaining a Test Repository is crucial for effective test management. A Test Repository serves as the centralized location for all test artifacts, including test cases, test scripts, test data, and testing results. This centralization ensures easy access, enhances reusability, and promotes consistency across the testing lifecycle.

A high-quality Test Repository must have the following components:

  1. Well-documented and easy-to-understand test cases and scripts that reflect the business requirements.
  2. An organized set of test data, which can be used across different testing stages and allows for repeatable and consistent test executions.
  3. Testing results, including insights on testing coverage, defects discovered, their resolution status, and the impacts on the software quality.
  4. Established naming conventions enabling searchability, reducing the risk of duplication, and allowing existing tests to be easily updated when enhancements are introduced (see the sketch after this list).
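
As an illustration of components 1 and 4, here is a minimal sketch of a repository entry that enforces a naming convention and keeps traceability back to requirements. The "<AREA>-<FEATURE>-<NNN>" scheme and all IDs are assumptions for the sketch, not a standard.

    # Illustrative Test Repository entry; the "<AREA>-<FEATURE>-<NNN>" naming
    # scheme and all IDs are assumptions for this sketch.
    import re
    from dataclasses import dataclass, field

    NAME_PATTERN = re.compile(r"^[A-Z]+-[A-Z]+-\d{3}$")

    @dataclass
    class TestCase:
        case_id: str                 # e.g., "IVR-LOGIN-001"
        requirement_ids: list[str]   # traceability back to requirements
        steps: list[str]
        last_result: str = "not_run"
        tags: list[str] = field(default_factory=list)

        def __post_init__(self):
            if not NAME_PATTERN.match(self.case_id):
                raise ValueError(f"non-conforming test case id: {self.case_id}")

    case = TestCase(
        case_id="IVR-LOGIN-001",
        requirement_ids=["REQ-001"],
        steps=["dial main line", "enter account number", "verify greeting"],
    )
    print(case.case_id, "->", case.requirement_ids)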

This systematic process for managing a Test Repository ensures traceability, facilitates knowledge sharing, and promotes communication among team members. It serves as a significant asset for regression testing and future projects, saving time and effort in the long run.

Test Data

Test data plays a vital role in determining the outcome of a software testing process. A well-defined test strategy will drive the use of quality data and yield better testing outcomes. It will provide a detailed plan on when and where to use different types of test data, ensuring the right conditions are established for each testing stage. By emphasizing schema compliance and persona-based testing, the strategy promotes comprehensive and realistic testing, enhancing the likelihood of identifying potential defects and improving the overall quality of the software.

Schema compliance is vital for all types of test data. It ensures that the test data adheres to the established rules of data organization within the system, thus guaranteeing data integrity and reducing the risk of invalid test results.
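
One lightweight way to enforce schema compliance is to validate each record before a run, for example with the jsonschema package, as sketched below. The schema and record are hypothetical examples.

    # Illustrative schema-compliance check using the jsonschema package
    # (pip install jsonschema); the schema and record are hypothetical.
    from jsonschema import ValidationError, validate

    customer_schema = {
        "type": "object",
        "required": ["account_id", "phone"],
        "properties": {
            "account_id": {"type": "string"},
            "phone": {"type": "string", "pattern": r"^\d{3}-\d{4}$"},
        },
    }

    record = {"account_id": "A1", "phone": "555-0100"}
    try:
        validate(instance=record, schema=customer_schema)
        print("record is schema compliant")
    except ValidationError as err:
        print(f"schema violation: {err.message}")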

Persona-based testing is an approach where user personas are created to represent different user types that might interact with a product. Persona-based test data should be developed based on these personas' behaviors and needs, ensuring a wide range of scenarios are tested that cover the potential uses of the application.

Based on the function and the testing stage in which it is used, test data is categorized into three types.

  1. Stubbed data is created specifically for testing certain functionality. It helps to isolate the system component being tested and supports the creation of controlled test conditions (see the sketch after this list). For example, it can be used when the system component being tested doesn't rely on the data from other components or when the data from other components is hard to acquire.
  2. End-to-end data, as the name suggests, is used in end-to-end testing. This is a comprehensive testing process where the data flow among all components of the system is tested from start to finish, ensuring all the integrated pieces of an application function as expected. The data used should mimic real-world usage and interactions to evaluate the system's functionality under realistic conditions.
  3. Hybrid data is a mix of stubbed and end-to-end data. This type of data is often used when certain parts of the system need to be isolated (using stubbed data), while the remaining parts of the system require end-to-end data to emulate realistic conditions.
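
To make the first category concrete, here is a minimal sketch that uses Python's unittest.mock to stub a downstream service, isolating the component under test. The balance service interface is hypothetical.

    # Illustrative stubbed-data test: unittest.mock stands in for a downstream
    # balance service so the component under test runs in isolation. The
    # service interface is hypothetical.
    from unittest.mock import Mock

    def format_balance_message(balance_service, account_id: str) -> str:
        """Component under test: reads a balance and formats an IVR prompt."""
        balance = balance_service.get_balance(account_id)
        return f"Your current balance is ${balance:.2f}."

    stub_service = Mock()
    stub_service.get_balance.return_value = 42.50   # controlled test condition

    assert format_balance_message(stub_service, "A1") == "Your current balance is $42.50."
    stub_service.get_balance.assert_called_once_with("A1")
    print("stubbed-data test passed")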

Test Environments

Within a comprehensive testing strategy, test environment management serves as the linchpin that bridges software development with practical validation. Its importance lies in its ability to mirror real-world conditions, simulating diverse user scenarios and uncovering critical issues before deployment.

To fully realize the potential of effective environment management, your organization must align release cycles and branching strategies while ensuring compliant configurations and environment stability. This ensures that configurations remain in lockstep with project objectives and comply rigorously with the intricacies of each release. A harmonious integration of test environments within the broader testing landscape not only accelerates issue detection but also fosters a systematic and reliable approach to quality assurance, safeguarding the software's integrity while enhancing overall user satisfaction.
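
A simple way to keep environments and releases in lockstep is a declarative mapping that test runs check before executing, as sketched below. All environment names, branches, and versions are hypothetical.

    # Illustrative release-to-environment mapping; names, branches, and
    # versions are all hypothetical.
    ENVIRONMENTS = {
        "sit": {"branch": "release/2024.06", "app_version": "2024.06.1"},
        "uat": {"branch": "release/2024.06", "app_version": "2024.06.1"},
        "dev": {"branch": "main", "app_version": "2024.07-dev"},
    }

    def check_release_compliance(env: str, expected_version: str) -> bool:
        """Fail fast if the target environment is not on the expected release."""
        actual = ENVIRONMENTS[env]["app_version"]
        if actual != expected_version:
            print(f"{env} is on {actual}; expected {expected_version}. Halting run.")
            return False
        return True

    assert check_release_compliance("uat", "2024.06.1")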

A Well-Defined Testing Strategy

A well-defined testing strategy is instrumental in fostering a "pivot left" approach, where testing and quality control are incorporated early in the software development lifecycle. This proactive approach focuses on preventing defects rather than merely detecting them later, allowing teams to drastically reduce the cost and time involved in the development process and leading to more efficient project execution and higher product quality.

In this context, the testing strategy should clearly define the "done" criteria for development. Such criteria may include specific requirements like code quality standards, successful execution of unit tests, or completion of documentation. A clear understanding of when development is considered "done" aligns the entire team's expectations and contributes to a more streamlined process.

An effective testing strategy emphasizes stakeholder engagement. Requirements and artifacts should be thoroughly reviewed and agreed upon by all stakeholders prior to the build phase. This is essential to avoid any misunderstanding or miscommunication that could lead to delays or rework. Additionally, during and after development, demos should be conducted to engage stakeholders, provide them with a clear understanding of the product’s progress, and gather feedback. This will ensure that the product is developed according to stakeholder expectations and any potential issues or changes are promptly addressed.

How Kenway Defines and Implements Testing Strategies

At Kenway Consulting, we help our clients define and implement robust test strategies tailored to their unique requirements. Our consultants bring decades of expertise to construct an approach that aligns with the organization's goals, existing infrastructure, and project timeline. We make sure the strategy is executable within the client’s organization and provide the necessary training, resources, and ongoing support for its seamless integration, thereby optimizing software development processes and ensuring the delivery of high-quality products.

If you need help tailoring or implementing these components into your test strategy, or driving a quicker and more effective software development process, connect with us at [email protected]. We’re here to help.

Tried and Tested Testing Methods

You’ve spent months building a new software application and now you need to make sure that it works. Testing your application may feel like a daunting afterthought to your build, but we’ve found that with proactive planning, tiered execution, and detailed regression testing, your testing process can strengthen your build, ensure you’re delivering the strongest customer experience possible, and bring to light enhancements that could be incorporated in subsequent phases. Here’s how Kenway approaches test management to provide the most value:

Test Planning

First, take a look at your test plan holistically. When do you want to start testing? From there, work backwards to determine how far in advance you’ll need to start creating your testing materials.

Waterfall Test Planning Timeline*

Here’s a timeline for the average-sized build in a traditional Waterfall implementation:

[Figure: Waterfall testing plan timeline]
*This timeline is for a moderately-sized release. If there are several large projects going into a single release, you’ll want to stretch out this timeline. The opposite is true for smaller efforts! 

Agile Test Planning Timeline

Agile testing is iterative and should follow the sprint timeline. Test planning should go hand in hand with sprint planning sessions so that test managers can build test cases based on the scope of each sprint. Here’s a timeline for an average-sized Agile implementation:

[Figure: Agile testing plan timeline]

Here are the materials you’ll need for a comprehensive test strategy:

Testing Execution

At a minimum, three types of Quality Assurance testing should take place: Unit Testing, System Integration Testing (SIT), and User Acceptance Testing (UAT). Unit Testing and SIT occur in a testing environment with no external-facing interaction. Unit Testing is performed by developers on their individual components of the entire system within the development environment. For example, if a developer created a log-in screen, they would ensure that the screen renders correctly, has the appropriate entry boxes, and sends the correct commands out. They would not test whether the username and password combination was stored within the user database; those tests would be completed in SIT. SIT tests are performed with test data in a test environment and are meant to test general functionality of the entire application, rather than minute test cases. Once the application passes the test criteria laid out by the technology team, business users participate in UAT to confirm it meets their requirements. UAT is set up in a test environment to mimic the customer or end user experience and uses production-like data to test detailed elements of the application. Tests in UAT can access both external and internal interfaces.
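
Here is a minimal pytest sketch of those unit-level checks on the log-in screen. The LoginScreen class is hypothetical, included only so the example is self-contained; note that the tests stop at the screen's own behavior and never touch the user database.

    # Illustrative unit tests for the log-in screen example (pytest style).
    # LoginScreen is a hypothetical stand-in so the sketch is self-contained.
    class LoginScreen:
        fields = ("username", "password")

        def submit(self, username: str, password: str) -> dict:
            # Unit scope ends here: emit the command; storage is SIT's concern.
            return {"command": "AUTHENTICATE", "username": username}

    def test_screen_has_expected_entry_boxes():
        assert set(LoginScreen.fields) == {"username", "password"}

    def test_submit_sends_correct_command():
        command = LoginScreen().submit("test_user", "secret")
        assert command["command"] == "AUTHENTICATE"
        assert command["username"] == "test_user"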

Each testing period should contain a few different deployments. For this example, let’s assume that there are three deployments. The first round of testing cannot begin until the first deployment is complete. Testing all of the test cases in Deployment 1 ensures that defects are identified prior to Deployment 2. Ideally, defects would be identified and logged early enough for development to resolve most of the issues prior to the second deployment. This process repeats itself through Deployment 3.

When testing in Deployment 1, we recommend running all scripts as far as the testers can go before hitting blocking issues. Blocking issues are areas where tests fail, not because of development issues, but because all of the required dependencies have not been completed. This will help you shake out any test script errors, test data issues, or code defects as quickly as possible. Typically, we’ve found that 55% of test scripts will successfully pass in Deployment 1 on the first try, which we mark Pass 1A. The next round of tests (still within Deployment 1, before Deployment 2) allows testers to correct for any incorrect test data, script errors, or approved design changes. Tests that pass in this round are marked Pass 1B and usually cover about 65% of all test scripts. After Deployment 2, you can begin another round of testing; typically, 75% of the total number of test scripts will pass after Passes 2A and 2B. Finally, after Deployment 3, you can complete your final round of testing, hopefully achieving 100% passed scripts.
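
Applied to a hypothetical release of 200 test scripts, those benchmark rates work out as follows.

    # Applying the benchmark pass rates above to a hypothetical 200-script release.
    total_scripts = 200
    benchmarks = {"Pass 1A": 0.55, "Pass 1B": 0.65, "Pass 2A/2B": 0.75, "Final": 1.00}

    for milestone, rate in benchmarks.items():
        print(f"{milestone}: ~{int(total_scripts * rate)} of {total_scripts} scripts passing")
    # Pass 1A: ~110 ... Final: ~200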

Regression Testing

Regression Testing verifies that existing functionality continues to work after you’ve made a change or addition to the application. The point of regression testing is to catch any bugs that may have been introduced by a new build or release.
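
One lightweight way to carve out a repeatable regression suite is a pytest marker, sketched below. The "regression" marker is a project convention we are assuming (registered under "markers" in pytest.ini), not a pytest built-in; an overnight job can then run just those tests with "pytest -m regression".

    # Illustrative regression suite selection with a pytest marker. The
    # "regression" marker is a project convention (register it under
    # "markers" in pytest.ini); it is not a pytest built-in.
    import pytest

    def login(username: str, password: str) -> bool:
        """Hypothetical existing functionality guarded by regression tests."""
        return bool(username and password)

    @pytest.mark.regression
    def test_existing_login_still_works():
        assert login("test_user", "secret")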

We recommend running regression test cases in the evening (outside of development hours) or towards the end of a deployment cycle so that any regression side effects can be fixed in the build. This reduces the risk by covering almost all regression defects in the early stages rather than finding and fixing those at the end of the release cycle. In order to successfully perform regression testing, the team should:

Testing is an integral part of any build. Not only does it ensure that everything is working, but it can also help you to identify all of the improvements that your build has hopefully made to your application. We hope that with these tips and best practices, you will be able to find and correct defects more efficiently and ensure an improved user experience. If you would like further guidance or want to learn more about testing best practices, contact us at [email protected].

 

What’s With All The Bugs?

You have just spent months designing, building, and testing your application. System test is complete, and your testing team has declared you bug free. You deliver the product to your client and set them loose on User Acceptance Test, confident it will go smoothly, leading to a product delivered on time, within budget, and with a high degree of quality. You and the testing team declare the Quality Assurance phase a success. Then the call comes. Your client is irate, claiming the product is filled with errors and not at all working as the requirements intended. After the call, you realize production could be delayed weeks, maybe months, costing thousands of dollars. Worst of all, your client now seriously doubts your ability to handle Quality Assurance on any future project. You scratch your head and ask, “Where did all these bugs come from?”

You begin the investigation into what went wrong and start with the Unit Test lead, who shrugs his shoulders and says, “All of our code compiled without errors. We were error free.” When you ask about the unit test scripts, you are told there are none; the lead says unit testing does not require scripts, because if each unit of code compiles, the unit testing phase is complete. Of course, without scripts, there are no errors! Compiled code simply means there are no syntax errors, not that the logic is correct!

You move on to the System Test lead, who shows you the entire suite of test scripts, each showing a 100% pass rate on the third phase of testing. You ask who wrote the scripts and how they were written, and you learn the testers wrote them based on the successfully compiled code delivered from the unit testers. Of course, the scripts passed…rather than being written from documented requirements and designs, they were written against what was assumed to be error-free code!

These examples may seem extreme, but I cannot tell you how many projects I have been on where I have seen these exact scenarios occur. It happens for a multitude of reasons. Unit testers may feel they do not need to write scripts because they are writing the code directly from the requirements or technical designs, so if it compiles, they must have it right. Your system testers may not be experienced or confident enough to write their scripts directly from documentation, so they take “sneak peeks” into the application “just to make sure they have it right,” often leading to several errors being written into the scripts. Most commonly, these scenarios result from a lack of documented testing methodology to guide projects through the Quality Assurance phase.

Your Testing Methodology should include your overall test plan outlining each testing phase required for the project; a detailed test plan for each testing phase; templates and instructions for creating test cases, conditions, and scripts; a documented process for capturing and reviewing bugs; and standard metrics to actively track how the testing phases are progressing. Every project is unique, and the Testing Methodology will help you determine the testing phases that will ensure a successful Quality Assurance phase. Still, there are certain testing phases and certain guiding principles that should be followed within each project.

The number one thing your clients expect from you is quality. If your client was smart enough to budget for appropriate testing resources, be smart enough to execute it correctly. Without the appropriate rigor and focus, you will miss the mark every time.