Software testing is one of the pillars of the software development life cycle, or SDLC (Plan – Design – Implement – Test – Deploy – Maintain). The goal behind the SDLC is to help developers build software cost-efficiently and minimize the related risks while ensuring the shipped product meets the expectations of its end users.
The Parts and Types of Software Testing
As a software developer, testing your code before publishing it occupies a large chunk of your development process. Software that is stable, reliable, functional and secure will lead to high customer satisfaction, sustain your business, and help build your reputation.
Planning and executing a software testing project can be somewhat daunting, however. The best way to head off the problems that come up in software testing is careful planning and choosing the right tools and methodology.
Your strategy and approach to software testing will also be influenced by what it is you are testing at any point in time, but there are common threads to all software testing projects. Here are the five most critical aspects a successful software testing project needs to manage effectively:
Clear Objectives
Defining testing objectives and requirements clearly as a first step will shape the rest of your approach. What is the goal your software serves for the user, and what is the experience you want to deliver? Clear objectives help you customize a testing project to focus on specific areas of your software you want to improve, like performance, functionality or security.
The Right Approach
There is more than one approach to software testing, and choosing the appropriate testing approach that makes sense for your project will deliver the expected results. Software testing can be manual, automated, or a combination of the two. The level of complexity of your software, the tools on hand, and the skills of your testing team can help you decide which approach will work best.
Detailed Plan
Now that you know which approach to take, the next step is to plan your software testing project in detail. A well-prepared plan includes the suitable test scenarios, test cases and test data, as well as the required tools and techniques.
Thorough Execution
Execution involves implementing the testing plan, tracking issues as they occur, and documenting the findings and insights. The importance of proper documentation cannot be overstated, as it extends far beyond a single testing project and is vital to the goal of continuously improving software over time.
Regular Regression Testing
The last part of the software testing process is regression testing, which ensures an application still functions as expected after updates, changes, or improvements have been implemented.
Regression testing is a vital requirement whenever you introduce new features or fix bugs, ensuring that the stability and functionality of your application’s features are maintained.
The Ultimate Goal – Delivering High-Quality Software to Users
Software testing can seem daunting at times, but breaking the software testing process into its five components listed above will make it much less so, and help you develop an effective testing strategy that will bring you much closer to your ultimate goal of putting high-quality software in users’ hands.
What’s In This Article
In this article we take a more detailed look at the fundamentals of software testing. We also look at an excellent book by MVP William Meyer, and how you can use it to implement stress-free testing for your software publishing, migration and modernization projects.
Software Testing: Setting The Right Goals
Setting the appropriate goals is the first milestone in your software testing project, and it will directly influence the quality of the outcome.
Shipping high quality software requires that the development work be tested against several key goals before it reaches the end user, and the very first goal is to determine whether the application you’ve created functions as expected.
In other words, testing starts off by determining whether an application does what it’s supposed to do and what its end users expect it to do. In this phase you’ll be looking at the technical aspects of performance as well as the user-friendliness of the application, to identify where the application may fail and may otherwise frustrate its users.
Next, you want to implement software tests that determine whether the application is secure. Security is paramount to long-term sustainability and customer satisfaction, and is a permanent major concern for developers.
Software security tests zoom in on the vulnerabilities of an application and point to ways to fix them. Like every other aspect of building applications, improving software security is a continuous cycle of iterations and testing to meet evolving threats and vulnerabilities.
Reliability is the third standard against which an application needs to be tested. Reliability means an application remains fully functional under a variety of conditions. Reliability testing seeks to identify the conditions in which an application may crash or fail to perform as expected, and take action to improve functionality under these particular conditions.
Scalability is next. Testing for growing usage, traffic and workload will quickly identify the points at which the application buckles under the strain and possibly breaks. Scalability testing and subsequent improvements are designed to measure performance under growing use against a set minimum acceptable benchmark.
To review, software testing looks to measure your application against benchmarks for functionality, security, reliability and scalability. These are the four broad areas for which benchmarks are set and performance is measured.
The Right Testing Approach: Manual and/or Automated Testing?
Testing approaches fall under one of two broad categories: manual testing, and automated testing.
In manual software testing, developers manually implement test cases to observe how the application behaves, and compile reports on their findings.
Manual testing is common in the early stages of the software development life cycle, because it’s easier to implement when an application is still in the prototyping phase. It’s also highly relevant in cases when the features being tested involve human interaction, and specifically user interfaces.
One important reason manual testing is preferred over other options is that in some scenarios it’s seen to enable a more comprehensive analysis of an application’s behavior. It’s also suitable for ad-hoc testing due to its flexibility.
Manual software testing offers developers the following benefits:
Flexibility: Manual testing allows developers to explore a variety of testing scenarios, edge cases and user flows that automated approaches may miss, and to adapt the tests on the fly as new questions arise.
Cost-Effectiveness: For projects with a relatively small scope in particular, manual testing is a highly cost-effective approach. It requires only human testers performing the required tests, without the expense of specialized automated testing tools.
Accuracy: In specific situations, manual testing can provide more accurate results than automated tools, for example when spotting visual design and user interface problems that automated checks tend to miss.
On the downside, manual testing can be time-consuming, especially compared to automated testing approaches, and requires a high level of expertise and extreme attention to detail. Developers need to proceed step by step while documenting the results, which takes time and lengthens the software lifecycle.
By definition it’s also prone to human error, as testers can miss defects or make their own mistakes during the testing process. Manual testing can also be limited in scope, because human testers cannot efficiently cover every possible scenario and test case, so bugs and other issues can slip through. Even so, for many scenarios manual testing is the right approach.
Automated Testing
Automated software testing involves using specialized tools to execute a series of test cases in order to identify and eliminate bugs and errors from the shipped product.
Automated testing offers developers the following benefits:
Speed: The key advantage of automating the software testing process with specialized tools is speed. Automated tests run quickly and can execute a large number of test cases in a short period.
Test Case Coverage: Automated tools perform more tests across a broader set of scenarios than manual testing. This allows developers to fix bugs and deliver reliability across all separate components of their applications.
Consistent Results: Using automated tools eliminates the risk of human error and produces consistent results every time a test is implemented.
Reusability: Test scripts can be run again and again across builds and projects, which reduces repeated manual effort and saves time.
Scalability: Automated testing can easily process very large amounts of data and test many apps and versions simultaneously. This enables testers to scale the testing process as needed.
Overall, automated testing is much faster, more efficient and less error-prone than its manual counterpart, and requires far less human involvement and effort. Automation is the go-to strategy for repetitive tasks and regression testing, as well as for testing applications that run on different platforms and with different configurations. Automated testing can be easily adapted to the required testing environment, and its ease of repetition makes it very cost-effective in the long run despite the potentially high upfront cost of the right tools.
Automated testing also has its downsides, however:
Costs: High upfront setup costs are one of the drawbacks of automated software testing; these costs include installing the appropriate toolset and training personnel to use it. Long-term costs also include maintenance and updates.
Low Human Interaction: Low requirements for human involvement and interaction can lead to certain types of errors and bugs being missed.
Coverage Limitations: Automated tools can provide broader coverage, but still, some aspects of testing need to be handled by humans, including user experience and user interface design.
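To make the automated approach more concrete, here is a minimal sketch of a console test runner using the open-source DUnitX framework for Delphi; the MyProjectTests unit is a placeholder for your own test units, and a real project would typically add command-line handling and extra loggers.

```pascal
program AutomatedTests;

{$APPTYPE CONSOLE}

uses
  System.SysUtils,
  DUnitX.TestFramework,
  DUnitX.Loggers.Console,
  MyProjectTests in 'MyProjectTests.pas'; // placeholder: your own test units

var
  Runner: ITestRunner;
  Results: IRunResults;
begin
  // Create a runner, attach a console logger and execute every registered fixture
  Runner := TDUnitX.CreateRunner;
  Runner.AddLogger(TDUnitXConsoleLogger.Create(True));
  Results := Runner.Execute;

  // A non-zero exit code lets a CI server flag the build as failed automatically
  if not Results.AllPassed then
    System.ExitCode := 1;
end.
```

Because the runner reports its outcome through the process exit code, the same suite can be run unattended on every commit, which is where the speed and consistency benefits described above really pay off.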
Manual, Automated, or Both?
The differing benefits, coverage and scope of the two approaches to software testing suggest that a combination of manual and automated testing is likely to produce the best results and minimize the likelihood of errors and bugs.
As manual and automated approaches have strengths and weaknesses, it’s up to developers to find the balance between the two approaches that will deliver the most comprehensive analysis of their application’s behavior and break points, which will lead to a better-quality product.
Functional And Non-Functional Testing
Functional Testing And Types
Functional testing verifies the behavior of an application or software system against the functional specifications and requirements set by the client. It tests each constituent function of the application independently by supplying input and comparing the actual output to the specified requirements.
Functional testing is normally executed before non-functional testing. Types of functional testing include:
Unit Testing
Integration Testing
User Acceptance Testing
Smoke Testing
Regression Testing
Sanity Testing
White Box Testing
Black Box testing
Let’s look at each of these functional testing types in more detail:
Unit Testing
Unit testing focuses on the individual components or separate parts of a broader system. Developers take the smallest independent units of code and test them separately before looking at the overall functionality of the application.
Unit testing can reduce software development costs by giving developers the opportunity to catch bugs and errors early in the development cycle, also making corrective action quicker and easier.
The main downside of unit testing is that enterprise applications are inevitably made up of a large number of testable units of code, which means thorough testing can be very time-consuming and expensive to implement.
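As a brief sketch of what this looks like in Delphi, the test below uses the open-source DUnitX framework; TDiscountCalculator, its ApplyDiscount function and the unit names are hypothetical stand-ins for one of your own small, independent units of code.

```pascal
unit DiscountCalculatorTests;

interface

uses
  DUnitX.TestFramework;

type
  [TestFixture]
  TDiscountCalculatorTests = class
  public
    [Test]
    procedure TenPercentDiscountIsApplied;
    [Test]
    procedure NegativePriceIsRejected;
  end;

implementation

uses
  System.SysUtils,
  DiscountCalculator; // hypothetical unit under test

procedure TDiscountCalculatorTests.TenPercentDiscountIsApplied;
begin
  // 10,000 cents with a 10% discount should come back as 9,000 cents
  Assert.AreEqual(9000, TDiscountCalculator.ApplyDiscount(10000, 10));
end;

procedure TDiscountCalculatorTests.NegativePriceIsRejected;
begin
  // Invalid input should raise an error rather than be silently accepted
  Assert.WillRaise(
    procedure
    begin
      TDiscountCalculator.ApplyDiscount(-500, 10);
    end,
    EArgumentException);
end;

initialization
  TDUnitX.RegisterTestFixture(TDiscountCalculatorTests);

end.
```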
Integration Testing
On the other hand, integration testing seeks to test the way different units of a system interact with each other. This type of testing usually occurs after a number of units have been tested, and is designed to determine how well these units work together.
Integration testing can catch bugs that do not appear in unit testing and identifies integration issues early in the software development cycle, but it can be a complex process that becomes increasingly time-consuming as the codebase grows.
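The sketch below, again using DUnitX, shows the difference in emphasis: instead of isolating a single unit, it wires together a hypothetical TOrderService and a hypothetical in-memory TInMemoryOrderRepository and checks that they cooperate correctly.

```pascal
unit OrderServiceIntegrationTests;

interface

uses
  DUnitX.TestFramework;

type
  [TestFixture]
  TOrderServiceIntegrationTests = class
  public
    [Test]
    procedure PlacedOrderIsPersistedToRepository;
  end;

implementation

uses
  OrderService, OrderRepository; // hypothetical units under test

procedure TOrderServiceIntegrationTests.PlacedOrderIsPersistedToRepository;
var
  Repository: TInMemoryOrderRepository;
  Service: TOrderService;
begin
  // Wire two real units together instead of testing either one in isolation
  Repository := TInMemoryOrderRepository.Create;
  Service := TOrderService.Create(Repository);
  try
    Service.PlaceOrder('CUST-42', 3);

    // The service should have handed the new order on to the repository
    Assert.AreEqual(1, Repository.OrderCount);
  finally
    Service.Free;
    Repository.Free;
  end;
end;

initialization
  TDUnitX.RegisterTestFixture(TOrderServiceIntegrationTests);

end.
```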
User Acceptance Testing
As the name suggests, this type of testing validates the performance of a system against the set user requirements. This type of testing is normally assigned to actual users or analysts, and is focused on testing a system in real-world conditions that replicate user needs and use patterns.
The core benefit of user acceptance testing is that it helps guarantee a system will function as intended in a real-world environment with real use cases. However, it can be time-consuming and expensive if there are many user personas and use cases that need to be tested.
Smoke Testing
Smoke testing is also known as confidence testing or build verification testing. This method helps developers determine whether a specific build is ready for the next testing phase, and focuses on the core functionality of an application before the more detailed aspects are tested. If an application fails a smoke test it is returned to the development team.
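In practice a smoke test can be as small as confirming that the application’s core objects come up at all; the TReportEngine class and its IsReady property below are hypothetical examples of such a core component, tested here with DUnitX.

```pascal
unit SmokeTests;

interface

uses
  DUnitX.TestFramework;

type
  [TestFixture]
  TSmokeTests = class
  public
    [Test]
    procedure CoreEngineStartsAndAnswers;
  end;

implementation

uses
  ReportEngine; // hypothetical core unit of the application

procedure TSmokeTests.CoreEngineStartsAndAnswers;
var
  Engine: TReportEngine;
begin
  // If this fails, the build is not worth passing on to more detailed test phases
  Engine := TReportEngine.Create;
  try
    Assert.IsTrue(Engine.IsReady, 'Core engine failed to initialise');
  finally
    Engine.Free;
  end;
end;

initialization
  TDUnitX.RegisterTestFixture(TSmokeTests);

end.
```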
Regression Testing
The role of regression testing is to make sure a change made to a system does not create new bugs or cause crashes. It involves checking a system immediately after changes have been introduced, to confirm that no new bugs or defects have appeared and that the changes do not risk breaking the system.
Regression testing catches errors that creep in during the development process, and is particularly useful for checking system integrity during software updates, upgrades or modernization projects.
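One practical way to keep a regression suite easy to re-run after every change is to tag the tests that guard previously fixed defects, as in the hedged DUnitX sketch below; the [Category] attribute is part of DUnitX, while TInvoiceCalculator and the ticket number are hypothetical.

```pascal
unit InvoiceRegressionTests;

interface

uses
  DUnitX.TestFramework;

type
  [TestFixture]
  [Category('Regression')]
  TInvoiceRegressionTests = class
  public
    // Guards a previously fixed defect so it cannot silently reappear
    [Test]
    procedure RoundingBugFromTicket123StaysFixed;
  end;

implementation

uses
  InvoiceCalculator; // hypothetical unit under test

procedure TInvoiceRegressionTests.RoundingBugFromTicket123StaysFixed;
begin
  // Three items at 333 cents each must total 999 cents, not 1000 as in the old bug
  Assert.AreEqual(999, TInvoiceCalculator.TotalInCents(3, 333));
end;

initialization
  TDUnitX.RegisterTestFixture(TInvoiceRegressionTests);

end.
```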
Sanity Testing
Sanity testing is a subcategory of regression testing, and its purpose is to test whether a new software build is functioning as intended. It’s usually limited in scope, and is focused on validating critical core functionality, as opposed to identifying bugs in the system.
Sanity testing is one of the first tests conducted after changes are made to an application, and the results determine whether planned further tests can proceed. Sanity tests are implemented after an application has passed the initial smoke test, and include validating such critical components as the user interface and data input/output.
White Box Testing
White box testing focuses on an application’s internal structure and code to improve security, usability and design. Also known as Clear Box testing or Glass Box testing, it searches for problems with internal security and with the integrity of paths, input flows and conditional loops. It can involve testing individual statements, objects and functions, and can be applied at the unit, integration and system levels. In this process, the code is directly visible to testers.
Black Box Testing
Testing input and output without awareness of an application’s source code and internal structure is known as black box testing. Black box testing checks components like the user interface, database connectivity, usability, accessibility, security, communication between client and server, and error conditions. Black box tests and functional tests can be both manual and automated.
Non-Functional Testing And Types
Non-functional testing evaluates the performance, reliability, usability, scalability, maintainability, portability, and other non-functional characteristics of an application, criteria that functional testing does not cover. This helps reduce the risk of the application failing once it is in production.
Non-functional testing is designed to be quantifiable and measurable against set quality standards, and helps eliminate obstacles to the smooth installation, configuration, execution, management and monitoring of a software product.
Non-functional testing tends to follow functional testing. Types of non-functional testing include:
Performance Testing
Load Testing
Volume Testing
Stress Testing
Security Testing
Installation Testing
Compatibility Testing
Migration Testing
Let’s look at each of these non-functional testing types in more detail:
Performance Testing
As the name suggests, performance testing measures how an application performs under various load conditions and environments.
The goal of performance testing is to ensure the application will perform as required and handle the expected load while in use.
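Dedicated profilers and load generators usually do this job, but a basic performance check can also live inside an ordinary test, as in the sketch below: it uses Delphi’s TStopwatch, a hypothetical TReportBuilder, and an arbitrary 500 ms budget chosen purely for illustration.

```pascal
unit PerformanceTests;

interface

uses
  DUnitX.TestFramework;

type
  [TestFixture]
  TPerformanceTests = class
  public
    [Test]
    procedure MonthlyReportBuildsWithinBudget;
  end;

implementation

uses
  System.SysUtils,
  System.Diagnostics,  // TStopwatch
  ReportBuilder;       // hypothetical unit under test

procedure TPerformanceTests.MonthlyReportBuildsWithinBudget;
var
  Stopwatch: TStopwatch;
begin
  Stopwatch := TStopwatch.StartNew;
  TReportBuilder.BuildMonthlyReport(10000); // hypothetical call over 10,000 records
  Stopwatch.Stop;

  // 500 ms is an illustrative budget; real limits come from your own requirements
  Assert.IsTrue(Stopwatch.ElapsedMilliseconds < 500,
    Format('Report took %d ms', [Stopwatch.ElapsedMilliseconds]));
end;

initialization
  TDUnitX.RegisterTestFixture(TPerformanceTests);

end.
```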
Load Testing
Load testing models the expected usage patterns of an application, such as simulating many users accessing the application simultaneously, to determine its resilience and capacity. Load testing is thus useful for multi-user systems, and for applications that may need to handle sudden spikes in load.
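Purpose-built load-testing tools are the norm here, but the basic idea can be sketched in a few lines of Delphi: the console program below fires a batch of parallel HTTP requests at a hypothetical endpoint URL and counts how many succeed.

```pascal
program SimpleLoadTest;

{$APPTYPE CONSOLE}

uses
  System.SysUtils,
  System.SyncObjs,
  System.Threading,
  System.Net.HttpClient;

const
  Endpoint = 'https://example.com/api/health'; // hypothetical URL under test
  RequestCount = 100;

var
  SuccessCount: Integer;
begin
  SuccessCount := 0;

  // Fire RequestCount requests in parallel to simulate concurrent users
  TParallel.&For(1, RequestCount,
    procedure(Index: Integer)
    var
      Client: THTTPClient;
    begin
      // One client per task, so no THTTPClient instance is shared across threads
      Client := THTTPClient.Create;
      try
        try
          if Client.Get(Endpoint).StatusCode = 200 then
            TInterlocked.Increment(SuccessCount);
        except
          // A failed or refused request simply does not count as a success
        end;
      finally
        Client.Free;
      end;
    end);

  Writeln(Format('%d of %d requests succeeded', [SuccessCount, RequestCount]));
end.
```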
Volume Testing
Volume testing is a form of testing that subjects an application to a large volume of data to determine its data processing limits and capabilities.
Volume testing is useful for scalability planning and identifying the points at which the stability of an application begins to degrade.
Stress Testing
Stress testing applies peak loads and extreme user input to an application to identify its breaking points and determine its operational limits, and whether those limits need to be extended before the application is published. Stress testing helps developers ensure the application will stand up to users’ expectations and usage patterns.
Stress testing can also be used to identify bottlenecks that may be slowing an application down, and to prevent downtime or failure. It can help determine scalability by simulating heavy loads on a system, either all at once or building up over a set period. It’s also a core testing method for measuring recoverability after a crash or failure.
Security Testing
Security testing scans an application for vulnerabilities that could expose it to attacks by malicious actors. Security testing takes many forms, including penetration testing, vulnerability scanning and code review.
Security testing has its own risks: false positives, which can waste resources and time, and incomplete coverage, which means vulnerabilities may still remain after testing is completed. It can also be heavily time-consuming.
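Part of the security surface can still be covered by automated checks. The hedged DUnitX sketch below asserts that a hypothetical TCustomerSearch routine rejects a classic SQL-injection string instead of passing it through to the database.

```pascal
unit SecurityTests;

interface

uses
  DUnitX.TestFramework;

type
  [TestFixture]
  TSecurityTests = class
  public
    [Test]
    procedure SqlInjectionInputIsRejected;
  end;

implementation

uses
  System.SysUtils,
  CustomerSearch; // hypothetical unit under test

procedure TSecurityTests.SqlInjectionInputIsRejected;
begin
  // Malicious input should raise a validation error, never reach the database
  Assert.WillRaise(
    procedure
    begin
      TCustomerSearch.FindByName('Robert''; DROP TABLE Customers;--');
    end,
    EArgumentException);
end;

initialization
  TDUnitX.RegisterTestFixture(TSecurityTests);

end.
```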
Installation Testing
Installation testing is designed to ensure an application can be properly installed with the right features, configurations and libraries as per a given user’s requirements without glitches or failures. It’s usually performed once an application is fully developed, other types of tests have been completed, and the application is almost ready to be shipped to users.
Compatibility Testing
Compatibility testing checks whether an application is able to function on different types of operating systems, platforms, browsers, hardware, network environments and devices.
Backward compatibility testing checks an application’s behavior and compatibility with previous versions, while forward compatibility testing verifies an application’s behavior and compatibility with newer versions.
Migration Testing
Migration testing verifies that the migration of a legacy system to a new one has been successful, with minimal disruption or downtime and no loss of data. At the same time, it ensures that all the functional and non-functional requirements are still met after the migration is completed.
Testing Legacy Delphi Software Modernization Projects: A Thorough Reference Manual
Updating and modernizing legacy Delphi projects can feel like a risky process fraught with many ways to break an application. At the same time, new user requirements and the advancement of technology demand that legacy projects be updated and modernized, often through refactoring.
One of the toughest challenges is knowing where to begin and how to plan a modernization project. Another is choosing the right strategy and approach.
All this can seem overwhelming unless you have a clear, well-written and very thorough guide to software modernization. And that’s exactly what you get with William Meyer’s extensive volume titled “Delphi Legacy Projects: Strategies And Survival Guide”.
About The Author
William Meyer began his career in hardware logic design and taught himself Z80 assembly language. He later discovered Turbo Pascal 1.0 for the Z80, and apparently still has the original manual. After a number of years working with Turbo Pascal he “became acquainted with Delphi and never looked back”.
William Meyer’s career has been almost entirely in the television broadcast industry, with a brief sojourn into medical office practice software, also in Delphi.
In 2019, William took a sabbatical and began working on his first book, “Delphi Legacy Projects: Strategies And Survival Guide”, published in July of 2022. He is now working on a second book.
About “Delphi Legacy Projects: Strategies And Survival Guide”
Refactoring and modernizing legacy projects is a task very different from designing and coding new applications. This volume reviews all aspects of the process and offers strategies for approaching it.
Chapter 18 in particular, titled Testability, contains extensive guidance on navigating the testing process for modernized legacy Delphi applications.
Get The Book
Watch William Meyer’s webinar with Jim McKeeth on Upgrading and Maintaining Delphi Legacy Projects.