The Future of Software Testing and Code Analysis

The role of software testers is changing: increasingly, testers are required to work more closely with developers and to learn more about programming.

The future of software testing lies in AI-driven automatic test generation. New coding tools combine AI, mathematical modelling and source code analysis techniques to automate parts of software development and to generate unit tests automatically. These tools will speed up the whole development process and produce better-quality code. The next generation of software testers will need to understand these new code analysis tools.
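To make the idea concrete, here is a minimal sketch of what an automatically generated unit test might look like. The class under test (`PriceCalculator`) and the chosen input values are hypothetical; a real generator would derive inputs and boundary cases from an analysis of the source code rather than from hand-picked examples.

```java
// Hypothetical class under test, plus a generated-style test in main().
public class PriceCalculator {

    // Applies a percentage discount; rejects percentages outside [0, 100].
    public static double applyDiscount(double price, double percent) {
        if (percent < 0 || percent > 100)
            throw new IllegalArgumentException("percent out of range");
        return price * (1 - percent / 100.0);
    }

    // A generator would typically aim to cover the normal path,
    // a boundary value, and the error path, as below.
    public static void main(String[] args) {
        assert Math.abs(applyDiscount(200.0, 25.0) - 150.0) < 1e-9;
        assert applyDiscount(99.0, 0.0) == 99.0;     // boundary: 0% discount
        boolean threw = false;
        try {
            applyDiscount(10.0, 150.0);              // invalid percentage
        } catch (IllegalArgumentException e) {
            threw = true;
        }
        assert threw;
        System.out.println("all generated checks passed");
    }
}
```

The point is not the arithmetic but the shape of the output: tests that exercise each path through the code, which is exactly the coverage information a source-code analysis can compute mechanically.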

Let’s explore the future of code analysis itself.  Indeed, does it have a future at all? Perhaps programming languages will progress so far that code will unambiguously express the intentions of the programmer; components will be assembled on a trial-and-error basis; and fault-tolerance mechanisms will make up for any problems that arise at runtime due to errors in components or in their composition. 

We think not. The next few decades will see a rapid growth in our software infrastructure, so that eventually we will come to rely on software in almost every interaction with our environment. Transportation, energy distribution, communications, banking and health care will all depend on software. For end-user applications, time to market and feature count may continue to be driving forces but, in the development of our infrastructure, ‘getting it right’ will matter once again, and early investment in modelling and analysis will be essential. Moreover, vast amounts of existing code will be reusable only if there are precise and cogent models that describe its guarantees and assumptions. And with less code to write afresh, the proportion of development effort allotted to coding at the expense of design and analysis will fall further. Code analysis of all kinds will become increasingly common.

This trend will be driven by three factors:

  • the continuing need for information about the behaviour of software during all phases of development
  • the widespread use of Java, whose type safety and high-level intermediate language make it significantly easier to analyse than languages such as C and C++
  • the overall progress in program analysis technology
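As a minimal illustration of the second and third points, here is a toy static analysis written in Java: it walks a straight-line sequence of instructions and flags any variable that is read before it has been assigned. The instruction representation and the analysis itself are invented for this sketch; real analysers work over a compiler's intermediate representation, where Java's type safety makes such reasoning far more tractable than in C or C++.

```java
import java.util.*;

// Toy static analysis (illustrative only): detect variables that are
// read before any assignment, over a straight-line instruction list.
public class UseBeforeAssign {

    // Each toy instruction either assigns a variable or reads one.
    record Instr(String op, String var) {}   // op is "assign" or "read"

    static List<String> analyse(List<Instr> program) {
        Set<String> assigned = new HashSet<>();
        List<String> warnings = new ArrayList<>();
        for (Instr i : program) {
            if (i.op().equals("assign")) {
                assigned.add(i.var());           // record the definition
            } else if (!assigned.contains(i.var())) {
                warnings.add("variable read before assignment: " + i.var());
            }
        }
        return warnings;
    }

    public static void main(String[] args) {
        List<Instr> program = List.of(
                new Instr("assign", "x"),
                new Instr("read",   "x"),   // fine: x was assigned above
                new Instr("read",   "y"));  // flagged: y was never assigned
        analyse(program).forEach(System.out::println);
    }
}
```

Even this trivial analysis has the character of the real thing: it computes a conservative fact about program behaviour without running the program.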

In the future, analyses will be model-driven, that is, centred on abstract models of behaviour; modular and incremental, to enable analysis of components and of systems before completion; and focused and partial, rather than uniform, paying closer attention to the properties that matter most and to the parts of the software that affect those properties. In support of such analyses, modelling languages will be global, focusing on structural relationships across the system, and declarative; the analyses themselves can be expected to make more use of induction than has been fashionable recently. Finally, although we believe that unsound analyses have a bright future, we expect the increasing importance of infrastructural software to bring renewed credibility to sound, precise and resource-intensive analyses.

In summary, perhaps software analysis will come full circle. In the last decade, an appreciation for cost-effectiveness has caused researchers to identify more carefully the information engineers require, and to find the most effective means of obtaining it. Precise and sound analyses have fallen out of fashion, since they have tended to scale poorly and to exact a high price for questionable benefits. In the future, several factors may bring such analyses to the fore again: the demand for reliable software; the availability of vast computational resources; and, perhaps most significantly, the exploitation of abstract models, to focus analysis effort where the payoff is greatest and to enable modular reasoning on a large scale.

Richard Wheeler Associates is working in partnership with start-ups at the forefront of source code analysis, software testing and automation.