Rui Lopes Rodrigues | Test Specialist Leader | everis Brazil

Intelligence in software quality evolution

I’ve been working in software development for some time. For the last ten years, I have been strongly focused on processes, testing, and quality, with a more technical perspective, which reflects my background: I was a developer for over ten years before shifting my point of view.

Younger people like me have had the opportunity to see deep changes in the entire software creation process, but especially in quality.

Expectations have changed a lot. Not long ago, two weeks was the time it took just to schedule the first feature-planning meetings for a new system; today, it is the time expected to deliver a minimum viable product. These shorter deadlines put great pressure on quality processes, which need to deliver immediately and reliably.

There was not such a strong dependence on software. Thirty-five years ago, in 1985, bank authentication machines were electromechanical. Transactions were recorded on paper tape, which kept a copy of each authentication carried out; checking-account balances arrived on printed listings at the bank branch in the morning, and for each debit or credit, employees made a pen note on the list. On the other hand, if the power went out, banks could keep working, with tellers using cranks to operate the authenticators. Nowadays, power failures (or any other showstoppers) mean basically chaos, because we cannot circulate information at the speed current requirements demand. A huge pressure on quality.

Another source of pressure is the availability of other options. To put it simply and specifically, if your application fails or is slow, there is a huge risk that the user will uninstall it and look for an alternative from a competitor. The chances of them finding this alternative very easily are also huge.

I am not saying that only quality has been affected. The entire development process had to evolve in a frantic way, and fortunately it did. I will limit the scope of this conversation, however, to quality; even so, we must keep a somewhat high level of abstraction so we are able to, within the scope of an article, discuss the possibilities of applying intelligent methods, approaches, and processes to software quality.

It is a fact that we have come a long way, but there is still a lot to be done, and the perspective is that the speed of change will keep increasing and require more and more efficiency.

In terms of intelligence in quality efforts, some topics come to mind:

· Investments have to be adapted to the needs of the business. The needs for a hotsite that will be active for four years, a CRM, and an ICU monitoring software are totally different.

· According to the characteristics of my business, I need to invest more where the cost effectiveness is higher.

· Feedback from quality has to be lightweight (executable whenever necessary) so that it is frequent and surfaces questions and answers when required.

· Relevant information must reach its destination quickly.

· Production failures must be proportional to the tolerance of each business. If there is no tolerance, there can be no failure.

· Failure and inefficiency in quality: who watches the watchers?

· Repeatability: the regression of features cannot exist; a new feature cannot make an existing one stop working.

Taking all of these points into account, the first word that crosses my mind (and I hope yours too) is automation. It is necessary and fundamental; without it, none of these needs can be successfully addressed. In addition to being necessary and fundamental, it is also demanding. When we talk about quality automation, we need to think about the needs of your business and your system in several different dimensions. For example:

· Automation of functional and non-functional tests: normally, the first point that comes to mind about quality automation. There are several different types and levels of validations and checks applicable to different situations and contexts. The cost effectiveness of each type and level of test automation varies widely, and we need to keep in mind that we should do with the most “expensive” tests only what cannot be done otherwise. To be simplistic, this is the summary of the test pyramid theory, presented by Mike Cohn in 2009 and brilliantly defended by Martin Fowler in several articles. Unfortunately, it is still a topic on which many good people make mistakes and end up consuming more time and resources than necessary to validate their systems.
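A minimal sketch of the pyramid’s logic, using a hypothetical pricing rule (the function and values are my own illustration, not from any real system): business rules like this belong in many fast, isolated unit checks at the base of the pyramid, leaving only the integrated flow for the far more “expensive” end-to-end tests at the top.

```python
# Hypothetical example: the test pyramid says a business rule like this
# should be exercised by cheap unit tests, not by end-to-end UI scenarios.

def apply_discount(price: float, percent: float) -> float:
    """Apply a percentage discount, clamped to the 0-100 range."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

# Base of the pyramid: many fast, isolated checks of the rule itself.
assert apply_discount(200.0, 10) == 180.0
assert apply_discount(99.99, 0) == 99.99
assert apply_discount(50.0, 100) == 0.0

# Only the integrated happy path (browse, cart, checkout) would justify
# a slower end-to-end test sitting on top of checks like these.
```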

· Automating the software construction and publication cycle: no wonder people talk so much about DevOps, DevSecOps, SRE… This is where we run code quality checks, verify adherence to standards, execute the build in an automated way, separately from the developer and their machine, and perform the appropriate tests to quickly answer the question “did anything break that shouldn’t be broken in this build?” This is critical for the quick feedback we mentioned earlier to be feasible. With a fine-tuned process, our technical debt does not grow, answers about quality arrive before questions, and time to market is reduced in an impressive way. However, it can be a resource drain and a bottleneck if it is poorly deployed.
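The heart of that cycle is a fail-fast gate: stages run in a fixed order, and the first failure stops everything so feedback reaches the team immediately. A sketch of that idea, with stage names and callables that are purely illustrative (not a real CI tool’s API):

```python
# Fail-fast pipeline gate: each stage (lint, build, test) runs in order,
# and the first failure stops the pipeline so feedback arrives at once.
from typing import Callable

def run_pipeline(stages: dict[str, Callable[[], bool]]) -> tuple[bool, list[str]]:
    """Run stages in order; return overall status and the stages executed."""
    executed = []
    for name, stage in stages.items():
        executed.append(name)
        if not stage():          # fail fast: later stages never run
            return False, executed
    return True, executed

ok, ran = run_pipeline({
    "lint":  lambda: True,   # code-quality and standards checks
    "build": lambda: False,  # simulated broken build
    "test":  lambda: True,   # would answer "did anything break?"
})
# ok is False, and ran stops at "build"; "test" never executes.
```

In a real pipeline each stage would shell out to the actual tools; the point of the sketch is the ordering and the early stop.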

· Automating results collection: what is not measured, informed, and repeatable does not exist. Did it get better or worse? How many failures occurred during development, homologation, and in production? Which system areas are more vulnerable and cause more problems? Is the number of issues directly related to the volume of deliverables, or are there other issues involved?

This collection of results becomes even more important when we think of a universe that begins to benefit from artificial intelligence. AI processes are built with data; without historical data, there are no insights, there is no training of intelligent models, there are no results.
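A minimal sketch of what such collection can look like, with hypothetical defect records (field names invented for illustration) aggregated by system area and by phase, exactly the kind of historical data that later feeds intelligent models:

```python
# Toy results collection: aggregate defect records by area and phase.
# The records and field names are illustrative, not a real tracker's schema.
from collections import Counter

defects = [
    {"area": "payments", "phase": "development"},
    {"area": "payments", "phase": "production"},
    {"area": "login",    "phase": "homologation"},
    {"area": "payments", "phase": "homologation"},
]

by_area = Counter(d["area"] for d in defects)    # which areas hurt most?
by_phase = Counter(d["phase"] for d in defects)  # where are we catching them?

# "payments" stands out as the most vulnerable area in this sample.
print(by_area.most_common(1))   # [('payments', 3)]
```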

· Automating the generation of test cases: it still looks somewhat like science fiction, but it is already feasible to make the generation of test cases easier using artificial intelligence. At everis, we already have tools to support us in actually converting natural language into automated tests.

· Automating data mass generation: those who have worked with tests or development for more than two weeks and have never had problems with data mass may cast the first stone. Whether it is a backup or a snapshot, a virtualization or a masked extraction tool, an automatic generator or a specialized team: everyone needs some type of support to generate their data masses, otherwise checks and validation are not feasible.
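One of the simplest forms of support is a deterministic synthetic generator: the same seed always yields the same data mass, so tests are repeatable across environments and no real customer data is ever involved. A stdlib-only sketch (field names and formats are my own assumptions):

```python
# Toy synthetic data-mass generator: repeatable records for tests,
# with no real customer data. Fields and formats are illustrative.
import random

def generate_customers(n: int, seed: int = 42) -> list[dict]:
    rng = random.Random(seed)  # fixed seed => identical mass on every run
    first = ["Ana", "Bruno", "Carla", "Diego"]
    last = ["Silva", "Souza", "Lima", "Costa"]
    return [
        {
            "id": i,
            "name": f"{rng.choice(first)} {rng.choice(last)}",
            # masked document number: format preserved, digits random
            "cpf": "".join(str(rng.randint(0, 9)) for _ in range(11)),
        }
        for i in range(1, n + 1)
    ]

mass = generate_customers(3)
# Same seed, same mass: checks and validations become repeatable.
assert mass == generate_customers(3)
```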

· Automating environments: the same question as above. Who, in the field, has never had problems when creating environments? Many people try to argue that having environments to run the tests is very expensive, so it is not feasible; have they ever thought about the cost of not running the tests? Think about it. Whether in the cloud, in containers, on premises, in virtual machines, or with virtualized services… What is the “flavor” of environment automation that perfectly meets your needs?

· Automating security checks: a specialized subtype of functional testing, but one that deserves special attention. How valuable are your data and your system, and what would their loss cost? If not much, ignore this concern. Otherwise… if you are not taking any concrete action, you should be worried. It’s a jungle out there.

I had no intention of closing the topic. My goal is to bring forth the fact that there are many possibilities and several different approaches to make the quality process more efficient.

The good news is that there are many alternatives for us to evaluate, some are more recent and others are very old. I wanted to provide answers that apply in some scenarios with each of the dimensions above. We must, however, check the synergy between our choices for each dimension, according to the characteristics of the system under test.

Finally, you may say: “But, Rui, we are talking about quality and you only talk about automation. Is there no value in manual tests anymore?” Of course there is. They are fundamental in exploratory testing, in those (few) situations where automation is not possible, and to add the experienced outlook of a tester used to locating problems. However, I place manual tests in the context of the test pyramid I mentioned earlier: they are the most expensive tests we have on the “menu” and should only be used where necessary.

Smile, tomorrow will be difficult, with more challenging needs. That is what keeps the quality (and software) world so fun.