I ask this question to candidates in job interviews, knowing it is tricky to answer. Mostly I get correct answers, but they only scratch the surface of software quality measurement.
We relentlessly talk about having great software with no known bugs. Is this the only measure of software quality?
Maintainability, portability, reliability, performance, functionality, security, and usability are the key indicators of good-quality software.
In my experience of measuring software quality, there is no single metric we can take to measure it. Quality is the end result of a combination of various metrics across the stages of the SDLC.
Internal defects
These are the defects known at the time of release, surfaced during internal testing by development or QA personnel. The focus must be on addressing all defects raised during the software development lifecycle, so that no known defects pass through. However, most software releases are timeboxed, so lower-priority defects are sometimes left unaddressed.
Defects in production
These are the defects raised by customers, professional services, or the support team. The higher the number of these defects, the more the perceived quality of the software diminishes. A high rate of incoming defects indicates poor software quality, which negatively impacts the customer experience.
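One common way to combine these two defect counts into a single indicator is a defect escape rate: the fraction of all defects that slipped past internal testing into production. The article does not name this metric; the sketch below is an illustration, not a prescribed formula.

```python
def defect_escape_rate(internal_defects: int, production_defects: int) -> float:
    """Fraction of total defects that escaped to production.

    Lower is better: it means internal testing caught most issues
    before customers could hit them.
    """
    total = internal_defects + production_defects
    if total == 0:
        return 0.0
    return production_defects / total

# Example: 45 defects caught internally, 5 reported from production.
rate = defect_escape_rate(45, 5)
print(f"Defect escape rate: {rate:.0%}")  # -> Defect escape rate: 10%
```

Tracking this rate per release makes it easy to see whether internal testing is improving or whether more defects are leaking to customers over time.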
Static code analysis
This can be done manually through code reviews, or by using static analysis tools like SonarQube. There are also specialist tools for security code review, for example Coverity and FindSecBugs.
Having code review in place in the SDLC process feeds into better code management and ensures coding practices are followed.
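To illustrate the kind of check a static analysis tool performs, here is a minimal sketch using Python's standard `ast` module to flag bare `except:` clauses, a common code smell that tools like SonarQube report automatically. This is a toy single-rule checker, not a stand-in for a real tool.

```python
import ast

def find_bare_excepts(source: str) -> list[int]:
    """Return line numbers of bare `except:` clauses in the given source.

    A bare except swallows every exception (including KeyboardInterrupt),
    which is why static analysers flag it.
    """
    tree = ast.parse(source)
    return [
        node.lineno
        for node in ast.walk(tree)
        if isinstance(node, ast.ExceptHandler) and node.type is None
    ]

sample = """\
try:
    risky()
except:
    pass
"""
print(find_bare_excepts(sample))  # -> [3]
```

Real static analysers apply hundreds of such rules across the whole codebase on every commit, which is what makes them practical as a quality gate.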
Test coverage
Having test coverage at various testing levels and types feeds into software quality:
- Unit and integration test coverage – Unit test coverage can easily be measured using a tool like Clover. A process should be in place to enforce unit test coverage of the written code.
- System and UAT test coverage – This is tricky to measure at the code level. In my experience, this coverage is based on the number of test cases for the release, and can be assessed through a combination of automated and manual testing.
- Security, performance, and failover coverage – These are non-functional tests and commonly require expertise in various domains. These areas are key to quality and are sometimes mandatory in an industry domain. For example, PA-DSS is a mandatory compliance standard for payment processing applications.
- Usability – This indicates the ease of use of the software. Self-explanatory UIs help end users understand how the software works. Poorly designed UIs are one of the most frustrating factors degrading the perceived quality of the software.
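As a sketch of what line-coverage measurement actually does, the example below uses Python's standard `trace` module as a stand-in for tools like Clover or coverage.py: it runs a two-branch function with one input and shows that the untaken branch is never executed.

```python
import inspect
import trace

def grade(score: int) -> str:
    """Toy function with two branches, used to illustrate line coverage."""
    if score >= 50:
        return "pass"
    return "fail"

# Run the function under the stdlib tracer with line counting enabled.
tracer = trace.Trace(count=True, trace=False)
tracer.runfunc(grade, 75)  # exercises only the "pass" branch

# Work out which lines inside grade() actually ran.
source_lines, start = inspect.getsourcelines(grade)
body_lines = set(range(start, start + len(source_lines)))
fname = inspect.getfile(grade)
executed = {
    lineno
    for (filename, lineno) in tracer.results().counts
    if filename == fname and lineno in body_lines
}
# The `return "fail"` line never appears in the counts: that gap is
# exactly what a coverage report surfaces.
print(f"{len(executed)} of {len(source_lines)} source lines executed")
```

Real coverage tools aggregate these per-line counts across the whole test suite and can fail the build when coverage drops below an agreed threshold, which is how coverage is enforced in practice.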
Customer survey feedback
Annual survey feedback from customers directly indicates their experience of the software's quality. Here, software quality is not necessarily limited to bugs; it can cover ease of use, behaviour, defect turnaround time, support team engagement, and so on. The survey questions need to be clear enough to distinguish these different types of feedback that contribute to software quality.
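The article does not name a specific survey metric; as one common choice, a Net Promoter Score (NPS) can turn 0-10 survey answers into a single trackable number. A minimal sketch, assuming an NPS-style survey:

```python
def nps(scores: list[int]) -> float:
    """Net Promoter Score from 0-10 survey answers.

    NPS = % promoters (scores 9-10) minus % detractors (scores 0-6).
    Passives (7-8) count toward the total but neither add nor subtract.
    """
    if not scores:
        raise ValueError("no survey responses")
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return 100 * (promoters - detractors) / len(scores)

# Example: 3 promoters, 2 passives, 1 detractor across 6 responses.
print(nps([10, 9, 8, 7, 3, 10]))
```

Whatever metric is chosen, the point is the same: tracking the score release over release shows whether perceived quality is moving in the right direction.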
Other influential factors
- Team size, with appropriate roles in the team. For example, having a software architect and a business analyst (BA) significantly improves the design and architecture of the software.
- A network of external suppliers and third parties. For example, a good relationship or partnership with Visa and Mastercard gets you access to relevant, up-to-date specifications, periodic communication on the latest mandate changes, and so on.
- The various technologies used in development each have advantages over others. For example, OOP concepts reuse code better than structured programming concepts, which leads to more maintainable code.
- Architecture and design reviews of the software help detect defects in the early stages.
- Sales and pre-sales are the front line of software sales. They understand customers' needs and spread positive word about the software. The number of defects they raise also feeds into the quality pool.