I’m in the process of writing an article for TechRepublic.com on the declining importance of performance testing. Without getting into the article’s primary arguments, I came to a realization that I felt compelled to share.
Most IT professionals have no idea how to approach performance testing.
I mean that both in the sense that they couldn’t do the testing if they needed to and in the sense that they don’t truly understand how it works. The article for TechRepublic.com is, as most of my articles are, born out of experience. I’ve noticed that way too many organizations get hung up on performance. They avoid techniques which have been identified as non-performant but which make the development process much easier and therefore less costly.
So in the interest of educating folks, here is a high-level summary of what you should know about performance testing; the details will be in the article …
- Performance Testing – Performance testing is really a set of related evaluations of the system including: responsiveness, throughput, and scalability.
- Responsiveness – Responsiveness is how quickly the system can complete an individual transaction. As the throughput of the application goes up, responsiveness typically goes down.
- Throughput – Throughput is the rate at which transactions can be completed. Typically maximum throughput is the number that organizations are trying to find.
- Scalability – Scalability is the degree to which changes to the environment can increase the maximum throughput and, to a lesser extent, improve responsiveness. Different changes can result in radically different effects on throughput, especially when the scalability is directly related to a bottleneck.
- Bottleneck – A bottleneck is a performance constraint in the system which prevents the throughput from increasing. Typical candidates for bottlenecks are processor performance, available memory, disk performance, and network performance.
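To make the relationship between these measurements concrete, here is a minimal sketch of how you might measure throughput and responsiveness together. It assumes a stand-in `transaction()` function (hypothetical; in a real test this would exercise your actual system) and uses a thread pool to simulate concurrent load:

```python
import time
from concurrent.futures import ThreadPoolExecutor

def transaction():
    # Stand-in for a real unit of work (hypothetical); a real test
    # would call the system under test here.
    time.sleep(0.01)

def measure(num_transactions, workers):
    """Run transactions concurrently and return (throughput, avg_response).

    throughput   - completed transactions per second
    avg_response - average seconds taken by an individual transaction
    """
    latencies = []

    def timed():
        start = time.perf_counter()
        transaction()
        latencies.append(time.perf_counter() - start)

    wall_start = time.perf_counter()
    with ThreadPoolExecutor(max_workers=workers) as pool:
        for _ in range(num_transactions):
            pool.submit(timed)
    # Exiting the with-block waits for all submitted work to finish.
    elapsed = time.perf_counter() - wall_start

    return num_transactions / elapsed, sum(latencies) / len(latencies)
```

Running `measure` at increasing worker counts is one simple way to look for the knee in the curve: throughput climbs until a bottleneck is hit, while average response time starts to degrade.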
I hope this helps.