Quality On Demand

Director at Interwoven Software Services, Bangalore, India
Probably no other industry has mangled the definition of quality as much as the software industry. Personally, I find it very difficult not to scream in an interview when I ask someone for their definition of quality and they come up with "Fitness for use, as defined by the customer". It is probably something some very brainy quality guru came up with, but if I pick up a high-quality product and try to apply this definition, I am less than happy about how it would have helped in coming up with that product in the first place.

Anyway, this post is not about the definition of quality. In it, I examine the model of quality as it applies specifically to a SaaS business. Since SaaS is software hosted by a provider and operated from a single location, it has different implications as far as quality is concerned.

In general, the software industry equates bugs and quality inversely: the accepted approach is to find all the bugs, and you will have delivered a high-quality product. Obviously, it is difficult to find all the bugs. So, in general, you find as many as you can afford to find and make the software available for use. You then have an approach to deal with the bugs you did not find.

The find-release-fix steps that you follow actually do have an industry-specific slant to them, especially when examined in the context of Software as a Service, or SaaS, as we are now used to saying. I mentioned that you'll find as many bugs as you can afford to find, and it is also true that you will release as many bugs as you can afford to release. There is a cost associated with bad quality, in terms of financial loss, loss of reputation, contractual costs, damage to further business prospects, and so on. On the other hand, there is also a cost associated with good quality: the time and effort taken to ensure that things work the way they are meant to.

Bugs are not always the end of the world; there are often ways to recover to a certain extent, if not fully, depending on what happened. Depending on the context, the impact of recovery could vary and have different implications for your customers. For example, a bug in a product that gets sold to multiple customers needs to have its fix distributed to all, many or some of them. This implies the maintenance of a tracking mechanism, guarantees and service levels. In contrast, a program written for a single customer who discovers a bug requires just a single fix, delivered to just one customer. The impact here, though, is that your only user had a problem, affecting the perception of the quality you provide, as against the product scenario, where not all your customers may have encountered the problem.

Now take the SaaS scenario. Since you operate the software yourself, you have the ability to "encounter" the bug as soon as it happens (if you have the proper systems in place). You can then follow up with a fix, possibly even before other customers notice.


Quality is extremely context sensitive, so please read everything written so far, and everything that follows, in the context specified here, and apply it judiciously to whatever context you encounter elsewhere. What I am driving at is that the balance of investment in quality versus return differs across all these contexts, and in general, SaaS companies, service companies and product companies tend to occupy regions that cluster together. This is not a hard and fast rule, and the context drives the associated exceptions.

Suppose we were able to graph the features of our software against the number of users, sorted by the number of users in descending order. And suppose we were also able to plot the number of defects in each feature on the same graph. Such a graph might look something like this (I'm sorry I couldn't upload the picture inline, a limitation of this blog editor; please click to see the picture).
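To make that a little more concrete, here is a minimal sketch of how such a graph could be drawn. The feature names and numbers are entirely made up for illustration, since the original picture isn't reproduced here:

    # Minimal sketch of the graph described above, with made-up numbers.
    # Features are sorted by user count (descending); defect counts are overlaid.
    import matplotlib.pyplot as plt

    features = ["Login", "Search", "Reports", "Export", "Admin"]  # hypothetical features
    users    = [9500, 7200, 3100, 1200, 300]                      # users per feature, descending
    defects  = [4, 9, 15, 22, 30]                                 # open defects per feature

    fig, ax_users = plt.subplots()
    ax_users.bar(features, users, color="steelblue")
    ax_users.set_ylabel("Number of users")

    ax_defects = ax_users.twinx()                                 # second y-axis for defects
    ax_defects.plot(features, defects, color="firebrick", marker="o")
    ax_defects.set_ylabel("Number of defects")

    plt.title("Features (sorted by usage) vs. users and defects")
    plt.show()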

There is no hard and fast rule about what this graph should look like. It probably looks different at different points in time, even for the same software project. In general, there is a certain number of bugs embedded in every feature. Over time, bugs are reduced, and depending on the approach, the most commonly used features or the most critical features are made as bug-free as possible before releasing the product. Obviously, the most critical features are not necessarily the most commonly used features, and that too is part of what I earlier called context.

In general, this graph would transition through a set of shapes as illustrated here (please click for the image). During development, there would be the maximum number of bugs, which would get removed over time until the release decision is made. At some stage, the software would transition from development quality to usable quality, where the basic, most commonly used or most critical features, or some combination across these heads, are working. Then the quality would transition to a point where most of the requirements work the way they were intended to. Finally, a high point of quality would be reached, when the software is as close to perfect as possible.

The release decision could happen at any stage in the development-usable-working-perfect transitions. Ultimately, this is decided by context. There are some things common to the contexts of SaaS, product and service software that bias the release towards a specific point. SaaS tends to gravitate towards "usable", because that's when it starts benefiting someone, and the cost of fixing post-release is low. Services tend to gravitate towards "works", because of the contractual minimum quality on the one hand and the budget on the other. Products tend towards as much "good" as possible, due to the high cost of distributing fixes. Again, this is determined by context; it is not a hard and fast rule. A SaaS provider may want a minimum quality in place so as to retain existing users and manage its reputation, and so may consciously move to "works" instead of just "usable". A service project may put out a "usable" prototype to get early feedback from a customer. It may also decide to invest in "good" quality, with the hope of winning future business. An open source product often does pre-1.0 releases that stretch the definition of "usable" towards "dev".

This can be consciously factored into the testing strategy for a project. We all know that, in general, the time taken to find all the bugs in any software is close to forever. So rather than targeting what is not working, the opposite strategy of certifying what is working creates a more static target for testing. The release objective per feature is modified to release when:

  • I can do A
  • I can do A when B
  • I can do A when B, C and D


Rather than: I'll try to find out under what circumstances A can and cannot be done. This includes trying out B, C, D, E, ... Z. So there's no telling when we are done, and there's no predicting how many defects we may find (so don't go and commit to a release date).
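Here is a hypothetical sketch of what such positive certification tests might look like in practice. The feature, the myapp module and its login/create_report helpers are invented purely for illustration, not taken from any real project:

    # Positive-certification tests: each test asserts "I can do A (when B, C...)",
    # rather than exploring every circumstance under which A might fail.
    # These would typically be collected and run by a test runner such as pytest.
    from myapp import create_report, login   # hypothetical application module

    def test_i_can_create_a_report():
        # "I can do A"
        report = create_report(title="Monthly sales")
        assert report.saved

    def test_i_can_create_a_report_when_logged_in_as_viewer():
        # "I can do A when B"
        session = login(user="viewer", password="secret")
        report = create_report(title="Monthly sales", session=session)
        assert report.saved

    def test_i_can_create_a_report_when_logged_in_and_data_is_empty():
        # "I can do A when B and C"
        session = login(user="viewer", password="secret")
        report = create_report(title="Monthly sales", session=session, rows=[])
        assert report.saved

The suite is finite by construction: it enumerates the circumstances you commit to supporting, rather than chasing the open-ended list of circumstances that might break.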


Quality professionals could map this to an approach that has a gradation from acceptance to alpha to beta testing. Once all acceptance tests pass, you enter "usable"; once all alpha tests pass, the software "works"; and once all beta tests pass, you have high quality. Or at least you should, if you do everything right.
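As a sketch of that mapping, a simple release gate could look something like the following. The stage names follow the dev-usable-works-good transitions above; treating each tier as a single pass/fail flag is my own simplification:

    # Hypothetical release gate: maps which test tiers pass to the quality stages
    # discussed above (dev -> usable -> works -> good).
    def quality_stage(acceptance_passed: bool, alpha_passed: bool, beta_passed: bool) -> str:
        if acceptance_passed and alpha_passed and beta_passed:
            return "good"    # close to perfect; typical target for shrink-wrapped products
        if acceptance_passed and alpha_passed:
            return "works"   # most requirements behave as intended; typical services target
        if acceptance_passed:
            return "usable"  # basic/critical features work; a common SaaS release point
        return "dev"         # still development quality

    print(quality_stage(acceptance_passed=True, alpha_passed=True, beta_passed=False))  # -> "works"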

Quality is indicated by an increasing number of bars, starting from build qualification onwards and running through different manual and automated assessment cycles. Since an effectively infinite amount of resource would be required to get a decent-sized software project to perfection, it is important to control the investment in quality against a desired target.
