Quality Assuring Online Solutions

4 steps to quality assurance

Step 1: Find your focus area

Testing and quality assuring every part of your online solution simply isn't profitable, which is why you should choose an area to focus on. The first step is therefore crucial: clarify which parts of your website or system are the most business critical.

Make sure you discuss it with several relevant parties in the organisation so that you get a broad perspective of the business and the role played by the platform.

We recommend starting the process with a brainstorm - and later narrowing it down to a specific, prioritised list of functionality and parts of the solution.

It's really important to have a broad perspective to avoid a blinkered process.

Step 2: Define your quality factors

Once you have put together a specific list of priorities for the company's online solution, you should define quality factors against which you can assess those priorities.

For example:

  • How high a conversion rate should the checkout flow have?
  • How fast should the API respond, and is the expected data returned? Even in borderline cases?
  • When should the alarm trigger - and when should it definitely not trigger?
  • How long should the website's load time be? And does it differ between desktop and mobile?
  • How fast should the search respond?

Setting specific figures can be challenging if you have nothing to compare with. But it's important in order to achieve measurable results in the subsequent optimisation process and to justify the investment. In many cases, analyses, statistics and practical experience are available that provide specific recommendations for e.g. performance and the optimum load time for the best user experience.
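To make the quality factors actionable, it helps to record them as measurable thresholds that test results can be checked against automatically. A minimal sketch in Python, where all the figures are illustrative assumptions rather than recommendations:

```python
# Quality factors as measurable thresholds. All figures below are
# illustrative assumptions - replace them with your own targets.
QUALITY_FACTORS = {
    # factor: (threshold, direction) - "min" passes when measured >= threshold,
    # "max" passes when measured <= threshold.
    "checkout_conversion_rate": (0.03, "min"),   # at least 3% of sessions convert
    "api_response_ms": (200, "max"),             # API answers within 200 ms
    "page_load_ms": (2000, "max"),               # full page load under 2 s
    "search_response_ms": (500, "max"),          # search results within 0.5 s
}

def evaluate(measurements: dict) -> dict:
    """Compare measured values against the thresholds; return pass/fail per factor."""
    results = {}
    for factor, (threshold, direction) in QUALITY_FACTORS.items():
        value = measurements[factor]
        results[factor] = value >= threshold if direction == "min" else value <= threshold
    return results
```

A factor passes when the measured value lands on the right side of its threshold; the direction entry says which side that is.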

Please contact us to hear which recommendations might be relevant for your solution.

Step 3: Choose your test methods

Next, you should pick those test methods that provide the most focused, measurable and effective testing of the previously chosen quality factors. 

As in step 1, we recommend starting with a brainstorm of all the methods that might be relevant for the quality factors. Next, prioritise and pick the methods. You could base this prioritisation on factors like effect, contribution ratio, time required, available resources, etc.

We use a number of different test methods, depending on the project's focus area. These include e.g. unit testing, API testing, load and stress testing, SEO testing, peer review, functionality testing, browser/device testing, etc. Tests can be done as black-box or white-box tests, i.e. without or with knowledge of the architecture and set-up of the underlying system. Some are best done manually, while others can be automated. Several test methods require knowledge of specific tools or methods, and often in-depth knowledge of the web solution's code.
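As an illustration of API testing with borderline cases, the sketch below exercises a stand-in pricing function. `get_price` is a hypothetical example, not a real API; in practice you would issue HTTP requests to the actual endpoint and assert on the responses in the same way:

```python
def get_price(quantity: int) -> float:
    """Stand-in for the API under test: unit price 10.0, bulk price 9.0 from 100 units.
    A hypothetical example - replace with a call to the real endpoint."""
    if quantity < 0:
        raise ValueError("quantity must be non-negative")
    unit = 9.0 if quantity >= 100 else 10.0
    return quantity * unit

def test_api_borderline_cases():
    # The interesting cases sit at the edges, not in the middle of the range.
    assert get_price(0) == 0.0        # empty order
    assert get_price(99) == 990.0     # just below the discount threshold
    assert get_price(100) == 900.0    # discount kicks in exactly at 100
    try:
        get_price(-1)                 # invalid input must be rejected
        assert False, "expected ValueError"
    except ValueError:
        pass

test_api_borderline_cases()
```

Checking both sides of every threshold, plus invalid input, is what the "even in borderline cases?" question from step 2 boils down to in practice.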

It's also important to be aware that for projects under development, quality should be prioritised as early in the process as possible to have the biggest effect. For example, unit testing is conducted at the same time as the system development to safeguard business-critical functionality. Ensuring high performance requires prioritising the choice of system architecture right from the project's start. Equally, a high conversion rate depends on design choices, which are made early on in the development process.

We therefore recommend making the decision about which test methods to use, and how to conduct them, together with your system supplier.

Step 4: Testing, measuring and evaluating

The final step is to conduct the chosen tests, measure the quality factors and evaluate the status. Depending on the test method chosen, this step can be done once to provide a snapshot and uncover any problem areas that need fixing. Or it can be done iteratively, allowing you to keep a regular check on ongoing optimisation and improvement. The approach you choose really depends on the project's focus area and the relevant quality factors.
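For the iterative approach, repeated measurements give a far more reliable picture than a single run. A minimal Python sketch that times an operation several times and reports the 95th percentile, so one slow outlier doesn't skew the result (`operation` is a placeholder for whatever you measure, e.g. a search query):

```python
import time

def measure_p95(operation, runs: int = 20) -> float:
    """Run `operation` repeatedly and return the 95th-percentile duration in ms.
    `operation` is a placeholder for the thing being measured."""
    samples = []
    for _ in range(runs):
        start = time.perf_counter()
        operation()
        samples.append((time.perf_counter() - start) * 1000)  # seconds -> ms
    samples.sort()
    # Nearest-rank percentile over the sorted samples.
    return samples[int(0.95 * (len(samples) - 1))]
```

The resulting figure can then be compared directly against the threshold defined for that quality factor in step 2.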

Once you have completed the chosen tests and gathered specific, measurable values, you need to evaluate the test results. This should be done in order of priority of the functions you picked in step 1. Then decide which measures are the most profitable to prioritise. 

For example:

  • Does the alarm in the system trigger at the correct thresholds in all expected situations?
  • Does the checkout flow meet the expected conversion rates, or are improvements needed?
  • Does the performance of the API, website or search need optimising?
  • Where are the bottlenecks for performance?
  • Which findings are most critical in relation to the focus area and the quality factors?
  • How much investment is needed to optimise these findings?
  • And what is the expected effect of the optimisation? 

Quality assurance - a good investment

The scope of resource use and investment in the quality assurance process depends on the methods used, and also on how comprehensive these methods and their subsequent optimisation are. This in turn depends on the relevant quality factors and, finally, on the business focus of your web solution. 

The earlier in the development process you choose to focus and prioritise your measures, the greater the chance of finding any issues, and the smaller the consequences of those issues. Analyses show that the cost of fixing a problem can be up to twenty times higher the later it is discovered.

A problem could be anything from a bug to inappropriate choice of architecture or design. It's easy to imagine how expensive it would be to do a redesign once you've implemented and launched - compared with if you did it earlier on in the design phase. The same applies to other areas of the solution and functionality.

Quality assurance processes are an investment, and the cost and consequences of skipping quality assurance rise the longer you put it off.

So do you dare avoid it?