Service levels are ubiquitous in the ICT industry. However, many service levels are riddled with problems that can be landmines in the path of a successful customer-supplier relationship. The last thing you want is deficient service levels for your SaaS (Software as a Service) or hosted solution, which could land you in disputes or require additional dollars to get the service you thought you were entitled to. Set out below are six service-level blunders to be avoided at the outset.
• Failure to target business outcomes
A long history of technology-orientated SLAs means service levels sometimes capture measures that aren’t meaningful to the business users. There’s too much focus on technology inputs (such as server availability) and not enough on the business outcomes that matter (such as the availability, accessibility and responsiveness of the overall solution, or the speed at which transactions are processed). Fail to target these outcomes and you risk the service provider achieving 100 per cent on the service levels, but abysmal results on the customer satisfaction survey.
• Ignoring the five key questions
Issues of who, what, where, when and how need to be addressed for each service level. Forget to work through this sort of detail and you may be signing off on a level of ambiguity that could come back to bite you. For example:
• What does “availability” actually mean? What if the application is still working (i.e. available), but the performance is severely degraded; should that be considered available?
• Where is availability measured? At the service provider’s datacentre, at the end-user’s terminal or somewhere in between? The difference is usually significant.
• When will the metric be measured, who will measure it and how? These questions need to be addressed to ensure the service level will actually work in practice and can’t be gamed by the service provider.
• Averages that camouflage
Averages can be misleading. For example, if the service provider has committed to fixing faults within an average of four hours per fault, is it acceptable that it fixes nine faults in one hour each, and the tenth fault in 31 hours? The average is still exactly four hours. Consider setting some upper and/or lower limits when averages are used; for example, you might state that “faults will be fixed in an average of four hours and, in any case, within 10 hours”.
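The arithmetic behind this example can be sketched briefly (the figures are the hypothetical ones from the article; the names are illustrative only):

```python
# Nine faults fixed in 1 hour each, one fault fixed in 31 hours.
fix_times = [1] * 9 + [31]

average = sum(fix_times) / len(fix_times)
worst = max(fix_times)

print(f"Average fix time: {average:.1f} hours")  # 4.0 -- average target met
print(f"Worst single fault: {worst} hours")      # 31 -- one user waited all day

# An upper limit ("in any case, within 10 hours") catches the outlier:
cap = 10
met_with_cap = average <= 4 and worst <= cap
print("Service level met once a cap is added:", met_with_cap)  # False
```

The point the sketch makes is that the average alone passes while one fault sits unresolved for 31 hours; only the added cap exposes it.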
• A measurement period that’s too tolerant
A long measurement period may unduly favour the service provider. For example, with an availability metric of 99 per cent, the difference between a measurement period of a month and a year is permitted “downtime” of around seven hours compared with about three-and-a-half days. If you opt for an annual measure the service provider may be able to stuff up service in January, but make up for it in February and March. Is that intended? For mission-critical services you would typically want to reduce the period of measurement.
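The permitted-downtime arithmetic is worth doing explicitly. A rough sketch, using a 30-day month and the article’s 99 per cent figure (the function name is illustrative):

```python
def downtime_budget_hours(availability_pct: float, period_hours: float) -> float:
    """Hours of downtime permitted over the period at the given availability."""
    return period_hours * (1 - availability_pct / 100)

monthly = downtime_budget_hours(99, 30 * 24)    # 1% of 720 hours
annual = downtime_budget_hours(99, 365 * 24)    # 1% of 8,760 hours

print(f"Monthly measurement period: {monthly:.1f} hours of downtime allowed")
print(f"Annual measurement period:  {annual / 24:.2f} days of downtime allowed")
```

At the same headline figure, a monthly period allows about 7.2 hours of downtime while an annual period allows roughly 3.65 days, which is exactly the gap the article describes.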
• Half-baked percentages
Are the percentage measures complete? If a service provider has committed to resolving 80 per cent of critical faults within four hours, what’s the commitment to the remaining 20 per cent? Consider using a two-step service level in these situations; for example, 80 per cent of critical faults resolved within four hours and 100 per cent of critical faults resolved within 12 hours.
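A two-step service level like this is straightforward to check mechanically. A minimal sketch, assuming the article’s figures (80 per cent within four hours, 100 per cent within 12; the function and thresholds are illustrative):

```python
def meets_two_step(resolution_hours, first_pct=80, first_cap=4, final_cap=12):
    """True if the required share of faults beat the first cap AND all beat the final cap."""
    within_first = sum(1 for h in resolution_hours if h <= first_cap)
    pct_first = 100 * within_first / len(resolution_hours)
    return pct_first >= first_pct and max(resolution_hours) <= final_cap

# Nine faults in 3 hours, one in 20: passes the 80% step, fails the 100% cap.
print(meets_two_step([3] * 9 + [20]))       # False
# Eight in 3 hours, two in 10-11 hours: 80% within 4h and all within 12h.
print(meets_two_step([3] * 8 + [10, 11]))   # True
```

Without the second step, the 20-hour fault in the first example would breach nothing, which is precisely the gap a half-baked percentage leaves open.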
• When best endeavours are not good enough
A service provider may insist the service levels are targets only, and it only needs to use reasonable or best endeavours to achieve them. The problem with this approach is it is unclear what exercising reasonable or best endeavours will mean in practice. In particular, there’s no certainty as to when “enough is enough” and the service provider has breached the service level. This makes managing performance difficult. If you have to live with a best-endeavours approach, consider inserting some minimum performance levels making it clear when the service provider will be in “material breach” of the agreement (giving rise to various remedies, such as termination).
Time invested at the outset in avoiding these sorts of blunders helps create a robust foundation for the relationship and can prevent unexpected costs and disputes.
Stuart van Rij is a solicitor at Wigley & Company, a law firm specialising in ICT. He can be reached at 06-499 1842 or firstname.lastname@example.org. Fiona Campbell, solicitor, contributed to this article. She can be reached at (04) 499 1843 or email@example.com.