Digital Information Design (DID) and Effectiveness and Efficiency

First, what is the difference between effectiveness and efficiency (E&E), we hear you cry? Well, certain presidents have effectively duped half of a nation into voting for them and then been entirely useless at putting into practice any efficient governing mechanism, other than one that syphons money from the poor to the rich.

Or, put another way: it is possible to be effective in proposing or even managing a change, but a failure when it comes to delivering useful outcomes. The satisfaction of the customer or user is the major indicator of effectiveness. In other words, focus on the customer experience. Some common Key Performance Indicators (KPIs) for E&E are discussed in this blog-ette.

Effectiveness vs. Efficiency

While efficiency refers to how well something is done, effectiveness refers to how useful something is. Thus, a plane is a very effective form of transportation, able to move people across long distances, to specific places, in miserable conditions and with the added ‘value’ of lousy food; but it may be considered inefficient in how it uses fuel. You can clear your conscience, of course, by paying a few more pounds/euros to the airline to show that you understand the problem of the carbon dioxide blasted into the stratosphere… in the full knowledge that the money will have no impact whatsoever on global warming. But your conscience is clear, so we can move on…

Effectiveness, then, is about doing the right task; efficiency is about doing things in an optimal way, for example in the quickest or least expensive way. It could be the wrong thing to do, of course, but at least it was done optimally; that should make you feel better. Too often, though, the ‘needs’ of managing information are not given the same consideration as the need to be efficient (or ‘agile’, as some mistakenly label applications development that is expected to be rapid no matter how complex the information being used).

Productivity implies effectiveness and efficiency in individual and enterprise performance. Effectiveness is also the achievement of the objectives/requirements in the DID model. Efficiency is then the achievement of those ends using the minimum amount of resources and activities (also illustrated in the model). All public sector organisations are under pressure to perform; the private sector is always under pressure from Wall Street or shareholders. Recent government initiatives have focused on the need for all public sector bodies to deliver greater efficiency, ensuring that the most effective results are obtained from available resources. The fiasco of the UK government ‘test and trace’ (T&T) system (‘world-beating digital design’ was the mantra…) is a first-class example of the effectiveness of an application being sacrificed to the speed of its development. Incompetent design? Yes. What about incompetent management? Yes again. Add to that outsourcing to a supplier with no knowledge of the UK NHS and, really, what did you expect?
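The two definitions can be put side by side as simple ratios. A minimal sketch in Python, with entirely invented figures (the variable names and numbers are illustrative only, not taken from the DID model):

```python
# Effectiveness: how many of the stated objectives were actually achieved.
# Efficiency: how much useful output was delivered per unit of resource.
# All figures below are invented for illustration.
objectives_met, objectives_set = 3, 5
output_delivered, resources_used = 120, 200  # e.g. cases handled vs staff-hours

effectiveness = objectives_met / objectives_set
efficiency = output_delivered / resources_used

print(f"effectiveness = {effectiveness:.0%}")                   # share of objectives achieved
print(f"efficiency = {efficiency:.2f} output/unit of resource")  # yield per resource spent
```

Note that the two numbers can move independently: you can hit every objective wastefully, or deliver cheaply while missing the point entirely.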

Programmes of change aimed at meeting digital performance targets will need to pay particular attention to the review phase and to the evaluation of programme outcomes, through the testing, review and validation activities. The enterprise will be concerned to establish that the target performance levels have been achieved; but some thought should also be given to establishing the governance role of the digital change programme in meeting the objectives.

In complex programmes of organisational change (almost any large-scale information processing project, even those where, ostensibly, the task seems simple, such as finding people to tell them they have been in contact with someone else…), it can be difficult or even impossible to establish, after the event, that specific activities or policies in the change programme gave rise to identifiable and quantifiable improvements. Chains of cause and effect must be carefully considered in planning the change, and monitored during implementation. The approach to measuring performance improvements and other objectives must be considered as part of the planning of the programme, not tacked on later as an afterthought. Review and validation is an activity in the model that ensures the objectives and requirements described in the improvement (the change) are being fully developed. These improvements must have the desired impact when in operation, and are an indicator of effectiveness and efficiency.

Effectiveness and efficiency in the real world….

Let’s use a service desk as an example to discuss effectiveness and efficiency. A commonly cited example of effectiveness is the ‘ITIL compliant’ service desk, where some genius decided that the best measures of effectiveness involved the speed with which a phone call was answered. For example, was the telephone answered quickly (e.g. 90% of calls answered within X seconds)? This led to quick fixes and repeated calls, because no one did anything more than try to meet the target of answering a call. Who cared if the answer was correct? And this brilliant concept was copied, almost word for word, in many other best practices, many of them related to business information management.
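The perverse incentive is easy to see if a speed measure and an effectiveness measure are computed side by side. A sketch in Python, with invented call records; the 20-second target and the field names are assumptions for illustration, not anything ITIL prescribes:

```python
# Sketch of two service-desk KPIs over made-up call records.
# 'answer_sec' is time to answer; 'resolved' is whether the caller's
# problem was actually fixed on first contact (the effectiveness side).
calls = [
    {"answer_sec": 8,  "resolved": True},
    {"answer_sec": 12, "resolved": False},
    {"answer_sec": 5,  "resolved": False},
    {"answer_sec": 30, "resolved": True},
    {"answer_sec": 9,  "resolved": False},
]

TARGET_SEC = 20  # the arbitrary 'X seconds' target

speed_kpi = sum(c["answer_sec"] <= TARGET_SEC for c in calls) / len(calls)
resolution_kpi = sum(c["resolved"] for c in calls) / len(calls)

print(f"Answered within {TARGET_SEC}s: {speed_kpi:.0%}")   # looks great
print(f"Resolved on first contact: {resolution_kpi:.0%}")  # looks terrible
```

With these invented records the desk hits 80% on speed while resolving only 40% of calls, which is exactly the ‘answer fast, fix nothing’ pattern described above.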

Other advice is equally dubious……

Are calls routed to second-level support within X minutes (if they cannot be resolved at the Service Desk)? Where did ‘X’ come from? Is the figure based on what a customer considers acceptable? What is the goal of second-level support? Is it customer based?

Is the service restored within an acceptable time and in accordance with the SLA? SLAs rarely reflected business need; IT, generally speaking, ripped off a previously published ‘ITIL’ SLA that applied to no one in particular and applied it generically to everything in the IT universe, resulting in very irritated business customers.

Are users advised in time about current and future changes and errors? What did ‘in time’ actually mean? The use of loose phraseology and generic SLAs resulted in poor service and outsourcing.

Some performance indicators can only be measured by means of a customer survey, e.g.:

Is the telephone answered courteously?

Are users given good advice on how to prevent incidents?

These sorts of questions were more open, though yet again somewhat loose in phrasing. Your good service is my crappy service desk…
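Loosely phrased or not, survey indicators like the two above can at least be aggregated consistently. A small sketch, with invented questions and 1–5 ratings:

```python
# Sketch: aggregating 1-5 survey ratings per question.
# The questions echo the text above; the responses are invented.
from statistics import mean

responses = {
    "Is the telephone answered courteously?": [5, 4, 2, 5, 3],
    "Are users given good advice on how to prevent incidents?": [2, 1, 3, 2, 2],
}

averages = {question: mean(scores) for question, scores in responses.items()}

for question, avg in averages.items():
    print(f"{avg:.1f}/5  {question}")
```

An average alone hides whose ‘good service’ is whose ‘crappy service desk’, so in practice you would also want the spread of responses, not just the mean.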

Any service desk focusing only on the IT systems delivering the services will be weak when it comes to issues regarding information processing. Think clearly: if your business involves IT services delivered to people, then think about what issues those people are most likely to bring to the attention of a service desk. They won’t be asking about capacity management issues, that’s for certain. How about interviewing your customers, so that when designing digital information services you know you are including their requirements for what they wish to experience when using those services?

Building a performance framework related to the customer experience is necessary no matter what the focus of a performance management exercise: overall effectiveness, or efficiency of resolution. You may be seeking to assess the performance of a business unit, a small department, a team, a contract or an individual; the attributes of a good performance framework are the same regardless of scale or scope. A test and trace application that actually met the needs of a traumatised general public, built by specialists with knowledge of NHS information systems, might just have been an improvement on a cobbled-together aggregate of freeware and guesswork.

No doubt ‘speed’ was considered a primary concern with T&T, but surely effectiveness should have been a key criterion of the digital design.

Why measure performance?

Performance measures lie at the heart of demonstrating effectiveness and efficiency. They provide the first, vital link in a chain that leads on to better services, improved business models and the realisation of outcomes. Measurements provide the foundations for improvement:

you could be rewarding failure if you can’t identify and reward success

what does success actually look like? And to whom? When you cannot see success, you can’t learn from it

if you can’t recognise failure, you can’t correct it

if you can demonstrate success, you can win support – from management, customers and, in the case of the public sector, the citizen.

It is a fact that what gets measured gets done
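One way to keep success and failure visible is to record every measure alongside the objective it serves and the target it is judged against. A minimal sketch of such a record in Python; the class, field names and figures are illustrative only, not taken from any published framework:

```python
# Sketch of a minimal performance-measure record: each measure is tied
# to an objective and a target so success and failure are both visible.
# Names and figures are invented for illustration.
from dataclasses import dataclass

@dataclass
class Measure:
    objective: str   # what 'success' means, and to whom
    metric: str      # what is actually measured
    target: float
    actual: float

    def met(self) -> bool:
        return self.actual >= self.target

measures = [
    Measure("Callers get a correct fix", "first-contact resolution rate", 0.80, 0.40),
    Measure("Calls answered promptly", "share answered within 20s", 0.90, 0.95),
]

for m in measures:
    status = "success" if m.met() else "FAILURE"
    print(f"{m.metric}: {m.actual:.0%} vs target {m.target:.0%} -> {status}")
```

The same structure works at any scale, for a team, a contract or an individual; what matters is that the objective travels with the number, so a hit target on the wrong measure is not mistaken for success.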

And always remember that all measures, like all statistics, are information that needs to be properly processed… DID anyone?