In my blog post “The Nine Keys to SharePoint Success,” I called out Planning Measurement as the number three key to success. In this blog post we’ll delve into what measurement is – and how to measure the right things.
Measurement is, at its heart, a standardized way of evaluating something. The essence of measurement is knowing that one road is longer than another based on the measurement of their lengths – without having to have traveled them both. Sometimes we conflate the usefulness of a measurement with the fundamentals of measurement, and confuse whether a measurement is “right” with whether it’s “useful.”
Central to the discussion about measurement are two broadly misunderstood terms: accuracy and precision. Precision refers to the repeatability of the result – multiple measurements of the same thing will result in tightly clustered values. Precision does not, however, tell you how close those values are to the real or true value – that’s accuracy. Accuracy is, in other words, the “rightness” of the results you get. If I throw four darts at a dartboard and hit the very top, very bottom, very left, and very right of the dartboard, my darts were accurate in that they average to the center. They are not, however, very precise because they’re scattered all over the board. Conversely, I could put four darts practically on top of each other on the rightmost edge of the board and I’d have precision without accuracy.
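The dartboard picture can be made concrete with a few lines of code. The sketch below is a hypothetical illustration of the distinction: it treats the mean of the dart positions as a proxy for accuracy (how close the average is to the bullseye) and the average distance from that mean as a proxy for precision (how tightly the throws cluster).

```python
import statistics

# Hypothetical dart positions (x, y) on a board centered at (0, 0).
# Accurate but imprecise: scattered widely, yet averaging to the center.
accurate = [(0, 10), (0, -10), (-10, 0), (10, 0)]
# Precise but inaccurate: tightly clustered, but far from the center.
precise = [(9.9, 0), (10.0, 0.1), (10.1, 0), (10.0, -0.1)]

def mean_point(darts):
    """Average position of the throws - a proxy for accuracy."""
    xs, ys = zip(*darts)
    return (statistics.mean(xs), statistics.mean(ys))

def spread(darts):
    """Average distance from the mean position - a proxy for (im)precision."""
    cx, cy = mean_point(darts)
    return statistics.mean(
        ((x - cx) ** 2 + (y - cy) ** 2) ** 0.5 for x, y in darts
    )

print(mean_point(accurate), spread(accurate))  # center of the board, large spread
print(mean_point(precise), spread(precise))    # far right of center, tiny spread
```

The “accurate” set averages to the bullseye despite its large spread; the “precise” set has almost no spread but averages well off-center.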
In our quest for precision we often overlook simple ideas that lead to accurate but less precise results. In the case of SharePoint, let’s assume we’re trying to determine the relative levels of activity on different days of the week. We could take the file sizes of the IIS request logs – rather than counting individual requests – and get a roughly accurate comparison between the days.
The precise answer would require that we count every request, eliminate those not caused by users, and establish daily request values. However, this level of precision is probably not necessary if we have a few weeks’ worth of logs to evaluate – the relative sizes of the log files will roughly track the request volume by day of the week. Measurement is as much about finding the level of accuracy you need as it is about avoiding unnecessary precision.
One of the interesting challenges of measurement is that in most cases it requires effort – effort that ostensibly takes away from production work. This raises the question, “How do you know what to measure?” Certainly some measurements will be clearly impractical to perform. In many more cases, however, the decision whether or not to measure is difficult.
Douglas Hubbard, author of How to Measure Anything, suggests the following, slightly complex, set of questions for evaluating a measurement:
- What is the decision this measurement is supposed to support?
- What is the definition of the thing being measured in terms of observable consequences?
- How, exactly, does this thing matter to the decision being asked?
- How much do you know about it now (i.e., what is your current level of uncertainty)?
- What is the value of additional information?
However, I believe these can be distilled into a simple litmus test: “Can I reasonably expect that I’ll make a decision based on the result of the measurement?” If you will never make a decision based on a measurement, then you probably don’t need to do it. You’ll note the word “reasonably” in the question. It’s there to extend the test to situations where you don’t know for sure what you’re going to get – and so measuring the results for at least a short time is called for. The word “reasonably” also constrains the analysis from the radical extremism we can sometimes fall into in a meeting. Sure, anything taken to an extreme would lead to a decision, but is that case even remotely likely?
Leading, Coincident, Lagging
When we’re measuring, we have to look at what we’re measuring not only from the perspective of value but also from the perspective of its place in time. Ultimately our measurements turn into a set of numbers, and those numbers are indicators – indicators of the level of some function.
The indicators that we’re measuring can be leading – in that they signal an event to come, lagging – they demonstrate something that’s already happened, or coincident – signaling something that’s happening right now.
Let’s say our goal is to monitor the amount of cash we have on hand – our cash flow. A leading indicator for cash flow is invoices: we can be reasonably assured that we’ll receive money for an invoice we produce. A coincident indicator would be a bank deposit – we’d be seeing the actual amount deposited at the moment it was deposited. A lagging indicator would be the bank statement; it records what has already happened in terms of cash flow.
The good news is that lagging indicators are almost always correct – they document things that have already happened. Coincident indicators are mostly correct; in most cases you’re measuring something as it happens, so there’s little chance for accuracy issues. Leading indicators, however, are sometimes wrong – and are often subject to manipulation. For instance, in our invoice example above, it’s possible to create fake invoices that customers will never pay. This would make the invoice indicator look good, leading to the conclusion that cash flow will be good – until you realize the invoices aren’t real.
When planning measurement you have to consider what decisions you’re going to be making and whether a leading, lagging, or coincident indicator is the best answer. The obvious answer is that you want to have a leading indicator so you can make changes before things happen – but the obvious answer is sometimes so difficult to get that the right answer might be a coincident or even a lagging indicator.
Measuring the Right Things
Some organizations believe that they have the metrics problem solved. With sophisticated tools they measure service availability, utilization, and hundreds of system-automated metrics. From a systems management perspective they have all the data they need – yet despite all of this data they still cannot communicate whether their solutions are adding value. The fewer the links between the item you’re measuring and business profitability, the better the measurement is at capturing something that matters.
In the following sections we’ll look at measurable metrics and their ability to illuminate business value in a solution.
Service Measurement (Availability)
If you’re in IT service delivery, then the metrics you care about are largely the metrics for which you have service level agreements (SLAs). You’ll be concerned about uptime and performance – how many seconds it takes for a page to load. These are essential service delivery metrics, but they don’t tell you much about the users or the business. They tell you exclusively (or nearly exclusively) about the system itself.
Scalability Measurement (Visits)
Often a marketing or communications department will insist on metrics like the number of hits or perhaps visits that the site receives over a certain period of time. These metrics can be generated through sophisticated tools or simple IIS log analysis and provide a level of awareness of what the users are doing – or at least that they’re showing up and using the system. However, this is a measurement of activity. It says nothing about the results being driven through the system.
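For completeness, simple IIS log analysis of the kind mentioned above might look like the sketch below. It is a hedged illustration: the field positions are assumptions based on one common W3C log layout, so you would check the #Fields: header of your own logs before relying on it, and the asset-extension list is merely an example filter.

```python
# Minimal sketch: count daily page requests from W3C-format IIS log lines,
# skipping static assets so the count better reflects user activity.
ASSET_EXTENSIONS = (".css", ".js", ".png", ".gif", ".ico")

def count_requests(lines):
    """Return a mapping of date -> non-asset request count."""
    counts = {}
    for line in lines:
        if line.startswith("#"):
            continue  # skip comment/header lines
        parts = line.split()
        # Assumed positions of date and cs-uri-stem; verify against #Fields:.
        date, url = parts[0], parts[4]
        if url.lower().endswith(ASSET_EXTENSIONS):
            continue
        counts[date] = counts.get(date, 0) + 1
    return counts
```

Even this crude count only tells you that people showed up – which is exactly the limitation of activity metrics.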
Even if the monitoring can tell you the number of new documents uploaded or the number of records edited in addition to just the number of visits, these still don’t lead to business results.
Business Measurement (Dollars)
Business measurements are those measurements that are tied to business results. An example might be the value of the increased number (or percentage) of closed deals. Another example might be the amount of reduced costs based on the use of the system.
Most of the time in SharePoint projects, when I start to talk about measuring business value, the IT folks in the room start to squirm. That’s because SharePoint is delivered as a platform to enable business solutions – it is rarely sold with a specific set of objectives. That means there’s little direct value from deploying the platform, except for the occasional cost of an older system that it replaces. The other problem is that measurements tend to get used for bonuses and performance appraisals. A technologist doesn’t want the messy business stuff to get in the way of their compensation.
If the measurement is reduced cycle time for responding to a request for proposal (RFP) then the technologist is being judged not just on whether the system is available, or even how many people use it, but also on the ability of the business to use the system in a way that transforms the way the business works – and that’s scary stuff. Measurements based on the business outcome being driven force the technologist to get in the boat and row with the business towards mutual goals. While this is scary – sometimes to the business as well – it helps get the alignment that’s necessary for success.
Putting Measurement Together
The key thing to remember about measurement of your SharePoint project isn’t the distinction between accuracy and precision (though that may help); the key is to realize that no matter how difficult you believe measurement will be there’s almost always an easier way to get a “roughly right” result. You can then use these “roughly right” answers to start to measure more important items pertaining to the SharePoint implementation – like how much money it made (or saved) for the organization.