Demonstrate ROI in IT with these metrics

When IT ops teams must validate a project's worth to the business to receive funding, they have to look beyond general benefits. Here are several metrics that help prove ROI.

Every IT department wants to prove its value, especially when seeking a technology and personnel budget. The challenge is that organizations measure the business value of IT operations by its effect on business costs and profits -- metrics that IT monitoring doesn't provide. How do you relate the IT metrics you can see to business value and build a credible ROI case? It's a matter of stages.

ROI, or return on investment, is a financial measure. Demonstrating ROI means showing that the benefits a project creates, divided by the project's cost, meet the company's guidelines for internal rate of return. The details of how your company assesses this should be part of the project's approval process; demonstrating ROI validates the claims made there.
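
As a minimal sketch of the arithmetic, following the article's benefits-divided-by-cost definition -- the hurdle rate and the benefit and cost figures below are hypothetical placeholders, not values from any real approval process:

```python
# Minimal ROI arithmetic sketch. All figures are hypothetical.

def roi(annual_benefit: float, investment: float) -> float:
    """ROI per the definition above: benefits divided by cost."""
    return annual_benefit / investment

HURDLE_RATE = 0.15  # assumed company guideline, e.g., 15%

project_roi = roi(annual_benefit=120_000, investment=500_000)
print(f"ROI: {project_roi:.1%}")  # 24.0%
print("Meets hurdle" if project_roi >= HURDLE_RATE else "Below hurdle")
```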

Stage 1: Determine the target

The first stage in proving ROI based on metrics is to identify what specific improvements the IT project should make. Generally, an IT project's ROI relates to productivity, meaning the pace or quantity of business activity that an application or set of applications will support. In some cases, the project targets IT costs themselves.

Stage 2: Get specific

ROI depends on both investment and return, so it's essential to understand both the costs and the benefits of a project. This sounds simple, but many projects involve incremental changes or modifications to existing applications or resources, such as cloud commitments. Unlike a new deployment, a modification's costs and benefits are entangled with those of the existing environment, so separating them requires granular measurement.

When productivity is a project's target, the most direct metric is the number of transactions per hour, which is available from most application management systems. The catch is that this metric is meaningful only if the IT system is the limiting factor in transaction processing, which is not always the case. It might also require a long measurement period to ensure it captures the full range of activity rates.
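
If a management system doesn't expose this metric directly, it can be derived from timestamped transaction records. The sketch below assumes one ISO 8601 timestamp per completed transaction; the record format is an illustration only, so adapt it to your logging system:

```python
# Sketch: transactions per hour from timestamped transaction records.
# The record format is an assumption for illustration.
from collections import Counter
from datetime import datetime

records = [
    "2024-05-01T09:12:44",  # hypothetical completed-transaction timestamps
    "2024-05-01T09:47:03",
    "2024-05-01T10:05:19",
]

per_hour = Counter(
    datetime.fromisoformat(ts).replace(minute=0, second=0) for ts in records
)
for hour, count in sorted(per_hour.items()):
    print(f"{hour:%Y-%m-%d %H:00}: {count} transactions")
```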

Measure latency

The alternative is to measure transaction processing time, or latency. Transaction processing time is the time between the initiation of a transaction and the return of results to the user. This data isn't always maintained, but when it's available, it's the ideal ROI metric.

If the data is not readily available, measure the latency of the specific element the IT project targets before and after the project is complete. The difference in transaction latency relates directly to the overall rate of transaction processing achieved.
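
A before-and-after comparison might look like the sketch below. The sample latencies are hypothetical, and a real measurement should span enough time to cover representative activity rates:

```python
# Sketch: compare transaction latency before and after a project.
# Sample values are hypothetical milliseconds per transaction.
from statistics import mean, quantiles

before_ms = [420, 480, 455, 510, 600, 470]
after_ms = [310, 335, 290, 360, 410, 325]

def p95(samples):
    # quantiles(n=20) returns 19 cut points; the last is the 95th percentile.
    return quantiles(samples, n=20)[-1]

print(f"Mean latency: {mean(before_ms):.0f} ms -> {mean(after_ms):.0f} ms")
print(f"P95 latency:  {p95(before_ms):.0f} ms -> {p95(after_ms):.0f} ms")
print(f"Improvement:  {1 - mean(after_ms) / mean(before_ms):.0%}")
```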

If the applications associated with a project are implemented with their user front ends in the cloud, the cloud's metrics are a great way to prove ROI. Focus on the measurements related directly to user experience or cost, and not on the metrics that relate to controlling or optimizing cloud behavior. The latter metrics are useful, but they're an element in responding to ROI issues rather than demonstrating that the ROI meets CFO targets. It's a mistake to bury financial analysts doing ROI calculations in technical details.
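
For example, if the front end runs behind an AWS Application Load Balancer, a user-facing measurement such as target response time can be pulled from CloudWatch. The sketch below assumes boto3 and a hypothetical load balancer name; other clouds offer equivalent monitoring APIs:

```python
# Sketch: pull a user-experience metric (target response time) from
# AWS CloudWatch via boto3. The LoadBalancer dimension value is a
# hypothetical placeholder; adjust the namespace/metric to your stack.
from datetime import datetime, timedelta
import boto3

cloudwatch = boto3.client("cloudwatch")

resp = cloudwatch.get_metric_statistics(
    Namespace="AWS/ApplicationELB",
    MetricName="TargetResponseTime",
    Dimensions=[{"Name": "LoadBalancer", "Value": "app/my-front-end/123"}],
    StartTime=datetime.utcnow() - timedelta(days=7),
    EndTime=datetime.utcnow(),
    Period=3600,  # hourly datapoints
    Statistics=["Average"],
)

for point in sorted(resp["Datapoints"], key=lambda p: p["Timestamp"]):
    print(f'{point["Timestamp"]:%Y-%m-%d %H:00}  {point["Average"] * 1000:.0f} ms')
```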

Cost complications

Projects designed to lower costs are justified similarly, but the R in ROI is the expected cost reduction, and the I is the investment to achieve it. Linking specific metrics to a proof of ROI can run into complications for three reasons:

  1. Cost management projects typically target lowering IT spending, which means changing one or more existing applications. To prove cost reduction, it is necessary to develop a before-and-after picture of costs, but the metrics necessary might not be available for both cases.
  2. Cost management projects could substitute expensed IT costs, such as public cloud services, for capital costs like data center equipment. This comparison is difficult because some data center assets will be only partially depreciated, and their eventual refresh will cost more (see the sketch after this list).
  3. Differences in hosting, such as a shift to the cloud, will likely change operations costs. It could be difficult to assess these changes with limited exposure to the new environment.
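
To illustrate the capital-versus-expense problem in point 2, the sketch below compares an annualized capital charge against a replacement cloud expense. All figures and the straight-line depreciation schedule are hypothetical:

```python
# Sketch: compare partially depreciated data center equipment against
# an expensed cloud alternative. All figures are hypothetical.

def remaining_book_value(purchase_price: float, life_years: int,
                         years_elapsed: int) -> float:
    """Straight-line depreciation: value still on the books."""
    return purchase_price * max(0, life_years - years_elapsed) / life_years

server_cost = 200_000
book_value = remaining_book_value(server_cost, life_years=5, years_elapsed=3)
annual_depreciation = server_cost / 5   # remaining capital charge per year
annual_cloud_expense = 55_000           # hypothetical replacement cost

print(f"Remaining book value: ${book_value:,.0f}")
print(f"Annual depreciation:  ${annual_depreciation:,.0f}")
print(f"Annual cloud expense: ${annual_cloud_expense:,.0f}")
```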

To address these points, start with an accurate measurement of the current costs -- often available for the cloud from billing histories and for the data center from the initial justification for the equipment. In most cases, assess these starting costs in a time period when usage rates mirror what is expected in the long term. Then compare this with the cost experiences for the new project, based on a similar interval.

The key metric here will be usage patterns, which reflect the transaction counts and similar data used for the ROI proofs for productivity projects. The goal is to ensure the past-and-present comparison isn't tainted by differences in activity that could affect the cost of the new option, particularly if it includes public cloud usage.
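
One way to keep activity differences from distorting the comparison is to normalize cost by transaction volume, as in this sketch; all figures are hypothetical monthly values:

```python
# Sketch: normalize before-and-after costs by transaction volume so
# differences in activity don't distort the comparison.

def cost_per_transaction(monthly_cost: float, transactions: int) -> float:
    return monthly_cost / transactions

before = cost_per_transaction(monthly_cost=48_000, transactions=1_200_000)
after = cost_per_transaction(monthly_cost=41_000, transactions=1_450_000)

print(f"Before: ${before:.4f} per transaction")
print(f"After:  ${after:.4f} per transaction")
print(f"Unit-cost reduction: {1 - after / before:.0%}")
```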

The task of validating claims is easier if the project approval process answers the question of how to measure success. When laying out benefits, consider and document how you'll know when you achieve them.
