
How to Measure AI ROI in Your Organisation: 5 Practical Steps

This article draws on insights from ELMO's webinar, Is AI Actually Working in Your Organisation? HR Leaders on Closing the Capability Gap, featuring Dr Amantha Imber and Anne Tosky. Watch the full webinar to learn more.

Your AI licenses have been approved. You’ve set up the network, sent the comms and run all the training. Now, your CEO is asking one thing: are we seeing results?

You pull up the dashboard and review the numbers. People are logging in, AI usage is up and the AI training completion rate looks strong. It’s looking good, but when you try to dig deeper, you hit a wall. The current metrics don’t tell you anything beyond usage, or whether AI has made a tangible impact.

If this feels familiar, you’re not alone. ELMO’s 2026 HR Industry Benchmark Report found that while 94% of ANZ HR teams are using AI at work, fewer than one in five can measure its impact extensively.

The report also revealed that the real challenge isn’t a lack of AI use; it’s a lack of meaningful metrics to measure AI’s ROI and real value. In other words, this is a measurement challenge, not an adoption problem. So how can HR leaders close this measurement gap?

In the webinar, Dr Amantha Imber, Organisational Psychologist and Founder of Inventium, and Anne Tosky, Chief People Officer at ELMO, explored the measurement challenge from multiple angles. From their conversation, we’ve distilled five practical steps, each tackling a distinct part of the problem.

1. Define the behaviour change before choosing what to measure

Every measurement problem starts as a clarity problem. Most organisations jump straight from “we need AI” to “let’s track who’s using it” without pausing to outline what actual success looks like. 

AI adoption is a behavioural change, and not all the answers can be found in dashboards or data. As Amantha put it: “Be really clear on what is the shift you are trying to achieve. What is the behaviour shift? Because this is not a technology transformation. This is a human transformation.”

Anne echoed a similar instinct earlier in the conversation: “This is actually a change management challenge at the end of the day.” That means sitting in a room with your leadership team and being honest about what success means for your organisation.

What you can do to start: Write down the specific behavioural outcomes that tie in with the business goal, as well as outcomes aligned to your organisation’s values. Are you trying to free up time, improve the quality of work across teams, or accelerate how quickly new hires become productive? Each of these calls for a different set of metrics. Once you’ve decided on the outcomes, they become your anchor, and everything else flows from them.

Related read: The People Power Behind AI Adoption and Why Organisations Often Fall Short 

2. Why AI usage metrics could be misleading

Login rates, token consumption and training completion percentages feel productive to track as they’re easy to pull and look “good”, but they tell you little about the value of that use.

Amantha has seen this pattern across hundreds of HR leaders. She pointed to Meta’s token leaderboard as a cautionary tale: a system that rewards volume of AI use, not quality. “It just means that people are using the tool. It doesn’t actually tell you that they’re doing it well,” she added.

And here’s the uncomfortable truth: in the world of AI, volume and quality have been decoupled.  

“Three years ago, producing more work generally signalled a better worker…volume now has no correlation with quality in the world of AI,” Amantha explained. “More output can just as easily mean more AI slop. This means poorly reviewed, copy-paste work that creates rework downstream.”

The check: For every metric you’re tracking, ask this question: “Does this tell me whether the behavioural shift I defined in Step 1 is happening?” If the answer is no, rethink the metrics you’re using. Which brings us to the next point.

3. What to measure instead: metrics that show real AI effectiveness 

Knowing what not to measure is only useful if you replace it with something better, and this is where most organisations get stuck. They recognise their metrics are shallow but don’t know what depth looks like.

Amantha shared Inventium’s framework, which was built from running AI capability programs across organisations. It measures four distinct dimensions of behavioural shift: 

Behavioural shift | What it tells you | How to capture it
1. Confidence and proficiency | Are people genuinely more capable, or just more exposed? | Pre- and post-program self-assessment
2. Hours saved per week | Is AI freeing up real time, or just shifting tasks? | Weekly self-report or time tracking
3. Quality of work produced and received | Are outputs improving, and are managers noticing? | Manager review, peer feedback
4. Breadth of use cases | Has someone moved beyond a simple task like email drafting to genuine workflow integration? | Use case log, survey

The breadth of use cases is particularly telling. “If someone is just using AI to write their emails and that is the only use case, I would say that is very low-value AI,” Amantha explained. 

These metrics aren’t complex. They can be captured through pre- and post-program surveys, manager conversations, and simple self-reporting. But they answer the question that usage data never will: is a shift actually happening in how people work?

A practical starting point: Run a baseline survey before your next AI initiative. Capture confidence, hours saved, and breadth of use. Then measure again 90 days later. That before-and-after comparison will tell you more than a year of login data.
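For teams who want to keep that before-and-after comparison lightweight, here is a minimal sketch of how the 90-day check could be tallied. All the survey numbers, metric names and the `shift_report` function are hypothetical, purely to illustrate the comparison across the four dimensions above.

```python
# Hypothetical baseline and 90-day survey averages for the four
# behavioural-shift dimensions (all figures are illustrative only).
baseline = {"confidence": 4.2, "hours_saved_per_week": 0.5,
            "quality_rating": 3.1, "distinct_use_cases": 1.4}
day_90 = {"confidence": 6.8, "hours_saved_per_week": 3.0,
          "quality_rating": 3.7, "distinct_use_cases": 4.2}

def shift_report(before, after):
    """Return absolute and percentage change for each tracked dimension."""
    report = {}
    for metric, b in before.items():
        a = after[metric]
        report[metric] = {"before": b, "after": a,
                          "change": round(a - b, 2),
                          "pct_change": round((a - b) / b * 100, 1)}
    return report

# Print a simple before/after summary for each dimension.
for metric, row in shift_report(baseline, day_90).items():
    print(f"{metric}: {row['before']} -> {row['after']} ({row['pct_change']:+}%)")
```

The point isn’t the tooling; a spreadsheet does the same job. It’s that the comparison is anchored to the behavioural dimensions from Step 1, not to login counts.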

4. What happens with the time AI saves (and why HR needs a plan) 

Here’s a question that rarely gets asked: if AI is saving someone an hour a day, what are they doing with that hour? Inventium researched exactly this, and the answer might surprise you.

“They’re doing more work,” Amantha revealed. And it’s not strategic work or creative thinking. It’s actually cognitively intense work, because AI handles the routine tasks and people default to filling the gap with deep-focus activities.

This resonated with Anne, who shared how the work itself is changing. “The gear switching is faster and faster. You now have this co-pilot, this agent, this chat that’s helping you do more and more and more. You’re getting more and more tired,” she explained.

Research led by Gabriella Rosen Kellerman with Boston Consulting Group, published in Harvard Business Review, described this as “brain fry” and explained how it differs from burnout.

Brain fry is the cognitive exhaustion that comes from managing multiple AI-assisted tasks simultaneously, context-switching constantly and never fully switching off. It’s an emerging risk that traditional wellbeing metrics don’t yet catch.

“If we’re not being given guidance on what to do with these supposed time savings, that’s generally what’s happening,” Amantha said. 

“But what if HR leaders stepped in and said: the plan for the time you’re saving is to invest it back in learning, or redesign your job, or innovate, or maybe people could just leave at 5pm.”

The shift is proactively designing what fills the space AI creates. For Anne, her instinct was to lean in. “Let’s think about and let’s be deliberate on what we spend our energy and time on instead,” she said. 

Even though this step sits outside the measurement framework, it determines whether the gains you measure in Step 3 (success metrics) translate into something sustainable for your people.

The takeaway: Time savings without a deliberate plan for what fills that time could be a fast track to exhaustion. HR needs to own this conversation at the start to prevent brain fry from happening.

5. How do you move from AI literacy to AI leverage?

The final step is recognising that metrics change as your organisation matures. In the webinar, Amantha stressed that “AI training is not just a box you tick” and drew a sharp distinction between AI literacy and AI leverage.

AI literacy is stage one, teaching people how to use the tools. AI leverage is stage two, redesigning workflows so AI creates value. So what does AI leverage look like in practice? 

Amantha shared an example from her team’s work with a major bank’s regional leadership team, where they helped each leader build a personalised “Quality Compass”: a custom GPT that interviewed each leader about their specific preferences for what good work looks like.

Leaders then gave this tool to their direct reports with a clear instruction: before you submit anything to me, run it through the Quality Compass first, review the feedback, and apply it. The result? Leaders reported receiving significantly higher quality work, not because the AI wrote it for them but because it coached their team to a higher standard before anything landed on their desk. 

As an HR leader, Anne saw how this could create genuine impact in how teams collaborate and, in her words, “learn from that and feel kind of empowered.”

The takeaway: Know the difference between AI literacy and leverage. Understand that your metrics need to reflect which stage you’re in. If you’re in the early stages, track confidence, proficiency, and hours saved. As you mature, shift to measuring work quality uplift and breadth of use cases. 

Related read: Building AI Capability in HR Teams: 2026 Strategy Guide

Where to from here?

These five steps can feel like a lot on top of day-to-day tasks, especially when you’re already stretched. But here’s an encouraging note.

The 2026 HR Industry Benchmark Report also revealed that business leaders rate HR’s AI performance more positively than HR rates itself. This means you might be closer to driving real impact than you think.

The hardest part isn’t the measurement itself. It’s being honest about whether what you’re tracking reflects the real value and ROI of AI, not just what’s easy to report.

All you need is the right metrics to prove it. Start there, and the rest will follow. 

Not sure where your AI capability sits?

Take ELMO’s AI Maturity Assessment, a five-minute quiz that benchmarks your AI readiness and effectiveness against 1,200+ ANZ HR professionals.
