
The Real Cybersecurity Threat in 2026 Isn’t Technology — It’s Behaviour

ELMO’s Head of Information Security, Prashant Mohan Naik, explains why human behaviour is still cybersecurity’s weakest link — and how organisations can reduce the risk.


Cybersecurity threats are becoming more sophisticated every year. But according to the 2026 HR Industry Benchmark Report, the biggest risk facing organisations across Australia and New Zealand isn’t technology: it’s people.

HR leaders identified three major challenges:

  • Human error
  • Poor cybersecurity processes
  • Shadow IT and lack of security awareness

But Prashant Mohan Naik, Head of Information Security at ELMO, explains that these risks are not new — and that’s exactly why organisations must act on them.

Human error: The cybersecurity risk that won’t go away

Despite advances in cybersecurity technology, people remain the most common point of failure. The 2026 HR Industry Benchmark Report makes clear that it’s people who create the biggest security risk.

Among Australian respondents, 32% highlighted human error as their biggest challenge and 25% pointed to a lack of cybersecurity processes and hygiene (compared with 19% and 26% respectively for New Zealand).

The good news, according to Prashant, is that these risks are nothing new, which means organisations can at least do something about them.

“We often talk about security as people, process and technology,” says Prashant, “but processes and technology are only as good as the people who use them.”

Even the best systems fail when employees:

  • Make mistakes
  • Misunderstand policies
  • Use tools incorrectly
  • Experiment with unapproved software

That’s why human error continues to dominate cybersecurity risk. The solution isn’t just stronger technology. Organisations need to focus on how people interact with systems every day. This starts with regular security awareness training that evolves as new threats emerge, ensuring employees understand not just the rules, but why they matter.

Why cybersecurity awareness must constantly evolve

AI use is increasingly embedded in everyday work, with 82% of ANZ organisations and 94% of HR teams using it in some capacity. While this helps teams become more efficient, agile, and strategic, it also increases potential security exposure if employees aren’t properly trained.

At the same time, many organisations are still building the foundations required for secure AI use. Only 21% report having a fully centralised data system, highlighting how fragmented systems can make it harder to manage security, governance and responsible AI usage at scale.

Training around phishing emails is no longer enough. Organisations must take a broader and deeper approach to how teams identify and mitigate risks. Cyber threats are increasingly silent and embedded within the tools employees use every day, particularly AI systems that are still evolving.

“The threat landscape changes constantly,” Prashant explains. “What worked yesterday may not work tomorrow, which is why awareness and training have to keep evolving alongside the risks.”

For organisations looking to reduce risk, this means shifting cybersecurity awareness from a compliance exercise to a practical capability embedded into daily work. Prashant suggests several areas of focus:

  • Make security guidance simpler and more practical. Long policy documents may satisfy auditors, but employees rarely read or remember them. Short, clear guidance outlining key dos and don’ts is far more effective.
  • Create safeguards for experimentation. Teams will naturally explore new tools and AI platforms. Rather than blocking innovation, organisations should define clear boundaries around what data can and cannot be shared.
  • Maintain visibility over new tools being used. Monitoring software usage helps organisations identify shadow IT early and intervene before sensitive data is exposed.
  • Treat security awareness as continuous. As technologies evolve, employees need regular updates and reminders about emerging risks and responsible use.

Ultimately, cybersecurity in the AI era isn’t just about protecting systems; it’s about equipping people to make safer decisions in a fast-changing digital environment.

The hidden risk of third-party technology

Third-party tools solve a problem, and adopting them is often as easy as plug and play. But they aren’t always the simple fix they appear to be. Left unchecked, third-party software can quickly become the very thing that undermines an organisation’s safeguards.

One of the hardest things for businesses to keep track of is who is using what. New AI tools, plugins, integrations and SaaS platforms are appearing almost daily, making it easier than ever for employees to adopt technology without formal oversight.

This rapid experimentation creates a challenge: shadow IT — where tools are adopted outside official processes. Prashant explains that while innovation should be encouraged, organisations must ensure safeguards are in place.

“We don’t want to stop innovation,” Prashant says. “But organisations need to create guardrails so people can explore new tools safely without exposing sensitive information.”

The risk doesn’t necessarily lie in employees exploring new technology. It lies in how those tools are used, particularly when confidential or customer data is uploaded into platforms that haven’t been vetted.

A third-party technology decision, therefore, should never sit with individuals alone. Instead, it should be evaluated at an organisational level to ensure it meets security, compliance and data governance standards.

As Prashant explains, many organisations rely heavily on external platforms — from cloud infrastructure to data systems — which means security responsibility extends beyond internal teams.

“When we use third-party software, we need to ensure those suppliers follow the same baseline security standards we do.”

This is why many organisations now require vendors to meet recognised frameworks such as ISO 27001, ensuring partners follow strong security and data protection practices.

Without this oversight, organisations risk introducing vulnerabilities through the very tools designed to improve productivity.

Prashant recommends organisations take a structured approach to managing third-party technology risk. This includes establishing clear approval processes for new tools, setting guardrails on what data can be shared with external platforms, maintaining visibility into which technologies employees are using, and ensuring vendors meet recognised security standards, such as ISO certifications, before being integrated into core systems. 

As the pace of technology innovation accelerates, organisations must look beyond protecting their own systems and ensure the entire technology ecosystem around them is secure.

Security is everyone’s responsibility

When it comes to AI governance and cybersecurity, responsibility cannot sit with a single department. While leadership sets direction and security teams implement controls, every employee plays a role in how technology is used safely and responsibly.

Yet many organisations are still navigating how to embed AI safely across their workforce. According to the 2026 HR Industry Benchmark Report, only 12% of ANZ organisations say AI is extensively integrated into day-to-day workflows, highlighting the challenge many businesses face in implementing consistent governance and responsible usage practices.

However, shared responsibility only works when employees clearly understand what is expected of them. Many organisations rely on lengthy policy documents to manage cybersecurity, but these rarely change behaviour. Guidance buried in 20-page policies is unlikely to be remembered or applied in everyday decisions.

“At a CISO event I attended, someone shared a simple rule: if a policy is more than three pages long, most people won’t read it,” Prashant says. “Many security policies are written to satisfy auditors, but from an end-user perspective, people aren’t going to read 10, 15 or 20 pages and remember everything.”

Instead, organisations should focus on practical, easy-to-follow guidance that helps employees act safely in the moment. This can include clear dos and don’ts, short policy summaries, and quick-reference resources that make security expectations easy to understand and apply.

Effective security awareness isn’t about creating more documentation — it’s about equipping employees with the knowledge and confidence to make better decisions every day.

The next frontier: AI governance standards

As AI adoption accelerates, organisations are beginning to formalise governance frameworks to ensure the technology is used responsibly, securely and transparently.

Emerging standards such as ISO 42001 for AI management systems are designed to provide structured guidance for organisations developing or deploying AI technologies. Much like established cybersecurity frameworks, the standard focuses on establishing governance for the design, monitoring, and improvement of AI systems over time.

Adopting these frameworks can help organisations demonstrate:

  • Transparency in how AI systems are developed and used
  • Ethical AI practices that protect individuals and organisations
  • Strong governance controls around risk management and oversight

For many organisations, this shift represents the next phase of AI maturity, moving beyond experimentation toward structured, accountable AI use.

Prashant says this is exactly the direction organisations should be heading as AI becomes embedded across business operations.

“We’re currently moving towards ISO 42001 certification, which focuses on establishing a formal artificial intelligence management system.”

Prashant Mohan Naik, Head of Information Security at ELMO

At ELMO, this involves developing clear principles around how AI is used across the organisation, alongside policies, monitoring processes and governance controls designed to ensure AI is applied responsibly.

The certification process itself also requires independent auditing. The first stage assesses whether the organisation is ready for certification, while the second stage validates the controls, evidence and oversight behind how AI systems are managed.

With only a small number of organisations currently pursuing formal AI governance standards, early adoption represents a significant opportunity to build trust with customers and stakeholders.

As organisations adopt more AI, governance becomes essential. It’s not just about innovation — it’s about ensuring that technology is used responsibly, securely and in a way people can trust.

In a rapidly evolving technology landscape, that trust is quickly becoming a powerful competitive advantage.

The real cybersecurity challenge

For many organisations, cybersecurity has traditionally been viewed as a technology problem. But the insights from this year’s HR Industry Benchmark Report — and Prashant’s experience — make one thing clear: the biggest vulnerabilities rarely come from systems alone.

They come from how people use them.

As AI adoption accelerates and new technologies enter the workplace faster than ever, organisations must rethink their approach to cybersecurity. The most effective strategies will combine strong governance frameworks with practical, everyday awareness, empowering employees to make better decisions.

Technology will continue to evolve. Threats will continue to change.

But organisations that focus on people, clear guardrails and responsible governance will be far better equipped to manage risk and build the trust that modern digital environments demand.

As Prashant puts it, security ultimately comes down to how people interact with the systems around them.

Explore ELMO’s HR platform

Talk to our team and learn how ELMO’s products support compliance and workforce security.
