July 11, 2018 By Douglas Bonderud 3 min read

Machine learning and artificial intelligence (AI) are transitioning from proof-of-concept programs to functional corporate infrastructure. As spending on these technologies continues to rise sharply, their expanding prevalence is all but inevitable.

But the adoption of digital intelligence introduces new risk: IT teams face a steep curve of adaptation while cybercriminals look for ways to compromise these new tools.

Could adversarial AI become the newest insider threat?

Why AI Won’t Replace Human Expertise

Security teams are overworked and understaffed, but some still worry that AI tools will eventually replace human expertise. In response to these concerns, Phys.org noted in June 2018 that discussions about artificial intelligence and automation are “dominated by either doomsayers who fear robots will supplant humans in the workforce or optimists who think there’s nothing new under the sun.”

New research, however, suggests that these technologies are better suited to replace specific tasks within jobs rather than wiping out occupations en masse. As reported by The Verge in June 2018, a pilot plan by the U.S. Army will leverage machine learning to better predict when vehicles need repair — taking some of the pressure off of human technicians while reducing total cost.

The same is possible in IT security: using intelligent tools for the heavy lifting of maintenance and data collection frees up technology professionals for other tasks.

Will Machine Learning Reduce or Multiply Insider Breaches?

Though new technology likely won’t be stealing jobs, it could boost the risk of an insider breach. All companies are vulnerable to insider threats, which can take the form of deliberate actions to steal data or unintentional oversharing of corporate information. Since AI and machine learning tools lack the human traits that underpin these risks, they should, in theory, produce a safer environment.

As noted by CSO Online in January 2018, however, malicious actors could leverage the same technologies to create unwitting insider threats by poisoning data pools. By tampering with data inputs, attackers also compromise outputs — which companies may not realize until it’s too late.
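To make the poisoning risk concrete, here is a minimal, hypothetical sketch (not drawn from any specific incident) showing how a handful of mislabeled training points can silently shift a trivial one-dimensional classifier's decision boundary:

```python
# Minimal illustration of training-data poisoning (hypothetical example).
# A nearest-centroid classifier on 1-D data: the decision threshold is the
# midpoint between the two class means, so mislabeled points shift it.

def train_threshold(samples):
    """samples: list of (value, label) pairs; returns the decision threshold."""
    class0 = [v for v, y in samples if y == 0]
    class1 = [v for v, y in samples if y == 1]
    return (sum(class0) / len(class0) + sum(class1) / len(class1)) / 2

def predict(threshold, value):
    return 1 if value > threshold else 0

clean = [(0, 0), (1, 0), (2, 0), (9, 1), (10, 1), (11, 1)]
# Attacker injects class-1-looking values mislabeled as class 0:
poisoned = clean + [(8, 0), (9, 0)]

t_clean = train_threshold(clean)        # 5.5
t_poisoned = train_threshold(poisoned)  # 7.0

# A legitimate class-1 input near the boundary is now misclassified:
print(predict(t_clean, 6))     # 1 (correct)
print(predict(t_poisoned, 6))  # 0 (flipped by the poisoned data)
```

Real models are far more complex, but the mechanism is the same: corrupted inputs quietly move the boundary, and the effect only surfaces in the outputs.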

According to a May 2018 Medium report, meanwhile, there’s a subtler class of attacks on the rise: adversarial sampling. By creating fake samples that exist on the boundary of AI decision-making capabilities, cybercriminals may be able to force recurring misclassification, in turn compromising trust in the underlying machine learning models.
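The boundary-probing idea behind adversarial sampling can be sketched in a few lines. This is a hypothetical toy, assuming only black-box query access to a simple one-dimensional threshold classifier standing in for a deployed model:

```python
# Minimal illustration of adversarial sampling (hypothetical example).
# With black-box access to a classifier, nudge a correctly classified
# input toward the decision boundary in small steps until the label flips,
# yielding a near-boundary sample the model misclassifies.

def predict(value, threshold=5.5):
    """Stand-in for a deployed model: a simple 1-D threshold classifier."""
    return 1 if value > threshold else 0

def craft_adversarial(x, target_label, step=0.1, max_steps=1000):
    """Shift x in small steps until the model outputs target_label."""
    direction = -1 if target_label == 0 else 1
    for _ in range(max_steps):
        if predict(x) == target_label:
            return x
        x += direction * step
    return None  # boundary not reached within max_steps

x_original = 8.0  # legitimately class 1
x_adv = craft_adversarial(x_original, target_label=0)
print(predict(x_original))  # 1
print(predict(x_adv))       # 0 (sample sits just across the boundary)
```

Real adversarial attacks operate in high-dimensional feature spaces with far more sophisticated search strategies, but the goal is the same: samples close enough to the boundary to be reliably misclassified.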

How to Thwart AI-Powered Insider Threats

With the adoption of intelligent tools on the rise, how can companies safeguard against more powerful insider threats?

Best practices include:

  • Creating human partnerships: These new tools work best in specific-task situations. By pairing any new learning tools with a human counterpart, companies create an additional line of defense against potential compromise.
  • Developing checks and balances: Does reported data match observations? Has it been independently verified? As more critical decision-making is handed off to AI and automation, enterprises must develop check-and-balance systems that compare outputs to reliable baseline data.
  • Deploying tools with a purpose: In many ways, the rise of intelligent technologies mirrors that of the cloud. At first an outlier, the solution quickly became a must-have to enable digital transition. There is potential for a similar more-is-better tendency here, but that overlooks the key role of AI and machine learning: addressing specific pain points rather than simply keeping up with the Joneses. Start small by finding a data-driven problem that could benefit from the implementation of intelligence technologies. Think of it like the zero-trust model for data access: It’s easier to contain potential compromise when the attack surface is inherently limited.
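The check-and-balance idea above can be sketched as a simple drift monitor. This is a hypothetical example, not a specific product feature: compare a model's recent outputs against an independently verified baseline and flag deviations beyond a tolerance.

```python
# Hypothetical check-and-balance sketch: compare model output statistics
# against an independently verified baseline and flag suspicious drift.

from statistics import mean

def drift_alert(baseline_outputs, recent_outputs, tolerance=0.1):
    """Return True if the recent mean deviates from the baseline mean
    by more than `tolerance` (as a fraction of the baseline mean)."""
    base = mean(baseline_outputs)
    recent = mean(recent_outputs)
    if base == 0:
        return recent != 0
    return abs(recent - base) / abs(base) > tolerance

baseline = [0.50, 0.52, 0.48, 0.51, 0.49]   # verified historical scores
healthy  = [0.51, 0.49, 0.50, 0.52, 0.48]   # normal variation
drifted  = [0.72, 0.70, 0.75, 0.71, 0.69]   # possible poisoned inputs

print(drift_alert(baseline, healthy))  # False
print(drift_alert(baseline, drifted))  # True
```

A production system would track many statistics per model, but even a crude comparison like this gives the human counterpart a trigger to investigate before poisoned outputs drive decisions.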

Machine learning and AI tools are gaining corporate support, and fortunately, they’re not likely to supplant the IT workforce. Looking ahead, human expertise will in fact be essential to proactively address next-generation insider threats empowered by compromised learning tools and adversarial AI.
