Covering Scientific & Technical AI | Monday, December 2, 2024

ML and Hybrid Cloud Security: ‘4 P’s’ Predictions 

Enterprises are data-rich and insight-poor when it comes to security for multi-cloud, hybrid infrastructure. Machine learning approaches are relatively well established for email security, data loss prevention and malware detection. But for infrastructure security, ML is largely untapped; applied correctly, it can break the data jam and delight SecOps users.

The 4 Ps
In our analysis, most successful breaches exploit the “4 Ps” in unforeseen ways. You’re all too familiar with the usual suspects: (open) Ports, (loose) Privileges, (weak) Passwords and (missing) Patches. The 4 Ps pop up throughout the entire enterprise stack, including the network, storage, databases, virtual and physical servers, containers, cloud services and applications. We expect ML approaches to home in on the 4 Ps, learn their patterns and predict potential weaknesses in an enterprise infrastructure.
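To make the idea concrete, here is a minimal sketch of auditing stack layers for 4-P findings. The inventory records and field names are invented for illustration; a real scanner would pull this data from the actual infrastructure:

```python
# Illustrative sketch: tally "4 P" findings across stack layers.
# The resource records and field names are hypothetical.

def four_p_findings(resources):
    """Collect open-port, privilege, password and patch findings per layer."""
    findings = []
    for r in resources:
        if r.get("open_ports"):
            findings.append((r["layer"], "ports", r["open_ports"]))
        if r.get("runs_as_root"):
            findings.append((r["layer"], "privileges", "runs as UID 0"))
        if r.get("weak_password"):
            findings.append((r["layer"], "passwords", "weak credential"))
        if r.get("missing_patches"):
            findings.append((r["layer"], "patches", r["missing_patches"]))
    return findings

# Hypothetical inventory spanning several stack layers
inventory = [
    {"layer": "container", "runs_as_root": True,
     "missing_patches": ["runc CVE-2019-5736"]},
    {"layer": "database", "weak_password": True, "open_ports": [5432]},
    {"layer": "network", "open_ports": [22, 3389]},
]

for layer, category, detail in four_p_findings(inventory):
    print(f"{layer}: {category} -> {detail}")
```

Each finding becomes a labeled data point that a downstream model could learn from.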

The Kubernetes “runc” vulnerability (CVE-2019-5736; https://kubernetes.io/blog/2019/02/11/runc-and-cve-2019-5736/) is a classic illustration of exploiting loose privileges and unpatched systems. As the blog explains, runc is the low-level module that spawns Linux containers and is used by Docker, containerd and CRI-O, which are in turn used by Kubernetes. A container process running as root (UID 0) can exploit a flaw in runc to gain root privileges on the host running the container, compromising that host and every other container on it. The primary mitigation is to run containers, and the processes within them, as a non-root user with the least privileges required. Other mitigations are to update runc with the fix for the vulnerability and to use hardened, verified images from public repositories.

A machine learning model could be trained to recognize such exploit patterns and flag infrastructure configurations that might be prone to attacks.
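As a toy illustration of that idea, the sketch below trains a tiny logistic-regression model from scratch on synthetic, hand-labeled configuration records so it learns to score setups resembling the runc exploit pattern. The features and labels are invented for demonstration; a production system would learn from large volumes of real configurations and incidents:

```python
# Minimal logistic regression trained on synthetic configuration records.
# Features (all hypothetical): [runs_as_root, runc_unpatched, unverified_image]
import math

TRAIN = [
    ([1, 1, 1], 1), ([1, 1, 0], 1), ([1, 0, 1], 1), ([0, 1, 1], 1),  # risky
    ([0, 0, 0], 0), ([0, 0, 1], 0), ([0, 1, 0], 0), ([1, 0, 0], 0),  # benign
]

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train(data, lr=0.5, epochs=2000):
    """Stochastic gradient descent on the log-loss."""
    w, b = [0.0, 0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, y in data:
            p = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
            err = p - y  # gradient of log-loss w.r.t. the logit
            for i in range(len(w)):
                w[i] -= lr * err * x[i]
            b -= lr * err
    return w, b

def risk(w, b, x):
    return sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)

w, b = train(TRAIN)
print("root + unpatched runc:", round(risk(w, b, [1, 1, 0]), 2))
print("non-root, patched:    ", round(risk(w, b, [0, 0, 0]), 2))
```

The learned model then scores unseen configurations, flagging those whose combination of 4-P weaknesses resembles known exploit patterns.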

Connect the Dots in the Hybrid Cloud Stack

In hybrid cloud and multi-cloud environments, there are endless combinations of on-premises, virtualized and cloud-native services, vendors and interfaces. Such variety and the resulting fragmentation are not conducive to security, but they do open the door for breakthrough approaches using ML.

The recent buzz around “zero trust networking” illustrates the concept of and urgency for a multi-layered security posture. Most definitions of zero trust networking start with the principle of “default deny” of traffic at the enterprise perimeter, subnet and workload firewalls. Even with an integrated software-defined network of a public cloud like AWS, making sense of and securing network configurations in VPCs, subnets, load balancers, gateways, routes and security groups is a daunting task. In a sense, zero trust is a draconian strategy that is at least easy to understand and implement in the face of such complexity in the network layers. When you throw in the rest of the services at the compute, storage, database, container, orchestrator and application tiers, current manual approaches to configuration and vulnerability management and monitoring will just not work.
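The “default deny” principle itself is simple to sketch: traffic passes only if an explicit allow rule matches, and everything else is dropped. The rule format below is hypothetical and far simpler than real firewall or security-group policy:

```python
# Sketch of "default deny": traffic is allowed only if an explicit
# allow rule matches; anything unmatched is denied. Rules are invented.
import ipaddress

ALLOW_RULES = [
    {"src": "10.0.1.0/24", "dst_port": 443, "proto": "tcp"},   # web tier -> HTTPS
    {"src": "10.0.2.0/24", "dst_port": 5432, "proto": "tcp"},  # app tier -> Postgres
]

def allowed(src_ip, dst_port, proto):
    for rule in ALLOW_RULES:
        if (ipaddress.ip_address(src_ip) in ipaddress.ip_network(rule["src"])
                and dst_port == rule["dst_port"]
                and proto == rule["proto"]):
            return True
    return False  # default deny: no matching rule means traffic is dropped

print(allowed("10.0.1.15", 443, "tcp"))  # explicitly allowed
print(allowed("10.0.1.15", 22, "tcp"))   # denied by default
```

The simplicity is the point: the burden shifts from enumerating everything bad to enumerating the (much smaller) set of things that are known good.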

We expect ML approaches to evolve to incorporate risk signals (and provide risk visibility) from the hybrid enterprise’s compute, storage, networking, database and application resources to learn attack patterns and flag potential threats by processing petabytes of data from the various layers in the stack. The good news is that the public clouds with their threat detection platforms (e.g. AWS GuardDuty, GCP Anomaly Detection) have already done a lot of heavy lifting for resources in their control – leveraging and adding value on top of them through proprietary ML approaches will hold the key.
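As a toy version of that idea, the sketch below learns a fleet-wide baseline from invented per-host risk signals and flags hosts that deviate from it. Real platforms such as GuardDuty ingest far richer telemetry and use far more sophisticated models:

```python
# Toy anomaly detection over hypothetical per-host risk signals:
# (open_ports, failed_logins, outbound_connections). Data is synthetic.
import statistics

fleet = {
    "web-1": (2, 3, 40), "web-2": (2, 4, 38), "web-3": (3, 2, 42),
    "db-1": (1, 1, 10), "db-2": (1, 2, 12),
    "web-x": (9, 55, 400),  # compromised-looking outlier
}

def zscore_flags(hosts, threshold=2.0):
    """Flag hosts with any signal more than `threshold` std-devs from the fleet mean."""
    dims = list(zip(*hosts.values()))
    means = [statistics.mean(d) for d in dims]
    stdevs = [statistics.pstdev(d) or 1.0 for d in dims]
    flagged = []
    for name, signals in hosts.items():
        z = max(abs(s - m) / sd for s, m, sd in zip(signals, means, stdevs))
        if z > threshold:
            flagged.append(name)
    return flagged

print(zscore_flags(fleet))  # → ['web-x']
```

Per-dimension z-scores are the crudest possible baseline; the value of ML here is learning correlated, cross-layer patterns that simple thresholds miss.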

Machine-Assisted vs. Machine-Driven

In our assessment, ML/AI in security contexts has one critical difference from other ML contexts: it must balance true positives (identifying a threat correctly) against false negatives (missing a real threat), and the cost of the latter is disproportionately high. ML/AI algorithms have so far not been optimized for such an asymmetric accuracy metric, though one hopes that innovations like driverless cars, which must similarly weigh false negatives (missing an object) against true positives (detecting an object correctly), will show the way.
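The asymmetry can be made concrete with a cost-weighted threshold choice: when a missed threat (false negative) costs far more than a false alarm (false positive), the optimal alerting threshold shifts lower. The scores, labels and costs below are synthetic:

```python
# Choosing an alert threshold under asymmetric error costs (synthetic data).

def total_cost(scores, labels, threshold, fn_cost=50.0, fp_cost=1.0):
    cost = 0.0
    for s, y in zip(scores, labels):
        predicted = s >= threshold
        if y and not predicted:
            cost += fn_cost   # missed a real threat
        elif predicted and not y:
            cost += fp_cost   # false alarm
    return cost

def best_threshold(scores, labels, **kw):
    candidates = sorted(set(scores)) + [1.01]  # include "alert on nothing"
    return min(candidates, key=lambda t: total_cost(scores, labels, t, **kw))

scores = [0.1, 0.2, 0.35, 0.4, 0.6, 0.8, 0.9]  # model risk scores
labels = [0,   0,   1,    0,   1,   1,   1]    # 1 = real threat

print("misses expensive:", best_threshold(scores, labels, fn_cost=50.0))  # → 0.35
print("misses cheap:    ", best_threshold(scores, labels, fn_cost=0.5))   # → 0.6
```

With expensive misses the threshold drops to catch the borderline threat at 0.35, accepting an extra false alarm; with cheap misses it rises to suppress the alarm instead.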

The traditional security trade-off between user/business experience and staying safe does not go away either – no one wants a firewall to delay or deny legitimate traffic, for example. As a result, it is hard to conceive of a future where operators are eliminated entirely in favor of smart algorithms. Enterprise SecOps teams, MSSPs and MSPs will continue to provide the people aspect of security. However, we should fully expect them to re-tool with next-generation ML/AI-based security offerings. They’ll need all the help they can get if cybersecurity skills shortages persist.

As innovation and emerging-technology convergence press forward at a relentless pace, it’s clear that complexity and scale will remain central challenges for security teams for the foreseeable future. Developing and optimizing an overarching security layer with an assist from ML/AI will equip your security team with the tools it needs to extract the promised efficiencies and agility from digital transformation efforts, without compromising protections for customers, users, data or systems.

Bashyam Anant is VP product management and Naveen Ramachandrappa is senior machine learning engineer at Cavirin Systems.

AIwire