AI Cybersecurity Regulations and the Need for Corporate Culture Changes
Artificial Intelligence (AI) systems are increasingly gaining control of our digital and physical environments through technologies like facial recognition, predictive modeling, integrations with external Internet of Things (IoT) devices, and beyond. Alarmingly, generative AI is estimated to account for 10 percent of all data produced by 2025, up from 1 percent in 2021, while the IoT devices that connect our physical environments to the digital world are expected to double from the current 15 billion devices by 2030.
And there’s much more in store. In addition to AI’s almost unlimited applications across industries, cybersecurity is also realizing the benefits of this accelerating technology. Not only will these newer AI capabilities continue to advance our ability to predict and protect against cyber threats, but AI will also improve education and training, simplifying technical cybersecurity concepts and making them accessible to a wider audience.
But There Are Risks Lurking in the Shadows…
As AI systems become increasingly integrated within our lives, they become lucrative targets and tools for malicious actors to exploit. This is especially relevant to the next generation of AI-based systems, which face novel attacks of their own: they can be manipulated into exposing private and sensitive data, or co-opted to improve the efficacy of cyberattacks. However, these AI systems currently operate in relative isolation. The next big evolution in AI will happen when these systems converge into a vast network of data-collection sensors and computation, operating in the aggregate.
This will fundamentally change our understanding of AI security. When massive, decentralized AI systems control everything, the question of who holds the reins becomes crucial. Moreover, AI-powered security solutions may generate false positives, resulting in excessive alerts that overwhelm security teams or cause genuine threats to go unnoticed.
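To see why alert volume becomes such a problem, consider a back-of-the-envelope example. The event counts and detection rates below are purely illustrative assumptions, not figures from any specific product, but they show how even a seemingly accurate detector buries a handful of real detections under thousands of false alarms once it scans millions of benign events a day:

```python
# Illustrative base-rate arithmetic: why a "highly accurate" AI detector
# can still overwhelm a security team. All numbers are hypothetical.

events_per_day = 5_000_000      # benign events scanned daily (assumed)
true_attacks_per_day = 10       # actual malicious events (assumed)

true_positive_rate = 0.95       # detector catches 95% of attacks (assumed)
false_positive_rate = 0.01      # flags 1% of benign events (assumed)

true_alerts = true_attacks_per_day * true_positive_rate
false_alerts = events_per_day * false_positive_rate

precision = true_alerts / (true_alerts + false_alerts)

print(f"Alerts per day: {true_alerts + false_alerts:,.0f}")   # ~50,000 alerts
print(f"Of which real:  {true_alerts:.0f}")                   # ~10 real detections
print(f"Precision:      {precision:.4%}")                     # roughly 1 real alert per 5,000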
Oversight and Regulation
Striking the right balance between AI automation and human oversight is essential for any effective cybersecurity strategy. However, regulations are lagging behind the rapid pace of technological advancement. To safeguard against inevitable challenges such as misinformation, the misuse of deepfakes, and the unauthorized control of critical infrastructure, we must rethink many of our existing assumptions about governance and economic frameworks. Oversight isn’t a problem that affects only certain individuals or organizations; it is an issue that will become increasingly prominent and impact society as a whole.
The European Union’s passing of the Artificial Intelligence Act in March of this year was the first, and biggest, step in this effort. The legislation prohibits the riskiest AI applications outright, including certain biometric and facial recognition technologies, social scoring systems, and AI intended for manipulation or exploitation. It also establishes stringent requirements for high-risk AI in critical areas like infrastructure, education, and employment, mandating risk assessments, transparency, and human oversight.
The United States, on the other hand, is lagging behind. California’s Governor, Gavin Newsom, recently vetoed what would have been the most ambitious U.S. artificial intelligence regulatory bill to date. The regulation was supposed to be something of a saving grace, funneling money toward protecting the public and forcing corporate leaders to change AI-related behaviors. The bill sought to compel AI companies to take safety measures that protect the public from cyberattacks, prevent AI from being used to develop weapons, and prevent automated crime. Newsom, however, argued that the bill lacked nuance, applying the same standards to all large AI systems regardless of their function or the risk of the environments in which they are deployed. It would have required companies to implement safety testing of large AI products costing at least $100 million to develop, and to build a "kill switch" into new AI technology. I suspect that, in a state that boasts the headquarters of brands like Google, OpenAI, and Meta, signing the bill felt like political suicide to Newsom. Maybe it would have been, but it still would have been the right thing to do.
Corporate Responsibilities Now
With a lack of regulations to force change, companies leveraging the promise of AI need to take immediate steps to increase the rigor with which they monitor their own AI platforms. Technology has changed how we do business, but businesses haven’t created new processes to govern it. Ask any CEO, CIO, or CTO how many AI models are currently in use at their company, and they likely would not be able to answer. There is very little in the way of risk assessment in place for AI. Generally, no regulatory security audits are happening (because no government is requiring them). In short, the tracking and measuring of AI performance is almost non-existent.
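A concrete first step is simply maintaining an inventory of the AI models in use and when each was last reviewed. The sketch below is a minimal, hypothetical example of such a register; the field names, model entries, and 90-day review threshold are assumptions for illustration, not a prescribed standard:

```python
# Minimal sketch of an internal AI model inventory / risk register.
# Model names, owners, and the 90-day review threshold are illustrative assumptions.
from dataclasses import dataclass
from datetime import date

@dataclass
class AIModelRecord:
    name: str            # internal identifier for the model or service
    owner: str           # accountable team or individual
    purpose: str         # business function the model serves
    risk_level: str      # e.g. "low", "medium", "high"
    last_reviewed: date  # date of the last risk/security review

def overdue_reviews(records: list[AIModelRecord], max_age_days: int = 90) -> list[AIModelRecord]:
    """Return models whose last review is older than max_age_days."""
    today = date.today()
    return [r for r in records if (today - r.last_reviewed).days > max_age_days]

# Hypothetical entries; a real register would be populated from an organization-wide audit.
inventory = [
    AIModelRecord("support-chatbot", "Customer Ops", "customer support triage", "medium", date(2024, 3, 1)),
    AIModelRecord("fraud-scoring-v2", "Risk", "transaction fraud scoring", "high", date(2024, 9, 15)),
]

for record in overdue_reviews(inventory):
    print(f"Review overdue: {record.name} (owner: {record.owner}, risk: {record.risk_level})")
```

Even a lightweight register like this lets an executive answer the "how many models are we running?" question and gives risk assessments and security audits something concrete to schedule against.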
Although the EU regulations have faced criticism from industry players for potentially stifling innovation and competitiveness, and from advocacy groups for not adequately addressing ethical issues, a phased approach can balance oversight with practicality. As AI continues to advance, its development should be guided by policymakers and lawmakers who thoroughly understand both the potential benefits and the associated risks.
Dr. Jeff Schwartzentruber is a Senior Machine Learning Scientist at eSentire, and a Senior Advisor to Rogers Cybersecure Catalyst at Toronto Metropolitan University, where he focuses on the intersection of AI and cybersecurity.