
Expert Discusses Current State and Future of AI Regulations 

AI tools are improving fast, and regulators are racing to catch up. The technology's rapid advancement has outpaced regulatory efforts, raising concerns about its potential risks and unintended consequences.

Melody Morehouse, Director of Conversation Compliance at Gryphon.AI

As governments scramble to catch up, a patchwork of regulations is emerging. However, comprehensive legislation at the local, national, and global levels is still lacking.

To shed light on the current regulatory landscape and its impact on AI development, we spoke with Melody Morehouse, Director of Conversation Compliance at Gryphon.AI, to hear her thoughts on growing regulation concerns.

A seasoned compliance-focused telecom leader, Morehouse has been closely following the regulatory landscape around AI and telemarketing. She’s worked in the telecom industry for more than 20 years with roles at Sprint and T-Mobile, and she has a lot to say about where society is heading in terms of AI regulatory action.

What are the most significant regulatory developments or new laws related to AI that have emerged in the US over the past year? How do these aim to govern the development and deployment of AI systems?

In the past, state-level AI legislation concentrated on the general use of AI and on specific AI technologies such as autonomous vehicles, facial recognition, and video interviews (e.g., Illinois). We're now seeing more state-level bills that would require developers to disclose more about how AI systems work in order to prevent algorithmic bias, such as a Washington state bill that would require documentation of the data used to train a tool and how the tool was evaluated.

At the federal level, President Biden issued an executive order in October 2023 on the safe, secure, and trustworthy development and use of AI, intended to support responsible AI development and deployment, industry regulation, and international partnership. AI has become a federal priority focused on managing AI risk, strengthening AI safety and security, and ensuring privacy protections, while avoiding a negative impact on innovation and competition.

It’s not just politicians who have been making significant regulatory decisions around AI. The FCC’s recent ban on robocalls using AI-generated cloned voices underscores how regulators are taking a proactive stance to mitigate risks and prevent harm. However, as regulations multiply, there’s growing concern that they will lead to a wave of litigation and put courts in the position of determining the liability of corporate AI algorithms, something that is difficult to do without clear directives and guidance.

That said, developers must also consider how end users will leverage AI, as these models do not always generate accurate outputs. This issue can affect the way people interact with AI-powered technology and make it difficult to trust the results. That’s why it is imperative to build models that help inform decision-making rather than replace it entirely. The way these systems are developed lays the foundation for how trustworthy and effective they are at empowering the decision-making process.

Beyond the US, which other countries or regions have introduced notable AI regulations recently? How do their approaches differ from or align with the American stance?

While AI has been under increased regulatory scrutiny, especially over the last year, the magnifying glass has not been limited to the United States. Increased privacy legislation and consumer advocacy initiatives directed at AI have been a catalyst for newly established cross-entity partnerships, policies, and laws.

In March 2024, the European Parliament voted in favor of adopting the Artificial Intelligence Act, which was introduced three years ago and is expected to take effect this summer. The landmark law, the world’s first regulatory policy specifically targeting AI algorithms, focuses on creating safeguards around general-purpose AI systems, banning social scoring and AI used to manipulate or exploit user vulnerabilities, and establishing the right for users to file complaints.

The act establishes a risk-tiering system, ranging from unacceptable risk down to minimal or no risk, and will apply to all AI systems developed, used, or operated in the EU, regardless of where the business is headquartered. It is likely to have an effect similar to the GDPR’s and carry significant weight globally.

While the US and the EU share conceptual alignment on risk-based approaches and standards, their specific AI regulations diverge in application and structure. The EU emphasizes comprehensive legislation, while the US has focused on non-regulatory infrastructure. The UK, for its part, has taken a pro-innovation stance, though there are rumors of UK legislation underway that would define new, and potentially stricter, rules to regulate the technology.

Specifically, the US approach is highly distributed across federal agencies, with investment in non-regulatory initiatives such as AI risk management, evaluation of specific technologies (e.g., facial recognition software), and AI research. Meanwhile, the EU takes a legislative approach to AI regulation, categorizing AI systems and implementing specific reporting and oversight requirements. All in all, both the US and the EU prioritize safety, security, and human-centric AI. While the US emphasizes national security and innovation, the EU focuses on concrete rules, excellence, and trust.

How do you assess the current state of AI governance globally? Are existing regulations sufficient, or do we need more comprehensive international frameworks to effectively regulate AI?

In the rapidly evolving global landscape around AI, there's an ongoing debate about whether existing regulations are sufficient. On the one hand, some argue that current regulations aren't specific enough to address the unique risks posed by AI. For instance, AI systems can be opaque and biased, and there's currently no clear legal framework for dealing with these issues.

Meanwhile, others argue that it's too early for comprehensive international regulation. AI is rapidly evolving, and inflexible regulations could stifle innovation. Additionally, international cooperation on AI governance is complex because countries have different political systems, values, and priorities.

Finding the right balance is key. Some level of regulation is needed to ensure AI is developed and used responsibly without stifling innovation. Given how quickly AI systems are changing, regulations may need the flexibility to accommodate advancements quickly. Gaining international consensus on key principles for AI governance, such as fairness, transparency, and accountability, could go a long way, but it may be out of reach, especially in the near term. One thing that remains steadfast is the need for international cooperation, which is critical to addressing the global challenges posed by AI.

What role do you think industry self-regulation and ethical guidelines should play alongside government regulations in shaping the responsible development of AI?

Industry-led initiatives can be nimble and move quickly, adapting practices in a much shorter timeframe than it takes for a government to pass laws. For example, industry groups can develop best practices for bias detection in AI algorithms and share these practices with their members. In addition, ethical guidelines can raise awareness about the potential risks of AI and promote responsible development. These guidelines can also mandate impact assessments to evaluate risks and benefits, helping set expectations for how AI systems should be designed, developed, and deployed.

However, self-regulation has its limitations. While it encourages innovation, there is no guarantee that companies will follow voluntary, self-imposed guidelines or maintain ethical boundaries. Companies may be reluctant to implement practices that could make their AI systems less effective, less profitable, or more expensive to develop. And self-regulation may not be enforced effectively, as there is no clear way to hold companies accountable for violating industry guidelines.

That’s where government regulation comes in: it can address these issues through fines or penalties for non-compliance, a deterrent for companies that might otherwise violate the rules.

While industry self-regulation and ethical guidelines are pivotal in shaping the responsible development of AI, the ideal approach is probably a combination of the two alongside government regulation. That combination can help ensure AI is developed and used in a way that benefits society while mitigating risk. There is a need for ongoing, open dialogue among industry, government, and civil society stakeholders about the responsible development of AI.

As AI systems become more advanced and ubiquitous, what are the potential risks or unintended consequences that regulators need to be vigilant about?

Large language models (LLMs), which underpin many of today’s AI systems, are data-driven applications that can create unique and personalized experiences. However, this also heightens concerns about consumer data privacy and security vulnerabilities, raises doubts about adherence to laws and regulations, and, depending on the data source, can amplify negative decision outcomes through biased outputs.

Regulators need to be vigilant about these risks and take steps to mitigate them. Suggestions include developing ethical standards, requiring regular AI system risk audits, and increasing enforcement of existing laws and regulations.

If organizations do not take precautions to self-regulate and follow established guidelines, they may be at risk of allowing harmful AI use cases to proliferate. Some examples from the consumer finance landscape include product steering, discriminatory pricing, unfair credit rationing, exclusionary filtering, and digital redlining. Similarly, if companies do not ensure their AI systems are routinely updated and fine-tuned, they may face unintended consequences; just look at what happened when users tricked a chatbot into agreeing to sell a Chevy Tahoe for $1. This underscores why AI systems must be properly vetted to ensure compliance with the most current guidance and protocols.


Looking ahead, what are the emerging AI trends or applications that you believe will require particular regulatory attention or new governance models in the near future?

Looking ahead, special attention must be paid to how scammers and fraudsters use AI to manipulate and exploit vulnerable consumers. Speech deepfakes are particularly worrisome, as these technologies can easily create a synthesized voice that sounds like a loved one or a well-known individual such as a celebrity or politician.

In fact, Gartner predicts that within two years, 30% of enterprises will consider identity verification and authentication solutions unreliable due to AI-generated deepfakes. Thankfully, legislation coupled with regulatory action has begun to address this issue, but there’s still the matter of catching the bad actors before the damage is done.

What we will likely see in the coming years is further modification of compliance policies to accommodate evolving AI risks. Companies will also lean on tools to help them comply with the changing laws. These forthcoming regulatory policies will most likely include components focused on transparency and explainability, ethical AI, data privacy and security, antitrust and competition, liability and accountability, and cross-border collaboration. The regulatory landscape will change rapidly as we learn more about the potential and pitfalls of AI.
