
The AI Governance Challenge

Bruno Bastit
Director,
Global Corporate Governance Specialist,
Sustainability Research
S&P Global Ratings

bruno.bastit@spglobal.com


This article is written and published by S&P Global, as a collaborative effort among analysts from different S&P Global divisions. It has no bearing on credit ratings.

Published: November 29, 2023

Highlights

The rapid adoption of AI, and GenAI in particular, has laid bare the risks associated with the technology and the need for robust governance frameworks.

AI regulations and best practices are improving rapidly around the world but still lag the pace of the technology's development.

Establishing robust AI governance at the company level will require solid ethical foundations to ensure frameworks are risk-focused and adaptable.

Companies and their boards will have to manage increased pressure from regulators and shareholders to set up internal AI governance frameworks. Common frameworks are emerging, but their implementation remains limited.


The rapid rise of artificial intelligence (AI), and the recent development of generative AI (GenAI) in particular, have created excitement and concern in equal measure. The many risks associated with AI highlight the need for a solid governance ecosystem, including governance at a legal, regulatory, and company level. We explore how AI governance is shaping up to address the key risks and challenges linked with the technology.

 

What are the challenges? 

While AI's potential for doing good is virtually limitless, so is the potential for harm — intended or otherwise. The highly disruptive nature of the technology requires a solid, human-led governance ecosystem and regulations that ensure it can be deployed in a beneficial and responsible manner. The rapid rise of GenAI, in particular, highlights the urgent need for robust frameworks. Broad adoption of GenAI and concerns regarding its use may intensify efforts to address issues surrounding digital governance and accelerate the development of risk management strategies. 

As AI becomes increasingly autonomous and evolves into a general-purpose technology, issues of control, safety, and accountability come into the limelight. Different AI regulations around the world and collaborations between stakeholders in the private and public sectors make AI governance even more complex. Some of the key challenges regulators and companies will have to contend with include addressing ethical concerns (bias and discrimination), limiting misuse, managing data privacy and copyright protection, and ensuring the transparency and explainability of complex algorithms. The latter is particularly true in the case of foundation models, such as large language models (LLMs), as we explain in our article Foundation Models Powering Generative AI: the Fundamentals.

GenAI presents additional challenges, including issues of plagiarism and copyright infringement and, on a deeper level, a reinterpretation of the more fundamental concepts of truth and trust. GenAI's ability to create new text, image, audio, or video content that appears to be generated by humans challenges our perception of truth. For instance, deepfakes, which have a highly believable digital likeness to their original subjects, can tarnish a person's reputation, spread misinformation, and influence public opinion to sway elections. These highly realistic synthetic creations can cause significant societal and political harm by increasing general distrust of news or other content. Deepfakes can pose a security threat to governments and companies alike.

Many have also raised concerns about the potential existential threat of AI. In March 2023, a group of AI experts wrote an open letter that included a list of policy recommendations and advocated for pausing the further development of AI to better understand the risks it presents to society (Future of Life Institute, 2023).

Ethical considerations are at the core of AI governance

AI has raised many ethical dilemmas and considerations, from algorithmic biases to autonomous decision-making. We believe that, to be effective, AI regulation and governance must be principle- and risk-based, anchored in transparency, fairness, privacy, adaptability, and accountability. Addressing these ethical challenges through governance mechanisms will be key to achieving trustworthy AI systems. Effective AI governance that can accommodate present and future evolutions of AI will therefore require robust, flexible, and adaptable governance frameworks at company, sovereign, and global levels.

 

Navigating a growing and evolving AI regulatory landscape

The development of AI and GenAI has accelerated in recent years but has not yet been matched by a commensurate level of oversight, be it at a supranational, national, or company level. This is changing, albeit slowly.

Several international and national AI governance frameworks have emerged 

Over the past few years, several AI governance frameworks have been published around the world, aimed at providing high-level guidance for safe and trustworthy AI development (see Figure 1). A variety of multilateral organizations have published their own principles, such as the OECD's "Principles on Artificial Intelligence" (OECD, 2019), the EU's "Ethics Guidelines for Trustworthy AI" (EU, 2019), and UNESCO's "Recommendation on the Ethics of Artificial Intelligence" (UNESCO, 2021). The development of GenAI, however, has led to new guidance, including the OECD's recently published "G7 Hiroshima Process on Generative Artificial Intelligence" (OECD, 2023).

At a national level, several guidance documents and voluntary frameworks have emerged in the past few years, such as the "AI Risk Management Framework" from the US National Institute of Standards and Technology (NIST), voluntary guidance published in January 2023, and the White House's "Blueprint for an AI Bill of Rights," a set of high-level principles published in October 2022 (The White House, 2022). These voluntary principles and frameworks often serve as guidance for regulators and policymakers around the world. As of 2023, more than 60 countries in the Americas, Africa, Asia, and Europe had published national AI strategies (Stanford University, 2023).

AI regulations are few but proliferating fast

Governments and regulatory bodies in various countries have worked on AI-related policies and regulations to ensure responsible AI development and deployment. While few have been finalized, several AI-related regulations have been proposed around the world to block or limit AI's riskiest uses (see Table 1). They broadly coalesce around common themes such as transparency, accountability, fairness, privacy and data governance, safety, human-centric design, and oversight. Even so, the practical implementation of these regulations and policies will likely be challenging.

They often come on top of existing legislation on data privacy, human rights, cyber risk, or intellectual property. While these adjacent areas of law address some of the concerns associated with AI development, they do not provide a holistic approach to dealing with AI.

For instance, the EU AI Act, a regulatory framework due to be finalized by year-end 2023, is set to be the world's first comprehensive AI legislation. It is also likely to be the strictest, with potentially the biggest impact globally. The EU AI Act aims to provide a human-centric framework to ensure that the use of AI systems is safe, transparent, traceable, non-discriminatory, environmentally friendly, and in accordance with fundamental rights. The proposed rules follow a risk-based approach that aims to establish requirements that providers and users of AI systems must follow. Some practices, for instance, are classified as "unacceptable" and "prohibited," such as predictive policing systems or the untargeted scraping of facial images from the internet to create recognition databases. Other practices that can negatively affect safety or fundamental rights will be classified as "high risk," for instance if AI systems are used in law enforcement, education, or employment (EU, 2023). The EU AI Act and, for that matter, the proposed US AI Disclosure Act of 2023 also demand that AI-generated content be clearly labeled as such.
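To illustrate how such a risk-based approach could translate into day-to-day compliance work, the sketch below shows one way a company might encode an internal inventory of AI use cases against the Act's proposed risk tiers. This is a hypothetical Python example: the tier labels echo the proposal, but the class names, example use case, and obligation lists are illustrative assumptions rather than a rendering of the regulation itself.

# Hypothetical sketch (Python): encoding the EU AI Act's proposed risk tiers in an
# internal inventory of AI use cases. Tier names mirror the proposal; the example
# use case and obligation lists are illustrative assumptions, not legal guidance.
from dataclasses import dataclass
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "prohibited"   # e.g., untargeted scraping of facial images
    HIGH = "high risk"            # e.g., AI used in law enforcement, education, employment
    LIMITED = "limited risk"      # transparency obligations, e.g., labeling AI-generated content
    MINIMAL = "minimal risk"      # largely unregulated

OBLIGATIONS = {
    RiskTier.UNACCEPTABLE: ["do not deploy"],
    RiskTier.HIGH: ["conformity assessment", "risk management system",
                    "human oversight", "logging and traceability"],
    RiskTier.LIMITED: ["disclose that content is AI-generated"],
    RiskTier.MINIMAL: ["voluntary codes of conduct"],
}

@dataclass
class AIUseCase:
    name: str
    description: str
    tier: RiskTier

    def obligations(self) -> list:
        return OBLIGATIONS[self.tier]

# Example inventory entry: a CV-screening model used in hiring would fall under "high risk".
cv_screening = AIUseCase("CV screening model", "Ranks job applicants for recruiters", RiskTier.HIGH)
print(cv_screening.obligations())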

China has also been active in launching principles and regulations, from the State Council's "New Generation Artificial Intelligence Development Plan" in 2017 to the "Global AI Governance Initiative" and the recently enacted "Interim Administrative Measures for the Management of Generative AI Services." The latter two represent milestones in AI governance. In the US, the two main pieces of proposed legislation at the federal level are the "Algorithmic Accountability Act" and the "AI Disclosure Act," both of which are under discussion. On Oct. 30, 2023, US President Joe Biden issued an executive order on the "Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence" to create safeguards (The White House, 2023b). Similar regulations and policies are being developed or discussed in Canada and Asia.

AI is becoming ubiquitous. International coordination and collaboration to regulate the technology and reach some form of policy harmonization are of utmost importance but will take time. Nevertheless, 28 countries plus the EU pledged to work together to address the risks posed by AI during the first AI Safety Summit in the UK in November 2023 (Bletchley Declaration, 2023).

Table 1: Key AI regulatory developments around the world

Region   | Country | Regulation                                                                    | Status
Americas | US      | Algorithmic Accountability Act of 2023 (H.R. 5628)                            | Proposed (Sept. 21, 2023)
Americas | US      | AI Disclosure Act of 2023 (H.R. 3831)                                         | Proposed (June 5, 2023)
Americas | US      | Digital Services Oversight and Safety Act of 2022 (H.R. 6796)                 | Proposed (Feb. 18, 2022)
Americas | Canada  | Artificial Intelligence and Data Act (AIDA)                                   | Proposed (June 16, 2022)
Europe   | EU      | EU Artificial Intelligence Act                                                | Proposed (April 21, 2021)
Asia     | China   | Interim Administrative Measures for the Management of Generative AI Services  | Enacted (July 13, 2023)

Source: S&P Global

 

Regulating AI may require a paradigm shift

The increasing ubiquity of AI requires regulators and lawmakers to adapt to a new environment and potentially change their way of thinking. The examples of frameworks and guardrails for the development and use of AI mentioned above are ultimately aimed at companies and their employees. But as AI gains in autonomy and intelligence, it raises an important question: How can one regulate a "thinking" machine? 

 

Companies are under increased pressure to set up AI governance frameworks 

Calls for companies to manage AI-related risks have grown louder, both from a developer and a user perspective. Until recently, AI developers bore the main responsibility for agreeing on safeguards to limit the risks of the technology. For instance, seven major US developers — Amazon, Anthropic, Google, Inflection, Meta, Microsoft, and OpenAI — agreed in a meeting with President Biden in July 2023 to commit to certain standards and implement guardrails (The White House, 2023a).

However, companies across all sectors are now being asked to explain how they use AI. The speed and scale of GenAI adoption have shown the scope of enthusiasm for the technology. Over 100 million users signed up to use OpenAI's ChatGPT in the first two months alone. Yet these developments have also laid bare many of GenAI's pitfalls, such as data privacy concerns and copyright infringements, which have already led to several legal actions.

Beyond that, shareholder pressure is also picking up, with the first AI-focused shareholder resolutions being filed at some US companies. We expect this trend to continue during next year's proxy season. For example, Arjuna Capital recently filed a shareholder proposal at Microsoft, while the American Federation of Labor and Congress of Industrial Organizations (AFL-CIO), the largest federation of trade unions in the US, filed shareholder resolutions at Apple, Comcast, Disney, Netflix, and Warner Bros. Discovery requesting more transparency on the use of AI and its effects on workers. Prior to that, Trillium Asset Management had filed a shareholder resolution for the same reasons at Google's annual general meeting in 2023.

Common practices emerge, but implementation remains limited

Companies are only starting to consider what AI and GenAI mean for them, and so far few have made much progress on AI governance. Nevertheless, the common thread running through most global AI frameworks and principles is that companies must take an ethical, human-based, and risk-focused approach when building AI governance frameworks. For instance, NIST's "AI Risk Management Framework" provides guidance on AI risk management (NIST, 2023) that helps shape corporate policies. We have observed some common practices among the limited number of companies that have already established internal frameworks. They typically focus on the following fundamental principles:

  • Human centrism and oversight

  • Ethical and responsible use

  • Transparency and explainability

  • Accountability, including liability management

  • Privacy and data protection  

  • Safety, security, and reliability


Corporate AI governance is still in its infancy, but we believe internal frameworks that incorporate these elements are better placed to mitigate AI-related risks and respond to future regulatory pressure. 

Robust AI governance at a company level must be ethical, risk-focused, and adaptable

As companies start to explore how to use AI and GenAI, as well as how to develop their own AI systems, the business implications of the technology, both positive and negative, have moved up on companies' agendas. We believe efficient management of the key risks associated with AI requires AI governance frameworks that are based on ethical considerations. We believe ethical review boards, impact assessments, and algorithmic transparency will help ensure ethical AI development and deployment. 

Additionally, companies' AI governance frameworks must be flexible enough to adapt to new regulations and technological developments. In our recent paper "Future Of Banking: AI Will Be An Incremental Game Changer," we identify a series of mitigation strategies to address AI-related concerns, which are not only relevant for banks but could also apply to other sectors. These strategies include algorithmic impact assessments to address ethical concerns, as well as explainable AI processes and outputs.
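As a simple illustration of what explainable AI processes and outputs can mean in practice, the sketch below attaches per-decision feature attributions to a model's predictions. It is a minimal example assuming a scikit-learn model and the open-source shap library, neither of which is referenced in the strategies above; the data and model are placeholders.

# Minimal sketch: per-decision explanations for a model's outputs, assuming
# scikit-learn and the open-source shap library (illustrative choices only).
import numpy as np
import shap
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor

# Toy data standing in for, say, a credit-scoring dataset.
X, y = make_regression(n_samples=500, n_features=6, random_state=0)
model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

# Compute feature attributions for a batch of decisions. These artifacts can be
# retained alongside each output as part of an algorithmic impact assessment.
explainer = shap.Explainer(model)
explanation = explainer(X[:10])

for i, attributions in enumerate(explanation.values):
    top = int(np.argmax(np.abs(attributions)))
    print(f"decision {i}: strongest driver = feature {top}")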

Balancing innovation and control

As with most technology revolutions, success relies heavily on first-mover advantage and standard setting. We believe developers of AI platforms, models, and apps focus strongly on innovating, be it collaboratively or in a proprietary setting, and on building new products to improve productivity and our day-to-day lives. Since efficient AI governance frameworks are in their best interest, they proactively offer input and feedback on how AI should be regulated. Some key considerations for AI industry participants include:

  • How can we develop ethical standards that allow AI participants to innovate while ensuring safety risks are managed? Should companies establish internal AI governance processes to ensure these standards are properly implemented? IBM may have paved the way in this regard, with the appointment of a lead AI ethics official in charge of responsible AI and the establishment of an AI ethics board.

  • Should providers of AI platforms, models, and apps be responsible for all content produced on their platforms? Will they be responsible for detecting AI-generated content and authenticating official content? The implications of this question are reflected in Section 230 of the Telecommunications Act of 1996, which generally provides online platforms with immunity for third-party content generated by their users, a provision that remains controversial and is central to the business models of platforms such as Facebook, YouTube, and X (formerly Twitter).

  • How can we ensure AI governance is in line with public, national security, and international competition standards?

The focus on AI regulation and governance at such an early stage of development speaks to the technology's significant implications, both good and bad. Getting a grip on them is difficult, as illustrated by the unimpeded growth and influence of tech companies such as Meta, Alphabet, Amazon, and Apple. 

Effective AI oversight starts with company boards 

The board of directors plays a critical role in identifying strategic emerging opportunities and overseeing risks, all the more so for those associated with AI. The board's responsibility is to supervise management, assess how AI might influence the corporate strategy, and consider how the company can effectively handle risks, particularly those that pose a threat to the company's clients, employees, and reputation. As such, it is important for company boards to assess and understand the implications of AI for their strategy, business model, and workforce.

As with many emerging themes, such as cyber risk, effective oversight of AI will require company boards to become knowledgeable about the technology. We think a working understanding of AI, together with a solid and comprehensive monitoring process to ensure accountability, is a crucial prerequisite for overseeing the establishment of robust, risk-based AI management frameworks at the executive level.

We believe effective AI governance models will likely take a holistic approach — from developing internal frameworks and policies, to monitoring and managing the risks from the conceptual design phase to the use phase. Such mechanisms would ensure accountability and transparency in AI systems and would help address the challenges of auditing and verifying complex AI algorithms and decision-making processes.
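To make this more concrete, the sketch below shows one possible building block of such an accountability mechanism: an audit record created for every automated decision, capturing the model version, a hash of the inputs, the output, and any human reviewer. It is a hypothetical, simplified example; the field names, hashing choice, and storage approach are assumptions rather than a recommended design.

# Hypothetical sketch: a per-decision audit record supporting accountability,
# traceability, and later review. All names and fields are illustrative.
import hashlib
import json
from dataclasses import asdict, dataclass
from datetime import datetime, timezone
from typing import Optional

@dataclass
class DecisionRecord:
    model_name: str
    model_version: str
    input_hash: str            # hash of the inputs, not the raw data, to limit privacy exposure
    output: str
    human_reviewer: Optional[str]
    timestamp: str

def log_decision(model_name: str, model_version: str, payload: dict,
                 output: str, human_reviewer: Optional[str] = None) -> DecisionRecord:
    """Create a reviewable record of one automated decision."""
    record = DecisionRecord(
        model_name=model_name,
        model_version=model_version,
        input_hash=hashlib.sha256(json.dumps(payload, sort_keys=True).encode()).hexdigest(),
        output=output,
        human_reviewer=human_reviewer,
        timestamp=datetime.now(timezone.utc).isoformat(),
    )
    # In practice the record would go to tamper-evident storage; here we simply print it.
    print(json.dumps(asdict(record), indent=2))
    return record

log_decision("credit_screening", "1.4.2",
             {"applicant_id": "A-123", "income": 54000},
             output="refer to human underwriter",
             human_reviewer="analyst_017")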

 

Looking forward  

According to Professor Melvin Kranzberg's first law of technology, "technology is neither good nor bad, nor is it neutral." This is particularly applicable to AI. So far, regulators and policymakers have struggled to keep pace with the rapid development of AI, while the more recent evolution of GenAI has added further pressure to act now. Many countries have already published their own national AI strategies, and several pieces of legislation and regulation are being developed to provide guardrails.

To ensure safe AI adoption, companies must establish ethical guidelines and robust risk management frameworks. In light of the technology's somewhat unpredictable development, real-time, human-led oversight will be more important than ever. 

 


Contributors

Miriam Fernández, CFA
Associate Director,
S&P Global Ratings
miriam.fernandez@spglobal.com

Sudeep Kesh
Chief Innovation Officer,
S&P Global Ratings
sudeep.kesh@spglobal.com

David Tsui
Managing Director,
Technology Sector Lead,
S&P Global Ratings
david.tsui@spglobal.com