Making Sense of Artificial Intelligence

Sudeep Kesh
Chief Innovation Officer,
S&P Global Ratings
sudeep.kesh@spglobal.com
Nathan Hunt
Head of Content, Social Media & Digital Marketing
S&P Global
nathan.hunt@spglobal.com

Generative AI has a moment 

On the afternoon of Feb. 9, 1964, a group of young men from Liverpool, England, ducked out of the chill wind and freezing temperatures of Manhattan’s theater district into the gothic vestibule of CBS Television’s Studio 50. That night, the Beatles made their first appearance on The Ed Sullivan Show, debuting in front of an American audience of 73 million. This moment has come to be regarded as a watershed event, marking a decisive shift in the course of cultural history.  

On Nov. 30, 2022, an American artificial intelligence research laboratory called OpenAI announced the release of a large language model-based chatbot called ChatGPT. The new chatbot, available only as an early release, was a sibling to OpenAI’s earlier model, InstructGPT. It allowed users to interact conversationally by giving prompts and asking follow-up questions. The release of ChatGPT introduced a wider audience to a new type of technology known as generative artificial intelligence. 

Events can appear singular, but they are not. Both the debut of the Beatles for an American audience and the release of ChatGPT were the products and culminations of a series of preceding events, stories and developments. The Beatles provided a new take on existing American music, reinterpreting and repackaging elements from the blues and other Black musical traditions, via Liverpool and Hamburg. Similarly, OpenAI did not invent the transformer technology on which generative AI is based. ChatGPT wasn’t the first chatbot, but its ease of use and user-friendly interface drew widespread attention.  

Even when we feel history in the making, we cannot know the full implications for society. But it’s clear that we are living through another watershed event. Just as the Beatles redefined rock music, the advent of commercially available generative AI is poised to permanently change the landscape of human-machine interactions. Teenagers may not be screaming in ecstasy, but the cultural impact will be similarly large. 

AI in imagination and reality 

At 2:14 a.m. ET on Aug. 29, 1997, Skynet became self-aware. Except, of course, it didn’t. Moments in time are dramatic in movies. But actual advancement in computer science is a slow and laborious process. Understanding that process allows us to evaluate the progress represented by generative AI in proper context.  

Generative AI doesn’t really make sense without an understanding of the near-70-year history of the development of artificial intelligence. The term “artificial intelligence” is believed to have been coined by John McCarthy in 1956 at the first academic conference on the subject. He defined artificial intelligence as “the construction of computer programs that engage in tasks that are currently more satisfactorily performed by human beings because they require high-level mental processes such as: perceptual learning, memory organization, and critical reasoning.” Conceptually, the idea of artificial intelligence was exciting, although the goal exceeded the abilities of computers and computation systems at that time.  

However, as history has shown, McCarthy and his contemporaries were not wrong, just early. They were building on the work of groundbreaking scientists John von Neumann and Alan Turing, who translated computational technology from decimal logic (values from 0 to 9) into binary logic (using Boolean algebra, or the chains of 1s and 0s we know today). Despite this impressive intellectual feat, Alan Turing is best known today as the inventor of the “Turing test.” Turing proposed that one way to evaluate a machine’s ability to exhibit intelligent behavior would be to have a human evaluator hold conversations with both a computer and a human interlocutor. If the evaluator could not reliably tell whether they were talking to a human or a machine, the machine should be considered to have successfully imitated human intelligence. Turing, it should be noted, made no claim that passing his test demonstrated actual artificial intelligence. 

From the 1950s, engineers were designing, building and deploying machines and robots to complete repetitive tasks, primarily in industrial and defense applications deemed too dangerous for humans. This proved useful in the then-booming automotive industry. In fact, the word “robot” is derived from the Czech word “robota,” meaning forced labor, used in a 1921 play by Karel Capek to refer to an army of manufactured workers. Within a decade or so of those first industrial deployments, robots were commonly used in heavy lifting and welding operations in manufacturing. Advancements in computing continued over the ensuing decades, amplifying both the range of machine applications and the fear of robotic devastation at the hands of Terminator-like super-beings. 

Generative AI is different, but only a little bit  

The excitement, fear and controversy over ChatGPT are just the latest manifestations of a long-standing human fear that the tools we develop will overthrow us. Discussions of artificial intelligence have moved out of the academy and into the boardroom, and from there they have spilled out to the proverbial workplace water cooler. To temper both the hype and the fear, it’s useful to explore what is different about ChatGPT versus the many AI-based applications that preceded it. 

The “GPT” in ChatGPT stands for “Generative Pre-trained Transformer.” The transformer is a neural network architecture, or model, that finds patterns in text to derive meaning. The technology was developed by researchers at Google roughly five years before the release of ChatGPT to address the limitations of recurrent neural network (RNN) models. Before transformers, AI processed words one at a time, which was slow and lacked larger context. When you read this article, you may be reading the words in approximate order, but you are also aware of the topic of the article, which serves as context. The transformer architecture developed at Google uses a concept called “attention.” Attention allows large language models (LLMs) to understand the relationships between words collectively, rather than one word at a time, amplifying both the speed and accuracy of training models used to “read and understand text” or “process and understand audio information.” This ability to process an enormous amount of information simultaneously gives the technology an advantage over previous generations of AI. 
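
To make “attention” a little more concrete, the short Python sketch below implements single-head scaled dot-product attention with NumPy. It is purely illustrative, not Google’s or OpenAI’s code: the toy random embeddings and weight matrices stand in for the representations a real transformer would learn during training.

import numpy as np

def scaled_dot_product_attention(X, Wq, Wk, Wv):
    # Project each token embedding into queries, keys and values.
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    # Score every token against every other token, scaled for numerical stability.
    scores = Q @ K.T / np.sqrt(K.shape[-1])
    # Softmax each row so the scores become attention weights that sum to 1.
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights = weights / weights.sum(axis=-1, keepdims=True)
    # Each output row mixes all value vectors, so every token "sees" the whole sequence at once.
    return weights @ V

rng = np.random.default_rng(0)
seq_len, d_model = 4, 8                                   # a toy 4-token sequence of 8-dim embeddings
X = rng.normal(size=(seq_len, d_model))
Wq, Wk, Wv = (rng.normal(size=(d_model, d_model)) for _ in range(3))
print(scaled_dot_product_attention(X, Wq, Wk, Wv).shape)  # -> (4, 8)

Because the attention weights for all pairs of tokens are computed in a single matrix operation, the whole sequence is processed in parallel rather than word by word, which is the speed advantage described above.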

Google’s research gave birth to a sub-field of AI called “generative AI.” Google and others applied generative AI to a corpus of nearly all the English-language text available on the internet to train LLMs to recognize patterns across a whole host of topics and subjects. Once these large language models were trained, they became powerful tools. Generative AI powers chatbots such as ChatGPT and Google’s Bard, and conversational AI applications such as IBM Watson Assistant, but it can also be used to verify humans on websites, or to help us autocomplete sentences on our computers and smartphones.
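
For a sense of what a pre-trained generative model does in practice, the sketch below asks a small, publicly available model to continue a prompt. It assumes the open-source Hugging Face transformers library and uses GPT-2 purely as a convenient stand-in; it is not the model behind ChatGPT or Bard, and the prompt is only an example.

# Illustrative only: prompt a small pre-trained generative language model.
# Assumes the open-source "transformers" package (and a backend such as PyTorch) is installed.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")      # downloads a small pre-trained model
prompt = "Generative AI matters to financial markets because"
result = generator(prompt, max_new_tokens=40, num_return_sequences=1)
print(result[0]["generated_text"])                         # the prompt, continued by the model

The pattern is the same one a chatbot follows at much larger scale: the model predicts a plausible continuation of the text it is given, one token at a time, based on the patterns it absorbed during pre-training.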

Generative AI is important 

There is a concept in technology of the “killer app” — the practical application that allows an existing technology to break through and gain widespread adoption. For personal computers, the killer app was a spreadsheet program that simplified accounting processes. 

Artificial intelligence and generative AI encompass significantly more than the applications directly in front of us. Over the past several decades, artificial intelligence has been used in applications ranging from fraud detection and counterterrorism operations to translation services, genomics, autonomous driving and even space exploration. New developments such as the transformer technology discussed above are likely to be game changers as society continues to strive for AI-based solutions to problems such as climate change, disease, world hunger, education and income inequality, and the energy transition. 

The rising use and eventual ubiquity of generative AI technologies could also galvanize further development of processing technology to keep up with advancements in algorithms. Such advances could catalyze the progress of quantum technology and allow for “digital experiments” to simulate complex physical processes, such as nuclear power generation, without the danger of conducting these experiments in the physical world. Such an application of AI could meaningfully alter the conversation about energy production and generation in ways unimaginable today. 

Nevertheless, each application of these technologies will have its balance of promise and risk. Technologists are excited about generative AI, in part because the cyclical, recursive nature of application development will likely spawn larger, deeper and faster advancements — a pattern that could accelerate even further because large language models enable researchers to do more things more quickly. Thus, the technology itself may pave the way for future AI applications that are yet unknown. 

Artificial intelligence at S&P Global 

At S&P Global, we aim to facilitate a conversation with the market. The biggest need at this point is for transparency in the face of both the fear and the hype surrounding generative AI. At the same time, this landscape is ever-changing. Our view is that these technological advancements are as important to economic development as other major trends like globalization, social inclusion, and the climate and energy transition. We aim to provide this information in three parts: 

  1. AI Fundamentals — primers on fundamental technologies and how they can be used
  2. AI Applications — exploration of pertinent issues emanating from the use of AI in various sectors, geographies and economies, both current and prospective, and relevant opportunities and risks
  3. AI Governance and Regulation — considerations for governance structures that may help manage, defuse or at least come to terms with the various risk implications raised in the Fundamentals and Applications sections