by Navaneeth Dontuboyina (’24) | February 3, 2023
In 2015, influential tech figures such as Elon Musk, Peter Thiel, and Sam Altman created OpenAI, a non-profit research organization dedicated to developing artificial intelligence responsibly. The organization promised a collaborative, transparent effort to work with other companies, institutions, and individual researchers to harness AI for the common good, combating early concerns of AI “taking over the world.”
Thanks to these noble intentions, OpenAI has received generous donations over the last seven years from companies like Microsoft, as well as the commitment of leading AI research experts. OpenAI has developed different AI models with unique niches, but the most successful products have been OpenAI’s generative models such as DALL-E and the GPT-n series, which take prompts as input and return human-like responses. Recently, OpenAI’s newest language model ChatGPT has taken the world by storm, amassing over one million users only five days after its release in November 2022.
A key component of ChatGPT is GPT-3, short for Generative Pre-trained Transformer 3. The Transformer architecture, in the most basic sense, is a deep learning neural network built around a mechanism called attention, which weighs how strongly each word in a passage relates to every other word, allowing the model to process language far more efficiently than earlier designs. Stacking many of these Transformer layers produces a natural language processing system: GPT-3, trained on enormous amounts of text from the internet, crafts a string of words to answer a prompt in a human-like manner.
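To make the attention idea above concrete, here is a minimal sketch of scaled dot-product attention, the core operation of a Transformer. This is an illustrative toy in NumPy, not OpenAI's implementation; the function name and the random token vectors are assumptions for the example.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Each query attends to all keys; the output is a
    similarity-weighted mix of the value vectors."""
    d_k = K.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)  # how strongly each word relates to each other word
    # softmax turns raw scores into weights that sum to 1 per row
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights = weights / weights.sum(axis=-1, keepdims=True)
    return weights @ V

# Three 4-dimensional token representations attending to one another
rng = np.random.default_rng(0)
X = rng.normal(size=(3, 4))
out = scaled_dot_product_attention(X, X, X)
print(out.shape)  # (3, 4): one updated vector per token
```

A real GPT model stacks dozens of such layers, each with many attention "heads" and learned projections, but the weighted-mixing principle is the same.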
ChatGPT’s intelligence can be used in everyday society, from explaining complex mathematical calculations to serving as a pseudo-therapist. However, the overarching issue surrounding ChatGPT is its potential corruption of information. One particular problem is a phenomenon called “artificial hallucination,” which occurs when ChatGPT tries to provide a diverse set of answers to mimic a unique human response, but instead weaves in false information that appears convincing.
ChatGPT’s role in education has also been a growing concern, since utilizing AI-generated answers for assignments can introduce potentially biased information into academia and compromise academic integrity. According to Dan Gillmor, a professor of journalism at Arizona State University, “academia has some very serious issues to confront.”
There have been measures taken to combat these negative consequences. In January, the International Conference on Machine Learning banned all ChatGPT-written papers. During the same month, the New York City Department of Education restricted access to ChatGPT and other OpenAI sites on school premises.
While ChatGPT is sometimes viewed as a hyper-intelligent machine, it has also been labeled a “stochastic parrot” by researchers. In truth, these varying reactions are due to ChatGPT and the entire AI industry’s extreme infancy. Only time will reveal the future implications of this rapidly advancing technology.
gnaganab • Feb 4, 2023 at 5:57 am
It is easy to ignore the risks in enthusiastic hype. I am intrigued by the usefulness and human-like responses of ChatGPT. But as noted in the article, the risks are powerful as well. So, it is critical that powerful tools like ChatGPT come with inbuilt safeguards from day one.
Very informative and eye-opening article, thank you!