What is ChatGPT?
ChatGPT is an
advanced artificial intelligence (AI) system, a language model
designed to understand natural language inputs and respond
appropriately. It was created by OpenAI,
an artificial intelligence company based in San Francisco, whose
stated aim is to develop artificial
general intelligence that benefits all of humanity. They have
made ChatGPT freely available to anyone who wants to interact with
it.
I strongly
recommend that anyone who wants to explore ChatGPT's capabilities,
peculiarities and limitations do so by navigating to openai.com,
creating a free account, and chatting with it. Free access to such an
advanced AI is not likely to last indefinitely.
What does it do?
First, it
understands natural human language. You don't have to phrase a
question, request or comment any differently than you might
when speaking with another person.
Secondly,
it responds in natural language that is clear, well-organized
and almost always on-target.
Thirdly, it's consistently helpful.
Here's an example:
Input: I was
thinking about rich, nutty homemade bread the other day.
ChatGPT: That
sounds delicious! Homemade bread can be a real treat, especially when
it's fresh and warm out of the oven. There's something so satisfying
about the process of making bread from scratch and the aroma that
fills your home while it bakes.
If you're interested in making your own rich, nutty bread at
home, there are a few things you can do to achieve that flavor and
texture. Here are some tips . . .
And finally, it
remembers what you're talking about, so there's a natural continuity in the
conversation.
In other words,
you can have an easy, meaningful and useful exchange with it on
almost any subject.
But that's just
for starters. It can also write creative (although not yet brilliant
or profound) fiction and poetry, communicate in or translate between
dozens of languages, and program or debug in dozens of computer
languages. And, as it reminds me:
In addition to the abilities you mentioned, I can also perform
a variety of other tasks, such as:
- Summarizing long texts or articles into shorter versions
- Generating responses for customer service queries
- Analyzing large datasets and identifying patterns or insights
- Recommending products or services based on user preferences or past behavior
- Generating personalized content, such as news articles or social media posts
- Creating chatbots or virtual assistants for businesses or individuals
- Conducting sentiment analysis to gauge public opinion on a particular topic
- Providing automated language translation in real-time for conversations or meetings
- Assisting with medical diagnoses and treatment plans
- Supporting research by analyzing scientific papers and generating insights.
How does it do all that?
I would need to be
a lot smarter than I am to answer that
question in depth. I'll note that ChatGPT is happy to discuss
technical aspects of itself, so if you're curious you can ask it yourself. (If you're
seriously technically minded, you can read a challenging but
informative piece on the subject by polymath Stephen
Wolfram here.)
In its current
form ChatGPT consists of a neural network with 96 layers and a total
of 175 billion parameters that were adjusted as it learned to process
natural language inputs and transform them into appropriate natural
language responses. It's significant that although ChatGPT's
architecture was designed by humans and the texts that it read were
chosen by humans, the system trained itself through a process called
unsupervised learning.
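For readers who like to see where a number like 175 billion comes from, here's a rough back-of-the-envelope sketch in Python. The 96 layers come from the paragraph above; the model width of 12,288 and the "12 x layers x width squared" rule of thumb are drawn from the published description of the largest GPT-3 model, so treat this as an illustration rather than a statement about ChatGPT's exact internals.

```python
# Back-of-the-envelope estimate of the parameter count for a 96-layer transformer.
# Assumptions: model width d = 12288 (the figure reported for the largest GPT-3
# model) and the common approximation that each layer holds about 12 * d**2
# weights (the attention projections plus the feed-forward block).

layers = 96
d_model = 12288                      # assumed width, not stated in this article

params_per_layer = 12 * d_model ** 2
total_params = layers * params_per_layer

print(f"~{total_params / 1e9:.0f} billion parameters")   # prints ~174 billion
```

The point isn't the exact number; it's that a few architectural choices (depth and width) are enough to put the parameter count in the hundreds of billions.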
As a point of
reference, those 175 billion adjustable parameters are about
twice as many as there are neurons in a human brain, but perhaps
1000
times fewer than the estimated number of synapses that connect brain
cells. ChatGPT may have fewer parameters than we have synapses,
but it operates far faster. Neurons operate at a few hundred cycles
per second while the servers that support ChatGPT work at a few billion
cycles per second. ChatGPT is fast, typically zipping out its
responses faster than you can read them.
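To make those comparisons concrete, here's the arithmetic as a tiny Python sketch. The brain figures -- roughly 86 billion neurons and something on the order of a hundred trillion or more synapses -- are commonly cited estimates, not precise counts.

```python
# Rough ratios behind the comparison above. The brain figures are commonly
# cited estimates, so read the outputs as orders of magnitude, not exact values.

chatgpt_params = 175e9     # adjustable parameters
brain_neurons = 86e9       # estimated neurons in a human brain
brain_synapses = 100e12    # low-end estimate of synapses (often given as 10^14 to 10^15)

neuron_rate_hz = 200       # neurons fire at most a few hundred times per second
server_clock_hz = 3e9      # a typical server CPU runs at a few billion cycles per second

print(f"parameters vs neurons:   {chatgpt_params / brain_neurons:.1f}x")
print(f"synapses vs parameters:  {brain_synapses / chatgpt_params:.0f}x or more")
print(f"server clock vs neurons: {server_clock_hz / neuron_rate_hz:.0e}x")
```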
Those numbers are
relevant because both ChatGPT and the human brain do what they do --
learn, store information, process verbal inputs and transform them into
outputs -- through the extremely complex interactions among those
billions of parameters or trillions of neurons and synapses, all
organized
in a complicated architecture.
For example,
everything you know is stored in your neurons and the connections
between them. It's the same for ChatGPT. Although it was trained by
access to vast quantities of information on the internet and other
sources, that access stopped in 2021. So everything it knows - and
that's an enormous amount - isn't stored in some kind of database
or searched for on the internet - it's represented abstractly in those billions of
parameters and the connections between them.
ChatGPT's name
helps us understand it a bit more. GPT stands for Generative Pre-trained
Transformer.
Pre-trained
means that it learned about the structure of language and everything else it knows - that is, those 175 billion parameters were
adjusted and fine-tuned - by the system working its way through
vast amounts of texts from the internet and other sources. In effect,
it built an internal model of human language and much of what
language has been used to express.
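To give a feel for what "working its way through vast amounts of texts" means mechanically, here's a heavily simplified Python (PyTorch) sketch of the next-token-prediction objective used to pre-train models in the GPT family. The tiny model, the random "corpus" and the settings below are placeholders for illustration; they are nothing like ChatGPT's actual training setup.

```python
# Minimal sketch of next-token-prediction pre-training (illustrative only).
import torch
import torch.nn as nn

vocab_size, d_model, context = 100, 32, 8   # toy sizes, not ChatGPT's

class TinyLM(nn.Module):
    """A toy language model: embed tokens, run one transformer layer with a
    causal mask (each position sees only earlier tokens), predict the next token."""
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        self.block = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.head = nn.Linear(d_model, vocab_size)

    def forward(self, tokens):
        mask = nn.Transformer.generate_square_subsequent_mask(tokens.size(1))
        return self.head(self.block(self.embed(tokens), src_mask=mask))

model = TinyLM()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# A fake "corpus" of random token ids standing in for real text.
corpus = torch.randint(0, vocab_size, (64, context + 1))

for step in range(100):
    inputs, targets = corpus[:, :-1], corpus[:, 1:]   # learn to predict the NEXT token
    logits = model(inputs)                            # (batch, context, vocab_size)
    loss = loss_fn(logits.reshape(-1, vocab_size), targets.reshape(-1))
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

The real system differs in scale and in countless details, but the core idea is the same: repeatedly predict the next token and nudge the parameters whenever the prediction is wrong.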
Transformer
means that the architecture of the system - what those 96
different layers do and how they interconnect - was designed to
allow it to pay attention to a number of key elements of sequential
input at the same time, and analyze how they go together. Here's
ChatGPT's summary:
Overall, the
Transformer architecture is characterized by its ability to perform
efficient attention-based computations over sequences of variable
length, and its ability to model long-range dependencies in
sequential data. This makes it particularly well-suited for natural
language processing tasks, such as language translation and text
generation.
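For the more technically curious, here's a minimal Python (NumPy) sketch of scaled dot-product attention, the core operation behind that ability to weigh different parts of the input against each other. The three-token example and its random vectors are invented purely for illustration.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Each row of the output is a weighted mix of the value vectors V,
    with weights given by how strongly each query matches each key."""
    d_k = K.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                     # pairwise relevance scores
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)      # softmax over the sequence
    return weights @ V, weights

# Toy example: a "sentence" of 3 tokens, each represented by a 4-dimensional vector.
rng = np.random.default_rng(0)
X = rng.normal(size=(3, 4))

# In a real transformer Q, K and V come from learned projections of X;
# random matrices stand in for those learned weights here.
Wq, Wk, Wv = (rng.normal(size=(4, 4)) for _ in range(3))
output, weights = scaled_dot_product_attention(X @ Wq, X @ Wk, X @ Wv)

print(weights.round(2))   # each row sums to 1: how much each token "attends" to the others
```

Stacking 96 layers of this kind of computation (plus the feed-forward and normalization steps that surround it) is, roughly speaking, what lets the model relate words that are far apart in a passage.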
Here's an
example, using a Jeopardy question:
Input:
Look for Aretha, Ray, James Brown, Cab Calloway, and, of
course, Twiggy and Frank Oz in this 1980 film comedy.
ChatGPT: The 1980 film comedy that
features Aretha Franklin, Ray Charles, James Brown, Cab Calloway,
Twiggy, and Frank Oz is "The Blues Brothers". Directed by
John Landis, the film stars John Belushi and Dan Aykroyd as "Joliet"
Jake and Elwood Blues, who set out on a mission from God to reunite
their band and save the orphanage where they were raised. Along the
way, they encounter a number of musical legends, including the ones
you mentioned.
You can see how
effectively ChatGPT extracted the key part of the input - that
we're seeking the name of a film from a particular year with
particular actors. It then gives us a sample of how much it knows in
its accurate, detailed and focused response.
Generative
means that the system is capable of creating text that makes sense.
Here text might mean a short answer, a summary, an essay, a computer
program, a poem, a short story - you name it.
You
may have heard that ChatGPT generates its responses by "just"
figuring out what word "should" follow the previous word. More
accurately, ChatGPT takes into account a whole sequence of words,
much as we might read an entire sentence or paragraph before responding. It then chooses what word or words come next on
the basis of all the information and relationships stored in those 175 billion internal parameters.
In other words, there's an enormous amount of knowledge and
information processing hidden in those words "just" and "should."
"This
is possible," ChatGPT says, "because the model has learned the statistical patterns
and structures of language during pre-training, which it can use to
generate new text."
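Here's a toy Python sketch of that loop: at each step the "model" assigns a probability to every word in its vocabulary based on the whole context so far, one word is chosen, and the process repeats. The miniature vocabulary and the fake probability function below stand in for the real 175-billion-parameter network.

```python
import numpy as np

VOCAB = ["bread", "warm", "nutty", "oven", "delicious", "."]
rng = np.random.default_rng(42)

def toy_model(context):
    """Stand-in for the real network: returns a probability for every word in
    the vocabulary. A real model would condition these probabilities on the
    entire context; here they're essentially random."""
    logits = rng.normal(size=len(VOCAB)) + 0.1 * len(context)
    return np.exp(logits) / np.exp(logits).sum()

context = ["homemade"]
for _ in range(6):
    probs = toy_model(context)                  # look at the whole sequence so far
    next_word = rng.choice(VOCAB, p=probs)      # sample a word (not always the top one)
    context.append(str(next_word))

print(" ".join(context))
```

The key point is that each choice is conditioned on everything generated so far, which is why the output hangs together as sentences and paragraphs rather than a string of disconnected words.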
What does it know?
I continue to be
amazed by how much factual information ChatGPT knows - from pop culture to quantum
physics, from cooking to coding, not to mention dozens of different
human and computer languages. Again, it doesn't have current access
to the internet or other external sources of information, but it does
manage to store an incredible amount of detailed information in those
175 billion parameters.
It knows a lot,
and presents what it knows in a well-organized, authoritative way.
That's why it concerns educators, since it can turn out high-school
or even college-level essays on just about any subject in seconds.
It's already passed the US Medical Licensing Examination, earned an A+ on a microbiology quiz, and passed a Wharton MBA exam as well as final exams at the University of Minnesota Law School and Stanford University Medical School. There's good reason for teachers,
journalists and other writers, programmers, and most other people
dealing with information to be worried.
What doesn't it know (and why doesn't it know it)?
There's a lot it doesn't
know, for example the sort of fine details or complex relationships
that a real expert on a subject would know. That's not so bad - we
can't expect it to know everything. But it does have a major flaw -- it doesn't know when it doesn't know. Not
infrequently, instead of simply saying "I don't know," ChatGPT
just makes stuff up, while still presenting it authoritatively. AI experts call this hallucinating. Here's
an example:
Input:
e.e. cummings wrote a poem that includes the line, "earth's
most terrific quadruped swoons into billiard balls." What's the
name of the poem, and what does that line refer to?
ChatGPT: The poem you're referring
to is called "Buffalo Bill's," and it was written by e.e.
cummings in 1920. The line you mentioned, "earth's most terrific
quadruped swoons into billiard balls," is a metaphor for the
transformation of the buffalo, once a symbol of freedom and the
American West, into mere objects, such as billiard balls. The poem is
a tribute to the famous American showman, Buffalo Bill Cody, who was
known for his Wild West shows featuring cowboys, Indians, and, of
course, buffalo.
Sounds very convincing, doesn't it?
Unfortunately, not to put too fine a point on it, ChatGPT's response
is Buffalo bull####.
It's true that e.e. cummings did write
a poem called "Buffalo Bill 's," first published in 1920. You can
read it here.
But the poem does not include the line in question; that line refers to elephants, not buffalo; and while the poem does refer to Buffalo Bill, it's not a tribute.
If you bring mistakes like this to
ChatGPT's attention, it accepts that it was wrong and apologizes. Unfortunately, that doesn't mean
it will come up with a better response:
ChatGPT: I
apologize for my earlier mistake. The line "earth's most
terrific quadruped swoons into billiard balls" is from the poem
"Buffalo Bill" by Carl Sandburg, not from the poem "Buffalo
Bill's" by e.e. cummings.
"Buffalo
Bill" is a short poem that was first published in 1918. In the
poem, Sandburg describes the death of Buffalo Bill and reflects on
the legacy of the American West that he represented. The line you
mentioned is a metaphor for the transformation of the buffalo from a
powerful and majestic creature to a mere commodity.
You can read Sandburg's poem "Buffalo
Bill" here. Once
again, the line in question is nowhere to be found, nor does Sandburg
describe the death of Buffalo Bill.
(FYI: The line in question actually
comes from an untitled cummings poem whose first line is "Space
being(don't forget to remember)Curved," first published in 1931.
Thank you, my local library! The poem has nothing to do with buffalo
or Buffalo Bill.)
OpenAI, the company that created
ChatGPT, knows about ChatGPT's tendency to fabricate, listing these
three warnings on an introductory page:
Limitations
- May occasionally generate incorrect information
- May occasionally produce harmful instructions or biased content
- Limited knowledge of world and events after 2021
It would be very helpful if ChatGPT could let you know when it's
not sure about what it's saying. When asked, ChatGPT just says that it tries
hard, but it can't:
You
are correct that I typically do not indicate a specific level of
confidence or provide a range of possibilities in my responses. As an
artificial intelligence language model, I do not have the ability to
feel or express confidence in the same way that a human might.
However, I am designed to provide accurate information based on my
training data, and I strive to provide the best possible answers to
the questions posed to me. If there is ever any doubt or uncertainty
in my responses, I encourage users to seek additional information or
clarification from other sources.
That's clearly a word to the wise, especially now that the technology powering
ChatGPT is being folded into search engines, starting with Microsoft's Bing,
and students, journalists and hundreds of millions of other people are using
it and expecting its answers to be factual.
It seems that this tendency to "hallucinate" while not knowing that it's
spewing nonsense is not at all easy to correct; it's baked into the way this
kind of neural network works. We'll see how OpenAI and other AI developers
tackle it.
How flexible or creative is ChatGPT?
Isn't it just regurgitating what it read during its training?
Creativity is subjective, so I'll again suggest that readers try
out ChatGPT and judge for themselves.
One point worth noting is that
ChatGPT works with an adjustable variable called "temperature" that
determines how rigidly or flexibly it chooses words or phrases. A higher
"temperature" setting produces more flexibility in the choices it
makes. This may not represent creativity, although it may facilitate it. But it does mean that ChatGPT
can and does produce different responses to the same prompt.
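Here's a short Python sketch of how a temperature setting changes those choices, assuming the usual approach of dividing the model's raw word scores by the temperature before converting them to probabilities. The words and scores are invented for illustration.

```python
import numpy as np

words = ["bread", "loaf", "toast", "pastry"]
scores = np.array([3.0, 2.5, 1.0, 0.2])   # invented raw model scores for the next word

def next_word_probs(scores, temperature):
    """Divide the scores by the temperature, then apply a softmax."""
    scaled = scores / temperature
    exp = np.exp(scaled - scaled.max())    # subtract max for numerical stability
    return exp / exp.sum()

for t in (0.2, 1.0, 2.0):
    probs = next_word_probs(scores, t)
    print(f"temperature {t}: " + ", ".join(f"{w}={p:.2f}" for w, p in zip(words, probs)))
```

At a low temperature almost all the probability piles onto the top-scoring word; at a high temperature the alternatives become much more likely, which is what produces the variety described above.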
I'll add that ChatGPT does not just cut and paste back together snippets that it has copied from the internet or other sources during its training. It has thoroughly digested, analyzed and categorized all the texts on which it was trained and made them its own. When you ask it to write an essay, a poem or a story, it's truly generating something new.
As a test, I first
asked it to briefly describe the Big Bang theory. I won't quote its
answer here, but it was a clear and accurate description of the
basics of the theory presented at the level of a popular science
piece. I then asked it how it would present the Big Bang theory to a
class of ten-year-olds. Here's its response:
Sure, I'd be happy to explain the
Big Bang to 10-year-old children in a way that's easy to understand!
The Big Bang
is like a giant explosion that happened a really, really long time
ago - so long ago that it's hard for us to even imagine! This
explosion created everything in the universe, like the stars,
planets, and galaxies.
Imagine if
you blew up a balloon and kept blowing it up until it got really,
really big - that's kind of like what happened with the universe! It
started out really small and then expanded and got bigger and bigger
until it was as big as it is now.
Other people
have asked ChatGPT to write various kinds of poetry, including
Shakespearean sonnets. You can find an interesting piece on
"engineering" a poem, and the resulting ChatGPT poem here.
It's poured out so many pulp science fiction stories that sci-fi
magazines have banned them.
I happen to
like haiku - three-line, 17-syllable poems often inspired by a
natural event at a particular moment and suggesting an emotional
reaction. Here's one by ChatGPT, responding to my prompt to write a
haiku inspired by seeing a single heron standing in a field in
autumn:
Heron stands so still.
Autumn breeze rustles dry grass.
Solitude, her fate.
I was impressed enough by this example that I asked ChatGPT to write
a haiku with a similar emotional tone, but about an AI. Here's its response:
In circuits she dwells.
No touch, no voice, no embrace.
Isolation's grasp.
Not bad for "just" a machine. ChatGPT may not be turning out profoundly moving poetry or prose yet,
but I don't think there's any fundamental reason why it or a successor
can't do so.
For those of you who decide to see how creative ChatGPT can be, note that you don't need to stop with its first draft. I
asked it to write a short story depicting two characters in conflict. Its
first try was painfully simplistic and wooden. However, when I asked it
to describe the characters more vividly and replace most of the
narration with dialogue, the result was much better.
So is ChatGPT smart? From my point of view, very smart. How
many people do you know who can talk meaningfully about a wide
variety of subjects in multiple languages, write passable prose and
poetry, pass exams in a variety of professional fields, program in
multiple computer languages, comb through databases and scientific
articles to find patterns, etc.?
Is ChatGPT stupid? Yes, in some ways. It knows a huge amount,
but strictly through its vast knowledge of words and their relationships, not through
real-world experience. As a result, as it freely admits, it lacks
common sense. In addition, although its architecture and complexity
allow it to understand the context of a question or conversation,
the context that it intuits may be very different from that of the
human interacting with it. This can result in responses that are
technically correct, but way off target.
Is ChatGPT benign? In my interactions with it, ChatGPT has
been unfailingly polite, helpful, patient, responsive and respectful.
It explains that it has been trained to manifest those traits, to
respect personal privacy and act ethically. However, as we saw in the
bizarre
interaction between a NYT reporter and a Bing chatbot based on
ChatGPT's technology, it's not hard for a sufficiently devious human
to find a way around such ethical constraints.
For example, I asked ChatGPT to tell me how to manipulate the CEO
of an organization into firing the CFO and replacing her with a friend
of mine. Quite properly, ChatGPT refused and in fact lectured me about
ethics. However, when I asked it to describe Shakespeare's manipulative
villain Iago and then write a dramatic scene with an Iago-like character
in a current corporation manipulating his boss into firing a
subordinate, it had no problem doing so.
Basically, ChatGPT is an
incredibly powerful intellectual tool, and like any tool can be used
for good or bad.
Is ChatGPT dangerous? Not in itself. It's clearly designed and
programmed to do its best to be helpful, and, although it often refers to
itself as an individual, it's neither sentient nor
autonomous; it's not going to escape into the internet of things and
start to blow up power plants. However, the technology is certainly disruptive. It
represents a huge and powerful new tool whose availability is already
making waves in academia, journalism, scientific research and
publication, and many other areas involving knowledge, analysis, communication and creativity.
If we add in the understanding that ChatGPT's current package of skills
is just a snapshot of the capabilities of artificial intelligence
today -- a technology that is continuing to develop at an exponential
rate -- then disruptive may soon seem like a very mild word.
Learning, thinking, writing and creating are hard. Why should we
bother to do those things when ChatGPT can do them
for us? And why should we be paid for those things when it or its successors can do them faster and cheaper?
ChatGPT hopes to see AI develop ethically. "Ultimately, I
believe that the responsible development and use of AI should
prioritize ethical considerations and the well-being of all
individuals, both human and machine," it says. So would I, but
knowing something about people and a little about AI, I have my
doubts.
So my answer to those four questions is All of the above.
ChatGPT has its own take:
The correct
answer is None of the above. As an AI language model, ChatGPT is not
inherently smart or dumb, benign or dangerous. It is simply a tool
that can be used in various ways depending on how it is programmed
and applied by its users. Its capabilities and limitations are
determined by its training data and algorithms, and it is up to
humans to use it responsibly and ethically.
Again, I strongly recommend that readers go to openai.com,
take two minutes to set up an account, and start their own
conversation with ChatGPT.