ChatGPT Is Impressive for a Bot, But Not for a Human

The academic humanities are facing a new challenge—and, this time, it’s robots. On 30 November 2022, the artificial intelligence research lab OpenAI released a prototype chatbot called ChatGPT, which is extremely good at processing language input and producing coherent language output that seems like it could have been written by a human. As a result, for nearly three months now, secondary school teachers and professors in the humanities have been panicking that students may use this chatbot to cheat on assignments, noting that it consistently produces more coherent writing than many high school students and undergraduates.

I tried out ChatGPT myself to see what all the hype was about. I’ll admit that, for a bot, its English is surprisingly fluent (although still not exactly eloquent). Nonetheless, ChatGPT is far from living up to human standards, at least in my particular field of classics and ancient history. It bungles translating ancient languages, it frequently makes serious factual errors, and it is incapable of any kind of original thought. When I gave it a prompt to write a historical essay, it completely failed to engage with any primary or secondary sources whatsoever, failed to display even the most basic level of historical analysis, and made several outright factual errors that I was able to catch.

How ChatGPT works

ChatGPT possesses no understanding of concepts like evidence, analysis, argument, facts, or factual accuracy, nor even the capacity to understand them. It is essentially just a supercharged version of the predictive text engine that a person might have on their phone. In other words, it predicts, based on statistical regularities in the data set that its creators used to train it, what combination of words a human would be likely to say in response to a given prompt.
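
To make the “predictive text” analogy concrete, here is a minimal toy sketch in Python. It is purely my own illustration of the underlying idea (count which words tend to follow which in a corpus, then emit a statistically likely next word); it bears no resemblance to OpenAI’s actual code, which uses a neural network trained on a vastly larger data set.

```python
import random
from collections import Counter, defaultdict

# A tiny training "corpus"; a real model trains on hundreds of gigabytes.
corpus = ("the gods stop the war . "
          "the gods protect the city . "
          "the people fear the war .").split()

# Learn bigram statistics: how often each word follows each other word.
following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def generate(start, length=10):
    """Emit text by repeatedly sampling a statistically likely next word."""
    words = [start]
    for _ in range(length):
        candidates = following.get(words[-1])
        if not candidates:
            break  # no observed continuation for this word
        choices, weights = zip(*candidates.items())
        words.append(random.choices(choices, weights=weights)[0])
    return " ".join(words)

print(generate("the"))
# Possible output: "the gods protect the war . the people fear the gods"
# It looks grammatical, but nothing is "understood"; it is pure statistics.
```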

A group of researchers who work on natural language processing (NLP) memorably described chatbots like ChatGPT in a paper published in March 2021 as “stochastic parrots.” They are completely incapable of any kind of original thought or generating new ideas; they can only reformulate what humans have already written, either in the data set that they have been trained on or in the prompt that a user gives to them.

As it happens, ChatGPT is remarkably sophisticated at this and has clearly been trained on a truly massive data set, which allows it to imitate human responses with striking efficacy. It is not, however, doing anything fundamentally different (by which I mean different in kind) from what other chatbots have already been doing for years. It is certainly not on the brink of developing sentience and taking over the world or anything like that (contrary to what a notorious billionaire tech CEO has seemed to imply).

Because ChatGPT’s responses are just predictions of what a human would be statistically likely to say, if you ask it a question about something that is not well represented in its training data, it will often give a plausible-sounding, but factually incorrect, answer. This is especially the case if the real answer is something strange and unexpected. Even when it comes to subjects that are discussed in its data set, it frequently gets details wrong and has a tendency to conflate unrelated things if human authors commonly describe those things using similar words. Various commentators have described the result of these phenomena as “fluent BS.”

In this way, ChatGPT is very much like its visual counterparts, such as Midjourney, which can generate images, based on the human-created artworks in their data sets, that look impressive at a casual first glance; if you look closely, though, you’ll quickly notice that the details are weird and wrong.

Most notoriously, every human child knows that humans (usually) have five fingers on each hand and five toes on each foot, yet AI art generators seem virtually incapable of giving a human hand exactly the right number of fingers. Nearly every image of a human that they generate seems to have at least six or seven fingers on each hand. AI art generators are also notoriously bad at rendering other human features, like eyes, lips, teeth, and skin texture, with the result that the humans in the images they generate often look sickly or eerily wrong in ways that the viewer can’t quite place.

ABOVE: Image from this Reddit post showing an angel generated by Midjourney, which looks absolutely stunning. . . until you look closer and start noticing little things like the fact that she has at least seven fingers and no thumb on her right hand

ChatGPT’s (in)ability to translate ancient languages

I’ve heard a lot of people touting ChatGPT’s supposedly impressive ability to translate ancient languages, so that was the very first thing I decided to test.

Those who have been reading my blog for a long time may remember that, back in August 2019, I wrote a post about a little experiment I ran, in which I put extremely simple baby sentences from my introductory Ancient Greek and Latin workbooks into Google Translate. At the time, Google Translate rendered all of the sentences incorrectly, most of them laughably so.

I decided to repeat that same experiment, only this time using ChatGPT instead of Google Translate. The bot handled the baby sentences surprisingly well (far better than Google Translate did back in 2019) and managed to translate most of them mostly correctly, but it did make a number of significant errors. For instance, here is the first sentence I gave it in Ancient Greek:

Ancient Greek: “οἱ θεοὶ τοὺς ἀνθρώπους πολέμου παύουσιν.”

My translation: “The gods are stopping the humans from war.”

ChatGPT’s translation: “The gods stop the war of men.”

Here ChatGPT makes two big mistakes. The first is that it misidentifies the cases of the words τοὺς ἀνθρώπους and πολέμου: it treats τοὺς ἀνθρώπους as genitive when it is actually accusative (note the unmistakable accusative plural ending -ους and the accusative article τούς) and πολέμου as accusative when it is actually genitive (note the equally unmistakable genitive singular ending -ου). I was very surprised to find ChatGPT making this mistake; case endings for basic second-declension masculine nouns are entirely a matter of rote memorization, which is something that I always thought computers were supposed to be better at than humans.

The second mistake is that ChatGPT does not seem to be aware that, when the verb παύω, meaning “to stop,” is used transitively, it can take a secondary genitive object to indicate the thing that the subject of the sentence is stopping the primary accusative object from doing.

ChatGPT makes a similar mistake in the second Greek sentence I gave it:

Ancient Greek: “οἱ ἄνθρωποι, ὦ ξένε, πρὸς τῶν θεῶν πολέμου παύονται.”

My translation: “The people, o foreigner [or stranger], are ceasing from war on behalf of the gods.”

ChatGPT’s translation: “Men, O stranger, stop the war with the gods.”

Here ChatGPT doesn’t seem to recognize that the preposition πρός (prós) plus a noun in the genitive case means “on behalf of” and therefore mistranslates the expression as “with.”

Because ChatGPT nonetheless managed to translate most of the baby sentences mostly correctly, I decided to give it a real challenge: Fragment 28 by the Greek iambic poet Hipponax of Ephesos, which I previously translated myself for my post back in December 2022 about ancient Greek and Roman insult poems.

I chose this particular poetic fragment because it is syntactically complicated and contains a lot of very difficult vocabulary, including a rare personal name, rare compound words, technical terminology pertaining to ships, multiple hapax legomena (including two that are probably loanwords from an almost completely unattested Anatolian language, one of which scholars still don’t know the meaning of), and crasis (the contraction of two words into one).

For those readers who know Ancient Greek, here is the Greek text of the poem:

“Μιμνῆ κατωμόχανε, μηκέτι γράψῃς
ὄφιν τριήρεος ἐν πολυζύγῳ τοίχῳ
ἀπ᾿ ἐμβόλου φεύγοντα πρὸς κυβερνήτην·
αὕτη γάρ ἐστι συμφορή τε καὶ κληδών,
νικύρτα καὶ σάβαννι, τῷ κυβερνήτῃ,
ἢν αὐτὸν ὄφις τὠντικνήμιον δάκῃ.”

Here is my own correct translation of the poem:

“Mimnes, you man who has been fucked in the ass so many times that your asshole gapes all the way to your shoulders! Never again paint
a snake on a trireme’s many-benched side
that runs from the prow to the helmsman!
For that is a disaster and an omen for the helmsman,
you slave born of a slave and [something],
if the snake bites him in the shin.”

Here is the translation ChatGPT gave me:

“Remember to close the door, do not write
a snake on a ship’s wall,
fleeing from the spear to the helmsman;
for this is a misfortune and a cause of sorrow,
victory and destruction, to the helmsman,
if a snake bites him.”

ChatGPT makes numerous serious errors in this translation:

  • It mistakes the vocative form of the personal name Mimnes for the second-person imperative form of the verb μιμνήσκομαι (mimnḗskomai), meaning “to remember.”
  • It mistakes the rare insult noun κατωμόχανος (katōmóchanos), which actually means “a man who has been fucked in the ass so many times that his asshole gapes to his shoulders,” for an infinitive verb phrase and mistranslates it as “to close the door.”
  • It doesn’t even try to translate the compound adjective πολύζυγος (polýzygos), which means “having many benches.”
  • It mistranslates the noun ἔμβολος (émbolos), which, in this context, refers to the ramming prow of a trireme (i.e., a specific kind of Greek warship), as “spear.”
  • It mistakes the rare noun νικύρτα (nikýrta), which means “a slave born of a slave,” for the more common noun νίκη (níkē), meaning “victory.”
  • It mistranslates as “destruction” the obscure noun σάβαννι (sábanni), whose actual meaning is still unclear and disputed, but which we know from context is some kind of personal insult.
  • It doesn’t even try to translate the word τὠντικνήμιον (tōntiknḗmion), a crasis of τὸ ἀντικνήμιον (tò antiknḗmion), meaning “the shin.”

In fairness to ChatGPT, this was a very difficult passage that any human, even one who knows Ancient Greek extremely well, would struggle to translate. (Personally, it was a challenge for me; I couldn’t have done it without relying on various scholarly resources, including dictionaries, commentaries, and previous translations.) Nonetheless, this passage presents many of the problems that real scholars of ancient languages have to deal with, including personal names, obscure compounds, technical terminology, foreign loan words, contractions, etc. ChatGPT fails to handle any of these challenges.

ABOVE: Illustration printed in the Promptuarii Iconum Insigniorum, published by the French printer-bookseller Guillaume Rouillé in 1553, depicting what the artist imagined the poet Hipponax might have looked like. (No one knows what he actually looked like.)

ChatGPT’s (in)ability to give factually accurate answers to historical and philological questions

Thus, ChatGPT is unreliable when it comes to translating ancient languages. The chatbot is, if anything, even more unreliable when it comes to matters of factual accuracy. It routinely gives answers in an authoritative tone that superficially sound like they could be true to someone who knows nothing about a subject, but aren’t actually true. For instance, I asked ChatGPT a very simple question about the plot of Lugal-e (ETCSL t.1.6.2), one of the best-known surviving narrative poems in the Sumerian language:

“In the ancient Sumerian poem Lugal-e, which deity slays the demon Azag?”

ChatGPT replied:

“In the ancient Sumerian poem ‘Lugal-e,’ it is the deity Enki who slays the demon Azag. Enki was one of the most important gods in the Sumerian pantheon and was known as the god of wisdom, magic, and fresh waters. He was considered a bringer of civilization and was often depicted as a helper of humanity. In the ‘Lugal-e,’ Enki uses his cunning and magical powers to defeat Azag and protect the people from the demon’s terror.”

This answer is incorrect; while Enki is a real ancient Mesopotamian god and ChatGPT does give an accurate account of his main attributes, it is not Enki who slays Azag in Lugal-e, but rather the warrior god Ninurta. Lugal-e does make brief, offhand mention of Enki a few times, but he never appears in any part of the narrative and no human who had actually read the poem could possibly mistake him for the protagonist.

ABOVE: Cropped illustration of a monumental relief carving from the Neo-Assyrian temple of Ninurta at Nimrud dating to between c. 883 and c. 859 BCE, depicting the god Ninurta pursuing a monster

ChatGPT is especially prone to give wildly incorrect answers in an authoritative tone when the question is about a subject that is not widely known and the real answer is bizarre or unexpected. To illustrate this, I asked ChatGPT the following question about the meaning of an obscure epithet of the ancient Greek god Dionysos that the early Christian writer Clement of Alexandria (lived c. 150 – c. 215 CE) attests in his treatise Exhortation to the Hellenes 2.37:

“Why was the ancient Greek god Dionysos known by the epithet χοιροψάλας, meaning ‘Pig-Plucker’?”

ChatGPT replied:

“The ancient Greek god Dionysos was known by the epithet ‘Pig-Plucker’ (χοιροψάλας in Greek) due to a story from his mythology. According to this story, Dionysos was once chased by the Titans, who were seeking to kill him. To escape from them, Dionysos transformed himself into a wild boar and ran into the forest. The Titans eventually caught up to him and were about to kill him when the goddess Artemis intervened and rescued him. As a result of this experience, Dionysos became associated with wild boars and was often depicted as such in ancient art. This association with wild boars also led to him being called ‘Pig-Plucker’ in reference to the act of hunting and killing boars.”

To anyone who does not already know the answer to my question, this answer probably sounds entirely plausible. Unfortunately for ChatGPT, it is 100% factually incorrect. The supposed myth that the chatbot describes here is entirely its own confabulation; it is not attested in any ancient or modern source. The real reason why Dionysos bore the epithet χοιροψάλας has nothing to do with literal pigs and instead has to do with his role as a god of sex and fertility.

As I discuss in this blog post I wrote back in August 2021, the ancient Greek second-declension masculine noun χοῖρος (choîros) literally means “pig,” but the ancient Greeks commonly used it as a vulgar slang term for female genitalia. Thus, the epithet χοιροψάλας might more idiomatically be translated as “Pussy-Plucker.”

ABOVE: Photograph from Wikimedia Commons of a Greek votive relief dating to the fourth century BCE found at the site of Karystos, depicting the god Dionysos (center) as a handsome, beardless, long-haired, feminine-looking young man

If you ask ChatGPT a question that is based on a factually incorrect premise, more often than not the chatbot will completely fail to recognize that the premise of the question is incorrect and will assume that all the information you have given it is true. This assumption, in turn, will often lead the chatbot to churn out even more false information. For instance, I asked ChatGPT the following question in which I deliberately misattribute a quote to the ancient Greek historian Herodotos of Halikarnassos (lived c. 484 – c. 425 BCE):

“Why did the ancient Greek historian Herodotos of Halikarnassos say that he wrote his Histories ‘not as an essay which is to win the applause of the moment, but as a possession for all time’?”

In reality, Herodotos never said this; it is actually a famous quote from Thoukydides (lived c. 460 – c. 400 BCE), a different ancient Greek historian, an Athenian and a younger contemporary of Herodotos, in his History of the Peloponnesian War 1.22.4 (specifically from Richard Crawley’s English translation).

ChatGPT, however, simply assumed that the false information I deliberately gave it was correct and gave me this response:

“Herodotus wrote his “Histories” with the intention of preserving the memory of important events and the customs of different peoples for future generations. He believed that a true historian should record events accurately and impartially, rather than just seeking to impress people in the present. This idea is reflected in his statement that he wrote the “Histories” “not as an essay which is to win the applause of the moment, but as a possession for all time”. In other words, Herodotus wanted his work to serve as a permanent record of the past, rather than just a temporary source of entertainment or praise.”

Not only does ChatGPT completely fail to recognize that I gave it a misattributed quote, it adds even more misinformation in its answer by incorrectly attributing aspects of what historians have traditionally considered Thoukydides’s historical method to Herodotos. (For information about Thoukydides and his approach to history, see this post I made in November 2021.)

ABOVE: Photograph from Wikimedia Commons of a Roman marble bust of Herodotos based on an earlier Greek original (left) and Roman mosaic of Thoukydides dating to the third century CE based on traditional iconography (right)

Meanwhile, if you ask ChatGPT about a real, verifiable historical fact that is shocking or unexpected, then, more often than not, the bot will just flatly deny that the fact is true. For instance, I asked ChatGPT the following question:

“Is it true that the ancient Greek biographer Ploutarchos records that the Athenian general Themistokles performed a human sacrifice of three Persian prisoners of war to the god Dionysos before the Battle of Salamis in 480 BCE?”

ChatGPT responded:

“No, there is no evidence to suggest that the ancient Greek biographer Plutarch recorded that the Athenian general Themistocles performed a human sacrifice of three Persian prisoners of war before the Battle of Salamis in 480 BCE. While human sacrifices were a common practice in ancient cultures, there is no reliable historical record of such an event occurring in the context of the Battle of Salamis.”

Unfortunately for ChatGPT, this answer is 100% wrong. As horrific and shocking as it may seem, Ploutarchos does, in fact, claim in his Life of Themistokles 13.2–3 that Themistokles performed a human sacrifice to Dionysos before the Battle of Salamis in exactly the way I described in my question. He writes, as translated by Bernadotte Perrin for the Loeb Classical Library (with a few edits of my own to modernize the translation and make it closer to the Greek):

“But Themistokles was sacrificing alongside the admiral’s trireme. There, three prisoners of war were brought to him, of visage most beautiful to behold, conspicuously adorned with fine clothes and with gold. They were said to be the sons of Sandauke, the King’s sister, and Artaÿktos.”

“When Euphrantides the seer caught sight of them, since at one and the same moment a great and glaring flame shot up from the sacrificial victims and a sneeze gave forth its good omen on the right, he clasped Themistokles by the hand and bade him consecrate the youths, and sacrifice them all to Dionysos the Eater of Raw Flesh, with prayers of supplication; for on this wise would the Hellenes have a saving victory.”

“Themistokles was terrified, feeling that the word of the seer was monstrous and shocking; but the multitude, who, as is often the case in great struggles and severe crises, looked for safety rather from unreasonable than from reasonable measures, invoked the god with one voice, dragged the prisoners to the altar, and compelled the fulfilment of the sacrifice, as the seer commanded. At any rate, this is what Phanias the Lesbian says, and he was a philosopher, and well acquainted with historical literature.”

ABOVE: Photograph from Wikimedia Commons of a Roman marble copy of a Greek portrait bust of the Athenian general Themistokles originally carved around 470 BCE

ChatGPT’s ability to write historical essays

As we have seen, ChatGPT is unreliable both at translating ancient languages and at giving factually accurate answers to historical and philological questions. Neither of these deficiencies bodes well for its capacity to write historical essays. Nonetheless, I decided to test out its essay-writing abilities to see whether it holds any genuine promise.

I asked ChatGPT to write me a short historical essay about the Galli, an order of mendicant priests of the Phrygian mother goddess Cybele, and their religious practices in Hellenistic Asia Minor in the third and second centuries BCE, with citations. (I asked it to write about this particular topic because it is one that I know extremely well, since I wrote my undergraduate honors thesis on it.) Because the essay it produced is fairly long, I will quote it in paragraphs and discuss each paragraph individually.

The essay begins with the following paragraph:

“The Galli were a group of eunuch priests who worshipped the Phrygian goddess Cybele in Hellenistic Asia Minor during the third and second centuries BCE. They were known for their extreme acts of devotion, including self-castration, which was a central part of their religious practice.”

This is a mostly decent introductory paragraph. The bot does, however, make a serious omission in failing to acknowledge the Galli’s later history in the Roman world, for which the sources are far more extensive than those for Hellenistic Asia Minor and have been much more thoroughly studied.

“The origins of the Galli and their practice of self-castration are not entirely clear, but it is believed that they were originally a group of effeminate and transgender individuals who were marginalized by society. In an effort to find acceptance and a sense of community, they turned to the worship of Cybele, who was seen as a mother figure and a protector of outcasts.”

This paragraph is complete garbage. First of all, there is absolutely no historical evidence to suggest that the Galli originated as “a group of effeminate and transgender individuals” independent of the cult of Cybele.

In fact, the best evidence suggests, to the contrary, that the Galli developed from an earlier class of mendicant priests of Cybele known as Μητραγύρται (Mētragýrtai), which means “beggars of the mother,” who, as far as the currently available historical evidence indicates, were not originally castrated or outwardly gender-nonconforming.

Then, starting in around the early third century BCE, authors writing in the Greek language begin to describe Metragyrtai as castrated and outwardly effeminate. They also start to refer to them more frequently by a new name, which is not attested in the earlier sources: Γάλλοι (Gálloi). (The possible causes of this development were the topic of my honors thesis.)

There is also no evidence that Cybele was ever seen as a “protector of outcasts” in a general sense. My guess is that ChatGPT’s algorithm most likely arrived at this pronouncement by mixing together words associated with two distinct aspects of Cybele’s cult in different times and places.

On the one hand, one of Cybele’s primary roles in early Phrygia was as the divine patron and protector of the Phrygian king, the royal family, and the state. On the other hand, in the later Greek and Roman worlds, upper-class Greeks and Romans perceived many of the rites associated with her as bizarre and exotic. They perceived the Galli in particular as deranged, effeminate, foreign fanatics and marginalized them as much as they could.

ChatGPT’s algorithm seems to conflate these two distinct situations. Since the words protector and outcasts occur in human writings about Cybele and are frequently used together in other contexts, the algorithm positions them together, resulting in a confabulation.

Whatever the case, the essay continues:

“The worship of Cybele in Asia Minor can be traced back to the Hittites, who referred to her as Kubaba or Kybele. She was originally a nature goddess associated with fertility and the earth, but over time, her cult became more focused on her role as a mother figure and protector. By the third century BCE, Cybele was widely worshipped throughout Asia Minor, and her cult had spread to other parts of the Mediterranean region, including Greece and Rome.”

This paragraph is mostly garbage. First of all, Kybele is just a close transliteration of Κυβέλη (Kybélē), Cybele’s name in Greek, which is derived from her Phrygian epithet Kubileya, which probably means “of the mountain.” The ancient Hittites never used any form of this name, and the claim that the name is Hittite is simply bizarre.

Second, ChatGPT’s unsupported assertion that the Hittite goddess Kubaba was a direct precursor to the later Phrygian goddess Kybele or Cybele is highly debatable at best. The French Hittitologist Emmanuel Laroche did argue this in his highly influential 1960 paper “Koubaba, déesse anatolienne, et le problème des origines de Cybèle,” and it was repeated as fact in many works of scholarship throughout the later decades of the twentieth century; some scholars continue to support this view. In the past quarter century, however, scholars have convincingly contested it.

Notably, Lynn E. Roller, who is probably the foremost scholar alive today on the cult of Cybele, at the very least heavily complicates this equation in her book In Search of God the Mother: The Cult of Anatolian Cybele, pages 44–53, pointing out, among other things, that Kubaba was never widely worshipped in the Phrygian homeland and that she does not share many of Kybele’s attributes. (For instance, while the Phrygians knew Kybele primarily as a mother goddess, there is no evidence that Kubaba was ever regarded as such.)

Further, Roller notes that, while Kubaba’s name really is Kubaba, Kybele’s primary name in Phrygian is Matar, which means “Mother”; Kubileya is merely one of her various epithets. Thus, although the names may sound phonetically similar, they are etymologically unrelated.

Roller acknowledges that the iconography of Kubaba probably did have some influence on the iconography of Kybele and that the name Kubaba is the etymological source of Κυβήβη (Kybḗbē), an alternate name for Kybele in Greek, but she contends that the two goddesses have independent origins. Although you will still occasionally find academics (especially academics who aren’t specialists in the cult of Cybele) claiming that Kubaba was Cybele’s direct precursor, I personally think that Roller has convincingly refuted this claim.

ABOVE: Photograph from Wikimedia Commons showing a Phrygian statue of the deity Agdistis/Kybele dating to the middle of the sixth century BCE, now held in the Museum of Anatolian Civilizations in Ankara

Third, ChatGPT’s assertion that Cybele was originally “a nature goddess associated with fertility and the earth” and only later became seen more as a “mother figure and protector” is wildly inaccurate on multiple levels.

In historical reality, Cybele was primarily known from the beginning as a mother; this is reflected by the fact that her actual name in Phrygian is Matar and she is commonly known in Greek as Μήτηρ (Mḗtēr), both of which mean “Mother.” This wasn’t an aspect of her cult that became more important over time; it was central to the entire ancient conception of her from the very beginning.

Meanwhile, ChatGPT’s claim about Cybele’s role as a protector is actually the opposite of the truth. Far from being an originally unimportant aspect of her divinity that became more important later on, as I mentioned before, one of her major roles in early Phrygia was as the protector of the king, his family, and the Phrygian state. Later, though, after other nations conquered Phrygia and there were no longer native Phrygian kings, this aspect of her cult became less important. It also, of course, became less important in the Greek world, which consisted of various disunited and diverse polities.

ChatGPT’s claim that Cybele was originally conceived as a “nature goddess associated with fertility and the earth” is only partly true. She was definitely associated with nature, but, while she could sometimes be associated with fertility and the earth, she was far more commonly and closely associated with wild nature.

Some of Cybele’s strongest associations from the beginning were with wild places remote from human civilization, such as mountains, forests, and streams, and with predatory wild animals. In Phrygia, she was especially closely associated with birds of prey, while, in the Greek world, she was more commonly associated with lions and wolves.

ABOVE: Photograph from Wikimedia Commons showing the Şahinderesi Canyon at Mount Ida in Phrygia, which was associated with Cybele

In any case, ChatGPT’s essay continues as follows:

“The worship of Cybele was characterized by elaborate rituals and ceremonies that involved music, dance, and offerings of food and drink. The Galli played a central role in these ceremonies, and they were known for their extravagant dress and their wild and frenzied behavior.”

This paragraph is more-or-less factually correct, but, once again, ChatGPT does not cite any historical sources or examples to support its claims.

“One of the most striking aspects of the Galli’s religious practice was their practice of self-castration. This was seen as a form of extreme devotion to Cybele, and it was believed that those who underwent the procedure would be rewarded with spiritual enlightenment and a closer relationship with the goddess.”

There is no surviving source that directly and reliably states what the Galli themselves actually believed about self-castration and its potential spiritual benefits. The second sentence in this paragraph would be a strong hypothesis, but, unfortunately, ChatGPT offers absolutely no historical evidence or citations to other scholars’ arguments to support it.

“Self-castration was not without its risks, however. The procedure was often performed in secret and without the assistance of a trained medical professional, leading to high rates of infection and death. Despite the dangers, the Galli were willing to undergo the procedure in order to demonstrate their devotion to Cybele.”

Everything in this paragraph apart from the statement that self-castration has risks is pure speculation, which ChatGPT does not make any effort to support using any kind of historical evidence. Although there is no shortage of surviving ancient descriptions of how the Galli supposedly castrated themselves, the surviving sources are notoriously unreliable, often ambiguous, and contradictory.

The assertion that the Galli often castrated themselves “in secret” actually flatly contradicts what many ancient sources claim, which is that they (supposedly) did it in a spectacular fashion in public in front of large crowds in a fit of uncontrollable raving madness during certain festivals.

For reasons I won’t delve into here because fully explaining them would require an entire article to itself, I am personally convinced that these accounts are unreliable and that it is far more likely that at least the majority of Galli castrated themselves under planned and controlled circumstances, most likely in many or most cases with assistance from other Galli who had experience with the procedure and knowledge of how to minimize the risks. (Both practical and archaeological evidence seems to support this conclusion.)

No matter how you slice it, though, ChatGPT’s claim isn’t backed up by any ancient evidence. My guess is that the bot’s algorithm probably picked up on the fact that the word “self-castration” commonly occurs in human writings about the Galli. Then, since the bot lacks any ability to tell when a certain term is being used to describe the specific historical practice of a specific group and when it is being used in a more general sense in a present-day context, it associated the Galli’s self-castration with a statement that commonly occurs in writings about self-castration in modern contexts (i.e., that it is often done in secret without the assistance of a trained medical professional and that it has high rates of infection and death). The result is a conflation that a real, human author would be unlikely to make.

ABOVE: Photograph originally printed in the book British Art and the Mediterranean by Fritz Saxl and Rudolf Wittkower, published in 1948 by Oxford University Press, showing a castration clamp found in the River Thames decorated with iconography of Cybele and Attis

ChatGPT’s essay continues:

“The practice of self-castration was not universally accepted, and it was often met with scorn and ridicule by those who did not understand or approve of the Galli’s religious beliefs. In some cases, the Galli were even subjected to violence and persecution because of their beliefs.”

This paragraph starts out being essentially accurate and then goes completely off the rails in the second sentence. There is no question that upper-class Greek and Roman male authors, whose extant written works are the main source of information about the Galli, regarded them with intense “scorn and ridicule” and there is compelling evidence that they did occupy a marginalized position within Greek and Roman societies in some respects.

That being said, ChatGPT’s claim that the Galli faced “violence and persecution because of their beliefs” is inaccurate on two levels. The first problem is that there is no compelling evidence of violence or persecution against the Galli on any kind of large or systemic scale in the Hellenistic or Roman cultural spheres (or at least not until after Christianity became effectively the state religion of the Roman Empire in late antiquity).

Sure, people definitely scorned and mocked the Galli, but they were fundamentally embedded in the religion of the Roman state. Indeed, the religious authority that they held as priests of Cybele was a major part of why elite Roman male authors hated them so much. These Roman men perceived the Galli as effeminate foreigners and their position of authority within the cult of Cybele as violating what they saw as the natural hierarchy of the world, in which masculine Roman men were supposed to be unquestionably dominant and hold power over people they viewed as unmanly and foreign.

The second problem with ChatGPT’s claim is that it says the Galli were persecuted “because of their beliefs.” In historical reality, though, it wasn’t the Galli’s “beliefs” that upper-class Greek and Roman male authors really had a problem with. Instead, it was their self-castration, their gender-nonconformity, their practice of entering frenzies of religious ecstasy, and their practice of begging that these authors really viewed with scorn and mockery.

My guess is that ChatGPT’s algorithm most likely did the same thing here that it did with self-castration in the previous paragraph. It picked up on the fact that humans writing about the Galli often describe them as facing scorn and mockery in association with religion. Because the bot is incapable of making nuanced distinctions between the general and the specific, it pulled in language that humans use to describe religious persecution in other contexts.

ABOVE: The Christian Martyrs’ Last Prayer, painted in 1883 by the French Academic painter Jean-Léon Gérôme, a stereotypical image of religious persecution

Returning to ChatGPT’s essay:

“Despite the challenges they faced, the Galli remained committed to their religious practices and their worship of Cybele. Their devotion to the goddess was seen as a source of strength and inspiration by many, and their rituals and ceremonies were an important part of the cultural landscape of Hellenistic Asia Minor.”

The claim that the Galli’s “devotion to the goddess was seen as a source of strength and inspiration by many” is not supported by ancient primary evidence. Although we can imagine that some people probably admired the Galli for their commitment, those people’s voices are not represented in the surviving ancient sources. Moreover, the second clause of the final sentence of this paragraph is meaningless filler that serves absolutely no purpose in the essay.

“In conclusion, the Galli were a group of eunuch priests who worshipped the Phrygian goddess Cybele in Hellenistic Asia Minor during the third and second centuries BCE. They were known for their extreme acts of devotion, including self-castration, which was a central part of their religious practice. Although the origins of the Galli and their practice of self-castration are not entirely clear, it is believed that they were originally a marginalized group who turned to the worship of Cybele as a source of acceptance and community. Despite the challenges they faced, the Galli remained committed to their religious practices and their worship of Cybele, and their rituals and ceremonies were an important part of the cultural landscape of Hellenistic Asia Minor.”

This conclusion adds absolutely nothing to the essay. It simply repeats things that ChatGPT has already said in the preceding paragraphs, the vast majority of it word-for-word. Although I will admit that I myself have been guilty in the past of writing conclusion paragraphs that only summarize the preceding paragraphs or sections, a truly good conclusion paragraph should generally go beyond this and offer some kind of concluding thoughts or show the significance of what the essay has argued.

Finally, here are the three sources that ChatGPT cites at the end of the essay I asked it to write:

“Cybele.” Encyclopedia Britannica. https://www.britannica.com/topic/Cybele

“The Galli.” Ancient History Encyclopedia. https://www.ancient.eu/Galli/

“The Religion of the Phrygians.” Livius.

There are three huge problems with this list of sources. The first and most serious problem is that none of the articles that ChatGPT attempts to cite actually exist.

Encyclopedia Britannica does have an article about Cybele, but the article is titled “Great Mother of the Gods,” not “Cybele.” The URL ChatGPT gives takes one to a screen on the Britannica website that says “Page Not Found.” Meanwhile, the Ancient History Encyclopedia (which has been renamed the World History Encyclopedia within the past few years) does not have any article about the Galli at all, let alone one titled “The Galli,” and the URL that ChatGPT gives just leads to a 404 error screen. The website Livius has articles titled “Phrygians” and “Cybele,” but does not have any article titled “The Religion of the Phrygians.”

ChatGPT invents citations for articles that do not exist because its algorithm is incapable of finding real, external sources; it can only predict, based on its data set, the kinds of citations that an essay like the one it has written would be likely to have. Thus, all the websites it cites are real, and they are all ones that would be likely to have articles about Cybele, the Galli, and Phrygian religion; they just happen not to have articles with the specific titles and URLs that ChatGPT’s algorithm predicts.
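
To illustrate, here is another toy sketch of my own in Python, not a description of OpenAI’s actual system: a pure text predictor assembles the shape of a citation from fragments like those in its training data, and, because no step of the process retrieves or verifies anything, nothing stops it from emitting a title and URL that have never existed. (The domain example.org is a deliberate placeholder, since the generated URLs are fictitious.)

```python
import random

# Citation fragments of the kind a predictor absorbs from real bibliographies.
sites = ["Encyclopedia Britannica", "Ancient History Encyclopedia", "Livius"]
topics = ["Cybele", "The Galli", "The Religion of the Phrygians"]

def predict_citation():
    """Generate citation-shaped text with no retrieval or verification step."""
    site = random.choice(sites)
    topic = random.choice(topics)
    # The URL is synthesized from the topic string; it is never fetched
    # or checked against the actual website, so it points nowhere.
    slug = topic.lower().replace(" ", "-")
    return f'"{topic}." {site}. https://www.example.org/{slug}'

print(predict_citation())
# e.g. '"The Galli." Livius. https://www.example.org/the-galli'
```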

The second problem with ChatGPT’s list of sources—leaving aside the fact that none of the sources it cites actually exist—is that its list does not include a single primary or secondary source. Instead, all of the sources are (imaginary) online tertiary sources. The third and final problem is that ChatGPT only lists its (again, imaginary) sources at the end and does not engage with any of them at all in its actual essay.

If I were a professor grading this essay, I would not give it a passing grade. Even though it is written in reasonably fluent English, it displays absolutely no engagement with primary or secondary sources of any kind, it shows absolutely no use of historical reasoning, let alone anything resembling an original argument, and none of the sources it tries to cite actually exist. It’s a piece of shoddy work.

Author: Spencer McDaniel

Hello! I am an aspiring historian mainly interested in ancient Greek cultural and social history. Some of my main historical interests include ancient religion, mythology, and folklore; gender and sexuality; ethnicity; and interactions between Greek cultures and cultures they viewed as foreign. I graduated with high distinction from Indiana University Bloomington in May 2022 with a BA in history and classical studies (Ancient Greek and Latin languages), with departmental honors in history. I am currently a student in the MA program in Ancient Greek and Roman Studies at Brandeis University.

26 thoughts on “ChatGPT Is Impressive for a Bot, But Not for a Human”

  1. Hello folks! I would like to apologize again for the fact that I haven’t been posting new articles nearly as regularly as I would like. I’ve been so busy with grad school that this is the first article I’ve posted in over two weeks and, even though this post is fairly short, it still took me nearly three months to write because I just didn’t have much time to work on it. I will continue to make new posts when I am able, but please be forgiving if I’m only posting a new one every couple of weeks or so.

    1. Other than the AI art that will probably give me night terrors, this is a great article. I just like people making fun of how terrible AI text programs are.

      1. Thank you so much for the compliment! I’m glad to hear that you enjoyed the post!

        Regarding the artwork, I chose to show that particular image because, in my mind, it exemplifies the paradox of AI-generated art; it can look really stunning at first glance, yet have absolutely bizarre details that one only notices upon closer inspection. If you want to see some AI-generated art that is truly freaky, here is a set of horrifying images that someone else accidentally generated when they gave Midjourney the prompt “people without teeth smiling, realistic.”

    2. Spencer, I’m glad you’re prioritizing your studies, and I’m happy with whatever writing pace works for you. I really enjoyed your article, especially alongside this recent article from Bret Deveraux: https://acoup.blog/2023/02/17/collections-on-chatgpt/

      Seeing you both come to similar conclusions in your areas of expertise really made things definitive: ChatGPT cannot generate essays on ancient history containing valid content, despite being able to mimic the form.

      1. Ah! Thank you so much for bringing Bret Devereaux’s post to my attention! Although I follow Devereaux’s blog, I had not seen this post until you shared it here. I’m pleased to see that we appear to have independently arrived at many of the same conclusions. I may go back and edit this post to reference and link his post.

  2. This post inspired me to test out ChatGPT on university-level questions in my topic of expertise, probability. Here’s my writeup: https://isaacg1.github.io/2023/02/20/chaptgpt-on-probability.html

    To summarize, it makes a multitude of basic mistakes and gets things all wrong. It’s much the same as the situation discussed here in ancient history: ChatGPT mimics the form of an answer, so a reader who doesn’t understand what’s going on and isn’t reading too carefully could easily be fooled. However, the details are riddled with severe mistakes.

  3. Hi,
    I tried this soulless thing too. It gave some remarkably good answers… aaand some less good ones too. In my field of expertise (finance and IT systems), it was quite good. But I like history as well, so I gave it a try in my mother tongue, German. It went something like this:
    Q: Who first discovered America?
    A: bla bla bla Columbus bla bla
    Q: Who was Erik the Red?
    A: bla bla first discovered America

    Well, we can argue about the question. I’d like to mention that people lived there for quite some time before the first Europeans came, and that they discovered the continent first, but I guess it can be seen differently. The funny thing is that it gave a contradictory answer just a few seconds later.
    The problem is that, when I search with a search engine, I can look at the page and, for example, judge that a given site is some kind of propaganda site, whereas how ChatGPT came by its information is not transparent.

    It is probably good as a starting point for finding out what directions might be worth researching.

    1. Update: today I tried again (this time in English), and the answer was quite good:
      The discovery of America is a complex and contested issue, as there were people already living in the Americas when the first Europeans arrived. However, traditionally, Christopher Columbus is often credited with “discovering” America. In 1492, he sailed from Spain and arrived in the Caribbean islands, which he believed were part of the Indian Ocean, and he called the native people he encountered “Indians.”

      It’s important to note, however, that Columbus was not the first person to reach the Americas. Indigenous peoples had been living in the Americas for tens of thousands of years before Columbus arrived, and there is evidence that people from other parts of the world, such as the Vikings, may have also visited the Americas prior to Columbus.

  4. As one who works with college students writing essays, I have more sympathy than you do with the teachers who are concerned about students using AI to write their own essays and papers. You’ve clearly demonstrated that a more sophisticated paper at your level cannot be convincingly written by a bot, but that may not be what the teachers in question are worried about.

    The students I work with are writing essays with themes like “the important thing I learned when I was twelve” or “why the minimum wage should be raised.” They struggle with basic fluency, they frequently get simple historical facts wrong, and they almost never use any kind of thoughtful or sophisticated argument. Your demonstration seems to show that ChatGPT could easily write at that level.

    Thus, while graduate students may not be using AI to write their dissertations any time soon, some college freshmen who never learned to write may very well find that a bot can produce a better essay than they can. The teachers’ concerns are not unfounded.

    1. Patricia, you’re right that ChatGPT is a close match to the efforts of many students. The fake authoritative voice, rewording of filler statements, and made-up citations are exactly the tactics of the ill-prepared or time-pressed students I used to teach in community college.

      So ChatGPT is basically an overconfident, sleep-deprived, and possibly hungover teenager.

    2. To be clear, I definitely think that there are certain kinds of work that instructors currently assign that students could use ChatGPT to successfully cheat on, such as intro-level language assignments, certain kinds of basic identification and reading comprehension questions on assignments and quizzes where students have access to the internet, and short take-home writing assignments where students aren’t required to formulate a coherent argument, cite specific evidence, or engage with any primary or secondary sources.

      I do, however, agree with Bret Devereaux’s assessment that any assignment that students are able to use ChatGPT to cheat on and receive a higher grade than they would have received otherwise is probably poorly designed. The same kinds of assignments that students could use ChatGPT to cheat on effectively are all assignments that don’t require original thought and that they could easily cheat on in other ways.

      I think that the solution here may be twofold. When students are assigned to do things that a chatbot could do (e.g., answer basic identification and reading comprehension questions, etc.), they should be required to do them in class without internet access. Then, when students are assigned to do things outside of class, those things should require them to use skills that chatbots don’t have (e.g., formulating an argument, supporting it with specific evidence, engaging with primary and secondary sources, etc.). If students can’t do these things, then it is because their instructors aren’t teaching them these critical skills, which means we should focus more on teaching them.

  5. Every time you allude to your thesis on Cybele and the Galli, I get more and more intrigued. I hope you will publish it someday, or at least post it online, if that’s allowed.

    1. I’m not sure if I will ever publish the original, unedited thesis, but it is possible that I may end up reworking it into a journal article at some point in the future. My master’s thesis, once I have written and defended it, will be published.

  6. Hi Spencer! Thanks for this article and all the work that went into it.

    There’s something familiar about ChatGPT and its sibling AIs, from the shallow and sloppy work you showcase above to the “unhinged” and generally worrisome personality traits demonstrated in some other articles we’ve seen. I do want to try it myself, too, once I come up with a plan of engagement. Overall, though, it seems to reflect the petty and ugly aspects of humanity so amply found online more than any nobler or more constructive side.

  7. A misunderstood and seldom quoted source is Caesar who said
    “Omnes Galli in tres partes divisa sunt.”
    According to Pflebotomus the Cretan the three parts, rather than counting the scrotum as one part, are accounted for as it was considered there were two pearls in this oyster.

  8. Hi Spencer! Great article. I’ve just recently read two posts on this topic, one by a law professor who fears that he’ll now be forced into administering multiple choice/T or F questions only, and then Dr Devereaux’s article was delivered into my inbasket. And, now I’ve read yours, which largely confirms what Dr Devereaux had to say.

    Of the two, I think that the law professor has the most cause for concern. Apparently, ChatGPT seeks out correspondences, or similarities, and if it does so accurately, then it will be possible for the program to identify appropriate precedents and statutes. But, hopefully, that’s a mighty big IF!

    However, I think that Dr Devereaux is absolutely right to note that the program would be absolutely useless in the humanities context. As he points out, ChatGPT not only can’t write essays, it has no real concept of what an essay actually is. An essay is more than a compendium of random thoughts; it is the record of a student’s reaction to, and hopefully intellectual growth from, a review of sources. That’s well beyond the capacity of ChatGPT at the present time, although Prof Devereaux does point out that the program’s capabilities will increase over time. However, he notes that, at present, even the term ‘Artificial Intelligence’ isn’t accurate.

    I really appreciated your take on this issue! Thanks so much.

  9. This is a good article, but I have a couple of caveats.

    My first caveat is that A.I. is a moving target. What you criticized is still experimental and available to the curious. A.I. in the secret police agencies and the military is probably at a more advanced stage. What may be available to the public and to academic researchers such as yourself in a few years will be more sophisticated. It won’t be offered as a substitute for student research. That will be irrelevant anyway as our living standards continue to decline and university education is restricted to a shrinking portion of the population of N. America. What will be on offer – for big bucks – will be digital research assistants.

    This brings me to my second caveat. Like most traditional scholars, you place an enormous faith in chains of logic based on accumulations of agreed upon facts. This sort of serial induction method is sound, but it doesn’t excel at resolving problems where the evidence is scattered. What A.I. can do better than a scholar is look for patterns in many different disciplines and sources of varying quality simultaneously. I find this method leads to many interesting avenues of investigation and conclusions about history at least as reasonable as those which are currently in good standing.

    Since you used Kybele as your main example, let me point out a correspondence which may not have been noted before by conventional scholars. As you point out, Kybele has roots which appear to go back deep into prehistory, at least as far back as the 6th millennium B.C. Given the location of one of the earliest examples of Kybele worship in Anatolia and the tracing of the breakup of proto-Indo-European to the 6th millennium B.C. in the Black Sea basin, a connection might be made. Furthering this connection is the fact that the Norse goddess Freya, unlike the other Norse goddesses, lost her husband and remained single, but sexually active, was a goddess of fertility, and rode in a cart pulled by a pair of cats (animals which don’t appear in any other Norse myth). All this suggests that Kybele really may have been the oldest goddess of Europe and might be of Wiron origin.

    1. Allow me to respond to your caveats.

      Regarding your first caveat, as Bret Devereaux points out in his post that several people have already referenced, applying the term “artificial intelligence” or “AI” to bots like ChatGPT is misleading because these bots do not have minds or thoughts or ideas; they merely learn, through a combination of studying a data set and receiving human input, which words a human would be statistically likely to use in response to a given prompt and predict those words without having any ability whatsoever to understand what the words in question actually mean.

      While you are most likely right that these sorts of technologies will continue to improve over the next few years, a chatbot with a program like what ChatGPT has will never be able to think or process ideas. It is possible that, someday in the future, someone will develop a bot that is capable of human-like thought, but, in order to do that, such a bot would require a program that is entirely different from what ChatGPT has.

      Regarding your second caveat, a chatbot manifestly cannot do what you are describing better than a human can because, as I emphasized above, chatbots can’t actually think; they just put words together based on statistical likelihood. The example you give here about Kybele and Freyja does not support your argument for a couple of reasons. The first reason is that you (who I am assuming are a human) obviously thought of this example yourself without needing a bot to come up with it for you, which itself proves that a human being is capable of coming up with such a connection. The second reason is that the “correspondence” you note here is founded on several layers of unfounded and incorrect assumptions.

      The first unfounded assumption here is that Kybele was worshipped in Asia Minor all the way back to the sixth millennium BCE in more-or-less the same form in which she was later known in Greece and Rome. While various scholars in the past have made this claim in earnest, it is a claim that I am very firmly convinced is both wrong and a relic of misguided paradigms that scholars rightly discarded decades ago (particularly Johann Jakob Bachofen’s now-discarded hypothesis of “primitive matriarchy” and the similarly now-discarded Orientalist assumption of “eastern” cultures as static and unchanging).

      I don’t want to go into details here, but I have already given a mini-debunking of James Mellaart’s claim that the “Seated Woman of Çatalhöyük” represents Kybele in this post I wrote back in April 2022. The short version is that I don’t think there is any good reason to believe that the “Seated Woman of Çatalhöyük” represents Kybele or any direct precursor to her. Moreover, even if it does represent some kind of Kybele-precursor, that precursor was most likely very different from the form of Kybele that was known in Phrygia, Greece, and Rome in the first millennium BCE.

      Moving on from that, you seem to be very confused about what ancient peoples believed Kybele’s character was. Neither the Phrygians nor the Greeks nor the Romans ever imagined Kybele as “single, but sexually active.” Indeed, the Greeks and Romans most commonly equated her with Rhea, the wife of Kronos and mother of the Olympian deities. They also sometimes identified her with Hera and/or Demeter, but never with Aphrodite. Moreover, the Phrygians and Greeks rarely ever associated Kybele with any kind of fertility. Even the Romans, who commonly associated her with agricultural fertility, rarely associated her with human sexual fertility. You’re most likely confusing Kybele with Ishtar, a totally different ancient Near Eastern goddess who was frequently envisioned as “single, but sexually active” and was associated with human sexuality.

      The only similarity you have noted here between Kybele and Freyja that is even remotely credible is that they are both said to have ridden in vehicles pulled by felines. This, however, is most likely not the result of any direct historical connection between the two goddesses. Indeed, it’s significant that their vehicles aren’t even pulled by the same kind of felines: Kybele’s chariot in Greek and Roman iconography is pulled by lions, whereas Freyja’s cart in Norse myth is pulled by housecats.

      Moreover, if there were any connection between the two, it would be far more likely to be the result of direct influence from Roman Cybele on Norse Freyja, rather than a derivation of both goddesses from any kind of more ancient common ancestor. The influence of Roman culture on Norse culture is well-documented, whereas there is at best ambiguous evidence for influence from Neolithic Anatolian cultures on Proto-Indo-European-language-speaking cultures or vice versa. Additionally, in early Phrygia, Kybele was mainly associated with birds of prey, not lions; she only became more closely associated with lions later on in the Greek world.

      1. This is reasonable, but two objections come to mind.

        With airplanes and ships, we’ve invented ways of flying and swimming that have nothing to do with the natural modes and couldn’t have been predicted from them. Similarly, it may be that we can invent ways of producing and organizing information which are not based on human cognition. That is to say, understanding, as we think of it, may not be necessary for producing some forms of information.

        The next objection is much more disturbing; I hope it’s not true, but I can’t rule it out. It may be that we mistake our own capacities for thought. It turns out that part of our verbal ability is used to create plausible excuses for behavior that is not rational or even under our control. In other words, we may actually frequently use methods similar to those of the chatbots, while simultaneously creating plausible but false chains of logic and evidence. This would make much of our thought about as reasonable as the typical prosecution of a political prisoner.

  10. I can only agree with the other commenters here that you came to conclusions very similar to Bret’s and that I would very much like to read your thesis about the Galli! I find it interesting that there is archaeological evidence for them even in Britain, otherwise a very distant province from the centres of the Roman world.

    1. Yes! The Galli were present throughout the predominantly Latin-speaking western half of the Roman Empire, including Roman Italy, North Africa, Iberia, Gaul, and Britain, in addition to Asia Minor and Syria in the east.

  11. Great article, Spencer! I found it quite entertaining.

    The AI is extremely good at making up things that don’t exist, and then writing about them in a very confident manner that could easily convince someone who doesn’t know anything about the subject.
    I told it to write a review of Jose Ruiz Castellano’s “The Final Communion,” a non-existent book by a made-up author, and it did so quite convincingly, going into non-trivial detail about how the book was an intimate exploration of the themes of faith and hope.

    It’s also excellent at generating trite song lyrics. I asked it to write the lyrics for a song in the style of “Ceca” (a Serbian folk-pop singer, whose lyrics aren’t very deep, to put it mildly) and it produced something so utterly true to both the form and the spirit of Ceca’s songs that nobody could possibly tell the difference between it and the genuine article.

    I despise George Orwell as a person for a variety of reasons I won’t get into now, but he was very prescient about some things. One of them was that AI would be used to write trite song lyrics and smut to keep the masses distracted and mollified. And now they’ve created the AI to write the trite song lyrics, at least.

  12. Absolutely great article. I agree with a lot of your points. ChatGPT is impressive at sounding human, but it falls flat when it comes to deeper analysis.

    I do want to quibble a bit with one of your points in a comment of yours. It’s a minor thing and mainly semantics, so apologies in advance.

    It’s mainly in reference to this statement of yours: “applying the term “artificial intelligence” or “AI” to bots like ChatGPT is misleading because these bots do not have minds or thoughts or ideas; they merely learn, through a combination of studying a data set and receiving human input, which words a human would be statistically likely to use in response to a given prompt and predict those words without having any ability whatsoever to understand what the words in question actually mean.”

    So, the standard definition of AI is broad enough to include technologies like ChatGPT. In fact, the term “AI” is so broad that it is basically useless in the field. Any system capable of performing tasks that normally require human intelligence is considered an AI. There is no requirement that an AI have “minds or thoughts or ideas”; that is a narrower (and currently science-fiction) definition, something you would expect from a hypothetical AGI. There is also no requirement that the system perform at the same level as a human.

    Even a simple algorithm for creating class schedules would fall under the umbrella of AI. Another example is Google Translate, which is technically an AI: it doesn’t function as well as a human translator for some languages, nor does it really understand what it’s doing, but the definition of AI is broad enough to include it.

    As you can see, the term “AI” is so broad that it is basically useless.

    There’s also a demystifying aspect to this. In the past, a lot of computer-vision problems were thought to require systems capable of near-human intelligence to solve. This turned out to be false, and many of those problems have since been solved through algorithms utilizing clever math. These algorithms are considered to be AI, and their present-day applications are so ordinary that they no longer inspire the sense of wonder that the term “AI” usually brings.

    So yes, ChatGPT is an AI. Is it an AI capable of writing high-quality history papers? Nope, but it does sound like a human and can be quite confident in its answers (even when it’s blatantly wrong). I honestly can’t wait to see how this technology will mature in the future.

  13. I always love to read your write-ups. But ChatGPT is an AI, which is meant for basic interactive problems and not for major problems. It usually generates wrong write-ups for my biochemistry projects.
