The academic humanities are facing a new challenge—and, this time, it’s robots. On 30 November 2022, the artificial intelligence research lab OpenAI released a prototype chatbot called ChatGPT, which is extremely good at processing language input and producing coherent language output that seems as though it could have been written by a human. For nearly three months since then, secondary school teachers and professors in the humanities have been panicking that students may use this chatbot to cheat on assignments, noting that it consistently produces more coherent writing than many high school students and undergraduates.
I tried out ChatGPT myself to see what all the hype was about. I’ll admit that, knowing it is an AI rather than a human, I was surprised to find that its English is quite fluent (although still not exactly eloquent). Nonetheless, ChatGPT remains far from living up to human standards, at least in my particular field of classics and ancient history. It bungles translations of ancient languages, it frequently makes serious factual errors, and it is incapable of any kind of original thought. When I prompted it to write a historical essay, it completely failed to engage with any primary or secondary sources, failed to display even the most basic level of historical analysis, and made several outright factual errors that I was able to catch.
Continue reading “ChatGPT Is Impressive for a Bot, But Not for a Human”