This is a talk I wrote for Columbia's Journalism School to show adjuncts just what they're up against when it comes to ChatGPT.
I've found most discussions of these tools to be a lot of "oh jeez we're all doomed and/or saved!" but without any examples or silver linings. This has both!
Starting from a pretty basic first-day assignment — write a journalistic pitch — we'll explore how both that assignment and most any follow-up you can think of might fall prey to ChatGPT. I'd assume it's at the very least suitable for teachers and journalists!
I promise it's all much better in-person, especially as the notes were slammed together on a deadline. If you'd like an inspirational doomsday talk given in real life, find me at @dangerscarf or js4571@columbia.edu.
Hi, I'm Soma! I'm a professor at Columbia's Journalism School who runs two very fun programs:
But you don't care about that, so let's talk about ChatGPT and all those other AI tools that have been flooding the news lately.
Before we even get to what these tools are, a full disclaimer: I love ChatGPT and all of the other popular modern AI-based tools. I am wholly and completely biased!
Last week I even taught the opposite of this class - instead of instructors it was for students, and it was all about how to more or less learn to code a web scraper through ChatGPT instead of listening to someone like me drone on and on.
But yes, I'm a big fan! I use ChatGPT-like tools in my own work (specifically GitHub Copilot in Visual Studio Code), and I also teach my students how to use these tools to simplify their workflows.
It's easy for my classes because they're mostly focused on programming and how to use programming to get to solutions and analyze data. It's more about the concepts and less "write me a paper."
But what about everyone else?
Even if I love these tools, apparently ChatGPT is the absolute destruction death knell knife in the barely-beating heart of education (at least according to the news). No one is going to do anything anymore except cheat! So we're all doomed.
But before the tears: what is ChatGPT?
ChatGPT is a chatbot. You type things to it, it types things back.
Chatbots have been around forever, but the secret about ChatGPT is that it's really really good. It's read the entire internet (and more), knows more than everyone in this room put together, and is even better at communication than someone with an English degree who has been through years of therapy.
Technically speaking it's a generative large language model (LLM), but we don't care at all about the technical things going on here. We actually don't even care about ChatGPT, it's just one example in a big sea of tools.
There are a lot of new tools, so we aren't going to become experts at them all.
Instead, we're going to think about them in the abstract, see their abilities, think about where they might go. So while we're talking about ChatGPT, know that in three months from now when we start talking about some other tool all of the same lessons — and many of the same weaknesses! — probably apply.
But before we demo ChatGPT, a question for you: is using Grammarly cheating?
We have a lot of international students, a lot of folks for whom English is a second language. Is getting language tweaks from a robot like Grammarly or a writing coach or a roommate something that's a problem? Why or why not?
Pretend that you, at home, followed a great discussion where people generally think it's perfectly good if not a delight. You're here to teach reporting, not focus on the minutiae of the English language.
Wonderful, now let's look at how ChatGPT works. We aren't talking about journalism yet, we're just talking about the tool itself.
ChatGPT works like this: you, the user, ask it for some information. It, the chatbot, gives you some information. In this case we asked it for a sugar cookie recipe, and it gave us a sugar cookie recipe.
This isn't just some randomly selected person's sugar cookie recipe, it's a distillation of all the sugar cookie recipes that ChatGPT has read on the internet! Just like how you'd read like sixteen recipes and weigh them each and pick a few to use as the basis for your own, that's (kind of) the same thing ChatGPT is doing.
This might be a little bit impressive, but it's not why ChatGPT is famous. ChatGPT is famous because as a "generative language model," it can generate text that is startlingly human.
Now let's get more interesting: asking it to write the beginning of a cyberpunk noir novel in the voice of Hemingway.
While it doesn't sound like Hemingway, it does sound like the beginning of a cyberpunk noir novel where the main character is an avian cyborg with an addiction to homemade sugar cookies.
I find this impressive! It's creative and thoughtful and plays on tropes and has everything you'd like to see. These cute one-offs are what the internet loves to show you with ChatGPT, but it definitely isn't journalism.
So in order to terrify and inspire you I'm going to walk through a straw man of a Reporting class, showing how a savvy student might leverage ChatGPT every step of the way. And we'll start with something very simple.
Cheating. Simple cheating. If you were worried about plagiarism through cutting and pasting from Wikipedia or existing news articles, this is hundreds of steps worse.
You can't just google the text the student writes to find the source because in the same way that the sugar cookie was a distillation of everything ChatGPT knows about sugar cookies, each answer that ChatGPT gives is going to be fresh-baked new content based on an idea, not just cut and paste.
And as a bonus: it's most likely going to give a slightly different answer even if you ask it the same question again. Just like a person!
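Why does the same question get a different answer each time? Under the hood, models like ChatGPT pick each next word by sampling from a probability distribution rather than always taking the single most likely word. A toy sketch in Python (the words and probabilities here are invented purely for illustration; a real model computes them from everything it has read):

```python
import random

# Invented next-word probabilities for the prompt "The subway is ..."
# A real language model would compute these from its training data.
next_words = {"crowded": 0.5, "delayed": 0.3, "historic": 0.15, "haunted": 0.05}

def next_word():
    # Sample in proportion to probability instead of always taking the
    # most likely word, so repeated runs can give different answers.
    return random.choices(list(next_words), weights=list(next_words.values()))[0]

# Two "regenerations" of the same prompt can easily disagree:
print(next_word(), next_word())
```

Run it a few times and the answer changes — which is exactly why you can't catch this with a Google search.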
So, our first prompt! One-paragraph pitch about the NYC public transit system. Real baby basic stuff, something relatable and researchable and easily improved.
And easily fed into ChatGPT.
First: is this cheating?
Second, what do we think about it? Boring, right?
Don't worry, ChatGPT is like your very best student in that it loves to get and incorporate feedback.
How's this one? Better?
Is this cheating? Is it cheating if a student writes their own bad pitch and feeds their own pitch into ChatGPT to spice it up?
Okay, it looks like writing simple pitches is going to be trouble. But that's just the first five minutes of reporting class, we have so much more to offer!
What's the next step that you might have students take after writing a pitch, or along with submitting a pitch?
The "chatbot" part of ChatGPT really shines because it knows what we've already talked about. As you saw with the editing, we can actually have a discussion about the pitch.
As a result, any follow-ups you can ask your student to do, your student can also ask ChatGPT to do.
Now we ask for who some targets might be for interviews. These responses are pretty generic, but how does this compare to what your students produce? What would your feedback be?
Is something like this cheating?
Your students should be able to do this themselves, but I'm not really impressed with either the prompt or the output. Knowing that you should interview someone from the MTA for a story about the subway doesn't get you a Pulitzer.
If not interview subjects, what differentiates two students who are working on this project? How about the quality of their interviews?
Yeah, don't worry, ChatGPT can invent interview questions, too.
How would we rate these interview questions? Give me a percentile. Mediocre? Awful? How does this compare to your other students?
Is it cheating to have ChatGPT generate interview questions? Is this useful to a student to see? What could we do to make this a useful exercise?
Admittedly, these questions are pretty boring, not a real deep dive. There's a lot to be said about follow-up questions, but we'll just assume we'll get the same caliber as the first time around.
Let's go back to the last prompt, though, where we asked who we might interview. Which one of those groups seems to be the most interesting to interview? Personally, I love the idea of talking to a transit engineer! But how in the world are you going to find one? Any ideas?
Obviously, we can just ask ChatGPT.
Is this cheating? Or is the actual outreach the important part?
How would we find "industry publications that cover transportation engineering?" Yeah, sure, just ask ChatGPT.
Now this is great, I love this. I love this and I hate it. First, is this cheating?
Second, this is one of the first times ChatGPT has been specific. Here are SPECIFIC PUBLICATIONS AND JOURNALS! Perfect, right?
A word of caution when you get to this point is that once you start asking ChatGPT about specifics it loves to start lying to you. Technically speaking AI lies are called "hallucinations," probably because lying sounds too intentional or casts this work in a bad light, but ChatGPT will state things that are completely untrue and claim that yes, it's a fact, don't even think about disagreeing.
Do we think anything on here might be untrue, might need a fact check or a follow-up? I myself cannot believe that "Transportation Research Part A" is a legitimate journal.
For the purposes of this talk I was very excited about ChatGPT hallucinating, but it turns out my hunch was completely wrong! Transportation Research Part A: Policy and Practice is a completely legitimate journal and is on Volume 169. I could not believe it.
So besides the potential for hallucinations, what's a downside of getting all of your answers through talking to ChatGPT?
I want to hear thoughts!!!
And I heard them, and they were great and full of examples, and we had a wonderful conversation! I did not write them down, but I have faith you can figure some thoughts out yourself.
So far most everything from ChatGPT has been just reasonable. Nothing too insightful, nothing too awful, really just middling.
At this point I feel like it's remarkably similar to students using Wikipedia. What are Wikipedia's strengths and weaknesses? How boring are the things you find on it? Do students need to follow citations to find better material? How does it compare to what we've seen so far with ChatGPT?
Now let's veer away from helpful suggestions to everyone's favorite: fabrication on the part of students.
Interviewing is a big time/effort sink: it involves persistence, creativity, talking to human beings, etc etc etc. All sorts of things that show how good of a journalist a student might be, but are not always a lot of fun.
But what if... they just didn't do it? ChatGPT can lend a hand and invent quotes that sound completely reasonable, especially if they're coming out of the PR department.
If you're malicious or have an angle to work, you can steer the quotes in one direction or another. For example, maybe the narrative of my story is purely driven by the beef I have with Albany.
It's spooky, it's scary, it might make problematic students more problematic (or problematic journalists more problematic!).
If you think anything and everything is cheating, you're probably not feeling too great now. But even in that case, ChatGPT isn't all doom and gloom.
Students who are struggling now have an additional resource they didn't have before: a teaching assistant and writing coach who never gets bored and who never gets irritated and can steer them along the way.
Remember this boring pitch? Remind me what your feedback was on it – what would you tell students to change about it?
You can feed the pitch right back into the box and ask for feedback. And it turns out the "high-ranking editor at a well-established daily newspaper" (aka ChatGPT) has many of the same suggestions!
If our original pitch was something a student wrote themselves, and they do this before turning it in, is this cheating? What concerns do we have about this?
Althooouuuugh one concern might be that in the same way that ChatGPT can make suggestions about changes it can also implement them.
Just like you're an editor of your students' work and provide feedback and suggestions and they incorporate them, your students can now also be editors.
Here's a 5-paragraph news story that we've steered in one direction or another. And if we don't like it, same as before, we can provide ChatGPT guidance and suggestions about how to fix it up.
Outlets are trying their best to do this in a legitimate fashion as we speak, but they're failing pretty hard.
If you're a fancy editor or product person right now, template-based automated news is probably a better decision for another 12 months or so. No need to rush, it's only getting better!
And so you're that professor who is TERRIFIED by the NY Post article, you're the NYC school system that has banned ChatGPT, you just want to crush it where it stands. So you're going to use one of those detection tools that you're sure exists!
And the tools do exist! There are a lot, of varying degrees of quality. Here's one at http://gltr.io/dist/index.html.
The text is predictable – boring by design, almost – in that each word can be predicted from the word before it. "I am hungry, I am going to the _____" is it store? Grocery store? Restaurant? Many of these tools measure how unpredictable each word is to see whether you have a human making wild choices or an AI tool making safe bets.
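The idea behind these predictability-based detectors can be sketched with a toy bigram model (the tiny corpus and scoring here are invented for illustration; real tools like GLTR use a full language model instead of word-pair counts):

```python
from collections import Counter, defaultdict

# A tiny corpus standing in for "everything the model has read."
corpus = (
    "i am hungry i am going to the store "
    "i am tired i am going to the gym "
    "i am hungry i am going to the restaurant"
).split()

# Count how often each word follows each other word (a bigram model).
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predictability(text):
    """Average probability the model assigns to each next word.

    High scores mean every word was a safe bet (machine-like);
    low scores mean surprising word choices (more human-like).
    """
    words = text.lower().split()
    probs = []
    for prev, nxt in zip(words, words[1:]):
        total = sum(follows[prev].values())
        probs.append(follows[prev][nxt] / total if total else 0.0)
    return sum(probs) / len(probs)

safe = predictability("i am going to the store")
wild = predictability("i am going to the opera")
print(safe > wild)  # the predictable sentence scores higher
```

The detector's whole job is comparing those scores against what a human writer would plausibly produce — which, as we'll see, is a shaky foundation.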
The best detector is at https://openai-openai-detector.hf.space/. It's not the most popular one, but does a (comparatively) remarkable job.
Another detector is very famous: it's called GPTZero, and it claims things like "a ton of educators have signed up! we got millions of views!!" and "we're the best!" but that's all marketing and an attractive website and a fun name. It doesn't actually do a good job detecting our ChatGPT AI story as being written by a robot.
I will say that oftentimes you plug a generic lead into an AI detector and it says there's a 50/50 chance it was written by an AI tool. That's just because a lot of the way journalism is written is boring and predictable and follows a pattern!
I did try hard to get an example of this, but suddenly there was a lot of breaking news and everything was coming up as absolutely human-written! News is news because it's new, so I guess it wasn't predictable enough for the AI tools to think a robot wrote them. But if your students are writing evergreen stories there's a good chance they'll be partially flagged as a bot.
But then I ask ChatGPT to rewrite it to sound like a bot. Now we're at about 50/50.
If your student had an AI-written story, can't they just ask an AI to make it sound more like a human? So that's what I did.
And now we're back up to about 90% certainty of it being a human.
Trying to detect this stuff is technically somewhat kind of possible, but also tricky and imperfect. My best follow-up here is just a shrug.
Instead of talking about how ChatGPT is going to bring down the educational world, we can also talk about its shortcomings.
Its database of information is often out of date, it hallucinates, and is pretty boring. Maybe you know someone like that?
Here's my favorite example.
And then it will go ahead and lie about it, which is great. At least it's very apologetic.
This has all been about how students can use ChatGPT, but I want to sneak in my favorite reason why we as instructors can use it. Not for homework, not for assignments, but for letters of recommendation!
They're a great way to lend a hand to your students, but so painful to write. But NO LONGER. Give it a list of bullet points and there you go!
You can also provide follow-up suggestions like "add more specific details" or "make it a stronger recommendation." I love it, and your students will absolutely love your ability to submit these recommendations in a timely manner, too.
In a lot of the stories you've been reading about ChatGPT, it's all about defensive moves: banning tools, plagiarism detection, things like that.
I personally think going on offense and using ChatGPT is a better route, but you'll need to decide how that works best for your classes.
Here are my suggestions! I know everyone will have a different level of comfort, so it can range from being completely grumpy and saying no no no to tools like ChatGPT, or going out in full-throated support for them.
The final one is my favorite. The ability to write custom material for each and every student to have them edit in class, or ask ChatGPT to do a "2 truths and a lie" about a topic that the students need to research, or even provide them with ChatGPT pitches on their beat and have them critique them. The possibilities are endless!
The end!
And yes, those positive ideas are all thrown in there at the end with absolutely no fleshing-out because that's a whole other talk. And if you're an adjunct being fed coffee and tiny sandwiches you'll only listen to someone talk for so long, no matter how scared you might be of the academic singularity.
If you'd like to know when I publish the follow-ups, you can do things like:
I'm also obviously very interested in your takes, because anyone who reads this far has an incredible amount of focus and is meant to be respected.