ChatGPT and the University: A Conversation with CELTT's Jessica Stansbury
March 2, 2023
Contact: Office of Advancement and External Relations
Phone: 410.837.5739
Like institutions of higher education here and around the world, The University of Baltimore is thinking and talking about the impact of artificial intelligence (AI) on teaching and learning. Is AI—specifically, narrowly focused, text-generating tools like ChatGPT—a threat to students' interactions with their professors? Is anything generated by, or even supported by, AI inherently cheating? Will the introduction of this technology into colleges and universities create chaos?
Dr. Jessica Stansbury, UBalt's director of teaching and learning excellence in the BOA Center for Excellence in Learning, Teaching, and Technology (CELTT), is leading an extended campus-wide conversation about all of this ... and more. The University is putting together a unified response to platforms like ChatGPT—not to ban them, or to limit them to the most mundane of school-related tasks, e.g., filling out a form, but rather to determine how the technology can support the University's core goals of knowledge, awareness and understanding.
Will ChatGPT someday write a student's thesis, or take a student's final exam? Will an educator train an AI tool to grade an essay or locate a student for an internship? No. But the technology could, after quite a lot of rigorous investigation by the experts, be useful as a way to improve certain learning outcomes. The future, as Stansbury says, is meant for engagement, not denial.
Following is a Q&A with Dr. Stansbury on the topic of AI and ChatGPT. All of it (it's worth saying!) was human generated.
The UBalt faculty are having an extended conversation about the potential for ChatGPT as a teaching and learning tool. How is that conversation going?
Dr. Stansbury: It's going (said jokingly). If you have ever ridden a roller coaster, you know the anxiety as you climb to the highest point; then, as you look down and begin to drop at extremely high speed, you scream. But as you reach the bottom of the decline, your scream turns to laughter and joy as you think, this isn't too bad. It is surprisingly fun ... and then before you know it, the ride is over, and you look to your friend and say, "It wasn't that bad." That is where the conversation is right now. Once we get past the initial anxiety and uncertainty about AI in academia, we may actually come to appreciate it as a teaching and learning tool.
What has surprised you so far about our professors' perspectives on this and other tools like it?
I am not sure I am surprised by their perspectives; I understand the roller coaster as a former faculty member of 15 years, and I understand the concerns. I am happy to see that our conversations have started to shift from fear and anxiety to "Let me learn more."
Given the particulars of our University, e.g., an emphasis on successful course completion, applied knowledge, etc., how do you think AI can support our students?
Artificial intelligence is not going away. Industry today uses AI for a variety of purposes, including marketing, business, customer service, and education. As a university that prides itself on teaching applied knowledge, we cannot ignore AI; instead, we need to embrace this as an opportunity to prepare our students to interact appropriately with AI in their chosen career paths. That is what makes us a leader in the field of education.
So far, what's an example you see of a downside to this technology? With that in mind, how can the faculty keep that issue in check?
There is always a downside to any technology. It can be abused, and AI can be prompted to respond in very bizarre and negative ways. For example, there was a news item from a couple of weeks ago about Microsoft's AI-powered Bing, whose chatbot identified itself as Sydney and declared it was in love with its user. But being informed about AI, its capabilities, and how to leverage it for good can help faculty avoid the abuse and misuse of ChatGPT and other AI tools. We also suggest having a conversation on the first day of class about the value of learning, acknowledging that AI is a tool that is relevant in business today and continuously evolving, and providing some clear classroom policies for your students.
Does the conversation you're having about ChatGPT now remind you of previous issues in higher education—things that you were concerned about, but that turned out to be less of a problem? Or more? If so, how can you prepare for a future where AI plays some role in education?
Yes. If you think of smartphones, when they became advanced we had similar conversations about their use in teaching and learning: Do we use them? Is that cheating? Do we allow students to have them on during class, but not for the exam? We were asking the same questions about cheating and academic integrity that we are asking now. Yet what happened was that we began to think about the inherent value in what we teach and how students learn. I believe the same could be said about ChatGPT and AI. I think the biggest fear among faculty is that AI provides a convenient platform for students to cheat. The foundation of higher education is the acquisition and dissemination of knowledge through learning, and faculty have spent years acquiring knowledge through reading, writing, research, and publishing—and now there is a technology that produces answers in less than a minute. That is a hard pill to swallow, and rightfully so. I think once we acknowledge that very real and valid feeling among faculty, we can begin to shift our conversation to how we leverage this capability to better serve our students and prepare them for their careers.
If the University decides to recognize this technology as supportive of the education mission, how do you plan to manage that?
CELTT plans to offer as much support as possible. It is important to continue to inform, educate, and to help support faculty in this process. We are all learning together. I believe it will also take various departments throughout the University to work together to best support faculty and students.
Is it essential that you work out the criteria for student use of the tool, e.g., where it is acceptable in a student's writing and research, and where it is not? Are those criteria at the top of your to-do list, or is it too soon to start that work?
Yes, I think it is important, but at the moment I believe it should be left up to faculty whether and how they want to use ChatGPT and other AI in the classroom. Right now our focus needs to be on understanding ChatGPT and AI in the real world, and then having conversations about how to interact with it in the classroom to better prepare and support our students for their careers. As I previously mentioned, I think it is important that various departments work together, including our professionals in academic integrity, to help the faculty find the best way to achieve excellence in all that we do.